Robots are systems that sense, actuate, and compute. So far, we have studied the basic physical principles of actuation, i.e., locomotion and manipulation. Understanding these principles is necessary to make and implement plans, e.g., using A* or RRT to calculate the shortest path for a mobile robot to travel. Similarly, we need to understand the basic principles of robotic sensors, which provide the data basis for computation.

The goals of this lecture are to

  • provide an overview of sensors commonly used on robotic systems
  • outline the physical principles that are responsible for uncertainty in sensor-based reasoning

Robotic Sensors

The development of sensors is classically driven by industries other than robotics. These include submarines, automatically opening doors, safety devices for industry, servos for remote-controlled toys and, more recently, cell phones, automobiles, and gaming consoles. These industries are mostly responsible for making “exotic” sensors available at low cost by identifying mass-market applications, e.g., the accelerometers and gyroscopes now used in mass-market smart phones, or Microsoft’s 3D depth sensor “Kinect”, sold as part of its Xbox gaming console. As we will see later on in this class, sensors are hard to classify by their application. In fact, most problems benefit from every possible source of information they can obtain. For example, localization can be achieved by counting encoder increments, but also by measuring acceleration, or by using vision. All of these approaches differ drastically in their precision and in the kind of data they provide, but none of them is able to completely solve the localization problem on its own.
Exercise: Think about the kind of data that you could obtain from an encoder, an accelerometer, or a vision sensor on a non-holonomic robot. What are the fundamental differences?
Although an encoder is able to measure position, it is used in this function only on robotic arms. If the robot is non-holonomic, closed tours in its configuration space, i.e., robot motions that return the encoder values to their initial position, do not necessarily drive the robot back to its starting point. Encoders are therefore mainly useful for measuring speed. An accelerometer, by definition, instead measures the derivative of speed. Vision, finally, allows calculating the absolute position (or the integral of speed), provided the environment is equipped with unique features. An additional fundamental difference between these three sensors is the amount and kind of data they provide. An accelerometer samples real-valued quantities that are digitized with some precision. An odometer instead delivers discrete values that correspond to encoder increments. Finally, a vision sensor delivers an array of digitized real-valued quantities (namely colors). Although the information content of this sensor exceeds that of the other sensors by far, cherry-picking the information that is really useful remains a hard, and largely unsolved, problem.
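As a concrete illustration of the first of these data sources, consider turning encoder increments into a speed estimate. The following is a minimal Python sketch; the tick count and wheel radius are made-up parameters, not those of any particular robot:

    import math

    def wheel_speed(prev_ticks, curr_ticks, dt, ticks_per_rev=500, wheel_radius=0.03):
        """Estimate wheel surface speed in m/s from two encoder readings."""
        # convert the tick difference into wheel revolutions ...
        revolutions = (curr_ticks - prev_ticks) / ticks_per_rev
        # ... then into distance traveled along the ground
        distance = revolutions * 2 * math.pi * wheel_radius
        return distance / dt

    # example: 50 ticks in 0.1s on a 500-tick, 3cm-radius wheel
    print(wheel_speed(0, 50, 0.1))  # approximately 0.19 m/s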
We will now study the sensors on robots that are available at CU Boulder.

The iRobot Create

The iRobot Create is the research version of iRobot’s “Roomba” vacuum cleaner. The Create is equipped with the following sensors:

  • Wheel encoders
  • Infrared “cliff” sensors detecting stairs or other edges the robot could fall off
  • An infrared “wall” sensor allowing the robot to keep its distance to a wall to its left
  • A front-bumper to detect collisions
  • A mechanical “pick-up” sensor detecting lift of the robot
  • An infrared sensor that can detect and decode infrared messages sent by the charger.
The iRobot Create is a good example of making very efficient use of the little information the sensors above provide. This allows the product to be relatively cheap and still perform reasonably well at the application it was made for, floor cleaning. Particularly noteworthy is the homing mechanism that allows the robot to autonomously return to its charger: the charger emits three different infrared signals that have information modulated onto them and can be recognized as the “red buoy”, the “green buoy” and a “force field”. The geometry of each field, tuned by careful design of the charging station, allows the robot to use the different fields to infer its location relative to the charger. This process is explained in more detail on Page 18 of the Create reference manual.
In summary, the Create provides a series of digital on/off sensors, one analog sensor to detect the distance to the wall, encoded with 12-bit resolution (values from 0 to 4095), and an infrared sensor that operates as a communication device by decoding information modulated onto the infrared signal. How this modulation works is not part of this class.
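To make the homing mechanism concrete, the following Python sketch shows one possible control step for docking. It is a sketch of the idea only, not iRobot’s actual algorithm: the arguments reporting which infrared field is currently seen are hypothetical, and the buoy geometry (red to one side of the centerline, green to the other) is an assumption consistent with the description above:

    def docking_step(sees_red, sees_green, in_force_field):
        """Return a (left, right) wheel-speed command for one control step.

        Sketch only: the buoy layout is assumed, not taken from the manual.
        """
        speed = 0.05 if in_force_field else 0.2  # slow down near the charger
        if sees_red and sees_green:
            return (speed, speed)        # both buoys visible: on the centerline
        if sees_red:
            return (speed, 0.5 * speed)  # only red: curve back toward the overlap
        if sees_green:
            return (0.5 * speed, speed)  # only green: curve the other way
        return (speed, -speed)           # nothing seen: turn in place and search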

The E-Puck Robot

The E-Puck robot is a miniature robot that was specifically developed for education. It is therefore equipped with a large number of sensors to provide the opportunity for hands-on experience. In particular, the E-Puck provides

  • 8 infrared distance sensors arranged around its body, which double as ambient-light sensors
  • a 3-axis accelerometer
  • 3 microphones
  • a color camera

What an actual signal from these sensors looks like is described on the E-Puck website pages for each of these sensors.
Exercise: Study the response from the E-Puck distance sensors and that of the accelerometer. Why do you think the response of the distance sensors is non-linear (compare this also with how you modeled the distance sensor in Webots in Exercise 1)? Why does the accelerometer show a constant offset on all three channels? 
The distance sensors are made from relatively simple components, namely an infrared emitter and an infrared receiver. They work by emitting an infrared signal and then measuring the strength of the reflected signal. In the graph on the e-Puck website, large values correspond to large distances or little reflected light, and small values correspond to short distances or a lot of reflected light. (Physically, it is actually the other way around, but the designers of the E-Puck have wired up the sensor so that the response becomes more intuitive, i.e., low values for short distances and high values for large distances.) As light intensity decays at least quadratically with distance (not linearly), the response of the sensor is not linear, but more precise close to the obstacle.

The accelerometer shows a constant offset on the z-axis as it measures the constant acceleration of the earth’s gravitational field. As the robot is not perfectly horizontal due to a missing caster wheel, the values of the x and y axes are also slightly offset. As for the distance sensor, values are reported between 0 and 4095. This is because the analog-digital converter on the robot has a resolution of 12 bits, which corresponds to chopping the analog voltage it measures into 4096 equal chunks. As the accelerometer is able to measure acceleration along both the positive and the negative axes, zero acceleration corresponds to a measurement of 2048.
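A short sketch of how such a raw reading would be converted back into a physical quantity, assuming (purely for illustration) a full-scale range of ±2g, which is not necessarily the E-Puck’s actual configuration:

    G = 9.81  # m/s^2

    def raw_to_acceleration(raw, full_scale_g=2.0):
        """Convert a 12-bit ADC reading (0..4095) into m/s^2.

        2048 corresponds to zero acceleration; the +/-2g full scale
        is an assumed value for illustration.
        """
        return (raw - 2048) / 2048.0 * full_scale_g * G

    # a level, resting robot would read about 2048 on x and y, and
    # 2048 + 1024 = 3072 on z (1g out of a 2g full scale)
    print(raw_to_acceleration(3072))  # approximately 9.81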

The PrairieDog Robot

The PrairieDog is a research platform developed at CU Boulder and MIT that builds on the iRobot Create platform. In addition to the sensors of the Create, it is equipped with
  • a USB web-cam
  • a localization system that can decipher infrared markers mounted on the ceiling for global localization
  • a scanning laser distance sensor
  • encoders in each of its 5-DOF arm joints

PrairieDog with a CrustCrawler manipulating arm

The scanning laser consists of a laser beam that is swept through the environment by a quickly rotating mirror, with the distance at which the beam is reflected sampled at regular intervals. This process, and the resulting data, is nicely animated on the Wikipedia site linked from this text. How the distance is calculated is fundamentally different from an infrared distance sensor: instead of measuring the strength (aka amplitude) of the reflected signal, laser range scanners measure the phase difference of the reflected wave. In order to do this, the emitted light is modulated with a wavelength that exceeds twice the maximum distance the scanner can measure (twice, because the signal has to travel to the obstacle and back). If you used visible light and did this much more slowly, you would see a light that keeps getting brighter, then darker, briefly turns off, and then starts getting brighter again. Thus, if you plotted the amplitude, i.e., the brightness, of the emitted signal vs. time, you would see a wave that has zero-crossings when the light is dark. This wave propagates through space at the speed of light, with a constant distance (the wavelength) between its zero-crossings. When it gets reflected, the same wave travels back (or at least the parts of it that get scattered straight back). For example, modern laser scanners emit signals with a frequency of 5 MHz (turning off 5 million times in one second). Together with the speed of light of approximately 300,000km/s, this leads to a wavelength of 60m and makes the laser scanner useful up to 30m.

When the obstacle is at a distance that corresponds to exactly half the wavelength, the reflected signal will be dark at exactly the time the emitted wave goes through a zero-crossing. Moving closer to the obstacle results in an offset that can be measured. As the emitter knows the shape of the wave it emitted, it can calculate the phase difference between the emitted and received signals. Knowing the wavelength, it can then calculate the distance. As this process is independent of ambient light (unless it has the exact same frequency as the laser being used), the estimates can be very precise. This is in contrast to a sensor that relies on signal strength: as the strength decays at least quadratically, small errors, e.g., due to fluctuations in the power supply that drives the emitting light, noise in the analog-digital converter, or simply differences in the reflecting surface, have a drastic impact on accuracy and precision (see below for a more formal definition of these terms).
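In formulas: with wavelength λ and a measured phase difference Δφ between the emitted and received signals, the distance follows as

d=\frac{\Delta\varphi}{2\pi}\cdot\frac{\lambda}{2}

where the factor λ/2 accounts for the signal traveling to the obstacle and back. Plugging in the numbers from above, a 5 MHz modulation yields λ = 60m, and the maximum unambiguous distance (reached at Δφ = 2π) is indeed 30m.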

The Xbox Kinect

While scanning lasers have revolutionized robotics by providing a real-time view of the obstacles in the vicinity of the robot at fairly high accuracy, they only provide a 2D slice of the world. For example, a table would only be seen by its legs, not allowing the robot to judge whether it could pass underneath. Although people have experimented with pivoting laser scanners, the vertical scanning speed is intrinsically limited by the horizontal scanning speed of the device. For example, a laser scanner that provides a 2D slice of the environment 10 times per second (at 10Hz) would need a full second to provide 10 vertical slices. This is not fast enough for real-time navigation, so fast 3D scanning requires multiple lasers. For example, the Velodyne 3D laser scanner employs 64 (!) parallel scanning lasers and provides 2cm accuracy over a 50m range.

An alternative solution to this problem is provided by 3D depth sensors that work by projecting an infrared pattern into the environment (aka structured light), observing it with a camera, and quickly calculating the depth of the environment from the observed deformation of the pattern. A prominent example of this technology is the Kinect for Microsoft’s Xbox, which made the technology affordable by targeting a mass-market application. Although neither the resolution nor the range is comparable with state-of-the-art laser range scanners, the sensor can provide depth images of 640×480 points (the resolution of a VGA camera) between 0.7 and 6m at 30Hz. This not only enables real-time navigation but also gesture recognition, opening up novel avenues in human-robot interaction and the potential for affordable, highly capable robot platforms.
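At its core, such a sensor triangulates. In a simplified pinhole model (an illustration only, not the Kinect’s actual calibration), the depth z of a point follows from the camera’s focal length f (in pixels), the baseline b between projector and camera, and the observed displacement, or disparity, d of the projected pattern:

z=\frac{f\cdot b}{d}

Since depth is inversely proportional to disparity, a fixed one-pixel measurement error translates into an increasingly large depth error as distance grows, which is why the depth resolution of such sensors degrades with range.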

Terminology

It is now time to introduce a more precise definition of terms such as “speed” and “resolution”, as well as additional taxonomy used in a robotic context. Roboticists differentiate between proprioceptive and exteroceptive sensors. Proprioceptive sensors measure quantities internal to the robot, such as wheel speed, current consumption, joint position, or battery status. Exteroceptive sensors measure quantities from the environment, such as the distance to a wall, the strength of the ambient light, or the pattern of a picture on the wall.

Roboticists also differentiate between active and passive sensors. Active sensors emit energy of some sort and measure the reaction of the environment. Passive sensors instead measure energy from the environment. For example, most distance sensors are active sensors (as they sense the reflection of a signal they emit), whereas an accelerometer, compass, or a push-button are passive sensors.

The difference between the upper and the lower limit of the quantity a sensor can measure is known as its range. This should not be confused with the dynamic range, which is the ratio between two quantities (usually used for sensors that sense light or sound). The minimal distance between two values a sensor can distinguish is known as its resolution. The resolution of a sensor is bounded by the device physics (e.g., a light detector can only count multiples of a quantum), but is usually limited by the analog-digital conversion process. The resolution of a sensor should not be confused with its accuracy or its precision (which are two different concepts). For example, whereas an infrared distance sensor might yield 4096 different values to encode distances from 0 to 10cm, which suggests a resolution of around 24 micrometers, its precision is far coarser than that (on the order of millimeters) due to noise in the acquisition process.

Technically, a sensor’s accuracy is given by the difference between the sensor’s output m and the true value v:

accuracy=1-\frac{|m-v|}{v}

A sensor’s precision, in contrast, is given by the ratio of its range and the statistical variance of the signal. Precision is therefore a measure of the repeatability of a signal, whereas accuracy describes a systematic error that is introduced by the sensor physics.
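Both definitions are easy to compute from repeated measurements of a known ground truth. A minimal Python sketch, with made-up readings:

    import statistics

    def accuracy(m, v):
        """Accuracy = 1 - |m - v| / v, as defined above."""
        return 1 - abs(m - v) / v

    def precision(measurements, sensor_range):
        """Precision as the ratio of range to the variance of the signal."""
        return sensor_range / statistics.variance(measurements)

    readings = [9.8, 10.1, 9.9, 10.2, 10.0]  # repeated readings of a true value of 10.0
    print(accuracy(statistics.mean(readings), 10.0))
    print(precision(readings, sensor_range=100.0))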

The speed at which a sensor can provide measurements is known as its bandwidth. For example, if a sensor has a bandwidth of 10Hz, it will provide a signal ten times per second. This is important to know, as querying the sensor more often is a waste of computational time and potentially misleading. In Webots, you usually set the bandwidth of each sensor when calling its enable function, which expects the sampling period in ms.
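For example, in a recent Webots Python controller, enabling a distance sensor with a sampling period of 100ms limits its bandwidth to 10Hz (the device name “ds0” is hypothetical and depends on your world file):

    from controller import Robot

    robot = Robot()
    timestep = int(robot.getBasicTimeStep())

    sensor = robot.getDevice("ds0")  # device name depends on your robot model
    sensor.enable(100)               # sampling period in ms, i.e., 10 Hz

    while robot.step(timestep) != -1:
        # querying more often than every 100 ms just returns the same value
        value = sensor.getValue()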

Additional Sensors

There are a few additional sensors that are important in current robotic systems, but are not used on the platforms discussed above.

Gyroscope: a gyroscope is an electro-mechanical device that measures rotational orientation. It is complementary to the accelerometer, which measures translational acceleration. Classically, a gyroscope consists of a rotating disc that can rotate freely in a system of pivots and gimbals. When the system moves, the disc’s angular momentum maintains its original orientation, making it possible to measure the orientation of the system relative to where it started. A variation of the gyroscope is the rate gyro, which measures rotational speed. The rate gyro can also be implemented using optics, which allows for extreme miniaturization. Rate gyros are used, for example, in Apple’s iPhone 4 and in Nintendo’s latest-generation Wii controller.

Compass: a compass measures absolute orientation with respect to the earth’s magnetic field. Unlike a mechanical compass, which measures the magnetic field in only one dimension, latest-generation electronic sensors can measure the earth’s magnetic field along three axes. Although this information is unreliable indoors and is disturbed by metal parts in the environment, a compass can be helpful for determining local orientation.

Inertial-Measurement Unit (IMU): together, accelerometers, gyroscopes, and magnetometers (compasses) can provide an estimate of a robot’s acceleration, velocity, and orientation along all six dimensions, and drastically improve the accuracy of odometry. As the underlying sensors are not very precise, however, the absolute values provided by an IMU quickly drift and require corrections from some sort of global positioning system.
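The drift problem is easy to reproduce in a few lines of Python: integrating a rate-gyro signal that carries even a tiny constant bias (the numbers below are made up, but of a plausible order of magnitude) accumulates an unbounded orientation error:

    dt = 0.01          # 100 Hz sampling
    bias = 0.001       # rad/s of constant sensor bias (assumed value)
    true_rate = 0.0    # the robot is not actually rotating

    heading = 0.0
    for _ in range(100 * 60):             # integrate for one minute
        measured_rate = true_rate + bias  # sensor output = truth + bias
        heading += measured_rate * dt     # dead-reckoned orientation

    print(heading)  # ~0.06 rad of heading error despite zero rotation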

Global Positioning System (GPS): a GPS receiver estimates its position with meter accuracy from its measured distances to satellites in orbit whose positions are known. As such, GPS is mostly suited for outdoor applications. A series of equivalent systems exists for indoor robotics, operating with either active or passive beacons in the environment and providing up to millimeter accuracy; as of now, however, there is no established industry standard.

Ultra-sound distance sensor: an ultra-sound distance sensor operates by emitting an ultra-sound pulse and measuring its reflection. Unlike a light-based sensor that measures the amplitude of the reflected signal, a sound-based sensor measures the time it takes the sound to travel. This is possible because sound travels at a much lower speed (approximately 340m/s) than light (approximately 300,000km/s). The fact that the sensor actually has to wait for the signal to return leads to a trade-off between range and bandwidth. (Look these definitions up above before you read on.) In other words, allowing a longer range requires waiting longer, which in turn limits how often the sensor can provide a measurement. Although ultra-sound distance sensors have become less and less common in robotics, they have an advantage over light-based sensors: instead of sending out a thin ray, the ultra-sound pulse results in a cone with an opening angle of 20 to 40 degrees. Ultra-sound sensors are thus able to detect small obstacles without the requirement of hitting them head-on with a ray. This property makes them the sensor of choice in automated parking helpers in cars.
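The range/bandwidth trade-off follows directly from the speed of sound. A small sketch:

    SPEED_OF_SOUND = 340.0  # m/s, approximate

    def max_bandwidth(max_range):
        """Highest measurement rate in Hz for a given maximum range in m."""
        round_trip = 2 * max_range / SPEED_OF_SOUND  # the pulse travels out and back
        return 1.0 / round_trip

    print(max_bandwidth(3.0))   # ~57 Hz for a 3 m maximum range
    print(max_bandwidth(10.0))  # ~17 Hz for a 10 m maximum range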

Take-home lessons

  • Most of a robot’s sensors either address the problem of localizing the robot or that of localizing and recognizing objects in its vicinity.
  • Each sensor has advantages and drawbacks that are quantified by its range, precision, accuracy, and bandwidth. Robust solutions to a problem can therefore only be achieved by combining multiple sensors with differing operating principles.
  • Solid-state sensors (i.e., those without mechanical parts) can be miniaturized and cheaply manufactured in quantity. This has enabled a series of affordable IMUs and 3D depth sensors that will provide the data basis for localization and object recognition on mass-market robotic systems.