The information generated by sensors can be large. For example, a simple webcam generates 640×480 color pixels (red, green, and blue), or 921,600 bytes, 20 to 30 times per second. A single-ray laser scanner still provides around 600 distance measurements 10 times per second. This is in contrast to the information that the robot actually requires. For example, in the Ratslife world, the e-puck camera can be used to recognize one of the 48 different color patterns or the presence or absence of a charger, essentially reducing hundreds of bytes of camera data to around 6 bits of content. The goal of most image processing algorithms is therefore to first *reduce* information content in a meaningful way and then *extract* relevant information. So far, we have seen convolution-based filters such as blurring and edge detection, as well as binary operations such as thresholding. You have used these operations in the lab to filter a camera image, apply a threshold so that only the colored pattern remains, reduce it to its edges using a Sobel filter, quarter it, and count the number of pixels in each quadrant to guess the type of the pattern. We are now interested in higher-level features such as lines and in techniques to extract them:

- Extracting line features using the least-squares method
- Split-and-merge algorithm
- Extracting line features using RANSAC
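As a toy illustration of the quadrant-counting step from the lab recap above, a sketch in Python (the function name and the image layout are made up for this sketch; the threshold stands in for the filtering pipeline):

```python
def quadrant_counts(image, threshold):
    """Binarize a grayscale image (a list of rows) and count the pixels
    above `threshold` in each quadrant, giving a crude signature of a
    pattern as in the lab exercise. Order of counts: [top-left,
    top-right, bottom-left, bottom-right]."""
    h, w = len(image), len(image[0])
    counts = [0, 0, 0, 0]
    for i, row in enumerate(image):
        for j, value in enumerate(row):
            if value > threshold:
                # pick the quadrant from the row/column halves
                quadrant = (2 if i >= h // 2 else 0) + (1 if j >= w // 2 else 0)
                counts[quadrant] += 1
    return counts
```

Comparing the four counts against the counts stored for each known pattern then yields a guess of the pattern type.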

While convolution-based filters can *highlight* lines and edges in images, additional algorithms are needed to extract structured information such as the orientation and position of a line with respect to the robot.

An important property of a feature is that it can be recognized as the *same* feature from sensor data, even if the robot has slightly turned or moved farther or closer to the feature. There are many feature detectors available that accomplish this; prominent examples are the Harris corner detector (essentially detecting points in the image where vertical and horizontal lines cross) and the SIFT feature detector. Feature detection is important far beyond robotics and is for example used in hand-held cameras that can automatically stitch images together. Here, feature detectors will fire on the same features in two images taken from slightly different perspectives, which allows the camera to calculate the transformation between the two. This lecture focuses on line features, as those can be detected in both range and image data, and provides a tangible example for the least-squares and RANSAC algorithms.

## Least-Squares Fitting a Line to a Point Cloud

A laser scanner or a sweep using an infrared sensor along a wall will return a suite of points at positions $(x_i, y_i)$. These points can also be represented in polar coordinates $(\rho_i, \theta_i)$. We can now imagine a line running through these points that is parametrized with a distance $r$ and an angle $\alpha$. Here, $r$ is the distance of the robot to the wall and $\alpha$ its angle. As all sensors are noisy, each point will have a distance $d_i$ from the “optimal” line running through the points. Using simple trigonometry we can now write

$$d_i = \rho_i \cos(\theta_i - \alpha) - r$$

(make a drawing of this). Different lines – parametrized by $r$ and $\alpha$ – will have different values for $d_i$. We can now write an expression for the overall error $S$ as

$$S = \sum_i d_i^2 = \sum_i \left(\rho_i \cos(\theta_i - \alpha) - r\right)^2$$

Here, we square each individual error to account for the fact that a negative error, i.e. a point left of the line, is as bad as a positive error, i.e. a point right of the optimal line. In order to *minimize* $S$, we need to take the partial derivatives with respect to $\alpha$ and $r$, set them to zero, and solve the resulting system of equations for $\alpha$ and $r$:

$$\frac{\partial S}{\partial \alpha} = 0, \qquad \frac{\partial S}{\partial r} = 0$$

Here, the symbol $\partial$ indicates that we are taking a partial derivative. Solving for $\alpha$ and $r$ is involved, but possible:

$$\alpha = \frac{1}{2} \arctan\left( \frac{\sum_i \rho_i^2 \sin 2\theta_i - \frac{2}{n} \sum_i \sum_j \rho_i \rho_j \cos\theta_i \sin\theta_j}{\sum_i \rho_i^2 \cos 2\theta_i - \frac{1}{n} \sum_i \sum_j \rho_i \rho_j \cos(\theta_i + \theta_j)} \right)$$

and

$$r = \frac{\sum_i \rho_i \cos(\theta_i - \alpha)}{n}$$

We can therefore calculate the distance and orientation of a wall captured by our proximity sensors relative to the robot’s position, or the height and orientation of a line in an image – for example the wall of the Ratslife arena – based on a collection of points that we believe might belong to a line.

This approach is known as the least-squares method and can be used to fit data to any parametric model. If the result cannot be obtained analytically as in this example, numerical methods have to be used to find the best fit that minimizes the quadratic error.
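A minimal sketch of such a line fit in Python (not from the original text): for numerical convenience it minimizes the identical sum of squared perpendicular distances in Cartesian coordinates, which yields the same line parameters $(r, \alpha)$ as the polar closed form above. The function name is illustrative.

```python
import math

def fit_line_polar(points):
    """Least-squares fit of a line x*cos(alpha) + y*sin(alpha) = r to a
    list of (x, y) points, minimizing the sum of squared perpendicular
    distances. Returns (r, alpha) with r >= 0."""
    n = len(points)
    xm = sum(x for x, _ in points) / n      # centroid of the point cloud
    ym = sum(y for _, y in points) / n
    sxx = sum((x - xm) ** 2 for x, _ in points)
    syy = sum((y - ym) ** 2 for _, y in points)
    sxy = sum((x - xm) * (y - ym) for x, y in points)
    # closed-form minimizer of the squared perpendicular distances
    alpha = 0.5 * math.atan2(-2.0 * sxy, syy - sxx)
    r = xm * math.cos(alpha) + ym * math.sin(alpha)
    if r < 0:                                # keep the distance positive
        r, alpha = -r, alpha + math.pi
    return r, alpha
```

For three exact points along the wall $y = 1$, `fit_line_polar([(0, 1), (1, 1), (2, 1)])` returns $r = 1$ and $\alpha = \pi/2$, i.e. a wall at distance 1 whose normal points along the y-axis.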

## Split-and-merge algorithm

A key problem with this approach is that it is often unclear how many lines there are and where a line starts and where it ends. Looking through the camera, for example, we will see vertical lines corresponding to wall corners and horizontal ones that correspond to wall-floor intersections and the horizon; using a proximity sensor, the robot might detect a corner. We therefore need an algorithm that can separate point clouds into multiple lines. One possible approach is to find the outlier with the strongest deviation from a fitted line and split the line at this point. This can be done iteratively until each line has no outliers above a certain threshold. A recursive version of this algorithm is known as the Ramer-Douglas-Peucker algorithm.
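A sketch of the recursive split step in Python (the merge step, which re-joins adjacent collinear segments, is omitted; function and parameter names are illustrative):

```python
import math

def split(points, threshold):
    """Recursive split step of split-and-merge (Ramer-Douglas-Peucker
    style): if the point farthest from the chord between the two
    endpoints deviates by more than `threshold`, split there and recurse;
    otherwise accept the whole sequence as one line segment."""
    if len(points) <= 2:
        return [points]
    (x1, y1), (x2, y2) = points[0], points[-1]
    length = math.hypot(x2 - x1, y2 - y1)

    def dist(p):
        # perpendicular distance of p from the line through the endpoints
        return abs((y2 - y1) * p[0] - (x2 - x1) * p[1]
                   + x2 * y1 - y2 * x1) / length

    i_max = max(range(1, len(points) - 1), key=lambda i: dist(points[i]))
    if dist(points[i_max]) <= threshold:
        return [points]        # no outlier above threshold: one segment
    # split at the worst outlier; it becomes the shared endpoint of both halves
    return split(points[:i_max + 1], threshold) + split(points[i_max:], threshold)
```

For a scan sweeping around a right-angled corner, e.g. `split([(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)], 0.1)`, the recursion splits at the corner point `(2, 0)` and returns two segments, one per wall.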

## RANSAC: Random Sample Consensus

If the number of “outliers” is large, a least-squares fit will generate poor results, as it generates the “best” fit that accommodates both “inliers” and “outliers”. Split-and-merge algorithms will also fail, as they are extremely susceptible to noise: depending on the actual parameters, every outlier will split a potential line into two. A solution to this problem is to randomly sample possible lines and keep those that satisfy a certain desired quality, given by the number of points lying sufficiently close to the candidate line. In practice, these are the two parameters of the algorithm, namely the minimum number of points required to consider a fit a line, and the maximum distance from a line at which a point still counts as an inlier rather than an outlier. The algorithm proceeds as follows: select two *random* points from the set and connect them with a line. Grow this line by the maximum inlier distance in both directions and count the number of inliers. Repeat this until one or more lines with a sufficient number of inliers are found, or a maximum number of iterations is reached.

The RANSAC algorithm is fairly easy to understand in the line-fitting application, but can be used to fit arbitrary parametric models to data of any dimension. Here, its main strength is its ability to cope with noisy data.
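A sketch of the procedure for 2D points (parameter names such as `inlier_dist` and `min_inliers` are illustrative, not from the original text):

```python
import math
import random

def ransac_line(points, n_iter=200, inlier_dist=0.05, min_inliers=8):
    """Sketch of RANSAC line fitting: repeatedly pick two random points,
    count how many points lie within `inlier_dist` of the line through
    them, and keep the candidate with the most inliers (provided it has
    at least `min_inliers`). Returns (endpoint pair, inliers) or (None, [])."""
    best, best_inliers = None, []
    for _ in range(n_iter):
        (x1, y1), (x2, y2) = random.sample(points, 2)
        length = math.hypot(x2 - x1, y2 - y1)
        if length == 0.0:
            continue  # degenerate sample: both points coincide
        # inliers: points within inlier_dist of the candidate line
        inliers = [p for p in points
                   if abs((y2 - y1) * p[0] - (x2 - x1) * p[1]
                          + x2 * y1 - y2 * x1) / length <= inlier_dist]
        if len(inliers) >= min_inliers and len(inliers) > len(best_inliers):
            best, best_inliers = ((x1, y1), (x2, y2)), inliers
    return best, best_inliers
```

In practice the returned inlier set is then refined with a final least-squares fit, combining RANSAC's robustness to outliers with the accuracy of the closed-form solution.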

## Outlook

Why are lines so useful? As you noticed in the lab, the key challenge in estimating a robot’s pose is unreliable odometry, in particular when it comes to turning. Here, a simple infrared sensor can provide the robot with a much better feel for what actually happened. Similarly, if a robot has the ability to track markers in the environment using vision, it gets another estimate of how much it is actually moving. How information from odometry and other sensors can be fused not only to localize the robot, but also to create maps of its environment, will be the focus of the remainder of this class.

## Take-home lessons

- Features are “interesting” information in sensor data that is robust to variations in rotation and scale as well as to noise.
- Which features are most useful depends on the characteristics of the sensor generating the data, the structure of the environment, and the actual application.
- There are many feature detectors available, some of which operate as simple filters while others rely on machine learning techniques.
- Lines are among the most important features in mobile robotics as they are easy to extract from many different sensors and provide strong clues for localization.
