Robots are able to keep track of their position using a model of the noise arising in their drive train and their forward kinematics to propagate this error into a spatial probability density function (Lecture: Error propagation). The variance of this distribution can shrink as soon as the robot sees uniquely identifiable features with known locations. This can be done for discrete locations using Bayes' rule (Lecture: Markov Localization) and for continuous distributions using the Extended Kalman Filter (Lecture: Kalman filter). The key insight here was that every observation reduces the variance of the robot's position estimate. Here, the Kalman filter performs an optimal fusion of two observations by weighting them with their variances, i.e., unreliable information counts less than reliable information. In the robot localization problem, one of the observations is typically the robot's position estimate, whereas the other observation comes from a feature with a known location on a map. So far, we have assumed that these locations are known. Today, we will introduce

  • the concept of covariance (or, what all the non-diagonal elements in the covariance matrix are about)
  • how to estimate the robot’s location and that of features in the map at the same time (Simultaneous Localization and Mapping or SLAM)
The SLAM problem has been considered the holy grail of mobile robotics for a long time. This lecture will introduce one of the first comprehensive solutions to the problem, which has now been superseded by computationally more efficient versions. We will begin by studying a series of special cases.

Special Case I: Single Feature

Consider a map that has only a single feature. We assume that the robot is able to obtain the relative range and angle of this feature, each with a certain variance. An example of this and how to calculate the variance of an observation based on sensor uncertainty is described in Lecture: Line detection. This feature could be a wall, but also a graphical tag that the robot can uniquely identify. The position of this feature in global coordinates m_i=[\alpha_i,r_i] is unknown, but can now easily be calculated using an estimate of the robot's position \boldsymbol{\hat{x}_k}. The variance of m_i's components is then the variance of the robot's position plus the variance of the observation.
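To make this concrete, here is a minimal Python/numpy sketch of this step, assuming the line-feature parameterization m_i=[\alpha_i,r_i] from the line detection lecture; all function and variable names are illustrative, not part of any lecture code. The first function converts an observation made in the robot frame into global coordinates, the second propagates both the robot's pose covariance and the sensor covariance into the feature's covariance using first-order error propagation:

import numpy as np

def feature_to_global(pose, z):
    """Convert a line observed in the robot frame, z = (alpha_r, r_r),
    into global coordinates m_i = (alpha_i, r_i) using the pose estimate."""
    x, y, theta = pose
    alpha_r, r_r = z
    alpha = alpha_r + theta                           # rotate the line normal into the world frame
    r = r_r + x * np.cos(alpha) + y * np.sin(alpha)   # shift by the robot's position
    return np.array([alpha, r])

def feature_covariance(pose, z, P_pose, R_z):
    """First-order error propagation: Sigma_m = Gx P Gx^T + Gz R Gz^T, i.e., the
    feature's covariance is the robot's pose covariance plus the observation
    covariance, each mapped through the corresponding Jacobian."""
    x, y, theta = pose
    alpha_r, _ = z
    alpha = alpha_r + theta
    Gx = np.array([[0.0, 0.0, 1.0],                   # d(alpha, r)/d(x, y, theta)
                   [np.cos(alpha), np.sin(alpha),
                    -x * np.sin(alpha) + y * np.cos(alpha)]])
    Gz = np.array([[1.0, 0.0],                        # d(alpha, r)/d(alpha_r, r_r)
                   [-x * np.sin(alpha) + y * np.cos(alpha), 1.0]])
    return Gx @ P_pose @ Gx.T + Gz @ R_z @ Gz.T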
Now consider the robot moving closer to the feature and obtaining additional observations. Although its uncertainty in position is growing, it can now rely on the feature m_i to reduce the variance of its old position. Also, repeated observations of the same feature from different angles might improve the quality of its observation. The robot therefore has a chance to keep its variance very close to that with which it initially observed the feature and stored it in its map. We can actually do this using the EKF framework we studied last week. There, we assumed that features have a known location (no variance), but that the robot's sensing introduces a variance. This variance was propagated into the covariance matrix of the innovation (\boldsymbol{S}), see step 5 of last week's example. We can now simply add the variance of the estimate of the feature's position to that of the robot's sensing process.
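In code, this amounts to a one-line change to step 5. A sketch (Sigma_m is the feature covariance computed in the sketch above; H, P and R stand for the observation Jacobian, state covariance and sensor noise from last week's example, and the function name is illustrative):

def innovation_covariance(H, P, R, Sigma_m):
    # With a perfectly known map the innovation covariance is S = H P H^T + R;
    # an uncertain feature simply adds its own covariance Sigma_m on top.
    return H @ P @ H.T + R + Sigma_m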

Special Case II: Two Features

Consider now a map that has two features. Visiting one after the other, the robot will be able to store both of them in its map, although with a higher variance for the feature observed last. Although the observations of both features are independent from each other, the relationship between their variances depends on the trajectory of the robot. The difference between these two variances is much lower if the robot connects them in a straight line than when it performs a series of turns between them. In fact, even if the variances of both features are huge (because the robot has already driven for quite a while before first encountering them), but the features are close together, the probability density function over their distance would be very narrow. The latter can also be understood via the covariance of the two random variables (each consisting of range and angle). In probability theory, the covariance is a measure of how much two variables change together. Obviously, the covariance between the locations of two features that are visited immediately after each other by a robot is much higher than that of features visited far apart. It should therefore be possible to use the covariance between features to correct estimates of features in retrospect. For example, if the robot returns to the first feature it has observed, it will be able to reduce the variance of its position estimate. As it knows that it has not traveled very far since it observed the last feature, it can then correct this feature's position estimate.
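A toy numerical example (made-up numbers, one-dimensional for brevity) illustrates this: two features observed from the same, highly uncertain robot position each inherit the large pose variance, yet their estimates are strongly correlated, so the variance of their estimated distance contains only the small sensing noise.

import numpy as np

rng = np.random.default_rng(0)
var_pose, var_sense = 4.0, 0.01
pose_err = rng.normal(0, np.sqrt(var_pose), 100000)          # error shared by both observations
m1 = pose_err + rng.normal(0, np.sqrt(var_sense), 100000)    # estimate of feature 1
m2 = pose_err + rng.normal(0, np.sqrt(var_sense), 100000)    # estimate of feature 2
print(np.var(m1), np.var(m2))      # each roughly var_pose + var_sense (~4.01)
print(np.cov(m1, m2)[0, 1])        # covariance roughly var_pose (~4.0)
print(np.var(m1 - m2))             # distance between features: only ~2*var_sense (~0.02)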

The Covariance Matrix

When estimating quantities with multiple variables, such as the position of a robot that consists of its x-position, its y-position and its angle, matrix notation has been a convenient way of writing down equations. For error propagation, we have written the variances of each input variable into the diagonal of a covariance matrix. For example, when using a differential wheel robot, uncertainty in position expressed by \sigma_x, \sigma_y and \sigma_{\theta} was grounded in the uncertainty of its left and right wheel. We entered the variances of the left and right wheel into a 2×2 matrix and obtained a 3×3 matrix that had \sigma_x, \sigma_y and \sigma_{\theta} in its diagonal. Here, we set all other entries of the input matrix to zero and ignored entries in the resulting matrix that were not on its diagonal. The reason we could actually do this is because uncertainty in the left and right wheel are independent random processes: there is no reason that the left wheel slips just because the right wheel slips. Thus the covariance – the measure of how much two random variables change together – of these is zero. This is not the case for the robot's position: uncertainty in one wheel will affect all output random variables (\sigma_x, \sigma_y and \sigma_{\theta}) at the same time, which is expressed by their non-zero covariances – the non-zero entries off the diagonal of the output covariance matrix.
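The following sketch (Python/numpy; axle length b, wheel displacements and the Jacobian as in the error-propagation lecture; only the contribution of the wheel noise is shown, and all names and numbers are illustrative) makes this visible: the input covariance is a diagonal 2×2 matrix, but the resulting 3×3 pose covariance has non-zero off-diagonal entries.

import numpy as np

def propagate_wheel_noise(theta, dsl, dsr, b, var_l, var_r):
    """Map the (independent) wheel variances into a 3x3 covariance over (x, y, theta).
    Only the wheel-noise term Fu Sigma_u Fu^T of the full error propagation is shown."""
    ds = (dsr + dsl) / 2.0
    dtheta = (dsr - dsl) / b
    a = theta + dtheta / 2.0
    Fu = np.array([  # Jacobian of the pose update with respect to (dsr, dsl)
        [0.5 * np.cos(a) - ds / (2 * b) * np.sin(a), 0.5 * np.cos(a) + ds / (2 * b) * np.sin(a)],
        [0.5 * np.sin(a) + ds / (2 * b) * np.cos(a), 0.5 * np.sin(a) - ds / (2 * b) * np.cos(a)],
        [1.0 / b, -1.0 / b]])
    Sigma_u = np.diag([var_r, var_l])   # wheels slip independently: zero covariance
    return Fu @ Sigma_u @ Fu.T          # off-diagonal entries are non-zero

print(propagate_wheel_noise(theta=0.0, dsl=0.10, dsr=0.12, b=0.2, var_l=1e-4, var_r=1e-4))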

EKF SLAM

The key idea in EKF SLAM is to extend the state vector from the robot's position to also contain the positions of all features. Thus, the state

\hat{\boldsymbol{x}}_{k}=(x,y,\theta)^T

becomes

\hat{\boldsymbol{x}}_{k}=(x,y,\theta,\alpha_1,r_1,\ldots,\alpha_N,r_N)^T

assuming N features, which is a (3+2N)×1 vector. The action update (or “prediction update”) is identical to the case in which the feature locations are already known: the robot simply updates its position using odometry and updates the variance of its position using error propagation. The covariance matrix is now a (3+2N)×(3+2N) matrix that initially holds the variances of the robot's position and of each feature on its diagonal.
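A sketch of this prediction step (Python/numpy; the motion model f, its Jacobians Fx and Fu and the wheel-noise covariance Sigma_u are passed in as in last week's EKF, and all names are illustrative) shows that odometry only touches the robot-related blocks of the state and covariance:

import numpy as np

def ekf_slam_predict(x, P, u, f, Fx, Fu, Sigma_u):
    """x: (3+2N,) state, P: (3+2N, 3+2N) covariance, u: odometry input."""
    n = x.shape[0]
    x_pred = x.copy()
    x_pred[:3] = f(x[:3], u)                 # only the robot pose moves; features stay put
    F = np.eye(n)
    F[:3, :3] = Fx(x[:3], u)                 # lift the 3x3 pose Jacobian into the full state
    P_pred = F @ P @ F.T                     # updates the pose block and pose-feature cross terms
    P_pred[:3, :3] += Fu(x[:3], u) @ Sigma_u @ Fu(x[:3], u).T   # add odometry noise
    return x_pred, P_pred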

The interesting things happen during the perception update. Here it is important that only one feature is observed at a time. Thus, if the robot observes multiple features at once, one needs to perform multiple, consecutive perception updates. Care needs to be taken that the matrix multiplications work out. In practice, you will need to set only those entries of the observation vector (a (3+2N)×1 vector) that correspond to the feature that you actually observe. Similar considerations apply to the observation function and its Jacobian.
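One common way to organize this (equivalent to filling only the relevant entries of a full-length observation vector) is to use the 2×1 observation of the single feature together with a 2×(3+2N) Jacobian that is zero everywhere except in the columns of the robot pose and of the observed feature. A sketch, with the observation model h and its Jacobians Hx and Hm passed in and all names illustrative:

import numpy as np

def ekf_slam_update(x, P, z, i, h, Hx, Hm, R):
    """One perception update for feature i, observed as z = (alpha, r)."""
    n = x.shape[0]
    j = 3 + 2 * i                          # index of feature i in the state vector
    pose, m_i = x[:3], x[j:j + 2]
    H = np.zeros((2, n))
    H[:, :3] = Hx(pose, m_i)               # columns of the robot pose
    H[:, j:j + 2] = Hm(pose, m_i)          # columns of feature i; everything else stays zero
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain, (3+2N) x 2
    x_new = x + K @ (z - h(pose, m_i))     # corrects the pose *and* all correlated features
    P_new = (np.eye(n) - K @ H) @ P
    return x_new, P_new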
