Robots can keep track of their position by modeling the noise arising in their drive train and using their forward kinematics to propagate this error into a spatial probability density function (Lecture: Error propagation). The variance of this distribution can shrink as soon as the robot observes uniquely identifiable features with known locations. This can be done for discrete locations using Bayes’ rule (Lecture: Markov localization) and for continuous distributions using the Extended Kalman Filter (Lecture: Kalman filter). The key insight was that *every* observation *reduces* the variance of the robot’s position estimate. Here, the Kalman filter performs an *optimal* fusion of two observations by weighting them with their variances, i.e., unreliable information counts less than reliable information. In the robot localization problem, one observation is typically the robot’s position estimate from odometry, whereas the other comes from a feature with a known location on a map. So far, we have assumed that these locations are known. Today, we will introduce

- the concept of covariance (or, what all the non-diagonal elements in the covariance matrix are about)
- how to estimate the robot’s location and that of features in the map at the same time (Simultaneous Localization and Mapping or SLAM)
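The variance-weighted fusion mentioned above is easy to see in one dimension. A minimal sketch (the numeric values are made-up illustrations, not from the lecture):

```python
# Sketch: variance-weighted fusion of two scalar estimates of the same
# quantity, as the 1-D Kalman filter does. All numbers are illustrative.
def fuse(mean_a, var_a, mean_b, var_b):
    """Fuse two independent Gaussian estimates of the same quantity."""
    # The less reliable estimate (larger variance) receives less weight.
    k = var_a / (var_a + var_b)          # Kalman gain
    mean = mean_a + k * (mean_b - mean_a)
    var = (1.0 - k) * var_a              # fused variance < either input variance
    return mean, var

# Odometry says 2.0 m (uncertain); a feature observation says 2.4 m (precise).
mean, var = fuse(2.0, 0.09, 2.4, 0.01)
# The fused estimate lies close to the precise observation, and its
# variance is smaller than that of either input.
```

Note how the fused variance is always smaller than both input variances: this is the formal version of "every observation reduces the variance of the position estimate."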

## Special Case I: Single Feature

## Special Case II: Two Features

Consider now a map that has two features. Visiting one after the other, the robot will be able to store both of them in its map, albeit with a higher variance for the feature observed last. Although the observations of the two features are independent of each other, the relationship between their variances depends on the trajectory of the robot. The difference between the two variances is much lower if the robot connects them in a straight line than if it performs a series of turns between them. In fact, even if the variances of both features are huge (because the robot has already driven for quite a while before first encountering them), the probability density function over their *distance* will be very narrow if the features are close together. The latter can also be understood via the covariance of the two random variables (each consisting of range and angle). In probability theory, covariance is a measure of how much two variables change together. Obviously, the covariance between the locations of two features that a robot visits immediately after each other is much higher than that between features visited far apart. It should therefore be possible to use the covariance between features to correct feature estimates in retrospect. For example, if the robot returns to the first feature it has observed, it will be able to reduce the variance of its position estimate. As it knows that it has not traveled very far since it observed the last feature, it can then also correct that feature’s position estimate.
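The intuition that two features observed shortly after each other are strongly correlated can be checked numerically. A minimal sketch, with assumed (illustrative) noise magnitudes: both feature estimates inherit the same unknown robot pose error, plus small independent measurement noise.

```python
# Sketch: two feature estimates sharing the robot's pose error are
# strongly correlated. Noise magnitudes are illustrative assumptions.
import random

random.seed(0)
n = 20000
pose_err = [random.gauss(0.0, 0.5) for _ in range(n)]     # shared odometry drift
f1 = [e + random.gauss(0.0, 0.05) for e in pose_err]      # error of feature 1 estimate
f2 = [e + random.gauss(0.0, 0.05) for e in pose_err]      # error of feature 2 estimate

m1, m2 = sum(f1) / n, sum(f2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(f1, f2)) / n
# cov comes out near var(pose_err) = 0.25: although each individual
# estimate is poor, their errors move together, so observing one feature
# again also constrains the other.
```

Each feature's variance is large (dominated by the shared drift), yet their *relative* position is known almost exactly, which is exactly the structure SLAM exploits.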

## The Covariance Matrix

When estimating quantities with multiple variables, such as the pose of a robot consisting of its x-position, its y-position, and its angle, matrix notation has been a convenient way of writing down equations. For error propagation, we have written the variances of each input variable into the diagonal of a covariance matrix. For example, when using a differential wheel robot, uncertainty in position, expressed by $\sigma_x^2$, $\sigma_y^2$ and $\sigma_\theta^2$, was grounded in the uncertainty of its left and right wheel. We entered the variances of the left and right wheel into a 2×2 matrix and obtained a 3×3 matrix that had $\sigma_x^2$, $\sigma_y^2$ and $\sigma_\theta^2$ in its diagonal. Here, we set all other entries of the input matrix to zero and ignored entries of the resulting matrix that were off its diagonal. The reason we could do this is that uncertainties in the left and right wheel are *independent* random processes: there is no reason for the left wheel to slip just because the right wheel slips. Thus the covariance (the measure of how much two random variables change together) of the wheel slips is zero. This is not the case for the robot’s pose: uncertainty in one wheel affects all output random variables ($x$, $y$ and $\theta$) at the same time, which is expressed by their non-zero covariances, the non-zero entries off the diagonal of the output covariance matrix.
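This propagation can be sketched numerically. The following is a minimal example, assuming the standard differential-drive odometry model with wheelbase `b` (the parameter values are illustrative): the diagonal 2×2 wheel covariance is pushed through the Jacobian of the forward kinematics, and the resulting 3×3 pose covariance acquires non-zero off-diagonal entries.

```python
# Sketch: propagating independent wheel variances into a full 3x3 pose
# covariance, Sigma = F S F^T. Model and numbers are illustrative; b is
# the assumed wheelbase.
import math

def propagate(theta, ds_r, ds_l, var_r, var_l, b=0.3):
    ds = (ds_r + ds_l) / 2.0            # distance traveled
    dth = (ds_r - ds_l) / b             # change in heading
    a = theta + dth / 2.0
    # Jacobian of (x, y, theta) with respect to (ds_r, ds_l)
    F = [
        [math.cos(a) / 2 - ds * math.sin(a) / (2 * b),
         math.cos(a) / 2 + ds * math.sin(a) / (2 * b)],
        [math.sin(a) / 2 + ds * math.cos(a) / (2 * b),
         math.sin(a) / 2 - ds * math.cos(a) / (2 * b)],
        [1.0 / b, -1.0 / b],
    ]
    # Wheel covariance is diagonal: the slips are independent processes.
    S = [[var_r, 0.0], [0.0, var_l]]
    FS = [[sum(F[i][k] * S[k][j] for k in range(2)) for j in range(2)]
          for i in range(3)]
    # Sigma = F S F^T generally has non-zero off-diagonal entries.
    return [[sum(FS[i][k] * F[j][k] for k in range(2)) for j in range(3)]
            for i in range(3)]

Sigma = propagate(theta=0.0, ds_r=1.0, ds_l=1.0, var_r=0.01, var_l=0.01)
# Even for this straight drive, the y-theta covariance Sigma[1][2] is
# non-zero: a slip that rotates the robot also displaces it sideways.
```

The diagonal input covariance encodes the independence of the two wheels; the non-zero off-diagonal outputs encode exactly the coupling described above.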

## EKF SLAM

The key idea in EKF SLAM is to extend the state vector from the robot’s pose to also contain the positions of all features. Thus, the state

$$\hat{x}_k = (x, y, \theta)^T$$

becomes

$$\hat{x}_k = (x, y, \theta, \alpha_1, r_1, \ldots, \alpha_N, r_N)^T$$

assuming $N$ features (each described by its range and angle), which is a $(3+2N) \times 1$ vector. The action update (or “prediction update”) is identical to the case in which the feature locations are already known: the robot simply updates its position using odometry and the variance of its position using error propagation. The covariance matrix is now a $(3+2N) \times (3+2N)$ matrix that initially holds the variances of the robot’s pose and of each feature in its diagonal.
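A minimal sketch of this bookkeeping, with assumed initial values (for concreteness each feature contributes two state entries, here treated as a generic parameter pair; the range-angle parameterization from the lecture fills the same slots):

```python
# Sketch: the extended EKF-SLAM state and covariance. N, the initial
# pose uncertainty, and the "unobserved" variance are illustrative
# assumptions.
import numpy as np

N = 3                                    # number of map features
state = np.zeros(3 + 2 * N)              # [x, y, theta, f1a, f1b, ..., fNa, fNb]
Sigma = np.zeros((3 + 2 * N, 3 + 2 * N))
Sigma[:3, :3] = np.diag([0.01, 0.01, 0.005])   # initial pose variances
for i in range(N):
    j = 3 + 2 * i
    # Unobserved features start with a huge variance and no correlations;
    # off-diagonal blocks fill in as the robot moves and observes.
    Sigma[j:j + 2, j:j + 2] = np.eye(2) * 1e6
```

Initially the matrix is block-diagonal; it is the subsequent updates that populate the off-diagonal blocks with the robot-feature and feature-feature covariances discussed above.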

The interesting things happen during the perception update. Here, it is important that only *one* feature is observed at a time. Thus, if the robot observes multiple features at once, one needs to perform multiple, consecutive perception updates. Care needs to be taken that the matrix multiplications work out. In practice, you will need to set only those entries of the observation vector (a $(3+2N) \times 1$ vector) that correspond to the feature you currently observe. Similar considerations apply to the observation function and its Jacobian.
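The sparsity this paragraph describes can be made concrete. A sketch, assuming for illustration a range-bearing observation of a point feature stored as an $(x, y)$ pair (line features follow the same pattern): the observation Jacobian has non-zero entries only in the robot-pose block and in the block of the single feature being observed.

```python
# Sketch: Jacobian of a range-bearing observation of feature i with
# respect to the full (3 + 2N) state. Only the robot block and feature
# i's block are non-zero, which is why features are processed one at a
# time. The point-feature model is an illustrative assumption.
import numpy as np

def observation_jacobian(state, i, N):
    x, y, theta = state[:3]
    j = 3 + 2 * i
    mx, my = state[j], state[j + 1]
    dx, dy = mx - x, my - y
    q = dx * dx + dy * dy                # squared range
    r = np.sqrt(q)                       # range
    H = np.zeros((2, 3 + 2 * N))
    # derivative of (range, bearing) with respect to the robot pose
    H[:, :3] = [[-dx / r, -dy / r, 0.0],
                [dy / q, -dx / q, -1.0]]
    # derivative with respect to feature i's position
    H[:, j:j + 2] = [[dx / r, dy / r],
                     [-dy / q, dx / q]]
    return H

N = 2
state = np.zeros(3 + 2 * N)
state[3:5] = [3.0, 4.0]                  # feature 0 at (3, 4); robot at origin
H = observation_jacobian(state, 0, N)    # columns of feature 1 stay zero
```

Because all columns belonging to unobserved features are zero, the Kalman update still "works out" dimensionally while only touching the robot pose, the observed feature, and their mutual covariances.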

[…] cameras. Point cloud data allows fitting of lines using RANSAC, which can serve as features in EKF-based localization, but can also be used for improving odometry, loop-closure detection, and mapping. The goals of […]