Thanks for stopping by! Please note that this page will not be updated anymore and all content has been moved into an open-source textbook. The book is available open-source on GitHub, compiled as PDF, and in print on Amazon.com.
The location of a robot is subject to uncertainty due to wheel-slip and encoder noise. We learned in the past how the variance in position can be derived from the variance of the robot’s drive train using the error propagation law and the forward kinematics of the robot. One can see that this error increases continuously unless the robot makes additional observations, e.g., of a static object with known location. This update can be formally done using Bayes’ rule, which relates the likelihood of being at a certain position given that the robot sees a certain feature to the likelihood of seeing this feature at the hypothetical location. For example, a robot that drives towards a wall will become less and less certain of its position (action update) until it encounters the wall (perception update). It can then use its sensor model, which relates its observation to possible positions. Its real location must therefore be somewhere between its original belief and where the sensor tells it to be. Bayes’ rule allows us to perform this update for discrete locations and discrete sensor error distributions. This is inconvenient, as we are used to representing our robot’s position with a 2D Gaussian distribution. Also, it seems much easier to just change the mean and variance of this Gaussian instead of updating hundreds of variables. The goals of this lecture are
- to introduce a technique known as the Kalman filter to perform action and perception updates exclusively using Gaussian distributions.
- to formally introduce the notion of a feature map.
- to develop an example that puts everything we learned so far together: forward kinematics, error propagation and feature estimation.
Probabilistic map-based localization
In order to localize a robot using a map, we need to perform the following steps:
1. Calculate an estimate of our new position using the forward kinematics and knowledge of the wheel-speeds that we sent to the robot, until the robot encounters some uniquely identifiable feature.
2. Calculate the position of the feature (a wall, a landmark, or a beacon) relative to the robot.
3. Use knowledge of where the feature is located in global coordinates to predict what the robot should see.
4. Calculate the difference between what the robot actually sees and what it believes it should see.
5. Use the result from (4) to update the robot's belief by weighing each observation with its variance, as sketched in the example below.
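As a rough illustration only, here is a minimal Python sketch of how these five steps might be arranged in a localization loop. The helper functions (`predict_pose`, `detect_feature`, `predict_measurement`, and `fuse`) are hypothetical placeholders, not part of any particular library; they stand in for the forward kinematics, feature extraction, measurement prediction, and Kalman update developed in the rest of this lecture.

```python
# Hypothetical localization loop illustrating the five steps above.
# predict_pose, detect_feature, predict_measurement and fuse are
# placeholders for routines developed later in this lecture.

def localize(pose, P, wheel_speeds, scan, feature_map):
    # 1. Estimate the new position from forward kinematics (action update).
    pose, P = predict_pose(pose, P, wheel_speeds)

    # 2. Extract a feature (e.g., a line) relative to the robot.
    z, R = detect_feature(scan)
    if z is None:                      # no identifiable feature seen
        return pose, P

    # 3. Predict what the robot should see, based on the map.
    z_predicted, H = predict_measurement(pose, feature_map)

    # 4. Compare what is seen with what is expected (the "innovation").
    innovation = z - z_predicted

    # 5. Weigh the observation by its variance and update the belief.
    pose, P = fuse(pose, P, innovation, H, R)
    return pose, P
```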
Sensor Fusion using the Kalman filter
Optimal Sensor Fusion
The Kalman Filter
Extended Kalman Filter
In the extended Kalman filter, the state transition and observation models need not be linear functions of the state but may instead be differentiable functions. The action update step looks as follows:

$$\hat{\boldsymbol{x}}_{k|k-1} = f(\hat{\boldsymbol{x}}_{k-1|k-1}, \boldsymbol{u}_{k-1})$$

Here, $f(\cdot)$ is a function of the old state $\hat{\boldsymbol{x}}_{k-1|k-1}$ and the control input $\boldsymbol{u}_{k-1}$. This is nothing but the odometry update we are used to, where $f$ is a function describing the forward kinematics of the robot, $\hat{\boldsymbol{x}}$ its position, and $\boldsymbol{u}$ the wheel-speeds we set.
We can also calculate the covariance matrix of the robot position:

$$P_{k|k-1} = F_{x} P_{k-1|k-1} F_{x}^{T} + F_{u} Q_{k-1} F_{u}^{T}$$

This is nothing but the error propagation law applied to the odometry of the robot, with $Q_{k-1}$ the covariance matrix of the wheel-slip, and $F_{x}$ and $F_{u}$ the Jacobian matrices of the forward kinematic equations with respect to the robot’s position (indicated by the index $x$) and with respect to the wheel-slip of the left and right wheel, respectively.
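As a minimal sketch (not a reference implementation), the action update can be written in a few lines of numpy. The functions `forward_kinematics`, `jacobian_Fx`, and `jacobian_Fu` are assumed to implement the robot's odometry model and its partial derivatives; a concrete differential-drive version is sketched in the example further below.

```python
import numpy as np

def action_update(x, P, u, Q,
                  forward_kinematics, jacobian_Fx, jacobian_Fu):
    """EKF prediction: propagate state and covariance through the odometry.

    x : (3,)  pose estimate (x, y, theta)
    P : (3,3) pose covariance
    u : (2,)  control input, e.g. wheel movements (ds_right, ds_left)
    Q : (2,2) covariance of the wheel-slip error
    """
    x_pred = forward_kinematics(x, u)          # x_{k|k-1} = f(x_{k-1|k-1}, u)
    Fx = jacobian_Fx(x, u)                     # 3x3 Jacobian w.r.t. the pose
    Fu = jacobian_Fu(x, u)                     # 3x2 Jacobian w.r.t. wheel-slip
    P_pred = Fx @ P @ Fx.T + Fu @ Q @ Fu.T     # error propagation law
    return x_pred, P_pred
```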
The perception update (or correction) step looks as follows:

$$\hat{\boldsymbol{x}}_{k|k} = \hat{\boldsymbol{x}}_{k|k-1} + K_{k}\tilde{\boldsymbol{y}}_{k}$$
$$P_{k|k} = (I - K_{k}H_{k})P_{k|k-1}$$

At this point the indices should start making sense. We are calculating everything twice: once we update from $k-1|k-1$ to an intermediate result $k|k-1$ during the action update, and obtain the final result after the perception update, where we go from $k|k-1$ to $k|k$.
We need to calculate three additional variables:
- The innovation $\tilde{\boldsymbol{y}}_{k} = \boldsymbol{z}_{k} - h(\hat{\boldsymbol{x}}_{k|k-1})$
- The covariance of the innovation $S_{k} = H_{k} P_{k|k-1} H_{k}^{T} + R_{k}$
- The (near-optimal) Kalman gain $K_{k} = P_{k|k-1} H_{k}^{T} S_{k}^{-1}$
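Again as a sketch under the same assumptions, the perception update maps directly to numpy. Here `h` and `jacobian_H` are placeholders for the observation model and its Jacobian, which are worked out for line features in the example below.

```python
import numpy as np

def perception_update(x_pred, P_pred, z, R, h, jacobian_H):
    """EKF correction: fuse a measurement z (with covariance R) into the pose.

    x_pred : (3,)  predicted pose from the action update
    P_pred : (3,3) predicted covariance
    z      : (m,)  actual measurement
    R      : (m,m) measurement covariance
    """
    H = jacobian_H(x_pred)                     # m x 3 Jacobian of h
    y = z - h(x_pred)                          # innovation
    S = H @ P_pred @ H.T + R                   # covariance of the innovation
    K = P_pred @ H.T @ np.linalg.inv(S)        # (near-optimal) Kalman gain
    x_new = x_pred + K @ y                     # corrected pose
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```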
Example
Whereas the action update step is equivalent to the forward kinematics and error propagation we have seen before, the observation model and the “innovation” require an example. We will show how a mobile robot equipped with a laser scanner can optimally correct its position estimate using unreliable odometry, unreliable sensing, and a correct map.
1. Prediction Update
We assume for now that the reader is familiar with calculating $\hat{\boldsymbol{x}}_{k|k-1}$ and its variance $P_{k|k-1}$. Here, $Q_{k-1}$, the covariance matrix of the wheel-slip error, is given by

$$Q_{k-1} = \begin{pmatrix} k_r |\Delta s_r| & 0 \\ 0 & k_l |\Delta s_l| \end{pmatrix}$$

where $\Delta s_l$ and $\Delta s_r$ are the wheel movements of the left and right wheel and $k_l$, $k_r$ are constants. See also the odometry lab for detailed derivations of these calculations and how to estimate $\Delta s_l$ and $\Delta s_r$. The state vector $\hat{\boldsymbol{x}}_{k|k-1}$ is a 3×1 vector, the covariance matrix $P_{k|k-1}$ is a 3×3 matrix, and the Jacobian $F_u$ that is used during error propagation is a 3×2 matrix. See the error propagation lecture for details on how to calculate $F_u$.
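To make the prediction update concrete, here is a hedged numpy sketch of one common differential-drive odometry model, consistent with the forward kinematics and error propagation lectures. The wheelbase `b` and the slip constants `kr`, `kl` are assumptions chosen purely for illustration; the functions provide the placeholders used in the `action_update` sketch above.

```python
import numpy as np

def forward_kinematics(x, u, b=0.1):
    """Differential-drive odometry: u = (ds_r, ds_l), b = wheelbase [m]."""
    ds_r, ds_l = u
    ds, dtheta = (ds_r + ds_l) / 2.0, (ds_r - ds_l) / b
    theta = x[2] + dtheta / 2.0
    return x + np.array([ds * np.cos(theta), ds * np.sin(theta), dtheta])

def jacobian_Fx(x, u, b=0.1):
    """3x3 Jacobian of f with respect to the pose (x, y, theta)."""
    ds_r, ds_l = u
    ds, dtheta = (ds_r + ds_l) / 2.0, (ds_r - ds_l) / b
    theta = x[2] + dtheta / 2.0
    return np.array([[1.0, 0.0, -ds * np.sin(theta)],
                     [0.0, 1.0,  ds * np.cos(theta)],
                     [0.0, 0.0,  1.0]])

def jacobian_Fu(x, u, b=0.1):
    """3x2 Jacobian of f with respect to the wheel movements (ds_r, ds_l)."""
    ds_r, ds_l = u
    ds, dtheta = (ds_r + ds_l) / 2.0, (ds_r - ds_l) / b
    theta = x[2] + dtheta / 2.0
    c, s = np.cos(theta), np.sin(theta)
    return np.array([
        [c / 2 - ds * s / (2 * b), c / 2 + ds * s / (2 * b)],
        [s / 2 + ds * c / (2 * b), s / 2 - ds * c / (2 * b)],
        [1.0 / b,                 -1.0 / b]])

def wheel_slip_covariance(u, kr=0.01, kl=0.01):
    """Q_{k-1} = diag(kr*|ds_r|, kl*|ds_l|), as given above."""
    ds_r, ds_l = u
    return np.diag([kr * abs(ds_r), kl * abs(ds_l)])
```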
2. Observation
Let us now assume that we can detect line features $\boldsymbol{z}_{k,i} = (\alpha_i, r_i)^T$, where $\alpha_i$ and $r_i$ are the angle and distance of the line from the coordinate system of the robot. These line features are subject to variances $\sigma_{\alpha,i}^2$ and $\sigma_{r,i}^2$, which make up the diagonal of $R_{k,i}$. See the line detection lecture for a derivation of how angle and distance as well as their variance can be calculated from a laser scanner. The observation is a 2×1 matrix.
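For illustration, assuming the line fitting routine from the line detection lecture returns an angle, a distance, and their variances, the observation and its covariance could be assembled as follows (the numerical values are placeholders, not taken from any real sensor):

```python
import numpy as np

# Example values standing in for the output of the line detection step:
alpha_i, r_i = 0.3, 1.2                    # angle [rad] and distance [m] of the line
sigma_alpha_sq, sigma_r_sq = 1e-4, 4e-4    # their variances (assumed)

z_i = np.array([alpha_i, r_i])                 # 2x1 observation
R_i = np.diag([sigma_alpha_sq, sigma_r_sq])    # 2x2 diagonal covariance
```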
3. Measurement Prediction
We assume that we can uniquely identify the lines we are seeing and retrieve their real position from a map. This is much easier for unique features, but can also be done for lines by assuming that our error is small enough that we can search through our map and pick the closest lines. As features are stored in global coordinates, we need to transform them into how the robot would see them. In practice this is nothing but a list of lines, each with an angle and a distance, but this time with respect to the origin of the global coordinate system. Transforming them into robot coordinates is straightforward. With $(\alpha_{m,i}, r_{m,i})$ the corresponding entry from the map, we can write

$$h_i(\hat{\boldsymbol{x}}_{k|k-1}) = \begin{pmatrix} \alpha_{m,i} - \theta_{k|k-1} \\ r_{m,i} - \left(x_{k|k-1}\cos(\alpha_{m,i}) + y_{k|k-1}\sin(\alpha_{m,i})\right) \end{pmatrix}$$

and calculate its Jacobian $H_i$ as the partial derivatives of the predicted angle with respect to $(x, y, \theta)$ in the first row, and the partial derivatives of the predicted distance in the second. How to predict the distance at which the robot should see the feature with distance $r_{m,i}$ from the map is illustrated in the figure below.

Example of how to predict the distance to a feature the robot would see, given its estimated position and the feature’s known location from the map.
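A small numpy sketch of this measurement prediction for one line feature; the matched map entry `(alpha_m, r_m)` is assumed to be given in global coordinates. The Jacobian follows directly by differentiating the two rows of $h_i$ with respect to $(x, y, \theta)$.

```python
import numpy as np

def predict_line_measurement(x_pred, alpha_m, r_m):
    """Predict how a map line (alpha_m, r_m), given in global coordinates,
    should appear from the predicted robot pose x_pred = (x, y, theta)."""
    x, y, theta = x_pred
    alpha_pred = alpha_m - theta
    r_pred = r_m - (x * np.cos(alpha_m) + y * np.sin(alpha_m))
    return np.array([alpha_pred, r_pred])

def jacobian_line_measurement(alpha_m):
    """2x3 Jacobian H_i of the prediction with respect to (x, y, theta):
    row 1: d(alpha_pred)/d(x, y, theta), row 2: d(r_pred)/d(x, y, theta)."""
    return np.array([[0.0,               0.0,              -1.0],
                     [-np.cos(alpha_m), -np.sin(alpha_m),   0.0]])
```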
4. Matching
We are now equipped with a measurement $\boldsymbol{z}_{k,i}$ and a prediction $h_i(\hat{\boldsymbol{x}}_{k|k-1})$ based on all features stored in our map. We can now calculate the innovation

$$\tilde{\boldsymbol{y}}_{k,i} = \boldsymbol{z}_{k,i} - h_i(\hat{\boldsymbol{x}}_{k|k-1})$$

which is simply the difference between each feature that we can see and those that we predict from the map. The innovation is again a 2×1 matrix.
5. Estimation
We now have all the ingredients to perform the perception update step of the Kalman filter:

$$\hat{\boldsymbol{x}}_{k|k} = \hat{\boldsymbol{x}}_{k|k-1} + K_{k}\tilde{\boldsymbol{y}}_{k}$$
$$P_{k|k} = (I - K_{k}H_{k})P_{k|k-1}$$
It will provide us with an update of our position that fuses our odometry input and information that we can extract from features in the environment in a way that takes into account their variances. That is, if the variance of your previous position is high (because you have no idea where you are), but the variance of your measurement is low (maybe from a GPS or a symbol on the Ratslife wall), the Kalman filter will put more emphasis on your sensor. If your sensors are poor (maybe because you cannot tell different lines/walls apart), more emphasis will be on the odometry.
As the state vector is a 3×1 vector and the innovation a 2×1 vector, the Kalman gain must be a 3×2 matrix. This can also be seen when looking at the covariance matrix, which must come out as a 3×3 matrix, and knowing that the Jacobian of the observation function is a 2×3 matrix. We can now calculate the covariance of the innovation and the Kalman gain using

$$S_{k} = H_{k} P_{k|k-1} H_{k}^{T} + R_{k}$$
$$K_{k} = P_{k|k-1} H_{k}^{T} S_{k}^{-1}$$
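Putting steps 1–5 together, here is a hedged end-to-end sketch for fusing a single line observation. It reuses the placeholder functions sketched earlier in this lecture (`forward_kinematics`, `jacobian_Fx`, `jacobian_Fu`, `wheel_slip_covariance`, `predict_line_measurement`, and `jacobian_line_measurement`); the numbers are made up purely for illustration.

```python
import numpy as np

# Made-up inputs, purely for illustration.
x = np.array([0.5, 0.2, 0.1])          # previous pose estimate (x, y, theta)
P = np.diag([0.01, 0.01, 0.005])       # previous pose covariance
u = np.array([0.11, 0.09])             # wheel movements (ds_r, ds_l) [m]
alpha_m, r_m = 0.0, 2.0                # matched map line (global coordinates)
z = np.array([-0.12, 1.35])            # observed line (alpha, r)
R = np.diag([1e-4, 4e-4])              # observation covariance

# 1. Prediction update (action update).
x_pred = forward_kinematics(x, u)
Fx, Fu = jacobian_Fx(x, u), jacobian_Fu(x, u)
Q = wheel_slip_covariance(u)
P_pred = Fx @ P @ Fx.T + Fu @ Q @ Fu.T

# 2./3. Observation and measurement prediction.
z_pred = predict_line_measurement(x_pred, alpha_m, r_m)
H = jacobian_line_measurement(alpha_m)

# 4. Matching: the innovation.
y = z - z_pred

# 5. Estimation: innovation covariance, Kalman gain, and the fused estimate.
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
x_new = x_pred + K @ y
P_new = (np.eye(3) - K @ H) @ P_pred
```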
Take home lessons
- The Extended Kalman Filter is the optimal way to fuse observations of different random variables that are Gaussian distributed.
- The derivation of the Kalman filter equations is involved, but is based on minimizing the least-square error between prediction and real value.
- Possible random variables could be the estimate of your robot position from odometry and observations of static beacons with known location (but uncertain sensing) in the environment.
- In order to take advantage of the approach, you will need differentiable functions that relate measurements to state variables as well as an estimate of the covariance matrix of your sensors.