## Thanks for stopping by! Please note that this page will no longer be updated; all content has been moved into an open-source textbook. The book is available open-source on GitHub, compiled as a PDF, and in print on Amazon.com.

Simultaneous Localization and Mapping (SLAM) is the “chicken-and-egg” problem of localizing both a robot and the features on a map at the same time. Usually, a robot obtains an initial estimate of where it is using some onboard sensors (odometry, optical flow, etc.) and uses this estimate to localize features (walls, corners, graphical patterns) in the environment. As soon as the robot observes the same feature a second time, it can update the estimate of its own location. This is because the variance of an estimate based on two independent measurements will always be smaller than either of the variances of the individual measurements. As consecutive observations are not independent, but rather closely correlated, the refined estimate can then be propagated along the robot’s path. This is formalized in EKF-based SLAM. A more intuitive understanding is provided by a spring-mass analogy: each pose (mass) is constrained to its neighboring pose by a spring. The higher the uncertainty of the relative transformation between two poses (e.g., obtained using odometry), the weaker the spring. Every time the robot gains confidence in a relative pose, the corresponding spring is stiffened instead. Eventually, all poses are pulled into place. This approach is known as *Graph-Based SLAM*. This lecture has two parts:

- Part I: Formulating SLAM as a maximum-likelihood estimation (MLE) problem.
- Part II: Efficient numerical techniques to approximate solutions to the SLAM MLE.
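The spring-mass analogy can be made concrete with a tiny one-dimensional example (a minimal sketch with made-up numbers): three poses connected by two odometry “springs” and one loop-closure “spring”, where stiffness is the inverse variance of each measurement, and the best pose estimate is the weighted least-squares solution.

```python
import numpy as np

# Three 1-D poses x0, x1, x2; x0 is anchored at 0.
# Each constraint is a "spring": (i, j, measured offset x_j - x_i, variance).
# A smaller variance means a stiffer spring.
constraints = [
    (0, 1, 1.0, 1.00),   # odometry: x1 - x0 = 1.0
    (1, 2, 1.0, 1.00),   # odometry: x2 - x1 = 1.0
    (0, 2, 1.6, 0.25),   # loop closure: x2 - x0 = 1.6 (stiff spring)
]

# Build the weighted least-squares system for the unknowns [x1, x2].
A, b, w = [], [], []
for i, j, z, var in constraints:
    row = [0.0, 0.0]
    if i > 0:
        row[i - 1] = -1.0
    if j > 0:
        row[j - 1] = 1.0
    A.append(row)
    b.append(z)
    w.append(1.0 / var)  # spring stiffness = information

A, b, W = np.array(A), np.array(b), np.diag(w)
x1, x2 = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
print(x1, x2)  # ≈ 0.822 1.644
```

Pure odometry would place x2 at 2.0; the confident loop closure pulls it most of the way toward 1.6, dragging x1 along via the odometry springs.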

# Part I: SLAM as a Maximum-Likelihood Estimation Problem

Graph-based SLAM exploits the *temporal structure* of the problem. In practice, solving the SLAM problem requires two probabilistic models:

- A motion model, i.e., the probability $p(x_t \mid u_t, x_{t-1})$ of being at location $x_t$ given an odometry measurement $u_t$ and the previous location $x_{t-1}$.
- A sensor model, i.e., the probability $p(z_t \mid x_t, m)$ of making observation $z_t$ given that the robot is at location $x_t$ on map $m$.
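As a minimal sketch (1-D world, Gaussian noise; the function names and noise parameters are illustrative assumptions, not from the lecture), both models can be written as probability densities:

```python
import math

def gaussian(x, mean, var):
    """Evaluate a 1-D normal density N(x; mean, var)."""
    return math.exp(-0.5 * (x - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def motion_model(x_t, u_t, x_prev, odom_var=0.1):
    """p(x_t | u_t, x_{t-1}): the new pose is the previous pose plus the
    odometry reading, corrupted by Gaussian noise."""
    return gaussian(x_t, x_prev + u_t, odom_var)

def sensor_model(z_t, x_t, m, sensor_var=0.05):
    """p(z_t | x_t, m): the measured range to a feature at map position m,
    corrupted by Gaussian noise."""
    return gaussian(z_t, m - x_t, sensor_var)

# A pose consistent with the odometry reading is more likely than one that is not.
print(motion_model(1.0, u_t=1.0, x_prev=0.0) > motion_model(1.5, u_t=1.0, x_prev=0.0))  # True
```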

Recognizing that two observations correspond to the same feature is known as the *data association* problem. Graph-based SLAM does not solve this problem and will fail if features are confused.

The graph is optimized by maximizing the *joint* likelihood of all observations. As such, graph-based SLAM is a maximum-likelihood estimation problem.

Each edge in the graph corresponds to a *constraint*, for example a relative pose expressed as a pair of distance and angle. In the literature, e.g., [Gris2010], the measurement of the transformation between a node *i* and a node *j* is denoted $z_{ij}$. Its *expected* value is denoted $\hat{z}_{ij}(x_i, x_j)$. This value is expected, for example, based on a map of the environment that consists of previous observations.

To capture the uncertainty of each measurement, the inverse of its covariance matrix (the *information matrix*) is used, which we denote by $\Omega_{ij}$.

To maximize the *joint probability* $\prod_{ij} p(z_{ij})$ of all measurements over all edge pairings following the maximum-likelihood estimation framework, it is customary to express the PDF using the *log-likelihood*. By taking the natural logarithm on both sides of the PDF expression, the exponential function vanishes *and* the product becomes a sum, $\sum_{ij} l_{ij}$, where $l_{ij}$ is the log-likelihood for the measurement between nodes *i* and *j*:

$$l_{ij} \propto (z_{ij} - \hat{z}_{ij}(x_i, x_j))^T \,\Omega_{ij}\, (z_{ij} - \hat{z}_{ij}(x_i, x_j))$$

This expression is *directly* derived from the definition of the normal distribution, using the information matrix instead of the covariance matrix, and is rid of the exponential function by taking the logarithm on both sides. The term $e_{ij}(x_i, x_j) = z_{ij} - \hat{z}_{ij}(x_i, x_j)$ is the *error* between measurement and expected value. Note that the sum $\sum_{ij} e_{ij}^T \Omega_{ij} e_{ij}$ actually needs to be minimized, as the individual terms are technically the negative log-likelihood.
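The objective, a sum of squared errors weighted by the information matrices, can be evaluated for a toy pose graph. The sketch below simplifies poses to 2-D translations (no rotation); all numbers are made up:

```python
import numpy as np

def graph_error(poses, edges):
    """Sum of e_ij^T @ Omega_ij @ e_ij over all edges, i.e., the
    negative log-likelihood of the pose graph up to a constant."""
    total = 0.0
    for i, j, z_ij, omega_ij in edges:
        e_ij = z_ij - (poses[j] - poses[i])  # measured minus expected offset
        total += e_ij @ omega_ij @ e_ij
    return total

# Poses as 2-D positions (rotation omitted for brevity).
poses = np.array([[0.0, 0.0], [1.0, 0.1], [2.0, 0.0]])
# Each edge: (i, j, measured offset z_ij, information matrix Omega_ij).
edges = [
    (0, 1, np.array([1.0, 0.0]), np.eye(2)),
    (1, 2, np.array([1.0, 0.0]), np.eye(2)),
    (0, 2, np.array([2.0, 0.0]), 10 * np.eye(2)),  # confident loop closure
]
print(graph_error(poses, edges))  # ≈ 0.02: only the 0.1 m detour costs error
```

A minimizer would move the middle pose back toward the straight line, driving this sum toward zero.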

# Part II: Numerical Techniques for Graph-based SLAM

Stochastic gradient descent minimizes an error function that is defined over *n* points, but takes only a subset of these points into account when calculating the next best guess. As gradient descent works iteratively, the hope is that the algorithm eventually takes a large part of the constraints into account. For solving graph-based SLAM, a *stochastic* gradient descent algorithm would therefore not take into account *all* constraints available to the robot, but iteratively work on one constraint after the other. Here, constraints are observations on the mutual pose of nodes *i* and *j*. Optimizing these constraints requires moving both nodes *i* and *j* so that the error between where the robot thinks the nodes should be and what it actually sees is reduced. As this is a trade-off between multiple, possibly conflicting observations, the result will approximate a maximum-likelihood estimate.

When the constraint between two nodes *i* and *j* is optimized, not only *i* and *j*’s poses are updated: all nodes in between are moved a tiny bit as well, because each node is related to its neighbors by the *relative* translation and rotation from one node to the other. Thus, altering the relationship between two nodes automatically propagates to all nodes in the network. In [Ols06], the graph is essentially a chain of nodes whose edges consist of odometry measurements. This chain becomes a graph whenever observations (using any sensor) introduce additional constraints. Whenever such a “loop-closure” occurs, the resulting error is distributed over the entire trajectory that connects the two nodes. This is not always necessary; consider, for example, a robot driving a figure-8 pattern. If a loop-closure occurs in one half of the 8, the nodes in the other half are probably not involved. The tree parameterization of [Gris07] exploits this observation by distributing the error only along the part of the graph that is actually affected.
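The error-distribution step of [Ols06] can be illustrated for a 1-D chain of poses. This is a heavy simplification: the actual method handles rotations and scales each update by the constraint's information matrix and a decreasing learning rate.

```python
def sgd_step(poses, i, j, z_ij, learning_rate=0.5):
    """One stochastic gradient descent step on a single constraint:
    spread the residual of the i->j measurement linearly over the
    poses between i and j (1-D translations only)."""
    residual = z_ij - (poses[j] - poses[i])
    n = j - i  # number of links the correction is spread over
    for k in range(i + 1, j + 1):
        poses[k] += learning_rate * residual * (k - i) / n
    return poses

# Odometry chain drifts: the robot believes it has moved 4 m ...
poses = [0.0, 1.0, 2.0, 3.0, 4.0]
# ... but a loop closure says node 4 is only 3 m from node 0.
poses = sgd_step(poses, 0, 4, 3.0)
print(poses)  # → [0.0, 0.875, 1.75, 2.625, 3.5]
```

One step removes half of the 1 m loop-closure residual (with the illustrative learning rate of 0.5), bending the whole chain rather than just the endpoint; repeated steps over all constraints converge toward the trade-off described above.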

# References

[Gris2010] G. Grisetti, R. Kuemmerle, C. Stachniss and W. Burgard. A Tutorial on Graph-Based SLAM. IEEE Intelligent Transportation Systems Magazine, 2(4):31-43, 2010.

[Ols06] E. Olson, J. Leonard and S. Teller. Fast Iterative Alignment of Pose Graphs with Poor Initial Estimates. Proc. of ICRA, pp. 2262-2269, Orlando, FL, 2006.

[Gris07] G. Grisetti, C. Stachniss, S. Grzonka and W. Burgard. A Tree Parameterization for Efficiently Computing Maximum Likelihood Maps using Gradient Descent. Robotics: Science and Systems (RSS), Atlanta, GA, USA, 2007.
