Markov Localization (pp. 301-314)

Markov localization is one form of probabilistic map-based localization. The basic function of map-based localization is to provide the robot with its most likely location in a known map. Since all the sensors that allow a robot to interact with its environment are subject to error, we will never know with certainty exactly where the robot is. The best we can do is to say the robot is at position x with some probability. Last week we saw that as the robot continues to move through its environment, the error in its position continues to grow. Map-based localization provides a method for decreasing the error that accumulates as the robot navigates its environment. There are five things required for map-based localization:

1) bel(x_0) – the belief about the initial position. In some forms of map-based localization, e.g. Kalman filter localization, this initial belief is assumed known (a single Gaussian around the starting pose). In Markov localization, the initial position is unknown and the belief is a uniform distribution over all poses.

2) Map – the known map. For Markov localization to work, a known map is required. In the absence of a known map, the robot must create a map representation. We will work on methods for map creation next week. For now assume we have a known map.

3) Sensor data – there are two kinds of sensor data that we will work with: control (proprioceptive) and measurement (exteroceptive) data. Control data is given by some measurement of the internal state of the robot, e.g. the encoder values. Measurement data is data collected from the environment, e.g. by a distance sensor. For the purpose of the equations, control data will be denoted by u_t and measurement data by z_t.

4) Probabilistic motion model – this is the error propagation model from last week. Here we say that the belief about the robot’s position is a function of the previous position and the change in position as given by the control data, written p(x_t | u_t, x_{t-1}). You can think of this as we did last week: the new position, x_t, is given by x_t = x_{t-1} + u_t, where x_{t-1} is the previous position and u_t is the change in position, and both are subject to some error as described by the Jacobian and covariance matrices.

5) Probabilistic measurement model – this is the error model for the measurement sensors. You can think of this value as the probability of getting the sensor data z_t if the robot were at position x_t on map M, or p(z_t | x_t, M). For example, with the distance sensors on the E-Puck, if we get a reading of 306 then we are supposedly 0.03 meters from the object sensed. However, there is an error associated with that reading of 12.5%, so we could be as close as 0.026 meters or as far as 0.034 meters. Generally this error is represented with a Gaussian distribution in which the 0.03 meters would be the mean and the error rate would give the variance.
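As a concrete sketch of this measurement model, the snippet below (Python, not part of the lab code) computes a Gaussian likelihood for the E-Puck distance example above. Treating the 12.5% error rate as one standard deviation is an assumption made purely for illustration.

```python
import math

def measurement_likelihood(measured, expected, error_rate=0.125):
    # Gaussian p(z | x, M): likelihood of the measured distance given that
    # the map predicts the object is 'expected' meters away.
    # Assumption for illustration: the error rate equals one standard deviation.
    sigma = error_rate * expected
    return math.exp(-0.5 * ((measured - expected) / sigma) ** 2) / \
           (sigma * math.sqrt(2.0 * math.pi))

# A reading that matches the map prediction is more likely than one that is
# 12.5% off (near the edge of the sensor's error band).
p_match = measurement_likelihood(0.030, 0.030)   # reading agrees with map
p_off = measurement_likelihood(0.034, 0.030)     # reading near the error bound
```

Any likelihood of this shape works in the updates below; what matters is that readings consistent with the map get more weight than inconsistent ones.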

Similar to the error propagation from last week, the probability of the robot’s pose is a combination of the previous belief of where it was and the probability of its current belief gained from sensor data. Written using Bayes’ rule, the probability of the robot being at position x given the current measurement data, z, is p(x | z) = p(z | x) p(x) / p(z). Here p(z | x) is the probability of getting sensor data z assuming the robot were at x, p(x) is the prior probability that the robot is at x, and p(z) is a normalizing factor, determined by the requirement that the probability distribution integrate to 1. Usually 1/p(z) is denoted by η, yielding p(x | z) = η p(z | x) p(x). This final equation is similar to the error propagation from last week, in which the pose of the robot is a function of where it previously was, updated by the current information.
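The Bayes’ rule update can be sketched numerically. The four-cell world and likelihood values below are hypothetical, chosen only to show the mechanics of normalization:

```python
def bayes_update(prior, likelihood):
    # p(x | z) = eta * p(z | x) * p(x), with eta = 1 / p(z)
    unnormalized = [pz_x * px for pz_x, px in zip(likelihood, prior)]
    eta = 1.0 / sum(unnormalized)   # normalizer: the posterior must sum to 1
    return [eta * u for u in unnormalized]

# Hypothetical four-cell world: uniform prior; the sensor reading is twice
# as likely if the robot is in cell 2 as anywhere else.
prior = [0.25, 0.25, 0.25, 0.25]
likelihood = [0.2, 0.2, 0.4, 0.2]
posterior = bayes_update(prior, likelihood)
```

Note that η never has to be computed from a model of p(z): summing the unnormalized products and dividing is enough, which is exactly why the normalizer is usually left as a symbol.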

Explained step-by-step using the five ingredients above:

1) The first thing we need is a belief about the initial starting point, bel(x_0). For Markov localization this is a value between 0 and 1 representing the uniform probability distribution over all possible positions. For example, in the RatsLife maze with 100 cells, this value would be 1/100 = 0.01.

2) The second thing we need is a known map. Think of this as some pre-programmed map class similar to the RatMap.java class that we used in Lab #6, path-planning.

3) The third thing we need is sensor data from the encoders and from the distance sensors. Both of these can be retrieved using the Webots API functions.

4) The fourth thing needed is the probabilistic motion model, called the prediction update. This is the result of Lab #8, error propagation. In the notation of Markov localization, the belief of the current position based on the previous position and the error incurred moving through one time step is given by bel'(x_t) = Σ_{x_{t-1}} p(x_t | u_t, x_{t-1}) bel(x_{t-1}) (a sum over cells; an integral for continuous poses). Here bel'(x_t) is the belief of the position of the robot based on the control data (the probability of being at the new position), bel(x_{t-1}) is the belief of the robot’s previous position (which for the first time step would just be bel(x_0)), and p(x_t | u_t, x_{t-1}) is the motion error model from the error propagation lab.

5) The final thing needed is the probabilistic measurement model, called the perception update. This is very similar to #4, but uses the measurement data rather than the control data. In the book’s notation, bel(x_t) = η p(z_t | x_t, M) bel'(x_t), where bel(x_t) is the belief of the robot’s position after the measurement data has been incorporated, η is a normalizing factor to ensure the probabilities all sum to 1 (since we know for certain the robot is somewhere), p(z_t | x_t, M) is the probability distribution of the error inherent in the sensor being used, and bel'(x_t) is the value found in #4. This equation gives the final probability that the robot is in the cell in question.
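The prediction and perception updates of steps 4 and 5 can be sketched on a small one-dimensional grid. The five-cell world, the motion-noise values, and the sensor likelihoods below are made up for illustration (and written in Python rather than the lab’s MATLAB):

```python
def prediction_update(bel, motion_model):
    # bel'(x_t) = sum over x_{t-1} of p(x_t | u_t, x_{t-1}) * bel(x_{t-1}).
    # motion_model maps a displacement in cells to its probability.
    n = len(bel)
    new_bel = [0.0] * n
    for x_prev, b in enumerate(bel):
        for dx, p in motion_model.items():
            new_bel[(x_prev + dx) % n] += p * b   # wrap around (cyclic world)
    return new_bel

def perception_update(bel, likelihood):
    # bel(x_t) = eta * p(z_t | x_t, M) * bel'(x_t)
    unnormalized = [pz * b for pz, b in zip(likelihood, bel)]
    eta = 1.0 / sum(unnormalized)
    return [eta * u for u in unnormalized]

# Five-cell example: the robot starts in cell 0 and is commanded one cell to
# the right; it moves one cell with probability 0.8, stays put with 0.1, and
# overshoots by a cell with 0.1 (made-up noise values).
bel = [1.0, 0.0, 0.0, 0.0, 0.0]
bel = prediction_update(bel, {0: 0.1, 1: 0.8, 2: 0.1})
bel = perception_update(bel, [0.1, 0.6, 0.1, 0.1, 0.1])  # sensor favors cell 1
```

Notice the two updates pull in opposite directions: the prediction update spreads probability out (motion adds uncertainty), while the perception update concentrates it (measurements remove uncertainty).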

In summary, we need the value bel(x_0) from the initial uniform probability across all cells, the error resulting from the movement of the robot as given by the encoders (Lab #8), and the error inherent in the distance sensors. By multiplying these probabilities we get a decrease in the error of the robot’s position with each feature we encounter. There are two things to keep in mind when using Markov localization. The first is that Markov localization computes the probability across all possible robot positions, which is computationally heavy for large maps or maps with high resolution, although methods have been developed to address this. The second is that the algorithm relies on the Markov assumption, which states that a robot’s position is only affected by its current readings and a *single* previous position. In reality the robot is affected by all previous actions/positions, and therefore Markov localization remains an approximation, although a very useful one.
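Putting the five ingredients together, here is a minimal sketch of the full Markov localization loop. The cyclic ten-cell corridor with door features is a hypothetical world (not the RatsLife maze or the lab code), and the motion step is assumed exact to keep the sketch short:

```python
# Hypothetical cyclic 10-cell corridor with door features in cells 2, 5, 8.
# The door sensor is right 90% of the time; motion is assumed exact (one
# cell right per step), so the prediction update is just a shift.
N = 10
doors = {2, 5, 8}
bel = [1.0 / N] * N                     # ingredient 1: uniform initial belief

def perception_update(bel, saw_door):
    # weight each cell by p(z | x, M), then normalize
    weighted = [b * (0.9 if (x in doors) == saw_door else 0.1)
                for x, b in enumerate(bel)]
    eta = 1.0 / sum(weighted)
    return [eta * w for w in weighted]

def prediction_update(bel):
    # exact one-cell move to the right: shift the belief by one cell
    return bel[-1:] + bel[:-1]

truth = 2                               # the robot actually starts at cell 2
for _ in range(8):
    bel = perception_update(bel, truth in doors)
    bel = prediction_update(bel)
    truth = (truth + 1) % N

# After passing a few distinctive features, the belief concentrates on the
# robot's true cell even though it started out completely uniform.
```

The belief stays multimodal while several starting cells are consistent with the feature sequence seen so far, and collapses onto one cell once the robot has passed enough features to disambiguate them.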

Markov Localization in Action:

The goal of this lab is to see Markov Localization and Particle Filter techniques in action. As the code is involved, this exercise will be carried out with MATLAB. MATLAB is available on the CSEL cluster and can be started by typing *matlab* at the command line.

Download the MATLAB files for the lab and place all the files in the same directory. To ensure that MATLAB finds the files, open the terminal and change to the directory in which you downloaded the files, then launch MATLAB by typing *matlab*. Alternately you can change to the correct directory from within MATLAB.

To start type

*load PF_WS*

to load all the working variables. Then to run the program type:

*PF_Loc*

If you want to watch the program’s plots step by step, then you can type

*program_step = true*

Or, if you want to view path lines on the plot, type

*path_plot = true*

Thanks to Matthew Kirchner from Introduction to Robotics Fall 2010 for providing this code.

Exercise:

This lab does not have a tangible deliverable. Rather your goal is to understand how the program works by watching the program execute and by examining the code. Some questions to think about:

1) What belief is a “particle” representing in this code?

2) What is the compute_weights.m function doing?

3) The default program simulates a robot with an ‘obstacle detector’ sensor that detects nearby objects with 60% accuracy. That means 40% of the time the sensor gives an incorrect reading. How can we still converge to the actual robot location using a sensor that is incorrect 40% of the time?

4) Now type *MAP = MAP_Plot* in MATLAB and rerun PF_Loc. This now gives a sensor that is 100% accurate. How does this change affect the rate of convergence?
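As a hint toward question 3, the sketch below (Python, not the lab’s MATLAB code) shows why a 60%-accurate binary sensor still localizes a robot: each reading multiplies the true cell’s weight by 0.6 and every other cell’s by 0.4 (or the reverse for an incorrect reading), so the small per-reading edge compounds over many readings. The cell numbers and the fixed reading sequence are made up for illustration.

```python
# Hypothetical world: a stationary robot in one of 20 cells; only cell 7 is
# next to an obstacle. The binary obstacle detector is right 60% of the time.
# A fixed sequence stands in for what such a sensor would typically report
# over 200 samples: 120 correct 'obstacle' readings and 80 incorrect
# 'no obstacle' readings.
N = 20
true_cell = 7
bel = [1.0 / N] * N
readings = [True] * 120 + [False] * 80      # True = obstacle detected

for z in readings:
    for x in range(N):
        obstacle_here = (x == true_cell)
        # p(z | x): 0.6 if the reading matches what the map predicts, else 0.4
        bel[x] *= 0.6 if (z == obstacle_here) else 0.4
    eta = 1.0 / sum(bel)                    # renormalize after each reading
    bel = [eta * b for b in bel]

# Each reading gives the true cell only a 60/40 edge, but the factors
# multiply: over these 200 readings the net ratio is (0.6/0.4)**40.
```

The same compounding is what makes the particle filter converge in PF_Loc despite the noisy sensor, which is also why a 100%-accurate sensor (question 4) converges so much faster.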