Understand basic low-level image processing including convolution-based filters, thresholding, and simple pattern recognition.  The code provided in this lab will serve as the basis for your own explorations with the camera in the RatsLife competition.


Download the files for lab6.  The world and controller should work out of the box for this lab.  Open Webots and then open the world file advanced_pattern_recognition_test.wbt from the lab6 worlds directory.  This should bring up the world, the scene tree, and the source code for the controller.  If for some reason this does not work, you can use the wizard to create a new project directory and manually place the files in this new directory.  This world allows you to control the E-Puck using the arrow keys on the keyboard and provides a display for showing processed images.  The robot is programmed to detect patterns that are coded on the walls of the environment.  This capability will become important for providing global localization to your robot in the RatsLife competition.

The activity of this lab is a little different from that of other labs.  We are providing you with all the code; your task is to experiment with it, gain an understanding of how it works, and improve its functionality.  Since you will be altering the original code, I would highly recommend that you make a copy of the original controller and put it on the Desktop, or somewhere else where you won't touch it, in case you need the original controller back.

Convolution-based Smoothing:

As the images provided by a digital camera are quite noisy, one of the first steps in almost any image processing algorithm is to perform low-pass filtering, or smoothing, of the image by averaging over neighboring pixel values.  This can be conveniently implemented by a convolution that slides a Gaussian kernel over the image.  This is similar to the convolution we discussed in the lecture using the sombrero function.  The Gaussian kernel can be defined as a 3×3 matrix of values in the range 0-255 at the beginning of the controller (variable gaussian).  Fill in an appropriate definition for the Gaussian kernel (hint: the Gaussian distribution is high around its mean and falls off symmetrically on both sides) and watch the result in the window at the bottom right after compilation.  Notice that the implementation of the convolution calculates the normalization factor for the kernel.  Experiment with the Gaussian kernel until you understand the implementation of the convolution operation in the code.  Think about why the convolution operation employs four for-loops and what each one does.
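As a hedged illustration of the four-loop structure described above (the class, method, and variable names here are hypothetical, not the lab controller's), a minimal normalized 3×3 convolution could look like this:

```java
// Minimal convolution sketch. The outer two loops walk over the image
// pixels; the inner two walk over the kernel entries around each pixel.
public class ConvolutionDemo {
    // Convolve a grayscale image with a 3x3 kernel, dividing by the
    // kernel sum (the normalization factor) so brightness is preserved.
    static int[][] convolve(int[][] image, int[][] kernel) {
        int h = image.length, w = image[0].length;
        int norm = 0;
        for (int[] row : kernel)
            for (int v : row) norm += v;
        if (norm == 0) norm = 1; // e.g. for edge-detection kernels

        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++) {          // loop 1: image rows
            for (int x = 1; x < w - 1; x++) {      // loop 2: image columns
                int sum = 0;
                for (int ky = -1; ky <= 1; ky++) {      // loop 3: kernel rows
                    for (int kx = -1; kx <= 1; kx++) {  // loop 4: kernel cols
                        sum += image[y + ky][x + kx] * kernel[ky + 1][kx + 1];
                    }
                }
                out[y][x] = sum / norm;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // A common 3x3 Gaussian approximation: high at the center,
        // falling off symmetrically (entries sum to 16).
        int[][] gaussian = { {1, 2, 1}, {2, 4, 2}, {1, 2, 1} };
        int[][] image = new int[5][5];
        image[2][2] = 160; // a single bright pixel on a black background
        int[][] smoothed = convolve(image, gaussian);
        // The bright pixel is spread over its neighborhood: the center
        // keeps the largest share, the neighbors receive smaller shares.
        System.out.println(smoothed[2][2] + " " + smoothed[2][1]);
    }
}
```

Note how a single bright pixel gets smeared across its neighborhood; on a noisy camera image the same averaging suppresses isolated noise pixels.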


Thresholding:

Uncomment the appropriate writeImageToDisplay() call in the run() method so that you can see what the threshold operation does.  Play with the threshold by changing the variable thresholdOne and notice the effect this has on the processed image.
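The thresholding step itself is simple; the following sketch (the class and method names are hypothetical, only the variable name thresholdOne is taken from the lab's controller) shows the basic idea of mapping each pixel to black or white:

```java
// Sketch of a binary threshold: pixels above the threshold become
// white (255), everything else becomes black (0).
public class ThresholdDemo {
    static int thresholdOne = 100; // variable name as in the lab controller

    static int[] threshold(int[] pixels, int t) {
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = (pixels[i] > t) ? 255 : 0;
        }
        return out;
    }

    public static void main(String[] args) {
        int[] row = {12, 90, 101, 200, 255};
        int[] binary = threshold(row, thresholdOne);
        // Raising thresholdOne suppresses dim features; lowering it
        // lets more noise through as white pixels.
        for (int v : binary) System.out.print(v + " ");
        // prints: 0 0 255 255 255
    }
}
```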

Convolution-based Edge Detection:

A simple edge detection mechanism is to calculate the difference between a pixel and its neighbors.  If this difference is high, the pixel most likely lies on some kind of edge.  As this is a linear operation, it too can be implemented by a convolution.  The Sobel filter provides a simple implementation, with a kernel consisting only of 1, 0, and -1 entries (the center entries weighted by 2).  Activate the appropriate writeImageToDisplay() statement to see the output of the filter after edge detection.  Experiment with the edge detection kernel until you understand how it works.  What do you need to change in the code to display just the outlines of the shape?  Just the horizontal lines?  Just the vertical lines?  Notice that in lines #124-129 the image is copied to another variable.  What happens when you perform the convolution on the original image instead of the copy?
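To make the horizontal/vertical distinction concrete, here is a hedged sketch (class and method names are hypothetical; the kernels are the standard Sobel pair, which may differ from the controller's exact values). The x-kernel responds to vertical edges and the y-kernel to horizontal ones:

```java
// Sketch of Sobel edge detection with the standard kernel pair.
public class SobelDemo {
    static final int[][] SOBEL_X = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
    static final int[][] SOBEL_Y = { {-1, -2, -1}, {0, 0, 0}, {1, 2, 1} };

    // Apply a kernel and return the absolute response. No normalization:
    // the kernel entries sum to zero, so flat regions map to zero.
    static int[][] apply(int[][] img, int[][] k) {
        int h = img.length, w = img[0].length;
        int[][] out = new int[h][w];
        for (int y = 1; y < h - 1; y++)
            for (int x = 1; x < w - 1; x++) {
                int s = 0;
                for (int ky = -1; ky <= 1; ky++)
                    for (int kx = -1; kx <= 1; kx++)
                        s += img[y + ky][x + kx] * k[ky + 1][kx + 1];
                out[y][x] = Math.abs(s);
            }
        return out;
    }

    public static void main(String[] args) {
        // A vertical edge: left columns dark, right columns bright.
        int[][] img = new int[5][5];
        for (int y = 0; y < 5; y++)
            for (int x = 3; x < 5; x++) img[y][x] = 200;
        int[][] gx = apply(img, SOBEL_X); // strong response at the edge
        int[][] gy = apply(img, SOBEL_Y); // zero: there is no horizontal edge
        System.out.println(gx[2][2] + " " + gy[2][2]);
    }
}
```

This also hints at the answer to the copy question above: the convolution reads a 3×3 neighborhood around each pixel, so if you write results back into the image you are still reading from, later pixels see already-filtered neighbors and the output is corrupted.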

Tag Recognition:

The noise reduction, thresholding, and edge detection operations that you performed in the previous sections prepare the image data for recognizing the patterns found on the walls of the example world.  Track down the function that actually does the pattern recognition.  Once you identify and understand this function, think about the specific steps the program uses to distinguish between the different patterns.
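The actual recognizer lives in the provided controller; purely as a hedged illustration of the kind of feature such a function might compute (all names here are hypothetical), one simple way to tell binary wall tags apart is to count the white segments along a scanline of the thresholded image:

```java
// Illustration only: distinguishing tags by counting contiguous runs
// of white (255) pixels in one row of a thresholded image.
public class TagSketch {
    static int countWhiteRuns(int[] row) {
        int runs = 0;
        boolean inRun = false;
        for (int v : row) {
            if (v == 255 && !inRun) { runs++; inRun = true; }
            else if (v != 255) inRun = false;
        }
        return runs;
    }

    public static void main(String[] args) {
        int[] oneBar  = {0, 255, 255, 255, 0}; // a single wide stripe
        int[] twoBars = {0, 255, 0, 255, 0};   // two narrow stripes
        System.out.println(countWhiteRuns(oneBar));  // 1
        System.out.println(countWhiteRuns(twoBars)); // 2
    }
}
```

Whatever features the controller's function actually uses, comparing your mental model against a toy computation like this makes it easier to see why the earlier smoothing and thresholding steps matter: noise pixels would otherwise create spurious runs.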


1)  Make the shape detection process more robust by adjusting the Gaussian kernel parameters and experimenting with the thresholds.

2)  There are some shapes along the walls that are not currently part of the shape detection algorithm.  Extend the algorithm to recognize one of these additional shapes.

