The goals of this lab are to

  • Understand the basics of point cloud sensor data
  • Familiarize yourself with the Point Cloud Library
  • Work with sensor noise and variance
  • Detect the pose (position and rotation) of cublets/blocks

Deliverables:

  • C++ code that can detect blocks on a table and their relative pose
  • A plot comparing one or more important PCL function variables (iterations, threshold, etc.) against the reliability of the block’s rotation (the average change of detected rotation between several point cloud input snapshots)
  • 2 weeks
Deliverables are due prior to the next lab. This lab requires coding, so start early!


  1. Get the sample data
    • Download the cublets.bag (141 MB) sample rosbag file of recorded Asus Xtion Pro data. It contains 90 seconds of recorded data.
    • Start roscore in a terminal
    • To run the downloaded file:
      rosbag play cublets.bag
    • Nothing will appear to happen until you run a node that listens to the published messages. You can read more about rosbag if you are curious or lost.
  2. (optional) Visualize the data using Rviz. I think this lab will be slightly easier with Rviz, but it is not required because the PCL visualizer is already set up for you.
    • Download the rviz config file
    • Launch rviz
      rosrun rviz rviz
    • Open the config file that is already set up to work easily with point clouds and the code template (to be explained)
      File > Open Config
    • While running the rosbag play command above, you should see the point cloud visualized in the “Input” display
  3. Install PCL
    • sudo apt-get install ros-groovy-pcl-ros ros-groovy-pcl
  4. Download and compile this lab’s template package:
    • Go to your ros workspace, e.g.
      cd ~/catkin_ws/src
    • Download from github the lab template:
      git clone git://
    • Move back to your ros workspace
      cd ~/catkin_ws/
    • Make the package:
      catkin_make
    • Ignore the “#warning This file includes at least one deprecated or antiquated header” message – that is a PCL problem as far as I can tell. Let me know if you figure out how to suppress or fix this!
  5. Test the template package to see if everything works ok together…
    • First run roscore if you haven’t already
    • Now run the node you just compiled
      rosrun point_cloud_lab point_cloud_lab
    • Re-start playing the rosbag file:
      rosbag play cublets.bag
    • You should now see the recorded image shown in a pop-up window like so:
      Screenshot from 2013-03-01 00:06:28
    • Note where the coordinate frame is… the camera’s position. This means that the x,y,z position of every pixel is in reference to a point high above the tabletop plane. Calculations and filtering could be simplified if we set the coordinate frame’s x,y position to be the same as the plane of the table.
  6. Move the coordinate frame to the base of the robot (the big black square at the bottom of the image):
    • First, see what is in the bag file:
      rosbag info cublets.bag
    • Notice there are two message types… sensor_msgs/PointCloud2 and tf/tfMessage. This is because when the data was recorded, the transform frames of the whole ROS system were also specified to be recorded, so that later the relative pose of the camera to the base of the robot could be used.
    • Edit the src/point_cloud_lab.cpp file using the editor of your choice
    • Find the line that says
    • Before this line, add the following code:
      // -------------------------------------------------------------------------------------------------------
      // Get the transform between camera_link and base_link
      if( !has_transform_ )
      {
        try
        {
          tf_listener_.lookupTransform(base_link_, cloud->header.frame_id, ros::Time(0), transform_);
          has_transform_ = true; // only look up once, since the camera is fixed to base_link permanently
        }
        catch (tf::TransformException ex)
        {
          ROS_INFO("Waiting on TF cache to build: %s", ex.what());
          return;
        }
      }

      // -------------------------------------------------------------------------------------------------------
      // Make a new point cloud that is transformed into our working frame
      pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud_transformed(new pcl::PointCloud<pcl::PointXYZRGB>);
      pcl_ros::transformPointCloud(*cloud, *cloud_transformed, transform_);
    • Also, edit the aforementioned line to now say:
    • Recompile the code (catkin_make) and re-run everything as in step 5. You should now see the coordinate frame moved to the base of the table like so:
      Screenshot from 2013-03-01 00:07:15
    • What have we just done? We have used a TF listener to get a transform from the rosbag being played back. The transform we want is from the point cloud’s frame id to the base_link frame id. Once we get the transform, we cache it to save processing time and avoid timing/caching issues, since the camera is rigidly attached to the base and the transform never changes.
  7. Read these tutorials/documentation to learn PCL and prepare you for the next step:
  8. You are now ready to use PCL’s extensive and powerful libraries to detect the blocks on top of the table. To complete this assignment, you need to program your own solution to detecting the middle point of as many blocks as possible, as well as the block’s rotation with respect to the x/y plane of the table. Use the previously listed tutorials and PCL documentation to complete this.
    • I suggest you start with the first 3 tutorial code examples in step 7
    • Using rviz is nice because you can publish different parts of the point cloud to separate “displays” – see note in code
    • To speed up load time of code in testing, you could hard code the transform_ data instead of reading it from the bagfile each time. To hard code it, you would first need to print out its detected values… see API doc
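One common approach (not the only valid one) is: downsample the cloud, fit and remove the table plane with RANSAC, then cluster the remaining points into individual blocks. The sketch below uses standard PCL calls and assumes a cloud already transformed into the base frame; the function name detectBlocks and all the numeric parameters are assumptions you should tune, not values from the lab template:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/sample_consensus/method_types.h>
#include <pcl/sample_consensus/model_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>

typedef pcl::PointXYZRGB PointT;

// Hypothetical helper: split a transformed cloud into per-block clusters.
void detectBlocks(const pcl::PointCloud<PointT>::Ptr& cloud_transformed)
{
  // 1. Downsample to speed everything up (leaf size is a tunable guess).
  pcl::PointCloud<PointT>::Ptr cloud(new pcl::PointCloud<PointT>);
  pcl::VoxelGrid<PointT> voxel;
  voxel.setInputCloud(cloud_transformed);
  voxel.setLeafSize(0.005f, 0.005f, 0.005f);
  voxel.filter(*cloud);

  // 2. Fit the dominant plane (the tabletop) with RANSAC.
  pcl::SACSegmentation<PointT> seg;
  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01); // meters; affects reliability -- worth plotting
  seg.setMaxIterations(100);      // ditto
  seg.setInputCloud(cloud);
  seg.segment(*inliers, *coefficients);

  // 3. Remove the plane, keeping only what sits on top of it (the blocks).
  pcl::PointCloud<PointT>::Ptr objects(new pcl::PointCloud<PointT>);
  pcl::ExtractIndices<PointT> extract;
  extract.setInputCloud(cloud);
  extract.setIndices(inliers);
  extract.setNegative(true); // keep everything that is NOT the plane
  extract.filter(*objects);

  // 4. Cluster the remaining points into individual blocks.
  pcl::search::KdTree<PointT>::Ptr tree(new pcl::search::KdTree<PointT>);
  std::vector<pcl::PointIndices> clusters;
  pcl::EuclideanClusterExtraction<PointT> ec;
  ec.setClusterTolerance(0.02); // 2 cm gap between blocks -- a guess
  ec.setMinClusterSize(50);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);
  ec.extract(clusters);

  // Each entry of `clusters` is one candidate block: its centroid gives the
  // position, and fitting the cluster's dominant edge direction gives the
  // rotation with respect to the table's x/y plane.
}
```

Note that setDistanceThreshold and setMaxIterations are exactly the kind of PCL function variables the deliverable plot asks you to vary against rotation reliability.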
  9. Once you have PCL consistently identifying the blocks, collect some data of the variance of the block’s position and rotation and plot it as described in the deliverables above.
