Robotics celebrated its 50th birthday in 2011, dating back to the first commercial robot, the Unimate, deployed in 1961. In a “Tonight Show” appearance from that era, the robot did amazing things: it opened a bottle of beer, poured it, putted a golf ball into the hole, and even conducted an orchestra. It did everything we expect a good robot to do: it was dexterous, accurate, and even creative. Fifty years have passed since that Tonight Show appearance – so what should today’s robots look like, and what should they be able to do?

Interestingly, we have only recently learned to do all the things demonstrated by the Unimate autonomously. The Unimate indeed did what was shown on TV, but all motions had been pre-programmed and the environment had been carefully staged. Only the advent of cheap and powerful sensors and computation has recently enabled robots to detect an object by themselves, plan a motion to it, and grasp it. Yet robotics is still far from performing these tasks with human-like performance. Environments still need to be heavily staged for a robot to operate, and some objects are easier to work with than others. For example, “Rollin’ Justin” demonstrates skills very similar to those shown by the Unimate, but it is able to perceive and reason about its environment:

This works as follows: the robot perceives its environment with a sensor that generates 3D range data, such as stereo vision, a sweeping laser scanner, or an Xbox Kinect. From this data it creates a 3D representation of its environment relative to its own coordinate system, and then plans its motion within this representation. This can be seen in the inset at the top right while the robot manipulates items on the table. Detecting objects in the environment, matching them to their 3D models, and finding feasible grasp points are still major research challenges.
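To make the first step concrete, here is a minimal sketch of how range data expressed in the sensor’s frame can be re-expressed in the robot’s own coordinate system using a homogeneous transform. The sensor mounting pose and the example points are purely illustrative assumptions, not values from any particular robot:

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical sensor-to-base calibration: sensor mounted 1.2 m above the base,
# pitched down by 30 degrees toward the table.
pitch = np.deg2rad(30)
R_base_sensor = np.array([[1, 0, 0],
                          [0, np.cos(pitch), -np.sin(pitch)],
                          [0, np.sin(pitch),  np.cos(pitch)]])
T_base_sensor = make_transform(R_base_sensor, np.array([0.0, 0.0, 1.2]))

# Example points as a range sensor (Kinect, stereo, laser) might return them, in the sensor frame.
points_sensor = np.array([[0.1,  0.0, 0.8],
                          [0.2, -0.1, 0.9]])

# Convert to homogeneous coordinates and express the points in the robot's base frame,
# which is the representation the motion planner operates in.
homogeneous = np.hstack([points_sensor, np.ones((len(points_sensor), 1))])
points_base = (T_base_sensor @ homogeneous.T).T[:, :3]
print(points_base)
```

In a real system the same role is played by the robot’s transform tree (e.g., tf in ROS); the sketch only shows the underlying geometry.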

The “sense-plan-act” paradigm, that is, obtaining a 3D model of the world, planning within it, and then executing the planned actions, is not the final answer, however. Do humans really do this when performing complex manipulations? Hardly. Instead, we rely on visual feedback and the sense of touch while performing a grasp. (Try grasping an object after putting your hand on a block of ice, temporarily numbing your sense of touch.)
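To make the contrast concrete, here is a toy sketch of such a feedback loop: rather than sensing once and executing a precomputed plan blindly, the controller re-measures the error between gripper and object every cycle and corrects its motion. The pixel coordinates, gain, and error measurement are all illustrative assumptions, not part of any real servoing stack:

```python
import numpy as np

def measure_error(target_px, gripper_px):
    """Image-space error between the detected object and the gripper.
    A stand-in for a real feature detector; in a closed loop it is re-evaluated every cycle."""
    return target_px - gripper_px

# Toy set-up: positions in pixel coordinates, and a proportional gain.
target_px = np.array([320.0, 240.0])
gripper_px = np.array([100.0, 400.0])
gain = 0.3

# Unlike "sense once, plan, execute", the loop keeps sensing and correcting.
for step in range(20):
    error = measure_error(target_px, gripper_px)
    if np.linalg.norm(error) < 1.0:
        break
    gripper_px += gain * error  # command a small motion toward the target
print(step, gripper_px)
```

Visual servoing, covered later in the class, follows exactly this pattern, with the error defined on image features and mapped to joint or Cartesian velocities.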

This class builds on “Introduction to Robotics” and introduces current solutions to these problems. We will learn advanced concepts in robot kinematics, such as car-like steering and inverse kinematics of high-DOF manipulators, as well as feature recognition, RGB-D perception, visual servoing, and SLAM. We will also learn to use state-of-the-art tools that allow you to model, visualize, and control your robot. You will work with a 7-DOF manipulator arm both in simulation and on real hardware. The development environment consists of Ubuntu Linux, ROS, and OpenCV, which are available as a VirtualBox image. The real manipulator arm is available for experimentation in the lab. We will perform laboratory experiments – most of which can be prepared and executed in simulation – that introduce the tools and methods we use and guide you toward an independent project.
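As a flavor of what working in this environment looks like, here is a minimal rospy node that publishes a joint configuration for a 7-DOF arm. The topic name, joint names, and whether the lab arm’s driver accepts a JointState message are assumptions for illustration; the actual interfaces will be introduced in the laboratory exercises:

```python
#!/usr/bin/env python
# Minimal rospy node: repeatedly publishes a joint configuration for a 7-DOF arm.
# The topic '/arm/joint_command' and the joint names are hypothetical placeholders.
import rospy
from sensor_msgs.msg import JointState

def main():
    rospy.init_node('joint_command_demo')
    pub = rospy.Publisher('/arm/joint_command', JointState, queue_size=10)
    rate = rospy.Rate(10)  # publish at 10 Hz

    msg = JointState()
    msg.name = ['joint_%d' % i for i in range(1, 8)]        # hypothetical joint names
    msg.position = [0.0, 0.5, 0.0, -1.2, 0.0, 0.7, 0.0]     # radians, an arbitrary pose

    while not rospy.is_shutdown():
        msg.header.stamp = rospy.Time.now()
        pub.publish(msg)
        rate.sleep()

if __name__ == '__main__':
    main()
```

The same node runs unchanged against the simulator and the real hardware, which is what makes preparing the experiments in simulation practical.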

The class consists of homework assignments (experiments) and a team project. The final deliverable for the team project is a 3-page report for undergraduates (5 pages for graduate students). Your team project needs to articulate a hypothesis (what do you want to show?) that is validated experimentally on physical hardware. To prepare you for this, each laboratory exercise will require analysis of experimental data rather than submission of code.
