Robots are systems that combine sensing, actuation and computation to perform intelligent functions in their environment. Designing robots therefore requires a thorough understanding of their mechanics, their sensors, the uncertainty that arises from interaction with the real world, and algorithms for decision-making and planning under this uncertainty. The following paragraphs provide a brief summary of the lecture’s content.

Actuation

Locomotion and Manipulation: We started with understanding how a robot can locomote and manipulate its environment. These two activities are closely related: an arm can double as a leg and vice versa. The positions (and speeds) of the robot’s actuators are usually defined with respect to some fixed coordinate system on the robot. The position of this coordinate system is then given in a world coordinate system with an arbitrary origin, e.g., in the corner of a room, or coincident with the Earth’s geographic coordinate system when using GPS. These coordinate systems are known as frames of reference. There can be many frames of reference on a single robot and they can be arbitrarily chosen by the designer. The degrees of freedom of a robot determine along how many orthogonal axes of a coordinate system it can translate and around how many axes it can rotate. For a wheeled robot, the degrees of freedom of the mechanism are given by the type of wheels, e.g., spherical, Swedish or standard, and their configuration. Each configuration has advantages and drawbacks in terms of controllability and maneuverability. Important configurations are the differential drive using two standard wheels, the tricycle using a steerable standard wheel and two fixed standard wheels, and fully omnidirectional robots.
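As a small illustration of frames of reference, the sketch below (a minimal Python example, not part of the lecture material) transforms a point measured in the robot’s own frame, such as a sensor reading, into the world frame, given the pose (x, y, \theta) of the robot frame in world coordinates. The function name and the numeric values are made up for the example.

```python
import numpy as np

def robot_to_world(x, y, theta, p_robot):
    """Transform a point given in the robot's frame of reference into the world frame,
    where (x, y, theta) is the pose of the robot frame in world coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])                 # rotation of the robot frame w.r.t. the world frame
    return R @ np.asarray(p_robot) + np.array([x, y])

# a sensor reading 2 m straight ahead of a robot at (1, 3) facing 90 degrees
print(robot_to_world(1.0, 3.0, np.pi / 2, [2.0, 0.0]))   # -> approximately [1, 5]
```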

Forward and Inverse Kinematics: The kinematics describe the relationship between the actions of the robot’s actuators, such as its wheels, and the robot’s position in the world. A function that expresses this position, say in terms of x, y, and \theta (the robot’s orientation), is known as the forward kinematics. Knowing the forward kinematics of a mechanism allows us to predict how a robot moves given the input to its actuators. Except for fully holonomic mechanisms, the forward kinematics can only relate actuator speed, e.g., wheel speed, to the speed of the robot’s coordinate frame, but not the position of an actuator to the position of the robot in the world. The reason is as follows: when executing a closed trajectory in a robot’s configuration space, i.e., a trajectory that ends at the same position as where it started, the robot’s position in the world might be completely different if the robot is non-holonomic. A simple example is a car. After moving the car around, returning the wheels to their initial rotational position will not necessarily drive the car back to its origin. This is different for a manipulator arm: the position of its end-effector is always uniquely defined by the positions of its joints. Inverting the forward kinematics, i.e., calculating the required actuator velocities given a desired velocity of the robot coordinate frame, is known as inverse kinematics. The main tools required for forward and inverse kinematics are geometry and trigonometry. How much a wheel moves forward given its rotational speed can be calculated using the arc length, and how much each wheel contributes to the speed of the robot’s coordinate system can be calculated using the sine and cosine of (carefully chosen) triangles. For mechanisms that involve multiple actuators, speeds are additive: you can calculate the contribution of each wheel separately and add them up. As the forward kinematics involve multiple input variables (actuator velocities) and output variables (robot velocity), they can be conveniently written in matrix notation. Inverting these matrices is non-trivial except for simple mechanisms.
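As a concrete sketch, the differential drive mentioned above has particularly simple forward kinematics: the forward speed is the average of the two wheel rim speeds, and the rotation rate is their difference divided by the axle length. The snippet below maps the two wheel speeds to the velocity of the robot’s coordinate frame in the world; the wheel radius r and axle length L are arbitrary example values.

```python
import numpy as np

def diff_drive_forward_kinematics(phi_dot_r, phi_dot_l, theta, r=0.05, L=0.3):
    """Forward kinematics of a differential-drive robot: map the two wheel speeds
    (rad/s) to the velocity of the robot frame expressed in world coordinates.
    r: wheel radius [m], L: distance between the two wheels [m] (example values)."""
    v = r * (phi_dot_r + phi_dot_l) / 2.0    # forward speed along the robot's heading
    omega = r * (phi_dot_r - phi_dot_l) / L  # rotation rate of the robot frame
    return np.array([v * np.cos(theta),      # x_dot in the world frame
                     v * np.sin(theta),      # y_dot in the world frame
                     omega])                 # theta_dot

# both wheels turning forward at the same speed -> pure translation along the heading
print(diff_drive_forward_kinematics(2.0, 2.0, theta=0.0))
```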

Path-Planning: Finding the shortest path for a robot to move along is a central problem of mobile robotics. It can be addressed using standard algorithms such as Dijkstra’s algorithm, A*, or Rapidly-Exploring Random Trees (RRT), among others. In order to avoid having to calculate collisions of the robot’s body with the world, planning is done in the configuration space, in which each obstacle is inflated by the maximum radius of the robot. This reduces the planning problem to finding the shortest path for a point. Dijkstra’s algorithm and A* require discretizing the environment into a graph in which nodes are subsets of the free space and edges are navigable connections between them (in its simplest form, such a graph can simply be a grid). RRT, in contrast, can operate on continuous representations by sampling random points in the environment and connecting them to a search tree until a path to the goal has been found. Interestingly, the same planning algorithms that are used for finding paths for a robot in the plane can be used to find paths for a manipulator arm. In this case the configuration space is spanned by the joint positions of the arm.
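A minimal sketch of grid-based planning is shown below: A* over a 4-connected occupancy grid with a Manhattan-distance heuristic. The grid encoding (0 = free, 1 = obstacle) and the function name are assumptions made for this example.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (list of lists, 0 = free, 1 = obstacle)."""
    def h(a, b):  # Manhattan distance: admissible heuristic on a 4-connected grid
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start, goal), 0, start, None)]   # (f, g, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:          # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct path by walking back through parents
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc), goal), ng, (nr, nc), node))
    return None  # no path found

# tiny example environment
grid = [[0, 0, 0, 1],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```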

Sensing

Sensors: A robot that can only move is just an automaton. In order to make intelligent decisions it requires sensors to understand its environment. In mobile robotics, sensors are mainly used to detect and identify objects in the environment and to localize the robot. There is a wide range of physical principles that can be exploited to construct capable sensors, including mechanical (acceleration, speed), optical (cameras, encoders), and acoustic (distance sensors, microphones) ones. For example, range can be measured by observing the time of flight, signal intensity, or phase shift of light or sound. The dominant sensors in mobile robotics are wheel encoders, range sensors and vision. The difference between the minimum and maximum value a sensor can measure is known as its range. The smallest difference between two distinct sensor readings is known as its resolution. How often a sensor reports a reading per second is known as its bandwidth. How reliably a sensor reproduces the same output for a static input is known as its precision. Finally, the expected deviation between the true and the measured value is known as its accuracy. The key problem with all sensors is that they are subject to uncertainty, and their measurements are essentially random variables.
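To make the distinction between precision and accuracy concrete, the sketch below estimates both from hypothetical repeated readings of a target at a known distance; the numbers are invented, and the two estimators used are only one reasonable way to quantify these properties.

```python
import numpy as np

# hypothetical repeated readings of a target at a known distance of 1.00 m
true_distance = 1.00
readings = np.array([0.98, 1.03, 1.01, 0.99, 1.02, 1.00, 0.97, 1.04])

accuracy = abs(readings.mean() - true_distance)   # systematic deviation from the true value
precision = readings.std(ddof=1)                  # spread of repeated readings (1-sigma)
print(f"accuracy (bias): {accuracy:.3f} m, precision (std): {precision:.3f} m")
```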

Vision: Cameras have become ubiquitous and cheap sensors in robotics and provide the data basis for object recognition and scene understanding algorithms. Basic processing methods that can go a long way are convolution-based filters. Convolving an image with a Gaussian kernel smooths the image and is the first step in almost any image processing algorithm, as it suppresses noise. Convolving an image with a Sobel kernel and thresholding the result allows extracting edges, which can provide important cues to the robot.
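A minimal sketch of this pipeline, assuming NumPy and SciPy are available, builds a Gaussian kernel for smoothing, convolves the smoothed image with the two Sobel kernels, and thresholds the gradient magnitude; kernel size, sigma and threshold are arbitrary example values.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=5, sigma=1.0):
    """Discrete, normalized 2D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def edge_map(image, threshold=0.25):
    """Smooth with a Gaussian, apply Sobel kernels, threshold the gradient magnitude."""
    smoothed = convolve2d(image, gaussian_kernel(), mode="same", boundary="symm")
    sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T
    gx = convolve2d(smoothed, sobel_x, mode="same", boundary="symm")
    gy = convolve2d(smoothed, sobel_y, mode="same", boundary="symm")
    magnitude = np.hypot(gx, gy)              # gradient magnitude per pixel
    return magnitude > threshold * magnitude.max()
```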

Computation

Features: Data-intensive sensors such as cameras or range sensors need to be processed to extract meaningful information. A feature is any such information that can be reliably (precisely and accurately) extracted from this data stream. Examples of such features are lines or bar codes in the environment. Lines are particularly easy to extract from data using RANSAC, divide-and-conquer, or least-squares fitting, and can provide easy-to-detect points of reference, for example walls in 2D range data or corners of a wall in a 2D image.
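A minimal RANSAC line-fitting sketch is shown below, assuming the scan points (e.g., wall hits from a laser scanner) are given as an N×2 NumPy array; the iteration count and inlier tolerance are arbitrary example parameters, and the final refinement is a least-squares fit along the principal direction of the inlier points.

```python
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.05, rng=None):
    """Fit a 2D line to noisy points with RANSAC.
    Returns (point_on_line, unit_direction, inlier_mask)."""
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]  # random minimal sample
        direction = p2 - p1
        norm = np.linalg.norm(direction)
        if norm < 1e-9:
            continue
        direction /= norm
        normal = np.array([-direction[1], direction[0]])
        dist = np.abs((points - p1) @ normal)     # point-to-line distances
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():    # keep the hypothesis with most inliers
            best_inliers = inliers
    # refine with a least-squares fit on the inliers (principal direction of the inlier cloud)
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    return centroid, vt[0], best_inliers
```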

Uncertainty and Error Propagation: Due to the uncertainty of the real world, e.g., noise in electronic systems and physical artifacts such as friction, sensor measurements are essentially random variables. The likelihood of a random variable taking on a certain value can be described by a probability density function. For example, the measured distance to a wall is most likely close to its real value, but can also be far off, although with considerably smaller likelihood (depending on the sensor, of course). A commonly used probability density function is the Gaussian distribution. Although the Gaussian distribution is often far from accurately describing a specific sensor, it is widely used because it is analytically convenient: it describes the behavior of a sensor measurement using just a mean and a variance. As many quantities of interest are aggregated from multiple sensors and multiple readings, a key challenge in robotics is to predict the variance of an aggregated quantity given the variances of all of its components. This problem is known as error propagation. The key insight is that each component’s variance enters the variance of the aggregated quantity weighted by how sensitively the aggregate responds to a small change in that component. This insight, which involves the partial derivatives of the output variable with respect to the input variables, is known as the error propagation law. Classical examples are the variance of the angle and distance to a wall that is measured based on multiple points on the wall, each subject to a certain variance themselves, and the variance of the robot’s position in the plane given the variance of the measurements of its wheel speeds.
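In symbols, for an aggregated quantity y = f(x_1, \ldots, x_n) with independent inputs, the error propagation law can be written (to first order, with notation introduced here for illustration) as

\sigma_y^2 \approx \sum_{i=1}^{n} \left( \frac{\partial f}{\partial x_i} \right)^2 \sigma_{x_i}^2,

and more generally, for a vector-valued output, as \Sigma_Y \approx J \, \Sigma_X \, J^T, where J is the Jacobian of f and \Sigma_X is the covariance matrix of the inputs. The squared partial derivatives are exactly the weights describing how sensitively the output reacts to each input.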

Markov Localization: Localizing a robot in its environment is another central problem in robotics. Error propagation makes it possible to maintain a probability density function of the robot’s position in the environment. The more the robot moves, the more uncertain its position becomes, which is expressed by an increasing variance. The corresponding probability density function also allows deriving the probability of the robot being at a certain location. Updating these probabilities, e.g., on a grid, is known as the action update in Markov localization. If the robot now perceives some kind of feature, e.g., a door, and knows how likely it is to see this feature at a given location, it can update its belief using Bayes’ rule. This is known as the perception update. The likelihood of seeing a certain feature at a location is usually calculated using a map that stores the locations of all features and a sensor model that allows calculating the probability of seeing a certain feature from a certain position. With sufficient features and perception capabilities, this allows a robot to localize itself in an environment with high reliability.
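A minimal one-dimensional sketch of the two update steps is given below, assuming NumPy and a toy corridor world in which doors are the only feature type; the motion and sensor model probabilities are made-up example values, and the corridor is treated as circular only to keep the code short.

```python
import numpy as np

def action_update(belief, p_move=0.8):
    """Motion model: the robot intends to move one cell to the right, but with
    probability 1 - p_move it stays put (corridor treated as circular for simplicity)."""
    moved = np.roll(belief, 1) * p_move + belief * (1 - p_move)
    return moved / moved.sum()

def perception_update(belief, doors, saw_door, p_hit=0.9, p_false=0.1):
    """Bayes' rule: multiply the prior belief by the likelihood of the observation."""
    if saw_door:
        likelihood = np.where(doors == 1, p_hit, p_false)
    else:
        likelihood = np.where(doors == 1, 1 - p_hit, 1 - p_false)
    posterior = belief * likelihood
    return posterior / posterior.sum()

# corridor with doors at cells 1, 4 and 7; initially the robot could be anywhere
doors = np.array([0, 1, 0, 0, 1, 0, 0, 1, 0, 0])
belief = np.ones(10) / 10
belief = perception_update(belief, doors, saw_door=True)   # robot sees a door
belief = action_update(belief)                             # robot moves one cell
print(belief)
```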

Extended Kalman Filter: The drawback of Markov localization is that the robot needs to maintain a probability for every possible location on the map, which can be computationally intensive. Instead, the error propagation law allows us to maintain the robot’s position as a closed-form Gaussian probability density function. The Extended Kalman Filter provides an elegant way to update this probability density function based on perception. For this, the Kalman filter first propagates the motion error using the error propagation law; this is known as the prediction step. Once a feature with known location (from the map) can be identified in the environment, the robot can calculate where it would need to be in order to see this feature, and then the difference between where it thinks it is and where it should be. This difference is known as the innovation. The robot can now calculate its next best estimate by averaging its a priori belief and the new insight provided by the sensor measurement. This average is weighted by the variance of the prior belief and the variance of the measurement, which allows the Kalman filter to discount unreliable measurements as well as to quickly forget unreliable position beliefs. The key insight here is that every measurement, no matter how unreliable, decreases the variance of the position estimate. The drawback of the Kalman filter is that it does not allow maintaining multi-hypothesis beliefs, which is possible with Markov localization.
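The structure of the two steps can be seen in the deliberately simplified, one-dimensional scalar sketch below (the full EKF additionally linearizes nonlinear motion and measurement models via their Jacobians); all numbers are illustrative.

```python
def kf_predict(x, P, u, Q):
    """Prediction step: propagate the state and grow its variance by the motion noise Q."""
    return x + u, P + Q

def kf_update(x, P, z, R):
    """Measurement update: the innovation z - x is weighted by the Kalman gain,
    which balances the prior variance P against the measurement variance R."""
    K = P / (P + R)          # Kalman gain
    x_new = x + K * (z - x)  # weighted average of prior belief and measurement
    P_new = (1 - K) * P      # variance shrinks, since K < 1 for any finite R
    return x_new, P_new

x, P = 0.0, 1.0                          # prior belief: position 0 m with variance 1
x, P = kf_predict(x, P, u=1.0, Q=0.5)    # move forward 1 m, motion noise 0.5
x, P = kf_update(x, P, z=1.2, R=0.8)     # a measurement suggests the robot is at 1.2 m
print(x, P)                              # estimate moves toward 1.2, variance decreases
```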

Simultaneous Localization and Mapping: Both Markov localization and the Extended Kalman Filter require a map that contains the locations of known features. Localizing in an environment where the locations of features are unknown is known as Simultaneous Localization and Mapping (SLAM). One of the first solutions to SLAM simply extends the EKF to maintain not only an estimate of the robot’s position but also of the position of each feature. The first time a feature is observed, its position is initialized based on the (uncertain) position of the robot. This uncertainty keeps growing, and so will the uncertainty of subsequent features the robot finds in the environment. The key insight is that the errors of consecutively observed features are highly correlated: while their absolute positions are highly uncertain, the distance between them can be estimated with much lower uncertainty. Thus, once the uncertainty of one feature’s position can be reduced, this reduction propagates to all features. This happens when the robot revisits a feature: the Kalman filter will reduce the variance of both the feature and the robot itself. Another way to think about this is to imagine the feature locations being connected by springs. Every time the distance between two features is measured with some accuracy, the corresponding spring is stiffened, eventually pulling all features into place.
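In this EKF-based formulation, the filter’s state vector stacks the robot pose and all feature positions, and the joint covariance contains the cross-correlations that make the mechanism above work (notation chosen here for illustration):

x = \begin{bmatrix} x_r & y_r & \theta_r & x_1 & y_1 & \cdots & x_n & y_n \end{bmatrix}^T,
\qquad
\Sigma = \begin{bmatrix} \Sigma_{rr} & \Sigma_{rm} \\ \Sigma_{mr} & \Sigma_{mm} \end{bmatrix}.

The off-diagonal block \Sigma_{rm} and the correlations inside \Sigma_{mm} are exactly what allows an observation of a single revisited feature to improve the estimates of the robot and of all other features.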

 
