Joseph Knuth: Research

My research focuses on understanding and mitigating localization error (error in estimating the absolute position and orientation) when robots have only relative measurements available, such as robots equipped with cameras but not GPS or a compass. We further consider the case of multiple cooperating robots that can take measurements of one another. The goal is to use these inter-robot measurements to improve each robot's estimate over what it could achieve on its own.

UAV Example

We consider robots capable of localizing themselves through dead reckoning (inter-time relative pose measurements are available). In addition, we assume inter-robot relative measurements are available from time to time. These inter-robot measurements may be of relative position, orientation, bearing, distance, or any combination thereof. We model this problem as a fully labeled, time-varying graph in which nodes represent the absolute pose of a given robot at a given time, and edges represent relative measurements, both inter-time and inter-robot. An example of such a graph is shown here.

Graph
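The graph model described above can be sketched as a small data structure. The following is a minimal illustration only (2-D poses, and all class and variable names are hypothetical):

```python
import numpy as np

class PoseGraph:
    """Fully labeled, time-varying pose graph: nodes are (robot, time)
    absolute poses; edges are relative measurements."""

    def __init__(self):
        self.nodes = {}   # (robot_id, t) -> np.array([x, y, theta])
        self.edges = []   # (key_from, key_to, relative_measurement)

    def add_node(self, robot_id, t, pose):
        self.nodes[(robot_id, t)] = np.asarray(pose, dtype=float)

    def add_edge(self, key_from, key_to, measurement):
        # Inter-time edge: same robot at consecutive times (dead reckoning).
        # Inter-robot edge: two different robots at the same time.
        self.edges.append((key_from, key_to,
                           np.asarray(measurement, dtype=float)))

g = PoseGraph()
g.add_node(0, 0, [0.0, 0.0, 0.0])
g.add_node(0, 1, [1.0, 0.0, 0.0])
g.add_node(1, 1, [1.0, 1.0, 0.0])
g.add_edge((0, 0), (0, 1), [1.0, 0.0, 0.0])   # inter-time (odometry)
g.add_edge((0, 1), (1, 1), [0.0, 1.0, 0.0])   # inter-robot measurement
```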

We solve this problem by minimizing a cost function. This function is the sum of edge cost functions, each describing how well a given measurement fits the current estimates of the absolute poses.

In general, a problem like the one described above requires a parameterization of the space of rotations. We insist that no parameterization be used; rather, we give the rotation group its Riemannian manifold structure. We then minimize a cost function defined over a product Riemannian manifold. Further, we perform this optimization using manifold techniques, so that our estimates never leave the manifold.

Manifold Vis

We have identified situations in which such an algorithm outperforms other state-of-the-art algorithms (the EKF and standard graph optimization). Simulations and experiments have been conducted to verify the improvements.

We recently completed two different experiments on these topics. The first used a Pioneer P3-DX wheeled robot equipped with cameras and wheel odometers. The robot was instructed to move in a circle while capturing pictures and wheel odometer measurements every 0.2 seconds. A camera located on the ceiling was used to track the robot throughout the experiment. The pictures and odometer measurements were then used to reconstruct 3-D pose measurements for the robot. Using those measurements, the robot estimated its current position, which was compared with the measurements from the ceiling camera. We repeated this experiment a number of times to estimate the mean and variance of the localization error.

(a) The robot used in the experiments, and (b) a few snapshots from the experiment.
Fig. 4. Schematic of the test set-up.


We also conducted a similar experiment in which two such robots move around the circle. Each robot, in addition to estimating its own pose, also estimates the pose of the other robot relative to itself. We then use an averaging algorithm we proposed in a previous paper to find an improved estimate of each robot's global pose. Again, the experiment was repeated multiple times to estimate the mean and variance of the localization error.
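The averaging idea can be illustrated with a standard chordal (Frobenius-norm) rotation average. This is a sketch of the general fusion concept, not the specific algorithm from our paper:

```python
import numpy as np

def rot_z(a):
    # Rotation about the z-axis by angle a (helper for the demo below).
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def chordal_rotation_average(rotations):
    # Chordal mean: arithmetic mean of the rotation matrices, projected
    # back onto SO(3) via SVD (sign fix keeps det = +1).
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    return U @ D @ Vt

def average_pose(estimates):
    # Fuse several (R, t) estimates of one robot's global pose:
    # positions by arithmetic mean, rotations by the chordal mean.
    Rs = [R for R, _ in estimates]
    ts = [t for _, t in estimates]
    return chordal_rotation_average(Rs), np.mean(ts, axis=0)

# Demo: fuse one robot's own estimate with the other robot's estimate of it.
R_avg, t_avg = average_pose([
    (rot_z(0.1), np.array([0.0, 0.0, 0.0])),
    (rot_z(0.3), np.array([2.0, 0.0, 0.0])),
])
```

For two rotations about a common axis the chordal mean lands at the midpoint angle, and the SVD projection guarantees the fused rotation is still a valid element of SO(3).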

Fig. 6. The robots used in the experiments.

For more information, please refer to my Publications page.


Updated: October 5th, 2012