Autonomous Learning for Control and Robotics

Autonomous learning has been a promising direction in control and robotics for more than a decade, since data-driven learning reduces the amount of engineering knowledge that is otherwise required. How long does it take an autonomous robot to learn a task from scratch when no informative prior knowledge is available? Typically, very long: autonomous reinforcement learning (RL) approaches require many interactions with the system to learn controllers, which is a practical limitation in real systems, such as robots, where gathering many interactions is impractical and time consuming. To sidestep this problem, current learning approaches typically rely on task-specific knowledge in the form of expert demonstrations, realistic simulators, pre-shaped policies, or explicit knowledge of the underlying dynamics.

We follow a different approach and speed up learning by extracting more information from data. In particular, we learn a probabilistic, non-parametric Gaussian process transition model of the system. By explicitly incorporating model uncertainty into long-term planning and controller learning, our approach reduces the effect of model errors, a key problem in model-based learning.
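To make the idea concrete, here is a minimal sketch (not our actual implementation) of the two ingredients: fitting a Gaussian process transition model to interaction data and accounting for the model's uncertainty during a multi-step rollout. The toy one-dimensional dynamics, the scikit-learn API, and the Monte Carlo sampling of GP predictions are all illustrative assumptions, a simple stand-in for the uncertainty propagation used in the actual method.

# Sketch: GP transition model + uncertainty-aware rollout (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy 1-D system x' = f(x, u) + noise; the true f is unknown to the learner.
def true_dynamics(x, u):
    return x + 0.1 * np.sin(x) + 0.1 * u

# Collect a small batch of random interactions: (state, action) -> next state.
X = rng.uniform(-2.0, 2.0, size=(30, 2))          # columns: state, action
y = true_dynamics(X[:, 0], X[:, 1]) + 0.01 * rng.standard_normal(30)

# Probabilistic, non-parametric transition model with a learned noise level.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, y)

# Propagate model uncertainty over a planning horizon with particles:
# each particle samples the GP's predictive distribution at every step,
# so regions where the model is uncertain spread the particles out.
def rollout(x0, policy, horizon=20, n_particles=100):
    particles = np.full(n_particles, x0)
    for _ in range(horizon):
        u = policy(particles)
        mean, std = gp.predict(np.column_stack([particles, u]), return_std=True)
        particles = mean + std * rng.standard_normal(n_particles)
    return particles

final_states = rollout(x0=0.0, policy=lambda x: -0.5 * x)
print("predicted mean/std at horizon:", final_states.mean(), final_states.std())

A policy-search method can evaluate a candidate controller on such uncertainty-aware rollouts instead of on a single deterministic prediction, which is what reduces the effect of model errors.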

Compared to state-of-the-art RL, our model-based policy search method achieves an unprecedented speed of learning. We demonstrate its applicability to autonomous learning on real robot and control tasks.

Low-Cost Robotic Manipulator

We used a standard Lynxmotion robot arm and a Kinect depth camera (total cost: 500 USD) and demonstrated that fully autonomous learning (with random initializations) requires only a few trials.


Cart-Pole Swing-Up

Our autonomous learning approach solved this standard benchmark task using less than 20 seconds of data.


Contact

Marc Deisenroth

Collaborators

Dieter Fox, University of Washington
Carl Edward Rasmussen, University of Cambridge
