PhD Thesis

Learning Relational Models with Human Interaction for Planning in Robotics



  • Started: 01/01/2013
  • Finished: 21/02/2017


Automated planning has proven useful for solving problems where an agent has to maximize a reward function by executing actions. As planners have improved to handle more expressive and difficult problems, there is increasing interest in using planning to make robotic tasks more efficient. However, planners rely on a domain model, which has to be either handcrafted or learned. Although learning domain models automatically can be costly, recent approaches provide good generalization capabilities and integrate human feedback to reduce the number of experiences required to learn.

In this thesis we propose new methods that allow an agent with no previous knowledge to solve certain problems more efficiently through automated planning. First, we show how to apply probabilistic planning to improve robot performance in manipulation tasks, such as cleaning dirt from a table or clearing the tableware on it. Planners obtain sequences of actions that achieve the best long-term result, outperforming reactive strategies.
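The advantage of planning over reacting can be seen in a toy version of the tableware task. This is an illustrative sketch, not taken from the thesis: the cost model and numbers are assumptions chosen only to show that a planned sequence (load a tray, then make one trip) can beat a reactive policy (carry each item as it is noticed).

```python
# Hypothetical cost model for clearing n items from a table.
# A reactive strategy carries each item to the kitchen individually;
# a planned strategy first loads every item onto a tray, then makes
# a single trip. The costs below are illustrative assumptions.

def reactive_cost(n_items: int, trip_cost: int = 2) -> int:
    # one full trip per item
    return n_items * trip_cost

def planned_cost(n_items: int, load_cost: int = 1, trip_cost: int = 2) -> int:
    # load each item onto the tray, then a single trip for all of them
    return n_items * load_cost + trip_cost

print(reactive_cost(5), planned_cost(5))  # planned wins for several items
```

For a single item the reactive policy is just as good; the benefit of looking ahead grows with the length of the task, which is the long-term effect mentioned above.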

Second, we introduce new reinforcement learning algorithms where the agent can actively request demonstrations from a teacher to learn new actions and speed up the learning process. In particular, we propose an algorithm that lets the user set the minimum sum of rewards to be achieved, where a higher required reward implies that a larger number of demonstrations will be requested. Moreover, the learned model is analyzed to extract its unlearned or problematic parts. This information allows the agent to avoid irrecoverable errors and to guide the teacher when a demonstration is requested.
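The decision of when to ask the teacher can be sketched as a simple rule. This is a minimal illustration of the idea, not the thesis algorithm: the class name, the per-state confidence estimate, and the threshold values are all assumptions. The agent requests a demonstration when its model is unreliable in the current state, or when the expected return falls below the user-specified minimum sum of rewards.

```python
# Illustrative sketch of active demonstration requests (names and
# thresholds are hypothetical, not from the thesis).

class ActiveDemoAgent:
    def __init__(self, min_return: float, confidence_threshold: float = 0.7):
        self.min_return = min_return                  # minimum sum of rewards set by the user
        self.confidence_threshold = confidence_threshold
        self.confidence = {}                          # state -> confidence in the learned model

    def should_request_demo(self, state, expected_return: float) -> bool:
        # Ask for help when the model is unknown or unreliable here
        # (a problematic part of the model), or when planning with it
        # cannot meet the required return.
        unreliable = self.confidence.get(state, 0.0) < self.confidence_threshold
        insufficient = expected_return < self.min_return
        return unreliable or insufficient

agent = ActiveDemoAgent(min_return=10.0)
print(agent.should_request_demo("table_cluttered", expected_return=20.0))
```

Raising `min_return` makes the `insufficient` condition fire more often, which matches the trade-off described above: demanding a higher reward leads to more demonstration requests.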

Finally, we introduce a new domain model learner that, in addition to relational probabilistic action models, can also learn exogenous effects. This learner can be integrated with existing planners and reinforcement learning algorithms to solve a wide range of problems.
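A relational probabilistic action model can be pictured as a set of rules, each mapping preconditions to a distribution over effect sets; an exogenous effect is then a rule with no triggering action, so it may fire at every step on its own. The data structure below is an illustrative sketch under those assumptions, not the thesis implementation, and the literals and probabilities are invented for the example.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical encoding of relational probabilistic rules.
# Literals are tuples such as ("dirty", "table1"); each rule lists
# (probability, effect_literals) outcomes that sum to one.

@dataclass
class Rule:
    action: Optional[str]                        # None marks an exogenous rule
    preconditions: frozenset                     # relational literals that must hold
    effects: list = field(default_factory=list)  # [(probability, set_of_literals)]

dirty = ("dirty", "table1")

# Action rule: wiping usually removes the dirt, sometimes fails.
clean_rule = Rule(
    action="wipe",
    preconditions=frozenset({dirty}),
    effects=[(0.8, {("not", "dirty", "table1")}), (0.2, set())],
)

# Exogenous rule: the table may get dirty again regardless of what
# the robot does, e.g. caused by another agent.
exo_rule = Rule(
    action=None,
    preconditions=frozenset(),
    effects=[(0.1, {dirty}), (0.9, set())],
)
```

Separating exogenous rules from action rules lets a planner simulate the environment's own dynamics at every step, which is what allows the agent to adapt to the behavior of other agents by learning their dynamics.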

In summary, we improve the use of learning and automated planning to solve unknown tasks. The improvements allow an agent to obtain a larger benefit from planners, learn faster, balance the number of action executions and teacher demonstrations, avoid irrecoverable errors, interact with a teacher to solve difficult problems, and adapt to the behavior of other agents by learning their dynamics. All the proposed methods were compared with state-of-the-art approaches and demonstrated in different scenarios, including challenging robotic tasks.

The work is under the scope of the following projects:

  • IntellAct: Intelligent observation and execution of Actions and manipulations (web)
  • MANIPlus: Robotized manipulation of deformable objects (web)
  • RobInstruct: Instructing robots using natural communication skills (web)