Categorizing object-action relations from semantic scene graphs

Conference Article

Conference

IEEE International Conference on Robotics and Automation (ICRA)

Edition

2010

Pages

398-405

Doc link

http://dx.doi.org/10.1109/ROBOT.2010.5509319

Abstract

In this work we introduce a novel approach for detecting spatiotemporal object-action relations, leading to both action recognition and object categorization. Semantic scene graphs are extracted from image sequences and used to find the characteristic main graphs of the action sequence via an exact graph-matching technique, providing an event table of the action scene from which object-action relations can be extracted. The method is applied to several artificial and real action scenes containing limited context. The central novelty of this approach is that it is model-free: it requires a priori representations neither for objects nor for actions. Essentially, actions are recognized without prior object knowledge, and objects are categorized solely by the role they play within an action sequence. The approach is thus grounded in the affordance principle, which has recently attracted much attention in robotics, and provides a way forward for trial-and-error learning of object-action relations through repeated experimentation. It may therefore be useful for recognition and categorization tasks, for example in imitation learning in developmental and cognitive robotics.
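The pipeline sketched in the abstract (scene graphs per frame, exact matching to keep only the characteristic main graphs, then an event table per object) can be illustrated with a toy example. This is not the authors' implementation: the segment names, the `frozenset`-of-edges graph encoding, and the use of set equality as a stand-in for exact graph matching are all simplifying assumptions made here for illustration.

```python
# Hedged sketch, not the paper's code: a scene graph is approximated as a
# frozenset of labeled edges (segment_a, relation, segment_b), and exact
# graph matching is reduced to set equality on these edge sets.

def main_graphs(frames):
    """Keep only frames whose graph differs from the previous kept one,
    i.e. the characteristic 'main graphs' where the topology changes."""
    keys = []
    for g in frames:
        if not keys or g != keys[-1]:
            keys.append(g)
    return keys

def event_table(keys):
    """Rows: object segments; columns: main graphs; cells: the relations
    each object participates in at that stage of the action."""
    objects = sorted({n for g in keys for (a, _, b) in g for n in (a, b)})
    return {obj: [sorted(rel for (a, rel, b) in g if obj in (a, b))
                  for g in keys]
            for obj in objects}

# Toy "touching" sequence: a hand approaches a cup, makes contact, releases.
frames = [
    frozenset(),                               # no contact
    frozenset({("hand", "touching", "cup")}),  # contact begins
    frozenset({("hand", "touching", "cup")}),  # unchanged frame, filtered out
    frozenset(),                               # contact ends
]
keys = main_graphs(frames)
table = event_table(keys)
print(len(keys))        # 3 main graphs survive
print(table["cup"])     # [[], ['touching'], []]
```

The cup is never modeled explicitly; its row in the event table (no relation, touched, no relation) is the only information available, which is exactly the sense in which the paper categorizes objects by their exhibited role in an action.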

Categories

automation, pattern recognition.

Author keywords

object-action categorization

Scientific reference

E.E. Aksoy, A. Abramov, F. Wörgötter and B. Dellen. Categorizing object-action relations from semantic scene graphs. 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, USA, pp. 398-405.