Learning Spatial Relationships From 3D Vision Using Histograms

Severin Andreas Fichtl, Frank Guerin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

8 Citations (Scopus)

Abstract

Effective robot manipulation requires a vision system that can extract the features of the environment which determine what manipulation actions are possible. There is existing work in this direction under the broad banner of recognising "affordances". We are particularly interested in the possibilities for action afforded by relationships between pairs of objects, for example whether one object is "inside" or "on top of" another. This requires a vision system that can recognise such relationships in a scene. In our approach the vision system first segments an image, and then considers a pair of objects to determine their physical relationship. The system extracts surface patches for each object in the segmented image, and then compiles various histograms from the relationships between the surface patches of one object and those of the other. From these histograms a classifier is trained to recognise the relationship between a pair of objects. Our results identify the most promising ways to construct histograms so as to permit classification of physical relationships with high accuracy. This work is important for manipulator robots that may be presented with novel scenes and must identify the salient physical relationships in order to plan manipulation activities.
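The pipeline described in the abstract (segment objects, extract surface patches, histogram pairwise patch relations, feed the histogram to a classifier) can be sketched in a few lines. The following is an illustrative reconstruction, not the authors' implementation: the choice of relations (vertical offset between patch centroids and angle between patch normals), the bin counts, and the patch representation as centroid-plus-normal rows are all assumptions made for the example.

```python
# Hedged sketch (NOT the paper's implementation): build a histogram feature
# describing the spatial relationship between two segmented objects, where
# each object is a set of surface patches represented as [cx, cy, cz, nx, ny, nz]
# rows (centroid + unit normal). The resulting fixed-length feature vector is
# what a classifier would be trained on.
import numpy as np

def patch_relation_histogram(patches_a, patches_b, n_bins=8):
    """Histogram over pairwise patch relations between two objects.

    patches_a, patches_b: (N, 6) arrays, rows = [cx, cy, cz, nx, ny, nz].
    Returns a normalised feature: a vertical-offset histogram concatenated
    with a normal-angle histogram (illustrative choices of relation).
    """
    ca, na = patches_a[:, :3], patches_a[:, 3:]
    cb, nb = patches_b[:, :3], patches_b[:, 3:]

    # All pairwise vertical offsets (z assumed "up"); captures on-top/inside cues.
    dz = ca[:, None, 2] - cb[None, :, 2]
    h_off, _ = np.histogram(dz, bins=n_bins, range=(-1.0, 1.0))

    # All pairwise angles between patch normals.
    cos_ang = np.clip(na @ nb.T, -1.0, 1.0)
    h_ang, _ = np.histogram(np.arccos(cos_ang), bins=n_bins, range=(0.0, np.pi))

    feat = np.concatenate([h_off, h_ang]).astype(float)
    return feat / feat.sum()  # normalise so the feature is patch-count invariant

# Toy example: object A's patches sit above object B's ("on top" configuration).
rng = np.random.default_rng(0)
a = np.hstack([rng.normal([0, 0, 0.5], 0.05, (20, 3)), np.tile([0, 0, 1.0], (20, 1))])
b = np.hstack([rng.normal([0, 0, 0.0], 0.05, (20, 3)), np.tile([0, 0, 1.0], (20, 1))])
feat = patch_relation_histogram(a, b)
```

In practice such feature vectors, computed for many labelled object pairs, would be passed to any standard classifier; the paper's contribution lies in comparing which ways of constructing the histograms yield the highest classification accuracy.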
Original language: English
Title of host publication: Proceedings of IEEE International Conference on Robotics and Automation
Publisher: IEEE Press
Pages: 501-508
Number of pages: 7
ISBN (Print): 9781479936861
Publication status: Published - May 2014

