Abstract
Robots performing everyday tasks such as cooking in a kitchen need to be able to deal with variations in the household tools that may be available. Given a particular task and a set of tools available, the robot needs to be able to
assess which would be the best tool for the task, and also where to grasp that tool and how to orient it. This requires an understanding of what is important in a tool for a given task, and how the grasping and orientation relate to performance in the task. A robot can learn this by trying out many examples.
This learning can be faster if these trials are done in simulation using tool models acquired from the Web. We provide a semi-automatic pipeline to process 3D models from the Web, allowing us to train from many different tools and their uses in simulation. We represent a tool object and its grasp and
orientation using 21 parameters that capture the shapes and sizes of its principal parts and the relationships among them. We then learn a ‘task function’ that maps this 21-parameter vector to a value describing how effective the tool is for a particular task. Our trained system can then process the unsegmented point cloud of a new tool and output a score and a way of using the tool for a
particular task. We compare our approach with the closest one in the literature and show that we achieve significantly better results.
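The core idea of a learned ‘task function’ can be sketched as follows. A simple linear model stands in for the learned regressor here (the abstract does not specify the model class), and the names `task_function`, `best_tool`, and the weight/bias parameters are hypothetical, chosen only to illustrate mapping a 21-parameter tool-and-grasp description to a scalar effectiveness score and ranking candidate tools by it.

```python
import numpy as np

N_PARAMS = 21  # shape/size parameters of the tool's principal parts plus grasp and orientation


def task_function(tool_params: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Map a 21-parameter tool description to a scalar effectiveness score.

    A linear model is used purely as a placeholder for the learned regressor.
    """
    assert tool_params.shape == (N_PARAMS,)
    return float(tool_params @ weights + bias)


def best_tool(candidates: dict, weights: np.ndarray, bias: float) -> str:
    """Return the name of the candidate tool with the highest predicted score."""
    return max(candidates, key=lambda name: task_function(candidates[name], weights, bias))


# Illustrative usage with random stand-in data:
rng = np.random.default_rng(0)
weights, bias = rng.normal(size=N_PARAMS), 0.1
candidates = {"hammer": rng.normal(size=N_PARAMS), "spatula": rng.normal(size=N_PARAMS)}
choice = best_tool(candidates, weights, bias)
```

In the paper's setting the same score would also rank different grasps and orientations of one tool, since those are part of the 21-dimensional input.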
| Original language | English |
|---|---|
| Title of host publication | Proceedings of IEEE International Conference on Intelligent Robots and Systems (IROS 2017) |
| Publisher | IEEE Press |
| Pages | 4923-4929 |
| Number of pages | 7 |
| ISBN (Electronic) | 978-1-5386-2681-8, 978-1-5386-2682-5 |
| DOIs | |
| Publication status | Published - 28 Sep 2017 |
| Event | 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) - Duration: 24 Sep 2017 → 28 Sep 2017 |
Conference

| Conference | 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) |
|---|---|
| Period | 24/09/17 → 28/09/17 |
Keywords
- tools
- three dimensional displays
- solid modelling