We address the problem of executing tool-using manipulation skills in scenarios where the objects to be used may vary. We assume that point clouds of the tool and target object can be obtained, but no interpretation or further knowledge about these objects is provided. The system must interpret the point clouds and decide how to use the tool to complete a manipulation task with a target object; in particular, it must adjust its motion trajectories appropriately to complete the task. We tackle three everyday manipulations: scraping material from a tool into a container, cutting, and scooping from a container. Our solution encodes these manipulation skills in a generic way, with parameters that are filled in at run time via queries to a robot perception module; the perception module abstracts the functional parts of the tool and extracts the key parameters needed for the task. The approach is evaluated in simulation and with selected examples on a PR2 robot.
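The architecture described above — a generic, parameterized skill whose open slots are filled at run time from perception queries on the tool's point cloud — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `ToolGeometry`, `scoop_skill`, and all waypoint offsets are hypothetical stand-ins for whatever the perception module actually returns.

```python
from dataclasses import dataclass

@dataclass
class ToolGeometry:
    """Hypothetical result of a perception query on the tool's point cloud:
    the functional part (e.g. the bowl of a spoon) and its approach axis."""
    functional_point: tuple  # (x, y, z) of the part that contacts the material
    approach_axis: tuple     # unit direction for orienting the tool

def scoop_skill(tool: ToolGeometry, container_rim: tuple, depth: float):
    """Generic scooping skill: a waypoint trajectory whose poses are
    parameterized by the perceived tool and container geometry."""
    x, y, z = container_rim
    return [
        (x, y, z + 0.10),         # pre-scoop pose above the rim (offset assumed)
        (x, y, z - depth),        # dip the functional part into the material
        (x + 0.05, y, z + 0.10),  # sweep sideways, lift, and retract
    ]

# At run time, the skill's parameters come from perception output
# (here replaced by fixed stand-in values).
tool = ToolGeometry(functional_point=(0.0, 0.0, 0.0), approach_axis=(0, 0, -1))
waypoints = scoop_skill(tool, container_rim=(0.5, 0.0, 0.2), depth=0.03)
```

The point of the sketch is the separation of concerns: the skill encodes only the task structure (approach, dip, retract), while every numeric pose is derived from quantities the perception module supplies for the specific tool and container at hand.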
Title of host publication: Proceedings of the 2019 International Conference on Robotics and Automation (ICRA)
Publication status: Accepted/In press - 26 Jan 2019
Event: 2019 International Conference on Robotics and Automation (ICRA) - Palais des congrès de Montréal, Montreal, Canada
Duration: 20 May 2019 → 24 May 2019