Bootstrapping Relational Affordances of Object Pairs using Transfer

Severin Fichtl, Dirk Kraft, Norbert Krüger, Frank Guerin

Research output: Contribution to journal › Article

1 Citation (Scopus)
4 Downloads (Pure)

Abstract

Robots acting in everyday environments need a good knowledge of how a manipulation action can affect pairs of objects in a relationship, such as ‘inside’ or ‘behind’ or ‘ontop’. These relationships afford certain means-end actions such as pulling a container to retrieve the contents, or pulling a tool to retrieve a desired object. We investigate how these relational affordances could be learnt by a robot from its own action experience. A major challenge in this approach is to reduce the number of training samples needed to achieve accuracy, and hence we investigate an approach which can leverage past knowledge to accelerate current learning (which we call bootstrapping). We learn Random Forest based affordance predictors from visual inputs and demonstrate two approaches to knowledge transfer for bootstrapping. In the first approach (direct bootstrapping), the state-space for a new affordance predictor is augmented with the output of previously learnt affordances. In the second approach (category based bootstrapping), we form categories that capture underlying commonalities of a pair of existing affordances and augment the state-space with this category classifier’s output. In addition, we introduce a novel heuristic, which suggests how a large set of potential affordance categories can be pruned to leave only those categories which are most promising for bootstrapping future affordances. Our results show that both bootstrapping approaches outperform learning without bootstrapping. We also show that there is no significant difference in performance between direct and category based bootstrapping.
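As a rough illustration of the direct bootstrapping idea described in the abstract, the sketch below appends a previously learnt affordance predictor's output to the visual feature vector before training a new Random Forest predictor. It is a minimal sketch assuming scikit-learn and made-up feature and label arrays; the names (X, y_inside, y_pull) are hypothetical and do not come from the paper.

# Direct bootstrapping sketch (assumed scikit-learn API; hypothetical data):
# the output of a previously learnt affordance predictor is appended to the
# visual features before a new affordance predictor is trained.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))           # visual features for object pairs (hypothetical)
y_inside = rng.integers(0, 2, size=500)  # labels for a previously learnt affordance
y_pull = rng.integers(0, 2, size=500)    # labels for the new affordance to learn

# 1) Previously learnt affordance predictor, e.g. "is object A inside object B?".
inside_clf = RandomForestClassifier(n_estimators=100, random_state=0)
inside_clf.fit(X, y_inside)

# 2) Direct bootstrapping: augment the state space with that predictor's output.
inside_score = inside_clf.predict_proba(X)[:, 1].reshape(-1, 1)
X_augmented = np.hstack([X, inside_score])

# 3) Train the new affordance predictor (e.g. "does pulling retrieve the
#    desired object?") on the augmented state space.
pull_clf = RandomForestClassifier(n_estimators=100, random_state=0)
pull_clf.fit(X_augmented, y_pull)

Category based bootstrapping would follow the same pattern, with the appended column coming from a category classifier trained over commonalities of a pair of existing affordances rather than from a single previously learnt affordance.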
Original language: English
Pages (from-to): 56-71
Number of pages: 16
Journal: IEEE Transactions on Cognitive and Developmental Systems
Volume: 10
Issue number: 1
Early online date: 11 Oct 2016
DOIs: 10.1109/TCDS.2016.2616496
Publication status: Published - 9 Mar 2018

Keywords

  • Robot sensing systems
  • planning
  • visualisation
  • containers
  • acceleration
  • service robots

Cite this

Bootstrapping Relational Affordances of Object Pairs using Transfer. / Fichtl, Severin; Kraft, Dirk; Krüger, Norbert; Guerin, Frank.

In: IEEE Transactions on Cognitive and Developmental Systems, Vol. 10, No. 1, 09.03.2018, p. 56-71.

Research output: Contribution to journal › Article

Fichtl, Severin ; Kraft, Dirk ; Krüger, Norbert ; Guerin, Frank. / Bootstrapping Relational Affordances of Object Pairs using Transfer. In: IEEE Transactions on Cognitive and Developmental Systems. 2018 ; Vol. 10, No. 1. pp. 56-71.
@article{6d9371422e4c442b8ce414b215beb272,
title = "Bootstrapping Relational Affordances of Object Pairs using Transfer",
abstract = "Robots acting in everyday environments need a good knowledge of how a manipulation action can affect pairs of objects in a relationship, such as ‘inside’ or ‘behind’ or ‘ontop’. These relationships afford certain means-end actions such as pulling a container to retrieve the contents, or pulling a tool to retrieve a desired object. We investigate how these relational affordances could be learnt by a robot from its own action experience. A major challenge in this approach is to reduce the number of training samples needed to achieve accuracy, and hence we investigate an approach which can leverage past knowledge to accelerate current learning (which we call bootstrapping). We learn Random Forest based affordance predictors from visual inputs and demonstrate two approaches to knowledge transfer for bootstrapping. In the first approach (direct bootstrapping), the state-space for a new affordance predictor is augmented with the output of previously learnt affordances. In the second approach (category based bootstrapping), we form categories that capture underlying commonalities of a pair of existing affordances and augment the state-space with this category classifier’s output. In addition, we introduce a novel heuristic, which suggests how a large set of potential affordance categories can be pruned to leave only those categories which are most promising for bootstrapping future affordances. Our results show that both bootstrapping approaches outperform learning without bootstrapping. We also show that there is no significant difference in performance between direct and category based bootstrapping.",
keywords = "Robot sensing systems, planning, visualisation, containers, acceleration, service robots",
author = "Severin Fichtl and Dirk Kraft and Norbert Kr{\"u}ger and Frank Guerin",
note = "This work was supported in part by the U.K. EPSRC DTG EP/J5000343/1 at Aberdeen, and in part by the EU Cognitive Systems Project XPERIENCE at SDU under Grant FP7-ICT-270273.",
year = "2018",
month = "3",
day = "9",
doi = "10.1109/TCDS.2016.2616496",
language = "English",
volume = "10",
pages = "56--71",
journal = "IEEE Transactions on Cognitive and Developmental Systems",
issn = "2379-8920",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
number = "1",

}

TY - JOUR

T1 - Bootstrapping Relational Affordances of Object Pairs using Transfer

AU - Fichtl, Severin

AU - Kraft, Dirk

AU - Krüger, Norbert

AU - Guerin, Frank

N1 - This work was supported in part by the U.K. EPSRC DTG EP/J5000343/1 at Aberdeen, and in part by the EU Cognitive Systems Project XPERIENCE at SDU under Grant FP7-ICT-270273.

PY - 2018/3/9

Y1 - 2018/3/9

N2 - Robots acting in everyday environments need a good knowledge of how a manipulation action can affect pairs of objects in a relationship, such as ‘inside’ or ‘behind’ or ‘ontop’. These relationships afford certain means-end actions such as pulling a container to retrieve the contents, or pulling a tool to retrieve a desired object. We investigate how these relational affordances could be learnt by a robot from its own action experience. A major challenge in this approach is to reduce the number of training samples needed to achieve accuracy, and hence we investigate an approach which can leverage past knowledge to accelerate current learning (which we call bootstrapping). We learn Random Forest based affordance predictors from visual inputs and demonstrate two approaches to knowledge transfer for bootstrapping. In the first approach (direct bootstrapping), the state-space for a new affordance predictor is augmented with the output of previously learnt affordances. In the second approach (category based bootstrapping), we form categories that capture underlying commonalities of a pair of existing affordances and augment the state-space with this category classifier’s output. In addition, we introduce a novel heuristic, which suggests how a large set of potential affordance categories can be pruned to leave only those categories which are most promising for bootstrapping future affordances. Our results show that both bootstrapping approaches outperform learning without bootstrapping. We also show that there is no significant difference in performance between direct and category based bootstrapping.

AB - Robots acting in everyday environments need a good knowledge of how a manipulation action can affect pairs of objects in a relationship, such as ‘inside’ or ‘behind’ or ‘ontop’. These relationships afford certain means-end actions such as pulling a container to retrieve the contents, or pulling a tool to retrieve a desired object. We investigate how these relational affordances could be learnt by a robot from its own action experience. A major challenge in this approach is to reduce the number of training samples needed to achieve accuracy, and hence we investigate an approach which can leverage past knowledge to accelerate current learning (which we call bootstrapping). We learn Random Forest based affordance predictors from visual inputs and demonstrate two approaches to knowledge transfer for bootstrapping. In the first approach (direct bootstrapping), the state-space for a new affordance predictor is augmented with the output of previously learnt affordances. In the second approach (category based bootstrapping), we form categories that capture underlying commonalities of a pair of existing affordances and augment the state-space with this category classifier’s output. In addition, we introduce a novel heuristic, which suggests how a large set of potential affordance categories can be pruned to leave only those categories which are most promising for bootstrapping future affordances. Our results show that both bootstrapping approaches outperform learning without bootstrapping. We also show that there is no significant difference in performance between direct and category based bootstrapping.

KW - Robot sensing systems

KW - planning

KW - visualisation

KW - containers

KW - acceleration

KW - service robots

U2 - 10.1109/TCDS.2016.2616496

DO - 10.1109/TCDS.2016.2616496

M3 - Article

VL - 10

SP - 56

EP - 71

JO - IEEE Transactions on Cognitive and Developmental Systems

JF - IEEE Transactions on Cognitive and Developmental Systems

SN - 2379-8920

IS - 1

ER -