Natural reference to objects in a visual domain

Margaret Mitchell, Kees van Deemter, Ehud Reiter

Research output: Chapter in Book/Report/Conference proceeding (Conference contribution)

28 Citations (Scopus)

Abstract

This paper discusses the basic structures necessary for the generation of reference to objects in a visual scene. We design a study to elicit naturalistic referring expressions to relatively complex objects, and find aspects of reference that have not been accounted for in work on Referring Expression Generation (REG). These include reference to object parts, size comparisons without crisp measurements, and the use of analogies. By drawing on research in cognitive science and psycholinguistics, we begin developing the input structure and background knowledge necessary for an algorithm capable of generating the kinds of reference we observe.
Original language: English
Title of host publication: INLG 2010 - Proceedings of the Sixth International Natural Language Generation Conference, July 7-9, 2010, Trim, Co. Meath, Ireland
Editors: John Kelleher, Brian Macnamee, Ielka van der Sluis, Anja Belz, Albert Gatt, Alexander Koller
Place of publication: Stroudsburg, PA, USA
Publisher: Association for Computational Linguistics
Pages: 95-104
Number of pages: 10
Publication status: Published - 2010

Keywords

  • generation of referring expressions
  • elicitation experiment
  • spoken monologue

Cite this

Mitchell, M., van Deemter, K., & Reiter, E. (2010). Natural reference to objects in a visual domain. In J. Kelleher, B. Macnamee, I. van der Sluis, A. Belz, A. Gatt, & A. Koller (Eds.), INLG 2010 - Proceedings of the Sixth International Natural Language Generation Conference, July 7-9, 2010, Trim, Co. Meath, Ireland (pp. 95-104). Association for Computational Linguistics. http://homepages.abdn.ac.uk/k.vdeemter/pages/INLG2010-Meg.pdf