Generating Expressions that Refer to Visual Objects

Margaret Mitchell, Kees van Deemter, Ehud Baruch Reiter

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

Abstract

We introduce a novel algorithm for generating
referring expressions, informed by human
and computer vision and designed to refer to
visible objects. Our method separates absolute
properties like color from relative properties
like size to stochastically generate a diverse
set of outputs. The algorithm mimics
the majority of human data in several visual
scenes, outperforming the well-known Incremental
Algorithm (Dale and Reiter, 1995) and
the Graph-Based Algorithm (Krahmer et al.,
2003; Viethen et al., 2008) across domains.
We additionally introduce a new evaluation
method that takes the proposed algorithm’s
non-determinism into account.
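The abstract's core idea — treating absolute properties such as color differently from relative properties such as size, and randomizing property selection to produce varied references — can be loosely illustrated with a sketch. This is not the paper's algorithm; the scene representation, property names, and greedy selection heuristic below are all assumptions made purely for illustration.

```python
import random

def relative_size(target, comparison_set):
    """Turn an absolute size value into a relative property.

    Relative properties cannot be read off the target alone: they depend
    on the set of distractors the target is being compared against.
    """
    others = [d["size"] for d in comparison_set]
    if all(target["size"] > s for s in others):
        return "large"
    if all(target["size"] < s for s in others):
        return "small"
    return None  # size does not distinguish the target

def generate_re(target, distractors, rng=random):
    """Greedily add discriminating properties until distractors are ruled out.

    Absolute properties (here: 'color', 'type') are tried in a random order,
    which is one simple way to get a diverse set of outputs across runs.
    """
    props = []
    remaining = list(distractors)

    # Absolute properties can be read straight off the target object.
    absolute = ["color", "type"]
    rng.shuffle(absolute)
    for attr in absolute:
        if not remaining:
            break
        still_confusable = [d for d in remaining if d.get(attr) == target.get(attr)]
        if len(still_confusable) < len(remaining):  # attr rules something out
            props.append(target[attr])
            remaining = still_confusable

    # Fall back to a relative property, computed against what is left.
    if remaining:
        rel = relative_size(target, remaining)
        if rel:
            props.append(rel)

    return " ".join(props)
```

For example, a red chair among blue chairs yields `"red"`, while a red chair among smaller red chairs yields `"large"`; because the absolute properties are shuffled, scenes where several properties discriminate can produce different expressions on different runs.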
Original language: English
Title of host publication: Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Place of publication: Atlanta, Georgia
Publisher: Association for Computational Linguistics
Publication status: Published - Jun 2013

Keywords

  • generation of referring expressions
  • stochastic generation
