Generating Expressions that Refer to Visual Objects

Margaret Mitchell, Kees van Deemter, Ehud Baruch Reiter

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We introduce a novel algorithm for generating referring expressions, informed by human and computer vision and designed to refer to visible objects. Our method separates absolute properties like color from relative properties like size to stochastically generate a diverse set of outputs. The algorithm mimics the majority of human data in several visual scenes, outperforming the well-known Incremental Algorithm (Dale and Reiter, 1995) and the Graph-Based Algorithm (Krahmer et al., 2003; Viethen et al., 2008) across domains. We additionally introduce a new evaluation method that takes the proposed algorithm's non-determinism into account.
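For context on the baseline the abstract mentions, the following is a minimal illustrative sketch of the classic Incremental Algorithm (Dale and Reiter, 1995): attributes are tried in a fixed preference order, and an attribute is added to the description only if it rules out at least one remaining distractor. The attribute names, the preference order, and the toy scene below are invented for illustration; this is not the paper's own stochastic method, which additionally distinguishes absolute from relative properties.

```python
def incremental_algorithm(target, distractors, preference_order):
    """Select attributes of `target`, in preference order, until every
    distractor object has been ruled out (Dale & Reiter, 1995, simplified)."""
    description = {}
    remaining = list(distractors)
    for attr in preference_order:
        value = target.get(attr)
        if value is None:
            continue
        # Keep the attribute only if it excludes at least one distractor.
        still_matching = [d for d in remaining if d.get(attr) == value]
        if len(still_matching) < len(remaining):
            description[attr] = value
            remaining = still_matching
        if not remaining:
            break
    return description

# Toy scene: refer to a small red chair among two other objects.
target = {"type": "chair", "colour": "red", "size": "small"}
distractors = [
    {"type": "chair", "colour": "red", "size": "large"},
    {"type": "table", "colour": "red", "size": "small"},
]
print(incremental_algorithm(target, distractors, ["colour", "size", "type"]))
# → {'size': 'small', 'type': 'chair'}, i.e. "the small chair"
```

Because the attribute order is fixed, this baseline is deterministic: it always produces the same description for the same scene. The paper's evaluation method is motivated by the opposite property, scoring algorithms that can produce a distribution over different valid descriptions, as humans do.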
Original language: English
Title of host publication: Proc of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Place of publication: Atlanta, Georgia
Publisher: Association for Computational Linguistics
Publication status: Published - Jun 2013

Keywords

  • generation of referring expressions
  • stochastic generation

Cite this

Mitchell, M., van Deemter, K., & Reiter, E. B. (2013). Generating Expressions that Refer to Visual Objects. In Proc of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Atlanta, Georgia: Association for Computational Linguistics.

@inproceedings{90eda48cef0f4948a9afcb78508fa3a8,
title = "Generating Expressions that Refer to Visual Objects",
abstract = "We introduce a novel algorithm for generating referring expressions, informed by human and computer vision and designed to refer to visible objects. Our method separates absolute properties like color from relative properties like size to stochastically generate a diverse set of outputs. The algorithm mimics the majority of human data in several visual scenes, outperforming the well-known Incremental Algorithm (Dale and Reiter, 1995) and the Graph-Based Algorithm (Krahmer et al., 2003; Viethen et al., 2008) across domains. We additionally introduce a new evaluation method that takes the proposed algorithm's non-determinism into account.",
keywords = "generation of referring expressions, stochastic generation",
author = "Margaret Mitchell and {van Deemter}, Kees and Reiter, {Ehud Baruch}",
year = "2013",
month = "6",
language = "English",
booktitle = "Proc of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
publisher = "Association for Computational Linguistics",

}
