A Dynamic Model of Trust in Dialogues

Gideon Ogunniye, Alice Toniolo, Nir Oren

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In human interactions, trust is regularly updated during a discussion. For example, if someone is caught lying, any further utterances they make will be discounted, until trust is regained. This paper seeks to model such behaviour by introducing a dialogue game which operates over several iterations, with trust updates occurring at the end of each iteration. In turn, trust changes are computed based on intuitive properties, captured through three rules. By representing agent knowledge within a preference-based argumentation framework, we demonstrate how trust can change over the course of a dialogue.
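
The abstract describes a dialogue game in which trust is revised at the end of each iteration and a less trusted speaker's utterances are discounted. The Python sketch below is purely illustrative of that loop: the Agent class, the numeric penalty and recovery values, and the simple preference rule are assumptions made for this example and do not reproduce the paper's three trust-update rules or its preference-based argumentation framework.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    trust: float = 1.0          # assumed initial trust level in [0, 1]
    caught_lying: bool = False  # set when an utterance is shown to be false

def end_of_iteration_update(agents):
    # Hypothetical update applied once per dialogue iteration:
    # a detected lie is penalised, honest behaviour slowly restores trust.
    for agent in agents:
        if agent.caught_lying:
            agent.trust = max(0.0, agent.trust - 0.5)
            agent.caught_lying = False
        else:
            agent.trust = min(1.0, agent.trust + 0.1)

def preferred_argument(proponent, opponent):
    # Toy preference relation: the more trusted agent's argument is preferred.
    return proponent.name if proponent.trust >= opponent.trust else opponent.name

a, b = Agent("A"), Agent("B")
b.caught_lying = True  # B is caught lying during the first iteration
for i in range(3):
    end_of_iteration_update([a, b])
    print(f"iteration {i}: trust(A)={a.trust:.1f}, trust(B)={b.trust:.1f}, "
          f"preferred: {preferred_argument(a, b)}")

In this toy run, the discounted agent regains trust gradually over subsequent iterations, mirroring the abstract's remark that lost trust can be regained.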

Original language: English
Title of host publication: Theory and Applications of Formal Argumentation
Subtitle of host publication: TAFA 2017
Editors: Elizabeth Black, Sanjay Modgil, Nir Oren
Place of Publication: Cham
Publisher: Springer Verlag
Pages: 211-226
Number of pages: 16
ISBN (Electronic): 978-3-319-75553-3
ISBN (Print): 9783319755526
DOIs: 10.1007/978-3-319-75553-3_15
Publication status: Published - 2018
Event: 4th International Workshop on Theory and Applications of Formal Argumentation (TAFA 2017) - Melbourne, Australia
Duration: 19 Aug 2017 - 20 Aug 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10757 LNAI
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 4th International Workshop on Theory and Applications of Formal Argumentation, TAFA 2017
Country: Australia
City: Melbourne
Period: 19/08/17 - 20/08/17

ASJC Scopus subject areas

  • Theoretical Computer Science
  • Computer Science (all)

Cite this

Ogunniye, G., Toniolo, A., & Oren, N. (2018). A Dynamic Model of Trust in Dialogues. In E. Black, S. Modgil, & N. Oren (Eds.), Theory and Applications of Formal Argumentation: TAFA 2017 (pp. 211-226). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10757 LNAI). Cham: Springer Verlag. https://doi.org/10.1007/978-3-319-75553-3_15

A Dynamic Model of Trust in Dialogues. / Ogunniye, Gideon; Toniolo, Alice; Oren, Nir.

Theory and Applications of Formal Argumentation: TAFA 2017. ed. / Elizabeth Black; Sanjay Modgil; Nir Oren. Cham: Springer Verlag, 2018. p. 211-226 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10757 LNAI).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Ogunniye, G, Toniolo, A & Oren, N 2018, A Dynamic Model of Trust in Dialogues. in E Black, S Modgil & N Oren (eds), Theory and Applications of Formal Argumentation: TAFA 2017. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10757 LNAI, Springer Verlag, Cham, pp. 211-226, 4th International Workshop on Theory and Applications of Formal Argumentation, TAFA 2017, Melbourne, Australia, 19/08/17. https://doi.org/10.1007/978-3-319-75553-3_15
Ogunniye G, Toniolo A, Oren N. A Dynamic Model of Trust in Dialogues. In Black E, Modgil S, Oren N, editors, Theory and Applications of Formal Argumentation: TAFA 2017. Cham: Springer Verlag. 2018. p. 211-226. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). https://doi.org/10.1007/978-3-319-75553-3_15
Ogunniye, Gideon; Toniolo, Alice; Oren, Nir. / A Dynamic Model of Trust in Dialogues. Theory and Applications of Formal Argumentation: TAFA 2017. editor / Elizabeth Black; Sanjay Modgil; Nir Oren. Cham: Springer Verlag, 2018. pp. 211-226 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).
@inproceedings{ca026ad99a4a41deac14f2aa3826e0d4,
title = "A Dynamic Model of Trust in Dialogues",
abstract = "In human interactions, trust is regularly updated during a discussion. For example, if someone is caught lying, any further utterances they make will be discounted, until trust is regained. This paper seeks to model such behaviour by introducing a dialogue game which operates over several iterations, with trust updates occurring at the end of each iteration. In turn, trust changes are computed based on intuitive properties, captured through three rules. By representing agent knowledge within a preference-based argumentation framework, we demonstrate how trust can change over the course of a dialogue.",
author = "Gideon Ogunniye and Alice Toniolo and Nir Oren",
year = "2018",
doi = "10.1007/978-3-319-75553-3_15",
language = "English",
isbn = "9783319755526",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer Verlag",
pages = "211--226",
editor = "Black, {Elizabeth } and Sanjay Modgil and Nir Oren",
booktitle = "Theory and Applications of Formal Argumentation",
address = "Germany",

}

TY - GEN

T1 - A Dynamic Model of Trust in Dialogues

AU - Ogunniye, Gideon

AU - Toniolo, Alice

AU - Oren, Nir

PY - 2018

Y1 - 2018

N2 - In human interactions, trust is regularly updated during a discussion. For example, if someone is caught lying, any further utterances they make will be discounted, until trust is regained. This paper seeks to model such behaviour by introducing a dialogue game which operates over several iterations, with trust updates occurring at the end of each iteration. In turn, trust changes are computed based on intuitive properties, captured through three rules. By representing agent knowledge within a preference-based argumentation framework, we demonstrate how trust can change over the course of a dialogue.

AB - In human interactions, trust is regularly updated during a discussion. For example, if someone is caught lying, any further utterances they make will be discounted, until trust is regained. This paper seeks to model such behaviour by introducing a dialogue game which operates over several iterations, with trust updates occurring at the end of each iteration. In turn, trust changes are computed based on intuitive properties, captured through three rules. By representing agent knowledge within a preference-based argumentation framework, we demonstrate how trust can change over the course of a dialogue.

UR - http://www.scopus.com/inward/record.url?scp=85043999809&partnerID=8YFLogxK

U2 - 10.1007/978-3-319-75553-3_15

DO - 10.1007/978-3-319-75553-3_15

M3 - Conference contribution

SN - 9783319755526

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 211

EP - 226

BT - Theory and Applications of Formal Argumentation

A2 - Black, Elizabeth

A2 - Modgil, Sanjay

A2 - Oren, Nir

PB - Springer Verlag

CY - Cham

ER -