Trusting Social Robots

Paula Sweeney (Corresponding Author)

Research output: Contribution to journal › Article › peer-review


Abstract

In this paper, I argue that we need a more robust account of our ability and willingness to trust social robots. I motivate my argument by demonstrating that existing accounts of trust, and of trusting social robots, are inadequate. I identify the façade, or deception, inherent in our engagement with social robots as the feature that both facilitates trust and threatens to undermine it. Finally, I utilise the fictional dualism model of social robots to clarify that trust in social robots, unlike trust in humans, must rely on an independent judgement of product reliability.
Original language: English
Pages (from-to): 419-426
Number of pages: 8
Journal: AI and Ethics
Volume: 3
Issue number: 2
Early online date: 24 May 2022
DOIs
Publication status: Published - May 2023

Keywords

  • Artificial agents
  • Social robots
  • Trust
  • Fictional dualism
  • Reliability
