In Search of a Goldilocks Zone for Credible AI

Kevin Allan*, Nir Oren, Jacqui Hutchison, Douglas Martin

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


If artificial intelligence (AI) is to help solve individual, societal and global problems, humans should neither underestimate nor overestimate its trustworthiness. Situated in between these two extremes is an ideal ‘Goldilocks’ zone of credibility. But what will keep trust in this zone? We hypothesise that this role ultimately falls to the social cognition mechanisms which adaptively regulate conformity between humans. This novel hypothesis predicts that human-like functional biases in conformity should occur during interactions with AI. We examined multiple tests of this prediction using a collaborative remembering paradigm, where participants viewed household scenes for 30 s vs. 2 min, then saw 2-alternative forced-choice decisions about scene content originating either from AI or human sources. We manipulated the credibility of different sources (Experiment 1) and, from a single source, the estimated likelihood (Experiment 2) and objective accuracy (Experiment 3) of specific decisions. As predicted, each manipulation produced functional biases for AI sources mirroring those found for human sources. Participants conformed more to higher-credibility sources, and to higher-likelihood or more objectively accurate decisions, becoming increasingly sensitive to source accuracy when their own capability was reduced. These findings support the hypothesised role of social cognition in regulating AI’s influence, raising important implications and new directions for research on human–AI interaction.
Original language: English
Article number: 13687
Journal: Scientific Reports
Early online date: 1 Jul 2021
Publication status: E-pub ahead of print - 1 Jul 2021
