Delegating strategic decision-making to machines: Dr. Strangelove redux?

James Johnson* (Corresponding Author)


Research output: Contribution to journal › Article › peer-review


Abstract

Will the use of artificial intelligence (AI) in strategic decision-making be stabilizing or destabilizing? What are the risks and trade-offs of pre-delegating military force (or automating escalation) to machines? How might non-nuclear state and non-state actors leverage AI to put pressure on nuclear states? This article analyzes the impact on strategic stability of the use of AI in the strategic decision-making process, in particular, the risks and trade-offs of pre-delegating military force (or automating escalation) to machines. It argues that AI-enabled decision support tools, by substituting for human critical thinking, empathy, creativity, and intuition in the strategic decision-making process, will be fundamentally destabilizing if defense planners come to view AI’s ‘support’ function as a panacea for the cognitive fallibilities of human analysis and decision-making. The article also considers the nefarious use of AI-enhanced fake news, deepfakes, bots, and other forms of social media by non-state actors and state proxy actors, which might cause states to exaggerate a threat from ambiguous or manipulated information, increasing instability.
Original language: English
Pages (from-to): 439-477
Number of pages: 40
Journal: The Journal of Strategic Studies
Volume: 45
Issue number: 3
Early online date: 30 Apr 2020
Publication status: Published - 2022

Keywords

  • Artificial intelligence
  • U.S.-China relations
  • nuclear security
  • deterrence policy
  • emerging technology
  • strategic stability

