Artificial intelligence in nuclear warfare: a perfect storm of instability?

Research output: Contribution to journal › Article › peer-review


Abstract

A significant gap exists between the expectations and fears of public opinion, policymakers, and global defense communities about artificial intelligence (AI) and its actual military capabilities, particularly in the nuclear sphere. The misconceptions that exist today are largely caused by the hyperbolic depictions of AI in popular culture and science fiction, most prominently the Skynet system in The Terminator. Misrepresentations of the potential opportunities and risks in the military sphere (or “military AI”) can obscure constructive and crucial debate on these topics—specifically, the challenge of balancing the potential operational, tactical, and strategic benefits of leveraging AI, while managing the risks posed to stability and nuclear security. This article demystifies the hype surrounding AI in the context of nuclear weapons and, more broadly, future warfare. Specifically, it highlights the potential, multifaceted intersections of this disruptive technology with nuclear stability. The inherently destabilizing effects of military AI may exacerbate tension between nuclear-armed great powers, especially China and the United States, but not for the reasons you may think.
Original language: English
Pages (from-to): 197-211
Journal: The Washington Quarterly
Volume: 43
Issue number: 2
DOIs
Publication status: Published - 16 Jun 2020
