How might nuclear deterrence be affected by the proliferation of artificial intelligence (AI) and autonomous systems? How might the introduction of intelligent machines affect human-to-human (and human-to-machine) deterrence? Are existing theories of deterrence still applicable in the age of AI and autonomy? This article builds on the rich body of work on nuclear deterrence theory and practice to highlight some of the variegated and contradictory effects of AI and autonomy on nuclear deterrence, especially those rooted in human cognitive psychology. It argues that existing theories of deterrence are not applicable in the age of AI and autonomy, and that introducing intelligent machines into the nuclear enterprise will affect nuclear deterrence in unexpected ways, with fundamentally destabilising outcomes. The article speaks to a growing consensus calling for conceptual innovation and novel approaches to nuclear deterrence, building on nascent post-classical deterrence theorising that considers the implications of introducing non-human agents into human strategic interactions.