‘Better analogies and frameworks are needed to understand the role of AI in strategic affairs’
Concerns about an artificial intelligence (AI) arms race have become increasingly frequent. There is speculation about how long it will take researchers to develop artificial general intelligence, or AGI, which refers to AI that can outperform human cognitive abilities and is hypothetically capable of solving problems beyond what it has been trained on.
While many people are writing about AI and its new and evolving capabilities, scholarship on how AI affects strategic affairs remains severely impoverished. A recent paper by Eric Schmidt, the former CEO of Google, Dan Hendrycks, and Alexandr Wang, the CEO of Scale AI, has been a high-profile contribution to this conversation. Still, some of its analysis seems inadequate.
Points that can be questioned
Whether AGI is on the horizon is a heated debate in itself, but Schmidt, Hendrycks and Wang are arguably right that if AGI does become a reality, states need to equip themselves to handle security threats and competition. Moreover, as a RAND commentary on the paper points out, its case for AI non-proliferation makes an important argument about keeping potentially dangerous technologies out of the hands of the wrong actors. However, some of the mechanisms the authors prescribe are worth questioning, and one of the central tenets of their paper — the idea that AI is comparable to nuclear weapons — falls short.
The authors propose the idea of MAIM, or Mutual Assured AI Malfunction, which is modelled on Mutual Assured Destruction (MAD). MAD is a stalemate-like condition between two nuclear-armed states in which a nuclear attack by one state would draw a counterattack of at least the same magnitude, leading to their mutual destruction. The comparison is flawed: MAD is a condition of mutual annihilation that follows from deploying nuclear weapons, whereas MAIM is a deterrence strategy meant to stop the wrong actors from developing superintelligent AI. Treating the two as equivalent can have dangerous implications for the way states draft policies.
The underlying assumption that states can destroy each other’s AI projects as they would physical weapons infrastructure ignores the fact that AI projects are, by and large, far more diffuse than nuclear programmes: their infrastructure is distributed, and individuals in many different places contribute to them.
Attempting to destroy the AI project of a terrorist group or rogue state could have many unintended consequences, including unnecessary escalation. The paper argues for the preemptive destruction of ‘rogue’ AI projects, but states do not have perfect surveillance and intelligence capabilities. Additionally, the idea of MAIM as deterrence, and the endorsement of sabotaging enemy technologies as a strategic action, can be used to justify overt military action.
An unfeasible proposal?
Another ambitious proposal the authors make is to control the distribution of AI chips as one would nuclear materials such as enriched uranium. What makes this unfeasible is the fundamental difference between the two technologies. Enriched uranium is scarce and must be physically produced, moved and secured, whereas a trained AI model needs no ongoing supply of controlled material to function and can be copied and run elsewhere, making supply chain controls far harder to enforce.
The paper also makes certain assumptions and leaps in reasoning that amount to worst-case scenarios without clear justification. First, the authors assume that AI-powered bioweapons and cyberattacks are inevitable unless states intervene early. However, while AI could theoretically lower the barriers to cyber-threats, it is too early to say whether it warrants being treated like a weapon of mass destruction. The assumption that the development of AI will be state-driven also feels like mere speculation. While there is some government oversight, today the private sector spearheads AI research and then makes technologies available to states and militaries for functions related to national security.
For policymakers grappling with a technology as dynamic as AI and the threat of superintelligence looming over them, it is important to leave behind strategy from another time. Historical comparisons can help make sense of current predicaments, but the nuclear analogy is an imperfect lens through which to look at AI, which is developed, distributed and deployed very differently from nuclear technologies. The comparison can lead to the assumption that deterrent mechanisms built for nuclear weapons would work similarly for superintelligent AI.
Need for more scholarship
Better analogies and frameworks are needed to understand the role of AI in strategic affairs. The General Purpose Technology (GPT) framework, which describes how technologies diffuse across sectors as they develop and become crucial to a state’s power, could be a better lens through which to look at AI. However, in its current state, AI does not yet meet the ‘general’ criterion of GPT diffusion theory: large language models (LLMs) still have severe limitations and are not yet diffused as widely as past general purpose technologies.
Increased scholarship on AI in strategic affairs is the only way to equip states to handle superintelligent AI if it becomes a reality. Yet whether and when that happens remain the two critical factors that will determine the direction of policymaking, since there is currently no way to know what superintelligent AI would be capable of.
Adya Madhavan is a research analyst at the Takshashila Institution
Published – April 18, 2025 12:08 am IST