
Lead, Don’t Lag: America Must Maintain First-Mover Advantage in Militarized AI

  • Samuel Chen
  • 2 days ago
  • 4 min read

Updated: 4 hours ago


When it becomes clear that a technology is vital to national security, any competent government has a moral and practical imperative to pursue it to the fullest extent it can. In a perfect world where every country dealt in good faith, a rational person could argue for limits on AI integration into the military. But we live in an anarchic, deceitful world that punishes that kind of naivety. AI-enabled targeting, autonomous drones, and machine-speed command, control, communications, and intelligence (C3I) are coming whether we like it or not. The only real question is whether those capabilities mature first under American auspices or under governments opposed to America’s current global position.

Critics will respond that AI is different from other civilizationally disruptive technologies, like nuclear weapons, or that it fundamentally threatens a broader notion of the human condition. Some say policymakers should assume the worst about AI in the military. There should indeed be people advocating for proper checks and introspection. But that is not an argument for restraint: it is an argument for building serious AI capability and military doctrine now, before we find ourselves on the wrong end of a crisis. Here are the reasons the United States should deepen and accelerate its leadership in military AI.


Capability First, Deterrence Philosophy Later

Deterrence has never been about good intentions or caution for its own sake. It lives or dies on strength and capability. During the early atomic age, the U.S. did not wait for a global town hall on nuclear ethics; it built a working arsenal first, then coaxed (and coerced) others toward test bans, nonproliferation, and arms control from a position of strength. The Limited Test Ban Treaty (LTBT) of 1963 came only after the U.S., the USSR, and the other early nuclear powers had arsenals large enough to serve as working leverage for a mutual agreement. Every arms-control success story rests on an unromantic foundation: fear of the alternative. The LTBT and the Non-Proliferation Treaty (NPT) of 1968 locked in unequal nuclear rights only because non-nuclear states understood that the genie was already out of the bottle. Test bans and strategic arms limits followed only after both superpowers had proved they could annihilate each other.


Military AI is simply the next rung on that ladder. Whether or not we choose to incorporate AI into our arsenal, China and Russia already treat it as a decisive lever of national power. Beijing is explicit about AI “overtaking” the West, and Moscow talks openly about whoever leads in AI “ruling the world.” If the U.S. artificially slows itself while adversaries sprint, we may be morally satisfied with an AI-free military, at the cost of an era in which illiberal regimes antithetical to American interests hold the deadliest AI tools. Any meaningful attempt to restrict autonomous weapons, set red lines around critical infrastructure, or govern AGI research internationally will require the U.S. to have real capability leverage. The risks are real, but the answer isn’t to disarm ourselves in the hope that Beijing reciprocates. We tried something similar with “peaceful use” norms in space, only to watch anti-satellite capabilities proliferate anyway.


Human-Machine Teaming Can Make War More Discriminating

One of the best things AI offers the military is the ability to gather and interpret data, giving our warfighters a new tool to prosecute targets effectively. Good AI can provide second opinions that help distinguish soldiers from civilians, but only if the U.S. invests in AI-enhanced intelligence, surveillance, and reconnaissance (ISR) and targeting. That investment can make our use of force more precise and more accountable.

Many critics rightly point out that the biggest risks lie in hypothetical true autonomy, with no human in the loop. Any mistake made by such a system would be genuinely tragic and would demand deep research into why it happened, but we have to analyze AI in the context of what it works alongside: human judgment. Humans are also famously flawed and biased, yet that doesn’t stop us from holding soldiers accountable while employing their talent. AI cannot be compared one-to-one with human judgment. Instead of arguing for one or the other, the real solution is to keep humans involved in the targeting process while designing AI to augment our decision-making with good data analysis. The Pentagon already grants significant leeway for developing highly autonomous AI systems, but it would be wise to establish keeping humans involved as an ethical norm in combat. The goal shouldn’t be zero mistakes in combat with AI, but fewer mistakes in total than either purely human or purely AI-driven systems would make.


Build First, Then Bind on American Terms

The more credible American power in militarized AI becomes, the more tools Washington has: it can offer joint safety standards from strength, tie export controls to verifiable behavior, and punish states that violate agreed red lines. Self-imposed limits, with no credible threat of outpacing our adversaries, simply invite opportunism. Any future “Military AI Limitation” agreement will first require technological dominance that makes others recognize their own self-interest in a deal. We don’t have the luxury of treating military AI as an elective seminar topic whose implementation can be deferred until ivory-tower humanitarians reach consensus. Our adversaries aren’t waiting, and international law has rarely led technology; it usually limps behind, trying to cage what is already loose. We cannot read minds in Beijing or Moscow to determine their ethical boundaries on militarized AI, but we can at least be sure of our own intention to give ourselves the option of AI dominance. The strategic direction is clear: the U.S. should lead AI militarization not because AI is safe, but precisely because it is unsafe, and far safer under our imperfect constraints than in the hands of adversaries who would otherwise dictate military AI norms.
