While the US advances what it calls a “maximum lethality, not tepid legality” doctrine in its war with Iran, it is simultaneously conducting the first large-scale test of its AI-focused military ecosystem, offering an invaluable insight into how AI may change warfare forever.
The 2026 Iran war is serving as one of the first field tests of an AI-integrated military machine. As of April 9th, more than 13,000 targets have been struck under Operation Epic Fury, 1,000 of which were hit on the opening day alone, according to US Central Command (CENTCOM). Behind the volume of these strikes lies a system designed to compress targeting decisions that once took days into seconds, raising profound questions about speed, accountability, and the cost in civilian lives.
At the centre of the US military’s AI strategy lies the Maven Smart System (MSS). The platform, developed by software company Palantir, grew out of Project Maven, a Pentagon initiative established in 2017 that uses computer vision algorithms to analyse radar, video, and satellite imagery for the purpose of target identification. Originally adopted by the National Geospatial-Intelligence Agency (NGA), it was designated as a formal program of record in 2023.
The MSS integrates the NGA’s mapping data into a unified mission control platform, giving commanders a live, synchronised view of the battlefield. Beneath the interface, machine-learning models analyse incoming data, classify objects, and assign confidence scores to potential detections. Once a target is formally identified, the system moves it through a targeting pipeline, recommending strike options, suggested weaponry, and ranked courses of action. A human officer reviews these recommendations and either authorises a strike or forwards the target package for further approval.
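The public description of this pipeline, confidence-scored detections flowing through a human approval gate, can be made concrete in a short sketch. The Python below is purely illustrative: every name, threshold, and data structure is our assumption, based only on the account above, not on Palantir’s actual code.

```python
from dataclasses import dataclass, field
from typing import Callable, Iterator, List, Tuple

@dataclass
class Detection:
    object_id: str
    object_class: str               # e.g. "vehicle", "radar_site"
    confidence: float               # model-assigned score in [0, 1]
    location: Tuple[float, float]   # (lat, lon)

@dataclass
class TargetPackage:
    detection: Detection
    strike_options: List[str] = field(default_factory=list)  # ranked courses of action

REVIEW_THRESHOLD = 0.85  # illustrative cutoff, not a documented MSS parameter

def classify(sensor_frames: list) -> List[Detection]:
    """Stand-in for the computer-vision stage: score objects in incoming imagery."""
    # A real system would run ML models here; this stub returns canned examples.
    return [Detection("obj-001", "vehicle", 0.92, (27.1, 56.3)),
            Detection("obj-002", "building", 0.41, (27.2, 56.4))]

def build_package(det: Detection) -> TargetPackage:
    """Stand-in for the recommendation stage: attach ranked strike options."""
    return TargetPackage(det, strike_options=["course_of_action_1", "course_of_action_2"])

def run_pipeline(frames: list,
                 human_review: Callable[[TargetPackage], bool]) -> Iterator[TargetPackage]:
    for det in classify(frames):
        if det.confidence < REVIEW_THRESHOLD:
            continue                   # low-confidence detections are not surfaced
        package = build_package(det)
        if human_review(package):      # the human gate: nothing proceeds without sign-off
            yield package              # authorised; otherwise escalated or dropped

# Usage: an officer callback that, in this demo, declines every package.
authorised = list(run_pipeline([], human_review=lambda pkg: False))
```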
Anthropic’s Claude, a large language model, has been integrated into the MSS to translate intelligence reports into plain language for officers and analysts. However, on March 4th, the US Department of Defense (DoD) blacklisted Anthropic, citing it as a national security supply chain risk, marking the first time the US government has used this designation against an American company. President Trump ordered a phasing out of Anthropic tools within the next six months, a decision that followed Anthropic’s refusal to permit its AI to be used for mass domestic surveillance or fully autonomous weapons systems.
Project Maven, prior to its full-scale rollout in Iran, was an ambitious experiment trialled on the frontlines of the Russia-Ukraine war. US Lt. Gen. Christopher T. Donahue described the conflict as Project Maven’s “laboratory”: a live experiment testing the system’s ability to filter intelligence on enemy troop movements and communications into an accessible, user-friendly platform that informed Ukrainian commanders’ strategic decisions.
The version supplied to Ukraine gave command a clearer picture of the battlefield but deliberately excluded sensitive information and the most advanced US system capabilities, falling short of delivering precise targeting data.
While the frontlines of Ukraine operated as Project Maven’s workshop, Israel’s military campaign in Gaza offered a preliminary look at AI’s potential as a primary targeting mechanism. Israel’s Defense Ministry Director-General Eyal Zamir described AI as a “complete game-changer” in eliminating targets more effectively.
The IDF integrated two AI systems: Lavender and Habsora (The Gospel). Reportedly, the Gospel has been used to select and prioritise physical strike targets such as buildings and infrastructure, while Lavender has been used to generate human targets, providing a database that at one point identified 37,000 potential targets, predominantly suspected junior Palestinian Islamic Jihad (PIJ) or Hamas militants who fit the category of “military-aged male”.
Together, Lavender and the Gospel fed targeting intelligence reports to analysts. If an analyst determined that an individual or object qualified as a legitimate target, the determination was passed to a higher-level intelligence officer, who could then confirm the strike.
The Maven Smart System’s primary function is to compress the “kill chain”: the process running from intelligence gathering to a completed strike. Shortening this chain to increase first-strike effectiveness, achieve decision superiority, and maximise lethality has been a key strategic aim of the US military since the advent of long-range weaponry in the 20th century.
In 2000, the then-head of the US Air Combat Command, John P. Jumper, stated his objective to reduce the time taken for the destruction of emerging targets to “single-digit minutes”. This benchmark has long since been surpassed.
The MSS has been tested since 2020 in a live exercise called Scarlet Dragon to evaluate how significantly AI tools can streamline the process. The results were striking: 20 soldiers using the system were able to handle a targeting workload equivalent to that managed by 2,000 personnel during the 2003 invasion of Iraq. A new objective was set to achieve 1,000 tactical decisions per hour, or one every 3.6 seconds.
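The arithmetic behind these benchmarks is straightforward and worth making explicit:

```python
# The benchmarks above, made explicit.
decisions_per_hour = 1_000
print(3_600 / decisions_per_hour)   # 3.6 seconds per targeting decision

# Scarlet Dragon staffing: 20 soldiers handling a workload previously
# managed by 2,000 personnel.
print(2_000 / 20)                   # a 100x per-operator throughput multiplier
```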
Brad Cooper, head of US Central Command and leader of the war effort, seemingly confirmed the benchmark has been reached while disclosing the use of AI systems in the war, stating that “processes that used to take hours and sometimes days” were now being carried out in seconds.
Kill chain expert Craig Jones believes that the MSS has enabled decision compression to such a level that it is now “much quicker in some ways than the speed of thought”. Operations that would have unfolded over weeks in previous conflicts, like the coordination of leadership strikes paired with large-scale ballistic missile barrages, were executed rapidly and simultaneously.
The 2026 Iran war has recorded more than 1,700 civilian deaths according to the US-based Human Rights Activists News Agency (HRANA), approximately 15 per cent of whom are reported to be children.
The single deadliest incident, a strike on Sharejeh Tayebeh Primary School in Minab, killed 175 people, the majority of them schoolgirls aged between 7 and 12. Investigations conducted by the BBC, NPR, CBC and the New York Times concluded that the strike was most likely carried out by US forces, a finding supported by sources cited in internal military inquiries. A preliminary investigation reported by the New York Times attributed the strike to outdated intelligence.
Satellite imagery indicated the school had previously shared a compound with an Islamic Revolutionary Guard Corps (IRGC) base before the two were separated in 2016. The school had an active website and a publicly visible presence on Google Maps at the time of the strike.
The precise role, if any, that the MSS played in the incident is unconfirmed, and investigations remain ongoing. Yet the incident raises questions about the oversight limits of a system optimised to process 1,000 targeting decisions an hour, and about its structural capacity to recognise outdated intelligence.
Critics argue that reducing targeting decisions from hours to seconds fundamentally limits the space for legal review and deliberation. International lawyer Davit Khacatryan, writing for the Centre for International Policy, has argued that preventing civilian harm requires “hard-wired limits on where and when systems may operate, relentless stress-testing under a wide variety of conditions, and the retention of a genuine human veto at every stage”.
These limits sit in direct tension with the central objective of AI-driven targeting models: perfecting operational speed. US Secretary of War Pete Hegseth, speaking on Operation Epic Fury, declared that “America is winning decisively, devastating and without mercy”, pursuing “maximum lethality, not tepid legality” and “violent effect, not politically correct”. The MSS’s deployment in this arena of maximum lethality thus raises concerns about the relationship between operational efficiency and the value placed on civilian lives.
Further worries have been raised regarding automation bias: the human propensity to prefer suggestions from automated decision-making systems. When deliberation is treated as latency, human operators’ oversight may be eroded by pressure to meet strike-efficiency targets, leading them to defer to the system’s targeting recommendations without critical evaluation.
A report by the Israeli publications +972 Magazine and Local Call detailed testimony from six intelligence officers about the inner workings of the Israeli AI tools. One IDF intelligence officer who used Lavender anonymously described his own automation bias, stating that while every operator lost people on October 7th, which added emotion to the decision-making process, “the machine did it coldly, and that made it easier”. Another Lavender operator admitted: “I would invest 20 seconds for each target at this stage and do dozens of them every day. I had zero value as a human, apart from being a stamp of approval. It saved a lot of time”. Those 20 seconds were spent solely checking that the marked targets were male, as Hamas and the PIJ do not recruit women.
While the Middle East has emerged as the epicentre of AI-facilitated precision targeting, the Ukraine-Russia conflict has served as the world’s foremost testing ground for drone innovation and the backdrop for the race towards fully autonomous weapons systems.
The Ukrainian military’s central objective is the replacement of humans with fully autonomous battle systems. Ukraine’s Deputy Defense Minister Yuriy Myronenko has stated that while fully autonomous weapons systems do not yet exist, Ukraine has “partially implemented it in some devices”.
This is in contrast to the early stages of the conflict, when the Ukrainian military experimented with drones as surveillance tools on the front line before recognising their utility in offsetting its personnel disadvantage, attaching explosives to cheap commercial frames and developing a fleet of inexpensive killing machines.
Drones quickly developed from one tool in the Ukrainian arsenal into its primary weapon of war, and the military reorganised its entire war effort around the technology. In March 2026, drones accounted for 96% of Russia’s 35,551 battlefield casualties. In 2025 alone, Ukrainian drones killed or seriously injured more than 240,000 Russian soldiers, according to Defense Minister Mykhailo Fedorov. President Zelensky’s stated goal is to target 50,000 soldiers a month, a number that the Ukrainian military believes, if reached, will offset Russia’s personnel advantage.
Ukraine’s Defence Ministry announced one such advancement in its tactical approach when it revealed that a “new model of warfare is being introduced” on the Ukrainian front lines. The basis for the model is drone assault units, which “combine aerial and ground drones with infantry into a single system”. This new tactical approach was announced after Ukraine achieved the first-ever capture of an enemy position using exclusively robotic and unmanned systems.
The Martians, Ukraine’s new-generation drone fleet, are rumoured to possess elements of autonomy. Unlike previous Ukrainian drone generations that relied on GPS navigation and radio-frequency control, both of which are vulnerable to Russian jamming, the Martians navigate by processing real-time visual data against terrain features, requiring no GPS signal and no radio operator. The Martians reportedly fly at up to 300 km per hour and are able to evade Russian drone-detection systems entirely.
Although it is still unclear whether the Martians have autonomy regarding their final targeting decisions, it has been confirmed that Ukrainian forces have used AI tools that allow drones to lock onto targets and fly autonomously over the final few hundred metres to counteract Russian drone defense measures.
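Navigating by matching live camera imagery against stored terrain data, as attributed to the Martians, resembles classical terrain-relative navigation. The sketch below shows the core operation, sliding a camera patch across a reference map and scoring each position with normalised cross-correlation; this is a textbook technique offered as an assumption, not the fleet’s actual algorithm.

```python
import numpy as np

# Illustrative terrain-relative navigation: estimate position by finding
# where the live camera patch best matches a stored reference map.
# A textbook method, not any fielded system's implementation.

def ncc(patch: np.ndarray, window: np.ndarray) -> float:
    """Normalised cross-correlation between two equally sized arrays."""
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    w = (window - window.mean()) / (window.std() + 1e-9)
    return float((p * w).mean())

def locate(camera_patch: np.ndarray, reference_map: np.ndarray) -> tuple:
    """Slide the patch over the map; return the best-matching (row, col)."""
    ph, pw = camera_patch.shape
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(reference_map.shape[0] - ph + 1):
        for c in range(reference_map.shape[1] - pw + 1):
            score = ncc(camera_patch, reference_map[r:r + ph, c:c + pw])
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos  # map coordinates imply position: no GPS, no radio link

# Usage: a synthetic terrain grid and a patch cut from it at a known offset.
rng = np.random.default_rng(0)
terrain = rng.random((60, 60))
assert locate(terrain[20:28, 31:39].copy(), terrain) == (20, 31)
```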
While Russia is reportedly behind Ukraine in the race towards fully autonomous weapons systems, its Lancet fleet, kamikaze drones that have conducted more than 4,000 strikes on Ukrainian military hardware since July 2022, has been upgraded with an AI targeting module built on Nvidia’s Jetson platform. The system processes camera imagery locally, runs object recognition algorithms without relaying data to a remote operator, and is capable of autonomously selecting targets from pre-set categories. Lancet drones can relay targeting data between themselves about armoured vehicle concentrations and engage autonomously in coordinated sequences.
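Selecting targets from pre-set categories with on-board recognition amounts, in essence, to a class-and-confidence gate applied to detector output on the airframe itself. A minimal sketch of that gating logic follows; the stub detector, category list, and threshold are all hypothetical, standing in for the reported Jetson-based module.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical pre-set engagement categories and confidence threshold.
ALLOWED_CLASSES = {"tank", "self_propelled_artillery", "radar"}
ENGAGE_CONFIDENCE = 0.90

@dataclass
class Detection:
    object_class: str
    confidence: float

def onboard_detector(frame) -> List[Detection]:
    """Stub for the local object-recognition model running on the airframe."""
    return [Detection("tank", 0.95), Detection("truck", 0.88)]

def select_target(frame) -> Optional[Detection]:
    # Keep only detections whose class is pre-authorised and whose score
    # clears the threshold; no imagery or decision data leaves the drone.
    candidates = [d for d in onboard_detector(frame)
                  if d.object_class in ALLOWED_CLASSES
                  and d.confidence >= ENGAGE_CONFIDENCE]
    return max(candidates, key=lambda d: d.confidence, default=None)

print(select_target(frame=None))  # Detection(object_class='tank', confidence=0.95)
```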
Russia’s new Strategy for the Development of Unmanned Aviation also confirms that it is actively pursuing swarm technology: drones capable of operating together autonomously, coordinating attacks without human instruction at the unit level.
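Coordinating attacks without human instruction at the unit level is usually framed as a distributed task-allocation problem. One standard approach is a greedy auction in which each drone bids its distance to each target and the lowest bidder wins; the generic sketch below assumes perfect inter-drone communication and has no relation to any fielded Russian system.

```python
import math
from typing import Dict, Tuple

Point = Tuple[float, float]

def auction(drones: Dict[str, Point], targets: Dict[str, Point]) -> Dict[str, str]:
    """Greedy auction: each target goes to the closest still-unassigned drone."""
    assignment: Dict[str, str] = {}
    free = set(drones)
    for target, tpos in targets.items():
        if not free:
            break
        # Every free drone "bids" its distance; the lowest bid wins the target.
        winner = min(free, key=lambda d: math.dist(drones[d], tpos))
        assignment[winner] = target
        free.remove(winner)
    return assignment

# Usage: three drones splitting two objectives with no central commander.
swarm = {"d1": (0.0, 0.0), "d2": (5.0, 5.0), "d3": (9.0, 1.0)}
objectives = {"t1": (8.0, 2.0), "t2": (1.0, 1.0)}
print(auction(swarm, objectives))  # {'d3': 't1', 'd1': 't2'}
```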
In 2017, Russian President Vladimir Putin, when discussing AI, declared that “whoever becomes the leader in this sphere will become the leader in the world”, and current military strategy and spending reflect this.
The US Pentagon’s FY2026 budget request includes $13.4 billion specifically dedicated to AI-facilitated autonomous systems, $9.4 billion of which is earmarked for unmanned aerial vehicles. China has not revealed its spending on equivalent systems, but it is estimated to be comparable. At the 2024 Zhuhai Airshow, Norinco, a prominent Chinese defence manufacturer, debuted an entire brigade of armoured vehicles and drones controlled and operated by AI. On January 23rd 2026, a broadcast of a drone swarm operation by the PLA’s National University of Defence Technology showed one soldier operating a formation of 200 autonomous drones. The Pentagon is reportedly concerned that it cannot match the speed or scale of China’s manufacturing dominance in autonomous weapons.
The race towards autonomous weapon systems raises questions as to accountability, responsibility, and compatibility with international law. These concerns were tabled by Austria and a group of 30 co-sponsoring states in a 2025 UN General Assembly (UNGA) resolution, which contended that artificial intelligence and autonomy in weapons systems raise serious challenges from “humanitarian, legal, security, technological and ethical perspectives” by undermining the role of humans in the use of armed force. The resolution passed with 156 states in favour; Russia, Israel and the US voted against, while China abstained.
Anthropic CEO Dario Amodei explained that the company rejected the Pentagon’s request to amend the terms of use for its LLM, Claude, to permit its deployment within fully autonomous weapon systems, because “frontier AI systems are simply not reliable enough to power fully autonomous weapons” and the regulatory mechanisms required to ensure compliance with international law do not yet exist: a clear signal that the rate of technological development is exceeding the ability to govern it.
Nicole Van Rooijen, Executive Director of Stop Killer Robots, a coalition of non-governmental organisations campaigning to ensure “human control in the use of force”, stated in response to Amodei’s comments that “when the companies building these technologies are themselves refusing to deploy it on safety grounds, it must raise alarm bells for governments and people everywhere”.
While Anthropic is distancing itself from the AI arms race, other software companies are further embedding themselves into military infrastructures. NATO signed a contract on 25 March 2025 with Palantir to adopt its own version of the Maven Smart System, enabling a “common data-enabled warfighting capability”, placing Palantir’s AI targeting architecture at the centre of the Western military alliance.
In December 2025, the UK Ministry of Defence awarded Palantir a $325 million three-year contract for data analytics supporting “critical strategic, tactical and live operational decision making across classifications”, along with a broader strategic partnership worth up to $2 billion.
Palantir CEO Alex Karp has been clear in his belief that the AI arms race represents another “Oppenheimer moment”, recognising AI weapons’ potentially world-ending capabilities and urgently calling for “building the technical architecture and regulatory frameworks” to ensure human control over AI systems. Yet Palantir continues to embed itself more deeply into military architectures while the governing law on AI remains in its infancy. It does so because inaction, Karp argues, will be punished by adversaries who do not “indulge in theatrical debates” about the morality, legality, or dangers of the technologies being developed.
On 18th April, Palantir released what many are labelling a manifesto on X, titled The Technological Republic. The post distils Karp’s book of the same title into twenty-two key points, which include claims that Silicon Valley should be obligated to participate in the defence of the nation and a call for mandatory national conscription. The post also argues that some cultures are “worse, regressive and harmful” and that pluralism should therefore be rejected. Karp goes on to argue that hard power based on software will be the key for free and democratic societies to prevail, and that the new era of deterrence will be built on AI, not atomic power, fundamentally positioning Palantir as a key player in the new AI arms race.
However, it is not just states and private software companies that are participating in this race. There is growing concern regarding the proliferation of AI-integrated weapon systems into the hands of terrorists. Between 2018 and September 2024, more than 810 drone attacks were carried out by non-state actors. First-person view (FPV) drones are increasingly being used to carry out strikes in West Africa. An al-Qaeda affiliate in Burkina Faso and Mali has conducted at least 69 drone strikes since 2023, while two Islamic State affiliates have carried out at least 20 strikes, primarily in Nigeria.
In October 2023, a terrorist attack in Syria used explosive-laden drones to kill at least 89 soldiers and civilians and wound at least 240 others. According to the 2026 Global Terrorism Index, the Revolutionary Armed Forces of Colombia (FARC) and the National Liberation Army (ELN) have adopted drone warfare, taking direct inspiration from innovation in Ukraine, with 77 attacks recorded between 2024 and 2025, killing 10 people and wounding more than 134.
According to ACLED, while only 10 non-state armed groups had access to drone weaponry in 2010, 469 groups deployed drones in attacks across 17 countries in 2025, 58 of them for the first time that year. The development of autonomous weapon systems risks the same democratisation. Drone technology is advancing rapidly, and history shows that such capabilities consistently proliferate into terrorists’ hands as they become more accessible and their barriers to entry fall. Paul Scharre, an autonomous weapons expert, believes we are possibly entering a future where the “technology to build lethal autonomous weapons is available to not only nation-states but to individuals as well”.
There are historical precedents visible in the AI arms race: private industry has long been intertwined with defence, and advances in technology have consistently reshaped both the conduct of war and the actors capable of waging it. However, the pace and nature of AI integration present a distinct challenge. Military capabilities are evolving faster than the legal and governance frameworks designed to regulate them, and the role of human judgement within these systems is becoming less certain. Whether humans will maintain meaningful oversight over weapon systems, or simply serve as a rubber stamp approving automated decisions, is becoming increasingly unclear as the AI arms race intensifies.