The War in Iran Opens a New Era: AI-Powered Warfare

Written by: Adel Khelifi on March 11, 2026

Wars have always been technological accelerators. The First World War brought the industrial mechanization of violence. The Second World War cemented the dominance of aviation and strategic bombing. The Cold War was dominated by the balance of nuclear terror. More recently, the war in Ukraine popularized kamikaze drones and swarm warfare.

The current conflict around Iran could well inaugurate a new phase in military history: the war driven by artificial intelligence.

For the first time, AI no longer merely assists analysts or optimizes logistics. It directly participates in the strategic chain that runs from intelligence analysis to the decision to strike a target.

When artificial intelligence replaces analysts

For decades, the work of intelligence services relied on patient human labor. Analysts spent hours comparing satellite images, listening to intercepted communications, analyzing seized documents, and cross-checking testimonies from human sources on the ground.

Cross-referencing all this information took time, mobilized entire teams, and involved substantial costs. Military decisions were built gradually, sometimes over several days, or even weeks.

Today, this process can be compressed to an unprecedented speed thanks to artificial intelligence.

As part of the operation against the Iranian command apparatus, several AI technologies were reportedly mobilized to analyze vast volumes of data. Among them is Claude, an artificial intelligence model developed by the American company Anthropic and integrated into certain analysis platforms used by Palantir.

This system reportedly processed thousands of documents in Persian as well as hours of intercepted communications between Iranian military officials. The algorithm could then identify, in record time, inconsistencies, breaks in the chain of command, and clues enabling the location of decision centers.

In a few seconds, AI can now propose several possible military action scenarios, ranked according to their probability of success. Artificial intelligence does not officially make the final decision, but it prepares the decision by synthesizing vast amounts of information that no human analyst could process at such speed.

In other words, the machine now does most of the heavy lifting in strategic work.

Palantir, Anthropic, and the new industry of algorithmic warfare

Behind this transformation are new technology companies that are playing an increasingly central role in Western military architecture.

Palantir is often described as one of the software brains behind this type of operation. Co-founded by Peter Thiel, an ally of Donald Trump, the company specializes in platforms for analyzing massive data sets, intended for intelligence services and the armed forces.

These platforms allow instant cross-referencing of information streams coming from satellites, electronic eavesdropping, databases, or human reports.

To bolster these capabilities, Palantir relies on advanced AI models like Claude, developed by Anthropic. This type of language model can analyze documents, detect patterns in intercepted conversations, and reconstruct relational networks between individuals or institutions.

In a military operation, these tools make it possible to rapidly identify the weak links in a command-and-control system and to anticipate the movements or communications of strategic targets.

Artificial intelligence thus transforms intelligence into an almost instantaneous process.

Autonomous drones: the other revolution

Yet analysis is only one part of the ongoing technological revolution. The other transformation lies in the execution phase.

In the context of the operation against the Iranian security apparatus, swarms of autonomous drones were reportedly used to locate and lock onto certain targets.

These drones are developed by defense companies such as Anduril, a technology company specializing in autonomous military systems.

Unlike traditional drones remotely piloted by a human operator, these devices can act largely autonomously. They are capable of identifying a target, coordinating with other drones in a swarm, and adapting their formation according to obstacles, threats, or defense systems encountered.

Once the target is identified, the system can proceed with the final lock-on.

In some cases, the machine leaves only a very short window for a human operator to validate or abort the attack. For certain models cited by the manufacturers themselves, this window may be on the order of twenty seconds.

This extremely short time window raises a major question. Can one truly speak of a human decision when it has to be made within a few seconds, amid a complex and often incomplete flow of information?

The debate on autonomous weapons

Proponents of these technologies advance an argument that may seem paradoxical. According to them, weapons equipped with artificial intelligence could be more precise and therefore potentially less dangerous than completely blind systems.

One of the founders of a company involved in these technologies summed up this reasoning: the real danger would rather be deploying weapons devoid of intelligence, unable to distinguish a military vehicle from a bus full of civilians.

According to this logic, the choice is between smart weapons and dumb weapons.

But for many experts and jurists, the problem lies elsewhere. When machines participate directly in the targeting and decision-making process, moral and legal responsibility becomes much more difficult to establish.

Who is responsible in case of error? The programmer, the operator, the company, the army, or the algorithm itself?

A dizzying acceleration of war

The real break introduced by artificial intelligence may lie less in the precision of weapons than in the speed of war.

AI makes it possible to drastically shorten the time between intelligence gathering and military action. What used to take days can now be accomplished in a few minutes.

This acceleration profoundly transforms the nature of modern conflicts. Decision chains become shorter, operations faster, and strikes more numerous.

But this new speed also increases the risk of errors, uncontrolled escalation, and decisions made under pressure.

In densely populated urban environments, where civilians and fighters mingle, a twenty-second validation window can seem terribly short.

The dawn of algorithmic warfare

If the trends observed in this conflict are confirmed, military history may remember this moment as the beginning of a new era.

After mechanized warfare, aerial warfare, nuclear warfare and drone warfare, the world could enter the era of algorithmic warfare.

A war where machines analyze data, propose targets and accelerate human decision-making.

A war where humans officially stay in the loop, but where thinking time is reduced to a few seconds.

And a war where the boundary between technological assistance and machine autonomy becomes increasingly hard to draw.

In this new strategic reality, the question is no longer only what weapons states possess. It is now which artificial intelligences they are capable of deploying.

Adel Khelifi

My name is Adel Khelifi, and I’m a journalist based in Tunis with a passion for telling local stories to a global audience. I cover current affairs, culture, and social issues with a focus on clarity and context. I believe journalism should connect people, not just inform them.