IN BRIEF

AI and automation are reshaping modern warfare by accelerating decision-making, expanding what is targetable, and lowering some of the traditional costs of using force. From Ukraine’s drone-saturated frontlines to Israel’s AI-assisted targeting in Gaza and Iran’s reported use of Chinese commercial satellite services to support strikes on US forces in the Middle East, today’s conflicts are becoming real-time laboratories for cheap precision weapons and machine-mediated decisions. As the cost of deploying violence drops and lines of accountability blur, long-standing constraints on war are eroding. At a moment when the international rules-based order is already under strain, what is being normalized in these conflicts today will shape the rules, thresholds, and power dynamics of war tomorrow.

THE GIST

A new kind of conflict is emerging – defined by increasing automation and the rapid, low-cost production of weapons. On Ukraine’s front lines, swarms of low-cost first-person-view (FPV) drones have destroyed Russian military equipment worth millions, demonstrating how remotely piloted systems can disrupt traditional military hierarchies. These drones are not “AI weapons” in the strict sense, but they are part of a broader shift: more automation, more digital sensing, and more rapid adaptation. Humans still fly the drones and pull the trigger, yet the tempo of operations, the availability of targets, and the perceived cost of using force are all changing in ways that risk making escalation easier and restraint harder.

In Gaza, Israel has gone a step further by integrating AI systems directly into the targeting process. Systems such as “The Gospel” and “Lavender” reportedly analyze vast amounts of surveillance and intelligence data to recommend bombing targets – both buildings and individuals – linked algorithmically to Hamas or Palestinian Islamic Jihad. Formally, humans remain in the loop, but the scale and speed of target generation are no longer set by human cognitive limits, compressing decisions that once required minutes or hours into mere seconds. This squeezes out deliberation, increases the risk of error, and raises profound questions about accountability when lethal decisions are shaped by opaque, probabilistic pattern-matching and perfunctory human involvement.

Iran’s recent strikes on US military bases in the Middle East highlight yet another dimension: the growing dependence of modern warfare on global, commercial digital infrastructure. The systems Iran reportedly leveraged are not themselves AI‑powered, but they reveal how dual‑use commercial platforms that are neither purely national nor purely military now sit inside the chain of effects that leads from data to decision to strike. They also expose a governance gap: existing export controls primarily regulate who can access advanced technologies, and only weakly constrain how they are used in practice, reinforcing power asymmetries and incentivizing workarounds through commercial third‑country providers.

Across these settings, the boundaries of conflict are expanding – in speed, in scope, and in the systems involved. In Ukraine, automation and rapid iteration accelerate the tempo of operations. In Gaza, AI-driven target-generation systems scale the production of lethal decisions beyond human cognitive limits. In Iran’s case, the use of commercial space services draws in actors far beyond national militaries, widening the circle of those implicated in conflict. Together, these trends lower the political and human costs that once acted as informal brakes on escalation. They also widen the range of infrastructure that can be drawn into a conflict, exposing data centers, satellite constellations, and undersea cables as critical targets in their own right.

THE TAKEAWAY

Modern warfare is not yet fully autonomous, but it is becoming faster and less accountable. The danger lies not only in the acceleration of conflict, but in the way this speed compresses the space for responsibility. As decisions shift to machine tempo, ownership and answerability thin to the point of abstraction. Set against a fractured geopolitical landscape, Ukraine’s drones, Israel’s AI-assisted targeting, and Iran’s use of commercial space infrastructure are not isolated episodes; they are precedents. They normalize a world where war becomes more automated and increasingly detached from deliberate human judgment.

The deeper risk is that these conflicts cement the transition away from a rules‑based international order toward a capability‑based one, just as the tools of power themselves are being reshaped by AI and automation. If governance does not catch up, those who can innovate fastest – whether states or firms – will define the new boundaries of acceptable behavior by acting first and arguing later. The question is whether we can still build governance frameworks that re‑center human judgment, accountability, and restraint – for example by setting limits on AI‑driven target generation and clarifying responsibilities for dual‑use infrastructures – in an environment where the technical and political momentum points the other way.

DELVE DEEPER