When Technology Meets History: AI, War, and the New Theocracies


The views in this post are entirely my own and do not represent my employer or any organisation I am affiliated with.


In 2019, at a [Summit in Sydney](https://aws.amazon.com/events/summits/sydney/), a smart cities partner said something that stayed with me:

“When the people are served well, they’re not interested in politics.”

Then 2020 happened. And politics became impossible to escape — including in technology.

Six and a bit years later, it’s gotten harder.

This weekend delivered one of the most compressed, consequential 48-hour stretches in tech and geopolitics I can remember. I want simply to list what happened, without judgement.


Anthropic Holds the Line

Anthropic CEO Dario Amodei refused to remove safety guardrails preventing Claude from being used for mass domestic surveillance of Americans or for fully autonomous weapons targeting. The Department of War threatened to designate them a national security “supply chain risk” — a label historically reserved for foreign adversaries like Huawei. President Trump then ordered all federal agencies to phase out Anthropic.

Amodei’s position was clear: “In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.” The two narrow exceptions — no mass domestic surveillance, no fully autonomous weapons targeting — were, in his words, ones the company “cannot in good conscience” remove. But if the technology is this dangerous, who should decide how and when it’s used?

OpenAI Reaches a Deal — On Its Own Terms

OpenAI reached a deal with the Department of War hours later, claiming their agreement contains stronger protections than any previous classified AI deployment — including Anthropic’s original contract — with red lines around mass domestic surveillance, autonomous weapons, and high-stakes automated decisions. Legal scholars raised concerns that shifting policy definitions could render those protections weaker than claimed. Altman acknowledged the deal was “definitely rushed.” The Department of War did not publicly confirm or deny the specific stipulations.

Palmer Luckey and Anduril

There is a perspective on the Anthropic dispute that deserves to be named, even if you disagree with it. Palmer Luckey — founder of defence technology company Anduril Industries and one of the most prominent voices for AI-powered autonomous weapons — has been direct about where he stands. His argument, in essence: military decisions should rest with elected leaders, not with technology executives imposing their own moral judgements. “You are effectively saying you do not believe in this democratic experiment — that you want a corporatocracy,” he has said of tech companies that refuse to work with the Pentagon.

What I am noting is that Luckey’s position — that AI weapons decisions belong in the hands of elected leaders — is being made in the same week that those elected leaders launched strikes on Iran. The question is: which democratic safeguards do we trust, and why?

Operation Epic Fury

And then — as all of this was unfolding in the AI world — the United States and Israel launched Operation Epic Fury against Iran. Strikes hit Tehran, Isfahan, and Qom. Iranian Supreme Leader Khamenei is reported dead. Iran has retaliated with missile strikes across the region.

The New Theocracies

Here’s an analysis I keep coming back to. Simon Wardley published a piece — “AI and the New Theocracies”. His argument: AI is simultaneously changing language, medium, and tools — the three primary ways humans reason about the world. That hasn’t happened since the Enlightenment. And whoever controls those three things can shape how entire populations think. That framing lands differently this week.

His warning isn’t directed at any one company. It’s structural. He argues that both doing nothing and creating ethics committees lead to the same destination: a small group — corporate or governmental — becoming the de facto high priests of how we reason. The defence he proposes is radical openness: truly open-sourced models, weights, training data, the whole stack — combined with an education system built around critical thinking.

Because what we watched play out between Anthropic and the Department of War wasn’t simply a debate about safety. It can be seen as two competing entities fighting over who gets to control the keys to the cathedral. One a corporation defending its right to set limits on its own technology. The other a government demanding the right to remove these limits. Both represent the same underlying risk: concentrated, opaque control over the systems that shape how we reason. And democratic institutions that have not yet adapted to this new technology.

What I Do Know

I don’t have a neat conclusion, because we’re still working out how to bring real transparency to closed LLMs — the kind that would allow democratic oversight of either the labs or the government. What I do know is that the decisions being made this week — in boardrooms, in the Pentagon, in the skies above Tehran — are happening faster than our frameworks for understanding them.

History sometimes arrives very loudly, and only later do we understand what we were living through. What it means — for Iran, for AI governance, for the relationship between technology and power — I genuinely don’t know. I’m not sure anyone does yet.
