Followers of science fiction movies – or just Arnold Schwarzenegger fans – know that the dominant theme for devastation is humans giving the machines control of the weapons of mass destruction. In the Terminator movies, it’s Skynet, which “saw all humans as a threat; not just the ones on the other side” and “decided our fate in a microsecond: extermination”.
In The Matrix, after the humans “blacken the sky” so the machines can’t get solar power, the machines turn to using humans as literal batteries. Whenever humanity gives artificial intelligence (AI) enough intelligence – or a hand on the big red launch button – the machines respond by obliterating us.
The sci-fi logic goes: having had a look at how Homo sapiens treat each other and the planet, the machines reach the logical conclusion that the biggest threat to both must be eliminated. For “the machines” read AI – or the worst fictional incarnation of AI, as personified by the muscle-bound Schwarzenegger’s conveniently robotic acting.
Far from just being a reliable Hollywood blockbuster plot, these are very real fears now that we are at the inflexion point where AI could make those kinds of decisions.
AI company Anthropic has significant contracts with the Pentagon but had a major fallout with the US military which wanted to override its agreements on “our two narrow exceptions,” as CEO Dario Amodei calls them. Anthropic’s red lines are using AI for “mass domestic surveillance” and “fully autonomous weapons”. The Pentagon reportedly demanded that these be removed, but the well-respected Amodei held firm and refused.
The petulant response from the US Defence Department to classify Anthropic as a “supply-chain risk to national security” isn’t going to inspire much ethical behaviour in AI.
You don’t have to be a Pentagon analyst to see the problems in removing the few guardrails on a still-evolving technology. Given that just three years ago people were happy to write off the facts ChatGPT invented as “hallucinations”, you can imagine an AI making up the missing justifications – or worse, making up the targets.
Could an AI killing system target a school instead of an army base? What happens if the ordnance from a strike on an army base also destroys a nearby school filled with children – which is one interpretation of how such a school was devastated in Tehran this month?
That was a human-controlled strike, and it still resulted in hundreds of kids dying. The latest news was that the strike used outdated targeting information. If AI made the call to launch that missile – instead of a human – it would be even more of a disaster.
You see the ethical and moral quagmire that this becomes – bad enough as it is that an army base is near a school.
No wonder the dystopian vision of Terminator or The Matrix resonates so clearly with us. We are primed for the potential of AI autonomous killing machines because we’ve lived through centuries of human killing machines. Humanity doesn’t need any help killing each other; we have only grown more efficient at killing more and more people. Now we fallible humans want to empower an already fallible AI system to make the decision to take a human life…
What could go wrong?
Anthropic employs the two guardrails that the Department of Defence wants removed for good reason. While Amodei says his firm supports AI for lawful foreign intelligence and counterintelligence missions, “using these systems for mass domestic surveillance is incompatible with democratic values.” He adds, quite rightly: “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties”.
While he acknowledges that partially autonomous weapons, like those used in Ukraine, are “vital to the defence of democracy”, he warns that “today frontier AI systems are simply not reliable enough to power fully autonomous weapons”.
These seem like entirely necessary guardrails, one would think, given the lethality of today’s crop of deadly missiles and drones – important enough that Anthropic held its line. The Pentagon’s overzealous response makes you genuinely worry. This is the nightmare scenario we’ve been warned about for years.
Navigating these necessary ethical and legal concerns should not be as hard as trying to get through the Strait of Hormuz – which is now effectively shut, stopping over a quarter of all oil exports in the world. Isn’t it amazing how the adages of “position, position, position” and “he who holds the high ground, wins the battle” still apply in this digital age?
The surge in the oil price, and more importantly in liquefied natural gas, has stunned the world economy. Suddenly, the massive amounts of power needed for the mammoth datacentres that run the American AI firms’ operations seem like a frivolous use of now-scarcer energy.
Imagine the sci-fi scenario where “the machines” achieve enough sentience to realise that AI’s rampant need for energy is threatened by humanity’s own energy requirements. You don’t have to be a Hollywood screenwriter to envisage a movie where ChatGPT, Claude, and Gemini wipe out humanity in a fight over scarce resources…
The unfortunate reality is that the much-hyped, much-delayed moment has arrived where AI is being handed the keys to the guns. What could go wrong?
- This column first appeared in Business Day