When Yuval Noah Harari writes about something, it is as if the world’s conscience is speaking. When Russia invaded Ukraine last year, the famous Israeli academic and historian’s essay on why the invasion was so disastrous appeared under his name in The Economist, a publication famous for not bylining its articles.
Last month, the author of Sapiens co-wrote a stinging opinion piece in the New York Times with the two founders of the Center for Humane Technology, warning of the dangers of AI.
Referencing a 2022 survey of 700 AI academics and researchers, they wrote: “Half of those surveyed stated that there was a 10% or greater chance of human extinction (or similarly permanent and severe disempowerment) from future AI systems”.
They used the analogy of half the engineers who built an aeroplane warning that there is a “10% chance the plane will crash, killing you and everyone else on it”.
The question they ask is: “Would you still board?”
It’s a stark warning that the writers elaborate on. “Language is the operating system of human culture,” they point out. “From language emerges myth and law, gods and money, art and science, friendships and nations and computer code. AI’s new mastery of language means it can now hack and manipulate the operating system of civilization.”
Slowing our AI sprint to a walk
It gets worse: “By gaining mastery of language, AI is seizing the master key to civilization, from bank vaults to holy sepulchers”.
We humans have used language both to survive and to grow as a species. It is arguably the finest feature of our enhanced forebrains, themselves an evolutionary boon from early Homo sapiens’ ability to use tools (with which we could kill animals) and make fire (so we could cook the meat, predigesting the protein, essential because we lack the teeth and intestines to live on raw meat).
“In games like chess, no human can hope to beat a computer,” Harari writes. “What happens when the same thing occurs in art, politics or religion?”
He’s not alone in this dire warning. About 1,000 AI experts and investors last month signed an open letter calling for a slowdown in the AI race. Specifically, they advise “a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities”. The letter calls for a pause of at least six months in the training of these “giant” AIs so that the possible dangers can be assessed.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” says the letter, whose signatories include OpenAI co-founder Elon Musk, Apple co-founder Steve Wozniak and engineers from Amazon, Google, DeepMind, Meta and Microsoft. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”
Harari warns that “the time to reckon with AI is before our politics, our economy and our daily life become dependent on it. Democracy is a conversation, conversation relies on language, and when language itself is hacked, the conversation breaks down, and democracy becomes untenable. If we wait for the chaos to ensue, it will be too late to remedy it.”
- This article first appeared in the Financial Mail.