Thomas Jefferson, the American statesman and third US president, was many things (including, notoriously, a slave-owner). But whatever else he was (or wasn’t), he was a firm believer in what he called the “suffrage of the people” — what today we’d call democracy.
The democracy he had in mind, of course, wasn’t a truly “general suffrage” of all citizens: in its most ambitious form it enfranchised only male taxpayers and soldiers. It was also far removed from the classical ideal set by Ancient Athens, in which all eligible citizens gathered regularly to debate and settle policy. Still, even Jefferson’s limited and strictly “representative” version of democracy required something vital if it was to function properly: not just an able and knowledgeable public service, but a well-informed voting public.
As Jefferson himself put it: “Whenever the people are well-informed, they can be trusted with their own government.” Most Western democracies subscribe to this view today. But in the face of scientific and technological progress over the course of the 20th century, many political scientists, futurists and journalists have been left wondering about the future of democracy.
In the quest to figure out where we’re headed, an obvious question looms. Just how well-informed can we expect the average citizen to be in a world that grows ever more complex and befuddling by the day? It would be naïve to think that the rise of science and technology hasn’t made it more difficult to fully comprehend the problems we face as citizens.
Global warming is the standout issue. Unless you happen to belong to the handful of experts well versed in geology, meteorology and oceanography, you have to make a serious effort to understand the intricacies of climate science.
Add the global warming scepticism aired in the news and it’s no wonder doubt about climate change runs so high in some countries. In the US, up to 20% of citizens don’t think human activity contributes much, or anything at all, to climate change. In Australia, 38% of people surveyed don’t consider climate change to be a major threat. The same survey found that the figure is 34% in Canada and 30% in the UK.
There’s a new game in town
Unfortunately, the past five to 10 years have also seen the rise of artificial intelligence (AI), and more particularly a branch of AI called “machine learning”.
Machine learning occupies an interesting position in the story of scientific progress. On one hand, it’s a natural outcome of developments in computer science that began in the 1980s. On the other hand, its total dependence on information — and its ability to make do with all sorts of information, including things like your keystrokes and heart rate — marks what could turn out to be a more radical break with previous technologies.
Machine learning uses existing information to generate new information. But it also allows that new information to be put to a variety of questionable uses, including surveillance and manipulation.
If you’ve ever been recommended products while shopping online, you’ve probably been profiled. Ever had a credit card application knocked back in short order? Again, you’ve probably been profiled. Algorithmic profiling presents a host of ethical and legal challenges, particularly around discrimination and privacy. But profiling is just the tip of an ever-expanding iceberg.
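To make the idea concrete, here’s a minimal sketch of what profiling can look like under the hood. Everything in it is an assumption made for illustration (the behavioural signals, the numbers and the “will they click a recommendation?” label), rather than any retailer’s or bank’s actual system.

```python
# Toy profiling model: learn from records of past users (the "existing
# information") to predict how a new user will behave (the "new information").
# All data and features here are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per past user: [pages viewed, minutes on site, previous purchases]
past_behaviour = np.array([
    [3, 1.5, 0],
    [25, 12.0, 4],
    [8, 3.0, 1],
    [40, 20.0, 7],
    [2, 0.5, 0],
    [15, 6.0, 2],
])
# Whether each of those users clicked a recommended product (1) or not (0)
clicked = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(past_behaviour, clicked)

# A user the model has never seen: their profile becomes a prediction about them.
new_user = np.array([[12, 5.0, 1]])
print(f"Estimated chance of clicking: {model.predict_proba(new_user)[0, 1]:.2f}")
```

Swap the label for “likely to default” or “likely to respond to this message” and the same pattern underpins credit scoring and ad targeting alike.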
Democracy under attack?
Many uses of big tech pose a threat to individuals as individuals, which is bad enough. Other uses, though, pose a threat to individuals as democratic citizens. Depressingly, there’s already a standout example here.
In 2017, it transpired that the UK company Cambridge Analytica had assisted the UK’s 2016 Brexit Leave campaign by providing it with targeted political advertising services. These services were facilitated by access to Facebook data, obtained in a major breach of Facebook’s own policies.
Such so-called “dark” ads are usually sent to the very people most likely to be susceptible to them. Unlike old-school pamphleteering and letterboxing, the ads aren’t distributed helter-skelter. They’re targeted, based on in-depth mining of people’s browsing histories, Facebook likes, tweets, and online purchases. What’s more, a dark ad is typically sent without the receiver having the benefit of hearing the opposing view.
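For a concrete sense of how this differs from a letterbox drop, here is a deliberately simplified, hypothetical sketch: each profile is scored on traits inferred from online activity, and the ad is shown only to the narrow slice predicted to be most receptive. The traits, weights and cut-off are all invented for illustration.

```python
# Hypothetical micro-targeting sketch: the ad is shown only to profiles scored
# as most receptive, based on traits mined from their online activity.
# Traits, weights and the cut-off are all invented for illustration.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    likes_campaign_pages: bool    # inferred from Facebook likes
    shares_political_posts: bool  # inferred from tweets and shares
    frequent_online_shopper: bool # inferred from purchase history

def receptiveness(p: Profile) -> float:
    """Crude score of how likely this person is to respond to one specific ad."""
    return (0.5 * p.likes_campaign_pages
            + 0.3 * p.shares_political_posts
            + 0.2 * p.frequent_online_shopper)

voters = [
    Profile("A", True, True, False),
    Profile("B", False, False, True),
    Profile("C", True, False, True),
]

# Only the top-scoring sliver of the electorate ever sees the ad; unlike a
# pamphlet through every door, nobody else even knows it was shown.
audience = [p.name for p in voters if receptiveness(p) >= 0.75]
print(audience)  # ['A']
```

The point of the sketch is the asymmetry: the sender knows exactly who saw the message, while the rest of the electorate, and anyone inclined to rebut it, never does.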
This kind of micro-targeting isn’t how the democratic “marketplace of ideas” is supposed to work. Indeed, how we’re to understand and regulate the influence of algorithms on our perceptions is among the most important questions AI poses today. Another question worth pondering is why so many governments around the world seem bent on automating public administration when there’s plenty of evidence to suggest it’s often neither efficient nor fair.
A basic lack of understanding obstructs more fruitful civic engagement with AI, data and big tech. But as citizens, we should know what’s going on — and who benefits.
That’s why my colleagues and I put our heads together and wrote a book that we think will help people sort their way through the AI jungle. Citizens deserve more than a superficial acquaintance with tech: not so much detail that it causes confusion, but enough to inform a principled understanding of the world around them.
As Time journalist Frank Trippett put it way back in 1979: “The expert will have to play a more conscious role as citizen, just as the ordinary [citizen] will have to become ever more a student of technical lore.”
Our hope is that more journalists, industry leaders and academics will fulfil Trippett’s vision by becoming expert citizens themselves. This means giving people as much clear information as they need to make informed, responsible democratic choices. Democracy demands no less.
- is Research Fellow of Philosophy, Cognitive Science and AI, University of Cambridge
- This article first appeared on The Conversation