Signs of AI are everywhere: it’s behind everything from customer service chatbots to the personalised ads we receive when browsing online. Yet we remain largely unaware of the hidden algorithms doing the heavy lifting behind the scenes.
We are currently working on a research project based on conversations with specialists in the field of AI. We are asking them about their thinking and values, and which ethical considerations they regard as most important – and why. The norms and values of developers can become embedded in the AI systems they engineer. However, they – and we – are often unaware of this, and of its consequences.
It’s vital to understand as much as possible about AI’s development because the technology is already changing us in ways we don’t seem to realise. For example, research published in 2017 showed that social media algorithms shape outcomes – such as which stories appear in users’ feeds – based on assumptions about those users, while users in turn adapt to these outcomes, changing the logic of the algorithm itself. Our daily interactions with AI are making us increasingly reliant on it, but the power dynamic in this relationship greatly favours the AI systems. This is a technology whose inner workings aren’t even fully understood by its creators.
Too heavy a human reliance on technology can reduce creative and critical thinking. AI has already led to job displacements and unemployment. And, while the warnings that it could lead to human extinction shouldn’t be taken at face value, we can’t afford to completely dismiss them either.
Algorithms have been shown to contain discriminatory tendencies towards race, gender and other protected characteristics. We need to understand how these and other problems with AI development arise.
Some commentators have drawn attention to what they say is a failure by the companies developing AI to consider security and privacy. There is also a lack of transparency and accountability around AI projects. While this is not unusual in the competitive world of big tech, we surely need a more rational approach to a technology capable of exerting such power over our lives.
Identity crisis?
What has been neglected in the discourse about AI is how our sense of meaning, identity and reality will increasingly rely on engaging with the services it facilitates. AI may not have consciousness, but it exercises power in ways that affect our sense of who we are. This is because we freely identify with – and participate in – the pursuits enabled by its presence.
In this sense, AI is not some great conspiracy designed to control the world and all its inhabitants but more like a force, neither necessarily good nor bad. However, while extinction is unlikely in the near term, a much more present danger is that our reliance on the technology leads to humans effectively serving the technology. This is not a situation any of us would want, even less so when the technology incorporates human norms many would consider to be less than ideal.
For an example of what we’re talking about here, take the performance targets set for delivery drivers, and the monitoring of their work, both facilitated by AI-based automated systems. A UK all-party parliamentary group has described these as negatively affecting the mental and physical wellbeing of workers, as “they experience the extreme pressure of constant, real-time micro-management and automated assessment”.
Another example was highlighted by Erik Brynjolfsson, a Stanford economist who has raised the possibility of something called the “Turing trap”. This refers to concerns that the automation of human activities could leave wealth and power in fewer and fewer hands. In his book The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence, Brynjolfsson writes: “With that concentration (of power) comes the peril of being trapped in an equilibrium in which those without power have no way to improve their outcomes.”
More recently, Jeremy Howard, an AI researcher, described how he introduced ChatGPT to his seven-year-old daughter after she asked several questions about it. He concluded that it could become a new kind of personal tutor, teaching her maths, science, English and other subjects. Clearly, this would displace part of the role of teachers. However, Howard also warned his daughter that she should not believe everything it said. This poses a real risk for learning. And even if ChatGPT were conveying accurate knowledge, would his daughter retain that information as readily as when it was communicated through “embodied” speech – in other words, by a human?
What the algorithm sees
These real-world cases demonstrate how AI can transform the way we view the world and ourselves. They point to a power dynamic between users and AI in which the machine exercises authority over those who interact with it.
As Taina Bucher, assistant professor in communication and IT at the University of Copenhagen, reported in a 2016 research paper carried out with the help of consumers: “It is not just that the categories and classifications that algorithms rely on match our own sense of self, but to what extent we come to see ourselves through the ‘eyes’ of the algorithm.”
AI is often accessed simply through our computer screens or other abstract media; it is not embodied except in the most limited sense. As such, its effect is often restricted to the cognitive level of identity, bereft of a “soul” – the emotional sensibility sometimes known as affective energy, which describes the natural ways that humans interact and spur reactions from one another.
If you asked ChatGPT whether AI can be embodied, the answer would only be concerned with the embodiment of machine intelligence in a physical form, such as in a robot. But embodiment is also about emotions, sentiment and empathy. It cannot be reduced to linear sequences of instructions.
That’s not to say that AI doesn’t affect our feelings and emotions. But machines can never replicate the rich emotional life inherent in the interactions between two or more human beings. As our lives seem to be ever more entwined with AI, maybe we should slow the relationship down, especially as it’s clear this is far from an equal partnership.
- is a Professor of Organisation Studies, Lancaster University
- is a Senior Lecturer, Oxford Brookes Business School, Oxford Brookes University
- This article first appeared on The Conversation
1 Comment
It’s becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a human adult-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I’ve encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar’s lab at UC Irvine, possibly. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461