In November 2022, the UN symbolically declared that the 8 billionth human being had been born in Manila, Philippines, joining the 2.3 billion children already living in the world. Another event that same month will likely have a huge impact on the education and upbringing of all these children: ChatGPT was launched on 30 November 2022, revolutionising education for the better according to some, destroying its very possibility according to others.
Disruptive innovation vs breakthrough innovation
Beyond value judgments, generative AI has led philosophers of technology and innovation scholars to debate whether ChatGPT amounts to a disruption in education.
The business theorist and consultant Clayton Christensen argued that disruptive innovations can target the technology itself, consumer habits, or competitors’ business models, to the point where the incumbents’ survival is threatened.
To differentiate the notions of a breakthrough and disruptive innovation, Kristina Rakic specifies that:
“breakthrough innovation is something that enhances the competences of the firms, refers to the technological dimension of a product […]. Instead, the disruptive innovation, such as proposed by Christensen refers more to a change in the market, a change in the competitors’ structure causing the failure of the incumbents, and a change in the business model adopted by firms”.
While there are fears that generative AI poses civilisational risks and obliges us to entirely rethink education, some perceive it as yet another breakthrough innovation, similar to the printing press, radio, computers, and the Internet, none of which led to the disappearance of schools or universities.
The self unravelled by technology
Several contemporary philosophers have argued that digital innovations in general are in continuity with previous technologies, and that perhaps the disruption is in the transformation of the humans using the technology rather than the technology itself.
For instance, in Philosophy of the Connected Space: The Reality of Internet (2022, not translated), French philosopher Isabelle Pariente-Butterlin argues that the digital world is in continuity with the real world and is the window through which we navigate it. We would be the ones lost without it. Going a step further, in The Ethics of Ordinary Technology (2016), Michel Puech argues that “the self is endangered not by contemporary technology but by contemporary philosophy”. What he means is that the self is being diluted not just because of technological developments, but because we lack the philosophical framework that would allow us to use technology wisely.
Two major controversies are associated with the use of ChatGPT in education: performance, especially the accuracy of the information provided, and ethical concerns, such as the potential for plagiarism. Furthermore, a particular feature of AI-generated text is that, by definition, it does not provide reliable sources.
Regarding ChatGPT in education, several issues must therefore be addressed, including, but not limited to:
- ChatGPT has been found to refer to nonexistent scientific studies, thus “making up” references.
- More generally, the kind of creativity that used to be required for plagiarism and cheating is no longer needed: any student can use ChatGPT to write essays or even doctoral dissertations, and this is increasingly hard to detect, although the absence of grammar or spelling mistakes may provide a hint.
- Even journal editors are having difficulty distinguishing between writing by humans and by ChatGPT.
When the Turing test no longer cuts it
The issue of distinguishing between human and machine-generated language was famously addressed by Alan Turing with the so-called Turing test: if a human is unable to determine whether they are having a conversation with a computer or a fellow human, then the computer has passed the test. The current situation is that ChatGPT repeatedly passes the Turing test, not just in informal conversations or task-based chatbots, but in education and academia as well.
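To make the structure of the test concrete, here is a minimal, purely illustrative sketch (not from the original article): a judge questions a hidden respondent and then guesses whether it was a machine. The respondent and judge functions are invented placeholders; in a real test, one respondent would be a person and the other a system such as ChatGPT.

```python
import random

# Purely illustrative sketch of Turing's imitation game.
# Both respondents are invented stand-ins for this example.

def human_reply(question: str) -> str:
    return "Honestly, I'd have to think about that one."

def machine_reply(question: str) -> str:
    return "That is a fascinating question with many possible answers."

def run_round(questions, judge) -> bool:
    """The judge questions a hidden respondent, then guesses 'machine' or not.
    Returns True if the respondent was the machine and it went undetected."""
    respondent = random.choice([human_reply, machine_reply])
    transcript = [(q, respondent(q)) for q in questions]
    judged_machine = judge(transcript)
    return respondent is machine_reply and not judged_machine

if __name__ == "__main__":
    # A naive judge that flags stilted phrasing as machine-like.
    naive_judge = lambda transcript: any("fascinating" in a for _, a in transcript)
    print("Machine passed this round:", run_round(["What did you have for breakfast?"], naive_judge))
```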
In this context, the focus perhaps needs to shift away from the tool to the person using it, and to their understanding of what is being produced. An AI-literate and critical education would seek to arm students with techniques beyond prompt engineering.
A key question for philosophers of technology in the digital age has always been that of consciousness: can we consider that a computer or a piece of software, in this case ChatGPT, thinks in the same way that humans do?
In 1980, in the early days of AI research, philosopher John Searle proposed a thought experiment to address that question, the “Chinese room”, which has generated much controversy. Imagine a man who does not speak Chinese at all, locked in a room. Messages written in Chinese from an unknown sender are slipped under the door. The man does not understand them, but he has access to a rulebook matching the symbols to appropriate responses. He sends strings of Chinese characters back out under the door, leading the person outside to mistakenly think that there is a Chinese speaker in the room.
This well-known thought experiment has been used to refute functionalism, the theory that what makes something a thought depends not on its internal constitution but solely on its function: to put it simply, if it works like a mind, then it is one. In this case, just because the Chinese characters sent back make sense does not mean that the sender, be it a man or a machine, is gifted with true understanding.
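The mechanics of the thought experiment can be shown in a small, purely illustrative sketch (not part of the original article): a lookup table maps incoming strings of characters to plausible replies, so sensible output is produced without anything in the “room” understanding the symbols. The phrases and rulebook below are invented for the example.

```python
# Purely illustrative sketch of the Chinese room: replies come from
# mechanical rule-following (a lookup table), with no understanding
# of what the symbols mean. The rulebook entries are invented.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How is the weather today?" -> "The weather is nice today."
}

def room(message: str) -> str:
    """Return the reply the rulebook prescribes, or a stock evasion.
    Nothing here 'knows' Chinese; it only matches symbols to symbols."""
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    for note in ["你好吗？", "今天天气怎么样？"]:
        print(note, "->", room(note))
```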
The massive use of ChatGPT by students may cause educators to reconsider conceptual tools like the Turing test or the Chinese room by asking a different question about consciousness. Perhaps it is not about wondering whether artificial intelligence understands and therefore has some level of consciousness. Instead, it’s whether students who sometimes use AI are in a Chinese room, understanding what they’re doing, or whether they themselves have become mere inputs in a system they’re no longer equipped to understand.
A disruption or just an innovation?
In conclusion, ChatGPT is a breakthrough innovation in education, but it is still too early to know if it will become a disruptive innovation, able to transform schools and universities.
Additionally, this reflection on learning through a conversational process between students and ChatGPT should not blind us to the fact that the acquisition of new knowledge is fostered by social interaction, whether with professors or other students. This focus on social epistemology may give universities and other educational institutions hope that they won’t be too disrupted in the short term.
- is an Associate Professor in Ecological Transition and Social Entrepreneurship, EM Lyon Business School
- is a Professor in Management Science, EM Lyon Business School
- This article first appeared in The Conversation