A new study by Microsoft and researchers at Carnegie Mellon University shows some distressing information about the widespread adoption of artificial intelligence. The study examined generative AI usage specifically, asking participants to self-report on the effect the new technology is having on their cognitive functions.
In what seems like an obvious outcome, extended use of generative AI can cause users to lose cognitive habits. Those habits may be nonessential while AI is doing all the heavy lifting, but their loss can leave user cognition “atrophied and unprepared when…exceptions do arise.”
As the study points out, the problem isn’t unique to artificial intelligence. Any time a new technology comes along, there’s a possibility — even an inevitability — that previously common skills will atrophy. As cars become more advanced, fewer folks know how they work. When cars came along, people forgot how to care for horses. Those skills haven’t vanished entirely; they’ve been narrowed into specialities that have become careers.
Microsoft’s study isn’t exhaustive, but it does indicate that widespread generative AI is changing how people think. Unlike the loss of skills such as sailing a schooner or skinning a beaver, these changes affect a far larger share of the population than has previously been the case, which increases the potential for harm.
The study’s participants — all 319 of them — reported reduced cognitive load when asking AI to automate a task that humans once performed manually. That load was transferred elsewhere in the cognitive chain: mental effort went into making sure the AI understood the task, keeping it on track, and checking that it hadn’t supplied faulty responses.
It’s not just that humans are applying thought at different points in the chain. The mental outcomes themselves are less thoughtful and creative. The paper says that “users with access to GenAI tools produce a less diverse set of outcomes for the same task, compared to those without. This tendency for convergence reflects a lack of personal, contextualised, critical and reflective judgement of AI output and thus can be interpreted as a deterioration of critical thinking.”
Since this is a Microsoft study — and the company has gone hard on its support of artificial intelligence — there’s less doom and gloom here and more of an optimistic take. The answer, according to those conducting the study, is to build AI systems that maintain critical thinking and information-gathering skills. It might have been a better idea to build those systems before releasing the technology into the world, to reduce the amount of critical-thinking atrophy currently taking place. But eventually, we might see a generative AI that doesn’t actively make its users dumber.