Artificial intelligence, if you aren’t sick of hearing the term yet, has serious potential behind the concept. A computer system that solves the world’s problems? Who wouldn’t want that? Most philosophers, for a start. When humans have nothing to strive for, they create discord and discontent. That friction, that struggle, is a requirement for developing agency and self-governance. It’s like Rise Against’s Tim McIlrath says: “You won’t know your worth now, son, until you take a hit.” Anything that solves all of your problems will probably kill you as a human being.
Happily, unless you’re in charge of ChatGPT, Gemini, Grok, or one of the other conversational AIs, that’s not a problem the world faces just yet. The misguided who chase utopia (we can argue, but you’re entirely wrong) will be chasing it for quite some time longer. AI isn’t going to save the world any time soon. It can barely do a high-school kid’s homework.
Assessing artificial intelligence
You can test this yourself, provided you have at least one area of expertise gained the old-fashioned way. Pick your favourite subject — episodes of Supernatural, the known behaviour of Earth’s sun, the history of Japanese optics in the latter part of the twentieth century — and start asking your chosen artificial intelligence questions about it. Basic questions, more advanced questions, up to expert-level questions. You’ll see something remarkable happen.
Conversational artificial intelligence will get most of it right. It has to, since it’s a search engine on steroids. In fact, that search-engine role is about the only thing it’s really, properly suited for. My personal theory is that Google burned its completely functional Search arm to the ground, specifically to force its users to adopt Gemini, at least as a first step. Oh, and for the money.
But you’ll also, if you’re really familiar with your subject, notice that there are errors in there. Not everywhere, and not every time, but enough to fool anyone who doesn’t know any better. If you’re using AI as a learning tool and don’t a) already know the subject yourself or b) have a human teacher who does, you’re being misled. To you, it all just looks correct. Confident. And if you’re new to the subject, how are you going to find the mistakes?
It’s not just you
Microsoft’s recent troubles with keeping its Windows updates in line, which caused issues both in December last year and this past January, aren’t being laid at the feet of artificial intelligence — Microsoft is pushing AI in its products and isn’t allowed to let the tech look bad — but there’s a reason one of the first things you do with a broken computer is check the last thing that was done to it. In Microsoft’s case, that was the introduction of loads of AI automation into its coding chain.
Microsoft CEO Satya Nadella said last year that up to 30% of the company’s code was being written by AI. It’s speculation on my part, but confident-looking code that kinda, sorta, mostly works probably isn’t being inspected too closely. After all, the system has done these tasks right an awful lot of the time. But that’s where incremental errors can creep in. ‘Vibe-coding’ is an epithet in some circles for extremely valid reasons.
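To make that concrete, here’s a purely hypothetical sketch (my illustration, nothing drawn from Microsoft’s actual code) of the kind of confident-looking function an AI assistant can produce. It handles the cases most people will try, which is exactly why a quick review waves it through, and the flaw only shows up on inputs nobody happened to test.

```python
# Purely hypothetical example: a helper of the sort an AI assistant might
# generate. It is correct for the inputs most reviewers will try, which is
# exactly why a skim-review would pass it.

def days_in_month(year: int, month: int) -> int:
    """Return the number of days in the given month of the given year."""
    days = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if month == 2 and year % 4 == 0:
        # Subtle error: this ignores the century rule, so years like 1900
        # and 2100 are wrongly treated as leap years.
        return 29
    return days[month - 1]

print(days_in_month(2024, 2))  # 29 -- correct
print(days_in_month(2023, 2))  # 28 -- correct
print(days_in_month(1900, 2))  # 29 -- wrong: 1900 was not a leap year
```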
I can’t say for sure that Microsoft’s recent Windows issues are AI-related, but the most recent major change to how the company creates its products was the introduction of that very technology. Unless all of Microsoft’s best software engineers were replaced by morons and nobody thought to mention it, it’s a very reasonable assumption that the problem is artificial intelligence serving up something that looks right but isn’t quite correct.
Searching for answers
Currently, the best use of AI for the average person is as a replacement for stupidly torched search engines. But with that comes the realisation that you have to treat AI responses with the same level of suspicion as you would any random search result. Perversely, that discernment is harder when everything is served up in a lovely, authoritative little package. All of the work looks done. It’s so much easier to treat it as done and move on to the next thing. Yes, even when it’s incorrect. After all, it’s not like you can really know that artificial intelligence has botched the job. Not unless you’re already an expert.