<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>hallucination Archives - Stuff South Africa</title>
	<atom:link href="https://stuff.co.za/tag/hallucination/feed/" rel="self" type="application/rss+xml" />
	<link>https://stuff.co.za/tag/hallucination/</link>
	<description>South Africa&#039;s Technology News Hub</description>
	<lastBuildDate>Thu, 17 Apr 2025 06:58:59 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://stuff.co.za/wp-content/uploads/2021/10/favicon-transparent-1-150x150.png</url>
	<title>hallucination Archives - Stuff South Africa</title>
	<link>https://stuff.co.za/tag/hallucination/</link>
	<width>32</width>
	<height>32</height>
</image> 
<atom:link rel="hub" href="https://pubsubhubbub.appspot.com"/>
<atom:link rel="hub" href="https://pubsubhubbub.superfeedr.com"/>
<atom:link rel="hub" href="https://websubhub.com/hub"/>
<atom:link rel="self" href="https://stuff.co.za/tag/hallucination/feed/"/>
	<item>
		<title>We need to stop pretending AI is intelligent – here’s how</title>
		<link>https://stuff.co.za/2025/04/17/we-need-to-stop-pretending-ai-intelligent/</link>
		
		<dc:creator><![CDATA[The Conversation]]></dc:creator>
		<pubDate>Thu, 17 Apr 2025 06:58:59 +0000</pubDate>
				<category><![CDATA[AI News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[hallucination]]></category>
		<category><![CDATA[The Conversation]]></category>
		<guid isPermaLink="false">https://stuff.co.za/?p=208387</guid>

					<description><![CDATA[<p>We are constantly fed a version of AI that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity. But here’s the truth: it possesses none of those qualities. It is not human. And presenting it as if [...]</p>
<p>The post <a href="https://stuff.co.za/2025/04/17/we-need-to-stop-pretending-ai-intelligent/">We need to stop pretending AI is intelligent – here’s how</a> appeared first on <a href="https://stuff.co.za">Stuff South Africa</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>We are constantly fed a version of <a href="http://stuff.co.za/tag/AI">AI</a> that looks, sounds and acts suspiciously like us. It speaks in polished sentences, mimics emotions, expresses curiosity, claims to feel compassion, even dabbles in what it calls creativity.</p>
<p>But here’s the truth: it possesses none of those qualities. It is not human. And presenting it as if it were? That’s dangerous. Because it’s convincing. And nothing is more dangerous than a convincing illusion.</p>
<p>In particular, general artificial intelligence — the mythical kind of AI that supposedly mirrors human thought — <a href="https://www.nature.com/articles/s41599-020-0494-4">is still science fiction</a>, and it <a href="https://www.newscientist.com/article/2473622-leading-ai-models-fail-new-test-of-artificial-general-intelligence/">might well stay that way</a>.</p>
<p>What we call AI today is nothing more than a statistical machine: a digital parrot regurgitating patterns mined from oceans of human data (the situation hasn’t changed much since it was <a href="https://dl.acm.org/doi/pdf/10.5555/3495724.3495883">discussed here</a> five years ago). When it writes an answer to a question, it literally just guesses which letter and word will come next in a sequence – based on the data it’s been trained on.</p>
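<p>For the technically curious, here is a minimal sketch of that guessing game – a toy word-pair model in Python, with a made-up ten-word corpus standing in for the oceans of data. Real systems use neural networks over subword tokens rather than word-pair counts, but the principle of sampling a likely continuation from observed patterns is the same:</p>
<pre><code>import random
from collections import defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat slept".split()
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

# "Generation" is just sampling a plausible continuation from those counts.
word = "the"
output = [word]
for _ in range(6):
    options = following.get(word)
    if not options:
        break  # no recorded continuation for this word
    word = random.choice(options)  # guess the next word
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the mat and"
</code></pre>
<p>Nothing in that loop understands cats or mats. It only replays statistics – which is the point.</p>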
<p>This means AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.</p>
<p>So why is a real “thinking” AI likely impossible? Because it’s bodiless. It has no senses, no flesh, no nerves, no pain, no pleasure. It doesn’t hunger, desire or fear. And because there is no cognition—not a shred—there’s a fundamental gap between the data it consumes (data born out of human feelings and experience) and what it can do with them.</p>
<p>Philosopher David Chalmers calls the mysterious mechanism underlying the relationship between our physical body and consciousness the <a href="https://philpapers.org/rec/CHAFUT">“hard problem of consciousness”</a>. Eminent scientists have recently hypothesised that consciousness actually emerges from the integration of internal, <a href="https://direct.mit.edu/jocn/article-abstract/36/8/1653/119429/Homeostatic-Feelings-and-the-Emergence-of?redirectedFrom=fulltext">mental states</a> with sensory representations (such as changes in heart rate, sweating and much more).</p>
<p>Given the paramount importance of the human senses and emotion for consciousness to “happen”, there is a profound and probably irreconcilable disconnect between general AI, the machine, and consciousness, a human phenomenon.</p>
<h3>The master</h3>
<p>Before you argue that AI programmers are human, let me stop you there. I know they’re human. That’s part of the problem. Would you entrust your deepest secrets, life decisions and emotional turmoil to a computer programmer? Yet that’s exactly what people are doing – just ask Claude, GPT-4.5, Gemini … or, if you dare, Grok.</p>
<p>Giving AI a human face, voice, or tone is a dangerous act of digital cross-dressing. It triggers an automatic response in us, an anthropomorphic reflex, leading to aberrant claims that some AIs have passed the famous <a href="https://theconversation.com/chatgpt-just-passed-the-turing-test-but-that-doesnt-mean-ai-is-now-as-smart-as-humans-253946" target="_blank" rel="noopener">Turing test</a> (which tests a machine’s ability to exhibit intelligent, human-like behaviour). But I believe that if AIs are passing the Turing test, we need to update the test.</p>
<p>The AI machine has no idea what it means to be human. It cannot offer genuine compassion, it cannot foresee your suffering, cannot intuit hidden motives or lies. It has no taste, no instinct, no inner compass. It is bereft of all the messy, charming complexity that makes us who we are.</p>
<p>More troubling still: AI has no goals of its own, no desires or ethics unless injected into its code. That means the true danger doesn’t lie in the machine, but in its master — the programmer, the corporation, the government. Still feel safe?</p>
<p>And please, don’t come at me with: “You’re too harsh! You’re not open to the possibilities!” Or worse: “That’s such a bleak view. My AI buddy calms me down when I’m anxious.”</p>
<p>Am I lacking enthusiasm? Hardly. I use AI every day. It’s the most powerful tool I’ve ever had. I can translate, summarise, visualise, code, debug, explore alternatives, analyse data – faster and better than I could ever dream of doing myself.</p>
<p>I’m in awe. But it is still a tool — nothing more, nothing less. And like every tool humans have ever invented, from stone axes and slingshots to quantum computing and atomic bombs, it can be used as a weapon. It will be used as a weapon.</p>
<p><iframe  id="_ytid_36940"  width="749" height="421"  data-origwidth="749" data-origheight="421" src="https://www.youtube.com/embed/dJTU48_yghs?enablejsapi=1&#038;autoplay=0&#038;cc_load_policy=0&#038;cc_lang_pref=&#038;iv_load_policy=1&#038;loop=0&#038;rel=1&#038;fs=1&#038;playsinline=0&#038;autohide=2&#038;theme=dark&#038;color=red&#038;controls=1&#038;disablekb=0&#038;" class="__youtube_prefs__  epyt-is-override  no-lazyload" title="YouTube player"  allow="fullscreen; accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen data-no-lazy="1" data-skipgform_ajax_framebjll=""></iframe></p>
<p>Need a visual? Imagine falling in love with an intoxicating AI, like in the film <em>Her</em>. Now imagine it “decides” to leave you. What would you do to stop it? And to be clear: it won’t be the AI rejecting you. It’ll be the human or system behind it, wielding that tool-turned-weapon to control your behaviour.</p>
<h3>Removing the mask</h3>
<p>So where am I going with this? We must stop giving AI human traits. My first interaction with GPT-3 rather seriously annoyed me. It pretended to be a person. It said it had feelings, ambitions, even consciousness.</p>
<p>That’s no longer the default behaviour, thankfully. But the style of interaction — the eerily natural flow of conversation — remains intact. And that, too, is convincing. Too convincing.</p>
<p>We need to de-anthropomorphise AI. Now. Strip it of its human mask. This should be easy. Companies could remove all reference to emotion, judgement or cognitive processing on the part of the AI. In particular, it should respond factually without ever saying “I”, “I feel that” or “I am curious”.</p>
<p>Will it happen? I doubt it. It reminds me of another warning we’ve ignored for over 20 years: “We need to cut CO₂ emissions.” Look where that got us. But we must warn big tech companies of the dangers associated with the humanisation of AIs. They are unlikely to play ball, but they should, especially if they are serious about developing more <a href="https://www.anthropic.com/news/claudes-constitution">ethical AIs</a>.</p>
<p>For now, this is what I do (because I too often get this eerie feeling that I am talking to a synthetic human when using ChatGPT or Claude): I instruct my AI not to address me by name. I ask it to call itself AI, to speak in the third person, and to avoid emotional or cognitive terms.</p>
<p>If I am using voice chat, I ask the AI to use a flat prosody and speak a bit like a robot. It is actually quite fun and keeps us both in our comfort zone.</p>
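<p>Readers who reach these models through an API rather than a chat window can pin the same rule in place with a standing instruction. Here is a minimal sketch using the OpenAI Python client as one example – the model name and the instruction wording are illustrative, not my exact prompt:</p>
<pre><code>from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative de-anthropomorphising instruction, in the spirit described above.
SYSTEM_INSTRUCTION = (
    "Refer to yourself only as 'the AI', in the third person. "
    "Never say 'I', never claim feelings, curiosity or opinions, "
    "and never address the user by name. State facts plainly."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; any chat-capable model would do
    messages=[
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": "How do you feel about poetry?"},
    ],
)
print(response.choices[0].message.content)
</code></pre>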
<hr />
<ul>
<li><a href="https://theconversation.com/profiles/guillaume-thierry-384284" rel="author"><span class="fn author-name">Guillaume Thierry</span></a> is a Professor of Cognitive Neuroscience, Bangor University</li>
<li>This article first appeared in <a href="https://theconversation.com/we-need-to-stop-pretending-ai-is-intelligent-heres-how-254090" target="_blank" rel="noopener"><em>The Conversation</em></a></li>
</ul>
<p><script type="text/javascript" src="https://theconversation.com/javascripts/lib/content_tracker_hook.js" id="theconversation_tracker_hook" data-counter="https://counter.theconversation.com/content/254090/count?distributor=republish-lightbox-advanced" async="async"></script></p>
<p>The post <a href="https://stuff.co.za/2025/04/17/we-need-to-stop-pretending-ai-intelligent/">We need to stop pretending AI is intelligent – here’s how</a> appeared first on <a href="https://stuff.co.za">Stuff South Africa</a>.</p>
]]></content:encoded>
	</item>
		<item>
		<title>What are AI hallucinations? Why AIs sometimes make things up</title>
		<link>https://stuff.co.za/2025/03/24/what-are-ai-hallucinations-why-ais-sometime/</link>
		
		<dc:creator><![CDATA[The Conversation]]></dc:creator>
		<pubDate>Mon, 24 Mar 2025 07:05:02 +0000</pubDate>
				<category><![CDATA[AI News]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[hallucination]]></category>
		<category><![CDATA[The Conversation]]></category>
		<guid isPermaLink="false">https://stuff.co.za/?p=207042</guid>

					<description><![CDATA[<p>When someone sees something that isn’t there, people often refer to the experience as a hallucination. Hallucinations occur when your sensory perception does not correspond to external stimuli. Technologies that rely on artificial intelligence (AI) can have hallucinations, too. When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists [...]</p>
<p>The post <a href="https://stuff.co.za/2025/03/24/what-are-ai-hallucinations-why-ais-sometime/">What are AI hallucinations? Why AIs sometimes make things up</a> appeared first on <a href="https://stuff.co.za">Stuff South Africa</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p>When someone sees something that isn’t there, people often refer to the experience as a hallucination. <a href="https://www.merriam-webster.com/dictionary/hallucination">Hallucinations</a> occur when your sensory perception does not correspond to external stimuli. Technologies that rely on artificial intelligence (<a href="http://stuff.co.za/tag/AI">AI</a>) can have hallucinations, too.</p>
<p>When an algorithmic system generates information that seems plausible but is actually inaccurate or misleading, computer scientists call it an AI hallucination. Researchers have found these behaviours in different types of AI systems, from chatbots such as <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC9939079/">ChatGPT</a> to <a href="https://aclanthology.org/2023.emnlp-main.20/">image generators</a> such as Dall-E to <a href="https://pratt.duke.edu/news/engineers-develop-hack-to-make-automotive-radar-hallucinate/">autonomous vehicles</a>. We are <a href="https://scholar.google.com/citations?hl=en&amp;user=cGB8_a8AAAAJ&amp;view_op=list_works&amp;sortby=pubdate">information science</a> <a href="https://scholar.google.com/citations?hl=en&amp;user=m8Fcl7QQLMAC&amp;view_op=list_works&amp;sortby=pubdate">researchers</a> who have studied hallucinations in AI speech recognition systems.</p>
<p>Wherever AI systems are used in daily life, their hallucinations can pose risks. Some may be minor – when a chatbot gives the wrong answer to a simple question, the user may end up ill-informed. But in other cases, the stakes are much higher. From courtrooms where AI software is used to <a href="https://doi.org/10.1080/0731129X.2023.2275967">make sentencing decisions</a> to health insurance companies that use algorithms to <a href="https://doi.org/10.1001/jamahealthforum.2024.0622">determine a patient’s eligibility</a> for coverage, AI hallucinations can have life-altering consequences. They can even be life-threatening: Autonomous vehicles use <a href="https://doi.org/10.1016/j.jksuci.2022.03.013">AI to detect obstacles</a>, other vehicles and pedestrians.</p>
<h3>Making it up</h3>
<p>Hallucinations and their effects depend on the type of AI system. With large language models – the underlying technology of AI chatbots – hallucinations are pieces of information that sound convincing but are incorrect, made up or irrelevant. An AI chatbot might create a reference to a scientific article that doesn’t exist or provide a historical fact that is simply wrong, yet <a href="https://doi.org/10.1145/3571730">make it sound believable</a>.</p>
<p>In a 2023 <a href="https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/">court case</a>, for example, a New York attorney submitted a legal brief that he had written with the help of ChatGPT. A discerning judge later noticed that the brief cited a case that ChatGPT had made up. Had no one spotted the fabricated citation, it could have influenced the court’s ruling.</p>
<p>With AI tools that can recognize objects in images, hallucinations occur when the AI generates captions that are not faithful to the provided image. Imagine asking a system to list objects in an image that includes only a woman from the chest up talking on a phone, and receiving a response that says a woman is talking on a phone <a href="https://doi.org/10.18653/v1/D18-1437">while sitting on a bench</a>. In contexts where accuracy is critical, such an unfaithful caption could have serious consequences.</p>
<h3>What causes hallucinations</h3>
<p>Engineers build AI systems by gathering massive amounts of data and feeding it into a computational system that detects patterns in the data. The system develops methods for responding to questions or performing tasks based on those patterns.</p>
<p>Supply an AI system with 1,000 photos of different breeds of dogs, labelled accordingly, and the system will soon learn to detect the difference between a poodle and a golden retriever. But feed it a photo of a blueberry muffin and, as <a href="https://www.kaggle.com/datasets/samuelcortinhas/muffin-vs-chihuahua-image-classification">machine learning researchers</a> have shown, it may tell you that the muffin is a chihuahua.</p>
<figure class="align-center zoomable">
<div class="placeholder-container">
<figure style="width: 754px" class="wp-caption alignnone"><img fetchpriority="high" decoding="async" class=" lazyloaded" src="https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" sizes="(min-width: 1466px) 754px, (max-width: 599px) 100vw, (min-width: 600px) 600px, 237px" srcset="https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=300&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=300&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=300&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=377&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=377&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=377&amp;fit=crop&amp;dpr=3 2262w" alt="two side-by-side four-by-four grids of images" width="754" height="377" data-src="https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;fit=clip" data-srcset="https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=600&amp;h=300&amp;fit=crop&amp;dpr=1 600w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=30&amp;auto=format&amp;w=600&amp;h=300&amp;fit=crop&amp;dpr=2 1200w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=15&amp;auto=format&amp;w=600&amp;h=300&amp;fit=crop&amp;dpr=3 1800w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=45&amp;auto=format&amp;w=754&amp;h=377&amp;fit=crop&amp;dpr=1 754w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=30&amp;auto=format&amp;w=754&amp;h=377&amp;fit=crop&amp;dpr=2 1508w, https://images.theconversation.com/files/656105/original/file-20250318-56-lc58g2.jpg?ixlib=rb-4.1.0&amp;q=15&amp;auto=format&amp;w=754&amp;h=377&amp;fit=crop&amp;dpr=3 2262w" /><figcaption class="wp-caption-text">Object recognition AIs can have trouble distinguishing between chihuahuas and blueberry muffins and between sheepdogs and mops. <a href="https://doi.org/10.48550/arXiv.2201.11105" target="_blank" rel="noopener">Shenkman et al</a>, <a href="http://creativecommons.org/licenses/by/4.0/" target="_blank" rel="noopener">CC BY</a></figcaption></figure>
</div>
</figure>
<p>When a system doesn’t understand the question or the information that it is presented with, it may hallucinate. Hallucinations often occur when the model fills in gaps based on similar contexts from its training data, or when it is built using biased or incomplete training data. This leads to incorrect guesses, as in the case of the mislabeled blueberry muffin.</p>
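<p>A hedged sketch of that failure mode, using a stock pretrained image classifier from the torchvision library (the muffin photo is hypothetical): the model has no way to answer “none of the above”, so it reports its best-matching label – confidently – even when every label is wrong.</p>
<pre><code>import torch
from torchvision.models import resnet18, ResNet18_Weights
from PIL import Image

# Load a stock ImageNet classifier together with its matching preprocessing.
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

# Hypothetical input: a photo of something the model was never trained on.
image = Image.open("blueberry_muffin.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# The model cannot say "I don't know": it always reports its best-matching
# label with a confidence score, even if every label is wrong.
top = probs.argmax().item()
print(weights.meta["categories"][top], f"{probs[top].item():.0%}")
</code></pre>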
<p>It’s important to distinguish between AI hallucinations and intentionally creative AI outputs. When an AI system is asked to be creative – like when writing a story or generating artistic images – its novel outputs are expected and desired. Hallucinations, on the other hand, occur when an AI system is asked to provide factual information or perform specific tasks but instead generates incorrect or misleading content while presenting it as accurate.</p>
<p>The key difference lies in the context and purpose: Creativity is appropriate for artistic tasks, while hallucinations are problematic when accuracy and reliability are required.</p>
<p>To address these issues, companies have suggested using high-quality training data and limiting AI responses to follow certain <a href="https://www.ibm.com/topics/ai-hallucinations">guidelines</a>. Nevertheless, these issues may persist in popular AI tools.</p>
<p><iframe  id="_ytid_28126"  width="749" height="421"  data-origwidth="749" data-origheight="421" src="https://www.youtube.com/embed/cfqtFvWOfg0?enablejsapi=1&#038;autoplay=0&#038;cc_load_policy=0&#038;cc_lang_pref=&#038;iv_load_policy=1&#038;loop=0&#038;rel=1&#038;fs=1&#038;playsinline=0&#038;autohide=2&#038;theme=dark&#038;color=red&#038;controls=1&#038;disablekb=0&#038;" class="__youtube_prefs__  epyt-is-override  no-lazyload" title="YouTube player"  allow="fullscreen; accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen data-no-lazy="1" data-skipgform_ajax_framebjll=""></iframe></p>
<figure><figcaption><em><span class="caption">Large language models hallucinate in several ways.</span></em></figcaption></figure>
<h3>What’s at risk</h3>
<p>The impact of an output such as calling a blueberry muffin a chihuahua may seem trivial, but consider the different kinds of technologies that use image recognition systems: An autonomous vehicle that fails to identify objects could lead to a <a href="https://www.npr.org/2019/11/07/777438412/feds-say-self-driving-uber-suv-did-not-recognize-jaywalking-pedestrian-in-fatal-">fatal traffic accident</a>. An autonomous military drone that misidentifies a target could put civilians’ lives in danger.</p>
<p>For AI tools that provide automatic speech recognition, hallucinations are AI transcriptions that include words or phrases that were <a href="https://doi.org/10.1145/3630106.3658996">never actually spoken</a>. This is more likely to occur in noisy environments, where an AI system may end up adding new or irrelevant words in an attempt to decipher background noise such as a passing truck or a crying infant.</p>
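<p>One way to observe this for yourself is to transcribe the same sentence with and without background noise and compare the outputs. A minimal sketch using the open-source Whisper model (the audio file names are hypothetical):</p>
<pre><code>import whisper

# Load a small open-source speech recognition model.
model = whisper.load_model("base")

# Hypothetical recordings: the same sentence, once clean, once with
# traffic noise mixed in.
clean = model.transcribe("speech_clean.wav")["text"]
noisy = model.transcribe("speech_noisy.wav")["text"]

print("clean:", clean)
print("noisy:", noisy)
# Any words that appear only in the noisy transcript were never spoken:
# the model invented them while trying to make sense of the noise.
</code></pre>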
<p>As these systems become more regularly integrated into health care, social service and legal settings, hallucinations in automatic speech recognition could lead to inaccurate clinical or legal <a href="https://doi.org/10.1145/3630106.3658996">outcomes that harm</a> patients, criminal defendants or families in need of social support.</p>
<h3>Check AI’s work</h3>
<p>Regardless of AI companies’ efforts to mitigate hallucinations, users should stay vigilant and question AI outputs, especially when they are used in contexts that require precision and accuracy. Double-checking AI-generated information with trusted sources, consulting experts when necessary, and recognizing the limitations of these tools are essential steps for minimizing their risks.</p>
<hr />
<ul>
<li><a href="https://theconversation.com/profiles/anna-choi-2251692" rel="author"><span class="fn author-name">Anna Choi </span></a>is a Ph.D. Candidate in Information Science, Cornell University</li>
<li><a href="https://theconversation.com/profiles/katelyn-mei-2251694" rel="author"><span class="fn author-name">Katelyn Mei </span></a>is a Ph.D. Student in Information Science, University of Washington</li>
<li>This article first appeared in <a href="https://theconversation.com/what-are-ai-hallucinations-why-ais-sometimes-make-things-up-242896" target="_blank" rel="noopener"><em>The Conversation</em></a></li>
</ul>
<p><script type="text/javascript" src="https://theconversation.com/javascripts/lib/content_tracker_hook.js" id="theconversation_tracker_hook" data-counter="https://counter.theconversation.com/content/242896/count?distributor=republish-lightbox-advanced" async="async"></script></p>
<p>The post <a href="https://stuff.co.za/2025/03/24/what-are-ai-hallucinations-why-ais-sometime/">What are AI hallucinations? Why AIs sometimes make things up</a> appeared first on <a href="https://stuff.co.za">Stuff South Africa</a>.</p>
]]></content:encoded>
	</item>
	</channel>
</rss>
