Author: The Conversation

Self-correction is fundamental to science. One of its most important forms is peer review, when anonymous experts scrutinise research before it is published. This helps safeguard the accuracy of the written record. Yet problems slip through. A range of grassroots and institutional initiatives work to identify problematic papers, strengthen the peer-review process, and clean up the scientific record through retractions or journal closures. But these efforts are imperfect and resource-intensive. Soon, artificial intelligence (AI) will be able to supercharge these efforts. What might that mean for public trust in science?

Peer review isn't catching everything

In recent decades, the digital age…


Back in 2008, The Atlantic sparked controversy with a provocative cover story: Is Google Making Us Stupid? In that 4,000-word essay, later expanded into a book, author Nicholas Carr suggested the answer was yes, arguing that technology such as search engines was worsening Americans’ ability to think deeply and retain knowledge. At the core of Carr’s concern was the idea that people no longer needed to remember or learn facts when they could instantly look them up online. While there might be some truth to this, search engines still require users to apply critical thinking to interpret and contextualise the results. Fast-forward to today, and…


Language technologies like generative artificial intelligence (AI) hold significant potential for public health. From outbreak detection systems that scan global news in real time, to chatbots providing mental health support and conversational diagnostic tools improving access to primary care, these innovations are helping address health challenges. At the heart of these developments is natural language processing, an interdisciplinary field within AI research. It enables computers to interpret, understand and generate human language, bridging the gap between humans and machines. Natural language processing can process and analyse enormous volumes of health data, far more than humans could ever handle manually. This is especially valuable in regions…


Grok, the artificial intelligence (AI) chatbot embedded in X (formerly Twitter) and built by Elon Musk’s company xAI, is back in the headlines after calling itself “MechaHitler” and producing pro-Nazi remarks. The developers have apologised for the “inappropriate posts” and “taken action to ban hate speech” from Grok’s posts on X. Debates about AI bias have been revived too. In a statement, xAI said:

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X. xAI is training only truth-seeking…”


Generative AI, especially large language models (LLMs), presents exciting and unprecedented opportunities, as well as complex challenges, for academic research and scholarship. As the different versions of LLMs (such as ChatGPT, Gemini, Claude, Perplexity.ai and Grok) continue to proliferate, academic research is beginning to undergo a significant transformation. Students, researchers and instructors in higher education need AI literacy knowledge, competencies and skills to address these challenges and risks. In a time of rapid change, students and academics are advised to look to their institutions, programs and units for discipline-specific policy or guidelines regulating the use of AI.

Researcher use of AI

A recent study led…


The Wimbledon tennis tournament in 2025 has brought us familiar doses of scorching sunshine and pouring rain, British hopes and despair, and the usual queues, strawberries and on-court stardust. One major difference with this year’s tournament, however, has been the notable absence of human line judges for the first time in 147 years. In a bid to modernise, organisers have replaced all 300 line judges with the Hawk-Eye electronic line-calling (ELC) system powered by 18 high-speed cameras and supported by around 80 on-court assistants. It has been sold as a leap forward, but has already caused widespread controversy. In her…


In my writing and rhetoric courses, students have plenty of opinions on whether AI is intelligent: how well it can assess, analyze, evaluate and communicate information. When I ask whether artificial intelligence can “think,” however, I often look upon a sea of blank faces. What is “thinking,” and how is it the same or different from “intelligence”? We might treat the two as more or less synonymous, but philosophers have marked nuances for millennia. Greek philosophers may not have known about 21st-century technology, but their ideas about intellect and thinking can help us understand what’s at stake with AI today.

The divided…


Artificial intelligence (AI) is rapidly becoming an everyday part of our lives. Many of us use it without even realising, whether it be writing emails, finding a new TV show or managing smart devices in our homes. It is also increasingly used in many professional contexts – from helping with recruitment to supporting health diagnoses and monitoring students’ progress in school. But apart from a handful of computing-focused and other STEM programs, most Australian university students do not receive formal tuition in how to use AI critically, ethically or responsibly. Here’s why this is a problem and what we can do instead.

AI use in unis so far…


Many dating app companies are enthusiastic about incorporating generative AI into their products. Whitney Wolfe Herd, founder of dating app Bumble, wants gen-AI to “help create more healthy and equitable relationships”. In her vision of the near future, people will have AI dating concierges who could “date” other people’s dating concierges for them, to find out which pairings were most compatible. Dating app Grindr is developing an AI wingman, which it hopes will be up and running by 2027. Match Group, owner of popular dating apps including Tinder, Hinge and OkCupid, has also expressed keen interest in using gen-AI in its products, believing…


The advent of generative AI has elicited waves of frustration and worry across academia for all the reasons one might expect: early studies are showing that artificial intelligence tools can dilute critical thinking and undermine problem-solving skills, and there are many reports that students are using chatbots to cheat on assignments. But how do students feel about AI? And how is it affecting their relationships with peers, instructors and coursework? I am part of a group of University of Pittsburgh researchers with a shared interest in AI and undergraduate education. While there is a growing body of research exploring how generative AI is affecting higher…
