Disclosure, consent and platform power have become newly invigorated battlefields with the rise of AI. The issue came to the fore recently with YouTube’s controversial decision to use AI-powered tools to “unblur, denoise and improve clarity” for some of the content uploaded to the platform. This was done without the consent, or even knowledge, of the relevant content creators. Viewers of the material knew nothing of YouTube’s intervention. Without transparency, users have limited recourse to identify, let alone respond to, AI-edited content. At the same time, such distortions have a history that significantly predates today’s AI tools. A new kind of invisible…
Author: The Conversation
With the rapid advancement of generative artificial intelligence (AI), teachers have been thrust into a new and ever-shifting classroom reality. The public, including many students, now has widespread access to GenAI tools and large language models (LLMs). Students sometimes use these tools for schoolwork. School boards have taken different approaches to regulating or integrating the technology in classrooms. Teachers, meanwhile, find themselves responding to these paradigm shifts while juggling student needs and wider expectations. The Canadian Teachers’ Federation (CTF) has called on the federal government and the Council of Ministers of Education, Canada to work with provinces and territories to enact enforceable policies that protect student…
In today’s world, huge amounts of data are being created all the time, yet more than half of it is never used. It sits in silos, goes unmanaged, becomes inaccessible as systems change, or loses relevance as business priorities shift. This “dark data” accumulates in servers and storage devices, consuming electricity and inflating the digital carbon footprint. It may appear harmless, but this growing mass of digital waste has consequences for the environment. Storing unused or obsolete digital data requires constant power for servers and cooling systems. This drives up electricity consumption and greenhouse gas emissions. Dark data alone…
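The link between stored-but-unused data and emissions is easy to make concrete with a back-of-envelope calculation. The figures below (watts per terabyte stored, grid emission factor) are illustrative assumptions for the sketch, not values from the article:

```python
# Back-of-envelope estimate of the energy cost of keeping "dark data" online.
# WATTS_PER_TB and GRID_KG_CO2_PER_KWH are assumed illustrative figures,
# not numbers taken from the article.

WATTS_PER_TB = 10.0          # assumed continuous draw per terabyte (drives + cooling overhead)
HOURS_PER_YEAR = 365 * 24
GRID_KG_CO2_PER_KWH = 0.4    # assumed average grid emission factor

def dark_data_footprint(terabytes: float) -> tuple[float, float]:
    """Return (kWh per year, kg CO2 per year) for storing `terabytes` of idle data."""
    kwh = terabytes * WATTS_PER_TB * HOURS_PER_YEAR / 1000.0
    co2 = kwh * GRID_KG_CO2_PER_KWH
    return kwh, co2

kwh, co2 = dark_data_footprint(100)  # e.g. 100 TB of never-read backups
print(f"{kwh:.0f} kWh/year, {co2:.0f} kg CO2/year")
```

Under these assumptions, 100 TB of idle storage draws roughly as much electricity per year as a small household, which is why deleting or archiving unused data to powered-down media matters at scale.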
Astronomers are living in a golden age of bigger and better telescopes. But even our most advanced technology pales in comparison to the power of nature’s own “cosmic magnifying glasses” – strong gravitational lenses. In less than 50 years we have gone from the first-ever discovery of a strong gravitational lens to now finding thousands. As new telescopes come online, we’re expecting to discover thousands more. With these lenses, we can look deep into the universe, and catch glimpses into the most puzzling of contemporary cosmic mysteries: dark matter and dark energy. So, what are gravitational lenses and how do they work? A…
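The excerpt cuts off before answering its own question, but the basic physics is a standard textbook result of general relativity. Light passing a point mass $M$ at impact parameter $b$ is deflected by

\[
\hat{\alpha} = \frac{4GM}{c^2 b},
\]

twice the Newtonian prediction. When source, lens and observer are nearly aligned, the lensed images form a ring of angular radius (the Einstein radius)

\[
\theta_E = \sqrt{\frac{4GM}{c^2}\,\frac{D_{LS}}{D_L D_S}},
\]

where $D_L$, $D_S$ and $D_{LS}$ are angular-diameter distances to the lens, to the source, and between lens and source. Measuring $\theta_E$ gives the total lensing mass, including dark matter, which is why these systems are such powerful probes.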
In 1770, after Captain Cook’s Endeavour struck the Great Barrier Reef and was held up for repairs, botanists Joseph Banks and Daniel Solander collected hundreds of plants. One of those pressed plants is among 170,000 specimens in the herbarium at the University of Melbourne. Worldwide, more than 395 million specimens are housed in herbaria. Together they comprise an unparalleled record of Earth’s plant and fungal life over time. We wanted to find a better, faster way to tap into this wealth of information. Our new research describes the development and testing of a new AI-driven tool, Hespi (short for “herbarium specimen sheet pipeline”). It has the potential…
How is an animal feeling at a given moment? Humans have long recognised certain well-known behaviours, like a cat hissing as a warning, but in many cases we’ve had little clue of what’s going on inside an animal’s head. Now we have a better idea, thanks to a Milan-based researcher who has developed an AI model that he claims can detect whether an animal’s calls express positive or negative emotions. Stavros Ntalampiras’s deep-learning model, which was published in Scientific Reports, can recognise emotional tones across seven species of hoofed animals, including pigs, goats and cows. The model picks up on shared features of…
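The excerpt doesn’t detail the model’s architecture, but audio-valence classifiers of this kind typically turn a waveform into a time-frequency representation and feed summary features to a classifier. The sketch below shows only that generic front end, on a synthetic signal standing in for a recorded call; it is not Ntalampiras’s actual model, and the feature choices are assumptions:

```python
import numpy as np
from scipy.signal import spectrogram

# Generic front end of an audio-classification pipeline:
# waveform -> spectrogram -> summary features for a downstream valence classifier.
# This is a sketch of the typical approach, NOT the published model.

def call_features(waveform: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Summarise a vocalisation as simple spectral statistics."""
    freqs, _, spec = spectrogram(waveform, fs=sample_rate, nperseg=512)
    power = spec.mean(axis=1)                       # average power per frequency bin
    centroid = (freqs * power).sum() / power.sum()  # spectral centroid ("brightness")
    return np.array([centroid, power.max(), power.sum()])

# Synthetic stand-in for a recorded call: a 440 Hz tone plus light noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
call = np.sin(2 * np.pi * 440 * t) + 0.1 * rng.standard_normal(16000)

feats = call_features(call)
print(feats.shape)
```

In a real system these hand-crafted statistics would usually be replaced by a learned representation (for example, a convolutional network over the spectrogram itself), trained on labelled recordings across species.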
For the past half-century, the jobs that have commanded the greatest earnings have increasingly concentrated on knowledge work, especially in science and technology. Now with the spread of generative artificial intelligence (AI), that may no longer be true. Employers are beginning to report their intent to replace certain white-collar jobs with AI. This raises questions over whether the economy will need as many creative and analytic workers, such as computer programmers, or support as many entry-level knowledge economy jobs. This shift matters not just for workers but for K-12 teachers, who are accustomed to preparing students for white-collar work. Families, too, are concerned about the skills their children will…
Portable air cleaners aimed at curbing the indoor spread of infections are rarely tested for how well they protect people – and very few studies evaluate their potentially harmful effects. That’s the upshot of a detailed review of nearly 700 studies that we co-authored in the journal Annals of Internal Medicine. Many respiratory viruses, such as those behind COVID-19 and influenza, can spread through indoor air. Technologies such as HEPA filters, ultraviolet light and special ventilation designs – collectively known as engineering infection controls – are intended to clean indoor air and prevent viruses and other disease-causing pathogens from spreading. Along with our…
Grok is a generative artificial intelligence (genAI) chatbot by xAI that, according to Elon Musk, is “the smartest AI in the world.” Grok’s latest upgrade is Ani, a porn-enabled anime girlfriend, recently joined by a boyfriend informed by Twilight and 50 Shades of Grey. This summer, both xAI and OpenAI launched updated versions of their chatbots. Each touted improved performance, but more notably, new personalities. xAI introduced Ani; OpenAI rolled out a colder-by-default GPT-5 with four personas to replace its unfailingly sycophantic GPT-4o model. Similar to claims made by Google DeepMind and Anthropic, both companies insist they’re building AI to “benefit all humanity” and “advance human comprehension.” Anthropic claims, at least…
Earlier this month, when OpenAI released its latest flagship artificial intelligence (AI) system, GPT-5, the company said it was “much smarter across the board” than earlier models. Backing up the claim were high scores on a range of benchmark tests assessing domains such as software coding, mathematics and healthcare. Benchmark tests like these have become the standard way we assess AI systems – but they don’t tell us much about the actual performance and effects of these systems in the real world. What would be a better way to measure AI models? A group of AI researchers and metrologists – experts in…