The pandemic forced many educational institutions to move to online learning. Could the rise of chatbots, including OpenAI’s ChatGPT and Google’s Bard, now further improve the accessibility of learning and make education more attainable for everyone? Chatbots are computer programmes that use artificial intelligence to simulate conversation with human users. They work by analysing the context of a conversation and generating responses that are likely to be relevant. They have been trained on massive data sets of human language, allowing them to generate responses to a wide range of questions. Chatbots like ChatGPT and Bard can be used in a variety of educational settings,…
Artificial intelligence-powered tools, such as ChatGPT, have the potential to revolutionize the efficiency, effectiveness and speed of the work humans do. And this is true in financial markets as much as in sectors like health care, manufacturing and pretty much every other aspect of our lives. I’ve been researching financial markets and algorithmic trading for 14 years. While AI offers lots of benefits, the growing use of these technologies in financial markets also points to potential perils. A look at Wall Street’s past efforts to speed up trading by embracing computers and AI offers important lessons on the implications of using them for decision-making. Program trading…
There has been shock around the world at the rapid rate of progress with ChatGPT and other artificial intelligence systems built on what are known as large language models (LLMs). These systems can produce text that seems to display thought, understanding and even creativity. But can these systems really think and understand? This is not a question that can be answered through technological advance, but careful philosophical analysis and argument tell us the answer is no. And without working through these philosophical issues, we will never fully comprehend the dangers and benefits of the AI revolution. In 1950, the father of modern computing, Alan…
The artificial intelligence (AI) pioneer Geoffrey Hinton recently resigned from Google, warning of the dangers of the technology “becoming more intelligent than us”. His fear is that AI will one day succeed in “manipulating people to do what it wants”. There are reasons we should be concerned about AI. But we frequently treat or talk about AIs as if they are human. Stopping this, and realising what they actually are, could help us maintain a fruitful relationship with the technology. In a recent essay, the US psychologist Gary Marcus advised us to stop treating AI models like people. By AI models, he means…
Are we alone in the universe? It’s a question that fascinates scientists and the public alike. In science, the focus tends to be on our search for life elsewhere. The idea that we might be watched by a distant alien civilisation, however, is usually confined to the realm of science fiction. But if there are other technological civilisations out there, they would probably be significantly more developed than we are. After all, we have only just emerged as a fledgling technical (industrial) civilisation in the last 200 years – other technical civilisations could easily be 1,000 or 10,000 or even 100,000 years…
For the most part, the focus of contemporary emergency management has been on natural, technological and human-made hazards such as flooding, earthquakes, tornadoes, industrial accidents, extreme weather events and cyber attacks. However, with the increase in the availability and capabilities of artificial intelligence, we may soon see emerging public safety hazards related to these technologies that we will need to mitigate and prepare for. Over the past 20 years, my colleagues and I — along with many other researchers — have been leveraging AI to develop models and applications that can identify, assess, predict, monitor and detect hazards to inform emergency response operations and decision-making. We are now reaching a…
Infectious disease outbreaks in African countries are, unfortunately, all too common. Ebola in the Democratic Republic of the Congo or Uganda; Marburg virus in Guinea or Equatorial Guinea; cholera in Malawi; malaria and tuberculosis are among them. These diseases do not respect human-made or porous borders. So it’s essential that scientists in Africa are able to generate and share critical data on the pathogens in time to inform public-health decisions. Genomic sequencing technologies are powerful tools in this kind of work. They enable scientists to decode the genetic material of diseases and create biological “fingerprints” to investigate and track the pathogens that cause those diseases. This information…
For the first time, astronomers have captured images that show a star consuming one of its planets. The star, named ZTF SLRN-2020, is located in the Milky Way galaxy, in the constellation Aquila. As the star swallowed its planet, the star brightened to 100 times its normal level, allowing the 26-person team of astronomers I worked with to detect this event as it happened. I am a theoretical astrophysicist, and I developed the computer models that our team uses to interpret the data we collect from telescopes. Although we only see the effects on the star, not the planet directly, our…
Like most people I check my emails in the morning, wading through a combination of work requests, spam and news alerts peppering my inbox. But yesterday brought something different and deeply disturbing. I noticed an alert from the US Cybersecurity and Infrastructure Security Agency (CISA) about some very devious malware that had infected a network of computers. The malware in question is Snake, a cyber espionage tool that has been deployed by Russia’s Federal Security Service for about 20 years. According to CISA, the Snake implant is the “most sophisticated cyber espionage tool designed and used by Center 16 of Russia’s Federal…
The past few years have seen an explosion of progress in large language model artificial intelligence systems that can do things like write poetry, conduct humanlike conversations and pass medical school exams. This progress has yielded models like ChatGPT that could have major social and economic ramifications ranging from job displacements and increased misinformation to massive productivity boosts. Despite their impressive abilities, large language models don’t actually think. They tend to make elementary mistakes and even make things up. However, because they generate fluent language, people tend to respond to them as though they do think. This has led researchers to study the models’ “cognitive” abilities and biases, work that has grown…