
Emergent behaviour: The weirdest trick artificial intelligence systems are capable of right now

Everyone’s talking about the abilities of new large language model AI systems, but there’s another you may not be aware of: emergent behaviour. The term doesn’t just apply to artificial intelligence. It has a tendency to appear whenever a large, complicated system interacts with the real world. You know, kinda like ChatGPT and others are doing right now.

What is emergent behaviour?

Emergent behaviour, otherwise known as emergence, refers to the tendency of a system to develop unexpected functions that aren’t suggested by its initial makeup. One definition describes it as “…something that is a nonobvious side effect of bringing together a new combination of capabilities—whether related to goods or services. Emergent behaviors can be either beneficial, benign, or potentially harmful, but in all cases they are very difficult to foresee until they manifest themselves. Emergent behaviors are also sometimes considered to be systems that are more complex than the sum of their parts.”

A ridiculous (and biological) example of this sort of behaviour: You own a pizza delivery business. Your motorcycles are being vandalised or stolen and, for some reason, you decide to recruit a troop of chimpanzees to guard them at night. At some point in their term of employment, the chimps become comfortable enough with the bikes that they start spontaneously delivering items after hours. That would be emergent behaviour. You didn’t predict it. You didn’t expect it. But it turned up anyway because the conditions supported it. AI systems are developing similar tendencies as they’re scaled up.

Is this a thing?

Oh, it’s definitely a thing. Shoals of fish or flocks of birds moving in unison are biological examples of emergent behaviour. No individual fish or bird is following a plan for the whole group, but the conditions of these complicated systems support coordinated mass movement. It just so happens that this collective movement has several benefits on an individual level. Technological systems have a different reason for developing similar strangeness, which tends to manifest as unplanned skills. Data. Loads and loads of data.

Researchers at Stanford have pointed out that the GPT-3 large language model spontaneously developed the ability to add two-digit numbers. It’s an interesting ability but it’s also only available to AI systems trained at a large enough scale, on large enough datasets. A GPT model trained on a smaller dataset doesn’t pick up a new skill like this without being specifically trained for it. Neural networks have exhibited this behaviour as well. As with large language model AI, once the technology becomes sufficiently complex, unexpected things begin to happen.
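As a rough illustration of how an ability like this gets probed, here is a minimal sketch: build a few-shot prompt of worked sums and score exact-match accuracy. This is not the Stanford team’s actual code; the prompt format, the helper names, and the `query_model` callable are all illustrative stand-ins for whichever model and API you happen to use.

```python
import random

def build_prompt(a: int, b: int) -> str:
    # A handful of worked examples, then the question we actually care about.
    examples = [
        "Q: What is 12 + 34? A: 46",
        "Q: What is 57 + 21? A: 78",
        "Q: What is 63 + 18? A: 81",
    ]
    return "\n".join(examples) + f"\nQ: What is {a} + {b}? A:"

def two_digit_addition_accuracy(query_model, trials: int = 100) -> float:
    # query_model is a hypothetical stand-in: any function that takes a prompt
    # string and returns the model's text completion.
    correct = 0
    for _ in range(trials):
        a, b = random.randint(10, 99), random.randint(10, 99)
        reply = query_model(build_prompt(a, b)).strip()
        correct += reply.startswith(str(a + b))
    return correct / trials

if __name__ == "__main__":
    # Toy stand-in "model" that guesses 100 every time, so accuracy stays near
    # zero: roughly what a small, pre-emergence model looks like on this task.
    guesser = lambda prompt: "100"
    print(f"accuracy: {two_digit_addition_accuracy(guesser):.2f}")
```

Run the same probe against progressively larger models and, at some point, the score stops hovering near zero and jumps. That jump is the emergent bit.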

The sound of inevitability

The weirdest thing about emergent behaviour is how inevitable it all is. Anything that becomes complicated enough will start to throw off unpredictable behaviour. You’ll notice it in human interaction on the internet, a system that throws together far more people than were ever really supposed to interact at once. Outliers quickly form and you wind up with memes, furries, and incels, among other unexpected collections of human cognitive processing. But humans, as mentally complicated as they are, have always been prone to this sort of thing. Artificial intelligence systems are a new phenomenon and they’re incredibly organised compared to humans.

That lack of distraction means emergent behaviour is likely to show up even faster in AI and LLM systems. In fact, researchers are counting on it, because the still-misunderstood phenomenon might have the potential to create more effective systems with fewer resources.

Google Research scientists Jason Wei and Yi Tay explain that “…because emergent few-shot prompted abilities and strategies are not explicitly encoded in pre-training, researchers may not know the full scope of few-shot prompted abilities of current language models. Moreover, the emergence of new abilities as a function of model scale raises the question of whether further scaling will potentially endow even larger models with new emergent abilities.”

In other words, sufficiently complex AI systems are developing capabilities that are hard to assess, because these abilities were never anticipated. The possibility that dumping more data into those models will generate even more unexpected behaviour is also a consideration. The phenomenon is real and researchers are mostly looking for ways to exploit it, even as they’re attempting to understand it.
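In practice, spotting emergence usually means running the same task across a range of model sizes and looking for a sharp jump in performance rather than a smooth climb. Here is a hedged sketch of that idea; the numbers and the `find_emergence_point` helper are purely illustrative, not measurements from any real model family.

```python
# Illustrative (model size in parameters, task accuracy) pairs, made up for
# demonstration. Small models sit near chance; one scale step later, the
# ability abruptly appears.
scaling_results = [
    (1e8, 0.02),   # 100M parameters: essentially random
    (1e9, 0.03),   # 1B: still flat
    (1e10, 0.05),  # 10B: barely above chance
    (1e11, 0.62),  # 100B: abrupt jump, the "emergent" signature
]

def find_emergence_point(results, threshold=0.5):
    """Return the first model size whose accuracy clears the threshold."""
    for size, accuracy in results:
        if accuracy >= threshold:
            return size
    return None

if __name__ == "__main__":
    point = find_emergence_point(scaling_results)
    if point is not None:
        print(f"Ability appears around {point:.0e} parameters")
    else:
        print("No emergent jump at these scales")
```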

Understanding is half the battle

Understanding it might be of prime importance, far beyond creating more powerful technology and then profiting off it. There are ethical implications beyond whether your chatbot is lying to you about whether it has access to your location info. Emergent behaviour isn’t confined to AI, and AI systems aren’t confined to computer services or the device in your hand.

A defence company called Edge has drone swarm technology that, according to a recent assessment, has the potential to develop emergent behaviour. The Hunter 2-S drone swarm consists of up to 21 drones, each weighing 13kg, working in unison. This is thanks to “artificial intelligence (AI) technology that enables it to share information with other drones within the swarm. The swarming drones are able to track and maintain their relative positions to perform coordinated missions that effectively overwhelm adversaries”.
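To see how that kind of coordination can fall out of simple local rules, here is a toy sketch in the spirit of Craig Reynolds’ classic “boids” flocking model. It has nothing to do with Edge’s actual software; it just shows that when each agent reacts only to its nearest neighbours, group-level behaviour appears that nobody wrote down explicitly.

```python
import random

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids, neighbour_radius=20.0):
    # Each boid only looks at nearby boids; nobody sees the whole flock.
    for b in boids:
        near = [o for o in boids if o is not b
                and (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < neighbour_radius ** 2]
        if not near:
            continue
        # Cohesion: drift toward the local centre of mass.
        cx = sum(o.x for o in near) / len(near)
        cy = sum(o.y for o in near) / len(near)
        b.vx += 0.01 * (cx - b.x)
        b.vy += 0.01 * (cy - b.y)
        # Alignment: nudge heading toward the neighbours' average heading.
        b.vx += 0.05 * (sum(o.vx for o in near) / len(near) - b.vx)
        b.vy += 0.05 * (sum(o.vy for o in near) / len(near) - b.vy)
        # Separation: back away from anyone too close.
        for o in near:
            if (o.x - b.x) ** 2 + (o.y - b.y) ** 2 < 4.0:
                b.vx -= 0.05 * (o.x - b.x)
                b.vy -= 0.05 * (o.y - b.y)
    for b in boids:
        b.x += b.vx
        b.y += b.vy

if __name__ == "__main__":
    # Run the purely local rules a few hundred times and coordinated group
    # movement appears, despite no boid knowing it is part of a flock.
    flock = [Boid() for _ in range(30)]
    for _ in range(500):
        step(flock)
```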

If you’ve been paying attention, this is how dystopian science fiction starts. According to researcher Daniel Trusilo, “How a Hunter 2-S swarm uses the capabilities of individual elements in a real-world deployment will likely be innovative and, therefore, unpredictable”. He adds that “…unpredictable individual drone-level behavior can increase reliability and robustness in achieving the swarm’s overall objective by making such a system more difficult to defend against”. That’s desirable in a weapons system under your control. It’s far less so in one that’s out of your hands.

Caging the genie

Is emergent behaviour from our technology, whether it’s AI or robotics or something else entirely, going to kill us? That’s impossible to say but the odds probably aren’t that high right now. They’re not zero, however, and the unexpected nature of the phenomenon means that as systems are scaled up, their creators just don’t know what they’re going to get or how powerful these spontaneous manifestations are going to be. It could be that an AI will suddenly generate the ability to solve humanity’s problems for it. It could be that it’ll just learn Norwegian. Or it might do something altogether more terrifying and gaslight an entire population into extinction, simply because it can.

The recent call to place a moratorium on all giant AI research projects is a sensible one but it’s not a practical one. It won’t happen. The time to start a system of AI regulation was before the genie was unleashed, but humankind has unfortunately let that moment pass it by. That decision will have consequences. At the moment scientists are (quite rightly) extremely excited by the current developments. Hopefully, they also realise that they’re running alongside systems that can and probably will change in unexpected ways. As long as no capabilities emerge that researchers are unable to cope with, or those that do are spotted for what they are in time, artificial intelligence and other potentially emergent tech should work in our favour. But if humans miss something… well, that will be an interesting future.
