
DeepMind is back at it again, this time teaching AI how to play football

AI has long been better than humans at our own games. First chess (damn you, Nelson!), and more recently Go, where DeepMind’s AlphaGo comfortably took down a professional player. AI will only improve as time goes on, moving on to different projects and perfecting them one by one.

And then there’s DeepMind’s latest venture: teaching an AI how to play the beautiful game. Which shouldn’t be harder than teaching it chess, right? Yesterday, the Google-owned DeepMind published a paper showing off its new neural probabilistic motor primitives (NPMP). The point? Teaching an AI how to control a physical body well enough for a game of football.

Hello World

An agent learning to imitate a MoCap trajectory (shown in grey). Image: DeepMind

First things first: what is an NPMP? It might be easier to let DeepMind weigh in on this:

“An NPMP is a general-purpose motor control module that translates short-horizon motor intentions to low-level control signals, and it’s trained offline or via RL [reinforcement learning] by imitating motion capture (MoCap) data, recorded with trackers on humans or animals performing motions of interest.”

Basically, DeepMind researchers have created an AI that learns to do stuff inside a physics simulator. It studies the real thing, data captured from an actual human wearing a suit that records body movement, over and over. It also watches other AI agents learning the same skills, and tries to improve based on its own failures and theirs.
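
If you prefer your explanations in code, here’s a minimal, back-of-the-napkin sketch of that imitation idea, nothing like DeepMind’s actual system: a tiny decoder network turns a short-horizon ‘motor intention’ vector into low-level control signals, and is trained to reproduce (fake, randomly generated) MoCap-style targets. Every name, size and number below is our own assumption.

```python
# A toy sketch of the NPMP idea, NOT DeepMind's implementation: a small
# decoder maps a short-horizon "motor intention" to low-level control
# signals, trained by imitating stand-in "MoCap" targets. All shapes and
# the synthetic data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
INTENT_DIM, CONTROL_DIM, HIDDEN = 8, 12, 32   # assumed toy sizes

# Stand-in for MoCap-derived training pairs: (intention, target controls).
intents = rng.normal(size=(256, INTENT_DIM))
targets = np.tanh(intents @ rng.normal(size=(INTENT_DIM, CONTROL_DIM)))

# Tiny two-layer decoder: intention -> joint-level control signals.
W1 = rng.normal(scale=0.1, size=(INTENT_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, CONTROL_DIM))

def decode(z, W1, W2):
    h = np.tanh(z @ W1)
    return np.tanh(h @ W2)

# Imitation training: plain gradient descent on the squared error
# between the decoded controls and the "MoCap" targets.
lr = 0.1
for step in range(500):
    h = np.tanh(intents @ W1)
    out = np.tanh(h @ W2)
    err = out - targets                       # (N, CONTROL_DIM)
    d_out = err * (1 - out**2)                # backprop through tanh
    d_h = (d_out @ W2.T) * (1 - h**2)
    W2 -= lr * h.T @ d_out / len(intents)
    W1 -= lr * intents.T @ d_h / len(intents)

pred = decode(intents, W1, W2)
print("imitation MSE:", float(((pred - targets) ** 2).mean()))
```

The real NPMP is trained on genuine motion capture and drives a full simulated humanoid; the point here is only the shape of the pipeline: intention in, low-level controls out, trained by imitation.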

DeepMind began the initial training five years ago, first giving the simulated body the tools to walk. It couldn’t do so at first, but eventually learned through trial and error. After learning how to walk and run, it could navigate a basic obstacle course… though not as elegantly as you might imagine.

Humanoid character learning to traverse an obstacle course through trial and error, which can lead to idiosyncratic solutions. Heess et al., “Emergence of locomotion behaviours in rich environments” (2017). Image: DeepMind
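
To make ‘trial and error’ concrete, here’s a deliberately tiny, hypothetical version of that loop: the agent randomly tweaks its gait parameters and keeps whichever tweaks carry it further. The ‘simulator’ here is a made-up scoring function standing in for real physics, not DeepMind’s engine.

```python
# Trial-and-error in miniature: a hypothetical random-search loop that
# keeps whatever parameter tweak makes a toy "walker" travel further.
# The scoring function below is a made-up stand-in for a physics simulator.
import numpy as np

rng = np.random.default_rng(1)

def distance_walked(gait):
    # Toy stand-in for the simulator: some gaits simply go further.
    ideal = np.array([0.5, -0.2, 0.8, 0.1])
    return float(-np.sum((gait - ideal) ** 2))

gait = np.zeros(4)                 # the agent starts out unable to walk
best = distance_walked(gait)

for trial in range(2000):
    candidate = gait + rng.normal(scale=0.05, size=4)  # random tweak
    score = distance_walked(candidate)
    if score > best:               # keep tweaks that help, discard the rest
        gait, best = candidate, score

print("final score:", best)        # approaches 0 (the ideal gait) over trials
```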

Like watching Liverpool play

DeepMind AI “skills”. Image: DeepMind

Next, it was time to teach an AI how to shoot and dribble.

Of course, there’s more to football than just kicking a ball into a net (unless you’re Erling Haaland). Teamwork, individual skill and intense decision-making all determine a match’s outcome. There’s also gravity, weather and other players to think about. DeepMind needed to impart all of this knowledge to the AI it was creating.


Read More: A celebrated AI has learned a new trick: How to do chemistry


First, DeepMind gave the AI a footballer’s moveset to learn and copy. This helped the AI agent dribble, chase a ball, or ‘kick to a target’. It also learned about “agile locomotion, passing and division of labour”, measured against a number of real-world football analytics. This is where the ‘team’ learns better coordination and decision-making in a realistic situation.
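
For a flavour of how a ‘kick to a target’ drill might be scored, here’s a hypothetical shaped reward: more reward as the ball ends up closer to the target, plus a small nudge for getting near the ball at all. The function name, terms and weights are our guesses at the general pattern, not DeepMind’s actual objective.

```python
# A hypothetical reward for a "kick to a target" drill. Names and weights
# are assumptions sketching the general shape of such objectives.
import numpy as np

def kick_to_target_reward(ball_pos, target_pos, agent_pos,
                          w_target=1.0, w_approach=0.1):
    ball_to_target = np.linalg.norm(ball_pos - target_pos)
    agent_to_ball = np.linalg.norm(agent_pos - ball_pos)
    # Main term: get the ball near the target; small term: get near the ball.
    return -w_target * ball_to_target - w_approach * agent_to_ball

# Example: the reward rises as the ball ends up nearer the target.
print(kick_to_target_reward(np.array([9.0, 0.0]),   # ball
                            np.array([10.0, 0.0]),  # target
                            np.array([8.0, 0.0])))  # agent
```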

DeepMind lets its AI train for days at a time. After three days of human time, the equivalent of five years of simulated matches (just under 30 000 in total), the AI could chase after a ball and sometimes score. As time passed, it picked up more and more skills.
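
Those numbers roughly check out, assuming (our assumption, not DeepMind’s) that each simulated match runs the full 90 minutes:

```python
# Sanity check on the article's figures, assuming full 90-minute matches.
matches = 30_000                      # "just under 30 000 matches"
minutes = matches * 90
years = minutes / 60 / 24 / 365       # minutes -> hours -> days -> years
print(f"{matches} matches ≈ {years:.1f} years of football")  # ≈ 5.1 years
```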

The system now has enough training to play a simplified 2v2 match of football, which you can see below.

It’s still a work in progress (obviously), though that doesn’t mean the achievement deserves any less praise. We’d like a full 90 minutes of this please, DeepMind.

The AI agents will improve the more they play. When this all started, they couldn’t walk. Now they’re scoring better goals than Roberto Firmino, though that isn’t hard.
