Google DeepMind's AI-powered robots taught themselves to play soccer without being programmed by humans
ChatGPT has been getting a lot of attention since its debut in November of last year. But while everyone is buzzing about the OpenAI chatbot, DeepMind is another AI lab that most people are likely unaware of and that should probably be on your radar. The company is developing cutting-edge machine-learning technologies that will blow your mind and could usher in a new era of what experts call artificial general intelligence (AGI).
Demis Hassabis founded DeepMind in London in 2010, together with his childhood friend Mustafa Suleyman and Shane Legg. Google acquired the firm in 2014, and it is now a subsidiary of Alphabet, Google’s parent company. DeepMind is building AI that can learn and reason like people, as opposed to ChatGPT, which simply responds to prompts in a conversation.
DeepMind is particularly interested in developing AI that can learn and improve on its own, without the need for human intervention. Its research spans several fields, including machine learning, deep learning, and reinforcement learning.
One of DeepMind’s most notable accomplishments is AlphaGo, a program that defeated a human world champion at the game of Go in 2016. This was an important breakthrough in artificial intelligence, since Go is a considerably more difficult game than chess and was long considered beyond the capacity of computers to master. DeepMind has continued to make important advances in AI research since then, especially in healthcare, climate research, and energy efficiency.
In an interview with CBS’ “60 Minutes” that aired Sunday, DeepMind demonstrated how its AI-powered robots taught themselves to play soccer. The segment featured a soccer match between two DeepMind robots at Google’s AI lab in London.
“Ah! Goal! A soccer match at DeepMind looks like fun and games but, here’s the thing: humans did not program these robots to play–they learned the game by themselves,” CBS correspondent Scott Pelley exclaimed.
Unlike other robots out there, these were not programmed by humans to play; they learned the game entirely on their own.
During the segment, Raia Hadsell, DeepMind’s vice president of Research and Robotics, also showed how engineers used motion-capture technology to teach the AI program to move like a human. “But on the soccer pitch, the robots were told only that the object was to score. The self-learning program spent about two weeks testing different moves. It discarded those that didn’t work, built on those that did, and created all-stars,” Pelley explained.
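The process Pelley describes is, in spirit, reinforcement learning: the program tries many variations of a behavior, keeps the changes that bring it closer to scoring, and throws away the rest. The toy sketch below is only an illustration of that trial-and-error loop, not DeepMind's actual training code; the "reward" function, parameters, and scale are all invented for the example, and the real system trains deep neural networks in simulation at vastly larger scale.

```python
# Illustrative sketch only: a toy trial-and-error loop in the spirit of
# "test moves, discard failures, build on successes." Everything here
# (the reward, the policy representation, the numbers) is made up for
# illustration and is not DeepMind's method.

import random

def reward(policy):
    """Toy stand-in for 'how often does this policy score a goal?'
    A policy here is just a list of numbers; the reward peaks when
    every number is close to 1.0."""
    return -sum((p - 1.0) ** 2 for p in policy)

def train(num_trials=2000, policy_size=8, noise=0.1):
    # Start from random behavior.
    best_policy = [random.uniform(-1, 1) for _ in range(policy_size)]
    best_reward = reward(best_policy)

    for _ in range(num_trials):
        # Try a small random variation of the current best behavior.
        candidate = [p + random.gauss(0, noise) for p in best_policy]
        candidate_reward = reward(candidate)

        # Build on moves that work; discard the ones that don't.
        if candidate_reward > best_reward:
            best_policy, best_reward = candidate, candidate_reward

    return best_policy, best_reward

if __name__ == "__main__":
    policy, score = train()
    print(f"final reward: {score:.4f}")
```

Even this crude loop captures the core idea Hadsell points to: nobody hand-codes the tactics, they emerge from whatever happens to raise the score.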
Below is the full transcript of the “60 Minutes” segment.
Scott Pelley: The revolution in artificial intelligence is the center of a debate ranging from those who hope it will save humanity to those who predict doom. Google lies somewhere in the optimistic middle, introducing AI in steps so civilization can get used to it. We saw what’s coming next in machine learning at Google’s AI lab in London — a company called DeepMind — where the future looks something like this.
Scott Pelley: Look at that! Oh, my goodness…
Raia Hadsell: They got a pretty good kick on them…
Scott Pelley: Ah! Goal!
A soccer match at DeepMind looks like fun and games but, here’s the thing: humans did not program these robots to play–they learned the game by themselves.
Raia Hadsell: It’s coming up with these interesting different strategies, different ways to walk, different ways to block…
Scott Pelley: And they’re doing it, they’re scoring over and over again…
Raia Hadsell, vice president of Research and Robotics, showed us how engineers used motion capture technology to teach the AI program how to move like a human. But on the soccer pitch the robots were told only that the object was to score. The self-learning program spent about two weeks testing different moves. It discarded those that didn’t work, built on those that did, and created all-stars.
And with practice, they get better. Hadsell told us that, independent from the robots, the AI program plays thousands of games from which it learns and invents its own tactics.
Raia Hadsell: Here we think that red player’s going to grab it. But instead, it just stops it, hands it back, passes it back, and then goes for the goal.
Scott Pelley: And the AI figured out how to do that on its own.
Raia Hadsell: That’s right. That’s right. And it takes a while. At first all the players just run after the ball together like a gaggle of, you know, 6-year-olds the first time they’re playing ball. Over time what we start to see is now, ‘Ah, what’s the strategy? You go after the ball. I’m coming around this way. Or we should pass. Or I should block while you get to the goal.’ So, we see all of that coordination emerging in the play.
Scott Pelley: This is a lot of fun. But what are the practical implications of what we’re seeing here?
Raia Hadsell: This is the type of research that can eventually lead to robots that can come out of the factories and work in other types of human environments. You know, think about mining, think about dangerous construction work or exploration or disaster recovery.
Raia Hadsell is among 1,000 humans at DeepMind. The company was co-founded just 12 years ago by CEO Demis Hassabis.
Demis Hassabis: So if I think back to 2010 when we started nobody was doing AI. There was nothing going on in industry. People used to eye roll when we talked to them, investors, about doing AI. So, we couldn’t, we could barely get two cents together to start off with which is crazy if you think about now the billions being invested into AI startups.
Cambridge, Harvard, MIT, Hassabis has degrees in computer science and neuroscience. His Ph.D. is in human imagination. And imagine this, when he was 12, in his age group, he was the number two chess champion in the world.
It was through games that he came to AI.
Demis Hassabis: I’ve been working on AI for decades now, and I’ve always believed that it’s gonna be the most important invention that humanity will ever make.
Scott Pelley: Will the pace of change outstrip our ability to adapt?
Demis Hassabis: I don’t think so. I think that we, you know, we’re sort of an infinitely adaptable species. You know, you look at today, us using all of our smartphones and other devices, and we effortlessly sort of adapt to these new technologies. And this is gonna be another one of those changes like that.
Among the biggest changes at DeepMind was the discovery that self-learning machines can be creative. Hassabis showed us a game playing program that learns. It’s called AlphaZero and it dreamed up a winning chess strategy no human had ever seen.
Scott Pelley: But this is just a machine. How does it achieve creativity?
Demis Hassabis: It plays against itself tens of millions of times. So, it can explore parts of chess that maybe human chess players and programmers who program chess computers haven’t thought about before.
Scott Pelley: It never gets tired. It never gets hungry. It just plays chess all the time.
Demis Hassabis: Yes. It’s kind of an amazing thing to see, because actually you set off AlphaZero in the morning and it starts off playing randomly. By lunchtime, you know, it’s able to beat me and beat most chess players. And then by the evening, it’s stronger than the world champion.
Demis Hassabis sold DeepMind to Google in 2014. One reason was to get his hands on this. Google has the enormous computing power that AI needs. This computing center is in Pryor, Oklahoma. But Google has 23 of these, putting it near the top in computing power in the world. This is one of two advances that make AI ascendant now. First, the sum of all human knowledge is online and, second, brute force computing that “very loosely approximates” the neural networks and talents of the brain.
Demis Hassabis: Things like memory, imagination, planning, reinforcement learning, these are all things that are known about how the brain does it, and we wanted to replicate some of that in our AI systems.
Those are some of the elements that led to DeepMind’s greatest achievement so far — solving an ‘impossible’ problem in biology.
Most AI systems today do one or maybe two things well. The soccer robots, for example, can’t write up a grocery list or book your travel or drive your car. The ultimate goal is what’s called artificial general intelligence– a learning machine that can score on a wide range of talents.
Scott Pelley: Would such a machine be conscious of itself?
Demis Hassabis: So that’s another great question. We– you know, philosophers haven’t really settled on a definition of consciousness yet, but if we mean by sort of self-awareness and– these kinds of things– you know, I think there’s a possibility AI one day could be. I definitely don’t think they are today. But I think, again, this is one of the fascinating scientific things we’re gonna find out on this journey towards AI.
Even unconscious, current AI is superhuman in narrow ways.
Back in California, we saw Google engineers teaching skills that robots will practice continuously on their own.
Robot: Push the blue cube to the blue triangle.
They comprehend instructions…
And learn to recognize objects.
Robot 106: What would you like?
Scott Pelley: How ’bout an apple?
Ryan: How about an apple.
Robot 106: On my way, I will bring an apple to you.
Vincent Vanhoucke, senior director of Robotics, showed us how Robot 106 was trained on millions of images…
Robot 106: I am going to pick up the apple.
…and can recognize all the items on a crowded countertop.
Vincent Vanhoucke: If we can give the robot a diversity of experiences, a lot more different objects in different settings, the robot gets better at every one of them.
Now that humans have pulled the forbidden fruit of artificial knowledge…
Scott Pelley: Thank you.
…we start the genesis of a new humanity…
Scott Pelley: AI can utilize all the information in the world. What no human could ever hold in their head. And I wonder if humanity is diminished by this enormous capability that we’re developing.
James Manyika: I think the possibilities of AI do not diminish humanity in any way. And in fact, in some ways, I think they actually raise us to even deeper, more profound questions.
Google’s James Manyika sees this moment as an inflection point.
James Manyika: I think we’re constantly adding these, in, superpowers or capabilities to what humans can do in a way that expands possibilities, as opposed to narrow them, I think. So I don’t think of it as diminishing humans, but it does raise some really profound questions for us. Who are we? What do we value? What are we good at? How do we relate with each other? Those become very, very important questions that are constantly gonna be, in one case– sense exciting, but perhaps unsettling too.
It is an unsettling moment. Critics argue the rush to AI is coming too fast, while competitive pressure among giants like Google and start-ups you’ve never heard of is propelling humanity into the future, ready or not.
Sundar Pichai: But I think if you take a 10-year outlook, it is so clear to me, we will have some form of very capable intelligence that can do amazing things. And we need to adapt as a society for it.
Google CEO Sundar Pichai told us society must quickly adapt with regulations for AI in the economy, laws to punish abuse, and treaties among nations to make AI safe for the world.
Sundar Pichai: You know, these are deep questions. And, you know, we call this ‘alignment.’ You know, one way we think about: How do you develop AI systems that are aligned to human values– and including– morality? This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on. And I think we have to be very thoughtful. And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.
We’ll end with a note that has never appeared on 60 Minutes but one that, in the AI revolution, you may be hearing often: The preceding was created with 100% human content.