WUZHEN, China -- According to Demis Hassabis, the co-founder of the leading artificial intelligence company DeepMind, intelligence is computational and computers can replicate it. Hassabis sees this as an incredible tool, like having the world's best research assistant at your fingertips.
Where this leads -- solving humanity's greatest challenges or the possibility of AI being used by people with ill intent -- is part of a wide-ranging conversation Hassabis had with The Nikkei on the sidelines of the Future of Go summit held on May 23-27 in Wuzhen, China.
Excerpts from the interview follow.
Q: You've demonstrated how powerful the combination of deep learning and reinforcement learning is, and based on that, AI can be better than humans in a narrow domain like the game of Go. But how far along are you in terms of fundamentally understanding what intelligence is, and its processes and its mechanisms?
A: The analogy I give is that putting deep learning and reinforcement learning together was a big innovation and has allowed a lot of our advances to happen in AlphaGo, the Atari program, DQN and these kinds of things. I would say it's like the first rung of a ladder.
So, the ladder's quite tall. We don't know how many steps there are. But the first step is very important, and I think we're on the right ladder. I think in the past, with AI, we've climbed the wrong ladders. Right? (Laughs.)
And we had to climb back down, with logic systems, and other ways -- like handcrafted systems -- these other ways of trying to do AI.
I think we are on the right ladder, so that's number one. And then, I think we've made the first step. And it's important because it means you can calibrate how hard the steps are. Because, before the first step, you don't know. The first step might be so large you can't take the ... nobody can make progress.
But, there are many more steps to go. And we sort of know roughly what those are, and how high they are, because of neuroscience. Right? So, we know we need memory. We know we need imagination. We know we need concepts. We know we need language. All of these capabilities. And we're actively researching all of these things now to add to a system like AlphaGo.
Q: Is intelligence computable? In other words, is it possible to compute everything that is going on in the brain?
A: Yeah, well this is an open research question, but the current -- my current -- betting is probably yes, because there doesn't seem to be anything non-computable in the brain. So, people like ... Do you know Roger Penrose?
He's speculated for many years that maybe there's some kind of quantum effect in the brain. "Quantum consciousness," he called it. Right? If that was true, then it might be non-computable.
But he collaborated with some top biologists to see if they could find any quantum effect in the microtubules or other parts of the biology, and no one's found anything that isn't, kind of, classical computing. So, it suggests that it is computable, in the Alan Turing sense of the word.
It's just incredibly complex. But computable. So that's my current working assumption. But, we might find otherwise as we go on this journey.
Q: An underlying and probably the most important message I got from this event in Wuzhen, China, was that AI is not there to go against humans but, rather, to collaborate with scientists and doctors, to solve the great challenges faster. Am I right?
A: Yes, completely correct. That's been my dream since I was 11 years old. To help in very important areas of the world: climate, disease and other areas of science -- chemistry, biology, materials science -- to advance the world for the benefit of everyone.
And I think -- you know, we just use games initially, as a convenient way -- a very convenient way -- to measure our program, right? And build things fast and so on. But, most things in the world are not "one person wins, one person loses." It's everyone can win. Right? If we improve the climate or improve -- say, if we cure a disease -- everybody wins. So those are the domains we ultimately want to apply this to.
And then we think of AI as like a telescope -- the Hubble Space Telescope.
I've used this kind of analogy to say it's an amazing tool for the top scientists and top surgeons and clinicians to use, and to help them. It's like everybody could have the world's best research assistant with them.
This would be, I think, amazing, if ... for the progress of science, and then you, as a scientist, could think of the next hypothesis and ask the system for the latest number crunching. This is the kind of way I'm thinking about how you would work together with AI.
Q: In your opening remarks you said, "Even though AlphaGo is designed to play the game of Go, ..." but you've also designed it to be general purpose. How much realignment is required to use it for other purposes like energy optimization in data centers?
A: We try and make the algorithm as general as possible, and we continue to make it more and more general. So, this version of AlphaGo is stronger. It's also more general. People will see that when we publish the new paper. It's more general. And we're continuing to do that.
But, whenever you tackle a new domain, like energy or something else, it's not just the algorithm. You also need to understand the domain, to understand what are the important problems. Because obviously there's an entire body of knowledge in that domain as well. Right? So you don't know which ones are the valuable problems, which ones are the hard problems, or which ones are the important problems.
What we like to do is combine with an expert in that domain -- a company, an academic or whoever are the best people in that domain -- and to figure out with them what their problems are, and then we can figure out if our architecture and algorithms are useful for that.
That's our process for every new domain we go into.
Q: And that's the journey you are going to pursue after this, right?
A: Yes. We're already doing it in the health care field, with the National Health Service in the UK, and energy we've just started very initially. And then we have some other domains we're thinking about next.
Q: AlphaGo surprised the Go world as it showed that it could create its own moves -- creative ones at that. But it also raised concerns that if we give computers the instinct not just to think what we might be thinking but to think ahead of us, eventually they will be in control. How do you address that concern?
A: Well, I think, again, these are important research questions that we need a lot of work on in the next decade or two. While we're building these systems, we need to understand their controllability, and the right way to set goals. As I mentioned on stage, with AlphaGo, we give it the goal of winning the game of Go.
It's not like it's going to think of doing something else. Right? It can't. There's nothing in the architecture to allow it to do that.
So, we need to analyze how to build those systems in the right way -- to make them like tools -- and then build other tools, like visualization tools or interpretability tools, to understand how the system is working and making its decisions. And we have a lot of people working on those kinds of projects. Again, it's in the early stage.
But I think if you imagine something similar to fMRI machines for neuroscience -- you know, you go in the brain scanner and it takes pictures, and then you can see what parts of the brain light up when you have certain thoughts. That's the kind of thing I would like, as a tool, for virtual brains. So we call it "virtual brain analytics."
Q: To flag something?
A: Yeah, to flag something like, "Okay, so this is why ... these parts of the neural network came on when this thing happened, so maybe that's causing this to happen."
Q: To avoid the black box?
A: Yes, exactly. We would like to unpack the black box, and I think in the next five or 10 years we will have tools like that.
Q: Google and other companies are trying to democratize AI, providing all the tools and computing resources through the cloud. But doesn't it increase the possibility that these capabilities will be used by people with ill intent?
A: I think this is a really important question, and it's not easy to answer because, of course, you want as many people as possible to benefit from it. That's why we openly publish everything. So much stuff is open source, like TensorFlow, all these things. So I think that's definitely good for the research community.
But the problem is that yes, there might be bad actors in the world, so as things get more sophisticated, the community is going to have to think about that, about how to address this problem.
One way to address it would be to publish less and open source less. But, of course, that has its own consequences in terms of ... for the good aspects.
It's a tricky trade-off. I don't have good answers for that at the moment. But again, it's something that should be debated, and ethics should be thought about for that.
Q: You have been successful in amassing the largest pool of deep learning researchers in the world. But, I assume that what you are trying to achieve requires much more than what you have today. How are you going to recruit more AI talent, which is now one of the most scarce resources?
A: Yes. This is true. We're kind of victims of our own success -- that because we've been successful, the talent is even more valuable.
I think it's down to the culture we have. It's the most important thing. My management team and I have tried to create the perfect culture for research. Right? Like a nirvana for research. So, if you're a top researcher, you can do everything you could possibly dream of doing, plus you have the best colleagues, you have large-scale computing power, you're working on the most interesting problems, and you're at the forefront.
I think these are all quite attractive things. We are lucky. We get many of the world's best researchers to come to our door -- we don't have to go to them; they come to us.
And I think things like AlphaGo demonstrate the results of that kind of culture.
Q: I really like the analogy you used: "the Apollo program for AI." That tells you something about bringing together all the talent...
A: Talent, yeah, and this kind of mission, with the energy that would have, to achieve something really big. Right. And that's how we live it.
Q: You have recently talked about how, today, working on AI has become very fashionable. And I know that many businesses claim that they are using AI without elaborating on what it really is, right?
A: Yes, that's very true.
Q: In part to lure investors, I guess.
A: Yes, exactly.
Q: So, are we in kind of an AI bubble? How much are you concerned about this?
A: I don't like it. We're definitely in an AI bubble because, exactly as you say, every company is saying they're using AI, when some of them don't even know what that means.
It's kind of funny, like a marketing term. It's like the marketing department saying, "We have AI." What does it mean?
So I think maybe about 90% of the stuff out there is like that, which is a lot.
But, because it works with investors and other things, everybody is saying this now, right?
I think that's pretty bad, and I think there'll be a lot of disappointment. So, for some people, it might look like ... you know, AI has been through hype cycles before. Right?
They call it "AI winter." For some people, they're viewing this as the same as what happened the last two times, that we're in another bubble, and then there will be a winter.
It's funny. I think both things are true. We are in a bubble, but it's real this time. It's just we're not as far as the bubble thinks we are, and lots of people are just using it as a buzzword. But I really believe this time that we're on the right ladder. That's what I mean by "the right ladder." I think there won't be another winter.
Q: Your journey to crack the game of Go is coming to an end. But how about your love of games?
A: That will never leave me because games are a part of my being. I would say that's probably the most essential part of my being. That's why I started to train in ... I played games at a high level. I've made games. I've tested games. Now we're using games for AI. So, my whole life seems to be intertwined with games, one way or another.
And I love everything about games. As a training for the mind, as a beautiful art form, as fun.
You know, there's a book, as a matter of fact, called "Homo Ludens," which means humans as game-playing animals.
It even suggests that maybe that's what the fun of the game is -- it's actually how we learn, through creative play. So, I think it will always be part of me.
Interviewed by Nikkei staff writer Joshua Ogawa