Can AI help us answer life's biggest questions? In this visionary conversation, Google DeepMind cofounder and CEO Demis Hassabis delves into the history and incredible capabilities of AI with head of TED Chris Anderson. Hassabis explains how AI models like AlphaFold — which accurately predicted the shapes of all 200 million proteins known to science in under a year — have already accelerated scientific discovery in ways that will benefit humanity. Next up? Hassabis says AI has the potential to unlock the greatest mysteries surrounding our minds, bodies and the universe.
Learn more about our flagship conference happening this April at attend.ted.com/podcast
Hosted on Acast. See acast.com/privacy for more information.
[00:00:00] TED Audio Collective The utility of AI is ever-growing beyond our imaginations. Today, scientists and engineers are testing its limits, and the potential is thrilling. AI is being used to tackle tough computational challenges to make new discoveries. It has predicted the structures of over 200 million proteins,
[00:00:34] work that would normally take a PhD student years per structure. It's even helping us discover opportunities that might lead to greater quality of life for humanity. I'm Sherrell Dorsey, and this is TED Tech. Today, we'll listen in to a conversation between Google DeepMind co-founder and CEO,
[00:00:57] Demis Hassabis, and TED's Chris Anderson. Together, they discuss the capabilities of AI and its abilities to decode the mysteries of our minds, bodies, and the greater universe. This show was brought to you by Schwab. With Schwab Investing Themes, it's easy to invest in ideas you believe in,
[00:01:28] like artificial intelligence, big data, robotic revolution, and more. Choose from over 40 themes. Buy as-is or customize the stocks in a theme to fit your goals. Learn more at schwab.com slash thematic investing. Ever wish you could look around the corner
[00:01:46] to make sense of today's big business and social issues and prepare for what's coming tomorrow? Dozens of podcasts promise to bring you the latest news and the latest trends. But where's the so what? Why does it matter? And what does it all mean for you?
[00:02:00] BCG's flagship podcast, The So What from BCG, features award-winning British journalist Georgie Frost, interviewing BCG's leading thinkers and doers to get you the answers you want and need. Hear the ideas that are shaping and disrupting the future. This is not your typical business strategy podcast.
[00:02:17] Listen to The So What from BCG wherever you get your podcasts. This episode is brought to you by Progressive Insurance. Whether you love true crime or comedy, celebrity interviews or news, you call the shots on what's in your podcast queue. And guess what?
[00:02:32] Now you can call them on your auto insurance too with the Name Your Price tool from Progressive. It works just the way it sounds. You tell Progressive how much you want to pay for car insurance and they'll show you coverage options that fit your budget.
[00:02:43] Get your quote today at progressive.com to join the over 28 million drivers who trust Progressive. Progressive Casualty Insurance Company & Affiliates. Price and coverage match limited by state law. Slate Money is a weekly discussion of the most important stories in the world of business and finance.
[00:03:01] Over the past 10 years, we've become known as the place to go if you want to understand what's going on in business or if you just want to laugh at some of its excesses. So if you want a podcast that doesn't turn CEOs into heroes,
[00:03:15] listen to Slate Money with me, Felix Salmon, and my co-hosts, Emily Peck and Elizabeth Spiers every Saturday morning wherever you get your podcasts. Demis, so good to have you here. It's fantastic to be here. Thanks, Chris. Now, you told Time Magazine, I want to understand the big questions,
[00:03:35] the really big ones, the kind you normally go into philosophy or physics for. I thought building AI would be the fastest route to answer some of those questions. Why did you think that? Well, I guess when I was a kid, my favorite subject was physics,
[00:03:54] and I was interested in all the big questions, fundamental nature of reality, what is consciousness, all the big ones. And usually, you go into physics if you're interested in that. But I read a lot of the great physicists,
[00:04:06] some of my all-time scientific heroes like Feynman and so on, and I realized in the last 20, 30 years, we haven't made much progress in understanding some of these fundamental laws. So I thought, why not build the ultimate tool to help us, which is artificial intelligence?
[00:04:22] And at the same time, we could also maybe better understand ourselves and the brain better by doing that too. So not only was it an incredible tool, it was also useful for some of the big questions itself. Super interesting. So obviously, AI can do so many things,
[00:04:37] but I think for this conversation, I'd love to focus in on this theme of what it might do to unlock the really big questions, the giant scientific breakthroughs, because it's been such a theme driving you and your company. Yeah, so one of the big things AI can do,
[00:04:53] and that I've always thought about, is this: even back 20, 30 years ago, at the beginning of the internet and computer era, the amount of data being produced, including scientific data, was just too much for the human mind to comprehend in many cases.
[00:05:10] And I think one of the uses of AI is to find patterns and insights in huge amounts of data and then surface that to the human scientists to make sense of and make new hypotheses and conjectures. So it seems to me very compatible with the scientific method. Right.
[00:05:26] But gameplay has played a huge role in your own journey in figuring this thing out. Yeah, I think I must have been about nine years old. I'm captaining the England Under-11 team, and we're playing in a four nations tournament.
[00:05:39] I think we're playing France, Scotland, and Wales, I think it was. That is so weird. Yeah. Because that happened to me too. In my dreams. Right. I mean, this is, OK. And it wasn't just chess. You loved all kinds of games. I loved all kinds of games, yeah.
[00:05:55] And when you launched DeepMind, pretty quickly, you started having it tackle gameplay. Why? Well, look, I mean, games actually got me into AI in the first place because while we were doing things like, we used to go on training camps with the England team and so on.
[00:06:11] And actually back then, I guess it was in the mid-80s, we would use the very early chess computers, if you remember them, to train against as well as playing against each other. And they were big lumps of plastic, physical boards that you used to,
[00:06:26] some of you remember, used to actually press the squares down and little LED lights came on. And I remember actually not just thinking about the chess style, I was actually just fascinated by the fact that this lump of plastic, someone had programmed it to be smart
[00:06:41] and actually play chess to a really high standard. And I was just amazed by that. And that got me thinking about thinking and how does the brain come up with these thought processes, these ideas, and then maybe how we could mimic that with computers.
[00:06:56] So yeah, it's been a whole theme for my whole life, really. But you raised all this money to launch DeepMind and pretty soon you were using it to do, for example, this. Well, we started off with games at the beginning of DeepMind,
[00:07:12] this was back in 2010, so this is about 10 years ago, and this is a breakthrough because we started off with classic Atari games from the 1970s, the simplest kind of computer games there are out there. And one of the reasons we use games is they're very convenient
[00:07:27] to test out your ideas and your algorithms, they're really fast to test. And also as your systems get more powerful, you can choose harder and harder games. And this was actually the first time ever that our machine surprised us,
[00:07:42] the first of many times, which it figured out in this game called Breakout that you could send the ball around the back of the wall and actually it'd be a much safer way to knock out all the tiles of the wall. So it's a classic Atari game there.
[00:07:53] And that was our first real aha moment. So this thing was not programmed to have any strategy, it was just told, try and figure out a way of winning. You just move the bat at the bottom and see if you can find a way of winning.
[00:08:05] Right, it was a real revolution at the time, this was in 2012, 2013, where we coined these terms deep reinforcement learning. And the key thing about them is that those systems were learning directly from the pixels, the raw pixels on the screen. But they weren't being told anything else.
[00:08:21] So they were being told, maximize the score, here are the pixels on the screen, 30,000 pixels. The system has to make sense on its own from first principles, what's going on, what it's controlling, how to get points. And that's the other nice thing about using games to begin with,
[00:08:36] is that you can use those objectives to win, to get scores. So you can measure very easily that your systems are improving. But there was a direct line from that to this moment a few years later where the country of South Korea and many other parts of Asia,
[00:08:50] and in fact the world, went crazy over what? Yeah, so this, in 2016, was the pinnacle of our games-playing work. So we'd done Atari, we'd done some more complicated games, and then we reached the pinnacle, which was the game of Go,
[00:09:07] which is what they play in Asia instead of chess. But it's actually more complex than chess. And the actual brute force algorithms that were used to kind of crack chess were not possible with Go, because it's a much more pattern-based game, much more intuitive game.
[00:09:25] So even though Deep Blue beat Garry Kasparov in the 90s, it took another 20 years for our program, AlphaGo, to beat the world champion at Go. And we always thought, myself and the people working on this project for many years, if you could build a system
[00:09:41] that could beat the world champion at Go, it would have had to have done something very interesting. And in this case, what we did with AlphaGo is it basically learned ideas about Go for itself, the right strategies, by playing millions and millions of games against itself,
[00:09:53] and in fact invented its own new strategies that the Go world had never seen before, even though we've played Go for more than 2,000 years. It's the oldest board game in existence. So it was pretty astounding, not only did it win the match,
[00:10:08] it also came up with brand new strategies. And you continued this with a new strategy of not even really teaching anything about Go, but just setting up systems that just from first principles would play so that they could teach themselves from scratch Go or chess.
[00:10:27] Talk about AlphaZero and the amazing thing that happened in chess. Yeah, so following this, AlphaGo started, we started with AlphaGo by giving it all of the human games that are being played on the internet. So it started that as a basic starting point for its knowledge.
[00:10:43] And then we wanted to see what would happen if we started from scratch, from literally random play. So this is what AlphaZero was, that's why it's the zero in the name, because it started with zero prior knowledge. And the reason we did that is because then
[00:10:57] we would build a system that was more general. So AlphaGo could only play Go, but AlphaZero could play any two-player game. And it did it by playing initially randomly and then slowly incrementally improving, well not very slowly actually, within the course of 24 hours,
[00:11:13] going from random to better than world champion level. And so this is so amazing to me. So I'm more familiar with chess than with Go. And for decades, thousands and thousands of AI experts worked on building incredible chess computers. Eventually they got better than humans.
[00:11:30] You had a moment a few years ago where in nine hours, AlphaZero taught itself to play chess better than any of those systems ever did. Talk about that. Yeah, it was a pretty incredible moment actually. So we set it going on chess.
[00:11:49] And as you said, there's this rich history of chess and AI where there were these expert systems that had been programmed with these chess ideas, chess algorithms. And you start, you have this amazing, I remember this day very clearly, where you sort of sit down with the system,
[00:12:03] starting off random in the morning, you go for a cup of coffee, you come back, I can still just about beat it by lunchtime, maybe just about. And then you let it go for another four hours and by dinner, it's the greatest chess-playing entity that's ever existed.
[00:12:18] And it's quite amazing looking that live on something that you know well, like chess, and you're an expert in, and actually just seeing that in front of your eyes. And then you extrapolate to what it could then do in science or something else, which of course,
[00:12:32] games were only a means to an end. They were never the end in themselves. They were just the training ground for our ideas and to make quick progress in a matter of, less than five years actually went from Atari to Go.
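The setup Demis describes, an agent given only raw observations and a score to maximize, is the heart of the reinforcement learning used on Atari and, via self-play, on Go and chess. As a rough illustration only, here is a toy one-dimensional "game" solved with tabular Q-learning; the game, its parameters, and the lookup table are all invented for the sketch, nothing like the deep networks DeepMind actually trained on 30,000 raw pixels:

```python
import random

# Toy illustration of "maximize the score, here are the observations":
# the agent sees only its position on a short board and a reward signal,
# and must discover by trial and error that moving right reaches the goal.

SIZE, GOAL = 6, 5  # positions 0..5; the score comes only at the right edge

def choose_action(q, state, epsilon):
    """Epsilon-greedy: explore sometimes, otherwise act on learned values."""
    if random.random() < epsilon or q[(state, -1)] == q[(state, 1)]:
        return random.choice((-1, 1))
    return 1 if q[(state, 1)] > q[(state, -1)] else -1

def play_episode(q, epsilon=0.2, alpha=0.5, gamma=0.9):
    """Play one game, updating action values from the score alone."""
    state = 0
    for _ in range(100):
        action = choose_action(q, state, epsilon)
        nxt = min(max(state + action, 0), SIZE - 1)
        reward = 1 if nxt == GOAL else 0  # the only feedback the agent gets
        best_next = max(q[(nxt, -1)], q[(nxt, 1)])
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt
        if reward:
            return  # game won, episode over

random.seed(0)
q = {(s, a): 0.0 for s in range(SIZE) for a in (-1, 1)}
for _ in range(300):
    play_episode(q)

# After training, the learned values next to the goal favor moving right,
# even though the agent was never told what the goal or the actions mean.
```

The same measurability Demis mentions shows up here: because the game has a score, you can verify improvement directly by inspecting the learned values, which is part of why games made such a convenient proving ground.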
[00:12:46] I mean, this is why people are in awe of AI and also kind of terrified by it. And it's not just incremental improvement. The fact that in a few hours, you can achieve what millions of humans over centuries have not been able to achieve.
[00:13:02] That is just, that gives you pause for thought. I mean, it's- It does. I mean, it's a hugely powerful technology. It's going to be incredibly transformative and we have to be very thoughtful about how we use that capability. This was always my aim with AI from a kid,
[00:13:17] which is to use it to accelerate scientific discovery. And actually ever since doing my undergrad at Cambridge, I had this problem in mind one day for AI. It's called the protein folding problem. And it's kind of like a 50 year grand challenge in biology.
[00:13:30] And it's very simple to explain. Proteins are essential to life. They're the building blocks of life. Everything in your body depends on proteins. And you describe a protein, a protein sort of described by its amino acid sequence, which you can think of as roughly the genetic sequence
[00:13:46] describing the protein. But in nature, in your body or in an animal, this string, a sequence turns into this beautiful shape. That's the protein. Those letters describe that shape. And the important thing about that 3D structure is that the 3D structure of the protein
[00:14:02] goes a long way to telling you what its function is in the body, what it does. And so the protein folding problem is, can you directly predict the 3D structure just from the amino acid sequence? It's not calculating it from the letters.
[00:14:15] It's looking at patterns of other folded proteins that are known about and somehow learning from those patterns how to do this. So when we started this project, actually straight after AlphaGo, I thought we were ready once we'd cracked Go.
[00:14:31] I felt we were finally ready, after almost 20 years of working on this stuff, to actually tackle some scientific problems, including protein folding. And where we started from is this: painstakingly, over the last 40-plus years, experimental biologists have pieced together around 150,000 protein structures using very complicated X-ray crystallography techniques
[00:14:54] and other complicated experimental techniques. And the rule of thumb is that it takes one PhD student their whole PhD, so four or five years to uncover one structure. But there are 200 million proteins known to nature. So you could just take forever to do that.
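The framing Demis gives, predicting structure from sequence by generalizing from already-solved examples rather than calculating the physics letter by letter, can be caricatured in a few lines. Everything below (the sequences, the "structures" as coordinate lists, the nearest-neighbor rule) is invented purely for illustration; AlphaFold itself is a deep neural network trained on those roughly 150,000 experimental structures:

```python
# A cartoon of the problem framing: map an amino-acid sequence to a 3-D
# structure by finding patterns in proteins that have already been solved.
# Sequences and coordinates here are made up for the sketch.

# Tiny stand-in for the database of experimentally solved structures.
SOLVED = {
    "MKTAYIAK": [(0.0, 0.0, 0.0), (1.5, 0.2, 0.1), (2.9, 0.5, 0.3)],
    "GGSGGSGG": [(0.0, 0.0, 0.0), (1.4, 1.4, 0.0), (1.4, 2.8, 0.2)],
}

def similarity(a: str, b: str) -> int:
    """Count matching residues at the same position: a crude stand-in
    for the evolutionary pattern-matching real models learn."""
    return sum(x == y for x, y in zip(a, b))

def predict_structure(seq: str):
    """Return the structure of the most similar solved protein."""
    best = max(SOLVED, key=lambda known: similarity(seq, known))
    return SOLVED[best]

# A sequence one mutation away from a solved one inherits that neighbor's fold.
print(predict_structure("MKTAYIAR"))
```

A nearest-neighbor lookup like this obviously cannot fold a genuinely novel protein; the point is only the shape of the problem, sequence in, coordinates out, learned from known examples, which is what made it tractable for machine learning at all.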
[00:15:12] And so we managed to actually fold, using AlphaFold, in one year, all those 200 million proteins known to science. So that's a billion years of PhD time saved. So it's amazing to me just how reliably it works. Yeah, and the more deeply you go into proteins,
[00:15:30] you just start appreciating how exquisite these proteins are. And each of these things does a special function in nature, and they're almost like works of art. And it still astounds me today how accurately it can predict them, to within the width of an atom, on average,
[00:15:44] which is the accuracy needed for biologists to use it, and for drug design and disease understanding, which is what AlphaFold unlocks. You made a surprising decision, which was to give away the actual results of your 200 million proteins. We open-sourced AlphaFold
[00:16:04] and gave everything away on a huge database with our wonderful colleagues at the European Bioinformatics Institute. I mean, you're part of Google. Was there a phone call saying, uh, Demis, what did you just do? Well, you know, I'm lucky we have very supportive,
[00:16:21] Google are really supportive of science and understand the benefits this can bring to the world. And, you know, the argument here was that we could only ever have even scratched the surface of the potential of what we could do with this. You know, maybe like a millionth
[00:16:36] of what the scientific community is doing with it. There are over a million and a half biologists around the world who have used AlphaFold in their predictions. We think that's almost every biologist in the world making use of this now, every pharma company.
[00:16:48] So we'll never know probably what the full impact of it all is. But you're continuing this work in a new company that's spinning out at Google called Isomorph. Isomorphic, yeah. Isomorphic. Give us just a sense of the vision there. What's the vision?
[00:17:03] So AlphaFold is a sort of fundamental biology tool. Like, what are these 3D structures? And then what might they do in nature? And then if you... You know, the reason I thought about this and was so excited about this is that this is the beginnings of understanding disease
[00:17:20] and also maybe helpful for designing drugs. So if you know the shape of the protein, then you can kind of figure out which part of the surface of the protein you're going to target with your drug compound. And Isomorphic is extending this work we did in AlphaFold
[00:17:36] into the chemistry space where we can design chemical compounds that will bind exactly to the right spot on the protein and also, importantly, to nothing else in the body. So it doesn't have any side effects and it's not toxic and so on.
[00:17:52] And we're building many other AI models, sort of sister models to AlphaFold, to help make predictions in chemistry space. So we can expect to see some pretty dramatic health and medicine breakthroughs in the coming few years. I think we'll be able to get drug discovery down
[00:18:09] from years to maybe months. OK. Demis, I'd like to change direction a bit. Our mutual friend Liv Boeree gave a talk last year at TEDAI that she called the Moloch Trap. The Moloch Trap is a situation where organizations, companies in a competitive situation
[00:18:29] can be driven to do things that no individual running those companies would by themselves do. And it's felt, I was really struck by this talk, and it's felt as a sort of layperson observer that the Moloch Trap has been shockingly in effect in the last couple years.
[00:18:47] So here you are with DeepMind sort of pursuing these amazing medical breakthroughs and scientific breakthroughs and then suddenly kind of out of left field, OpenAI with Microsoft releases ChatGPT and the world goes crazy and suddenly goes, holy crap, AI is, you know, everyone can use it.
[00:19:10] And there's a sort of, it felt like the Moloch Trap in action. I think Microsoft CEO Satya Nadella actually said, Google is the 800-pound gorilla in the search space. We wanted to make Google dance. How? And it did, Google did dance. There was a dramatic response.
[00:19:35] Your role was changed. You took over the whole Google AI effort. Products were rushed out. You know, Gemini, part amazing, part embarrassing. I'm not going to ask you about Gemini because you've addressed it elsewhere. But it feels like this was the Moloch Trap happening,
[00:19:54] that you and others were pushed to do stuff that you wouldn't have done without this sort of catalyzing competitive thing. Meta did something similar as well. They rushed out open source versions of AI, which is arguably a reckless act in itself. This seems terrifying to me.
[00:20:14] Why? Is it terrifying? Look, it's a complicated topic, of course, and there are many things to say about it. First of all, we were working on many large language models. And in fact, obviously, Google Research actually invented Transformers, as you know,
[00:20:31] which was the architecture that allowed all this to be possible five, six years ago. And so we had many large models internally. The thing was, I think what the ChatGPT moment did that changed and fair play to them to do that was they demonstrated,
[00:20:46] I think somewhat surprisingly to themselves as well, that the public were ready to, you know, the general public were ready to embrace these systems and actually find value in these systems. And as impressive though they are, I guess when we're working on these systems,
[00:21:00] mostly you're focusing on the flaws and the things they don't do and hallucinations and things you're all familiar with now. We were thinking, you know, would anyone really find that useful given that it does this and that and the other?
[00:21:11] And we wanted to improve those things first before putting them out. But interestingly, it turned out that even with those flaws, many tens of millions of people still find them very useful. And so that was an interesting update on maybe the convergence
[00:21:26] of products and the science that actually all of these amazing things we've been doing in the lab, so to speak, are actually ready for prime time for general use beyond the rarified world of science. And I think that's pretty exciting in many ways.
[00:21:41] So at the moment, we've got this exciting array of products which we're all enjoying and all this generative AI stuff is amazing. But let's roll the clock forward a bit. Microsoft and OpenAI are reported to be building or investing like $100 billion into an absolute monster of a data center, a supercomputer
[00:22:02] that can offer compute at orders of magnitude more than anything we have today. It's estimated to take like 5 gigawatts of energy to drive this. That's the energy of New York City to drive a data center. So we're pumping all this energy into this giant, vast array.
[00:22:21] Google, I presume, is going to match this type of investment, right? Well, yeah. I mean, we don't talk about our specific numbers, but I think we're investing more than that over time. And that's one of the reasons we teamed up with Google back in 2014
[00:22:36] is we knew that in order to get to AGI, we would need a lot of compute, and that's what's transpired. And Google had, and still has, the most compute. So Earth is building these giant computers,
[00:22:50] these giant brains that are going to power so much of the future economy and it's by companies that are in competition with each other. How will we avoid the situation where someone is getting a lead? Someone else has got $100 billion invested in their thing.
[00:23:09] Isn't someone going to go, wait a sec, if we used reinforcement learning here to maybe have the AI tweak its own code and rewrite itself and make it so powerful, we might be able to catch up in nine hours over the weekend
[00:23:23] with what they're doing, roll the dice, damn it. We have no choice, otherwise we're going to lose a fortune for our shareholders. How are we going to avoid that? Yeah, well, we must avoid that, of course, clearly. And my view is that as we get closer to AGI,
[00:23:37] we need to collaborate more. And the good news is that most of the scientists involved in these labs know each other very well and we talk to each other a lot at conferences and other things. And this technology is still relatively nascent,
[00:23:52] so probably it's okay what's happening at the moment. But as we get closer to AGI, I think as a society, we need to start thinking about the types of architectures that get built. So I'm very optimistic, of course, that's why I spent my whole life
[00:24:08] working on AI and working towards AGI. But I suspect there are many ways to build the architecture safely, robustly, reliably, and in an understandable way. And I think there are almost certainly going to be ways of building architectures that are unsafe or risky in some form.
[00:24:26] So I see a sort of a kind of bottleneck that we have to get humanity through, which is building safe architectures as the first types of AGI systems. And then after that, we can have a sort of a flourishing of many different types of systems
[00:24:43] that are perhaps sharded off those safe architectures that ideally have some mathematical guarantees or at least some practical guarantees around what they do. Do governments have an essential role here to define what a level playing field looks like and what is absolutely taboo?
[00:24:59] Yeah, I think it's not just government, actually. I think government and civil society and academia, all four parts of society along with industry labs, have a critical role to play here to shape what that should look like as we get closer to AGI
[00:25:12] and the cooperation needed and the collaboration needed to prevent that kind of runaway race dynamic happening. Okay, well it sounds like you remain optimistic. You know, we've been talking a lot about science and a lot of science can be boiled down to,
[00:25:26] if you imagine all the knowledge that exists in the world as a tree of knowledge, and then maybe what we know today as a civilization is some small subset of that. And I see AI as this tool that allows us,
[00:25:40] as scientists, to explore potentially the entire tree one day. And we have this idea of root node problems, like AlphaFold and the protein folding problem, where if you crack them, it unlocks an entire new branch of discovery or research.
[00:25:56] And that's what we try and focus on at Google DeepMind, to crack those. And if we get this right, then I think we could be in this incredible new era of radical abundance, curing all diseases, spreading consciousness to the stars. What's the single... Maximum human flourishing.
[00:26:15] We're out of time, but what's the last example of, like, in your dreams, this dream question that you think there is a shot in your lifetime AI might take us to? Well, I mean, once AGI is built,
[00:26:26] what I'd like to use it for is to try and use it to understand the fundamental nature of reality. So do experiments at the Planck scale, the smallest possible scale, theoretical scale, which is almost like the resolution of reality. You know, I was brought up religious,
[00:26:43] and in the Bible there's a story about the tree of knowledge that doesn't work out very well. Is there any scenario where we discover knowledge that the universe says, humans, you may not know that? Potentially. I mean, there might be some unknowable things.
[00:27:04] But I think the scientific method is the greatest invention humans have ever come up with. You know, the Enlightenment and scientific discovery, that's what's built this incredible modern civilization around us and all the tools that we use. So I think it's the best technique we have
[00:27:23] for understanding the enormity of the universe around us. Well, Demis, you've already changed the world. I think probably everyone here will be cheering you on in your efforts to ensure that we continue to accelerate in the right direction. Thank you. Demis Hassabis. Thank you. That was really excellent.
[00:27:49] That was Demis Hassabis in conversation with Chris Anderson at TED 2024. TED Tech is part of the TED Audio Collective. This episode was produced by Nina Lawrence, edited by Alejandra Salazar, and fact-checked by Julia Dickerson. Special thanks to Maria Larias, Ferideh Grange, Corey Hajim, Daniela Valareso, and Michelle Quint.
[00:28:15] I'm Sherrell Dorsey. Thanks for listening, and talk to you again next week.

