Google DeepMind CEO Demis Hassabis on AI, Creativity, and a Golden Age of Science | All-In Summit

September 12, 2025 · 31:48
00:00:01
A genius who may hold the cards of our future. CEO of Google DeepMind, which is the
00:00:07
engine of the company's artificial intelligence. After his Nobel and a knighthood from King Charles, he became a pioneer of
00:00:15
artificial intelligence. We were the first ones to start doing it seriously in the modern era. AlphaGo was
00:00:22
the big watershed moment, I think, not just for DeepMind, my company, but for AI in general. This was always my
00:00:27
aim with AI from a kid, which is to use it to accelerate scientific discovery.
00:00:33
Ladies and gentlemen, please welcome Google DeepMind's Demis Hassabis.
00:00:39
[Applause] Welcome. Great to be here.
00:00:45
Thanks. Thanks for following Tucker, Mark Cuban, et al. Um, first off,
00:00:50
congrats on winning the Nobel Prize. Thank you. Um, thanks
00:00:58
for the incredible breakthrough of AlphaFold. Maybe you've done this before, but I know everyone here would
00:01:03
love to hear your recounting of where you were when you won the Nobel Prize. How'd you find out?
00:01:09
Well, it was a very surreal moment, obviously. Everything about it is surreal. The way
00:01:14
they tell you: they tell you like 10 minutes before it all goes live. It's just, you know,
00:01:19
you're sort of shell-shocked when you get that call from Sweden. It's the call that every scientist dreams about. And
00:01:25
then the ceremonies, the whole week in Sweden with the royal family, it's amazing. Obviously, it's
00:01:30
been going for 120 years. And the most amazing bit is they bring out this Nobel book from the vaults,
00:01:36
from the safe, and you get to sign your name next to all the other greats. So it's quite an incredible
00:01:42
moment, sort of leafing back through the other pages and seeing Feynman and Marie Curie
00:01:48
and Einstein and Niels Bohr, and you carry on going backwards, and you get to put your name in that book. It's
00:01:54
incredible. Did you have an inkling you had been nominated and that this might be coming your way? Well, you hear rumors.
00:02:00
It's amazing, actually, in today's age, how locked down they keep it, how quiet. But it's sort of like a
00:02:06
national treasure for Sweden. And so you hear, you know, maybe AlphaFold
00:02:12
is the kind of thing that would be worthy of that recognition. And they look for impact as well as the
00:02:18
scientific breakthrough, impact in the real world, and that can take 20 or 30 years to arrive. So you just never
00:02:25
know how soon it's going to be, or whether it's going to happen at all. So it's a surprise.
00:02:30
Well, congrats. Thank you. Um, and thank you, you let me take a picture with it a few weeks ago.
00:02:35
So that's something I'll cherish. Um, what is DeepMind within Alphabet?
00:02:41
Alphabet is a sprawling organization, with sprawling business units. What is DeepMind? What are you responsible for?
00:02:46
Well, DeepMind, and Google DeepMind as it's become: we merged, a couple of years back, all of
00:02:51
the different AI efforts across Google and Alphabet, including DeepMind, and put it all together, bringing the
00:02:57
strengths of all the different groups into one division. And really the way I describe it now is
00:03:03
that we're the engine room of the whole of Google and the whole of Alphabet. So Gemini, our main model that we're
00:03:09
building, but also many of the other models that we build, the video models and interactive world models,
00:03:15
we plug them in all across Google now. So pretty much every product, every surface area, has one of our AI
00:03:24
models in it. So billions of people now interact with Gemini models, whether that's through AI Overviews, AI
00:03:30
Mode, or the Gemini app. And that's just the beginning. We're incorporating it into Workspace, into
00:03:36
Gmail, and so on. So it's really a fantastic opportunity for us to do cutting-
00:03:41
edge research, but then immediately ship it to billions of users. And how many people, what's the
00:03:47
profile? Are these scientists, engineers? What's the makeup of your team? There's around 5,000 people in my
00:03:53
org, in Google DeepMind, and it's predominantly, I guess, 80-plus percent engineers and PhD researchers. So,
00:04:00
yeah, about three or four thousand. So, there's an evolution of models, a lot of new models coming out, and also
00:04:05
new classes of models. The other day you released this Genie world model.
00:04:11
Yes. So, what is the Genie world model? And I think we've got a video of it. Is it
00:04:17
worth looking at, and we can talk about it live? Yeah, we can watch it, sure. Because I think you have to see it to understand it, it's so
00:04:23
extraordinary. Can we pull up the video, and then Demis can narrate a little bit about what we're looking at? What you're seeing are not games or
00:04:30
videos, they're worlds. Each one of these is an interactive environment generated by Genie 3, a new
00:04:37
frontier for world models. With Genie 3, you can use natural language to generate a variety of worlds
00:04:44
and explore them interactively, all with a single text prompt. Yeah. So all of these videos, all these
00:04:50
interactive worlds that you're seeing, someone is actually controlling the video. It's not a static video. It's being generated from a
00:04:56
text prompt. And then people are able to control the 3D environment using the arrow keys and the spacebar. So
00:05:03
everything you're seeing here, all these pixels, is being generated on the fly. They don't exist
00:05:09
until the player, the person interacting with it, goes to that part of the world. So all of this richness,
00:05:17
and then you'll see in a second. So, this is fully generated. This is not a real video. This is someone
00:05:23
painting their room, painting some stuff on the wall. And then the player is going to look to the right,
00:05:29
and then look back. So this part of the world didn't exist before, and now it exists. And then
00:05:36
they look back and they see the same painting marks they left just earlier. And again, every
00:05:43
pixel you can see is fully generated. And then you can type things like "person in a chicken suit" or "a jet ski" and it
00:05:49
will just, in real time, include them in the scene. So I think,
00:05:56
you know, it's quite mind-blowing, really. I think what's hard to grok when looking at this is, we've all played video
00:06:01
games that have a 3D element to them, where you're in an immersive world, but here there are no objects that have been
00:06:07
created. There's no rendering engine. You're not using Unity or Unreal, which are the 3D rendering engines.
00:06:13
Yeah, this is actually just 2D images that are being created on the fly
00:06:18
by the AI. This model is reverse engineering intuitive physics. So, you know, it's
00:06:24
watched many millions of videos, YouTube videos and other things, about the world. And just from that, it's
00:06:30
reverse engineered how a lot of the world works. It's not perfect yet, but it can generate a consistent minute
00:06:37
or two of interaction, as you, the user, move through many, many different worlds.
00:06:43
There are some videos later on where you can control, you know, a dog on a beach, or a jellyfish; it's not
00:06:48
limited to just human things. Because the way a 3D rendering engine works is, the programmer programs
00:06:55
all the laws of physics. How does light reflect off of an object? You create a 3D object, light reflects off it, and then
00:07:02
what I see visually is rendered by the software, because it's got all the programming on how to do the physics.
00:07:08
But this was just trained off of video, and it figured it all out.
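To make concrete the kind of rule a classical engine hard-codes, here is a minimal sketch in Python of the standard mirror-reflection formula, one of the many hand-programmed physics rules a rendering engine ships with and a world model instead has to infer from footage:

    import numpy as np

    def reflect(direction, normal):
        # Classic hand-coded rendering rule: reflect an incoming ray about a
        # surface normal, r = d - 2(d.n)n. An engine like Unity or Unreal is
        # built from thousands of explicit rules like this one; a world model
        # has to learn their visual consequences from video instead.
        n = normal / np.linalg.norm(normal)
        return direction - 2.0 * np.dot(direction, n) * n

    # A ray heading down and to the right bounces off a floor whose normal points up.
    print(reflect(np.array([1.0, -1.0, 0.0]), np.array([0.0, 1.0, 0.0])))  # -> [1. 1. 0.]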
00:07:13
Yeah, it was trained off of video, and some synthetic data from game engines, and it's just reverse engineered
00:07:20
it. For me, this project is very close to my heart, but it's also quite mind-blowing, because in the '90s, in my early career, I used to write
00:07:27
video games, and AI for video games, and graphics engines. And I remember how hard it was to do this by hand, programming
00:07:33
all the polygons and the physics engines. And it's amazing to just see this do it effortlessly, all of the
00:07:40
reflections on the water, the way materials flow, and the way objects
00:07:46
behave. And it's just doing that all out of the box. I think it's hard to describe how much complexity was
00:07:53
solved for with that model. It's really, really mind-blowing.
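A rough sketch of the interaction loop being described, using an entirely hypothetical model interface (Genie 3's actual API is not public): each frame is sampled on demand, conditioned on the prompt, the frames generated so far, and the player's inputs, which is why unvisited parts of the world don't exist yet and why the painted wall is still there when the player looks back.

    def interact(world_model, prompt, read_controls, seconds=60, fps=24):
        # Hypothetical interface, for illustration only; not Genie 3's real API.
        frames = [world_model.initial_frame(prompt)]      # assumed method
        actions = []
        for _ in range(seconds * fps - 1):
            actions.append(read_controls())               # e.g. arrow keys, spacebar
            # Each new frame is conditioned on everything generated so far,
            # keeping the world consistent for a minute or two of play.
            frames.append(world_model.next_frame(prompt, frames, actions))
        return frames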
00:07:59
Where does this lead us? Fast forward this model to generation five.
00:08:05
Yeah. So the reason we're building these kinds of models is, we feel, and we've always felt, that obviously we're progressing
00:08:10
on the normal language models, like with our Gemini model, but from the beginning with Gemini we wanted it to be
00:08:15
multimodal. So we wanted it to take any kind of input, images, audio,
00:08:20
video, and to output anything. And so we've been very interested in this because, for an AI to be truly
00:08:27
general, to build AGI, we feel that the AGI system needs to understand the world around us, the physical world around
00:08:34
us, not just the abstract world of language or mathematics. And of course that's what's critical for robotics to
00:08:40
work; it's probably what's missing from it today. And also things like smart glasses: a smart-glasses assistant that
00:08:46
helps you in your everyday life has got to understand the physical context that you're in and how the
00:08:51
intuitive physics of the world works. So we think that building these types of models, these Genie models and also Veo,
00:08:58
our text-to-video models, those are expressions of us building
00:09:03
world models that understand the dynamics of the world, the physics of the world. If you can generate it, then
00:09:09
that's an expression of your system understanding those dynamics. And that leads to a world of robotics,
00:09:16
ultimately, one aspect, one application. But maybe we can talk about
00:09:21
that. What is the state of the art with vision-language-action models today?
00:09:26
So, a generalized system, a box, a machine, that can observe the world with a camera,
00:09:32
and then I can use language, I can use text or speech, to tell it what I want it to do, and then it knows how to act
00:09:38
physically, to do something in the physical world for me. That's right. So, if you look at Gemini Live, our live version of
00:09:45
Gemini, where you can hold up your phone to the world around you, I'd recommend any of you try it. It's kind
00:09:51
of magical what it already understands about the physical world. You can think of the next step as
00:09:56
incorporating that into some sort of handier device, like glasses. And then it will be an everyday assistant, and
00:10:02
it'll be able to recommend things to you as you're walking the streets, or we can embed it into Google Maps. And
00:10:08
then with robotics, we've built something called the Gemini Robotics models, which are sort of fine-tuned
00:10:14
Gemini with extra robotics data. And what's really cool about that, and we released some demos of this over the
00:10:20
summer, is we've got these tabletop setups of two hands interacting with objects on a
00:10:27
table, two robotic hands, and you can just talk to the robot. So you can say, you know, put the yellow object into
00:10:34
the red bucket, or whatever it is, and it will just interpret that instruction, that language instruction,
00:10:40
into motor movements. And the power of a multimodal model, rather than just a robotics-specific model, is that it
00:10:47
will be able to bring real-world understanding to the way you interact with it. So in the end it will be the UI and
00:10:52
UX that you need, as well as the understanding the robots
00:10:57
need to navigate the world safely.
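Schematically, a vision-language-action model is a single multimodal model mapping camera frames plus a language instruction to motor commands. The types and method below are hypothetical placeholders (the real Gemini Robotics interface is not public):

    from dataclasses import dataclass

    @dataclass
    class MotorCommand:
        joint_angles: list[float]  # target pose for each joint
        gripper_closed: bool

    def act(vla_model, camera_frame, instruction: str) -> MotorCommand:
        # One multimodal model grounds the instruction ("put the yellow object
        # into the red bucket") in what the camera sees and emits low-level
        # motor commands directly. `vla_model.predict` is an assumed API.
        return vla_model.predict(image=camera_frame, text=instruction)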
00:11:04
I asked Sundar this: does that mean that ultimately you could build what would be the equivalent of, call it, either a Unix-like operating system layer, or an Android, for generalized robotics? At
00:11:12
which point, if it works well enough across enough devices, there will be a proliferation of robotics devices and
00:11:20
companies and products that will suddenly take off in the world, because this software exists to do this
00:11:25
generally. Exactly. That's certainly one strategy we're pursuing, a kind of Android play, if you like, almost an
00:11:32
OS layer across robotics. But there are also some quite interesting things about vertically
00:11:38
integrating our latest models with specific robot types and robot
00:11:44
designs, and some kind of end-to-end learning of that too. So both are actually pretty interesting, and we're
00:11:49
pursuing both strategies. Do you think humanoid robots are a good form factor?
00:11:56
Does that make sense in the world? Because some folks have criticized it: the humanoid form is good for humans because we're meant to do lots of different
00:12:02
things, but if we want to solve a specific problem, there may be a better form factor to fold laundry, or do dishes, or
00:12:07
clean the house, or whatever. Yeah, I think there's going to be a place for both. So actually, I used to be of the opinion, maybe five or
00:12:13
ten years ago, that we'd have form-specific robots for certain tasks, and I
00:12:18
think industrial robots will definitely be like that, where you can optimize the robot for the specific task;
00:12:24
whether it's a laboratory or a production line, you'd want quite different types of robots. On the
00:12:30
other hand, for general-use or personal-use robotics, and just
00:12:36
interacting with the ordinary world, the humanoid form factor could be pretty important, because of course we've
00:12:41
designed the physical world around us to be for humans. And so steps,
00:12:47
doorways, all the things that we've designed for ourselves: rather than changing all of those in the real world,
00:12:54
it might be easier to design the form factor to work seamlessly with the way we've already designed the world. So
00:12:59
I think there's an argument to be made that the humanoid form factor could be very important for those types of tasks. But I think there is a place
00:13:06
also for specialized robotic forms. Do you have a view, hundreds of millions, millions, thousands, over the
00:13:12
next five years, seven years? I mean, in your head, do you have a vision? Yeah, I do, and I spend quite a lot of time on this, and I feel we're
00:13:18
still a little bit early on robotics. I think in the next couple of years there'll be a
00:13:24
real wow moment with robotics, but I think the algorithms need a bit
00:13:29
more development. The general-purpose models that these robotics models are built on still need
00:13:36
to be better and more reliable, and better at understanding the world around them. And I think that will come
00:13:41
in the next couple of years. And then on the hardware side, I think eventually we will have
00:13:48
millions of robots helping society and increasing productivity. But the key there, when
00:13:54
you talk to hardware experts, is at what point you have the right level of hardware to go for the scaling option,
00:14:01
because effectively, when you start building factories around trying to make tens of thousands, hundreds of thousands,
00:14:06
of a particular robot type, it's harder for you to quickly update and iterate the robot design. So it's
00:14:14
one of those questions where, if you call it too early, the next generation of robot might
00:14:20
be invented in six months' time that's just more reliable and better and more dexterous. Sounds like, using a computing analogy,
00:14:25
we're kind of in the '70s-era PC, DOS kind of moment. Yeah, potentially. But of course, maybe that's where
00:14:32
we are, except that 10 years happens in one year, probably. So, right,
00:14:38
one of those years, right? Exactly. Yeah. So let's talk about other
00:14:44
applications, particularly in science, true to your heart as
00:14:49
a scientist, as a Nobel Prize-winning scientist. I always felt like
00:14:55
the greatest things we would be able to do with AI would be the problems that are intractable to humans
00:15:01
with our current technology and capabilities and our brains, and we can unlock all of this potential.
00:15:08
What are the areas of science and breakthroughs in science that you're most excited about, and what kinds of
00:15:13
models do we use to get there? Yeah, I mean, AI to accelerate scientific discovery and help with things like
00:15:19
human health is the reason I've spent my whole career on AI, and I think it's
00:15:25
the most important thing we can do with AI. I feel like if we build AGI in the right way, it will be the ultimate
00:15:30
tool for science, and I think we've been showing the way at DeepMind, obviously AlphaFold most
00:15:37
famously, but actually we've applied our AI systems to many branches of science, whether it's
00:15:43
materials design, helping with controlling plasma in fusion reactors, predicting the weather, or solving
00:15:50
math olympiad problems. And the same types of
00:15:55
systems, with some extra fine-tuning, can basically solve a lot of these
00:16:01
complex problems. So I think we're just scratching the surface of what AI will be able to do, and there are some things
00:16:07
that are missing. So AI today, I would say, doesn't have true creativity, in the sense that it can't come up with a new
00:16:13
conjecture or new hypothesis yet. It can maybe prove something that you give it, but it's not able to come up with
00:16:20
a new idea or new theory itself. So I think that would actually be one of the tests for
00:16:25
AGI. What is that creativity, as a human? Yeah, what is creativity? I think it's the sort of intuitive
00:16:31
leaps that we often celebrate in the best scientists in history, and artists of course. And maybe
00:16:38
it's done through analogy, or analogical reasoning; there are many theories in psychology and neuroscience as to how we as human scientists do it. But
00:16:46
a good test for it would be something like: give one of these modern AI systems a knowledge cutoff of 1901 and
00:16:53
see if it can come up with special relativity like Einstein did in 1905.
00:16:59
Right? If it's able to do that, then I think we're on to something really important, where perhaps we're
00:17:05
nearing AGI. Another example would be our AlphaGo program that beat the
00:17:10
world champion at Go. Not only did it win, back 10 years ago, it invented new strategies that had
00:17:16
never been seen before for the game of Go, famously move 37 in game two,
00:17:21
which is now studied. But can an AI system come up with a game as elegant,
00:17:27
as satisfying, as aesthetically beautiful as Go, not just a new strategy? And the answer to those things
00:17:32
at the moment is no. So that's one of the things I think is missing from a true general system. An AGI
00:17:39
system should be able to do those kinds of things as well. Can you break down what's missing? And
00:17:45
maybe related to the point of view shared by Dario, Sam, and others, that AGI is
00:17:51
a few years away. Do you not subscribe to that belief? And maybe help us understand,
00:17:58
in your understanding of the system architecture, what's lacking?
00:18:03
Well, I think the fundamental aspect of this is, can we mimic these
00:18:08
intuitive leaps, rather than the incremental advances, that the best human
00:18:13
scientists seem to be able to make. I always say what separates a great scientist from a good scientist: they're both technically very capable, of
00:18:20
course, but the great scientist is more creative, and so maybe they'll spot some pattern from another subject area
00:18:27
that can have an analogy, or some sort of pattern matching,
00:18:32
to the area they're trying to solve. And I think one day AI will be able to do this, but it doesn't have the reasoning
00:18:38
capabilities, and some of the thinking capabilities, that are
00:18:44
going to be needed to make that kind of breakthrough. I also think that we're lacking consistency. So you often
00:18:50
hear some of our competitors say that these modern systems we have today are PhD intelligences. I
00:18:57
think that's nonsense. They're not PhD intelligences. They have some capabilities that are PhD-level,
00:19:04
but they're not in general capable, and that's exactly what general intelligence should be, of performing
00:19:10
across the board at the PhD level. In fact, as we all know from interacting with today's chatbots, if you pose the
00:19:16
question in a certain way, they can make simple mistakes with even high-school maths and simple counting.
00:19:24
So that shouldn't be possible for a true AGI system. So I think that we are
00:19:30
maybe, I would say, five to ten years away from having an AGI system that's capable of doing those
00:19:36
things. Another thing that's missing is continual learning, the ability to teach the system something
00:19:42
new online, or adjust its behavior in some way. And so a lot of these
00:19:48
core capabilities, I think, are still missing, and maybe scaling will get us there, but if I were to bet, I think
00:19:53
there are probably one or two missing breakthroughs that are still required, and those will come over the next five
00:20:00
or so years. In the meantime, some of the reports and the scoring systems that
00:20:06
are used seem to be demonstrating two things. One, perhaps, and tell me if we're wrong on this, a convergence of
00:20:12
performance across large language models, and number two, perhaps, a slowing down or a flatlining of improvements in
00:20:18
performance with each generation. Are those two statements generally true, or not so much? No, I mean, we're not seeing
00:20:24
that internally, and we're still seeing a huge rate of progress. But
00:20:29
also, we're looking at things more broadly. You see it with our Genie models and Veo models, and Nano Banana is
00:20:37
insane. It's bananas. It's bananas. Can I see who's used it? Has anyone here used Nano Banana?
00:20:43
It's incredible, right? I mean, I'm a nerd who used to use Adobe Photoshop as a kid, and Kai's Power Tools, and, as I was
00:20:50
telling you, Bryce 3D. So the graphics systems, and recognizing what's going on there, it was just
00:20:56
mind-blowing. Well, I think that's the future of a lot of these creative tools: you're just going to sort of vibe with them, just talk to them, and
00:21:04
they'll be consistent enough. Like with Nano Banana, what's amazing about it, it's an image generator, it's state-of-the-art
00:21:11
and best-in-class, but one of the things that makes it so great is its consistency. It's able to instruction-
00:21:17
follow, to change what you want changed and keep everything else the same. And so you can iterate with it and eventually get
00:21:23
the kind of output that you want. And that's, I think, what the future of a lot of these creative tools is going to
00:21:29
be, and it sort of signals the direction, and people love it, and they love creating with it.
00:21:34
So democratization of creativity, I think, is really powerful. I remember
00:21:40
having to buy books on Adobe Photoshop as a kid, and you'd read them to learn how to remove something from an image, and how to fill it in and
00:21:46
feather, and all this stuff. Now anyone can do it with Nano Banana; they can just explain to the software what they
00:21:52
want it to do, and it just does it. Yeah, I think you're going to see two things. One is this democratization of these tools, for
00:21:59
everybody to just use and create with, without having to learn incredibly complex UXs and UIs
00:21:59
like we had to in the past. But on the other hand, we're also collaborating with filmmakers and
00:22:06
top creators and artists. So they're helping us design what these new tools should be, what features they'd
00:22:13
want. People like the director Darren Aronofsky, who's a good friend of mine, an amazing director, and he's been
00:22:18
making films, he and his team, using Veo and some of our other tools, and we're learning a lot by observing them and
00:22:23
collaborating with them. And what we find is that it also superpowers and turbocharges the best professionals too,
00:22:30
because the best creatives, the professional creatives, are suddenly able to be 10x, 100x
00:22:36
more productive. They can just try out all sorts of ideas they have in mind at very low cost, and then get to
00:22:41
the beautiful thing that they wanted. So I actually think both things are true. We're democratizing it for everyday use,
00:22:47
for YouTube creators and so on. But on the other hand, at the high end, the people who understand these
00:22:54
tools, and not everyone can get the same output out of these tools, there's a skill in that, as well as
00:23:00
the vision and the storytelling and the narrative style of the top creatives. And I think it just allows
00:23:05
them, and they really enjoy using these tools, it allows them to iterate way faster. Do we get to a world where each
00:23:18
individual describes what sort of content they're interested in? "Play me music like Dave Matthews," and it'll play
00:23:26
some new track. Yes. Or, "I want to play a video game set in the movie Braveheart, and I want
00:23:31
to be in that movie." Yes. And I just have that experience. Do we end up there, or do we still have a one-
00:23:36
to-many creative process in society? How important is this culturally, and I know this is a little bit philosophical, but it's
00:23:43
interesting to me: are we still going to have storytelling, where we have one story that we all share because someone made it,
00:23:48
or are we each going to start to develop and pull on our own kind of virtual world? I actually foresee a world, and I think
00:23:54
a lot about this, having started in the games industry as a game designer and programmer in the '90s,
00:24:00
where what we're seeing is the beginning of the future of entertainment,
00:24:05
maybe some new genre or new art form, where there's a bit of co-creation. I
00:24:10
still think that you'll have the top creative visionaries. They will be creating these compelling experiences
00:24:15
and dynamic storylines, and they'll be of higher quality, even if they're using the same tools, than what the everyday person
00:24:20
can do. And so millions of people will potentially dive into those
00:24:25
worlds, but maybe they'll also be able to co-create certain parts of those worlds, and perhaps
00:24:32
the main creative person is almost an editor of that world. So that's the kind of thing I'm foreseeing
00:24:39
in the next few years, and I'd actually like to explore that ourselves with technologies like Genie.
00:24:44
Right. Incredible. And how are you spending your time? Maybe you can describe Isomorphic,
00:24:50
what Isomorphic is, and are you spending a lot of your time there? I am. So I also run Isomorphic, which
00:24:55
is our spinout company to revolutionize drug discovery, building on our AlphaFold breakthrough in
00:25:02
protein folding. And of course, knowing the structure of a protein is only one step in the drug discovery
00:25:08
process. So you can think of Isomorphic as building many adjacent AlphaFolds to help with things like designing
00:25:15
chemical compounds that don't have any side effects but bind to the right place on the protein. And I think we could
00:25:22
reduce drug discovery down from taking years, sometimes a decade, to
00:25:27
maybe weeks or even days over the next ten years. It's incredible. Do you think that's in
00:25:32
clinic soon, or is that still in the discovery phase? We're building up the platform right now, and we have great partnerships
00:25:38
with Eli Lilly, I think you had the CEO speaking earlier, and Novartis, which are fantastic, and our own internal drug
00:25:44
programs, and I think we'll be entering the preclinical phase sometime next year. So candidates get handed over to
00:25:50
the pharma company, and they then take them forward? That's right. And we're working on cancers and immunology and oncology, and
00:25:55
we're working with places like MD Anderson. How much of this requires, and I just
00:26:00
want to go back to your point about AGI as it relates to what you just said: models can be probabilistic or
00:26:07
deterministic, and tell me if I'm reducing this down too simplistically. The model takes an input, and either it
00:26:12
outputs something very specific, like it's got a logical algorithm and it outputs the same thing every time, or it
00:26:18
could be probabilistic, where it can change things and make selections by probability: 80% I'll select this letter, 90% I'll select this letter next,
00:26:25
etc. How much do we have to develop deterministic models that sync
00:26:30
up with, for example, the physics or the chemistry underlying the molecular interactions as
00:26:38
you do your drug discovery modeling? How much are you building novel deterministic models that work with the
00:26:43
models that are probabilistic, trained on data?
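As a minimal illustration of the distinction being drawn in the question, the same model outputs can be decoded deterministically (always take the most likely token) or probabilistically (sample in proportion to the probabilities), sketched here in Python:

    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 1.0, 0.1])             # model's raw scores for 3 tokens
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> probabilities

    deterministic = int(np.argmax(probs))          # same input, same output every time
    probabilistic = int(rng.choice(len(probs), p=probs))  # ~66%/24%/10% chance each

    print(probs, deterministic, probabilistic)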
00:26:50
Yeah, it's a great question. Actually, for the moment, and I think probably for the next five years or so, we're building what maybe you could call hybrid models. So AlphaFold itself is a hybrid model, where you have the learning
00:26:56
component, this probabilistic component you're talking about, which is based on neural networks and transformers and so on, and that's
00:27:02
learning from the data you give it, any data you have available. But also, in a lot of cases with biology
00:27:09
and chemistry, there isn't enough data to learn from. So you also have to build in some of the rules about chemistry and
00:27:16
physics that you already know about. So for example, with AlphaFold, the angle of bonds between atoms: we made
00:27:23
sure that AlphaFold understood you couldn't have atoms overlapping with each other, and things like that. Now, in
00:27:28
theory it could learn that, but it would waste a lot of the learning capacity. So actually it's better to have that as a constraint
00:27:36
in there.
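A toy sketch of what building a known rule in as a constraint can look like, assuming a generic deep-learning setup rather than AlphaFold's actual loss: the training objective combines a learned data-fit term with a penalty for physically impossible outputs, such as overlapping atoms.

    import numpy as np

    def clash_penalty(coords, min_dist=1.2):
        # Penalize predicted 3D atom positions that overlap: any pair closer
        # than a minimum physical distance contributes to the loss, so the
        # network needn't spend learning capacity rediscovering this rule.
        diffs = coords[:, None, :] - coords[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(axis=-1))
        i, j = np.triu_indices(len(coords), k=1)   # each atom pair once
        return np.maximum(0.0, min_dist - dists[i, j]).sum()

    def hybrid_loss(pred_coords, true_coords, weight=10.0):
        data_term = ((pred_coords - true_coords) ** 2).mean()   # learned from data
        return data_term + weight * clash_penalty(pred_coords)  # known physics, built in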
00:27:42
Now, the trick with all hybrid systems, and AlphaGo was another hybrid system, where a neural
00:27:48
network was learning about the game of Go, what kinds of patterns are good, and then we had Monte Carlo tree search on top doing the
00:27:54
planning. The trick is: how do you marry up a learning system with a more handcrafted, bespoke system, and actually have them work well together? And that's pretty tricky to do.
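One well-known way the two were married in AlphaGo, sketched here from the published idea: during tree search, each candidate move is scored by mixing the search's own running value estimate with the policy network's prior (the PUCT rule), so the handcrafted planner is steered by the learned model.

    import math

    def puct_score(value_sum, visits, prior, parent_visits, c=1.5):
        # Exploitation: the value the search has measured for this move so far.
        q = value_sum / visits if visits else 0.0
        # Exploration: the learned network's prior, boosted for under-visited moves.
        u = c * prior * math.sqrt(parent_visits) / (1 + visits)
        return q + u  # the move with the highest score is descended next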
00:28:00
Does that sort of architecture ultimately lead to the breakthroughs needed for AGI, do you think? Are there
00:28:05
deterministic components that need to be solved? I think ultimately, when you figure something out
00:28:10
with one of these hybrid systems, what you ultimately want to do is upstream it into the learning component. So it's always better if you
00:28:17
can do end-to-end learning and directly predict the thing that you're after from the data that you're
00:28:23
given. So once you've figured out something using one of these hybrid systems, you then try to go back and
00:28:30
reverse engineer what you've done and see if you can incorporate that information into
00:28:37
the learning system. And this is sort of what we did with AlphaZero, the more general form of AlphaGo. So AlphaGo had
00:28:42
some Go-specific knowledge in it. But then with AlphaZero, we got rid of that, including the human data, the human
00:28:49
games that we learned from, and actually just did self-play learning from scratch. And of course then it was able to learn any
00:28:54
game, not just Go. A lot of hype and hoopla has been made about the demand for energy arising from
00:29:00
AI. This was a big part of the AI summit we held in Washington, DC a few weeks ago, and it seems to be the number
00:29:07
one topic everyone talks about in tech nowadays: where's all this power going to come from? But I'll ask the question of
00:29:12
you: are there changes in the architecture of the models, or the hardware, or the relationship between the
00:29:19
models and the hardware, that bring down the energy per token or the cost per token of output, and that ultimately
00:29:26
maybe, say, mute the energy demand curve that's in front of us? Or do you not think that's the case, and we're
00:29:32
still going to have a pretty geometric energy demand curve? Well, look, interestingly, again, I think both
00:29:37
cases are true, in the sense that, especially at Google and at DeepMind, we focus a lot on very efficient
00:29:44
models that are powerful, because we have our own internal use cases, of course, where we need to serve, say, AI
00:29:49
Overviews to billions of users every day, and it has to be extremely efficient, extremely low-latency, and
00:29:55
very cheap to serve. And so we've pioneered many
00:30:00
techniques that allow us to do that, like distillation, where you have a bigger model internally that trains the
00:30:05
smaller model, right? You train the smaller model to mimic the bigger model.
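A minimal sketch of the distillation idea, assuming a generic PyTorch-style setup rather than Google's actual training stack: the small student model is trained to match the softened output distribution of the large teacher.

    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Train the small model to mimic the big one: minimize the KL divergence
        # between the teacher's softened token distribution and the student's,
        # so the student inherits the behavior at a fraction of the serving cost.
        t = temperature
        teacher_probs = F.softmax(teacher_logits / t, dim=-1)
        student_log_probs = F.log_softmax(student_logits / t, dim=-1)
        # The t**2 factor keeps gradient magnitudes comparable across temperatures.
        return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * t * t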
00:30:12
And over time, if you look at the progress of the last two years, the model
00:30:18
efficiencies are like 10x, even 100x, better for the same performance. Now, the reason
00:30:25
that isn't reducing demand is because we've still not got to AGI yet. So with the frontier models, you keep wanting to
00:30:32
train and experiment with new ideas at larger and larger scale, whilst at the same time, on the serving side,
00:30:37
things are getting more and more efficient. So both things are true. And in the end, I think, from the
00:30:43
energy perspective, AI systems will give back a lot more to energy
00:30:49
and climate change and these kinds of things than they take, in terms of the efficiency of grid systems and electrical systems, materials design, new types of properties, new energy sources. I think AI will help with all of that
00:30:56
over the next 10 years, and that will far outweigh the energy that it uses today. As the last question, describe
00:31:02
the world 10 years from now. Wow. Okay. Well, I mean, you know, 10 years, even 10 weeks, is a
00:31:10
lifetime in AI. So, 10 years in this field, right? But I do feel like we will
00:31:16
have AGI in the next 10 years, you know, full AGI, and I think that will usher
00:31:22
in a new golden era of science, a kind of new renaissance. And I think
00:31:27
we'll see the benefits of that right across, from energy to human health. Amazing. Please join me in thanking
00:31:34
Nobel laureate Demis Hassabis. Thank you. That was great. Thank you.
00:31:42
[Applause]

Podspun Insights

In this episode, the spotlight shines on Demis Hassabis, the CEO of Google DeepMind and a recent Nobel Prize winner. The conversation kicks off with a surreal recounting of his Nobel Prize win, where he shares the thrill of signing his name in the Nobel book alongside legends like Einstein and Curie. As the dialogue unfolds, Hassabis dives into the transformative power of AI, particularly through DeepMind's groundbreaking projects like AlphaFold and the newly launched Genie 3 world model. Genie 3, which allows users to generate and explore interactive worlds using natural language, showcases the strides AI has made in learning intuitive physics and creating immersive experiences. The episode also explores the future of robotics, the potential for AI to revolutionize drug discovery, and the philosophical implications of AGI. Hassabis envisions a world where AI not only accelerates scientific discovery but also democratizes creativity, allowing everyone to co-create in ways previously unimaginable.

Badges

This episode stands out for the following:

  • Best concept / idea: 95
  • Best overall: 92
  • Most inspiring: 90
  • Best visuals: 90

Episode Highlights

  • Nobel Prize Moment
    Demis Hassabis shares the surreal experience of receiving the Nobel Prize.
    “It's the call that every scientist dreams about.”
    @ 01m 19s
  • The Power of Genie 3
    Introducing Genie 3, a world model that generates interactive environments from a single text prompt.
    “This model is reverse engineering intuitive physics.”
    @ 06m 18s
  • AI's Role in Science
    Hassabis discusses how AI can accelerate scientific discovery and solve complex problems.
    “AI will be the ultimate tool for science.”
    @ 15m 30s
  • The Future of Creative Tools
    Tools like Nano Banana let anyone create without mastering complex interfaces.
    “Democratization of creativity is really powerful.”
    @ 21m 34s
  • A New Era of Entertainment
    The future of entertainment may involve co-creation and dynamic storytelling.
    “What we're seeing is the beginning of the future of entertainment.”
    @ 24m 00s
  • Revolutionizing Drug Discovery
    Isomorphic aims to cut drug discovery from years to weeks using AI.
    “We could reduce drug discovery from taking years to maybe weeks or even days.”
    @ 25m 22s
  • The Future of AI and Energy
    AI systems are expected to give back more to energy and climate than they consume.
    “AI will help with all of that over the next 10 years.”
    @ 30m 56s

