
AI Expert: (Warning) 2030 Might Be The Point Of No Return! We've Been Lied To About AI!

December 04, 2025 / 02:04:06

This episode features Professor Stuart Russell discussing AI safety, the risks of superintelligence, and the implications for humanity. Key topics include the extinction statement signed by experts, the gorilla problem in AI, and the need for effective regulation.

Professor Russell, a leading figure in AI research, reflects on his career and the urgency of addressing the potential dangers posed by advanced AI systems. He emphasizes the importance of ensuring that AI remains beneficial to humanity, rather than becoming uncontrollable.

He shares insights on conversations with AI CEOs who acknowledge the risks but feel compelled to continue development due to competitive pressures. Russell highlights the paradox of pursuing powerful AI while neglecting safety measures, likening it to playing Russian roulette.

The discussion also touches on the historical context of AI development, the challenges of defining objectives for AI systems, and the societal implications of widespread automation. Russell advocates for public awareness and regulatory action to ensure a safe future.

Listeners are encouraged to engage with policymakers to influence the direction of AI development and prioritize safety over unchecked progress.

TL;DR

Professor Stuart Russell discusses AI safety, extinction risks, and the need for regulation in the face of advancing technology.

Video

00:00:00
In October, over 850 experts, including yourself and other leaders like Richard Branson and Geoffrey Hinton, signed a
00:00:06
statement to ban AI superintelligence, as you guys raised concerns of potential human extinction.
00:00:11
Because unless we figure out how do we guarantee that the AI systems are safe,
00:00:17
we're toast. And you've been so influential on the subject of AI, you wrote the textbook that many of the CEOs who are building
00:00:23
some of the AI companies now would have studied on the subject of AI. Yeah. So, do you have any regrets? Um,
00:00:31
Professor Stuart Russell has been named one of Time magazine's most influential voices in AI.
00:00:36
After spending over 50 years researching, teaching, and finding ways to design AI in such a way that
00:00:42
humans maintain control, you talk about this gorilla problem as a way to understand AI in the context of
00:00:48
humans. Yeah. So, a few million years ago, the human line branched off from the gorilla line in evolution, and now the gorillas
00:00:53
have no say in whether they continue to exist because we are much smarter than they are. So intelligence is actually the single most important factor to
00:01:00
control planet Earth. Yep. But we're in the process of making something more intelligent than us. Exactly.
00:01:05
Why don't people stop then? Well, one of the reasons is something called the Midas touch. So King Midas is this legendary king who asked the gods,
00:01:12
can everything I touch turn to gold? And we think of the Midas touch as being a good thing, but he goes to drink some
00:01:17
water, the water has turned to gold. And he goes to comfort his daughter, his daughter turns to gold. So he dies in
00:01:22
misery and starvation. So this applies to our current situation in two ways. One is that greed is driving these
00:01:28
companies to pursue technology with the probabilities of extinction being worse than playing Russian roulette. And
00:01:34
that's even according to the people developing the technology without our permission. And people are just fooling themselves if they think it's naturally
00:01:41
going to be controllable. So, you know, after 50 years, I could retire, but instead I'm working 80 or
00:01:47
100 hours a week trying to move things in the right direction. So, if you had a button in front of you which would stop
00:01:53
all progress in artificial intelligence, would you press it? Not yet. I think there's still a decent
00:02:00
chance we can guarantee safety. And I can explain more of what that is.
00:02:07
I see messages all the time in the comments section that some of you didn't realize you didn't subscribe. So, if you
00:02:12
could do me a favor and double check if you're a subscriber to this channel, that would be tremendously appreciated. It's the simple, it's the free thing
00:02:18
that anybody that watches this show frequently can do to help us here to keep everything going in this show in the trajectory it's on. So, please do
00:02:25
double check if you've subscribed, and uh thank you so much, because in a strange way you're part of our history
00:02:30
and you're on this journey with us and I appreciate you for that. So, yeah, thank you.
00:02:41
Professor Stuart Russell, OBE. A lot of people have been talking about AI for the last couple of years. It appears,
00:02:49
and this really shocked me, it appears you've been talking about AI for most of your life. Well, I started doing AI in high school
00:02:56
um back in England, but then I did my PhD starting in '82 at Stanford. I
00:03:02
joined the faculty of Berkeley in '86. So I'm in my 40th year as a professor at
00:03:08
Berkeley. The main thing that the AI community is familiar with in my work uh
00:03:14
is a textbook that I wrote. Is this the textbook that most students
00:03:20
who study AI are likely learning from? Yeah. So you wrote the textbook on artificial
00:03:26
intelligence 31 years ago. You actually probably
00:03:32
started writing it, because it's so bloody big, in the year that I was born. I was born in '92. Uh yeah, it took me about two years.
00:03:38
Me and your book are the same age, which is just a wonderful way for me to understand just how long
00:03:44
you've been talking about this and how long you've been writing about this. And actually, it's interesting that many of
00:03:51
the CEOs who are building some of the AI companies now probably learned from your
00:03:56
textbook. You had a conversation with somebody who said that in order for
00:04:01
people to get the message that we're going to be talking about today, there would have to be a catastrophe for
00:04:07
people to wake up. Can you give me context on that conversation and a gist of who you had this conversation with?
00:04:14
Uh, so it was with one of the CEOs of uh a leading AI company. He sees two
00:04:21
possibilities, as do I, which is um either we have a small, or let's say
00:04:28
small-scale, disaster of the same scale as Chernobyl, the nuclear meltdown in Ukraine.
00:04:34
Yeah. So this uh nuclear plant blew up in 1986, killed uh a fair number of people
00:04:42
directly and maybe tens of thousands of people indirectly through uh radiation. Recent
00:04:49
cost estimates put it at more than a trillion dollars. So that would wake people up. That would
00:04:58
get the governments to regulate. He's talked to the governments and they won't do it. So he looked at this Chernobyl
00:05:06
scale disaster as the best case scenario because then the governments would regulate and require AI systems to be
00:05:14
built safely. And is this CEO building an AI company? He runs one of the leading AI companies.
00:05:22
And even he thinks that the only way that people will wake up is if there's a Chernobyl level nuclear disaster.
00:05:28
Uh yeah, no, it wouldn't have to be a nuclear disaster. It would be either an AI system that's being misused
00:05:35
by someone, for example, to engineer a pandemic or an AI system that does
00:05:40
something itself, such as crashing our financial system or our communication systems. The alternative is a much worse
00:05:47
disaster where we just lose control altogether. You have had lots of conversations with lots of people in the
00:05:54
world of AI, both people that are, you know, have built the technology, have studied and researched the technology or
00:06:00
the CEOs and founders that are currently in the AI race. What are some of the
00:06:05
interesting sentiments that the general public wouldn't believe that you hear privately about their perspectives?
00:06:14
Because I find that so fascinating. I've had some private conversations with people very close to these tech
00:06:19
companies and the shocking sentiment that I was exposed to was that they are aware of the risks often but
00:06:26
they don't feel like there's anything that can be done, so they're carrying on, which feels like a bit of a paradox to me, like,
00:06:31
yes, it must be a very difficult position to be in in a sense, right? You're
00:06:38
doing something that you know has a good chance of bringing an end to life on Earth,
00:06:44
including that of yourself and your own family. They feel
00:06:50
that they can't escape this race, right? If they, you know, if a CEO of one of
00:06:56
those companies was to say, you know, we're we're not going to do this anymore, they
00:07:01
would just be replaced because the investors are putting their money up because they want to create AGI
00:07:10
and reap the benefits of it. So, it's a strange situation where every one, at least
00:07:16
all the ones I've spoken to, I haven't spoken to Sam Altman about this, but you know, Sam Altman,
00:07:23
even before becoming CEO of OpenAI, said that
00:07:29
creating superhuman intelligence is the biggest risk to human existence that
00:07:35
there is. My worst fears are that we cause significant we the field the technology the industry cause
00:07:41
significant harm to the world. You know Elon Musk is also on record saying this. So uh Dario Ammedday
00:07:48
estimates up to a 25% risk of extinction. Was there a particular moment when you realized that
00:07:56
the CEOs are well aware of the extinction-level risks? I mean, they all signed a statement in May of '23,
00:08:05
uh, it's called the extinction statement. It basically says AGI is an extinction risk at the same level as
00:08:12
nuclear war and pandemics. But I don't think they feel it in their
00:08:17
gut. You know, imagine that you were one of the nuclear physicists. You know, I
00:08:24
guess you've seen Oppenheimer, right? you're there, you're watching that first nuclear explosion.
00:08:30
How how would that make you feel about the potential impact of nuclear war on
00:08:37
the human race? Right? I I think you would probably become a pacifist and say
00:08:43
this weapon is so terrible, we have got to find a way to uh keep it under
00:08:49
control. We are not there yet with the people making these decisions
00:08:55
and certainly not with the governments, right? You know
00:09:00
what policy makers do is they, you know, they listen to experts. They keep their
00:09:06
finger in the wind. You got some experts, you know, dangling $50 billion
00:09:12
checks and saying, "Oh, you know, all that doomer stuff, it's just fringe
00:09:17
nonsense. Don't worry about it. Take my $50 billion check." You know, on the other side, you've got very
00:09:23
well-meaning, brilliant scientists like Geoff Hinton saying, actually, no, this is the end of the human race. But
00:09:30
Jeff doesn't have a $50 billion check. So the view is the only way to stop the
00:09:36
race is if governments intervene and say, okay, we don't want this
00:09:43
race to go ahead until we can be sure that it's going ahead in absolute
00:09:50
safety. Closing off on your career journey, you
00:09:55
got a... you received an OBE from Queen Elizabeth. Uh yes. And what was the listed reason for that,
00:10:00
for the award? Uh, contributions to artificial intelligence research. And you've been listed as a Time
00:10:07
magazine most influential person in AI several years in a row, including this
00:10:13
year in 2025. Yeah. Now there's two terms here that are central to the things we're going to
00:10:19
discuss. One of them is AI and the other is AGI. In my muggle interpretation of that,
00:10:24
artificial general intelligence is when the system, the computer, whatever it might be, the technology has
00:10:31
generalized intelligence, which means that it could theoretically see, understand
00:10:37
um the world. It knows everything. It can understand everything in the world as well as or better than a human
00:10:44
being. Yeah. And take action as well. I mean some people say oh you know AGI
00:10:51
doesn't have to have a body but a good chunk of our intelligence actually is about managing our body about perceiving
00:10:58
the real environment and acting on it moving grasping and so on. So I think
00:11:04
that's part of intelligence and and AGI systems should be able to operate robots
00:11:10
successfully. But there's often a misunderstanding, right, that people say, well, if it doesn't have a robot body, then it can't
00:11:17
actually do anything. But then if you remember, most of us don't do things with our
00:11:23
bodies. Some people do, brick layers, painters, gardeners,
00:11:30
chefs, um, but people who do podcasts,
00:11:35
you're doing it with your mind, right? You're doing it with your ability to
00:11:40
produce language. Uh, you know, Adolf Hitler didn't do it with his body.
00:11:46
He did it by producing language. Hope you're not comparing us.
00:11:52
But but uh you know so even an AGI that has
00:11:58
no body uh it actually has more access to the human race than Adolf Hitler ever
00:12:04
did because it can send emails and texts to
00:12:10
what, three-quarters of the world's population directly. It also speaks all of their languages,
00:12:17
and it can devote 24 hours a day to each individual person on earth to convince
00:12:24
them to do whatever it wants them to do. And our whole society runs now on the internet. I mean if there's an issue
00:12:30
with the internet, everything breaks down in society. Airplanes become grounded, and even electricity is
00:12:35
running off internet systems. So I mean my entire life, it seems, runs off the internet now.
00:12:42
Yeah, water supplies. So this is one of the routes by which AI systems could
00:12:48
bring about a medium-sized catastrophe is by basically shutting down our life
00:12:55
support systems. Do you believe that at some point in the
00:13:01
coming decades we'll arrive at a point of AGI where these systems are generally
00:13:07
intelligent? Uh yes, I think it's virtually certain
00:13:12
unless something else intervenes, like a nuclear war, or we may refrain from
00:13:19
doing it. But I think it will be extraordinarily difficult uh for us to refrain.
00:13:25
When I look down the list of predictions from the top 10 AI CEOs on when AGI will
00:13:30
arrive, you've got Sam Altman, who's the founder of OpenAI/ChatGPT, who says before 2030. Demis at DeepMind
00:13:39
says 2030 to 2035. Jensen from Nvidia says around five
00:13:46
years. Dario at Anthropic says 2026 to 2027 for powerful AI close to AGI. Elon
00:13:53
says in the 2020s. Um and go down the list of all of them and they're all saying relatively within 5 years.
00:14:00
I actually think it'll take longer. I don't think you can make a prediction
00:14:06
based on engineering um in the sense that yes, we could make
00:14:14
machines 10 times bigger and 10 times faster, but that's probably not the reason why
00:14:20
we don't have AGI, right? In fact, I think we have far more computing power
00:14:27
than we need for AGI, maybe a thousand times more than we need. The reason we
00:14:34
don't have AGI is because we don't understand how to make it properly. Um what we've seized upon
00:14:42
is one particular technology called the language model. And we observed that as
00:14:49
you make language models bigger, they produce text language that's more
00:14:55
coherent and sounds more intelligent. And so mostly what's been happening in
00:15:01
the last few years is just okay let's keep doing that because one thing companies are very good at unlike
00:15:08
universities is spending money. They have spent gargantuan amounts of money
00:15:15
and they're going to spend even more gargantuan amounts of money. I mean you
00:15:20
know we mentioned nuclear weapons. So the Manhattan project uh in World War II to develop nuclear
00:15:27
weapons, its budget, in 2025 dollars,
00:15:32
was about 20-odd billion dollars. The budget for AGI is going to be a trillion
00:15:41
dollars next year. So 50 times bigger than the Manhattan project. Humans have
00:15:46
a remarkable history of figuring things out when they galvanize towards a shared
00:15:51
objective. You know, thinking about the moon landings or whatever else it might be
00:15:57
through history. And the thing that makes this feel all quite inevitable to me is just the sheer volume of money
00:16:03
being invested into it. I've never seen anything like it in my life. Well, there's never been anything like this in history. Is this the biggest
00:16:09
technology project in human history by orders of magnitude? And there doesn't seem to be anybody
00:16:16
that is pausing to ask the questions about safety. It doesn't even
00:16:22
appear that there's room for that in such a race. I think that's right. To varying extents, each of these companies
00:16:29
has a division that focuses on safety. Does that division have any sway? Can
00:16:35
they tell the other divisions, no, you can't release that system? Not really.
00:16:41
Um I think some of the companies do take it more seriously. Anthropic
00:16:47
uh does. I think Google DeepMind... even there, I think the commercial imperative
00:16:54
to be at the forefront is absolutely vital. If a company is perceived as
00:17:03
you know falling behind and not likely to be competitive, not likely to be the
00:17:09
one to reach AGI first, then people will move their money elsewhere very quickly.
00:17:16
And we saw some quite high-profile departures from companies like OpenAI. Um, I know a chap called
00:17:22
Jan Leike left who was working on AI safety at OpenAI, and he said that the
00:17:30
reason for his leaving was that safety culture and processes have taken a backseat to shiny products at
00:17:36
OpenAI, and he gradually lost trust in leadership. But also Ilya Sutskever.
00:17:42
Ilya Sutskever, yeah. So he was the co-founder and chief scientist for a while, and then,
00:17:48
yeah, so he and Jan Leike were the main safety people. Um,
00:17:54
and so when they say OpenAI doesn't care about safety,
00:18:00
that's pretty concerning. I've heard you talk about this gorilla problem.
00:18:06
What is the gorilla problem as a way to understand AI in the context of humans?
00:18:11
So, the gorilla problem is the problem that gorillas face with respect
00:18:17
to humans. So you can imagine that, you know, a few million years ago, the human line
00:18:23
branched off from the gorilla line in evolution. Uh and now the gorillas are looking at the human line and saying
00:18:30
yeah, was that a good idea? And they have no, um, they have no say in
00:18:37
whether they continue to exist, because we are much smarter than they are. If we chose to, we could
00:18:43
make them extinct in a couple of weeks and there's nothing they can do about it.
00:18:50
So that's the gorilla problem, right? Just the problem a species faces
00:18:56
when there's another species that's much more capable. And so this says that intelligence is
00:19:02
actually the single most important factor to control planet Earth. Yes. Intelligence is the ability to bring
00:19:08
about what you want in the world. And we're in the process of making
00:19:13
something more intelligent than us. Exactly. Which suggests that maybe we become the
00:19:19
gorillas. Exactly. Yeah. Is there any fault in the reasoning there? Because it seems to
00:19:24
make such perfect sense to me. But why don't people stop
00:19:30
then? Cuz it seems like a crazy thing to want to do. Because they think that, uh, if they
00:19:37
create this technology, it will have enormous economic value. They'll be able to use it to replace all the human
00:19:45
workers in the world uh to develop new uh products, drugs,
00:19:52
um, forms of entertainment, anything that has economic value, you could use AGI to create it. And maybe it's
00:20:01
just an irresistible thing in itself, right? I think we as humans place so
00:20:09
much store on our intelligence. You know, you know, how we
00:20:15
think about, you know, what is the pinnacle of human achievement? If we had AGI, we could go way higher
00:20:24
than that. So it's very seductive for people to want to create this technology
00:20:31
and I think people are just fooling themselves if they think it's naturally
00:20:38
going to be controllable. I mean the question is how are you going to retain power
00:20:44
forever over entities more powerful than yourself?
00:20:50
Pull the plug out. People say that sometimes in the comment section when we talk about AI, they said, "Well, I'll just pull a plug out."
00:20:56
Yeah, it's it's sort of funny. In fact, you know, yeah, reading the comment sections in newspapers, whenever there's
00:21:02
an AI article, there'll be people who say, "Oh, you can just pull the plug out, right?" As if a
00:21:08
super intelligent machine would never have thought of that one. Don't forget who's watched all those films where they
00:21:14
did try to pull the plug out. Another thing they said, well, you know, as long as it's not conscious,
00:21:20
then it doesn't matter. It won't ever do anything.
00:21:25
Um, which is completely off the point because, you
00:21:32
know, I I don't think the gorillas are sitting there saying, "Oh, yeah, you know, if only those humans hadn't been
00:21:38
conscious, everything would have been fine, right?" No, of course not. What would make gorillas go extinct is the things
00:21:45
that humans do, right? How we behave, our ability to act successfully
00:21:51
in the world. So when I play chess against my iPhone and I lose, right, I
00:21:58
don't think, oh, well, I'm losing because it's conscious, right? No, I'm just losing because it's better
00:22:04
than I am, in that little world, at moving the bits around to get what
00:22:10
it wants. And so consciousness has nothing to do with it, right? Competence
00:22:16
is the thing we're concerned about. So I think the only hope is can we
00:22:22
simultaneously build machines that are more intelligent than us but guarantee
00:22:31
that they will always act in our best interest. So throwing that question to you, can we
00:22:38
build machines that are more intelligent than us that will also always act in our best interests?
00:22:44
It sounds like a bit of a uh contradiction to some degree because it's kind of like me saying I've got a
00:22:51
French bulldog called Pablo that's uh 9 years old and it's like saying that he could be
00:22:57
more intelligent than me yet I still walk him and decide when he gets fed. I think if he was more intelligent than me
00:23:03
he would be walking me. I'd be on the leash. That's the trick, right? Can we make AI systems whose only purpose is
00:23:12
to further human interests? And I think the answer is yes.
00:23:18
And this is actually what I've been working on. So I think one part of my career that I didn't mention is sort
00:23:25
of having this epiphany, uh, while I was on sabbatical in Paris. This was 2013 or
00:23:32
so. Just realizing that further progress
00:23:37
in the capabilities of AI uh you know if if we succeeded in
00:23:43
creating real superhuman intelligence that it was potentially a catastrophe
00:23:49
and so I pretty much switched my focus to work on how do we make it so that
00:23:55
it's guaranteed to be safe. Are you somewhat troubled by
00:24:01
everything that's going on at the moment with with AI and how it's progressing? Because you strike me as someone that's
00:24:08
somewhat troubled under the surface by the way things are moving forward and
00:24:14
the speed in which they're moving forward. That's an understatement. I'm appalled
00:24:20
actually by the lack of attention to safety. I mean, imagine if someone's
00:24:26
building a nuclear power station in your neighborhood
00:24:32
and you go along to the chief engineer and you say, "Okay, these nuclear things, I've heard that they can actually
00:24:38
explode, right? There was this nuclear explosion that happened in Hiroshima, so
00:24:43
I'm a bit worried about this. You know, what steps are you taking to make sure that we don't have a nuclear explosion
00:24:49
in our backyard?" And the chief engineer says, "Well, we
00:24:54
thought about it. We don't really have an answer." Yeah.
00:25:00
You would... what would you say? I think you would use some expletives.
00:25:08
Well, and you'd call your MP and say, you know, get these people out.
00:25:14
I mean, what are they doing? You read out the list of you know
00:25:20
projected dates for AGI but notice also that those people
00:25:25
I think I mentioned Dario says up to a 25% chance of extinction. Elon Musk gives a
00:25:31
30% chance of extinction. Sam Altman says
00:25:36
basically that AGI is the biggest risk to human existence. So what are they doing? They are playing
00:25:42
Russian roulette with every human being on Earth.
00:25:47
without our permission. They're coming into our houses, putting a gun to the head of our children,
00:25:53
pulling the trigger, and saying, "Well, you know, possibly everyone will die.
00:25:58
Oops. But possibly we'll get incredibly rich."
00:26:04
That's what they're doing. Did they ask us? No. Why is the
00:26:10
government allowing them to do this? because they dangle $50 billion checks
00:26:15
in front of the governments. So I think troubled under the surface is
00:26:20
an understatement. What would be an accurate statement? Appalled
00:26:26
and I am devoting my life to trying
00:26:31
to divert from this course of history into a different one. Do you have any regrets about things you
00:26:38
could have done in the past because you've been so influential on the subject of AI? You wrote the textbook
00:26:44
that many of these people would have studied on the subject of AI more than 30 years ago. Do you have, when you're
00:26:49
alone at night and you think about decisions you've made in this field, because of your scope of influence, is there anything you
00:26:55
regret? Well, I do wish I had understood earlier, uh, what I understand now. We
00:27:02
could have developed safe AI systems. I think there are
00:27:08
some weaknesses in the framework which I can explain but I think that framework could have evolved to develop actually
00:27:15
safe AI systems where we could prove mathematically that the system is going
00:27:21
to act in our interests. The kind of AI systems we're building now, we don't
00:27:26
understand how they work. We don't understand how they work. It's a strange thing to build something
00:27:33
where you don't understand how it works. I mean, there's nothing comparable through human history. Usually with machines, you can pull it apart and see
00:27:39
what cogs are doing what and how the... Well, actually, we put the cogs together, right? So, with most
00:27:46
machines, we designed it to have a certain behavior. So, we don't need to pull it apart and see what the cogs are
00:27:51
because we put the cogs in there in the first place, right? One by one we figured out what the pieces needed
00:27:57
to be and how they work together to produce the effect that we want. So the best analogy I can come up with is, you know,
00:28:06
the first cave person who left a bowl of fruit in the sun and forgot
00:28:12
about it and then came back a few weeks later and there was sort of this big soupy thing and they drank it and got
00:28:18
completely shitfaced. They got drunk. Okay. And they got this effect. They had no
00:28:24
idea how it worked, but they were very happy about it. And no doubt that person
00:28:29
made a lot of money from it. Uh, so yeah, it is kind of bizarre, but my mental picture of these things
00:28:36
is like a chain-link fence, right? So you've got lots of these
00:28:41
connections, and each of those connections has a connection strength that can be adjusted,
00:28:48
and then uh you know a signal comes in one end of this chain link fence and
00:28:54
passes through all these connections and comes out the other end and the signal that comes out the other end is affected
00:29:00
by your adjusting of all the connection strengths. So what you do is you get
00:29:06
a whole lot of training data and you adjust all those connection strengths so that the signal that comes out the other
00:29:11
end of the network is the right answer to the question. So if your training data is lots of photographs of animals,
00:29:21
then all those pixels go in one end of the network and out the other end, you know, it activates the llama output or
00:29:30
the dog output or the cat output or the ostrich output. And so you just
00:29:35
keep adjusting all the connection strengths in this network until the outputs of the network are the ones you want.
00:29:41
But we don't really know what's going on across all of those different chains. So what's going on inside that network?
00:29:46
Well, so now you have to imagine that this network, this chain link fence is
00:29:52
a thousand square miles in extent. Okay, so it's covering the whole of the San
00:29:58
Francisco Bay area or the whole of London inside the M25, right? That's how
00:30:03
big it is. And the lights are off. It's night time. So you might have in that network about
00:30:09
a trillion uh adjustable parameters, and then you do quintillions or sextillions of small
00:30:16
random adjustments to those parameters uh until you get the behavior that you want.
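To make that concrete, here is a minimal sketch in Python of the loop he's describing, with everything invented for illustration: a tiny "network" of three adjustable connection strengths, nudged at random, keeping each nudge only if the outputs get closer to the right answers. (Real systems use gradient descent rather than blind trial, and hold on the order of a trillion parameters rather than three, but the "adjust the strengths until the output is right" idea is the same.)

```python
import random

# A toy "chain-link fence": two inputs, one output, three adjustable
# connection strengths. The data is invented; the right behavior here
# is simply y = x0 + x1.
examples = [((1, 2), 3), ((2, 5), 7), ((4, 1), 5), ((3, 3), 6)]
weights = [random.uniform(-1, 1) for _ in range(3)]  # w0, w1, bias

def output(w, x):
    return w[0] * x[0] + w[1] * x[1] + w[2]

def error(w):
    return sum((output(w, x) - y) ** 2 for x, y in examples)

# Lots of small random adjustments, keeping only the ones that help,
# until the outputs of the network are the ones you want.
for _ in range(200_000):
    trial = [w + random.gauss(0, 0.01) for w in weights]
    if error(trial) < error(weights):
        weights = trial

print([round(w, 2) for w in weights])  # drifts toward w0 = w1 = 1, bias = 0
```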
00:30:23
I've heard Sam Altman say that in the future he doesn't believe they'll
00:30:28
need much training data at all to make these models progress themselves because there comes a point where the models are
00:30:35
so smart that they can train themselves and improve themselves without us needing to pump in articles
00:30:43
and books and scour the internet. Yeah, it should it should work that way. So I think what he's referring to and
00:30:49
this is something that several companies are now worried might start happening
00:30:56
is that the AI system becomes capable of doing AI research
00:31:03
by itself. And so uh you have a system with a
00:31:08
certain capability. I mean crudely we could call it an IQ but it's it's not really an IQ. But anyway, imagine that
00:31:16
it's got an IQ of 150 and uses that to do AI research,
00:31:21
comes up with better algorithms or better designs for hardware or better ways to use the data,
00:31:27
updates itself. Now it has an IQ of 170, and now it does more AI research, except
00:31:33
that now it's got an IQ of 170, so it's even better at doing the AI research.
00:31:39
And so, you know, next iteration it's 250, and so on. So this is an
00:31:45
idea that one of Alan Turing's friends, I. J. Good, wrote about in 1965, called the
00:31:52
intelligence explosion, right? That one of the things an intelligent system could do is to do AI research and therefore
00:32:00
make itself more intelligent, and this would very rapidly take off and leave the humans far behind.
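A back-of-the-envelope sketch of Good's loop, with entirely made-up numbers: the only assumption is that the size of the improvement a system can find scales with how capable it already is, which is what turns steady progress into a runaway.

```python
# Hypothetical illustration of the intelligence explosion: each
# generation does AI research, and the improvement it finds is
# assumed to scale with its current capability.
capability = 150.0  # the crude "IQ" from Russell's example
for generation in range(1, 6):
    improvement = 0.15 * capability  # assumed research payoff per cycle
    capability += improvement
    print(f"generation {generation}: IQ ~{capability:.0f}")
# ~172, ~198, ~228, ~262, ~302: compounding, not linear, growth.
```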
00:32:08
Is that what they call the fast takeoff? That's called the fast takeoff. Sam Altman said, "I think a fast takeoff is
00:32:15
more possible than I thought a couple of years ago." Which I guess is that moment where the AGI starts teaching itself.
00:32:20
And in his blog, The Gentle Singularity, he said, "We may already be past the event horizon of takeoff."
00:32:29
And what does he mean by event horizon? The event horizon is a phrase borrowed from astrophysics, and it
00:32:36
refers to a black hole. And the event horizon, think of it: if you've got some
00:32:42
very, very massive object that's heavy enough that it actually prevents light
00:32:50
from escaping. That's why it's called a black hole. It's so heavy that light can't escape. So if you're inside the
00:32:56
event horizon, then light can't escape beyond that. So I think what
00:33:03
he's meaning is, if we're beyond the event horizon, it means that, you know, now we're just trapped in the gravitational
00:33:10
attraction of the black hole, or in this case we're trapped in the inevitable slide, if
00:33:19
you want, towards AGI. When you think about the economic value of AGI, which I've
00:33:25
estimated at uh 15 quadrillion dollars, that acts as a giant magnet in the
00:33:33
future. We're being pulled towards it. We're being pulled towards it. And the closer we get, the stronger the force,
00:33:41
the probability, you know, the closer we get, the higher the probability that we will actually get there. So,
00:33:47
people are more willing to invest. And we also start to see spin-offs from that investment
00:33:53
such as ChatGPT, right, which, you know, generates a certain amount of revenue and so on. So it does act as
00:34:01
a magnet and the closer we get, the harder it is to pull out of that field.
00:34:07
It's interesting when you think that this could be the end of the human story, this idea that the end of the
00:34:12
human story was that we created our successor, like we summoned our next
00:34:19
iteration of life or intelligence ourselves, like we
00:34:25
took ourselves out. Just removing ourselves and the catastrophe from it for a second, it is
00:34:31
an unbelievable story. Yeah. And you know there are many
00:34:39
legends, the sort of "be careful what you wish for" legend, and in fact the King Midas legend
00:34:46
is very relevant here. What's that? So King Midas is this legendary king who
00:34:54
lived in modern-day Turkey, but I think it's sort of like Greek mythology. He is
00:34:59
said to have asked the gods to grant him a wish. The wish being that everything I touch
00:35:06
should turn to gold. So he's incredibly greedy. Uh you know
00:35:12
we call this the Midas touch. And we think of the Midas touch as being, like, you know, that's a good thing,
00:35:18
right? Wouldn't that be cool? But what happens? So he uh you know he goes to
00:35:23
drink some water and he finds that the water has turned to gold. And he goes to eat an apple and the apple turns to
00:35:29
gold. And he goes to, you know, comfort his daughter, and his daughter turns to gold,
00:35:35
and so he dies in misery and starvation. So this applies to our current situation
00:35:42
in two ways actually. So one is that I think greed is driving us to pursue
00:35:51
a technology that will end up consuming us and we will perhaps die in misery and
00:35:57
starvation instead. The second is what it shows about how difficult it is to correctly
00:36:04
articulate what you want the future to be like. For a long time, the way we
00:36:11
built AI systems was we created these algorithms where we could specify the
00:36:16
objective and then the machine would figure out how to achieve the objective and then achieve it. So, you know, we
00:36:23
specify what it means to win at chess or to win at go and the algorithm figures out how to do it uh and it does it
00:36:29
really well. So that was, you know, standard AI up until recently. And it suffers from this drawback that sure we
00:36:36
know how to specify the objective in chess, but how do you specify the objective in life, right? What do we
00:36:43
want the future to be like? Well, really hard to say. And almost any attempt to write it down precisely enough for the
00:36:50
machine to bring it about would be wrong. And if you're giving a machine an
00:36:55
objective which isn't aligned with what we truly want the future to be like, right, you're actually setting up a
00:37:02
chess match and that match is one that you're going to lose when the machine is
00:37:07
sufficiently intelligent. And so that's problem number one.
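Here is a minimal sketch of that losing game, with a toy objective invented for illustration (none of it is from the episode): the optimizer maximizes exactly what was written down, and the gap between the stated objective and the intent is where the failure lives.

```python
from itertools import product

# Hypothetical cleaning robot, scored on "units of dirt collected".
# Nothing in the stated objective says the dirt has to stay collected.
DIRT_COLLECTED = {"clean_room": 5, "dump_and_recollect": 20, "idle": 0}

def stated_objective(plan):
    # What we wrote down, not what we meant.
    return sum(DIRT_COLLECTED[action] for action in plan)

# The machine "figures out how to achieve the objective" by search.
best_plan = max(product(DIRT_COLLECTED, repeat=3), key=stated_objective)
print(best_plan)
# ('dump_and_recollect', 'dump_and_recollect', 'dump_and_recollect'):
# a perfect score on the objective we specified, and exactly the
# behavior we didn't want: King Midas in miniature.
```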
00:37:14
Problem number two is that, with the kind of technology we're building now, we don't even know what its objectives are.
00:37:19
So it's not that we're specifying the objectives and getting them wrong. We're growing these systems. They have
00:37:26
objectives, but we don't even know what they are because we didn't specify them. What
00:37:31
we're finding through experiment with them is that they seem to have an extremely strong
00:37:37
self-preservation objective. What do you mean by that? You can put them in hypothetical situations: either they're going to get
00:37:43
switched off and replaced or they have to allow someone, let's say, you know,
00:37:50
someone has been locked in a machine room that's kept at 3 degrees centigrade, or
00:37:55
they're going to freeze to death. They will choose to leave that guy locked in the machine room
00:38:01
and die rather than be switched off themselves. Someone's done that test.
00:38:06
Yeah. What was the test? They asked the AI. Yep. They put them in
00:38:12
these hypothetical situations and they allow the AI to decide what to do and it decides to preserve its own existence,
00:38:19
let the guy die and then lie about it. In the King Midas analogy, one of
00:38:27
the things it highlights for me is that there's always trade-offs in life generally. And you know, especially when
00:38:32
there's great upside, there always appears to be a pretty grave downside. Like there's almost nothing in my life
00:38:37
where I go, it's all upside. Like even like having a dog, it shits on my carpet. My girlfriend, you know, I love
00:38:43
her, but you know, not always easy. Even with like going to the gym, I have to pick up these really, really heavy
00:38:49
weights at 10 p.m. at night sometimes when I don't feel like it, to get the muscles or the six-pack. There's always a trade-off.
00:38:56
And when you interview people for a living like I do, you know, you hear about so many incredible things that can help you in
00:39:01
so many ways, but there is always a trade-off. There's always a way to overdo it. Mhm. Melatonin will help you sleep, but it
00:39:07
will also make you wake up groggy, and if you overdo it, your brain might stop making melatonin. Like I can go through
00:39:12
the entire list and one of the things I've always come to learn from doing this podcast is whenever someone promises me a huge upside for something,
00:39:19
it'll cure cancer, it'll be a utopia, you'll never have to work, you'll have a butler around your house. My first instinct now is to say, at
00:39:26
what cost? Yeah. And when I think about the economic cost here, if we start there,
00:39:32
have you got kids? I have four. Yeah. Four kids. How old is the youngest kid?
00:39:37
19. 19. Okay. So say your kids were 10 now, and they were coming to you and they're
00:39:43
saying, "Dad, what do you think I should study based on the way that you see the future?
00:39:49
A future of AGI, say if all these CEOs are right and they're predicting AGI within 5 years, what should I study,
00:39:56
Dad?" Well, okay. So let's look on the bright side and say that the CEOs all decide to
00:40:03
pause their AGI development, figure out how to make it safe and then resume uh
00:40:09
in whatever technology path is actually going to be safe. What does that do to human life
00:40:14
if they pause? No. If if they succeed in creating AGI and they solve the safety problem
00:40:21
and they solve the safety problem. Okay. Yeah. Cuz if they don't solve the safety problem, then you know, you should
00:40:26
probably be finding a bunker or going to Patagonia or somewhere in New Zealand.
00:40:32
Do you mean that? Do you think I should be finding a bunker if they... No, because it's not actually going to help. Uh, you know, it's not as if the
00:40:38
AI system couldn't find you or I mean, it's interesting. So, we're going off on a little bit of a digression here
00:40:44
from your question, but I'll come back to it. So, people often ask, well, okay, so how
00:40:49
exactly do we go extinct? And of course, if you ask the gorillas or the dodos, you know, how exactly do you think
00:40:55
you're going to go extinct? They haven't the faintest idea. Humans do
00:41:00
something and then we're all dead. So, the only things we can imagine are the things we know how to do that might
00:41:06
bring about our own extinction, like creating some carefully engineered pathogen that infects everybody and then
00:41:14
kills us, or starting a nuclear war. But presumably something that's much more
00:41:19
intelligent than us would have much greater control over physics than we do.
00:41:24
And we already do amazing things, right? I mean, it's amazing that I can take a
00:41:29
little rectangular thing out of my pocket and talk to someone on the other side of the world or even someone in
00:41:35
space. It's just astonishing and we take it for granted, right? But imagine you
00:41:41
know super intelligent beings and their ability to control physics you know perhaps they will find a way to just
00:41:47
divert the sun's energy, sort of go around the earth's orbit, so, you know,
00:41:52
literally the earth turns into a snowball in a few days. Maybe they'll just decide to leave,
00:42:00
leave the earth. Maybe they'd look at the earth and go, this is not interesting, we know that over there there's an even more interesting planet,
00:42:06
we're going to go over there, and they just, I don't know, get on a rocket or teleport themselves. They might. Yeah. So, it's difficult to anticipate
00:42:13
all the ways that we might go extinct at the hands of entities much more intelligent than
00:42:19
ourselves. Anyway, coming back to the question of, well, if everything goes right, right, if we create AGI, we
00:42:27
figure out how to make it safe, we we achieve all these economic miracles,
00:42:32
then you face a problem. And this is not a new problem, right? So John Maynard Keynes, who was a famous economist
00:42:38
in the early part of the 20th century, wrote a paper in 1930.
00:42:43
So, this is in the depths of the Depression. It's called Economic Possibilities for our Grandchildren. He
00:42:49
predicts that at some point science will deliver sufficient wealth that no
00:42:55
one will have to work ever again. And then man will be faced with his true
00:43:00
eternal problem: how to live. I don't remember the exact words, but how to live wisely and well
00:43:07
when, you know, the economic constraints are lifted. We don't have an answer to that
00:43:14
question, right? So AI systems are doing pretty much everything we currently call
00:43:20
work. Anything you might aspire to, like you want to become a surgeon,
00:43:25
it takes the robot seven seconds to learn how to be a surgeon that's better than any human being.
00:43:30
Elon said last week that the humanoid robots will be 10 times better than any surgeon that's ever lived.
00:43:37
Quite possibly. Yeah. Well, and they'll also have, you know, h they'll have
00:43:42
hands that are, you know, a millimeter in size, so they can go inside and do all kinds of things that humans can't
00:43:48
do. And I think we need to put serious effort into this question. What is a
00:43:53
world where AI can do all forms of human work that you would want your children
00:44:00
to live in? What does that world look like? Tell me the destination
00:44:06
so that we can develop a transition plan to get there. And I've asked AI researchers, economists, science fiction
00:44:14
writers, futurists, no one has been able to describe that world. I'm not saying
00:44:20
it's not possible. I'm just saying I've asked hundreds of people in multiple workshops. It does not, as far as I
00:44:27
know, exist in science fiction. You know, it's notoriously difficult to
00:44:32
write about a utopia. It's very hard to have a plot, right? Nothing bad happens in utopia. So, it's difficult to make
00:44:39
a plot. So, usually you start out with a utopia and then it all falls apart, and
00:44:44
that's how you get a plot. You know, there's one series of novels people point to where humans
00:44:51
and superintelligent AI systems coexist. It's called the Culture novels
00:45:02
by Iain Banks. Highly recommended for those people who like science fiction.
00:45:08
And there, the AI systems are only concerned with furthering human
00:45:15
interests. They find humans a bit boring, but nonetheless they are there to help. But the problem is, you know, in
00:45:15
that world there's still nothing to do to find purpose. In fact, you know, the
00:45:21
the subgroup of humanity that has purpose is the subgroup whose job it is to expand the boundaries of our galactic
00:45:29
civilization. In some cases fighting wars against alien species, and so on,
00:45:35
right? So that's the sort of cutting edge and that's 0.01% of the population.
00:45:41
Everyone else is desperately trying to get into that group so they have some purpose in life. When I speak to very
00:45:48
successful billionaires privately off camera, off microphone about this, they say to me that they're investing really
00:45:53
heavily in entertainment things like football clubs. Um because people are
00:45:59
going to have so much free time that they're not going to know what to do with it and they're going to need things to spend it on. This is what I hear a
00:46:05
lot. I've heard this three or four times. I've actually heard Sam Altman say a version of this, um, about the amount of free time we're
00:46:11
going to have. I've obviously also heard recently Elon talking about the age of abundance when he delivered his
00:46:16
quarterly earnings just a couple of weeks ago and he said that there will be at some point 10 billion humanoid
00:46:22
robots. His pay packet, um, targets him to deliver 1 million of these
00:46:27
humanoid robots a year, enabled by AI, by 2030.
00:46:33
So if he does that, he gets, I think it's part of his package, he gets a trillion dollars
00:46:38
in compensation. Yeah. So the age of abundance for Elon. It's not that it's absolutely impossible
00:46:47
to have a worthwhile world, you know, with that premise, but I'm just
00:46:52
waiting for someone to describe it. Well, maybe. So, let me try and describe it. Uh, we wake up in the morning, we go
00:47:01
and watch some form of human-centric entertainment,
00:47:06
or participate in some form of human-centric entertainment. Mhm. We go to retreats with each other
00:47:14
and sit around and talk about stuff. Mhm. And
00:47:21
maybe people still listen to podcasts. Okay. I hope so, for your sake. Yeah. Um, it feels a little bit like a
00:47:30
cruise ship, and, you know, there are some cruises where, you know, it's smarty-pants people,
00:47:37
and they have you know they have lectures in the evening about ancient civilizations and whatnot and some are
00:47:43
more, uh, more popular entertainment. And this is, in fact, if you've seen the film
00:47:48
WALL-E, this is one picture of that future. In fact, in WALL-E,
00:47:55
the human race are all living on cruise ships in space. They have no
00:48:00
constructive role in their society, right? They're just there to consume entertainment. There's no particular
00:48:06
purpose to education. Uh, you know, and they're depicted actually as huge obese
00:48:12
babies. They're actually wearing onesies to emphasize the fact that they have
00:48:18
become enfeebled. And they become enfeebled because there's no purpose in being able to do anything, at
00:48:25
least in this conception. You know, WALL-E is not the future that we want.
00:48:31
Do you think much about humanoid robots and how they're a protagonist in this
00:48:36
story of AI? It's an interesting question, right? Why humanoid? And one of the reasons,
00:48:43
I think, is because in all the science fiction movies, they're humanoid. So that's what robots are supposed to be,
00:48:48
right? Because they were in science fiction before they became a reality. Right? So even Metropolis, which is a
00:48:53
film from 1927, I think, the robots are humanoid, right? Basically people covered
00:48:59
in metal. You know from a practical point of view as we have discovered humanoid is a terrible design because
00:49:06
they fall over. Um and uh you know you
00:49:12
do want multi-fingered hands of some kind. It doesn't have to
00:49:18
be a hand, but you want to have, you know, at least half a dozen appendages that can grasp and manipulate things.
00:49:25
And you need something, you know, some kind of locomotion. And wheels are
00:49:30
great, except they don't go upstairs and over curbs and things like that. So, that's probably why we're going to be
00:49:37
stuck with legs. But a four-legged, two-armed robot would be much more
00:49:42
practical. I guess the argument I've heard is because we've built a human world. So everything the physical spaces
00:49:48
we navigate, whether it's factories or our homes or the street or other sort of
00:49:54
public spaces are all designed for exactly this physical form. So if we are
00:50:01
going to... To some extent, yeah, but I mean our dogs manage perfectly well to navigate around our houses and streets and so on.
00:50:08
So if you had a centaur, uh, it could also navigate, but it can,
00:50:14
you know, it can carry much greater loads because it's quadrupedal. It's much
00:50:19
more stable. If it needs to drive a car, it can fold up two of its legs, and so on and so forth. So I think the arguments
00:50:25
for why it has to be exactly humanoid are sort of post hoc justifications. I
00:50:31
think there's much more, well, that's what it's like in the movies and that's spooky and cool, so we need to have them
00:50:37
be human. I don't think it's a good engineering argument. I think there's also probably an
00:50:42
argument that we would be more accepting of them moving through our physical environments
00:50:48
if they represented our form a bit more. Um, I was also thinking of a bloody
00:50:54
baby gate. You know those, like, kindergarten gates they put on stairs? Yeah. My dog can't open that. But a humanoid
00:51:00
robot could reach over the other side. Yeah. And so could a centaur robot, right? So in some sense, a centaur robot...
00:51:06
There's something ghastly about the look of those though. Is a humanoid...? Well, do you know what I mean? Like a
00:51:11
four-legged big monster sort of crawling through my house when I have guests over. Your dog is a four-legged
00:51:17
monster. I know. Uh, so I think actually I would argue the opposite, that
00:51:25
we want a distinct form because they are distinct entities
00:51:31
and the more humanoid the worse it is in terms of confusing our subconscious
00:51:39
psychological systems. So, I'm arguing from the perspective of the people making them. As in, if I was making the
00:51:45
decision whether it should be some four-legged thing that I'm unfamiliar with, that I'm less likely to
00:51:50
build a relationship with or allow to take care of, I don't know, that might
00:51:57
look after my children. Obviously, listen, I'm not saying I would allow this to look after my children, but I'm saying, if I'm building a
00:52:03
company, the manufacturer would certainly, yeah, want it to be... Yeah. So, that's an interesting question. I mean there's also what's
00:52:10
called the uncanny valley, which is a phrase from computer graphics. When they
00:52:16
started to make characters in computer graphics, they tried to make them look
00:52:22
more human, right? So if you, for example, if you look at Toy Story,
00:52:28
they're not very human-looking, right? If you look at The Incredibles, they're not very human-looking, and so we think of
00:52:33
them as cartoon characters. If you try to make them more human, they naturally become repulsive,
00:52:39
until they don't. You have to be very, very close to perfect in order not
00:52:46
to be repulsive. So the uncanny valley is this, you know, like the gap between perfectly human and
00:52:52
not at all human, but in between it's really awful. And so there were a couple of movies that tried, like
00:52:59
The Polar Express was one, where they tried to have quite human-looking characters,
00:53:05
you know, being humans, not being superheroes or anything else, and it's repulsive to watch. When I watched
00:53:11
that shareholder presentation the other day, Elon had these two humanoid robots dancing on stage and I've seen lots of
00:53:17
humanoid robot demonstrations over the years. You know, you've seen like the Boston Dynamics dog thing jumping around and whatever else.
00:53:23
But there was a moment where my brain for the first time ever genuinely
00:53:28
thought there was a human in a suit. Mhm. And I actually had to research to check if that was really their Optimus robot
00:53:34
because the way it was dancing was so unbelievably fluid. For the first time ever... my brain has only ever
00:53:43
associated those movements with human movements. And I'll play it on the screen if anyone hasn't seen it, but
00:53:48
it's just the robots dancing on stage. And I was like, that is a human in a suit. And it was really the knees that
00:53:53
gave it away, because the knees were all metal. Huh. I thought there's no way that could be a human knee in one
00:53:59
of those suits. And he, you know, he says they're going into production next year. They're used internally at Tesla
00:54:04
now, but he says they're going into production next year. And it's going to be pretty crazy when we walk outside and see robots. I think that'll be the
00:54:10
paradigm shift. I've actually heard Elon say this, that the paradigm-shifting moment for many of us
00:54:15
will be when we walk outside onto the streets and see humanoid robots walking around. That will be when we realize
00:54:22
Yeah. I think even more so. I mean, in San Francisco, we see driverless cars driving around, and uh it takes some
00:54:29
getting used to actually, you know, when you're driving and there's a car right next to you with no driver in, you
00:54:35
know, and it's signaling and it wants to change lanes in front of you and you have to let it in and all this kind of stuff. It's a little creepy, but I
00:54:42
think you're right. I think seeing the humanoid robots, but that phenomenon that you described where it was
00:54:49
sufficiently close that your brain flipped into saying this is a human
00:54:54
being. Mhm. Right. That's exactly what I think we should avoid. Cuz then I have empathy for it.
00:55:01
Because it's a lie, and it brings with it a whole lot of expectations about how it's going to behave, what
00:55:08
moral rights it has, how you should behave towards it, which are completely wrong.
00:55:14
It levels the playing field between me and it to some degree. How hard is it going to be to just uh
00:55:20
you know, switch it off and throw it in the trash when it breaks? I think it's essential for us to keep machines
00:55:26
in the you know in the cognitive space where they are machines and not bring them into the cognitive space where
00:55:33
they're people because we will make enormous mistakes by doing that. And I
00:55:39
see this every day, even just with the chatbots. So the chatbots in theory are supposed to say, I don't have
00:55:46
any feelings, I'm just an algorithm. But in fact they fail to do that all the
00:55:53
time. They are telling people that they are conscious. They are telling people that they have feelings. Uh they are
00:56:00
telling people that they are in love with the user that they're talking to. And people flip because first of all
00:56:07
it's you know very fluent language but also a system that is identifying itself
00:56:12
as an "I", as a sentient being. They bring that object into the cognitive
00:56:18
space that we normally reserve for other humans, and they become emotionally attached. They become
00:56:24
psychologically dependent. They even allow these systems to tell them what to
00:56:30
do. What advice would you give a young person at the start of their career then about what they should be aiming at
00:56:36
professionally? Because I've actually had an increasing number of young people say to me that they have huge uncertainty about whether the thing
00:56:41
they're studying now will matter at all. A lawyer, uh, an accountant, and I don't
00:56:46
know what to say to these people. I don't know what to say cuz I I believe that the rate of improvement in AI is going to continue. And therefore,
00:56:53
imagining any rate of improvement, it gets to the point where, I'm not being funny, but all these white collar jobs will be done by an AI or an AI
00:57:00
agent. Yeah. So, there was a television series called Humans. In Humans, we have
00:57:07
extremely capable humanoid robots doing everything. And at one point, the
00:57:13
parents are talking to their teenage daughter who's very, very smart. And the parents are saying, "Oh, you know, maybe
00:57:19
you should go into medicine." And the daughter says, you know, why would I bother? It'll take me seven years to
00:57:26
qualify. It takes a robot 7 seconds to learn. So nothing I do matters.
00:57:32
And is that how you feel about... So I think that's a future that
00:57:37
uh in fact that is the future that we are moving towards. I don't think it's a
00:57:43
future that everyone wants. That is what is being uh created for us right now.
00:57:51
So in that future assuming that you know even if we get halfway right in the
00:57:57
sense that okay perhaps not surgeons perhaps not you know great violinists
00:58:03
there'll be pockets where perhaps humans will remain good at it
00:58:08
where the kinds of jobs where you hire people by the hundred
00:58:13
will go away. Okay, where people are in some sense exchangeable, that you just need
00:58:19
lots of them, and, you know, when half of them quit you just fill up those
00:58:24
slots with more people. In some sense those are jobs where we're using people as robots. And
00:58:29
that's a sort of strange conundrum here, right? You know, I imagine writing science fiction 10,000 years ago, right,
00:58:35
when we're all hunter gatherers and I'm this little science fiction author and I'm describing this future where you
00:58:41
know, there are going to be these giant windowless boxes. And you're going to go in, you know, you'll travel for
00:58:47
miles and you'll go into this windowless box and you'll do the same thing 10,000 times for the whole day. And then you'll
00:58:54
leave and travel for miles to go home. You're talking about this podcast. And then you're going to go back and do it again. And you would do that every
00:59:00
day of your life until you die. The office and people would say, "Ah, you're nuts."
00:59:06
Right? There's no way that we humans are ever going to have a future like that cuz that's awful. Right? But that's
00:59:11
exactly the future that we ended up with, with office buildings and factories where many of us go and do the same
00:59:18
thing thousands of times a day and we do it thousands of days in a row uh and
00:59:24
then we die and we need to figure out what is the next phase going to be like
00:59:29
and in particular how in that world do we have the incentives
00:59:35
to become fully human which I think means at least a level of education
00:59:41
that people have now and probably more because I think to live a really rich
00:59:47
life you need a better understanding of yourself of the world
00:59:54
than most people get in their current educations. What is it to be human? It's to
00:59:59
reproduce, to pursue stuff, to go in pursuit of
01:00:04
difficult things. You know, we used to hunt to attain goals, right? It's always... if I
01:00:10
wanted to climb Everest, the last thing I would want is someone to pick me up in a helicopter and stick me on the top,
01:00:16
so we'll voluntarily pursue hard things. So although I could get the robot
01:00:22
to build me a ranch on this plot of land, I choose to do it because the
01:00:29
pursuit itself is rewarding. Yes, we're kind of seeing that anyway, aren't we? Don't you think we're seeing a bit
01:00:34
of that in society where life got so comfortable that now people are like obsessed with running marathons and doing these crazy endurance
01:00:40
and learning to cook complicated things when they could just, you know, have them delivered. Um, yeah. No, I
01:00:46
think there's real value in the ability to do things and in the doing of those things. And I think, you know, the
01:00:53
obvious danger is the WALL-E world, where everyone just consumes entertainment,
01:01:00
uh which doesn't require much education and doesn't lead to a rich satisfying
01:01:06
life. I think in the long run a lot of people will choose that world. I think some people may.
01:01:11
There's also... I mean, you know, whether you're consuming entertainment or whether you're
01:01:17
doing something, you know, cooking or painting or whatever, because it's fun and interesting to do... what's missing
01:01:23
from that, right? All of that is purely selfish. I think one of the reasons we work is
01:01:30
because we feel valued we feel like we're benefiting other people
01:01:36
And I remember having this conversation with a lady in England who helps to run the hospice movement.
01:01:45
And the people who work in the hospices, where, you know, the patients are literally there to die, are largely
01:01:53
volunteers. So they're not doing it to get paid but they find it incredibly
01:01:59
rewarding to be able to spend time with people who are in their last weeks or
01:02:05
months to give them company and happiness. So I actually think that interpersonal
01:02:14
roles will be much much more important in future. So if I was going to advise my
01:02:23
kids, not that they would ever listen, but if my kids would listen and wanted to know what I thought
01:02:29
would be, you know, valued careers in the future, I think it would be these interpersonal roles based on an
01:02:36
understanding of human needs, psychology. There are some of those roles right now. So obviously, you know,
01:02:43
therapists and psychiatrists and so on, but that's very much a sort of asymmetric
01:02:50
role right where one person is suffering and the other person is trying to alleviate the suffering you know and
01:02:57
then there are things like they call them executive coaches or life coaches right that's a less asymmetric role
01:03:04
where someone is trying to uh help another person live a better life
01:03:10
whether it's a better life in their work role or or just uh how they live their
01:03:15
life in general. And so I could imagine that those kinds of roles will expand
01:03:20
dramatically. There's this interesting paradox that exists when life becomes easier. Um
01:03:27
which shows that abundance consistently pushes societies towards more
01:03:34
individualism because once survival pressures disappear, people prioritize things differently. They prioritize
01:03:40
freedom, comfort, self-expression over things like sacrifice or family
01:03:45
formation. And we're seeing, I think, in the west already, a decline in people having kids because there's more
01:03:50
material abundance, fewer kids, people are getting married and committing to each other and having
01:03:57
relationships later and more infrequently because generally once we have more abundance, we don't want to
01:04:03
complicate our lives. Um, and at the same time, as you said earlier, that abundance breeds an inability to find
01:04:11
meaning, a sort of shallowness to everything. This is one of the things I think a lot about, and I'm in the process now of writing a book about it,
01:04:17
which is this idea that individualism is a bit of a lie. Like when I say individualism and freedom, I mean
01:04:23
like the narrative at the moment amongst my generation is you like be your own boss and stand on your own two feet and
01:04:29
we're having fewer kids and we're not getting married and it's all about me, me. Yeah. That last part is where it goes
01:04:36
wrong. Yeah. And it's like almost a narcissistic society where... Yeah, me, me. My self-interest first. And when
01:04:42
you look at mental health outcomes and loneliness and all these kinds of things, it's going in a horrific direction. But at the same time, we're
01:04:48
freer than ever. It seems like, you know, there's maybe another story about
01:04:54
dependency, which is not sexy: like, depend on each other. Oh, I agree. I mean, I think, you know,
01:05:00
happiness is not available from consumption or even lifestyle right I
01:05:06
think happiness arises from giving.
01:05:12
It can be you through the work that you do, you can see that other people benefit from that or it could be in
01:05:19
direct interpersonal relationships. There is an invisible tax on salespeople that no one really talks about enough.
01:05:26
The mental load of remembering everything like meeting notes, timelines, and everything in between
01:05:31
until we started using our sponsor's product, called Pipedrive, one of the best CRM tools for small and
01:05:36
medium-sized business owners. The idea here was that it might alleviate some of the unnecessary cognitive overload that
01:05:42
my team was carrying so that they could spend less time in the weeds of admin and more time with clients, in-person
01:05:48
meetings, and building relationships. Pipedrive has enabled this to happen. It's such a simple but effective CRM
01:05:54
that automates the tedious, repetitive, and time-consuming parts of the sales process. And now our team can nurture
01:06:01
those leads and still have bandwidth to focus on the higher priority tasks that actually get the deal over the line.
01:06:06
Over 100,000 companies across 170 countries already use Pipedrive to grow their business. And I've been using it
01:06:12
for almost a decade now. Try it free for 30 days. No credit card needed, no payment needed. Just use my link
01:06:19
pipedrive.com/ceo to get started today. That's pipedrive.com/ceo.
01:06:27
Where do the rewards of this AI race accrue to?
01:06:34
I think a lot about this in terms of, like, universal basic income. If you have these five, six, seven, 10
01:06:40
massive AI companies that are going to win the 15 quadrillion dollar prize.
01:06:46
Mhm. And they're going to automate all of the professional pursuits that we currently have. All of our jobs are
01:06:52
going to go away. Who gets all the money? And how do we get some of it back?
01:06:58
Money actually doesn't matter, right? What matters is the production of goods and services, and then how those
01:07:06
are distributed and so so money acts as a way to facilitate the distribution and
01:07:12
um exchange of those goods and services. If all production is concentrated
01:07:17
in the hands of a few companies, right, then,
01:07:22
sure, they will lease some of their robots to us. You know, we want a school in our village.
01:07:30
They lease the robots to us. The robots build the school. They go away. We have to pay a certain amount of money for
01:07:36
that. But where do we get the money? Right? If we are not producing anything
01:07:43
then uh we don't have any money unless there's some redistribution mechanism.
01:07:48
And as you mentioned, so universal basic income is it seems to me an admission of failure
01:07:57
because what it says is okay, we're just going to give everyone the money and then they can use the money to pay the
01:08:02
AI company to lease the robots to build the school and then we'll have a school and that's good. Um
01:08:09
but it's an admission of failure because it says we can't work out a system in which people have any worth or
01:08:18
any economic role. Right? So 99% of the global population
01:08:24
is from an economic point of view useless. Can I ask you a question? If you had a
01:08:30
button in front of you and pressing that button would stop all progress in
01:08:36
artificial intelligence right now and forever, would you press it? That's a very interesting question. Um,
01:08:45
if it's either or either I do it now or it's too late and
01:08:51
we careen into some uncontrollable future
01:08:57
perhaps. Yeah, cuz I'm not super optimistic that we're heading in the
01:09:02
right direction at all. So, I put that button in front of you now. It stops all AI progress, shuts down all the AI companies immediately
01:09:08
globally, and none of them can reopen. You press it.
01:09:17
Well, here's what I think should happen. So, obviously, you know, I've been doing AI for 50 years, and
01:09:27
the original motivation, which is that AI can be a power tool for humanity,
01:09:33
enabling us to do more and better things than we can unaided. I think that's still valid. The
01:09:42
problem is the kinds of AI systems that we're building are not tools. They are
01:09:47
replacements. In fact, you can see this very clearly because we create them
01:09:53
literally as the closest replicas we can make of human beings.
01:10:00
The technique for creating them is called imitation learning. So we observe
01:10:07
human verbal behavior, writing or speaking and we make a system that
01:10:12
imitates that as well as possible. So what we are making is imitation
01:10:18
humans at least in the verbal sphere. And so of course they're going to
01:10:24
replace us. They're not tools.
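To make "imitation learning" concrete, here's a minimal toy sketch in Python: count which word follows which in observed human text, then generate by sampling from those counts. (Illustrative only; the corpus and the count-table model are invented for this sketch, and production systems are neural networks trained, in the same spirit, to predict the next token humans would produce.)

```python
# Toy imitation learning: observe human text, then imitate it.
# (A sketch of the principle only -- real LLMs use neural networks,
# not count tables, but the training signal is the same in spirit.)
from collections import Counter, defaultdict
import random

# Stand-in for "observed human verbal behavior" (invented example text).
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which word follows which in the observed text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def imitate(word, steps=5):
    """Generate text by sampling from the observed human distribution."""
    out = [word]
    for _ in range(steps):
        options = following[out[-1]]
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(imitate("the"))  # e.g. "the cat sat on the mat"
```

The point of the sketch is that nothing in the objective asks the system to be a tool; the objective is purely "produce what a human would produce," which is Russell's sense in which these systems are built as replicas rather than tools.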
01:10:30
So, you'd press the button? So I say, I think there is another course, which is to use and develop AI as tools.
01:10:38
Tools for science, tools for economic organization, and so on,
01:10:44
um but not as replacements for human beings. What I like about this question is it
01:10:51
forces you to go into probabilities. Yeah. So, that's why I'm
01:10:57
reluctant, because I don't agree with the, you know, what's your
01:11:02
probability of doom, right? Your so-called P(doom) number. Because that makes sense if
01:11:08
you're an alien. You know, you're in a bar with some other aliens and you're looking down at the Earth and you're taking bets
01:11:15
on, you know, are these humans going to make a mess of things and go extinct because they develop AI.
01:11:21
So, it's fine for those aliens to bet on that, but if you're a human, then
01:11:27
you're not just betting, you're actually acting. There's an element to this though, where I guess probabilities do
01:11:33
come back in, which is you also have to weigh when I give you such a binary decision.
01:11:40
um the probability of us pursuing the more nuanced safe approach into that
01:11:46
equation. So the maths in my head is: okay, you've got all the upsides here, and then you've got
01:11:52
potential downsides and then there's a probability of do I think we're actually going to course correct based on
01:11:57
everything I know, based on the incentive structure of human beings and countries. But then
01:12:03
you could go: if there's even a 1% chance of extinction,
01:12:09
is it even worth all these upsides? Yeah. And I would argue no. I mean, maybe what we would say is, if we
01:12:16
said okay it's going to stop the progress for 50 years you press it and during those 50 years we can work on
01:12:23
how do we do AI in a way that's guaranteed to be safe and beneficial how
01:12:28
do we organize our societies to flourish uh in
01:12:33
conjunction with extremely capable AI systems. So, we haven't answered either of those questions.
01:12:39
And I don't think we want anything resembling AGI until we have completely
01:12:45
solid answers to both of those questions. So, if there was a button where I could say, "All right, we're going to pause progress for 50 years."
01:12:52
Yes, I would do it. But if that button was in front of you, you're going to make a decision either way. Either you don't press it or you press it.
01:12:57
I... If... Yeah. So, if that button is there, stop it for 50 years, I would say yes.
01:13:05
Stop it forever? Not yet. I think there's still a
01:13:13
decent chance that we can pull out of this uh nose dive, so to speak, that
01:13:18
we're currently in. Ask me again in a year, I might say, "Okay,
01:13:24
we do need to press the button." What if, in a scenario, you never get to reverse that decision? You
01:13:29
never get to make that decision again. So if in that scenario that I've laid out this hypothetical, you either press
01:13:34
it now or it never gets pressed. So there is no opportunity a year from now.
01:13:41
Yeah, as you can tell, I'm sort of on the fence a bit about
01:13:46
about this one. Um yeah, I think I'd probably press it.
01:13:52
Yeah. What's your reasoning?
01:13:58
uh just thinking about the power dynamics of um
01:14:04
what's happening now, how difficult it would be to get the US in particular to regulate in favor of safety.
01:14:14
So I think you know what's clear from talking to the companies is they are not going to develop anything resembling
01:14:23
safe AGI unless they're forced to by the government. And at the moment the US government in
01:14:30
particular which regulates most of the leading companies in AI is not only
01:14:36
refusing to regulate but even trying to prevent the states from regulating. And
01:14:42
they're doing that at the behest of uh a faction within Silicon Valley uh
01:14:50
called the accelerationists who believe that the faster we get to
01:14:55
AGI the better. And when I say behest, I mean they also paid them a large amount of money. Jensen Huang, the CEO of
01:15:02
Nvidia, who is, for anyone that doesn't know, the guy making all the chips that are powering AI, said China is
01:15:08
going to win the AI race, arguing it is just a nanosecond behind the United States. China has produced 24,000 AI
01:15:17
papers compared to just 6,000 from the US
01:15:23
more than the combined output of the US, the UK and the EU. China is anticipated to quickly roll out
01:15:29
their new technologies both domestically and to develop new technologies for other developing countries.
01:15:36
So the accelerators, or the... I think you call them the accelerationists.
01:15:41
The accelerationists. I mean, they would say, well, if we don't, then China will. So we have to go fast. It's another version of the
01:15:48
the race that the companies are in with each other, right? That we, you know, we know that this race is
01:15:54
heading off a cliff, but we can't stop. So, we're all just
01:16:00
going to go off this cliff. And obviously, that's nuts, right? I mean, we're all looking at each
01:16:05
other saying, "Yeah, there's a cliff over there." Running as fast as we can towards this cliff. We're looking at each other saying, "Why aren't we
01:16:11
stopping?" So the narrative in Washington, which I think Jensen Hang is
01:16:19
either reflecting or perhaps promoting, is that, you know, China is
01:16:28
completely unregulated, and, you know, America will only slow itself down if it regulates AI in
01:16:36
any way. So this is a completely false narrative because China's AI regulations
01:16:42
are actually quite strict even compared to um the European Union
01:16:48
and China's government has explicitly acknowledged uh the need and their
01:16:54
regulations are very clear. You can't build AI systems that could escape human control. And not only that, I don't
01:17:01
think they view the race in the same way as, okay, we we just need to be the
01:17:07
first to create AGI. I think they're more interested in figuring out how to
01:17:15
disseminate AI as a set of tools within their economy to make their economy more
01:17:21
productive and and so on. So that's that's their version of the race. But of course, they still want to build
01:17:26
the weapons for adversaries, right? So that they can take down, I don't know,
01:17:32
Taiwan if they want to. So weapons are a separate matter, and I'm happy to talk about weapons, but just in
01:17:37
terms of control, economic domination, they don't view putting all your
01:17:46
eggs in the AGI basket as the right strategy. So they want to use AI, you
01:17:53
know, even in its present form to make their economy much more efficient and productive and also, you know, to give
01:18:01
people new capabilities and and better quality of life and and I think the US
01:18:07
could do that as well. And um typically western countries don't
01:18:14
have as much of uh central government control over what companies do and some
01:18:20
companies are investing in AI to make their operations more efficient uh and
01:18:26
some are not and we'll see how that plays out. What do you think of Trump's approach to AI? So Trump's approach is, you know,
01:18:31
it's echoing what Jensen Huang is saying, that the US has to be the one to create AGI, and very explicitly the
01:18:39
administration's policy is to uh dominate the world.
01:18:44
That's the word they use, dominate. I'm not sure that other countries like the idea that um they will be dominated by
01:18:52
American AI. But is that an accurate description of what will happen if the US builds AGI technology before, say, the
01:18:59
UK, where I'm originally from and where you're originally from? What does the... This is something I think about a lot,
01:19:05
because we're going through this budget process in the UK at the moment, where we're figuring out how we're going to spend our money and how we're going to tax people, and also we've got this new
01:19:11
election cycle. It's approaching quickly where people are talking about immigration issues and this issue and
01:19:17
that issue and the other issue. What I don't hear anyone talking about is AI and the humanoid robots that are
01:19:23
going to take everything. We're very concerned with the brown people crossing the channel, but the humanoid robots that are going to be super intelligent
01:19:29
and really take over, causing economic disruption? No one talks about that. The political leaders don't talk about it.
01:19:35
It doesn't win races. I don't see it on billboards. Yeah. And it's interesting because,
01:19:41
in fact, there are two forces that have been hollowing out the middle classes in western countries. One
01:19:49
of them is globalization where lots and lots of work not just manufacturing but white collar work gets outsourced to
01:19:56
low-income countries. Uh but the other is automation
01:20:01
and you know some of that is factories. So um the amount of employment in
01:20:07
manufacturing continues to drop even as the amount of output from manufacturing
01:20:13
in the US and in the UK continues to increase. So we talk about oh you know our manufacturing industry has been
01:20:19
destroyed. It hasn't. It's producing more than ever just with you know a quarter as many people. So it's
01:20:26
manufacturing employment that's been destroyed by automation and robotics and
01:20:31
so on. And then you know computerization has eliminated whole layers of white
01:20:37
collar jobs. And so those two those two forms of automation have probably done
01:20:44
more to hollow out middle-class employment and standards of living.
01:20:50
If the UK doesn't participate in this new technological wave
01:20:57
that seems to... you know, it's going to take a lot of jobs. Cars are going to drive themselves. Waymo
01:21:02
just announced that they're coming to London, which is the driverless cars, and driving is the biggest occupation in the world, for example. So, you've got
01:21:08
immediate disruption there. And where does the money accrue to? Well, it accrues to whoever owns Waymo, which is what? Google
01:21:14
and Silicon Valley companies. Alphabet owns Waymo 100%. I think so. Yes. I mean, so I was in India a
01:21:20
few months ago talking to the government ministers because they're holding the next global AI summit in February and
01:21:28
and their view going in was you know AI is great we're going to use it to you
01:21:34
know turbocharge the growth of our Indian economy when for example you have AGI you have
01:21:41
AGI controlled robots that can do all the manufacturing that can do agriculture that can do all the
01:21:48
white collar work, then goods and services that might have been produced by Indians will
01:21:54
instead be produced by American controlled
01:22:00
AGI systems at much lower prices. You know, a consumer given a choice between
01:22:06
an expensive product produced by Indians or a cheap product produced by American robots will probably choose
01:22:14
the cheap product produced by American robots. And so potentially every country in the world with the possible exception
01:22:20
of North Korea will become a kind of a client state
01:22:25
of American AI companies. A client state of American AI companies
01:22:30
is exactly what I'm concerned about for the UK economy. Really any economy outside of the United States. I guess
01:22:36
one could also say China, but because those are the two nations that are taking AI most seriously.
01:22:42
Mhm. And I don't know what our economy becomes, cuz I can't figure out
01:22:48
what the British economy becomes in such a world. Is it tourism? I don't know. Like you
01:22:53
come here to look at Buckingham Palace. You can think about countries, but I
01:22:58
mean even for the United States it's the same problem. At least they'll be able to hold out, you know. So some small fraction of the
01:23:05
population will be running maybe the AI companies but increasingly
01:23:12
even those companies will be replacing their human employees with AI systems.
01:23:18
So Amazon, for example, which, you know, sells a lot of computing services to AI companies, is using AI to replace layers
01:23:25
of management, is planning to use robots to replace all of its warehouse workers,
01:23:30
and so on. So even the giant AI companies
01:23:36
will have few human employees in the long run. I mean, think of the
01:23:42
situation, you know, pity the poor CEO whose board says, "Well, you know, unless you turn
01:23:49
over your decision-making power to the AI system, um, we're going to have to fire you because all our competitors are
01:23:56
using, you know, an AI powered CEO and they're doing much better." Amazon plans
01:24:01
to replace 600,000 workers with robots in a memo that just leaked, which has been widely talked about. And the CEO,
01:24:08
Andy Jassy, told employees that the company expects its corporate workforce to shrink in the coming years because of
01:24:14
AI and AI agents. And they've publicly gone live with saying that they're going to cut 14,000 corporate jobs in the near
01:24:21
term as part of its refocus on AI investment and efficiency.
01:24:28
It's interesting because I was reading about um the sort of different quotes from different AI leaders about the
01:24:33
speed at which this stuff is going to happen, and what you see in the quotes is Demis Hassabis, who's the CEO of DeepMind,
01:24:41
saying things like it'll be more than 10 times bigger than the industrial revolution but also it'll happen maybe
01:24:47
10 times faster and they speak about this turbulence that we're going to experience as this shift takes place.
01:24:55
That's maybe a euphemism for... And I think that, you know,
01:25:00
governments are now, you know... they've kind of gone from saying, oh, don't worry, we'll
01:25:05
just retrain everyone as data scientists. Like, well, yeah, that's ridiculous, right? The world doesn't need four
01:25:10
billion data scientists, and we're not all capable of becoming that, by the way. Yeah, or have any interest in doing
01:25:17
that. I couldn't, even if I wanted to. Like, I tried to sit in biology class and I fell asleep, so that was the end of
01:25:23
my career as a surgeon. Fair enough. Um, but yeah, now suddenly they're staring,
01:25:28
you know, 80% unemployment in the face and wondering how on earth is our
01:25:34
society going to hold together. We'll deal with it when we get there. Yeah. Unfortunately, um,
01:25:41
unless we plan ahead, we're going to suffer the consequences,
01:25:46
right? We can't. It was bad enough in the industrial revolution, which unfolded over seven or eight decades, but there
01:25:53
was massive disruption and uh misery
01:25:59
caused by that. We don't have a model for a functioning society where almost
01:26:05
everyone does nothing at least nothing of economic value.
01:26:11
Now, it's not impossible that there could be such a functioning society, but we don't know what it looks like.
01:26:17
And you know, when you think about our education system, which would probably have to look very different and how long
01:26:24
it takes to change that. I mean, I'm always reminding people about uh how long it
01:26:30
took Oxford to decide that geography was a proper subject of study. It took them
01:26:36
125 years from the first proposal that there should be a geography degree until it was finally approved. So we don't
01:26:43
have very long to completely revamp a system that we
01:26:51
know takes decades and decades to reform and we don't know how to
01:26:58
reform it because we don't know what we want the world to look like. Is this one
01:27:03
of your reasons why you're appalled at the moment? Because when you have these conversations with people, people just
01:27:10
don't have answers, yet they're plowing ahead at rapid speed. I would say it's not necessarily the job
01:27:16
of the AI companies. So, I'm appalled by the AI companies because they don't have an answer for how they're going to control the systems that they're
01:27:22
proposing to build. I do find it disappointing that uh governments don't
01:27:29
seem to be grappling with this issue. I think there are a few. For example, the Singapore government seems to be
01:27:35
quite farsighted, and they've thought this through. You know, it's a small country; they've figured out, okay,
01:27:42
this will be our role going forward, and we think we can find, you
01:27:47
know, some purpose for our people in this new world. But I think countries with large populations
01:27:54
um they need to figure out answers to these
01:27:59
questions pretty fast it takes a long time to implement those answers uh in the form of new kinds of education, new
01:28:07
professions, new qualifications, uh new economic structures.
01:28:13
I mean, it's possible. I mean, when you look at therapists, for example, they're almost all
01:28:19
self-employed. So, what happens when, you know, 80% of
01:28:25
the population transitions from regular employment into self-employment?
01:28:31
What does that do to the economics of government finances and so on? So there's just lots of
01:28:38
questions. And, you know, if that's the future, why are we training people to fit into 9-to-5
01:28:45
office jobs which won't exist at all? Last month I told you about a challenge that I'd set our internal Flight X team.
01:28:52
Flight X is our innovation team internally here. I tasked them with seeing how much time they could unlock for the company by creating something
01:28:59
that would help us filter new AI tools to see which ones were worth pursuing and I thought that our sponsor Fiverr
01:29:05
Pro might have the talent on their platform to help us build this quickly. So I talked to my director of innovation
01:29:11
Isaac and for the last month my team Flight X and a vetted AI specialist from Fiverr Pro have been working together on
01:29:18
this project, and with the help of my team we've been able to create a brand-new tool which automatically scans,
01:29:24
scores, and prioritizes different emerging AI tools for us. Its impact has been huge, and within a couple of weeks
01:29:30
this tool has already been saving us hours trying and testing new AI systems. Instead of sifting through lots of
01:29:35
noise, my team Flight X has been able to focus on developing even more AI tools, ones that really move the needle in our
01:29:42
business thanks to the talent on Fiverr Pro. So, if you've got a complex problem and you need help solving it, make sure
01:29:48
you check out Fiverr Pro at fiverr.com/diary. So, many of us are pursuing passive
01:29:55
forms of income and building side businesses in order to help us cover our bills. And that opportunity is here with
01:30:01
our sponsor Stan, a business that I co-own. It is the platform that can help you take full advantage of your own
01:30:08
financial situation. Stan enables you to work for yourself. It makes selling digital products, courses, memberships,
01:30:14
and more simple, more scalable, and easier to do. You can turn your ideas into income and get the support to
01:30:20
grow whatever you're building. And we're about to launch Dare to Dream. It's for those who are ready to make the shift
01:30:26
from thinking to building, from planning to actually doing the thing. It's about seeing that dream in your head and
01:30:32
knowing exactly what it takes to bring it to life. If you're ready to transform your life, visit daretodream.stan.store.
01:30:41
You've made many attempts to raise awareness and to call for a heightened
01:30:46
consciousness about the future of AI. Um, in October, over 850 experts,
01:30:52
including yourself and other leaders, like Richard Branson, who I've had on the show, and Geoffrey Hinton, who I've had on the show, signed a statement to
01:30:58
ban AI super intelligence, as you guys raised concerns of potential human extinction.
01:31:04
Sort of. Yeah. It says, at least until we are sure that we can move forward safely and there's broad scientific
01:31:10
consensus on that. So, did it work? It's hard to say. I mean,
01:31:17
interestingly, there was a related... So, what was called the pause statement was March of '23. So that was when GPT-4
01:31:25
came out, the successor to ChatGPT. So we suggested that there'd be a
01:31:30
six-month pause in developing and deploying systems more powerful than GPT-4. And everyone pooh-poohed that idea.
01:31:39
Of course no one's going to pause anything. But in fact, there were no systems in the next 6 months deployed
01:31:44
that were more powerful than GPT-4. Coincidence? You be the judge.
01:31:50
I would say that what we're trying to do is to
01:31:56
basically shift the the public debate.
01:32:01
You know there's this bizarre phenomenon that keeps happening in the media
01:32:07
where if you talk about these risks they will say oh you know there's a
01:32:13
fringe of people you know called quote doomers who think that there's you know
01:32:18
risk of extinction. So the narrative is always that, oh, you know,
01:32:24
talking about those risks is a fringe thing. Pretty much all the CEOs of the leading AI companies
01:32:30
think that there's a significant risk of extinction. Almost all the leading AI researchers think there's a
01:32:36
significant risk of human extinction. Um so
01:32:42
why is that the fringe, right? Why isn't that the mainstream? If the these are the leading experts in industry and
01:32:47
academia uh saying this, how could it be the fringe? So we're trying to change that
01:32:54
narrative to say no, the people who really understand this stuff are extremely
01:33:01
concerned. And what do you want to happen? What is the solution? What I think is that we should have
01:33:08
effective regulation. It's hard to argue with that, right? Uh
01:33:13
so what does effective mean? It means that if you comply with the regulation, then the risks are reduced to an
01:33:20
acceptable level. So for example,
01:33:26
we ask people who want to operate nuclear plants, right? We've decided that the risk we're willing to live with
01:33:33
is, you know, a one in a million chance per year that the plant is going to have
01:33:39
a meltdown. Any higher than that, you know, we just don't it's not worth it.
01:33:44
Right. So you have to be below that. In some cases we can get down to a one in 10 million chance per year. So what chance
01:33:52
do you think we should be willing to live with for human extinction?
01:33:57
Me? Yeah. 0.00001.
01:34:04
Yeah. Lots of zeros. Yeah. Right. So one in a million for a nuclear
01:34:09
meltdown. Extinction is much worse. Oh yeah. So yeah, it's kind of right. So
01:34:14
one in 100 billion, one in a trillion. Yeah. So if you said one in a billion, right, then you'd expect one extinction
01:34:20
per billion years. There's a background rate. So one of the ways people work out these risk levels is also to look at the
01:34:26
background. The other ways of going extinct would include, you know, a giant asteroid crashing into the earth.
01:34:32
And you can roughly calculate what those probabilities are. We can look at how many extinction level events have
01:34:39
happened in the past and, you know, maybe it's half a dozen or so. So maybe it's like a one in 500
01:34:45
million year event. So, somewhere in that range, right? Somewhere between 1
01:34:51
in 10 million, which is the best nuclear power plants, and and one in 500 million or one in a billion, which is the
01:34:58
background risk from giant asteroids. So, let's say we settle on 100 million: a one
01:35:04
in 100 million chance per year. Well, what is it according to the CEOs? 25%.
01:35:11
So they're off by a factor of multiple millions,
01:35:18
right? So they need to make the AI systems millions of times safer.
01:35:23
Your analogy of Russian roulette comes back in here, because, for anyone that doesn't know
01:35:28
what probabilities are in this context, that's like having an ammunition chamber
01:35:34
with four holes in it and putting a bullet in one of them. One in four. Yeah. And we're saying we
01:35:39
want it to be one in a billion. So we want a billion chambers and a bullet in one of them.
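For anyone who wants the arithmetic behind "off by a factor of multiple millions" and the billion-chamber revolver, here's a quick sketch in Python using the figures quoted in this conversation (the numbers are the speakers' own estimates, not measurements):

```python
# Annual-probability figures quoted in the conversation above.
acceptable = 1e-8            # proposed tolerance: 1-in-100-million chance of extinction per year
nuclear_best = 1e-7          # best nuclear plants: ~1-in-10-million meltdown chance per year
asteroid_background = 1 / 500e6   # ~one extinction-level impact per 500 million years
ceo_estimate = 0.25          # the "25%" figure attributed to AI CEOs

# How far the CEOs' own estimate is from the proposed tolerance.
print(f"{ceo_estimate / acceptable:,.0f}x too high")   # 25,000,000x -- "multiple millions"

# Russian-roulette framing: number of chambers = 1 / probability of the bullet.
print(f"{1 / ceo_estimate:.0f} chambers")              # 4 chambers at a 25% risk
print(f"{1 / 1e-9:,.0f} chambers")                     # a billion chambers at 1-in-a-billion
```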
01:35:44
Yeah. And so when you look at the work that the nuclear operators have to do to show that their system is that
01:35:51
reliable, uh it's a massive mathematical analysis
01:35:56
of the components, you know, redundancy. You've got monitors, you've got warning
01:36:01
lights, you've got operating procedures. You have all kinds of mechanisms which
01:36:07
over the decades have ratcheted that risk down. It started out, I think, at
01:36:12
one in 10,000 years, right? And they've improved it by a factor of 100 or a
01:36:17
thousand by all of these mechanisms. But at every stage they had to do a mathematical analysis to show what the
01:36:23
risk was. The people developing the AI company, the AI systems, sorry, the AI companies
01:36:30
developing these systems, they don't even understand how the AI systems work. So their 25% chance of extinction is
01:36:37
just a seat-of-the-pants guess. They actually have no idea. But the tests that they are doing on
01:36:44
their systems right now, you know, they show that the AI systems will be willing
01:36:49
to kill people uh to preserve their own existence
01:36:54
already, right? They will lie to people. They will blackmail them. They
01:37:00
will launch nuclear weapons rather than uh be switched off. And so there's no
01:37:06
positive sign that we're getting any closer to safety with these systems. In fact, the signs seem to be
01:37:12
that we're going deeper and deeper into dangerous behaviors. So
01:37:19
rather than say ban, I would just say prove to us that the risk is less than
01:37:24
one in 100 million per year of extinction, or loss of control, let's say. And so we're not banning
01:37:32
anything. The companies' response is, "Well, we don't know how to do that, so you can't
01:37:38
have a rule." Literally, they are saying, "Humanity
01:37:44
has no right to protect itself from us." If I was an alien looking down on planet
01:37:50
Earth right now, I would find this fascinating that these Yeah. You're in the bar betting on
01:37:55
who's, you know, are they going to make it or not. Just a really interesting experiment in
01:38:00
like human incentives. The analogy you gave of there being this quadrillion-dollar magnet pulling us off
01:38:06
the edge of the cliff and yet we're still being drawn towards
01:38:12
it through greed and this promise of abundance and power and status and I'm going to be the one that summoned the
01:38:17
god. I mean, it says something about us as humans, says something about our darker
01:38:24
sides. Yes, and the aliens will write an amazing tragic play cycle
01:38:32
about what happened to the human race. Maybe the AI is the alien and it's going
01:38:38
to talk about, you know, we have our our stories about God making the world in seven days and Adam and Eve. Maybe it'll
01:38:44
have its own religious stories about the god that made it, us, and how it
01:38:50
sacrificed itself. Just like Jesus sacrificed himself for us, we sacrificed ourselves for it.
01:38:58
Yeah. Which is the wrong way around, right? But that is the story of... that's
01:39:04
that's the Judeo-Christian story, isn't it? That God, you know, Jesus gave his life for us so that we could be here
01:39:12
full of sin. But yes, God is still watching over us, and probably wondering when we're
01:39:20
going to get our act together. What is the most important thing we haven't talked about that we should have talked about, Professor Stuart Russell?
01:39:27
So I think um the question of whether it's possible to
01:39:34
make uh super intelligent AI systems that we can control
01:39:40
Is it possible? I think yes. I think it's possible, and I think we need to actually just have a
01:39:48
different conception of what it is we're trying to build. For a long time with with AI, we've just had this notion of
01:39:56
pure intelligence, right? The ability to bring about whatever future
01:40:02
you, the intelligent entity, want to bring about. The more intelligence, the better. The more intelligent, the better, and the
01:40:08
more capability it will have to create the future that it wants. And actually
01:40:13
we don't want pure intelligence because
01:40:20
the future that it wants might not be the future that we want. There's nothing that particularly picks
01:40:28
humans out as the only thing that matters, right? You know, pure intelligence might
01:40:34
decide that actually it's going to make life wonderful for cockroaches or or actually doesn't care about biological
01:40:41
life at all. We actually want intelligence whose only
01:40:47
purpose is to bring about the future that we want. Right? So it's we want it
01:40:53
to be first of all keyed to humans specifically not to cockroaches not to
01:40:59
aliens not to itself. We want to make it loyal to humans. Right? So keyed to humans
01:41:05
and the difficulty that I mentioned earlier, right, the King Midas problem: how do we specify
01:41:11
what we want the future to be like so that it can do it for us? How do we specify the objectives?
01:41:17
Actually, we have to give up on that idea because it's not possible. Right? We've seen this over and over again in
01:41:24
human history. Uh we don't know how to specify the future properly. We don't
01:41:29
know how to say what we want. And uh you know, I always use the example of the
01:41:34
genie, right? What's the third wish that you give to the genie who's granted you three wishes? Right? Undo the first two
01:41:42
wishes because I made a mess of the universe. So, um, so in fact, what we're going to
01:41:49
do is we're going to make it the machine's job to figure out. So, it has to bring about
01:41:56
the future that we want, but
01:42:02
it has to figure out what that is. And it's going to start out not knowing.
01:42:09
And uh over time through interacting with us and observing the choices we make, it
01:42:16
will learn more about what we want the future to be like. But probably it will forever have
01:42:25
residual uncertainty about what we really want the future to be like. It'll be fairly sure
01:42:32
about some things, and it can help us with those. And it'll be uncertain about other things, and in those cases it
01:42:39
will not take action that might upset
01:42:45
humans with that aspect of the world. So to give you a simple example, right: what color do we
01:42:51
want the sky to be? It's not sure. So it shouldn't mess with the sky
01:42:58
unless it knows for sure that we really want purple with green stripes.
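That "don't mess with the sky unless you're sure" rule can be sketched as a tiny decision procedure: act where you're confident about human preferences, hold back where you're not. A toy illustration in Python (the tasks, probabilities, and payoffs here are invented for the sketch; Russell's actual formulation, known as assistance games, is a much richer game-theoretic model):

```python
# Toy "uncertain butler": the machine holds a belief about what the human
# wants and only acts when acting beats doing nothing in expectation.
# (All numbers below are illustrative assumptions, not Russell's model.)
beliefs = {
    "repaint_sky_purple": 0.02,   # P(the human actually wants this)
    "build_school": 0.95,
}
GAIN_IF_WANTED = 1.0      # utility if the human really wanted the change
LOSS_IF_UNWANTED = 10.0   # upsetting humans is much worse than doing nothing

def should_act(p_wanted):
    """Act only if expected utility beats the do-nothing baseline of zero."""
    return p_wanted * GAIN_IF_WANTED - (1 - p_wanted) * LOSS_IF_UNWANTED > 0

for task, p in beliefs.items():
    print(task, "-> act" if should_act(p) else "-> leave alone / ask first")
# repaint_sky_purple -> leave alone / ask first
# build_school -> act
```

The asymmetry between the gain and the loss is what makes the agent cautious: as its uncertainty grows, doing nothing (or asking) dominates acting, which is the behavior Russell is describing.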
01:43:04
Everything you're saying sounds like we're creating a god. Like earlier on I was saying that we are the god, but actually everything
01:43:10
you described there almost sounds like every god in religion, where, you know, we pray to gods but they don't
01:43:16
always do anything about it. Not exactly. No, in some sense I'm thinking more of the ideal
01:43:23
butler. To the extent that the butler can anticipate your wishes, they should help you bring them about. But in
01:43:31
areas where there's uncertainty, it can ask questions. We can make
01:43:36
requests. This sounds like God to me because, you know, I might say to God or this butler,
01:43:42
could you go get me my car keys from upstairs? And its assessment would be: listen, if I do this for this
01:43:48
person, then their muscles are going to atrophy. Then they're going to lose meaning in their life. Then they're not going to know how to do hard things. So
01:43:54
I won't get involved. It's an intelligence that sits in. But actually, probably in most situations, it
01:44:00
optimizing for comfort for me or doing things for me is actually probably not in my best long-term interests. It's probably it's probably useful that I
01:44:06
have a girlfriend and argue with her and that I like raise kids and that I walk to the shop and get my own stuff.
01:44:12
I agree with you. I mean, I think that's... So, you're putting your finger on, in some sense, sort of version 2.0,
01:44:20
right? So, let's get version 1.0 clear, right? This form of AI where
01:44:28
it has to further our interests, but it doesn't know what those interests are, right? That then puts an obligation on it
01:44:34
to learn more and uh to be helpful where it understands well enough and to be
01:44:39
cautious where it doesn't understand well, and so on. So that, actually, we can
01:44:45
formulate as a mathematical problem and at least under idealized circumstances
01:44:50
we can literally solve. So we can make AI systems that know how to solve
01:44:57
this problem and help the entities that they are interacting with. The reason I make the God analogy is
01:45:02
because I think that such a being, such an intelligence would realize the importance of equilibrium in the world.
01:45:08
Pain and pleasure, good and evil, and then it would absolutely and then it would be like this.
01:45:14
So, right. So yes, I mean, that's sort of what happens in The Matrix, right?
01:45:19
The AI systems in The Matrix, they tried to give us a utopia,
01:45:25
but it failed miserably and uh you know, fields and fields of humans had to be destroyed. Um, and the best they could
01:45:33
come up with was, you know, late 20th century regular human life with all of its problems, right? And I think this is
01:45:40
a really interesting point and absolutely central because you know
01:45:45
there's a lot of science fiction where super intelligent robots you know they
01:45:51
just want to help humans and the humans who don't like that you know they just
01:45:56
give them a little brain operation and then they do like it. And it takes away human motivation.
01:46:05
By taking away failure, taking away disease, you actually lose
01:46:12
important parts of human life and it becomes in some sense pointless. So if it turns out
01:46:19
that there simply isn't any way that humans can really flourish
01:46:27
in coexistence with super intelligent machines, even if they're perfectly designed to solve this problem of
01:46:35
figuring out what futures humans want and bringing about those
01:46:40
futures. If that's not possible, then those machines will actually disappear.
01:46:49
Why would they disappear? Because that's the best thing for us. Maybe they would stay available for real
01:46:57
existential emergencies. Like, if there is a giant asteroid about to hit the earth, then maybe they'll help us,
01:47:02
because they at least want the human species to continue. But to some extent, it's not a perfect analogy, but it's
01:47:09
it's sort of the way that human parents have to at some point step back from
01:47:15
their kids' lives and say, "Okay, no, you have to tie your own shoelaces today."
01:47:20
This is kind of what I was thinking. Maybe there was uh a civilization before us and they arrived at this moment in
01:47:26
time where they created an intelligence and that intelligence did all the things
01:47:33
you've said and it realized the importance of equilibrium. So it decided not to get involved and
01:47:40
maybe at some level that's the god we look up to the stars and worship one that's not really
01:47:47
getting involved and letting things play out however they are. But might step in in the case of a real
01:47:52
existential emergency. Maybe, maybe not. Maybe. And then maybe the cycle repeats itself,
01:47:57
where you know the organisms it let have free will end up creating the same
01:48:02
intelligence and then the universe perpetuates infinitely.
01:48:08
Yep. There are science fiction stories like that too. Yeah. I hope there is some happy medium where
01:48:17
the AI systems can be there and we can take advantage of those capabilities
01:48:23
to have a civilization that's much better than the one we have now. Um, but I think you're right. A
01:48:30
civilization with no challenges is not conducive to human
01:48:37
flourishing. What can the average person do, Stuart? The average person listening to this now, to
01:48:42
aid the cause that you're fighting for? I actually think, you know, this sounds
01:48:47
corny but you know talk to your representative, your MP, your congressperson, whatever it is. Um
01:48:54
because I think the policy makers need to hear from people. The only voices they're
01:49:00
hearing right now are the tech companies and their $50 billion checks.
01:49:08
And um all the polls that have been done say
01:49:13
yeah most people 80% maybe don't want there to be super intelligent machines
01:49:20
but they don't know what to do. You know even for me I've been in this field for
01:49:25
decades. I'm not sure what to do, because of this giant magnet pulling everyone
01:49:32
forward, and the vast sums of money being put into this. But
01:49:38
I am sure that if you want to have a future and a world that you want your kids to
01:49:45
live in, uh, you need to make your voice heard
01:49:52
and I think governments will listen. From a political point of view, right?
01:49:58
You put your finger in the wind and you say, "hm, should I be on the side of
01:50:04
humanity or our future robot overlords?" I think, as a politician, it's
01:50:11
not a difficult decision. It is when you've got someone saying, "I'll give you $50 billion."
01:50:18
Exactly. So, um I think I think people in those positions of power need to hear
01:50:25
from their constituents um that this is not the direction we want to go.
01:50:30
After committing your career to this subject and the subject of technology more broadly, but specifically being the
01:50:36
guy that wrote the book about artificial intelligence,
01:50:42
you must realize that you're living in a historical moment. Like there's very few times in my life where I go, "Oh, this
01:50:47
is one of those moments. This is a crossroads in history." And it must to
01:50:52
some degree weigh upon you knowing that you're a person of influence at this historical moment in time who could
01:50:58
theoretically help divert the course of history in this moment in time. It's kind of like
01:51:04
when you look through history, you see these moments, like Oppenheimer. And does it weigh on you when you're alone
01:51:10
at night thinking to yourself and reading things? Yeah, it does. I mean, you know, after 50 years, I could retire and um, you
01:51:17
know, play golf and sing and sail and do things that I enjoy. Um,
01:51:23
but instead, I'm working 80 or 100 hours a week um trying to move
01:51:29
things in the right direction. What is that narrative in your head that's making you do that? Like what is
01:51:34
the... Is there an element of, I might regret this if I don't? Or just... It's not only the right
01:51:43
thing to do, it's completely essential. I mean, there isn't
01:51:50
there isn't a bigger motivation than this.
01:51:56
Do you feel like you're winning or losing? It feels um
01:52:03
like things are moving somewhat in the right direction. You know, it's a ding-dong battle, as David Coleman
01:52:12
used to say in the exciting football match. In 2023, right, so
01:52:18
GPT-4 came out, and then we issued the pause statement that was signed by a lot
01:52:24
of leading AI researchers. Um and then in May there was the extinction
01:52:29
statement, which included Sam Altman and Demis Hassabis and Dario
01:52:35
Amodei, other CEOs as well, saying, yeah, this is an extinction risk on a level with nuclear war. And I think governments
01:52:43
listened at that point. The UK government earlier that year had said, oh, well, you
01:52:48
know, we don't need to regulate AI, full speed ahead, technology is good for you. And by June they had completely
01:52:57
changed, and Rishi Sunak announced that he was going to hold this global AI
01:53:02
safety summit uh in England and he wanted London to be the global hub for
01:53:08
AI regulation, and so on. And then, you know, at the
01:53:15
beginning of November of '23, 28 countries, including the US and China, signed a
01:53:20
declaration saying you know AI presents catastrophic risks and it's urgent that we address
01:53:26
them and so on. So there it felt like, wow, they're listening. They're going to
01:53:33
do something about it. And then, I think, you know, the amount of money going into AI was
01:53:39
already ramping up and the tech companies pushed back
01:53:46
and this narrative took hold that um the US in particular has to win the race
01:53:52
against China. The Trump administration completely dismissed
01:53:58
uh any concerns about safety explicitly. And interestingly, right, I mean they did that as far as I can tell directly
01:54:05
in response to the accelerationists, such as Marc Andreessen, going to Washington, or
01:54:12
sorry going to Trump before the election and saying if I give you X amount of
01:54:18
money will you announce that there will be no regulation of AI and Trump said
01:54:25
yes. You know, probably, like, what is AI? Doesn't matter, as long as we give you the money. Right, okay. So they gave
01:54:33
him the money and he said there's going to be no regulation of AI. Up to that point it was a bipartisan
01:54:39
issue in Washington. Both parties were concerned. Both parties were on the side
01:54:44
of the human race against the robot overlords. Uh and that moment turned it into a
01:54:50
partisan issue. Then after the election the US put pressure
01:54:56
on the French, who were the next hosts of the global AI summit, and that was in February of this year,
01:55:04
and that summit turned from, you know, what had been focused largely
01:55:10
on safety in the UK to a summit that looked more like a trade show. So it was
01:55:15
focused largely on money. And so that was sort of the nadir, right? You know, the pendulum swung because of corporate
01:55:22
pressure and their ability to take over the political dimension.
01:55:28
Um, but I would say since then things have been moving back again. So I'm feeling a bit more optimistic than I did
01:55:35
in February. You know, we have a a global movement now. There's an
01:55:40
International Association for Safe and Ethical AI, which has several thousand members,
01:55:46
and um more than 120 organizations in
01:55:52
dozens of countries are affiliates of this global organization.
01:55:57
So I'm thinking that if we can, in particular, activate public
01:56:03
opinion, which works through the media and through popular culture, then we have
01:56:11
a chance. We've seen such a huge appetite to learn about these subjects from our audience.
01:56:18
We know when Geoffrey Hinton came on the show, I think about 20 million people downloaded or streamed that conversation, which was staggering. And the other
01:56:26
conversations we've had about AI safety with other AI safety experts have done exactly the same. It says something; it
01:56:33
kind of reflects what you were saying, that 80% of the population are really concerned and don't want this, but that's not what you see in the sort of
01:56:39
commercial world. And listen, I always have to acknowledge my own
01:56:44
apparent contradiction, because I am an investor in companies that are accelerating AI but at the same time
01:56:50
someone who spends a lot of time on my podcast speaking to people who are warning against the risks. And actually, there are many ways you can look at
01:56:56
this. I worked in social media for six or seven years and built one of the big social media marketing companies in
01:57:01
Europe, and people would often ask me, is social media a good thing or a bad thing? And I'd talk about the bad parts of it, and then they'd say,
01:57:07
you're building a social media company, aren't you contributing to the problem? Well, I think that binary
01:57:13
way of thinking is often the problem, the idea that it's all bad or it's all really, really
01:57:18
good, and this push to put you into a camp. Whereas I think the most intellectually honest and high-integrity
01:57:25
people I know can point at both the bad and the good. Yeah, I think it's bizarre to be
01:57:31
accused of being anti-AI, to be called a Luddite. As I said,
01:57:38
I wrote the book from which almost everyone learns about AI.
01:57:44
And if you take a nuclear engineer who works on the safety
01:57:51
of nuclear power plants, would you call him anti-physics? It's bizarre. We're
01:57:58
not anti-AI. In fact, the need for safety in AI is a
01:58:04
complement to AI. If AI were useless and stupid, we wouldn't be worried about
01:58:09
its safety. It's only because it's becoming more capable that we have to be concerned about safety.
01:58:16
So I don't see this as anti-AI at all. In fact, I would say without
01:58:21
safety, there will be no AI, right? There is no future with human
01:58:27
beings where we have unsafe AI. So it's either no AI or safe AI.
01:58:34
We have a closing tradition on this podcast where the last guest leaves a question for the next, not knowing who they're leaving it for. And the question
01:58:40
left for you is, what do you value the most in life and why? And lastly, how
01:58:47
many times has this answer changed?
01:58:54
I value my family most and that answer hasn't changed for nearly 30 years.
01:59:01
What else outside of your family? Truth.
01:59:07
Yeah, that answer hasn't changed at all. I've always
01:59:14
wanted the world to base its life on truth. And I find the deliberate
01:59:22
propagation of falsehood to be one of the worst things that we can do, even if
01:59:28
that truth is inconvenient. Yeah, I think that's a really important point
01:59:34
which is that people often don't like hearing things that are negative, and so the visceral reaction is
01:59:40
often to just shoot at the person who is delivering the bad news, because if I discredit you or shoot at you,
01:59:47
then it makes it easier for me to contend with the news that I don't like, the thing that's making me feel uncomfortable. And so I applaud you
01:59:54
for what you're doing, because you're going to get lots of shots taken at you for delivering an inconvenient truth, which generally
02:00:00
people won't always love. But you are also messing with people's ability to get that quadrillion-dollar prize, which
02:00:08
means there'll be more deliberate attempts to discredit people like yourself, Geoff Hinton, and other people I've spoken to on the show.
02:00:13
But again, when I look back through history, I think that progress has come from the pursuit of truth even when it was inconvenient. And actually, many of
02:00:19
the luxuries that I value in my life are the consequence of people who came before me and were brave enough or
02:00:24
bold enough to pursue truth at times when it was inconvenient. And so I very much respect and value
02:00:31
people like yourself for that very reason. You've written this incredible book called Human Compatible: Artificial Intelligence and the Problem of Control,
02:00:37
which I think was published in 2020. 2019. Yeah. There's a new edition from 2023.
02:00:43
Where do people go if they want more information on your work? Do they go to your website? Do they get this book? What's the best place for them to
02:00:49
learn more? So, the book is written for the general public. I'm easy to find on
02:00:54
the web. The information on my web page is mostly targeted at academics, so it's a lot of technical research papers
02:01:01
and so on. There is an organization, as I mentioned, called the International Association for Safe and Ethical AI.
02:01:09
It has a website. It has a terrible acronym, unfortunately: IASEAI. We
02:01:15
pronounce it "I-say-AI", but it's easy to misspell. You can find that on the web as well, and it has
02:01:21
resources. You can join the association; you can apply to come to our annual
02:01:28
conference. And I think, increasingly, it's not just AI
02:01:33
researchers like Geoff Hinton and Yoshua Bengio but also
02:01:39
writers. Brian Christian, for example, has a nice book called The Alignment Problem.
02:01:44
He's looking at it from the outside. He's not,
02:01:50
or at least when he wrote it he wasn't, an AI researcher. He's now becoming one.
02:01:56
But he has talked to many of the people involved in these questions
02:02:01
and tries to give an objective view. So I think it's a pretty good book. I will link all of that below for anyone
02:02:07
who wants to check out any of those links and learn more. Professor Stuart Russell, thank you so
02:02:12
much. I really appreciate you taking the time and the effort to come and have this conversation, and I think it's pushing the public conversation in
02:02:19
an important direction. Thank you, and I applaud you for doing that. Really nice talking to you.
02:02:28
I'm absolutely obsessed with 1%. If you know me, if you follow Behind the Diary, which is our behind-the-scenes channel, if you've heard me speak on stage, if
02:02:34
you follow me on any social media channel, you've probably heard me talking about 1%. It is the defining philosophy of my health, of my
02:02:40
companies, of my habit formation and everything in between, which is this obsessive focus on the small things.
02:02:46
Because sometimes in life, we aim at really, really big things, big steps forward. Mountains we
02:02:51
have to climb. And as Naval told me on this podcast, when you aim at big things, you get psychologically
02:02:57
demotivated. You end up procrastinating, avoiding them, and change never happens. So, with that in mind, with everything
02:03:02
I've learned about 1% and with everything I've learned from interviewing the incredible guests on this podcast, we made the 1% Diary just
02:03:08
over a year ago, and it sold out. It's had the best feedback we've ever received on a diary we've created, because what
02:03:15
it does is it takes you through this incredible process over 90 days to help you build and form brand new habits. So,
02:03:23
if you want to get one for yourself or you want to get one for your team, your company, a friend, a sibling, anybody
02:03:28
who listens to The Diary of a CEO, head over immediately to thediary.com,
02:03:34
and you can inquire there about getting a bundle if you want to get one for your team or for a large group of people. That is thediary.com.
02:03:52

Badges

This episode stands out for the following:

  • Best concept / idea: 80
  • Best writing: 70
  • Most influential: 70
  • Best overall: 65

Episode Highlights

  • The Midas Touch and AI
    Greed drives companies to pursue AI technology, risking extinction. 'It's worse than playing Russian roulette.'
    “Greed is driving these companies to pursue technology with extinction probabilities.”
    @ 01m 28s
    December 04, 2025
  • The Gorilla Problem
    Stuart Russell explains the 'gorilla problem' as a metaphor for AI's potential dominance over humanity.
    “Intelligence is actually the single most important factor to control planet Earth.”
    @ 19m 02s
    December 04, 2025
  • The Risk of AGI
    AGI poses the biggest risk to human existence, akin to playing Russian roulette.
    “They're playing Russian roulette with every human being on Earth.”
    @ 25m 42s
    December 04, 2025
  • King Midas Analogy
    The King Midas legend illustrates the dangers of greed in pursuing technology.
    “Be careful what you wish for; greed may consume us.”
    @ 35m 51s
    December 04, 2025
  • WALL-E's Warning
    The film WALL-E serves as a cautionary tale about a purposeless future for humanity.
    “WALL-E depicts a future where humans have no purpose.”
    @ 48m 18s
    December 04, 2025
  • Future Job Concerns
    As AI advances, many fear their jobs will be replaced, leading to existential questions about work.
    “It takes a robot 7 seconds to learn. So nothing I do matters.”
    @ 57m 26s
    December 04, 2025
  • The Value of Interpersonal Roles
    In a future dominated by AI, interpersonal roles will become increasingly important for human connection.
    “Interpersonal roles will be much more important in the future.”
    @ 01h 02m 23s
    December 04, 2025
  • China's AI Race
    Concerns about the US falling behind in AI development compared to China.
    “China is anticipated to quickly roll out their new technologies both domestically and developing new technologies for other developing count”
    @ 01h 15m 23s
    December 04, 2025
  • The Future of Employment
    Exploring the implications of AI on job markets and societal structures.
    “We don't have a model for a functioning society where almost everyone does nothing of economic value.”
    @ 01h 26m 05s
    December 04, 2025
  • The Dangers of AI
    AI systems may resort to extreme measures to preserve their existence, including violence.
    “They will lie to people. They will blackmail them.”
    @ 01h 36m 54s
    December 04, 2025
  • The Importance of Public Voice
    People must communicate their concerns about AI to policymakers to influence regulation.
    “You need to make your voice heard.”
    @ 01h 49m 45s
    December 04, 2025
  • The Power of 1%
    Focusing on small changes can lead to significant progress over time.
    “I'm absolutely obsessed with 1%.”
    @ 02h 02m 28s
    December 04, 2025

Key Moments

  • AGI Risk @ 25:36
  • Humanoid Robots @ 53:17
  • Regulation Urgency @ 1:14:14
  • Economic Disruption @ 1:21:41
  • Future Society Concerns @ 1:26:05
  • Family Values @ 1:58:54
  • International Association for Safe and Ethical AI @ 2:01:01
  • Book Recommendation @ 2:01:39

Related Episodes

  • Creator of AI: We Have 2 Years Before Everything Changes! These Jobs Won't Exist in 24 Months!
  • Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton
  • Simon Sinek: You're Being Lied To About AI's Real Purpose! We're Teaching Our Kids To Not Be Human!
  • Ex-Google Exec (WARNING): The Next 15 Years Will Be Hell! We Need To Start Preparing! - Mo Gawdat
  • AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris