
Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030!

September 04, 2025 / 01:27:38

This episode features Dr. Roman Yampolskiy, an expert in AI safety, discussing the potential dangers of artificial intelligence and predictions for the future. Key topics include AI safety, unemployment due to automation, the implications of superintelligence, and simulation theory.

Dr. Yampolskiy shares his concerns about the rapid development of AI, predicting that artificial general intelligence (AGI) could arrive by 2027 and, within a few years, drive unemployment to unprecedented levels, potentially reaching 99%. He emphasizes that while AI capabilities are advancing quickly, safety measures are lagging behind.

The conversation touches on the ethical implications of AI development, with Dr. Yampolskiy arguing that the primary obligation of companies is to their investors, not to societal safety. He warns that without proper control, the emergence of superintelligence could lead to catastrophic outcomes.

Dr. Yampolskiy also discusses simulation theory, suggesting that we may be living in a simulation created by a higher intelligence. He believes that understanding this could change our perspective on existence and morality.

Throughout the episode, Dr. Yampolskiy calls for a reevaluation of how we approach AI development, advocating for a focus on safety and ethical considerations to prevent potential disasters.

TL;DR

Dr. Roman Yampolskiy discusses AI safety, predicting AGI by 2027 and unemployment approaching 99% within five years, and explores simulation theory's implications.

Video

00:00:00
You've been working on AI safety for two decades at least. Yeah, I was convinced we can make safe AI, but the more I looked at it, the
00:00:06
more I realized it's not something we can actually do. You have made a series of predictions about a variety of different dates. So,
00:00:13
what is your prediction for 2027? [Music]
00:00:19
Dr. Roman Yampolskiy is a globally recognized voice on AI safety and an associate professor of computer science.
00:00:25
He educates people on the terrifying truth of AI and what we need to do to save humanity. In 2 years, the capability to replace
00:00:32
most humans in most occupations will come very quickly. I mean, in 5 years, we're looking at a world where we have
00:00:39
levels of unemployment we've never seen before. Not talking about 10%, but 99%.
00:00:45
And that's without super intelligence. A system smarter than all humans in all domains. So, it would be better than us
00:00:51
at making new AI. But it's worse than that. We don't know how to make them safe and yet we still have the smartest
00:00:57
people in the world competing to win the race to super intelligence. But what do you make of people like Sam Altman's journey with AI?
00:01:04
So a decade ago we published guardrails for how to do AI, right? They violated every single one, and he's gambling 8
00:01:11
billion lives on getting richer and more powerful. So I guess some people want to go to Mars, others want to control the
00:01:17
universe. But it doesn't matter who builds it. The moment you switch on super intelligence, we will most likely
00:01:23
regret it terribly. And then by 2045, now this is where it gets interesting.
00:01:30
Dr. Roman Yampolskiy, let's talk about simulation theory. I think we are in one. And there is a
00:01:35
lot of agreement on this and this is what you should be doing in it so we don't shut it down. First,
00:01:42
I see messages all the time in the comment section that some of you didn't realize you didn't subscribe. So, if you
00:01:47
could do me a favor and double check if you're a subscriber to this channel, that would be tremendously appreciated. It's the simple, free thing
00:01:54
that anybody who watches this show frequently can do to help us keep everything going on the trajectory it's on. So, please do
00:02:00
double check if you've subscribed, and thank you so much, because in a strange way, you're part of our history
00:02:06
and you're on this journey with us and I appreciate you for that. So, yeah, thank you,
00:02:13
Dr. Roman Yampolskiy. What is the mission that you're currently on? Because it's quite clear to me
00:02:20
that you are on a bit of a mission and you've been on this mission for I think the best part of two decades at least.
00:02:26
I'm hoping to make sure that super intelligence we are creating right now
00:02:31
does not kill everyone.
00:02:37
Give me some context on that statement, because it's quite a shocking statement. Sure. So in the last decade we actually
00:02:45
figured out how to make artificial intelligence better. Turns out if you add more compute, more data, it just
00:02:53
kind of becomes smarter. And so now the smartest people in the world, billions
00:02:58
of dollars, all going to create the best possible super intelligence we can.
00:03:04
Unfortunately, while we know how to make those systems much more capable, we don't know how to make them safe,
00:03:12
how to make sure they don't do something we will regret. And that's the state-of-the-art right
00:03:18
now. When we look at just prediction markets, how soon will we get to
00:03:23
advanced AI? The timelines are very short, a couple of years, two, three years,
00:03:29
according to prediction markets, according to CEOs of top labs,
00:03:34
and at the same time we don't know how to make sure that those systems are
00:03:40
aligned with our preferences. So we are creating this alien intelligence. If aliens were coming to
00:03:49
earth and you have three years to prepare, you would be panicking right now.
00:03:55
But most people don't even realize this is happening. So some of the counterarguments might be
00:04:02
well, these are very, very smart people. These are very big companies with lots of money. They have an obligation, a
00:04:08
moral obligation but also just a legal obligation to make sure they do no harm. So I'm sure it'll be fine.
00:04:14
The only obligation they have is to make money for the investors. That's the legal obligation they have. They have no
00:04:20
moral or ethical obligations. Also, according to them, they don't know how to do it yet. The state-of-the-art
00:04:26
answers are we'll figure it out when we get there, or AI will help us control more advanced AI.
00:04:33
That's insane. In terms of probability, what do you think is the probability that something goes catastrophically wrong?
00:04:40
So, nobody can tell you for sure what's going to happen. But if you're not in charge, you're not controlling it, you
00:04:46
will not get outcomes you want. The space of possibilities is almost infinite. The space of outcomes we will
00:04:53
like is tiny. And who are you and how long have you
00:04:59
been working on this? I'm a computer scientist by training. I
00:05:04
have a PhD in computer science and engineering. I probably started work in AI safety, mildly defined as control of
00:05:13
bots at the time, 15 years ago. 15 years ago. So you've been working on
00:05:19
AI safety before it was cool. Before the term existed, I coined the term AI safety. So you're the founder of the term AI
00:05:25
safety. The term? Yes. Not the field. There are other people who did brilliant work before I got there.
00:05:31
Why were you thinking about this 15 years ago? Because most people have only been talking about the term AI safety for the last two or three years.
00:05:37
Yeah. It started very mildly just as a security project. I was looking at poker
00:05:43
bots and I realized that the bots are getting better and better. And if you
00:05:48
just project this forward enough, they're going to get better than us, smarter, more capable. And it happened.
00:05:56
They are playing poker way better than average players. But more generally, it
00:06:01
will happen with all other domains, all the other cyber resources. I wanted to
00:06:06
make sure AI is a technology which is beneficial for everyone. So I started to work on making AI safer.
00:06:14
Was there a particular moment in your career where you thought oh my god?
00:06:19
First 5 years at least I was working on solving this problem. I was convinced we
00:06:24
can make this happen. We can make safe AI and that was the goal. But the more I looked at it, the more I realized every
00:06:31
single component of that equation is not something we can actually do. And the more you zoom in, it's like a fractal.
00:06:38
You go in and you find 10 more problems and then 100 more problems. And all of
00:06:43
them are not just difficult. They're impossible to solve. There is no seminal
00:06:49
work in this field where, like, we solved this, we don't have to worry about this. There are patches. There are little
00:06:55
fixes we put in place, and quickly people find ways to work around them. They jailbreak whatever safety mechanisms
00:07:03
we have. So while progress in AI capabilities is exponential or maybe
00:07:09
even hyper exponential, progress in AI safety is linear or constant. The gap is
00:07:15
increasing. The gap between how capable the systems are and how well
00:07:21
we can control them, predict what they're going to do, explain their decision making.
00:07:26
I think this is quite an important point because you said that we're basically patching over the issues that we find.
00:07:32
So, we're developing this core intelligence, and then to stop it doing things
00:07:38
or to stop it showing some of its unpredictability or its threats, the
00:07:44
companies that are developing this AI are programming in code over the top to say, "Okay, don't swear, don't say that
00:07:49
rude word, don't do that bad thing." Exactly. And you can look at other examples of that. So, HR manuals, right?
00:07:55
We have those humans. They're general intelligences, but you want them to behave in a company. So they have a
00:08:01
policy, no sexual harassment, no this, no that. But if you're smart enough, you
00:08:06
always find a workaround. So you're just pushing behavior into a different not
00:08:11
yet restricted subdomain. We should probably define some terms
00:08:16
here. So there's narrow intelligence which can play chess or whatever. There's the artificial general
00:08:21
intelligence which can operate across domains and then super intelligence which is smarter than all humans in all
00:08:27
domains. And where are we? So that's a very fuzzy boundary, right? We
00:08:33
definitely have many excellent narrow systems, no question about it. And they are super intelligent in that narrow
00:08:39
domain. So uh protein folding is a problem which was solved using narrow AI
00:08:44
and it's superior to all humans in that domain. In terms of AGI, again I said if
00:08:49
we showed what we have today to a scientist from 20 years ago, they would be convinced we have full-blown AGI. We
00:08:56
have systems which can learn. They can perform in hundreds of domains, and they're better than humans in many of them. So
00:09:04
you can argue we have a weak version of AGI. Now we don't have super intelligence
00:09:09
yet. We still have brilliant humans who are completely dominating AI especially
00:09:14
in science and engineering. But that gap is closing so fast. You can
00:09:19
see, especially in the domain of mathematics. Three years ago, large language models
00:09:26
couldn't do basic algebra; multiplying three-digit numbers was a challenge. Now
00:09:31
they're helping with mathematical proofs, they're winning mathematics olympiad competitions, they are working on solving
00:09:39
Millennium Prize Problems, the hardest problems in mathematics. So in 3 years we closed the gap from subhuman performance to better
00:09:47
than most mathematicians in the world. And we see the same process happening in science and in engineering.
00:09:54
You have made a series of predictions and they correspond to a variety of different dates. I have those dates in
00:10:00
front of me here. What is your prediction for the year 2027?
00:10:07
We're probably looking at AGI, as predicted by prediction markets and the tops
00:10:13
of the labs. So we have artificial general intelligence by 2027. And how would that make the world
00:10:19
different to how it is now? So if you have this concept of a drop-in
00:10:25
employee, you have free labor, physical and cognitive, trillions of dollars of it. It makes no sense to hire humans for
00:10:32
most jobs. If I can just get, you know, a $20 subscription or a free model to do
00:10:38
what an employee does. First, anything on a computer will be automated.
00:10:43
And next, I think humanoid robots are maybe 5 years behind. So in five years all the physical labor can also be
00:10:50
automated. So we're looking at a world where we have levels of unemployment we
00:10:55
never seen before. Not talking about 10% unemployment, which is scary, but 99%. All
00:11:01
you have left is jobs where for whatever reason you prefer another human would do
00:11:06
it for you. But anything else can be fully automated. It doesn't mean it will be
00:11:13
automated in practice. A lot of times technology exists but it's not deployed.
00:11:18
Video phones were invented in the 70s. Nobody had them until iPhones came around.
00:11:25
So we may have a lot more time with jobs and with a world which looks like this.
00:11:30
But the capability to replace most humans in most occupations will come very quickly.
00:11:38
Okay. So let's try and drill down into that and stress test it. So,
00:11:46
a podcaster like me. Would you need a podcaster like me?
00:11:52
So, let's look at what you do. You prepare. You ask questions.
00:11:58
You ask follow-up questions. And you look good on camera. Thank you so much. Let's see what we can do. A large language
00:12:04
model today can easily read everything I wrote. Yeah. And have a very solid understanding, better. I assume you haven't read
00:12:11
every single one of my books. Right? That thing would do it. It can train on every podcast you ever did. So, it knows
00:12:18
exactly your style, the types of questions you ask. It can also
00:12:23
find correspondence between what worked really well. Like this type of question really increased views. This type of
00:12:30
topic was very promising. So, it can optimize, I think, better than you can, because you don't have a data set. Of
00:12:36
course, visual simulation is trivial at this point. So you can make a video within seconds of me sitting here, and
00:12:43
so we can generate videos of you interviewing anyone on any topic very efficiently, and you just have to get
00:12:51
likeness approval, whatever. Are there many jobs that you think would remain in a world of AGI, if you're
00:12:57
saying AGI is potentially going to be here, whether it's deployed or not, by 2027? Okay, so let's
00:13:04
take out of this any physical labor jobs for a second. Are there any jobs that you
00:13:09
think a human would be able to do better in a world of AGI still? So that's the question I often ask
00:13:15
people in a world with AGI and I think almost immediately we'll get super intelligence as a side effect. So the
00:13:22
question really is in a world of super intelligence which is defined as better than all humans in all domains. What can
00:13:29
you contribute? And so you know better than anyone what it's like to be you. You know what ice
00:13:37
cream tastes like to you. Can you get paid for that knowledge? Is someone interested in that?
00:13:43
Maybe not. Not a big market. There are jobs where you want a human. Maybe you're rich and you want a human
00:13:49
accountant for whatever historic reasons. Old people like traditional ways of
00:13:57
doing things. Warren Buffett would not switch to AI. He would use his human accountant.
00:14:02
But it's a tiny subset of the market. Today we have products which are handmade in the US as opposed to
00:14:10
mass-produced in China, and some people pay more to have those, but it's a small subset. It's almost a fetish. There is
00:14:18
no practical reason for it and I think anything you can do on a computer could be automated using that technology.
00:14:27
You must hear a lot of rebuttals when you say this, because people experience a huge amount of mental
00:14:33
discomfort when they hear that their job, their career, the thing they got a degree in, the thing they invested
00:14:38
$100,000 into, is going to be taken away from them. So, the natural reaction for some people is that cognitive
00:14:44
dissonance that no, you're wrong. AI can't be creative. It's not this. It's
00:14:49
not that. It'll never be interested in my job. I'll be fine because you hear these arguments all the time, right?
00:14:55
It's really funny. I ask people in different occupations. I ask my Uber driver, "Are you worried
00:15:02
about self-driving cars?" And they go, "No, no one can do what I do. I know the streets of New York. I can navigate like
00:15:08
no AI. I'm safe." And it's true for any job. Professors are saying this to me.
00:15:14
Oh, nobody can lecture like I do. Like, this is so special. But you understand it's ridiculous. We already have
00:15:20
self-driving cars replacing drivers. That is not even a question if it's
00:15:26
possible. It's just a question of how soon before you're fired. Yeah. I mean, I've just been in LA
00:15:31
yesterday and my car drives itself. So, I get in the car, I put in where I want to go, and then I don't touch the
00:15:38
steering wheel or the brake pedals, and it takes me from A to B, even if it's an hour-long drive, without any intervention
00:15:44
at all. I actually still park it, but other than that, I'm not I'm not driving the car at all. And obviously in LA we
00:15:50
also have Waymo now, which means you order it on your phone and it shows up with no driver in it and takes you to
00:15:56
where you want to go. Oh yeah. So it's quite clear to see how that is potentially a matter of time for those
00:16:02
people, because we do have some of those people listening to this conversation right now whose occupation is driving. And I think
00:16:10
driving is the biggest occupation in the world, if I'm correct. I'm pretty sure it
00:16:15
is the biggest occupation in the world. One of the top ones. Yeah. What would you say to those people? What
00:16:22
should they be doing with their lives? Should they be retraining in something, and on what time frame? So that's the paradigm shift here.
00:16:28
Before we always said this job is going to be automated, retrain to do this other job. But if I'm telling you that
00:16:33
all jobs will be automated, then there is no plan B. You cannot retrain.
00:16:41
Look at computer science. Two years ago, we told people: learn to code. You are an artist, you cannot make
00:16:49
money? Learn to code. Then we realized, oh, AI kind of knows how to code and is
00:16:54
getting better. Become a prompt engineer. You can engineer prompts for AI. It's
00:17:01
going to be a great job. Get a four-year degree in it. But then we're like, AI is way better at designing prompts for
00:17:06
other AIs than any human. So that's gone. So I can't really tell you right
00:17:11
now. The hardest thing is designing AI agents for practical applications. I guarantee you in a year or two it's
00:17:17
going to be gone just as well. So I don't think there is a "this
00:17:22
occupation needs to learn to do this instead." I think it's more like: we as humanity, when we all lose our jobs, what
00:17:30
do we do? What do we do financially? Who's paying for us? And what do we do in terms of meaning? What do I do with
00:17:38
my extra 60, 80 hours a week? You've thought around this corner,
00:17:44
haven't you? A little bit. What is around that corner, in your view? So the economic part seems easy. If you
00:17:51
create a lot of free labor, you have a lot of free wealth, abundance, things which are right now not very affordable
00:17:59
become dirt cheap, and so you can provide basic needs for everyone. Some people say you can provide beyond basic needs.
00:18:05
You can provide very good existence for everyone. The hard problem is what do you do with all that free time? For a
00:18:12
lot of people, their jobs are what gives them meaning in their life. So they would be kind of lost. We see it with
00:18:19
people who retire or do early retirement. And for so many people who
00:18:25
hate their jobs, they'll be very happy not working. But now you have people who are chilling all day. What happens to
00:18:32
society? How does that impact crime rate, pregnancy rate, all sorts of issues nobody thinks about? Governments
00:18:39
don't have programs prepared to deal with 99% unemployment.
00:18:47
What do you think that world looks like? Again, I I think you very important part
00:18:54
to understand here is the unpredictability of it. We cannot predict what a smarter than us system
00:19:01
will do. And the point when we get to that is often called singularity by
00:19:06
analogy with physical singularity. You cannot see beyond the event horizon. I can tell you what I think might happen,
00:19:13
but that's my prediction. It is not what actually is going to happen because I just don't have cognitive ability to
00:19:20
predict a much smarter agent impacting this world.
00:19:25
When you read science fiction, there is never a super intelligence in it actually doing anything, because nobody
00:19:31
can write believable science fiction at that level. They either banned AI, like Dune, because this way you can avoid
00:19:38
writing about it, or it's like Star Wars: you have these really dumb bots, but nothing super intelligent, ever, because by
00:19:46
definition you cannot predict at that level because by definition of it being super
00:19:51
intelligent it will make its own mind up. By definition if it was something you could predict you would be operating at
00:19:58
the same level of intelligence violating our assumption that it is smarter than you. If I'm playing chess with super
00:20:04
intelligence and I can predict every move, I'm playing at that level. It's kind of like my French bulldog trying to predict exactly what I'm
00:20:11
thinking and what I'm going to do. That's a good analogy for the cognitive gap. And it's not just that he can predict you're going to work, you're coming back, but he cannot
00:20:17
understand why you're doing a podcast. That is something completely outside of his model of the world.
00:20:25
Yeah. He doesn't even know that I go to work. He just sees that I leave the house and doesn't know where I go.
00:20:30
To buy food for him. What's the most persuasive argument against your own
00:20:35
perspective here? That we will not have unemployment due to advanced technology,
00:20:41
that there won't be this French bulldog-to-human gap in understanding and,
00:20:49
I guess, power and control. So some people think that we can enhance
00:20:56
human minds, either through combination with hardware, something like Neuralink, or through genetic
00:21:02
re-engineering to where we make smarter humans. Yeah, it may give us a little more
00:21:09
intelligence. I don't think we are still competitive in biological form with silicon form. Silicon substrate is much
00:21:16
more capable for intelligence. It's faster. It's more resilient, more energy efficient in many ways,
00:21:22
which is what computers are made out of versus the brain. Yeah. So I don't think we can keep up just with improving our
00:21:30
biology. Some people think, and this is very speculative, we can upload our minds into computers. So scan your
00:21:38
brain, the connectome of your brain, and have a simulation running on a computer, and you
00:21:43
can speed it up give it more capabilities. But to me that feels like you no longer exist. We just created
00:21:49
software by different means and now you have AI based on biology and AI based on
00:21:56
some other forms of training. You can have evolutionary algorithms. You can have many paths to reach AGI but at the
00:22:02
end none of them are humans. I have another date here, which is
00:22:10
2030. What's your prediction for 2030? What will the world look like?
00:22:15
So we probably will have humanoid robots with enough flexibility, dexterity to compete with humans in all
00:22:24
domains, including plumbers. We can make artificial plumbers. Not the plumbers! That
00:22:30
felt like the last bastion of human employment. So 2030, 5 years from now,
00:22:36
humanoid robots. So many of the leading companies, including Tesla, are developing humanoid robots at light speed, and they're
00:22:43
getting increasingly more effective. And these humanoid robots will be able to move through physical space, you
00:22:50
know, make an omelette, do anything humans can do, but obviously be
00:22:56
connected to AI as well. So they can think, talk, right? They're controlled by AI. They're
00:23:02
always connected to the network. So they are already dominating in many ways.
00:23:08
Our world will look remarkably different when humanoid robots are functional and
00:23:14
effective, because that's really when, you know, the combination
00:23:19
of intelligence and physical ability really doesn't leave much, does
00:23:26
it, for us human beings?
00:23:31
Not much. So today, if you have intelligence, through the internet you can hire humans to do your bidding for you.
00:23:37
You can pay them in bitcoin. So you can have bodies just not directly controlling them. So it's not a huge
00:23:44
game changer to add direct control of physical bodies. Intelligence is where it's at. The important component is
00:23:50
definitely higher ability to optimize, to solve problems, to find patterns people
00:23:56
cannot see. And then by 2045,
00:24:01
I guess the world looks even more different. That's 20 years from now.
00:24:06
So if it's still around. If it's still around, Ray Kurzweil predicts that that's the
00:24:12
year for the singularity. That's the year where progress becomes so fast, this AI doing science and engineering
00:24:19
work makes improvements so quickly we cannot keep up anymore. That's the definition of singularity: the point beyond
00:24:25
which we cannot see, understand, or predict the
00:24:31
intelligence itself or what is happening in the world, the technology being developed. So right
00:24:36
now if I have an iPhone, I can look forward to a new one coming out next year and I'll understand it has slightly
00:24:43
better camera. Imagine now this process of researching and developing this phone is automated. It happens every 6 months,
00:24:50
every 3 months, every month, week, day, hour, minute, second. You cannot keep up with 30 iterations of
00:24:58
iPhone in one day. You don't understand what capabilities it has, what
00:25:04
proper controls are. It just escapes you. Right now, it's hard for any researcher in AI to keep up with the
00:25:11
state-of-the-art. While I was doing this interview with you, a new model came out and I no longer know what the
00:25:17
state-of-the-art is. Every day, as a percentage of total knowledge, I get dumber. I may still know more because I
00:25:23
keep reading. But as a percentage of overall knowledge, we're all getting dumber.
00:25:29
And then you take it to extreme values, you have zero knowledge, zero understanding of the world around you.
00:25:37
Some of the arguments against this eventuality are that when you look at other technologies like the industrial
00:25:43
revolution, people just found new ways to work, and new careers that we could
00:25:50
never have imagined at the time were created. How do you respond to that in a world of super intelligence?
00:25:56
It's a paradigm shift. We always had tools, new tools which allowed some job to be done more efficiently. So instead
00:26:03
of having 10 workers, you could have two workers and eight workers had to find a new job. And there was another job. Now
00:26:09
you can supervise those workers or do something cool. But if you're creating a meta-invention,
00:26:15
you're inventing intelligence. You're inventing a worker, an agent. Then you can apply that agent
00:26:21
to the new job. There is not a job which cannot be automated. That never happened
00:26:26
before. All the inventions we previously had were kind of a tool for doing something.
00:26:33
So we invented fire. Huge game changer. But that's it. It stops with fire. We
00:26:39
invented the wheel. Same idea. Huge implications. But the wheel itself is not an
00:26:44
inventor. Here we're inventing a replacement for the human mind. A new
00:26:50
inventor capable of doing new inventions. It's the last invention we ever have to make. At that point it
00:26:56
takes over, and the process of doing science research, even ethics research,
00:27:02
morals, all that is automated at that point. Do you sleep well at night?
00:27:07
Really well. Even though you've spent the last, what, 15, 20 years of your life working on AI
00:27:14
safety, and it's suddenly among us in a way that I don't think anyone could have predicted 5
00:27:20
years ago. When I say among us, I really mean that the amount of funding and talent that is now focused on reaching
00:27:26
super intelligence faster has made it feel more inevitable and more soon
00:27:32
than any of us could have possibly imagined. We as humans have this built-in bias about not thinking about really bad
00:27:39
outcomes and things we cannot prevent. So all of us are dying. Your kids are dying, your parents are
00:27:46
dying, everyone's dying, but you still sleep well. You still go on with your day. Even 95-year-olds are still playing
00:27:53
games and golf and whatnot, because we have this ability to not think about
00:27:59
the worst outcomes especially if we cannot actually modify the outcome. So that's the same infrastructure being
00:28:07
used for this. Yeah, there is a humanity-level death-like event. We happen
00:28:14
to be close to it, probably. But unless I can do something about it, I can just
00:28:20
keep enjoying my life. In fact, maybe knowing that you have a limited amount of
00:28:25
time left gives you more reason to have a better life. You cannot waste any. And that's the survival trait of
00:28:32
evolution, I guess, because those of my ancestors that spent all their time worrying wouldn't have spent enough time
00:28:38
having babies and hunting to survive. Suicidal ideation. People who really start thinking about how horrible the
00:28:44
world is usually escape pretty soon.
00:28:52
You co-authored this paper analyzing the key arguments people make against the importance of AI safety. And
00:28:58
one of the arguments in there is that there's other things that are of bigger importance right now. It might be world
00:29:04
wars. It could be nuclear containment. It could be other things. There's other things that the governments and podcasters like me should be talking
00:29:11
about that are more important. What's your rebuttal to that argument? So, super intelligence is a meta
00:29:17
solution. If we get super intelligence right, it will help us with climate change. It will help us with wars. It
00:29:24
can solve all the other existential risks. If we don't get it right, it
00:29:30
dominates. If climate change takes a hundred years to boil us alive and super intelligence kills everyone in five, I
00:29:37
don't have to worry about climate change. So either way, either it solves it for me or it's not an issue.
00:29:44
So you think it's the most important thing to be working on? Without question, there is nothing more important than getting this right.
00:29:54
And I know everyone says it. You take any class, an English professor's class, and he tells you this is the most important class you'll ever
00:30:00
take. But you can see the meta-level differences with this one.
00:30:07
another argument in that paper is that we all be in control and that the danger is not AI um this particular argument
00:30:14
asserts that AI is just a tool, humans are the real actors that present danger, and we can always maintain control by
00:30:21
simply turning it off. Can't we just pull the plug out? I see that every time we have a conversation on the show about AI, someone says, "Can't we just unplug
00:30:27
it?" Yeah, I get those comments on every podcast I make, and I always want to get in touch with the guy and say, "This
00:30:33
is brilliant. I never thought of it. We're going to write a paper together and get a Nobel Prize for it. This is
00:30:38
like, let's do it." Because it's so silly. Like, can you turn off a virus? You have a computer virus. You don't
00:30:44
like it. Turn it off. How about Bitcoin? Turn off Bitcoin network. Go ahead. I'll
00:30:49
wait. This is silly. Those are distributed systems. You cannot turn them off. And on top of it, they're
00:30:54
smarter than you. They made multiple backups. They predicted what you're going to do. They will turn you off
00:31:00
before you can turn them off. The idea that we will be in control applies only
00:31:05
to pre-superintelligence levels, basically what we have today. Today, humans with AI
00:31:11
tools are dangerous. They can be hackers, malevolent actors. Absolutely. But the moment super intelligence
00:31:18
becomes smarter and dominates, they're no longer the important part of that equation. It is the higher intelligence
00:31:24
I'm concerned about, not the human who may add additional malevolent payload,
00:31:29
but at the end still doesn't control it.
00:31:35
The next argument that I saw in that paper basically says, listen, this is inevitable.
00:31:42
So, there's no point fighting against it because there's really no hope here. So, we should probably give up even trying
00:31:48
and be faithful that it'll work itself out, because everything you've said sounds really inevitable. And with
00:31:55
China working on it, I'm sure Putin's got some secret division, I'm sure Iran are doing some bits and pieces, every European country's trying
00:32:02
to get ahead of AI. The United States is leading the way. So, it's inevitable. So, we probably should just
00:32:09
have faith and pray. Well, praying is always good, but incentives matter. If you are looking at
00:32:16
what drives these people. So yes, money is important. There is a lot of money in that space, and so everyone's trying
00:32:23
to be there and develop this technology. But if they truly understand the argument, they understand that you will
00:32:29
be dead, and no amount of money will be useful to you, then incentives switch.
00:32:34
They would want to not be dead. A lot of them are young people, rich people. They have their whole lives ahead of them. I
00:32:41
think they would be better off not building advanced super intelligence concentrating on narrow AI tools for
00:32:48
solving specific problems. Okay, my company cures breast cancer. That's all. We make billions of dollars. Everyone's
00:32:55
happy. Everyone benefits. It's a win. We are still in control
00:33:00
today. It's not over until it's over. We can decide not to build general super
00:33:05
intelligences. I mean the United States might be able to conjure up enough enthusiasm for that
00:33:12
but if the United States doesn't build general super intelligences, then China are going to have a big advantage,
00:33:18
right? So right now, at those levels, whoever has more advanced AI has more advanced military, no question. We see it
00:33:25
with existing conflicts. But the moment you switch to super intelligence, uncontrolled super intelligence, it
00:33:31
doesn't matter who builds it, us or them. And if they understand this argument, they also would not build it. It's
00:33:38
mutually assured destruction on both ends. Is this technology different than, say,
00:33:44
nuclear weapons which require a huge amount of investment and you have to like enrich the uranium and you need
00:33:51
billions of dollars potentially to even build a nuclear weapon.
00:33:56
But it feels like this technology is much cheaper to get to super intelligence potentially or at least it
00:34:02
will become cheaper. I wonder if it's possible that some some guy some startup is going to be able to build super
00:34:08
intelligence in you know a couple of years without the need of you know billions of dollars of compute or or
00:34:15
electricity power. That's a great point. So every year it becomes cheaper and cheaper to train
00:34:20
a sufficiently large model. If today it would take a trillion dollars to build super intelligence, next year it could
00:34:25
be a hundred billion, and so on. At some point, a guy with a laptop could do it.
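The arithmetic behind this point can be sketched in a few lines of Python. This is a hedged illustration only: the trillion-to-hundred-billion figures from the conversation imply a roughly 10x annual cost decline, and the $10k "guy with a laptop" budget is a hypothetical of mine, not a real price.

```python
# Illustrative sketch: assumes training cost falls 10x per year, per the
# trillion -> hundred-billion figures in the conversation. Not real prices.

def years_until_affordable(start_cost: float, budget: float, yearly_decline: float = 10.0) -> int:
    """Count full years until the cost drops to or below the budget."""
    years = 0
    cost = start_cost
    while cost > budget:
        cost /= yearly_decline
        years += 1
    return years

# A $1T training run vs. a hypothetical $10k individual budget:
print(years_until_affordable(1e12, 1e4))  # 8 years at a steady 10x/year decline
```

Under those assumptions the gap closes in under a decade, which is the shape of the argument being made here, whatever the true decline rate turns out to be.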
00:34:31
But you don't want to wait four years for it to become affordable. So that's why so much money is pouring in. Somebody wants
00:34:37
to get there this year and, if lucky, take all the winnings, a light-cone-level reward. So
00:34:43
in that regard, they're both very expensive projects, like Manhattan-level projects,
00:34:49
which was the nuclear bomb project. The difference between the two technologies is that nuclear weapons are
00:34:55
still tools. Some dictator, some country, someone has to decide to use them, deploy them.
00:35:03
Whereas super intelligence is not a tool. It's an agent. It makes its own decisions and no one is controlling
00:35:09
it. I cannot take out this dictator and now super intelligence is safe. So that's a fundamental difference to me.
00:35:17
But if you're saying that it is going to get incrementally cheaper, like I think it's Moore's law, isn't it? The technology
00:35:22
gets cheaper then there is a future where some guy on his laptop is going to be able to create
00:35:28
super intelligence without oversight or regulation or employees, etc. Yeah, that's why a lot of people are
00:35:34
suggesting we need to build something like a surveillance planet, where you are
00:35:42
monitoring who's doing what and you're trying to prevent people from doing it. Do I think it's feasible? No. At some
00:35:48
point it becomes so affordable and so trivial that it just will happen. But at this point we're trying to get more
00:35:55
time. We don't want it to happen in five years. We want it to happen in 50 years.
00:36:01
I mean that's not very hopeful. See depends on how old you are. Depends on how old you are.
00:36:08
I mean if you're saying that you believe in the future people will be able to make super intelligence
00:36:14
without the resources that are required today, then it is just a matter of time. Yeah. But the same will be true for many other
00:36:21
technologies. We're getting much better in synthetic biology where today someone with a bachelor's degree in biology can
00:36:27
probably create a new virus. This will also become cheaper, as will other technologies like it. So we are approaching a point
00:36:35
where it's very difficult to make sure no technological breakthrough is the last one. So
00:36:42
essentially in many directions we have this uh pattern of making it easier in
00:36:48
terms of resources and intelligence required to destroy the world. If you look at, I don't know, 500 years
00:36:55
ago the worst dictator with all the resources could kill a couple million people. He couldn't destroy the world.
00:37:01
Now, with nuclear weapons, we can blow up the whole planet multiple times over. Synthetic biology, we saw with COVID, you can
00:37:09
very easily create a combination virus which impacts billions of people and all
00:37:15
of those things are becoming easier to do in the near term. You talk about extinction being a real risk, human
00:37:21
extinction being a real risk. Of all the pathways to human extinction that you think are most likely, what is
00:37:28
the leading pathway? Because I know you talk about there being some issues pre-deployment of these AI tools, like, you
00:37:34
know, someone makes a mistake when they're designing a model, or other
00:37:40
issues post-deployment. When I say post-deployment, I mean once a chatbot or an agent's released into
00:37:46
the world, and someone hacks into it, changes it, and reprograms it to be malicious. Of all
00:37:52
these potential paths to human extinction, which one do you think is the
00:37:57
highest probability? So I can only talk about the ones I can predict myself. So
00:38:02
I can predict that even before we get to super intelligence, someone will create a very advanced biological tool, create a
00:38:08
novel virus, and that virus gets everyone, or almost everyone. I can envision it. I
00:38:14
can understand the pathway. I can say that. So just to zoom in on that then that would be using an AI to make a virus and
00:38:21
then releasing it. Yeah. And would that be intentional or
00:38:26
There are a lot of psychopaths, a lot of terrorists, a lot of doomsday cults. We've
00:38:32
seen historically, again, that they try to kill as many people as they can. They usually fail. They kill hundreds of thousands.
00:38:38
But if they get technology to kill millions or billions, they would do that gladly.
00:38:45
The point I'm trying to emphasize is that it doesn't matter what I can come up with. I am not a malevolent actor
00:38:51
you're trying to defeat here. It's a super intelligence which can come up with completely novel ways of doing it.
00:38:57
Again, you brought up the example of your dog. Your dog cannot understand all the ways
00:39:03
you can take it out. It can maybe think you'll bite it to
00:39:08
death or something, but that's all. Whereas you have an infinite supply of
00:39:13
resources. So if I asked your dog exactly how you're going to take it out, it would not give
00:39:20
you a meaningful answer. It can talk about biting. And this is what we know. We know viruses. We experienced viruses.
00:39:27
We can talk about them. But what an AI system capable of doing novel
00:39:33
physics research can come up with is beyond me. One of the things that I think most
00:39:38
people don't understand is how little we understand about how these AIs are actually working. Because one would
00:39:45
assume, you know, with computers, we kind of understand how a computer works. We know that it's doing this and then this and it's running on code. But from
00:39:52
reading your work, you describe it as being a black box. So, in
00:39:57
the context of something like ChatGPT or an AI, you're telling me that the people that have built that tool don't actually know what's going on
00:40:05
inside there. That's exactly right. So even people making those systems have to run
00:40:11
experiments on their product to learn what it's capable of. So they train it
00:40:16
by giving it a lot of data, let's say all of the internet's text. They run it on a lot
00:40:21
of computers to learn patterns in that text and then they start experimenting with that model. Oh, do you speak
00:40:28
French? Oh, can you do mathematics? Oh, are you lying to me now? And so maybe it
00:40:33
takes a year to train it and then 6 months to get some fundamentals about
00:40:38
what it's capable of, some safety overhead. But we still discover new
00:40:45
capabilities in old models. If you ask a question in a different way, it becomes smarter.
00:40:51
So it's no longer engineering, the way it was for the first 50
00:40:56
years, when someone was a knowledge engineer programming an expert system AI to do specific things. It's a science.
00:41:03
We are creating this artifact, growing it. It's like an alien plant, and then we
00:41:09
study it to see what it's doing. And just like with plants we don't have 100%
00:41:14
accurate knowledge of biology. We don't have full knowledge here. We kind of know some patterns. We know okay if we
00:41:20
add more compute, it gets smarter most of the time, but nobody can tell you precisely what the outcome is going to
00:41:28
be given a set of inputs. I've watched so many entrepreneurs treat
00:41:33
sales like a performance problem. When it's often down to visibility because when you can't see what's happening in
00:41:38
your pipeline, what stage each conversation is at, what's stalled, what's moving, you can't improve
00:41:44
anything and you can't close the deal. Our sponsor, Pipedrive, is the number one CRM tool for small to medium
00:41:50
businesses. Not just a contact list, but an actual system that shows your entire sales process, end to end, everything
00:41:57
that's live, what's lagging, and the steps you need to take next. All of your teams can move smarter and faster. Teams
00:42:04
using Pipedrive are on average closing three times more deals than those that aren't. It's the first CRM made by
00:42:11
salespeople for salespeople that over 100,000 companies around the world rely
00:42:16
on, including my team, who absolutely love it. Give Pipedrive a try today by visiting pipedrive.com/ceo.
00:42:24
And you can get up and running in a couple of minutes with no payment needed. And if you use this link, you'll
00:42:30
get a 30-day free trial. What do you make of OpenAI and Sam Altman and what
00:42:35
they're doing? And obviously you're aware that one of the co-founders left, was it Ilya?
00:42:41
Ilya, Ilya. Yeah. Ilya left and he started a new company called Safe Superintelligence.
00:42:47
AI safety wasn't challenging enough. He decided to just jump right to the hard problem.
00:42:55
As an onlooker, when you see that people are leaving OpenAI to start super
00:43:00
intelligence safety companies, what was your read on that situation?
00:43:06
So, a lot of people who worked with Sam said that maybe he's not the most direct
00:43:14
person in terms of being honest with them and they had concerns about his views on safety. That's part of it. So,
00:43:21
they wanted more control. They wanted more concentration on safety. But also,
00:43:26
it seems that anyone who leaves that company and starts a new one gets a $20 billion valuation just for having it
00:43:33
started. You don't have a product, you don't have customers, but if you want to make many billions of dollars, just do
00:43:39
that. So, it seems like a very rational thing to do for anyone who can. So, I'm not surprised that there is a lot of
00:43:46
attrition. Meeting him in person, he's super nice, very smart,
00:43:53
an absolutely perfect public interface. You see him testify in the Senate, he says the right
00:44:00
thing to the senators. You see him talk to the investors, they get the right message. But if you look at what people
00:44:07
who know him personally are saying, he's probably not the right person to be
00:44:13
controlling a project of that impact. Why?
00:44:19
He puts safety second. Second to
00:44:25
winning this race to super intelligence, being the guy who created godlike AI and controlling the light cone of the universe.
00:44:32
Do you suspect that's what he's driven by, the legacy of being an
00:44:38
impactful person that did a remarkable thing, versus the consequence that
00:44:44
might have for society? Because it's interesting that his other startup is Worldcoin, which is basically a
00:44:50
platform to create universal basic income, i.e. a platform to give us income in a world where people don't have jobs
00:44:57
anymore. So on one hand you're creating an AI company, and on the other hand you're creating a company that is preparing for people not to have employment.
00:45:05
It also has other properties. It keeps track of everyone's biometrics.
00:45:12
It keeps you in charge of the world's economy, the world's wealth. They're retaining a large portion of
00:45:19
Worldcoin. So I think it's a very reasonable piece to integrate with world
00:45:26
dominance. If you have a super intelligence system and you control money,
00:45:32
you're doing well. Why would someone want world dominance?
00:45:39
People have different levels of ambition. When you're a very young person with billions of dollars and fame, you start
00:45:46
looking for more ambitious projects. Some people want to go to Mars. Others want to control the light cone of the
00:45:52
universe. What did you say? The light cone of the universe. Light cone.
00:45:57
Every part of the universe light can reach from this point. Meaning anything accessible you want to grab and bring
00:46:04
into your control. Do you think Sam Altman wants to control every part of the universe?
00:46:10
I suspect he might. Yes. It doesn't mean he doesn't want a side
00:46:15
effect of it being a very beneficial technology which makes all the humans happy. Happy humans are good for
00:46:21
control. If you had to guess
00:46:27
what the world looks like in 2100,
00:46:32
if you had to guess, it's either free of human existence or
00:46:39
it's completely not comprehensible to someone like us.
00:46:44
It's one of those extremes. So there's either no humans. It's basically the world is destroyed or
00:46:50
it's so different that I cannot envision those predictions.
00:46:56
What can be done to turn this ship to a more certain positive outcome at this
00:47:03
point? Is is there still things that we can do or is it too late? So I believe in personal self-interest.
00:47:10
If people realize that doing this thing is really bad for them personally, they will not do it. So our job is to
00:47:16
convince everyone with any power in this space creating this technology working for those companies they are doing
00:47:23
something very bad for them, not just for the 8 billion people you're experimenting on with no permission, no
00:47:30
consent. You will not be happy with the outcome. If we can get everyone to
00:47:36
understand that's the default, and it's not just me saying it. You had Geoffrey Hinton, Nobel Prize winner, founder of the whole
00:47:42
machine learning space. He says the same thing. Bengio, dozens of others, top
00:47:47
scholars. We had a statement about dangers of AI signed by thousands of scholars, computer scientists. This is
00:47:54
basically what we think right now. And we need to make it universal. No one
00:47:59
should disagree with this. And then we may actually make good decisions about what technology to build. It doesn't
00:48:06
guarantee long-term safety for humanity, but it means we're not trying to get there as soon as possible to the worst
00:48:13
possible outcome. And do are you hopeful that that's even possible?
00:48:18
I want to try. We have no choice but to try. And what would need to happen and who
00:48:24
would need to act? What is it, government legislation? Is it... Unfortunately, I don't think making it
00:48:29
illegal is sufficient. There are different jurisdictions. There are, you know, loopholes. And what are you going
00:48:35
to do if somebody does it? Are you going to fine them for destroying humanity? Very steep fines for it? Like, what are you going to do? It's not enforceable.
00:48:42
If they do create it, now the super intelligence is in charge. So the judicial system we have is not
00:48:48
impactful. And all the punishments we have are designed for punishing humans.
00:48:53
Prisons, capital punishment, don't apply to AI. You know, the problem I have is when I have these conversations, I never
00:48:59
feel like I walk away with hope that something's going to go
00:49:05
well. And what I mean by that is I never feel like I walk away with some kind of clear set of actions that can
00:49:12
course correct what might happen here. So what should I do? What should the person sitting at home listening
00:49:18
to this do? You talk to a lot of people who are building this technology. Mhm.
00:49:24
Ask them to precisely explain some of those things claimed to be impossible, how they solved it or are going
00:49:32
to solve it before they get to where they're going. Do you know, I don't think Sam Altman wants to talk to me.
00:49:37
I don't know. He seems to go on a lot of podcasts. Maybe he does want to go on yours.
00:49:43
I wonder why that is. I wonder why that is. I'd love to speak to him, but I
00:49:48
don't think he wants me to interview him.
00:49:55
Have an open challenge. Maybe money is not the incentive, but whatever attracts people like that. Whoever can convince
00:50:01
you that it's possible to control and make safe super intelligence gets the prize. They come on your show and prove
00:50:08
their case, anyone. If no one claims the prize or even accepts the challenge after a few
00:50:14
years, maybe we don't have anyone with solutions. We have companies valued
00:50:19
again at billions and billions of dollars working on safe super intelligence. We haven't seen their
00:50:26
output yet. Yeah, I'd like to speak to Ilya as well,
00:50:31
because I know he's working on safe super intelligence. So, notice a pattern too. If you look at
00:50:36
the history of AI safety organizations or departments within companies, they
00:50:43
usually start well, very ambitious, and then they fail and disappear. So, Open
00:50:49
AI had a superintelligence alignment team. The day they announced it, I think they said we're going to solve it in 4
00:50:55
years. Like half a year later, they cancelled the team. And there are dozens of
00:51:01
similar examples. Creating perfect safety for super intelligence, perpetual
00:51:07
safety as it keeps improving, modifying, interacting with people, you're never going to get there. It's impossible.
00:51:14
There's a big difference between difficult problems in computer science, NP-complete problems, and impossible
00:51:20
problems. And I think control, indefinite control of super intelligence is such a problem.
00:51:26
So what's the point of trying then, if it's impossible? Well, I'm trying to prove that it is, specifically so that once we
00:51:31
establish something is impossible, fewer people will waste their time claiming they can do it while looking for
00:51:37
money. So many people going, "Give me a billion dollars in 2 years and I'll solve it for you." Well, I don't think
00:51:43
you will. But people aren't going to stop striving towards it. So, if there are no attempts
00:51:48
to make it safe and there's more people increasingly striving towards it, then
00:51:53
it's inevitable. But it changes what we do. If we know that it's impossible to make it right, to make it safe, then this direct path
00:52:00
of just building it as soon as you can becomes a suicide mission. Hopefully fewer people will pursue that; they may go in
00:52:07
other directions. Like, again, I'm a scientist, I'm an engineer, I love AI, I love technology, I use it all the time.
00:52:14
Build useful tools. Stop building agents. Build narrow super intelligence, not a
00:52:19
general one. I'm not saying you shouldn't make billions of dollars. I love billions of dollars.
00:52:25
But uh don't kill everyone, yourself included.
00:52:33
They don't think they're going to though. Then tell us why. I hear things about intuition. I hear things about we'll
00:52:40
solve it later. Tell me specifically in scientific terms. Publish a peer-reviewed paper explaining how you're going to control super
00:52:46
intelligence. Yeah, it's strange. It's strange to even bother if there was even a 1% chance of human extinction. It's strange
00:52:53
to do something like that. If someone told me there was a 1% chance that if I got in a car I might
00:53:00
not be alive, I would not get in the car. If you told me there was a 1% chance that if I drank whatever
00:53:06
liquid is in this cup right now I might die, I would not drink the liquid. Even if there was
00:53:12
a billion dollars if I survived. So there's a 99% chance I get a billion dollars, and a 1% chance I die. I wouldn't drink it. I
00:53:18
wouldn't take the chance. It's worse than that. Not just you die. Everyone dies. Yeah. Yeah.
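The 1% intuition above can be made concrete with one line of arithmetic. As a hedged sketch (the 1% figure is the host's hypothetical, and treating repeated gambles as independent is my assumption), even a "small" per-event risk compounds quickly when taken again and again:

```python
# Illustrative sketch: each gamble is modeled as an independent 1% chance of
# death. The 1% figure is the hypothetical from the conversation.

def survival_probability(per_trial_risk: float, trials: int) -> float:
    """Probability of surviving `trials` independent gambles."""
    return (1.0 - per_trial_risk) ** trials

print(survival_probability(0.01, 1))   # one gamble: 0.99
print(survival_probability(0.01, 69))  # 69 gambles: survival drops below 50%
```

One drink of the cup looks survivable at 99%; a world that keeps rolling those dice does not stay lucky for long, which is the force of the "everyone dies" framing.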
00:53:24
Now, would we let you drink it at any odds? That's for us to decide. You don't get to make that choice for us. To get
00:53:32
consent from human subjects, you need them to comprehend what they are consenting to. If those systems are
00:53:39
unexplainable, unpredictable, how can they consent? They don't know what they are consenting to.
00:53:45
So, it's impossible to get consent by definition. So, this experiment can never be run ethically. By definition
00:53:51
they are doing unethical experimentation on human subjects. Do you think people should be protesting?
00:53:58
There are people protesting. There is Stop AI, there is Pause AI. They block offices of OpenAI. They do it weekly,
00:54:04
monthly, quite a few actions and they're recruiting new people. Do you think more people should be
00:54:10
protesting? Do you think that's an effective solution? If you can get it to a large enough scale to where majority of population is
00:54:17
participating, it would be impactful. I don't know if they can scale from current numbers to that. But I
00:54:23
support everyone trying everything peacefully and legally. And for the for the person listening at
00:54:29
home, what should they be doing? Because they don't want to feel powerless. None of us want to feel powerless.
00:54:35
So it depends on what time scale we're asking about. Are we saying, like, this year your kid goes to college, what
00:54:42
major to pick? Should they go to college at all? Yeah. Should you switch jobs? Should you go into certain industries? Those questions
00:54:48
we can answer. We can talk about the immediate future. What should you do in five years, with this being created? For
00:54:56
an average person, not much. Just like they couldn't influence World War II, nuclear holocaust, anything like that.
00:55:03
It's not something anyone's going to ask them about. Today, if you want to be a
00:55:08
part of this movement, yeah, join Pause AI, join Stop AI, those organizations
00:55:14
currently trying to build up momentum to bring democratic powers to influence
00:55:21
those individuals. So in the near term, not a huge amount. I was wondering if there are any
00:55:27
interesting strategies in the near term. Like, should I be thinking differently about my family? I mean, you've got
00:55:33
kids, right? You got three kids that I know about. Yeah. Three kids. How are you thinking about parenting in
00:55:40
this world that you see around the corner? How are you thinking about what to say to them, the advice to give them, what they should be learning? So there is general advice outside of
00:55:47
this domain, that you should live every day as if it's your last. It's good advice no matter what. If you have
00:55:54
three years left or 30 years left, you lived your best life. So
00:55:59
try to not do things you hate for too long. Do interesting things. Do impactful
00:56:05
things. If you can do all that while helping people, do that. Simulation theory is an interesting sort of
00:56:14
adjacent subject here because as computers begin to accelerate and get more intelligent and we're able to
00:56:21
you know, do things with AI that we could never have imagined. In terms of, like, imagine the world that we could
00:56:26
create with virtual reality. I think it was Google that recently released, what was it called? Like the AI worlds.
00:56:35
You take a picture and it generates a whole world. Yeah. And you can move through the world. I'll put it on the screen for people to see. Google have released this
00:56:42
technology which allows you, I think with a simple prompt actually, to make a three-dimensional world that you can then
00:56:49
navigate through, and that world has memory. So in the world, if you paint on a wall and turn away, when you look back,
00:56:54
the wall is persistent. Yeah, it's persistent. And when I saw that, I go, jeez, bloody hell, this is
00:57:00
this is like the foothills of being able to create a simulation that's indistinguishable from everything I see
00:57:06
here. Right. That's why I think we are in one. That's exactly the reason AI is getting
00:57:12
to the level of creating human agents, human level agents, and virtual reality
00:57:17
is getting to the level of being indistinguishable from ours. So, you think this is a simulation? I'm pretty sure we are in a simulation.
00:57:24
Yeah. For someone that isn't familiar with the simulation arguments, what are the first principles here that convince
00:57:31
you that we are currently living in a simulation? So, you need certain technologies to
00:57:37
make it happen. If you believe we can create human level AI, yeah, and you believe we can create virtual
00:57:43
reality as good as this in terms of resolution, haptics, whatever properties it has, then I commit right now the
00:57:50
moment this is affordable, I'm going to run billions of simulations of this exact moment, making sure you are
00:57:56
statistically in one. Say that last part again. You're going to run, you're going to run,
00:58:02
I'm going to commit right now, and it's very affordable, like 10 bucks a month to run it. I'm going to run a
00:58:08
billion simulations of this interview. Why? Because statistically that means you are
00:58:14
in one right now. The chances of you being in the real one are one in a billion.
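The statistics here are simple self-location arithmetic: one real world among many indistinguishable copies. A minimal sketch (the billion-simulation count is the hypothetical from the conversation, not a measured quantity):

```python
# Illustrative sketch of the self-location argument: one real world plus N
# indistinguishable simulated copies of this moment. N is hypothetical.

def probability_real(num_simulations: int) -> float:
    """Chance that a randomly placed observer is in the one real world."""
    return 1.0 / (num_simulations + 1)

print(probability_real(1_000_000_000))  # about one in a billion
```

The force of the argument is that committing to run the simulations in the future is enough, because the odds apply to any observer who cannot tell the copies apart from the original.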
00:58:19
Okay. So to make sure I'm clear on this, it's a retroactive placement. Yeah. So the minute it's affordable,
00:58:26
then you can run billions of them and they would feel and appear to be exactly
00:58:32
like this interview right now. Yeah. So assuming the AI has internal states,
00:58:37
experiences, qualia, some people argue that they don't. Some say they already have it. That's a separate philosophical
00:58:43
question. But if we can simulate this, I will.
00:58:48
Some people might misunderstand. You're not saying that you will. You're saying that someone will.
00:58:55
I can also do it. I don't mind. Okay. Of course, others will do it before I get there. If I'm getting it for $10,
00:59:02
somebody got it for a thousand. That's not the point. If you have the technology, we're definitely running a lot of simulations
00:59:08
for research, for entertainment, games, uh, all sorts of reasons. And the number
00:59:15
of those greatly exceeds the number of real worlds we're in. Look at all the video games kids are playing. Every kid
00:59:21
plays 10 different games. There are, you know, a billion kids in the world. So there are 10 billion simulations to one
00:59:27
real world. Mhm. Even more so when we think about
00:59:33
advanced AI super intelligent systems, their thinking is not like ours. They think in a lot more detail. They run
00:59:39
experiments. So running a detailed simulation of some problem at the level
00:59:45
of creating artificial humans and simulating the whole planet would be something they'll do routinely. So there
00:59:51
is a good chance this is not me doing it for $10. It's a future simulation thinking about something in this world.
00:59:59
Hm. So it could be the case that
01:00:07
a species of humans or a species of intelligence in some form got to this
01:00:12
point where they could affordably run simulations that are
01:00:17
indistinguishable from this, and they decided to do it, and this is it right now.
01:00:25
And it would make sense that they would run simulations as experiments or for games or for entertainment. And also
01:00:31
when we think about time, in the world that I'm in, in this simulation that I could be in right now, time feels long
01:00:37
relatively, you know, I have 24 hours in a day, but in their world it could
01:00:42
be that time is relative. Relative, yeah, it could be a second. My whole life could be a millisecond in
01:00:47
there. Right. You can change the speed of simulations you're running, for sure.
01:00:53
So your belief is that this is most likely a simulation. And there is a lot of agreement on that. If you look, again,
01:00:59
returning to religions, every religion basically describes a super intelligent being, an engineer, a
01:01:06
programmer creating a fake world for testing purposes or for whatever. But if
01:01:12
you took the simulation hypothesis paper, went to the jungle, talked to a local tribe, and in
01:01:20
their language told them about it, then came back two generations later, they'd have a religion. That's basically what the
01:01:27
story is. Religion. Yeah. It describes the simulation theory: basically, somebody created us.
01:01:33
So by default that was the first theory we had. And now with science more and more people are going like I'm giving it
01:01:39
non-trivial probability. A few people are as high as I am, but a lot of people give it some credence.
01:01:45
What percentage are you at in terms of believing that we are currently living in a simulation? Very close to certainty.
01:01:52
And what does that mean for the nature of your life? If you're close to 100%
01:01:57
certain that we are currently living in a simulation, does that change anything in your life?
01:02:02
So all the things you care about are still the same. Pain still hurts. Love is still love, right? Like those things are
01:02:08
not different. So it doesn't matter. They're still important. That's what matters. The little 1% difference is that
01:02:16
I care about what's outside the simulation. I want to learn about it. I write papers about it. So that's the only impact.
01:02:22
And what do you think is outside of the simulation? I don't know. But we can look at this
01:02:29
world and derive some properties of the simulators. So clearly brilliant engineer, brilliant scientist, brilliant
01:02:36
artist, not so good with morals and ethics. Room for improvement
01:02:42
in our view of what morals and ethics should be. Well, we know there is suffering in the
01:02:47
world. So unless you think it's ethical to torture children, then I'm
01:02:52
questioning your approach. But in terms of incentives, to create a positive incentive you probably also
01:02:58
need to create negative incentives. Suffering seems to be one of the negative incentives built into our
01:03:03
design to stop me doing things I shouldn't do. So like put my hand in a fire, it's going to hurt.
01:03:09
But it's all about levels, levels of suffering, right? So unpleasant stimuli, negative feedback doesn't have to be at
01:03:15
like negative infinity hell levels. You don't want to burn alive and feel it.
01:03:20
You want to be like, "Oh, this is uncomfortable. I'm going to stop." It's interesting because we assume
01:03:26
that they don't have great morals and ethics, but we do too: we take animals and cook them and eat them
01:03:32
for dinner, and we also conduct experiments on mice and rats. But to get university approval to
01:03:37
conduct an experiment, you submit a proposal and there is a panel of ethicists who would say you can't
01:03:43
experiment on humans, you can't burn babies, you can't eat animals alive. All those things would be banned
01:03:50
in most parts of the world where they have ethical boards. Yeah. Some places don't bother with it, so
01:03:56
they have an easier approval process. It's funny, when you talk about the simulation theory, there's an
01:04:02
element of the conversation that makes life feel less meaningful in a weird way. I know it doesn't matter,
01:04:12
but whenever I have this conversation with people off the podcast about whether we're living in a simulation, you almost see
01:04:19
a little bit of meaning drain out of their life for a second, and then they forget and carry on. But the
01:04:24
thought that this is a simulation almost posits that it's not important. I think humans want to
01:04:32
believe that this is the highest level, that we're the most important, that it's all about us. We're
01:04:37
quite egotistical by design. It's just an interesting observation I've always had when I have these
01:04:43
conversations with people, that it seems to strip something out of their life. Do you feel religious people feel that
01:04:48
way? They know there is another world and the one that matters is not this one. Do you feel they don't value their
01:04:55
lives the same? I guess in some religions, I think, they believe this world was
01:05:01
created for them and that they are going to go to heaven or hell, and that still puts them at the very
01:05:07
center of it. But if it's a simulation, you know, we could just be some computer game that a four-year-old
01:05:14
alien is messing around with while he's got some time to burn. But maybe there is, you know, a test, and
01:05:21
there is a better simulation you go to and a worse one. Maybe there are different difficulty levels. Maybe you
01:05:27
want to play it on a harder setting next time. I've just invested millions into this
01:05:32
and become a co-owner of the company. It's a company called Ketone IQ. And the story is quite interesting. I started
01:05:39
talking about ketosis on this podcast and the fact that I'm very low carb, very very low sugar, and my body
01:05:44
produces ketones which have made me incredibly focused, have improved my endurance, have improved my mood, and
01:05:49
have made me more capable at doing what I do here. And because I was talking about it on the podcast, a couple of weeks later, these showed up on my desk
01:05:56
in my HQ in London, these little shots. And oh my god, the impact this had on my
01:06:03
ability to articulate myself, on my focus, on my workouts, on my mood, on
01:06:08
stopping me crashing throughout the day was so profound that I reached out to the founders of the company, and now I'm
01:06:14
a co-owner of this business. I highly, highly recommend you look into this. I highly recommend you look at the science
01:06:20
behind the product. If you want to try it for yourself, visit ketone.com/stephven for 30% off your subscription order. And
01:06:26
you'll also get a free gift with your second shipment. That's ketone.com/stephven.
01:06:33
And I'm so honored that once again, a company I own can sponsor my podcast. I've built companies from scratch and
01:06:39
backed many more. And there's a blind spot that I keep seeing in early stage founders. They spend very little time
01:06:44
thinking about HR. And it's not because they're reckless or they don't care. It's because they're obsessed with
01:06:50
building their companies. And I can't fault them for that. At that stage, you're thinking about the product, how to attract new customers, how to grow
01:06:55
your team, really, how to survive. And HR slips down the list because it doesn't feel urgent. But sooner or
01:07:02
later, it is. And when things get messy, tools like our sponsor today, Just Works, go from being a nice to have to
01:07:08
being a necessity. Something goes sideways and you find yourself having conversations you did not see coming. This is when you learn that HR really is
01:07:15
the infrastructure of your company, and without it things wobble. Just Works stops you learning this the hard way. It
01:07:20
takes care of the stuff that would otherwise drain your energy and your time automating payroll, health
01:07:25
insurance benefits and it gives your team human support at any hour. It grows with your small business from startup
01:07:32
through to growth even when you start hiring team members abroad. So if you want HR support that's there through the
01:07:37
exciting times and the challenging times head to justworks.com now. That's just
01:07:43
works.com. And do you think much about longevity? A lot. Yeah. It's probably the second
01:07:48
most important problem because if AI doesn't get us, that will. What do you mean? You're going to die of old age.
01:07:56
Which is fine. That's not good. You want to die? I mean, you don't have to. It's just a disease.
01:08:02
We can cure it. Nothing stops you from living forever
01:08:08
as long as the universe exists. Unless we escape the simulation. But we wouldn't want a world where
01:08:14
everybody could live forever, right? That would be... Sure, we do. Why? Who do you want to die?
01:08:19
Well, I don't know. I mean, I say this because it's all I've ever known that people die. But wouldn't the world
01:08:24
become pretty overcrowded if No, you stop reproducing if you live forever. You have kids because you want a replacement for you if you live
01:08:30
forever. You're like, I'll have kids in a million years. That's cool. I'll go explore universe first. Plus, if you
01:08:37
look at actual population dynamics outside of like one continent, we're all shrinking. We're not growing.
01:08:43
Yeah. This is crazy. It's crazy that the more rich people get, the fewer kids they have, which aligns with what you're
01:08:49
saying. And I do actually think I think if I'm going to be completely honest here, I think if I knew that I was going
01:08:55
to live to a thousand years old, there's no way I'd be having kids at 30. Right. Exactly. Biological clocks are
01:09:01
based on terminal points. Whereas if your biological clock is infinite, you'll be like one day.
01:09:07
And you think that's close being able to extend our lives? It's one breakthrough away. I think
01:09:13
somewhere in our genome, we have this rejuvenation loop and it's set to basically give us at most 120. I think
01:09:20
we can reset it to something bigger. AI is probably going to accelerate that.
01:09:26
That's one very important application area. Yes, absolutely. So maybe Bryan Johnson's right when he
01:09:38
says don't die. He keeps saying to me, he's like, don't die now. Don't die ever.
01:09:38
But you know, he's saying like, don't die before we get to the technology, right? Longevity escape velocity. You want to live long enough to live
01:09:45
forever. If at some point every year of your existence adds two years to your
01:09:51
existence through medical breakthroughs, then you live forever. You just have to make it to that point of longevity
01:09:56
escape velocity. And he thinks that longevity escape velocity, especially in a world of AI, is
01:10:02
decades away minimum. Which means, as soon as we fully understand the human
01:10:08
genome I think we'll make amazing breakthroughs very quickly because we know some people have genes for living
01:10:14
way longer. We have generations of people who are centenarians. So if we can understand that and copy it, or copy it
01:10:21
from some animals which live practically forever, we'll get there. Would you want to live forever? Of course.
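The longevity escape velocity arithmetic above ("every year adds two years of life expectancy") can be sketched in a few lines. This is a hypothetical illustration of the idea, not anything from the conversation itself; the function name and parameters are my own.

```python
# Hypothetical sketch of "longevity escape velocity": each calendar year lived,
# medical progress adds `gain` years of remaining life expectancy.
# If gain >= 1, remaining years never hit zero; if gain < 1, they run out.

def years_survived(remaining: float, gain: float, horizon: int = 1000) -> int:
    """Count calendar years lived before remaining life expectancy hits zero."""
    lived = 0
    while remaining > 0 and lived < horizon:
        remaining -= 1.0   # one year passes
        remaining += gain  # breakthroughs add `gain` years
        lived += 1
    return lived

# With no progress (gain=0), 40 remaining years means 40 years lived.
print(years_survived(40, 0.0))  # 40
# At the "2 years added per year" rate from the conversation, you never
# run out (capped here at the 1000-year horizon).
print(years_survived(40, 2.0))  # 1000
```

Anything below a gain of one year per year only stretches the timeline; at or above it, the loop never terminates on its own, which is the "escape velocity" threshold.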
01:10:28
Reverse the question. Let's say we lived forever and you asked me, "Do you want to die in 40 years?" Why would I
01:10:33
say yes? I don't know. Maybe you're just used to the default. Yeah, I am used to the default. And nobody wants to die. Like no matter
01:10:38
how old you are, nobody goes, "Yeah, I want to die this year." Everyone's like, "Oh, I want to keep living." I wonder if life and everything would be
01:10:46
less special if I lived for 10,000 years. I wonder if going to Hawaii for
01:10:52
the first time or I don't know a relationship all of these things would be way less special to me if they were
01:11:00
less scarce. It could be individually less special, but there is so much more you can do.
01:11:07
Right now you can only make plans to do something for a decade or two. You cannot have an ambitious plan of working
01:11:13
on this project for 500 years. Imagine the possibilities open to you with infinite time in an infinite universe.
01:11:21
Gosh. Well, it feels exhausting. It's a big amount of time. Also, I don't
01:11:27
know about you, but I don't remember like 99% of my life in detail. I remember big highlights. So, even if I
01:11:33
enjoyed Hawaii 10 years ago, I'll enjoy it again. Are you thinking about that really practically, in terms of, you know,
01:11:47
the same way that Bryan Johnson is? Bryan Johnson is convinced that we're maybe two decades away from being able to extend life. Are you
01:11:47
thinking about that practically and are you doing anything about it? Diet, nutrition. I try to think about
01:11:53
investment strategies which pay out in a million years. Yeah. Really? Yeah. Of course. What do you mean? Of course. Of course.
01:11:59
Why wouldn't you? If you think this is what's going to happen, you should try that. So, if we get AI right,
01:12:05
what happens to the economy? We talked about Worldcoin. We talked about free labor.
01:12:10
What's money? Is it now Bitcoin? Do you invest in that? Is there something else which becomes the only resource we
01:12:16
cannot fake? So those things are very important research topics. So you're investing in Bitcoin, aren't
01:12:22
you? Yeah, because it's a
01:12:28
it's the only scarce resource. Nothing else has scarcity. Everything else, if the
01:12:33
price goes up, we'll make more of. I can make as much gold as you want given a proper price point. You cannot make more
01:12:39
Bitcoin. Some people say Bitcoin is just this thing on a computer that we all agreed has value. We are a thing on a computer,
01:12:48
remember? Okay. So, I mean, not investment advice,
01:12:53
but investment advice. It's hilarious how that's one of those things where they tell you it's not, but you know it is immediately. It's like
01:13:00
"your call is important to us," which means your call is of zero importance. And investment advice is like that.
01:13:05
Yeah. Yeah. When they say no investment advice, it's definitely investment advice. Um but it's not investment advice. Okay. So you're bullish on
01:13:11
Bitcoin because it can't be messed with. It is the only thing for which we know how
01:13:18
much there is in the universe. So gold, there could be an asteroid made out of
01:13:24
pure gold heading towards us devaluing it. Well also killing all of us. But
01:13:30
Bitcoin I know exactly the numbers and even the 21 million is an upper limit.
01:13:35
How many are lost? Passwords forgotten. I don't know what Satoshi is doing with his million. It's getting scarcer every
01:13:42
day while more and more people are trying to accumulate it.
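The 21 million upper limit he cites isn't a number stored anywhere; it falls out of Bitcoin's published issuance schedule, where the block subsidy starts at 50 BTC and halves every 210,000 blocks. A minimal sketch summing that schedule (a floating-point approximation; the real protocol counts in integer satoshis):

```python
# Sum Bitcoin's geometric issuance schedule: 210,000 blocks per halving
# epoch, subsidy starting at 50 BTC and halving each epoch, until the
# subsidy drops below one satoshi (1e-8 BTC) and issuance stops.

def total_supply_btc() -> float:
    subsidy = 50.0
    total = 0.0
    while subsidy >= 1e-8:
        total += 210_000 * subsidy
        subsidy /= 2
    return total

print(round(total_supply_btc()))  # 21000000
```

The exact consensus total is slightly under 21 million because sub-satoshi remainders are truncated at each halving, which is why 21 million is an upper limit rather than the final count.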
01:13:47
Some people worry that it could be hacked with a supercomputer. A quantum computer can break that
01:13:53
algorithm. There are strategies for switching to quantum-resistant cryptography for that. And quantum
01:13:59
computers are still kind of weak. Do you think there's any changes to my life that I should make following this
01:14:07
conversation? Is there anything that I should do differently the minute I walk out of this door? I assume you already invest in Bitcoin
01:14:13
heavily. Yes, I'm an investor in Bitcoin. Business financial advice. Uh, no. Just
01:14:18
you seem to be winning. Maybe it's your simulation. You're rich, handsome, you have famous people hang out with you.
01:14:25
Like that's pretty good. Keep it up.
01:14:33
Robin Hanson has a paper about how to live in a simulation, what you should be doing in it. And your goal is to do
01:14:39
exactly that. You want to be interesting. You want to hang out with famous people so they don't shut it down. So you are part of what
01:14:45
someone's actually watching on pay-per-view or something like that. Oh, I don't know if you want to be watched on pay-per-view because then it
01:14:51
would be the same. Then they shut you down. If no one's watching, why would they play it?
01:14:57
I'm saying, don't you want to fly under the radar? Don't you want to be the guy just living a normal life whom the masters don't notice?
01:15:02
Those are NPCs. Nobody wants to be an NPC. Are you religious?
01:15:08
Not in any traditional sense, but I believe in simulation hypothesis which has a super intelligent being. So,
01:15:14
but you don't believe in the like you know the religious books. So different religions. This religion
01:15:20
will tell you don't work Saturday. This one don't work Sunday, don't eat pigs, don't eat cows. They just have local
01:15:27
traditions on top of that theory. That's all it is. They're all the same religion. They all worship a super intelligent
01:15:32
being. They all think this world is not the main one. And they argue about which animal not to
01:15:39
eat. Skip the local flavors. Concentrate on what all the religions have in
01:15:45
common. And that's the interesting part. They all think there is something greater
01:15:50
than humans. Very capable, all-knowing, all-powerful. When I run a computer game, for those characters in the
01:15:57
game, I am that. I can change the whole world. I can shut it down. I know everything in that world.
01:16:05
It's funny. I was thinking, earlier on when we started talking about the simulation theory, that there might be something innate in us that has
01:16:11
been left by the creator, almost like a clue, like an intuition, because that's what we tend to have through history.
01:16:17
Humans have this intuition. Yeah. That all the things you said are true, that there's somebody above, and
01:16:24
we have generations of people who were religious, who believed God told them and was there and gave them books, and that
01:16:31
has been passed on for many generations. This is probably one of the earliest generations not to have universal
01:16:37
religious belief. I wonder if those people are telling the truth, those
01:16:43
people that say God came to them and said something. Imagine that. Imagine if that was part of this. I'm looking at the news today. Something
01:16:48
happened an hour ago and I'm getting conflicting results. Even with cameras, with drones, with
01:16:54
some guy on Twitter right there, I still don't know what happened. And you think 3,000
01:17:00
years ago we have an accurate record through translations? No, of course not. You know these conversations you have
01:17:06
around AI safety, do you think they make people feel good?
01:17:12
I don't know if they feel good or bad, but people find it interesting. It's one of those topics. So I can have a
01:17:18
conversation about different cures for cancer with an average person, but everyone has opinions about AI. Everyone
01:17:24
has opinions about simulation. It's interesting that you don't have to be highly educated or a genius to
01:17:30
understand those concepts. Cuz I tend to think that it makes me feel
01:17:36
not positive. And I understand that, but I've always
01:17:42
been of the opinion that you shouldn't live in a world of
01:17:48
delusion where you're just seeking to be positive, have sort of uh positive
01:17:54
things said and avoid uncomfortable conversations. Actually, progress often in my life comes from like having
01:17:59
uncomfortable conversations, becoming aware about something, and then at least being informed about how I can do
01:18:05
something about it. And so I think that's why I asked
01:18:10
the question, because I assume most normal human beings will listen to
01:18:16
these conversations and go, gosh, that's scary, and this is concerning,
01:18:24
and then I keep coming back to this point, which is: what do I do with that energy? Yeah. But I'm trying to point out this
01:18:31
is not different than so many conversations we can talk about. Oh, there is starvation in this region,
01:18:37
genocide in this region, you're all dying, cancer is spreading, autism is up. You can always find something to be
01:18:45
very depressed about and nothing you can do about it. And we are very good at concentrating on what we can change,
01:18:52
what we are good at, and uh basically not trying to embrace the whole world as
01:18:59
a local environment. So historically, you grew up with a tribe, you had a dozen people around you. If something
01:19:05
happened to one of them, it was very rare. It was an accident. Now if I go on the internet, somebody gets killed
01:19:10
everywhere all the time. Somehow thousands of people are reported to me every day. I don't even have time to
01:19:16
notice. It's just too much. So I have to put filters in place. And I think this topic
01:19:23
is one people are very good at filtering, as in: this was an entertaining
01:19:30
talk I went to, kind of like a show, and the moment I exit, it ends. So usually I
01:19:35
would go give a keynote at a conference and tell them, basically, you're all
01:19:40
going to die, you have two years left, any questions? And people will be like, will I lose
01:19:46
my job? How do I lubricate my sex robot? All sorts of nonsense, clearly not
01:19:51
understanding what I'm trying to say there and those are good questions interesting questions but not fully
01:19:58
embracing the result. They're still in their bubble of local versus global.
01:20:03
and the people that disagree with you the most as it relates to AI safety what is it that they say
01:20:10
What are their counterarguments, typically? So many don't engage at all. They
01:20:16
have no background knowledge in the subject. They never read a single book, a single paper, not just by me, by anyone.
01:20:24
They may even be working in the field. So they are doing some machine learning work for some company maximizing ad
01:20:30
clicks, and to them those systems are very narrow. And then they hear that, oh,
01:20:37
this thing is going to take over the world. It has no hands. How would it do that? It's nonsense. This guy is
01:20:43
crazy. He has a beard. Why would I listen to him? Right? Then they start reading a little bit. They
01:20:50
go, "Oh, okay. So maybe AI can be dangerous. Yeah, I see that. But we always solve problems in the past. We're
01:20:56
going to solve them again. I mean at some point we fixed a computer virus or something. So it's the same." And uh
01:21:03
basically the more exposure they have, the less likely they are to keep that
01:21:08
position. I know many people who went from super careless developer to safety
01:21:16
researcher. I don't know anyone who went from I worry about AI safety to like there is nothing to worry about.
01:21:29
What are your closing statements? Uh let's make sure there is not a closing statement we need to give for
01:21:34
humanity. Let's make sure we stay in charge in control. Let's make sure we
01:21:39
only build things which are beneficial to us. Let's make sure people who are making those decisions are remotely
01:21:46
qualified to do it. They are good not just at science, engineering and
01:21:51
business, but also have moral and ethical standards. And if you're doing something which
01:21:57
impacts other people, you should ask their permission before you do that. If there was one button in front of you and
01:22:04
it would shut down every AI company in the world
01:22:09
right now permanently with the inability for anybody to start a new one, would you press the button?
01:22:15
Are we losing narrow AI or just super intelligent AGI part? Losing all of AI.
01:22:21
That's a hard question because AI is extremely important. It controls stock
01:22:26
markets, power plants. It controls hospitals. It would be a devastating
01:22:32
accident. Millions of people would lose their lives. Okay, we can keep narrow AI. Oh yeah, that's what we want. We want
01:22:39
narrow AI to do all this for us, but not a God we don't control doing things to us.
01:22:45
So you would stop it. You would stop AGI and super intelligence. We have AGI. What we have today is great
01:22:51
for almost everything. We can make secretaries out of it. 99% of economic
01:22:56
potential of current technology has not been deployed. We make AI so quickly it doesn't have time to propagate through
01:23:02
the industry, through technology. Something like half of all jobs are considered BS jobs. They don't need to
01:23:08
be done. So those don't even need to be automated. They can be just gone. But I'm saying we can replace 60%
01:23:16
of jobs today with existing models. We've not done that. So if the goal is
01:23:21
to grow the economy, to develop, we can do it for decades without having to create
01:23:27
super intelligence as soon as possible. Do you think globally especially in the western world unemployment is only going to go up from here? Do you think
01:23:33
relatively this is the low of unemployment? I mean it fluctuates a lot with other
01:23:38
factors. There are wars, there are economic cycles. But overall, the more jobs you automate, and the higher the
01:23:45
intellectual bar to start a job, the fewer people qualify.
01:23:50
So if we plotted it on a graph over the next 20 years, you're assuming
01:23:55
unemployment is gradually going to go up over that time. I think so. Fewer and fewer people would be able to contribute. Already we kind
01:24:03
of understand it, because we created the minimum wage. We understood some people don't contribute enough economic value
01:24:09
to get paid anything really. So we had to force employers to pay them more than
01:24:15
they're worth. Mhm. And we haven't updated it. It's what, $7.25 federally in the US? If it kept up with the
01:24:22
economy, it should be like $25 an hour now, which means all these people making
01:24:28
less are not contributing enough economic output to justify what they're getting paid.
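A back-of-envelope for the "$7.25 should be like $25" point: the 1968 federal minimum wage was $1.60, and compounding it at an assumed long-run growth rate puts it in the mid-$20s today. The 5% annual rate here (roughly inflation plus productivity) is my assumption for illustration, not a figure from the conversation.

```python
# Compound a wage at an assumed annual growth rate. The 1968 base of
# $1.60 is the historical federal minimum; the 5%/yr rate is an
# assumed stand-in for inflation-plus-productivity growth.

def compounded(wage: float, rate: float, years: int) -> float:
    """Wage after `years` years of compounding at annual `rate`."""
    return wage * (1 + rate) ** years

# ~57 years from 1968 at an assumed 5%/yr lands in the mid-$20s,
# the ballpark he is gesturing at.
print(round(compounded(1.60, 0.05, 57), 2))
```

Using pure CPI inflation instead of this blended rate gives a lower figure (around $14-$15), which is why productivity-linked estimates like the $25 one differ from inflation-only ones.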
01:24:35
We have a closing tradition on this podcast where the last guest leaves a question for the next guest not knowing who they're leaving it for. And the
01:24:40
question left for you is: what are the most important characteristics
01:24:47
for a friend, colleague or mate?
01:24:52
Those are very different types of people. But for all of them, loyalty is number
01:24:58
one. And what does loyalty mean to you? Not betraying you, not screwing you, not
01:25:07
cheating on you, despite the temptation,
01:25:12
despite the world being as it is, situation, environment.
01:25:18
Dr. Roman, thank you so much. Thank you so much for doing what you do, because you're starting a conversation, and pushing forward a conversation, and
01:25:24
doing research that is incredibly important, and you're doing it in the face of a lot of skeptics.
01:25:30
I'd say there's a lot of people that have a lot of incentives to discredit what you're saying and what you do
01:25:36
because they have their own incentives and they have billions of dollars on the line and they have their jobs on the
01:25:41
line potentially as well. So, it's really important that there are people out there that are willing to,
01:25:47
I guess, stick their head above the parapet and come on shows like this and go on big platforms and talk about the
01:25:55
unexplainable, unpredictable, uncontrollable future that we're heading towards. So, thank you for doing that.
01:26:00
This book, which I think everybody should check out if they want a continuation of this conversation, I
01:26:05
think was published in 2024, gives a holistic view on many of the things we've talked about today. Um,
01:26:11
preventing AI failures and much, much more, and I'm going to link it below for anybody that wants to read it. If people
01:26:16
want to learn more from you, if they want to go further into your work, what's the best thing for them to do? Where do they go?
01:26:21
They can follow me. Follow me on Facebook. Follow me on X. Just don't follow me home. Very important. Not follow you home, okay. Okay, so I'll put
01:26:27
your Twitter, your X account, below as well so people can follow you there. And yeah, thank you so much for doing what
01:26:33
you do. Remarkably eye-opening. It's given me so much food for thought, and it's actually convinced me more that we are living in a simulation. But it's also
01:26:39
made me think quite differently about religion, I have to say, because you're right: all the religions, when you get
01:26:45
away from the sort of the local traditions they do all point at the same thing and actually if they are all pointing at the same thing then maybe
01:26:51
the fundamental truths that exist across them should be something I pay more attention to things like loving thy
01:26:56
neighbor, things like the fact that we are all one, that there's a divine creator. And maybe also, they all seem to
01:27:03
point to a consequence beyond this life. So maybe I should be thinking more about how I behave in this life, and where
01:27:09
I might end up thereafter. Roman, thank you. Amen. [Music]
01:27:33
[Music]


Episode Highlights

  • The Terrifying Truth of AI
    Dr. Roman Yampolskiy discusses the urgent need for AI safety as capabilities grow exponentially.
    “We don't know how to make them safe.”
    @ 03m 04s
    September 04, 2025
  • Predictions for 2027
    Yampolskiy predicts AGI by 2027, leading to unprecedented unemployment levels.
    “We're looking at a world where we have levels of unemployment we never seen before.”
    @ 10m 55s
    September 04, 2025
  • The Future of Work
    In a world dominated by AGI, the concept of retraining for jobs becomes obsolete.
    “If I'm telling you that all jobs will be automated, then there is no plan B.”
    @ 16m 33s
    September 04, 2025
  • Human Knowledge Decline
    As AI advances, our understanding may diminish, leading to a paradox of knowledge.
    “Every day, as a percentage of total knowledge, I get dumber.”
    @ 25m 11s
    September 04, 2025
  • The Uncontrollable Nature of AI
    Once super intelligence is achieved, it may operate beyond human control.
    “It is not a tool. It's an agent. It makes its own decisions.”
    @ 35m 03s
    September 04, 2025
  • Future of Humanity
    The future could either see the end of human existence or a world beyond comprehension.
    “If you had to guess, it's either free of human existence or it's completely not comprehensible to someone like us.”
    @ 46m 32s
    September 04, 2025
  • The Dangers of AI
    Experts warn that unchecked AI development could lead to disastrous outcomes for humanity.
    “You will not be happy with the outcome.”
    @ 47m 30s
    September 04, 2025
  • Ethics of AI Experimentation
    The ethical implications of AI experimentation on humans are questioned, highlighting the need for consent.
    “This experiment can never be run ethically.”
    @ 53m 39s
    September 04, 2025
  • The Simulation Hypothesis
    The idea that we might be living in a simulation gains traction as technology advances.
    “I'm pretty sure we are in a simulation.”
    @ 57m 24s
    September 04, 2025
  • Living Forever: A Conversation
    Exploring the implications of immortality and its effects on human behavior.
    “Why would I say yes?”
    @ 01h 10m 28s
    September 04, 2025
  • The Value of Loyalty
    Discussing the importance of loyalty in relationships and its meaning.
    “Loyalty is number one.”
    @ 01h 24m 58s
    September 04, 2025
  • The Future of AI
    A deep dive into the potential and risks of AI technology.
    “Thank you for doing what you do because you're starting a conversation.”
    @ 01h 25m 24s
    September 04, 2025

Key Moments

  • AI Safety Concerns @ 03:04
  • Knowledge Paradox @ 25:11
  • Unplugging AI Myth @ 30:27
  • Technological Breakthroughs @ 36:48
  • Future Extremes @ 46:32
  • Hope and Action @ 48:18
  • Living Forever @ 1:10:28
  • Investment Insights @ 1:11:59
