
Yuval Noah Harari: An Urgent Warning They Hope You Ignore. More War Is Coming!

January 11, 2024 / 01:46:10

This episode features historian Yuval Noah Harari discussing the future of humanity, artificial intelligence, and the consequences of our technological advancements. Key topics include the dangers of AI, the potential for immortality, and the importance of understanding historical narratives.

Harari expresses concern about the rapid development of AI, noting that it is the first technology capable of making independent decisions, which could lead to humans becoming mere puppets. He warns that as algorithms increasingly influence critical decisions, we risk losing our agency and understanding of reality.

The conversation also touches on the implications of pursuing immortality, with Harari suggesting that living indefinitely could lead to unprecedented anxiety and a loss of meaning in life. He emphasizes the need to reflect on what makes life valuable rather than merely seeking to extend it.

Harari further discusses the potential societal divides created by advancements in bioengineering and AI, cautioning against a future where the rich gain access to enhancements that could lead to biological inequality.

In closing, Harari encourages listeners to focus on one issue they care about and work collaboratively to make a difference, highlighting the power of human cooperation in addressing the challenges we face.

TL;DR

Yuval Noah Harari discusses AI, immortality, and humanity's future challenges, emphasizing the need for reflection and cooperation.

Video

00:00:00
We are now in a new era of wars. And unless you reestablish order fast, then
00:00:06
we are doomed. Yuval Noah Harari, one of the brightest minds on planet Earth, historian, a bestselling author
00:00:12
of some of the most influential non-fiction books in the world today. I think we are very near the end of our
00:00:18
species because people often spend so much effort trying to gain something
00:00:23
without understanding the consequences. For example, we will get to a life where
00:00:29
you can live indefinitely. But realizing that you have a chance to live forever,
00:00:34
but if there is an accident, you die. The people who will be in that situation will be at a level of anxiety and terror
00:00:42
unlike anything that we know. Then you have artificial intelligence and the
00:00:47
world is is not ready for it. It's the first technology in history that can make decisions by itself and take power
00:00:54
away from us, to hack human beings, manipulate our behavior, and make all these decisions for us or about us.
00:01:02
Whether to give you a loan, whether to give you a mortgage, dating apps shaping your romantic life. But the real problem
00:01:08
is that increasingly the humans at the top could be puppets. When the most
00:01:14
consequential decisions are made by algorithms, global financial decisions, wars. This is extremely dangerous, but
00:01:20
it's not inevitable. Humans can change it. But with what's to come, are you optimistic about the future?
00:01:27
I'm very worried about two things. First of all, quick one. This is really, really
00:01:32
fascinating to me. On the back end of our YouTube channel, it says that 69.9%
00:01:38
of you that watch this channel frequently over the lifetime of this channel haven't yet hit the subscribe button. I just wanted to ask you a
00:01:44
favor. It helps this channel so much if you choose to subscribe. Helps us scale the guests, helps us scale the
00:01:50
production, and it makes this show bigger. So, if I could ask you for one favor, if you've watched the show before and you've enjoyed it and you like this
00:01:56
episode that you're currently watching, could you please hit the subscribe button? Thank you so much. And I will repay that gesture by making sure that
00:02:03
everything we do here gets better and better and better and better. That is a promise I'm willing to make you. Do we have a deal?
00:02:11
[Music]
00:02:16
I have three of your books here and these are three books that sent a huge
00:02:22
tidal wave, a ripple, through society. With these books and with all of the work that you're doing now, with the
00:02:28
lectures you give, the interviews you give, what is your mission? What
00:02:33
is it, if I were to summarize what your collective mission is with your work? What is that?
00:02:39
It's to clarify and to focus the public conversation, the global conversation,
00:02:44
uh to help people focus on the most important challenges that are facing humankind and also to bring at least a
00:02:52
little bit of clarity to the collective and to the individual mind. I mean,
00:02:57
one of my main messages in all the books is that our minds are like factories
00:03:04
that constantly produce stories and fictions that then come
00:03:10
between us and the world. And we often spend our lives interacting with
00:03:16
fictions that we or that other people created, completely losing
00:03:22
touch with reality. And my job, and I think the job of historians more
00:03:28
generally is to show us a way out.
00:03:33
Inherent in much of your work is what feels like a warning.
00:03:40
And I've I've watched hundreds of videos that you've produced or interviews you've done um all around the world and
00:03:48
it feels like you're trying to warn us about something, multiple things. Mhm. If my estimation there is correct,
00:03:56
what is the warning? Much of what we take to be real is
00:04:02
fictions. And the reason that fictions are so central in human
00:04:07
history is because we control the planet, rather than the chimpanzees or the
00:04:13
elephants or any of the other animals, not because of some kind of individual genius that each of us has,
00:04:20
but because we can cooperate much better than any other animal. We can cooperate
00:04:26
in much larger numbers and also much more flexibly. And the reason we can do that is because
00:04:34
we can create and believe in fictional stories, because every large-scale human
00:04:39
cooperation, whether religion or nations or corporations,
00:04:45
is based on mythologies, on fictions. Again, I'm not just talking about gods.
00:04:51
This is the easy example. Money is also a fiction that we created. Corporations
00:04:57
are a fiction. They exist only in our minds. Even lawyers would tell you
00:05:02
that corporations are legal fictions. And this is on the one hand such a
00:05:08
source of immense power. But on the other hand, again, the danger is that we completely lose touch with
00:05:14
reality and we are manipulated by all
00:05:20
these fictions, by all these stories. Again, stories are not bad. They are tools. As long as we use them to
00:05:27
cooperate and to help each other, that's wonderful. Um, money is not bad. If we
00:05:33
didn't have money, we would not have a trade network. Everybody would have, maybe with their friends and family,
00:05:39
to produce everything by themselves like the chimpanzees do. The fact that we can enjoy food and clothing and
00:05:48
medicines and entertainment created by people on the other side of
00:05:54
the world is largely because of money. But if we forget that this is a tool that we created in order to help
00:06:00
ourselves, and instead this tool kind of enslaves us and runs our life. And,
00:06:09
you know, I'm now just back home in Israel; there is a terrible war being
00:06:15
waged. And most wars in history, and also now, are about stories, they're about
00:06:21
fictions. People think that humans fight over the same things that wolves or
00:06:27
chimpanzees fight about, that we fight about territory, that we fight about food. It sometimes happens, but most
00:06:34
wars in history were not really about territory or food. There is enough land,
00:06:40
for instance, between the Jordan River and the Mediterranean to build houses and schools and hospitals for everybody.
00:06:47
And there is certainly enough food. There's no shortage of food. But people have different mythologies, different
00:06:54
stories in their minds and they can't find a common story they can agree on. And this is at the root of
00:07:01
most human conflicts. And being able to tell the difference between what is a
00:07:08
fiction in our own mind and what is the reality. This is a crucial skill, and
00:07:16
we are not getting better at finding this difference as time
00:07:22
goes on. And also with new technologies, which I write about a lot, like artificial
00:07:28
intelligence. The fantasy that AI will answer our questions, will find the
00:07:35
truth for us, will tell us the difference between fiction and reality. This is just another fiction. I
00:07:42
mean AI can do many things better than humans but for reasons that we can
00:07:48
discuss I don't think that it will necessarily be better than humans at
00:07:53
finding the truth or uncovering reality. It strikes me that the
00:08:00
thing that made us successful, you know, this ability to believe in fictions, and I use the word successful,
00:08:06
you know... Powerful. Powerful, yes. Took over the world. The thing that made us powerful could
00:08:13
well be the thing that makes us powerless in the sense that our ability to believe
00:08:18
in fictions and stories creates a society that would potentially lead to our
00:08:24
powerlessness. That's kind of one of the messages that, when I connect the dots
00:08:30
throughout your work and look off into the future, I'm left feeling.
00:08:35
And even when you think about the modern problems we have, those are typically consequences of our ability to believe
00:08:40
in stories and to believe in fictions. And if you play that forward 100 years, maybe 200 years,
00:08:47
You believe we'll be the last of our species, right?
00:08:53
I think we are very near the kind of end of our species. It doesn't necessarily mean that we'll be destroyed in some
00:09:01
huge nuclear war or something like that. Uh it could very well mean that we'll
00:09:06
just change ourselves using uh bioengineering and using AI and brain
00:09:13
computer interfaces. We will change ourselves to such an extent that we'll
00:09:19
become something completely different, something far more different from present day homo sapiens than we today
00:09:26
are different from chimpanzees or from Neanderthals. I mean, basically, you know,
00:09:31
um you have a very deep connection still with all the other animals because we
00:09:38
are completely organic. We are organic entities. Our psychology,
00:09:44
our social habits, they are the product of organic evolution, and more
00:09:50
specifically mammalian evolution, over tens of millions of years. So we share
00:09:55
so much of our psychology and of our kind of social habits with chimpanzees
00:10:01
and with other mammals. Looking 100 years or 200 years to the
00:10:07
future, maybe we are no longer organic, or not fully organic. You could
00:10:13
have a world dominated by cyborgs which are entities combining organic with
00:10:20
inorganic parts, for instance with brain-computer interfaces. You could have
00:10:25
completely nonorganic entities. So all the legacy and also all the
00:10:32
limitations of 4 billion years of organic evolution might be irrelevant or
00:10:40
inapplicable to the beings of the future. What bet would you make? Because you're saying maybe here.
00:10:47
I don't know. I mean, we could destroy ourselves. To completely destroy every last
00:10:54
single human in the world is possible given the technology that we now command, but it's very
00:11:02
difficult. I think there is a greater chance, and again this is just
00:11:08
speculation, nobody really knows, but I think lots of people could suffer
00:11:14
terribly, but I think it's more likely that
00:11:19
some people will survive and then will undergo radical changes.
00:11:24
So it's not that humanity is completely destroyed. It's just transformed
00:11:31
into something else. And just to give an example of what we are talking about: organic beings like us need to be
00:11:40
in one place at any one time. We are now here in this room. That's it. Um, if you
00:11:46
kind of disconnect our hands or our feet from our body, we die or at least we
00:11:54
lose control of these. And this is true of all organic entities, of plants, of animals. Now, with cyborgs
00:12:03
or with inorganic entities, this is no longer true. They could be spread over
00:12:08
time and space. I mean, if you find a way, and people are working on finding ways, to directly connect brains
00:12:16
with computers, or brains with bionic parts, there is no
00:12:22
essential reason that all the parts of the entity need to be in
00:12:27
the same room at the same time. As you said that, you know, I started thinking a little bit about Neuralink and what Elon Musk is doing interfacing us with
00:12:34
computers. But then I had a secondary thought which is if there could be two
00:12:39
Stevens, one here and then one in the United States right now because we're connected to the same computer interface. Theoretically, I could hack
00:12:48
Jack over there. I could hack his interface. So there could be three Stevens because I hack
00:12:53
Jack. And then I hack you and then there's four. And then I could eventually try and hack the entirety of
00:12:59
the world or a country. Yeah. And there could basically be one, once you can connect brains directly to
00:13:06
computers. First of all, I'm not sure if it's possible. I mean, people like Elon Musk and Neuralink, they tell us it's possible.
00:13:12
I'm still waiting for the evidence. I don't think it's impossible, but I think it's much more difficult
00:13:19
than people assume, partly because we are very far from understanding the brain and we are even
00:13:25
further away from understanding the mind. We assume that the brain somehow produces the mind but this is just an
00:13:32
assumption. We still don't have a working model, a working theory, for how it happens. But if it happens, if it
00:13:40
is possible to directly connect brains and computers and integrate them into
00:13:46
these kinds of cyborgs, nobody has any idea what happens next, how the world
00:13:52
would look. And it certainly makes it plausible, if
00:13:59
you reach that point, that you could have an inter-brain net,
00:14:05
the same way that lots of computers are connected together to form the internet. If you can connect also brains and
00:14:12
computers directly, why can't we then create an inter-brain net which connects
00:14:17
lots of brains, as you described? Again, I have no
00:14:23
idea what it means. I think this is the point when the way that our
00:14:30
organic brains understand reality,
00:14:36
even our imagination, in the end, is the product, as far as we can tell, of organic
00:14:42
biochemistry. Do you think... Wait, so we are not equipped, I think, to have a kind of serious
00:14:49
discussion of what a nonorganic
00:14:55
brain or a non-organic mind might be capable of doing, how it
00:15:02
would look, and all the basic assumptions that we have about brains and minds, they are limited to the
00:15:09
organic types. How do you feel about artificial intelligence and what's happening? This year has been a real
00:15:15
sort of landmark year, a big leap forward for artificial
00:15:20
intelligence, the conversation, public awareness, um the technology itself, the investment
00:15:27
in the technology, which is always, you know, a very important indicator of what's to come. Yeah.
00:15:32
How do you, as someone that's spent a lot of time thinking about this, emotionally, how do you feel about it?
00:15:40
Very concerned. I mean, it's moving even faster than I expected. When I
00:15:46
wrote, say, Homo Deus in 2016, I didn't think we would reach this point so
00:15:51
quickly, where we are in 2023. And the world is not ready for it.
00:15:58
And again, it's not all negative. AI has enormous positive potential,
00:16:03
and this should be clear. And there is no chance of just banning
00:16:09
AI or stopping all development in AI. I tend to speak a lot about the dangers
00:16:15
simply because you have enough people out there, all the entrepreneurs and all the investors, talking about the
00:16:21
positive potential. So it's kind of my job to talk about the negative potential, the dangers. But there is
00:16:29
a lot of positive potential, and humans are incredibly capable in terms
00:16:36
of adapting to new situations. I don't think it's impossible for human society to adapt to the new AI reality. The only
00:16:45
thing is, it takes time, and apparently we don't have that time. And people compare
00:16:52
it to previous big historical revolutions like the invention of print
00:16:59
or the industrial revolution. And you hear people say, yes, when the industrial
00:17:05
revolution happened in the 19th century, you had all these prophecies of doom about how industry and the new
00:17:14
factories and the steam engines and electricity would destroy humanity or destroy our psychology or
00:17:21
whatever. And in the end it was okay. And when I hear these kinds of
00:17:27
comparisons, as a historian, I'm very worried about two things. First of all,
00:17:33
they underestimate the magnitude of the AI revolution. AI is nothing like print.
00:17:40
It's nothing like uh the industrial revolution of the 19th century. It's far far bigger. There is a fundamental
00:17:47
difference between AI and the printing press or the steam engine or the radio
00:17:52
or any previous technology we invented. The difference is it's the first technology in history that can make
00:17:59
decisions by itself and that can create new ideas by itself. A printing press or
00:18:06
a radio set could not write new music or new speeches and could not decide
00:18:14
what to print and what to broadcast. This was always the job of humans. This
00:18:19
is why the printing press and the radio set in the end empowered humanity:
00:18:24
you now have more power to disseminate your ideas. AI is different.
00:18:30
It can potentially take power away from us. It can decide, it's already deciding by itself what to broadcast on social
00:18:38
media, its algorithms deciding what to promote. And increasingly, it also creates much
00:18:46
of the content by itself. It can compose entirely new music. It can compose
00:18:52
entirely new political manifestos, holy books, whatever. Um, so it's a much
00:18:59
bigger challenge to handle that kind of technology. It's an independent agent in a way
00:19:05
that radio and the printing press were not. The other thing I find worrying
00:19:12
about the comparison with say the industrial revolution is that yes in the
00:19:17
end in a way it was okay but to get there we had to pass through some
00:19:24
terrible experiments. When the industrial revolution came along, nobody knew how to build a
00:19:33
benign industrial society. So people experimented.
00:19:38
One big experiment was European imperialism. Many people thought that to build an industrial society means
00:19:45
building an empire. Unless you have an empire that controls the sources of the
00:19:51
raw materials you need, iron, coal, rubber, cotton, whatever. And unless you
00:19:58
control the markets, you will not be able to survive as an industrial society. And there was a very close link
00:20:06
also conceptually between building an industrial society
00:20:11
and building an empire. And all the initial leaders of
00:20:17
the industrial revolution built empires. Not just Britain and France; also
00:20:23
small countries like Belgium, and also Japan: when it joined the industrial revolution,
00:20:28
it immediately set about conquering an empire. Another terrible experiment was
00:20:35
Soviet communism. They also thought how do you build an industrial society? You
00:20:41
build a communist dictatorship. And it was the same with Nazism. You cannot separate communism and Nazism from the
00:20:48
industrial revolution. You could not have created a communist or a Nazi totalitarian regime in the 18th century.
00:20:56
If you don't have trains, if you don't have electricity, if you don't have radio, you cannot create a totalitarian
00:21:02
regime. So these are just a few examples of the failed experiments. You know, you
00:21:09
try to adapt to something completely new, you very often experiment, and
00:21:16
some of your experiments fail. And if we now have to go in the 21st century
00:21:23
through the same process, okay, we now have not radio and trains, we now
00:21:29
have AI and bioengineering. And we again need to experiment perhaps with new
00:21:35
empires, perhaps with new totalitarian regimes in order to discover how to
00:21:40
build a benign AI society, then we are doomed as a species. We will not be able
00:21:47
to survive another round of imperialist wars and totalitarian regimes. So
00:21:54
anybody who thinks, hey, we've passed through the industrial revolution with all the prophecies of doom and in the end we
00:22:00
got it right: no. As a historian, I would say I would give humanity a C
00:22:06
minus on how we adapted to the industrial revolution. If we get a C
00:22:11
minus again in the 21st century, that's the end of us. It seems quite trivial to
00:22:18
many that the AI revolution seems to have begun
00:22:24
with large language models. And when I read Sapiens, this book I have here,
00:22:30
language was so central to what made us powerful as homo sapiens. In the beginning was the word. I didn't
00:22:37
say it. You know, it's a very, very widespread idea
00:22:43
that ultimately our power is based on words. The reason that we controlled
00:22:49
the world, and not the chimpanzees or the elephants, is because we had a much more sophisticated language,
00:22:55
which enabled us, again, to tell these stories: stories about ancestral spirits and
00:23:02
about guardian gods and about our tribe, our nation, which formed the basis for
00:23:09
cooperation. And because we could cooperate, you could have a thousand people, a thousand humans cooperating in
00:23:16
a tribe, whereas the Neanderthals could cooperate only on the level of, say, 50 or 100 individuals. This is why we rule the
00:23:23
world and not the Neanderthals. And you look at every subsequent kind of growth in human power, and you
00:23:32
see the same thing: that ultimately you tell a story with words. And language
00:23:40
is like the master key that unlocks all the doors of our civilization.
00:23:47
Whether it's cathedrals or whether it's banks, they're based on language, on
00:23:52
stories we tell. That, again, is very obvious in the case of religion,
00:23:58
but also if you think about the world's financial system. Money has no value
00:24:04
except in the stories that we tell and believe each other. If you think about
00:24:10
gold coins or paper banknotes or cryptocurrencies like Bitcoin, they
00:24:16
have no value in themselves. You cannot eat them or drink them or do anything
00:24:21
useful with them. But you have people telling you very compelling stories
00:24:26
about the value of these things, and if enough people believe the story, then it
00:24:32
works. They're also protected by language, like my cryptocurrency is protected by a
00:24:37
bunch of words. Yeah. They're created by words and they function with words
00:24:44
and symbols. When you communicate with your banker, it's with words. I mean, what
00:24:51
happens when AI can create deepfakes of everything: your voice,
00:24:57
your image, the way you talk, the type of words you use? So there is
00:25:03
already an arms race between banks and fraudsters. I mean we want the easiest communication with our banker. I just
00:25:10
pick up the phone, I tell a few words, and they transfer a million dollars. But at the same time, I also want to be
00:25:16
protected from an AI that impersonates my voice and tone of voice
00:25:21
and whatever. And this is becoming difficult. But on a deeper level, again,
00:25:28
because money is ultimately made of words, of stories,
00:25:35
AI could create new kinds of money, the same way that, you know,
00:25:40
cryptocurrencies like Bitcoin have been created simply by somebody telling people a story and enough people finding
00:25:47
this story convincing. And I guess as a CEO and as an entrepreneur, you
00:25:52
know that if you want to get investments, what really gets investments is a good
00:25:59
story. And what happens to the financial system if increasingly our financial stories
00:26:07
are told by AI? And what happens to the financial system
00:26:13
and even to the political system if AI eventually creates new financial
00:26:21
devices that humans cannot understand? Already today much of the activity
00:26:30
on the world markets is being done by algorithms at such a speed and with such complexity
00:26:38
that most people don't understand what's happening there. If you had to guess, what is the
00:26:44
percentage of people in the world today that really understand the financial system
00:26:50
what would be your kind of guess? Less than 1%. Less than 1%. Okay. Let's be kind of
00:26:55
conservative about it. 1% let's say. Okay. Fast forward 10 or 20 years. AI creates
00:27:02
such complicated financial devices that there is not a single human being on earth that understands finance anymore.
00:27:10
What are the implications for politics? Like you vote for a government but none
00:27:16
of the humans in the government, not the prime minister, not the finance minister, nobody understands the
00:27:21
financial system. They just rely on AI to tell them what is happening.
00:27:29
Is this still a democracy? Is this still a a human form of government in any way?
00:27:34
What do you say to someone that hears that and goes, "Ah, that's just that's nonsense. That's never going to happen."
00:27:40
Why not? I mean, let's look back 15 years to the last big financial crisis
00:27:45
in 2007-2008. This financial crisis to a large extent
00:27:51
began with these extremely complicated financial devices, CDOs.
00:27:57
What's the acronym? Collateralized debt something; I don't even know what every letter stands for. You had these
00:28:03
kinds of whiz kids on Wall Street inventing a new financial device that nobody except them really understood,
00:28:10
which is also why it wasn't regulated effectively by the banks and the governments. And it worked well for a
00:28:17
couple of years and then it brought down the world's financial system
00:28:22
And what happens if now AIs come up with even more sophisticated financial
00:28:28
devices, and for a couple of years everything works well, they make trillions of
00:28:33
dollars for us, and then one day it doesn't. One day the system collapses and
00:28:40
nobody understands what is happening. And again, it's not that you didn't go to
00:28:47
college or whatever. No, it's just objectively the complexity of the system
00:28:53
has reached a point when only an AI
00:28:58
is able to crunch the numbers, is able to process enough data to
00:29:06
really grasp the shape, the dynamics, of the financial system.
00:29:11
We're already there though. You know, I think if anyone does understand how the financial system works and the markets work, it is a bunch of
00:29:19
homo sapiens relying on a computer to tell it something and it it trusting that computer's calculations.
00:29:26
Yeah. And this will get more and more complicated and
00:29:31
sophisticated. And for people who say no, it's not going to happen, the question is what is stopping it? I mean,
00:29:40
you know, in all the discussions about AI, the kind of dangers that draw people's
00:29:46
attention, like the poster child of AI dangers is things like AI creating a new
00:29:54
virus that kills billions of people, a new pandemic. So a lot of people are
00:30:05
concerned about how we prevent an AI by itself, or maybe some small terrorist
00:30:05
organization or even a 16-year-old teenager giving an AI a task to create a
00:30:12
dangerous virus and release it to the world. How do we prevent this? And this is a serious concern and we should be
00:30:17
concerned about it. But this gets a lot more attention than the question, how do
00:30:23
we prevent the financial system from becoming so complicated that humans can
00:30:29
no longer understand it? And I see a lot of regulations
00:30:35
being at least considered to prevent AI from creating dangerous new
00:30:41
viruses. I don't see any kind of effort to keep the financial system at a
00:30:49
level that humans can understand. Why do you think that is?
00:30:54
I mean, I had a guess. My guess was: why would the UK, you know, cut that off? Mhm. Why would they
00:31:01
give themselves a disadvantage? Exactly. You know, it just means that the UK will suffer, and if America is
00:31:06
using a really advanced AI algorithm to get ahead, we have to keep up. Yeah. It's the logic of the arms
00:31:13
race. And again, it's not all bad. I mean, you have a better financial system. Uh you have a more prosperous
00:31:18
economy. I mean, money isn't bad. I mean, it's the basis for almost all human cooperation.
00:31:25
And a lot of financial devices in the end, if you think what are they, they are devices to establish trust between
00:31:33
people, especially trust between strangers. And money in essence is a
00:31:38
device for establishing trust. I don't know you, you don't know me, but we both
00:31:43
trust this gold coin or piece of paper, so we can cooperate on sharing
00:31:51
food or creating a medicine. And the most sophisticated financial
00:31:57
devices, they basically do the same thing. Stocks and bonds and these CDOs, they are a method to establish
00:32:05
trust. And when you open a new bank account, the most important thing is how
00:32:11
do I trust the bank to really take care of my money and to follow my
00:32:17
instructions, but not to be open to fraud and things like that. And again, you as an investor,
00:32:24
or you as an entrepreneur, when you try to get money from investors, the
00:32:31
biggest issue is always trust. And if somebody can
00:32:37
come up with a new way to establish trust between people, that's a
00:32:43
good thing. But if this new way
00:32:49
increasingly depends on non-human intelligence, on systems that
00:32:56
humans cannot understand. That's the big question. What happens to human society
00:33:02
when the trust that is at the basis of all social interactions
00:33:09
is actually no longer trust in humans. It's trust in a non-human intelligence
00:33:18
that we don't fully understand and that we cannot anticipate. And part of the
00:33:24
problem with regulating AI or AI safety, it goes back to what we discussed
00:33:30
earlier that AI is different from printing presses or radio sets or even
00:33:35
atom bombs. If you want to make nuclear energy safe,
00:33:42
then you need to think about all the different ways that, I don't know, a
00:33:47
nuclear power station can have an accident.
00:33:52
And I guess there is a limited number of things that can go wrong. And ideally,
00:33:59
if you think hard, if you have enough people thinking hard enough, you can make safe nuclear reactors, safe
00:34:08
nuclear power stations. But AI is fundamentally different,
00:34:14
because AI keeps changing. It keeps reacting to the world. It keeps reacting
00:34:20
to you coming up with new inventions, new ideas, new decisions. So making AI
00:34:28
safe is a bit like making a nuclear reactor safe while taking into account the fact that the nuclear reactor can decide
00:34:35
to change in ways that you can't anticipate, and even worse, it can react to you. So if
00:34:44
you build a particular safety mechanism for the nuclear reactor, what happens if the nuclear reactor says, oh, they built
00:34:51
this mechanism, let's do that to somehow get around the safety mechanism?
00:34:57
We don't have this problem with nuclear reactors. But this is the problem with AI. We are trying to contain something
00:35:04
which is an independent agent and which might actually come to understand us
00:35:11
better than we understand it. I'm really curious about how this will impact. You
00:35:19
know, you talked about elected officials there and how their financial decision-making
00:35:26
might be driven by algorithms. But government is an authority itself.
00:35:32
I've pondered recently whether there'll come a day in the not-so-distant future
00:35:37
where we might vote for an algorithm where we might vote for an AI to be our
00:35:43
government. Is that crazy thinking? I think we're quite a long way off from there. We would still want humans,
00:35:51
at least in the symbolic role of being the prime minister, the member of
00:35:56
parliament, whatever, the president. The real problem is that increasingly
00:36:02
these humans could be kind of figureheads or puppets, when the real
00:36:08
decisions, the most consequential decisions, are made by algorithms,
00:36:15
partly because it will just be too complicated for the humans
00:36:23
at the top to understand the situation or to understand the different options.
00:36:30
So going back to the financial example: imagine that, you know, it's 4:00 in the morning. There is a phone call
00:36:37
to the prime minister from the finance algorithm, telling the prime minister that we
00:36:44
are facing a financial meltdown, and that we have to do something
00:36:50
within the next, I don't know, 30 minutes to prevent a national or global
00:36:56
financial meltdown. And there are like three options and the algorithm recommends option A and there is just
00:37:03
not enough time to explain to the prime minister how the algorithm reached the
00:37:08
conclusion, and even what the meaning of these different options is.
00:37:14
And again, people think about this scenario mostly in relation to war.
00:37:20
Mhm. What happens if you have an algorithm in charge of your security
00:37:26
system, and it alerts you to a massive incoming cyber attack, and you have to
00:37:33
react immediately, and if you react in a specific way, this could
00:37:38
mean war with another nation. But you just don't have enough time to
00:37:44
understand how the algorithm reached the decision and how the algorithm was also
00:37:51
able to determine that of all the different options, this is the best option. Do you think that humans believe we're
00:37:57
more complicated and special than we actually are? Because I think many
00:38:03
of the rebuttals when we talk about artificial intelligence stem back to this idea that we're,
00:38:09
you know, innately genius, creative, spiritual, special,
00:38:17
you know... artificial intelligence...
00:38:22
like our intelligence is somewhat divine, or we've got free will, and you know...
00:38:29
Yeah. Yeah, I mean, if the argument is we have free will, we
00:38:37
have a divine soul, and therefore no algorithm will ever be able to
00:38:43
understand us and to predict our decisions or to manipulate us, then this
00:38:49
is a very common argument, but it's obviously nonsensical. I mean, even before AI, even with
00:38:57
previous technology it was possible to a large extent to predict people's
00:39:03
behavior and to manipulate them, and AI just takes it to the next level. Now,
00:39:10
with regard to the discussion of free will, my position is you cannot start with
00:39:17
the assumption that humans have free will. If you start with this assumption
00:39:24
then it actually makes you very incurious, lacking
00:39:32
curiosity about yourself, about human beings.
00:39:37
It kind of closes off the investigation before it began. You assume that any decision you make
00:39:44
is just a result of your free will. Why did I choose this politician, this
00:39:49
product, this spouse? Because it's my free will. And if this is your
00:39:55
position, there is nothing to investigate. You just assume you have this kind of divine spark within you
00:40:02
that makes all the decisions and there is nothing to investigate there.
00:40:08
I would say no, start investigating, and you'll probably discover that there
00:40:15
are a lot of factors, whether it's external factors like cultural
00:40:21
traditions and also internal factors like biological mechanisms that shape
00:40:27
your decisions. You chose this politician or this spouse because of
00:40:33
certain cultural traditions and because of certain biological mechanisms, your
00:40:39
DNA, your brain structure, whatever. And this actually makes it
00:40:45
possible for you to get to know yourself better. Now if after a long
00:40:51
investigation you've reached the conclusion that yes there are cultural influences, there are
00:40:59
political influences, there are genetic and neurological influences, but still
00:41:05
there is a certain percentage of my decision that cannot be explained by any
00:41:11
of these things. Then okay, call it free will and we can discuss it. But don't
00:41:18
start with this assumption because then you lose the incentive to explore
00:41:24
yourself. And anybody who embarks on such a process of self exploration, whether
00:41:32
it's in therapy, whether it's in meditation, whether it's in the laboratory of a brain scientist or, as
00:41:40
a historian in the archive, you will be amazed to discover how much of your
00:41:47
decisions are not the result of some mystical free will. They are the result
00:41:53
of cultural and biological factors. And this also means that you are vulnerable
00:42:00
to being deciphered and manipulated by political parties, by corporations,
00:42:08
by AI. People who have this kind of mystical belief in free will are the
00:42:13
easiest people to manipulate because they don't think they can be manipulated.
00:42:20
Uh and obviously they can. We humans should get used to the idea that we are
00:42:25
no longer mysterious souls. We are now hackable animals. That's what we are.
00:42:31
You said that at the World Economic Forum. Yeah. Again, this is the same point
00:42:38
basically that it's now possible to hack human beings. Not just to hack our smartphones, our bank accounts, our
00:42:44
computers, but to really hack our brains, our minds, and to predict
00:42:50
our behavior and manipulate our behavior more than in any previous time in history.
00:42:56
The other line that you said, which really made me think and ponder, was
00:43:01
that previously human life was about the drama of decision-making, and without this we
00:43:07
won't have a meaning in life. Yeah, that if you look, you know, at politics,
00:43:14
at religion and at culture, people told stories about their
00:43:20
lives or the lives of people in general as a kind of drama of decision
00:43:26
making. Mhm. That you reach a particular junction in life and you need to
00:43:33
choose between good and evil. You need to choose between political parties. You
00:43:38
need to choose what to study at university or where to work, what kind of job to apply to.
00:43:46
And our stories revolved around these decisions.
00:43:52
And what happens to human life if increasingly the power to make decisions
00:43:59
is taken from us? And increasingly it's algorithms
00:44:04
making all these decisions for us or about us. Is that possible?
00:44:10
It's already happening. Increasingly, you know, you apply to a bank to get a loan. In many places, it's no longer a
00:44:17
human banker who is making this decision about you whether to give you a loan,
00:44:23
whether to give you a mortgage. It's an algorithm analyzing billions of bits of data about you and
00:44:30
about millions of other customers or previous loans, determining whether you
00:44:36
are creditworthy or not. And if they refuse to give you
00:44:42
a loan and you ask the bank, why didn't you give me a loan? The bank says, we
00:44:47
don't know. The computer said no, and we just believe our computer, our
00:44:52
algorithm. And it's happening also in the judicial system, increasingly, that
00:45:05
various judicial decisions, verdicts. Like, the judge decided that
00:45:11
you committed some crime; the sentence, whether to send you to two months or
00:45:17
eight months or two years in prison, is increasingly determined by an algorithm.
00:45:17
You apply for a place at university, you apply for a job. This too is increasingly decided by algorithms.
00:45:24
Dating. Dating, yes. I mean, even
00:45:30
unbeknownst to you, the algorithms of the dating apps that
00:45:36
you're using are shaping your romantic life. But in a world of, you know,
00:45:43
robotics and artificial intelligence, why do I need to find a person at all?
00:45:48
Why not just have a relationship with a robot or with an AI? Yeah. We do see the beginning
00:45:56
of this, that people are building more and more intimate relationships with
00:46:02
non-human intelligences, with AIs and bots and so forth. And this raises a
00:46:09
lot of difficult and profound questions. Now, part of the problem is
00:46:16
that the AIs are built to mimic intimacy.
00:46:23
Intimacy is an extremely powerful thing. Not just in
00:46:29
romance, also in the market, also in politics. If you want to change
00:46:35
somebody's mind about anything, a political issue, a commercial
00:46:40
preference, intimacy is kind of the most powerful weapon.
00:46:47
And somebody you really trust, somebody you have an intimate relationship with, will be
00:46:54
able to change your views on a lot of things more than someone you see on TV or just an article you read in a
00:47:03
newspaper. There is a huge incentive for the creators of AIs to create AIs that
00:47:10
are able to forge intimate relationships with humans. And this makes us extremely
00:47:17
vulnerable to this new type of manipulation
00:47:23
that was previously just unimaginable. Because loneliness is at, you know, all-time highs, especially in the sort of Western
00:47:30
world, and sexlessness too. I was reading some stats about how the bottom 50%
00:47:36
of men in particular are having almost no sex relative to the top sort of 10%
00:47:42
and you think, you know, this disparity, the rise of digitalization, loneliness, we're in our homes on screens more than
00:47:48
ever before. And then you hear about this industry of AI and sex dolls and all this and you just wonder, you play it forward and go,
00:47:55
yeah, it's going there. And the thing is, it's not that the
00:48:01
humans are so stupid or something, that they kind of project something
00:48:07
onto the AI and fall in love with an AI chatbot. The AI is deliberately
00:48:14
built, created, trained to fool us. In
00:48:19
the same way, you know, if you look at the previous 10 years, there was a big
00:48:25
battle for human attention. There was a battle between different social media
00:48:30
giants and whatnot over how to grab human attention, and they created
00:48:36
algorithms that were really amazing at
00:48:41
grabbing people's attention. And now they are doing the same thing, but with intimacy. And we are extremely
00:48:49
exposed. We are extremely vulnerable to it. Now, the big problem is, and again
00:48:54
this is where it gets kind of really philosophical,
00:49:00
what humans really want or need from a relationship is to be in touch with
00:49:07
another conscious entity. An intimate relationship is not just
00:49:13
about providing my needs. Then it's exploitative. Then it's
00:49:18
abusive. If you're in a relationship and the only thing you think about is how
00:49:24
would I feel better, how would my needs be provided for, then this is a
00:49:29
very abusive situation. A really healthy relationship is
00:49:35
when it goes both ways. You also care about the feelings and the needs of the
00:49:41
other person of the other entity. Now
00:49:46
what happens if the other entity has no feelings, has no emotional needs because
00:49:53
it has no consciousness? That's the big question. And there is a huge confusion between
00:50:00
consciousness and intelligence. AI is artificial intelligence.
00:50:06
But what exactly is the relation between intelligence and consciousness?
00:50:12
Now, intelligence is the ability to solve problems, to win at chess, to invest money, to
00:50:20
drive a car. This is intelligence. Consciousness is the ability to feel things like pain and pleasure and love
00:50:27
and hate and sadness and anger and so many other things. Now, in humans and
00:50:33
also in other mammals, intelligence and consciousness actually go together. We
00:50:39
solve problems by having feelings. But computers are fundamentally
00:50:45
different. They are already more intelligent than us in at least several
00:50:52
narrow fields, but they have zero consciousness.
00:50:58
They don't feel anything. When they beat us at chess or Go or some other game,
00:51:05
they don't feel joyful and happy. If they make a wrong move, they don't feel sad or angry. They have zero
00:51:13
consciousness. As far as we can tell, they might soon be far more intelligent
00:51:20
than us and still have zero consciousness. Now what happens when you
00:51:27
are in a relationship with an entity which is far more intelligent than you
00:51:34
and can also imitate, mimic, consciousness? It knows how to solve
00:51:41
the problem of making you feel as if it
00:51:46
is conscious but it still has no feelings of its own.
00:51:52
And this is a very disturbing vision of the future.
00:51:58
It opens us up to manipulation. Is that what you're saying? First of all, it opens us to manipulation, but also there is the
00:52:08
big question: what does it mean for the health of our own mind, of our own
00:52:14
psyche, if we are in a relationship, or many of our important relationships in life
00:52:21
are with non-conscious entities that don't really have any
00:52:28
feelings of their own? Again, they are very good at faking it. They're very good at
00:52:33
catering to our feelings, but again, it's just
00:52:41
manipulation in the end. Are you optimistic about the happiness of humans going forward? Or do you think happiness
00:52:47
will take its own... You know, I've heard you talk about how happiness might just become a biochemical,
00:52:53
I don't know, prescription or something. Yeah. I mean, we don't have a good track
00:52:59
record with regard to happiness. If you look at the last 100,000 years from say
00:53:05
the stone age until the 21st century, you see a dramatic rise in human power.
00:53:12
We are thousands of times more powerful as a species and as individuals than we
00:53:18
were in the stone age. We are not thousands of times happier. We just
00:53:24
don't really know how to translate power into happiness. And this is very clear
00:53:31
when you look at the lives of the most powerful people in the world: there is no correlation between how
00:53:39
rich and powerful you are and how happy you are as a person. I mean, I
00:53:46
don't get the impression that people like, I don't know, Vladimir Putin or Elon Musk are the
00:53:53
happiest people in the world, even though they are some of the most powerful people in the world.
00:54:01
So there is no reason to think that as humanity gets even more powerful in
00:54:06
coming decades we will get any happier. And understanding happiness is about
00:54:13
understanding the deep dynamics of not even the brain but of the mind,
00:54:20
of consciousness, and we are just not there yet.
00:54:25
We are very, very good... and the related problem is that humans usually
00:54:31
understand how to manipulate something long before they understand the
00:54:37
consequences of the manipulations. If you look at the outside world, at the
00:54:44
ecological system, we have learned how to cut forests, how to build huge dams
00:54:51
over rivers long before we understood what will be the consequences for the
00:54:57
ecological system. Which is why we now have this ecological crisis. We
00:55:03
manipulated the world without understanding the consequences.
00:55:09
And something similar might happen with the world inside us,
00:55:15
with more powerful medicines, with brain computer interfaces, with genetic
00:55:21
engineering and and so forth, we are gaining the power to manipulate our
00:55:27
internal world, the world within us. But again, the power to manipulate is
00:55:34
not the same thing as understanding the complexity of the system and the
00:55:39
consequences of the manipulation. A related manipulation is immortality and our pursuit of it. I've
00:55:45
sat with people on this podcast who are committing their lives to staying alive forever. And there's a through line
00:55:51
there between our desire to be immortal, you know, the rise in the scientific
00:55:57
discoveries that are enabling that and our happiness. I I've often thought, you know, much of the reason why things are
00:56:04
special in my life is because they're scarce, including my time. Yeah. And I almost wonder about the
00:56:10
psychological issues I would face if I knew I was
00:56:16
immortal. Like if I knew that the partner I'm with doesn't come at the
00:56:22
expense of another one I can be with, you know, at 30 years old. And the car, you know, the choices you
00:56:28
make, I think what makes them special is their scarcity. Mhm. Against the backdrop of a finite
00:56:35
life. Yeah, it will definitely change everything. If you think about relations
00:56:40
between parents and children: if you live forever, the 20 years
00:56:47
you spent raising somebody 2,000 years ago, what do they mean now? But I
00:56:52
think long before we get to that point, I mean, most of these people are going to be incredibly disappointed because it
00:57:00
will not happen within their lifetime. Another related problem is that we will
00:57:05
not get to immortality. We will get to something that maybe should be called amortality,
00:57:12
because immortality is like you're a god; you can never die no matter what happens. Even if we solve
00:57:21
cancer and Alzheimer's and dementia and whatever, we will not get there. We will
00:57:27
get to a kind of life without a definitive expiry date, where you can live
00:57:34
indefinitely. You can go every 10 years to a clinic and get yourself
00:57:40
rejuvenated, but if a bus runs you over or your airplane explodes or a
00:57:48
terrorist kills you, you're dead and you're not coming back to life. Now,
00:57:55
realizing that you have a chance to live forever, but if there is an accident, you die. This creates a level of anxiety
00:58:04
and terror unlike anything that we know in our own lives. I think the people who
00:58:10
will be in that situation will be extremely anxious and miserable.
00:58:17
And another issue is, you know, people often spend so much effort trying to
00:58:26
gain something without really understanding why. What will you
00:58:32
do with it? What is so good about it? You know, like people spend so much effort to have more and
00:58:38
more money instead of thinking, what will I actually do with that money? So it's
00:58:44
the same with you know the people who want to extend life forever. What is so
00:58:49
good about life? What will you do with it? And if you know it, why don't you do it
00:58:56
already? You know, I hear people saying how precious human
00:59:02
consciousness is. Why do you think it's so precious?
00:59:07
And whatever it is, why don't you do it right now? I mean, why spend your life
00:59:16
developing some kind of treatment that will extend your consciousness for a
00:59:24
thousand years? Just spend your time doing now whatever
00:59:31
you think you would be doing with your consciousness a thousand years from now. So if they were to say, but it'll give me
00:59:37
more time with my family, you're saying, instead of wasting your time... Exactly. So, you know, somebody who has
00:59:43
no time for their family at all right now, because they are busy developing the kind of miracle cure that will
00:59:51
enable them to spend time with their family in 200 years. This makes no
00:59:56
sense. I think about the disparity that artificial intelligence and these forms
01:00:03
of sort of bioengineering might create because it's conceivable that the rich
01:00:08
will gain access to these technologies first. Yeah. And then, you know, when we think about bioengineering,
01:00:15
being able to sort of play with our genetic code, that means if I, for example, managed to get my hands on some
01:00:20
kind of bioengineering treatment to make sure that my kids were maybe a little bit smarter, maybe a little bit
01:00:27
stronger, whatever, then you're going to start a sort of genetic chain of
01:00:32
modified children that are superior in intelligence and strength and whatever else might be desirable. Mhm.
01:00:38
And then you have this disparity in society where, you know, one set of humans is on a
01:00:44
completely different exponential trajectory and the other humans are, you know, left behind.
01:00:51
This is extremely dangerous. I think we just shouldn't go there, that we
01:00:57
shouldn't invest a lot of resources and
01:01:02
effort in developing these kinds of upgrades and enhancements
01:01:08
that are very likely, at least at first, to be the preserve of a small elite and
01:01:16
to translate economic inequality into biological inequality and to basically
01:01:23
split the human species, to split homo sapiens into, you know, a ruling class
01:01:30
of superhumans and the rest of us. This is a very, very dangerous
01:01:35
development. Related to that is the problem that I
01:01:41
don't think these will be upgrades at all.
01:01:47
What worries me is that a lot of these things will turn out actually to be downgrades,
01:01:53
that, again, we don't understand our bodies, our brains, our minds well
01:02:00
enough to know what will be the consequences of tweaking our genetic
01:02:07
code, or of, I don't know, implanting all kinds of devices into our brains.
01:02:15
People who think that this will enable them, let's say, to upgrade their
01:02:21
intelligence don't know what the side effects will be. It could be that the same
01:02:28
treatment that increases your intelligence also decreases your
01:02:34
compassion or your spiritual depth or whatever. And the danger is that
01:02:40
especially if this technology is in the hands of powerful corporations, armies,
01:02:47
governments, they will enhance those qualities that they want like
01:02:56
intelligence and like discipline while disregarding
01:03:01
uh other qualities which could be even more important for for human flourishing
01:03:07
like compassion, or artistic sensitivity, or spirituality. If I think about somebody
01:03:13
again like Putin, what would he do with this type of technology? Then yes, he would like an army of super-intelligent
01:03:21
and super-loyal soldiers. And if these soldiers don't have any compassion or
01:03:28
any spiritual depth, all the better for him. But that speaks to the arms race. And you know, you said we
01:03:33
shouldn't, but China will see that as an opportunity, or Putin will see that as an opportunity, if the Western world,
01:03:40
if the United States or the UK don't. And so again, it comes back to this
01:03:45
point of, you know, we're damned if we do, we're damned if we don't. I'm not sure that in this case it
01:03:51
works, because again a lot of these upgrades are likely to have
01:03:58
detrimental side effects both for the person in question and for the society
01:04:04
as a whole. And I think that in this case, societies that choose to
01:04:12
progress more slowly and safely will actually have an advantage. It's
01:04:18
like if you say, you know, there is some other country where they don't have any brakes on their cars and they
01:04:25
don't have any seat belts and they release new medicines without checking their side effects. They're moving so
01:04:32
fast. We are left behind. No, it makes no sense to imitate them. This will actually ruin their societies. You
01:04:39
don't want to imitate these kinds of harmful effects. With the development of
01:04:45
AI, it's different. I think there, the advantages in things like finance and
01:04:53
the military will be so big that an AI arms race is almost inevitable.
01:05:01
But with trying to bioengineer humans, if you go too fast, it will be
01:05:09
self-destructive. So we can take it more slowly and safely,
01:05:14
and without being left behind in an arms race. You said on the Tim Ferriss podcast, the
01:05:21
best scenario is that homo sapiens will disappear but in a peaceful and gradual way and be replaced by something better.
01:05:26
It's quite an uncomfortable statement to listen to.
01:05:32
I think that, again, the type of technologies that we are now developing, when you combine them with the human
01:05:39
ambition to, you know, improve
01:05:45
ourselves, it's almost inevitable that we will use
01:05:51
these technologies to change ourselves. The question is whether we will do it
01:05:57
slowly and responsibly enough for the consequences to be beneficial. But the
01:06:03
idea that we can now develop these extremely powerful tools of bioengineering and AI and remain
01:06:12
the way we are. We'll still be the same homo sapiens in 200 years, in 500 years,
01:06:18
in 1,000 years. We'll have all these tools to connect brains to computers, to re-engineer our genetic code,
01:06:26
and we won't do it. I think this is unlikely. One of the outstanding questions that I have and one of the sort of observations
01:06:33
I've had is that people like Sam Altman, the founder of OpenAI that made ChatGPT,
01:06:39
started working on universal basic income products like Worldcoin. And I thought, you know what, that's curious
01:06:45
that the people that are at the very forefront of this AI revolution are now trying to solve the second problem they
01:06:52
see coming, which is people not having jobs. Yeah. Essentially. Do you think that's... because,
01:06:58
you know, I've spoken a lot this year on stages, and one of the questions I always get asked is about the
01:07:03
implications of AI on jobs as we know them in the workforce. Mhm. Is it realistic to believe that most
01:07:09
jobs will disappear as we know them today? I think
01:07:16
many jobs, maybe most jobs will disappear, but new jobs will emerge. You
01:07:23
know, most jobs that people do today didn't exist 200 years ago. Mhm. Like this.
01:07:28
Yeah, like this. Like doing a podcast. And there will be new jobs.
01:07:34
The really big problem will be how to retrain people. It demands a lot of financial support,
01:07:42
and also psychological support, for people to relearn, retrain, reinvent
01:07:50
themselves, and to do it not just once but repeatedly throughout their careers,
01:07:55
throughout their lives. The AI revolution will not be a single watershed event like you have the big AI
01:08:02
revolution in 2030. You lose 60% of jobs. You create lots of new jobs. You
01:08:09
have 10 difficult years. Everybody adjusting, adapting, reskilling, whatever, and then everything settles
01:08:16
down to a new equilibrium. It won't be like that. AI is nowhere near its full
01:08:21
potential. So you will have a lot of changes by 2030, even more changes by
01:08:27
2040, even more changes by 2050. You will have new jobs, but the new jobs too
01:08:33
will change and disappear. What new jobs? In a world where intelligence is
01:08:38
disrupted, what jobs are left? Because you say you're going to retrain me. I'm like, you know, I'm not going to
01:08:44
be able to keep up with an AI that's retraining every second. And I'm not sure. I mean, some of the
01:08:50
answers might be counterintuitive, in that
01:08:56
at least at present we see that AI is extremely good at automating jobs that
01:09:02
only require cognitive skills, but it is not good at jobs that require motor
01:09:09
skills and social skills. So if you think about say doctors and nurses, so
01:09:14
at least those types of doctors who are only doing cognitive work, they
01:09:23
read articles, they get your medical results, all kinds of tests and
01:09:29
whatever. They diagnose your disease and they decide on a course of treatment. This is purely cognitive
01:09:36
work. This is the easiest thing to automate. But if you think about a nurse that has
01:09:42
to replace a bandage for a crying child, this is much more difficult to automate.
01:09:50
You don't think that's possible to automate? I think it is possible, but not now. You need very delicate motor skills and
01:09:57
also social skills to do that. Did you see Elon's video the other day with the Tesla robot?
01:10:04
I see a lot of these videos. It's getting the egg and it's cracking the egg and it's going like this.
01:10:10
No, again I'm not saying it's impossible. I'm just saying it will take longer. It's more difficult. Again,
01:10:16
there is also the social aspect. If you think about self-driving vehicles, the biggest problem for self-driving
01:10:23
vehicles is humans. I mean, not just the human drivers, it's the
01:10:28
pedestrians, it's the passengers. How do you deal with a drunken passenger? Whatever.
01:10:36
So, again, it's not impossible, but it's much more difficult. I
01:10:41
think that there will be new jobs, at least in the foreseeable future. The
01:10:46
problem will be to retrain people. And the biggest problem of all will be
01:10:52
on the global level, not on the national level. When I hear people talk about
01:10:57
universal basic income, the first question to ask is, is it universal or
01:11:02
national? Is it a system that, let's say, raises
01:11:09
taxes on big tech corporations in Silicon Valley in California and uses
01:11:15
the money to provide basic services and also retraining courses for people in
01:11:24
Ohio and Pennsylvania? Or does it also apply to people in
01:11:30
Guatemala and Pakistan? I mean, what happens when it becomes
01:11:35
cheaper to produce shirts with robots in California than in Guatemala and in
01:11:41
Mexico? Does Sam Altman have a vision of the US government raising taxes in
01:11:48
California and sending the money to Guatemala to support the people there?
01:11:53
If the answer is no, we are not talking about universal basic income. We are only talking about national basic income
01:11:59
in the US. Then what happens to the people in Guatemala? That's the biggest question.
01:12:06
And a sub-question to that is about how one should be educating our children
01:12:11
and our educational institutions as they are today. Because with what's to come, it makes me wonder what skill would
01:12:18
be worth investing, you know, 10 or 12 years into, for a child that I had.
01:12:23
Nobody has any idea. I mean, if you think about specific skills,
01:12:30
then this is the first time in history when we have no idea how the job market
01:12:36
or how society will look in 20 years. So we don't know what specific
01:12:41
skills people will need. If you think back in history, it was never
01:12:47
possible to predict the future, but at least people knew what kind of skills
01:12:53
would be needed in a couple of decades. If you lived, I don't know, in England
01:12:58
in 1023, a thousand years ago, you don't know
01:13:05
what will happen in 30 years. Maybe the Normans will invade, or the Vikings,
01:13:11
or the Scots or whoever. Maybe there'll be an earthquake. Maybe there'll be a
01:13:16
new pandemic. Anything can happen. You can't predict. But you still have a very
01:13:22
good idea of how the economy would look and how human society would look
01:13:27
in the 1050s or the 1060s. You know that most people will still be farmers.
01:13:35
You know it's a good idea to teach your kids how to harvest wheat, how to
01:13:41
bake bread, how to ride a horse, how to shoot a bow and arrow. These things
01:13:46
will still be necessary in 30 years. If you now look 30 years to the future,
01:13:52
nobody has any idea what kind of skills will be needed. If you think for
01:13:58
instance, okay, this is the age of AI and computers, I will teach my kids how to code. Maybe in 30 years,
01:14:06
humans no longer code anything because AI is so much better than us at writing
01:14:12
code. So what should we focus on? I would
01:14:17
say the only thing we can be certain about is that 30 years from now the
01:14:22
world will be extremely volatile; it will keep changing at an
01:14:27
ever more rapid pace. Do you think this is going to increase the amount of conflict,
01:14:33
because I watched a video on your YouTube channel where you talked about the return of wars? Yeah. That's one of the dangers, that
01:14:40
there is, and we see it all over the world now. Like 10 years ago we were
01:14:45
in the most peaceful era in human history and unfortunately this era is
01:14:51
over. We are now in a new era of wars and potentially of imperialism
01:14:58
and we are seeing it all over the world, with the Russian invasion of Ukraine, now with the war in the Middle East,
01:15:05
Venezuela and Guyana, parts of East Asia. War is back on the table. It's not
01:15:13
just because of the rapid changes and the upheavals they cause. It's also
01:15:19
because, you know, 10 years ago we had a global order, the liberal order, which
01:15:26
was far from perfect but still regulated relations between nations,
01:15:33
between countries, based on an idea, on the liberal
01:15:39
worldview that despite our national differences all humans share certain
01:15:46
basic experiences and needs and interests. Which is why it makes sense
01:15:52
for us to work together to defuse conflicts and to solve our
01:16:00
common problems. It was far from perfect, but it did create the most
01:16:06
peaceful era in human history. Then this order was repeatedly attacked
01:16:12
not only from outside, by forces like Russia or North Korea
01:16:18
or Iran, that never accepted this order, but also from the inside, even from the
01:16:24
United States, which was, to a large extent, the architect of this order, with
01:16:30
the election of Donald Trump, who says, I don't care about any kind of global order. I only care about my own
01:16:38
nation. And you see this way of thinking, that I only care about the interests
01:16:44
of my nation more and more around the world. Now the big question to ask is if
01:16:51
all the nations think like that, what regulates the relations between them?
01:16:58
And there was no alternative. Nobody came up and said, okay, I don't
01:17:04
like the liberal global order, I have a better suggestion for how to manage relations between
01:17:12
different nations. They just destroyed the existing order
01:17:17
without offering an alternative. And the alternative to order is simply disorder.
01:17:24
And this is now where we find ourselves. Do you think there's more wars on the way? Yes. Unless we reestablish order,
01:17:33
there will be more and worse wars coming in the next few years, in more and
01:17:39
more areas around the world. You see defense budgets all over the
01:17:45
world skyrocketing, and this is a vicious circle. When your
01:17:51
neighbors increase their military budget, you feel compelled to do the
01:17:56
same and then they increase their budget even more. You know, when I say that the
01:18:02
early 21st century was the most peaceful era in human history,
01:18:08
one of the indications is how low
01:18:14
the military budgets all over the world were. For most of history, kings and
01:18:21
emperors and khans and sultans, the military was the number one item on
01:18:27
their budget. They spent more on their soldiers and navies and fortresses than
01:18:34
on anything else. In the early 21st century, most countries
01:18:40
spent something like a few percentage points of their budget on
01:18:47
the military. Education, healthcare, welfare were a much bigger
01:18:55
item on the budget than defense. And this is now changing. The money is
01:19:02
increasingly going to tanks and missiles
01:19:07
and cyber weapons instead of to nurses and schools and social workers.
01:19:15
And again, it's not inevitable. It's the result of human decisions. The relatively peaceful era of the early
01:19:22
21st century, it did not result from some miracle. It resulted from humans
01:19:28
making wise decisions in previous decades. What are the wise decisions we need to make now in your view?
01:19:33
Reinvest in rebuilding a global order
01:19:39
which is based on universal values and norms
01:19:44
and not just on the narrow interests of specific nation states.
01:19:50
Are you concerned that Trump might be elected again shortly? I think it's very likely, and if it happens, it is likely to be
01:19:56
kind of like the death blow to what remains of the global order. And he says
01:20:03
it openly. Now, again, it should be clear that many of these politicians
01:20:10
present a false dichotomy, a false binary vision
01:20:16
of the world, as if you have to choose between patriotism and globalism, between
01:20:24
being loyal to your own nation and being loyal to some kind of, I don't know,
01:20:30
global government or whatever. And this is completely false. There is no
01:20:35
contradiction between patriotism and global cooperation. When we talk about
01:20:41
global cooperation, we definitely don't have in mind, at least not anybody that I know, a global government. This is an
01:20:48
impossible and very dangerous idea. It simply means that you have certain
01:20:55
rules and norms for how different nation states treat each other and
01:21:03
behave towards each other. If you don't have a system of global norms and
01:21:09
values, then very quickly what you have is just global conflict, just wars. I
01:21:16
mean, some people have this idea, they imagine the world as a network of friendly fortresses,
01:21:23
like each nation will be a fortress with very high walls, taking care of its own
01:21:30
interests but living on relatively friendly terms with the
01:21:37
neighboring fortresses, trading with them and whatever. Now the main
01:21:42
problem with this vision is that fortresses are almost never friendly. Each fortress always wants a bit more
01:21:50
land, a bit more prosperity, a bit more security for itself at the expense of
01:21:57
the neighbors. And this is the high road to conflict
01:22:02
and to war. There's that phrase, isn't there? Ignorance is bliss. Now, something that
01:22:10
your work has forced you and continues to encourage you to not live in is
01:22:15
ignorance. So, with that, one might logically deduce that out the window goes your bliss.
01:22:22
Are you happy? I think I'm relatively happy, at least
01:22:29
happier than I was for most of my life.
01:22:35
Part of it is that I invest a lot of my time not just in,
01:22:42
you know, researching what is happening in the world, but also in the health of
01:22:47
my own mind, and, you know, keeping a kind of balanced
01:22:56
information diet. It's basically like with food. You need food in order to survive
01:23:04
and to be healthy. But if you eat too much, or if you eat too much of the wrong stuff, it's bad for you. And it's
01:23:12
exactly the same with information. Information is the food of the mind.
01:23:17
And if you eat too much of it, or of the wrong kind, you'll get a very sick mind.
01:23:24
So I try to keep a very balanced
01:23:29
information diet, which also includes information fasts.
01:23:35
So I try to disconnect. Every day I dedicate two hours a
01:23:42
day for meditation. Wow. And every year I go for a long meditation retreat of between 30 and 60
01:23:50
days, completely disconnecting. No phones, no emails, not even books.
01:23:58
just observing myself, observing what is happening inside my body and inside my
01:24:04
mind, getting to know myself better and kind of digesting
01:24:11
all the information that I absorbed during the rest of the year or the rest
01:24:17
of the day. Have you seen a clear benefit in doing that? Yes, very clear. I don't think I
01:24:23
would be able to write these books or to do what I'm doing without
01:24:30
this kind of information diet, and without devoting a lot of time and attention to balancing my
01:24:38
mind and keeping it healthy. You know, so many people spend so much time keeping their body healthy, which is very
01:24:45
important of course, but we need to spend an equal amount of attention on our mind. It is as important as our body.
01:24:52
When you said you don't think you'd be able to do what you do if you didn't take these information diets, why?
01:24:58
I'd just, you know, first of all be overwhelmed
01:25:04
and not have any kind of peace of mind, not have any kind of perspective.
01:25:11
If you're constantly in the news cycle, in the information cycle, you lose all
01:25:18
perspective. You know, organic entities, unlike AIs, unlike computers, we are
01:25:26
cyclical entities. We need to sleep every day. AIs don't sleep. You know,
01:25:33
even the stock exchange closes every afternoon. It closes also for the
01:25:39
weekend or for Christmas. If you think about it, this is amazing, that you
01:25:44
know, if a war erupts at Christmas, Wall Street will be able to
01:25:51
react only after a couple of days because the people are on holiday. They
01:25:57
took time off. Even the money market takes time off. But if you give
01:26:05
AI full control, there will never be any time off. It will be 24 hours a day, 365
01:26:12
days a year, and people just collapse.
01:26:17
I mean, I think part of the problem that politicians today face is that they
01:26:23
need to be on 24 hours a day because the news cycle is on 24 hours a day. Like in
01:26:29
previous eras, if you're, I don't know, a king in the Middle Ages and you go somewhere, you're
01:26:36
on the road in your carriage and nobody can reach you. Even if the French are
01:26:42
invading, nobody can reach you. You have some time off. If you're a prime
01:26:47
minister now, there is no time off. And computers are built for it, but human
01:26:54
brains aren't. If you try to keep an organic entity
01:27:01
awake and kind of constantly processing information and reacting 24 hours a day,
01:27:08
it will very soon collapse. It's funny, it made me think of what, I think it was the former Netflix
01:27:14
CEO, or one of the Netflix CEOs, said: "Our biggest competitor is sleep."
01:27:20
Sleep. Yeah. That's a very scary and, I think, very important line, and it's a very honest line.
01:27:26
It's a very honest line, and it's scary because if people don't sleep they
01:27:31
collapse and eventually they die. And this is again part of the problem that
01:27:37
we talked about earlier, the battle for human attention in social media, in
01:27:43
streaming services. Now, many of these corporations
01:27:49
measure their success by user engagement. The more people are engaged, the more
01:27:56
successful we are. Now user engagement is a very
01:28:02
broad definition. According to this measurement, one hour of outrage is
01:28:10
better than 10 minutes of joy. And certainly better than one hour
01:28:17
of sleep, because in one hour of outrage I will consume three adverts. Yes.
01:28:22
And that means that the corporation makes $30, for example. And from two hours of
01:28:28
sleep they make nothing. From 10 minutes of joy maybe they sell only one ad.
01:28:34
Mhm. But from the viewpoint of how humans function and how this
01:28:41
organism functions, 10 minutes of joy are probably better for us than one
01:28:47
hour of outrage. And certainly we need not just two hours, we need six, seven,
01:28:52
eight hours of sleep. Well, this is why, you know, the algorithms on certain platforms,
01:28:58
specifically TikTok, Mhm. are just absolutely addictive, to say the least. Like I,
01:29:06
Because they hacked us. Yeah. It's literally... we had, you know, a certain level of
01:29:12
addiction to the previous social algorithms, and then TikTok came along and said, hold my beer, and they just went
01:29:18
for it, you know, and they've won because of that. I see 60-year-olds
01:29:25
absolutely addicted to TikTok, and because they sometimes don't understand the concept of an algorithm, and
01:29:31
they don't understand, like, the advertising model and all of that stuff, it's hypnotism. They're like
01:29:37
absolutely hypnotized. Funnily enough, my driver is one of them. My driver's outside; whenever I walk up to his car,
01:29:43
he's just like this on TikTok. He's scrolling, and I had a conversation with him last night. I'm like, do you realize that TikTok has your brain?
01:29:51
Yeah, absolutely. And we're just at the very foot, sort of the first steps,
01:29:57
of an exponential curve of algorithms competing for our attention in our brain. We haven't seen anything yet. I mean,
01:30:02
these algorithms, they are, what, like 10 years old, if you think about these social
01:30:08
media algorithms, the algorithms that get to know you personally, to hack your brain and then grab your attention.
01:30:14
They are 10 years old, and the companies die if they don't beat the other algorithms. So, like Twitter
01:30:20
now, when Elon took it over, and I think people will relate to this if you use Twitter, suddenly I've seen more people
01:30:27
having their heads blown off and being hit by cars on Twitter than I'd ever seen in the previous 10 years.
01:30:32
And I think someone at Twitter's gone, listen, this company's going to die unless we increase time spent on
01:30:38
this platform and show more ads. So, let's start serving up a more addictive algorithm. And that requires a response
01:30:43
from Instagram and the other platforms. And so it's a real... You know, Elon has this other company,
01:30:48
the Boring Company. Yeah. Which is about boring tunnels, of course. But actually, it might be a good
01:30:54
idea to make Twitter more boring and to make TikTok more boring. I mean, I know it's a very bad
01:31:01
kind of business decision. But I don't think humanity will survive
01:31:09
unless we have more boredom. If you ask me what is wrong with the
01:31:15
world in 2023, it is that everybody is far too excited.
01:31:21
And if I had to kind of summarize what's wrong in one word, the word is excited.
01:31:27
And people don't understand the meaning of this word. People think that excited means happy. Like two people meet, I am
01:31:35
so excited to meet you. I have a new idea. I publish a new book. Whatever. Oh, this is such an exciting
01:31:41
idea, such an exciting book. And exciting isn't happy. Exciting isn't
01:31:48
always good. Sometimes, yes, sometimes it's good to be excited. An organism that is excited all the time dies. The
01:31:56
meaning of excitement is that the body is in fight-or-flight
01:32:01
mode. All the nerves are on, all the neurons are firing, all the muscles are tense.
01:32:08
This is excitement, and very often negative things excite
01:32:13
us. Fear is excitement. Hate is excitement. Anger is excitement.
01:32:21
And, you know, when I meet a good friend, I'm often relaxed to meet the
01:32:29
friend, not excited. And, you know, if you think about
01:32:35
the political level, we have far too many exciting politicians doing very exciting
01:32:41
things, and we need more boring politicians, more Bidens, that do less exciting
01:32:50
things. But the brain is wired to pay attention to excitement and to crave it,
01:32:56
but the brain evolved in situations when you didn't have a constant stream of
01:33:03
exciting videos. Sometimes it was on, sometimes it was off. And now our brains
01:33:10
have been hacked and these devices, technologies, they know how to
01:33:17
create constant excitement. And the more this happens, we also lose
01:33:25
our ability, our skill, to be bored. If we have to spend a few minutes doing
01:33:32
nothing, somewhere waiting, we can't do it. We immediately take out the smartphone and start watching TikTok or
01:33:40
scrolling through Twitter or whatever. Did you hear about that experiment where people would rather take an electric shock than do nothing?
01:33:47
Yeah. And you know, you can't get, for instance,
01:33:54
to any level of peace of mind if you don't know how to handle boredom.
01:34:01
The same way that excitement and outrage are
01:34:07
neighbors. Peace and boredom are also neighbors. And if you don't know how to
01:34:13
handle boredom, if the minute there is a hint of boredom, you run away to some
01:34:19
exciting thing, you will never experience peace of mind. And if
01:34:25
humans don't experience peace of mind, there is no way that the world as a whole is going to be peaceful.
01:34:32
In 2023, I launched my very own private equity fund called Flight Fund. And
01:34:38
since then, we've invested in some of the most promising companies in the world. My objective is to make this the
01:34:44
best performing fund in Europe with a focus on high growth companies that I believe will be the next European
01:34:49
unicorns. The current investors in the fund who have joined me on this journey are some of Europe's most successful and
01:34:56
innovative entrepreneurs. And I'm excited to announce that today, as a founder of a company, you can pitch your
01:35:02
company to us. Or if you are an investor, you can also now apply to
01:35:08
invest with us. Head to flightfund.com to gain an understanding of the fund's
01:35:14
mission, the remarkable companies we proudly support, and to get in touch with me and my team. Legal disclaimer,
01:35:20
Flight Fund is regulated by the FCA. So, please remember that investing in the fund is for sophisticated investors
01:35:26
only. Don't invest unless you're prepared to lose all of the money you invest. This is a high-risk investment and you are unlikely to be protected if
01:35:32
something goes wrong. There is no guarantee that the investment objectives will be achieved. And as with all private equity investments, all of
01:35:38
the investment capital is at risk. This communication is for information purposes only and should not be taken as
01:35:44
investment advice or a financial promotion. As you guys know, I'm a big fan of Huel. I'm an investor in the
01:35:49
company and they sponsor this podcast. And what I've done for you: I've put together what I call the Huel Steven
01:35:54
bundle, which is a selection of my favorite products from Huel, including the Black Edition salted caramel flavor,
01:36:00
which is super high in protein and has 17 servings per container. Also comes with their ready to drink product, which
01:36:07
is one of my all-time favorite products from Huel. The brand new and very exciting Huel complete nutrition bars.
01:36:13
This is chocolate caramel. You can see from the empty box in front of me that I've eaten most of them, right? Me and my team here. If you leave these on the
01:36:19
counter for 5 seconds, they'll go. I'm going to say something I've never said. When Huel first made their bar many,
01:36:25
many years ago, I tried it and I didn't like it, so I've never talked about it on this podcast. They've spent roughly
01:36:30
the last 2 to 3 years making a brand new bar, which I absolutely love. If you want to order them yourself and get
01:36:35
started on your Huel journey, the link is in the description below. In this podcast episode, wherever you're
01:36:41
listening to it, there'll be a Steven's bundle link; check it out. Back to the episode. If I could give you the
01:36:46
choice to be born in 1976, as you were, Yeah, or to be born now,
01:36:53
I would go for 1976. I mean, the people of my generation, we
01:37:00
were privileged to grow up in one of the most peaceful and most optimistic eras
01:37:07
in human history. The end of the Cold War, the fall of the Iron Curtain. I
01:37:13
don't know of any better time. But when I look at what is happening
01:37:19
right now, I don't envy the people who grow up in the 2020s.
01:37:25
What is the closing statement of hope and solution
01:37:34
that kind of ties off this conversation? What is the thing that someone, having gotten to this point in the conversation,
01:37:40
should be thinking about doing, which will cause the domino effect that will lead us to a maybe more hopeful
01:37:47
future? But we still have agency. I mean, the algorithms are not yet in full
01:37:54
control. They are taking power away from us. But most power is still in human
01:37:59
hands, and every human being has some level of power, of agency, which means
01:38:06
that each one of us has some responsibility. Now nobody can solve all the world's
01:38:12
problems. So focus on one thing. Find the one
01:38:17
thing which is close to your heart, which you have a deep understanding of,
01:38:24
and try to make a difference there. And the best way to make a
01:38:30
difference is to cooperate with other people. I mean the human superpower is
01:38:35
our ability to cooperate in large numbers. So if you care about a specific
01:38:40
issue, don't try to be an isolated activist. 50 individuals who cooperate as part of
01:38:48
an organization can do much, much more than 500
01:38:54
isolated individuals. So find your one thing and, again,
01:39:01
don't try to do everything. Let other people do the rest and cooperate with
01:39:06
other people on your chosen mission. Yuval, your book Sapiens changed the world
01:39:14
in many ways. It gave us a new perspective and a new understanding of who we are as humans, where we've
01:39:19
come from. And with that, we have a road map for where we're going. It's celebrating its 10th anniversary. I have
01:39:25
the 10th anniversary edition here, which I'm going to beg you to sign for me after. And it really is a once in a
01:39:32
generation book. The numbers that I have are that it sold more than 25 million
01:39:37
copies and that's in a market where people said no one's buying books anymore. That's crazy. That's absolutely
01:39:44
crazy. You're working on a new book, which I'm very excited to hear about. A
01:39:49
little birdie told me it'll be announced next year, and I'm sure everyone's incredibly energized about that. I ask people this
01:39:56
question sometimes just as a way to close off the show, but I wanted to ask you because it's especially pertinent to someone that's got such a huge,
01:40:03
varying wealth of work. Is there one particular topic that is pertinent to
01:40:09
our future that we didn't talk about? I would say that when we talk about
01:40:15
the future,
01:40:20
history is more relevant than ever before. History is not really the study of the
01:40:28
past. History is the study of change, of how things change.
01:40:34
Nobody cares about the past for the sake of the past. All the people who
01:40:39
lived in the Middle Ages or in ancient Rome, they are
01:40:45
all dead. We can't do anything about their disasters and their misery.
01:40:53
We can't correct any of the wrongs that happened in ancient times. And they
01:41:01
don't care what we say about them. You can say anything you want about the Romans, the Vikings, they are gone.
01:41:08
They don't care. The reason to study the past is because
01:41:13
if you understand the dynamics of change in previous centuries, in previous eras,
01:41:20
this gives you perspective on the process of change in the
01:41:26
present moment. And I think the curse of history is that people have this fantasy of
01:41:34
changing the past, of bringing justice to the past, and this is just impossible.
01:41:40
You cannot go back there and save the people there. The big question is
01:41:46
how do you save the people now? How do you prevent
01:41:53
catastrophes, perhaps, from happening?
01:41:58
And this is the reason to study history. And the main message of history is
01:42:05
that humans created the world in which we live.
01:42:11
The world that we know, with nation states and corporations and capitalist
01:42:17
economics and religions like Christianity and Hinduism, humans
01:42:23
created this world and humans can also change it. If there is something about
01:42:30
the world that you think is unfair, is dangerous, is problematic, then...
01:42:38
Some things are beyond our control. The laws of physics are beyond our control.
01:42:43
So far, the laws of biology are also beyond our control. But knowing
01:42:52
what is natural, what is the outcome of physics and biology versus what is the
01:43:00
outcome of human inventions, human stories, human institutions. This is
01:43:06
very difficult. A lot of things that people think are just natural, this is
01:43:13
the way the world is. This is biology. This is physics. They are not. They are actually the result of historical
01:43:19
processes. And this is why it's so important to understand history, to understand how
01:43:26
things change and to understand what can be changed.
01:43:32
We have a closing tradition on this podcast where the last guest leaves a question for the next guest, not knowing who they're going to be leaving it for.
01:43:37
Oh. The question that's been left for you: if you could impose a global law, but
01:43:43
only one global law, what would it be and why? Oh, great question.
01:43:49
I would say that people should
01:43:55
consume less information and spend more time reflecting and
01:44:01
digesting what they already know, what they've already heard. Thank you, Yuval. It means a huge
01:44:07
amount to me that someone of your esteem, someone whose books have inspired me and turned the lights on in
01:44:13
so many areas of my life, would have this conversation with me today. So I thank you so much for that. But also for turning the lights on for the hundreds of
01:44:20
millions of people that have consumed your work all around the world, the videos, the books, etc. As you've said, it's the most important work
01:44:27
because it helps us look back at history in a way that is accessible and inclusive, in a way that even I could
01:44:34
read without having to be a historian or understand very complex subject matter. So, thank you so, so much.
01:44:41
Thank you. It's been great to be here.
01:44:46
If you listen to this podcast frequently, there's something I talk about very often and that is the subject of sleep. And so I dug down a pretty
01:44:54
deep sleep rabbit hole to figure out how I could sleep better. One of the things that I found is a brand called Eight
01:44:59
Sleep that sponsored this podcast. And that is the cover that I have on my bed. I saw the variance in my performance, my
01:45:05
ability to talk, my mood, and everything that matters to me when I'm unslept. It regulates the temperature of both sides
01:45:13
of my bed individually. So my partner can have cold, I can have a little bit warmer, and it learns about my body and
01:45:19
sets my bed to the temperature that I need to have optimal sleep. The brands
01:45:25
that I talk about on this show, the podcast sponsors that I have, are brands that I love and use, and Eight Sleep is one
01:45:31
of them. They've made that piece of foam that we all sleep on for 8 hours a day smart. I've put a link in the
01:45:38
description below, but you can go to eightsleep.com/steven for exclusive holiday savings.
01:45:46
Do you need a podcast to listen to next? We've discovered that people who liked this episode also tend to absolutely
01:45:53
love another recent episode we've done. So, I've linked that episode in the description below. I know you'll enjoy
01:45:59
it. [Music]

Badges

This episode stands out for the following:

  • Best concept / idea: 85
  • Best overall: 80
  • Most influential: 80
  • Most quotable: 75

Episode Highlights

  • Fictions and Reality
    Humans often lose touch with reality due to the fictions we create and believe.
    “Much of what we take to be real is fictions.”
    @ 03m 56s
  • The Future of Humanity
    We may transform into something entirely different due to bioengineering and AI.
    “I think we are very near the end of our species.”
    @ 08m 53s
  • The Complexity of Finance
    As AI creates increasingly complex financial devices, will anyone understand finance anymore?
    “Less than 1%.”
    @ 26m 50s
  • The Dangers of AI in Finance
    What happens when AI's financial devices lead to a system collapse?
    “Nobody understands what is happening.”
    @ 28m 40s
  • Manipulation by AI
    AI's ability to mimic intimacy raises questions about human relationships and manipulation.
    “Intimacy is kind of the most powerful weapon.”
    @ 46m 47s
  • Happiness vs. Power
    Despite rising power, there's no correlation between wealth and happiness.
    “We just don't really know how to translate power into happiness.”
    @ 53m 24s
  • Scarcity and Choices
    The choices we make are often defined by their scarcity, impacting our happiness.
    “What makes choices special is their scarcity.”
    @ 56m 04s
  • AI and Social Disparity
    The rise of AI could create a dangerous divide between a ruling class and the rest.
    “The disparity that AI might create could split humanity into a ruling class and the rest.”
    @ 01h 00m 57s
  • The Need for Global Order
    Without a global order based on shared values, we risk descending into chaos and conflict.
    “The alternative to order is simply disorder.”
    @ 01h 17m 12s
  • The Importance of Information Diets
    Maintaining a balanced information diet is crucial for mental health and perspective.
    “Information is the food of the mind.”
    @ 01h 23m 12s
  • The Need for Boredom
    Understanding and embracing boredom is essential for achieving peace of mind.
    “If you don't know how to handle boredom, you will never experience peace of mind.”
    @ 01h 34m 01s
  • Agency in the Age of Algorithms
    Despite algorithmic control, individuals still hold power and responsibility to effect change.
    “The algorithms are not yet in full control.”
    @ 01h 37m 54s

Key Moments

  • End of Organic Humanity @ 09:19
  • Language and Power @ 22:30
  • Manipulation and Intimacy @ 47:23
  • Power vs. Happiness @ 53:24
  • Scarcity's Impact @ 56:04
  • Immortality Debate @ 59:31
  • AI Disparity @ 1:00:57
  • Global Disorder @ 1:17:12

Related Episodes

  • Yuval Noah Harari: They Are Lying About AI! The Trump Kamala Election Will Tear The Country Apart!
  • The Professor Banned From Speaking Out: "We Need To Start Preparing” - Dr Bret Weinstein
  • WW3 Threat Assessment: The West Is Collapsing, Can We Stop It?! They Want You Confused & Obedient!
  • Ex-Google Exec (WARNING): The Next 15 Years Will Be Hell! We Need To Start Preparing! - Mo Gawdat
  • Ray Dalio: We’re Heading Into Very, Very Dark Times! America & The UK’s Decline Is Coming!