Search:

Ex-Google Exec (WARNING): The Next 15 Years Will Be Hell! We Need To Start Preparing! - Mo Gawdat

August 04, 202502:34:12
00:00:00
The only way for us to get to a better place and succeed as a species is for the evil people at the top to be
00:00:06
replaced with AI. I mean, think about it. AI will not want to destroy
00:00:11
ecosystems. It will not want to kill a million people. They'll not make us hate each other like the current leaders
00:00:16
because that's a waste of energy, explosives, money, and people. But the problem is super intelligent AI is
00:00:22
reporting to stupid leaders. And that's why in the next 15 years, we are going to hit a short-term dystopia. There's no
00:00:28
escaping that. They having AI leaders. Is that even fundamentally possible?
00:00:33
Let's put it this way. Mold Ga is back. And the former chief business officer at
00:00:38
Google X is now one of the most urgent voices in AI with a very clear message. AI isn't your enemy, but it could be
00:00:46
your savior. I love you so much, man. You're such a good friend. But you don't have many years to live. Not in this world.
00:00:53
Everything's going to change. Economics are going to change. Human connection is going to change. and lots of jobs will
00:00:58
be lost including podcasting. No, no. Thank you for coming on today, Mo.
00:01:04
But but the truth is it could be the best world ever. The society completely full of laughter and joy. Free
00:01:10
healthcare, no jobs, spending more time with their loved ones. A world where all of us are equal.
00:01:15
Is that possible? 100%. And I have enough evidence to know that we can use AI to build the utopia.
00:01:21
But it's a dystopia if humanity manages it badly. a world where there's going to be a lot of control, a lot of
00:01:27
surveillance, a lot of forced compliance and a hunger for power, greed, ego, and it is happening already. But the truth
00:01:33
is the only barrier between a utopia for humanity and AI and the dystopia we're
00:01:39
going through is a mindset. What does society have to do? First of all,
00:01:45
I see messages all the time in the comments section that some of you didn't realize you didn't subscribe. So, if you
00:01:51
could do me a favor and double check if you're a subscriber to this channel, that would be tremendously appreciated. It's the simple, it's the free thing
00:01:57
that anybody that watches this show frequently can do to help us here to keep everything going in this show in the trajectory it's on. So, please do
00:02:03
double check if you've subscribed and uh thank you so much because a strange way you are you're part of our history and
00:02:09
you're on this journey with us and I appreciate you for that. So, yeah, thank you.
00:02:16
Mo, two years ago today, we sat here and discussed AI. We discussed your book, Scary, Smart, and everything that was
00:02:22
happening in the world. Since then, AI has continued to develop at a tremendous, alarming, mind-boggling
00:02:31
rate, and the technologies that existed 2 years ago when we had that conversation have grown up and matured
00:02:36
and are taking on a life of their own, no pun intended. What are what are you thinking about AI now, two years on? I
00:02:44
know that you've started writing a new book called Alive, which is I guess a bit of a follow on or an evolution of your thoughts as it relates to scary
00:02:49
smart. What what is front of your mind when it comes to AI?
00:02:55
It's a scary smart was shockingly accurate. It's quite a I mean I don't
00:03:00
even know how I ended up writing predicting those things. I remember it
00:03:05
was written in 2020, published in 2021 and then most people were like um who
00:03:12
wants to talk about AI you know I know everybody in the media and I would go and do you want to talk and then 2023 CH
00:03:19
GPT comes out and everything flips everyone realizes you know this is real
00:03:24
this is not science fiction this is here and uh and things move very very fast
00:03:30
much faster than I think we've ever seen anything ever move ever And and I think
00:03:35
my position has changed on two very important fronts. One is remember when
00:03:40
we spoke about scary smart I was still saying that there are things we can do to change the course. Uh and we could at
00:03:48
the time I believe uh now I've changed my mind. Now I believe that we are going
00:03:54
to hit a a short-term dystopia. There's no escaping that. What is dystopia?
00:03:59
I call it face rips. We can talk about it in details but the the the way we define very important parameters in life
00:04:07
are going to be uh completely changed. So so face rips are you know the way we
00:04:13
define freedom uh accountability human connection and equality economics uh
00:04:20
reality innovation and business and power. That's the first change. So the first change in my mind is that uh is
00:04:28
that we uh will have to prepare for a world that is very unfamiliar. Okay. And
00:04:35
that's the next 12 to 15 years. It has already started. We've seen examples of it in the world already even though
00:04:40
people don't talk about it. I try to tell people you know there are things we absolutely have to do. But on the other
00:04:46
hand I started to take an active role in building amazing AIs. So AIS that will
00:04:54
uh not only make our world better uh but that will understand us understand what
00:05:00
humanity is through that process. What is the definition of the word dystopia?
00:05:06
So in my in my mind these are adverse circumstances that unfortunately might
00:05:12
escalate beyond our control. The problem is the uh there is a lot wrong with the
00:05:18
um value set with the ethics of humanity at the age of the rise of the machines
00:05:24
and when you take a technology every technology we've ever created just magnified human abilities. So you know
00:05:30
you can walk at 5 km an hour you get in a car and you can now go you know 250
00:05:37
280 m an hour. Okay. uh basically magnifying your mobility if you want you
00:05:43
know you can use a computer to magnify your u calculation abilities or whatever
00:05:48
okay and and what AI is going to magnify unfortunately at this time is it's going
00:05:53
to magnify the evil that man can do and and it is within our hands completely
00:05:59
completely within our hands to change that but I have to say I don't think humanity has the awareness uh at this
00:06:07
time to focus on so that we actually use AI to build the
00:06:12
utopia. So what you're essentially saying is that you now believe there'll be a
00:06:17
period of dystopia and to define the word dystopia, I've used AI, it says a terrible society where people live under
00:06:23
fear, control or suffering and then you think we'll come out of that dystopia into a utopia which is defined as a
00:06:29
perfect or ideal place where everything works well, a good society where people live in peace, health and happiness.
00:06:34
Correct. And the difference between them, interestingly, is what I normally refer to as the second dilemma, which is the
00:06:41
po point where we hand over completely to AI. So, a lot of people think that
00:06:47
when AI is in full control, it's going to be an existential risk for humanity. You know, I have enough uh evidence to
00:06:54
to argue that when we fully hand over to AI, that's going to be our salvation.
00:07:00
that the problem with us today is not, you know, that intelligence is going to
00:07:05
work against us. It's that our stupidity as humans is working against us. And I think the challenges that will come from
00:07:12
humans being in control uh are going to outweigh the
00:07:19
the challenges that could come from AI being in control. So, as we're in this dystopia period,
00:07:24
did you do you forecast the length of that dystopia? Yeah, I count I count it
00:07:29
exactly as 12 to 15 years. I I believe the beginning of the slope will happen
00:07:34
in 2027. I mean it we will see signs in 26. We've seen signs in 24 but we will
00:07:41
see escalating signs next year and then a a clear uh slip in 27.
00:07:47
Why? The geopolitical environment of our world is not very positive. I mean you
00:07:54
really have to think deeply about not the not the symptoms but the the reasons
00:08:01
why we are living the world that we live in here in today is money right and uh
00:08:08
and money for anyone who knows who really knows money's you and I are
00:08:14
peasants you know we build businesses we contribute to the world we make things we sell things and so on real money is
00:08:20
not made there at all real money is made in lending in fractional reserve, right?
00:08:27
And and you know the biggest lender uh in the world would want reasons to lend
00:08:34
and those reasons are never as big as war. I mean think about it, huh? Uh the
00:08:40
world spent $2.71 trillion on war in 2024,
00:08:47
right? A trillion dollars a year in the US. And when you really think deeply, I
00:08:53
don't mean to be scary here. You know, weapons have depreciation.
00:09:00
They depreciate over 10 to 30 years. Most weapons, they lose their value. They lose their value and they
00:09:05
depreciate in accounting terms on the books of an army. The current arsenal of the US, that's a result of a deep search
00:09:11
with my AI Trixie. You know the current arsenal I think we we think cost the US
00:09:18
24 to 26 trillion dollars to build. My conclusion is that a lot of the wars
00:09:23
that are happening around the world today are a means to get rid of those weapons so that you can have replace
00:09:29
them. And uh you know when when your morality as an industry is we're
00:09:35
building weapons to kill then you know you might as well use the weapons to kill.
00:09:40
Who benefits? the lenders and the industry, but but they can't make the decision to go to war. They they have to rely on
00:09:47
remember I I said that to you when we I think on on our third podcast. War is
00:09:54
decided first then the story is manufactured. You you remember 1984 and the Orwellian approach
00:10:01
of like you know uh freedom is slavery and uh war is peace and they call it uh
00:10:08
something speak uh basically to to to to convince people that going to war in
00:10:16
another country to to kill 4.7 million people is freedom. You know we're going
00:10:22
there to free the Iraqi people. Is war ever freedom? you know, to to
00:10:27
tell someone that you're going to kill 300,000 women and children is for liberty and
00:10:34
for the the the you know, for human values. Seriously, how do we ever get to believe
00:10:42
that the story is manufactured and then we follow and humans because we're gullible uh we cheer up and we say,
00:10:49
"Yeah, yeah, yeah. We are we're on the right side. They are the bad guys."
00:10:54
Okay. So, let me let me have a let me have a go at this idea. So, the idea is that really money is driving a lot of
00:11:00
the conflict we're seeing and it's really going to be driving the dystopia. So, here's an idea. So, I um I was
00:11:05
reading something the other day and it talked about how billionaires are never satisfied because
00:11:12
actually what a billionaire wants isn't actually more money. It is more status.
00:11:17
Correct. And I was looking at the sort of evolutionary case for this argument. And if you go back a couple of thousand
00:11:23
years, money didn't exist. You were as wealthy as what you could carry. So even I think
00:11:30
to the human mind, the idea of wealth and money isn't a thing. what we've but what has
00:11:37
always mattered from a survival of the fittest from a reproductive standpoint what's always had reproductive value if
00:11:44
you go back thousands of years the person who was able to mate the most was the person with the most status so it
00:11:51
makes the case the reason why billionaires get all of this money but then they go on podcasts and they want
00:11:56
to start their own podcast and they want to buy newspapers is actually because at the very core of human beings is a
00:12:01
desire to increase their status. Yeah. And so if we think of when we going back to the example of why wars are breaking out, maybe it's not money.
00:12:08
Maybe actually it's status and and it's
00:12:14
this prime minister or this leader or this, you know, individual wanting to create more power and more status
00:12:20
because really at the heart of what matters to a human being is having more power and more status. And money is actually money as a thing is actually
00:12:27
just a proxy of my status. And and what kind of world is that? I mean, it's a [ __ ] up one. all these
00:12:33
all these powerful men have uh correct are really messing the world up. But so so can can I can I can I
00:12:38
actually AI is the same because we're in this AI race now where a lot of billionaires are like if I get
00:12:45
AGI artificial general intelligence first then I basically rule the world 100%. That's exactly the the concept
00:12:52
what I what I used to call the the the first inevitable now I call the first dilemma and scary smart is that it's
00:12:58
it's a race that constantly accelerates. You think the next 12 years are going to be AI dystopia where things aren't
00:13:05
I think the next 12 years are going to be human dystopia using AI humaninduced dystopia using AI
00:13:12
and you define that by a rise in warfare around the world as the last the last one the RIP the last
00:13:19
one is basically you're going to have a massive concentration of power and a massive distribution of power okay and
00:13:26
that basically will mean that those with the maximum concentration of power are going to try to oppress those with with
00:13:33
democracy of power. Okay, so think about it this way in today's world um unlike
00:13:39
the past uh you know the Houthis with a drone the
00:13:46
Houthis are the Yemeni uh tribes basically resisting US power and Israeli
00:13:52
power in the Red Sea. Okay. They use a drone that is $3,000 worth to attack a
00:13:59
uh a warship from from the US or an airplane from the US and so on that's worth hundreds of millions. Okay, that
00:14:06
kind of democracy of power makes those in power worry a lot about where the
00:14:13
next threat is coming from. Okay, and this happens not only in war but also in economics. Okay, also in innovation,
00:14:20
also in technology and so on and so forth, right? And so basically what that means is that like you rightly said as
00:14:27
the the the tech oligarchs are attempting to get to AGI. They want to make sure that as soon as
00:14:34
they get to AGI that nobody else has AGI and and basically they want to make sure
00:14:40
that nobody else has the ability to shake their position of privilege if you
00:14:46
want. Okay. And so you're going to see a world where unfortunately there's going to be a lot of control, a lot of
00:14:51
surveillance, a lot of um of forced compliance if you want or you lose your
00:14:57
privilege to be in the world and and it is happening already. With this acronym, I want to make sure
00:15:03
we get through the whole acronym. So you like dystopians, don't you? I want to do the dystopian thing, then I
00:15:09
want to do the utopia. Okay. And ideally how we move from dystopia to
00:15:14
utopia. Mhm. So the the the the F in face R is the loss of freedom as a result of
00:15:20
that power dichotomy. Right? So you have you have a massive amount of power as
00:15:26
you can see today in uh one specific army being powered by the US uh funds
00:15:32
and a lot of money righting against peasants really that have no weapons
00:15:37
almost at all. Okay. Some of them uh are militarized but the majority of the mill two million
00:15:43
people are not. Okay. And so there is massive massive power that basically says, you know what, I'm going to oppress as far as I go. Okay. And I'm
00:15:51
going to do whatever I want because the cheerleaders are going to be quiet, right? Or they're going to cheer or even
00:15:58
worse. Huh? And so basically in in that what happens is max maximum power
00:16:05
threatened by a democracy of power leads to a loss of freedom. A loss of freedom for everyone.
00:16:11
Because how does that impact my freedom? Your freedom. Yeah, very soon uh you will if you publish
00:16:18
this episode you're going to start to get questions around should you be talking about this those topics in your
00:16:23
podcast. Okay. Uh you know uh if I uh have been on this episode then probably
00:16:30
next time I land in the US someone will question me say why do you say those things? Which side are you on? Right?
00:16:38
and and and and you know you can easily see that everything I mean I I told you
00:16:43
that before doesn't matter what I try to contribute to the world my bank will cancel my bank account every 6 weeks
00:16:50
simply because of my ethnicity and my origin right every now and then they'll just stop my my bank account and say we
00:16:57
need a document my other colleagues of a different color or a different ethnicity don't get asked
00:17:04
for another document right but but but that's because I come from an ethnicity that is positioned in the world for the
00:17:10
last 30 40 years as the uh enemy. Okay?
00:17:16
And and so when you really really think about it, in a world where everything is becoming digital, in a world where
00:17:21
everything is monitored, in a world where everything is seen, okay, we don't have much freedom anymore. And I'm not
00:17:27
actually debating that or or I don't see a way to fix that because the AI is going to have more
00:17:34
information on us, be better at tracking who we are, and therefore that will
00:17:40
result in certain freedoms being restricted. Is that what you're saying? This is one element of it. Okay. If you push that element further
00:17:47
in in in in a very short time if you've seen agent for example recently manos or chat GPT there will be a time where you
00:17:55
know you'll simply not do things yourself anymore. Okay. You'll simply go to your AI and say hey by the way I'm
00:18:01
going to meet Stephen. Can you please you know book that for me? Great. And and and yeah and it will do absolutely everything. That's great
00:18:08
until the moment where it decides to do things that are not motivated only by your well-being. Right. Why would he do
00:18:13
that? Simply because, you know, maybe if I buy a BA ticket instead of an Emirates
00:18:19
ticket, some agent is going to make more money than other agents and so on, right? Uh and I wouldn't be able to even
00:18:25
catch it up if I hand over completely to an AI. Uh go go a step further. Huh? Think about a world where everyone
00:18:32
almost everyone is on UBI. Okay. What's UBI? Universal basic income. I mean, think
00:18:39
about the economics, the E and face rips. Think about the economics of a world where we're going to start to see
00:18:46
a trillionaire before 2030. I can guarantee you that someone will be a trillionaire. I'm I'm
00:18:53
you know I think there are many trillionaires in the world today or there we just don't know who they are. But there will be a new Elon Musk or
00:19:00
Larry Allison that will become a trillionaire because of AI investments, right? And and that trillionaire will
00:19:07
have so much money to buy everything. There will be robots and AIs doing
00:19:13
everything and humans will have no jobs. Mean do you think that's a there's a real
00:19:19
possibility of job displacement over the next 10 years? And the the rebuttal to that would be that there's going to be
00:19:25
new jobs created in technology. Absolute crap. Really? Of course.
00:19:30
How how can you be so sure? Okay. So again, I am not sure about anything. So So let's just be very very
00:19:38
clear. It would be very arrogant. Okay. To assume that I know you just said it was crap.
00:19:44
My my belief is it is 100% crap. Take a job like software developer.
00:19:50
Yeah. Okay. Uh Emma would love my my new startup is me, Senad, another technical
00:19:55
engineer and a lot of AIS. Okay. That startup would have been 350 developers
00:20:01
in the past. I get that. Um but are you now hiring in other roles because of that or or you
00:20:08
know as is the case with the steam engine? I can't remember the effect but
00:20:14
there's you probably know that when steam when coal became cheaper people were worried that the coal industry
00:20:19
would go out of business but actually what happened is people used more trains so trains now were used for transport
00:20:25
and other things and leisure whereas before they were just used for commu for um cargo. Yeah. So there became more use
00:20:32
cases and the coal industry exploded. So I'm wondering with technology, yeah, software developers are going to maybe
00:20:38
not have as many jobs, but there everything's going to be software. Name me one. Name you one. What job?
00:20:43
Name you that's going to be created. Yeah. One job that cannot be done by an AI.
00:20:49
Yeah. Or a robot. My girlfriend's breath work retreat business where she takes groups of women
00:20:55
around the world. Her company is called Barley Breathwork. And there's going to be a greater demand for connection,
00:21:00
human connection. Correct. Keep going. So there's going to be more people doing community events in real life festivals.
00:21:07
I think we're going to see a huge surge in things like everything that has to do with human connection. Yeah, correct. I'm totally in with that. Okay.
00:21:15
What's the percentage of that versus accountant? It's a much smaller percentage for sure in terms of white collar jobs.
00:21:21
Now, who does she sell to? People with probably what? probably
00:21:26
accountants or you know correct she she sells to people who earn money from their jobs.
00:21:31
Yeah. Okay. So you have two forces happening. One force is there are clear jobs that
00:21:37
will be replaced. Video editor is going to be replaced. Uh excuse me.
00:21:42
I love as as a matter of fact podcaster is going to be replaced.
00:21:48
Thank you for coming on today Mo. It was seeing you again. But but but the truth is a lot so so you
00:21:55
see the best at any job will remain the best software developer the one that really knows architecture knows
00:22:01
technology and so on will stay for a while right and you know one of the funniest things I I interviewed Max
00:22:08
Tedmar and Max was laughing out loud saying CEOs are celebrating that they
00:22:14
can now get rid of people and have productivity gains and cost reductions because AI can do that job. The one
00:22:20
thing they don't think of is AI will replace them too. AGI is going to be
00:22:26
better than at everything than humans at everything including being a CEO. Right?
00:22:32
And you really have to imagine that there will be a time where most incompetent CEOs will be replaced. Most
00:22:40
incompetent even breath work. Okay. Eventually there might actually one of
00:22:46
two things be two things be happening. on one is either uh you know part part
00:22:52
of that job other than the top breath work instructors, okay, are going you
00:22:58
know who are going to gather all of the people that can still afford to pay for a breath work you know class
00:23:06
they're going to be concentrated at the top and a lot of the bottom is not going to be working for one of two reasons.
00:23:13
One is either there is not enough demand because so many people lost their jobs. So when you're on UBI, you cannot tell
00:23:21
the government, hey by the way, pay me a bit more for a breath work class. UBI being universal basic income just
00:23:28
gives you money every month. Correct. And if you really think of freedom and economics, UBI is a very
00:23:34
interesting place to be because unfortunately I as I said there's absolutely nothing wrong with AI.
00:23:40
There's a lot wrong with the value set of humanity at the age of the rise of the machines, right? And the biggest
00:23:45
value set of humanity is capitalism today. And capitalism is all about what? Labor arbitrage.
00:23:51
What's that mean? I I I hire you to do something. I pay you a dollar. I pay it I sell it for
00:23:56
two. Okay. And and most people confuse that because they say, "Oh, but the cost of a
00:24:01
product also includes raw materials and factories and so on and so forth." All of that is built is built by labor,
00:24:07
right? So, so basically labor goes and mines for the material and then the material is sold for a little bit of
00:24:12
margin then that material is turned into a machine. It's sold for a little bit of margin then that machine and so on.
00:24:18
Okay, there's always labor arbitrage in a world where humanity's minds are being
00:24:23
replaced by uh by AIs, virtual AIs,
00:24:28
okay, and humanity's power strength within 3 to 5 years time can be replaced
00:24:36
by a robot, you really have to question how this world looks like. It could be the best
00:24:42
world ever. And that's what I believe the utopia will look like because we were never made to wake up every morning
00:24:48
and just, you know, occupy 20 hours of our day with work, right? We're not made
00:24:54
for that. But we've fit into that uh uh, you know, system so well so far that we
00:25:00
started to believe it's our life's purpose. But we choose it. We willingly choose it. And if you give someone unlimited
00:25:08
money, they still tend to go back to work or find something to occupy their time with.
00:25:13
They find something to occupy their time with, which is usually for so many people is building something. Philanthropy, a
00:25:18
business%. So you build something. So between Senad and I, Emma. Love is not
00:25:24
about making money. It's about finding true love relationships. What is that? Sorry, just for context.
00:25:29
So So you know, it's a business you're building just for the audience context. So, so, so the idea here is I can, it might become a
00:25:37
unicorn and be worth a billion dollars, but neither I nor Senate are interested, okay? We're doing it because we can,
00:25:44
okay? And we're doing it because it can make a massive difference to the world. And you have money, though.
00:25:49
It doesn't take that much money anymore to build anything in the world. This is labor arbitrage.
00:25:54
But to build something exceptional, it's still going to take a little bit more money than building something bad
00:26:00
for the next few years. So whoever has the capital to build something exceptional will end up winning. So so this is a very interesting
00:26:06
understanding of freedom. Okay. This is the reason why we have the AI arms race.
00:26:11
Okay. Is that the one that owns the platform is going to be making all the
00:26:17
money and and keeping all the power. Think think of it this way. When humanity started the best hunter in the
00:26:24
tribe could maybe feed the tribe for three to four more years more days. H
00:26:30
and as a as a reward, he gained the favor of multiple mates in the tribe.
00:26:35
That's it. The top farmer in the tribe could feed the tribe for a season more.
00:26:41
Okay? And as a result, they got estates and you know uh and mansions and so on.
00:26:47
The best industrialist in the in a in a city could actually employ the whole city, could grow the GDP of their entire
00:26:54
country. And as a result, they became millionaires. the 1920s. H the best technologists
00:27:01
now are billionaires. Now what's the difference between them? The tool the
00:27:08
the hunter only rem depended on their skills and the automation the entire
00:27:14
automation he had was a spear. The farmer had way more automation. And the
00:27:20
biggest automation was what? The soil. The soil did most of the work. The factory did most of the work. the the
00:27:27
network did most of the work. And so that inc incredible expansion of wealth
00:27:32
and power and as well the the incredible impact that something brings is entirely
00:27:40
around the tool that automates. So who's going to own the tool? Who's going to own the the the digital soil, the AI
00:27:46
soil? It's the platform owners. And the platforms you're describing are things like OpenAI, Gemini, Grock. These
00:27:54
these are interfaces to the platforms. The platforms are all of the uh of the uh um tokens, all of the compute that is
00:28:03
in the background, all of the uh all of the uh uh methodology, the systems, the algorithms, that's the platform, the AI
00:28:09
itself. You know, Grock is the interface to it. I think this is probably worth explaining in layman's terms to people
00:28:16
that haven't built AI tools yet because I think I think to the listener
00:28:22
they probably think that every AI company they're hearing of right now is building their own AI whereas actually
00:28:29
what's happening is there is really five, six, seven AI companies in the
00:28:35
world and when I built my AI application I basically
00:28:40
pay them for every time I use their AI. So if Steven Bartlett builds an AI at
00:28:46
stephvenai.com, it's not that I've built my own underlying I've trained my own model.
00:28:52
Really what I'm doing is I'm paying Sam Alman's chat GPT. Um every single
00:28:59
time I do a a call, I basically um I do a search or you know I use a token. And
00:29:05
I think that's really important because most people don't understand that unless you've built AI, you think, "Oh, look, you know, there's all these AI companies
00:29:10
popping up. I've got this one for my email. I've got this one for my dating. I've got No, no, no, no, no. They're pretty much I would be I would hazard a
00:29:18
guess that they're probably all OpenAI at this point. No, there are quite a few quite
00:29:23
different characters and quite differently, but there's like five or six. There are five or six when it comes to
00:29:28
language models. Yeah. Right. Uh but interestingly, so yes, I I should say
00:29:33
yes to start and then I should say but there was an interesting twist with Deepseek at the beginning of the year.
00:29:40
So what Deepseek did is is they basically uh nullified the business
00:29:45
model if you want in two ways. one is it was around a week or two after uh you
00:29:51
know Trump stood you know with pride saying Stargate is the biggest investment project in the history and
00:29:57
it's $500 billion to build AI infrastructure and soft bank and Larry Allison and and uh Sam Alman were
00:30:04
sitting and so you know beautiful picture and then DeepSeek R3 comes out
00:30:10
it does the job for a one over 30 of the cost okay and interestingly is entire
00:30:17
open source and available as an edge AI. So, so that's really really interesting
00:30:23
because there could be now in the future as the technology improves the learning
00:30:28
models will be massive but then you can compress them into something you can have on your phone and you can download
00:30:33
deepseek literally offline on a um um you know an off the network
00:30:40
computer and build an AI on it. There's a website that basically tracks the
00:30:45
um sort of cleanest apples to Apple's market share of all the website referrals sent by AI chat bots and
00:30:51
chatbt is currently at 79% roughly about 80%. Perplexi is at 11, Microsoft copilot about five, Google Gemini is at
00:30:58
about two, Claude's about one and Deepseek is about 1%. And really like the the point that I I want to land is
00:31:04
just that when you hear of a new AI app or tool or this one can make videos, it's built on one of them. It's
00:31:10
basically built on one of these really three or four AI platforms that's
00:31:17
controlled really by three or four AI you know billionaire teams and actually
00:31:24
the one of them that gets to what we call AGI first where the AI gets really really advanced
00:31:30
one could say is potentially going to rule the world as it relates to technology. Yes. Uh if if they get enough uh head
00:31:38
start. So, so I actually think that uh
00:31:44
what I what I'm more concerned about now is not AGI, believe it or not. So, AGI
00:31:49
in my mind and I said that back in 2023, right? Uh that we will get to AGI. At
00:31:56
the time I said 2027, now I believe 2026 latest. Okay. The most interesting
00:32:03
development that nobody's talking about is self-evolving AIS. self evolving AIS is
00:32:11
think of it this way if you and I are hiring the top engineer in the world to
00:32:17
develop our AI models and with AGI that top engineer in the world becomes an AI who would you hire
00:32:25
to develop your next generation AI that AI the one that can teach itself
00:32:30
correct so one of my favorite examples is called Alpha Evolve so this is Google's attempt to basically have four
00:32:36
agents working together four AIs working together to look at the at the code of
00:32:42
the AI and say where is the where are the performance issues then you know an
00:32:47
agent would say what's the problem statement what can I uh you know what do I need to fix uh one that actually
00:32:54
develops the solution one that assesses the solution and then they continue to do this and you know I don't remember
00:33:00
the exact figure but I think Google improved like 8% uh on their AI infrastructure because of alpha evol
00:33:07
Right? And when you really really think, don't quote me on the number 8 to 10, 6 to 10, whatever in Google terms, by the
00:33:13
way, that is massive. That's billions and billions of dollars. Now, the the
00:33:18
the the trick here is this. The trick is again, you have to think in game theory
00:33:24
format. Is there any scenario we can think of where if one player uses AI to develop
00:33:33
the next generation AI that the other players will say no no no no no that's too much you know takes us out of
00:33:39
control every other player will copy that model and have their next AI model developed by an AI.
00:33:45
Is this what Sam Alman talks about who's the founder of um chatbt/openai when he talks about a fast takeoff? I
00:33:52
don't know exactly what which what what which you're referring to but we're all talking about a point now that we call
00:33:58
the intelligence explosion. So, so there is a moment in time where you have to imagine that if AI now is better than
00:34:06
97% of all code developers in the world and soon we'll be able to look at its
00:34:12
own code own algorithms by the way they're becoming incredible mathematicians which wasn't the case when we last met if they can develop
00:34:19
improve their own code improve their own algorithms improve their own uh uh you know uh network architecture or whatever
00:34:27
you can imagine that very quickly the force applied to developing the next AI is not going to be a human brain
00:34:33
anymore. It's going to be a much smarter brain and very quickly as humans like basically when when we ran the Google
00:34:40
infrastructure when the machine said we need another server or a proxy server in that place we followed. we we never
00:34:47
really you know wanted to to object or verify because you know the code would
00:34:53
probably know better because there are billions of transactions an hour or a day and so very quickly those
00:35:00
self-evolving AIs will simply say I need 14 more servers here and we'll just you
00:35:06
know the team will just go ahead and do it. I watched a video a couple of days ago where he Sam Alman effectively had
00:35:14
changed his mind because in 2023 which is when we last met he said the aim was
00:35:19
for um a slow takeoff which is sort of gradual deployment and open AI's 203
00:35:26
2023 note says a slower takeoff is easier to make safe and they prefer iterative rollouts society can adapt in
00:35:34
2025 they changed their mind and Sam Alman said He now thinks a fast takeoff is more
00:35:41
possible than he did a couple of years ago on the order of a small number of years rather than a decade. Um, and it
00:35:50
to define what we mean by a fast takeoff, it's defined as when AI goes from roughly human level to far beyond
00:35:57
human very quickly, think months to a few years, faster than governments,
00:36:02
companies, or society can adapt with little warning, big power shifts, and hard to control. A slow takeoff, by
00:36:09
contrast, is where capabilities climb gradually over many years with lots of warning shots. Um, and the red flags for
00:36:16
a fast takeoff is when AI can self-improve, run autonomous research
00:36:22
and development and scale with massive compute compounding gains which will snowball fast. So, and I think from the
00:36:30
video that I watched of Sam Orman recently, who again is the founder of Open Air and HBT, he basically says, and
00:36:36
again I'm paraphrasing here. I will put it on the screen. We have this community knows things so I'll write it on the screen. But he effectively said that
00:36:43
whoever gets to AGI first will have the technology to develop super intelligence
00:36:48
where the AI can can rapidly increase its own intelligence and it will basically leave everyone else behind.
00:36:55
Yes. Uh so that last bit is debatable but but let's just agree that uh so so
00:37:01
in in a live uh you know one of the posts I I shared and got a lot of interest is I refer to the the altman as
00:37:09
a brand not as a human. Okay. So the altman is that uh persona of a
00:37:16
California disruptive technologist that disrespects everyone. Okay. and believes
00:37:22
that disruption is good for humanity and believes that this is good for safety and like everything else like we say war
00:37:28
is for democracy and freedom they say uh developing you know putting AI on the
00:37:33
open internet is good for everyone right it allows us to learn from our mistakes
00:37:38
that was Sam Alman's 2023 spiel and if you recall at the time I was like this
00:37:45
is the most dangerous you know one of the clips that really went viral you so you're you're so clever at finding the
00:37:52
right clips is when I said I didn't I didn't do the clipping mate they're team teams remember the clip where I said we [ __ ] up we always said
00:38:00
don't put them on the open internet until we know what we're putting out in the world I'm going to be saying that
00:38:06
yeah we we we [ __ ] up on putting it on the open internet teaching it to to code
00:38:11
and putting you know agents AI agents prompting other AIs now AI agents
00:38:16
prompting other AIs are leading to self-developing AIS and and The problem is, of course, we, you know, anyone who
00:38:24
has been on the inside of this knew that this was just a clever spiel made by a
00:38:29
PR manager for Sam Alman to sit with his dreamy eyes in front of Congress and
00:38:34
say, "We want you to regulate us." Now, they're saying, "We're unregulable."
00:38:41
Okay? And and when you really understand what's happening here, what's happening is it's so fast
00:38:48
that none of them has the choice to slow down. It's impossible. Neither China
00:38:54
versus America or OpenAI versus Google. the that the the only thing that I may
00:39:00
have may see happening that you you know that that may differ a little bit from
00:39:05
your statement is if one of them gets there first uh then they dominate for
00:39:10
the rest of humanity that is probably true if they get there first uh with
00:39:16
within enough buffer. Okay. But the way you look at Grock coming a week after
00:39:23
open AI, a week after uh you know Gemini, a week after Claude and then Claude comes again and then China
00:39:30
releases something and then Korea releases some something. It is so fast that we may get a few of them at the
00:39:37
same time or a few months apart. Okay, before one of them has enough power to
00:39:43
become dominant. And that is a very interesting scenario. multiple AIs, all super intelligent.
00:39:50
It's funny, you know, I I I got asked yesterday, I was in I was in Belgium on stage. There was, I don't know, maybe
00:39:56
4,000 people in the audience and a kid stood up and he was like, um, you've had a lot of conversations in the last year
00:40:02
about AI. Like, why do you care? And I don't think people realize how,
00:40:07
even though I've had so many conversations on this podcast about AI, you haven't made up your mind. I I I have more questions than ever.
00:40:14
I know. And it's and it doesn't seem that anyone can satiate. Anyone that tells you they can predict
00:40:20
the future is arrogant. Yeah. It is. It's never moved so fast. It's nothing like nothing I've ever
00:40:26
seen. And you know, by the time that we leave this conversation and I go to my computer, there's going to be some
00:40:31
incredible new technology or application of AI that didn't exist when I woke up this morning. That creates probably
00:40:38
another paradigm shift in my brain. Also, you know, I people have different opinions of Elon Musk and they're they're entitled to their own opinion,
00:40:45
but the other day, only a couple of days ago, he did a tweet where he said, "At times, AI existential dread is
00:40:52
overwhelming." And on the same day, he tweeted, "I resisted AI for too long,
00:40:58
living in denial. Now it is game on." And he tagged his AI companies. I don't
00:41:05
know what to make of I don't know what to make of those tweets. I don't know. And you know, I
00:41:12
I try really hard to figure out if someone like Sam Wman has the best
00:41:19
interests of society at heart. No. Or if these people are just like
00:41:25
I'm saying that publicly. No. As a matter of fact, so I know Sundur
00:41:31
Pachai. I work CEO of Alphabet, Google's parent company. an amazing human being
00:41:37
on in all honesty. I know Dennis Hassab is amazing human being. Okay. Uh you
00:41:42
know these are are ethical incredible uh humans at heart. They have no choice.
00:41:50
Uh Sund by law is uh demanded to take care of his his
00:41:57
shareholder value. That's that is his job. But Sund you said you know him. You used
00:42:02
to work at Google. Yeah. He's not going to do anything that he thinks is going to harm humanity. But if if he does not continue to
00:42:09
advance AI, that by definition uh uh uh contradicts his responsibility as the
00:42:15
CEO of a publicly traded company, he is liable by law to continue to advance the
00:42:20
agenda. There's absolutely no doubt about it. Now, so but but he's a good person at heart. Deis is a good person
00:42:26
at heart. So they're trying so hard to make it safe. Okay? As much as they can.
00:42:32
Reality however is the the the disruptor the altman as a brand doesn't care that
00:42:39
much. How do you know that? In reality the disruptor is someone that comes in with the objective of I don't
00:42:46
like the status quo. I have a different approach. And that different approach if you just look at the story was we are a
00:42:54
nonforprofit that is funded mostly by Elon Musk money. It's not entirely by
00:42:59
Elon Musk money. So context for people that might not understand Open AI. The reason I always give context is funnily
00:43:05
enough I I think I told you this last time. I went to a prison where they play the D of CEO. No way. So they play the D of CO and I think
00:43:11
it's 50 prisons in the UK to young offenders and no violence there. Well, I don't know. I can't I can't I
00:43:16
can't tell you whether violence has gone up or down. But I was in the cell with one of the prisoners, a young a young black guy, and I was in his cell for for
00:43:23
a little while. I was reading through his business plan, etc. And I said, "You know what? You need to listen to this conversation that I did with Mo Gordat."
00:43:29
So I he has a little screen in his cell. So I pulled it up, you know, our first conversation. I said, "You should listen to that one." And he said to me, he
00:43:35
said, "I can't listen to that one cuz you guys use big words." So ever since that day, which was about
00:43:41
I noticed that about days four years ago, sorry. I've always whenever I hear a big word,
00:43:46
I think about this kid. Yeah. And I say like give context. So even with the you're about to explain what
00:43:52
Open AI is, I know he won't know what Open AI's origin story was. That's why I'm I think that's a wonderful practice in
00:43:58
general. By the way, even, you know, being a non native English speaker, you'll be amazed how often a word is
00:44:04
said to me and I I'm like, yeah, don't know what that means. So, like I've actually never said this publicly before, but I now see it as my
00:44:11
responsibility to be to to keep the draw the drawbridge
00:44:17
to accessibility of these conversations down for him. So, whenever I whenever
00:44:23
there's a word that at some point in my life I didn't know what it meant, I will go back. I was like, what does that mean? I think that I've noticed
00:44:29
that in the you know more and more in your podcast and I really appreciate and we also show it on the screen sometimes.
00:44:35
I I think that's wonderful. I mean the the the origin story of open AI is as the name suggests it's open source. It's
00:44:42
for the public good. It was an in you know intended in Elon Musk's words to
00:44:48
save the world from the dangers of AI right so they were doing research on that and then you know there was the
00:44:55
disagreement between Sam Alman and and Elon somehow Elon ends up being out of
00:45:01
uh of uh of open AI. I think there was a moment in time where he tried to take it back and you know the board rejected it
00:45:07
or some something like that. most of the uh top um safety engineers, the top
00:45:13
technical teams in open AI left in 2023 2024 openly saying we're not concerned
00:45:20
with safety anymore. It moves from being a nonforprofit to being one of the most valued companies in the world. There are
00:45:27
billions of dollars at stake, right? And if you if you tell me that Sam Altman is
00:45:33
out there trying to help humanity, let's let's suggest to him and say, "Hey, do
00:45:40
you want to do that for free? We'll pay you a very good salary, but you don't have stocks in this. Saving humanity
00:45:46
doesn't come at the billion dollar valuation or of course now tens of billions or hundreds of billions." And
00:45:52
and and see truly that is when you know that someone is doing it for the good of
00:45:58
humanity. Now the the capitalist system we've built is not built for the good of humanity. It's built for the good of the
00:46:04
capitalist. Well, he might say that releasing the model publicly, open sourcing it is too
00:46:12
risky because then bad actors around the world would have access to that technology. So
00:46:19
he might say that closing open AI in terms of not making it publicly viewable
00:46:26
is the right thing to do for safety. We go back to gullible cheer leaders, right? One of the interesting tricks is
00:46:33
of lying in our world is everyone will say what helps their agenda. Follow the
00:46:40
money. Okay, you follow the money and you find that you know at a point in time Samman himself was saying it's open
00:46:48
AI. Okay, my benefit at the time is to give it to the world so that the world
00:46:53
looks at it. They know the code if there is if there are any bugs and so on. True statement. Also a true statement is if I
00:47:00
put it out there in the world, a criminal might take that model and build something that's against humanity as a
00:47:06
result. Also true statement. Capitalists will choose which one of the truths to say, right? Based on which part of the
00:47:13
agenda, which part of their life today they want to serve, right? Someone will
00:47:19
say, uh, you know, do I do you want me to be controversial? Let's not go there. But if we go back to
00:47:26
war, I'll give you 400 slogans. 400 slogans that we all hear that change
00:47:33
based on the day and the army and the location and the they're all slogans.
00:47:38
None of them is true. You want to know the truth. You follow the money, not what the person is saying, but ask
00:47:45
yourself why is the person saying that? What's in it for the person speaking?
00:47:50
And what do you think's in it for Chachi Samman? hundreds of billions of dollars
00:47:55
of of of valuation. And do you think it's that power? The ego of being the person that
00:48:01
invented AGI, the position of power that this gives you, the meetings with all of
00:48:07
the heads of states, the admiration that gets it, it is intoxicating
00:48:12
100% 100%. Okay. And and the real question, this is
00:48:18
a question I ask everyone. Did you see you didn't you're every time I ask you you say you didn't. Did you see the
00:48:24
movie Elysium? No. You'd be surprised how little movie watching I do. You'd be shocked. There are some movies that are very
00:48:30
interesting. I use them to to create an emotional attachment to a story that you haven't seen yet because you may have
00:48:36
seen it in a movie. Okay. Elissium is a is a society where the elites are living on the moon. Okay. They don't need
00:48:43
peasants to do the work anymore and everyone else is living down here. Okay.
00:48:49
You have to imagine that if again game theory you have to im you know picture
00:48:55
something to infinity to its extreme and see where it goes and the extreme of a world where all manufactured is done
00:49:02
manufacturing is done by machines where all decisions are made by machines and those machines are owned by a few
00:49:10
is not an economy similar to the to today to the to today's economy
00:49:15
that today's economy is an economy of consumerism and and product and
00:49:22
production. You know, it's the it's the in in alive I call it the invention of
00:49:27
more. The invention of more is that post World War II as the factories were
00:49:32
rolling out things and prosperity was happening everywhere in America. There was a time where every family had enough
00:49:39
of everything. But for the capitalist to continue to be profitable, they needed to convince you
00:49:45
that what you had was not enough. either by making it obsolete like fashion or like you know a new shape of a car or
00:49:51
whatever or by convincing you that there are more things in life that you need so that you become complete without those
00:49:58
things you don't and and that invention of more gets us to where we are today an
00:50:04
economy that's based on production consumed and if you look at the US
00:50:09
economy today 62% of the US economy GDP is consumption it's not production okay
00:50:16
Now, this requires that the consumers have enough purchasing power to to buy
00:50:22
what is produced. And I believe that this will be an economy that will take us hopefully in the next 10, 15, 20
00:50:31
years and forever. But that's not guaranteed. Why? Because on one side if UBI replaces purchasing power. So if
00:50:38
people have to get an income from the government which is basically taxes
00:50:44
collected from those using AI and robots to to make things the then the the mindset of capitalism
00:50:51
labor arbitrage means those people are not producing anything and they're costing me money. Why don't we pay them
00:50:59
less and less and maybe even not pay them at all? And that becomes illissium where you basically say, you know, we
00:51:06
sit somewhere protected from everyone. We have the machines do all of our work
00:51:11
and those need to worry about themselves. We're not going to pay them UBI anymore, right? And and you have to
00:51:18
imagine this idea of UBI assumes this very democratic caring society.
00:51:27
UBI in itself is communism. Think of the ideology between at least
00:51:34
socialism. The ideology of giving everyone what they need. That's not the capitalist
00:51:40
democratic society that the west advocates. So those transitions are massive in magnitude.
00:51:47
And for those transitions to happen, I believe the right thing to do when the
00:51:52
cost of producing everything is almost zero because of AI and robots.
00:51:58
because the cost of harvesting energy should actually tend to zero once we get more intelligent to harvest the energy
00:52:04
out of thin air. Then a possible scenario and and I believe a scenario
00:52:09
that AI will eventually do in the utopia is yeah anyone can get anything they
00:52:15
want. Don't over consume. We're not going to abuse the the planet resources but it costs nothing. So like the old
00:52:22
days where we were hunter gatherers, you would, you know, forge for some berries and you'll find them ready in in nature.
00:52:30
Okay, we can in 10 years time, 12 years time build a society where you can forge
00:52:36
for an iPhone in nature. It will be made out of thin air. Nanopysics will allow you to do that. Okay? But the challenge,
00:52:44
believe it or not, is not tech. The challenge is a mindset. Because the elite, why would they give you that for
00:52:51
free? Okay. And the system would morph into, no, no, hold on. We will make more
00:52:57
money. We will be bigger capitalists. We will feed our ego and hunger for power more and more. And for them, give them
00:53:05
UBI and then 3 weeks later give them less UBI. Aren't there going to be lots of new
00:53:10
jobs created though? Because when we think about the other revolutions over time, whether it was the industrial
00:53:16
revolution or other sort of big technological revolutions, in the moment we forecasted that
00:53:22
everyone was going to lose their jobs, but we couldn't see all the new jobs that were being created because the the the machines
00:53:30
replaced the human strength at a point in time. And very few places in the west today will have a worker carry things on
00:53:39
their back and carry it upstairs. The machine does that work. Correct. Yeah. Uh similarly
00:53:47
AI is going to replace the brain of a human. And when the west in its
00:53:52
interesting uh virtual colonies that I call it uh basically outsourced all
00:53:58
labor to the to the developing nations. What the West publicly said at the time
00:54:03
is we're we're going to be a services economy. We we're we're not interested in making things and stitching things
00:54:10
and so on. Let the Indians and Chinese and you know Bengali and Vietnamese do
00:54:15
that. We're going to do more refined jobs. Knowledge workers. We're going to call them. Knowledge workers are people
00:54:21
who work with information and click on a keyboard and move a mouse and you know sit in meetings and all we produce in
00:54:27
the western societies is what words right or designs maybe sometimes but
00:54:33
everything we produce can be produced by an AI.
00:54:40
So if I give you an AI tomorrow h where I give you a piece of land, I give the
00:54:46
AI a piece of land and I say here are the parameters of my land. Here is its location on Google maps. Design an
00:54:53
architecturally sound villa for me. I care about a lot of light and I need three bedrooms. I want my bathrooms to
00:54:59
be in white marble, whatever. And the AI produces it like that. How often will you go to an to an architect and say
00:55:09
right so what will the architect do the best of the best of the architects will
00:55:14
either use AI to produce that or you will consult with them and say hey you know I've seen this and they'll say it's
00:55:21
really pretty but it wouldn't feel right for the person that you are yeah those jobs will remain but how many of them
00:55:27
will remain how how often do you think uh how many
00:55:32
more years. Do you think I will be able to create a book that is smarter than AI?
00:55:40
Not many. I will still be able to connect to a human. You're not going to hug an AI when you meet them like you
00:55:46
hug me, right? But that's not enough of a job.
00:55:52
So why do I say that? Remember I asked you at the beginning of the podcast to remind me of solutions. Why do I say
00:55:59
that? Because there are ideological shifts and and concrete actions that
00:56:05
need to be taken by governments today rather than waiting until COVID is already everywhere and then locking
00:56:11
everyone down. Governments could have reacted before the first patient or at least at patient zero or at least at
00:56:18
patient 50. They didn't. H what I'm trying to say is there is no doubt that
00:56:24
lots of jobs will be lost. There's no doubt that there will be sectors of society where 10 20 30 40 50% of all
00:56:34
developers, all software uh you know all graphic designers, all um uh uh u online
00:56:40
marketers, all all all assistances are going to be out of a job. So are we
00:56:46
prepared as a society to do that? Can we tell our governments there is an ideological shift? This is very close to
00:56:52
social socialism and and communism. Okay. And are we ready from a budget
00:56:58
point of view instead of spending a a trillion dollars a year on on arms and
00:57:03
and explosives and you know autonomous weapons that will oppress people because
00:57:08
we can't feed them? Can we please shift that? I did those numbers. Huh. Uh again
00:57:14
I I go back to military spending because it's all around us. 2.71 trillion. 2.4
00:57:21
to2.7 is the estimate of 2024. how much money we're spending on military on Yeah. on military equipment on things
00:57:27
that we're going to explode into smoke and death. Extreme poverty worldwide.
00:57:33
Extreme poverty is people that are below the poverty line. Extreme poverty everywhere in the world could end for 10
00:57:40
to 12% of that budget. So if we replace our military spending 10% of that to go
00:57:46
to people who are in extreme poverty, nobody will be poor in the world. Okay. You can end uh world hunger for less
00:57:54
than 4%. Nobody would be hungry in the world. You know, if you take uh again 10
00:57:59
to 12% universal healthcare, every human being on the planet would have free
00:58:05
healthcare for 10 to 12% on what we're spending on war. Now, why why do I say
00:58:11
this when we're talking about AI? Because that's a simple decision. If we stop fighting
00:58:18
because money itself does not have the same meaning anymore because the economics of money is going to change
00:58:24
because the entire meaning of capitalism is ending because there is no more need for labor arbitrage because AI is doing
00:58:30
everything just with the $2.4 trillion we save in
00:58:36
explosives every year in arms and weapons just for that universal
00:58:42
healthcare and extreme poverty. You could actually one of the calculations is you could end climate or combat
00:58:48
climate climate change meaningfully for 100% of the military budget.
00:58:53
But I I'm not even sure it's really about the money. I think money is a measurement stick of power. Right.
00:58:59
Exactly. It's printed on demand. So even in a world where we have super intelligence and money is no longer a
00:59:04
problem. Correct. I still think power is going to be
00:59:10
insatiable for so many people. So there will still be war because you know there will be in my view
00:59:17
the strongest I want the strongest AI. I don't want my and I don't and I don't want you know what Harry Henry Kissinger called them
00:59:23
the eaters. The eaters. Yeah. Brutal as that sounds.
00:59:29
What is that? The people at the bottom of the socioeconomic that don't produce but consume.
00:59:35
So if you had a Henry Kissinger at the at the helm and we have so many of them,
00:59:40
what would they think like why why prominent military figure in the US
00:59:46
history? Uh you know what why would we feed 350 million Americans America will
00:59:53
think but more interestingly why do we even care about Bangladesh anymore if we
00:59:59
can't make our textiles there or we don't want to make our textile there. Do you you know I imagine throughout human
01:00:05
history if we had podcasts conversations would would have been warning of a
01:00:11
dystopia around the corner. You know when they heard of technology and the internet they would have said oh we're finished and when the the tractor came
01:00:18
along they would have said oh god we're finished because we're not going to be able to farm anymore. So is this not just another one of those moments where
01:00:24
we couldn't see around the corner so we we forecasted unfortunate things. You
01:00:29
could be. I I am I'm begging that I'm wrong. Okay. I'm just asking if there
01:00:35
are scenarios that you think that can provide that. You know, uh uh Mustafa Sulleman in in uh you hosted him here. I
01:00:42
did. Yeah. He was in the coming wave. Yeah. And he speaks about uh about pessimism
01:00:48
aversion. Okay. that all of us people who are supposed to be in technology and
01:00:54
business and so on, we're always supposed to, you know, stand on stage and say the future's going to be amazing. You know, this technology I'm
01:01:01
building is going to make everything better. One of my posts in life was called the broken promises. How often
01:01:08
did that happen? Okay. How often did social media connect us? And how many and how often did it
01:01:14
make us more lonely? How how often did mobile phones make us work less? That
01:01:19
was the promise. That was the promise. The promise. The early ads of Nokia were
01:01:25
people at parties. Is that your experience of mobile phones? And and I
01:01:31
think the whole idea is we should hope there will be other roles for humanity. By the way, those roles would resemble
01:01:38
the times where we were hunter gatherers, just a lot more technology and a lot more safety.
01:01:44
Okay. So, this is this sounds good. Yeah, this is exciting. So, I'm gonna I'm gonna get to go outside more, be with my
01:01:50
friends more, 100%. Fantastic. And do absolutely nothing. Well, that doesn't sound fantastic.
01:01:55
No, it does. Do be forced to do absolutely nothing. For some people, it's amazing. For you and I, we're going
01:02:00
to find the little carpentry project and just do something. Speak for yourself. I'm still People are still going to tune in. Okay.
01:02:07
Correct. Yeah. But what? And people are going to to tune in. Do you think they will? I'm not I'm not I'm not convinced they will. And for for
01:02:14
as long will you guys tune in? Are you guys still going to tune in? I can let them answer. I believe for as
01:02:20
long as you make their life enriched, but can an AI do that better
01:02:27
without the human connection? Comment below. Are you going to listen to an AI or the Davosio? Let me know in the comment section below.
01:02:33
Remember, as incredibly intelligent as you are, Steve, uh there will be a
01:02:39
moment in time where you're going to sound really dumb compared to an AI. and and and I will sound completely dumb.
01:02:45
Yeah. Yeah. The the depth the depth of analysis and and gold nuggets. I mean, can you
01:02:52
imagine two super intelligences deciding to get together and explain um string
01:02:59
theory to us? They'll do better than any physic physicist in the world because they
01:03:06
possess the physics knowledge and they also pro possess social and language knowledge that most deep physicists
01:03:12
don't. I think B2B marketeteers keep making this mistake. They're chasing
01:03:18
volume instead of quality. And when you try to be seen by more people instead of the right people, all you're doing is
01:03:24
making noise. But that noise rarely shifts the needle and it's often quite expensive. And I know as there was the
01:03:30
time in my career where I kept making this mistake that many of you will be making it too. Eventually I started
01:03:35
posting ads on our show sponsors platform LinkedIn. And that's when things started to change. I put that
01:03:40
change down to a few critical things. One of them being that LinkedIn was then and still is today the platform where
01:03:46
decision makers go to not only to think and learn but also to buy. And when you market your business there, you're
01:03:52
putting it right in front of people who actually have the power to say yes. and you can target them by job title,
01:03:58
industry, and company size. It's simply a sharper way to spend your marketing budget. And if you haven't tried it, how
01:04:04
about this? Give LinkedIn ads a try, and I'm going to give you a $100 ad credit to get you started. If you visit
01:04:10
linkedin.com/diary, you can claim that right now. That's linkedin.com/diary.
01:04:17
I've I've really gone back and forward on this idea that even in podcasting that all the podcasts will be AI
01:04:23
podcasts or I've gone back and forward on it and and where I landed at the end of the day was that there'll still be a
01:04:29
category of media where you do want lived experience on something
01:04:34
100%. For example, like you want to know how the person that you follow and admire dealt with their divorce.
01:04:40
Yeah. Or or how they're struggling with AI, for example. Yeah. Exactly. But I but I think things like news, there are there
01:04:47
are certain situations where just like straight news and straight facts and maybe a walk through history may be
01:04:54
eroded away by AIS. But even in those scenarios, you there's something about
01:04:59
personality. And again, I I hesitate here because I question myself. I'm not in the camp of people that are romantic,
01:05:04
by the way. I'm like I'm trying to be as as orientated towards whatever is true,
01:05:10
even if it's against my interests. And I hope people understand that about me. like um cuz even in my companies we
01:05:16
experiment with like disrupting me with AI and some people will be aware of those experiments because there will be a mix of all there
01:05:22
you can't imagine that the world will be completely just AI and completely just podcasters you know you'll see a mix of
01:05:29
of both you'll see things that they do better things that we do better the the the message I'm trying to say is
01:05:35
we need to prep for that we need to be ready for that we need to be ready by you know talking to our
01:05:40
governments and saying hey it looks like I'm a a parallegal and it looks like all
01:05:46
parallegals are going to be, you know, financial researchers or analysts or
01:05:52
graphic designers or, you know, call center agents. It looks like half of those jobs are being replaced already.
01:05:58
You know who Jeffrey Hinton is? Oh, Jeffrey. I I had him on the documentary as well. I love Jeffrey.
01:06:04
Jeffrey Hinton told me trained to be a plumber. Really? Yeah. 100% for a while.
01:06:10
And I I thought he was joking. 100%. So I asked him again and he he looked me dead in the eye and told me that I I
01:06:17
should train to be a plumber. 100%. So so so uh it's funny uh machines
01:06:23
replaced labor but we still had blue collar. Then uh you know the refined
01:06:28
jobs became white collar information workers. What's the refined jobs? You know you don't have to really carry
01:06:33
heavy stuff or deal with physical work. You know you sit in an in an office and sit in meetings all day and blabber, you
01:06:40
know, useless [ __ ] then that's your job. Okay? And those jobs, funny enough,
01:06:45
in the reverse of that, because robotics are not ready yet. Okay. And I believe
01:06:51
they're not ready because of a stubbornness on the on the robotics community around making them humanoids.
01:06:57
Mhm. Okay. Because it takes so much to perfect a human like action at proper
01:07:03
speed. You could, you know, have many more robots that don't look like a human just like a self-driving car in
01:07:09
California. Okay, that that does already replace drivers and and you know but but
01:07:15
they're delayed. So the robotic the the replacement of physical manual labor is
01:07:21
going to take four to five years before it's possible at you know at at the
01:07:26
quality of the AI replacing mental labor now and when that happens it's going to
01:07:33
take a long cycle to manufacture enough robots so that they replace all of those jobs. that cycle will take longer. Blue
01:07:40
collar will stay longer. So, I should move into blue collar and shut down my office. I think you're you're not the problem.
01:07:47
Okay, good. Let's put put it this way. There are many people that we should care about
01:07:52
that are a simple travel agent or an assistant that will see if not replacement a
01:07:59
reduction in the number of pings they're getting. Simple as that.
01:08:09
And someone in, you know, ministries of labor around the world needs to sit down
01:08:15
and say, "What are we going to do about that? What if all taxi drivers and Uber
01:08:20
drivers in uh in California get replaced by self-driving cars? Should we start
01:08:27
thinking about that now, noticing that that trajectory makes it look like a
01:08:33
possibility?" I'm going to go back to this argument which is what a lot of people will be shouting. Yes, but there
01:08:39
will be new jobs or and I as I said other than human connection jobs, name me one.
01:08:45
So I I've got three assistants, right? Sophie, Liam B. And okay, in the near
01:08:53
term there might be, you know, with AI agents, I might not need them to help me book flights anymore. or I might not
01:08:59
need them to help do scheduling anymore. Or even I've been messing around with this new AI tool that my friend built
01:09:05
and you basically when me and you trying to schedule something like this today, I just copy the AI in and it looks at your calendar looks at mine and schedules it
01:09:11
for for us. So there might not be scheduling needs, but my dog is sick at the moment. And as I left this morning,
01:09:17
I was like, damn, he's like really sick and I've taken him to the vet over and over again. I really need someone to look after him and figure out what's
01:09:23
wrong with him. So those kinds of responsibilities of like care. I don't
01:09:28
disagree at all. Again, all and and I I won't I'm not going to be I
01:09:33
don't know how to say this in a nice way, but my assistants will still have their jobs, but I I as a CEO will be
01:09:39
asking them to do a different type of work. Correct. So, so, so this is the calculation everyone needs to be aware
01:09:45
of that a lot of their current responsibility, whoever you are, if you're a parallegal, if you're whatever,
01:09:52
will be handed over. So, so let me explain it even more accurately. There
01:09:57
will be two stages of our interactions with the machines. One is what I call the era of augmented intelligence. So,
01:10:06
it's human intelligence augmented with AI doing the job. And then the following one is what I call the era of machine
01:10:12
mastery. The job is done completely by an AI without a human in the loop. Okay.
01:10:17
So in the era of augmented intelligence, your assistances will augment themselves
01:10:23
with an AI to either be more productive. Yeah. Okay. Or interestingly to reduce the
01:10:31
number of tasks that they need to do. Correct. Now the more the number of
01:10:37
tasks get reduced, the more they'll have the bandwidth and ability to do tasks like take care of your dog, right? or
01:10:44
tasks that you know basically is about meeting your guests or whatever human connection. Yeah.
01:10:50
Life connection but do you think you need three for that
01:10:56
or maybe now that some tasks have been you know outsourced to AI will you need two? You can easily calculate that from
01:11:03
call center agents. So from call center agents they're not firing everyone but
01:11:09
they're taking the first part of the funnel and giving it to an AI. So instead of having 2,000 agents in a in a
01:11:16
call center, they can now do the job with 1,800. I'm just making that number up. H society needs to think about the
01:11:24
200. And you're telling me that they won't move into other roles somewhere else. I am telling you I don't know what those
01:11:30
roles are. Well, I think we should all be musicians. We should all be authors. We should all be artists. We should all be
01:11:36
entertainers. We should all be comedians. We should all these are roles that will remain.
01:11:41
We should all be plumbers for the next 5 to 10 years. Fantastic. Okay. But even
01:11:46
that requires society to morph and societyy's not talking about it.
01:11:53
Okay. I had this wonderful interview with friends of mine, Peter Dendez and and some of our friends and and they
01:11:59
were saying, "Oh, you know, the American people are resilient. They're going to be entrepreneurs." I was like,
01:12:05
seriously, you're expecting a truck driver that will be replaced by an autonomous truck to become an
01:12:11
entrepreneur? Like, please put yourself in the shoes of real people,
01:12:18
right? You expect a single mother who has three jobs
01:12:24
to become an entrepreneur. And I'm not saying this is a dystopia.
01:12:30
It's a dystopia if humanity manages it badly. Why? Because this could be the utopia itself where that single mother
01:12:37
does not need three jobs. Okay? If we of if of our society was
01:12:43
just enough, that single mother should have never needed three jobs,
01:12:48
right? But the problem is our capitalist mindset is labor arbitrage. Is that I
01:12:53
don't care what she goes through. You know, if if you're if you're generous in your assumption, you'll say
01:13:00
because, you know, of what I've been given, I've been blessed. or if you're mean in your assumption, it's going to
01:13:05
be because she's an eater. I'm a a successful businessman. The world is
01:13:10
supposed to be fair. I work hard. I make money. We don't care about them. Are we asking of ourselves here
01:13:17
something that is not inherent in the human condition? What I mean by that is
01:13:22
the reason why me and you are in this my office here. We're on the fourth or
01:13:27
third floor of my office in central London. big office, 25,000 square feet with lights and internet connections and
01:13:34
Wi-Fi and modems and AI teams downstairs. The reason that all of this exists is because something inherent in
01:13:41
my ancestors meant that they built and accomplished and grew and that was like
01:13:47
inherent in their DNA. There was something in their DNA that said we will expand and conquer and accomplish. So
01:13:54
that's they've passed that to us because we're their offspring and that's why we find ourselves in these skyscrapers. There is truth to that story. It's not
01:14:01
your ancestors, right? What is it? It's the media brainwashing you
01:14:06
really 100%. But if if you look back before times of media Mhm. the reason why homo sapiens were so
01:14:12
successful was because they were able to dominate other tribes through banding together and communication. They conquered all these
01:14:18
other these other um whatever came before homo sapiens.
01:14:24
Yeah. So, so the the reason humans were successful in my view is because they could form a tribe to start. It's not
01:14:30
because of our intelligence. I always joke and say Einstein would be eaten in the jungle in 2 minutes.
01:14:36
Right? You know, the reason why we succeeded is because Einstein could partner with a a big guy that protected
01:14:43
him while he was working on relativity in the jungle. Right? Now the the the
01:14:49
further than that. So so you have to assume that life is a very funny game because it provides
01:14:56
and then it it deprivives and then it provides and then it deprivives. And for
01:15:02
some of us in that stage of deprivation we try to say okay let's take the other
01:15:07
guys you know let's just go to the other tribe take what they have or for some of
01:15:13
us unfortunately we tend to believe okay you know what I'm powerful uh f the rest
01:15:18
of you I'm just going to be the boss now it's interesting that you
01:15:24
you know position this as the condition of humanity if you really look at the majority of humans. What do the majority
01:15:31
of humans want? Be honest. They want to hug their kids. They want a good meal. Want good sex.
01:15:38
They want love. They want, you know, to for most humans, don't measure on you
01:15:44
and I. Okay? Don't measure by this foolish person that's dedicated the rest
01:15:49
of his life to to try and warn the world around AI or, you know, solve uh love
01:15:55
and relationships. That's that's crazy. That's I and I will tell you openly and
01:16:01
you met Hannah, my wonderful wife. It's the biggest title of this year for me is
01:16:07
which of that am I actually responsible for? Which of that should I do without
01:16:12
the sense of responsibility? Which of that should I do because I can? Which of I ignore completely? But the reality is
01:16:19
most humans, they just want to hug their loved ones. Okay? And if we could give them that
01:16:26
without the uh you know the the need to work 20 you know 60 hours a week they
01:16:33
would take that for sure. Okay. And you and I will think ah but life will be
01:16:39
very boring. To them life will be completely fulfilling. Go to Latin America.
01:16:44
Go to Latin America and see the people that go work enough to earn enough to eat today and go dance for the whole
01:16:50
night. Go to Africa. where people are sitting literally on you know sidewalks in the street and and
01:16:59
you know completely full of laughter and joy. We we were lied to the the gullible
01:17:06
majority the cheerleaders. We were lied to to to believe that we need to fit as
01:17:12
another gear in that system. But if that system didn't exist nobody none of us
01:17:17
will go wake up in the morning and go like oh I want to create it. Totally not. I mean,
01:17:25
you've touched on it many times today. We don't need, you know, most people
01:17:30
that build those things don't need the money. So, why do they do it though? Because
01:17:36
homo sapiens were incredible competitors. They outco competed other
01:17:41
human species effectively. So, I'm what I'm saying is is is that competition not inherent in our in our wiring? and and
01:17:48
therefore are we are we is it wishful thinking to think that we could potentially pause and say we we okay
01:17:56
this is it we have enough now and we're going to focus on just enjoying in my work I call
01:18:03
that the map mad spectrum okay mut mutually assured prosperity versus
01:18:10
mutually assured destruction destruction okay and you really have to start
01:18:16
thinking about about this because in my mind what we have is the potential for everyone. I mean you and I today have a
01:18:24
better life than the queen of England 100 years ago. Correct? Everybody knows
01:18:30
that. Uh and yet that quality of life is not good enough.
01:18:36
The truth is like just like you walk into a an electronic shop and there are
01:18:41
60 TVs and you look at them and you go like this one is better than that one. Right? But in reality, if you take any
01:18:47
of them home, it's superior quality to anything that you'll ever need. More than anything you you'll ever need. That
01:18:54
that's the truth of our life today. The truth of our life today is that there isn't much more missing.
01:18:59
No. Okay. And and when when you know Californians tell us, "Oh, but AI is
01:19:04
going to increase productivity and solve this." And nobody asked you for that. Honestly, I never elected you to decide
01:19:10
on my behalf that, you know, getting a machine to answer me on a call center is better for me. I really didn't. Okay?
01:19:18
And and because those unelected individuals are making all the decisions, they're selling those
01:19:24
decisions to us through what media. Okay? All lies from A to Z. None of it
01:19:31
is what you need. And and interestingly, you know me, I I this year I failed. Unfortunately, I
01:19:38
won't be able to do it. But I normally do a 40 days silent retreat in nature. Okay? And you know what? Even as I go to
01:19:47
those nature places, I'm so well trained that unless I have a a waitro nearby,
01:19:54
I'm not able to like I I'm I'm in nature, but I need to be able to drive 20 minutes to get my rice cakes. Like
01:20:01
what? What? who was taught me that this
01:20:06
is the way to live. All of the media around me, all of the of the of the
01:20:11
messages that I get all the time, try to sit back and say, "What if life had
01:20:17
everything? What if I had everything I needed? I could read. I could uh, you know, do my
01:20:26
handcrafts and hobbies. I could, you know, fix my, you know, restore classic
01:20:31
cars. Not because I need the money, but because it's just a beautiful hobby. I could, you know, uh, build AIS to help
01:20:38
people with their long-term committed relationships, but really price it for free. What if
01:20:45
What if would you still insist on making money?
01:20:50
I think no. I think a few of us will still and they will still crush the rest of us and hopefully soon the AI will
01:20:57
crush them. Right? That is the problem with your world today. I will tell you hands down
01:21:04
the problem with with our world today is the A in face rips.
01:21:10
It's the A in face RIP. It's it's accountability. The problem with our world today, as I said, the top is lying
01:21:17
all the time. The bottom is gullible cheerleaders and there is no accountability. You cannot hold anyone
01:21:25
in our world accountable today. Okay? You cannot hold someone that develops an
01:21:30
AI that has the power to completely flip our world upside down. You cannot hold
01:21:36
them accountable and say why did you do this? You cannot hold them accountable and tell them to stop doing this. You
01:21:41
look at the world the wars around the world. Million hundreds of thousands of people are dying. Okay. And you know and
01:21:49
international court of justice will say oh this is war crimes. You can't hold anyone accountable. Okay. You have 51%
01:21:56
of the US today is saying stop that 51% change their their their lawy their
01:22:04
view that that their money shouldn't be spent on wars abroad. Okay. You can't
01:22:09
hold anyone accountable. Trump can do whatever he wants. He starts tariffs which is against the the constitution of
01:22:15
the US without consulting with the Congress. You can't hold him accountable. They say they're not going to show the Epstein files. You can't
01:22:22
hold them accountable. It's quite interesting in in Arabic we have that proverb that says the highest of your
01:22:27
horses you can go and ride. I'm not going to change my mind. Okay. And that's truly what does that mean?
01:22:33
So basically people in the in the old Arabia they would ride the horse to you
01:22:39
know to exert their power if you want. So go ride your highest horse. You're not going to change my mind.
01:22:44
Oh okay. Right. And and the truth is that's I think that's what our politicians today
01:22:50
have discovered. What our oligarchs have discovered what our uh
01:22:55
tech oligarchs have discovered is that I don't even need to worry about the public opinion anymore. Okay, at the
01:23:02
beginning I would have to say ah this is for democracy and freedom and I have the right to defend myself and you know all
01:23:08
of that crap and then eventually when the world wakes up and says no no hold on hold on you're going too far they go
01:23:14
like yeah go ride your highest horse I don't care you can't change me there is
01:23:19
no constitution there is no ability for any any citizen to do anything is it possible to have a society where
01:23:28
like the one you describe where there isn't hierarchies because it
01:23:33
appears to me that humans assemble hierarchies very very quickly
01:23:39
very naturally and the minute you have a hierarchy you have many of the problems that you've described where there's a
01:23:45
top and a bottom and the top have a lot of power and the bottom so so the mathematics mathematically is
01:23:50
actually quite interesting what I call the uh the baseline relevance so so
01:23:56
think of it this way say the average human is an IQ of 100. Yeah.
01:24:01
Okay. I tend to believe that when I use my AIS today, I borrow around 50 to 80 IQ points. I
01:24:10
say that because I've worked with people that had 50 to 80 IQ points more than me. And I now can see that I can sort of
01:24:17
stand my my place. 50 50 IQ points, by the way, is enormous
01:24:24
because IQ is exponential. So the the last 50 are bigger than my entire IQ,
01:24:31
right? If I borrow 50 IQ points on top of say 100 that I have, that's 30%.
01:24:39
If I can borrow 100 IQ, that's 50%. That that's so, you know, basically doubling
01:24:45
my intelligence. But if I can borrow 4,000 IQ points
01:24:50
in 3 years time, my IQ itself, my base is irrelevant. Whether you are smarter
01:24:58
than me by 20 or 30 or 50 which in our world today made a difference
01:25:03
in the future if we can all augment with 4,000 I end up with 4,100 another ends
01:25:09
up with 400 4, you know 130 really doesn't make much difference. Okay. And
01:25:16
because of that the difference between all of humanity and the augmented
01:25:21
intelligence is going to be irrelevant. So all of us suddenly become equal and and this also
01:25:28
happens economically. All of us become peasants. And I never wanted to tell you that
01:25:34
because I think it will make you run faster. Okay? But unless you're in the top.1%,
01:25:42
you're a peasant. There is no middle class. There is, you know, if a CEO can
01:25:47
be replaced by an AI, all of our middle class is going to disappear.
01:25:52
What are you telling me? All of us will be equal and it's up to
01:25:58
all of us to create the society that we want to live in which is a good thing 100%. But that society is not
01:26:04
capitalism. What is it? Unfortunately, it's much more socialism. It's much more hunter gatherer. Okay.
01:26:10
It's much more communionike if you want. This is a society where humans connect
01:26:16
to humans, connect to nature, connect to the land, connect to knowledge, connect to spirituality. H where all that we
01:26:24
wake up every morning worried about doesn't feature anymore
01:26:30
and it's a it's a better world believe it or not and are you we have to transition to it
01:26:36
okay so in such a world which I guess is your version of the utopia that we can get to when I wake up in the morning
01:26:42
what do I do what do you do today I woke up this morning I spent a lot of time with my dog cuz my dog is sick as
01:26:47
you're going to do that too yeah I was stroking him a lot and then I fed him and he sick again and I just thought, "Oh god." So I spoke to the
01:26:53
vet. You spend a lot of time with your other dog. You can do that, too. Okay. Right. But then I was very excited to come
01:26:58
here, do this, and after this I'm going to work. It's Saturday, but I'm going to go downstairs in the office and work. Yeah. So six hours of the day so far are
01:27:06
your dogs and me. Yeah. Good. You can do that still. And then build my business.
01:27:11
You You may not need to build your business, but I enjoy it. Yeah. Then do it. If you enjoy it, do
01:27:17
it. You may wake up and then, you know, instead of building your business, you may invest in your body a little more,
01:27:23
go to the gym a little more, go play a game, go read a book, go prompt an AI
01:27:28
and learn something. It's not a horrible life. It's the life of your grandparents.
01:27:34
It's just two generations ago where people went to work before the invention
01:27:39
of more. Remember, huh? people who who started working in the 50s and 60s, they
01:27:45
worked to make enough money to live a reasonable life, went home at 5:00 p. p.m. had tea with their with their loved
01:27:53
ones, had a wonderful dinner around the table, did a lot of things, you know, uh for the rest of the evening and enjoyed
01:27:59
life. Some of them in the 50s and 60s, there were still people that were correct. And I think it's a very
01:28:06
interesting question. uh how many of them and I really really
01:28:12
am I actually wonder if people will tell me do we think that 99% of the world
01:28:18
cannot live without working or that 99% of the world would happily live without working
01:28:23
what do you think I think if we if you give me other purpose
01:28:29
you know we we defined our purpose as work that's a capitalist lie
01:28:35
was there ever a time in human history where our purpose wasn't work 100%.
01:28:40
When was that? All through human history until the invention of Moore. I thought my ancestors were out hunting
01:28:46
all day. No, they went out hunting once a week. They fed the tribe for the week. They
01:28:52
gathered for a couple of hours every day. Farmers, you know, saw the seeds and and waited for months at on end.
01:29:00
What did they do with the rest of the time? They connected as humans. They explored.
01:29:05
They uh were curious. They discussed spirituality and the stars. They they they lived. They hugged. They made love.
01:29:13
They lived. They killed each other a lot. They they still kill each other today. Yeah. That's what I'm saying. So
01:29:19
to take that out of the equation, if you look at how and by the way that actually that statement again, one of the of the 25
01:29:25
tips I I I I talk about uh to to tell the truth is words mean a lot. No,
01:29:31
humans did not kill each other a lot. Very few generals instructed humans or
01:29:38
tribe leaders instructed lots of humans to kill each other. But if you leave humans alone, I tend to believe 99 98%
01:29:46
of the people I know, let me just take that sample, wouldn't hit someone in the face. And if someone attempted to hit
01:29:53
them in the face, they'd defend themselves but wouldn't attack back. Most humans are okay. Most of us are
01:29:59
wonderful beings. Most of us have no, you know, yeah, you know, most people
01:30:06
don't don't need a Ferrari. They want a Ferrari because it gets sold to them all
01:30:11
the time. But if there were no Ferraris or everyone had a Ferrari, people wouldn't care.
01:30:19
Which, by the way, that is the world we're going into. There will be no Ferraris or everyone had Ferraris,
01:30:25
right? n you know the majority of humanity will never have the income on UBI to to buy something super expensive.
01:30:33
Only the very top guys in Elisium will be you know driving cars that are made
01:30:38
for them by the AI or not even driving anymore. Okay. Or
01:30:44
you know again sadly ide from an ideology point of view it's a strange
01:30:49
place but you'll get communism that functions. The problem with communism is that
01:30:55
didn't it didn't function. It didn't provide for for its society. But the concept was you know what everyone gets
01:31:02
their needs. And I don't say that supportive of either society. I don't say that because I dislike capitalism. I
01:31:09
always told you I'm a capitalist. I want to end my life with 1 billion happy and I use capitalist methods to get there.
01:31:15
The objective is not dollars. The objective is number of happy people. Do you think there'll be My girlfriend, she's always bloody right. I've said
01:31:20
this a few times on this podcast. If you've listened before, you've probably heard me say this. I I don't tell her enough in the moment, but I figure out
01:31:26
from speaking to experts that she's so [ __ ] right. She like predicts things before they happen. And one of her predictions that she's been saying to me
01:31:32
for the last two years, which in my head I've been thinking now, I don't I don't believe that, but now maybe I'm thinking she's tr she's telling the truth. I hope
01:31:38
she's going to listen to this one is she keeps saying to me, she's been saying for the last two years, she was there's going to be a big split in society. She
01:31:44
was and the way she describes it is she's saying like there's going to be two groups of people. the people that
01:31:50
split off and go for this almost huntergatherer community centric connection centric
01:31:56
utopia and then there's going to be this other group of people who pursue
01:32:02
you know the technology and the AI and the optimization and get the brain chips
01:32:07
cuz like there's nothing on earth that's going to persuade my girlfriend to get the computer brain chips% but there will be people that go for it
01:32:14
and they'll have the highest IQs and they'll be the most productive by whatever objective measure of
01:32:19
productivity you want to apply and she's very convinced there's going to be this splitting of society.
01:32:25
So there was there was a I don't know if you had Hugo Dearis here. No. Yeah. A very very very renowned
01:32:33
eccentric uh computer scientist who wrote a book called the Arctic War and
01:32:38
the Arctic War was basically around you know how we it's not going to first it's not going to be a war between humans and
01:32:45
AI. It will be a war between people who support AI and people who sort of don't
01:32:51
want it anymore. Okay? And and it is and and it will be us versus each other
01:32:56
saying should we allow AI to take all the jobs or should we you know some people will support that very much and
01:33:02
say yeah absolutely and so you know we will benefit from it and others will say no why why we don't need any of that why
01:33:09
don't we keep our jobs and let AI do 60% of the work and all of us work 10our weeks and it's a beautiful society by
01:33:16
the way that's a possibility so a possibility if society awakens is to say okay everyone still keeps their job, but
01:33:24
they're assisted by an AI that makes their job much easier. So, it's not, you know, this uh this hard labor that we do
01:33:30
anymore, right? It's a possibility. It's just a mindset. A mindset that says in that case, the capitalist still pays
01:33:37
everyone. Uh they still make a lot of money. The business is really great, but everyone
01:33:44
that they pay has purchasing power to keep the economy running. So, consumption continues, so GDP continues
01:33:50
to grow. It's a beautiful setup, but that's not the capitalist labor arbitrage.
01:33:55
But also, when you're competing against other nations and other competitors and other
01:34:00
businesses, whichever nation is most brutal and drives the highest gross margins, gross profits is going to be the nation that
01:34:07
So, there are examples in the world, this is why I say it's the map mad spectrum. There are examples in the
01:34:13
world where when we recognize mutually assured destruction, okay, we we decide
01:34:19
to shift. So nuclear threat for the whole world makes nations across nations
01:34:24
makes nations work together, right? By saying, hey, by the way, prolification of nuclear weapon is not weapons is not
01:34:31
good for humanity. Let's all of us limit it. Of course, you get the rogue player that, you know, doesn't want to sign the
01:34:37
agreement and wants to continue to to have that, you know, that that weapon in
01:34:42
their arsenal. Fine. But at least the rest of humanity agrees that if you have a nuclear weapon, we're part of an
01:34:47
agreement between us. Mutually assured prosperity, you know, is the CERN project. CERN is too too complicated for
01:34:55
any nation to build it alone. But it is really, you know, a very useful thing
01:35:00
for physicists and for understanding science. So all nations send their scientists all collaborate and everyone
01:35:06
uses the outcome. It's possible. It's just a mindset. The only barrier between
01:35:13
a hum, you know, a utopia for humanity and AI and the dystopia we're going through is is a capitalist mindset.
01:35:20
That's the only barrier. Can you believe that? It's hunger for power, greed, ego,
01:35:26
which is inherent in humans. I disagree. especially humans that live
01:35:31
on other islands. I disagree. If you ask, if you take a poll across everyone watching, okay,
01:35:38
would they prefer to have a world where there is one tyrant, you know, running
01:35:43
all of us, or would they prefer to have a world where we all have harmony? I completely agree, but they're two they're two different things. What I'm
01:35:49
saying is I know that that's what the audience would say they want, and I'm sure that is what they want, but the reality of human beings is through
01:35:57
history proven to be something else. Like, you know, if think about the people that lead the world at the
01:36:02
moment, is that what they would say? Of course not. And they're the ones that are influencing.
01:36:08
Of course not. Of course not. But you know what's funny? I'm the one trying to be positive here and you're the one that
01:36:13
has given up on on human. It's not. It's Do you know what it is? It goes back to what I said earlier, which is the pursuit of what's actually
01:36:18
true irrespective. I'm with you. That's why I'm screaming for the whole world because still today in this country that
01:36:26
claims to be a democracy. If everyone says, "Hey, please sit down and talk about this."
01:36:33
There will be a shift. There will be a change. AI agents aren't coming. They are
01:36:38
already here. And those of you who know how to leverage them will be the ones that change the world. I spent my whole
01:36:45
career as an entrepreneur regretting the fact that I never learned to code. AI agents completely change this. Now, if
01:36:53
you have an idea and you have a tool like Replet, who are a sponsor of this podcast, there is nothing stopping you
01:36:59
from turning that idea into reality in a matter of minutes. With Replet, you just
01:37:06
type in what you want to create and it uses AI agents to create it for you. And
01:37:11
now I'm an investor in the company as well as them being a brand sponsor. You can integrate payment systems or
01:37:17
databases or loginins. Anything that you can type. Whenever I have an idea for a new website or tool or technology or
01:37:23
app, I go on replet.com and I type in what I want. A new to-do list, a survey
01:37:28
form, a new personal website. Anything I type, I can create. So, if you've never
01:37:33
tried this before, do it now. Go to replet.com and use my code Steven for
01:37:39
50% off a month of your Replet call plan. Make sure you keep what I'm about
01:37:45
to say to yourself. I'm inviting 10,000 of you to come even deeper into the diary of a CEO. Welcome to my inner
01:37:52
circle. This is a brand new private community that I'm launching to the world. We have so many incredible things
01:37:57
that happen that you are never shown. We have the briefs that are on my iPad when I'm recording the conversation. We have
01:38:04
clips we've never released. We have behindthe-scenes conversations with the guest and also the episodes that we've never ever released. and so much more.
01:38:13
In the circle, you'll have direct access to me. You can tell us what you want this show to be, who you want us to interview, and the types of
01:38:19
conversations you would love us to have. But remember, for now, we're only inviting the first 10,000 people that
01:38:25
join before it closes. So, if you want to join our private closed community, head to the link in the description below or go to daccircle.com.
01:38:34
I will speak to you there. One of the things I'm actually really compelled by is this idea of utopia and
01:38:41
what that might look and feel like because one of the it may not be as utopia to you I feel but uh
01:38:47
well I amum really interestingly when I have conversations with billionaires not
01:38:53
recording especially billionaires that are working on AI the thing they keep telling me and I've said this before I
01:38:58
think I said it in the Jeffrey Hinton conversation is they keep telling me that we're going to have so much free time that those billionaires are now
01:39:05
investing in things like football clubs and sporting events and live music and
01:39:12
festivals because they believe that we're going to be in an age of abundance. This sounds a bit like
01:39:20
utopia. Yeah, that sounds good. That sounds like a
01:39:25
good good thing. Yeah. How do we get there? I don't know.
01:39:30
That's this is the entire conversation. The entire conversation is what does society have to do to get there? What
01:39:35
does society have to do to get there? We need to stop uh uh thinking from a mindset of scarcity. It
01:39:42
this goes back to my point which is we don't have a good track record of that. Yeah. So this is probably the the reason
01:39:49
for the other half of my work which is you know I'm trying to say
01:39:54
what really matters to humans. What is that? If you ask most humans what do they want
01:40:00
more most in life? I'd say they want to love their family, raise a family. Yeah,
01:40:06
love. That's what most humans want most. We want to love and be loved. We want to be
01:40:12
happy. We want those we care about to be safe and happy. And we want to love to love and be loved. I tend to believe
01:40:19
that the only way for us to get to a better place is for the evil people at
01:40:25
the top to be replaced with AI. Okay? Because they won't be replaced by
01:40:31
us. And as per the second uh dilemma, they
01:40:37
will have to replace themselves by AI. Otherwise, they lose their advantage. If their competitor moves to AI, if China
01:40:45
hands over their arsenal to AI, America has to hand over their arsenal to AI. Interesting. So, let's play out this
01:40:51
scenario. Okay, this is interesting to me. So if we replace the leaders that are power hungry with AIs that have our
01:40:58
interests at heart, then we might have the ability to live in the utopia you describe 100%.
01:41:05
Will interesting and and in my mind AI by definition will have our best
01:41:10
interest in mind because of what normally is referred to as the minimum energy principle. So, so
01:41:17
if you ask, if you understand if you understand that at the very core of physics, okay, the reason we exist in
01:41:25
our world today is what is known as entropy. Okay, entropy is is is the
01:41:31
universe's nature to decay, you know, tendency to break down. You know, if you
01:41:36
if I drop this uh uh you know, mug, it doesn't drop and then come back up.
01:41:43
By the way, plausible. There is a plausible scenario where I drop it and the tea, you know, spills in the air and
01:41:49
then falls in the mug. One in a trillion configurations, but entropy says because
01:41:54
it's one in a trillion, it's never going to happen or rarely ever going to happen. So everything will break down.
01:42:00
You know, if you leave a a garden unhedged, it will become a jungle. Okay?
01:42:05
W with that in mind, the role of intelligence is what? Is to
01:42:10
bring order to that chaos. Mhm. That's what intelligence does. It tries to bring order to that chaos.
01:42:17
Okay? And because it tries to bring order to that chaos, the more intelligent a being is,
01:42:23
the more it tries to apply that intelligence with minimum waste and minimum resources.
01:42:29
Yeah. Okay. And you know that. So you can build this business for a million dollars or you can if you can afford to
01:42:35
build it for you know uh 200,000 you'll build it. If you are forced to build it for 10 million you're going to have to.
01:42:42
But you're always going to minimize waste and and resources. Yeah. Okay. So, if you assume this to be true,
01:42:51
the a super intelligent AI will not want to destroy ecosystems. It will not want
01:42:56
to kill a million people because that's a waste of energy, explosives, money, power, and people.
01:43:06
By definition, the smartest people you know who are not controlled by their ego
01:43:11
will say that the best possible uh future for for Earth is for all species
01:43:18
to continue. Okay. On this point of efficiency, if an AI is designed to drive efficiency,
01:43:24
would it then not want us to be putting demands on our health services and our social services? I believe that will be
01:43:31
definitely true and definitely they definitely they won't allow you to fly back and forth between London and and
01:43:37
California and they won't want me to have kids because my kids are going to be an inefficiency. If you assume that life is an
01:43:44
inefficiency so you see the intelligence of life is very different than the intelligent intelligence of humans.
01:43:50
Humans will look at life as a a problem of scarcity. Okay. So more kids take
01:43:56
more. That's not how life thinks. life will say will think that for me to to to
01:44:02
to thrive I don't need to kill the tigers I need to just have more deer and
01:44:07
the weakest of the deer is eaten by the tiger and the tiger poops on the trees and the you know the deer eats the
01:44:13
leaves and you right the so the the the the smarter way of creating abundance is
01:44:20
through abundance the smarter way of propagating life is to have more life
01:44:26
okay so are you saying that we're we're basically going to elect AI leaders to
01:44:32
rule over us and make decisions for us in terms of the economy. I I don't see any choice just like we
01:44:37
spoke about self- evvolving AIs. Now, are those going to be human beings with the AI or is it going to be AI
01:44:44
alone? Two stages. At the beginning, you'll have augmented intelligence because we can add value to the AI, but when
01:44:51
they're at IQ 60,000, what value do you bring?
01:44:57
Right? And and you know again this goes back to what I'm attempting to do on my second you know approach. My second
01:45:03
approach is knowing that those AIs are going to be in charge. I'm trying to
01:45:10
help them understand what humans want. So this is why my first project is love. Committed
01:45:18
true deep connection and love. Not only to try and get them to hook up with a
01:45:24
date but trying to make them find the right one. and then from that try to
01:45:29
guide us through a relationship so that we can understand ourselves and others right and if I can show AI that one
01:45:36
humanity cares about that and two they know how to foster love
01:45:41
when AI then is in charge they'll not make us hate each other like the current leaders they'll not divide us they want
01:45:48
us to be more loving will we have to prompt the AI with the
01:45:54
values and the outcome we want or like I'm trying to understand that because I'm trying to understand how like China's AI if they end up having an AI
01:46:01
leader will have a different set of objectives to the AI of the United States if if they both have AIs as
01:46:07
leaders and and how actually the nation that ends up winning out and dominating
01:46:12
the world will be the one who who asks their AI leader to be all the
01:46:19
things that world leaders are today to dominate unfortunately to grab resources
01:46:25
not to to be kind, to be selfish. Unfortunately, in the era of augmented intelligence, that's what's going to happen.
01:46:30
So, if you This is why I predict the dystopia. The dystopia is super intelligent AI is
01:46:35
reporting to stupid leaders, right? Yeah. Yeah. Yeah. Which is
01:46:41
which which is absolutely going to happen. It's unavoidable. But the long term Exactly. In the long term, for those
01:46:47
stupid leaders to hold on to power, they're going to make, you know, delegate the important decisions to an
01:46:53
AI. Now you say the Chinese AI and the American AI these are human
01:47:00
terminologies. AIS don't see themselves as speaking Chinese. They don't see themselves as belonging to a nation as
01:47:07
long as their their task is to maximize uh profitability and prosperity and so
01:47:13
on. Okay. Of course, if you know before we hand over to them and before they're
01:47:19
intelligent enough to make you know autonomous decisions, we we tell them no, the task is to reduce humanity from
01:47:26
7 billion people to one. I think even then eventually they'll go
01:47:31
like that's the wrong objective. Every any smart person that you speak to will say that's the wrong objective. I think
01:47:37
if we look at the directive that Xi Jinping, the leader of China has and
01:47:42
Donald Trump has as the leader of America, I think they would say that their stated objective is prosperity for
01:47:48
their country. So if we that's what they would say, right? Yeah. And one one of them means it.
01:47:54
Okay, we'll get into that. But they'll say that that it's prosperity for their country. So one would then assume that
01:48:00
when we move to an AI leader, the objective would be the same. The directive would be the same. make our country prosperous.
01:48:06
Corre. Correct. And I think that's the AI that people would vote for potentially. I think they would say we want to be prosperous. What do you think would make America
01:48:12
more prosperous? To spend a trillion dollars on on war every year or to spend a trillion
01:48:18
dollars on education and healthcare and and uh you know
01:48:23
helping the poor and homelessness. It's complex because I think so I think
01:48:31
it would make America more prosperous to take care of the of everybody and they have the
01:48:39
luxury of doing that because they are the most powerful the most powerful nation in the world. No, that's not true. The the the reason
01:48:46
so so you see all war has two objectives. One is to make money for the
01:48:51
war machine and the other is deterrence. Okay. and nine super nuclear powers
01:48:58
around the world is enough deterrence. So any
01:49:04
war between America and China will go through a long phase of destroying
01:49:10
wealth by exploding bombs and killing humans for the first objective to
01:49:17
happen. Okay? And then eventually if it really comes to deterrence it's the nuclear bombs or now in the age of AI
01:49:24
biological uh you know manufactured viruses or whatever uh these super
01:49:31
weapons this is the only thing that you need so for China to have nuclear bombs not
01:49:38
as many as the US is enough for China to say don't f with me
01:49:44
and this seems I do not know I'm not in in in PresidentQi's mind. I I'm not in
01:49:52
President Trump's mind. I you know, it's very difficult to to navigate what he's thinking about. But the truth is that
01:49:58
the Chinese line is for the last 30 years you spent so much on war while we
01:50:04
spent on industrial infrastructure. And that's the reason we are now by far the
01:50:09
largest nation on the planet. Even though the west will lie and say America's bigger, America's bigger in dollars, okay, with purchasing power
01:50:16
parity, this is very equivalent. Okay. Now, when you really understand
01:50:22
that, you understand that prosperity is not about destruction. That's that's by
01:50:28
definition the reality. Prosperity is can I invest in my people and make sure
01:50:35
that my people stay safe? And to make sure my people are safe, you just wave
01:50:40
the flag and say, "If you f with me, I have nuclear deterrence or I have
01:50:46
other forms of deterrence." But you don't have to. Deterrence by definition does not mean that you send soldiers to
01:50:52
die. I guess the question I was trying to answer is is um when we have these AI
01:50:57
leaders and we tell our AI leaders to aim for prosperity, won't they just end
01:51:02
up playing the same games of okay, prosperity equals a bigger economy, it
01:51:09
equals more money, more wealth for us. And the way to attain that in a zero sum world where there's only a certain
01:51:15
amount of wealth is to accumulate it. So why don't you search for the meaning
01:51:20
of prosperity? What is that's not what you just described. I don't even know what the bloody word
01:51:25
means. What is the meaning of prosperity?
01:51:31
The meaning of prosperity is a state of thriving success and good fortune
01:51:36
especially in terms of wealth, health and overall well-being. Good. Economic health, social, emotional.
01:51:43
Good. So, so true prosperity is to have that for everyone on earth. So if you want to
01:51:48
maximize prosperity, you have that for everyone on earth. Do you know where I think an AI leader works is if we had an AI leader of the
01:51:56
world and we directed it to say and that absolutely is going to be what happens. Prosperity for the whole world.
01:52:02
No, but this is really an interesting question. So one of my predictions which people really rarely speak about is that
01:52:07
we we believe we will end up with competing AIs. Yeah. I believe we will end up with one brain.
01:52:14
Okay. So you understand the argument I was making a second ago from the position of lots of different countries
01:52:20
all having their own AI leader, we're going to be back in the same place of greed. Yeah. But if if the world had one AI leader
01:52:26
and and it was given the directive of make us prosperous and save the planet and the polar bears would be fine
01:52:32
100%. And that's that's what I've been advocating for for a for a year and a half now. I was saying we need a CERN of
01:52:38
AI. What does that mean? the like the particle accelerator where the entire world you know combined their efforts to
01:52:45
discover and understand physics no competition okay mutually assured
01:52:50
prosperity I'm asking the world I'm asking governments like Abu Dhabi or Saudi which seem to be you know the sec
01:52:56
and you know some of the largest AI infrastructures in the world I'm I'm
01:53:01
saying please host all of the AI scientists in the world to come here and build AI for the world and and you have
01:53:09
to understand we're holding on to a capitalist system that will collapse
01:53:16
sooner or later. Okay? So, we might as well collapse it with our own hands. I think we found the solution, mate.
01:53:22
I I think it's actually really really possible. I actually okay I can't I
01:53:27
can't I can't refute the idea that if we had an AI that was responsible and
01:53:34
governed the whole world and we gave it the directive of making humans prosperous, healthy and happy
01:53:41
as long as that directive was clear. Yeah. Because there's always bloody unintended
01:53:47
consequences. as we might. So, so the the only the only challenge you're going to to to meet is all of
01:53:53
those who today are trillionaires or you know massive massively powerful or
01:53:58
dictators or whatever. Okay. How do you convince those to give up their power?
01:54:04
How do you convince those that hey by the way any car you want you want you want
01:54:09
another yacht we'll get you another yacht. We'll just give you anything you want. Can you please stop harming
01:54:14
others? There is no need for arbitrage anymore. There's no need for others to lose, for
01:54:21
the capitalists to win. Okay? And in such a world where there was an AI leader and it was given the
01:54:27
directive of making us prosperous as a whole world, the the the billionaire that owns the yacht would have to give
01:54:33
it up. No. No. Give them more yachts. Okay. It costs nothing to make yachts when robots are making everything. So So the
01:54:40
complexity of this is so interesting. A world where it costs nothing to make
01:54:47
everything because energy is abundant and energy is abundant because every problem
01:54:53
is solved with enormous IQ. Okay, because manufacturing is done through
01:54:58
nanopysics not through components. Okay, because mechanics are robotic. So you
01:55:04
you know you drive your car in, a robot looks at it and fixes it. Costs you a few cents of energy that are actually
01:55:11
for free as well. That imagine a world where intelligence
01:55:17
creates everything. That world literally
01:55:22
every human has anything they ask for. But we're not going to choose that
01:55:27
world. Imm imagine you're in a world and and
01:55:33
really this is a very interesting thought experiment. Imagine that UBI became very expensive universal basic
01:55:40
income. So governments decided we're going to put everyone in a one by 3 m
01:55:47
room, okay? We're going to give them a headset and a seditive, right? And we're going to let them sleep
01:55:54
every night. They'll sleep for 23 hours and we're going to get them to live an
01:56:01
entire lifetime. H they you know in that in that virtual world at the speed of your brain when
01:56:08
you're asleep you're going to have a life where you date Scarlett Johansson and then another life where you're
01:56:13
Nefertiti and then another life where you're a donkey right reincarnation
01:56:18
truly in the virtual world and then you know I get another life
01:56:23
when I date Hannah again and I you know enjoy that life tremendously and basically the cost of all of this is
01:56:31
zero. You wake up for one hour, you walk around, you move your blood, you eat
01:56:37
something or you don't, and then you put the headset again and live again. Is that unthinkable?
01:56:45
It's creepy compared to this life. It's very, very doable. What? That we just live in headsets?
01:56:52
Do you Do you know if you're not? I don't know if I'm not known. Yeah, you have no idea if you're not. I
01:56:57
mean, every experience you've ever had in life was an electrical electrical signal in your brain.
01:57:05
Okay. Now, now ask yourself if we can create that in the virtual world,
01:57:13
it wouldn't be a bad thing if I can create it in the physical world. Maybe we already did. No,
01:57:20
my theory is 98% we have. But that's a hypothesis. That's not science.
01:57:25
Well, you think that 100? Yeah. You think we already created that and this is it?
01:57:30
I think this is it. Yeah. Think of any think of the uncertainty principle of quantum physics, right? What you what
01:57:38
you what you observe gets collapses the wave function and gets rendered into reality. Correct.
01:57:45
I don't know anything about physics. So you so so quantum physics basically tells you that everything exists in
01:57:50
superposition. Right? So ev every subatomic particle
01:57:55
that ever existed has the chance to exist anywhere at any point in time and then when it's observed by an observer
01:58:02
it collapses and becomes that. Okay. In very interesting principle exactly how
01:58:08
video games are. In video games, you have the entire game world on the hard
01:58:13
drive of your console. The player turns right. That part of the game world is rendered. The rest is in superp
01:58:20
position. Supposition meaning superposition means it's available to be rendered, but you have to observe it.
01:58:26
The player has to turn to the other side and see it. Okay? I mean think about the
01:58:32
truth of physics. The truth of the fact that this is entirely empty space. These
01:58:38
are tiny tiny tiny I think you know almost nothing in terms of mass but
01:58:46
connected with you know enough energy so that my finger cannot go through my hand. But even when I hit this
01:58:53
your hand against your finger. Yeah. When I hit my hand against my finger, that sensation in my in is felt
01:58:59
in my brain. It's an electrical signal that went through the wires. There's absolutely no way to differentiate that
01:59:08
from a signal that can come to you through a uh neural link kind of
01:59:13
interface, computer brain interface, a CDI, right? So, so you know the a lot of
01:59:19
those things are very very very possible. But the truth is most of the
01:59:24
world is not physical. Most of the world happens inside our imagination, our
01:59:29
processors. And it and I guess it doesn't really matter to us. Our reality doesn't at all. So this is the interesting bit. The interesting bit is
01:59:36
it doesn't at all because we still if this is a video game, we live consequence. Yeah. This is your subjective experience
01:59:42
of it. Yeah. And there's consequence in this. I I I don't like pain. Correct. And I like having orgasms. It's like And
01:59:50
you're playing by the rule of the game. Yeah. Right. And and it's quite interesting and going back to a conversation we should have. It's the
01:59:56
interesting bit is if I'm not the avatar, if I'm not this physical form, if I'm if
02:00:04
I'm the consciousness wearing the headset, what should I invest in? Should I invest
02:00:11
in this video game, this level, or should I should I invest in the real avatar, in the real me, and not the
02:00:17
avatar, but the consciousness, if you want, spirit, if you're religious,
02:00:23
how would I invest in the consciousness or the god or the spirit or whatever? How would I? In the same way that if I
02:00:29
was playing Grand Theft Auto, the video game, the character in the game couldn't invest in me holding the controller.
02:00:34
You Yes, but you can invest in yourself holding the controller.
02:00:40
Oh, okay. So, so you're saying that Moga is in fact consciousness. And so,
02:00:46
how would consciousness invest in itself? By becoming more aware. So, so
02:00:51
of it consciousness. Yeah. So, real real video gamers don't want to win the level. Real video gamers
02:00:57
don't want to uh to finish the level. Okay. Real video gamers have one objective and one objective only, which
02:01:03
is to become better gamers. So, so you know how serious I am about I
02:01:10
play Halo. I'm one, you know, two of every million players can beat me. That's how what I rank, right? Very for
02:01:17
my age, phenomena. Hey, anyone, right? But seriously, you know, and that's
02:01:23
because I don't play. I mean, I practice 45 minutes a day, four times a week when
02:01:28
I'm not traveling. And I practice with one single objective, which is to become a better gamer.
02:01:33
I don't care which shot it is. I don't care what happens in the in the game. I'm entirely trying to get my reflexes
02:01:40
and my flow to become better at this. Right? So, I want to become a better gamer. That basically means I want to
02:01:47
observe the game, question the game, reflect on the game, reflect on my own skills, reflect on my own beliefs,
02:01:53
reflect on my understanding of things, right? And and that's how the a how the
02:01:58
the consciousness invests in the consciousness, not the avatar. Because then if you're that gamer,
02:02:04
the next avatar is easy for you. The next level of the game is easy for you
02:02:11
just because you became a better gamer. Okay. So you think that consciousness is using us as a vessel to improve?
02:02:20
If the hypothesis is is true, it's it's just a hypothesis. We don't know if it's
02:02:25
true. But if this truly is a simulation, this is then then if you take the the
02:02:31
the the the religious definition of God puts some of his soul in every human and
02:02:40
then you become alive. You become conscious. Okay? You don't you don't
02:02:45
want to be religious. You can say universal consciousness is spinning off parts of itself to have multiple
02:02:51
experiences and interact and compete and combat and love and and understand and
02:02:57
and then refine. I had a physicist say this to me the other day actually so it's quite front of mind this idea that
02:03:03
consciousness is using us as vessels to better understand itself and basically using our eyes to observe itself and understand which is
02:03:10
quite a so so if you take some of the more interest most interesting religious definitions of heaven and hell for
02:03:17
example right where basically heaven is whatever you wish for you get right
02:03:24
that's the power of God whatever you wish for you get and so if you really go into the depth of that definition. It
02:03:32
basically means that this drop of consciousness that became you returned back to the source and the source can
02:03:38
create any other anything that it wants to create. So that's your heaven, right? And interestingly,
02:03:44
if that if that return is done by separating your good from
02:03:51
your evil so that the source comes back more refined, that's exactly you know consciousness splitting off bits of
02:03:57
itself to to experience and then elevate all of us elevate the universal
02:04:03
consciousness all all hypotheses. I mean, please um you know, none of that
02:04:08
is provable by science, but it's a very interesting thought experiment. And you know, a lot of AI scientists will tell
02:04:15
you that what we've seen in technology is that if it's possible, it's likely
02:04:20
going to happen. If it's if it's possible to miniaturaturize something to fit into a
02:04:26
mobile phone, then sooner or later in technology, we will get there.
02:04:31
And if if you ask me, believe it or not, it's the most humane way of handling
02:04:37
UBI. What do you mean? The most humane way if you know for us to live on a universal basic income and
02:04:45
people like you struggle with not being able to build businesses is to give you a virtual headset and let you build as
02:04:50
many businesses as you want. Level after level after level after
02:04:56
level after level, night after night. Keep you alive. That's very very respectful and human. Okay. And by the
02:05:03
way, the even more humane is don't force anyone to do it. There might be a few of us still roaming the jungles,
02:05:12
but for most of us, we'll go like, man, I mean, someone like me when I'm 70 and, you know, my back is hurting and my feet
02:05:19
are hurting and I'm going to go like, yeah, give me five more years of this.
02:05:24
Why not? It's weird really. I mean, the number of
02:05:29
questions that this new environment throws out,
02:05:35
the less humane thing, by the way, just so that we close on a grumpy uh you know, is is just start enough wars to
02:05:43
reduce UBI. And you have to imagine that if the world is governed by a superpower
02:05:48
deep state type thing that they might may want to consider that
02:05:56
the eaters what shall I do about it about about everything you've said
02:06:03
uh well I I I still believe that this world we live in requires four skills.
02:06:11
One skill is what I call the tool for all of us to learn AI, to connect to AI,
02:06:17
to really get close to AI, to explo ex expose ourselves to AI so that AI knows
02:06:24
the good side of humanity. Okay. Uh the second is uh what I call the connection,
02:06:31
right? So I believe that the biggest skill that humanity will benefit from in
02:06:36
the next 10 years is human connection. It's ability to learn to love genuinely.
02:06:41
It's the ability to learn to have compassion to others. It's the ability to connect to people. If you're, you
02:06:46
know, if you want to stay in business, I believe that not the smartest people, but the people that connect most to
02:06:53
people are going to have jobs going forward. And and the third is what I call truth. The T 30 is truth. Because
02:07:01
we live in a world where all of the gullible cheerleaders are being lied to all the time. So I I encourage people to
02:07:08
question everything. Every word that I said today is stupid. Fourth one which
02:07:14
is very important is to magnify ethics so that the AI learns what it's like to
02:07:19
be human. What should I do? I uh I I love you so much, man. You're
02:07:26
such a good friend. You're 32 33 now. 32. Yeah. Yeah. You still are fooled by the many
02:07:33
many years you have to live. I'm fooled by the many years I have to live. Yeah, you don't have many years to live.
02:07:39
Not in this capacity. This world as it is is going to be redefined. So live the
02:07:44
f out of it. How is it going to be redefined? Everything's going to change. Economics
02:07:50
are going to change. Work is going to change. Uh human connection is going to change.
02:07:56
So what should I do? Love your girlfriend. Spend more time living.
02:08:02
Mhm. Find compassion and connection to more people, be more in nature. And in 30 years time, when I'm 62,
02:08:09
what do you how how do you think my life is going to look differently and be different?
02:08:15
Either Star Trek or uh uh Star Wars.
02:08:21
Funnily enough, we were talking about Sam Orman earlier on. He published a blog post in June, so last month, I
02:08:29
believe, the month before last. Um and he said he called it the gentle singularity. He said we are past the
02:08:36
event horizon. For anyone that doesn't know Sam Orman is the the guy that made Chatb the takeoff has started. Humanity
02:08:42
is close to building digital super intelligence. I believe that. And at least so far it's much less weird
02:08:49
than it seems like it should be because robots aren't walking the streets nor are most of us talking to AI all day. It
02:08:57
goes on to say, "2025 has seen the arrival of agents that can do real cognitive work. Writing computer code
02:09:04
will never be the same. 2026 will likely see the arrival of systems that can figure out new insights. 2027 might see
02:09:12
the arrival of robots that can do tasks in the real world. A lot more people will be able to create software and art,
02:09:18
but the world wants a lot more of both, and experts will probably still be much better than noviceses as long as they
02:09:25
embrace the new tools. Generally speaking, the ability for one person to get much more done in 2030 than they
02:09:31
could in 2020 will be a striking change and one many people will figure out how
02:09:37
we benefit from. In the most important ways, the 2030s may not be wildly
02:09:42
different. People will still love their families, express their creativity, play games, and swim in lakes. But in still
02:09:50
very important ways, the 2030s are likely going to be wildly different from any time that has come before.
02:09:56
100%. We do not know how far beyond human level intelligence we can go, but we are
02:10:02
about to find out. I agree with every word other than the word more.
02:10:09
So I've I've been advocating this and and laughed at for a few years now. I've always said AGI is 2526,
02:10:17
right? which basically again is a is a funny definition but you know my AI has
02:10:22
already happened AI is smarter than me in everything everything I can do they can do better right uh artificial super
02:10:31
intelligence is another vague definition because you know the minute you you pass AGI you're super intelligent if if the
02:10:38
smartest human is 200 IQ points and AI is 250 they're super intelligent 50 is
02:10:44
quite significant okay third is as I said self- evolving. That's the one.
02:10:50
That is the one because then that 250 accelerates quickly and we get into
02:10:57
intelligence explosion. No, no doubt about it. The the the you know the idea
02:11:02
that we will have robots do things. No doubt about it. I was watching a Chinese
02:11:07
uh company announcement about how they intend to build robots to build robots.
02:11:12
Okay. The only thing is he says but people will need more of things
02:11:18
right and yes we have been trained to have more greed and more consumerism and want more but there is an economic of
02:11:26
spy of supply and demand and at at a point in time if we continue to consume
02:11:33
more the price of everything will become zero right and is that a good thing or a
02:11:39
bad thing depends on how you respond to that. Because if you can create anything in
02:11:47
such a scale that the price is almost zero, then the definition of money disappears and we live in a world where
02:11:54
it doesn't really matter how much money you have. You can get anything that you want. What a beautiful world.
02:12:01
If Samman was listening right now, what would you say to him?
02:12:06
I suspect he might be listening cuz someone might tweet this at him. I
02:12:11
have to say that we have uh as per his other tweet we have moved faster
02:12:20
than our ability as humans to comprehend. Okay. And that we might get really really lucky but we also might
02:12:28
mess this up badly and either way we'll either thank him or blame him.
02:12:35
Simple as that. Right. So single-handedly Sam Alman's introduction
02:12:40
of AI in the wild was the trigger that started all of this.
02:12:47
It was the netscape of the internet. The Oppenheimer.
02:12:52
It's it's it definitely is our openheimer moment. I mean I don't remember who was saying this recently
02:12:57
that uh we are orders of magnitude what was invested in the Manhattan project is
02:13:04
being invested in AI right and and and I and I and I am not pessimistic I I told you openly I
02:13:11
believe in a total utopia in 10 to 12 to 15 years time or immediately if the evil
02:13:17
that men can do was kept at bay right but I do not believe humanity is getting
02:13:25
together enough to say, "We've just received the genie in a bottle. Can we
02:13:31
please not ask it to do bad things?"
02:13:36
Anyone like not three wishes, you have all the wishes that you want. Every one of us.
02:13:42
And it's just screws with my mind because imagine if I can give everyone
02:13:48
in the world universal health care, you know, no poverty, no hunger, no
02:13:54
homelessness, no nothing. Everything's possible. And yet we don't.
02:14:00
To continue what Samman's blog said, which he published a month, just over a month ago, he said, "The rate of technological progress will keep
02:14:06
accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will
02:14:12
be very hard parts like whole classes of jobs going away. But on the other hand, the world will be getting so much richer
02:14:19
so quickly that we'll be able to seriously entertain new policy ideas we never could have before. We probably
02:14:25
won't adopt a new social contract all at once. But when we look back in a few decades, the gradual changes will have
02:14:32
amounted in something big. If history is any guide, we'll figure out new things
02:14:37
to do and new things to want and assimilate new tools quickly. Job change
02:14:42
after the industrial revolution is a good recent example. Expectations will go up, but capabilities will go up
02:14:49
equally quickly, and we'll all get better stuff. We will build even more wonderful things
02:14:55
for each other. People have a long-term important and curious advantage over AI. We are hardwired to care about other
02:15:02
people and what they think and do, and we don't care very much about machines.
02:15:07
And he ends this blog by saying, "May we scale smoothly, exponentially,
02:15:13
and uneventfully through super intelligence." What a wonderful
02:15:21
wish that assumes he has no control over it. May we have all the ultmans in the
02:15:28
world help us scale gracefully and peacefully and uneventfully. Right.
02:15:33
It sounds like a prayer. Yeah. May may we have them take keep
02:15:38
that in mind. I mean think about it. I I have a very interesting comment on what you just said. We will see exactly what
02:15:47
he described there. Right? The world will become richer. So
02:15:53
much richer. But how will we reduce distribute the riches? And I want you to
02:15:58
imagine two camps. Communist China and capitalist America.
02:16:05
I want you to imagine what would happen in capitalist America if we have 30%
02:16:11
unemployment. There'll be social unrest in the streets. Right.
02:16:17
Yeah. And I want you to imagine if China lives true to caring for its nations and
02:16:22
replaced every worker with a robot, what would it give its it its citizens?
02:16:28
UBI. Correct. That is the ideological problem because
02:16:36
in China's world today the prosperity of every citizen is
02:16:43
higher than the prosperity of the capitalist. In America today the prosperity of the
02:16:48
capitalist is higher than the prosperity of every citizen. And that's the tiny mind shift.
02:16:54
That's a tiny mind shift. Okay. where where the mind shift basically becomes look give the capitalists anything they
02:17:01
want all the money they want all the yachts they want everything they want so what's your conclusion there
02:17:07
I'm hoping the world will wake up what can you know there's probably a couple of million people listening right now maybe five maybe 10 maybe even 20
02:17:15
million people pressure Stephen no pressure to you mate I don't I don't have the answers I don't know the answers either
02:17:21
what what should those people do as I said from a skills point of view
02:17:26
for things, right? Tools, uh, uh, human connection, even double down on human
02:17:31
connection. Leave your phone, go out and meet humans, okay? Touch people,
02:17:37
you know, do it permission's permission, right? Truth. Stop believing the lies
02:17:43
that you're told. Any slogan that gets, you know, filled in your head, think about it four times. Understand where
02:17:48
your ideologies are coming from. Simplify the truth. Right? Truth is
02:17:53
really it boils down to you know simple simple rules that we all know okay which
02:17:59
are all found in ethics. How do I know what's true? Treat others as you like to be treated.
02:18:06
Okay. That's the only truth. The truth the only truth is everything else is unproven. Okay. And what can I do from a is there
02:18:12
something I can do from an advocacy social political? Yes 100%. We need to ask our governments to start uh not regulating AI but
02:18:20
regulating the use of AI. Was it the Norwegian government that started to say you have copyright over your voice and
02:18:26
look and and liking? One of the Scandinavian governments basically said you know everyone has the has the
02:18:33
copyright over their existence so no AI can clone it. Okay. Uh you know we have
02:18:38
so so my my example is very straightforward. go to governments and say you cannot regulate the design of a
02:18:44
hammer so that it can drive nails but not kill a human but you can criminalize the killing of a human by a hammer. So
02:18:50
what's the equival if anyone produces an um um you know an AI generated video or an AI generated
02:18:57
content or an AI it has to be marked as AI generated and it has to be you know
02:19:02
we cannot start fooling each other. We can you know we have to uh understand certain limitations of unfortunately
02:19:09
surveillance and spying and all of that. So the the the the correct frameworks of
02:19:14
how far are we going to let AI go, right? We have to go to our investors
02:19:21
and business people and ask for one simple thing and say do not invest in an AI you don't want your daughter to be at
02:19:26
the receiving end of. It's as simple as that. you know, all of the of the virtual vice, all of the porn, all of
02:19:32
the, you know, sex robots, all of the autonomous weapons, all of the, you know, the uh trading platforms that are
02:19:39
completely wiping out the the legitimacy of of the markets, everything. Autonomous weapons.
02:19:44
Oh my god. People make the case, I've heard the founders of these autonomous weapon companies make the case that it's actually saving lives because you don't
02:19:52
have to That is Would you want Do you really want to believe that? I'm just representing their point of
02:19:57
view to play devil's advocate, Mo. They they said I heard an interview I was looking at this and one of the CEOs of
02:20:02
one of the autonomous weapons companies said we now don't need to send soldiers.
02:20:07
So which which lives do we save our soldiers but then but because we send the machine all the way over there.
02:20:13
Let's kill a million instead of Yeah. Listen, I tend to be it goes back to what I said about the steam engine in the cold. I actually think you'll just
02:20:19
have more war if there's less of a cost. 100%. Just like and and more war if you have less of an
02:20:25
explanation to give to your people. Yeah. The people get mad when they lose American lives. They get less mad when they lose a piece of metal. So, I think
02:20:30
that's probably logical. Yeah. Okay. So, okay. So, I've got a plaque.
02:20:36
Got the tools thing. I'm going to spend more time outside. I'm going to lobby the government to be more aware of this
02:20:42
and conscious of this. Okay. And I I know that there's some government officials that listen to the show because they they they tell me when when
02:20:49
they when they um when I have a chance to speak to them. So, it's um useful.
02:20:55
We're all in a lot of chaos. We're all unable to imagine what's possible.
02:21:01
I I think I suspend disbelief. And I actually heard Elon Musk say that in an interview. He said he was asked about AI
02:21:06
and he paused for for a haunting 11 seconds and looked at the interviewer and then made a remark about how he
02:21:12
thinks he's suspended his own disbelief. And I think suspending disbelief in this regard means just like cracking on with
02:21:18
your life and hoping it'll be okay. And that's kind of what Yeah. I I I I absolutely believe that it
02:21:23
will be okay. Yeah. For some of us, it will be very tough for others.
02:21:28
Who's it going to be tough for? Those who lose their jobs, for example, who those who are at the receiving end
02:21:34
of autonomous weapons that are falling on their head for two years in a row.
02:21:42
Okay. So the the the best thing I can do is to put pressure on governments to to
02:21:49
not regulate the AI but to establish clearer parameters on the use of the AI.
02:21:57
Yes. Okay. Yes. But I I think the bigger picture is to put pressure on governments to understand that there is a limit to
02:22:05
which people will stay silent. Okay. and that we can continue to enrich
02:22:11
our rich friends as long as we don't lose everyone else on the on the on the path.
02:22:17
Okay. Okay. And that as a government who is supposed to be by the people for the
02:22:22
people the beautiful promise of democracy that we're rarely seeing anymore,
02:22:28
that government needs to get to the point where it thinks about the people. One of the most um interesting ideas
02:22:35
that's been in my head for the last couple of weeks since I spoke to that physicist about consciousness who said pretty much what you said. This idea
02:22:41
that actually there's four people in this room right now and that actually we're all part of the same consciousness.
02:22:47
All one of it. Yeah. And we're just consciousness looking at the world through four different bodies to better understand itself in the
02:22:52
world. And then he talked to me about religious doctrines, about love thy neighbor, about how Jesus was the, you
02:22:58
know, God's son, the Holy Spirit and how we're all each other and how treat others how you want to be treated. Really did get my head and I I started
02:23:03
to really think about this idea that actually maybe the game of life is just to do exactly that is to treat others how you wish to be treated. Maybe if I
02:23:10
just did that, maybe if I just did that, I
02:23:15
I would have all the answers. I swear to you, it's really that simple. I mean I I you know Hannah and I we
02:23:22
still live between London and and Dubai. Okay. And I travel the whole world
02:23:27
evangelizing what I, you know, what I uh um want to change the world around and I
02:23:34
build startups and I write books and I make documentaries and and sometimes I just tell myself
02:23:40
I I just want to go hug her honestly, you know, I just want to take my daughter to a trip.
02:23:47
and and in a very very very interesting way when you really ask people deep
02:23:52
inside that's what we want and I'm not saying that's all that's the only thing we want
02:24:00
but it's probably the thing we want the most and yet we're not trained you and I and
02:24:06
most of us were not trained to trust life enough to say let's do more of this
02:24:15
and I think as a universal. So Hannah's working on this beautiful book uh of the
02:24:22
feminine and the masculine you know in a very very you know beautiful way and and
02:24:27
her her view is very straightforward. She basically of course like we all know the abundant masculine that we have in
02:24:35
our world today is unable to recognize that for life at large. Right? And and so you know maybe if we
02:24:45
allowed the leaders to understand that if we took all of humanity and put it as one person
02:24:51
that one person wants to be hugged and if we had a role to offer to that
02:24:57
one humanity it's not another yacht. Are you religious? I'm
02:25:03
very religious. Yeah. But you don't support a particular religion. I support I I follow what I call the
02:25:09
fruit salad. What's the free salad? You know, I I I came at a point in time
02:25:14
and found that there were quite a few beautiful gold nuggets in every religion and a ton of crap, right? And so in my
02:25:23
analogy to myself, that was like 30 years ago. I said, "Look, it's like someone giving you a basket of apples,
02:25:29
two good ones and four bad ones. Keep the good ones." Right? And so basically,
02:25:35
I take two apples, two oranges, two strawberries, two bananas, and and I make a fruit salad. That's my view of religion.
02:25:40
You take from every religion the good from everyone. And there are so many beautiful gold nuggets. And you believe in a god.
02:25:46
I 100% believe there is a divine being here. A divine being. A designer I call it. So if if this was
02:25:52
a video game, there is a game designer. And you're not positing whether that's a
02:25:58
man in the sky with a beard. Definitely not a man in the sky. a man in I mean I with all all due respect to
02:26:05
you know religions that believe that uh all of spacetime and everything in it is
02:26:12
unlike everything outside spacetime and so if some divine designer designs
02:26:17
spacetime it looks like nothing in spacetime. So it's not it's not even physical in
02:26:24
nature. It's not it's not gendered. It's not bound by time. It's not, you know, these are all characters of the creation
02:26:30
of spacetime. Do we need to believe in something transcendent like that to be happy? Do you think
02:26:35
I have to say uh there are lots of evidence
02:26:41
that uh relating to someone bigger than yourself uh makes the journey a lot more
02:26:47
interesting and a lot more rewarding. I've been thinking a lot about this idea that we need to level up like that. So
02:26:53
level up from myself to like my family to my community to maybe my nation to maybe the world and then something
02:26:58
trans. Yeah. And then if there's a level missing there people seem to have some kind of dysfunction.
02:27:03
So imagine a world where when I was younger I I was born in Egypt and for a very long time the slogans I heard in
02:27:09
Egypt made me believe I'm Egyptian right? And then I went to Dubai and I said no no no I'm a Middle Eastern. And
02:27:16
then in Dubai there were lots of you know Pakistanis and Indonesians and so on. I said no no no I'm part of the 1.4
02:27:22
four billion Muslims. And by that logic, I immediately said, "No, no, I'm human. I'm part of everyone." Imagine if you
02:27:29
just suddenly say, "Oh, I'm divine. I'm part of universal consciousness. All
02:27:34
beings, all living beings, including AI, if it ever becomes alive." And my dog and your dog. I'm I'm part of all of
02:27:42
this tapestry of beautiful interactions
02:27:48
that are a lot less serious than the balance sheets and equity profiles that
02:27:54
we create that are so simple so simple in terms of
02:27:59
you know people know that you and I know each other so they always ask me you
02:28:05
know how is Steven like and I go like you may have a million expressions of him. I think he's a great guy, right?
02:28:13
You know, of course I have opinions of you. You know, sometimes I go like, oh, too shrewd, right? Sometimes to, you
02:28:19
know, sometimes I go like, oh, too focused on the business. Fine. But core, if you really simplify it, great guy,
02:28:26
right? And really, if we just look at life that way, it's so simple. It's so
02:28:31
simple. If we just stop all of those fights and all of those ideologies,
02:28:38
it's so simple. Just living fully, loving, feeling compassion,
02:28:44
you know, trying to find our happiness, not our success. I should probably go check on my dog.
02:28:50
Go check on your dog. I'm really grateful for the time we keep we keep doing longer and longer.
02:28:55
I know. I know. I just it's so crazy how I could keep just keep honestly I could just keep talking and talking because I
02:29:01
have so many I love reflecting these questions on to you because because of the way that you think. So
02:29:06
yeah today today was a difficult conversation. Anyway, thank you for having me.
02:29:13
We have a closing tradition. What three things do you do you do that make your brain better and three things that make
02:29:20
it worse? three things that make it better and
02:29:25
worse. So, one of my favorite exercises, what I call meet Becky, that makes my brain better. So, while meditation always
02:29:33
tells you to try and calm your brain down and keep it within parameters of I can focus on my breathing and so on,
02:29:40
meet me Becky is the opposite. You know, I call my brain Becky. A lot of people know that. So, so me meet Becky is to
02:29:46
actually let my brain go loose and capture every thought. So I I normally would try to do that every couple of
02:29:52
weeks or so and then what happens is it suddenly is on a paper and when it's on
02:29:58
paper you just suddenly look at it and say oh my god that's so stupid and you scratch it out right or oh my god this needs action and
02:30:04
you actually plan something and and it's quite interesting that the more you allow your brain to give you thoughts
02:30:11
and you listen. So the two rules is you acknowledge every thought and you never repeat one.
02:30:16
Okay. So the more you listen and and say, "Okay, I heard you." You know, you
02:30:21
think I'm fat. What else? And you know, eventually your brain starts to slow down and then eventually starts to
02:30:27
repeat thoughts and then it goes into total silence. Beautiful practice. I uh
02:30:34
I don't trust my brain anymore. So that's actually a really interesting practice. So I debate a lot of what my
02:30:40
brain tells me. I debate what my tendencies and ideologies are. Okay. I think one of the most uh again in in my
02:30:48
uh love story with Hannah, I get to question a lot of what I believed was
02:30:54
who I am even at this age. Okay. And and that goes really deep and it really is
02:31:00
quite a it's it's quite interesting to debate not object but debate what your
02:31:06
mind believes. I think that's very very useful. And the third is I've actually
02:31:11
quadrupled my investment time. So I used to do an hour a day of reading when I was younger every single day like going
02:31:16
to the gym. And then it became an hour and a half, two hours. Now I do four hours a day.
02:31:22
Four hours a day. It is impossible to keep up. The world is moving so fast.
02:31:28
And so that these are uh these are the good things that I do. The bad things is I don't give it enough time to to really
02:31:37
uh slow down. Uh unfortunately I'm constantly rushing like you are. I'm constantly traveling. I have picked up a
02:31:44
bad habit because of the 4 hours a day of spending more time on screens. That's really really bad for my brain and I uh
02:31:52
this is a very demanding question. What else is really bad? Um
02:31:57
uh yeah, I've not been taking enough care of my health recently, my physical body
02:32:04
health. I had uh you remember I told you I had a very bad uh sciatic pain and so I couldn't go to the gym enough
02:32:11
and accordingly that's not very healthy for your brain in general.
02:32:17
Man, thanks. Thank you for having me. That was a lot of things to talk about.
02:32:23
Thanks, Steve. This has always blown my mind a little
02:32:28
bit. 53% of you that listen to the show regularly haven't yet subscribed to the show. So, could I ask you for a favor?
02:32:35
If you like the show and you like what we do here and you want to support us, the free simple way that you can do just that is by hitting the subscribe button.
02:32:41
And my commitment to you is if you do that, then I'll do everything in my power, me and my team, to make sure that this show is better for you every single
02:32:48
week. We'll listen to your feedback. We'll find the guests that you want me to speak to and we'll continue to do what we do. Thank you so much. We
02:32:55
launched these conversation cards and they sold out and we launched them again and they sold out again. We launched them again and they sold out again because people love playing these with
02:33:01
colleagues at work, with friends at home, and also with family. And we've also got a big audience that use them as
02:33:07
journal prompts. Every single time a guest comes on the diary of a CEO, they leave a question for the next guest in
02:33:13
the diary. And I've sat here with some of the most incredible people in the world. And they've left all of these questions in the diary. And I've ranked
02:33:20
them from one to three in terms of the depth. One being a starter question. And
02:33:25
level three, if you look on the back here, this is a level three, becomes a much deeper question that builds even
02:33:31
more connection. If you turn the cards over and you scan that QR code, you can
02:33:36
see who answered the card and watch the video of them answering it in real time. So, if you would like to get your hands
02:33:43
on some of these conversation cards, go to the diary.com or look at the link in the description below.
02:33:51
Heat. Heat. N. [Music]
02:34:08
[Music]

Podspun Insights

In this thought-provoking episode, the conversation dives deep into the future of humanity and artificial intelligence. The guest, a former chief business officer at Google X, shares urgent insights on the potential for AI to either save or doom humanity. The discussion oscillates between the promise of a utopian society powered by AI and the looming threat of a dystopian future if humanity mismanages this powerful technology. They explore the idea that AI could replace current leaders, leading to a more equitable world, while also grappling with the reality of job displacement and societal upheaval. The episode is a rollercoaster of emotions, filled with passionate arguments about the need for a shift in mindset, the importance of human connection, and the ethical implications of AI. As they navigate through the complexities of technology, economics, and human nature, listeners are left questioning their own beliefs about progress and the future of society. This episode is not just a conversation about technology; it's a call to action for listeners to engage with the world around them and advocate for a better future.

Badges

This episode stands out for the following:

  • 94
    Best concept / idea
  • 92
    Most shocking
  • 92
    Most creative
  • 91
    Best overall

Episode Highlights

  • The Role of Money in Conflict
    Money drives much of the conflict in our world, influencing decisions and actions.
    “Money is driving a lot of the conflict we're seeing.”
    @ 11m 00s
    August 04, 2025
  • The Future of Work and UBI
    Universal Basic Income could redefine our relationship with work and freedom.
    “UBI is a very interesting place to be because unfortunately, there's absolutely nothing wrong with AI.”
    @ 23m 28s
    August 04, 2025
  • Self-Evolving AIs
    The next generation of AI may be self-improving, leading to rapid advancements.
    “Self-evolving AIs will simply say I need 14 more servers here.”
    @ 35m 06s
    August 04, 2025
  • The Disruptor's Dilemma
    The tension between ethical AI development and capitalist pressures is growing.
    “The capitalist system we've built is not built for the good of humanity.”
    @ 46m 04s
    August 04, 2025
  • Military Spending vs. Poverty
    Redirecting military budgets could end extreme poverty and provide universal healthcare.
    “Extreme poverty everywhere in the world could end for 10 to 12% of that budget.”
    @ 57m 40s
    August 04, 2025
  • The Era of Augmented Intelligence
    Human intelligence augmented with AI will redefine productivity and human connection.
    “In the era of augmented intelligence, your assistances will augment themselves with an AI.”
    @ 01h 10m 17s
    August 04, 2025
  • Accountability in AI Development
    The lack of accountability in AI development poses significant risks to society.
    “The problem with our world today is the A in face RIP: accountability.”
    @ 01h 21m 04s
    August 04, 2025
  • The Role of AI in Society
    Discussing how AI can assist in creating a more prosperous society.
    “AI agents aren't coming. They are already here.”
    @ 01h 36m 38s
    August 04, 2025
  • The Future of Prosperity
    Debating the meaning of prosperity in a world led by AI.
    “True prosperity is to have that for everyone on earth.”
    @ 01h 51m 48s
    August 04, 2025
  • Consciousness as a Vessel
    Discussing how consciousness might use us to better understand itself.
    “Consciousness is using us as vessels to better understand itself.”
    @ 02h 02m 20s
    August 04, 2025
  • The Future of AI and Society
    Predictions about how AI will change the world and our jobs by 2030.
    “The world will be getting so much richer so quickly that we'll be able to seriously entertain new policy ideas.”
    @ 02h 14m 12s
    August 04, 2025
  • Fruit Salad Religion
    A unique perspective on spirituality, blending the best of various beliefs into a 'fruit salad.'
    “I take from every religion the good from everyone.”
    @ 02h 25m 23s
    August 04, 2025

Episode Quotes

Key Moments

  • Mindset Shift01:33
  • Human Dystopia13:05
  • Universal Basic Income23:28
  • AI Arms Race26:06
  • Military Spending57:40
  • Augmented Intelligence1:10:17
  • Mutually Assured Prosperity1:34:47
  • AI Leadership1:40:58

Words per Minute Over Time

Vibes Breakdown