
AI AGENTS DEBATE: These Jobs Won't Exist In 24 Months!

May 12, 2025 / 02:32:10

This episode discusses the impact of AI on society, featuring guests Amjad Masad, Dan Martell, and Bret Weinstein. Key topics include job displacement, the potential for AI to improve healthcare and education, and the ethical implications of autonomous systems.

Amjad Masad shares his experience with Replit, a platform that allows users to create software without coding skills. He emphasizes the democratization of technology and how it enables individuals to build businesses and solve problems.

Dan Martell discusses the opportunities for entrepreneurs in the AI era, highlighting the potential for small teams to leverage AI tools for significant impact. He encourages listeners to embrace these technologies to create meaningful solutions.

Bret Weinstein raises concerns about the rapid pace of technological change and its implications for society. He warns of the dangers of autonomous weapons and the potential for AI to exacerbate existing inequalities.

The conversation concludes with a call to action for individuals to adapt to the changing landscape and leverage AI for personal and societal growth.

TL;DR

AI is reshaping society, creating opportunities and challenges in jobs, healthcare, and ethics.

Video

00:00:00
I think a lot of people don't realize how massive the positive impact AI is going to have on their life. Well, I
00:00:06
would argue that the idea that this AI disruption doesn't lead us to human catastrophe is optimistic. For example,
00:00:12
people are going to be unemployed in huge numbers. You agree with that, don't you? Yes. If your job is as routine as
00:00:19
it comes, it's gone in the next couple years. But it's going to create new opportunities for wealth creation. Let
00:00:26
me put it to you this way. We have created a new species and nobody on earth can predict what's going to
00:00:31
happen. We are joined by three leading voices to debate the most disruptive shift in human history, the rise of AI.
00:00:38
And they're answering the questions you're most scared about. This technology is going to get so much more
00:00:43
powerful. And yes, we're going to go through a period of disruption. But at the other end, we're going to create a fair world. It's enabling people to run
00:00:49
their businesses, make a lot of money, and you can solve meaningful problems such as the breakthroughs in global healthcare and education will be
00:00:56
phenomenal. And you can live an incredibly fulfilling existence. Well, I would just say on that front, this has
00:01:01
always been the fantasy of technologists to do marvelous things with our spare time, but we end up doom scrolling, loneliness epidemic, right? Falling
00:01:07
birth rates. So, the potential for good here is infinite and the potential for bad is 10 times. For example, there's
00:01:14
war, undetectable deepfakes and scams. So, people don't understand how many different ways they are going to be
00:01:20
robbed. Look, I don't think blaming technology for all of it is the right thing. All these issues, they're already
00:01:25
here. You're all fathers here. So, what are you saying to your children? Well, first of all, this has always blown my
00:01:32
mind a little bit. 53% of you that listen to this show regularly haven't yet subscribed to the show. So, could I
00:01:38
ask you for a favor before we start? If you like the show and you like what we do here and you want to support us, the free simple way that you can do just
00:01:44
that is by hitting the subscribe button. And my commitment to you is if you do that, then I'll do everything in my
00:01:49
power, me and my team, to make sure that this show is better for you every single week. We'll listen to your feedback.
00:01:54
We'll find the guest that you want me to speak to and we'll continue to do what we do. Thank you so
00:01:59
[Music]
00:02:05
much. The reason why I wanted to have this conversation with all of you is because the subject matter of AI, but
00:02:12
more specifically AI agents, has occupied my free time for several weeks
00:02:18
in a row. And actually, Amjad, when I started using Replit, for me it was a
00:02:24
paradigm shift. There were two paradigm shifts in a row that happened about a week apart. ChatGPT released their
00:02:30
image generation model where you could create any image. It was incredibly detailed with text and all those things. That was a huge paradigm shift. And then
00:02:37
in the same week I finally gave in to try and figure out what this term AI agent was that I was hearing all over
00:02:42
the internet. I heard vibe coding. I heard AI agent. I was like, I will give it a shot. Mhm. And when I used Replit,
00:02:48
20 minutes into using Replit, my mind was blown. And I think that night I
00:02:53
stayed up till 3 or 4 a.m. For anyone that doesn't know, Replit is a piece of software that
00:03:00
allows you to create software. Mhm. And pretty much any software you want.
00:03:06
So someone like me with absolutely no coding skills was able to build a website, build in Stripe, take payment,
00:03:13
integrate AI into my website, add Google login to the front of my website and do
00:03:18
it within minutes. I then got the piece of software that I had built with no coding skills, sent it to my friends,
00:03:23
and one of my friends put his credit card in and paid. Amazing. So I just launched a SaaS company with no coding
00:03:28
skills. To demonstrate an AI agent in a very simple way, I used an online AI agent
00:03:36
called Operator to order us all some water from a CVS around the corner. The AI agent did everything end to end, and
00:03:42
people will be watching on the screen. It put my credit card details in and it picked the water for me. It gave the person a tip. It put some delivery notes
00:03:48
in. At some point a guy is going to walk in. He has not interacted with a human. He's interacted with my AI agent. And I
00:03:55
just... The reason I use this as an example is, again, it was a paradigm shift moment for me when I heard about agents. Mhm.
00:04:00
about a month ago, and I went on and I ordered a bottle of electrolytes, and when my doorbell rang I freaked
00:04:07
out. I freaked out. But Amjad, who are you and what are you doing? So, I started
00:04:15
programming at a very young age. You know I I built my first business when I was a teenager. I used to go to uh
00:04:22
internet cafes and program there. And I realized that they don't have software to manage the business. I was like oh
00:04:27
why don't you create accounts? I don't have a server. It took me two years to build that piece of software. And that's
00:04:33
sort of embedded in my mind this idea that hey like you know there there's a lot of people in the world with really
00:04:40
amazing ideas, especially in the context where they live, that allow them to build businesses. However, the main
00:04:49
source of friction between an idea and software, or call it an idea and
00:04:55
wealth creation, is infrastructure: physical infrastructure, meaning a
00:05:01
computer in front of you. It is an internet connection. It is the set of
00:05:07
tools and skills that you need to build that. If we make it so that anyone who
00:05:12
has ideas, who wants to solve problems, will be able to do it. I mean, imagine the kind of world that we
00:05:19
could live in where no one can be anyone who has merit anyone who can think
00:05:25
clearly anyone who can generate a lot of ideas can generate wealth. I mean that's
00:05:31
an amazing world to live in, right? Anywhere in the world. So with Replit, the company that I started in
00:05:36
2016, the idea was like, okay, coding is difficult, how do we solve coding? And
00:05:43
we built every part of the process: the hosting, the code editor. The only
00:05:48
missing thing was, you know, the AI agent. And so over the past two years
00:05:53
we've been working on this AI agent that you can just, you know, similar, since
00:05:58
ChatGPT, you know, this revolution with GenAI, you can just speak your ideas into existence. I mean this starts you
00:06:05
know, sounding religious. Like, this is, you know, the gods, you know, the myths that humans have
00:06:12
created. They used to imagine a world where you can be everywhere and anywhere at once. That's sort of the
00:06:18
internet. And you can also speak your ideas into existence. And, you know, it's still early. I think
00:06:25
Replit Agent is a fantastic tool, and I think this technology is going to get so much more powerful.
00:06:31
Specifically, what is an AI agent? I've got this graph actually here, which I
00:06:36
don't need to pass to any of you for you to be able to see the growth of AI agents. But this graph is Google search
00:06:43
trend data. This also resembles our revenue too. Oh, okay. Right. The the water has
00:06:49
arrived. Hello. Thank you. You can come on in. Can I have a go, please? Yes.
00:06:54
It's 3951. Great. Thank you so much. Thank you. Thank you. I mean, this is
00:07:00
like a supernatural kind of power. You conjured water. I conjured water from my
00:07:05
mind. Yeah. And it's shown up here with us and it clearly thinks we need a lot.
00:07:11
But just to define the term AI agent, for someone that's never heard the term before. Yeah. Yeah. So, I assume most
00:07:18
of the audience now are familiar with ChatGPT, right? You can go in and you can talk to an AI. It can search the web for
00:07:25
you. It has a limited set of tools. Maybe it can call a calculator to do some addition and subtraction for you, but
00:07:32
that's about it. It's a request response style. Agents are when you give it a
00:07:38
request and they can work indefinitely until they achieve a goal or they run
00:07:45
into an error and they need your help. It's an AI bot that has access to tools.
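The distinction Amjad draws here, chat as one-shot request/response versus an agent that keeps choosing tool calls until it decides the goal is met or hands control back on an error, can be sketched in a few lines. Everything below is illustrative: the scripted stand-in model and the toy `search`/`order` tools are assumptions for the example, not any real agent framework or API.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str          # "tool" or "finish"
    tool: str = ""
    args: str = ""
    result: str = ""

def scripted_model(history, tools):
    """Deterministic stand-in for the LLM: a real agent would call a
    model here to pick the next action given the goal and history."""
    if not any(h.startswith("search(") for h in history):
        return Action(kind="tool", tool="search", args="water near me")
    if not any(h.startswith("order(") for h in history):
        return Action(kind="tool", tool="order", args="1x water")
    return Action(kind="finish", result="water ordered")

def run_agent(goal, tools, model, max_steps=10):
    """The agent loop: keep acting until the model decides it is done.
    Unlike request/response chat, the agent itself determines when it
    has finished executing (or the step budget runs out)."""
    history = [f"goal: {goal}"]
    for _ in range(max_steps):
        action = model(history, tools)
        if action.kind == "finish":
            return action.result, history
        # Execute a tool (browser, code runner, payment, ...) and record
        # the observation so the next model call can see it.
        observation = tools[action.tool](action.args)
        history.append(f"{action.tool}({action.args}) -> {observation}")
    raise TimeoutError("step budget exhausted")

# Toy tools standing in for "access to a web browser, a programming
# environment, credit cards" from the conversation.
tools = {
    "search": lambda q: "CVS, 0.2 miles away",
    "order":  lambda item: "order placed",
}
result, history = run_agent("order us some water", tools, scripted_model)
```

The more entries in `tools`, the more the agent can do, which is exactly the "more tools, more powerful" point, and why the security considerations mentioned next follow immediately.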
00:07:50
Those tools are access to a web browser, like Operator; access to a programming environment, say, like Replit;
00:07:58
access to um you know credit cards. The more tools you give the agent, the more
00:08:03
powerful it is. Of course, there are all these considerations around security and safety and all of that stuff. But the
00:08:10
most important thing is that the AI agent will determine when it has finished
00:08:15
executing. Today, AI agents can run for anywhere between, you know, 30 seconds
00:08:22
to 30 minutes. There's a recent paper that came out showing that every
00:08:28
7 months the number of minutes that the agent can run for is doubling. So we're
00:08:34
at like 30 minutes now. In seven months we're going to be at an hour then you know 2 hours. Pretty soon we're going to
00:08:40
be at days. And at that point, you know, the AI agent is doing labor, doing kind of
00:08:45
humanlike labor. And actually, OpenAI's new model o3 beat the expectation. So it
00:08:52
sort of doubled coherence over long-horizon tasks in just three or four
00:08:58
months. So we're in this massive... I mean, this exponential graph,
00:09:04
you know, that shows you the massive trend we're on. Bret, give us a little bit of your
00:09:11
background, but also I saw you writing some notes there. There were a couple of words used there that I thought were
00:09:16
quite interesting, especially considering what I know about you. The word God was used a few times.
00:09:22
Well, uh, let me just say I'm an evolutionary biologist, and probably for the purposes of this conversation, it
00:09:30
would be best to think of me as a complex systems theorist. One of the things that I believe is true about AI
00:09:36
is that this is the first time that we have built machines that have crossed
00:09:42
the threshold from the highly complicated into the truly complex.
00:09:48
And I will say I'm listening to this conversation with a mixture of
00:09:55
profound hope and dread, because it seems to me that it is obvious
00:10:03
that the potential good that can come from this technology is effectively
00:10:09
infinite. But I would say that the harm is probably 10 times. It's a bigger infinity. And the question of how we are
00:10:16
going to get to a place where we can leverage the obvious power that is here
00:10:22
to do good and dodge the worst harms. I have no idea. I I know we're not
00:10:27
prepared. So I hear you talking about agents and I think that's marvelous.
00:10:34
We can all use such a thing right away and the more powerful it is, the better. The idea of something that can solve
00:10:40
problems on your behalf while you're doing something else is marvelous. But of course, that is the precondition for
00:10:50
absolute devastation to arise out of a miscommunication, right? To have
00:10:55
something acting autonomously to accomplish a goal, you damn well better
00:11:01
understand what the goal really is, and how to pull back the reins if
00:11:06
it starts accomplishing something that wasn't the goal. The potential for abuse is also utterly profound. You know, you
00:11:14
can imagine, just pick your Black Mirror fantasy dystopia, where something has
00:11:24
been told to hunt you down until you're dead and it sees that as a, you know, a
00:11:29
technical challenge. So, I don't know quite how
00:11:34
to balance a discussion about all of the things that can clearly come from this
00:11:39
that are utterly transcendent. I mean, I do think it is not inappropriate to be
00:11:45
invoking God or biblical metaphors here. You know, you're uh producing water
00:11:52
seemingly from thin air. I believe that does have an exact biblical parallel. So, in any case, the power is
00:12:00
here, but so too is the need for cautionary tales, which we don't have.
00:12:06
That's the problem: there is no body of myth that will warn us properly of this tool, because we've just crossed
00:12:11
a threshold that is similar in its capacity to alter the world as the
00:12:19
invention of writing. I really think that's that's where we are. We're talking about something that is going to
00:12:24
fundamentally alter what humans are with no plan. You know, writing alters the
00:12:31
world slowly because the number of people who can do it is tiny at first and remain so for thousands of years.
00:12:38
This is changing things weekly, and that's an awful lot of power to just
00:12:43
simply have dumped on a system that wasn't well regulated to begin with. Dan? Yeah. So, I'm an
00:12:50
entrepreneur. Um, I've been building businesses for the last 20 plus years. I'm completely well positioned between
00:12:56
the two of you here: the excitement of the opportunity and the terror of what could go wrong. There's this image
00:13:02
that I saw of New York City in 1900 and
00:13:08
every single vehicle on the street is a horse and cart and then 13 years later the same photo from the same vantage
00:13:15
point and every single vehicle on the street is a car. And in 13 years all the horses had been removed and cars had
00:13:22
been put in place. And if you had interviewed the horses in 1900 and
00:13:28
said, how do you feel about your level of confidence in the world? The horses would have said, well, we've been
00:13:34
part of humanity, hand and hoof, for many, many years, for thousands of years.
00:13:41
There's one horse for every three humans. Like, how bad could it be? You know, we'll always have a special place. We'll always be part of society. And
00:13:51
little did the horses realize that that was not the case. That the horses were going to be put out of business
00:13:57
very very rapidly. And to reason through analogy, you know, there's a lot of us who are now sitting there going, "Hey,
00:14:04
wait a second. Does this make me a horse in 1900?" I think a lot of people don't realize how massive an impact these kinds of
00:14:10
technologies are going to have. You know, one minute we're ordering a water and that's cute, and the next
00:14:16
minute it can run for days, and in your words it doesn't stop until it achieves its goal, and it comes up with
00:14:23
as many different ways as it could possibly come up with to achieve its goal and in your words it better know
00:14:29
what that goal is. I'm thinking a lot as Daniel's speaking about the vast
00:14:35
application of AI agents, and where the bounds are. Because if this thing is
00:14:40
going to get incrementally smarter, well, incrementally might be an understatement, it's going to get incredibly smart
00:14:46
incredibly quick and we're seeing this AI race where all of these large language models are competing for intelligence with one another and if
00:14:53
it's able to traverse the internet and click things and order things and write things and create things and all of our
00:15:00
lives run off the internet today. What can't it do? It's going to be smarter than me.
00:15:06
No doubt it already is. And it's going to be able to take actions across the internet, which is pretty much where
00:15:11
most of my professional life operates. It's like how I build my businesses. Even this podcast is an internet product
00:15:17
at the end of the day, because you can create... we've done experiments now, and I can show the graphs on my phone, to make
00:15:22
AI podcasts, and we've just managed to get them to have the same retention as The Diary of a CEO. Now, with the
00:15:29
image generation model... Retention, as in viewer retention, the percentage of people that get to one hour. Wow. It's the
00:15:34
same now. So we can make the video, we can publish it, we can script it, we can synthesize my voice so it sounds like me.
00:15:42
So what is it going to be able to do? Mhm. And can you give me the variety
00:15:47
of use cases that the average person might not have intuitively conceived? Yeah. So I tend to be an optimist,
00:15:54
and part of the reason is because I try to understand the limits of the technology. What can it do?
00:16:00
Anything that we have human data to train it on. What
00:16:06
can it not do? Anything that humans don't know how to do, because we don't
00:16:12
have the training data. Of course, it's super smart because it integrates
00:16:17
massive amounts of knowledge that you wouldn't be able to read, right? It's also much faster. It can run through massive
00:16:24
amounts of computation that, you know, your brain can't even comprehend. Because of all of that, they're smart. They can take
00:16:31
actions but we know the limits of what they can do because we trained them.
00:16:37
They're able to simulate what a human can do. So the reason you were able to order the water there is because it was
00:16:44
trained on data that includes clicking on DoorDash and ordering water. I
00:16:51
applaud your optimism and I like the way you think about these puzzles, but I think I see you making a mistake that we
00:16:57
are about to discover is very commonplace. So we have several different
00:17:02
categories of systems. We have simple systems, we have complicated systems, we
00:17:09
have complex systems and then we have complex adaptive systems.
00:17:14
And to most of us, a highly complicated system appears like a complex system. We
00:17:20
don't understand the distinction. Technologists often master highly complicated systems and they know, you
00:17:28
know, for example, a computer is a perfectly predictable system inside. It's deterministic. Mhm. But to
00:17:35
most of us, the way it functions is mysterious enough that it feels like a complex
00:17:41
system. And if you're in the position of having mastered highly complicated
00:17:46
systems and you look at complex systems and you think it's a natural extension, you fail to anticipate just how unpredictable they
00:17:54
are. So even if it is true that today there are limits to what these machines
00:18:00
can do based on their training data, I think the problem is foreseeing what's going to happen. You
00:18:09
really want to start thinking of this as the evolution of a new species that will continue to evolve. It will partially be
00:18:16
shaped by what we ask it to do, the direction we lead it, and it will partially be shaped by things we don't
00:18:21
understand. So, how does this computer that we have work? Well, one of the
00:18:27
things that it does is we plug them into each other using language. It's almost
00:18:33
as if you've plugged an Ethernet cable in between human minds. And that means that the cognitive potential exceeds the
00:18:41
sum of the individual minds in question. Your AIs are going to do that. And that
00:18:47
means that our ability to say what they are capable of does not come down to well we didn't train it on that data. As
00:18:54
they begin to interact, that feedback is going to take them to capabilities we don't anticipate and may not even
00:19:00
recognize once they become present. That's one of my fears. This is an evolving creature and it's not even an
00:19:08
animal. If it were an animal, you could say something about what the limits of that capability are. But this is a new
00:19:14
type of biological creature and it will become capable of things that we don't
00:19:20
even have names for. Even if it didn't do that, even if it just stayed within the boundaries that you're talking
00:19:26
about, you mentioned it having median-level intelligence. Well, that by definition means 50% of the people on
00:19:32
the planet are less intelligent than AI. You know, to a degree, it's almost
00:19:38
as if we've just invented a new continent of remote workers. Um there's
00:19:43
billions of them. They've all got a master's or a PhD. They all speak all the languages. Anything that you could call
00:19:49
someone or ask someone over the internet to do, they're there 24/7 and they're 25
00:19:54
cents an hour. So, like, if that really happened, like, if we really
00:20:01
did just discover that there were a billion extra people on the planet who all had PhDs and were happy to work almost for free that would have a
00:20:08
massive disruptive impact on society. Like society would have to rethink how everyone lives and works and gets
00:20:15
meaning. And that's if it just stays at a median level of intelligence. Like, it's pretty
00:20:21
profound. I still think it's a tool. This is power that is there to be
00:20:26
harnessed by entrepreneurs. You know, I think that the world is gonna get disrupted, right? And, you
00:20:34
know, this post-war world that we created, where you go through life, you go through 12 years of
00:20:40
education, you get to college and you just check the boxes, you get a job. We
00:20:46
can already see the fractures of that, that, you know, this American dream is perhaps no longer there. And so
00:20:52
I think the world has already changed. So, but what are the opportunities? Obviously there are
00:20:58
downsides. The opportunity is that, for the first time, access to opportunity is
00:21:03
equal. And I do think there's going to be more inequality. And the reason for
00:21:08
this inequality is because, actually, Steve Jobs, you know, made this analogy: the best taxi
00:21:15
driver in New York is like 20% better than the, you know, average
00:21:21
taxi driver. The best programmer can be 10x better. You know, we say the 10x engineer. Now, the variance will
00:21:30
be in the 1,000x range, right? Like the best entrepreneur that can
00:21:36
leverage those agents could be a thousand times better
00:21:41
than someone who doesn't have the grit, doesn't have the skill, doesn't have the ambition. Right? So that will
00:21:48
create a world where, yes, there's massive access to opportunity, but there are people who will seize it,
00:21:55
and there will be people who don't. I imagine it almost like a marathon race, and AI has two
00:22:02
superpowers. One superpower is to distract people, such as the TikTok algorithm. That's right. And the other
00:22:08
superpower is to make you hyper-creative. So you become a hyper-consumer or a hyper-creator. And in this marathon
00:22:14
race, the vast majority of people have got their shoes tied together because AI is distracting them. Some people are
00:22:20
running a traditional race. Some people have got a bicycle and some people have
00:22:28
to be very confronting when the results go on the scoreboard and you see, oh,
00:22:33
wait a second. There's a few people who finished this marathon in about 30 minutes. And there's a lot of us who
00:22:38
finished in like 18 hours because we had our shoes tied together. And I can't
00:22:44
understand if we've got equal opportunity why there's so much disparity between how fast it you know
00:22:50
and do, you know... I'm using an analogy, but this idea that, you know, a lot of people are going to start
00:22:55
earning a million dollars a month and a lot of people are going to say, "Hey, I can't even get a job for $15 an hour."
00:23:00
there's going to be this kind of interesting wedge. Well, but I hear in what both of you are saying
00:23:07
a kind of assumption that this will all be done on the up and up. And I do want
00:23:17
to just I am not a doomer. I agree that the doomers are likely incorrect that
00:23:24
their fears are misplaced. But I do think we have a related rates problem.
00:23:30
You know I said the potential for good here is infinite and the potential for
00:23:35
bad is 10 times. Right? What I mean is there are lots of
00:23:41
ways in which this obviously empowers people to do things that they were going to be otherwise stuck in the mundane
00:23:48
process of learning to code and then figuring out how to make the code work
00:23:53
and bring it to market and all of that. And this solves a lot of those problems, and that's obviously a good thing. Really, what we should want is the
00:24:00
wealth creation objective as quickly as we can get there. But the problem is, you
00:24:06
know, as much as that hyper-creative individual is empowered to make wealth,
00:24:13
the person who is interested in stealing may be even more empowered. And I'm
00:24:18
concerned about that at a pretty high level. The abuse cases may outnumber
00:24:24
the use cases, and we don't have a plan for what to do about that. Can I
00:24:31
give you a quick introduction here, the optimistic view? OpenAI invented GPT; the first
00:24:37
version came out in 2018, 2019 was GPT-2, and so OpenAI, you know, now they get
00:24:44
a lot of criticism, and a lawsuit from Elon Musk, that they're no longer open source,
00:24:50
right, they used to be. The reason is, with GPT-2 they said, we are no longer
00:24:56
going to open source this technology, because it's going to create opportunities for abuse, such as, you know,
00:25:03
influencing elections, you know, stealing grandma's credit card,
00:25:09
and so on and so forth. Wouldn't you say, Bret, that it is kind of surprising how
00:25:14
little abuse we've seen so far? I don't know how much abuse we've seen
00:25:19
so far. I don't know how any of us do. And also, even the example that you
00:25:25
suggest, where ChatGPT is no longer open source to prevent abuse: I'm taking
00:25:31
their word for it that that's the motivation. Whereas as a systems theorist, I would say, well, if you had a
00:25:38
technology that was excellent at enhancing your capacity to wield power,
00:25:46
then open sourcing it is a failure to capitalize on that, and that the most
00:25:52
remunerative use is to keep it private and then either sell the ability to
00:25:58
manipulate elections to people who want to do so or sell the ability to have it kept off the table for people who don't.
00:26:05
And I would expect that that's probably what's going on. If you have a technology as transformative as this,
00:26:12
giving it away for free is counterintuitive, which leaves those of us in the public more or less at the
00:26:18
mercy of the people who have it. So I I don't see the reason for comfort
00:26:25
there. We are at the dawn of this radical transformation of humans that by
00:26:33
its very nature, as a truly complex and emergent innovation, means nobody on earth can
00:26:40
predict what's going to happen. We're on the event horizon of something. And the problem is, you know, we can talk
00:26:48
about the obvious disruptions, the job disruption, and that's going to be massive. And does that lead some group
00:26:54
of elites to decide, oh well, suddenly we have a lot of useless eaters and what are we going to do about that? Because
00:27:00
that conversation tends to lead somewhere very dark very quickly. Um, but I think that's just the beginning of
00:27:06
the various ways in which this could go wrong without the doomer scenarios coming into play. This is an
00:27:12
uncontrolled experiment in which all of humanity is downstream. Yeah. So I was
00:27:18
trying to make the point that OpenAI has been sort of wrong about
00:27:25
how big the potential for harm is. Like, you know, I think we would have
00:27:30
heard about it in the news, like, how much harm it's done. And maybe, you know, some of it is working in
00:27:36
the shadows. But the few incidents that we've heard about, where LLMs, large language models,
00:27:43
the technology that's powering ChatGPT, were the cause, have been huge headlines. Like, The New York
00:27:49
Times talked about this kid that was, you know, perhaps goaded by some kind of chat software that, you know, helps
00:27:56
teenagers to be less lonely, into suicide, which is tragic. And obviously, these are the kinds of safety
00:28:02
and abuse issues that we want to worry about. But these are kind of isolated incidents, and
00:28:09
we do have open-source large language models. Obviously, the thing that everyone talks about is DeepSeek.
00:28:15
DeepSeek is coming from China. So what is DeepSeek's incentive? You know,
00:28:21
perhaps the incentive is to destroy the AI industry in the US. You know, when
00:28:26
they released DeepSeek, you know, the market tanked: the market for Nvidia, the market for AI, and all of that. But there
00:28:32
is an incentive to open source. Meta is open sourcing Llama. Llama is another AI
00:28:37
similar to ChatGPT. The reason they're open sourcing Llama, and Zuckerberg just says that out loud, is basically that they
00:28:44
don't want to be beholden to OpenAI. They don't sell AI as a service. They
00:28:49
use it to build products. And there's this concept in business called commoditize your complement: because you
00:28:56
need AI as a technology to run your service, the best strategy is to open source it. So these market forces
00:29:04
are going to create conditions that I think are actually beneficial. So I'll
00:29:10
give you a few examples. One: first of all, the AI companies are
00:29:15
motivated to create AI that is safe so that they can sell it. Second there are
00:29:20
security companies investing in AIs that allow them to protect against the sort of
00:29:25
malicious acting of AI. And so you have the free market, and we've always had that, you know, but generally, as
00:29:32
humanity, we've been able to leverage the same technology to protect against
00:29:38
the abuse. So I don't really understand this, and maybe this is actually the exact discussion
00:29:45
that you would expect between somebody at the frontier of the highly complicated staring at a complex system
00:29:51
and a biologist who comes from the land of the complex and is looking back at highly complicated
00:29:58
systems. In game theory we have something called a collective action problem. And in the market that
00:30:05
you're describing, an individual company has no capacity to hold back the abuses of AI.
00:30:15
The most you can do is not participate in them. You can't stop other people from programming LLMs in some dangerous
00:30:23
way. And you can limit your own ability to earn based on your own limitations of
00:30:28
what you're willing to do. And then effectively what happens is the technology gets invented anyway. It's just
00:30:34
that the dollars end up in somebody else's pocket. So the incentive is not to restrain yourself so that you can at
00:30:40
least compete and participate in the market that's going to be opened. And
00:30:46
so consider the number of ways in which you can abuse this technology. Let's take a
00:30:51
couple. What is to stop somebody from training
00:30:56
LLMs on an individual's creative output and then
00:31:03
creating an LLM that can outcompete that individual, that can effectively not only
00:31:09
produce what they would naturally produce over the course of a lifetime but can extrapolate from it and can even
00:31:15
hybridize it with the insights of other people so that effectively those who have the LLM can train it on the
00:31:22
creativity of others and not cut them in on the use of that insight. You can
00:31:27
effectively end up putting yourself out of business by putting your creative ideas in the world where they get sucked
00:31:32
up as training data for future LLMs. That is unscrupulous, but it's
00:31:38
effectively guaranteed. In fact, it's already happening. So, that's a problem. And likewise, what would stop
00:31:45
somebody from interacting with an individual and training an LLM to become
00:31:51
like a personalized con artist? Something that would play exactly to your blind spot. That does happen.
00:31:56
That is starting to happen. People get phone calls and it sounds like their daughter: I've lost my phone and
00:32:02
I'm borrowing a friend's phone, and all of that sort of stuff. What's interesting is that I think you make a
00:32:08
really good point. I worry about the impact on society. And yet when I look at every single
00:32:14
individual who uses AI regularly, it has almost nothing but a profoundly
00:32:20
positive impact on their life. I was just spending some
00:32:25
time with my parents-in-law, who are in their 70s and early 80s, and they use
00:32:31
AI regularly for all sorts of things that they find incredibly valuable and that improves the quality of their life.
00:32:38
I personally did an M&A, a mergers and acquisitions deal, where I bought a company last year, and the AI was so
00:32:46
powerful at helping that process. The conversations were transcribed and they were turned into letters of intent and
00:32:52
then press releases and legal documents, and we probably shaved
00:32:58
$100,000 worth of costs, and we sped up the whole process, and it was
00:33:04
pretty magical to see how it could happen. With that said, you know,
00:33:09
there's all of this: well, $100,000 worth of lawyers didn't get paid, right? So, well, what I want to
00:33:14
know, yeah, people are upset about that. But if we look back
00:33:20
at the invention of the cell phone or the invention of the social media
00:33:26
platforms, there would be every reason to have exactly the same perspective, right? I remember the beginning of Facebook, and I remember the
00:33:33
idea that suddenly the process that used to afflict people where you would just
00:33:38
lose touch with most of the people who had been important to you, that was not something that needed to happen anymore.
00:33:44
You could just retain them permanently as a part of a diffuse social grouping that just simply grew
00:33:52
and value was added. There's no end to how much good that did, but what it did
00:33:58
to us was profound and not evident in the first chapter. Say the same thing
00:34:04
about the cell phones and the dopamine traps and the way this has disconnected us from each other, the way it has
00:34:10
disconnected us from nature, the way it has altered the very patterns with which
00:34:16
we think. It has altered every classroom. Mhm. So, and those things I
00:34:22
think are going to turn out to have been sort of minor foreshadowings of the
00:34:28
disruption that AI will produce. So, I agree with you today. The amount you can
00:34:33
do with AI, there's a tremendous amount of good. There's a little bit of harm. Maybe that's something we need to worry about. But as this develops, as we get
00:34:41
to, you know, to peer over the edge of this cliff that we're headed to, I think we're going to discover that we can't
00:34:48
yet detect the nature of the alteration that's coming. So, I just wanted to add some context to that because, Amjad, I saw
00:34:53
the interview you did in a newsletter in 2023 where you said, "I wouldn't prepare for AGI in the same way that I wouldn't
00:34:59
prepare for the end of days." It's effectively the end of days if the vision of AGI that some of these
00:35:04
companies have comes to pass, because it's called the singularity moment, because you can't really predict what happens after that. And so how
00:35:11
would you even prepare for that? You want to prepare for the more likely world, and the world that you can
00:35:16
actually predict is a world where, yes, there are massive improvements in technology and there are insane
00:35:23
compounding effects of technology and it's pretty hard to keep up. From that it appeared that in 2023 you were saying
00:35:29
a similar thing to Brett in terms of we can't see around the corner here because it is a singularity.
00:35:34
Sorry, you also used AGI, artificial general intelligence. It'd be interesting to know what your definition
00:35:40
of AGI is. Yeah. So what I was saying there is, even if I'm wrong that you can actually create an
00:35:49
unbounded seemingly conscious artificial intelligence that can entirely replace
00:35:57
humans and can act autonomously in a way that even humans can't act and can
00:36:03
coordinate across different AIs, different data centers, to take over the world. Even if that's so, the
00:36:09
definition of AGI is artificial general intelligence, meaning that AI can acquire
00:36:15
new skills efficiently, in the same way that humans can acquire skills.
00:36:20
Right now AIs don't acquire skills efficiently; they require massive amounts of energy and compute,
00:36:26
entire data centers of compute, to acquire these skills. And I think there's,
00:36:31
again, a limit on how general intelligence can get. I think for most of
00:36:37
the time it's lagging behind what humans are capable of doing. The
00:36:44
singularity is based on this concept of an intelligence explosion. So
00:36:49
once you create an AGI, an artificial general intelligence, that intelligence will be
00:36:56
able to modify its own source code and create the next version that is much more intelligent. And the next version
00:37:03
creates the next version and the next, you know, for infinity, right? Within a
00:37:09
week, perhaps within milliseconds at some point. Yeah. Right. Because it might invent new computing
00:37:15
substrates and all of that; perhaps they'll use quantum computing. And so then you have this
00:37:22
intelligence explosion, in a way that makes it impossible to predict how the world is going to be.
00:37:29
And what I'm saying is, this is sort of like an end-of-times story. How would
00:37:35
you even prepare for that? So if that's coming, why would I spend my
00:37:41
time preparing for it? I think it's unlikely to happen. You can't see around the corner. Yeah, but I'd rather prepare.
00:37:48
That's what I was saying there. I'd rather prepare for the more likely world, in which we have access to tremendous
00:37:55
power, but the world's not ending and humans are still important.
00:38:02
I don't know why you say more likely. I mean, I think the structure of your argument is sound. You would
00:38:10
prepare for the world that might happen for which you can prepare. There's literally no point in trying to prepare
00:38:16
for a world you can't predict at all. The only thing you can do is just sort of upgrade your own skills and pay
00:38:23
attention. But if I have one message for the technologists, it's that your
00:38:28
confidence about what this can and cannot do is misplaced because you have
00:38:35
without noticing stepped into the realm of the truly complex. In the truly
00:38:42
complex, your confidence that you know what's going on should drop to near zero. Are these things conscious? I don't
00:38:50
know. But will they be? It's highly likely they will become conscious and that we will not have a test to tell us whether
00:38:56
that has happened. Elon Musk predicts that by 2029 we will have AGI that
00:39:04
surpasses the combined intelligence of all humans. And Sam Altman actually wrote a
00:39:10
blog three months ago that I read where he said, "We are confident now," Sam Altman being the founder of OpenAI, which
00:39:15
created ChatGPT, "We are confident now that we know how to build AGI as we have traditionally understood it." When I put
00:39:22
these things together, I go back to the central question of what role humans have in this, in the sort of
00:39:29
professional output, in GDP creation. If it's smarter than all humans combined,
00:39:36
if Elon Musk is correct there, and it's able to take actions across the internet
00:39:41
and continue to learn, this is the central question that I'm hoping I can answer today: where do we
00:39:46
go? Yeah. I mean, in my vision of the world, we are in the creative seat. We're
00:39:52
sitting there where we are controlling swarms of intelligent beings
00:39:59
to do our jobs. You know, the way you run your business, for example: you're sitting at a computer, you have an hour to work.
00:40:05
Yeah. And you're going to launch like a thousand SDRs, you know, sales
00:40:11
representatives, to go grab as many leads as possible, and you're generating a new update on Replit for
00:40:18
your website here, and then on this side you actually
00:40:23
have an AI that's crunching data about your existing business to
00:40:28
figure out how to improve it. And these AIs are kind of somehow all coordinating together. And I am trying to privilege
00:40:36
the human. This is my mission: to build tools for people. I'm not
00:40:42
building tools for agents; agents are a tool. And so ultimately, not only do I
00:40:48
think that humans have a privileged position in the world and in the
00:40:53
universe. We don't know where consciousness is coming from. We don't really have the science to explain it.
00:41:01
I think humans are special. That's one side: my belief that humans are
00:41:07
special in the world. And the other side, which I understand, is that the technology today, and I think for the
00:41:14
foreseeable future, is going to be a function of its training data. So there
00:41:19
was this whole idea: what if ChatGPT generates pathogens? Well, have you trained it on pathogens? They were
00:41:26
doing that kind of stuff in Wuhan. You know, I mean, a lot of the biotech companies are essentially using
00:41:32
artificial intelligence. I can think of AbCellera, I think it's AbCellera, in Canada; their whole business is using AI
00:41:39
to create new vaccines using artificial intelligence and bigger data sets than we've ever had before. And I know
00:41:46
because I was very close to one of the founders, of people involved in AbCellera. So that work is going on anyway. And if
00:41:51
we think about Wuhan, it's quite probably well known now that it came out of a lab, and people working in a lab, and
00:41:57
in that scenario it had a huge impact and shut down the world. The central question I'd love to answer
00:42:02
before I throw it back open to the room is about jobs, because I know that you have this perspective. What jobs are
00:42:10
going to be made redundant in a world where I am sat here as a CEO with a thousand AI agents, right? I was
00:42:15
thinking of all the names of the people in my company who are currently doing those jobs. I was thinking about my CFO when you talked about processing
00:42:21
business data, my graphic designers, my video editors, etc. So what jobs are going to be
00:42:26
impacted? Yeah, all of those. And what do they do? Maybe this is useful for the audience. I
00:42:34
think if your job is as routine as it comes, your job is gone in the next
00:42:40
couple of years. Meaning, in those jobs, for example quality
00:42:46
assurance jobs, data entry jobs, you're sitting in front of a computer and you're supposed to click and type
00:42:53
things in a certain order. Operator and those technologies are coming on the market really quickly, and those are
00:42:58
going to displace a lot of labor. Accountants? Accountants, yes.
00:43:05
I mean, I've just pulled a ligament in my foot and they did an MRI scan, and I had to wait a couple of days for
00:43:10
someone to look at the MRI scan and tell me what it meant. Yeah, I'm guessing that that's gone. Yeah, I think
00:43:16
the healthcare ecosystem is hard to predict because of regulation. And again, there are so many limiting
00:43:22
factors on how this technology can permeate the economy, because of regulations and people's willingness
00:43:28
to take it. But, you know, unregulated jobs that are
00:43:34
purely text-in, text-out: if in your job you get a message and you produce some kind of artifact
00:43:40
that's probably text or images, that job is at risk. So, just to give
00:43:46
you some stats here as well, about 50% of Americans who have a college degree currently use AI. The stats are
00:43:52
significantly lower for Americans without a college degree. So, you can see how a splinter might emerge there
00:43:57
and that crack will widen, because people like us at this table are all messing around with it. But my mom and
00:44:03
dad in Plymouth, in the southwest of rural England, haven't. They just figured out iPhones. I got them an iPhone and now they're texting
00:44:09
me back. AI is a million miles away. And if I start running off with my AGI, my agents, that gap is going to widen.
00:44:15
Women are disproportionately affected by automation, which is what you were talking about there, with about 80% of
00:44:21
working women in an at-risk job compared to just over 50% of men, according to the Harvard Business Review. And jobs
00:44:27
requiring only a high school diploma have an automation risk of 80% while
00:44:32
those requiring a bachelor's degree have an automation risk of just 20%. So we can see again how this will cause a
00:44:39
sort of split. It's also a huge risk with business process outsourcing, which is essentially Western countries sending
00:44:47
jobs to India, to the Philippines. At the moment millions of people have been lifted out of poverty through the
00:44:53
ability to do those kinds of business process outsourcing jobs, and those are all going to go. But these, they're
00:45:00
going to have a thousand employees. But also, these people are actually already transitioning to
00:45:05
training AIs. Mhm. You know, so there's going to be a massive industry around training AI. Until they're
00:45:11
trained. Well, no, you have to continuously acquire new skills, and this is what I'm talking about. I mean, this
00:45:17
is, again: if AI is a function of its data, then you need increasingly more data. And by the way, we ran out of
00:45:22
internet data. I was actually thinking, interestingly, that this might not be great for the United States or the UK, the Western world, because it is going to
00:45:28
be a leveler where now a kid in India doesn't need a Silicon Valley office and
00:45:33
$7 million in investment to throw up a software company, basically. Yeah, my
00:45:38
belief is, so, I have a broader definition of AGI and the singularity. For me AGI is: do we have
00:45:45
artificial general intelligence in terms of, generally speaking, can AI just do
00:45:50
stuff that humans used to be able to do? And we've already crossed that point. We have this general intelligence that we
00:45:56
can now all access, and 800 million people a week are now using ChatGPT; it's exploded in the last three months.
00:46:03
And then, to me, a singularity: when the first tractor went out onto a farm, for me that was a
00:46:10
singularity moment. Because for everyone who worked in farming, it used to take 100 people to plow a field, and now a
00:46:16
tractor comes along and two guys with the tractor can plow the field in just as much time, and now 98 people out
00:46:22
of 100 are completely out of a job. We also always underestimate a technology
00:46:28
if it does go on to change history. When you look back through cars, horses, planes: the Wright brothers just thought
00:46:34
of a plane as being something that the army could use; we had no idea of the applications. So someone said to me
00:46:40
recently, they said, "When it does change the world, we underestimate the impact that it will have."
00:46:45
And I see people now with their estimations of AI and AI agents already incredibly optimistic. And so if history
00:46:52
holds here, we're undershooting the impact it's going to have. And I think this is the first time in my life where
00:46:57
the industrial revolution analogies seem to fall a little bit short. Yeah. Because we've never seen intelligence
00:47:04
disrupted like this. I'm not an expert on this, but I could see that era as the disruption of muscles, whereas this
00:47:11
is the disruption of intelligence. That's exactly the thing: what makes human beings special is our
00:47:18
cognitive capacity and very specifically our ability to plug our minds into each
00:47:23
other, so that the whole is greater than the sum of the
00:47:29
parts. That's what makes human cognition special. And what we are doing is we are
00:47:34
creating something that can technologically surpass it without any
00:47:39
of the preconditions that make that a safe process. So yes, we've
00:47:45
revolutionized the world how many different times? It's innumerable. But you know, we've made farming
00:47:51
vastly more efficient. That's different than taking our core competency as a species and surpassing ourselves with
00:47:58
the product of our labor. I think your question is a good one. Then
00:48:03
what do we become? We only have one thing left. We have our muscles, which we got rid of in the industrial revolution,
00:48:10
and then we have our intellect, which is this digital revolution. Now we're left with emotions and agency. So we
00:48:15
essentially, the agency idea: I think we used to judge people on IQ, and
00:48:21
now AI is the big leveler, and going forward for the next 10 years we're going to look at: are you a high-agency
00:48:27
person or a low-agency person? Do you have the ability to get things done and coordinate agents? Do you have the
00:48:33
ability to start businesses or give orders to digital armies? You know,
00:48:39
essentially these high-agency people are going to thrive in this new world because they have this other thing
00:48:46
that's been bubbling under the surface. Which is really interesting: when you said agency is going to remain an
00:48:51
important thing, we're sat here talking about AI agents, and the crazy thing in a world of AI agents that have super
00:48:58
intelligence is I can just tell my agent: listen, I'm going on holiday; please build me a SaaS company that spots a market
00:49:04
opportunity, throw up the website, post it on my social media channels, I'll be in Hawaii. And this new agentic world is
00:49:12
stealing that too, because now it can take action in the same way that I can. It can browse the internet, call Domino's Pizza,
00:49:19
speak to their agentic agent, and organize my pizza to be there before I even wake up. And in fact, on predictability, you
00:49:24
know, OpenAI now learns, and Sam Altman said that they've expanded the memory feature. So it's knowing
00:49:30
more and more about me. It'll almost be able to predict what I want when I want it. It'll know Steve's calendar: he's arriving at the
00:49:37
studio. Make sure his cadence is on the side. Make sure his iPad has the brief on it. Do the brief. Do the research for
00:49:44
me. And everything else: say, remember Brett's birthday, so when I arrive there'll be something. In fact, it's
00:49:50
removing my need for any agency. Yes. And, you know, again, I don't know how to make this point so that it occurs to
00:49:57
people what I'm really suggesting. But today, maybe it's not conscious, but
00:50:04
well, let me put it to you this way. If you're conscious, you started out as a child
00:50:11
that wasn't. And although this may not fully encapsulate it, you are effectively an LLM, right? You go from
00:50:18
an unconscious infant to a highly conscious adult. And the process by
00:50:24
which you do that has a lot to do with being trained effectively on words and
00:50:30
other things in an environment in exactly the way that we now train these AIs. So the idea that we can take
00:50:36
consciousness off the table, that it won't be there till we figure out how to program it in, and that we're safe because we don't know how consciousness works: I take the
00:50:42
opposite lesson. We've created the exact thing that will produce that phenomenon and then we can have philosophers debate
00:50:49
whether it's real consciousness or it just behaves exactly as if it were. And the answer is those aren't different.
00:50:54
Doesn't matter. And the same thing is true for agency. You know, especially if you've created an environment in which
00:51:00
these AIs are de facto competitors, what you're effectively doing is creating an
00:51:06
evolutionary environment in which they will evolve to fill whatever niches are there. And we didn't spell out the
00:51:11
niches. So, I have the sense we have created
00:51:17
something that truly is going to function like a new kind of life and it's especially troubling because it
00:51:23
speaks our language. So that leads us to believe it's more like us than it is and it's actually potentially quite
00:51:30
different. But by the way, he's the optimist here, right? Like, he's so
00:51:36
optimistic about LLMs and how they're going to evolve. Yes.
00:51:42
It's amazing. It's amazing technology. Like I think it raised global IQ, right?
00:51:47
Like, 800 million people are that much more intelligent, and emotionally intelligent as well.
00:51:53
Like, I know people who previously were very coarse and they kind of rubbed
00:51:58
people the wrong way. They would say things in a not-so-polite way,
00:52:03
and then suddenly they started putting what they're saying through ChatGPT in order to kind of make it
00:52:11
kinder and nicer, and they're more liked now. And so not only is it making us more intelligent, but it also allows us
00:52:18
to be the best version of ourselves. And the scenario that you're talking about, I don't know what's
00:52:24
wrong with that. Like, you know, I would want less agency in certain places. Like, I would want
00:52:32
something to help me not, you know, open up a peanut butter jar at night, right?
00:52:38
You know, there are places in my life where I need more control and I would
00:52:44
rather cede it to some kind of entity that could help me make better choices.
00:52:51
I mean, unfortunately, even if there is some small group of elites who are able
00:52:58
to go to Hawaii while something else does the mundane details of their business
00:53:04
building. We are rather soon going to be faced with a world that has billions of
00:53:11
people who do not have the skills to leverage AI. Some of them will be
00:53:17
necessary for a time. You're going to need plumbers. But this is
00:53:23
also not a long-term solution because not only are there not enough of those
00:53:29
jobs, but of course we have humanoid robots that, once imbued with AI capacity,
00:53:37
will also be able to take those. They'll be able to crawl under your house into the crawl space and fix your
00:53:42
plumbing. So what typically happens when you have a massive economic contraction that
00:53:49
arises from the fact that a huge number of people are out of work is that the
00:53:54
elites start looking at those people and thinking, well, we don't really need them anyway. And so the idea that this AI
00:54:01
disruption doesn't lead us to some very human catastrophe I think is overly
00:54:06
optimistic, and that we need to start preparing right now. What are the rights of a person who has had whatever it is
00:54:12
that they've invested in completely erased from the list of needs? Is that person responsible for not having
00:54:18
anticipated AI coming? And is it their problem that they are now starving and being eyed by others as, you
00:54:25
know, a useless eater? I don't think so. How is it different than when,
00:54:30
what's it called, the power loom came and the textile workers, you know, the result of the Luddite
00:54:35
sort of revolution? How is it different than any time in history
00:54:41
when technology automated a lot of people out of jobs? I would
00:54:47
say scale and speed, that's how it's different. And the scale and speed are going to result in an unprecedented
00:54:54
catastrophe because the rate at which people are going to be simultaneously sidelined not just in one industry but
00:54:59
across every industry is just simply... And it also did actually happen. There
00:55:05
was, for the first 50 years of industrialization, from the late 1700s to the early 1800s, you actually...
00:55:13
the Charles Dickens novels are essentially about people coming from the farms who are displaced, arriving in cities,
00:55:20
kids living on the streets. The British decided to pick everyone up and send them over to Australia,
00:55:27
which is where I came from. And, you know, there was
00:55:33
this massive issue of displacement. I think we're going to go into a high-velocity economy where, rather than this
00:55:39
long arc of career that lasts 45 years, we're going to have these very fast
00:55:45
careers that last 10 to 36 months. You invent something, you
00:55:51
take it to market, you put together a team of five to 10 people who work together, you then get disrupted, you
00:55:58
come... Can I mention a story here? There's an entrepreneur that used Replit
00:56:03
in a similar way. His name is Billy Howell. You can find him on YouTube, on the internet. He would go to Upwork and
00:56:10
he would find what people are asking for, different requests for certain apps, technologies. Then he would take
00:56:18
what they're asking for, put it into Replit, make it an application, call them, and tell them, "I already have your
00:56:24
application. Would you pay $10,000 for it?" And so that's sort of an arbitrage opportunity that's
00:56:29
there right now. That's not arbitrage. That's theft. No, what is it? How is that theft? You have somebody who has
00:56:35
an idea that can be brought to market and somebody else is cryptically
00:56:40
detecting it and then selling back their own idea to them. Well, they're paying them to do that. They're saying, "I will
00:56:46
give you $500 if someone makes this for me." Right? But this is what I more or less think is going to happen
00:56:51
across the whole economy: yes, from this perspective, we can see that everybody is suddenly empowered to build
00:56:58
a great business. Well, what do we think about the folks who are going to be displaced from the top? What are they
00:57:04
going to think about all these people building all of these highly competitive businesses? And are they going to find a way to do, you know, what venture
00:57:11
capital has done or what record producers have done? What they're going to do is they're going to take their superior position at the top and they
00:57:18
are going to take most of the wealth that is produced by all of these people who have these ideas that in a proper
00:57:24
market would actually create businesses for them, and they're going to parasitize them. I think that with this
00:57:33
introduction of AI and AI agents, old value has moved, and now it's
00:57:39
not going to be the case that the idea itself is the moat and it's not going to be the case that resources are the moat.
00:57:44
So in such a scenario you still have to figure out distribution. You still have to have, for example, an audience. So
00:57:50
if you're a podcaster and you have a million followers on Twitter, you're in a prime position because you now have
00:57:55
something that the guy with a great idea but no audience doesn't have: you have built-in distribution. So I now think
00:58:01
actually much of the game might be moving to, yeah, still taste and ideas, but also the moat is
00:58:08
distribution. Yeah. And speaking of adaptive systems, one of the
00:58:13
adaptations that will happen is people will seek humans and will seek proof
00:58:19
of humanity. Oh, I agree that authenticity is going to become the coin of the realm, and anything that can be
00:58:27
faked or cheated is going to be devalued. And things like, you know, spontaneous jazz, or
00:58:34
comedy that is interactive enough that it couldn't possibly have been generated with the aid of AI, those
00:58:40
things are going to become prioritized, you know, spontaneous oratory rather than speeches. Does that answer some of your questions?
00:58:47
No, it answers my question for the tiny number of people who are in a position to do those things. Stephen,
00:58:54
you used the word moat, which I think is a really important word for entrepreneurs. We, like, have to
00:58:59
have a moat. We think a lot about moats, and it's an industrial-age idea. A lot of people don't even know what
00:59:04
you mean by a moat. I often think about this idea of what are the moats that are left. So to define how I
00:59:11
define a moat: you've got a castle and it's got a small circle of water around it. And once upon a time that
00:59:17
circle of water defended the castle from attack, and you could pull up the drawbridge so nobody could attack you very easily. It's a defense from
00:59:23
something. So it's your shield. It's your defense. And once upon a time as an entrepreneur... you know, I've got a software company in San
00:59:29
Francisco called Thirdweb, and we raised almost $30 million. We have a team of 50 great developers. And much of
00:59:36
our moat was you can't compete with us if you don't have the 50 developers and the $30 million in the bank. How much of
00:59:42
that $30 million went to coding? The vast majority of it. I mean, what else are we going to do? So this is
00:59:48
a good thing. I think moats are a bad thing. Okay, let me make the argument there. So everyone is looking for
00:59:54
moats, you know. For example, one of the more significant moats is network
00:59:59
effects. Yeah, you know, so you can't compete with Facebook or Twitter
01:00:05
because to move people from Facebook or Twitter, it's the collective action problem. You need to move them
01:00:11
all at once, because if one of them moves, then the network is not
01:00:17
valuable, they'll go back. So you have this chicken and egg problem. Let's say that we have a more decentralized way of
01:00:24
doing social networks that will remove the power of Twitter to kind of censor
01:00:31
and I think you're at the other end of censorship, right? And so part of my optimism about humanity is that um
01:00:39
generally there's self-correction. Democracy is a self-correcting system. Uh free markets are largely
01:00:45
self-correcting systems. There are obvious problems with with free markets that that we can discuss. But take um
01:00:52
health. You know, there is an obesity epidemic. This period of time when
01:00:59
companies, you know, ran loose making these sugary, salty, fatty kinds of
01:01:06
snacks and everyone gorged on them and everyone got very uh you know, unhealthy. And now you have Whole Foods
01:01:13
everywhere. Today, people in Silicon Valley, they don't go to bars at all. They go to running clubs. That's how you
01:01:19
meet. That's how you go find a date. You go to running clubs. And so, there was a shift that happened because there was a
01:01:27
reaction. Obviously, cigarettes is another example. You know, you were talking about phones and our addiction
01:01:33
to phones. And I see a shift right now in my friend circle: people
01:01:39
who are constantly on their phones are already kind of frowned upon, and people don't want to hang out with you because you're constantly staring
01:01:45
at your phone. So there are always these reactions. But the problem is, you reference self-correction, and
01:01:52
I agree that there's actually an automatic feature of the universe in which the self-correction happens. You can't have a positive feedback that
01:01:58
isn't reined in by some outer negative feedback. But the corrections, the list
01:02:04
of corrections involves things like the ones you point to, where people become enlightened and they realize that they're doing
01:02:10
themselves harm with either the sugar that they're consuming or the dopamine traps on their phone and they get
01:02:16
better. But also on the list of corrective patterns are genocide and war
01:02:22
and, you know, parasitism. And the problem is these things are destructive of
01:02:30
wealth. And so you allude to the superiority of an open market
01:02:38
without moats. Presumably the benefit of that is that more wealth gets created because people aren't kept from doing
01:02:44
things that are productive. I see that. But then what is the product of all of
01:02:49
this new wealth that is going to be generated by a world empowered by AI? Does it end up so highly concentrated
01:02:55
that you have a tiny number of ultra elites and a huge number of people who are utterly dependent on them? What
01:03:02
becomes of those people? The learning process, the self-correction process, goes through harm in order to get to
01:03:09
that more enlightened solution. There's nothing that protects us from the harm phase being so apocalyptically terrible
01:03:17
that, you know, we get to the other side of it and we say, "Well, that was a hell of a correction." Or maybe there's nobody there to even say that. Those are
01:03:23
also on the table. It reminds me of a mouse trap where you see the cheese and we're going, "Oh my god, my grandmother's going to be able to do
01:03:28
some research and oh my god, my life's going to get easier." So you head closer and closer to the cheese.
01:03:34
And historically, if we look at all of the last 10,000 years, it's a very small
01:03:39
number of elites who own absolutely everything and a very large number of serfs and peasants who have a
01:03:46
subsistence living. You know, if the elites are too greedy and they freeze out the peasants at too high a level and
01:03:52
they try to use brutality, then, yeah, eventually it comes back to haunt them. And so what you get is a recognition
01:03:59
that you need a system that does balance these things, and you know the West has the best system that we've ever
01:04:05
seen. It's one in which we agree on a level playing field. We never achieve it but we agree that it's a desirable thing
01:04:11
and the closer we get to it the more wealth we create. But again,
01:04:17
if AI empowers those with ill
01:04:23
intent at a higher rate than it empowers those who are wealth-creating and
01:04:28
pro-social, we may be in for a massive regression in how fair the market
01:04:34
of the West is. Is that your top concern versus economic displacement? And I think they're the same thing. How
01:04:41
are they the same thing? Because the economic displacement is going to start. I don't know how
01:04:48
many million people are going to be displaced from their jobs in the US. Suddenly, we're going to have a question
01:04:54
about whether or not we have obligations to them. And you agree with that, don't you? Yes. But again, it's
01:05:01
the no pain, no gain. I mean, we're going to go through a period of disruption. And I think at the other end, the old, you know, sort of
01:05:08
oppressive systems will be broken and we're going to create perhaps a fairer
01:05:13
world, but it's going to have its own problems. And what's the scale of that disruption in your estimation?
01:05:19
It's hard to say because uh you know there's this concept of limiting factors
01:05:24
like, you know, there is regulation, there's the appetite of people. Today,
01:05:30
for example, the healthcare system is very resistant to innovation because of regulation, you know, and that's
01:05:37
a bad thing. On the regulation point, it's worth saying that when Trump came into power he signed a new executive order which is
01:05:44
called Removing Barriers to American Leadership in AI, which revokes previous AI policies that were deemed to be
01:05:50
restrictive. And obviously when you think about where the funding is going in AI, it's going to two places. It's going to America and it's basically
01:05:56
going to China. That's the the vast majority of investment. So with those two in competition, any regulation that
01:06:02
restricts AI in any way is actually self-sabotage. Mh. And this is, you know, I live in Europe some of the time
01:06:09
and it's already annoying to me that when Sam Altman and OpenAI released the o3 model, this new incredible model,
01:06:15
it's not in Europe because Europe has a regulation which prevents it from coming to Europe. So, we're now at a
01:06:20
competitive disadvantage, um, which Sam Altman has spoken about. And more broadly on this point of
01:06:26
disruption, I was quite unnerved when I heard that Sam Altman's other startup was called
01:06:33
Worldcoin. And Worldcoin was conceived with the goal of facilitating universal
01:06:39
basic income, i.e. helping to create a system where people who don't have money are given
01:06:46
money by the government just for being alive to help them cover their basic food and housing needs. Which suggests
01:06:51
to me that the guy that has built the biggest AI company in the world can see something that a lot of us can't see. Yeah. There's gonna need
01:06:58
to be a system to just hand out money to people because they're not going to be able to survive otherwise. I fundamentally disagree with that. Which
01:07:04
part do you disagree with? I disagree, first of all, that humans would be happy with UBI. I think that, you know,
01:07:12
a core value of humans, and be curious about the evolutionary reasons,
01:07:17
is we want to be useful. It's really important to know that a lot of the jobs that are at risk are the most high
01:07:23
status, highly paid jobs in the world. Let's take the highest paid job in America um which is an
01:07:29
anesthesiologist. Uh, this is the highest paid salaried job.
01:07:35
Salaried job. Yeah. And the majority of that job is observing a patient, knowing
01:07:40
which type of medication would work best with their body. um giving them the exact right amount, monitoring the
01:07:47
impact of that on the body, and then making slight adjustments. With the
01:07:52
right technology, any nurse will be able to do that job. And you might have
01:07:57
one anesthesiologist on site supervising 10, 20, 30, 40 wards
01:08:05
and the technology is, you know, doing the job, but that one person is there just to kind of supervise if something
01:08:11
went wrong or if there was an ethical dilemma. What's wrong with that? I mean, if the precision is better. No, there's nothing wrong with
01:08:17
that except for the fact that a lot of people, hundreds of thousands of people have spent their entire life training to
01:08:23
be that, they get an enormous amount of purpose and satisfaction about the fact that that's their career, that's their
01:08:28
job. They have mortgages, they have houses, they have status, and that's about to go away. Well, if it's the highest
01:08:35
paid job, maybe you should start saving. Yeah. Well, I mean, I
01:08:43
hear you, but you're talking about people who have done vital work. Mhm.
01:08:50
Highly specialized work, and are therefore not in a great position to pivot based on the invention of a
01:08:56
technology that they didn't see coming because frankly, I mean, in the abstract, maybe we all saw AI coming
01:09:02
somewhere down the road, but we did not know that it was going to suddenly dawn. And we do have to figure out what to do
01:09:08
with those people. It's not their fault that they've suddenly become obsolete and it's inconceivable that people will
01:09:16
accept this. It is not. It is fundamentally incompatible with our nature. We have to have things to strive
01:09:22
for and you know you can sustain life that way but you cannot um sustain a
01:09:30
meaningful existence and so it's a short-term plan at best. Let's talk about meaning. Um on that point of job
01:09:36
displacement, this is already happening. Klarna's CEO, who has been on this podcast before, a great guy, um, said in a
01:09:43
blog post published on Klarna's website that they now have AI customer service agents handling 2.3
01:09:50
million chats per month, which is equal to having to hire 700 full-time people
01:09:55
to do that. So, they've already been able to save on 700 customer service people by having AI agents do that.
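As a rough sanity check on the figures quoted here (a sketch, not from the episode; the 21 working days per month is an assumption of mine):

```python
# Back-of-envelope check of the Klarna figures quoted above:
# 2.3 million AI-handled chats per month said to equal ~700 full-time agents.
chats_per_month = 2_300_000
fte_equivalent = 700

chats_per_agent_month = chats_per_month / fte_equivalent    # ~3,286 chats/month
working_days = 21                                           # assumed working days/month
chats_per_agent_day = chats_per_agent_month / working_days  # ~156 chats/day

print(round(chats_per_agent_month))  # 3286
print(round(chats_per_agent_day))    # 156
```

About 150 chats per agent per day is a plausible customer-service workload, so the "700 full-time people" equivalence is at least internally consistent.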
01:10:03
And did they actually get rid of those 700 jobs? I don't have that information in front of me, but I'll have a look. Um, I'll throw it up
01:10:10
on screen for anyone that wants context on that. But that's already happening. This isn't some anesthesiologist or something.
01:10:15
And these aren't high paid people in every case. We've done something similar, by the way. Internally,
01:10:20
we've replaced that function by 70%.
01:10:26
Yeah. I mean, our company, we're 65 people and, you know,
01:10:32
we make, you know, millions per head. So, it's a Are you going to need to hire more people to get up to
01:10:38
I think so, but we're hiring slowly. Like, you know, we're using customer support AI, and that
01:10:44
means that we need less customer support, and we're trying to leverage AI
01:10:50
as much as possible. The person in HR at Replit writes software using
01:10:56
Replit. So, I'll give you an example. She needed org chart software, and
01:11:01
she looked at a bunch of them, got a lot of demos, and they're all very expensive and they're missing the kind of
01:11:08
features that she wanted. For example, she wanted version control. She wanted to know when something changed and
01:11:13
to go back in history. She went into Replit, and in 3 days she got exactly the kind of software that she wanted. And
01:11:20
what was the cost? You know, perhaps $20, something like that, $20, $30, once, right. And, um, how many employees in
01:11:27
HR do we need? Right now we have two. Uh, if they're highly levered like that,
01:11:33
maybe we do not need a 20-person HR team. On this point of meaning, I've heard so many billionaires
01:11:41
in AI describe this as the age of abundance, and I'm not necessarily sure if abundance is always a great thing,
01:11:47
because, you know, when we look at mental health and we look at why and how people
01:11:53
derive their meaning and their purpose in life, much of it is having something to strive towards and some struggle in a meaningful direction. And this is
01:12:00
maybe adjacent, but there was a study done, I think it was in Australia, where they looked at suicide letters, and
01:12:06
the sentiment of men in those suicide letters was they didn't
01:12:12
feel worthy, they didn't feel like they were worth it. They didn't feel like they were needed by their families. And
01:12:19
this is much of what caused their psychological state. And I wonder in a world of abundance where we, you know, a
01:12:25
lot of these AI billionaires are telling us that we're going to have so much free time and we're not going to need to work. If there is at all going to be a
01:12:31
crisis of meaning, a mental health problem. I mean, there already is. And it doesn't require AI and it's going to
01:12:37
get worse. I don't know what to do about it because essentially as human beings we are built like all organisms to find
01:12:47
opportunity and figure out how to exploit it. That's what we do. And the world you're describing is really the
01:12:53
opposite of that. It's one where you're effectively having your biological needs
01:12:59
at the physiological level satisfied and there isn't an obvious place for your
01:13:06
spare time if that's what you end up with to be utilized in something that
01:13:12
you know, there's no place to strive. And I do imagine, almost at best, what would
01:13:17
happen is you have people who are being sustained by a universal basic income
01:13:24
and then parasitized. Uh, you know, whatever currency they have to spend, somebody will be targeting it,
01:13:30
and they will be targeting it with an AI-augmented system that spots their
01:13:36
defects of character. I mean, again, we're already living in this world, but it will be that much worse when the AI
01:13:42
is figuring out, you know, what kind of porn to target you with specifically. That's uh it's a nightmare scenario. And
01:13:50
I do think it would be worth our time as a species to start considering if we are
01:13:57
about to find ourselves in this situation, and to find some way of dealing with the basic needs of the
01:14:02
large number of people who are going to be sidelined. What would a world have to look like in order for them to have real
01:14:09
meaning? Not pseudo meaning, not something that you know superficially, you know, a video game is not meaning
01:14:14
even if it feels very meaningful in the moment. I think that would be a worthy investment for us to figure out
01:14:20
how to produce it. But frankly, I'm not expecting us to either have that conversation or get very far down that
01:14:27
road. I think it's much more likely that we will squander the wealth dividend that will be produced by AI.
01:14:34
Interestingly, you also see in Western countries that when we get more abundance, we start having less kids.
01:14:40
And we're already seeing this sort of population decline in the Western world, which is kind of scary. I think it's
01:14:46
often associated with affluence: the more money someone makes, the less likely they are to want to have children, the more they try to protect
01:14:52
their freedoms. But also on this point of AI: relationships are hard, you know. My
01:14:58
girlfriend is happy sometimes and not happy other times and I have to like you
01:15:04
know go through that struggle with her of like working on the relationship. Children are hard and if we are
01:15:10
optimizing ourselves and, you know, much of the reason that I sustain the struggle with my girlfriend is, I'm sure, for some evolutionary reason, because I
01:15:16
want to reproduce and I want to have kin but if I didn't have to deal with the
01:15:23
struggle that comes with human relationships romantic or platonic there's going to be a proportion of people that actually choose that outcome
01:15:29
and I wonder what's going to happen to birth rates in such a scenario because we're already struggling. We're already in a situation where we used to be
01:15:35
having five children per woman in the 1950s to about two in
01:15:45
2021. And we're seeing a decline. If you look at South Korea, their fertility rate has fallen to 0.72, the lowest
01:15:51
recorded globally. And if this trend continues, the country's population could halve by 2100.
01:16:00
So yeah, relationships, connections. And also, I guess we've got to overlay that with the loneliness
01:16:05
epidemic. They promised us social connection when social media came about. When we
01:16:12
got Wi-Fi connections, the promise was that we would become more connected. But it's so clear that because we spend so
01:16:17
long alone, isolated, having our needs met by Uber Eats drivers and social media and TikTok and the internet, that
01:16:23
we're investing less in the very difficult thing of like going and making a friend and like going and finding a
01:16:29
girlfriend. Young people are having sex less than ever before. Everything that is associated with the difficult job of
01:16:36
making in-real-life connection seems to be falling away.
01:16:41
I will make the case that everything that we've discussed here, all the negative things around loneliness, um,
01:16:48
around meaning, they're already here. And I don't think blaming technology for
01:16:55
all of it is the right thing. Like, I think there are a lot of things that happened because of existing human,
01:17:04
you know, impulses and motivations. Um, well, I wanted to go back to where
01:17:10
you started because I do think that this maybe is the fundamental question. Why is it that we are already living in a
01:17:17
world that is not making us happy? And is that the responsibility of technology? And I don't think it's
01:17:22
exactly technology. Human beings, uh, among our gifts, are fundamentally technological, whether we're talking
01:17:28
about quantum computing or flint-knapping an
01:17:35
arrowhead. What has happened to us that has created the growing, spreading, morphing
01:17:42
dystopia is a process that Heather and I, in our book, A Hunter-Gatherer's Guide to
01:17:47
the 21st Century, call hyper-novelty. Hyper-novelty is the fact of
01:17:54
the rate of change outpacing our capacity to adapt to change. And we are
01:18:00
already well past the threshold here where the world that we are young in is not the world that we are adults in. And
01:18:07
that mismatch is making us sick across multiple different domains. So the
01:18:13
question that I ask is: is the change that you're talking about going to reduce the rate of change, in which case
01:18:21
we could build a world that would start meeting human needs better, open opportunities for pursuing meaningful
01:18:27
work. Or is it going to accelerate the rate of change, which is in my opinion guaranteed to make us worse off? So if
01:18:35
it was a one-time shift, right, AI is going to dawn. It's going to open all sorts of new opportunities. There's
01:18:41
going to be a tremendous amount of disruption, but from that we'll be able to build a world. Is that world going to
01:18:46
be stable or is it going to be just, you know, one event horizon after the next? If it's the latter, then it effectively
01:18:54
says what it does to the humans, which is it's going to dismantle us. When I
01:19:00
look out at society, I go, okay, it's having a negative impact. When I look at individual use cases, it's having a
01:19:07
profoundly positive impact. Including for me, it's having a very positive impact. So, it's it's one of these
01:19:13
things where I wonder what it is that we need to teach people at school so that they understand the world
01:19:20
that we're going into? Because one of the biggest issues that we're having is that we're sending kids to school with
01:19:26
this blueprint, this template that they're going to have this long arc career that no longer exists that
01:19:33
essentially we're treating them like learning LLMs. And we're saying, "Okay, we're going to prompt you. You're going
01:19:38
to give us the right answer. You're going to hallucinate it if possible. And, you know, then we go, "Okay, now
01:19:44
go off into the world." And they go, "Oh, but wait a second. I don't know how money works. I don't know how society works. I don't know how my brain works.
01:19:50
I don't know how I'm meant to handle this novelty problem. I'm not sure how to approach someone in a social
01:19:56
situation and ask if they want to go on a date." Um so all the important things that actually are the important
01:20:01
milestones that people want to be able to hit, and that technology can actually have an impact on, we get no user manual for.
01:20:09
So I think one of the biggest things that has to happen is we have to equip young people all through school
01:20:17
to actually prepare them for the world that's coming or the world that's here.
01:20:22
Well, on the one hand, I think you outline the problem very well. Effectively, we have a model of what
01:20:29
school is supposed to do that, you know, at best was sort of a match for the '50s or something like that, and it woefully
01:20:36
misses the mark with respect to preparing people for the world they actually face. If we were going to prepare them, I
01:20:44
would argue that the only toolkit worth having at the moment is a highly general
01:20:50
toolkit. The capacity to think on your feet and pivot as things change is the only game in town with respect to our
01:20:57
ability to prepare you in advance. Maybe the other auxiliary component to
01:21:02
that would be teaching you what we know, which is frankly not enough, about how to
01:21:08
live a healthy life. Right? If we could induce people into the kinds
01:21:14
of habits of behavior and the consumption of food and then train them
01:21:20
to think on their feet, they might have a chance in the world that's coming. But uh the fly in the ointment is we don't
01:21:28
have the teachers to do it. We don't have people who know. And that is the question: could AI actually be
01:21:34
utilized in this manner to actually induce the right habits of mind for
01:21:40
people to live in that world. I spent a lot of time in education technology. One thing that is, as we say
01:21:47
on the internet, a black pill about education in general, about education interventions, is there's a lot of data
01:21:54
that shows that there are very few interventions you can make in education
01:21:59
to generate better outcomes. Um, and so, you know, there's been a lot of
01:22:05
experimentation around pedagogy, around, you know, how to configure the classroom, that has resulted in very
01:22:11
marginal improvements. There's only one intervention, and this has been reproduced many times, that creates two
01:22:19
sigma, two-standard-deviation, positive outcomes in education,
01:22:24
meaning you're better than roughly 98% of everyone else, and that is one-on-one
01:22:31
tutoring. I thought so, I was going to say smaller classrooms and personalization. One-on-one tutoring, yeah. And, by
01:22:36
the way, someone also did a survey of all the geniuses, the understanders of the world, and found
01:22:42
that they all had one-on-one tutoring. They all had someone in their lives that took interest in them and tutored them.
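For context, the "two sigma" here refers to Bloom's two-sigma finding on one-on-one tutoring. On a standard normal distribution, two standard deviations above the mean corresponds to roughly the 97.7th percentile, which can be checked with the error function:

```python
import math

# The "two sigma" claim: one-on-one tutored students score about two
# standard deviations above the classroom average (Bloom's finding).
# On a standard normal distribution, +2 sigma is the ~97.7th percentile.
def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

percentile = normal_cdf(2.0) * 100
print(round(percentile, 1))  # 97.7
```

So "better than roughly 98% of everyone else" is the precise reading of a two-standard-deviation gain.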
01:22:48
So, what can create one-on-one tutoring opportunity for every child in the
01:22:53
world? AI. AI. My kids use it and it's incredible. Yeah. As in, like, they're
01:22:59
interacting and it's adapting to their speed. Yes. And it's giving them different analogies to work with.
01:23:06
So, like, you know, my son was learning about division and it's asking him to smash glass and how many pieces he
01:23:13
smashes it into with this hammer and, you know, and it's saying things like, "No, Xander, go for it. Really smash
01:23:18
it." And he's loving it, right? Is that Synthesis? Yeah. Yeah. I'm an investor in this company. Oh, well, it
01:23:24
was great to watch, that simulated one-on-one tutoring, because it's talking to him. It's asking him
01:23:31
questions. Brett, you're an educator. You spent much of your life teaching people in universities. How do you
01:23:38
receive all of this? Well, on the one hand, I agree that the uh the closer to
01:23:43
one to one you get, the better. But I also personally believe that zero to one
01:23:50
is best. And what I mean by that is part of what's gone wrong with our
01:23:56
educational system is that it is done through abstraction.
01:24:03
And effectively the arbiter of whether you have succeeded or failed in learning
01:24:09
the lesson is the person at the front of the room. And that's okay if the person at the front of the room is truly
01:24:15
insightful. And it's terrible if the person at the front of the room is lackluster, which happens a lot. So what
01:24:23
doesn't work that way is interaction with the physical world in which nobody has to tell you whether you've succeeded
01:24:29
or failed. If you're faced with an engine that doesn't start, you can't argue it into starting. You have to
01:24:36
figure out what the thing is that has caused it to fail, and then there's a great reward when you alter that thing
01:24:42
and suddenly it fires up. So, I'm a big fan of being as light-handed as possible
01:24:49
and as concrete as possible in teaching. In other words, uh, when I've done it, and not just with students, but with my
01:24:55
own children, I like to say as little as possible, and I like to let physical
01:25:00
systems tell the person when they've succeeded or failed. And that creates an
01:25:06
understanding. You can extrapolate from one system to the next. And you know that you're not just extrapolating from
01:25:11
one person's misunderstanding. You're extrapolating from the way things actually work. So, I don't know if AI
01:25:18
can be leveraged in that context. My sense is there's probably a way to do it, but one would have to be deliberate
01:25:24
about it, especially with robotics and humanoid robots. Actually, that is the place where you can do
01:25:31
this, with robotics, it seems to me. Yeah. Well, robotics
01:25:38
will teach you the physical computing part of it. And then the question is how do you infuse this with AI so that
01:25:45
it, you know, provokes you out of some eddy where you're caught and
01:25:51
moves you into the ability to solve some next-level problem that you wouldn't have found on your own. What do
01:25:57
what do you think should be taught in the classroom, with everything that you now know? Well, you're all fathers here.
01:26:03
You all have your own children. So, it's a good question for you. How old are your kids? How old are your kids? Uh
01:26:09
three and five. 19 and 21 and six, seven, and 10. My children are very
01:26:15
young, but we already do use AI, and I sit down with them in front of Replit and we generate ideas and make
01:26:21
games. And um I would say, you know, what Brett said about generality is very important. The ability to pivot and kind
01:26:27
of learn skills quickly. Being generative is very very important.
01:26:33
Having, you know, a fast pace of generating ideas and iterating on those ideas. We sit down in front of ChatGPT
01:26:41
and my kid imagines a scenario. Oh, what if, you know, there's a cat on the moon, and then, you know, what if the
01:26:47
moon is made of cheese and what if there's a mouse inside it or and so we keep generating these um variations of
01:26:55
these different ideas, and I find that, you know, makes them more imaginative and creative. Uh, rule
01:27:01
number one that I tell my kids is stay away from porn at all costs. I'd rather
01:27:07
you have a drug problem than a porn problem. And I actually mean that. I think porn is more dangerous to the human being,
01:27:13
as bad as a drug problem is. But when we get to the question of how to confront the world and uh the things that you're
01:27:20
going to be expected to do in the workplace and all of that, my point to
01:27:26
them is you are facing the dawning of the age of complex systems
01:27:36
that you are going to have to interact with. And in the age of complex systems, you have to understand that you cannot
01:27:42
blueprint a solution. And you have to approach these systems with an upgraded
01:27:49
toolkit of humility because the ability of the system to do something you don't
01:27:54
predict is much greater than with a merely complicated system. So you have to anticipate that and be very sensitive to
01:27:59
the fact that what you intended to happen is not what's going to happen. So you have to monitor the unintended
01:28:06
consequences of whatever your action is and that there are really two tools which work. One of which you just
01:28:12
mentioned, which is prototyping. You prototype things. You don't imagine that I know the solution to this and I'm
01:28:18
going to build it. You imagine I think there's a solution down there. I'm going to make a proof of concept and then I'm
01:28:23
going to discover what I don't know and I'm going to make the next version. Discover what I don't know and eventually you may get to something that
01:28:29
actually truly accomplishes the goal. So prototyping is one thing. And also instead of using the blueprint
01:28:36
as the metaphor in your mind, navigate. You can navigate somewhere. And, you know,
01:28:42
the way I think of it is a surfer is in some ways mastering a complex
01:28:50
system, but they're not doing it by planning their way down the waves. You can't do that. What you can
01:28:56
do is you can be expert at absorbing feedback and navigating your way down
01:29:01
the wave. And that's the right approach for a complex system. Nothing else is going to work. And so I guess
01:29:07
the final piece is general tools always, no specialization. This
01:29:13
is the age of generalists, and invest in those tools and they will pay.
01:29:19
So the guiding philosophy for me is to produce high-agency generalists. So
01:29:24
um ultimately I want them to be motivated self-starters and have a wide general toolkit. I imagine them very
01:29:31
much what you imagine which is instructing robots, instructing agents,
01:29:36
coming up with ideas. Um, and I imagine them having a very high velocity life
01:29:41
where they may be writing a book, organizing a festival, having a podcast, starting a business, and being part of
01:29:47
somebody else's business, all at once, as they are of the ADHD generation. Yeah. Right. Exactly. Um, so the high-agency
01:29:54
generalist is the kind of guiding philosophy. Some of the things that we do is like we do chess, we do Brazilian
01:30:00
jiu-jitsu, we do dancing, we do acting classes, playing in nature, uh entrepreneurship, understanding that you
01:30:07
can start a lemonade stand. We just did lemonade stands, which was amazing. Uh, we sold lots of lemonade on the street. So
01:30:13
those kind of things and jumping from one thing to the next thing, but also trying to avoid too many screens and
01:30:19
forcing them into making stuff from what's going on around the house. Um,
01:30:25
some distinctions that we try and give them is the difference between creating and consuming because I think AI has
01:30:30
this superpower of making you a hyper consumer or a hyper creator. Um, and if you don't understand the distinction
01:30:36
between creation and consumption, you end up falling into the consumption trap, whether it be porn or just news or
01:30:44
things that feel like you're being productive, but you're actually just consuming stuff. Won't that be the most successful AI? The one
01:30:51
that plays with my dopamine the most. Yeah. And and makes you and makes you think that you're achieving something
01:30:58
when you're actually just consuming something. So trying to give them the understanding that there is this
01:31:04
difference in their life between creation and consumption and to be on the creation side. I started my first
01:31:09
business at 12 years old and I started more businesses at 14, 15, 16, 17 and
01:31:14
18. And at that time, what I didn't realize is that being a founder with no money meant that I also had to be the
01:31:21
marketer, the sales rep, the finance team, customer service, and the recruiter. But if you're starting a
01:31:27
business today, thankfully, there's a tool that wears all of those hats for you. Our sponsor today, which is
01:31:33
Shopify. Because of all of its AI integrations, using Shopify feels a bit like you've hired an entire growth team
01:31:40
from day one, taking care of writing product descriptions, your website design, and enhancing your product
01:31:46
images, not to mention the bits you'd expect Shopify to handle, like the shipping, like the taxes, like the
01:31:51
inventory. And if you're looking to get your business started, go to shopify.com/bartlet and sign up for a $1
01:31:59
per month trial. That's shopify.com/bartlet.
01:32:04
The thing that I think we all agree on is that this is inevitable. Do you agree with that, Brett? I think it's sad that
01:32:12
it is inevitable, but at this point it is. What part of it do you find sad?
01:32:18
We have squandered a long period of
01:32:24
productivity and peace in which we could have prepared for this moment. and our
01:32:31
narrow focus on competition has created
01:32:38
a fragile world that I'm afraid is not going to survive the disruption that's coming. And it didn't have to be that
01:32:45
way. This was foreseeable. I mean, frankly, the movie 2001, which came out
01:32:51
the year before I was born, anticipates some of these problems. And you know we
01:32:58
treated it too much like entertainment and not enough like education. So we are now, you know,
01:33:07
we've had the AI era opened without a discussion about its implications for
01:33:12
humanity. There is now for game theoretic reasons no way to slow that
01:33:18
pace because as you point out if we restrain ourselves we simply put the AI in the hands of our competitors. That's
01:33:24
not a solution. So, I don't advocate it, but there's a lot more preparation we could have done. We could have
01:33:30
recognized that there were a lot of people in jobs that were uh about to be obliterated and we could have thought
01:33:36
deeply about what the moral implications were and what the solutions at our
01:33:42
disposal might have been. And having not prepared, it's going to be a lot more carnage than it needed to be. Amjad, I
01:33:49
heard you say a second ago that what we should be talking about is how we deal with job displacement. Do you have any
01:33:54
theories? If you were prime minister or president of the world and
01:34:00
your job was to deal with job displacement, let's just say in the United States, how would you go about
01:34:05
that? The first thing I would do is teach people about these systems, whether
01:34:11
it's programs on TV or outreach or what have you, just trying
01:34:18
to get people to understand how ChatGPT works, how these algorithms work,
01:34:24
and as the new jobs arrive, I think there's going to be an opportunity for people to be able to
01:34:31
detect that, you know, this job requires this set of skills, and I
01:34:37
have this kind of experience, and although my experience is potentially outdated, I can repurpose that experience
01:34:43
to do that job. I'll give you an example: a teacher, his name is Adil Khan, you know,
01:34:49
he started using, at the time, GPT-3 and felt like it does amazing work as a
01:34:55
tool for teachers, or even potentially a teacher itself. So he learned a little bit of coding and he went to
01:35:01
Replit and he built this company, and just two years later they're
01:35:07
worth hundreds of millions of dollars. Obviously, not everyone will be able to create businesses of that scale, but
01:35:13
because you have an experience in a certain domain, you'll be able to build
01:35:18
the next iteration of that using technology. So, even if your job was
01:35:23
displaced, you'll be able to figure out, you know, what
01:35:30
potentially comes after that. So I think the expertise that people built, I don't think it's all for
01:35:37
waste. Even if your job went away, you can never really predict what jobs
01:35:42
are coming. I mean, I think of this crazy situation where I tell my
01:35:47
grandfather, what is a personal fitness trainer? And his mind would be
01:35:53
blown by this idea that well, okay, I don't really want to go to the gym, so I have to make an appointment and pay
01:35:59
someone to go to the gym and meet with me there. And then he stands there and tells me to lift heavy things that I
01:36:04
don't really want to lift. And then he counts them and tells me that I've done a good job and then I put the heavy
01:36:10
things down and then at the end of that I feel really good and I pay him a bunch of money. My grandfather would be like
01:36:16
what on earth, have you been scammed? So we can never predict what
01:36:21
this future of jobs would look like. Even just 20, 30, 40 years apart, the jobs
01:36:26
rapidly and convincingly just morph into something else. I think the idea that we need to focus
01:36:34
on skills is very dangerous. I think the future is not in skills. Skills are being replaced. It's this idea that the education system has
01:36:40
to stop being compartmentalized and has to be a lifelong learning approach. The department of education needs to be
01:36:47
seeing people as lifelong learners who are constantly disrupted and need re-education. Interesting. That's
01:36:53
going to be a thing. The Department of Education needs to start as a kid and go right through to maybe 70. Does the
01:37:00
Department of Education have a role anymore at all? Depends on your definition of education. I think if you're
01:37:06
trying to teach kids to, you know, remember facts and figures from a history book, then no. But if it's about
01:37:14
coaching, mentoring, being displaced, finding the next thing, and maybe if it's AI-driven and all of those kinds of
01:37:20
things, then it's a different paradigm shift around what education is and what its purpose is. And if we see it as a
01:37:26
fluid thing where we wave into an opportunity and then wave back into education, spotting a new opportunity
01:37:32
and then back again. We're not learning skills, we're learning tools. So it's a tools-based education
01:37:39
as opposed to a skills-based education. The purpose of education for most of human history was about virtue, about
01:37:45
becoming a great person who had good judgment and who had good values. And we don't really do much of that anymore.
01:37:51
But I think we should get back to asking the
01:37:56
question: what is the purpose of education, where does it fit in our lives, and for what time frame does it go? And then we just trust that people
01:38:04
are going to come up with weird and wonderful jobs. You know, this sounds
01:38:09
crazy, and this is a weird analogy, but my cat is incredibly happy. How
01:38:17
do you know? Well, it demonstrates all the characteristics of being a happy cat, and it lives in a world of
01:38:25
super intelligence as far as it's concerned. So, there's this house and food just magically happens. It has no
01:38:31
idea that there's this Google calendar that runs a lot of things that happen around it. The food gets delivered. The
01:38:38
money is magically made by something that is inconceivably more intelligent than the cat. And yet the cat has
01:38:44
evolved to be living this life of purpose and meaning inside the house. And as far as it's aware, it it's got a
01:38:50
great life. But you have the power at any moment if you're having a bad day to do something not so pleasant to that
01:38:57
cat. And it can't really reciprocate that. Exactly. But what's in it for me to hurt the cat?
01:39:04
Because, in this analogy, you might want to move house and the landlord
01:39:09
doesn't allow cats. So you've got a decision to make. Yeah, there are things that the cat is highly disrupted
01:39:15
by due to no fault of the cat. I get it. But as far as cat existence goes and the
01:39:20
history of cats, if you were to ask that cat, do you want to trade places with any of the other cats that
01:39:27
came before you? It would probably say, I don't want to take the risk because all the other cats had to fend for themselves in a way that I don't have
01:39:33
to. It's very possible that we end up living a life a lot like the house
01:39:38
cat, in the sense that from our perspective we're
01:39:44
having very interesting lives with purpose and meaning, and there's this massive higher intelligence that's just
01:39:50
running stuff, and we don't know how it works, but it doesn't really matter how it works. We are the
01:39:57
beneficiaries of it, and it's doing important things, and we're enjoying being house cats in its life.
01:40:03
I have a few things to say about this. One, I'm pretty sure your cat's not as impressed with your capacity as you are
01:40:10
or as you think he is. Um I just know cats well enough to be pretty sure of that. But oh, it looks down on me. Yeah,
01:40:15
you're right. I think it's a fair point that there is an existence, and actually, you know, pets
01:40:21
really do, if they have loving owners, have it pretty great. And I would also point out that there's a way in which we already are
01:40:27
this way. Most of us do not understand the process that results in electricity
01:40:33
coming out of the walls of our house or the water that comes out of the tap. And we're pretty much okay with the fact
01:40:39
that somebody takes care of that and we can busy ourselves with whatever it might be. But the place that I find
01:40:45
something troubling in your description is that you say that the nature of what
01:40:51
we do is to deal with the fact that jobs are always being upended. That's a very
01:40:57
new process. That is the hypernovelty process. It used to be that it was only
01:41:03
very rarely that a population had a circumstance where you didn't effectively do exactly what your
01:41:09
immediate ancestors did. Right? Um, in general, you took what the jobs were,
01:41:15
you picked something that was suited to you, and you did that thing intergenerationally.
01:41:21
And the point is, we've now gotten to the point where even within your lifetime, what is possible
01:41:29
to get paid for is going to shift radically in ways that nobody can predict. And that is a dangerous
01:41:35
situation. Like probably every two years, like two or three years, right? And so maybe there's some model by which
01:41:41
we can surf that wave and you can learn a generalist toolkit and you know that
01:41:46
your survival doesn't depend on your being able to you know switch up every two years and never miss a beat or maybe
01:41:54
we can't but I do think it is worth asking the question if the rate of
01:42:00
technological change has taken us out of the normal human circumstance of being
01:42:07
able to deduce what you might do for a living based on what your ancestors did and put us in a situation where what
01:42:13
your ancestors did is going to be perfectly irrelevant no matter what. But that is effectively a choice that has
01:42:19
been made for us. And we could choose to slow the rate of change so that we would
01:42:26
live in some kind of harmony where our developmental environment and our adult
01:42:32
environment were a match. Now, as a biologist, I would argue if we don't do something like that, this is a matter of
01:42:38
time. Yeah. How would we change? How do we slow the rate of change? Well, I mean, you can be the
01:42:43
Amish, right? You can be the Amish and live in your own communities, and I would assume some people would
01:42:49
want that. Well, you know, when Heather and I wrote our book, I wanted the first chapter to be, are the Amish
01:42:56
right? And the answer is they can't be exactly right because they picked an arbitrary moment to step off the
01:43:03
escalator. But are they right that there's something dangerous about this continuing pattern of technological change? Clearly they are. What do the
01:43:09
Amish do, for anyone that doesn't know? The Amish live as if it was, what, 1850 or
01:43:18
something. So they don't use cars. I think they do have
01:43:23
phones, but they do not have electricity. Basically, they voluntarily accept a
01:43:30
technological limit; they're basically a tech-lite community, and they have turned out to fare
01:43:37
surprisingly well against many of the things that have upended modern life. COVID was one of them, right? Yeah, COVID, they did beautifully.
01:43:44
Quite happy people, very low autism rates, they have all sorts of advantages. So anyway, I'm not arguing that we should
01:43:50
live like the Amish, I don't see that, but I do think the idea that they had an insight, which was you need to step off
01:43:56
that escalator because you're just going to keep making yourselves sicker, is probably right. Now, maybe this is a
01:44:02
one-time shift. We've stepped over the event horizon. We are going to be living in the AI world. And maybe if we're
01:44:09
careful about it, we can figure out how to turn that landscape of infinite possibility that you're describing into
01:44:17
a place that doesn't change. That you always have the opportunity to decide
01:44:23
what needs to be done. But that living over that event horizon is not an ever-changing process.
01:44:31
It's just the next frontier. I do want to also propose or ask the question when
01:44:36
we talk about our hyper-changing world, isn't it harder for older people to learn because of the way that the
01:44:43
brain works in terms of processing speed and memory flexibility? So I was wondering if you're going to
01:44:48
get a situation where, like, my father, because of his brain and the reduced memory flexibility and processing speed
01:44:54
that happens when you're older, is going to struggle significantly more than my niece, who seems to learn... I mean, my
01:45:00
niece knows five languages and she's seven or something crazy like that. Five languages! But I mean the brain is much
01:45:05
more plastic, isn't it? And that goes back to our evolutionary psychology, our evolutionary history, which you know much
01:45:11
more than I do about: we're meant to learn our lessons when we're young and use that information for a lifetime. But if
01:45:18
that information is changing quickly, well, that's I mean this is exactly what I'm pointing to. It is not normal for
01:45:24
your developmental environment to fail to prepare you for your adult environment. The normal thing is as a
01:45:31
young person, you take on ever more of the responsibilities of the adult
01:45:37
environment. And then at some point, you know, in a properly functioning culture, there's a right of passage. You go into
01:45:44
the bush for 10 days, you come back with, you know, a large
01:45:49
game animal and now you're an adult and you take that program that you've been building and you activate it. And that
01:45:55
is normal. And, you know, you're a much happier person. You're a lot more
01:46:00
fulfilled if your life has that kind of continuity to it. And you know, I'm not against the idea that we have enabled
01:46:07
ourselves to do things that can't otherwise be done, but we
01:46:13
have also harmed ourselves gravely. And I would like to somehow pry apart our
01:46:21
ability to improve our well-being from our self-inflicted wounds that come from
01:46:29
this neverending pace of change. And I don't know if it's possible, but I think it's
01:46:34
a worthy goal. Something amusing, I don't know if it's exactly a counterpoint, but during COVID
01:46:41
especially, and through the recent technological change, some
01:46:47
people have started living closer to the more ancestral environment. So,
01:46:54
people whose jobs are online, some of my friends like went and built communities
01:47:00
like collectives, where they live and they create farms and they eat together, and they have
01:47:07
like an email job. They do their email jobs for five hours and go out, and they all have children, and it's a
01:47:14
fascinating life. And there was so much rethinking in Silicon Valley about how we live. And there's a bunch of
01:47:19
startups that are trying to create um cities where they're like, okay, we know
01:47:25
that we're suffering because our cities are not really walkable. And
01:47:31
there's so many reasons why we're suffering. First, we're not getting the movement. Second, there's a social
01:47:36
aspect of a walkable city where you're able to interact with people. You'll make friends by just happening to be
01:47:42
in the same place as others. Let's actually build walkable cities, and if we want to transport faster,
01:47:50
we'll have these self-driving cars on the perimeter of the city that are going around and I think there are ways in
01:47:56
which technology can afford us to live in a way that
01:48:02
reverses, I guess, in a more local way. I like that vision, but I also am
01:48:08
aware that there's a different vision, right? You see people in Palo Alto, for example, actually exerting, you know,
01:48:15
very strong controls on how much their children are exposed to, you know, phones. And I live in Palo Alto.
01:48:22
Yeah. So, so you see that. On the other hand, what I am worried about is that
01:48:28
the elites of Palo Alto don't realize that what they're doing is they're
01:48:34
figuring out how to reduce the harm to their own families as they're exporting the harm to the world of these
01:48:40
technologies that for everybody else are unregulated. And so the question is, can we bring everybody along? If the AI
01:48:47
revolution is going to alter our relationship to work and everything else, can we bring everybody along so
01:48:54
that at the end of this process, instead of saying, well, you know, it's a shame that three billion people
01:49:01
were sacrificed to this transition but progress is progress we can really say well we figured it out and everybody now
01:49:07
is living in a style that is closer to their programming and closer to the expectations of their physical bodies.
01:49:15
You know, if that were true, then I would love to be wrong in my fears about what's coming.
01:49:21
Um, but unfortunately, the market is not going to solve this problem without our being deliberate about forcing it to.
01:49:30
What's your biggest fear? Like when you say my fears about what's coming, what do you like what's what's the picture
01:49:35
that comes in your mind? Oh, it's a whole different topic actually. My fear stemming from technology
01:49:43
uh and AI is that this is a runaway process and that that runaway process is
01:49:50
going to interface very badly with some latent human programs: that in effect
01:49:56
the need for workers largely disappears and the people who are at the head of the
01:50:03
processes that result in that elimination of the need for workers start talking about useless eaters.
01:50:08
Maybe they come up with a new term this time. Thin the herd. Yep. Or they allow it to be thinned or something. Right.
01:50:14
I've heard you talk about the five key concerns you have or the five key threats you have before. Could you name
01:50:21
those five? So the first one is the one I worry least about. I don't worry zero about it, but I worry least about it,
01:50:27
which is the malevolent AI uh that the doomers are so focused on. The second
01:50:33
one is the idea that you know an AI can be misaligned not because it has
01:50:38
divergent interests but because it just misunderstands what you've asked it, these autonomous agents. You know, the famous example is you ask them to
01:50:45
produce as many paper clips as possible and they start liquidating the universe to make paper clips and you know, it's a
01:50:50
it's a sorcerer's apprentice kind of issue. The third one, and actually
01:50:56
all of the remainder of them I would say are guaranteed,
01:51:02
is the derangement of human intellect. We are already
01:51:11
living in a world where it's very difficult to know what the facts even mean. Right? The facts are so
01:51:18
filtered, and we are so persuaded by algorithms, that our
01:51:24
ability to be confident even in the basic facts, even within our own discipline sometimes, is at an
01:51:29
all-time low and it's getting worse and that problem takes a giant leap forward
01:51:35
at the point that you have the ability to generate undetectable deep
01:51:42
fakes. Right? That's going to alter the world very radically, when the fact that you're looking at videotape of somebody
01:51:49
robbing a bank doesn't mean that they robbed a bank or that a bank was even robbed. Um, so anyway, I call this... We
01:51:56
deal with this a lot, by the way. Every single week. I have people that
01:52:02
are now basically spending, I'd say, 30% of their time dealing with deep fakes of me doing crypto scams, inviting
01:52:09
people to Telegram groups, and then asking them for credit card details. We had one on X. I think you probably saw
01:52:15
it, Dan, didn't you? Someone was running deep fake ads of me on X. And it wasn't just one ad. It
01:52:21
was like swatting flies. There were 10 of them. And I sent them to X and there were 10
01:52:26
more. Then the day after there was 10 more. Then the day after there was 10 more. Then it started happening on on Meta. So it's a video of me basically
01:52:33
asking you to come to a Telegram group where people are being scammed and audience members of mine are being scammed. And when I send them to Meta,
01:52:39
they thankfully remove them. But then there's five more. And I went on LinkedIn yesterday and my DMs are,
01:52:44
"Steve, by the way, there's this new scam." And at this point I'd need someone full-time just sending
01:52:50
this over to Meta. I'm the same but on a smaller scale. Every week it's
01:52:55
did you really message me on Facebook asking me for my crypto wallet and blah blah blah. My my least favorite ones are
01:53:01
when the the single mother messages me saying that she just paid £500 of her money and how devastated she is and I
01:53:07
feel this moral obligation to give her her money back um because she's fallen for some kind of scam. That was me. It
01:53:13
was my voice. It was a video of me telling her something. Yeah. And I don't know how you deal with that, but sorry, do continue. Well, I mean
01:53:19
that's actually on the list here: the massive disruption to the way things function, both because people are going
01:53:24
to be unemployed in huge numbers and because those who are not abiding by our
01:53:30
social contract are going to find themselves empowered more than the people who do. So in this case, not only
01:53:38
is this poor woman, you know, now out 500 bucks for whatever the scam was, but
01:53:44
you've also been robbed, whether or not you pay her back for the thing that she thought she purchased. Your credibility
01:53:51
is being stolen by somebody and you have no capacity to prevent it. This has happened to me also and it is profoundly
01:53:58
disturbing and it is only one of a dozen different ways that AI enables those who
01:54:06
are absolutely willing to shrink the pie from which we all derive in order to
01:54:14
enlarge their slice. You know, there are innumerable ways that this can happen, and I think people do not see
01:54:21
it coming. They don't understand how many different ways they are going to be robbed every bit as surely as if
01:54:27
somebody was printing money. Um and then the last one is that this just simply
01:54:33
accelerates demographic uh processes that do potentially result
01:54:39
in the unleashing of technologies that pre-existed AI. You know, this can
01:54:45
easily result in an escalation uh into wars that turn nuclear. Um, so
01:54:52
anyway, I think that list could probably be augmented at this point now that we've, you know, spent a little time in
01:54:59
the AI era. We can begin to put a little more flesh on the bones both of what is possible in this era and what we should
01:55:06
fear. One of those, you mentioned truth, the problem of truth. Would you say, just a thought experiment,
01:55:14
someone today, like an average person, college educated say, are they more
01:55:21
propagandized or led astray than someone in Soviet Russia?
01:55:30
Well, I don't know because I didn't live in Soviet Russia, but my understanding from people who did was that there was a
01:55:39
wide awareness that the propaganda wasn't true. Doesn't mean they knew what to believe, but there was a cynicism,
01:55:46
which is one of my fears here: that you're really stuck choosing between two bad options in a
01:55:54
world where you can't tell what is true. You can either be overly credulous and be a sucker all the time, or you can
01:55:59
become a cynic and you can be paralyzed by the fact that you just don't believe anything. But neither of those is a
01:56:05
recipe for... Do you think Google search first, and maybe now ChatGPT, has helped people more or
01:56:11
less to find truth? I think it's not ChatGPT exactly, but all
01:56:18
of the various AI engines, starting with Google, that have briefly
01:56:24
true because in fact they allow us to see through the algorithmic manipulation because the AI is not well policed. you
01:56:33
can get it to recognize patterns that people will swear are not true. Um, and
01:56:38
so anyway, a lot of us have found it useful in just simply unhooking the gaslighting. Um, so that's been very
01:56:44
positive. But I also remember the early days of search and search used to be a
01:56:50
matter of there are some pages out there. I don't know where they are. Here's a mechanized something that's
01:56:56
looked through this stuff, and just point me in the direction of things that contain these words. Right before the
01:57:02
algorithmic manipulation started steering us into believing pure nonsense because somebody who controlled these
01:57:08
things decided it was useful for us to believe those things. So my guess is at
01:57:13
the moment AI is enhancing our ability to see more clearly but that really depends on some kind of agreement to
01:57:21
protect that capacity, which I'm not aware of us having. Are you implying there that
01:57:27
AI will protect us from AI? I.e., the woman that got scammed in my audience,
01:57:33
the platforms would have a tool built in which would be able to identify shortly that that is not me and the ad has been
01:57:40
launched by someone in another country potentially and then also when she starts being asked for her credit card details in such a way on Telegram 10
01:57:48
minutes later, the system will be able to understand that this is probably a scam at that touch point too, and
01:57:53
it will also be the defense, not just the offense. First question: is Meta
01:57:59
incentivized to solve this problem? Yes. Yes. And so Meta is probably actively
01:58:06
working on AIs, and again, it's going to be a cat-and-mouse game, like every abuse that happens out there. So I think
01:58:13
that the market will naturally respond to things like that, in the same way that, you know, we installed antiviruses,
01:58:21
annoying as they are. I think we'll install AIs on our computers that
01:58:27
will at least help us sort the fake from the true. Well, but let's take the
01:58:33
example you gave. Is Meta incentivized to solve this problem? Superficially, it seems that it should be, but how many
01:58:40
times in recent history have we watched a corporation cannibalize its own business over what at best are the
01:58:47
bizarre desires of its shareholders, right? Why was X throwing off people
01:58:53
with large accounts or Facebook or Google? It would seem that you would
01:59:00
expect based on the market choosing search engines or social media sites,
01:59:05
you would expect these companies to be absolutely mercenary and say, you know,
01:59:10
if Alex Jones has a big audience, who are we to say? That's what I would have expected. Instead, you had these
01:59:18
companies policing the morality of thought even though it reduced the size
01:59:25
of the population using the platforms. I have a hard time explaining why that happened, but I have every reason to
01:59:30
expect the same thing will happen with AI. What are you excited about with AI? What's your optimistic take?
01:59:37
Because at the start of this conversation, you said that there's infinite ways that it could improve our lives and there's 10 times more ways
01:59:43
that it could hurt our lives. But let's investigate some of those ways that it could drastically improve our lives. There's a couple of different ways. One,
01:59:50
we have, as we mentioned before, a dearth of competent teachers and professors.
01:59:57
And that is a problem that will take three generations at least to solve if
02:00:02
what we're going to do is start tomorrow and start educating people in the right way that would make them competent to
02:00:07
stand at the front of a room and educate. But if we can augment that process, if we can leverage a tool like
02:00:13
AI so that you know a small number of competent teachers can maybe reach a larger number of pupils, that's
02:00:20
plausible I think. Second thing is we have a tremendous number of problems
02:00:26
that are obstacles to us living well on this planet that AI might be able to
02:00:33
manage that human intellect alone cannot. Right? Just in the same way that
02:00:39
compute power can calculate things at a rate that human beings can't keep up with, and there are certain things you
02:00:44
want calculated very well. There are also some reasoning problems. You could
02:00:49
imagine that instead of having static
02:00:54
laws that govern behavior poorly, because they get gamed, you could have a
02:01:01
dynamic interaction. You could specify an objective of something like a law and
02:01:08
then you could monitor whether or not a particular intervention successfully moved you in the direction that you were
02:01:14
hoping to go or did something paradoxical which happens all the time and you could have you could basically
02:01:19
have governance that is targeted to navigation and prototyping rather than to specifying a blueprint for how we are
02:01:27
to live. So we wouldn't need politicians. Um, at the moment we're
02:01:32
stuck with, you know, constitutional protections that are as good as has been
02:01:38
constructed and still inadequate to modern realities.
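Brett's "governance as navigation" idea can be made concrete with a toy feedback loop. The sketch below is purely illustrative, not any real system, and every name in it (`Intervention`, `navigate`) is invented for the example: you state an objective, try interventions one at a time, keep an intervention only if the measured metric moves toward the objective, and roll back anything paradoxical.

```python
# Toy sketch of outcome-targeted governance: keep interventions that move a
# measured metric toward a stated objective, roll back ones that backfire.
# All names here are hypothetical; this illustrates the concept only.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Intervention:
    name: str
    apply: Callable[[float], float]  # maps the current metric to a new metric

def navigate(objective: float, metric: float,
             interventions: list[Intervention]) -> tuple[float, list[str]]:
    """Greedily adopt interventions that close the gap to the objective."""
    adopted: list[str] = []
    for iv in interventions:
        candidate = iv.apply(metric)
        # Adopt only if it reduces the distance to the objective; a
        # "paradoxical" intervention that moves us away is rolled back.
        if abs(objective - candidate) < abs(objective - metric):
            metric = candidate
            adopted.append(iv.name)
    return metric, adopted
```

For instance, with an objective of zero (say, an accident rate) and a starting metric of 10, an intervention that halves the rate would be adopted, while one that doubles it would be rolled back.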
02:01:43
Dan, what are you excited about with AI from an individual level, but also from a societal level? Yeah. Well, the big
02:01:49
ones are healthcare and education. I mean, it's ridiculous that you uh are sitting there in pain, having had an
02:01:56
MRI, and there just hasn't been someone to look at that MRI yet and tell you what to do. Um, and that could easily
02:02:03
be solved. There's all sorts of healthcare issues. And also, not only that, throughout the entire world
02:02:09
there are places that just don't have general practitioners and they don't have, you know, medical advisers, and
02:02:15
you know the breakthroughs in global healthcare will be phenomenal and the breakthroughs in global education could
02:02:20
be transformational um on the planet. I I'm excited at an individual level that
02:02:26
I think the industrial age created a bunch of jobs that are very dehumanizing and we've just kind of gotten used to
02:02:32
them and put up with them. The idea that work should be repetitive and you know you just repeat the same loop over and
02:02:38
over and over again, and over a 10-year period of time you might get, you know, graduated up one gear and all that
02:02:44
kind of stuff. I don't think that's very human. Um the idea that you could be simultaneously writing a book, launching
02:02:51
a business, running a team, launching a festival, having an event. Um,
02:02:57
that you could actually be doing this kind of like mini kingdom work where you've got this little, you know, uh
02:03:04
ecosystem around you of fun things that you're involved in that is actually made possible for a vast majority of people
02:03:10
if they embrace these kinds of tools. Um, you can live an incredibly fulfilling
02:03:15
and amazing and impactful existence. I know that I do as a result of having
02:03:21
these tools in my life. Like, I'm doing things that I could have only dreamed about as a kid. And what
02:03:26
would you say to entrepreneurs? I know you work with thousands of entrepreneurs. What are you telling them in terms of their current businesses or
02:03:33
business opportunities that you're foreseeing? So I think that small teams have infinite leverage now and that when
02:03:39
you have a team of say five to 10 people who share an incredible passion for a
02:03:45
meaningful problem in the world and they want to see that meaningful problem solved and they come together in the spirit of entrepreneurship to solve that
02:03:52
problem. That little 5 to 10 person team armed with the technology that we now
02:03:57
have available, you can have a big impact. You can make a lot of money. You
02:04:03
can have a lot of fun. You can solve meaningful problems in the world. You can scale solutions. You can probably do
02:04:08
more in a three-year window than most people did in a 30-year career. Uh and then that little band of 5 to 10 people
02:04:15
could either go together onto a new meaningful problem or they could disband
02:04:20
and you know work on other meaningful problems with different teams. In such a world where you have this sort of
02:04:26
infinite leverage, but everyone else has access to the same infinite leverage, what becomes the USP? Going back to this
02:04:33
idea of the moat, like what is the thing of value when we've all got access to $20 infinite leverage? Well, first of
02:04:39
all, the first thing you need to understand is that this moment in time is the least competitive
02:04:47
moment. Like, if you understand how to use these tools, you can start making money tomorrow. Like, you know, I see
02:04:53
countless examples of people making thousands of dollars with these hustles that I talked about or building
02:05:00
businesses that generate millions of dollars in the first couple of months of existence. So, I would say start moving
02:05:05
now. Start building things. So, it's an unprecedented time of wealth
02:05:11
creation. Clearly at some point as the market gets more efficient as people
02:05:17
more and more people understand how to use these tools um there's less potential for uh you know creating these
02:05:25
massive businesses quickly. And we've seen this, like the dawn of the internet or dawn of the web. You know, it was a lot
02:05:31
easier to create Facebook than it is now. Then we had mobile, and for three, four,
02:05:37
five years it was very easy to create massive businesses and then it became harder. Being just at the edge of what's
02:05:44
possible is going to be very very important over the next couple years. And that's that gets me really excited
02:05:49
because the entrepreneurs who are paying attention are going to be having the most amount of fun, but
02:05:55
they're also going to be able to make a lot of money. How many applications have been built on Replit to date? So, you
02:06:02
know, I can talk about the millions of things that have been built since we started the company, but just
02:06:07
since, uh, September when we launched Replit Agent, there's been about 3 million
02:06:12
applications built purely in natural language with no coding at
02:06:20
all, purely natural language. Of those, I think 300 to 400,000 of them
02:06:25
were actually deployed, where the site was deployed and people are
02:06:33
using it as some kind of business, some kind of internal tool. I built one last night, by the way, an internal tool. I, uh,
02:06:39
built an application to track um how my kids earn pocket money. Amazing. So, I
02:06:46
just told it that I wanted to track the tasks that are happening around the house and assign a value to them
02:06:51
and I want to be able to at the end of the week push a button and get a summary of how much to pay each child um for
02:06:57
their pocket money. We are so screwed. And within 15 minutes, it had created
02:07:04
this application and it was amazing. Like you could toggle between like here's the place where you have the kids
02:07:10
and here's the weekly reports and here's the um how much per task and you can
02:07:15
tick off the tasks or remove tasks or add tasks. So I now have this application, which took 15 minutes of
02:07:23
just talking about what I wanted, and now I have an application to run the pocket money situation in the house.
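The pocket-money app described here reduces to a small amount of state and one report. The sketch below is a guess at the shape of such a tool, not the actual Replit-generated app; the class and method names (`PocketMoneyTracker`, `tick_off`, `weekly_summary`) are invented for illustration.

```python
# Illustrative sketch of a pocket-money tracker: household tasks carry values,
# completions are ticked off per child, and a weekly summary totals what each
# child is owed. Names are hypothetical; the real app's design is unknown.
from collections import defaultdict

class PocketMoneyTracker:
    def __init__(self, task_values: dict[str, float]):
        self.task_values = task_values      # task name -> value per completion
        self.completed = defaultdict(list)  # child -> completed task names

    def tick_off(self, child: str, task: str) -> None:
        """Record that a child completed a task this week."""
        if task not in self.task_values:
            raise KeyError(f"unknown task: {task}")
        self.completed[child].append(task)

    def weekly_summary(self) -> dict[str, float]:
        """The 'push a button at the end of the week' report."""
        summary = {child: sum(self.task_values[t] for t in tasks)
                   for child, tasks in self.completed.items()}
        self.completed.clear()              # start the next week fresh
        return summary
```

With tasks valued at, say, 1 and 2 pounds, a child who ticks off both is owed 3 at the end of the week, and the tracker then resets for the next week.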
02:07:29
And this, by the way, having run an IT agency years ago, is something that we would have
02:07:35
charged five to 10,000 pounds to create, or 5 to 10,000 US dollars, to create,
02:07:42
and how much time? Probably talking something that would have been a three, four week project. And we're at the start
02:07:49
of the S-curve now that you're describing. And it's already, Replit's roughly $20 a month,
02:07:55
$25 for the base case, you did one day of usage, let's say it's a dollar it
02:08:01
cost you. It cost you minutes and a dollar, now, at the start of the S-curve. And you talk to it like
02:08:06
you're chatting to a developer. So one of the things that slows down the development process is you have to send
02:08:12
the information to a developer and they need to understand it and then they need to create something and then come back to you. This just happens in front of
02:08:18
your eyes while you're watching it, and it's actually showing you what's being built, and it's really wild.
02:08:25
This one change has transformed how my team and I move, train and think about our bodies. When Dr. Daniel Lieberman
02:08:31
came on the show, he explained how modern shoes with their cushioning and support are making our feet weaker and less
02:08:38
capable of doing what nature intended them to do. We've lost the natural strength and mobility in our feet and
02:08:44
this is leading to issues like back pain and knee pain. I'd already purchased a
02:08:49
pair of Vivobarefoot shoes. So, I showed them to Daniel Lieberman and he told me that they were exactly the type of shoe that would help me restore
02:08:55
natural foot movement and rebuild my strength. But I think it was plantar fasciitis that I had, where suddenly my feet started hurting all the time.
02:09:01
And after that I decided to start strengthening my own foot by using the Vivobarefoot. And research from Liverpool University has backed this up.
02:09:07
They've shown that wearing Vivobarefoot shoes for 6 months can increase foot strength by up to
02:09:13
60%. Visit vivobarefoot.com/doac and use code DIARY20
02:09:19
from my sponsor for 20% off. A strong body starts with strong feet. This has
02:09:25
never been done before. A newsletter that is run by 100 of the world's top
02:09:31
CEOs. All the time people say to me, they say, "Can you mentor me? Can you get this person to mentor me? How do I
02:09:37
find a mentor?" So, here is what we're going to do. You're going to send me a question. And the most popular question
02:09:42
you send me, I'm going to text it to 100 CEOs, some of which are the top CEOs in
02:09:49
the world running a hundred billion dollar companies. And then I'm going to reply to you via email with how they
02:09:54
answered that question. You might say, "How do you hold on to a relationship when you're building a startup? What is
02:10:00
the most important thing if I've got an idea and don't know where to start?" We email it to the CEOs. They email back. We take the five, six top best answers.
02:10:07
We email it to you. I was nervous because I thought the marketing might not match the reality. But then I saw
02:10:12
what the founders were replying with and their willingness to reply and I thought actually this is really good and all you've got to do is sign up completely
02:10:20
free. I don't think we've spent a lot of time talking about autonomous weapons.
02:10:25
This is the thing that really worries me. And the thing that worries people about AI is this idea that it is, uh,
02:10:34
this, you know, emergent system, and there's no one thing behind it, and it can act in a way that's, uh,
02:10:41
unpredictable and not really guided by humans. I also think it's true of corporations, of governments, and so I
02:10:47
think individual people uh can often have the best intentions but the collective can land on doing things in a
02:10:56
way that's harmful or morally repugnant. And I think, um, we talked about China
02:11:03
versus the US, and that creates a certain race dynamic where, um, they're both
02:11:09
incentivized to cut corners and potentially do harmful things. And in the world of geopolitics
02:11:17
and wars, you know, what really scares me is autonomous weapons. And
02:11:23
why does it scare you? Because
02:11:28
uh you know you can imagine uh autonomous drones being trained on
02:11:36
someone's face and you can send a swarm of drones and they can be this
02:11:43
sort of autonomous killing, assassination machine, and it can sort of function as a, you know, country-versus-
02:11:51
country technology in the world of war, which is still crazy. But it can also
02:11:58
become a tool for governments to subjugate their citizens. And people
02:12:04
think we're safe in the West, but I think the experience with COVID showed that
02:12:11
even the systems in the West can very quickly become draconian. Yeah.
02:12:18
Apparently, I've heard in um Iran that uh they have facial recognition cameras
02:12:25
that detect whether women are wearing hijabs in their own cars, and it automatically detains the car. If you're
02:12:34
driving and you're not wearing a hijab, and certainly if you're walking down the street, it just picks
02:12:40
that up and immediately you're in trouble. It acts as a police officer and a judge and, you know,
02:12:49
a lawmaker. It's the judge, jury, and executioner essentially, and it just happens instantaneously. What happened in
02:12:56
Canada with the truckers' sort of protest, where they froze their bank
02:13:02
account by virtue of just being there, just by being in that location. And just to confirm, Iran has implemented a
02:13:08
comprehensive surveillance system to enforce its mandatory hijab laws utilizing various technologies, one
02:13:13
of which is cameras and facial recognition. So they've put cameras in public spaces to identify women who are
02:13:19
not adhering to the hijab dress code. Yeah. And just on that, London has just
02:13:25
put those facial recognition camera systems into London and also all throughout Wales. Um, and they're being
02:13:31
rolled out at speed and like all you would need is a change
02:13:37
of government that wanted to implement something similar, and all the base-layer technology is already in there. It gets a
02:13:43
little bit worse in Iran, where the government has introduced the Nazar mobile application, which allows
02:13:49
you as a citizen to report another citizen who is not wearing their hijab, and it logs their location, their
02:13:55
time when they weren't wearing it, and the vehicle license plate. With the crowdsourced data, it can then go after
02:14:01
that individual. I would also just point out that I think we're not being imaginative enough. I agree with you. I
02:14:07
have the same concern about these autonomous weapons, but I also think this doesn't have to occur in the
02:14:13
context of war or even governmental oppression, that it is perfectly conceivable that effectively
02:14:20
this drops the price of an undetectable or unprosecutable crime.
02:14:27
And maybe economic moats return in the form of people taking out their competitors or anybody who attempts to
02:14:33
compete with them using an autonomous drone that can't be traced back to them. You know, that follows facial
02:14:39
recognition. And you know, you don't have to kill very many people for others to get the message that this is a
02:14:45
zone that uh you shouldn't mess around in. So, I could imagine, you know, effectively a new high-tech organized
02:14:53
crime that runs protection rackets and makes tons of money and subjugates people who
02:15:00
haven't done anything wrong. I had Mustafa Suleyman on the podcast in 2023 when all of this stuff started
02:15:06
kicking off and he is the CEO of Microsoft AI. You're familiar with Mustafa? Of course. Yeah. Um, and one
02:15:12
of the things he said to me at the time was one of my fears is a tiny group of people who wish to cause harm are going to have access to tools that can
02:15:19
instantly destabilize our world. That's the challenge. How to stop something that can cause harm or potentially kill.
02:15:25
That's where we need containment. And it sounds a little bit like what you're saying, Amjad, that we will now have these
02:15:31
these tools. You were talking in the context of the military, but as Brett said there, even smaller groups of
02:15:36
people that might have been, I don't know, cartels or gangs can do similar harm. And at the moment, in terms of
02:15:43
autonomous weapons, both the US and China are investing heavily in AI powered weapons, autonomous drones, and
02:15:48
cyber warfare because they're scared of the other one getting it first. And we talked about how much of our lives
02:15:54
run on the internet, but cyber weapons and cyber AI agents that could be
02:15:59
deployed to take down China's X, Y, or Zed or vice versa are a real concern.
02:16:05
Yeah. Yeah. I think all of that is, um, a real concern. You know, unlike
02:16:12
Mustafa, I don't think containment is possible. Part of the reason is this
02:16:18
game-theoretic system of competition between the US, China, corporations,
02:16:26
individuals makes it so that this technology is, you know,
02:16:31
already out and really hard to put back in the bag. I did ask him this question and I remember the answer
02:16:37
because it was such a stark moment for me. I said to Mustafa, "Do you think it's possible to contain it?" And he replied, "We must." So I asked him
02:16:43
again. I said, "Do you think it's possible to contain it?" And he replied, "We must." And I asked him again, "Do you think it's possible?" "We must." So the
02:16:49
problem with that chain of thinking is that it might lead to an oppressive system.
02:16:55
There is one of the, say, doomers or philosophers of AI whose work I respect.
02:17:01
His name is Nick Bostrom, and he was trying to think of
02:17:08
ways in which we can contain AI. And the
02:17:13
thing that he came up with is perhaps more oppressive than something that the AI would come up with: a total
02:17:20
surveillance state. You need total surveillance on compute, on people's computers, on people's ideas, to not
02:17:27
invent AI or AGI. It's like taking the guns or something. Right. Exactly. I mean, there's always this
02:17:34
problem with containing any sort of technology: you do need, um,
02:17:39
oppression and draconian policies to do that. Are you scared of anything else or concerned about anything else as it
02:17:45
relates to AI outside of autonomous weapons? You know, we talked about the birthrate crisis, and I think a more
02:17:54
generalized problem there is creating virtualized environments
02:18:01
uh via VR where everyone is living in their own created universe and uh it's
02:18:09
so enticing, and it even simulates work and simulates struggle, such that
02:18:15
you don't really need to leave this world, and so every one of us will be solipsistic, you know, similar to the
02:18:21
Matrix. Ready Player One. Ready Player One. We're all kind of, uh, plugged in. Even worse than Ready Player One. At least
02:18:28
that's a massively networked environment. I'm talking about AI simulating everything for us, and
02:18:36
therefore you're literally in the Matrix. You know, maybe this is. I was about to say, I had that same thought. I've
02:18:42
enjoyed this great simulation. Yes. And so, I mean, are you familiar with
02:18:48
Fermi's paradox? No, I'm not. So Fermi's paradox is, um, the question.
02:18:53
The, you know, professor, his name is Fermi, he asked the question: if the
02:19:00
universe is that vast, then where are the aliens? From the fact that humans exist,
02:19:07
you can deduce that other civilizations exist. And if they do exist, then why
02:19:14
don't we see them? And then that spurred a bunch of Fermi solutions. So there's
02:19:20
I don't know, you can find hundreds of solutions on the internet. One of them is the sort of house cat
02:19:26
thought experiment, where actually aliens exist, but they kind of put us in an environment, like the Amish in a certain
02:19:34
time, and do not expose us to what's going on out there. So we're pets. Maybe they're watching us and kind of
02:19:40
enjoying what we're doing, stopping us from hurting ourselves. There are so many things, but one of the things that
02:19:48
I think is potentially a solution to the Fermi paradox, and one of the saddest
02:19:54
outcomes is that civilizations progress until they invent technology that will
02:20:01
lock us into infinite pleasure and infinite simulation such that we
02:20:07
don't have the motivation to go into space, to seek out
02:20:14
exploration, potentially other alien civilizations. And perhaps that is a
02:20:20
determined outcome of humanity or like a highly likely outcome of any species
02:20:27
like humanity. We like pleasure. Pleasure and pain are the main motivators. And so if you create an
02:20:33
infinite pleasure machine, does that mean that we're just at home in our VR environment with everything taken care
02:20:40
of for us, literally like the Matrix? And the real world would suck in such a scenario? Yes. It'd be terrible. I
02:20:46
mean the other simpler explanation of the Fermi paradox is that you generate sufficient technology that you can end
02:20:53
your species and it's only a matter of time from that point which you know we can have that discussion about nuclear
02:20:59
weapons. We can have it about AI. But if we stay on that escalator, does some technology
02:21:06
that we generate ultimately do it? Does whatever allows you to get off the planet allow you to blow up the planet? There you go.
02:21:12
I want to get everyone's closing thoughts and closing remarks. And hopefully in your closing remarks, you
02:21:17
can capture something actionable for the individual that's listening to this now on their commute to work or the single
02:21:24
mother, the average person who maybe isn't as technologically advanced as many of us at this table, but is trying
02:21:29
to navigate through this to figure out how to live a good life over the next 10, 20, 30 years. Yeah. Take as long as
02:21:37
you need. I think we live in the most uh interesting time in human history. So
02:21:43
for the single mother that's listening, for someone who wouldn't be the stereotype of a tech bro, don't assume
02:21:49
that you can't do this stuff. It's never been more accessible. Today, within your work, you can be an entrepreneur. You
02:21:56
don't have to take the massive risk, um, where you quit your
02:22:02
job and go create a business. There are countless examples. We have a user who's a product manager at a large real
02:22:09
estate business, and he built something that created a 10% lift in conversion
02:22:15
rates, which generated millions and millions of dollars for that business, and that person became a celebrity at that
02:22:21
company and became someone who is lifting everyone else up and teaching them how to use these tools and
02:22:27
obviously that is really great for anyone's career, and you're going to get a promotion. And your
02:22:33
example of building a piece of software for your family, for your kids to
02:22:39
improve and to learn more, to be better kids, is an example of being an
02:22:44
entrepreneur in your family. So I really want people to break away from this
02:22:51
concept of entrepreneurship. This is your podcast, The Diary of a CEO. You started this podcast by talking to CEOs,
02:22:59
I assume, right? And over time, uh, it changed to: everyone can be a CEO,
02:23:05
everyone is some kind of CEO in their life and so uh I think that we
02:23:12
have unprecedented access to tools for that vision to actually come to reality.
02:23:19
Well, it is obviously a moment of a kind of human phase transition. Something
02:23:25
that I believe will be the equal of the discovery of farming or writing or
02:23:34
electricity. And the darkness that I think is valid in looking at all of the
02:23:40
possible outcomes of this scenario is actually potentially part of a different story as well. In evolutionary biology,
02:23:47
we talk about an adaptive landscape in which a niche is represented as a peak
02:23:54
and a higher niche, a better niche is represented as a higher peak. But to get from the lower niche to the higher
02:24:00
niche, you have to cross through what we call an adaptive valley. And there's no guarantee that you make it through the
02:24:05
adaptive valley. And in fact, the drawing that we put on the board, I think, is overly hopeful because it
02:24:11
makes it in two dimensions. It looks like you know exactly where to go to climb that next peak. And in fact, it's
02:24:17
more like the peaks are islands in an archipelago that is in fog where you
02:24:23
can't figure out what direction that peak is and you have to reason out it's probably that way and you hope not to
02:24:29
miss it by a few degrees. But in any case, that darkness is exactly what you
02:24:36
would expect if we were about to discover a better phase for humans. And
02:24:42
I think we should be very deliberate about it this time. I think we should think carefully about how it is that we
02:24:47
do not allow the combination of this brand new extremely powerful technology
02:24:53
and market forces to turn this into some new kind of enslavement. And I don't
02:25:00
think it has to be. I think the potential here does allow us to refactor just about everything. Maybe we have
02:25:07
finally arrived at the place where mundane work doesn't need to exist anymore and the pursuit of meaning can
02:25:14
replace it. But that's not going to happen automatically if we don't figure out how to make it happen. And I hope
02:25:21
that we can recognize that the peril of this moment is best utilized if it
02:25:27
motivates us to confront that question directly. Each one of us has two parents, four
02:25:34
grandparents, eight great-grandparents, 16, 32, 64. You've got, like,
02:25:41
this long line of ancestors who all had to meet each other. They all had to survive wars. They all had to survive
02:25:48
illness and disease. Everything had to happen for each one of us, each individual. All
02:25:53
of this stuff had to happen for us to get here. And if we think about all those thousands and
02:25:59
thousands of people, every single one of them would trade places in a heartbeat if they had the opportunity to be alive
02:26:05
at this particular moment. They would say that their life was struggle,
02:26:11
disease, that their life was a lot of mundane and meaningless work. It was
02:26:16
dangerous. You know, every single one of us has probably got ancestors that were enslaved, probably got ancestors that
02:26:24
died too young, uh, probably got ancestors that worked in horrific
02:26:29
conditions. We all have that. And they would all just look at this moment and say, "Wow." So, are you telling me that
02:26:36
you have the ability to solve meaningful problems, to come up with adventures, to
02:26:41
travel the world, to pick the brains of anyone on the planet that you want to pick the brains of? You can just listen
02:26:47
to a podcast. You can just watch a video. You can talk to an AI. Like, are you telling me that you're alive at this
02:26:53
particular moment? Please make the most of that. Like, do something with that.
02:26:59
You know, you can sit around pontificating about society and how society might work. But ultimately, it
02:27:05
all boils down to what you do with this moment. and solving meaningful problems,
02:27:10
being brave, having fun, making your little dent in the universe. You know,
02:27:16
that's that's what it's all about. And I feel like there's an obligation to your ancestors to make the most of the
02:27:22
moment. Thank you so much to everybody for being here. I I've learned a lot and I've
02:27:28
developed my thinking, which is much the reason why I wanted to bring us all together because I know you all have different experiences, different
02:27:34
backgrounds and education. And you're doing different things, but together it helps me sort of parse through all of these ideas to figure out where I land.
02:27:40
And I ask a lot of questions, but I am actually a believer in humans.
02:27:46
I was thinking about this a second ago. I was thinking, am I optimistic about humans' ability to navigate this
02:27:52
just because I have no other choice? Because as you said, the alternative actually isn't worth thinking about. And
02:27:57
so I do have an optimism towards how I think we're going to navigate this, in part because we're having these kinds of
02:28:03
conversations and we in history haven't always had them at the birth of a new revolution when we think about social
02:28:09
media and the implications that had. We're playing catch-up with the downstream
02:28:14
consequences. And I am hopeful. Maybe that's the entrepreneur in me. I'm excited. Maybe that's also the
02:28:20
entrepreneur in me. But at the same time, to many of the points Brett's raised and Amjad's raised and Dan's
02:28:25
raised, there are serious considerations as we swim from one island to another. And because of the speed and scale of
02:28:32
this transformation that Brett highlights and you look at the stats of the growth of this technology and how
02:28:37
it's spreading like wildfire, and how, once I tried Replit, I walked straight out and I told Cozy immediately. I was
02:28:42
like, "Cozy, try this." And she was on it and she was hooked. And then I called my girlfriend in Bali, who's the breath work practitioner, and I was like, "Type
02:28:48
this into your browser. R E P L I T." And then she's making these breath work
02:28:53
schedules with all of her clients' information ahead of the retreat she's about to do. It's spreading like wildfire because we're internet native.
02:29:00
We were native to this technology. So it's not a new technology. It's something on top of something that's intuitive to us. So that transition, as
02:29:07
Brett describes it, from one peak to the other or one island to another, I think is going to be incredibly destabilizing.
02:29:12
And having interviewed so many leaders in this space, from Reid Hoffman, who's the founder of LinkedIn, to the CEO
02:29:17
of Google to Mustafa who I mentioned they don't agree on much but the thing
02:29:22
that they all agree on, and that Sam Altman agrees on, is that the long-term future, the long-term way that our
02:29:28
society functions is radically different. People squabble over the short term. They sometimes even
02:29:33
squabble over the midterm or the timeline but they all agree that the future is going to look completely
02:29:38
different. Amjad, thank you for doing what you're doing. We didn't get to spend a lot of time on it today, and this is typically what I do here, but
02:29:45
your story is incredibly inspiring. Incredibly inspiring, from where you came from, what you've done, what you're
02:29:51
building. And you are democratizing, creating a level playing field for entrepreneurs from Bangladesh to Cape Town
02:29:58
to San Francisco to be able to turn their ideas into reality and I do think just on the surface that that's such a
02:30:04
wonderful thing, that, you know, I was born in Botswana in Africa and that I could have the same access to turn my
02:30:11
imagination into something to change my life because of the work that you're doing at Replit. And I highly recommend everybody go check it out. You
02:30:17
probably won't sleep that night because, for someone like me, it was so addictive to be able to do
02:30:23
that because it's been the barrier to creation my whole life. I've always had to call someone to build something. Dan, thank you again so much because you
02:30:29
represent the voice of entrepreneurs and you've really become a titan, a thought leader for entrepreneurs in the UK, and that perspective, that balance, is
02:30:35
incredibly important. So, I really really appreciate you being here as always and you're a huge fan favorite of our show and Brett, thank you a
02:30:41
gazillion times over for being a human lens on complicated challenges, and you
02:30:48
do it with a fearlessness that I think is imperative for us finding the truth in these kind of situations where some
02:30:54
of us can run off with optimism and we can be hurtling towards the mouse trap because we love cheese and I think
02:31:00
you're an important counterbalance and voice in the world at this time. So, thank all of you for being here. I really appreciate
02:31:06
it, and we shall see. These things live forever.
02:31:17
So, this has always blown my mind a little bit. 53% of you who listen to this show regularly haven't yet
02:31:23
subscribed to the show. So, could I ask you for a favor? If you like the show and what we do here, and you want to support us, the free, simple way
02:31:29
that you can do just that is by hitting the subscribe button. And my commitment to you is if you do that, then I'll do
02:31:35
everything in my power, along with my team, to make sure that this show is better for you every single week. We'll listen to your feedback. We'll find the guests
02:31:41
that you want me to speak to, and we'll continue to do what we do. Thank you so much.
02:31:48
[Music]
02:32:05
[Music]

Badges

This episode stands out for the following:

  • 70
    Most inspiring
  • 70
    Best overall
  • 70
    Best concept / idea
  • 70
    Most creative

Episode Highlights

  • A New Era of Opportunity
    AI could democratize wealth creation, allowing anyone with ideas to succeed.
    “Imagine a world where anyone with merit can generate wealth.”
    @ 05m 31s
    May 12, 2025
  • The Wedge of Disparity
    A discussion on the disparity in wealth creation and job opportunities in the AI era.
    “There's going to be this kind of interesting wedge.”
    @ 23m 00s
    May 12, 2025
  • Worrying Societal Impact
    Concerns about the societal effects of AI technology.
    “I worry about the impact on society.”
    @ 32m 08s
    May 12, 2025
  • The Rise of AI and Agency
    As AI evolves, the concept of agency becomes crucial. High agency individuals will thrive in this new world.
    “Do you have the ability to get things done and coordinate agents?”
    @ 48m 27s
    May 12, 2025
  • Economic Displacement and AI
    The rapid advancement of AI may lead to significant job displacement, raising questions about societal obligations.
    “How many million people are going to be displaced from their jobs in the US?”
    @ 01h 04m 41s
    May 12, 2025
  • Job Displacement by AI
    AI is already replacing jobs, raising concerns about purpose and satisfaction for workers.
    “This isn't some ontologist or something.”
    @ 01h 10m 15s
    May 12, 2025
  • Education for the Future
    Preparing students for a rapidly changing world requires a new educational approach.
    “We have to equip young people to prepare them for the world that's coming.”
    @ 01h 20m 17s
    May 12, 2025
  • The Age of Generalists
    Investing in tools for high agency generalists can lead to a fulfilling life.
    “The high agency generalist is the kind of guiding philosophy.”
    @ 01h 29m 54s
    May 12, 2025
  • Fears of AI's Impact
    Concerns arise about the potential for AI to render human workers obsolete.
    “This is a runaway process that is going to interface very badly with some latent human programs.”
    @ 01h 49m 50s
    May 12, 2025
  • Infinite Leverage for Small Teams
    Small teams can now achieve more in three years than most did in thirty.
    “You can do more in a three-year window than most people did in a 30-year career.”
    @ 02h 04m 08s
    May 12, 2025
  • The Challenge of Containing AI
    Mustafa Suleyman warns that a small group could destabilize our world with AI tools.
    “One of my fears is a tiny group of people who wish to cause harm are going to have access to tools that can instantly destabilize our world.”
    @ 02h 15m 19s
    May 12, 2025
  • Navigating the Future of Technology
    The conversation emphasizes the need to navigate technological advancements carefully to avoid oppression.
    “We should think carefully about how it is that we do not allow the combination of this brand new extremely powerful technology and market forces...”
    @ 02h 25m 00s
    May 12, 2025

Key Moments

  • Paradigm Shift @ 02:24
  • Water Conjured @ 07:05
  • Cognitive Capacity @ 47:18
  • Economic Catastrophe @ 54:06
  • High Velocity Economy @ 55:39
  • Crisis of Meaning @ 1:11:47
  • High Agency Generalists @ 1:29:54
  • AI Concerns @ 2:15:19

Related Episodes

Daniel Priestley: AI Will Make Plumbers Earn More Than Lawyers! (2029 PREDICTION)
Ex-Google Exec (WARNING): The Next 15 Years Will Be Hell! We Need To Start Preparing! - Mo Gawdat
Simon Sinek: You're Being Lied To About AI's Real Purpose! We're Teaching Our Kids To Not Be Human!
AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris
Creator of AI: We Have 2 Years Before Everything Changes! These Jobs Won't Exist in 24 Months!
No.1 Money Saving Experts: Do Not Buy A House! Under 45? You're Not Getting A Pension!