
WARNING: ChatGPT Could Be The Start Of The End! Sam Harris

August 07, 2023 / 01:50:42

This episode features Sam Harris, a neuroscientist and philosopher, discussing the implications of artificial intelligence (AI) on humanity. Key topics include the dangers of superhuman AI, misinformation, and the future of democracy.

Harris reflects on his TED Talk from six years ago, expressing concerns about the alignment of AI with human interests and the potential for misinformation to destabilize society. He emphasizes the urgency of addressing these issues as AI technology continues to advance.

The conversation touches on the societal impacts of narrow AI, particularly regarding misinformation in the context of upcoming elections. Harris warns that the proliferation of fake information could undermine trust in democratic processes.

Harris also discusses the philosophical implications of AI development, questioning whether humanity can maintain control over increasingly intelligent systems. He suggests that a pause in AI development may be necessary to address alignment challenges and ensure safety.

Throughout the episode, Harris shares his views on the future of work, the importance of honesty in personal relationships, and the need for new ethical frameworks in a world increasingly influenced by AI.

TL;DR

Sam Harris discusses the dangers of AI, misinformation, and the future of democracy, emphasizing the need for alignment and ethical considerations.

Video

00:00:00
Artificial intelligence is superhuman. It is smarter than you are, and there's something inherently dangerous for the
00:00:07
dumber party in that relationship. You just can't put the genie back in the bottle. Sam Harris: neuroscientist,
00:00:13
philosopher, author, podcaster. He goes into intellectual territory where few
00:00:18
others dare tread. Six years ago you did a TED Talk: the gains we make in
00:00:24
artificial intelligence could ultimately destroy us. If your objective is to make humanity happy, and there was a button
00:00:30
placed in front of you that would end artificial intelligence, what would you do? Well, I would definitely pause it. The
00:00:37
idea that we've lost the moment to decide whether to hook our most powerful AI to everything... it's
00:00:45
already connected to the internet, got millions of people using it. And the idea that these things will stay aligned with
00:00:51
us because we have built them... we gave them a capacity to rewrite their code. There's just no reason to believe that.
00:00:56
And I worry about the near-term problem of what humans do with increasingly powerful AI, how it amplifies
00:01:03
misinformation. Most of what's online could soon be fake. Can we hold a
00:01:09
presidential election 18 months from now that we recognize as valid? Is it safe? And it just gets scarier and
00:01:16
scarier. I worry we're just going to have to declare bankruptcy with respect to the internet.
00:01:21
If your intuition is correct, are you optimistic about our chances of survival?
00:01:30
Before this episode starts, I have a small favor to ask of you. Two months ago, 74% of people that watched this channel
00:01:37
didn't subscribe. We're now down to 69%. My goal is 50%. So if you've ever liked any
00:01:44
of the videos we've posted, if you like this channel, can you do me a quick favor and hit the subscribe button? It helps this channel more than you know, and the
00:01:50
bigger the channel gets, as you've seen, the bigger the guests get. Thank you, and enjoy this episode.
00:01:56
[Music]
00:02:02
Sam, six years ago you did a TED Talk.
00:02:07
I watched that TED Talk a few times over the last week, and this TED Talk was called "Can We Build AI Without Losing
00:02:13
Control Over It?" In that TED Talk you really discussed the idea of whether AI, when it gets to a certain point of
00:02:20
sentience and intelligence, will wreak havoc on humanity.
00:02:28
Six years later, where do you stand on it today, do you think? Are you optimistic about
00:02:35
our chances of survival? Survival... yeah, I mean, I can't
00:02:41
say I'm optimistic. I am worried about two
00:02:47
species of problem here that are related. I mean, there's sort of the near-term
00:02:52
problem of just what humans do with increasingly powerful AI, and
00:02:59
how it amplifies the problem of misinformation and disinformation, and
00:03:05
just makes it harder and harder to make sense of reality together.
00:03:13
And then there's the longer-term concern about what's called alignment with artificial
00:03:19
general intelligence, where we build AI that is truly general and, by definition,
00:03:26
superhuman in its competence and power. And then the question is, have we built
00:03:32
it in such a way that it is aligned in a durable way with our interests?
00:03:37
I mean, there are some people who just don't see this problem; they're kind
00:03:44
of blind to it. When I'm in the presence of someone who doesn't share this intuition, who doesn't resonate
00:03:51
with it, I just don't understand what they're doing or not doing with their minds in
00:03:58
that moment. Let's say I'm wrong about that; well, then the other person's right, and we just have
00:04:04
fundamentally different intuitions about this particular point. And the point is this:
00:04:09
if you're imagining building true artificial general intelligence that is
00:04:14
superhuman (and that is what everyone, whatever their intuitions, purports to be imagining here), I mean, there are
00:04:20
people on both sides of the alignment debate: people who think alignment's a real problem, and people who think it's a
00:04:27
total fiction. But virtually everyone who's party to this conversation agrees
00:04:32
that we will ultimately build artificial general intelligence that will be superhuman in its
00:04:39
capacities. And there's very little you have to assume to be confident that we're
00:04:44
going to do that. There are really just two assumptions. One is that intelligence is substrate independent: it
00:04:51
doesn't have to be made of meat; it can be made in silico. And we've already proven that with narrow AI. I
00:04:58
mean, we obviously have intelligent machines, and your calculator and your phone are better than
00:05:03
you are at arithmetic; that's just some very narrow band of intelligence.
00:05:09
So we keep building intelligent machines on the assumption that there's nothing
00:05:14
magical about having a computer made of meat. The only other thing you have to assume
00:05:19
is that we will keep doing this. We will keep making progress, and eventually we will be in the
00:05:27
presence of something more intelligent than we are. And that's not assuming Moore's Law, it's
00:05:33
not assuming exponential progress; we just have to keep going. And when you look at the reasons why we
00:05:39
wouldn't keep going, those are all just terrifying, because intelligence is so valuable and we're so incentivized to
00:05:46
have more of it, and every increment of it is valuable. It's not like it only gets valuable when
00:05:52
you double it or 10x it. No, if you just get three more percent, that pays for itself.
00:06:01
So we're going to keep doing this. Our failure to do it suggests that something
00:06:07
terrible has happened in the meantime: we've had a world war, we've had a global pandemic far worse than COVID, we
00:06:14
got hit by an asteroid. Something happened that prevented us as a species
00:06:19
from continuing to make progress in building intelligent machines. So,
00:06:24
absent that, we're going to keep going. We will eventually be in the presence of
00:06:29
something smarter than we are. And this is where intuitions divide.
00:06:35
My intuition, and it's shared by many people (I'm sure I know at least one
00:06:41
whom you've spoken to), my intuition is that
00:06:46
there is something inherently dangerous for the dumber party in that
00:06:52
relationship. There's something inherently dangerous for the dumber species to be in the presence of the
00:06:59
smarter species. And we have seen this based on our entanglement with
00:07:05
all other species dumber than we are, or certainly less competent than we are.
00:07:12
And so, reasoning by analogy, it
00:07:17
would be true of something smarter than we are. People imagine that because we have
00:07:24
built these machines, that is no longer true. But here's where my intuition goes
00:07:32
from there: that imagination is born of not taking intelligence
00:07:40
seriously. Because what a mismatch in intelligence in
00:07:48
particular is, is a fundamental
00:07:54
lack of insight into what the smarter party is doing, and why it's doing it, and
00:08:01
what it will do next, on the part of the dumber party. So, I mean, just imagine that,
00:08:08
by analogy: imagine that dogs had invented us as their super-
00:08:13
intelligent AIs, for the purpose of making their lives
00:08:19
better: securing resources for them, securing comfort for them, getting them medical attention.
00:08:26
It's been working out pretty well for the dogs for about 10,000 years. I
00:08:32
mean, there are some exceptions, we mistreat certain dogs, but generally speaking, for most dogs most of the time, humans
00:08:40
have been a great invention. Now, it's true that
00:08:46
the mismatch in our intelligence dictates a fundamental blindness with
00:08:53
respect to what we've become in the meantime. We have all these instrumental goals and things we care
00:08:59
about that they cannot possibly conceive. They know that when we go get the leash and say it's time for a walk, they
00:09:05
understand that particular part of the language game, but everything else we do, when we're talking to each other, when
00:09:11
we're on our computers or on our phones, they don't have the dimmest idea of what
00:09:17
we're up to. And the truth is, we love our
00:09:23
dogs. We make just irrational sacrifices for our dogs; we prioritize their health over all kinds of things.
00:09:29
That is just amazing to consider. And yet, if there was a new
00:09:36
global pandemic kicking off, and some xenovirus was jumping from dogs to
00:09:42
humans, and it was just a super-Ebola, 90% lethal, and
00:09:48
if this was just a forced choice between them: what do you value more, the lives of your dogs or the lives of
00:09:55
your kids? If that's the situation we were in, it's totally conceivable,
00:10:01
by no means impossible, that we would just kill all the dogs. And they would never know why,
00:10:07
because we have this layer of
00:10:13
mind and culture, the noosphere: there's just this
00:10:18
realm of mind that requires a requisite level of intelligence to even
00:10:26
be party to, to even know exists, and they have no idea it
00:10:32
exists. Now, this is a fanciful analogy, because
00:10:39
the dogs did not invent us. But evolution invented us. Evolution has coded us,
00:10:44
as I said, to survive and spawn, and that's it. So evolution can't
00:10:50
see everything else we've done with our time and attention, and all the values
00:10:56
we've formed in the meantime, and all the ways in which we have explicitly disavowed the program
00:11:02
we've been given. So evolution gave us a program, but if we were really going to live by
00:11:07
the likes of that program, what would we be doing? We would be having as many kids as possible.
00:11:14
The guys would be going to sperm banks and donating their sperm, and finding that the best use of their
00:11:21
time and attention. The idea that you could have hundreds of kids for which you have no financial
00:11:26
responsibility: that should be the most rewarding thing that you could
00:11:32
possibly do with your time as a man. And
00:11:38
yet that's obviously not what we do; there are people who decide not to have kids. And
00:11:43
everything else we do, from you having podcast conversations like this to
00:11:51
curing diseases, every type of interest, literally everything we're doing with science, with
00:11:57
culture: yes, there are points of contact between those products and our
00:12:04
evolved capacities; it's not magic. We are social primates that have leveraged
00:12:10
certain ancient hardware to do new things. But the code that we've been
00:12:17
given doesn't see any of that, and we've not been optimized to build
00:12:22
democracies. Evolution knows nothing; it can know nothing. If evolution were a coder,
00:12:29
there's just no democracy maximization in that code. It's
00:12:36
just not there. So the idea that these things
00:12:41
will stay aligned with us because we have built them, because we have this origin story that we gave them their
00:12:46
initial code, and yet we gave them a capacity to rewrite their code and build future generations of themselves:
00:12:54
there's just no reason to believe that. And the mismatch
00:13:00
in intelligence is intrinsically dangerous. And you can see this,
00:13:05
I mean, with Stuart Russell. I don't know if you've had him on the podcast; he's a great professor of computer science at
00:13:12
Berkeley, and he co-wrote one of the most popular textbooks on AI.
00:13:17
He has some arresting analogies which I think are good intuition pumps here.
00:13:24
One is: just think of how you would feel if we got a
00:13:31
communication from elsewhere in the galaxy, and it was a message that we decoded, and it said, "People of Earth, we will arrive
00:13:38
on your lowly planet in 50 years. Get ready."
00:13:44
Anyone who thinks that we're going to get superintelligent AI in, let's say,
00:13:51
50 years thinks we're essentially in that situation,
00:13:56
and yet we're not responding emotionally to it in the same way. If we received a communication from a
00:14:03
species that we knew, by the sheer fact that they were communicating with us in this
00:14:09
way, was more competent and more powerful and more intelligent than we are, and they were going to arrive,
00:14:15
we would feel that we were on the threshold of the
00:14:21
most momentous change in the history of our species.
00:14:27
And most importantly, we would feel that this is an unavoidable
00:14:35
relationship that's being foisted upon us. It's like a new
00:14:41
creature is coming into the room, with its own capacities, and now you're in a relationship, and
00:14:49
one thing is absolutely certain: it is smarter than you are. By what
00:14:54
factor? I mean, ultimately we're talking about by so
00:15:01
many orders of magnitude that our intuitions completely fail. I mean,
00:15:06
even if it was just a difference in the time of
00:15:12
processing, even if, let's say, there was no difference in the actual native intelligence but just in
00:15:18
processing speed, a million-fold difference in processing speed is
00:15:26
a phantasmagorical difference in capacity. Just imagine we had 10
00:15:31
smart guys in a room over there, and they were working and thinking and talking a million times faster than we are.
00:15:38
So they're no smarter than we are, they're just faster, and we talk to them once every two
00:15:45
weeks, just to catch up on what they're up to, and what they want to do, and whether they still want to collaborate with us. Well,
00:15:51
two weeks for us is 20,000 years of analogous progress for them.
00:15:57
So how could we possibly hope to constrain the opinions of, and collaborate with, and negotiate with,
00:16:04
people no smarter than ourselves who are making twenty thousand years of progress every time we make two weeks of progress?
00:16:12
It's just unimaginable. And yet there are many people who
00:16:19
just think this is fiction, that all the noises I've made in the last five minutes are just
00:16:25
like a new religion of fear, and that there's
00:16:30
just no reason to think that alignment is even a potential problem. If your intuition is
00:16:37
correct, and the analogy of us getting a signal from outer space that someone is coming in 30 years (which, by the way, a
00:16:43
lot of people that speak on this subject matter don't believe is even going to be 30 years, until we reach that sort
00:16:49
of singularity moment they speak of, artificial general intelligence; I've heard people like Elon say
00:16:56
far fewer years: 10 years, 15 years, 20 years, etc.), if that is correct, then
00:17:02
surely this is the most pressing challenge, conversation, issue of our time.
00:17:08
And there's no logical reason that I can see to
00:17:13
refute your intuition there. I can't see a logical reason the rate
00:17:18
of progress won't continue; I don't necessarily see anything that will wipe out or pause our rate of progress.
00:17:25
I mean, just to be charitable to the other side here: there are other
00:17:31
assumptions that they smuggle in. Some do it without being aware of it, but some actually
00:17:37
believe these assumptions, and this spells the difference on this
00:17:43
particular intuition. So it's possible to assume that the more intelligent you get, the more
00:17:50
ethical you become, by definition. Now, we might draw a
00:17:56
somewhat more equivocal picture from just the human case, where we see that there are some very smart people who
00:18:02
aren't that ethical. But I believe, I
00:18:07
mean, I've talked to at least a few people who believe this: there are people who assume that, in the limit, as you push out
00:18:13
far beyond human levels of intelligence,
00:18:19
there's every reason to believe that all of the
00:18:25
provincial, creaturely failures of human ethics will be left
00:18:30
behind as well. The selfishness and the basis for conflict,
00:18:36
the apish urges of status-seeking
00:18:44
monkeys: that's just not going to be in the code, and as you push out into the
00:18:51
omnibus genius of the coming AI, there's a kind of
00:18:57
sainthood that's going to come along with it, and a wisdom that will come along with it. Now,
00:19:03
I just think that's quite a gamble. I think I would take the other side of that bet, and I
00:19:09
would frame it this way: there have to be ways, in the space of all possible
00:19:14
intelligences, that are beyond the human. There's got to be more than one
00:19:20
possibility, just like there are many different ways to have a chess engine that's better than I am at
00:19:26
chess: they're different from each other, but they're all better than me.
00:19:32
There's got to be more than one way to have a superhuman artificial
00:19:38
intelligence, and I would imagine there are,
00:19:43
not an infinite number of ways, but just a vast number.
00:19:49
In the space of all possible minds, there are many locations in that space, beyond
00:19:54
the human, that are not aligned with human well-being. There have got to be more
00:20:01
ways to build this unaligned than aligned. And what other people are smuggling into this
00:20:07
conversation is the intuition that no, once you get beyond the human, you're just
00:20:13
going to be in the presence of the Buddha who understands quantum mechanics and oncology and everything
00:20:19
else. I just see no reason to think that that's so. We could build something that,
00:20:25
again, taking intelligence seriously, we're going to build something we're in
00:20:31
relationship to that's really intelligent, in all the ways that we're intelligent; it's just better at all of those things
00:20:37
than we are. It's by definition superhuman, because the only way it
00:20:43
would be human-level, even for 15 minutes, is if we didn't let it improve itself, if
00:20:49
we wanted to just keep it stuck, as if we'd built a college undergraduate and wanted to keep it
00:20:56
stuck there. But we would have to dumb down all of the specific capacities we've already built, because
00:21:02
every narrow AI we have is superhuman for the thing it does. It
00:21:08
has access to all the information on the internet; it's got perfect memory; it can perfectly
00:21:14
copy itself; when one part of the system learns something, the rest of the system learns it, because it can just swap files.
00:21:21
Again, your phone is a superhuman
00:21:26
calculator; there's no reason to make it a calculator that is human-level. And so we're never going to do that:
00:21:32
we're never going to be in the presence of human-level AGI. We will be immediately in
00:21:38
the presence of superhuman AGI, and then the question is how quickly it improves, and how much
00:21:44
headroom there is to improve into. On the assumption that you can get quite
00:21:50
a bit more intelligent than we are, that we're nowhere near the summit of possible intelligence,
00:21:58
you have to imagine that you're going to be in the presence of something that, again, could be completely
00:22:04
unconscious. I'm not saying that there's something it's like to be this thing (although there might be, and that's a
00:22:11
totally different problem that's worth worrying about), but, whether conscious or not,
00:22:18
it is solving problems, detecting problems, improving its capacity to do all of that,
00:22:23
in ways that we can't possibly understand. And the products of its increasing
00:22:31
competence are always being surfaced.
00:22:37
We've been using it to change the world; we've become reliant upon it; we built this thing for a reason.
00:22:43
I mean, one thing that's been amazing about the developments in recent months is that
00:22:49
those of us who have been at all cognizant of the AI safety space (for, you
00:22:54
know, now going on a decade or more for some people) always assumed
00:23:00
that as we got closer to the end zone, the labs would become
00:23:05
more circumspect; we'd be building this stuff air-gapped from the internet. We have this phrase, "air-gapped from
00:23:12
the internet"; we thought this was a thing, that this thing would be in a box, and then the question would be,
00:23:18
well, do we let it out of the box and let it do something? Is it safe, and how do we know if it's safe?
00:23:24
And we thought we would have that moment. We thought it would happen in a lab at Google or at Facebook or somewhere. We thought we would hear, "Okay,
00:23:31
we've got something really impressive, and now we just want it to touch the stock market, or we want it to touch
00:23:38
our medical data, or we just want to see if we can use it." We're way past that. We've built
00:23:45
this stuff already in the wild. It's already connected to the internet; it's already got millions of people using it;
00:23:52
it already has APIs; it's already doing work. So from an AI safety
00:23:57
point of view, that's amazing: we didn't even have the moment, the choice point, we thought was going to be so
00:24:03
fraught. Of course we didn't, because there were such pressing incentives for people to press
00:24:11
forward regardless of that conversation. Yeah, but everyone thought... I mean,
00:24:17
I don't believe I was ever in conversation with someone
00:24:22
like Eliezer Yudkowsky or Nick Bostrom or Stuart Russell
00:24:28
who assumed we would be in this spot.
00:24:35
I'd have to go back and look at those conversations, but there was so much time spent, it
00:24:42
seems quite unnecessarily, on this idea that we'd make a certain amount
00:24:49
of progress and circumspection would kick in; even the people who
00:24:54
were doubters would become worried, and there would be, in the final yards, as we go across
00:25:01
into the end zone, some mode where we could slow down and figure it out, and try
00:25:06
to deal with the arms race dynamics. Let's place a phone call to China, and let's
00:25:13
talk about this; we've got something interesting. But the stuff has already been built in connection to everything,
00:25:18
and there are already just endless businesses being
00:25:24
devised on the back of this thing, and all the improvements are going
00:25:30
to get plowed into it. So just imagine what this looks like even in success. Let's say it just
00:25:36
starts working wonders for us, and we get these great productivity gains, and,
00:25:43
okay, then we cross into whatever the singularity is,
00:25:49
at whatever speed, and we find ourselves in the presence of something that is truly general, after all of
00:25:56
this narrow stuff (albeit superhuman narrow stuff)
00:26:02
is something that we totally depend on: every hospital requires it, and every airplane requires it, and all
00:26:09
of our missile systems require it; this is just the way we do business.
00:26:15
There is nothing to turn off at that point. I mean, I
00:26:20
put this to Marc Andreessen on my podcast, and he said, yeah, you can turn off the internet. I can't
00:26:27
believe he was quite serious. Yes, if you're North Korea, I guess you can turn off the internet for North Korea, and that's why North Korea is like North
00:26:34
Korea. But the idea that we could... I mean, the cost of
00:26:40
turning off the internet now would be,
00:26:46
I think, unimaginable, in the economic cost alone.
00:26:52
It just would be. So anyway, the idea
00:26:59
that we've lost the moment to decide whether to hook our most powerful
00:27:06
AI to everything, because it's already being built more or less in contact with, if not everything,
00:27:13
so many things, that you just can't put the genie back in the bottle:
00:27:18
that is genuinely surprising to me. And, yeah, incentives. Is this not the
00:27:25
most pressing problem, then? Because I was going to start this conversation by asking you the question about the
00:27:30
thing that occupies your mind the most, and the most important thing we should be talking about, and I in part assumed
00:27:36
the answer would be artificial intelligence, because of the way that you talk about your intuition on this subject matter. You've got children;
00:27:43
you think about the future a lot. If you can see this species coming to Earth, even
00:27:50
if it's in the next 100 years, it strikes me to be the most pressing problem for humanity.
00:27:56
Well, as interesting as I think that problem is, and as consequential as it is,
00:28:04
I'm worried that life could become unlivable in the near term, before we
00:28:10
even get there. I'm worried about the misuses of narrow AI in the meantime.
00:28:16
Just take the current level of AI we have: we have GPT-4.
00:28:24
I think within the next 12 months or two years, let's say,
00:28:29
whatever GPT-5 is, we're going to be in the presence of something where
00:28:35
most of what's online that purports to be information could soon be fake.
00:28:42
Most of the text you find on any topic could just be fake. Like, someone
00:28:49
has just decided: write me a thousand journal articles on why mRNA vaccines
00:28:55
cause cancer, and give me 150 citations; write them in the style of Nature and
00:29:01
Nature Genetics and The Lancet and JAMA, and just put them out there. One teenager could do that in five
00:29:09
minutes with the right AI. GPT-4 is not quite that, but
00:29:15
GPT-5 possibly will be. That is such a near-term
00:29:20
advance. Or just imagine knitting together the visual stuff, like Midjourney and
00:29:26
DALL-E and Stable Diffusion, with a large language model.
00:29:33
Just imagine the tool (maybe this is 18 months away, maybe it's
00:29:38
three years away, but it's not 30 years away) where you can just say: give me a 45-minute documentary on how
00:29:46
the Holocaust never happened, filled with archival imagery; give me Hitler
00:29:51
speaking in German, with the appropriate translations;
00:29:56
give it in the style of Alex Gibney or Ken Burns; and
00:30:03
give me 10,000 of those. All the friction
00:30:10
for misinformation has been taken out of the system. And I worry we're just going to have to
00:30:16
declare bankruptcy with respect to the internet; we're just not going to be able to figure out
00:30:22
what's real and when you when you look at how hard that is now with social media uh in the in the aftermath of of
00:30:31
covid and Trump and how just the challenge for of holding an election that most of the
00:30:39
population agrees was valid right that challenge already
00:30:45
is is on the verge of being insurmountable in the U.S right
00:30:51
I mean it's just like it's easy to see us failing at that AI aside now when you
00:30:56
add a large language models to that and the more competent future version of it where it's just the
00:31:03
most compelling deep fakes are indistinguishable from you know real data
00:31:11
um and everyone is siled into their tribes where they're stigmatizing the information that comes from any other
00:31:16
tribe and we're just and the internet is now so big a place that there really isn't the ordinary
00:31:23
selection pressures where where bad information gets successfully debunked so that it goes away it says you can
00:31:28
live in a conspiracy cult for the rest of your life if you want to
00:31:33
you know you can be queuing on all day long if you want to and now we've got deep fakes Shoring all
00:31:41
that up and just spurious you know scientific articles showing all that up I all this
00:31:49
becomes a more compelling form of psychosis and you know culturally speaking and so I'm just worried that
00:31:56
it's going to get harder and harder for us to cooperate with one another and
00:32:01
collaborate and that our politics will just completely break and that'll you know
00:32:08
offer an opportunity for lots of you know bad actors and I mean leaving
00:32:14
aside there's cyber terrorism and there's synthetic biology you know the moment you turn AI
00:32:20
loose on the prospect of engineering viruses and you know all
00:32:26
of that it potentiates it I mean the asymmetry here is that
00:32:33
it seems like it's always easier to break things than to fix them or to categorically prevent
00:32:39
people from breaking them and what we have with increasingly powerful technology is the ability for one person
00:32:47
to create more and more damage or one small group of people right so it's just
00:32:53
it just turns out it's hard enough to build a nuclear bomb that one person can't really do it no matter how smart you need a team and
00:33:00
traditionally you've needed state actors and you need access to resources and you have to get the
00:33:06
physical material and it's hard enough but this is
00:33:12
being fully democratized this tech and so um yeah I worry about the near-term chaos
00:33:21
I've never found the near-term consequences of artificial intelligence to be that interesting until now is that
00:33:28
what you said that image of the internet becoming unusable was a real eureka moment for me because I've
00:33:34
not been thinking about that yeah me too I was just concerned about the AGI risk and now
00:33:42
really in the aftermath of Trump and Covid I see the risk of
00:33:48
um you know if not losing everything losing a lot that matters
00:33:56
just based on our interacting with just these very
00:34:03
simple tools that are reliably misleading us I mean I'm just amazed
00:34:09
at what social media forget about that I'm amazed at what Twitter did to me I mean you know even
00:34:15
with all of my training and all you know with my head screwed on reasonably straight I mean
00:34:22
it's amazing to say it but almost all of the truly bad things that
00:34:28
have happened to me in the last decade that really just destabilized
00:34:34
relationships and priorities and that I kind of got plowed back into what
00:34:41
became a kind of professional emergency you know stuff I had to respond to in writing or on podcasts
00:34:47
it was on Twitter my engagement with Twitter was the thing that produced the chaos and it was
00:34:54
completely unnecessary um and it was just amplifying a kind
00:35:00
of signal for me that I felt compelled to pay attention to because I was on it and I was trying to
00:35:06
communicate with people on it I was getting certain communication back and it was giving me a picture of the rest of humanity which I now think was
00:35:13
fundamentally misleading but it was still consequential yeah like at a certain point even
00:35:20
believing that it was misleading wasn't enough to inoculate me against the delusion the kind of opinion
00:35:26
change that was being forced upon me um and I was feeling like okay these people are becoming unrecognizable
00:35:33
like I know some of these people I've had dinner with some of these people and their behavior on Twitter is
00:35:40
appearing so deranged to me and in such bad faith um
00:35:45
people who I know to be non-psychopaths are starting to behave
00:35:51
like psychopaths at least on Twitter and I'm becoming similarly unrecognizable to
00:35:58
them it's just again it all felt like a psychological experiment to which
00:36:03
I hadn't consented in which I enrolled myself somehow because it was what everyone was doing in 2009
00:36:11
um and I spent you know 12 years there getting some signal and responding to it
00:36:17
and it's not to say that it was all bad I mean I read a bunch of good articles
00:36:23
that got linked there and you know I discovered some interesting people but uh
00:36:29
the change in my life after I deleted my Twitter account was so enormous I mean
00:36:35
it's embarrassing to admit it I mean it's like getting out of a bad relationship and it was
00:36:41
just a fundamental um
00:36:47
just freedom from this chaos monster that was always there
00:36:53
ready to disrupt something based on its own dynamics and when did you
00:36:59
delete it um yeah like December I think it was December and I'm not someone
00:37:04
that really takes sides on things I like to try and remain in the middle politically so you must have a very different Twitter experience than I was
00:37:11
having no no no I don't tweet anything other than this podcast trailer I don't do anything
00:37:17
else right okay so anything you'll see on my Twitter is the podcast trailer that's it yeah and for all the
00:37:24
reasons you've described and more interestingly I wanted to say in the last eight months as someone that tries not to get caught up too much in
00:37:30
the media oh Elon bought this it's a hundred percent gone in that direction
00:37:35
as in my timeline now I say to my friends all the time and some of my
00:37:40
friends who again I think are nuanced and balanced have said to me there's something that's been turned up
00:37:45
in the algorithm to increase engagement that has planted me in an unpleasant echo chamber that I didn't desire to be
00:37:51
in and if I wasn't somewhat conscious of it I would 100 percent be in there my
00:37:56
timeline my friend Castle tweeted the other day he's never seen more people die on his Twitter timeline
00:38:02
than he has in the last six months they're prioritizing video so you're seeing a lot of like death in CCTV footage that I've never
00:38:09
seen before and then the debate around gender um politics
00:38:14
right-leaning subject matter has never been more right down your throat yeah
00:38:20
because it's almost like something in the algorithm has been switched where it's now like
00:38:25
people have been let out of the asylum that's how I can describe it and it's made me retract even more so
00:38:31
when Zuckerberg announced Threads a couple of weeks ago it was kind of like a life raft right
00:38:38
out of the Titanic um and I really mean that and I'm not someone to get easily caught up in
00:38:44
narrative you know as it relates to social media platforms it's been my industry for a decade but what I've seen
00:38:49
on Twitter has actually made me believe this hypothesis I had five years ago where I thought there would be
00:38:55
um I thought the journey of social networking would have way more social networks and they'd be more siloed
00:39:01
I thought we'd have one for our neighborhood our football club and now I believe that even more than ever yeah
00:39:06
that seems right and I mean whether it's possible to have a truly
00:39:11
healthy social network that people want to be in and there's a good reason to be there uh I don't know if
00:39:19
that's possible I like to think it is but um I think there's certain things you
00:39:25
have to clean up at the outset to make
00:39:31
it possible I mean I think anonymity is a bad thing and um probably being free is a bad thing I
00:39:38
think you know you sort of get what you pay for online and uh I just think there
00:39:44
might be ways to set it up where it would be better but I don't think it would be popular because I think the thing that
00:39:51
makes it popular makes it toxic right right and even the anonymity piece I've played this out a couple of times in my
00:39:56
mind and the rebuttal I always get is well there's people in Syria who have important news to break
00:40:02
and they'd be hanged if they did so we need an anonymous version of the
00:40:07
social internet right yeah well I guess there could be some exception there but
00:40:13
um I don't know it actually doesn't interest me
00:40:18
because I just feel such
00:40:23
a different sense of my being in the world as a result of not
00:40:29
paying attention to my online simulacrum of myself it's a um
00:40:38
because Twitter was the only one I used like I've been on Facebook this whole time and I
00:40:43
guess I'm on Instagram too but my team just uses those as marketing channels you know it
00:40:49
sounds like that's the way you use Twitter now but Twitter was the one where I decided okay this is going to be me
00:40:55
I'm gonna be posting here and you know if I've made a mistake I want to hear about it and I
00:41:01
just want to use it as an actual basis for communication
00:41:07
um and for the longest time it actually felt like a valid tool in that respect
00:41:13
you know then it reached a crisis point and I decided this is just pure toxicity even the good
00:41:19
stuff can't possibly make a dent in the bad stuff so I just deleted it and then
00:41:24
I was returned to the real world right where I actually live
00:41:30
and to books I mean I'm online all the time anyway but it's not
00:41:36
the same it's the time course of reactivity when you don't have social media when
00:41:42
you don't have a place to put this instantaneous hot take
00:41:48
that you're tempted to put out into the world because there's literally no place to put it like for me
00:41:54
if I have some reaction to something in the news I have to decide whether it's worth
00:42:00
talking about on my next podcast which I might be recording you know four days from now
00:42:06
and rather often people have been just bloviating about this thing for four solid days before I ever get to the
00:42:12
microphone and then I get to think is this still worth talking about and almost nothing survives that
00:42:19
test anymore right it's like the conversation's moved on so there's actually no place for me to
00:42:25
just type this thing that takes me 10 seconds and then rolls out there
00:42:31
to detonate in the minds of you know my
00:42:37
friends and enemies to opposite effect and then I see the result of all
00:42:43
that you know again on the sort of reinforcement loop of every 15 minutes
00:42:50
um not having that is such a relief that I just don't even know why I would go back so like
00:42:56
when Threads was announced I think I'm on Threads too but it's not me it's just you know again another
00:43:01
marketing channel um but yeah I feel such relief
00:43:07
not exercising that muscle anymore it's like you know I don't know how
00:43:13
often I was checking Twitter but you know I was not checking it just to see what was
00:43:19
happening to me or what the response to the last thing I tweeted was I was checking it a lot because it was my news
00:43:26
feed it's like I'm following you know 200 smart people they're telling me what they're paying attention to and so I'm
00:43:32
fascinated so yeah I want to see that next article or that next video just that engagement and the endless
00:43:38
opportunity to comment and to put my foot in my mouth or put my foot in someone else's mouth or have someone put
00:43:44
their foot in mine it's just not having that has been such a relief
00:43:50
that I mean it's not impossible but I would be very cautious in reactivating that because it
00:43:56
was so much noise and again it created
00:44:02
so much I mean it became an opportunity cost
00:44:08
but also just this endless opportunity for misunderstanding but
00:44:15
especially misunderstanding of me and you know everything I've been putting out into the world and then my sense
00:44:20
that I had to react to it and then you plow that back in and
00:44:27
you know that becomes the basis for further misunderstanding um
00:44:33
and it just constantly was giving me the sense that there's something I need to react to on my
00:44:38
podcast in an article on Twitter as if this is a valid signal like
00:44:45
this is a five-alarm fire you've got to stop everything like you're by the pool on the one vacation you're taking with
00:44:51
your family that summer and this thing just happened on your phone
00:44:56
and it can't wait right you actually have to pay attention because the conversation is happening
00:45:02
right now and so it was a kind of addiction to information and you
00:45:08
know at some level reputation management um
00:45:15
and I mean yeah to just be free of it is such a relief apart
00:45:20
from like you know health issues with certain family members virtually the only bad things that have
00:45:27
happened to me have been a result of my engagement with Twitter over the last 10 years
00:45:34
so it's just you know I guess if I'm a masochist I
00:45:39
would be back on Twitter but that would be the only reason to do it narrow AI I asked you the question a second
00:45:45
ago which um I really wanted to get a solution to because I'm mildly terrified I
00:45:51
completely believe um the logic underneath your opinion that narrow AI will cause this
00:45:58
um destabilization and unusability of the internet so just focusing on narrow AI what
00:46:03
would you consider to be a solution to prevent us getting to that world where misinformation is rife to the point
00:46:09
that it can destabilize society politics and culture well it's something I've been
00:46:16
asking people about on my podcast because it's not actually my wheelhouse and I would just need to hear from
00:46:22
experts about what's possible technically here but um
00:46:27
I'm imagining that paradoxically or ironically
00:46:33
this could usher in a new kind of gatekeeping that we're going to rely on
00:46:38
because the provenance of information is going to be so important I mean the assurance that a
00:46:45
video has not been manipulated or is not just a pure confection of
00:46:50
deep fakery right so
00:46:55
it could be that we're meandering into a new period where
00:47:01
you're not going to trust a photo unless it's coming from you know Getty Images or you know the New York
00:47:07
Times has some story about how they have verified every photo
00:47:12
that they put in their newspaper they have a process and so if you see a video of Vladimir Putin
00:47:20
seeming to say that he's declaring war on the U.S right I think
00:47:26
most people are going to assume that's fake until proven otherwise there's just going to be too much
00:47:33
fake stuff and it's all going to look so good that the New York
00:47:38
Times and every other you know organ of media that we have relied upon
00:47:44
um as imperfect as they've been of late they're going to have to figure out what
00:47:49
the tools are whereby they can say okay this is actually a video of Putin right
00:47:55
I mean I'm not going to be able to figure it out on my own right the New York Times has a process or CNN has a process
00:48:02
that they go through before they say okay Putin really said this and so this
00:48:08
is real we have to now react to this um whatever that process is and you know
00:48:14
whether there's some kind of digital watermark you know that's connected to the blockchain
00:48:19
I mean there's some tech implementation of it that can be fully democratized where just by being on
00:48:27
the latest version of the Chrome browser you can
00:48:32
differentiate you know real and fake videos I don't know what the implementation will be but I just know we're going to get to some spot
00:48:39
where it's going to be all right we have to declare epistemological
00:48:44
bankruptcy we don't know what's real we have to assume anything especially lurid
00:48:50
or agitating is fake until proven otherwise
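[Editor's note: the provenance scheme Harris gestures at here, a publisher vouching for a video so a browser can verify it, can be sketched in a few lines. This is a hedged illustration, not any real standard: systems like C2PA use public-key signatures over content hashes, while this toy stand-in uses an HMAC with a made-up publisher key.]

```python
# Sketch of media provenance: a publisher tags the hash of a video file,
# and a client (e.g. a browser) checks the tag before treating the footage
# as authentic. Real systems use public-key signatures; hmac with a shared
# key is a simplified stand-in, and the key below is purely hypothetical.
import hashlib
import hmac

PUBLISHER_KEY = b"demo-publisher-signing-key"  # hypothetical, for illustration

def sign_video(video_bytes: bytes) -> str:
    """Publisher side: tag the SHA-256 hash of the content."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    return hmac.new(PUBLISHER_KEY, digest.encode(), "sha256").hexdigest()

def verify_video(video_bytes: bytes, tag: str) -> bool:
    """Client side: recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign_video(video_bytes), tag)

original = b"raw footage bytes"
tag = sign_video(original)
assert verify_video(original, tag)             # untouched footage verifies
assert not verify_video(original + b"x", tag)  # any edit breaks the tag
```

The point of the sketch is the asymmetry Harris describes: anyone can check a tag cheaply, so "fake until proven otherwise" becomes a default the tooling can enforce.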
00:48:55
and you know that'll be a resetting of something I don't know
00:49:01
what we do with that in a world where we really don't have that much time to react to certain things you
00:49:07
know a video of Putin saying he's launched his missiles is something that you know 30 minutes
00:49:12
from now we would understand whether it's real or not I mean again forget about
00:49:19
everything we just said about AI look at all of our legacy risks look at
00:49:25
the risk of nuclear war the risk of stumbling into a nuclear war by accident
00:49:30
has been hanging over our heads for 70 years I mean we've got this old tech
00:49:35
we've got these wonky radar systems that throw up errors we have
00:49:42
moments in history where you know one Soviet sub-commander
00:49:49
decided based on just his gut feeling his common sense that the data was
00:49:56
almost certainly an error and he decided not to pass the obvious
00:50:01
evidence of an American ICBM launch up the chain of command knowing that the
00:50:07
chain of command would say okay you have to fire and he reasoned that if the U.S
00:50:13
was going to attack the Soviet Union they would launch more than I think in this case it looked like there were four
00:50:18
missiles that was the radar signature if the U.S. was going to launch a first strike against the Soviet Union in
00:50:26
the mid 80s um they're going to launch more than four
00:50:31
missiles right this has to be bad data so you know if we automate all this will
00:50:38
we automate it to systems that have that kind of common sense right um
00:50:43
but we've been perched on the edge of
00:50:48
the abyss based on this possibility forget about malevolent actors you know who might decide to have a
00:50:55
nuclear war on purpose we have the possibility of accidental nuclear war
00:51:00
you add this cacophony of misinformation and deep fakes to all of that and it just
00:51:07
gets scarier and scarier and this is not even AGI this is just
00:51:12
you know narrow-AI-amplified misinformation how do you feel about it
00:51:18
well this is the thing that worries me I worry about the next election you know I think if we can run the 2024
00:51:25
election in a way that most of America acknowledges was valid
00:51:31
that will be an amazing victory you know whatever the outcome I mean obviously I
00:51:39
would not be looking forward to a Trump presidency but um I think even more fundamental than that
00:51:45
is can we hold a presidential election 18 months from now
00:51:51
that we recognize as valid right I don't know
00:51:56
what kind of resources are being spent on that particular problem but that is hugely important
00:52:04
and I don't think our near-term experiments with AI are
00:52:11
going to make that easier why is it so important well I mean if you think the
00:52:16
maintenance of uh a valid democracy in the
00:52:21
world's lone superpower is of minor importance I
00:52:27
um I'd like to drink the tea you're drinking are you optimistic
00:52:32
I mean I can't say I'm optimistic you know it's a paradoxical state I mean because
00:52:39
I definitely tend to focus on what's wrong or might be wrong
00:52:44
I think I have a pessimistic bias right
00:52:50
like I tend to notice what's wrong as opposed to what's right you know I mean that's
00:52:55
um that's my bias but I'm actually very happy right like I
00:53:02
have a very good life I'm just incredibly lucky I'm surrounded by great
00:53:09
people it's all great and yet I see all of these
00:53:15
risks on the horizon so um
00:53:20
I have a very high degree of well-being at this moment in my life and
00:53:25
yet what's on the television is scary and so it's a very
00:53:32
interesting juxtaposition yeah you know I'll be very relieved if we have a
00:53:39
busy or just I feel like we're in a very weird spot I mean I haven't seen
00:53:45
a full postmortem on the Covid pandemic that has fully encapsulated what I think happened to
00:53:52
us there but my vague sense is that
00:53:57
we didn't learn a whole hell of a lot I mean basically what we learned is we're really bad at responding to this kind of
00:54:04
thing this was a challenge that just fragmented us as a society it could have brought us together
00:54:10
it didn't and it amplified all of the divisions
00:54:19
in our society politically and economically and tribally in all kinds
00:54:24
of ways the role of misinformation and disinformation in all of that was all too clear and I think it's just getting
00:54:30
worse so I think you know as a dress rehearsal for some future pandemic that is inevitably going to come
00:54:37
and you know could well be worse I think we failed this dress rehearsal and
00:54:43
you know I have to hope that at some point our institutions will reconstitute themselves so as to be
00:54:51
obviously trustworthy and engender the kind of trust we actually need to have in our institutions like we need a CDC
00:54:57
that not only do we trust but that is trustworthy that we're right to trust right
00:55:05
and so it is with the FDA and every other you know institution that is relevant here and
00:55:13
we don't quite have that and half of our society thinks we don't have that at all right and so
00:55:20
um we have to rebuild trust in institutions somehow and I just think you know we have a lot of work to do but
00:55:28
to even figure out how to make an increment of progress on that score
00:55:33
because again the siloing of large
00:55:39
constituencies into alternate information universes is just not
00:55:45
functional and that's so much of what social media has done to us and alternative media I mean you know I
00:55:51
call it you know you and I are podcasters but I call it Podcastistan right we have this landscape of I
00:55:59
mean there's now whatever a million plus podcasts and there's you know email newsletters and
00:56:05
everyone has now just decided to curate their information diet in a way that's
00:56:10
just bespoke to them and you can stay there forever and you're
00:56:16
getting one slice of it and it could be you know a completely fictional slice of
00:56:23
reality and um we're losing the ability to converge on a common picture of what's going on
00:56:31
so did that sound optimistic I didn't hear the optimism there you tell me no I
00:56:37
no but I kind of can't refute anything you said on a logical basis it all sounds
00:56:43
um like that is the direction of travel we're going in unfortunately um I have faith that there'll be
00:56:49
surprising positives there always tend to be surprising positives that we also didn't
00:56:55
factor in um it's easy to see I mean if there's
00:57:00
any significant low-hanging fruit technologically or
00:57:07
scientifically that could be AI-enabled for us just take you
00:57:14
know a cure for cancer a cure for Alzheimer's I mean just having one thing like that
00:57:20
right that would be such an enormous good um
00:57:25
and that's why we can't get off this ride and that's why there is no brake to pull
00:57:31
because the value of intelligence is so enormous I mean it is just
00:57:38
it's not everything I mean there are other things we care about and are right to care about beyond intelligence I mean love is not
00:57:45
the same thing as intelligence right but intelligence is the thing that can safeguard everything you love right like
00:57:52
even if you think the whole point in life is to just get on a beach with your
00:57:57
friends and your family and just hang out and enjoy the sunset
00:58:03
okay you don't need superhuman intelligence to do any
00:58:09
of that right you're fit to do it exactly as you are you could have done that in the 70s and it would be
00:58:14
just as good a beach and they'd be just as good friends but
00:58:20
every gain we make in intelligence is the thing that safeguards that opportunity for you and everyone else
00:58:26
I feel like we've not defined the term artificial general intelligence from my understanding of it it's when the intelligence can think
00:58:33
and make decisions almost like a human yeah maybe loosely
00:58:38
this is kind of just a semantic problem intelligence can mean many things but
00:58:44
you know loosely speaking it is the ability to solve problems uh and
00:58:51
meet goals make decisions in response to a changing environment in
00:58:59
response to data um and the general
00:59:05
aspect of that is an ability to do that across many different situations
00:59:12
all the sorts of situations we encounter as people and to have one's capacity in one area
00:59:19
not undermine another you know as I get better at deciding whether or not this is a cup I don't
00:59:25
magically get worse at deciding whether you know you just said a word right it's
00:59:30
like I can do multiple things in multiple channels that's not something we had in our
00:59:37
artificial systems for the longest time because everything was bespoke to the task we'd build a chess engine
00:59:43
and it couldn't even play tic-tac-toe all it could do was play chess and we just would get
00:59:49
better and better in these piecemeal narrow ways and then
00:59:55
things began to change a few years ago you know like DeepMind would have its algorithms
01:00:00
where uh you know the same algorithm with slightly different tuning could play Go
01:00:06
right or you know it could solve a protein-folding problem as opposed to just playing chess right and
01:00:13
it became the best in the world at chess and it became the best in the world at Go and
01:00:18
um and amazingly I mean take you know what AlphaZero did
01:00:25
you know before AlphaZero all the chess algorithms just
01:00:33
had all of our chess knowledge plowed into them they had studied every human game of chess and they were
01:00:39
just you know bespoke chess engines AlphaZero just played itself I think for like four
01:00:47
hours right it just had the rules of chess and then it played itself and it became better not
01:00:54
merely than every person who's ever played the game it became better than all the chess engines that
01:01:00
had all of our chess knowledge plowed into them so it's a
01:01:06
fundamentally new moment in how you build an intelligent system and it promises this possibility
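[Editor's note: the "just the rules" point can be made concrete with a toy. This is not AlphaZero's actual algorithm, which pairs self-play with a learned network and Monte Carlo tree search; it only shows the core idea that a program given nothing but a game's rules can derive perfect play by playing out its own games. The game here is a hypothetical subtraction game: take 1 or 2 stones, taking the last stone wins.]

```python
# Given only the rules -- no human game knowledge plowed in -- the program
# derives perfect play by recursively playing out positions against itself.
from functools import lru_cache

MOVES = (1, 2)  # legal numbers of stones to take per turn

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move wins with perfect play from this position."""
    # A position is winning if some move leaves the opponent in a losing one.
    return any(not wins(stones - m) for m in MOVES if m <= stones)

def best_move(stones: int):
    """Pick a move that leaves the opponent in a losing position, if any."""
    for m in MOVES:
        if m <= stones and not wins(stones - m):
            return m
    return None  # every move loses against perfect play

# Multiples of 3 are lost for the player to move in this game.
assert not wins(9) and wins(10)
assert best_move(10) == 1  # take 1, leaving 9, a losing position
```

The same skeleton, scaled up with search guidance and a learned evaluation, is the shape of the self-play systems Harris is describing.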
01:01:14
again this inevitability the moment you admit that we will eventually get there the
01:01:21
moment you admit that it can be done in silico
01:01:26
and the moment that you admit that we will just keep going unless a catastrophe happens
01:01:32
and those two things are so easy to admit that at this point I don't see any place to stand where you're not forced to admit them right I
01:01:38
don't see any neuroscientific or cognitive scientific argument for
01:01:44
substrate dependence for intelligence given what we've already built and
01:01:52
again we're going to keep going until something stops us right unless we hit some immovable object that prevents us
01:01:58
from releasing the next iPhone otherwise we're going to keep going and then whatever
01:02:05
general will mean in that first case there'll be a case
01:02:10
where we've built a system that is so good at everything we care about
01:02:17
that it is functionally general now maybe it's missing something maybe it's not you know maybe it's missing something that we don't even have a name for you
01:02:24
know we're missing all kinds of possible intelligences that we haven't even thought about because we just
01:02:29
haven't thought about them right there are undoubtedly ways to section the universe that
01:02:37
we can't even conceive of because we just have the minds we have Elon was
01:02:43
asked a question on this by a journalist who said to him in a world where you believe that to be true that artificial general intelligence is
01:02:49
around the corner when your kids come to you and say Daddy what should I do with my life
01:02:55
to define purpose and meaning what advice do you now give them if you hold that intuition to be true
01:03:01
that it's around the corner what do you say to your children when they say what should I do with my life to create purpose and meaning and
01:03:08
did you say that Elon answered this question yeah what did he say it's one of the most chilling moments in an
01:03:14
interview I think I've seen in recent times because he stutters he goes silent for about 15 seconds which is very
01:03:20
un-Elon he stutters um he stutters a bit more because he
01:03:26
can't answer and then he says he thinks he's living in suspended disbelief
01:03:32
because if you really thought about it too much what's the point he says what's the point of me building all
01:03:38
these cars he was in his Tesla factory what's the point of me building all these cars I do think that sometimes so I think I have
01:03:43
to live in his words were suspended disbelief right well I would encourage him to ask what's the point of spending
01:03:49
so much time on Twitter because he could clearly benefit from rethinking that but
01:03:55
um that aside I mean my answer to that is
01:04:01
and I think other people have echoed this of late um I mean it's sort of surprising to me my
01:04:07
answer is that this begins to privilege a return to the humanities as a kind
01:04:16
of core like the center of mass intellectually for us because when you
01:04:22
look at what we're really good at and
01:04:27
uh it's among the last things that can be plausibly automated
01:04:35
uh and if if we automate it we may cease to care about it so it's like
01:04:41
learning to write good code is something that is going to be it's being automated now it's it is you know I'm not a
01:04:47
programmer but um you know I have it on good authority that already these large language models
01:04:54
are improving code and something like half the time they're writing better code than than people
01:04:59
that's all going to become like chess right it's just it's going to be better than people ultimately
01:05:06
So being a software engineer, being a
01:05:12
radiologist, things like those, it's easy to see how AI just cancels those professions, or at least makes one
01:05:18
person so effective at using AI tools that one person can do the work
01:05:24
of 100 people, so you've got 99 people who don't have to be doing that job.
01:05:29
But creating art, writing novels,
01:05:43
being a philosopher, talking about
01:05:52
what it means to live a good life and how to do it, that's something where we
01:06:00
have to look at where we're going to care that we're
01:06:07
actually in relationship to, and in dialogue with, another person who we know to
01:06:12
be conscious, right? Where we don't care about that, we're going
01:06:18
to want just the best version of it. Like, I don't care if the cure for cancer comes from an insentient AI.
01:06:18
I do not care, I just want the cure for cancer, right? There's no added value
01:06:24
where I find out, okay, the person who gave me this cure really felt good about it, he had tears
01:06:31
in his eyes when he figured out the cure. Every engineering problem is like that. We want safer planes, we
01:06:37
just want things to work. We're not sentimental about the artistry that
01:06:43
went into all of that. And when the gulf between the best and the mediocre gets
01:06:51
big and consequential, we're just going to want the best, all the way down the line. But what is the best novel,
01:06:59
right? What is the best podcast conversation? What is it, and can you
01:07:06
subtract out the conscious person from that and
01:07:11
still think it's the best? So someone once sent me
01:07:17
what purported to be, I didn't even listen to it, so I'm not even sure what it was, but it looked like, an AI-
01:07:22
generated conversation between Alan Watts and Terence McKenna, right? Both guys who I love. I never
01:07:29
knew either of them, but as a fan of both I've listened to hundreds of hours of both talking. As far as I know, they never
01:07:36
met each other. It would have been a fascinating conversation. When I looked at this YouTube video, I realized I simply don't
01:07:44
care how good this is, because I only care if it was actually Alan Watts and
01:07:50
Terence McKenna talking. A simulacrum of Alan Watts and Terence McKenna, in this context,
01:07:57
I don't care about, right? So another use case I stumbled upon:
01:08:03
I was playing with ChatGPT and I asked it about the causes of World War II, you know, give me 500 words on the
01:08:10
causes of World War II, and it gives you this perfect little bullet-pointed essay on the causes of World War
01:08:16
II. That's exactly what I want from it. That's fine. I don't care
01:08:21
that there was no person behind that, typing. But when I think, well,
01:08:28
do I want to read Churchill's history of World
01:08:33
War II? It's on my shelf to read, it's one of these aspirational sets of books, I haven't read
01:08:39
it yet. I actually want to read it because Churchill wrote it, right? That's why. And if you could give me an
01:08:46
AI version of Churchill, saying this is in the style of Churchill, even Churchill scholars say this sounds like
01:08:52
Churchill, I actually don't care about it. That's not the use case. I'll take the
01:08:58
generic use of, you know, give me the causes of World War II. The fake Churchill is profoundly
01:09:05
uninteresting to me. The real Churchill, even though he's dead, is interesting to me. So the rebuttal I give
01:09:12
here, and this is what my mind is doing, is about the distinction you're presenting.
01:09:18
The difference I see is that in the case of the conversation between two people you respect that has been generated by AI, someone has signaled you
01:09:25
that it is fake. If you remove that, because, say, Churchill thought, why would I write a book
01:09:31
when I could just click a button and this thing will write it in my voice, in my tone of voice, with the
01:09:37
entire back catalog of things I've written before, and it will produce my account, and it
01:09:43
will save me time? So I'll just click a button, my publisher maybe will do it for me, and then I'll sell that to Sam on the
01:09:49
basis that it is my thoughts. And I can imagine a very near future, if we just do
01:09:56
it by percentage, how many books are going to be increasingly written by artificial intelligence, to the point
01:10:01
that when you look at a shelf, I imagine at some point in the future, if the intelligence does increase
01:10:07
by any measure, most of it will be words strung together by artificial
01:10:14
intelligence, and it will be selling potentially better than the words written by humans. So again, when we go
01:10:20
back to the conversation with your children, there might not be a career there either, because artificial intelligence is
01:10:26
faster, can produce more content and iterate on whether it sells better, gets more clicks; it can write the
01:10:33
headline, create the picture, write the content, and then I can just put my name to it.
01:10:40
Yeah, so I'd say, even in that regard, what remains? Well, so in the limit,
01:10:47
what I think we're imagining is a world where none of the
01:10:53
terrifyingly bad things have happened. It's just all working, we're just producing a ton of great stuff that is
01:11:00
better than the human stuff, and people are losing their jobs. So we've got a labor disruption, but we're not talking
01:11:05
about any other kind of political catastrophe or cyber
01:11:10
apocalypse, much less AGI destroying everything.
01:11:18
Then I think we just need a different economic assumption and ethical
01:11:23
intuition around the value of work. I mean, our default norm now
01:11:29
in a capitalist society is you have to figure out something to do with most of
01:11:35
your time that other people are willing to pay you for, right? You have to figure out how to add
01:11:40
value to other people's lives such that you reliably get paid, otherwise
01:11:47
you might die, right? We've got a social safety net, but it's pretty meager. There
01:11:54
are cracks you can fall through, you could wind up homeless, and we haven't figured out what to do about
01:11:59
that, as we know all too well.
01:12:05
So your claim upon your existence among us is your finding something to do with your
01:12:12
time that other people will pay you for, right? And now we've got artificial
01:12:18
intelligence removing some of those opportunities, creating others. But in the limit,
01:12:23
and I do think it is different, I think analogies to other moments in technological history are fundamentally
01:12:31
flawed. I think this is a technology which, in the limit, will replace jobs and
01:12:38
not create better new jobs in their wake, right? This just cancels
01:12:43
the need for human labor, ultimately.
01:12:49
And strangely, it replaces some of the highest-status, most cognitively intensive jobs first, right? It
01:12:56
replaces Elon Musk before it replaces your electrician or your
01:13:02
plumber or your masseuse, way before, right? So we have to internalize the
01:13:08
reality of that. Again, this is in success, this is all good things happening, right?
01:13:15
And we have to have a new ethic, we have to have a new economics based on that
01:13:20
ethic, which is, you know, UBI is one solution to this. Like, you shouldn't have to work to
01:13:26
survive, right? Universal basic income, yeah. There's so much abundance now being created,
01:13:33
we have to figure out how to spread this wealth around, right? We've got a cure for cancer over here, we've got perfect
01:13:39
photovoltaic-driven economies over here, where it's
01:13:45
like we've solved the climate change issue, we're just pulling wealth out of the ether, essentially.
01:13:53
We've got nanotechnology that is just birthing whole new industries. But it's all being
01:13:58
driven by AI. There's no room in this; whenever you put a person
01:14:04
in the decision chain, you're just
01:14:09
adding noise. This should be the best thing that's ever happened to us. This is just like God
01:14:16
handing us the perfect labor-saving device, right? The machine that can build every other machine, that can do anything you
01:14:22
could possibly want. We should figure out how to spread the wealth around in that case, right? This is
01:14:29
just powered by sunlight. No more wars over resource extraction.
01:14:35
It can build anything. We can all be on the beach, just hanging
01:14:40
out with our friends and family, right? Do you believe we should do universal basic income, where everybody's given, like, a monthly sum? Something we have
01:14:47
to do is break this connection. Again, this is what will have to happen in the presence
01:14:53
of this kind of labor-force dislocation enabled by all of this going perfectly well, right?
01:15:00
Again, this is pure success: AI is just producing good things, and the only bad thing is it's putting all
01:15:06
these people out of work. You know, it's coming for your job eventually. I've heard this, and my issue with it, my
01:15:11
rebuttal when I talk to my friends about this idea of universal basic income, where we hand out enough cash or resources to people so
01:15:18
that they're stable, which I'm not necessarily against, but I just want to play with it a little bit, is: humans
01:15:24
seem to have an innate desire for purpose and meaning, and we seem to be designed and built psychologically
01:15:30
for labor and for discomfort. But it doesn't have to be labor that's tied to
01:15:36
money, right? We will get our status in other ways and
01:15:42
we'll get our meaning in other ways. And again, these are all just stories we tell ourselves. I mean, you're talking to a person who
01:15:48
knows it's possible to be happy actually doing nothing, right?
01:15:54
Just sitting in a room for a month and staring at the wall, because I've done it. That's possible, right? And yet that's most
01:16:01
people's worst nightmare. Solitary confinement in a prison is considered a torture, right? And I know
01:16:06
people who spent 20 years in a cave, right? So there are capacities here that we're talking about.
01:16:13
But, just more commonly, I think:
01:16:20
we want to be entertained, we want to have fun, we want to be with the people we love, we want to be
01:16:28
useful in relationship. And
01:16:33
insofar as that gets uncoupled from the necessity of working to survive, it
01:16:41
doesn't all just go away. We just need new norms and new ethics and new conversations around what we do on
01:16:47
vacation, right? So what you're imagining is that if you put
01:16:53
everyone on vacation, on the best vacation, you can make the vacation as good as possible,
01:16:59
a majority of people will eventually be miserable because they're not
01:17:04
back at work, right? And yet most of these people are working so that they have enough money so they could finally take
01:17:10
that vacation, right? We will figure out a new way to be happy on the beach. I mean, if you
01:17:16
get bored with frisbee, we will figure something else out that is fun.
01:17:22
I'll be able to read the Churchill history of World War II on the beach and not be rushed by any other
01:17:29
imperative, because I'm happily retired, right? Because my AI is
01:17:34
creating the thing that is solving all my economic problems, right?
01:17:40
We should be so lucky as to have that be our problem: how to be happy in conditions of no economic
01:17:48
imperative, no basis for political strife over scarce resources,
01:17:54
and no question about survival. The question of survival is off the table; the only question is what
01:18:04
one does with one's time and attention, right? You can be as lazy as you want and
01:18:09
you'll still survive, you can be as unlucky as you want and you'll still
01:18:15
survive. And the awful situation we're in now is that differences in luck
01:18:21
mean everything, right? Someone is born without any of the advantages that we have,
01:18:30
and we don't have an economic system that reliably gives them every advantage and
01:18:36
opportunity they could have, right? So it's like, we just,
01:18:44
we either don't have the resources, or we've convinced ourselves we don't have the
01:18:50
resources, or we don't have the incentive to access the resources so as to actually come to the help of people
01:18:56
we could help, right? I mean, the idea that people starve to death is just
01:19:01
unimaginable, and yet it still happens. That's not a scarcity problem, it's a political problem, wherever it
01:19:08
happens. And yet all of this is tied to a system where everyone has convinced themselves
01:19:14
that it is normal to have one's survival be in
01:19:19
question if one doesn't work, right? Whether by choice or by
01:19:26
accident. I think it's still true that,
01:19:32
at least in the U.S., and this is almost certainly not true in the UK, but in the U.S. the most common reason for a personal
01:19:39
bankruptcy is overwhelming medical expense that just comes upon you
01:19:44
for whatever reason. Your wife gets cancer, you guys go bankrupt solving the cancer problem or failing to
01:19:51
solve the cancer problem, and now everything else unravels, right? And we have a society which thinks, yeah, well,
01:19:59
unlucky you. If you wind up homeless, just don't sleep in front of my store, because
01:20:05
you're going to hurt my business. Successful AI
01:20:12
that cancels lots of jobs would only be canceling
01:20:18
those jobs by virtue of producing so many good things, so much value for everybody, that we
01:20:25
would have to figure out how to spread that wealth around. Otherwise we would have
01:20:32
an amazingly dystopian
01:20:37
bottleneck for a few short years, and then we would just have a revolution, right? Then the guys in their
01:20:43
gated communities, making trillions of dollars based on them having gotten close enough
01:20:50
to the GPUs that some of it rubbed off on them,
01:20:56
yeah, they'd be dragged out of their houses and off their Gulfstreams, and
01:21:02
we would have a fundamental reset, a hard reset of the political system.
01:21:07
If I had to put you in a yes-or-no situation and ask your intuition the question now: if your objective was,
01:21:14
which I'm sure it is, to encourage the betterment of humanity and to increase our odds of happiness and
01:21:20
well-being 100 years from now, and there was a button placed in front of you, and it would either end the
01:21:27
development of artificial intelligence as we've seen it over the last decade, so we'd never proceed with
01:21:33
developing intelligent machines, or not. So you could press a button
01:21:38
and stop it right now, stop it permanently, such that we never
01:21:43
do that thing, we just never figure out how to build intelligent machines.
01:21:49
Pause it indefinitely? Well, I would definitely pause it
01:21:56
to a point where we could get our heads around the alignment
01:22:02
problems. Permanently, if the button was a permanent pause that you couldn't undo?
01:22:08
Well, the question is how deep does that go. So we have everything we have now, but it just never gets better, we never make progress
01:22:14
from here, right? And your objective is to make humanity happy and prosperous.
01:22:21
It's hard, because when you begin imagining all of the good stuff that we
01:22:28
could get with aligned superhuman AI, well, then it's just
01:22:34
cornucopia upon cornucopia. Everything is potentially within reach.
01:22:41
Yeah, I mean, I take the existential risk scenario seriously enough that I
01:22:47
would pause it. I would say, I think we will eventually get there: if
01:22:54
curing cancer is a biomedical engineering problem that admits of a solution, and I think there's every
01:23:01
reason to believe it ultimately would be, we will eventually get there based on
01:23:06
our own muddling along with our current level of tech, our current information tech.
01:23:13
I'm reasonably confident of that,
01:23:20
because our intelligence shows every sign of being general. It's just
01:23:28
not as fast as we would want it to be. The thing that AI is going to
01:23:35
give us is speed.
01:23:41
I mean, there's speed, and there's access, there's memory, right? We can't integrate;
01:23:48
no person or team of people can
01:23:53
integrate all of the data we already have, right? So the real promise here is that
01:23:59
these systems will be able to find patterns that we wouldn't even know how to look for and then do something on the
01:24:06
basis of those patterns. I think an intelligent search within the data
01:24:11
space by apes like ourselves will eventually do most of the
01:24:17
great things we want done.
01:24:25
I mean, the problems we need to solve so as to safeguard the
01:24:33
career of our species, and to make civilization durable and sane,
01:24:40
and to remove this sort of sword of Damocles that hangs
01:24:45
over our heads at every moment, that at any moment we could just decide to have a nuclear war that ruins
01:24:51
everything, or create an engineered pandemic that ruins everything,
01:24:58
we don't need superhuman intelligence to solve all those problems. We need an appropriate emotional
01:25:04
response to the untenability of the status quo, and we need a political dialogue
01:25:11
that eventually transcends our tribalism.
You, and I'd say a few others, maybe two
01:27:34
or three others, helped change my mind about one of the most profound things I think anyone could believe, which was, when I was 18 I believed in Christianity,
01:27:42
and then there were a couple of moments that shook my belief. Nothing on a
01:27:48
personal level, just a couple of ideas that managed to sort of infect my operating system, that led my
01:27:55
curiosity towards your work, and I changed my mind profoundly. It was such a profound change
01:28:02
that I had. How do we change our minds? And I really want to focus that question on the
01:28:11
individual's mind. Like, I want to change my mind, I want better beliefs, better ideas in my head that are going to allow
01:28:17
me to get out of my own way. Because I'm not achieving, I'm miserable,
01:28:24
I'm not living the life that I would say I know I can live, but some people don't even know they can
01:28:30
live a better life. I'm not happy, that's the signal, and I want to rectify this in
01:28:36
some way. Yeah, well, there are a few bright lines for me. I mean, take
01:28:51
our ethical lives and our relationships to other people, right? So there's the problem of
01:28:58
individual well-being, which is still real even if you're in total solitude. If you're on a desert
01:29:04
island by yourself, you really don't have ethical questions emerging, because you're not in relationship to anybody else, but you
01:29:10
still have the problem of how to be happy. But so much of our unhappiness is in
01:29:15
collaboration with others, right? We're unhappy in our relationships, we're unhappy professionally,
01:29:15
and it's worth looking at how we're behaving with other people.
01:29:27
For me, the highest-leverage change
01:29:32
I ever made, and it's very easy to spell out, and it's very
01:29:38
clear, and ultimately it's pretty easy, is just
01:29:38
to decide that you're not going to lie about anything, really. I mean, there might
01:29:44
be some situations in extremis where you'll feel forced to lie, but those,
01:29:50
in my view, are analogous to acts of violence that you may be forced
01:29:56
to use in self-defense, right? A lie is sort of the first stage on the continuum of violence, for me. So
01:30:02
I'm not going to lie to someone unless I recognize that this is not a rational actor who I can possibly
01:30:09
collaborate with; this is someone I have to avoid or defeat or
01:30:14
otherwise contain, in their propensity to do me harm. So yes,
01:30:21
if the Nazis come to the door and ask if you've got Anne Frank in the attic, yes, you can lie, or you can shoot them.
01:30:27
These are not normal circumstances. But that aside, every other moment in
01:30:33
life where people are tempted to lie
01:30:38
is one that I think you can categorically rule out as unethical. And
01:30:46
beyond unethical, it's creating
01:30:53
a life that, when you examine it, you don't want to live, right?
01:30:58
The moment you know that you're not going to lie to people, and they know that about you,
01:31:04
it's like all of the
01:31:11
social dials get sort of recalibrated on both sides, and then you find yourself
01:31:17
in the presence of people who don't ask you for your opinion unless
01:31:23
they really want it, right? And then, when you're honest,
01:31:29
it's a night-and-day difference when you're giving people feedback, critical feedback, and
01:31:37
they know you're honest, right? Their detector is not going
01:31:42
off, because they just know that even when it's not convenient you're
01:31:47
being honest, even when it's not comfortable you're being honest.
01:31:54
For one, that's incredibly valuable, because basically you're giving them the information that you would want if you
01:32:00
were in their shoes, right? Because we have this sort of delusion that takes over us whenever we're tempted to
01:32:06
tell a white lie. We imagine, okay, this person doesn't want the truth, much better
01:32:13
for me to just tell them the kind fiction than tell them the uncomfortable truth, right? But we don't
01:32:20
even run the Golden Rule calculation there most of the time. If you just
01:32:27
took a moment, you'd realize, oh, wait a minute, does someone who
01:32:32
is actually doing a bad job want me to tell them that they're doing a good job,
01:32:38
and then just send them out into the world to bounce around other people who are going to recognize, as I just
01:32:44
did, that the thing they're doing isn't so great, right? You're just not doing them a favor,
01:32:50
Right. This is part of the nature of belief change, isn't it: when we believe that someone is on our side,
01:32:55
or we believe, from a political standpoint, that they represent 99% of the views that we represent, we're
01:33:02
much more likely to change our beliefs. I spoke to Tali Sharot about this, the neuroscientist, and I wrote about this in
01:33:07
a chapter in my upcoming book about how you change people's minds. And they showed in experiments that if
01:33:13
a flat-earther says something to a flat-earther about the nature of the earth, they'll believe it, but if NASA says something to a flat-earther, they will
01:33:19
just dismiss it on sight, because the source of that information is not one that they believe or trust or like, or believe is
01:33:25
well-intentioned. I mean, this is a bug, not a feature. It's understandable, but this is something we
01:33:31
have to grow beyond, because the truth is the truth, right?
01:33:36
And again, it goes in both directions. The person on your team who you love and
01:33:42
respect is capable, in their very next sentence, of speaking
01:33:47
a falsehood, right? And you need to be able to detect that. And conversely, the person
01:33:53
you least respect is capable of saying something that's quite incisive and worth taking on board. And so
01:34:01
we have to have this sort of metacognitive
01:34:06
layer where we're noticing how we're getting played by our social alliances, and
01:34:15
recognize that the truth, and rather often important truths, are
01:34:22
evaluated by different principles. I mean, it's a matter of the message, not the messenger; you shouldn't
01:34:27
shoot the messenger and you shouldn't worship him. You mentioned
01:34:34
removing lying and being more honest as a significant step change in your own happiness. Is that accurate? In my happiness and
01:34:41
your happiness, yeah, immensely so. Practically and specifically, how
01:34:50
so? I mean, when you look at how people ruin their reputations and their relationships and their businesses, their
01:34:57
careers, the gateway to all of the misbehavior that accomplishes that is lying. I
01:35:05
mean, look at somebody like Lance Armstrong, right? Or Tiger Woods. These guys are the absolute apogee of sport,
01:35:13
everyone loves them, everyone's just amazed at what they've accomplished,
01:35:19
and yet the dysfunction in their lives just gets vomited up for all
01:35:26
to see at a certain point, and it was enabled at every stage along
01:35:32
the way by lying, right? So if either of them had, early in their career,
01:35:39
before they became famous, before they became rich, before they became tempted to do
01:35:44
anything that was going to derail their lives later on, if they had
01:35:50
decided they weren't going to lie, they would have found everything else they did to screw
01:35:58
up their success impossible. So when I decided, and this is in
01:36:10
the book, this was a course I took at Stanford, a seminar with this brilliant
01:36:17
professor, Ron Howard, and I think some people in Silicon Valley have taken this course as well.
01:36:22
I mean, this course was just like a machine: undergraduates and
01:36:30
graduate students would come in on one side, and then 12 weeks later would come out convinced that basically lying was
01:36:38
no longer on the menu, right? The whole seminar was an analysis of the question,
01:36:45
is it ever right to lie? And really we focused on white lies, on truly tempting lies, as
01:36:51
opposed to the obvious lies that destroy people's lives and relationships.
01:36:51
It's just so corrosive, and it's corrosive of relationships in ways that,
01:36:58
unless you're a student of this kind of thing, you don't necessarily notice. I mean, one example, I believe it's in
01:37:04
that book: I remember my wife was with a friend,
01:37:09
and the two of them were out, and the friend had something she had to
01:37:17
do with another friend later that night, but she didn't really feel like doing it. And she got a call from that friend in
01:37:23
the presence of my wife, and she just lied to the friend to get out of the
01:37:29
plan, right? She said, oh, I'm so sorry, but my daughter's got this thing. And it was just an utterly facile
01:37:38
use of dishonesty, where she could have just been honest, but
01:37:44
it was just too awkward to be honest, so she just got out of it with a lie. But now it's in the presence of my wife, and
01:37:50
my wife is now the the immediate question is how many times have I been on the other side of that conversation right how many
01:37:57
times has she lied to me in an equally compelling way about something so
01:38:02
trivial right and so it just eroded trust in the in that relationship in a
01:38:08
way that the the liar would never have known about would never have detected it because it's just she just went right
01:38:13
back to having a good time with you know they were just out to lunch and they continued you know having their lunch and they're still having a good time and
01:38:19
it's all smiles but my wife has just logged something about kind of the ethical limitations of this person
01:38:26
and the person doesn't know it. So once you pull on this thread,
01:38:35
your entire life, at least for the transition period, until this just becomes a
01:38:43
habit you no longer have to consider, suddenly
01:38:50
the world becomes a kind of mirror held up to your mind, and you meet yourself in all these situations
01:38:57
where you were avoiding yourself before. So someone will say, you know,
01:39:02
"Do you want to make plans?" or "Do you want to collaborate with me on this project?" And if previously you
01:39:11
always had recourse to some kind of white lie that just got you out of the awkward
01:39:18
truth, which is that the answer is no, and there are actual reasons why not,
01:39:24
then you never had to confront the awkwardness of that:
01:39:29
you're this kind of person, who has these kinds of commitments.
01:39:35
I mean, the most awkward one would be: someone declares a romantic interest in you, and the
01:39:41
answer is no, and it's no for a totally superficial
01:39:49
reason. Like, this person is just not attractive enough for you, or
01:39:56
they're overweight, or whatever. You have your reason why not, and it's something you feel
01:40:01
you cannot say. Now, I'm not saying that you should always go out of your way,
01:40:08
like someone with Tourette's who just helplessly blurts out the truth; there's scope for
01:40:16
kindness and compassion and tact. But if someone is going to really drill down
01:40:22
on the reasons why not, if the person says, "No, I want to know exactly why you don't want to go out with me,"
01:40:28
there's something to discover on either side of that true disclosure.
01:40:33
Either you are cast back on yourself and you have to realize, okay,
01:40:39
I'm such a superficial person that it doesn't matter who anyone is:
01:40:45
if they're ten pounds overweight, I'm not interested. That's the mirror held up to your mind. Okay,
01:40:52
so you're that kind of person. Do you still want to be that kind of person? Do you really want to decide that
01:40:58
everyone, no matter what their virtues, and no matter what chaos is
01:41:04
going on in their life, is out? This person might actually lose those ten pounds next month and you would have a
01:41:11
very different situation. But are you really not available? Are you really filtering by weight in this way?
01:41:18
And are you really comfortable with that? And are you comfortable saying that, if somebody
01:41:25
forces you to actually be honest? We have a closing tradition on this podcast where the last guest leaves a question
01:41:31
for the next guest, not knowing who they're going to leave it for. The question that's been left for you, in impeccable handwriting:
01:41:37
where do you want to be when you die? Describe the place, time,
01:41:44
people, smell, and feeling.
01:41:52
Well, it actually connects with an idea I've had.
01:41:57
We haven't talked about psychedelics here, but there's been this
01:42:03
renaissance in research on psychedelics, and I'm worried that we could recapitulate some of the
01:42:11
errors of the '60s and roll this all out in a way that's less
01:42:16
than wise. But the wise version would be, I think, to recreate something like the
01:42:24
mysteries of Eleusis, where we have rites of passage that are enabled, in
01:42:30
many people's case, by psychedelics, and by the practice of meditation. I just
01:42:36
think these are fundamental tools of insight, and
01:42:42
for most people it's hard to see how they would get those insights any other way. There's a
01:42:49
much longer conversation about which molecule and how and all that, but another component of this is a
01:42:56
hospice situation, where the experience of dying is as
01:43:03
wisely embraced and facilitated as possible, and I think psychedelics could certainly
01:43:10
play a role for many people there. So I imagine we
01:43:15
need places that are truly beautiful, where people have gone to die and their families can visit
01:43:23
them there, and it is just a final rite of passage that is
01:43:29
embraced with all the wisdom we can muster.
01:43:37
And in my case, you know, currently I'd
01:43:44
be happy to be home, but wherever home is at that point,
01:43:50
I would want a view of the sky. It could be an ocean beneath the sky; that would be ideal.
01:43:57
There's basically nothing that makes me happier than looking at a
01:44:04
blue sky, just watching cumulus clouds move across a blue sky. I mean,
01:44:09
I can extract so much mental pleasure just looking at that.
01:44:16
So yeah, if I'm going to spend my last
01:44:21
hours of life looking at anything, if my eyes are going to be open, it's looking at the sky.
01:44:27
And having the stars? Will it be the night sky or the daytime? The daytime, yeah. Light
01:44:33
pollution is enough of a thing in my world that I feel like I go for years
01:44:38
without seeing a good night sky,
01:44:44
so I've kind of given up hope there, though I do love that. But yeah, just a view of the
01:44:49
sky, with the people I love who are still alive at
01:44:54
that point. Yeah, I'm not worried about
01:44:59
death in that sense. I think
01:45:05
the death part is not a problem. I can't say I'm looking forward to it; I can
01:45:11
imagine there could be medical chaos and uncertainty and all the weirdness that happens around
01:45:17
the dying process, and there are all kinds of ways to die
01:45:23
that I wouldn't choose. But having a nice place to do it,
01:45:28
with a view of the sky, would be the only thing I think I would require.
01:45:34
The question asks for the smell. Give me the smell. An ocean breeze; I've put an ocean there, so yeah,
01:45:41
an ocean breeze would be perfect. Sam, thank you so much. Thank you.
01:45:47
Not just for this conversation. As I said to you before you sat down, you were pivotal in really helping me to unpack some
01:45:53
problems when I was younger, some conflicts I should describe them as, with my view on religious belief and
01:46:00
the nature of the world. But more importantly, you didn't rob me of my religious
01:46:07
beliefs and leave me with nothing. You left me with something else that was really important to me: the idea that there can
01:46:13
still be great meaning, and there can be what you describe as spirituality, in the absence, or in the place, of
01:46:20
that religious belief. Religious belief gives people a lot of things, and it's funny, because when I was religious and I went on the journey
01:46:27
to becoming agnostic, let's say, I was in conflict with people, as in I would want to have a debate with
01:46:33
everybody. I spent those two years watching everything that you and Richard Dawkins and Hitchens had
01:46:38
done, and then I came out the other side, and it was peaceful: you believe what you want, I'll believe what
01:46:44
I want, and as long as we're not causing any conflict with each other and you're not doing any harm, it's okay. And
01:46:50
then I discovered what I would call my own spirituality, which is the meaning that I see in the world around me, and the self, and things like
01:46:57
psychedelics, and it's a better place to be. And it removes my fear of death, which I had
01:47:03
as a religious person. Well, that's good. So thank you. Yeah, thank you for that, and for all your subsequent work:
01:47:09
the incredible books, you've written so many of them that are absolutely incredible, and an unbelievable podcast, which I was gorging
01:47:15
on before you came here, and an app. If you could speak just a few sentences about the meaning of the
01:47:21
app and what you do, because I know it's much more than meditation now, I think people listening to this might be compelled to check it out and
01:47:27
download it. Yeah, so I had that book which you're holding, Waking Up, which is
01:47:33
where I talk about my experience in meditation and how I fit it into a
01:47:40
scientific, secular worldview. And it just turns out that an app
01:47:45
is a much better delivery system for that kind of information. Hearing it as audio, you don't even
01:47:50
need video; I think audio is the perfect medium for it. So when that technology
01:47:56
came about, or when I discovered it, I just felt incredibly lucky to be
01:48:02
able to build it. And it's kind of outgrown me now; there are many, many teachers on it, and many other
01:48:07
topics beyond meditation that are touched on. But it
01:48:14
really subverts all of the problems with the smartphone, some of which we touched upon here. I
01:48:21
mean, the smartphone has become this tool of fragmentation for us: it fragments our attention, it continually
01:48:27
interrupts our experience, depending on how you use it.
01:48:33
Most of what we do with it, checking Slack, checking email, checking social media, is
01:48:40
punctuating your life with all this stuff, with these at this point seemingly necessary interruptions.
01:48:46
But this app, or really any app like it that's delivering this kind of content,
01:48:52
subverts all that, because it's just a platform where you're getting audio that is guiding you
01:48:59
in a very specific use of attention, and a sort of reordering of
01:49:04
your priorities, and getting you to recognize things about your experience
01:49:09
that you wouldn't otherwise see. And an app, it's just sheer good luck:
01:49:18
it turns out it's just the perfect delivery system for that information. So yeah, I just feel very lucky to have
01:49:25
stumbled upon it, because ten years ago there were no apps, and all I
01:49:30
could do was write a book. Sam, thank you. Yeah, thank you, thank you so much. A pleasure to meet you.
01:49:37
Congratulations on everything. Oh, thank you. I was catching up on your podcast in anticipation of this, and it's amazing, the reach
01:49:44
you've got now. Wonderful. We're still trying to catch up with it, but it's a credit to all of the team, and
01:49:51
I really want to say, from the bottom of my heart, thank you, because the work you do is really, really important.
01:49:56
It's been important in my life, as I've said, but it's just really important, and I feel like we're living in a world where nuance and all the
01:50:03
things you've talked about, openness to debate and honest dialogue, we're getting further and further away from that.
01:50:08
So if there's anyone left in this world that's still willing to engage on that level, I feel like they must be protected at all costs, and I see you as one of
01:50:14
those people. So thank you. Nice, nice. Well, to be continued. [Music]

Badges

This episode stands out for the following:

  • Best concept / idea: 75
  • Most shocking: 70
  • Best overall: 70
  • Most influential: 70

Episode Highlights

  • Inevitability of Progress
    Harris emphasizes that humanity will continue to advance in AI regardless of potential risks.
    “We're going to keep doing this; our failure to do it suggests something terrible has happened.”
    @ 06m 07s
    August 07, 2023
  • The Dangers of AI
    Sam Harris discusses the inherent risks of AI surpassing human intelligence.
    “There's something inherently dangerous for the Dumber species in that relationship.”
    @ 06m 52s
    August 07, 2023
  • The Cost of Turning Off the Internet
    The economic cost of turning off the internet is unimaginable, raising concerns about dependency.
    “The cost of turning off the internet now would be unimaginable.”
    @ 26m 40s
    August 07, 2023
  • The Rise of Misinformation
    With advancements in AI, the risk of misinformation becoming prevalent is alarming.
    “Most of what's online could soon be fake.”
    @ 28m 35s
    August 07, 2023
  • The Relief of Quitting Twitter
    Deleting Twitter brought immense relief, freeing from chaos and misunderstanding.
    “The change in my life after I deleted my Twitter account was enormous.”
    @ 36m 35s
    August 07, 2023
  • The Risk of Misinformation
    In a world filled with misinformation, we must assume anything lurid is fake until proven otherwise.
    “We have to assume anything especially lurid is fake until proven otherwise.”
    @ 48m 44s
    August 07, 2023
  • The Value of Humanities
    As AI takes over many jobs, a return to the humanities may become essential for purpose and meaning.
    “This begins to privilege a return to the humanities as a core intellectual center.”
    @ 01h 04m 01s
    August 07, 2023
  • Universal Basic Income: A New Ethic
    Exploring the concept of Universal Basic Income as a solution to wealth distribution in an AI-driven economy.
    “Universal basic income... you shouldn't have to work to survive.”
    @ 01h 13m 20s
    August 07, 2023
  • The Challenge of Purpose
    Discussing the innate human desire for purpose and meaning beyond monetary labor.
    “Humans seem to have an innate desire for purpose and meaning.”
    @ 01h 15m 24s
    August 07, 2023
  • The Corrosive Nature of Lying
    Lying is identified as a gateway to unhappiness and dysfunction in relationships.
    “Removing lying... is a significant step change in your happiness.”
    @ 01h 34m 41s
    August 07, 2023
  • The Ethics of Dishonesty
    A story about how a small lie can erode trust in relationships.
    “How many times has she lied to me?”
    @ 01h 37m 50s
    August 07, 2023
  • Finding Spirituality
    Exploring spirituality and meaning in life beyond religious beliefs.
    “It removes my fear of death.”
    @ 01h 46m 57s
    August 07, 2023

Key Moments

  • Survival Questions @ 01:21
  • AI Risks @ 06:52
  • Misinformation Concerns @ 28:35
  • Social Media Chaos @ 43:07
  • Epistemological Bankruptcy @ 48:39
  • AI and Abundance @ 1:13:45
  • Lying and Honesty @ 1:34:50
  • Mirror of Self @ 1:38:50

Related Episodes

CEO Of Microsoft AI: AI Is Becoming More Dangerous And Threatening! - Mustafa Suleyman
AI Expert: (Warning) 2030 Might Be The Point Of No Return! We've Been Lied To About AI!
Yuval Noah Harari: They Are Lying About AI! The Trump Kamala Election Will Tear The Country Apart!
Creator of AI: We Have 2 Years Before Everything Changes! These Jobs Won't Exist in 24 Months!
AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris
Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton
Ex Google CEO: AI Can Create Deadly Viruses! If We See This, We Must Turn Off AI!