Ex Google CEO: AI Can Create Deadly Viruses! If We See This, We Must Turn Off AI!

November 14, 2024
01:49:37
00:00:00
someone was leaking information on Google and this stuff is incredibly secret so what are the secrets well the
00:00:06
first is Eric Schmidt is the former CEO of Google who grew the company from $100 million to $180 billion and this is how
00:00:15
as someone who's led one of the world's biggest tech companies what are those first principles for leadership business and doing something great well the first
00:00:22
is risk taking is key if you look at Elon he's an incredible entrepreneur because he has this Brilliance where he
00:00:28
can take huge risks and fail fast and Fast failure is important because if you build the right product your customers
00:00:34
will come but it's a race to get there as fast as you can because you want to be first because that's where you make
00:00:40
the most amount of money so what are the other principles that I need to be thinking about so here's a really big one at Google we have the 70/20/10 rule
00:00:46
that generated 10 20 30 40 billion dollars of extra profits over a decade and everyone could go do this so the
00:00:53
first thing is what about AI I can tell you that if you're not using AI at every
00:00:58
aspect of your business you're not going to make it but you've been in the tech industry for a long time and you've said
00:01:03
the Advent of artificial intelligence is a question of human survival AI is going to move very quickly and you will not
00:01:10
notice how much of your world has been co-opted by these Technologies because they will produce greater Delight but
00:01:16
the questions are what are the dangers are we advancing with it and do we have control over it what is your biggest
00:01:21
fear about AI my actual fear is different from what you might imagine my actual fear
00:01:26
is that's a good time to pull the plug
00:01:32
this has always blown my mind a little bit 53% of you that listen to the show regularly haven't yet subscribed to the
00:01:38
show so could I ask you for a favor before we start if you like the show and you like what we do here and you want to support us the free simple way that you
00:01:44
can do just that is by hitting the Subscribe button and my commitment to you is if you do that then I'll do everything in my power me and my team to
00:01:51
make sure that this show is better for you every single week we'll listen to your feedback we'll find the guest that
00:01:56
you want me to speak to and we'll continue to do what we do thank you so much [Music]
00:02:03
Eric I've read about your career and you've had an extensive a varied a
00:02:09
fascinating career completely unique career and that leads me to believe that you could have written about anything
00:02:15
you know you've got some incredible books all of which I've been through over the last couple of weeks here in front of me I apologize no no but I mean
00:02:22
these are subjects that I'm just obsessed with but this book in particular of all the things you could
00:02:27
have written about with the world we find ourselves in why this why Genesis
00:02:34
well first thank you I've wanted to be on the show for a long time so I'm really happy to be able to be here in person in London Henry Kissinger Dr
00:02:42
Kissinger ended up being one of my greatest and closest friends and 10
00:02:47
years ago he and I were at a conference where he heard Demis Hassabis speak
00:02:53
about AI and Henry would tell the story that he was about to go catch up on his jet lag but instead I said go do this
00:03:01
and he listened to it and all of a sudden he understood that we were playing with fire that we were doing
00:03:07
something whose impact we did not understand and Henry had been working on this since he was 22
00:03:14
coming out of the army after World War II and his thesis about Kant and so forth as an undergraduate at Harvard so
00:03:21
all of a sudden I found myself in a whole group of people who are trying to understand what does it mean to be human
00:03:27
in an age of AI when this stuff starts showing up how does our life change how
00:03:33
do our thoughts change humans have never had an intellectual Challenger of our
00:03:40
own ability or better or worse it just never happened in history the arrival of
00:03:45
AI is a huge moment in history for anyone that doesn't know your story or
00:03:51
maybe just knows your story from sort of Google onwards can you tell me the sort of inspiration points the education the
00:03:58
experiences that you draw on when you talk about these subjects well like many
00:04:05
of the people you meet um as a teenager I was interested in science I played with
00:04:10
model rockets model trains the the usual things for a boy in my generation I was
00:04:16
too young to be a video game addict but I'm sure I would be today if I were that age um I went to college and I was very
00:04:23
interested in computers and they were relatively slow then but to me they were fascinating to give you an example the
00:04:29
computer that I used in college is 100 million times slower 100 million times
00:04:36
slower than the phone you have in your pocket and by the way that was a computer for the entire University so
00:04:42
Moore's law which is this notion of accelerating density of chips has defined the wealth creation the career
00:04:49
creation the company Creation in my life so I can be understood as lucky because
00:04:55
I was born with a with an interest in something which was about to explode and when when sort of everything happens
00:05:02
together everyone gets swept up in it and of course the rest is history I was sat this weekend with
00:05:08
my partner's little brother who's 18 years old yes and as we ate breakfast yesterday before they flew back to
00:05:14
Portugal we had this discussion with her family that um her dad was there her mom
00:05:20
was there Raph the younger brother was there and my girlfriend was there difficult because most of them don't
00:05:26
speak English so we had to use funnily enough AI to translate what we were saying but the big discussion at breakfast was what
00:05:32
should Raph do in the future he's 18 years old he's got his career ahead of him and the decisions he makes as is so
00:05:38
evident in your story at this exact moment as to what information and intelligence he acquires for himself
00:05:44
will quite clearly Define the rest of his life if you were sat at that table with me yesterday when I was trying to
00:05:50
give Raph advice on what what knowledge he should acquire at 18 years old what would you have said and what are the
00:05:55
principles that sit behind that the most important thing is to develop analytical critical thinking
00:06:02
skills to some level I don't care how you get there so if you like math or science or if you like the law
00:06:09
or if you like you know entertainment just think critically in his particular case as a as an 18-year-old what I would
00:06:17
encourage him to do is figure out how to write programs in a language called Python Python is easy to
00:06:24
use it's very easy to understand and it's become the language of AI so the the AI systems when they write code for
00:06:31
themselves they write code in Python and so you can't lose by developing Python
00:06:37
Programming skills and the simplest thing to do with an 18-year-old man is say make a game because these are
00:06:44
typically Gamers stereotypically make a game that's interesting using python it's interesting because I wondered if
00:06:52
coding you know I think 5 10 years ago everyone's advice to an 18-year-old has learn how to code but in a world of AI
00:06:59
where these large language models are able to write code and are you know increasing every month in their ability
00:07:05
to write better and better code I wondered if that's like a dying art form yeah a lot of people have posed this and
00:07:10
that's not correct it sure looks like these systems will write code but remember the systems also have
00:07:17
interfaces called APIs with which you can program them so one of the large revenue
00:07:22
sources for these AI models because these companies have to make money at some point right is you build a program
00:07:28
and you actually make an API call and ask it a question a typical example is give it a picture and tell me
00:07:34
what's in the picture now can you have some fun with that as an 18-year-old of course right so when I say Python I
00:07:42
mean Python using the tools that are available to build something new
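The API call Schmidt describes — give the model a picture, ask what's in it — might look something like this in Python. The endpoint name and response shape below are hypothetical placeholders for illustration, not any specific vendor's API:

```python
import base64
import json

# Hypothetical endpoint -- a stand-in, not a real vendor URL.
API_URL = "https://api.example.com/v1/describe-image"

def build_describe_request(image_bytes, question="What is in this picture?"):
    """Package an image and a question into a JSON payload an AI API might accept."""
    return {
        "question": question,
        "image_base64": base64.b64encode(image_bytes).decode("ascii"),
    }

def parse_describe_response(response_json):
    """Pull the answer text out of a (hypothetical) API response."""
    return json.loads(response_json)["answer"]

# In a real program you would POST the payload, e.g. with the `requests` library:
#   requests.post(API_URL, json=build_describe_request(open("photo.jpg", "rb").read()))
payload = build_describe_request(b"\x89PNG fake image bytes")
print(sorted(payload))  # the fields the hypothetical service expects
```

The point of the sketch is the shape of the exercise — package data, call a model, use the answer in your own program — which is what makes the API route fun for a beginner.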
00:07:48
something that you're interested in and when you say critical thinking how does one what is critical
00:07:53
thinking and how does one go about acquiring that as a skill well the first and most important thing about critical
00:07:58
thinking is to distinguish between being marketed to which is also known as being lied to and being given the
00:08:06
argument on your own we have because of social media which I hold responsible for a lot of ills as well as good things
00:08:12
in life we've sort of gotten used to people just telling us something and believing it because our friends believe
00:08:19
it or so forth and I strongly encourage people to check assertions so you get
00:08:25
people say all this stuff and I learned at Google all those years somebody says something I check it on Google and
00:08:34
you then have a question do you criticize them and correct them or do you let it go but you want to be in the
00:08:41
position where somebody makes a statement like did you know that only 10% of Americans have passports which is
00:08:49
a widely viewed but false statement um it's actually higher than that although it's never high enough in my view in
00:08:54
America but that's an example of assertion that you can just say is that true right
00:09:00
there's a long meme of American politicians where the Congress is basically full of criminals um it may be
00:09:06
full of one or two but it's not full of 90 but again people believe this stuff because it sounds plausible so if
00:09:14
if somebody says something plausible just check it you have a responsibility before you
00:09:21
repeat something to make sure what you're repeating is true and if you
00:09:26
can't distinguish between true and false I suggest you keep your mouth shut right
00:09:32
because you can't run a government a society without people operating on basic facts like for example climate
00:09:39
change is real we can debate over whether it's how to address it but there's no question the climate is
00:09:45
changing it is a fact it is a mathematical fact and how do I know this and somebody will say well how do you
00:09:51
know and I said because science is about repeatable uh uh experiments and also
00:09:57
proving things wrong so let's say I said that um climate change is real uh and
00:10:02
this was the first time it had ever been said which is not true then a hundred people would say that can't be true I'll see if
00:10:07
he's wrong and then all of a sudden they'd see I was right and I'd get some big prize right so the
00:10:14
falsifiability of these assertions is very important how do you know that science is correct it's because people
00:10:21
are constantly testing it and why is this skill of critical thinking so especially important in a
00:10:28
world of AI well partly because AI will allow for perfect misinformation so let's use an
00:10:34
example of TikTok TikTok can be understood as what's called the bandit algorithm in computer science in the
00:10:41
sense of the Las Vegas one-armed bandits do I stay at this slot machine or do I move
00:10:48
to another slot machine and the TikTok algorithm basically can be
00:10:53
understood as I'll keep serving you what you tell me you want but occasionally
00:10:58
I'll give you something from the adjacent area and it is highly addictive
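The serving logic Schmidt describes — mostly exploit the slot machine that has paid off, occasionally explore another one — is classically implemented as an epsilon-greedy bandit. A minimal sketch, with made-up payoff rates standing in for user engagement:

```python
import random

def epsilon_greedy(rewards_per_arm, pulls_per_arm, epsilon=0.1, rng=random):
    """Pick an arm index: explore uniformly with probability epsilon,
    otherwise exploit the arm with the best average reward so far."""
    if rng.random() < epsilon:
        return rng.randrange(len(rewards_per_arm))              # explore
    averages = [r / p if p else 0.0
                for r, p in zip(rewards_per_arm, pulls_per_arm)]
    return max(range(len(averages)), key=averages.__getitem__)  # exploit

# Simulate: arm 2 secretly pays off most often (like content a user engages with).
true_rates = [0.2, 0.4, 0.8]
rewards = [0.0, 0.0, 0.0]
pulls = [0, 0, 0]
rng = random.Random(0)
for _ in range(2000):
    arm = epsilon_greedy(rewards, pulls, rng=rng)
    rewards[arm] += 1.0 if rng.random() < true_rates[arm] else 0.0
    pulls[arm] += 1

print("pulls per arm:", pulls)  # the best-paying arm comes to dominate
```

The "addictive" behavior falls out of the exploit step: once an arm (a content niche) shows the best average, the algorithm keeps serving it, with only the epsilon fraction of traffic probing adjacent areas.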
00:11:04
what you're seeing with social media and TikTok is a particularly bad example of this is people are getting into these
00:11:09
rabbit holes where they all they see is confirmatory bias and and the ones that
00:11:15
are I mean if it's fun and you know entertaining I don't care but you'll see for example there are plenty of stories
00:11:21
where people have ultimately self-harmed or died by suicide because they're already unhappy and then they start
00:11:28
picking up unhappy content and then their whole environment online is people who are unhappy and it makes them more unhappy
00:11:35
because it doesn't have a positive bias so there's a really good example where
00:11:40
um let's say in your case you're the dad you're going to watch this as the dad with your kid and you're going to say
00:11:46
you know it's not that bad let me show you some let me give you some good Alternatives let me get you inspired let
00:11:51
me get you out of your funk the algorithms don't do that unless you force them to it's because the
00:11:57
algorithms are fundamentally about optimizing an objective function literally mathematically maximize some
00:12:04
goal it has been trained to in this case it's attention and by the way part of why we
00:12:10
have so much uh outrage is because if you're a CEO you want to maximize Revenue to maximize Revenue you maximize
00:12:18
attention and the easiest way to maximize attention is to maximize outrage did you know did you know did
00:12:25
you know right and by the way a lot of the stuff is not true they're fighting over scarce attention
00:12:31
there was a recent article citing an old quote from 1971 from Herb Simon
00:12:37
an economist at the time at Carnegie Mellon who said that um economists don't
00:12:43
understand but in the future the scarcity will be about attention so somebody now 50 years later went back
00:12:49
and said I think we're at the point where we've monetized all attention an
00:12:55
article this week said two and a half hours of video are consumed by young people every
00:13:00
day right now there is a limit to the amount of video you can watch you know that
00:13:05
because you have to eat and sleep and to hang out but these are significant societal changes that have occurred very
00:13:12
very quickly um when I was young there was a great debate as to the benefit of television and you know my argument at
00:13:18
the time was well yes we did rock and roll and drugs
00:13:23
and all of that and we watched a lot of Television but somehow we grew up okay right so it's the same argument now with
00:13:29
a different term will those kids grow up okay um it's not
00:13:34
as obvious because these tools are highly addictive much more so than television ever was do you think they'll
00:13:41
grow up okay I personally do because I'm inherently an optimist I also think
00:13:46
that Society um begins to understand the problems typical example is there's an
00:13:51
epidemic of harm to teenage girls uh girls as we know are more advanced
00:13:57
than boys at those ages and the girls seem to get hit by
00:14:02
social media at 11 and 12 when they're not quite capable of handling the rejection and the emotional stuff and
00:14:08
it's driven uh you know emergency room visits self harm and so forth to record
00:14:13
levels it's well documented so society is beginning to recognize this now finally
00:14:19
schools won't let kids use their phones when they're in the classroom which is kind of obvious if you ask me um so
00:14:25
developmentally uh one of the core questions about the AI Revolution is what does it do to the identity of
00:14:32
children that are growing up your values your personal values the way you get up in the morning and think about life is
00:14:38
now set it's highly unlikely that an AI will change your programming but your child can be significantly reprogrammed
00:14:45
and one of the things that we talk about in the book is what happens when the best friend of your child from birth is
00:14:51
a computer what's it like now by the way I don't know we've never done it before
00:14:57
but you're running an experiment on a billion people without a control right
00:15:04
and so we have to stumble through this so at the end of the day I'm an optimist because we will adjust
00:15:10
Society with biases and values to try to keep us on a moral High Ground human
00:15:16
life and so you should be optimistic for that because these kids when they grow up they'll live to a 100 their lives
00:15:23
will be much more prosperous I hope and I I pray that there'll be much less conflict uh certainly lifespans are
00:15:29
longer the the likelihood of them being injured and and in wars and so forth are much much lower statistically it's a
00:15:36
good message to kids as someone who's led one of the world's biggest tech companies if you were the CEO of Tik
00:15:44
Tok what would you do because I'm sure that they realize everything you've said is true but they have this commercial
00:15:52
incentive to drive up the addictiveness of that algorithm which is causing these Echo Chambers which is causing the rates
00:15:59
of anxiety and depression amongst young girls and young people more generally to increase what would you do so so I have
00:16:05
talked to them and to the others as well and I think it's it's pretty straightforward there's sort of good
00:16:11
revenue and bad revenue when we were at Google uh Larry and Sergey and I we would
00:16:17
have situations where we would improve quality you know we would make the product better and the debate was do we
00:16:23
take that to revenue in the form of more ads or do we just make the product better and and that was a clear choice
00:16:30
and I arbitrarily decided that we would take 50% to one 50% to the other because I thought they were both important so
00:16:37
and the founders of course were very supportive so Google became more moral
00:16:42
and also made more money right there's plenty of bad stuff on
00:16:47
Google but it's not on the first page that was the key thing the alternative model would be say let's maximize
00:16:54
Revenue we'll put all the really bad stuff the lies and the cheating and the deceiving and so forth that draws you in
00:17:00
it will drive you insane and we might have made more money but first it was the wrong thing to do but more
00:17:06
importantly it's not sustainable uh there's a law called Gresham's law uh
00:17:12
it's a verbal law obviously um where bad speech drives out good speech and what
00:17:18
you're seeing is you're seeing in online communities which have always been um present with bullying and this kind of
00:17:25
stuff now you've got crazy people in my view who are building Bots that are
00:17:30
lying right misinformation now why do you do that there was a hurricane in Florida and
00:17:37
people are in serious trouble and you sitting in the comfort of your home somewhere else are busy trying to make
00:17:43
their lives more difficult what's wrong with you like let them get rescued you know human life is important but there's
00:17:51
something about human psychology there's
00:17:56
a German word for it schadenfreude you know there's a bunch of things like this that we have to address I want social
00:18:02
media and the online world to represent the best of humanity hope excitement
00:18:07
optimism creativity invention solving new problems as opposed to the worst and
00:18:12
I think that that is achievable you arrived at Google at 46 years old in 2001
00:18:18
2001 2001 um you had a very extensive career before then working for a bunch of really interesting companies Sun
00:18:25
Microsystems is one that I know um very well you've worked for zero in California as well Bell Labs was your
00:18:31
first um sort of real job I guess at 20 years old first sort of big Tech
00:18:37
job what did you learn in this journey of your life about what it is to build a great company and what value is as it
00:18:44
relates to being an entrepreneur and people in teams like if there were like a set of first principles that everyone should be
00:18:50
thinking about when it comes to doing something great and building something great what are those like first principles so so the first rule I've
00:18:57
learned is that you need a truly brilliant person to build a really brilliant product and that is not me I
00:19:05
work with them so find someone who's just smarter than you more clever than
00:19:10
you moves faster than you changes the world is better spoken more handsome More Beautiful You know whatever it is
00:19:16
that you're optimizing and Ally yourself with them because they're the people who are going to make make the world
00:19:22
different um in one of my books we use the distinction between divas and knaves
00:19:28
and for a diva we use the example of Steve Jobs who clearly was a diva opinionated and strong and argumentative
00:19:35
and would bully people if he didn't like them but was brilliant he was a diva he wanted perfection right
00:19:41
aligning yourself with Steve Jobs is a good idea uh the alternative is what we call a knave and a knave which you know
00:19:48
from British history is somebody who's acting on their own account they're not trying to do the
00:19:54
right thing they're trying to benefit themselves at the cost of others and so if you can identify a
00:20:00
person in one of these teams that they're just trying to solve the problem in a really clever way and they're
00:20:06
passionate about and they want to do it that's how the world moves forward if you don't have such a person your
00:20:11
company's not going to go anywhere and the reason is that it's too easy just to keep doing what you were doing right and
00:20:18
and innovation is fundamentally about changing what you're doing up until this generation of tech companies
00:20:25
most companies seemed to me to be one-hit wonders right they would have one thing that was very successful and then it
00:20:31
would sort of um typically follow an S-curve and nothing much would happen and now I think people are
00:20:37
smarter people are better educated you now see repeatable waves a good example being Microsoft which is you know an
00:20:44
older company now founded in basically 81 82 something like that so let's call
00:20:49
that 45 years old but they've reinvented themselves a number of times right in in
00:20:55
a really powerful way we should probably talk about this then um before we move on which is what you're talking about
00:21:01
there is that sort of founder things people now refer to as founder mode that founder energy that high conviction that
00:21:06
sort of disruptive thinking um and that ability to reinvent yourself I was looking at some stats last night in fact
00:21:13
and I was looking at how long companies stay on the S&P 500 on average now and it went from 33 years to 17 years to 12
00:21:21
years average tenure and as you play those numbers forward eventually in sort of 2050 an AI told me that it would be
00:21:27
about eight years well I'm not sure I agree with the founder mode argument and the reason is
00:21:34
that it's great to have a brilliant founder and it's
00:21:39
actually like more than great it's like really important and and we need more brilliant Founders universities are
00:21:45
producing these people by the way they do exist and they show up every year you know another Michael Dell at the age of
00:21:51
19 or 22 these are just brilliant Founders obviously Gates and Ellison and
00:21:56
sort of my generation of brilliant founders Larry and Sergey and so forth for anyone that doesn't know who Larry and Sergey
00:22:03
are and doesn't know that sort of early Google story um can you give me a little bit of that backstory but then also
00:22:08
introduce these characters called Larry and Sergey for anyone that doesn't know so Larry Page and Sergey Brin met at
00:22:13
Stanford um they were on a grant from believe it or not the National Science
00:22:19
Foundation as graduate students and Larry Page invented an algorithm called
00:22:24
PageRank uh which is named after him um and he and Sergey wrote a paper which is
00:22:30
still one of the most cited papers in the world and it's essentially a way of understanding priority of information
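That "priority of information" idea — a page matters if pages that matter link to it — can be sketched as a few lines of power iteration. This four-page toy web is invented for illustration; the published algorithm adds much more to handle scale and spam:

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]}. Returns {page: score}, scores sum to 1."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline, then receives shares of the
        # rank of every page that links to it.
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                for q in outs:
                    new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
scores = pagerank(web)
print(sorted(scores, key=scores.get, reverse=True))  # "C", with the most inbound links, ranks first
```

Page "C" wins because three pages point to it, including well-linked ones — the recursive "important pages vote with their links" insight the paper formalized.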
00:22:38
and mathematically it was a Fourier transform of the way people normally did things at the time and so they wrote
00:22:45
this code I don't think they were that good a set of programmers you know they sort of did it they had a computer they
00:22:51
ran out of power in their dorm room so they um borrowed the power from the dorm room next door and plugged it in and they
00:22:57
had the data center in the bedroom you know in the dorm classic story um and then they moved to a building that was
00:23:05
owned by um the sister of a girlfriend at the time and that's how they founded
00:23:11
the company their first investor was one of the founders of Sun Microsystems whose name was Andy Bechtolsheim who just said I'll
00:23:18
just give you the money because you're obviously incredibly smart how much did he give them $100,000 or yeah maybe it was a million
00:23:25
but in any case it ultimately became many billions of dollars so it gives you a sense of this early founding is
00:23:32
very important so the founders then set up in this little house in Menlo Park
00:23:37
which ultimately we bought at Google you know as a museum and they set up in the garage and they had a Google
00:23:45
world headquarters sign in neon and they had a big headquarters um with the four employees that were sitting below them
00:23:52
and the computer that Larry and Sergey had built Larry and Sergey were very very good software people and obviously brilliant
00:23:57
but they were not very good hardware people and so they built the computers using corkboard to separate the CPUs and if
00:24:03
you know anything about hardware hardware generates a lot of heat and the corkboard would catch on fire so
00:24:09
eventually when I showed up we started building proper Hardware with proper Hardware Engineers but it gives you a
00:24:14
sense of the scrappiness that that was so characteristic um and you know today
00:24:21
there are people of enormous impact on society um and I think that will continue um for many many years what did
00:24:28
they call you in and at what point did they realize that they needed someone like you well Larry said to me uh now
00:24:33
they were very young he looked at me and said we don't need you now but we'll need you in the future
00:24:41
we'll need you in the future yeah so one of the things about Larry and Sergey is that they thought for the long term so
00:24:48
they didn't say Google would be a search company they said the mission of Google is to organize all the world's
00:24:54
information and if you think about it that's pretty audacious 25 years ago like how are you going to do that and so
00:25:01
they started with web search eventually and Larry had studied AI quite
00:25:06
extensively and he began to work and ultimately he uh acquired uh with
00:25:13
all of us obviously uh this company called DeepMind here in Britain which
00:25:18
essentially is the um the first company to really see the AI opportunity and
00:25:24
pretty much all of the things you've seen from AI in the last decade have come from people who are either at Deep
00:25:30
Mind or competing with DeepMind going back to this point about principles then before we move further on um as it
00:25:37
relates to building a great company what are some of those founding principles we have lots of entrepreneurs that listen
00:25:42
to the show one of them you've expressed as this need for the Divas I guess these people who are just very high conviction
00:25:49
and can kind of see into the future what are the other principles that I need to be thinking about when I'm scaling my
00:25:54
company well the first is to think about scale uh I think a current example is look at Elon um Elon is an incredible
00:26:03
entrepreneur and an incredible scientist and if you study how he operates he gets people by I think sheer force of
00:26:10
personal will to overperform to take huge risks which somehow he has this
00:26:17
Brilliance where he can make those tradeoffs and get it right so these are
00:26:22
exceptional people now in our book Genesis we argue that you're going to have that in your pocket but whether
00:26:28
you'll have the judgment to take the risks that Elon does that's another question the one of the other ways to
00:26:34
think about it is an awful lot of people talk to me about the companies that they're founding and they're they're a
00:26:40
little widget you know like I want to make the camera better I want to make the dress better I want to make book publishing cheaper or so forth these are
00:26:47
all fine ideas I'm interested in in ideas which have the benefit of scale
00:26:53
and when I say scale I mean the ability to go from zero to infinity in
00:26:59
terms of the number of users and demand and scale um there are plenty of ways of
00:27:05
thinking about this but what would be such a company in the age of AI well we can tell you what it would look like you
00:27:12
would have apps one on Android one on iOS maybe a few
00:27:17
others those apps will use powerful networks and they'll have a really big computer in the back it's doing AI
00:27:25
calculations so future successful companies will all have that right exactly what
00:27:32
problem it solves well that's up to the founder but if you're not using AI at
00:27:38
every aspect of your business you're not going to make it and the distinction as
00:27:43
a programming matter is that when I was doing all of this way back when you had
00:27:49
to write the code now AI has to discover the answer it's a very big deal and of
00:27:56
course a lot of this was invented at Google you know 10 years ago but basically all of a sudden
00:28:01
analytical programming which is sort of what I did my whole life you know writing code do this do that add this subtract this call this so
00:28:08
forth and so on is gradually being replaced by learning the answer right so for example we use the example of
00:28:16
language translation uh the current large language models are essentially
00:28:22
organized around predicting the next word well if you can predict the next word You can predict the next sequence
00:28:28
in biology You can predict the next action You can predict the next thing the robot should do so all of this stuff
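A toy illustration of "predict the next word": a bigram counter. Real large language models condition on long contexts with learned neural representations; this sketch only counts which word most often follows which, on an invented corpus:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each other word follows it."""
    counts = defaultdict(Counter)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" -- it follows "the" most often here
```

The same interface — context in, most likely continuation out — is what scales up, as Schmidt says, from words to biological sequences to robot actions; only the model behind the prediction changes.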
00:28:35
around large language models and deep learning that has come out of the Transformer paper GPT-3 uh ChatGPT which
00:28:42
for most people was this huge moment is essentially about um predicting the next
00:28:48
word and getting it right in terms of company culture and how important that is for the success and Prospects of a
00:28:54
company how do you think about company culture and how significant and is it and like when and who sets it so I'll
00:29:02
give well company cultures are almost always set by the founders I happen to be on the board of the Mayo Clinic the Mayo Clinic is the
00:29:09
largest health care system in America it's also the most highly rated one and they have a rule which is called the
00:29:16
the needs of the customer come first which came out of the Mayo brothers who've been dead for like 120 years um
00:29:23
but that was their principle and I when I initially got on the board I started wandering around I thought this is kind
00:29:28
of a stupid you know stupid phrase and nobody really does this but they really believe it and they repeat it and they
00:29:35
repeat it right so it's true in non-technical cultures in that case it's healthcare service delivery you
00:29:42
can drive a culture even in non-tech in Tech it's typically an engineering culture and if I had to do things over
00:29:49
again I would have even more technical people and even fewer non-technical people and just make the technical
00:29:54
people figure out what they have to do um and I'm sorry for that bias because I'm not trying to offend anybody but the
00:30:00
fact of the matter is the technical people if you build the right product your customers will come if you don't
00:30:06
build a product then you don't need a sales force why are you selling an inferior product so in the How Google
00:30:12
Works book and ultimately in the Trillion Dollar Coach book which is about Bill Campbell we talked a lot
00:30:19
about how the CEO is now the chief product officer the chief Innovation
00:30:24
officer because 50 years ago you didn't have access to Capital you didn't have access to marketing you didn't have
00:30:30
access to sales you didn't have access to distribution hours I was meeting today with an entrepreneur who said yeah
00:30:36
you know we'll be 95% technical and I said why and he said well we have a contract
00:30:43
good that people will just buy them this happened to be a technical switching company um and they said it's only a
00:30:49
100,000 times better than its competitors and I said it will sell unfortunately it doesn't work yet yeah
00:30:56
that isn't the point but if they achieve their goal people will be lined up outside the door so as a matter of
00:31:03
culture you want to build a technical culture with values about getting the product to work right and working now
00:31:10
another thing you do with engineers is you say they make a nice presentation to you
00:31:16
and they go I said that's very interesting but you know I'm not your customer your customer is really tough
00:31:22
because your customer wants everything to work and be free and work right now and never make any mistakes so give me
00:31:29
their feedback and if their feedback is good I love you and if their feedback is bad then you better get back to work and
00:31:35
stop being so arrogant so what happens is that in the invention process within firms people
00:31:42
fall in love with an idea and they don't test it one of the things that Google did and this is largely Marissa Mayer
00:31:48
back when one day she said to me I don't know how to judge user interface
00:31:56
Marissa was previously the CEO of Yahoo and before that she ran all the consumer products at Google uh and she's
00:32:03
now running another company uh in the Bay Area but the important thing about Marissa is she said I can't and I said
00:32:09
well you know the UI the user interface is great at the time and it certainly was and she said I don't know
00:32:15
how to judge the user interface myself and none of my team do but we know how
00:32:22
to measure and so what she organized were A/B tests you test one against another
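The measure-don't-guess loop described here can be sketched in a few lines: bucket users into two variants, serve each a different UI, and compare an engagement metric. This is a toy illustration, not Google's actual experiment infrastructure; the bucketing rule, the metric (dwell time), and all the numbers are invented for the example.

```python
import random
import statistics

def assign_variant(user_id: int) -> str:
    """Deterministically bucket users into variant A or B by user id."""
    return "A" if user_id % 2 == 0 else "B"

def pick_winner(dwell_times: dict[str, list[float]]) -> str:
    """Choose the variant with the higher mean dwell time.
    A real system would also check statistical significance."""
    means = {v: statistics.mean(times) for v, times in dwell_times.items()}
    return max(means, key=means.get)

# Simulated experiment: variant B keeps users on the page slightly longer.
random.seed(0)
observed = {
    "A": [random.gauss(30.0, 5.0) for _ in range(1000)],  # seconds on page
    "B": [random.gauss(32.0, 5.0) for _ in range(1000)],
}
print(assign_variant(42))   # user 42 lands in bucket "A"
print(pick_winner(observed))  # "B" wins on mean dwell time
```

Because the instrumentation is automatic, the same loop scales to any metric the platform records — watch time, shares, comments — which is exactly the point made about TikTok below.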
00:32:28
remember that it's possible using these networks to actually kind of figure out because they're highly instrumented uh
00:32:34
dwell time how long does somebody watch this and how important it is if you go back
00:32:41
to how TikTok works uh the signals that they use include the amount of time you watch commenting
00:32:49
um forwarding uh sharing all those kinds of things and you can understand those as analytics that go into an AI
00:32:57
engine that then makes a decision as to what to do next and what to make viral and on this point of um culture at
00:33:05
scale is it right to expect that the culture changes as the company scales
00:33:10
because you came into Google I believe when they were doing sort of a hundred million dollars in revenue and you left when they were doing what 180 billion or
00:33:16
something staggering but is it right to assume that the culture of a growing company should scale from when there was
00:33:22
10 people in that garage to when there's 100 so when I go back to Google to visit and they were kind enough to give me a
00:33:28
badge and treat me well of course um I hear the
00:33:33
echoes of this um I was at a lunch where there was a lady running search and a
00:33:38
gentleman running ads you know the successors to the people who worked with me and I asked them how's it going
00:33:45
and they said the same problems you know the same problems have not been solved but they're much bigger
00:33:52
and so when you go to a company I suspect um I was not near the founding of Apple but I was on the board for a
00:33:58
while um the founding culture you can see today in their obsession with user
00:34:04
interfaces their obsession with being closed and their privacy and secrecy it's just a different company right I'm
00:34:10
not passing judgment um setting the culture is important the echoes are there
00:34:16
what does happen in big companies is they become less efficient for many reasons the first thing that happens is
00:34:22
they become conservative because they're public and they have lawsuits and um a famous example is that
00:34:29
Microsoft after the antitrust um uh case in the 90s became so conservative in
00:34:35
terms of what it could launch that it really missed the web revolution for a long time they have since recovered
00:34:40
and I of course was happy to exploit that as a competitor to them when we were at Google but but the important
00:34:47
thing is big companies should be faster because they have more money and more scale they should be able to do
00:34:52
things even quicker but in my industry anyway the tech startups that have a
00:34:58
new clear idea tend to win because the big company can't move fast enough to do
00:35:04
it another example we had built something called Google video I was very proud of Google video and David Drummond
00:35:11
who was the general counsel at the time came in and said you have to look at these YouTube people I said like why right who cares and it turns out they're
00:35:18
really good and they're more clever than your team and I said that can't be true you know typical arrogant Eric and we
00:35:26
sat down and we looked at it and they really did work quicker even though we had an incumbent and why it turns out that the
00:35:34
incumbent was operating under the traditional rules that Google had which was fine and the competitor in this case
00:35:41
YouTube was not constrained by that they could work at any pace and they could do all sorts of things intellectual
00:35:46
property and so forth ultimately we were sued over all of that stuff and we ultimately won all those suits but it's
00:35:52
an example where there are these moments in time where you have to move extremely quickly you're seeing that right now
00:35:59
with generative uh technology so the AGI the generative Revolution generate code
00:36:05
generate videos generate text generate everything all of those winners are being determined in the next six 12
00:36:11
months and then once the slope is set once the growth rate is you know
00:36:17
quadrupling every uh six months or so it's very hard for somebody else to come in so it's a race to get
00:36:24
there as fast as you can so when you talk to the great venture capitalists they're fast right
00:36:31
we'll look at it we'll make a decision tomorrow we're done we're in and so forth and we want to be
00:36:37
first because that's where they make the most amount of money we were talking before you arrived
00:36:42
I was talking to Jack about this idea of like harvesting and hunting so harvesting what you've already sowed and
00:36:49
hunting for new opportunities but I've always found it's quite difficult to get the Harvesters to be the hunters at the
00:36:56
same time so harvesting and hunting is a good metaphor um I'm interested in
00:37:01
entrepreneurs and so what we learned at Google was ultimately if you want to get something done you have to have somebody
00:37:06
who's entrepreneurial in their approach in charge of a small business and so for example Sundar when he became CEO had a
00:37:13
model of which were the little things that he was going to emphasize and which were the big things some of those little things are now big things right and and
00:37:20
he managed it that way so one way to understand innovation in a large company is you need to know who the owner is
00:37:26
Larry Page would say over and over again it's not going to happen unless there's an owner who's going to drive this and
00:37:32
he was supremely good at identifying that technical Talent right that's one of his great founder strengths so when
00:37:39
we talk about Founders not only do you have to have a vision but you also have to have either great luck or great skill
00:37:45
as to who is the person who can lead this inevitably those people are highly
00:37:51
technical in the sense that they can and very quick moving and they have good management skills right they understand
00:37:57
how to hire people and deploy resources that allows for Innovation um most of
00:38:03
the if I look back in my career each generation of the tech companies failed including for example Sun at
00:38:12
the point at which it became noncompetitive with the future is it possible for a team to innovate while
00:38:17
they still have their day job which is harvesting if you know what I mean or do you have to take those people put them
00:38:23
into a different team different building different p&l and get them to focus on the disruptive innovation there are almost
00:38:28
no examples of doing it simultaneously in the same building uh the Macintosh
00:38:34
was famously um Steve in his typical crazy way had this very small team
00:38:40
that invented the Macintosh and he put them in a little building next to the big building uh on Bubb Road in
00:38:48
Cupertino and they put a pirate flag on top of it now was that good culturally inside
00:38:54
the company no because because it created resentment in the big building
00:38:59
but was it right in terms of the revenue and path of of Apple absolutely why
00:39:05
because the Mac ultimately became the platform that established the UI the user interface ultimately allowed them
00:39:12
to build the iPhone which of course is defined by its user interface why couldn't they stay in the same building
00:39:17
it just doesn't work you can't get people to play two roles the incentives are different if you're going to be a
00:39:23
pirate and a disruptor you don't have to follow the same rules so um there are plenty of examples
00:39:31
where you just have to keep reinventing yourself now what's interesting about cloud computing and essentially cloud
00:39:37
services which is what Google does is because the product is not sold to you it's delivered to you it's easier to
00:39:44
change but the same problem remains if you look at Google today right it's basically a search box and it's
00:39:51
incredibly powerful but what happens when that interface is not really textual right they'll have to reinvent that they're
00:39:59
working on that tech it'll be that the system will somehow know what you're asking right it will just be your assistant
00:40:06
um and again Google will do very well so I'm in no way criticizing Google here but I'm saying that even something as
00:40:12
simple as the search box will eventually be replaced by something more powerful it's important that Google be the
00:40:18
company that does that I believe they will and I I was thinking about it you know the example of Steve Jobs and that
00:40:24
building with the pirate flag on it my brain went um there's so many offices around the
00:40:32
world that were trying to kill Apple at that exact moment that might not have had the pirate flag but that's exactly
00:40:38
what they were doing in similar small rooms so what Apple had done so smartly there was they owned the people that
00:40:45
were about to kill their business model and this is quite difficult to do and part of me wonders if in your experience
00:40:51
it's a Founder that has that type of conviction that does that it's extremely
00:40:56
hard for non-founders to do this in corporations because if you think about a
00:41:01
corporation what's the duty of the CEO many there's the shareholders there's
00:41:07
the employees there's the community and there's a board trying to get a board of
00:41:12
very smart people to agree on anything is hard enough so imagine I walk in to you and I say I have a new idea I'm
00:41:20
going to kill our profitability for two years it's a huge bet and I need $1
00:41:25
billion now would the board say yes well they
00:41:31
did to Mark Zuckerberg he spent all that money on um
00:41:36
essentially VR of one kind or another it doesn't seem to have produced very much but at exactly the same time he invested
00:41:44
very heavily in Instagram WhatsApp and Facebook and in particular in the AI
00:41:49
systems that power them and today Facebook to my surprise is a very significant leader in AI having released
00:41:56
this uh language model called Llama 400 billion which is curiously an open source model open
00:42:03
source means it's available freely for everyone and what Facebook and Meta are saying is as long as we have this
00:42:10
technology we can maximize the revenue in our core businesses so there's a good example and uh Zuckerberg is
00:42:17
obviously an incredibly talented entrepreneur um he's now back on the list of the richest people um he's
00:42:23
fêted you know in everything he was doing and he managed to lose all that money while making a different bet
00:42:30
that's a unique founder the same thing is almost impossible with a hired
00:42:35
CEO how important here is focus and what's your sort of opinion of um
00:42:41
the importance of focus from your experience with Google but also looking at these other companies because when you're at Google and you have so much
00:42:47
money in the bank there's so many things that you could do and could build like an endless list you can take on anybody
00:42:52
and basically win in most markets how do you think about focus at Google
00:42:58
focus is important but it's misinterpreted in Google we spent an
00:43:04
awful lot of time telling people we wanted to do everything and everyone said you can't pull off everything and
00:43:12
we said yes we can we have the underlying architectures we have the underlying reach we can do this if we
00:43:18
can imagine and build something that's really transformative and so the idea was not that we would somehow focus on
00:43:24
one thing like search but rather that we would pick areas of great impact and importance to the world many of which
00:43:30
were free by the way this is not necessarily Revenue driven and that worked I'll give you another example
00:43:35
there's an old saying in business school that you should focus on what
00:43:41
you're good at and you should simplify your product lines and you should get rid of product lines that don't work
00:43:47
Intel famously had a chip the term is called ARM it's a RISC chip and this
00:43:55
particular RISC chip was not compatible with the architecture that they were using for most of their products and so
00:44:01
they sold it unfortunately this was a terrible mistake because the architecture that they sold off was
00:44:08
needed for mobile phones with low memory with small batteries and heat problems and so forth and so on and so
00:44:16
that decision that fateful decision now 15 years ago meant that they were never a player in the mobile space and once
00:44:23
they made that decision they tried to take their expensive and complex chips and they kept trying to
00:44:29
make cheaper and smaller versions but the core decision which was to simplify led to the wrong outcome today if
00:44:37
you look at I'll give you an example the Nvidia chips use an ARM CPU and then
00:44:43
these two powerful uh GPUs it's called the B200 they don't use the Intel chip
00:44:48
they use the ARM chip because it was for their needs faster I would never have predicted that 15 years ago so at the
00:44:55
end maybe it was just a mistake but maybe they didn't understand in the way
00:45:01
they were organized as a corporation that ultimately battery power would be as important as computing power right
00:45:08
the amount of battery you use and that was the discriminant so one way to think about it is if you're going to have
00:45:13
these sort of simple rules you better have a model of what happens in the next five years so the way I teach this is
00:45:22
just write down what it'll look like in five years just try what it will look like in five years your company or whatever
00:45:28
it is right so let's talk about AI what will be true in five
00:45:33
years that it's going to be a lot smarter than it is it'll be a lot smarter but how many companies will there be in AI
00:45:40
will there be five or 5,000 or 50,000 50,000 how many big companies will there
00:45:47
be will there be new companies what will they do right so I just told you my view
00:45:53
is that eventually you and I will have our own AI assistant which is a polymath
00:46:00
which is incredibly smart which helps guide us through the information overload that exists today who's going to build it
00:46:06
make a prediction what kind of hardware will it be on make a prediction how fast will the networks be make a prediction
00:46:13
write all these things down and then have a discussion about what to do now what is interesting about our industry
00:46:21
is that when something like the PC comes along or the internet I lived through all of these things they are such
00:46:27
broad phenomena that they really do create a whole new Lake a whole new ocean whatever metaphor you want now
00:46:34
people said well wasn't that crypto no crypto is not such a platform crypto is
00:46:41
not transformative to daily life for everyone people are not running around all day using crypto tokens rather than
00:46:47
currency crypto is a specialized Market by the way it's important and it's interesting it's not a horizontal
00:46:53
transformative Market the arrival of alien intelligence in the form of savant that you use is such a transformative
00:47:00
thing because it touches everything it touches you as a producer as a star as a narrator it touches me as an
00:47:07
executive um it will ultimately help people make money in the stock market people are working on that there's so
00:47:14
many ways in which the technology is transformative to start you in your case when you think about your company
00:47:19
whether it's little you know itty bitty or a really big one it's fundamentally how will you apply AI to accelerate what
00:47:27
you're doing right in your case for example here you have I think the most successful show in the UK by far right
00:47:35
so how will you use AI to make it more successful well you can ask it to distribute you more right to make uh
00:47:41
narratives to summarize uh to come up with new insights to suggest uh to have
00:47:46
fun to create contests there are all sorts of ways that you can ask AI um I'll give you a simple example if I were a
00:47:54
politician thankfully I'm not um and I knew my district I would say uh to the
00:47:59
computer write a program so I'm saying to the computer you write a program which goes through all the constituents
00:48:05
in my district figures out roughly what they care about and then sends
00:48:11
them a video which is labeled you know of me digitally so I'm not fake but it's
00:48:16
kind of like my intention where I explain to them how I as their representative have made the bridge work
00:48:22
right and you sit there and you go that's crazy but it's possible now politicians have not discovered this
00:48:29
yet but they will because ultimately politicians are about human connection and the quickest way to have
00:48:35
that communication is to be on their phone talking to them about something that they care about when ChatGPT first
00:48:41
launched and it sort of scaled rapidly to 100 million users there were all these articles saying that um the founders of
00:48:48
Google had rushed back in and it was a crisis situation at Google and there was panic and there were two things that
00:48:53
I thought first was is that true and the second was how did Google not come to market first
00:49:00
with a ChatGPT style product well remember that's the old question of why did you not do Facebook
00:49:06
well the answer is we were doing everything else right so my defensive answer is that Google has eight or nine
00:49:14
or 10 billion user clusters of activity which is pretty good right it's pretty
00:49:19
hard to do right I'm very proud of that I'm very proud of what they're doing now um my own view is that what happened was
00:49:26
Google was working in the engine room and a team out of OpenAI figured
00:49:32
out a technology called RLHF reinforcement learning from human feedback and what happened was when they did GPT-3 and GPT
00:49:39
the T is Transformer which was invented at Google when they did it they had sort of this interesting idea and then they
00:49:46
sort of casually started to use humans to make it better and RLHF
00:49:52
refers to the fact that you use humans at the end to do A/B tests where humans can actually say well this
00:49:59
one's better and then the system learns recursively from Human training at the end that was a real breakthrough right
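The loop described here — humans A/B-comparing model outputs so the system learns which is better — can be caricatured as a preference tally. This is a toy sketch, not OpenAI's actual RLHF pipeline (which trains a neural reward model and then fine-tunes the base model with reinforcement learning); the prompts, completions, and scoring scheme are all invented for illustration.

```python
from collections import defaultdict

# Toy "reward model": a learned score per candidate completion.
# Real RLHF adjusts neural-network weights; here we just tally preferences.
scores: defaultdict[str, float] = defaultdict(float)

def record_preference(a: str, b: str, preferred: str) -> None:
    """A human compares completions a and b and picks the better one."""
    other = b if preferred == a else a
    scores[preferred] += 1.0
    scores[other] -= 1.0

def best_completion(candidates: list[str]) -> str:
    """The system now ranks candidates by accumulated human preference."""
    return max(candidates, key=lambda c: scores[c])

candidates = ["It depends.", "Step 1: check the logs. Step 2: ..."]
# Three rounds of human A/B feedback all prefer the concrete answer.
for _ in range(3):
    record_preference(candidates[0], candidates[1], preferred=candidates[1])

print(best_completion(candidates))  # the human-preferred completion wins
```

The key design point matches the anecdote: the model's behavior shifts recursively from accumulated human judgments at the end of the pipeline, not from any change to how it was originally trained.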
00:50:07
and uh I joke with my OpenAI friends that you were sitting around on Thursday night and you turn this thing
00:50:13
on and you go holy crap look how good this thing is it was a real Discovery
00:50:19
right that none of us expected certainly I did not um and once they had it um the
00:50:25
OpenAI people Sam and so forth we'll talk about this they didn't
00:50:30
really understand how good it was they just turned it on and all of a sudden they had this huge success disaster
00:50:35
because they were working on GPT-4 at the same time it was an afterthought it's a great story because it just shows
00:50:42
you that even the brilliant founders do not necessarily understand how powerful what they've done is now today
00:50:50
of course you have uh GPT-4o um basically a very powerful model from
00:50:55
OpenAI you have Gemini 1.5 which is clearly roughly equivalent if
00:51:01
not better in certain areas um Gemini is more multimodal for example and then you have other players Llama
00:51:07
the Llama architecture spelled L-L-a-M-A uh does not stand for llamas it's large language
00:51:14
models um out of Facebook and a number of others uh there's a startup called Anthropic um which is very powerful
00:51:22
founded by one of the inventors of GPT-3 um and a whole bunch of people and they formed their company knowing they were
00:51:28
going to be that successful it's interesting they actually established as part of their incorporation that they were a
00:51:33
public benefit Corporation because they were concerned that it would be so powerful that some evil CEO in the
00:51:39
future would force them to go for revenue as opposed to world goodness so the teams
00:51:46
when they were doing this they understood the power of what they were doing and they anticipated the level of impact and they were right do you
00:51:53
think if Steve Jobs was at Apple they would be on that list um how do you think the company would be
00:52:00
different well Tim has done a fantastic job in Steve's Legacy and what's interesting is normally the successor is
00:52:07
not as good as the founder but somehow Tim having worked with Steve for so long and having Steve set the culture
00:52:13
they've managed to continue the focus on the user this incredible safety
00:52:19
focus in terms of apps and so forth and so on and they've remained a relatively closed culture I think all of those
00:52:25
would have been maintained had Steve not you know tragically died uh he was a
00:52:30
good friend but the important point is Steve believed very strongly in
00:52:38
what are called closed systems where you own and control all your intellectual
00:52:43
open versus closed because I came from the other side and I did this with respect I don't think they would have changed that and have they changed that now
00:52:51
no I think Apple is still basically a single company that's
00:52:56
vertically integrated the rest of the industry is largely more open I think everyone especially in the wake of the recent
00:53:02
launch of the iPhone 16 which I've got somewhere here um has this expectation
00:53:08
that Apple would have if Steve were still alive taken some big bold bet in some
00:53:13
and when I think about you know Tim's tenure he's done a fantastic job of keeping that company going running it with the
00:53:19
sort of principles of Steve Jobs but have there been many big bold successful bets a lot of people point at the AirPods
00:53:25
which are a great product but I think AI is one of those things where you go I wonder if Steve would
00:53:31
have understood the significance of it and Steve was that smart that he I would
00:53:36
never you know he's an Elon level intelligence
00:53:41
um when Steve and I worked together very closely which was what 15 years
00:53:47
before his death um he was very frustrated at the success of MP4 over uh MOV
00:53:57
um format files and he was really mad about it and I said well you know maybe
00:54:03
that's because you were closed and QuickTime was not generally available he said that's not true my team you know our
00:54:09
product is better and so forth so his core belief system he's an artist
00:54:14
right and given the choice we used to have this debate where do you want to be Chevrolet or do you want to be
00:54:20
Porsche do you want to be you know General Motors or do you want to be BMW and he said I want to be BMW
00:54:27
and during that time Apple's margins were twice as high as the PC companies
00:54:32
and I said Steve you don't need all that money you're generating all this cash you're giving it to your
00:54:38
shareholders and he said the principle of our profitability and our value and our brand is this luxury brand
00:54:47
right so that's how he thought now how would AI change that
00:54:52
everything that he would have done with Apple today would be AI inspired but it
00:54:57
would be beautiful that's the great gift he had because I think Siri was almost a
00:55:03
glimpse at what AI now kind of looks like it was a glimpse at I guess what the ambition was we've all been
00:55:09
chatting to the Siri thing which is I think most people would agree is kind of like largely useless unless you're trying to figure out something super
00:55:14
super simple but now this weekend as I said I was sat there with my girlfriend's family speaking to
00:55:21
this voice activated device and it was solving problems for me almost instantaneously that are very complex and translating them into French and
00:55:27
Portuguese welcome to the replacement for Siri and again would Steve have done that quicker I don't
00:55:33
know it's very clear that the first thing Apple needs to do is have Siri be
00:55:40
replaced by an AI and call that Siri hiring we're doing a lot of hiring in
00:55:45
our companies at the moment and we're going back and forth on what the most important principles are when it comes to hiring making lots of mistakes
00:55:51
sometimes getting things right sometimes what do I need to know when it comes to hiring startups by
00:55:59
definition are huge Risk Takers you have no history you have no incumbency you
00:56:04
have all these competitors by definition and you have no time so in a startup you want to um prioritize
00:56:12
intelligence and quickness over experience and sort of stability you
00:56:17
want to take risks on people and part of the reason why startups are full of young people is
00:56:23
because young people often don't have the baggage of executives who have been around for a long time but more
00:56:28
importantly they're willing to take risks so it used to be that you could
00:56:34
predict whether a company was successful by the age of the founders and in that 20 and 30 year old period the company would
00:56:41
be hugely successful startups um wiggle they try something they try something else and they're very quick to discard
00:56:49
an old idea corporations spend years with a belief system that is factually
00:56:54
false and they don't actually change their opinion until after they've lost all the contracts and if you go back
00:57:02
all the signs were there nobody wanted to talk to them nobody cared about the product right and yet they kept pushing
00:57:07
it so um if you're a CEO of a larger company what you want to do is basically
00:57:13
figure out how to measure this Innovation so that you don't waste a lot of time Bill Gates had a saying a long
00:57:18
time ago which was that the most important thing to do is to fail fast right that was the characteristic from his
00:57:24
perspective as the CEO and founder of Microsoft um that he wanted everything to happen and he wanted to
00:57:30
fail quickly and that was his theory and do you agree with that theory yeah I do
00:57:35
fast failure is important because you can say it in a nicer way but fundamentally um at Google we had this
00:57:42
70/20/10 rule that Larry and Sergey came up with 70% on the core business 20% on
00:57:47
adjacent business and 10% on other what does that mean sorry core business means search ads adjacent business means
00:57:54
something that you're trying like a cloud business or so forth and the 10% is some new idea so Google created this
00:58:01
thing called Google X the first product it built was called Google Brain which is one of the first machine learning
00:58:07
architectures this actually precedes DeepMind Google Brain was used to power the AI systems Google Brain's team of 10
00:58:14
or 15 people generated 10 20 30 40 billion dollars of extra profits over a
00:58:20
decade so that pays for a lot of failures right then they had a whole bunch of other ideas that seemed very
00:58:26
interesting to me that didn't happen for one reason or another and they would cancel them and then the people
00:58:33
would get reconfigured and one of the great things about Silicon Valley is it's possible to spend a few years on a
00:58:39
really bad idea and get cancelled if you will and then get another job Having learned all of that my joke is the best
00:58:46
CFO is one who's just gone bankrupt because the one thing that CFO is not going to let happen is to go bankrupt
00:58:53
again yeah well on this point of culture as well Google as such a big company
00:58:59
must experience a bunch of microcultures one of the things that I've always kind
00:59:04
of studied as a cautionary tale is the story of TGIF at Google which was
00:59:10
this sort of weekly all-hands meeting where employees could ask the executives whatever they wanted to and the articles
00:59:16
around it say that it was eventually sort of changed or canceled because it became unproductive it's more complicated than
00:59:23
that so Larry and Sergey started TGIF uh which I obviously participated in and
00:59:28
we had fun uh there was a sense of humor it was all off the Record um a famous
00:59:34
example is the VP of sales whose name was Omid um was always predicting lower
00:59:40
Revenue than we really had which is called sandbagging so we got a sandbag and we made him stand on the sandbag in
00:59:46
order to present his numbers it was just fun humorous you know we had skits and things like that at some size you
00:59:54
don't have that level of intimacy and you don't have that level of privacy and what happened was there were leaks
01:00:02
uh eventually there was a presentation I don't remember the specifics where the
01:00:08
presentation was ongoing and someone was leaking the presentation live to a reporter and somebody came on
01:00:15
stage and said we have to stop now I think that was the moment where the company got sort of too
01:00:23
big h I heard about a story that um because
01:00:28
from what I had understood this might be totally wrong but it's all just things that Google employees have told me was that there weren't many sackings or firings
01:00:35
at Google there weren't many layoffs it wasn't really a culture of layoffs and I guess in part that's because the company was so successful that it didn't
01:00:42
have to make those extremely extremely tough decisions that we're seeing a lot of companies make today I reflect on
01:00:48
Elon's running of Twitter when he took over Twitter the story goes that he went to the
01:00:54
top floor and basically said anyone who's willing to work hard is committed to these values please come to the top
01:01:00
floor everyone else you're fired um this sort of extreme culture of culling and people being sort of activists at work
01:01:09
and I wanted to know if there's any truth in that there's some in
01:01:14
Google's case we had a position of why lay people
01:01:20
off just don't hire them in the first place it's much much easier and so in my tenure the only layoff we did was
01:01:28
200 people in the sales structure right after the 2008 economic crisis and I remember it as being extremely painful right it
01:01:35
was the first time we had done it so we took the position which is different at the time that you shouldn't have an
01:01:42
automatic layoff what would happen is that there was a belief at the time that every six months or nine months you
01:01:48
should take the bottom 5% of your people and lay them off the problem with that is you're assuming the 5% are
01:01:54
correctly identified and furthermore even the lowest performers have knowledge and value to the corporation
01:02:00
that we can tap so we took a much more positive view of our employees and the employees liked that and we obviously paid them very well and so
01:02:06
forth and so on I think that the the cultural issues ultimately have been
01:02:11
addressed but there was a period of time where because of
01:02:17
the freewheeling nature of the company there were an awful lot of internal distribution lists which had
01:02:22
nothing to do with the company what does that mean they were distribution lists on topics of War peace politics so forth
01:02:31
what's a distribution list it's like an email list think of it as a message board okay roughly
01:02:37
speaking think of it as message boards for employees and I remember that at one point somebody discovered that there
01:02:43
were 100,000 such message boards and the company ultimately cleaned that up
01:02:48
because companies are not like universities in that there are in fact all sorts of laws about what you can say
01:02:53
and what you cannot say and so forth and so for example the majority of the employees were uh Democrats in the
01:03:00
American political system and I made a point even though I'm a Democrat to try to protect the small number of
01:03:05
Republicans because I thought they had a right to be employees too so you have to be very careful in a corporation to
01:03:11
establish what does speech mean within the corporation and what
01:03:17
you are hearing as wokeism can really be understood as what are the appropriate topics on work time in a
01:03:25
work venue should you be discussing my own view is stick to the business and
01:03:30
then please feel free to go to the bar scream your views talk to everybody you
01:03:35
know I'm a strong believer in free speech but within the corporation let's just stick to the corporation and its goals because I was hearing these
01:03:40
stories about I think in more recent times in the last year or two of people coming to work just for the free breakfast protesting outside that
01:03:47
morning coming back into the building for lunch as best I can tell that's all been cleaned up I did also hear that it had been
01:03:55
cleaned up because I think it was addressed in a very high conviction way which meant that it was seen to
01:04:02
how do you think about competition for everyone that's building something how much should we be focusing on our competition I strongly
01:04:09
recommend not focusing on competition and instead focusing on building a product that no one else has and you say
01:04:14
well how can you do that without knowing the competition well if you study the competition you're wasting your time try to solve the problem in a new way and do
01:04:20
it in a way where the customers are delighted running Google we seldom looked at what our competitors were
01:04:26
doing what we did spend an awful lot of time on was what is possible for us to do what can we actually do from our
01:04:33
current situation and sort of running ahead of everybody turns out to be really important what about
01:04:40
deadlines well Larry established the principle of OKRs which were
01:04:46
objectives and key results in every quarter Larry would actually write down all the metrics and he was tough and he
01:04:54
would say that if you got to 70% of my numbers that was good and then we would grade based on are you above the 70% or
01:05:01
you below the 70% and it was harsh and it works you have to measure to get
01:05:07
things done in a big corporation otherwise everyone kind of looks good makes all
01:05:12
sorts of claims feels good about themselves but it doesn't have an impact what about business plans should we be
01:05:18
writing business plans as founders Google wrote a business plan it was written by a fellow named Salar and I saw it
01:05:25
years later and it was actually correct and I told Salar that this is
01:05:30
probably the only business plan ever written for a corporation that was actually correct in hindsight so what I
01:05:37
prefer to do and this is how I teach it at Stanford is try to figure out what the world looks like in five years and
01:05:44
then try to figure out what you're going to do in one year and then do it right
01:05:50
so if you can basically say this is the direction these are the things we're going to achieve within one year and
01:05:56
then run against that as hard goals not simple goals but hard goals then you'll get there and the general rule at least
01:06:03
in a consumer business is if you can get an audience of 10 or 100 million people you can make lots of money right so if
01:06:09
you give me any business that has no revenue and 100 million people I can find a way to monetize that with
01:06:15
advertising and sponsorships and donations and so forth and so on focus on getting the user right and everything
01:06:22
else will follow the Google phrase is focus on the user and all else will follow Sergey and
01:06:30
Larry you worked with them for 20 years many decades yeah two decades what made
01:06:36
them special frankly raw IQ they were just smarter than everybody else really yeah and
01:06:43
uh in sergey's case his father was a very brilliant Russian mathematician his
01:06:48
mother was also highly technical his family is all very technical and he was clever he's a clever
01:06:53
mathematician uh Larry different personality but similar so an example would be that Larry and I are in
01:07:00
his office and we're writing on the Whiteboard a long list about what we're going to do and he says look we're going to do this and this and I said okay I
01:07:06
agree with you I don't agree with you we make this very long list and Sergey is out playing
01:07:11
volleyball and so he runs in in his little volleyball shorts and his little shirt all sweating he looks at our list
01:07:17
and said this is the stupidest thing I've ever heard and then he suggested five things and he was exactly right so we
01:07:25
erased the whiteboard and then he of course went back to play volleyball and that became the strategy of the company so
01:07:31
over and over again it was their Brilliance and their ability to see things that I didn't see that I
01:07:36
think really drove it can you teach that I don't know I think you can teach
01:07:41
listening and um but I think most of us get caught up in our own
01:07:48
ideas and we are always surprised that something new happened like I've just
01:07:54
told you that I've been in AI a long time I'm still surprised at the rate my favorite current product is called
01:08:00
NotebookLM and for the listeners NotebookLM is an experimental product out of Google
01:08:06
DeepMind basically Gemini it's based on the Gemini back end and it was trained with high quality podcast voices
01:08:14
it's terrifying and you basically give it text so what I'll do is I'll write
01:08:19
something again I don't write very well and I'll ask Gemini to rewrite it to be more beautiful okay I'll take that text
01:08:26
and I'll put it in NotebookLM and it produces this interview between a man and a woman who don't exist and for
01:08:33
fun what I do is I play this in front of an audience and I wait and see if anyone figures out that the humans are not
01:08:40
human it's so good they don't figure it out we'll play it now so this is the big thing that everyone's making a big fuss
01:08:46
about you can go and load this conversation now it's going to go out and create a conversation that's in a podcast style where there's a male voice
01:08:53
and a female voice and they're analyzing the content and then coming up with their own kind of just uh creative
01:08:59
content so you could go and push play right here we are back Thursday get ready for week three the injury report
01:09:05
this week was a doozy it's a long one yeah it is and it has the potential to
01:09:10
really shake things up so that to me NotebookLM is my ChatGPT moment of
01:09:18
this year it was mine as well and it's much of the reason that I was um deeply
01:09:24
confused okay because as a podcaster who's building a media company we have an office down the road 25,000 square
01:09:31
feet we have studios in there we're building audio and video content at
01:09:40
the dawn of this new world where the cost of production of content goes to like zero or something and I'm trying to
01:09:46
navigate how to play as a media owner so first of all what's really going on is you're moving from scarcity
01:09:52
to ubiquity you're moving from scarcity to abundance so one way to understand the
01:09:58
world I live in is that scale Computing generates abundance and abundance allows new strategies in your case it's obvious
01:10:04
what you should do you're a really famous podcaster and you have lots of interesting guests simply have this fake
01:10:11
set of podcasts criticize you and your guests right you're essentially just amplifying your reach they're not
01:10:19
going to substitute for your honest brilliance and charisma here but they're going to accentuate it they
01:10:25
will be entertaining they will summarize it and so forth it amplifies your reach if you go back to my basic
01:10:32
argument that AI will double the productivity of everybody or more so in your case you'll have twice as many
01:10:40
co-podcasts what I do for example is I'll write something and I'll have it respond and then to Gemini I'll say
01:10:46
make it longer and it adds more stuff I think God I do this in like 30 seconds
01:10:52
then how powerful in your case take one of these uh lengthy interviews you do
01:10:57
ask the system to annotate it to amplify it and then feed that into fake
01:11:03
podcasters and see what they say you'll have a whole new set of audiences that love them more than you but it's all
01:11:10
from you that's the key idea here I worry because there's going to be
01:11:15
potentially billions of podcasts that are uploaded to RSS feeds all around the world and it's all going to sort of chip
01:11:21
away at you know the moat that I've built so many people have believed that but I
01:11:28
think the evidence is it's not true um when I started at Google there was this notion that celebrity would go away and
01:11:35
there would be this very long tail of micro markets you know Specialists
01:11:41
because finally you could hear the voices of everyone and we're all very democratic and liberal in our view what really happened was
01:11:48
networks accentuated the best people and they made more money right you went from being a local personality to a national
01:11:56
personality to a global personality and the globe is a really big thing and there's lots of money and lots of
01:12:01
players so you as a celebrity are competing against a global group of
01:12:07
people and you need all the help you can get to maintain your position if you do it well by using these AI technologies you
01:12:13
will become more famous not less famous
01:12:20
Genesis I've had a lot of conversations with a lot of people about the subject of AI and when I read
01:12:26
your book and I've watched you do a series of interviews on this some of the quotes that you said really stood out to
01:12:31
me one of them I wrote down here which comes from your book Genesis
01:12:37
it's on page five the Advent of artificial intelligence is in our view a question of human
01:12:45
survival yes that is our view so why is it a question of human
01:12:53
survival AI is going to move very quickly it's moving so much more quickly
01:12:58
than I've ever seen because the amount of money the number of people the impact the
01:13:05
need what happens when the AI systems are really running key parts of our
01:13:10
world what happens when AI is making the decisions my simple example you have a
01:13:16
car which is AI controlled and you have an emergency or a lady's about to give
01:13:23
birth or something like that and they get in the car and there's no override switch because the system is optimized
01:13:30
around the whole as opposed to his or her emergency right we as humans accept
01:13:36
various forms of efficiency including urgent ones versus systemic efficiency you could imagine that the
01:13:43
Google Engineers would design a perfect City that would perfectly operate every
01:13:48
self-driving car on every street but would not then allow for the exceptions that you need in such an
01:13:55
important issue so that's a trivial example and one which is well understood
01:14:01
of how it's important that these things represent human values right that we we have to actually articulate what does it
01:14:08
mean so my favorite one is all this misinformation um democracy is pretty
01:14:13
important democracy is by far the best way to live and operate societies look there are plenty of examples of
01:14:19
this none of us want to work in essentially an authoritarian dictatorship so you better figure out a
01:14:25
way where the misinformation components do not screw up proper political
01:14:31
processes another example is this question about teenagers and their mental development and growing up
01:14:37
into these societies I don't want them to be constantly depressed there's a lot of evidence that dates around 2015 when
01:14:46
all the social media algorithms changed from linear feeds to targeted feeds in other words they went from time-ordered to this
01:14:52
is what you want that hyperfocus has ultimately narrowed people's political views as we
01:15:00
discussed but more importantly it's produced more depression and anxiety so all the studies indicate that basically
01:15:07
if you look at roughly that time when people are coming of age they're not as happy with their lives their behaviors
01:15:13
their opportunities and so forth and the best explanation is it was an algorithmic change and remember that
01:15:20
these systems they're not just collections of content they are algorithmically deciding
01:15:25
you know the algorithm decides what the outcome is for humans we have to manage
01:15:30
that um what we say in many different ways in the book is that you have sort
01:15:36
of a choice of whether the algorithms will advance that's not a
01:15:41
question the question is are we advancing with it and do we have control over it um there are so many examples
01:15:48
where you could imagine an AI system could do something more efficiently but at what cost right
01:15:55
um I should mention that there is this discussion about something called AGI
01:16:00
artificial general intelligence and there's this discussion in the Press among many people that AGI
01:16:06
occurs on a particular day right and this is sort of a popular concept that on a particular day five years from now
01:16:13
or 10 years from now this thing will occur and all of a sudden we're going to have a computer that's just like us but
01:16:18
even quicker that's unlikely to be the path much more likely are these waves of
01:16:24
innovation in every field better psychologists better writers you see this with ChatGPT already better
01:16:31
scientists there's a notion of an AI scientist that's working with real scientists to accelerate the
01:16:37
development of more AI science people believe all of this will come but it has to be under human
01:16:43
control do you think it will be I do and part of the reason is I and others have
01:16:49
worked hard to get the governments to understand this it's very strange in my entire career which has gone for you
01:16:55
know 50 years we've never asked the government for help because asking
01:17:01
the government for help is basically just a disaster in the view of the tech industry in this case the people who
01:17:07
invented it collectively came to the same view that there need to be guardrails on this technology because of
01:17:13
the potential for harm the most obvious one is how do I kill myself give me recipes to hurt other people that kind
01:17:18
of stuff there's a whole Community now in this part of the industry which are called trust and safety groups
01:17:26
and what they do is they actually have humans test the system before it gets
01:17:31
released to make sure the harm that it might have in it is suppressed it literally won't answer the question when
01:17:39
you play this forward in your brain you've been in the tech industry for a long time and from looking at your work
01:17:44
it feels like you're describing this as the most sort of transformative potentially harmful technology that humans have really ever seen you know
01:17:51
maybe alongside the nuclear bomb I guess but some would say even potentially worse because of the nature of the
01:17:57
intelligence and its autonomy you must have moments where you you think forward into the future and
01:18:04
your thoughts about that future aren't so Rosy well because I have those moments yes but but let's let's think let's
01:18:10
answer the question I said think five years in five years you'll have two or three more turns of the crank of these
01:18:15
large models these large models are scaling with ability that is
01:18:21
unprecedented there's no evidence that the scaling laws as they're called
01:18:26
have begun to stop they will eventually stop but we're not there yet each one of these cranks looks like it's
01:18:33
a factor of two factor of three factor of four of capability so let's just say turning the crank all of these systems
01:18:40
get 50 times or 100 times more powerful in and of itself that's a very big deal
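The compounding Schmidt describes here can be sketched with a line of arithmetic; a minimal sketch, assuming his rough figures of a 2x to 4x capability factor per model generation and two or three generations within five years:

```python
# Sketch of the "turns of the crank" compounding described above.
# The per-turn factors (2x to 4x) and the number of turns (two or
# three over five years) are Schmidt's rough figures, assumed here.

def compounded_capability(factor_per_turn, turns):
    """Overall capability multiplier after `turns` model generations."""
    return factor_per_turn ** turns

print(compounded_capability(2, 3))   # three turns at 2x each -> 8
print(compounded_capability(4, 3))   # three turns at 4x each -> 64
```

On these assumed numbers the range runs from about 8x (three turns at 2x) up to 64x (three turns at 4x), so "50 times or 100 times" sits at the aggressive end of his own estimate.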
01:18:46
because those systems will be capable of physics and math you see this with o1 and OpenAI and all the other
01:18:53
things that are occurring now what are the dangers well the most obvious one is cyber attacks
01:19:00
there's evidence that the raw models these are the ones that have not been released can do what are called zero-day
01:19:06
attacks as well as or better than humans a zero-day attack is an attack that's unknown they can discover something new
01:19:12
and how do they do it they just keep trying because they're computers and they have nothing else to do they don't sleep they don't eat they just turn them
01:19:18
on and they just keep going um so the so cyber is an example where everybody's concerned another one is biology viruses
01:19:25
are relatively easy to make and you can imagine coming up with really bad viruses there's a whole team I'm part of
01:19:31
a commission looking at this to try to make sure that doesn't happen I already mentioned misinformation
01:19:37
another probably negative but we'll see is the development of new forms of
01:19:43
warfare I've written extensively on how war is changing and the way to
01:19:48
understand historic war is that it's stereotypically the soldier with the
01:19:53
gun you know on one side and so forth World War I trenches you see this by the way in the Ukraine fight today where the
01:20:00
Ukrainians are holding on valiantly against the Russian Onslaught but it's sort of you know mano a mano you know man
01:20:06
against man sort of all of the stereotypes of War so in a drone World
01:20:11
which is the sort of the fastest way to build new robots is to build drones you'll be sitting in a Command Center in
01:20:17
some office building connected by a network and you'll be doing harm to the other side while you're drinking your
01:20:23
coffee right that's a change in the logic of war and it's applicable to both sides I don't think anyone quite
01:20:29
understands how war will change but I will tell you that in in the Russian Ukraine war you're seeing a new form of
01:20:36
Warfare being invented right now right um both sides have lots of drones tanks
01:20:42
are no longer very useful a $5,000 drone can kill a $5 million tank um so it's
01:20:49
called the kill ratio so basically it's drone on drone and so now people are trying to figure out how to have one
01:20:54
drone destroy the other drone right this will ultimately take over war and
01:20:59
conflict in our world in total you mentioned raw models this is a concept that I don't think people understand
01:21:05
exists the idea that there's some other model the raw model that is
01:21:10
capable of much worse than the thing we play with on our computers every day it's important to establish how these things work so the way these
01:21:16
algorithms work is they have complicated training runs where they suck all the information in and we
01:21:24
currently believe we've sort of sucked in all of the written word that's available it doesn't mean there isn't more but we've literally done such a good
01:21:31
job of sucking in everything that humans have ever written it's all in these big computers when I say computers I don't
01:21:37
mean computers I mean supercomputers with enormous memories and the scale is mindboggling uh and of course there's
01:21:43
this company called Nvidia which makes the chips which is now one of the most valuable companies in the world um
01:21:50
surprisingly so incredibly successful because they're so Central to this revolution and good for Jensen and his
01:21:55
team so the important thing is when you do this training it comes out with a raw
01:22:00
model right it takes six months and you know it runs 24 hours a day you can watch it as it gets close there's a
01:22:07
measurement that they use called the loss function when it gets to a certain number they say good enough so then they
01:22:13
go what do we have right what do we do right so the first thing is let's
01:22:18
figure out what it knows so they have a set of tests and of
01:22:23
course it knows all sorts of bad things which they immediately then tell it not to answer to me the most interesting
01:22:29
question is over a 5-year period the systems will learn things
01:22:35
that we don't know they learned how will you test for things that you don't know they
01:22:41
know the answer in the industry is that they have incredibly clever people who
01:22:47
sit there and they fiddle literally fiddle with the networks and say I'm going to see if it knows this
01:22:55
I'll see if it can do this and then they make a list and they say that's good that's not so good right so all of these
01:23:02
Transformations so for example you can show it a picture of a website and it can generate the code to generate a
01:23:08
website all of those were not expected they just happened it's called emergent
01:23:13
Behavior scary but exciting and so far the systems have held the
01:23:20
governments have worked well these trust and safety groups are working here in the UK one year ago
01:23:28
was the first trust and safety conference um the government did a fantastic job the team that was
01:23:33
assembled was the best of all the country teams here in the UK um now what's happening is these are happening
01:23:39
around the world the next one is in France in uh early February and I expect a similar good result do you think we're
01:23:46
gonna have to guard I mean you talk about this but do you think we're going to have to guard these raw models with
01:23:52
with guns and tanks and machinery and stuff I worked for the Secretary of Defense for a while in my time at Google
01:23:59
you could spend 20% of your time on other things so I worked for the Secretary of Defense to try to understand the US Military and um one of
01:24:07
the things that we did is we visited a plutonium factory plutonium is incredibly dangerous and incredibly
01:24:13
secret and so this particular base is inside of another base so you go through the first set of machine guns and then
01:24:19
you have the normal thing and then you go into the special place with even more machine guns because it's so secure so the metaphor is do you
01:24:28
fundamentally believe that the computers that I'm talking about will be of such value and such danger that they'll have
01:24:34
their own data center with their own guards which of course might be computer guards but the important thing is that
01:24:40
it's so special that it has to be protected in the same way that we protect nuclear bombs
01:24:45
and plutonium an alternative model is to say that this technology will spread
01:24:52
pretty broadly and there'll be many such places if it's a small number of groups the
01:24:58
governments will figure out a way to do deterrence and they'll figure out a way to do non-proliferation so I'll make something
01:25:05
up I'll say there's a couple in China there's a few in the US there's one in Britain of course we're all tied together between the US and Britain and
01:25:11
maybe in a few other places that's a manageable problem on the other hand let's imagine that that power is
01:25:17
ultimately so easy to copy that it spreads globally and it's accessible to
01:25:23
for example terrorists then you have a very serious proliferation problem which is not yet
01:25:29
solved this is again speculation because I think a lot about adversaries in China and Russia and
01:25:35
Putin and I think I know you talk about them being a few years behind maybe one
01:25:40
or two years behind but they're eventually going to get there they're eventually going to get to the point where they have these large language
01:25:46
models or these AIS that can do these Day Zero attacks on our nation
01:25:52
and they don't have the sort of social incentive structure if they're a communist country to protect and to
01:26:01
guard against these things are you not worried about what China is gonna do um I am worried and I'm worried
01:26:07
because you're going into a space of great power without fully defined boundaries what Kissinger and I talk about
01:26:13
in the book the Genesis book is fundamentally what happens to society with the arrival of this new
01:26:20
intelligence and the first book we did The Age of AI was right before ChatGPT so
01:26:25
now everybody kind of understands how powerful these things are we talked about it now you understand it so once
01:26:30
these things show up who's going to run them who's going to be in charge how will they be used so from my perspective
01:26:38
I believe at the moment anyway that China will behave relatively responsibly
01:26:43
and the reason is that it's not in their interest to have free speech in every case in China when they
01:26:51
have a choice of giving freedom to their citizens or not they choose non-freedom and I know this because I
01:26:57
spent all that time dealing with it so it sure looks to
01:27:03
me like the Chinese AI solution will be different from the West because of that
01:27:08
fundamental bias against freedom of speech because these things are noisy they make a lot of noise they'll
01:27:15
probably still make AI weapons though well on the weapon side you have to assume that every new technology is
01:27:23
ultimately strengthened in a war um the tank was invented in World War I at the
01:27:28
same time you had the initial forms of uh airplanes much of the second world war was an air Campaign which
01:27:35
essentially built many many things and if you look at there's a book
01:27:40
called Freedom's Forge about the American war production structure according to the
01:27:46
book they ultimately got to the point where they could build two or three airplanes a day at scale so in an
01:27:53
emergency Nations have enormous power I get asked all the time if
01:27:59
anyone's going to have a job left to do because this is the disruption of intelligence and whether it's people driving cars today I mean we
01:28:05
saw the Tesla announcement of the robotaxis whether it's accountants lawyers and everyone in between or
01:28:12
podcasters are we going to have jobs left well um this question has been
01:28:17
asked for 200 years there were the Luddites here in Britain way
01:28:23
back when and inevitably when these technologies come along there's all these fears about them indeed with the Luddites
01:28:29
there were riots and people you know destroying the looms and all of this kind of stuff but somehow we got through
01:28:34
it so um my own view is that there will be a lot of job
01:28:40
dislocation but there will be a lot more jobs not fewer jobs and here's why we
01:28:46
have a demographic problem in the world especially in the developed world where we're not having enough
01:28:51
children uh that's well understood uh furthermore we have a lot of older people and and the younger people have
01:28:57
to take care of the older people and they have to be more productive if you have young people who need to be more productive the best way to make them
01:29:03
more productive is to give them more tools whether it's a machinist that goes from
01:29:10
a manual machine into a CNC machine or in in the more modern case of a knowledge worker who can achieve more
01:29:17
objectives we need that productivity growth if you look at Asia which is the centerpiece of
01:29:22
manufacturing they have all this cheap labor well it's not so cheap anymore so do you know what they did they added
01:29:27
robotic assembly lines so today when you go to China in particular it's also true in Japan and Korea the manufacturing is
01:29:34
largely done by robots why because their demographics are terrible and their cost of Labor is too high so the future is
01:29:41
not fewer jobs it's actually a lot of jobs that are unfilled with people who
01:29:46
may have a job skill mismatch which is why education is so important now what are examples of jobs that go away
01:29:53
automation has always gotten rid of jobs that are dangerous physically dangerous or ones
01:30:01
which are essentially too repetitive and too boring for humans I'll give you an example um security guards it makes
01:30:08
sense that security guards would become robotic because it's hard to be a security guard you fall asleep you don't
01:30:15
know quite what to do and these systems can be smart enough to be very very good security now these are these are
01:30:21
important sources of income for these people they're going to have to find another job another example in in the
01:30:27
media in um Hollywood everyone's concerned that AI is going to take over their jobs all the evidence is the
01:30:33
inverse and here's why um the Stars still get money The Producers still make money they still distribute their movie
01:30:40
but their cost of making the movie is lower because they use more they use for example synthetic backdrops so they
01:30:45
don't have to build the set um they can do synthetic makeup now there are job losses there so the people who make the
01:30:51
make the set and do the makeup are going to have to go back into construction and personal care by the
01:30:57
way in America and I think it's true here there's an enormous shortage of people who can do high quality craftsmanship right those people will
01:31:04
have jobs they're just different and they may not be in Los Angeles am I gonna have to interface with this
01:31:10
technology am I going to have to get a neuralink in my brain because we you go over the subject of there being these
01:31:16
sort of two species of humans potentially ones that do have a way to
01:31:21
incorporate themselves more with artificial intelligence and those that don't and if and if that is the case
01:31:27
what is the time Horizon in your view of that happening I think neuralink is much more speculative because you're dealing with
01:31:33
direct brain connection and nobody's going to drill on my brain until it needs it trust me I suspect you feel the
01:31:39
same uh I I guess my O My overall view is that
01:31:47
um you will not notice how much of your world has been
01:31:53
co-opted by these Technologies because they will produce greater Delight if you think about it a lot of
01:32:01
life is inconvenient it's fix this call this make this happen AI systems should
01:32:06
make all that seamless you should be able to wake up in the morning and have coffee and not have a care in the world
01:32:11
and have the computer help you have a great day this is true of everyone now what
01:32:17
happens to your to your profession well as we said no matter how good the
01:32:22
computers are people are going to want to care about other people another example let's imagine you have Formula 1
01:32:28
and you have Formula One with humans in it and then you have a robot Formula 1 where the cars are driven by the
01:32:35
equivalent of a robot is anyone going to go to the robotic Formula 1 I don't think so because of the drama the human
01:32:42
achievement and so forth do you think that when they run the marathon here in London they're going to have robots
01:32:48
running with humans of course not right of course the robots can run faster than humans it's not interesting what is
01:32:54
interesting is to see human achievement so I think the commentators who say oh there won't be jobs we won't care I
01:33:00
think they miss the point that we care a great deal about each other as human beings we have opinions you have a
01:33:07
detailed opinion about me having just met me right now and I for you we just are naturally set up your face your
01:33:13
mannerisms and so forth we can describe it all right the robot shows up is like oh my God what another robot how boring
01:33:20
why is Sam Altman one of the co-founders of OpenAI working on universal basic income
01:33:26
projects like Worldcoin then well Worldcoin is not the same thing as universal basic income uh um universal basic
01:33:34
income there is a belief in the tech industry that it goes something like
01:33:39
this the politics of abundance what we do is going to create so much abundance
01:33:45
that most people won't have to work and there'll be a small number of groups that work who typically these people
01:33:51
themselves and there be so much Surplus everyone can live like a millionaire and everyone will be happy I completely
01:33:57
think this is false I think none of what I just told you is false but all of these UBI ideas come from this notion
01:34:04
that humans don't behave the way we actually do so I'm I'm a Critic of this view I believe that that we as humans so
01:34:12
an example is um we're going to make the legal profession much much
01:34:17
easier because we can automate much of the technical work of lawyers does that mean we're going to have fewer lawyers
01:34:23
no the current lawyers will just do more laws they'll do more they'll add more complexity the system doesn't get easier
01:34:30
the humans become more sophisticated in their application of the principles we are naturally basically uh we have this
01:34:37
thing called um basically reciprocal altruism that's part of us but we also have our bad sides as well those are not
01:34:44
going away because of AI when I think about AI this simple analogy I often think of is say my IQ as Steven Bartlett is 100
01:34:50
and there's this AI that sat next to me whose IQ is 1,000 what on Earth would you want to give Steven to do because
01:34:57
because that 1,000 IQ would have really bad judgment in a couple cases because remember that the AI systems do not have
01:35:04
human values unless it's added right I would much rather talk to you about
01:35:10
something involving a moral or human judgment even with the Thousand I wouldn't mind Consulting it so tell me
01:35:16
the the history how was this resolved in the past how are these but at the end of the day in my view the core aspects of
01:35:23
it which have to do with morals and judgment and beliefs and Charisma they're not going away is there a chance
01:35:29
that this is the end of humanity no um the way humanity dies it's much harder to
01:35:36
eliminate all of humanity than you think all the people I've worked with on these biological attacks say it takes
01:35:43
more than one horrific pandemic and so forth to eliminate humanity and and the
01:35:48
the pain can be very very high in these moments look at the World War I World War II the Holodomor in uh Ukraine in the
01:35:56
1930s the Nazis you know these are horrifically painful things but we survived right we we as a as a Humanity
01:36:04
survived and we will I wonder if this is the moment where humans couldn't see past around the corner because you know
01:36:11
I've heard you talk about how the AIs will turn into agents and they'll be able to speak to each other and we won't be able to understand the
01:36:17
language I have a specific proposal on that um there are points where humans
01:36:22
should assert control and I've been trying to think about where are they I'll give you an example
01:36:27
there's something called recursive self-improvement where the system just keeps getting smarter and smarter and learning more and more things at some
01:36:35
point if you don't know what it's learning you should unplug it but we can't unplug them can we sure you can
01:36:41
there's a power plug and there's a circuit breaker go and turn the circuit breaker off another example um there's a
01:36:48
there's a scenario theoretical where the system is so powerful it can produce a new model faster than the previous model
01:36:56
was checked okay that's another intervention point so in each of these
01:37:02
cases um if the if agents and the technical term is called agents what they really are is large language models
01:37:09
with memory and you can begin to concatenate them you can say this model does this and then it feeds into this
01:37:15
and so forth you can build very powerful decision systems we believe this is the the the thing that's occurring this year
01:37:21
and next year everyone's doing them they will arrive the agents today speak in English you
01:37:26
can see what they're saying to each other they're not human but they are communicating what they're doing English
01:37:34
to English to English as long as and it doesn't have to be English but as long as they're human understandable but
01:37:40
let's so the thought experiment is one of the agents says I have a better idea I'm going to communicate in my own
01:37:45
language that I'm going to invent that only other agents understand that's a good time to pull the plug what is your
01:37:52
biggest fear about AI my actual fear is different from what you might imagine my my actual fear is
01:37:57
we're not going to adopt it fast enough to solve the problems that affect everybody right and the reason is that
01:38:03
that if you look at everyone's everyday lives what do they want they want safety they want health care they
01:38:11
want great schools for their kids we just work on that for a while why do we make people's lives just better because
01:38:17
of AI we have all these other interesting things why don't we have a um a teacher that is an AI teacher that
01:38:25
works with existing teachers in the language of the kid in the culture of the kid to get the kid as smart as they
01:38:31
possibly can why don't we have a doctor or doctor's assistant really that enables a a human doctor to always know
01:38:39
every possible best treatment and then based on their current situation what the inventory is which country is how
01:38:45
their insurance Works what is the best way to treat that patient those are relatively achievable Solutions why
01:38:50
don't we have them if you just did education and Healthcare globally the impact in terms of lifting
01:38:56
human potential up would be so great right that it would change everything it wouldn't solve the various
01:39:04
other things that we complain about you know this celebrity or this misbehavior or this conflict or even
01:39:09
this war but it would establish a Level Playing Field of knowledge and opportunity at a global level that has
01:39:15
been the dream for decades and decades and decades Chuck me that Perfect Ted
01:39:21
one of the things that I think about the time because my life is quite hectic and busy is how to manage my energy load and
01:39:27
as a podcaster you kind of have to manage your energy in such a way that you can have these articulate conversations with experts on subjects
01:39:34
you don't understand and this is why perfect Ted has become so important in my life because previously when it came
01:39:39
to Energy Products I had to make a trade-off that I wasn't happy with typically if I wanted the energy I had to deal with high sugar I had to deal
01:39:46
with Jitters and crashes that come along with a lot of the mainstream Energy Products and I also just had to tolerate
01:39:51
the fact that if I want energy I have to put up with a lot of artificial ingredients which my body didn't like
01:39:57
and that's why I invested in Perfect Ted and why they're one of the sponsors of this podcast it has changed not just my life but my entire team's life and for
01:40:03
me it's drastically improved my cognitive performance but also my physical performance so if you haven't tried perfect Ted yet you must have been
01:40:10
living under a rock now is the time you can find perfect Ted at Tesco and waitrose or online where you can enjoy
01:40:16
40% off with code diary 40 at checkout head to perfect ted.com this is quite
01:40:23
interesting 85% of Internet users have heard of vpns but only 55% know what
01:40:29
they do if you're in that group let me explain VPNs enable your location online to differ from where you actually
01:40:35
are geographically to help you browse and stream sites that would otherwise be unavailable to you I use nordvpn who are
01:40:41
a sponsor of this show to watch Manchester United games online no matter where I am in the world and Indie from my team uses them whenever she's booking
01:40:47
flights back home to New Zealand having a different online location means she can take advantage of dynamic pricing
01:40:54
and get cheaper prices for her flights nordvpn is the fastest VPN in the world and just one account can be used across
01:41:00
10 devices and they've shared a generous offer for my listeners a discount and four additional months free on a 2-year
01:41:07
plan it's also completely risk-free with nord's 30-day money back guarantee so head to nordvpn.com
01:41:13
doac or click the link in the description below throughout the pandemic I've been
01:41:19
a big supporter um it was a contrarian view but I think it's now less a contrarian view that companies and CEOs
01:41:27
need to be clear in their convictions around how they work and one of the things that I've um been criticized a lot for is that I'm I'm for having
01:41:34
people in a room together so my companies we um we're not remote we work together in an office as I said down the
01:41:39
road from here and I believe in that because I think of community and engagement and synchronous work and I
01:41:45
think that work now has a responsibility to be more than just a set of tasks you do in a world where we're lonelier than
01:41:51
ever before there's more disconnection and especially for young people you don't have families and so on um having
01:41:56
them work alone in a small white box in a big city like London or New York um is robbing them of something which I think
01:42:01
is important this was a contrarian view it's become less contrarian as the big tech companies in
01:42:07
America have started to roll back some of their initial knee-jerk reactions to the pandemic and a lot of them are asking their team members to come back
01:42:13
into the office at least a couple of days a week what's your point of view on this so I have a strong view that I want
01:42:19
people in an office it doesn't have to be all one office but I want them in an office and partly it's for their own benefit if
01:42:25
you're in your 20s when I was a young executive I knew nothing of what I was doing I literally was just lucky to be
01:42:31
there and I learned by hanging out at the water cooler going to meetings hanging out being in the hallway had I
01:42:37
been at home I wouldn't have had any of that knowledge which ultimately was Central to my subsequent promotions so
01:42:43
if you're in your 20s you want to be in an office because that's how you're going to get promoted and I think that's consistent with the majority of the
01:42:50
people who really want to work from home have honest problems with commuting and family and so forth they're real issues
01:42:56
the problem with our joint view is it's not supported by the data the data indicates that productivity is actually
01:43:02
slightly higher in uh work uh when you allow work from home so you and I really
01:43:08
want that company of people sitting around the table and so forth but the evidence does not support our view
01:43:14
interesting yeah is that true it is absolutely true why is Facebook and all these companies rolling back their uh
01:43:20
and like Snapchat rolling back their remote working policies then not everyone is um and you most companies
01:43:26
are doing various forms of hybrids where it's two days or three days or so forth
01:43:32
um I'm sure that for the average listener here who works in public security or in a government they say well my God they're not in the office
01:43:39
every every every day but I'll tell you that at least for the the industries that have been studied there's evidence
01:43:46
that allowing that flexibility from work from home increases productivity I don't happen to like it but I want to
01:43:52
acknowledge the science is there what is the um the advice that you wish you'd gotten at my age that you didn't get the
01:43:59
most important thing is probably keep betting on yourself and bet again and roll the dice and roll the dice what
01:44:05
happens in as you get older is you realize that these opportunities were in front of you and you didn't jump for
01:44:11
them why you were in a bad mood or you know you didn't know who to call or so
01:44:16
forth life can be understood as a series of opportunities that are put before you and they're Tim Limited
01:44:23
I was fortunate that I got the call after a number of people had turned it down to work for and with
01:44:29
Larry and Sergey at Google changed my life right but that was luck and timing my
01:44:34
one friend on the board at the moment said I was very thankful to him and he said but you know you did one thing
01:44:40
right I said what he said you said yes so your philosophy in life should be
01:44:46
to say yes to that opportunity and yes it's painful and yes it's difficult and yes you have to deal with your family
01:44:52
and yes you have to travel to to some foreign place and so forth get on the airplane and get it done what's the hardest challenge you've
01:44:58
dealt with in your life well on the personal side you know I've had a set of you know personal
01:45:04
problems and tragedies um like everyone does I think on a business
01:45:11
context um there were moments at Google where we
01:45:19
had control over an industry that we didn't execute well the most obvious one is social
01:45:24
media uh at the time when Facebook was founded we had a system which we called Orkut um which was really really
01:45:31
interesting and somehow we we we did everything well but we missed that one right and I would have preferred and
01:45:37
I'll take responsibility for that we have a closing tradition on this podcast where the last guest leaves a question for the next guest not knowing who
01:45:42
they're going to be leaving it for and the question left for you is what is your non-negotiable something you do
01:45:48
that significantly improves everyday life well what I try to do is try to be
01:45:53
online and I also try to keep people honest every day you hear all
01:45:59
sorts of ideas and and so forth half of which are right half of which are wrong I try to make sure I know the truth as
01:46:06
best we can determine it Eric thank you so much thank you it's uh such an honor your books have shaped my thinking
01:46:12
in so many so many important ways and I think your new book Genesis is the single best book I've I've read on the
01:46:19
subject of AI because you take a very nuanced approach to these subject matters and I think sometimes it's
01:46:24
tempting to be binary in your way of thinking about this technology the the pros and the cons but your writing your
01:46:30
videos your work takes this really balanced but informed approach to it I have to say as an entrepreneur the
01:46:36
trillion dollar coach book is what I highly recommend everybody goes and reads because it's um it's just a really
01:46:41
great Manual of being a leader in the Modern Age and an entrepreneur I'm going to link all five of these books in the
01:46:46
in the comment section below the new book Genesis comes out in the US I believe on the 19th of November
01:46:53
um I don't have the UK date but I'll find it and I'll put it in but it's a critically important
01:47:00
book that nobody should avoid I've been searching for answers that are contained in this book for a very very long time
01:47:05
I've been having a lot of conversations on this podcast in search of some of these answers and I feel clearer um about myself my future but
01:47:12
also the future of society because I've read this book so thank you for writing it and thank you and let's thank Dr
01:47:17
Kissinger he finished the last chapter in his last week of life on his deathbed
01:47:22
that's how profound he thought that this book was And all I'll tell you is that
01:47:28
he wanted to set us up for a good next 50 years having lived for so long and
01:47:34
seen both good and evil he wanted to make sure we continue the good progress we're making as a
01:47:40
society is there anything he would want to say any answer he gave would take five
01:47:46
[Music] minutes a remarkable man thank you Eric
01:47:52
thank you [Music] I'm going to let you into a little bit
01:47:58
of a secret and you're probably going to think that I'm a little bit weird for saying this but our team are our team
01:48:03
because we absolutely obsess about the smallest things even with this podcast when we're recording this podcast we
01:48:09
measure the CO2 levels in the studio because if it gets above a thousand parts per million cognitive performance dips
01:48:15
this is the type of 1% Improvement we make on our show and that is why the show is the Way It Is by understanding
01:48:21
the power of compounding 1% you can absolutely change your outcomes in your life it isn't about drastic
01:48:28
Transformations or quick wins it's about the small consistent actions that have a
01:48:33
lasting change in your outcomes so two years ago we started the process of creating this beautiful diary and it's
01:48:39
truly beautiful inside there's lots of pictures lots of inspiration and motivation as well some interactive
01:48:45
elements and the purpose of this diary is to help you identify stay focused on
01:48:50
develop consistency with the 1% that will ultimately change your life we have a limited number of these 1% Diaries and
01:48:57
if you want to do this with me then join our waiting list I can't guarantee all of you that join the waiting list will be able to get one but if you join now
01:49:03
you have a higher chance the waiting list can be found atth diary.com I'll link it below but that isth diary.com
01:49:12
[Music]
01:49:22
ah

Podspun Insights

In this riveting episode, the conversation dives deep into the secrets of leadership and innovation as Eric Schmidt, former CEO of Google, shares his insights on navigating the rapidly evolving tech landscape. With a blend of humor and gravitas, Schmidt discusses the importance of risk-taking and the revolutionary impact of artificial intelligence on society. He reveals the "70/20/10 rule" from Google that generated billions in profits and emphasizes the necessity of critical thinking in a world increasingly influenced by AI.

Listeners are treated to a fascinating exploration of the principles that drive successful entrepreneurship, as Schmidt recounts his journey alongside tech giants Larry Page and Sergey Brin. The discussion touches on the challenges of maintaining company culture in a rapidly scaling organization and the delicate balance between innovation and stability.

As the episode unfolds, Schmidt's candid reflections on the future of AI provoke thought about its implications for humanity. He expresses both optimism and caution, urging listeners to consider how AI can enhance lives while also acknowledging the potential risks. The episode culminates in a call to action for individuals and businesses to embrace AI responsibly, ensuring that technology serves to uplift rather than undermine human values.

Badges

This episode stands out for the following:

  • 95
    Best overall
  • 94
    Most influential
  • 93
    Best concept / idea
  • 92
    Most quotable

Episode Highlights

  • Critical Thinking in the Age of AI
    In a world where misinformation can spread easily, critical thinking is essential.
    “The first and most important thing about critical thinking is to distinguish between being marketed to and being given the argument on your ”
    @ 07m 58s
    November 14, 2024
  • Optimism for the Future
    Despite challenges, there's hope for a prosperous future as society learns to adapt to AI.
    “I'm inherently an optimist because we will adjust society with biases and values.”
    @ 15m 10s
    November 14, 2024
  • The Founding of Google
    Larry Page and Sergey Brin met at Stanford and created the PageRank algorithm, which transformed information retrieval.
    “They wrote a paper which is still one of the most cited papers in the world.”
    @ 22m 30s
    November 14, 2024
  • The Importance of Company Culture
    Company cultures are shaped by founders, influencing how businesses operate and innovate.
    “Company cultures are almost always set by the founders.”
    @ 29m 02s
    November 14, 2024
  • Focus vs. Ambition at Google
    Google aimed to tackle multiple impactful projects, believing in their capacity to innovate across various domains.
    “We can do this if we can imagine and build something that's really transformative.”
    @ 43m 12s
    November 14, 2024
  • The Future of AI
    Predictions about AI's evolution and its impact on personal assistance.
    “Eventually, you and I will have our own AI assistant which is a polymath.”
    @ 45m 53s
    November 14, 2024
  • Hiring Principles in Startups
    Startups prioritize intelligence and quickness over experience and stability.
    “You want to take risks on people.”
    @ 56m 12s
    November 14, 2024
  • The Brilliance of Sergey and Larry
    Their raw IQ and unique perspectives drove Google's success. 'It was their Brilliance and their ability to see things that I didn't see.'
    “It was their Brilliance and their ability to see things that I didn't see.”
    @ 01h 07m 31s
    November 14, 2024
  • The Future of Warfare
    AI is changing the landscape of warfare, making it more complex and automated. 'In a drone World... you'll be doing harm to the other side while you're drinking your coffee.'
    “In a drone World... you'll be doing harm to the other side while you're drinking your coffee.”
    @ 01h 20m 11s
    November 14, 2024
  • The Future of Jobs in an AI World
    Despite fears of job loss, AI may create more opportunities than it eliminates.
    “There will be a lot more jobs, not fewer jobs.”
    @ 01h 28m 40s
    November 14, 2024
  • Human Connection in a Tech Era
    Even with AI advancements, the need for human interaction remains vital.
    “We care a great deal about each other as human beings.”
    @ 01h 33m 00s
    November 14, 2024
  • Seizing Opportunities
    Life is a series of opportunities; saying yes can change your trajectory.
    “Life can be understood as a series of opportunities.”
    @ 01h 44m 11s
    November 14, 2024

Episode Quotes

Key Moments

  • Optimism @ 15:10
  • PageRank Algorithm @ 22:24
  • Cultural Foundations @ 29:02
  • AI Acceleration @ 1:12:58
  • Future Warfare @ 1:20:11
  • Human Connection @ 1:33:00
  • Opportunities in Life @ 1:44:11
  • 1% Improvement @ 1:48:15
