Eric Schmidt on AI, the Battle with China, and the Future of America

September 24, 2025 / 28:58

This episode features Eric Schmidt, former Google executive chairman, discussing AI, competition with China, and the future of warfare technology.

Schmidt emphasizes the underhyped nature of the AI revolution and the importance of the West winning in technological advancements. He shares insights on the capabilities of AI agents and their potential to surpass human reasoning.

The conversation shifts to the competition between the U.S. and China in AI, with Schmidt highlighting China's focus on applying AI to various sectors despite hardware limitations. He warns that the U.S. must also compete in consumer applications and robotics.

Schmidt discusses his investment in Relativity Space, a company aiming to compete with SpaceX, and the challenges of the space market. He also touches on the evolution of warfare, particularly the role of drones and automation in modern conflicts.

Finally, Schmidt reflects on the societal issues facing the West, including declining birth rates, and advocates for immigration as a solution. He concludes by addressing the future of AGI and the collaborative potential between humans and AI.

TL;DR

Eric Schmidt discusses AI, U.S.-China tech competition, and future warfare technology.

Video

00:00:00
[Music]
00:00:02
I honestly believe that the AI
00:00:04
revolution is underhyped.
00:00:07
Now, why is this all important?
00:00:09
Eric Schmidt is here. He's the former
00:00:10
Google executive chairman and CEO.
00:00:12
These agents are going to be really
00:00:14
powerful, and they'll start to work
00:00:15
together. We're soon going to be able to
00:00:17
have computers running on their own,
00:00:19
deciding what they want to do. Now we
00:00:21
have the arrival of a new nonhuman
00:00:23
intelligence which is likely to have
00:00:26
better reasoning skills than humans can
00:00:29
have.
00:00:29
So if you were emperor of the world for
00:00:32
1 hour, the most important thing I do is
00:00:34
make sure that the west wins.
00:00:36
Ladies and gentlemen, please welcome
00:00:40
Eric Schmidt.
00:00:42
[Music]
00:00:44
Hi.
00:00:46
[Music]
00:00:47
Hi. Looking good. Good to see you.
00:00:51
Good to see you.
00:00:52
Good to see you.
00:00:53
You're looking, Eric. You're looking.
00:00:55
Very nice.
00:00:57
Oh my god. David, good to see you. It's
00:00:59
like
00:00:59
David Sacks is here as well.
00:01:00
It's like a It's like a reunion of all
00:01:02
of our former companies. David, why did
00:01:04
you quit after all?
00:01:06
My old
00:01:07
my old boss.
00:01:09
It was
00:01:10
What was it like working with young
00:01:12
Friedberg? Take us back.
00:01:14
Can I Can I tell a story? We have to
00:01:16
come down to Orange County and they're
00:01:17
like, "Hey, we're going to take the
00:01:19
plane and it was Eric's plane." We get
00:01:21
on the plane and then he goes up and
00:01:23
flies the plane. I'm in the back of the
00:01:24
plane by myself.
00:01:24
Was it King Air?
00:01:25
I'm like, "The CEO of Google's flying me
00:01:27
down to Orange County." It was
00:01:28
incredible. That was my first time
00:01:30
actually hanging out with Eric.
00:01:31
It was It was my Gulfstream.
00:01:33
That's right.
00:01:34
Um, he was way too smart.
00:01:37
Way too smart.
00:01:37
Way too smart.
00:01:38
Was he focused? Did he contribute? Did
00:01:40
he move the needle?
00:01:41
But he was very smart. Okay, that's kind
00:01:44
of our consensus off the pod as well.
00:01:47
Look, you guys know this guy well. He's
00:01:49
really that smart. So, he taught me more
00:01:52
stuff than most of any of the employees
00:01:54
at Google and then you left.
00:01:56
Well, tell us what you've been doing.
00:01:57
So, um
00:01:58
No, no, wait. Before that, I got to ask
00:02:00
you this question. There was a recently
00:02:02
deleted video from Stanford.
00:02:04
Oh, no.
00:02:04
You had a moment of clarity where you
00:02:07
said, "Hey, you know, like at Google,
00:02:09
people are like too much work life
00:02:11
balance. They need to commit. They need
00:02:13
to work harder. We had Sergey at the
00:02:15
last event. He's going back to work. So
00:02:17
Sergey got the message.
00:02:22
Predicting Sergey's behavior is
00:02:24
something I can fail at. I tried for 20
00:02:26
years. Um I am not in favor of um
00:02:31
essentially working at home. I And the
00:02:34
reason I mean many of you guys all work
00:02:36
at home to some degree, but your careers
00:02:38
are already established. But think about
00:02:40
a 20-something who has to learn how the world
00:02:43
works and you know they they come out of
00:02:45
Berkeley or Dartmouth and they're very
00:02:47
well educated. When I think about how
00:02:49
much I learned when I was at Sun just
00:02:51
listening to these elder people who were
00:02:54
5 or 10 years older than I was argue
00:02:56
with each other in person. I don't know how
00:03:00
do you recreate that in this new thing?
00:03:02
And and I'm in favor of work life
00:03:04
balance and that's why people work for
00:03:06
the government. Um, sorry. Um,
00:03:10
strays.
00:03:11
Sorry, sorry, sorry. Um, if you're going
00:03:14
to be in tech and you're going to win,
00:03:16
you're going to have to make some
00:03:17
tradeoffs. And you're remember, we're up
00:03:20
against the Chinese. The Chinese work
00:03:23
life balance consists of 996, which is
00:03:25
9:00 a.m. to 9:00 p.m. 6 days a week. By
00:03:28
the way, the Chinese have clarified that
00:03:30
this is illegal. However, they all do
00:03:33
it.
00:03:34
That's who you're competing against. I
00:03:35
brought I brought everybody back to
00:03:37
office. It's so much better.
00:03:38
So, it let's just pick up on that theme.
00:03:40
So,
00:03:41
you don't need to defend the government.
00:03:42
No, no. Believe me, I don't I don't see
00:03:44
the need to. I'm an unpaid part-time
00:03:46
adviser to the government. So, uh but uh
00:03:48
but we are in this high-tech competition
00:03:51
with China. They obviously care about
00:03:53
AI, too. They're trying to race ahead.
00:03:56
How do you uh I I understand that you
00:03:59
recently made a trip there. How do
00:04:00
you um handicap this this this
00:04:02
competition? Well, you and I just talked
00:04:04
about this as part of your as your
00:04:06
incredibly important work in the White
00:04:08
House. Um, I had thought that China and
00:04:11
the United States were competing at the
00:04:13
peer level in AI and that the good work
00:04:17
that you have done and your predecessors
00:04:19
did to restrict chips were slowing them
00:04:21
down. They're really doing something
00:04:23
more different than I thought. They're
00:04:25
not pursuing crazy AGI strategies partly
00:04:29
because the hardware limitations that
00:04:31
you've put in place, but partly because
00:04:33
the depth of their capital markets don't
00:04:35
exist. They can't raise based on a wing
00:04:38
and a prayer $100 million or maybe an
00:04:41
equivalent to build the data centers.
00:04:42
They just can't do it. And so the result
00:04:45
is they're very focused on taking AI and
00:04:48
applying it to everything. And so the
00:04:51
concern I have is that while we're
00:04:53
pursuing AGI, which is incredibly
00:04:54
interesting and we should talk about and
00:04:56
all of us will be affected by this, we
00:04:58
better also be competing with the
00:05:00
Chinese in day-to-day stuff. Consumer
00:05:03
apps, this is something you understand
00:05:04
very well, Chamath, uh, consumer apps,
00:05:06
uh, robots and so forth and so on. I saw
00:05:08
all the the Shanghai robotics companies
00:05:11
and these guys are attempting to do in
00:05:13
robots what they've successfully done
00:05:15
with electric vehicles, right? and
00:05:18
their work ethic is
00:05:20
incredible. They're well-funded. It's not
00:05:22
the crazy valuations that we have in
00:05:25
America. They can't raise the capital,
00:05:27
but they can win across that. The other
00:05:29
thing the Chinese are doing, and I want
00:05:31
to emphasize this is a major
00:05:32
geopolitical issue, is that my own
00:05:35
background is open source. In the
00:05:36
audience, you all know open source means
00:05:38
open code. Open weights
00:05:41
means open training data.
00:05:43
China is competing with open weights and
00:05:45
open training data. And the US is
00:05:47
largely and majority focused on closed
00:05:49
weights, closed data. That means that
00:05:52
the majority of the world, think of it
00:05:54
as the belt and road initiative, are
00:05:56
going to use Chinese models and not
00:05:57
American models. Now, I happen to think
00:06:00
the West and democracies are correct.
00:06:02
And I'd much rather have the
00:06:04
proliferation of large language models
00:06:06
and that learning be done based on
00:06:07
Western values. Eric, we had a a major
00:06:09
open- source initiative um with Meta,
00:06:13
you know, incredible balance sheet,
00:06:15
tremendous technical firepower, but they
00:06:17
seem to have misexecuted and now are
00:06:19
taking a step back and reformulating
00:06:21
something to your point that looks a
00:06:23
little bit more closed source. It's not
00:06:25
clear. You know, Alex Wang's a good
00:06:27
friend. Uh he's come in, he's taken
00:06:29
over. He's obviously incredibly uh
00:06:30
incredibly capable.
00:06:33
Um, I would not say that they're
00:06:35
going fully closed. And I think also
00:06:37
they got screwed up because the DeepSeek
00:06:40
people uh R1 did such a good job, right?
00:06:44
If you look at the reasoning model in
00:06:46
DeepSeek and in particular their ability
00:06:48
to do reinforcement learning forward and
00:06:50
back, forward and back and forward and
00:06:51
back. This is a major achievement and it
00:06:54
appears that they're doing it with less
00:06:56
numeric precision than
00:06:59
the American models. As a bit of
00:07:01
technical things, uh there's something
00:07:03
called FP64, FP32, FP16. The American
00:07:07
models are typically using 16 bit
00:07:09
precision for their training. The
00:07:11
Chinese are pushing eight and now even
00:07:13
four.
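The precision formats named here trade accuracy for memory and speed. A minimal sketch of the effect, using NumPy (which exposes FP64/FP32/FP16; the FP8 and FP4 formats mentioned for training require accelerator-specific libraries and are not shown):

```python
import numpy as np

x = 0.1234567890123456  # an arbitrary value to round-trip

# Casting to narrower floats discards mantissa bits, so the
# stored value drifts from the original as precision drops.
for dtype in (np.float64, np.float32, np.float16):
    info = np.finfo(dtype)
    print(f"{dtype.__name__:>8}: {info.bits:2d} bits, "
          f"machine eps {info.eps:.1e}, "
          f"x stored as {float(dtype(x)):.10f}")
```

Training in 8- or 4-bit formats pushes this trade further still: far less memory and bandwidth per weight, at the cost of coarser rounding that the training recipe has to absorb.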
00:07:14
Is there is there something that um the
00:07:17
American you know bigger companies need
00:07:19
to be doing in open source so that we
00:07:21
can actually combat this?
00:07:22
Well, a number of the large companies
00:07:23
have said that they want to be leaders
00:07:25
in open source as well. Um Sam Altman
00:07:27
indicated that the smallest version of
00:07:29
the o3 model would be released I believe
00:07:32
open weights and they have done so and
00:07:35
he told me anyway that this model is
00:07:38
much smaller than 10 to the 26. It's much
00:07:41
easier to train and it will fit or or
00:07:43
can fit on your phone. So one path is to
00:07:47
say that we'll have these supercomputers
00:07:50
doing AGI which will always be
00:07:52
incredibly expensive and so forth. But
00:07:54
we also have to watch to make sure that
00:07:55
the proliferation of these models for
00:07:57
handheld devices is under American
00:08:00
control whether it's OpenAI or Meta or
00:08:03
Gemini or what have you. Recently you uh
00:08:06
took over uh relativity space and I
00:08:09
think for for the people that don't know
00:08:10
this is a business that effectively
00:08:13
whose ambition is to compete with
00:08:14
SpaceX.
00:08:15
I think you were the first investor or
00:08:17
the earliest investor in it.
00:08:19
I was
00:08:19
I'm sorry.
00:08:20
It's okay.
00:08:21
You lost some money in the first what
00:08:23
happened? Did you get crammed down?
00:08:24
No, no, no. I mean all of us did.
00:08:26
All of I mean look
00:08:27
I mean I've been very happily uh you
00:08:30
know an investor in SpaceX and Swarm and
00:08:32
Starlink. Relativity was
00:08:34
and by the way swarm is a big deal
00:08:35
created
00:08:36
swarm was a
00:08:36
so thank you
00:08:37
yeah swarm has been a really great
00:08:39
success for for them and I think for the
00:08:41
world um but what I was going to ask you
00:08:43
is walk us through the evolution of the
00:08:46
space market why you decided of all the
00:08:48
companies you have the capital base to
00:08:50
kind of put your money anywhere why did
00:08:53
you pick that why did you pick that
00:08:54
business why now
00:08:56
rockets are really cool and they're
00:08:58
really hard I had I'm as you know I'm a
00:09:01
pilot and I know lots about jets And I
00:09:03
had assumed that rockets were as mature
00:09:05
as jet engines. They're not. It is an
00:09:08
art and a science. These things are very
00:09:11
hard to do. The amounts of power, I
00:09:13
mean, in our case, the rocket is 4
00:09:15
million pounds of thrust. Um, you have
00:09:18
to hold the thing down to test it. And
00:09:20
you can't even hold it with metal
00:09:21
things. You have to have other things to
00:09:23
hold it down as well. There's so much
00:09:24
force otherwise it will take off. Um,
00:09:26
another interesting thing about rockets
00:09:28
is that a rough number is that 2% of the
00:09:30
weight of the rocket is the payload, 18%
00:09:33
is roughly the rocket, and 80% is the
00:09:36
propellant. And my reaction as a new
00:09:38
person is, you're telling me you can't
00:09:40
do any better. And the physicists say
00:09:42
after 60 years of physics, that's the
00:09:44
best we can do to get out of the
00:09:46
gravitation of the Earth. And so, I
00:09:49
think rockets are interesting and
00:09:50
they're challenging. There's always an
00:09:52
opportunity for competition. um in
00:09:54
relativity space's area, it's
00:09:56
essentially a LEO competitor.
00:09:59
So low Earth orbit satellites, that sort
00:10:01
of thing. The order book is full. We
00:10:03
just have to launch the rocket and and
00:10:05
this entry into space happened and I'm
00:10:08
not sure how well known this is. So you
00:10:10
can go as far as you want to go into
00:10:11
this, but you've done a lot as well in
00:10:13
next generation warfare as well. Do you
00:10:15
want us just to talk about that and how
00:10:18
you ended up there and what role that
00:10:20
plays and just give us a landscape?
00:10:22
Maybe David asked about the China
00:10:23
question, but it's they're all kind of
00:10:24
almost interrelated.
00:10:25
Well, at first place, I'm a software
00:10:27
person, not a hardware person. I I
00:10:28
explain to people that hardware people
00:10:30
go to different schools than software
00:10:31
people. Um, and they think slightly
00:10:34
differently. So, I'm always at a
00:10:35
limitation in these new industries. Um,
00:10:37
I had worked for the Secretary of
00:10:39
Defense and have a top secret clearance
00:10:41
and all that. I was given a medal, etc.,
00:10:43
uh, for trying to help the Pentagon
00:10:45
reorganize itself. And when the Ukraine
00:10:48
war started, I was watching and I
00:10:50
thought, well, here's an opportunity to
00:10:52
see a country that has no navy and no
00:10:55
uh and no air force how they do this
00:10:58
with automation. And indeed, it has been
00:11:00
a spectacular success as a matter of
00:11:03
innovation. Um and outnumbered 3 to one
00:11:06
with huge differences in um kinetic
00:11:10
strength, weapons, mobilization, and so
00:11:13
forth and so on. Ukraine has held on
00:11:15
really quite well. Um and what's
00:11:17
happening now is you're seeing
00:11:19
essentially the birth of a completely
00:11:20
new military national security
00:11:22
structure. Um one way to think about it
00:11:25
is that we all u so first place and I've
00:11:28
I've seen it live and I will tell you
00:11:29
that real war is much worse than the
00:11:32
worst movies you have ever seen about
00:11:34
war. And that's all I'll say. It's
00:11:36
really horrific and it's to be avoided
00:11:37
at all cost. Um and then right for
00:11:41
obvious reasons and the the
00:11:45
and I love all these people say well
00:11:46
we'll well you know the warmongering
00:11:48
talk be careful what you wish for
00:11:51
because the other side gets a vote when
00:11:53
I started working and and trying to
00:11:55
understand what Ukraine was doing uh
00:11:57
Russia was pushed back and they've come
00:12:00
back with a very very strong second and
00:12:03
third round so the enemy gets a vote in
00:12:05
this situation uh but to but to go on um
00:12:08
The rough way in which war will evolve
00:12:12
is first things will have to be very
00:12:14
very mobile and very much not in fixed
00:12:17
places. This takes out most of the
00:12:20
military infrastructure that exists in
00:12:22
the world. Um things like tanks um of
00:12:25
which we're now building a whole bunch
00:12:27
more even stronger tanks here in America
00:12:29
don't make any sense in a world where a
00:12:31
2 kg payload from a a well-armed drone
00:12:35
can destroy the tank. It's called the
00:12:37
kill ratio. And that drone costs retail
00:12:40
$5,000, $4,000. The tank, the American
00:12:42
tank costs $30 million. You can see the
00:12:46
you can send an awful lot of those
00:12:47
drones to destroy those tanks. Um, the
00:12:50
likely evolution goes something like
00:12:52
this. So, first, people learn that
00:12:54
drones are like rifles and like
00:12:56
artillery. So, it's more efficient to
00:12:59
use drones now than to use mortars,
00:13:01
grenades, artillery. That's clear. If
00:13:04
you just look at the economics,
00:13:05
economics in terms of
00:13:07
cost-effectiveness as it's called. Um the
00:13:10
next thing that happens is that both
00:13:12
sides develop drone capabilities which
00:13:15
what you're seeing now and each then
00:13:17
becomes a war of drone against drone. So
00:13:20
you have drone against anti- drone. And
00:13:22
so then the shift moves to how do you
00:13:24
detect the enemy drone and how do you
00:13:26
destroy it before it destroys you. So
00:13:28
the doctrine ultimately is the drones
00:13:30
are forward and the people are behind.
00:13:34
And I've seen operations in for example
00:13:36
sitting in Kyiv where the Ukrainians are
00:13:38
commanding things over Starlink I might
00:13:40
add um in the distance in the distant
00:13:43
war and they're very very effective. So
00:13:45
we've solved the latency problems, we've
00:13:47
solved the timing problems and so forth
00:13:49
in that area. The ultimate state is very
00:13:52
interesting and I don't think anyone has
00:13:54
foreseen this. If you go back to our
00:13:56
conversation about RL and planning,
00:13:59
which is what you're seeing with AI,
00:14:01
let's say that that we're on one side
00:14:03
and we have a million drones and there's
00:14:05
another side over here that has another
00:14:06
million drones. Each side will use
00:14:09
reinforcement learning AI strategies to
00:14:11
do battle plans, but neither side can
00:14:14
figure out what the other side's battle
00:14:16
plan is. And therefore, the deterrence
00:14:19
against attacking each other will be
00:14:20
very high. Today, the way military
00:14:23
planners operate is that they count
00:14:24
weapons. They say, "Well, you have this
00:14:26
many and I have this many and you can do
00:14:28
this kind of a maneuver and so forth."
00:14:30
But in an AI world where you're doing
00:14:31
reinforcement learning, you can't count
00:14:34
what the other side is planning. You
00:14:35
can't see it. You don't know it. And I
00:14:37
believe that that will deter what I view
00:14:39
as one of the most horrendous things
00:14:41
ever done by humans, which is war.
00:14:43
Because unless there's a perfect balance
00:14:45
between either side, there will be some
00:14:47
mutual destruction of the drone supply
00:14:49
like there would be with any artillery
00:14:50
stock in traditional warfare and
00:14:53
whoever's left ends up winning. Like
00:14:55
they're just
00:14:56
Well, it's very important to understand
00:14:57
that there's no winners in war. Um, by
00:15:00
the time you have a drone battle of the
00:15:02
scale I'm describing, the entire
00:15:04
infrastructure of your side will be
00:15:06
destroyed. The entire infrastructure of
00:15:07
the other side will be destroyed. These
00:15:09
are lose-lose scenarios. Isn't there
00:15:11
like an an equilibrium though that that
00:15:13
can also create where there because of
00:15:16
that mutually assured destruction
00:15:18
there's a det or is that
00:15:19
well I'm arguing that it's it's not a
00:15:21
deterrence
00:15:23
right
00:15:23
that as deterrence can be understood as
00:15:25
I want to hit you which I don't but I
00:15:28
want to hit you so much but that if I do
00:15:30
that the penalty is the penalty is
00:15:32
greater than the value of me hitting you
00:15:34
right
00:15:35
and that's how det that's how
00:15:36
but that seems like an that seems like a
00:15:38
great um advantage and upside of
00:15:42
this move to sort of drones and
00:15:44
automation that we don't have today.
00:15:46
Well, I there are many advantages to
00:15:48
moving to drones and automation. One,
00:15:49
they're much much cheaper, right?
00:15:51
They're much much cheaper.
00:15:53
Yeah.
00:15:53
And two, and two, you can stockpile
00:15:56
algorithms. You can essentially learn
00:15:58
and learn and learn. And remember, you
00:16:00
can also build training data, right,
00:16:02
that's synthetic, so you can be even
00:16:04
better than the others. The final
00:16:05
question I've been asked by our military
00:16:08
is what's the role of the of a
00:16:11
traditional land army? And I wish I
00:16:14
could say that all of these human
00:16:17
behaviors can occur without humans being
00:16:19
at at risk. I don't think so. I think
00:16:22
that the way um robot war essentially
00:16:26
drone war will occur is there will be
00:16:28
these destructive waves, but eventually
00:16:30
humans are going to have to cross a
00:16:32
line. they're going to have to
00:16:33
after we've depleted them. So, you're
00:16:35
investing in this drone technology and
00:16:37
then do you think
00:16:39
Optimus and humanoid robots are the
00:16:41
next, you know, volley in this um new
00:16:45
warfare. It's going to be a long time
00:16:48
before humanoid robots, which is what
00:16:50
we see in the movies all day, right? Be
00:16:52
a very long time before we see that. Uh
00:16:55
what you're going to see is very very
00:16:58
fast mobility solutions, right? Aerial,
00:17:00
air-based solutions and also hypersonics
00:17:02
hypersonics also things underwater
00:17:05
there's a lot of that going on it's a
00:17:06
different domain um if you look at the
00:17:09
um the Magura and some other boats that
00:17:12
the Ukrainians used they have
00:17:14
essentially used USVs to destroy the uh
00:17:17
Russian fleet in the Black Sea this was
00:17:20
crucial for them because they needed to
00:17:22
be able to export the grain from Odessa
00:17:25
around and it's like 6% or 10% of their
00:17:28
economy It's a very big deal and they
00:17:30
did that with drones.
00:17:31
Eric, it seems like there's this
00:17:33
overarching worldview that you have,
00:17:36
meaning you have this view on AI.
00:17:40
There's all the stuff you're doing now
00:17:41
in drones, in warfare, in rocketry. It
00:17:44
all converges quite honestly because in
00:17:46
the in the next five or 10 years, these
00:17:48
things will all come to pass. What is
00:17:50
the like how do you view the world? Like
00:17:53
what is the role of America? What is
00:17:55
your role as a as a capitalist, as a
00:17:57
technologist, as like a statesman?
00:18:00
I want America to win,
00:18:04
right?
00:18:07
Uh I am here because the American dream,
00:18:11
the people who invested in me, in my
00:18:13
case, Berkeley and so forth, people took
00:18:15
a chance on me. I want the next
00:18:17
generation to have that. I also want you
00:18:20
all to remember I was just in in uh as
00:18:23
part of the World War II surrender
00:18:24
ceremony in Honolulu and they talked
00:18:27
about fighting tyranny, right? We forget
00:18:30
that our ancestors or great-grandparents
00:18:32
or whatever fought the Great War to keep
00:18:35
liberalism and democracy alive. I want
00:18:38
us to do that. How do we do that as
00:18:40
Americans? We use our strengths. What
00:18:43
are our strengths? We're chaotic,
00:18:46
confusing, loud, you know, but we're
00:18:48
clever. Uh we allocate capital smartly.
00:18:51
We have very deep financial markets. We
00:18:54
have this enormous industrial base of
00:18:56
universities and entrepreneurs which are
00:18:58
represented here. We should celebrate
00:19:00
this. We should stoke it. We should make
00:19:01
it go faster and faster. I spent lots of
00:19:04
time in Europe because of the Ukraine
00:19:06
stuff. They are so envious of us. When
00:19:08
you're in Asia, they are envious of us.
00:19:11
Don't screw it up, guys. That's what I
00:19:13
want to work on.
00:19:14
Can Can I Can I just ask you outside of
00:19:19
this external conflict? We had um a
00:19:22
conversation with Alex Karp today and we
00:19:25
actually had Tucker Carlson here
00:19:26
yesterday and some of the dialogue was
00:19:29
around the I I don't know if the right
00:19:31
term is the erosion of the west that
00:19:33
there may be social issues that are
00:19:34
brewing in the west that may be hurting
00:19:37
us from the inside. How much do you
00:19:38
observe or spend time on these issues?
00:19:41
And the metric that often is cited now
00:19:43
is declining birth rates in the west.
00:19:45
And that our population, and we're gonna
00:19:47
talk with Elon in a few minutes about
00:19:48
this. Um, oh, sorry.
00:19:51
Oh, we just ruined... my bad. My surprise.
00:19:53
Oops.
00:19:53
Sorry. Sorry.
00:19:54
There's your surprise guest.
00:19:55
Sorry. Sorry. Sorry. Sorry.
00:19:57
Um, slip. Uh, but um,
00:20:02
Elon is a good friend and he's
00:20:04
addressing this issue of population
00:20:06
directly himself.
00:20:11
problem solve for it. Good for you.
00:20:13
Is it is it is it a reflection of
00:20:15
something going on? There's a rise of
00:20:16
Mamdani getting elected in New York. Uh
00:20:18
some of the historic values of the West
00:20:22
seem to be, you know, kind of under a
00:20:25
state of transformation. Right now,
00:20:27
one metric of the success of a society
00:20:29
is its ability to reproduce. And so, I
00:20:32
think this is a legitimate concern of
00:20:33
the West. It's much worse in Asia. The
00:20:36
um the Chinese number is about 1.0 for
00:20:39
two parents. In Korea, it's now down to
00:20:42
0.78 for two. So, it's really important to
00:20:46
recognize that we as humans are
00:20:48
collectively choosing to depopulate. And
00:20:51
the numbers are staggering, right? And
00:20:54
imagine a situation where instead of
00:20:56
having growth, you have shrinkage. And
00:20:58
furthermore, they're getting older. And
00:21:00
so, as a business, all of a sudden, your
00:21:03
revenue is declining. And there's
00:21:05
nothing you can do because you can't
00:21:06
innovate with fewer and fewer customers.
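The fertility figures cited above compound quickly. A minimal sketch of the arithmetic, treating total fertility rate divided by two as a per-generation multiplier and ignoring migration and mortality timing (a deliberate simplification):

```python
# Two parents produce TFR children on average, so each generation
# the population scales by roughly TFR / 2.
def generations(tfr: float, n: int) -> float:
    """Fraction of the original population after n generations."""
    return (tfr / 2.0) ** n

for label, tfr in [("replacement ~2.1", 2.1),
                   ("China ~1.0", 1.0),
                   ("Korea ~0.78", 0.78)]:
    print(f"{label:>16}: after 3 generations -> {generations(tfr, 3):.1%}")
```

At a TFR of 1.0 each generation halves; at 0.78 each generation shrinks to 39% of the one before, so three generations leave only a few percent of the starting population under this simple model.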
00:21:08
So if you just put it in a business
00:21:10
context, ignoring the moral issues which
00:21:12
are all very real, it's just bad, right?
00:21:14
So we have to solve that problem. I
00:21:16
happen to be in favor broadly of
00:21:17
immigration because I think immigration
00:21:19
helps us solve that problem. But as a
00:21:21
global mechanism, we have to address
00:21:23
that. Um, in any case, from my
00:21:26
perspective, you're going to have these
00:21:28
issues, but America is organized around
00:21:32
the concept of American exceptionalism.
00:21:35
And as long as we understand that the
00:21:37
way we make progress is we invest in the
00:21:40
right people, in the right businesses,
00:21:42
we have a a strong capital market, we
00:21:45
invest in the infrastructure that they
00:21:46
need, um, we'll be fine. That is my
00:21:49
actual opinion. Can can we go back to um
00:21:52
AI for a second?
00:21:54
So um Eric, I think you can help us get
00:21:56
to a let's call it a a bipartisan
00:21:59
understanding of these issues. I think
00:22:00
you you you think really clearly about
00:22:02
this. Um you know, in in the wake of
00:22:06
ChatGPT launching at the end of 2022, I
00:22:10
think the discourse was really dominated
00:22:12
in 2023 and 24 by this idea of AGI and
00:22:15
that AGI was imminent. And I think it
00:22:18
created almost like a panicky atmosphere
00:22:20
in Washington among policy makers and
00:22:23
you saw things like we got to restrict
00:22:25
open source because you know then China
00:22:27
will get it and um and this is before
00:22:29
DeepSeek launched and then we saw that
00:22:31
actually they're ahead of us on open
00:22:32
source but it feels like there's been um
00:22:35
a pullback a little bit from the AGI
00:22:37
narrative which I think I think it's
00:22:39
actually a good thing. I think it's more
00:22:40
conducive to calm, rational policymaking.
00:22:43
What's your perception of AGI right now?
00:22:45
Where where are we on that whole train?
00:22:48
So So um so first place, the speech that
00:22:51
the president delivered about a month
00:22:53
ago about AI strategy, which I think you
00:22:56
probably wouldn't say it, but you kind
00:22:57
of wrote it for him, was exactly right.
00:23:00
Right. So thank you.
00:23:02
David collaborated with an amazing
00:23:04
leader who we all respect and admire so
00:23:07
much, Eric.
00:23:10
Yes. Uh, so nevertheless,
00:23:13
saying I wrote it was was way too
00:23:15
strong. I mean, actually, but anyway,
00:23:17
if you didn't if you didn't write it,
00:23:18
then it must have been your twin. But in
00:23:20
any case, um, the you you got you got
00:23:23
the emphasis right, which was that
00:23:25
investment in research, investment in
00:23:28
the kind of stuff that we do is really,
00:23:30
really important.
00:23:32
I don't agree with you on this on this
00:23:34
AGI thing because there's this group
00:23:36
which I call the San Francisco um
00:23:39
narrative because they all live in San
00:23:41
Francisco and their narrative goes
00:23:43
something like this. Um today we're
00:23:46
doing agents uh the agentic revolution
00:23:48
will change businesses which I agree
00:23:49
with. Um that what happens is the
00:23:52
systems will become recursively
00:23:54
self-intelligent
00:23:56
with recursive self-improvement as it's
00:23:59
called. If you have a scale-free problem
00:24:01
and a scale-free problem for example is
00:24:03
programming or math where you can just
00:24:05
keep doing it you get these enormous
00:24:08
fast gains if you buy enough hardware do
00:24:11
enough software so forth and so on that
00:24:13
is still underway.
00:24:15
The collective of that says that in the
00:24:18
next three-ish years they believe that
00:24:21
we will get forms of super intelligence
00:24:24
and the way they define it is basically
00:24:27
a savant: a chemist savant, a physics savant, a
00:24:30
mathematician savant. I don't agree with
00:24:32
the three years but I do agree that
00:24:34
it'll be maybe six or seven years
00:24:36
but if it's a savant in you know a
00:24:39
particular area is that general
00:24:41
intelligence
00:24:42
it's not general intelligence yet
00:24:44
general intelligence is when it
00:24:46
can set its own objective function.
00:24:48
Right?
00:24:49
And there's no evidence of that.
00:24:51
There's no evidence right now of the
00:24:52
ability to set your own objective
00:24:54
function. Um, the thinking, and
00:24:56
I'm writing a paper on this so I've been
00:24:58
studying it, is that the
00:25:01
technical problem is non-stationarity of
00:25:04
mathematical proofs. And what you're
00:25:07
doing is you're trying to solve against
00:25:09
an objective function, but the objective
00:25:10
function keeps changing, which is how
00:25:12
humans operate. Your goal changes every
00:25:14
day. Whereas computers have trouble with
00:25:16
that. As a math problem, we don't have
00:25:18
an algorithm yet for LLMs that can do
00:25:22
that. People are working on it. Um and
00:25:24
the test will be: basically,
00:25:28
using the information available in
00:25:30
1902, can you derive the same thing that
00:25:33
Einstein did with special relativity
00:25:36
followed by general relativity? We
00:25:38
cannot do that today. Um and most people
00:25:42
believe that the way this will be solved
00:25:44
is through analogy. So the theory of
00:25:47
great geniuses is that they understand
00:25:50
one area extremely well, and they're
00:25:52
so brilliant the lady or man can then
00:25:55
take their ideas and apply them to a
00:25:57
completely different domain. If we can
00:26:00
solve that problem then I think it's
00:26:02
over. Then we get to AGI and then it's a
00:26:06
whole different world.
00:26:08
I think one of the reasons why it's hard
00:26:10
to replace a human and you know JK and I
00:26:14
debate this is that humans are end to
00:26:16
end. You know we can do the whole job.
00:26:18
You have sort of a complete
00:26:19
understanding. You can pivot very
00:26:21
easily. AI, at least as we know it today,
00:26:24
is not end to end. It has to be
00:26:25
prompted. You get an answer. That answer
00:26:27
has to be validated. Then you have to
00:26:29
ask a new question because it never
00:26:30
gives you exactly what you want. You
00:26:32
have to apply more context. You have to
00:26:34
go through an iterative loop. Finally,
00:26:35
you get to an answer that has business
00:26:37
value. The way biology puts it is that
00:26:40
AI is not end to end. It's middle to
00:26:42
middle. Humans are end to end. And so,
00:26:43
as a result of that, instead of AI
00:26:47
replacing all of us, AI will be very
00:26:49
synergistic with humans because we can
00:26:52
define the objective function. We do the
00:26:54
prompting and we work with it to iterate
00:26:56
and it does a lot of the work in the
00:26:58
middle. Um, that seems to me like a very
00:27:01
optimistic, less doomerist take on it.
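The "middle to middle" loop described here (human sets the objective, prompts, validates, adds context, and re-prompts until the answer has business value) can be sketched as a short Python loop. Everything below is a hypothetical stand-in for illustration: `ask_model`, `meets_goal`, and the stubbed responses are not a real API.

```python
def ask_model(prompt: str) -> str:
    """Hypothetical model call, stubbed for illustration:
    it gives a vague answer until the prompt carries enough context."""
    if "clarification" in prompt:
        return f"FINAL: {prompt}"
    return "vague draft"

def meets_goal(answer: str, goal: str) -> bool:
    """Human-side validation step: does the answer satisfy the objective?"""
    return answer.startswith("FINAL")

def iterate_to_answer(goal: str, max_rounds: int = 5) -> str:
    """Human defines the objective (one end) and validates (the other end);
    the model only does the work in the middle."""
    context = ""
    answer = ""
    for _ in range(max_rounds):
        prompt = f"{context} {goal}".strip()  # human crafts the prompt
        answer = ask_model(prompt)            # model does the middle
        if meets_goal(answer, goal):          # human validates the result
            return answer
        context += " clarification"           # human adds more context
    return answer
```

In this toy version the loop converges on the second round, once the human has supplied extra context; the point is only that the objective function and the validation live on the human side of the loop.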
00:27:04
What you just said is exactly what's
00:27:06
going to happen for the next few years
00:27:08
that each of us will have assistants
00:27:11
which, on our command and our prompting,
00:27:13
will be incredibly helpful to whatever
00:27:16
problem we have, you know, personal. Uh, you
00:27:18
have people who are using these things
00:27:19
for relationship advice, for, you know,
00:27:21
talking to their kids. I mean, it's all
00:27:22
crazy stuff. Um, but the fact of the
00:27:25
matter is, that's it. To me, the real
00:27:27
question is when does it cross over to
00:27:29
having its own volition, its own ability
00:27:31
to seek information and solve new
00:27:33
problems. That's a different animal.
00:27:35
But have we seen any evidence of
00:27:36
recursive self-improvement yet?
00:27:38
Um, not yet. I've funded a
00:27:41
number of startups which claim to be
00:27:43
close to it, but of course these are
00:27:45
startups and you never know, which tells
00:27:48
me it's 5 to 10 years.
00:27:50
What do you think Google's doing on this
00:27:51
front?
00:27:52
Um, well, I'm not at Google anymore. Um,
00:27:55
every release of Gemini is top of the
00:27:58
leaderboard. So 2.5 just overcame
00:28:02
everybody and I'm sure there's another
00:28:03
one coming. Um Demis is working really
00:28:06
hard on this question about um
00:28:08
scientific discovery. So that
00:28:12
is a path to getting to AGI.
00:28:15
Eric, um we appreciate the work you're
00:28:17
doing. Uh we appreciate you being here
00:28:19
with us. We appreciate what you've done,
00:28:21
the impact you've had on Silicon Valley,
00:28:23
uh, and on society. Yeah. No, but it's
00:28:26
really been
00:28:26
I am so happy to be part of this. You
00:28:29
created this incredible community and
00:28:31
there's all of these smart people that
00:28:33
spend all their time listening to you.
00:28:37
Very concerning.
00:28:40
Wow. Eric
00:28:45
[Music]
00:28:48
very
00:28:49
Thanks, Eric. Appreciate you. Cheers.
00:28:53
All right.


Episode Highlights

  • AI Revolution Insights
    Eric Schmidt discusses the underhyped potential of AI and its implications for the future.
    “The AI revolution is underhyped.”
    @ 00m 04s
    September 24, 2025
  • The Harsh Reality of War
    Schmidt shares his perspective on the brutal nature of warfare and its consequences.
    “Real war is much worse than the worst movies about war.”
    @ 11m 29s
    September 24, 2025
  • America's Role in Global Competition
    Schmidt emphasizes the need for America to win in the tech race against China.
    “I want America to win.”
    @ 18m 04s
    September 24, 2025
  • Population Decline Concerns
    The discussion highlights the alarming trend of declining birth rates in the West.
    “We're collectively choosing to depopulate.”
    @ 20m 48s
    September 24, 2025
  • AI's Role in Business
    AI is seen as a synergistic tool rather than a replacement for humans in business.
    “AI is not end to end. It's middle to middle.”
    @ 26m 40s
    September 24, 2025

Key Moments

  • AI Revolution00:04
  • War Realities11:29
  • No Winners in War15:00
  • America's Future18:04
  • Population Issues20:48
  • AI Discussion26:40
  • Community Appreciation28:26
