
Jensen Huang: Nvidia's Future, Physical AI, Rise of the Agent, Inference Explosion, AI PR Crisis

March 19, 2026 / 01:06:41

This episode features a discussion with Jensen Huang, CEO of Nvidia, covering topics such as AI infrastructure, Groq, and the future of robotics. The conversation highlights the evolution of AI technology, the importance of disaggregated computing, and Nvidia's role in shaping the AI landscape.

Jensen Huang discusses the launch of Groq and its implications for AI processing, emphasizing the need for disaggregated inference in modern computing. He explains how Nvidia's technology is evolving from a GPU-centric model to a comprehensive AI factory approach.

The episode also touches on the future of robotics, with Huang predicting significant advancements in the field over the next few years. He believes that AI will revolutionize industries, including healthcare and agriculture, and that every car will eventually be autonomous.

Huang addresses the importance of education in AI, advising young people to become experts in using AI technologies. He discusses the potential for AI to enhance productivity and create new job opportunities while acknowledging the challenges of job displacement.

The episode concludes with a focus on the global AI race, with Huang emphasizing the need for the U.S. to lead in AI technology and the importance of collaboration across industries.

TL;DR

Jensen Huang discusses AI advancements, Groq, robotics, and the future of work in this insightful episode.


00:00:00
special episode this week. We've
00:00:02
preempted the weekly show and there's
00:00:05
only three people we preempt the show
00:00:06
for. President Trump, Jesus, and Jensen.
00:00:11
And uh I'll let you pick which order we
00:00:13
do that. Uh but what an amazing run
00:00:16
you've had and a great event. Uh
00:00:19
>> every industry is here. Every tech
00:00:21
company is here. Every AI company is
00:00:23
here. Incredible. Incredible. I'm going
00:00:26
all in.
00:00:27
>> If you were building a global financial
00:00:29
system from first principles today, you
00:00:31
wouldn't build it on 50-year-old legacy
00:00:34
rails, you'd build Airwallex, one AI
00:00:37
native platform for global accounts,
00:00:39
cards, and payments. It's designed to
00:00:41
make the entire world feel like a local
00:00:44
market. Others are bolting AI onto
00:00:46
broken infrastructure, but Airwallex was
00:00:48
built for the intelligent era from day
00:00:50
one. Stop paying the legacy tax and
00:00:52
start building the future at
00:00:54
airwallex.com/allin.
00:00:56
Airwallex. Build the future.
00:00:59
>> And um one of the great announcements of
00:01:02
the past year has been Groq. When you
00:01:04
made the purchase of Groq, did you
00:01:06
realize how insufferable Chamath
00:01:08
would become?
00:01:10
>> I had an inkling that
00:01:13
>> we're his friends. We have to deal with
00:01:15
him every week.
00:01:16
>> I know it.
00:01:16
>> You had to deal with him for the six
00:01:18
week close.
00:01:19
>> I know. It's like two weeks. Two weeks.
00:01:21
>> It's all coming back to me now. It's
00:01:22
it's making me rather uncomfortable. The
00:01:25
thing is uh many of our strategies
00:01:29
are presented in broad daylight
00:01:32
at GTC years in advance of when we do
00:01:36
it. Two and a half years ago, I
00:01:38
introduced the operating system of the
00:01:40
AI factory and it's called Dynamo.
00:01:43
Dynamo as you know is a piece of
00:01:46
instrument a machine that was created by
00:01:48
Siemens to turn essentially water into
00:01:52
electricity and dynamo uh powered the
00:01:56
factory of the last industrial
00:01:58
revolution. So I thought it was the
00:01:59
perfect name for the operating system of
00:02:02
the next industrial revolution, the AI
00:02:04
factory. And so inside Dynamo the
00:02:07
fundamental technology is disaggregated
00:02:10
inference.
00:02:11
Jason, I know you're
00:02:14
super technical.
00:02:14
>> Absolutely. I know it.
00:02:15
>> I'll let you take this one. Go ahead and
00:02:17
define it for the audience. I don't want
00:02:18
to step on you.
00:02:19
>> Yeah. Thank you. I knew you wanted
00:02:21
to jump in there for a second, but it's
00:02:22
disaggregating inference, which
00:02:25
means the pipeline, the processing
00:02:27
pipeline of inference is extremely
00:02:29
complicated. In fact, it is the most
00:02:31
complicated computing problem today.
00:02:34
Incredible scale, lots of mathematics of
00:02:36
different shapes and sizes. And we came
00:02:39
up with the idea that you would
00:02:41
disaggregate
00:02:43
parts of the processing such that some
00:02:46
of it can run on some GPUs, the rest of it
00:02:49
can run on different GPUs and that led
00:02:52
to us realizing that maybe even
00:02:55
disagregated computing could make sense
00:02:58
that we could have a
00:02:59
heterogeneous nature of computing. That
00:03:02
same sensibility led us to Mellanox.
00:03:05
>> You know, today Nvidia's computing is
00:03:08
spread across GPUs, CPUs, switches,
00:03:11
scale up switches, scale out switches,
00:03:13
networking processors, and now we're
00:03:15
going to add Groq to that. And we're
00:03:17
going to put the right workload on the
00:03:19
right chips. You know, we just really
00:03:21
evolved from a GPU company to an AI
00:03:23
factory company.
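[Editor's note: Jensen's description of disaggregated inference, splitting the prefill stage (ingesting the prompt and building the KV cache) from the decode stage (generating tokens one at a time) so each can run on different hardware, can be sketched in a few lines. This is a toy illustration under that assumption, not Dynamo's actual API; every name here is invented.]

```python
# Toy sketch of disaggregated inference: the prefill stage (ingest the
# prompt, build the KV cache) and the decode stage (generate tokens one at
# a time) run on different "device" pools. Illustrative only.

def prefill(prompt_tokens, device="prefill-gpu"):
    """Build a stand-in KV cache for the whole prompt on one device pool."""
    kv_cache = [(tok, tok * 2) for tok in prompt_tokens]  # fake K/V pairs
    return {"device": device, "kv": kv_cache}

def decode(kv_state, n_tokens, device="decode-gpu"):
    """Generate tokens one at a time on a separate device pool."""
    out = []
    last = kv_state["kv"][-1][0]
    for _ in range(n_tokens):
        nxt = (last + 1) % 50_000        # stand-in for sampling a next token
        kv_state["kv"].append((nxt, nxt * 2))
        out.append(nxt)
        last = nxt
    return out

state = prefill([101, 7592, 2088])  # compute-bound phase on one pool of chips
tokens = decode(state, 4)           # memory-bound phase on a different pool
```

The point of the split is that prefill is compute-bound and decode is memory-bandwidth-bound, so each phase can be scheduled onto the chips best suited to it.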
00:03:24
>> I mean, I think that was probably the
00:03:25
biggest takeaway that I had. You're
00:03:27
seeing this fundamental disaggregation
00:03:29
where we've gone from a GPU and now you
00:03:31
have this collection of all these
00:03:34
different options that will eventually
00:03:36
exist. The thing that you guys said on
00:03:38
stage or you said on stage was I I would
00:03:40
like the high value inference people to
00:03:43
take a listen to this and 25% of your
00:03:46
data center space you said should be
00:03:47
allocated to this Groq LPU.
00:03:50
>> Groq, to about 25% of the Vera Rubins in
00:03:54
the data center. So can you
00:03:56
tell us about how the industry looks at
00:03:58
this idea of now basically creating this
00:04:01
next generation form of disaggregated
00:04:03
prefill/decode disagg, and how do
00:04:05
you think people will react to it?
00:04:06
>> Yeah and take a step back and at the
00:04:09
time that we added this we went from
00:04:12
large language model processing
00:04:16
to agentic processing. Now when you're
00:04:19
running an agent you're accessing
00:04:22
working memory you're accessing
00:04:24
long-term memory. You're using tools.
00:04:26
You're really beating up on storage
00:04:28
really hard. You have agents working
00:04:30
with other agents. Some of the agents
00:04:32
are very large models. Some of them are
00:04:34
smaller models. Some of them are
00:04:36
diffusion models. Some of them are
00:04:39
autoregressive models. And so there are all
00:04:41
kinds of different types of models
00:04:42
inside this data center. We created Vera
00:04:44
Rubin to be able to run this
00:04:46
extraordinarily diverse workload. My
00:04:49
sense is, what used to be
00:04:52
a one-rack company, we now added four
00:04:55
more racks,
00:04:55
>> right?
00:04:56
>> So, Nvidia's TAM, if you will, increased
00:04:59
from whatever it was to probably
00:05:02
something, call it, you know, 33% to 50%
00:05:05
higher. Now, part of that 33% or 50% a
00:05:09
lot of it's going to be storage
00:05:10
processors. It's called BlueField. Some
00:05:13
of it, a lot of it I'm hoping,
00:05:15
will be Groq processors, and some of it
00:05:17
will be CPUs. And a lot
00:05:21
of it's going to be networking
00:05:22
processors. And so all of this is going
00:05:24
to be running basically the computer of
00:05:28
the AI revolution called agents, right?
00:05:30
>> The operating system of, um,
00:05:33
modern industry.
00:05:34
>> What about embedded applications? So you
00:05:36
know my daughter's teddy bear at home
00:05:39
wants to talk to her. What goes in
00:05:41
there? Is it a custom ASIC or does there
00:05:43
end up becoming much more kind of a
00:05:45
broader set of TAM with developing tools
00:05:48
that are maybe different for different
00:05:49
use cases at the edge and an embedded
00:05:51
application? We think that there's three
00:05:53
computers in the problem at the
00:05:56
largest scale. When you
00:05:58
take a step back, there's one
00:05:59
computer that's really about training
00:06:01
the AI model, developing creating the
00:06:03
AI, another computer for evaluating it.
00:06:07
Depending on the type of problem you're
00:06:08
having, like for example, you look
00:06:10
around, there's all kinds of robots and
00:06:11
cars and things like that. You have to
00:06:13
evaluate these robots inside a virtual
00:06:18
gym that represents the physical world.
00:06:21
So it has to be software that obeys the
00:06:24
laws of physics.
00:06:25
>> And that's a second computer. We call
00:06:27
that omniverse. The third computer is
00:06:29
the computer at the edge, the robotics
00:06:31
computer.
00:06:32
>> That robotics computer, one of them
00:06:34
could be self-driving car. Another one's
00:06:35
a robot. Another one could be a teddy
00:06:37
bear.
00:06:37
>> Little tiny one for a teddy bear.
00:06:39
>> One of the most important ones is one
00:06:41
that we're working on that basically
00:06:43
turns the telecommunications base
00:06:45
stations into part of the AI
00:06:48
infrastructure. So now, it's a
00:06:51
$2 trillion industry, all of that in time
00:06:54
will be transformed into an extension of
00:06:57
the AI infrastructure and so
00:07:00
radios will become edge devices,
00:07:02
factories, warehouses, you name it. And so
00:07:05
there are these three basic
00:07:07
computers
00:07:08
all of them you know are going to be
00:07:10
necessary
00:07:11
>> Jensen, last year I think you
00:07:14
were ahead of the rest of the world
00:07:15
in saying inference is going to
00:07:17
1,000x.
00:07:19
>> Yes. Is it going to hurt my feelings? Is
00:07:21
it going to 1 millionx? Is it
00:07:23
going to 1 billionx? Yeah.
00:07:24
>> Right. And I think people at the time
00:07:26
thought it was pretty hyperbolic because
00:07:28
the world was still focused on
00:07:29
pre-training scaling. Here we are
00:07:32
now. Inference has exploded. We're
00:07:34
inference constrained. Um you announced
00:07:36
an inference factory that I think is
00:07:38
leading edge that's going to be 10x
00:07:40
better in terms of throughput than the
00:07:42
next factory. But yet if I listen
00:07:45
to what the chatter is out there, it's
00:07:47
that your inference factory is going to
00:07:49
cost 40 or 50 billion and the
00:07:51
alternatives, the custom ASICs, AMD, others,
00:07:54
are going to cost 25 to 30 billion and
00:07:56
you're going to lose share. So why don't
00:07:58
you talk to us? What are you seeing? How
00:07:59
do you think about share and does it
00:08:02
make sense for all these folks to pay
00:08:04
something that's a 2x premium to what
00:08:06
others are marketing? The big takeaway,
00:08:09
the big idea is that
00:08:13
you should not equate the price of the
00:08:17
factory and the price of the tokens, the
00:08:20
cost of the tokens. It is very likely
00:08:23
that the $50 billion factory, and in
00:08:26
fact, I can prove it that the $50
00:08:28
billion factory will generate for you
00:08:31
the lowest cost tokens. And the reason
00:08:34
for that is because we produce these
00:08:36
tokens at extraordinary efficiency
00:08:40
10 times. You know, of the
00:08:42
50 billion, now it turns out 20 billion
00:08:45
is just land, power, and shell, right?
00:08:47
>> and then on top of that you have storage
00:08:50
anyways networking anyways you got CPUs
00:08:52
anyways you got servers anyways you got
00:08:54
cooling anyways the difference between
00:08:56
that GPU being 1x price or 0.5x price
00:09:00
>> is not between 50 billion and 30 billion
00:09:03
Pick your favorite number, but let's say
00:09:05
between 50 billion and 40 billion.
00:09:07
>> That is not a large percentage when the
00:09:10
$50 billion data center is actually 10
00:09:13
times the throughput.
00:09:14
>> Right, Jess?
00:09:16
>> That's the reason why I said that even
00:09:18
for most chips, if you can't keep up
00:09:21
with the state of the technology and the
00:09:22
pace that we're running, even when the
00:09:24
chips are free, it's not cheap enough.
00:09:26
>> Yeah.
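[Editor's note: the arithmetic behind Jensen's claim, that cost per token rather than factory price is the figure of merit, can be checked on the back of an envelope. The capex and throughput figures are the round numbers from the conversation; the lifetime token base is an arbitrary placeholder.]

```python
# Cost per token, not factory price, is the figure of merit. Illustrative:
# a $50B factory with 10x throughput vs. a $40B factory at 1x.
# `base_tokens` is an arbitrary stand-in for lifetime tokens at 1x.

def cost_per_million_tokens(capex_usd, relative_throughput, base_tokens=1e12):
    tokens_produced = base_tokens * relative_throughput
    return capex_usd * 1e6 / tokens_produced

expensive_factory = cost_per_million_tokens(50e9, 10)  # higher capex, 10x tokens
cheaper_factory = cost_per_million_tokens(40e9, 1)     # lower capex, 1x tokens
```

At these numbers the pricier factory produces tokens eight times cheaper, which is the shape of the argument: a 25% capex premium is swamped by a 10x throughput advantage.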
00:09:27
>> Can I just ask a general strategy
00:09:29
question?
00:09:29
>> Yeah. I mean, you're running the most
00:09:32
valuable company in the world. This
00:09:33
thing is going to do 350 plus billion of
00:09:36
revenue next year. 200 billion of free
00:09:39
cash flow. It's compounding at these
00:09:40
crazy rates. How do you decide what to
00:09:43
do? Like, how do you actually get the
00:09:45
information? I mean, it's famous now
00:09:47
these sorts of emails that people are
00:09:49
meant to send you, but how do you really
00:09:51
decide to get an intuition of how to
00:09:54
shape the market, where to really double
00:09:55
down, where to maybe pull back, where to
00:09:57
actually go into a green field? How how
00:09:59
does that information get to you? How do
00:10:00
you decide these things?
00:10:01
>> In the final analysis, that's the job of
00:10:03
the CEO. Yeah.
00:10:04
>> And our job is to
00:10:06
define the vision, define the strategy.
00:10:08
We're informed, of course, by amazing
00:10:10
computer scientists, amazing
00:10:11
technologists, great people all over the
00:10:13
company, but we have to shape that
00:10:15
future. Well, part of it has to do with
00:10:17
is this something that's insanely hard
00:10:19
to do? If it's not hard to do, we should
00:10:22
back away from it. And the reason for
00:10:23
that is, if it's easy to do, obviously,
00:10:26
there will be lots of competitors.
00:10:28
Is this something that has never been
00:10:30
done before that's insanely hard to do
00:10:33
and that somehow taps into the special
00:10:36
superpowers of our company and so I have
00:10:38
to find this confluence of things
00:10:40
that meets the standard and in the end
00:10:44
we also know that a lot of pain and
00:10:46
suffering is going to go into it. Yeah,
00:10:47
>> There are no great things invented
00:10:50
because it was just easy to do and, just
00:10:51
like that, first try, here we are.
00:10:53
>> And so if it's super hard to do,
00:10:55
nobody's ever done it before, it's very
00:10:57
likely that you're going to have a lot
00:10:58
of pain and suffering and so you better
00:11:00
enjoy it.
00:11:00
>> So can you just look at maybe
00:11:02
three or four of the more longtail
00:11:04
things you announced
00:11:05
>> and just talk about the long-term
00:11:07
viability of whether it's the data
00:11:09
centers in space or whether it's what
00:11:10
you're trying to do with ADAS in autos
00:11:12
or you know what you're trying to do on
00:11:14
the biology side. just give us a sense
00:11:16
of like how you see some of these curves
00:11:18
inflecting upwards in some of these
00:11:19
longer tail businesses.
00:11:20
>> Excellent. Physical AI, large category,
00:11:24
we believe and I just mentioned we have
00:11:26
three computing systems all the software
00:11:28
platforms on top of it. Physical AI is a
00:11:31
large category.
00:11:34
It's the technology industry's first
00:11:36
opportunity
00:11:38
to address a $50 trillion industry that
00:11:41
has largely been, you know, void of
00:11:44
technology until now. And so, we need to
00:11:46
invent all of the technology necessary
00:11:48
to do that. I felt that that was a
00:11:49
10-year journey. We started 10 years
00:11:52
ago. We're seeing it inflecting now. It
00:11:54
is a multi-billion dollar business for
00:11:56
us. It's close to $10 billion a year now.
00:11:58
And so it's a big business and it's
00:12:00
growing exponentially. And so that's
00:12:02
number one. I think in the case of
00:12:04
digital biology, I think we are
00:12:05
literally near the ChatGPT moment of
00:12:08
digital biology. We're about to
00:12:09
understand how to represent genes,
00:12:12
proteins, cells. We already know how to
00:12:15
understand chemicals. And so the ability
00:12:17
for us to represent and understand the
00:12:20
dynamics of the building blocks of
00:12:22
biology, that's a couple of two, three,
00:12:25
five years from now. In five years time,
00:12:28
I completely believe that the healthcare
00:12:29
industry and digital biology are going
00:12:31
to inflect. And so these are a couple of
00:12:33
the really great ones and you could see
00:12:35
they're all around us.
00:12:36
>> Agriculture,
00:12:37
>> agriculture
00:12:38
>> inflecting now.
00:12:39
>> No question. Yeah.
00:12:40
>> Jensen, I want to take you from the data
00:12:42
center to the desktop. Uh the company
00:12:45
was built in large part on hobbyists,
00:12:48
video gamers, and and all those graphic
00:12:50
cards in the beginning. And you
00:12:52
mentioned in front of I think 10,000
00:12:54
people here Claude, OpenClaw, Claude
00:12:58
Code, and what a revolution agents have
00:13:00
become and specifically
00:13:03
the hobbyists who are really where a lot
00:13:05
of energy um we see you know a lot of
00:13:08
the innovation breaks, want desktops. You
00:13:10
announced one here uh I believe it's the
00:13:13
Dell 6800 uh this is a very powerful
00:13:16
workstation to run local models 750 gigs
00:13:18
of RAM. Obviously the Mac Studio
00:13:22
sold out everywhere. In my company, we're
00:13:24
moving to OpenClaw everything. Freeberg
00:13:27
just got Claude-pilled. You got Claude-pilled,
00:13:29
I understand. And you're obsessed with
00:13:30
these.
00:13:31
>> What does this from-the-streets movement
00:13:34
of creating open-source
00:13:37
agents and using open source on the
00:13:39
desktop mean to you? Great. Where is
00:13:40
that going?
00:13:41
>> Yeah. So great. First of all, let's take
00:13:42
a step back. Um in the last two years,
00:13:45
we saw basically three inflection
00:13:47
points. The first one was generative:
00:13:50
ChatGPT
00:13:53
brought AI to
00:13:56
everybody's awareness. But the fact
00:13:58
of the matter is the technology sat in
00:14:00
plain sight months before ChatGPT. It wasn't
00:14:03
until ChatGPT put a user interface
00:14:07
around it and made it easy for us to use
00:14:09
that generative AI took off. Now
00:14:11
generative AI as you know generates
00:14:14
tokens for internal consumption as well
00:14:16
as external consumption. Internal
00:14:17
consumption is thinking which led to
00:14:20
reasoning. o1 and o3
00:14:23
continued that wave of ChatGPT. Grounded
00:14:26
information made AI not only answer
00:14:28
questions but answer questions in a more
00:14:30
grounded, useful way. We started seeing
00:14:33
the revenues and the economic
00:14:36
model of OpenAI start to inflect. Then
00:14:39
the third one was only inside the
00:14:42
industry that we saw Claude Code, the
00:14:44
first agentic system that was very
00:14:46
useful really revolutionary stuff but
00:14:50
but Claude Code was only available for
00:14:52
enterprises. Most people outside never
00:14:55
saw anything about Claude Code until
00:14:58
OpenClaw. OpenClaw basically put into the
00:15:02
popular consciousness what an AI
00:15:05
agent can do. Mhm.
00:15:07
>> That's the reason why OpenClaw is so
00:15:09
important from a cultural perspective.
00:15:11
Now the second reason why it's so
00:15:14
important is that OpenClaw is open, but
00:15:17
it formulates
00:15:20
it structures a type of computing model
00:15:24
that is basically reinventing computing
00:15:26
altogether. It has a memory system, a
00:15:30
scratchpad short-term memory, a file
00:15:31
system. It has skills.
00:15:34
Yeah. Did you say skills or scales?
00:15:36
>> Skills.
00:15:37
>> Oh, skills.
00:15:37
>> They do have scales theoretically. Yeah.
00:15:39
Yeah. Skills.
00:15:40
>> So, the first thing,
00:15:42
you know, it has resources. It
00:15:44
manages resources. It does
00:15:46
scheduling.
00:15:47
>> Yep.
00:15:47
>> Right. And it runs cron jobs. It
00:15:50
could spawn off agents. It could, you
00:15:52
know, decompose a task and
00:15:54
solve problems, as it does
00:15:56
scheduling. It has IO subsystems. It
00:15:59
could, you know, take input. Its output can
00:16:01
connect to WhatsApp. And also it has an
00:16:04
API that allows it to run multiple types
00:16:07
of applications called skills.
00:16:09
>> Yeah.
00:16:10
>> These four elements fundamentally define
00:16:13
a computer.
00:16:13
>> Yeah.
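[Editor's note: the four elements Jensen enumerates, memory, resource management and scheduling, IO, and an application API ("skills"), can be sketched as a minimal object. This is a hypothetical illustration of the shape of such a system, not OpenClaw's actual code; all names are invented.]

```python
# Minimal sketch of the four elements said to define a computer:
# memory, resource management/scheduling, IO, and a "skills" API.

class AgentComputer:
    def __init__(self):
        self.memory = []     # scratchpad / long-term store
        self.skills = {}     # installable applications
        self.queue = []      # scheduled tasks (cron-like)

    def register_skill(self, name, fn):
        self.skills[name] = fn              # the API for adding applications

    def schedule(self, skill_name, *args):
        self.queue.append((skill_name, args))  # resource mgmt / scheduling

    def run(self, io_out):
        while self.queue:                   # execute scheduled work in order
            name, args = self.queue.pop(0)
            result = self.skills[name](*args)
            self.memory.append(result)      # persist results to memory
            io_out.append(result)           # IO subsystem: emit output

pc = AgentComputer()
pc.register_skill("add", lambda a, b: a + b)
pc.schedule("add", 2, 3)
outbox = []
pc.run(outbox)
```

A real agent runtime would add governance on top of this loop, since, as discussed below, the agent holds sensitive data, executes code, and communicates externally.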
00:16:14
>> And therefore what do we have? We have a
00:16:17
personal artificial intelligence
00:16:21
>> computer for the very first time.
00:16:23
>> Open source.
00:16:24
>> It's open source. It runs literally
00:16:26
everywhere. And so this is now
00:16:28
basically the
00:16:30
blueprint, the operating system, of modern
00:16:33
computing.
00:16:33
>> Yeah.
00:16:34
>> And it's going to run literally
00:16:35
everywhere. Now of course one of the
00:16:37
things that we had to help it do is
00:16:38
whenever you have agentic software you
00:16:41
have to be careful. Agentic
00:16:42
software has access to sensitive
00:16:44
information. It could execute code. It
00:16:46
could communicate externally. We have to
00:16:49
make sure that all of it has to be
00:16:50
governed. All of it has to be secure, and
00:16:52
we have policies that give
00:16:55
these agents two of the three things but
00:16:58
not all three things at the same time
00:16:59
and so the governance part of it we
00:17:02
contributed to. Peter Steinberger
00:17:04
was here, and so we've got a mountain
00:17:06
of great engineers working with him to
00:17:07
help secure and keep that thing so that
00:17:10
it could protect our privacy protect our
00:17:12
security
00:17:12
>> Jensen that paradigm shift makes some of
00:17:15
the AI legislation that has passed
00:17:17
around the country to regulate AI and a
00:17:20
lot of the proposed legislation
00:17:21
effectively moot, doesn't it? Can you
00:17:23
just comment for a second on how quickly
00:17:25
the paradigm shift kind of obviates a lot
00:17:27
of the models for regulatory oversight
00:17:30
of AI, which is becoming a very hot
00:17:32
topic in politics right now.
00:17:34
>> Well, this is this is the part that that
00:17:36
we just with policy makers, we need to
00:17:39
we need to always get in front of them
00:17:41
and Brad, you do a great job doing this.
00:17:42
We had to get in front of them and
00:17:43
inform them about the state of the
00:17:45
technology, what it is, what it is not.
00:17:48
It is not a biological being. It is not
00:17:53
alien. It is not conscious.
00:17:57
Um it is computer software.
00:18:00
>> Yeah. Exactly.
00:18:01
>> And it is not something that, um, we
00:18:04
say things like we don't understand it
00:18:06
at all.
00:18:07
>> That is not true, that we don't understand it at
00:18:09
all. We understand a lot of things about
00:18:10
this technology. And so I think
00:18:13
one we have to make sure that we
00:18:14
continue to inform the policy makers and
00:18:17
not allow doomerism and
00:18:20
extremism to affect how policy makers
00:18:24
think and understand about this
00:18:25
technology. However, we still
00:18:27
have to recognize the technology is
00:18:28
moving really fast and don't get policy
00:18:30
ahead of the technology too quickly and
00:18:33
the risk that we run as a nation. Our
00:18:37
greatest source of national security
00:18:39
concern with respect to AI is that other
00:18:41
countries adopt this technology while we
00:18:44
are so angry at it or afraid of it or
00:18:48
somehow paranoid of it that our
00:18:51
industries our society don't take
00:18:53
advantage of AI and so I'm just mostly
00:18:55
worried about the diffusion of AI here
00:18:57
in the United States.
00:18:58
>> Can you just double click if you were in
00:18:59
the seat in the boardroom of Anthropic
00:19:02
over that whole scuttlebutt with the
00:19:04
department of war? It sort of builds on
00:19:06
this idea of people didn't know what to
00:19:09
think. It's sort of added to this layer
00:19:11
of either resentment or fear or just
00:19:14
general mistrust that people have
00:19:16
sometimes at the software levels of AI.
00:19:19
What do you think you would have
00:19:20
told Dario and that team to do maybe
00:19:22
differently to try to change some of
00:19:24
this outcome and some of this
00:19:25
perception?
00:19:25
>> The first thing that I would
00:19:27
say about Anthropic is, first of all, the
00:19:29
technology is incredible. We are a large
00:19:31
consumer of Anthropic technology. Really
00:19:33
admire their focus on security, really
00:19:36
admire their focus on safety. Um, the
00:19:38
the culture by which they
00:19:42
went about it, the technology
00:19:43
excellence by which they went about it
00:19:45
really fantastic. Um I would say that
00:19:48
the desire to warn people about
00:19:53
the capability of the technology is
00:19:55
also uh really terrific. We just have to
00:19:57
make sure that we understand that the
00:19:59
world has a spectrum and that that
00:20:02
warning is good, scaring is less good,
00:20:05
>> right?
00:20:06
>> Um and because this technology is too
00:20:08
important to us,
00:20:09
>> right?
00:20:10
>> And I think that it is fine to uh
00:20:15
predict the future, but we need to be a
00:20:17
little bit more circumspect. We need to
00:20:19
have a little bit more humility that in
00:20:21
fact we can't completely predict the
00:20:24
future. And to say things
00:20:26
that are quite extreme, quite
00:20:29
catastrophic, when there's no evidence of
00:20:32
it happening, could be more damaging
00:20:35
than people think. And and of course we
00:20:37
are technology leaders.
00:20:40
>> There was a time when nobody
00:20:41
listened to us. Yeah. Um but now because
00:20:44
technology is so important in the social
00:20:47
fabric, such an important industry, so
00:20:50
important to national security, our
00:20:52
words do matter and I think we have to
00:20:54
be much more circumspect. We have to be
00:20:55
more moderate. We have to be more
00:20:57
balanced. We have to be more
00:20:58
thoughtful.
00:20:59
>> Well, I you know, I would nominate you.
00:21:02
I think the industry's got to get
00:21:03
together. 17% popularity of AI in the
00:21:07
United States. I mean, we see what
00:21:08
happened to nuclear, right? We basically
00:21:11
shut down the entire nuclear industry
00:21:12
and now we have 100 fission reactors
00:21:14
being built in China and zero in the
00:21:16
United States. Um we hear about
00:21:18
moratoriums on data centers. So I think
00:21:20
we have to be a lot more proactive about
00:21:21
that. But but I want to go back to this
00:21:23
agentic explosion that you're seeing
00:21:25
inside your company, the efficiencies,
00:21:27
the productivity gains inside your
00:21:29
company. There's a lot of debate whether
00:21:31
or not we're seeing ROI, right? And you
00:21:33
and I entering into this year, the big
00:21:36
question was are the revenues going to
00:21:38
show up? are the revenues going to scale
00:21:40
like intelligence and then we had this
00:21:42
kind of Oppenheimer moment, a five, six billion dollar
00:21:45
month by Anthropic in February. Um do
00:21:49
you think as you look ahead you
00:21:51
announced, you know,
00:21:53
visibility into a trillion dollars of
00:21:54
just Blackwell and Vera Rubin over the
00:21:56
course of the next couple years when you
00:21:58
see this happening at Anthropic and
00:22:00
OpenAI do you think we're on that curve
00:22:02
now where we're going to see revenues
00:22:04
scale in the way that intelligence is
00:22:06
scaling
00:22:07
>> I'll
00:22:08
answer this a couple different ways. When
00:22:09
you look around this audience you will
00:22:11
see that Anthropic and OpenAI are
00:22:13
represented here but in fact everybody
00:22:16
99% of everything that is here is all AI
00:22:19
and it's not Anthropic and OpenAI.
00:22:21
>> Right. Right.
00:22:21
>> And the reason for that is because AI is
00:22:24
very diverse.
00:22:26
>> I would say that the second most popular
00:22:29
model as a category is open models.
00:22:32
>> Number one is, yeah, open weights,
00:22:35
open source.
00:22:36
>> OpenAI is number one. Open source is
00:22:39
number two. Very distant third is
00:22:40
Anthropic. And that tells you something
00:22:42
about the scale of all of the AI
00:22:45
companies that are here. And so it's
00:22:51
important to recognize that, um,
00:22:51
let me let me come back and say a couple
00:22:52
things. One when we went from generative
00:22:55
to reasoning the amount of computation
00:22:58
we needed was about a hundred times.
00:23:00
>> When we went from reasoning to agentic
00:23:03
the computation is probably another 100
00:23:06
times. Now we're looking at in just two
00:23:10
years, computation went up
00:23:13
10,000x.
00:23:15
Meanwhile,
00:23:17
people pay for information, but people
00:23:21
mostly pay for work.
00:23:23
>> Yes,
00:23:25
>> talking to a chatbot and getting an
00:23:26
answer is super great,
00:23:28
>> right?
00:23:29
>> Helping me do some research,
00:23:30
unbelievable, but getting work done,
00:23:32
I'll pay for, indeed.
00:23:34
>> And so that's where we are.
00:23:36
>> Agentic systems get work done. They're
00:23:38
helping our software engineers get work
00:23:39
done. And so then you take that: you
00:23:43
got 10,000x more compute you get
00:23:46
probably at this point 100x more
00:23:48
consumption now.
00:23:49
>> Yes.
00:23:49
>> Yeah.
00:23:50
>> And we haven't even started scaling yet.
00:23:52
>> We are absolutely at a millionx
00:23:54
>> which is I think a great place to talk
00:23:56
about the number of people. You have 20,
00:23:59
30,000 at the company or something?
00:24:01
>> We have 43,000 employees. You know I
00:24:03
would say 38,000
00:24:05
are engineers. The conversation we've
00:24:08
had on the pod a number of times is, "Oh
00:24:10
my god, look at the token usage in our
00:24:12
companies. It is growing massively." And
00:24:15
some people are asking, "Hey, when I
00:24:17
join a company, how many tokens do I get
00:24:19
cuz I want to be an effective employee?"
00:24:21
And you postulated, I believe, during
00:24:23
your 2 and 1/2 hour keynote, pretty long
00:24:26
keynote, well done, that you were
00:24:30
spending,
00:24:30
>> If it was well done, it would be
00:24:31
shorter. Yeah, he didn't have time to
00:24:33
write a shorter one.
00:24:36
>> So you guys know
00:24:38
there is no practice
00:24:40
>> and so it's a grip it and rip it
00:24:41
>> Rip and rip. Yeah.
00:24:43
>> So I just want to let you know I was
00:24:44
writing the speech while I was giving
00:24:46
the speech. Okay. So
00:24:48
>> you never know.
00:24:49
>> But does that mean if we do back
00:24:51
>> I apologize.
00:24:52
>> Back-of-the-envelope math, $75,000 in
00:24:55
tokens for each engineer or something
00:24:56
like that. So, are you spending in
00:24:58
Nvidia a billion2 billion dollars on
00:25:00
tokens for your engineering team right
00:25:02
now?
00:25:02
>> We're trying to. Let me give you a
00:25:03
thought experiment. Let's say you have a
00:25:05
software engineer or AI researcher and
00:25:07
you pay them $500,000 a year. We do that
00:25:10
all the time.
00:25:11
>> Okay, this is happening all the
00:25:13
time. Um, that $500,000 engineer at the
00:25:16
end of the year, I'm going to ask them
00:25:18
how much did you spend in
00:25:20
tokens? If that person said $5,000, I
00:25:23
will go ape.
00:25:24
>> Yes.
00:25:25
>> Right. If that if that $500,000 engineer
00:25:28
did not consume at least $250,000 worth
00:25:31
of tokens, I am going to be deeply
00:25:34
alarmed. Okay? And this is no different
00:25:37
than one of our chip designers who says,
00:25:40
"Guess what? I'm just going to use paper
00:25:42
and pencil. I don't think I'm going to
00:25:44
need any CAD tools."
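The back-of-envelope math being debated here can be sketched in a few lines. All inputs are figures quoted in the conversation (roughly 38,000 engineers, a $1 to 2 billion total, Huang's $250,000-per-engineer floor), not official Nvidia numbers:

```python
# Back-of-envelope sketch of the token-spend figures quoted above.
# All inputs are the numbers mentioned in the conversation, not
# official Nvidia figures.

engineers = 38_000       # engineers, out of ~43,000 employees
total_spend_low = 1e9    # "a billion to 2 billion dollars"
total_spend_high = 2e9
huang_floor = 250_000    # Huang's per-engineer floor in token spend

# Implied per-engineer token spend at each end of the quoted range
per_engineer_low = total_spend_low / engineers
per_engineer_high = total_spend_high / engineers
print(f"${per_engineer_low:,.0f} to ${per_engineer_high:,.0f} per engineer")

# Huang's floor, applied to every engineer, implies a larger total
implied_total = huang_floor * engineers
print(f"Huang's $250k floor implies ${implied_total / 1e9:.1f}B/year")
```

Note the gap: the hosts' $1–2B guess works out to roughly $26k–$53k per engineer, while applying Huang's stated $250k floor across all 38,000 engineers would imply about $9.5B a year.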
00:25:45
>> This is a real paradigm shift to start
00:25:47
thinking about these all-star employees.
00:25:50
It almost reminds me of of what we
00:25:51
learned in the NBA when LeBron James
00:25:53
started spending a million dollars a
00:25:55
year just on the health of his body
00:25:57
and maintaining it. That's right.
00:25:58
>> Here he is at age 41 still playing. It
00:26:01
really is, hey, if these are incredible
00:26:03
knowledge workers, why wouldn't we give
00:26:05
them
00:26:06
>> superhuman abilities?
00:26:08
>> That's exactly right.
00:26:08
>> Where does that go? If we
00:26:10
extrapolate out two or three years from
00:26:12
now, what is the efficiency of that
00:26:15
all-star at Nvidia and what they're
00:26:16
able to accomplish? What do they look
00:26:18
like? Well, first of all, thoughts like
00:26:20
"wow, this is too hard": that
00:26:24
thought is gone. "This is going to
00:26:26
take a long time": that thought is gone.
00:26:28
"We're going to need a lot of people":
00:26:29
that thought is gone. This is no
00:26:32
different than in the last
00:26:33
industrial revolution. Somebody goes,
00:26:36
"Boy, that building really looks heavy."
00:26:38
Nobody says that. Nobody says, "Wow, that
00:26:40
mountain looks too big." Nobody says
00:26:42
that. Right.
00:26:43
>> Everything that's too big, too heavy,
00:26:45
takes too long,
00:26:46
>> those thought, those ideas are all gone.
00:26:48
>> You're reduced to creativity.
00:26:50
>> That's right.
00:26:50
>> What can you come up with?
00:26:51
>> Exactly. Which means now the question is
00:26:53
how do you work with these
00:26:55
agents? Well, it's just a new way of
00:26:57
doing computer programming. In
00:26:59
the past, we code. In the future, we're
00:27:01
going to write ideas,
00:27:03
architectures, specifications.
00:27:06
We're going to organize teams. We're
00:27:08
going to help
00:27:09
them define how to evaluate the
00:27:11
definition of good versus bad. What
00:27:14
does it look like when
00:27:15
something is a great outcome, how to
00:27:17
iterate with you, how to brainstorm.
00:27:19
That's really what you're looking for.
00:27:21
And I think that every engineer is
00:27:23
going to have a hundred agents.
00:27:25
>> Back to the PR problem the industry has
00:27:28
right now. You have executives uh like
00:27:31
David Freeberg with Ohalo, who's looking at
00:27:35
literally, through the use of
00:27:36
technology, your technology, and AI, increasing the
00:27:39
number of calories produced and making
00:27:42
high-quality calories. What is the
00:27:44
factor you think you can bring the cost
00:27:47
down by, Freeberg, and what impact does this
00:27:49
vision have for what you're doing?
00:27:51
>> zero shot genomic modeling and it works
00:27:54
>> and then you have that moment and you're
00:27:56
like holy
00:27:58
>> honestly, like, and that's after
00:28:01
people are replacing entire enterprise
00:28:03
software stacks in a night. I did
00:28:05
something in 90 minutes. I was telling
00:28:06
the guys about: replaced the whole
00:28:08
software stack and a whole bunch of
00:28:10
workload. 90 minutes on Claude, ran this
00:28:12
agentic system, built the whole thing,
00:28:13
deployed it, and we were on a
00:28:15
Sunday night.
00:28:15
>> On a Sunday night, 10 p.m. I was done at
00:28:17
11:30. I went to bed.
00:28:18
>> As the CEO, you replaced
00:28:20
>> Yeah. And everyone on my management team
00:28:22
had to do a similar exercise over the
00:28:24
weekend. What we saw on Monday, I was
00:28:26
like, it's over. But the technical
00:28:30
stuff, the science stuff, we did
00:28:32
something in 30 minutes using auto
00:28:33
research. And I'd love your view on auto
00:28:34
research and what that tells us about
00:28:36
how far we still have to go in terms of
00:28:38
efficiency. But using auto research and
00:28:40
a chunk of data, something was published
00:28:43
internally that we said, "Oh my god."
00:28:46
And that would normally be a PhD thesis
00:28:47
that would take seven years. It would be
00:28:49
one of the most celebrated PhD theses
00:28:50
we've ever seen in this field and it
00:28:52
would be in the journal Science. And it
00:28:54
was done in 30 minutes on a desktop
00:28:55
computer running on auto research with
00:28:58
all the data we just ingested. We got it
00:28:59
on Friday. We're like, "Hey, let's try
00:29:00
it." Booted up, went to GitHub,
00:29:02
downloaded auto research and ran it and
00:29:05
you see everyone's face just go like and
00:29:08
then the potential of what this is
00:29:09
unlocking for us is like the kind of
00:29:11
thing that would take seven years and it
00:29:13
happened in 30 minutes and we're
00:29:15
experiencing it in genomics and we're
00:29:17
like this is unbelievable. So I think
00:29:19
like the acceleration is widening the
00:29:22
aperture for everyone in a way that like
00:29:25
you didn't imagine a few years ago. But
00:29:27
just going back to the auto research
00:29:28
point, can you just comment on what you
00:29:30
think about the fact that this thing got
00:29:32
published with 600 lines of code in a
00:29:34
weekend and the capacity that it has to
00:29:36
run locally and achieve what it can
00:29:38
achieve with all of these diverse data
00:29:40
sets and what that tells us about the
00:29:42
early stages we are in terms of
00:29:43
optimization on algorithms and hardware.
00:29:46
The fundamental reason why OpenClaw is
00:29:48
so incredible, number one, is its
00:29:52
confluence, its timing with the
00:29:55
breakthroughs in large language models.
00:29:57
>> Yeah.
00:29:57
>> Its timing was perfect. It was
00:29:59
impeccable. Now in a lot of ways Peter
00:30:01
wouldn't have come up with it probably
00:30:03
if not for the fact that Claude and GPT
00:30:06
and ChatGPT have reached a level that
00:30:08
is really very good.
00:30:10
>> Right. It is also a new capability that
00:30:13
allows these models to use the
00:30:16
tools that we've created over time: web
00:30:19
browsers and Excel spreadsheets and, you
00:30:22
know, in the case of chip design, Synopsys
00:30:25
and Cadence and Omniverse and Blender
00:30:29
and Autodesk. And all of these tools are
00:30:31
going to continue to be used.
00:30:33
Some people say that the
00:30:35
enterprise IT software industry is going
00:30:38
to get destroyed.
00:30:41
Let me give you the alternative view.
00:30:43
The enterprise software industry is
00:30:45
limited by butts in seats. It's about
00:30:48
to get a hundred times more agents
00:30:50
banging on those tools. They're going to
00:30:52
be agents banging on SQL. They're going
00:30:53
to be agents banging on vector databases,
00:30:55
agents banging on Blender, agents banging
00:30:57
on Photoshop. And the reason for that is
00:30:59
because those tools, first of all, do
00:31:02
a very good job. Second, those tools are
00:31:05
the conduit between us in the final
00:31:08
analysis. When the work is done, it has
00:31:10
to be represented back to me in a way
00:31:12
that I can control,
00:31:13
>> right?
00:31:14
>> And I know how to control those tools.
00:31:17
And so I need everything to be put back
00:31:19
in Synopsys. I want everything
00:31:20
put back in Cadence because that's
00:31:22
how I control it. That's how I ground
00:31:24
truth it.
00:31:24
>> Let me ask you a question about open
00:31:26
source. So we have these closed source
00:31:28
models. They're excellent.
00:31:29
>> We have these open-weight models. Many of
00:31:31
the Chinese models are incredible.
00:31:34
Absolutely incredible. Two days ago, you
00:31:36
may not have seen this because you were
00:31:37
busy on stage, but there was a training
00:31:40
run that happened in this crypto project
00:31:42
called Bittensor Subnet 3. They managed
00:31:45
to train a 4 billion parameter llama
00:31:47
model, totally distributed with a bunch
00:31:50
of people contributing
00:31:52
excess compute, but they were able to do
00:31:54
it statefully and manage a training run,
00:31:56
which I thought was like a pretty crazy
00:31:59
technical accomplishment. Yeah,
00:32:00
>> because it's like random people and each
00:32:02
person gets a little share.
00:32:04
>> Our modern version of
00:32:05
Folding@home.
00:32:06
>> Exactly. So what what do you think about
00:32:08
the end state of open source? Do you see
00:32:11
this decentralization of architecture as
00:32:13
well and decentralization of compute to
00:32:16
support open weights and a totally
00:32:19
open-source approach to making sure AI is
00:32:21
broadly available to everyone? I believe
00:32:23
we fundamentally need
00:32:26
models as a first class product,
00:32:28
proprietary product as well as models as
00:32:32
open source. These two things are not A
00:32:35
or B. It's A and B. There's no question
00:32:37
about it. And the reason for that is
00:32:39
because a model is a technology, not a
00:32:42
product. A model is a technology, not a
00:32:44
service. For the vast majority of
00:32:46
consumers, the horizontal layer, the
00:32:48
general intelligence, I would really
00:32:51
really love not to go fine-tune my own,
00:32:54
right? I would really love to keep using
00:32:55
ChatGPT. I'd love to use Claude. I love
00:32:58
to use Gemini. I'd love to use X. And
00:33:00
they all have their own personalities as
00:33:02
you know, which is kind of depends on my
00:33:04
mood and depends on what problem I'm
00:33:05
trying to solve. You know, I might, you
00:33:07
know, do it on X or I might do it on
00:33:08
ChatGPT. And so that segment of
00:33:11
the industry is thriving. It's
00:33:13
going to be great. However, all
00:33:17
these industries, their domain expertise,
00:33:20
their specialization, has to be channeled,
00:33:22
has to be captured in a way that they
00:33:25
can control, and that can only come
00:33:27
from open models. The open model
00:33:29
industry, which we're contributing tremendously
00:33:31
to, is near the frontier,
00:33:35
and quite frankly, even if it reaches the
00:33:38
frontier,
00:33:40
I think that models as a service,
00:33:42
world-class products, models as a
00:33:45
product, is going to continue to thrive.
00:33:46
>> Every startup we're investing in now is
00:33:50
open source first and then going to the
00:33:52
proprietary models.
00:33:53
>> Yeah. The beautiful thing is, because you
00:33:55
have a great router you connect it to,
00:33:58
from day one, every single day, you're
00:34:01
going to have access to the world's best
00:34:02
model. And then it gives you time to
00:34:06
cost-reduce and fine-tune and specialize.
00:34:08
And so you're going to have world-class
00:34:09
capabilities out of the chute every single
00:34:11
time. Let
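The pattern described here, starting on the best available frontier model and swapping in a cheaper fine-tuned model once it clears the quality bar, can be sketched as a toy router. The model names, quality scores, and costs below are hypothetical placeholders, not real endpoints or prices:

```python
# Toy model router illustrating the "start at the frontier, then
# cost-reduce and specialize" pattern. All names, quality scores,
# and costs are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Model:
    name: str
    quality: float        # 0..1, higher is better
    cost_per_mtok: float  # dollars per million tokens

def route(models: list[Model], min_quality: float) -> Model:
    """Pick the cheapest model that clears the quality bar;
    fall back to the highest-quality model if none does."""
    eligible = [m for m in models if m.quality >= min_quality]
    if eligible:
        return min(eligible, key=lambda m: m.cost_per_mtok)
    return max(models, key=lambda m: m.quality)

catalog = [
    Model("frontier-closed", quality=0.95, cost_per_mtok=15.0),
    Model("open-finetuned", quality=0.88, cost_per_mtok=0.6),
]

# Day one: only the frontier model clears a strict bar.
print(route(catalog, min_quality=0.9).name)   # frontier-closed

# Later: the fine-tuned open model catches up and wins on cost.
catalog[1].quality = 0.93
print(route(catalog, min_quality=0.9).name)   # open-finetuned
```

The point of the design is that the application never hard-codes a model: as the open, specialized model improves, the router shifts traffic to it automatically, which is the "time to cost-reduce" Huang describes.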
00:34:12
>> J can I
00:34:13
>> ask a question?
00:34:14
>> Nobody wants the US to win the global AI
00:34:16
race more than you, right? But a year
00:34:20
ago, the Biden era diffusion rule really
00:34:23
was an anti-American diffusion of AI
00:34:26
around the world. So here we are a year
00:34:28
into the new administration.
00:34:30
Give us a grade. Where is where are we
00:34:33
in terms of global diffusion and the
00:34:35
rate at which we're spreading US AI
00:34:38
technology around the world? Are we an
00:34:40
A? Are we a B? Are we a C? What's
00:34:42
working, what's not working?
00:34:44
>> Well, first of all, President Trump
00:34:46
wants American industry to lead. He
00:34:50
wants American technology industry to
00:34:51
lead. He wants American technology
00:34:53
industry to win. He wants us to spread
00:34:56
American technology around the world. He
00:34:58
wants the United States to be the wealthiest
00:35:00
country in the world. He wants all of
00:35:02
that. At the current moment, as we
00:35:05
speak,
00:35:07
Nvidia gave up a 95% market share in the
00:35:10
second largest market in the world, and
00:35:13
we're at 0%.
00:35:15
>> President Trump, That's right. President
00:35:16
Trump wants us to get back in there. And
00:35:19
and the first thing is to get
00:35:23
licenses for the companies that
00:35:25
we're going to be able to sell to. We've
00:35:27
got many companies who have requested
00:35:30
licenses. We've applied for licenses
00:35:32
for them and we've got approved licenses
00:35:34
from Secretary Lutnick. Now
00:35:38
we've informed the Chinese companies
00:35:40
and many of them have given us purchase
00:35:41
orders, and so we're
00:35:43
in the process of cranking up
00:35:45
our supply chain again to go ship. I
00:35:48
think at the highest level, Brad, I
00:35:50
think one of the things that we should
00:35:51
acknowledge is this. Our national
00:35:54
security
00:35:55
is diminished when we don't have access
00:36:00
to miniature motors, rare earth
00:36:03
minerals. It's diminished when we don't
00:36:05
control our telecommunications networks.
00:36:07
It's diminished when we can't provide
00:36:10
for sustainable energy for our country.
00:36:12
It is fundamentally diminished. Every
00:36:14
single one of these industries is an
00:36:16
example of what I don't want the AI
00:36:19
industry to be.
00:36:20
>> Right? When we look forward in time and
00:36:23
we say, what do we want? What
00:36:25
does it look like when American
00:36:28
technology industry, the American AI industry,
00:36:30
leads the world? We can all acknowledge
00:36:33
that there is no way that the AI model race is
00:36:37
won universally. We can all
00:36:40
acknowledge that that is an outcome that
00:36:42
makes no sense. However, we can all
00:36:45
imagine that the American tech stack
00:36:48
from chips to computing systems to the
00:36:52
platforms are used broadly by the world
00:36:56
where they build their own AI, they use
00:36:58
public AI, they use private AI whatever
00:37:01
and they can build their applications in
00:37:02
their society. I would love that the
00:37:05
American tech stack is 90% of the world.
00:37:07
Yes, I would love that. The alternative
00:37:10
if it looks like solar, rare earth,
00:37:14
magnets, motors, telecommunications, I
00:37:17
consider that a very bad outcome for
00:37:20
national security.
00:37:21
>> Agreed.
00:37:21
>> Yeah.
00:37:22
>> How much are you monitoring the
00:37:23
situation with the conflicts around the
00:37:26
world right now? And how much does it
00:37:27
worry you Jensen? So, China and Taiwan
00:37:30
and then helium availability coming out
00:37:32
of the Middle East, I understand, can be
00:37:34
a supply chain risk to semiconductor
00:37:35
manufacturing. How much do these
00:37:37
situations worry you? How much are you
00:37:39
spending on them?
00:37:40
>> Well, first of all, in the
00:37:42
Middle East, we have 6,000
00:37:44
families there.
00:37:45
>> Yeah,
00:37:45
>> we have a lot of Iranians uh at NVIDIA
00:37:48
and their families are still in Iran.
00:37:50
And so we have a lot of
00:37:52
families there. The first thing is,
00:37:53
they're quite anxious. They're quite
00:37:55
concerned, quite scared. Um we're
00:37:57
thinking about them all the time. Uh
00:37:58
we're monitoring and keeping an eye on them
00:38:00
all the time. They have 100% of our
00:38:02
support. Uh I've been asked several
00:38:04
times, are we still considering uh being
00:38:06
in Israel? We are 100% in Israel. We are
00:38:10
100% behind the families there. We are
00:38:12
100% in the Middle East. I was also
00:38:15
asked, you know, given what's happening
00:38:17
in the Middle East, uh is that an area
00:38:20
where we believe that we can expand
00:38:22
artificial intelligence to? Um I believe
00:38:25
that there's a reason we went to war and
00:38:27
I believe at the end of the war, Middle
00:38:29
East will be more stable than before.
00:38:32
And so if we were there, if we're
00:38:34
considering it before, we should
00:38:35
absolutely be considering it after. And
00:38:37
so I'm 100% in on that. With respect to
00:38:40
Taiwan,
00:38:43
>> we have to do three things. One, we have
00:38:46
to make sure that we re-industrialize
00:38:48
the United States as fast as we can.
00:38:50
Yeah.
00:38:50
>> And whether it's the chip manufacturing
00:38:52
plants, the the computer manufacturing
00:38:54
plants, or the AI factories.
00:38:55
>> How are we doing on that? We're doing
00:38:57
excellent by gaining the
00:39:01
strategic support, by gaining the
00:39:03
friendship of the supply chain of
00:39:05
Taiwan.
00:39:07
By gaining their friendship, by gaining
00:39:09
their support, we were able to build
00:39:13
Arizona and Texas, California at
00:39:15
incredible rates. They are
00:39:18
genuinely a strategic partner.
00:39:21
They deserve our support. They
00:39:25
deserve our friendship. They deserve our
00:39:27
uh generosity and they're doing
00:39:29
everything they can to accelerate the
00:39:31
manufacturing process for us. And so, so
00:39:34
I think that's number one. Number two,
00:39:36
we ought to diversify the manufacturing
00:39:38
supply chain. And whether it's South
00:39:40
Korea, whether it's Japan, it's
00:39:43
Europe, we got to diversify
00:39:45
the supply chain, make it more
00:39:46
resilient. And number three, let's
00:39:48
demonstrate restraint. And
00:39:53
while we're increasing our
00:39:56
diversity and resilience, let's not
00:40:00
push, you know,
00:40:03
>> unnecessarily. We need to be patient.
00:40:05
>> Is helium a problem?
00:40:07
>> A lot of reports,
00:40:08
>> you know, I think helium could be a
00:40:09
problem, but it's also the case that the
00:40:11
supply chain probably has a lot of
00:40:13
buffer in it.
00:40:14
>> These kind of things tend to have a lot
00:40:15
of buffer. Uh but but um you know
00:40:19
>> Yeah,
00:40:19
>> you've um made massive progress in
00:40:22
self-driving. You made a big
00:40:24
announcement. You've added many more
00:40:26
partners including BYD. There was just a
00:40:28
video of you driving around in a
00:40:30
Mercedes and uh huge announcement uh
00:40:34
with Uber that you're going to have a
00:40:36
number of cars on the road from many
00:40:38
different manufacturers. Your bet, I
00:40:41
believe, is that there's going to be an
00:40:43
Android
00:40:44
type open-source platform that you're
00:40:47
going to play a major part in with
00:40:49
dozens of uh car providers and then
00:40:52
maybe on the other side there could be
00:40:53
an iOS with Tesla or Waymo. What's your
00:40:56
strategy thinking there and how that
00:40:59
chessboard emerges because it feels like
00:41:02
you have a a pretty deep stack and in
00:41:05
some ways you're competing and in other
00:41:07
places you're collaborative. Yeah. Um,
00:41:10
Taking a step back, we believe that
00:41:13
everything that moves will be autonomous
00:41:16
completely or partly
00:41:18
someday. Number one. Number two, we
00:41:20
don't want to build self-driving cars,
00:41:22
but we want to enable every car company
00:41:24
in the world to build self-driving cars.
00:41:26
And so, we built all three computers,
00:41:28
the training computer, the simulation
00:41:29
computer, the evaluation
00:41:31
computer, as well as the car computer.
00:41:33
We develop the world's safest driving
00:41:36
operating system. Uh we also created the
00:41:40
world's first reasoning autonomous
00:41:43
vehicle so that it could decompose
00:41:45
complicated scenarios into simpler
00:41:47
scenarios that it knows how to navigate
00:41:49
through, just like we do with reasoning.
00:41:52
And so that reasoning system, called
00:41:54
Alpamayo, has enabled us to achieve
00:41:56
incredible results.
00:41:59
We
00:42:00
open this up. We vertically
00:42:03
optimize. We horizontally innovate,
00:42:06
and we let everybody decide. Do you want
00:42:08
to buy one computer from us? In the case
00:42:09
of Elon and Tesla, they buy our training
00:42:12
computers. Um, do they want to buy our
00:42:14
training computer and our simulation
00:42:15
computers, or do you want to
00:42:18
work with us to do all three and even
00:42:19
put the car computer in your car. So,
00:42:21
we, you know, our attitude is we want to
00:42:24
solve the problem. We're not the
00:42:27
solution provider.
00:42:29
And we're delighted however you work
00:42:31
with us. Let me build on this question
00:42:33
because I think it's like it's so
00:42:34
fascinating. You actually do create this
00:42:36
platform. A thousand flowers are
00:42:38
blooming.
00:42:40
>> But it's also true that some of those
00:42:41
flowers want to now go back down in the
00:42:43
stack and try to compete with you a
00:42:45
little bit. Google has TPU, Amazon has
00:42:48
Inferentia and Trainium. You know,
00:42:50
everybody's sort of spinning up their
00:42:52
own version of "I think I can out-Nvidia
00:42:54
>> Nvidia."
00:42:55
>> even though they also tend to be huge
00:42:58
customers.
00:42:58
>> How do you navigate that? And what do
00:43:01
you think happens over time and
00:43:03
>> where do those things play in the
00:43:05
complexion of this kind of vision?
00:43:06
>> Yeah, really great. You know, first of
00:43:08
all, um, we're the only AI company,
00:43:11
we're an AI company. We build foundation
00:43:13
models. We're at the frontier in many
00:43:15
different domains. We build every single
00:43:17
layer, every single stack.
00:43:19
Um, we're the only AI company in the
00:43:21
world that works with every AI company
00:43:22
in the world. They never show me what
00:43:25
they're building and I always show them
00:43:26
exactly what I'm building.
00:43:28
>> Right.
00:43:28
>> Yeah. And so the confidence comes
00:43:31
from this one. Uh we are delighted to
00:43:35
compete on what is the best technology
00:43:38
and to the extent
00:43:40
that we can continue to run fast, I
00:43:42
believe that buying from Nvidia still is
00:43:45
one of the most economic things they
00:43:46
could do and that's just incredible
00:43:48
confidence there. Number one. Number
00:43:49
two, we're the only architecture that
00:43:51
could be in every cloud and that gives
00:43:53
us some fundamental advantages. We're
00:43:55
the only architecture you could take
00:43:56
from a cloud and put on-prem, in the
00:43:59
car in any region
00:44:00
>> in space.
00:44:01
>> That's right. In space. And so there's a
00:44:03
whole part of our market, about 40%
00:44:06
of our business (most people don't
00:44:08
realize this, 40% of our business) where, unless
00:44:11
you have the CUDA stack, unless you can
00:44:12
build an entire AI factory, the
00:44:14
customers don't know what to do with
00:44:16
you. They're not trying to build chips.
00:44:18
They're not trying to buy chips. They're
00:44:20
trying to build AI infrastructure. And
00:44:22
so they want you to come in with the
00:44:24
full stack. And we've got the whole
00:44:25
stack. And so surprisingly, Nvidia is
00:44:28
gaining market share. If you look at
00:44:30
where we are today, we're gaining share.
00:44:32
>> Do you think what happens is these guys
00:44:33
try and they realize, oh my god, it's
00:44:35
too much. And then they come back. Is
00:44:36
that why the share grows?
00:44:38
>> Well, we're gaining share for several
00:44:39
reasons. One, our velocity has gone up;
00:44:44
we help people realize it's not about
00:44:45
building the chip, it's about building
00:44:47
the system.
00:44:48
>> And that system is really hard to build.
00:44:50
and so their business
00:44:53
with us is increasing. In the case of
00:44:55
AWS, I think they just announced, I
00:44:56
think it was yesterday, that they're
00:44:58
going to buy a million chips in the
00:45:01
next couple years. I mean, that's a lot
00:45:03
of chips from from AWS. And that's on
00:45:05
top of all the chips they've already
00:45:06
bought. And so, we're delighted to do
00:45:08
that. But number one, we're gaining
00:45:10
share this last couple years because we
00:45:13
now have Anthropic coming to Nvidia.
00:45:16
Meta SL is coming to Nvidia. And the
00:45:20
growth of open models is incredible. And
00:45:23
that's all on Nvidia. And so we're
00:45:25
growing in share because of the number
00:45:27
of models. We're also growing in share
00:45:29
because all of these companies are
00:45:33
outside of the cloud and they're growing
00:45:35
regionally in enterprise in industries
00:45:37
at the edge and that entire segment of
00:45:40
growth is you know really hard to do if
00:45:42
you're just building an ASIC.
00:45:44
>> Brad
00:45:44
>> related to that um and not to get in the
00:45:47
weeds on the numbers, but analysts don't
00:45:49
seem to believe it, right? So if you look at
00:45:52
the consensus forecast you said compute
00:45:54
could 1-million-x, right? And yet they have
00:45:58
you growing next year at 30%, the year
00:46:00
after that at 20%. And in 2029, which is
00:46:03
supposed to be a monster year at 7%.
00:46:06
Right? So if you just if you take your
00:46:08
TAM and you apply their growth numbers,
00:46:11
it suggests that your share will
00:46:12
plummet. Do you see anything in your
00:46:15
future order book that would make that
00:46:18
correct?
00:46:18
>> Yeah. First of all, they just don't
00:46:20
understand the scale and the breadth of
00:46:23
AI.
00:46:23
>> Yes.
00:46:24
>> Yeah.
00:46:24
>> I think that's true. Most people think
00:46:26
that AI is in the top five hyperscalers,
00:46:29
>> right? That's right. There's also an
00:46:31
orthodoxy around the law of large
00:46:33
numbers where,
00:46:34
>> you know, they have to go back to their
00:46:36
investment banking risk committee and
00:46:37
show some model.
00:46:39
>> They're not going to believe in their
00:46:41
minds that 5 trillion goes to 15
00:46:43
trillion. They're like, it can go
00:46:45
to seven, or they can have a 10 trillion
00:46:48
company.
00:46:48
>> It's all just CYA stuff that I think
00:46:50
>> it's never happened before. So you can't
00:46:51
say it will
00:46:52
>> and and because because you have to
00:46:54
redefine what it is that you do. There
00:46:56
was somebody who made an observation
00:46:57
recently that, Nvidia,
00:47:00
Jensen, how can you be larger than Intel
00:47:04
in servers and the reason for that is
00:47:06
because the CPU market of the entire
00:47:09
data center was about $25 billion a
00:47:11
year,
00:47:12
>> right?
00:47:12
>> We do $25 billion a year as you guys
00:47:14
know, in the time that we were
00:47:16
sitting here.
00:47:17
>> And so obviously
00:47:21
That was a joke.
00:47:22
>> No, it's but it's
00:47:23
>> all in podcast.
00:47:25
Don't worry. Everything on this show is
00:47:27
roughly right. Don't worry about it. It's all
00:47:28
in here. Anyway, that was not guidance.
00:47:32
But anyhow, the point is
00:47:35
how big you can be
00:47:37
>> depends on what it is that you make,
00:47:39
>> right?
00:47:40
>> Number one, Nvidia is not making chips. Number two,
00:47:42
making chips does not help you solve the
00:47:44
AI infrastructure problem anymore. It's
00:47:46
too complicated. Number three, most
00:47:49
people think that AI is narrowly in the
00:47:51
things that they talk about and hear and
00:47:53
see.
00:47:54
>> AI is much bigger. OpenAI is incredible.
00:47:57
They're going to be enormous. Anthropic
00:47:59
is incredible. They're going to be
00:48:00
enormous. But AI is going to be much
00:48:03
much bigger than that.
00:48:05
>> And we addressed that segment.
00:48:06
>> Tell us about data centers in space for
00:48:08
a second.
00:48:09
>> Yeah.
00:48:09
>> Um
00:48:10
>> we're already in space. How should the
00:48:12
layman think about what that business is
00:48:16
versus when you hear about these big
00:48:17
data center buildouts that are happening
00:48:19
on the ground?
00:48:21
>> Well, we should definitely work on the
00:48:23
ground first because we're already here,
00:48:25
number one. Number two, we should
00:48:27
prepare to be out in space and obviously
00:48:28
there's a lot of energy in space. Um the
00:48:31
challenge, of course, is cooling:
00:48:34
you can't take advantage of conduction
00:48:36
and convection, and so you can only use
00:48:38
radiation, and radiation requires very
00:48:41
large surfaces. Now, that's not an
00:48:43
impossible thing to solve, and there's a
00:48:45
lot of space in space. But
00:48:48
nonetheless,
00:48:49
the expense is still quite high.
00:48:51
We're going to go explore it.
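Huang's point about radiators can be made concrete with the Stefan-Boltzmann law: a panel at temperature T radiates at most εσT⁴ watts per square meter, so orbital compute loads need large surfaces. The 1 MW load and the emissivity value below are illustrative assumptions, not figures from the conversation:

```python
# Rough radiator sizing for an orbital data center using the
# Stefan-Boltzmann law. The 1 MW load and 0.9 emissivity are
# illustrative assumptions, not figures from the conversation.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(power_w: float, temp_k: float,
                     emissivity: float = 0.9) -> float:
    """Minimum radiating area to shed `power_w` watts at `temp_k`,
    ignoring absorbed sunlight and Earth-shine."""
    return power_w / (emissivity * SIGMA * temp_k**4)

# A 1 MW IT load radiating at 300 K (about room temperature):
area = radiator_area_m2(1e6, 300.0)
print(f"{area:,.0f} m^2")  # on the order of a few thousand m^2
```

Because the area scales as 1/T⁴, running the radiators hotter shrinks them dramatically, which is why radiator temperature tends to dominate these designs.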
00:48:53
We're already there. We're already
00:48:55
radiation hardened. We have
00:48:57
CUDA in satellites around the
00:49:00
world. Um they're doing imaging, image
00:49:02
processing, AI imaging, and that
00:49:05
kind of stuff ought to be done in space
00:49:07
instead of sending all the data back
00:49:08
here and doing imaging down here. We ought
00:49:10
to just do imaging out in space. And so
00:49:12
there's a lot of things that we ought to
00:49:13
do in space. And in the
00:49:15
meantime, uh we're going to explore what
00:49:17
the architecture of data centers looks
00:49:18
like in space. And it'll
00:49:21
take years. It's okay. I got
00:49:23
plenty of time. I wanted to um double
00:49:25
click on healthcare. I know you've got a
00:49:26
big effort there. We're all of a certain
00:49:28
age where we're thinking about lifespan,
00:49:30
health span. I mean, we all look great.
00:49:32
I think
00:49:34
>> some better than others.
00:49:34
>> I think some better than others. I don't
00:49:36
know what your secret is, Jensen.
00:49:37
>> Pretty good these these
00:49:38
>> I mean, what's what are you taking?
00:49:40
What's off the menu? You got to talk to
00:49:42
me when we're backstage. I want to know
00:49:43
in the green room what you got going on.
00:49:45
>> Squats and push-ups and sit-ups.
00:49:47
>> Perfect. Okay. Um but
00:49:49
>> that works. What, you know, in terms of
00:49:52
the buildout in healthcare
00:49:56
where is that going and what kind of
00:49:58
progress are we making? I was just using
00:50:00
Claude to do some analysis and saying
00:50:02
like where are all these billing codes?
00:50:04
We spend twice as much money in the US.
00:50:06
We seem to get half as much. It
00:50:08
seemed like 15 to 25% of the dollars
00:50:11
spent were on these first GP visits. And
00:50:14
I think we all know ChatGPT and a
00:50:17
large language model does a better job
00:50:19
more consistently today at a first
00:50:22
visit. So what has to happen there to
00:50:25
kind of break through all that
00:50:26
regulation and have AI have a true
00:50:28
impact on the health care system?
00:50:29
>> There are several areas that
00:50:31
we're involved in in um in healthcare.
00:50:34
One is uh AI
00:50:38
uh physics uh and and that's or AI
00:50:41
biology: using AI to understand, represent,
00:50:45
and predict biological
00:50:47
behavior and so that's one that's very
00:50:49
important in drug discovery. There's
00:50:51
a second, which is AI agents, and that's
00:50:54
where the assistants help with
00:50:56
diagnosis and things like that. Open
00:50:58
Evidence is a really good example.
00:50:59
Hippocratic is a really good example.
00:51:01
Love working with those companies. Um I
00:51:03
really think that this is an area uh
00:51:04
where agentic technology is going to
00:51:06
revolutionize how we interact with
00:51:09
doctors and how we interact with
00:51:10
healthcare. The third part that we're
00:51:12
involved in is physical AI. The first
00:51:14
one is AI physics using AI to predict
00:51:16
physics. The second one is physical AI.
00:51:19
AI that understands the properties of the
00:51:21
laws of physics, and that's used for uh
00:51:24
robotic surgery. Huge amounts of
00:51:27
activities there. Every single
00:51:29
instrument whether it's ultrasound or
00:51:31
you know CT or whatever instrument we
00:51:34
interact with in a hospital in the
00:51:35
future will be agentic.
00:51:36
>> Yeah.
00:51:37
>> You know, OpenClaw in a safe version
00:51:39
will be inside every single instrument.
00:51:42
And so in a lot of ways that instrument
00:51:44
is going to be interacting with patients
00:51:45
and nurses and doctors in a very unique
00:51:48
way.
>> So much investment in AI weapons.
00:51:50
It would be wonderful to see some
00:51:52
investment in AI EMTs and paramedics and
00:51:55
saving lives, not just taking them,
00:51:57
which I think is a great segue into
00:51:59
robotics. You've got dozens of partners.
00:52:02
We have this very weird
00:52:04
>> I don't know if I want to call it a lost
00:52:05
decade or 20 years of Boston Dynamics.
00:52:09
Google bought a bunch of companies. They
00:52:10
then wound up selling them and spinning
00:52:12
them out where people just thought
00:52:14
robotics is just not ready for prime
00:52:16
time. And now here we have the world's
00:52:18
greatest entrepreneur at this time. Uh
00:52:20
tied with you, uh Elon Musk doing well,
00:52:23
that was a good save, I hope. Optimus,
00:52:25
uh pretty impressive. And then other
00:52:27
companies in China. How how close is
00:52:30
that to actually being in our lives
00:52:34
where we might see a chef, a robotic
00:52:37
chef, a robotic nurse, a robotic
00:52:39
housekeeper, you know, this humanoid
00:52:41
factor actually working in the real
00:52:43
world, knowing what you know with those
00:52:46
partners and the fidelity, especially in
00:52:48
China where they seem to be doing as
00:52:49
good a job as we're doing here or maybe
00:52:51
better.
00:52:52
>> Um,
00:52:54
we invented the industry largely.
00:52:57
America invented it. You could argue
00:52:59
we got into it too soon.
00:53:00
>> Yeah.
00:53:01
>> And and we got exhausted. We got tired
00:53:04
um about five years before the enabling
00:53:08
technology appeared.
00:53:09
>> The brain.
00:53:09
>> Yeah. Yeah. And we we just got tired of
00:53:11
it just a little too soon. Okay. That's
00:53:14
number one. But it's here now. Now the
00:53:16
question is how much longer? From the
00:53:18
point of high functioning existence
00:53:21
00:53:24
proof to reasonable products
00:53:28
technology never takes more than a
00:53:30
couple two three cycles. And so a couple
00:53:33
two three cycles is basically somewhere
00:53:34
around 3 years to 5 years. That's it. 3
00:53:37
years to 5 years we're going to have
00:53:38
robots all over the place. Uh I think I
00:53:40
think um uh China is is uh formidable
00:53:43
and the reason for that is because their
00:53:46
microelectronics, their uh motors,
00:53:49
their rare earth, their magnets, which
00:53:50
is foundational to robotics,
00:53:53
>> they are the world's best. And so in a
00:53:55
lot of ways, our robotics industry
00:53:57
relies deeply on their ecosystem and
00:53:59
their supply chain. Um and uh and and
00:54:02
they're, you know, obviously moving very
00:54:04
quickly. Uh we're going to, you know,
00:54:06
our robotics industry will have to rely
00:54:08
a lot on it. The world's robotics
00:54:09
industry will have to rely a lot on
00:54:11
it. And so so I think um you're gonna
00:54:14
see some fast fast movements here.
00:54:16
>> Ultimately, one for one. Elon seems to
00:54:19
think we're going to have one robot for
00:54:20
every human. 7 billion for 7 billion, 8
00:54:22
billion for 8 billion.
00:54:23
>> Well, I'm hoping more. Yeah, I'm hoping
00:54:25
more. Yeah. Uh well, first of all,
00:54:27
there's a whole bunch of robots that are
00:54:29
going to be in factories working around
00:54:30
the clock. There's going to be a whole
00:54:32
bunch of fac
00:54:34
that don't move. They move a little
00:54:36
bit. Uh almost everything will be
00:54:37
robotic. What does the world look like?
00:54:39
>> Sorry, let me I think like this is one
00:54:41
of the robotics for me is one of the
00:54:43
pieces that I think unlocks uh economic
00:54:45
mobility opportunities for every
00:54:47
individual. Everyone now like when
00:54:49
everyone got a car, they could now go
00:54:51
and do a lot of different jobs. When
00:54:53
everyone gets a robot, their robot can
00:54:55
do a lot of work for them. They can
00:54:57
stand up an Etsy store, a Shopify store.
00:54:59
They can create anything they want with
00:55:01
their robot. They could do things that
00:55:03
they independently cannot do. I think
00:55:05
the robot is going to end up being the
00:55:07
greatest unlock for prosperity for more
00:55:09
people on Earth than we've ever seen
00:55:11
with any technology before.
00:55:13
>> Yeah, no doubt. I mean, just the
00:55:15
simple math at the moment is we're
00:55:17
millions of people short in labor today.
00:55:19
Right. Yeah.
00:55:19
>> Right. We're actually really
00:55:22
desperate in need of robotics and so
00:55:24
that all of these companies could grow
00:55:27
more if they had more labor. I mean,
00:55:29
we're we're number one. Some of the
00:55:32
things that you mentioned are super fun.
00:55:33
I mean, because of robots, we'll have
00:55:35
virtual presence. Uh, you know, I'll be
00:55:38
able to go into the robot in my house
00:55:42
and virtually operate it. I'm on a
00:55:44
business trip,
00:55:45
>> right?
00:55:46
>> Walk around the house and walk the dog.
00:55:48
>> Yeah. Walk the dog.
00:55:49
>> Rake the leaves.
00:55:50
>> Yeah. Exactly. Freak out the dog.
00:55:51
>> Maybe not quite that, but just, you
00:55:53
know, just, you know, wander around and
00:55:55
just see what's going on in the house.
00:55:56
You know, chat with the dog, chat with
00:55:58
the kids.
00:55:58
>> Yeah.
00:55:59
>> Yeah. And then there's time travel:
00:56:01
we're going to be able to travel at the
00:56:02
speed of light, you know, and so, you
00:56:04
know, clearly we're going to send our
00:56:06
robots ahead of us.
00:56:07
>> Yeah.
00:56:08
>> Not going to send myself. I'm going to
00:56:09
send a robot, you know.
00:56:10
>> Check it out.
00:56:11
>> Yeah. Yeah. And then I'm going to upload
00:56:12
my AI. Yeah.
00:56:13
>> Well, it's inevitable. It unlocks the
00:56:14
moon and it unlocks Mars as um targets
00:56:17
for for colonization, which gives us
00:56:19
>> infinite resources. Getting back from
00:56:21
the moon is effectively zero energy cost
00:56:23
to move material back because you can
00:56:25
use solar and accelerate. So you could
00:56:27
have factories that make everything the
00:56:29
world needs on the moon and the robots
00:56:31
are going to be the unlock for enabling
00:56:32
that.
00:56:32
>> That's right. Distance no longer
00:56:33
matters.
00:56:34
>> Distance doesn't matter. Yeah.
00:56:35
>> The more the more revenue we get out of
00:56:38
models and agents, the more we can
00:56:40
invest in building the infrastructure
00:56:41
which then unlocks more capabilities on
00:56:43
models and agents. Dario on Dwarkesh's
00:56:46
podcast recently said by '27 or '28 we'll have
00:56:49
hundreds of billions of dollars of
00:56:51
revenue out of the model companies and
00:56:53
the agent companies. and he forecasts a
00:56:55
trillion dollars by 2030. Right? This is
00:56:58
non-infrastructure AI revenue. Um
00:57:01
>> I think he's being very
00:57:04
conservative. I believe Dario and
00:57:05
Anthropic is going to do way better than
00:57:07
that.
00:57:08
>> Wow.
00:57:08
>> Way better than that.
00:57:09
>> Wow. So from 30 billion to a trillion.
00:57:11
>> Yeah. And the reason for
00:57:13
that is the one part that he hasn't
00:57:15
considered is that I believe every
00:57:17
single enterprise software company will
00:57:20
also be a reseller
00:57:23
value-added reseller of Anthropic's code,
00:57:25
Anthropic's tokens, a value-added reseller of
00:57:28
OpenAI. That's right. And they're going
00:57:31
to that that that part of their
00:57:34
>> get this logarithmic expansion
00:57:35
>> yes
00:57:36
>> their go-to-market is going to expand
00:57:39
tremendously this year
00:57:41
>> what do you think in that world is the
00:57:42
moat what's left over. I mean you have
00:57:44
some moats that are frankly I think as
00:57:47
this scales almost insurmountable. The
00:57:49
best one that nobody talks about is
00:57:51
probably CUDA which is just like an
00:57:54
incredible strategic advantage. But in
00:57:56
the future if a model can be used to
00:57:59
create something incredible then the
00:58:01
next spin of a model can be used to
00:58:03
maybe disrupt it. Sort of in your mind
00:58:05
what do you think for these companies
00:58:06
that are building at that application
00:58:08
layer? What's their moat? Like, how do
00:58:10
they differentiate themselves?
00:58:12
>> Deep specialization.
00:58:14
Deep specialization. I believe that um
00:58:17
these models they're going to have
00:58:19
general models that are
00:58:20
connected into the software company's
00:58:24
agentic system,
00:58:25
>> right?
00:58:26
>> Many of those models are cloud models
00:58:29
and proprietary models, but many of
00:58:31
those models are specialized sub-agents
00:58:34
that they've trained on their own.
00:58:37
>> Right. So the call to arms for
00:58:39
entrepreneurs is look
00:58:40
>> know your vertical.
00:58:42
>> That's right.
00:58:42
>> Know it deeper and better than
00:58:44
everybody else.
00:58:45
>> That's right.
00:58:45
>> And then wait for these tools because
00:58:47
they're catching up to you and now you
00:58:48
can imbue it with your knowledge.
00:58:49
>> That's right. The sooner you connect
00:58:51
your agent,
00:58:52
>> the sooner you connect your agent with
00:58:54
customers,
00:58:55
>> that flywheel is going to cause your
00:58:57
agent to get
00:58:58
>> it very much is an inversion of what we
00:59:00
do today because today we build a piece
00:59:02
of software and we say what generalizes
00:59:04
>> and then let's try to sell it as broadly
00:59:06
as possible and then sell the
00:59:07
customization around it
00:59:08
>> In fact, exactly right.
00:59:11
we create a horizontal but notice there
00:59:15
are all these GSIs and all of these
00:59:17
consultants who are specialists who then
00:59:20
take your horizontal platform and
00:59:22
specialize it into
00:59:24
>> and that's arguably a five or six times
00:59:26
bigger industry is the customization.
00:59:28
>> It absolutely is.
00:59:30
>> that's right. So I think that these
00:59:32
platform companies have an opportunity
00:59:34
to become that specialist to become that
00:59:37
vertical.
00:59:38
>> Yeah. Domain expert.
00:59:38
>> You know, I just want to give you your
00:59:39
flowers. I think it was 3 years ago you
00:59:41
said you're not going to lose your job
00:59:43
to AI. You're going to lose your job to
00:59:44
somebody using AI. And here we are. The
00:59:47
entire conversation has revolved around
00:59:49
this concept of agents making people
00:59:52
superhuman and the business opportunity
00:59:55
expanding and entrepreneurship
00:59:56
expanding. You actually saw it pretty
00:59:58
clearly. Yeah.
00:59:58
>> You changed your view.
01:00:01
This is Doomer talk. No, I'm not a Doomer. I
01:00:04
do, I do have doomer moments. No, you can hold
01:00:07
space for I think two ideas. One is
01:00:08
there are going to be a lot
01:00:09
>> that's viral Jake.
01:00:11
>> Oh, no. There you can.
01:00:12
>> But that's just because he doesn't hang
01:00:13
out with me enough.
01:00:15
>> Well, we I mean we a little bit. We
01:00:17
don't talk about it. He will show up at the
01:00:19
table. He'll follow you around.
01:00:21
>> I'm not asking for it. I'm just
01:00:22
>> follow you around. I'm not asking for
01:00:24
it.
01:00:24
>> You can come with me and Tucker. We ski
01:00:26
in Japan every January. Love it. and
01:00:29
Tucker go road trip.
01:00:31
There is going to be job displacement
01:00:33
and then the question becomes,
01:00:35
>> you know, do those people have the
01:00:37
fortitude, the resolve to then go
01:00:39
embrace these,
01:00:40
>> you know, technologies. We're we're
01:00:42
going to see 100% of driving go away by
01:00:44
humans. That's just a
01:00:47
beautiful thing and the lives saved, but
01:00:49
we have to recognize that's 15 million
01:00:50
people in the United States, 10 to 15
01:00:52
million who are employed in that way.
01:00:54
And and so that is going to happen. Yes,
01:00:57
>> I think that jobs will change.
01:00:59
For example, um there are many
01:01:01
chauffeurs today uh who drive the car.
01:01:04
I believe that though many of those
01:01:05
chauffeurs will actually be in the car
01:01:08
sitting behind the steering
01:01:10
wheel while the car is driving by
01:01:12
itself. And the reason for that is
01:01:14
because remember what a chauffeur does
01:01:16
in the end. These chauffeurs, they're
01:01:18
helping you they're your assistants.
01:01:20
They're helping you with your luggage.
01:01:21
They're helping you. I mean, they're
01:01:22
helping you with a lot of things and and
01:01:24
so I wouldn't be surprised actually if
01:01:27
the chauffeurs of the future become
01:01:29
your mobility assistant and they are
01:01:32
helping you with a whole bunch of other
01:01:33
stuff
01:01:33
>> to the hotel.
01:01:34
>> Yeah. And the car is driving by itself.
01:01:36
>> The autopilot in planes created a lot
01:01:39
more pilots and didn't take any of the
01:01:41
pilots out of the cockpit even though
01:01:43
the autopilot is flying the plane 90% of
01:01:45
the time. By the way, while that car is
01:01:47
driving itself, that chauffeur is going
01:01:49
to be doing a bunch of other work on his
01:01:50
phone and he's going to be
01:01:52
>> arranging, for example, coordinating a
01:01:54
bunch of things for you, getting, you
01:01:55
know, it's all the pie just grows in a
01:01:57
way that
01:01:58
>> one of the things that that that
01:02:01
yes, every job will be
01:02:02
transformed. Um, some jobs will be
01:02:04
eliminated. However, we also know that
01:02:07
many many jobs will be
01:02:08
created. The one thing that I will say
01:02:10
to young people who are coming out of
01:02:12
school who are concerned who are anxious
01:02:14
about AI: be the expert at using AI.
01:02:18
>> Look, we all want our employees
01:02:20
to be expert at using AI and it's not
01:02:23
not
01:02:25
>> Not trivial, not trivial. And so knowing
01:02:28
how to specify not to overprescribe
01:02:32
leaving enough room for the AI to
01:02:34
innovate and create while we guide it to
01:02:37
the outcome we want. All of that
01:02:40
requires artistry.
01:02:41
>> You had this great advice
01:02:43
when you were at Stanford, I think it
01:02:45
was, which is I wish you pain and
01:02:46
suffering. Do you remember that?
01:02:47
>> Yeah.
01:02:48
>> Fantastic.
01:02:49
>> What's your advice to young people
01:02:50
around what they should be studying? So,
01:02:52
if they're sort of about to leave high
01:02:54
school because now those are the kids
01:02:56
that are at this like really native,
01:02:58
they haven't made a decision about
01:02:59
college, what to study, whether to go to
01:03:02
college. How do you guide those kids?
01:03:04
What would you tell them? I I still
01:03:07
believe that deep science, deep math, um
01:03:10
language skills, you know, as you know,
01:03:13
language is the programming language of
01:03:16
AI,
01:03:17
>> the ultimate programming language.
01:03:18
>> And so, as it turns out, it could be
01:03:20
that the English major could be the most
01:03:22
successful. Yeah.
01:03:23
>> And so I think um I
01:03:26
would just advise: whatever
01:03:28
education you get, just make sure that
01:03:30
you're deeply deeply expert in using
01:03:33
AIs. One of the things that I wanted to
01:03:35
say with respect to jobs and I want
01:03:36
everybody to hear it that in fact at the
01:03:39
beginning of the deep learning
01:03:40
revolution, one of the finest
01:03:43
computer scientists in the world, whom I
01:03:45
deeply respect,
01:03:49
predicted that computer vision will
01:03:51
completely eliminate radiologists
01:03:54
and that the one field he
01:03:57
advises everybody to not go into is
01:04:00
radiology. 10 years later, his
01:04:03
prediction was 100% right. Computer
01:04:06
vision has been integrated into all of
01:04:09
the radiology technologies and radiology
01:04:11
platforms in the world 100%. The
01:04:14
surprising outcome is the number of
01:04:16
radiologists actually went up and the
01:04:18
demand for radiologists has skyrocketed.
01:04:21
The reason for that is because
01:04:23
everybody's job
01:04:25
has a purpose and its tasks. The task
01:04:29
that you do is studying the scans,
01:04:33
>> but your purpose is helping the
01:04:36
doctors, helping the patient diagnose
01:04:38
disease.
01:04:38
>> And so what's surprising is because the
01:04:41
scans are now being done so quickly,
01:04:44
>> they could do more scans, improving
01:04:46
healthcare.
01:04:47
>> Yes.
01:04:47
>> But doing more scans more quickly allows
01:04:50
patients to
01:04:51
>> be
01:04:53
onboarded a lot more quickly, treated a
01:04:55
lot more quickly. And as it turns out,
01:04:57
because hospitals enjoy making money,
01:05:00
too.
01:05:00
>> Yeah.
01:05:01
>> Right.
01:05:02
>> They're doing more scans,
01:05:04
>> they're treating more customers and
01:05:05
patients, the revenues go up. Guess
01:05:07
what?
01:05:09
>> And a country that grows faster,
01:05:11
productivity increases. A wealthier
01:05:14
country can put more teachers in the
01:05:16
classroom, not fewer teachers in the
01:05:17
classroom. That's right. You just give
01:05:19
every one of those teachers a
01:05:20
personalized curriculum for every
01:05:22
student in the room. It makes them all
01:05:23
bionic and leads to a lot more. Every
01:05:26
single student will be assisted by AI,
01:05:28
but every single student will need great
01:05:30
teachers.
01:05:31
>> Yeah. Amazing. Uh Jensen,
01:05:33
congratulations on your success. And
01:05:35
really this is an incredibly positive,
01:05:37
uplifting discussion. We really
01:05:39
appreciate you taking the time for us.
01:05:40
He is the steward we need.
01:05:42
>> You are the more vocal one. I'm
01:05:45
being very vocal about the positive side
01:05:47
of it. I think there's too much doomerism
01:05:48
is
01:05:49
>> but I also think it takes the humility
01:05:50
to have this level of success and be
01:05:52
humble about it: we're making software, guys.
01:05:55
Yeah.
01:05:56
>> And I think that that's actually really
01:05:58
healthy for people to hear. We have done
01:06:00
this before. We have invented categories
01:06:02
and industries before.
01:06:04
>> We don't need to go to this
01:06:06
>> scaremongering place. It does nothing.
01:06:08
>> And we get to choose, right? We have
01:06:10
autonomy and and agency. We get to pick
01:06:12
how to
01:06:13
>> we sure do
01:06:13
>> employ this. Okay, everybody. We'll see
01:06:15
you next time on the All-In interview.
01:06:18
Okay.
01:06:19
>> Well done, brother.
01:06:20
>> Thanks, man.
01:06:20
>> Good job.
01:06:21
>> Thank you, sir. That was awesome.
01:06:23
>> Good. Good. Appreciate you.
01:06:24
>> You guys are awesome.
01:06:25
>> Look at this. Look at this big crowd
01:06:27
behind you guys,
01:06:27
>> man. I think they're here for you.
01:06:36
>> I'm going all in.


Episode Highlights

  • The Rise of Groq
    The purchase of Groq has transformed the landscape of AI processing.
    “Did you realize how insufferable Chamath would become?”
    @ 01m 04s
    March 19, 2026
  • The Operating System of the AI Revolution
    Dynamo is introduced as the operating system for the next industrial revolution.
    “It's called Dynamo.”
    @ 01m 38s
    March 19, 2026
  • The Importance of AI Technology
    AI technology is crucial, and we must approach it with caution and humility.
    “This technology is too important to us.”
    @ 20m 06s
    March 19, 2026
  • A Paradigm Shift in Efficiency
    In just 30 minutes, a PhD-level achievement was accomplished using AI, showcasing its potential.
    “This is unbelievable.”
    @ 29m 17s
    March 19, 2026
  • Support for Families in Iran
    NVIDIA expresses unwavering support for families in Iran amidst ongoing conflicts.
    “They have 100% of our support.”
    @ 38m 02s
    March 19, 2026
  • Optimism for a Stable Middle East
    Jensen Huang shares his belief in a more stable Middle East post-conflict.
    “I believe at the end of the war, Middle East will be more stable than before.”
    @ 38m 25s
    March 19, 2026
  • NVIDIA's Vision for Self-Driving Cars
    NVIDIA aims to empower all car manufacturers to create self-driving vehicles.
    “We want to enable every car company in the world to build self-driving cars.”
    @ 41m 22s
    March 19, 2026
  • A Thousand Flowers Blooming
    NVIDIA's collaborative approach fosters innovation across the tech landscape.
    “A thousand flowers are blooming.”
    @ 42m 36s
    March 19, 2026
  • The Future of AI
    Jensen Huang emphasizes the vast potential of AI beyond current expectations.
    “AI is going to be much bigger than that.”
    @ 48m 03s
    March 19, 2026
  • The Rise of Robotics
    Jensen Huang predicts a future filled with robots in everyday life.
    “We're going to have robots all over the place.”
    @ 53m 38s
    March 19, 2026
  • The Future of Robotics
    Robots will unlock economic mobility opportunities, allowing individuals to create and innovate like never before.
    “The robot is going to end up being the greatest unlock for prosperity.”
    @ 55m 05s
    March 19, 2026
  • Job Transformation in the Age of AI
    While some jobs will be eliminated, many new opportunities will arise as AI transforms work.
    “Every job will be transformed. Some jobs will be eliminated, but many will be created.”
    @ 01h 02m 01s
    March 19, 2026

Key Moments

  • Dynamo Introduction @ 01:38
  • AI Mistrust @ 19:14
  • AI Efficiency Breakthrough @ 29:17
  • Self-Driving Cars @ 41:22
  • Future of Robotics @ 53:38
  • Robots and Prosperity @ 55:05
  • Job Transformation @ 1:02:01
  • AI and Employment @ 1:02:12
