
Inside America’s AI Strategy: Infrastructure, Regulation, and Global Competition

January 23, 2026 | 47:55
Great to see everyone, and I'm thrilled to be able to talk about the issue of the day, and that is artificial intelligence and AI in our world. David, Michael, I'd love you to talk about where we are right now in the pursuit to be the number one, leading AI country. How are we doing, David?
>> I think we're doing great. Maria, last year President Trump gave a major AI policy speech, in July, and he declared that the United States had to win the AI race. He had, first of all, declared that we were in one. I think his speech was reminiscent of when President Kennedy declared that we were in a space race and had to win that race. I think since then what you've seen is that American companies have only innovated more. You're seeing all sorts of really incredible products being released all the time. American AI models, chips, and data centers just keep getting better and better. So I feel very good about the American position in this AI race. Certainly we have some very competent and formidable competitors. China obviously has a lot of very smart people working in this area. But I do think that what you see from American companies in Silicon Valley right now is really incredible.
>> And yet there are still so many questions about all of the spending underway to build this out with regard to data centers. And of course the question keeps coming up: are we spending too much? Will we get the return on investment? How do you see that?
>> I think that we will. I think the reason why you're seeing this huge infrastructure buildout is because the demand is ultimately there. I know a lot of people worry about whether this could be like a dot-com situation. Remember, we had the whole fiber buildout in the late '90s and then we had the dot-com crash. The difference here is that in the late '90s and early 2000s we had a problem known as dark fiber, where you had this fiber buildout and then it didn't get used. There's no such thing as a dark GPU right now. Every GPU that's being put in a data center is getting used, and it's being used to generate tokens, and that's to power this new generation of AI chatbots and coding assistants. There have been some releases in the last couple of months on the coding front that, if you're following what software developers are saying, they're calling mind-blowing, completely revolutionizing their industry. So demand for tokens just increases, and that increases the demand for this data center buildout that we're seeing. So I don't think it's going to stop anytime soon. And just last year this infrastructure buildout added about 2% to the GDP growth rate, and that's what helped propel us to this 4 to 5% growth rate. I think you're going to see something similar this year.
>> Well, it is certainly leading growth, Michael. And I'm so happy to be able to get this conversation going with both of you, who are really leading this. David, thank you. And Michael, thank you. Same question for you, Michael. Assess where we are right now on AI.
>> Just a reminder for the group, for those who haven't been tracking as closely as we do every day: the plan really had essentially three pillars. One, how can the US continue to out-innovate our competitors? Two, how can we drive the infrastructure buildout that we need to support this AI revolution? And three, how do we actually share with the world, or export, our great American technology? For each of those three pillars, there were quite a lot of actions the federal government has taken to drive that forward, and I think we're pretty proud to say we've made pretty good progress on all three. Just focusing a little bit on the innovation pillar you were talking about earlier: the core insight that we've always had about how you drive this innovation is that you have to have a regulatory environment that allows this technology to be developed and ultimately commercialized in the United States. The US has done a great job compared to the rest of the world in setting that up and creating a framework that works. But we can always do better and improve it. The president, in his speech in July, talked a lot about this issue of a patchwork of state regulations and how we can ensure that there aren't 50 different rules around AI. And what's most important about this debate, which I think a lot of people sometimes miss, is that the patchwork is actually most detrimental to early-stage young companies and entrepreneurs. If you want to develop a new AI technology, if you want to build something on top of one of our great frontier models, having to figure out how to navigate 50 different rules across 50 different states creates a lot of friction, and ultimately the big guys are the ones that can succeed best in that environment. So we're spending a lot of time thinking about how you can create a legislative proposal that can actually deliver a sensible national framework to solve this regulatory issue.
>> So what would you say then, Michael, are the basic frameworks that are must-haves in that kind of federal oversight? Because some states did push back in the US and say, "No, no, we want to be able to control our own destiny when it comes to AI." What's most important when you look at that framework in terms of federal oversight?
>> Yeah, I think in the executive order the president signed in December directing us to work through this proposal, he listed a few things that the states should continue to be able to pursue individually on their own. Legislation or rules around child safety was on that list. The rules around permitting of data centers and buildouts continue to be something that states should look at. So there are a few things that were enumerated, but that's the kind of stuff that Dave and I are going to be working through. I don't know if you have any thoughts on that.
>> Yeah, I mean, I think the basic problem that we have is that, frankly, the states are going hog wild right now with regulation. There are over 1,200 bills going through state legislatures right now. I think it's very much a knee-jerk reaction. I know there are a lot of fears and concerns about AI, but it seems like for every hypothetical concern there are multiple state bills now to try and regulate that thing before we really know how it's going to play out. Since this technology is so new and the environment is so dynamic, I think it would be better to spend a little bit more time studying how AI is actually being used and what risks are actually materializing before you overregulate the thing. But in any event, that's what we're seeing right now at the state level. And I think the president's been very consistent that it would be better to have one rule book, a single rule book at the federal level, a lightweight federal standard. I think this problem is only going to get more acute over time, because as you have 50 different states running in 50 different directions, the patchwork problem only gets more significant. So in any event, this is something that we're going to work closely together on this year, which is to see if we can get enough consensus on a federal framework to enact a law. Only Congress can ultimately preempt the states. We understand that. And, as you know, it's very difficult to get a bill through Congress. You need 60 votes in the Senate, so it has to be bipartisan to a certain degree. But we're going to try and see if we can work to get that consensus.
>> Yeah. And do you have any clarity on the timing of that, in terms of support in Congress for federal oversight? Or do you see pushback there as well, depending on the state you're talking about?
>> Well, there's pushback in Congress to the idea of preemption without a federal standard. So, in other words, you can't replace something with nothing. This is the thing that we heard repeatedly. But I think there is quite a bit of interest in both the House and the Senate in having, again, some sort of lightweight federal standard. But we're still in the early stages of those conversations, and we're going to see what we can try and get done this year.
>> Meanwhile, you've got some people pushing back after wanting to see the innovation and growth of data centers. Now they're saying, not in my backyard. What about that? Is that an issue?
>> Yeah, I mean, we got a letter recently from Bernie Sanders saying stop all data centers, all data center development. And if we do that, we will lose the AI race. You do need this infrastructure. Other countries are building out this infrastructure. China is building it out; I think they're spinning up a new nuclear power plant or coal plant, new energy, every single week, and a lot of that is going to power their data centers. So I think it would fundamentally hurt the United States in the AI race if we just stopped building data centers altogether. At the same time, there are concerns about affordability, about whether consumers would have to pay higher electrical rates because of data centers. President Trump's been really clear that consumers should not have to pay higher rates for electricity because of data centers. You saw just last week Microsoft stepped up and made a pledge that its data centers will not cause residential rates to increase. I think you'll likely see other tech companies stepping up and making similar commitments. And in fact, when I've talked to the hyperscalers and when I've talked to the AI companies, it was never their plan to draw off the grid. They all saw standing up their own power generation as part of their buildout. And what Secretary Wright, our Secretary of Energy, has been doing is trying to reform the regulations that actually make it more difficult for these AI data centers to stand up their own power behind the meter. So that basically is our vision, and I should say this has been President Trump's vision really since the beginning of the administration: he said, let the AI companies become power companies, let them stand up their own power generation as they build, side by side with these new data centers. And the result of that is, (a) we get this infrastructure, and (b) residential rates don't go up.
>> Yeah, because, Michael, this race has fast become, it's moved from an AI race to a power race.
>> And I think what we're seeing is that we need to tell a good story about how ultimately this buildout is going to be a net positive for American ratepayers. If you're in a small community and someone shows up to build a data center, you have to make it clear that ultimately this is actually going to lower your rates long term. And the president put out a post on Truth Social last Monday where he was, as David said, very clear that if you're going to build a data center, you have to pay your own way for it. Microsoft has stepped up, and our hope is that many others will do the same.
>> But some companies, because they don't have the cash right now, are borrowing money to build out the data centers, and there's also a worry that the banks will be left holding the bag for some of this because, again, the spending is too much. Your thoughts on that?
>> Well, I think there is obviously that concern. I would say it's less the banks; you see Oracle making a huge investment, you see Blackstone making huge investments, real estate companies. Ultimately, I think these are very savvy market players, very deep-pocketed companies, and they're doing this because they see an ROI there at the end of the rainbow. Can I make one other point about the data centers? Just on electricity, I actually think that if we allow the data centers to stand up their own power generation, it will actually bring down rates. Not only will it not increase residential rates, it'll bring them down. And it'll do that in two ways. One is that the data centers can give or sell power back to the grid when they have excess, so that will help bring down rates. Second, there's a lot of fixed cost involved in power generation; it's not all variable. So when you're able to amortize those fixed costs over a greater supply, you bring down the meter rate for everybody. There are huge economies of scale, so the more scale you get in electricity, like most other things, the more the price comes down. So it's actually a good thing that we have this buildout going on, because it will ultimately reduce prices for consumers. But we do have to make sure that these new data centers aren't just plugging into the grid and drawing from it; they have to be contributing back.
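[A minimal illustrative sketch of the fixed-cost amortization point made above. The numbers are hypothetical, not figures from the conversation: when the same fixed generation and grid costs are spread over a larger volume of delivered electricity, the average per-kWh rate falls even though total variable costs grow with demand.]

    # Illustrative only: hypothetical numbers, not figures from the interview.
    # Shows how amortizing fixed power-system costs over greater supply lowers
    # the average per-kWh rate, per the economies-of-scale argument above.

    def average_rate(fixed_cost_usd, variable_cost_per_kwh, total_kwh):
        # Average rate = (fixed costs + variable costs) / total delivered energy.
        return (fixed_cost_usd + variable_cost_per_kwh * total_kwh) / total_kwh

    FIXED = 1_000_000_000    # assumed $1B/yr of fixed generation and grid costs
    VARIABLE = 0.04          # assumed $0.04/kWh variable (fuel) cost

    rate_before = average_rate(FIXED, VARIABLE, 10_000_000_000)  # 10 TWh of demand
    rate_after = average_rate(FIXED, VARIABLE, 15_000_000_000)   # demand grows 50%

    print(f"average rate before buildout: ${rate_before:.3f}/kWh")  # $0.140/kWh
    print(f"average rate after buildout:  ${rate_after:.3f}/kWh")   # about $0.107/kWh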
>> And I think a great policy change has been made under this administration. The Biden administration, as a matter of policy, had made it such that you couldn't do this behind-the-meter energy generation. If you wanted to bring your own power, you couldn't; you had to be part of the larger grid. So I think that rule has been changed by Secretary Wright and by FERC to allow this to happen. And ultimately I agree with David: once you have greater scale in power generation, you'll be contributing back into the grid in a way that benefits ratepayers.
>> Let's go back to the uses and how AI is changing our lives. You mentioned earlier all of the uses and the impact AI is having. What do you see as the most important use, and where is AI being deployed and implemented best right now?
>> Well, it's interesting. I think there's been an evolution. We started with AI chatbots like ChatGPT, and in a sense that was kind of like better web search. It was really great for research, for asking questions and getting answers to anything. Then we saw models add chain of thought, and they could start to do deeper reasoning. Then we saw coding assistants, and I think over the past few months there's been a real breakthrough. If you talk to software developers, it really seems like there's been a major shift in the quality of the coding assistants. And I think where that's going next is tools for knowledge workers. The same types of assistants that have been outputting code can now output any type of format, whether it's Excel models, PowerPoints, websites, you name it. Knowledge workers are now going to be able to generate all these different types of things the same way that software developers have been using AI to generate code. I think that's one of the big things you're going to see in 2026: this productivity boom for knowledge workers. So that's one of the things you're seeing on the ground. And then separately there's a bunch of things happening in industry verticals, different industries being impacted by AI. In healthcare, I think there's a tremendous opportunity to reduce administrative bureaucracy, to improve the processing of paperwork that happens, and also to use AI in medical and scientific research to help find new cures. You're already seeing users tell all sorts of stories about diagnosis. They've been able to put their medical records into ChatGPT or other chatbots and get remarkable results. They've been able to finally figure out what was wrong with them and take that to a doctor. You have doctors using it too. So medical, I think, is a really interesting area, but there's a whole bunch of these examples of different industries now being impacted.
>> The one area I think a lot about is AI for science, and this goes back to David's initial point about the progress we've seen in these frontier models. The very early ones started with just general knowledge, and you have to go back and understand why: the question was what data was available for those model builders to start training their models, and for the early ones you could just scrape the internet, cram everything into a model, and train it. That's where you had this first phase of large language models. The second phase was coding, and if you think about how you get a really good coding model, you again have to train it on existing code, which is relatively easier to acquire than other types of data, and you saw great progress and jumps in the coding models. I think the third big shift, which hasn't really been touched on yet and which the government itself is trying to do a big push on, is the AI for science question. Why it's so challenging for scientific discovery to tie in with the way LLMs are traditionally trained is that science data is extraordinarily fragmented, and it's not done or formatted in a way that can easily be applied to a large language model training run. If you think about scientific discovery, it's spread out across so many different disciplines. You have chemistry data, you have math data, you have materials science data, and all of that is in all types of different formats. As for our effort in the administration, we launched something called the Genesis Mission, which is our attempt to make these big, bold leaps in AI for scientific discovery. Our national labs at the Department of Energy have been doing incredible research over the last 50, 60 years, and all of that is sitting there, ready to be used to train these models. So my hope is that over the next year we're going to see a lot more work in scientific discovery, to actually accelerate how quickly we can choose which experiments to run, run those experiments, go back and figure out what we did wrong, and run them again. And this ties in with lots of interesting ideas that people have around some of these AI labs, where you can essentially put in the thesis or the hypothesis and ultimately the lab can do the experiment itself and move forward. So that's the dream I have: that ultimately we as a country can almost double our R&D output over the next 10 years because of AI.
>> So what kind of breakthroughs would you expect, or would you like to see?
>> Yeah, I think the ones that can make a big impact are, first, the experimentation and training runs around fusion, which are extraordinarily computation-heavy. If we can have a faster feedback loop on how we do these simulations for fusion, we can move the timelines in for fusion. So that could be a big step. Materials science is also a very big area, where you want to be able to test all types of different molecules and how they interact with each other. This is important for all the big things we're trying to do in space: whether it's our lunar base or getting to Mars or bringing nuclear energy to space, having advanced materials science is important. And the third is one that everyone always cares about, which is healthcare and therapeutics. How can you more quickly identify the best molecules to solve a particular health challenge, and how do you more quickly iterate to a point where you can move to a clinical trial?
>> And on an everyday level, I mean, you also have the auto sector, I think, as a big beneficiary here. I think that's one area that seems to be spending a lot on this as well. Do you agree with that?
>> Well, I mean, with self-driving, or...
>> I mean, self-driving for sure is going to be huge. It feels like we've hit some sort of new inflection point there, where the quality has gotten to the point where you're starting to see robotaxis now, Waymo and Tesla.
>> What about an AI assistant? I mean, is that going to be something that is commonplace? Someone said to me the other day, oh, in China we're doing things so much differently, because you're using AI for research, as you said, but we're using it as, I have my AI assistant, and they're paying my bills and cleaning my house and buying my wife a birthday present and doing everything for me.
>> I think so. I think that'll happen probably this year. The product that just came out recently that everyone's kind of going crazy over is the latest iteration of Claude Code, which is powered by Anthropic's Opus 4.5 model, which seems to be a real breakthrough in coding. So again, the software developers are really impressed with it. But inside of Claude Code they introduced a new tab called co-work. Again, as a non-coder, or as someone who is looking to create output other than code, you can now use it to basically create all sorts of other kinds of outputs. Like I mentioned, you can do spreadsheets or PowerPoints, things like that. And you can point it to your file drive and it can look at the work you've already done. So if there's a particular type of format for a PowerPoint you like, you just point it to the work you've already done and say, I want to do a new presentation using this style but on this topic, and it'll actually emulate your style and the format of the work you've already done. People are very impressed with this. You can also point it at your email and have it analyze your email and pull things out of it. So right now it's very task-based: you, the user, have to prompt it for each task. But you can see there the beginning of a personal digital assistant, where you connect it to your file drive, to your email, to all of your data sources, and it can start to do tasks for you, and again, it understands the format and the style that you like to produce work in. So it feels to me like we just need one more layer of abstraction on top of a tool like that and you'll have your own personal digital assistant. And there'll be a voice interface. Have you ever seen the movie Her, with Joaquin Phoenix, where I think Scarlett Johansson is just the voice, but he's telling her what to do through an earpiece? We're very close to something like that. I'm not saying that the AI is going to become sentient or whatever, but I think in 2026 you could see these types of tools, which again started as coding assistants, become personal digital assistants. That could definitely happen this year.
>> Michael, what don't people understand about AI? What do you think is most important for us to understand about the innovation underway right now with science and AI?
>> I think it's easy to underestimate the long-term impact this is going to have across so many industries and domains. It's easy to quickly think of AI as just a sophisticated chatbot, because that's what most people interact with every day and that's what they touch and feel. But to me, the long-term impacts, and not to keep harping on the science, I think there's a real fundamental shift happening in the velocity and pace at which we can test, evaluate, and execute scientific discovery and endeavors. And I think that's going to have huge repercussions for the way that we as a country innovate, broadly speaking, in the years ahead.
>> Which is why we're watching what China is doing. Let's talk a bit about China and where it is relative to the United States. Are we winning? Is it about chips? What's the race specifically really about?
>> Well, I think that in general we're ahead of China. There are different layers of the stack. You've got the models, then you've got the chips, and then you've got the chipmaking equipment, so you go down the stack. I would say that the deeper in the stack you go, the greater the American advantage. On models, most people would say our models are maybe six months ahead, plus or minus, of the Chinese models. You look at chips, maybe two years ahead. You go to the semiconductor manufacturing equipment, and it could be like five years. So the US does have significant advantages there. There are only maybe a couple of areas where I think China has an advantage. One is energy production. If you look at their grid, their grid has roughly doubled in the last 10 years, whereas ours has only grown by about 2 to 3%. Energy production in the US had been a relatively sleepy industry before AI came along, and a lot of that had to do with regulations and the antipathy of the previous administration towards energy production. Obviously, President Trump had a very different view on this. I think he was prescient on this issue. You go back 10 years and he was talking about drill, baby, drill, and I think he understood that energy growth was the precondition for economic growth, and it's definitely the precondition for this AI infrastructure growth. So this is an area where, again, we have to expand our energy production, and I think that is an area where we need to catch up. The other area, and I don't know if I would call this an advantage exactly, but you could argue that China has the edge in what's being called AI optimism. There was polling done by Stanford across countries, and they asked the citizens of all these different countries: do you feel that AI will be more beneficial or more harmful? If you thought that overall it would be more beneficial than harmful, they call that AI optimism. Well, in China, AI optimism was 83%. So 83% of the population feels that it's more beneficial than harmful. That number in the United States is only 39%. So for some reason people in China are more optimistic about AI than in the United States, and you generally see this: Asian countries are very high on AI optimism and Western countries are lower. And I think it's an interesting, or open, question why this is. I think there are a few possible explanations for it. First of all, the media tends to focus on the doom and gloom stories with AI...
>> The fear.
>> The fears. And we can talk about some of those fears and whether we think they're real. But I think the media has a lot to do with it. I think the way Hollywood has portrayed AI over the decades, whether it's the Terminator or 2001, has portrayed this dystopian view of the future, and I think that plays into and fuels people's thinking. And then, frankly, I would say that part of the fault lies with our tech leaders, who haven't necessarily done a great job describing the benefits of AI. In fact, when they're talking about AI eliminating 50% of knowledge workers, that doesn't sound like a very utopian scenario. That sounds dystopian to most people. And so I do think that, unintentionally, some of our tech leaders have played into this AI pessimism. And the reason I think this could be a disadvantage for the United States is because, again, it's feeding into this regulatory frenzy we're seeing, again, 1,200 bills at the state level. Right now I think we are winning this AI race. We're ahead in all the key dimensions, chips, models, and so on. But we could shoot ourselves in the foot: if we end up overregulating this thing to death, we could actually cost ourselves this AI race. So I do worry about this question of AI optimism.
>> Right. It's a great point. And what would happen if the US is not number one in this, Michael?
>> Yeah, I think we need to be, and that's why we put the plan out. When I think about the China question and about the larger question of how we win the AI race, what I always like to think about is this question of adoption. I think sometimes there's this overemphasis on the leaderboard, on which frontier model is number one on some metric, and the reality is we're neck and neck. As David said, we're probably ahead by six to 12 months on our frontier models. But I think what we have seen over time and over history is that you don't necessarily need to have the very best model or the very best piece of technology in the world for it to proliferate globally. A lot of us who were part of the first Trump administration saw this very firsthand with the telecom wars of that era and what Huawei was able to do globally. At the time when Huawei first started their global export push, they certainly were not the very best technology in the world. They were certainly subpar compared to Ericsson and Nokia, but they were good enough and they were subsidized enough that they became the default telecom system for a lot of the world. We've learned a lot of lessons from that, and we take it very seriously. When it comes to AI, we know there's ambition for the Chinese to export their models and have them be the models that are powering all these different use cases across the global south and across the rest of the world. That's why the president launched something called the American AI export program. Our mission, and I think we're in a very lucky position here compared to what we were dealing with with Huawei, is, as David said, we are dominant in almost every part of the stack. We have the very best models, we have the best applications, we have the very best chips. So we are in a position of power now, and it's up to us as a country to share that technology with the world, with all of our partners and allies, and make sure that any developer anywhere in the world who wants to build a new application using AI is fine-tuning an American model on top of an American chip. That isn't a hard reality to see; that is something I think we can very easily do, just because we have the very best tech. That's a program we launched late last year, and we're doing a big push this year to get it out the door.
>> It's an important point that you make in terms of exporting AI to the rest of the world. Is it true that China is telling its companies, don't use American chips, don't use American AI right now?
>> It seems so. I mean, China is developing its own models. Obviously, about a year ago you had the DeepSeek moment, where you had a powerful model released by DeepSeek, and I think that kind of put Chinese AI on the map in a way. I think people in the West didn't realize how good China was at producing models, and there was a little bit of complacency about our relative position. People weren't really talking about the global competition two years ago; it wasn't really discussed at all. I remember when the Biden administration created this 100-page executive order regulating AI, no one was talking about whether all this regulation would slow us down vis-à-vis China. It wasn't even part of the conversation. Then DeepSeek launched, and I think we did realize we're in a global competition and we have to win, and that's why we have to actually be quite careful about how we regulate this and make sure we're not overregulating it. But I think China definitely wants to compete. There have been some stories recently, I think Bloomberg and Reuters reported, that they actually are not allowing Nvidia chips into their country, and the reason for that, we think, is that they want to indigenize chip production. They want to stand up Huawei as their national champion, and effectively they're creating a market subsidy for Huawei by keeping out the competition. So they're protecting their market to stand up Huawei. And I think their plan would be to have Huawei dominate chips in China first and then use that to scale up and then try to take over the rest of the world. Chip production is a scale business. So if they can dominate the Chinese market first, that gives them a powerful platform to then proliferate to the rest of the world.
>> So where are we in that, Michael? I mean, first you all came up with the AI Action Plan, then came up with another plan in terms of exporting AI to the rest of the world. What can you tell us in terms of where we are in that?
>> Yeah, so progress is moving on that. We closed a request for information from the Commerce Department late last year, which went out to industry and said, hey, if we want to export the American AI stack, what should we be thinking about? How should we be designing these packages that we share with the world? Commerce is now ingesting that information. There'll be a request for proposals that comes out very shortly, and that's where we actually want companies to come together to form consortia and say, look, this is what a package looks like. And what I always try to remind people is that the buyers of AI around the world vary quite dramatically in their level of sophistication. In the US, if you're a Fortune 500 company and you want to deploy AI, you have a pretty sophisticated CIO or CTO shop. You're thinking very carefully about which cloud you want to buy, which potential model you want to use, whether you want to fine-tune it on your own data, whether you want to build your own application, which applications you go and use. You can test various things, go to all these third parties and evaluate which is best, and it's a very complicated mix of how you end up creating something that's optimal for your particular company. For a lot of countries around the world that are aspiring to use AI for their people, or to support their services, whether it be healthcare or tax collection or whatever it may be, they don't have a billion-dollar IT budget. They're just trying to figure out, what is a tool that I can use in my country to deliver the benefits of AI to my people? So we think very carefully about how we can craft solutions, turnkey could be one way to put it, or how you provide a solution that can easily be deployed in a country. And what often gets caught up in this debate is the question of how many chips the US is going to be sending around the world. What I always try to remind people is that outside of the US, China, and maybe a few other countries, most countries around the world do not have the capital or the aspiration to do large-scale training runs or development of their own frontier models. There are very few countries around the world that are going to build Colossus-style training centers. Most countries around the world need smaller data centers that just have inference-related chips that can do the inference on the particular runs that the government wants to have. So I think what we're working very hard to do is create these turnkey, manageably sized AI solutions, and then we can partner with a lot of our export finance organizations, like the Development Finance Corporation or the Export-Import Bank, to make the export of that particular stack much more appealing and commercially viable in countries that are not extraordinarily deep-pocketed. We're going to be in India next month for the India AI Impact Summit. This is sort of the largest global gathering for AI folks, and we're going to be sharing a lot more on the progress of this program there.
>> You want to weigh in?
>> Well, just to build on that: people sometimes ask, how will you know if you've won the AI race, with China, with other countries? And I think there's a very simple answer to that, which is market share. If in five years we look around the world and we see that American chips and models are being used everywhere, well, that means we won. But if in five years we look around the world and it's Huawei chips and DeepSeek models, then that would be very bad, right? That would be a bad sign. That means that we lost. So I do think that the proliferation, or diffusion, of American technology is really critical to winning this AI race. We know from Silicon Valley that the companies that end up becoming huge are the ones that create ecosystems. As a technology company, you want to have the most apps in your app store. You want to have the most developers writing on top of your API. You want to be a platform company. And so in all these technology races, the biggest ecosystem wins. And that's basically why I think this program is so important: we want to create the biggest ecosystem. Now, this is not only about benefiting the US, because in order to have a successful ecosystem you have to create value for your partners, and that's really important. Like Michael's saying, not every country is going to be on the cutting edge of developing its own chips or its own frontier models, but they can use these tools to derive value, to apply them to their businesses and their economies, to extract value and be part of this technological revolution. So I think we have to think about this with a partner mindset, and I do think that this type of mindset is actually very common in Silicon Valley. Like I mentioned, I think every great technology company thinks in terms of how do we get the most people on top of our tech stack. But it is a form of thinking that's pretty alien to the bureaucracy in Washington, which has much more of a command and control type of mindset.
>> Yeah.
>> And when President Trump came into office, just to give a couple of examples of this, the regulations that were sitting on our desk, that had just been handed down by our predecessors: again, we had this 100-page Biden executive order on AI that was all this new regulation, and there was a 200-page rule called the Biden diffusion rule, which was 200 pages of regulations on the export of semiconductors. So we were turning the AI industry, models and chips, into a highly regulated industry. That was basically the direction that Washington was going in. And the first thing President Trump did his first week in office was rescind all of those unjust regulations, which I think was absolutely critical. The thing that really makes Silicon Valley special is this concept of permissionless innovation. Ever since Hewlett and Packard, 85 years ago, started building Silicon Valley, the idea has always been that a couple of founders with a great idea start their company, they get some angel investors to write a check for seed capital, and those investors think they're probably going to lose their money, but they figure there's a shot. It could be the two guys in their garage or it could be the college dropout in the dorm room, and they don't need to go to Washington to get permission for their idea. It's permissionless innovation. That's what has made Silicon Valley the crown jewel of the world, and it's why so many of the heads of state who are here are always asking, how do we create our own Silicon Valley? That was not the direction we were on when President Trump came into office. The 300 pages of new regulations concerning AI that the Biden administration left us with would have changed this environment of permissionless innovation into an environment of, you have to go to Washington to get approval for your idea. And I think President Trump really corrected that, and since then we've been implementing his AI Action Plan, which is all about pro-innovation, pro-infrastructure, pro-energy, and pro-export. So it's been, I think, a total change, and I think just in the past year you've seen the results of that.
>> And one thing to add there: part of the international agenda that we have on AI is, one, obviously, let's do the export, but the other piece is trying to share with all of our partners and allies how you can actually create a regulatory environment that allows this technology to succeed. Here we are in Europe, and I think many of us who have tried to work with technology companies in Europe have hit a lot of roadblocks and a lot of stumbles, and no matter what, the Draghi report came out and he can say that there are a lot of issues, but things don't ever seem to really change. And I think the way that our regulatory structure is designed in the US, and the way that the entrepreneurial spirit thrives in the US, is something that we try to share with countries all around the world. The general knee-jerk reaction for most policymakers around the world is one that moves to a corner that is obsessed with the precautionary principle: this concept that every time something new comes out, the role of the policymaker is to sit in a room and whiteboard everything that could go wrong, and then design regulations to make sure those hypothetical wrong things don't happen. When in reality, what we try to do in the US is sit in a room and whiteboard what rules we can create to actually unlock innovation, what rules we should remove to allow more innovation to happen. And I think that mindset is something that we constantly try to share at all these international fora. There has been an A/B test on what regulatory structure works and what succeeds. We've seen how Europe has approached this in the last 20 years, and we've seen what the US has done. So I think the recipe is kind of obvious, but sometimes we have to just keep repeating it to our counterparts.
>> And I love the Draghi report, because it so clearly identified companies that are in Europe, you know, Novo Nordisk is like a $350 billion or $400 billion company, and in America we've had trillion-dollar companies, Nvidia hitting $5 trillion. So what is the path to innovation?
Well, I I I think part of it is and I I
00:40:46
I think this is the difference between
00:40:49
maybe the American mindset and the
00:40:50
European mindset towards this is that
00:40:53
ultimately the innovation in the United
00:40:55
States comes from the private sector. It
00:40:57
comes from the entrepreneurs, the
00:40:58
founders, the innovators, the geniuses
00:41:00
with an idea. And I think that the
00:41:04
government sees its role, at least when
00:41:06
it's thinking properly about this, as
00:41:08
being an enabler and as just setting the
00:41:11
rules of the road. um and maybe putting
00:41:13
in some guard rails, but basically it's
00:41:15
letting the entrepreneurs cook and
00:41:18
that's how you get innovation. And now I
00:41:21
don't want to bash our European host too
00:41:24
much, but you know the when when the uh
00:41:27
when the EU talks about AI leadership,
00:41:30
they're talking about the regulators and
00:41:32
they think their value ad is well, we're
00:41:34
going to we're going to show the whole
00:41:35
world the regulatory model for AI. So,
00:41:37
it's kind of a bad case of u main
00:41:40
character syndrome where uh you know
00:41:42
where like the regulators think they're
00:41:44
the main characters in this. No, look,
00:41:46
the regulators are the supporting
00:41:48
players. The main characters always have
00:41:50
to be the entrepreneurs. It's got to be
00:41:52
the innovators. That's how you unlock
00:41:54
innovation. When the when you start to
00:41:57
see yourself, I mean, the the regulators
00:41:59
and the policy makers as the as the main
00:42:01
characters, that's not a great recipe
00:42:03
for innovation. And I think just just a
00:42:05
minor point on the on the AI stuff in
00:42:07
Europe that you know the EUAI act which
00:42:09
has been so detrimental to to the AI
00:42:12
ecosystem here here in Europe was passed
00:42:15
before chat GPT was even invented and
00:42:18
that shows the challenge here. You're
00:42:20
you're you're believing that you can
00:42:22
solve some kind of problem or some
00:42:23
you're solving something but the end of
00:42:25
the day innovation is moving so much
00:42:26
more quickly and ultimately that that
00:42:28
rule makes no sense now in a world of of
00:42:30
frontier models large language models
00:42:32
and they have to sort of edit it. So,
00:42:33
let me push back before we go and ask
00:42:36
you to identify any risks or threats or
00:42:40
downside risks in all of this. What
00:42:43
should we be worried about, if anything,
00:42:45
with regard to AI usage?
00:42:48
>> Well, I think there are Orwellian
00:42:51
scenarios of AI that I think we
00:42:54
should be concerned about. And again, I
00:42:57
tend to think that those scenarios
00:42:59
were described by George Orwell, not by,
00:43:01
you know, James Cameron and the
00:43:02
Terminator. And specifically, it's
00:43:04
misuse of AI by government. I do think
00:43:07
that AI could be used as a tool to
00:43:12
surveil, to censor, to even potentially
00:43:15
brainwash the population. This is why
00:43:17
the administration has taken such a firm
00:43:19
stance against what it has called woke AI,
00:43:23
though I almost think that name
00:43:26
maybe trivializes the magnitude of the
00:43:26
problem we're talking about. We're
00:43:27
talking about AI having a political bias
00:43:29
built into it. And the bias can
00:43:33
be so subtle that people don't even
00:43:34
necessarily notice over time, but it has
00:43:36
a huge impact on what people are allowed
00:43:39
to learn and think and know, and what,
00:43:41
you know, children learn. And so I think it's
00:43:44
very important that we try to make sure
00:43:46
that AI is politically unbiased.
00:43:49
Just in this regard, one of the
00:43:52
things that we were so concerned about
00:43:54
with that Biden executive order on AI,
00:43:56
which we rescinded in the first week, is
00:43:58
that it had 20 pages of language on DEI
00:44:02
and it was promoting this idea that AI
00:44:05
models need to build in a DEI layer.
00:44:08
Well, you know, this is how you ended up
00:44:10
with the black George
00:44:12
Washington story, where
00:44:15
the first version of Gemini came
00:44:18
out and was
00:44:19
basically rewriting history to serve a
00:44:22
current political agenda of DEI. And,
00:44:25
you know, in a way,
00:44:27
that case of bias was so ludicrous
00:44:30
that everyone kind of laughed at it. But
00:44:32
it gives you a sense of what could
00:44:34
happen if you start to build the
00:44:37
bias into AI. And, you know, that same
00:44:41
so-called trust and safety
00:44:43
apparatus that was starting to be built
00:44:45
into social media sites as a way to
00:44:48
censor and deplatform and shadowban. You
00:44:51
could see that being built into AI
00:44:53
models as a way to control the
00:44:57
public discourse in a very serious
00:44:59
way. And I think that, you know,
00:45:02
President Trump again just put a total
00:45:04
halt to that, you know, rescinded that.
00:45:07
But also, President
00:45:09
Trump signed an executive order saying
00:45:11
that the federal government would not
00:45:12
procure politically biased AI. So look,
00:45:16
on a First Amendment basis, if an AI
00:45:18
company wants its AI to be biased in
00:45:21
some direction, they probably have a
00:45:23
First Amendment right to do that. But we,
00:45:26
as the federal government, have the
00:45:28
discretion not to buy that software and
00:45:30
we've said that we won't. So, I feel
00:45:33
very good that during President Trump's
00:45:35
term in office for the next three
00:45:37
years, this idea of Orwellian AI is not
00:45:40
going to be a problem. But I do worry
00:45:42
that at some point in the future, if you
00:45:44
had a different regime in Washington,
00:45:47
you know, if the federal government
00:45:49
started to pressure AI companies to
00:45:52
build in this political bias, that would
00:45:54
be a very serious threat, I think, to
00:45:57
our freedoms. It's a great point
00:45:58
to make. Before we wrap up real quick on
00:46:00
jobs, can either of you explain what
00:46:03
Elon Musk is saying about the impact
00:46:05
of AI? He said we're not going to need to
00:46:07
work. You know, the AI is going
00:46:10
to do it all. I'm just trying to
00:46:12
understand what he's saying: that
00:46:14
we're going to go on holiday? Jobs
00:46:16
are going away and AI is going to do
00:46:18
everything? Well, Elon's a friend of
00:46:21
mine, and I'll disagree
00:46:25
with him slightly on this, but
00:46:27
let me just say:
00:46:29
his comment about the job loss
00:46:32
obviously is what gets all the
00:46:33
headlines, but at the same time
00:46:34
he's also saying that in
00:46:37
this future there's going to be so much
00:46:39
abundance that everyone's going to have
00:46:40
what they want and there's not going to
00:46:42
be any money. So people leave out
00:46:46
that part of the story and they just
00:46:47
report Elon says everyone's going to
00:46:48
lose their jobs. No, we're talking about
00:46:50
a radically different future. It could
00:46:52
be the future that's kind of described in
00:46:53
Star Trek, you know, where there is
00:46:56
no money because we have everything.
00:46:58
Look, I think that, you know, Elon is
00:47:01
directionally correct about the future.
00:47:02
I think we are heading
00:47:05
toward a world of much greater
00:47:06
abundance, rising living standards for
00:47:09
everybody, greater productivity. I think
00:47:11
that will lead to rising wages. I don't
00:47:13
think it's going to put everyone out of
00:47:14
work. I don't think that's going to
00:47:15
happen. But again, the timelines
00:47:18
matter a lot. And you know, getting to a
00:47:20
world with no money is not something
00:47:22
that's going to happen in the next 5
00:47:23
years.
00:47:24
>> And of course, Michael, this is helping
00:47:25
us in terms of longevity and living
00:47:29
longer, right? In terms of the impact on
00:47:31
science.
00:47:32
>> Totally. I think generally the
00:47:34
abundance story extends well
00:47:36
into, you know, health care and
00:47:39
everywhere else, and just
00:47:40
quality of life. So, good things ahead.
00:47:42
I think
00:47:43
>> We'll leave it there. Michael Kratsios
00:47:45
and David Sacks, thanks so much.
00:47:46
>> Thank you.

Podspun Insights

In this episode, the conversation dives deep into the current landscape of artificial intelligence and the race to dominate this transformative technology. David and Michael share their insights on the U.S. position in the AI race, drawing parallels to the historic space race. They discuss the rapid innovation in American AI, the infrastructure buildout, and the regulatory challenges that could hinder progress. With a focus on the three pillars of innovation, infrastructure, and global export, they explore how the U.S. can maintain its lead against formidable competitors like China.

The episode also highlights the potential of AI in various sectors, from healthcare to coding, and the importance of a cohesive regulatory framework that fosters innovation rather than stifles it. As they navigate the complexities of state versus federal regulations, the duo emphasizes the need for a unified approach to avoid a chaotic patchwork of laws that could disadvantage startups and entrepreneurs.

Listeners are treated to a glimpse of the future, where AI could revolutionize everyday tasks and scientific discovery, while also addressing the ethical implications of AI use. The discussion culminates in a reflection on the broader societal impacts of AI, including job displacement and the potential for a future of abundance. With a mix of optimism and caution, this episode is a thought-provoking exploration of the AI landscape and its implications for the future.

Badges

This episode stands out for the following:

  • Most satisfying: 90
  • Best concept / idea: 90
  • Biggest cultural impact: 90
  • Best overall: 88

Episode Highlights

  • The AI Race
    Experts discuss the U.S. position in the global AI race and the need for innovation.
    “We're in an AI race, and we have to win it!”
    @ 00m 33s
    January 23, 2026
  • Infrastructure Buildout
    The demand for AI infrastructure is driving economic growth and innovation.
    “This infrastructure buildout added about 2% to the GDP growth rate.”
    @ 02m 37s
    January 23, 2026
  • AI in Healthcare
    AI is transforming healthcare by improving diagnosis and reducing bureaucracy.
    “AI is a really interesting area for improving medical processes.”
    @ 15m 01s
    January 23, 2026
  • AI's Potential for R&D
    AI could significantly boost the country's research and development output in the next decade.
    “We can almost double our R&D output over the next 10 years because of AI.”
    @ 17m 19s
    January 23, 2026
  • AI Optimism Gap
    A stark contrast in AI optimism exists between China and the United States, with China leading.
    “In China, AI optimism was 83%. In the US, it’s only 39%.”
    @ 25m 16s
    January 23, 2026
  • The AI Race and Regulation
    The US is ahead in AI technology, but overregulation could jeopardize its position.
    “We’re neck and neck in the AI race, but we need to be careful about regulation.”
    @ 27m 30s
    January 23, 2026
  • Winning the AI Race
    Market share will determine the victor in the global AI landscape.
    “You've won the AI race, you know, with market share.”
    @ 34m 32s
    January 23, 2026
  • Ecosystem Dominance
    In technology, the biggest ecosystem wins, emphasizing the importance of collaboration.
    “The biggest ecosystem wins.”
    @ 35m 24s
    January 23, 2026
  • Permissionless Innovation
    Silicon Valley thrives on innovation without bureaucratic hurdles, a stark contrast to Washington.
    “It's permissionless innovation that makes Silicon Valley special.”
    @ 37m 10s
    January 23, 2026
  • Entrepreneurs vs. Regulators
    The focus should be on empowering entrepreneurs rather than letting regulators take center stage.
    “The regulators are the supporting players. The main characters are the entrepreneurs.”
    @ 41m 46s
    January 23, 2026
  • Future Abundance
    A future of greater abundance and rising living standards is on the horizon.
    “We're heading toward a world of much greater abundance.”
    @ 47m 02s
    January 23, 2026

Key Moments

  • Infrastructure Growth @ 02:37
  • Power Race @ 09:51
  • Healthcare Innovation @ 15:01
  • R&D Boost @ 17:19
  • AI Optimism @ 25:16
  • AI Race Victory @ 34:32
  • Innovation Mindset @ 37:10
  • Entrepreneurial Focus @ 41:46
