AI Doom vs Boom, EA Cult Returns, BBB Upside, US Steel and Golden Votes

May 31, 2025 / 01:29:41

This episode of the All-In podcast features discussions on AI doomerism, job displacement, and the implications of government regulation on technology and the economy. The hosts, including David Sacks, Chamath Palihapitiya, and Jason Calacanis, debate the potential impact of AI on employment and the economy, referencing insights from Dario Amodei and the AI safety bill.

The conversation begins with a focus on the concerns surrounding AI and its potential to displace jobs, particularly in sectors like tech, finance, and transportation. David Sacks emphasizes the need for a balanced view on AI's impact, arguing that while job loss is a concern, the technology could also lead to increased productivity and job creation.

Chamath Palihapitiya shares his perspective on the rapid pace of change and the potential for new opportunities, while also acknowledging the challenges faced by entry-level workers. The hosts discuss the role of government regulation in shaping the future of AI and the economy, with Sacks cautioning against fear-mongering that could lead to unnecessary restrictions.

The episode also touches on the broader implications of AI for global competition, particularly between the US and China. The hosts express concerns about the potential for overregulation to stifle innovation and economic growth.

Overall, the episode presents a nuanced discussion on the intersection of technology, employment, and government policy, urging listeners to consider both the risks and opportunities presented by AI.

TL;DR

The episode discusses AI doomerism, job displacement, and government regulation's impact on technology and the economy.

Video

00:00:00
All right, everybody. Welcome back to
00:00:01
the All-In podcast. The number one
00:00:03
podcast in the world. You got what you
00:00:05
wanted, folks. The original Quartet is
00:00:08
here live from DC with a great shirt. Is
00:00:12
that is your haberdasher making that
00:00:13
shirt or is that a Tom Ford? That white
00:00:16
shirt is so crisp, so perfect. David
00:00:18
Sacks, you're talking about me. You're
00:00:20
czar. You're czar-y. I'll tell you exactly
00:00:22
what it is. I'll tell you what it is.
00:00:24
You can tell me if it's right. Brioni.
00:00:25
Yes, of course it's Brioni. Brioni
00:00:28
spread collar. Look at that. How many years
00:00:30
have I spent being rich? When a man
00:00:32
turns 50, the only thing he should wear
00:00:34
is Brioni. The stitching is Looks very
00:00:37
luxurious. That's how Chamath knew,
00:00:38
right, Chamath? How'd you figure it out?
00:00:40
The stitching? It's just how it lays
00:00:42
with the collar. To be honest with you,
00:00:43
it's the button catch. The Brioni has a
00:00:46
very specific style of button catches.
00:00:47
If you don't know what that means, it's
00:00:49
because you're [ __ ] ignorant,
00:00:50
malcontent yourself. I'm looking it up
00:00:52
right now. Right. Yeah.
00:00:55
I just asked you to continue.
00:00:59
We'll let your winners ride.
00:01:02
[Music]
00:01:07
We open source it to the fans and
00:01:08
they've just gone crazy with it.
00:01:14
All right, everybody. The All-In Summit
00:01:15
is going into its fourth year September
00:01:17
7th through 9th and the goal is of
00:01:19
course to have the world's most
00:01:20
important conversations. Go to allin.com
00:01:23
yada yada yada to join us at the summit.
00:01:25
All right. There's a lot on the docket, but
00:01:27
there's kind of a very unique thing
00:01:29
going on in the world. David, everybody
00:01:31
knows about AI doomerism. Basically,
00:01:33
people who are concerned uh rightfully
00:01:35
so that AI could have some, you know,
00:01:38
significant impacts on the world. Dario
00:01:41
Amodei said he could see unemployment spike
00:01:44
to 10 to 20% in the next couple years.
00:01:46
It's 4% now as we've always talked
00:01:47
about here. He told Axios that AI
00:01:49
companies and government needs to stop
00:01:51
sugar coating what's coming. He expects
00:01:53
a mass elimination of jobs across tech,
00:01:55
finance, legal, and consulting. Okay,
00:01:57
that's a debate we've had here. And
00:01:59
entry-level workers will be hit the
00:02:01
hardest. He wants lawmakers to take
00:02:03
action and more CEOs to speak out.
00:02:06
Polymarket thinks regulatory capture via
00:02:08
this AI safety bill is very unlikely. US
00:02:10
enacts AI safety bill in 2025 currently
00:02:13
stands at a 13% chance. But uh Sacks, you
00:02:15
wanted to discuss this because it seems
00:02:17
like there is more at work than just a
00:02:20
couple of technologists with I think
00:02:22
we'd all agree there are legitimate
00:02:25
concerns about job destruction or job
00:02:29
and employment displacement that could
00:02:30
occur with AI. We all agree on that
00:02:32
where we're seeing robo taxis start to
00:02:34
hit the streets and I don't think
00:02:35
anybody believes that being a cab driver
00:02:37
is going to exist as a job 10 years from
00:02:38
now. So there seems to be something here
00:02:42
about AI doomerism but it's being taken
00:02:44
to a different level by a group of
00:02:46
people maybe uh with a different agenda.
00:02:49
Yeah. Well, first of all, let's just
00:02:50
acknowledge that there are concerns and
00:02:53
risks associated with AI. It is a
00:02:56
profound and transformative
00:02:58
technology and there are legitimate
00:03:00
concerns about where it might lead. I
00:03:02
mean the future is unknown and that can
00:03:05
be kind of
00:03:06
scary. Now, that being said, I think
00:03:09
that when somebody makes a pronouncement
00:03:12
that says something like 50% of white
00:03:14
collar jobs are going to be lost within
00:03:15
2 years, that's a level of specificity
00:03:19
that I think is just unknowable and is
00:03:22
more associated with an attempt to grab
00:03:24
headlines. And to be frank, if you go
00:03:27
back and look at Anthropic's
00:03:30
announcement or Dario's announcement,
00:03:33
there is a pattern of trying to grab
00:03:35
headlines by making the most
00:03:36
sensationalist version of what could be
00:03:40
a legitimate concern. If you go back
00:03:42
three years ago, they created this
00:03:44
concern that AI models could be used to
00:03:47
create bioweapons.
00:03:49
And they showed what was supposedly a
00:03:52
sample I think of Claude generating an
00:03:56
output that can be used by a
00:03:57
bioterrorist or something like that.
00:03:59
And on the basis of that it actually got
00:04:01
a lot of play and in the UK Rishi Sunak got
00:04:06
very interested in this cause and that
00:04:08
led to the first AI safety summit at
00:04:10
Bletchley Park. So that sort of concern
00:04:13
really drove some of the initial AI
00:04:16
safety concerns. But it turns out that
00:04:17
that particular output was discredited.
00:04:20
It wasn't true. I'm not saying that AI
00:04:23
couldn't be used or misused to maybe
00:04:26
create a bioweapon one day, but it was
00:04:28
not an imminent threat in the way that
00:04:30
it was portrayed. There have been other
00:04:31
examples of this. You know, obviously
00:04:33
people are concerned about could the AI
00:04:35
develop into a super intelligence that
00:04:37
grows beyond our control. Could it lead
00:04:39
to widespread job loss? I mean, these
00:04:42
are legitimate things to worry about,
00:04:44
but I think these concerns are being
00:04:46
hyped up to a level that there's simply
00:04:48
no evidence for. And the question is
00:04:50
why? And I think that there is an agenda
00:04:54
here that people should be concerned
00:04:56
about. So, let's start with maybe
00:05:00
Friedberg, things that we all agree on
00:05:02
here. There are millions of people who
00:05:05
drive trucks and Ubers and Lyfts and
00:05:08
DoorDashes.
00:05:10
You would, I think, agree the majority
00:05:12
of that work in but 5 to 10 years, just
00:05:15
to put a number on it, will be done by
00:05:18
self-driving, robots, cars, etc.,
00:05:21
trucks. Yeah, Dave, I think it's that
00:05:24
might be the wrong way to look at it or
00:05:26
I wouldn't look at it that way. And
00:05:27
maybe I'll just give frame it a
00:05:29
different way, please. If I'm deploying
00:05:32
capital, let's say I'm a CEO of a
00:05:35
company and I can now have
00:05:40
software that's written by AI. Does that
00:05:43
mean that I'm going to fire 80% of my
00:05:45
software engineers? Basically, it means
00:05:48
one software engineer can output, call
00:05:50
it 20, 50 times as much software as they
00:05:53
previously could by using that software
00:05:55
generation tool.
00:05:57
So the return on the invested capital,
00:06:01
the money I'm spending to pay the salary
00:06:03
of that software engineer is now much
00:06:06
much higher. I'm getting much more out
00:06:08
of that person because of the unlocking
00:06:10
of the productivity because of the AI
00:06:12
tool than I previously could. So when
00:06:15
you have a higher ROI on deployed
00:06:18
capital, do you deploy more capital or
00:06:21
less capital? Suddenly you have this
00:06:23
opportunity to make 20 times on your
00:06:25
money versus two times on your money. If
00:06:28
you have a chance to make 20 times on
00:06:29
your money, you're going to deploy a lot
00:06:30
more capital. And this is the story of
00:06:33
technology going back to the first
00:06:36
invention of the first technology of the
00:06:39
caveman. When we have this ability to
00:06:41
create leverage, humans have a tendency
00:06:44
to do more and invest more, not less.
00:06:47
And I think that's what's about to
00:06:49
happen. I think we see this across the
00:06:52
spectrum. People assumed, "Oh my gosh,
00:06:54
software can now be written with one
00:06:55
person. You can create a whole startup.
00:06:57
You don't need to have venture capital
00:06:58
anymore. In fact, what I think we're
00:07:00
going to see is much more venture
00:07:01
capital flowing into new tech startups.
00:07:03
Much more capital being deployed because
00:07:05
the return on the invested capital is so
00:07:07
so so much higher because of AI." So
00:07:10
generally speaking, I think that the the
00:07:12
premise that AI destroys jobs is wrong
00:07:15
because it doesn't take into account the
00:07:17
significantly higher return on invested
00:07:19
capital, which means more capital is
00:07:20
going to be deployed, which means
00:07:21
actually far more jobs are going to be
00:07:23
created, far more work is going to get
00:07:24
done. And so I think that the
00:07:26
counterbalancing effect is really hard
00:07:27
to see without taking that zoomed out
00:07:29
perspective. To to to respond to Sacks's
00:07:32
point, I do think anytime you see a
00:07:34
major change socially, society, there's
00:07:37
a vacuum. How's the system going to
00:07:39
operate in the future? And anytime
00:07:41
there's a vacuum in the system, a bunch
00:07:43
of people will rush in and say, I know
00:07:45
how to fill that vacuum. I know what to
00:07:46
do because I am smarter, more educated,
00:07:50
more experienced, more knowledgeable,
00:07:51
more moral. I have some superiority over
00:07:54
everyone else. And therefore, I should
00:07:56
be in a position to define how the new
00:07:58
system should operate. And so, there's a
00:08:00
natural kind of power vacuum that
00:08:02
emerges anytime there's a major
00:08:03
transition like this. and there will be
00:08:05
a scrambling and a fighting and a whole
00:08:07
bunch of different representation.
00:08:08
Typically fear is a great way of getting
00:08:10
into power and people are going to try
00:08:12
and create new control systems because
00:08:13
of the transition that's underway. Okay.
00:08:15
You're going to see this around the
00:08:16
world. Yeah. I mean, uh, so Chamath,
00:08:19
it's pretty clear, you know, Friedberg
00:08:22
didn't answer this question
00:08:23
specifically, so I'm going to give it to
00:08:24
you again. You would agree jobs like
00:08:28
driving things are going to go away. If
00:08:30
we had to pick a number somewhere
00:08:32
between 5 and 10 years, the majority of
00:08:33
those would go away. He's positioning,
00:08:35
hey, a lot more jobs will be created
00:08:36
because there'll be all these extra
00:08:38
venture capital and opportunities, etc.
00:08:41
But job displacement will be very real
00:08:43
and we're seeing, I think, job
00:08:45
displacement. Now, you had a tweet
00:08:46
recently, you know, you were talking
00:08:47
about entry level jobs and how that
00:08:50
seems to be going away in the white
00:08:52
collar space. So, where do you land on
00:08:55
job displacement? Friedberg's already
00:08:57
kind of given the big picture here, but
00:08:58
let's step back to for people who are
00:09:00
listening who have relatives who drive
00:09:02
Uber or a truck or are graduating from
00:09:05
college and want to go work at a, you
00:09:08
know, I don't know, the Magnificent 7 or
00:09:10
in tech and they're not hiring and and
00:09:13
we know the reason they're not hiring
00:09:14
because they're leaning into AI. So,
00:09:17
let's talk about the job displacement in
00:09:18
the medium term. I'm going to ignore
00:09:21
your question and I'm going to Why
00:09:24
should you be any different than the
00:09:25
other? So I now contend on this podcast.
00:09:29
There's two people not wanting to answer
00:09:31
the question about job displacement.
00:09:32
Interesting trend. No, no. We'll go back
00:09:34
to that. Let me start by just saying
00:09:35
that it seems that these safety
00:09:38
warnings tend to be pretty coincidental
00:09:41
with key fundraising moments in
00:09:43
Anthropic's journey. So let's just start
00:09:45
with that. And if you put that into an
00:09:48
LLM and try to figure out if what I just
00:09:50
said was true, it's interesting, but you
00:09:52
find it's relatively accurate. I think
00:09:54
that there is a very smart business
00:09:57
strategy here. And I've said a version
00:09:59
of this about the other companies at the
00:10:01
foundational model layer that aren't
00:10:04
Meta and Google because Meta and Google
00:10:06
frankly sit on these money gushers where
00:10:09
they just generate so much capital that
00:10:11
they can fund these things to infinity.
00:10:13
But if you're not them, so if you're
00:10:15
OpenAI or if you're anthropic, you have
00:10:18
to find an angle. And I think the angles
00:10:20
are slightly different for both. But I
00:10:23
think what this suggests is that there's
00:10:24
a pattern that exists and I think that
00:10:26
that
00:10:27
explains some of the framing of what we
00:10:30
see in the press, Jason, and why we get
00:10:32
these exaggerated claims. Perfect.
00:10:36
So there are people who are doing this
00:10:39
for nefarious reasons is I I guess where
00:10:42
you're sort of getting at here. It's a
00:10:44
way to market. It's smart. It's smart.
00:10:47
If you fall for it, it's up to you. Yeah. Okay.
00:10:50
Well, there's also an industrial complex
00:10:53
according to some folks that
00:10:56
are backing this. If you've heard of
00:10:59
effective altruism, that was like this
00:11:01
uh movement of a bunch of I don't know,
00:11:04
I guess they consider themselves
00:11:05
intellectuals, Sacks, and they uh were kind
00:11:10
of backing a large swath of
00:11:13
organizations that I guess we would call
00:11:16
in the industry astroturfing or what do
00:11:18
they call it when you make so many of
00:11:19
these organizations that they're not
00:11:23
real in politics and flooding the zone
00:11:25
perhaps. So if you were to look at this
00:11:27
article here, Nick, I think you have the
00:11:29
AI existential risk um industrial
00:11:33
complex graphic there. It seems like a
00:11:36
group of people according to this
00:11:37
article have backed to the tune of 1.6
00:11:40
billion, a large number of organizations
00:11:42
to scare the bejesus out of everybody
00:11:45
and make YouTube videos,
00:11:47
TikToks, and they've they've made a map of
00:11:49
it. There's some key takeaways here from
00:11:52
that article where it says here that
00:11:55
it's an inflated ecosystem. There's a
00:11:57
great deal of redundancy. Same names,
00:11:59
acronyms, logos with only minor changes.
00:12:01
Same extreme talking points. Same group
00:12:03
of people just with different titles.
00:12:04
Same funding source. There's a funding
00:12:06
source called Open Philanthropy which
00:12:08
was funded by Dustin Moskovitz who is
00:12:10
one of the Facebook billionaires. Chamath,
00:12:13
you worked with him, right? I mean he
00:12:14
was wasn't he like Zuck's roommate at
00:12:17
Harvard or something and one of the
00:12:18
first engineers made a lot of money so
00:12:21
he funded this he he's he's an EA and he
00:12:24
funded this group called Open
00:12:25
Philanthropy which then has become the
00:12:27
feeder for essentially all these other
00:12:30
organizations which are almost different
00:12:32
fronts to basically the same underlying
00:12:34
EA
00:12:35
ideology and what's interesting is that
00:12:37
the guy who set this up for Dustin
00:12:40
Holden Karnofsky who is a major
00:12:43
effective altruist and was doling out all
00:12:44
the money. He's married to Dario's
00:12:47
sister and she she's I guess associated
00:12:49
with EA and she was one of the
00:12:50
co-founders of Anthropic. So these are
00:12:53
not coincidences. I mean the reality is
00:12:55
there's a very specific ideological and
00:12:58
political agenda here. Now what is that
00:13:01
agenda? It's basically global AI
00:13:04
governance if you will. They want AI to
00:13:07
be highly regulated but not just at the
00:13:09
level of the nation state but let's say
00:13:12
internationally, supranationally to
00:13:15
well if you just do a quick search on
00:13:18
global compute governance it'll tell you
00:13:21
what the key aspects are so number one
00:13:23
they want regulation of computational
00:13:26
resources this includes access to
00:13:30
GPUs they want AI safety and security
00:13:33
regulation they want international, you
00:13:35
could call them globalist, agreements and they
00:13:38
want ethical and societal considerations
00:13:40
or policy built into this. Now, what
00:13:42
does that sound like? That sounds a lot
00:13:44
to me like what the Biden administration
00:13:46
was pursuing. Specifically, we had that
00:13:49
Biden executive order on AI, which was
00:13:51
100 pages of burdensome regulation that was
00:13:54
designed to promote AI safety, but had
00:13:57
all these DEI requirements. So, you
00:13:59
know, it led to woke AI. You remember
00:14:01
when Google launched Black George
00:14:04
Washington and so forth, they had the
00:14:06
Biden diffusion rule which created this
00:14:08
global licensing framework to sell GPUs
00:14:10
all over the world. So extreme
00:14:12
restrictions on proliferation of servers
00:14:15
of computing power. They created what's
00:14:19
called the AI safety institute and they
00:14:22
again fostered these international AI
00:14:24
summits. So if you actually look at what
00:14:28
the Biden administration was tangibly
00:14:30
doing in terms of policy and you look at
00:14:32
what EA's agenda is with respect to
00:14:35
global compute governance, they were
00:14:37
pushing hard on these fronts. And now if
00:14:40
you look at the level of personnel,
00:14:41
there were very very
00:14:43
powerful Biden staffers who now all work
00:14:46
in anthropic. So probably the most
00:14:49
powerful Biden staffer on AI over the
00:14:52
past four years was a lawyer named Tarun
00:14:55
Chhabra and he now works at Anthropic for
00:14:59
Dario. Elizabeth Kelly who was the
00:15:01
founding director of the AI safety
00:15:03
institute in the government now works at
00:15:06
Anthropic. Like I mentioned Dario's
00:15:10
sister is married to Holden Karnofsky
00:15:12
who doles out all the money to these EA
00:15:14
organizations. So if you were to do
00:15:16
something like create a network map, you
00:15:18
would see very quickly that there's
00:15:20
three key nodes here. There's the
00:15:22
effective altruist movement of which Sam
00:15:25
Bankman-Fried's the most notable member
00:15:26
but which I think Dustin Moskovitz is now the
00:15:29
main funder. There's the Biden
00:15:30
administration and like the key staffers
00:15:32
and then you've got anthropic and it's a
00:15:36
very tightly wound network. Now why does
00:15:39
this matter? Let's get Yeah. Also the
00:15:41
goals I think is Yes. Well, the the
00:15:43
goal, like I said, is is global compute
00:15:45
governance. It's basically establishing
00:15:47
national and then international
00:15:49
regulations of AI. Now, but they would
00:15:52
claim, let's just pause here for a
00:15:54
minute. They would claim the reason
00:15:55
they're doing it. And so, we we'll we'll
00:15:58
save if we believe this or not, but they
00:16:01
are concerned about job destruction in
00:16:03
the short term. They're also concerned
00:16:05
as science fiction as it is that the AI
00:16:08
when we get to like a sort of
00:16:10
generalized super intelligence is going
00:16:12
to kill humanity. That this is a nonzero
00:16:14
chance. Elon has said this before.
00:16:16
They've sort of taken it to a almost
00:16:18
like a certainty. Yes, we're going to
00:16:20
have so many of these general
00:16:22
intelligences. But they only believe
00:16:24
that when they're raising money. Well,
00:16:26
that's what I'm sort of getting at.
00:16:27
Like, so I think they believe it all the
00:16:28
time, but maybe maybe the press releases
00:16:30
are timed for the fundraising. But
00:16:32
let me let me answer that. Right. Yeah.
00:16:35
Yeah. Look, I mean, it is a great
00:16:37
product. Claude kicks ass. I'm more
00:16:38
interested in the political dimension of
00:16:39
this. I'm not bashing a specific product
00:16:42
or company. But look, I think that there
00:16:44
is some nonzero risk of AI growing into
00:16:48
a super intelligence that's beyond our
00:16:50
control. They have a name for that. They
00:16:51
call it X risk or existential risk. I
00:16:55
think it's very hard to put a percentage
00:16:56
on that. I'm willing to acknowledge that
00:16:59
is a risk. You know, I think about that
00:17:00
all the time and I do think we should be
00:17:02
concerned about it. But there's two
00:17:04
problems I think with this approach.
00:17:05
Number one is X-Risk is not the only
00:17:08
kind of risk. I would say that China
00:17:10
winning the AI race is a huge risk. I
00:17:12
don't really want to see a CCP AI
00:17:15
running the world. And if you hobble our
00:17:19
own innovation, our own AI efforts in
00:17:21
the name of stomping out every
00:17:22
possibility of X-risk, then you probably
00:17:25
end up losing the AI race to China
00:17:27
because they're not going to abide by
00:17:28
those same regulations. So again, you
00:17:31
can't optimize for solving only one risk
00:17:34
while ignoring all the others. And I
00:17:36
would say the risk of China winning the
00:17:38
AI race is, you know, it might be like
00:17:42
30%. Whereas I think X risk is probably
00:17:44
a much lower percentage. So there are
00:17:47
there are other risks to to worry about.
00:17:49
And I I do think that they are
00:17:50
single-mindedly focused on scaring
00:17:53
people with some of these headlines
00:17:55
around first it was the bioweapons then
00:17:57
it was the super intelligence now it's
00:17:58
the job loss and I think it's a tried
00:18:01
and true tactic of people who want to give
00:18:05
more power to the government to scare
00:18:07
the population right because if you can
00:18:09
scare the population and make them
00:18:11
fearful then they will cry out for the
00:18:13
government to solve the problem and
00:18:15
that's what I see here is that you've
00:18:16
got this elaborate network of front
00:18:20
organizations which are all motivated by
00:18:22
this EA ideology. They're funded by a
00:18:25
hardcore leftist. And by the way, I
00:18:27
became aware of Dustin's politics
00:18:30
because of the Chesa Boudin recall. I
00:18:33
found out that he was a big funder of
00:18:34
Chesa Boudin. Remember this? Yeah. Dustin
00:18:36
Moskovitz and Cari Tuna, his wife.
00:18:39
Also, Reed Hastings just joined the
00:18:41
board of of Anthropic. Remember when he
00:18:45
back in 2016 he tried to drive Peter
00:16:48
Thiel off of the board of Facebook for
00:18:51
supporting Trump. So, you know, these
00:18:53
are like committed leftists. They're
00:18:56
Trump haters. But the point is that
00:18:58
these are people who fundamentally
00:18:59
believe in empowering government to the
00:19:03
maximum more government and empowering
00:19:05
government to the maximum extent. Now my
00:19:07
problem with that is I actually think
00:19:09
that probably the single greatest
00:19:11
dystopian risk associated with AI is the
00:19:15
risk that government uses it to control
00:19:18
all of us. To me like you end up in some
00:19:21
sort of Orwellian future where AI is
00:19:25
controlled by the government and out of
00:19:26
all the risks we've talked about that's
00:19:28
the only one for which I've seen
00:19:30
tangible evidence. So in other words, if
00:19:32
you go back to last year when we had the
00:19:35
whole woke AI, there was plenty of
00:19:37
evidence that the people who were
00:19:40
creating these products were infusing
00:19:42
their left-wing or woke values into the
00:19:45
product to the point where it was lying
00:19:47
to all of us and it was rewriting
00:19:49
history. And there was plenty of
00:19:51
evidence that the Biden EO was trying to
00:19:53
enshrine that idea. Was basically trying
00:19:55
to require DEI be infused into AI
00:19:58
models. And it wanted to anoint two or
00:20:02
three winners in this AI race. So, I'm
00:20:04
quite convinced that prior to Donald
00:20:07
Trump winning the election, we were on a
00:20:08
path of global compute governance where
00:20:10
two or three big AI companies are going
00:20:12
to be anointed as the winners. And the
00:20:14
quid pro quo is that they were going to
00:20:15
infuse those AI models with woke values.
00:20:18
And there was plenty of evidence for
00:20:20
that. You look at the policies, you look
00:20:21
at the models. This was not a
00:20:23
theoretical concern. This was real. And
00:20:26
I think the only reason why we've moved
00:20:28
off of that trajectory is because of
00:20:30
Trump's election. But we could very
00:20:32
easily be moved back onto that
00:20:33
trajectory. If you were to look at all
00:20:35
three opinions here and and put them
00:20:37
together, they could all be true at the
00:20:38
same time. You've got a a number of
00:20:40
people, some might call useful idiots,
00:20:42
some might call just, you know, people
00:20:43
with god complexes who believe they know
00:20:45
how the world should operate. Effective
00:20:47
altruism kind of falls into that. Oh, we
00:20:49
can make a formula that that's their
00:20:51
kind of idea where we can tell you where
00:20:53
to put your money, rich people, in order
00:20:55
to create the most good and you know,
00:20:57
we're these enlightened individuals with
00:20:58
the best view of the world. They might
00:20:59
be, who knows, maybe they're the
00:21:00
smartest kids in the room, but they're
00:21:02
kind of delusional. The second piece
00:21:03
I'll do here is I think you're
00:21:05
absolutely correct, Chimat, that there
00:21:07
are people who have economic interests
00:21:09
who are then using those useful idiots
00:21:11
andor delusional people with god
00:21:14
complexes to serve their need, which is
00:21:16
to be one of the three winners. And then
00:21:18
Sacks:
00:21:19
Inherent to all of that is they have a
00:21:21
political ideology. So why not use these
00:21:25
people with delusions of grandeur in
00:21:27
order to secure the bag for their
00:21:29
companies for their investments and
00:21:31
secure their candidates into office so
00:21:34
that they can block further people from
00:21:36
getting H100s cuz they literally want
00:21:38
to. By the way, that's the part that's
00:21:39
very smart about what they're doing
00:21:41
because you know it's not like they're
00:21:44
illiquid. They're full of liquidity in
00:21:46
the sense that you're bringing in people
00:21:48
that are very technically capable and
00:21:50
you're setting up these funding rounds
00:21:52
where a large portion goes right back
00:21:54
out the door via secondaries and so
00:21:56
there's all these people that are making
00:21:58
money having this worldview and so to
00:22:00
your point Jason it's going to cement
00:22:02
that worldview and then they are going
00:22:04
to propagate it even more aggressively
00:22:06
into the world. So I think the threshold
00:22:08
question is should you fear government
00:22:11
overregulation or should you fear
00:22:13
autocomplete and I would say you should
00:22:16
not be so afraid of the autocomplete
00:22:18
right now it may get so good that it's
00:22:21
an AGI but right now it's an
00:22:23
exceptionally good autocomplete. Yeah.
00:22:25
And I just think that again it's a tried
00:22:27
and true tactic of people who want to
00:22:31
give immeasurably more power to the
00:22:33
government to try and make people afraid
00:22:35
and they stampede people into these
00:22:36
policies. Right. And it gives them
00:22:38
power. Exactly. Now, why do I think this
00:22:40
is important to talk about? On last
00:22:42
week's show, I talked about the trip to
00:22:44
the Middle East and how we started doing
00:22:46
these AI acceleration partnerships with
00:22:48
the Gulf States who have a lot of
00:22:49
resources, a lot of money, and they're
00:22:51
intensely interested in AI. and the
00:22:53
Biden administration was pushing them
00:22:54
away. It basically said, "You can't have
00:22:56
the chips. You can't build data
00:22:57
centers." And it was pushing them into
00:22:58
the arms of China. The thing that I
00:23:00
thought was so bizarre is that the
00:23:03
various groups and organizations and
00:23:05
former Biden staffers who wrote this
00:23:07
policy have been agitating in Washington
00:23:09
and they've been trying to portray
00:23:11
themselves as China hawks. And I'm like,
00:23:13
wait, this doesn't make any sense
00:23:15
because this policy again, there's
00:23:17
there's basically two camps in this new
00:23:18
cold war. It's US versus China. you can
00:23:20
pull the Gulf States into our orbit or
00:23:22
you can drive them into China's orbit.
00:23:24
So, this to me just didn't make any
00:23:25
sense. And what's happened is that
00:23:27
frankly, you've got this EA ideology
00:23:30
that's really motivating things, which
00:23:32
is a desire to lock down compute, right?
00:23:35
They're afraid of proliferation. They're
00:23:38
afraid of diffusion. That's really their
00:23:40
motivation. and they're trying to
00:23:42
rebrand themselves as China hawks
00:23:44
because they know that in the Trump
00:23:45
administration that idea is just not
00:23:47
going to get much purchase. Right. And
00:23:50
your position as czar is a level playing
00:23:54
field. People compete and the good guys,
00:23:58
you know, the West should be supported
00:24:00
to hit artificial general intelligence
00:24:03
as fast as possible. So the bad guys,
00:24:05
China, don't get it first. That that's a
00:24:08
well open competition. I don't know if I
00:24:11
would frame it around AGI specifically,
00:24:13
but what I would say is that look, I
00:24:14
think our policy should be to win the AI
00:24:16
race because the alternative is that
00:24:18
China wins it. And that would be very
00:24:20
bad for our economy and our military.
00:24:22
How do you win the AI race? You got to
00:24:23
out innovate. You got to have
00:24:24
innovation. That means we can't have
00:24:26
overregulation and red tape. We got to
00:24:28
build out the most AI infrastructure,
00:24:30
data centers, energy, which includes our
00:24:32
partners. And then third, I think it
00:24:34
means AI diplomacy because we want to
00:24:35
build out the biggest ecosystem. We know
00:24:37
that biggest app store wins, biggest
00:24:39
ecosystem wins, right? And the policies
00:24:41
under the Biden administration were
00:24:43
doing the opposite of all those things.
00:24:44
But again, you have to go back to what
00:24:46
was driving that. And it was not driven
00:24:48
by this China hawk mentality. That is
00:24:50
now a convenient rebranding. It was
00:24:53
driven by this EA ideology, this
00:24:56
doomerism. And so this is why I'm
00:24:58
talking about it is I want to expose it
00:25:00
because I think a lot of people on the
00:25:02
Republican side don't realize where the
00:25:04
ideology is really coming from and who's
00:25:06
funding it. They're obviously Trump
00:25:08
haters and they need to be Loomered quite
00:25:10
frankly
00:25:14
when we look at they do they need to be
00:25:16
Loomered. I mean, you know, Freeberg, I
00:25:19
want to come back around again
00:25:20
cuz I respect your opinion on, you know,
00:25:24
how close we are to turning certain
00:25:26
corners, especially in science. So, I
00:25:29
understand big picture you believe that
00:25:32
the opportunity will be there. Hey, we
00:25:33
got people out of fields, you know, in
00:25:35
the agricultural revolution, we put them
00:25:37
into factories, industrial revolution,
00:25:38
then we went to this information
00:25:39
revolution. So, your position is we will
00:25:41
have a similar transition and it'll be
00:25:44
okay.
00:25:46
But do you not believe that the speed
00:25:50
because we've talked about this
00:25:51
privately and and publicly on the pod
00:25:53
that this speed the velocity at which
00:25:55
these changes are occurring you would
00:25:57
agree are faster than the industrial
00:25:59
revolution much faster than the
00:26:01
information revolution. So let's one
00:26:03
more time talk about job displacement
00:26:05
and I think the real concern here for a
00:26:08
group of people who are buying into this
00:26:09
ideology is specifically union job
00:26:12
displacement. This is something the EU
00:26:14
cares about. This is something the Biden
00:26:16
administration cares about. If truck
00:26:18
drivers lose their jobs, just like we
00:26:20
went to bat previously for coal miners,
00:26:23
and there were only 75,000 or 150,000 in
00:26:25
the country at the time, but it became
00:26:27
the national dialogue. Oh my god, the
00:26:28
coal
00:26:29
miners. How fast is this going to
00:26:32
happen? One more time on drivers
00:26:34
specifically. Okay, coders, you think
00:26:36
there'll be more code to write, but
00:26:38
driving, there's not going to be more
00:26:39
driving to be done. So is this time
00:26:42
different in terms of the velocity of
00:26:44
the change and the job displacement in
00:26:46
your mind, Freeberg? The velocity is
00:26:49
greater but the benefit will be faster.
00:26:51
So the benefit of the industrial
00:26:53
revolution which ultimately drove lower
00:26:55
price products and broader availability
00:26:57
of products through manufacturing was
00:26:59
one of the key outputs of that
00:27:01
revolution. Meaning that we created a
00:27:03
consumer market that largely didn't
00:27:04
exist prior. Remember prior to the
00:27:07
industrial revolution, if you wanted to
00:27:08
buy a table or some clothes, they were
00:27:10
handmade. They were kind of artisanal.
00:27:13
Suddenly, the industrial revolution
00:27:14
unlocked the ability to mass-produce
00:27:16
things in factories. And that dropped
00:27:18
the cost and the availability and the
00:27:20
abundance of things that everyone wanted
00:27:22
to have access to, but they otherwise
00:27:24
wouldn't have been able to afford. So
00:27:26
suddenly everyone could go and buy
00:27:27
blankets and clothes and canned food and
00:27:31
all of these incredible things that
00:27:32
started to come out of this industrial
00:27:34
revolution that happened at the time.
00:27:36
And I think that folks are
00:27:37
underestimating and underrealizing the
00:27:39
benefits at this stage of what's going
00:27:41
to come out of the AI revolution and how
00:27:44
it's ultimately going to benefit
00:27:45
people's um availability of products,
00:27:48
cost of goods, access to things. So the
00:27:50
counterbalancing force Jcal is
00:27:52
deflationary which is um let's assume
00:27:55
that the cost of everything comes down
00:27:57
by half. That's a huge relief on
00:28:01
people's need to work 60 hours a week.
00:28:04
Suddenly you only need to work 30 hours
00:28:06
a week and you can have the same
00:28:09
lifestyle or perhaps even a better
00:28:10
lifestyle than you have today. So the
00:28:13
counterargument to your point, and I'll
00:28:15
talk about the pace of change in
00:28:16
specific jobs in a moment, but the
00:28:18
counterargument to your point is that
00:28:20
there's going to be this cost reduction
00:28:23
and abundance that doesn't exist today.
00:28:26
Give an example. Let's give like some
00:28:27
examples that we could see automation
00:28:30
and food prep. So we're seeing a lot of
00:28:31
restaurants install robotic systems to
00:28:34
make food and people are like, "Oh, job
00:28:36
loss, job loss." But let me just give
00:28:38
you the counter side. The counter side
00:28:39
is that the cost of your food drops in
00:28:41
half. So suddenly, you know, all the
00:28:43
labor cost that's built into making the
00:28:45
stuff you want to pick up, everyone's
00:28:46
freaking out right now about inflation.
00:28:47
Oh my god, it's $8 for a cup of coffee.
00:28:49
It's $8 for a latte. This is crazy,
00:28:51
crazy, crazy. What if that dropped down
00:28:52
to two bucks? You're going to be like,
00:28:54
man, this is pretty awesome with good
00:28:55
service and good experience. And don't
00:28:57
make it all dystopian, but suddenly
00:28:59
there's going to be this like incredible
00:29:01
reduction or deflationary effect in the
00:29:02
cost of food. And we're already starting
00:29:04
to see automation play its way into the
00:29:06
food system to bring inflation down. And
00:29:07
that's going to be very powerful for
00:29:09
people. Shout out to uh Eatsa,
00:29:11
CloudKitchens, and Cafe X. We all took swings
00:29:13
at the bat at that exact concept is that
00:29:15
it could be done better, cheaper,
00:29:17
faster. One of the amazing things of
00:29:18
these vision action models that are now
00:29:21
being employed is you can rapidly learn
00:29:24
using vision systems and then deploy
00:29:26
automation systems in those sorts of
00:29:28
environments where you have a lot of
00:29:29
kind of repetitive tasks that the system
00:29:32
can be trained and installed in a matter
00:29:34
of weeks. And historically that would
00:29:35
have been a whole startup that it would
00:29:36
have taken years to figure out how to
00:29:38
get all these things together and custom
00:29:39
program it, custom code it. So the flip
00:29:41
side is like when Uber hit those people
00:29:43
were not drivers. Think about the jobs
00:29:45
that all those people had prior to Uber
00:29:47
coming to market. And then the reason
00:29:49
they drove for Uber is they could make
00:29:50
more money driving for Uber or now
00:29:52
driving for DoorDash and the
00:29:54
flexibility. So their lifestyle got
00:29:56
better. They had all of this more
00:29:57
control in their life. Their incomes
00:29:59
went up. And so there's a series of
00:30:01
things that you are correct won't make
00:30:04
sense in the future from a kind of
00:30:07
standard of work perspective. But the
00:30:08
right way to think about it is
00:30:10
opportunity gets created. New jobs
00:30:12
emerge, new industry, new income, costs
00:30:15
go down. And so I keep harping on this
00:30:17
that it's really hard today to be very
00:30:19
prescriptive to Sacks's point about what
00:30:21
exactly is around the corner. But it is
00:30:24
an almost certainty that what is around
00:30:26
the corner is more capital will be
00:30:28
deployed. That means the economy grows.
00:30:30
That means there's a faster deployment
00:30:31
of growth of new jobs, new opportunities
00:30:34
for people to make more money, to be
00:30:35
happier in the work that they do. And
00:30:37
the flip side being things are going to
00:30:38
get cheaper. So, I mean, I know we're
00:30:40
waxing philosophical here, but I think
00:30:42
it's really key because you can focus on
00:30:44
the one side of the coin and miss the
00:30:46
whole other. And that's what a lot of
00:30:48
journalists, commentators, and fearmongers
00:30:51
do is they miss that other side. Got it.
00:30:53
Well said, Freeberg. Well said. I think
00:30:55
I've heard Satya turn this question
00:30:58
around about job loss saying well do you
00:31:00
believe that GDP is going to grow by 10%
00:31:03
a year because what are we talking about
00:31:05
here? In order to have the kind of
00:31:07
disruption that you're talking about
00:31:09
where I don't know 10 to 20% of
00:31:12
knowledge workers end up losing their
00:31:13
jobs AI is going to have to be such a
00:31:15
profound force that it's going to have
00:31:17
to create GDP growth like we've never
00:31:19
seen before. That's right. So, it's
00:31:21
easier for people to say, "Oh, well, 20%
00:31:23
of people are going to lose their jobs."
00:31:24
But wait, we're talking about a
00:31:25
world in where the economy is growing
00:31:26
10% every year. Like, do do you actually
00:31:28
believe that's more income? That's more
00:31:30
income for everyone. That's new jobs
00:31:31
being created. It's an inevitability.
00:31:33
We've seen this in every revolution. You
00:31:36
know, prior to the industrial
00:31:37
revolution, 60% of Americans worked in
00:31:39
agriculture. And when the tractor came
00:31:41
around and factories came around, those
00:31:43
folks got to get out of doing manual
00:31:45
labor in the fields where they were
00:31:46
literally, you know, tilling the fields
00:31:49
by hand. and they got to go work in a
00:31:51
factory where they didn't have to do
00:31:52
manual labor to move things. Yeah, they
00:31:54
did things in the factory with their
00:31:55
hands, but it wasn't about grunt work in
00:31:58
the field all day in the sun. And it
00:32:00
became a better standard of living. It
00:32:01
became new jobs. And today, it became a
00:32:03
5-day work week. It went from a 7-day
00:32:06
work week to five, 100 hours a week to
00:32:08
45, 50 hours a week. And now I think the
00:32:11
next phase is we're going to end up in
00:32:12
less than 30 hours a week with people
00:32:14
making more money and having more
00:32:16
abundance for every dollar that they
00:32:17
earn with respect to what they can
00:32:19
purchase and the lives they can live.
00:32:21
That means more time with your family,
00:32:22
more time with your friends, more time
00:32:24
to explore interesting opportunities.
00:32:26
So, you know, we've been through this
00:32:27
conversation a number of times. I I know
00:32:29
I'm not No, it's important to bring it
00:32:30
up, I think, and and really unpack it
00:32:33
because the fear is peaking now, Sacks.
00:32:35
People are using this moment in time to
00:32:37
scare people that hey the jobs are going
00:32:39
to go away and they won't come back. But
00:32:41
what we're seeing on the ground, Sacks, is
00:32:44
I'm seeing many more startups getting
00:32:46
created and able to accomplish more
00:32:48
tasks and hit a higher revenue per
00:32:50
employee than they did in the last two
00:32:52
cycles. So it used to be you know you
00:32:54
try to get to a quarter million in
00:32:55
revenue per employee than 500. Now we're
00:32:57
regularly seeing startups hit a million
00:32:58
dollars in revenue per employee,
00:33:00
something that was rarified air
00:33:01
previously, which then speaks to your
00:33:03
point, Freeberg, that there'll be more
00:33:05
abundance. There'll be more capital
00:33:07
generated, more capital deployed,
00:33:10
with more capital deployed for more
00:33:12
opportunities, but you're going to need
00:33:13
to be more resilient. I think yeah, I
00:33:15
think it's actually very hard to
00:33:18
completely eliminate a human job. The
00:33:21
ones that you cited, and Jcal, you keep
00:33:23
citing the same ones because I actually
00:33:24
don't think there's that many that fit
00:33:25
in this category the drivers and maybe
00:33:28
level one customer support because those
00:33:30
jobs are so monolithic but when you
00:33:32
think about even like what a salesperson
00:33:34
does right it's like yes they spend a
00:33:37
lot of time with prospects but they also
00:33:38
spend time negotiating contracts and
00:33:40
they spend time doing post-sale
00:33:42
implementation and follow-up and they
00:33:44
spend time learning the product and
00:33:46
giving feedback I mean it's a
00:33:48
multifaceted job and you can use AI to
00:33:51
automate pieces of it, but to eliminate
00:33:53
the whole job is actually very hard. And
00:33:56
so I just think this idea that boom, 20%
00:33:58
of the workforce is going to be
00:33:59
unemployed in two years. I just don't
00:34:01
think that it's going to work that way.
00:34:03
But look, if there is widespread job
00:34:05
disruption, then obviously the
00:34:06
government's going to have to react and
00:34:08
we're going to be in a very different
00:34:09
societal order. But my point is, you
00:34:12
want the government to start reacting
00:34:14
now before this actually happens. We
00:34:16
don't need to be precogs and predict it.
00:34:17
Yeah. It's a total power grab. It's a
00:34:20
total power grab to give the government
00:34:22
and these organizations more power
00:34:24
before the risk is even manifested. And
00:34:27
let me say this as well with respect to
00:34:28
all these regulations that were created,
00:34:30
the 100-page Biden EO and the 200-page
00:34:32
diffusion rule, none of these
00:34:34
regulations solve the x-risk problem.
00:34:37
None of these things actually would
00:34:39
prevent the most existential risk that
00:34:41
we're talking about. They don't solve
00:34:43
for alignment. They don't solve for the
00:34:45
kill switch. None of that. Yeah. If
00:34:46
someone actually figures out how to
00:34:47
solve that problem, I'm all ears. You
00:34:49
know, look, I'm not cavalier about these
00:34:52
risks. I understand that they exist, but
00:34:54
I'm not in favor of the fear-mongering.
00:34:57
I'm not in favor of giving all this
00:34:58
power to the government before you even
00:34:59
know how to solve these problems.
00:35:02
Chamath, you did a tweet about
00:35:04
entry-level jobs being toast. So, I
00:35:06
think there is a nuance here. Uh, and
00:35:08
both parties could be correct. I think
00:35:10
the job destruction is happening as we
00:35:12
speak. I'll just give what one example
00:35:14
and then drop to you, Chamath. One job
00:35:16
in startups that's not driving a car or
00:35:19
you know super entry level was people
00:35:20
would hire consultants to do recruitment
00:35:22
and to write job descriptions. Now I was
00:35:24
at a dinner last night talking to a
00:35:26
bunch of founders here in Singapore and
00:35:27
I said how many people have used AI to
00:35:29
write a job description? Everybody's
00:35:30
hand went up. I said how many of you
00:35:32
with that job description was that job
00:35:34
description better than you could have
00:35:36
written or any consultant could? And they
00:35:37
all said yes, 100%. AI is better at that
00:35:39
job. That was a job a highlevel HR
00:35:42
recruitment job, or an aspect of it, Sacks.
00:35:45
So that was half the job, a third of the
00:35:47
job. To your point, the chores are being
00:35:49
automated. So I do think we're going to
00:35:50
see entry-level jobs, Chamath, the ones
00:35:53
that get people into an organization,
00:35:56
maybe they're going away. And that was
00:35:58
the point of your tweet, which
00:35:59
we'll pull up right here. If a GPT is a
00:36:01
glorified
00:36:03
autocomplete, how did we use to do
00:36:05
glorified autocomplete in the past? It
00:36:08
was with new grads. New grads were our
00:36:12
autocomplete. And to your point, the
00:36:15
models are good enough that it
00:36:17
effectively allows a person to rise in
00:36:20
their career without the need of new
00:36:23
grad grist for the mill, so to speak.
00:36:27
So, I think the reason why companies
00:36:28
aren't hiring nearly as many new grads
00:36:30
is that the the folks that are already
00:36:32
in a company can do more work with these
00:36:35
tools. And and I think that that's a
00:36:36
very good thing. So you're generally
00:36:38
going to see OPEX as a percentage of
00:36:41
revenue shrink naturally and you're
00:36:44
going to generally see revenue per
00:36:47
employee go up naturally but it's going
00:36:51
to create a tough job market for new
00:36:53
grads in the established organizations.
00:36:56
And so what should new grads do? They
00:36:58
should probably steep themselves in the
00:36:59
tools and go to younger companies or
00:37:01
start a company. I think that's the only
00:37:03
solution for them. Bingo. The most
00:37:05
important thing for whether there are
00:37:08
jobs available for new grads or not is
00:37:10
whether the economy is booming. So
00:37:12
obviously in the wake of a financial
00:37:14
crisis, the jobs dry up because
00:37:16
everyone's cost cutting and those jobs
00:37:18
are the first ones to get
00:37:20
cut. But if the economy is booming, then
00:37:23
there's going to be a lot more job
00:37:25
creation. And so again, if AI is this
00:37:28
driver and enabler of tremendous
00:37:30
productivity, that's going to be good
00:37:31
for economic growth. And I think that
00:37:33
that will lead to more company
00:37:34
formation, more company expansion at the
00:37:36
same time that you're getting more
00:37:38
productivity. Now, to give an example,
00:37:40
one of the things I see a lot discussed
00:37:43
online about these coding assistants is
00:37:45
that they make junior programmers much
00:37:48
better because, you know, if you're
00:37:50
already like a 10x programmer, very
00:37:52
experienced, you already knew how to do
00:37:54
everything. And you could argue that the
00:37:56
people who benefit the most are the
00:37:59
entry-level coders who are willing to
00:38:01
now embrace the new technology and it
00:38:03
makes them much more productive. So in
00:38:05
other words, it's a huge leveler and it
00:38:08
takes an entry-level coder and makes
00:38:09
them 5x or 10x better. So look, this is
00:38:13
an argument I see online. The point is
00:38:15
just I don't think we know how this cuts
00:38:16
yet. I agree. And I just think there's
00:38:18
like this doomerism is premature and
00:38:21
it's not a coincidence that it's being
00:38:24
funded and motivated by this hardcore
00:38:29
ideological element. I'll tell you my
00:38:30
hiring experience. We have about 30
00:38:33
people at 8090 and the way that I have
00:38:36
found it to work the best is you have
00:38:38
senior people act as mentors and then
00:38:40
you have an overwhelming corpus of young
00:38:42
very talented people who are AI native.
00:38:45
And if you don't find that mix, what you
00:38:48
have instead are L7s from Google and
00:38:52
Amazon and Meta who come to you with
00:38:55
extremely high salary demands and stock
00:38:58
demands and they just don't thrive. And
00:39:01
part of why they don't thrive is that
00:39:03
they push back on the tools and how you
00:39:05
use them. They push back on all these
00:39:07
things that the tools help you get to it
00:39:08
faster. This is why I think it's so
00:39:10
important for the young folks to just
00:39:12
jump in with two feet and be AI native
00:39:14
from the jump because you're much more
00:39:16
hirable frankly to the to the emergent
00:39:19
company and the bigger companies you'll
00:39:22
have a lot of these folks that see the
00:39:24
writing on the wall may not want to
00:39:26
adapt as fast as otherwise. Another way
00:39:29
for example that you can measure this is
00:39:30
if you look inside your company on the
00:39:33
productivity lift of some of these
00:39:35
coding assistants for people as a
00:39:37
distribution of age. What you'll see is
00:39:39
the younger people leverage it way more
00:39:41
and have way more productivity than
00:39:43
older folks. And I'm not saying that as
00:39:44
an ageist comment. I'm saying that it's
00:39:46
an actual reflection of how people are
00:39:48
reacting to these tools. What you're
00:39:50
describing is a paradigm shift. It is a
00:39:52
big leap. Is you know it's like when I
00:39:54
went to college, when I took computer
00:39:56
science, it was object-oriented
00:39:58
programming. It was like C++. It was
00:40:00
compiled languages. It was gnarly. It
00:40:02
was nasty work. And then you had these
00:40:05
high-level abstracted languages. And I used
00:40:07
to remember at Facebook, I would just
00:40:09
get so annoyed because I was like, why
00:40:10
is everybody using PHP and Python? This
00:40:12
is like not even real. But I was one of
00:40:14
these old Luddites who didn't understand
00:40:16
that I just had to take the leap. And
00:40:18
what it did was it grew the top of the
00:40:20
funnel of the number of developers by
00:40:22
10x. And as a result, what you had were
00:40:25
all of these advancements for the
00:40:26
internet. And I think what's happening
00:40:28
right now is akin to the same thing
00:40:30
where you're going to grow the number of
00:40:31
developers upstream by 10x. But in order
00:40:34
to embrace that, you just have to jump
00:40:36
in with two feet. And if you're very
00:40:37
rigid in how you think the job should be
00:40:40
done technically, I think you're just
00:40:41
going to get left behind. Just a little
00:40:44
interesting statistic there. Microsoft
00:40:46
announced 6,000 job layoffs, about 3% of
00:40:48
their workforce, while putting up record
00:40:50
profits while being in incredible cash
00:40:52
position. Total confirmation bias. It's
00:40:55
like now every time there's a layoff
00:40:57
announcement, people try to tie it to AI
00:40:58
to feed this doomer story. I don't think
00:41:01
that's an AI story. Well, I actually
00:41:03
think it is. I don't think it's an AI story.
00:41:04
I think it is because the people they're
00:41:06
eliminating are management and I think
00:41:08
the the management layer becomes less
00:41:10
you're saying it was entry-level
00:41:12
employees. Now you're saying it's
00:41:13
management. This is total confirmation bias.
00:41:14
I think those are no no I think those
00:41:16
are two areas that specifically get
00:41:18
eliminated. Entry level it's too hard.
00:41:20
It it's too hard to give them the grunt
00:41:22
work. And then for the managers who are
00:41:24
old and I've been there for 20 years.
00:41:26
Hold on. Let me finish. For those
00:41:27
people I think they are unnecessary in
00:41:31
this new AI management. What are you
00:41:33
talking about? What what is the AI agent
00:41:35
that's doing management right now in
00:41:37
companies? Your theory doesn't even make
00:41:39
sense. Oh no it it totally does. There
00:41:41
are tools now that are telling you this
00:41:44
is these are the most productive people
00:41:45
in the organization. Chamath just outlined
00:41:47
who's shipping the most etc. who's using
00:41:49
the tools. And then people are saying,
00:41:51
well, why do we have all these highly
00:41:53
priced people who are not actually
00:41:54
shipping code who are LSAs? You're
00:41:56
totally falling for some sort of
00:41:57
narrative here. This makes no sense. I
00:41:59
don't think I am. Yeah, let me be very
00:42:02
clear what I'm saying. What I am saying
00:42:04
is AI natives are extremely productive.
00:42:07
They use these tools. They're very facile
00:42:09
with them. I think it's very reductive,
00:42:12
but what you see is the older or more
00:42:15
established in your career you are in
00:42:16
technical roles, what I see is that it's
00:42:19
harder and harder for folks like that to
00:42:21
embrace these tools in the same way.
00:42:23
Now, how does it play out in terms
00:42:25
of jobs? I think that just these tools
00:42:29
are good enough where the net new
00:42:32
incremental taskoriented role that would
00:42:34
typically go to a new grad, a lot of
00:42:37
that can be defrayed by these models.
00:42:38
That's what I'm saying very clear
00:42:40
specifically and I don't think that
00:42:41
speaks to management. I agree with Sacks.
00:42:43
It doesn't. Sergey said, Freeberg, when
00:42:46
he came to uh our F1 that management
00:42:49
would be the first thing to go. I was
00:42:50
talking to some entrepreneurs last night
00:42:52
again here in Singapore and they are
00:42:54
taking all the GitHub and Jira cards
00:42:57
and things that have been submitted
00:42:58
plus all the Slack messages in their
00:43:00
organization and they're putting them
00:43:02
into an LLM and having it write
00:43:04
management reports of who is the most
00:43:06
productive in the organization. And in
00:43:07
the new version of Windows, it's
00:43:09
monitoring your entire desktop. Freeberg
00:43:11
management is going to know who in the
00:43:13
organization is actually doing work,
00:43:15
what work they're doing, and what the uh
00:43:18
result of that work is through AI. That
00:43:20
is the future of management. And you
00:43:22
take out all bias, all you know,
00:43:25
loyalty, and the AI is going to do that.
00:43:27
Couldn't disagree with you more, Sacks, on
00:43:29
that, but Freeberg, you wanted to wrap
00:43:31
somewhere on this point. My point is
00:43:32
that managers are not losing their
00:43:35
jobs because AI is replacing them. I
00:43:37
didn't say that AI wouldn't be a
00:43:39
valuable tool for managers to use. Sure,
00:43:42
AI would be a great tool for managers,
00:43:44
but we're not anywhere near the point
00:43:47
where managerial jobs are being
00:43:48
eliminated because they're getting
00:43:50
replaced by AI agents. We're still at
00:43:52
the chatbot stage of this. Literally,
00:43:54
Sergey said he took their internal
00:43:56
Slack, went into like a dev
00:43:58
conversation, and said, "Who are the
00:43:59
underrated people in this organization
00:44:00
who deserve a raise?" and it gave him
00:44:02
the right answer. So, wait, that doesn't
00:44:03
allow you to cut 6,000 people. I think
00:44:05
it's happening as we speak. It's just
00:44:07
not over. You fell for this
00:44:09
narrative. You grasped onto this
00:44:11
Microsoft restructuring where they
00:44:14
eliminated 6,000 roles and you're trying
00:44:15
to attribute that to AI now. I think it
00:44:18
has to do with AI. I think management is
00:44:19
looking at it saying, "We are going to
00:44:21
replace these positions with AI. We
00:44:23
might as well get rid of them now." It
00:44:24
is in flux. We'll see who's right in the
00:44:26
coming months. Can I make another
00:44:28
comment? Freeberg, wrap this up here so we
00:44:29
can get on to the next topic. This is a
00:44:32
great topic. This is I want to make one
00:44:34
last point which I think and Sax you may
00:44:37
not appreciate this so we can have a a
00:44:39
healthy argument about this. I think in
00:44:41
the same way that all of this "jobs are
00:42:44
going to get lost to AI" fear-mongering,
00:42:46
there's a similar narrative that I think
00:44:48
is a false narrative around there's a
00:44:50
race in AI that's underway between
00:44:53
nation states. And the reason I think
00:44:56
it's false is if I asked you guys the
00:44:57
question, who won the industrial
00:44:59
revolution? The industrial revolution
00:45:01
benefited everyone around the world.
00:45:03
There are factories and there's a
00:45:04
continuous effort and continuous
00:45:06
improvements in manufacturing processes
00:45:08
worldwide. That is a continuation of
00:45:10
that revolution. Similar if I asked who
00:45:12
won the internet race, there were
00:45:14
businesses built out of the US,
00:45:16
businesses built out of China,
00:45:17
businesses built out of India and Europe
00:45:19
that have all created value for
00:45:21
shareholders, created value for
00:45:22
consumers, changed the world, etc. And I
00:45:25
think the same is going to happen in AI.
00:45:26
I don't think that there's a finish line
00:45:28
in AI. I think AI is a new paradigm of
00:45:32
work, a new paradigm of productivity, a
00:45:34
new paradigm of business, of the
00:45:36
economy, of livelihoods, of pretty much
00:45:39
everything uh every interaction humans
00:45:41
have with ourselves and the world around
00:45:43
us will have in its substrate AI and as
00:45:47
a result, I think it's going to be this
00:45:48
continuous process of improvement. So,
00:45:49
I'm not sure. Look, there there are
00:45:51
different models and you can look at the
00:45:52
performance metrics of models, but you
00:45:54
can get yourself spun up into a tizzy
00:45:56
over which model is ahead of the others.
00:45:58
Which one's going to quote get to the
00:45:59
finish line first? But I think at the
00:46:01
end of the day, the abundance and the
00:46:03
economic prosperity that will arise from
00:46:05
the continuous performance improvements
00:46:07
that come out of AI and AI development
00:46:09
will benefit all nation states and
00:46:11
actually could lead to a little bit more
00:46:13
of a less resource-constrained world
00:46:16
where we're all fighting over limited
00:46:17
resources and there's nation state
00:46:19
definitions around who has access to
00:46:21
what and perhaps more abundance which
00:46:23
means more peace and uh less of this
00:46:25
kind of resource-constrained world. Your thought on
00:46:28
the kumbaya theory espoused by Freeberg.
00:46:31
Yeah, exactly. Um I I'll partially agree
00:46:34
in the sense that I don't think the AI
00:46:37
race is a finite game. It's an infinite
00:46:39
game. I I agree that there's no finish
00:46:42
line, but that doesn't mean there's not
00:46:44
a race going on. So for example, an arms
00:46:46
race would be a classic example of a
00:46:49
competition between countries to see who
00:46:53
is stronger to basically amass power and
00:46:55
they might be neutralizing each other.
00:46:57
The balance of power may stay in
00:46:58
equilibrium even though both sides feel
00:47:00
the need to constantly uplevel their
00:47:03
arms, their power. Yeah. And so I think
00:47:05
that to use the term that
00:47:08
Mearsheimer used at the All-In Summit, we
00:47:09
are in an iron cage. The US and China
00:47:12
are the two leading countries in the
00:47:14
world economically, militarily,
00:47:17
technologically. They both care about
00:47:20
their survival. The best way to ensure
00:47:22
your survival in a self-help world is by
00:47:27
being the most powerful. And so these
00:47:29
are great powers who care a lot about
00:47:31
the balance of power. And they will
00:47:32
compete vigorously with each other to
00:47:35
maintain the greatest balance of power
00:47:37
between them. and high-tech is a major
00:47:40
dimension of that competition and within
00:47:42
high-tech AI is the most important
00:47:43
field. So look, there is going to be an
00:47:45
intense competition around AI. Now the
00:47:48
question is how does that end up? I mean
00:47:50
it could end up in a tie or in it could
00:47:53
end up in a situation where both
00:47:55
countries benefit. Maybe open source
00:47:57
wins. Maybe neither side gains a
00:47:59
decisive advantage. But they're
00:48:01
absolutely going to compete because
00:48:03
neither one can afford to take the risk
00:48:05
that the other one will develop a
00:48:07
decisive advantage. Prisoner's dilemma.
00:48:09
Nuclear proliferation is a good analogy.
00:48:11
I would argue nuclear deterrence led to
00:48:13
a more peaceful world in the 20th
00:48:15
century. I mean is that fair to say Sax
00:48:17
that ultimately what happened with
00:48:18
nuclear is that the actual underlying
00:48:21
technology hit you know an asymptote right
00:48:24
it plateaued right and so we ended up in
00:48:26
a situation where in the case of the
00:48:28
United States versus Soviet Union where
00:48:30
both sides had enough nukes to blow up
00:48:32
the world many times over and there
00:48:34
wasn't really that much more to innovate
00:48:36
so you know the underlying
00:48:38
technological competition had ended,
00:48:40
the dynamic was more stable and they
00:48:42
were able to reach an arms control
00:48:44
framework to sort of control the arms
00:48:46
race, right? I think AI is a little
00:48:49
different. We're in a situation right
00:48:51
now where the technology is changing
00:48:53
very very rapidly and it's potentially
00:48:55
on some sort of exponential curve and so
00:48:58
therefore being a year ahead even 6
00:49:00
months ahead could result in a major
00:49:02
advantage. I think under those
00:49:03
conditions both sides are going to feel
00:49:05
the need to compete very vigorously. I
00:49:08
don't think they can sign up for an
00:49:09
agreement to slow each other down. This
00:49:11
is a system of productivity, right?
00:49:12
Nuclear was not a system of
00:49:14
productivity. It was not a system of
00:49:15
economic growth. It was a system of
00:49:17
literally destruction. And this is quite
00:49:20
different. This is a system of making
00:49:21
more with less which unleashes benefits
00:49:24
to everyone in a way that perhaps should
00:49:27
be calming down the conflict in the
00:49:29
potential. You got to admit that there's
00:49:30
a there is a potential dual use here.
00:49:33
There's no question that the armies of
00:49:35
the future are going to be drones and
00:49:36
robots and they're going to be AI
00:49:37
powered. Yeah. And as long as that's the
00:49:39
case, these countries are going to
00:49:40
compete vigorously to have the best AI
00:49:42
and they're going to want their leaders
00:49:45
or national champions or startups and so
00:49:47
forth to win the race. What's the worst
00:49:49
case, Sax, if China wins the AI
00:49:52
race? What is the worst case scenario?
00:49:54
Ask what it means first. Ask Sax. That's
00:49:56
literally what I'm asking. Like what
00:49:58
would that scenario be? Would they
00:49:59
invade America and they dominate us
00:50:01
forever? What does it mean to a citizen?
00:50:03
Yeah. What does it mean to win? Yeah. To
00:50:06
me, it would mean that they achieve a
00:50:08
decisive advantage in AI such that we
00:50:11
can't leapfrog them back. And an example
00:50:14
of this might be something like 5G where
00:50:17
Huawei somehow leapfrogged us, got to 5G
00:50:20
first and disseminated it through the
00:50:22
world. They weren't concerned about
00:50:24
diffusion. They were interested in
00:50:26
promulgating their technology throughout
00:50:28
the world. If the Chinese win AI, they
00:50:30
will sell more products and services
00:50:32
around the globe than the US. This is
00:50:34
where we have to change our mindset
00:50:35
towards diffusion. I would define
00:50:37
winning as the whole world consolidates
00:50:40
around the American tech stack. They use
00:50:42
American hardware in data centers that
00:50:45
again are fundamentally powered by
00:50:47
American technology. And you know just
00:50:49
look at market share. Okay? If we have
00:50:51
like 80 to 90% market share that's
00:50:53
winning. If they have 80% market share
00:50:55
then we're in big trouble. So it's very
00:50:58
simple. It means like yeah but if the
00:50:59
market grows by 10x it doesn't matter
00:51:02
because the world will have every
00:51:03
individual in every country will now
00:51:05
have more they will have a more
00:51:06
prosperous life and as a result it's not
00:51:09
necessarily the framing about if we
00:51:11
don't get there first we are necessarily
00:51:13
going to lose I get that there's an edge
00:51:14
case of conflict or what have you but I
00:51:17
do think that there's a net benefit
00:51:19
where the whole world suddenly is in
00:51:20
this more prosperous state and this is a
00:51:23
classic example of a dual use technology
00:51:25
where there are both economic benefits
00:51:27
and there are military benefits. Yes,
00:51:30
GPS would come to mind in this example,
00:51:33
right? Like my summary point is just
00:51:35
that it's not all about a losing game
00:51:37
with respect to this quote race with
00:51:40
other nation states. But at the end of
00:51:41
the day, yes, there is risk, but I do
00:51:44
think that if the the pace of
00:51:46
improvement stays on track like it is
00:51:48
right now, holy [ __ ] I think we're in a
00:51:50
pretty good place. That's just my point.
00:51:51
Okay. Some positivity. Okay. Look, I I
00:51:54
hope that the AI race stays entirely
00:51:56
positive and it's a healthy competition
00:51:58
between nations and the competition
00:52:00
spurs them on to develop more prosperity
00:52:02
for their citizens. But as we talked
00:52:04
about in the AI summit, there's two ways
00:52:06
of looking at the world. There's kind of
00:52:08
the economist way that Jeffrey Sachs was
00:52:10
talking about and then there's the
00:52:12
balance of power way or realist way
00:52:13
which Mearsheimer was talking about. And
00:52:16
when economic prosperity and survival or
00:52:21
balance of power come into conflict,
00:52:24
it's the realist view of the world that
00:52:26
it's the balance of power that gets
00:52:27
privileged. And I just think that's the
00:52:29
way that governments operate is that
00:52:31
prosperity is incredibly important. We
00:52:33
want economic success, but power is
00:52:37
ultimately privileged over that. And
00:52:39
this is why we're going to compete
00:52:40
vigorously in high-tech. That's why
00:52:42
there is going to be an AI race. Okay,
00:52:44
perfect segue. We should talk a little
00:52:46
bit about what was the topic of
00:52:48
discussion yesterday. I had a lunch with
00:52:51
a bunch of family offices and capital
00:52:52
allocators uh government folks here in
00:52:54
Singapore and they were talking about
00:52:56
our discussion last week about the big
00:52:59
beautiful bill and the debt here in the
00:53:02
United States. It's permeating
00:53:04
everywhere. The two conversations at
00:53:06
every stop I've made here is the big
00:53:09
beautiful bill and the balance sheet of
00:53:12
the United States as well as tariffs.
00:53:13
So, we need to maybe revisit our
00:53:16
discussion last week. Chamath, you
00:53:18
and Freeberg did an impromptu call
00:53:21
with Ron Johnson over the weekend, which
00:53:23
then spurred him going on 20 other
00:53:26
podcasts to talk about this. Stephen
00:53:28
Miller from the administration has been
00:53:30
tweeting some corrections or his
00:53:33
perceived corrections about the bill.
00:53:35
And Sax, uh, I think you've also started
00:53:37
tweeting this. Where do we want to
00:53:39
start? Maybe. Well, I think there are
00:53:41
just a couple of facts that should be
00:53:43
cleaned up because Okay, so facts from
00:53:46
the administration, their view of our
00:53:48
discussion. Well, even though I was
00:53:50
defending the bill last week on the
00:53:52
whole, I wasn't saying it was perfect. I
00:53:53
was just saying it was better than the
00:53:54
status quo. Yeah, you were clear about
00:53:56
that. Yeah. Yeah. But even I, in
00:53:58
doing that was conceding some points
00:54:01
that I think were just factually wrong.
00:54:03
And the big one was that I said I was
00:54:05
disappointed that the Doge cuts
00:54:07
weren't included in the big beautiful
00:54:08
bill. What Stephen Miller has pointed out
00:54:11
is that reconciliation bills can only
00:54:14
deal with what's called mandatory
00:54:15
spending. They can't deal with what's
00:54:16
called discretionary spending. And since
00:54:19
the Doge cuts apply to discretionary
00:54:21
spending, they just can't be dealt with
00:54:23
in a reconciliation bill. They have to
00:54:24
be dealt with separately. There can be a
00:54:26
separate rescission bill that comes up,
00:54:28
but it can't be dealt with in this bill.
00:54:30
And just to be very clear, look, if the
00:54:32
Doge cuts don't happen through rescission,
00:54:34
I'm going to be very disappointed in
00:54:36
that. I really want the Doge cuts to
00:54:37
happen, but it's just a fact that the
00:54:40
Doge cuts cannot happen in the big
00:54:42
beautiful bill. It's not that kind of
00:54:44
bill. And I think it's therefore wrong
00:54:46
to blame big beautiful bill for not
00:54:48
containing Doge cuts when the Senate
00:54:51
rules don't allow that. You know, it all
00:54:52
goes back to the Byrd rule. There
00:54:55
are only specific things that can be
00:54:57
dealt with through reconciliation, which
00:54:59
is this 50 vote threshold, and it has to
00:55:02
be quote unquote mandatory spending.
00:55:04
Discretionary cuts are dealt with in
00:55:05
annual appropriations bills that require
00:55:07
60 votes. Now look, this is kind of a
00:55:09
crazy system. I don't know exactly how
00:55:11
it evolved. I guess Robert Byrd is the
00:55:13
one who came up with all this stuff and
00:55:15
maybe they need to change the system,
00:55:16
but it's just wrong to blame the big
00:55:18
beautiful bill for not containing the
00:55:19
Doge cuts. That's just a fact. Okay, so
00:55:21
the other thing is that the BBB does
00:55:24
actually cut spending. It's just not
00:55:26
scored that way because when the
00:55:29
bill removes the sunset provision from
00:55:31
the 2017 tax cuts, the CBO ends up
00:55:35
scoring that as effectively a spending
00:55:37
increase. But tax rates are simply
00:55:40
continuing at their current level. In
00:55:42
other words, at this year's level. So if
00:55:45
you used the current year as your
00:55:47
baseline, okay, and then compared it to
00:55:50
spending next year, it would score as a
00:55:54
cut in spending. So it's just not it's
00:55:56
not correct to say this bill increases
00:55:58
spending. It does actually result in a
00:56:01
mandatory spending cut, but it's not
00:56:04
getting credit for that because we're
00:56:06
continuing the tax rates at the current
00:56:09
year's rates. Do you believe, Sax, that
00:56:12
this administration, which you are part
00:56:14
of, will have balanced the budget in
00:56:17
four years? Will it have
00:56:19
reduced the deficit or will the deficit
00:56:21
continue to grow at 2 trillion a year?
00:56:24
What is your belief because there's a
00:56:26
lot of strategies going on here. Yeah.
00:56:27
My belief is that President Trump
00:56:29
came into office inheriting a terrible
00:56:32
fiscal situation. I mean basically that
00:56:34
he created and that Biden created. They
00:56:36
both put 8 trillion. They both put 8
00:56:38
trillion on the debt. It's a big
00:56:40
difference. It's a big difference to add
00:56:41
to the deficit when you're in the
00:56:43
emergency phase of
00:56:45
COVID. Sure. It's emergency
00:56:48
spending. It was never supposed to be
00:56:49
permanent and then somehow Biden made it
00:56:51
permanent and he wanted a lot more.
00:56:52
Remember build back better? He wanted a
00:56:54
lot more. So, you know, it's it's tough
00:56:57
when you come into office with a what is
00:56:59
$2 trillion annual deficit. So, to my
00:57:01
original question, now look, hold on.
00:57:03
Would I like to see the deficit
00:57:04
eliminated in one year? Yeah,
00:57:06
absolutely. But there's just not the
00:57:07
votes for that. Well, I asked you for
00:57:09
there's a one vote margin here in the
00:57:10
House and the Democrats aren't
00:57:13
cooperating in any way. So, I think that
00:57:15
the administration is getting the most
00:57:18
done that it can. This is a mandatory
00:57:20
spending cut and I think the Doge cuts
00:57:23
will be dealt with hopefully through
00:57:24
rescission in a subsequent bill. I'm
00:57:27
asking you about four years from now.
00:57:28
Will we be sitting here in four years?
00:57:30
Will Trump have cut spending by the end
00:57:32
of this term? In another three and a
00:57:34
half years, will we be looking at a
00:57:36
balanced budget? Potentially? Is that the
00:57:38
goal of the administration or will we be
00:57:41
at 42, 44, 45 trillion at the end of
00:57:45
Trump's second term? David said, listen,
00:57:47
if you want that level of specificity,
00:57:48
you're going to have to get Scott Bessent
00:57:49
on. Okay, this is just not my area. I'm
00:57:51
not going to pretend to have that level
00:57:52
of detailed answers. But what I believe
00:57:54
is that the Trump administration's
00:57:56
policy is to spur growth. I think that
00:58:00
these tax policies will spur growth. I
00:58:02
think that AI will also be a huge
00:58:04
tailwind. It'll be a productivity boost.
00:58:07
I think let's stop being doomers about
00:58:08
it. We need that productivity boost and
00:58:10
I think that the net result of those
00:58:12
things will be to improve the fiscal
00:58:14
situation. Do I want more spending cuts?
00:58:16
Yeah, but look, we're getting more than
00:58:18
was represented last week. Let's put it
00:58:20
that way. Okay, fair enough. Sax, thank
00:58:21
you for the cleanup there. Chamath, our
00:58:23
bestie Elon was on the Sunday shows and
00:58:26
he said, "Hey, the bill can be big or it
00:58:28
can be beautiful. It can't be both." He
00:58:30
seems to be, I'll say, displeased or
00:58:34
maybe not as optimistic about balancing
00:58:36
the budget and and getting spending
00:58:38
under
00:58:39
control, but he still believes in
00:58:41
Doge, obviously, and hopefully
00:58:43
Doge continues. You seemed a little bit
00:58:45
concerned last week. A week's passed.
00:58:49
You've heard some of Stephen Miller's
00:58:51
opinions. Where do you net out
00:58:53
seven days from our big beautiful budget
00:58:55
bill debate last week, a week later?
00:58:58
Well, I mean, I think Stephen's critique
00:59:00
of
00:59:01
how the media summarized the reaction to
00:59:06
the bill is accurate.
00:59:09
And I think it's probably useful to
00:59:11
double click into one thing that Sax
00:59:13
didn't mention, but that Stephen did. A
00:59:15
lot of this pivots around the CBO, which
00:59:17
is the Congressional Budget Office, and
00:59:19
how they look at these bills, and
00:59:22
there's a lot of issues with how they do
00:59:26
it.
00:59:28
One specific case, which Sax just
00:59:30
mentioned and Stephen talked about, is
00:59:31
that they have these arcane rules about
00:59:35
the way that they score things. And what
00:59:38
they were assuming is that the tax rates
00:59:43
would flip
00:59:44
back to what they were before the first
00:59:48
Trump tax cuts, which obviously would be
00:59:51
higher than where they are today. What
00:59:55
that would mean in their financial model
00:59:58
is we were going to get all that money
01:00:00
now to maintain the tax cuts where we
01:00:03
are. They would then look at that
01:00:05
and say, "Oh, hold on. That's a loss of
01:00:07
revenue. Why are all of these things
01:00:09
important?"
01:00:11
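The baseline dispute described here can be made concrete with a small sketch. Every figure below is a hypothetical placeholder, not an actual CBO number: under a current-law baseline the 2017 rates are assumed to snap back, so extending them scores as lost revenue, while under a current-policy baseline the same extension scores as zero.

```python
# Hedged sketch of the two scoring baselines (placeholder numbers only).

def scored_revenue_effect(projected, baseline):
    """Score = projected revenue minus baseline revenue, summed over the window."""
    return sum(p - b for p, b in zip(projected, baseline))

# Hypothetical annual revenue (in $B) over a 4-year window:
with_extension  = [500, 510, 520, 530]   # tax cuts continue at today's rates
rates_snap_back = [500, 560, 570, 580]   # current-law assumption: rates revert

# Current-law baseline: the extension looks like a big revenue loss.
print(scored_revenue_effect(with_extension, rates_snap_back))   # -150

# Current-policy baseline: the extension changes nothing, so it scores as zero.
print(scored_revenue_effect(with_extension, with_extension))    # 0
```

The same bill thus gets two very different headline numbers depending only on which baseline the scorer picks.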
I downloaded the CBO model, went through
01:00:14
it, and what I would say is at best it's
01:00:17
Spartan, which means that I don't think
01:00:20
a financial analyst or somebody that
01:00:23
controls a lot of money will actually
01:00:25
put a lot of stock in their model. I
01:00:28
think what you'll have happen is people
01:00:29
will build their own versions bottoms
01:00:32
up. Do you trust it, the CBO's
01:00:35
version of this, or do you largely trust
01:00:36
it? I don't think the CBO really knows
01:00:38
what's going on to be totally honest
01:00:39
with you. Okay. I think that there are
01:00:42
parts of what they do which they're also
01:00:45
opaque on. Nick, I sent you a tweet from
01:00:50
Goldman Sachs. So, here is what Goldman
01:00:52
put out. Now, the point is when you
01:00:54
build a model, what you're trying to do
01:00:56
is net out all of these bars, okay?
01:00:58
You're trying to add the positive bars
01:00:59
and the negative bars, and you figure
01:01:00
out what is the total number at the end
01:01:02
of it. Now, in order to do that, when
01:01:05
you see the bars on the far right,
01:01:07
those are 2034 dollars. That's very different
01:01:09
from 2025 dollars. The CBO doesn't disclose
01:01:13
how they deal with that. They don't
01:01:15
disclose the discount rate. So you can
01:01:17
question what that is. The CBO makes
01:01:19
these assumptions that, you know, as
01:01:21
Stephen pointed out, are very brittle
01:01:22
with respect to the tax plan. That's not
01:01:25
factored in here. So those are the
01:01:27
issues with the way the CBO scores it.
01:01:30
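The "2034 dollars versus 2025 dollars" point can be illustrated with a toy present-value calculation. This is not the CBO's actual method, and the $400B-per-year figure is a made-up placeholder; it only shows how much the choice of an (undisclosed) discount rate moves a 10-year total.

```python
# Toy discounting sketch: same nominal 10-year stream, different totals
# depending on the discount rate. All numbers are illustrative placeholders.

def present_value(cash_flows, rate):
    """Discount a list of annual amounts back to year-0 dollars."""
    return sum(cf / (1 + rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

# Hypothetical bill scored as adding $400B per year for 10 years.
nominal = [400.0] * 10
print(sum(nominal))  # undiscounted headline total: 4000.0 ($4T)

for rate in (0.03, 0.05):
    print(f"at a {rate:.0%} discount rate: ~${present_value(nominal, rate):,.0f}B")
```

At a 0% rate the total equals the headline $4T; at higher rates a dollar in year 10 counts for noticeably less than a dollar today, which is why the undisclosed assumption matters.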
So you have to do it yourself. Now,
01:01:32
Peter Navarro published an article which
01:01:34
I think is probably the most pivotal
01:01:36
article about this whole topic.
01:01:39
Peter of tariff fame. Yeah. Yeah. Here I
01:01:42
think he nails it right in the bullseye,
01:01:44
which is the bond market needs to make a
01:01:48
decision on one very critical assumption
01:01:51
when they build their own model. Okay,
01:01:53
so let's ignore the
01:01:55
CBO's kind of brittle math and the Excel
01:01:58
that they post on their website. People
01:02:00
are going to do their own because
01:02:01
they're talking about managing their own
01:02:02
money. But Navarro basically points to
01:02:05
the critical thing which is listen those
01:02:07
CBO assumptions also include a fatal
01:02:09
error which is they assume these very
01:02:12
low levels of
01:02:13
GDP growth. What you're probably going to see
01:02:16
in Q2 is a really hot GDP print. If I'm
01:02:20
a betting man, which I am, I think the
01:02:21
GDP print's going to come in above three.
01:02:24
Not quite four, but above three. And so
01:02:26
what Peter is saying here is, hey guys,
01:02:28
like you're estimating 1.7% GDP growth. Why
01:02:32
don't you assume 2.2, or why don't
01:02:34
you assume 2.7 or any number or really
01:02:37
what he's saying is why don't you build
01:02:39
a sensitivity so that you can see the
01:02:41
implications of that and I think that
01:02:43
that is a very important point okay so
01:02:46
where do I net out a week later Jason
01:02:48
it's pretty much summarized in the tweet
01:02:52
that I posted earlier today so over the
01:02:56
last week as people have digested it I
01:02:58
think that there are small actors in
01:03:00
this play and big actors the biggest
01:03:02
actor is obviously President President
01:03:03
Trump. But the second biggest actor is
01:03:05
the long end of the bond market. These
01:03:07
are the central bankers, the long bond
01:03:09
holders, and these macro hedge funds.
01:03:11
Why? Because they will ultimately
01:03:13
determine the United States's cost of
01:03:16
capital. How expensive will it be to
01:03:18
finance our deficits irrespective of
01:03:20
whatever the number is. It could be a
01:03:22
dollar or it could be a trillion
01:03:24
dollars. That doesn't matter right now.
01:03:25
The point is what is going to be our
01:03:26
cost of capital? And what's happened
01:03:30
over the last little while is that
01:03:32
they've steepened the curve and they've
01:03:33
made it more expensive for us to borrow
01:03:36
money. That's just the fact. So how do
01:03:40
we get in front of this? I think the
01:03:43
most important thing if you think about
01:03:44
what Peter Navarro said is this plan and
01:03:48
the bill can work if we get the GDP
01:03:52
right. Okay. So how do you get the GDP
01:03:56
right? And this is where I have one very
01:03:59
narrow set of things that I think we
01:04:01
need to improve. And the specific thing
01:04:03
that I'll go back to is today
01:04:05
America is at a supply-demand tradeoff
01:04:08
on the energy side. What does that mean?
01:04:11
We literally consume every single bit of
01:04:14
energy that we make. We don't have slack
01:04:18
in the
01:04:19
system. We are growing our energy
01:04:22
demands on average about 3% a year.
01:04:27
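The supply-demand arithmetic here is simple compounding. As a purely hypothetical illustration (the 10% cushion and the unit figures below are assumptions, not real grid data), demand growing about 3% a year eats through spare capacity quickly:

```python
# Back-of-the-envelope sketch: compounding demand vs. flat supply.
# All figures are hypothetical, not actual US grid numbers.

def years_until_shortfall(supply, demand, growth_rate):
    """Count whole years until compounding demand exceeds flat supply."""
    years = 0
    while demand <= supply:
        demand *= 1 + growth_rate  # demand compounds each year
        years += 1
    return years

# Hypothetical grid: 110 units of supply, 100 units of demand today.
print(years_until_shortfall(supply=110.0, demand=100.0, growth_rate=0.03))  # → 4
```

Even a 10% cushion is gone in about four years at 3% annual demand growth, which is the "no slack in the system" point.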
So I think the most critical thing we
01:04:29
need to do is to make sure the energy
01:04:31
markets stay robust. Meaning there's a
01:04:35
lot of investment that people are
01:04:37
making. On Tuesday I announced a deal
01:04:40
that I did building a 1 gigawatt data
01:04:42
center in Arizona. This is a lot of
01:04:44
money. This is little old me. But there
01:04:46
are lots of people ripping in huge huge
01:04:48
huge checks, hundreds of billions of
01:04:49
dollars. I think the sole focus has to
01:04:52
be to make sure that the energy policy
01:04:54
of America is robust and it keeps all
01:04:57
the electrons online. If there's any
01:05:00
contraction, I think it'll hit the GDP
01:05:03
number because we won't have the energy
01:05:05
we need and that's where things start to
01:05:07
get a little funky. So, I think where I
01:05:08
am is I think President Trump should get
01:05:11
what he wants. I think the bill can work
01:05:14
narrowly address the energy provisions
01:05:16
and I think we live to fight another
01:05:18
day. So
01:05:19
Freeberg, a cynical approach might be
01:05:21
we're working the refs here. The CBO is
01:05:24
not taking GDP into account. This GDP has a
01:05:27
magical unicorn in it. AI and energy is
01:05:30
going to spur this amazing growth. But
01:05:33
the bond markets don't believe it
01:05:34
either. So, are we looking at just a
01:05:39
GOP, a party, I'll put the
01:05:41
administration aside, that is just as
01:05:43
recklessly spending as the Democrats,
01:05:45
and they want to change the formula by
01:05:48
which they're judged in the future, that
01:05:50
there's going to be magically all this
01:05:51
growth and growth solves all problems.
01:05:54
And what we really need to do to your
01:05:56
point I think two weeks ago that this is
01:05:59
just disgraceful to put up this much
01:06:00
spending and we have to have austerity
01:06:02
and we need to increase uh maybe the
01:06:05
discipline in the country and both
01:06:06
parties have to be part of that. I'm
01:06:08
asking you uh from the cynical
01:06:10
perspective maybe to represent or
01:06:11
steelman the other side here.
01:06:14
We had a conversation with Senator Ron
01:06:16
Johnson after we recorded the pod last
01:06:19
week and he was very clear in a key
01:06:23
point which is that this bill addresses
01:06:27
mandatory spending. Just to give you a
01:06:29
sense 70% of our federal budget is
01:06:32
mandatory spending. 30% falls into that
01:06:35
discretionary category. The mandatory
01:06:37
spending is composed of the interest on
01:06:39
the debt, which is now about a
01:06:41
trillion dollars a year, on its way to a
01:06:43
trillion and a half.
01:06:44
Medicare, Medicaid, Social Security and
01:06:48
some other income security programs. And
01:06:50
as Ron Johnson shared with us over the
01:06:53
years more and more programs have been
01:06:55
put into the mandatory spending category
01:06:57
and so you can get past the
01:06:59
filibustering in the Senate to be able
01:07:00
to get budget adjustments done. The key
01:07:04
thing he's focused on and Rand Paul is
01:07:06
focused on and I've talked about is the
01:07:08
spending level of our mandatory
01:07:10
programs. The big beautiful bill
01:07:13
proposes a roughly $70 billion per year
01:07:16
cut in Medicaid. Okay, and that sounds
01:07:19
awful. How could you do that to people?
01:07:21
In 2019, the year before COVID, Medicaid
01:07:25
spending was $627 billion. In 2024 it was
01:07:29
914 billion. So the $70 billion cut gets
01:07:33
you down to 840. You're still roughly
01:07:35
call it 40% above where you were in
01:07:37
2019. So is that the right level? And
01:07:40
fundamentally the opportunity to cut
01:07:43
those mandatory programs, which I know
01:07:45
sounds awful, to cut Social Security and
01:07:46
cut Medicaid, but the reality is they're
01:07:48
not just being cut from a low level.
01:07:52
They're being cut from a level that's
01:07:55
60-plus percent higher than they were in 2019. And
01:07:57
I gave you another example which is the
01:07:59
SNAP program, the food stamp program.
01:08:01
Again, uh $15 billion of the 120 a year
01:08:05
that we spend on food stamps is being
01:08:06
used to buy soda and a whole another
01:08:09
chunk of that 120 is being used to buy
01:08:11
other junk food. So they have
01:08:13
proposed in this bill to cut SNAP down
01:08:15
to 90 and it was 60 in 2019. So it's
01:08:19
still 50% above where it was in 2019. So
01:08:23
the key point that's being made by Ron
01:08:24
Johnson and others is that the spending
01:08:27
on these mandatory programs which
01:08:29
account for nearly three-quarters of our
01:08:30
federal budget are still very elevated
01:08:33
relative to where we were in 2019. And
01:08:36
we are not going to get out of our
01:08:38
deficit barring a massive increase in
01:08:39
GDP without changes to the spending
01:08:42
level. Now I don't put the blame on the
01:08:45
White House. This bill passed with one
01:08:47
vote in the House. One vote. And so a
01:08:50
key point to note, and I've said this
01:08:52
from day one, and every time I've gone
01:08:54
to DC and every time we've talked about
01:08:56
Doge, I've said there's no way any of
01:08:58
this stuff's going to change without
01:08:59
legislative action from the Congress.
01:09:02
And here we are seeing Congress, for
01:09:04
whatever reason, you can listen to Ron
01:09:05
Johnson. You can listen to Rand Paul.
01:09:06
You can listen to others say, you know
01:09:08
what, we can't cut that deep. It is
01:09:10
going to be too harmful to our
01:09:12
constituents. We need to keep the
01:09:14
programs at their current levels or make
01:09:17
no changes at all or only modest
01:09:18
changes. And that's where we are. That's
01:09:20
the reality. Now, I do think that Navarro
01:09:23
did an excellent job in his op-ed for
01:09:26
whatever criticism we may want to lay on
01:09:27
Navarro for many other things. He pointed
01:09:29
out that the CBO projections in 2017 for
01:09:34
the next year's GDP growth number was
01:09:36
1.8 to 2% and it actually came in at
01:09:38
2.9%, a full point higher, because of
01:09:41
the Tax Cuts and Jobs Act that was passed by
01:09:43
the Trump administration in 2017. So the
01:09:46
additional money that goes into
01:09:48
investments because lower taxes are
01:09:50
being paid fueled GDP growth. This is
01:09:52
what some people call trickle down
01:09:54
economics. People ridicule it. They say
01:09:56
it doesn't work. It's not real. But in
01:09:58
this particular instance, they cut taxes
01:10:00
and the GDP grew much faster than was
01:10:03
projected or estimated by the economists
01:10:05
at the CBO. So the argument that's being
01:10:07
made is that we are not capturing many
01:10:10
of the upsides in the GDP numbers that
01:10:13
are being projected. And I will be
01:10:15
honest about this. I don't think anyone
01:10:18
knows how much the GDP is going to grow.
01:10:21
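The sensitivity Navarro is asking for is easy to sketch. Assuming, purely for illustration, a ballpark starting GDP and revenues holding near a fixed share of GDP (17.5% here, roughly the long-run average), small changes in the growth assumption compound into large revenue differences over a 10-year window:

```python
# Illustrative sensitivity of 10-year federal revenue to the assumed GDP
# growth rate. Starting GDP and the 17.5% revenue share are rough
# assumptions for the sketch, not official projections.

def cumulative_revenue(gdp, growth, revenue_share, years=10):
    """Sum projected revenue over the window as GDP compounds."""
    total = 0.0
    for _ in range(years):
        gdp *= 1 + growth          # GDP compounds each year
        total += gdp * revenue_share
    return total

BASE_GDP = 29_000  # $B, ballpark current US GDP (assumption)

for growth in (0.017, 0.022, 0.027):
    rev = cumulative_revenue(BASE_GDP, growth, revenue_share=0.175)
    print(f"{growth:.1%} growth: 10-yr revenue ~${rev:,.0f}B")
```

A one-point difference in assumed growth moves the 10-year revenue total by hundreds of billions of dollars, which is why the baseline growth assumption dominates the score.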
We don't know the economic benefit and
01:10:23
effects of AI. We don't know the
01:10:26
economic benefits and effects of the
01:10:29
work that's being done to deregulate.
01:10:30
Another key point which is not talked
01:10:32
about by Navarro or anywhere else.
01:10:34
There's a broad effort to
01:10:37
deregulate: standing up new energy
01:10:40
systems, deregulate industry and pharma,
01:10:42
deregulate banking. Besson talked about
01:10:44
this in our interview with him. All of
01:10:46
those deregulatory actions theoretically
01:10:49
should drive more investment dollars
01:10:51
because if you can get a biotech drug to
01:10:53
market in 5 years instead of 10, you'll
01:10:55
invest more in developing new biotech
01:10:56
drugs. If you can stand up a new nuclear
01:10:58
reactor in seven years instead of 30,
01:11:01
you'll build more nuclear reactors.
01:11:02
Money will flow. If you can get a new
01:11:06
factory working because it's a lot
01:11:08
easier and faster to build the
01:11:09
factory and cheaper, you'll build more
01:11:11
factories and production will go up.
01:11:13
People were really taken, by the way, by
01:11:15
your comment that you would shut up
01:11:16
about the deficit if we had like a
01:11:19
really great energy policy. We were
01:11:21
dumping a lot on top of it. I want to
01:11:23
build on the point that both Chamath and
01:11:25
Freeberg made about growth rates. So,
01:11:27
there's a very important chart here from
01:11:29
FRED. This is the Federal Reserve Bank of St.
01:12:31
Louis. This is Federal Receipts. So
01:11:33
basically it's federal tax revenue as a
01:11:36
percent of GDP and this goes all the way
01:11:38
back to you know the 1930s 1940s. So if
01:11:42
you look in the post-World War II period
01:11:44
you can see just eyeballing it that
01:11:47
there's a lot of variation around this
01:11:49
but the line is around
01:11:51
17.5% plus or minus 2%. And the
01:11:54
interesting thing is that this chart
01:11:58
reflects radically different tax rates.
01:12:01
So, for example, during some of these
01:12:04
periods, we've had 90% top marginal tax
01:12:07
rates. We've had 70% top marginal
01:12:09
tax rates. So, yeah, under Jimmy Carter,
01:12:11
the top marginal tax rate was, I think,
01:12:14
70%.
01:12:16
We've had tax rates, you know, under
01:12:18
Reagan or Clinton in the 20s. So, the
01:12:21
point is that the the tax rate that you
01:12:24
have and what you actually collect as a
01:12:26
percent of GDP don't correlate. The most
01:12:29
important thing by far is just how the
01:12:32
economy is doing. If you look at the top
01:12:33
tick, it's around 2000 there. If you
01:12:35
just mouse over it, 1999 to 2000. Yeah.
01:12:38
Yeah. We get like just under 20% of
01:12:40
federal receipts as a percent of GDP and tax rates
01:12:43
were quite low back then. The reason why
01:12:45
is we had an economic boom. So look, the
01:12:47
point is the most important thing in
01:12:49
terms of tax revenue is having a good
01:12:52
economy. And this is why you don't just
01:12:54
want to have very high tax rates because
01:12:57
they clobber your economy. So this point
01:12:58
that Navarro was making in that article,
01:13:02
it actually makes sense. I mean 1.7% is
01:13:04
a pretty tepid growth assumption, we
01:13:06
should be able to grow a lot faster. And
01:13:08
if we have a favorable tax policy, you
01:13:10
can grow a lot faster. Now, if you go to
01:13:13
spending, can you pull up the Fred chart
01:13:14
on spending? What you see here is that I
01:13:17
mean it's been kind of going up but
01:13:18
let's say that since the mid-1970s or
01:13:21
so federal net outlays as a percent of
01:13:24
GDP so basically spending was around 20%
01:13:26
of GDP and then what happened is during
01:13:29
COVID it went crazy, all the way up to
01:13:31
30% and now it's back down to you know
01:13:33
low 20s but it's still not back down to
01:13:36
20 and what we need to do is grow the
01:13:40
economy we have to grow GDP to the point
01:13:42
where federal net outlays are back
01:13:44
around 20%. If you could get tax revenue
01:13:48
to the historical mean of around 17.5%
01:13:50
or 17%, you get spending to 20%, then
01:13:53
you have a budget deficit of 3% which is
01:13:55
much more tolerable. And I think that's
01:13:57
Bessent's target under his 3-3-3 plan, right?
01:14:00
you get GDP growth back up to 3% and you
01:14:03
get the budget deficit down to 3%. All
01:14:06
right, Chamath, you had some charts you
01:14:07
wanted to share. Well, I think what's
01:14:10
amazing is if you take last week and now
01:14:13
again this week, we're all converging on
01:14:15
the same thing. The path out of this is
01:14:21
through GDP
01:14:23
growth. And I just want everybody to
01:14:26
understand where we are. And this is
01:14:29
without judgment. This is just the
01:14:30
facts. What this chart shows in gray is
01:14:34
the total supply of power in the United
01:14:39
States and the blue line is the
01:14:43
utilization. So what you build for is
01:14:48
what you think is a premium above the
01:14:50
demand, right? You'd say if there's one
01:14:51
unit of demand, let's have 1.2 units of
01:14:54
supply, we'll be okay. But as it turns
01:14:56
out, historically in the United States,
01:14:59
we've had these cycles where we didn't
01:15:02
really know what the demand curve would
01:15:04
look like. And so over the last number
01:15:07
of years, we've stopped really building
01:15:11
supply in
01:15:12
power. But what happened with things
01:15:14
like AI and all of these other things is
01:15:16
that the demand just continued to spike.
01:15:19
And so what this chart shows is we are
01:15:21
at a standstill sitting here today in
01:15:24
2025. On the margin, we're actually short
01:15:27
power, which is to say sometimes there
01:15:30
are brownouts, sometimes there is a lack of
01:15:32
power because we didn't add enough
01:15:35
capacity. So that's where we are today.
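Chamath's planning rule of thumb above (roughly 1.2 units of supply per unit of demand) can be sketched as a quick headroom check. This is an illustrative sketch, not anything from the episode: the function names and the brownout flag are assumptions, and the 20% default simply mirrors the 1.2-to-1 rule he quotes.

```python
def reserve_margin(supply_gw: float, demand_gw: float) -> float:
    """Headroom of supply over demand, as a fraction of demand."""
    return (supply_gw - demand_gw) / demand_gw

def is_short(supply_gw: float, demand_gw: float, target: float = 0.2) -> bool:
    """True when headroom falls below the planning target, i.e. the grid
    risks brownouts if demand spikes."""
    return reserve_margin(supply_gw, demand_gw) < target

# The planning rule from the discussion: ~1.2 units of supply per 1 of demand.
print(is_short(1.25, 1.0))  # 25% headroom: not short
print(is_short(1.05, 1.0))  # only ~5% headroom: short
```

Grid planners call this quantity the reserve margin; the point of the chart Chamath describes is that demand growth from AI has eaten the margin down to roughly zero.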
01:15:37
So then we talk about all of these new
01:15:39
kinds of energy and this is just meant
01:15:41
to ground us in the
01:15:43
facts. If you tried to turn on a project
01:15:47
today sitting here in May of 2025,
01:15:51
here's what the timelines are. We all
01:15:54
talk about
01:15:55
SMRs, small modular
01:15:58
reactors. The reality is that if you get
01:16:02
everything permitted and you believe the
01:16:04
technology can be derisked, you're still
01:16:06
in a 2035 plus time frame. You're a
01:16:08
decade away.
01:16:10
If you have an unplanned nat gas plant
01:16:13
today, the fastest you could get that on
01:16:15
is four years from now. If we tried to
01:16:18
restart a mothballed nuclear reactor, of
01:16:22
which there are only three we can
01:16:24
restart, that's a 2027 to 2030 time
01:16:28
frame. So, let's give us the benefit of
01:16:29
the doubt. That's 2 years away. If we
01:16:32
look at planned nat gas plants, there's
01:16:35
already 24 gigawatts in the queue which
01:16:37
can't get turned on. So where does this
01:16:40
end up? And this is where I think we
01:16:42
need to strip away all the partisanship
01:16:44
and understand what we're dealing with.
01:16:47
We have a ready supply of renewable and
01:16:51
storage options
01:16:53
today. It's the fastest thing that you
01:16:55
can turn on. It allows us to turn on
01:16:58
supply to meet the demand and
01:17:00
utilization. So I just think it's
01:17:03
important to understand that we must not
01:17:05
lose energy. We cannot lose the energy
01:17:07
market because that is the critical
01:17:09
driver of all the GDP. All right. Nippon
01:17:11
Steel and the US Steel merger got
01:17:13
cleared by President Trump. This was
01:17:16
something that was being blocked by
01:17:17
Biden obviously for national security
01:17:19
reasons. Nippon is going to acquire US
01:17:22
Steel for $14.9 billion. Biden blocked that as
01:17:24
we had discussed. On Friday, Trump
01:17:26
cleared the deal to go through calling
01:17:28
it a partnership that will create 70,000
01:17:30
jobs in the US. And on Sunday, Trump
01:17:32
called the deal an investment, saying it's
01:17:34
a partial ownership, but it will be
01:17:35
controlled by the USA. Chamath, there seems
01:17:38
to be uh a reframing of this deal and
01:17:41
that the United States is going to
01:17:43
benefit from it, but it's not a sale.
01:17:46
Let's set some context. The United
01:17:48
States is always on the wrong side of
01:17:49
these deals. Okay? We've been on the
01:17:51
wrong side for 20 years. Meaning, we
01:17:53
show up when an asset is stranded
01:17:56
or completely run into the ground. For
01:18:00
example, we did the auto bailouts at the
01:18:03
end of the great financial crisis. If
01:18:05
it's not a company and there's toxic
01:18:07
assets, we set up something called TARP.
01:18:09
What do we get? Not much in return. In
01:18:12
this, it's the
01:18:14
opposite. And I think that this strategy
01:18:16
has worked for many other countries
01:18:19
really well. So if you look at
01:18:22
Brazil, companies like Embraer and
01:18:24
Vale, which are really big Brazilian
01:18:26
national champions, have a partnership,
01:18:28
a pretty tight coupling with the
01:18:30
Brazilian government. The Brazilians
01:18:31
have a golden vote. If you look inside
01:18:33
of the UK, there's a bunch of aerospace
01:18:36
and defense companies, including
01:18:37
Rolls-Royce, that have a very tight
01:18:39
coupling with the UK government. They
01:18:41
have a golden vote. If you look in
01:18:43
China, companies like ByteDance and CL
01:18:47
have a very tight coupling with the
01:18:50
Chinese government and the Chinese
01:18:52
government has a golden vote. And so
01:18:54
what are all of those deals? Those deals
01:18:56
are about companies that are thriving
01:18:58
and on the forward foot. And so I think
01:19:00
this is a really important example of
01:19:02
things that we need to copy. I've said
01:19:04
this before, but one part of China that
01:19:06
I think we need to pay very close
01:19:07
attention to is: Hu Jintao in 2003 laid
01:19:12
out a plan and he said we are going to
01:19:14
create 10 national champions in China in
01:19:16
all the critical industries that are
01:19:18
going to matter for the next 50 years
01:19:20
including things like batteries and rare
01:19:22
earths and AI and they did it but for
01:19:25
those companies that allowed them to
01:19:26
thrive and crush it and I think that we
01:19:29
need to do that and compete with those
01:19:31
folks on an equal playing field. So in
01:19:33
all industries or in very specific
01:19:35
strategic ones because that would seem
01:19:37
like corrupting capitalism, and free
01:19:39
markets would be the steelman. Yeah,
01:19:42
there's 10 industries that matter and
01:19:44
you know, steel is one. Okay. I think the
01:19:48
precursors for pharmaceuticals are
01:19:50
absolutely critical. Got it. I think AI
01:19:53
is absolutely critical. I think the
01:19:56
upstream lithography and EUV, deposition,
01:19:58
and chip-making capability absolutely
01:20:01
critical. I think batteries are
01:20:04
absolutely critical and I think rare
01:20:06
earths and the specialty chemical supply
01:20:09
chain absolutely critical. If you have
01:20:11
those five, you are in control of your
01:20:14
own destiny in the sense that you can
01:20:16
keep your citizens healthy and you can
01:20:18
make all the stuff for the future. So I
01:20:21
think if the president is creating a
01:20:24
more expansive idea beyond US Steel with
01:20:27
this idea of US support, maybe there'll
01:20:30
be preferred capital in the future to US
01:20:32
Steel. But if he creates a category by
01:20:35
category thing across five or six of
01:20:37
these critical areas of the future, I
01:20:39
think it's super smart and we should do
01:20:41
more of it. Sacks, what do you think?
01:20:44
interventionism, putting your thumb on
01:20:45
the scale, golden votes, a good idea for
01:20:47
America in very narrow verticals or let
01:20:51
the free market decide. What are your
01:20:52
thoughts on this golden vote, having a
01:20:54
board seat, etc. Well, it depends what
01:20:58
the free market, so to speak, produced.
01:21:00
And the reality is over the past 25
01:21:02
years is we exported a lot of this
01:21:04
manufacturing capacity to China. And I
01:21:07
don't think it was a free market because
01:21:08
they had all these advantages under the
01:21:09
WTO that we talked about on a previous
01:21:11
podcast. They were able to subsidize
01:21:13
their national champions while still
01:21:17
remaining compliant with the WTO rules
01:21:18
because supposedly they were a
01:21:19
developing country. It was totally
01:21:20
unfair. And what they would do is
01:21:23
through these subsidies, they would
01:21:25
allow these national champions to
01:21:27
essentially dump their products in the
01:21:28
global market and drive everyone else
01:21:30
out of business. They became the low-cost
01:21:31
producers. I think that as the president
01:21:34
just said recently, not every industry
01:21:36
has to be treated as strategic. Clothes
01:21:39
and toys we don't necessarily have
01:21:40
to reshore in the United States but
01:21:42
steel production is definitely strategic.
01:21:45
Steel, aluminum, and I'd say the rare
01:21:47
earths, we have to have that capacity. We
01:21:49
cannot be completely dependent on China
01:21:51
for our supply chain so some of these
01:21:53
industries have to be reshored and if
01:21:55
you need subsidies to do it I think that
01:21:57
you do it for national security reasons
01:21:59
first and foremost. There are other
01:22:01
Yeah. Yeah. There are other industries
01:22:03
where the private market works just fine
01:22:05
and what we need to do to help those
01:22:07
companies is simply to not get in their
01:22:08
way with unnecessary red tape and
01:22:10
regulations. So, I would say empower the
01:22:13
free market when America is the winner.
01:22:16
And then in other areas where they're
01:22:20
necessary for national security, then
01:22:21
you have to be willing to basically
01:22:23
protect our industries. Freeberg, it
01:22:25
seems like the great innovation here
01:22:26
might also be the American public
01:22:29
getting upside. When we gave loans to
01:22:32
Solyndra and Tesla and Fisker and a bunch
01:22:35
of people for battery-powered, you know,
01:22:37
energy under Obama, we just got paid
01:22:39
back in some cases by Elon. Other people
01:22:42
defaulted, but we didn't get equity.
01:22:44
What if we had instead of getting our
01:22:46
500 million back in the loan from
01:22:47
Elon, which he paid back early and with
01:22:49
interest, if we got half back and we got
01:22:51
half in equity, RSUs, whatever, stock
01:22:54
options, warrants, this would be an
01:22:56
incredible innovation. So, what are your
01:22:57
thoughts here? because people look to
01:22:59
this podcast as, hey, the free market
01:23:00
podcast, but this does seem to be a
01:23:03
notable exception here of maybe we
01:23:05
should get involved and do these golden,
01:23:09
you know, share votes, board seats, you
01:23:12
know, maybe more creative um structures
01:23:15
in order to win faster. What are your
01:23:17
thoughts, Freeberg? I don't like it. I
01:23:20
don't like the government and markets.
01:23:22
Keep the government out of the markets.
01:23:24
It creates a slippery slope. First of
01:23:26
all, I think markets don't operate well
01:23:28
if government's involved. It gets
01:23:30
inefficient and that hurts consumers. It
01:23:32
hurts productivity. It hurts the
01:23:34
economy. Second, I think it's a slippery
01:23:36
slope. You do one thing. Question,
01:23:38
though. If government non-intervention
01:23:40
results in all the steel production
01:23:42
moving offshore, if it results in all
01:23:45
the rare
01:23:46
earth processing and the rare earth
01:23:49
magnet casting industries moving
01:23:52
offshore, in fact, not just moving
01:23:54
offshore, but moving to an adversarial
01:23:57
nation such that they can just switch
01:24:00
off our supply chain for pretty much
01:24:02
every electric motor. Is that an outcome
01:24:05
of the quote-unquote free market that we
01:24:06
should accept? Well, then I think that's
01:24:09
where the government can play a role in
01:24:10
trade deals to manage that effect. So
01:24:13
you can create incentives that'll drive
01:24:15
onshore manufacturing by increasing the
01:24:18
tariff or restricting trade with foreign
01:24:20
countries so that there isn't a cheaper
01:24:22
alternative, which is obviously one of
01:24:24
the plays that this Trump administration
01:24:25
is trying to do. I'd rather have that
01:24:27
mechanism than the government making
01:24:30
actual market-based decisions and
01:24:31
business decisions. You know how
01:24:32
inefficient government runs. You know
01:24:34
how difficult it is to assume that that
01:24:36
bureaucracy is actually ever going to
01:24:37
act and pick any best interest or any
01:24:39
good interest at all. They're just going
01:24:41
to [ __ ] it all up. So, I'd rather keep
01:24:43
the government entirely out of the
01:24:44
market. Create a trade incentive where
01:24:46
the trade incentive basically will drive
01:24:48
private markets, private capital to
01:24:51
build that industry on shore here
01:24:53
because there isn't one and there's
01:24:54
demand for it because you've restricted
01:24:56
access to the foreign market. That I
01:24:58
think would be the best general
01:24:59
solution: a tax. And then I think it's a
01:25:01
slippery slope because then you could
01:25:02
always rationalize something being
01:25:03
strategic, something being security
01:25:05
interest in the United States. So then
01:25:06
every industry suddenly gets government
01:25:08
intervention and government involvement.
01:25:09
And then the third thing is I don't want
01:25:11
the government making money that the
01:25:13
Congress then says, hey, we've got more
01:25:14
money, we got more revenue, let's spend
01:25:15
more money because then they'll create a
01:25:17
bunch of waste and nonsense that'll
01:25:19
arise from having increased revenue. One
01:25:22
aside, I will say: one thing where I
01:25:24
do think we do a poor job is we don't do
01:25:26
a good job, to answer your question, J-Cal,
01:25:28
of investing the retirement funds that
01:25:30
we've mandated through social security
01:25:32
we should be taking the $4.5 trillion
01:25:34
that our social security beneficiaries
01:25:36
have had deducted from their paychecks
01:25:38
over many many years and those social
01:25:41
security future retirees or current
01:25:43
retirees are getting completely ripped
01:25:45
off because their money is being loaned
01:25:46
to the federal government. It's not
01:25:48
being invested. It's been loaned to the
01:25:49
government to spend money and run a
01:25:51
deficit and ultimately inflate away the
01:25:53
value of the dollar. We should have been
01:25:55
investing those dollars in some of these
01:25:57
strategic assets. So if ever there were
01:25:59
to be shares or investment that the
01:26:01
government does, it should be done
01:26:02
through strategic investing through the
01:26:04
social security or retirement program.
01:26:06
Similar by the way to what's done in
01:26:07
Australia, where these supers
01:26:10
have created an extraordinary
01:26:12
surplus of capital. Same in Norway, same
01:26:15
in all the Middle East countries.
01:26:16
incredible sovereign wealth funds that
01:26:19
benefit the retirees and the population
01:26:21
at large. That's where the dollars
01:26:22
should be invested from. I do think the
01:26:24
fundamental focus priority right now
01:26:26
should be reforming social security
01:26:28
while we still have the chance. We have
01:26:30
until 2032 when social security will be
01:26:32
functionally bankrupt and everyone's
01:26:34
going to get over-taxed and kids are
01:26:36
going to end up having to pay um through
01:26:38
inflation for the benefits of the
01:26:40
retirees of the last generation.
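Freeberg's argument about investing the trust fund rather than lending it to the government can be made concrete with a back-of-the-envelope compounding sketch. This is illustrative only and not from the episode: the $4.5 trillion figure is the one cited in the discussion, the ~4% rate echoes the Treasury yields mentioned, and the 7% equity return is a common long-run planning assumption, not a forecast; real trust-fund flows, risk, and inflation are ignored.

```python
def grow(principal_tn: float, annual_return: float, years: int) -> float:
    """Future value of a lump sum under annual compounding, in $ trillions."""
    return principal_tn * (1 + annual_return) ** years

TRUST = 4.5  # $ trillions, the figure cited in the discussion

# Assumed rates for illustration: ~4% Treasury yield vs. a 7% long-run
# equity return (a common planning assumption, not a forecast).
print(round(grow(TRUST, 0.04, 20), 1))  # roughly 9.9
print(round(grow(TRUST, 0.07, 20), 1))  # roughly 17.4
```

Over 20 years the assumed 3-point spread nearly doubles the ending balance, which is the gap between lending the money and investing it that Freeberg is pointing at.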
01:26:41
Freeberg's right. We're on a seven-year
01:26:42
shot clock to when social security is
01:26:45
not funded. And by the way, this
01:26:46
opportunity to fix mandatory spending,
01:26:48
it was an opportunity to introduce some
01:26:50
structural reform in social security.
01:26:52
Another reason why I think that there's
01:26:53
a degree of disgrazia in this bill,
01:26:55
particularly with how Congress had acted
01:26:57
and not addressing what is becoming a
01:26:59
critical issue because everyone wants to
01:27:01
get reelected in the next 12 months, 18
01:27:03
months. They've got elections coming up.
01:27:05
So, everyone's scrambling to not mess
01:27:06
with that because you can't touch it.
01:27:07
It's like, you know what, guys? This is
01:27:09
bankrupt in seven years. It's going to
01:27:11
cost us 5 to 10 times as much when we have
01:27:13
to deal with it when everyone runs out
01:27:14
of money. Deal with it now. Fix the
01:27:17
problem. And by the way, we should flip
01:27:18
all that money, $4.5 trillion into an
01:27:21
investment account for the retirees
01:27:23
where they can own equities and they can
01:27:25
make investments in the markets and they
01:27:26
can participate in the upside of
01:27:28
American industry and the GDP growth
01:27:29
that's coming. Instead, they're getting
01:27:31
paid 3.8% or 4.5% average
01:27:34
from treasuries that they own that, by
01:27:36
the way, now have a lower credit
01:27:38
rating than they've ever had. You know,
01:27:40
it's crazy. I I'm I'm in complete
01:27:42
agreement with you and I think it's a
01:27:43
lack of leadership on Trump's part. If
01:27:45
Trump is going to criticize Taylor Swift
01:27:47
and Zelensky and Putin and everybody,
01:27:50
you know, all day long on Truth Social,
01:27:52
he can criticize Congress and the
01:27:54
Democrats and the Republicans on not
01:27:57
cutting spending. I think he should
01:27:59
speak up. I think he was elected to do
01:28:01
that. It was a big part of the mandate
01:28:03
and he should tone down the tariff
01:28:06
chaos and lean into
01:28:10
intelligent immigration, you know,
01:28:12
recruiting great talent to this country
01:28:14
and he should be pushing to make these
01:28:17
bills control spending. That's just
01:28:19
one person's belief. For the chairman
01:28:20
dictator Palihapitiya, your czar David Sacks, in
01:28:25
that crisp Brioni white shirt, very
01:28:27
beautiful, and the Sultan of Science deep
01:28:30
in his Wall-E era. I am the world's
01:28:33
greatest moderator and as Freeberg will
01:28:35
tell you, executive producer for life
01:28:37
here at the All-In podcast. We'll see
01:28:38
you all next time. Bye-bye. Jason.com.
01:28:40
Love you boys. Bye-bye.
01:28:43
We'll let your winners ride.
01:28:45
Rain Man David Sacks.
01:28:50
We open sourced it to the fans and
01:28:52
they've just gone crazy with it. Love
01:28:54
you. Queen of
01:28:56
[Music]
01:29:02
Quinoa. Besties are gone.
01:29:05
That is my dog taking your driveways.
01:29:10
Oh man, my appetiter will be. You should
01:29:13
all just get a room and just have one
01:29:15
big huge orgy cuz they're all just like
01:29:17
this like sexual tension that they just
01:29:19
need to release somehow.
01:29:23
Wet your feet. her feet. That's going to
01:29:26
be good. We need to get Murphy's
01:29:31
[Music]
01:29:36
our all in.

Episode Highlights

  • AI Doomerism Discussion
    The podcast dives into the concerns surrounding AI and its potential job displacement effects.
    “AI could have some significant impacts on the world.”
    @ 01m 35s
    May 31, 2025
  • Job Displacement Predictions
    Experts predict significant job losses in various sectors due to AI advancements.
    “If you hobble our own innovation, you probably end up losing the AI race to China.”
    @ 17m 22s
    May 31, 2025
  • Government Control Risks
    The conversation highlights fears of government using AI for control, posing a dystopian threat.
    “The greatest dystopian risk associated with AI is the risk that government uses it to control us.”
    @ 19m 11s
    May 31, 2025
  • The AI Revolution's Impact
    The speed of change in AI is unprecedented, potentially leading to job displacement but also new opportunities.
    “The velocity is greater but the benefit will be faster.”
    @ 26m 49s
    May 31, 2025
  • Job Creation Amidst AI
    Despite fears of job loss, AI may create new industries and opportunities, leading to economic growth.
    “Opportunity gets created. New jobs emerge, new industry, new income, costs go down.”
    @ 30m 10s
    May 31, 2025
  • AI as a Game Changer for Programmers
    AI tools are leveling the playing field for entry-level coders, making them significantly more productive.
    “It takes an entry-level coder and makes them 5x or 10x better.”
    @ 38m 05s
    May 31, 2025
  • The Future of Management with AI
    AI tools are transforming management by identifying productivity without bias.
    “The AI is going to do that.”
    @ 43m 25s
    May 31, 2025
  • DOGE Cuts and Reconciliation
    The DOGE cuts can't be included in the reconciliation bill due to Senate rules.
    “It's just wrong to blame the big beautiful bill for not containing the DOGE cuts.”
    @ 55m 16s
    May 31, 2025
  • Optimism for Economic Growth
    Discussion on how productivity boosts from AI and energy policies could improve fiscal situations.
    “We need that productivity boost and I think that the net result will improve the fiscal situation.”
    @ 58m 10s
    May 31, 2025
  • Elon Musk's Budget Insight
    Elon Musk argues the bill can't be both big and beautiful.
    “The bill can be big or it can be beautiful. It can't be both.”
    @ 58m 26s
    May 31, 2025
  • Nippon Steel Merger Cleared
    President Trump cleared the Nippon Steel merger, promising 70,000 jobs in the US.
    “This partnership will create 70,000 jobs in the US.”
    @ 01h 17m 30s
    May 31, 2025
  • Urgent Need for Social Security Reform
    Social Security is on a seven-year countdown to bankruptcy, requiring immediate reform.
    “We have until 2032 when social security will be functionally bankrupt.”
    @ 01h 26m 34s
    May 31, 2025

Key Moments

  • Power Vacuum 08:00
  • AI Velocity 26:49
  • AI Productivity Boost 38:05
  • DOGE Cuts Debate 54:36
  • CBO Critique 59:17
  • GDP Growth 1:14:15
  • Energy Market Importance 1:17:05
  • Social Security Crisis 1:26:42
