
David Sacks Exposes the AI Doomer Industrial Complex: Global Control, Threat Inflation, Biden Links

June 02, 2025 / 09:45

This episode discusses the effective altruism movement, AI existential risk, and the influence of major donors like Dustin Moskovitz. Key topics include the funding of organizations promoting AI regulation, the connections between effective altruists and the Biden administration, and the potential risks of AI governance.

The conversation highlights the role of Open Philanthropy, funded by Moskovitz, in supporting various organizations that share similar ideologies. The discussion also mentions key figures such as Tarun Chhabra and Elizabeth Kelly, who transitioned from the Biden administration to Anthropic.

Concerns about AI governance are raised, particularly regarding the balance between safety regulations and innovation. The hosts argue that focusing solely on existential risks may hinder the US's competitive edge against countries like China.

The episode critiques the narrative surrounding AI risks, suggesting it is used to justify increased government control. The potential for government misuse of AI is emphasized as a significant concern.

Overall, the episode presents a critical view of the motivations behind AI regulation efforts and the implications for future governance.

TL;DR

The episode critiques effective altruism's influence on AI regulation and the risks of government control over technology.

Video

00:00:00
Well, there's also an industrial complex
00:00:03
according to some folks that
00:00:06
are backing this. If you've heard of
00:00:09
effective altruism, that was like this
00:00:11
uh movement of a bunch of I don't know,
00:00:14
I guess they consider themselves
00:00:15
intellectuals, Sacks, and they were kind
00:00:20
of backing a large swath of
00:00:23
organizations that I guess we would call
00:00:26
in the industry astroturfing or what do
00:00:28
they call it when you make so many of
00:00:29
these organizations that they're not
00:00:33
real in politics and flooding the zone
00:00:35
perhaps. So if you were to look at this
00:00:37
article here, Nick, I think you have the
00:00:39
AI existential risk um industrial
00:00:43
complex graphic there. It seems like a
00:00:46
group of people according to this
00:00:47
article have backed to the tune of 1.6
00:00:50
billion a large number of organizations
00:00:52
to scare the bejesus out of everybody
00:00:55
and make YouTube videos, TikToks, and
00:00:58
they've they've made a map of it.
00:01:00
There's some key takeaways here from
00:01:02
that article where it says here that
00:01:06
it's an inflated ecosystem. There's a
00:01:07
great deal of redundancy. Same names,
00:01:09
acronyms, logos with only minor changes.
00:01:11
Same extreme talking points. Same group
00:01:13
of people just with different titles.
00:01:14
Same funding source. There's a funding
00:01:17
source called Open Philanthropy which
00:01:18
was funded by Dustin Moskovitz who is
00:01:20
one of the Facebook billionaires.
00:01:23
Chamath, you worked with him, right? I
00:01:24
mean he was wasn't he like Zuck's
00:01:26
roommate at Harvard or something and one
00:01:28
of the first engineers made a lot of
00:01:30
money. So he funded this; he's an EA
00:01:33
and he funded this group called Open
00:01:35
Philanthropy which then has become the
00:01:37
feeder for essentially all these other
00:01:40
organizations which are almost different
00:01:42
fronts to basically the same underlying
00:01:44
EA ideology.
00:01:46
And what's interesting is that the guy
00:01:48
who set this up for Dustin, Holden
00:01:50
Karnofsky, who is a major effective
00:01:53
altruist and was doling out all the
00:01:55
money, he's married to Dario's sister
00:01:57
and she's, I guess, associated with EA
00:01:59
and she was one of the co-founders of
00:02:01
Anthropic. So these are not
00:02:03
coincidences. I mean the reality is
00:02:05
there's a very specific ideological and
00:02:08
political agenda here. Now what is that
00:02:11
agenda? It's basically global AI
00:02:14
governance if you will. They want AI to
00:02:17
be highly regulated but not just at the
00:02:19
level of the nation state but let's say
00:02:22
internationally, supranationally.
00:02:25
well if you just do a quick search on
00:02:28
global compute governance it'll tell you
00:02:31
what the key aspects are. So number one
00:02:34
they want regulation of computational
00:02:36
resources. This includes access to
00:02:40
GPUs. They want AI safety and security
00:02:43
regulation. They want international, you
00:02:45
could call them globalist,
00:02:46
agreements. And they want ethical and
00:02:49
societal considerations or policy built
00:02:51
into this. Now, what does that sound
00:02:53
like? That sounds a lot to me like what
00:02:55
the Biden administration was pursuing.
00:02:57
Specifically, we had that Biden
00:03:00
executive order on AI, which was 100
00:03:02
pages of burdensome regulation that was
00:03:04
designed to promote AI safety, but had
00:03:07
all these DEI requirements. So you know
00:03:09
it led to woke AI. You remember when
00:03:12
Google launched Black George Washington
00:03:14
and so forth. They had the Biden diffusion
00:03:17
rule which created this global licensing
00:03:19
framework to sell GPUs all over the
00:03:21
world. So extreme restrictions on
00:03:24
proliferation of servers of computing
00:03:26
power. They created what's called the AI
00:03:29
safety institute and they again fostered
00:03:33
these international AI summits. So if
00:03:36
you actually look at what the Biden
00:03:38
administration was tangibly doing in
00:03:40
terms of policy and you look at what
00:03:43
EA's agenda is with respect to global
00:03:45
compute governance, they were pushing
00:03:48
hard on these fronts. And now if you
00:03:50
look at the level of personnel, there
00:03:52
were very very
00:03:53
powerful Biden staffers who now all work
00:03:56
at Anthropic. So probably the most
00:03:59
powerful Biden staffer on AI over the
00:04:02
past four years was a lawyer named Tarun
00:04:05
Chhabra, and he now works at Anthropic for
00:04:09
Dario. Elizabeth Kelly, who was the
00:04:11
founding director of the AI safety
00:04:13
institute in the government now works at
00:04:16
Anthropic. Like I mentioned, Dario's
00:04:20
sister is married to Holden
00:04:21
Karnofsky, who doles out all the money to
00:04:24
these EA organizations. So if you were
00:04:26
to do something like create a network
00:04:27
map, you would see very quickly that
00:04:30
there's three key nodes here. There's
00:04:32
the effective altruist movement of which
00:04:34
Sam Bankman-Fried is the most notable member,
00:04:36
but of which I think Dustin Moskovitz is now the
00:04:39
main funder. There's the Biden
00:04:40
administration and like the key staffers
00:04:42
and then you've got anthropic and it's a
00:04:46
very tightly wound network. Now why does
00:04:49
this matter? Let's get... Yeah. Also the
00:04:51
goals, I think. Yes. Well, the
00:04:53
goal, like I said, is global compute
00:04:55
governance. It's basically establishing
00:04:57
national and then international
00:04:59
regulations of AI. Now, but they would
00:05:02
claim, let's just pause here for a
00:05:04
minute. They would claim the reason
00:05:05
they're doing it. And so, we'll
00:05:08
save if we believe this or not, but they
00:05:11
are concerned about job destruction in
00:05:14
the short term. They're also concerned
00:05:15
as science fiction as it is that the AI
00:05:18
when we get to like a sort of
00:05:20
generalized super intelligence is going
00:05:22
to kill humanity that this is a nonzero
00:05:24
chance. Elon has said this before.
00:05:26
They've sort of taken it to an almost
00:05:28
like a certainty. Yes, we're going to
00:05:30
have so many of these general
00:05:32
intelligences. They only believe
00:05:34
that when they're raising money. Well,
00:05:36
that's what I'm sort of getting at.
00:05:37
Like, so I think they believe it all the
00:05:39
time, but maybe the press releases
00:05:39
are timed for the fundraising. But
00:05:42
let me let me answer that. So, right.
00:05:45
Yeah. Yeah. Look, I mean, it is a great
00:05:47
product. Claude kicks ass. I'm more
00:05:48
interested in the political dimension of
00:05:49
this. I'm not bashing a specific product
00:05:52
or company, but look, I think that there
00:05:54
is some nonzero risk of AI growing into
00:05:58
a super intelligence that's beyond our
00:06:00
control. They have a name for that. They
00:06:01
call it X-risk or existential risk. I
00:06:05
think it's very hard to put a percentage
00:06:06
on that. I'm willing to acknowledge that
00:06:09
is a risk. You know, I think about that
00:06:10
all the time and I do think we should be
00:06:12
concerned about it. But there's two
00:06:14
problems I think with this approach.
00:06:15
Number one is X-Risk is not the only
00:06:18
kind of risk. I would say that China
00:06:20
winning the AI race is a huge risk. I
00:06:22
don't really want to see a CCP AI
00:06:25
running the world. And if you hobble our
00:06:29
own innovation, our own AI efforts in
00:06:31
the name of stomping out every
00:06:32
possibility of X-risk, then you probably
00:06:35
end up losing the AI race to China
00:06:37
because they're not going to abide by
00:06:38
those same regulations. So again, you
00:06:41
can't optimize for solving only one risk
00:06:44
while ignoring all the others. And I
00:06:46
would say the risk of China winning the
00:06:48
AI race is, you know, it might be like
00:06:52
30%. Whereas I think X risk is probably
00:06:54
a much lower percentage. So there are
00:06:57
there are other risks to to worry about.
00:06:59
And I I do think that they are
00:07:00
single-mindedly focused on scaring
00:07:03
people with some of these headlines
00:07:05
around. First it was the bioweapons, then
00:07:07
it was the super intelligence, now it's
00:07:08
the job loss. And I think it's a
00:07:11
tried-and-true tactic of people who want to give
00:07:15
more power to the government to scare
00:07:17
the population, right? Because if you
00:07:19
can scare the population and make them
00:07:21
fearful, then they will cry out for the
00:07:23
government to solve the problem. And
00:07:25
that's what I see here is that you've
00:07:26
got this elaborate network of front
00:07:30
organizations which are all motivated by
00:07:32
this EA ideology. They're funded by a
00:07:35
hardcore leftist. And by the way, I
00:07:37
became aware of Dustin's politics
00:07:40
because of the Chesa Boudin recall. I
00:07:43
found out that he was a big funder of
00:07:44
Chesa Boudin. Remember this? Yeah. Dustin
00:07:46
Moskovitz and Cari Tuna, his wife.
00:07:49
Also, Reed Hastings just joined the
00:07:51
board of Anthropic. Remember when he
00:07:55
back in 2016 he tried to drive Peter
00:07:58
Thiel off of the board of Facebook for
00:08:01
supporting Trump. So, you know, these
00:08:03
are like committed leftists. They're
00:08:06
Trump haters. But the point is that
00:08:08
these are people who fundamentally
00:08:09
believe in more government,
00:08:12
in empowering government
00:08:14
to the maximum
00:08:16
extent. Now, my problem with that is I
00:08:18
actually think that probably the single
00:08:20
greatest dystopian risk associated with
00:08:24
AI is the risk that government uses it
00:08:27
to control all of us. To me, like you
00:08:30
end up in some sort of Orwellian future
00:08:33
where AI is controlled by the
00:08:35
government. And out of all the risks
00:08:37
we've talked about, that's the only one
00:08:39
for which I've seen tangible evidence.
00:08:41
So in other words, if you go back to
00:08:44
last year when we had the whole woke AI,
00:08:46
there was plenty of evidence that the
00:08:48
people who were creating these products
00:08:51
were infusing their left-wing or woke
00:08:54
values into the product to the point
00:08:56
where it was lying to all of us and it
00:08:58
was rewriting history. And there was
00:09:00
plenty of evidence that the Biden EO was
00:09:02
trying to enshrine that idea. Was
00:09:05
basically trying to require DEI be
00:09:07
infused into AI models. And it wanted to
00:09:11
anoint two or three winners in this AI
00:09:14
race. So, I'm quite convinced that prior
00:09:16
to Donald Trump winning the election, we
00:09:18
were on a path of global compute
00:09:20
governance where two or three big AI
00:09:22
companies were going to be anointed as
00:09:23
the winners. And the quid pro quo is that
00:09:25
they were going to infuse those AI
00:09:27
models with woke values. And there was
00:09:29
plenty of evidence for that. You look at
00:09:30
the policies, you look at the models.
00:09:32
This was not a theoretical concern. This
00:09:34
was real. And I think the only reason
00:09:37
why we've moved off of that trajectory
00:09:40
is because of Trump's election.

Episode Highlights

  • The AI Existential Risk Industrial Complex
    A discussion on the funding and influence behind AI organizations, revealing a tightly wound network of effective altruists and political agendas.
    “There's a very specific ideological and political agenda here.”
    @ 02m 05s
    June 02, 2025
  • Concerns Over AI Governance
    The conversation highlights fears about AI governance and the potential for government control over AI technologies.
    “The greatest dystopian risk associated with AI is the risk that government uses it to control all of us.”
    @ 08m 20s
    June 02, 2025

Episode Quotes

Key Moments

  • Effective Altruism (00:09)
  • AI Governance (02:14)
  • Dystopian Risks (08:20)

