Is The AI Bubble About To Pop? - Chamath Palihapitiya

August 24, 2025 / 09:37

This episode discusses AI stock fluctuations, an MIT study on generative AI failures, and comments from industry leaders like Chamath Palihapitiya and David Sacks.

The episode begins with a review of an MIT study revealing that 95% of generative AI pilots fail to reach production due to employee resistance and poor-quality output. Chamath Palihapitiya shares observations from his software company, 8090, noting that many companies are spending on AI without clear strategies.

David Sacks comments on the current state of AI investments, suggesting that while there has been a correction in public AI stocks, the overall investment cycle remains strong. He emphasizes the importance of distinguishing between probabilistic and deterministic software in AI applications.

The discussion highlights the backlash against overly optimistic narratives surrounding AI, particularly regarding job loss and rapid advancements. The hosts agree that while AI is a powerful tool, it requires careful implementation and validation to deliver business value.

As the conversation wraps up, the hosts reflect on the evolution of AI technology, suggesting that the development process will be more gradual than previously anticipated, countering the hype surrounding imminent breakthroughs.

TL;DR

AI stocks drop after MIT study reveals 95% of generative AI projects fail; industry leaders discuss implications and future of AI investments.

Video

00:00:00
AI mania hit a bit of a detour this
00:00:02
week. All over the, you know, three or
00:00:04
four days, AI stocks were down across
00:00:06
the board because of this MIT study that
00:00:09
went viral as well as Sam Altman's
00:00:11
comments about a bubble and Zuck
00:00:14
instituting a hiring freeze in AI after
00:00:17
going on a complete blitzkrieg. So,
00:00:19
let's get into it. Act one, Monday,
00:00:21
Fortune dug up a generative AI study
00:00:24
that MIT published last week or last
00:00:26
month, I should say. In that study, MIT
00:00:29
found that 95% of gen AI pilots are
00:00:32
failing to make it to production because
00:00:34
of employee resistance, poor quality
00:00:36
output, and the most interesting problem
00:00:39
seems to be resource misallocation.
00:00:42
According to the study, 70% of gen AI
00:00:44
budgets are going towards things like
00:00:46
building sales and marketing tools,
00:00:47
which have poor ROI. The highest ROI was
00:00:50
found in back office optimization
00:00:54
like automating tasks that cut back
00:00:56
spend, you know, on various departments.
00:00:59
Basically
00:01:01
these pilots aren't working. Chamath, here's
00:01:03
what the study found: they evaluated 300
00:01:06
AI implementations and interviewed 150
00:01:08
leaders across 52 companies. You've been
00:01:11
grinding it out with your own software
00:01:13
company now called 8090.
00:01:16
Does this align with what you're seeing
00:01:18
on the field, Chamath?
00:01:20
I think what I would tell you is that I
00:01:22
think the first wave was just a lot of
00:01:24
boards who read the words AI somewhere
00:01:28
in an article and then went to a board
00:01:31
meeting and turned to the CEO and said,
00:01:33
"What's your AI strategy?"
00:01:34
Mhm.
00:01:35
And then the CEO turns around and sends
00:01:37
that down into their org, and it eventually hits
00:01:40
the CTO's desk. And I think the first
00:01:42
wave is mostly people just spending
00:01:44
money because they had large existing
00:01:46
budgets. And so they were like, let's
00:01:47
just go and try a bunch of different
00:01:49
things. And I think now we're going
00:01:51
through the sorting function of
00:01:52
realizing that there's a big difference
00:01:54
between probabilistic software and
00:01:57
deterministic software. That's probably
00:01:59
the biggest reason why you're seeing so
00:02:01
many failure modes in sales and
00:02:03
marketing. It's very hard to codify
00:02:05
sales and marketing into a set of
00:02:06
heuristics that never change. But back
00:02:08
office processes,
00:02:11
why they're so good and a great target
00:02:15
for AI is ultimately you have so many
00:02:17
people that have been hired to deal with
00:02:18
edge cases, right? I think like that's
00:02:20
what people do in most companies is
00:02:21
they're in charge of a process and
00:02:23
they're dealing with edge cases.
00:02:24
And I think that you can get extremely
00:02:26
high rates of accuracy if you implement
00:02:28
AI correctly in back office tasks.
00:02:32
I think the real question is like what
00:02:34
happens to all of this revenue that has
00:02:37
been generated. You're seeing companies
00:02:39
generating $50 million of ARR in a
00:02:42
matter of months and then raising huge
00:02:45
rounds. I think what we haven't seen is
00:02:47
whether there'll be any sort of either
00:02:49
logo churn or dollar churn as new
00:02:53
companies come in with even cheaper
00:02:55
solutions, the foundational models move
00:02:57
up the stack and just absorb capability
00:02:59
or things just don't work and they get
00:03:01
abandoned. All of that churn happened in
00:03:05
social. I remember when I was in the
00:03:07
middle of helping build Facebook, we
00:03:09
went through that whole cycle. There were
00:03:11
7,000 or 8,000 social companies
00:03:14
and within six years there were five of
00:03:17
us left. It happened in SaaS when I was
00:03:20
investing in SaaS. There were a couple of
00:03:22
very early and important successes like
00:03:24
Yammer, which Sacks started and which I was
00:03:27
very lucky to be an investor in, but then
00:03:30
it took many years for the handful of
00:03:31
winners to get really sorted out and I
00:03:34
suspect we're about to go through that
00:03:35
same cycle in AI. So I think that
00:03:37
article basically paints a very accurate
00:03:40
picture. There's been a lot of trying
00:03:42
and experimentation.
00:03:44
We now need to go through a sorting and
00:03:45
a cleansing and then we'll rebuild from
00:03:48
first principles around the
00:03:49
and not surprising
00:03:51
not surprising to our sultan of SaaS,
00:03:53
David Sacks, because the sales and
00:03:56
marketing departments they they're very
00:03:58
promiscuous when it comes to new tools
00:04:00
getting a great lead closing a sale you
00:04:03
can directly connect it so we always see
00:04:04
them test stuff out doesn't surprise me
00:04:07
that we'd see sales and marketing go
00:04:09
after this first but what do you think
00:04:10
about the brittleness of this revenue,
00:04:13
the churn, Sacks. Are we going to see a
00:04:17
lot of these companies rocket up to 100
00:04:19
and come back down to 50? Is this
00:04:20
something you're seeing, uh, or your
00:04:23
firm which I don't know your status at
00:04:25
the firm? Maybe you could tell us how
00:04:26
how that's working out in terms of your
00:04:28
intelligence there. But what is craft
00:04:29
seeing on the field there?
00:04:31
I think we're seeing a lot of
00:04:33
interesting AI applications being
00:04:35
developed, but it's still very early
00:04:37
days. And I think that over the past
00:04:39
week or so, there was a correction in
00:04:42
sentiment towards AI, but I think it was
00:04:45
a healthy correction. I don't think this
00:04:46
was the beginning of a bust cycle or
00:04:48
something like that. I still think
00:04:50
that we're in a boom. I still think
00:04:51
we're in an investment super cycle, but I
00:04:54
think there was a healthy dose of
00:04:55
skepticism applied to some of the more
00:04:57
fantastical claims that have been made
00:04:59
about AI. And I think this is why you
00:05:01
saw there was like roughly what, like a
00:05:02
10% correction in public AI stocks. And
00:05:05
there was that MIT report that said that
00:05:08
95% of projects and companies are
00:05:11
not making it to production yet and so
00:05:13
forth and so on. So I feel like we're
00:05:14
getting in the weeds a little bit here
00:05:15
and what we should be talking about is
00:05:16
just sort of where we are in this um in
00:05:18
this AI super cycle.
00:05:20
And where do you perceive us at? We're
00:05:22
in the experimentation phase. We're in
00:05:23
the pilot phase. But this issue around
00:05:27
probabilistic versus deterministic makes
00:05:29
it hard to trust the software. Is that
00:05:31
what you think the key issue is? Well,
00:05:33
let me tell you why I think that this
00:05:34
correction is actually healthy is that
00:05:38
after ChatGPT launched at the end of 2022
00:05:40
and then throughout 2023, the dominant
00:05:43
narrative in AI was that AGI was just two
00:05:45
to three years away and everyone kind of
00:05:48
had their own definition of what AGI was, but
00:05:50
it was kind of this idea of smarter than
00:05:52
human super intelligence and kind of
00:05:55
magic AI. AI would be able to do
00:05:57
everything. And as a result of that, you
00:06:00
kind of had both utopian and dystopian
00:06:03
narratives really proliferated. And so,
00:06:06
you know, you started getting this like
00:06:07
job loss narrative that within a few
00:06:09
years, 50% of knowledge workers would be
00:06:11
out of jobs. You got this rapid takeoff
00:06:13
narrative that basically the leading AI
00:06:15
models would be able to turn their
00:06:18
intelligence towards improving
00:06:19
themselves towards recursive
00:06:21
self-improvement and therefore within a
00:06:25
couple of years the leading models would
00:06:26
basically achieve super intelligence and
00:06:27
leave everyone else in the dust and then
00:06:29
capture all the value of humanity and
00:06:31
then based on that narrative which again
00:06:35
it was the same underlying narrative
00:06:37
that fueled both utopian and
00:06:40
dystopian or doomer takes on AI. You
00:06:43
got, I think, a huge backlash which has
00:06:45
already been forming where you have a
00:06:47
thousand bills running through state
00:06:49
legislatures right now and you have all
00:06:51
this AI safety legislation. You got
00:06:54
bills like California's SB 1047,
00:06:57
which would have applied a tremendous
00:06:58
amount of new regulation to AI. So
00:07:01
you saw this policy backlash happen as
00:07:03
well and it was all based on these
00:07:06
fantastical and kind of magic views of
00:07:08
what AI was going to do in just the next
00:07:10
2 to 3 years. And I think that the
00:07:13
reason why this recent skepticism is
00:07:15
healthy is because I think it's
00:07:16
rebutting all of that and it's showing
00:07:18
that, you know, AI is a powerful tool. I
00:07:22
mean, I definitely think it's a new
00:07:24
and important form of computing and it
00:07:25
is going to unlock tremendous value in
00:07:27
the economy, but it's going to take us a
00:07:29
while to get there. I mean, you can't
00:07:31
just tell the AI, you know, be a sales
00:07:34
rep, be a customer service rep, and kind
00:07:36
of throw it over the wall and expect
00:07:38
that it's going to replace a human. It
00:07:40
takes a lot of prompting and iteration
00:07:43
and validation to make the AI work to
00:07:46
make it generate business value. And if
00:07:49
we were on a path towards rapid takeoff,
00:07:53
then what you would see is that the
00:07:54
leading AI models would be increasing
00:07:57
the distance between like the top one or
00:08:00
two models would be increasing the
00:08:01
distance between you know the rest of
00:08:03
the models. And instead what we're
00:08:05
seeing is a clustering of model
00:08:07
performance around the same performance
00:08:08
benchmarks.
00:08:09
It's incremental, right? They're
00:08:10
incrementally
00:08:10
the progress to be a little bit more
00:08:12
incremental. It's more evolutionary
00:08:13
rather than revolutionary. And I think
00:08:15
this really crystallized around the
00:08:17
launch of GPT-5, where a lot of
00:08:19
people were expecting GPT-5 to be this
00:08:23
huge breakthrough. Sam Altman was sort of
00:08:25
teasing this concept by posting photos
00:08:27
of the Death Star, the idea being that
00:08:29
this model was going to blow everybody
00:08:31
else away and the reviews ended up being
00:08:33
very mixed and then we saw that on the
00:08:35
performance evaluations.
00:08:36
It's not that the model didn't represent
00:08:38
progress, it just fell short of these
00:08:41
lofty expectations that have been
00:08:42
created. So Freeberg, let me get you in
00:08:45
on this
00:08:45
just to sorry I've been kind of
00:08:46
longwinded here, but just let me just
00:08:48
sum this up, which is please I think
00:08:50
that what people can now see is that
00:08:53
we're not in like a loop of recursive
00:08:55
self-improvement. We're seeing that
00:08:57
there are a handful of great model
00:08:59
companies, but the development of this
00:09:01
technology is going to be a more normal
00:09:04
technology race. It's not like the
00:09:07
leading players are just all of a sudden
00:09:08
going to achieve AGI just very quickly.
00:09:11
And as a result of that, I think
00:09:13
because it is a more normal technology
00:09:16
race, I think we can apply a more normal
00:09:20
logic to it from both an investment and
00:09:22
a policy standpoint. And I think that a
00:09:26
lot of the narratives that were hyped up
00:09:28
about imminent doom or imminent utopia,
00:09:31
depending on what side you were on, were
00:09:33
just massively overhyped. And this is
00:09:35
why I think it's just a very healthy

Episode Highlights

  • MIT Study Reveals AI Pilot Failures
    A study shows 95% of AI pilots fail to reach production due to various issues.
“95% of gen AI pilots are failing to make it to production.”
    @ 00m 29s
    August 24, 2025
  • Healthy Correction in AI Sentiment
    Recent skepticism towards AI is seen as a healthy correction, not a bubble burst.
    “I think this correction is actually healthy.”
    @ 04m 45s
    August 24, 2025

Key Moments

  • AI Stock Drop @ 00:02
  • MIT Study Findings @ 00:21
  • AI Implementation Challenges @ 01:51
  • Healthy Skepticism @ 04:45
