Four CEOs on the Future of AI: CoreWeave, Perplexity, Mistral, and IREN

March 23, 2026 / 01:37:39

This episode features interviews with AI CEOs at Nvidia's GTC conference, discussing topics such as AI infrastructure, compute demand, and the future of AI technology.

Michael Intrator, CEO of CoreWeave, shares insights on the evolution of his company from crypto mining to providing GPU compute for AI applications. He discusses the importance of risk management and the transition to neural networks, highlighting partnerships with organizations like Inflection AI.

Aravind Srinivas, co-founder of Perplexity, explains the company's focus on accuracy and user experience in AI. He describes the evolution of their products and the integration of AI with personal computing, emphasizing the importance of trust and security.

Arthur Mensch, CEO of Mistral AI, discusses the challenges and opportunities of building AI models in Europe, particularly in light of privacy regulations. He emphasizes the significance of open-source models and the need for collaboration with industry experts.

Daniel Roberts, co-CEO of IREN, talks about the transition from Bitcoin mining to AI compute, detailing the company's growth and the importance of renewable energy in their operations.

TL;DR

AI CEOs discuss infrastructure, compute demand, and future innovations at Nvidia's GTC conference.

Video

00:00:00
I'm here at Nvidia's annual GTC
00:00:02
conference and I'm going to interview
00:00:04
four amazing AI CEOs. Stick with us.
00:00:15
>> Our episode is sponsored by the New York
00:00:17
Stock Exchange. Are you looking to
00:00:19
change the world and raise capital? Do
00:00:21
it at the NYSE. The NYSE is a modern
00:00:25
marketplace and a massive platform built
00:00:28
for scale and long-term impact. So if
00:00:31
you're building for the future, the NYSE
00:00:33
is where it happens.
00:00:37
>> One of the great companies of the AI era
00:00:40
is of course CoreWeave. They're
00:00:41
building massive infrastructure for
00:00:43
these hyperscalers. And in some ways,
00:00:45
Michael Intrator, welcome to the
00:00:47
program. You're the original
00:00:49
hyperscaler. You guys got in very early
00:00:52
and secured your I don't know which GPUs
00:00:56
you wound up getting but you were very
00:00:58
early to this trend. How did you get to
00:01:01
it so early and how did you build out
00:01:04
this you know first I guess at the time
00:01:06
neocloud? Yeah. So we we didn't really
00:01:09
start it as a Neocloud and I I uh I was
00:01:12
uh running an algorithmic hedge fund uh
00:01:15
focused on natural gas and uh when when
00:01:19
you build an algorithmic hedge fund um
00:01:21
once the algorithms are built you're
00:01:23
really just monitoring it and testing
00:01:25
different uh theses and doing all
00:01:27
that. But there's also a lot of downtime
00:01:29
and we got super interested in crypto.
00:01:32
Um, and you know, we're pretty nerdy. We
00:01:35
kind of dig under the hood and we
00:01:36
started to get interested in the
00:01:37
security layer. Uh, we looked at Bitcoin
00:01:40
and the mining for Bitcoin and we didn't
00:01:42
like it. We just thought that like
00:01:43
there's some brilliant engineer that
00:01:46
built the ASIC and they're probably
00:01:47
going to be better at running it than we
00:01:49
are. So, we really began to focus on the
00:01:51
GPUs mostly because the GPUs were you
00:01:55
can mine Ethereum with them. uh but you
00:01:58
could also do all these other things and
00:01:59
really so right from the start we looked
00:02:02
at the compute as an option to be able
00:02:06
to deploy our computing power to
00:02:10
different use cases and so you know
00:02:12
began the company in 2017 uh you know um
00:02:15
spent the first kind of three years
00:02:19
mining crypto went through a couple of
00:02:21
crypto winters um because we had come
00:02:24
from a hedge fund where, you know, we
00:02:27
have real chops in risk management and
00:02:29
how we think about uh capital and risk
00:02:32
exposure and allocation and all of that.
00:02:34
And so we were really careful around
00:02:35
that right from the start. So we
00:02:37
weathered crypto winter really well um
00:02:39
and began to scale the company and
00:02:41
immediately started to look for other
00:02:43
use cases that you could use this
00:02:45
compute for because crypto was pretty
00:02:47
volatile.
00:02:48
>> Yeah. And crypto was a question mark at
00:02:50
that time.
00:02:50
>> Absolutely.
00:02:51
>> Yeah. I mean Bitcoin was speculative and
00:02:53
there were many other speculative projects.
00:02:55
the only other people using this type of
00:02:57
hardware were quants
00:02:59
>> medical researchers.
00:03:01
>> So a good way to think about it is like
00:03:03
the progression of products that we kind
00:03:05
of started to work on. You know first
00:03:07
was uh um crypto but we immediately
00:03:10
moved from crypto to CGI rendering and
00:03:12
we built projects that would allow uh um
00:03:16
folks that were trying to animate and
00:03:18
render images um you know kind of what
00:03:22
makes the movies cool, right? and and uh
00:03:24
we started to work on that and then we
00:03:26
moved to batch computing and started to
00:03:27
look at medical research and different
00:03:29
ways of using the compute to be able to
00:03:31
drive science. Um, and we just kind of
00:03:33
kept moving up the stack in terms of
00:03:36
complexity uh uh on how GPUs could be
00:03:39
used. And ultimately uh in like call it
00:03:43
like 2020 2021 we started to really try
00:03:47
to figure out how you can go ahead and
00:03:49
use GPUs for neural networks and that
00:03:52
was not something that uh we knew how to
00:03:54
do. Um, and so we actually went out and
00:03:56
bought a bunch of A100s and donated them
00:03:59
to a group that was working on
00:04:02
EleutherAI. They were working on an
00:04:04
open-source project with the thought that
00:04:07
these guys are taking the GPU compute
00:04:11
because we're donating it. They can't
00:04:13
really get pissed at us if we're not
00:04:14
very good at it initially. And uh that
00:04:16
worked out really well because
00:04:17
>> they can't complain about the SLA.
00:04:18
>> They they kept telling us like we need
00:04:20
more of this, you got to work on this.
00:04:22
And that began to really uh uh give us
00:04:26
an understanding of what was necessary
00:04:28
to run parallelized computing at scale. And
00:04:32
uh you know that that uh um we went
00:04:34
through it. I I I kind of feel like
00:04:35
buying those initial GPUs was the
00:04:37
tuition we paid to learn how to run this
00:04:40
business. And then one of the
00:04:41
interesting things is all of those guys
00:04:43
went back to their day jobs because they
00:04:45
were all volunteers working on this.
00:04:47
They were like-minded scientists. And
00:04:49
when they got to their day jobs, they
00:04:51
were all like, I want that
00:04:52
infrastructure. It's built the right
00:04:54
way. That's the way that researchers are
00:04:56
going to want to use it. And that
00:04:57
launched our our business. It was an
00:04:59
amazing story. And
00:05:00
>> so you went from crypto to these
00:05:03
researchers into academia and deep
00:05:05
research. What's the next card to turn
00:05:07
over in the poker game?
00:05:09
>> Yeah. So, so um what became very clear
00:05:11
to us very very early on was that the
00:05:15
scaling laws were going to drive and
00:05:18
remember this is really back in the you
00:05:20
know 2020, 2021, before the ChatGPT
00:05:24
moment occurred and we began to
00:05:27
understand that like computing
00:05:29
decommoditizes at scale right like when
00:05:31
when you know anybody can run a GPU but
00:05:34
can you run a cluster that's large
00:05:35
enough to train a model that can change
00:05:37
the world and that's a different
00:05:38
question. And so we really began to
00:05:41
think about like how do you go about
00:05:44
scaling up your delivery of this
00:05:47
computing to clients, larger and larger
00:05:49
clients. And that was the next card to
00:05:51
turn is to think about it from a okay,
00:05:54
you know, there's a component of this
00:05:55
that is going to lean into uh our
00:05:58
ability to access the capital to be able
00:06:01
to deliver our solution to the broadest
00:06:04
possible audience to the most
00:06:05
sophisticated consumers of this compute.
00:06:07
And and that was really the next card is
00:06:10
thinking about it as a business rather
00:06:12
than as a engineering project to be able
00:06:14
to deliver the the uh uh the
00:06:17
infrastructure and the software and
00:06:19
really everything between you know when
00:06:21
you when you're thinking about what we
00:06:22
do, we kind of live above the Nvidia
00:06:25
GPUs but below the models. Yeah. and
00:06:28
everything in there, all the software,
00:06:30
the integration of software and
00:06:32
operations and uh observability and all
00:06:35
the things that you need to be able to
00:06:36
build uh a cloud that's purpose-built
00:06:40
for this one specific use case, right?
00:06:42
So, we don't we don't do everything. We
00:06:44
really focus on one use case which
00:06:46
allows
00:06:46
>> you want to do web servers different you
00:06:48
got AWS,
00:06:48
>> you know what they do a great job. It's
00:06:50
like it's a it's a great solution. It
00:06:52
was a brilliant solution to solve a
00:06:53
problem. We just looked at it and said
00:06:55
there's a new problem and let's go about
00:06:57
let's go about looking at this problem
00:06:59
and try and come up with a solution to
00:07:01
deliver compute that solves that
00:07:03
problem.
00:07:03
>> And when did the language model start
00:07:05
dialing and calling you for you know
00:07:08
capacity?
00:07:09
>> Yeah. So our first, well,
00:07:13
our first language model was really
00:07:15
EleutherAI. But our first
00:07:18
large commercial one was Inflection. Um
00:07:22
and so you know we work with Mustafa and
00:07:25
Inflection and then we
00:07:27
really diversified from there uh into
00:07:31
the hyperscalers, into you know
00:07:35
OpenAI, across the
00:07:38
foundation models, um you know
00:07:41
and just kept scaling and scaling with
00:07:43
the belief that you know once again the
00:07:46
decommoditization
00:07:48
of compute the ability to to deliver a
00:07:52
solution and the solution is building
00:07:54
supercomputers that can change the world
00:07:57
and that's really what we began to focus
00:07:59
on. That was the lead into training and
00:08:01
now the world has gone through, you
00:08:03
know, this this moment where we've moved
00:08:05
from research into the productization of
00:08:09
this. It's it's it's beginning to work
00:08:11
its way in from the the uh the fringe of
00:08:14
organizations into the core of what they
00:08:16
do. And you can see that every day in
00:08:19
the uh in the amount of inference
00:08:21
compute that is being driven through you
00:08:24
know our uh infrastructure layer which
00:08:27
is just massive, which just
00:08:28
shows you people are consuming it
00:08:31
not just building models but they're
00:08:33
deploying them and and utilizing them.
00:08:35
>> I always think of inference as the
00:08:37
monetization
00:08:39
of the investment in artificial
00:08:41
intelligence. So when
00:08:44
we see our compute being used uh uh to
00:08:48
stand up the massive scale of inference
00:08:50
that's hitting our compute every day and
00:08:52
like you know inference is when people
00:08:54
ask the model a question it comes back
00:08:57
with an answer that's an inference or
00:08:58
when you ask the model a question and
00:09:00
then to go do something that's inference
00:09:02
right and that's actually where you're
00:09:04
you're you have the opportunity to
00:09:06
really drive value outside of the model
00:09:10
itself but into the real world and
00:09:13
that's really exciting for us. That's
00:09:15
what we like to watch. That's what I
00:09:17
like to watch in terms of gauging the
00:09:18
health.
00:09:19
>> What chips are those?
00:09:20
>> Um so so really uh you know we are we
00:09:26
are the tip of the spear in bringing um
00:09:28
the new architecture uh out of Nvidia uh
00:09:32
into uh um into commercial production at
00:09:35
scale. Yeah. And uh so when when you
00:09:38
know we were the first ones to bring the
00:09:39
H100s at scale, we were the first ones
00:09:41
to bring the H200s at scale, first ones
00:09:43
with the GB200s, and now you've got
00:09:46
the GB300s. And one of the things that's
00:09:48
that's that's amazing and really
00:09:50
fascinating for us is is you know people
00:09:53
are using the bleeding edge GPUs to
00:09:55
train models as the new architectures
00:09:58
come out and then they take those GPUs
00:10:01
and they move them into different
00:10:03
experiments and then over time they move
00:10:06
them into inference and they continue to
00:10:09
use them in inference for a very very
00:10:10
long time.
00:10:11
>> What is the shelf life of an A100 right
00:10:13
now? That's been a big debate is I think
00:10:16
for your company for Microsoft and I
00:10:19
guess Michael Burry you know who you must
00:10:21
have known when you were a quant you
00:10:23
know saying oh my god the whole industry
00:10:25
is the sky's falling and then we all
00:10:27
know in the industry that people don't
00:10:29
just throw this hardware away that they
00:10:30
find uses for it the street finds its
00:10:32
own use for technology so what's the
00:10:34
reality of the lifespan of these things
00:10:36
>> so so my my take on the the uh uh the
00:10:40
GPU depreciation debate is that it's
00:10:43
nonsense, right? It's a debate that is
00:10:45
being uh brought to the forefront by uh
00:10:48
some traders that have a short position
00:10:50
in the stock and they're trying to uh
00:10:52
talk down. Look, here's what we know,
00:10:54
right? Um
00:10:57
when when we buy infrastructure, we're a
00:10:59
success based company, right? We're a
00:11:00
small company on a relative basis
00:11:02
compared to the enormous companies that
00:11:04
we're competing with. And so
00:11:07
our clients come to us and they buy
00:11:09
compute for five years, for six years.
00:11:11
Our average contract is 5 years. So any
00:11:15
commentary by anyone either inside or
00:11:18
outside of the industry that this stuff
00:11:19
becomes obsolete in 16 months or
00:11:21
whatever nonsense they're spewing, it's
00:11:23
it doesn't it doesn't in any way match
00:11:26
up with the facts on the ground. The
00:11:28
facts on the ground is they're buying it
00:11:29
for 5 years. Right? If and my approach
00:11:32
to this has always been if people are
00:11:34
willing to pay me for it,
00:11:36
>> it still has value.
00:11:38
>> Correct.
00:11:38
>> Pretty simple way of of approaching it.
00:11:40
We use a six-year depreciation. Um, we
00:11:43
believe that the GPUs will last in
00:11:45
excess of six years, but we felt like
00:11:47
that was a fair and reasonable approach
00:11:49
to a technology cycle that's moving at
00:11:51
this velocity. Um, the A100s, the Amperes,
00:11:55
this year, the price has appreciated
00:11:57
through the year.
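The six-year schedule Intrator describes is ordinary straight-line depreciation; a minimal sketch, with a made-up purchase price purely for illustration:

```python
# Hypothetical illustration of a six-year straight-line depreciation
# schedule like the one mentioned above; the $30,000 cost is made up.

def straight_line_book_value(cost, years_elapsed, useful_life=6):
    """Book value after straight-line depreciation over useful_life years."""
    annual = cost / useful_life          # equal expense each year
    return max(0.0, cost - annual * years_elapsed)

# A $30,000 GPU server depreciates $5,000/year on a 6-year schedule.
print(straight_line_book_value(30_000, 3))  # 15000.0 after three years
```

Note the book value is an accounting figure; as the interview points out, resale prices for A100s can move independently of it.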
00:11:58
>> And why is that? I I think it's because
00:12:00
one of the things that happens is as
00:12:03
more installed capacity becomes
00:12:06
available, you have new companies that
00:12:08
come into existence that have new use
00:12:09
cases that have different size models
00:12:11
that are trying to uh build new
00:12:14
commercial ventures that maybe have been
00:12:16
blocked out of the H100s and never had
00:12:19
an opportunity to run on that. I mean to
00:12:20
make a very simple example for the
00:12:22
audience like when you trade in your
00:12:24
iPhone after 3 or 4 years you're like
00:12:27
who's going to use an iPhone 12 and it's
00:12:29
like have you been to South America or
00:12:32
Africa where you go to the store and you
00:12:35
buy an iPhone 12 or you buy the Pixel 7
00:12:38
and it costs $50 that's still got great
00:12:41
life left in it.
00:12:42
>> Absolutely.
00:12:42
>> Yeah. you know,
00:12:43
>> and so look, you know, we we find these
00:12:46
amazing use cases, new companies that
00:12:48
have come into existence or existing
00:12:51
companies that have integrated new
00:12:53
models into their workflow that are able
00:12:55
to use the Amperes and so they keep
00:12:59
buying any GPUs that we have available.
00:13:02
And once again, you know, the the
00:13:03
concept that a GPU
00:13:05
>> is no longer relevant or commercially
00:13:07
viable after 16 or 18 months or two
00:13:10
years.
00:13:10
>> Yeah, that's it just it just doesn't
00:13:12
make sense.
00:13:12
>> It's obviously false. I think sometimes
00:13:14
people get caught up in Moore's law or
00:13:16
in just how fast our industry is growing
00:13:19
and that there's so much at stake that
00:13:22
big companies are demanding the most
00:13:24
recent products. That doesn't mean that
00:13:27
the lifespan has gotten shorter. It
00:13:29
means the opportunity and the surface
00:13:31
area of the opportunity has gotten much
00:13:33
larger.
00:13:33
>> Yeah. Uh one of the things is is like
00:13:36
you know the the uh the the industry has
00:13:40
gotten so much attention for the
00:13:43
unprecedented scale of capital that is
00:13:46
coming to bear on this. And
00:13:49
because of that, there tends to be a
00:13:51
incredible focus on
00:13:54
the companies that are building on these
00:13:57
most advanced chipsets. And the truth of
00:14:00
the matter is is you know even within
00:14:03
those companies they have a long tail of
00:14:05
useful life
00:14:06
>> to provide inference horsepower to work
00:14:09
on other experiments to do less bleeding
00:14:12
edge activity but still needs to be done
00:14:14
>> and yeah I mean rendering comes to mind
00:14:16
as well or yeah we're making images on
00:14:19
Nano Banana like there will be a
00:14:21
use for it. There is a moment in time
00:14:23
where maybe the compute to power ratio
00:14:25
doesn't make sense. My my expectation is
00:14:29
is obsolescence will be defined by the
00:14:34
moment in time where the power
00:14:37
in the data center for me will be able
00:14:41
to be repurposed for a higher margin
00:14:44
than the existing infrastructure
00:14:46
provides. And you know, like I said, I I
00:14:49
fully expect this infrastructure to last
00:14:51
in excess of 6 years, but the the the
00:14:54
standard in the space has
00:14:56
really been used with one exception,
00:14:59
which is Amazon. Yeah, it's 6
00:15:01
years. That seems like the right
00:15:02
schedule. I'm not making it up. That's
00:15:04
what everybody's using.
00:15:05
>> Yeah. And the energy cost is the
00:15:09
opportunity because, hey, we
00:15:11
need that space. There's a better uh
00:15:14
reward here, and that hardware might get
00:15:16
resold to somebody else who wants
00:15:18
it, a hobbyist or something. It's
00:15:20
available
00:15:21
>> and or it could be sent someplace else
00:15:22
where they have more capacity when they
00:15:25
can repurpose it there. But I I I um I
00:15:27
kind of feel like, you know, we'll we'll
00:15:29
deal with that part of the business when
00:15:31
we get there. What I know right now is
00:15:34
it is extraordinarily profitable. It's
00:15:36
very accretive to my company to continue
00:15:39
to keep the infrastructure that's been
00:15:41
up and running, that's been on these
00:15:42
long-term contracts, and as it rolls
00:15:44
off, as it's been in use for 5 years,
00:15:47
you know, as it becomes available, I am
00:15:49
still able to sell it at a higher price
00:15:50
than it was a year ago. There's
00:15:53
competition now. When you were buying
00:15:54
these from Jensen back in the day, yeah,
00:15:57
you could buy them and have them
00:15:58
shipped, I would assume, within 30 days
00:16:01
or less. Nowadays what's the wait like
00:16:04
even for you a loyal old customer and is
00:16:07
there a bit of a battle is there
00:16:08
politics to who gets the servers like I
00:16:12
you see some like very big names talking
00:16:14
about they got to get an allocation is
00:16:16
it still a little bit crazy what's it
00:16:18
like to be in that category having to
00:16:20
buy something everybody wants
00:16:22
>> look uh you know I I uh I I think of it
00:16:25
as an affirmation of the business that
00:16:26
we're in right like the fact that we are
00:16:29
attracting competitors means
00:16:31
that the business is healthy and there's
00:16:33
a lot of people trying to deliver this
00:16:34
service because the need for this
00:16:38
infrastructure the need to integrate the
00:16:40
infrastructure you know into the
00:16:42
software layers to deliver it to
00:16:44
artificial intelligence uh either at the
00:16:46
model level or the inference level or
00:16:48
the application level or whatever you
00:16:50
know level of the five layer cake that
00:16:52
Jensen's you know focused on
00:16:55
the the fact that there are more people
00:16:57
coming into this it doesn't discourage
00:16:58
me. Um as far as getting access to the
00:17:01
GPUs, we show up like everybody else
00:17:03
with a um you know, we'd like to buy
00:17:06
here's a PO and we're ready to pay.
00:17:08
>> What's the wait time like? And
00:17:11
is it just really competitive or not?
00:17:15
Because I talked to Jensen about he said
00:17:17
I said, "How do you manage all these
00:17:18
like big egos and names and companies
00:17:20
trying to buy stuff?" And he said,
00:17:22
"Well, they order it and we give it to
00:17:24
them in the order in which they order
00:17:26
it."
00:17:26
>> Is it really like that?
00:17:27
>> It really is. Right. like you know he he
00:17:30
doesn't want to be in the position of
00:17:33
playing favorites or all like that that
00:17:35
just seems like a bad place to be with
00:17:37
your clients
00:17:37
>> or auctioning them off. Can you imagine?
00:17:40
>> Yeah, that would that that
00:17:41
>> that would be crazy.
00:17:42
>> Yeah.
00:17:43
I don't I'm not sure that would be good
00:17:45
for the long-term business. No. Yeah.
00:17:46
So, so our our our approach is
00:17:49
>> you might get some sovereigns coming in
00:17:50
and saying I'll pay double. They do that
00:17:52
with Ferraris too sometimes.
00:17:54
>> I guess these are the Ferraris of
00:17:56
computing, right?
00:17:56
>> In a way they are. Yeah. Bugattis. Our
00:17:59
our approach is to work with
00:18:02
clients across the entire space to find
00:18:05
opportunities that are really
00:18:07
interesting companies that can fit into
00:18:09
our contracting requirements
00:18:12
where we're going to be able to go out
00:18:14
and structure the debt that we require
00:18:16
in order to go out and and uh build
00:18:18
infrastructure at this scale. And um
00:18:21
>> how does all that debt work? I that is
00:18:23
something that you guys specialize in.
00:18:26
um corporate debt uh I'm in the venture
00:18:28
business people are like why should I be
00:18:30
in venture when corporate debt pays so
00:18:31
well corporate paper's so huge I'm
00:18:34
curious how this fits in and like what
00:18:37
uh interest rate people are paying on
00:18:40
you know a billion dollars in
00:18:43
infrastructure what do they pay on that
00:18:44
>> yeah so so coreweave has really been the
00:18:49
innovator around a lot of the financing
00:18:52
engines that have come to bear on this
00:18:54
we did the first GPU-based loans. Um,
00:18:58
and like I I think it's important or I'm
00:19:01
going to try to explain this in a way
00:19:02
people can understand. So what we do is
00:19:05
we go out and we find a client. Let's
00:19:07
use Microsoft. You brought them up
00:19:09
before, right? And Microsoft comes to us
00:19:11
and says, "We'd like to buy some compute
00:19:12
from you." And we say, "Okay, great.
00:19:14
We're going to sign a contract." Once I
00:19:15
have a contract in hand,
00:19:17
>> then what I do is I create something.
00:19:19
It's not a particularly creative name.
00:19:21
It's called the box. Yeah.
00:19:22
>> Right. And what I do with the box is I
00:19:25
take my contract with Microsoft and I
00:19:27
put it in the box. I go to Jensen and I
00:19:29
buy the GPUs, I put it in the box. I
00:19:31
take my data center contract, I put it
00:19:34
in the box. And now the box governs cash
00:19:36
flow.
00:19:37
>> And it has a waterfall of cash flow that
00:19:39
comes into it and goes out of it. And so
00:19:41
the way it works is then I build the
00:19:44
compute and then I deliver the compute
00:19:46
to Microsoft and they pay the box. They
00:19:48
don't pay me,
00:19:49
>> right? It goes into the box and the
00:19:51
first thing it does is it pays the data
00:19:53
center. It pays the power bill. It pays
00:19:56
the interest and the principal and then
00:19:59
whatever's left flows back to us, right?
00:20:02
And so it is an incredibly well
00:20:05
structured, time-tested,
00:20:07
pressure-tested vehicle to be able to
00:20:09
borrow money against client paper and
00:20:13
all of the other collateral around the
00:20:16
deal. Which is why CoreWeave, which is a
00:20:18
company that many people haven't ever
00:20:20
heard of, was able to go out and raise
00:20:22
$35 billion in 18 months to build
00:20:25
infrastructure at scale. But what's
00:20:27
important to understand is the economics
00:20:30
in this box are such that within 2.5
00:20:33
years of a 5-year deal, we have paid for
00:20:37
everything.
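The "box" Intrator describes is a project-finance cash-flow waterfall: client revenue enters the SPV and is paid out in seniority order before anything flows back to the parent. A minimal sketch, with all dollar figures hypothetical:

```python
# Minimal sketch of the "box" (SPV) cash-flow waterfall described above.
# All figures are hypothetical, for illustration only.

def waterfall(revenue, datacenter_cost, power_cost, interest_due, principal_due):
    """Apply the box's payment priority to one period's client revenue.

    Order: data center lease -> power bill -> debt interest -> debt
    principal -> any remainder flows back to the parent company.
    Returns a dict of what each claim actually received.
    """
    remaining = revenue
    paid = {}
    for name, due in [
        ("datacenter", datacenter_cost),
        ("power", power_cost),
        ("interest", interest_due),
        ("principal", principal_due),
    ]:
        paid[name] = min(remaining, due)  # senior claims are paid first
        remaining -= paid[name]
    paid["parent_equity"] = remaining  # whatever's left flows back
    return paid

# One month of a hypothetical contract: $10M of revenue hits the box.
out = waterfall(10_000_000, 1_500_000, 800_000, 1_200_000, 3_000_000)
print(out["parent_equity"])  # 3500000 flows back to the parent
```

The seniority ordering is what gives lenders their "give me my money back" confidence: equity only sees cash after every operating cost and debt payment is covered.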
00:20:38
>> The principal's been paid off.
00:20:40
>> Well, the principal's been paid off, the
00:20:42
interest has been paid off. The return
00:20:43
into the box is such that we are able to
00:20:47
generate returns to our company at the
00:20:50
box level which gives the most
00:20:53
sophisticated lenders in the world
00:20:56
whether it's banks or private equity
00:20:59
funds or um you know whoever, confidence
00:21:03
that they're going to
00:21:06
be able to achieve the one rule of
00:21:08
lending, which is give me my money back.
00:21:10
>> Yes. Works better when that happens.
00:21:12
>> So, they look at this box and they're
00:21:13
like, "Wow, we're really confident we're
00:21:15
going to get our money back."
00:21:16
>> And maybe they want 10 boxes.
00:21:17
>> That's correct.
00:21:18
>> And if any one box um goes upside down,
00:21:22
you can deal with it and it's not as
00:21:24
acute.
00:21:25
>> That's correct. And they don't
00:21:26
cross-pollinate. They don't cause uh
00:21:29
contagion across the boxes. They're all
00:21:31
independent and discrete. One, and
00:21:33
number two is as you do this and as you
00:21:36
show the lenders how this financing tool
00:21:40
and how this financing mechanism works,
00:21:43
what they do is they continue to lend
00:21:45
you money at progressively lower rates.
00:21:48
And so when you think about our cost of
00:21:51
capital over the last two years, we have
00:21:55
dropped our cost of capital by 600 basis
00:21:57
points.
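A basis point is one hundredth of a percentage point, so a 600-basis-point drop is 6 full percentage points. A tiny sketch with illustrative rates (the interview does not state the actual start and end rates):

```python
# A basis point (bp) is 1/100 of a percentage point, so the 600 bp
# drop mentioned above is 6 percentage points. Rates here are made up.

def apply_bp_drop(rate_pct, drop_bp):
    """Return the new rate (in percent) after a drop of drop_bp basis points."""
    return rate_pct - drop_bp / 100.0

print(apply_bp_drop(11.0, 600))  # 5.0 — e.g. a hypothetical 11% cost falling to 5%
```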
00:21:58
>> Wow. It is enormous, right? And so
00:22:00
you're seeing a company that is driving
00:22:02
its cost of capital down towards where
00:22:05
the hyperscalers borrow, which will
00:22:08
enable us to be able to be competitive
00:22:10
with them over time. And we have been
00:22:14
extremely
00:22:15
uh militant and diligent about feeding,
00:22:19
watering, and caring for those boxes so
00:22:21
that we continue to have access to the
00:22:23
capital markets in a way that allows us
00:22:25
to build and drive our business.
00:22:26
>> Means you have to say no. You have to say
00:22:28
no to maybe some people who want to be
00:22:29
in the box.
00:22:30
>> Yeah. So, we we look at some deals and
00:22:33
we're just like, you know, they want to
00:22:35
buy GPUs for a year and I look at it and
00:22:37
say I I that's not a deal that I can do
00:22:39
because it's too short for me to
00:22:41
amortize the expenses, and so I won't
00:22:44
do that. Right. Like once
00:22:46
>> and they can go to another provider who
00:22:47
maybe wants to take that risk on who has
00:22:49
extra capacity.
00:22:50
>> Absolutely. But our business is really
00:22:52
built around the risk management
00:22:54
of being able to get to scale. Because
00:22:56
in my mind
00:22:59
during this period of disequilibrium
00:23:01
during this period where there are not
00:23:03
enough GPUs in the world to uh provide
00:23:06
the compute for all of the different use
00:23:09
cases in artificial intelligence the
00:23:11
part that's important for me and for my
00:23:13
company is to get enormously large so we
00:23:16
can drive down our cost of capital so
00:23:18
that we have information flow coming in
00:23:20
from all different parts of the market.
00:23:22
large language models, high-speed
00:23:24
trading, uh, uh, search, all of these
00:23:27
things. And they're feeding they're
00:23:28
feeding information back into us that is
00:23:31
letting us know what the next product we
00:23:33
need to build is or where, you know,
00:23:35
they need help uh, scaling or what type
00:23:38
of compute they need and all of that
00:23:40
information flow is incredibly valuable
00:23:43
to us.
00:23:44
>> What What can you tell us about demand?
00:23:45
There's been reports of, hey, maybe the
00:23:48
Oracle Stargate thing with OpenAI's been
00:23:52
downsized or maybe not and then you know
00:23:56
uh other folks Microsoft is going big
00:23:58
and Google's going big Meta's going big
00:24:01
and those people obviously have massive
00:24:03
cash flow Apple seems to be MIA they
00:24:05
don't seem to want to play you you
00:24:07
you've you've uh you've named a lot of
00:24:09
really big companies with really big
00:24:10
balance sheets that have the capacity to
00:24:12
drive a lot of demand look I I have been
00:24:15
truly steadfast in this
00:24:18
for four years now.
00:24:21
The depth of the demand for the service
00:24:23
we provide has been relentless and
00:24:27
overwhelms the global capacity of the
00:24:30
world to deliver enough compute to
00:24:33
enable all of the demand for artificial
00:24:37
intelligence to be sated and that has
00:24:40
been we have been relentless about that.
00:24:42
>> Sounds like Knicks tickets during the
00:24:44
Patrick Ewing era like they got up to
00:24:46
50,000 people on the wait list. So if
00:24:49
magically the wait list went away, if
00:24:52
the if the constraint went away and we
00:24:53
just had a large amount of GPUs
00:24:56
available, lot of energy available, a
00:24:59
lot of data center available, how much
00:25:03
capacity would just all of a sudden come
00:25:05
out of the system.
00:25:05
>> Or would be deployed, I should say.
00:25:08
>> So remember how we build our our
00:25:11
business through this box
00:25:12
>> and it's a five-year box. So if we had an
00:25:16
air pocket, if if demand were suddenly
00:25:19
to disappear because of a technology
00:25:21
breakthrough, because of a uh a war,
00:25:24
anything, right? Like like the why from
00:25:26
a risk-management perspective does not
00:25:28
matter. You have to prepare your company
00:25:31
for the what happens if it happens.
00:25:33
Yeah. And so by entering into these
00:25:36
long-term contracts into entering into
00:25:38
contracts with counterparties that have
00:25:40
large balance sheets, you are or we are
00:25:44
protecting ourselves and our lenders.
00:25:47
Yeah.
00:25:47
>> So that we are confident and they are
00:25:50
confident because you can see how
00:25:51
confident they are by the rate that
00:25:53
they're charging us continuing to
00:25:54
decline that they're ultimately going to
00:25:56
get their money back. And that is the
00:25:58
one rule of lending. And so um you know
00:26:02
I if
00:26:02
>> but just in terms of the capacity if you
00:26:04
were unconstrained and Nvidia Jensen
00:26:06
says hey order as many as you want what
00:26:08
would happen
00:26:09
>> So, it's also important to
00:26:13
understand the constraints aren't just
00:26:14
GPUs, right? It's electricity, it's power
00:26:17
shells, it's memory, it's storage, it's
00:26:20
networking, it's optics, all of the
00:26:23
things and there's there's various
00:26:24
throttles that will limit the
00:26:27
>> memory is a throttle right now right
00:26:28
>> Oh yeah, it is.
00:26:30
>> Why? How did memory become the throttle?
00:26:32
>> Um,
00:26:34
memory has historically been a
00:26:39
cyclical business, right? We have seen
00:26:41
these waves of demand driving up the
00:26:44
cost for memory and then it collapses
00:26:46
and then it drives it up. It's a very
00:26:48
boom-and-bust business. It's cyclical in
00:26:50
its nature because the fabs are so
00:26:53
capital intensive that people invest in
00:26:55
the fabs, build a ton of capacity and
00:26:58
then overbuild if there's any type of
00:27:02
downturn. And we've seen that cycle
00:27:04
again and again. What's happening right
00:27:06
now is the confluence of two things,
00:27:09
right? One is,
00:27:11
with all the demand for artificial
00:27:13
intelligence and the corresponding
00:27:15
demand for compute and the ancillary
00:27:18
services around the GPU, the demand is
00:27:21
through the roof. That's number one.
00:27:23
Number two is that
00:27:25
>> there was probably an investment cycle
00:27:27
that needed to happen back in 2023
00:27:30
that would have brought on the necessary
00:27:32
fab capacity to be able to serve.
00:27:35
>> Impossible to predict what should happen.
00:27:37
Just as with energy, it's impossible to
00:27:38
predict what just happened. And now
00:27:40
people are chasing energy. The data
00:27:42
centers are going where the energy is.
00:27:44
It's not based on real estate. It's
00:27:46
based on where the energy is. And
00:27:47
>> where there's some wind.
00:27:48
>> And anytime,
00:27:52
not every time but many times, when
00:27:54
you have a capital-intensive
00:27:56
business like, you know, building fabs,
00:27:58
you will get this boom and bust cycle
00:28:00
just like in energy they overbuild.
00:28:02
Yeah. And you know
00:28:03
>> fiber.
00:28:04
>> Yeah. I mean there's there's there's a
00:28:06
lot of examples of that our approach
00:28:09
>> in some ways when you look at that it's
00:28:11
a beautiful aspect of capitalism that
00:28:13
we're able to have a boom-bust cycle that
00:28:17
we're able to weather it, right? If you
00:28:18
think about capitalism from first
00:28:20
principles something like that happens
00:28:22
and we have too much fiber it creates an
00:28:24
opportunity for Google to buy it all up
00:28:26
or the next person
00:28:27
>> Listen, it
00:28:30
does a lot of things, having a
00:28:32
boom cycle. It clears out the underbrush.
00:28:35
The strongest will be able to survive and take
00:28:37
advantage of that, and it sows the seeds
00:28:39
of future business.
00:28:45
You put the fiber into the ground which
00:28:47
became the backbone of how you know we
00:28:51
watch movies every day and how we you
00:28:53
know uh communicate and how we hop on a
00:28:55
Zoom and, you know, COVID, and all of these
00:28:58
things were based on that infrastructure
00:29:01
that was available to be consumed. Yeah,
00:29:04
people don't recognize this fact.
00:29:06
The premise of YouTube, from the
00:29:08
founders who I knew, Chad Hurley and his
00:29:12
other partner. They basically had the
00:29:14
realization that, at this curve, storage is
00:29:16
coming down so quickly we could offer
00:29:18
free unlimited uploads and bandwidth is
00:29:21
coming down. So I guess we don't have to
00:29:23
charge people for sharing a video
00:29:25
online. Before that, if your video went
00:29:28
viral, people are going to have their
00:29:30
minds blown. But your server would turn
00:29:33
off and it would say this person, you
00:29:35
know, needs to pay their bill. Yes.
00:29:36
Because they were getting charged for
00:29:38
carriage by the megabit going out.
00:29:41
>> Yes. I mean it look and and you know
00:29:43
these these the business models change
00:29:45
and evolve and you know like you said
00:29:48
Moore's law and and and certainly Jensen
00:29:50
will talk about the fact that like what
00:29:52
what is going on within the the the
00:29:55
accelerated computing dwarfs
00:29:58
>> Moore's law right and all of that is
00:30:00
going to lead to
00:30:03
>> more opportunity to build more companies
00:30:05
that are going to do things like YouTube
00:30:07
did, which has really changed the world.
00:30:09
>> Yeah. I mean the the concept that I I
00:30:12
don't know if it was like a million
00:30:14
hours being uploaded every hour or
00:30:16
minute, but at some point Susan
00:30:18
Wojcicki, rest in peace, told me just
00:30:21
like how much was being uploaded every
00:30:25
minute and it made no logical sense and
00:30:27
she realized
00:30:28
>> well there's three billion people two or
00:30:31
three billion people in the service and
00:30:32
1% upload, or 0.1%, 10 bips, upload, and
00:30:36
it's like okay one in a thousand people
00:30:37
upload. It's a big
00:30:39
denominator, like
00:30:40
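The back-of-the-envelope arithmetic behind that "big denominator" point can be sketched like this (the user count and upload rate are the speakers' rough figures, not exact numbers):

```python
# Rough arithmetic behind the "big denominator" point: even a tiny
# upload rate over billions of users is huge in absolute terms.
users = 3_000_000_000   # ~2-3 billion people on the service (speakers' estimate)
upload_rate = 0.001     # 10 basis points = 0.1% = one in a thousand

uploaders = users * upload_rate
print(f"{uploaders:,.0f} people uploading")  # 3,000,000 people uploading
```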
>> I was sitting on a panel with
00:30:43
Sarah Friar, CFO of OpenAI, and
00:30:50
every once in a while she
00:30:53
really puts out like interesting uh
00:30:55
information and so she was talking about
00:30:58
the cost of a million tokens when GPT-3
00:31:01
came out and it was $32 and change and
00:31:05
now a million tokens cost nine cents.
00:31:08
>> Yeah.
00:31:08
>> Right. And so you you just see like like
00:31:11
the incredible power of how the capital
00:31:15
markets, how capitalism is
00:31:19
uh uh fueling engineering and fueling uh
00:31:22
uh competition.
00:31:23
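The cost collapse quoted above works out to roughly a 350x reduction; a quick sketch using the figures cited in the conversation:

```python
# Cost of a million tokens, per the figures quoted above:
# ~$32 in the early GPT-3 era versus ~$0.09 now.
cost_then = 32.00   # dollars per million tokens, then
cost_now = 0.09     # dollars per million tokens, now

reduction = cost_then / cost_now
print(f"~{reduction:.0f}x cheaper")  # ~356x cheaper
```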
>> It's become recursive now too. I mean
00:31:25
these models if you say to the model,
00:31:27
hey make yourself more efficient, spend
00:31:29
less money and lower the cost of tokens.
00:31:30
It'll be like okay captain.
00:31:32
>> Yeah.
00:31:32
>> I don't know if you saw Karpathy's
00:31:34
recursive
00:31:36
>> thing last weekend but it's like now
00:31:38
civilians who've never worked in a
00:31:40
language model or done computer science
00:31:41
are like, I'm going to try to do
00:31:42
something recursive this weekend. You
00:31:44
know, it's one of the things that I that
00:31:47
uh uh I talked to, you know, the other
00:31:51
founders about, you know, and it's like
00:31:55
when you think about some of the things
00:31:57
that AI does, right, it's lowering the
00:32:00
barrier to operations. So if you have a
00:32:03
good idea or a great idea, you can open
00:32:06
up your model and you can tell your
00:32:10
model, you can vibe code it, you can do
00:32:11
all kinds of different things and create
00:32:14
things that never existed before. That's
00:32:16
amazing, right? like that's bringing
00:32:18
down this incredible barrier that kept
00:32:20
human creativity contained and now all
00:32:23
of a sudden this whole new vector of uh
00:32:26
uh you know medical research or
00:32:28
different approaches to you know
00:32:31
baseball cards or whatever you want if
00:32:33
you've got a great idea if you've got a
00:32:34
new creative idea that's the valuable
00:32:37
kernel right now that allows you to to
00:32:40
build new things and to create new
00:32:41
things and I just think that's
00:32:42
incredibly exciting like you're bringing
00:32:45
the minds of 8 billion people a tool
00:32:48
that allows them to overcome what was
00:32:50
insurmountable for
00:32:52
forever
00:32:53
>> for humanity.
00:32:54
>> Yeah, it's a bright new future. Michael,
00:32:56
appreciate you sharing the uh uh
00:32:58
information with us and the vision. I am
00:33:01
really delighted to have Aravind Srinivas
00:33:04
on the program.
00:33:05
>> Thank you for having me here Jason.
00:33:07
>> It's so great. I want to go through
00:33:09
three stages in which I fell in love
00:33:11
with your product. The first phase was I
00:33:15
could go in pick my language model if I
00:33:18
wanted to choose OpenAI, if I wanted to
00:34:19
use Claude, whatever it was. That was
00:33:22
like a real unlock for me. And on the
00:33:25
sidebar, I noticed you had done
00:33:29
essentially like what Yahoo did in the
00:33:32
early days, finance, sports, and when I
00:33:36
pulled my Knicks game up, it gave me a live
00:33:39
version of that. When I pulled my stocks
00:33:40
up, it summarized the news in real time,
00:33:42
and I was like, "Wow, this
00:33:44
execution's great." And I I kind of made
00:33:46
you my front door, two different models,
00:33:48
and it made it easier for me to check
00:33:50
it. Then you came out with the Comet
00:33:52
browser and I was like, "Holy cow, I can
00:33:55
give this a series of instructions. Go
00:33:56
to my LinkedIn, find everybody from this
00:33:59
company, put them into a Google sheet
00:34:01
and boom, you were the first out of the
00:34:02
gate with that." And then just the last
00:34:04
couple of weeks I had been Claw-pilled,
00:34:07
using OpenClaw, but you came out with
00:34:09
computer and I started using computer
00:34:12
and boy it's good uh it's a really
00:34:15
strong start uh allowing me to do
00:34:18
repetitive tasks very similar in some
00:34:21
ways to Cowork from Claude, or
00:34:24
basically an engineer or developer using
00:34:26
it. So
00:34:28
are are these the evolution of the
00:34:31
company and I should think about it that
00:34:32
way. But how do you look at perplexity
00:34:34
now? You have a very loyal fan base.
00:34:37
You're making a lot of money. I don't
00:34:38
know if you disclose it but I think it's
00:34:40
hundreds of millions to billions. You
00:34:42
can tell us but what is perplexity in
00:34:44
the face of wow Claude's having a great
00:34:47
run, OpenAI still doing strong. Grok
00:34:49
doing very well. Gemini coming on
00:34:51
strong. There's like six or seven of you
00:34:53
and uh you just happen to be one of my
00:34:55
top twos right now.
00:34:57
>> Thank you. So tell me first of all,
00:34:58
first of all, thank you. Thank you so
00:34:59
much. Perplexity has always been built
00:35:01
for people who are always looking for
00:35:04
the extra edge, the curious people. So
00:35:06
it's very natural that you are uh one of
00:35:09
our power users. Uh one common theme for
00:35:13
us uh for the last three and a half
00:35:16
years is accuracy. Perplexity wants to be
00:35:19
the company that's building the most
00:35:21
accurate AI. So when you want to give
00:35:23
somebody answers, accuracy is very
00:35:25
essential for building trust because
00:35:27
only then the user is going to ask the
00:35:29
next set of questions. It turns out it
00:35:31
was a great idea to give AI access to
00:35:34
the internet to be accurate. So that's
00:35:36
the perplexity ask product. It turns out
00:35:38
it's a great idea for AI to have full
00:35:40
access to a browser so that it can be
00:35:43
accurate when you task it to go do
00:35:45
something that you would do yourself on
00:35:46
a browser. Agentic browsing: Comet. Now
00:35:50
the last phase is it turns out it's a
00:35:52
great idea for AI to be given a full
00:35:55
access to a computer so that it can do
00:35:58
whatever you do on a computer on its own
00:36:01
essentially becoming the computer
00:36:03
itself. An orchestra of everything AI
00:36:07
can do today, every single capability
00:36:09
each individual AI model has, be it GPT
00:36:12
or Claude or Gemini or anything else, an
00:36:16
orchestra of all those capabilities,
00:36:18
that's what Perplexity Computer is,
00:36:20
and all these sub agents that are
00:36:23
running inside computer are the
00:36:24
musicians the models are essentially the
00:36:27
instruments and they're like hundreds of
00:36:30
models out there each having their own
00:36:32
specialization some are good at coding
00:36:34
some are good at writing some are good
00:36:35
at multimodal visual synthesis: image
00:36:38
generation, video generation, audio, but
00:36:40
what matters is the end output, the
00:36:42
music you play. That's the work AI gets
00:36:45
done for you. And that's what Perplexity
00:36:47
Computer is. The AI itself is the
00:36:50
computer. Now,
00:36:51
>> still lives inside of a browser. Have
00:36:53
you considered giving it desktop root
00:36:56
access? That feels like the next place
00:36:57
this is going, but that comes with a lot
00:37:00
of security issues, a lot of trust
00:37:02
issues. As you mentioned, trust is
00:37:03
paramount. Getting the right answer is
00:37:05
what builds it, but also not getting
00:37:07
hacked and not having it delete your
00:37:09
files. So, how do you think about root
00:37:12
access to my Windows machine? Obviously,
00:37:14
iOS, they won't let you, but with an
00:37:16
Android phone, it would let you.
00:37:18
>> Yes.
00:37:18
>> So, do you have that in the works?
00:37:20
>> Yes. So, we announced something called
00:37:21
personal computer. Perplexity personal
00:37:24
computer that's essentially going to
00:37:26
take all the trust and reliability and
00:37:28
the server side execution of perplexity
00:37:30
computer but synchronize it with your
00:37:33
local computer so that you can use it
00:37:36
from your phone and we're going to do
00:37:37
this with the Mac Mini where you
00:37:40
synchronize your computer with the Mac
00:37:41
Mini so that becomes your local server
00:37:44
all the agent orchestration that has to
00:37:46
do with your local private data will run
00:37:48
on that local orchestration loop that
00:37:50
runtime with the Mac Mini. Not on your
00:37:53
servers, not on anthropics.
00:37:55
>> Exactly.
00:37:55
>> Yeah.
00:37:56
>> It could still ping Frontier models if
00:37:58
it needs to with your permission,
00:38:01
>> but it will be orchestrating everything
00:38:03
on your local hardware.
00:38:05
>> Yeah.
00:38:05
>> And if it needs to run on the server
00:38:07
side hardware, if you don't want very
00:38:09
complicated, long-running tasks to be
00:38:12
running on your local hardware. Yeah.
00:38:14
>> You can delegate it to run on your
00:38:16
server side computer, which is again
00:38:18
only accessible to you and you alone. So
00:38:21
that way we're going to bring this
00:38:23
perfect, trustworthy
00:38:25
hybrid between local and server side
00:38:28
and you
00:38:29
>> and you'll make it easy to do. It'll just
00:38:30
be abstracted. You install one
00:38:32
executable, boom, it's done.
00:38:33
>> It's like OpenClaw for dummies.
00:38:35
Nobody needs to learn how to use it.
00:38:37
Nobody needs to manage API keys. Nobody
00:38:39
needs to manage separate billing across
00:38:40
like 100 different services. Figure out
00:38:43
what you can give access to and not
00:38:45
access to. We take care of that.
00:38:47
>> So it's a Steve Jobs way of doing it.
00:38:49
you know, end to end integration
00:38:51
>> and and how do you think about local
00:38:53
models? I have started running Kimi 2.5
00:38:56
on a Mac Studio.
00:38:58
>> It's not as good as Claude or Gemini or
00:39:00
Grok, but you can probably get about 80%
00:39:03
there for free.
00:39:04
>> Yeah.
00:39:05
>> Essentially.
00:39:06
>> Yeah.
00:39:06
>> Uh and so that's quite compelling
00:39:08
considering some of my other bills,
00:39:09
Claude and and stuff were getting
00:39:11
expensive.
00:39:12
>> So, do you have one of those? You
00:39:15
started testing on your local Mac
00:39:17
Studio. I assume you have a Mac Studio
00:39:18
and you're doing this yourself. Yeah.
00:39:20
>> Or now, I don't know if you saw uh Dell
00:39:22
and Nvidia announced a giant
00:39:24
workstation. Um, is it $3,800?
00:39:28
>> Something like that.
00:39:28
>> Something like that with 750 gigs of
00:39:30
RAM. So,
00:39:32
>> what do you think about the desktop
00:39:34
going back to workstation/server?
00:39:37
>> Yeah.
00:39:37
>> Status.
00:39:38
>> I think it's very promising. Um my my
00:39:41
prediction is it'll initially start off
00:39:43
as a sub-agent. So whatever you need to
00:39:46
go to, like your tax returns, your
00:39:49
personal photos, your emails, your
00:39:52
calendar, all that stuff, those local
00:39:55
apps, your personal notes, very personal
00:39:58
notes. You can make sure that the models
00:40:00
that access those tokens will be running
00:40:04
on your local hardware if you want to,
00:40:06
if you're that privacy conscious.
00:40:09
uh and more complicated stuff that
00:40:12
accesses your data that's already on the
00:40:14
server side. Example, your Google
00:40:15
calendar, yeah, your Gmail. This is
00:40:18
personal data still, but an AI runtime
00:40:22
can access that through your connector,
00:40:24
your Google calendar connector, your
00:40:26
Google Workspace connector, and that
00:40:28
could run on the server side because
00:40:29
anyway, the data is on the servers. It's
00:40:31
not even lying on your device.
00:40:32
>> So, that sort of hybrid orchestration is
00:40:35
where we're headed to. I don't think
00:40:37
it's a dichotomy between fully local
00:40:39
versus fully server. Uh it's all about
00:40:42
choice. And anyway, when you're on your
00:40:44
phone, you don't care
00:40:46
actually which server that workload is
00:40:48
running from because it's not going to
00:40:49
be able to run on your phone anyway. The
00:40:52
chips need to exist on a Mac Studio or a
00:40:54
Mac Mini and or on the server
00:40:56
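The hybrid split described above, private local data staying on the local box while cloud-connector data runs server-side, can be sketched as a simple routing rule. All names here are hypothetical illustrations, not Perplexity's actual implementation:

```python
# Minimal sketch of the hybrid routing rule described above
# (hypothetical names): tasks touching local private data run on the
# local machine; tasks over cloud-connector data run server-side.

LOCAL_SOURCES = {"files", "photos", "notes", "tax_returns"}
CLOUD_SOURCES = {"gmail", "google_calendar", "google_workspace"}

def pick_runtime(data_sources: set[str]) -> str:
    """Return where an agent task should execute."""
    if data_sources & LOCAL_SOURCES:
        return "local"   # private data never leaves the machine
    if data_sources & CLOUD_SOURCES:
        return "server"  # the data already lives on remote servers
    return "server"      # default: heavy tasks delegated off-device

print(pick_runtime({"tax_returns"}))      # local
print(pick_runtime({"google_calendar"}))  # server
```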
>> or this new Dell that's coming out. And
00:40:58
I I really think the idea of spending
00:41:01
$10,000 on a powerful desktop will
00:41:04
appeal to people if it lowers their $500
00:41:08
a month
00:41:09
>> Claude bill. This is an incredible
00:41:11
savings. Plus, you get the benefit
00:41:13
>> of privacy and not educating the
00:41:15
language models on your personal data.
00:41:18
>> Yes. And it's going to
00:41:19
be like you're buying a refrigerator,
00:41:21
or your internet modem. Like the
00:41:23
cost for these will eventually go down.
00:41:26
>> Yeah. But it's not going to feel like
00:41:28
you're wasting your money. Uh every
00:41:31
every home has a lot of other sensors.
00:41:34
>> Yeah.
00:41:34
>> That runs your home that'll also be part
00:41:37
of this orchestration loop.
00:41:39
>> Yeah.
00:41:40
>> So, so that's where it gets exciting
00:41:41
because now you can just dictate
00:41:43
something to your phone and that can
00:41:45
control your entire home.
00:41:48
>> So that's the dream that everybody has
00:41:49
and all that orchestration loop can run
00:41:52
on your local hardware, no problem. And
00:41:54
I'm curious what you think of the
00:41:56
operating system. What's eventually
00:41:59
going to be the operating system of this
00:42:02
workstation?
00:42:03
>> AI is the operating system. Like earlier
00:42:05
in the traditional operating system, you
00:42:07
execute programmatically.
00:42:09
Now you start with objectives, not
00:42:11
specific instructions.
00:42:13
>> Right?
00:42:13
>> You come up with a highlevel objective.
00:42:15
go build this website for me that you
00:42:18
know, takes all the transcripts of the All-In
00:42:20
Podcast and tracks the stock price just
00:42:22
before the podcast and after. Yeah.
00:42:24
>> And chart it for the Mag 7.
00:42:26
>> Yeah.
00:42:26
>> And charted over time, you can. So
00:42:28
that's the objective, but individually
00:42:31
it's running a file system, a code
00:42:33
sandbox, access to the internet. It's
00:42:35
having like its own HTML tools and like
00:42:38
so I think that's basically where you
00:42:40
know models systems and files and
00:42:43
connectors are all coming together. You
00:42:44
would think of that as an OS
00:42:46
>> except you're operating at an
00:42:48
abstraction about that where you're
00:42:50
thinking in terms of objectives.
00:42:52
>> Yeah. And does it need to eventually
00:42:55
become its own operating system in your
00:42:57
mind?
00:42:58
>> It could be like people could think
00:43:00
about it like, yeah, I have my Perplexity
00:43:02
computer running all the time.
00:43:05
Essentially it runs on Linux machines
00:43:07
right now. Every server side computer is
00:43:09
a Linux machine. Yeah.
00:43:11
>> So, I think Marc Andreessen tweeted this
00:43:13
right after our release that it turns out
00:43:15
Linux computers were the right idea.
00:43:17
Desktop Linux computers are
00:43:19
finally going to work.
00:43:20
>> Yeah. I mean, they're stable. They're
00:43:22
customizable and you're not at the mercy
00:43:25
of Apple's desire to contain the
00:43:28
experience, or Microsoft's surface area
00:43:32
for hackers.
00:43:33
>> Exactly.
00:43:34
>> You build something rock solid and it
00:43:36
does feel like Linux might actually
00:43:38
become the correct
00:43:39
>> the eventual winner. It may not need to
00:43:41
have a front end.
00:43:42
>> That's the thing. You could you could
00:43:44
access the Linux machine on your phone.
00:43:47
>> You could be running iOS or Android. It
00:43:49
doesn't matter.
00:43:50
>> The actual valuable runtime is running
00:43:53
on Linux on the server.
00:43:55
>> You've done great as a consumer company.
00:43:58
Lot of love there. Now I'm starting to
00:44:00
see uh corporations with computer
00:44:04
starting to engage with it. In fact, you'll be
00:44:06
happy to know this. Last week, I took
00:44:08
two people in my back office and I said,
00:44:10
"Stop working on OpenClaw. Your job is
00:44:13
to do the back office automation at our
00:44:16
venture firm only using Perplexity." And
00:44:19
they were like, "Perplexity computer."
00:44:21
And they were like, "Oh, okay. Um, it
00:44:24
doesn't talk well in Slack. It doesn't
00:44:26
have an agent in Slack." I was like, "It
00:44:28
will. I'm going to see AR and I'll talk
00:44:31
to him about that." So, we need a really
00:44:33
strong Slack connector.
00:44:34
>> It's already out.
00:44:34
>> It is. Okay, great. Computer exists as a
00:44:37
Slackbot right now.
00:44:38
>> Okay,
00:44:38
>> that you can add to your Slack workspace
00:44:40
on enterprise plan
00:44:41
>> and our entire company works like that.
00:44:44
People are talking more to computer on
00:44:45
Slack than to other people.
00:44:47
>> In our first volley, we were sending
00:44:49
reports in, but it wasn't interactive.
00:44:51
That's perfect.
00:44:54
So now you've got your company going in
00:44:56
two different directions. This
00:44:57
incredible consumer run you have. How
00:44:59
many people are using the product every
00:45:01
month?
00:45:01
>> Several tens of millions. So tens of
00:45:03
millions of people that's very much
00:45:05
similar to the trajectory of the Google
00:45:07
and Yahoo consumer business. Now you've
00:45:09
got corporate. How are you doing on the
00:45:11
corporate side? Thousands of companies.
00:45:13
>> The fastest growing business for us. Ah
00:45:15
>> it's growing faster than the consumer in
00:45:17
revenue and things like computer unlock
00:45:20
entirely new possibilities. For example,
00:45:22
we've saved more than $100 million for
00:45:25
our uh enterprise max customers who are
00:45:27
on the highest tier of enterprise.
00:45:29
>> Explain what that is. What does it cost?
00:45:31
$200 a month per person?
00:45:32
>> So there are two tiers. One is the
00:45:34
enterprise pro which is $40 a month and
00:45:36
there's the enterprise max which is $400
00:45:39
a month. And
00:45:42
on Computer, after you run out of your
00:45:44
credits you would pay for the tokens.
00:45:46
You pay for the usage.
00:45:48
>> Are you making money on the $400 a
00:45:50
month, $5,000 a year one or at this
00:45:52
point in time are people going so crazy?
00:45:54
One thing that Perplexity has is,
00:45:57
every revenue we make, unlike certain
00:45:59
other wrapper companies, every revenue
00:46:01
Perplexity makes has positive gross
00:46:02
margins.
00:46:03
>> Got it.
00:46:04
>> Because uh we're not just selling
00:46:06
tokens,
00:46:06
>> right?
00:46:07
>> Most of our revenue is recurring because
00:46:09
people are paying a subscription fee
00:46:11
>> and because we route through multiple
00:46:13
different models, we're very efficient
00:46:15
in terms of how we spend on the tokens.
00:46:18
because we have all this advantage with
00:46:19
RAG and orchestration and search. We
00:46:22
don't actually need to blow up the
00:46:23
context window of the models.
00:46:25
>> Yeah.
00:46:25
>> As a result of that, we have positive
00:46:28
gross margins on all the revenue. Every
00:46:30
single penny we make, we make profits on
00:46:32
that. But overall the company is
00:46:34
still yet to be profitable, but we're
00:46:35
working towards that.
00:46:36
>> You've had the opportunity to exit. A
00:46:39
lot of rumors, Apple, other people were
00:46:41
like, "Hey, this is a great team." How
00:46:42
many people on the team now?
00:46:44
>> About 400.
00:46:45
>> Yeah. You you've got a very coveted
00:46:46
team. You obviously understand consumer.
00:46:48
You obviously understand business. It's
00:46:50
a product driven organization. Reports
00:46:52
are you declined,
00:46:55
but the world's getting hyper
00:46:56
competitive here. How do you keep up as
00:46:58
a 400 person organization when you got
00:47:01
Sam Altman over here raising a hundred
00:47:04
billion dollars, you know, and then you
00:47:06
have Elon putting data centers in space
00:47:08
and merging with SpaceX and Twitter. You
00:47:11
have Google with unlimited resources.
00:47:13
Amazon getting in the game and obviously
00:47:16
Gemini uh very strong product and Google
00:47:20
really good at consumer. I think we'd
00:47:22
all agree Facebook and Meta haven't
00:47:25
figured it out yet except maybe for
00:47:27
serving us better ads, but they they
00:47:28
haven't figured out the consumer case
00:47:30
yet, but they'll copy it. They always
00:47:31
do.
00:47:32
How do you look at the playing field?
00:47:34
Because the degree of difficulty, this
00:47:37
isn't playing checkers. This is like
00:47:40
playing against the 10 best chess
00:47:43
players in the world. That's what you
00:47:44
have to do every day.
00:47:46
>> So, how do you think about it? Long-term
00:47:47
and independent company. Do you think
00:47:49
you'll need to join forces at some
00:47:51
point?
00:47:51
>> Well, and why didn't you take the deal?
00:47:53
These deals were incredible that you got
00:47:54
offered.
00:47:55
>> So, one advantage we have that all these
00:47:59
companies you mentioned don't have is
00:48:01
the multimodel orchestration. We're like
00:48:04
Switzerland. We don't have to have one
00:48:06
horse in the race. If GPT wins, Gemini
00:48:09
wins, Claude wins, Llama wins, it
00:48:12
doesn't matter to us. Uh or even open
00:48:15
source models can win, no problem.
00:48:16
>> And you have them on the service. You
00:48:18
have DeepSeek and Kimi.
00:48:20
>> We have Kimi, we have Nemotron, and we
00:48:22
have a lot of usage of Qwen, Alibaba
00:48:25
Qwen.
00:48:26
>> Yeah.
00:48:26
>> Silently under the hood. So for us like
00:48:29
that advantage of being able to take the
00:48:31
best in each model and give the user the
00:48:35
orchestra of everything they can do. I
00:48:37
don't think any of the companies you
00:48:38
mentioned can do that
00:48:39
>> right nor would they
00:48:40
>> nor would they it makes no sense for
00:48:42
them. It would be an admission that all
00:48:44
the data centers and capex they've built
00:48:46
out still couldn't produce the
00:48:49
best model. And Dario, CEO of
00:48:52
Anthropic said recently in an interview
00:48:54
uh that models are specializing. Towards
00:48:58
the beginning of last year people
00:48:59
thought models are going to commoditize
00:49:01
but towards the end of last year,
00:49:04
models started specializing. Even within
00:49:06
coding
00:49:08
Claude Code and Codex have very
00:49:11
different capabilities. Our iOS
00:49:13
engineers love using Codex. Our backend
00:49:16
engineers love using Claude Code.
00:49:17
>> Yeah. So even within a specialization
00:49:19
like coding, models have their own
00:49:22
unique specialties, and there are many
00:49:24
other use cases outside coding where
00:49:26
different models are good at different
00:49:27
things. Which means the orchestra
00:49:29
conductor that has no one
00:49:32
horse in the race can win by providing a
00:49:35
very unique value and service to the
00:49:37
customer that each of these amazing
00:49:40
names that you mentioned cannot. And so
00:49:42
you're buying tokens wholesale from them
00:49:45
and then you'll charge customers to do
00:49:48
it, or do you think it's all
00:49:50
>> We're going to take care of all that
00:49:52
orchestration.
00:49:52
>> Yeah.
00:49:52
>> So you don't have to manage tokens
00:49:54
across different models
00:49:55
>> cuz I authenticated a couple of my
00:49:58
different accounts, my pro accounts, into
00:49:59
Perplexity. But I don't have
00:50:02
enough knowledge to know if you're
00:50:04
abstracting that and people can just
00:50:05
search across them and it's part of
00:50:07
their perplexity subscription. No, we're
00:50:09
not bundling subscriptions into
00:50:11
other AIs.
00:50:12
>> We just ping the models directly.
00:50:14
>> Got it.
00:50:14
>> What you get with us is the Perplexity
00:50:17
orchestration.
00:50:18
>> Got it.
00:50:18
>> The harness,
00:50:19
>> right?
00:50:20
>> So when models are
00:50:22
kind of specializing, there's a
00:50:25
bigger value in the one who knows how to
00:50:27
build a great harness,
00:50:28
>> right?
00:50:28
>> That can take the best in each model.
00:50:30
>> Does it auto route today or do you still
00:50:32
have the drop down somebody's got to
00:50:34
pick
00:50:34
>> it? It definitely auto routes the best
00:50:36
model for each prompt,
00:50:37
>> but we also give users the flexibility
00:50:39
to pick whatever model they want.
00:50:41
>> What do you think of I've seen a bunch
00:50:43
of startups hack this together, but
00:50:45
doing the same query across multiple
00:50:47
>> We built a thing called model council.
00:50:49
>> Model council. Yeah.
00:50:50
>> Yeah. So that's one of the one of the
00:50:52
modes and perplexity where I saw Jensen
00:50:54
say in one of the interviews that he he
00:50:57
puts the same prompt in five different
00:50:58
AIs and sees what each of them says.
00:51:00
>> Yes.
00:51:01
>> Like everybody does that. Yeah. But then
00:51:03
you still have to apply your biological
00:51:05
computers
00:51:10
about your trust or your
00:51:12
>> five different doctors.
00:51:13
>> Five different doctors trying to figure
00:51:14
it out.
00:51:15
>> Exactly. So it's dumb.
00:51:16
>> So the model council is a feature we
00:51:18
built where it will not just give you
00:51:20
the answers of each model, but it will
00:51:22
tell you exactly where they agree, where
00:51:23
they disagree, and where the nuances
00:51:24
are.
00:51:25
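A council-of-models feature like this boils down to fanning one prompt out to several models and diffing the answers. Below is a minimal sketch of that pattern; it is a guess at the shape, not Perplexity's implementation, and `model_council` plus the stand-in model callables are hypothetical (a real harness would call model APIs and use a judge model for semantic agreement rather than verbatim matching).

```python
from concurrent.futures import ThreadPoolExecutor

def model_council(prompt, models):
    """Fan the same prompt out to several models, then summarize
    where their answers agree and where they diverge.

    `models` maps a model name to any callable taking a prompt and
    returning an answer string (placeholders for real API clients).
    """
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, prompt) for name, fn in models.items()}
        answers = {name: f.result() for name, f in futures.items()}

    # Naive agreement check: group models whose answers match verbatim.
    # A real harness would use another model to judge semantic agreement.
    groups = {}
    for name, answer in answers.items():
        groups.setdefault(answer, []).append(name)
    consensus = max(groups.values(), key=len)
    return {
        "answers": answers,
        "consensus_models": consensus,
        "dissenting_models": [n for n in answers if n not in consensus],
    }
```

The interesting output is not the raw answers but the agree/disagree split, which is what the feature described here surfaces to the user.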
>> And that's in the interface. Model
00:51:26
council, I didn't know it was there.
00:51:27
>> It's there.
00:51:28
>> I mean, you release product at a
00:51:30
pretty great cadence, huh? Where did
00:51:32
you learn that and what's your
00:51:34
philosophy of shipping product?
00:51:36
>> Our philosophy is like speed is our
00:51:38
moat. Like, you know, again, one of the
00:51:40
things that big companies cannot do is
00:51:41
move at the speed we do, or serve customers
00:51:43
at the speed we do. It's very
00:51:45
hard to maintain quality, speed, and trust
00:51:47
at the same time.
00:51:48
>> Yeah.
00:51:49
>> Like Apple takes a long time to ship
00:51:51
anything
00:51:51
>> because they're very worried about
00:51:53
people not trusting them.
00:51:54
>> Yeah.
00:51:54
>> Uh and so some companies are
00:51:56
bureaucratic and they just take forever
00:51:58
to ship something. They don't maintain
00:52:00
what they ship. They may make a big deal
00:52:01
about an event but nobody even knows how
00:52:04
to go and use that feature.
00:52:05
>> Yeah. They get abandoned.
00:52:06
>> Exactly. So, Perplexity has those
00:52:08
advantages for being very small. And
00:52:10
towards the end of last year, we found
00:52:12
that like AI coding tools have made it
00:52:14
much faster for us to ship things
00:52:16
>> which is honestly one of the reasons why
00:52:18
we built computer because now even
00:52:20
non-engineers are shipping code here by
00:52:22
just pinging a Slack bot and asking it
00:52:24
to fix bugs.
00:52:25
>> So, the iteration has just been
00:54:27
like exponential. The moment I had
00:52:30
where I became claw-pilled was when I was
00:52:34
working with it and I was like, "Hey, I
00:52:36
want to build my network. I know these
00:52:38
20 people in Japan. I had dinner with
00:52:39
them during my recent trip. I want to
00:52:41
know who they know. So, check out
00:52:43
LinkedIn and other things and who
00:52:45
they're associated with and make me like
00:52:46
a mind map of it. And then the next trip
00:52:49
I want to meet with the next circle of,
00:52:52
you know, those connections." So, I
00:52:54
started asking like, "Okay, I got the
00:52:55
results." I was like, "Great."
00:52:58
Um, and it said, "Where do you want me
00:52:59
to put them?" And uh, I was like, "Well,
00:53:01
where can you put them?" And it said,
00:53:02
"Well, I can put it in a Google sheet. I
00:53:04
can put it in a Notion table. I can put it
00:53:05
here. I can give you a PDF. I can give
00:53:06
you a CSV file. Or I could write you a
00:53:09
CRM." And I was like, "Yeah, sure. Make
00:53:12
me a CRM system." And it may a CRM
00:53:15
system.
00:53:16
>> And I think that becomes, and I think
00:53:18
maybe one out of a thousand people
00:53:20
working with AI have had that
00:53:21
experience. Maybe it's one in 10,000.
00:53:24
Where your agent says, I'll make you
00:53:26
bespoke software.
00:53:27
>> Yeah.
00:53:28
>> Have you had that yet? And and do you
00:53:30
see that as a part of computer that when
00:53:33
a person needs a spreadsheet, you don't
00:53:35
launch Excel or Google Sheets, you just
00:53:38
pop up a spreadsheet?
00:53:40
>> Yeah. Well, we have a board meeting
00:53:42
tomorrow.
00:53:42
>> Okay, I'll come.
00:53:43
>> And and so
00:53:44
>> I'll pitch it to the board.
00:53:45
>> Sure.
00:53:46
>> Uh, our Computer made the memo.
00:53:50
>> Oh, wow.
00:53:50
>> Yeah. And um we had a partner meeting to
00:53:53
pitch a partnership idea and uh earlier
00:53:56
we would have a design team do the whole
00:53:58
deck.
00:53:58
>> Yeah.
00:53:59
>> Computer just one-shotted it. Uh, I had a
00:54:02
press briefing with a bunch of
00:54:03
journalists. My comms person would
00:54:05
>> Sorry about that. Brutal.
00:54:07
>> And then my comms person would usually
00:54:09
give me a memo on what to say.
00:54:12
>> Computer one-shotted him.
00:54:13
>> So
00:54:14
>> it's crazy. And the context is so
00:54:17
good because the memory is getting
00:54:18
better. Yeah. Yeah.
00:54:19
>> So it's like I know that journalist from
00:54:22
the last time.
00:54:24
>> I know the board meeting. I have all the
00:54:26
previous decks.
00:54:28
>> When did that happen?
00:54:30
>> I think it happened with Opus 4.5.
00:54:34
>> Opus 4.5. That was an inflection point
00:54:36
when models started being amazingly
00:54:40
good at orchestration and reasoning and
00:54:42
tool calls, and Claude Code brought in
00:54:45
this new idea in AI that everything can
00:54:47
happen inside a sandbox, a console, a
00:54:51
terminal with access to tools where
00:54:54
tools are just command line tools.
00:54:56
>> Yeah,
00:54:56
>> they don't even need to have graphical
00:54:58
user interface. So when you did that and
00:55:00
when you organize around files and
00:55:03
subagents and skills and CLIs, the model
00:55:06
started becoming very good at
00:55:09
handling the context. So the context
00:55:11
window no longer became a problem. It
00:55:14
just put whatever necessary into the
00:55:15
context whenever it wanted to and dumped
00:55:17
them away when it wanted to.
00:55:19
>> Yeah.
00:55:19
>> And that made it like suddenly so good
00:55:21
at doing very long orchestration tasks.
00:55:26
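The sandbox-plus-CLI-tools pattern described here can be illustrated with a toy context manager: every tool is just a command line, and tool output is pulled into the context and evicted when a budget is exceeded. This is an illustrative sketch only; `AgentContext` and its character budget are invented, and real harnesses such as Claude Code are far more sophisticated.

```python
import subprocess
from collections import deque

class AgentContext:
    """Toy version of the pattern above: tools are plain command-line
    programs, and the working context is assembled on demand and
    pruned, so the window itself stops being the bottleneck."""

    def __init__(self, budget_chars=4000):
        self.budget = budget_chars
        self.items = deque()  # (label, text) pairs, oldest first

    def run_tool(self, *argv):
        # Any CLI is a tool: no GUI needed, just stdin/stdout.
        out = subprocess.run(argv, capture_output=True, text=True).stdout
        self.add(" ".join(argv), out)
        return out

    def add(self, label, text):
        self.items.append((label, text))
        # Evict the oldest results once over budget -- "dump them away".
        while sum(len(t) for _, t in self.items) > self.budget and len(self.items) > 1:
            self.items.popleft()

    def render(self):
        # What would actually be handed to the model on the next turn.
        return "\n".join(f"## {label}\n{text}" for label, text in self.items)
```

The point of the sketch is the eviction loop: the model only ever sees what is currently relevant, which is what makes very long orchestration tasks tractable.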
>> Yeah. It's pretty crazy. I have
00:55:28
every episode of This Week in Startups,
00:55:30
all the transcripts, and then all of All-
00:55:32
In
00:55:32
>> that was one of the tasks I did by the
00:55:33
way I can send it to you. I asked it I
00:55:36
want you to download every all-in
00:55:37
podcast.
00:55:38
>> Yeah.
00:55:38
>> Uh, since the beginning, and I want you to
00:55:42
track all the public
00:55:44
companies they mentioned during the
00:55:46
episode.
00:55:46
>> Yes.
00:55:46
>> I want you to have a histogram of the
00:55:48
counts and I also want you to chart it
00:55:51
across time and then I want you to
00:55:53
analyze the impact on the stock price
00:55:55
>> and the sentiment of what we said.
00:55:57
Exactly. And it did like it clearly
00:55:59
said,
00:55:59
>> "Are we moving stocks
00:56:00
>> around Google's stock going up?"
00:56:02
>> Yes.
00:56:03
>> Prior to that, you guys were talking a
00:56:04
lot about Google.
00:56:05
>> Yes.
00:56:06
>> And it clearly
00:56:06
>> And I said I made a bet publicly on the
00:56:08
thing. I said, "I am buying a bunch of
00:56:10
Google because I believe even though
00:56:12
they're behind,
00:56:13
>> it's because they're too precious." You
00:56:15
were kind of mentioning a company that
00:56:16
might be too precious at times and
00:56:18
doesn't release.
00:56:18
>> I was like, "That's that company. They
00:56:20
need to release more." Yeah.
00:56:21
>> And uh I told Sergey, I was like,
00:56:23
>> like,
00:56:25
"Give us the good stuff." And they started giving
00:56:27
us the good stuff.
00:56:28
>> It literally gives you the timestamps of
00:56:29
every single mention, and then I can go click on
00:56:31
it and actually hear
00:56:33
>> exactly
00:56:34
>> that moment.
00:56:34
>> Yeah.
00:56:35
>> Sweet.
00:56:35
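A task like the All-In analysis reduces to counting company mentions per episode and keeping a per-date series to chart over time. The sketch below is hypothetical: the `COMPANIES` map is a placeholder, and the stock-price and sentiment steps the speakers describe are omitted.

```python
import re
from collections import Counter, defaultdict

# Hypothetical name-to-ticker map; a real run would use a full list.
COMPANIES = {"Google": "GOOGL", "Nvidia": "NVDA", "Apple": "AAPL"}

def mention_counts(transcripts):
    """Count public-company mentions per episode and in total.

    `transcripts` maps an episode date (YYYY-MM-DD) to transcript text.
    Returns (total histogram, per-date counts); the per-date series is
    what you would chart over time against stock prices.
    """
    total = Counter()
    by_date = defaultdict(Counter)
    for date, text in transcripts.items():
        for name, ticker in COMPANIES.items():
            # Whole-word, case-insensitive match of the company name.
            n = len(re.findall(rf"\b{re.escape(name)}\b", text, re.IGNORECASE))
            if n:
                total[ticker] += n
                by_date[date][ticker] += n
    return total, dict(by_date)
```

The timestamp lookup the speakers mention would just mean recording match offsets alongside the counts.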
>> Yeah. So that's when that's when I was
00:56:37
like damn like
00:56:38
>> this I would have had somebody do this
00:56:40
as a week-long project.
00:56:42
>> It would have been 10 hours a week of
00:56:44
a researcher. I'm experiencing the same
00:56:46
thing when I do research notes. I've
00:56:49
created my own uh like mega prompt.
00:56:54
>> Yeah. and it will go and like tell me
00:56:57
where you worked before and who's in
00:56:59
your circle, who your competitors are,
00:57:01
who your friends are, blah blah blah,
00:57:02
and then go find... Trying to find old
00:57:05
podcasts is one of my secrets. If you're
00:57:06
an interviewer watching, I try to find
00:57:09
what was the person talking about 5
00:57:10
years ago, 10 years ago, and then over
00:57:13
10 years ago. And I've gone into
00:57:14
interviews now with Michael Dell and
00:57:16
talked about things he was talking about
00:57:18
in the '90s. Yeah.
00:57:19
>> And it finds me some ancient stuff. Like
00:57:21
you would pay a researcher or a producer,
00:57:24
>> you know, $70,000 a year, $80,000 a year
00:57:27
to do this and they would have done a
00:57:29
third of the job, taking ten times longer.
00:57:33
>> It's really gotten weird just in the
00:57:35
last 6 months. What do you think the
00:57:36
next 6 months looks like?
00:57:38
>> I think the dream, what we are
00:57:40
going to try to do, is help businesses
00:57:42
run as autonomously as possible. You
00:57:45
know, everybody talks about this AI is
00:57:47
going to create this one person $1
00:57:48
billion company. Some people say it's
00:57:51
already happened because people pay
00:57:53
researchers like $1 billion, but it's not
00:57:55
truly moving the GDP by 1 billion. It's
00:57:58
not truly creating new value. So the
00:58:01
best way to do that is to actually help
00:58:02
a small business, people who would
00:58:04
otherwise drive Ubers for like yes
00:58:06
>> extra passive income to like buy like a
00:58:10
Mac mini set up perplexity personal
00:58:12
computer and run their business on that
00:58:14
or like run it on the server it doesn't
00:58:16
matter uh and actually make real money.
00:58:19
>> Yeah.
00:58:19
>> Hundreds of thousands or even millions a
00:58:20
year
00:58:21
>> and uh grow it.
00:58:24
>> Have computer go and run your ad
00:58:25
campaigns on Instagram or Google. I mean
00:58:28
>> integrate with SEM and SEO tools, find
00:58:30
new users and uh integrate with Stripe,
00:58:34
charge them, ship new features, have
00:58:36
your own like intercom integration for
00:58:39
customer support and like have this all
00:58:41
working well. You can be sipping wine in
00:58:42
Napa. That's the dream that you know it
00:58:45
feels awesome to say. Everybody thinks
00:58:47
AI is already there. It's not there yet.
00:58:49
Someone has to do that hard work.
00:58:51
>> Yeah,
00:58:51
>> that's what we want to do. Yeah, it it's
00:58:53
a great vision because
00:58:56
when I watched startups 20 years ago,
00:58:58
there were so many check boxes they had
00:59:00
to do. I have to find an office space. I
00:59:01
got to put up a bunch of servers. I
00:59:04
got to hire an HR firm. I got to
00:59:07
hire a PR person. All this stuff. And
00:59:09
now I talk to young founders. They got a
00:59:11
three-person team. They've come out of
00:59:13
A16Z, my program, Launch Accelerator,
00:59:16
whatever it is, Y Combinator. And I'm
00:59:18
like, "Okay, you raised a half million,
00:59:19
you raised a million. Who are you
00:59:21
hiring?" And they're like, "Um, I don't
00:59:23
know if we need to hire anybody." I'm
00:59:24
like, "If you could hire somebody, would
00:59:26
you hire?" They're like, "Well, I do my
00:59:28
own HR. I have this partner." And
00:59:30
I'm like, "How are you doing
00:59:32
hiring anyway?" And they're like, "Well,
00:59:34
I put out an ad and then uh it sorts and
00:59:36
ranks the candidates and then it emails
00:59:40
the top 10, asks them a bunch of
00:59:41
questions, and then I meet with the last
00:59:42
two." And I'm like, "That's what a
00:59:44
recruiter did."
00:59:45
>> Like, the entire recruiting job has been
00:59:47
abstracted. And like a a tool like
00:59:49
computer is going to make that even
00:59:51
faster.
00:59:52
>> Much work to do. Uh, a lot of connectors, a
00:59:55
lot of specific workflows. People don't
00:59:58
want to like learn how to write like,
01:00:00
you know, essay long prompts. You know,
01:00:01
it needs to be so quick and fast and
01:00:03
autonomous. You just set it up and done.
01:00:06
>> And you have an idea, you can turn it
01:00:07
into a business and start making money.
01:00:09
>> Yeah. It's it's an incredible future. Uh
01:00:12
and it feels like it's right here. Do
01:00:13
you how do you think about job
01:00:15
displacement? You're actually making
01:00:17
the tool that enables people
01:00:20
>> to be a solo entrepreneur and get to a
01:00:22
million in revenue, but it's also the
01:00:23
same tool that doesn't require them to
01:00:25
hire. And we've had this debate a
01:00:26
million times on the podcast.
01:00:28
>> Do you
01:00:30
I'm wondering if like me, you have
01:00:32
moments where you're like, "Oh my god,
01:00:33
this is really terrifying." Yeah.
01:00:35
>> A lot of people are going to lose their
01:00:36
jobs really fast.
01:00:37
>> Yeah.
01:00:38
>> And then, oh my god, you can learn any
01:00:40
skill you want and all the things that
01:00:42
were hard are now easy.
01:00:44
>> Yeah. I go back and forth. I'm 70 to 80%
01:00:48
super positive about this, but I do
01:00:49
worry about like 20% of the time I'm a
01:00:51
little worried. Yeah. Where do you sit?
01:00:53
>> I mean, America has always been about
01:00:54
like entrepreneurship,
01:00:57
right? Like we we've been about like
01:00:58
trying to build new things, discover new
01:01:00
things, go explore.
01:01:02
>> Uh I think this whole like Henry Ford
01:01:04
came and built factories and brought in
01:01:06
jobs and things like that and like put
01:01:09
people into a box. But I think the
01:01:12
reality is most people don't
01:01:15
enjoy their jobs. They're doing it for
01:01:16
>> They hate them.
01:01:17
>> Exactly.
01:01:18
>> So there is suddenly a new possibility a
01:01:20
new opportunity to go use these tools,
01:01:22
learn them and start your own mini
01:01:24
business. And if it pays for your needs
01:01:27
for a year or multiple years and lets you
01:01:30
have a high quality life and good work
01:01:32
life balance and true feeling of agency
01:01:34
and ownership and passion to like get
01:01:36
your ideas out there. I think that is
01:01:39
even if there is temporary job
01:01:41
displacement to deal with that sort of
01:01:43
glorious future is what we should look
01:01:45
forward to.
01:01:45
>> I I I think you're exactly right. If
01:01:48
there will be some displacement, but
01:01:49
then there's also going to be so many
01:01:51
opportunities open up and it requires
01:01:53
the individual to not be passive.
01:01:55
>> Exactly.
01:01:55
>> They have to be rugged individualists.
01:01:58
They have to be resilient. Yeah.
01:01:59
>> And they have to be resourceful. And I
01:02:01
think once you start playing with these
01:02:02
tools, that's what happens.
01:02:04
>> Exactly. You all of a sudden feel
01:02:06
like
01:02:06
>> it brings out the best in you if you
01:02:07
truly are in a good space.
01:02:09
>> Yeah.
01:02:09
>> Yeah.
01:02:10
>> I today uh Comet for iOS is out.
01:02:14
>> Yeah.
01:02:14
>> I'm a Comet super fan. I required
01:02:17
everybody to use it. You were nice enough when I
01:02:18
emailed you. I was like, "Can you send
01:02:19
me some licenses?" You sent You don't
01:02:21
may not remember. You sent me a bunch of
01:02:22
licenses. I said, "Everybody put this on
01:02:24
because it was $300 a month when you
01:02:26
first came out with the Comet browser.
01:02:28
Now it's free, I think, for all users.
01:02:30
>> Highly recommend it. Highly recommend
01:02:32
getting a pro account. It's only 20
01:02:34
bucks a month to get into perplexity,
01:02:35
which is a joke. So, you can get on
01:02:37
board for nothing, less than a dollar a
01:02:39
day.
01:02:40
>> But what does iOS allow me to do? And
01:02:43
and how does it connect to computer?
01:02:45
Because that's another thing I'm having.
01:02:47
>> Yeah.
01:02:48
>> Claude Code. Uh, Computer, there's not a
01:02:51
good enough integration with this mobile
01:02:53
device yet.
01:02:54
>> Yeah. So, computer is already on the
01:02:56
perplexity app. So, you can just toggle
01:02:58
the computer and start using it.
01:03:00
Uh, Comet's uniqueness, and Perplexity
01:03:03
the company's strategy,
01:03:05
is the fact that you can control the
01:03:08
browser. So the browser also becomes a
01:03:10
tool for computer
01:03:12
>> just like your Google workspace and all
01:03:14
these other things. Uh, until the whole
01:03:17
world is organized around CLI and tools.
01:03:20
>> Yeah,
01:03:20
>> there's still a lot of tasks we have to
01:03:22
do manually on the web on the browser.
01:03:25
Open tabs, fill up forms, click on
01:03:26
things, upload stuff, all that stuff. If
01:03:29
you want to automate, you need a
01:03:31
browser. You need an AI that can
01:03:33
natively control the browser. So that is
01:03:35
comet. And that's why no matter how many
01:03:38
other tools in the market exist like
01:03:40
>> OpenClaw or like Claw Co-work,
01:03:43
>> executing tasks on a browser on the
01:03:45
server side along with all the other
01:03:47
things is something uniquely perplexity
01:03:49
can do.
01:03:50
>> Yeah. My dream is that you'll create an
01:03:53
Android app that roots my Android phone.
01:03:56
>> Yeah.
01:03:57
>> And that you just take over and see
01:03:58
everything because one of the blockers I
01:04:00
have now is some of the websites have
01:04:03
gotten a little persnickety.
01:04:05
>> Yeah. I don't want to mention too many,
01:04:07
but Reddit, LinkedIn.
01:04:09
>> Yeah.
01:04:10
>> And like they're just I I am a great
01:04:13
Reddit user. I'm a great LinkedIn
01:04:15
supporter, but sometimes like I need to
01:04:17
get my inmail.
01:04:18
>> Yeah.
01:04:19
>> From my LinkedIn and I just need to, you
01:04:22
know, find seven people at a company. Is
01:04:24
there going to be a solution
01:04:27
>> between the LinkedIns and Reddits of the
01:04:29
world and the Claws and Perplexities?
01:04:32
How is that
01:04:33
>> I mean
01:04:33
>> negotiation going? You don't have to
01:04:35
speak about any specific ones unless you
01:04:36
want to,
01:04:37
>> but it feels like there's got to be a
01:04:39
solution
01:04:40
>> and I'm willing to pay for it as a user.
01:04:42
I'm willing to pay Reddit to allow my
01:04:44
bot to show up and behave properly.
01:04:46
>> Well, I cannot speak about any
01:04:48
particular company, but we are happy to
01:04:51
work with anyone, right? So, um I think
01:04:54
with with Comet, our idea is to give
01:04:56
people the flexibility to set things up
01:04:58
on their own.
01:04:58
>> Yeah. And, uh, any official APIs that
01:05:02
anyone's willing to offer, we're always
01:05:04
happy to put that as part of computer.
01:05:07
Here's what I think should happen. Let
01:05:08
me see if you agree. Um and this is for
01:05:11
Steve Huffman at Reddit.
01:05:14
I go on Reddit. I do a pro account for
01:05:18
20 bucks a month. And when I do that, I
01:05:20
can authenticate whatever tool I want um
01:05:24
to do a series of well- behaved things a
01:05:28
certain number of times a day.
01:05:30
>> Yeah.
01:05:30
>> So, it's not unlimited. I'm not going to
01:05:31
scrape the whole site, but I would like
01:05:33
it to just let Perplexity or Computer go
01:05:37
and just tell me, hey,
01:05:39
>> what are people saying on the This
01:05:40
Week in Startups and All-In subreddits?
01:05:42
Summarize it for me so I get the
01:05:44
customer feedback. And I would literally
01:05:46
name my
01:05:48
uh agent and I would say I it won't post
01:05:51
on my behalf. It won't vote on my
01:05:53
behalf. It just needs to do a couple of
01:05:54
little readonly things. This would be an
01:05:56
easy solution. Or LinkedIn I would like
01:05:58
if you I have I already pay LinkedIn
01:06:00
like 50 bucks a month. Like they should
01:06:01
just let the $50 a month one work with
01:06:04
computer.
01:06:04
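The "well-behaved agent" arrangement proposed here amounts to the site enforcing a per-user daily action quota on an authenticated agent. A minimal sketch, assuming a fixed number of actions per rolling 24-hour window; `DailyQuota` is illustrative and not any real site's API.

```python
import time

class DailyQuota:
    """Grant an authenticated agent a fixed number of (e.g. read-only)
    actions per day, as proposed above. The limit and the rolling
    24-hour window are illustrative assumptions."""

    def __init__(self, actions_per_day, clock=time.time):
        self.limit = actions_per_day
        self.clock = clock            # injectable for testing
        self.window_start = clock()
        self.used = 0

    def allow(self):
        now = self.clock()
        if now - self.window_start >= 86400:  # new day: reset the window
            self.window_start, self.used = now, 0
        if self.used < self.limit:
            self.used += 1
            return True
        return False                   # agent must back off until reset
```

On the site's side, each authenticated agent request would pass through `allow()` before being served, which is what makes the arrangement auditable for both parties.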
>> Yeah, absolutely. I mean,
01:06:06
>> okay, this is for Satya Nadella. Let
01:06:09
LinkedIn work with Perplexity and the
01:06:11
other players and we'll pay you extra.
01:06:14
>> Perfect. It's a revenue stream. Don't
01:06:15
you think API access for our customers
01:06:17
is a revenue stream?
01:06:18
>> I think so. I think so. I think I think
01:06:20
fundamentally giving users a choice
01:06:23
>> and setting it up as a win-win for both
01:06:24
the business and the user
01:06:26
>> Yeah.
01:06:26
>> is where the world should head to.
01:06:28
>> And and I I would say the same thing
01:06:30
applies to any any website in the world.
01:06:32
Like if if you want an AI to use it on
01:06:34
your behalf, it should be okay for cuz
01:06:36
that's what the user wants.
01:06:37
>> I mean, I have a paid New York Times
01:06:40
subscription. like let me go in there
01:06:42
and do you know whatever 100 searches a
01:06:45
day, a week, a month, whatever they
01:06:47
choose, but that would make the
01:06:49
subscription that much more sticky.
01:06:51
>> Exactly.
01:06:51
>> Uh all right, Arvin, love the product.
01:06:55
Anybody at home,
01:06:56
>> it's just tremendous. Go learn computer
01:06:59
and get the Comet browser. It has
01:07:01
changed my business for the last two
01:07:03
years. Love the product and we'll have
01:07:06
you back soon when you launch your
01:07:07
operating system and come up with your
01:07:09
own server and desktop server but
01:07:11
business is the focus. Yeah.
01:07:13
>> Yes.
01:07:13
>> All right. Great seeing you.
01:07:15
>> We have an amazing guest Arthur
01:07:16
Mensch here, the CEO of Mistral AI.
01:07:19
How are you doing sir?
01:07:21
>> Great. Thank you for having me.
01:07:22
>> And so you're here at Nvidia's big
01:07:26
conference,
01:07:28
big announcement. You're going to be
01:07:30
working with Nvidia to build models.
01:07:33
uh to open source them. What is the uh
01:07:36
big announcement here?
01:07:37
>> Well, we're announcing that we are going
01:07:39
to be training the next generation of
01:07:41
frontier models with uh with Nvidia. Um
01:07:44
it's something that we've been doing
01:07:45
before with Nvidia with NeMo, something
01:07:47
we did like 18 months ago. And the point
01:07:49
for us is really to be able to produce
01:07:51
the best open source models out there so
01:07:53
that we can actually use those assets to
01:07:55
specialize them through products that we
01:07:57
do for our customers like Forge that
01:08:00
helps us customize the models for the
01:08:02
enterprise we work with in engineering
01:08:04
in physics in science uh in making them
01:08:06
better at certain languages when we work
01:08:08
with governments etc.
01:08:10
>> And Mistral, obviously based in
01:08:12
France, you're the leading AI company
01:08:15
there. What's it like running the
01:08:17
company and building a large language
01:08:19
model in Europe? Obviously, there's
01:08:20
regulations and all kinds of
01:08:23
considerations. Privacy, the French are
01:08:24
known for protecting privacy. In the
01:08:26
United States, we're known for taking it
01:08:28
away. How is the landscape there and
01:08:31
what do you have to deal with there that
01:08:32
maybe you wouldn't have to deal with in
01:08:34
America? And what's the pros and the
01:08:35
cons?
>> I'd say first, we have 25% of our
01:08:38
business in the US. Uh, and 25% of our
01:08:40
researchers are actually here. So I
01:08:42
actually spend a lot of time here as
01:08:44
well as in France as well as in the UK
01:08:46
in Singapore where we are. So of course
01:08:48
it's it's different markets. Uh it's
01:08:51
markets where you have language which is
01:08:52
a topic, uh, where
01:08:55
manufacturing is a bigger piece
01:08:57
of the cake than it is here. uh and I'd
01:09:00
say our strength has been to also
01:09:02
work with European companies that are a
01:09:04
bit lagging behind, uh, and that want to
01:09:06
adopt the technology to to leap forward
01:09:09
and we've been able to do that through a
01:09:10
forward deployment engineering
01:09:11
engagement, through our Forge product, and
01:09:13
our Studio product that allows them to deploy
01:09:15
agents that do end to end automation but
01:09:18
on top of that the thing that we have
01:09:19
announced today, like Forge, is something
01:09:21
that is actually being used today uh
01:09:23
with customers in the US because they
01:09:25
come to us with uh needs for post
01:09:28
training, for making models specifically
01:09:29
good at financial services and what's
01:09:32
happening is that we have this product
01:09:33
and we can bring the models to
01:09:34
specialize them as well.
01:09:36
>> And so your belief is specialized
01:09:39
verticalized models healthcare finance
01:09:42
engineering different verticals will win
01:09:44
the day, or a global model will win the
01:09:47
day that does everything.
01:09:49
>> Well you need general purpose models to
01:09:51
do the orchestration parts etc. But at
01:09:53
some point, enterprises sit on a lot
01:09:56
of intellectual property on a lot of
01:09:58
signals coming from physical systems
01:10:00
from factories, from tools, and it's
01:10:03
actually not trivial to connect those
01:10:05
systems to connect those data to models
01:10:07
that are closed source. If you have open
01:10:09
models you can actually add uh new
01:10:10
parameters, you can do a lot of deeper
01:10:13
things that you cannot do with closed
01:10:14
models. And that's
01:10:16
something that we do: we not
01:10:18
only work at the model side but
01:10:19
also at the orchestration side. We sit
01:10:21
with subject matter experts to
01:10:23
understand their needs and we build
01:10:24
business applications that are fully
01:10:26
bespoke to their needs by modifying the
01:10:28
models but also modifying the harness on
01:10:30
top etc. So we believe that eventually
01:10:33
building on open source technology is a
01:10:35
way to save cost and a way to have better
01:10:36
control, because you can put the thing on
01:10:38
every cloud that you want on your
01:10:40
hardware if you want you can deploy it
01:10:41
on the edge if you want and eventually
01:10:43
uh from a from a customization
01:10:45
perspective and from leveraging your
01:10:47
decades of IP that you've been accruing
01:10:49
in financial services in heavy
01:10:51
manufacturing. Companies like ASML,
01:10:53
for instance they do benefit from
01:10:54
working with us because we take their
01:10:56
data and we build models that are
01:10:57
specifically good for their
01:10:59
>> um just training data using experts to
01:11:03
come in and refine a model. Most people
01:11:05
don't know this business that well, but
01:11:07
this has become a very large part of the
01:11:10
industry. Obviously, scale AI was doing
01:11:12
it. They went to Facebook, lost a lot of
01:11:14
the customer base who didn't want to uh
01:11:17
send their data, I guess, over to Meta.
01:11:19
Uh we're investors in a company called
01:11:20
Micro One that's doing pretty well in
01:11:22
this space. There's other folks doing
01:11:23
it. Explain to the audience what you're
01:11:27
doing specifically for companies and how
01:11:29
this training works in a verticalized
01:11:31
way and then how you silo that data
01:11:33
because if you're working with one
01:11:34
customer in aerospace or fintech they
01:11:37
might have a need set but they may not
01:11:39
want that training to go to a
01:11:42
competitor.
>> I can use a few examples. I
01:11:45
think overall the data segregation is
01:11:47
super important and the way we have
01:11:48
solved that is through a portable
01:11:50
platform. So our technology is a set of
01:11:52
services, a set of training tools, a set
01:11:55
of data processing tools that I can take
01:11:57
and that I can put on the infrastructure
01:11:59
of my customers. So suddenly from an IT
01:12:01
perspective and when we talk to the
01:12:02
CIOs, they realize that from a security
01:12:04
perspective, the flow of data doesn't go out.
01:12:07
There's no data flow coming back to
01:12:08
Mistral because everything stays there.
01:12:10
Now uh the way we we then use that
01:12:13
technology that has been deployed is
01:12:15
that we're going to be working with uh
01:12:17
the team that is doing uh image scanning
01:12:20
and defect detection with ASML, for
01:12:22
instance and we're going to be sending
01:12:24
forward-deployed engineers and scientists.
01:12:26
They're all PhDs, they know how to train
01:12:27
models and they spend some time with the
01:12:29
subject matter experts that can explain
01:12:31
how an image is being scanned, how
01:12:33
do you detect defects, etc., and based
01:12:36
on that we're going to work out what
01:12:37
kind of data needs to be used to train
01:12:40
the model that's going to solve the
01:12:41
task itself. And so we send
01:12:44
the technology, and typically we send a
01:12:46
few scientists, because you
01:12:49
do need that expertise transfer and that
01:12:51
knowledge transfer in between our teams
01:12:53
and the vertical experts and then we
01:12:55
make sure that eventually our team no
01:12:57
longer needs to be there to retrain the
01:12:58
models to get more data access etc. So
01:13:00
that combination of data segregation,
01:13:03
expertise transfer, knowledge transfer
01:13:05
is the one thing that makes us quite
01:13:06
unique and allows us to serve the most
01:13:09
critical use cases, the most critical
01:13:10
processes in industries that actually
01:13:12
need to take their data and put it into
01:13:14
models for it to work.
>> Yeah, this seems
01:13:17
to be where things head once the entire open web, what was
01:13:21
available legally, gray market, etc., is used up. I
01:13:25
won't have you comment on that
01:13:26
controversy. Uh, but we've kind of
01:13:29
exhausted what's in the open crawl.
01:13:31
Yeah,
01:13:31
>> we have.
01:13:31
>> And it's time to actually
01:13:34
either make synthetic data or actually
01:13:37
use experts. Do you believe in synthetic
01:13:39
data and where does that work and where
01:13:41
does it fail?
>> We use synthetic data as a
01:13:44
way to warm up the models. It's a way to
01:13:46
actually be quite efficient at the
01:13:48
beginning. If you have a large model and
01:13:49
you want to train a small model, you
01:13:50
will use your large model
01:13:52
to process and to produce a lot of
01:13:55
synthetic data at the beginning. uh and
01:13:57
then but eventually you do need to have
01:13:58
human signal. Uh so the human signal is
01:14:01
something that is always a bit costly to
01:14:02
acquire because you need to talk to the
01:14:04
experts they need to give feedback to
01:14:06
the machines and so at the beginning
01:14:08
synthetic data allows you to do the
01:14:10
compression to to further compress the
01:14:11
models. At the end you do need to go and
01:14:13
get data that is uh produced by humans.
01:14:16
So yeah, it's mostly
01:14:18
an efficient way of
01:14:20
training models to have bigger
01:14:22
models that are used as teachers for
01:14:25
smaller models, but it's not enough. And
01:14:26
so you also need human signal.
>> Arthur,
01:14:28
we've seen um an incredible explosion.
01:14:31
We're sitting here on AO 52,
01:14:34
after OpenClaw, the year of our Lord, 52
01:14:37
days.
01:14:39
when you first saw Open Claw and saw the
01:14:41
reaction of hackers,
01:14:45
founders, startups, CEOs, just the
01:14:47
amount of energy and it racing to the
01:14:49
top of GitHub with the most number of
01:14:51
stars and likes and and all these
01:14:54
contributors. What did that say to you
01:14:56
as an executive in the space who's been
01:14:59
grinding on this for many years? What
01:15:01
does that OpenClaw moment mean?
01:15:04
>> Well, it resonated a lot with what we
01:15:05
were doing with our customers uh because
01:15:08
pretty quickly uh enterprises realized
01:15:10
that if they wanted to make some gains
01:15:12
with artificial intelligence geni, they
01:15:14
would need to automate full processes.
01:15:16
And to automate a full process as an
01:15:18
enterprise, well, you can use
01:15:19
OpenClaw, but it's
01:15:21
actually not really enough, because
01:15:22
you have data problems, you have
01:15:24
governance problems, you can't observe
01:15:26
the process that is running, and you
01:15:28
can't control it in many
01:15:30
cases when you run a KYC process. So if
01:15:32
you're HSBC for instance, one of our
01:15:34
customers, you will want to have
01:15:36
deterministic gates that are going to
01:15:38
always do the same thing in a way that
01:15:40
is observable and that you can guarantee
01:15:43
the CIO that it's always going to go
01:15:44
through these gates and that's not
01:15:46
something that OpenClaw is providing
01:15:48
because it doesn't have the kind of
01:15:50
primitives that you need to work on
01:15:52
collective productivity, observable
01:15:54
productivity and to work on mission
01:15:55
critical systems. On the other hand,
01:15:58
the autonomy
01:16:00
it brings to people that are just
01:16:02
individuals hacking together
01:16:04
things is a way to also show to
01:16:05
enterprises that if you set up the right
01:16:07
control plane, if you set up the right
01:16:09
sandboxes, if you connect to the right
01:16:11
data sources, if you make sure that
01:16:13
your access controls are well respected,
01:16:15
then you can actually unleash the power
01:16:17
of agents doing things for your
01:16:19
employees and that's going to work. Work
01:16:21
on the platform, because otherwise you will
01:16:23
not be at ease when you're sleeping. It
01:16:24
is um definitely something you have to
01:16:26
be thoughtful about. When I installed
01:16:28
it, I gave my agent root
01:16:31
access to my Google Docs and my G Suite,
01:16:35
my Notion, my Zoom, and
01:16:40
my GCal, everything. And then I
01:16:42
realized, wow, with my enterprise
01:16:45
edition of Gmail I can just
01:16:48
summarize for my entire 21-person
01:16:50
investment company every conversation
01:16:52
going on in Gmail and then correlate it
01:16:55
with every conversation in Slack. And
01:16:57
then I realized, oh my gosh, there's
01:16:59
compensation discussions going on.
01:17:01
There's a person on a PIP, who we put
01:17:03
on a performance
01:17:05
improvement plan perhaps, or something
01:17:06
like that. Oh, I have to make sure
01:17:09
nobody else can access this because the
01:17:12
power comes from giving it access to
01:17:15
data. But with great power comes great
01:17:17
responsibility and I think people are
01:17:19
learning that in real time. Yeah, it's a
01:17:21
big problem because the enterprise data
01:17:23
is not a single thing that you want to
01:17:24
put into a single system that is going
01:17:26
to be accessible by everyone, and so
01:17:28
you need to have this layer that
01:17:30
actually understands what
01:17:32
is in the data. You need to have a
01:17:34
semantic of what can actually be
01:17:35
proposed to HR or what can be
01:17:39
proposed to engineering, and typically
01:17:42
compensation is one of these things you
01:17:43
want to make sure that the compensation
01:17:45
data does not flow back to
01:17:47
all of the enterprise, because you're
01:17:48
going to have a lot of problems if
01:17:50
that's the case. And so what you actually
01:17:52
need, and which is hard to do, is what we
01:17:54
call a context engine: a mapping of
01:17:56
where the data sits that comes with a
01:17:58
certain amount of metadata that is
01:18:00
telling you that this data is actually
01:18:01
not accessible to part of the company
01:18:04
and if you actually have someone in
01:18:06
engineering that is asking for something
01:18:08
related to comp, the thing is actually
01:18:10
going to tell you, look, you actually
01:18:11
can't access that data. So that's
01:18:14
hard, it's actually hard. You need
01:18:16
to rethink entirely the way your IT
01:18:17
systems are being connected, and at
01:18:20
some point you also need to think about
01:18:21
your management, because your
01:18:23
information flow is completely different
01:18:25
today, if you're connecting agents
01:18:28
together with your data sources, than it
01:18:29
used to be. And suddenly maybe you don't
01:18:31
need that manager whose only purpose was
01:18:33
to take information from the bottom and
01:18:35
put the information on top etc. So
01:18:37
there's some IT problems to solve and
01:18:39
you need the right primitives, you need
01:18:41
sandboxes, you need RBAC-based access
01:18:44
control and these kinds of things, and
01:18:46
you have changes to make. You need to
01:18:48
rethink your entire customer service
01:18:50
department, because suddenly you actually
01:18:52
don't need that much transfer of
01:18:53
information operated by humans.
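A minimal sketch of the context-engine idea described above: every document carries metadata naming who may read it, and the retrieval layer filters on that metadata before anything reaches the model. The department names and documents here are hypothetical.

```python
# Documents tagged with access metadata; the agent never sees a document
# unless the requester's department is in its allow-list.
DOCS = [
    {"id": 1, "text": "Q3 engineering roadmap", "allowed": {"engineering", "hr"}},
    {"id": 2, "text": "2026 compensation bands", "allowed": {"hr"}},
]

def retrieve(query_terms, requester_dept):
    """Return ids of matching documents the requester is cleared for."""
    hits = []
    for doc in DOCS:
        if requester_dept not in doc["allowed"]:
            continue  # access control enforced before retrieval, not after
        if any(t in doc["text"].lower() for t in query_terms):
            hits.append(doc["id"])
    return hits

# An engineer asking about comp gets nothing back; HR does.
print(retrieve(["compensation"], "engineering"))  # []
print(retrieve(["compensation"], "hr"))           # [2]
```

The design choice is the ordering: the access check runs before retrieval, so restricted text never enters the model's context in the first place, rather than relying on the model to decline to repeat it.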
01:18:55
>> All right. Uh you have to go. You got a
01:18:57
flight to catch. It is so great to see
01:18:58
you Arthur. Continued success with
01:19:00
Mistral.
01:19:01
>> Thank you very much. Cheers. I'm really
01:19:02
lucky to have Daniel Roberts here. He's
01:19:04
the co-CEO and co-founder along with his
01:19:07
brother of IREN. They are a publicly
01:19:10
traded company. They started in BTC.
01:19:12
Welcome to the All-In Interview program.
01:19:14
>> Thanks, Jason. Pleasure to be here.
01:19:16
>> Yeah. And so you started in Sydney, you
01:19:19
and your brother, seven, eight
01:19:22
years ago. And you got in early on
01:19:25
Bitcoin, and all these Bitcoin
01:19:28
miners wanted to have data centers. Huh.
01:19:30
>> Yeah, that that's directionally right.
01:19:32
So the thesis we saw was this explosion
01:19:35
of the digital world, the growth in the
01:19:36
online and at some point the real world
01:19:39
was going to struggle. So we set about
01:19:41
to build out large-scale data centers.
01:19:43
Yes, the first use case was Bitcoin
01:19:45
mining. But as we said to our seed
01:19:47
investors, use that to bootstrap the
01:19:49
platform, generate cash flow, layer in
01:19:52
higher and better use cases over time as
01:19:53
they emerge. Here we are today with AI,
01:19:56
we are swapping out all the Bitcoin for
01:19:57
AI chips. When did you first start
01:19:59
seeing the demand in the company shift
01:20:02
from hey Bitcoin miners we need some
01:20:06
H100s, whatever it is, to hey, we're
01:20:10
this nonprofit OpenAI, hey, we're this
01:20:13
research lab, we need some AI compute.
01:20:16
when did that start hitting
01:20:17
>> Look, we had a bit of a false dawn, I
01:20:19
would say, back in 2020. We signed an MOU
01:20:21
with Dell to start bringing out
01:20:22
customers and compute, but in hindsight
01:20:25
it was too early so we went back to
01:20:26
Bitcoin, kept bootstrapping the
01:20:28
platform. Look, I would say about 2
01:20:30
years ago and month by month, the demand
01:20:33
just continues to escalate.
01:20:34
>> And you were in so early that when you
01:20:37
were looking at data center space in the
01:20:39
United States,
01:20:42
you were one of one looking at the
01:20:43
space, one of two or three people
01:20:45
looking at the space, and they were
01:20:47
trying to sell you on space. Yeah.
01:20:49
>> Yeah. So, we actually develop the data
01:20:51
centers ourselves. So, we go and find
01:20:53
the land, we go and get the permits, we
01:20:54
go and apply for grid connections. And
01:20:57
we were doing it at a scale that just
01:20:59
amazed people at the time. Like 750
01:21:01
megawatts, our flagship Texas site,
01:21:04
four years ago was unheard of. In the
01:21:06
middle of the desert, we're building
01:21:07
these big data centers. The traditional
01:21:08
data center industry was going, what are you
01:21:10
guys doing? We're saying we believe in
01:21:12
the future of digitization, high-
01:21:14
performance computing, and obviously now
01:21:16
today it's paying dividends.
01:21:17
>> Yeah. I don't think anybody could have
01:21:19
predicted when ChatGPT came out,
01:21:22
OpenClaw recently as a turning point. Um,
01:21:25
and then you know, Microsoft, Google,
01:21:28
and everybody embracing this. Uh, and
01:21:30
that's your big partner, Microsoft.
01:21:33
>> Yes, Microsoft's one of our early
01:21:34
partners. We signed a $9.7 billion
01:21:37
contract with them late last year, but
01:21:39
as I was explaining to you before the
01:21:41
show, that's 5% of our capacity. So,
01:21:44
things are busy at the moment.
01:21:45
>> Yeah.
01:21:46
>> And when you do these buildouts,
01:21:49
the big conversation today is no
01:21:52
longer the number of GPUs you're putting in.
01:21:54
It's just power. Power is the
01:21:58
constraint today. Yeah.
01:21:59
>> Look, for many of the industry it is,
01:22:01
but for us, because we started 8 years
01:22:03
ago tying up all this land and power,
01:22:05
it's not. So, we've got 4 1/2 gigawatts.
01:22:08
For context, that's almost as much power
01:22:11
annually as the Bay Area uses in its
01:22:13
entirety each year. Wow. It's huge. So, for
01:22:16
us, the hurdle or the constraint is
01:22:18
really time to compute. And that's
01:22:21
emerging across the industry as well.
01:22:23
And time to compute means tradespeople
01:22:27
coming to West Texas, living in a
01:22:30
trailer that you set up, to then break
01:22:34
ground on a data center, build
01:22:36
foundations, build water cooling
01:22:38
systems. Like this is hard manual labor
01:22:41
going on. Yeah,
01:22:42
>> exactly. And this is the whole real
01:22:44
world challenge to respond to these
01:22:46
digital exponential demand curves that
01:22:48
are unconstrained by the real world in
01:22:50
terms of their appetite. And it just
01:22:52
compounds. You need thousands of people
01:22:54
out in these locations that haven't
01:22:56
supported it. You put stress on supply
01:22:58
chains. We're seeing what's happening
01:22:59
with the memory, every aspect of it. So,
01:23:02
it's just permanent whack-a-mole,
01:23:03
permanently solving fires to try and
01:23:05
bring this compute online.
01:23:07
>> And you get to spend time there.
01:23:10
>> What's it like when you set up a town or
01:23:13
you bring a thousand people or 2,000
01:23:16
people to what's a pretty much remote
01:23:19
small town? I'm assuming that
01:23:21
when you bring a thousand, there might
01:23:22
only be 500 living there right now. So
01:23:26
what are those towns like? It sounds
01:23:28
to me like something out of the
01:23:30
gold mining era, when people first
01:23:32
went and were prospectors, a
01:23:35
prospecting town.
01:23:37
>> Pretty much. I mean, the
01:23:38
barbecue's great, that was a drawcard,
01:23:40
but apart from that, look, we've always
01:23:42
had a policy of hiring local, supporting
01:23:44
the local community. Uh this year we're
01:23:46
hitting a million dollars in community
01:23:48
grants cumulatively. That's things like
01:23:50
local playgrounds, supporting the fire
01:23:52
departments, but we will hire locally.
01:23:54
Once we can't find that trade locally,
01:23:56
we will expand the radius by 20 miles and
01:23:58
hire out of that and so on and so on for
01:24:00
us.
01:24:00
>> That's very thoughtful. Yeah. And and
01:24:02
these folks are coming say an
01:24:04
electrician or a construction worker.
01:24:07
They're coming having built houses or
01:24:10
maybe corporate
01:24:15
offices, and now they come for a tour of
01:24:18
duty here and the salaries go up
01:24:20
massively but they got to leave their
01:24:22
family for a 3-month tour or something.
01:24:24
>> Yeah. Yes and no. Because typically
01:24:26
where we locate is where there's heavy
01:24:28
electrical infrastructure. Where there's
01:24:31
heavy electrical infrastructure is
01:24:32
typically where old manufacturing and
01:24:35
industry has closed down. Ah,
01:24:37
>> so we go in, leverage that sunk capex,
01:24:40
rehire, retrain local workforces and
01:24:43
bring a new industry to town in these
01:24:44
data centers.
01:24:45
>> Has that workforce now been
01:24:48
completely depleted and we need to train
01:24:50
another generation, a younger generation
01:24:53
to be generation tool belt and really
01:24:55
embrace the trades
01:24:57
>> 100%. We're partnering with
01:24:58
universities, trade colleges.
01:25:00
Absolutely. And you go to a trade
01:25:02
school, or you go to a college,
01:25:05
people are getting degrees in philosophy
01:25:07
and English literature, they're going
01:25:10
50k a year in debt, 200k a year in debt.
01:25:13
What's the starting salary for a trades
01:25:15
person working on a data center doing
01:25:18
electrical or construction or HVAC?
01:25:20
What's the ballpark range?
01:25:22
>> Uh, look, I won't talk specifics, but
01:25:24
they are going up. The price is
01:25:26
going up. Depends on the level, but yes,
01:25:28
there is a rush for good people. I'm hearing 150 to
01:25:31
like 300K. Am I in the ballpark?
01:25:34
>> The lower end directionally, you're
01:25:35
right. Yeah.
01:25:35
>> Yeah. I mean, it's incredible when you
01:25:37
think about it. There's concern about,
01:25:40
hey, AI taking jobs and then on this
01:25:42
other side of the ledger you can't find
01:25:45
enough talent to service it. Talk
01:25:48
to me about energy sources and how you
01:25:51
think about that. Uh, President Trump,
01:25:53
Chris Wright, the administration that
01:25:56
kind of started with, hey, clean,
01:25:57
beautiful coal. Year two, they're like,
01:26:00
"All sources matter." Nuclear,
01:26:02
obviously, nat gas is plentiful in that
01:26:05
area. We obviously got a lot of oil.
01:26:06
People don't know this, but Texas is,
01:26:08
in the United States, the number one
01:26:11
source of solar installations. Yeah.
01:26:14
>> Yeah. Talk to us about energy.
01:26:15
>> So, our philosophy has been
01:26:17
sustainability from day one. We have
01:26:18
used 100% renewable energy since
01:26:21
inception.
01:26:22
>> What?
01:26:22
>> 100%.
01:26:23
>> Wait, how is that possible? It's
01:26:25
>> We use hydro in British Columbia. We use
01:26:27
wind and solar in West Texas. In West
01:26:29
Texas, where we're located, there's
01:26:31
around 45 to 50 GW of wind and solar.
01:26:34
>> Yeah.
01:26:34
>> The transmission line to export that
01:26:36
down to the load centers in Dallas and
01:26:38
Houston is 12 GW.
01:26:40
>> Oh.
01:26:40
>> So you go and locate to the source of
01:26:42
low-cost excess renewable energy,
01:26:44
monetize it into this digital commodity,
01:26:46
export it at the speed of light as
01:26:48
tokens.
01:26:49
>> Great arbitrage. And the wind is
01:26:51
producing a lot, but it's harder to
01:26:53
get from those areas where people are
01:26:56
willing to put up. I mean, people don't
01:26:57
understand how big West Texas is. It is
01:27:01
an incredible amount of land. And you're
01:27:02
coming from Australia
01:27:04
where also on the west side, people
01:27:06
don't understand exactly how much just
01:27:09
pure nature land there is. Yeah.
01:27:10
Undeveloped.
01:27:11
>> So much land. And the issue is distance.
01:27:13
You've got to spend billions of dollars
01:27:14
on this transmission connection
01:27:16
infrastructure to move that power to
01:27:18
where people actually want it. You can
01:27:19
build wind farms, you can build solar
01:27:21
farms, but if you build it in the desert
01:27:23
and no one can use it, then what's the
01:27:25
point? So the whole opportunity for our
01:27:26
industry is to go to the source of that
01:27:28
power and monetize it.
01:27:30
>> So the data centers follow the wind
01:27:33
turbines, the solar installations.
01:27:36
How do you think about batteries and are
01:27:38
you able to put those online? Because
01:27:40
obviously you're going to have periods
01:27:41
where, hey, it's not a windy day. In
01:27:43
Texas, we have very few days when it's
01:27:45
overcast, so that problem's pretty much
01:27:47
solved. But you're going to have 50 days
01:27:48
where the sun's not beating down. So,
01:27:51
how do you deal with the demand
01:27:53
and softening that duck curve?
01:27:55
>> We don't need to.
01:27:56
>> The utility does that on our behalf. So,
01:27:58
this is why these grid connections are
01:28:00
so scarce, so hard to get, and so highly
01:28:02
valued because once you get that grid
01:28:04
connection, the utility underwrites all
01:28:06
of that variability. They guarantee you
01:28:09
24/7 reliable power.
01:28:11
>> Got it. So, on their side, they're
01:28:13
figuring it out. Something goes down, and
01:28:15
they could fall back even though you're
01:28:18
100% committed to renewables if they
01:28:20
needed to fall back to gas or whatever
01:28:22
they have that ability out there. So you
01:28:24
have that as a backup.
01:28:26
>> A lot of talk about or a debate. Are we
01:28:29
getting ahead of our skis? Are people
01:28:32
slowing down? There was some talk about
01:28:34
the OpenAI project maybe downscaling a
01:28:36
little bit. Is OpenAI a partner as well
01:28:38
or
01:28:39
>> uh can't comment.
01:28:40
>> Can't comment. Okay. So we'll we'll read
01:28:42
into that whatever we want.
01:28:44
But
01:28:46
are there pockets where people are
01:28:47
saying, "Hey, let's slow down." Or is it
01:28:49
still gangbusters?
01:28:51
>> It's right at the end of the spectrum.
01:28:53
It's gangbusters. We cannot meet
01:28:55
demand. That's why the whole industry
01:28:57
now is around time to compute. There are
01:28:59
no idle GPUs in the world sitting in a
01:29:01
data center.
01:29:02
>> Yeah. And what's your take on when
01:29:06
software makes it cheaper? And this is a big
01:29:10
discussion from Jensen himself during
01:29:12
his two-and-a-half-hour keynote
01:29:13
yesterday. We're sitting here
01:29:15
Wednesday; I think he did his keynote on
01:29:16
Tuesday. He was talking about how
01:29:18
software is going to
01:29:19
lower the cost of
01:29:21
tokens 50x, and then you have
01:29:25
transport also contributing to that.
01:29:28
When do you think the curve goes from
01:29:30
parabolic to simply growing at a
01:29:33
ridiculous level? Is there a slowdown
01:29:36
coming or how are you planning for the
01:29:38
future?
01:29:38
>> Look, I think it's actually the
01:29:39
opposite. I think it feeds on itself.
01:29:41
So, I'll give you one example. You go
01:29:43
into ChatGPT today and you generate an
01:29:45
image. You enter the prompt. It's
01:29:47
like the dialup internet days.
01:29:48
>> It is
01:29:49
>> right. It takes minutes and you're like,
01:29:50
I better get this prompt right.
01:29:52
>> Yeah.
01:29:52
>> Finally, 2 minutes later, it comes. Now,
01:29:54
I'll give you an example. If we 10x the
01:29:56
amount of compute available, which is an
01:29:58
enormous task from where we are today,
01:30:00
and those images take 5 to 10 seconds,
01:30:02
are we going to generate more or less
01:30:04
images?
01:30:05
>> Oh, many more. Uh, this is Jevons
01:30:07
paradox. This is the theory of induced
01:30:09
traffic. You know, you build a couple
01:30:11
more lanes, people start to think, well,
01:30:13
maybe the distance from Bondi Beach
01:30:16
to the central business district in
01:30:18
Sydney would be an acceptable
01:30:20
commute.
01:30:20
>> Love the analogy.
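The induced-demand dynamic in this exchange can be made concrete with a toy model. The elasticity value is an assumption chosen for illustration, not a measured figure; it just encodes "demand grows faster than the speedup."

```python
# Toy Jevons-paradox model: as generation gets faster (cheaper), usage
# rises more than proportionally, so total compute consumed goes up.
def images_generated(seconds_per_image, elasticity=1.5):
    # Demand grows as a power of the speedup over a 120 s baseline.
    speedup = 120.0 / seconds_per_image
    return 1_000 * speedup ** elasticity

slow = images_generated(120)  # today: ~2 minutes per image
fast = images_generated(10)   # with ~10x more compute: 10 s per image
# Image count grew ~41x while each image got 12x cheaper, so total
# GPU-seconds consumed still went up, not down.
print(fast * 10 > slow * 120)  # True: more absolute compute used
```

With any elasticity above 1, cheaper generation increases total compute demand, which is the "feeds on itself" claim in Daniel's answer.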
01:30:21
>> Yeah. So what do you think about, or
01:30:25
what are you seeing? I mean, we're
01:30:26
here at Nvidia. Obviously, they make the
01:30:29
leading-edge chips. They just bought
01:30:30
Groq, so now you've got, you know, two
01:30:33
of the leading-edge chips coming out
01:30:35
of the same company, but custom silicon
01:30:38
becoming a big discussion. Has that
01:30:40
started to land in the data centers yet?
01:30:42
Obviously, Google, don't know if they're
01:30:44
a customer, you can tell us, but they're
01:30:46
making custom silicon. Amazon is making
01:30:48
custom silicon. Meta is making custom
01:30:51
silicon. Talk to me about that
01:30:53
revolution and is it actually making it
01:30:55
to the data centers yet?
01:30:56
>> Look, to various degrees it is.
01:30:58
They're promoting their products.
01:30:59
They're trying to tie up data center
01:31:01
capacity. So yes, there's multiple
01:31:03
silicon looking for homes. I
01:31:05
think it's fair to say Nvidia has a
01:31:07
massive head start. The ecosystem
01:31:09
they've incubated, the standards that
01:31:11
they're setting. So I would say the
01:31:13
safest pathway to build out at scale
01:31:14
early is to follow the Nvidia road maps.
01:31:17
But absolutely over time we are seeing
01:31:19
these chips emerge.
01:31:20
>> And in terms of desktop computing, I
01:31:25
don't know if you saw the announcement
01:31:27
that Dell and Nvidia are making a
01:31:31
really powerful desktop, 750 gigs of RAM,
01:31:34
a lot of power. You're going to be able to
01:31:35
run some local models, open source, and
01:31:39
with OpenClaw and open source coming
01:31:42
from Kimi and a bunch of the models
01:31:45
out of China.
01:31:47
has the hacker group, which I think you
01:31:49
started in, like I did, probably in
01:31:51
similar time periods. People are
01:31:53
starting to get really obsessed with
01:31:55
having a $10,000 or $20,000 desktop setup and
01:31:58
running this locally. What do you think of
01:31:59
that trend? I'm curious.
01:32:00
>> Yeah, I mean the breakthroughs we're
01:32:01
seeing in software, the way it's
01:32:03
distributing power to every man
01:32:06
and woman in every house, and their
01:32:07
ability to code and use products like
01:32:09
OpenClaw, the generation of demand and
01:32:12
appetite for compute at a local level
01:32:14
all the way through to these mega data
01:32:15
centers. It's absolutely real and as we
01:32:18
see the emergence of agents using more
01:32:20
and more as we see autonomous vehicles
01:32:22
and other automation, robotics, it's
01:32:24
absolutely going to compound.
01:32:26
>> And what about nuclear? The Trump
01:32:29
administration
01:32:31
really seemed to flip the switch on a
01:32:35
growing
01:32:36
belief that, hey wait, nuclear is
01:32:39
pretty great. It's clean. It's the
01:32:41
original renewable in a way. Uh, and
01:32:43
these new modular reactors have nothing
01:32:45
to do with Chernobyl, Fukushima, or
01:32:48
Three-Mile Island. They're much safer.
01:32:50
They're a completely different
01:32:51
architecture. Have those
01:32:54
started to land yet? And since
01:32:56
you called it correctly that time in the great
01:32:58
state of Texas, where I'm from,
01:33:00
are you
01:33:02
following nuclear?
01:33:03
>> I think you have to. I think the
01:33:05
reality is it's going to take a decade,
01:33:07
or a bit longer, by the time big projects
01:33:09
can come into commissioning, but now is
01:33:11
the time to start that conversation, and
01:33:13
put in place policies, mobilize capital,
01:33:15
and start that ball rolling.
01:33:17
>> Yeah. Do you have a data center
01:33:19
going up near nuclear?
01:33:21
>> No, not at the moment.
01:33:22
>> Not at the moment. But you're actively
01:33:24
tracking that activity because
01:33:26
>> Yeah, this seems pretty inevitable.
01:33:30
Yeah,
01:33:30
>> feels like it.
01:33:31
>> And if that happens, what impact does it
01:33:33
have on your industry? Because
01:33:37
obviously it's happening in
01:33:39
China, and people always said the Bitcoin
01:33:41
miners were like the canary in the
01:33:44
coal mine, near the hydro dams and near
01:33:45
the nuclear where there was excess
01:33:45
capacity. What impact do you think this
01:33:48
has if you could actually have small
01:33:50
modular reactors next to data centers?
01:33:52
>> Well, I think it just opens up the
01:33:54
market and enhances the US's competitive
01:33:55
advantage in this space. Like AI is
01:33:57
inevitable, robotics is inevitable. The
01:34:00
reality is the correlation between human
01:34:02
progress and energy consumption is
01:34:05
really really high over a very long time
01:34:07
period. So if we can find a way to
01:34:09
unlock new generation, clean generation
01:34:11
such as nuclear, and locate that more at the
01:34:13
source and enable more compute on a
01:34:15
distributed basis, all those use cases
01:34:17
we just discussed become easier, more
01:34:19
fluid, faster and then you get that
01:34:22
positive flywheel around Jevons
01:34:23
paradox and demand. Talk to me about the
01:34:26
architecture today of
01:34:29
Ethernet and data moving between data
01:34:33
centers and within data centers. That
01:34:35
backbone is going through a paradigm
01:34:38
shift as well. Yeah.
01:34:39
>> Yeah. Yeah, it is. And Jensen coined
01:34:41
the term: the data center is the
01:34:43
new computer.
01:34:44
>> So you need to step back and say,
01:34:46
right, this big building is essentially
01:34:48
the old desktop PC we had under our desk
01:34:51
at home. You go, right, how does that
01:34:53
work? So all the cabling, the latency,
01:34:56
the number of hops between each GPU, how
01:34:58
they talk to each other, the fabric
01:34:59
around InfiniBand, Ethernet, it's
01:35:02
absolutely critical because every
01:35:04
millisecond matters in terms of
01:35:05
performance of that cluster.
01:35:07
>> Yeah. And
01:35:11
what do you think of Elon's vision?
01:35:16
It's obviously a longer-term vision of
01:35:18
putting data centers in space and
01:35:20
there's a couple other people working on
01:35:21
it as well. Yeah, I mean it's very hard
01:35:24
to argue with Elon. He's been very right
01:35:27
on a number of things for a very long
01:35:28
time. I think sitting here today, it
01:35:30
feels exceptionally difficult given the
01:35:32
cost of moving things to space, the
01:35:34
challenges around radiation. There's a
01:35:37
huge amount of engineering challenges,
01:35:39
but that's never scared Elon before. So,
01:35:41
I'm not
01:35:41
>> qualified. And he's inevitably
01:35:44
right, but sometimes he's late. He might
01:35:48
be late to the party. He might be late
01:35:49
to the dinner party. He might show up
01:35:51
at dessert, but generally uh he nails
01:35:54
it. How much of an issue is getting the
01:35:56
data out of the data center to consumers
01:36:00
today? Is that not something people are
01:36:02
worried about when you're building
01:36:03
something out in West Texas, all that
01:36:06
data, fiber, all that's been taken care
01:36:09
of or does that become a gating issue at
01:36:11
some point? So this was one of the big
01:36:13
myths that we had to bust when we
01:36:15
started this business because everyone
01:36:16
said data centers must be located close
01:36:18
to population centers, metropolitan
01:36:20
areas. Latency is really important and
01:36:22
we say yeah that's right latency is
01:36:23
important but the reality is in the US
01:36:26
Texas especially there is fiber
01:36:28
everywhere underneath the ground lots
01:36:30
and lots and lots of it. And when you
01:36:32
look at latency from our site in the
01:36:34
middle of the desert in West Texas down
01:36:37
to Dallas, the big carrier hotel, six
01:36:39
millisecond roundtrip latency. What's
01:36:42
six milliseconds? There's a thousand
01:36:44
milliseconds in a second. Yeah,
01:36:46
>> we're talking six.
01:36:47
>> It's adjacent.
01:36:49
>> Yeah, it's not even... Yeah, it's
01:36:52
definitely not material. Listen,
01:36:54
continued success, and you're
01:36:58
hiring
01:36:59
>> a lot of people.
01:37:00
>> Yeah. Yeah, I think we got 129 job
01:37:03
advertisements up at the moment.
01:37:04
>> All right, so everybody go to the IREN
01:37:06
website. And listen, the company's doing
01:37:09
fantastic. Thanks for spending some time
01:37:11
with us here at All-In at
01:37:13
GTC.
01:37:16
>> Thanks, Jason.
01:37:16
>> Appreciate it.
01:37:33
I'm going all in.

Episode Highlights

  • CoreWeave's Journey
    From crypto to CGI rendering, CoreWeave evolved its use of GPUs for diverse applications.
    “We began to scale the company and look for other use cases.”
    @ 02m 41s
    March 23, 2026
  • The Value of GPUs
    Michael Intrator discusses the longevity and value of GPUs in the market.
    “If people are willing to pay me for it, it still has value.”
    @ 11m 36s
  • The Box: A Financial Strategy
    A unique financial vehicle that governs cash flow and builds confidence among lenders.
    “They look at this box and they’re like, 'Wow, we’re really confident we’re going to get our money back.'”
    @ 21m 13s
  • Demand for AI Compute
    The demand for artificial intelligence services is overwhelming the global capacity to deliver compute.
    “The depth of the demand for the service we provide has been relentless.”
    @ 24m 23s
  • Perplexity's Evolution
    Perplexity has evolved to provide accurate AI with full access to the internet and computing.
    “Perplexity wants to be the company that’s building the most accurate AI.”
    @ 35m 19s
  • Perplexity's Growth
    Perplexity is experiencing rapid growth, with tens of millions of users and thousands of corporate clients.
    “Several tens of millions of people are using the product every month.”
    @ 45m 01s
  • The Future of AI and Business
    Exploring how AI can help small businesses thrive and operate autonomously.
    “AI is going to create this one person $1 billion company.”
    @ 57m 48s
  • Job Displacement vs. New Opportunities
    Discussing the balance between job loss and the potential for new entrepreneurial opportunities.
    “Even if there is temporary job displacement, we should look forward to a glorious future.”
    @ 01h 01m 45s
  • The Power of Open Source Models
    Mistral AI's approach to building specialized models for various industries.
    “Building on open source technology is a way to save cost and have better control.”
    @ 01h 10m 35s
  • The Rise of AI Demand
    The shift from Bitcoin to AI chips marks a significant change in demand for data centers.
    “Here we are today with AI, we are swapping out all the Bitcoin for AI chips.”
    @ 01h 19m 56s
  • Sustainability Commitment
    The company has been using 100% renewable energy since its inception, focusing on sustainability.
    “We have used 100% renewable energy since inception.”
    @ 01h 26m 21s
  • Nuclear Energy's Comeback
    Nuclear energy is gaining traction as a clean energy source, with new modular reactors.
    “Now is the time to start that conversation.”
    @ 01h 33m 11s

Key Moments

  • CoreWeave's Evolution 02:41
  • Cash Flow Management 19:36
  • Hybrid AI 38:23
  • Job Displacement 1:00:33
  • AI Empowerment 1:02:06
  • AI Demand Shift 1:19:56
  • 100% Renewable Energy 1:26:21
  • Custom Silicon Revolution 1:30:56
