Sergey Brin, Google Co-Founder | All-In Live from Miami

May 20, 2025 / 33:24

This episode covers AI advancements, the future of education, and insights from Sergey Brin, co-founder of Google. Key discussions include the impact of AI on productivity, the evolution of human-computer interaction, and the role of education in an AI-driven world.

Sergey Brin shares his experiences returning to Google and the excitement surrounding AI technology. He discusses how AI models have transformed productivity and the challenges of integrating AI into existing workflows.

The conversation shifts to the future of education, with Brin expressing skepticism about traditional college paths in light of AI's capabilities. He emphasizes the importance of adaptability and exploration for his children.

Brin also reflects on the evolution of human-computer interaction, predicting a shift towards voice and thought-based commands. He discusses the potential of AI to enhance user experiences and the implications of advanced AI systems.

The episode concludes with Brin discussing the future of robotics and the balance between open-source and proprietary models in AI development.

TL;DR

Sergey Brin discusses AI's impact on productivity, education, and future human-computer interaction.

Video

00:00:02
We've got a special guest who's going to
00:00:04
come join us. This always happens.
00:00:06
Sergey Brin, everybody. Oh my god.
00:00:09
Somebody told me you uh started
00:00:11
submitting code and it kind of freaked
00:00:14
everybody out that daddy was home. All
00:00:17
models tend to do better if you threaten
00:00:19
them. If you threaten them like with
00:00:20
physical violence. Yes. Management is
00:00:23
like the easiest thing to do with AI.
00:00:25
Absolutely. It must be a weird
00:00:26
experience to meet the bureaucracy in a
00:00:28
company that you didn't hire. But on the
00:00:30
other side of it, I would say it's
00:00:31
pretty amazing that some junior muck
00:00:33
muck can basically look at you and say,
00:00:35
"Hey, go yourself." No, but I'm serious.
00:00:37
That's a sign of a healthy culture,
00:00:39
actually. You're punching a clock, man.
00:00:41
I hear the reports. You and I have
00:00:43
talked about it. You're going to work
00:00:44
every day. Yeah. It's been, you know,
00:00:47
some of the most fun I've had in my
00:00:48
life, honestly. And uh I retired like a
00:00:52
month before COVID hit, in theory. Yeah. And
00:00:55
I was like, you know, this has been
00:00:56
good. I want to do something else. I
00:00:58
want to hang out in
00:00:59
cafes, read physics books. Yeah. And
00:01:02
then like a month later, I was like, uh,
00:01:05
that's not really happening. So then I
00:01:08
just started to go to the office, you
00:01:10
know, once we could go to the office.
00:01:12
And
00:01:13
um actually to be perfectly honest,
00:01:16
there was a guy uh from uh OpenAI, this
00:01:20
guy named Dan, and I I ran into him at a
00:01:23
little party and he said, you know, look
00:01:26
what are you doing? This is like the
00:01:28
greatest
00:01:30
transformative moment in computer
00:01:32
science ever. Completely like and you're
00:01:34
a computer scientist. I'm a computer
00:01:35
scientist. Forget that. Founder of
00:01:37
Google, but you're a PhD student for
00:01:39
computer science. I haven't finished my
00:01:40
PhD yet, but working on it. Keep
00:01:42
working. Yeah, we'll get there.
00:01:44
Technically on leave of absence, right?
00:01:46
And uh he he told me this and I'd
00:01:48
already started kind of going into the
00:01:49
office a little bit and I was like, you
00:01:51
know, he's right. And uh it has been uh
00:01:54
just incredible just well you guys all
00:01:57
obviously follow all the AI technology
00:02:00
but being a computer scientist it is you
00:02:02
know the most exciting thing you know of
00:02:05
my life just technologically and the
00:02:08
exponential nature of this the pace of
00:02:11
it dwarfs anything we've seen in our
00:02:13
career it's almost like everything we
00:02:16
did over the last 30 or 40 years has led
00:02:19
up to this moment and it's all
00:02:22
compounding on itself. The pace maybe you
00:02:24
could speak, you know, you you had a
00:02:26
company Google that grew from, you know,
00:02:29
a 100 users and 10 employees
00:02:32
to now you have over two billion people
00:02:35
using I think six products or five
00:02:37
products have over two billion. It's
00:02:39
it's not it's not even worth counting
00:02:41
because it's the majority of the people
00:02:42
on the planet touch Google
00:02:44
products. Describe the pace. Yeah. I
00:02:47
mean the excitement of the early web
00:02:50
like I remember using Mosaic and then
00:02:52
later Netscape. Uh how many of you
00:02:55
remember Mosaic actually? Am I a weirdo?
00:02:58
And you remember there was a what a
00:03:00
what's new page? The what's new page is
00:03:01
great. Right. Like you go through two or
00:03:04
three new web pages again. Yeah. It was
00:03:05
like in this last week these were the
00:03:07
new websites. Yes. And it was like such
00:03:09
and such elementary school such and such
00:03:12
a fish tank. Yeah. And you were like
00:03:15
Michael Jordan appreciation page. Yeah.
00:03:17
Whatever it was, these were the three
00:03:18
new sites on the whole internet. So
00:03:21
obviously the web you know developed
00:03:23
very rapidly from there and that was a
00:03:25
very uh exciting and then we've had
00:03:28
smartphones and whatnot. But you know
00:03:30
this the developments in AI are just
00:03:33
astonishing I would say by comparison uh
00:03:36
just because of you know the web spread
00:03:39
but didn't technically change so much uh
00:03:42
from you know month to month year to
00:03:44
year but these AI systems actually
00:03:47
changed quite a lot quite a lot you know
00:03:49
the like if you went away somewhere for
00:03:53
a month and you came back you'd be like
00:03:55
whoa what happened somebody told me you
00:03:57
uh started submitting code and it kind
00:04:01
of freaked everybody out that daddy was
00:04:02
home.
00:04:05
Okay. Daddy need a PR. What happened?
00:04:07
The code I submitted wasn't very
00:04:09
exciting. I think I needed to like add
00:04:11
myself to get access to some things and
00:04:14
you know a minor CL here or there. Um
00:04:17
nothing nothing that's going to win any
00:04:19
awards. uh but but I you know you need
00:04:23
to do that to um to do basic things run
00:04:26
basic experiments and things like that
00:04:29
um and I've I've tried to do that and
00:04:32
touch different parts of the system so
00:04:35
that you know I so I first of all it's
00:04:38
fun and secondly I know what I'm talking
00:04:40
about. Um, it really feels like a privilege to
00:04:43
be able to kind of go back to the
00:04:46
company not have any real executive
00:04:48
responsibilities but be able to actually
00:04:51
go deep into every little pocket. Are
00:04:54
there parts of the AI stack that
00:04:57
interest you more than others right now?
00:04:58
Are there certain problems that are just
00:05:00
totally captivating you? Yeah, I started
00:05:02
uh you know like sort of um I don't know
00:05:05
a couple years ago and maybe a year ago
00:05:08
uh I was really very close with uh the
00:05:11
what we call pre-training. Yeah. Um,
00:05:13
actually most of what people think of as
00:05:15
AI training, whatever people call it,
00:05:17
pre-training for various historical
00:05:20
reasons. Uh, but that's sort of the big
00:05:23
super, you know, you throw huge amounts
00:05:26
of computers at it. Um, and,
00:05:29
uh, I I learned a lot, you know, just
00:05:31
being deeply involved in that and seeing
00:05:33
us go from model to model and so forth
00:05:35
and running little baby experiments, but
00:05:38
uh, kind of just for fun. So I could say
00:05:41
I did it. Uh and uh more recently the
00:05:44
post-training especially as the thinking
00:05:46
models have uh come around. Um and
00:05:49
that's been you know another huge step
00:05:52
up in general in AI. So uh you know we
00:05:56
don't really know what the ceiling is.
00:05:59
When you um explain what's happening
00:06:02
with prompt engineering then to deep
00:06:04
research and what's happening there to
00:06:07
like a civilian. How would you explain
00:06:09
that sort of step function? because I
00:06:11
think people are not hitting the down
00:06:13
caret and watching deep research in
00:06:16
Gemini's mobile app and you got a mobile
00:06:18
app and it's pretty great and I by the
00:06:19
way I got the uh fold after you and I um
00:06:22
were talking about it and okay Google
00:06:24
kicks Siri's ass now like it actually
00:06:26
does what you ask it to do when you ask
00:06:28
it to open up it does stuff but the
00:06:30
number of threads the number of queries
00:06:32
the number of follow-ups that it's doing
00:06:34
in that deep research is 200 300 maybe
00:06:39
explain that jump and then what you
00:06:40
think the jump after that is. To me, the
00:06:43
exciting thing about AI, especially
00:06:45
these days, I mean, it's not like quite
00:06:47
AGI yet as people are seeking or it's
00:06:50
not superhuman intelligence, but it's
00:06:54
pretty damn smart and uh can definitely
00:06:57
surprise you. So, I so I think of the
00:07:00
superpower is when it can do things in a
00:07:03
volume that I cannot. Yes. Right. So,
00:07:06
you know, by default when you use some
00:07:08
of our AI systems, you know, it'll suck
00:07:10
down whatever top 10 search results, you
00:07:13
know, and kind of pull out what you need
00:07:16
out of them, something like that. Um,
00:07:18
but I could do that myself to be honest,
00:07:20
you know, maybe it would take me a
00:07:22
little bit more time. But if it sucks
00:07:24
down the top, you know, thousand results
00:07:27
and then does follow on searches for
00:07:28
each of those and reads them deeply,
00:07:31
like that's, you know, a week of work
00:07:33
for me. Like I can't do that. This is
00:07:35
the thing I think people have not fully
00:07:37
appreciated who are not using the deep
00:07:38
research projects before we had our F1
00:07:41
um driver on stage. I'm a neophyte. I
00:07:44
don't know anything about it. I said,
00:07:45
"How many deaths occurred per decade?"
00:07:47
And I said, "I want to get to deaths per
00:07:49
mile driven." And at first it was like,
00:07:50
"That's going to be really hard." I was
00:07:51
like, "I give you permission to make
00:07:53
your best shot at it and come up with
00:07:55
your best theory. Let's do it." And it
00:07:57
was like, "Okay." And it was like
00:08:00
there's this many teams, there's this
00:08:01
many races. Which model did you use?
00:08:04
Uh, no. I'm I use Gemini. Gemini
00:08:07
fabulous version, the fabulous one. And
00:08:09
it was like, but I treat it like I get
00:08:12
sassy with it and it kind of works for
00:08:14
me. You know, it's a weird thing. It's
00:08:16
like you drinking the wine. We don't
00:08:18
circulate too much in the AI community.
00:08:21
Uh, but the not just our models, but all
00:08:24
models tend to do better if you threaten
00:08:25
them. If you threaten them like with
00:08:27
physical violence. Yes. But like that's
00:08:30
people feel weird about that, so we
00:08:31
don't really talk about that. Yeah, I
00:08:33
threatened them with not being
00:08:34
fabulous and it responded to that as
00:08:36
well. Yeah, that's historically you just
00:08:38
say like, "Oh, I'm going to kidnap you
00:08:40
if you
00:08:41
don't." Yeah, they actually Can I ask
00:08:43
you a more specific But hold on. But it
00:08:45
went through it and it literally came up
00:08:48
with a system where it said, "I think we
00:08:50
should include practice miles." So,
00:08:52
let's say there's 100 practice miles for
00:08:54
every mile on the track. And then it
00:08:56
literally gave me the deaths per mile
00:08:59
estimated. And then I started cross
00:09:00
referencing it and I was like oh my god
00:09:02
this is like somebody's term paper for
00:09:05
undergrad you know like whoa done in in
00:09:09
minutes. It's yeah I mean it's amazing
00:09:11
and all of us have had these experiences
00:09:13
where you suddenly decide okay I'll just
00:09:16
throw this AI I don't really expect it
00:09:18
to work and then you're like whoa that
00:09:20
actually worked. So, as you as you have
00:09:22
those moments and then you go home to
00:09:25
your just life as a dad, have you gotten
00:09:27
to the point where you're like, "What
00:09:30
will my children do? And are they
00:09:32
learning the right way? And should I
00:09:35
totally just change everything that
00:09:37
they're doing right now?" Have you had
00:09:39
any of those moments yet? Yeah. I mean,
00:09:41
I look, I I don't really know how to
00:09:44
think about it to be perfectly honest. I
00:09:45
don't have like a magical way. I mean I
00:09:47
see I have a a kid in high school and
00:09:50
middle school and you know I mean the
00:09:52
AIS are basically you know already ahead
00:09:57
you know I mean obviously there's some
00:09:59
things AIs are particularly dumb at and
00:10:00
they you know they make certain mistakes
00:10:03
a human would never make but generally
00:10:05
you know if you talk about like math or
00:10:07
calculus or whatever like they're pretty
00:10:10
damn good like they you know can win
00:10:12
like math contests and coding contests
00:10:14
things like that against you know some
00:10:16
top humans and and then I look at you
00:10:19
know okay he's whatever my son's going
00:10:22
to go on to whatever from sophomore to
00:10:24
junior and what is he going to learn and
00:10:26
then I think in my mind and I talk to
00:10:28
him about this well what is the AI going
00:10:30
to be, in fact
00:10:33
yeah yeah and it's like comparable right
00:10:36
obviously are there areas where you
00:10:38
would tell your son look don't or not
00:10:41
not yet I don't know if you can like
00:10:43
plan your life around this I mean I
00:10:46
didn't
00:10:47
particularly plan my life to like I
00:10:51
don't know be an entrepreneur or
00:10:52
whatever. I just liked math and
00:10:53
computer science. I guess maybe I got
00:10:55
lucky and it worked out to be you know
00:10:57
useful in the world. I don't know. I
00:10:59
guess I I I think you know my kids
00:11:01
should do what they like. Hopefully it's
00:11:04
somewhat challenging and they can you
00:11:05
know overcome uh different kinds of
00:11:08
problems and things like that. What
00:11:09
about specifically? What about college?
00:11:11
Do you think college should is going to
00:11:13
continue to exist as it is today? I mean
00:11:15
it seems like college was already
00:11:16
undergoing this kind of uh revolution
00:11:20
even before this sort of AI challenge of
00:11:22
people are like is it worth it? Should I
00:11:24
be more vocational? What's actually
00:11:26
going to be useful? So we're already
00:11:28
kind of entering this kind of situation
00:11:31
uh where there's sort of questions asked
00:11:33
about colleges. Yeah, I think you know
00:11:36
AI obviously puts that at the forefront.
00:11:39
As a parent, I think a lot about, hey,
00:11:42
so much of education in America and the
00:11:45
middle class, upper class is all
00:11:47
about what college, how do you get them
00:11:50
there? And honestly, lately, I'm like, I
00:11:52
don't think they should go to college.
00:11:53
Like, it's just fundamentally, you know,
00:11:55
my son is a rising junior and his entire
00:11:59
focus is he wants to go to an SEC school
00:12:01
because of the culture.
00:12:04
And two years ago, I was I would have
00:12:07
panicked and I would have thought,
00:12:09
should I help him get into a school,
00:12:10
this school, that school? And now I'm
00:12:12
like, that's actually the best thing you
00:12:14
could do. Be socially well adjusted,
00:12:17
psychologically deal with different
00:12:18
kinds of failures, you know, enjoy a few
00:12:20
years of exploration. Yeah. Yeah. Yeah.
00:12:23
Sergey, can I ask you about hardware?
00:12:26
You know, years ago, Google owned Boston
00:12:28
Dynamics, maybe a little bit ahead of
00:12:30
its time, but the way these systems are
00:12:33
learning through visual information and
00:12:36
sensory information and basically
00:12:38
learning how to adjust to the
00:12:40
environment around them is triggering
00:12:42
these kind of pretty profound like
00:12:43
learning curves in hardware and there's
00:12:46
dozens of like startups now making
00:12:48
robotic systems. What do you see in
00:12:51
robotics and hardware? Is this a year or
00:12:54
are we in a moment right now where
00:12:55
things are really starting to work? I
00:12:57
mean, I think we've uh you know,
00:12:59
acquired and later sold like five or so
00:13:02
robotics companies and uh you know,
00:13:04
Boston being one of them. I guess if I
00:13:06
look back on it, we built the hardware.
00:13:08
We also had this more recently we built
00:13:10
out Everyday Robots internally and
00:13:14
then later had to transition that. You
00:13:16
know, the robots are all cool and all,
00:13:18
but the software wasn't quite there.
00:13:23
Um, that's every time we've tried to do
00:13:25
it to, you know, to make them truly
00:13:28
useful
00:13:29
and presumably one of these days that'll
00:13:32
no longer be true, right? But have you
00:13:34
seen anything lately that Yeah. Do and
00:13:36
do you believe in the humanoid form
00:13:37
factor robots or do you think that's a
00:13:39
little overkill? I'm probably the one
00:13:41
weirdo who doesn't who's not a big fan
00:13:43
of humanoids, but maybe I'm jaded
00:13:46
because we've, you know, we at least
00:13:48
acquired at least two humanoid uh
00:13:50
robotics startups and later sold them.
00:13:53
Um but but the reason is I mean the
00:13:56
reason people want to do humanoid robots
00:13:58
for the most part is because the world
00:14:00
is kind of designed around this form
00:14:02
factor and you know you can train on
00:14:04
YouTube, we can train on videos, people
00:14:06
do all the things. Um, I personally
00:14:09
don't think that's given the AI quite
00:14:12
enough credit. Like AI can learn, you
00:14:16
know, through simulation and through
00:14:18
real life pretty quickly how to handle
00:14:20
different situations. And I don't know
00:14:22
that you need exactly the same number of
00:14:24
arms and legs and wheels, which is zero
00:14:26
in the case of humans, as humans to make
00:14:28
it all work. And that so I'm I'm
00:14:31
probably
00:14:32
less bullish on that. But to be fair,
00:14:34
there are a lot of really smart people
00:14:36
who are making humanoid robots. So I
00:14:38
wouldn't discount it. What about the
00:14:40
path of being a programmer? That's where
00:14:42
we're seeing with that finite data set.
00:14:45
And listen, Google's got a 20-year-old code
00:14:47
base now. So like it actually could be
00:14:48
quite impactful. What are you seeing
00:14:50
like literally in the company? You know,
00:14:53
are the 10x developer is always this
00:14:55
like ideal that you can, you know, you
00:14:57
get a couple of unicorns once in a
00:14:58
while, but are we going to see like all
00:15:00
developers like, you know, their
00:15:02
productivity hit that level 8 n 10 and
00:15:04
they're just going to or is it going to
00:15:06
be all done by computers and we're just
00:15:08
going to check it, make sure it's not
00:15:09
too weird. Um, because it could get
00:15:12
weird if you vibe code. Yeah, I'm
00:15:15
embarrassed to say this. Okay. I like
00:15:17
recently I just had a big tiff inside
00:15:20
the company because we had this list of
00:15:23
what you're allowed to use to code and
00:15:25
what you're not allowed to use to code
00:15:26
and the Gemini was on the no list. You
00:15:30
Oh, you have to be pure. You can't I
00:15:32
don't know for like a bunch of really
00:15:33
weird reasons that it would like boggled
00:15:36
my mind that you know you couldn't vibe
00:15:38
code on the Gemini code. I mean, nobody
00:15:40
would like enforce this rule, but um but
00:15:43
there was this, you know, actual
00:15:46
internal web page. For whatever reason,
00:15:47
historical reason, somebody had put this
00:15:49
and I had a big fight with them and I,
00:15:51
you know, I cleared it up after a
00:15:54
shocking period of time. You escalated
00:15:56
to your boss. Oh, I I definitely told
00:16:00
about it and I Sorry, I don't know if
00:16:02
you remember, but you got super voting
00:16:05
founders. You are the boss. You can do
00:16:07
what you want. It's your company still.
00:16:09
No, no, it was uh he was very
00:16:11
supportive. I was more like uh uh I was
00:16:14
like I talked to him. I was like I can't
00:16:16
deal with these people. You need to deal
00:16:17
with this. Like I just like I'm beside
00:16:20
myself that they're like saying it's
00:16:22
weird that there's bureaucracy like in a
00:16:23
company that you must be a weird
00:16:25
experience to meet the bureaucracy in a
00:16:26
company that you didn't hire. But but on
00:16:28
the other side of it I would say it's
00:16:30
pretty amazing that some junior mucky
00:16:32
muck can basically look at you and say
00:16:34
hey go yourself. No but I'm serious.
00:16:36
That's a sign of a healthy culture
00:16:38
actually I guess. So anyway, it did get
00:16:40
fixed and uh people are using it. So
00:16:42
they got fired.
00:16:45
That person's working in Google Siberia.
00:16:48
No, we're trying to, you know, roll out
00:16:51
every possible kind of AI and and trying
00:16:53
external ones, you know, be whatever the
00:16:55
Cursors of the world, all of those
00:16:58
uh to just see what really makes people
00:17:01
more productive. Um I mean for myself
00:17:04
definitely makes me more productive
00:17:06
because I'm not do you do you think the
00:17:08
number of foundational models like if
00:17:10
you look three years forward
00:17:12
will they start to cleave off and gets
00:17:14
highly specialized like beyond the
00:17:16
general and the reasoning maybe there's
00:17:19
a very specific model for chip design
00:17:22
there's clearly a very specific model
00:17:23
for biologic precursor design protein
00:17:26
folding like is the number of
00:17:27
foundational models in the future Sergey
00:17:30
a multiple of what they are today the
00:17:32
same something in between. That's a
00:17:35
great question. I kind
00:17:37
of if I I mean look I don't know like
00:17:40
you guys take a guess just as well as I
00:17:42
can but um if I had to
00:17:46
guess you know things have been more
00:17:49
converging
00:17:50
uh and uh this is sort of broadly true
00:17:53
across machine learning I mean you used
00:17:54
to have all kinds of different kinds of
00:17:56
models and whatever convolutional
00:17:59
networks for vision things and you know
00:18:02
you had um whatever RNN's for text and
00:18:07
speech and stuff and uh you know all
00:18:09
this has shifted to transformers
00:18:11
basically uh and uh increasingly it's
00:18:15
also just becoming one model um now we
00:18:18
do get a lot of oomph occasionally we do
00:18:20
specialized models uh and it's it's
00:18:24
definitely
00:18:25
scientifically a good way to iterate
00:18:28
when you have a particular target you
00:18:29
don't have to like do everything in
00:18:31
every language and handle whatever both
00:18:34
images and video and audio and uh in one
00:18:37
go. Um but we are generally able to
00:18:42
after we do that take those learnings
00:18:46
and basically put that capability into a
00:18:48
general model. So there's not that much
00:18:51
benefit. Um you know you could you can
00:18:54
get away with a somewhat smaller
00:18:55
specialized model a little bit faster a
00:18:58
little bit cheaper but the trends have
00:19:00
not gone that way. What do you think
00:19:02
about the open source closed source
00:19:04
thing? Have there been big philosophical
00:19:07
movements that change your perspective
00:19:09
on the value of open source? Um, we're
00:19:12
still waiting on this, you know, Open
00:19:15
AI. Oh, yeah. Open source drop. I mean,
00:19:17
we haven't seen it yet, but
00:19:19
theoretically it's coming. I mean, have
00:19:21
to give credit uh to where credit's due.
00:19:24
I mean, DeepSeek released a really
00:19:26
surprisingly
00:19:28
powerful model uh when it was January or
00:19:32
so. So, that that definitely closed the
00:19:34
gap to proprietary models. We've pursued
00:19:37
both. So, we released Gemma uh which are
00:19:40
our open-source or you know open models
00:19:44
and um those perform really well.
00:19:47
They're small, dense models, so they fit
00:19:48
well on one computer. Um, and
00:19:52
uh, they're not as powerful as Gemini.
00:19:55
Uh, but I mean, the jury's out which way
00:19:58
that's going to go. Do you have a point
00:19:59
of view on what human-computer
00:20:02
interaction looks like as AI progresses?
00:20:06
It used to be, thanks to you, as a
00:20:09
search box. You type in some keywords or
00:20:12
a question and you would click on links
00:20:14
on the internet and get an answer. Is
00:20:16
the future typing in a question or
00:20:18
speaking to a AirPod or thinking or
00:20:23
thinking or like what's the what's the
00:20:24
Yeah. And then the answer is just spoken
00:20:26
to you. I mean by by the way just to
00:20:27
build on this it was Friday, right?
00:20:30
Neuralink got breakthrough designation
00:20:32
for their human brain interface. I mean
00:20:34
that's a very big step in allowing the
00:20:36
FDA to clear everybody getting it
00:20:38
implanted. Yeah. Is it like if you could
00:20:40
just summarize what you think is kind of
00:20:42
the most commonplace human-computer
00:20:45
interaction model in the next decade or
00:20:47
whatever. Is it a you know there's this
00:20:49
idea of glasses with a screen in the
00:20:51
glasses and you tried that a long time
00:20:53
ago. Yeah, I kind of messed that up.
00:20:54
I'll be honest.
00:20:56
Uh got the timing totally wrong on that.
00:20:59
Early again. Yeah. Uh right. Right. But
00:21:02
early. There are a bunch of things I
00:21:03
wish I had done differently, but
00:21:05
honestly it was just like the technology
00:21:07
wasn't ready for Google Glass. Uh
00:21:10
but nowadays these things I think are
00:21:12
more sensible. I mean there's still
00:21:13
battery life issues I think that you
00:21:17
know we and others need to overcome. Uh
00:21:20
but I think that's a cool form factor. I
00:21:23
mean when you say 10 years though you
00:21:25
know a lot of people are saying hey the
00:21:26
singularity is like five years away. So
00:21:30
your ability to see through that into
00:21:34
the future.
00:21:36
I mean it's very important. But do you
00:21:37
have anybody else? Sorry. Just let me
00:21:39
ask about this. Do you There was a
00:21:40
comment that Larry made years ago that
00:21:44
humans were a stepping stone in
00:21:46
evolution. Okay. Can you comment on
00:21:49
this? Like do you do you think that this
00:21:52
AGI super intelligence or really silicon
00:21:54
intelligence exceeds human capacity and
00:21:57
humans are a stepping stone in you know
00:22:00
progression of evolution? Boy, I think
00:22:03
like sometimes us nerdy guys go and get
00:22:05
have a little too much wine. I've had
00:22:08
two glasses and um I'm ready to go. I I
00:22:11
need some more for this conversation. Um
00:22:15
human implants. Let's go. I mean, I
00:22:17
guess we're starting to get experience
00:22:19
with these AIs that can do certain
00:22:21
things, you know, much better than us.
00:22:23
Um, and they're definitely, you know,
00:22:25
with my skill of math and coding, I feel
00:22:28
like I'm better off just turning to the
00:22:32
AI now. And how do I feel about that? I
00:22:34
mean, it doesn't really bother me, you
00:22:36
know, I use it as a
00:22:37
tool. So, I feel like I've gotten used
00:22:40
to it, but you know, maybe if they get
00:22:43
even more capable in the future,
00:22:46
um, I'll look at it differently. Yeah,
00:22:48
there's a moment of insecurity, maybe. I
00:22:50
guess. So, as an aside, management is
00:22:52
like the easiest thing to do with the
00:22:54
AI. Yeah, absolutely. And I did this,
00:22:57
you know, uh, at Gemini on some of our,
00:23:00
you know, work chats, um, kind of like
00:23:02
Slack, but we have our own version. We
00:23:04
had this AI tool that actually was
00:23:06
really powerful. We unfortunately anyway
00:23:08
temporarily got rid of it. I think we're
00:23:10
going to bring it back and bring it to
00:23:11
everybody. But it it could suck down a
00:23:14
whole chat space and then answer pretty
00:23:16
complicated questions. So I was like,
00:23:18
"Okay, summarize this for me." Okay, now
00:23:21
assign something for everyone to work on
00:23:23
and uh and then I would paste it back in
00:23:26
so people didn't realize it was the AI.
00:23:28
I I admitted it pretty soon. Um and
00:23:31
there were a few giveaways here or
00:23:32
there, but it worked remarkably well.
00:23:35
And then I was like, well, who should be
00:23:37
promoted in this chat space? Uh, and I
00:23:41
actually picked out this woman, this
00:23:43
young woman engineer who like, you know,
00:23:44
I didn't even notice she wasn't very
00:23:46
vocal uh particularly in that. Her PRs kicked
00:23:50
ass? No, no, it was like and then uh I
00:23:53
don't know something that the AI had
00:23:55
detected and I went I talked to the
00:23:56
manager actually and and he was like,
00:23:58
"Yeah, you know what? You're right. Like
00:24:01
she's been working really hard all these
00:24:02
things." Wow. I think that ended up
00:24:04
happening actually. Uh
00:24:07
so I don't know. I guess after a while
00:24:09
you just kind of take it for granted
00:24:10
that you can just do these things. I
00:24:12
don't know. It hasn't really Do you
00:24:13
think that there's a a use case
00:24:16
for like an infinite context link? Oh
00:24:20
100%. I mean all of Google's codebase
00:24:23
goes infinite but sure you should have
00:24:25
access infinite. Yeah. Stateful. Yeah.
00:24:29
and then multiple sessions so that you
00:24:30
could have like 19 of these things, 20
00:24:32
of these things running or just evolve
00:24:33
itself. Eventually, it'll evolve itself.
00:24:35
Yeah. I mean, I guess if it knows
00:24:37
everything, then you can have just one
00:24:39
in theory. You just need to somehow
00:24:41
disambiguate what you're talking about.
00:24:43
Uh but yeah, for sure there's no limit
00:24:46
to use of uh context and there, you
00:24:50
know, there are a lot of ways to make it
00:24:52
larger and larger. There's a there's a
00:24:53
rumor that internally there's a Gemini
00:24:55
build that is a quasi infinite context.
00:25:00
Is it is it a valuable thing? Like I
00:25:01
don't know. Well, you say what you want
00:25:03
to say, but I mean for any such cool new
00:25:07
idea in AI, there are probably five such
00:25:09
things internally. Um and uh you know
00:25:12
the question is how well do they work?
00:25:13
And um yeah, I mean we're definitely
00:25:15
pushing all the bounds um in terms of
00:25:18
intelligence, in terms of context, in
00:25:20
terms of um speed, you know, you name
00:25:24
it. And what about the hardware? Like
00:25:25
when you guys build stuff, do you care
00:25:28
that you have this pathway to Nvidia or
00:25:32
do you think eventually that'll get
00:25:33
abstracted and there'll be a transpiler
00:25:36
and it'll be Nvidia plus 10 other
00:25:38
options, so who cares? Let's just go as
00:25:39
fast as possible. Well, we mostly for
00:25:41
for Gemini, we mostly use our own TPUs.
00:25:45
So, um but we also do support um Nvidia
00:25:48
and we we're one of the big uh uh
00:25:52
purchasers of Nvidia chips and we have
00:25:54
them in Google Cloud available for our
00:25:57
customers uh in addition to
00:25:59
TPUs. Um at this stage it's uh for
00:26:05
better or for worse not that abstract
00:26:06
and maybe someday the AI will abstract
00:26:09
it for us but you know given just the
00:26:11
amount of computation you have to do on
00:26:12
these models you actually have to think
00:26:14
pretty carefully how to do everything
00:26:16
and exactly what kind of chip you have
00:26:18
and how the memory works and the
00:26:20
communication works and so forth are
00:26:23
actually pretty big factors and it
00:26:25
actually yeah maybe one of these days
00:26:28
the AI itself will be good enough to
00:26:30
reason through that today. It's not
00:26:31
quite good enough. I don't know if you
00:26:33
guys are having this experience with the
00:26:35
interface, but I find myself even on my
00:26:37
desktop and certainly on my mobile
00:26:39
phone, going immediately into voice chat
00:26:42
mode and telling it, "Nope, stop." Uh,
00:26:45
that wasn't my question. This is my
00:26:46
question. Nope. Uh, let's say that again
00:26:48
in shorter bullet points. Nope, I want
00:26:49
to focus on this. It's so quick now.
00:26:53
Last year was unusable. It was too slow.
00:26:55
And now it like stops. Okay. And then
00:26:58
you tell it, I would like... It's what I
00:27:00
want to go to. I don't want to type. I
00:27:01
want to use voice. And then
00:27:02
concurrently, I'm watching the text as
00:27:05
it's being written on the page and I
00:27:07
have another window open and I'm doing
00:27:08
Google searches or second queries to an
00:27:11
LLM or writing a Google doc or a notion
00:27:15
page or typing something. So, it's
00:27:17
almost like that scene in um Minority
00:27:20
Report where he has the gloves or in
00:27:22
Blade Runner where he's, you know, in his
00:27:24
apartment saying, "Zoom in, zoom in."
00:27:25
Closer to the left, to the right. And
00:27:27
there's something about these language
00:27:29
models and their ability to the response
00:27:32
time which was always something you
00:27:33
focused on response time the is there
00:27:36
like a response time thing where it
00:27:37
actually is worth doing voice and where
00:27:40
it wasn't previously everything is
00:27:43
getting better and faster and so forth
00:27:45
you know smaller models are more capable
00:27:48
there are better ways to do inference on
00:27:49
them that are faster you can also stack
00:27:52
them like you know this is like Nico's
00:27:53
company ElevenLabs it's an exceptional TTS,
00:27:56
STT stack like there's I mean there are
00:27:59
other options. Whisper is really good at
00:28:01
certain things, but those this is where
00:28:03
I I kind of believe you're going to get
00:28:05
this like compartmentalization where
00:28:08
there'll be certain foundational models
00:28:10
for certain specific things. You stack
00:28:11
them together. You kind of deal with the
00:28:13
latency and it's like pretty good
00:28:16
because they're so good. Like Whisper
00:28:18
and 11 for those speech examples that
00:28:20
you're talking about are
00:28:22
kickass. I mean, they're exceptional.
00:28:24
Well, wait till you turn on your camera
00:28:26
and it sees your reaction to what it's
00:28:29
saying and you go and before you even
00:28:31
say that you don't want it or you put
00:28:32
your finger up, it's pauses. Oh, did you
00:28:34
want something else? Oh, I see you're
00:28:36
not happy with that result. You know,
00:28:38
it's going to get really weird. It's a
00:28:40
funny thing, but we have the, you know,
00:28:41
we have the big open shared offices, so
00:28:44
during work, I can't really use voice
00:28:46
mode too much. I usually use it on the
00:28:48
drive. The drive is incredible. Yeah. I
00:28:51
don't feel like I could. I mean, I would
00:28:53
get its output in my headphones, but if
00:28:56
I want to speak to it, then everybody's
00:28:57
listening to me. So, it's weird. I just
00:28:59
think that would be socially awkward,
00:29:01
but I should I should do that. In my car
00:29:03
ride, I do chat to the AI, but then it's
00:29:06
like audio in, audio out. But I feel
00:29:08
like I honestly maybe it's a good
00:29:10
argument for a private office. I should
00:29:12
spend more time like you guys are. You
00:29:14
could talk to your manager.
00:29:17
They might get one. I like being out in
00:29:19
the
00:29:21
I like with everybody. Uh, but I do
00:29:23
think that there's this AI use case that
00:29:25
I'm missing which I should probably
00:29:27
figure out how to try more often. If
00:29:30
people want to try your new product, is
00:29:32
there a website they can visit or
00:29:34
something or special code or go check? I
00:29:37
mean, honestly, there's a dedicated
00:29:38
Gemini app. If you're using Gemini, just
00:29:40
like you're going through the Google
00:29:42
navigation from your search, just get
00:29:43
the download the actual Gemini app. It's
00:29:45
kickass. It really is the best models. I
00:29:47
think it is. You should use 2.5 Pro. 2.5
00:29:51
Pro. Pay the It's It's a You got to pay,
00:29:53
right? Uh yeah, you got a few queries, you
00:29:56
got a few prompts for free, but uh you
00:29:57
know, if you do it a bunch, you need to
00:29:59
pay; it's like 20 bucks a month.
00:30:02
You got a vision for like making it free
00:30:03
and throwing some ads on the side. Yeah.
00:30:05
One step down in hardware cost, the
00:30:07
whole thing will be free. Well, okay.
00:30:08
It's free today without ads on the side.
00:30:10
You just got a certain number of the top
00:30:11
model. I think we're likely going to
00:30:13
have always now like sort of top models
00:30:15
that we can't supply infinitely to
00:30:18
everyone right off the bat. But, you
00:30:21
know, wait 3 months and then the next
00:30:23
generation. Seems to me like if I'm
00:30:24
asking all these queries, you know, just
00:30:26
having a little on the sidebar of things
00:30:28
I might be a running list that changes
00:30:30
in real time of things I might be
00:30:32
interested in. All for, you know, really
00:30:34
good AI advertising. I just um
00:30:38
uh I don't think we're going to like
00:30:39
necessarily our latest and greatest
00:30:41
models which are you know take a lot of
00:30:44
computation. I don't think we're going
00:30:45
to just be free to everybody right off
00:30:48
the bat. But as we go to the next
00:30:51
generation you know it's like every time
00:30:52
we've gone forward a generation then the
00:30:55
sort of uh the new free tier is usually
00:30:57
as good as the previous pro tier uh and
00:31:02
sometimes better. All right, give it up
00:31:03
for Sergey Brin. Thank you.
00:31:07
[Applause]
00:31:09
Okay, thanks everybody for watching that
00:31:10
amazing interview with Sergey Brin and
00:31:12
thanks Sergey for joining us in Miami.
00:31:14
If you want to come to our next event,
00:31:16
it's the All-In Summit in Los Angeles,
00:31:19
fourth year for All-In Summit. Go to
00:31:22
all-in.com/events to apply. A very
00:31:24
special thanks to our new partner, OKX,
00:31:27
the new money app. OKX was the sponsor
00:31:30
of the McLaren F1 team, which won the
00:31:32
race in Miami. Thanks to Haider and
00:31:35
his team, an amazing partner and an
00:31:37
amazing team. We really enjoyed spending
00:31:39
time with you. And OKX launched their
00:31:41
new crypto exchange here in the US. If
00:31:43
you love All-In, go check them out. And a
00:31:45
special thanks to our friends at Circle.
00:31:48
They're the team behind USDC. Yes, your
00:31:51
favorite stable coin in the world. USDC
00:31:54
is a fully backed digital dollar
00:31:56
redeemable one for one for USD. It's
00:31:59
built for speed, safety, and scale. They
00:32:01
just announced the Circle Payments
00:32:03
Network. This is enterprise-grade
00:32:05
infrastructure that bridges the gap
00:32:07
between the digital economy and outdated
00:32:09
financial reality. Go check out USDC for
00:32:12
all your stable coin needs. And special
00:32:15
thanks to my friends including Shane
00:32:17
over at Poly Market, Google Cloud,
00:32:18
Solana, and BVNK. We couldn't have done
00:32:22
it without y'all. Thank you so
00:32:25
much. We'll let your winners ride.
00:32:29
[Music]
00:32:33
And it said we open source it to the
00:32:35
fans and they've just gone crazy with
00:32:36
it. Love you queen of quinoa.
00:32:40
[Music]
00:32:46
Besties are gone.
00:32:48
That is my dog taking notice your
00:32:50
driveways.
00:32:53
Oh man. My dasher will meet up. We
00:32:56
should all just get a room and just have
00:32:58
one big huge orgy because they're all
00:32:59
just useless. It's like this like sexual
00:33:01
tension that they just need to release
00:33:02
somehow.
00:33:06
Wet your feet.
00:33:09
We need to get Murky's I'm going all
00:33:14
[Music]
00:33:18
in. I'm going all in.

Episode Highlights

  • Transformative Moment in Computer Science
    A computer scientist reflects on the incredible pace of AI development.
    “It's the most exciting thing of my life.”
    @ 02m 05s
    May 20, 2025
  • The Role of AI in Education
    As a parent, the speaker questions the future of education in the age of AI, suggesting that college may not be necessary.
    “I don't think they should go to college.”
    @ 11m 53s
    May 20, 2025
  • Humanoid Robots: Overkill?
    The speaker shares skepticism about humanoid robots, emphasizing that AI can learn effectively without mimicking human form.
    “I personally don't think that's given the AI quite enough credit.”
    @ 14m 09s
    May 20, 2025
  • The Future of Human-Computer Interaction
    Exploring how AI will change the way we interact with technology, from typing to thinking.
    “Is the future typing in a question or speaking to an AirPod?”
    @ 20m 16s
    May 20, 2025
  • Neuralink's Breakthrough
    Neuralink receives breakthrough designation for its human brain interface, paving the way for future advancements.
    “That's a very big step in allowing the FDA to clear everybody getting it implanted.”
    @ 20m 32s
    May 20, 2025
  • AI in the Workplace
    Using AI tools to enhance productivity and decision-making in work environments.
    “Management is like the easiest thing to do with the AI.”
    @ 22m 52s
    May 20, 2025

Key Moments

  • Special Guest @ 00:02
  • AI Excitement @ 02:05
  • Education Revolution @ 11:20
  • Humanoid Robots @ 14:09
  • Neuralink Breakthrough @ 20:32
  • AI Evolution @ 21:46
  • Gemini App Launch @ 29:40

Related Episodes

  • Satya Nadella on AI’s Business Revolution: What Happens to SaaS, OpenAI, and Microsoft?
  • Winning the AI Race Part 1: Michael Kratsios, Kelly Loeffler, Shyam Sankar, Chris Power
  • E111: Microsoft to invest $10B in OpenAI, generative AI hype, America's over-classification problem
  • Grok 4 Wows, The Bitter Lesson, Third Party, AI Browsers, SCOTUS backs POTUS on RIFs
  • E167: Google's Woke AI disaster, Nvidia smashes earnings (again), Groq's LPU breakthrough & more
  • E133: Market melt-up, IPO update, AI startups overheat, Reddit revolts & more with Brad Gerstner