
Sam Altman: Getting Fired (and Re-Hired) by OpenAI, Agents, and AI Copyright Issues

May 10, 2024 / 01:43:03

This episode features Sam Altman, CEO of OpenAI, discussing the future of AI, including the anticipated release of GPT-5, the evolution of AI models, and the implications of AI on society. Key topics include the ongoing development of AI technology, the importance of making advanced tools accessible, and the challenges of balancing open-source and closed-source models.

Sam Altman reflects on the rapid advancements in AI, particularly with ChatGPT, which became the fastest product to reach 100 million users. He addresses the potential release strategy for GPT-5, emphasizing a more gradual rollout and the desire to make advanced technology available to free users.

Discussions also cover the competitive landscape of AI, with Altman acknowledging the challenges posed by open-source models and the need for OpenAI to maintain a leading edge. He highlights the importance of continuous improvement in AI systems and the potential for future innovations in mobile technology and AI applications.

Altman shares insights on the ethical considerations surrounding AI development, including the need for safety measures and the role of regulatory frameworks. He expresses a commitment to ensuring that AI benefits humanity while navigating the complexities of commercialization and public perception.

The episode concludes with a candid conversation about Altman's experiences at OpenAI, including his brief departure and return as CEO, shedding light on the internal dynamics of the organization and the future direction of AI technology.

TL;DR

Sam Altman discusses AI's future, GPT-5 release, and OpenAI's role in balancing accessibility and safety in technology.

Video

00:00:00
I first met our next guest Sam Altman
00:00:02
almost 20 years ago when he was working
00:00:04
on a location-based mobile app called Loopt we
00:00:07
were both backed by Sequoia Capital and in
00:00:09
fact we were both in the first class of
00:00:11
Sequoia Scouts he did an investment in a
00:00:14
little unknown fintech company called
00:00:16
Stripe I did Uber and in that tiny
00:00:18
experiment did Uber I've never heard
00:00:20
that before yeah I think so it's
00:00:22
possible starting already you should
00:00:24
write a book Jason
00:00:28
maybe let your winners
00:00:30
[Music]
00:00:31
Rainman
00:00:35
David Sacks and instead we open sourced it to
00:00:37
the fans and they've just gone crazy
00:00:40
[Music]
00:00:43
with that tiny experimental fund that
00:00:46
Sam and I were part of the Scouts is
00:00:48
Sequoia's highest multiple returning
00:00:50
fund a couple of low single-digit millions
00:00:52
turned into over 200 million I'm told
00:00:54
and then he did yeah that's what I was
00:00:56
told by Roelof yeah and he did a stint at
00:00:58
Y Combinator where he was president from
00:01:00
2014 to 2019 in 2016 he co-founded
00:01:03
OpenAI with the goal of ensuring that
00:01:06
artificial general intelligence benefits
00:01:08
all of humanity in 2019 he left YC to
00:01:11
join OpenAI full-time as CEO things got
00:01:14
really interesting on November 30th of
00:01:16
2022 that's the day OpenAI launched
00:01:19
ChatGPT in January 2023 Microsoft
00:01:22
invested 10 billion in November 2023
00:01:25
over a crazy 5-day span Sam was fired from
00:01:28
OpenAI everybody was going to go work
00:01:30
at Microsoft a bunch of heart emojis went
00:01:33
viral on X/Twitter and people started
00:01:36
speculating that the team had reached
00:01:38
artificial general intelligence the
00:01:40
world was going to end and suddenly a
00:01:42
couple days later he was back to being
00:01:44
the CEO of OpenAI in February Sam was
00:01:48
reportedly looking to raise $7 trillion
00:01:49
for an AI chip project this after it was
00:01:53
reported that Sam was looking to raise a
00:01:54
billion from Masayoshi Son to create an
00:01:57
iPhone killer with Jony Ive the
00:01:58
co-creator of the iPhone all of this
00:02:01
while ChatGPT has become better and
00:02:03
better and a household name is having a
00:02:05
massive impact on how we work and how
00:02:08
work is getting done and it's reportedly
00:02:10
the fastest product to hit 100 million
00:02:12
users in history in just two months and
00:02:15
check out OpenAI's insane revenue
00:02:18
ramp up they reportedly hit two billion
00:02:19
in ARR last year welcome to the All-In
00:02:23
podcast Sam Altman thank you thank you
00:02:26
guys Sacks you want to lead us off here okay
00:02:28
sure I mean I I think the whole industry
00:02:30
is waiting with bated breath for the
00:02:32
release of GPT-5 I guess it's been
00:02:35
reported that it's launching sometime
00:02:36
this summer but that's a pretty big
00:02:38
window can you narrow that down I guess
00:02:41
where where are you in the release of
00:02:42
GPT-5 uh we we take our time on releases
00:02:46
of major new models and I don't think we
00:02:51
uh I think it will be great uh when we
00:02:54
do it and I think we'll be thoughtful
00:02:56
about how we do it uh like we may
00:02:58
release it in a different way than we've
00:02:59
released previous models um also I
00:03:01
don't even know if we'll call it GPT-5
00:03:03
um what I what I will say is you know a
00:03:06
lot of people have noticed how much
00:03:08
better GPT-4 has gotten um since we've
00:03:11
released it and particularly over the
00:03:12
last few months I
00:03:14
think I think that's like a better hint
00:03:18
of what the world looks like where it's
00:03:20
not the like one two three four five six seven but
00:03:24
you you just you use an AI system and
00:03:27
the whole system just gets better and
00:03:28
better fairly continuously um I think
00:03:32
that's like both a better technological
00:03:34
Direction I think that's like easier for
00:03:36
society to adapt to um
00:03:39
but but I assume that's where we'll head
00:03:41
does that mean that there's not going to
00:03:42
be long training cycles and it's
00:03:44
continuously retraining or training
00:03:47
submodels Sam and maybe you could just
00:03:49
speak to us about what might change
00:03:51
architecturally going forward with
00:03:53
respect to large
00:03:54
models well I mean one one one thing
00:03:58
that you could imagine is this just that
00:04:00
you keep training right a model uh that
00:04:02
that would seem like a reasonable thing
00:04:04
to
00:04:05
me do you think you talked about
00:04:07
releasing it differently this time are
00:04:09
you thinking maybe releasing it to the
00:04:11
paid users first or you know a slower
00:04:14
roll out to get the red teams tight
00:04:17
since now there's so much at stake you
00:04:19
have so many customers actually paying
00:04:20
and you've got everybody watching
00:04:22
everything you do you know is it is it
00:04:26
more careful now yeah still only available to
00:04:29
the paid users but one of the
00:04:30
things that we really want to do is
00:04:32
figure out how to make more advanced
00:04:34
technology available to free users too I
00:04:36
think that's a super important part of
00:04:39
our mission uh and and this idea that we
00:04:42
build AI tools and make them super
00:04:44
widely
00:04:45
available free or you know not that
00:04:47
expensive whatever it is so that people
00:04:49
can use them to go kind of invent the
00:04:51
future rather than the magic AGI and the
00:04:53
sky inventing the future and showering it
00:04:55
down upon us uh that seems like a much
00:04:58
better path it seems like a more inspiring
00:05:00
path I also think it's where things are
00:05:01
actually heading so it makes me sad that
00:05:04
we have not figured out how to make GPT-4
00:05:06
level technology available to free users
00:05:08
it's something we really want to do it's
00:05:10
just very expensive I take it it's very
00:05:12
expensive yeah Chamath your thoughts I
00:05:15
think maybe the the two big vectors Sam
00:05:17
that people always talk about is that
00:05:20
underlying cost and sort of the latency
00:05:23
that's kind of rate limited a killer app
00:05:26
and
00:05:27
then I think the second is sort of the
00:05:30
long-term ability for people to build in
00:05:33
an open source World versus a closed
00:05:34
Source world and I think the crazy thing
00:05:37
about this space is that the open source
00:05:40
Community is rabid so one example that I
00:05:43
think is incredible is you know we had
00:05:44
these guys do a pretty crazy demo for
00:05:47
Devin remember like even like five or
00:05:50
six weeks ago that looked incredible and
00:05:52
then some kid just published it under an
00:05:54
open MIT license like OpenDevin and
00:05:58
it's incredibly
00:06:00
good and almost as good as that other
00:06:02
thing that was closed source so maybe we
00:06:05
can just start with that which is tell
00:06:07
me about the business decision to keep
00:06:09
these models closed source and where do
00:06:12
you see things going in the next couple
00:06:14
years so on the first part of your
00:06:16
question um speed and cost those are
00:06:19
hugely important to us and I don't want
00:06:23
to like give a timeline on when we can
00:06:26
bring them down a lot cuz research is
00:06:27
hard but I am confident we'll be able to
00:06:30
um we want to like cut the latency super
00:06:32
dramatically we want to cut the cost
00:06:34
really really dramatically um and I
00:06:37
believe that will happen we're still so
00:06:39
early in the development of the science
00:06:42
and understanding how this works plus we
00:06:43
have all the engineering tailwinds so
00:06:47
I I don't know like when we get to
00:06:50
intelligence too cheap to meter and so
00:06:52
fast that it feels instantaneous to us
00:06:53
and everything else but I do believe we
00:06:56
can get there for you know a pretty high
00:07:00
level of intelligence and um I it's
00:07:04
important to us it's clearly important
00:07:06
to users and it'll unlock a lot of stuff
00:07:08
on the sort of open source closed source
00:07:10
thing I think there's great roles for
00:07:12
both I think
00:07:14
um you know we've open sourced some
00:07:16
stuff we'll open source more stuff in
00:07:17
the future but really like our mission
00:07:20
is to build towards AGI and to figure
00:07:22
out how to broadly distribute its
00:07:24
benefits we have a strategy for that
00:07:26
seems to be resonating with a lot of
00:07:28
people it obviously isn't for everyone
00:07:30
and there's like a big ecosystem and
00:07:32
there will also be open source models
00:07:34
and people who build that way um one
00:07:36
area that I'm particularly interested
00:07:38
personally in open source for is I want
00:07:41
an open source model that is as good as
00:07:43
it can be that runs on my phone and that
00:07:47
I think is going to you know the world
00:07:49
doesn't quite have the technology for
00:07:51
for a good version of that yet but that
00:07:52
seems like a really important thing to
00:07:54
go do at some point will you do will you
00:07:56
do that will you release I don't know if
00:07:57
we will or someone will but someone
00:07:59
Llama 3 Llama 3 running on a phone well
00:08:03
I guess maybe there's like a seven
00:08:05
billion version yeah yeah uh I don't
00:08:08
know if that's if that will fit on a
00:08:09
phone or not but that should be able to fit
00:08:13
on a phone but I don't I I'm not I'm not
00:08:14
sure if that one is like I haven't
00:08:16
played with it I don't know if it's like
00:08:17
good enough to kind of do the thing I'm
00:08:18
thinking about here so when when llama 3
00:08:21
got released I think the big takeaway
00:08:23
for a lot of people was oh wow they've
00:08:24
like caught up to GPT-4 I don't think
00:08:26
it's equal in all dimensions but it's
00:08:29
like pretty pretty close or pretty in
00:08:31
the ballpark I guess the question is you
00:08:33
know you guys released four a while ago
00:08:36
you're working on five or you know more
00:08:38
upgrades to four I mean I think to
00:08:41
Chamath's point about Devin how do you stay
00:08:43
ahead of Open Source I mean it's just
00:08:45
that's like a very hard thing to do in
00:08:47
general right I mean how do you think
00:08:49
about
00:08:50
that what we're trying to do is not
00:08:54
make the sort of
00:08:57
smartest set of weights that we can
00:08:59
but what we're trying to make is like
00:09:01
this useful intelligence layer for
00:09:04
people to use and a model is part of
00:09:06
that I think we will stay pretty far
00:09:09
ahead of I hope we'll stay pretty far
00:09:11
ahead of the rest of the world on that
00:09:13
um but there's a lot of other work
00:09:16
around the whole system that's not just
00:09:19
that you know the the model weights and
00:09:22
we'll have to build up enduring value
00:09:24
the oldfashioned way like any other
00:09:26
business does we'll have to figure out a
00:09:28
great product and reasons to stick with
00:09:29
it and uh you know deliver it at a great
00:09:31
price when you founded the organization
00:09:34
you the stated goal or part of what you
00:09:37
discussed was hey this is too important
00:09:39
for any one company to own it so
00:09:41
therefore it needs to be open then there
00:09:43
was the switch hey it's too dangerous
00:09:45
for anybody to be able to see it and we
00:09:48
need to lock this down because you you
00:09:49
had some fear about that I think is that
00:09:52
accurate because the cynical side is
00:09:54
like well this is a capitalistic move
00:09:56
and then the I think
00:09:59
you know I'm I'm curious what the
00:10:01
decision was here in terms of going from
00:10:03
open we the world needs to see this it's
00:10:06
really important to closed only we can
00:10:08
see it well how did you come to that
00:10:11
that conclusion what were the disc part
00:10:12
of the reason that we released ChatGPT
00:10:14
was we want the world to see this and
00:10:15
we've been trying to tell people that AI
00:10:17
is really important and if you go back
00:10:19
to like uh October of 2022 not that many
00:10:22
people thought AI was going to be that
00:10:23
important or that it was really
00:10:24
happening um and a huge part of what we
00:10:28
we try to do
00:10:30
is put the technology in the hands of
00:10:33
people uh now again there's different
00:10:34
ways to do that and I think there really
00:10:35
is an important role to just say like
00:10:37
here's the weights have at it but the
00:10:40
fact that we have so many people using a
00:10:42
free version of ChatGPT that we don't
00:10:44
you know we don't run ads on we don't
00:10:46
try to like make money on we just put
00:10:47
out there because we want people to have
00:10:48
these tools um I think it's done a lot
00:10:51
to provide a lot of value and you know
00:10:54
teach people how to fish but also to get
00:10:56
the world um really thoughtful about
00:10:59
what's happening here now we still don't
00:11:01
have all the answers and uh we're
00:11:03
fumbling our way through this like
00:11:04
everybody else and I assume we'll change
00:11:06
strategy many more times as we learn new
00:11:08
things you know when we started OpenAI
00:11:11
we had really no idea about how things
00:11:13
were going to go um that we'd make a
00:11:15
language model that we'd ever make a
00:11:17
product we started off just I remember
00:11:20
very clearly that first day where we're
00:11:21
like well now we're all here that was
00:11:23
you know it was difficult to get this
00:11:24
set up but what happens now maybe we
00:11:26
should write some papers maybe we should
00:11:28
stand around a whiteboard and we've just
00:11:30
been trying to like put one foot in
00:11:32
front of the other and figure out what's
00:11:33
next and what's next and what's next
00:11:36
and I think we'll keep doing that can I
00:11:39
just replay something and just make sure
00:11:40
I heard it right I think what you were
00:11:42
saying on the open source closed source
00:11:45
thing is if I heard it right all these
00:11:48
models independent of the business
00:11:50
decision you make are going to become
00:11:52
asymptotically accurate towards some
00:11:55
amount of accuracy like not all but like
00:11:57
let's just say there's four or five that
00:11:58
are
00:12:00
well capitalized enough you guys Meta
00:12:03
Google Microsoft whomever right so let's
00:12:05
just say four or five maybe one
00:12:08
startup and on the open web and then
00:12:12
quickly the accuracy or the value of
00:12:15
these models will probably shift to
00:12:16
these proprietary sources of training
00:12:18
data that you could get that others
00:12:20
can't or others can get that you can't
00:12:22
is that how you see this thing evolving
00:12:25
where the open web gets everybody to a
00:12:27
certain threshold and then it's just an
00:12:29
arms race for data beyond that doesn't
00:12:33
so I definitely don't think it'll be an
00:12:34
arms race for data because when the
00:12:36
models get smart enough at some point it
00:12:37
shouldn't be about more data at least
00:12:39
not for training it may matter data to
00:12:41
make it useful um look the the one thing
00:12:45
that I have learned most throughout all
00:12:48
this is that uh it's hard to make
00:12:50
confident statements a couple of years
00:12:51
in the future about where this is all
00:12:53
going to go and so I don't want to try
00:12:54
now I I will say that I I expect lots of
00:12:58
very capable models in the world and you
00:13:01
know like it feels to me like we just
00:13:04
like stumbled on a new fact of nature or
00:13:07
science or whatever you want to call it
00:13:08
which is like we can create you can like
00:13:13
I mean I don't believe this literally
00:13:15
but it's like a spiritual point you know
00:13:18
intelligence is just this emergent
00:13:19
property of matter and that's like a
00:13:21
that's like a rule of physics or
00:13:22
something um so people are going to
00:13:24
figure that out but there will be all
00:13:26
these different ways to design the
00:13:27
systems people will make different
00:13:29
choices figure out new ideas and I'm
00:13:32
sure like you
00:13:36
know like any other industry I would
00:13:39
expect there to be multiple approaches
00:13:41
and different people like different ones
00:13:43
you know some people like iPhones some
00:13:44
people like an Android phone I think
00:13:46
there'll be some effect like that let's
00:13:48
go back to that first section of just
00:13:50
the the cost and the
00:13:53
speed all of you guys are sort of a
00:13:55
little bit rate limited on literally
00:13:58
Nvidia's throughput right and I think
00:14:00
that you and most everybody else have
00:14:02
sort of effectively announced how much
00:14:04
capacity you can get just because it's
00:14:06
as much as they can spin out what needs
00:14:08
to happen at the substrate so that you
00:14:10
can actually compute cheaper compute
00:14:14
faster get access to more energy how are
00:14:16
you helping to frame out the industry
00:14:19
solving those problems well we we'll
00:14:22
make huge algorithmic gains for sure and
00:14:24
I don't want to discount that you know
00:14:26
I'm very interested in chips and energy
00:14:28
but if we can make our if we can make a
00:14:30
same quality model twice as efficient
00:14:32
that's like we had twice as much compute
00:14:34
right and I think there's a gigantic
00:14:37
amount of work to be done there uh and I
00:14:42
hope we'll start really seeing those
00:14:44
results um other than that the whole
00:14:47
supply chain is like very complicated
00:14:48
you know there's there's logic Fab
00:14:51
capacity there's how much HBM the world
00:14:53
can make there's how quickly you can
00:14:55
like get permits and pour the concrete
00:14:56
make the data centers and then have
00:14:57
people in there wiring them all up
00:14:59
there's finding the energy which is a
00:15:00
huge bottleneck but
00:15:03
uh I think when there's this much value
00:15:06
to people uh the world will do its thing
00:15:08
we'll try to help it happen faster um
00:15:11
and there's probably like I don't know
00:15:14
how to give it a number but there's like
00:15:15
some percentage chance where there is as
00:15:18
you were saying like a huge substrate
00:15:20
breakthrough and we have like a
00:15:22
massively more efficient way to do
00:15:23
computing but I don't I don't like bank
00:15:26
on that or spend too much time thinking
00:15:27
about it what about the device side and
00:15:30
sort of you mentioned sort of the models
00:15:33
that can fit on a phone so obviously
00:15:35
whether that's an LLM or some SLM or
00:15:37
something I'm sure you're thinking about
00:15:39
that but then does the device itself
00:15:41
change I mean is it does it need to be
00:15:42
as expensive as an iPhone
00:15:47
uh I'm super interested in this uh I I
00:15:50
love like great new form factors of
00:15:52
computing and it feels like with every
00:15:55
major technological advance a new thing
00:15:57
becomes possible
00:16:00
uh phones are unbelievably good so I
00:16:03
think the threshold is like very high
00:16:04
here like what like I think I think like
00:16:07
I personally think an iPhone is like the
00:16:10
greatest piece of technology humanity
00:16:13
has ever made it's really a wonderful
00:16:14
product what comes after it like I don't
00:16:17
know I mean I was gonna that was what I
00:16:18
was saying it's so good that to get
00:16:20
Beyond it I think the bar is like quite
00:16:23
High well you've been you've been
00:16:24
working with Jony Ive on something
00:16:26
right we've been discussing ideas but uh
00:16:29
I don't like if I knew is it that that
00:16:32
it has to be more complicated or
00:16:33
actually just much much cheaper and
00:16:35
simpler well almost everyone's
00:16:37
willing to pay for a phone anyway so if
00:16:39
you could like make a way cheaper device
00:16:41
I think the barrier to carry a second
00:16:43
thing or use a second thing is pretty
00:16:46
high so I don't think given that we're
00:16:48
all willing to pay for phones or most of
00:16:50
us are I don't think cheaper is the
00:16:53
answer different is the answer then
00:16:56
would there be like a specialized chip
00:16:58
that would run the phone that was really
00:17:00
good at powering a you know a phone size
00:17:02
AI model probably but the phone
00:17:04
manufacturers are going to do that for
00:17:06
sure that doesn't that doesn't
00:17:07
necessitate a new device I think you'd
00:17:09
have to like find some really different
00:17:11
interaction Paradigm that the technology
00:17:14
enables uh and if I knew what it was I
00:17:17
would be excited to be working on it
00:17:19
right now but well you have you have
00:17:21
voice working right now in the app in
00:17:23
fact I set my action button on my phone
00:17:24
to go directly to ChatGPT's voice app
00:17:27
and I use it with my kids and they love
00:17:29
it talking to it's got latency issues
00:17:31
but it's really we'll get we'll we'll
00:17:32
get that we'll get that better and I
00:17:34
think voice is a hint to whatever the
00:17:37
next thing is like if you can get voice
00:17:40
interaction to be really good it
00:17:42
feels I think that feels like a
00:17:44
different way to use a computer but
00:17:46
again with that by the way like what why
00:17:49
is it not responsive and you know it
00:17:52
feels like a CB you know like over over
00:17:55
it's really annoying to use you know uh
00:17:58
in that way but it's also brilliant when
00:18:00
it gives you the right answer we are
00:18:01
working on that uh it's it's so clunky
00:18:04
right now it's slow it's like kind of
00:18:07
doesn't feel very smooth or authentic or
00:18:09
organic like we'll get all that to be
00:18:12
much
00:18:13
better what about computer vision I mean
00:18:16
they have glasses or maybe you could
00:18:18
wear a pendant I mean you take the
00:18:19
combination
00:18:21
of visual or video data combine it with
00:18:24
voice and now AI knows everything that's
00:18:27
happening around you super powerful to
00:18:29
be able to like the multimodality of
00:18:32
saying like hey ChatGPT what am I
00:18:35
looking at or like what kind of plant is
00:18:37
this I can't quite tell
00:18:39
um that's obvious that that's like a
00:18:41
that's another I think like hint but
00:18:44
whether people want to wear glasses or
00:18:45
like hold up something when they want
00:18:47
that like I there's a bunch of just like
00:18:50
like the the sort of like societal
00:18:53
interpersonal issues here are all very
00:18:56
complicated about wearing a computer on
00:18:57
your face um we we saw that with Google
00:19:00
Glass people got punched in the face in
00:19:02
the Mission started a lot of I forgot
00:19:04
about that I forgot about that so so I I
00:19:06
think it's
00:19:07
like what are the apps that could be
00:19:10
unlocked if AI was sort of ubiquitous on
00:19:13
people's
00:19:14
phones do you have a sense of that or
00:19:17
what would you want to see built
00:19:21
uh I I think what I want is just this
00:19:24
always on like super low friction
00:19:29
thing where I
00:19:31
can either by voice or by text or
00:19:33
ideally like some other it just kind of
00:19:36
knows what I want have this like
00:19:38
constant thing helping me throughout my
00:19:39
day that's got like as much context on
00:19:41
as possible it's like the world's
00:19:43
greatest assistant um and it's just this
00:19:45
like thing working to make me better and
00:19:48
better uh there's there there's like a I
00:19:51
know when you hear people like talk
00:19:52
about the AI future they
00:19:54
imagine there's sort of
00:19:56
two different approaches and don't sound
00:19:59
that different but I think they're like
00:20:00
very different for how we'll design the
00:20:01
system in practice there's the I want an
00:20:04
extension of myself um I want like a
00:20:09
ghost or an alter ego or this thing that
00:20:11
really like is me is acting on my behalf
00:20:14
is um responding to emails not even
00:20:17
telling me about it is is is sort of
00:20:20
like it it becomes more me and is me and
00:20:24
then there's this other thing which is
00:20:25
like I want a great senior employee
00:20:29
it may get to know me very well I may
00:20:31
delegate it you know you can like have
00:20:32
access to my email and I'll tell you the
00:20:34
constraints but but I think of it as
00:20:36
this like separate entity and I
00:20:40
personally like the separate entity
00:20:42
approach better and think that's where
00:20:44
we're going to head um and so in that
00:20:48
sense the thing is not you but it's it's
00:20:51
like a always available always great
00:20:54
super capable assistant executive agent
00:20:57
in a way like gets out there working on
00:20:59
your behalf and understands what you
00:21:01
want and anticipates what you want is
00:21:04
what I'm reading into what you're saying
00:21:05
I think there'd be agent-like behavior
00:21:08
but there's like a difference
00:21:10
between a senior employee and an agent
00:21:13
yeah and like I want it you know I think
00:21:16
of like my I think like of it like one
00:21:19
of the things that I like about a senior
00:21:22
employee is
00:21:26
they'll they'll push back on me they
00:21:28
will sometimes not do something I ask or
00:21:31
they sometimes will say like I can do
00:21:32
that thing if you want but if I do it
00:21:34
here's what I think would happen and
00:21:35
then this and then that and are you
00:21:37
really
00:21:38
sure I definitely want that kind of vibe
00:21:41
which not not just like this thing that
00:21:42
I ask it blindly does it can reason
00:21:46
yeah yeah and push back it can reason it
00:21:47
has like the kind of relationship with
00:21:49
me that I would expect out of a really
00:21:51
competent person that I worked with
00:21:54
which is different from like a sycophant
00:21:56
yeah the thing in that world where if
00:21:58
you had this like Jarvis-like thing that
00:22:01
can reason what do you think it does to
00:22:06
products that you use today where the
00:22:08
interface is very valuable so for
00:22:10
example if you look at an instacart or
00:22:12
if you look at an Uber or if you look at
00:22:14
a DoorDash these are not services that
00:22:16
are meant to be pipes that are just
00:22:19
providing a set of apis to a Smart Set
00:22:22
of agents that ubiquitously work on
00:22:24
behalf of 8 billion
00:22:25
people what do you think has to change
00:22:28
in how we think about how apps need to
00:22:29
work of how this entire infrastructure
00:22:31
of experiences need to work in a world
00:22:33
where you're agentically interfacing to
00:22:35
the world you I'm actually very
00:22:37
interested in designing a world that is
00:22:40
equally usable by humans and by AIs so I
00:22:44
I
00:22:46
I I I like the interpretability of that
00:22:49
I like the smoothness of the handoffs I
00:22:51
like the ability that we can provide
00:22:52
feedback or whatever so you know
00:22:55
DoorDash could just expose some API to my
00:22:59
future AI assistant and they could go
00:23:02
put the order and whatever or I could
00:23:04
say like I could be holding my phone and
00:23:06
I could say okay AI assistant like you
00:23:08
put in this order on DoorDash please
00:23:11
and I could like watch the app open and
00:23:12
see the thing clicking around and I
00:23:14
could say hey no not this or like um
00:23:17
there there's something about designing
00:23:19
a world that is
00:23:23
usable equally well by humans and AIs
00:23:25
that I think is an interesting concept
00:23:28
excited about humanoid robots than sort of
00:23:31
robots of like very other shapes the
00:23:33
world is very much designed for humans
00:23:34
and I think we should absolutely keep it
00:23:36
that way and a shared interface is nice
00:23:39
so you see voice chat that modality kind
00:23:42
of gets rid of apps you just ask it for
00:23:44
sushi it knows the sushi you like before it
00:23:46
knows what you don't like and does its
00:23:48
best shot at doing it I it's hard for me
00:23:50
to imagine that we just go to a world
00:23:53
totally where you say like hey ChatGPT
00:23:55
order me sushi and it says okay do you
00:23:57
want it from this restaurant what kind
00:23:59
what time whatever I think user I think
00:24:02
visual user interfaces are super good
00:24:05
for a lot of things um and it's hard for
00:24:07
me to imagine like a world
00:24:10
where you'd never look at a screen and
00:24:15
just use voice mode only but I I can't
00:24:18
imagine that for a lot of things yeah I
00:24:20
mean Apple tried with Siri like you
00:24:22
supposedly you can order an Uber
00:24:23
automatically with Siri I don't think
00:24:25
anybody's ever done it because it's why
00:24:27
would you take the risk of not the
00:24:29
quality to your point the quality is not
00:24:31
good but when the quality is good enough
00:24:33
you're you'll actually prefer it just
00:24:35
because it's just lighter weight you
00:24:36
don't have to take your phone out you
00:24:37
don't have to search for your app and
00:24:40
press it and oh it automatically logged
00:24:42
you out oh hold on log back in oh 2FA
00:24:44
it's a whole pain in the ass you know
00:24:46
it's like setting a timer with Siri I do
00:24:48
every time because it works really well
00:24:50
and it's great and I don't need more
00:24:52
information but ordering an Uber like I
00:24:55
want to see the prices for a few
00:24:57
different options I want to see how far
00:24:58
away it is I want to see like maybe even
00:25:01
where they are on the map because I
00:25:02
might walk somewhere I get a lot more
00:25:04
information by I think in less time by
00:25:07
looking at that order the Uber screen
00:25:09
than I would if I had to do that all
00:25:10
through the audio Channel I like your
00:25:12
idea of watching it happen that's kind
00:25:14
of cool I think there will just be like
00:25:16
yeah
00:25:18
different there are different interfaces
00:25:20
we use for different tasks and I think
00:25:21
that'll keep going of all the developers
00:25:23
that are building apps and experiences
00:25:26
on OpenAI are there a few that stand out
00:25:29
for you where you're like okay this is
00:25:30
directionally going in a super
00:25:32
interesting area even if it's like a toy
00:25:34
app but are there things that you guys
00:25:36
point to and say this is really
00:25:39
important um I met with a new company
00:25:44
this morning or barely even a company
00:25:45
it's like two people that are going to
00:25:46
work on a summer project trying to
00:25:48
actually finally make the AI tutor like
00:25:52
and I've always been interested in this
00:25:53
space a lot of people have done great
00:25:54
stuff on our platform but if if someone
00:25:57
can deliver
00:25:58
like the way that you actually
00:26:03
like uh they used a phrase I love which
00:26:05
is this is going to be like a Montessori
00:26:07
level reinvention for how
00:26:08
people learn things yeah um but if you
00:26:11
can like find this new way to like let
00:26:12
people explore and learn in new ways on
00:26:14
their own I'm personally super excited
00:26:17
about that um a lot of the coding
00:26:20
related stuff you mentioned Devin
00:26:21
earlier I think that's like a super cool
00:26:23
vision of the future the thing that I am
00:26:26
healthcare I believe
00:26:28
should be pretty transformed by this but
00:26:32
the thing I'm personally most excited
00:26:34
about is the sort of doing faster and
00:26:38
better scientific discovery GPT-4 clearly
00:26:40
not there in a big way although maybe it
00:26:42
accelerates things a little bit by
00:26:44
making scientists more
00:26:45
productive but AlphaFold 3 yeah that's like
00:26:50
but Sam that will be a triumph those are
00:26:54
not like these these models are
00:26:58
trained and built differently than the
00:27:01
language models I mean to some obviously
00:27:04
there's a lot that's similar but there's
00:27:06
a lot um there's kind of a groundup
00:27:08
architecture to a lot of these models
00:27:09
that are being applied to these specific
00:27:12
problem sets these specific applications
00:27:14
like chemistry interaction modeling for
00:27:17
example you'll need some of
00:27:20
that for sure but the the thing that I
00:27:21
think we're missing across the board for
00:27:24
many of these things we've been talking
00:27:25
about is models that can do reasoning
00:27:28
and once you have reasoning you can
00:27:30
connect it to chemistry simulators or
00:27:32
guess yeah that's the important question
00:27:34
I wanted to kind of talk about today was
00:27:37
this idea
00:27:38
of networks of models people talk a lot
00:27:41
about agents as if there's kind of this
00:27:44
linear set of call functions that happen
00:27:47
but one of the things that
00:27:49
arises in biology is networks of systems
00:27:53
that have cross interactions that the
00:27:56
aggregation of the system the
00:27:58
aggregation of the network produces an
00:28:00
output rather than one thing calling
00:28:02
another that thing calling another do we
00:28:04
see like an emergence in this
00:28:06
architecture of either specialized
00:28:07
models or network models that work
00:28:10
together to address bigger problem sets
00:28:12
use reasoning there's computational
00:28:14
models that do things like chemistry or
00:28:16
arithmetic and there's other models that
00:28:17
do rather than one model to rule them
00:28:20
all that's purely
00:28:23
generalized I don't know um
00:28:29
I don't know how much
00:28:31
reasoning is going to turn out to be a
00:28:33
super generalizable thing I suspect it
00:28:36
will but that's more just like an
00:28:38
intuition and a hope and it would be
00:28:40
nice if it worked out that way I I don't
00:28:42
know if that's
00:28:45
like but let's walk through the
00:28:47
protein modeling
00:28:49
example there's a bunch of training data
00:28:52
images of proteins and then sequence
00:28:55
data and they build a model predictive
00:28:57
model and they have a set of processes
00:28:59
and steps for doing that do you envision
00:29:01
that there's this artificial general
00:29:04
intelligence or this great reasoning
00:29:05
model that then figures out how to build
00:29:07
that submodel that figures out how to
00:29:08
solve that problem by acquiring the
00:29:10
necessary data and then resolving there are
00:29:13
so many ways where that could go like
00:29:15
maybe it is it trains a literal model
00:29:17
for it or maybe it just like knows the
00:29:20
one big model where it can like go pick
00:29:22
what other training data it needs and
00:29:24
ask a question and then update on that
00:29:27
um I guess the real question is are all
00:29:29
these startups going to die because so
00:29:31
many startups are working in that
00:29:32
modality which is go get special data
00:29:34
and then train a new model on that
00:29:36
special data from the ground up and then
00:29:38
it only does that one sort of thing and
00:29:40
it works really well at that one thing
00:29:42
and it works better than anything else
00:29:43
at that one thing you know there there's
00:29:44
like a version of this I think you can
00:29:49
like already see when you were when you
00:29:51
were talking about like biology and
00:29:53
these complicated networks of systems
00:29:54
the reason I was smiling I I got super
00:29:56
sick recently and I'm mostly better now
00:30:00
but it was just like body like got beat
00:30:02
up like one system at a time like you
00:30:04
can really tell like okay it's this
00:30:05
cascading thing and uh and that reminded me
00:30:10
of you like talking about the like
00:30:11
biology is just these like you have no
00:30:12
idea how much these systems interact
00:30:14
with each other until things start going
00:30:15
wrong and that was sort of like
00:30:17
interesting to see but I was using
00:30:21
um I was like using ChatGPT uh to try
00:30:24
to like figure out like what was
00:30:26
happening whatever and and it would say
00:30:28
well I'm you know unsure of this one
00:30:29
thing and then I just like pasted a
00:30:31
paper on it without even reading the
00:30:33
paper um like in the context and it says
00:30:36
oh that was the thing I was unsure of
00:30:37
like now I think this instead so there's
00:30:40
like a that was like a small version of
00:30:41
what you're talking about where you can
00:30:44
like can say this I don't I don't know
00:30:46
this thing you can put more information
00:30:47
you don't retrain the model you're just
00:30:48
adding it to the context here and now
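Altman's anecdote is an example of in-context learning: the new evidence (a paper's text) goes into the prompt, not into the model's weights. A minimal sketch of that pattern, assuming a generic chat-style message format — the function names and message layout here are illustrative, not any specific SDK:

```python
# Minimal sketch of in-context updating: instead of retraining, new
# evidence (e.g. a paper's text) is appended to the prompt so the model
# can revise its answer on the next call. The message format below is a
# generic chat-style convention, not a specific vendor API.

def build_messages(question, documents):
    """Assemble a chat prompt; each document is injected as context."""
    messages = [{"role": "system",
                 "content": "Answer using any provided documents."}]
    for i, doc in enumerate(documents, 1):
        messages.append({"role": "user",
                         "content": f"Document {i}:\n{doc}"})
    messages.append({"role": "user", "content": question})
    return messages

# First pass: no documents, the model only has its training knowledge.
before = build_messages("What causes symptom X?", [])
# Second pass: paste in the paper; no weights change, only the context.
after = build_messages("What causes symptom X?",
                       ["Paper text: symptom X is linked to system Y..."])
```

The point of the sketch is that `before` and `after` differ only in the prompt; the same frozen model answers both, which is why Altman could update ChatGPT's conclusion just by pasting the paper in.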
00:30:51
you're getting a so these models that
00:30:53
are predicting protein structure like
00:30:55
let's say right this is the whole basis
00:30:57
and now other molecules at AlphaFold
00:31:01
3 can they can yeah I mean is it
00:31:05
basically a world where the best
00:31:07
generalized model goes in and gets that
00:31:09
training data and then figures out on
00:31:11
its own and maybe you could maybe you
00:31:13
could use an example for us can you tell
00:31:15
us about Sora your video model that
00:31:17
generates amazing moving images moving
00:31:20
video and and what's different about the
00:31:22
architecture there whatever you're
00:31:24
willing to share on how how that is
00:31:27
different
00:31:28
yeah so my on the general thing first
00:31:34
my you clearly will need
00:31:38
specialized simulators connectors pieces
00:31:42
of data whatever but my
00:31:45
intuition and again I don't have this
00:31:47
like backed up with science my intuition
00:31:49
would be if we can figure out the core
00:31:51
of generalized reasoning connecting that
00:31:53
to new problem domains in the same way
00:31:56
that humans are generalized
00:31:59
reasoners would I think be doable it's like
00:32:02
a fast unlock faster unlock than I think
00:32:05
I I think so
00:32:10
um but yeah Sora like does not start
00:32:13
with a language model um it's that
00:32:15
that's a model that is like customized
00:32:17
to do video uh and and so like we're
00:32:21
clearly not at that world yet right so
00:32:24
you guys so just as an example for you
00:32:26
guys to build a good
00:32:28
video model you built it from scratch
00:32:30
using I'm assuming some different
00:32:33
architecture and different data but in
00:32:36
the future the generalized reasoning
00:32:39
system the the AGI whatever system
00:32:42
theoretically could render that by
00:32:44
figuring out how to do it yeah I mean
00:32:46
one example of this is like okay you
00:32:48
know as far as I know all the best text
00:32:51
models in the world are still
00:32:52
autoregressive models and the best image and
00:32:54
video models are diffusion models and
00:32:56
that's like sort of strange in some sense
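The contrast Altman draws can be made concrete: an autoregressive model emits one token at a time, each conditioned on what came before, while a diffusion model refines a whole output jointly over many denoising steps. A toy sketch of the autoregressive loop — the bigram table is invented purely for illustration:

```python
# Toy illustration of autoregressive generation: each token is chosen
# conditioned on the preceding context (here, just the previous token),
# one step at a time. A diffusion model would instead refine the entire
# output jointly. The bigram table below is invented for illustration.

BIGRAMS = {
    "<s>": "the", "the": "model", "model": "predicts",
    "predicts": "tokens", "tokens": "<end>",
}

def generate(max_len=10):
    tokens, current = [], "<s>"
    for _ in range(max_len):
        current = BIGRAMS[current]   # next token depends only on context
        if current == "<end>":       # stop token ends the sequence
            break
        tokens.append(current)
    return " ".join(tokens)

print(generate())  # -> "the model predicts tokens"
```

The left-to-right dependency in this loop is the structural property shared by the text models Altman mentions; image and video diffusion models have no such fixed generation order.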
00:32:59
yeah yeah so there's a big debate about
00:33:04
uh training data you guys have been I
00:33:06
think the most thoughtful of any company
00:33:08
you've got licensing deals now FT etc
00:33:11
and we've got to be gentle here because
00:33:14
you're involved in a New York Times
00:33:16
lawsuit you weren't able to settle I
00:33:17
guess an arrangement with them for
00:33:19
training data how do you think about
00:33:23
fairness in fair
00:33:24
use we've had big debates here on the
00:33:27
pod
00:33:28
obviously your actions are you know
00:33:30
speak volumes that you're trying to be
00:33:32
fair by doing licensing deals so what
00:33:35
what's your personal position on the
00:33:37
rights of artists who create beautiful
00:33:40
music lyrics books and you taking that
00:33:45
and then making a derivative product out
00:33:46
of it and and then monetizing it and and
00:33:49
what's fair here and how do we get to a
00:33:52
world where you know artists can make
00:33:55
content in the world and then decide
00:33:57
what they want other people to do with
00:33:59
it yeah and and I'm just curious your
00:34:01
personal belief because I know you to be
00:34:02
a thoughtful person on this and I know a
00:34:04
lot of other people in our industry are
00:34:07
not very thoughtful about how they think
00:34:08
about content
00:34:10
creators so I think it's very different
00:34:12
for different kinds of I mean look on
00:34:14
fair use I think we have a very
00:34:16
reasonable position under the current
00:34:18
law but I think AI is so different that
00:34:22
for things like art we'll need to think
00:34:23
about them in different ways but I would
00:34:25
say if you go read a bunch of math on
00:34:29
the internet and learn how to do math
00:34:34
that I think seems unobjectionable to
00:34:37
most people and then there's like you
00:34:38
know another set of people who might
00:34:40
have a different opinion well what if
00:34:41
you
00:34:45
like actually let me not get into that
00:34:47
just in the interest of not making this
00:34:48
answer too long so I think there's like
00:34:50
one category people are like okay
00:34:51
there's like generalized human knowledge
00:34:54
you can kind of like go if you learn
00:34:56
that like that's
00:34:58
that that's like open domain or
00:34:59
something if you kind of go learn about
00:35:01
the Pythagorean theorem um that's one
00:35:04
end of the spectrum and then I think the
00:35:06
other extreme end of the spectrum is um
00:35:10
is Art and maybe even like more than
00:35:14
more specifically I would say it's like
00:35:16
doing it's a system generating art in
00:35:19
the style or the likeness of another
00:35:21
artist um would be kind of the furthest
00:35:24
end of
00:35:25
that and then there's many many cases on
00:35:28
the Spectrum in between
00:35:31
uh I think the conversation has been
00:35:33
historically very caught up on training
00:35:36
data but it will increasingly become
00:35:38
more about what happens at inference
00:35:40
time as training data
00:35:43
becomes less valuable and the what the
00:35:48
system does accessing you know
00:35:52
information in in context in real time
00:35:55
or uh you know taking like
00:35:58
something like that what happens at
00:36:00
inference time will become more debated
00:36:02
and and how the what the new economic
00:36:04
model is there so if you say like
00:36:08
uh if you say like create me a song in
00:36:11
this in the style of Taylor
00:36:13
Swift even if the model were never
00:36:16
trained on any Taylor Swift songs at all
00:36:19
you can still have a problem which is it
00:36:21
may have read about Taylor Swift it may
00:36:22
know about her themes Taylor Swift means
00:36:24
something and then and then the question
00:36:26
is like should that model even if it were never
00:36:28
trained on any Taylor Swift song
00:36:30
whatsoever be allowed to do that and if
00:36:33
so um how should Taylor get paid right
00:36:38
so I think there's an opt-in opt-out in
00:36:40
that case first of all and then there's
00:36:41
an economic model um staying on the
00:36:43
music example there is something
00:36:45
interesting to look at from the
00:36:48
historical perspective here which is uh
00:36:50
sampling and how the economics around
00:36:52
that work this is not quite the same
00:36:54
thing but it's like an interesting place
00:36:55
to start looking Sam let me just
00:36:56
challenge that what's the difference in
00:36:59
the example you're giving of the model
00:37:01
learning about things like song
00:37:03
structure Tempo Melody Harmony
00:37:06
relationships all the discovering all
00:37:09
the underlying structure that makes
00:37:10
music successful and then building new
00:37:13
music using training data and what a
00:37:16
human does that listens to lots of music
00:37:19
learns about and and their brain is
00:37:21
processing and building all those same
00:37:23
sort of predictive models or those same
00:37:25
sort of uh discoveries understandings
00:37:28
what's the difference here and why why
00:37:30
are you making the case that perhaps
00:37:33
artists should be uniquely paid this is
00:37:36
not a sampling situation you're not the
00:37:37
AI is not outputting and it's not
00:37:39
storing in the model the actual original
00:37:41
song it's learning structure right I
00:37:44
wasn't trying to make that that point
00:37:46
because I agree like in the same way
00:37:47
that humans are inspired by other humans
00:37:50
I was saying if you if you say generate
00:37:51
me a song in the style of Taylor Swift I
00:37:54
see right okay where the prompt
00:37:56
leverages some artist I I think
00:37:59
personally that's a different case would
00:38:01
you be comfortable asking or would you
00:38:03
be comfortable letting the model train
00:38:05
itself well a music model being trained
00:38:08
on the whole Corpus of music that humans
00:38:09
have created without royalties being
00:38:12
paid to the artists that um that music
00:38:15
is being fed in and then you're not
00:38:17
allowed to ask you know artists specific
00:38:19
prompts you could just say hey
00:38:20
play me a really cool pop song that's
00:38:23
fairly modern about heartbreak you know
00:38:25
with a female voice you know we have
00:38:27
currently made the decision not to do
00:38:30
music and partly because exactly these
00:38:32
questions of where you draw the lines
00:38:34
and you know what like
00:38:37
even I was meeting with several
00:38:39
musicians I really admire recently I was
00:38:41
just trying to like talk about some of
00:38:42
these edge cases but even the world in
00:38:46
which if
00:38:48
we went and let's say we paid 10,000
00:38:53
musicians to create a bunch of music
00:38:55
just to make a great training set where
00:38:57
the music model could learn
00:38:58
everything about song structure um
00:39:03
and what makes a good catchy beat and
00:39:06
everything else um and only trained on
00:39:09
that let's say we could still make a
00:39:10
great music model which maybe maybe we
00:39:11
could um you know I was kind of like
00:39:14
posing that as a thought experiment to
00:39:15
musicians and they're like well I can't
00:39:17
object to that on any principled basis at
00:39:18
that point um and yet there's still
00:39:21
something I don't like about it now
00:39:22
that's not a reason not to do it um
00:39:25
necessarily but it is
00:39:28
did you see that ad that Apple put out
00:39:30
maybe it was yesterday or something of
00:39:32
like squishing all of human creativity
00:39:34
down into one really thin iPad what was
00:39:35
your take on
00:39:37
it uh people got really emotional about
00:39:40
it yeah it's a stronger reaction than you
00:39:42
would think yeah there's something
00:39:47
about I'm obviously hugely positive on
00:39:50
AI but there is something that I think
00:39:53
is beautiful about human creativity and
00:39:55
human artistic expression and and you
00:39:57
know for an AI that just does better
00:39:59
science like great bring that on but an
00:40:01
AI that is going to do this like deeply
00:40:03
beautiful human creative expression I
00:40:05
think we should like figure out it's
00:40:08
going to happen it's going to be a tool
00:40:10
that will lead us to Greater creative
00:40:12
Heights but I think we should figure out
00:40:13
how to do it in a way that like
00:40:15
preserves the spirit of what we all care
00:40:17
about here and I I think your actions
00:40:20
speak loudly we were trying to do Star
00:40:25
Wars characters in DALL-E
00:40:27
and if you ask for Darth Vader it says
00:40:30
hey we can't do that so you've I guess
00:40:31
red-teamed or whatever you call it
00:40:34
internally yeah you're not allowing
00:40:36
people to use other people's IP so
00:40:38
you've taken that decision now if you
00:40:40
asked it to make a Jedi Bulldog or a
00:40:42
Sith Lord Bulldog which I did it made my
00:40:45
bulldogs Sith bulldogs so there's an
00:40:47
interesting question about your spectrum
00:40:49
right yeah you know we put out this
00:40:50
thing yesterday called the spec um where
00:40:53
we're trying to say here are here's
00:40:56
here's how our model is supposed to
00:40:58
behave and it's very hard it's a long
00:41:01
document it's very hard to like specify
00:41:03
exactly in each case where the limit
00:41:05
should be and I view this as like a
00:41:07
discussion that's going to need a lot
00:41:08
more input um but but these sorts of
00:41:12
questions
00:41:14
about okay maybe it shouldn't generate
00:41:16
Darth Vader but the idea of a Sith Lord
00:41:18
or a Sith style thing or Jedi at this
00:41:20
point is like part of the culture like
00:41:22
like these are these are all hard
00:41:25
decisions yeah and and I think you're
00:41:27
right the music industry is going to
00:41:29
consider this opportunity to make Taylor
00:41:31
Swift songs their opportunity it's part
00:41:33
of the four-part fair use test is you
00:41:36
know these who gets to capitalize on new
00:41:39
Innovations for existing art and and
00:41:41
Disney has an argument that hey you know
00:41:44
if if you're gonna make Sora versions of
00:41:47
Ahsoka or whatever Obi-Wan Kenobi that's
00:41:49
Disney's opportunity and that's a great
00:41:51
partnership for
00:41:53
you you know to pursue so we're I think
00:41:56
this section I would label as AI in the
00:41:58
law so let me ask maybe a higher level
00:42:01
question what does it mean when people
00:42:04
say regulate AI totally Sam what does it
00:42:07
what does that even mean and comment on
00:42:09
California's new proposed regulations as
00:42:12
well a few if you're up for it uh I'm
00:42:15
concerned I mean there's so many
00:42:17
proposed regulations but most of the
00:42:18
ones I've seen on the California state
00:42:20
things I'm concerned about I also have a
00:42:22
general fear of the states all doing
00:42:24
this themselves um when people say
00:42:27
regulate AI I don't think they mean one
00:42:30
thing I think there's like some people
00:42:31
are like ban the whole thing some people
00:42:33
like don't allow it to be open source
00:42:35
some say it's required to be open source um the thing
00:42:37
that I am personally most interested in
00:42:40
is I think there will
00:42:43
come look I may be wrong about this I
00:42:45
will acknowledge that this is a
00:42:46
forward-looking statement and those are
00:42:48
always dangerous to make but I think
00:42:49
there will come a time in the not super
00:42:52
distant future like you know we're not
00:42:53
talking like decades and decades from
00:42:55
now where AI say the frontier AI systems
00:42:58
are capable of causing
00:43:02
significant Global harm and for those
00:43:06
kinds of systems in the same way we have
00:43:08
like Global oversight of nuclear weapons
00:43:11
or synthetic bio or things that can
00:43:13
really like have a very negative impact
00:43:15
Way Beyond the realm of one country uh I
00:43:18
would like to see some sort of
00:43:20
international agency that is looking at
00:43:22
the most powerful systems and ensuring
00:43:24
like reasonable safety testing you know
00:43:26
these things are not going to
00:43:28
escape and recursively self-improve or
00:43:30
whatever the
00:43:32
criticism of this is that you're you
00:43:35
have the resources to cozy up to lobby
00:43:39
to be involved and you've been very
00:43:40
involved with politicians and then
00:43:42
startups which you're also passionate about
00:43:44
and invest in um are not going to have
00:43:46
the ability to Resource uh and deal with
00:43:49
this and that's regulatory capture as
00:43:51
per our friend you know Bill Gurley did
00:43:53
a great talk last year about it so maybe
00:43:55
you could address that headon do do you
00:43:57
if the line were we're only going to
00:43:59
look at models that are trained on
00:44:01
computers that cost more than 10 billion
00:44:03
or more than 100 billion or whatever
00:44:04
dollars I'd be fine with that there'd be
00:44:07
some line that'd be fine and uh I don't
00:44:10
think that puts any regulatory burden on
00:44:11
startups so if you have like the the
00:44:14
nuclear raw material to make a nuclear
00:44:16
bomb like there's a small subset of
00:44:18
people who have that therefore you use
00:44:19
the analogy of like a a nuclear
00:44:21
inspectors kind of situation yeah I
00:44:23
think that that's interesting sax you
00:44:26
have a question well go ahead you had a
00:44:28
follow-up can I say one more thing about
00:44:30
that of course I'd be super nervous
00:44:33
about regulatory overreach here I think
00:44:34
we can get this wrong by doing way too
00:44:36
much or even a little too much I think
00:44:38
we can get this wrong by doing not
00:44:39
enough
00:44:41
but but I do think part of and
00:44:45
I and now I mean you know we have seen
00:44:49
regulatory overstepping or capture just
00:44:52
get super bad in other areas um and you
00:44:56
know like also maybe nothing will happen
00:44:58
but but I think it is part of our duty
00:45:00
and our mission to like talk about what
00:45:03
we believe is likely to happen and what
00:45:06
it takes to get that right the challenge
00:45:08
Sam is that we have statute that is
00:45:11
meant to protect people protects Society
00:45:14
at large what we're creating however is a
00:45:17
statute that gives the government rights
00:45:20
to go in and audit code to audit
00:45:24
business um trade Secrets uh we've never
00:45:29
seen that to this degree before
00:45:31
basically the California legislation
00:45:33
that's proposed and some of the federal
00:45:34
legislation that's been proposed
00:45:36
basically requires the federal
00:46:39
government to audit a model to audit
00:45:41
software to audit and review the
00:45:43
parameters and the weightings of the
00:45:44
model and then you need their check mark
00:45:47
in order to deploy it for commercial or
00:45:51
public use and for me it just feels like
00:45:55
we're trying to
00:45:57
hand the reins to the government agencies
00:46:00
for fear and and because folks have a
00:46:03
hard time understanding this and are
00:46:04
scared about the implications of it they
00:46:06
want to control it and because they want
00:46:08
and the only way to control it is to say
00:46:10
give me a right to audit before you can
00:46:11
release it I guess are these people clueless I
00:46:14
mean the way that the stuff is
00:46:15
written you read it you're like gonna
00:46:16
pull your hair out because as you know
00:46:18
better than anyone in 12 months none of
00:46:20
this stuff's going to make sense anyway
00:46:21
totally right look the reason I have
00:46:23
pushed for an agency-based approach
00:46:26
for kind of like the big picture
00:46:28
stuff and not like write it in laws I
00:46:30
don't in 12 months it will all be
00:46:32
written wrong and I don't think even if
00:46:35
these people were like True World
00:46:38
experts I don't think they could get it
00:46:39
right looking at 12 or 24 months um and
00:46:43
I don't like these policies which is like
00:46:45
we're going to look at you know we're
00:46:47
going to audit all of your source code
00:46:48
and like look at all of your weights one
00:46:50
by one like yeah I think there's a lot
00:46:53
of crazy proposals out there um by the
00:46:55
way especially if the models are always
00:46:57
retrained all the time if they become
00:46:58
more Dynamic again this is why I think
00:47:00
it's yeah but but like when before an
00:47:03
airplane gets certified there's like a
00:47:05
set of safety tests we put the airplane
00:47:07
through it um and totally it's different
00:47:10
than reading all of your code that's
00:47:12
reviewing the output of the model not
00:47:14
reviewing the insides of the model and
00:47:17
and so what I was goingon to say is I
00:47:18
that is the kind of thing that I think
00:47:21
as safety testing makes sense how are we
00:47:24
going to get that to happen Sam and I'm
00:47:26
not just speaking for open AI I speak
00:47:28
for the industry for for Humanity
00:47:29
because I am concerned that we draw
00:47:32
ourselves into almost like a dark ages
00:47:34
type of era by restricting the growth of
00:47:37
these incredible technologies that
00:47:39
humanity can
00:47:41
prosper from so significantly how do we
00:47:43
change the the sentiment and get that to
00:47:45
happen because this is all moving so
00:47:46
quickly at the government levels and
00:47:48
folks seem to be getting it wrong and
00:47:50
I'm I'm just to build on that Sam the
00:47:53
architectural decision for example that
00:47:55
llama took is pretty interesting in that
00:47:58
it's like we're going to let llama grow
00:48:01
and be as unfettered as possible and we
00:48:02
have this other kind of thing that we
00:48:04
call Llama guard that's meant to be
00:48:06
these protective guard rails is that how
00:48:08
you see the problem being solved
00:48:10
correctly or do you see at the current
00:48:12
at the current strength of models
00:48:15
definitely some things are going to go
00:48:16
wrong and I don't want to like make
00:48:18
light of those or not take those
00:48:20
seriously but I'm not like I don't have
00:48:22
any like catastrophic risk worries with
00:48:25
a GPT-4 level model um and I think there's
00:48:29
many safe ways to choose to deploy this
00:48:34
uh may maybe we'd find more common
00:48:37
ground if we said that uh and I like you
00:48:40
know the specific example of models that
00:48:44
are capable that are technically capable
00:48:46
not even if they're not going to be used
00:48:48
this way of recursive
00:48:51
self-improvement um or
00:48:54
of you know autonomously
00:48:57
designing and deploying a bioweapon or
00:49:00
something like that or a new model that
00:49:03
was the recursive self-improvement Point
00:49:05
um you know we should have safety
00:49:08
testing on the outputs at an
00:49:10
international level for models that you
00:49:12
know have a reasonable chance of of
00:49:14
posing a threat there uh I don't think
00:49:17
like GPT-4 of course does
00:49:22
not pose any sort of well I don't want to say
00:49:26
any sort cuz we don't know yeah I don't think
00:49:29
GPT-4 poses a material threat on those
00:49:31
kinds of things and I think there's many
00:49:33
safe ways to release a model like this
00:49:35
um but you know when like significant
00:49:39
loss of human life is a serious
00:49:43
possibility like airplanes
00:49:45
or any number of other examples where I
00:49:48
think we're happy to have some sort of
00:49:49
testing framework like I don't think
00:49:51
about an airplane when I get on it I
00:49:52
just assume it's going to be safe right
00:49:55
right there's a lot of hand-wringing
00:49:56
right now Sam about
00:49:58
jobs and you had a lot of I think you
00:50:00
did like some sort of a test when you
00:50:02
were at YC about UBI and will the
00:50:04
results of that come out very soon I just
00:50:06
it was a five-year study that wrapped up
00:50:09
um or started five years ago well there
00:50:12
was like a beta study first and then it
00:50:13
was like a long one that ran but uh can
00:50:16
you explain yeah why did you start why'd
00:50:17
you start it maybe just explain UBI and
00:50:19
why you started it um so we started
00:50:22
thinking about this in 2016 uh kind of
00:50:25
about the same time started taking AI
00:50:27
really seriously and the theory was that
00:50:31
the magnitude of the change that may
00:50:34
come to society
00:50:37
and jobs in the economy and and sort of
00:50:40
in some deeper sense than that like what
00:50:41
the social contract looks like
00:50:45
um meant that we should have many
00:50:47
studies to study many ideas about new
00:50:50
new ways to
00:50:52
arrange that um I also think that you
00:50:56
know I'm not like a super fan of how the
00:50:58
government has handled most policies
00:51:01
designed to help poor people and I kind
00:51:03
of believe that if you could just give
00:51:06
people money they would make good
00:51:08
decisions and the market would do its
00:51:09
thing and you know I'm very much in
00:51:12
favor of lifting up the floor and
00:51:15
reducing eliminating poverty um but I'm
00:51:18
interested in better ways to do that
00:51:19
than what we have tried for the existing
00:51:22
social safety net and and kind of the
00:51:24
way things have been handled and I think
00:51:26
giving people money is not going to solve
00:51:29
all problems it's certainly not going to
00:51:30
make people happy but it
00:51:33
might it might solve some problems and
00:51:36
it might give people a better Horizon
00:51:40
with which to help themselves and I'm
00:51:42
interested in that I I think
00:51:44
that now that we see some of the ways so
00:51:48
2016 was a very long time ago uh you
00:51:50
know now that we see some of the ways
00:51:52
that AI is developing I wonder if
00:51:54
there's better things to do than the
00:51:58
traditional um conceptualization of UBI
00:52:02
uh like I
00:52:03
wonder I wonder if the future looks
00:52:05
something like more like Universal basic
00:52:07
compute than Universal basic income and
00:52:09
everybody gets like a slice of GPT-7
00:52:12
compute and they can use it they can
00:52:14
resell it they can donate it to somebody
00:52:16
to use for cancer research but but what
00:52:18
you get is not dollars but this like
00:52:21
productivity slice yeah you own like
00:52:22
part of the productivity right I would
00:52:24
like to shift to the gossip part of this
00:52:27
gossip what gossip let's go back let's
00:52:30
go back to November what the flying
00:52:35
happened um you know I I if you have
00:52:39
specific questions I'm happy to maybe
00:52:41
I'll answer maybe you said you were
00:52:43
going to talk about it at some point so
00:52:44
here's the point what the hell happened
00:52:47
you were fired you came back it was a
00:52:49
palace Intrigue did somebody stab you in
00:52:52
the back did you find AGI what's going
00:52:54
on tell us this is a safe face
00:52:57
Sam um I was fired I
00:53:02
was I talked about coming back I kind of
00:53:05
was a little bit unsure at the moment
00:53:07
about what I wanted to do because I was
00:53:08
very upset um and I realized that I
00:53:14
really loved OpenAI and the people and
00:53:17
that I would come back and I kind of I
00:53:20
knew it was going to be hard it was even
00:53:22
harder than I thought but I I kind of
00:53:25
was like all right fine um I agreed to
00:53:27
come back um the board like took a while
00:53:30
to figure things out and then uh you
00:53:33
know we were kind of like trying to keep
00:53:35
the team together and keep doing things
00:53:38
for our customers and uh you know sort
00:53:40
of started making other plans then the
00:53:41
board decided to hire a different
00:53:43
interim CEO um and then
00:53:46
everybody there many people oh my gosh
00:53:49
what was what was that guy's name he was
00:53:50
there for like a Scaramucci right like
00:53:53
uh Emmett great and I I have nothing but
00:53:56
good
00:53:58
scari um and then where were you when
00:54:02
they um when you found the news that
00:54:04
you'd been fired like taking I was in a
00:54:07
hotel room in Vegas for F1 weekend I
00:54:10
think text and they're like fire pick up
00:54:13
said I think that's happened to you
00:54:14
before J I'm trying to think if I ever
00:54:17
got fired I don't think I've gotten
00:54:18
fired um yeah I got no it's just a weird
00:54:21
thing like it's a text from who actually
00:54:23
no I got a text the night before and
00:54:24
then I got in a phone call with the
00:54:27
uh and then that was that and then I
00:54:28
kind of like I mean then everything went
00:54:31
crazy I was like uh it was
00:54:35
like I mean I have my phone was like
00:54:37
unusable it was just a Non-Stop
00:54:39
vibrating thing of like text messages
00:54:41
call basically you got fired by tweet
00:54:44
that happened a few times during the
00:54:45
Trump Administration a few uh they texted
00:54:49
before tweeting was nice of them um and
00:54:52
then like you know I kind of did like a
00:54:54
few hours of just this like absolute
00:54:56
State um in the hotel room trying to
00:55:01
like I was just confused beyond belief
00:55:03
trying to figure out what to do and uh
00:55:05
so weird and then
00:55:07
like flew home it maybe like got on a
00:55:12
plane like I don't know 3 p.m. or
00:55:13
something like that um still just like
00:55:16
you know crazy non-stop phone blowing up
00:55:19
uh met up with some people in person by
00:55:21
that evening I was like okay you know
00:55:24
I'll just like go do AGI research and
00:55:27
was feeling pretty happy about the
00:55:29
future and yeah you have options and
00:55:32
then and then the next morning uh had
00:55:35
this call with a couple of board members
00:55:36
about coming back and that led to a few
00:55:40
more days of craziness and then
00:55:44
uh and then it kind of I think it got
00:55:48
resolved well it was like a lot of
00:55:50
insanity in between what percent what
00:55:53
percent of it was because of these
00:55:54
nonprofit board members
00:55:57
um well we only have a nonprofit board
00:55:59
so it was all the nonprofit board
00:56:01
members uh there the board had gotten
00:56:03
down to six people um
00:56:07
they and then they removed Greg from the
00:56:10
board and then fired me um so but it was
00:56:14
like you know but I mean like was there
00:56:16
a culture clash between the people on
00:56:17
the board who had only nonprofit
00:56:19
experience versus the people who had
00:56:21
startup experience and maybe you can
00:56:23
share a little bit about if you're
00:56:24
willing to the motivation behind the
00:56:26
action anything you
00:56:28
can I think there's always been culture
00:56:31
clashes
00:56:34
at look
00:56:36
obviously not all of those board members
00:56:39
are my favorite people in the world but
00:56:41
I
00:56:43
have serious respect
00:56:46
for the gravity with which they treat
00:56:50
AGI and the importance of getting AI
00:56:53
safety right and even if I
00:56:56
stringently disagree with their
00:56:59
decision-making and actions which I do
00:57:02
um I have never once doubted
00:57:05
their integrity or commitment to um the
00:57:11
sort of shared mission of safe and
00:57:12
beneficial
00:57:13
AGI um you know do I think they like
00:57:17
made good decisions in the process of
00:57:19
that or kind of know how to balance all
00:57:21
of things OpenAI has to get right no
00:57:24
but but I think that like the intent the
00:57:27
intent of the magnitude of yeah
00:57:32
AGI and getting that right I actually
00:57:36
let me ask you about that so the mission
00:57:38
of OpenAI is explicitly to create AGI
00:57:41
which I think is really
00:57:43
interesting a lot of people would say
00:57:46
that if we create AGI that would be like
00:57:49
an unintended consequence of something
00:57:51
gone horribly wrong and they're very
00:57:53
afraid of that outcome but OpenAI makes
00:57:56
that the actual Mission yeah does that
00:58:00
create like more fear about what you're
00:58:02
doing I mean I understand it can create
00:58:03
motivation too but how do you reconcile
00:58:06
that I guess why is I think a lot of I
00:58:08
think a lot of the
00:58:10
well I mean first I'll say I'll answer
00:58:12
the first question and the second one I
00:58:14
think it does create a great deal of
00:58:15
fear uh I think a lot of the world is
00:58:19
understandably very afraid of AGI or
00:58:21
very afraid of even current Ai and and
00:58:23
very excited about it and even more
00:58:26
afraid and even more excited about where
00:58:28
it's going um and
00:58:31
we we wrestle with that but like I think
00:58:35
it is unavoidable that this is going to
00:58:37
happen I also think it's going to be
00:58:39
tremendously beneficial but we do have
00:58:41
to navigate how to get there in a
00:58:42
reasonable way and like a lot of stuff
00:58:45
is going to change and change is you
00:58:47
know pretty pretty uncomfortable for
00:58:49
people so there's a lot of
00:58:52
pieces that we got to get right and ask
00:58:56
can I ask a different question you you
00:58:58
have
00:59:00
created I
00:59:01
mean it's the hottest company and you
00:59:04
are literally at the center of the
00:59:06
center of the
00:59:07
center but then it's so unique in the
00:59:12
sense that all of this value you eschewed
00:59:15
economically can you just like walk us
00:59:17
through like yeah I wish I had taken I
00:59:19
wish I had taken Equity so I never had
00:59:21
to answer this question if I could go
00:59:23
back in why don't they give you a grant
00:59:24
now or just give you a big option grant
00:59:27
like you deserve yeah give you five
00:59:29
points what was the decision back then
00:59:31
like why was that so important the
00:59:33
decision back then the re the original
00:59:34
reason was just like the structure of
00:59:36
our nonprofit it was uh like there was
00:59:39
something about yeah okay this is like
00:59:43
nice from a motivations perspective but
00:59:45
mostly it was that our board needed to
00:59:47
be a majority of disinterested
00:59:49
directors and I was like that's fine I
00:59:51
don't need Equity right now I kind
00:59:54
of but like but in this weird way now
00:59:57
that you're running a company yeah it it
00:59:59
creates these weird questions of like
01:00:01
well what's your real motivation versus
01:00:03
to that's that it is so deeply un I one
01:00:07
thing I have noticed it is is so deeply
01:00:10
unimaginable to people to say I don't
01:00:12
really need more
01:00:13
money like and I well people think I
01:00:16
think I think people think it's a little
01:00:18
bit of an ulterior motive I think yeah
01:00:20
yeah yeah no so it assumes what else is
01:00:22
he doing on the side to make money
01:00:24
something if I were just trying to say
01:00:26
like I'm going to try to make a trillion
01:00:27
dollars with OpenAI I think everybody
01:00:29
would have an easier time and it would
01:00:30
save me it would save a lot of
01:00:32
conspiracy theories totally this is
01:00:34
totally the back Channel you are a great
01:00:37
dealmaker I I've watched your whole
01:00:39
career I mean you're just great at it
01:00:42
you got all these connections you're
01:00:43
really good at raising money uh you're
01:00:47
fantastic at it and you got this Johnny
01:00:49
I thing going you're inhumane you're
01:00:51
investing in company she got the orb
01:00:54
raising $7 trillion to build
01:00:56
Fabs all this stuff all of that put
01:01:00
together J loves fake news I'm kind of
01:01:03
being a little fous here you know
01:01:04
obviously it's not you're not raising 7
01:01:05
trillion dollars but maybe that's the
01:01:06
market cap of something putting all that
01:01:08
aside the tea was you're doing all these
01:01:12
deals they don't trust you because
01:01:14
what's your motivation you you your end
01:01:16
running and and what opportunities
01:01:18
belong inside of OpenAI what opportunities
01:01:21
should be Sam's and this group of
01:01:23
nonprofit people didn't trust you is
01:01:25
that what happened so the things like
01:01:27
you know device companies or if we were
01:01:29
doing some chip Fab Company it's like
01:01:31
those are not Sam projects those would be
01:01:33
like OpenAI would get that equity um
01:01:36
they would okay that's not the Public's
01:01:38
perception well that's not like kind of
01:01:40
the people like you who have to like
01:01:42
commentate on the stuff all day
01:01:43
perception which is fair because we
01:01:44
haven't announced the stuff because it's
01:01:45
not done I don't think most people in
01:01:47
the world like are thinking about this
01:01:49
but I I I agree it spins up a lot of
01:01:53
conspiracies conspiracy theories in like
01:01:56
Tech commentators yeah and if I could go
01:01:59
back yeah I would just say like let me
01:02:01
take equity and make that super clear
01:02:03
and then every be like all right like
01:02:05
I'd still be doing it because I really
01:02:06
care about AGI and think this is like
01:02:08
the most interesting work in the world
01:02:10
but it would at least type check to
01:02:11
everybody what's the chip project that
01:02:14
the $7 trillion and where'd the seven
01:02:16
trillion number come from makes no sense
01:02:17
I don't know where that came from
01:02:18
actually I genuinely don't uh I think I
01:02:21
think the world needs a lot more AI
01:02:23
infrastructure a lot more than it's
01:02:26
currently planning to build and with a
01:02:28
different cost structure um the exact
01:02:31
way for us to play there is we're still
01:02:35
trying to figure that out got it what's
01:02:36
your preferred model of organizing
01:02:38
OpenAI is
01:02:40
it sort of like the move fast break
01:02:43
things highly distributed small teams or
01:02:46
is it more of this organized effort
01:02:48
where you need to plan because you want
01:02:49
to prevent some of these edge cases um
01:02:53
oh I have to go in a minute uh it's not
01:02:55
because
01:02:58
it's not to prevent the edge cases that
01:03:00
we need to be more organized but it is
01:03:01
that these systems are so complicated
01:03:04
and concentrating bets are so important
01:03:07
like
01:03:09
one you know at the time before it was
01:03:11
like obvious to do this you have like
01:03:13
DeepMind or whatever has all these
01:03:15
different teams doing all these
01:03:16
different things and they're spreading
01:03:17
their bets out and you had OpenAI
01:03:19
say we're going to like basically put
01:03:20
the whole company and work together to
01:03:22
make GPT-4 and that was like unimaginable
01:03:25
for how to run an AI research lab but it
01:03:28
is I think what works at the minimum
01:03:30
it's what works for us so not because
01:03:32
we're trying to prevent edge cases but
01:03:34
because we want to concentrate resources
01:03:35
and do these like big hard complicated
01:03:38
things we do have a lot of coordination
01:03:40
on what we work on all right Sam I know
01:03:42
you got to go you've been great on the
01:03:44
hour come back any time great talking to
01:03:46
you guys yeah fun thanks for for being
01:03:49
so open about it we've been talking
01:03:50
about it for like a year plus I'm really
01:03:53
happy it finally happened yeah it's
01:03:54
awesome I really app come back on after
01:03:56
our next like major launch and I'll be
01:03:57
able to talk more directly about some
01:04:00
you got the zoom link same Zoom link
01:04:01
every week just same time same Zoom link
01:04:03
drop time just drop in just put on your
01:04:06
calendar come back to the game come back
01:04:08
to the game in a while I you know I
01:04:11
would love to play poker it has been
01:04:13
forever that would be a lot of fun
01:05:16
remember Chamath when you and I were heads up
01:04:18
and you you had remind me you and I were
01:04:21
heads up and you went all in I had a set
01:04:25
but there was a straight and a flush on
01:04:27
the board and I'm in the tank trying to
01:04:29
figure out if I want to lose this back
01:04:31
when we playing small Stakes it might
01:04:32
have been like 5K pot or something and
01:04:35
then Chamath can't stay out of the pot and
01:04:37
he starts taunting the two of us you
01:04:39
should call you shouldn't call he's
01:04:41
bluffing and I'm like I'm going I'm
01:04:44
trying to figure out if I make the call
01:04:45
here I make the call and uh it was like
01:04:48
uh you had a really good hand and but I
01:04:50
just happened to have a set I think you
01:04:51
had like top pair top kicker or
01:04:52
something but you you made a great move
01:04:54
because the board was so almost like a
01:04:56
bottom set Sam has a great style of
01:04:58
playing which I would call RAM and jam
01:05:00
totally you got to get I don't really
01:05:02
know if you I don't I don't know if you
01:05:03
could say about anybody
01:05:05
else I don't I don't I'm not gonna you
01:05:07
haven't seen Jam play in the last 18
01:05:09
months it's a lot
01:05:10
different much more so much fun now find
01:05:14
find hard to have you played bomb pots I
01:05:18
don't know what that is okay you'll love
01:05:19
it we'll see you is nuts
01:05:22
it's do two boards and congrats
01:05:25
everything honestly than thanks for
01:05:27
coming on and love to have you back when
01:05:29
the next after the big launch sounds
01:05:31
please do cool bye gentlemen some
01:05:34
breaking news here all those projects he
01:05:37
said are part of OpenAI that's
01:05:39
something people didn't know before this
01:05:40
and a lot of confusion
01:05:42
there Chamath what was your major
01:05:44
takeaway from our hour with Sam I think
01:05:47
that these guys are going to be one of
01:05:50
the four major companies okay that
01:05:52
matter in this whole space I think that
01:05:54
that's clear
01:05:56
I think what's still unclear is where is
01:05:58
the economics going to be he said
01:06:00
something very discreet but I thought
01:06:01
was important which is I think he
01:06:04
basically my interpretation is these
01:06:06
models will roughly all be the same but
01:06:08
there's going to be a lot of scaffolding
01:06:11
around these models that actually allow
01:06:14
you to build these apps so in many ways
01:06:15
that is like the open source movement so
01:06:17
even if the model itself is never open
01:06:20
source it doesn't much matter because
01:06:23
you have to pay for the infrastructure
01:06:24
right there's a lot of open Source
01:06:25
software that runs on Amazon you still
01:06:27
pay AWS something so I think the right
01:06:30
way to think about this now
01:06:34
is the models will basically be all
01:06:36
really good and then it's all this other
01:06:38
stuff that you'll have to pay for
01:06:40
interface whoever builds all this other
01:06:43
stuff is going to be in a position to
01:06:46
build a really good business Friedberg he
01:06:48
talked a lot about reasoning it seemed
01:06:50
like that he kept going to reasoning and
01:06:52
away from the language model did you not
01:06:53
did you note that and anything else that
01:06:55
you noted in our hour with yeah I mean
01:06:57
that's a longer conversation because
01:06:58
there is a lot of talk about language
01:07:00
models eventually evolving to be so
01:07:03
generalizable that they can
01:07:07
resolve pretty much like all intelligent
01:07:09
function and so the language model is
01:07:11
the foundational model that that yields
01:07:13
AGI but that's a I think there's a lot
01:07:16
of people that at different schools of
01:07:18
thought on this and how much my my other
01:07:20
takeaway I think is that the I think
01:07:24
what he also seemed to indicate is there's
01:07:27
like so many like we're also enraptured
01:07:30
by llms but there's so many things other
01:07:33
than llms that are being baked and
01:07:35
rolled by him and by other groups and I
01:07:37
think we have to pay some amount of
01:07:39
attention to all those because that's
01:07:41
probably where and I think Friedberg you
01:07:43
tried to go there in your question
01:07:44
that's where reasoning will really come
01:07:45
from is this mixture of experts approach
01:07:48
and so you're going to have to think
01:07:49
multi-dimensionally to reason right we
01:07:52
do that right do I cross the street or
01:07:54
not in this point in time you reason
01:07:56
based on all these multi-inputs and so
01:07:58
there's there's all these little systems
01:08:00
that go into making that decision in
01:08:01
your brain and if you if you use that as
01:08:03
a a simple example there's all this
01:08:05
stuff that has to go into
01:08:07
making some experience being able to
01:08:10
reason intelligently Sacks you went right
01:08:12
there with the corporate structure the
01:08:15
board and uh he he he gave us a lot more
01:08:19
information here what are your thoughts
01:08:21
on the hey you know the chip stuff and
01:08:24
the other stuff I'm working that's all
01:08:26
part of open AI people just don't
01:08:27
realize it and that moment and then you
01:08:30
know your questions to him about Equity
01:08:32
your thoughts on um I'm not sure I was
01:08:35
like the main guy who asked that
01:08:36
question J-Cal but um well no you did
01:08:39
talk about the the nonprofit that the
01:08:41
difference between question about the
01:08:44
clearly was some sort of culture Clash
01:08:46
on the board between the the people who
01:08:49
originated from the nonprofit world and
01:08:50
people who came from the startup world
01:08:52
we don't really know more than that but
01:08:54
there clearly was some sort of culture
01:08:55
clash
01:08:56
I thought one of the a couple of the
01:08:58
other areas that he drew attention to
01:08:59
that were kind of interesting is he
01:09:00
clearly thinks there's a big opportunity
01:09:02
on mobile that goes beyond just like
01:09:05
having you know a ChatGPT app on your
01:09:08
phone or maybe even having like a Siri
01:09:10
on your phone there's clearly something
01:09:12
bigger there he doesn't know exactly
01:09:14
what it is but it's going to require
01:09:16
more inputs it's that you know personal
01:09:18
assistant that's seeing everything
01:09:20
around you help really I think that's a
01:09:22
great Insight David because he was
01:09:24
talking about hey I'm looking for a
01:09:27
senior team member who can push back on
01:09:29
me and understands all context I thought
01:09:30
that was like a very interesting to
01:09:32
think about an executive assistant or an
01:09:35
assistant that has executive function
01:09:38
as opposed to being like just an alter
01:09:40
ego for you or what he called a
01:09:42
sycophant that's kind of interesting I
01:09:44
thought that was interesting yeah yeah
01:09:45
and clearly he thinks there's a big
01:09:47
opportunity in biology and scientific
01:09:49
discovery after the break I think we
01:09:51
should talk about AlphaFold 3 it was
01:09:52
just announced let's do that and we can
01:09:54
talk about the the Apple ad and
01:09:56
I just want to also make sure people
01:09:57
understand when people come on the Pod
01:09:59
we don't show them questions they don't
01:10:00
edit the transcript nothing is out of
01:10:03
bounds if you were wondering why I
01:10:05
didn't ask or we didn't ask about the
01:10:06
Elon lawsuit he's just not going to be
01:10:08
able to comment on that so it' be no
01:10:10
comment so you know and we're not like
01:10:12
our time was limited and there's a lot
01:10:14
of questions that we could have asked
01:10:15
him that would have just been a waste of
01:10:16
time and com so I just want to make sure
01:10:19
people understand of course he's going
01:10:20
to no comment in any lawsuit and he's
01:10:22
already been asked about that 500 times
01:10:25
yes should we take a quick break before
01:10:26
the next before we come back yeah I'll
01:10:27
take a bio break and then we'll come
01:10:28
back with some news for you and some
01:10:30
more banter with your
01:10:33
favorite besties on the number one
01:10:36
podcast in the world the only podcast
01:10:38
all right welcome back everybody second
01:10:40
half of the show great guest Sam Altman
01:10:41
thanks for coming on the Pod we've got a
01:10:44
bunch of news on the docket so let's get
01:10:46
started Friedberg you told me I could
01:10:49
give some names of uh the guests that
01:10:51
we've booked for the all in Summit I did
01:10:54
not you did you've said each week every
01:10:56
week that I get to say I did not I
01:10:59
appreciate your interest in the All-In
01:11:02
Summit's lineup but we do not yet
01:11:05
have uh enough critical mass uh to feel
01:11:08
like we should go out there well uh I am
01:11:11
a loose cannon so I will announce my
01:11:13
two guests and I created The Summit and
01:11:16
you took it from me so and done a great
01:11:18
job I will announce my guests I don't
01:11:20
care what your opinion is I have booked
01:11:22
two guests for the summit and it's going
01:11:24
to be out look at these two guests I
01:11:27
booked for the third time coming back to
01:11:29
the summit our guy Elon Musk will be
01:11:31
there hopefully in person if not you
01:11:33
know from 40,000 feet on a Starlink
01:11:35
connection wherever he is in the world
01:11:36
and for the first time our friend Mark
01:11:40
Cuban will be coming and so two great
01:11:43
guests for you to look forward to but
01:11:45
Friedberg's got like a thousand guests
01:11:46
coming he'll tell you when it's like 48
01:11:49
hours before the conference but yeah two
01:11:51
great speaking of billionaires who are
01:11:52
coming isn't coming too yes coming yes
01:11:55
he's booked so we have three
01:11:57
billionaires three billionaires yes okay
01:12:00
hasn't fully confirmed so don't okay
01:12:01
well we're going to say it anyway has
01:12:03
penciled in back we say penciled yeah
01:12:06
don't back out this is going to be
01:12:08
catnip for all these protest organizers
01:12:10
like if
01:12:11
you poke the bear well by the way speaking
01:12:14
of updates what do you guys think of the
01:12:17
bottle for the all-in tequila oh
01:12:19
beautiful honestly honestly I will just
01:12:21
say I think you are doing a marvelous
01:12:24
job that
01:12:25
I was
01:12:27
shocked at the design shocked meaning it
01:12:30
is so unique and high quality I think
01:12:33
it's amazing it would make me drink
01:12:37
tequila you're going to You're Gonna
01:12:39
Want to going to it it is uh stunning
01:12:41
just congratulations and um yeah it was
01:12:44
just when we went through the deck at
01:12:47
the uh at the monthly meeting it was
01:12:49
like oh that's nice oh that's nice we're
01:12:51
going to do the concept bottles and then
01:12:52
that bottle came up and everybody went
01:12:55
like crazy it was like somebody hitting
01:12:56
like a Steph Curry hitting a half court
01:12:58
shot it was like oh my God it was just
01:13:01
so clear that you've made an iconic
01:13:03
bottle that if we can produce it oh Lord
01:13:07
it is going to be looks like we can
01:13:11
make it yeah it's gonna be amazing I'm
01:13:13
excited I'm excited for it you know it's
01:13:14
like the design is so complicated that we have
01:13:16
to do a feasibility analysis on whether
01:13:18
it was actually manufacturable but it is
01:13:21
so or at least the early reports are
01:13:23
good so we're going to H hopefully we'll
01:13:25
have some available in time for the All-In
01:13:27
Summit I mean why not sounds I mean it's
01:13:30
great when we get barricaded in by all
01:13:32
these protesters we can drink the
01:13:34
tequila did you guys see did you see
01:13:36
Peter Thiel got barricaded by
01:13:39
these ding-dongs at Cambridge my god
01:13:41
listen people have the right to protest
01:13:42
I think it's great people are protesting
01:13:44
but surrounding people and threatening
01:13:46
them is a little bit over the top and I
01:13:49
think you're exaggerating what happened
01:13:51
well I don't know exactly what happened
01:13:52
because all we see is these videos look
01:13:54
they're not threatening anybody and I
01:13:56
don't even think they tried to barricade
01:13:57
him in they were just outside the
01:13:59
building and because they were blocking
01:14:01
the driveway his car couldn't leave but
01:14:04
he
01:14:05
wasn't physically like locked in the
01:14:08
building or something yeah that's that's
01:14:10
what the headlines say but that could be
01:14:12
fake news fake Social yeah this was not
01:14:14
on my bingo card this Pro protester
01:14:17
support by Sacks was not on the bingo
01:14:19
card I got to say the
01:16:22
Constitution of the United States in the
01:16:25
First Amendment provides for the right of
01:14:27
assembly which includes protest and sit
01:14:29
in as long as they're as long as they're
01:14:31
Peaceable now obviously if they go too
01:14:34
far and they vandalize or break into
01:14:36
buildings or use violence then that's
01:14:38
not Peaceable however expressing
01:14:40
sentiments with which you disagree does
01:14:43
not make it violent and there's all
01:14:46
these people out there now making the
01:14:47
argument that if you hear something from
01:14:50
a protester that you don't like and you
01:14:54
subjectively experience that as a as a
01:14:56
threat to your safety then that somehow
01:14:59
should be you know treated as valid like
01:15:02
that's basically violent well that's
01:15:04
that's not what the Constitution says
01:15:07
and these people understood well just a
01:15:09
few months ago that that was basically
01:15:11
snowflakery that you know just because
01:15:14
somebody you know what I'm
01:15:16
saying we have the rise of the woke
01:15:18
right now where they're buying yeah the
01:15:21
woke right they're buying into this idea
01:15:22
of safetyism which is being exposed to
01:15:25
ideas you don't like to protest you
01:15:26
don't like is a threat to your safety no
01:15:28
it's
01:15:30
not every we absolutely have snowflakery
01:15:33
on both sides now it's ridiculous the
01:15:35
only thing I will say that I've seen and
01:15:38
is this this this uh surrounding
01:15:41
individuals who you don't want there and
01:15:44
locking them in a circle and then moving
01:15:46
them out of like protesta that's not
01:15:48
cool yeah obviously you can't do that
01:15:49
but look I think that most of the
01:15:52
protests on most of the campuses have
01:15:53
not crossed the line they've just
01:15:55
occupied The Lawns of these campuses and
01:15:57
look I've seen some troublemakers try to
01:16:01
barge through the the encampments and
01:16:04
claim that because they can't go through
01:16:06
there that somehow they're being
01:16:08
prevented from going to class look you
01:16:10
just walk around the lawn and you can
01:16:12
get to class okay and you know some of
01:16:15
these videos are showing that these are
01:16:18
effectively right-wing provocateurs who
01:16:20
are engaging in leftwing tactics and I
01:16:24
don't support it either way by the way
01:16:26
some of these camps are some of the
01:16:28
funniest things you've ever seen it's
01:16:29
like there are like a one tent that's
01:16:32
dedicated to like a reading room and you
01:16:34
go in there and there's like these like
01:16:37
Center oh my God it's unbelievably
01:16:39
hilarious look there there's no question
01:16:41
that because the protests are
01:16:42
originating on the left that there's
01:16:44
some goofy views like you know you're
01:16:46
dealing with like a leftwing idea
01:16:48
complex right but and you know it's easy
01:16:52
to make fun of them doing different
01:16:53
things but the fact of the matter is
01:16:56
that most of the protests on most of
01:16:57
these campuses are even though they can
01:16:59
be annoying because they're occupying
01:17:01
part of the lawn they're not violent
01:17:04
yeah and you know the way they're being
01:17:05
cracked down on they're sending the
01:17:06
police in at 5:00 a.m. to crack down on
01:17:09
these encampments with batons and riot
01:17:12
gear and I find that part to be
01:17:14
completely excessive well it's also
01:17:17
dangerous because you know things can
01:17:19
escalate when you have mobs of people
01:17:21
and large groups of people so I just
01:17:22
want to make sure people understand that
01:17:24
large group of people large you have a
01:17:26
diffusion of responsibility that occurs
01:17:28
when there's large groups of people who
01:17:29
are passionate about things and and
01:17:31
people could get hurt people have gotten
01:17:33
killed at these things so just you know
01:17:35
keep it calm everybody I agree with you
01:17:37
like what's the harm of these folks
01:17:39
protesting on a lawn it's not a big deal
01:17:40
when they break into buildings of course
01:17:42
yeah that crosses the line obviously
01:17:44
yeah but I mean let them sit out there
01:17:45
and then they'll run out their food cars
01:17:48
their food card and they run out of
01:17:50
waffles did you guys see the clip I
01:17:51
think it was on the University of
01:17:53
Washington campus where one kid
01:17:56
challenged this antifa guy to a push-up
01:17:58
contest oh
01:18:00
fantastic I mean it's it is some of the
01:18:02
funniest stuff some of some content is
01:18:05
coming out that's just my favorite was
01:18:07
the woman who came out and said that the
01:18:09
Columbia students needed humanitarian
01:18:11
Aid and oh my God the overdubs on her
01:18:14
were hilarious I was like uh
01:18:16
humanitarian Aid the he was like we need
01:18:19
our door Dash right now we need we
01:18:21
Double Dash some boba and we can't get
01:18:23
it through the the police need our Boba
01:18:26
low sugar Boba with the with the popping
01:18:28
Boba bubbles wasn't getting in but you
01:18:31
know people have the right to protest
01:18:32
and uh Peaceable by the way there's a
01:18:34
word I've never heard very good Sacks
01:18:36
Peaceable inclined to avoid argument or
01:18:39
violent conflict very nice well it's in
01:18:41
the Constitution it's in the first
01:18:42
amendment is it really I've never I
01:18:44
haven't heard the word Peaceable before
01:18:46
I mean you and I are sympatico on this
01:18:47
like I I don't we used to have the
01:18:51
ACLU like backing up the KKK going down
01:18:55
Main Street and really fighting decision
01:18:57
yeah they were really fighting for I I'm
01:19:00
and I have to say the Overton window is
01:19:02
opened back up and I think it's great
01:19:04
all right we got some things on the
01:19:05
docket here I don't know if you guys saw
01:19:07
the Apple new iPad ad it's getting a
01:19:09
bunch of criticism they use like some
01:19:12
giant hydraulic press to crush a bunch
01:19:15
of creative tools DJ turntable trumpet
01:19:19
piano people really care about Apple's
01:19:21
ads and what they represent we talked
01:19:24
about that that uh Mother Earth little
01:19:26
vignette they created here what do you
01:19:29
think Friedberg did you see the ad what
01:19:30
was your reaction to it made me sad it
01:19:32
it did not make me want to buy an iPad
01:19:34
so huh did not seem like it made you sad
01:19:38
it actually elicited an emotion meaning
01:19:40
like commercials it's very rare that
01:19:42
commercials can actually do that most
01:19:43
people just zone out yeah they took all
01:19:45
this beautiful stuff and hurt it didn't
01:19:47
it didn't feel good I don't know it just
01:19:49
didn't seem like a good ad I don't know
01:19:50
why they did that I don't get it I I've
01:19:52
I don't I don't know I think I think
01:19:54
maybe what they're trying to do is the
01:19:55
the selling point of this new iPad is
01:19:57
that it's the thinnest one I mean
01:19:59
there's no innovation left so they're
01:20:00
just making the devices yeah you know
01:20:02
thinner yeah so I think the idea was
01:20:06
that they were going to take this
01:20:07
hydraulic press to represent how
01:20:09
ridiculously thin the new iPad is now I
01:20:12
don't know if the point there was to
01:20:14
smush all of that good stuff into the
01:20:17
iPad I don't know if that's what they
01:20:18
were trying to convey but yeah I I think
01:20:21
that by destroying all those creative
01:20:24
tools that apple is supposed to
01:20:26
represent it definitely seemed very
01:20:28
offbrand for them and I think people
01:20:30
were reacting to the fact that it was so
01:20:34
um different than what they would have
01:20:35
done in the past and of course everyone
01:20:37
was saying well Steve would never have
01:20:38
done this I do think it did land wrong I
01:20:41
mean I I didn't care that much but but I
01:20:45
I was kind of asking the question like
01:20:46
why are they destroying all these
01:20:48
Creator tools that they're renowned for
01:20:52
creating or for turning into the digital
01:20:55
version yeah it just didn't land I mean
01:20:58
Chamath how are you doing emotionally
01:21:01
after seeing me are you okay buddy yeah
01:21:06
I think this is uh you guys see that in
01:21:09
the
01:21:10
Berkshire annual meeting last
01:21:12
weekend Tim Cook was in the audience and
01:21:17
Buffett was very laudatory this is an
01:21:19
incredible company but he's so clever
01:21:22
with words he's like you know this is an
01:21:24
incredible business that we will hold
01:21:28
forever most likely then it turns out
01:21:31
that he sold $20 billion worth of Apple
01:21:33
shares in the
01:21:34
quarter we're gonna hold it forever which
01:21:37
by the way if you guys
01:21:39
remember we we put that little chart up
01:21:40
which shows when he doesn't mention it
01:21:42
in the in the annual letter it's
01:21:44
basically like it's foreshadowing the
01:21:46
fact that he is just pounding the sell
01:21:49
and he sold $20 billion well also
01:21:52
holding it forever could mean one share
01:21:54
yeah exactly we kind of need to know
01:21:56
like how much are we talking about I
01:21:59
mean it's an incredible business that
01:22:01
has so much money with nothing to do
01:22:03
they're probably just going to buy back
01:22:04
the stock just a total waste they were
01:22:06
floating this rumor of buying Rivian you
01:22:08
know after they shut down Titan project
01:22:10
the your internal project to make a car
01:22:11
it seems like a car is the only thing
01:22:13
people can think of that would move the
01:22:16
needle in terms of earnings I think the
01:22:17
problem is J like you kind of become
01:22:19
afraid of your own shadow meaning the
01:22:21
the folks that are really good at M&A
01:22:23
like you look at Benioff
01:22:25
the thing with Benioff's M&A strategy is that
01:22:27
he's been doing it for 20 years and so
01:22:31
he's cut his teeth on small Acquisitions
01:22:34
and the market learns to give him trust
01:22:37
so that when he proposes like the $27
01:22:39
billion slack acquisition he's allowed
01:22:41
to do that another guy you know Nikesh Arora
01:22:44
at Palo Alto Networks these last five years people
01:22:46
were very skeptical that he could
01:22:48
actually roll up security because it was
01:22:49
a super fragmented Market he's gotten
01:22:52
permission then there are companies like
01:22:54
Danaher that buy hundreds of companies so
01:22:56
all these folks are examples of you
01:22:58
start small and you you earn the right
01:23:00
to do more Apple hasn't bought anything
01:23:02
more than 50 or a hundred million dollars and
01:23:04
so the idea that all of a sudden they
01:23:06
come out of the blue and buy a 102
01:23:08
billion dollar company I think is just
01:23:10
totally doesn't stand up to logic it's just
01:23:13
not possible for them because they'll be
01:23:15
so afraid of their own shadow that's the
01:23:16
big problem it's themselves well if
01:23:19
you're running out of In-House
01:23:21
Innovation and you can't do m&a then
01:23:23
your options are kind of limited I mean
01:23:25
I do think that the fact that the big
01:23:27
news out of apple is the iPad's Getting
01:23:30
Thinner does represent kind of the end
01:23:32
of the road in terms of innovation it's
01:23:34
kind of like when they added the third
01:23:36
camera to the iPhone yeah it reminds me
01:23:39
of those um remember like when the
01:23:41
Gillette yeah they came out and then
01:23:43
they did the five blade the best Onion
01:23:44
thing was like we're doing five blades f
01:23:47
it but then Gillette actually came out
01:23:49
with the Mach 5 so yeah like the parody
01:23:51
became the reality what are they going
01:23:52
to do add two more cameras to the iPhone
01:23:54
you have five cameras on it no makes no
01:23:57
sense and then I don't know anybody
01:23:58
wants to remember the Apple Vision was
01:24:01
like gonna plus why are they body
01:24:02
shaming the the fat
01:24:05
iPads that's a fair point Fair Point
01:24:07
actually you know what it's actually
01:24:10
this didn't come out yet but it turns
01:24:11
out the iPad is on Ozempic it's actually
01:24:14
dropped a lot that would have been a
01:24:15
funnier ad yeah yeah exactly oh oh
01:24:20
Ozempic we can just workshop that right
01:24:22
here but there was another funny one
01:24:24
which was making the iPhone smaller and
01:24:26
smaller and smaller and the iPod smaller
01:24:27
and smaller and smaller to the point it
01:24:28
was like you know like a thumb-sized
01:24:30
iPhone like the Ben Stiller phone in
01:24:33
Zoolander or
01:24:36
correct yeah that was a great scene is
01:24:39
there a category that you can think of
01:24:42
that you would love an Apple product for
01:24:45
there's a product in your life that you
01:24:47
would love to have Apple's version of it
01:24:50
they they killed it I think a lot of
01:24:53
people would be very open minded to an
01:24:55
Apple car okay they they just would it's
01:24:57
it's a connected internet device
01:25:00
increasingly so yeah and they they
01:25:02
managed to flub
01:25:03
it they had a chance to buy Tesla they
01:25:06
managed to flub it yeah right there are
01:25:09
just too many examples here where these
01:25:11
guys have so much money and not enough
01:25:12
ideas that's a
01:25:14
shame it's a bummer yeah the one I
01:25:17
always wanted to see them do saxs was TV
01:25:19
the one I always wanted to see them do
01:25:21
was the TV and they were supposedly
01:25:22
working on it like the actual TV not the
01:25:24
little Apple TV box in the back and like
01:25:26
that would have been extraordinary to
01:25:28
actually have a gorgeous you know big
01:25:31
television what about a gaming console
01:25:33
they could have done that you know
01:25:34
there's just all these things that they
01:25:36
could have done it's not a lack of
01:25:39
imagination because these aren't exactly
01:25:42
incredibly World beating ideas they're
01:25:44
sitting right in front of your face it's
01:25:46
just the will to do it yeah all in one
01:25:50
TV would have been good if you think
01:25:52
back on Apple product lineup over the
01:25:55
years where they've really created value
01:25:57
is on how unique the products are they
01:26:00
almost create new categories sure there
01:26:02
may have been a quote tablet computer
01:26:04
prior to the iPad but the iPad really
01:26:06
defined the tablet computer era sure
01:26:08
there was a smartphone or two before the
01:26:10
iPhone came along but it really defined
01:26:12
the smartphone and sure there was a
01:26:14
computer before the Apple II and then it
01:26:15
came along and it defined the personal
01:26:17
computer in all these cases I think
01:26:19
Apple strives to define the category so
01:26:22
it's very hard to define a television if
01:26:24
you think about it or gaming console in
01:26:25
a way that you take a step up and you
01:26:28
say this is the new thing this is the
01:26:29
new platform so I don't know that's the
01:26:32
lens I would look at if I'm Apple in
01:26:33
terms of like can I redefine a car can I
01:26:36
make you know we're all trying to fit
01:26:38
them into an existing product bucket but
01:26:40
I think what they've always been so good
01:26:41
at is identifying consumer needs and
01:26:43
then creating an entirely new way of
01:26:45
addressing that need in a real step
01:26:46
change function from the like the the
01:26:49
the um iPod it was so different from any
01:26:51
MP3 player ever I think the reason why
01:26:53
the car could have been completely
01:26:55
reimagined by Apple is that they have a
01:26:57
level of credibility and trust that I
01:26:59
think probably no other company has and
01:27:02
absolutely no other tech company has and
01:27:05
we talked about this but I think this
01:27:08
was the third Steve Job story that that
01:27:11
I left out but in
01:27:13
2000 I don't know was it
01:27:15
01:27:17
2001 I launched a 99-cent download store
01:27:21
right I think I've told you the story in
01:27:22
Winamp and
01:27:26
Steve Jobs just ran total circles around
01:27:28
us but the reason he was able to is he
01:27:30
had all the credibility to go to the
01:27:31
labels and get deals done for licensing
01:27:34
music that nobody could get done before
01:27:36
I think that's an example of what
01:27:38
Apple's able to do which is to use their
01:27:39
political Capital to change the rules so
01:27:42
if the thing that we would all want is
01:27:44
safer roads and autonomous vehicles
01:27:47
there are regions in every town and city
01:27:49
that could be completely converted to
01:27:52
level five autonomous zones if I had to
01:27:55
pick one company that had the
01:27:57
credibility to go and change those rules
01:27:58
it's them because they could demonstrate
01:28:01
that there was a methodical safe
01:28:03
approach to doing something and so the
01:28:05
point is that even in these categories
01:28:08
that could be totally reimagined it's
01:28:09
not for a lack of imagination again it
01:28:11
just goes back to a complete lack of
01:28:13
Will and I understand because if they
01:28:15
had if you if you had $200 billion do of
01:28:18
capital on your balance sheet I think
01:28:21
it's probably pretty easy to get fat and
01:28:23
lazy yeah it is and and they want to
01:28:25
have everything built there people don't
01:28:27
remember but they actually built one of
01:28:29
the first digital cameras you must have
01:28:30
owned this right freeberg you're I
01:28:32
remember this yeah totally it beautiful
01:28:34
what did they call it was it the eye
01:28:36
camera or something QuickTake quick
01:28:39
take yeah um the thing I would like to
01:28:41
see apple build and I'm surprised they
01:28:42
didn't was a smart home system the way
01:28:46
Google has Nest a Dropcam a door lock
01:28:50
you know an AV system like a Crestron or
01:28:54
whatever and just have your whole home
01:28:56
automated thermostat Nest all of that
01:28:58
would be brilliant by Apple and right
01:29:00
now I'm an apple family that has our all
01:29:04
of our home automation through Google so
01:29:06
it's just kind of sucks I would I would
01:29:08
like that all to that would be pretty
01:29:10
amazing like if they did a Crestron or
01:29:11
Savant cuz then when you just go to your
01:29:13
Apple TV all your cameras just work you
01:29:15
don't need to
01:29:17
yes that's the that I mean and everybody
01:29:19
has a home and everybody automates their
01:29:21
home so well everyone has Apple TV at
01:29:23
this point so you just make Apple TV the
01:29:26
brain for the home system right that
01:29:29
would be your Hub and you can connect
01:29:31
your phone to it then yes that would be
01:29:33
very nice yeah like can you imagine like
01:29:36
the ring cameras all that stuff being
01:29:38
integrated I don't know why they didn't
01:29:39
go after that that seems like the easy
01:29:41
layup hey you know everybody's been
01:29:43
talking Friedberg about this uh Alpha
01:29:47
fold this folding
01:29:49
proteins and there's some new version
01:29:52
out from Google and uh also Google
01:29:55
reportedly we talked about this before
01:29:56
is also advancing talks to acquire
01:29:58
HubSpot so that rumor for the $30
01:30:01
billion market cap HubSpot is out there
01:30:03
as well Friedberg you're our resident
01:30:06
science Sultan uh our resident Sultan of
01:30:09
Science and as a Google
01:30:12
alumnus pick either story and let's go
01:30:14
for it yeah I mean I'm not sure there's
01:30:15
much more to add on the HubSpot
01:30:16
acquisition rumors they are still just
01:30:18
rumors and I think we covered the topic
01:30:20
a couple weeks ago but I will say that
01:30:22
AlphaFold 3 that was just announced
01:30:24
today and demonstrated by Google um is a
01:30:27
real uh I would say breathtaking moment
01:30:31
um for biology for bioengineering for
01:30:34
human health for medicine and maybe I'll
01:30:36
just take 30 seconds to kind of explain
01:30:38
it um you remember when they introduced
01:30:40
AlphaFold and AlphaFold 2 we talked
01:30:43
about DNA codes for proteins so every
01:30:46
three letters of DNA codes for an amino
01:30:49
acid so a string of DNA codes for a
01:30:52
string of amino acids and that's called
01:30:54
a gene that produces a
01:30:55
protein and that protein is basically a
01:30:59
like think about beads there's 20
01:31:00
different types of beads 20 different
01:31:01
amino acids that can be strung together
01:31:04
and what happens is that necklace that
01:31:06
bead necklace basically collapses on
01:31:08
itself and all those little beads stick
01:31:10
together with each other in some
01:31:11
complicated way that we can't
01:31:12
deterministically model and that creates
01:31:15
a three-dimensional structure which is
01:31:16
called A protein that molecule and that
01:31:19
molecule does something interesting it
01:31:20
can break apart other molecules it can
01:31:23
bind molecules it can move molecules
01:31:24
around so it's basically the Machinery
01:31:26
of chemistry of biochemistry and so
01:31:29
proteins are what is encoded in our DNA
01:31:32
and then the proteins do all the work of
01:31:34
making living organisms so Google's
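The chain Friedberg describes here (three DNA letters per codon, one amino acid per codon, a gene read out as a bead-string of amino acids) can be sketched in a few lines of Python. The codon table below is a tiny illustrative subset of the real genetic code, just enough to run the example:

```python
# Minimal subset of the standard genetic code: codon -> one-letter amino acid.
CODON_TABLE = {
    "ATG": "M",  # methionine (also the start codon)
    "GCT": "A",  # alanine
    "TGG": "W",  # tryptophan
    "TAA": "*",  # stop codon
}

def translate(dna: str) -> str:
    """Read DNA three letters at a time, emitting one amino acid per codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":          # a stop codon ends the protein
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGGCTTGGTAA"))  # -> MAW
```

What AlphaFold then adds is the hard part this sketch omits: predicting how that linear amino-acid string collapses into a three-dimensional structure.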
01:31:36
AlphaFold project took three-dimensional
01:31:38
images of proteins and the DNA sequence
01:31:41
that codes for those proteins and then
01:31:43
they built a predictive model that
01:31:45
predicted the three-dimensional
01:31:46
structure of a protein from the DNA that
01:31:48
codes for it and that was a huge
01:31:50
breakthrough years ago what they just
01:31:52
announced with AlphaFold 3 today
01:31:54
is that they're now including all small
01:31:56
molecules so all the other little
01:31:58
molecules that go into chemistry and
01:32:00
biology that drive the function of
01:32:02
everything we see around us and the way
01:32:05
that all those molecules actually bind
01:32:07
and fit together is part of the
01:32:09
predictive model why is that important
01:32:11
well let's say that you're designing a
01:32:12
new drug and it's a protein based drug
01:32:15
which biologic drugs which most drugs
01:32:16
are today you could find a biologic drug
01:32:18
that binds to a cancer cell and then
01:32:20
you'll spend 10 years going to clinical
01:32:22
trials and billions of dollars later you find
01:32:24
out that that protein accidentally binds
01:32:27
to other stuff and hurts other stuff in
01:32:28
the body and that's an off Target effect
01:32:30
or a side effect and that drug is pulled
01:32:32
from the clinical trials and it never
01:32:33
goes to Market most drugs go through
01:32:36
that process they are actually tested in
01:32:39
in animals and then in humans and we
01:32:41
find all these side effects that arise
01:32:42
from those drugs because we don't know
01:32:45
how those drugs are going to bind or
01:32:47
interact with other things in our
01:32:48
biochemistry and we only discovered
01:32:50
after we put it in but now we can
01:32:52
actually model that with software we can
01:32:54
take that drug we can create a
01:32:56
three-dimensional representation of it
01:32:57
using the software and we can model how
01:33:00
that drug might interact with all the
01:33:01
other cells all the other proteins all
01:33:03
the other small molecules in the body to
01:33:06
find all the off-target effects that may
01:33:08
arise and decide whether or not that
01:33:10
presents a good drug candidate that is
01:33:12
one example of how this capability can
01:33:15
be used and there are many many others
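The screening loop just described (predict a candidate's binding against the intended target and against a panel of other proteins, then discard anything with likely off-target hits) can be sketched as a toy filter. The drug names, protein names, and affinity table here are invented placeholders standing in for what a structure-prediction model would compute:

```python
from typing import Callable, Dict, List

def screen_candidates(
    candidates: List[str],
    target: str,
    off_targets: List[str],
    affinity: Callable[[str, str], float],  # predicted binding strength in [0, 1]
    on_cutoff: float = 0.8,
    off_cutoff: float = 0.3,
) -> List[str]:
    """Keep candidates that bind the target strongly and every off-target weakly."""
    kept = []
    for drug in candidates:
        if affinity(drug, target) < on_cutoff:
            continue  # too weak against the intended target
        if any(affinity(drug, p) >= off_cutoff for p in off_targets):
            continue  # predicted side effect: binds something it shouldn't
        kept.append(drug)
    return kept

# Invented affinity scores standing in for model predictions.
AFFINITY: Dict[tuple, float] = {
    ("drug_a", "cancer_target"): 0.90, ("drug_a", "liver_enzyme"): 0.60,
    ("drug_b", "cancer_target"): 0.85, ("drug_b", "liver_enzyme"): 0.10,
    ("drug_c", "cancer_target"): 0.40, ("drug_c", "liver_enzyme"): 0.05,
}

def lookup(drug: str, protein: str) -> float:
    return AFFINITY.get((drug, protein), 0.0)

print(screen_candidates(["drug_a", "drug_b", "drug_c"],
                        "cancer_target", ["liver_enzyme"], lookup))
# -> ['drug_b']
```

The filter itself is trivial; the value in the scenario described above comes from the affinity function being a structure-based prediction rather than a wet-lab measurement, so candidates with off-target liabilities can be dropped before trials.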
01:33:17
including creating new proteins that
01:33:20
could be used to bind molecules or stick
01:33:22
molecules together or new proteins that
01:33:24
could be designed to rip molecules apart
01:33:26
we can now predict the function of
01:33:29
three-dimensional molecules using this
01:33:31
this capability which opens up all of
01:33:33
the software based design of chemistry
01:33:35
of biology of drugs and it really is an
01:33:38
incredible breakthrough moment the
01:33:41
interesting thing that happened though
01:33:42
is Google alphabet has a subsidiary
01:33:45
called isomorphic Labs it is a drug
01:33:47
development subsidiary of alphabet and
01:33:49
they've basically kept all the IP for
01:33:52
AlphaFold 3 in Isomorphic Labs
01:33:54
so Google is going to monetize the heck
01:33:56
out of this capability and what they
01:33:58
made available was not open source code
01:34:00
but a web-based viewer that scientists
01:34:03
for quote non-commercial purposes can
01:34:04
use to do some fundamental research in a
01:34:06
web-based viewer and make some
01:34:08
experiments and try stuff out and how
01:34:09
interactions might occur but no one can
01:34:12
use it for commercial use only Google's
01:34:14
isomorphic Labs can so number one it's
01:34:16
an incredible demonstration of what AI
01:34:18
outside of llms which we just talked
01:34:21
about with Sam today and obviously
01:34:23
talked about other models but llms being
01:34:25
kind of this consumer text predictive
01:34:27
model capability but outside of that
01:34:29
there's this capability in things like
01:34:32
chemistry with these new AI models that
01:34:34
can be trained and built to predict
01:34:36
things like three-dimensional chemical
01:34:39
interactions that is going to open up an
01:34:41
entirely New Era for you know human
01:34:43
progress and I think that's what's so
01:34:45
exciting I think the other side of this
01:34:46
is Google is hugely advantaged and they
01:34:48
just showed the world a little bit about
01:34:50
some of these jewels that they have in
01:34:51
the treasure chest and they're like look
01:34:52
at what we got we're going to all these
01:34:54
drugs and they've got Partnerships with
01:34:55
all these pharma companies at Isomorphic Labs
01:34:57
that they've talked about and it's going
01:34:59
to usher in a new era of drug
01:35:01
development design for human health so
01:35:03
all in all I'd say it's a pretty like
01:35:05
astounding day a lot of people are going
01:35:06
crazy over the capability that they just
01:35:08
demonstrated and then it begs all this
01:35:10
really interesting question around like
01:35:12
you know what's Google going to do with it
01:35:13
and how much value is going to be
01:35:14
created here so anyway I thought it was
01:35:15
a great story and I just rambled on for
01:35:17
a couple minutes but I don't know it's
01:35:19
pretty cool super interesting is is this
01:35:20
AI capable of making a science Corner
01:35:22
that David Sacks will pay attention
01:35:25
to well it will it will predict the Cure
01:35:28
I think for the common cold and for
01:35:30
herpes so he should pay attention
01:35:33
absolutely Folding Cells is the app that
01:35:36
the casual game Sacks just downloaded and is
01:35:38
playing how many uh how many chess moves
01:35:40
did you make during that segment Sacks
01:35:42
sorry let me just say one more thing do
01:35:43
you guys remember we talked about
01:35:44
Yamanaka factors and how challenging it
01:35:46
is to basically we can reverse aging if
01:35:48
we can get the right proteins into cells
01:35:51
to tune the expression of certain genes to
01:35:53
make those cells youthful right now it's
01:35:56
a shotgun approach to trying millions of
01:35:58
compounds and combinations of compounds
01:36:00
to do them there's a lot of companies
01:36:01
actually trying to do this right now to
01:36:03
come up with a fountain of youth type
01:36:05
product we can now simulate that so with
01:36:08
this system one of the things that this
01:36:10
AlphaFold 3 can do is predict what molecules
01:36:13
will bind and promote certain sequences
01:36:15
of DNA which is exactly what we try and
01:36:17
do with the Yamanaka factor-based
01:36:19
expression systems and find ones that
01:36:21
won't trigger off Target expression so
01:36:23
meaning we can now go through the search
01:36:25
space and software of creating a
01:36:27
combination of molecules that
01:36:29
theoretically could unlock this Fountain
01:36:31
of Youth to deage all the cells in the
01:36:33
body and introduce an extraordinary kind
01:36:35
of health benefit and that's just again
01:36:37
one example of the many things that are
01:36:39
possible incredible with this sort of
01:36:40
platform I I and I'm really I gotta be
01:36:42
honest I'm really just sort of skimming
01:36:43
the surface here of what this can do the
01:36:46
capabilities and the impact are going to
01:36:48
be like I don't know I know I say this
01:36:50
sort of stuff a lot but it's going to be
01:36:51
pretty profound there's um on the blog
01:36:53
post they have this incredible video
01:36:55
that they show of
01:36:57
the Corona virus that creates a common
01:37:00
cold I think the spike protein and not
01:37:03
only did they literally like predict it
01:37:06
accurately they also predicted how it
01:37:09
interacts with an antibody with a sugar
01:37:12
it's nuts so you could see a world where
01:37:15
like I don't know you just get a vaccine
01:37:17
for the cold and it's kind of like you
01:37:19
never have colds again amazing I mean
01:37:21
simple stuff but so powerful you can
01:37:23
filter out stuff that has off-target
01:37:25
effect so so much of drug Discovery and
01:37:27
all the side effect stuff can start to
01:37:29
be solved for in silico and you could
01:37:31
think about running extraordinarily large
01:37:33
use a model like this to run
01:37:34
extraordinarily large simulations in a
01:37:37
search space of chemistry to find stuff
01:37:39
that does things in the body that can
01:37:42
unlock you know all these benefits can
01:37:44
do all sorts of amazing things to
01:37:45
destroy cancer to destroy viruses to
01:37:48
repair cells to deage cells and this is
01:37:51
a hundred billion dollar business they
01:37:53
say I alone I feel like this is where I
01:37:57
I've said this before I think Google's
01:37:58
got this like portfolio of like
01:38:01
quiet
01:38:03
extraordinary things what if they hit and the fact
01:38:05
and I think the fact that they didn't
01:38:06
open source everything in this says a
01:38:08
lot about their intentions yeah yeah
01:38:10
open source when you're behind closed
01:38:12
Source lock it up when you're ahead but
01:38:14
show yamanaka actually interestingly
01:38:16
Yamazaki is the Japanese whisky that
01:38:19
Sacks serves on his plane as well it's
01:38:21
delicious I love that Hokkaido yak
01:38:24
Jason I feel like if you didn't find
01:38:26
your way to Silicon Valley you could be
01:38:27
like a Vegas Lounge comedy guy
01:38:29
absolutely for sure yeah I was actually
01:38:32
yeah somebody said I should do like
01:38:33
those 1950s those 1950s talk shows where
01:38:35
the guys would do like the the stage
01:38:37
show somebody told me I should do um
01:38:40
like Spalding Gray Eric Bogosian style
01:38:42
stuff I don't know if you guys remember
01:38:44
like the uh the monologuists from the
01:38:45
80s in New York I was like oh that's
01:38:47
interesting maybe all right everybody
01:38:48
thanks for tuning in to the world's
01:38:50
number one podcast can you believe we
01:38:53
did it Chamath uh the number one podcast
01:38:56
in the world and the all-in summit the
01:38:59
TED killer if you are going to TED
01:39:02
congratulations for genuflecting if you
01:39:04
want to talk about real issues come to
01:39:06
the all-in summit and if you are
01:39:08
protesting at the all-in summit let us
01:39:11
know uh what mock meat you would like to
01:39:13
have freeberg is setting up mock meat
01:39:15
stations for all of our protesters and
01:39:18
what milk you would like yeah all vegan
01:39:20
if you if you're oat milk nut milk
01:39:23
we've got five different kinds of xanthan gum
01:39:26
you can choose from all of the nut milks
01:39:28
could want and then they'll be
01:39:30
mindful can we have some soil likein
01:39:32
please yes on the south lawn we'll have
01:39:34
the goat yoga going on so just please
01:39:37
note that the goat yoga will be going on
01:39:38
for all of you it's very thoughtful for
01:39:41
you to make sure that our protesters are
01:39:43
going to be well well fed well taken
01:39:46
care of yes we're actually Friedberg is
01:39:48
working on the protester gift bags the
01:39:50
protester gift bags they're made
01:39:54
Yak folding proteins so you're good
01:39:57
folding proteins I think I saw them open
01:39:59
for the Smashing Pumpkins in
01:40:02
2003 on fire on fire enough I'll be here
01:40:05
for three more nights love you boys
01:40:07
byebye love you besties is this the all
01:40:09
Potter open mic night what's going on
01:40:11
it's basically I'm just
01:40:15
bored let your winners
01:40:17
ride Rainman David
01:40:23
said we open source it to the fans and
01:40:25
they've just gone crazy with it love
01:40:28
queen
01:40:32
[Music]
01:40:35
of Besties
01:40:38
are my dog taking
01:40:42
driveway oh man myit will meet me we
01:40:46
should all just get a room and just have
01:40:48
one big huge orgy cuz they're all this
01:40:49
useless it's like this like sexual
01:40:51
tension that they just need to release
01:40:52
somehow
01:40:53
[Music]
01:40:58
your we need to get
01:41:04
[Music]
01:41:09
merch all right that's episode 178 and
01:41:12
now the plugs the all-in Summit is
01:41:15
taking place in Los Angeles on September
01:41:17
8th through the 10th you can apply for a
01:41:19
ticket at summit.allinpodcast.co
01:41:23
scholarships will be coming soon if you
01:41:25
want to see the four of us interview Sam
01:41:27
Altman you can actually see the video of
01:41:30
this podcast on YouTube
01:41:33
youtube.com/@allin or just search All-In
01:41:35
podcast and hit the alert Bell and
01:41:38
you'll get updates when we post we're
01:41:39
doing a Q&A episode live when the
01:41:43
YouTube channel hits 500,000 and we're
01:41:45
going to do a party in Vegas my
01:41:48
understanding when we hit a million
01:41:49
subscribers so look for that as well you
01:41:51
can follow us on X at x.com
01:41:53
/theallinpod TikTok is all_in_
01:41:58
tok Instagram theallinpod and on
01:42:01
LinkedIn just search for the all-in
01:42:03
podcast you can follow Chamath at x.com/chamath
01:42:06
and you can sign up for a substack at
01:42:08
chamath.substack.com Friedberg can be
01:42:11
followed at x.com/friedberg and Ohalo is
01:42:14
hiring click on the careers page at ohalo
01:42:16
genetics.com and you can follow Sacks at
01:42:19
x.com/DavidSacks Sacks recently spoke at
01:42:22
the American moment conference and
01:42:24
people are going crazy for it it's
01:42:26
pinned to his tweet on his X profile I'm
01:42:28
Jason Calacanis I am x.com/Jason and if you
01:42:32
want to see pictures of my Bulldogs and
01:42:33
the food I'm eating go to instagram.com
01:42:36
Jason in the first-name club you can listen
01:42:39
to my other podcast this week in
01:42:40
startups just search for it on YouTube
01:42:42
or your favorite podcast player we are
01:42:44
hiring a researcher apply to be a
01:42:47
researcher doing primary research
01:42:48
and working with me and producer Nick
01:42:50
working in data and Science and being
01:42:52
able to do great great research Finance
01:42:54
etc. allinpodcast.co/research it's a
01:42:57
full-time job working with us the
01:42:59
besties we'll see you all next time on
01:43:01
the All-In podcast


Episode Highlights

  • ChatGPT's Impact
    ChatGPT became the fastest product to hit 100 million users in history.
    “It's reportedly the fastest product to hit 100 million users in history.”
    @ 02m 10s
    May 10, 2024
  • OpenAI's Strategy Shift
    Sam discusses the shift from open to closed-source models for safety.
    “We want to build AI tools and make them widely available.”
    @ 07m 20s
    May 10, 2024
  • The Future of AI Assistants
    Exploring the concept of AI as a separate entity that understands and anticipates needs.
    “I personally like the separate entity approach better.”
    @ 20m 44s
    May 10, 2024
  • The Role of AI in Healthcare
    Discussing the transformative potential of AI in the healthcare sector.
    “I believe healthcare should be pretty transformed by this.”
    @ 26m 28s
    May 10, 2024
  • The Beauty of Human Creativity
    Exploring the intersection of AI and human artistic expression, emphasizing the need to preserve creativity.
    “There's something beautiful about human creativity.”
    @ 39m 53s
    May 10, 2024
  • AI Regulation Challenges
    A deep dive into the complexities of regulating AI and the potential for overreach.
    “Regulating AI is a complex discussion that needs more input.”
    @ 41m 07s
    May 10, 2024
  • Navigating AGI Development
    The importance of responsible development of AGI and the challenges that lie ahead.
    “We have to navigate how to get there in a reasonable way.”
    @ 58m 42s
  • Organizational Structure of OpenAI
    Sam explains the unique approach to organizing AI research.
    “It's what works for us.”
    @ 01h 03m 25s
  • The Overton Window
    A discussion on the importance of peaceful protests and freedom of speech.
    “The Overton window has opened back up and I think it's great.”
    @ 01h 19m 00s
  • Apple's New iPad Ad Criticized
    The latest iPad ad has drawn criticism for its emotional impact and message.
    “It did not make me want to buy an iPad.”
    @ 01h 19m 32s
  • AlphaFold 3 Breakthrough
    Google DeepMind's AlphaFold 3 can predict protein structures, revolutionizing drug development.
    “This opens up an entirely new era for human progress.”
    @ 01h 34m 43s
  • Vaccine for the Common Cold?
    Imagining a future where we could eliminate colds with a vaccine.
    “You could see a world where you just get a vaccine for the cold.”
    @ 01h 37m 15s

Episode Quotes

Key Moments

  • Future of AI @ 01:57
  • Separate Entity Approach @ 20:44
  • AI as an Assistant @ 20:51
  • User Experience @ 23:23
  • Trust Issues @ 1:01:12
  • AI Infrastructure @ 1:02:21
  • Major Launch @ 1:03:56
  • Cold Vaccine @ 1:37:15

Related Episodes

  • Elon gets paid, Apple's AI pop, OpenAI revenue rip, Macro debate & Inside Trump Fundraiser
  • New SEC Chair, Bitcoin, xAI Supercomputer, UnitedHealth CEO murder, with Gavin Baker & Joe Lonsdale
  • DOJ targets Nvidia, Meme stock comeback, Trump fundraiser in SF, Apple/OpenAI, Texas stock market
  • Winning the AI Race Part 3: Jensen Huang, Lisa Su, James Litinsky, Chase Lochmiller
  • E130: DeSantis's Twitter Spaces, debt ceiling, Nvidia rips, state of VC, startup failure & more
  • Epstein Files, Is SaaS Dead?, Moltbook Panic, SpaceX xAI Merger, Trump's Fed Pick
  • IPOs and SPACs are Back, Mag 7 Showdown, Zuck on Tilt, Apple's Fumble, GENIUS Act passes Senate
  • Trump vs Powell, Solving the Debt Crisis, The $10T AGI Prize, GENIUS Act Becomes Law
  • E143: Nvidia smashes earnings, Arm walks the plank, M&A market, Vivek dominates GOP debate & more
  • E133: Market melt-up, IPO update, AI startups overheat, Reddit revolts & more with Brad Gerstner