OpenAI Battles Safety Concerns and High-Profile Exits | Pivot

May 21, 2024 / 08:32

This episode discusses the recent resignation of Jan Leike from OpenAI, the dissolution of the Superalignment team, and the ongoing tension between AI safety and product development.

Jan Leike, head of the Superalignment team, expressed concerns about OpenAI's safety culture, stating it has been overshadowed by the push for new products. His departure adds to a list of at least 11 high-profile exits from the company.

The hosts analyze the company's shift towards a profit-driven model, with Sam Altman and Greg Brockman emphasizing the importance of preparing for AGI risks. They question whether OpenAI can balance speed and safety in its operations.

Discussions also touch on the stringent offboarding agreements for departing employees, which include non-disclosure and non-disparagement clauses. The hosts reflect on the implications of these policies for employee equity and company culture.

Overall, the episode highlights the ongoing internal conflicts at OpenAI regarding its mission and the ethical responsibilities of AI development.

TL;DR

Jan Leike resigns from OpenAI amid safety concerns, highlighting internal conflicts over profit versus AI ethics.

Video

00:00:00
things are getting messy at OpenAI
00:00:01
again oh man is this the most telenovela
00:00:04
of a company the company had another
00:00:06
high-profile departure again this week
00:00:08
with the resignation of Jan Leike uh the
00:00:11
head of superalignment which is the
00:00:14
team focused on AI safety that's the
00:00:16
word we're going to be all super
00:00:18
aligned uh Leike explained his departure
00:00:20
in a series of social media posts saying
00:00:22
in part that OpenAI's safety culture and
00:00:24
processes have taken a backseat to shiny
00:00:26
products and there's been a bunch of
00:00:28
shiny products they showed off last week
00:00:30
uh the things are not
00:00:32
unrelated um OpenAI has dissolved that
00:00:35
superalignment team the company told
00:00:37
Bloomberg the group will be integrated
00:00:38
across research efforts to help achieve
00:00:40
safety goals uh Sam Altman put out a
00:00:44
statement as did OpenAI co-founder
00:00:46
Greg Brockman uh sharing their view of
00:00:48
the future they said the company has
00:00:50
quote raised awareness of the risks and
00:00:52
opportunities of AGI so the world can
00:00:53
better prepare for it whether they're
00:00:55
preparing for it or not is a big
00:00:57
question there's been at least 11
00:00:59
high-profile exits in the last few
00:01:00
months um you know this is an issue
00:01:03
again of speed versus safety they have
00:01:05
been rolling out the products because
00:01:06
they're deathly terrified of getting
00:01:08
rolled over by the big companies I can
00:01:10
feel it I can feel such a Netscape
00:01:12
moment for them um you know uh we'll see
00:01:16
uh we'll see what'll happen here but
00:01:18
it's definitely a company still shaking
00:01:20
off or dealing with these issues that
00:01:22
they've had these two different types of
00:01:24
people who are um involved in this
00:01:27
company which is some that think
00:01:29
this is a risk to humanity others who
00:01:31
are like calm the [ __ ] down let's make
00:01:33
some stuff and we'll figure it out later
00:01:36
um uh one of the things that got a lot
00:01:39
of reporting was OpenAI's offboarding
00:01:41
agreements that have non-disclosure and
00:01:42
non-disparagement Provisions not
00:01:44
uncommon but theirs were particularly
00:01:46
stringent if a departing employee violated
00:01:48
these provisions they were in danger of
00:01:49
losing all their vested equity according
00:01:51
to Vox Sam Altman confirmed in a tweet
00:01:53
there was a provision about potential
00:01:55
equity cancellation for departing
00:01:56
employees but it was never enforced the
00:01:58
company is currently changing that
00:02:00
language uh it sounds like they're just
00:02:02
like just tough customers on that thing
00:02:05
um it goes further than what other people do
00:02:07
it's usually more talent-friendly in
00:02:09
general in Silicon Valley Scott what are
00:02:11
your thoughts on all this when Ilya was
00:02:14
part of the board yeah when he was
00:02:17
part of the board that fired Sam Altman
00:02:19
if you're going to stab the prince you
00:02:21
better kill him when he came back Ilya
00:02:23
became the information age equivalent of
00:02:24
Prigozhin he was a dead man walking he just
00:02:27
wasn't going to survive oh no hard feelings
00:02:30
for firing me come on water under the
00:02:33
bridge that just wasn't going to happen
00:02:34
and this is similar to Meta the fastest
00:02:36
way to get a severance check is to go to
00:02:38
work for the trust and safety team
00:02:40
because every once in a while in
00:02:42
response to real heat they'll pretend to
00:02:43
give a good goddamn and they'll create a
00:02:45
trust and safety team I doubt Mark
00:02:47
listens to them or cares about them and
00:02:48
then under the cover of darkness fires most
00:02:50
of them I like the fact that OpenAI is
00:02:52
becoming more like what they really are
00:02:54
and that is they are a for-profit
00:02:55
company and they're not pretending tough
00:02:58
mothers and they're not pretending to be
00:02:59
anything I'd rather them be like New
00:03:01
Yorkers that's one of the reasons
00:03:02
I love New York versus doing business in
00:03:05
California is they don't pretend to be
00:03:07
something they aren't
00:03:10
and this is a for-profit company and
00:03:13
OpenAI is going to be so good at
00:03:15
making profit they shouldn't be trusted
00:03:17
to do anything else and the fact and
00:03:20
first off and also well except they were
00:03:21
founded with a slightly different idea
00:03:23
but go ahead yeah and then when they
00:03:25
took 11 billion those people wanted
00:03:26
their money back so they should have
00:03:29
never taken that money but the one thing
00:03:32
the one compensatory thing here is that
00:03:34
any group of people that decides to call
00:03:36
themselves super alignment should be
00:03:37
fired I thought you'd like that
00:03:40
those people should endure a certain
00:03:42
amount of ridicule and pain what are
00:03:44
your thoughts well you know AI is going
00:03:46
to kill us Scott I mean it's a really
00:03:49
interesting thing because these people
00:03:50
are you know there's a group over in
00:03:51
Anthropic that are much more concerned
00:03:53
in that regard and I think they have a
00:03:55
right to be absolutely I think listen
00:03:58
I've always been a safety person like why are
00:04:00
you not paying attention to any bit of
00:04:01
safety from the very get-go and so I
00:04:04
would naturally be affiliated with the
00:04:05
super alignment people at the same time
00:04:08
for them to think this is anything other
00:04:10
than a for-profit institution is kind of
00:04:13
well then go on a board go on a you know
00:04:16
go to Stanford and become a
00:04:17
high-profile naysayer of these things or
00:04:20
write a book like Burn Book right um
00:04:22
because they're inside these companies I
00:04:24
think that because of the amount of
00:04:25
money here there's just no way people
00:04:27
aren't going to be aligned around the
00:04:29
money making and you know some people
00:04:31
are like oh you like Sam Altman I
00:04:33
said yes but he's a tough [ __ ] I
00:04:35
was like are you kidding he's so
00:04:36
aggressive he's so much like all the
00:04:38
people you know he's just he's
00:04:41
not like Elon because Elon's a toxic
00:04:43
piece of [ __ ] sometimes and most of the
00:04:45
time uh I don't consider Sam like that
00:04:47
but he is interested in not having this
00:04:49
company die you know he is interested in
00:04:52
making it the most dominant I have no
00:04:55
question he's hyper aggressive and just
00:04:58
as feral as the rest of them and so um
00:05:01
you know they're going to have these
00:05:02
things because what they want to do is
00:05:04
pretend that they care about this safety
00:05:06
stuff which they do peripherally you
00:05:08
know more than other people I guess they
00:05:10
bring it up more but they don't they
00:05:12
just don't care about that issue and
00:05:14
they're not
00:05:15
going to be around to see the machines
00:05:17
eating us alive or something like you
00:05:19
know what I mean like it's it's not
00:05:20
their job well but this is
00:05:23
the problem its birth was like
00:05:26
it had a hippie parent and
00:05:29
a non-hippie parent right and they're
00:05:32
fighting forever these groups
00:05:34
of people are fighting forever and so
00:05:36
they are never going to be this whole
00:05:37
company isn't going to be in alignment
00:05:39
it started out out of alignment and
00:05:41
they're never going to I know that I'm
00:05:43
using a term about the idea of making
00:05:45
sure it's safe but they will never this
00:05:47
company will never it might be what
00:05:49
kills them right it might be what kills
00:05:51
them because they're going to get
00:05:52
unusual it's like Google saying don't be
00:05:55
evil that was this one big mistake like
00:05:57
why did they do that right because
00:05:59
they're evil not evil that's too far but
00:06:01
you know what I mean like that that that
00:06:04
whole cosplaying about being heroic has
00:06:06
always been a problem for that's not
00:06:08
these companies always figure out
00:06:09
they're like okay unless we're Ben
00:06:11
and Jerry's let's be honest folks we all
00:06:13
want our own big home we all want to
00:06:15
take care of our kids we all want a
00:06:17
broader selection set of mates than we
00:06:18
deserve so let's stop
00:06:21
pretending where it's dangerous is that
00:06:24
we keep thinking that Sam Altman is
00:06:27
actually a better generation of leaders
00:06:29
we don't need to be as worried about AI
00:06:31
because Sam's in charge and he'll speak
00:06:32
in hushed tones and say how concerned he
00:06:34
is and what it does is it dampens the
00:06:36
urgency and the need to elect people who
00:06:38
can craft legislation to regulate these
00:06:40
guys that's right it's not Sam's job
00:06:42
that's exactly right Sam is doing
00:06:44
his job he's firing people who get in
00:06:47
the way of him being the number one if
00:06:51
Sam were to lose to Gemini or to
00:06:55
Llama or whatever or xAI they would say
00:06:59
yeah but he was more ethical and more
00:07:01
concerned and we love him for that he's
00:07:04
not going to get a statue for that let's
00:07:06
just say there have to be ethical things
00:07:08
if you're working for this kind like
00:07:09
it's like working for Palantir here you're
00:07:12
going to make Defense Department stuff
00:07:14
don't work there like that you know or
00:07:16
you know the Google employees that's
00:07:17
different because Google sort of gave
00:07:18
them an in to complain because they
00:07:20
were like we're better than this but
00:07:22
they're really not better than this
00:07:23
right so I think when you sort of
00:07:26
you know have a performative
00:07:28
nature of being heroic you're
00:07:30
going to be slapped later cuz you're
00:07:32
going to let people down period and it's
00:07:34
going to be very clear and again the
00:07:36
reason the first line in my book was and
00:07:38
so it was capitalism after all Scott
00:07:41
that's what it is that's what's
00:07:42
happening here and I think you all
00:07:44
should go off and form a group of people
00:07:46
that scares the [ __ ] bejesus
00:07:48
out of people about the potential and you go up to
00:07:50
Congress and you march in those offices
00:07:52
and you explain to them why they need to
00:07:53
make legislation that's what that's what
00:07:55
you need to do I think when people all
00:07:58
backed Sam a lot of people are like oh
00:08:00
they love him I'm like no they love the
00:08:01
money and they want to make money here
00:08:03
and you know they want to
00:08:05
not just make money
00:08:07
they want to be at the coolest company
00:08:08
making the [ __ ] they want to be at the
00:08:09
winning company that makes them Rich
00:08:12
yeah and also is the coolest company
00:08:14
right that's more than that more than
00:08:16
that anyway we'll see what happens I
00:08:18
thought Jan's um I thought his
00:08:21
series of tweets was interesting but it
00:08:23
doesn't really I'm sorry Jan do
00:08:26
something else

Badges

This episode stands out for the following:

  • Most controversial (70)
  • Most shocking (60)
  • Most talked-about (60)

Episode Highlights

  • Speed vs. Safety Dilemma
    The ongoing struggle between rapid product rollout and safety concerns raises questions about AI's future.
    “This is an issue of speed versus safety.”
    @ 01m 03s
    May 21, 2024
  • OpenAI's For-Profit Shift
    Critics argue that OpenAI's transformation into a for-profit entity undermines its original mission.
    “OpenAI is becoming more like what they really are: a for-profit company.”
    @ 02m 54s
    May 21, 2024

Key Moments

  • High-Profile Exits @ 00:06
  • Speed vs. Safety @ 01:03
  • Netscape Moment @ 01:10
  • For-Profit Reality @ 02:54

Related Episodes

How Kara Swisher "Cracked the Case"… and Got Dragged Into the Nuzzi-Lizza-RFK Jr drama | Pivot
Kristi Noem Fired — Her New Role Sounds Like a “Bad Marvel Movie” | Pivot