
HUMANS WILL BE EXTINCT BY ______ 😳

December 04, 2025 / 01:30

This episode discusses AI superintelligence, the potential risk of human extinction, and the "gorilla problem" as a way of thinking about intelligence. The guest, along with more than 850 experts including Richard Branson and Geoffrey Hinton, signed a statement urging caution in AI development.

The conversation highlights the concerns those signatories raised about the safety of AI systems and the implications of creating something more intelligent than humans. The gorilla problem serves as a metaphor for the lack of control humans may have over AI.

The Midas touch is introduced as a cautionary tale, illustrating how greed drives companies to pursue AI technology despite the risks involved. The discussion emphasizes the dangers of unchecked technological advancement.

One guest is asked if they would halt AI progress if possible, and they express a belief that safety can still be achieved, indicating a cautious optimism about the future of AI.

TL;DR

Experts warn AI superintelligence could lead to human extinction; the Midas touch illustrates the dangers of unchecked greed in technology.

Video

00:00:00
In October, over 850 experts, including yourself and other leaders like Richard Branson and Geoffrey Hinton, signed a statement to ban AI superintelligence, as you guys raised concerns of potential human extinction.

00:00:11
Because unless we figure out how do we guarantee that the AI systems are safe, we're toast.

00:00:18
And you talk about this gorilla problem as a way to understand AI in the context of humans.

00:00:22
Yeah. So, a few million years ago, the human line branched off from the gorilla line in evolution. And now the gorillas have no say in whether they continue to exist, because we are much smarter than they are.

00:00:31
So intelligence is actually the single most important factor to control planet Earth.

00:00:35
Yep.

00:00:35
But we're in the process of making something more intelligent than us.

00:00:38
Exactly.

00:00:39
Why don't people stop then?

00:00:41
Well, one of the reasons is something called the Midas touch. King Midas is this legendary king who asked the gods, "Can everything I touch turn to gold?" We think of the Midas touch as being a good thing, but he goes to drink some water and the water has turned to gold. He goes to comfort his daughter, and his daughter turns to gold. So he dies in misery and starvation. This applies to our current situation in two ways. One is that greed is driving these companies to pursue technology with probabilities of extinction worse than playing Russian roulette. And that's even according to the people developing the technology without our permission. And people are just fooling themselves if they think it's naturally going to be controllable.

00:01:16
So if you had a button in front of you which would stop all progress in artificial intelligence, would you press it?

00:01:24
Not yet. I think there's still a decent chance we can guarantee safety, which I can explain.

Badges

This episode stands out for the following:

  • Most shocking: 70
  • Best concept / idea: 70
  • Most controversial: 70
  • Most intense: 65

Episode Highlights

  • AI Safety Concerns
    Experts warn that without proper safety measures, humanity could face extinction due to AI.
    “We're toast.”
    @ 00m 17s
    December 04, 2025
  • Gorilla Problem Explained
    Using the gorilla analogy to discuss the implications of AI intelligence over humanity.
    “Intelligence is actually the single most important factor to control planet earth.”
    @ 00m 33s
    December 04, 2025
  • The Midas Touch and AI
    Exploring how the legendary Midas touch parallels our current AI development risks.
    “Greed is driving these companies to pursue technology with the probabilities of extinction.”
    @ 01m 01s
    December 04, 2025

Episode Quotes

Key Moments

  • AI Super Intelligence @ 00:06
  • Gorilla Analogy @ 00:18
  • Midas Touch @ 00:42
  • Greed and AI @ 01:01
  • AI Safety Debate @ 01:19

Related Episodes

AI Expert: (Warning) 2030 Might Be The Point Of No Return! We've Been Lied To About AI!
Creator of AI: We Have 2 Years Before Everything Changes! These Jobs Won't Exist in 24 Months!
Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control! Geoffrey Hinton
AI Expert: We Have 2 Years Before Everything Changes! We Need To Start Protesting! - Tristan Harris
Ex-Google Exec (WARNING): The Next 15 Years Will Be Hell! We Need To Start Preparing! - Mo Gawdat