
Grok 4 Wows, The Bitter Lesson, Third Party, AI Browsers, SCOTUS backs POTUS on RIFs

July 11, 2025 / 01:30:50

This episode covers the latest in AI advancements, the potential for a third political party led by Elon Musk, and discussions on federal workforce reductions. Guests include Travis Kalanick, Keith Rabois, and others.

The conversation begins with a humorous anecdote about a trip to Passalacqua on Lake Como, where the hosts discuss the luxurious amenities and their experience. Jason shares a funny story about taking items from the hotel.

Travis Kalanick discusses his work with CloudKitchens and autonomous technology, particularly the Pony AI deal and its implications for food delivery and preparation. He emphasizes the efficiency gains from automation in the food industry.

The panel then shifts to Elon Musk's announcement of a new political party, discussing its potential impact on American politics and the feasibility of winning congressional seats. The conversation touches on the Supreme Court's ruling regarding federal workforce reductions and the implications for executive power.

Finally, the hosts share personal recommendations, including a documentary on Osama bin Laden and Travis's experiences with backgammon tournaments, highlighting the blend of leisure and competition.

TL;DR

Elon Musk's potential political party and AI advancements dominate discussions, alongside personal anecdotes and recommendations from the hosts.

Video

00:00:00
I have a very funny story to tell you. Jason, where have you been? I've been trying to text you. You've been offline. What's
00:00:05
going on? Where have you been? I've been working feverishly, but yesterday I had to
00:00:12
go to prepare for some meetings that I have on Sunday, which I can't tell you about, but
00:00:17
Nat and I, Nat and I went to Passalacqua, which is on Lake Como, which is, I
00:00:24
mean, it's stunning. The grounds are stunning. The hotel is stunning. If you have a chance, go to Lake Como.
00:00:30
Anyways, this is us at Passalacqua. Who's the beautiful woman there? Is that the woman who owns it or something? Is
00:00:36
that the queen? That's not But the best part is we had such a good time. You know how they have like a registry book to leave a message?
00:00:43
Sure. So, I left a message. Here we go. What a truly magnificent
00:00:50
place. Above and beyond any expectation we had. Go below. Go below. That's not for me.
00:00:56
Thank you. We took everything, the Freebergs. We took everything. Oh my
00:01:02
god, Jason. The hangers. Okay. The laundry bag, toothpaste, the robes,
00:01:08
the slippers, everything. Absolutely fantastic. Listen, you're going to have to send a bill to the Freebergs. Absolutely.
00:01:15
[Music] We'll let your winners ride.
00:01:20
[Music] And that's it. We open sourced it to the
00:01:26
fans and they've just gone crazy with love.
00:01:32
All right, listen. We've got a great panel this week. It's the summer. Things are slow. Some people are busy. I think
00:01:38
uh our prince of panic attacks, our dear Sultan of Science, uh, he's out.
00:01:44
Sax is busy. Couldn't make it this week. In his place, another brilliant PayPal
00:01:50
alum and, uh, dare I say, GOP supporter, Keith Rabois. How are you, sir?
00:01:56
Pleasure to be with you again. Nice to see you. And I'm assuming you're in gorgeous Florida or somewhere in
00:02:03
Yeah, I'm actually in New York. Oh, my hometown. Is it safe? Is it okay?
00:02:08
Mamdani, uh, chasing you down the street? Not yet, but it's safe.
00:02:13
Seize your assets. It's safe. Yeah, it's safe right now. We'll see you on November 4th. You know, as you probably
00:02:18
heard, on July 4th was the first time in recorded history that there were no shootings or no murders in New York on
00:02:24
that day. So, right now, things are in pretty good shape, but we may be maybe leaving New York quickly.
00:02:30
Yeah, you're going to probably want to sell that place if you got one there because Mamdani is going to seize it and turn it into a drugstore for you. Yes,
00:02:37
it's going to be drug stores. Travis Kalanick is back with us. How you doing, bestie?
00:02:43
Uh, pretty good. Pretty good. Yeah. Second appearance here on the round table and uh third time on the show. Of
00:02:49
course, you spoke at the summit. You've been busy with CloudKitchens. Yeah. Lots of exciting things going on. Oh, lots of stuff. Lots of stuff. The
00:02:56
robots the robots are taking over. We're we're rolling out. We're rolling out robots. Yeah. TK, can you tell us what you're doing
00:03:03
with this Pony AI thing or not? That's speculation. Uh, look, you know, obviously autonomy,
00:03:10
as we, you know, in the US we have, of course. Wait, do you want to just frame for people that may not be up
00:03:16
to speed what was announced? Or at least, why don't you frame it? So, Pony AI is, um, an autonomous company doing
00:03:23
self-driving? It's one of the few uh players that actually have cars on the road. They're based in China. They've
00:03:29
got a lot of operations in the Middle East. They've got a deal with um a delivery company called Uber, which you
00:03:36
might be familiar with. Uh okay. So
00:03:41
look, well the deal was basically that you would partner with Uber, license in the
00:03:46
Pony technology and essentially start a competitor, I guess, to Waymo and Tesla.
00:03:53
Let me work on this one. Okay. So in the US we have Waymo. We see the Waymos
00:04:00
in San Francisco, Los Angeles, Boston, coming soon to Miami, coming soon to
00:04:06
Atlanta, coming soon to DC. They're even talking about New York.
00:04:11
Tesla's sort of like the, you know, they're doing it the hard way, you know, classic Elon style, like let's
00:04:19
let's do this sort of in a fundamental holy [ __ ] let's go all the way kind of
00:04:25
kind of approach. Uh, and it's unclear when it gets over the line. Of course, he launched sort of a
00:04:32
semi-pilot of sorts in Austin recently, but there are no other alternatives. So
00:04:38
what happens is is some of the folks who are interested in making sure there are alternatives
00:04:44
have reached out, they've reached out to me, and there are different discussions that get going, because
00:04:49
they're like, Travis, you did autonomy way back in the day, got the Uber autonomous
00:04:55
stuff going in 2014. Maybe there's something to do here to
00:05:02
create optionality now. Maybe, like, I'm of course very interested in the food side. I talk about autonomous burritos
00:05:08
being a big deal because if you can automate the kitchen, the production of food and then you can automate the sort
00:05:15
of logistics around food, you take a huge amount of cost out of
00:05:21
what's going on in food and that's of course near and dear to my heart. There's folks of course that want to see
00:05:28
autonomy and mobility. That's a real thing. It it may be that or I would say
00:05:34
if you get the autonomy problem right, you can use it to apply to both
00:05:40
problems. So there's a lot of folks interested in moving things, moving food, moving
00:05:46
people. And if there is some kind of autonomous
00:05:51
technology that maybe I get involved in, it might apply to a bunch of different
00:05:57
things. And so I've got some inbound. Let's just put it that way. There's no there's no real deal right now, but
00:06:02
there is definitely some inbound. And I think there is some news about some of that inbound that may or may not be occurring. That's probably the best way
00:06:09
to put it. That is long-winded. I'll try to tighten that up next time. No, no, I think it's great to get the
00:06:15
overview here first on, uh, All-In. Thank you for sharing it with us. And everybody knows you have
00:06:21
been doing a bowl builder, Lab37 I think it's called. We'll throw it up on the screen. Not sure what the status of it
00:06:27
is. And then I'll let you go, Chamath, with your follow-up question. But I think there's a pretty interesting concept here of the bowl getting built
00:06:34
and then put into a self-driving car. Now, that machine looks huge, but it's actually 60 square feet.
00:06:39
That picture makes it look monstrous. It's a 60-square-foot machine. Like, uh, imagine
00:06:45
running a Sweetgreen-like brand or a Chipotle-like brand and just making it
00:06:50
so it comes to life for people who, you know, are like, "Hey, what is this thing?" Imagine you just order
00:06:56
online exactly the kind of bowl you want, and actually this machine could run many brands at the same time, and
00:07:02
it does. You build the bowl you want, whatever ingredients. Uh, if
00:07:07
you look at the bottom, you see those little white bricks; that's what carries the bowl underneath the
00:07:13
dispensers. It fills up, the machine sauces the bowl, then it puts a
00:07:19
lid on it, takes the bowl, puts it in a bag, uh, puts utensils in the bag, seals
00:07:25
the bag, and the bag goes down a conveyor belt, where then another machine, what we
00:07:31
would call an AGV, takes the bowl to the front of house. The bowl gets put into a
00:07:36
locker. The courier, be it a DoorDash or Uber Eats courier, will wave their app in
00:07:43
front of a camera, and it will open up the locker that has the food they're supposed to pick up. So it
00:07:48
takes out a lot of what we would call the cost of assembly, um, which
00:07:54
reduces mistakes, right? I mean a mistake. Yeah, we know exactly how many grams of every
00:07:59
ingredient are put in. That's exactly what you're supposed to get and so you get a higher quality product.
00:08:07
It takes a lot of the cost out. You imagine ultimately that's going to be they're going to be couriers with that
00:08:12
as well. That, you know, I like to say autonomous burritos. Like, is a Waymo gonna carry a burrito, or is Tesla going
00:08:19
to have a a machine that carries food or you know is there another another
00:08:24
company that ends up doing you know sort of the the the things the the the the autonomous delivery of things and the
00:08:31
point is, well, where we are right now is we've got customers, and so those customers are starting to deploy
00:08:38
this quarter, and it's pretty interesting. I mean, in our
00:08:43
delivery kitchens, the cost of labor is about 30% of revenue. That's what the successful guys run,
00:08:50
let's say 30%, 35% of revenue. In a brick-and-mortar
00:08:56
restaurant it's even higher. Okay. When they're running our machine, it's
00:09:01
between seven and 10% of revenue. Right. Amazing. Then you take out the cost of
00:09:07
the delivery, you know, and now it's becoming everybody can have a private chef, which was your original vision for
00:09:12
Uber was. People don't know the original tagline, but it was: everybody has a private driver. "Everyone's private driver" was the
00:09:18
original for Uber. Basically, the infrastructure was already there. And I said this on, you know, one of your
00:09:23
recent, I think it was at the All-In Summit, Jason, but like um
00:09:30
in the mobility, cars, you know, I transport uh space, the roads were
00:09:37
already there. The cars were already built. People weren't using their cars 98% of the day. So the infrastructure is
00:09:44
already there to get people around to do this as a service and do it very efficiently and conveniently
00:09:50
with food, the infrastructure is not there. Like yes, restaurants have excess capacity. That's what Uber Eats
00:09:56
utilizes. But to go and say like let's make 30% of all meals in a in a city uh
00:10:03
sort of prepared and delivered by a service, the infrastructure is not
00:10:08
there. So you have to build it. So our company that the mission is uh
00:10:14
infrastructure for better food. So that's real estate, that's software and robotics for the production and delivery
00:10:21
of food in this super efficient way. All right. Uh Keith, what are your thoughts? Any questions for
00:10:28
Well, he's not here, but isn't this what David Friedberg tried to do a few years ago? Yeah. This came up on the last All-In.
00:10:34
Yeah. Or the last one I was at. Yeah. Yeah. Pizza. Pizza. The problem was I told Friedberg, "People don't want to eat
00:10:40
quinoa. You got to put a little steak in there, maybe a piece of salmon." But he
00:10:45
was kind of reluctant. I think eventually he relented and let people have a little bit of protein. Uh, but yeah, it's such a
00:10:51
great vision. And wait, he died as a vegan martyr. I think the business died as a martyr.
00:10:57
Well, that was the hill he was led to. There's a lot of people have died on that hill. But the bottom line is if
00:11:03
you're going to get into automation, it has to be end-to-end automation. And what I mean by that is
00:11:09
like, there are pizza companies that have come and gone, automated pizza companies, where it's
00:11:15
like we have a pizza machine and everybody's like yeah this is amazing and you have a guy you have a
00:11:20
million-dollar pizza machine and then on the left you have a guy feeding ingredients into the pizza machine and on the right you have a guy taking the
00:11:27
pizza out and then putting it in a box and doing all this. So instead of one guy making pizzas, I have a
00:11:33
million-dollar machine and two guys making pizza. And so when you look at
00:11:38
these auton like robotic food production machines or food assembly machines, you
00:11:46
have to look at the full stack and say does it work with the ecosystem that
00:11:51
exists in a restaurant and does it go full stack from you know like like we
00:11:57
have this thing that machine we saw earlier. The staff preps the food, they
00:12:02
put the food in the machine and then they leave. They're gone. This restaurant runs itself for many hours without anybody
00:12:10
there. But this could be McDonald's, Burger King, and Taco Bell. Nobody would know
00:12:17
that right there. That machine is a it's an assembly machine, right? The food is prepped by humans and then assembled by
00:12:24
this machine. For a Chipotle or a sweet green, this is like a a majority of their labor, right? You go up to a
00:12:30
Chipotle, there's like 10 guys at lunch and you're still in line. That machine right there does 300 bowls an hour,
00:12:38
right? And so you go, okay, this is what's called, um, like the
00:12:44
assembly line. It's just that front line where you basically assemble things. I think sometimes I call it the make line.
00:12:51
What will happen over time is you'll have perpendicular lines going into it where you're producing food,
00:12:58
right? So you'll have a production or bake line going into an assembly line here and then you go, "Oh, wow." So you
00:13:07
have it something that dispenses burgers on buns. That's the dispenser. That's the assembly,
00:13:13
right? But Factorio on steroids basically. Yeah. And then it's like how do you cook that burger? That's what I call that's
00:13:20
what we call state change. So state change is the cooking of the food. Assembly is, like, how do I put
00:13:28
it together and plate it. Doesn't this collapse, like, for example, if you have a yield of 300 per hour, you
00:13:34
said, out of that one machine? Yes. Very quickly you can impute the value
00:13:39
of having a smaller footprint store with five of these things in a faceless
00:13:44
warehouse with drone delivery or cars. You don't need the physical infrastructure.
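As an aside, the imputation being gestured at here can be sketched with the numbers mentioned on the show (300 bowls an hour, 60 square feet per machine); the peak operating hours and the comparison storefront size below are purely illustrative assumptions:

```python
# Back-of-the-envelope version of the imputation: the 300 bowls/hour and
# 60 sq ft per machine come from the conversation; the peak hours and the
# comparison storefront size are illustrative assumptions.
bowls_per_hour = 300
machines = 5
peak_hours_per_day = 4                           # assumed lunch + dinner rush

daily_capacity = bowls_per_hour * machines * peak_hours_per_day
machine_footprint_sqft = 60 * machines
storefront_sqft = 2500                           # assumed typical fast-casual store

print(daily_capacity)                            # -> 6000 bowls a day
print(machine_footprint_sqft / storefront_sqft)  # -> 0.12 of the footprint
```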
00:13:50
So then don't you create a wasteland of real estate or how do you repurpose all the real estate? Well, the way to think
00:13:55
about it is like 90% of well, it's probably a little lower than that right now. Let's say 85% of all meals in the
00:14:01
US are at home. They just are. And a vast majority of those meals are
00:14:08
cooked at home. So, you know, like Uber Eats and Door Dash, they represent like
00:14:14
1.8% or 2% of all meals right now. It's very tiny, right? So what you're doing
00:14:21
is, you're using real estate and infrastructure to prepare and deliver
00:14:27
meals to people at their homes. And so it's not like that; the restaurants still exist.
00:14:33
We're still going to want to go to restaurants. We're still going to want to go outside. We learned that during COVID; we knew it before. We definitely know it
00:14:39
after. Um and so I don't it's not really like a decimating real estate situation.
00:14:46
It's taking a thing we used to do for ourselves and creating a service that does it higher quality. You know, sort
00:14:52
of, I like to say, you don't have to be wealthy to be healthy, and it's just infrastructure to get that cost
00:14:59
down. And so you're doing something as a service that we used to do at home. I think in the super long run you're
00:15:06
like what where's the story on grocery stores? If you go to like in in 20
00:15:12
years, I think everybody agrees, you you will have machines making very
00:15:18
high quality, very personalized meals for everybody. This will be good for Keith because he
00:15:24
measures stuff down to like five calories based on his Instagram. What's your body
00:15:30
fat? Like seven percent. It's like, just open his Instagram. He posted four
00:15:37
times today about his body fat, like, so disgusted with himself at 10%. It's like bad at 10. But, um, I actually
00:15:42
think the vision of this actually the natural implication and maybe the home run version of this is everybody has a
00:15:48
private chef in their house, a robot in their house that actually does this personalized because people do want to
00:15:55
cook at home, but they don't have the time. Yeah. Or space uh and infrastructure.
00:16:02
But man, these delivery services are charging. Rich people do this all the time, right? They do these crazy meal
00:16:07
delivery services for 200 bucks a day and this is just going to abstract it down to everybody and man people get
00:16:14
creative when there's that empty space, to your point, Chamath, about what happens to all this space. When I lived in New York in the 80s and 90s it was common to
00:16:21
in Tribeca in West Chelsea where I lived to take storefronts put your little architect's office in the front and live
00:16:28
in the back. And many people were hacking real estate. We still need five, 10 million homes in this country. And
00:16:34
they're already doing this with malls. I I I keep seeing malls being turned into colleges and creative spaces. One of
00:16:41
them in Boston, they turned like the second and third floor into studio apartments for artists. So, you know,
00:16:48
where there's a will, there's a way we could use the space. I mean, yeah. Where this goes, what Chamath is saying, and where the real estate goes is
00:16:53
we call it the internet food court where, you know, you're on Amazon, right? It's the everything store. Now
00:17:00
imagine that for food, and then imagine you have an 8,000-square-foot facility where
00:17:06
basically anything can be made. Anything can be made, because that machine you saw has 18, sort of, dispensers for food, 10
00:17:14
different sauces. You get the idea. Now what what about when it's 50 or 100 dispensers for food? What if you have
00:17:21
multiple machines with a hundred dispensers for food? That's crazy. The combinatorial
00:17:26
math in terms of what's possible, what can be made, sort of, you know, goes exponential, and so
00:17:34
the internet food court is sort of the vision for where this all goes. Another example of the bitter lesson.
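The combinatorial math here is easy to make concrete. The 18 dispensers and 10 sauces are the figures mentioned above; the five-ingredient bowl is an illustrative assumption:

```python
from math import comb

# Distinct bowls = (ways to choose ingredients) x (sauce choices).
# 18 dispensers and 10 sauces are the figures mentioned; a five-ingredient
# bowl is an illustrative assumption.
today = comb(18, 5) * 10       # 85,680 distinct bowls
scaled = comb(100, 5) * 10     # 752,875,200 distinct bowls
print(today, scaled)
```

Going from 18 to 100 dispensers multiplies the menu by nearly 10,000x, which is the "goes exponential" intuition in rough numbers.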
00:17:41
The bitter lesson. Yeah, we're going to get to that, I guess, today in a very full docket. Before we get to
00:17:47
that, just a little bit of housekeeping here. September 7th, 8th, 9th in Los Angeles,
00:17:54
the All-In Summit again, all-in.com yada yada yada. The lineup is stacked and
00:18:00
we're gonna start announcing the speakers. People have been begging us to announce the speakers. I don't know.
00:18:05
You got to hold some back. Careful, careful. Hold a couple back, but we got some really nice speakers lined up. It is
00:18:11
going to be extraordinary. It is. It is the best one yet. I mean, every year we have this done.
00:18:18
Yeah. Yeah. Every year we have this little bit of panic, like, um, you know, are we going to get great speakers? And man, they started flowing in this week. It's going
00:18:24
to be extraordinary almost as extraordinary as this delicious tequila behind my head here.
00:18:29
Get the All-In tequila at tequila.allin.com. Deliveries begin late summer. He's moving to the side. You can't even
00:18:36
tell right there. All right. Listen. Oh wow. Lots to discuss this week. Obviously AI
00:18:43
is continuing to be the big story in our industry and for good reason. Our bestie
00:18:50
uh Elon released Grok 4 Wednesday night. Two versions, a base model and a heavy
00:18:57
model. 30 bucks a month for the base. $300 a month for this heavy model which
00:19:04
has a very unique feature. You can have a multi-agent feature. I got to see this actually when I visited xAI a couple of
00:19:09
weeks ago, where multiple agents work on the same problem, and they do that simultaneously, obviously, and then
00:19:15
compare each other's work, and it gives you, kind of like a study group, the best answer by consensus. Really
00:19:21
interesting. According to Artificial Analysis benchmarks, you can pull that up, Nick, the Grok 4 base model has surpassed
00:19:29
OpenAI's o3 Pro and Google Gemini 2.5 Pro as the most intelligent model. That
00:19:36
includes like seven different industry-standard evaluation tests. You can look it up, but reasoning, math, coding, all
00:19:43
that kind of stuff. This is, you know, book smarts, not necessarily street smart. So, it doesn't mean that these
00:19:49
things can reason. And obviously there was a little, um, there was a little kerfuffle on, um, X, formerly known as
00:19:57
Twitter, where Grok got a little frisky and was saying all kinds of crazy stuff and needed to, uh, maybe be red-teamed a
00:20:04
little bit more decisively. Many of you know Grok 4 was trained on
00:20:09
Colossus. That's that giant data center that Elon's been building. And we showed
00:20:14
the chart here. Chamath, you sent us a link to "The Bitter Lesson" by, uh, Rich Sutton in the group chat.
00:20:20
That's the 2019 blog post. We'll pull it up here for people to take a look at and we'll put it in the show notes. Maybe
00:20:26
just generally, yeah, your reaction to both how quickly Elon
00:20:33
has, and that chart showed it, how quickly Elon has caught up and I don't think people expected him to
00:20:38
take the lead, but here we are. Before we start, Nick, can you please show Elon's tweet about how they did on the
00:20:44
AGI benchmark? It's absolutely incredible.
00:20:50
Two things. One is how quickly starting
00:20:55
in March of 2023, so we're talking about less than 2 and 1/2 years, what this
00:21:03
team has accomplished and how far ahead they are of everybody
00:21:08
else. That's demonstrated by this. But the second is a fundamental architectural decision that Elon made
00:21:16
which I think we didn't fully appreciate until now. And it maps to an
00:21:22
architectural decision he made at Tesla as well. And for all we know, we'll figure out that he made an equivalent
00:21:27
decision at SpaceX. And that decision is really well encapsulated by this essay,
00:21:34
the bitter lesson by Rich Sutton. And Nick, if you can just throw this up there, but just to summarize what this
00:21:41
says, it basically says in a nutshell that you're always better off when
00:21:47
you're trying to solve an AI problem taking a general learning approach that can scale with computation because it
00:21:54
ultimately proves to be the most effective. And the alternative would be something that's much more human labored
00:22:01
and human involved that requires human knowledge. And so the first method, what
00:22:07
it essentially allows you to do is view any problem as an endless scalable
00:22:14
search or learning task. And as it's turned out, whether it's chess or go or
00:22:21
speech recognition or computer vision, whenever there was two competing approaches, one that used general
00:22:27
computation and one that used human knowledge, the general computation
00:22:32
problem always won. And so it creates this bitter lesson for humans that want
00:22:38
to think that we are at the center of all of this critical learning and all of these leaps. In more AI specific
00:22:45
language, what it means is that a lot of these systems create these embeddings that are just not understandable by
00:22:50
humans at all, but it yields incredible results. So why is this crazy? Well, he made this
00:22:57
huge bet on this 100,000 GPU cluster. People thought, "Wow, that's a lot. Is it going to bear fruit?" Then he said,
00:23:03
"No, actually, I'm scaling it up to 250,000." Then he said, "It's going to scale up to a million." And what these
00:23:09
results show is a general computational approach that doesn't require as much
00:23:15
human labeling can actually get to the answer and better answers faster. That
00:23:20
has huge implications because if you think about all these other companies,
00:23:25
what has Llama been doing? They just spent $15 billion to buy 49% of Scale AI.
00:23:32
That's exactly a bet on human knowledge. What is Gemini doing? What is OpenAI doing? What is Anthropic doing? So all
00:23:38
these things come into question. And then the last thing I'll say is if you look back, he made this bet once before,
00:23:45
which was Tesla FSD versus Waymo. And Tesla FSD only had cameras. It didn't
00:23:51
have LiDAR. But the bet was, I'll just collect billions and billions of driving
00:23:57
miles before anybody else does and apply general compute and it'll get to
00:24:03
autonomy faster than the other more laborious and very expensive approach.
00:24:09
So I just think it's an incredible moment in technology where we see so many examples. Travis is another one
00:24:15
what he's just talked about. You know, the bitter lesson is you could believe that, you know, food is this immutable
00:24:22
thing that's made meticulously by hand by these individuals, or you can take this general purpose computer approach,
00:24:28
which is what he took, waited for these cost curves to come into play, and now you can scale food to every human on
00:24:34
Earth. I I just think it's a it's so profoundly important. One thing I'll throw out there, Chimoth,
00:24:39
is the Tesla approach for autonomy is taking human knowledge. In fact, the
00:24:46
whole idea is to approximate human driving, right? That is the whole damn
00:24:53
thing. Now, depending on your approach in the technology, you can do, like, what's called an end-to-end approach, or
00:24:59
you can look at okay perception, prediction, planning, uh, and control,
00:25:04
which are like these four modules that you sort of engineer, if
00:25:09
that makes sense. But it's approximating human driving to do it. The difference
00:25:15
is that, you know, I I think Elon's taken a a
00:25:21
almost a more human approach, which is like, I've got two eyes. Why can't my car do it like a human?
00:25:28
Like, I don't have any LiDAR spinning around on my head as a human. Why can't my car? So, it's kind of interesting.
00:25:34
He's sort of taking what you're saying, Chamath, on the computation side, because Hardware 5 is coming out on Tesla
00:25:41
probably next year which is going to make a big difference in what FSD can do.
00:25:47
That's the compute side you're talking about. But then he is approximating human. Yeah. I just meant that, you know, other
00:25:53
than the first versions of FSD, which I think Andrej Karpathy talked about,
00:25:58
you know, they're not really so reliant anymore on human labeling, per
00:26:04
se, right? So that, yeah, that inference. And then the other crazy thing that he said
00:26:10
subsequent versions of Grok are not going to be trained on any
00:26:15
traditional data set that exists in the wild. The cumulative sum of human knowledge has been exhausted in AI
00:26:21
training. That happened basically last last year. And so the only way to then supplement that is with synthetic data
00:26:28
where the AI creates, it'll sort of write an essay or it'll come up with a thesis, and then it will grade itself, um,
00:26:34
and sort of go through this process of self-learning with synthetic data. He said that he's going to have
00:26:40
agents creating synthetic data from scratch that then drive all the training
00:26:46
which I just think is crazy. Just explain this concept one more time with the bitter lesson. Hand-coding
00:26:52
heuristics into the computer and saying, "Hey, here are specific openings in chess."
00:26:57
Use chess? Yeah, use chess, right? You're hand-coding specific examples of openings in there, endgames, etc.
00:27:03
versus just saying, "Play every possible game, and here's every game we have." So, here's
00:27:08
Yeah. So, the two approaches would be, let's say, like Travis and I were building competing versions of a chess
00:27:14
solver, and Travis's approach would say, I'm just going to define the chessboard.
00:27:20
I'm gonna give the players certain boundaries in which they can move,
00:27:25
right? So the bishop can only move diagonally and there's a couple of boundary conditions and I'm going to
00:27:31
create a reward function, and I'm just going to let the thing self-learn and self-play. That's his version.
00:27:38
And then what happens is when you map out every single permutation
00:27:45
when you go and play Keith, who's the best chess player in the world, what you're doing at that point is saying,
00:27:50
"Okay, Keith made this move." So you search for what Keith's move is, and you
00:27:56
have a distribution of the best moves that you could make in response or vice versa. That was the cutting edge
00:28:03
approach. The different approach which is more, you know, what people would think is more quote unquote elegant and
00:28:09
less brute force would be Jason for you and I to sit there and say, "Okay, if Keith moves here, we should do this. We
00:28:15
should do this specific variation of the Sicilian Defense." And it's too much human knowledge. And I
00:28:21
think what what it turned out was there was a psychological need for humans to believe we were part of the answer.
00:28:27
But what this is showing is because of Moore's law and because of general computation, it's just not necessary.
00:28:33
You just have to let go, give up control. And that's very hard for some people and for others.
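The two chess approaches in this exchange can be caricatured in a few lines of code. To keep it tiny, the sketch swaps chess for Nim (take one or two stones; whoever takes the last stone wins): one player uses a hand-coded human heuristic, the other encodes only the rules and the win condition and lets exhaustive self-play search do the rest. Everything here is a toy illustration, not a real engine:

```python
from functools import lru_cache

# Toy stand-in for the two approaches, using Nim instead of chess: players
# alternately take 1 or 2 stones, and whoever takes the last stone wins.

# Approach A (human knowledge): a hand-coded heuristic someone thought was smart.
def heuristic_move(stones):
    return 2 if stones >= 2 else 1          # "always grab as much as you can"

# Approach B (general search): encode only the rules and the win condition,
# then let exhaustive search over self-play find the best move.
@lru_cache(maxsize=None)
def wins(stones):
    # True if the player to move can force a win with `stones` left.
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones):
    for take in (1, 2):
        if take <= stones and not wins(stones - take):
            return take                      # leaves the opponent in a losing spot
    return 1                                 # every move loses; take 1 and hope

def play(first, second, stones):
    # Alternate moves until the stones run out; the last mover wins.
    movers = (first, second)
    turn = 0
    while stones:
        stones -= movers[turn % 2](stones)
        turn += 1
    return "first" if (turn - 1) % 2 == 0 else "second"

# From a losing position for the greedy heuristic, the search never slips:
print(play(heuristic_move, best_move, 6))    # -> second (the search player)
```

The heuristic only wins when it happens to start from an already-winning position; the search player converts every winnable position, which is the bitter lesson in miniature.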
00:28:38
It's also very hard in some circumstances where a car is driving down the road and it's learning in that process, which is why you need a safety
00:28:45
driver. And I think Elon made the right decision to put one in there. Keith, your thoughts? Yeah, a couple points. It's not
00:28:50
quite that binary. Chamath, I generally agree with your arc, but like if you think about LLMs being the most important unlock in AI, LLMs are all
00:29:00
trained on human writing. So someone wrote every piece of data that every LLM
00:29:05
uses; a human wrote it at some point in history. So yes, it's true that they've shocked everybody, including OpenAI's,
00:29:12
you know, original team on the implications, the broad implications, the general applicability to almost
00:29:18
every problem, but it's not like there was uh some tablets floating in space that weren't drafted by humans that
00:29:24
we've trained on. As you get into non-LLM-based, uh, models, you may be totally right, but almost no one's really using
00:29:31
non-LLM-based models at scale. On driving specifically, Travis is totally right that humans are actually
00:29:38
really good drivers except when they get distracted. They get distracted by drugs or alcohol. They get distracted by being
00:29:43
tired. They get distracted by turning on the radio. They get distracted by chatting with their passenger. So
00:29:48
training against human behaviors actually turned out to be a great decision, because for whatever sort of
00:29:54
Darwinistic reasons, humans are pretty ideal drivers, and so you don't have to reason from first principles. This is a
00:30:00
much better path, and I think, again, there may be a broad sort of lesson there.
00:30:06
The most important thing, I think, as a VC, that you said is: we've been debating for years, should we invest in companies
00:30:12
like Scale or Mercor or Surge, any of these. The truth is, I think there's a very short half-life
00:30:18
on human-labeled data, and so everybody who's investing in these companies just
00:30:24
looking at revenue traction really didn't understand that there may be a year, two years, three years max when
00:30:32
anybody uses human-labeled data for maybe anything, because we hit the end of human
00:30:38
knowledge, or just the collection of it, 99% done. Or you train on it so well
00:30:46
that you don't need to label anymore. Like, the machines know how to label as well as or better than a human. And so
00:30:54
we're seeing this in the self-driving space. Labeling was huge, right? You would have a
00:30:59
three-dimensional sort of scene that's created by video plus LiDAR. Let's say,
00:31:06
okay, I have to label all of these, essentially what become boxes, like I've
00:31:11
identified objects. Some of the players in the autonomous software space, the autonomous vehicle
00:31:17
software space are no longer doing any labeling because the machines are doing it all.
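The auto-labeling idea described here, where a trained model generates the boxes humans used to draw, can be sketched roughly as follows. The `teacher_detector` is a stub standing in for a real perception model; all names and the confidence threshold are illustrative assumptions, not any company's actual pipeline.

```python
# Sketch of machine auto-labeling: a trained "teacher" detector produces the
# object labels that humans used to draw by hand on each frame. The detector
# below is a stub; in practice it would be a large video+LiDAR perception
# model run offline over the fleet's recordings.

from dataclasses import dataclass

@dataclass
class Box:
    label: str
    confidence: float

def teacher_detector(frame: dict) -> list[Box]:
    # Stub standing in for a real perception model (assumption for the sketch).
    return [Box(obj["kind"], obj["score"]) for obj in frame["objects"]]

def auto_label(frames: list[dict], min_conf: float = 0.9) -> list[list[Box]]:
    """Keep only high-confidence machine labels. Low-confidence detections
    could be routed to a human reviewer, which shrinks the human labeling
    workload rather than growing it with the data."""
    return [[b for b in teacher_detector(f) if b.confidence >= min_conf]
            for f in frames]
```

The design point is the threshold: as the teacher model improves, `min_conf` filters out less and less, and the human share of labeling trends toward zero.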
00:31:23
Just broadly, it'll just be built into the chipset that this is a stop sign. Like, we know what a stop sign is. We don't
00:31:29
need the millionth CAPTCHA where somebody's asked to find the stop sign or the traffic light, and
00:31:36
eventually the machines are just way better than humans at identifying these things. Or, to be very practical, when you see
00:31:43
a stop sign you don't have to identify that it's a stop sign. You just see that every human when they encounter a stop
00:31:50
sign, 99.9% of the time they hit the brake, and they never ask. So nobody actually knows it's a stop sign. It's just that
00:31:57
you hit the brake when you see something that looks like this object. It's just a vibe. Yeah, it's a vibe.
00:32:03
I would just say that that's like intuitive knowledge versus like the expressly labeled human knowledge. The
00:32:08
question for me is if everybody was so reliant on human labeling initially, if
00:32:15
you're an investor now, when you see these Grok 4 results,
00:32:21
how do you make an investment decision that's not purely levered to just computation?
00:32:26
So if you look at these results, does it mean that the, you know, there's 300 to
00:32:33
1,000 basis points of lag between just letting the computers vibe
00:32:39
itself to the answer versus interjecting ourselves? If interjecting ourselves slows us down by 300 to 1,000 basis
00:32:46
points per successive iteration, then over two or three iterations, you've totally lost. So, what does it mean for
00:32:54
everybody that's not Grok when they wake up today and they have to decide how do I change my strategy or double
00:33:00
down? I think look, I'm I'm not in the investment game, but if I were, it would
00:33:06
be all about scientific breakthroughs. So, I sometimes get in this place where I'm going down a path. I,
00:33:14
you know, I'll be up at 4:00 or 5:00 in the morning. My day hasn't quite started, but I'm not sleeping anymore.
00:33:21
And I'll be on Quora and see some cool quantum physics question or something else I'm looking
00:33:27
into, and I'll go down this thread with GPT or Grok, and I'll start to get to
00:33:34
the edge of what's known in quantum physics and then I'm doing the
00:33:39
equivalent of vibe coding except it's vibe physics
00:33:45
and we're approaching what's known and I'm trying to poke and see if there's breakthroughs to be had and I've gotten
00:33:51
pretty damn close to some interesting breakthroughs just doing that. And, you know, I pinged
00:33:58
Elon at some point. I'm just like, dude, if I'm doing this and I'm a super
00:34:04
amateur-hour physics enthusiast, what about all those PhD students and
00:34:11
postdocs that are super legit using this tool? And this is pre-Grok 4. Now with
00:34:16
Grok 4... There were a lot of mistakes I was seeing Grok make that
00:34:22
then I would correct, and we would talk about it. Grok could be this place where breakthroughs are actually
00:34:28
happening, new breakthroughs. So if I'm investing in this space, I would be like,
00:34:34
who's got the edge on scientific breakthroughs, and the application
00:34:39
layer on top of these foundational models that orients that direction? Is your perception that the LLMs are
00:34:46
actually starting to get to the reasoning level that they'll come up with a novel concept theory and have
00:34:53
that breakthrough, or that we're kind of reading into it and it's just trying random stuff at the margins.
00:34:59
Or maybe it doesn't happen. No, no, no. So, what I've seen, and again, I haven't used Grok 4. I tried to use it early this morning, but
00:35:06
for some reason I couldn't do it on my on my app. But
00:35:11
so let's say we're talking Grok 3 and existing ChatGPT as it is. No, it cannot come up with a new idea. These
00:35:18
things are so wedded to what is known. Even when I come up with a new idea, I have to really... it's
00:35:26
like pulling a donkey sort of. You see, you're pulling it because it doesn't want to break
00:35:32
conventional wisdom. It's like really adhering to conventional wisdom. You keep
00:35:37
pulling at it, and then eventually it goes, "Oh [ __ ], you got something." But then when it says that, when it says
00:35:44
that, then you have to go, "Okay, it said that, but I'm not sure." Like, you have to double- and
00:35:49
triple check to make sure that you really got something. To your point, when these models are fully divorced
00:35:55
from having to learn on the known world and instead can just learn synthetically,
00:36:00
yeah, then everything gets flipped upside down to what is the best hypothesis you have
00:36:06
or what is the best question? You could just give it some problem and it would just figure it out.
00:36:11
So, where I go on this one, guys, is it's all about the scientific method, right? If you have an LLM or
00:36:19
foundational model of some kind that is the best in the world at the scientific method,
00:36:25
the game is [ __ ] over. You basically just light up more GPUs and you've got like a thousand
00:36:32
more PhD students working for you. Keith, you're nodding your head here.
00:36:39
I agree with that. I think that's fantastic, because with the scientific method, also, the faster it is, the more, when
00:36:46
you have a hypothesis, the faster you get a response, the more likely you are to dive in and dive in and dive in, recursively and
00:36:51
recursively. And every lag, every millisecond of lag, causes you to lose your train of thought, so to
00:36:57
speak. So you get the benefits that Travis is alluding to, plus speed, and you go places you'd never guess. This happens all
00:37:03
the time when you run a company and you're doing analytics and you have a tool that allows you to constantly query quickly: double-
00:37:09
click, triple-click. You get to answers that you never get to if there's even a one, two, or three second delay,
00:37:14
let alone sending it to a human. Secondly, where you actually see this today, it's already happening: if you
00:37:20
look at foundational models that just apply to science, there's lots of things about the human body, let's say in
00:37:25
health and biology, that we humans don't actually understand all the connections of. Like, why do we do X? Why do some people
00:37:31
get cancer? Why do other people not get cancer? Why does the brain work this way? Models trained solely on science
00:37:37
tend to expose connections that no human has ever had before. And that's because the raw
00:37:43
material is there, and we only have a conscious awareness of, call it, 1 to 10% of it. But
00:37:48
when you apply it to other human domains, where they're training on human data, human-produced data, human-
00:37:55
produced output, they're limited to that output. So I think you just take the science and apply it at large, and
00:38:01
you're going to wind up finding things that no human has ever thought before. And it's that the thing about science
00:38:06
though is that it's the hypothesis that you then have to test in the physical world. So you're like, okay, you've
00:38:13
got this hive mind, this, you know, this
00:38:20
computation engine, this brain of sorts. You wanted to say consciousness, but you stopped. I was like, how do I describe
00:38:28
the big C word, consciousness? But you need to be able to test in the physical world. So you could imagine
00:38:35
a a physical lab connected to one of these systems
00:38:40
where then you could say, okay, like if it was a chemistry experiment, you could do chemistry experiments or physics. You
00:38:47
get the idea. What could go wrong? Yeah, no big deal. It's going to be fine. Okay. But this
00:38:54
is where it goes because if you have a scientific method machine, you still have to be able to test your hypothesis.
00:39:00
You have to go through the scientific method and the verification. Yeah. Exactly. Wow. It's kind of mind-blowing. Reminds
00:39:06
me, really mind-blowing, I don't know if you guys remember dark matter and the discovery of it and everything. As
00:39:12
explained to me by Lisa Randall, you know, the discovery was made not by knowing there was dark matter there
00:39:18
and observing it, but by observing there were, you know, gravitational forces
00:39:23
around this other matter. And then they said, wait, what's causing that? And that's why they found dark matter. So these
00:39:29
ideas, you know, the idea that an LLM could actually do that, come up with something so novel,
00:39:36
it feels like we might be right there, right? Like we're kind of on the cusp of it. One of the seven most difficult problems
00:39:41
in math, or the most important problems in math, is proving a general solution to this thing called Navier-Stokes, which
00:39:47
is basically like viscous fluid dynamics and conservation of mass. We use it every day in the design of everything.
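For reference, these are the incompressible Navier-Stokes equations being described: a momentum balance for a viscous fluid plus conservation of mass. The open Millennium Prize problem concerns whether smooth solutions always exist in three dimensions, not whether the equations are useful in engineering.

```latex
% Incompressible Navier-Stokes: momentum balance plus conservation of mass,
% with u the velocity field, p pressure, rho density, nu kinematic viscosity,
% and f external body forces.
\begin{align}
  \frac{\partial \mathbf{u}}{\partial t}
    + (\mathbf{u} \cdot \nabla)\mathbf{u}
    &= -\frac{1}{\rho}\nabla p + \nu \nabla^{2}\mathbf{u} + \mathbf{f}, \\
  \nabla \cdot \mathbf{u} &= 0.
\end{align}
```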
00:39:53
You know what? It hasn't been proved. Isn't that the craziest thing where you're just like, how is this even possible? We use it to design airplanes,
00:39:59
to design everything. It hasn't been proved. And so you could just point a computer at this thing and you would unlock all these incredible mysteries of
00:40:07
the universe and we would probably find completely different propulsion systems. We could probably do things that we
00:40:14
didn't think were possible. Teleportation, I mean, who knows what's possible. But remember, you know how Elon
00:40:22
talks about Grok, and about AI generally, is about why are we here? What
00:40:28
is the purpose? Meaning of the universe. Yeah. What is the meaning of the universe? How does it work?
00:40:33
And a sort of fierce truth-seeking mechanism there. Let me ask you a question, Keith, Travis, Jason. If you guys were running
00:40:41
Grok 4 (that'd be so much fun), how do you judo-flip OpenAI? Because
00:40:50
they are marching steadfastly towards a billion MAU, then a billion DAU.
00:40:59
It's a juggernaut. So, how do you use the better product in a moment to judo
00:41:05
flip the less better product? Look. Yeah. I mean, here's the thing,
00:41:11
right? So, you do it the Elon way. You get a bunch of missionary, like,
00:41:17
full-on missionary engineers that work twice as hard and you have a culture
00:41:23
that is ultra-fierce truth-seeking, and you don't get caught up in
00:41:31
politics, bureaucracy, BS, and you just go for it. And I
00:41:37
think, you know, that's where... and then you go, wow, scientific breakthrough, scientific method. Like, you start
00:41:44
winning on truth, and I believe that will start to give the
00:41:50
product awesomeness of open AI a run for its money
00:41:56
But the product of OpenAI, the product department, those guys are crushing it.
00:42:02
They're really good. They're not only ahead of the game, but they're just leading in a lot of
00:42:08
different ways. But if you are better at truth, you will eventually have an AI product manager.
00:42:14
Yeah. And on a technical basis too, people forget how good Elon is at factories and physical, real-world
00:42:22
things. What he did standing up Colossus, Jensen Huang was like, how is this
00:42:28
possible that you did this, right? So impressive, his ability to build factories. And he said many times,
00:42:34
the factory is the product of Tesla. It's not the cars that come out of the factory or the batteries. It's the
00:42:40
factory itself. So if he can keep solving the energy problem with solar on one side and batteries and standing up,
00:42:49
you know, Colossus 2, 3, 4, 5, he's going to have a massive advantage there, on top of, Travis, you know, the
00:42:55
missionary individuals, which by the way was what he backed before Sam Altman corrupted the
00:43:02
original missionary basis of OpenAI and made it "ClosedAI." And, you know, this is nothing derogatory
00:43:07
towards him, but he did hoodwink and stab Elon in the back. It's nothing personal. I mean, he just screwed him
00:43:13
over. And would you say he bamboozled him? He bamboozled him, screwed him, hoodwinked him, you know, but pick
00:43:20
your term here. But he did him dirty. The original mission was to be missionary and open-source all
00:43:27
this content. That's the other piece I think is a wild card, and then I'll second Keith's position. But
00:43:33
open-sourcing some of this could have profound ramifications. I think open-sourcing the self-driving data could
00:43:40
have a really profound impact, if Elon wanted to do something really disruptive, like when he open-sourced his
00:43:46
patents for, you know, charging. If he open-sources the data set in self-driving, does anybody have the
00:43:52
ability to produce robotaxis at the scale he can? I don't think so. If Travis's hypothesis is true, then yeah,
00:43:58
everybody will. Well, what? Sorry. Everybody will what? Chamath, if you have access to the money that
00:44:05
buys the compute, everyone could solve that problem. What about the hardware piece I'm talking about? He said, if he published
00:44:12
all the FSD data, could somebody build an autonomous vehicle? Well, yes, but could somebody produce a
00:44:19
100 million robotaxis from a factory with batteries in them? Okay. No, that's a different question.
00:44:24
That's the thing I'm saying. And not really, because last time I was a guest on, you know, we talked about vertical
00:44:29
integration. Products really require vertical integration. So ultimately you have a
00:44:36
self-driving something that is custom-built for knowing it's going to be self-driving, and it interacts
00:44:43
differently. The cost structure is different. The controls are different. The seating's different. Everything. You build a product taking advantage of
00:44:49
where in the stack you have the most competitive advantage, but then you leverage that and it reinforces. It's
00:44:54
still why Apple, despite missing the AI wave, is still a pretty good company from any empirical standpoint. I mean,
00:45:01
the performance is absolutely miserable on the most important technology breakthrough of the last 70 years, but
00:45:06
the company's still alive and still worth trillions of dollars because it's vertically integrated. OpenAI, to your
00:45:12
point, they do have a good product team and they need to stay ahead on the product level because they can't compete
00:45:18
on the factory level. The way to stay ahead on the product level is shipping a device. They got to ship the device.
00:45:24
It's got to be good. It's got to be right. It's got to be the right form factor. It's got to do things for humans that are unexpected. But then if they do
00:45:30
that, they're like Apple plus AI. Chimath, what's the paper you were talking about before? What was the name
00:45:35
of it again? The Bitter Lesson. Yeah. It could apply to autonomous driving. Right now it's still like, hey, how do
00:45:42
I drive like a human, we talked about that. But the leapfrog moment here could be like, hey, drive a car, make sure it's
00:45:49
efficient, don't hit anybody, and just simulate that a quadrillion times, and
00:45:55
it's all good, right? But right now we're still trying to drive like humans, because we don't have enough data and
00:46:02
therefore can't do enough compute. That's the global lesson. By the way, you're totally right conceptually. The
00:46:08
blog post is right. But that's only true when you have enough data. And depending upon the use case, the level of data you
00:46:14
need may not be possible for years, decades, and you may need to hack your way there through human interactions.
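The leapfrog described here, replacing imitation of human driving with a simple objective simulated at enormous scale, is in spirit the shift from imitation learning to reward-driven search. A deliberately toy sketch, with a hypothetical 1-D "road" and obstacle, not any real driving stack:

```python
import random

# Toy version of "give it an objective and simulate a quadrillion times":
# instead of imitating human trajectories, score rollouts in a trivial 1-D
# simulator by a simple reward (make progress, don't hit the obstacle) and
# keep the best constant-speed policy found. Everything here is illustrative.

OBSTACLE_AT = 7.0  # position of a stopped car in our 1-D world (assumption)

def rollout(speed: float, steps: int = 10) -> float:
    """Reward = distance covered, with a large penalty for a collision."""
    pos, reward = 0.0, 0.0
    for _ in range(steps):
        pos += speed
        if abs(pos - OBSTACLE_AT) < 0.5:
            return reward - 100.0  # crashed into the stopped car
        reward += speed
    return reward

def search_policy(trials: int = 1000, seed: int = 0) -> float:
    """Brute-force policy search: sample speeds, keep the best by reward."""
    rng = random.Random(seed)
    speeds = [rng.uniform(0.0, 2.0) for _ in range(trials)]
    return max(speeds, key=rollout)
```

No human trajectory appears anywhere; the simulator plus the reward is the whole specification, which is exactly why this approach needs enough (simulated or real) data to work.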
00:46:20
Yeah, physical-world AI is lacking in data, and so you just try to approximate
00:46:26
humans. I don't know if you guys have seen this. In related news, OpenAI and
00:46:31
Perplexity are going after the browser. Perplexity launched Comet for their $200-a-
00:46:37
month tier. I actually downloaded it; I'll show it to you in a second. But this is a really interesting category. It's
00:46:44
something developers can do already, and they do it all the time, you know, but having your browser connected to
00:46:51
agents lets you do really interesting things. I'll show you an example here that I just fired off while we're talking. So, I just asked it, hey, give
00:46:58
me the best flights on United Airlines in business class from New
00:47:04
York City, from San Francisco to New York City. It does some searches, but what you see here is it's popped up a
00:47:10
browser window and it's actually doing that work and you can see the steps it's using and then I can actually open that
00:47:16
browser window and watch it do that. This is just a screenshot of it and it will open multiple of these. So you
00:47:22
could... I was doing a search the other day saying, "Hey, tell me all the autobiographies I haven't bought on Amazon. Put them into my, you know,
00:47:29
shopping cart and summarize each of them cuz I like biographies and I like doing it here." And when it did this last
00:47:36
time, it put my flight in, and I was logged in under my
00:47:42
account, and it basically put it into my account in the checkout. So again, this isn't new: if you're a
00:47:49
developer, you do this all day long, but this really seems to be a new product
00:47:55
category. I'm curious if you guys have played with it yet. And then what your thoughts are on having an agentic
00:48:01
browser like this available to you to be doing these tasks in real time. You can also connect
00:48:08
obviously your Gmail, your calendar to it. So I did a search: tell me every restaurant I've been to, and then put it
00:48:14
by city. And then I was going to open my OpenTable and pull that data as well. What's interesting about this,
00:48:22
Keith, and I know you're a product guy and have done a lot of product work, so I'm curious your thoughts on it, is you don't
00:48:29
have to do this in the cloud. You're authenticated already into a lot of your accounts, nor do you have to worry about
00:48:36
being blocked by these services, because it doesn't look like a scraper or a bot. It's just your browser doing the
00:48:42
work. Your thoughts on this? Have you played with it yet? Yep. I think it's a great Hail Mary attempt by Perplexity. I
00:48:48
think absent something like this, Perplexity is toast. Like, per the stat about ChatGPT going to a billion
00:48:53
users, it's becoming the verb, you know, the way you describe using AI for a normal consumer. There's nothing
00:49:00
left of Perplexity if they can't pull this off. So, it's a great idea, because the history of consumer
00:49:06
technology companies is whoever's up has the uphill ground, like in a military sense; whoever is first has a lot of control.
00:49:12
This is actually what Google should be doing, truthfully. I think Google search is also toast,
00:49:19
and since they have Chrome and they theoretically have a quality team in Gemini, they should be putting these two
00:49:25
things together and hoping to compete with ChatGPT. They're going to lose the search game. The assets that are
00:49:30
best at Google right now have nothing to do with search; every other product is the only thing that's going to save that company, if they can figure out
00:49:36
how to use them. Travis, your thoughts on this category?
00:49:42
Anything come to mind for you in terms of, you know, feature sets that would be
00:49:48
extraordinary here? I know you like to think about products and the consumer experience.
00:49:54
It's really interesting. So, you know, I've been spending, as you guys know, I've been spending my time on real
00:50:00
estate and construction and robotics. And so, I've been out of this kind of consumer software game for a
00:50:06
long time. But super interesting: over the last six months there have been a
00:50:12
number of consumer software CEOs like when I hang out with them or
00:50:17
whatever, they're like, yo, how are we going to keep doing what we do when the agents
00:50:23
take over. Yeah. The paradigm shift is so profound that the idea that you would visit a web
00:50:29
page goes away, and you're just in a chat dialog; you have an agent that's just taking care of your flights for you. So,
00:50:36
I kind of think there's a leapfrog over that. I think
00:50:41
it's just like, you tell something, yo, I want to go to New York. You know, I'm sort of looking at this time
00:50:47
range. Can you just go find something I'm probably going to like and give me a couple options?
00:50:52
Yeah. And it's just a whole... you have an interface, and then, you know,
00:50:59
this thing that you just showed, Perplexity, is that the interface? Or do I just have an agent that just goes and
00:51:04
does everything for me, and is this the start of that? You know, I just haven't spent enough time. I do know that
00:51:12
every consumer software CEO that has an app in the app store is
00:51:18
tripping. They're tripping right now. And I mean big boys. I meet guys with real stuff, and sometimes I'm doing
00:51:25
almost like therapy sessions with them. I'm like, "It's going to be fine. You actually have stuff.
00:51:32
You have a moat. You have real stuff that's of value. They can't replace it with an agent." And they're like, "So, you're lying to them. You're doing
00:51:38
hospice care and you're telling them everything's going to be okay, but the patient is trading options on Robinhood while he's like,
00:51:44
"Yeah, yeah, tell me more. Tell me more." All these things. There's certain things that are protected and there's certain things
00:51:51
that aren't. That's all. Well, let's talk about that, because you and I are old enough to remember
00:51:56
General Magic. This vision was out there a long time ago with personal digital assistants, and you would just talk to an
00:52:03
agent. It would go do this for you. This feels like a step to that where it does
00:52:08
all the work for you, presents you the final moment and says approve.
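The "butler" pattern described here, where the agent does every step autonomously and only surfaces one final approval, can be sketched as a tiny loop. The tool names (`search_flights`, `pick_best`) are hypothetical stubs, not a real agentic-browser API:

```python
# Minimal "do the work, then ask to approve" agent loop. The tools are stubs;
# a real agentic browser would perform these steps inside your authenticated
# session, then pause for a single human confirmation at the end.

def search_flights(origin: str, dest: str) -> list[dict]:
    # Stub tool: a real agent would drive the browser here (assumption).
    return [{"flight": "UA 123", "cabin": "business", "price": 2100},
            {"flight": "UA 456", "cabin": "business", "price": 1850}]

def pick_best(options: list[dict]) -> dict:
    # Simplest possible preference model: cheapest option wins.
    return min(options, key=lambda o: o["price"])

def run_agent(origin: str, dest: str, approve) -> str:
    """Do every step autonomously; only the final action needs a human yes."""
    choice = pick_best(search_flights(origin, dest))
    if approve(choice):  # the single "approve" moment
        return f"booked {choice['flight']}"
    return "cancelled"
```

For example, `run_agent("SFO", "JFK", approve=lambda c: c["price"] < 2000)` finds both options, picks the cheaper one, and books it only because it clears the approval rule.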
00:52:13
Like a concierge or a butler. Yeah, I think what you're describing is what we want. But I think more specifically
00:52:19
for today, Keith and Travis totally nail it. Look, I think building a browser is
00:52:26
an absolutely stupid capital allocation decision. Just totally stupid and unjustifiable in 2025. Specifically for
00:52:34
Perplexity, I think their path to building a legacy business is to replace Bloomberg.
00:52:41
Everything that they've done in financial information and financial data in going beyond the model has been
00:52:49
excellent. As somebody who's paid $25,000 to Bloomberg for many years,
00:52:54
the terminal is atrocious. It's terrible. It's not very good. It's very
00:53:00
limited. And anybody that could build a better product would take over a hundred-
00:53:09
billion-dollar enterprise, because I think it's there for the taking. I wish that Perplexity would double and triple
00:53:14
down on that. And so when you see this kind of random sprawl... Let's do it, Chamath. Let's just go do
00:53:19
it. When you do the random sprawl, I think it doesn't work. But I just want to say like a browser is like the dumbest thing
00:53:25
to build in 2025 because in a world of agents, what is a browser? It's a
00:53:31
glorified markup reader. It's like handling HTML. It's handling CSS and JavaScript. It's doing some networking.
00:53:38
It's doing some security. It's doing some rendering. But it's like this is all under the water type stuff. I get it
00:53:46
that we had to deal with all that nonsense in 1998
00:53:51
to try Lycos or Google for the first time. But in 2025,
00:53:56
there's something that you just speak to and eventually there's probably something that's in your brain which you
00:54:02
just think, and it just does it. You're thinking, I need a flight to JFK. Or, at the maximum,
00:54:09
today, in a very elegant, beautiful search bar, you type in "get me a flight" and it
00:54:15
already knows what to do. Keith, in some ways this is a step towards that ultimate vision. So you'd think it's
00:54:21
worth it for, you know, Perplexity to make this waypoint, perhaps, if you look at it as a waypoint toward the
00:54:27
ultimate vision, which is a command line and an earpiece? How do you get distribution, Jason, for
00:54:32
the 19th web browser in 2025? Well, yeah, that is a challenge and I think most people are speculating Apple, which
00:54:40
has a lot of users, might buy Perplexity or do a deal with Perplexity and give
00:54:46
them that distribution because of the Justice Department case against Google. So, there's been a lot of speculation
00:54:52
about that. But Keith, what do you think? Well, I don't think they'd be buying anything worth it. Like, what is Apple going to get if you continue this failed
00:54:58
strategy of Apple, right? Apple has missed every possible window on AI and continues to miss it
00:55:04
and it has cultural... I think the CEO has challenges, I think culturally they have challenges, I think they have
00:55:09
infrastructure challenges. So it's not an easy fix, but buying Perplexity is not going to help. The strategy is
00:55:15
actually a pretty coherent one for Perplexity qua Perplexity. So I think the
00:55:20
pick-a-vertical-and-own-it strategy is not a bad idea, especially because you need unique data sources, and some of
00:55:27
those data sources may or may not license their data to OpenAI. So, you can do some clever things there, but
00:55:34
I don't think there's any residual value that Apple would get out of Perplexity, except there's some product taste. But
00:55:39
what, are you going to spend like a billion dollars for product taste? I mean, Mark's spending hundreds of millions of dollars, hundreds of billions
00:55:45
of dollars, or whatever he's spending these days. And you know, Grok, if anything, Grok 4 shows that Mark
00:55:50
really doesn't need to just spend money to build a whole new team because everything they've done in AI has also missed the boat. Well, I mean, Keith,
00:55:56
the way you phrase it there almost makes it worth it for Apple to throw a Hail Mary, have a team with some taste
00:56:03
because that's how they tend to do things is something that is elegant. And why not just throw your search to it,
00:56:08
throw 10 billion at a bunch of... What's elegant would be if there's a bunch of agents and just a
00:56:14
chat box. Seeing a bunch of visual diarrhea is not elegant. It's lazy.
00:56:20
On our little Bloomberg clone, I'll give you naming rights. So you can call it that. You like "Palihapitiya." So
00:56:28
hey, can somebody bring up the Palihapitiya? You know what's so funny?
00:56:34
Rolls right off your tongue. TK, listen. We were trying to do a screen
00:56:39
of companies and it maxes out at five companies on a specific type of screen
00:56:45
where you're trying to compare stock price to EBITDA, and you're like, okay, I can only choose five, I
00:56:50
guess. So which five should I choose? Laffont was on, right, like two episodes ago. He was like, I can't pull this up. It's limited to six companies.
00:56:57
Dude, it's... So what do people use Bloomberg for? They use it for the messaging. Now, like,
00:57:03
my team has traded huge positions via text message on Bloomberg. So there is something very valuable there.
00:57:09
But the core usability and the core UI of that company have not evolved. I have my contribution.
00:57:15
And Perplexity is very good at that, by the way. They do a very good job. I got a new domain name, Travis. Let this
00:57:21
one just sink in here. This is my way to weasel my way into the deal. begin.com. Begin.com.
00:57:27
You own that, don't you? I do. I snipe some good ones once in a while. I got
00:57:32
begin.com and I got annotated.com. Those are my two little domains. You're like one of these old people that show up at those shows and
00:57:46
show something, and you're like, "Oh, I have this thing that I bought in 1845."
00:57:46
Guys, Jason is the daddy in GoDaddy. Okay. I'm your dad. I'm
00:57:51
your daddy. That's what it is. Who's your daddy? Hey, speaking of daddy. Let's go on to our next story.
00:57:58
Come on. Is now the right time for a third party? Elon seems to think so. Last week, he
00:58:06
announced on X that he would be creating a new political party. I'll let you decide who daddy is in this one. He said,
00:58:14
quote, "When it comes to bankrupting our country with waste and graft, we live in a one-party system, not a democracy." He's
00:58:21
not yet outlined a platform for the American Party. We talked about it here last week; I listed four core values
00:58:28
which seemed to get a good reaction on X: fiscal responsibility, DOGE, sustainable
00:58:34
energy and dominance in manufacturing in the US, which Elon has done single-handedly here; pronatalism,
00:58:41
which I think is a passion project for him. And Chamath, you rounded it out with the fifth: technological excellence.
00:58:47
According to Polymarket, there's a 55% chance that Elon registers the American Party by the
00:58:53
end of the year. And one thing I was trying to figure out is just how unpopular these candidates
00:59:00
and these political parties are. This is a very interesting chart that I think we could have a great conversation around.
00:59:06
It turns out we used to love our presidents. If you look here, Kennedy was at 83%, his highest approval rating.
00:59:13
His lowest was 56%. That was his lowest approval rating. So he operated in a very high band. Look at
00:59:20
Bush 2 after 9/11: 92% was his peak. His lowest was 19%, right? Wartime
00:59:27
president. But then you get to Trump 1, Biden, and Trump 2. Historically
00:59:33
low high approval. Their high watermark. 49 for Trump one, 63 for Biden, one of
00:59:39
one. and then 47 for Trump too and their lowest 29 3140. So maybe it is time for
00:59:48
a third party candidate. Let's discuss it boys. I have no idea how to read this graph. I
00:59:54
have zero idea. It's the worst. I'm like, what is happening here? This is the worst-formatted chart. This
01:00:00
is a confusing chart. But well, the reason I'm putting it up is for debate. So you should be saying thank you.
01:00:06
Yeah, we're debating whether it's creating debate. Why did you put it up? Here's another one. Gallup poll: Americans'
01:00:11
desire for a viable third party: 63% in 2023. So, it's bumping along at an
01:00:16
all-time high. Okay. I'm really concentrating on this one. Okay. Anyway, I'm going to stop there.
01:00:22
What's the gray? I'm going to let you... Okay. Okay. It's got the different presidents during that time
01:00:28
period and how popular the parties were. Let's stop here. This is a good place to stop.
01:00:34
Yeah. A couple points. Yes. The idea of Elon creating a third party is, for any other human being,
01:00:40
absolutely absurd and ridiculous. Elon has obviously done incredible things, so dismissing anything he's touching is a
01:00:46
bad idea. However, I think the best metaphor I've seen is that it's a little bit like when Michael Jordan tried to play
01:00:51
baseball and became a replacement-level baseball player, which is actually really hard to do. By the way, Elon is probably
01:00:58
a replacement-level politician. He's Michael Jordan for entrepreneurial stuff, but the third party stuff is not
01:01:05
going to work. First of all, that chart is misleading. It's a flaw of
01:01:10
averages. Well, it's both badly designed and it's flaw-of-averages politics. Trump is incredibly popular among
01:01:15
Republicans. He actually has the highest approval rating of any Republican ever measured in recorded history. It's 95%.
01:01:22
Reagan peaked out at 93%. It's just that Democrats don't like him, which is perfectly fine. Being polarizing is
01:01:29
an ingredient to being successful, including with people on the show. Like the point of accomplishing things in the
01:01:35
world is you don't really care what half the world thinks. You need to make sure that there's a lot of people who like you and really approve and are
01:01:41
enthusiastic about what you do. And Trump is about as popular with his party as anybody's ever been ever. Period.
01:01:48
No exceptions. Secondly, MAGA has kind of already changed the
01:01:54
Republican party. Trump is sort of like a third party takeover of the Republican party. And so it's kind of already
01:02:01
happened, and maybe you can do this every 20 or 30 years. I don't think you can have this kind of
01:02:07
transformation of one party within too compressed a period of time, for a lot of reasons. Third is, really smart
01:02:14
parties absorb the lessons of political science. Unfortunately, I studied political science; I kind of wasted my college years instead of studying CS,
01:02:22
and, you know, maybe then I'd be coding stuff and doing physics like Travis. But one thing I did learn is that smart parties
01:02:28
absorb the best ideas of third parties. So the oxygen is usually not there
01:02:34
because there's a Darwinistic evolution: if you get traction on an idea, it's really easy to conscript some of those
01:02:42
ideas and take away the momentum. No third party candidate that's a true third party has won a Senate seat since
01:02:49
1970, and that was actually Bill Buckley's brother, so he had some name ID. The
01:02:55
other thing Elon, I think, is missing, and the proponents of what he's doing, is that people vote not just for ideas; they
01:03:01
vote for people. It's a combination. The product is: what do you believe, and who are you? And you can't
01:03:08
divorce the two. Trump is a person and that generates a lot of enthusiasm and it's one of the reasons why he has
01:03:14
challenges in midterms: because he's not on the ballot. His ideas may be on the ballot, but he is not specifically on the
01:03:19
ballot. And Elon can't be the figurehead of the party; he literally can't, constitutionally.
01:03:26
You need a face that's a person: an Obama, a Clinton. There's reasons why people resonate with
01:03:32
a Reagan. Without that personality, specific ideas
01:03:37
just are not going to galvanize the American people. Okay. So the counter to that, and what
01:03:43
people believe he's going to try to do, is win a couple of seats in the House, Travis. Win maybe one or two Senate
01:03:49
seats. If you were to do that, those things are pretty affordable to back.
01:03:55
A couple million dollars for a House race; a Senate race, maybe $25 million. If Elon
01:04:02
puts, I don't know, $250 million to work every two years (I think he put $280 million to work on the last one), he
01:04:09
could kind of create the Joe Manchin moment, and he could build a caucus, a
01:04:15
platform, a Grover Norquist kind of pledge along these lines. So, what do you think of
01:04:21
that? If he's not gonna create a viable third party presidential candidate, could he Travis pick off a couple of
01:04:28
Senate seats, pick off a couple of congressional seats? Okay. So, first I have this axiom that I'm making up right now.
01:04:34
Okay. Okay. It's called Elon is almost always right. Okay. Okay. All right.
01:04:40
Elon was right about everything. Seriously, let's just be real. And like honestly the things he's upset about and
01:04:46
that he's riled up about, especially when you look at the deficit? Like, man, I am
01:04:52
right on board that train. Part one. Part two.
01:04:58
We've never had somebody with this kind of capital that can be a quote unquote
01:05:04
party boss outside of the system. Right.
01:05:10
And there's a lot of people that agree with the types of things he's saying, and he
01:05:17
knows how to draw people in. Elon in his own right kind of has a populist vibe: he does his thing, and he's
01:05:25
turned X into what it is, and he's a big part of X. And so I
01:05:32
think it's great. And honestly, there are moves you can make on Senate and House, just having a few folks
01:05:38
and those folks then being your levers to get the things you want done. That's part one. And then part two of that is
01:05:45
the threat of that happening can make good things happen separately, even if it doesn't go all the way.
01:05:51
I just love it. I'm on the train. Yeah, I'm in love with this role for Elon more than picking a party,
01:05:58
because he's picking a very specific platform that I think resonates with folks, which is: just balance the budget,
01:06:04
don't put us in so much debt, and let's have some sustainable energy. You know, job done. Great job.
01:06:10
The problem with that is he's actually wrong about the reason why we have a deficit or debt.
01:06:17
It's not because we're undertaxed; it's that we're massively overspending. If we just... No, I think he believes we're
01:06:22
overspending. They should have been supporting the last, you know, beautiful bill, because if you just held federal
01:06:30
spending to 2019 levels (so 2019 is not, like, decades ago),
01:06:35
literally, with our current tax revenues, we would be in a surplus: $500 billion.
01:06:40
Yeah. So all we need to do is cut spending. Now, I admit... why didn't that happen with the big beautiful bill?
01:06:46
So this is where details do matter. I think there is a willingness and, you
01:06:52
know, discipline problem in both parties, and I think maybe he can help fix that. The second thing is that we have these arcane rules, particularly in the Senate,
01:06:58
that you need 60 votes in many ways to cut things except through very hacky
01:07:04
methods and that's a reality. So the best thing truthfully he could do is help get a Republican party to 60 votes
01:07:11
and then, in theory, he could be absolutely furious if you didn't cut back to 2019 levels. But it's very
01:07:19
tricky. Or you can just overrule it like this: the filibuster is an artifact of history, and at some point some majority
01:07:26
leader is just going to say we're done with the filibuster and just steamroll through all the cuts at 50 or 51 votes
01:07:32
which you can do. There's no constitutional right to a filibuster. It is an artifact of centuries of American
01:07:38
history and at some point it's going to go away. So maybe the time is now. Maybe we should just fix everything now.
01:07:44
I think you're exactly right. I think the filibuster is just a matter of time. I think it's on borrowed time.
01:07:49
And I think in a world where it is on borrowed time, Jason, I think your path is probably the one that gives the
01:07:57
American party, if it does come into existence, the most leverage, which is if you control three to five independent
01:08:04
candidates, you gain substantial leverage. I just want to take a step back and just note something. I don't
01:08:10
know if you guys know this, but the only reason we're even having this conversation or this is even possible is
01:08:17
because in 2023, the FEC, the Federal Election Commission,
01:08:22
they actually released guidance and they changed a bunch of rules. And the big
01:08:28
change that they made then was that it allowed super PACs to do a lot more than just run ads. Up until that point, all
01:08:35
you could do if you were a super PAC was basically run advertising: television, radio,
01:08:41
and I guess online as well. But what they were allowed to do starting in '23 was
01:08:46
fund ground operations. They were allowed to do things like door knocking, phone banking, you know, get-out-the-vote. So,
01:08:54
in other words, what happened was a super PAC became more like a full campaign machine. And Trump showed the
01:09:02
blueprint of using a super PAC, specifically his, to win the
01:09:07
presidential election. So he was able to fund this massive ground game. He built infrastructure
01:09:13
across the swing states. He was obviously incredibly effective. And now that playbook can actually be used by
01:09:20
other folks. And so, to the extent that Elon decides to use those changed FEC
01:09:25
rules, Jason, I think what you said is the only path. But I just wanted to double-click on Keith's point,
01:09:31
because it's so important. I do think the filibuster is going to go away, and it is because of the arcaneness of these
01:09:38
rules: having to do a reconciliation bill and, you know, needing a veto-proof supermajority. In either
01:09:45
case, it just means that nothing gets done, and I think somebody will eventually get impatient and just
01:09:50
steamroll this thing. We've never had so many people say they feel politically homeless as we did the last two cycles.
01:09:57
And that includes many people on this podcast, people in our friend circle. And I think just the idea that Elon
01:10:03
could create a platform that people could opt into and support, just the
01:10:08
existence of that would make the other two parties get their act together. By the way, I think what we need
01:10:14
is a little bit of a stick there and a carrot. Hey, if you don't control spending, there's this third option. And
01:10:20
if Travis and I are in it... and Keith, I know you'll never leave the Republican Party, but Chamath, you know, you're
01:10:26
probably set where you want to be right now. But I can tell you, if we go through our top 10, 20 list,
01:10:32
50% of those will join Elon's party. Well, look, the other thing, Jason, that Keith said, which I
01:10:38
think is really important, is that if he were to run people, I think they
01:10:45
have to transcend politics and policy. And I think they need to be straight up
01:10:50
bosses. People that have enormous name recognition so that effectively what
01:10:56
you're voting for is a name and not an agenda. Equivalent to, I think, what happened with Schwarzenegger when he ran.
01:11:02
He ran on an enormous amount of name recognition in the Gray Davis recall. He didn't run on the platform. I don't
01:11:08
think anybody could even mention it. JD Vance had this great book, captured people's imagination. He's an incredible
01:11:14
speaker. He pisses off a third or two-thirds of the country, depending on where you are in the country, but you
01:11:20
can't ignore him. I think Elon can find 10 JD Vance-type characters and back
01:11:26
them fairly easily. He is a magnet for talent. People will line up. I have been
01:11:31
contacted by high-profile people: "I was actually thinking of running. Can you put me in touch with Elon?"
01:11:36
I was thinking more like actors and sports stars, meaning they just come with their own
01:11:42
built-in distribution. Like, I think you almost have to rank X followers and Instagram followers, do a join, and
01:11:49
say, "Okay, these are..." Do you know what I mean? Like, I think it's totally different. It's painful, guys. It's painful. Like,
01:11:55
let's not get more celebrities as politicians. Let's get people who've led large efforts, large
01:12:03
initiatives, complex things, you know, ideally. But they still have to communicate, right, Keith? They have to
01:12:08
be able to communicate on a podcast. That's the new platform. If they can't spend two hours, three
01:12:14
hours chopping it up on a podcast like this or Joe Rogan, you know... That's Kamala. The reason she
01:12:20
couldn't even contend was because she couldn't hang for two hours in an intellectual discussion. If you can't
01:12:25
hang, you're out in today's political arena. It'll be interesting to see if he can tune his algorithm for talent, which is
01:12:32
epic, to politics, because it's a slightly different audience. But if you
01:12:37
can tune the algorithm and quality, that might work. I think you can win a few House races. I think that's doable. I
01:12:43
don't think you can win a Senate race. Well, there it is. Elon, Keith doesn't think you can win a Senate race, but he
01:12:49
thinks you can win a couple congressional ones. Thanks for giving him the motivation, Keith. I appreciate it. I'm sure it's the biggest mistake you've
01:12:55
ever made. He's not going to win, too. People in the Republican Party right now are going, "Oh, no. Don't poke the
01:13:01
tiger." Uh, listen. Speaking of... That's how Trump got into politics, so I don't want to be Obama here. Okay. You just Obama'd Elon, right? Yeah.
01:13:09
Congratulations. All right. Listen, SCOTUS made a big decision here. This is a really
01:13:15
important decision. They've sided with Trump's plans for federal
01:13:22
workforce RIFs: reductions in force, for those of you who don't know. As you know, Elon and Trump wanted
01:13:29
to, you know, downsize the three million people who are federal employees. This
01:13:34
is just federal employees we're talking about. We're not talking about military, and we're not talking about state and
01:13:40
city. That's tens of millions of additional people. If you remember, Trump issued this executive order back
01:13:46
in February when he got into office, implementing the president's DOGE Workforce Optimization Initiative, and
01:13:51
he asked all the federal agencies: hey, just prepare a RIF for your departments. "Consistent with applicable laws" was part of this EO. Okay. In
01:13:59
April, the American Federation of Government Employees, AFGE, sued the Trump administration, saying the president must consult Congress on
01:14:06
large-scale workforce changes. This is a key debate, because Congress, as you
01:14:11
know, has the power of the purse. They set up the money, but the president and the executive branch have to execute
01:14:17
on that. And that's the key here. So they accused Trump of violating the separation of powers under the
01:14:23
Constitution. The AFGE has 820,000 members. In May, a San Francisco-based
01:14:30
federal judge sided with the unions, blocking the executive order. The judge,
01:14:35
who was appointed by Clinton, said any reduction in the federal workforce must be authorized by Congress. This is a key
01:14:41
issue. And the White House submitted an emergency appeal, yada yada. Eight of
01:14:46
nine Supreme Court justices sided with the White House in overturning this block. And so, by the reasoning, it's very
01:14:51
likely the White House will win the argument over the executive order: they have the right to prepare a RIF. The
01:14:56
question is, can they actually execute on that RIF? And who has that power? Chamath, does the power reside with the
01:15:04
president to make large-scale, you know, RIFs, or do they have to consult Congress first? Your thoughts on this
01:15:10
issue? It's an incredibly important ruling. Incredibly right.
01:15:17
I think President Trump should have absolute leeway to decide how the people
01:15:22
that report to him act and do their job. If you take a step back, Jason, there
01:15:28
are more than 2,000 federal agencies. Employees plus contractors
01:15:34
number, I think, 3 million people. If you put three million people into 2,000
01:15:41
agencies and then you give them very poor and outdated technology, which,
01:15:48
unfortunately, most of the government operates on, what are you going to get?
01:15:53
You're going to get incredibly slow processes. You're going to get
01:16:01
a lot of checking and double-checking and you're going to ultimately just get a lot of regulations because they're
01:16:07
trying to do what they think is the right job. So since 1993, what have we
01:16:13
seen? Regulations have gotten out of control. It's like 100,000 new rules per some number of months. It's
01:16:19
just crazy. So eventually we all succumb to an infinite number of rules that we
01:16:25
all end up violating without even knowing it. So if the CEO of the United States,
01:16:32
President Trump, isn't allowed to fire people, then all of that stuff just compounds. So I think that this is a
01:16:39
really important thing that just happened. It allows us to now level-set how big the
01:16:45
government should be. But more importantly, the people in the government are also the ones that
01:16:51
then direct downstream spend and make net new rules. And if you can slow the
01:16:57
growth of that down, you're actually doing a lot. In many ways,
01:17:02
I wish Elon had come in and created DOGE now. Like, could you imagine if DOGE was
01:17:09
created the day after this Supreme Court ruling? It would have been a totally different outcome, I think, because with
01:17:16
that Supreme Court ruling in hand, these guys probably would have gone through like a hot knife through butter,
01:17:23
Travis. So, I think it's a big deal. Except that ruling doesn't happen without DOGE. DOGE caused that
01:17:28
ruling to occur. True. Well, the EO did. That was all DOGE-style,
01:17:33
though. You know what I'm saying? If they weren't firing people, yeah, they probably wouldn't have felt the need, to your
01:17:39
point, Travis, to actually file this. But Travis, if you are living in the age of AI efficiency right now, the operations
01:17:46
of companies are changing dramatically. Can you imagine telling somebody, you can be CEO, but you can't change
01:17:52
personnel? That's the job. You get to be CEO, but you just can't change the players on the team. You can buy the
01:17:58
Knicks, but you can't change the coach. No, you can grow. You just can't shrink it. It's like running a
01:18:04
unionized company, which actually does exist. There are large unionized companies where you can't do any of these things,
01:18:10
right? Do they still exist, or are they all gone? They're going quickly. Yeah, probably.
01:18:16
I think this just gets back to what Congress is actually authorizing when a bill occurs. And there's certain things
01:18:24
that are specific and certain things that aren't. And I'm not sure that in a lot of these bills it's
01:18:30
very specific about exactly how many people must be hired. And so, I'm
01:18:38
just doing the common man's sort of approach to this, which is: if the law says you have to hire X number
01:18:44
of people, then that is what it is. If the law says, here's some money to spend, here are the ways in which to
01:18:50
spend it, but it's not specific about how many people you hire, then that is different. Yeah. It should be outcome-based. Hey,
01:18:57
here's the goal, here are the key objectives, right? Travis is totally right. Like,
01:19:02
there's a variety of different laws, some with incredible specificity, some with very broad mandates. The Constitution clearly says
01:19:09
that all executive power resides in the president of the United States. Period. There are no exceptions there. However,
01:19:14
Congress does appropriate money, and post-Watergate, many people think Congress has the power
01:19:22
to force the president to spend the money. And you can debate that. You can debate it on a per-statute basis. And
01:19:28
that will be more nuanced and that's going to get litigated whether the president can refuse to spend money that
01:19:33
Congress explicitly instructed him to spend, sometimes called impoundment. That's a very interesting intellectual
01:19:40
debate. This one's a little bit easier. It'll get more complicated again: this EO is only approved to allow for
01:19:47
the planning. I think the vote might be closer. I think there's still a majority on the Supreme Court for the actual
01:19:52
implementation, but it may not be 8-1 when there's a specific plan that
01:19:57
possibly navigates its way through the courts again. Yeah, it's super fascinating.
01:20:03
Yeah. I wonder if they're going to get to the point where they're going to say in every bill, you need to hire this number of people to hit...
01:20:09
I don't know if they can. Like, that's where it gets borderline unconstitutional, where you actually prescribe that the president, in
01:20:16
the exercise of his constitutional duties, has to hire a certain number of people.
01:20:22
That feels pretty precarious. Well, I'm not sure, Keith. They prescribe a whole bunch
01:20:28
of other things, like: you must appropriate money to this specific
01:20:35
institution to do this specific work. But that's not an executive function. Like, if you said the secretary of
01:20:41
state has to have X number of employees doing something... the secretary of state
01:20:47
is your personal representative to conduct foreign affairs on behalf of the president of the United States. It gets
01:20:53
a little bit more messy as you translate it to people that the president
01:20:59
should... I mean, yes, Congress does set, you know, which people are subject to Senate
01:21:04
confirmation, what their salary and compensation bands are. So it's never going to be fully binary, where the
01:21:10
president can do whatever he wants, and I don't think it'll be constitutional for Congress to mandate and put all kinds of handcuffs on the
01:21:16
president. Well, then you also have performance that comes in here. What if you look at
01:21:21
the Department of Education? You say scores have gone down. We've spent this money. We're not getting the results.
01:21:27
Therefore, these people are incompetent. Therefore, I'm firing them for cause and I'm going to hire new people. How are
01:21:33
you going to stop the executive from doing that? There's been a bunch of litigation, you know, in parallel to
01:21:38
this litigation about the president's ability to fire people. And for the most part, the Supreme Court's basically,
01:21:44
with maybe the exception of the Federal Reserve chair, said that the president can fire pretty much anybody he wants.
01:21:51
I mean, that's the way to go. Like, I hate to see people cut, but if the results aren't there...
01:21:57
I think if they're presidential... Yeah, if they're a presidential appointee, the president should be able to fire you at will. Just like if you were a VP at one
01:22:04
of our companies, the CEO should be able to fire you at will. But what about, Keith, if the whole department sucks? Hey, you guys were
01:22:10
responsible for early education. You had to put together a plan. The plan failed. Everybody's fired. We're starting over.
01:22:17
Like, you should be allowed to do that. How are we going to have an efficient government? Some of these departments were created by congressional statute, like the
01:22:24
Department of Education in 1979. And you're right, every single educational stat has gotten worse in the United States
01:22:30
since the department was created. But [ __ ], there is a law on the books that says there shall be a Department of
01:22:36
Education. So you may have to repeal that. All right, listen. We're at an hour and
01:22:42
a half, gentlemen. Do you want to do the FICO story, or should we just wrap, Chamath? And we've got plenty of show here. It's a
01:22:49
great episode. Anything else you want? I don't really have much to say on the FICO story. I thought these other topics were really good though.
01:22:54
Oh, we did great today. This is a great panel. I'm so excited you guys are here. Let me just ask you guys: any off-duty
01:23:01
stuff that you can share with the audience? Any recommendations? Restaurants, hotels, trips,
01:23:08
movies you watched, books you read? Keith, I know that you are an active guy. What's on your agenda this summer?
01:23:14
Anything interesting you can share with the audience that you're consuming, conspicuous or otherwise?
01:23:19
Well, I don't want to share any good restaurants or hotels because... Oh, you're gatekeeping. You're gatekeeping.
01:23:25
Come on, man. Give us your babysitter. It's like, if you have a babysitter, you're not going to tell
01:23:31
everybody who your babysitter is. Yes. Can I get your nanny's email? But there are things that are,
01:23:37
what do you call it, no-marginal-cost consumption, like Netflix. So, for example, you know, this documentary
01:23:43
on Osama bin Laden is phenomenal. I don't know if any of you have seen it. It's brand new,
01:23:49
and you know, I'm a student of this stuff, and I thought, you know, I knew the whole story, etc. Watch episode
01:23:56
one. Just start with episode one, and it just blew me away with new information, new footage, just absolutely incredible
01:24:02
stuff. So highly, highly recommend it. What was the big takeaway for you so far? I don't know if there's any
01:24:07
specific takeaway, but just, like, so many parts of the story are misunderstood and not really understood,
01:24:13
and how various confluences of somewhat random things lead to a very
01:24:18
catastrophic result. But it's as dramatic as the best movie,
01:24:25
and it's a full documentary, and you will learn things and absorb things. I
01:24:30
have been recommending it to friends, and for a story you think you know, it's incredibly
01:24:36
revealing. Okay, Travis, anything you've got on your plate there that you're enjoying? A restaurant, a dish?
01:24:42
I mean, look, you know, Jason, I go to Austin a lot. Yes.
01:24:48
Basically from March till October, I do about 15 weekends in Austin. I have
01:24:55
a lake house. Jason's hung out a couple times. So I love water skiing. That's my
01:25:02
whole thing. I just love it. It's just my thing. Very zen. Very zen.
01:25:07
Yeah. And it's... I call it lake life. So lake life, that's a thing. And then I recently, and this
01:25:13
is a little bit of like a side quest, I recently purchased
01:25:18
the preeminent backgammon engine. XG. XG. That's right.
01:25:26
Its acronym stands for eXtreme Gammon, and it's the preeminent engine: all the
01:25:32
pros rate themselves based on it. It was built by this amazing
01:25:37
entrepreneur, this guy Xavier, who is just a full-on sort of
01:25:45
ultra, ultra... I mean, what's the word I'm looking for? It's not like a... a savant, essentially.
01:25:52
But he hasn't worked on it for many years. So I'm getting back into it and love it, taking modern machine
01:26:01
learning, sort of deep learning techniques, and like big compute, and saying, can we push the game of backgammon
01:26:08
forward? So, super exciting. And also training apps to get people up to speed quickly. I played in my first backgammon
01:26:15
tournament and cashed, so that was pretty cool. No, wait. Yeah. Okay.
01:26:21
Yeah. All due respect, you're the founder of Uber. You're very high profile. You go to this backgammon thing... is this like held at the
01:26:27
Motel 8, in like a conference room in the back? It was amazing. It was
01:26:33
like a month ago or so. There was like a big tournament, and it was,
01:26:38
uh, the United States Backgammon Federation had this big tour. I guess it was at the Los Angeles
01:26:46
LAX... at the LAX Hilton, and it was in the basement of the Hilton.
01:26:53
Great. And it was like next to the Dungeons and Dragons convention.
01:26:58
It had those kinds of legit vibes. I love it. And like, people would... So I went in super
01:27:04
low-pro, just did my thing, but eventually was recognized. But I was not recognized as the founder of Uber;
01:27:11
I was recognized as the owner of XG. The owner of XG. And then there was like a full-on
01:27:18
melee that basically occurred. They're like, "Oh, the owner of XG, Travis, is here." Chamath, I feel like we've got a window
01:27:25
here to do the all-in backgammon high-end tournament. We've got to lock this down now. We've got to
01:27:31
lock down the all-in backgammon set. I get the co-branding rights on this. Okay. Absolutely. XG. XG.
01:27:37
Yeah. Well, no, the All-In XG, you know. Because I love a great backgammon set. If we could make like a
01:27:42
$10,000 one, Chamath, we could kill turtles or white rhinos, all the animals
01:27:47
that, you know, Freeberg's trying to protect. Oh my god. We could murder them and then make... that would be so great.
01:27:54
Yes. Like, maybe the white could be, you know, rhinos, and then you could take something else, elephant skin,
01:28:00
something, you know, just really tragic, and then eat the meat and make the backgammon set for you.
01:28:06
I love backgammon, and honestly, if I wasn't attempting to be, like, an expert
01:28:11
poker player, that is the game. I mean, if you're talking about a Pandora's box where once
01:28:17
you open it, oh my god, you can go down the rabbit hole. Let's go, dude. Let's do that. Backgammon is a beautiful,
01:28:24
beautiful, beautiful game. I love the vibes of sitting... Travis and I sat, I got some cigars out, you know,
01:28:30
we poured a little of the all-in tequila. Tequila. Uh, we get that going.
01:28:36
A couple of the all-in cigars, and then we have the all-in backgammon. It's a wonderful hang. Yeah. Keith, would you consider giving
01:28:42
us some of your money playing backgammon? Absolutely. Absolutely. We've got to get some of that
01:28:49
money on the table, because you don't play poker with us. I don't play poker, but backgammon? Yeah, that sounds great. And I'll bring
01:28:54
better tequila. I have better tequila. Well, like, we're going to upgrade. We'll do a little taste-off. Yeah. So, you've insulted now Elon with the Senate
01:29:01
seats and with his tequila. My tequila is much better. Trust me. Okay. Who is left in the PayPal
01:29:07
mafia you'd like to insult? Reid Hoffman or Peter? Anything about Peter?
01:29:15
Reid could join Elon's party. He's collecting a bunch of misfits, so we might as well take Reid, too. All right, listen. This has been another
01:29:21
amazing episode of the number one podcast in the world, the all-in podcast. For your Sultan of Science, who couldn't
01:29:28
make it today. He's at the beep conference we don't mention. And uh David Sacks, who is out uh making America
01:29:36
safe in AI and crypto. Chamath Palihapitiya, world's greatest moderator.
01:29:42
Travis, Keith, thank you for coming. Thanks, appreciate you, guys. You were great today. What a panel. See you all
01:29:48
next time. Bye-bye. Let your winners ride
01:29:54
David said we open sourced it to the fans and
01:30:00
they've just gone crazy right there. Love you. Queen of
01:30:06
Quinoa [Music]
01:30:11
besties are my dog taking a notice in your driveway.
01:30:19
Oh man, my dasher will meet up at We should all just get a room and just have one big huge orgy cuz they're all just
01:30:25
useless. It's like this like sexual tension that we just need to release somehow.
01:30:31
Wet your beak. We need to get merch.
01:30:39
[Music] I'm going all in.

Badges

This episode stands out for the following:

  • Funniest (60)

Episode Highlights

  • A Stunning Hotel Experience
    Describing the breathtaking beauty of Passalacqua on Lake Como, a must-visit destination.
    “What a truly magnificent place.”
    @ 00m 43s
    July 11, 2025
  • The Future of Food Delivery
    Exploring how automation and robotics will revolutionize meal preparation and delivery.
    “You don't have to be wealthy to be healthy.”
    @ 14m 59s
    July 11, 2025
  • The Bet on Human Knowledge
    Investments in AI reflect a belief in human knowledge over pure computation. 'That's exactly a bet on human knowledge.'
    @ 23m 32s
    July 11, 2025
  • End of Human Labeling
    The reliance on human labeling in AI is nearing its end as machines learn to label themselves. '99% done or you train on it so well that you don't need to label anymore.'
    @ 30m 32s
    July 11, 2025
  • Scientific Breakthroughs with AI
    AI's ability to apply the scientific method could lead to unprecedented discoveries. 'If you have an LLM that is the best in the world at the scientific method, game the f over.'
    @ 36m 19s
    July 11, 2025
  • Elon Musk's New Political Party
    Elon Musk announced the creation of a new political party, the America Party, focusing on fiscal responsibility and technological excellence.
    “When it comes to bankrupting our country with waste and graft we live in a one party system not a democracy.”
    @ 58m 14s
    July 11, 2025
  • The Case for a Third Party
    A discussion on the historical unpopularity of recent presidents and the potential for a third party candidate.
    “Maybe it is time for a third party candidate.”
    @ 59m 48s
    July 11, 2025
  • The Filibuster's Future
    The filibuster is an artifact of history and may soon be abolished. 'Maybe we should just fix everything now.'
    “The filibuster is on borrowed time.”
    @ 01h 07m 44s
    July 11, 2025
  • The Rise of Super PACs
    Changes in FEC rules have transformed super PACs into full campaign machines, allowing for ground operations. 'What happened was a super PAC became more like a full campaign machine.'
    @ 01h 08m 22s
    July 11, 2025
  • Political Homelessness
    A growing number of people feel politically homeless, highlighting a shift in voter sentiment. 'We've never had so many people say they feel politically homeless.'
    “We've never had so many people say they feel politically homeless.”
    @ 01h 09m 50s
    July 11, 2025
  • The All-In Podcast Finale
    An entertaining wrap-up with laughs and tequila talk.
    “I have better tequila.”
    @ 01h 28m 54s
    July 11, 2025
  • A Call for Merch
    The hosts jokingly discuss the need for merchandise.
    “We need to get merch.”
    @ 01h 30m 39s
    July 11, 2025

Episode Quotes

Key Moments

  • Magnificent Hotel @ 00:43
  • Food Automation @ 14:59
  • Human Knowledge vs. AI @ 23:32
  • Browser Evolution @ 48:48
  • Super PAC Evolution @ 1:08:22
  • Political Homelessness @ 1:09:50
  • Lake Life @ 1:25:02
  • Merch Discussion @ 1:30:31

Related Episodes

  • Uber CEO Dara Khosrowshahi on self-driving's future, changing business model, job displacement
  • NBA Gambling Scandal, Billionaire Tax, Tesla's Future, Amazon Robots, AWS Outage, Dangerous AI Bias
  • Meta's scorched earth approach to AI, Tesla's future, TikTok bill, FTC bans noncompetes, wealth tax
  • Elon Musk on DOGE, Optimus, Starlink Smartphones, Evolving with AI, Why the West is Imploding
  • Trump Rally or Bessent Put? Elon Back at Tesla, Google's Gemini Problem, China's Thorium Discovery
  • E168: Can Google save itself? Abolish HR, AI takes over Customer Support, Reddit IPO teardown
  • Trump AI Speech & Action Plan, DC Summit Recap, Hot GDP Print, Trade Deals, Altman Warns No Privacy
  • Winning the AI Race Part 3: Jensen Huang, Lisa Su, James Litinsky, Chase Lochmiller