Jason, do you want to tell us about your new favorite podcast? Oh, it's so good. My feed is now, because, you know, since cancel culture ended, Sacks, everybody uses the R word and the F word right now. My entire feed on Instagram is either gay or Down syndrome or bulldogs. It's one of those three. And then I stumbled upon the Miss Thing pod, Miss Thing. And they do a bit called gay name, straight name. Here's gay name or straight name for David. There's good news and bad news, Friedberg. Here we go. >> Gay name or straight name? >> David. >> David to me is straight. >> Okay. But he has my perfect body. It can be confusing because I'm kind of like, are you gay? And it's like, no, I just want to be you, David. >> Totally. Well, it's so like Michelangelo's David, the male ideal. It's like incredible body, kind of small. Sorry. >> Yeah. >> Oh, >> it's a little rough. >> What? >> What are you watching there, J-Cal? >> They basically nailed these two, but okay, keep going. >> I don't think Chamath is on their short list, but I know Jason will come up at some point. >> Gay name or straight. >> Maybe this is it. >> Chamath, on the count of three. Yeah. >> Three, two, one. Gay. I'm seeing like an Italian sweater, like really kind of like a loud, vibrant sweater. He like wears it to like poker night with his like his boys, and like, I'm not talking like straight poker. I'm talking like gay poker nights, like at the bar. >> Yeah. Always talking about wine. >> Talks about wines. >> Always sort of like Yeah, exactly. >> Yep. Also, it's so like the guy at the gym taking off his shirt, taking selfies. >> Yeah. And everyone else is kind of like, "Excuse me, Chamath. I'd like to use the mirror. I'd like to see myself." >> See you at the next gay poker night. >> Totally. >> You bring the wine. >> We'll bring the sweater. >> Yeah, >> there it is. Wow. They did, too. >> That is fantastic. That is fantastic. A shout out to my guys at the Miss Thing podcast. >> Wow, that was awesome.
I think I'm gay. I never knew. >> Let your winners ride. >> And it said we open sourced it to the fans and they've just gone crazy with it. >> What did you do, like Cameo? Did you pay them to do that? >> Did it for me as a favor. So they did it. >> That's awesome. Well, thanks to those guys at the Miss Thing. I've seen those guys before in clips. I find them very funny. >> It's so great. Shout out to my guys. >> That was awesome. >> All right, everybody. Seriously, welcome back to the number one podcast in the world. It's the All-In podcast with me again, David Friedberg, David Sacks, and of course, I'm Jason Calacanis. You can call me J-Cal if you're here for the first time. Topic one: OpenAI. They missed their targets for ChatGPT, Friedberg, both on users and revenue. Let's talk about it. The Wall Street Journal says in a breaking investigative report on Tuesday that OpenAI expected to hit 1 billion weekly active users before the end of 2025. They missed that, and they still haven't hit the milestone four months into 2026. Also, Chamath, they missed their 2025 revenue target for ChatGPT. The exact number wasn't specified, but as we've talked about here, they're at a $20 to $30 billion run rate. There's a little bit of accounting nuance that is yet to be worked out in the industry. Two reasons why this matters. Sacks, OpenAI has $600 billion in spending commitments for compute. Just to put that in perspective, that's about what they're trading for on secondary markets. In other words, the entire value of the OpenAI enterprise equals their spend commitments in the coming year. CFO Sarah Friar, who is coming to Liquidity, is reportedly worried, hey, that revenue isn't growing fast enough to keep up with expenses, and OpenAI wants to IPO later this year. This has put Friar and Altman in conflict, or maybe there's some natural tension there. Friar doesn't think OpenAI is ready for public reporting standards.
According to the Wall Street Journal, Altman obviously wants to move faster, so they released a joint statement: this is ridiculous, yada yada yada. Let's go to you, Sacks. What do you think's going on here? Are these major headwinds, or is this just managing expectations as the leader of the pack in the most important race of our lifetimes, the race towards superintelligence? >> Well, I actually have a little bit of a contrarian take on this. I know that OpenAI had a really bad week. Like you said, they had that Wall Street Journal article which said that they missed their numbers. They missed their 1 billion user growth target. They missed their revenue numbers. That's called into question whether they can afford the data center commitments that they've made. And then in addition to that, they've also had the lawsuit with Elon happening this week. So in the press, it ended up being, I think, a pretty bad week for them. But I have a contrarian take on this, which is I think that over the past week or two, if you look at kind of what's happening at the product level, it's been a pretty good couple of weeks for them. They released ChatGPT 5.5, and the reviews from, you know, people I talked to in Silicon Valley have been really strong. You talk to developers, coders, they're very happy with it. At the same time, Opus 4.7, which is the latest Anthropic release, appears to be a bust. People are complaining about it. In a lot of cases they're rolling back to 4.6. They're saying that Opus 4.7 is rationing compute. It's reducing thinking time, not as good. There were some bugs in Claude. So if you just compare ChatGPT 5.5 to Opus 4.7, it does appear that OpenAI has had a better couple of weeks on a product level. And I think there's reason to believe that the product improvements will continue. GPT 5.5 is based on a new base model called Spud, which is the first base model upgrade they've done in, I don't know, over a year.
and having a new base model will pave the way for future improvements as well. So I think OpenAI is feeling pretty optimistic about their product right now, and I think you're starting to see on X some of the developer mojo is shifting. I'm seeing a lot of people saying that they are shifting their coding usage from Opus to GPT 5.5. So I think that Sam may end up being right, but for the wrong reason. And what I mean by that is that when he made these big compute commitments, it was based on those estimates of hitting the billion users on the consumer side and hitting those revenue targets. The consumer business ended up being weak, so they missed those targets. But in the meantime, coding has become the all-important sector of AI. And because they made all these compute commitments and they built out these data centers, they have more compute than Anthropic right now. Anthropic is token constrained. It's reducing their ability to serve Mythos, for example. It's causing them to engage in compute gating with Opus 4.7. And I understand why Dario made that decision. I mean, it was a prudent business decision. I'm not criticizing him for it, but again, I think Sam may end up being right here for the wrong reason, which is he missed on consumer, but enterprise is going gangbusters and is giving him the ability now, I think, to catch up on code. >> Your Polymarket >> which is the all-important market right now >> of course, and we talked about Grok and Cursor teaming up last week, Elon and the team over there. Polymarket is showing now a 32% chance that OpenAI goes public by the end of 2026. This is down from 60% in December. And Chamath, you gave a bit of a warning: hey, there's only so many dollars to go around. The SpaceX IPO is obviously getting out first, and now if OpenAI doesn't go out this year and Anthropic does, that sets up an interesting dynamic. What are your thoughts here, generally speaking, about the massive commitment that OpenAI has made?
Are they going to run off the cliff, or will it wind up being brilliant, even if it wasn't strategically for the exact reasons? >> I think they're going to be fine. I think this is a multi-trillion dollar company. I think Anthropic is a multi-trillion dollar company. I think the thing that's happening right now is a complete misunderstanding of what's actually happening inside of the world of AI. And there is one very specific choke point that is constraining everything, which is access to the power that's necessary to drive these tokens. To the extent that OpenAI missed, I think what that is is an insight into not enough compute capacity today. And that problem is only getting worse. You've already seen that with Anthropic as well, where they just found a way to economically induce Amazon to give them enough capacity so that you don't have to route through Bedrock to get to the Anthropic models. You're also seeing them do differentiated deals now, with economic participation on top of what they already had, from folks like Google to give them more capacity. What is my point? Everything in this market is power constrained. The reason that these folks may miss a number or a forecast has nothing to do with demand. It is entirely, 100%, due to the supply of the power necessary to generate the output token. There is a really interesting thing that was just announced today that will make this problem even worse, which is what you're starting to see now is backlogs build up of not just the access to the power, but then the componentry that's actually necessary. Not just recip engines and not just nat gas turbines, but now you're talking about transformers and all the actual tactical grid infrastructure. Why is this important? If you look at the actual amount of gigawatts that are under construction, we have a huge mismatch now. People have announced all these projects, Jason, but less than half of it is actually being built. Less than half. Most of it is stuck in red tape.
Most of that is because there are these supply chain delays. So there's no credible strategy to turn any of this stuff on. Who will this hurt? It will hurt Anthropic and OpenAI the most. Who will this benefit? It will benefit the hyperscalers, specifically Oracle, Amazon, Meta, Microsoft, and Google. And now what you're going to see is a negotiation and a trade back and forth: how much equity do I have to give up, how much control do I have to give up to get access to the compute, versus how badly will I miss my growth forecasts if I don't? And now what that means is, and we spoke about this last week, that's a huge lane for Grok to just run through, and SpaceX to run through, cuz they have a ton of excess capacity. And so I think the Cursor deal was the appetizer. But if I were Elon now, I'd be running all over this market, because if the models catch up in quality, I think he could also do something really crazy with Anthropic or OpenAI right now. Maybe not OpenAI because of the >> we'll get into the lawsuit in a bit >> the baggage. >> Yeah. >> But man, he and Dario should do a deal tomorrow. >> So your framing is, hey, the limited resource here is compute. The demand is off the charts. >> No, the limiting resource is power. Power, which then powers compute, which then provides tokens, which then services the massive developer and coding work and all these other projects that consumers and enterprises can't get enough of. Got it. >> And Jason, the other factor that complicates that for Anthropic and OpenAI is all the stuff that's sort of sitting around thumb-twiddling. 40% of that is going to get cancelled, because they've done such a poor job of creating a good positive halo around AI, that 40% of all the announced projects get cancelled, because 40% of all projects in the last four years have been cancelled. Yeah. And there are some bad feelings about data centers, AI, jobs, etc. And that's causing some headwind.
People are literally doing violent things in society and blaming data centers and AI for it. I don't want to give it too much air time. Friedberg, what's your take on the chessboard we're looking at here? Either through compute, energy, or through going public on a business level, you know, the strategic nature of capital, compute, and energy now playing a role in this massive amount of demand. Still a ball-in-the-air kind of game. >> BCG had this theory, I think I talked about this once before, called the rule of three, where they've shown time and again that any stable, mature, competitive market evolves to a 4:2:1 ratio of market share for basically 90% of the market. So there's a market leader that has two times the market share of the second place, which has two times the market share of the third place. This is the case in pretty much every mature, competitive market. So you can kind of think about AI probably evolving into a consumer market and an enterprise market. OpenAI, even if they're not at a billion, they're still at 900 million weekly users, which is well ahead of whatever Claude is at. I think Claude is probably sub-hundred million, Sacks, you may know. And then Gemini is probably closer to them, at 700 million to a billion, somewhere in that range. Probably pretty neck and neck with OpenAI. So, you know, the consumer market looks like it's trending towards a ChatGPT/Google fight for first place and second place, and then probably Anthropic in third place, and maybe Elon emerges and takes off, enabled by his compute capacity. And then the enterprise market is a little bit of a different story, and that's its own market, which is kind of Anthropic, or probably Google in the lead, actually, if you look at all the Vertex use. Google claims that 75% of GCP customers are active users of Vertex. So there's probably a pretty sizable market share that Google's captured on the enterprise side as well.
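The 4:2:1 rule-of-three split is easy to sanity-check with a little arithmetic. This is just a sketch of the ratio as described on the show (the exact BCG formulation may differ): if the top three players hold roughly 90% of the market in a 4:2:1 ratio, their absolute shares fall out directly.

```python
# Hypothetical sanity check of the "rule of three" as described:
# the top three players hold ~90% of the market in a 4:2:1 ratio.
ratio = [4, 2, 1]
top3_total = 0.90

# Each player's absolute market share is its slice of that 90%.
shares = [top3_total * r / sum(ratio) for r in ratio]
print([round(s, 3) for s in shares])  # leader ~51%, #2 ~26%, #3 ~13%
```

So a "stable" AI consumer market on this rule would mean the leader at roughly half the market, which is close to where the 900-million-user figure for ChatGPT sits relative to the Gemini estimates quoted above.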
This is also probably why Google stock has absolutely ripped over the last couple of months: they're literally in first place, or fighting for first place, in enterprise and consumer. But I still think that there's a lot of opportunity, to Chamath's point about the compute and energy capacity constraints, in improving how we actually scale and deploy models in both the enterprise and the consumer setting. And it is such early days, and I just want to highlight this paper that came out from MIT, from these two scientists, who published a paper on pruning techniques in neural networks. This paper showed that you could actually reduce the size of these networks by 90% and get the same accuracy out, by pruning very large models down to smaller models. And then you can make a selection on which model to run for inference. And by doing this, you can actually reduce inference costs by 10x. You can get 10x the output per energy unit that goes into the data center with no loss of accuracy. And so it's a really interesting, call it algorithmic, technique that can be applied to the existing large models to actually make them much lower energy use. So if you think about it, you're firing up a very large model to answer a very simple question. You can actually prune away that model. Now, this is probably going to be the case in AI applications as it is in traditional Google search. There's a long tail of searches, but there's a few searches that account for a large percentage of search volume. It's like, what is the weather? What are the movie times? You know, what's the stock price? Like, there's a certain set of things that make up the bulk of consumer queries. And there's probably a certain set of things that probably make up the bulk of coding output as well.
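As a toy illustration of the two ideas in that discussion, here is a hedged sketch: magnitude pruning (zero out the smallest weights) followed by a router that sends common "head" queries to the cheaper pruned model. The pruning criterion and the routing keywords are assumptions chosen for illustration; the MIT paper's actual method and the production routing logic are not specified in the conversation.

```python
# Toy sketch (assumed details): magnitude pruning zeroes the smallest weights,
# and a simple router sends common "head" queries to the cheaper pruned model.

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights, keeping (1 - sparsity) of them."""
    n_keep = max(1, round(len(weights) * (1 - sparsity)))
    # Indices of the n_keep largest-magnitude weights survive the prune.
    keep = set(sorted(range(len(weights)), key=lambda i: abs(weights[i]))[-n_keep:])
    return [w if i in keep else 0.0 for i, w in enumerate(weights)]

def route(query, head_terms=("weather", "stock price", "movie times")):
    """Pick a model size: frequent head queries go to the small pruned model."""
    q = query.lower()
    return "small" if any(t in q for t in head_terms) else "large"

weights = [0.01, -0.90, 0.02, 0.45, -0.03, 0.04, 0.80, -0.05, 0.06, -0.07]
pruned = magnitude_prune(weights, sparsity=0.9)
nonzero = [w for w in pruned if w != 0.0]
print(nonzero)                                   # only the largest-magnitude weight survives
print(route("what is the weather in Atlanta"))   # small
print(route("refactor this whole module"))       # large
```

The claimed 10x savings would come from the head of the query distribution landing on the pruned model; real systems would presumably prune structured blocks of a network rather than a flat weight list.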
And so if you can get that 80% of searches or chat interfaces or coding requests reduced down through pruning techniques to smaller models, and then you have a whole set of smaller models that can be called dynamically, and you reduce inference cost by 90%, you can make much more use, call it 10 times the use, of data center and energy capacity than we can today. So I would argue that we're still in the very early days of getting efficiency in terms of output and tokens, and we're just in the very early stage of that, which also unlocks the opportunity for guys like Elon to reinvent how this is done and potentially compete pretty aggressively. >> There are two ways to win. You could throw compute at it, or you can do SLMs, small language models, and V-SLMs, verticalized small language models. So if you had a verticalized small language model for the weather, let's say, that doesn't exist, but you can use it as an example. They will have one for travel, as an example. When you hit Google for flight information, it's obviously going to route you to something lighter and faster that uses Google Flights. And Google Flights has been incorporated into Gemini. Gemini now is right behind at 700, 750 million users, >> and it's exactly what we discussed, I don't know, 18 months ago on this podcast, Friedberg, that what if they put it at the top >> and what would that do to their search revenue. Search revenue is surging and they're also surging. So they figured out a way to balance those two competing forces, having search results that are AI enabled and still getting people to click on links. They've done it brilliantly, apparently, and the stock is rewarding them. >> I'll just add one statement to what you said, which is you're using what I would call a human heuristic. Humans don't intuitively know what this model is.
It's not just a verticalized model, but there are going to be models that will be discovered through automated pruning techniques that will then be working in concert. So, lots of small models that link together. And we don't define each model by some human heuristic like, this is a search travel model, >> this is a maps model. >> We don't know why these models work the way they do when they get broken down. But I do think that that's really where the evolution is happening. So effectively, a model becomes a macro model. It's got lots of smaller models underneath it that can be dynamically called, and that allows you to have 10x the inference for the same unit of energy. >> Sacks, >> let me just build on your point about Google, J-Cal. I would say that if there's a single reason why OpenAI did not hit its user targets and its revenue targets, certainly around consumer, you'd have to say it's because Google managed to take meaningful share. You know, they were basically nowhere a year or so ago. Sergey came out of retirement, helped focus the company, and like you said, they did a brilliant job improving Gemini and putting it at the top of search, incorporating it. Now, that being said, again, I don't think the news is all bad for OpenAI, because I do think that the 5.5 release was great. We're hearing really good things about Codex. I do think that Codex is taking share in coding tokens right now. And I just think we're in a really interesting place where these companies are constantly one-upping each other. I mean, two weeks ago it looked like Anthropic was going to be completely dominant, right? I mean, Anthropic was growing at 10x. OpenAI was growing at 3x, and it looked like >> and then the servers started going down. Did you see that this week? The servers going down. People in my office were complaining, we can't get on Claude. >> Listen, competition brings out the best in everyone. Anthropic forced OpenAI to compete.
Google's forced OpenAI to compete in consumer. I just hope the market stays competitive for as long as possible. I do think that's what's best for consumers, our economy, and for our country overall. Let me just say one other area where I think OpenAI had a good week is in this red-hot area of cyber. Obviously, Anthropic made a huge splash with Mythos. It hasn't been commercially released. They're compute constrained. But as a proof of concept or training model, it hit a new level of capabilities with cyber. But now OpenAI has released a new model called GPT 5.5 Cyber, which has just been through a bunch of tests, and they've shown, this was testing done by the AI Security Institute, that GPT 5.5 is the second model to complete one of their multi-step cyber attack simulations end to end. So it has the same level of capability as Mythos, and it does appear to be commercially ready. You know, they've got the compute to serve it. So I do think that that's a big accomplishment. I mean, look, we knew that other cyber models were coming. It wasn't just going to be Mythos. In fact, within 6 months or so, all the frontier models are going to have Mythos-level cyber capability. But it's impressive that OpenAI got this GPT 5.5 Cyber out so quickly, and I think 5.5 might be the first cyber model that cyber defenders actually get to use, because again, I don't think they're as compute constrained as Anthropic is. >> And this is an incredible opportunity, you know, for the CrowdStrikes and Palo Alto Networks of the world, both of which have been on the program. They come out and they start attacking this space, man, you could really see everything get tightened up, and this could be an incredible revenue stream for everybody, whether it's Cursor, Claude, or OpenAI, or Gemini. This is an amazing opportunity to tighten up as much as it is to get attacked. >> Can I make a point about that? Cuz look, there is so much fear right now, almost the level of panic, about Mythos.
People are treating it like a doomsday weapon or something like that. It's not. It's simply that the frontier models have reached the point where they're capable of automating cyber activities, just like they're capable of automating coding. But that means that a model could power up a cyber attacker or a cyber defender the same way they can power up a coder, and allow them to discover a lot more vulnerabilities. So there is obviously a risk there. But I think it's important to understand that Mythos or GPT 5.5, it doesn't create the vulnerabilities. It just discovers them. The bugs were already in the code. They were sitting there waiting for some hacker to discover. If we can now use AI to find these bugs in advance, these vulnerabilities, and patch them, then you actually harden our infrastructure and you harden our security. I also believe that this leap from, let's call it pre-AI cyber to post-AI cyber, is going to be, I think, a big one-time upgrade cycle, because again, you're going to find all these dormant bugs and vulnerabilities. But I think that once we get past that upgrade cycle, you're going to reach a new equilibrium between AI-powered cyber offense and AI-powered cyber defense. It's going to become a lot more normal. It's not going to feel like this huge disruption. Which is to say, I think, you know, people are treating this as like some existential threat. I don't think it is, as long as everyone does what they're supposed to do, which is use the new capabilities to harden their code bases and infrastructure and security before the hackers get a hold of these capabilities. >> Yeah. And Chamath, if you were to look at this, to build on Sacks's point, there are about 5 million or so security experts in the world. We talked about token cost: 40 hours of tokens a week, just pounding it. You could create another five million of them for a hundred dollars per chief security officer, per security expert. So it's the volume of security expert agents, Sacks, to your point.
Yeah. You could have 50 million of them, 100 million of them. They're not finding something unique. They're just... they never sleep. They're relentless in their pursuit of these problems. It's a really great point. >> Just to kind of refine that. So yeah, there's probably 5 million people in the cyber industry, but there's probably only a few thousand really elite hackers. >> Sure. >> Those hackers didn't have the time to go after the entire surface area of every possible attack vector out there. And so if you train a model to do what they do, obviously, like you said, it can operate with a scale and speed that a human hacker can't. So obviously, you know, what you need to do is get these tools in the hands of the white hats, let them do the cyber attacks themselves, to then find the vulnerabilities and patch them before the black hats get a hold of these capabilities. But just one last point on this, and I'll stop. It's really important to understand that the Chinese models are going to have these capabilities within approximately 6 months. >> Oh, they have them now in DeepSeek 4 for sure. They've got some level. >> Well, no. DeepSeek 4, I mean, DeepSeek 4 is impressive in a lot of ways, but its capability is not at the frontier. It's maybe 80, 85% of, let's call it, the American frontier. >> Chamath, you wanted to get in on this. Let's get Chamath in. >> Two things. The reason that this is even possible is because humans are error-prone, and when humans code, they create holes. And so, humans exploiting humans is where we've been for a long time. Now we have computers exploiting humans, because the computers go and seek out all these bugs that humans wrote. In the next phase it'll be machines versus machines. And so I think the nature of cyber is going to completely change. Probably in the next five or six years, there'll be so much reason to rewrite all of the software that runs the world.
In one part because you're going to be asked to show more operating leverage and revenue growth, but in another part because everything else that was handmade in the past is just fundamentally insecure. Either way, all roads lead to all the operational software that runs the world getting rewritten. More and more of it will be written by machines. More and more of it will be impregnable as a result. But then the cyber threat actually will only increase, because then you're going to try to figure out how to use a machine to inject something into another machine, so that some agentic loop injects some malware or injects a bad token. And I think that's a very complicated thing. What I will tell you is, I'm not even sure if I'm allowed to say this, but a very good, probably the best cyber security company in the world, run by one of the very best CEOs in the world, who may or may not be speaking at Liquidity, would tell you that they have penetrated and can essentially manipulate every model. Let me just say it roughly that way. >> Okay, perfect. Yeah. And at the Breakthrough Prize, which three of the four of us were at, I talked to George Kurtz. The other person you were kind of describing was not that. >> Sitting beside Nikesh. Yeah, I'm talking about >> Nikesh and George are the two guys leading this, Palo Alto Networks and CrowdStrike, and they understand it. What George told me was, there is just a line out the door of people who want this product or service. And if you look at it, Friedberg, like the murder rate, like we're sitting here with the lowest murder rate in the history of humanity. It has gone down massively in our lifetimes, but massively over the arc of history. I think that's what's going to happen with cyber. There are only so many attack vectors, and the remaining attack vectors are just going to be human factors, right, Friedberg? That's always been the case.
And as we make the software more resilient, then the weak link is, you know, the secretary who puts her Post-it note, you know, with the password there, or the accountant who, you know, uses their dog's name plus one, two, three for their password, right? That's the historical one. Okay, anything you want to add, Friedberg, as we wrap there? Oh, that's >> Why is your bed so messy, by the way? Why can't you just ask the room service to come in? >> Listen, I can tell you what happened. I'm here in Atlanta. >> And also, why don't you have a suite, like where there's two rooms? Like, is it just one room? This hotel only has one? >> It's just one room. Yes. >> You know, either you're cheap or poor. Which one is it? >> I'm cheap. I'll tell you what. Here's the situation. I'm in Atlanta for the Knicks game tonight. >> You're in one room. >> Here's what I do. I just want to explain to you, value for value. Some people spend their money on private jets and they spend $30,000 flying to Atlanta. I spend 30,000 on courtside seats. I don't want the suite. I want to put it into the seats. >> You can do both. >> I guess I could do both, too. I'm in the process of becoming >> I don't understand >> of embracing my richness. Okay. >> If you've already convinced yourself that you should spend $30,000 for courtside tickets, which I think is outrageous, but okay, you've already convinced yourself, like 10k each, but yeah. Yeah. >> A hotel room that has two rooms, okay, >> probably costs 15% more than what you're paying. >> It's 2x, but yes, you're right. >> I'll get the hotel room. >> 20% more, but your room is like 200 bucks a night. So you pay 400 a night. You get another hotel room. >> I mean, it's Atlanta. I'm in the best hotel. The most expensive hotel is 500 a night in Atlanta. It's no big deal. But everything's sold out because all the Knicks people are coming here.
>> So you're telling me double that would have been a thousand, and you couldn't spend a thousand. >> Everything is sold out because the Knicks are here. >> So we have to look at your dirty beds. >> It's gross. >> The bed's not that dirty. Come on. Just deal with it. Okay. Take it out in post. >> I have a private jet story about flying to Atlanta. You reminded me. Okay. So yeah, there was some event there. So I flew my team there, you know, there's a few people on my plane, and it's kind of a long flight. Was it like 4 hours or something? >> From the back? Yeah. >> Yeah. So I went in the back to sleep. Well, first, you know, we started the flight and I had a few bottles of Pappy Van Winkle on the plane. And so we started off with like a drink, and then I went in the back and fell asleep, and I woke up basically when we landed. So I come out, and like all three bottles are basically cashed, of like Pappy Van Winkle. >> Oops. Those were like two grand a bottle. >> No, no, they're more. These were like antique bottles. Like one of them was >> I have one of those from your plane. I have one of those from the old Falcon. Yeah. >> Yeah. They were like these vintage >> $4,000. I remember. Yeah. >> Anyway, you can't even find this [ __ ] anymore. So these guys, they asked me when we land, like, "Hey, Sacks, how much did it cost for you to fly us to this event?" And I said, "Well, about $8,000 in jet fuel and about $12,000 of Pappy Van Winkle." >> Well, you've got to fuel the vibes as well as the plane. >> Is Atlanta nice? I've never really >> Do those people still work for you, or are they, uh >> they called in Atlanta? Um, is Atlanta nice? Listen, last year I went to the Detroit games, and that city was on the rebound. Atlanta has an incredible opportunity to rebound. I'll say it that way. There's a great opportunity for them to upgrade the city. I went to Waffle House at midnight last night. There were no shootings. Okay, let's keep moving.
By the way, do you get loyalty points at the Best Western Atlanta, or >> No, I get double points because I use my Best Western Visa card. Yeah, it's everywhere you want it to be. All right. Use the promo code J-Cal and get a thousand extra points. In other OpenAI news, Musk versus Altman, the trial of the century, or maybe the decade, has started. Elon is of course accusing OpenAI of breach of charitable trust, unjust enrichment. He's accusing OpenAI of essentially flipping a nonprofit into a for-profit. He's seeking $150 billion in damages, that they revert back to a nonprofit, and that Altman and Brockman be removed. And there were some fireworks between Elon and the OpenAI lawyers. Elon kind of leveled up the discussion. He said, quote, "If we make it okay to loot a charity, the entire foundation of charitable giving in America will be destroyed. That's my concern." Obviously, there's a ton of interesting nuances here. Specifically, Greg Brockman keeping a diary where he was journal-maxing his plans, like a Bond villain here. And the excerpts from his diary include: conclusion, we truly want the B Corp. The true answer is that we want Elon out. If 3 months later we're doing B Corp, then it was a lie. Can't see us turning this into a for-profit without a nasty fight. I'm just thinking about the office, and we're in the office, and the story will correctly be that we weren't honest with him. In the end, it's still about wanting a for-profit, just without him. Yada yada yada. Friedberg, your thoughts on this case? Is Elon going to win? >> I just don't know why Greg Brockman's got a freaking diary where he's literally documenting... I mean, I love the guy, but what the [ __ ] is he thinking? Like, you're just sitting here at home and like, let me write about the crime I'm committing, or let me record it. And by the way, let me never delete it. I don't understand this. >> It's not just journal-maxing. It's discovery-maxing.
>> It's smoking gun maxing. >> I don't get it. I don't get it, man. >> I mean, do you guys remember that scene from The Wire where the guy's like, "Is you taking notes on a criminal conspiracy?" He's got everybody in the room. Can we play that clip? It's like, >> "What are you doing, Greg? [ __ ] is you taking notes on a criminal conspiracy? What the [ __ ] is you thinking, man?" >> If you're going to commit a crime, you do not write down the date and time of the crime in your journal. >> Well, look, we don't know it's a crime. Let's not. >> Okay, sure. >> Maybe not a crime, but yes, you're keeping track of shenanigans. Chamath, do you keep a diary? >> What do you think, J Cal? Do you keep a diary? >> Do I ruminate? No. I'll tell you right now, rumination is the path to unhappiness. Nobody gives a [ __ ] about your feelings. Writing your feelings down is only going to make you miserable. Talking to your spouse about your feelings. >> Just go to a beautiful dinner, sit courtside at the Knicks, and do what I've been doing for 30 years. >> [ __ ] maxing. >> [ __ ] maxing. And the register goes up. All you have to do is work. Start new projects. Nine out of 10 fail. Place nine out of 10 bets. One wins and you're golden. Go sit courtside at the Knicks game. >> Keep going. Life's too short. >> And just keep moving forward. Don't write anything down. Period. Full stop. >> It's good advice. Yeah, the biggest surprise to me was this guy's got a diary. I just don't know anyone that has a diary. I've never heard of this. So anyway, that was shocking. Besides that, I have no view on what's going to happen with the case or what the judge will do. >> I have no comment on the case either. I think it's weird that Polymarket hasn't budged even as all of this discovery has been published. It's effectively at 42 or 43% that Elon wins. So, one of the friends in our group chat said what may just happen is that Elon technically wins and he's just credited back the $40 million. 
And so, maybe that's what this market is front-running. But on a totally separate note, I think, Jason, I know you say it as a joke, but this idea of just keep moving forward, don't ruminate, I think is very good general life advice for everybody to follow. >> The modern-day therapy industrial complex and the medication industrial complex, I believe, pivot >> around rumination. Well, it does pivot around rumination. >> Yes. >> That is the gateway drug to all these things. >> Yep. Talk about your problems. You know, when these people go to therapy, you ever hear these people? Howard Stern's like, "I've been in therapy with the same person two or three days a week for 40 years." I'm like, "Okay, what's the incentive for the therapist to stop charging you $1,200 an hour? There is none." Then they lose a revenue stream. They lose a customer. It's all a giant [ __ ] fraud. Facts. Uh, in terms of this case, >> I wouldn't go that far. I do think that there's a lot of value in kind of untying some of these Gordian knots that people have because of how they grew up. But there's a difference between that and being specific versus just randomly ruminating, cuz I don't think there's a lot that's productive there. >> You've got an acute issue, like a trauma in your life? Yeah, sure. Unpack it, figure it out. I'm just talking about this neverending self-improvement, you know, ruminating thing. Uh, but getting back on topic here, Sacks, what's the And we're talking about a jury, I believe, in Oakland. >> No, but it's a bench trial. This is important. It's a bench trial where the jury is advisory in capacity, >> but ultimately that judge, she will make the final call >> and she'll do the damages. And so, is this a case, Sacks, of like we've got a Bay Area jury and judge, and we've got Elon, who's considered, you know, a bit right-wing, and people don't all agree in that area in terms of his politics. 
And then you have this Sam Altman New Yorker story and people finding out that so many different people feel they got screwed by him. You put these two things together, it's impossible to handicap where this turns out. Sacks, your thoughts? >> Well, yeah, I don't think this is about politics. I mean, I guess you could argue that what Elon is seeking, which is to protect the charity, is if anything a left-coded sort of principle, although I don't really think it's left versus right. Look, I don't want to take sides on this trial. I'm just watching like everyone else. The last time I weighed in on some Elon litigation, I got deposed for six hours. Remember that? Cuz they just assume that somehow I know something, >> right? >> I've never talked to Elon about the case. I don't know anything about it. >> Yeah. >> I'm going to see what happens like everyone else. Now, one thing I will say, having just read some of the coverage, is that apparently the company at some point did offer Elon shares in the company, but he thought that there was something kind of icky about it. Do you remember this? >> Yes. Because at one point I said on our show, when this dispute started happening but before it became a court case, I said look, if OpenAI at a certain point decided they had the wrong structure, they should have just gone and done a make-right with Elon and he should have been a shareholder on the cap table. What I didn't know is that apparently they did try to do something like that, but Elon turned it down because he did want the entity to remain a charitable entity. 
In other words >> He had a principled view of it, according to the reports, and was like, no, we're trying to save humanity, and then you're giving the keys to the kingdom to Microsoft. That's all come out. >> And I also have not talked to Elon about any of this, but my guess is, like most of these things, there'll be some sort of settlement or something here, but maybe he takes it to the mat, who knows. Judge Rogers, who's overseeing this, is a 61-year-old Obama appointee. Politics has played a role, Sacks. They have had to tell the jury, like, however you feel about these individuals politically, whatever, please put that aside. But of note is that she oversaw the Epic Games versus Apple trial over App Store exclusivity and ruled in favor of Apple with some caveats, um, that they don't have a monopoly, etc., etc. So, this is going to be a really interesting one. And I think the worst case scenario for OpenAI is they have to unravel this somehow, and that would delay the IPO. That would cause chaos in shareholders. And I guess the best case is some sort of settlement. And if Elon put the first 40 or $50 million in, he's due 10, 20, 30% of the company after dilution. All right, let's keep moving through the docket. Lots more to discuss, and, uh, good luck to everybody in their lawsuits and those of you betting on Polymarket. The All-In Summit is selling out fast. Our fifth edition, Los Angeles, September 13th to 15th. Go to allin.com/events, and, uh, speakers are going to be top tier. Apparently Friedberg is having this as his major creative outlet. I heard some back channel, Chamath, today that he's going to be doing Broadway musical, uh, illusionist, tap. I got a tap dancing situation. >> He's literally going full-on entertainer. This is going to be vaudeville, Sacks, wrapped up. He's just going to take it to a whole new level. Musical numbers >> like Nathan Lane. >> I think if you're decoding that, it's going to be his big gay summit. Yes, it could be a big gay summit. >> Might be our last year in LA, guys. >> Why? 
>> Not might be. >> Might be. Oh, everybody wants to go to Vegas apparently. Throw those bones, baby. Can you imagine leaving the summit for lunch and going and playing craps? Chamath, we get a fresh Yes, I can. Yes, I can imagine. >> I got some bricks. Oh, I got some bricks right here. Let's go. Yum, yum. >> That way Sacks can come. >> Yeah, Sacks is like, I'm never setting foot in California, but I will. You know, we're doing a couple live events. Are you coming to them? >> Liquidity or something different? >> Liquidity. And then the All-In Summit happens in September. >> Yeah, I'm going to do those, too. >> All right. Big tech smashed their earnings on Thursday. Google, Microsoft, Amazon, and Meta all reported. I don't know why they do this on the same night, folks, but they do. And performance was spectacular. It was great. However, the capex announcements were really the story here. Let me just cue this up and show the chart. $725 billion in capex guidance in 2026 from just four companies: Amazon, Microsoft, Google, and Meta. Amazon leading the pack with 200 billion, 190 billion each for Microsoft and Google, 145 billion for Meta. You add Grok, you add OpenAI and some other players to these plans. And we haven't heard from the new Apple CEO yet, but he's going to be taking over, and he's going to have some plans here, I'm sure. We are going to see a trillion dollars in buildout over the next year. I don't know if this is even possible, but this is all being driven by AI and cloud computing. Google Cloud, which includes the Google Suite, that grew 63% year-on-year. Let that number sink in. 63% on 20 billion in revenue. That's in a quarter. Microsoft Cloud, that includes Azure, Windows Server, SQL Server, they bundled some things together there to get the number to go up. Uh, that grew 30% on 34.7 billion in revenue. Amazon Web Services, the original cloud, that grew 28% on 37.6 billion in revenue. That's a bit of a pure play. 
Just counts Amazon's web services. Obviously, these are all moving to neoclouds. These are all serving AI jobs and tokens now. They have a massive customer base, and the customers, from the smallest startups all the way to the biggest frontier models, cannot get enough compute, and it is going to the bottom line. But this is shrinking cash flow massively, Chamath. These were free cash flow machines, the largest money printing machines in the history of humanity. But they are giving up on free cash flow, stock buybacks, and dividends, the focus on those three, to invest in infrastructure. Amazon's free cash flow down 97%; Google, Microsoft, and Meta down 12, 12, and 8% respectively. Your thoughts on this, the end of the free cash flow deluge and the massive, massive investment we're seeing in capex? >> I think we're seeing a very important structural shift in the capital markets. I think the last 20 or 30 years, well, 20 years, it's been that the Mag 7 just kind of ran away with it, that these big companies got bigger and bigger and absorbed all of these investment dollars, and the biggest reason was that they had these very asset-light business models, right? You just built some more software and it just has all this leverage and it all just kind of worked, except maybe for Amazon, cuz they needed physical infrastructure for warehouses and delivery and whatnot. But by and large it was a very asset-light investment cycle. Now all of a sudden the pendulum is swinging violently in the other direction. And there's something that I think people misunderstand, which is, as it moves back to these asset-heavy infrastructure investments, the hyperscalers are signing checks that, I mean, I suspect they can cash, but there's a world in which they can't. I'll give you an example. You know when Microsoft convinced the owners of Three Mile Island to turn their >> nuclear site back on? >> Yeah. >> Do you know what their forward purchase agreement was? 
It was for more than 2x the prevailing spot rate for energy. More than 2x. The problem is that's now for an enormous percentage of their overall energy needs. So if you play that out, and you think these five or six companies all of a sudden are not just spending, Jason, 700 billion a year of capex, which they are, but then from an operating cash flow perspective they're going to be spending 2x the prevailing spot rate because they just want guaranteed demand into the future, where's all this cash going to go? It's not going to go to the shareholder and it's not going to stay on the balance sheet. These companies will now get levered. They're going to get highly sophisticated around the financial engineering. They'll have more debt. They'll have all kinds of different vehicles and term loans and revolvers and all of this stuff. And so, they're going to look like this big bulky industrial business in five years. And I'm not sure that there's a good valuation case to be made at that point. And so I think it may be simpler, and this is what I tweeted, to just follow the dollars. Like a trillion dollars a year going out of the hyperscalers. Where is it going? Just follow those dollars and buy those companies, because those companies are already underpriced. >> This is, uh, obviously reminiscent of something we all experienced. Uh, Nick, can you pull up the Cisco chart I just sent you and put it at max? Uh, we had a massive buildout of the infrastructure of the internet in the late 1990s and into 2000. And what that caused was a lot of aggressive companies to do massive amounts of spending, a lot of retail investors to embrace these stocks, like we're seeing with people trying to get into these private companies. And Sacks, look at the 2000 peak of Cisco. This is the most extraordinary chart ever. It took them 25 years to get back to that peak, and, uh, they had a lost two decades, and we had a massive amount of fiber that wound up getting bought. We talked about that a couple years ago on the program. 
But there's something for you to build off of here when you look at this massive infrastructure. You think it's going to be Cisco Systems all over again, WorldCom, etc.? >> No, I really don't. The issue we had in 2000 was dark fiber. You had all this infrastructure being built out and it wasn't being used. There's no dark GPUs today, as, you know, Brad Gerstner likes to say. So what's driving the capex now is the voracious demand for compute for tokens, and the demand is now pulling forward this additional, um, investment in infrastructure. So I think what's happened here is that the bull thesis for AI just got validated in a single afternoon. I mean, again, you got Microsoft Azure, Google Cloud, Amazon AWS, Meta, they're all basically exceeding expectations, exceeding guidance in terms of where their cloud revenue would be and therefore how much they're going to reinvest in capex this year. I think we were supposed to have 660 billion of hyperscaler capex, up from 350 last year. I think the new estimate is it's going to be over 700. So this is, you know, again, more than 2% of GDP. This is a huge tailwind to GDP. There's another article saying that, I think, in the last quarter AI was 75% of GDP growth. And by the way, this is just the capex part. This is the physical infrastructure. This is not the economic impact of the tokens that are generated inside the token factory. This is the building of the factories. How do those tokens get used? Like we're seeing, they're being used not just to do research or to answer questions, but to create code. And so we're seeing this explosion of productivity in software development. And we're seeing an explosion of bespoke software being created, and that's going to accelerate every part of the economy. Every business that now wants to get code will be able to get code for the first time. Before, they couldn't even hire the engineers they needed to generate it. Now they will be able to. 
So that is a huge unlock of productivity across the economy. Then you're getting into these new use cases, like the coworker use cases and agents, right? So the workflow automations that are happening, it's still early. I don't believe that this is going to replace humans. In the past week, we had that crazy case of an agent deleting a production database in 9 seconds because of a bug. Look, what that said to me is that >> it's not that agents aren't valuable. They are valuable, but they have to be supervised. You know, this idea that you're just going to be able to like automate all the jobs away, it is a massive amount of handwaving over the real technical problems and issues. The agents have to be supervised. Someone has to be accountable. It's not going to be the CEO. The CEO doesn't want to be accountable for thousands of agents. You need people >> despite what Jack at Block said. >> Yeah. A thousand direct reports is a great, like, goal, but it's not realistic. Yeah, >> you need IT people who are savvy, who can supervise this and make sure it's working. They have to be accountable to the CEO. Someone has to drive the productivity. It's like Balaji always said: AI is not end-to-end, it's middle-to-middle. You have to have someone to do the prompting and you have to have someone to do the validating, and I would add the supervision and accountability. So anyway, the larger point, though, is I'm speaking to the fact that I don't think there's going to be this huge job loss associated with this productivity boom that we're going to get. And in fact, I think what's actually happening now is that AI is becoming synonymous with the American economy. I mean, the fact that it's generating 75% of GDP growth, you have this capex explosion, this energy explosion that feeds it, and again, just the beginning of the applications that are being unleashed by these new token factories. I think it's all a very, very positive thing. 
And all these doomers who are trying to throw a wet blanket on it are constantly scaring the daylights out of people. I mean, what do they want the American economy to do? Just stop? I mean, they just don't want any progress. I mean, like, again, you know, when you talk about stopping AI or halting AI progress, what you're really doing is stopping the American economy now. You're basically saying you don't want economic growth. AI is now synonymous with the growth of the American economy. And if there's no economic growth, there's not gonna be money to pay for all the social programs. There's not gonna be money to pay down the national debt. There's not gonna be money to basically build up our national defense. All these things we want to spend money on, we have to have a vibrant economy. And that is now synonymous with AI. So I know that AI may not be popular. I see those polls. But having a strong economy is popular. And I believe that those things are now synonymous. >> It's almost like there was some architect or czar who set up the chessboard in the first year of this to make sure that it was ultra competitive. >> President Trump set the table on this. >> Absolutely. With some good advice, I think. Maybe. >> Friedberg, your thoughts? >> It's always good to have good advisors. >> Always good to have good advisors. Absolutely. Absolutely. >> No, but look, I've said it before. The president just wants America to win. >> Literally, there are people who, if we were looking at this, you know, I don't know, a hundred years ago, it'd be like people were like, "Yeah, you know what? We shouldn't build the highway system," or, we half built the highway system, "Let's stop building the highways." >> No, the highway system was funded by the federal government. There was no competition. It was the most expensive, on an inflation-adjusted basis, I think it was the most expensive project in US history. >> Yeah. And the railroads before that. Like, you can't stop these things. 
They have to keep going. >> It's an interesting point. You know, there is so much demand for the resource of tokens, of intelligence, Friedberg, and it's quite different than the fiber situation, as Sacks correctly points out, where we built all this but we didn't actually have an application. Here the application is pretty well-known, and you've got a large number of people and businesses who are trying to vibe code their way to success, trying to push this stuff. And we had an interesting story, referenced earlier in the show, where, uh, Claude ate somebody's homework. This is the nightmare of all nightmares. Somebody was vibe coding. Uh, it was the founder of Pocket OS. Apparently, they make software for rental car companies. He was using Opus 4.6 through Cursor's AI coding platform, uh, on, you know, like the most expensive tier. Uh, and he said he configured it with enough safety rules, but the agent was working on a routine task, saw some sort of credential mismatch, and decided to fix the mismatch by deleting a Railway volume without user confirmation. And, uh, they pushed the code from a repo to a live app and they deleted everything, including the backups. Literally a scene from HBO's Silicon Valley, the Son of Anton clip. Hilarious. >> You gave your AI permission to overwrite code in the internal file system. Were you going to tell me about this? No, I thought that was the company policy these days. >> Okay, well, your AI just failed epically. >> That's unclear. >> It's possible the Son of Anton decided that the most efficient way to get rid of all the bugs was to get rid of all the software, which is technically and statistically correct. But artificial neural nets are sort of a black box. So, we'll never know for sure. >> How did they get that so right, Sacks? Five or six years ago: artificial neural networks are a black box, so I guess we'll never know, but technically it was correct. 
Friedberg, when you blow up the Ohalo system with your vibe coding, which you were absolutely showing off in front of Jensen a couple of weeks ago, about how much code you're pushing, who are you going to blame? Are you going to take responsibility yourself? Are you going to blame Claude or Cursor? Who are you going to blame when you blow up the entire stack over at Ohalo? >> Who do I blame? >> Yeah. >> I'll blame Dario. >> You blame Dario. Okay, that's what I thought. That's the correct answer. Correct answer. Blame Dario. He's the one who says it's a doomsday machine. Uh, come on the pod, Dario. Seventeenth invitation. >> I mean, I've invited the guy like 17 times. He is totally ghosting me. He wants nothing to do with this podcast. >> Actually, let me speak to that. So, I think that, um, there's maybe a misperception that this error occurred because of, you know, quote unquote AI scheming, >> like kind of in that video, that the AI decided that the best way to get rid of bugs is to basically eliminate the codebase. This is kind of like the, you know, AI is going to turn the world into paper clips type thing, where somehow it'll, like, scheme. That's not really what happened here. This is a case of just old-fashioned bugs occurring at an edge case. You know, you've got the fact that this API was not designed for permissioned usage. You've got the fact that a credential was left kind of lying around, where it probably should not have been. There's kind of like a perfect storm that caused the AI, or the agent, to do something that it didn't quite understand it was doing. I think that if there is a systemic problem here, rather than just kind of a random edge case, it's that AI still doesn't know what it doesn't know. You know, like, a human would stop before deleting a production database and just say, "Oh, I'm about to do something really serious, really destructive. Am I sure I want to do this?" 
You know, and a human would have stopped and said, "Oh, wait a second. Like, I need to be more confident in what I'm doing before I take that action." And AI still has this issue where, again, it can be kind of overconfident. This is where, like, the hallucinations come from: it doesn't know when it should have low confidence in its output, right? But this is why it has to be supervised. You know, the longer the time horizon for a task, the more likely it is to go off the rails. >> Agent drift. Exactly. And this is why I think people are starting to realize that this idea of eliminating all software developers was the peak of inflated expectations. Yes. >> Right. There was actually a really good tweet on this by Aaron Levie, who's got the right take on this. Aaron retweeted Matt Yglesias, who sort of sardonically tweeted that 5 months in, I think I've decided I don't want to vibe code. I want professionally managed software companies to use AI coding assistants to make more better, cheaper software products that they sell to me for money. >> Just lower your prices. Don't make me vibe code is the translation. >> Yeah. I mean, I think, like, rare win for Matt Yglesias there. Anyway, Aaron Levie then says agent coding is a huge win for software developers that want to get more done, and it's fantastic for anyone curious to learn how to start coding. What it's less great for is casually building complex software that you have to maintain on an ongoing basis and take all the risk for: upgrades, maintenance, keeping up to date with the latest security issues, you know, the bugs, cyber. Those are taxes on most knowledge workers who aren't familiar with the system. >> It's not a tax. It's a huge risk. >> Yes, it's a risk that has to be managed. If you don't, people will get fired, because there will be some public companies where some goofball tries to vibe code their way out of something and they're going to torch the enterprise value. 
It's going to be glorious to watch, because we're all going to laugh and realize that was stupid and should never have happened in the first place. >> Yeah. I mean, there is a chance that this improves to the point that it passes the trough of disillusionment and becomes super productive, and you'll be able to get an agent to do reasonable things without deleting your data set. But we have a way to go. Here is your, you know, this is the tech adoption chart. Basically, you've got a technology trigger, you have this peak of inflated expectations, you go into the trough of disillusionment, and then the slope of enlightenment, and eventually it becomes durable and it's an opportunity. Hey, uh, Friedberg, you have become retatrutide-curious. You have. And also >> tell me about reta, cuz I want it. I want to get on it. >> I want to use it and I need you to tell Nat that it's okay for me to take it. >> I have a friend who has some advice as well. >> Friedberg, the coverage is coming out of this phase three clinical trial data release that Lilly put out last month. So everyone's going crazy over the data, which continues to show pretty amazing results. >> So unlike tirzepatide, which is kind of Lilly's main product today, which is a dual agonist, it's got two peptides in it that bind to different receptors, the GLP-1 and the GIP receptor, this other one now also binds to glucagon, which is a third receptor. And that glucagon receptor binding peptide causes the cells to increase their metabolism, which actually accelerates fat energy consumption over what would typically be muscle energy consumption. It's more likely to burn up fat early on, which causes more quick fat loss, but also reduces muscle loss. And some of the other data that's now coming out shows non-HDL cholesterol down 27%, triglycerides down 41%. >> Liver fat down 80%? >> 80% reduction of liver fat. A1C drops from 7.9% to 6% in 40 weeks, which is amazing, by the way. 
If you're diabetic and your A1C drops that much in a couple of months, it's literally a life-saving product. The average user in this phase 3 trial saw their weight decline from 214 pounds. They lost 37 pounds. That's compared to six pounds on placebo in 40 weeks. And, you know, modest side effects: 20% more people felt nauseous than the people that were on the placebo. There's a lot of other separate studies that are being done now that are showing significant reductions in inflammatory signaling molecules. So systemic signaling of, like, hey, cells are in distress triggers this kind of inflammatory process that can do a lot of other damage to your body and can accelerate aging. And so one of the other conversations is that retatrutide might actually be kind of a de-aging drug as well. >> Hercules, Hercules, Hercules. >> You know, and a lot of the studies, by the way, are done on the very high dose, the 12 milligram dose, but you could probably get this thing dosed down to 2 milligrams and still see a lot of the anti-inflammatory maintenance and other benefits. I'm no doctor, but people are going nuts over this being more widely useful than just for clinical obesity or type 2 diabetes. >> When's the projected FDA date? >> 2027. Mid-'27. That's what they're saying. Could happen sooner. I mean, the data is in; the, you know, the FDA will take their time to evaluate it, but I think given the way this is all looking, >> could happen sooner, could happen sometime later this year. >> SWIM, Chamath. SWIM said it's incredible and that, uh, it's living up to the hype in their experience. >> Who? >> SWIM. >> What is that? What is that? >> Someone who isn't me. SWIM. >> Oh, >> this is a Reddit term. Someone who isn't me, who has a guy, SWIM has a guy, and has cycled on retatrutide and does push-ups and says muscle gain has been spectacular, no muscle loss, and a lowering of fat. >> If you go on X and you just search up reta >> Mhm. >> it's like incredible. 
You see these, like, 65-year-old guys that go from a dadbod to looking like an incredibly ripped athlete in weeks. And I mean, I'm shocked. And then for me, I don't need that help per se, but my liver health is important to me, my cardiac health, because I'm South Asian, and it just looks like a wonder drug. I can't wait. >> When you starve your body, when you turn off the appetite, which is the GLP-1 agonist function, normally your body goes into this kind of mode of starvation and you have this process by which your body tries to generate energy from your existing cells. And because muscle is much denser than fat, you can have a favoring of muscle tissue being kind of broken up over fat tissue. But what this new agonist, this glucagon agonist that they put into this retatrutide, does is it favors fat burning over muscle burning. And so that actually can drive short-term use at low dose for people to cut weight and maintain muscle and get ripped. And so that's why a lot of people in the kind of fitness community are talking about, hey, I want to get access to this and get on it for a while. So you'll see a lot more hype probably in that community, as well as all the health effects. It just feels like we're about to have an absolute avalanche of peptides to choose from. >> In November of 2025, Lilly cut a deal with the Trump administration, I saw this, to drop the price on tirzepatide pretty significantly. I think it's like 50 bucks on Medicare. >> 50 bucks from Medicare. Yeah. >> Yeah. Which is a pretty cheap price point, but it starts to make sense as you think about the portfolio of Lilly products. You get tirzepatide for 50 bucks, but if you want to upgrade, get the retatrutide. That's the high premium product, and that's where they're going to start: the Mercedes to the Honda. 
I'm sure if I'm Lilly and I'm sitting there and I'm looking at this data coming out, I'm like, "My god, people will pay for this," and that starts to become sort of like the upgrade to the BMW or the Model S Plaid, if you will. >> Yeah. The tirzepatide is like the one-bedroom messy-bed hotel room >> and the other one, the retatrutide, is like the two-bedroom suite. >> Well, you can also, by the way, you guys know I'm a spokesman for Ro. They also have the Wegovy pill, uh, ro.co/twist, uh, to get your >> Wait, are you a paid What are you talking about? We're putting Charles Barkley and >> we're not having a sales team, and then you come over on All-In and you start promoting. >> No, no, no, no. Trust me, we'll get one of those as well. We'll get a Ro sponsorship here. >> What was the Ro pill that you had me get? What was it called? >> Oh, Sparks. Sparks. Ro Sparks. Did you take it? >> I have taken it and now >> Amore please. No, not >> maybe just a half of a lousy. >> I want to hear the story. I want to hear the story. Go. >> It's so out of control. >> I told what I told you. >> So then what happens is Nat and I are like, you can't just randomly use it. It's scheduled. We discuss it. We put it on the calendar. >> You need a plan. >> You need a plan. >> You need a plan. Can't go in >> because otherwise it's too much. You just can't randomly take it. >> What do you mean? >> It's going to be a sesh. >> It's a whole thing, man. It's like I don't have the energy for that. >> It's an extended session. You have to be well rested. Don't do this at 1:00 a.m. This is like a 10 p.m. This is like a No, this is more like a This is more like on vacation, you know, like >> 10:00 a.m. to 12:00 p.m., you know, to noon, you know, you got to really >> you got to really schedule it. >> Schedule it, because kids around >> otherwise it's got to be empty. >> Otherwise, it's unfair to her and it's just a lie. Put it out. 
>> It's a big commitment, literally. >> You look embarrassed, Chamath. Do you feel embarrassed talking about it? >> It's just a lot, man. It's a lot to handle. It's a lot. >> If you want to get the extra 20% in your performance, >> it's a lot, bro. >> It's a lot. It's basically overtime. >> What happened? What happened was I was like, "Oh, what is this thing?" Jason's like, "Dude, you must get it. You must get it." So, we got it. We tried it and >> we were like, "What the was that?" And so, then I've been trying to bleed the pills out. So, I gave some to Stanley Tang. I'm like, "Stanley, you try it." Literally, he's dealing them like cards >> when we're having poker dinner. I'm like, does anybody want to try these things? But what is it? Ro Sparks. Is that it, Ro Sparks? Shout out to my friends at Ro. All right, let's keep moving here, guys. Friedberg had his own personal Super Bowl. You see me getting ready for Knicks playoff season. I get my courtside. Friedberg had the equivalent, Sacks. He went to the Supreme Court in order to hear them talk about chemicals. This was a big deal for him. The Supreme Court coming together. Did you hear it? Yes. The Monsanto trial happened in the Supreme Court, and he went, he got courtside. He went to the Supreme Court and listened in the building. Have you guys ever seen a live Supreme Court hearing? >> No. I'd love to. >> I'd love to. Sacks, have you been? >> No, I haven't, actually. >> I mean, honestly, I think it was one of the most amazing experiences I've ever had. There was a massive protest out front. We went through the marshal's office to get in. And that building, you walk in, it's like sacred. It's all marble. You're not allowed to talk. You have to be super quiet when you're in the building. They keep shushing you like you're in some quiet library. People treat it with this level of kind of sanctity and respect. And they're like, there is no politics here.
There is no [ __ ] There is no freedom of speech. This is the court. When you come into this court, the justices tell you how you will speak, how you will behave, what you will do, and you will not speak unless spoken to. You put all your stuff in a locker, you go up the stairs, you go into the courtroom. And the courtroom, it's just so amazing being in there. They have this amazing marble frieze above the justices that has some of the great people of human history, Moses and these kind of amazing historical figures. And then below them are the nine justices and the court case. If you guys haven't watched the case, you can listen to them, I think, online. >> Is it worth listening to? >> Hold on. Wait, wait, wait. I have a question. So, does Roberts sit in the middle cuz he's the chief? >> Yes. >> And then do all of the right justices sit on the right? >> No, they're mixed. So, I don't know the exact seating, but they're mixed in terms of appointments to the court. >> Is that right? >> I think that's right. And then it kind of goes out from the middle, with Roberts in the middle. Roberts occasionally will name the justices and say, "Hey, do you have a question? Do you have a question?" if no one's talking, but otherwise the justices will jump in with their questions when they want, and they'll ask. Now, honest to God, watching this is like watching LeBron James play basketball. These lawyers are so mind-blowingly impressive on both sides that you would just sit there, and I was like, in awe. I felt like my energy was completely sapped from me at the end of this process, because you were just so engaged and so caught up in the way that these guys are thinking and talking. >> Did you take a Ro Sparks when you were there? >> No.
And if you're familiar enough with the case or the case history or the law that's being debated, because again, when you get to the Supreme Court, you never debate the case. What you're debating is the legal interpretation of the decisions that were made on the case. And so, is this constitutional? How do you interpret this particular act, this law, this federal law? What's the right way to think about it? So you don't actually talk about the case. You talk about the interpretation of American law, of our laws, of our constitution. >> You're saying the facts have already been determined. >> That's right. >> Right. At a lower court, there's questions of fact and questions of law. >> The facts have already been determined by the lower court. The Supreme Court is just ruling on questions of law. >> That's right. And so they have a full briefing with the full history of the case. And remember, they only hear two cases a day. So they're one hour each for each hearing. >> So you go in, and they only do it Monday, Tuesday, Wednesday in two-week sittings, and they only hear cases from October to April. There's only a handful of cases that are selected. >> Wow. So you're really on a shot clock then. >> You're on a shot clock, and it's 30 minutes a side, and then the justices will ask questions. >> So this was Monsanto and Roundup, right? So what was the law that was being debated? >> For years, the regulatory body, the EPA, sets the label for pesticides. Does this cause cancer or not? What are the warnings? This can be damaging for birth defects, pregnancy, all the things that we're all used to seeing on labels when you buy a chemical product. And the EPA, under their regulatory authority, determined that Roundup does not cause cancer. When you sell a pesticide, you first have to register it with the EPA, get it approved, and then the EPA gives you a label. And the label is written by the EPA.
It says exactly what you're supposed to say. And in this case, it said all this stuff, but it doesn't say cancer, because they determined it does not cause cancer. And I'm not going to debate whether or not it causes cancer, but the case that was made is that the EPA is the regulatory body under a federal act called FIFRA, the Federal Insecticide, Fungicide, and Rodenticide Act. And that's where the EPA is given their regulatory authority to put the label on these products. And all of the cases that have been lost have been state failure-to-warn cases. To date, Bayer, which now owns Monsanto, has paid out $10 billion in these lawsuits, and they have reserved 10 billion on their balance sheet. They have 90,000 cases still outstanding in the courts. 90,000. >> Wow. >> And so, this one case got kind of appealed up to the Supreme Court. Last year, the White House solicitor general asked the Supreme Court to take it, and if the solicitor general steps up and asks the Supreme Court to take a case, it's more likely the case gets taken. So, the White House said, "Please take this case. We need to have federal preemption," meaning the federal government has the right to set the label, because all of the cases that have been lost and that are being adjudicated are in state courts where the state has a law, like in California, called a failure-to-warn law, which means if a manufacturer knows that a product carries a risk, you have to warn the consumer. And so the lawyers have been arguing that Monsanto, or Bayer, knew that this product caused cancer and didn't warn the consumer. And they've been winning cases, they've been losing cases, but they've won enough cases that this has now become a multi-deca-billion-dollar problem. And so the argument is that the EPA says it doesn't cause cancer and they have federal preemption. So the EPA has the right to determine. So that's the one argument. But then when the other attorney came up, this guy was literally like watching LeBron James.
And so going in, we're like, "Oh, 6-3, Bayer's going to win." And then the other guy comes up and he was like, "Well, hey, you guys overturned the Chevron doctrine last year." You guys remember that case? >> Yeah. >> Where basically, when the Chevron doctrine got overturned, it basically said that no longer does the federal agency get to decide; it has to be a direct reading of the law. >> Duh. >> So now he's saying the states should have a right to read the law themselves. They shouldn't have to just defer to the EPA. And that's what this will come down to. So at the end of it, we were like, "Oh my god, this could be a 50/50 coin flip, 5-4 either way." And going into it, we were kind of trying to say, "Hey, maybe this could be 6-3." So honestly, the whole experience was incredible. The case is interesting. >> These are very complicated matters. How are these people able to make a fulsome argument when one side gets 30 minutes, the other side gets 30 minutes, there's a little Q&A, and then you're done in an hour? >> There's this whole art and science, and Sacks, you're probably familiar with this, on how do you distill down a Supreme Court case in the briefing doc. Like, what is it you're petitioning the court around? And you try and distill it down to the exact legal interpretation you want the justices to rule on, not all the other [ __ ] >> And this is oral arguments. >> Yes. Oral arguments, just a discussion. And then the justices jump in, and all they're doing is asking the lawyer questions, one lawyer at a time, the one side and then the other side.
And by the way, the solicitor general came up in the middle and kind of made a few comments, and they asked her some questions from the White House, and she sat down, and then the two sides kind of went back and forth, and it's like 30 minutes of Q&A each on that one specific legal question. And Ketanji Brown Jackson said, "But what if, after the EPA issued the label, they found out information that it does cause cancer? Shouldn't they update the label?" And he's saying, "Well, no, they're not allowed to. They can only use the label the EPA issues." And he says also, it's a criminal case if they find out that it does cause cancer and they don't report it to the EPA. And then she's saying, well, what if the EPA doesn't act? Shouldn't the states have a right to protect their people? So those are the legal arguments, the discussions that are going on in all of this. And there's interesting implications, which is, fundamentally, if the states get to interpret federal law and ignore federal regulatory bodies, it opens up a whole new can of worms, in that all the states can start to ignore federal regulatory bodies like the EPA or the FDA or the USDA, and on and on. So the whole case has a whole bunch of really interesting implications wound up in it, when you hear these guys and they're just talking, Chamath, about that exact interpretation of the law. And that's what this comes down to. It's not the actual case that matters. >> And after the oral arguments, Sacks, they have like a private conference where they'll write their papers and give their final judgment, yeah? >> Sacks? >> Yeah.
Yeah, I think what happens is that, so I guess there's some discussion that happens behind closed doors, and they figure out where the majority is, and then the chief gets to assign who writes the opinion for the majority. >> In that meeting, nobody is allowed in, and in fact, you have a double-door system where, if anything needs to come in and out, you have to kind of knock on the door, you're led into this antechamber, then >> Oh, is it an airlock? It's an airlock. >> It's effectively I don't know if you were there, Jason, but we had Ted Cruz come to play in the poker game >> uh >> And Ted Cruz clerked for William Rehnquist, and if you want to have an incredible dinner, ask him about the Supreme Court and Bill Rehnquist. He's a real student of the Supreme Court, and he just makes the Supreme Court, Friedberg, to your point, sound like the most incredible body that's ever been created anywhere. By the way, more than the White House, more than the Capitol Building, more than any of these other big agencies, this place it's almost like being in England, it has these kind of ways that people operate. The security is so different. They kind of stand there in the court, and they all exchange places every 20 minutes. It's very coordinated. They're dressed very differently than any other >> How many people were in the courtroom listening? >> Maybe like 150, I would say. >> How do you get a ticket? Are they on >> I actually think everyone is a guest of a clerk or someone that works at the court. I don't think it's very publicly available to get in there. >> You can't line up? There's no lineup? >> I don't know if there's a lineup. Um, this was a connection; we got in through the chief justice. Um, he gave us the pass, but I think it was very um >> I think at the Elon versus OpenAI case, you can line up, and then the judge gave like 30 tickets to the press corps. No, no. Yeah, but I think there's a lineup for the Supreme Court as well.
There's some public access that they're >> It did not look like anyone from the public was in this court. Everyone is dressed respectfully. I mean, this court has an incredible amount of like >> you know, cool experience. >> I would just say, uh, enjoy it while you can. I mean, I think the Supreme Court is one of the last highly functional institutions in the United States. And >> you know, at some point we're going to have like 13 or 21 or some crazy number of justices up there, and jerseys for the justices there, and so enjoy it while it's still >> in the current form it's in. >> Can you imagine showing up with jerseys with the justices' names on them, and having sections, and somebody selling Cracker Jacks? >> The Idiocracy version of the Supreme Court. >> Exactly. >> The popularity of the court really depends on whether it's issuing decisions that people agree with. That's what it comes down to. If you ask people whether they like the Supreme Court or not, it really just depends on whether they agree with its recent decisions, as opposed to >> the process of the decisions and how well argued it is and all these things that you're pointing to. >> And actually, the court I mean, I just checked the numbers. The court is relatively popular right now. I think that it got as low as 35% in the 2024 Gallup survey, but I think it's back up to, you know, like 44 to 50% favorability, which, for something that's involved in politics, is relatively high, right? Like, you look at Congress or any particular politician, they're going to be lower than that, typically. >> I just felt so assured of the institution when I visited and saw these guys interact and behave, and how they ran the process. It was like, man, what an amazing country. Yeah. >> Well, the reason I say what I say is there was an interview with James Carville recently. Did you guys see this?
He said, look, when we get power, we're packing the court. >> So we're not even going to worry about it. >> And we're going to get to 13, right? He said, we're going to make >> Yeah. They're going to go from 9 to 13, and then they're going to create some new states and all the rest of it. So that'll be that. >> Uh, enjoy it while it lasts. >> Enjoy it while it lasts. >> Uh, by the way, >> end on a high note. >> Wait. Yeah. It's the end of the empire. That'll be that. >> By the way, I was correct about the Supreme Court. There is an online ticketing lottery. So we can all sign up, and you can get a four-pack of tickets. I think we should talk to Howard Lutnick. Maybe he can make this an auction. We get a revenue stream for the US. We could sell like 10 of the tickets as courtside seats for 20 grand. >> Jason, you're exactly what they're trying to protect against. >> Exactly. Like, how do we monetize the Supreme Court? >> All right, everybody. That's it. That's the world's greatest podcast for you, for Chamath Palihapitiya, David Friedberg, and David Sacks. I am the world's greatest moderator. We'll see you, chief justice. >> I'm like the chief justice of the All-In podcast. >> Let your winners ride. >> Rain Man David Sacks. >> And it said we open sourced it to the fans and they've just gone crazy with it. >> Love you besties. Queen of quinoa. >> Besties are gone. >> That is my dog taking a notice in your driveway. >> Oh, man. My avatar will meet me at >> We should all just get a room and just have one big huge orgy cuz they're all just useless. It's like this sexual tension that we just need to release somehow. >> Wet your beak. We need to get merch already. I'm going all in.