The Implications of GPT-5
Why OpenAI needs to adopt Google's product release strategy, Trump's outdated model of Intel, timeline reactions to GPT-5, and more
Good morning, we’re live on TBPN. Here’s Coogan’s run of show for Friday, August 8.
OpenAI should adopt Google’s approach
OpenAI created magic, but going forward, we don’t need numbered releases. Instead of trying to produce an Apple-level keynote for a product that doesn’t need to be re-sold every year, the company should adopt a stance more similar to Google’s.
iPhone launch events make sense; Apple can tout a bunch of super tangible, quantifiable improvements on a huge screen and slick clips on social. The battery lasts longer, there are 3 cameras instead of 2, etc.
GPT-5 is not this type of product. It will constantly get better in a bunch of small ways, just like Google has over the past few decades, but we don’t need big launch events.
OpenAI’s real missed opportunity yesterday was that they didn’t just drop the number system entirely. Version numbers used to mean a big shift in scale (+1 order of magnitude) or a big shift in architecture (e.g. the addition of reinforcement learning from human feedback). Going forward, there will be a million little tricks and improvements that improve customer satisfaction, retention, and most importantly, monetization. These won’t merit a big keynote. There will be cool new tricks under the hood, but OpenAI won’t want to talk about them publicly and increasingly, consumers (and even tech insiders) won’t care.
I don’t remember a Google keynote announcing the knowledge panel that pulls in a celebrity’s date of birth from Wikipedia, or one announcing the widget that pulls in the cast of a movie on a relevant search. Those features just launched and improved the experience. No keynote necessary. At the same time, this has made me very bullish on OpenAI both as a consumer product cash machine and as fertile ground for 20%-type projects. The core business is going to be incredible, and they’ll be able to take wild swings at stuff, just like Google (think Gmail, Google Maps, Google Glass, dozens of Google chat apps). Some will work, some won’t, but few companies can afford that kind of experimentation. Exciting times ahead.
S-Curve Implications
It’s clear that we’ve maxed out the current scaling regime: building ever-larger training clusters is probably not the move going forward. As such, the game will be making sure inference is done profitably. Zooming out, token generation, as a business, seems to be generating low tens of billions of dollars annually. We don’t want to get completely over our skis by wasting money on unprofitable inference.
International markets are interesting here. Paid plans will see lower adoption, and ads will probably monetize within an order of magnitude of Google and Facebook rates. But the cost to serve an individual user might be several orders of magnitude higher, at least for the next few years, until models get distilled further and “good-enough” models move to ASICs or older, depreciated hardware. Training data might be a valuable upside, but the economic equation will be slightly different from the zero-marginal-cost world of serving WhatsApp to a marginal user in a developing country, at least in the near term. It’s good for DAU numbers, though!
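The orders-of-magnitude gap above can be made concrete with a back-of-envelope sketch. Every number below is hypothetical, chosen only to illustrate the shape of the argument, not to estimate any company’s actual economics:

```python
# Back-of-envelope unit economics for serving a free international user.
# All figures are hypothetical placeholders ($/user/year).

incumbent_ad_arpu = 10.0             # hypothetical ad revenue at Google/Meta-like rates
llm_ad_arpu = incumbent_ad_arpu / 5  # "within an order of magnitude" of incumbent rates

incumbent_cost_to_serve = 0.05       # hypothetical near-zero marginal cost (the WhatsApp case)
llm_cost_to_serve = incumbent_cost_to_serve * 1_000  # "several orders of magnitude" higher inference cost

incumbent_margin = incumbent_ad_arpu - incumbent_cost_to_serve
llm_margin = llm_ad_arpu - llm_cost_to_serve

print(f"incumbent margin/user: ${incumbent_margin:+.2f}")  # positive
print(f"LLM margin/user:       ${llm_margin:+.2f}")        # negative
```

Under these made-up numbers, the LLM margin stays negative until cost-to-serve falls roughly 25x, which is the author’s point about distillation and depreciated hardware: the ad rates don’t need to change much, the serving cost does.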
Trump attacks Intel CEO Lip-Bu Tan
It feels like Trump has an outdated model for understanding the importance of Intel. China has already completely caught up to Intel. Huawei, SMIC, and SMEE seem fully capable of doing everything Intel can do and more. Intel hasn’t been the crown jewel of American semiconductor supremacy for decades and basically no one is advocating for a real comeback. Even splitting up Intel isn’t in the cards. They can’t find a customer for their new fab! The future of American semiconductors is TSMC and Samsung fabs (in America if you want that). We’re watching the wind-down of Intel, massive layoffs, and a narrowing of scope. It’s not really a turnaround at this point. Who do you want in the role?
TBPN’s top stories
GPT user oneshots a Minecraft clone…
Trump signs EOs targeting Choke Point 2.0 and allowing crypto assets in 401(k)s…
SECOND EXPOSÉ IN A WEEK: High-flying Newsette CEO accused of grift and possible fraud…
Roon goes in-depth on OpenAI efforts to improve writing quality with GPT-5…
New piece from Julia Steinberg on “Cluely, McKinsey and the scourge of Economic Nihilism”…
Sweetgreen stock ($SG) collapses 79% from November with market at ATHs…
HyperWrite AI CEO Matt Shumer reviews GPT-5…
Sundar makes Google AI tools free for college students…
Justice Department ups Maduro bounty to $50M…
9 TSMC engineers suspected of leaking confidential information about chipmaking process…
Gongworthy
Celsius energy drink revenue more than doubles YoY, from $329M to $739M
FanDuel posts 16.3% sportsbook gross gaming margin in June, tripling Nevada’s historic hold of 5-6%
Today’s lineup
SemiAnalysis President Doug O’Laughlin
Lead Edge Capital Managing Partner Mitchell Green
Orbital Operations CEO Ben Schleuniger
The Timeline: Part 1
Standout moments from the stream yesterday…
Yesterday was GPT-5 day. Watch the show here.
OpenAI Chief Research Officer Mark Chen sees the future of agentic AI manifesting not as a single, monolithic agent but as teams of multiple, specialized agents working together. He said:
When you look at the levels of AGI, the top level is what we describe as “organizational AI.” [This is] a collection of agents working together — often like we might in a company — towards a shared goal. You would imagine that these agents sub-specialize in ways, similar to what humans do…
Vercel CEO Guillermo Rauch echoed Mark’s prediction in his segment yesterday, and also thinks AI will become the best CS:GO player in the world (eventually).
In 2022, OpenAI had to pay contractors to use precursors to GPT, President Greg Brockman said (crazy how different the situation is today). He described the “first result” that set the company on the path to a consumer LLM: back in 2017, OpenAI’s “unsupervised sentiment neuron,” a recurrent neural network built on a different architecture from today’s transformers, not only predicted the next character in Amazon reviews but also captured the semantics of the text. It was one of Brockman’s first a-ha moments with generative AI.
OpenAI is currently posting some blockbuster KPIs, the company’s CFO Sarah Friar said on the stream. 700 million people use GPT weekly, companies are paying for 5 million individual GPT seats, and 4 million developers have built on OpenAI’s platform. Sarah also told us the scale of OpenAI’s enterprise business is literally reaching the nation-state level. “This is the first time in my career I’ve seen governments come to the table… The government of Estonia put GPT in all the high schools… It’s a whole other level of selling, I’ve never seen anything at this scale,” she said.
The ‘Studio Ghibli moment’ for GPT-5 will be coding, OpenAI VP of Research Max Schwarzer predicted on the stream. Max said the new model will let everyone create their own games and play them in the app.
OpenAI COO Brad Lightcap said that before the company inked a deal with the USG to sell each agency a GPT subscription for $1, he was hearing stories of US government employees going to their cars on lunch breaks to use GPT on their phones to get their work done faster (government employees had been prohibited from using it on federal hardware).
Other guests were Warp founder Zach Lloyd, Qodo CPO Dedy Kredo, Charlie Labs founder Riley Tomasek, Factory CTO Eno Reyes, Augment Code CSO Guy Gur-Ari, CodeRabbit CEO Harjot Gill, Cognition CEO Scott Wu, ChatPRD founder Claire Vo, and AI developer Ben Hylak.