Neural Computers
GameStop bids on eBay, Anthropic cofounder updates RSI timeline, OpenAI and Anthropic both ink deals to deploy AI to the enterprise
Happy Monday.
The current thing in tech and business is GameStop’s offer to buy eBay for $55.5B.
Today’s lineup
Alpha School Head of Founder Development Nat Eliason at 11:30 AM
Casa Co-Founder & CEO Michael York at 12:00 PM
Living Carbon Co-Founder & CEO Maddie Hall at 12:10 PM
AMP PBC Founder Anjney Midha at 12:20 PM
Colossal Founder & CEO Ben Lamm at 12:40 PM
Serval Founder & CEO Jake Stauch at 12:50 PM
Panthalassa Co-Founder & CEO Garth Sheldon-Coulson at 1:00 PM
Haun Ventures Founder & CEO Katie Haun at 1:10 PM
Rivet Co-Founder & CEO Nick Abouzeid at 1:20 PM
Neural Computers
by John Coogan
There have been rumblings about the potential for a Neural Computer for years now, ever since the AI boom began. The basic idea is that the computer would have no software whatsoever; it would essentially be just an LLM that generates whatever you need as you are using the device. Karpathy put it this way: “Imagine a device that takes raw videos or audio into basically what’s a neural net and uses diffusion to render a UI that is unique for that moment.”
It feels like we’re starting to see glimpses of this now. I most recently felt it while trying to understand Ryan Cohen’s proposal for GameStop to take over eBay for $55.5B. I haven’t tracked either company closely, and I wanted to quickly understand how the two companies stack up. In a pre-ChatGPT world, I would have pulled stats from Google or Yahoo Finance, maybe copied them into a spreadsheet if I wanted to see them side by side (although Google and Yahoo both offer company comparison views, they are always a bit tricky to navigate). Then if I wanted to share the findings, I could screenshot the sheet, or if I were feeling really fancy, design a slide linked to the data. Now this whole process is a single prompt.
Prompt: “Do a bunch of research on GameStop and eBay’s valuation and key financial metrics, things like growth rate, top line, earnings, revenue, valuation, how the multiples fit together. Build a nicely designed side-by-side comparison of the two companies.” - ChatGPT 5.5 Thinking w/ Images
It’s not a perfect result; I probably wouldn’t use red for all of GameStop’s data, since red is usually reserved for negative numbers. It’s a good start, though, and I could easily ask for that change. And obviously you could take this a lot further: delivering an updated image every quarter after earnings drop, or every day if you wanted, or for any other two companies, or for anything else you want. I think the end result is fewer dashboards and more ad hoc analyses delivered on demand to answer the exact question you have at the moment.
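If you wanted that delivered on a schedule instead of typed into a chat window, the same idea ports straight to an API call. Here’s a rough sketch using the OpenAI Python SDK’s Responses API with its built-in image generation tool; the model name, file names, and response handling are my assumptions, not anything from the workflow above:

```python
# Rough sketch: re-run the same comparison prompt on a schedule via an API.
# Assumes the OpenAI Python SDK's Responses API and its built-in
# "image_generation" tool; the model name and response parsing are
# placeholders and may differ from the current API surface.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Do a bunch of research on GameStop and eBay's valuation and key financial "
    "metrics, things like growth rate, top line, earnings, revenue, valuation, "
    "how the multiples fit together. Build a nicely designed side-by-side "
    "comparison of the two companies."
)

response = client.responses.create(
    model="gpt-5",  # placeholder model name
    input=PROMPT,
    tools=[{"type": "image_generation"}],
)

# Pull out any generated image and write it to disk.
for item in response.output:
    if item.type == "image_generation_call":
        with open("gamestop_vs_ebay.png", "wb") as f:
            f.write(base64.b64decode(item.result))
```

Wrap something like that in a cron job that fires after each earnings release and you have the “dashboard that builds itself” version.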
Karpathy describes this concept as “Software 3.0” in his Sequoia AI Ascent talk, where he gave an example of shifting from a vibe-coded app to a fully AI-driven workflow for a project that annotates menus:
MenuGen is this idea where you come to a restaurant, they give you a menu, and there are usually no pictures. I don’t know what any of these things are — usually 30% or 50% of the things, I have no idea what they are.
So I wanted to take a photo of the restaurant menu and get pictures of what those things might look like in a generic sense. I vibe-coded this app that basically lets you upload a photo, and it does all this stuff. It runs on Vercel, re-renders the menu, gives you all the items, uses OCR for all the different titles, uses an image generator to get pictures of them, and then shows it to you.
Then I saw the Software 3.0 version of this, which blew my mind. It was literally: take your photo, give it to Gemini, and say, “Use Nano Banana to overlay the things onto the menu.”
Nano Banana basically returned an image that was exactly the picture of the menu I took, but it actually put into the pixels the different things in the menu. This blew my mind because actually all of my MenuGen is spurious. It’s working in the old paradigm. That app shouldn’t exist.
The Software 3.0 paradigm is a lot more raw. Your neural network is doing more and more of the work. Your prompt or context is just the image, and the output is an image. There’s no need to have any of the app in between.
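To make it concrete, the entire Software 3.0 version of MenuGen roughly fits in a single call. A rough sketch, assuming Google’s google-genai Python SDK and a Gemini image model standing in for Nano Banana; the exact model name and response fields are assumptions on my part:

```python
# Rough sketch of the "Software 3.0" MenuGen: one multimodal call, no app in between.
# Assumes the google-genai Python SDK and a Gemini image model (the "Nano Banana"
# model); the exact model name and response fields may differ.
from google import genai
from PIL import Image

client = genai.Client()  # reads the Gemini API key from the environment

menu_photo = Image.open("menu.jpg")  # raw photo of the restaurant menu

response = client.models.generate_content(
    model="gemini-2.5-flash-image",  # placeholder for the Nano Banana model name
    contents=[
        "Overlay a small, generic photo of each dish next to its name on this menu.",
        menu_photo,
    ],
)

# Write out whatever image the model returns: no OCR, no image pipeline, no Vercel app.
for part in response.candidates[0].content.parts:
    if part.inline_data is not None:
        with open("menu_with_pictures.png", "wb") as f:
            f.write(part.inline_data.data)
```

Everything the vibe-coded app did (OCR, parsing titles, generating per-item images, re-rendering the page) collapses into that one multimodal call.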
I have a few takeaways from this:
First, I think it’s exciting for anyone who’s been hesitant to jump into vibe coding. In 90% of situations, frontier models can already instantiate exactly what’s required to solve the actual problem on the fly, under the hood, entirely abstracting away code and tools.
Second, I’m reminded of the 2016 USV blog post “Fat Protocols,” which argued that unlike the “thin protocols” of the web era (HTTP, FTP, etc.), which accrued minimal value, crypto protocols like Bitcoin and Ethereum would be “fat protocols”: they would do a lot of the valuable work and potentially be worth more than the application layer that enabled interaction with them. There are still a bunch of complicated market dynamics around value accrual in the AI value chain, but in terms of doing useful work, the models are certainly getting “fatter” every month, to use the USV terminology.
Third, there’s still the question of jumping walled gardens. The internet isn’t quite “dead” yet, and there’s a lot of good information out there, but many platforms are fairly locked down. So writing code, puppeteering a browser running on a Mac Mini, or digging through iMessage locally can still require a different workflow, though that’s more of a legal and business discussion than a technical one. The models will continue to find their way over, under, and through any cracks in the walls of the gardens if users ask politely enough.
There’s still a long way to go here: inference is expensive, everything is slow, and models still make odd mistakes (although less and less these days). But I’m still enjoying the image output workflow, and moving up a level of abstraction feels like a qualitative, binary shift rather than a slightly higher score on a particular benchmark.
Headlines
The Information: xAI Shows How Hard It Is to Use a Lot of GPUs at Once
➞ Lambda response: “The xAI ‘low utilization’ story has people mixing up two different metrics”…
WSJ: GameStop Offers to Buy eBay for $56 Billion
Reuters: Cerebras targets $26.6 billion valuation in US IPO as AI chip demand surges
Bloomberg: OpenAI Finalizes $10 Billion Joint Venture With PE Firms to Deploy AI
Roon post on Anthropic as an organization that “worships” Claude goes viral
WSJ: OpenAI Wants to Go Public. First Sarah Friar Needs to Get It to Grow Up.
NYT’s Ezra Klein: Why the A.I. Job Apocalypse (Probably) Won’t Happen
NYT: A.I. Is a National Security Risk. We Aren’t Doing Nearly Enough.
WSJ: Why Almost Everyone Loses—Except a Few Sharks—on Prediction Markets
Patrick Collison: Stripe Atlas hits 100,000 all-time incorporations
Rex Salisbury: Erebor hits $1.1 billion in deposits
Charlie Bilello: 0.1% of the accounts on Polymarket have earned 67% of the profits
Politico Europe: EU accused of wasting €20B on AI computing dreams
FT: Chinese dissident Li Ying: ‘Our work is about being ready for the tipping point’
City Journal: The Surprising Heart of the Data-Center Boom
WSJ: The Roomba Guy’s Second Act: A Robot You’ll Want to Snuggle
Amazon introduces Amazon Supply Chain Services
Lisan al Gaib: The AI model gap is bigger than you think
Riley Walz creates ‘stupid NYT’