Processing Anthropic v. DoW
Block lays off 4,000, OpenAI raises $110B, Anthropic-Pentagon discourse rages on
Happy Monday.
The current thing in tech and business is Anthropic’s stance on working with the Department of War, and the Department of War blacklisting the company from working with the government.
Today’s lineup
Stratechery Founder Ben Thompson at 12:00 PM
Magic Mind Founder James Beshara at 12:30 PM
Quinn Emanuel Executive Chairman and Founding Partner John Quinn at 1:00 PM
WorkOS Founder Michael Grinich at 1:30 PM
The Immersion Economy Author Adam Simon at 1:40 PM
Flux AI Founder and CEO Matthias Wagner at 1:50 PM
QuiverAI Co-Founder and CEO Joan Rodriguez at 1:55 PM
Cal AI Co-Founder and CEO Zach Yadegari at 2:00 PM
Smack Co-Founder and CEO Andy Markoff at 2:10 PM
Daily Op-Ed, by John Coogan
Processing Anthropic v. DoW
Ben Thompson wrote a great piece today on Anthropic and Alignment; he will be joining TBPN today at noon to discuss it further. I have some of my own thoughts on Anthropic and the Department of War that I’ll share here.
We were off on Friday, traveling over the weekend, so I processed the news in a different way than usual. Here’s how I worked through the story.
The first question I saw people wrestling with, and had to wrestle with myself, was how much say a private company should have in how the government uses its products. My default assumption is very little. The United States is a democracy, so we, the American people, elect Congress and the President to pass laws that set the bounds for how products, from bullets to software to anything else, are used.
Hypothetically, if I were the CEO of Ford, and I make Ford Mustangs, Explorers, and F-150s, and the government comes to me and asks to buy some cars, I probably should treat them like any other customer. I shouldn’t turn them away, but if they then ask me to add bulletproof glass and armor plating to every Mustang, Explorer, and F-150, I should be able to tell them no. That would incur a ton of extra cost. My Mustang customers want a car that’s fast; my F-150 customers care about range and towing capacity. If they want to modify the vehicles after the fact, that’s fine, or if they want to kick off a new contract and see if I can fulfill their new needs, we can talk about that, but the incremental cost needs to be properly accounted for. My manufacturing line might not be set up for armored Mustangs; I might need new equipment.
If they come back and say, yeah that’s fine, don’t worry about the armor, we’ll just take the Mustangs and drive them into battle without any modifications, I have a responsibility to make them aware that the product is unsuitable for that, but I don’t think I should stop them from buying.
It’s totally reasonable for Dario to say that Anthropic models, in his view, are not capable enough to be deployed in certain DoW contexts. It’s bad salesmanship, but it’s certainly responsible if that’s his true belief. At the same time, the government has the freedom to assess the efficacy of these models (which are changing in capability rapidly) and determine when and where they are effective. Of course, the government can’t break the law, and Congress (and the American people by extension) are free to create new laws to restrict or encourage the use of technology in all sorts of ways.
Across the two sticking points, no mass domestic surveillance and no fully autonomous lethal weapons, there has been a question as to why OpenAI was allowed to include that language but Anthropic could not get a deal done. This, I believe, comes down to the idea of “deals that stick.” There are plenty of situations where a high-level deal is agreed upon but then the two parties fight endlessly over definitions. Palmer Luckey explained this on X: “[W]hat is autonomous? What is defensive? What about defending an asset during an offensive action, or parking a carrier group off the coast of a nation that considers us to be offensive?”
If you have developed a good working relationship with someone, it’s much easier to give on specific terms that will need to be cooperatively interrogated over time. Semafor reported that Anthropic disapproved of its technology being used during the Maduro raid. The joke was that the DoW was probably just asking basic knowledge-retrieval questions, but I actually think that’s good if that was the case. There’s a viral interaction between Ted Cruz and Tucker Carlson where Tucker gives Ted a pop quiz on the population of Iran and Ted doesn’t know it. That’s a perfect question for an LLM! It’s reasonable to expect our elected officials and our military leaders to have a deep understanding of every country they are interacting with, and LLMs are really good for knowledge retrieval. If you actually take ten steps back and think about a government official asking an LLM about the population of a country, that seems good!
The last question concerns the supply chain risk. Ben Thompson makes a strong argument for why government pressure like this is actually reasonable in this situation, but I’m still wrestling with how real this threat actually is. Many reports treat the supply-chain-risk labeling as an established fact, but Kalshi has the odds that it actually happens at 42%, and Dario said on CBS that he hasn’t received any written communication about it yet.
And speaking of Dario on CBS, he did unpack some more of his logic, which clearly resonated with some people. I was left unsatisfied with his positioning around autonomous weapons, though. He was basically arguing that LLMs hallucinate and should not be used for autonomous weapons, which reads as a commentary on using AI at the Department of War broadly. It would have been much stronger to just say: look, at Anthropic, we’ve built a system that’s specifically good at answering questions, being helpful, and writing code. Our system is awesome at that, but we don’t make a product we’d recommend for autonomous weapons. It’s tricky to try and twist arms here, though.
There has been some mistaken commentary floating around that America does not have laws preventing mass domestic surveillance. We do; it’s in the Fourth Amendment: “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated.” There are also other laws that set the bounds for what is legal and what is not. The best outcome here, in my mind, is that democracy continues, the buck ultimately stops with the U.S. Government, and clear laws get passed outlining how AI can be used by federal agencies.
Headlines
WSJ: Trump Administration Shuns Anthropic, Embraces OpenAI in Clash Over Guardrails
Treasury Sec. Bessent announces US Treasury to terminate all use of Anthropic products
Palmer Luckey chimes in on Anthropic-DoW debate with viral X post
Tyler Cowen: In the Pentagon Battle with Anthropic, We All Lose
AP: Fintech company Block lays off 4,000 of its 10,000 staff, citing gains from AI
Sanders pitches $4.4 trillion tax on billionaires, in 2028 marker
TechCrunch: MyFitnessPal acquires Cal AI
Bloomberg: Amazon’s Extreme AI Spending Sends Stock to Worst Month in Years
Reuters: Amazon’s cloud unit reports fire after objects hit UAE data center
WSJ: Paramount Wins Bidding War for Warner Discovery After Netflix Drops Out
Wired: OpenAI Fires an Employee for Prediction Market Insider Trading
FT: Nvidia-backed ‘open’ AI start-up courts investors at $20bn-plus valuation
Bloomberg: How To Win Slots and Influence People
Vincenzo Landino: Apple’s FormulaOS
WSJ: Crypto Firm Paradigm to Expand Into AI, Robotics With New Fund
Internet rascal Riley Walz makes Pokemon-like game for California payphones
OpenAI raises a $110 billion round of funding from Amazon, NVIDIA, and SoftBank
Posts of the Day
TBPN is brought to you by Ramp.
Ramp is the all-in-one finance automation platform that helps businesses save time and money with smarter corporate cards, spend management, and bill pay.
Special thanks to our sponsors: Shopify, Graphite, ElevenLabs, CrowdStrike, MongoDB, Fin, NYSE, AppLovin, Phantom, Gemini, Cognition, Labelbox, Public, Kalshi, Restream, Vanta, Console, Railway, Figma, Plaid, Okta, Linear, Turbopuffer, Lambda, Gusto, Vibe.co, Sentry, and Cisco.