🧭 SaugaTech Compass #19 - When Principles Meet Power: The OpenAI-Anthropic Pentagon Saga
Hi SaugaTech Community,
In the last edition of SaugaTech Compass, we wrote about how OpenAI hired Peter Steinberger after Anthropic’s legal team pushed him away. Strategic opportunism, we called it.
Little did we know that the same pattern would play out again within a matter of days, but with far higher stakes.
Last Friday, Trump banned every federal agency from using Anthropic’s technology. The Defense Secretary labeled them a “supply chain risk”, language usually reserved for Chinese companies. Hours later, OpenAI announced a deal to put their models on Pentagon networks.
Same concerns about autonomous weapons and mass surveillance. Same stated principles. Completely opposite outcomes.
Here’s what makes this interesting: Dario Amodei, who runs Anthropic, used to work for Sam Altman at OpenAI. They built GPT-2 and GPT-3 together. Then they had a falling out over how fast to move versus how careful to be. Dario left in 2021 to start Anthropic as the “safety-first” alternative.
Now they’re running the two most valuable AI companies in the world, locked in a very public fight about principles versus pragmatism. For those of us building in the GTA, this matters, because it is ultimately a story of choices: choices we all have to make, one way or another, about how our work gets used.
🚀 First Things First: SaugaTech March Meetup
The SaugaTech March meetup is scheduled for 28th March at IDEA Mississauga, Square One. We will announce the topic and open early registration this Friday on the SaugaTech WhatsApp Group. Join the group to register early, lest we run out of spots like last time. We will also post a formal announcement about the event on this Substack early next week.
Now coming back to our main story.
Anthropic x OpenAI | How We Got Here
Dario Amodei joined OpenAI in 2016 as VP of Research. He helped build GPT-2 and GPT-3, and co-invented the technique that made ChatGPT possible: reinforcement learning from human feedback (RLHF), which trains models on human judgments of their outputs.
By 2020, tensions were building. Dario and other senior researchers believed these models would become extraordinarily powerful, but that power needed serious safety work. According to people who were there, disagreements over strategy and commercialization got personal.
In early 2021, Dario and his sister Daniela left with a dozen other OpenAI researchers to found Anthropic as the explicit “safety-first” alternative.
Fast forward to 2026: Anthropic is worth $380 billion. OpenAI is worth $500 billion. And they’re fighting over Pentagon contracts.
The Pentagon Standoff
Last July, the Department of Defense offered $200 million contracts to four AI companies—Anthropic, OpenAI, Google DeepMind, and xAI—to deploy models on classified military networks.
The Pentagon wanted access for “all lawful purposes.” No exceptions.
Anthropic pushed back. They wanted two explicit prohibitions: no fully autonomous weapons, and no mass domestic surveillance of Americans. Everything else—intelligence analysis, logistics, cybersecurity—was fine.
The Pentagon refused. The Under Secretary called Dario a “liar” with a “God complex.” Trump posted that Anthropic was run by “Leftwing nut jobs.”
On February 27, negotiations collapsed. By 5:02 PM, Anthropic was blacklisted.
Hours later, OpenAI announced their deal. Sam said OpenAI shared the same red lines but had reached agreement with “strong contractual protections.”
What was different? Anthropic wanted contractual prohibitions preventing those uses even if laws changed. OpenAI agreed to “all lawful purposes” and trusts current laws and internal monitoring.
Critics point out the loophole: if the law allows mass surveillance, and existing rules let agencies buy personal data without warrants, then OpenAI’s models can be used for exactly what Anthropic tried to prevent.
Sam admitted the deal was “definitely rushed” and acknowledged: “it clearly exposes us to some risk.”
Why OpenAI Couldn’t Walk Away
Here’s the story behind the story, the one nobody is talking about: OpenAI most likely couldn’t afford to walk away.
The company is burning cash. ChatGPT traffic is declining. Enterprise market share dropped from 50% to 27%. Google’s Gemini has been leapfrogging them. DeepSeek proved you don’t need billions in compute to compete.
OpenAI introduced ads in ChatGPT—something Sam said he “hated.” They’re planning a late-2026 IPO at a $750 billion valuation.
This is OpenAI’s existential year. They can’t afford to be on the wrong side of government. Not when they need regulatory support for the IPO. Not when defense contracts provide stable revenue. Not when getting blacklisted would crater their valuation.
Anthropic is profitable, with $1.16 billion in monthly revenue. They could afford to stand on principle.
OpenAI couldn’t.
The Monday Scramble
By Monday, the backlash hit.
Claude overtook ChatGPT in the App Store. Chalk graffiti covered OpenAI’s offices. QuitGPT launched. OpenAI employees questioned whether the deal had real protections.
Sam admitted they “shouldn’t have rushed” and it “looked opportunistic and sloppy.” He called the Pentagon to rework the contract.
The amended deal now prohibits using OpenAI technology on “commercially acquired” data—closing the loophole where agencies buy personal data from brokers. It states: “the AI system shall not be intentionally used for domestic surveillance of U.S. persons.”
But legal experts remain skeptical. The contract “does not give OpenAI an Anthropic-style right to prohibit otherwise-lawful government use.” It just says the Pentagon can’t break current laws—and laws change.
As MIT Technology Review put it: OpenAI “settled for softer legal boundaries” while Anthropic pushed for moral ones.
The Big Question: Who Decides?
Let’s strip away the drama for a minute. The underlying question is very simple: who decides how powerful technology gets used?
Anthropic says builders have moral responsibility. If you know your technology could violate human rights, you have an obligation to prevent it—even if it costs money.
OpenAI says government, not companies, should decide how lawful tools are deployed. If laws need changing, that’s democracy—not corporate veto power.
Both positions are coherent. Both have risks.
Anthropic’s risk: Companies wield outsized influence over national security. If every AI lab demands veto power, government loses autonomy.
OpenAI’s risk: “All lawful purposes” becomes a blank check. Surveillance laws from the 1970s don’t contemplate AI-powered monitoring. Trusting future administrations won’t exploit gaps feels optimistic.
What This Means for GTA Builders
Most of us aren’t building AI for the military. But we’ll face similar choices.
Do you let advertisers microtarget vulnerable populations? Do you build addictive features or dark patterns that drive users to make sub-optimal purchase decisions? Do you sell to customers whose use cases make you uncomfortable?
Anthropic drew red lines and lost $200 million. OpenAI signed and faced revolt. Both paid costs.
Principles are expensive. Dario walked away from $200 million over restrictions that wouldn’t have changed current military operations. That’s either integrity or virtue signaling, depending on your perspective.
Pragmatism has costs. OpenAI got the deal but took reputational damage. By Monday they were scrambling to amend and admit mistakes.
Clarity matters most. Both claimed the same values but reached opposite conclusions. The worst outcome is claiming principles you won’t enforce, or accepting terms you don’t believe in.
You can’t avoid ethical questions by building neutral tools. Every technology has uses its creators never intended.
The question isn’t whether you’ll face these choices. It’s whether you’ll make them deliberately.
✨SaugaTech Epilogue
For Dario and Sam, former colleagues who built modern AI together before their bitter split, the rivalry is now total. They compete for customers, talent, funding, and control over how their technology shapes warfare and surveillance. One chose principles and lost $200 million. The other chose pragmatism and faced an employee revolt. Both paid different costs, but here’s what neither had to pay: the cost of not knowing where they stood.
That clarity matters more than most people realize. When you’re honest about your boundaries—what you will and won’t build, who you will and won’t serve—you sleep better. Not because you made the “right” choice by some universal standard, but because you made your choice deliberately.
For those of us building in the GTA, the lesson isn’t to pick Team Dario or Team Sam. It’s to know your own boundaries before someone else tests them. Build fast, ship often, take risks—but know the line you won’t cross.
The companies that thrive long-term won’t be the ones that avoided tough decisions. They’ll be the ones that made those decisions with clear eyes and lived with the consequences without losing themselves in the process.
Technology will always have unintended uses. Platforms will always face choices about what they enable. You can’t avoid that by building “neutral” tools. You can only decide ahead of time what you stand for, so when the pressure comes—and it will come—you already know your answer.
Keep building. Keep learning. But keep your compass close.
Looking forward to seeing you on 28th March at our next meetup.
Team SaugaTech
CONNECT | COLLABORATE | INNOVATE


