I cancelled my ChatGPT subscription last night. It took about thirty seconds. The decision had been building for months, but OpenAI (maker of ChatGPT) signing a classified AI deal with the Pentagon, hours after Anthropic (maker of Claude) was blacklisted for refusing to do exactly that, made it effortless.
Here is what happened. Anthropic told the Department of Defense it would not remove safeguards preventing its AI from powering autonomous weapons or mass domestic surveillance. The Pentagon’s response was to brand Anthropic CEO Dario Amodei a “liar” with a “God complex.” Defence Secretary Pete Hegseth then designated Anthropic a “supply chain risk to national security,” a label normally reserved for foreign adversaries. Trump followed up by ordering all federal agencies to phase out Anthropic’s products within six months.
Within hours, OpenAI announced it had signed a deal to deploy its models into the Pentagon’s classified networks.
Sam Altman, CEO of OpenAI, claims the contract includes the same red lines on autonomous weapons and surveillance that Anthropic demanded. If you believe that distinction will hold once the models are inside a classified network with no external oversight, I have a harbour bridge to sell you.
Amodei’s position is clear and technically sound: frontier AI systems are not reliable enough to power fully autonomous weapons, and without proper oversight they “cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day.” That is not an anti-military stance. It is an engineering assessment.
OpenAI’s move is the opposite. It is a company racing to fill a vacuum created by political retaliation, wrapping compliance in the language of safety.
> We have reached the point where Trump condemning something in one of his online tantrums is a reliable signal that the condemned party is probably doing the right thing. Anthropic getting blacklisted by this administration is, if anything, a quality endorsement.
And the “Department of War” (Hegseth’s preferred rebrand, which tells you everything about the mindset) is already demonstrating exactly the kind of reckless behaviour that makes AI safeguards necessary. Two weeks ago the military used a high-energy laser to shoot down party balloons in Texas.
Then this week they used the same system to destroy a $30 million Customs and Border Protection drone in civilian airspace near El Paso, because CBP had flown it into military airspace without telling anyone. The FAA had to close airspace in response.
One arm of the US government is literally shooting down another arm’s assets, and this is the organisation we are supposed to trust with unsupervised AI?
I have been moving my workflows to Claude steadily over the past year. The model quality has been comparable or better for most of what I do. But this was never just about capability. It is about what kind of company you want to fund with your subscription dollars.
I am now 90%+ Claude. The remaining 10% is split between Google Gemini, Perplexity, and a few niche tools I have not migrated yet. That gap is closing fast.
Anthropic chose to lose a $200 million military contract and get blacklisted by the US government rather than remove safeguards on autonomous weapons and mass surveillance. OpenAI chose to sign on the dotted line the same night.
That is all you need to know about who is building AI responsibly and who is just building AI profitably.
Sources:
- OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump (cnbc.com)
- Pentagon-Anthropic AI standoff is real-time testing balance of power in future of warfare (cnbc.com)
- The Pentagon brands Anthropic CEO Dario Amodei a ‘liar’ with a ‘God complex’ (fortune.com)
- Hegseth Furious as Anthropic Refuses to Bend to Pentagon’s AI Demands (newrepublic.com)
- U.S. military used a laser to shoot down Customs and Border Protection drone (nbcnews.com)
- OpenAI signs Pentagon deal for classified AI networks hours after Anthropic gets banned from federal agencies (the-decoder.com)