TECH & HUMAN//2026-02-28//5 min

ChatGPT Is Now the Pentagon's Official AI Tool. And Nobody's Asking What For.

On Friday afternoon, Trump ordered every federal agency to immediately cease using Anthropic's technology. A few hours later, Sam Altman announced a deal with the Pentagon to deploy ChatGPT in classified military networks.

The media framed it as a story of courage versus cowardice. Anthropic refused to back down. OpenAI jumped into the market gap. Hero and villain.

But the real story is somewhere else. And it's much worse.

What the Pentagon Actually Wanted

The Pentagon — renamed "Department of War" since January — demanded one thing from Anthropic: that the military could use Claude for "all lawful purposes" without any restrictions.

Anthropic said no. But not to everything. They agreed to military use. They were the first AI company in classified networks. First in national laboratories. They sacrificed hundreds of millions in revenue by cutting off companies linked to the Chinese Communist Party.

They refused exactly two things:

Mass surveillance of American citizens. AI can link legally available data — phone geolocation, browsing history, financial records from data brokers — into a complete profile of any person. Automatically. At massive scale. On millions of people at once. And technically? Legal. Because that data is "publicly available."

Fully autonomous weapons without human oversight. Today's AI models aren't reliable enough to make life-and-death decisions on their own. Anthropic offered to collaborate on research to improve that reliability. The Pentagon refused.

For this, the Pentagon designated Anthropic as a "supply chain risk to national security" — a label normally reserved for adversaries like Huawei. Trump threatened criminal prosecution. Pete Hegseth spoke of "betrayal."

What Altman Actually Signed

Then came Sam Altman. He announced a deal with the Pentagon. Claimed it contains the same principles — ban on mass surveillance, requirement for human oversight.

Sounds good. Except.

Altman's deal references "existing US law and Pentagon policy." Axios confirmed it directly: the restrictions "reflect existing law and the intent was not to create new legal standards."

That's the key detail most media missed.

Anthropic argued that current law isn't enough. That collecting "publicly available" data is technically legal, but combined with frontier AI it becomes de facto mass surveillance. They wanted protection beyond the law. Because the law hasn't caught up with the technology.

OpenAI accepted the status quo. "All lawful purposes" — just in prettier packaging.

And the evidence matches. Emil Michael, undersecretary for technology, was reportedly still on the phone offering Anthropic a deal while Hegseth was tweeting the blacklist. That deal still contained requirements for collecting data on Americans — geolocation, browsing, financial records. The Pentagon celebrated the OpenAI deal. Hegseth reposted Altman's tweet. If OpenAI had actually pushed through the red lines that the Pentagon just destroyed Anthropic over — why would they celebrate?

Why This Is a Historic Turning Point

Let's pause for a moment. Because this is more important than it looks.

The American government just threatened criminal prosecution against an AI company — not because it broke the law. Because it had ethical standards stricter than the law.

And another AI company — the biggest in the world — immediately signed a deal that doesn't have those standards.

ChatGPT will now run in classified military networks with restrictions that don't exceed existing legislation. Legislation that Anthropic itself showed is inadequate for AI. Legislation that allows linking publicly available data into complete citizen profiles.

This isn't sci-fi. This is Friday.

What This Means in Practice

Imagine a system with access to data broker information — and that data is legally available, sold routinely. Geolocation. Who you meet with. What you buy. What websites you visit. Financial records.

A human couldn't connect all this. It would take years. Thousands of analysts.

A frontier AI model does it in seconds. On millions of people at once. And because every individual data point is "public" — it's legal.
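The mechanics are mundane. A minimal sketch of the linkage, with every identifier, field, and record invented for illustration: each dataset is individually "public" and individually harmless-looking, but joining them on a shared key (here a hypothetical advertising ID) yields the complete profile.

```python
from collections import defaultdict

# Fragments as a data broker might sell them, keyed by a hypothetical ad ID.
# All values below are made up.
geolocation = {"ad-123": ["home: 40.71,-74.00", "clinic: 40.73,-73.99"]}
browsing    = {"ad-123": ["jobs.example.com", "support-group.example.org"]}
purchases   = {"ad-123": ["pharmacy", "bus ticket"]}

def build_profiles(*datasets):
    """Merge any number of (name, {id: records}) datasets into {id: profile}."""
    profiles = defaultdict(dict)
    for name, data in datasets:
        for person_id, records in data.items():
            profiles[person_id][name] = records
    return dict(profiles)

profiles = build_profiles(
    ("location", geolocation),
    ("browsing", browsing),
    ("purchases", purchases),
)

# One shared key, one complete picture: where they go, what they read,
# what they buy. Each input was "public"; the join is what the law never
# anticipated.
print(profiles["ad-123"])
```

A few dictionaries and a loop. The point isn't that this toy join is hard; it's that frontier models do the hard part — resolving which fragments belong to the same person when there is no clean shared key — at population scale.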

Dario Amodei described it precisely: the law was created when these capabilities didn't exist. Nobody anticipated that a single technology could connect all those fragments into a complete picture. Legal puzzle pieces that together create something the law never intended to enable.

Anthropic said: this is de facto mass surveillance, even if it's technically legal. We refuse to enable it.

OpenAI said: it's legal, so no problem.

Reactions That Speak Volumes

Katy Perry posted a screenshot of her Claude subscription with a heart. Ten thousand likes and a million views within an hour. Thousands of people announced they were canceling ChatGPT Plus.

Over 100 Google employees sent leadership a letter demanding similar safeguards. OpenAI and Google DeepMind employees signed a joint open letter "We Will Not Be Divided."

Ilya Sutskever — OpenAI co-founder who left after a conflict with Altman — said: "It is extremely good that Anthropic did not back down. Much harder situations will come in the future."

Harvard professor Lawrence Lessig called Anthropic's decision "a beautiful act of integrity and principle — incredibly rare for our time."

People instinctively understand what happened. One company stood by its principles and paid for it. The other repackaged capitulation in nice words and got the contract.

What Comes Next

Anthropic announced it will challenge the "supply chain risk" designation in court. Hegseth gave military contractors six months to transition away from Anthropic.

But that's just the surface.

The real precedent is this: the American government just showed that an AI company with ethical standards stricter than the law will be punished. And a company that fits within "all lawful purposes" will be rewarded.

That's a signal. For every AI company on the planet.

Next time someone considers setting guardrails beyond the law — they'll remember what happened to Anthropic. And they probably won't do it.

And meanwhile, ChatGPT will run in the classified networks of the world's largest military. With restrictions that don't even reach what its biggest competitor considered the absolute minimum.


Most people today noticed the drama. Anthropic versus the Pentagon. Courage versus cowardice. Hero versus villain.

But the real story isn't about who's brave.

It's that the world's most advanced AI just got the green light for "all lawful purposes" in the hands of the American military — and "lawful" doesn't mean "safe." It never did.

We just didn't have the technology to make that difference visible until now.