Why I Switched from ChatGPT to Claude -- And Why Today's War Makes It More Urgent Than Ever
Last week, before the U.S. and Israel launched massive strikes on Iran this morning -- and before Trump announced his ban on federal agencies using Anthropic -- I cancelled my ChatGPT paid subscription to shift my work to Claude. I switched because Claude was a smarter, better-functioning AI. But what has unfolded since then has given that decision far greater weight than I expected.
Here is what you need to know.
Anthropic Stood Its Ground
Anthropic, the company that makes Claude, refused to give the Pentagon unlimited access to its AI. The dispute came down to two specific refusals: Anthropic would not allow Claude to be used for domestic mass surveillance of Americans, and it would not allow Claude to power fully autonomous weapons systems -- machines that can kill without a human making the final decision. Anthropic stated it had "tried in good faith" to reach an agreement with the Pentagon over months of negotiations, "making clear that we support all lawful uses of AI for national security aside from the two narrow exceptions" being disputed. The company added: "To the best of our knowledge, these exceptions have not affected a single government mission to date."
Anthropic explained its reasoning plainly: "First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."
Trump's Retaliation
The Trump administration's response was swift and punishing. Defense Secretary Pete Hegseth designated Anthropic a "Supply-Chain Risk to National Security" -- a designation normally reserved for companies with ties to foreign adversaries. Trump then ordered all federal agencies to stop using Anthropic's products, declaring: "The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War, and force them to obey their Terms of Service instead of our Constitution." Anthropic says it intends to challenge the national security designation in court.
What About OpenAI and ChatGPT?
Here is where the story gets more complicated than a simple sellout narrative. OpenAI struck a deal with the Pentagon hours after Anthropic was blacklisted. But OpenAI CEO Sam Altman claimed his company negotiated the same essential red lines Anthropic had demanded, writing: "Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
Altman went further, calling on the Pentagon to "offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept," and expressing a "strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements."
CNN noted that it remains "not clear what is actually different about OpenAI's deal versus what Anthropic wanted." The real difference may be purely political: the Pentagon accepted OpenAI's framing as consistent with existing law, while characterizing Anthropic's identical concerns as "ideological" and "woke." A senior Pentagon official told Axios: "The problem with Dario is, with him, it's ideological. We know who we're dealing with."
What concerns me is whether those written commitments will be honored. Altman's promises may be sincere, but the Trump administration has demonstrated that it views any limits on military AI use as politically motivated resistance. I do not trust that the political appointees now running the Pentagon -- people like Hegseth, who has shown no interest in institutional constraints -- will respect those red lines when they become inconvenient. Agreements on paper mean little when the people enforcing them see safety guardrails as enemy ideology.
And Then Came the War