The company has concerns over the use of its technology for autonomous lethal weapons or mass surveillance. Whiskey Pete is threatening the firm with being labeled a security risk.
February 25, 2026

Not quite sure what Whiskey Barrel Pete is up to here. According to the Washington Post, he has threatened Anthropic, demanding that the artificial intelligence firm share its technology in the name of national security if it does not agree by Friday to terms favorable to the military.

Anthropic is prepared to walk away from the negotiations — and its $200 million contract with the Defense Department — if concerns over the use of its technology for autonomous weapons or mass surveillance are not addressed, the Post reports.

The Pentagon claims it is not proposing any use of Anthropic’s technology that is not lawful. (Because the Epstein administration is so honest about everything else, I suppose we should believe them?) The Pentagon said in a statement to The Washington Post that if the company does not comply by 5:01 p.m. Friday, Hegseth “will ensure the Defense Production Act is invoked on Anthropic, compelling them to be used by the Pentagon regardless of if they want to or not.”

(Experts on the DPA questioned whether it could be used to force Anthropic to drop the limitations it imposes on how its technology can be used.)

Anthropic is the first firm to integrate its technology into the Pentagon’s classified networks, and the firm has aggressively positioned itself to be a key player in national security. Well, as the nuns always said, "Lie down with dogs, rise up with fleas."

Dario Amodei, the company’s CEO, has held firm that its AI model Claude should not be used to power autonomous weapons or conduct mass surveillance of Americans, the Post’s sources said.

Apparently Anthropic’s AI was applied during the raid to capture Venezuelan President Nicolás Maduro in ways the company had not approved. Defense "officials" say that if Anthropic does not allow the Pentagon to apply the AI as it wants, within lawful limits, the company will be considered a supply-chain risk, which would block it from future contracts.

AIs can’t stop recommending nuclear strikes in war game simulations

Leading AIs from OpenAI, Anthropic and Google opted to use nuclear weapons in simulated war games in 95 per cent of cases

www.newscientist.com/article/2516...

Christopher Mims (@mims.bsky.social) 2026-02-25T12:27:46.907Z

The Trump regime is using AI for mass surveillance of Americans, and plans to integrate AI into their weapons systems. They are trying to extort Anthropic for refusing to go along.

The oligarchs and their would-be King want to make human beings like us obsolete.

Max Berger (@maxberger.bsky.social) 2026-02-24T19:45:28.715Z

The company's Claude chatbot is one of the few AI systems cleared for use in classified settings. But a standoff between Anthropic and the Trump administration is putting its government work at risk.

NPR (@npr.org) 2026-02-24T20:39:15.869191Z

