Thursday, April 9, 2026

Pentagon used Claude AI to kidnap Maduro – reports

Anthropic has publicly touted its focus on “safeguards,” seeking to limit military use of its tech.

The US military actively used Anthropic’s Claude AI model during the operation to capture Venezuelan President Nicolas Maduro last month, according to reports from Axios and The Wall Street Journal – revealing that the safety-focused company’s technology played a direct role in the deadly overseas raid.

Claude was used during the active operation, not merely in preparatory phases, Axios and the WSJ both reported Friday. Its precise role remains unclear, though the military has previously used AI models to analyze satellite imagery and intelligence in real time.

The San Francisco-based AI lab’s usage policies explicitly prohibit its technology from being used to “facilitate violence, develop weapons or conduct surveillance.” No Americans lost their lives in the raid, but dozens of Venezuelan and Cuban soldiers and security personnel were killed on January 3.

“We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” an Anthropic spokesperson told Axios. “Any use of Claude – whether in the private sector or across government – is required to comply with our Usage Policies.”

Anthropic’s rivals OpenAI, Google, and Elon Musk’s xAI all have deals granting the Pentagon access to their models without many of the safeguards that apply to ordinary users. But only Claude is deployed, via partnership with Palantir Technologies, on the classified platforms used for the US military’s most sensitive work.

The revelation lands at an awkward moment for the company, which has spent recent weeks publicly emphasizing its commitment to “AI safeguards” and positioning itself as the safety-conscious alternative within the AI industry.

CEO Dario Amodei has warned repeatedly of the existential dangers posed by unconstrained use of artificial intelligence. On Monday, the head of Anthropic’s Safeguards Research Team, Mrinank Sharma, abruptly resigned with a cryptic warning that “the world is in peril.” Days later, the company poured $20 million into a political advocacy group backing robust AI regulation.

At the same time, Anthropic is reportedly negotiating with the Pentagon over whether to loosen restrictions on deploying AI for autonomous weapons targeting and domestic surveillance. The standoff has stalled a contract worth up to $200 million, with Defense Secretary Pete Hegseth vowing not to use models that “won’t allow you to fight wars.”


Source: RT News

