Pentagon Pressures Anthropic Over AI Access for Classified Systems

The U.S. Department of Defense is escalating efforts to gain broader access to advanced artificial intelligence (AI) technologies, with Anthropic the latest company to face that pressure. On Tuesday, Pentagon officials summoned Anthropic CEO Dario Amodei to Washington for direct negotiations over the terms of their existing $200 million contract. The move follows a recent memo from Defense Secretary Pete Hegseth demanding that AI firms loosen restrictions on their technology for military applications.

Background on the Dispute

Last year, the Pentagon entered into a pilot program with Anthropic, gaining access to its AI model. The Defense Department now seeks more extensive usage rights, mirroring terms already struck with Elon Musk’s xAI and nearly finalized with Google for its Gemini model. The Pentagon’s strategy is clear: leverage those deals to pressure Anthropic into broader cooperation.

Key Demands and Anthropic’s Concerns

The core issue is control over how AI models are deployed. The Pentagon insists on the freedom to use the technology “as it sees fit,” provided its activities remain lawful. While the department would allow companies to retain some safety features (dubbed “the safety stack”), it wants to avoid limitations that would hinder military applications.

Anthropic, as the first AI firm granted access to classified military networks, is willing to compromise but has firm conditions. The company demands guarantees that its AI won’t be used for mass surveillance of U.S. citizens or deployed in fully autonomous weapons systems without human oversight. This stance reflects broader ethical concerns within the AI community about the potential for misuse.

Implications and Strategic Shift

This situation highlights a major shift in the Pentagon’s approach to AI procurement. The department is prioritizing access over restrictions, signaling a willingness to accept certain risks in exchange for operational flexibility. The pressure on Anthropic suggests a broader effort to standardize AI contracts, ensuring the military maintains maximum control over these powerful technologies.

Ultimately, the outcome will set a precedent for future engagements with AI companies, determining whether ethical safeguards will be secondary to strategic military interests.