AI Safety Abandoned as Competition Escalates: The Pentagon, Anthropic, and OpenAI in a Dangerous Race


The brief consensus around AI safety, once a shared goal of companies, lawmakers, and the public, is rapidly unraveling. What began as a cautious push for regulation and oversight has devolved into a cutthroat competition in which the U.S. military and leading AI firms prioritize speed and dominance over ethical considerations. The Pentagon’s aggressive stance, and the combative responses from Anthropic and OpenAI, signal a dangerous shift: safety is now secondary to strategic advantage.

The Pentagon vs. Anthropic: A Turning Point

The conflict between the Department of Defense (now rebranding as the Department of War) and Anthropic illustrates the problem perfectly. Anthropic previously insisted its Claude AI models would not be used for autonomous weapons or mass surveillance, a condition the Pentagon now seeks to erase. The military’s refusal to accept these limitations led to Anthropic losing its contract and being labeled a “supply-chain risk,” effectively barring it from future government work.

This isn’t just about contract disputes. It’s about the military’s determination to remove any restrictions on AI use, even if it means pushing the boundaries of legality. The question is not whether the military can build lethal autonomous drones, but whether it will, and how quickly. The lack of international agreements means other nations will follow suit, creating an inevitable AI arms race.

The Erosion of Safety Protocols

Anthropic’s recent changes to its “Responsible Scaling Policy” underscore the shift. The policy, designed to prevent catastrophic AI risks by tying model releases to safety procedures, has been quietly abandoned. The company admitted that the policy failed to create the broad consensus needed to enforce safety standards. The environment now prioritizes AI competitiveness and economic growth, leaving safety discussions behind.

The result is a bare-knuckle competition in which OpenAI swiftly moved to fill the void left by Anthropic’s contract termination. OpenAI CEO Sam Altman claimed his move was intended to support Anthropic, but Anthropic CEO Dario Amodei accused him of undermining the company’s position to gain favor with the administration. This infighting demonstrates that even within the leading AI labs, safety is increasingly viewed as a liability rather than a priority.

The Illusion of Progress

Despite the bleak reality, AI companies insist that safety remains important. Anthropic’s chief science officer, Jared Kaplan, argues that research labs still prioritize ethical development. OpenAI points to the growth of AI safety organizations and the European Union’s regulatory efforts as signs of progress.

However, these claims ring hollow when weighed against the Pentagon’s actions and the industry’s relentless pursuit of dominance. OpenAI admits that while it has safeguards in place, there is no guarantee they would withstand pressure from the military, which could invoke the Defense Production Act to seize control if necessary.

A Grim Conclusion

The situation is clear: AI is too powerful, too tempting, to be restrained. As Anthropic CEO Dario Amodei put it, “This is the trap.” The race to develop and deploy AI will inevitably overshadow safety concerns, leaving humanity vulnerable to its unchecked potential. The era of cautious optimism is over. The future of AI is defined by competition, and safety will be the first casualty.