Anthropic Abandons Pretext of AI Safety After Pentagon Pressure

In a notable shift, Anthropic, a prominent AI company known for its emphasis on safety, has relaxed its core safety commitments. The move comes in response to increasing pressure from the U.S. Department of Defense, which seeks military access to the company’s AI models, including those used in missile defense systems.

The changes, described in a recent Time article, mark a departure from Anthropic’s previous pledge to refrain from developing more powerful AI models without proven safety measures. As competition in the AI landscape heats up, the company acknowledged that its rigid stance on safety could hinder its ability to keep pace with rivals. One official stated, “It didn’t make sense to make unilateral commitments … if competitors are blazing ahead.”

Critics are expressing concern over the implications of loosening AI safety protocols, fearing the change could enable mass surveillance and autonomous warfare. “This is a dangerous expansion of the imperialist war machine,” commented one observer, underlining the ethical dilemmas surrounding military applications of AI technologies.

The capitulation highlights a critical intersection of technological advancement, ethical responsibility (or the lack thereof), and military interests. With far-right white supremacists at the helm of the Pentagon, pressure on companies to permit greater concentration of military power in their hands is growing. Anthropic will certainly not be the last supplier to roll over under that pressure.