Trump Orders Halt on Anthropic AI Use Amid Pentagon Ethics Clash

In a significant escalation of tensions over the military’s use of artificial intelligence, President Donald Trump has directed all federal agencies to stop using technology developed by the AI firm Anthropic. The directive follows a protracted dispute between the company and the Department of Defense over the ethical deployment of AI in military operations.

The conflict centers on Anthropic’s AI model, Claude, which the company prohibits from being used for mass domestic surveillance or fully autonomous weapons systems. Anthropic’s leadership, including CEO Dario Amodei, has raised ethical objections to these applications, leading to a standoff with the Pentagon. Defense Secretary Pete Hegseth had previously warned that the company could be designated a “supply chain risk” if it did not permit unrestricted use of its technology, a designation that would effectively sever Anthropic’s ties with the Department of Defense and its contractors.

President Trump’s announcement came shortly before a Pentagon-imposed deadline for Anthropic to comply with demands to lift these restrictions. In a post on Truth Social, Trump criticized the company, stating, “The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War.” He instructed federal agencies to “IMMEDIATELY CEASE all use of Anthropic’s technology,” while allowing a six-month phase-out for agencies to transition away from the company’s products.

The designation of Anthropic as a “supply chain risk” is a move typically reserved for foreign entities considered threats to national security. This action underscores the administration’s stance that private companies should not impose limitations on the military’s use of critical technologies. Defense Secretary Hegseth emphasized this point, stating that the company’s position was “putting AMERICAN LIVES at risk.”

In response to the administration’s actions, Anthropic has indicated plans to challenge the designation in court, arguing that the restrictions are legally unjustified. The company’s stance has garnered support from AI safety advocates and some lawmakers who share concerns about the ethical implications of AI in military applications.

The broader AI industry is closely monitoring the situation, as it may set a precedent for how AI technologies are integrated into national security frameworks. Notably, OpenAI has reached an agreement with the Pentagon to deploy its AI models under strict safety conditions, including prohibitions on mass surveillance and autonomous weapons use. This agreement highlights the ongoing debate within the tech industry about balancing ethical considerations with national security demands.

As the six-month phase-out begins, federal agencies will need to identify alternative AI solutions that meet both ethical standards and operational requirements. The outcome of the dispute may have lasting implications for the relationship between the government and AI developers, and for the future of AI deployment in military contexts.