A federal court ruling on Thursday temporarily halted a US government ban on the use of Anthropic’s artificial intelligence technology by federal contractors. The ruling came from US District Judge Rita Lin, who sided with Anthropic in its legal battle against a presidential directive that compelled all federal agencies to cease using Anthropic’s Claude AI model, citing the company as a “supply chain risk”.
The conflict originated in Anthropic’s contract negotiations with the US Department of Defense. The Pentagon has been working to accelerate its AI initiatives to improve intelligence data processing and military efficiency. During discussions, Anthropic proposed safety measures, including a stipulation against using its AI for mass surveillance of American citizens. A Pentagon representative maintained that any military use of the technology would follow lawful orders, underscoring the friction between safety concerns and strategic imperatives (JURIST).
President Donald Trump criticized Anthropic in February, labeling its insistence on ethical guidelines a “disastrous mistake” that risked American lives. Consequently, the administration branded Anthropic a national security threat, leading to its supply chain risk designation. Anthropic responded with a lawsuit claiming this designation violated various legal standards, including the Administrative Procedure Act and the First Amendment.
Judge Lin found that the government had not supplied sufficient evidence to justify the supply chain risk label. Moreover, she described the ban as “classic illegal First Amendment retaliation,” arguing it was a response to Anthropic’s public criticism of government contracting approaches.
The ruling underscores ongoing tensions between private tech companies and government agencies over ethical AI use, and highlights how national security concerns are being weighed against corporate policy commitments. Reporting has also noted the case’s broader implications for how AI ethics are integrated into government contracts and operations (Reuters).