The U.S. Department of Justice (DOJ) is defending the Department of War’s recent designation of artificial intelligence company Anthropic as a “supply chain risk” in the U.S. Court of Appeals for the D.C. Circuit. This designation effectively bars Anthropic from participating in military contracts, following the company’s refusal to permit unrestricted military use of its AI model, Claude.
In its court filing, the DOJ stated that the Secretary of War determined Anthropic’s behavior had “undermined the trust required to sustain that relationship and given rise to the risk that [Anthropic] may unilaterally manipulate its software to enforce its own moral and policy judgments about the military’s appropriate use of the software.” This assertion underscores the government’s concern that Anthropic could impose its ethical standards on military applications of its technology.
Anthropic has responded by filing lawsuits in both the U.S. District Court for the Northern District of California and the U.S. Court of Appeals for the D.C. Circuit, challenging the “supply chain risk” designation. The company argues that the label violates its First Amendment rights and exceeds the government’s legal authority. Anthropic contends that the designation is retaliation for its stance on ethical AI use, particularly its opposition to mass surveillance and autonomous weapons, and maintains that existing procurement laws do not grant the Pentagon or the President the authority to blacklist domestic companies in this manner.
The legal battle has attracted support from various quarters. Microsoft and a group of 22 retired U.S. military leaders have filed amicus briefs backing Anthropic’s position. Microsoft argues that the designation is vague, unprecedented, and politically motivated, potentially causing economic harm and undermining fair government contracting. The retired military officials, including former service secretaries and a former CIA director, express concerns that the Pentagon’s actions could disrupt military operations reliant on AI tools.
OpenAI, a competitor of Anthropic, has also weighed in on the matter. CEO Sam Altman described the government’s actions as setting an “extremely scary precedent” and expressed hope for a better resolution. Although OpenAI has its own agreement to work with the Pentagon, Altman said disputes of this kind should be handled differently.
The outcome of this legal confrontation could have significant implications for the relationship between AI developers and the U.S. defense establishment, particularly for how national security interests are balanced against ethical considerations in the deployment of artificial intelligence.