Federal judge sides with Anthropic in first round of standoff with Pentagon
Face-off is over company’s refusal to let defense department use its AI model in autonomous weapons systems
A federal judge in California sided with Anthropic in its case against the Department of Defense on Thursday, ordering a temporary pause on the government’s punitive measures against the artificial intelligence firm.
Judge Rita Lin granted Anthropic’s request for a temporary injunction while the northern district court of California hears the company’s case. Anthropic argued that the Department of Defense and Donald Trump violated its first amendment rights in declaring the company a supply chain risk and ordering government agencies to cease using its technology.
The judge stayed the order for one week.
The months-long standoff between Anthropic and the government has revolved around the company’s refusal to allow the defense department to use its Claude AI model for fully autonomous lethal weapons or domestic mass surveillance. Anthropic filed suit against the government earlier this month; proceedings began on Tuesday with a hearing on the temporary injunction.
Judge Lin’s ruling found that the government overstepped its authority in its attempts to punish and coerce Anthropic, stating that the Pentagon’s designation of the AI company as a “supply chain risk” is “likely both contrary to law and arbitrary and capricious”.
“The Department of War provides no legitimate basis to infer from Anthropic’s forthright insistence on usage restrictions that it might become a saboteur,” Lin wrote.
During the hearing on Tuesday, Lin questioned lawyers for the government on the rationale behind the supply chain risk designation when the DoD could have simply dropped Anthropic as a contractor.
“It looks like an attempt to cripple Anthropic,” Lin said.
Attorneys for the government claimed that although the secretary of defense, Pete Hegseth, had posted on social media that no contractor that does business with the US military could work with Anthropic, his statement did not carry legal effect and therefore would not create the irreparable harm the company’s lawsuit alleged. When Lin pressed government lawyers on why Hegseth would post something without any legal authority to back it, they responded that they did not know.
Lin appeared skeptical of the government’s arguments throughout the hearing, and said that the actions the government took against Anthropic did not seem to reflect specific national security concerns.
Anthropic has argued in its complaint that the supply chain risk designation and other punitive actions could cost the company hundreds of millions, if not billions, of dollars.
“These actions are unprecedented and unlawful. The constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic stated in its complaint.
The injunction has implications for the government’s attempts to make federal agencies replace Claude with other AI tools, a difficult process given how deeply Anthropic’s technology has been embedded into government operations. The DoD has reportedly been extensively using Claude for military operations, including in target selection and analysis of missile strikes in its war against Iran.