Anthropic becomes first-ever American company to be designated a 'risk to America's national security', Department of War tells CEO Dario Amodei in letter


Claude-maker Anthropic has been officially designated a "national security risk" in America, becoming the first US company to receive the label. In an official statement, Anthropic CEO Dario Amodei said the AI firm now has no choice but to challenge the supply chain risk designation. "Yesterday (March 4) Anthropic received a letter from the Department of War confirming that we have been designated as a supply chain risk to America's national security," Amodei said in the statement. "As we wrote on Friday (February 27), we do not believe this action is legally sound, and we see no choice but to challenge it in court," he added.

Amodei further stated that the language used by the Department of War in the letter (even supposing it was legally sound) matches the company's statement on Friday "that the vast majority of our customers are unaffected by a supply chain risk designation." "With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts," he said.

Anthropic CEO Dario Amodei: Trump admin’s decision has limited impact

In the statement, Amodei said the government's decision has a very limited impact. According to the company, the law invoked by the Department of War is meant only to protect government supply chains, not to punish companies. "The Department's letter has a narrow scope, and this is because the relevant statute (10 USC 3252) is narrow, too. It exists to protect the government rather than to punish a supplier; in fact, the law requires the Secretary of War to use the least restrictive means necessary to accomplish the goal of protecting the supply chain," he wrote.

As a result, the designation cannot block companies from using Anthropic's AI or working with the company in general. "Even for Department of War contractors, the supply chain risk designation doesn't (and can't) limit uses of Claude or business relationships with Anthropic if those are unrelated to their specific Department of War contracts," he stated.

Anthropic vs Pentagon

The dispute between Anthropic and the Pentagon stems from the AI company's refusal to lift its safeguards and allow the military to use Claude for "all lawful purposes." Despite Claude being the only AI model running on the military's classified systems, the firm has consistently insisted on blocking its use for what it calls mass surveillance of Americans or the development of weapons that fire without human involvement.

For those unaware, Anthropic's Claude was used during the operation to capture Venezuela's Nicolás Maduro, through Anthropic's partnership with Palantir. According to reports, the company's AI tool was also used during the Iran strikes. The Pentagon gave the company an ultimatum last week before declaring it a 'supply chain risk'. Calling the decision "retaliatory" and "punitive", Anthropic's CEO said that US President Trump disliked the company for not offering 'dictator-style praise'. Recently, the company said it is in talks with the U.S. Department of Defense about the use of its AI models by the US military.


