Microsoft to court: Stop Anthropic’s designation as ‘national security risk’ for now, first allow …

Satya Nadella, CEO, Microsoft

Microsoft is now backing AI giant Anthropic. According to a report by CNBC, Microsoft is urging a US judge to block the Pentagon’s designation of the company as a ‘supply chain risk’. In a filing to the District Court in San Francisco, Microsoft has asked for a temporary restraining order that would prevent the Defense Department from enforcing its ban on Anthropic’s technology across its existing contracts. Microsoft is of the view that without such an order, defense contractors would need to “act immediately to alter existing product and contract configurations,” potentially disrupting the military’s use of advanced AI. “This could potentially hamper US warfighters at a critical point in time,” Microsoft said in its filing.

Pentagon’s immediate ban

Last week, the Department of Defense officially banned Anthropic’s models and labelled the company a ‘supply chain risk’, a designation typically reserved for foreign adversaries. The move requires defense vendors to certify that they are not using Anthropic’s AI in Pentagon-related work. Responding to this, Anthropic sued the Trump administration, calling the designation “unprecedented and unlawful” and warning that hundreds of millions of dollars in contracts were at risk.

Why Anthropic has sued the government

The Pentagon wanted Anthropic to remove hard limits on deploying its AI for fully autonomous weapons and domestic surveillance of American citizens. Anthropic refused, saying that current AI models are not reliable enough for autonomous weapons and that using them in that way would be dangerous. It also called domestic surveillance a violation of fundamental rights. When those negotiations broke down, Defense Secretary Pete Hegseth formally designated Anthropic a national security supply-chain risk. President Donald Trump then directed the government to stop working with Anthropic altogether, with a six-month phase-out announced for existing contracts. The Defense Department has been equally firm, saying that US law – not a private company – should determine how America defends itself, and that the military needs full flexibility to use AI for “any lawful use.” The Pentagon warned that Anthropic’s self-imposed restrictions could endanger American lives.

Dispute over the use of AI

The ban followed failed negotiations between Anthropic and the Pentagon. Anthropic had sought assurances that its Claude models would not be used for fully autonomous weapons or domestic mass surveillance. The Pentagon, however, wanted unfettered access for all lawful military purposes. Neither side budged, leading to the collapse of talks.

Microsoft’s stake

Microsoft has invested heavily in Anthropic, announcing plans last November to put up to $5 billion into the company. It has also been a major backer of rival OpenAI since 2019. In its filing, Microsoft said a restraining order would allow time for a “negotiated resolution that will better serve all involved and avoid wide-ranging business impacts.” A spokesperson added: “The Department of War needs reliable access to the country’s best technology. And everyone wants to ensure AI is not used for mass domestic surveillance or to start a war without human control.”
