Read the memo Sam Altman sent to employees hours before OpenAI signed its Pentagon deal: Anthropic is ‘overreacting’




OpenAI CEO Sam Altman reportedly told his employees that rival Anthropic was ‘overreacting’ in its dispute with the US Department of War. The claim comes from a report citing an internal memo Altman sent to employees just hours before OpenAI finalised a deal with the Pentagon to deploy its AI models on classified government networks.

In the message, shared before the deal was announced, Altman said OpenAI was working with the US Department of War to explore whether its models could operate in classified environments while maintaining existing safety guardrails, an issue that had stalled Anthropic’s engagement with the government. He added that the company was seeking an approach that could help resolve the broader industry deadlock over military use of AI.

The memo came shortly before OpenAI announced its agreement with the Pentagon, following heightened tensions after President Donald Trump directed federal agencies to stop using Anthropic’s technology and Defence Secretary Pete Hegseth described the company as a “supply-chain risk to national security.”

What Sam Altman said in his memo to OpenAI employees

In a note to OpenAI employees seen by the Wall Street Journal, Altman wrote that the company was pursuing a deal “that allows our models to be deployed in classified environments and that fits with our principles.”

“We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons,” the memo added.

Altman said he hoped to help ease tensions between the two sides and prevent outcomes that could set difficult precedents for the industry. “We would like to try to help de-escalate things,” he wrote.

He also expressed support for Anthropic’s concerns in principle, while recognising the government’s position on national security oversight. “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines,” Altman noted.

He added that the disagreement centred more on governance than on how AI systems would ultimately be deployed. Altman explained, “We believe this dispute isn’t about how AI will be used, but about control. We believe that a private US company cannot be more powerful than the democratically-elected US government, although companies can have lots of input and influence. Democracy is messy, but we are committed to it.”

According to the memo, OpenAI believes these limits can be maintained through technical safeguards, including restricting models to cloud-based environments rather than edge deployments, which could reduce risks associated with uses such as autonomous weapons.
The company also plans to support researchers in obtaining security clearances so they can advise the government on system limitations and potential risks. “We would also build technical safeguards and deploy personnel (FDEs) to partner with the government to ensure things are working correctly, and we would offer similar services to other allied nations. If we are successful, perhaps this can be a path that can work for other AI labs, too,” Altman highlighted.

Before Altman circulated the memo, some OpenAI employees had publicly expressed support for Anthropic on social media. Around 70 current staff members signed an open letter titled “We Will Not Be Divided,” which, according to its website, seeks to build a “shared understanding and solidarity in the face of this pressure” from the Pentagon.

“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety, and I’ve been happy that they’ve been supporting our war fighters. I’m not sure where this is going to go,” Altman told CNBC in an interview before announcing the Pentagon deal.

Last year, OpenAI received a $200 million contract from the Department of War, enabling the agency to use its models for non-classified purposes. Meanwhile, Anthropic became the first AI lab to integrate its systems into mission workflows on classified government networks.



