How these ‘words’ in OpenAI’s Pentagon contract may allow the US government to do what Anthropic is said to have ‘backed out’ over
OpenAI struck a ‘hastily’ arranged deal with the Pentagon on the same day that the Department of War ‘kicked out’ Anthropic after the company declined to comply with its demands. Soon after the deal was announced, multiple reports said that the ChatGPT-maker faced backlash from company employees. Now, the Financial Times is reporting that OpenAI is locked in a second round of negotiations to ensure its technology isn’t weaponised for mass domestic spying.

Citing people familiar with the conversations, the report suggests that there are concerns about the wording used in the contract. The agreement prohibits “intentional,” “deliberate,” or “targeted” surveillance of American citizens using OpenAI’s AI models. While this sounds protective, legal experts and internal staff have flagged a significant gap: the US government may end up surveilling Americans “incidentally” or “unintentionally,” and under the current contract language, that may not be prohibited at all.

In simpler terms, the loophole that Anthropic refused to accept in its own negotiations with the Pentagon may still exist in OpenAI’s signed agreement.
OpenAI is already working to revise some of the terms in Pentagon contracts
OpenAI is not ignoring the problem. The company has already revised some of the contractual wording around surveillance since the deal was announced and is working to add further protections during a three-month implementation period, the report added.

“What is yet to be worked out is the implementation of these contracts,” a person close to OpenAI was quoted as saying. The person added that the next phase will cover questions “beyond the language of the contracts,” including where the technology will actually be deployed and what technical safeguards will govern when AI models might refuse to follow instructions.

“The challenge for OpenAI is how to make a product that is still usable but doesn’t do unsafe things,” the same person was quoted as saying.

OpenAI CEO Sam Altman has been candid about how the deal came together, acknowledging in an internal company meeting that the rushed announcement “looked opportunistic and sloppy.” The speed was evident: OpenAI announced the deal on Friday, then issued an updated statement on Monday with revised language, a sign that the original terms were not fully thought through before being made public.

The company has since been in damage-control mode, repeatedly clarifying the contract’s terms to address concerns from staff, legal observers, and the wider public, the report added. Meanwhile, Anthropic’s CEO Dario Amodei, in a note to staff, accused OpenAI of “mendacious” messaging around its original contract.