OpenAI Wins Defense Contract Hours After Government Ditches Anthropic
OpenAI will deploy its AI models on Pentagon classified networks after the US government ordered agencies to stop using rival Anthropic over national security concerns.
OpenAI has reached an agreement with the United States Department of Defense to deploy its artificial intelligence models on classified military networks, just hours after the White House ordered federal agencies to stop using technology from rival firm Anthropic.
In a late Friday post on X, OpenAI CEO Sam Altman announced the deal, saying the company would provide its models inside the Pentagon’s “classified network.” He wrote that the department showed “deep respect for safety” and a willingness to work within the company’s operating limits.
The announcement came amid a turbulent week for the AI sector. Earlier the same day, Defense Secretary Pete Hegseth labeled Anthropic a “Supply-Chain Risk to National Security,” a designation typically applied to foreign adversaries. The designation requires defense contractors to certify that they are not using the company’s models.
President Donald Trump simultaneously directed every US federal agency to immediately halt use of Anthropic technology, with a six-month transition period for agencies already relying on its systems.
Anthropic was the first AI lab to deploy models across the Pentagon’s classified environment under a $200 million contract signed in July. Negotiations collapsed after the company sought guarantees that its software would not be used for autonomous weapons or domestic mass surveillance. The Defense Department insisted the technology be available for all lawful military purposes.
In a statement, Anthropic said it was “deeply saddened” by the designation and intends to challenge the decision in court. The company warned the move could set a precedent affecting how American technology firms negotiate with government agencies, as political scrutiny of AI partnerships continues to intensify.
Altman said OpenAI maintains similar restrictions and that they were written into the new agreement. According to him, the company prohibits domestic mass surveillance and requires human responsibility for decisions involving the use of force, including those made by automated weapons systems.
Source: CoinTelegraph