OpenAI Strikes Deal With US Department Of War To Deploy AI Models On Classified Network Amid Anthropic Blacklisting
· Free Press Journal

Washington DC [US]: OpenAI on Friday (local time) announced that it had reached an agreement with the US Department of War to deploy some of its models on the department's classified network. OpenAI CEO Sam Altman announced the deal on X, noting that the agreement contains clauses prohibiting mass surveillance. Surveillance was the sticking point that led the US Department of War to cancel its agreement with OpenAI's bitter rival Anthropic.
"Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement," Altman said in his post.
"We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place," he said.
Earlier, US President Donald Trump ordered all federal agencies to immediately stop using Anthropic technology amid a growing dispute between the AI company and the Pentagon. He accused the firm of trying to interfere with how the US military operates and threatened further action if it does not cooperate during a six-month phase-out period. Secretary of War Pete Hegseth also launched into a diatribe against Anthropic boss Dario Amodei, accusing him of duplicity.
"This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic's models for every LAWFUL purpose in defense of the Republic. Instead, Anthropic AI and its CEO Dario Amodei have chosen duplicity. Cloaked in the sanctimonious rhetoric of "effective altruism," they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives," Hegseth posted on X.
"The Terms of Service of Anthropic's defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives. Anthropic's stance is fundamentally incompatible with American principles," he added.
"Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America's warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final," he further said.
The DoW's remarks came after Amodei, in a statement on Thursday, said that the firm would not support certain uses of AI, including mass domestic surveillance and fully autonomous weapons, citing concerns about democratic values and the current reliability of frontier AI systems. Amodei stated that despite pressure from the DoW to agree to "any lawful use" of its technology and to remove specific safeguards, the company would not change its position.
"The Department of War has stated they will only contract with AI companies who accede to "any lawful use" and remove safeguards in the cases mentioned above. They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a "supply chain risk"--a label reserved for US adversaries, never before applied to an American company--and to invoke the Defense Production Act to force the safeguards' removal.
These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security," the statement read. With Altman's OpenAI now stepping in where Anthropic once stood, the episode marks yet another chapter in the bitter standoff between two of the world's top artificial intelligence companies.
Disclaimer: This story is from the syndicated feed. Nothing has been changed except the headline.