
OpenAI’s rushed classified deal with the newly rebranded “Department of War” is testing whether America can modernize the battlefield without sliding into surveillance-state habits at home.
Story Snapshot
- OpenAI signed a classified-network agreement with the U.S. Department of War on March 1, 2026, after rival Anthropic refused similar terms.
- The Trump administration ordered federal agencies to phase out Anthropic’s models over a six-month period after DoW leadership labeled the company a “supply chain risk.”
- CEO Sam Altman admitted the deal was rushed and that the “optics don’t look good,” while OpenAI touted “red lines” against mass surveillance and autonomous weapons.
- Internal dissent surfaced publicly, with at least one OpenAI alignment researcher calling the safeguards “window dressing.”
A Classified Deal Lands in the Middle of a Government Blacklist Fight
OpenAI’s March 1 agreement allows its AI models to run inside classified military networks, a major step beyond the company’s earlier government discussions, which focused on unclassified work. The timing mattered: the signing came only hours after Anthropic rejected a similar offer, reportedly over concerns about its broad “all lawful purposes” language. Within the same weekend, the Trump administration moved to phase Anthropic out of federal agencies on a six-month timeline.
The pressure campaign against Anthropic is central to this story because it reshaped the negotiating environment for every major AI lab. Department of War Secretary Pete Hegseth designated Anthropic a supply-chain risk, and the phase-out order elevated what could have been a routine procurement dispute into a precedent-setting government action. OpenAI’s leadership publicly opposed the blacklisting even as it accepted a classified deal it acknowledged was executed quickly.
Altman’s “De-escalation” Argument vs. Employee “Window Dressing” Critique
Sam Altman defended the decision in a public Q&A on X, conceding the agreement was rushed and the optics were bad, but framing it as a way to reduce escalation and keep AI work inside clearer boundaries. OpenAI also highlighted “red lines” that it says prohibit mass surveillance, autonomous weapons, and “high-stakes” automated decisions. Executives argued that combining legal restrictions, usage policies, and technical controls is stronger than relying only on contract language.
Not everyone inside OpenAI appeared persuaded. Fortune reported public criticism from alignment researcher Leo Gao, who argued the safeguards were more presentation than protection. The split matters because it underscores what the public can and cannot verify: OpenAI released only a partial version of its agreement, and key details remain classified. With an incomplete text, outside observers cannot independently assess enforcement mechanisms, auditability, or the remedies available if the government customer pushes the boundaries.
What OpenAI Actually Published—and What Still Can’t Be Verified
OpenAI’s public post described a “layered safeguards” approach: references to governing law, explicit policy limits, and technical guardrails intended to prevent prohibited uses. The company’s stated boundaries track three headline prohibitions (no mass surveillance, no autonomous weapons, and no automation of high-stakes decisions) meant to reassure the public that the deal won’t turn AI into a shortcut around constitutional norms. Still, the company’s own transparency has limits: the full contract remains classified.
A legal analyst warned that legal commitments can be squishy in practice when terms depend on evolving interpretations of law and classified program details. That uncertainty is not a minor footnote; it is the core governance challenge with powerful dual-use systems. Americans who remember years of bureaucratic overreach have reason to demand plain, enforceable limits—because “trust us” has a bad track record when national-security agencies get new tools.
The Precedent Conservatives Should Watch: Government Leverage Over Private Tech
From a limited-government perspective, the biggest long-term issue may not be which AI lab wins a contract, but how the federal government shapes the market through coercive leverage. The Anthropic phase-out shows how quickly Washington can punish a private firm for refusing preferred terms, even when that refusal is framed as an ethics or governance objection. OpenAI itself reportedly described the blacklist as a “scary precedent,” even while trying to keep a seat at the table.
At the same time, the national-security stakes are real. The Trump administration has signaled urgency about military AI capacity amid geopolitical rivalry, and the DoW clearly wants access to frontier models for classified work. The unresolved question is whether the system that gets built will respect Americans’ expectations of lawful, limited use, or whether broad “all lawful purposes” language and classified implementation will gradually normalize domestic-style surveillance capabilities. Public documentation remains partial, and that’s a problem.
Sources:
- OpenAI signs Department of War deal after Anthropic refusal and blacklisting (AI ethics dispute)
- OpenAI CEO Sam Altman answers questions on new Pentagon deal
- Our agreement with the Department of War