How negotiations between Anthropic and the Pentagon fell apart — and OpenAI stepped in
What was supposed to be a milestone in collaboration between artificial intelligence and national defense turned into one of the most turbulent episodes in the relationship between Silicon Valley and the U.S. government in recent years. Negotiations between Anthropic and the U.S. Department of Defense over a $200 million contract collapsed in the final minutes before a hard deadline, leaving behind a trail of public accusations, personal rivalries, and a lawsuit that could redefine the limits of AI in the military sector. And right in the middle of it all, OpenAI stepped in to close a lightning-fast deal with the Pentagon, adding even more controversy. Let’s get into what happened behind the scenes. 👀
The memo that changed everything: how the Pentagon cleared the way for military AI
It all started on January 9, when Defense Secretary Pete Hegseth issued an internal memo directing the Department of Defense to speed up the adoption of artificial intelligence tools across its operations. This was not just another bureaucratic directive — it marked a real shift in the U.S. government’s posture, with generative AI now being viewed as a central piece of the national defense strategy.
The memo was blunt: AI should be broadly integrated into military operations, and tech companies should provide their tools without restrictions. To make the message even clearer, Hegseth reportedly placed AI-generated posters of himself around Pentagon hallways with the phrase “I want you to use A.I.” — an obvious nod to the classic Uncle Sam recruitment poster. The message was impossible to miss.
In practice, the memo meant that every AI company already involved in a Pentagon pilot program would need to renegotiate its contracts. Anthropic, OpenAI, Google, and xAI were all part of that program, but Anthropic had a particularly strong position. It was the only company that had already deployed its technology to operate in classified systems, and its Claude model was being widely used by defense analysts. That put Anthropic at the center of the renegotiations — and directly in the line of fire.
The $200 million contract and Anthropic’s red lines
The proposed contract was worth around $200 million and included privileged access to Claude for use in data analysis, military logistics, and intelligence operations. On the Pentagon side, the negotiations were led by Emil Michael, the Department of Defense’s chief technology officer. Michael is a well-known figure in the tech world: a former top Uber executive, he joined the Department of Defense in May after previously serving as a special assistant at the Pentagon during the Obama administration.
The central problem in the negotiations became clear in the very first rounds. The Department of Defense demanded unrestricted use of Anthropic’s technology, with no company-imposed limitations. That included the possibility of using Claude for the collection and analysis of unclassified commercial data on American citizens — such as geolocation data and web browsing history — as well as applications in weapons systems and surveillance.
Anthropic had a firm position on that. Since its founding, the company has maintained an acceptable use policy that explicitly prohibits two things: the use of its models for mass surveillance of American citizens and deployment in autonomous weapons systems without human oversight. For Dario Amodei, Anthropic’s CEO, those were not just negotiable corporate guidelines — they were red lines that defined the company’s identity.
The Department of Defense, for its part, argued that no private contractor should have the power to decide how its tools may legally be used by the government. It was a philosophical standoff that quickly turned into contract language the two sides could not reconcile.
The Pentagon meeting that went nowhere
On February 24, Pete Hegseth called an in-person meeting with Dario Amodei at the Pentagon in an attempt to find a path forward. According to people familiar with the discussions, the meeting lasted less than an hour, and the two men showed little warmth. The atmosphere was tense, and it became clear that the disagreement went beyond contractual clauses — there was a deep divide over the role of artificial intelligence in war and national security.
At the end of the conversation, Hegseth delivered an ultimatum: if Anthropic failed to reach a deal with the Pentagon by 5:01 p.m. the following Friday, the company would be classified as a supply chain risk. That designation, notably, has historically been reserved for foreign companies that the U.S. government considers national security threats — it had never been used against an American company. Hegseth also mentioned the possibility of invoking the Defense Production Act to force Anthropic to cooperate, although that threat was later dropped.
The Friday no one will forget: inside the collapse
Negotiations between Anthropic and the Pentagon dragged on for weeks, with legal teams on both sides exchanging contract drafts almost daily. According to sources close to the talks, the two sides had agreed on the vast majority of the contract’s terms. Points related to technical infrastructure, access levels, implementation timelines, and payment amounts were essentially settled. The deadlock was concentrated in one very specific area: the lawful surveillance of Americans.
On the Thursday before the deadline, Dario Amodei doubled down on AI safety. In a public statement, he said Anthropic could not, in good conscience, accept the Pentagon’s demands. He argued that in a narrow set of cases, artificial intelligence can undermine rather than defend democratic values, and that some uses simply fall outside what today’s technology can do safely and reliably.
Emil Michael’s response came that same night, and it was brutal. On social media, Michael called Amodei a liar and said he had a God complex. He posted that Amodei wanted personal control over how the United States military operates and was willing to put the country’s national security at risk. The intensely personal tone of the accusations shocked many people across the technology and defense worlds.
When Friday arrived, Anthropic executives still believed a deal was possible. According to people on both sides of the negotiation, the two parties were separated by only a few words in the surveillance clause. Anthropic was willing to allow its technology to be used by the National Security Agency, or NSA, for classified material collected under the Foreign Intelligence Surveillance Act, or FISA. But the company wanted a legally binding promise from the Pentagon that its technology would not be used to process unclassified commercial data about American citizens — such as geolocation information and web browsing records.
The Pentagon, meanwhile, wanted exactly the opposite: for Anthropic to allow the bulk collection and analysis of that type of commercial data.
The final minutes and Emil Michael’s ace in the hole
Complicating things even further, President Donald Trump told Hegseth on Friday morning that he had prepared a social media message criticizing Anthropic and ordering all government agencies to stop working with the company within six months. Even after Trump published that message at 3:47 p.m., the two sides kept talking.
Emil Michael was on a call with Anthropic executives when he asked to speak directly with Dario Amodei to resolve the final wording of the contract. The response he got was that Amodei was in a meeting with his executive team and needed more time. Michael was unhappy with that answer.
And here is the detail that changes everything: Michael had an ace in the hole. While negotiating with Anthropic, he had also been working in parallel on an alternative deal with OpenAI. The day after the tense Pentagon meeting between Hegseth and Amodei, Sam Altman called Michael to discuss an agreement for his company. In just one day, the two had already sketched out a draft contract. OpenAI accepted the Pentagon’s requirement that its AI could be used for all lawful purposes, but it also negotiated the right to implement technical safeguards in its systems to stay aligned with its safety principles.
When the 5:01 p.m. Friday deadline passed, the Department of Defense gave Anthropic no more time. At 5:14 p.m., Pete Hegseth announced that he had designated Anthropic a security risk and that the company would be barred from working with the U.S. government. In his words, posted on social media: “America’s warfighters will never be held hostage by the ideological whims of big tech.”
OpenAI enters the picture and the controversy explodes
That same Friday night, while Anthropic’s lawyers were beginning to prepare a lawsuit against the Pentagon, Sam Altman was on the phone with Emil Michael finalizing the details of OpenAI’s agreement with the Department of Defense. At 10 p.m., Altman announced on social media that his company had reached a deal with the Pentagon to provide its AI technologies for classified systems. Pete Hegseth reposted Altman’s announcement shortly afterward.
The speed of it all triggered a wave of criticism. On Saturday, Altman invited people to ask questions about the deal on X, formerly Twitter, in an attempt to contain the backlash. Many questioned how OpenAI could sign a contract with the Pentagon and still maintain its safety principles. Others asked whether the agreement truly protected the company’s AI models from misuse.
Altman’s response was pragmatic: “We do not want the ability to weigh in on a specific, lawful military action. But we very much want the ability to use our expertise to design a safe system.”
For many observers, that answer revealed the fundamental difference between the two companies. While Anthropic drew red lines around what its technology could and could not do, OpenAI took a more flexible approach, accepting that the government would determine lawful uses while reserving for itself the role of implementing technical protections.
Personal rivalries that fueled the fire
You cannot tell this story without talking about the personal rivalries that shaped every stage of the negotiations. Emil Michael, Dario Amodei, and Sam Altman have known each other for years in Silicon Valley’s business circles — and the relationship between them has never exactly been friendly.
Amodei and Altman, both 40, worked together at OpenAI before Amodei left in 2021 to found Anthropic, taking several top researchers with him. The split was driven by deep disagreements over safety in the development of language models. Since then, the two have been open rivals, competing for both talent and contracts in the AI market.
Michael, meanwhile, is 53 and carries a reputation as an aggressive negotiator — a legacy from his time as Uber’s senior vice president, where he was involved in controversies that defined the company during its most turbulent era. According to people familiar with the negotiations, Michael clearly preferred Altman, who had been actively courting the Trump administration, over Amodei, whose views on safety and ethics he saw as unnecessary obstacles.
That mix of competing AI philosophies, inflated egos, and mutual distrust turned what could have been a technical and legal negotiation into a personal drama with geopolitical consequences.
Lawsuit and the future of AI in the defense sector
That same Friday, just hours after the negotiations collapsed, Anthropic announced it would sue the Pentagon over the decision to classify it as a supply chain risk. The move is historically unprecedented. That designation has always been reserved for foreign companies that the U.S. government considers national security threats — such as certain Chinese telecommunications manufacturers. It had never been applied to an American company.
The lawsuit opened a debate that goes far beyond the parties involved. The central question is this: Do technology companies have the right to maintain ethical use policies even when working with the federal government on defense contracts? Or does national security justify any technological use the government considers lawful?
The case also revived memories of Google’s Project Maven in 2018, when company employees revolted against the use of AI for analysis of military drone imagery. Back then, Google backed away. But Anthropic went further by taking the dispute into court, openly challenging the Pentagon’s power to dictate the terms of technology use.
Meanwhile, U.S. intelligence agencies, including the CIA, which already uses Anthropic’s technology, have been pressuring both sides to reach an agreement. Some current and former government officials said they remain hopeful that a middle path can still be found.
What changes from here
The fallout from this story is still unfolding, but the impact is already visible on several fronts. Inside Silicon Valley, the episode reignited the debate over how far AI companies should go to meet government demands. On Capitol Hill, lawmakers from both parties have started discussing the need for legislation that sets clear limits on the use of artificial intelligence in defense and surveillance operations.
In the market, Anthropic saw its reputation grow among developers and researchers who value a safety-first approach, while OpenAI began facing tougher questions about its willingness to bend principles in exchange for government revenue. OpenAI, it is worth noting, is also facing a lawsuit brought by The New York Times over alleged copyright infringement related to the training of its AI models — another legal battlefront for Altman’s company.
One thing is clear in all of this: the collapse of the contract between Anthropic and the Pentagon was not just a deal gone bad. It was a turning point in how society thinks about the role of artificial intelligence in military power, the ethical limits of technology, and who ultimately has the responsibility — and the courage — to say no. 🔍
