What is behind the tension between Anthropic and the Pentagon
Anthropic, the company behind the Claude assistant, took a firm stance by refusing to allow its artificial intelligence to be used for purposes like mass surveillance and the development of fully autonomous weapons. That decision did not sit well with the Pentagon, which quickly classified the company as a risk within the United States defense supply chain. In practical terms, this means Anthropic came to be viewed as an unreliable supplier in the eyes of the Department of Defense, which could have serious consequences for its future contracts and partnerships with the American government.
The Trump administration’s decision to block Anthropic and designate it as a supply chain risk came on Friday, and just hours later combat operations began in Iran. The United States government justified the strikes by claiming they were necessary to neutralize imminent threats from Iranian nuclear and missile programs. The timing made the dispute even more sensitive: the strikes amplified the debate over the role of tech companies in military operations and national security. Suddenly, Anthropic’s decision to draw an ethical line carried enormous weight, because it became clear that artificial intelligence tools are moving ever closer to the battlefield.
What really stood out was that the pressure did not come from competitors or the market, but directly from one of the largest military structures on the planet. The Pentagon’s message was clear: companies that do not cooperate with defense demands can be treated as obstacles. This set a troubling precedent for the entire tech industry, since other companies developing language models and AI systems could face similar situations if they refuse to comply with similar requests.
The wave of support that surprised Silicon Valley
What nobody expected was that the response would come so quickly and with so much force. Within days, open letters started circulating among employees at companies like Google, OpenAI and other industry giants, calling for clear limits on how their organizations collaborate with the American government on military matters. Support for Anthropic’s position came from inside the very companies that, in theory, could have benefited if Anthropic lost market share. This shows that concern about the responsible use of artificial intelligence goes beyond commercial competition and touches on values that many professionals consider non-negotiable.
The manifesto, dubbed We Will Not Be Divided, gathered nearly 900 signatures in record time: a document with just a few hundred names on Friday had reached almost 900 by Monday, including about 100 OpenAI employees and roughly 800 from Google. The text is straight to the point, stating that the Department of Defense is trying to divide companies through fear that a competitor will cave to pressure. The letter argues that this strategy only works if nobody knows where the others stand, and that the manifesto’s goal is precisely to create shared understanding and solidarity in the face of these pressures.
Beyond We Will Not Be Divided, a second open letter also gained momentum. Hundreds of tech professionals signed a document asking the Department of Defense to withdraw Anthropic’s designation as a supply chain risk. The list of signatories includes dozens of OpenAI employees, along with professionals from companies like Salesforce, Databricks, IBM and Cursor. This document calls on the U.S. Congress to examine whether the use of these extraordinary authorities against an American tech company is appropriate, and argues that Anthropic and other private companies should not face retaliation for refusing to give in to government demands.
The most interesting part is that this solidarity was not limited to engineers and developers. AI safety researchers, product designers, project managers and even professionals in administrative roles joined the movement. The collective message is that supporting Anthropic is not about defending one specific company, but rather about protecting a principle that affects the entire industry: the right of a technology organization to refuse demands that conflict with its ethical values without facing institutional retaliation.
Google’s role and the shadow of Project Maven
For Google, this new wave of protests reopens old wounds. According to a New York Times report, the company is in negotiations with the Pentagon to bring its Gemini AI model to a classified government system. If that materializes, it would be a significant step, one likely to reignite an internal battle that has flared repeatedly within the company for years.
More than 100 Google employees who work directly with artificial intelligence technology reportedly signed a letter to company leadership last week, expressing concerns about working with the Department of Defense. They asked Google to draw the same red lines as Anthropic — in other words, to refuse the use of its AI for mass surveillance or autonomous weapons.
Jeff Dean, Google’s chief scientist, received the memo and apparently expressed sympathy for at least some of the concerns raised. In a post on X, Dean wrote that mass surveillance violates the Fourth Amendment of the U.S. Constitution and has a chilling effect on free speech. He added that surveillance systems are prone to misuse for political or discriminatory purposes.
Dean has been through similar situations inside Google before. In 2018, the company faced an internal revolt over Project Maven, a Pentagon program that used artificial intelligence to analyze images captured by drones. After thousands of employees protested, Google let the contract expire without renewal and subsequently established its well-known AI Principles, defining how its technology could be used.
But the issue was never truly resolved. In 2024, Google fired more than 50 employees after protests related to Project Nimbus, a joint $1.2 billion contract with Amazon to provide services to the Israeli government. Google executives always maintained that the contract did not violate the company’s AI Principles. However, documents and reporting showed that the deal allowed Google to supply Israel with AI tools for image categorization and object tracking, and even contained provisions for state weapons manufacturers. A December 2024 New York Times report revealed that four months before the Nimbus deal was signed, company employees had already warned internally that the contract could damage Google’s reputation and that Google Cloud services could be used to facilitate, or be linked to, human rights violations.
To make things even more complicated, in early 2025 Google reportedly revised its AI Principles and removed language that explicitly prohibited building weapons and surveillance technology. This quiet change reinforced internal and external suspicions about the company’s real willingness to maintain ethical boundaries in the use of its technology.
Pressure from No Tech For Apartheid and the cloud contract question
On Friday, the No Tech For Apartheid collective, known for criticizing cloud computing contracts between the United States government and major tech companies, published a joint statement with a hard-hitting title: Amazon, Google and Microsoft must reject the Pentagon’s demands.
The coalition called on the three cloud infrastructure leaders to refuse Department of Defense terms that would enable mass surveillance or other abusive uses of artificial intelligence. The group also demanded greater transparency in contracts involving the Pentagon, the Department of Homeland Security and Immigration and Customs Enforcement, or ICE.
No Tech For Apartheid pointed directly at Google, citing the possibility of a deal with the Pentagon that could mirror a contract that already allows the Department of Defense to use Grok, from Elon Musk’s xAI, in classified environments — as far as anyone knows, without any safeguards. The statement warned that the companies are on the verge of accepting similar contract terms and that Google is negotiating to deploy Gemini for the Pentagon’s classified uses.
While Anthropic and OpenAI made public statements about their negotiations with the Department of Defense and the current status of their contracts, Alphabet, Google’s parent company, stayed silent and did not respond to multiple requests for comment.
The bigger picture: immigration, AI use and tensions across the sector
The tension between the tech sector and the American government did not come out of nowhere. In the months leading up to the crisis involving Anthropic, relations had already been deteriorating for other reasons. The increasingly aggressive conduct of federal immigration agents, including incidents that left two American citizens dead in Minnesota earlier in the year, intensified demands from tech workers for more transparency about what their companies do for the government. Professionals across the industry began openly questioning cloud computing and artificial intelligence contracts with government agencies, creating a climate of distrust that was already simmering before Anthropic was even designated a supply chain risk.
This scenario makes it clear that the issue goes far beyond a single company or a specific contract. What is at stake is the very relationship between the companies that build the world’s most advanced technologies and the governments that want to use them to project military power and control borders. With each passing week, new revelations and new episodes make this relationship more complex and more loaded with consequences.
Why this debate matters for the future of artificial intelligence
This story goes well beyond a dispute between an AI startup and the Department of Defense. It reignites fundamental questions about who controls the most powerful technologies of our time and what the limits of their application should be. Generative artificial intelligence has advanced to a point where its models can be adapted for purposes ranging from customer service to identifying targets in combat scenarios. The line between civilian and military use has become so thin that the decisions made now by the companies leading this market will set the standard for the coming decades.
There is also an economic dimension that cannot be ignored. The United States defense sector moves hundreds of billions of dollars a year, and contracts with tech companies represent an ever-growing slice of that spending. Turning down that money is not a trivial decision, especially for companies that depend on investment rounds and need to demonstrate constant growth. By walking away from that path, Anthropic took on a real financial risk. And that is precisely why the support it received from professionals at other companies carries so much weight — it shows that a significant portion of the industry values ethical principles above lucrative contracts.
The scenario taking shape now is one of growing tension between governments that want unrestricted access to the most advanced artificial intelligence capabilities and companies trying to maintain some control over how their technologies are used. This is not a question exclusive to the United States. Governments around the world are watching this dispute closely, because the outcome will influence how AI regulation is built globally. If the Pentagon manages to establish that refusing military demands triggers supply chain penalties, other nations could adopt similar strategies, creating a domino effect that would drastically reduce the autonomy of tech companies.
What to expect in the next chapters
The mobilization around the Anthropic case is still unfolding, and the number of signatories on the open letters continues to grow. Historically, this kind of internal pressure from employees has produced real results in the tech sector. The Project Maven case, the adoption of Google’s AI Principles that followed it, and policy changes around data use at other companies are all examples of workers’ voices forcing actual change. Collective support for an ethical stance can be more powerful than any corporate lobbying, especially when it reaches critical mass.
On the other hand, the American government is unlikely to back down without a response. There are already discussions in the United States Congress about the possibility of creating legislation that would require artificial intelligence companies to cooperate with national security demands under certain circumstances. If something like that passes, the decision would no longer be in the hands of companies and would become a legal matter. Anthropic and other companies that share this vision would have to find new ways to defend their principles within a potentially unfavorable regulatory framework.
What is clear is that we are living through a defining moment for the relationship between artificial intelligence and military power. The choices being made now, by companies, governments and the professionals who build these technologies, will shape the future of this industry in ways we still cannot fully predict. And the message that nearly 900 tech workers sent by signing those letters is hard to ignore: the community that builds these tools wants a say in how they are used.
