Pentagon pressured AI companies before strikes on Iran — and that is a terrible sign for the future of ethical AI

Artificial Intelligence and military power have never been so close — or so dangerously intertwined. In March 2026, while the United States and Israel were preparing coordinated strikes against Iran, a very different kind of battle was unfolding behind the scenes in Washington. The Pentagon and Anthropic, creator of the Claude assistant, were locked in a tug of war over the limits of using AI in military operations. The company wanted basic guarantees: no surveillance of American citizens and no autonomous weapons without human oversight. The Trump administration’s response was fast and brutal — it cut all federal contracts with Anthropic and labeled the company radical and woke.

Hours later, OpenAI stepped in and took its place, signing a deal with the Department of Defense without imposing any explicit ethical restrictions. The episode raises a question that goes far beyond the technology itself: what happens when the most powerful government on the planet forces AI companies to abandon their own ethical principles? 🤔 This story involves hollowed-out regulation, growing machine autonomy on the battlefield, and a sense that ethics is increasingly disposable when the subject is war. Let’s break down what happened, why it matters, and what could be coming next.

The tug of war between the Pentagon and Anthropic

To understand how serious this was, we need to look at the bigger picture. Anthropic has always positioned itself as a company that puts safety first in the development of Artificial Intelligence. Since its founding, the company has laid out very clear acceptable use policies, including an explicit ban on applications involving lethal autonomous weapons and mass surveillance without a court order. When the Pentagon came knocking to sign contracts tied to military operations in the Middle East, Anthropic did not say no outright — it tried to negotiate minimum conditions that would preserve its founding values.

Among those conditions were a requirement for human supervision in any lethal decision-making process and a guarantee that its Claude models would not be used to spy on American citizens on US soil. Anthropic also publicly stressed that current AI systems are not reliable enough to operate fully autonomous weapons, underscoring that its position was not just ideological, but also technical.

The government, however, was not willing to accept half measures. The Trump administration viewed these conditions as ideological roadblocks rather than legitimate technical safeguards. On the Friday before the strikes, President Donald Trump posted on social media that he would never let a radical, woke company dictate how the great American military fights and wins wars. The cancellation of all federal contracts with Anthropic came alongside a public campaign to discredit the company, triggering a domino effect that pressured other tech firms to rethink any stance that could be seen as resistance to the government. The message was crystal clear: either you collaborate on the Pentagon’s terms, or you are out of the game — and out of the money.

The most revealing move was how quickly OpenAI filled the vacuum left by Anthropic. Within hours, Sam Altman's company had a deal in place with the Department of Defense. The core difference from Anthropic's stance is that OpenAI committed to permitting all legal uses of its tools, without publicly specifying which ethical lines, if any, it would refuse to cross.

That does not necessarily mean OpenAI has abandoned all of its internal principles, but the absence of explicit conditions sent a worrying signal across the Artificial Intelligence industry. The implicit message is that companies that want to survive financially and maintain access to billion-dollar contracts need to be flexible — and flexibility, in this context, means staying quiet about ethics and usage limits. On the flip side, the decision is already having consequences for OpenAI itself. A growing movement to boycott ChatGPT has gained traction among users who criticize the lack of any ethical limits in the military deal.

It is worth remembering that OpenAI did not get here out of nowhere. Its cofounder and president Greg Brockman personally donated 25 million dollars to a pro-Trump organization the previous year. Sam Altman also contributed 1 million dollars to Trump’s inauguration fund — while publicly stressing that he has also donated to Democratic politicians. These financial ties help explain how seamlessly the company positioned itself as Anthropic’s replacement at such a sensitive moment.

The ethical principles that once looked promising

What makes this episode even more bitter is that not long ago, there was a fairly strong international consensus on the need to govern the military use of Artificial Intelligence. In February 2020, the US Department of Defense itself adopted ethical principles for AI use across the organization. Those principles required systems to be responsible, equitable, traceable, reliable, and governable.

NATO followed a similar path in 2021, laying out comparable principles for AI use among its members. The United Kingdom published its own defense AI strategy in 2022, aligned with the same philosophy of transparency and human control. These milestones were not perfect, but they represented a genuine effort to set rules of the road before the technology advanced too fast to rein in.

The United States plays a unique role among its international allies in setting global norms for military conduct. When Washington adopted these principles, it sent a clear signal to countries like Russia, China, Brazil, and India about how military AI should be governed. Walking away from those same principles now sends an equally clear signal — just in the opposite direction.

Machine autonomy and the hollowing out of regulation

The debate over autonomy in Artificial Intelligence systems used in the military is not new, but it took on a whole new level of urgency with this episode. For years, security experts and researchers have warned that the line between AI that assists human decisions and AI that makes decisions on its own is much thinner than it looks. The main concern centers on so-called lethal autonomous weapons systems — hardware and software capable of selecting targets and attacking them without human intervention.

When a drone analyzes real-time data, identifies a target and recommends a strike, is the human operator who pushes the final button really making an independent decision, or just rubber-stamping what the machine has already decided? That question gets even more uncomfortable when you consider that in real combat scenarios, the pressure for fast responses often pushes systems toward higher and higher levels of autonomy. Pushing out companies that insisted on keeping humans genuinely in control of decision-making only speeds up this dangerous trajectory.
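
To make the rubber-stamping concern concrete, here is a minimal sketch in Python of the gap between a human who genuinely decides and a human who merely sits in the loop. Everything in it is hypothetical: the names, the data structure, and the timeout are illustrative, and nothing here describes any real military system.

```python
import time
from dataclasses import dataclass

@dataclass
class Recommendation:
    """Hypothetical output of a targeting model: a candidate and a confidence score."""
    target_id: str
    confidence: float
    rationale: str

def human_in_the_loop(rec: Recommendation) -> bool:
    """A genuine human decision: no default and no countdown.
    The operator must actively type 'approve'; anything else aborts."""
    print(f"Model recommends {rec.target_id} "
          f"(confidence {rec.confidence:.0%}): {rec.rationale}")
    answer = input("Type 'approve' to proceed, anything else to abort: ")
    return answer.strip().lower() == "approve"

def rubber_stamp(rec: Recommendation, timeout_s: float = 5.0) -> bool:
    """The failure mode critics warn about: formally a human is 'in the loop',
    but the interface defaults to the machine's choice after a countdown.
    The sleep stands in for an interruptible timer; the objection path is
    omitted in this sketch, which is exactly the point: approval is the default."""
    print(f"Model recommends {rec.target_id}. Auto-approving in {timeout_s:.0f}s...")
    time.sleep(timeout_s)
    return True
```

Both functions put a human "in the loop" on paper; the difference between them, genuine supervision rather than a default, is precisely the kind of guarantee Anthropic was reportedly asking for.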

On the regulation front, the picture is just as grim. The Trump administration had already banned US states from regulating AI the year before, claiming that any state-level regulation would threaten innovation. That decision wiped out one of the few remaining layers of protection in the US regulatory ecosystem. Without robust legislation that clearly defines what autonomous systems can and cannot do in armed conflict, responsibility falls entirely on the companies — and as we have seen, companies that try to exercise that responsibility are punished.

International bodies have been debating limits on lethal autonomous weapons for almost a decade; UN talks have been underway since 2014, with the Red Cross pressing for a binding treaty, but negotiations have moved at a glacial pace while the technology evolves exponentially. The result is a widening regulatory gap, where increasingly capable systems operate in a legal and moral vacuum.

Military AI and its dependence on the private sector

One key point that many people miss is that military Artificial Intelligence depends almost entirely on the private sector. The most relevant data, the most advanced technical know-how, and the most qualified professionals are in companies like Anthropic, OpenAI, and Palantir — not in government labs. This dynamic has been obvious since 2017, when the Pentagon’s Project Maven set out to accelerate the use of machine learning and data integration in US military intelligence, relying heavily on commercial partners in Silicon Valley.

The US Defense Innovation Board formally recognized in 2019 that in the AI arena, the key data, knowledge, and talent are all in the private sector. That reality has not changed in 2026, which means the government needs the companies just as much as the companies need government contracts. The difference is that the government can wield its regulatory and financial power as leverage, while companies have far less room to maneuver when they choose to push back.

This asymmetry was laid bare in Anthropic’s case. The Department of Defense, led by Secretary Pete Hegseth, went beyond simply canceling contracts — it labeled Anthropic a supply chain risk, a rare designation that until then had only been applied to foreign companies. In practice, that means no contractor, supplier, or partner doing business with the US military can engage in any commercial activity with Anthropic. The company has announced plans to challenge the decision in court, but the reputational and financial damage is already done.

Ethical AI depends on democratic norms

There is a dimension to this debate that often flies under the radar: the idea that Artificial Intelligence can be inherently ethical rests on democratic assumptions that many people take for granted. The concept of algorithmic transparency, for instance — being clear and honest about the rules a system uses to make decisions — assumes that people have the right to know how these technologies work, because in a democracy power ultimately belongs to the people.

In an autocratic regime, though, it does not matter how transparent the algorithms are. There is no underlying premise that civilians have a say in decisions or deserve to know what the government is doing with these tools. Open, public debate is often seen as a defining feature of liberal democracies. Consensus can be valued, but constructive disagreement and even fierce debate are signs of democratic health.

From that perspective, Anthropic’s desire to have genuine discussions with the government about ethical red lines was democracy in action. The company signaled both a willingness to engage in rational dialogue and a belief in the value of constructive disagreement. When the government responded not with arguments, but with economic punishment and public stigmatization, what was lost was not just a commercial contract — it was a slice of the democratic fabric that underpins the very idea of ethical AI.

The final irony: the strikes used Anthropic’s technology

Maybe the most ironic — and disturbing — detail in this whole story is what happened just hours after Trump’s public attack on Anthropic. When US strikes on Iran were finally launched, reports indicated that the planning for those operations had actually used Anthropic’s software. In other words, the same technology the government claimed was unacceptable because of the ethical conditions imposed by the company was already baked into the military operations underway. 😶

This contradiction says a lot about the real nature of the dispute. The problem was never the quality or reliability of Anthropic’s technology. The problem was that the company had the nerve to set conditions on how that technology would be used. The message to the entire AI ecosystem was written in capital letters: the technology is welcome, the ethical limits are not.

The global domino effect

The domino effect of this regulatory hollowing-out goes far beyond US borders. When the world’s biggest military power signals that ethics and regulation are obstacles rather than safeguards, other countries feel emboldened to follow the same path. China, Russia, and other nations with advanced military AI programs are carefully watching Washington’s moves, and the normalization of unrestricted Artificial Intelligence in defense operations fuels a technological arms race where ethical limits become a competitive disadvantage.

This is probably the most dangerous legacy of the showdown between the Pentagon and Anthropic: not just what happened, but the precedent it sets for everything that comes next. The rules-based international order, which was already showing signs of strain, loses yet another pillar when the country that historically led the creation of those rules decides they no longer apply.

What is at stake for the future of AI

Beyond the battlefield, this clash between government and tech companies exposes a structural tension that is going to shape the next several years of Artificial Intelligence development. The biggest companies in the field depend on government contracts to fund increasingly expensive research — training a state-of-the-art language model already costs hundreds of millions of dollars, and that number is only going up. This financial dependence creates a brutal power imbalance, where the government can effectively dictate the rules of the game simply by controlling the flow of money.
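
To see where those numbers come from, here is a rough back-of-the-envelope in Python. Every figure in it is an assumption chosen to sit in the publicly discussed ballpark; actual training compute, hardware throughput, utilization, and prices vary widely.

```python
# Back-of-the-envelope cost of a frontier-scale training run.
# All inputs are assumptions in the publicly reported ballpark,
# not figures for any specific model.

total_flops = 1e26           # assumed total training compute, in FLOPs
gpu_flops_per_s = 1e15       # assumed per-GPU peak throughput (~1 PFLOP/s)
utilization = 0.4            # assumed fraction of peak actually sustained
price_per_gpu_hour = 3.0     # assumed rental price in dollars per GPU-hour

gpu_hours = total_flops / (gpu_flops_per_s * utilization) / 3600
cost = gpu_hours * price_per_gpu_hour

print(f"GPU-hours needed: {gpu_hours:,.0f}")   # roughly 69 million
print(f"Compute cost:     ${cost:,.0f}")       # roughly $208 million
```

Under these assumptions, compute alone lands around two hundred million dollars, before salaries, data, and failed experiments, which is why access to billion-dollar government contracts carries so much leverage.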

Anthropic discovered this in the hardest way possible, and it is tough to imagine other companies are not recalibrating their own red lines right now. Ethics in AI, which already faced resistance for being seen as a brake on innovation, now faces an even more formidable opponent: economic pragmatism under direct political pressure. Influential Silicon Valley figures like billionaire Marc Andreessen, author of the Techno-Optimist Manifesto, and Joe Lonsdale, Palantir's cofounder, celebrated Trump's reelection as a release from regulatory shackles. Andreessen's line comparing Trump's victory to a boot being taken off the throat of the tech industry perfectly captures the mood among a powerful slice of the sector.

There is also a dimension that directly affects public trust in these technologies. When everyday people find out that the same Artificial Intelligence models they use to write emails, manage their schedules, and learn new things are being deployed in military operations without transparent safeguards, their trust in these tools inevitably erodes. That matters because healthy adoption of AI in society depends on a minimal social contract — people need to feel there is some level of control, oversight, and accountability over how these systems operate. Without that, the risk is not just technological or military, but also social: a slow erosion of trust that can undermine the real benefits Artificial Intelligence can bring in areas like healthcare, education, and science.

This moment calls for an honest reflection about the kind of future we are building. AI technology itself is neither good nor bad — it reflects the intentions and limits imposed by those who control it. The core question raised by the episode involving the Pentagon, Anthropic, and OpenAI is simple to state but extremely hard to answer: who gets to define those limits? If the answer is only whoever has the most power and money, with no room for independent regulation and genuine ethics, then we are heading toward a world where machine autonomy grows in lockstep with the shrinking of human autonomy over those machines. And that, honestly, should worry everyone — not just people who work with technology. 🧠
