The OpenAI-Pentagon deal that shook the tech world

OpenAI stepped into one of the biggest controversies in its recent history by officially signing a contract with the Pentagon to provide artificial intelligence technology for classified military operations. What could have been announced as a strategic milestone for the company quickly turned into a reputation crisis that caught everyone off guard. ChatGPT users started uninstalling the app in droves, developers and researchers across the tech community publicly spoke out against the decision, and pressure on social media grew exponentially within hours.

According to data published by TechCrunch, uninstalls of the ChatGPT mobile app surged 295% on Saturday compared with typical removal rates, meaning the app was being removed at nearly four times its normal pace. That number is more than a curious statistic: it is a direct barometer of how dissatisfied users were with the announcement. Meanwhile, Anthropic's Claude app quickly climbed to the top of Apple's App Store rankings, a position it still held the following Tuesday. The mass migration showed that users are willing to switch platforms when they feel a company's values no longer align with their own.

The backlash was so intense that Sam Altman himself, OpenAI's CEO, publicly acknowledged that the company had been opportunistic and sloppy in how it handled communication around the deal. That kind of admission from the leader of one of the most influential AI companies on the planet is not something that happens every day. It showed the company had completely underestimated the impact the news would have, both among its users and among partners and investors who had always seen OpenAI as an organization committed to the responsible use of technology.

Altman posted on X on Monday that additional changes were being made to the agreement, including a guarantee that OpenAI’s system would not be intentionally used for domestic surveillance of American citizens and nationals. On top of that, as part of the new amendments, intelligence agencies like the National Security Agency (NSA) would also be unable to use OpenAI’s system without a supplemental modification to the original contract. The CEO acknowledged the company made a mistake by rushing to publish the announcement on Friday, stating that the issues involved are extremely complex and demand clear communication.

Faced with a flood of criticism, OpenAI decided to walk things back. The company revised the terms of the Pentagon contract and began trying to balance its commercial ambitions with public pressure that, so far, shows no signs of letting up. In a statement published on Saturday, OpenAI went so far as to claim its deal with the Pentagon had more safeguards than any previous agreement for classified AI deployments, including Anthropic’s. That backpedaling move, however, raised even more questions. After all, where does the company’s real commitment to safety and ethics in AI development end, and where does plain old crisis management begin?

The Anthropic case and the complexity of military AI use

The situation got even messier when information came to light involving Anthropic, another giant in the artificial intelligence space. The company, known for its firm stance on safety and for refusing to compromise on its principles against developing autonomous weapons, was placed on a blacklist by the Trump administration. The official justification was precisely Anthropic’s unyielding position on the ethical boundaries of military use of its language models. The company stood by what it called a red-line corporate principle, stating that its technology should not be used to build fully autonomous weapons. But what happened next surprised even the most seasoned analysts in the industry.

Just hours after the ban, Anthropic's Claude model was spotted being used in American operations directly tied to the US-Israel war and the conflict with Iran. According to the Wall Street Journal, American strikes in the Middle East used Anthropic's technology within hours of the ban imposed by the Trump administration. In other words, even with the company officially excluded from the table, the technology it created continued to be applied in military contexts without its consent or direct oversight. The Pentagon declined to comment on its relationship with Anthropic when pressed by the media. The episode exposes a massive vulnerability in the current AI ecosystem: the creators of advanced models can completely lose control over how their tools are used once they leave the controlled development environment.

The Anthropic situation also highlights a troubling paradox. Companies that try to maintain stricter ethical standards end up being punished and pushed away from strategic decisions, while the technology they developed keeps being deployed without any governance or oversight from the original creators. Professor Mariarosaria Taddeo of the University of Oxford told the BBC that with Anthropic out of the Pentagon, the most safety-conscious actor left the room. According to her, that represents a real problem. The observation makes sense when you remember that Anthropic was precisely the voice within the military ecosystem pushing for clearer boundaries on how these tools are applied.

This creates an environment where being responsible effectively becomes a competitive liability rather than a positive differentiator. The consequences of this dynamic for the future of artificial intelligence in military contexts are massive and are far from being fully understood.

How artificial intelligence is already used in the military

The integration of artificial intelligence into military operations did not pop up overnight. AI is already present across multiple fronts within the armed forces of several countries, from logistics optimization to the rapid processing of massive volumes of information. The United States, Ukraine, and NATO all use technology from Palantir, an American company that specializes in data analysis tools for government clients, covering intelligence collection, surveillance, counterterrorism, and military operations in general.

The United Kingdom, for example, recently signed a £240 million contract with Palantir, and the BBC has reported on conversations with professionals involved in integrating the company's defense platform, known as Maven, into NATO's infrastructure. The platform brings together a vast range of military information, from satellite data to intelligence reports, which is then analyzed by commercial AI systems like Claude to help make decisions that are faster, more efficient, and, when appropriate, more lethal, as described by Louis Mosley, head of Palantir's UK operations.

However, large language models have a well-known problem: they can make serious mistakes or even fabricate information, a phenomenon called hallucination. In a military context, a hallucinated output can have catastrophic consequences. Lieutenant Colonel Amanda Gustave, data director for NATO's Maven Task Force, emphasized that there is always human oversight in the process and that a human remains in the decision loop at all times. According to her, an AI would never make a decision on its own on behalf of the armed forces.

It is worth noting that Palantir, unlike Anthropic, does not support a total ban on autonomous weapons. The company’s position is that there should be a human in the loop, but without establishing an absolute red line like the one Anthropic defends. This difference in approach between the two companies illustrates the spectrum of positions that exist within the tech industry regarding the military use of AI, and how those positions have direct consequences for who remains — or does not remain — a government partner.

Who actually decides how artificial intelligence is used in conflicts?

This sequence of events involving OpenAI, the Pentagon, and Anthropic raises a question that goes far beyond corporate contracts or business decisions. The central question is: who actually has the power to decide how artificial intelligence will be applied in conflict zones and military operations? Today, the answer seems to be that this power is concentrated almost exclusively in the hands of governments and the military, with tech companies occupying an increasingly peripheral role in those decisions. OpenAI can revise its terms of use, and Anthropic can refuse to collaborate, but when a government with the power and influence of the United States decides it needs an AI tool for military purposes, the barriers these companies can put up are, at best, fragile.

The military use of artificial intelligence is not a new topic, but it has taken on a completely different urgency in recent months. The US-Israel war and the geopolitical developments involving Iran are accelerating the adoption of AI tools in operational contexts where decisions need to be made in fractions of a second. Language models are being used for intelligence data analysis, logistics planning, real-time translation of intercepted communications, and even target identification in combat scenarios. Each of these applications carries risks that cannot be ignored, from classification errors that could result in civilian casualties to the automation of decision-making processes that have historically depended on human judgment.

What makes everything even more delicate is the fact that, as of now, there is no international regulatory framework robust enough to govern the use of artificial intelligence in military operations. International treaties on conventional and nuclear weapons took decades to negotiate and implement, and the speed at which AI is being integrated into defense systems far outpaces the ability of international bodies to create clear rules for this new landscape. OpenAI and other companies in the sector are navigating terrain where the rules simply do not exist in any adequate form yet, and where every decision sets precedents that will shape the future of this technology for many years.

The tension between profit and responsibility in the AI industry

Beyond the geopolitical and military issues, the OpenAI-Pentagon case also reveals a fundamental tension within the artificial intelligence industry itself. On one side, there are enormous commercial pressures. Government contracts represent revenues in the billions of dollars and guarantee access to infrastructure, data, and partnerships that no company in the sector can afford to ignore. On the other side, the user base and developer community that sustains these companies expects transparency, accountability, and a genuine commitment to the ethical use of technology. Balancing these two worlds is proving to be a much harder task than any big tech executive would like to admit.

The fact that Altman described the company as "genuinely trying to de-escalate things and avoid a much worse outcome" suggests that OpenAI was under pressure from multiple directions. The wording indicates there were even more problematic alternative scenarios on the table, and that the company believed closing the deal with certain safeguards would be preferable to having no influence at all over how AI would be used by the Pentagon. That reasoning has a certain logic, but it clearly did not convince a significant portion of the public and the developer community that closely follows every move these companies make.

The 295% spike in ChatGPT uninstalls was not just a fleeting emotional reaction. It was a direct message that users are paying attention and are willing to vote with their wallets when they feel a company has crossed lines they consider unacceptable. For OpenAI, which relies on subscriptions and user trust to sustain its business model, that kind of movement represents a concrete financial risk that needs to be taken seriously.

What this episode means for the future of AI

This episode also serves as an important wake-up call for everyone following the advancement of artificial intelligence. The technology we use every day to generate text, create images, write code, and automate tasks is, at its core, the same technology being adapted for military purposes in real conflicts. The distance between ChatGPT on your phone and a language model operating inside a military base is much smaller than most people realize. That proximity makes the discussion around governance, transparency, and clear boundaries for AI deployment in scenarios where human lives are directly at stake even more urgent.

The concept of keeping a human in the loop, advocated by both Palantir and NATO in their operations with the Maven platform, seems like the bare minimum required to ensure some level of accountability in decisions involving the use of lethal force. But even that safeguard raises questions. When a human operator is receiving recommendations from an AI system that processes information at speeds impossible for a person to keep up with, how effective is that oversight really? The pressure for rapid decisions in combat contexts can easily turn human review into a rubber stamp for decisions that, in practice, have already been made by the machine.

The path that OpenAI, Anthropic, and other companies in the space take over the coming months will define much of what we can expect from this technology in the decades ahead. If the market rewards companies that bend their principles in exchange for military contracts, the trend will be for the military use of artificial intelligence to expand without guardrails. If, on the other hand, public pressure and the tech community manage to keep the debate alive and demand real transparency, there is a chance that some kind of balance can be reached before the consequences become irreversible.

What is clear is that this conversation is just getting started and the decisions being made right now will echo for a long time. How governments, companies, and civil society negotiate the boundaries of artificial intelligence in the military domain is, without exaggeration, one of the most important questions of our generation. And, from what we saw this past week, nobody has the definitive answers yet. 🤖
