Claude remains active in military operations even after ban order
The artificial intelligence Claude, developed by Anthropic, is being actively used by the Pentagon in United States military operations against Iran. The information comes from two sources with direct knowledge of the American military’s use of AI, who spoke to CBS News. According to their accounts, the tool was employed during attacks carried out last weekend and remains operational in ongoing missions. The case raises a host of questions about the role large language models now play within the most advanced military infrastructure on the planet.
What makes this situation even more surprising is the timing of it all. The Pentagon’s use of Claude is happening even after the Trump administration announced a federal ban on Anthropic technology, giving government agencies a six-month deadline to completely abandon the platform. In other words, the same technology being officially banned from federal government structures is still running in real military operations against Iran. It is a contradiction that starkly exposes the complexity of the relationship between the defense sector and private-sector artificial intelligence companies.
The news about Claude’s use in operations against Iran was initially reported by the Wall Street Journal and later confirmed by CBS News with its own sources. So far, the Pentagon has not publicly detailed exactly how the AI tool is being used in the context of the conflict. This lack of transparency contributes to the climate of uncertainty surrounding the entire situation, especially when you consider that the use is happening alongside a public dispute between the American government and the company that created the technology.
This revelation highlights something many analysts already suspected: the growing dependence of the American military apparatus on commercial AI tools. We are not talking about an experiment or a pilot test. We are talking about a real-world conflict application, where operational decisions are being supported, in some form, by a language model originally created for civilian use. And that completely changes how we understand the reach and influence of these technologies in today’s world.
The conflict between Anthropic and the Department of Defense
The root of this tension started when Anthropic tried to set clear limits on the military use of Claude. The company included in its acceptable use policies a series of restrictions that got straight to the point: a ban on mass surveillance of American citizens, a veto on use in fully autonomous weapons systems, and other safeguards designed to ensure the technology would not be used in ways the company considered ethically problematic. For Anthropic, which has positioned itself since its founding as a company focused on AI safety, these restrictions were a direct reflection of its principles. But for the Pentagon, it was a very different story.
The Department of Defense’s response was blunt. The Pentagon demanded unrestricted access to the model for all purposes considered legal under American law. Its central argument was straightforward: laws already exist that prohibit mass surveillance of American citizens, and the Pentagon’s own internal policies already restrict the use of fully autonomous weapons. In the military’s view, Anthropic’s concerns were therefore redundant and immaterial. From the Pentagon’s standpoint, the idea that a private company could dictate how a tool would be used in national security operations was simply unacceptable.
The standoff was set, and from that moment things escalated quickly. Secretary of Defense Pete Hegseth stepped in and classified Anthropic as a “supply chain risk,” a heavy designation normally reserved for suppliers considered unreliable or potentially hostile to American interests. That classification carries enormous institutional weight and signals to the entire defense ecosystem that doing business with Anthropic could be problematic.
Emil Michael, the Pentagon’s chief technology officer, defended the Department of Defense’s position in an interview with CBS News. According to him, at some level, you have to trust that the military will do the right thing. Michael also detailed that the Department of Defense uses Claude to synthesize documents, make logistics more efficient, and optimize supply chains, among other tasks. These applications might seem bureaucratic at first glance, but in a scenario of active military operations, the ability to process information quickly and organize logistics efficiently can have a direct impact on how things unfold on the ground.
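To make that concrete: document synthesis with a commercial model of this kind is, at its core, an ordinary API call. The sketch below uses Anthropic’s public Python SDK with invented sample reports and a placeholder model ID; it is purely illustrative and says nothing about how the Pentagon actually integrates the tool.

```python
# Illustrative sketch only: condensing several reports into one briefing via
# Anthropic's public Python SDK. The model ID and sample texts are placeholders,
# not anything from the reported military systems.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

reports = [
    "Report A: shipment of spare parts delayed 48 hours at the regional depot.",
    "Report B: fuel consumption on the eastern supply route 12% above forecast.",
]

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; any current Claude model ID works
    max_tokens=512,
    messages=[{
        "role": "user",
        "content": "Synthesize these reports into a short logistics briefing, "
                   "flagging bottlenecks:\n\n" + "\n\n".join(reports),
    }],
)

print(response.content[0].text)  # the synthesized briefing
```

The point of the sketch is how thin the interface is: the same few lines could serve a newsroom or a logistics cell, which helps explain why usage policies, rather than technical barriers, were the lever Anthropic reached for.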
Anthropic’s stance on red lines
On the other side of the dispute, Anthropic CEO Dario Amodei explained to CBS News that the company sought to draw red lines on government use of its technology. According to Amodei, Anthropic believes that crossing those lines would be contrary to American values, and the company wanted to take a stand in defense of those very values. He went further and stated that disagreeing with the government is the most American thing there is, and that Anthropic considers itself patriotic in everything it did throughout this process.
That statement is interesting because it repositions the narrative. While the government frames Anthropic as an obstacle or even a threat, the company positions itself as a defender of fundamental principles of American democracy. It is a clash of narratives that goes far beyond a simple business negotiation. We are talking about a dispute over who has the legitimacy to define the limits of artificial intelligence use in defense and national security contexts.
With that framing in place, the path to the executive order was paved. President Trump signed the document officially banning Anthropic technology from federal agencies, establishing the six-month deadline for a complete transition. However, the reality on the ground seems to tell a different story from what the official paperwork suggests. Claude continues operating in military actions against Iran, which indicates that unplugging an artificial intelligence system already integrated into ongoing operations is not something you do overnight, regardless of what an executive order says.
The Israel question and the use of AI in conflicts
One point that remains open is whether the Israeli military is also using Claude in this conflict. CBS News reached out to a spokesperson for the Israel Defense Forces but did not receive a response by the time of publication. What is known is that Israel already actively uses artificial intelligence in military operations. Israeli forces have their own target identification system, known as Lavender, which was deployed during the war in Gaza.
The existence of systems like Lavender and the Pentagon’s use of Claude show that applying AI in war scenarios is not an isolated phenomenon. Different countries and military forces are incorporating these technologies in various ways, each with its own protocols and limits, or, in some cases, with none at all. This scenario reinforces the urgency of international discussions about the regulation and governance of artificial intelligence in military contexts.
What this reveals about the future of AI in military operations
This whole situation tells us a lot about how artificial intelligence has become an essential component of modern military machinery. The fact that the Pentagon cannot simply turn Claude off and move on shows that these tools are integrated far more deeply than simple auxiliary software would be. According to the national security outlet Defense One, citing multiple sources familiar with the dispute between the Department of Defense and Anthropic, it could take three months or more for the Pentagon to replace Claude’s capabilities with another AI platform.
When an AI model is embedded into operational workflows in real conflict scenarios, it becomes part of the critical infrastructure. Removing it requires planning, time and, most importantly, a viable alternative that delivers equivalent capabilities. And that is a detail many people underestimate when debating AI regulation in the context of defense and national security. Signing a decree is not enough; you need a Plan B ready to go.
Another point worth highlighting is the precedent this case sets for the entire artificial intelligence industry. Anthropic tried to uphold its safety and responsible-use principles, and the consequence was being labeled a risk and having its technology officially banned from the federal government. This sends a clear message to other companies in the space: if you try to impose ethical limits on how the government uses your technology, you could end up facing retaliation. It is a concerning dynamic that could discourage other companies from taking similar stances in the future, with direct implications for the governance and safety of AI systems used in high-risk contexts.
The case of Claude’s use in operations against Iran also raises practical questions about transparency and accountability. If an artificial intelligence model is supporting decisions in armed conflict scenarios, who is responsible when something goes wrong? The company that created the model? The Pentagon that is using it? The operators who interact with the tool on a daily basis? These questions do not have simple answers, and the fact that the system keeps running even during an official ban process makes everything murkier still. What is clear is that the relationship between governments and AI companies is entering completely new territory, where the rules are still being written in real time, in some cases while the missiles are already flying.
A dependency that is hard to undo
At the end of the day, what this story shows is that artificial intelligence has already crossed a point of no return in the military context. Claude is not just a technological curiosity being tested in a Pentagon lab. It is a functional piece within operations happening right now, in one of the most tense geopolitical scenarios of the moment. Anthropic finds itself in a paradoxical position: its technology is considered good enough to support war operations, but the company itself is treated as a problem for wanting to set limits on the use of that very technology.
For anyone following the artificial intelligence market, this episode serves as a powerful reminder that AI development does not happen in a vacuum. A company’s design decisions, usage policies, and ethical principles can collide head-on with the interests of sovereign states — and when that happens, the balance of power does not always tip in favor of the technology side. The Anthropic versus Pentagon case may very well become a landmark in the history of AI regulation, defining how governments and companies will negotiate the boundaries of this technology’s use in the years ahead.
For now, Claude remains active. Iran continues to be the target of military operations supported by artificial intelligence, and the ban order remains officially in force, even though in practice it has yet to take full effect. It is the kind of situation that sounds like a science fiction script, but it is actually happening, with real consequences for everyone involved. And if there is one thing this episode makes crystal clear, it is that the debate over AI in military contexts can no longer be treated as something distant or hypothetical. It has already arrived, and it is more complex than any language model could have predicted.
