How Artificial Intelligence Is Changing Military Operations

Artificial Intelligence is no longer just a science fiction concept; it has become a central element of modern combat operations. Over the past few months, the United States armed forces took a significant step by incorporating AI systems into the planning of airstrikes targeting Iran, and the emerging details are forcing an urgent debate about the limits of this technology when human lives are at stake. Two sources with knowledge of the matter, who requested anonymity because the information is sensitive, confirmed that the military is using AI systems from data analytics company Palantir to identify potential targets in the ongoing strikes. What was once discussed only in research labs and tech conferences is now operational reality in conflict zones, with decisions being processed at speeds no human analyst could match alone.

This transition marks a turning point in how wars are waged and, at the same time, opens a series of ethical and strategic questions that governments and civil society need to address seriously. The adoption comes at a time when Defense Secretary Pete Hegseth is pushing to put Artificial Intelligence at the heart of American combat operations, as outlined in the Department of Defense's official AI strategy. At the same time, Hegseth has clashed with Anthropic's leadership over limitations imposed on the use of its AI models, adding a layer of political tension to the entire technological equation.

The Role of Palantir and the Maven Platform

At the epicenter of this transformation is Palantir, a data analytics company with deep roots in the defense and intelligence sector. The company integrated Claude, the language model developed by Anthropic, into its military platform, Maven. This combination allows massive volumes of intelligence data to be processed in a matter of seconds, giving military analysts a screening and target-identification capability that no human team could replicate in the same timeframe. The system cross-references satellite data, intercepted communications, geospatial information, and field reports to generate recommendations that, in theory, make operations more precise and reduce the risk of collateral damage.

According to a source with knowledge of Anthropic's work with the Department of Defense, Claude is used to help military analysts sift through large volumes of intelligence and does not directly provide targeting recommendations. This distinction matters because it defines, at least officially, the role of AI as a support tool rather than an autonomous decision-maker. The same Maven platform was also used in the operation to capture Venezuelan President Nicolás Maduro, which illustrates how broadly the system is already being applied.

Palantir is no newcomer when it comes to working with governments and intelligence agencies. The company was co-founded by Peter Thiel and holds billions of dollars in contracts with the U.S. Department of Defense. The Maven platform was designed to function as a sort of central nervous system for intelligence operations, pulling together fragmented data from multiple sources and presenting it in an organized, actionable format for decision-makers. With the integration of Claude, the platform gained a natural language processing layer that allows analysts to ask complex questions and receive contextualized answers in real time. In practice, an operator can ask the system to identify movement patterns in a specific region and receive in seconds an analysis that previously would have required entire teams working for hours.
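To make that "central nervous system" idea concrete, here is a minimal, purely illustrative sketch of the data-fusion-plus-natural-language-query pattern described above. Every name in it (IntelRecord, fuse_sources, build_prompt) is hypothetical; Palantir's actual Maven interfaces are not public, and this does not represent them.

```python
# Purely illustrative sketch of a data-fusion + natural-language query
# pipeline. All names here are hypothetical; Palantir's real Maven
# interfaces are not public and this does not represent them.
from dataclasses import dataclass

@dataclass
class IntelRecord:
    source: str      # e.g. "satellite", "sigint", "field_report"
    region: str
    timestamp: str
    summary: str

def fuse_sources(records: list[IntelRecord], region: str) -> list[IntelRecord]:
    """Gather every record touching one region, regardless of origin."""
    return [r for r in records if r.region == region]

def build_prompt(records: list[IntelRecord], question: str) -> str:
    """Flatten the fused records into context for a language-model query."""
    context = "\n".join(f"[{r.source} @ {r.timestamp}] {r.summary}" for r in records)
    return f"Context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    records = [
        IntelRecord("satellite", "sector-7", "04:00Z", "vehicle column moving north"),
        IntelRecord("field_report", "sector-7", "05:30Z", "unusual checkpoint activity"),
        IntelRecord("sigint", "sector-9", "05:45Z", "spike in radio traffic"),
    ]
    fused = fuse_sources(records, "sector-7")
    # In a real system this prompt would go to an analyst-facing model;
    # here we only print it, since the point is the fusion-and-query pattern.
    print(build_prompt(fused, "What movement patterns appear in sector-7?"))
```

The design point the article's sources keep stressing maps onto this sketch: the model only summarizes fused context for a human analyst to interrogate; nothing in a pipeline like this acts on its own output.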

Anthropic declined to comment on the matter. Palantir also did not respond to requests for comment.

Speed Versus Precision: The Core Dilemma

The efficiency provided by Artificial Intelligence in this context is undeniable. Before these tools were adopted, the planning cycle for an airstrike could take hours or even days, depending on the complexity of the operation and the volume of available data. With AI integrated into the workflow, that time has been drastically reduced. In a video posted on the X platform, Admiral Brad Cooper, commander of U.S. Central Command, acknowledged that AI has become a critical tool in helping the United States select targets in Iran.


"Our warfighters are leveraging a variety of advanced AI tools. These systems help us sift through vast amounts of data in seconds so our leaders can cut through the noise and make smarter decisions faster than the enemy can react," Cooper said. He added that humans will always make the final decisions about what to strike, what not to strike, and when to strike, but that advanced AI tools can transform processes that used to take hours, sometimes even days, into seconds.

However, speed without precision is a massive risk, and that is exactly where the debate over oversight gains traction. Representative Pat Harrigan, a Republican from North Carolina and member of the House Armed Services Committee, pointed out that Operation Epic Fury resulted in more than 2,000 targets struck with remarkable precision, which he considers a testament to how these capabilities can be used responsibly and effectively. Still, Harrigan emphasized that no AI system replaces the judgment, training, and experience of the American warfighter, and that the human in the decision loop is not a formality but a requirement.

The Conflict Between Anthropic and the Pentagon

What makes this situation particularly complex is the clash between Anthropic and the Department of Defense. The company, creator of the Claude model, publicly positions itself as an advocate for the safe and responsible development of Artificial Intelligence. It attempted to prevent the military from using its AI for domestic surveillance and lethal autonomous weapons, which triggered a harsh response from the government.

The week before the original NBC News report was published, the Department of Defense classified Anthropic as a threat to national security, a designation that could result in the removal of its systems from military use in the coming months. In response, Anthropic filed a lawsuit challenging that classification. The case illustrates the growing tension between tech companies trying to establish ethical boundaries for their products and governments that see those boundaries as obstacles to operational efficiency.

Anthropic CEO Dario Amodei himself admitted in an interview with NBC that he cannot guarantee with 100% certainty that the systems built by his company are perfectly reliable. That statement, coming from the leader of one of the most influential companies in the AI sector, carries especially heavy weight when applied to a military context. A major OpenAI study published in September found that all leading AI chatbots, built on large language models, suffer from a phenomenon known as hallucination — they periodically fabricate responses that sound plausible but are factually incorrect. When that margin of error is placed in the context of combat operations, the gravity of the situation becomes clear.

The Push for Oversight and Transparency in Congress

In the U.S. Congress, lawmakers from both parties are mobilizing to demand more oversight of the role of Artificial Intelligence in military operations. The bipartisan concern is a clear signal that the issue transcends traditional political divisions and touches on fundamental questions about technology governance, human rights, and national security.

Representative Jill Tokuda, a Democrat from Hawaii and member of the House Armed Services Committee, was emphatic in stating that a full and impartial review is needed to determine whether AI has already harmed or put lives at risk in the war with Iran. According to her, human judgment must remain at the center of life-or-death decisions.

Representative Sara Jacobs, a Democrat from California and also a member of the same committee, raised a relevant technical point: AI tools are not 100% reliable — they can fail in subtle ways, and yet operators keep placing too much trust in them. Jacobs called for rigorous safeguards and the guarantee that a human being is involved in every decision involving the use of lethal force, because the cost of getting it wrong can be devastating for civilians and for the service members carrying out these missions.

In the Senate, Senator Elissa Slotkin, a Democrat from Michigan and member of the Senate Armed Services Committee, said the Department of Defense has not done enough to clarify how humans are verifying AI-assisted or AI-generated military intelligence. Senator Mark Warner, a Democrat from Virginia and the top Democrat on the Senate Intelligence Committee, said he is concerned about the military use of AI to assist in target identification and that there are unanswered questions about how the new technology is being used.

Senator Kirsten Gillibrand, a Democrat from New York, was even more direct in calling for clearer rules on how the military can use AI, saying there is little reason to trust that the Department of Defense will be responsible in its use of AI without explicit safeguards.

The Algorithmic Black Box in Life-or-Death Contexts

One of the most sensitive points in this debate is the issue of algorithmic transparency. Language models like Claude are complex systems that operate as what experts call a black box — meaning it is difficult to explain precisely why the model reached a particular conclusion or recommendation. In commercial applications, this opacity is already a problem. In military contexts, it becomes potentially catastrophic.

If an AI system recommends that a specific location be treated as a target and that recommendation turns out to be wrong, there needs to be a clear path to understanding what went wrong, why it went wrong, and how to prevent the same type of failure from happening again. Without that transparency, oversight becomes nothing more than a bureaucratic exercise with no real effectiveness. Leading AI researchers around the world admit they do not fully understand how the most advanced AI systems work, which adds another layer of uncertainty to the entire picture.

Pentagon chief spokesperson Sean Parnell posted on the X platform that the military does not want to use AI to develop autonomous weapons that operate without human involvement. However, the Department of Defense did not respond to questions about how it balances using AI to reduce human workloads while verifying whether the analyses and target suggestions are accurate. That lack of response fuels the concerns of those following the issue closely.

The Perspective of Security and Ethics Experts

Mark Beall, head of government affairs at the AI Policy Network and former director of AI strategy and policy at the Pentagon from 2018 to 2020, offered a balanced perspective. According to him, while AI can speed up the process of deciding where to strike, it is clear that humans still need to thoroughly verify targets. Beall acknowledged that AI systems are being deployed effectively to accelerate existing workflows and allow commanders, analysts, and planners to make better and faster decisions. But when it comes to actually pulling the trigger on weapons systems, he believes the technology is not ready yet.

Beall also warned about a concerning future scenario: as these systems become extremely efficient and adversaries begin using them, there will be growing pressure to compress the human review of AI analyses in order to keep operating at militarily useful speeds. For him, it is critical to solve the reliability problem before reaching that point, because making lethal autonomous weapons safe and effective is in the interest of the entire world.

Heidy Khlaaf, chief scientist at the AI Now Institute, a nonprofit organization that advocates for the ethical use of technology, expressed concern that relying on AI to process information rapidly in life-or-death decisions could serve as a way for the military to avoid accountability for mistakes. According to Khlaaf, it is very dangerous that speed is being sold as something strategic when in reality it is cover for indiscriminate strikes, given how imprecise these models are.

What Is at Stake for the Future

The Trump administration has publicly embraced the use of this technology for both the military and across the entire government, which indicates that the trend of expanding AI in defense operations is likely to continue and probably intensify in the coming months and years. None of the lawmakers consulted by NBC News suggested that AI should be completely removed from military use, but there is a growing consensus that more oversight is needed.

Beyond the legislative concern, human rights organizations and Artificial Intelligence ethics experts are closely monitoring these developments. The possibility that algorithmic errors could result in civilian deaths is something that cannot be treated as an acceptable side effect of military modernization. The core dilemma remains: AI can indeed make military operations faster and, in many cases, more precise, but that efficiency cannot come at the cost of irresponsibly delegating decisions that carry irreversible consequences.

The human-in-the-loop model (the guarantee that a human being is always part of the decision chain) is endorsed by virtually every actor involved in this discussion. The real question is not whether this principle exists on paper, but whether it is respected in practice when operational pressure demands answers in seconds. When the machine delivers an analysis almost instantly and there is a narrow window of opportunity, the temptation to blindly trust the technology is real and dangerous.

The path that seems most sensible involves investing in robust auditing mechanisms, ensuring that human operators have the time and autonomy to question the machine's recommendations, and, above all, creating regulations that keep pace with the rapid evolution of technology on the battlefield. What is at stake here is not just a matter of military strategy but of collective responsibility over how humanity chooses to use the most powerful tools it has ever created. And with the constant advancement of language models and data processing capabilities, this debate is only going to become more urgent.
