Trump, AI and war: a dangerous turning point
Artificial Intelligence is already everywhere: it organizes grocery lists, creates bedtime stories for kids, helps with work, and even supports government decisions. But there is one specific use that completely changes the game: when the same technology is used to plan and carry out military operations. That is where a system built to summarize emails or write a resume enters the chain that turns information into real violence.
In recent months, according to reports cited in the original article, the Donald Trump administration turned to AI at least twice in actions with massive geopolitical impact. First, in a special forces operation in Venezuela involving President Nicolás Maduro. Then, in strikes against Iran, where AI was used to analyze intelligence, identify targets, and run bombing simulations. Even though the technical details are not fully public yet, the picture is clear: advanced AI models are being actively integrated into the planning and execution of war.
This shift marks an inflection point. What was once debated at academic roundtables — who controls AI, how it can be used in conflicts, what the limits should be — now shows up in real operations, with immediate human and political consequences. The sense of discomfort is unavoidable: a chatbot that was helping review a document yesterday could today be, at some stage of the process, helping decide where a missile will land.
Claude, Anthropic, and the use of AI in military operations
One of the most frequently mentioned names in this context is Anthropic, the company behind the Claude model. In practice, Claude is seen by the public as a more cautious competitor to ChatGPT, focused on safety and alignment. However, reports suggest the model was used in two operations linked to the Trump administration.
In the first case, Claude was reportedly used to help plan and coordinate an operation to capture Nicolás Maduro at his compound in Venezuela. The reports suggest the AI assisted with things like scenario simulation, route analysis, and risk assessment, although the technical details of how the model was integrated into the chain of command are not entirely clear. The central point is not the step-by-step process but the fact that a civilian tool, used by millions of people for everyday tasks, entered the heart of a regime-change mission.
Shortly after, new information indicated the same technology was used in strikes against Iran. In that situation, Claude reportedly served to process large volumes of intelligence data, identify potential military targets, and help run attack simulations. Instead of analysts spending days filtering data, the AI does the job in minutes, delivering digested reports for decision-makers.
It is important to reinforce one point so the picture does not get distorted: there is no indication that Claude is directly controlling weapons, nor that fully autonomous systems have been cleared to attack without oversight. What exists, based on available information, is an intensive use of AI as high-level support in strategic and tactical decisions, which is already enough to raise the risk and change the nature of war.
Dario Amodei, red lines, and the clash with Trump
In the middle of this story is Dario Amodei, CEO of Anthropic. He became a key figure when he decided to impose two clear limits on the use of the Claude model by governments and military entities. Those two red lines are:
- Banning the use of AI for mass domestic surveillance, meaning the model cannot be used to monitor an entire population in real time;
- Banning the development of fully autonomous weapons that select and strike targets without meaningful human control.
These restrictions created a public rift with the Trump administration, which pushed for more freedom to use the technology in defense and security contexts. Amodei’s refusal to soften those rules set off an open dispute with the White House, turning a technical debate into a political conflict.
Meanwhile, another industry giant, OpenAI, moved in a different direction. The company signed an agreement with the Pentagon to supply AI technology for projects tied to the Department of Defense. In its statement, OpenAI said the contract would include safeguards even stronger than the conditions set by Anthropic, promising limits on the types of military use allowed. Still, the mere fact that a major AI company has a formal deal with the American military apparatus shows how the expectation of a total separation between AI and war has essentially collapsed.
The result is a scenario where civilian tools, designed for productivity and creativity, take on a growing role in national security strategies. As the original article highlights, a system that was born to help write emails is now part of the chain that converts information into lethal force. And that changes everything.
From theoretical debate to the reality of bombs and special operations
For many years, questioning whether Artificial Intelligence would be used in war felt like an exercise in futurism. Researchers talked about risks at conferences and wrote papers, but the discussion sounded distant. Today, that distance is gone. The special forces operation against Maduro in January and the missiles launched at targets in Iran, both reportedly carried out with AI support, mark a historic turning point.
Until recently, many people still treated military AI as an almost abstract topic, surrounded by assumptions and hypothetical scenarios. Now, the examples are concrete, with dates, locations, actors, and real consequences. And even though the finer behind-the-scenes details are inevitably kept under military classification, it is safe to say that AI has left the lab and entered the battlefield, even if only as an auxiliary brain in the decision-making process.
This shift also chips away at an old idea about the balance of power: the notion that some weapons exist more as a deterrent than as an everyday tool. At the height of the Cold War, the doctrine of mutually assured destruction served as a psychological and strategic brake on the use of nuclear weapons. They were there, but pressing the button was seen as almost unthinkable.
With AI entering military simulations, including scenarios involving nuclear weapons, the signs are far less encouraging. War-game studies with AI systems have shown that these models tend to recommend nuclear strikes at an alarming rate, adopting a far more aggressive posture than humans in similar tests. It is as if the cold cost-benefit calculation, stripped of the historical and emotional weight of the atomic bomb, makes that kind of weapon seem more acceptable to the algorithm.
AI as the standard in military decisions: a before and after
Once a military power demonstrates it can use AI models effectively in highly complex operations, it is very hard to go back. Other countries watch, learn, copy, adapt. In no time, what was the exception becomes standard practice.
In the coming years, it is quite likely that more armed forces will begin incorporating AI into tasks such as:
- Large-scale intelligence analysis, cross-referencing satellite data, communications, sensors, and open sources;
- Target identification and prioritization, based on strategic impact and estimated risk of collateral damage;
- War scenario simulations, virtually testing different types of attacks and responses;
- Logistics optimization, ensuring troops, equipment, and supplies arrive at the right place at the right time;
- Real-time monitoring of theaters of operation, with automatic alerts for events deemed critical.
When historians look back at this period, there is a good chance they will describe the use of AI in war operations as a milestone comparable, in symbolic impact, to the nuclear bombings of Japan in World War II. Not for destroying entire cities at once, but for signaling a before and after in how humanity organizes and carries out armed conflicts.
Right now, it is still impossible to predict all the long-term effects of this transition. But it is already clear that the more AI is introduced as a central element of military planning, the more fragile the line between deterrence and actual action becomes. A miscalculation, a bad piece of data, or a bias baked into a model can escalate tensions far more quickly than in previous eras.
From the ideal ban to the race for algorithmic weapons
In an ideal world, the international community would have settled the matter very early on: no AI in weapons or lethal decision-making. A prohibition of that kind was at least considered in debates about global treaties and the rules of war. But in practice, that brake was gradually abandoned.
One symbol of that more restrained era was the stance of Demis Hassabis, co-founder of DeepMind. When he sold his company to Google, Hassabis reportedly set the condition that the technology would not be used for military purposes. That promise held for years but was eroded when Google’s parent company, Alphabet, publicly dropped its commitment not to use AI in weapons systems.
With giant companies reversing course and governments willing to use AI to maintain a strategic edge, the idea of a total ban grew increasingly remote. The Trump administration’s actions, using models like Claude in high-impact operations, mark a definitive step in that direction: if the world’s largest military power integrates commercial AI into regime changes and strikes on another country, the message to the rest of the world is crystal clear.
The consequence is a kind of algorithmic arms race, where every country tries not to fall behind in using AI for war, intelligence, and defense. In that environment, any discussion about ethics risks being treated as an obstacle rather than a baseline standard of civilization.
International pressure and the urgency of concrete limits
Given this scenario, the inevitable question is: can anything be done now? Even if the ideal moment has passed, there is still room to reduce harm and try to establish at least some guardrails. The central point of the original article is that allies and international institutions need to pressure the U.S. government into accepting clear limits on the use of AI in military contexts.
Those limits cannot just be vague codes of conduct. They need to become:
- Formal international commitments, with rules about what is and is not acceptable in integrating AI with weapons systems;
- Transparent procurement standards, making it clear what kind of technology armed forces can or cannot acquire from private companies;
- Independent oversight mechanisms, with real auditing of how AI models are being used in sensitive operations.
The goal is not simply to put the United States on trial but to prevent the use of commercial AI models in regime-change operations from becoming the new normal. If this becomes established without pushback, we fully enter a world where tools accessible to the public, designed for productivity and creativity, can be slotted without ceremony into military action chains.
When the planet’s largest military force normalizes this kind of practice, the message to other governments and companies is straightforward: if it works and gives an advantage, it is fair game. And that is when we truly cross through the looking glass. We enter a universe where war also becomes a problem of systems design, model training, and data policy, with a direct impact on the lives of millions of people who never consented to having their conflicts decided by algorithms.
In the middle of all this, there is the uncomfortable feeling that humanity is learning to use an incredibly powerful technology in real time, in the midst of concrete crises, without having properly agreed on the rules of the game. And when the tests involve rockets, missiles, and special operations, any design error stops being a simple bug and becomes a tragedy on a global scale.
