How artificial intelligence turned disinformation into a profitable business
The speed at which generative artificial intelligence tools have evolved in recent months has created a scenario few saw coming so quickly. Today, anyone with a computer and a free account on an AI platform can produce fake videos with enough visual quality to fool millions of people. In the conflict pitting the United States and Israel against Iran, this capability has become a full-blown machine for manufacturing fictional narratives, which spread across social media faster than any fact-checking effort can catch up.
Tools like video generators based on diffusion models can create scenes of supposed bombings, explosions in cities, and even completely fabricated satellite images, all in a matter of minutes and without advanced technical knowledge. As Timothy Graham, a digital media expert at Queensland University of Technology, puts it, what used to require a professional audiovisual production team can now be done in minutes with AI tools. The barrier to creating convincing synthetic material about conflicts has essentially disappeared.
What the BBC Verify investigation uncovered goes beyond the technical problem of creating this content. The central finding is that an entire economic chain sustains the production of online disinformation. Creators have figured out that videos about sensitive geopolitical topics, especially military conflicts, generate extraordinary levels of engagement: hundreds of millions of views accumulated on content depicting events that never happened.
Concrete cases tracked by BBC Verify
The BBC Verify analysis identified several specific examples of how this phenomenon is playing out. One of the most striking cases involves an AI-generated video that appeared to show missiles striking the city of Tel Aviv, Israel, with sounds of explosions in the background. That single video was replicated in more than 300 different posts, which in turn were shared tens of thousands of times across multiple social media platforms.
An important and troubling detail: some users on X, formerly Twitter, turned to the platform’s own Grok chatbot to try to verify whether the video was real. In several cases documented by BBC Verify, Grok incorrectly stated that the AI-generated video was authentic. In other words, the artificial intelligence tool built into the platform ended up amplifying disinformation rather than combating it.
Another fabricated video that racked up tens of millions of views showed the Burj Khalifa skyscraper in Dubai supposedly engulfed in flames while a crowd ran toward the building. This content spread at a time when residents and tourists were genuinely concerned about drone and missile attacks in the region, which amplified the panic and confusion generated by the disinformation.
As Mahsa Alimardani, a researcher specializing in Iran at the Oxford Internet Institute, points out, fake videos like these have a devastating impact on people’s trust in verified information they encounter online, while also making it much harder to document real evidence from the conflicts.
The new frontier of AI-fabricated satellite imagery
A characteristic of this conflict that BBC Verify identified as a first is the emergence of satellite images generated by artificial intelligence. This represents a new level of sophistication in disinformation production, one that had not been observed in previous conflicts.
The most telling case involves the Iranian drone and missile attacks on the headquarters of the U.S. Navy’s Fifth Fleet in Bahrain on the first day of the conflict. BBC Verify authenticated multiple genuine videos of these attacks. The following day, however, a fabricated photo began circulating on X, shared by The Tehran Times, a state-affiliated Iranian newspaper. The image claimed to show extensive damage to the American military base.
Technical analysis revealed that the fake image was created from a real satellite photo of the American naval base in Bahrain, taken in February 2025 and publicly available online. One revealing detail gave the fraud away: three parked vehicles outside the base appeared in the exact same position in both the genuine satellite image and the fabricated photo — despite supposedly being captured a year apart. According to Google’s SynthID watermark detector, the fake image was generated or edited using one of Google’s own AI tools.
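The vehicle detail points to a simple, reproducible check: crop the same region from both images and compare the crops with a perceptual hash. The sketch below is a minimal illustration of that technique, not BBC Verify’s actual workflow; the filenames and crop coordinates are placeholders, and it assumes the third-party Pillow and imagehash Python packages.

```python
# pip install pillow imagehash
# Minimal sketch: compare the same cropped region of two satellite images
# using a perceptual hash. Filenames and coordinates are placeholders.
from PIL import Image
import imagehash

real = Image.open("satellite_feb2025.png")    # hypothetical: the genuine photo
fake = Image.open("circulated_image.png")     # hypothetical: the shared image

# Crop the parking area from both images (left, upper, right, lower).
box = (1200, 800, 1400, 950)
h_real = imagehash.phash(real.crop(box))
h_fake = imagehash.phash(fake.crop(box))

# A near-zero Hamming distance on a scene supposedly captured a year
# apart suggests one image was derived from the other.
distance = h_real - h_fake
print(f"Hamming distance: {distance}")
if distance <= 4:
    print("Region is effectively identical: likely recycled imagery")
```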
Another documented example shows a real image of a small plume of smoke at an American base in Iraq that was manipulated with artificial intelligence to look like a massive explosion, dramatically amplifying the impression of destruction.
The vicious cycle between algorithms and fabricated content
To understand why this problem escalated so quickly, you have to look at the mechanics of recommendation algorithms. Social media platforms are designed to maximize the time you spend scrolling through your feed, and content that provokes strong emotional reactions — like fear, anger, and outrage — is naturally prioritized by automated distribution systems.
When a fake video showing an alleged military attack starts getting likes, comments, and shares, the algorithm interprets that as a signal that the content is relevant and begins distributing it to even more people. This creates a self-reinforcing cycle: the more disinformation circulates, the more the system amplifies it, and the more money the creator earns. In practice, there is no efficient automated mechanism that can reliably tell the difference between a real conflict video and a complete fabrication made with artificial intelligence.
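A toy simulation makes the loop concrete. The sketch below assumes two posts with different per-impression reaction rates (the figures are invented for illustration) and distributes impressions in proportion to accumulated engagement; the more provocative post compounds its advantage round after round.

```python
# Toy model of an engagement-ranked feed. Reaction rates are invented:
# the fabricated video provokes stronger reactions per impression.
posts = {"measured_news_report": 0.02, "fabricated_strike_video": 0.08}
engagement = {name: 1.0 for name in posts}

for _ in range(10):
    total = sum(engagement.values())
    for name, reaction_rate in posts.items():
        # The feed allocates impressions in proportion to past engagement,
        # and each impression has some chance of producing a new reaction.
        impressions = 10_000 * engagement[name] / total
        engagement[name] += impressions * reaction_rate

for name, score in sorted(engagement.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:,.0f} accumulated reactions")
```

Even with these made-up numbers, the gap widens every round: the post with the higher reaction rate earns more impressions, which earn more reactions, which earn more impressions.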
As Timothy Graham observes, engagement-based monetization and accurate information are fundamentally at odds, and no platform has fully resolved this contradiction — and perhaps none ever will.
The explosion of accessible tools
Generative AI expert Henry Ajder emphasizes that the number of different tools now available for creating highly realistic AI manipulations is unprecedented. These tools have never been so accessible, so easy, and so cheap to use.
The list of popular AI content generation platforms includes Google’s Veo, OpenAI’s Sora, the Chinese app Seedance, and Grok itself, integrated directly into X. This variety of options means creators are not dependent on a single tool and can switch between them to produce content at an industrial scale.
Victoire Rio, executive director of the nonprofit organization What To Fix, adds that the pipeline between content creation and publication on social media can now be almost entirely automated, which helps explain the unprecedented volume of fabricated material that emerged during this conflict.
The situation gets even more complex when you consider that many of these creators operate in organized networks. They are not isolated individuals posting a video here and there. These are coordinated operations that use multiple accounts, publish at strategic times, and replicate the same content with slight variations to avoid automatic detection. Some of these accounts manage to accumulate millions of followers within just a few weeks, riding the wave of public interest in the conflict.
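Those “slight variations” are often trivial to catch with basic text-similarity techniques. As a hedged illustration (not any platform’s actual detector), the sketch below breaks two caption variants into character shingles and measures their Jaccard overlap; near-identical captions score high even when punctuation and emphasis differ.

```python
# Hedged illustration of near-duplicate detection on post captions:
# character shingles plus Jaccard similarity. Thresholds are illustrative.
def shingles(text: str, k: int = 5) -> set:
    text = " ".join(text.lower().split())
    return {text[i:i + k] for i in range(max(1, len(text) - k + 1))}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)

caption_a = "BREAKING: missiles strike Tel Aviv tonight, explosions everywhere"
caption_b = "BREAKING missiles strike Tel Aviv tonight!! explosions everywhere"

similarity = jaccard(shingles(caption_a), shingles(caption_b))
print(f"similarity: {similarity:.2f}")  # high score despite the edits
if similarity > 0.6:
    print("Likely the same campaign content with cosmetic variations")
```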
The economics behind disinformation
Here is where we get to the most revealing piece of this whole story. According to X’s head of product, 99% of the accounts responsible for spreading AI-generated videos about the conflict were trying to exploit the platform’s monetization system, publishing content designed to generate massive volumes of engagement in exchange for payment through the Creator Revenue Sharing program.
In other words, these people were not spreading disinformation out of ideological conviction or by accident. They were doing it deliberately to make money.
The platform does not disclose how many accounts participate in this program or exactly how much they can earn. However, Graham estimates that X pays around eight to twelve dollars per million impressions from verified users. To be eligible, a creator needs to hit five million organic impressions within three months and maintain a premium X subscription.
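Taking Graham’s figures at face value, the arithmetic is easy to sketch. The snippet below applies the quoted eight to twelve dollars per million verified impressions to a hypothetical view count; the numbers are illustrative, not platform data.

```python
# Back-of-envelope earnings using the rates Graham cites ($8-$12 per
# million impressions from verified users). Figures are illustrative.
def payout(impressions: int, rate_per_million: float) -> float:
    return impressions / 1_000_000 * rate_per_million

ELIGIBILITY = 5_000_000   # organic impressions required within 3 months
views = 100_000_000       # hypothetical total for a viral video cluster

print("eligible:", views >= ELIGIBILITY)
print(f"estimated payout: ${payout(views, 8):,.0f} - ${payout(views, 12):,.0f}")
# -> estimated payout: $800 - $1,200 per 100 million verified impressions
```

The per-video sums look modest, but the production cost is near zero and the same pattern repeats across hundreds of posts and accounts, so the totals scale quickly.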
In Graham’s view, once you are inside the program, viral AI-generated content basically works like a money-printing machine. “They have built the ultimate disinformation enterprise,” the researcher says.
What the platforms are doing — and why it is not enough
Major social media platforms have been announcing measures to combat online disinformation generated by artificial intelligence, but the reality shows that these actions still fall far short of what is needed.
X announced this week that it will temporarily suspend creators from its monetization program if they publish AI-generated videos showing armed conflicts without proper labeling. For Alimardani, this is a notable signal that the platform has acknowledged the severity of the problem.
BBC Verify reached out to TikTok and Meta, the company behind Facebook and Instagram, to ask whether they planned to adopt similar measures, but neither responded to requests for comment. The silence from these platforms is, in itself, quite revealing about the industry’s stance on this crisis.
The problem with community notes and other decentralized moderation systems is that they depend on volume and consensus among participants. Often the correction arrives hours or even days after the content has already gone viral and reached millions of people; by then, the damage is done. The initial impression left by a shocking video of a supposed explosion is far more powerful than any correction note that shows up afterward, and most users who saw the original content never see the correction.
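A crude timing model shows why late corrections barely matter. The sketch below assumes cumulative views follow a logistic viral spike (all parameters invented for illustration) and asks what share of eventual viewers were already exposed before a correction posted at hour 36.

```python
import math

# Toy timing model: views follow a logistic viral spike (all parameters
# invented for illustration) and a correction note lands at hour 36.
def cumulative_views(t: float, peak: float = 10_000_000,
                     midpoint: float = 18, rate: float = 0.4) -> float:
    return peak / (1 + math.exp(-rate * (t - midpoint)))

correction_hour = 36
seen_before = cumulative_views(correction_hour)
eventual = cumulative_views(24 * 7)

print(f"{seen_before / eventual:.0%} of eventual viewers were exposed "
      "before the correction appeared")  # ~100% in this toy model
```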
The structural contradiction of monetization
There is a structural contradiction that platforms have yet to resolve. On one hand, they need to offer attractive compensation programs to keep creators engaged and producing content, since that is what sustains the entire ecosystem. On the other hand, those same programs end up incentivizing the production of sensationalist and false material, because the primary criterion for compensation tends to be view count and engagement volume, not the accuracy or quality of the content.
As long as this logic does not fundamentally change, the financial incentive to create fake videos about hot-button topics like international conflicts will continue to exist and will probably intensify as AI tools become more accessible and more powerful.
Digital watermarks and their limitations
Some of the more promising initiatives involve the adoption of digital watermarks and metadata embedded in content generated by artificial intelligence. Tools like Google’s SynthID can already identify when an image was created or edited by AI. Companies like Google, OpenAI, and Meta are developing joint standards so that all synthetic content carries a kind of invisible digital signature that allows for automatic identification.
However, practical implementation still faces considerable obstacles. Not all video generation tools adhere to these standards, and simple recompression or editing can strip many provenance signals, particularly those stored as metadata rather than embedded in the pixels themselves. On top of that, adoption depends on international cooperation between companies, governments, and developers, which moves far more slowly than disinformation spreads.
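To see how fragile metadata-based provenance is, consider re-saving a labeled image. The sketch below, assuming the Pillow package and a hypothetical input file, shows that a plain re-save discards embedded EXIF data; C2PA-style manifests stored in or alongside the file are similarly lost, whereas pixel-level watermarks such as SynthID are designed to survive this step.

```python
# pip install pillow
# Why metadata-based provenance is fragile: a plain re-save drops EXIF.
# The input filename is hypothetical.
from PIL import Image

original = Image.open("labeled_ai_image.jpg")
print("EXIF bytes before:", len(original.info.get("exif", b"")))

# Saving without passing exif= writes a fresh JPEG with no EXIF block.
original.save("recompressed.jpg", quality=85)

stripped = Image.open("recompressed.jpg")
print("EXIF bytes after:", len(stripped.info.get("exif", b"")))  # 0
```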
The conflict context fueling disinformation
For context on the scenario in which this entire wave of fabricated content emerged: the United States and Israel began launching strikes against Iran on February 28. In response, Iran launched drone and missile attacks against Israel, and also struck multiple Gulf nations and American military assets in the region.
Many people turned to social media to find and share the latest information and to try to make sense of an extremely fast-moving week of conflict. This natural information-seeking behavior created the perfect breeding ground for fake content creators to exploit public anxiety and curiosity. When people are scared and want quick answers, the tendency to consume and share content without verification increases dramatically.
The role of the individual in containing this phenomenon
In the middle of all this, individual responsibility carries enormous weight. Before sharing any shocking video about the conflict or any other sensitive topic, it is worth taking a few seconds to check the original source of the post. Recently created accounts with few followers but viral content are a classic sign of operations focused on monetization through disinformation.
Another important indicator is checking whether recognized news outlets are reporting the same event. If an alleged military strike only appears on social media accounts and has no corresponding journalistic coverage, the chances of it being AI-fabricated content are extremely high.
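Those signals can even be written down as a rough checklist. The sketch below turns the article’s red flags (recently created account, few followers, viral reach, no matching news coverage) into a toy scoring function; the fields and thresholds are assumptions for illustration, not any platform’s policy.

```python
from dataclasses import dataclass

# The article's red flags as a toy scoring function. Fields and
# thresholds are assumptions for illustration, not a platform policy.
@dataclass
class Account:
    age_days: int
    followers: int
    max_post_views: int
    corroborated_by_news: bool

def red_flags(a: Account) -> int:
    flags = 0
    if a.age_days < 60 and a.followers < 1_000:
        flags += 1  # recently created account with few followers...
    if a.max_post_views > 1_000_000:
        flags += 1  # ...yet viral content: classic monetization pattern
    if not a.corroborated_by_news:
        flags += 1  # no matching coverage from recognized outlets
    return flags

suspect = Account(age_days=14, followers=200,
                  max_post_views=5_000_000, corroborated_by_news=False)
print(red_flags(suspect), "of 3 red flags")  # 3: treat with maximum skepticism
```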
The current landscape also demands an update in how we consume information. The era of hyper-realistic fake videos has fully arrived, and the trend is that the quality of this content will only improve over time. Organizations like BBC Verify and other fact-checking initiatives play a critical role in this ecosystem, but they simply cannot keep up with the volume of fabricated content that surfaces every day.
The combination of stronger digital literacy, smarter government regulation, and genuine accountability from platforms seems to be the most viable path to reducing the impact of this new form of disinformation. In the meantime, the best defense remains a healthy dose of skepticism toward any content that seems too dramatic to be true, because, more and more often, it probably is not.
