How monetization on X turned hacked accounts into disinformation machines
Monetization on X has become the engine behind a sophisticated scheme connecting hacked accounts, AI-fabricated war videos, and a flood of disinformation targeting millions of users. The story broke last Wednesday when Nikita Bier, head of product at the platform, publicly detailed a coordinated operation involving at least 31 compromised profiles. All of them were hacked and renamed on February 27 to variations of the name Iran War Monitor, a clear attempt to capitalize on global interest in the conflict involving Iran, the United States, and Israel. Behind the entire operation was a single person operating out of Pakistan, and the motivation had nothing to do with ideology: it was purely financial, aimed at cashing in on X’s creator revenue sharing program.
The profiles posted clips generated by artificial intelligence as if they were real footage of military strikes, bombings, and troop movements. Everything was designed to rack up views at massive scale and, consequently, earn money through the platform’s revenue program. This engagement-based monetization model creates a direct incentive for producing sensationalist and false content, especially during moments of real geopolitical tension, when people are more vulnerable and hungry for information. The result is a vicious cycle where the platform pays for attention, and attention is won through well-packaged lies.
Bier said the X team is getting faster at detecting this type of operation and is also working to eliminate the incentives that make it viable. But this episode exposes a structural weakness that goes far beyond an isolated case. When money flows toward whoever gets the most clicks, the entire system ends up rewarding whoever lies better and faster. And generative artificial intelligence has made this process absurdly cheap and scalable, allowing a single person anywhere in the world to operate dozens of accounts simultaneously with visually convincing content.
The role of artificial intelligence in fabricating war content
Generative artificial intelligence has evolved to a point where producing realistic war scenario videos no longer requires production teams, expensive equipment, or advanced technical knowledge. Accessible tools can generate images and clips simulating explosions, military movements, and urban destruction with a level of detail that fools even attentive observers. In the case revealed by Bier, the videos posted by the hacked accounts had exactly this profile — they looked like legitimate documentary footage but were entirely fabricated. This represents a major leap in disinformation production capability, because visual content has always carried more credibility than written text. When someone sees a video of an alleged airstrike, the natural tendency is to believe it actually happened.
The problem gets worse when this type of material is distributed in the context of a real conflict. The tension between Iran, the United States, and Israel is genuine — with Iran launching missile attacks against American bases in Bahrain while Israel was bombing Beirut — and people are actively searching for information about what is happening. In this scenario, content fabricated by artificial intelligence blends with real footage, creating an informational fog that makes it extremely difficult to understand the facts. Disinformation doesn’t need to convince everyone — it just needs to sow enough doubt that people lose the ability to tell what is real from what is fake. And when that happens at scale, with dozens of profiles posting simultaneously, the effect multiplies exponentially.
It is worth noting that the operator behind the scheme didn’t need to develop any proprietary technology or hack complex defense systems. He simply combined artificial intelligence tools available on the open market with hacked accounts that already had established follower bases. This combination allowed the fake content to reach massive audiences from the very first moment of publication, without any need to build credibility from scratch. It is an operational model that can be replicated by anyone with financial motivation and basic internet access, which makes the problem even more concerning.
It wasn’t just Iran War Monitor — the IDF Girl profiles were part of the scheme too
An important detail revealed by Bier is that the operation wasn’t limited to profiles posing as Iranian conflict monitors. The same person was also behind multiple profiles called IDF Girl, which pretended to be women connected to the Israel Defense Forces. These profiles were likewise operated from Pakistan and followed the same logic of exploiting public curiosity and emotional engagement around the Middle East conflict.
This diversification of personas shows that the operator had a strong understanding of how content distribution algorithms work on X. By creating profiles with different identities but all revolving around the same hot topic, he managed to maximize the total reach of the operation. Each profile captured a different segment of the audience interested in the conflict — some attracted people looking for information from the Iranian side, others attracted those following the Israeli perspective. It was a disinformation network strategically designed to cover multiple angles of the same subject and, in doing so, multiply earnings from the monetization program.
This behavioral pattern raises serious questions about identity verification on the platform. X requires identity verification to participate in the revenue program, yet a single person managed to operate 31 hacked accounts simultaneously before being caught, which points to clear gaps in the process.
Monetization as an incentive for large-scale disinformation
X’s monetization program, known as Creator Revenue Sharing, was built on the premise of rewarding content creators who generate engagement on the platform. To participate, creators need to subscribe to X Premium, have at least 500 Premium followers, generate roughly five million impressions over three months, and complete identity verification. In theory, it is a way to reward those who produce relevant and interesting material. In practice, however, the system doesn’t effectively differentiate legitimate content from content manufactured to manipulate emotions and pile up views.
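To see why, it helps to look at what the entry criteria actually measure. The sketch below restates the published thresholds as a simple check; the field names are hypothetical and the code is illustrative, not X’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical model of the published eligibility thresholds for
# X's creator revenue sharing program. Field names are illustrative
# and do not reflect any internal X API.

@dataclass
class CreatorAccount:
    has_premium: bool        # subscribed to X Premium
    premium_followers: int   # followers who are themselves Premium subscribers
    impressions_90d: int     # impressions over the trailing three months
    identity_verified: bool  # completed identity verification

def is_eligible(account: CreatorAccount) -> bool:
    """Return True if the account meets the publicly stated thresholds."""
    return (
        account.has_premium
        and account.premium_followers >= 500
        and account.impressions_90d >= 5_000_000
        and account.identity_verified
    )
```

Notice that nothing in these criteria inspects what the impressions were earned with. An account that clears the bar with fabricated war footage passes the same check as one that clears it with original reporting.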
When payment is proportional to the number of impressions, the economic incentive points directly toward creating material that generates the most reactions possible — and few topics provoke as much reaction as war and armed conflict. The scheme uncovered on X is the latest proof that this compensation model can be systematically exploited by bad actors.
The logic is simple and efficient from the perspective of whoever is running the scheme. First, you gain access to hacked accounts that already have thousands of followers, eliminating the hardest step in any social media growth strategy. Then, you rename those profiles to something that sounds like a credible news source on a hot topic, like the Middle East conflict. Next, you publish content generated by artificial intelligence that looks real enough to go viral. Views pile up quickly, and the monetization program turns that attention into cash. The entire process can be executed by a single person, as was proven in this case, and the financial return is practically immediate.
One factor that amplifies the problem is exchange rates. Since the revenue program pays out in US dollars, users in countries where the local currency has less purchasing power have an outsized incentive to produce viral, sensationalist content. This partly explains why operations of this kind frequently originate in countries like Pakistan and Bangladesh, where a dollar-denominated payout goes proportionally much further.
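Some rough arithmetic shows the scale of the incentive. Every number below is a placeholder: the payout rate, per-account reach, and exchange rate are illustrative assumptions, not X’s actual figures.

```python
# Illustrative arithmetic only. Payout rate, reach, and exchange rate
# are hypothetical placeholders, not X's actual figures.

ACCOUNTS = 31
IMPRESSIONS_PER_ACCOUNT = 5_000_000   # assumed monthly reach per hijacked profile
USD_PER_MILLION_IMPRESSIONS = 8.0     # hypothetical revenue-share rate
PKR_PER_USD = 280.0                   # approximate exchange rate

millions = ACCOUNTS * IMPRESSIONS_PER_ACCOUNT / 1_000_000
usd_payout = millions * USD_PER_MILLION_IMPRESSIONS
print(f"~${usd_payout:,.0f}/month, or ~Rs {usd_payout * PKR_PER_USD:,.0f}")
# With these placeholders: ~$1,240/month, roughly Rs 347,200,
# plausibly a multiple of a typical monthly wage in Pakistan.
```

Even under conservative assumptions, a single operator running dozens of profiles can out-earn local salaried work, which is exactly the asymmetry the scheme exploits.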
X’s response and new rules against AI-generated war content
Given the severity of the case, X announced stricter rules to combat this type of abuse. Going forward, users who publish AI-generated videos of armed conflicts without clearly indicating that the material was produced with AI will be suspended from the monetization program for 90 days. In cases of repeat offenses, removal from the program will be permanent. This measure attempts to directly target the financial incentive fueling these operations, but it still depends on the platform’s ability to detect fabricated content before it goes viral.
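The announced penalty ladder is simple enough to state directly. The sketch below is a hypothetical restatement of the rule as described; the function and return structure are invented for illustration and are not X’s enforcement code.

```python
from datetime import datetime, timedelta

SUSPENSION_DAYS = 90

def apply_penalty(prior_offenses: int, now: datetime) -> dict:
    """Penalty for posting unlabeled AI-generated conflict video.

    First offense: 90-day suspension from the monetization program.
    Any repeat offense: permanent removal.
    """
    if prior_offenses == 0:
        return {
            "action": "suspend_monetization",
            "until": now + timedelta(days=SUSPENSION_DAYS),
        }
    return {"action": "remove_from_program_permanently"}
```

The hard part, of course, is not applying the penalty but reliably deciding that a video is unlabeled AI footage in the first place.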
Bier’s team also highlighted that detection systems are being improved to identify patterns such as simultaneous renaming of multiple accounts, coordinated posting of similar content, and technical signals indicating the use of AI video generation tools. Even so, the race between those who produce and those who detect remains uneven. Generative artificial intelligence tools evolve at an impressive pace, and each new version makes it harder to distinguish real content from fabricated material.
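One of the signals mentioned, simultaneous renaming of multiple accounts, lends itself to a straightforward heuristic: flag groups of accounts that adopt near-identical display names within a short time window. The sliding-window sketch below is a hedged illustration with assumed thresholds and normalization, not X’s detection pipeline.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hedged sketch of one detection signal described in the article:
# many accounts renamed to near-identical names in a short window.
# WINDOW, MIN_CLUSTER, and the normalization are illustrative assumptions.

WINDOW = timedelta(hours=24)
MIN_CLUSTER = 5  # assumed threshold for flagging a coordinated cluster

def normalize(name: str) -> str:
    # Collapse trivial variations: "Iran War Monitor 2" -> "iranwarmonitor"
    return "".join(ch for ch in name.lower() if ch.isalpha())

def coordinated_rename_clusters(
    events: list[tuple[str, str, datetime]],
) -> list[tuple[str, list[str]]]:
    """events: (account_id, new_display_name, renamed_at), in any order."""
    by_name: defaultdict[str, list] = defaultdict(list)
    for account_id, new_name, ts in events:
        by_name[normalize(new_name)].append((ts, account_id))

    flagged = []
    for name, renames in by_name.items():
        renames.sort()  # chronological order
        start = 0
        for end in range(len(renames)):
            # Shrink the window until all renames fit inside WINDOW
            while renames[end][0] - renames[start][0] > WINDOW:
                start += 1
            if end - start + 1 >= MIN_CLUSTER:
                flagged.append((name, [acc for _, acc in renames[start:end + 1]]))
                break
    return flagged
```

Under such a rule, 31 accounts renamed to variations of Iran War Monitor on a single day would be flagged immediately; the obvious counter-move is to stagger renames and vary the names, which is part of why the race stays uneven.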
X has also invested in account transparency. In November 2025, the platform introduced the About This Account feature, which reveals the actual geographic location of profiles that publish political content. This feature has already helped identify that several accounts posting political content about India were actually operating from outside the country. Bier also pointed to the case of an account that presented itself as a journalist in Gaza but was actually sharing material entirely generated by artificial intelligence. These examples reinforce that the problem is systemic, not limited to a single operator or a single conflict.
The broader landscape of disinformation in times of conflict
Concern about disinformation on X during periods of conflict is nothing new, but it has taken on a different dimension with the accessibility of artificial intelligence tools. As the attacks between the US, Israel, and Iran unfold — including missile strikes on American bases in Bahrain, bombings in Beirut, and tensions that have even affected Indian ships in the Persian Gulf — the proliferation of fake content makes the information environment even more chaotic and dangerous.
This type of operation doesn’t emerge in a vacuum. It feeds on an ecosystem where the demand for real-time information is extremely high, verification mechanisms are slow, and financial incentives reward whoever publishes first, not whoever publishes accurately. It is the perfect storm for disinformation factories to thrive, and the case of the 31 Iran War Monitor profiles is just the visible tip of a much larger problem.
What this means for the future of social media
This case serves as a warning about the direction major platforms are taking by prioritizing engagement metrics as the foundation for their monetization programs. The combination of accessible artificial intelligence, hacked accounts with ready-made audiences, and pay-per-view systems creates an ecosystem where disinformation becomes literally profitable. We are not talking about a theoretical problem or a future risk — this is something already happening right now, with real consequences for how millions of people understand war conflicts and international crises.
Platforms need to rethink how their financial reward systems interact with content moderation. Models that pay exclusively based on view volume, without considering the accuracy or quality of the published material, are essentially subsidizing lie factories. Detection of content generated by artificial intelligence needs to evolve at the same speed as the generation tools advance, and identity verification processes for accounts must become more robust to make it harder to mass-operate compromised profiles.
This episode also reinforces the importance of digital literacy among users themselves. In an environment where any video can be fabricated in minutes by an artificial intelligence, healthy skepticism has become an essential survival skill in the information age. Checking sources, being suspicious of overly dramatic content, and seeking confirmation from established news outlets are practices that become even more relevant when disinformation is produced at industrial scale with increasingly convincing visual quality.
The Iran War Monitor case won’t be the last of its kind — but it can serve as a reference point for understanding the complexity of the problem ahead of us. The question that remains is: can platforms adapt their business models fast enough to prevent the next geopolitical crisis from being even more polluted by fake content?
