Artificial Intelligence and the Explosion of Fake Content on Social Media
Artificial Intelligence is transforming the way we consume information on social media, and not always for the better. Every single day, an absurd amount of AI-generated content floods the feeds of millions of people around the world, blending real facts with completely fabricated narratives. The problem is that this mix is getting harder and harder to spot, even for people who consider themselves well-informed and careful about what they consume.
And the numbers help put the scale of this into perspective.
A 2024 study from Cornell University found that AI-generated misinformation grew by 57.3% on mainstream websites in just one year. On sites that specialize in spreading fake content, that number jumped to a staggering 474%. 😮
This is not just a cold statistic. In practice, it means that anyone using social media today is more exposed than ever to believing in something that simply never happened. And the most concerning part is that the tools built to fight this problem are still far from being able to handle it on their own.
A striking case that illustrates this scenario involves AI-fabricated videos about the conflict between Israel and Iran. Images showing alleged Iranian counterattacks against Tel Aviv and even against the Burj Khalifa in Dubai, the tallest building in the world, circulated massively on the X platform and were viewed by millions of people. The videos were completely fake, generated by artificial intelligence, but their visual quality was convincing enough to fool a significant portion of the public. The most ironic part is that when BBC journalists asked the Grok chatbot to verify the authenticity of those images, the chatbot itself insisted the material was real. That is right: an AI tool was unable to identify content fabricated by another AI.
This episode reveals something important about the current state of technology. The tools that are supposed to protect us from misinformation still have critical flaws, and blindly trusting them can be just as risky as not verifying any information at all.
How AI Became a Fake Content Factory
In recent years, large language models, the now-famous LLMs, have become so sophisticated that anyone with access to a computer can generate convincing text in seconds, without needing any deep technical knowledge. That is incredible from a technology democratization standpoint, but it opens a massive gap when the goal is to create misleading content. A fake article that would have previously taken hours to write and publish can now be replicated at industrial scale, with subtle variations that make automatic detection by fact-checking systems much harder.
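To make that last point concrete: many simple detection systems fingerprint known false articles with an exact hash, an approach that fails the moment a single word changes. Here is a minimal Python sketch (the example sentences are invented) of why mass-produced variants slip through:

```python
import hashlib

def fingerprint(text):
    # Exact-match fingerprinting, as used in simple deduplication systems.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

original = "The mayor secretly canceled the city's water treatment program."
variant  = "The mayor quietly canceled the city's water-treatment program."

# One or two word changes produce a completely different fingerprint,
# so a blocklist of known fakes never matches the rewritten copy.
print(fingerprint(original) == fingerprint(variant))  # False
```

An AI system can churn out thousands of such variants in minutes, and each one looks brand new to any exact-match filter.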
The speed at which this content spreads on social media is alarming, and the algorithms of the platforms themselves, which prioritize engagement, end up acting as fuel for the fire. Content that sparks outrage, fear, or excitement tends to get more interactions, and recommendation systems interpret those interactions as a signal of relevance, delivering that material to an ever-growing audience. The result is a vicious cycle where the most emotionally charged misinformation gets more organic reach than factual and balanced news.
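For readers curious about the mechanics, here is a deliberately simplified sketch of engagement-weighted ranking. The weights and field names are illustrative assumptions, not any platform's real formula:

```python
# Minimal sketch of engagement-weighted ranking (illustrative only:
# the weights and fields are assumptions, not any platform's real formula).

def engagement_score(post):
    # Active reactions (comments, shares) typically carry more weight
    # than passive ones (likes) in engagement-based systems.
    return (1.0 * post["likes"]
            + 3.0 * post["comments"]
            + 5.0 * post["shares"])

def rank_feed(posts):
    # Higher engagement -> shown to more people -> more engagement:
    # the feedback loop described above.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    {"id": "balanced-report", "likes": 120, "comments": 8,  "shares": 5},
    {"id": "outrage-bait",    "likes": 90,  "comments": 60, "shares": 40},
])
print([p["id"] for p in feed])  # the emotionally charged post ranks first
```

Notice that nothing in this logic checks whether a post is true. Relevance is inferred purely from reactions, which is exactly why outrage outperforms accuracy.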
What makes the situation even more complicated is that texts produced by artificial intelligence today no longer sound like those automated response bots we all used to know a few years back. They have flow, context, plausible citations, and even an emotional tone that resonates with readers. This means that the visual and linguistic cues many people used to rely on to spot a suspicious text, like grammatical errors or nonsensical phrases, simply do not work anymore. Misinformation has evolved, and our critical thinking needs to keep up with that evolution if we want to keep consuming information in a healthy and responsible way.
It is worth noting that not all AI-generated content is fake or malicious, far from it. Many newsrooms around the world already use AI tools to speed up the production of legitimate news, summaries, and analysis. The problem lies in the deliberate use of these tools to create narratives that distort reality, fuel polarization, and manipulate public opinion. And when this malicious content enters the same stream as truthful content, inside the same feed, with the same format and the same appearance, it becomes extremely difficult for the average social media user to tell one from the other without a conscious and informed effort.
The Role of Algorithms and Information Bubbles
There is a factor that makes this whole dynamic worse, and most people do not notice it in their daily lives: social media algorithms are designed to show you more of what you already want to see. When someone interacts with a certain type of content, whether by liking, commenting, or sharing, the system reads that as a preference and starts delivering more of the same. In practice, this means that someone who already leans toward believing a certain type of narrative will be fed more content that confirms that view, creating what experts call an information bubble or echo chamber.
Research shows that people, in general, do not seek balance when forming their opinions. They tend to consume sources that reinforce what they already think. Algorithms amplify this natural behavior, making the public even more susceptible to fake news generated by artificial intelligence. A person inside an information bubble will rarely be exposed to a correction or a different point of view, unless they make a deliberate effort to step outside that loop.
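A toy simulation makes this narrowing effect visible. Every number here is invented; the point is only to show how a small, repeated bias in what gets engagement can end up dominating a feed:

```python
import random

# Toy simulation (all numbers invented) of preference feedback:
# each interaction with a topic makes the system more likely to
# serve that topic again.
weights = {"politics": 1.0, "science": 1.0, "sports": 1.0}

random.seed(42)
for _ in range(200):
    topics = list(weights)
    shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
    # Assume the user reliably engages with just one favored topic.
    if shown == "politics":
        weights[shown] *= 1.05  # engagement read as a relevance signal

total = sum(weights.values())
shares = {topic: round(w / total, 2) for topic, w in weights.items()}
print(shares)  # the favored topic crowds out the rest of the feed
```

After a few hundred rounds, the favored topic takes over almost the entire distribution, even though the user never explicitly asked for that.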
A global study published in Nature Human Behaviour reported a particularly revealing finding on this topic. Participants with strong ideological views showed more confidence in their own ability to detect misinformation than their actual performance justified. In other words, the people who believe they are least susceptible to manipulation are, in fact, among the most vulnerable to believing false information. This overconfidence works as a dangerous blind spot because it keeps people from questioning what they are consuming.
The Psychological Impact of Deepfakes and Fabricated Videos
If AI-generated text already poses a significant challenge, deepfakes in video format take the problem to a whole different level. The ability to create realistic videos with faces, voices, and settings entirely fabricated by artificial intelligence makes it nearly impossible for the human eye to tell real from artificial in many cases.
A study conducted by communication psychologists at University College London revealed an alarming finding: participants who watched deepfake videos continued to be influenced by the content even after being told the videos were fabricated. In other words, people knew it was fake and still, to some degree, kept believing what they saw. This phenomenon shows that the emotional impact of visual content is so powerful that it can override logical reasoning, even when a person has all the information needed to reject it.
Both Facebook and X have fact-checking tools specifically designed to combat what has come to be known as fake news. However, many users dismiss the fact-checking itself as false, especially when it contradicts their preexisting views. This resistance turns fact-checking into a minefield, where accurate information is treated with suspicion while fake content is embraced with conviction.
Why Fact-Checking Still Does Not Solve Everything
Fact-checking is one of the main tools that journalists, researchers, and digital platforms use to fight misinformation, and it plays a fundamental role in the information ecosystem. Specialized organizations work daily to verify claims circulating on social media and across news outlets. But there is an enormous structural limitation in this process: time.
While a fact-checker takes hours or even days to verify and publish a complete analysis, the fake content has already traveled through thousands of shares and reached people who may never see the correction. The problem is not the quality of the verification work, which is usually excellent, but rather the speed gap between the production of fake content and the response that comes after.
Social media platforms themselves have been investing in automated systems that use artificial intelligence to identify and flag potentially false content before it spreads too far. Some of these systems work well for obvious cases, like crudely manipulated images or text with known misinformation patterns. But when AI is used to create more sophisticated content, with coherent narrative structure and references that look legitimate, those detection systems start to show their limitations. It is almost like a digital arms race, where every advance in detection tools is met with an advance in fake content creation techniques, and this cycle has no clear end in sight.
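A deliberately naive sketch illustrates that gap. The patterns below are invented for illustration; the point is that matching known phrasings catches crude fakes while missing a fluent rewrite of the same insinuation:

```python
import re

# Deliberately naive pattern-based flagger (patterns invented for
# illustration): it matches crude, known phrasings only.
KNOWN_PATTERNS = [
    r"doctors don'?t want you to know",
    r"the media is hiding",
    r"100% proof",
]

def flag(text):
    return any(re.search(p, text, re.IGNORECASE) for p in KNOWN_PATTERNS)

crude = "100% PROOF the media is hiding the truth!!!"
fluent = ("An internal report, reviewed by two independent analysts, "
          "suggests the coverage systematically omitted key findings.")

print(flag(crude))   # True: obvious case, caught
print(flag(fluent))  # False: same insinuation, fluently rephrased, missed
```

Real detection systems are far more sophisticated than this, of course, but the underlying asymmetry is the same: generators only need one phrasing the filter has never seen.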
Another factor that complicates the effectiveness of fact-checking is the psychological phenomenon known as the backfire effect. Studies show that, in some cases, when a person receives a correction about something they already believe, especially if that belief is tied to their identity or worldview, they tend to reinforce the original belief even more instead of revising it. This means that even when verification arrives quickly and is well communicated, it does not always produce the desired effect. The fight against misinformation, therefore, goes far beyond simply presenting the correct facts. It needs to account for how people process information, which involves psychology, communication, and a deep understanding of human behavior online.
The Lesson of the Bereans for the Digital Information Age
The original article that inspired this reflection, initially published in the Baptist and Reflector by journalist Chris Turner, draws an interesting and quite relevant analogy. Turner brings back the figure of the Bereans, a group mentioned in the Book of Acts, chapter 17, who became known for receiving teachings with enthusiasm while never giving up on personally verifying whether what they heard was true. They cross-referenced every piece of information against the available scriptures before accepting it as fact.
This approach is described as noble precisely because it combines openness with rigor. The Bereans did not reject new ideas on principle, nor did they accept everything without questioning. They practiced what we would today call critical thinking, and that is a skill that applies to anyone, regardless of religious convictions or ideological position.
Turner also quotes a saying from his former journalism professor, William D. Downs Jr., who for 41 years served as a distinguished professor of communication and journalism at Ouachita Baptist University: "If your mother says she loves you, get three sources to confirm it." Another memorable line: "Trust nobody and assume nothing." These phrases might sound extreme, but they carry a valuable principle, which is to never treat information as true just because it comes from a source we trust or because it confirms what we already believe.
In Turner’s adaptation for the present day, the recommendation becomes even more radical: do not believe anything you hear or even what you see. In a world of deepfakes and AI-fabricated visual content, even visual evidence has lost part of its reliability.
Practical Habits for Digital Discernment
Faced with such a complex landscape, where even institutional tools have their limitations, individual discernment becomes an essential skill for anyone using the internet today. This does not mean distrusting everything all the time, which would be exhausting and counterproductive, but rather developing a set of habits that help filter what is trustworthy from what is questionable before consuming, sharing, or acting on a piece of information.
Some practices that make a real difference in everyday life:
- Check multiple sources before accepting any information as true. If a major story only appears in one place, that is already a red flag.
- Recognize emotional manipulation. Ask yourself: is this information presenting facts objectively, or is it trying to trigger outrage, fear, or excitement?
- Pause before sharing. Social media rewards speed and impulsive reactions, but discernment requires a pause. If something triggers an intense emotional response, that is exactly the moment to wait before reposting.
- Limit excessive news consumption, especially from single sources or social media algorithms. Continuous consumption of uncurated information can warp your perspective and increase anxiety.
- Use visual verification tools. Free resources like Google Reverse Image Search and TinEye let you check whether an image has been published before in a different context, which frequently reveals manipulations or misuse of old photos and videos (one building block behind these tools is sketched below).
This kind of check takes less than two minutes and can prevent you from inadvertently contributing to the spread of misinformation. Building this habit into your daily routine is a practical and accessible way to exercise discernment without needing any specialized technical knowledge.
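For the technically curious, perceptual hashing is one building block behind reverse image search. The sketch below uses the third-party Pillow and imagehash Python libraries (installed separately via pip); the file names and the distance threshold are placeholder assumptions:

```python
# Requires: pip install Pillow imagehash
from PIL import Image
import imagehash

# Perceptual hashes stay similar under resizing, recompression, and
# small edits, unlike cryptographic hashes.
hash_a = imagehash.phash(Image.open("viral_photo.jpg"))          # placeholder file
hash_b = imagehash.phash(Image.open("original_2019_photo.jpg"))  # placeholder file

# The difference is a Hamming distance; small values mean the viral
# image is almost certainly a reused or lightly edited copy.
distance = hash_a - hash_b
print(f"distance = {distance}")
if distance <= 8:  # threshold is an illustrative assumption
    print("Likely the same underlying image in a new context.")
```

Unlike a cryptographic hash, a perceptual hash changes only slightly when an image is resized or recompressed, which is what makes this kind of near-duplicate detection possible at web scale.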
Sharing Is a Responsibility
One of the most relevant points raised in the original article is that sharing unverified information is, in practice, a modern way of spreading lies, even when the intention is good. Turner points out that this is especially true within communities with a high degree of mutual trust, where a respected person can propagate fake content, intentionally or accidentally, and that content will be accepted and passed along without question precisely because of the trust placed in the messenger.
This dynamic is not exclusive to any specific group. It repeats itself across professional, family, political, and religious circles. Politically charged content, emotional stories that seem inspiring, and shocking videos circulate daily without any prior verification, and each share amplifies the reach of that information, whether it is true or false.
Ultimately, it is worth reflecting on the role each of us plays within the social media information ecosystem. Every share is a decision, and that decision carries real weight in the world. When we share something without verifying it, even in good faith, we end up participating in the distribution chain of that content. Developing a more discerning approach before amplifying information is one of the most concrete contributions any user can make toward a healthier digital environment.
This is not about paranoia or quitting social media altogether, but about understanding that discernment is a collective responsibility. The fight against misinformation starts the moment each person decides what they will or will not spread to their connections. Echoing Professor Downs, maybe we do not need three sources when our moms say they love us, but we definitely need a pause and a more careful look before accepting as truth everything we see and hear out there. 🧠
