
Fake news, disinformation, and AI-forged content are no longer the exception; they've become part of our daily lives. With war videos that look real, supposedly leaked screenshots, and texts full of exclusive information, it's getting harder and harder to know who or what to trust. In this scenario, a long-standing stance from good journalism is more relevant than ever: responsible skepticism, the professional habit of refusing to accept anything as true without checking, confirming, and putting it into context.

Many years before the digital era, seasoned journalism professors were already teaching principles that sound almost prophetic today. One of the most famous was the recommendation to never trust a single version of events. The logic was simple: a solid story needs to be backed by consistent sources, documents, and evidence. The difference is that, back then, the challenge was dealing with hallway rumors and human error. Now the game also includes bots, recommendation algorithms, and AI systems capable of producing entire images, texts, and videos that never actually happened, but look absolutely real.

In this content-saturated environment, the old rule "don't believe everything you hear" has gotten a necessary upgrade: don't blindly trust everything you see either. The culture of glancing at a headline, feeling a spike of emotion, and immediately sharing has become fuel for a disinformation machine that just keeps growing. And this affects everyone, including well-intentioned people and religious communities, who often spread fake content without realizing it, simply because it came from someone they trust.

AI, deepfakes, and the factory of fake news at industrial scale

The same artificial intelligence that now generates text, audio, and images with just a few clicks is behind much of the misleading content circulating on social media. A recent example involved conflicts in the Middle East: videos went viral showing alleged attacks and counterattacks, with scenes of destruction in cities like Tel Aviv or at famous tourist spots in Dubai. The images were dramatically convincing, with smoke, explosions, and cinematic framing. A lot of people shared them believing they were real footage of the conflict.

But there were several problems. Fact-checking experts identified inconsistent visual details, wrong shadows, repeated patterns, and classic signs of AI generation. Even so, the videos had already racked up millions of views, and the informational damage was done. To make things worse, when journalists asked an advanced chatbot to evaluate whether those videos were real, the automated response confirmed them as authentic, adding even more fuel to the confusion.

Cases like this expose a critical limitation: generative AI tools don’t have an internal truth sensor. They work with probability, language patterns, and past data, not with conscience, ethics, or responsibility. If a model was trained on a huge volume of information that includes false, distorted, or decontextualized content, nothing prevents it from reproducing those distortions with a super professional look. And when that happens on social platforms where content is consumed in seconds, a well-produced lie travels much faster than any later correction.
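To make that concrete, here's a deliberately tiny sketch in Python, a toy bigram model that is nothing like a real system's internals: it learns only which words tend to follow which, so if a falsehood appears frequently in its training text, it will confidently reproduce it. The corpus and sentences are invented for illustration.

```python
import random
from collections import defaultdict

# Toy illustration only: a tiny bigram "language model" trained on a corpus
# that happens to contain a false claim. Real models are vastly larger, but
# the core point is the same: they learn word-sequence frequencies, not truth.
corpus = (
    "the video shows a real attack . "
    "the video shows a real attack . "  # the falsehood is frequent in training data
    "the video shows a staged scene . "
).split()

# Count which words follow which in the training text.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 6) -> str:
    """Sample a statistically likely continuation, with no fact-checking step."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))  # pure frequency, no truth sensor
    return " ".join(words)

print(generate("the"))  # most runs echo the frequent (false) claim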

Recent research helps us grasp the scale of the problem. A 2024 study by researchers at Cornell University showed that, in just one year, the volume of AI-generated disinformation on so-called mainstream sites grew by more than 50 percent. On pages already known for spreading fake news, the surge was even more alarming, with increases of several hundred percent. In other words, the technology didn’t invent the culture of lying from scratch, but it massively turbocharged those who were already profiting from deceptive content.

How people keep believing even after they know it’s fake

There’s an even more worrying detail: in many cases, people remain influenced by something even after being told it’s fake. Researchers at University College London ran an experiment with deepfake videos. First, they let participants watch the clips. Later, they clearly revealed which videos had been fabricated. The result was both curious and troubling: even knowing they had watched something fake, many participants still let those images influence how they judged the topic.

This effect doesn’t happen by accident. Our brain doesn’t operate like a neat, logical database with tidy little folders. Strong images, emotional stories, and content that pokes at fear, anger, or hope stick in our emotional memory. Later on, even if we rationally know it was fake, that initial impact keeps shaping our perceptions. That’s where the combination of AI, algorithms, and human psychology turns into a ticking time bomb.

Another study, published in the journal Nature Human Behaviour, analyzed how people with very strong political or ideological views deal with disinformation. The conclusion was that the more convinced someone is of their own opinions, the more confident they tend to be in their ability to recognize fake news. In practice, though, their actual performance was no better than anyone else’s, and often worse. In short: the folks who think they’re immune to fake news are usually the ones who fall for it the most.

Algorithms, bubbles, and the habit of only hearing what we want

On top of all this, we have the role of social media algorithms. These platforms were built to maximize screen time, engagement, and clicks, not to deliver balanced or high-quality information. The more you interact with a certain type of content, the more the system assumes you like it and starts showing you similar posts, reinforcing the same point of view.
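A deliberately simplified sketch of that feedback loop can make the dynamic visible. Real ranking systems are far more sophisticated; the topics, scores, and multipliers below are invented purely for illustration.

```python
# Deliberately simplified engagement loop: every interaction raises the
# score of that content, so the system shows more and more of the same.
scores = {"politics_a": 1.0, "politics_b": 1.0, "sports": 1.0, "science": 1.0}

def recommend() -> str:
    # Show whatever currently has the highest engagement score.
    return max(scores, key=scores.get)

def simulate(clicks_on: str, rounds: int = 6) -> None:
    for step in range(1, rounds + 1):
        shown = recommend()
        if shown == clicks_on:
            scores[shown] *= 1.5   # engagement: boost this topic
        else:
            scores[shown] *= 0.8   # ignored: show it less often
        print(f"round {step}: shown={shown}")
    print("final scores:", {k: round(v, 2) for k, v in scores.items()})

simulate(clicks_on="politics_a")
# The feed quickly locks onto politics_a and never surfaces anything else.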

If you constantly consume news from a single side, your feed starts to look like a mirror that only reflects your own beliefs. Inside that closed environment, any content that fits your narrative – even if it’s fake news, an AI-generated video, or a distorted interpretation of real facts – tends to be accepted with much less resistance. The result is a vicious cycle: the person doesn’t look for balance, just reinforcement; the platforms deliver exactly that; and misinformation finds the ideal ground to spread, because it doesn’t have to compete with alternative versions or serious fact-checking.

When someone decides to break this pattern a bit and seeks out diverse sources, checks outlets with solid credibility, or even compares different coverage of the same event, they’re already practicing a much-needed kind of informational hygiene. It’s not about living in constant suspicion of everything, but about admitting that our personal filters can fail and that being absolutely certain of everything all the time is more of a red flag than a sign of intelligence.

Faith communities and the risk of digital gullibility

In religious communities, this challenge gets an extra layer. Many people place strong trust in leaders, church friends, or relatives who share content in closed groups. When a video, a chain message, or a supposed news story is sent by someone highly respected, the natural tendency is to believe it without questioning, especially if the message reinforces moral values, political positions, or topics that already stir up strong emotions.

The problem is that this trust, when not paired with discernment, becomes a feast for disinformation. Emotional stories about miracles that were never verified, fear-based political rumors, or alarmist messages that mix faith with conspiracy theories gain strength precisely because they travel through this trusted channel. Little by little, communities that should be examples of wisdom and prudence in dealing with the truth end up becoming amplifiers of rumors and distortions.

Ancient wisdom texts have been warning about this for centuries: naive people believe anything, while prudent people evaluate, compare, and reflect before taking a step. The difference today is that the stage for this naivety is no longer just small talk on the corner, but networks that reach thousands of people in a few seconds. When someone shares something false without knowing it, it’s not just an isolated mistake; it’s another piece of a huge disinformation web that gradually influences the social climate, political decisions, and even how faith is perceived in the public eye.

Being critical, verifying, and pausing: a discipline for the AI age

In the middle of this tsunami of fabricated content, one attitude stands out again: the discipline of not swallowing anything before chewing on it. Instead of reacting on impulse to a post that triggers outrage, fear, or euphoria, it’s worth adopting a small habit that makes a big difference: pause. A few seconds of thought already help break the reflex of instant sharing.

A good starting question is simple: do I know this is true, or do I just want it to be true because it matches what I already think? If the honest answer leans toward the second option, that’s a clear sign it’s worth double-checking. See if the information appears in multiple reliable sources, look for concrete data instead of just vague, heated language, and check whether any fact-checking outlet has already analyzed the content – all of this helps separate, at least a little, what’s fact from what’s theater.
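Part of that fact-checking step can even be automated. Google's Fact Check Tools API, for example, lets you search fact-checks already published by verification outlets. The sketch below is a minimal illustration, not a finished tool: you'd need your own API key (the value shown is a placeholder), and the example claim is invented.

```python
import requests

# Minimal sketch: query Google's Fact Check Tools API for existing
# fact-checks of a claim. API_KEY is a placeholder, and error handling
# is kept to a bare minimum for readability.
API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def search_fact_checks(claim_text: str) -> None:
    response = requests.get(URL, params={"query": claim_text, "key": API_KEY}, timeout=10)
    response.raise_for_status()
    for claim in response.json().get("claims", []):
        for review in claim.get("claimReview", []):
            # Each review lists the outlet, its verdict (e.g. "False"), and a link.
            print(review.get("publisher", {}).get("name"),
                  "->", review.get("textualRating"), "|", review.get("url"))

search_fact_checks("video shows recent attack on Tel Aviv")  # invented example claim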

It also helps to watch out for classic signs of emotional manipulation. Messages urging you to "share before they delete this," promising secrets "the mainstream media is hiding," or attacking any questioning as betrayal tend to be more suspicious. The same goes for content that looks a bit too perfect, with cinematic scenes supposedly captured by chance in the middle of chaos. At a time when deepfakes are increasingly accessible, being skeptical of a video that looks too good to be true isn't paranoia; it's self-defense.

Practical habits to protect yourself from disinformation

A few simple steps, repeated consistently, already build a very strong layer of protection against fake news.

  • Check the source: before sharing, see if the site is known, whether it has a track record in journalism or just lives off clickbait headlines. Sketchy links, bizarre domains, and pages with no clear contact information are red flags.
  • Look for more than one source: if something is truly important, it’s unlikely that only one outlet in the world is talking about it. Compare how different outlets cover the same story.
  • Pay attention to date and context: a lot of disinformation is just old news repackaged as if it were current. Checking the publication date and whether it still makes sense today helps avoid confusion.
  • Be suspicious of purely emotional appeals: texts that only play on fear, hatred, or outrage, without offering verifiable data, are usually more interested in triggering a reaction than informing.
  • Analyze images and videos: if something looks too good (or too catastrophic) to be true, it might be. Reverse image search tools and basic checks of visual details help spot manipulation.
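As one concrete illustration of that last item, here is a minimal Python sketch using the open-source Pillow and imagehash libraries: a perceptual hash makes it easy to check whether a "new" viral photo is actually an old archive image recirculating. The file names are placeholders, and this is one basic check, not a full forensic analysis.

```python
from PIL import Image      # pip install Pillow imagehash
import imagehash

# Minimal sketch: compare a suspicious image against a known original using a
# perceptual hash. A small Hamming distance means "visually the same picture",
# which helps spot old photos recirculated with a new caption.
def looks_like_same_image(path_a: str, path_b: str, threshold: int = 8) -> bool:
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    distance = hash_a - hash_b  # Hamming distance between the two hashes
    print(f"perceptual distance: {distance}")
    return distance <= threshold

# Placeholder file names: e.g. a "breaking news" photo vs. an archive image
# found through a reverse image search.
print(looks_like_same_image("viral_post.jpg", "archive_original.jpg"))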

Remembering that you don't need a definitive opinion about everything based on the very first version of the facts is also part of this package. In crises, armed conflicts, tragedies, or tense political decisions, the first few hours are always full of noise, incomplete data, and conflicting versions. Waiting a bit, following updates throughout the day, and seeing how serious outlets adjust the information over time is much more sensible than making bold claims that can crumble in a matter of hours.

In the end, the combo of technology, responsible journalism, and personal discernment is what can help slow down the avalanche of AI-generated disinformation. Tools will evolve, social networks will change, and new formats of deception will appear. But as long as there are people willing to stop, think, verify, and admit they might be wrong, there will always be room for the truth to have a real chance of being heard amid all the noise.
