
The Liar’s Dividend and AI Slop - Sadly, Why Truth Is Getting Harder to Defend

  • Writer: The White Hatter
  • 4 min read

In the age of AI-generated content, where anyone can create a convincing fake video, voice clip, or image with a few clicks, the real danger isn’t just misinformation. It’s something deeper and harder to counter, something known as the “liar’s dividend.” This is the idea that once the public knows fakes are possible, people who are caught doing or saying something damaging can simply claim, “That’s fake,” and cast doubt on what’s actually real.


Now layer in what’s often referred to as “AI slop”: the flood of low-effort, mass-produced content made with generative AI tools. This includes fake social media posts and videos, AI-written articles with misleading facts, voice-cloned phone scams, deepfake nudes, and junk images that clutter search engines. This flood isn’t just overwhelming our feeds; it’s muddying the waters of truth. At present, it’s flooding Pinterest. (1)


The term “liar’s dividend” was coined in 2018 by law professors Danielle Citron and Robert Chesney, who were studying the legal and social implications of AI-generated deepfakes. (2) They argued that as the public becomes aware that digital forgeries are easy to create, bad actors can exploit this doubt. If a video shows a politician making a controversial remark, that person can now shrug and say, “It’s AI-generated.” (3) Even if the video is real, enough people may doubt it to kill the story or let the speaker avoid accountability.


The liar’s dividend doesn’t need to prove something is fake. It just needs to raise enough uncertainty that no one can say for sure what’s real. In short, the liar’s dividend creates doubt that can now be used as a digital shield of defence.


“AI slop” is a blunt but fitting term for the overwhelming flood of misleading or meaningless content, of ever-increasing quality, now being churned out by generative AI systems. Most of this content isn’t created to inform or enlighten, but rather to drive profit, attract outrage-driven clicks, or manipulate public perception. It reflects a growing problem in the digital information ecosystem, one that’s becoming harder to detect and easier to spread.


One form of AI slop shows up in fake news articles, often filled with hallucinated (made-up) facts. These stories mimic the look and feel of real journalism but are generated by algorithms with no concern for truth or accuracy. They can go viral quickly, especially on social media, where shocking headlines travel faster than corrections. This has certainly taken hold in the senior population, one of the largest groups of newcomers to social media. (4)(5)(6)


The rise of deepfake images and videos is another troubling aspect of AI slop. These manipulated visuals can appear disturbingly real, yet they are entirely fabricated. On social media, they’re often shared without context, contributing to the spread of false narratives, conspiracy theories, or reputational harm. We have already seen multiple TikTok videos in which teens have to convince their parents that something they saw online, and believed to be real, is in fact deepfake AI slop.


Even areas that once relied on human authenticity, like reviews, resumes, academic essays, and job applications, are now being saturated with AI-generated text. These outputs may sound plausible, but they’re often devoid of genuine insight or relevance. When used to deceive, they can erode trust in platforms, institutions, and professional hiring processes.


In short, AI slop isn’t just annoying; it’s a growing digital pollutant, a kind of online chaff. It crowds out credible voices, undermines trust in information, and makes it harder for people to tell fact from fiction.


The effect? People are starting to question everything they see online, because even the most polished photo or persuasive essay could be a product of AI. To be honest, we should be suspicious!


The more fake or questionable content people encounter, the easier it becomes to believe that anything could be fake. This is the dangerous synergy between AI slop and the liar’s dividend.


For example:


  • A student is caught plagiarizing with the help of ChatGPT but argues that the system generated the work without their input.


  • A CEO is heard making sexist comments in a leaked recording, but insists it’s a voice clone.


  • An intimate image of a teen circulates online, and the sender claims it’s an AI fake to avoid consequences, not understanding that in Canada it may still be illegal.


  • A scammer uses deepfake video to impersonate a boss asking for urgent wire transfers, while later blaming a "hacked AI tool" when caught.


This is no longer speculative. Courts, educators, journalists, and parents are already dealing with the fallout. AI slop allows people to discredit real evidence. When everything can be faked, nothing can be fully trusted. That’s the chilling reality the liar’s dividend is ushering in. This fact alone is going to make police investigations even more problematic and time-consuming.


This environment of doubt hits young people especially hard. Teens are already growing up in an onlife world where what’s real and what’s not is often blurred. When adults can’t even agree on whether a video is authentic, how can we expect young people to make sense of digital evidence?


It also opens the door for manipulation and abuse:


  • Teens accused of harmful behaviour can now deny responsibility by blaming AI, which in today’s AI environment can serve as a plausible defence until proven otherwise.


  • Victims of exploitation may be dismissed if offenders claim the content is a deepfake until proven otherwise; and,


  • Trust between peers, educators, and families erodes when no one knows who to believe.


The liar’s dividend isn’t just a loophole; it’s a weapon that can be used by those looking to avoid accountability, manipulate public opinion, or harm others without consequence. In a world overflowing with AI slop, we need to double down on truth, transparency, and education. Otherwise, we risk losing the ability to tell what’s real at all.


This is why digital literacy, and more importantly AI literacy, matters now more than ever, not just for teens but for us adults as well!


Digital Food For Thought


The White Hatter


Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech not No Tech



Resources:
