How Social Media Is Already Training AI With What We Have Posted, Both Present and Past

  • Writer: The White Hatter
  • Jun 16
  • 3 min read

At The White Hatter, we believe in cutting through fear-based narratives and helping parents see the bigger picture when it comes to technology, safety, and privacy. Lately, we’ve seen growing concern about uploading photos to artificial intelligence (AI) platforms like ChatGPT, especially around whether those images could be used to train AI systems. While the concern isn’t entirely misplaced, it’s important to understand what’s already happening on a much larger scale, especially on the platforms that youth, teens, parents, caregivers, and educators use every day.


Many of the biggest tech companies in the world, such as Meta (which owns Facebook, Instagram, and WhatsApp), Reddit, Microsoft (including Teams), and Google, have already built artificial intelligence directly into their platforms. These companies are not only using AI to run features like facial recognition, auto-tagging, and predictive text; they are also using the data people upload, including photos, captions, DMs, comments, videos, and more, to train their Large Language Models (LLMs) and machine learning systems.


This isn’t something that’s going to happen in the future. It’s already happening. In fact, leaked documents, terms of service updates, and whistleblower testimonies confirm that everything many of us have uploaded over the past decade is fair game for AI training. These companies have trained their models on the very content we’ve been posting for years, often with minimal public awareness or understanding. (1)(2)(3)(4)


This is why we’ve always taught that the moment you or your child hits send, post, or upload, the content becomes public, permanent, searchable, copiable, exploitable, shareable, and often for sale. Now we can add one more: it has also been training AI.


This doesn’t mean kids should never post online. It means we, as parents, caregivers, and educators, need to understand the reality of the digital environment our children are growing up in and share that knowledge with our youth and teens. AI is not just in futuristic tools like ChatGPT. It’s already deeply woven into the platforms our children use to connect, express themselves, and learn.


So what can you do as a parent, caregiver, or educator? Start by having open conversations with your child about how the platforms they use collect and apply data. These talks don’t need to be heavy or fear-based. The goal is to help them think critically about the choices they make online, such as what they post, who they interact with, and how they manage their digital footprint. When youth and teens understand that the content they upload can be used in ways they may not have considered, including for training AI, they’re more likely to pause and reflect before sharing.


It’s also a good idea to go through privacy settings together. While these controls won’t prevent AI systems from using public or shared content to train on, reviewing them can help build solid digital hygiene habits. It sends the message that they have some agency online and that it’s worth taking the time to be intentional about who sees what they post. Privacy settings aren’t a fix-all, but they’re a practical step toward building awareness and accountability.


Another important concept to share is the “long tail” of online actions. Once something is uploaded, it doesn’t just disappear. Content can be copied, screen-recorded, shared, and reused, sometimes years later and far outside the original context. Helping your child understand this reality can encourage more thoughtful and cautious decision-making when they engage online.


Teaching media literacy around artificial intelligence is key. Young people should understand what AI is, how it works, and how it’s trained using the content we all post. When kids grasp that their data isn’t just sitting on a server but actively fueling smart technologies, they’re better equipped to make informed decisions. Media literacy is no longer optional. It’s a foundational skill for living in a digital world shaped by algorithms, machine learning, and evolving AI systems.


At The White Hatter, we aren’t anti-tech or anti-social media. We’re pro-awareness and education. The goal isn’t to unplug your child from the digital world but to equip them with the tools to navigate it safely, responsibly, and intelligently, especially as AI continues to shape their onlife world in ways that are mostly invisible, but very real.


As we always say: “Just because you’re not paying for the product doesn’t mean you’re not the product.” Let’s help our kids understand that truth, so they can make empowered choices online.


Digital Food For Thought


The White Hatter


Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech



References:



