AI: The Next Challenge in Online Safety, Security, and Privacy When It Comes to Youth and Teens
The White Hatter


As online digital literacy and internet safety presenters and investigators, we are starting to see how “some” youth, teens, and adults are leveraging artificial intelligence (AI) as a tool to target others online. Here are some concerns we are already seeing and hearing about from teens, along with others that we predict will become “problematic”:
Deepfake images or videos (already seen)
Teens can weaponize AI image and video tools to create intimate or compromising content that never happened. That can look like a nude photo of a classmate generated from publicly available pictures, or a short clip that appears to show a peer saying or doing something humiliating. The effect is immediate and brutal: the victim faces shame, social isolation, and harassment based on a fabrication. Even once the media is proven false, the damage to reputation and mental health can linger, because visual material circulates quickly and can be repackaged or reposted by others.
Fake accounts and impersonation (already seen)
AI can help craft believable profiles and messages that mimic a real person’s tone and style, allowing someone to set up a convincing fake account posing as a classmate. With that account, the user can privately message others to spread lies, spark rumours, or entrap friends into sharing private material. More sophisticated misuse may involve a chatbot that imitates a specific peer, carrying on conversations that coax secrets or images out of someone who thinks they are talking to a trusted friend. The result is betrayal of trust and the potential for coordinated social harm, where a single fake identity catalyzes wider conflict.
Synthetic audio and voice cloning (already seen)
Voice-cloning tools can produce short audio clips that sound like a particular teen. Those clips can be used to fabricate confessions, threats, or admissions, and can then be circulated to humiliate or coerce the person whose voice was cloned. The emotional consequences are severe because audio carries a convincing intimacy. People tend to trust what they hear, which makes synthetic audio a powerful tool for manipulation and false accusation, especially when combined with doctored images or text messages.
Automated harassment and pile-ons (predicting)
Rather than relying on a small group of individuals, someone can use AI to generate waves of abusive comments, impersonations, or spam across multiple platforms. The coordinated flood creates the impression that many people are attacking the same target. That pile-on effect can overwhelm moderation systems and the victim’s ability to respond, amplify the social cost of the harassment, and escalate otherwise isolated drama into a sustained campaign of bullying. The anonymity and speed of AI-generated attacks make them hard to counter in real time.
Doxxing and targeted disclosure (already seen)
AI tools that scrape and analyze public information can quickly assemble a startlingly complete picture of a teen’s life, such as usernames across platforms, location clues from images, family members’ names, and other personal details. When that aggregated profile is published intentionally to expose or shame someone, the consequences can include threats, stalking, or harassment at school and in the community. Because AI speeds up the process of finding and summarizing dispersed data, doxxing that used to take days or weeks can now happen in a matter of hours.
Phishing and sextortion using AI-crafted social engineering (already seen)
AI can write highly personalized messages that mimic a friend’s voice and reference private details to reduce suspicion. Those messages may include links or requests meant to trick a teen into revealing login credentials, opening malware, or sending intimate photos. If the attacker obtains compromising images, they may threaten to release them unless the victim pays money or complies with demands. The psychological pressure of sextortion is intense; victims often feel isolated, ashamed, and unsure where to turn for help.
Reputation manipulation and gaslighting (predicting)
Malicious students can generate fake screenshots, chat logs, or social posts that appear authentic and then share them to sow doubt, split friendship groups, or push someone out of social circles. Over time, repeated circulation of falsified material can erode a teen’s credibility, leaving peers uncertain whom to trust. That gaslighting tactic does more than harm reputation. It undermines the victim’s social support, which is often the very resource they need to recover.
Academic and credential fraud used to harm peers (predicting)
AI can be misused to fabricate documents, messages, or assignments that frame another student for cheating, plagiarism, or misconduct. A falsified essay, a forged email exchange, or a fake submission with a peer’s name attached can trigger disciplinary processes and lasting academic consequences. Even when the fraud is discovered, the target may suffer reputational fallout and stress from defending themselves against institutional procedures and accusations.
The rise of AI-generated content introduces a new layer of complexity for anyone trying to uncover the truth behind online harm. When a teen uses AI to target a peer, distinguishing between what is real and what is fabricated becomes increasingly difficult. In the past, investigators could rely on digital forensics, metadata, timestamps, and file origins to verify authenticity. Today, an AI-generated image or audio clip can appear entirely legitimate while leaving little or no trace of manipulation. This lack of verifiable digital fingerprints makes it harder for parents, educators, and even police to determine who created the content, when it was made, and whether it depicts something real or artificial.
For parents and schools, the immediate challenge is credibility. When a fake video or screenshot surfaces, it can spread faster than it can be disproven. Teens caught in the middle of these situations may insist that “it’s not real,” while peers, teachers, or administrators struggle to decide what to believe. By the time digital forensics can confirm the truth, if that’s even possible, the reputational and emotional harm is already done. The speed at which AI-generated content circulates outpaces the ability of adults to respond or intervene effectively.
For law enforcement, AI misuse raises evidentiary and legal questions that current laws were not designed to handle. Establishing intent, authorship, and authenticity is far more complex when the content itself can be entirely synthetic. Investigators must now rely on advanced forensic AI detection tools that are still evolving and not always accessible to smaller agencies or schools. Even when perpetrators are identified, proving culpability in court can be difficult if the manipulated material cannot be conclusively tied to them.
The anonymity offered by AI tools means that a teen can create deepfakes, impersonations, or coordinated harassment campaigns from behind layers of encrypted accounts or offshore platforms. Traditional investigative pathways, such as subpoenas or production orders for IP addresses, may lead to dead ends, third party servers, or anonymized cloud systems. This erodes accountability and can leave both victims and their families feeling powerless.
The emotional fallout of this technological shift is also profound. Parents trying to support their children are navigating incidents that even digital professionals find challenging to untangle. Schools, which are often the first to confront these cases, must now assess AI-based harm with limited resources and no standardized protocols. For police, we predict that the growing volume of AI-fuelled digital harm cases risks overwhelming already strained cybercrime units.
AI has blurred the line between truth and fabrication. While it has opened incredible creative opportunities, it has also created a new frontier of digital deception. For those tasked with investigating or responding to online harm, the path to justice and resolution has never been more complicated.
Artificial intelligence has become part of how today’s youth communicate, create, and connect, but it has also, in some cases, been weaponized to target and harm others. What we are witnessing is not just a new form of digital peer aggression, but an evolution of it. The ability to fabricate images, videos, voices, or entire online identities gives everyday peer conflict the potential to escalate into reputational, emotional, and even legal crises.
As presenters and investigators, we recognize that the goal is not to vilify technology but to understand it. AI is not inherently good or bad; it reflects the intent of its user. Parents, educators, and caregivers must help young people see that ethical digital citizenship now includes how we use and respond to artificial intelligence. The same creativity that can be used to deceive can also be harnessed to protect, educate, and support.
The challenge ahead is teaching youth and teens not only to spot AI misuse but also to consider the human cost of using these tools irresponsibly. Empowering youth with empathy, digital literacy, and accountability will do more to prevent harm than fear or restriction ever could. In the end, shaping how AI is used among teens starts with shaping the values behind the screen.
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech