How AI Is Changing Digital Peer Aggression (Cyberbullying) for Today’s Youth and Teens
- The White Hatter

CAVEAT - This week, a teen reached out to us after being targeted by AI-driven peer aggression. This is not an isolated incident. It reflects a pattern we are increasingly hearing about from students in the schools we visit. The use of AI as a tool to target others is emerging quickly, and this article highlights how that shift is changing the landscape.
There was a time when bullying, or what we like to call peer aggression, depended on something real: a photo, a rumour, a message, or a moment that actually happened. That is no longer the case.
Today, we are seeing and hearing from youth and teens how artificial intelligence (AI) has introduced a new layer to peer aggression. Young people can now create content that looks real, sounds real, and feels real, even when it is completely fabricated. In the hands of youth and teens, sometimes driven by curiosity, sometimes by conflict, and sometimes by a desire for attention, this technology is being weaponized in ways that can cause very real harm. For parents, caregivers, and educators, understanding this shift is no longer optional; it’s essential.
Traditional cyberbullying involved sharing or amplifying real content. AI changes that by allowing youth and teens to manufacture and even alter reality. A youth or teen no longer needs an embarrassing photo of a peer; they can generate one using AI, and that removes one of the natural barriers that once limited bullying. Now, the only requirement is access to an AI app and a willingness to use it.
So, how have we seen youth and teens weaponizing AI for the purpose of digital peer aggression here at the White Hatter?
1/ Deepfake “Nude” / Sexualized Images
One of the most concerning and rapidly emerging uses of artificial intelligence involves the manipulation of ordinary photos, often taken from social media, into fake nude or sexualized images. With the help of easily accessible AI tools, a simple, everyday image can be altered in a way that makes it appear explicit or intimate, even though the person in the photo never posed for such content.
Once created, these images are often shared through group chats, private messaging apps, or social media platforms. In some cases, they are circulated widely within a school or peer group, sometimes without the creator fully considering the impact. What may begin as a misguided attempt at humour, retaliation, or attention can quickly escalate into something far more harmful.
For the youth or teens being targeted, the experience can feel deeply violating. Even though the image is fabricated, others may believe it is real, which can affect their reputation, relationships, and sense of safety. The speed at which these images can be created and distributed means that the damage can spread quickly, often before any intervention can take place.
This is where AI has fundamentally changed the landscape when it comes to digital peer aggression. It has removed the need for real images to exist in the first place, lowering the barrier for this type of harm and increasing the likelihood that more young people will encounter it, either as a target, a bystander, or, in some cases, as someone who participates without fully understanding the consequences.
2/ AI-Generated Humiliation (Memes, Videos, Voice Cloning)
Artificial intelligence is now being used in ways that allow youth and teens to fabricate highly convincing content about others. For example, voice cloning tools can generate audio that sounds like a real person saying things they never actually said, including offensive or inappropriate remarks. When shared, this type of content can quickly spread and be believed, damaging reputations and relationships.
In addition to audio, AI can also be used to edit or generate videos that place someone in situations that never happened. These manipulated clips can make it appear as though a youth or teen was involved in embarrassing, inappropriate, or harmful behaviour, even though the event is entirely fictional.
AI is also being used to produce highly personalized memes that target specific individuals. These are often designed to ridicule or humiliate, using inside jokes, personal details, or altered images to increase their impact. Because these creations can be produced quickly and shared widely, they can intensify the reach and effect of peer aggression in ways that were not possible before.
3/ Impersonation & Fake Digital Identities
Artificial intelligence is also making it easier for youth and teens to create fake accounts that closely mimic real students. With AI generated profile photos, usernames, and writing styles, these accounts can appear authentic enough to convince others they are legitimate.
Once created, these accounts may be used to post inappropriate or harmful content in someone else’s name, send messages that damage friendships or reputations, or engage in harassment while concealing the identity of the person behind it. In some cases, the goal is to embarrass or isolate the targeted youth or teen. In others, it is to create conflict between peers by making it seem as though someone said or did something they never actually did.
What makes this particularly challenging is the level of believability. When others cannot easily distinguish between a real account and a fabricated one, the impact can spread quickly before the truth has a chance to catch up. For the youth or teen being impersonated, this can lead to confusion, loss of trust, and a sense of losing control over their own identity in the digital space.
4/ Scaled and Coordinated Harassment
Artificial intelligence is also changing the scale at which digital peer aggression can occur. What once may have involved a few messages or a small group can now expand rapidly into something much larger and more persistent. With AI tools, content can be created and reposted quickly, across multiple accounts and platforms, often within a very short period of time.
This makes it easier for individuals, or even groups, to repeatedly target the same person in a coordinated way. A single post can turn into dozens. One account can become many. What starts as a small incident can quickly grow into a constant stream of content that follows a youth or teen throughout their online spaces.
For the youth or teen being targeted, this can feel overwhelming. It is not just the content itself, but the volume, repetition, and sense that it is coming from everywhere at once. That intensity can make it much harder to escape, respond, or seek support, increasing the emotional impact in ways that were far less common in the past.
5/ AI-Driven Threats, Coercion, and Sextortion
In some situations, AI generated content is being used as a form of leverage or control. A youth or teen may create a fabricated image and then threaten to share it publicly unless the targeted person complies with certain demands. These demands can range from sending money, to sharing personal information, to providing real images or continuing communication.
What makes this especially concerning is that the image does not need to be real to have power. The fear of it being believed by peers, teachers, or family members is often enough to pressure a young person into compliance. In this way, AI is being used to support behaviours that mirror coercion and sextortion, where manipulation and intimidation replace direct force.
For the youth or teen being targeted, this can create a strong sense of panic, isolation, and loss of control. They may feel trapped between the fear of exposure and the pressure to comply, which is why early intervention and adult support are so important when situations like this arise.
There are three important realities parents and caregivers need to understand when it comes to AI-based digital peer aggression:
1/ The Barrier to Entry Is Extremely Low
One of the most significant shifts with AI is how accessible these tools have become. Many of the apps and platforms capable of generating images, audio, and video are either free or available at a very low cost. They are often designed to be user friendly, requiring little to no technical knowledge. In some cases, a youth or teen can upload a photo, click a button, and generate manipulated content in seconds.
This ease of use removes many of the barriers that once limited this type of behaviour. What previously required advanced skills in photo editing or coding can now be done by almost anyone with a smartphone. As a result, the likelihood of misuse increases, not necessarily because more youth and teens intend harm, but because the opportunity to experiment, push boundaries, or act impulsively is now readily available.
2/ “It’s Fake” Does Not Reduce the Harm
A common misconception among youth and teens is that if something is not real, it cannot cause real damage. In practice, the opposite is often true. When AI generated content is shared, most people who see it do not stop to question its authenticity; they react to what they believe they are seeing or hearing.
For the youth and teens being targeted, the impact can be immediate and deeply personal. Their reputation may be affected, friendships strained, and their sense of safety in their social environment disrupted. Even if the content is later proven to be fake, the emotional effects, including embarrassment, anxiety, and loss of trust, do not simply disappear.
In many cases, the explanation or correction never spreads as far as the original content, something we call a digital tail. This means that a false narrative can continue to shape how others perceive them long after the incident has passed.
3/ The Law Is Already Being Applied
While laws specific to artificial intelligence are still evolving, this does not mean there are no legal consequences. In Canada, existing Criminal Code provisions are already being used to address behaviours involving AI generated content. What matters is not whether something is real or fake, but whether harm was caused, consent was ignored, or someone was targeted. Here are some clear legal examples that apply:
Creating or sharing AI generated sexual images of a minor. Even if an image is completely fabricated, if it depicts someone under 18 in a sexualized way, it can fall under:
Criminal Code s.163.1 – Child Pornography
Canadian courts have interpreted this law broadly. In cases such as R. v. Sharpe (2001), the Supreme Court of Canada confirmed that even fictional or created material can meet the threshold if it is made for a sexual purpose involving minors. This means an AI-generated image does not need to be “real” to be illegal.
Sharing or threatening to share intimate images without consent. If a student distributes, or even threatens to distribute, a sexualized image of another person, it may fall under:
Criminal Code s.162.1 – Non-consensual distribution of intimate images
This applies even if the image is fake, so long as it is presented as real and causes harm. Courts have increasingly focused on the impact on the victim, not just the origin of the image.
Using AI to harass or repeatedly target someone. If AI is used to repeatedly send messages, post harmful content, or create ongoing distress, it may meet the threshold for:
Criminal Code s.264 – Criminal Harassment
This includes behaviour that causes someone to fear for their safety or experience significant emotional distress, even if no physical threat is made.
Impersonating someone using AI generated profiles or content. Creating fake accounts or content that pretends to be another person can fall under:
Criminal Code s.403 – Identity Fraud / Personation
If the impersonation is done to cause harm, embarrassment, or to manipulate others, it may meet the criteria for a criminal offence.
Using AI generated images to threaten or coerce someone. If a student uses a fake image to pressure someone into doing something, such as sending money or images, this may constitute:
Criminal Code s.346 – Extortion
The key element is the threat, not whether the image is real. If the threat causes someone to comply out of fear, it can meet the legal threshold.
Supporting youth and teens in this space begins with conversation. Rather than leading with fear or punishment, it is more effective to approach the topic with curiosity and openness. Asking questions such as whether they have seen people using AI to create fake images, or what students at their school think about it, can create an entry point for discussion. When youth and teens feel heard rather than judged, they are far more likely to share what they are seeing and experiencing. That connection is what creates the opportunity for meaningful guidance.
Alongside conversation, it is important to help youth and teens develop critical thinking skills about the content they encounter online. Many young people are growing up in an environment where highly realistic images, videos, and audio can be created with ease. Helping them understand that not everything they see or hear is real is now a foundational part of digital literacy. They also need to recognize that sharing something, even if intended as a joke, can still cause real harm to another person. The focus should be on helping them pause, question, and think before they hit the send button.
Empathy also plays a critical role. One of the challenges with AI generated content is that it can create a sense of distance between the action and its impact. When something is fabricated, it may not feel as serious to the person creating or sharing it. It is important to remind youth and teens that while the content may be artificial, the person being targeted is not. The emotional and social consequences are real, and helping youth understand that connection can influence their choices in a meaningful way.
Clear expectations need to be established. Youth and teens benefit from knowing where the boundaries are and why those boundaries exist. This includes understanding that creating or sharing AI generated content of others without consent is not acceptable, and that sexualized content involving peers is never appropriate. It is also important to reinforce that intent does not erase impact. Even if something was meant as a joke, it can still cause harm and carry consequences. Clear, consistent messaging helps provide the structure youth and teens need to navigate this evolving digital landscape responsibly.
Artificial intelligence is not the problem; it’s a tool. However, it’s a tool that has changed the landscape of digital peer aggression in three important ways:
1/ Speed – content can be created instantly
2/ Scale – it can spread widely and quickly
3/ Believability – fake content can look real enough to cause real harm
This is no longer just cyberbullying as we once understood it. It is the ability to create a false reality about someone and distribute it at scale. For youth and teens, that means learning not just how to use technology, but how to navigate a world where what they see and what others see about them may not always be real.
For parents, caregivers, and educators, it means recognizing that the digital environment youth and teens are growing up in is changing quickly, and that our role has to evolve with it. Staying informed is no longer just about understanding social media platforms; it now includes having a basic awareness of how AI tools work, what they are capable of, and how they are being used in peer interactions, both positively and negatively. Without that awareness, it becomes much harder to provide relevant guidance.
Staying engaged means being present in an ongoing way, not just stepping in when something goes wrong. Regular conversations about technology, online behaviour, and emerging trends help normalize these discussions and make it easier for youth and teens to come forward when something does happen. Engagement also means paying attention to changes in behaviour, shifts in friendships, or signs that something may be off, even if a youth or teen has not said anything directly.
Guiding with clarity involves setting expectations that are simple, consistent, and grounded in values. Youth and teens need to understand not just what the rules are, but why they exist. This includes conversations about consent, respect, reputation, and the real-world consequences of digital actions, including those involving AI. When expectations are clear, it reduces ambiguity and helps youth and teens make better decisions in moments of pressure or impulse.
At the same time, guidance must be delivered with care. Youth and teens are more likely to seek help when they believe they will be supported rather than judged. If a situation arises, whether they are the target or the one who made a mistake, the goal should be to respond in a way that prioritizes learning, accountability, and safety. This balanced approach helps build trust, and trust is what ultimately allows adults to remain a meaningful influence in a youth or teen’s onlife world.
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech