The New Face of Mean-Girl Aggression: AI-Generated Nudes and Digital Harm
- The White Hatter

- Dec 11, 2025
- 5 min read

Caveat - Most of the deepfake nude incidents reported in schools and covered by the media involve teen boys targeting teen girls. We recently learned of a case that breaks this pattern, where a teen girl weaponized and distributed AI-generated images of other girls at her school.
Most parents understand that youth and teens can be unkind, even cruel, to one another. Social dynamics have always included cliques, popularity hierarchies, and power plays. Anyone who went through school remembers the “queen bee” figure; the socially dominant teen who uses her status to influence, exclude, or target others. This behaviour isn’t new, but the tools teens can now access make the impact far more severe.
This week, we became aware of a case where a teen girl weaponized AI-generated nudes of other girls at her school and distributed them publicly as a form of digital peer aggression. This kind of behaviour isn’t just cruel; it introduces risks that most families never had to consider even a few years ago.
Peer conflict among teens is as old as adolescence itself. What has changed is the ability to weaponize technology in ways that were unimaginable only a short time ago.
AI nudification tools allow a user to upload an innocent photo and produce a sexualized version of that image within seconds. These apps are designed to be simple and fast. They require no technical skill, and many run on a regular smartphone. For a teen who wants to harm someone else, this creates an alarming new avenue for targeted aggression.
In the case that spawned this article, it wasn’t about sexuality and it wasn’t about curiosity. It was a way to purposely humiliate, intimidate, and socially isolate peers at school by exploiting their image.
This is digital peer aggression, not “drama,” and it has real-world consequences.
While boys also experience digital aggression, research consistently shows that teen girls are more likely to be targeted through sexualized rumours, image-based harassment, and relational bullying. The “queen bee” dynamic amplifies this.
Girls who hold social power often use their influence in ways that threaten other girls’ reputations and relationships. In previous decades this might have meant whisper campaigns or passing around notes. Today, the same behaviour happens online and reaches far more people, much faster, with much more damaging content. When sexualized images are involved, even fake ones, the stakes rise dramatically.
Teens fear being judged, sexualized, or socially destroyed by their peers. A single manipulated image can trigger shame, anxiety, withdrawal, or panic. It can also lead to further victimization as the image spreads beyond the original group.
Parents and caregivers need to appreciate the emotional toll this kind of attack can have. Research on image-based abuse shows that victims often experience:
• Fear and loss of control over their own identity
• Social isolation due to embarrassment or anxiety about peers’ reactions
• Declines in mental health, including depression and panic symptoms
• Distrust of social circles they once felt safe in
• Long-term concerns about reputation, school records, or future opportunities
It’s easy for adults to say, “It’s fake. People will understand.” Teens don’t experience it that way. Their world revolves around social acceptance, and a manipulated nude image can feel like a catastrophic, life-changing event.
Parents sometimes assume that if the image is fake, it falls into a grey zone. In many jurisdictions, including Canada, this assumption is wrong.
Creating, sharing, or threatening to share a sexualized image of a minor, real or AI-generated, can still violate child exploitation, defamation, or criminal harassment laws. Even when courts debate the specifics of “AI-generated child sexual abuse material,” police agencies across Canada increasingly treat these cases as serious offences because the impact on the targeted youth is real and often severe.
Even if charges are not laid, schools may suspend or expel a student for participating in image-based aggression. Families can also pursue civil remedies such as those available here in British Columbia under the Intimate Image Protection Act.
Why Teens Use These Apps to Harm Each Other
We believe several factors drive this behaviour:
1. Power and control
The queen bee dynamic is built on dominance. Technology gives aggressors a new way to reinforce their social status.
2. Anonymity and detachment
Teens often feel separated from the emotional consequences of digital behaviour. They see a screen, not a person.
3. Curiosity mixed with impulsivity
Adolescent brain development means impulse control is still growing. A moment of anger or jealousy can lead to a harmful decision.
4. Peer reinforcement
Group chats, online friend circles, and private message threads make it easy for harmful content to spread among peers who encourage or reward the behaviour.
5. Lack of understanding about legality and harm
Many teens simply don’t grasp that an AI-generated nude can carry the same emotional harm as a real one.
Parents and caregivers can help protect their children by starting conversations early. Talk openly about deepfakes, AI nudification apps, and why these tools carry real risks. Keep the discussion calm rather than alarmist, and remember that young people are more receptive when they feel heard and respected instead of lectured.
It’s also important to explain image-based harm in a way they can understand. Even when an image is fake, the emotional fallout can be significant. A manipulated sexualized picture can damage trust, dignity, and mental health. What matters most is the impact on the person targeted, not the intent behind the act.
Reinforcing empathy is another key step. Ask your child how they would feel if someone used technology to humiliate them online. Many teens don’t naturally think ahead to the consequences of digital behaviour until someone helps them picture it.
Make sure your child knows they can come to you early if something happens. Survivors of these types of crimes often feel shame even though they did nothing wrong. Let them know you won’t blame them, and that you’re there to support them.
If your child becomes a target, act quickly. Document everything by capturing screenshots, saving links, and recording usernames. Reach out to the school to report the incident, and consider contacting police if the image is sexualized. Many platforms now offer reporting tools for AI-generated sexual images, so encourage your child to use those features as well.
Help your child manage the narrative. Reassure them that adults understand deepfakes exist and that manipulated images don’t define who they are. Support them in speaking with a trusted teacher, counsellor, or coach who can help stop rumours and provide a safe space at school.
What Schools Should Be Doing
This type of aggression isn’t something schools can ignore. A healthy response includes:
• Clear digital-behaviour policies that address AI-manipulated images
• Education for students on consent, image-based harm, and digital ethics
• Strong protocols for reporting, supporting, and protecting targeted youth
• Avoiding punitive approaches that blame the targets of these types of crimes or silence them
Schools play a critical role in shaping the culture around technology. Parents should expect them to stay informed and proactive.
We are entering a time when teens can exploit AI tools to cause real emotional, social, and psychological harm. The example of a “queen bee” teen using deepfake nudes to target other girls is not an isolated scenario. It represents the direction digital peer aggression is heading.
Parents and caregivers need to be aware of these risks, educate their children early, and keep communication open. Guidance, empathy, and timely intervention are key to protecting youth from the damage these tools can cause.
If your child ever becomes a target, remind them they are not to blame. Shame belongs to the person who chose to inflict harm, not the one who was harmed.
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech