When Courts & Verdicts Focus on Social Media Design, Shouldn’t Legislation Do the Same? A Message For Canadian Parents, Caregivers, & Legislators
- The White Hatter


Caveat: This is a follow-up to our article, “The New Mexico and LA Social Media Rulings: Some Of Our Thoughts and Concerns,” which we posted yesterday (1)
Across Canada, there is growing political discussion about introducing social media legislation aimed at youth and teens, similar to what has been implemented in places like Australia. At first glance, proposals such as age gating can feel like a logical step: if we restrict access, we reduce risk, right? It sounds simple. However, as parents and caregivers, we need to ask a deeper question: “Are we solving the actual problem, or are we addressing what feels like the problem?”
Recently, two civil trial verdicts in the United States involving major social media platforms have added an important dimension to this discussion. While these were U.S.-based cases, their impact has reached far beyond those borders. For many families around the world, the outcomes felt like a form of validation. Long-standing concerns about potential harm were not brushed aside; they were formally recognized and taken seriously.
However, what is most important is what the juries actually focused on in both cases. The findings were not centred on the idea that youth simply use social media, and the juries did not say that these platforms were inherently harmful in all contexts. Instead, the focus was much more specific and, in our view, much more useful. The juries found negligence in the “design” of these platforms. Not content, not access alone, and that distinction matters more than most post-verdict headlines and online discussions are highlighting.
Young people are wired to be curious. They explore, look for connection, seek validation, and try to figure out who they are and where they belong. That is not a weakness, it is a normal and important part of development.
What the juries recognized is that some platform features are deliberately designed to align with this stage of development in ways that can extend engagement. Tools like infinite scrolling, autoplay, algorithmic systems, and attention grabbing notifications are not there by chance. They are built to keep users coming back and staying longer.
These are not neutral design choices. They are intentional systems, created with a clear purpose, and they shape how youth, teens, and even adults experience and interact with these platforms.
This is not about blaming youth for using technology; it’s about understanding the environment they are stepping into. Age-based legislation may feel protective, but it comes with what we believe are two real limitations:
#1/ Age gates are often easy to bypass. Many youth and teens already know how to work around them, whether through shared accounts, inaccurate birthdates, or tools that mask location or identity, and
#2/ More importantly, they do not address the core issue identified in these recent court cases.
If the design of a platform can contribute to harm, then restricting access without changing that design simply delays exposure. It does not reduce the risk when access eventually happens. In some cases, it may even reduce opportunities for guided learning, where parents and caregivers can help youth build the skills they need to navigate these environments safely.
If we take the U.S. courts’ findings seriously, then legislation and regulation should begin to shift toward how platforms are built, not just who is allowed to use them. That could include:
Greater transparency around how algorithms recommend content
Limits on features that are designed to maximize time on platform at the expense of well-being
Stronger safeguards for youth accounts by default
Clearer warnings and education about how these systems work
This approach does not take responsibility away from parents, caregivers, or young people, and it does not place it entirely on the shoulders of industry or government. Instead, it reflects a more accurate understanding of how digital environments actually work. Responsibility is shared, and when any one part is missing, the system becomes less effective.
Families play a critical role as guides, or what we like to call “digital sheepdogs”. What happens at home shapes how youth and teens interpret and respond to what they experience online. Conversations, boundaries, and modelling behaviour all help build the foundation that youth and teens rely on when they are navigating digital spaces independently. This is where values are taught, questions are answered, and trust is built.
Youth and teens, in turn, are not passive users. They are active participants in their digital lives. As they grow, they learn how to make choices, manage their time, recognize influence, and understand the impact of their actions. Digital literacy is not about getting everything right, it’s about steady growth, helping young people build the ability to think critically and make thoughtful choices in onlife spaces that are always changing.
At the same time, companies are responsible for the environments they create. The design of a platform is not neutral. Features, algorithms, and engagement strategies all shape user behaviour in powerful ways. When those systems are purposely built to prioritize attention above all else, especially for younger users, the impact can be significant. Expecting better from these companies is not optional, it is necessary and should be legislated.
Governments also have a role to play by setting clear, evidence informed standards that prioritize safety, transparency, and accountability. Effective legislation and policy should not be driven by headlines or pressure alone. It should reflect a clear understanding of how technology is designed and used, and it should focus on creating conditions where safer digital environments are the default, not the exception.
When families, youth, companies, and governments each take responsibility within their role, we move closer to a balanced and practical approach. One that does not rely on a single solution, like age gating, but instead recognizes that protecting youth and teens in today’s onlife world requires all parts working together.
When we focus only on banning or restricting access, we risk narrowing the conversation too much. Social media is not a single, uniform experience. For many youth and teens, it can provide:
Connection to friends and communities
Opportunities for creativity and self-expression
Access to information, learning, and entertainment, and
A sense of belonging, especially for those who may feel isolated offline
At the same time, there are real risks that cannot be ignored. Both of these truths can exist at the same time. Good legislation and policy, like good parenting, require us to hold both truths at once.
As parents, caregivers, and legislators, this moment offers an opportunity. Not to step back, but to step in with more clarity. If the conversation is shifting toward design, then one of the most powerful things we can do is help our kids understand how these systems work.
Start by having open, everyday conversations about why certain content keeps showing up in your child’s feed. Help youth and teens understand that what they see is not random; it’s shaped by algorithms that learn from what they watch, like, share, and even how long they pause on a post. When youth and teens recognize that their online experience is being curated for them, they are more likely to question it rather than simply accept it.
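For the technically curious parent or legislator, the kind of “curation” described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not any platform’s actual system: the signals and weights are invented for the example, while real platforms tune far more complex versions of them automatically from billions of interactions. The point it demonstrates is simply that a feed ranked by engagement signals will surface whatever a user lingered on, whether or not that content is good for them.

```python
# Toy sketch of an engagement-ranked feed. The signal names and
# weights below are invented for illustration only.

def engagement_score(post):
    # Each interaction signal nudges the score upward; even a pause
    # while scrolling counts as a signal of interest.
    return (
        3.0 * post["liked"]
        + 2.0 * post["shared"]
        + 0.1 * post["seconds_paused"]
    )

def rank_feed(posts):
    # Posts the user lingered on or reacted to float to the top,
    # with no notion of whether they are healthy or harmful.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "news_item", "liked": 0, "shared": 0, "seconds_paused": 2},
    {"id": "outrage_clip", "liked": 1, "shared": 1, "seconds_paused": 45},
    {"id": "friend_update", "liked": 1, "shared": 0, "seconds_paused": 5},
]

for post in rank_feed(posts):
    print(post["id"])  # the clip that held attention longest ranks first
```

Even in this toy version, the attention-grabbing clip outranks a friend’s update and a news item, which is exactly the dynamic worth discussing with youth and teens: the feed rewards whatever holds their gaze.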
It is also important to talk about how notifications are designed to pull their attention back to a device. Whether it is a like, a comment, or a streak reminder, these alerts are not just informational, they are intentional. Helping youth and teens see this design for what it is can give them a greater sense of control over when and why they engage, rather than feeling like they have to respond every time their phone lights up.
Another valuable conversation is about how it feels to spend long periods of time scrolling. Encourage your child to reflect on their own experience. Do they feel energized, connected, and inspired, or drained, distracted, and overwhelmed? Building this kind of self-awareness helps them start to recognize patterns in their behaviour and make more informed choices about how they spend their time online.
It is equally important to explore when technology is adding value to their life and when it may be taking something away. Not all screen use is the same. Creating, learning, and connecting with others can be positive and meaningful experiences. On the other hand, passive use that leaves them feeling worse than when they started is worth paying attention to. Helping youth and teens distinguish between these experiences gives them a clearer framework for making decisions.
These conversations are not about creating fear or placing blame. They are about building awareness. When youth and teens understand the onlife environment they are in, including how it is designed and how it affects them, they are in a much stronger position to navigate it with intention and confidence.
The recent U.S.-based court decisions are an important step. They signal that we are beginning to better understand where risk can come from in digital spaces. The next step is ensuring that legislation and policy reflect that understanding.
Age based bans and access restrictions focus on who can enter, but they do little to change the environment itself. A youth who is blocked at 15 and then gains access at 16 is not entering something different or safer, they are stepping into the same digital space that raised concerns in the first place, one shaped by systems designed to capture and hold attention. The feeds, recommendations, and messaging features are all driven by the same engagement focused architecture. Simply changing the age of the user does not change how that environment operates, nor does it remove the influence of those design choices.
It is also important to recognize that these systems do not exist in isolation. Youth and teens move between platforms, often quickly, and many of those platforms rely on similar design strategies. If one door is closed, another often opens, but the underlying mechanics remain the same. This is why focusing only on access can create a false sense of protection. The risk is not tied to a single app; it is tied to a broader design model that spans the digital ecosystem.
When we start to think in these terms, the goal becomes clearer. It is not just about restricting access, it’s about improving the environment itself. Doing so would not only better support youth and teens, but would create a healthier and more transparent digital experience for everyone who uses these platforms.
Focusing only on access (age gating) may feel like action, but it risks missing the deeper issue. Focusing on design, however, moves us closer to meaningful change. This is what we at the White Hatter believe Canadian legislation should target, rather than age gating (2).
This is also where Canadian legislation and regulation can have the greatest impact. The focus should not be limited to who is allowed on these platforms, but rather on how those platforms are designed and operated!
Call To Action:
For those Canadians reading this article, please forward it to your Member of Parliament!
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech
Reference: