
Platform Design, Age Gating, and Why Language and Intent Matters in the Social Media Legislation Debate



In a recent article we wrote titled, “What the L.A. Social Media Case is Revealing,” we discussed an ongoing civil lawsuit in Los Angeles that is beginning to shed light on how some social media platforms were designed and how those design choices may influence user behaviour (1).


For parents and caregivers trying to understand what all of this means, one point is becoming increasingly clear. Many of the risks youth and teens encounter online are not simply the result of poor choices; they are often closely connected to how certain digital platforms are built.


Features such as algorithmic recommendation systems, engagement driven feedback loops, push notifications, streak systems, and infinite scrolling are not accidental. They are intentional design decisions meant to keep users engaged for as long as possible. Engagement is valuable because attention can be converted into advertising revenue and data that can be sold to others for a profit.


When these design elements are discussed in policy debates, however, the current political conversation often shifts toward restricting young people rather than examining the systems themselves, and that distinction matters.


A growing number of governments, including here in Canada, are considering or introducing legislation that would restrict youth access to social media through age gating laws or outright bans. On the surface, this approach may appear logical. If youth and teens are experiencing harm on social media, limiting their access might seem like a solution. However, this strategy, we would argue, focuses on the symptoms of harm rather than the underlying cause.


If the risks associated with social media stem largely from how platforms are engineered, then preventing youth and teens from using them does not change the architecture that created those risks in the first place. The systems that prioritize engagement, amplify emotional content, and continuously recommend new material remain unchanged. More importantly, the economic incentives driving those systems remain unchanged. In other words, the structure that produces many of the problems continues to exist exactly as it did before a legislated age gate.


To understand why design matters, parents and caregivers should know a little about how these platforms operate behind the scenes. Key design features include:


Algorithmic Recommendation Systems


Most major social media platforms rely on algorithmic recommendation engines. These systems analyze user behaviour such as what someone clicks, watches, likes, or comments on. Based on that data, the system predicts what content is most likely to keep the user engaged and then recommends more of it.


This process can be helpful in some contexts. It can connect users with communities, hobbies, or educational content they might not otherwise discover. However, the same systems can also amplify extreme, emotionally charged, or sensational content because those types of posts tend to generate strong engagement. For youth and teens whose brains are still developing impulse control and emotional regulation, this environment can be especially powerful.
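

For readers who want a more concrete picture, here is a deliberately simplified sketch, written in Python, of the "score and rank" loop described above. Every name, topic, and number in it is hypothetical and purely illustrative; real platforms rely on far more complex machine-learned models, but the basic idea of ranking content by predicted engagement is the same.


```python
# A toy illustration of engagement-based ranking. All names, topics, and the
# scoring formula are made up for illustration; this is not any platform's code.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    topic: str


def predicted_engagement(post: Post, user_history: dict) -> float:
    """Estimate how likely the user is to click, watch, or comment,
    based only on how often they engaged with this topic before."""
    past_engagements = user_history.get(post.topic, 0)
    # More past engagement with a topic -> higher predicted engagement.
    return 1.0 - 1.0 / (1 + past_engagements)


def recommend(candidates: list[Post], user_history: dict, k: int = 5) -> list[Post]:
    """Rank candidate posts by predicted engagement and return the top k."""
    return sorted(candidates,
                  key=lambda p: predicted_engagement(p, user_history),
                  reverse=True)[:k]


# A user who has engaged heavily with one topic keeps getting more of it.
history = {"fitness challenges": 12, "homework help": 1}
feed = recommend([Post("a", "fitness challenges"),
                  Post("b", "homework help"),
                  Post("c", "fitness challenges")], history)
print([p.topic for p in feed])  # the already-favoured topic rises to the top
```


Even in this toy version, the feedback loop is visible: the more a user engages with a topic, the more of that topic the system serves back to them.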


Engagement Feedback Loops


Many platforms are designed to provide constant feedback signals such as likes, comments, shares, streaks, and follower counts. Over time, this can encourage users to check and post repeatedly in search of those variable rewards. While adults experience these effects as well, adolescents are generally more sensitive to social feedback due to developmental factors within the brain’s reward circuitry.
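

For those curious about what a "variable reward" pattern looks like, the tiny simulation below, again purely illustrative and with made-up numbers, captures the key idea: feedback arrives unpredictably, and that unpredictability is what makes re-checking so compelling.


```python
# A hypothetical simulation of an intermittent (variable) reward schedule.
# The specific numbers are invented; the pattern is what matters.
import random


def check_app() -> int:
    """Simulate opening the app: most checks return no new social feedback,
    but occasionally there is a burst of likes or comments."""
    return random.choice([0, 0, 0, 0, 1, 3, 7])


rewards = [check_app() for _ in range(10)]
print(rewards)  # e.g. [0, 0, 3, 0, 0, 7, 0, 1, 0, 0]
```


Research on reinforcement suggests that rewards arriving on an unpredictable schedule tend to encourage repeated checking more strongly than predictable ones, which is why this pattern matters for young users in particular.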


Infinite Scroll and Frictionless Use


Another design feature often discussed in research is infinite scroll. Rather than reaching a natural stopping point, users can continue scrolling indefinitely through an endless stream of new content. Without built-in pauses or friction points, people often spend much longer on platforms than they originally intended.


This design pattern is not unique to social media; it appears in many digital products, including streaming platforms and online shopping sites. However, when combined with algorithmic recommendations and variable reward feedback systems, it can create highly immersive experiences.
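

For readers who want to see why there is no natural stopping point, here is a simplified, hypothetical sketch of how a cursor-based "endless feed" works. The function and variable names are ours, not any platform's; the key detail is that every request returns both a batch of content and a pointer to the next batch, so the feed never signals that you are done.


```python
# A hypothetical sketch of cursor-based infinite scroll.
# Each request returns a page of posts plus a cursor for the next page,
# so the client can always ask for "one more" batch.
def fetch_feed_page(cursor: int, page_size: int = 10) -> tuple[list[str], int]:
    """Return the next batch of posts and the cursor for the batch after that.
    A ranked feed can keep generating candidates, so the cursor never runs out."""
    posts = [f"post_{i}" for i in range(cursor, cursor + page_size)]
    return posts, cursor + page_size


cursor = 0
for _ in range(3):  # the client keeps requesting pages as the user scrolls
    posts, cursor = fetch_feed_page(cursor)
    print(f"loaded {len(posts)} more posts, next cursor = {cursor}")
```


Contrast this with a printed magazine or a numbered list of search results, both of which have a visible end; the design choice to remove that endpoint is part of what keeps people scrolling.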


If these types of design features contribute to some of the challenges associated with social media use, banning youth and teens from platforms does little to address the root of the problem. The architecture remains intact, and that creates several unintended consequences, such as:


The Design Still Affects Those Over 16


Even if youth and teens were successfully restricted from using certain platforms, the same design features would continue to affect those over the age of 16.


Those over the age of 16 are not immune to algorithmic persuasion, emotional amplification, or engagement loops. In fact, many of the misinformation, political polarization, and online harassment issues that researchers study occur primarily among young adult users. If the goal is to address systemic risks within digital platforms, which it should be regardless of age, then focusing solely on restricting access for those under the age of 16 overlooks a much broader issue.


Youth and Teens Often Find Workarounds


History also shows that youth and teens frequently find ways around digital restrictions. Age verification systems can be bypassed using VPNs, borrowed identification, secondary accounts, or platforms that fall outside regulatory frameworks.


When that happens, youth and teens do not disappear from digital spaces. They simply migrate to areas that may be less visible to parents, caregivers, educators, or researchers. This can make potential harms harder to detect and understand.


It Shifts Responsibility


Perhaps the most significant issue is where responsibility ultimately lands. When legislation focuses primarily on banning youth, it can subtly shift the narrative toward blaming teens and families rather than examining the systems that shape digital environments. Even when a social media platform introduces an age-verification system, those measures can often still be bypassed. When that happens, the company can claim it fulfilled its legal responsibility by putting the system in place. Responsibility can then shift back toward parents or caregivers, with the argument that they failed to supervise or enforce the platform’s age rules.


Parents and caregivers are then left still feeling responsible for managing technologies that were deliberately designed by teams of behavioural scientists, engineers, and data analysts to capture attention, the very thing age gating was supposed to mitigate. That imbalance is difficult for any family to manage alone.


We recently came across an online argument from an advocate supporting a social media age restriction of 16. In essence, the argument was that limiting youth access may be the only effective way to force platforms to reconsider their business models. The reasoning suggested that reducing youth participation would directly impact engagement metrics and advertising revenue, and that financial pressure is the only language these companies truly respond to. In other words, let’s use youth and teens as the tip of the spear to slay the dragon of big tech.


We have actually seen this argument from more than a few people, so it is worth addressing directly.


At one level, we understand the logic. History shows that large corporations often respond when financial incentives shift. Real harm creates real urgency, and many people understandably want a strategy strong enough to force change. However, when we say, “let’s ban the kids so the companies feel it,” we need to pause and carefully examine what that framing implies.


If our central criticism is that platforms reduce young people to engagement statistics, impressions, and monetizable data points, then positioning youth as an economic pressure tactic risks adopting the same logic from the opposite direction. The conversation shifts from, “How do we protect and equip young people?” to, “How do we use young people to influence corporate behaviour?” That is not a small shift; it is an ethical one, in our opinion.


In that framing, youth and teens are no longer centred primarily as developing citizens who need guidance, literacy, support, and protection. They become leverage points, a means to an end. Their removal is valued not simply because it may support their growth, but because it may also dent engagement metrics and quarterly earnings.


That line of reasoning mirrors, in a different form, the same engagement logic many critics say social media companies use. The difference is that instead of maximizing youth presence to drive revenue, the proposal seeks to reduce youth presence to compel compliance. In both cases, young people risk being treated as variables, or as pawns, in a strategy rather than as human beings whose needs and rights should remain central.


When we frame youth primarily as numbers that can be moved up or down to influence outcomes, whether for profit or for policy change, we risk sidelining their agency and reinforcing the very mindset we say we oppose!


Blanket bans can also send a message, whether intended or not, that young people are passive victims who must be extracted from digital environments rather than educated within them. That can undermine the work many families, educators, and youth serving professionals are doing to build digital resilience, critical thinking, and responsible participation.


At The White Hatter, we often talk about preparing rather than prohibiting. When policy removes youth entirely as a tactic, it sidelines the very developmental work that helps build long-term capacity. Prevention and protection without preparation can leave youth more vulnerable when access inevitably comes.


There is also a political risk here. When youth become regulatory bargaining chips, debate can become more performative than structural. Lawmakers can point to exclusion and claim decisive action, while deeper reforms, such as algorithmic transparency, advertising limits, or meaningful design accountability, remain politically harder to achieve and easier to avoid. The optics of an age ban are strong, while legislating systemic change is politically far less certain.


Meanwhile, focusing on youth removal can relieve pressure to address the actual drivers of harm, including:


  • problematic design loops


  • data harvesting business models


  • algorithmic amplification of extreme content


  • behavioural advertising incentives


If companies are still allowed to maintain the same engagement architecture for those over 16, including adults, then the business model survives and the design logic remains intact. If the goal is to shift incentives, then the pressure should align with the harm, and that means targeting the systems, not the age group.


We do understand the argument that financial consequences are often the only signal major technology companies truly respond to. Where we differ is in who should carry that pressure.


Instead of using youth access as the economic lever, policymakers should focus on the systems and design choices creating harm in the first place. If platforms rely on engagement loops that reward outrage, amplify extreme content, harvest behavioural data, or blur the lines between advertising and social connection, then accountability should land there.


Legislation can be designed to directly address those features. Tie penalties to manipulative architecture, tie consequences to opaque algorithmic systems, tie fines to data practices that prioritize profit over well-being. If a company continues deploying harmful design despite clear standards, then meaningful financial consequences should follow, not symbolic fines that can simply be absorbed as the cost of doing business, but penalties large enough to influence executive decision making and shareholder expectations.


That approach keeps youth centred as stakeholders whose safety matters, not as leverage whose absence matters. If we believe financial pressure changes corporate behaviour, then let us apply that pressure where it belongs. Shift incentives at the structural level. Hold companies accountable for how their products are built and monetized. When the cost of harmful design outweighs the profit it generates, business models begin to evolve.


The goal should not be to remove young people in order to create pain. The goal should be to change the architecture so the environment itself becomes safer for everyone.


Another important issue emerging in policy discussions is how these restrictions are being described. Some advocates argue that youth should not be banned from social media, but that their access should simply be delayed until they reach a certain age or developmental stage.


At first glance, this may sound like a meaningful distinction. However, when we look at the definitions and the practical impact, the difference becomes much less clear.


According to Merriam-Webster:


Ban (verb): To prohibit or forbid, especially by legal means.

Ban (noun): An official or legal prohibition.


Delay (verb): To put off to a later time; postpone. To stop, detain, or hinder for a time.

Delay (noun): The act of postponing or slowing. A period of time by which something is late or postponed.


In our opinion, based on the above noted definitions, a delay becomes a ban when the postponement is legislated, mandatory, indefinite, or tied to conditions a youth or teen cannot realistically control.


It is understandable that some advocates prefer the softer language of “delay.” It sounds less punitive and more developmental. However, when access to a product or service is prohibited by law until an adult defined threshold is met, the function of that policy is effectively the same as a ban. Changing the vocabulary does not change the outcome.


So, when does a “delay” function as a ban? We believe the trigger is the point at which access is prohibited by law; at that point, it is no longer a delay but a legislated ban.


If legislation states that youth cannot access social media until a specific condition is met, the activity is prohibited. Whether that prohibition is framed as “not yet,” “not until platforms fix themselves,” or “not until you reach a certain age,” the outcome remains the same. The young person is not allowed to participate, and that fits the plain definition of a ban. 


Some proposals suggest youth access should be delayed until technology companies redesign their platforms or until certain safety benchmarks are achieved. From a youth or teen’s perspective, those conditions are completely outside their control. If there is no clear timeline or measurable pathway to access, the restriction becomes indefinite and, once again, operates as a ban.


Another argument suggests youth should only access social media once they reach a certain level of cognitive or emotional maturity. The challenge is that maturity does not develop on a fixed schedule. Some twelve-year-olds demonstrate strong digital awareness and responsibility. Some sixteen-year-olds struggle with impulsivity and peer pressure. When policy assumes a single developmental timeline for all youth, it creates a blanket restriction, and again, that functions as a ban.


Some supporters of “delays” frame them as a form of harm reduction. However, the concept of harm reduction in public health developed around the understanding that risky behaviours do not disappear simply because they are prohibited. Instead, harm reduction focuses on making behaviours safer while they occur through education, skill-building, and guided support. Removing access entirely until a future condition is met does not follow this model; it is a prohibition rather than true harm reduction.


Youth know when adults are softening the truth. Adults sometimes underestimate how perceptive young people are, especially when it comes to technology and policy.


Youth and teens live in an “onlife” world, where online and offline experiences blend seamlessly. Rules, restrictions, and consequences are experienced quickly and clearly. Many youth and teens immediately recognize that a so-called “delay” on social media access functions exactly like a ban. From their perspective, a ban labeled as a delay is still a ban, and changing the language does not change the effect.


Here is what we have learned from speaking to more than 680,000 youth and teens. When adults rely on softer wording, young people often interpret it as rhetorical packaging rather than honest communication. That can create credibility issues and weaken trust. If we want youth to take digital safety conversations seriously, clear language matters. Transparency builds credibility, while euphemisms can erode it.


So, if restricting youth access does not address the core issue, what alternatives exist? Many researchers, policymakers, and digital safety advocates, including us here at The White Hatter, are increasingly pushing for legislation and regulation that targets the design of digital systems rather than simply limiting who can access them. This approach is often referred to as “safety by design.” Examples of design-focused oversight could include:


  • greater transparency about how recommendation algorithms operate


  • independent audits of platform design and data practices


  • limits on manipulative design features targeting minors


  • default safety settings for youth accounts


  • clear reporting and accountability mechanisms when harms occur


Technology companies have created tools that connect families, support learning, and allow youth to express creativity in remarkable ways. However, innovation and profit should exist alongside reasonable expectations around user well-being and public accountability.


Industries that affect public health, transportation, food safety, and pharmaceuticals all operate under regulatory frameworks designed to protect consumers. Digital environments that influence billions of people arguably deserve similar scrutiny.


So what can parents and caregivers do right now while we wait for the right legislation? While larger policy debates continue, here are practical steps parents can take to help their children navigate digital environments more safely today.


Focus on Digital Literacy


Teaching youth and teens how algorithms work, why platforms promote certain types of content, and how engagement systems operate can help them become more critical consumers of digital media. Understanding the system reduces its power.


Keep Conversations Open


Youth and teens are far more likely to discuss online concerns when they feel they will not be judged or immediately punished. Open dialogue allows parents to understand what their children are experiencing online rather than relying on assumptions.


Emphasize Balance


Helping youth build healthy routines that include sleep, physical activity, offline friendships, and hobbies can naturally reduce the potential for excessive digital engagement. Balance often works better than strict prohibition.


Stay Curious About Technology


Parents and caregivers do not need to become technical experts, but having a basic understanding of the platforms youth use can make conversations far more meaningful. Curiosity goes a long way.


Connect With Your Government Legislator


Parents and caregivers need to become activists. We have reached a tipping point, and we need to ensure that we get legislation right. Contact your local politicians and share this article with them.


The ongoing social media litigation in Los Angeles may ultimately reshape how courts, policymakers, and the public think about the relationship between platform design and user behaviour. 


For parents and caregivers, the most important point to understand is that the challenges connected to youth and teens using social media rarely come from one single cause. They are shaped by a mix of factors that include platform design, profit-driven incentives, adolescent development, family environments, education systems, and wider cultural influences.


Reducing that complexity to a simple solution like age gating teens from social media risks overlooking the deeper structural issues we have discussed throughout this article, issues that deserve genuine attention.


Policies that focus only on restricting youth access may feel decisive, but without examining how digital systems are built, they do little to change the environment itself. In our view, bans without meaningful systemic oversight are not just ineffective, they represent a form of wilful blindness, where responsibility is redirected rather than addressed.


If we want safer digital spaces for youth and teens, the discussion cannot focus only on how youth behave online. It also needs to examine how the platforms they use are designed in the first place. Age gating laws by themselves do not address that reality. They attempt to contain the symptoms while leaving the underlying problem untouched. Rather than constantly managing the damage, the real work is fixing the design choices that are creating the harm in the first place.


Technology is not disappearing, and social media will continue to evolve. Social AI systems are becoming more immersive and emotionally responsive. The central policy question is not only, “Should youth be there?” It should also be, “How should the system behave when youth are there today and into the future?”


If amplification algorithms deepen vulnerability, then regulating those mechanisms may be more precise than removing access for everyone. Age gating may reduce early exposure, but design reform addresses structural incentives.


The most resilient solution likely requires both developmental awareness and systemic accountability.


At The White Hatter, we believe in practical progress over symbolic solutions. Protecting youth means acknowledging developmental science, recognizing uneven risk distribution, and holding systems accountable for how they shape attention, emotion, and behaviour.


We believe the purpose of legislation should not be to remove youth and teens from the onlife world they are growing up in. Digital spaces are now part of the social, educational, and cultural environment that young people inhabit. Trying to exclude them entirely from that environment does little to prepare them for it.


A more constructive goal for legislation is to ensure that the digital systems shaping young people are built with meaningful guardrails. These guardrails should recognize how adolescence actually works, including the developmental realities of curiosity, risk-taking, identity formation, and sensitivity to social feedback.


Thoughtful legislation and policy should focus on the design and accountability of the systems youth encounter. That means encouraging transparency, responsible product design, and safeguards that reduce unnecessary harm while still allowing young people to learn, explore, and gradually develop the skills they will need to navigate an increasingly onlife world.


Digital Food For Thought


The White Hatter


Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech



References:

