Nova Scotia Liberal Party Pushing For Age Gating Social Media Legislation: A Reasoned and Cited Rebuttal
- The White Hatter

Caveat: Many digital literacy and internet safety advocates, including us, are committed to grounding our advocacy in rigorous, evidence-based research. This approach ensures that the information shared with our audiences is current, accurate, reliable, and beneficial. However, we are increasingly concerned about the growing spread of alarmist messaging in the media, much of which lacks credible evidence and relies on anecdote or opinion. Too often, this type of narrative does more harm than good, especially when it is used to advance a political agenda. It is from this perspective that we present this rebuttal for consideration, with the aim of offering balance to the current one-sided conversation surrounding youth, teens, families, and legislation.
This week, the Nova Scotia minority Liberal Party announced plans to introduce legislation that would block anyone 16 and under from accessing social media platforms. (1) At first glance, this may sound like a bold step to protect youth and teens. Scratch beneath the surface, however, and the move looks far more political than practical. It follows a global playbook being used by some individuals and special interest groups: use youth as the “point of the spear” to slay the evil social media dragons, win headlines, and stir parental emotions and anxieties.
This announcement is interesting to us given that here in Canada, both provincial and federal governments permit minors under 18, and even 16, to take part in activities that carry significant responsibility or even potential danger. Often, this is allowed when a parent or guardian provides consent, but in some cases, no parental involvement is required at all. (2) Provincial and federal governments across Canada already accept the idea that young people are capable of making consequential decisions under the right circumstances, so why should this not apply to social media as well?
What makes 16 or even 18 a so-called magical age? In our experience presenting to more than 680,000 youth and teens across Canada, we’ve seen firsthand that maturity does not follow a strict calendar. Some 14-year-olds demonstrate the judgment and self-control of a 19-year-old, while some 19-year-olds still exhibit the impulsiveness of a younger teen. This reality challenges the idea that an arbitrary age threshold is the best way to decide when young people are ready to engage with technology, the internet, and social media. In most cases, we believe that parents and caregivers are far better positioned than government to assess their child’s readiness. They know their child’s temperament, emotional development, and capacity for responsibility in ways that legislation simply cannot account for. While laws can set broad frameworks, the nuanced and individualized decision of when a child is prepared to step into the digital world should rest with parents, caregivers, and families, not politicians.
Nova Scotia politicians driving this proposal are relying on emotionally loaded terms such as “epidemic of depression,” “rise in suicides,” and “mental health crisis” to support their case. These kinds of phrases are crafted to stir feelings rather than encourage thoughtful debate. In their press release, they also referred to unspecified “studies” without citing sources, an approach that, in academic circles, is often criticized as a form of “enshittification.” (3)
Enshittification is a term coined by Cory Doctorow and now listed in the Merriam-Webster dictionary. As used here, it describes a rhetorical strategy in which an individual or group pushes out overwhelming information clutter and manipulative content, without citations that can be reviewed, to advance a political agenda rather than a sound evidence-based approach. It often shows up as repeated, simplified narratives that are easier to believe than complex, multifactorial truths. When repeated often enough, something called “flooding the zone” in politics, these messages start to feel like fact, even if they are not backed by strong evidence.
What stood out in the news coverage was that the Nova Scotia chapter of the advocacy group “Unplugged Canada,” which is backing this legislation, leaned heavily on enshittification-style language. During the interview, their representative made sure to have one book prominently displayed in the background: Dr. Jonathan Haidt’s The Anxious Generation (4), a work often held up as the definitive text supporting their position. While Dr. Haidt has captured public attention, his thesis is far from settled science. In fact, a significant body of peer-reviewed, evidence-based research challenges or outright contradicts his claims. (5) Yet these counter-narratives from scholars and highly reputable evidence-based researchers such as Dr. Pete Etchells (6) and soon-to-be Dr. Catherine Knibbs (7), to name just two, are conspicuously absent from the political grandstanding and the public discussions taking place on this issue.
The idea of a social media ban has the seductive lure of simplicity: just delay, block, or ban access until youth and teens are “old enough,” and safety is achieved. However, this is an illusion. Technology shifts faster than any rulebook; just look at what is happening with artificial intelligence. Many youth are now engaging with AI-based “companionship” apps that research shows can raise mental health concerns for some youth and teens. However, these are not “social media” platforms, so the proposed Nova Scotia legislation would likely not apply to them.
Research consistently shows that the majority of young people use social media without lasting harm. (5) In fact, many youth and teens can benefit socially, emotionally, and even academically. (8)(9) The minority who do experience distress are often the same young people who already struggle offline. (10)(11) Vulnerability, not technology, is the real issue. The question that should be asked is, “why do some people prosper online while others get into real difficulty?” What reduces vulnerability is not sweeping nanny laws, but support that includes engaged parents, digital literacy education, and healthy boundaries at home and in schools when it comes to technology, the internet, and social media.
Worst case scenarios such as sextortion, cyberbullying, and pornography exposure are often brought up, and rightly so. No one dismisses them, certainly not us. In fact, just today we helped another family whose 18-year-old son fell prey to sextortion. We get it! However, here’s the truth: bans do not prevent these harms. As online investigators, we know that predators thrive in secrecy. Push teens underground via legislation, and you reduce parental visibility while doing little to reduce risk. In fact, bans can make the job of keeping kids safe harder.
The Nova Scotia legislators and proponents of age gating often point to international examples. However, when we look closely, the results are a little more nuanced than their messaging would suggest.
In Australia, the government tested various age verification systems such as selfie-based age estimation, credit card checks, even motion-based gestures. The outcome? None were foolproof, and some systems misjudged age badly. (12) Australian teenagers, predictably, were confident they could bypass the restrictions, and they are doing so. In other words, at the time of writing this article, the Australians DO NOT have an effective age gating protocol in place to support their legislation, which comes into effect soon!
However, here’s an interesting wrench to throw into this discussion: research in Australia by the eSafety Commissioner also found (13):
Of children aged 8–12 who used social media, 36% had their own account, and 77% of those accounts were set up with the help of a parent or caregiver.
Another 54% used social media through a parent’s or caregiver’s account.
When parents or caregivers do this, they may unintentionally put their child at greater risk. The platform will treat the account as belonging to an adult rather than a youth or teen, which means safeguards designed for younger users on some of these platforms will not be activated.
Anecdotally, we see the same thing happen here in Canada as well. Given this reality, how is the Australian legislation, or even the Nova Scotia proposed legislation, going to stop this from happening? Answer - it won’t!
In the UK, where such legislation already exists, laws requiring facial recognition or ID checks are being widely bypassed through VPNs and other workarounds. (14)(15) These aren’t just the tricks of “tech-savvy” kids. They’re basic moves for anyone with access to Google or YouTube.
Another major concern we have with 16+ age verification legislation is enforcement: to enforce it, platforms would need photo IDs, biometric scans, or third-party verification services from all users, not just youth and teens, but us adults as well. While marketed as secure, we guarantee these age verification companies will create tempting targets for hackers and even hostile state actors.
We have already seen how vulnerable supposedly “gold-standard” privacy and security systems can be:
UK Ministry of Defence Breach: Data from 100 British officials, including members of special forces and MI6, was exposed, endangering thousands of Afghans, as reported by the BBC. (16)
Canada Revenue Agency Breach: Sensitive taxpayer information was leaked in a breach documented by the Office of the Privacy Commissioner of Canada. (17) This is only one of several major cyber privacy breaches affecting large companies in Canada. (18)
If major government agencies and large multi-million-dollar companies can be compromised, third-party age verification databases are no exception.
That doesn’t mean age verification should be dismissed outright; there are clear cases, such as access to pornography sites, where it plays a necessary role in protecting young people. However, what it does mean is that any solution must be designed to protect privacy as much as it enforces restrictions.
Age verification should not come at the cost of creating massive databases of personal information, no matter the age of the user, which could be vulnerable to misuse or breach. Instead, the focus needs to be on privacy-preserving approaches that strike a balance between safeguarding youth and respecting the rights of all internet users. This is important given that currently there is no implemented age assurance solution anywhere in the world that provides the necessary balance between reliable accuracy and user privacy. Some have said that countries like France have done it, but they fail to acknowledge that research in France on this issue found serious privacy challenges with age verification. (19) These same voices also fail to report that in France the use of VPNs to bypass the age verification legislation is a significant issue.
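To make the idea of a privacy-preserving approach concrete, here is a deliberately simplified Python sketch, our own illustration rather than any deployed system: a trusted issuer checks a person’s ID once, discards it, and hands back only a signed “over 16” claim; the platform then verifies the signature and learns nothing else about the person. A real deployment would use public-key signatures, anonymous credentials, or zero-knowledge proofs rather than the shared secret used here for brevity.

```python
import hashlib
import hmac
import json
import secrets

# Assumption for this sketch only: the issuer and the platform share a key.
# Real systems would use asymmetric signatures so the platform cannot forge tokens.
SHARED_KEY = secrets.token_bytes(32)

def issue_token(user_is_over_16: bool) -> dict:
    """Issuer checks the ID once, discards it, and returns only a signed claim.
    Note what is NOT in the claim: no name, no birthdate, no ID number."""
    claim = {"over_16": user_is_over_16, "nonce": secrets.token_hex(8)}
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def platform_verifies(token: dict) -> bool:
    """Platform checks the signature and the single boolean claim; it never
    sees or stores the underlying identity document."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_16"]
```

The design point the sketch illustrates is the one made above: the check can be momentary and minimal, with no permanent database of IDs or biometrics accumulating on the platform’s side.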
Some people argue that age verification is nothing new: you already have to show ID to buy alcohol, cigarettes, or vapes, when entering a bar, or if a police officer pulls you over while driving. However, here’s the critical difference: in those cases your ID is reviewed by a human being, face to face, and then returned to you; no permanent digital record is created. Digital age verification, on the other hand, often requires scanning, transmitting, or storing sensitive personal information. That shift, from a momentary human check to a digitized, recorded transaction, introduces significant privacy and security vulnerabilities for both teens and adults. Canadian privacy lawyer David Fraser, a recognized legal expert on these issues, highlights these concerns in a recent YouTube video about the federal age-verification bill here in Canada. In our view, the same implications would also apply to the provincial age-gating legislation being proposed by the Nova Scotia Liberals. (20)
The takeaway is clear: bans and verification schemes can’t match the ingenuity of young people or the constant change of today’s onlife world, and they raise serious privacy concerns for everyone.
If bans don’t work, what does? Evidence points to the everyday presence of adults who are equipped, empathetic, and engaged, as the cornerstone of online safety. Parents and caregivers who talk openly with their children, set realistic boundaries, and model healthy tech use raise youth who are more resilient online. Schools that integrate digital literacy into curricula give students tools to recognize dark patterns, resist manipulation, and navigate risks.
Safety comes from preparation, not prohibition. Resilience is built by guiding youth through the onlife world, not by pretending we can fence it off until an arbitrary birthday. We cannot predict which child will encounter harm online. However, we can predict something more important, bans don’t eliminate risk, they only delay it and youth and teens in countries where such legislation and age gating exists are easily bypassing it. Education and guidance, by contrast, reduce harm across the board and empower young people to face challenges responsibly.
Protecting children is a noble cause. But when advocacy devolves into nanny legislation, (21) it fractures the very trust it claims to defend. It substitutes feel-good optics for real solutions. Do we think these social media platforms should be legislated and regulated? ABSOLUTELY!
However, instead of creating barriers for youth, like age gating, legislation should prioritize transparency and ethical practices from social media companies. For instance, laws could mandate transparency around algorithm-driven content recommendations, which in our experience are what lead youth and teens down the dark holes of the internet, restrictions on dark patterns that manipulate user behaviour, and robust reporting systems for harmful content.
The burden of accountability, via legislation, must rest squarely on the shoulders of social media vendors and not on youth, teens, and parents. These companies have unparalleled influence over the digital experiences of millions of users and should be required, through well-thought-out legislation, to adopt measures that safeguard users without compromising personal freedoms. For example:
Companies should be required to disclose how their algorithms prioritize content and offer users the ability to customize their feeds. They should also allow outside third parties, such as academic researchers, to review these algorithms to ensure compliance.
Companies should be required to adopt stricter data-collection limits and opt-in policies to prevent the exploitation of users’ personal information.
Companies should be required to implement features like content warnings, parental controls, and enhanced reporting mechanisms for harmful material, and to make those features easy to find and use.
So where does that leave us? Here at the White Hatter we believe a middle ground is possible:
Parental Consent and Responsibility Matters: Parents already guide their child’s participation in high-risk activities. Extending this principle to online life recognizes family decision making. This should be a parent or caregiver decision, and NOT a government decision, specific to this topic. Also, with parental consent comes parental responsibility. (22)
Safety-by-Design: Platforms must reduce harm through strong privacy defaults, content moderation, and limits on manipulative algorithms. This needs to be legislated into law! We can no longer allow these companies to police themselves. (21)
Parent Education: Not all parents feel equipped to guide their child online. Governments and schools should invest in digital literacy resources to close that gap.
Parents and caregivers deserve better than fear-driven politics and nanny legislation. Politicians owe the public more than selective research and emotional soundbites. Youth deserve a future shaped not by bans and blockades, but by empowerment, resilience, and the confidence that comes from being prepared for an onlife world that is already here!
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech
References: