Nova Scotia Liberal Party Pushing For Age Gating Social Media Legislation: A Reasoned and Cited Rebuttal
- The White Hatter
- Aug 29
Updated: Sep 2

Caveat: Many digital literacy and internet safety advocates, including us, are committed to grounding our advocacy in rigorous, evidence-based research. This approach ensures that the information shared with our audiences is current, accurate, reliable, and beneficial. However, we are increasingly concerned about the growing spread of alarmist messaging in the media, much of which lacks “credible evidence” and relies on anecdote or opinion. Too often, this type of narrative does more harm than good, especially when it comes to advancing a political agenda. It is from this perspective that we present this rebuttal for consideration, with the aim of offering balance to the current one-sided conversation surrounding youth, teens, families, and legislation.
This week, the Nova Scotia minority Liberal Party announced plans to introduce legislation that would block anyone 16 and under from accessing social media platforms. (1) At first glance, this may sound like a bold step to protect youth and teens. Scratch beneath the surface, however, and the move looks far more political than practical. It follows a global playbook being used by some individuals and special interest groups: use youth as the “point of the spear” to slay the evil social media dragons, win headlines, and stir parental emotions and anxieties.
This announcement is interesting to us given that here in Canada, both provincial and federal governments permit minors under 18, and in some cases under 16, to take part in activities that carry significant responsibility or even potential danger. Often, this is allowed when a parent or guardian provides consent, but in some cases, no parental involvement is required at all. (2) Provincial and federal governments across Canada already accept the idea that young people are capable of making consequential decisions under the right circumstances, so why should this not apply to social media as well?
What makes 16 or even 18 a so-called magical age? In our experience presenting to more than 680,000 youth and teens across Canada, we’ve seen firsthand that maturity does not follow a strict calendar. Some 14-year-olds demonstrate the judgment and self-control of a 19-year-old, while some 19-year-olds still exhibit the impulsiveness of a younger teen. This reality challenges the idea that an arbitrary age threshold is the best way to decide when young people are ready to engage with technology, the internet, and social media. In most cases, we believe that parents and caregivers are far better positioned than government to assess their child’s readiness. They know their child’s temperament, emotional development, and capacity for responsibility in ways that legislation simply cannot account for. While laws can set broad frameworks, the nuanced and individualized decision of when a child is prepared to step into the digital world should rest with parents, caregivers, and families, not politicians.
Nova Scotia politicians driving this proposal are relying on emotionally loaded terms such as “epidemic of depression,” “rise in suicides,” and “mental health crisis” to support their case. These kinds of phrases are crafted to stir feelings rather than encourage thoughtful debate. In their press release, they also referred to unspecified “studies” without citing sources, an approach that makes independent review impossible and that some have likened to “enshittification.” (3)
Enshittification is a term coined by Cory Doctorow and now listed in the Merriam-Webster dictionary. While it originally described the gradual degradation of online platforms, it is increasingly used more broadly to describe information environments flooded with manipulative, uncited content designed to advance an agenda rather than good evidence-based research. It often shows up as repeated, simplified narratives that are easier to believe than complex, multifactorial truths. When repeated often enough, a tactic known in politics as “flooding the zone,” these messages start to feel like fact, even if they are not backed by strong evidence.
What stood out in the news coverage was that the Nova Scotia chapter of the advocacy group “Unplugged Canada,” which is backing this legislation, leaned heavily on enshittification-style language. During the interview, their representative made sure to have one book prominently displayed in the background: Dr. Jonathan Haidt’s The Anxious Generation (4), a work often held up as the definitive text supporting their position. While Dr. Haidt has captured public attention, his thesis is far from settled science. In fact, a significant body of peer-reviewed, evidence-based research challenges or outright contradicts his claims. (5) Yet these counter-narratives from scholars and highly reputable evidence-based researchers like Dr. Pete Etchells (6) and soon-to-be Dr. Catherine Knibbs (7), to name just two, are conspicuously absent from the political grandstanding and the public discussions taking place on this issue.
The idea of a social media ban has the seductive lure of simplicity: just delay, block, or ban access until youth and teens are “old enough,” and safety is achieved. However, this is an illusion. Technology shifts faster than any rulebook; just look at what is happening with artificial intelligence. Many youth are now engaging with AI-based “companionship” apps that research suggests can raise mental health concerns for some youth and teens. However, these are not “social media” platforms, so the proposed Nova Scotia legislation would likely not apply to them.
Research consistently shows that the majority of young people use social media without lasting harm. (5) In fact, many youth and teens benefit socially, emotionally, and even academically. (8)(9) The minority who do experience distress are often the same young people who already struggle offline. (10)(11) Vulnerability, not technology, is the real issue. The question that should be asked is, “Why do some people prosper online while others get into real difficulty?” What reduces vulnerability is not sweeping nanny laws, but support that includes engaged parents, digital literacy education, and healthy boundaries at home and in school around technology, the internet, and social media.
Worst-case scenarios such as sextortion, cyberbullying, and pornography exposure are often brought up, and rightly so. No one dismisses them, certainly not us. In fact, just today we helped another family whose 18-year-old son fell prey to sextortion; we get it! However, here’s the truth: bans do not prevent these harms. As online investigators, we know that predators thrive in secrecy. Push teens underground via legislation, and you reduce parental visibility while doing little to reduce risk. In fact, bans can make the job of keeping kids safe harder.
The Nova Scotia legislators and proponents of age gating often point to international examples. However, when we look closely, the results are a little more nuanced than their messaging would suggest.
In Australia, the government tested various age verification systems such as selfie-based age estimation, credit card checks, and even motion-based gestures. The outcome? None were foolproof, and some systems misjudged age badly. (12) Australian teenagers, predictably, were confident they could bypass the restrictions and are doing so. In other words, the Australians, at the time of writing this article, DO NOT have an effective age gating protocol in place to support their legislation that comes into effect soon!
UPDATE - Aug 31st/2025
The Australian Age Assurance Technology Trial Report was recently released. (23) It’s a lengthy document, but a valuable one. What stood out to us was the clear breakdown of the different types of age-gating tools currently available, along with an honest look at their strengths and limitations.
Some quotes from this report:
"Age assurance can be done in Australia privately, efficiently and effectively."
"In examining safeguards, like privacy and security, whilst we have had regard to the relevant statutory provisions and guidance, we cannot provide the level and depth of analysis that would be required to provide any kind of clearance across all 48 of the Trial participants."
"The Trial is also not intended to test if every individual product works as claimed but rather to consider if the technologies as a whole work."
"We found a plethora of approaches that fit different use cases in different ways, but we did not find a single ubiquitous solution that would suit all use cases, nor did we find solutions that were guaranteed to be effective in all deployments"
"We found that the systems were generally secure and consistent with information security standards, with developers actively addressing known attack vectors including AI-generated spoofing and forgeries. However, the rapidly evolving threat environment means that these systems – while presently fairly robust – cannot be considered infallible. Ongoing monitoring and improvement will help maintain their effectiveness over time. Similarly, continued attention to privacy compliance will support long-term trust and accountability"
"We found some concerning evidence that in the absence of specific guidance, service providers were apparently over-anticipating the eventual needs of regulators about providing personal information for future investigations. Some providers were found to be building tools to enable regulators, law enforcement or Coroners to retrace the actions taken by individuals to verify their age which could lead to increased risk of privacy breaches due to unnecessary and disproportionate collection and retention of data."
The report also found that none of the systems were 100% effective, but several reached at least a 92% true positive rate. We believe that when it comes to technology, nothing is 100% effective, especially age verification; there is always going to be a margin of error. That said, we want to acknowledge that a 92%+ success rate is an excellent benchmark.
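To put that 92% benchmark in perspective, here is a short back-of-the-envelope calculation. The population figure below is our own hypothetical, not a number from the trial report; it simply illustrates how even a high true positive rate still lets a meaningful number of underage users slip through at scale:

```python
# Illustrative arithmetic only: the number of attempts is hypothetical,
# not a figure from the Australian Age Assurance Technology Trial Report.
underage_attempts = 100_000   # hypothetical underage users attempting access
true_positive_rate = 0.92     # benchmark cited in the trial report

# round() avoids floating-point truncation surprises with int()
correctly_blocked = round(underage_attempts * true_positive_rate)
slipped_through = underage_attempts - correctly_blocked

print(f"Correctly flagged as underage: {correctly_blocked}")  # 92000
print(f"Misclassified and let through: {slipped_through}")    # 8000
```

In other words, at a 92% true positive rate, roughly 1 in 12 underage users would still get through, before anyone even tries a VPN or a borrowed credential.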
In this report the authors stated, "Age estimation has emerged as a mature, secure and adaptable tool for enforcing age-based access in a wide range of digital and physical contexts. When configured responsibly and used in proportionate, risk-based scenarios, it supports inclusion, reduces reliance on identity documents and enhances user privacy. "
However, it’s worth noting that the Electronic Frontier Foundation, the only privacy stakeholder on the report’s Advisory Board, published a public op-ed about two weeks ago that challenges the report’s claims of success. (24)
As the report now undergoes public scrutiny, it will be important to see whether its findings hold up. If they do, there may be promising pathways forward that balance age-restricted access with the protection of privacy.
However, here’s an interesting wrench to throw into this discussion: research in Australia by the eSafety Commissioner also found (13):
For children aged 8–12 who used social media, 36% had their own account, and 77% of those had help from a parent or caregiver setting up the account.
Another 54% used social media through a parent’s or caregiver’s account.
Update Sept 2nd, 2025:
This is not just an Australian issue. Ireland just released a report that found that 71% of 8–12 year olds still have accounts on platforms with a minimum age of 13.
When parents or caregivers do this, they may unintentionally put their child at greater risk. Anecdotally, we see the same thing happen here in Canada as well. Here’s a realistic hypothetical based on the above-noted eSafety Commissioner and Ireland numbers:
When a parent or caregiver sets up an Instagram, Snapchat, or TikTok account on their child’s device using an age-verification tool and then allows the child to log in with that token, the platform treats the account as an adult’s. As a result, protections meant for teens are turned off, leaving the young user more exposed to potential risks.
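This mechanism can be sketched in a few lines of code. The function and settings below are entirely hypothetical, not any real platform’s internals; the sketch only illustrates the underlying logic: safety protections follow the verified age of the credential, not the age of the person actually using the account.

```python
# Hypothetical sketch: protections keyed to the verified credential's age,
# not the actual user's age. Field names are illustrative, not a real API.

def protections_for(verified_age: int) -> dict:
    """Return the safety settings a platform might apply to an account
    based solely on the age attached to the verification token."""
    is_minor = verified_age < 18
    return {
        "private_by_default": is_minor,
        "dm_restrictions": is_minor,
        "sensitive_content_filter": is_minor,
    }

# Child verified with their own age: teen protections are on.
print(protections_for(11))
# Parent verifies with their own ID and the child logs in with that token:
# the platform "sees" a 42-year-old, so every teen protection is off.
print(protections_for(42))
```

The point of the sketch is that no age-gating law changes this outcome: once the verification step is completed by an adult, the platform has no way to know a child is behind the screen.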
Given this reality, how is the Australian legislation, or even the Nova Scotia proposed legislation, going to stop this from happening? Answer - it won’t!
In the UK, where such legislation already exists, laws requiring facial recognition or ID checks are being widely bypassed through VPNs and other workarounds. (14)(15) These aren’t just the tricks of “tech-savvy” kids. They’re basic moves for anyone with access to Google or YouTube. As John Scott-Railton, a researcher at the Citizen Lab at the University of Toronto, who studies surveillance and digital rights stated in a Washington Post article about the UK law (25):
"A textbook illustration of the law of unintended consequence, the law suppresses traffic to compliant platforms while driving users to sites without age verification. The more government squeezes, the more they reward the very sites that scoff as their rules"
Having presented to more than 680,000 youth and teens, we are already seeing a noticeable rise in middle and high school students exploring the deep web, a space where enforcing age restrictions is nearly impossible. Our concern is that as more of the surface web becomes age-gated, it may drive not only young people but also adults toward the deep web, where safeguards are far harder to apply.
Another major concern we have with the 16+ age verification legislation is that, to enforce it, platforms would need to collect photo IDs or biometric scans, or rely on third-party verification services, from all users, not just youth and teens, but from us adults as well. While marketed as secure, these age verification companies create tempting targets for hackers and even hostile state actors.
We have already seen how vulnerable supposedly “gold-standard” privacy and security systems can be:
UK Ministry of Defence Breach: Data on 100 British officials, including members of the special forces and MI6, was exposed, endangering thousands of Afghans, as reported by the BBC. (16)
Canada Revenue Agency Breach: Sensitive taxpayer information was leaked in a breach documented by the Office of the Privacy Commissioner of Canada. (17) This is only one of several major cyber privacy breaches affecting large companies in Canada. (18)
If major government agencies and large multi-million-dollar companies can be compromised, third-party age verification databases are no exception.
That doesn’t mean age verification should be dismissed outright; there are clear cases, such as access to pornography and gambling sites, where it plays a necessary role in protecting young people. However, what it does mean is that any solution must be designed in a way that protects privacy as much as it enforces restrictions. The new Australian report mentioned above provides independent evidence that such technologies exist.
Age verification should not come at the cost of creating massive databases of personal information, regardless of age, which could be vulnerable to misuse or breach. Instead, the focus needs to be on privacy-preserving approaches that strike a balance between safeguarding youth and respecting the rights of all internet users. This is important given that currently there is no implemented age assurance solution anywhere in the world that provides the necessary balance between reliable accuracy and user privacy. Some have said that countries like France have done it, but they fail to acknowledge that research in France on this issue found serious privacy challenges with age verification. (19) These same voices also fail to report that in France, the use of VPNs to bypass their age verification legislation is a significant issue.
Some people argue that age verification is nothing new: you already have to show ID to buy alcohol, cigarettes, or vapes, when entering a bar, or if a police officer pulls you over while driving. However, here’s the critical difference: in those cases your ID is reviewed by a human being, face to face, and then returned to you; there is no permanent digital record created. Digital age verification, on the other hand, often requires scanning, transmitting, or storing sensitive personal information. That shift, from a momentary human check to a digitized, recorded transaction, introduces significant privacy and security vulnerabilities for both teens and adults. Canadian privacy lawyer David Fraser, a recognized legal expert on these issues, highlights these concerns in a recent YouTube video about the federal age-verification bill here in Canada. In our view, the same implications would also apply to the provincial age-gating legislation being proposed by the Nova Scotia Liberals. (20)
The takeaway is clear: bans and verification schemes can’t match the ingenuity of young people or the constant change of today’s onlife world, and they raise serious privacy concerns for everyone.
If bans don’t work, what does? Evidence points to the everyday presence of adults who are equipped, empathetic, and engaged as the cornerstone of online safety. Parents and caregivers who talk openly with their children, set realistic boundaries, and model healthy tech use raise youth who are more resilient online. Schools that integrate digital literacy into curricula give students tools to recognize dark patterns, resist manipulation, and navigate risks.
Safety comes from preparation, not prohibition. Resilience is built by guiding youth through the onlife world, not by pretending we can fence it off until an arbitrary birthday. We cannot predict which child will encounter harm online. However, we can predict something more important: bans don’t eliminate risk, they only delay it, and youth and teens in countries where such legislation and age gating exist are easily bypassing it. Education and guidance, by contrast, reduce harm across the board and empower young people to face challenges responsibly.
Protecting children is a noble cause. But when advocacy devolves into nanny legislation, (21) it fractures the very trust it claims to defend. It substitutes feel-good optics for real solutions. Do we think these social media platforms should be legislated and regulated? ABSOLUTELY!
However, instead of creating barriers for youth, like age gating, legislation should prioritize transparency and ethical practices from social media companies. For instance, laws could mandate transparency around algorithm-driven content, which in our experience is what leads youth and teens down the dark holes of the internet, restrictions on the dark patterns that manipulate user behaviour, and robust reporting systems for harmful content.
The burden of accountability, via legislation, must rest squarely on the shoulders of social media vendors and not on youth, teens, and parents. These companies have unparalleled influence over the digital experiences of millions of users and should be required, through well-thought-out legislation, to adopt measures that safeguard users without compromising personal freedoms. For example:
Companies should be required to disclose how their algorithms prioritize content and offer users the ability to customize their feeds. They should also allow outside third parties, such as academic researchers, to review these algorithms to ensure compliance.
Companies should be required to adopt stricter data-collection limits and opt-in policies to prevent the exploitation of users’ personal information.
Companies should be required to implement features like content warnings, parental controls, and enhanced reporting mechanisms to address harmful material that can otherwise be easily found.
So where does that leave us? Here at the White Hatter we believe a middle ground is possible:
Parental Consent and Responsibility Matters: Parents already guide their child’s participation in high-risk activities. Extending this principle to online life recognizes family decision making. This should be a parent or caregiver decision, NOT a government decision, on this topic. Also, with parental consent comes parental responsibility. (22)
Safety-by-Design: Platforms must reduce harm through strong privacy defaults, content moderation, and limits on manipulative algorithms. This needs to be legislated into law! We can no longer allow these companies to police themselves. (21)
Parent Education: Not all parents feel equipped to guide their child online. Governments and schools should invest in digital literacy resources to close that gap.
Parents and caregivers deserve better than fear-driven politics and nanny legislation. Politicians owe the public more than selective research and emotional soundbites. Youth deserve a future shaped not by bans and blockades, but by empowerment, resilience, and the confidence that comes from being prepared for an onlife world that is already here!
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech
References: