The Possible High Cost of Age Verification: Are We Trading Our Privacy for Online Protection?

  • Writer: The White Hatter
  • Jul 22
  • 7 min read

Updated: Jul 28


As governments around the world move forward with laws that mandate age verification with the goal of protecting children online, we may be entering a new era, one that signals the end of digital anonymity for everyone, not just youth. These efforts, though often rooted in good intentions, raise urgent questions that all parents, caregivers, and citizens should be asking: “Are we solving the right problem? And what are we giving up in the process?”


Mandatory age verification laws are being introduced with a clear goal: to shield minors from harmful or inappropriate online content such as pornography, gambling, or violence. As parents, caregivers, and advocates for youth safety, that’s a goal we support. But good intentions don’t automatically equal good solutions.


Here in Canada there is a Private Member’s Bill before Parliament, Bill 209, the “Protecting Young Persons From Pornography Act.” As Canadian privacy lawyer David Fraser has stated, the Bill has an honourable intention, to protect young persons from accessing pornography online, but the devil is in the details of the wording. David argues the proposals in this Bill are just too loose and therefore a risk to the privacy guaranteed to all of us by the Canadian Charter of Rights. (1) We agree with David: this Bill should be parked, legally refined, and only implemented once the technology allows the promised accuracy without putting our private and personal information at risk.


We have to be careful about “virtue legislation”: laws passed to signal moral or political commitment (e.g., protecting children) but whose actual implementation causes broader harms, especially to rights like privacy, for all of us.


The real concern is the method being used to achieve this protection. Age verification laws require every user, regardless of age, to prove who they are before accessing content. This usually involves uploading government-issued ID, facial scans, or other personal identifiers.


This is not just about protecting kids anymore. These systems now affect all of us.


The widespread use of age verification means platforms will collect and store massive amounts of personally identifiable information (PII). Once that data is out there, it's vulnerable. No system is perfectly secure. A breach doesn’t just mean someone accessed your password, it could mean they now have your full legal identity.


Even more alarming, we’re now seeing evidence that government agencies are gaining access to what was supposed to be private information. In the United States, for example, recent reports have revealed that agencies involved in immigration enforcement have used personally identifiable information, originally collected to verify users on online platforms, for surveillance purposes. This kind of access is typically justified by so-called “exception clauses” in terms of service agreements. These clauses often say something like, “We may disclose your information to comply with applicable laws, regulations, governmental guidelines, or legal obligations.”


In plain language, this means, “If the government asks, we’ll likely hand over your data.” 


This shouldn’t surprise us. It’s standard boilerplate in nearly every privacy policy we have read. However, it should concern us, especially when platforms are being required by law to collect more and more sensitive information.


Protecting children doesn’t have to mean sacrificing privacy for everyone. It’s a mistake to frame this as an either/or situation. What we’re witnessing is a shift away from user privacy under the banner of child protection, when the real fix should have started elsewhere: with the platforms themselves.


Rather than relying on reactive age checks, platforms should be building in protections from the ground up. “Safety-by-design” is a model that emphasizes proactive architecture, designing features that make platforms inherently safer without collecting personal data.


This could include things like default private settings for minors, meaningful content moderation, limits on data collection, built-in friction to slow impulsive sharing, and algorithmic transparency.
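To make the idea concrete, here is a minimal, hypothetical sketch (in Python, with invented names used only for illustration) of what “safe by default” account settings for a minor could look like if a platform built them in from the start, with no ID upload, facial scan, or biometric data required:

```python
from dataclasses import dataclass

@dataclass
class AccountSettings:
    """Hypothetical per-account settings, for illustration only."""
    profile_private: bool
    direct_messages: str          # "everyone", "friends_only", or "off"
    personalized_ads: bool
    data_collection: str          # "minimal" or "full"
    share_delay_seconds: int      # built-in friction before a post goes live
    algorithmic_feed: bool        # False = simple chronological feed

def default_settings(is_minor_account: bool) -> AccountSettings:
    """Safety-by-design sketch: the safest options are the defaults for
    minor accounts, without collecting any government ID or biometrics."""
    if is_minor_account:
        return AccountSettings(
            profile_private=True,
            direct_messages="friends_only",
            personalized_ads=False,
            data_collection="minimal",
            share_delay_seconds=30,   # slow down impulsive sharing
            algorithmic_feed=False,
        )
    return AccountSettings(
        profile_private=False,
        direct_messages="everyone",
        personalized_ads=True,
        data_collection="full",
        share_delay_seconds=0,
        algorithmic_feed=True,
    )

if __name__ == "__main__":
    print(default_settings(is_minor_account=True))
```

The point of the sketch is not the specific settings, which are assumptions, but the design choice: protection is built into the product’s defaults rather than bolted on through identity checks at the door.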


The problem is, these approaches don’t support the business model of big tech companies. Platforms are designed to maximize engagement, because more views mean more revenue. Whether the user is 13 or 73, more clicks equal more profit. So rather than redesigning their systems to protect users, companies shift the burden to users themselves, forcing them to verify, disclose, and sacrifice their anonymity. This is why legislation needs to target the social media companies themselves, not the users.


These laws risk shifting responsibility away from the platforms and toward the users, or more specifically, the third-party companies tasked with verifying age. Instead of pushing platforms to build safer environments through a safety-by-design approach, the focus becomes ticking a compliance box: verify the user’s age, and you’re off the hook.


This allows platforms to continue operating as they always have. If harm occurs on their site, they can deflect accountability by pointing to the verification process. “The user was verified,” they’ll say, “so it’s not on us.” In this model, platforms aren't incentivized to reduce exposure to harmful content, rethink their algorithms, or add protections for younger users. The assumption is that if someone made it past the verification age gate, they must be old enough to handle what’s inside.


But digital harm doesn’t start and stop at the age of 18, or whatever the legal threshold may be. Just because a person clears an age check doesn’t mean the platform is suddenly safe. By outsourcing age verification to a third party, platforms are essentially buying themselves legal cover while continuing to prioritize engagement over user well-being.


Safety-by-design should mean more than verifying someone’s age at the door. It means building platforms that are safe, respectful, and responsible no matter who walks through that door. Laws that rely solely on age gates risk creating the illusion of safety while letting the real problems go unaddressed. Why does this matter? Recent events in France, Florida, and Texas, where VPN usage spiked after Pornhub suspended operations due to age verification laws, demonstrate how easily these technical barriers can be circumvented.


Many of these age verification systems are being managed by third-party companies, because we can’t trust social media companies to police themselves given they have proven that they won’t. These third parties will collect your documents and verify your identity, acting as intermediaries between you and the platform. However, how tightly are these companies being regulated? Who ensures they delete your data after use? Who ensures they won’t be pressured by government agencies to release your private information?


Without meaningful regulation, oversight, and transparency, we’re being asked to take a massive leap of faith, and hand over some of our most sensitive information in the process. Given that past performance often dictates future behaviour, we are very concerned that this private identifiable information will not stay secure.


Some may argue that online age verification is no different than showing ID to buy alcohol or enter an adult-only nightclub. But the comparison falls apart under closer inspection. In the real world, when you show your ID at a liquor store or nightclub, a staff member quickly checks your birthdate and hands it back. Your information isn’t scanned, stored, or entered into a permanent database.


Online age verification is a different story. In most cases, it involves uploading a copy of your ID, a facial scan, or other biometric data. That information is then stored, sometimes by the platform itself, sometimes by a third-party service provider, and often retained long after the age check is complete. Once collected, that data can be accessed, sold, leaked, or handed over to government authorities under vague and sweeping “exception clauses.”


So no, this isn’t like showing your ID at a door. It’s more like handing your passport to a stranger, watching them photocopy it, and then not knowing what happens to it next, or who might have access to it down the line. That level of data collection carries risks far beyond the one-time ID checks we’re used to in the physical world.


We all want kids to be safer online. But mandatory age verification creates a system where everyone’s identity is on the line, and that’s a risky tradeoff.


When government agencies can access age verification data for reasons far beyond child protection, we need to stop and ask, “Is this really about safety, or are we normalizing mass surveillance under the guise of protection?”


Privacy and protection are not mutually exclusive. The solution isn’t to demand more personal data, it’s to demand better design, better policies, and platforms that put people before profit, and that can only be done through legislation. However, these juggernaut social media companies have huge, and we mean huge, budgets to lobby against such legislation and prevent it from taking hold.


It's disappointing that policymakers didn’t take meaningful action years ago to hold social media companies accountable for the environments they’ve created. We are seeing history repeat itself with today’s new AI companies. Instead of leading with forward-thinking legislation that prioritizes platform accountability and user protection, many have chosen the easier path, passing what amounts to “nanny laws” that target the end user rather than the systems doing the harm. It’s our opinion that such laws are nothing more than “political cover,” a way to say, “Look, we as a government are taking this seriously.”


Age verification laws may look like action, but in reality, they shift the burden onto individuals while giving platforms a free pass to maintain the status quo. These laws suggest that if users are properly age-verified, the platform has done its job, ignoring the fact that many of the risks young people face online have less to do with age and more to do with design: addictive algorithms, data harvesting, content amplification, and lack of meaningful moderation.


Rather than putting pressure on tech companies to embed safety, transparency, and accountability into their products from the start, lawmakers are passing reactive measures that put the responsibility on families, users, and third-party verifiers. We don’t need more finger-pointing. We need legislation that forces platforms to prioritize the well-being of users over ad revenue and data collection.


It’s time our politicians start treating this like the corporate accountability issue it is.



Digital Food For Thought


The White Hatter


Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech




References:

