Safe Cables Vs. Safe Spaces Online

  • Writer: The White Hatter
  • Oct 9
  • 4 min read

Don’t Be Fooled: Social Media Isn’t the Only One Facilitating Harm to Kids


Laws and regulations that aim to make social media safe for kids, while placing all responsibility, liability, and blame solely on the platforms themselves, are short-sighted. Other organizations that facilitate similar harms but remain out of the spotlight will continue to operate without accountability. Many of these organizations were enabling such harms even before social media existed. This approach is neither fair nor equitable and ultimately fails to address the broader problem.


To truly create a safe, secure, and harm-free internet, online safety regulations must be applied consistently across all services and devices, not just social media platforms. Only then can we build a genuinely safer online environment for children and everyone else.


Right now, North America is at a pivotal moment, shaping laws aimed at regulating youth safety online and creating safe spaces for kids, with responsibility, liability, and blame placed entirely on social media.


For example, in the US:

  • S.2073 – Kids Online Safety and Privacy Act (KOSPA)

  • S.1409 – Kids Online Safety Act (KOSA)


And in Canada:

  • Bill C-63 – Online Harms Act


Reviewing the advocacy for these proposed laws, alongside valid criticism of poor online safety standards and negligence, makes one thing clear: wherever we spend our time, online or offline, it should be safe. At the very least, steps must be taken to make online spaces safer.


For those who are less tech-savvy, it might seem reasonable to believe that social media platforms like Meta (Facebook), TikTok, and Discord bear sole responsibility for online safety. However, a fundamental flaw in this reasoning is that many other organizations and services also facilitate online harms. Placing all responsibility on the end-product platforms ignores the broader ecosystem of harm, while infrastructure providers such as ISPs, network services, and data brokers often escape scrutiny and accountability. Yet there are already examples of laws requiring action and accountability beyond social media, so broader regulation is clearly possible.


It should not matter whether only social media is safe for kids; all online connections should be safe. Focusing solely on social media allows harms to persist, with companies operating out of the spotlight continuing to facilitate harm.


What is a “Safe Cable”?


By a “safe cable” we mean safety applied to the connections themselves: the infrastructure, services, networks, and devices that carry data, not just the social media “spaces” at the end of them. To truly create a safe, secure, and harm-free internet, online safety regulations must be applied fairly across that entire chain, not only to the platforms.


A similar concept has been proposed by the International Centre for Missing & Exploited Children, suggesting age-verifying credentials embedded directly into devices (1). While this approach doesn’t directly address harmful content, it illustrates the need to consider safety across the entire digital ecosystem, not just social media.


Let’s Explain Further...


If we genuinely want to provide totally safe online spaces for kids through active monitoring and filtering, focusing only on social media presents some issues:


  • It unfairly blames social media for all online harms while allowing other services and networks to enable harmful activities without accountability.


  • If social media platforms impose too many barriers for young users, they may migrate to platforms not covered by these laws or outside jurisdictional reach. North America currently benefits from large tech companies having a legal presence, enabling enforcement. We saw an example in early 2025 when a proposed TikTok ban in the US prompted many youth to move to a Chinese app with no English settings (2).


Trying to make the internet safer by regulating only social media is like trying to stop water from flooding a house by plugging just one hole in a leaking roof while ignoring all the others. It does little to solve the broader problem.


What's Missing?


Any proposed laws must clearly define what is being regulated. For example, in the US, both S.2073 (KOSPA) and S.1409 (KOSA) define “social network” as most people would expect. However, they explicitly exclude:


  • Common carrier services

  • Broadband internet access services

  • Teleconferencing or video conferencing services

  • Wireless messaging services


Similarly, Canada’s Bill C-63 defines “social media service” in a way that also effectively excludes these other services.


While many major sites maintain offices and servers in North America, it is naïve to believe that American or Canadian governments can control the entire internet. Yet, this has not stopped attempts elsewhere in the world to regulate internet infrastructure.


Other governments, such as China, have recognized that the most effective control over data and information lies at the infrastructure level, as seen with the "Great Firewall" (3).


We have also seen infrastructure-level controls attempted in North America, such as:


  • In 2010, U.S. Immigration and Customs Enforcement (ICE) and Homeland Security Investigations (HSI) launched “Operation In Our Sites,” seizing domains hosting pirated or counterfeit content (4).


  • Many US states have passed age verification laws for adult content (5).


  • In 2019, a Canadian Federal Court required ISPs to block access to the pirate IPTV service GoldTV (6).


  • In Canada, the Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service requires ISPs and online service operators to report suspected child sexual abuse material (CSAM). They must notify police and the Canadian Centre for Child Protection, restrict access to the content, preserve data, and may face penalties if they fail to comply (7).


Current advocacy for safe online spaces often resembles patching a small hole in a very leaky roof. The root cause of online harms lies with those committing the harms, and with all the connections that enable them to continue. Without a comprehensive, ecosystem-wide approach, efforts to make the internet safe will remain partial and ultimately ineffective.


We want to encourage social media safety advocates to critically examine their true end goals and what actually needs to happen to achieve them. Is the aim to make the internet safer for kids, or is it more about the glory of defeating a symbolic dragon? Blaming the big, high-profile social networks is popular, makes for great sound bites, and having a single common enemy can feel motivating. However, we should not let that oversimplify the problem or distract from the need for broader, more comprehensive, and effective solutions.


Citations:


