Student Surveillance, Child Safety Tech, and EdTech: When Education + Protection Becomes Profiteering
- The White Hatter
- Jun 19
- 5 min read

Caveat - as this school year comes to a close, here are some of our thoughts on the integration of technology heading into the school year to come.
As parents and caregivers, we all want our children to thrive in schools where they are safe, supported, and ready to learn. We here at the White Hatter support the pedagogically intentional use of technology in the classroom, but a growing network of surveillance and data-collection technologies, often introduced under the banners of “safety” or “personalized learning,” is quietly reshaping how our children experience education. (1) Schools are increasingly turning to digital tools, from AI-powered monitoring software to educational apps and online learning platforms, that promise to protect kids or enhance their education. But who’s really benefiting?
The uncomfortable truth is this: many of these digital tools don’t just monitor or educate students, they harvest their data. And that data is often monetized, packaged, and sold to third parties. (2) In some cases, the very companies claiming to keep children safe are the same ones profiting from children’s digital interactions on their platforms. (3)
As an example, in May 2025, schools in Australia discovered that Microsoft Teams, a widely used video conferencing platform, had quietly activated face and voice recognition features two months earlier, without notifying schools or requesting parental consent. (4) This silent update began harvesting students’ biometric data, including unique facial and vocal profiles, which were stored in the cloud, often without the knowledge of educators or families. This is deeply concerning because, unlike passwords, biometric data cannot be changed. Once a child’s most intimate data, such as facial expressions, voice patterns, and emotional responses, is captured and stored, the risks to privacy and security could be long-term and irreversible.
Whether it’s a school’s Facebook page, Microsoft Teams, a Chromebook issued by the school, or a free math game app on a student’s iPad, nearly all digital education platforms collect information. This might include location, browsing history, keystrokes, search queries, behavioural data, biometric data, and even emotional sentiment analysis. Some of the most popular educational tools, both free and subscription-based, build detailed profiles of student behaviour, performance, and habits, and then monetize this data through targeted advertising, predictive analytics, or by selling access to third-party marketers. Reporter Daniel Lyons from Newsweek stated:
“The most important thing to understand about most Social Networks & Apps is that you are not their customer, you are their inventory. You are the product a Social Network is selling. Most Social Networks’ real customers are advertisers. You, as a Social Network member, are useful only because you can be packaged up and sold to advertisers. The more information Social Networks can get from you, the more you are worth.”
These tools were created by corporations whose business models depend on massive scale, constant surveillance, and continuous data extraction. They are intentionally opaque: you can’t see what happens to your data once it’s collected, how it might be used to train future models, or where those models might eventually appear.
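To make that opacity concrete, here is a simplified, entirely hypothetical sketch in Python of the kind of event record a learning platform could assemble from a single student interaction. Every field name below is our own invention for illustration, not any vendor’s actual schema, but the categories mirror the ones described above: identity, location, keystroke-level behaviour, and inferred emotion.

```python
# Hypothetical example only: every field name below is invented for
# illustration and does not come from any specific vendor's schema.
from datetime import datetime, timezone

def build_event(student_id: str, action: str, payload: dict) -> dict:
    """Assemble one telemetry record of the kind a learning app could log."""
    return {
        "student_id": student_id,            # persistent identifier
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,                    # e.g. "answer_submitted"
        "payload": payload,                  # answer text, keystroke timings, etc.
        "device": {"os": "ChromeOS", "approx_location": "derived from IP"},
        "behaviour": {"time_on_task_sec": 42, "hesitation_events": 3},
        "inferred_sentiment": "frustrated",  # guessed by the platform, not stated
    }

event = build_event("stu-0042", "answer_submitted", {"question": "Q7", "correct": False})
print(event)
```

Multiply one record like this by every click, every student, and every school day, and the scale of the resulting profile becomes clear.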
What’s particularly concerning is that even premium tools, which parents, caregivers, and schools pay for, are not always exempt from these practices. The notion that “you’re the product only if the product is free” no longer holds. Now, you can pay and still be the product.
These practices often operate in legal grey areas, using vague privacy policies and buried consent forms to justify invasive data collection. And while some platforms claim anonymization, researchers have repeatedly shown how “anonymized” data can often be re-identified, especially when combined with other data sets. (5)
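For readers who want to see how such re-identification works in principle, here is a minimal sketch of the linkage technique researchers describe. The datasets are fabricated for illustration: an “anonymized” learning record is matched to a named record from some other source (a club roster, a directory, a breached database) simply because the quasi-identifiers line up.

```python
# Fabricated data illustrating the linkage technique researchers describe.
# The "anonymized" rows contain no names, only quasi-identifiers.
anonymized_learning_records = [
    {"birth_date": "2011-03-14", "gender": "F", "postal": "V8W", "reading_score": 61},
    {"birth_date": "2012-07-02", "gender": "M", "postal": "V8X", "reading_score": 88},
]

# A second, identified dataset from elsewhere (a club roster, a directory,
# a breached database) that happens to share the same quasi-identifiers.
identified_records = [
    {"name": "A. Student", "birth_date": "2011-03-14", "gender": "F", "postal": "V8W"},
]

QUASI_IDENTIFIERS = ("birth_date", "gender", "postal")

def reidentify(anon_rows, named_rows):
    """Yield (name, anonymous record) pairs whose quasi-identifiers match."""
    for anon in anon_rows:
        for named in named_rows:
            if all(anon[key] == named[key] for key in QUASI_IDENTIFIERS):
                yield named["name"], anon

for name, record in reidentify(anonymized_learning_records, identified_records):
    print(name, "re-identified with reading_score =", record["reading_score"])
```

Notice that no name was ever stored in the learning data; the combination of birth date, gender, and postal code was enough.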
This trend echoes what we see in the child safety tech industry. Many school monitoring tools and parental-control apps publish alarming reports based on their own proprietary user data. They warn of surging cyberbullying, rising exposure to harmful content, and other digital threats, all while selling products designed to address those same dangers. Without independent academic verification, these reports may serve more as marketing strategies than evidence-based insights.
Fear sells, and so does data, which is why the data economy has become so profitable, especially when it comes to youth, who will become tomorrow’s customers.
Just like child safety tech companies, many EdTech platforms position themselves as essential tools for safeguarding or educating youth while quietly engaging in aggressive data monetization. (6) In both cases, parents, caregivers, and even educators are rarely given the full picture.
The effects of this unchecked data collection are significant and far-reaching:
Loss of privacy: Students are growing up under constant surveillance, their digital learning habits tracked, analyzed, and commodified, often without their or their parents’ informed consent.
Behavioural profiling: Algorithms can label students as “at-risk,” “disengaged,” or “problematic” based on patterns that may be misunderstood or biased, potentially impacting their educational outcomes (a sketch after this list shows how crude such scoring can be).
Equity concerns: Surveillance technologies disproportionately affect marginalized students, particularly those with disabilities, LGBTQ+ students, and youth of colour, who may be more likely to be flagged and disciplined by automated systems.
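The sketch below, again in Python, is a hypothetical illustration only: the feature names and thresholds are invented, and no vendor’s actual algorithm is being shown. It exists to demonstrate how easily a crude rule-based flag can misread legitimate behaviour, which is exactly the kind of bias the list above warns about.

```python
# Invented thresholds and feature names; no vendor's actual algorithm.
def risk_label(minutes_active: int, flagged_searches: int, typing_speed_wpm: int) -> str:
    """A deliberately crude rule-based flag of the kind described above."""
    score = 0
    if minutes_active < 20:     # "disengaged", or simply a doctor's appointment
        score += 1
    if flagged_searches > 0:    # keyword lists misread health or identity searches
        score += 2
    if typing_speed_wpm < 15:   # penalizes students using assistive input devices
        score += 1
    return "at-risk" if score >= 2 else "ok"

# One search for a support group plus slow typing is enough to trip the flag:
print(risk_label(minutes_active=45, flagged_searches=1, typing_speed_wpm=10))  # "at-risk"
```

A student who briefly looked up a support group and types slowly because of an assistive input device gets flagged, and nothing in the score distinguishes that pattern from genuine distress.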
Before trusting any report, app, or digital classroom tool, we need to ask:
What data is being collected, and who owns it?
Is this tool truly serving an educational or safety purpose, or is it serving a corporate business model?
Is the data available to independent researchers for peer review and accountability?
Are children and families fully informed about what’s being done with their data?
Are the claims made by the company backed by unbiased academic studies, or are they rooted in fear-based marketing?
Again, before uploading a student’s work, face, name, and learning history into a social media or AI platform, we need to ask: who is governing this private data?
It’s time for a reality check. Not all technology in schools or homes is bad; far from it. But we cannot conflate protection with surveillance, or confuse educational innovation with data exploitation. We must demand the same level of transparency, accountability, and independent oversight from EdTech and child safety companies that we demand from social media platforms.
Parent and caregiver fears about online safety and academic success are real and valid. But decisions about which tools to use should be based on evidence, not emotionally charged marketing or manufactured panic. Until EdTech and safety companies open their data to independent academic scrutiny, parents, educators, and policymakers must approach their reports and products with healthy skepticism.
Our children are not just users. They are learners. They are people, and they deserve education, not digital exploitation.
Digital Food For Thought
The White Hatter
Facts Not Fear. Facts Not Emotions. Enlighten Not Frighten. Know Tech Not No Tech.
References: