What the L.A. Social Media Case Is Revealing

  • Writer: The White Hatter


Across the world, lawsuits involving social media companies are beginning to move through the courts. One of the most closely watched cases right now is a large civil lawsuit in Los Angeles that examines whether the design of certain social media platforms has contributed to harm among young users.


Like many others working in the field of digital literacy and online safety, we have been following this case closely. What is especially important in this trial is not just the legal arguments being made in court, but the internal company documents that have surfaced during the legal disclosure process.


These documents, now being tested through cross examination, provide a rare look inside how some social media products were designed, what company executives knew about the psychological effects of those designs, and the decisions that were made about whether to change them. For parents trying to understand what all of this means for their children, the emerging picture is important.


Many of the internal communications and research reports being discussed in court suggest that social media companies were well aware that certain design features could strongly influence “some” user behaviour, particularly when it came to keeping people engaged on their platforms for long periods of time. According to documents cited in the case, some of these companies understood that features such as endless scrolling feeds, algorithmically curated content, reward-based engagement systems, and persistent notifications could increase the amount of time users spend on their platforms.


For those of us who have been studying digital environments for years, this information is not particularly surprising. Platforms compete for attention, and attention drives advertising revenue. The longer a user stays on a platform, the more opportunities there are to show ads and generate income. In other words, engagement is not just a design goal; it's central to a "capitalistic" business model built to maximize profit.


What the documents appear to show is that, unsurprisingly, companies did recognize the potential psychological effects of these engagement tools but chose to continue using them because they were effective at retaining users. To understand why these design features are so powerful, it helps to look at some of the mechanisms that are frequently discussed in the court documents.


Infinite Scroll


One of the most well-known features of modern social media is infinite scrolling. Instead of reaching the end of a page, new content continuously loads as a user scrolls down.


This removes the natural stopping cues that used to exist when people browsed the internet. Without those cues, users often continue consuming content longer than they originally intended.
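For readers who want a concrete picture, the pattern can be sketched in a few lines of Python. This is purely illustrative and not any platform's actual code; the key point is that every request for the "next page" succeeds, so the reader never hits a natural end.

```python
import itertools

def endless_feed(page_size=10):
    """A minimal sketch of an infinite-scroll feed: unlike a paginated
    site, there is no final page, so the user never gets a stopping cue."""
    for page in itertools.count():  # pages 0, 1, 2, ... forever
        start = page * page_size
        yield [f"post_{i}" for i in range(start, start + page_size)]

# A paginated site eventually returns an empty page; this feed never does.
feed = endless_feed()
pages = [next(feed) for _ in range(3)]
print(pages[0][0], pages[-1][-1])  # post_0 post_29
```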


Algorithmic Amplification


Social media platforms use algorithms to decide which content appears in a user’s feed. These algorithms are designed to prioritize material that is most likely to capture attention and encourage interaction.


Content that generates strong emotional reactions often performs well within these systems. As a result, users may see a steady stream of posts designed to keep them engaged, whether that engagement is positive or negative.
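A toy example helps show why emotionally charged content tends to rise. The scoring function and weights below are entirely hypothetical, not any platform's real algorithm; the point is only that when every strong reaction, positive or negative, raises a post's score, provocative material outranks calmer posts.

```python
def engagement_score(post):
    # Hypothetical weights: every reaction raises the score,
    # whether the emotion behind it is positive or negative.
    return (post["likes"] + post["comments"] * 3
            + post["shares"] * 5 + post["angry_reactions"] * 5)

def rank_feed(posts):
    # Feed order is decided by predicted engagement, not by quality.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "calm_update",  "likes": 40, "comments": 2,  "shares": 1, "angry_reactions": 0},
    {"id": "outrage_bait", "likes": 10, "comments": 30, "shares": 8, "angry_reactions": 25},
]
print([p["id"] for p in rank_feed(posts)])  # outrage_bait ranks first
```

Even though the calm post has more likes, the post that provokes arguments and angry reactions scores far higher and is shown first.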


Variable Reward Systems


Psychologists have long known that unpredictable rewards are one of the most powerful motivators of behaviour. This concept comes from behavioural psychology and is often referred to as a “variable reward schedule.”


It is the same principle used in slot machines. You never know when the next reward will appear, which encourages repeated checking and interaction.


On social media, rewards can come in the form of likes, comments, messages, or new content that triggers curiosity.
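This can be illustrated with a tiny simulation of a variable reward schedule. The 30% reward chance is an arbitrary number chosen for the example; what matters is that, like a slot machine, the person checking can never predict which check will pay off.

```python
import random

def check_feed(rng, reward_probability=0.3):
    """One 'pull of the lever': sometimes there is a new like or
    message waiting, sometimes nothing. The unpredictability itself
    is what drives repeated checking."""
    return rng.random() < reward_probability

rng = random.Random(42)  # seeded only so the example is reproducible
outcomes = [check_feed(rng) for _ in range(20)]
# Print the 20 checks: stars are rewards, dots are empty checks.
print("".join("*" if hit else "." for hit in outcomes))
```

Running this shows rewards scattered unpredictably through the sequence of checks, which is exactly the pattern behavioural psychology identifies as the strongest driver of habitual behaviour.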


Push Notifications


Notifications are designed to pull users back into the platform throughout the day. They act as reminders that something might be waiting for you. Over time, these small interruptions can create a constant cycle of checking and rechecking devices.


Individually, each of these features might seem harmless. Combined together, they create a highly engaging digital environment that is intentionally designed to capture attention.


Young people are not the only users affected by these design strategies; adults are influenced by them as well. However, adolescents may be particularly sensitive to environments that reward novelty, social feedback, and rapid stimulation. During adolescence, the brain's reward systems develop earlier than the brain regions responsible for long-term decision making and impulse control.


That developmental reality does not mean young people are helpless or incapable. It does mean that certain types of digital environments can be especially compelling during this stage of life. Understanding this helps explain why some teens struggle to put their devices down even when they know they should. The issue is not simply about "willpower"; it's also about how the environment itself has been engineered.


In response to concerns about youth online safety, many governments are proposing or implementing age verification systems that attempt to prevent younger users from accessing certain platforms. At first glance, this approach seems logical. If young people are vulnerable to harmful digital environments, restricting access appears to offer protection. However, the emerging evidence from court cases like the one in Los Angeles suggests that the core issue may lie deeper than age alone.


If a product is designed in ways that intentionally maximize engagement through psychological mechanisms, those design choices affect all users, not just young ones. This is why focusing only on age gating can feel like mopping up water from a flooded floor while ignoring the broken pipe that caused the flooding in the first place. The underlying design of the product remains unchanged.


What these lawsuits are beginning to highlight is a broader policy question that many experts have been raising for years: should governments focus primarily on restricting who can access digital platforms, or should they also examine how those platforms are designed in the first place?


Some researchers and policymakers are increasingly discussing the concept of “safety by design.” This approach focuses on building products in ways that reduce harmful patterns of engagement rather than simply limiting access. Examples could include:


  • stronger limits on algorithmic amplification


  • design features that encourage healthy stopping points


  • reduced reliance on variable reward engagement systems


  • greater transparency about how recommendation algorithms operate


  • tools that give users more meaningful control over notifications and feeds


We believe this is a more effective direction for legislation because it focuses on improving the digital environment itself rather than simply limiting access based on age. If the underlying design issues are not addressed, the challenges remain unchanged. Once a young person reaches the age threshold and gains access, they will still encounter the same engagement mechanisms and pressures that age gating alone does little to resolve.


As well, platform-based age gating can often be bypassed using tools such as virtual private networks (VPNs), proxy services, or simply by accessing an older friend's or family member's device. We are already seeing this occur at scale in countries that have implemented age-verification laws. As a result, young people who fall below the age threshold may still find ways onto these platforms and remain exposed to the same attention-capturing design features discussed earlier in this article.


For parents and caregivers, it is easy to feel overwhelmed when hearing about lawsuits, internal documents, and complex policy debates. The most important takeaway is actually quite simple and something that we have stressed for years here at the White Hatter:


Social media platforms are not neutral environments. They are carefully designed systems built to capture attention and encourage continued engagement.


Understanding that reality helps parents and caregivers have more productive conversations with their children about how these platforms work. When youth understand that certain features are intentionally built to keep them engaged, they are often better equipped to recognize those patterns and make more mindful choices.


The lawsuits now unfolding around the world are unlikely to produce simple answers. In some cases, courts will take years to sort through the evidence, and policymakers will continue debating what regulations should look like.


However, what these cases are doing is bringing the design of digital platforms into public view through a formal evidentiary process. Instead of relying on opinion, speculation, or leaked documents circulating in the media that have not been tested in court, the information is being examined within a legal framework where evidence must be disclosed, questioned, and scrutinized. This process allows internal communications, research, and decision-making within these companies to be evaluated in a structured way, giving the public a clearer understanding of how these platforms were designed and the considerations that shaped those choices.


For families, that visibility creates an opportunity. Instead of focusing only on how much time children spend online, we can begin paying closer attention to how the platforms themselves operate. That shift in thinking may ultimately lead to more meaningful solutions than simply trying to lock the digital doors after the platforms have already been built.


Profit and innovation are important drivers of technological progress, and there is no reason they cannot continue to thrive in the digital marketplace. However, those goals should exist alongside thoughtful consideration for user well being, public health, and the broader social environment that these platforms increasingly influence.


History, and reality, has shown us that companies whose business models depend on maximizing engagement and advertising revenue cannot realistically be expected, or trusted, to fully regulate themselves. Their economic incentives are naturally aligned toward increasing user activity, retention, and growth. Expecting organizations to voluntarily limit the very mechanisms that drive their profitability is, in many ways, asking them to act against their own business interests.


That is why legislative guardrails matter. Transparency about how platforms operate, independent oversight, and carefully considered legislation and regulation can help ensure that innovation does not come at the expense of user welfare. When legislation focuses on the design features of digital platforms rather than simply restricting access, it has the potential to encourage a healthier balance. In doing so, it allows technology companies to continue innovating and generating economic value while also recognizing their responsibility within the broader social ecosystem their products help shape. This is becoming even more important as social AI becomes more integrated into the everyday lives of youth, teens, and adults.


In the end, protecting youth and teens online is not just about controlling access; it's also about understanding the systems they are stepping into.



Digital Food For Thought


The White Hatter


Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech
