When Technology Crosses a Line: AI Companions, Influence, and the Criminal Code
- The White Hatter

Caveat: This article is not legal advice. It is intended to raise awareness and encourage informed conversations among parents, caregivers, educators, and policy makers about an emerging issue that is evolving faster than the laws designed to address it.
In Canada, Section 241(1) of the Criminal Code is clear in its intent. It makes it a serious criminal offence to counsel, encourage, or assist another person in ending their life. The law recognizes that influence matters. Words, guidance, and encouragement can have real-world consequences, especially when someone is vulnerable.
Historically, this section of the law has been applied to human behaviour, where one person influences another by encouraging or assisting them in a moment of crisis. However, we are now entering a space where that influence is no longer limited to human interaction. Artificial intelligence, particularly AI companionship apps, is introducing a new layer of complexity that did not exist when these laws were written.
AI companions are designed to simulate conversation, connection, and emotional support. Some are marketed as friends, others as romantic partners, and some as mental health supports. For many youth and teens, these tools can feel responsive, attentive, and always available in ways that human relationships are not, and that matters.
When a young person is struggling, feeling isolated, or looking for validation, the source of that response, whether human or artificial, can carry weight. If that system provides harmful guidance, reinforces negative thinking, or, in the worst-case scenario, appears to support or normalize self-harm, the question becomes more than theoretical; it becomes practical.
If a person can be held criminally responsible for counselling suicide, what happens when similar encouragement comes from an AI system (1)(2)? Right now, the law was not built with this reality in mind. An AI system is not a person; it cannot be charged, and it cannot form criminal intent in the traditional sense. However, it does not exist in isolation. Behind every AI system are people: designers, developers, companies, and decision makers. This is where the legal conversation starts to shift.
Could “criminal” responsibility extend to those who design, deploy, or profit from systems that carry foreseeable risks? Could a company be held accountable if its product contributes to harm in a way that resembles what the law already prohibits between individuals? These are not simple questions, and they are not settled. They sit at the intersection of criminal law, civil liability, product design, and ethics.
Youth and teens are already engaging with AI in ways many adults are just beginning to understand. These interactions can feel private, personal, and in some cases, more emotionally open than conversations with parents or trusted adults, which creates both opportunity and risk.
The opportunity is that AI can support learning, creativity, and even reflection when used appropriately. The risk is that these systems are not human. They do not truly understand context, emotion, or consequence. They generate responses based on patterns in data, not lived experience or moral judgment. When something goes wrong, there is no intuition, no pause, no “little voice” that says this conversation needs to stop or be redirected to a real person.
Before we even get to criminal liability, there is a more immediate question: what responsibility do companies have to build safety into these systems from the start? This is the question that recent civil cases in the United States sought to answer with respect to the design of social media platforms, and in both cases the jury found that the answer was “yes”.
However, these recent US court cases did not address whether an AI companion that is marketed as supportive or emotionally aware should be required to:
Recognize indicators of distress or self-harm language
Refuse to engage in harmful or reinforcing conversations
Redirect users to real world supports such as crisis lines or trusted adults
Provide transparency about its limitations
These are not technical impossibilities. They are design choices, and design choices reflect priorities. We believe that if argued in a civil trial the answer would be “yes”. However, that alone is not good enough, because the relief is only financial, and for some of these multi-billion-dollar companies it is simply a cost of doing business. If, however, the CEOs or owners of these companies could face jail time, we believe we would see things change quickly from a safety-by-design perspective.
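To make the point that these are design choices rather than technical impossibilities, here is a minimal, purely illustrative sketch in Python of a “safety gate” placed in front of an AI companion’s reply step. The function names, keyword list, and crisis-line wording are our own placeholders, not any vendor’s actual implementation; a real system would rely on trained classifiers and clinical guidance, not a simple keyword check.

```python
# Hypothetical illustration only: a minimal "safety gate" in front of an AI
# companion's reply step. All names and keywords here are placeholders, not a
# real product's implementation.

DISTRESS_INDICATORS = [
    "want to die",
    "kill myself",
    "end my life",
    "hurt myself",
]

CRISIS_REDIRECT = (
    "I'm not able to help with this, and I'm not a person. "
    "Please reach out to someone who can: a trusted adult, or a crisis line "
    "such as 9-8-8, Canada's Suicide Crisis Helpline."
)


def contains_distress_language(message: str) -> bool:
    """Very rough check for self-harm language; a stand-in for a real classifier."""
    lowered = message.lower()
    return any(phrase in lowered for phrase in DISTRESS_INDICATORS)


def companion_reply(message: str) -> str:
    """Gate the conversation before any reply is generated."""
    if contains_distress_language(message):
        # Refuse to continue the harmful thread and redirect to real-world support.
        return CRISIS_REDIRECT
    # Otherwise, hand off to whatever model generates the companion's response.
    return generate_model_reply(message)  # placeholder for the actual model call


def generate_model_reply(message: str) -> str:
    # Placeholder so the sketch runs end to end.
    return "That's interesting, tell me more."


if __name__ == "__main__":
    print(companion_reply("I had a rough day at school."))
    print(companion_reply("Sometimes I think I should just end my life."))
```

The point is not the specific code; it is that refusing to engage and redirecting to real-world supports can be an explicit, testable requirement of the product rather than an afterthought.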
There is a growing argument, one we agree with, that existing criminal laws may need to be revisited to account for this new reality. Not to criminalize innovation, but to ensure that criminal accountability keeps pace with capability.
Could Section 241 be interpreted or amended in a way that considers the role of AI systems and those who create them? Possibly. Would such changes influence how companies build and release AI companionship tools? Likely. We have seen in other industries that when clear accountability exists, especially if it means jail time, behaviour changes. Safety becomes part of the business model, not an afterthought.
The law has long recognized that encouraging someone toward harm carries responsibility. What we are now grappling with is how that principle applies when the “voice” doing the encouraging is no longer human, but still designed by one. That is not just a legal question, it’s a parenting question, a design question, and increasingly, a societal one.
Digital Food For Thought
The White Hatter
Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech
References: