
AI in the Doctor’s Exam Room: When Adoption Moves Faster Than Consent

Writer: The White Hatter


Caveat - We are encouraged by the role AI is beginning to play in advancing medical research, improving diagnostic imaging, and supporting treatment decisions. It is equally notable to see how some family physicians are now integrating AI into the doctor–patient interaction itself. As this article explores, however, the convenience these tools offer can also introduce new vulnerabilities, particularly when it comes to privacy and informed consent.


Recently, a Canadian LinkedIn poster (Taryn E) noticed a sign in a medical clinic explaining that an AI scribe may be used during patient appointments to assist with documentation. The language focused on efficiency, privacy, and better care. Most patients would likely find this reassuring, and many would probably consent without hesitation.


However, the sign understandably gave the poster, who is AI-literate, pause, not because AI in healthcare is inherently harmful, but because of what the sign did not explain.


The poster posed some reasonable questions: where the audio is processed, whether it is stored and for how long, whether humans ever review it, whether patient data is used to improve AI models, and what terms like “HIPAA-compliant” actually mean in real-world practice. None of this suggests bad intent. It does, however, highlight a growing gap between AI adoption and meaningful informed consent.


AI is moving quickly, faster than many governance structures can adapt. In that gap, transparency is too often reduced to reassurance rather than explanation. This is not a failure of doctors; it is a systems and implementation challenge.


Doctors are medical professionals. They are trained to diagnose, treat, and care for patients. They are not, by default, experts in AI architecture, cloud infrastructure, data jurisdiction, or privacy law.


In many clinical environments, AI tools are introduced at the system or organizational level to increase efficiency (1)(2). Individual clinicians often inherit these tools with limited opportunity to deeply question vendor claims about privacy, storage, secondary use, or cross-border data flows. This is not negligence; it is what happens when adoption outpaces the time, training, and governance supports clinicians are given.


The issue here is not that doctors are careless with privacy. It is that many clinical environments lack the capacity, resources, or clarity needed to fully evaluate AI systems before they reach the exam room. Compliance is not the same as ethical design, and an AI system can meet regulatory requirements and still fail patients.


Consent that is passive, implied, or bundled into general intake forms is not meaningful consent. Opt-out processes that are awkward, unclear, or socially uncomfortable are not real choices. Privacy statements that reassure without explaining data flows do not empower patients to make informed decisions.


If AI is going to be embedded in healthcare, governance must be proactive rather than retrofitted. Consent must be informed rather than assumed. This is not a legal technicality; it is a clinical and human issue.


Once patients understand that their words may be transcribed, processed, or handled by AI systems, the dynamic of the clinical encounter could change. People may self-edit, omit details, or simplify explanations. This is a well-documented human response to perceived monitoring. Even when the technology is benign, awareness of recording and processing can influence disclosure.


In the past, when a doctor walked into the exam room and closed the door behind them, it was you and your doctor having a private and often intimate discussion, with no one else listening. AI changes this relationship, and when trust shifts, the quality of care can shift with it.


Don’t get us wrong, most patients reasonably expect their visit to be documented. That has long been part of healthcare. What is changing is what documentation now means. There is a significant difference between consenting to clinical note-taking and consenting to AI-mediated transcription, processing, and interaction with large language models. These systems are often vendor-controlled and not fully transparent to either patients or clinicians.


Even when audio is not retained, the involvement of an AI model introduces new questions about data handling, model interaction, secondary use, auditability, and error correction. These are ethical distinctions, not just technical ones.


If patients would answer differently when these details are explained clearly, then general consent is no longer sufficient.


“Transcription” is rarely just transcription. The word sounds simple; in practice, however, it often refers to cloud-based, LLM-backed processing that is not hosted locally. In many cases, the infrastructure may sit outside Canada. From a Canadian perspective, this raises legitimate data sovereignty questions. Once health data touches foreign infrastructure, different legal regimes may apply, including lawful access frameworks that are outside Canadian control. These risks are not hypothetical, even if no harm has yet occurred.


There is no technical reason AI transcription cannot be done securely, domestically, and with strong governance. The concern is not whether it is possible, but whether it has actually been done. Data protection impact assessments, data flow mapping, and clear risk evaluations should be standard practice before deployment, not something patients discover only after asking. Saying data is “stored in Canada” is not the same as saying it is governed, controlled, and protected in ways that respect patient autonomy. Accountability needs to be defined before something goes wrong!


AI medical scribes introduce new points of failure alongside their benefits (3). Transcription errors happen, context can be missed, diagnoses can be mis-recorded, and notes may enter a medical record before being carefully reviewed (4). When that happens, accountability must be clear. Who is responsible if an error affects care? The clinician, the clinic, the health authority, or the vendor? Patients should not have to navigate that ambiguity after harm occurs.


Clear expectations matter. At a minimum, governance should address transcript review before records are finalized, error correction processes, and documented responsibility across the care team and vendors. Without this clarity, risk is quietly shifted onto patients.


This is why a privacy policy must be understandable to be meaningful. Healthcare has long recognized that complex medical information must be translated into plain language so patients can make informed decisions. We would suggest that the same standard must apply to AI.


Terms like “HIPAA-compliant” or “privacy-first” mean little to most patients. Consent is not meaningful if people cannot explain what they agreed to in plain terms.


This responsibility does not fall on clinicians alone. It is shared across the entire care environment, from front desk intake to IT teams to leadership. Everyone involved in the patient journey plays a role in ensuring consent is understood, not merely collected.


A simple rule used in policing for an “informed consensual search” applies here: if a suspect cannot explain what they consented to before the search, they did not meaningfully consent. The same principle could be applied to the medical profession when it comes to the informed, consensual use of AI scribe software in a medical office.


This is about governance, not slowing progress. We are not opposed to AI in healthcare. Used well, it can reduce administrative burden, support clinicians, and improve documentation quality. Burnout in the medical field, especially among family doctors, is real, and efficiency matters. The issue is not speed itself; it is deploying new technology without equivalent investment in consent, clarity, and governance.


Trust is not an obstacle to innovation. It is a prerequisite for sustainable adoption. If acceleration undermines patient autonomy, clarity, or agency, that is not progress. It is risk being externalized onto the very people the system exists to serve.


Healthcare is not a race to deploy tools; it is a responsibility to get them right. AI will continue to move quickly. Our ethics, consent practices, and governance structures need to move with the same care.



Digital Food For Thought


The White Hatter


Facts Not Fear, Facts Not Emotions, Enlighten Not Frighten, Know Tech Not No Tech



Post Script


We know that we have several doctors who follow us here at the White Hatter. The College of Physicians and Surgeons of British Columbia (CPSBC), where we live, currently does not have a rigid, one-size-fits-all policy prohibiting or mandating AI scribes. Instead, it has interim ethical guidance that places responsibility on physicians and covers much of what we speak to in this article:


  • Protect patient privacy and confidentiality.
  • Obtain meaningful informed consent and be transparent with patients about AI use.
  • Maintain professional accountability for documentation and clinical decisions.
  • Understand how the specific AI tool works and ensure it is appropriate for their practice.


As AI evolves, CPSBC has indicated it may further update or formalize standards related to AI use in clinical practice. (5)(6)(7)



References:













