Do you know where ‘shadow AI’ is already happening in your clinic - and what’s being pasted into it?
‘Shadow AI in healthcare’ rarely arrives as a formal project. It arrives quietly: a clinician between patients, under pressure, opening an AI tool and thinking: ‘Just rewrite this letter…’ ‘Just summarise this history…’ ‘Just help me word this message…’
And then - almost without noticing - patient information ends up in a system your clinic doesn’t control, producing text that may end up in the record, a referral, or a patient message.
Shadow AI isn’t a future risk: UK data suggests rapidly growing use of AI tools by clinicians at work, often without consistent guidance or training. A safer response than an outright ban is to create a ‘permitted use lane’ that staff will actually follow.
A UK survey analysis of over 2,100 GPs reported 28% already using AI at work, with a meaningful proportion using tools obtained independently (eg ChatGPT), alongside concerns about inconsistent guidance and training - a ‘wild west’ dynamic.
Private clinics are not immune. If anything, autonomy plus pace makes ‘quiet adoption’ more likely.
Why Bans Fail (and What They Incentivise Instead)
The instinctive leadership response is: ban it. It feels clean and decisive. In practice, it often creates the worst version of AI use.
Because bans don’t remove what drives the behaviour: documentation burden, inbox pressure, short appointment slots, patient expectations for rapid, polished communication.
So the need doesn’t disappear. It goes underground.
Bans incentivise three predictable outcomes:
- Underground use - People stop disclosing that they’re using AI; they don’t necessarily stop using it.
- Copy-paste habits - When the ‘safe’ option is slow, staff use the fastest option - often personal accounts and personal devices.
- Undocumented outputs - Text produced by AI gets inserted into letters, notes, and messages without any record of how it was created or checked.
From a safety and medicolegal perspective, this is the worst place to be: you carry the risk without the governance.
The Three Shadow-AI Behaviours You Should Assume Are Happening
If you run a clinic, assume these are already occurring - even in good teams with high standards.
1) Drafting Letters and Notes
This is the most common entry point because it feels ‘non-clinical’. In reality, the wording of a letter is often inseparable from clinical meaning.
Typical uses: making clinic letters clearer or more ‘patient friendly’, converting bullet points into prose, creating referral summaries.
The risk is not only confidentiality. It’s the ‘tidy but wrong’ output - the kind of error that reads plausibly and slips through when you’re tired.
2) Summarising Consultation History
This often starts sensibly: ‘help me create a timeline’. But history summarisation is where nuance is most easily flattened: denied vs not asked, suspected vs confirmed, contextual vs central, ‘patient worried about X’ vs ‘patient has X’.
It also nudges the tool into ‘thinking’ territory: a drafting aid becomes a quasi-decision-support tool without anyone explicitly agreeing to that shift.
3) Patient Messaging Templates
This is deceptively risky because it looks like administration. AI is used to draft: follow-up instructions, safety-netting advice, ‘what to expect’ messages, appointment reminders with more warmth or reassurance.
But messaging is where patient trust is most fragile: a slightly wrong phrase can sound dismissive, over-certain, or unsafe. And it can create complaints fast.
Build a ‘Safe Lane’ (Not a Fantasy Policy)
If you want to reduce risk, start with a different question: How do we make the safe behaviour the easiest behaviour?
A realistic ‘safe lane’ has five elements:
1) Approved Tools (Make Compliance Frictionless)
If the governed option is hard to access, people will default to the one in their pocket. Approved tools should be provisioned with:
- Organisational accounts (not personal logins)
- Clear data-handling and retention terms
- Appropriate security controls
- Clarity on sub-processors and storage
2) Approved Tasks (Be Specific)
‘Use AI responsibly’ is not guidance. It’s a slogan. Define permitted tasks with examples clinicians recognise.
3) Red-Line Tasks (A Short ‘Never’ List)
Ambiguity is unsafe. Keep the red lines blunt and short.
4) An Escalation Route (So Uncertainty Has Somewhere to Go)
Staff need to know: who to ask, how quickly they’ll get an answer, what to do in the meantime.
5) Data Protection and Governance That Match the Risk
AI adoption is still data processing. If you’re implementing AI tools in real workflows, you need proportionate governance around lawfulness, transparency, accuracy, and accountability.
Clinical Accountability Doesn’t Get Outsourced
Shadow AI thrives on a seductive myth: ‘It’s just administration.’ But the moment AI influences wording that goes into the clinical record, a referral, a patient instruction, or a safety-netting message - it becomes part of clinical care.
The GMC is clear in principle: professional standards still apply when using AI, clinicians remain responsible for decisions, and patients should be supported to make informed decisions with clear communication about uncertainty and limitations.
The MDU is similarly blunt: doctors remain responsible for accurate records, including notes transcribed by AI, and using AI without organisational approval and governance may create significant personal risk.
In practical terms, ‘human oversight’ must mean: AI may draft, AI may rephrase, AI may structure - the clinician reviews, corrects, and owns the final text.
If there isn’t time to review, there isn’t time to use it.
The Minimum Viable Policy Clinicians Won’t Ignore
Most AI policies fail because they try to cover everything and end up unread. A minimum viable ChatGPT policy for clinicians should fit on one page and include examples.
One-Page Policy Skeleton
- Purpose (one sentence): We use AI to reduce administration and improve clarity without compromising safety, confidentiality, or accountability.
- Golden rule: No patient-identifiable information goes into unapproved tools.
- Permitted uses (with examples): improving grammar/clarity of non-identifiable text; drafting generic patient information (non-patient-specific); turning clinician bullet points into a letter after de-identification (a simple pre-paste check sketch follows this list); formatting internal templates and SOPs.
- Prohibited uses (with examples): pasting identifiers or full histories into personal AI accounts; using AI to generate diagnoses, prescriptions, or triage outcomes outside an approved pathway; sending AI-drafted patient advice without clinician review.
- Escalation line: If unsure: pause and ask [named route]. Don’t guess.
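The golden rule and the de-identification step above are easier to follow if staff have a quick pre-paste check. The sketch below is an illustration only, using assumed patterns and names: regex matching is not de-identification, and an approved tool and workflow still apply. It simply shows the kind of ‘obvious identifier’ warning a clinic could wire into an internal form.

```python
import re

# Illustrative patterns only (assumed formats, not an approved de-identification method):
# NHS-number-like digit runs, UK-style dates, and UK postcodes. Pattern matching alone
# is NOT de-identification; this only catches obvious slips before text is pasted.
OBVIOUS_IDENTIFIERS = {
    "possible NHS number": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "possible date (eg DOB)": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "possible postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b", re.IGNORECASE),
}

def flag_identifiers(text: str) -> list[str]:
    """Return warnings for text that is about to be pasted into an AI tool."""
    return [label for label, pattern in OBVIOUS_IDENTIFIERS.items() if pattern.search(text)]

warnings = flag_identifiers("DOB 12/03/1987, NHS no 943 476 5919, postcode SW1A 1AA")
if warnings:
    print("Stop and review before pasting:", ", ".join(warnings))
```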
A Simple Decision Tree (That People Actually Use)
- Does this include patient-identifiable data? → Yes: approved tool/workflow only.
- Will the output go to the patient or into the record? → Yes: clinician review + correct + sign.
- Could this influence clinical judgement or risk? → Yes: treat as higher risk → escalate/approved pathway only.
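If you want to embed this tree in an intranet form, onboarding checklist, or request workflow, the three questions translate directly into code. The following is a minimal sketch under assumed names (DraftRequest and its fields are illustrative, not part of any real system), not a definitive implementation.

```python
from dataclasses import dataclass

@dataclass
class DraftRequest:
    """One proposed AI-assisted drafting task (field names are illustrative)."""
    contains_identifiable_data: bool          # any patient-identifiable information in the input?
    output_reaches_patient_or_record: bool    # will the text go to the patient or into the record?
    could_influence_clinical_judgement: bool  # could the output shape diagnosis, triage, or risk?

def required_controls(request: DraftRequest) -> list[str]:
    """Mirror the three questions: each 'yes' adds a control rather than replacing the others."""
    controls = []
    if request.contains_identifiable_data:
        controls.append("Approved tool and workflow only - no personal accounts or devices.")
    if request.output_reaches_patient_or_record:
        controls.append("A clinician must review, correct, and sign the final text.")
    if request.could_influence_clinical_judgement:
        controls.append("Higher risk: escalate and use an approved pathway only.")
    return controls or ["Low risk: permitted within the approved-tools list."]

# Example: an AI-drafted safety-netting message going to a named patient
for control in required_controls(DraftRequest(True, True, False)):
    print(control)
```

The design point is that each ‘yes’ adds a requirement rather than switching to a different rule, which keeps the logic short, auditable, and easy to explain in training.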
The Question That Matters
Shadow AI in healthcare is rarely malicious. It’s usually a coping strategy.
So the leadership question isn’t ‘How do we stop people using AI?’
It’s this: Do you want to discover AI use in your clinic through a complaint - or through a designed, governed pathway that makes safe practice the default?