
Do you know where ‘shadow AI’ is already happening in your clinic - and what’s being pasted into it?

‘Shadow AI in healthcare’ rarely arrives as a formal project. It arrives quietly: a clinician between patients, under pressure, opening an AI tool and thinking: ‘Just rewrite this letter…’ ‘Just summarise this history…’ ‘Just help me word this message…’

And then - almost without noticing - patient information ends up in a system your clinic doesn’t control, producing text that may end up in the record, a referral, or a patient message.

Shadow AI isn’t a future risk: UK data suggests rapid growth in clinicians using AI tools at work, often without consistent guidance or training. A safer approach is to create a ‘permitted use lane’ that staff will actually follow.

A UK survey of over 2,100 GPs reported that 28% were already using AI at work, with a meaningful proportion using tools obtained independently (e.g. ChatGPT), alongside concerns about inconsistent guidance and training - a 'wild west' dynamic.

Private clinics are not immune. If anything, autonomy plus pace makes ‘quiet adoption’ more likely.

Why Bans Fail (and What They Incentivise Instead)

The instinctive leadership response is: ban it. It feels clean and decisive. In practice, it often creates the worst version of AI use.

That's because bans don't remove what drives the behaviour: documentation burden, inbox pressure, short appointment slots, and patient expectations of rapid, polished communication.

So the need doesn’t disappear. It goes underground.

Bans incentivise three predictable outcomes: use moves to personal devices and personal accounts, outside the clinic's visibility; staff stop disclosing AI involvement, so outputs go unreviewed; and when something goes wrong, there is no route to report it or learn from it.

From a safety and medicolegal perspective, this is the worst place to be: you carry the risk without the governance.

The Three Shadow-AI Behaviours You Should Assume Are Happening

If you run a clinic, assume these are already occurring - even in good teams with high standards.

1) Drafting Letters and Notes

This is the most common entry point because it feels ‘non-clinical’. In reality, the wording of a letter is often inseparable from clinical meaning.

Typical uses: making clinic letters clearer or more ‘patient friendly’, converting bullet points into prose, creating referral summaries.

The risk is not only confidentiality. It’s the ‘tidy but wrong’ output - the kind of error that reads plausibly and slips through when you’re tired.

2) Summarising Consultation History

This often starts sensibly: ‘help me create a timeline’. But history summarisation is where nuance is most easily flattened: denied vs not asked, suspected vs confirmed, contextual vs central, ‘patient worried about X’ vs ‘patient has X’.

It also nudges the tool into 'thinking' territory: a drafting aid becomes a quasi-decision-support tool without anyone explicitly agreeing to that shift.

3) Patient Messaging Templates

This is deceptively risky because it looks like administration. AI is used to draft: follow-up instructions, safety-netting advice, ‘what to expect’ messages, appointment reminders with more warmth or reassurance.

But messaging is where patient trust is most fragile: a slightly wrong phrase can sound dismissive, over-certain, or unsafe. And it can create complaints fast.

Build a ‘Safe Lane’ (Not a Fantasy Policy)

If you want to reduce risk, start with a different question: How do we make the safe behaviour the easiest behaviour?

A realistic ‘safe lane’ has five elements:

1) Approved Tools (Make Compliance Frictionless)

If the governed option is hard to access, people will default to the one in their pocket. Approved tools should be provisioned so that compliance is frictionless: available on clinic devices, logged in by default, and covered by an appropriate data-processing agreement.

2) Approved Tasks (Be Specific)

‘Use AI responsibly’ is not guidance. It’s a slogan. Define permitted tasks with examples clinicians recognise.

3) Red-Line Tasks (A Short ‘Never’ List)

Ambiguity is unsafe. Keep the red lines blunt and short.

4) An Escalation Route (So Uncertainty Has Somewhere to Go)

Staff need to know: who to ask, how quickly they’ll get an answer, what to do in the meantime.

5) Data Protection and Governance That Match the Risk

AI adoption is still data processing. If you’re implementing AI tools in real workflows, you need proportionate governance around lawfulness, transparency, accuracy, and accountability.

Clinical Accountability Doesn’t Get Outsourced

Shadow AI thrives on a seductive myth: 'It's just administration.' But the moment AI influences wording that goes into the clinical record, a referral, a patient instruction, or a safety-netting message, it becomes part of clinical care.

The GMC is clear in principle: professional standards still apply when using AI, clinicians remain responsible for decisions, and patients should be supported to make informed decisions with clear communication about uncertainty and limitations.

The MDU is similarly blunt: doctors remain responsible for accurate records, including notes transcribed by AI, and using AI without organisational approval and governance may create significant personal risk.

In practical terms, ‘human oversight’ must mean: AI may draft, AI may rephrase, AI may structure - the clinician reviews, corrects, and owns the final text.

If there isn’t time to review, there isn’t time to use it.

The Minimum Viable Policy Clinicians Won’t Ignore

Most AI policies fail because they try to cover everything and end up unread. A minimum viable ChatGPT policy for clinicians should fit on one page and include examples.

One-Page Policy Skeleton

Approved tools: the named tools staff may use, and where to access them.
Approved tasks: drafting, rephrasing, and structuring text - with examples clinicians recognise.
Red lines: no patient-identifiable data in unapproved tools; no diagnostic or treatment decisions; nothing filed or sent without clinician review.
Escalation: who to ask, how quickly they'll answer, and what to do in the meantime.
Accountability: the clinician reviews, corrects, and owns every final text.

A Simple Decision Tree (People Actually Use)

Does the task involve patient-identifiable information? If yes, use an approved tool - or no tool at all.
Is the task on the red-line list? If yes, stop.
Do you have time to review and correct the output? If no, don't use AI for it.
Otherwise: let AI draft, then review, correct, and own the final text.

The Question That Matters

Shadow AI in healthcare is rarely malicious. It’s usually a coping strategy.

So the leadership question isn’t ‘How do we stop people using AI?’

It’s this: Do you want to discover AI use in your clinic through a complaint - or through a designed, governed pathway that makes safe practice the default?

#ClinicalGovernance #InformationGovernance #PatientSafety #AIinHealthcare

Ready to Grow?

Book a Discovery Call to see how AI-powered systems can help your practice grow faster, run leaner, and maximise impact.
