Health NZ Bans Staff from Using ChatGPT to Write Clinical Notes
Health New Zealand (Te Whatu Ora) has banned staff from using ChatGPT and similar AI tools to write clinical notes, citing patient privacy and data security concerns.
The Directive
Health NZ instructed healthcare workers across the country to stop using generative AI for clinical documentation, including patient medical records, clinical notes and observations, treatment plans and summaries, and referral letters.
The Concerns
- Data privacy: Patient information entered into ChatGPT could be stored by OpenAI, used for model training, or accessed by third parties.
- Clinical risk: AI-generated clinical notes may contain hallucinations or fabricated information, miss critical details, create a false sense of thoroughness, or be difficult to audit or verify.
Why This Matters
- Growing trend: Healthcare organizations worldwide are grappling with AI usage policies
- Australia: Several hospital systems have implemented similar bans
- UK: NHS guidance restricts, but does not fully ban, AI-assisted clinical documentation
- US: HIPAA compliance questions remain unresolved for AI tools
The Tension
Healthcare workers are using ChatGPT because clinical documentation is time-consuming and contributes to burnout. AI tools can significantly speed up note-writing, and short-staffed hospitals face pressure to increase efficiency.
The ban highlights the fundamental tension between AI's efficiency benefits and the healthcare sector's strict privacy and accuracy requirements.
The story reached 79 points on Hacker News, with 27 comments.