Health NZ Bans Staff from Using ChatGPT to Write Clinical Notes

26 March 2026
Health New Zealand banned staff from using ChatGPT for clinical notes over patient privacy and data security concerns. The directive highlights the tension between AI efficiency gains and healthcare's strict privacy requirements.

New Zealand Health Authority Prohibits ChatGPT for Medical Documentation

Health New Zealand (Te Whatu Ora) has banned staff from using ChatGPT and similar AI tools to write clinical notes, citing patient privacy and data security concerns.

The Directive

Health NZ instructed healthcare workers across the country to stop using generative AI for clinical documentation, including patient medical records, clinical notes and observations, treatment plans and summaries, and referral letters.

The Concerns

Data Privacy: Patient information entered into ChatGPT could be stored by OpenAI and used for model training, or accessed by third parties.

Clinical Risk: AI-generated clinical notes may contain hallucinations or fabricated information, miss critical details, create a false sense of thoroughness, or be difficult to audit or verify.

Why This Matters

The Tension

Healthcare workers are using ChatGPT because clinical documentation is time-consuming and contributes to burnout. AI tools can significantly speed up note-writing, and short-staffed hospitals face pressure to increase efficiency.

The ban highlights the fundamental tension between AI's efficiency benefits and the healthcare sector's strict privacy and accuracy requirements.

The story reached 79 points and drew 27 comments on Hacker News.
