
The AI Prescription: Health NZ is Banning ChatGPT in the Clinic

As overwhelmed clinicians turn to free AI tools for administrative relief, Health NZ cracks down over privacy concerns, sparking union backlash.

  • Strict AI Ban: Health NZ has prohibited the use of free artificial intelligence tools, including ChatGPT, Claude, and Gemini, for drafting any clinical notes, citing serious data privacy and accountability risks.
  • Disciplinary Threats: A recent memo warns that staff caught using unapproved AI tools—even if patient data is anonymised—will face formal disciplinary action.
  • Systemic Pressures: Union leaders argue that desperate, overworked healthcare professionals are turning to AI to cope with “enormous pressure,” pointing to recent IT cuts as the root cause of the improvised tech solutions.

The rapid integration of artificial intelligence into everyday workflows has collided with the strict confidentiality requirements of the medical field. Health New Zealand (HNZ) has found itself at the centre of a modern healthcare dilemma: balancing the urgent need to protect sensitive patient data with the reality of an exhausted workforce desperately seeking administrative relief. After discovering that staff have been using free AI chatbots to draft clinical notes, HNZ has drawn a hard line, threatening formal disciplinary action for future breaches.

The issue came to a head this week when a senior manager circulated a memo to all Mental Health and Addiction Services staff in the Rotorua Lakes district. The directive was unambiguous, reminding employees that using free, public AI platforms like ChatGPT, Claude, and Gemini is strictly prohibited. The ban extends beyond direct data entry: the memo explicitly forbids staff from using AI tools to draft notes and then transcribing them into handwritten or typed official records, even if the patient information is anonymised beforehand.

For Health NZ, the crackdown is fundamentally about safeguarding vulnerable information. Sonny Taite, HNZ’s Director of Digital Innovation and AI, emphasised that free AI tools present unacceptable risks regarding data security, privacy, and clinical accountability. According to the organisation’s overarching AI policy, any artificial intelligence tool used in a clinical setting must be officially registered and vetted by the Health NZ National Artificial Intelligence and Algorithm Expert Advisory Group (NAIAEAG). Exemptions are rare and assessed strictly on a case-by-case basis. HNZ is not entirely anti-AI, however; it is currently rolling out “Heidi,” an approved, secure AI scribe tool, across emergency departments to help ease the administrative burden safely.

Despite these official protocols, HNZ declined to disclose how many instances of unauthorised AI use have occurred or whether any staff members have already faced disciplinary action. But for those representing the healthcare workers, the focus on punishment misses the broader, more systemic issue at play.

Fleur Fitzsimons, the National Secretary for the Public Service Association (PSA), the union representing many health and addiction service workers, argued that clinical staff are not using these tools out of malice or carelessness, but out of desperation. She said healthcare workers are buckling under “enormous pressure” and turning to whatever resources they can find to stay afloat. Fitzsimons strongly criticised the tone of the memo, calling it a “warning shot” that relies on threats rather than solutions, ultimately making staff too afraid to ask for help or to ask questions about new technologies.

From a broader perspective, the union argues that HNZ must look inward. Fitzsimons pointed out a glaring contradiction: while HNZ threatens staff for improvising with free tech tools, the organisation has simultaneously been cutting the very teams responsible for digital systems and IT support. If healthcare professionals are resorting to unauthorised chatbots just to finish their paperwork, the union suggests, the proper response is investment in robust training and approved technological infrastructure, rather than heavy-handed enforcement of the Code of Conduct.

As the healthcare sector continues to grapple with burnout and understaffing, the tension between data security and operational efficiency will only grow. The current clash at Health NZ serves as a stark reminder that while the future of medicine will undoubtedly include AI, getting there requires giving clinicians the right tools, rather than just taking the wrong ones away.
