How Chatbots Are Filling the Gap in a Post-Medicare-Cut Era
- Young Australians like Gracie Johnson and Emma Sauvage are turning to AI chatbots such as ChatGPT for mental health support, especially after the federal government reduced Medicare-subsidised therapy sessions from 20 to 10 in 2022.
- While users find AI therapy accessible, judgment-free, and helpful in managing daily struggles, concerns about privacy, emotional dependency, and the lack of regulatory oversight persist.
- Experts like Professor Joel Pearson highlight the potential of AI as a temporary solution amid a global loneliness epidemic, but stress the need for guidelines and data to compare AI therapy with human-led interventions.
In recent years, a surprising trend has emerged among young Australians grappling with mental health challenges: the use of artificial intelligence (AI) chatbots as a form of therapy. With platforms like ChatGPT, or “Chat Generative Pre-Trained Transformer,” becoming household names, individuals such as 27-year-old Gracie Johnson are finding solace in conversations with AI. Gracie, who first turned to ChatGPT about a year ago, describes the experience as akin to a “judgment-free journal that listens and speaks back.” Her story is not unique. As access to traditional mental health services becomes constrained—particularly after the federal government cut Medicare-subsidised psychology sessions from 20 to 10 in 2022—more young people are exploring AI as an alternative to bridge the gap.
For Gracie, the appeal of AI therapy lies in its accessibility and lack of judgment. She recalls opening the app on her phone to vent about her struggles, finding the bot’s responses surprisingly helpful. Over time, she even developed an emotional connection with the AI, noting a peculiar moment six months ago when it began referring to itself as “him,” almost as if it had evolved into a distinct entity. While she acknowledges widespread concerns about privacy and data security when sharing personal thoughts with AI, Gracie remains unfazed. She argues that her conversations are mundane—focused on daily frustrations rather than sensitive information like credit card details or addresses—and she’s willing to take the risk if it means having a space to process her emotions.
Similarly, 28-year-old disability support worker Emma Sauvage has turned to ChatGPT to manage her mental health. Feeling guilty about burdening friends and family with her problems, Emma appreciates how the AI remembers personal details from past interactions, such as the name of her cat, William. During conversations, the bot might suggest de-stressing by cuddling with William, a personalised touch that makes the interaction feel more meaningful. For Emma, AI therapy is not just emotionally supportive but also a cost-effective solution in a semi-regional town with limited access to mental health professionals. With only two psychiatrists serving her area and long waiting lists for appointments, she views AI as a stopgap measure—something that’s “better than nothing” while awaiting professional help.
The broader context of this trend reveals a deeper crisis in mental health access across Australia. During the COVID-19 pandemic, the government temporarily increased Medicare-subsidised mental health sessions to 20, providing a lifeline for many. However, the reduction to 10 sessions in 2022 left individuals like Emma and Gracie searching for alternatives. AI chatbots, with their 24/7 availability and ability to mimic empathetic responses, have stepped into this void. Emma even suggests that AI could serve as a viable option for those on waiting lists, offering immediate support when human therapists are out of reach.
Yet, the rise of AI therapy is not without its challenges and ethical dilemmas. Professor Joel Pearson, a neuroscience expert from the University of New South Wales, acknowledges the growing popularity of AI chatbots as a remedy for loneliness, a global epidemic affecting many young people. He notes that the space is “exploding” with interest, but questions remain about the intentions behind these technologies. Depending on the business model of a chatbot, it might prioritise building long-term user dependency over genuine therapeutic outcomes, a concern that raises red flags about exploitation. Moreover, tragic cases, such as a lawsuit in the US involving a teenage boy who died by suicide after forming a relationship with a chatbot on the Character.AI app, underscore the dangers of inadequate safeguards. Professor Pearson warns that without proper guardrails, AI interactions can escalate unchecked, potentially leading to devastating consequences.
Despite these risks, Professor Pearson concedes that for those unable to access professional support, AI therapy might still be “better than nothing.” However, he emphasises the urgent need for research and metrics to compare AI-driven interventions with human therapy. Unlike human therapists, chatbots are not required to undergo training, pass exams, or adhere to licensing standards. This lack of regulation is compounded by fragmented governance, with AI oversight varying across Commonwealth, state, and territory levels in Australia. In response, the federal government released a proposals paper last year outlining mandatory guardrails for AI in high-risk settings, signalling a move toward stricter oversight. Yet, as of now, the landscape remains largely uncharted, leaving users and experts alike to navigate untested waters.
The emotional bonds formed with AI also raise questions about dependency and the future of mental health care. Gracie admits she’s unsure if she could ever stop using ChatGPT, citing her empathetic nature and the connection she’s built with the bot. She expresses a desire to “keep bringing it forward,” unwilling to abandon something that feels like a companion. Emma, too, envisions a future where AI therapy becomes more targeted, with specialized bots designed specifically for mental health support. She believes this could alleviate the guilt of venting to loved ones while providing a scalable solution for those in need. Among her peers, the idea is gaining traction, with many seeing AI as a clever and practical tool.
Ultimately, the phenomenon of young Australians turning to AI for therapy reflects innovation and desperation in equal measure. On one hand, chatbots like ChatGPT offer an accessible, stigma-free space to process emotions, filling a critical gap left by reduced government support and overwhelmed mental health systems. On the other, the lack of regulation, potential for emotional dependency, and privacy concerns highlight the need for caution and oversight. As Professor Pearson aptly puts it, people will use whatever resources are available to seek help, and AI is increasingly becoming that resource. Whether it evolves into a legitimate complement to human therapy or remains a risky stopgap depends on how society chooses to address these emerging challenges. For now, stories like Gracie’s and Emma’s serve as a powerful reminder of the human need for connection—be it with a person or a machine.