When faced with a problem, it’s increasingly common for us to ask a chatbot. Within seconds of posing the question, we have an answer.
So it’s unsurprising people are turning to artificial intelligence platforms like Woebot or ChatGPT for health-related queries.
The uptick in the use of AI in psychotherapy includes tasks such as analysing transcripts of therapy sessions and educating students.
But can a chatbot be a virtual therapist?
YOU COMPLETE…LY UNDERSTAND ME
Whether it’s a physical or mental health problem, chatbots can provide quick solutions.
And they’re often delivered in an empathetic manner. This is thanks to the natural language processing capabilities built into most AI-driven chatbots.
Also known as NLP, the process involves analysing the spoken or written word for characteristics, traits or behaviours that could be linked to a human’s emotions or feelings.
This branch of AI enables chatbots to better understand prompts and to develop and deliver human-like responses.
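For readers curious about the mechanics, here is a minimal sketch of sentiment scoring in Python using the open-source NLTK library’s VADER analyser. It only illustrates the general idea of reading emotion from text – the user message and the reply rule are invented for this example, and the real systems behind tools like Woebot, ChatGPT or Bestie are far more sophisticated.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon")  # one-off download of the sentiment word list

analyser = SentimentIntensityAnalyzer()

# Hypothetical message a user might type to a wellbeing chatbot
message = "I've been feeling really anxious and I can't sleep."

# VADER returns negative, neutral, positive and compound scores
scores = analyser.polarity_scores(message)

# A toy rule: respond empathetically if the overall tone is negative
if scores["compound"] < -0.3:
    reply = "That sounds really tough. Do you want to talk about what's keeping you up?"
else:
    reply = "Thanks for sharing. How has the rest of your week been?"

print(scores)
print(reply)

In this toy version, the “empathy” is just a threshold on a sentiment score; modern chatbots use large language models to interpret context and generate responses, but the basic loop of analysing the user’s words and tailoring the reply is the same.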
The potential of this technology has proven to be a tempting topic for Australian researchers.
VIRTUAL BESTIES
In WA, scientists have been testing a chatbot using NLP to determine its potential as an affordable and accessible virtual therapist.
Murdoch University research analysed the performance of an AI-driven tool called Bestie. Bestie is designed to interact with patients by analysing the language in their questions and providing customised responses.
While still in the research stage, initial results found Bestie could potentially provide early mental health services to individuals struggling to access a psychologist.
But chatbots could be used for more than just patient welfare.
A HELPING ROBOT HAND
University of Melbourne researchers have flipped the idea of chatbots as virtual therapists on its head.
Digital Health Senior Lecturer Dr Simon D’Alfonso and his team have created a chatbot designed to help educate the next generation of psychologists.
Called Client101, its purpose is simple: act like a patient presenting with mental health issues so students can practise their skills without the need for a real patient or actor.
“Client101 primarily started off as just a way to simulate mental health clients,” says Simon.
“Students need practice, and sometimes all they have is their peers that they can practise with, and … their teachers don’t really have much time to pretend to be patients.”

Credit: Shantanu Kumar via Unsplash
PROMISING RESULTS
Initial results from the first Client101 trial were so positive the technology has been swiftly implemented.
“We got a few students to try it out, about 15 students, and we just did a qualitative analysis of some interviews with them and it was promising,” says Simon.
“This semester, we’re embedding Client101 into two University of Melbourne subjects – in a clinical psychology subject and an educational psychology subject – just to see how it fares in terms of being part of the curriculum.”
With psychology students requiring a minimum of 300 work placement hours a year, this technology could ease resource limitations.
Simon says it would also allow students to practise whenever and wherever they can.
“I think there’s promise there, and using a chatbot as a training tool is far less problematic than trying to use it as a virtual therapist.”
HAVE A CHAT…BOT
With the expensive fees and lengthy appointment waitlists of traditional therapy, the use of AI as an online psychologist is increasing.
Cost is the largest barrier to Australians accessing mental health services, and some patients wait up to 12 weeks to see a psychologist, according to the Australian Psychological Society.
Psychological distress among youth has more than doubled over the past decade, so it’s understandable people are turning to chatbots to get cheap, immediate help.

Credit: Nik Shuliahin via Unsplash
Like any new technology, chatbots come with a warning: users should be aware of the risks of taking health advice from what is ultimately a robot.
This warning is particularly important when it comes to therapy, which revolves around conversation and connection.
MORE THAN AN ONLINE CONNECTION
The humanisation of chatbots was first observed in the mid-1960s, when computer scientist Joseph Weizenbaum developed one of the world’s first chatbots, ELIZA.
ELIZA was used in an experiment designed to show the limitations of this type of system and the superficial nature of interactions between chatbots and humans.
But the experiment backfired.
“[Joseph] was a little surprised when the people around him that he tested became more immersed [and] involved than he thought they would be,” says Simon.
“What’s behind the screen is not some sentient conscious agent, but there’s a tendency of people to anthropomorphise in their interactions with systems such as chatbots.”
“There can be risks in that area … people developing these problematic rapports or connections with these systems, which ultimately can’t be truly reciprocated.”
This rapport between human and robot has since been dubbed ‘the ELIZA effect’. It has resulted in some people developing friendships or even romantic relationships with chatbots.
In extreme cases, the virtual connection has had serious consequences, like in 2021 when a 19-year-old broke into the grounds of Windsor Castle planning to assassinate Queen Elizabeth II after being encouraged by conversations with a chatbot.

Credit: Paul Hanaoka via Unsplash
NOT A REPLACEMENT FOR HUMAN CONNECTION
Despite the risks, the use of AI will become more common, including in therapy, where psychotherapists are already putting the tech into practice.
“When you think about psychotherapy, it’s an inherently language-based task,” says Simon.
“People [are looking] into the possibilities of using natural language processing to analyse transcripts of psychotherapy sessions to get insights into what exactly is happening and ways that a therapist might be able to improve what they’re doing.
“It’s not to be a replacement for the ground truths that I think humans are able to generate.
“Sometimes it’s simply just a matter of having a system because this is laborious work.”