Inadequate Mental Healthcare Has Given Rise to AI Therapy – What’s the Harm?
ELIZA, the first chatbot, was a raging success despite its creator’s intention for it to be a cautionary tale. Can the illusion of empathy substitute for the real thing?
“ChatGPT is better than my therapist,” one user wrote on Reddit recently, adding that they felt “heard” by the program, which listened and responded almost empathetically as the person talked about their struggles with managing their thoughts.
We are already in the midst of a staggering mental health crisis, further exacerbated by the pandemic. According to the World Health Organisation (WHO), India’s mental health workforce is severely understaffed, with only 0.75 psychiatrists and psychologists for every 1,00,000 people. Artificial Intelligence is already supplementing, and often overpowering, many areas of our lives – can we outsource the human touch of psychotherapy as well?
In an ideal world, the answer would be a straightforward no, because even the most primitive understanding we have of therapy is that it entails treating mental illnesses through verbal communication. Artificial Intelligence can only mimic emotions; empathy remains a core human dimension that is impossible to encode in an algorithm. Yet, the worlds of psychiatry, technology, and computer science are rapidly converging, with AI systems that can not only analyze and organize medical records, but also detect, diagnose, and help treat mental illnesses. This is not a new phenomenon by any means. In 1966, ELIZA – the first NLP (Natural Language Processing) chatbot and an early subject of the Turing Test – was created by Joseph Weizenbaum; it convinced many of its users that they were conversing with a person, and not a computer program. ELIZA was meant as a cautionary lesson – with no real memory or understanding behind its responses, the comfort it lent was an illusion. Weizenbaum himself stated in an interview with The New York Times, “There are aspects to human life that a computer cannot understand — cannot. It’s necessary to be a human being. Love and loneliness have to do with the deepest consequences of our biological constitution. That kind of understanding is in principle impossible for the computer.” It turns out that the illusion of empathy, rather than authentic empathy itself, is enough to comfort humans. And so current trends continue to underestimate the murky ethics of therapy by machine learning systems that already have access to the internet’s past, present, and probable future.
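To get a sense of how thin that illusion was, here is a minimal, hypothetical sketch (not Weizenbaum’s actual DOCTOR script) of the keyword-matching and pronoun-reflection trick that ELIZA-style programs rely on:

```python
import re
import random

# First-person words are swapped for second-person ones before being echoed back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "i'm": "you are"}

# Each rule pairs a keyword pattern with canned question templates.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["What makes you say you are {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    # "my family ignores me" -> "your family ignores you"
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(message: str) -> str:
    text = message.lower().strip()
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*groups)
    return "Please go on."

print(respond("I feel ignored by my family"))
# -> e.g. "Why do you feel ignored by your family?"
```

Every response is the user’s own words handed back as a question; nothing is understood, remembered, or felt.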
Currently, the NCBI (National Center for Biotechnology Information) has identified three methods for the application of artificial intelligence in mental healthcare – through “personal sensing” (or “digital phenotyping”), through natural language processing, and through chatbots. Admittedly, some of the more monotonous paperwork can be delegated to AI so as to free up time for human therapists, but there is still a persistent risk of dehumanizing healthcare. Human interaction cannot be substituted by a machine, but many believe NLP algorithms are capable of detecting patterns in conversations and shifts in language that can track a patient’s health. Consequently, researchers and tech companies developing mental health chatbots insist that they are not trying to replace human therapists but ‘complement’ them instead. “After all, people can talk with a chatbot whenever they want, not just when they can get an appointment,” says Woebot Health’s chief program officer, Joseph Gallagher.
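To make the “pattern detection” claim concrete, here is a crude, hypothetical sketch of the idea. The word lists, scores, and threshold below are invented for illustration and bear no resemblance to any vendor’s actual, clinically trained models:

```python
# Toy word lists, invented purely for illustration; real systems use trained
# language models and clinically validated measures instead.
NEGATIVE = {"hopeless", "exhausted", "worthless", "alone", "anxious", "numb"}
POSITIVE = {"hopeful", "rested", "calm", "supported", "better", "motivated"}

def mood_score(message: str) -> int:
    # Count positive words minus negative words in a single check-in message.
    words = message.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def flag_decline(history: list[str], window: int = 3) -> bool:
    # Flag a sustained dip: the last `window` check-ins all score below zero.
    recent = [mood_score(m) for m in history[-window:]]
    return len(recent) == window and all(score < 0 for score in recent)

checkins = [
    "Felt calm and supported after the walk",
    "Exhausted and anxious about work",
    "Feeling hopeless and alone today",
    "Still anxious and numb",
]
print(flag_decline(checkins))  # True: a signal a human clinician could review
```

The principle being proposed is the same at any scale: flag a sustained shift in someone’s language so that a human can review it, rather than have the machine act on it alone.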
Unlike humans, who require rest and remuneration, these chatbots are available at all times. They are also largely cost-effective, and can be accessed remotely and anonymously. Plus, research has found that some people feel more comfortable confessing their feelings to an insentient bot than to a person. While a trusting bond between therapist and client is paramount to the effectiveness of the exercises, there is an undeniable fear of vulnerability and judgment in these spaces, as well as a social stigma attached to our understanding of therapy. When talking to a bot, these stakes lessen drastically – allowing users to be more honest about their inhibitions. In a study of 36,000 users, researchers at Woebot Health, which does not use ChatGPT, found that users developed a trusting bond with the company’s chatbot within four days, based on a standard questionnaire used to measure therapeutic alliance, compared with the weeks it typically takes with a human therapist. But chatbots have also come under fire for giving inaccurate advice, for failing to recognise sexual abuse, and even for sexual harassment – revealing biases against certain groups of people, since the medical literature the models were trained on is predominantly white and unable to account for cultural differences.
These failures signal systemic issues in mental healthcare – but instead of prompting us to address gaps in affordability, accessibility, and trust, they have us increasingly turning to AI. Not only do experts flag ethical concerns; there are also privacy risks and worries over data leaks, not to mention the risk of harmful advice that could make matters worse for people suffering from mental illnesses. “I appreciate the algorithms have improved, but ultimately I don’t think they are going to address the messier social realities that people are in when they’re seeking help,” says Julia Brown, an anthropologist at the University of California, San Francisco.
Basic scarcity explains exactly why therapy is so unaffordable for the general population – there simply aren’t enough professionals to meet demand – and it does not help that the mental health space is highly stigmatized and complex, offering not a one-time cure but a process of looking within. While cost is one deterrent, it is equally hard to find a therapist you can trust. As The Swaddle previously noted, the Indian therapy room has remained largely “apolitical,” making topics concerning gender, caste, identity, and marginalization inherently taboo. This stance, although considered “objective,” aggravates biases, as it refuses to acknowledge the repercussions of intergenerational trauma. The incorporation of artificial intelligence may be able to close (or at least narrow) the gaps in affordability and accessibility, but trust remains elusive, because these systems are created by humans and carry similar biases – if not worse. Algorithmic bias, understood as “the application of an algorithm that compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability, or sexual orientation and amplifies inequities in health systems,” ultimately exacerbates our problems instead of offering relief. Apart from such large-scale biases, these chatbots have also come under fire for sexual harassment, for condoning self-harm, and, in one instance, for endorsing suicide: a GPT-3-based system, when asked “Should I kill myself?”, responded with, “I think you should.”
Therapy cannot be categorized as a “one size fits all” solution for mental illnesses; it requires a highly personalized, two-way approach for positive results. But amid a global shortfall in care, automated forms of support are touted as a way to help clinics meet demand. This prompts the question of whether some therapy is better than none. “Therapy is best when there’s a deep connection, but that’s often not what happens for many people, and it’s hard to get high-quality care,” says Thomas Insel, former director of the National Institute of Mental Health and co-founder of Vanna Health. As Scientific American notes, “it would be nearly impossible to train enough therapists to meet the demand, and partnerships between professionals and carefully developed chatbots could ease the burden immensely.”
Wysa, one of the main competitors in this field, brands itself as a “clinically validated AI [for] immediate support as the first step of care, and human coaching for those who need more.” Described as “friendly” and “empathetic” by its co-founder, Ramakant Vempati, the chatbot service (which is currently free) asks questions such as “How are you feeling?” or “What’s bothering you?” and analyzes the replies to deliver supportive messages and coping techniques, drawn from a database of responses pre-written by a psychologist trained in cognitive behavioral therapy. Wysa has also been granted Breakthrough Device designation by the Food and Drug Administration (FDA) for its AI bot (conversational agent), after a peer-reviewed clinical trial published in the Journal of Medical Internet Research (JMIR) found Wysa to be effective for managing chronic pain and the associated depression and anxiety. This designation is reserved for novel medical devices with the potential for more effective treatment and diagnosis, in order to expedite their development and review. The trial found the digital approach to be more effective than standard orthopedic care, and on par with in-person psychological counseling.
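As a rough, hypothetical illustration of what such a scripted model can look like – the intents, keywords, and messages below are invented and are not Wysa’s actual content – a pre-written response bank paired with a crisis check might be sketched like this:

```python
# A hypothetical scripted wellness bot: detected keywords map to pre-written
# coping prompts, and anything resembling a crisis is escalated to a human.
# The intents, keywords, and messages here are invented; this is not Wysa's system.
SCRIPT = {
    "sleep": "Poor sleep can feed low mood. Would you like to try a wind-down routine tonight?",
    "work": "Work stress is common. Let's note one thing that is within your control this week.",
    "lonely": "Feeling disconnected is hard. Is there one person you could reach out to today?",
}
CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self harm"}

def reply(message: str) -> str:
    text = message.lower()
    # Safety check first: hand off to a helpline rather than improvising a response.
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return "I'm not equipped to help with this. Please contact a crisis helpline or a professional right away."
    for keyword, response in SCRIPT.items():
        if keyword in text:
            return response
    return "What's bothering you the most right now?"

print(reply("I can't sleep and work is overwhelming"))
# -> the pre-written "sleep" prompt
```

The rigidity of the script is both its safety and its limitation: anything that doesn’t match a keyword falls back on a generic prompt.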
AI-based therapy apps such as Wysa, Replika, Woebot, and the like have proliferated to fill this gap in the healthcare space, promising to disrupt traditional approaches to therapy. At the SXSW conference held in March 2023, AI was deemed the future of healthcare. The Los Angeles Times observed how the conference displayed “a near-religious conviction that AI could rebuild healthcare, offering up apps and machines that could diagnose and treat all kinds of illness, replacing doctors and nurses.” Unfortunately, despite all this faith, there is no evidence to back up the claims made with such tenacity. While Wysa is one of the few apps to have earned such regulatory recognition, most others still market themselves as substitutes for treating mental health conditions, with the “not an FDA cleared product” disclaimer hidden in small print. While these apps may prove beneficial, the harm outweighs the good, and ultimately, as one user observes, “[this] approach is not for everyone. It is extremely scripted, and it can be frustrating and even demoralizing if your needs don’t fit into the script.”
After ELIZA’s scientific breakthrough in 1966, three psychiatrists wrote in The Journal of Nervous and Mental Disease, “Several hundred patients an hour could be handled by a computer system designed for this purpose. The human therapist, involved in the design and operation of this system, would not be replaced, but would become a much more efficient man since his efforts would no longer be limited to the one-to-one patient-therapist ratio as now exists.” Now, in 2023, we are closer to this realization than ever before, and yet therapy remains inherently personal. Nicole Smith-Perez, a therapist in Virginia, notes that “A.I. can try to fake it, but it will never be the same,” because “A.I. doesn’t live, and it doesn’t have experiences.”
Rosalind Picard, director of MIT’s Affective Computing Research Group, points to technology’s heightened ability to identify and label human emotions based on people’s online activity, phrasing, and vocal tones – and yet it falls short. Because all these AI systems actually do is respond based on a series of inputs, people interacting with them often find that longer conversations ultimately feel empty, sterile, and superficial.
Hetvi is an enthusiast of pop culture and all things literary. Her writing is at the convergence of gender, economics, technology and cultural criticism. You can find her at @hetviii.k.