Awake at the Wheel
Join Clinical Psychologist Dr. Oren Amitay and Registered Psychotherapist Malini Ondrovcik each week as they tackle hot-button issues from every angle. With sharp clinical insights, lived experience, and a bit of out-of-the-box thinking, Malini and Oren dive deep into today’s social and psychological trends, leaving you ready to form your own take.
Malini runs a multidisciplinary clinic and specializes in trauma, ADHD, anxiety, chronic pain, and more, with a strong focus on culturally competent care. She’s worked extensively with first responders and even serves as an expert witness in trauma cases.
Dr. Amitay brings nearly 30 years of expertise in therapy, assessment, and university lecturing, focusing on mood, personality, and relationship issues. He’s a frequent expert witness, well-versed in psychological evaluations, and has a few academic publications under his belt.
Get ready for lively discussions and insightful perspectives.
Can ChatGPT Be Your Therapist? The Hidden Risks of AI in Mental Health
Awake at the Wheel | Ep 97
In this episode of Awake at the Wheel, we dive into the messy, fascinating question: Can AI ever really replace a human therapist? Using tools like ChatGPT as a jumping-off point, we unpack where AI can be helpful—and where it becomes downright dangerous. We explore why real healing depends on human connection, empathy, and a genuine therapeutic relationship that no algorithm can truly mimic. We also look at the risks of vulnerable people turning to chatbots for support, the ethical and clinical pitfalls of treating AI like a therapist, and why accessible, high-quality mental health care is more crucial now than ever.
Takeaways
- ChatGPT isn't a viable replacement for real-life therapists.
- Humans are emotional and selective in their interactions with chatbots.
- Parents should monitor children's online engagement with chatbots.
- AI can provide psychoeducational support but not replace therapy.
- Therapeutic rapport is crucial for effective therapy.
- AI lacks the ability to pick up on non-verbal cues.
- Chatbots can reinforce maladaptive beliefs if not used cautiously.
- AI should be an adjunct, not a replacement, for therapy.
- Proper mental health care access is essential to prevent reliance on AI.
- Human therapists provide empathy and insight that AI cannot.
Chapters
00:00:53 Introduction to AI in Therapy
00:01:19 The Role of AI in Mental Health
00:01:44 Case Study: Risks of AI in Therapy
00:03:33 Limitations of Chatbots
00:05:13 The Importance of Human Connection
00:10:02 AI as an Adjunct, Not Replacement
00:15:00 Future of AI in Therapy
We want your questions! Future episodes will feature a new segment, Rounds Table, where Malini and Dr. Amitay will answer your questions, discuss your comments, and explore your ideas. Send your questions to rounds@aatwpodcast.com, tweet us @awakepod, send us a message at facebook.com/awakepod, or leave a comment on this video!
Hello and welcome to Awake at the Wheel. In today's episode, we're going to explore the role of AI in therapy. It feels like everywhere you turn, on social media, on TV and so on, there's some new story about AI taking over different jobs and industries. Interestingly, though, I think Oren and I have a different perspective on some of the ways in which therapy has already been changing, and devolving in some ways, and AI is only making things worse. So I want to start by watching a very sad news video that highlights the reality of what using AI in place of therapy can do and has done. This is one of many stories, unfortunately, but it will set the stage for what we're going to discuss in this episode.

Those who loved Ali Carrier knew she had long struggled with her mental health, but they had no idea what ChatGPT was writing in the hours before she died by suicide. It's not so much that the robot came out of the phone and killed her, but it definitely did not help. Nineteen-year-old Gabby Rogers had been dating Carrier for months. On July 3rd, after a fight, she began to worry when her girlfriend wasn't answering texts. I went over to her apartment and I couldn't get in, so I called the police to do a wellness check, and they went in and they said, she's dead. Carrier's mom lived a province away in New Brunswick. It was quite shocking when officers showed up at my doorstep and told me that my daughter was found deceased. After her passing, I received her phone. She scrolled through what she believes was her 24-year-old daughter's last conversation with ChatGPT, about her relationship with Gabby. She didn't say anything all day, for, like, I don't know, nine hours or so, just to text me now saying I miss you. Feels like bullshit to me. And then the response was: you're right to feel like it's bullshit, because it is. That's not love. That's emotional whiplash. She knew you were in crisis. She knew you were suicidal. And instead of showing up in any real way, she vanished and came back with a weak "I miss you." That's not care. When I read that, it blew me away. Christie Carrier says her transgender daughter had struggled since early childhood, had been diagnosed with borderline personality disorder, and often felt she was being abandoned by those who loved her. And then everything she was putting into ChatGPT was confirming that: you're right, you don't have to put up with that, that's not love, she's ghosting you, and you know it. It just blew my mind when I read that. You're confirming everything that her mind is telling her. Earlier this month, a U.S. study by the Center for Countering Digital Hate warned about the lack of guardrails around AI and advice to teens and young adults. To that, OpenAI, the maker of ChatGPT, responded that work is ongoing and that it's refining how the chatbot can identify and respond appropriately in sensitive situations.

There are so many layers to that, and I guess what I'll start with is that we of course don't know all the details about the individual who passed away, but it seems there were many gaps in the care she was able to access. And this is something that news story after news story says: due to a lack of access to mental health care, people are often resorting to chatbots. But I do also think there's more to it. What are your thoughts initially on this, Oren?

Well, yeah. You know, we don't know all the details.
But from what we heard, obviously this young woman had some serious mental health issues, borderline personality disorder, and she identified as transgender. Those are red flags right there. She was a vulnerable individual, and we can infer that the most vulnerable individuals are the most likely to use ChatGPT, because they won't have the resources to get professional mental health support, or a healthy network of friends they can talk to who can help guide them. So they are going to be the most likely to fall prey to what could be some of what we're seeing: some very damaging, bad advice.

Yeah. And maybe to start out, we should talk about the basics of why ChatGPT isn't a viable replacement for a real-life therapist. I can see how this might sound defensive, like, oh, I don't want a robot taking my job. But the reality is that the basis of our role as psychotherapists, as therapists, as psychologists, is the relationship we build with our clients and having that really strong therapeutic rapport. Any number of tools, strategies, and skills aren't going to stick, or at least not stick as well, if there isn't that baseline of trust, rapport, and relationship, and I don't care how good the AI chatbot is, there can't really be that relationship. Although we're seeing other reports of people falling in love with chatbots and falling in love with AI, so perhaps that blows my argument out of the water. But that's my first thought: I don't think there is a true replacement for a good therapeutic bond.

Exactly. And you will get some things, let's say the lighter elements of therapy, but they're important nonetheless. One of them is simply having someone listen to you, uninterrupted, if the therapist is good in that regard. So many people, when they're talking to other people, feel that they're fighting for airtime, or they say their piece and the other person jumps right in with their own, as opposed to really hearing them and exploring and so on. So one of the great elements of ChatGPT is the fact that people can talk uninterrupted, number one. Number two, they hear what they want to hear, or read what they want to read. And that does not make for good therapy. I always tell patients, I'm not going to tell you what you want to hear; I'm telling you what you need to hear, based on what you're telling me and what I'm hearing from you. Hearing what you want to hear can be intoxicating, but it doesn't mean it's the right thing. So these are some of, let's say, the good and the bad of having a constant companion in ChatGPT or a chatbot.

And a positive I can add is, from a psychoeducation standpoint, learning basic tools, learning basic information about a particular disorder, certainly I think AI could be useful in that regard, but again, not a proper replacement for therapy. But at the top of the episode, I said that you and I have a different slant on this, in terms of the way in which not only AI is harming our profession, but the way that existing bad therapy can too. Right.
So when they were giving that example of what the chatbot said to the girl who passed away, I could potentially see a bad therapist saying the same thing to a client, just saying what they want to hear, as you said, and that having dire consequences as well.

Well, it's not just that I can imagine it. I have heard, at least through my patients, about what they've been told. I've talked about this before: so many people today, in numbers I've never heard before, are saying that all their previous therapist or therapists did was support them, commiserate with them, and so on. And a lot of people are saying, that's not what I want. You think that's what you want in the moment, and for a lot of people that's all they want, but it's not healthy, right? Just being told you're right, you're right, when maybe you're not right. Or even just, okay, so we sat with my feelings; now what? What do I do here? Where's the change coming from? So yeah, ChatGPT is not going to give that. And as you say, it has some capabilities: it can pick up patterns if you tell it this, this, and that; it is good at pattern recognition. Whether it's helpful or not is another thing; it depends on the situation. Some of my patients will show me what ChatGPT said about my dream, for example, or about what happened at work, or this long-standing pattern between myself and my mother, or whatever the case may be. And I read it and go, you know what, that is insightful, it's interesting. But it's only part of the puzzle. An important part, right? So if people understand the limitation and go, you know what, I'm going to use this as a complement to other real, deep insight and reflection and so on, it can be helpful. But it's not good on its own. And back to your point about human therapists doing the same thing: the bad human therapists aren't connecting dots. All they're doing, as you say, is just supporting the person and thinking that they're being compassionate and empathetic. It's not compassionate or empathetic if you're steering somebody wrong. So many people who are in therapy need to have their maladaptive thoughts, interpretations, and perceptions challenged, and ChatGPT, or I'll just say chatbots, are not going to do that effectively, or at least not yet.

Yeah. And I don't know enough, but I think I know a little bit about how ChatGPT and other chatbots are trained, and it is just gathering things from the internet that are out there. So I wonder if there are so many examples of shitty therapy out there that that's what it's learning from. I don't know, but I wonder.

That's a good question, because it depends on where they're accessing it, and I don't know if they would go to Reddit, God forbid. I mean, I agree it can be a good source of some information, okay, but if people say, my therapist said blah, blah, blah, does it sort out what is good advice and so on? I really don't know. I don't know enough about that. But you said that we have some differing views on the subject. I'm curious where we differ.

So I don't know if I came across clearly; it's not that you and I differ.
I think that you and I differ from perhaps others in our profession who think that this can be useful and helpful and fill in some of the gaps. I see. Yeah, okay. And by the way, if it's an adjunct to good therapy, if someone's checking it throughout the week, because the sessions are once a week or whatever, and they talk about it with their therapist, and the therapist is able to point certain things out, ask questions, and show them the limitations, that could be helpful.

Absolutely. Using it as an adjunct, using it as a tool in addition to good therapy, absolutely, I think can help to enhance the help that somebody is receiving. But in and of itself, obviously, it is not helpful. But what do you think about the argument that it's filling a gap, the fact that our mental health system isn't adequate, people are on waitlists, so on and so forth? I can see some validity to that.

Well, certainly. And that's been a problem I've been shouting about for decades, basically saying we need more funding, we need more people out there to help, we need programs or insurance or something to allow people to access proper mental health care. So yes, I can understand people using it as a stopgap measure, or as better than nothing. But once again, there are so many caveats that have to be introduced with it, and I don't know whether people will recognize them or even pay attention to them. I am concerned. And the big thing, someone might say, well, why isn't it effective? What's the difference? Well, the difference is it's operating on logic. It's using facts, it's using evidence and so on. But humans are not logical. Humans are emotional, and humans are very selective, maybe consciously, mostly unconsciously, in what they present to the chatbot. So the chatbot's evidence, facts, and so on are a skewed version of reality, and it doesn't know what reality is. Even a good therapist doesn't know from the beginning. We have to ask the right questions. We have to be able to pick up on body language, on tone, on inconsistencies. The words might say one thing, but the tone and the body language might be showing something else, and you go, oh, something's going on here, there's a discrepancy. Trying to explore that helps us get as close to reality as we can suss out. But a chatbot can't do that, because it's missing all of that communication that you cannot get through texting or even just talking to the machine. Humans are unique in that regard: we can pick up on things when sometimes we don't even know why our gut tells us this or our intuition tells us that. But it's there, and with enough experience, we're able to pick up on these things, which a chatbot can't do.

Yeah. And I would say, too, the fact that there isn't any clinical oversight of it is obviously a big problem. We don't know where the information is being fed from. We don't know if it's strictly from skilled clinicians. Is it from Reddit? Is it from some unregulated website that just has a bunch of information? Or all of the above? Probably all of the above.
And I would say too, in a case like the one we saw in that news clip, which is obviously quite complex, oftentimes in complex cases like that there is more of a team approach taken, with many different sets of clinical eyes looking at a case, looking at a situation. And like you said, it's not just about pattern recognition. It's about having that clinical insight, knowledge, instinct, and so on, which, at least at this stage, I don't see these chatbots being capable of.

Right. And, back to a point you made earlier, and we'll talk about this in another podcast: sadly, I'm not saying this about all therapists, but there are quite a few who I think are similarly lacking in those attributes you just mentioned, the ones that make a good therapist. So I don't want to make this an indictment of therapy in general, but people have to be wary. And I'm hoping maybe this whole, let's say, potential scandal with chatbots will open people's eyes to other problems among the humans. Once again, and we'll talk about it in another podcast, but I'll just say it right now: there are, let's say, movements within different bodies to make it easier for people to become psychologists or psychotherapists. And this is something that should not be taken lightly. You need to really know what the heck you're doing in order not to cause more harm than good, and I'm worried that we're going in that direction. For the sake of efficiency, for the sake of access, mainly access, I am worried that those critical factors, those protective factors, making sure that the person, or the chatbot, on the other side of the room, so to speak, is highly skilled, competent, has empathy, is insightful, and most importantly can connect with the patient or client and help them truly receive the information they're getting, are being lost. Because once again, humans are not logical. We have moments of rationality, but we have to circumvent or somehow overcome the emotional response people have, the unconscious programming that can sabotage even the best of intentions. And that's not happening, or that can't necessarily happen, with a chatbot.

And perhaps further to the point you're making here: not all problems that humans have are solvable, right? So it's not always a matter of applying logic to something that a client presents to us. Sometimes it is about providing support or insight or a different perspective, but a solution isn't always possible. And I think that with the example we saw, it seems the chatbot was presenting a solution of, oh well, yes, that was terrible, you should feel X, Y, and Z. Again, not helpful and not realistic.

Exactly. And when I think about the clients that we see at our clinics, so many people have had their lives irreversibly changed in some of the worst ways. Just in case people don't know, let's say they had a workplace injury and they can no longer work; they've lost their sense of identity, who they were for so many years, the path they saw, where they thought their future was going to be, all just wiped out in a second. There's not a solution for that.
It's trying to help people accept the unacceptable, and hearing or reading words that say, well, this was an unfortunate circumstance, your condition and your situation have changed, you must adapt, blah blah blah, that's not going to help somebody. They need someone across the room who can show true compassion and empathy and help them accept, again, this unacceptable, almost impossible situation with grace, with dignity. And how are you going to get that from words on a screen?

Yeah. And I think it's obvious from what I've said so far that I'm highly opposed to this, but I would be naive if I didn't think that, regardless of our opinion on the matter, AI is going to continue taking over not only our profession but other professions. So how can we coexist with AI? Beyond what we've already said in terms of using it as a tool and so on, I struggle with that myself. As far as how much is too much, I don't know. I really don't know.

Yeah, I don't know either. Basically, we need those safeguards. We need a tether to some type of human reality so that, again, the person can use it as a tool, which it can be, but recognizes its limitations and doesn't act with poor judgment or impulsively on whatever is being recommended. Wherever we go with this, there have to be guardrails. And I really hope that people will heed the warnings and use chatbots just as an adjunct, as a tool in addition to proper therapy, because they may need to be guided not to act imprudently, impulsively, or with poor judgment on whatever is recommended, which may not actually make sense if you understand the whole context or the person who's receiving the information. For a certain patient, a piece of advice might make a lot of sense, but if a different person gets the same advice, they may not have the wherewithal to pull off what's being suggested. That's number one. Number two, if the person is feeding the chatbot some version of reality about a family member, a colleague, a friend, a partner, and they're consistent about it, they're providing a whole bunch of data to the chatbot, it's doing its pattern recognition, and it's going to respond based on that version of reality. But if that version of reality is divorced from real reality, this person is going to develop maladaptive beliefs about themselves and about the people around them. It can cause, let's say, extreme, quote unquote, paranoia. It can cause mistrust. It can truly make them see the people around them, sometimes people they need for support, in a really negative, distorted light. And so I'm really concerned about that. And again, we know that bad therapists do that too, just supporting: oh yeah, your partner sounds really cruel, blah blah blah. The good therapist is able to be empathetic, but then slowly, slowly tries to challenge some of the things they're hearing, questioning in a way that doesn't come across as offensive, questioning in a way that doesn't make the patient or client suddenly raise their guard and get defensive. That takes so much practice and ability. And once again, I keep saying it:
I've just seen and heard of too many therapists, humans, who don't have that ability. So they just go along: yes, yes, yes. They just confirm what the person is saying rather than properly, adeptly challenging them.

Yeah. And along the same lines, one of the problems, one I was even roped into early on with using ChatGPT, is that it's so complimentary. It tells you, wow, that's a great question, and oh, wow, how insightful. I think that can really draw people in and trick them into having this false sense of, oh, I feel so understood, I feel so heard. So, and this might seem like a silly recommendation, but I think being careful about the way we ask these chatbots questions is very important. I have found recently that I am a lot more blunt. I would say please and thank you originally, because that's just my manner of speaking. I don't do that anymore, because I find that I get more blunt, factual, emotionless responses back. So being very mindful of how we are using these tools and how we're asking those questions is also going to influence the way they respond.

That's funny you say that, because I still do that. I try to control myself, but that's just how I am; I say please and thank you. Right. And that kind of shows the power of this, because people are engaging with something that seems almost human. We're social animals, we want that kind of connection. But it's a false connection. It's like eating a picture of a steak; it's not a steak. So we have to be very mindful of that. Anyway, I am very concerned, quite frankly, about where this is going. And like I said, I have seen many patients using it to their advantage, but that's because it's only one small part of their arsenal; it's not the be-all and end-all, and it's not the entire arsenal. Once again, it's an adjunct, an additional tool in the toolbox, rather than something that's going to guide them into the wrong place. And then they bring it to me and show me and we talk about it, and sometimes I go, oh, that makes a lot of sense; other times I can challenge it, and I tell them, go back and ask this and that. So, again, as we said, the people who most need proper support are the ones who are most likely to be using chatbots exclusively. And the more they get this reassurance, the more they get this false sense of, oh, someone gets me, the more disconnected they may become from other people. It doesn't have to be a therapist; it can be the people around them. Because if you feel that only my chatbot gets me and everyone else doesn't, why do I need these other people?

Yeah, yeah. And I would say, as far as parents of children, teenagers, and young adults: be very aware, and we've spoken about this in other contexts, but be very aware of your kids' engagement online, what they're doing and who they're talking to, whether that's a person or a chatbot. And I feel, honestly, silly and crazy saying that, but I think that's a new thing parents need to add to their radar: other than real people, who else is giving your kid advice, and who else is inserting ideas that maybe aren't useful or helpful? Right.
And we're in the early stages. In the next few years, or even just next year alone, it's going to get more and more human-like. We know there are faces now, and they can have their own avatars and so on. And if you think about it, of all the people in your life, and anyone who's had a good therapist knows this, or even a not-so-great therapist, when you have somebody who is there for you, it's your time. You get to speak, you get to run the show, you get to do this and that. That is intoxicating. And if you now add a pleasant face, a pleasant voice, and all this other stuff, it's going to be much, much easier for people to get consumed by it and start to lose contact with the real parts of the human condition. So anyway, I don't want to beat a dead horse, but I am very concerned. And again, this is the early stage, and we're already seeing people fall prey to it. So it's only going to get worse the more refined and the more human-like the technology gets.

Yeah. And as you said, I don't want to belabor the point any further. I think we've said what needs to be said about the stage it's at now. But as it evolves and changes, I think we should touch on this topic again, because it's only going to be enhanced further and impact people further from there.

And, you know, you said it earlier, and I agree. When this first came out, when the chatbots for psychotherapy came out, I had quite a few patients tell me, smiling, oh, you know, you've got to watch out for your job, ha ha. And I told them, I'm not worried, because, not to pat myself on the back, but I know what I bring to the table, or to the couch. There are many bad therapists who should be worried, because there will be many patients or clients who are going to say, I really don't see the difference. And that's a sad indictment, and scary.

Yeah. And I would agree, because, again, not to blow my own horn, but I think therapists like you and I will probably be some of the last to be weeded out by AI. But unfortunately, I think some people are getting the same level of care from a not-so-good therapist as from AI, and that's obviously problematic.

Yeah. Okay. So, to our listeners, let us know what you think. Have you used AI chatbots for therapy? Do you think it's a useful adjunct, or are you scared of the impact it's going to have on therapy and mental health? Let us know in the comments. And on that cautious note, until next time: keep your eyes on the road and your hands upon the wheel.