Cyborg Goddess

A Feminist Tech Podcast

Transcript for Season 3 Episode 2

Jennifer Jill Fellows: We live in a truly wonderful time where the stigma around mental health is actually starting to lift. But we still have a long way to go. One of the major barriers for many of us when it comes to mental health is that therapy is really expensive. Another barrier is that a lot of people find it challenging to open up to a therapist and to be vulnerable. But what if you didn’t have to talk to another person? What if, for a fraction of the cost of a human therapist, you could access a mental health chatbot right from the comfort of your phone, whenever you need it? And this isn’t science fiction. You can do this now. But before you get excited, you need to think carefully about what it is you’re paying for and what the limitations of these tools are.

JJF: Hey, everybody. Welcome to Cyborg Goddess, a Feminist Tech Studies Podcast. I’m your host, Jennifer Jill Fellows. And on this episode, I’ve invited Rachel Katz on the show to talk about her research into the ethics of AI psychotherapy chatbots. Rachel Katz is a PhD candidate at the University of Toronto’s Institute for the History and Philosophy of Science and Technology. Her main research focus is in Biomedical Ethics, Philosophy of Medicine and Psychiatry, and AI Ethics. These interests come together in her current dissertation project, examining the pros and cons of AI facilitated psychotherapy, and that’s what she’s here to talk to me about today.

JJF: Hi, Rachel. Welcome to the show.

Rachel Katz: Hi. Thanks so much for having me.

JJF: I want to begin, before I ask you questions, by recognizing that digital space is physical space. Even digital personas, which we’ll be talking about today, are sustained by physical infrastructure, and that physical space is, in North America as in many parts of the world, stolen land. Cyborg Goddess, the Feminist Tech podcast, is recorded on the unceded territory of the Coast Salish people of the Qiqéyt Nation. Listeners, I also invite you to reflect on where you are located or situated today and on the history and current reality of the space that you occupy.

JJF: So, Rachel, we’re both philosophers, which is fun. I don’t always have philosophers on the show. Can you tell me a little bit about how it was that you came to be pursuing a PhD in philosophy? Like, what’s your academic journey?

RK: Oh. So, it always sounds like a high school university admissions essay to say that I got interested in bioethics when I was in high school, but I did. It was during a grade 11 English project, and I was originally interested in bioethics in the context of bringing back extinct species, coming from an interest in paleontology combined with a recognition that my physics and chemistry skills were not at the point where I could do paleontology as a career. So I looked at how to go into a career in ethics and bioethics, and one of the many routes that was presented was philosophy. So, I ended up doing an undergraduate degree, like a combined degree in philosophy and biology, at McMaster, and then after that, I stayed at McMaster and did my MA there in philosophy. And I did my MA thesis on academic crowdfunding. So I looked at the ethical issues associated with using academic-specific sites for crowdfunding research projects. And that’s kind of the short version of how I ended up doing my PhD at the IHPST. But over all of those different degrees, and over what I guess is close to ten years now, I’ve thought a lot about the field that I want to be in, and I really love the kind of inquiry that we get to do as philosophers. I really value the kind of measured response that I feel best represents the field, and I really love being a philosopher in an HPS department. I really like the interdisciplinary work. I really like the ways that we all kind of influence each other and make each other think about different kinds of contexts for the questions we’re asking. And I’m very happy to be doing my PhD at the Institute.

JJF: Nice. Just for listeners, HPS, I’m assuming that’s history and philosophy of science, because you’re at The Institute for the History and Philosophy of Science and Technology.

RK: Oh, yeah, yeah.

JJF: Yes. That would mean you are working in an interdisciplinary environment alongside historians and possibly other science and technology studies folk, if I understand correctly.

RK: Yeah, basically. Actually, there are a growing number, it seems, of philosophers in our department. But there’s a very strong historical bent in our grad students right now and that’s been very helpful for me in figuring out exactly what I’m interested in and what the right questions to ask are.

JJF: That’s so cool. I really like that story. I think... maybe this is just the circles I travel in, but I think that’s unusual, because most people I know who got an undergrad in philosophy didn’t go from high school knowing that they wanted to do something in philosophy. I certainly didn’t. I don’t think I even really knew that philosophy was an option, something I could study, until I started in university. I think that’s really cool that you were like, Okay, I want to do bioethics. How do I do that? And then found your way into philosophy.

RK: I’m sometimes a little bit surprised that this thing I thought of when I was 16 is still kind of driving my academic interests, in a very different way now, but, you know, over ten years later. Like, you know, I did reevaluate at each stage, is this still where my interests actually lie? I also was pretty sure that I wasn’t going to like philosophy at first. I was very skeptical about the idea that you weren’t going to get to an answer, which, I think, you know, makes a lot of sense why a 17-year-old would feel that way. But I found I really loved the style of inquiry and the kinds of questions you get to cut to with philosophy. And so my fears, by the end of my first semester of university, were very much kind of put to bed. I realized how much I liked the style of writing and the style of discourse.

JJF: I think, yeah, I remember that frustration in the first ethics class I took as an undergrad. Ethics was the first philosophy class I ever took. I remember that frustration of being like, Well, this is just all questions. Where are the answers? And now, I’m like, Oh, I really like the freedom to explore questions and explore them with the methodologies that philosophers learn.

RK: I took two philosophy classes in my first semester of undergrad. One of them really stuck with me. It was a problems of philosophy course with Mark Johnstone, and the way he structured the very basic introductory units and the way he designed the lectures and everything was very enticing and made me want to do more in the field.

JJF: And there’s something freeing about being invited to ask questions and explore questions that may not have a set answer, for example.

RK: Yeah. Yeah. Now I look at my research, and it really is just a lot of questions. Since I’m working in bioethics, my goal is, I think, to annoy as many different kinds of people as possible with the questions I’m asking.

JJF: Oh, my gosh, you’re a Gadfly. That’s a reference to Socrates for my listeners. There’s a little philosophy in-joke.

JJF: So you mentioned your current research, and you are still working in bioethics. But as you also mentioned, through your Master’s and now into your PhD, you got interested in the more technology side as well, and AI ethics. So can you talk to me a little bit about how that side of your research kind of grew?

RK: Yeah. I actually mostly see myself as an ethicist and philosopher of medicine and psychiatry working on issues related to trust. In my Master’s, one of the key things that I looked at was trust; one of the key issues that I looked at in the context of academic crowdfunding was trust from a number of perspectives. And when I got to propose my PhD project, at first, actually, it was a completely different project. I applied to the Institute with a project, also, sorry, we always call it the Institute for short, or the IHPST. It’s an alignment chart of bad shorthands for a department whose name takes up a whole line of text.

JJF: Right.

RK: So when I applied, I had a completely different project in mind on interpersonal trust in the context of global health emergencies, very much inspired by the pandemic. At the time I’d been working for a couple of years on a project with researchers based at Western University, looking at ethical decisions in COVID research and development for treatments and vaccines and whatnot. So I was very interested in this still, and I wanted to pursue something more related to how the general population responds to these kinds of incidents and how these widespread global events affect everyday interpersonal relationships. And I did my comprehensive exam based on that topic. I applied to grants and whatnot with that project, and then about six months ago, in June, I emailed my supervisor after giving my CSHPS talk last year about this topic. I said, Hey, Karina, I think I want to switch topics. My whole committee was on board with it, it was fine. That’s how I ended up doing research in this AI space. I definitely don’t come from the background of looking at things like algorithmic bias and AI ethics as a field of its own. I very much come through this very, I think, modern bioethics and applied ethics route.

JJF: Right. That probably gives you a unique perspective, though. I think that’s really interesting. I saw your CSHPS presentation. CSHPS is the Canadian Society for the History and Philosophy of Science. We’re just throwing out all sorts of acronyms today. Yes.

RK: Our acronyms are so long in this field.

JJF: So you’ve been a member of that society for a while and presented this paper on AI facilitated psychotherapy and the potential risks or benefits of it. It is still a medical ethics issue, right? Psychotherapy, or any kind of therapy, is a medical ethics issue. But there’s also now this tech aspect, which I think is really interesting. I think that there are people coming from a tech ethics perspective who are looking at this, but bioethics would bring, I imagine, a different lens to examining this kind of issue, product, thing that exists.

RK: Yeah. I think a lot of the time people very rightly are concerned about the privacy element of various mental health apps. They are often regarded as not having good privacy policies, or not having any real privacy policy, that sort of thing. So a lot of it has to do with data security and data ethics, and I think those are really important areas to do research in, but they are not areas that I research in. For the purposes of my work, I often bracket those conversations because I think they’re really important, but I’m not really the person to do them. So, what I like to look at, and what I find is the question at the core of my dissertation in some ways, is what, if anything, makes the human relationship in traditional psychotherapy special. And does that kind of relationship get replicated or, frankly, improved upon by the replacement of a human therapist with some kind of AI chat bot or virtual therapist?

JJF: So I want to dive in now to your PhD project, which examines AI facilitated psychotherapy. Before we jump into that, can we just back up and talk about therapy, or psychotherapy, and how it works?

RK: Yeah. So a lot of the time in traditional psychotherapy, you have a patient and a human psychotherapist. Therapists may come from a range of academic and professional backgrounds. They might be social workers, medical doctors, psychologists of varying stripes. And usually, the way psychotherapy works is that it’s talk based. There are different kinds that work for different kinds of people and different kinds of mental health concerns, and these kinds of sessions can last for a short period of time, an intermediate period of time, or be long-lasting relationships formed between therapist and patient. I generally am more interested in looking at the mid and long term relationships, the kind of relationship where you’re working with a therapist over a longer period.

JJF:  Right.

RK: I’m thinking of things like cognitive behavioral therapy, or CBT, and dialectical behavioral therapy. But the important part here is that it’s usually conversation based.

JJF: Right.

RK: It’s not anything that involves like hypnotism or other kinds of

JJF: medication or anything like that.

RK: Medication. Yeah. It’s basically, the talking is the therapy.

JJF:  Okay. So you’re focused on the talk based therapy as opposed to other types of therapy. You’re particularly interested in the talk based therapy where people are developing medium to long term relationships with their therapist.

RK: Yeah.

JJF: So we know there is short term emergency therapy where somebody might go for one to four sessions or something just to help them deal with something that’s happening. But there definitely are relationships that you might sustain with a therapist for months or years. Those are the ones that you’re more interested in, is that right?

RK: Yeah.

JJF: Cool. Okay. We’ve got a picture of what we’re looking at here. So, when we talk about these relationships that people form with their therapists, one of the things that you looked at in particular is philosophical theories of trust. Can you talk a little bit about what philosophical theories of trust you’re relying on here and what your preliminary hypotheses are about what role trust is playing in these therapist-patient relationships?

RK: Yeah. So a lot of my work for a while has been very inspired by the work of philosophers like Annette Baier and Onora O’Neill, and both sort of argue that there’s this need for vulnerability in order for trust to form. And I think that that exists on both sides of the therapeutic relationship. I think that the patient is the party that we most traditionally think of as being in the vulnerable position, right? They are divulging these private details to a therapist while opening themselves to feedback or advice that might be difficult to hear. That puts them in, I think, a traditionally vulnerable position.

JJF: Yeah.

RK: But I argue that the therapist is also making themselves a little bit vulnerable by offering a piece of advice that may or may not work for the patient. That might in a small way undermine how the patient feels about the therapist. The patient might say, No, actually, that’s not how I would characterize that situation, or that’s not how I would describe what happened. But I think there is that kind of back and forth and that kind of mutual vulnerability. I wouldn’t say that the therapist is experiencing this on the same level or to the same degree as the patient, but I think it’s important to acknowledge that the therapist is also liable to messing up or making a mistake. And I guess one other thing I should clarify here is that I’m not looking at cases where a patient is, kind of, imminently suicidal. I’m looking at longer term chronic conditions that a patient may be experiencing, not necessarily acute, on-the-precipice emergency situations. Because I understand that mine is a difficult view to hold, I think, if you’re working with patients who are at higher risk of things like self-harm or suicidality. So, there is, I think, mutual vulnerability, and I think what happens is that when both parties trust each other, you can allow the therapeutic relationship to really grow, because it allows the patient to feel comfortable divulging the cards they’re holding closest to their chest. It allows the therapist to maybe think a little bit more creatively and suggest different kinds of interventions that might work better, or different kinds of interventions that might work simply because they know their patient well.

JJF: When I think about Annette Baier’s framing of trust, for example, she definitely does talk about the person who is placing trust in an expert like a therapist being vulnerable. But I think that your argument that the expert themselves also becomes vulnerable is really interesting. It might actually apply outside of therapy, because while you were saying that, I was thinking about being a teacher in a classroom. You walk in and the students have placed some level of trust in you, that you’re going to teach this material to them and that they can trust that what you tell them is correct. But at the same time, you place some trust in them: that when you assign things, they’re going to in good faith follow through, that when you ask questions in the classroom, they’re going to try and answer, that when you set projects, they’re going to work to do the projects. And they’re going to trust that you’ve set stuff up in the best way possible to try and help guide them through the material, and you might make mistakes and they might make mistakes. So I think I see kind of a similar thing here with the therapist and the patient, right? The therapist is vulnerable because they’re trusting that the patient will recognize them as a therapist and will recognize their expertise and will, in good faith, try to listen to them, and the patient may push back at certain places, and that can be an experience of vulnerability as well. And I think I see what you’re saying, that they have to be able to kind of honor each other’s vulnerability in order for this relationship to work long term. Am I getting that right?

RK: Yeah, exactly. I think that you sort of have to, and, you know, I don’t begrudge or blame or judge people who talk about having had one or two bad sessions with a therapist, because, you know, it’s such a deeply personal relationship, and I think the patient is within their rights to request the kind of care that they deserve or feel that they want, especially in this kind of context. But I think sometimes people are quick to write off the possible growth of a therapeutic relationship because it didn’t give them an answer, or the answer they wanted, in the time frame that they wanted. There are things that can take a very long time to sort of get at, and there may be things that require you and your therapist knowing each other on a level that just requires more background than you can get in a few sessions.

JJF: Uh huh. Yeah. And it now makes sense to me why you’re focusing on mid and long term relationships, because, as I understand philosophical theories of trust, a lot of it is about how trust takes time. It takes time to show yourself to be worthy of trust, to earn someone’s trust, to be comfortable with the discomfort of vulnerability. And obviously, as you’ve said, in these cases the vulnerability is not equally shared, so we’re not saying that everybody is equally vulnerable, just that there is vulnerability that exists on the part of the therapist as well. Though I think you’re saying the patient obviously is more vulnerable here in a lot of cases.

RK: Yeah. The patient is definitely more vulnerable.

JJF: Now we have this picture of trust and vulnerability and a long term relationship between the therapist and patient. And that’s the human model that we have built up over the last few decades, generations, of therapy as a profession. So, can you give me a bit of a history of AI facilitated therapy? Because while we think of this as a new phenomenon, some of your research suggests, if I remember correctly, that this is not necessarily as new as we might think it is.

RK: Yeah. This goes back 50 years at this point. More than that. I can’t remember the exact year. I want to say it’s 1965, but sometime in the 60s, Joseph Weizenbaum, who was a computer scientist based at MIT, developed the first chatbot, called Eliza. And Eliza had a bunch of programmed modes, and I’ll say I’m not a computer scientist, so I think it’s an early attempt at natural language processing, but

JJF: Yeah, I believe she is. I’m going to use she/her pronouns for Eliza. I’m just going to do that.

RK: I feel she’s personable enough, and I’ve played around with the Internet version of Eliza that exists now, and she’s personable enough that it feels wrong to not personify her.

JJF: To call her an it. It feels disrespectful.

RK: It does. Maybe it’s that she’s got this name. Anyway. One of the modes that was programmed for Eliza was doctor. And doctor was supposed to model a popular form of talk therapy at the time, Rogerian psychotherapy. A lot of it is repeating the same phrases back to the patient. The user, the service user, might say, You know, I’m feeling depressed. My boyfriend said that I should come here. And Eliza might respond, Why do you feel depressed? And so once you kind of start saying more open-ended things or trying to get Eliza to give an opinion, she’s not going to give an opinion, but Eliza will kind of pop back to you whatever information you give her.

JJF: Yeah. She keys into words you say and reframes them back at you as questions.

RK: Yeah. Exactly. And it’s very nice to see it work. I’m glad that someone has programmed like an online version because it’s really cool to play around with.

JJF: I’ll link it in the show notes, if any listeners want to check it out.
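[For curious listeners, here is a minimal sketch in Python of the keyword-and-reflection pattern Rachel and Jill are describing. The specific patterns, reflection table, and function names below are invented for illustration, not Weizenbaum’s original DOCTOR script; they just show the general mechanism of spotting a keyword, swapping pronouns, and handing the statement back as a question.]

```python
import re

# Pronoun swaps so a captured fragment like "my boyfriend said I should come"
# reflects back in the second person, the way Eliza-style scripts do.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "myself": "yourself"}

# A few illustrative DOCTOR-style rules: a keyword pattern plus a question template.
# These particular patterns are invented for demonstration purposes.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"my (\w+) said (.*)", re.I), "Tell me more about your {0}."),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    """Return the first matching reframed question, or a generic fallback prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please tell me more."

print(respond("I feel depressed"))                            # Why do you feel depressed?
print(respond("My boyfriend said that I should come here"))   # Tell me more about your boyfriend.
```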

RK: And people loved it. People really loved the idea of this, in their minds, non-judgmental, non-human entity that was going to quote unquote listen to them. And so I’m, you know, reading Weizenbaum’s book right now, partially for my dissertation, partially for a class I’m teaching this year. And after creating Eliza, Weizenbaum became one of the most vocal critics of AI and the acceleration of AI. So. You know, what’s happened more recently in AI psychotherapy, though, is that increasingly companies are developing apps that include a certain amount of digital virtual therapy, or at least that’s what they’re calling it. Sometimes it’s a chat bot, sometimes it’s things like guided meditations or short cognitive behavioral therapy courses.

JJF: Self guided ones.

RK: Self guided. So they kind of help like a service user reframe the difficulties they’re experiencing in a way that the patient can sort of facilitate in theory without a human interlocutor.

JJF: Uh huh.

RK: And these apps have exploded over the last couple of years. I guess another thing I’ll make clear here is that I’m not talking about technology facilitated teletherapy. So I’m not talking about sites like Better Help.

JJF: They have their own ethical problems.

RK: They do. There are ethicists looking into that. I’m looking very specifically at apps where it’s a human looking at their phone and there is no human on the other side.

JJF: Right, so Better Help and services like that connect you to another human being, another therapist. But it is technology facilitated therapy. But we’re looking at these cases, or you’re, we’re! I’m talking to you about these cases.

RK: We’re looking at them together today.

JJF: Yeah, where there’s no human on the other side. It’s like Eliza in that sense. This has been programmed. You’re talking to a non human entity and getting what is being called therapy that way.

RK: Yeah, exactly.

JJF:  Okay, so we know it stretches back a long time. I find it so interesting that Weizenbaum became such a big critic of this after the success of Eliza that he was just like, Oh, hell, no, this is not a good idea.

RK: Me too. I’m stoked to get my students to the chapters at the end of the book where he’s very critical about what we conceive of as artificial intelligence and what kind of skills or knowledge we’re calling intelligence. I hope they enjoy it because I’m excited.

JJF: Yeah, it’s really cool. But he was mostly ignored by the field going forward, as I understand it. And so now we have these offerings of AI facilitated therapy with no human involvement popping up in a lot of different places. So, can we bring the issue of trust back in here now? Because there’s one thing that you said when you were talking about Eliza in particular, and I’m wondering if that still holds true today. You said that Weizenbaum rolled out Eliza and many people, when they used Eliza, really really liked the idea that it was non-judgmental. Because, now I’m calling her it again! She’s non-judgmental because she can’t really be judgmental: first of all, there’s no judgment in the programming, but secondly, in another sense, I think we often think of the capability of being judgmental as human. But yeah, there’s this sense that Eliza was somehow safe in some sense. I wonder if we can talk about trust and vulnerability when it comes specifically to these AI therapy tools.

RK: Yeah. I mean, I think that AI now can certainly be judgmental. We can bring trust back in, but I think trust is not a resolved issue in the context of AI facilitated psychotherapy. All right. There are two big things that I’m hoping to get at in my dissertation, and I’ll add that I’ve written my outline, but I became a PhD candidate in the fall, so I’m only about four months into writing. One of the things that I definitely want to demonstrate is that we cannot trust AI. Not in a, it’s irresponsible to trust AI, sense. It’s that we cannot place trust. That’s not a kind of relationship you can have with artificial intelligence, because what I think a lot of people see as trust in the context of replacing their human therapist with an AI therapist, whether that’s a chat bot, a series of interactive guided meditations, what have you, is really a relationship formed on reliance. Which is different from trust.

JJF: Okay.

RK: Because instead of thinking of the relationship as this dynamic relational one, it’s a relationship that’s pretty one sided. In the past, I’ve characterized AI therapy as being like talking to a wall, or like an affectionate wall. You can rely on the app to be there for you. You can rely on it if you’re trying to, like, you know, stand up again, but it’s not going to help you walk on your own. It’s not going to help you kind of move forward. Yeah. I think you can’t form the same kind of in-depth relationship with it.

JJF:  Okay.

RK:  And so that’s kind of one of the main points that I’m hoping to convince people of in my dissertation. And the other is that I don’t think these apps are actually providing therapy because I think you need that trust relationship for therapy to be effective. And there are of course, a lot of, you know, social implications to this and what sorts of things count as successful therapy and that sort of thing. And so I’m going to have to wrangle those questions a little bit as well.

JJF: Right.

RK: But the claims are that we are not actually able to form a trust relationship with AI and that AI cannot actually perform therapy.

JJF: Okay. I need to unpack both of these because now I’m fascinated. So I’m going to ask you follow up questions.

RK: Yeah, shoot.

JJF: Okay. Let’s take the trust part first. The idea that we can’t form trusting relationships with these AI tools. There were a couple of things that I thought were interesting here. One of the things that you highlighted right at the outset is that when you say we can’t form these trusting relationships, you’re not saying can’t as in shouldn’t. This isn’t like, don’t do it, it’s bad for you.

RK: Yeah.

JJF: It’s more like you physically can’t do it. These tools are just not the kinds of things that you can form a trusting relationship with no matter how much you may want to or may think it’s a good idea. That’s the first thing that I wanted to draw out, which I thought was really interesting.

RK: Yeah. I guess I think some people accuse me of being a pedantic philosopher by saying that. They might be right. But I do actually think this is important for the patient, ultimately, because I think one of the dangers of the personification and the humanization of these tools for performing therapy is that it allows patients to think they might be able to get more out of the app than they actually can. That might undermine a patient’s therapeutic experience and discourage them from seeking further help, and have a ripple effect on their mental health and how they perceive possibly working to improve or change their mental health.

JJF: Right. One of the important things about recognizing that we can’t do this isn’t to say don’t use the tools. We’ll come to possible benefits or risks of the tools later. It’s to say, if you’re going to use the tools, you have to recognize what they’re for and what their limitations are, and don’t fool yourself or go in with the expectation that you’re going to form a trusting relationship like you would with a human therapist, because that’s not how the tools work. And when you were talking about that, like, the supportive wall, I remember you talking about that in the presentation that I saw too, right? The idea that these tools are like a supportive wall. And I was thinking about walls, or tools of any other kind, and, like, maybe this is apocalyptic, but if the wall falls down, I don’t feel like I’ve been betrayed by the wall. Like, it sucks and it’s going to be hard for me. But if the wall breaks and no longer supports me, I don’t feel like the wall betrayed me. Or take another tool. Like, if my alarm clock doesn’t go off in the morning because something malfunctions, I rely on that alarm clock to get me up because I am not a morning person. But if it doesn’t work, I don’t feel like I’ve been betrayed by it, like, how could it do this to me? But obviously if a human therapist betrays my trust, that’s a serious issue, right? So I think that’s one way I’m trying to find an in here for the difference between trust and reliance: I can rely on tools and not feel a deep sense of emotional loss if those tools don’t work or break one day. Whereas if I form a trusting bond with a therapist, or really anybody else in my life, and they purposely let me down, I feel very differently about it. Is that part of the distinction that you’re going for here?

RK: Yeah. I think I have to figure out how to work this into the dissertation a bit, but I don’t know if you heard about Replika and how people were creating significant others or best friends on Replika. Then I think they shut down the partner feature.

JJF: Yeah. They shut down the erotic role play feature.

RK: Yeah. Suddenly, these entities that felt to some people like they were a step away from a human partner were lifeless. For me, I’m not quite sure how to place that piece in all of this. But, I mean, we’re talking about apocalyptic statements, but I think that when, or if, that happens with an AI therapy app, people will feel as though they have formed this really close relationship with something because it looks humanish. But because it’s so one sided, I don’t think it’s the same kind of relationship at all. And that feeling of betrayal, there may be sometimes almost a heightened kind of agitation when your alarm doesn’t go off. When you’re betrayed by a person, I think it’s accompanied by different kinds of negative feelings than feeling betrayed by a piece of technology, whether it’s your alarm clock, a subway closure, or what have you. There’s an enhanced feeling of agitation because it’s a tool that you rely on. Trust has this facet of fallibility that I don’t think we bestow on tech in the same way. I’m trying to figure out how all of those chips fall in all of this, but I do think that there’s a distinction between how we react to betrayal on both of these fronts as well, you’re right.

JJF: Right. Okay. So if I think about AI therapy bots and I think about Replika, for example. One of the things that happened in February 2023, when the erotic role play feature was taken out of Replika, was that some people’s chat bots broke almost entirely and didn’t know who you were anymore, didn’t know the relationship you’d formed with them, because removing the erotic role play also inadvertently removed other stuff. Suddenly, this conversation wasn’t available to you anymore in the way that you had been used to it being available. If I think about that and I think about a parallel human situation, if I was in a patient-therapist relationship and my therapist couldn’t see me one week because something catastrophic happened in their personal life, maybe they had to shut down their practice for a week or so to deal with something that’s happening, I would feel possibly betrayed and annoyed, but also understanding. They’re human too, and life happens, and sometimes life is shitty and they need to take some time off or whatever. But if my bot breaks, it feels different. There’s way more frustration. Is that part of what you’re trying to get at?

RK: Yeah.

JJF: We grant other humans a fallibility or a messy humanness that we don’t necessarily grant to our tools?

RK: Exactly. Its job is its entire existence. Whereas, I think, I like what you said about my therapist is human too. The acknowledgment of the fact that you are dealing with another human already alters the kind of relationship that you’re in. It changes the kind of, I think, emotional associations we’re liable to make. If a therapist, you know, suddenly shuts down a practice, ghosts their patients, leaves the country, there’s a kind of betrayal and devastation that is still, I think, different from one of these apps suddenly shuttering.

JJF: Right. I also think you said something interesting for me there too, which is this idea that we recognize that, for our therapists, therapy is a job. Therapist is a job, and they have other things going on in their lives as well. Whereas for a therapy bot, its job is its whole existence. We expect them, I think, to be more available to us perhaps, or available to us in different ways than a human therapist, who we understand is a multifaceted individual in the world occupying many different roles. I think that’s really interesting too.

JJF: There’s another thing that you’ve talked about, and you’ve kind of hinted at it here as we’ve had this conversation, and maybe we’ve already gotten at this to some degree. It’s that there may be something really special and unique about the fact that traditional therapy is a human to human relationship. And that that is either lost or improved upon, depending. But that changes, let’s be neutral, that changes when we’re in kind of a human-AI situation. So, is there more to say here about the importance of a human to human relationship, or the difference when we go into these relationships, if that’s even the right word, between humans and AI?

RK: I think in understanding that your therapist is a human, and therefore is fallible in the ways that humans are, there’s a certain, yeah, there’s an acknowledgment of that in therapy that is delivered by a human. But I think there’s this misguided bestowing of objectivity on AI. We assume anything that we receive from a piece of technology is objective and it’s efficient and it’s the most effective solution. And that’s, I think, likely not the case for many people in the context of psychotherapy, partially because one of the benefits of psychotherapy is talking to another person about it. The act of talking is the therapy. And so if you’re not talking to a person, then you might be doing something, you might be texting or eventually maybe doing a virtual video call with a Replika-therapist-type entity. But are you doing the same thing? I don’t think so. And I think even proponents of AI therapy may not think that it’s the same thing. I think that there’s this idea that you’re going to get, like I said, that efficient, objective answer, the best solution. But that’ll be the best solution that’s been kind of agreed upon by published research and other kinds of information that the AI can scrape from whatever databases it’s pulling from, and that may not be what works for you. Maybe the textbook solution for patients dealing with certain kinds of, let’s say, depression might be solution X, and maybe you’re the kind of patient who would really benefit from solution Y, but the only thing that the chat bot is going to tell you is solution X, because that’s what works for 65% of people. It’s going to assume you’re probably in that 65%, and it may either not have any sense of what to recommend if you’re not, or might not have the same kind of robust data to back up other potential solutions, and so it may not give them. Because the other thing is that a lot of people think, oh, wow, these kinds of chatbots could be great for replacing emergency line workers, because that’s a job that demands a lot from the often volunteers who staff crisis hotlines. But all of these apps, because they’re almost all privately owned, are trying to cover their ass and avoid legal issues, and so they are not for use in emergency situations. They’re going to, I think, continue to just give the middle of the road, most likely answer that’s going to be probably pretty conservative in terms of recommending different kinds of changes, that kind of thing.

JJF: Okay. This is a tendency regarding technologies in general that we’ve talked about before on this show, this idea that the technology is somehow neutral or objective, and I think most of us know that’s not true. But there’s an additional issue here when it comes specifically to therapy, which is that you may go in thinking that you want the objective answer to whatever problem you’re seeking help with. But it sounds like most traditional therapy, especially talking therapy, is actually set up on the premise that we’re not looking for the objective answer, if there even is one. Instead, we’re looking for what answer works for you. So, even if these tools were actually able to give us a neutral or objective picture, that’s not actually what we need. What we need is a picture that takes our own subjectivities into account. Is that right?

RK: Yeah. Basically, I think that there may be little pieces of info that you can give in an introductory survey to your AI therapist when you download an app. You might say, I like to cook and I like to run as stress relief, but that’s all you get to indicate in the intake form. Whereas with a human therapist, you can talk about the kinds of stress relief that those things provide and how they maybe differ from one another, and the kinds of situations that cause you to seek them out for stress relief or for improving your mood or what have you. And so based on some of those anecdotes, which are probably revealed slowly over the course of time with a human therapist, your therapist gets to know the kinds of interventions that would work best for you, for you specifically.

JJF: Also, I imagine they could pivot if those aren’t working anymore. Like, running used to work, but somehow running is just not doing it for me anymore, I don’t know why. I’m imagining a human therapist might be better able to pivot, or even point out things that you yourself don’t recognize as stress relief, just through talking.

RK: Yeah, and say, maybe you’re depending on running as stress relief too much, and the increased dependence on that is actually causing you to find it a stressor, or something like that.

JJF: Trying to fit in all my running every week. I don’t know. I don’t run.

RK: I do. I run a lot. So it is both stress relief and stressor in my life.

JJF: And recognizing it as both, I imagine, it could be really important, right?

RK: Yeah. I’m skeptical about whether these kinds of apps are at a point where or will ever really be at a point where they’re able to make those kinds of calls.

JJF: Right.

RK: Because they’ll be developed again to protect the company that made them. And that’s where that therapist vulnerability piece, I think comes back in.

JJF: Right, okay. So, they are not vulnerable, not only because they’re not human, but also because the tech companies that make them don’t want to risk the liability that can come with vulnerability.

JJF: So, we’ve talked about some of the risks that come with mistaking an AI chat bot relationship for one of trust rather than reliance and also mistaking an AI chat bot relationship as one that’s going to objectively solve all our problems. There are risks that come with that. But your research doesn’t tell us that we should just stop using these tools entirely. One of the things that you do talk about is the ways in which AI facilitated therapy might be quite useful if people go into it viewing it like self therapy. Can we talk about how these tools might be empowering or useful for some people?

RK: Yeah, I’m going to, I guess, flag first that the companies are referring to some of this as self therapy, which I don’t think is necessarily a bad thing. I haven’t done a ton of my background research on the self therapy movement yet, but I do think that it’s a bit of a problem that these apps are saying, we’re sort of providing a therapist, but we’re also just sort of providing you with a mirror to do self therapy.

JJF: Right. It’s another covering-their-butt liability kind of thing, like, Oh, it’s a therapist, but also, you know, you’re on your own.

RK: Yeah. I do think that there are some benefits potentially to their use. I’m increasingly skeptical, but I want to leave room for that. The first thing that really got my gears turning about this topic was that I gave a lecture just on the phenomenon of AI facilitated psychotherapy to an AI ethics class last fall. And they talked a lot about, they actually reiterated a lot of what Eliza’s early clients, patients, customers?

JJF: Users?

RK: Yeah, there we go. Said about their interactions with Eliza. The students liked the idea that there wasn’t someone actively listening on the other end and that it was great as a dress rehearsal, or to be the first, they said first person, but first person in quotes, they told about a problem they were having. I want to preserve patient choice in all of this. So while I’m skeptical of that, and I think that there are still clear benefits to, you know, summoning the courage to talk to another human being, I’ll give that to the people who want to fight for the place of AI therapy, because I do want to honor and value patient choice in all of this. There are some kinds of interventions, therapeutic interventions, that I think work better for these kinds of apps. One example that I’ve thought about a fair bit is exposure therapy for conditions like OCD, obsessive compulsive disorder, where you create this hierarchy of exposure, basically to intentionally trigger the kind of anxiety that produces obsessive compulsive behavior. And what happens over time is that you ramp up the level of the exposure and you adjust to the exposure.

JJF: Okay. So I think you’re saying that there are times when these AI tools can be useful and you’ve given the example of obsessive compulsive disorder and exposure therapy. But in these cases, the tools aren’t necessarily being used alone, is that right? There’s often a therapist involved?

RK: There can be. So, I mean, what it means is that a patient could potentially do exposure therapy outside of the context of the therapeutic clinic.

JJF: Right. Okay.

RK: And keep track of the exposures basically in a virtual journal, essentially. Bracketing again all of the privacy concerns that we talked about, let’s assume that the best exposure therapy app has been developed. It’s got airtight security, two-factor authentication, all of this.

JJF: Right, right.

RK: And so you could basically, theoretically, build an exposure hierarchy and go through that yourself and have it guided by the app. Because in this context, you are not kind of expecting or depending on a relationship with a therapist where that talk therapy element is as important.

JJF: Right. Right.

RK: It might complement these kinds of apps, or, whichever way you view it, you can view the talk therapy as complementing the exposure therapy, or use the exposure therapy as a complement to the talk therapy that you’re doing in the clinic. That’s kind of the only use case that I’ve thought has any real merit long term.

JJF: Right. So when you were talking about how you gave this lecture, and just like the users of Eliza, many people responded saying that they would, they could see the attraction of talking to an AI therapist, doing talk therapy with an AI therapist at least as a preliminary before going to see a human. And I wonder how much that attraction can also be explained by how uncomfortable it is to be vulnerable and how a patient going to a human therapist places themselves in a position of vulnerability. Whereas if you’re correct and we’re not entering relationships of vulnerability with these bots, maybe this is evidence that we know that it’s not the same. Do you know what I mean? Because we feel more comfortable talking about our problems with a bot than we do with another human, if that makes sense. Obviously, I know this isn’t everyone, but for the people for whom this is attractive.

RK: I was not about to put the screws to a bunch of undergraduate students about this. But I think that this is definitely true. I think that what ends up happening is that we’re like, I’ve done the hard part, I’ve told someone. But that someone is, you know, a chat bot. You’ve told a chat bot that you’re experiencing a certain issue, and you think the hard part’s over. The hard part is actually talking to another person. But because the AI chat bot fills this gap somewhere in between keeping it all to yourself and telling another human being, I think what ends up potentially happening is that patients don’t ever use the chat bot as a stepping stone, and they maybe only half deal with the problem that they want resolved. That leads to this spiral of, I’m never going to totally get over this, I’m never going to get better from this, because they’re trapped in this kind of limited scope of what they can expect from what is being advertised to them as therapy.

JJF: There’s a tech researcher you were making me think about, Andrea Guzman. She focuses on Siri and digital assistants, not therapy bots. But one of the things she says is that digital assistants are ontologically ambiguous, which means it’s not entirely clear how we should define them or think about them, and they slip between seeming human and seeming like a tool. I wonder how much chat bots, because of this ambiguity, are allowing us to think like, Oh, it’s okay, I told someone, but also I didn’t really tell someone. We flip back and forth between viewing them as more human versus viewing them as more like a tool. That just made me think of that. I think that’s really interesting. And I think that, yeah, for some people, then, I can see why you want to leave room for patient autonomy. For some people, this might be a stepping stone that they really need to push themselves. For other people, there is this risk of never totally dealing with your problems because you keep doing talk therapy with a bot instead of with a human.

RK: Yeah. I also get that in the context of a university as big as the University of Toronto, it can be tough to get mental health care efficiently, and having something like this as a stepping stone, even if what you want is a human therapist, but the only thing that’s available for the next six months, until you can get in to see someone at a school clinic, let’s say, is a chat bot. I don’t want to be so negative about the use of these chat bots that I don’t leave room for cases like that.

JJF: Yeah.

RK: But I’m concerned that what’ll end up happening is that long term health care funding will just kind of dwindle or get funneled into the development of these private apps, especially in more remote communities where it might be harder to find a human therapist to begin with. If you’re already dealing with teletherapy, suggesting that apps just take over if you’re not in the city is, I think, another possible dystopian future use of these tools.

JJF: Yeah. That’s a whole other can of worms: on the one hand, we might want to say, look, if all you have access to is these tools, because there are long wait times, or because you can’t afford therapy, or because it’s just not available in your area, or what have you, these tools might be better than nothing. But on the other hand, we don’t want to end up in a situation where viewing these tools as better than nothing means we fund these tools instead of funding better healthcare infrastructure. Yeah.

JJF: Oh yeah. So much dystopian stuff to talk about.

RK: There is, and I have to say a lot of what kind of initially sparked my interest in this topic, it took me a long time to narrow down what it was that I wanted to look at, was a video essay that I watched that was really well put together, by a guy who mostly does video game coverage. I’m happy to send it to you.

JJF: Yeah, I’ll link it in the show notes. Sure.

RK: Such a good video essay, and he’s in the US, and he talks about his insurance plan at some point, potentially only covering a therapy app and not actually covering psychotherapy because from a health care funding perspective, they’re the same thing.

JJF: Oh, my gosh. Okay. So this actually brings me into one of the last questions I wanted to ask you. So at this point, we’ve acknowledged that these tools can be useful for people, depending on your situation and depending on how you use them and how you go into thinking about using them. But also, we’ve pointed out that there are a lot of dangers involved in using these tools as well. So what kind of considerations on the part of users or perhaps regulations on the part of governments would you like to see take place?

RK: I think it’s going to be hard to regulate these kinds of apps because they can be branded as so many different kinds of things. While apps with taglines like, you know, AI therapist might be subject to more stringent regulations down the line, apps that are referred to as interactive journaling may not be.

JJF: Or, I remember because we talked about Replika, while Replika never said it was a therapy app, they did have some advertising that talked about how having a best friend to chat with would be good for your mental health.

RK: Yes. So, like, I don’t even know what kinds of regulatory things could be suggested, because I think it’s all so...

JJF: It would be difficult to catch all of them.

RK: Yeah, there’s no easy way to say, all of these kinds of apps. I think it’s actually easier to talk about them in the context of philosophy. I can refer to a set of characteristics that apps may share and do philosophy with them or about them. But I think it’s probably harder to nail down some good government regulations of them.

JJF: Okay.

RK: I do think that there are things that users can keep in mind when they’re trying to decide if a human therapist or maybe a chat bot would better suit their needs. I think there are some people, especially if you’re already very self aware and what you want is something that’s going to essentially prompt you to do some journaling about your mental state and well being, for whom that kind of app might be useful. But I’m not in a position where I can make any of those kinds of prescriptions for people, nor do I want to be. It’s tough. Ultimately, I guess, this ties into a broader interest I have in caring about scientific literacy amongst the general population, and I think people who are seeking any psychotherapy ought to have a sense of what they should expect to get out of it, whether it’s a human therapist or a chat bot, because neither of those is going to make your life perfect. Neither of those is going to just give you the answer. Although the chat bot might pretend that it is, or try to make you feel like you’re getting the answer.

JJF: Right.

RK: I think always approach with caution. I think there are lots of suggestions online for how to almost interview a new therapist. People are encouraged to go into a therapy consult with a set of questions about how their therapist, or prospective therapist, might handle certain kinds of concerns. I think it may be worth going into a seven day trial with a new therapy app with a similar mindset. What kinds of things should you be looking to get out of this experience? What kinds of things is this app not capable of doing? And I think the app description and setup and whatnot being honest and clear about what those parameters are is a potential kind of green flag, and a lot of sort of cloudy language that doesn’t really tell you anything in particular...

JJF: Vague promises.

RK: Yeah, that’s perhaps more of a red flag in terms of the overall transparency and conduct of the app.

JJF: Am I right in thinking that users should also maybe go into this not expecting to form a trusting relationship with the app? Would that potentially be helpful, if we went in knowing that we can possibly have a relationship of reliance, but probably not one of trust?

RK: I think in some ways it prevents some of the inner turmoil that can result from an app shutting down, or things like that.

JJF: Or giving wildly wrong advice.

RK: Or, God, I can imagine these apps doing a Spotify Wrapped, and you look at where you were in January, you look at where you are a year later, and you’re like, Well, my life didn’t change at all. Why is that?

JJF: Oh, my gosh.

RK: And being frustrated with the app, not in a human betrayal sense, but in an agitated and frustrated by a piece of technology that has failed you sense. I think it might, you know, help some of those reactions maybe level out a bit, but I don’t think it resolves the central issue that I’m looking at.

JJF: Yeah, that’s really helpful. So, if anybody listening is considering using one of these, it’s just going in with realistic expectations and knowing that this relationship is different from a human to human therapist patient relationship. And that means it can’t replace that. It’s offering something different. Maybe that will work for you and maybe it won’t.

JJF: Thank you so much, Rachel, for discussing your research with me today. Is there anything else you want to leave our listeners with regarding the risks or opportunities or just anything with regards to AI facilitated psychotherapy?

RK: I guess this is all kind of very early stage research. And I think the funny thing about doing this research as a philosopher is that I’ve had a lot of really cool experiences, this of course being the coolest, doing some interviews about this research in the last year. And it’s tough, because often I’ve been asked before about giving advice to people or giving almost clinical style expertise. And I don’t have that. In some ways, I’m just asking some questions.

JJF: Maybe we all need to ask more questions.

RK: Yeah, actually, if there’s one thing to take away, regardless of the kind of therapy you’re looking at, ask more questions.

JJF:  I love it.

JJF: I want to thank Rachel again for taking the time to go over her research on psychotherapy chat bots with us. And thank you, listener, for joining me for another episode of Cyborg Goddess. This podcast is created by me, Jennifer Jill Fellows, and it is part of the Harbinger Media Network. You can follow us on Twitter or BlueSky. And if you enjoyed this episode, please consider buying me a coffee. You’ll find links to the social media and my Ko-Fi page in the show notes. Until next time, everyone. Bye.
