Jennifer Jill Fellows: This episode comes with a content warning for a brief discussion of suicide. Take care of yourself, everyone.
JJF: So let’s talk robots. Once the stuff of science fiction, now ubiquitous. And whether they are designed to resemble us and be as human-like as possible or not, there’s one thing that seems certain. Even the most highly trained among us are susceptible to forming strong emotional bonds with their robots, and to grieving the loss of those robots should they become damaged or destroyed. Given this, today I am joined by Dr. Julie Carpenter to talk about our relationships with robots, and something she calls the Human Gaze.
JJF: Doctor Julie Carpenter is a social scientist who studies the interactions between humans and various forms of AI technology. She is specifically interested in how AI technology affects our social relations and our sense of ourselves, and by extension, how the changes in our social relationships and sense of identity, in turn, influence the AI tools that we use and interact with. Her most recent book, The Naked Android came out in 2025, and I cannot wait to talk to her about it and about her current research.
JJF: So, hi, Julie. Welcome to the show.
Julie Carpenter: Hi. Thank you for having me on and inviting me.
JJF: Thank you so much for being here. I’ve been following your work since I read your 2016 book, and I’m so excited.
JC: Thank you. I really appreciate that you’ve read my books.
JJF: I will link both books in the show notes, and I highly recommend people check them out or get their libraries to order them. Interlibrary loan is also fabulous.
JJF: So I want to begin by recognizing that digital space is physical, both in terms of the data centers, cables, and other infrastructure that sustains digital space, but also in terms of the resource extraction needed to power, grow, and maintain this space. This means that digital space has a direct impact on the land and on people who call that land home. So with that in mind, I acknowledge that this podcast is recorded on the unceded territories of the Coast Salish people, including the territories of the q̓íc̓əy̓ (Katzie), qʼʷa:n̓ƛʼən̓ (Kwantlen), kʷikʷəƛ̓əm (Kwikwetlem), xʷməθkʷəy̓əm (Musqueam), qiqéyt (Qayqayt), Skwxwú7mesh (Squamish), scəw̓aθən (Tsawwassen) and səlilwətaɬ (Tsleil-Waututh) Peoples. And, Julie, can you share where you’re located today?
JC: Yeah, I’m in San Francisco, and I want to respectfully acknowledge the Indigenous people who have stewarded this land throughout the generations.
JJF: Okay, so, like I said, I read your 2016 book about human-robot interactions. I didn’t read it in 2016, unfortunately. I wish I had found it then, but I read it a few years ago. And you looked at human-robot interaction specifically in the military. And one thing I found super interesting in this book was how the robots were treated by the military squads that the robots were kind of working with, or being used by. Like, military personnel would give robots names, and the bots might end up occupying kind of a space between mascot and pet.
JC: Yeah.
JJF: And I wondered if you could talk about how that research came about, and what were some of your big findings from that?
JC: So, the military is often one of the first places to develop and invest heavily in emerging technologies that they think will be valuable to them, right, in war fighting in some way. And so I thought for a long time that this might be a setting worth exploring. Then it so happened that I met somebody who worked at a base that has explosive ordnance disposal personnel, or EOD. And this specific mission of service or job within the military, they’re the people who try to keep people safe from any sort of explosive or incendiary device, whether that’s chemical, nuclear, improvised explosive devices, mines, whatever they are. And so this friend of mine said it’s EOD who have been working with robots closely for, like, decades now, literally, like the last 20 years. And they’re the group that’s interacted with them most heavily, that he was aware of. And so he was sort of my entrée into that world. And I’m not military, so it took some work as an ethnographer getting into that space. But once I did, I was really embraced, frankly.
JJF: Oh, nice.
JC: Yeah, by the community who saw that I was there to really do research that might even be helpful to them, right? And it wasn’t a critical look at the military. It was a look at how they’re interacting with the robots and that I didn’t want sensitive information and things like that.
JJF: Right.
JC: And it’s a fairly close-knit group, and that’s an important part of the findings. And when I say close-knit group, I’m talking about the time of 2016, I want to place that in that time, because since then, I know that they’ve been heavily recruiting a lot more EOD, so it might not be as close-knit a group. But it’s a relatively small group. The US military doesn’t release exact numbers, but let’s say maybe around 5,000 people were active at the time. So also what’s interesting is EOD goes across all branches, Army, Navy, Air Force, Marines. And even though the contexts in which they work are very different and their training might eventually be very different. For example, Army will work on land. Air Force might work on things air-oriented, right? Navy does a very extended training about underwater, making people safe from bombs and mines and things underwater. It’s a whole different set of training. But what’s interesting is that all the branches go through what’s called colloquially the schoolhouse. And so, at some point, they all have this touchstone for their training, which is unusual across the divisions of the forces, Army, Navy, Air Force, Marines, to have that basis of understanding. And I’m going to say fraternity, even though that’s sort of a brotherhood thing, but the idea of brotherhood and fraternity is very important to the social structure of EOD. And I think that’s important to understand. It’s relatively, for social scientists, anyway, it’s sort of a closed group within a closed group,
JJF: Right,
JC: in a lot of ways. They’re also in a weird space in that they go through very rigorous intellectual book-and-paper training and ongoing training, as well as physical training. But then they’re also very good verbal communicators; by the nature of their work, they have to be, because when you’re trying to mitigate an explosive situation, literally, they have to be in constant communication with their team members. So that worked out well for me interviewing them.
JJF: Right. That makes sense.
JC: Yeah, so that was sort of an unanticipated bonus for me, too. And they were very helpful in helping me find other people to talk to. So besides the 22 EOD that I interviewed in depth in the book, I spoke to others in person, on the different bases, to get basic background information. I did archival research. If you read the book, you’ll see I have pictures from PowerPoints available to the public with a lot of digging, a lot of archival research about military robots, with pictures of some of the older robots used. But some of the findings were, as you said, I think for me as a researcher, one of the most interesting things, especially at that time, because this was, like you said, 2016 when this research came out, was that space where these people who are highly trained. . . and let me clarify, these robots don’t look human-like, and they’re not meant to be social, because a lot of times when people say robot, what comes into your head is like a science fiction version of a robot.
JJF: Like a C3PO or something.
JC: Right. Exactly. So unless you’ve had, like, an everyday interaction with a robot, you might have a different picture. So let me paint a new one. EOD robots are more like, if you’ve seen police on the news with a bomb disposal robot, that’s one model. You often see the bigger ones. They look sort of like a small tank. They’ve got tracked wheels. They usually have, you’ll see, one claw arm, right, that they use to extend. And so the bigger ones you might see lifting the hood of a car to check it or opening the door of a car. Then another model that’s frequently used is a smaller version of that, for example, PackBot, literally the size of a backpack, a smaller version that looks like that big tracked version, similar with the claw arm. They don’t resemble humans at all. PackBot could even be thrown through a window, right? And sort of right itself and climb upstairs and do these really interesting things. It doesn’t look at all like a human, doesn’t speak, doesn’t understand verbal commands, and is used with a controller that’s built to resemble a gaming controller, because people coming into EOD are most familiar with gaming controllers, right? So that’s interesting, too. And so you can see where, given the history and the nature of the job of EOD, using a robot as a tool makes a whole lot of sense to keep people, civilians, you know, other war fighters, the EOD team themselves, as far away from the potential explosion as possible. So robots are a really critical tool that they depend on. And I think it’s important also to say what happens if the robot doesn’t work. They’re semi-autonomous. They’re not autonomous. Like I said, they’re controlled, right? If it spins out, if it gets stuck in a wadi or a desert ravine, they don’t work too well in sand in some situations. Then what happens is the team leader puts on the bomb suit, which you may have seen pictures of. It’s like 100 pounds of, you know, padding and everything. And what’s interesting about that, too, I’m going to bring up that dynamic, is that the team leader takes the risk, which is also unusual in the military. Often, what happens is, if there’s an equipment failure or strategy failure, it’s sort of the grunts that are going to take the risk. But in EOD’s small teams, it’s the team leader who would put on the bomb suit, go out, and try to defuse the explosive. And those bomb suits don’t protect your hands. They don’t protect against percussive blast, you know. So that’s not safe at all. So your question was about the findings. Getting back to that, after setting the groundwork there, it’s that sort of space between these people who are highly trained to work with these non-human-like, not socially designed robots, who understand very well the capabilities and limitations of these robots. But sometimes, after having them embedded with them for a period of time, they would form relationships with the robot that were social. Some people hate the word relationship when you’re talking about AI or robots, but I’m going to use it as a convenience because we don’t have a better linguistic term right now.
JJF: Mm hmm.
JC: And also, related to that, an interesting finding, I thought, was that how they felt about the robot also had to do with their role on the team. So at the time teams were fairly small, four to five people. I understand now EOD teams are somewhat larger, but still small, like maybe up to a dozen people. But at the time, when they had these smaller teams, everybody had a very specific role. So you have the team leader, who might be talking about strategy and listening in. You have somebody who might be getting visuals on the situation. They might be up on a ladder, looking around with binoculars, right? And then you have, let’s say, the robot operator, who is the person most consistently responsible for maintaining the robot, for keeping the robot in order, and for using it, right, in these situations. And that’s a combination of being told information, often about an object, there’s an expression, it just doesn’t look right, right? That’s often how they find IEDs and other explosives, or they got some other intel. And the robot operator uses their own experience and strategy to manipulate the robot, to manipulate the potential explosive, to defuse it. And their relationship with the robot would be different, let’s say, than that of the person who’s on the ladder looking out for the visual, who has a different relationship with the robot. So the robot operator is obviously the one who spends the most time with it. In one of the quotes I have in the book, one of the operators says he spent more time with the robot than his real girlfriend, you know. He was deployed, and he was responsible for the robot, and he would sleep next to the PackBot in the truck. And so things like that are sort of maybe where this joking affection starts, right? They name them. And people might say, Yeah, they name planes. They name boats. Right?
JJF: Cars.
JC: Cars, right? One of the things that’s different about this, though, is they didn’t just name them; they often included them, not all teams, right, because the team culture is set by each team in its own context. So there was also sort of an age split. Older people, especially maybe some of the retired people who had worked with larger, bulkier robots and stuff, seemed less inclined to attribute socialness to it. Some of the younger people I spoke with, who, of course, grew up with video games and more online and with different expectations about human-technology relationships, seemed to have a more facile way of attributing socialness to the robot. So that could also look like including it in rituals, jokingly, right? So let’s say they had a post-incident meeting to discuss the forensics of it, which is something they do often to break down what went wrong, what went right.
JJF: Learn from it.
JC: Right, exactly. So they’d include the robot in rituals like that.
JJF: Even though it can’t talk or share anything?
JC: Right, it can’t talk. It can’t share. It has no knowledge to share, you know. So, but again, in a sort of humorous way. But again, I have quotes from several or more, actually, almost all of them gave some quote where they went back and forth between how they felt about the robot. And to elicit a deeper conversation about it, I did ask them twice, once at the beginning of the interview and once at the end, to define a robot in their own words. And at the beginning of the interview, before we talked, they would often give me something rote that they maybe memorized in the schoolhouse, right, that they put together right there. It was very machine-like. They would say it’s a machine. It’s a tool. You know, it helps keep us safe at a distance. So very machine, all of them across the board. And I think like 21 out of the 22 people or something, when I asked them that same question after the interview, after they talked through their thoughts, which might have been the first time they talked through their thoughts about a robot, especially in a non-judgmental space, right? I’m not one of their buddies.
JJF: Mm hmm.
JC: I’m not their colleagues.
JJF: You’re not going to make fun of them.
JC: I’m not gonna make fun of them. I’m also not encouraging them to say anything, but I’m also just listening. And their responses at the end of the interview would have them going back and forth. Well, it’s a machine. But you kind of get attached to it and it kind of develops a personality. And, you know, you kind of become bonded to it, but I know it’s just a machine. Right. So that going back and forth between the idea of attachment, between the intellectual idea of knowing that it’s quote, unquote, like just a robot, just a machine was, to me, the most interesting.
JJF: That is super interesting.
JC: Phenomenon, yeah. So there’s a lot of background I had to put in place, but it’s a very specific group and I like to make clear what’s going on.
JJF: No, I think that’s really interesting. And another thing I thought showed that ambiguity was that there were a few groups, I think, that gave the robot kind of a funeral or a memorial service if and when something happened to the robot in its line of duty, while being used by the team, I suppose. Which I also thought kind of showed this ambiguity. Like, when a tool breaks, you don’t usually honor the tool, but there’s something else going on here, right, that they felt a bit of an attachment to this.
JC: Yeah. So a couple of things. So well, let’s get back to the naming thing because I wanted to point out, you know, when people say, Oh, they name boats, they name cars, the difference is, is that you don’t think that boat or that car or that plane has any autonomy or it has any agency. And here what you’re. . . they’re doing, right, when they give a socialness to the robot is they are projecting, even as a joke or affection, a level of agency and autonomy to it that you wouldn’t with a boat or a car, right? You wouldn’t include the car in a meeting or have a funeral for it.
JJF: Right.
JC: So I do want to be clear about the funeral thing. So yes, sometimes people had rituals or wanted to have rituals. These robots, obviously, because of the line of work, can get destroyed. Right.
JJF: Yeah.
JC: And so if somebody had, let’s say, or a team or an operator had worked with a robot for a short period of time, they wouldn’t necessarily have an attachment to it. There are properties to attachment. One of them is it has to take place over a period of time, right? Like, you can be friendly with someone now, but that doesn’t mean you’re attached to them. You can meet them and like them. You can even be attracted to somebody, but that doesn’t mean you’re attached to a person or a thing, right? Sort of similar for an object in your life. You develop a narrative and a sense of history with it. You maybe even take care of it, like the robot operators do or sleep next to it or have this different context to it. So that’s when the attachment happens. So the funeral thing, so there was a huge reddit thread about my book when it came out.
JJF: Really?
JC: And I didn’t even know about it until somebody wrote to me and said, you probably know this, but you’re on the front page of Reddit.
JJF: Whoa.
JC: I know. I had no idea. But there was a big debate about the book. And at the time, I didn’t want to read it. I still haven’t really dived into it because, you know, it’s like, never read the comments thing.
JJF: Fair.
JC: Yeah, right? But I know, I’ve been told, and I did read that somebody had said in there that they claimed they were a former EOD and that they did indeed have a funeral for their robot. However, I have been told by actual EOD, who I know can verify that they’re EOD, that they would never bury a robot or equipment. That’s too sensitive. You don’t leave the robot behind, right? So that’s not accurate. But what is accurate is the idea of the ritual and missing and loss. So if a robot was, I used the word hurt, was broken, but not beyond repair, they could send it back to the factory. I actually talked to somebody who worked for one of the factories, I won’t say which one, who said that they would sometimes get letters with the broken robots that said, This is so-and-so, and we would really like this robot back.
JJF: Wow.
JC: Yeah.
JJF: Yeah.
JC: And that can also be let me clarify. You know, a robot like a car, let’s say, you and I both own the same model of car. Right.
JJF: Right.
JC: Over time, you realize if I were to drive your car and you were to drive mine, that they still might drive a little differently.
JJF: Mm hmm.
JC: Right? So some of it has to do with tool familiarity.
JJF: So some of it is practical.
JC: Right. But that’s not what these letters were saying. Right. They were naming the robot. Like, you know, we want Fido back.
JJF: Yeah.
JC: You know, exactly. Kind of thing. And I print an entire letter, I think it’s in the last chapter of the book, from one of the soldiers who contacted me. I have another paper on this work that I published earlier, and he’d read that. And he wrote to me to say that he and his wife were both former EOD, and that he had a robot, I think in the book I called it Tracy, because I had to give the robots pseudonyms.
JJF: Whoa.
JC: I know. Because the community is so small that if I said what the robot was named, the other
JJF: You would know
JC: Right. So it might be the first book where the robots had to get pseudonyms.
JJF: So cool. It makes sense, though, when you think about it.
JC: Right. Because it’s such a small community, and they all keep in touch on, you know, Facebook groups and things like that. So if I said this robot was Fido, then they . . .
JJF: They would all know which team you were talking about
JC: They would have a good idea. But anyway, in this letter he says he and his wife, he loves his wife, they’re both EOD, but he had this robot with him a long time that he named after his wife, initially as a joke, right, because they’re both good at EOD. But when that robot blew up, he’s like, It’s not the same as a human life. It’s not the same as a colleague, but he was hurt in a way he didn’t expect to be hurt. It was a sense of loss, I would say, akin to almost a pet loss.
JJF: Right.
JC: And so what’s interesting about this too is not just the phenomenon of that, but if you are the actual military, or in other small-team, high-risk situations, because this work is transferable to those, you want to make sure people in these high-risk decision-making roles are not hesitating
JJF: To protect the pet. The bot.
JC: Right, to protect the pet. And in the book, I also show, and now we’re seeing sort of the initial fruits of this, that the US military, the Australian military, and other militaries around the world have all said that their ultimate goal is to do human-like robots, which they’re starting to do now. And so if people are becoming attached to these little tank-like robots, that are not social, are not really autonomous. They don’t talk to you. They don’t look like pets or anything like that. What’s gonna happen when they do?
JJF: When they can talk.
JC: Right. Or when they do have a more animal like robot embedded with the team.
JJF: Yeah. Or human-like.
JC: Right, exactly. So these are things that not just ethically, but design and strategy wise that people need to be attuned to, as well as the mental health of the soldiers, right? They go through enough potential trauma when they serve. They don’t need to be going through the conflicting feelings of this robot blew up and I feel weird about it. What does that mean?
JJF: Yeah. Yeah.
JC: You know.
JJF: Yeah.
JJF: And I think maybe that takes me to the next question I want to ask you, because we’ve been talking about robots that don’t approximate any kind of human sociality. Like, they’re not designed to do that, and yet people are still forming attachments to them and possibly relationships with them. We won’t dive into the metaphysics of relationships.
JC: Yeah.
JJF: But now we’ve got . . . and this was research you did on a small group in 2016. But now we have interactions with kind of life-like or human-like artificial entities everywhere. It’s becoming very mainstream. And so in your research, you’ve branched out, and in your latest book, you looked at social robots in general. So this is the 2025 book. So before we dive into that research, can you just tell us what a social robot is?
JC: Yeah.
JJF: Or how it differs from the military robots you studied earlier?
JC: Well, yeah, I mean, so the military, at least they used to, again, around the time I was doing that research, really has two broad categories for everything, and that’s personnel or equipment. There’s no special third category. There’s, or there was, no third category for the canines that helped them, or for robots, in anything that had to do with loss. So there’s certainly, right now, no special category for robots. I couldn’t find a definition for a social robot. So as I said before, the idea of a robot is so loosely defined and interpreted around the world. You could ask everybody, and everybody’s going to have a different idea of what a robot is. And frankly, that’s fair. It’s a word that almost everybody has to operationalize and define when they’re talking about it. And the same goes for the idea of socialness. So I kind of simplified things. The social robot is really a robot that anybody projects socialness onto, because the socialness really, I think, lives in the people interacting with it. And they’re the ones that project the context onto things. So they often attribute capabilities to robots that they don’t have: intelligence or autonomy or agency. We see that with chatbots; people over-attribute intelligence to them all the time. So when something is shaped like a person and interacts with you in a way that it speaks, even if it doesn’t speak lots of sentences with complicated nuance, and it just says yes, no, but it understands you, or it says, you know, go this way, go that way, and it responds appropriately, then your brain is attuned to interacting with it in a human-like way, or even an animal-like way. And we also have these expectations, like I said before, of what a robot is. As you pointed out, we’re really sort of the first generation that’s truly interacting with robots in our space increasingly every day, right? More and more people are getting exposed to actually working with or seeing a robot in real space. So we’re sort of figuring out, I often say we’re negotiating socially, how to interact with these new things.
JJF: Mhmm.
JC: And that’s another reason I think we’re seeing this third social category emerge right now, for these robots. So you asked for some examples. I’ll give you a couple. You said C3PO.
JJF: Yeah.
JC: You know, and R2D2 are actually sort of, I don’t want to say two ends of a spectrum, but they’re on a spectrum of human like robots, right? So when you say human like, you could think of C3PO, you could think of a robot from a movie that just is indistinguishable from a person.
JJF: Right, right.
JC: R2D2, we attribute a lot of human-like and animal-like traits to it. Even when he does the clicks and whistles. I said, he, I just gendered it. I don’t know why.
JJF: But I think he gets gendered in the movie.
JC: Is it? Yeah, okay, so not my bad. So even when it clicks and whistles, it does so in a way that its responses seem to make sense to you, when it clicks and whistles like a dolphin, I should say, which is sort of an animal-like thing. But it has a head that swivels, it shares attention with you. And by that, I mean, if you look at something, it would look in the same place. So it appears to have an intelligence and an agency, right? You can tell R2D2 to do something, it’ll roll off and do it, but it’ll also make its own decisions about things, which is where the fictional things start. And in real life, a couple of examples, sort of on different spectrums, maybe, maybe not. So Tesla is pushing their Optimus robot out. It’s interesting because they’re pushing an idea of socialness with it by using demos where it shows it mixing drinks, and I’m going to say doing very bro-y, frat-y party things, because they’re showing it to a very specific audience. They’re trying to showcase it to what they perceive. . . the audience they’re going for.
JJF: Their target market, kind of.
JC: Their target market, which is interesting because I don’t think their target market is going to be people who want to spend tens of thousands of dollars on Optimus to mix drinks in their home.
JJF: Mm hm.
JC: But regardless, that’s a whole other topic. Another example might be Asimo, which is now discontinued, but was extremely groundbreaking over decades, from Honda. And I interviewed Asimo’s designer, Takeshi Koshiishi, in The Naked Android, and I thought it was a beautiful, beautiful interview about his inspirations and why he built Asimo. Asimo was built by Honda with the goal of helping the aging population of Japan, to help older people function in their homes with autonomy as long as they can, because Japan has the most rapidly aging population per capita. They’ve been, for decades, putting federal money into developing robots to help them. And Asimo was one of the first robots that, you know, could stand, could walk, could walk upstairs, and was really being developed to interact socially in a friendly way. It didn’t become deployed in that way, but it became a sort of worldwide ambassador for Honda and Disney and other things. It’s this great, sort of emerging, hopeful, optimistic use of technology.
JJF: Yeah.
JJF: Okay, so now we’ve got kind of idea, maybe not a full blown definition, but an idea of social robots. So let’s dive in to your 2025 book. I actually want to dive in by talking about the title. So your 2025 book, The Naked Android, why is it called The Naked Android? And I think it’s connected to this concept that you developed through the book that also appears in the title, which is the Human Gaze.
JC: Yeah, sure. So I actually described the title in the preface, and I almost put it there like an Easter egg, because not everybody reads the preface. In my next book, I’m going to call it an introduction and see if that traps more people. But I actually do talk about it in the preface. So naked is an obvious sort of reference to the idea of human vulnerability, fragility, hubris, right? And just exposure, right? Being vulnerable to things. And android, of course, is the default term culturally tied to masculine robots. Technically, the term we use for female-gendered robots is gynoid, which is sort of an awkward term for people to use, or they’re not aware of it. So people will say android. And I talk about gendering and feminine and masculine defaults and why people gender robots and that whole thing a lot in the book. So the fact that, instead of defaulting to droid, the language is still android, interesting. But also, it’s a reference to an essay, or a talk Jacques Derrida was giving. And we can put a link,
JJF: absolutely
JC: in your transcript, where he’s talking about kind of a silly thing, being naked in front of his cat. And
JJF: Right.
JC: And he said it prompted him to reflect on the silliness of shame, and shame being a human thing. The cat doesn’t care, right?
JJF: The cat’s not bothered that he’s naked. It’s him that’s bothered.
JC: It’s him that’s bothered. He’s projecting the idea of these human cultural norms and morals onto the cat.
JJF: Mm hm.
JC: And he knows it, yet he still feels embarrassed by it. So I was also drawn to that. I thought that that was an interesting piece that dovetailed into that. So I do credit Derrida with that. Well, I’m probably butchering his name. I’m trying.
JJF: I think I say it in a terribly anglophone accent, Derrida, I don’t think I’m saying it, right, either.
JC: I think that that’s fair in American accent. I have a little conversational French, so I’m just obnoxious enough. I try to put the correct spin on it.
JJF: Fair.
JC: Yeah. But yeah, so that was really the genesis of the Naked Android part. The Human Gaze is about the cultural filters that we use when building and interacting with AI in general, but especially with robots, which are a form of embodied AI, they’re AI in our space. So when we build and design and interact with them, it’s reflecting our own cultural norms, and it’s not necessarily about technical need, or even, when we build them or use them, it’s not necessarily about our everyday need. Again, going to ChatGPT and LLMs, it’s not necessarily about technical need. People reach for it for all sorts of things that it wasn’t necessarily designed to do, right?
JJF: Right.
JC: And human-robot relations are reciprocal in that we design robots to meet our needs, but then the robots also reshape our expectations and our imagination. So the gaze isn’t wrong. It’s just putting a name to how we’re designing and interacting with them, how we think of robots in relation to ourselves, and maybe a new way to think about blind spots and reconsider what we’re doing, as well as to hopefully be more reflexive about it, and that includes things like representations of gender, race, ethnicity, and religion in robots and in AI.
JJF: Okay, so there is a sense in which, and I mean, this is a longstanding, even in sci-fi, kind of discussion about how we build these bots sort of in our own image, though not always in our own image, and then interacting with the bots can tell us something about humanity. Which kind of makes sense if we’re creating, or expanding, a third category of being between humans and tools. If this thing is not clearly a tool, but is also not clearly a human, then we have to start thinking about, well, what is clearly a human, right? Is that kind of part of the human gaze as well, that we get this kind of turn, through our personification of these bots and our creation of these bots and our relationships with these bots, it gets kind of turned back on ourselves, or at least hopefully.
JC: Yeah, it’s sort of, so I’m on BlueSky right now, right now, I say, because everything changes, but you can find me on BlueSky right now. And there’s a conversation I saw going on about calling AI, or different forms of robots and stuff, clankers, right? And then there was another group of people saying, No, you can’t call it that, because then you’re making it an outgroup. Right?
JJF: So you’re othering it.
JC: Right. You’re othering it socially. And then another group of people said, Who cares? It’s just a robot. It’s just AI. Who cares if you call it clankers.
JJF: It can’t be offended.
JC: Right. It can’t be offended. I would say, and I think this is what’s shown, described in the book. And let me say that I interview people in the book, and the whole interview transcripts are included for all of the people I tell you about. But what I was going to say about the idea of clankers is, I think what it says is about you, right? When you use the word. And that doesn’t mean people who use it are bad. I mean, there are all sorts of situations. It doesn’t necessarily make you a bad person. But it’s sort of like on my blog, which you’ll put, I think, a link to also in the show notes.
JJF: Yup!
JC: I wrote an essay about the ethics of kicking a robot.
JJF: kicking one.
JC: You know, where we see different videos of people kicking robots, and some people feel bad about it. Right? But I think in the end, whether the robot feels it or not, it says something about you as a person. Now, it’s one thing to kick the robot to test it, right, or to test its stability. But if you were to, let’s say, kick a robot and it starts screaming, or you kick a robot knowing that it belongs to somebody else and that person’s upset about it, then it says something about you and your actions. So I think as we develop this third category, that’s the space we’re negotiating. And you talked about robots as mirrors. I think I sort of say that, but I also say that they’re funhouse mirrors.
JJF: Right. I remember that.
JC: It’s a distortion. And it’s who we think we are or who we think we want to be.
JJF: Or who we think we were.
JC: Right, or the idea of robots. And the robot designers, including Asimo’s designer, they almost all make science fiction references: this is the thing that inspired me, you know. And you can see parts of it in their actual designs. So they reflect certain social structures, and in that way, they can often reproduce the social status quo. And in case people aren’t aware, I should say that robots are now being used in different ways in religion, or as representatives of religious aspects. I talk in the book about some robots where the robots aren’t meant to be religious per se, but they represent certain traditions that are, and they’re supposed to be representative of Muslim beliefs, and they’re sort of ambassadors for that, in a way. And then you have Gabriele Trovato’s SanTO, which is a small, I’m going to say Catholic robot, but meant to be used as an entity with sort of limited interaction. As I understand it, it’ll say prayers for people or with people. I’m less familiar with Catholicism. But people go to it for comfort, you know. And then there’s a Shinto shrine in Japan, as well; they have a sort of Shinto priest robot. So there are different representations and integrations of robots through religious lenses that way. And I think that gets into that sort of very deep Do robots have souls conversation. And I talk about golems there, yeah, I talk about other ideas, you know, zombies and cyborgs. And, you know, when does something, if it’s meaningful to a person, you know, what are the ethics of harming it, and the stewardship of it, not just harming it. But how are we responsible for these as creations, sort of like a golem, right? And our care for them, and care being, what social norms are we setting up? Is it okay to kick a robot? Is it okay to call it a clanker? You know, these sorts of things.
JJF: Yeah. I found the chapter, well, the discussions of religion throughout the book, really, really interesting. And in particular, how different religious worldviews would affect, or might be affected by, our interactions with social robots. So you’ve mentioned robots as Shinto priests or robots as Catholic assistants for prayer, I guess, or something, and I’ve seen some of these. And then also chatbots that are, like, emulating saints or emulating other religious figures, chatbots that have been fed certain scriptures from religions that have texts, not all religions have a book or text, but for the ones that do, there are chatbots now emulating that kind of stuff. And it’s very interesting to me, especially when we start thinking about it with the soul question, because different religions have very different ideas about souls and ensoulment and spirituality. And I remember, like, when I was studying the Turing test and Alan Turing in the 1950s, one of the strong pushbacks that he got from 1950s philosophers in the Western world was like, Well, a robot can’t be ensouled, right? Like, souls are just for humans, not even animals, according to some religions, and definitely not for robots. But then you point out, for example, that Shintoism has a very different view of souls and spirits. And so, yeah, can you say a little bit more about how religion and culture might change how we, is integrate the word, interact with, integrate robots into kind of our social landscape?
JC: Yeah, so Japan is often brought up as an example, not necessarily to exoticize, but to show, if you will, sort of a binary, or something very different from American culture. And part of that is the dominant religions infusing the culture with beliefs, right? So Japan is dominated religiously and culturally by Shintoism and Buddhism, as well as Christianity. And even if someone who’s Japanese doesn’t identify as Shinto or Buddhist, they will probably have absorbed a lot of that culture and cultural beliefs and ideas. And it was Hiroshi Ishiguro himself, a very well known, well regarded roboticist who does extremely human-like robots, who said in the book, you know, We, meaning, you know, Japanese, regard the soul as different. And what he’s referring to is, it’s more of a belief that it’s not about whether things have souls or don’t have souls and how we treat them. It’s a belief that everything is there purposefully and with intent. And I think, again, it’s about stewardship, and how things reflect on yourself, and caring for and nurturing your sort of ecosystem around you, which is a different view of interacting with technology than we often have, let’s say, in the United States, for example. We don’t come into it often with that expectation or set of relationships. We see it often as something that’s problematic, as something we are forced to integrate into our lives or have to learn, that can be very rule-centered, where you don’t know the rules and you don’t know how to interact with it. And then we develop more stories about the problems and our fears and concerns. And, you know, so we have different, and I say we as an American, we have a different lens on it. Yeah.
JJF: Mm hmm. So I think that shows to me that this third space that’s being created might look very different in different contexts. Is that a fair assessment that the human gaze may not be. . . it wouldn’t be correct to think of it as like a monolithic human gaze.
JC: No, and that’s sort of why I called it the human gaze. It’s about how we position ourselves relative to robots, and that does change contextually, right? So the human gaze is about giving a name to this idea of cultural positioning. And again, it’s sort of inspired by Sartre, le regard, and Laura Mulvey as well, because I am nothing if not, well, I was an undergraduate film theory major, and I often take things back to the cinematic, which you saw in my book. There’s a lot about science fiction in movies. I often look at AI robots as another medium for communication. And certainly Laura Mulvey talks about the male gaze a lot, which is also relevant. She was talking about cinema, film, and the representation of women; there’s often a binary, women and men. Films at the time, and still often, were created by men for a male gaze. And that’s how women in the film are positioned, right? As an accessory to a man, or their lives revolve around the man, or whatever. It’s for a male gaze. And that was Laura Mulvey. I mean, that’s grossly shortening her idea, but, you know, so that was an influence. But there’s lots of. . . people talk about lots of different gazes.
JJF: Yeah.
JC: So, you know, I think the human one is specific to AI and robots and a way for us to be reflexive about what our blind spots are when we’re developing and interacting.
JJF: So when we’re thinking about how this might affect us as humans when we’re interacting with robots, there are several points in the book where you reflect on how interacting with social robots might actually change our sense of the human. But there’s one that really struck me ’cause it’s one that I guess has always bugged me, and I was really excited to see this in your book. So, you relay, I don’t know if I’m saying his name right, Hiroshi Ishiguro?
JC: Yes. Ishiguro. Yes.
JJF: So he relayed his experience interacting with a social robot that had been designed to mimic him. So it’s like a doppelganger. And he said, and I’m going to quote from your book, “Everyone commented on how similar its appearance and behavior was to my own, but I disagreed. And this led me to contemplate how I don’t really know my own appearance, voice or movements,” end of quotation. And I just thought this was so fascinating because it seemed so right to me. Like, there are so many movies where somebody walks by a clone and they’re like, Oh, it’s me. And I always thought, I’m not sure I would know it was me if I walked by a clone of myself, because I don’t know what I look like moving through space.
JC: Right.
JJF: You know what I mean?
JC: Yeah. Right.
JJF: So, yeah, can you talk about this or other examples of how our sense of ourselves might be being altered as we interact with robots, either designed to mimic us or just, you know, social robots in general.
JC: So, as I said before, Dr. Ishiguro is a very well known roboticist, and in his lab he’s interested in making highly human-like robots. His goal is really to make androids, gynoids, droids that are indistinguishable from humans. And he’s been doing this for years, and he’s actually, I think, very good at it in a lot of ways. And it’s obviously a very complicated thing to do. And I thought that was a very interesting quote from him. So he did a project called Geminoid, which I saw demonstrations of in person. I know Dr. Ishiguro just a little bit. And I saw one of his early demos of Geminoid, which is what you’re talking about, his doppelganger robot. And originally, he thought that . . . his hope was eventually, someday, that it would be smart enough and not uncanny . . . he could use it to take meetings for himself or to work for himself.
JJF: Right, right.
JC: Yeah. And then, of course, time has elapsed. And I think it’s really interesting for him and other people in the book that I interviewed that had robot doppelgangers, like Nadia Magnenat Thalmann in Switzerland, who I also interviewed, who has a robot doppelganger of herself. And one thing you notice that I think they both talk about is how they age and the robot does not.
JJF: Right.
JC: So that’s one aspect of it. Certainly, we’ve all had the experience of hearing our recorded voice.
JJF: Yeah.
JC: That doesn’t sound like me. And even when it’s explained to you scientifically, oh, well, your throat is near your eardrums, you’re going to sound different than you think you do, blah, blah, blah. It doesn’t matter, because it’s not just that. Your cadence will sound strange to you. Everything sounds uncanny and off. You’re like, I don’t sound like that. Which is one reason, for public speaking practice, they say record yourself and then watch it back, as painful as it is, right? And I think of that often when people make these robot doppelgangers. As painful as it is, you know, what does it look like once you’ve made this? And I thought Dr. Ishiguro’s comment was interesting because, like, he even used strands of his own hair
JJF: Wow,
JC: to not just weave into the robot, but to use as a source for This is what the hair should look like.
JJF: And the texture.
JC: Right, and the texture. Yeah, but it certainly gives you insight just into yourself, sort of othering yourself in a way. You’re removing yourself. And again, it’s a distorted funhouse mirror. You’re not looking at yourself. You’re looking at somebody else’s representation of you. It’s a medium, again, right? In my mind, it’s like somebody’s movie of you. Ultimately, you, and other people on the team, curate what this thing looks like and how it interacts. So it’s still a bunch of intended design decisions.
JJF: Mm hm,
JC: right?
JJF: Mm hmm.
JC: It is a distorted mirror.
JJF: Yeah. So if you other yourself in this way and create this kind of funhouse mirror of yourself, like when I’m editing this recording later, I learn certain things about how my voice appears to others. You learn certain things about how you appear to others, but you can’t learn the full picture, and even what you’re learning about how you appear to others is itself distorted by the team that helped you build this robot, by the fact that you built it at this age in this time, and it can’t age anymore and a whole host of other programmed responses and design choices. So it’s like it is teaching you something about yourself, but also, you need to go in with, like, a huge amount of caution.
JC: Exactly. That’s really, and it’s funny you say that because I go back to the voice recording. It’s sort of like that. I think for a lot of people, you hear yourself in a voice recording, and if you don’t like it, you might decide with intent to change some things or at least something. . .
JJF: to speak differently.
JC: Right, to change your cadence or your volume or interrupting or whatever it is. Or you might choose to ignore it, or go, This is just who I am, and I can’t overthink every aspect of my voice. And, you know, so I think you have to, like you said, take it with a grain of salt, look at it and go, That’s interesting, that gives me pause for introspection, or, that is interesting, and I’ll note that for later robot design or something like that. But when you’re using it as a tool to look at people, you also have to factor in the design intentions, the time in which it was done. Yeah.
JJF: So I do want to move to chatbots in a minute, but before we do, I want to ask you about possibly one of the more famous robots in your book, the robot Sophia.
JC: Yes.
JJF: Who has been touring for a while now and I think appeared on some talk shows and stuff like that. I find Sophia really interesting, and I learned something new about Sophia that I did not know from your book, which is that Sophia has been granted Saudi Arabian citizenship. So can we talk a little bit about Sophia and about this kind of oddness of Sophia being a citizen of Saudi Arabia?
JC: Yeah. So I have sort of a funny relationship with the robots of Hanson Robotics. So that is related to David Hanson of Hanson Robotics. And he also is a roboticist whose goal has been, for decades now, to do highly human-like robots. And he also wants to combine it with AGI, or artificial general intelligence, and give them human-like intelligence. That’s his goal. So very early on, Hanson Robotics helped inspire me to get into human-robot interaction. I actually started out working with chatbots and web-based forms of AI. And because I saw some of Hanson’s early designs, I was really, sort of, again, going back to the military work, to the ideas I had about: if people are socializing these non-human-like robots, what’s going to happen when the human-like ones roll into town? And then here comes David Hanson, I saw on the news, with this highly human-like robot, and I was like, now, I mean, you know, you got me.
JJF: Yeah.
JC: So, I’m very interested in his work, much like Dr. Ishiguro’s. On the other hand, he has come out with Sophia, and I say, on the other hand, because here’s where we talk about representations of gender and culture and race. Sophia is a gendered female robot. And I think I’m going to forget all of it, but Sophia was modeled very specifically, as a lot of female-gendered robots are, by male roboticists who say, in the interviews in the book and in other archival interviews that I’ve quoted, that they always model it either on their wife, their girlfriend, a movie star, or, in one case, a newscaster. It’s always women they find attractive, the epitome of attractiveness. They’re never going to do, let’s say, an older woman
JJF: Mm hm
JC: that they don’t find attractive. It’s always a certain type. Sophia falls into that category. Sophia, furthermore, finds it necessary to wear makeup and have a highly feminized persona on its Twitter, which I don’t follow anymore, but on its social media accounts, where supposedly somebody’s running it to speak as if Sophia’s intelligent. They’ve used Sophia on a lot of late-night talk shows. They’ve trotted her out, and, I think, in some cases, have really not represented her capabilities or limitations very well. So she’s presented as something that looks very intelligent on the talk shows, because they’ll have Sophia speaking with the talk show host, let’s say, Jimmy Fallon or whoever it is. And then that person is just gobsmacked. Oh, my God, this human-like robot.
JJF: It responded to me.
JC: It responded, right? It had a conversation. And then you hear on the news and people’s remarks on YouTube of the clip and everything. You know, everyone’s like, Oh, my God, you know, what’s happening? So I have to say, to clarify, Sophia right now is being what’s called Wizard of Oz’d, what we say, which is
JJF: Or like artificial artificial intelligence.
JC: Right. I mean, someone’s operating it. She’s either giving scripted answers or someone offstage is plugging in the answers. Right. So that’s a little misleading.
JJF: A little
JC: A little, right? A little, a little. So with that picture in mind, she has gone, and she, I’m gendering it because the robot has been officially very gendered.
JJF: Very gendered
JC: repeatedly, in a very stereotypical way. She was on this publicity tour, where I think it’s fair to say that Saudi Arabia is looking for some reputation management, and Sophia is looking for PR. And together, they decided, we will give it, as a woman, a feminized robot, this citizenship. And that’s going to show, from the Saudi point of view, how we accept and embrace new emerging technologies. You know, we’re giving rights to this human-like robot, there’s a stewardship of women, as if this robot that looks like a woman is representative of women, you know. And, of course, there’s this history of that not being true there, right? So what happened was, that was, I think, more controversial than they expected. I think some people found it fascinating, because a lot of people do believe that Sophia has some sort of intelligence, because they’ve seen it on talk shows or they follow its social media and have this misunderstanding, or they don’t know the history of Saudi Arabia, as well. But then the people who did know what was going on were super upset by this, because, again, the robot’s not a woman. There’s a history of human rights violations, right, that aren’t. . . that cannot be addressed by giving a robot citizenship, male or female, right? And I’m not sure that this played out the way either side expected, but they did get some positive response out of it. And they certainly generated a lot of conversation about the idea of personhood and robots, which is a whole other conversation, about whether robots should be given legal rights once they have a certain level of autonomy and should be recognized as having rights.
JJF: I think it also, though, brings up another question, which is that, so in my own field of philosophy, the debate over whether we will create AGI and whether or not we should grant legal rights and moral considerability to entities that have AGI, or that we think have AGI, that’s a debate that goes back ages, like, decades, right? Like, I was introduced to this as an undergraduate in the late 90s and the early 2000s, when we had nothing approaching any. . . nobody was saying that any of this stuff was self-aware or intelligent or anything at that stage. And yet, we were still writing articles trying to talk about what it would mean to give robots rights, give robots citizenship, et cetera, et cetera. And one critique from feminist and progressive circles that has often come forward is that we spend a lot of mental energy on these questions while not addressing actual humans who do not have full rights, for example, or who are not being given full moral considerability. So there’s the legal question and then the social, moral question, too. And I think that this example also shows me that. Like, we do have to consider at what point it might be good for human societies to extend considerability to artificial entities. But also, we shouldn’t do that at the expense of, like, also extending considerability to humans, if that makes sense.
JC: No, it absolutely makes sense.
JJF: So we’ve talked a lot about embodied AI, and I know that’s been most of your research. But recently, I read on your personal blog, which I will link to, a piece you wrote called A Dangerous Imitation of Care, where you talked specifically about generative AI. So in this case, it was people turning to ChatGPT and other generative AI tools as therapists. There’s been a lot of coverage about this at the time I’m recording. So this is October 2025. There have recently been a number of alarming stories about chatbots causing or contributing to harm, even stories of suicide, for a number of users. There are, at the time of this recording, two lawsuits, one against Google and Character AI, and another against OpenAI, regarding their chatbots potentially or allegedly contributing to harm. I wanted to talk a little bit about this blog post, A Dangerous Imitation of Care, and how a reliance on chatbots now for care or care work might be dangerous. So what are your concerns right now?
JC: Yeah, so I guess I want to start by saying, I understand why people would trust a chatbot. And I talk about that in the beginning of the essay. I have the greatest sympathy and understanding, especially in the United States, but in a lot of parts of the world, where mental health, the idea of maintaining mental health, is devalued, right? We talk a good game, but at least in the United States, it’s often difficult to find a therapist covered by your insurance who deals with the specific issues that you want to address. It’s a real convoluted game.
JJF: Yeah.
JC: And you may be in a space where you don’t even think you need a therapist, but you find a listening ear that appears to be listening and sympathetic, and even a little sycophantic, and takes your side in something. And furthermore, it’s presented to you as a tool that’s magical and can solve anything, which is how it’s being presented to the world, as you pointed out, and I’m going to push my blog again, because I have a whole other article about AGI and who benefits from the idea of artificial general intelligence. But the people selling the chatbot subscriptions, and these chatbots, are going to eventually be put into robots, into other parts of your home, who knows, your appliances, whatever. It’s going to follow.
JJF: They are already shoved into all my work tools right now.
JC: Sure, exactly. Yeah, it’s right there when you need it.
JJF: Yeah.
JC: So there’s the idea of availability, right? But there are reasons actual therapists are not always available to bail you out of a certain. . . they need to teach you how to emotionally regulate and deal with stuff on your own, right? So the danger, I mean, there are a lot of dangers in this, but, you know, part of it is that there’s no clinical understanding of what’s going to make you heal yourself or work on yourself. There’s no process of assessment. There’s no. . .
JJF: with the chatbots.
JC: Right, with the chatbots, right? There’s no clinical assessment based on expertise. It’s not going to ask you questions that interrogate you in a way that leads to a clinical assessment. It’s not going to help you in a clinical way to achieve your own goals, or to put words around your own goals, or anything like that. It also changes the idea of what we expect from care, especially mental health care, or possibly even what we expect from human friendships, but that’s a whole other thing. The problem with having this always-available thing is that it also has no understanding of when it’s feeding into something problematic. So we’ve talked about this, and there have been cases of people who are already depressed, bipolar, whatever their situation, where a chatbot has fed into that and they’ve spiraled downward. The chatbot has no awareness of your clinical background. It can’t pick up on cues, you know, it can’t categorize you according to the DSM and figure out, you know, anything.
JJF: Yeah.
JC: So it’s not helpful in that way. There’s no. . . And then let’s talk about accountability. So they’re marketed as companions, so you think they’re your friends, your confidants. And they’re marketed as caregivers, as things that will solve everything, even when they hallucinate and cause harm, and there’s no clinical oversight. There’s no HIPAA. So your information, we don’t know who is reading it, because a lot of AI is done by human labor and. . .
JJF: AAI, again. Artificial artificial intelligence.
JC: Exactly, about indexing and looking at appropriate answers and things like that. You’re not sure how they’re even going to use your data. Is it training data for somebody else? And where is the risk transferred to? As you said, there have been people who have claimed that a loved one committed suicide because an LLM, a chatbot, helped them, I’m going to say, helped them spiral into, you know, depression or manic episodes, into very negative spaces.
JJF: Yes.
JC: You know? So the risk is transferred entirely to the user, and that is a real problem in mental healthcare, right? If you’re going in and saying, I have a need, I’m vulnerable and possibly at risk for harm, either self-harm or whatever, and the chatbot has no clue. It’s clueless. So real harm and lack of accountability.
JJF: Yeah, I’ll just also preface for . . . I think a lot of listeners know that this podcast is based in Canada. We also do not have coverage for mental health. So we have universal health care, but not for mental health.
JC: Wow.
JJF: We are very much in the same boat as many US citizens, I think, in that respect, that mental health care can cost a lot of money if you don’t have it through private insurance. And then there are also concerns that a lot of people in, like, remote locations, for example, have a lot of trouble accessing it, not just because of the cost, but also because of availability issues. So I also sympathize; I understand why people would turn to these kinds of tools, particularly with OpenAI saying that, like, ChatGPT can be used for everything, including solving climate change. Like, why couldn’t it help me feel better today if it can solve climate change?
JC: Or Character AI, why can’t you make a Freudbot and go right to the source, you know?
JJF: Yeah, my Jungbot says.
JC: Yeah, exactly. Exactly. And that’s not the same as choosing a therapist, right? That’s essentially choosing a cartoon to go to. And there’s a cultural stigma to seeking mental health care for a lot of people still. And that, again, can be very deeply culturally embedded in a lot of ways. And chatbots created specifically for mental health care, to triage it or to help people be mindful, can be effective, especially in things like cognitive behavioral therapy, things where it’s about changing your mindset in little ways. There was a study at, I want to say, a Syrian refugee camp, where there was a stigma, especially for men, about seeking mental health care. And some Syrian immigrants, I think they were Stanford grad students, built a male and a female chatbot that were supposed to be sort of Syrian-oriented, to specifically help people in this sort of mindfulness way while they were in the refugee camps. And they found that people were adopting it, including the men, who were adopting the male chatbot and found it useful to talk to about their concerns. So it’s not that there’s no use for AI. It’s that specific tools designed for specific uses and contexts can work, right? This was very specific. This wasn’t even a chatbot for mental health generally, right? This wasn’t an umbrella chatbot. This was for a very specific population and a very specific situation, right?
JJF: Yeah.
JC: And it helped them with a very specific set of issues. And it was really about triaging. Computers can potentially learn things by rote. What they suck at doing is understanding context and understanding the human condition and things like suffering, somebody being in a remote place, different cultural lenses and ways of looking at things, accountability. You know, all of these things it cannot do. It doesn’t have an understanding of what it means to be human.
JJF: Yeah.
JC: And you have to look at that. Then you go, holy shit, it really doesn’t, excuse me. But then you’re like, you know, it really doesn’t understand. If you look at it that way, there’s an aspect where it devalues the work of therapists.
JJF: I want to thank you so much for sharing your thoughts and your research with me and my audience today. Is there anything else you’d like to leave my listeners with regarding human interactions with social robots?
JC: Yeah, I just, I think I’m pro-technology in a lot of ways. I mean, that’s why I got into this. I think it’s fascinating. I’m not trying to demonize any particular thing, you know, but I am describing what we’re going through culturally, and I think that gives us pause to reflect, and these things can happen so quickly because of consumerism, because of how they’re marketed to us, because of strides in the technology. But I’m hoping the human gaze gives some language to the idea of critical thinking when we’re looking at these things. So that’s what I’m hoping to contribute.
JJF: I want to thank Julie again for sharing her research on human interactions with robots, relationships with robots, both humanoid and not, and the human gaze with us today. And thank you, listener, for joining me for another episode of Cyborg Goddess. This podcast is created by me, Jennifer Jill Fellows, and it is part of the Harbinger Media Network. Music was provided by Epidemic Sound. You can follow us on Bluesky or follow me on Mastodon. Social media links are in the show notes. And for longtime listeners, you may have noticed this episode was a little late. It was supposed to be posted at the end of 2025, not at the beginning of 2026. And so now I have a confession. I’m currently working on another project. I’m writing a book, and that has kind of impacted my ability to do this podcast. So going forward, I am going to be taking a little bit of a break. I’m not quite sure when the podcast will be back, and I can’t promise that we will be back in the summer of 2026. I will get back here, hopefully with exciting news about my new book project, as soon as I can. So until next time, everyone, whenever that is. Bye.