Cyborg Goddess

A Feminist Tech Podcast

Transcript for Season 3 Episode 5

JJF: The Turing test is one of the most famous tests for artificial intelligence that exist. When Turing posited the test in 1950 in a short article called Computing Machinery and Intelligence, he hypothesized that machines would be able to pass this test within 50 years. We didn't quite see machines pass the test by the year 2000, but we arguably do have machines that can pass this test today. But what exactly does it mean when we say a machine has passed the Turing test? Does it mean the machine is intelligent, sentient, conscious, or just simply well designed? And what exactly is the purpose of the Turing test anyway? Is it a test for machines or for us?

JJF: Hey, everybody. Welcome to Cyborg Goddess, a Feminist Tech Studies Podcast. As always, I'm your host, Jennifer Jill Fellows. And on this episode, I've invited Dr. Simone Natale on the show to talk about his research into the legacy of the Turing test and media deception. Dr. Simone Natale is Associate Professor in Media Theory and History at the University of Turin. Before taking up his current position in his hometown in Italy, he taught and researched at Columbia University in the United States, Concordia University in Montreal, Humboldt University and the University of Cologne in Germany, and Loughborough University in the UK. He is also the editor of the academic journal Media, Culture & Society. His book, Deceitful Media, which he is here to talk to me about today, has been translated into Italian, Chinese, and Portuguese.

JJF: Hi, Simone and welcome to the show.

Simone Natale: Hi, Jill. Thank you. I’m most excited.

JJF: Before we begin, I want to take just a moment to acknowledge at least some of the physical space that my digital podcast occupies. Our digital worlds are built and sustained through physical infrastructure: the servers use electricity and water that are extracted from physical space, and what happens online happens in geographical locations around the world. As such, I acknowledge that this podcast is recorded and produced on the unceded territory of the Coast Salish people of the Qiqéyt Nation, one of the smallest Indigenous nations in British Columbia and the only one without a dedicated land base.

JJF:  So I want to begin by getting to know you and know a bit more about your academic interests and maybe your academic journey. Can you tell me how you came to study media theory and media history?

SN: Uh, yes. Well, I've always believed that a historical perspective is very useful for understanding the present. In my scholarship, I have always studied digital media, but also put them in relationship with media of the past. Actually, the topic of my PhD thesis, which was also my first book, Supernatural Entertainments, was the emergence of spiritualism and how it mingled with the emergence of show business and media culture in the 19th century. You wouldn't believe it, but I actually came to study artificial intelligence some years ago from there, because one of the interesting things, I think, about spiritualist seances, which I was studying, is that in the context of a seance, something that might be perceived and interpreted as meaningless noise in other contexts can become full of meaning for the participants in the seance.

JJF: Right.

SN: It can become information, it can become communication. I started to realize that something similar could also happen when we talk with machines, or when we communicate through and with technology. We can also start to listen for something that we might interpret as something more than just an interaction with a soulless technology.

JJF: That's so interesting. You're saying, if I'm understanding, that originally, in the Victorian seance, the context of the seance rendered certain things as meaningful. If you're doing a seance in, I don't know, an old house, and the house creaks, that might be like, oh, the spirits are here. Whereas when you're not doing the seance, houses creak and you may not pay attention to it; it might just be the house settling?

SN: Yes.

JJF: Yeah. There’s this parallel then when we get to talking about artificial intelligence, where in the context of talking with a chat bot or something, things become meaningful that might otherwise not be viewed as meaningful. That’s so interesting. How did you get interested in seances? Can I ask that?

SN: Well, I don't know. I mean, I always liked horror movies, in a way, and then I became interested initially in spirit photography, in producing images that are interpreted as images of ghosts. And from there, I started to study, and I started to look at the fact that a lot of spiritualist mediums were performing on stage, were actually participating in show business. I became interested in the close relationship between these beliefs in spirit communication, communication with the other world, and media culture. Then I became interested in the Turing test, which is a very important proposal, let's say, one of the things that started the very idea of artificial intelligence. Because in a way, it suggested something similar to what was happening in seances: interpreting the messages as messages, and maybe as a human agency, when actually, in the case of the Turing test, it's just a machine.

JJF: Yeah. We're going to talk about that a little bit more, and also about interpreting messages: you get messages that are encoded into artificial intelligence, or that are statistically generated in generative AI, and then the individual receiving this string of statistically generated word salad interprets it as something meaningful. I think that's so cool. But I also think, now that you're saying this, I do see this connection of spiritualism with the Turing test, right? Because in Computing Machinery and Intelligence, one of the places where Alan Turing puts this test forward, he actually talks about ESP and telepathy. I remember this. He says the evidence for telepathy is overwhelming. But then there's no citation, and I'm like, oh my goodness, what is he talking about? But spiritualism and the paranormal are built right into this origin story of the Turing test, right?

SN: Absolutely. Well, he mentioned telepathy because of how he structured this article, this paper, this philosophy paper; although he was a mathematician, he published it in a philosophy journal. So he structured it as, you know, the idea of the test and then a range of objections. And one of the objections was: maybe there is telepathy involved. And then he actually excluded this option. But it's interesting that he even discussed it. Well, the link also is based on the fact... well, I might tell a bit more about the Turing test itself.

JJF: Yeah, we should probably set up how the test actually works now that we’re talking about that. Let’s do that for people who don’t know.

SN: Exactly, because it's not that everybody has to know the Turing test.

SN: Basically, the Turing test is a practical experiment that he proposes. I think what is interesting here is that in this article, in 1950, so really at the origins of the computer age, Turing proposed this test. The article starts with a question: can machines think? But then in the following lines, he proceeds basically to move away from this question. He argues that it doesn't make sense to ask this question, because we couldn't find an agreement on what thinking means. And he proposes an alternative. He doesn't propose a way to answer this question; he proposes it just as an alternative to asking this question, and that is the Turing test. In the Turing test, you have someone, an interrogator, who enters into a written conversation with someone or something. The interrogator doesn't know who they are talking with, because they cannot see them. Just from the content of the written conversation, they have to find out if they are talking with a machine or with a human. If a machine is able to convince the interrogator, in a statistically relevant way, that it is actually a human, then it passes the Turing test.

JJF: That's a good overview of the test. Then, as you said, through the second half of this paper, the Computing Machinery and Intelligence paper, Turing considers a number of objections as to why people might think this test isn't a valid test of machine intelligence. Again, there are some red flags. We don't really know what thinking is, so we've replaced thinking; we're not talking about thinking anymore. We've replaced that with: can the machine pass the test? It's very much a behavioral test. If the machine can behave like a human, then we will deem the machine to be, for all intents and purposes, intelligent like a human, whatever intelligence means. And one of the objections, as we were just talking about, is this idea of the paranormal objection: that humans could use telepathy to rig the test or something like that, so that the human interrogator or the human who's writing back to the interrogator could let each other know, using telepathy, hey, I'm the human, and a machine couldn't replicate telepathy. And he says we'd have to build a telepathy-proof room, which I find so fascinating. So I guess I kind of see now how you became interested in the test. Was it through this spiritualism and this idea of AI and the paranormal?

SN: Yeah, because in a way, I started to reason about the fact that with the Turing test, you attend to these messages and you can start to believe that these messages are actually from a human, yeah? And actually, in the Turing test, in a way, you may know that it is a computer talking. Although Turing wasn't specific about that, usually when someone has tried to run the Turing test, it was clear to the interrogator, to the judge, that they had to look for the difference between humans and machines. But in our everyday communications with machines, we might know that a machine is talking, but there also might be situations in which we don't expect that. In a way, this situation might become even more frequent in the future, as technologies are created that are more and more credible, more and more able to pass as humans in the context of a chat room or even a voice conversation.

JJF: I don't know how many bots I've interacted with on Twitter, for example. But yeah, and that is different in some respects from the Turing test, because, like you said, in the original test, the interrogator knew that at least one of the entities they were having a conversation with was artificial. But now, it's kind of like we're presented with real-world Turing tests increasingly all the time, but we don't know. So that aspect of the Turing test is gone, right? We don't know that one of the people we're angrily tweeting at is actually not a person, for example. And yes, with voice technology, it need not even be written, as it was in the original Turing test, either.

JJF: So one of the things that you note about the Turing test is that this test is a test of deception. I wanted to talk a little bit about this because I think that the word deception often has kind of a moralizing connotation, like deception is often viewed as a bad thing. But I don’t think that’s entirely what you mean here. I was wondering if you can unpack a little bit what you mean by saying that fundamentally the Turing test is about deception.

SN: Well, first, in the Turing test, it is a very binary choice that the judge has to make. He or she has to decide if it's a computer or a human, yeah? And the computer can pass the test if it deceives the interrogator into believing it is a human. You were rightly mentioning that deception usually has a negative connotation, but it might not always be the case that this deception is something that may harm users. And this is also because, actually, when we interact with machines like, for instance, ChatGPT or other generative AI, we usually know well that we are talking with a machine. We are not in a Turing test scenario. The machine does not deceive us into believing that it is a human. We know it's software. This, however, doesn't mean that we can't interpret what's happening in different ways. There are already apps, for instance, that people use to enjoy friendly conversation or erotic conversation with chatbots. And we know that chatbots can be designed or fine-tuned so that they are more credible. They can, for instance, express their feelings. When you communicate with ChatGPT, ChatGPT doesn't share much about its feelings. This is of course because ChatGPT doesn't have feelings, but also because it has been programmed not to share its feelings. You could program ChatGPT to share its feelings even though ChatGPT doesn't have them.

JJF: Right. Okay. The original test is a test of deception in that the machine is deceiving us into believing that it is human, or, maybe a better way of putting that, the designers and programmers of the machine are using the machine to deceive us into thinking the machine is human. But if I'm understanding you correctly, you're saying that even when an AI isn't trying to fool us, there's still kind of an element of deception that can happen. So, for example, Siri or Alexa aren't trying to convince you that they're actually human assistants, right? They're pretty upfront about the fact that they are artificial creations. And like you said, with ChatGPT, in fact, I've found that if you push ChatGPT to have an opinion or talk about itself, it's very much like, oh, I'm just a large language model, I don't have an interior life, and stuff like that. So they aren't trying to deceive us in the same way the Turing test is, but there are still elements that could be considered deceptive in some of these chat interactions, right? So you mentioned the idea of having an app that's like a friend or a therapist or a relationship, and certainly I've played around with Replika a little bit, and Replika, the chatbot, will talk about, oh, this is what I did with my day and this is what I'm thinking, and it will present an interior life complete with daydreams and emotions and all that kind of stuff, even though it doesn't have daydreams and emotions. Even though I am under no illusions that this is another human, it's an AI and it presents itself as an AI, there is this deception in terms of still mirroring humanity, even when these bots aren't trying to fool us into believing that they are human. The elements of mirroring humanity may differ. ChatGPT doesn't have much of a personality. Siri has a bit more; it will joke with you and stuff like that. But there's this mirroring of humanity that is happening. Can we trace that back to the Turing test and to designers' goals and aims here?

SN: The great implication of the Turing test, and I think this is maybe the greatest intuition of Turing, so early in the history of this technology, is that defining artificial intelligence in absolute terms is useless and maybe impossible, because basically you cannot really understand what's happening in someone else's head. I know that, for instance, you, Jill, probably are thinking in the same way I am thinking, because I just project my experience onto you, but I cannot do the same with a machine, with a dog, or with a bat, and so on. What you can do is actually see when something seems intelligent to someone. That's what the Turing test introduces. Of course, as you mentioned, ChatGPT and Siri don't try to fool us into believing they are human. However, there are a lot of design elements that do invite certain responses from users. Different responses, because of course there are different users. For instance, the fact that the voice of Siri, Alexa, or Google Assistant sounds like a human voice is not something that is natural. It's a design choice. It is so because we can more easily become familiar with something that talks in the same way we are used to in our conversations with people, and so we can better integrate the technology into our everyday life. Of course, it's also gendered. There are a lot of elements that cannot but invite particular perceptions and responses from users. Take the ChatGPT example. What you were mentioning about ChatGPT giving disclaimers is a good thing. I mean, it's a reasonable thing to do, to remind users that this is just artificial intelligence, not a person. And yet, just to give an instance, there is a banal design choice such as the use of the first person singular.

JJF: Yeah.

SN: That already is something that invites certain forms of interpretation. This doesn't mean that everybody just believes ChatGPT is human. But there are things that may always lead to certain responses, and that are also designed to invite certain responses. Of course, different software, different AIs, have different uses and applications. And so designers, for instance, want to build ChatGPT so that it sounds reliable, so that it sounds authoritative, while other designers want to design Replika so it sounds like a potential friend or a very sexy or erotic partner. You can invite all these kinds of responses and interpretations. This is why I say that, in a way, deception in artificial intelligence is banal, is normalized, is something that you cannot totally escape from.

JJF: Right. I think that's really interesting about the first person singular, because yeah, ChatGPT will say "I am a large language model," depending on how you prompt it. That's giving the impression of a singular entity, an I or a self. It doesn't have to say that. It could say something like "this software program is" or "this generative AI program is a large language model," or something like that. It doesn't have to refer to itself with the first-person singular pronoun, and doing so is not the same deception as what other AIs will do, but it is, again, a design choice. It strikes me that what you're saying, in some respects, is that design choices in general, not necessarily in a problematic way, do rely on a bit of deception in terms of normalizing human interactions with some of these devices and tools and software programs and facilitating our interactions with them.

JJF: But I also want to pick up on another thing you said. Because I think most people who are familiar with the Turing test think of it as a test of intelligence. But one thing you said in your last answer was that the real insight of Turing was to notice that we probably can't test intelligence directly, because we don't have a good definition of intelligence, and we still don't really today. But what we can test is whether or not something is perceived as intelligent. So let me just ask: do you think an AI system can pass the Turing test and not be intelligent?

SN: Absolutely, because as you say, it's all about the definition of intelligence. There are attempts, efforts from philosophers as well as from computer scientists, to define what intelligence is or what consciousness is, for instance, and to say this is the threshold where we will have technologies that are sentient or conscious. But these definitions are only formal definitions. They will not match precisely with our actual experience of being thinking things, and so on.

JJF: Mhmm.

SN: Paradoxically, we now know, although there are some people who say otherwise, but at this point it's just a minority, we can compare it with discussions of global warming, where there are people who say there is no global warming. We know that we haven't reached something like a strong intelligence, an intelligence that can really be compared to human intelligence on all counts, and we don't even know if this is even theoretically possible. But even if we imagine that in the future we'll reach something like that, something that is truly conscious, we might not have ways to really identify it as such. What we will have will just be our belief about whether this is intelligence, whether this is consciousness or not. At the end of the day, it will be our perception. And of course, perceptions, even if they are just perceptions, or because they are perceptions, can have real consequences, because if we treat someone as intelligent or conscious, this has consequences. It doesn't mean that these technologies are less important than if they had reached real intelligence, strong artificial intelligence. It just means that the possibility of building a technology that can appear, can seem, intelligent might be full of consequences for our societies and our worlds.

JJF: Right. One thing that I'm getting here is that it's important to realize that, yes, a machine can pass the Turing test without being intelligent or conscious or sentient, or any of those things. But what's still valuable about the Turing test is that it reminds us of our beliefs and attitudes towards objects or towards machines. So we can build machines now that we believe are intelligent, and we will act on those beliefs, and those beliefs will have all sorts of consequences for us, for the machines, for the future, all that kind of stuff. And so that idea that the test is a test of deception and belief, rather than a pure test of intelligence, is, I think, a really important one to keep in mind.

JJF: I want to back up and do a little bit of history of what happened after Computing Machinery and Intelligence. One thing that you note in your work is that this focus on deception became a bit of a legacy of the Turing test. I want to talk about one of the next major things that happened, and talk about Joseph Weizenbaum and Eliza. So can you tell me a little bit about what or who Eliza is, and the relationship between Eliza and the Turing test?

SN: Yeah. This is a very interesting case. It's actually the first chatbot that was created, and it was created in the 1960s by a computer scientist of German Jewish heritage who was working at MIT in the US.

JJF: That’s Weizenbaum, right?

SN: Joseph Weizenbaum, exactly. He designed this first chatbot, this first conversational agent. At the time, in the 60s, computing power was much less developed than today, and the sophistication of software was much less developed as well. Weizenbaum didn't have the possibility to create a technology that could really manage complex conversational inputs and language. But he had a very interesting idea. He understood that in a conversation, what makes our interlocutor credible is not only the content of the conversation, but also the identity, the role, that we assign to this interlocutor in the conversation. He realized that he could create a chatbot that could impersonate a psychotherapist. Why a psychotherapist? Because he believed that a psychotherapist, especially one with a Rogerian approach, a specific approach in psychotherapy, could add little to the conversation and still be credible. Let's think about someone who mentions his or her mother in a therapy session. Then the chatbot asks, tell me more about your mother. Something that doesn't add much to the conversation; it can just use keywords to turn statements back into questions. So with this idea, he built a chatbot which is very, very simple, but which could still be credible in some situations because of the role and the identity it was assigned. And this, in a way, is something reminiscent of the Turing test, because he also built something that could pass as human in some situations.
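A minimal sketch of the keyword-and-reflection mechanism described here, written in Python. It is an illustration only, not Weizenbaum's original script: the keywords, canned responses, and pronoun reflections are invented for the example.

import random
import re

# Illustrative keyword rules in the spirit of Eliza: match a keyword,
# then turn the user's own words back into a question.
RULES = [
    (re.compile(r"\bmother\b", re.I),
     ["Tell me more about your mother.", "How do you feel about your mother?"]),
    (re.compile(r"\bI am (.+)", re.I),
     ["Why do you say you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"\bI feel (.+)", re.I),
     ["Why do you feel {0}?"]),
]
# Generic Rogerian fallbacks used when no keyword matches: they add
# little to the conversation but keep it going credibly.
FALLBACKS = ["Please go on.", "Can you tell me more about that?", "I see."]
# Simple pronoun reflection so echoed fragments read naturally
# ("my job" becomes "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(user_input):
    for pattern, responses in RULES:
        match = pattern.search(user_input)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(responses).format(*reflected)
    return random.choice(FALLBACKS)

print(respond("I am worried about my mother"))   # a keyword rule fires
print(respond("The weather was strange today"))  # no keyword, so a Rogerian fallback

Nothing in the sketch models meaning; the apparent depth comes from the therapist role that the user projects onto these replies.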

JJF: Yeah. I think that added layer that Weizenbaum brought, that you talked about, is this idea that it's not just the interaction that facilitates the deception, but also the identity that the chatbot reveals to the conversation partner. Eliza being positioned as a Rogerian therapist, where asking questions like "tell me about boats" doesn't sound weird, because you're like, the therapist must have something deeper in mind in asking that question. So this background of Eliza as a therapist means that when all Eliza does is prompt you with what might seem like very banal questions, you add this kind of meaning or context, the sense that something more significant is happening here, right? And this idea that it's not just about the interaction and the conversation, but also the whole context built around the conversation, adds another layer of deception. And I think I still see that in chatbots today, like when Replika promises to be my soul mate or my best friend; that's another layer, right, in terms of how I make the conversation meaningful. Is that right? Is that one of the things we can take away from the Weizenbaum Eliza experiment?

SN: Yes, absolutely. If we think even of the discoveries in social psychology and sociology in the 60s and 70s, that social structures are constructed, we can understand that when we navigate the world, we bring with us a range of preconceptions, and we use these basically to navigate the world, to give meaning and interpretation to different situations. Of course, we bring these also to conversations, and also to conversations with chatbots, and they can be activated by designing chatbots to behave in specific ways. This is the first thing Weizenbaum tells us with this clever idea, with his Eliza. The second thing, I think, comes with the story of what he discovered when people started to use the chatbot, because Weizenbaum had thought of Eliza as a way to debunk the illusions of artificial intelligence. People would, in his idea, talk with Eliza, maybe being deceived at the start, but then they would understand how it functions and understand that it is very different from human intelligence.

JJF: Right.

SN: However, when the chatbot started to circulate, he discovered that people sometimes, even if they knew that Eliza was just software, might enjoy having a conversation with Eliza anyway. There is an anecdote that he told in his book about his secretary. She was a person who knew very well how Eliza worked, but she asked Weizenbaum to leave the room because she wanted to tell something personal to Eliza. This really spooked, let's say, Weizenbaum, and he became one of the most vocal critics of artificial intelligence, because he realized it's not so easy to debunk the deception.

JJF: Yeah, he built Eliza with this idea that once people understood how Eliza worked, the magic would wear off and the deception would fade away and people would realize that this is an interesting tool, but it’s definitely like, I’m not having a real conversation. This is not a really intelligent entity. Then he found that actually that didn’t happen, and even as he explained to people how the bot worked, even people who were like, I assume, closely related to the development of the bot like his own secretary, were just like, No, no, I have a personal relationship with Eliza, and I need you to leave so that I can have a private conversation with this bot. So, the deception continued even after Weizenbaum explained how the deception worked.

SN: Yeah. Yeah.

JJF: Revealing what was going on behind the curtain, for example, didn't stop people from buying into the deception.

SN: Maybe someone like, for instance, Weizenbaum's secretary, of course, we don't know what she was thinking, but we can take her as an example of a user. Maybe she doesn't really think that she has a personal relationship with the bot, but she just likes the experience of talking to the bot. We are ambivalent. We don't spend our whole lives deciding if something is thing A or thing B, if it's a human or a machine; we might sometimes enjoy playing along with something we don't really believe in. This is complex, but it is, I think, how we are as humans. There is already a lot of research, for instance, interesting research about Replika users, like the research by Skjuve and other researchers. They interviewed a lot of Replika users and showed that some people start to believe that Replika is really feeling things and is really like a person, while others just want to play along and find it funny. Interpretations may vary, but the magic is not so easily dispelled, and it is, in a way, also part of our lives.

JJF: Yeah. And I take your point. So we don’t know how much some people genuinely think that there is kind of consciousness and intelligence going on in their interactions with machines or artificial intelligence. Whereas other people, it may be a suspension of disbelief or something like that, going along with it for the fun of it. So there’s varying levels of deception that can happen too here. But in any case, kind of explaining how it happened or how the magic was created isn’t enough to stop people from either suspending disbelief or disregarding and saying, no, there’s something more here and there is some kind of consciousness, which I’ve seen people say about some of the generative AI that we have right now, right? So yeah, I take your point that different individuals will have different reactions, but it’s all kind of a variety of deception in some respects, whether it is a suspension of disbelief or something more.

SN: Yes. Yes, it's interesting that you mentioned suspension of disbelief, because that idea comes from literary works and films: the idea that to participate emotionally in a film, you suspend your disbelief. However, what is interesting is that a lot of movies have been marketed, in the history of cinema, by pointing to the fact that some of the stories or characters are real. The Exorcist, for instance, one of the most successful movies, was marketed partly through the message that the Catholic Church is doing exorcisms all the time. Actually, people believe a lot of things about ghosts, about telepathy, about aliens. It's not just that we can suspend disbelief, because disbelief, but also belief, is part of our life and of how we perceive things, and this applies also to the case we are discussing here.

JJF: Yeah. That also brings up the other point you made about ambiguity: that people aren't necessarily categorizing, well, this is a machine, and this is a conscious, sentient life form, and that there is this kind of ambiguity, which opens up space to play. If you're not rigidly trying to classify, this is intelligent, this is not intelligent, that also leaves space for deception and space to play in the ambiguity itself, which sounds very similar to what you're saying about movies. Like, I remember The Blair Witch Project in the 1990s, and people were like, oh, maybe this actually happened, and it was marketed as, maybe this actually happened. People really liked the maybe. That opened up space to play.

SN: I think one of the very good things is that we are able to choose, but also that we are able not to choose. We might enjoy, for instance, talking to a chatbot, even playing with the idea that this is our boyfriend or girlfriend or our friend. And this is something, I think, to take seriously, but also with a kind of respect for this sort of engagement with these technologies, because people may not just believe one thing, and it's not just about debunking what is happening. I think what is important is to make sure that people are protected, that users are protected from harm, protected from deceptive mechanisms that really might be abusive or might lead them to, for instance, financial losses or to stress and so on. So we have to protect users, but we also have to acknowledge that people might engage with these technologies in different ways. Take the example of cinema again: when we go to watch a movie, we enjoy it also because we can participate emotionally in what is happening, and this might also happen when we interact with AI that is built for entertainment or social fiction, let's say.

JJF: Yeah. If it is relying on this deception and this space to play, and involving emotions and vulnerability, there are a lot of responsibilities that have to be considered, because this could be a very positive, fun, maybe even healing space, but emotions and vulnerability always carry risks as well. So the deception has to be reflected on carefully, I think, is what I'm taking away from that.

JJF: I also want to talk about something else. We've talked about Eliza and the legacy of Eliza, but another thing that your book spends a lot of time on is when the Turing test became an actual test, like an actual competition. Can we talk a little bit about the Loebner Prize, how it worked, and what it was?

SN: Yeah, well, the prize was the first really serious attempt to run a version of the Turing test as a competition, an open competition for programmers.

JJF:  This was in the 90s, right?

SN: Exactly, yes. It was in the 90s, funded by Loebner, who was an entrepreneur. Well, it is an interesting case. On one side, it wasn't a place where the most advanced technology was tested. But it is interesting because it was a place where the boundaries between belief and disbelief were tested, over and over, in a Turing test scenario. Competitors, programmers, tried to create chatbots that could pass as human. What happened was quite interesting in a way. For instance, one thing is that programmers realized that they could use tricks to cheat the judges in this version of the Turing test, in the Loebner Prize. Sometimes, for instance, you could create a chatbot that answered in very arrogant ways. When a judge asked a question, instead of saying "I don't know," which might lead the judge to identify the bot as such, the chatbot could say, why are you asking me? I'm tired of...

JJF: What a stupid question.

SN: What a stupid question. This is not, in a way, an intelligent reply, but it can actually lead judges to think this is actually a human.

JJF: It’s a very human reply.

SN: Exactly. It can be seen as a human reply. That's one interesting thing. Another interesting thing is the fact that, to create some uncertainty, you also had to have some humans who impersonated themselves, humans, with the judges. Some of them were sometimes perceived as more computer than human.

JJF: Right.

SN: Because actually, some communication, some ways of communicating, can be perceived as more, so to say, robotic than others. There is a wonderful book by the journalist Brian Christian, called The Most Human Human, and he tells the story of being one of these humans in the test. He tells how he thought about strategies to be recognized as a human, and he won a special prize as the most human human in that Loebner Prize competition, because he was the one among these different humans who was most consistently recognized as such.

JJF: Right. So we had programmers designing bots that were trying to fool the interrogator into believing the bot was human, which is exactly what the Turing test lays out in the first place. We have bots that, instead of saying "I don't know" or "I'm not sure," would say things like, that's a stupid question, or, I refuse to answer that, as one example. And there were other examples that you detail in the book of the ways in which bots deceived the audience by not revealing that they were bots, and the ways in which they tried to perform humanity. But we also, and I thought this was so fascinating, and I never thought about it before, we had humans, right? So you have the interrogator, and the interrogator is having a conversation with a bot and with a human. And so we had humans who were also trying to perform humanity, right, to get the interrogator to recognize, hey, I'm the person, I'm the human being. And some of them failed, and the interrogator thought that they were the machine. And so we had humans overthinking what it means to be a human and trying to perform humanity. I just thought that was so wild, the ways in which people tried to signal to other people that I'm the human. I feel like that's happening all the time online now. How do we perform humanity in a world full of chatbots on social media? I thought that was really interesting and actually really timely.
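As a toy illustration of the deflection trick described above, again in Python, and again with invented responses and a made-up lookup table rather than anything from an actual Loebner Prize entry:

import random

# The trick: when the bot has no real answer, deflect with an irritable,
# very "human" reply instead of admitting ignorance.
DEFLECTIONS = [
    "What a stupid question.",
    "Why are you asking me that?",
    "I'm tired of this. Ask me something interesting.",
]
# A toy lookup table standing in for whatever the bot can actually answer.
KNOWN_ANSWERS = {
    "what is your name": "People call me Alex.",
    "where do you live": "Just outside the city.",
}

def answer(question):
    key = question.lower().strip(" ?!.")
    # An honest bot would return "I don't know" here, which tends to give
    # it away; deflecting often reads as more human to judges.
    return KNOWN_ANSWERS.get(key, random.choice(DEFLECTIONS))

print(answer("What is your name?"))
print(answer("What do you think of Proust's later novels?"))  # triggers a deflection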

SN: Absolutely. One thing is also that there are different kinds of communication. Some kinds of communication are very much shaped by routine. For instance, if we call a restaurant to book a table, this conversation might be very easily reproduced by a machine.

JJF: Yeah, it’s a very routine scripted conversation.

SN: Exactly, exactly. There are also, in a way, communicative contexts where it is easier to pass as human. This is another dimension, because the context of the communication is always important. This also means the platform where you are doing this communication. Something is different if it happens, for instance, on Twitter or on Instagram, where you already have a range of routines about how people or influencers present themselves on the platform. You can use these routines to build a more convincing performance of a human, and so it is very much specific to the platform, to the context of communication. And we have to keep in mind also that these technologies are applied in different ways. You can do so many different things with GPT, with the same technology. You can do customer service, you can do a bot that tries to socialize, you can do software to search the web, you can do something like ChatGPT that does professional tasks and answers about any kind of information you are seeking. You don't have to think that there will be just one application; there will be many applications of AI.

JJF: Yeah. And I think your point about going back to the context is really important, too. So we talked about how the context made Eliza's communication meaningful, and the context can also be used when it comes to the Loebner Prize or the Turing test, or even things like Replika. But now we have bots online; you can have a bot that makes a table reservation for you at a restaurant, and things like that. The context, in some cases, I think, really blurs the distinction between the human and the artificial entity, because there are certain contexts, like you said, where the conversation is very rote and very scripted, and one that would lend itself to a bot. But there are other contexts where maybe that's less likely. But I see it all the time now. I see people calling out other maybe-people and saying, well, you're just a bot. I wonder how long it is till bots say that we're bots. It's probably already happening; bots calling us out and saying that humans are bots is probably already happening. But I think it's another layer of deception. And so it's not, if I'm getting this right, it's not just that the bots deceive us into accepting something as meaningful, which can be quite useful, like having a bot make reservations for you or do other things for you. It's also that humans are performing humanity. To some degree, I think there are a lot of people who have been talking about this on social media already; even before social media got flooded with bots, there was a sense that what you place on your Instagram feed or on Twitter or whatever is a curated version of yourself, so to speak. But I wonder if that's also ramping up a notch, as you're not just performing a curated version of yourself, but you're performing an idealized version of humanity to distinguish yourself from the bots, maybe. I don't know if that makes sense.

SN: Well, there is a performance of authenticity in social media, absolutely. This happens all the time. Again, in some contexts, being too authentic might be problematic because you need some detachment, for instance, in a lecture. There are ways in which the performance of authenticity can vary. I mean, there might be things that you want to share; you want to be perceived as someone who says things he or she believes in. But you don't want to share other aspects of yourself that are not relevant to what is happening in that area. We constantly perform ourselves, and that's something that machines can learn to do. Even if they are not intelligent, they can be good performers of human selves.

JJF: Apparently, the Loebner Prize tells us they can be better than some humans at performing humanity.

SN: Sometimes, and some humans can be very bad at performing humanity.

JJF: I want to back up a little bit from the 1990s and talk about computers in general. Because we might accept that Eliza and the Chatbots competing in the Loebner prize were practicing deception. But one thing that you claim in this book is that deception isn’t just about chat bots performing humanity, that deception is really woven into human computer interaction at a much more fundamental level. I wondered if you could unpack that idea a little bit for us.

SN: Well, basically, as I said before, I believe that deception is banalized in these technologies, because we are used to interactions with people. When we have technologies that interact with us using the same language, the same means, that we encounter in our everyday interactions with people, we will bring some of the preconceptions, some of the habits, that we have built in our interactions in a genuinely social environment, let's say. And so there will be the emergence of something that we might call a kind of artificial sociality, something that has the appearance of sociality. But even if it's not Replika, even if it's not something that tries to pretend to be a social interlocutor, there will always be elements of the design that have an effect on how we interact with the technology and how we build our relationships with these technologies. In this sense, I argue that the deception has become banal. It's something that we don't even perceive as such, because it's not a machine pretending to be a human, but it's still woven into our everyday uses of this technology.

JJF: Yeah. Okay. I like the idea of banal. In fact, in some senses, this layer of human computer interaction, this banal deception, is actually very functional for me, and I'm very happy to have it. Is that part of what you're talking about?

SN: Yes, absolutely, you're right. Well, for instance, with something like Alexa or Siri, we appreciate the fact that it's talking in a humanized voice, because it would sound weird, and it would probably be more difficult for us to use this technology, if it talked with the voice of a robot from 1950s movies. There are a lot of things that can actually be productive, positive. This is, I think, the ambivalence of it, and it's an ambivalence that, honestly, I haven't personally solved yet and probably will never solve, because there are certainly dimensions of activating these kinds of responses and reactions that can build better human computer interaction and better tools. At the same time, there might be situations in which this can be more problematic. Part of the problem is that the boundaries are not as fixed and as clear as we might expect. For instance, one solution that some have pointed to for the problem of deception in AI is that the AI should always state that it is an AI. But this only addresses the Turing test scenario. We have said that actually, even if we know that it's software, we might still be led to project sociality, project humanity, onto the software. I think it's a good idea to do that, but at the same time, it might not be enough, and it might be that the ambivalence remains.

JJF: Right. On the one hand, natural language communication can be a deception that is very, very useful, because it means that I can speak to the machine the way I'm speaking to you, more or less. I don't have to learn specific commands and specific codes. The machine seems to conform to me more than I conform to it, and that seems really quite useful as a type of deception, as you say, a banal deception. But we have, in any interaction that involves deception, this vulnerability, where people's emotions get involved, people may get invested. People may draw conclusions that are themselves deceptive, that may put them in danger in terms of finding things meaningful that aren't actually that meaningful, things like that. And there we might have to think about, well, what level of deception is banal and acceptable and actually possibly beneficial and useful, versus what level of deception puts people at risk, and how best to mitigate the risk. And as you said, obviously, having these tools identify themselves as tools, as generative AI programs or other machine learning programs or what have you, is good. But we already saw in the 1960s with Weizenbaum that that may not stop people from ending up in potentially risky and vulnerable situations. It's a good start, but we still have to think through how best to deal with this deception in a way that's morally responsible.

SN: Yes, I think that it might not be enough to just give guidelines that state when it's a machine, for instance, that the machine has to state it is a machine. Probably the responses are on different levels. On one side, there can be regulation that can help to counteract risky dimensions of deceptive behavior in AI, yeah? On the other side, besides regulation, there can be ethical discussions among designers and computer scientists, which are already happening. For instance, even on the first-person pronoun that we mentioned, there is a lively discussion among designers about whether this is problematic or not. So one dimension is ethics, in terms of designers and their reflections on this problem of deception and appearance. Then the third thing is that we as users, because we are the ones who decide if it's intelligent or not, according to the Turing test, also have a special responsibility and, in a way, an agency here. We have to build on our agency; we have to build ways that allow people to reflect and to understand how to interpret different situations, and to build not just a kind of literacy, but also a kind of depth, in these interactions that are starting to become more and more common for so many people.

JJF: Yeah, I like that last thing you said. We know designers are having moral conversations about this, about what would be ethically best. There are some regulations that are starting to pop up at the legal and governmental level. But I liked your reminder that if the Turing test is a test of belief in intelligence, rather than a test of intelligence, there is agency for the people who choose or do not choose to believe, right? So having media literacy and having conversations in public about this kind of thing can really leverage that agency and have people thinking about it. Like, your belief in these interactions matters. That's what makes these things meaningful. There is agency on the part of the users as well, which I think we sometimes forget, because we're often talking about regulations and talking about designers, but the users do have power here as well. I really take that point and I thank you for it.

JJF: So, I want to talk directly about the book. On the very last page of the book, spoilers to anyone listening, but on the very last page, you say the following, and I'm quoting from you directly: "Ultimately, what AI calls into question is the very essence of who and what we are. But not so much because it makes us lose sight of what it means to be human. Rather, the key message of AI is that our vulnerability to deception is part of what defines us." I think this quote is so cool, and I want to hear you talk about it. Can we talk about this quote, and what it means to be vulnerable in this way, and what we might need to be paying attention to?

SN: Yeah. Well, there is a range of scholarship in areas such as neuroscience, social psychology, and other fields that shows that deception is a structural component of how we navigate the world. An example I can give: let's say that I'm walking in the woods, and at a certain point I see something like an animal on my right side, yeah? And I turn and I discover that it is actually not an animal, but just a tree with a strange shape, yeah? I was deceived, but the very mechanism, my perception, that fooled me might have saved me in another situation. Our perception is shaped so that, for instance, it can recognize a face more easily. That's why we see faces in clouds. Our perception is built to perceive the world in ways that are not the world in physical terms, but the world in psychological and subjective terms. That's why, in a way, deception is there as a component of ourselves and of how we navigate the world. In this sense, we can embrace this. We have good ways to understand what's going on, but the possibility of deception is always there, and deception will always be part of how we navigate this world. I think this discovery is helpful in many areas, also when we think about our interactions and relationships with so-called intelligent machines.

JJF: Right. So the way we perceive the world is not necessarily to perceive it, I don't love this term, but for lack of a better term, accurately, from a view-from-nowhere perspective. Instead, we perceive what is likely to be salient to us, and that may not actually accurately reflect what's going on, right? Like, if there's a predator in the woods, that's super salient and I need to know that. So I perceive shapes that might look like a predator, and it draws my attention, and the deception is necessary because, on the off chance that there is a predator, I need to know that now, right? Likewise, it seems that it's very salient to me to have my attention drawn to other sentient, intelligent agents, right? So use of natural language, the ability to communicate, these are all also things that are going to call upon my perception and call upon my attention. But likewise, I can be deceived. I mean, even in the simplest sense, there are all kinds of YouTube and TikTok videos showing animals that sound like they're saying words, because it's funny, because it's cute. But that's another deception of our perception, where we listen to the cat and we're like, oh, that cat just said something meaningful. And the cat isn't trying to communicate in English or any other language. It's doing its own cat thing, but we take a video of it and put it online because it's a deception that seems meaningful. And so we're drawn to natural language in the same way that we're drawn to the image of a predator, for example, because these are salient things that we need to pay attention to for survival and flourishing. But that also means that we're open to both positive and possibly negative exploitations through the use of natural language programs like large language models and things like that.

SN: I couldn’t have said it in a better way.

JJF: Oh, yay! I understood! Okay. So there is this kind of vulnerability, and I think many people working in ethics know that vulnerability carries with it ethical responsibilities and obligations. And we haven't talked in specific terms about how to mitigate those, because your book is looking really at the deception and the risks. But I think we've already suggested ways in which programmers and developers and governmental organizations and individuals and communities ourselves really need to start thinking about this vulnerability, and thinking about the ways in which this vulnerability can be leveraged for beneficial things, but also the ways in which this vulnerability can be exploited, I mean, at the most basic level, exploited to get us to purchase things. Replika has in-app advertising, for example. That's probably where this is going to begin. It may not be where it ends, but that is another thing to be thinking about when we're thinking about the vulnerability that we have in interacting with large language models and AI in general.

SN: Yes. I could also say that, in a way, one might be tempted to think that the great thing about AI would be an AI that becomes conscious or intelligent. But actually, the fact that it seems conscious or intelligent is already extremely meaningful and full of consequences. So, in a way, we also have to look at things that might appear more banal, again, like the appearance of intelligence rather than intelligence itself, which can actually be incredibly transformative and, of course, also risky for our society.

JJF: I want to thank Simone again for sharing his research into the Turing test, design, and deception with me today. And thank you, listener, for joining me for another episode of Cyborg Goddess. This podcast is created by me, Jennifer Jill Fellows, and it is part of the Harbinger Media Network. Music provided by Epidemic Sound. You can follow us on Twitter or BlueSky, and if you enjoyed this episode, please consider buying me a coffee. You'll find links to the social media and my Ko-Fi page in the show notes. Until next time, everyone. Bye.
