Cyborg Goddess

A Feminist Tech Podcast

Transcript for Season 3 Episode 4

Jennifer Jill Fellows: The Internet was promised as a great democratizer of knowledge, where we could share information with each other and uplift humanity as a whole. But things haven’t exactly turned out that way, have they? Hate speech spreads quickly, radical violent groups seem to proliferate online, spreading their ideologies far and wide. Frustratingly, these groups are often aided in their recruitment efforts by algorithms that seem all too happy to recommend material to users if it will keep them engaged, regardless of where that engagement ultimately leads. But don’t worry. Tech corporations are quick to promise that all of this can be fixed with better content moderation algorithms. Sadly, my guest today doesn’t think that this is so much a promise as it is propaganda.

JJF: Hey, everybody. Welcome to Cyborg Goddess, a feminist tech studies podcast. I’m your host, Jennifer Jill Fellows. And on this episode, I’ve invited Dr. Michael Barnes on the show to talk about his research into online radicalization, content moderation, and the promise, or propaganda, of a quick tech fix to hate speech and objectionable content online.

JJF: Mike Barnes is a philosopher working with the Humanizing Machine Intelligence Project and the Machine Intelligence and Normative Theory Lab at the Australian National University. His philosophical interests are in ethics, social and political philosophy, philosophy of language, and philosophy of technology. Mike’s current research focuses on online speech and AI ethics. Today, he’s here to talk to me about online radicalization and social media content moderation.

JJF: Hi Mike, welcome to the show.

Mike Barnes: Thank you for having me.

JJF: At this point, I want to pause and ground us by reminding us all that digital space is literally physical space. We may hear metaphors about the cloud and the Metaverse, but all of our online experiences, including the experience of listening to this podcast, rely on physical infrastructure that is extracted from and occupies the land. I acknowledge that as I record Cyborg Goddess today, I am located on the unceded territory of the Coast Salish people of the Qiqéyt Nation, one of the smallest nations in what Canadian settlers refer to as British Columbia and the only one without a dedicated land base. Can you tell me where you’re located today, Mike?

MB: Yes. I’m located in Canberra, which is the capital of Australia, and we always begin all our meetings here with an acknowledgment and celebration of the first Australians on whose traditional lands we meet, and we always pay respects to the elders past and present. In Canberra, that’s specifically the Ngunnawal people and the Ngambri people.

JJF: A little bit of background, I want to know how it was that you became interested in philosophy as a discipline. How did you come to study philosophy?

MB: That requires retracing a lot of steps and is probably a complicated answer that I probably don’t know the real story to. But I think the short version is that I was always a fairly curious child and a curious person and read a lot. Like many philosophers, I didn’t major in philosophy originally; I first went to university for essentially a journalism program and also kind of an English program. So it was through literature and journalism that I first got attracted to philosophical ideas. Literature provided the idea that there were complicated ideas that were worth expressing, and journalism was partly about the idea that there are important stories out there that need to be told. And when I started taking philosophy classes, I kind of saw the convergence of those two ideas: that we could really take the time to express difficult ideas, and, you know, philosophy has a lot of autonomy in terms of what topics we get to focus on. I found that attractive in the sense that people, through different routes, were focusing on different problems, and as a philosopher, I got to focus on the problems that I find interesting and hope to communicate those to others as well.

JJF: Yeah, you’re right. Well, at least in my experience, I think you’re right that a lot of people don’t start out studying philosophy in post-secondary, which I think is in part because there are a lot of places that don’t introduce you to philosophy in high school or in grade school, for example. I myself didn’t really know what philosophy was when I signed up for my first philosophy class. I also think that, how do I want to say this?, I feel like there is this connection between philosophy and literature. I don’t know if all philosophers are comfortable with that connection. Like, I’m thinking of Simone de Beauvoir and Bertrand Russell, who both wrote literature, but were like, no, no, that’s not philosophy. That’s something else.

MB: I do think my first exposure to philosophy was in a high school literature class, and it was through existentialism first. That was definitely what attracted me to philosophy at the start: the existentialist novels and plays, and then existentialist philosophy more broadly. And that’s very far from the work I do now, but I think that connection between those ideas and the way of expressing these things through art does carry over to more analytic traditions as well.

JJF: Yeah.

MB: That’s something I certainly hold onto.

JJF: Yeah, I think it does. That happened to me, too. I’m just remembering my grade 12 English class had actually a unit on existentialism that you could sign up for, and I signed up for it. This maybe doesn’t paint me in the best light, but I signed up for it because the novel, there was a novel for every unit, and you got to pick your unit. I signed up for existentialism because the novel was The Metamorphosis by Kafka and it was short. I was like, Okay, I’m going to do this because it’s short and my teacher just laughed at me and was like, good luck.

MB: I’m also Canadian, so I took grade 12 English, and it was specifically a literature class, and it was Waiting for Godot in my case, which

JJF: Yeah.

MB: I got hooked on it in a certain sense. I didn’t get to pick it, but I also appreciated its brevity; that was one thing I liked about it.

JJF: Okay. That’s awesome. One thing that you talked about that I want to circle back to is the idea that in philosophy, you maybe have quite a bit of freedom in terms of what you want to philosophically research. And with that in mind, I want to ask how you became interested in researching online extremism or maybe extremism in general. How did this happen?

MB: I think it was in my PhD studies, when I focused on the topic of hate speech, that I really became interested in the broader phenomenon and related phenomena. Hate speech and hate crimes are distinct concepts and distinct issues, each with interesting ways of being analyzed, but they often just go together in practice: hate crimes are often preceded by hate speech, and hate crimes are sometimes motivated by hate propaganda and the like. When I was doing my work on hate speech in my dissertation and my PhD, I really did want it to be grounded in real-world experiences, in how these things impact communities as they exist today, and not treated as an abstract topic. So I was kind of on the lookout for stories about hate crimes and hate speech, which are not too hard to find. They’re always occurring. And a number of things stood out in the mid-2010s and late 2010s when I was doing this. Partly, you had some violent murderers, essentially, who would refer to online platforms as being where they were first attracted to certain violent ideologies, or where they were finally pushed over the edge. One example is Alexandre Bissonnette in Quebec City, who shot many people at a mosque. He referred to Justin Trudeau’s tweet welcoming refugees to Canada as being the final straw that pushed him over the edge. I just thought this connection between online platforms, which, as we’ll talk about later, are taking over more and more aspects of our lives, and extremism and hate crimes and hate speech was underexploited territory for philosophy especially, and one that I really wanted to dig into.

JJF: Yeah, I think with certain phenomena, it sometimes takes academia in general a long time, so maybe philosophy isn’t unique here, to start thinking about a phenomenon when it starts happening. This certainly can happen with technology, I think, because it moves so quickly. Academic publishing, I’ve said this on the podcast before, is quite slow. And the research is slow, and it needs to be slow, and that means that sometimes we take a while to respond. I think it’s not surprising to me that you identified this area, started thinking about the relationship between, in this case, social media and hate speech and hate crimes, and found that this was an under-theorized area of philosophy, especially in the 2010s. Social media was fairly new then, right?

MB: Not a surprise, and I don’t blame other philosophers for not having written about this in the few years that preceded it. But as you said, those of us who are working on more applied topics, topics that are relevant to real-world experience now, have to grapple with the connection between our experiences and evolving technologies. And so, while we can certainly appeal to and learn a lot from historical philosophy, and I know I do, there’s the extra question of what has changed in the intervening years and how much those changes matter. And you know, that question itself is really interesting for us as philosophers, and it’s one that I was certainly drawn to in this topic specifically.

JJF: Okay. We’ve talked a little bit about how you got interested in hate speech and hate crimes. I think that’s related to the topic you’re here to talk to us about today: online extremism. I think most people listening to the podcast are going to be aware of what online extremism is. But in case anyone listening isn’t quite aware, I was wondering if you could give us a bit of a touchstone here. What is it that you’re researching? What is online extremism?

MB: I think it is a bit difficult. Extremism itself is not really a term I particularly love, but to use some specific examples: online extremism, first off, is just the online manifestation or online development of certain offline, or pre-online, extremist ideologies, activities, behaviors, beliefs, however you want to put it. ‘Extremist’ is the modifier, and it often points to these more outlier political factions and ideologies. I don’t think there’s really a great way of cordoning off and defining the extremists from the non-extremists, but I do think one potential avenue for doing so is through the advocating of violent means for achieving political ends. You have various political ideologies that are extremist because they involve the killing or violent removal of certain populations. One quick example is white supremacy, which is the extremist belief that advocates for, often, the violent removal of non-white populations from certain regions of the world. There are many different types of violent extremist ideologies: certain factions of extremist jihadist Islamism, for instance; I also focus on the case of Myanmar, where the Rohingya people have been persecuted, and that’s a Buddhist nationalist extremism. So it really can take many different guises. What I think unites them is advocating for violence as an appropriate response to a certain perceived threat. But as your question alludes to, these predate the Internet and are not defined by Internet or online communities. And when we talk about online extremism, we’re talking about how online technologies change or mediate these issues: what’s perhaps new, what’s old, and how can we address it in different ways?

JJF: Yeah. It sounds like we’ve got a working definition, which is that we’re looking at populations that are on the extreme end of the political spectrum, and we’re also looking at these points of view being coupled with calls for extreme violence, or calls for violence in general, as a way to rectify a perceived wrong or perceived problem. That’s really helpful. And I think it’s really important to point out that while we are going to talk a lot about social media platforms and about the Internet in general and its role in this, obviously, they didn’t create this. Extremism has existed for a really long time in human societies; we could do a whole history here. We’re not doing that, because we’re going to focus on the platforms, but we can’t blame the platforms for the sole existence of this, which is something I think you’re very clear about in your work. It isn’t anything new. But one thing that I thought was interesting in looking at your work is that you point out that there’s something unique about what happens when we look at extremism specifically online, and that the route that people travel in order to become radicalized or end up in extremist groups is perhaps a little bit different when navigating this digital space than it was before. You highlight two routes in your research: the social route to online extremism and the individual route to online extremism. Can we talk about what these are and what role the platforms might have in this kind of manifestation of extremism online?

MB: Yes, definitely. I think one thing to say in advance is that the claim that platforms, Facebook, YouTube, other social media organizations, are responsible in some sense, or are partly to blame, for the rise in extremism is one that has been in the ether for a while, since the early 2010s when these things really became popular. In identifying these two different routes and how they come together, what I’m trying to do is clarify what we mean when we talk about the role of platforms, and sometimes the role of algorithms specifically, in leading people to extremist beliefs, groups, or ideologies. And yeah, I identify two broad ways that this can happen. I call one the individual route and one the social route. I think it’s helpful to distinguish these two, partly because the role of the platform is different in each, and the role of the individual before they reach the platform is different in each. The social one is perhaps the most obvious, and it follows from the mission and the technology of social media, which is to bring people together. That was, you know, Facebook’s model for a while: connecting the world, or some variety of that. And while it’s great to connect old friends and business acquaintances and fellow philosophers and the like, you’re also going to connect people who hold violent extremist beliefs. So that’s just one fact of the way that social media connects individuals: it’s going to connect individuals who perhaps hold preexisting radical, extremist, violent beliefs. And when these people form a community, they’re in some sense more likely to act on those beliefs, make plans, and solidify their beliefs. And I think it’s just an interesting history to also note that this is not something that started with social media. White supremacist groups were early adopters of the Internet in setting up forums, Stormfront is a notable one, and those forums spread by word of mouth and served as a place to bring people together and keep them in the group. Social media offered new opportunities for outreach, to be recommended as content to other users, because of how Facebook eventually came to work. One thing I’d point out when I talk about the social route is that these groups are not only advocating for themselves. You also have Facebook’s algorithms that are in some sense advocating for them and pushing them in front of other people. So I’m sure you’re familiar, if you’ve been on Facebook: it says, groups you should join, or follow this page, or all these other little pop-ups or messages that tell you that here’s something you might be interested in. And I find the way that Facebook’s algorithms push certain groups to be one of the most important and most significant avenues through which it basically promotes extremist groups. Facebook did internal research, a study in Germany, which found that 64% of extremist group joins in Germany were because of their algorithms, because people were following their recommendation to join a particular group. So the impact is really significant. These algorithms do encourage people to join groups, and those groups sometimes are unified around what we were talking about as violent ideologies. That’s how it brings people together, and that’s what I call the social route.
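
To make the group-recommendation mechanism Mike describes a little more concrete, here is a minimal, purely hypothetical sketch of recommendation based on co-membership. The scoring rule, the group names, and the data are illustrative assumptions only, not Facebook’s actual system.

```python
# Hypothetical sketch: recommending groups by co-membership with a user's existing groups.
# The heuristic, names, and data are invented for illustration; no real platform is this simple.
from collections import Counter

def recommend_groups(user_groups, all_memberships, top_n=3):
    """Suggest groups that most often co-occur with the groups this user already joined."""
    co_membership = Counter()
    for groups in all_memberships.values():
        if user_groups & groups:                  # this person shares a group with our user
            for g in groups - user_groups:        # tally the groups our user hasn't joined yet
                co_membership[g] += 1
    return [group for group, _ in co_membership.most_common(top_n)]

all_memberships = {
    "user_a": {"cat_videos", "local_news", "patriots_for_x"},   # "patriots_for_x" is a stand-in
    "user_b": {"cat_videos", "patriots_for_x"},                 # for an innocuous-sounding group
    "user_c": {"gardening", "local_news"},
}

# Someone who only joined a cat-video group gets nudged toward whatever its members cluster in.
print(recommend_groups({"cat_videos"}, all_memberships))
```

The point of the sketch is only that the same co-membership signal that surfaces hobby groups will surface fringe groups just as readily.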

JJF: Okay. I think it’s so interesting that some of these extremist groups were early adopters of the Internet, setting up their own discussion boards. But if you have your own discussion board, people still have to find your discussion board, and they still have to sign up. But if you’re on Facebook, say, where tons of people, tons of millennials and boomers, already are, not so much Gen Z apparently, but tons of millennials and boomers. You’re on Facebook already, you’re signed up, and then the group can find you, and you’re already there. Maybe you have an acquaintance who’s already in this group. You get suggested as a friend to that acquaintance on Facebook, you make friends with them on Facebook, they are friends with all these other people. Now you have friend suggestions to these other people who are all part of the group, and then you might get suggested to the group. Or it might just be that you’re following and engaging with certain content, so the algorithm suggests that you join this extremist group. So the very same mechanism that brought me the joy of joining this, like, group that posts silly cat videos is the exact same mechanism that is suggesting that people join online extremist forums. Then once you’re in this social space, if I understand it, it can reinforce your already extremist beliefs and perhaps even push them to a more extreme place, because you’re surrounded by people who share your beliefs, which has a reinforcing mechanism, which psychologists have already told us about, but now it’s happening online. Is that right?

MB: Exactly. Yeah, I think so. There’s a whole side of this where we can talk about psychology and social epistemology, about how groups can promote certain forms of belief adoption, maybe some forms of polarization, or push people to take up more extreme positions in political groups. That’s the social epistemology and psychology side of this. But the one that I’ve been focusing on really is, yeah, the more technological one, the one you started with in that story, in terms of how people end up in these groups in the first place. And that is, just as I said before, the mission and the tool that Facebook provides. And as you said, it’s the exact same thing that gives you the cat videos and the philosophers with cats and all these fun groups that exist there. It can push people to these rather innocent-sounding groups, you know, patriots for whatever, or guardians of this. These are things that are not necessarily saying that they are racist, xenophobic, violent groups, but perhaps that is what is occurring in those groups. And the thing I want to add really quickly is that these groups are by default, not always by default, Facebook keeps changing the settings, but they’re private. And what occurs within the groups is sometimes hidden from Facebook themselves. Unlike the messages in the main feed, which are subject to their moderation, what occurs within Facebook groups is only moderated when the people within the group flag it for violating certain rules. So they’re kind of hidden from Facebook’s own eyes, which serves Facebook’s own interests. They’re very happy to have these groups doing their own things hidden from view, in a sense where they can’t really do anything, their hands are tied. And I think that’s really interesting when combined with what we were just talking about, in that there is a sense in which Facebook is pushing people towards groups that may be aligned with something we would normally call violent extremism.

JJF: Right.

MB: And the quick little thing I’ll add is that the cost of joining these groups is so low. Something comes up and says, do you want to join this group? You’re like, yeah, sure, I’ll see what it is. You may not go there thinking that this is where you’re going to rededicate your life to certain conspiracy theories or odd views. But just because of how people interact with Facebook, which is, as we said, a very centralized place on the Internet, it can lead to these dynamics that just naturally lead one through a series of small steps towards the hands of people who are often preying on vulnerable individuals at the end of the day.

JJF: So we’ve been picking on Facebook a lot. Let’s pick on some other platform because it’s not just Facebook, right?

MB: Definitely.

JJF: You have mentioned YouTube and the algorithm that’s used by YouTube, and I would imagine, though I want to check, that TikTok is somewhat similar. But can we talk about how these might also lead people in certain directions?

MB: Well, this is a good opportunity to talk about the individual.

JJF: Yeah, this is the individual route.

MB: At the end of the day, they’re very connected. They’re separable for the sake of clarity about what we’re talking about, but platforms function by hitting us on multiple fronts. Facebook, for example, is not only recommending groups, it’s recommending content, and that content is similarly optimized for engagement, which we’ll talk about in a second. So it’s these two things that do come together. But just to clarify what I mean by the individual route: it’s the idea that you have not the platform bringing radicalized individuals together, but the platform playing the role of actually radicalizing a particular person by, in colloquial terms, putting them down a rabbit hole. Facebook may be the target of the social route in some sense; YouTube is the poster boy, whatever, of this one, where it really became hounded by journalists and academics for the idea that it was pushing more and more extreme content to users through its recommendation engines. And it’s worth noting that, yeah, videos autoplay on YouTube. So you watch one video, it’s going to suggest the next one, and people were noticing, journalists, academics, regular folks, that it was sometimes pushing really odd content after you’d, say, just watched one video on one topic. It was suddenly leading you down these various conspiracy theories and other things that we might put at the more extreme end of the political spectrum. And that idea was kind of backed up by some research that did look into this and suggested that YouTube was playing a sort of radicalizing role, in the sense that it would lead people to more extreme topics. Partly, again, that’s probably not because they were intentionally doing so, but just because, like Facebook with its own newsfeed, it’s optimizing for watch time. And it’s pushing people towards content that they’ve shown some interest in and that they might have more interest in: other videos that are also engaging, and videos watched by people of a similar behavioral makeup. The idea that YouTube could take an individual who previously was not themselves an extremist of any sort and, after a few hours, a few weeks, a few months, turn them into an ideologue of a certain sort was one that came up in various stories and reporting. And I don’t necessarily want to say that that’s not the case anymore. It’s hard to tell. All these algorithms, all these platforms are constantly changing, and they respond to accusations of this sort and mix up the way that they do things. But I do think that’s become less the focus of some of the research on online extremism right now.
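
As a rough illustration of the engagement-optimized recommendation Mike describes, here is a minimal, hypothetical sketch of next-video ranking driven purely by predicted watch time. The candidate videos, predictions, and function names are invented for illustration and are not YouTube’s actual recommender.

```python
# Hypothetical sketch: choosing the autoplay video purely by predicted watch time.
# Everything here is invented for illustration; real recommenders are far more complex.

def rank_by_predicted_watch_time(candidates, predicted_seconds):
    """Order candidate videos by how long the model expects this user to watch them."""
    return sorted(candidates, key=lambda video: predicted_seconds[video], reverse=True)

candidates = ["dating_advice_2", "fitness_basics", "outrage_rant"]
predicted_seconds = {
    "dating_advice_2": 240,
    "fitness_basics": 90,
    "outrage_rant": 600,   # provocative content often holds attention longest
}

# The most "engaging" candidate wins the autoplay slot, whatever its content happens to be.
print(rank_by_predicted_watch_time(candidates, predicted_seconds)[0])
```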

JJF: Okay.

MB: But it still is the case that things like TikTok, which you’re alluding to, might do something similar, in the sense that these are more algorithmically driven platforms that aren’t giving you content from your social network, like Facebook generally is. They’re giving you algorithmically curated feeds based on content that they just think that you, or someone like you, might want to see, and it’s not really subject to the confines of the people you’re following or the interests you’ve shown; they’re just going to throw a lot of things at you and see what sticks, in some sense. But because TikTok tends to be shorter videos and offer more of a variety of content, and also because it’s rather hard to do research on, since each individual gets a different feed, it’s more challenging to do the kind of research that people did and were able to do with YouTube, and to back up some of these claims about whether they’re currently pushing people in certain directions.

JJF: So it sounds like perhaps some of the journalist and academic interventions with YouTube may have actually helped. Maybe, in this case? I want to come back to why it’s a maybe. But first of all, so the individual route through YouTube is perhaps not as big a problem anymore, but at least when it was a concern, you say, for example, in your paper that somebody might watch a few videos on dating advice, and suddenly their feed might be inundated with pick-up artists and men going their own way and men against feminism and all that stuff. Quite quickly, you might start from something that seems fairly innocuous, I need help dating, and end up in a place that could potentially lead to online extremism for some users. We don’t necessarily know how many.

MB: Mm hmm. Exactly. The incel community is one that also first drew me to this topic. I was living in Toronto, and there was that van attack on Yonge Street that was, you know, claimed to be a part of the incel army, and involuntary celibate ideology is one that was both kind of born and festered online, and certainly is tied to these various other kinds of communities that are not as extreme, that are people who are looking for genuine advice as they try to understand the sex and gender that they’re attracted to. Then, yeah, you read these stories of people who wound their way into the subreddits: first looked up this, was led here by that, now I’m fully committed. There’s a whole world of YouTube and other things like this that know that there are people out there waiting to be scooped up, essentially. And for a while, yeah, YouTube’s algorithms were catering to that function, in some sense.

JJF: But maybe that’s not so much the case now, possibly because of some of this academic and journalistic critique. Although I did notice that you couched a lot of that, and I’m wondering how much of our maybe and possibly is because these algorithms are so proprietary that it’s sometimes hard to know what changes have been made. That sounds like it’s maybe even more the case with TikTok, where all the feeds are individualized. It’s hard to know how a user’s feed is being generated or what it’s optimizing for, other than just to keep our attention. We know they’re doing that.

MB: Mm hmm. No, the lack of transparency around these platforms, the algorithms, and not just the algorithms but their data in general, is one of the big frustrating aspects of trying to do work in this area. One possibility is that YouTube is no longer the focus of a lot of accusations there because they’ve improved and they have stepped up moderation practices. It seems like at least they’ve acknowledged the problem, and they do have a variety of tools they’ve put in place that seem to take the problem seriously. You know, one thing you can note is that now there’s a whole rival video platform, Rumble, operated out of Canada, that kind of claims to be the free speech YouTube. That’s because YouTube did clamp down on a lot of these people. So there’s something there, and whether that actually solved the problem or just displaced it is a separate question that I can’t address right now. But the lack of transparency around these things is frustrating, and it’s one thing that the European Union is trying to address through the Digital Services Act, and you might see some changes in the law in the US that require more transparency. I do think that would be the first step, in some sense, to more fully understanding these problems, because they’re clearly significant given the size of YouTube you alluded to. It’s billions of people using it, and you only need a tiny, tiny, tiny portion of that community to be one that is socially harmful for the effects to be of great concern. And right now we just don’t understand the problem as well as we should.

JJF: Even if the vast majority of people see a video that is more radicalizing and just laugh or turn it off or whatever. If only a few people pause and take it seriously, and then even a smaller group actually click through and become radicalized, that’s, because of the number of us that are on there, just a huge number of people.

MB: Mm hmm. And that’s one thing that I emphasize: the average experience for most people is not going to be one where they’re led down a rabbit hole, or a rabbit hole of the particular sort that leads them to commit some violence. But the fact that we can trace these routes and identify some of the fault lines lying with the algorithms and the platforms is significant, in that it suggests that these are either areas of intervention, or areas where a party that would claim that they are not responsible does have some responsibility, because they have these social costs coming from their technologies. And that is, I think, alluding to what we talked about at the beginning, something that’s new in this area of, you know, like I said, old problems of extremism, violence, things like that.

JJF: Yeah.

JJF: So we have this established now that YouTube has kind of acknowledged the problem and is trying to take it seriously. We’ve sort of had some commitments from Facebook about trying to take some of this stuff more seriously. I want to talk about this history a little bit because one thing you said is that tech companies largely tried to ignore the problem or deny that it was a problem for them specifically for quite a while. That does seem to be changing now. For example, they might have said something in the past, like, Well, I’m just providing the platform. I can’t control what people use it for, or, and this is something else you talked about that I wanted to circle back to, they say that they’re more mainstream and that the issues are really coming from more fringe Internet groups, more fringe chat groups and things like that. So, can we talk a little bit about the history of this rhetoric, why your view is that it’s incorrect, how it went down, that kind of stuff.

MB: Yeah, and there are a number of factors here that are worth bringing out. One thing to pick up on in what you said is that this is probably most visible, or sometimes becomes most talked about, when it’s these more fringe areas of the Internet, 4chan being an area that is often notoriously associated with mass killings, unfortunately. You’ve had all these instances where these killers would post their manifestos on 4chan; it would be in the news, and people would learn what it is and learn how despicable and horrible it really, really is. Questions like, how is this allowed? There would be various public pressure, sometimes from infrastructure providers, to take it down, and it would shift around between various geographic locations and digital locations. But the size of 4chan is just minuscule compared to Facebook and YouTube, as we were just talking about. The fact that this isn’t the average experience of YouTube or Facebook, and the fact that it’s kind of the average experience of 4chan, does not mean that it’s a problem unique to something like 4chan or Gab or Parler, all these other things that cater to a specific worldview. Because, as you said earlier, you know, everyone’s on Facebook and everyone’s on YouTube. They really are the types of platform that attract people from all walks of life, including the ones who potentially also spend time on 4chan or are similarly attracted to those ideas. And that just makes it plain that while this is not solely a problem of Facebook and not solely a problem of 4chan, they share a concern about the fact that this can occur on their platforms, and therefore they have to have policies or procedures in place for how they’re going to handle it. They can’t just ignore it and pretend that it’s not going to happen there because it’s not the thing that they explicitly request or cater to in some sense. So that was one avenue of deflection you’d see from Facebook and other companies: trying to say, oh, we’re not 4chan. That’s something over there. Those people are doing that. We’re about cat videos. We’re about your high school friends.

JJF: And connections.

MB: Connections, that’s the point. But all the while, they are more than aware that they have a certain contingent of their user base, a small, tiny portion, that is committed to similar ideas as those on 4chan, and there are all sorts of meme pages and groups that are dedicated to just the exact same kind of horrible content that you would find on those platforms. While they do make some effort to moderate and remove that content, especially on the main news feed, the fact that they have to do that shows that they are aware that it’s a problem for their own platform, and not something that they can just say is other people’s problem, essentially. You saw this really come up in the aftermath of January 6 in the US, where Facebook and Mark Zuckerberg were saying, this was these other platforms, your Parlers, your Gabs, the ones that were specifically designed for or catering to Trump loyalism in some sense. But from what I understand of the research that looked into this, a lot of it was planned in Facebook groups, because that’s just the place where you can find people; it’s the platform that brings the largest number of people together, and these groups are always concerned with outreach. As we’ve just said, it’s not something they can just pin on others. And I think they know this, and only say that in their public comments when they’re trying to deflect responsibility in a particular sense.

JJF: Yeah. So it’s like, it’s 4chan’s problem and Facebook’s problem. It’s everyone’s problem.

MB: It’s all our problems, exactly, it is. When you operate a website, a platform, an app that is seeking user-generated content, this is what every platform has discovered: they may want to be dedicated to a certain type of user and a certain type of content, but when you open it up to people to basically decide for themselves how they’re going to use the platform, you will then be confronted with problems. Pornography, hate speech.

JJF: Online violence. Yeah.

MB:  Exactly.

JJF:  Okay.

MB: Exactly. And so that’s why all these platforms, I’m going to pick on Facebook and YouTube, but every platform, have had to reckon with their roles as moderators, curators, censors of a certain sort, and just arbiters of what values they are going to accept and what values they’re going to promote, which is not something they often willingly chose to do.

JJF: So now the platforms can’t really say this is not our problem anymore. Now that we’ve accepted that if you’re going to have user-generated content, you’re going to have to deal with this, tech companies, you say, are now proposing that they’re going to deal with these issues in terms of algorithms. They’re going to use algorithms to help protect us, I guess, from hate speech and online radicalization or something like that. Can we talk about the rhetoric now and why you view this algorithmic solution as something that, at best, is going to have limitations?

MB: Yeah, definitely. This really has been going on for a while; I think the reckoning for Big Tech started in the late 2010s, picking up in 2017, 2018.

JJF: Shortly after we get the large social media platforms.

MB: Well, when they became dominant, and when you had things like Brexit, the Trump campaign, Cambridge Analytica, where they really could no longer pretend that the bad effects of their platforms were things that they could wholly just ignore and deny. It’s the same thing when it comes to online extremism and the various forms of violence that are advocated or live streamed on Facebook: they could no longer really just say this is not our problem, and they kind of had to come up with solutions, because they were dragged in front of Congress a number of times, Mark Zuckerberg, for example. The go-to line from figures like Mark Zuckerberg was that they would improve their moderation by developing and deploying better AI that would more accurately detect hate speech or terrorist content and limit its reach, along with various other measures aimed at mitigating the effects of things like algorithmic group recommendation or recommendation of content. All these things are themselves AI, machine learning, ML-driven systems that have now been shown to have these sometimes disastrous effects that these platforms had to reckon with. And the preferred solution was always like, oh, we’ll just do it better.

JJF: More AI

MB: More AI, more data, and that will be the solution, given the fact that we are dealing with content at such a huge scale that it needs to be something automated. Because we are requesting and soliciting user-generated content at an unfathomable rate, we can only solve this through automated technologies. And yeah, we will continue to improve, our classifiers get better every year, better AI will be the solution. That was definitely the main thrust of something like Mark Zuckerberg’s testimony around 2018, 2019, when he was first being called out for this type of action. It’s similar to what’s going on now. I think just this week, as we’re recording, Mark Zuckerberg was in front of Congress to talk about child sexual exploitation and problems on Facebook and Instagram. Similarly, they are talking about the necessity of improving their classifiers, improving their moderation systems to flag these things at the level of automatic detection. We’ll get clear on this as we go forward, and I certainly think there’s a role to be played for automatic detection and these AI tools. I just find it very interesting that it is the main thrust of the line when it is, in some sense, also a response to the problems generated by these algorithmic and AI systems.

JJF: Yeah, it’s like, Okay, so the AI got us into this trouble, and it will also get us out of this trouble.

MB: Essentially.

JJF: We just need more AI.

MB: More AI. That’s, at least, the thing they prefer to talk about.

JJF: So what might be some of the limitations of trying to do this with AI at scale, in order to use AI to, like, flag objectionable content or hate speech, or suss out these online extremist groups, I guess?

MB: Yeah. And this is a good time to talk about what we mean when we talk about better AI and better AI systems, because that could mean a variety of different things. AI is a bit of a term of art and a bit of a marketing tool, and what it describes can sometimes be very crude algorithmic systems, or sometimes very sophisticated ones. And I’ll be the first to admit that there are some very impressive AI-based detection tools that do get better every year and do play a large role in content moderation. However, the role that they can play in such a massive problem as content moderation is limited, and content moderation is a problem that, I think, in a lot of ways, we as a society have to grapple with constantly and don’t quite know the right solutions for: things like, what is appropriate speech on a platform like Facebook? What are the appropriate ages for certain platforms, and can we really enforce these things? These are complicated questions, is the first thing I would want to admit, and AI has a role to be played here. But when we start thinking about what kinds of AI systems are available, we start seeing what they can’t do. They have limitations, especially for complicated problems like hate speech, which is, in my eyes, a constantly evolving social phenomenon. The terms used to denigrate different groups evolve. You know, we have some sticky terms that have been around for a long time, but new euphemisms emerge all the time. And what most AI systems are capable of doing is taking an existing problem and automating it. So when we have a known problem with known ‘solutions’, in air quotes, whether that’s removing that content or flagging it in certain ways, it can do that well. But when problems evolve and change and emerge in new ways, those systems are limited, because they don’t have the benefit of thousands of data points from which to draw as these things are constantly evolving. So one quick example is that while Facebook’s hate speech detection tools are quite impressive and work really well in English, they are more limited in other languages. And of course, Facebook is a global company that is growing in the developing world. So, you know, what we often experience in the English-language world is the best possible version you can get of Facebook, and I’ll leave it to you and others to decide how good

JJF: That’s so depressing.

MB: Yes.

JJF: Yeah, it works best in English. And it only works okay in English.

MB: It can, exactly. And you have different vernaculars in English, and then you have new conflicts that emerge that pose new challenges because of the specificities of how people will refer to these things. So I think that there’s a role to be played for automated systems, but we shouldn’t overstate it. And that’s exactly what I think figures like Mark Zuckerberg and Big Tech CEOs in general tend to do: identify these things as being the solution when, at best, they’re going to be part of a more holistic solution, and one that we’re still, as a society, as researchers, and as concerned people, grappling to figure out: what are the best options available, given these evolving problems that we have not really ever seen before?
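
To illustrate the limitation Mike describes, here is a toy, hypothetical sketch in which a detector built from yesterday’s labeled vocabulary simply has no signal for a newly coined euphemism. The word lists and function are placeholders, not any platform’s real classifier.

```python
# Toy sketch: a detector built from previously labeled terms misses a brand-new euphemism.
# The keyword lookup stands in for a trained classifier; all terms are placeholders.

KNOWN_HATE_TERMS = {"known_slur_a", "known_slur_b"}   # vocabulary seen in past labeled data

def flags_as_hate(post: str) -> bool:
    """Flag a post only if it contains a term the system has already been trained on."""
    return any(term in post.lower() for term in KNOWN_HATE_TERMS)

print(flags_as_hate("typical post using known_slur_a"))       # True: seen in training data
print(flags_as_hate("same idea phrased with new_euphemism"))  # False: no training signal yet
```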

JJF: Yeah. One thing I was thinking of as you were talking about this and you’re talking about how AI can be good in English for existing and known types of hate speech is, so as we have emerging dog whistles, for example, it may not know what those are. Or if we shift contexts. Here’s a silly example. In Canada, if I use the word fanny, it refers to bum and it’s a silly word for somebody’s bum. But in the UK, it’s a much dirtier word. Right? That’s not necessarily an example of hate speech, but even words in a single language, in English, shift in different contexts. It’s not clear that the AI is going to be sophisticated enough to do that, much less to pick up on a new dog whistle term that may evolve or develop to respond to a new situation, for example.

MB: Exactly. That’s part of the problem. It’s, you know, ‘attracted’ is maybe not the right word, but part of what interests me in this topic is that I find language and the way we use it to be really complex and really fascinating. And you just gave a really brief, quick, but very compelling example of how one word used in one context is very different than when used in another context. And, you know, picking out which context you’re in is sometimes very easy. If you’re physically located in Canada and you’re talking to someone IRL, in real life, you know where you are. But when you’re chatting with a friend in England, all of a sudden that context can shift a little. And maybe you don’t know, and maybe your friend doesn’t know. And yeah, the automatic detection tools, I don’t know which context they might think is the appropriate one in that sense. That’s just a really quick and dirty example that gets across that context can shift, and context is a key determining factor for what our words both mean and do. So these algorithmic tools, which are best at picking out things like sentiment of a certain sort, can pick out whether something is meant in a hateful way, for example. But they don’t always pick out whether you’re reporting an instance of hate speech rather than actually doing an instance of hate speech. This happens all the time, where people are just saying, this person emailed me, or, look what I was sent, and they get flagged for hate speech by automatic tools that are picking up on something, but missing the fact that it was reported rather than used, in some sense, or perhaps it was mocked, all these things. We do so many things with our language and our words and our communication styles, and algorithmic tools can pick up on features of that, for sure, but they can’t pick up on, well, I’ll just say that they can’t pick up on all of it and won’t be able to pick up on all of it, when it’s a bit of an arms race, too. Because dedicated hate speakers know that they are being surveilled in this way, and they are figuring out, like, inventing new dog whistles. A standard example that’s a bit old now is the three brackets around names to identify someone as being a Jewish person, which was an innovation by white supremacists. Here’s a way we can signal to each other that this person is Jewish without having to say it explicitly, and without it maybe violating the terms of service or being flagged by moderation tools. And, you know, that’s one example, but there are literally hundreds and thousands of these types of weird terms that they invent in the community in order to

JJF: Get around the algorithm.
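
Here is a small, hypothetical example of the reporting-versus-using problem just described: a detector that only scores surface content flags a victim quoting abuse just as readily as the abuser. The scoring function and numbers are invented stand-ins, not a real moderation model.

```python
# Hypothetical example: surface-level scoring cannot tell using a slur from reporting it.
# score_toxicity is an invented stand-in for a real classifier; the numbers mean nothing.

def score_toxicity(text: str) -> float:
    """Pretend classifier: high score whenever a known slur appears, regardless of context."""
    return 0.9 if "known_slur_a" in text.lower() else 0.1

abusive_post = "You are a known_slur_a and should leave."
victim_report = 'Look what someone emailed me: "You are a known_slur_a and should leave."'

# Both posts cross a naive threshold, so the victim reporting the abuse gets flagged too.
for post in (abusive_post, victim_report):
    print(score_toxicity(post) > 0.5, "-", post)
```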

MB: Exactly. And we’re all doing that too. We all have a weird relationship with the algorithm when we’re on TikTok or whatever; you’re trying to figure it out.

JJF: That’s true.

MB: And so that’s just a fact of life on the Internet and another complicating factor in why moderating people’s behavior is a challenge: it’s an evolving practice, and that operates on multiple levels. Just as social media, and how we interact on social media, is evolving, hate speech, both online and offline, is evolving, and the conflicts, the targets of this sort of hate speech and other instances, that’s evolving too. So it’s really just a constantly shifting terrain. That’s where the real limitations of these AI solutions emerge: they work best given historical data, you know, it doesn’t have to go back super far, but just a nice amount of data that points to how these things should be addressed. And that’s why you have human content moderators to do the work of labeling these things, so that the automatic systems can then pick them up. Those automatic systems require those human workers to have done that labeling in order for them to work. And that’ll be true as we move forward, because as new problems emerge, you need, essentially, I guess, new labels. That’s one way to put it.
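
To make that dependence concrete, here is a minimal sketch of the loop Mike describes: a newly invented coded term only becomes detectable by the automated system after human moderators have labeled examples of it. The data, terms, and function names are all illustrative assumptions.

```python
# Minimal sketch of the human-in-the-loop cycle: automated detection only covers a new
# coded term after human moderators have labeled examples of it. Everything is invented.

detectable_terms = {"known_slur_a"}          # what today's automated system was trained on

def automated_flag(post: str) -> bool:
    """Stand-in for an automated detector built from previously labeled terms."""
    return any(term in post.lower() for term in detectable_terms)

def incorporate_human_labels(labeled_examples):
    """Stand-in for retraining: fold terms that humans labeled as hateful into the detector."""
    for text, is_hateful in labeled_examples:
        if is_hateful:
            detectable_terms.update(text.lower().split())

post = "they are all (((them))) you know"
print(automated_flag(post))                  # False: the coded term is too new for the model

incorporate_human_labels([("(((them)))", True)])   # human moderators label the new usage
print(automated_flag(post))                  # True: only after that human labeling work
```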

JJF: Yeah. Let’s talk about the human content moderators. While we have Zuckerberg in front of Congress saying we’re going to use AI to get us out of the problems generated by AI, we know that that’s only part of the solution, and that a huge other part of the solution is to hire or outsource huge labor forces of people who are moderating the content that goes up on these various platforms. Can you talk a little bit about human content moderators?

MB: Definitely, yes.

JJF: And what’s involved in this job.

MB: I mean, I’ve probably said this too many times, but this is another fascinating aspect of this topic. Once I learned about this, I was like, oh, I need to know more and need to dig in here. It really is this invisible army of content moderators, human beings who are working for Facebook, YouTube, and others. Not usually working for them directly; usually working for a company that is then contracted by these platforms, sometimes even as the sole contractor, so it really is just a weird organizational structure. But they are contracted by these platforms to do the work of monitoring and flagging, or responding to flags, rather: when users on these platforms flag something as potential hate speech, it goes up the chain, and it gets evaluated by an algorithm and evaluated by a person. And there are thousands, maybe hundreds of thousands, of these people across the globe. The actual numbers are hard to come by because of, like I said, the complicated organizational structure through which these people are hired, and the fact that these companies don’t like to talk about the fact that there is this large human workforce of people who are paid relatively little to do fairly rote and psychologically damaging work. Because the worst of the worst things you can think of on social media, which is the worst of the worst that happens on the Internet, which is the worst of the worst that happens in the world, is these people’s everyday life, in some sense. They are looking at images of graphic violence to decide whether that’s appropriate for the platform or not. And they’re trying to navigate Facebook’s or YouTube’s complicated rules about whether something is newsworthy enough to be included on the platform, whether some images are nudity or not, whether they are overtly sexualized. Again, these value judgments come into play. And then also in more, you know, text-based things, like whether this is a piece of hate speech or not. They’re basically constantly reading, constantly looking at all the things that we wish to avoid on these platforms.
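
As a rough sketch of the pipeline Mike outlines, user flag, then automated check, then human review, here is a hypothetical triage function. The thresholds, categories, and queue are illustrative assumptions, not any platform’s actual rules.

```python
# Rough, hypothetical sketch of flag triage: user report -> automated score -> human queue.
# Thresholds and outcomes are invented; real platforms' policies are far more complicated.

human_review_queue = []

def triage_flag(post: str, automated_score: float) -> str:
    """Decide what happens to a user-flagged post; returns the action taken."""
    if automated_score >= 0.95:
        return "auto_removed"                      # clear-cut cases the model handles alone
    if automated_score >= 0.40:
        human_review_queue.append({"post": post, "score": automated_score})
        return "sent_to_human_moderator"           # ambiguous cases land on a person's desk
    return "left_up"

print(triage_flag("borderline, possibly newsworthy violence clip", 0.62))
print(len(human_review_queue), "item(s) waiting for a human decision")
```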

JJF: All day on their workday. That’s what they do.

MB: Yes. I wish I had the stats in front of me, but it’s at a very quick pace, too. They’re often given 30 seconds per video to decide whether it should be taken down or not. If they fall below that pace, they’re penalized, of course, because that’s how these companies work. It’s a nonstop barrage of horrible content. That’s why the burnout rate in these jobs is incredibly quick. People often only do them for a year or two. Sometimes they’re contractually limited to stop after two years, because the platforms know this is hard work. And like I said, there are just many, many, many more of these people than we collectively know about, and many more of these people than these platforms like to talk about, because it is kind of the worst side of the organizations that they, you know, for obvious reasons, don’t like to lead with. But relatively often, the biggest portion of their workforce is these moderators. So Facebook has around 70,000 full-time employees. And some estimates on the moderator front are that they could have between 50,000 and 100,000.

JJF: Wow.

MB: So it’s, again, kind of unfortunately unclear. And then YouTube as well has a large number, and every platform often contracts this out, and these things come and go as the platforms themselves contract and grow. So now a lot of people work for TikTok instead. But the exact same problems emerge there, because you have the same things: violence, murder, self-harm, animal harm, things that come up that,

JJF:  hate speech, all that.

MB: Exactly. It’s all there. And the reporting on these workforces is always so depressing, because you have these people who, one, were not told the nature of their job before they took it up, because they basically had to sign an NDA before they were told what they were actually going to do. They’re often told, like, do you want to work for a tech company? This is particularly the case in the developing world. Like in Africa, there was a really notorious case where a person was hired in South Africa, where he was a citizen, but then brought to Kenya. And that’s when he finally learned that he was going to do content moderation work and how much of a toll it took, and he couldn’t do it and was just stuck in Kenya, essentially.

JJF: Wow.

MB: Yeah. He’s brought a case against them for various things, but that’s just the tip of the iceberg, essentially. These are people who are often educated and often interested in working for technology, because that can be attractive in its own right. Then you sit down and the job is watching, unfortunately, say, cats be harmed, and it sticks with you, and you have people saying things like, I still see this video when I close my eyes. Of course, they’re also doing it for things like text and hate speech and all the value judgments that go into deciding whether this is a piece of hate speech or not, which are, like I said, very difficult for algorithms, and also difficult for a human when they have to sit with it for a little bit.

JJF: Yeah, there are going to be limitations on that as well, right? Because we talked about context mattering. And so if you’re in a different part of the globe reading this piece of text, it may be very difficult for you to determine whether or not this is hate speech, for example, when we’re talking specifically about the text.

MB: Yes.

JJF: But yeah, I do remember learning about this and learning about the trauma that many of these employees suffer often long term trauma. In some cases with some of the contract companies, there’s very little mental health support or supports in general for the people doing the work. The work is not super highly paid, especially when you compare it to other tech jobs that people hold. Then the support, the extended benefits and supports just don’t entirely seem to be in place. At least that was my impression.

MB: And it’s really a fascinating workforce, partly because you have these interesting pressures. One is the fact that you need people who understand the language. English is a fairly global language, so you have a lot of the English content moderation being done in various parts of the world. Ireland is one main area, because that’s where Facebook has its global headquarters for tax reasons. But then you have, in the Philippines, a lot of people doing content moderation work that’s in English. But then you have the fact that, yeah, Facebook is a global company and is growing in regions of the world where people don’t speak English, and you therefore need content moderators who understand that language. So they’re sometimes limited in how far they can displace certain areas of this work to different regions. And that’s why that person I was talking about was in Kenya; Kenya is like the African base that covers all of Africa, including the many hundreds of languages that are spoken across the continent.

JJF: Wow.

MB: Often, you know, they have only a small number of people covering a whole region.

JJF: Wow.

MB: So it’s really odd in the sense of how essential this work is and how it’s often a small number of people who can do it in certain regions. But yeah, because it’s not really highly valued work, it is in some sense fairly mechanical, or they want to think of it as mechanical, I should say, it’s not highly compensated, at least not relative to engineering jobs. The supports that are available are often paltry. One thing that’s interesting is that the workers often don’t trust the mental health support their manager provides them with. For one, I think it’s often people who are not qualified to provide it, because they don’t know the nature of the problem that the workers are dealing with. And also, they’re just suspicious of whether what they say in those rooms will be held against them, because this comes down to the organizational structure: these are outsourced jobs, essentially, and these outsourcing companies are dependent on the contracts from Facebook to exist. So they contract over the rate at which they will provide content moderation decisions and how much they will be compensated; that’s between the two companies. And then obviously, much like in other sweatshop-ish industries, it’s up to that outsourcing company to drive the workers to the point where they can get as many hours out of them and as many decisions out of them as possible. A lot of the problems here are not necessarily unique to this workforce at all. There are very similar things across other industries that have been subject to globalization and outsourcing. But it is another area where people are doing important work, and it’s not respected in the way that I and others think it should be. That’s something we hear from the moderators themselves. Obviously, of course, they would like to be paid more, but also they just want to be, you know,

JJF:  Acknowledged?

MB: Treated as human beings, which is not what we're seeing right now. Given the respect that they deserve for doing important work that does keep people safe. Some people do take a lot of pride in the work they do. They absorb all this awful material in order to shield others from it.

JJF: It’s kind of amazing.

MB: That's something they can find meaning in, but it's hard to find meaning when the higher-ups basically deny that you exist. That's what first got me interested in this.

JJF: Yeah. We have higher-ups telling everybody, don't worry, we're going to fix this whole problem of radicalization with algorithms. More AI, more AI will fix the problems of AI. Meanwhile, there are huge workforces of people trying to protect us from harmful content, and they're not being fairly compensated or even acknowledged. Because, and tell me if you think this is true, I don't think that even now, even though this story has broken in the last few years, most people are aware, at least in certain parts of the globe, in Canada, for example, of just how huge the armies of content moderators supporting these platforms are. My sense is that the rhetoric of this all being AI has very much stuck with a lot of people.

MB: I agree. I mean, I teach on this topic to my students often, and they seem surprised. They'll admit it's not so much that they didn't know about this; it's more that they hadn't really thought about it, in the sense that the platform just appears on your phone or your laptop. You don't think about the infrastructure behind it, the people behind it, all the many steps along the way, and the people along the way, that are necessary for it to appear on your phone as it does. I think that's by choice in some instances, and partly the fact that it's new terrain for us.

JJF: Yeah. I think a lot of people know about the problems with the way clothing is often made in clothing sweatshops or the way in which smartphones are made. But I don’t know that as many people are cognizant of the way our content is moderated. Even though as you said, in a lot of ways, the conditions of labor are very similar.

MB: Yeah, and sometimes these are referred to as digital sweatshops, given the fact that it's largely American and Western companies contracting with people in the developing world to serve, usually, the interests of Western audiences. The parallels are pretty apparent in many ways. So I think it's not inappropriate to hearken back to the idea of sweatshops as being what's at issue here. And in the same way, with sweatshops themselves, we often know that our shoes are made by someone, and probably not in the best of conditions, but the actual locations of those factories are sometimes hidden from regulators. And there's a whole history,

JJF: And kept at arm’s length.

MB: Exactly, in which these things are marginalized, made invisible to prying eyes, essentially. Big Tech, I think, is certainly complicit in the same kind of arrangement with content moderation.

JJF: This brings me to the central argument of this paper and the research we're talking about, because I think perhaps one of the reasons that people don't spend a lot of time thinking about this is that we're already given a narrative of what's going on. What's going on is that it's algorithms and AI, and the people involved are hidden. So in your paper, you had the following quote that I found really eye-opening and that I want to talk to you about. Quoting from your paper: “I consider how the preference for technological solutions serves as an ideological function for Big Tech and its champions. That is, its primary purpose, whether intended as such or not, is as propaganda that distracts us from the real, avoidable human harm companies like Meta Platforms, who owns Facebook, and Google, who owns YouTube, contribute to worldwide.” Can we talk about this idea that the preference for technological solutions is propaganda? I love that.

MB: Yeah, definitely. I mean, one thing that matters to me here, and I've repeated it a few times, is that we're in a rather complicated situation. These are complicated problems, and just what the best thing to do is, the appropriate thing, the legitimate thing, is something we have to think hard about. It's something we have to grapple with as a society, and that platforms have to grapple with internally. But when we have difficult, challenging problems, we need a clear eye on what the actual problem is in order to make any headway towards solutions. We can't do that when those problems are obfuscated in various ways, when we're not given the full story and we're not able to appreciate the reasons as they are in front of us. I definitely think that's exactly what's going on with these claims that better AI will solve various issues of radicalization, various issues of moderation, various issues the platforms contribute to. I find these problems really challenging, really difficult, and not the sort of thing that is a narrow technical problem, the way they are often framed. It really is implicit, and sometimes explicit, in the way they discuss these things: oh, this algorithm went wrong in this particular way because it misclassified something in a certain direction, and we can improve that by having more accurate data and tweaking things here and there. But I don't think this is a technical problem. I think this is a global social problem involving many different factors that need to be grappled with head on. When it's presented as a technical problem, one that can be addressed through these sorts of improvements to the underlying AI technologies, I do consider that a type of propaganda in the sense that it leads us astray. It presents itself as being helpful to the discussion in some sense, but really it is unhelpful, and it serves the purpose of not challenging the power of Big Tech in the way that I think we really need to.

JJF: In fact, it almost seems like it might feed the power of Big Tech. If I accept the idea that the way to solve these problems is not to properly compensate content moderators, or to consider how these platforms' existing algorithms lead people down the rabbit hole, for example, but instead to think, well, all we have to do is build better AI tools that identify what hate speech looks like so that we can remove it, then I might be persuaded to invest more in these companies to help them develop the better tools that are going to save us all.

MB:  I think that’s their hope.

JJF: It gives them more power, right? This propaganda.

MB: You see this in a related area in the AI ethics world and in philosophy, where people sometimes talk about algorithmic fairness, which is often discussed as disparate treatment based on certain attributes of people. There was a famous ProPublica story about whether an algorithm used in bail decisions was biased against Black defendants. If the problem is that the algorithm is doing something wrong there, then we can address it by improving the algorithm, rather than talking about the criminal justice system more broadly, the bail system, and various other aspects. When you treat these as technical problems, you limit the solutions available. And Big Tech has been happy to jump on and address the issues of algorithmic fairness in certain ways, because they take these complicated social problems, which, if we were to think about them more fully, would force us to grapple with various aspects of our society that ought to be grappled with, in my opinion, and render them in a different guise, one that often doesn't challenge the power that they do have. Instead, as you said, the message is just: okay, if you can solve this, please do. And I do find that at best naive, and at worst cynically manipulative. It certainly is not, in my eyes, the right response to the severity of the various problems that we started our conversation with. The reason the paper starts in one place and ends in another is that I find these all very connected. I do feel that, in order to address the issues of radicalization I was pointing to and talking about earlier, we have to have a better sense of how Big Tech does wield power over us, where they are willing to direct their energies, and how unwilling they are to cede that power. You see this in the employment case very directly.

JJF: So what Big Tech is doing, if I'm understanding the point, is taking social, political, and moral issues that we need to deal with, such as, for example, the power that Big Tech already holds, the way radicalization and online extremism spread, the existence of extremism in our society in general, and how we grapple with things like racism, xenophobia, sexism, et cetera. Instead of talking about those very complex moral, social, political, and legal issues, it says, no, no, no, this is actually just a technical glitch or problem, and we just have to get better AI and that will fix it. It offers a simplistic solution, and who doesn't want a simplistic solution?

MB: Mm hm.

JJF: So I understand kind of the seductiveness of this propaganda, I suppose. But yeah. Is that what’s kind of happening?

MB: Yes, exactly. And, you know, were you to take them at their word, it also suggests that there's no need for, you know, new regulations from governments.

JJF: Right.

MB: So it also plays a preemptive role, suggesting that the only viable solution to this problem is one that they are already tackling, and that there's therefore no need to change the laws or make new regulations. These platforms aren't subject to very much regulation at all; they're shielded from liability in various ways by intermediary liability laws in the US and elsewhere, and they're very happy with that arrangement. They're constantly on guard, trying to prevent either changes or new laws that would subject them to forms of liability they're not currently subject to. So if you're a lawmaker and you believe that this is a complicated technical problem, and you see this really smart person in front of you who says they've got their best minds on it, you might think it's handled in some sense, and certainly not something that you, as someone ignorant about AI, as we all are in various ways, have the right to dictate. But of course, society and lawmakers do have the right to dictate how these companies operate, or at least we should. We should take back some of the power that they hold over us and find out how we can shape them to our interests instead of their own. And when we're distracted by fanciful claims that artificial intelligence will make all these problems go away, we're not taking on the hard problem of actually doing that collective determination, which I think we need to do as soon as possible.

JJF: Right. Don’t accept the bait and switch that turns this into a quick fix technological problem with a solution that’s just on the horizon and instead actually do the messy work of dealing with these complex moral and social issues.

JJF: I want to thank you so much for taking the time to meet with me today. Is there anything else you’d like to share with the listeners about online extremism, AI propaganda, content moderators, any of this stuff?

MB: I think the only thing I'd like to share is that what you said up top really hits home with me. I'd ask people to remember, and think critically about the fact, that these technologies, these things that seem virtual and in the cloud and distant, are all, at the end of the day, physical technologies made by people, operated by people, moderated by people, and to think more fully about what that means. Then we can take better stock of our relationship to these platforms and maybe, hopefully, together, tackle the big problems.

JJF: I want to thank Michael again for taking the time to go over his research on radicalization and social media content moderation with us today. And thank you, listener, for joining me for another episode of Cyborg Goddess. This podcast is created by me, Jennifer Jill Fellows, and it is part of the Harbinger Media Network. Music is provided by Epidemic Sound. You can follow this podcast on Twitter or BlueSky, and if you enjoyed this episode, please consider buying me a coffee. You'll find links to the social media and my Ko-Fi page in the show notes. Until next time, everyone. Bye.
