Cyborg Goddess

A Feminist Tech Podcast

Transcript for Season 4 Episode 4

Jennifer Jill Fellows: The Belmont Report, issued in 1979, laid out three basic requirements for any human subjects research project. First, respect for persons, including their autonomy and self-determination. Second, the need for justice and fairness in the distribution of benefits and burdens. And third, the need for beneficence in terms of protecting human subjects from harm and making efforts to secure their well-being. And this all sounds really good. But this document was written nearly 50 years ago. And the technological landscape of what is possible when conducting human subjects research has changed a lot since then, as my guest, Dr. Michelle Pham, and I discuss today. So join me, Jennifer Jill Fellows, for another episode of Cyborg Goddess as we discuss brain computer interface trials, patient consent, and human subjects research ethics.

JJF: Dr Michelle Pham is a professor in the Center for Bioethics and Social Justice at Michigan State University. She conducts research in the interdisciplinary field of neuroethics and connected issues in philosophy of science, often with a focus on neurotechnology. She examines issues of consent and post clinical trial care for patients in experimental brain implant studies and has raised concerns that patients in brain implant studies may be exploited. And that’s what she’s here to talk with us about today.

JJF: Hi, Michelle, welcome to the show.

Michelle Pham:  Hi, Jill. Great to be here.

JJF:  I’m going to start by resisting the metaphor of the cloud or the web and remind myself that digital space occupies physical space. We’re hearing more and more about the immense energy and water needed to power the digital infrastructure that we rely on to say nothing of the raw materials. So digital space impacts our physical reality in tangible ways. And with that in mind, I think it’s important to acknowledge that Cyborg Goddess is recorded on the unceded traditional and ancestral lands of the Coast Salish people, including the Qiqéyt Nation.

JJF: So I want to start by asking you just how you got interested in technological research on the brain and brain computer interfaces, ’cause this is fascinating and way out of my wheelhouse.

MP: Yeah. I wish I had a more interesting story, but a lot of it, honestly, was by accident. So I moved out to Seattle to study at the University of Washington for my PhD. And I went there to do feminist philosophy of science. So that was my goal. And there were some amazing feminist philosophers of science there at the time. So Alison Wylie, who’s at UBC now, Lynn Hankinson Nelson, who passed away recently. Yeah, so that’s what I came out there to do. I wanted to sort of better understand science as social knowledge and think more about standpoint theory and feminist ethics. And while I was in, I think, my second or third year, one of our feminist bioethics faculty members, Sara Goering, got recruited to lead what was called a neuroethics thrust for the Center for Neural Engineering at the University of Washington. And this was one of the first NSF-funded centers for brain implant research. And it was basically the NSF’s way of trying to get researchers to collaborate across disciplinary lines. So you would have basic researchers. You’d have people doing applied research, applied scientific research, you’d have clinical researchers, and then you had the neuroethicists or ethicists. And so that’s actually how I got into it. So it was Sara, along with Eran Klein, who’s a neurologist who joined the team but also has a PhD in bioethics.

JJF: Oh, wow.

MP: So, so they were leading that thrust, and then I kind of got wrapped up into it. I don’t know what they saw in me, but I ended up joining, and it was a lot of fun, and we had a great group going. We had weekly meetings. And so that’s how I actually got started.

JJF: Really, really cool. So it was sort of by accident, but, I mean, obviously, those professors must have seen something in you to tap you and bring you into this. So were you already doing kind of feminist bioethics or neuroethics before this?

MP: No. So I was exclusively doing philosophy of science when I was at University of Washington. And my dissertation advisor there, Andrea Woody, works on scientific modeling. Like, her background is in chemistry, so she’s very much like a history and philosophy of science kind of person. So I was working on my dissertation on consensus issues in scientific practice and collaboration. And then I kind of just ended up joining the group. So I was working on my dissertation and then being a part of the neuroethics group at the same time. And for a while, running the two in parallel, I had a really hard time sort of reconciling these two interests.

JJF: Yeah.

MP: I was like, So I’m doing this dissertation thing. It’s so stressful. It’s so hard to write. But then I’m also doing this thing with this other group. I’m learning how to do qualitative research. I’m learning how to do collaborative research. I’m learning from the different scientific talks that are happening. So these were two separate things, but kind of what I realized was while I was, like, thinking about the theory and the practice from the philosophical standpoint, there was also this practice element in the neuroethics research that I was learning about inside of science. And I think I actually learned a lot about, like, the culture and the practice of science in neurodevice research doing that. And so I think for a long time, it was just two paths. And I’d say, to some extent, I just made a decision for myself and decided I was kind of more interested in the practical side of things and wanted to be engaged with scientific practitioners as well as neuroethicists. So I did the philosophy of science dissertation. I’m happy that I did it, but I’ve just decided to move forward with the neuroethics stuff at this point. I think at some point in the future, when I have some time to sort of step back a little bit, I’ll probably try and merge the two a little bit more because there are a lot of values in neurodevice research that we can think about in the philosophy of science context. But for now, I’m just going to stick to the neuroethics stuff for a little bit.

JJF: So let’s try and get some just kind of facts on the table for people who don’t know anything about this. So can you talk about some of the medical technological trials that are being run, what they’re being run for? I know in your research, you’ve mentioned human intracranial electrophysiology research. I think I said that right. So yeah, if you can tell us a little bit, like in layman’s terms, what this stuff is just so we kind of know what you’re researching.

MP: Yeah, of course. So let me just start off by telling you about a couple different kinds of neural devices. And then we can sort of talk about the different kinds of research studies that are happening in relation to those devices.

JJF: Amazing.

MP: So in my work, I mainly focus on two types. So one is called deep brain stimulation, sometimes referred to as DBS, actually DBS more commonly. And basically, here, you’ll have somebody with, say, a movement disorder like Parkinson’s disease, essential tremor, dystonia, and basically, they have to be what they would call refractory or treatment resistant. So you have one of these movement disorders. You’ve tried all the standard treatment options. They don’t work for you. And maybe by that point, your neurologist or your brain doctor would recommend that you try out or you explore the option of deep brain stimulation. And deep brain stimulation requires that you get brain surgery. So they would implant electrodes deep into your brain. Those electrodes, these tiny little metal things, are connected to wires that run down the back of your neck and they’re connected to a battery pack that sits underneath the skin in your chest area, and that battery pack provides power for the electrodes to stimulate the brain. So deep brain stimulation, it stimulates the brain to help you treat or manage the movement disorder. Currently, at least in the US context, DBS is FDA approved for Parkinson’s disease, as well as dystonia. There’s some exceptions to the rule, but that’s what it’s primarily used for. And I’d say DBS is, because it’s FDA approved, you can say that it’s kind of more like primarily for clinical purposes, although you still do research using DBS. Another kind of neural device is the brain computer interface, right, which you read about. And this one is a little bit different than DBS in the sense that you’d still have electrodes implanted into different regions of the brain. But that device might provide stimulation, but it also records your brain data to then connect to some kind of external device. So for instance, somebody with a spinal cord injury might have a BCI or brain computer interface implanted. The device would record the brain data and it would allow them to, say, use a prosthetic arm. It can also be used for a spelling program for folks who have speech disorders. There’s also a visual BCI that bypasses a certain area of the eye for you and creates artificial vision for you to navigate space.

JJF: Oh, wow.

MP: Yeah, so BCIs. I think the main distinction between a BCI and DBS, technical stuff aside, is that DBS provides stimulation to help you treat or manage a particular neurological disorder. And then a BCI can stimulate and also record, and it would connect to some kind of external device. So a prosthetic arm, a spelling program, or a visual cortical prosthesis.

JJF: Okay.

MP: So those are two kinds of neural devices, and they both require surgical implantation. So, you know, brain surgery is not a walk in the park, right? So there are risks that come with brain surgery. And then for research, for human intracranial research, the main thing I would say is just that it’s basically research that involves placing electrodes into or onto the brain, for different purposes. And, you know, I know you have other questions as well, but,

JJF: No that’s okay,

MP: you know, there’s different kinds of studies being done in this space. So some of these studies are what you might call basic intracranial studies. There are no therapeutic or clinical aims attached to these studies. They involve human beings. But the main aim is to understand how the brain functions because we understand so little about the human brain. And when the brain is open, you can basically record data and understand things like language processing, memory, things like that. And sometimes the clinical components and the basic components overlap, and then sometimes it’s purely for research purposes. So we can sort of talk more about that, but hopefully that’s sort of enough to kind of get us going.

JJF: Yeah, I think that’s helpful. So we have, like, the deep brain stimulation DBS, and that one stimulates the brain to try and manage certain neurological disorders or the symptoms of those disorders. And then we have I’ve forgotten the acronym.

MP: BCI

JJF: BCI Thank you. And that’s the one that will send messages or send information to some external device, so like a prosthetic arm or something like that. And those are both approved, right? They’re not necessarily undergoing study or they might be undergoing study for some conditions, but they’re approved for use for other conditions right now.

MP: So yeah, this is so confusing. So DBS, deep brain stimulation, the one that stimulates the brain only, it is FDA approved.

JJF: That one’s approved. Okay.

MP: BCI is not.

JJF: Oh, so that one’s still undergoing trials.

MP: All BCI studies are experimental.

JJF: Okay. Then there’s additional experimental trials with implants into the brain, and some of those are also for therapeutic purposes, but some of them are just to gather data, which may in the future lead to development of therapeutic stuff, but that’s not what’s going on in the trial.

MP: That’s right.

JJF: That’s what it is. . .  So there’s kind of these four levels I’m now thinking about when it comes to this. I’m sure there’s more, but that’s probably enough for a layman like me to get going.

MP: Yeah. Yeah. That’s right.

JJF: Cool.

JJF: So obviously, we’re going to try and focus on the trials, I think, because that seems to be where a lot of your research is focused. And in particular, when we’re doing these kinds of trials, this is obviously scientific research involving human subjects, which raises a whole bunch of ethical stuff that needs to be talked about. So can we talk a little bit about the participants in these trials? And in particular, maybe, as you’re talking about this, one of the terms that came up in your research is the idea of participant engagement. So can we talk a little bit about, you know, how this might differ from other types of medical trials that people might be aware of?

MP: Yeah, so I think it’s really important, I think, to emphasize that for at least DBS studies and BCI studies, all of the participants are disabled people. So it’s disabled folks who are contributing to science and helping to advance science. I think it’s really important that we recognize and we acknowledge that because oftentimes, you know, when we think about cutting edge or fancy research. We think so much about the researchers and the clinicians who are behind this research and maybe the industry partners and the funding agencies, but we really need to acknowledge that it’s disabled people who are essential to this research, and without them, this would not be possible. You cannot understand the human brain the way you do without these folks being willing to put their brains and their bodies on the line to help advance this work.

JJF: Yeah, and I think, especially with, you know, what you said earlier, like brain surgery is serious surgery. So it’s important to acknowledge that, like, these could be serious risks that people are taking in order to help humanity in general understand the brain better.

MP: Mm hmm. Exactly. And I think that, you know, when we’re talking about participant engagement, so, you know, there’s, like, a whole . . . I’m in academics, and so I tend to get really worried about things, but I’ll try to keep it short. But at least in the health sciences, in the last maybe 20, 25 years, there’s been increasing acknowledgment that in order for us to do effective health science research, we need to actually engage with the people that we’re purportedly trying to benefit or help in some sort of way. If you don’t have trustworthy relationships with the relevant patient populations that you’re working with, then they might not take up the work that you’ve produced. They might not trust you for various reasons, and they usually have pretty good reasons because of the history of medical research in the US, especially. And then also they think that when you engage with participants, you actually sort of understand their priorities about their health needs better. And so there’s basically just increasing acknowledgment that in order to do good health science research, there needs to be a pretty robust dimension of participant engagement. And that engagement can sort of take different forms. There’s different versions of this. There’s more collaborative participant engagement, where you might have community leaders as part of the research team or members of the community who are trained inside the research; you might take, like, a more social science or sociological approach, where you could do qualitative interviews with a number of relevant participants who belong to the group that you’re trying to study; or you might have something like a more consultative model, where you could have a patient advisory board as part of your research project, and you consult with them on matters at different time points during your research project. So that participant engagement, at least in the health sciences, I think, is growing, and it’s increasingly sort of just a normal part of research practice. But I think when you sort of think about the participant engagement lens in the context of neural device research, it’s actually a lot harder to justify because oftentimes, there is no direct clinical benefit at the end of these studies. So the way you would justify participant engagement in health science research is we need to talk with this population or this particular community because our research aims to benefit them in some way. And so it makes sense, then, that you include them. But in the context of neural device research, if you have a device feasibility study, where you’re just trying to understand whether that device can actually work for this particular condition, that’s sort of the sole aim, and if it benefits the participant, that’s sort of an indirect benefit that comes later on. It’s sort of a little bit different. Or if you’re doing a study where someone’s brain is open because they’re getting implantation of the DBS device, but you’re just implanting additional electrodes to record information, and that’s it. And that information has nothing to do with their condition. Like, what is the justification there for participant engagement? So I think there’s a little bit more hesitancy and maybe pushback to do it in those contexts. And, you know, there’s some research showing that scientists are less willing to engage if they’re doing basic research. And so I think there’s kind of a disconnect there.
But I do think in neurodevice research, it is important to engage with participants in this work, largely because they are advancing the science. You cannot do it without them. And I think long term, you know, people always write in their grants, like, this many millions of people are impacted by this thing, and if we could figure out this thing about this device, it would help these people. And so if you really are trying to help a specific population in the long run, you really should be engaging with them on some level, even if it’s basic science research, I would say.

JJF: I think that’s helpful because, yeah, like, when you hear about participant engagement research in other areas, so completely not even medical research because it happens in even non-medical research areas. For example, there’s been a lot written about engaging with Indigenous elders and hunters for ecological research on polar bear populations and stuff like that. And obviously, in cases where what you’re trying to research will have an impact on a community directly, it’s really important to engage with the community. For a whole variety of scientific and ethical and knowledge-making reasons that a lot of scholars have already covered. It tends to be that local communities have knowledge that the scientists don’t, for example, and also have concerns that may not have occurred to scientists and things like that. And so for both epistemological, like, knowledge reasons and for ethical reasons, it’s really important to engage. And then you also talked about kind of the insider outsider thing where we have members of a community who also are trained as scientists, so say, an ecologist who is also a member of a local community or something like that. But if I understand your point correctly, one of the reasons why participant engagement becomes, I don’t want to say challenging, I feel like that’s not the right word, but maybe it is just different or is differently looked at and approached by scientists in the areas that you’re looking at is that we can’t necessarily promise or it may not even be part of the scientific enterprise to deliver any kind of tangible benefit from this specific research for this population or for this individual that you’re looking at. So I remember one of the things that you talked about in your research is that somebody might be getting brain surgery um, to have an implant put in. And now that the brain is open and we can look at it, maybe they extend the surgery by a couple of hours to do some extra research on the brain. Or you just said in your answer, you extend the surgery by adding a few more electrodes that are going to track some more information. And that isn’t critical to the device that was supposed to be implanted to help manage the condition. So it’s not adding any tangible benefit, but it is adding a tangible risk because the surgery would be prolonged or because extra devices would be in the brain. So yeah, can you talk about that a little bit more?

MP: Yeah. And so I do also want to just say it usually extends the surgery time by like 15 to 20 minutes.

JJF: Oh, sorry.

MP:  No worries. No worries.

JJF: That’s a good point sorry.

MP: It’s very hard to keep track of all of this. And sometimes up to 30 or 40 minutes even, sort of, depending on the study. And these are still happening right now. So just to back up a little bit. So somebody who, say, has a movement disorder, their doctor has told them, Okay, it’s time for you to get a DBS device to help you manage your condition. And then their own doctor will also usually ask them, Hey, since you’re already getting brain surgery, I’m also conducting this basic study, trying to understand movement or language processing or memory, something like that about basic brain functions. Would you be willing to take part in this study? It would happen at the same time as your clinical brain surgery. So it’s while your brain is open. So we might implant some temporary electrodes on your brain to collect additional information. And while that’s happening, you might do some language tasks. We might give you a set of vowels or something like that and then try and record your brain data while you’re taking on those tasks.

JJF: Oh, right, because people are often conscious for these surgeries, right?

MP: They are awake. And the study itself doesn’t benefit you. It doesn’t help you with your condition. You don’t really get paid anything at all, but you will help advance science.

JJF: Mm hm.

MP: So that’s sort of the study that they would be taking part in. And yeah, there’s no benefit there. And they would also often say things like, there are no additional risks when you take part in this study, but all the risks that come with brain surgery, so stroke, infection, all that stuff, all of that gets amplified because the longer you’re on the table, the more amplified the risks become. I think it’s really hard, actually, to know how to communicate that to a potential participant in a study like that. How do you tell somebody? It’s the same risks, but they’re all amplified. Like, what does that even mean? Like, we don’t even know how to even . . .

JJF: So saying no additional risk is technically true because nothing extra is added, but all the risks that were there are not at the same level. They do increase. Okay. Yeah.

MP:  That’s right. That’s right.

JJF: So it’s not false to tell a patient there’s no additional risks, but it may not necessarily provide everything that somebody needs in order to make an informed decision.

MP: I think it’s hard to say. Like, I think there are debates in neuroethics about this. So I think a lot of folks say things like, what would be considered acceptable levels of risk? And how would you communicate that the risks are the same but amplified?
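[As a purely illustrative sketch of what “the same risks, but amplified” could mean in numbers: suppose, hypothetically, that a given surgical risk accrues roughly in proportion to time on the operating table, and that the clinical portion of the surgery takes T = 180 minutes. Adding Δt = 30 minutes of research recording then scales the time at risk by

\[
\frac{T + \Delta t}{T} \;=\; \frac{180 + 30}{180} \;\approx\; 1.17,
\]

so each time-dependent risk, like infection or stroke, would be roughly 17 percent higher than for the clinical surgery alone. No new kind of risk is added; the existing ones simply scale with the extra time. The specific numbers and the linear-in-time assumption are hypothetical simplifications, not figures from the interview.]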

JJF: Yeah.

MP: And I think usually when people would hear, like, the risks are the same, but they’re just amplified and it’s just 20 minutes. Usually you don’t think much. And usually, as well, the participants will have a relationship with the doctor or the neurosurgeon who’s doing the study. And so I think they sometimes might feel indebted to this person who might be saving their lives. Like, I’ve done interviews where I had people who took part in these studies, like cry in the interview and say, you know, I owe my life to this person.

JJF: Mm hmm.

MP: My condition was so unmanageable and it was so difficult for me to live my life, and it’s completely changed after the DBS study, and why would I not volunteer if I can help science in some way? So the participants in these studies are extremely altruistic, and some have argued that even if there’s no clinical benefit, maybe the benefit is actually one of, like, fulfilling altruistic motivations.

JJF: Feeling like you’re giving back.

MP: You’re giving back or you’re helping to advance science. But I think that’s really different than thinking about a standard clinical trial where you can take on risks, and the risks are justified because you get some kind of clinical, you might get a clinical benefit.

JJF: You might get the benefit.

MP: Because it is an aim of the research.

JJF: Yeah. So if you’re thinking about an experimental drug trial, you’re taking drugs to manage a condition or an illness, and maybe they’ll work, and maybe you personally will see the benefit, but that’s not what we’re talking about here. So you aren’t personally going to see a tangible benefit, but you might help advance science in a way such that there might be new devices for new generations of people suffering from your condition or condition similar to yours, perhaps.

MP: Yeah. We don’t know if or how or when that would happen, right? Because, you’re trying to understand the brain’s basic functions. And I think the idea here is you need to have basic mechanistic knowledge about the brain to then be able to do the other applied stuff.

JJF: And we’re still learning the basics.

MP: That’s right. And so it’s probably not going to be anytime soon, right, for those benefits to take hold. And researchers are still conducting these types of basic intraoperative studies as we speak.

JJF: So I want to circle back to something you said there while you were talking about this. Because you’ve looked at a number of ethical concerns with these trials, and you’ve now talked twice about the importance and the challenges of patient trust, right? So you’ve talked a little earlier in our conversation, you mentioned that researchers are beginning to realize how important it is to have trust of the communities they’re researching with and or on and that many people have quite good reasons to distrust, given kind of the history of medical research in the United States, things like the Tuskegee experiments and stuff like that. But here, you also talked about the fact that many of the patients in these trials specifically that you’re looking at, often do place a lot of trust in the surgeons. And so that that trust itself also kind of changes the way in which perhaps patients look at their willingness or unwillingness to participate in these trials. So can we talk about trust?

MP: Yeah, I mean, I think it’s such an interesting topic, and, don’t ask me for a definition of trust. There’s obviously really good work in philosophy on trust. But in this context, I think what sort of makes it really complicated is a variety of factors, right? Just some descriptive factors about what happened. So one is just the person who’s asking you to take part in the study is also the same person who’s giving you your care. So you’re just in a different relationship with that person who’s doing this research that’s not going to benefit you, but they’re also asking you to take part in research while you’re being cared for. So that sort of makes it difficult. I think another thing is that what’s sort of unique about these basic studies is that clinical care and basic research are happening at the same time. So while your brain’s open, they usually also ask you to perform a number of different tasks to make sure that they’re stimulating the correct area for your condition. So sometimes it’s actually hard to distinguish which part is the research and which part is part of my clinical care. How do you say no if you cannot distinguish between the two?

JJF: Right so how do you withdraw from the study, for example, which patients always have the right to do in clinical trials?

MP: That’s right. And then I think the other thing is because of just circling back to that relationship with the surgeon and maybe the research team, we don’t have enough research to sort of know this, but one of the concerns that folks have raised is that potential participants might feel obligated to say yes because they’re afraid that their clinical care may in some way be impacted if they say no. So you’re kind of just in this, like, sort of vulnerable position where you really need this device. You’re also asked to do this extra thing. And, you know, they say, like, Oh, it doesn’t. . .  you can say no. . .  it’s not going to impact your clinical care, but it’s actually really hard to know what participants are thinking when they’re presented with that opportunity. And if you are in a lot of pain and having a really difficult time with your quality of life, are you just saying yes to sort of get it over with? Are you worried that your clinical care might be impacted or your relationship with your healthcare team might change in some way? So that’s sort of another concern that shows up in that context. And then, yeah, when can you meaningfully say no? So you technically are allowed to at any point, right? But if you can’t distinguish when the research is happening and when the care component is happening, that makes it really difficult, as well.

JJF: I think that’s really interesting. So it’s your surgeon who’s already somebody you’re depending on for your regular care, the care that you already need. And then it’s that person that is coming to you and asking. So they’re both the physician providing your care in this case, and then also they’re the people conducting the research. And so they’re kind of wearing two hats, and it may be hard for a patient to kind of distinguish surgeon as researcher versus surgeon as care provider, right? And so

MP: right.

JJF: If the surgeon is excited about the trial, and I mean, a lot of scientists are beautifully nerdily excited about the things that they’re doing. It may be hard to know when patients are giving consent because they genuinely want to be altruistic and they want to be a part of this versus when they’re giving consent because they want to make their surgeon happy, for example.

MP: I think that’s such a great way to put it. They want to make their surgeon happy.

JJF: Even if the surgeon isn’t pressuring, it could still feel like pressure.

MP: Yes. And I think just being asked, especially if your quality of life is not great and you’re seeking this additional option to help alleviate the symptoms. You’re in sort of a really stressful situation. And when you do all that consent, before surgery, I think it’s usually pretty stressful for these patient participants, you know, preparing for surgery. It’s a lot that they’re thinking about. So it might just be too many things to be holding in place at once. And I should also just add that more recently, so sometimes I go to NIH study section to do grant reviews, right? I’m noticing that there is some, I can’t say for sure how many, but I’m starting to notice a separation. Where I think researchers are maybe responding to some of these criticisms and trying to separate the research from the clinical component. So you would have a research team that is not part of the care team, so that gets separated, and then maybe your own clinician would ask you for the consent, but they’re not part of the research team, even though they would be overseeing it. So I’m beginning to sort of see that happen, but it sort of remains to be seen, like how widespread it’ll become.

JJF: Right. Would that make it easier during the procedure itself to tell the difference between what is your routine procedure and what is the trial?

MP: I’m not sure. I think if there’s, you know, different teams of people in there, then probably yes. This is sort of just more off the top of my head, right?

JJF: Right. So this is, like, a new change that just is starting, and we’ll have to see how it develops.

MP: Yes. And I think, importantly, we’d have to talk to these patient participants who were involved in these studies. Could they tell when it was different? Like, we need more research on this, and I think talking to them instead of guessing what it’s like for them is really important.

JJF:  So we’ve talked about clinical trials that happened during brain surgery. But you also distinguished, and I just want to make sure that I kind of understand the difference. Another area of research you looked at is the difference between opportunity studies and experimental trials. So can you talk about how these are different? Have we already kind of talked about one of these, maybe? Like, what’s going on with these? And why is it important to look at both when we’re thinking about the ethics behind this?

MP: Yeah, thank you for asking that. So, you know, for human intracranial electrophysiology research, this distinction comes from a paper by Mergenthal et al., and I’m happy to provide these citations at the end if that’s helpful.

JJF: I can put a link to it, yeah, to the DOI in the show notes.

MP: Awesome. But the way we’ve been talking about these basic intraoperative studies, right, that happen during clinical care, during the brain surgery for movement disorders, for DBS, so those would count as opportunity studies.

JJF: Okay.

MP: We also have experimental trials, and usually when we’re talking about those we’re sort of in the domain of brain computer interfaces, whereas basic intraoperative studies are in the domain of DBS.

JJF: Okay. So the opportunity studies, it’s like you’re going in to have this deep brain stimulation device put in to manage your condition. And so there’s an opportunity while you’re undergoing the brain surgery to learn a little bit more basic information about the brain. Like you were talking about learning about language centers or motor skills and so getting the patient to do something so that you can observe the brain while the brain is open to observation.

MP: Yeah, and I think that’s why they call it opportunistic studies. They’re taking advantage of the fact that your brain is already open because it would be unethical to just recruit a person and open their brain up. Right? And start recording. You can’t do that. So the fact that it’s already open and you would just extend the surgery time by 20 to 40 minutes, and all the risks stay the same, but amplified, it’s much better than

JJF: than taking somebody who’s not going in for brain surgery and putting them under the knife and doing this. Cool. Okay, so then the experimental trials are more the devices you were talking about that can send a signal to a prosthesis.

MP: Mm hmm. Yeah.

JJF: Okay.

MP: Yeah, so what’s sort of different about experimental trials, if you contrast them with the opportunity studies, is that in opportunity studies folks are already undergoing brain surgery for a clinically indicated brain implant. For experimental trials, usually you are somebody with, say, a spinal cord injury, and you would get recruited to be a part of a BCI study, so a brain computer interface study. And you would not be undergoing brain surgery, if not for the fact that you are participating in that study.

JJF: Okay.

MP: So you are undergoing brain implant surgery purely for the research.

JJF: Right.

MP: And so here, I think it’s a little bit different because you would have the device implanted, and then it’s like having a job, actually. You would be a member of the research team, and these research studies can span anywhere from three to five years. And you would usually have to come into the lab maybe up to three times a week for several hours at a time. So you have the BCI implanted, and then they record the data, and, you know, they have some devices hooked up to your ports on your head, and they’re collecting your brain data while you’re doing a bunch of different tasks. And it’s pretty tiring, actually. We have really. . . some qualitative research showing this. And yeah, so folks are pretty integrated into the research team. So they’re going into the lab several hours at a time, for up to three days a week, for up to three to five years.

JJF: Oh, wow.

MP: These studies, though, are different in that you’re doing surgery just for research. It doesn’t help you treat or manage your spinal cord injury, but it does improve your quality of life, right? So you have access to use a prosthetic arm, for instance. But usually, it’s not outside of the lab. So it’s something that you would be able to use when you’re inside of the lab. And then, sort of, depending on the study, sometimes you are required to have the device explanted or surgically removed when the study is over. And the reason for this is, well, there’s different reasons, but these are experimental devices, right? Like I said, BCIs are not FDA approved.

JJF: Right, right.

MP: So insurance would not cover you to continue using it; it’s really expensive to try and maintain. And then also, sometimes researchers would say things like, we don’t know what the impacts of this device would be outside of kind of the

JJF: The lab setting.

MP: The lab setting. Outside of that three to five year framework, we don’t know what it would do to the brain. Like, maybe it might move or something like that, so you have to have it explanted. Or, we’re not going to require you to, but there are risks if you keep it in, and we can’t continue supporting you.

JJF: Right. Oh, wow. So in experimental trials, the patient is not undergoing surgery already. They get recruited for a trial and given the opportunity to participate in this trial. And then that’s when they would undergo the brain surgery. Otherwise, had they not been recruited, they would not be undergoing the risks of the surgery.

MP: Mm hmm.

JJF: And then they participate in this trial for three to five years, which could improve their quality of life. But because it’s a trial, we don’t know for sure.

MP: Yeah.

JJF: They get access possibly to, you were talking about the prosthetic arm or there was the eye that you were talking about too a little earlier in our conversation, that it creates kind of an artificial vision center.

MP: Mmhmm. So I can talk about that one a little bit later, but the participants do use that outside of the lab. So that’s kind of a different case. We can go back to it.

JJF:  But the arm doesn’t. . . so some of the stuff can be used outside of the lab depending on what you had done, and some of it can only be used in the lab. And then at the end of the three to five years, either the device needs to be removed or there’s no more support. So if. . .  you’re kind of taking a risk if you keep the device in because they don’t know if the device is going to continue to be safe outside of the five year window. And there’s no support for you anymore.

MP: That’s right. And I would say, too, that you might also have the device in your head, but maybe it’s turned off. It’s not functional.

JJF: Oh, Okay.

MP: Yeah.

JJF: So either it’s removed or it’s turned off, or in some cases, you’re allowed to keep using it, but there’s no support. So there’s a variety of different ways in which the trial is, like, closed, I guess, is a way of talking about it.

MP: Mm hm.

JJF: So what do we need to know about the ethical landscape of this stuff? That’s a really broad question.

MP: Yeah, yeah.

JJF: What do you want to highlight? Like, what do people need to know about this?

MP: Yeah.

JJF: when it comes to ethics?

MP: I think whenever I tell people about experimental trials or BCI studies, these, like, they sometimes call them first-in-human BCI studies

JJF: Cause they’ve been tested in animals beforehand, probably.

MP: Yes. Yeah. So I think it raises this really interesting and maybe sometimes scary question about what are the limits of human subjects research? So I think even basic intraoperative studies like, these are new contexts that I think when the Belmont Report was created in the 70s in response to egregious practices used on human subjects, I don’t think they were thinking about this kind of research. So I think that it just raises new questions about ethical principles for human subjects research, and then just also the limits of research on humans in this context. So I think that’s kind of a ripe area for research that I think I’m thinking about and lots of other folks are thinking about. And then going back to this thing about benefits. So, if the BCI does not help treat or manage, say, the spinal cord injury, but improves the participants’ quality of life in some way, so I don’t call them patient participants in this context because they’re not patients. They’re participants in research, right?

JJF: Right, so this wouldn’t be their primary physician that’s doing these

MP: No, this is different.

JJF: Experimental trials.

MP: This is different. And I think there is this really serious question about benefits. So there is qualitative research out there that tells us that these BCI participants are sometimes referred to as brain pioneers. They really feel a sense of belonging and purpose when they take part in these studies. They feel like they’re part of a team and that they’re really helping to advance science. Like, it’s a really meaningful way to contribute for them. So I think that’s really important. But at the end of the study, you might have to have this thing taken out. And so there’s just this question of, like, is that fair? And what kind of benefit should we be providing for these participants who are literally putting their brains on the line to help advance science, sometimes to get a sort of clinical benefit, sort of depending on the study, right? But we don’t have any sort of provisions to think about long-term care, what they should get out of it. So I think that makes me very uncomfortable because it inherently raises questions about unfairness, because the researchers, who I think are amazing people, super smart, do get a lot of benefits when they do this kind of work, right? You’re doing groundbreaking research, you’re finding out this or that about this particular device. But the people who made this work possible basically might have the device explanted. And then, worse, in other studies. . . so there’s a couple of other kinds of studies I should mention. So one was the one we were talking about, the BCI for acquired blindness.

JJF: Mm hm.

MP: So this was a study that was conducted by a medical device company, and basically they implanted something like 350 participants globally. They were using these devices out in the world to sort of navigate space, and then the company basically went bankrupt in 2019.

JJF: Oh wow.

MP: And the participants of the study found out not through the company, but through the news that they went bankrupt. And you can keep the device, but basically, it’s going to become obsolete in a few years.

JJF: And there’s no support now, I would imagine if anything goes wrong with the device.

MP: No. And no replacements, you know, for pieces that you might have to change and things like that. So we are seeing cases where participants are being harmed in some way, right, as a result of these sort of other factors. So bankruptcy, you don’t really think about that, but

JJF: Yeah.

MP: In the neural device space, there’s a lot of collaboration between government funding and industry partners. And I think that’s a really great thing because you’re getting these devices from these companies, but they also have a bottom line. They do want to make money off of these devices. And so if they decide that this is not going to be a good market to invest in or something like that, and they switch directions, they’re allowed to do that, but the people who have the devices implanted basically might get screwed over.

JJF: Oh, wow. I have so many thoughts and questions right now. Yeah, so on the one hand, I was thinking about what you said, that, you know, you run a three to five year study, and the researchers involved in that study are probably getting funding, possibly funding from industry partners, possibly funding from government, maybe both. This might also advance their career, right? So they run this study, and that might be really helpful for their career as researchers, allowing them to reach certain milestones, get more access to funding, bigger teams, blah, blah, blah. But then I’m thinking about the patients and just, like, Well, I guess patient isn’t the right word. Sorry, the participants. And just at kind of the most basic level, this becomes like your life for three to five years that you go into the clinic. I think you were saying several times a week for a few hours every time, and you’re part of the team. But then at the end of the research, even before we talk about the technology and what happens with the technology that is now part of your body. But even outside of that, like, at the end of the research, the team wraps up and, like, the professional members of the team move on to do other things, but, like, what happens to you, right? So there’s this sense that you’ve kind of, like, given over a huge amount of your life to this for several years, and then it just wraps and it’s just done. And I feel like that, by itself, could be quite hard to deal with for some people. And then add on to that the idea that for some people, the device itself may have hugely improved their quality of life. So their quality of life could be improved by being part of a team and feeling like you’re part of something meaningful. But then the device itself, depending on how well it works for you, could also improve your quality of life. And then to have that taken away, too, at the end of the trial, like, Wow.

MP: Yeah, it’s really complicated.

JJF: Yeah.

MP: And the other thing I would say is I’d probably also provide a link to the BCI Pioneers Coalition. So this is a nonprofit group that was started by Ian Burkhart, who I’m actually also collaborating with on a different project, related to neuroethics, obviously. But he was one of the first human BCI participants in the US. And this nonprofit group basically tries to build community among BCI participants across the country. Yeah, and I think there’s just a desire to connect with other people who have undergone this transformative experience. So Ian’s no longer part of the BCI study, but he’s very much steeped in that world, giving talks, doing awesome things, obviously, connecting people, building community. But I do think it does become such a big part of your life that somebody like Ian decided to start the Pioneers Coalition to continue connecting with people. So I do think there’s, uh, the participants themselves are organizing.

JJF: Like advocating.

MP: Yeah. And I think that’s really amazing. And I think on the research side, we have to kind of just think more about I think fairness and how we can sort of make sure we compensate participants appropriately. And I think it’s tricky because anytime you have government funded research, there’s limits on how much you can compensate people. And then, obviously, for disabled folks, you can’t make more than a certain amount of money per year in order to be eligible for benefits. So there’s all sorts of complications, but I do think there are creative solutions to these kinds of things, and I think to show appreciation and to acknowledge the contributions of these pioneers, we really have to think about fairness and benefits, even if it’s not clinical benefit.

JJF: Yeah, you mentioned also the idea of, like, care after the trial, as well. I could see that as a way of acknowledging and giving thanks as well to try and maybe support and provide these kinds of spaces for people to connect with each other and stuff. . . Yeah, I feel like there are creative solutions that could be done to try and make sure that things are more equitable and that people don’t feel quite as cast adrift, maybe.

MP: Mm hmm.

JJF: This maybe kind of gets to one of the other things I wanted to ask you about, which is that throughout your research, you kind of note that patients, I guess, in the opportunity studies, and then participants in the experimental trials, are often seeking different things in these studies than perhaps medical researchers are, and that this might also complicate the ethics when it comes to kind of thinking about these trials and these studies. So can you talk about that a little bit?

MP: Yeah, I think for the basic studies, the researchers are primarily interested in, you know, obtaining generalizable knowledge about the human brain.

JJF: Mm hm.

MP: So that’s a very different aim from a patient participant who’s like, I want to give back to my clinician.

JJF: And advance science, yeah.

MP: Yeah. And maybe help folks who are suffering from the same condition that I am. So I think those two things can be separate, but one way to sort of actually maybe even bridge some of that is to do more participant engagement.

JJF: Right.

MP: So then we don’t quite have this disconnect between the different kinds of aims. And maybe if you had more robust engagement with these different communities and populations, you actually sort of have a better understanding of what their needs are and what sorts of motivations they have to try and sort of fill in some of those gaps if you cannot provide a clinical benefit.

JJF: Mm hm.

MP: So I think, sometimes these aims can diverge. And I remember when I did some interviews, like a while back, with patient participants in these basic studies, and they didn’t really even quite understand what the basic study was for. They couldn’t remember, right? And they thought, Well, at the end of the day, shouldn’t it be benefiting, like, Parkinson’s patients? Like, yes, maybe, but we just don’t know, right?

JJF: Right. So understanding how the language centers of the brain work, maybe that will benefit Parkinson’s patients, but the goal is just to understand the brain better, right? We don’t know what benefit that might bring.

MP: Yeah. And I think, in a sense, you know, when you’re writing grants, people always make these, like, really grand promises, right? Like, about, Oh, if we just knew this thing, then we could cure all these different conditions. But I mean, I think that would be the hope, but it’s usually a pretty long time before something like that happens. But yeah, I do think the motivations can be really different. And I think part of what participant engagement can do is maybe bridge some of those divergences and also just to get people talking with each other about what their needs are, what they can provide, what’s fair, what’s not fair?

JJF: Mm hmm.

MP: Yeah, I hope that answered the question, but I’m happy to. . .

JJF: Yeah, no, I think it does. Because I think if researchers aren’t necessarily aware of what patients are seeking or of what patients are hoping for, then the risks of miscommunication and possibly exploitation just go up, right?

JJF: So we’ve been talking about exploitation, and you noted in your work that exploitation is definitely possible. We’ve talked a few times now about how it might be possible that you might say yes to the surgeon because you want to keep your surgeon happy, for example, for the opportunity studies. But you also highlighted a distinction between harmful exploitation and something you call mutually advantageous exploitation. So can we talk about that a little bit and why that distinction might be important to keep in mind?

MP: Yeah, for sure. So I want to first note that I did not make this distinction.

JJF: Oh, I apologize.

MP: No, no, no. So this is the work of Alan Wertheimer. He has a book, it’s called Exploitation, from 1996. So brand new in philosophy, right? Anything from the 90s is still new?

JJF: Yes, definitely.

MP: But basically, the distinction, let me just read you the actual definition, and we can sort of say more. So harmful exploitation takes place when let’s say you have two parties who are involved, say, A and B. And basically A takes unfair advantage of B and A benefits in some way, while B is harmed or made worse off as a result of the interaction.

JJF: Right.

MP: So when we think of exploitation, we usually think of those kinds of relationships where one person benefits and another is basically harmed as a result of that interaction that took place. Mutually advantageous exploitation is such a funny term, philosophers. But basically

JJF: we excel at that.

MP: But basically, again, you have two parties, right? And here, A takes unfair advantage of B, even though both A and B benefit from the interaction. So I think when we think of exploitation, we think, someone can’t possibly be exploited if they benefited from that interaction that took place. But I think what’s really amazing about Wertheimer’s work, and, of course, lots of people disagree, right, about his definition. And I won’t get into all that, but there’s a whole body of literature on that as well. But I think kind of the main thing for me is that you still can be exploited, even if you consent and you understood the terms, right, and even if you benefited in some way. And so I think just thinking about this in the context of neural device research, you might still think that some of these patients or participants, depending on the context, are exploited, even if they benefited from the study in some way. So even if it’s not a clinical benefit, but let’s just say it’s an altruistic benefit, you felt really good about advancing science. You might still think they’re exploited in the sense that the distribution of benefits between the parties who are involved, namely the research community and the participants or the patients, it’s not even. It’s not fair in some way, because the researchers are getting all of these, you know, accolades for doing these amazing things, getting funding, probably more funding as a result, right, of their publications. But the participants themselves get to fulfill altruistic motivations, but they don’t get payment.

JJF: No recognition.

MP: Yeah, they might have to get the device explanted. And I think the researchers are very grateful for the participants and the patients who take part in the studies, but there are no material benefits for them. And so I think in that sense, you might say, maybe they are exploited because this is kind of an unfair relationship. Even though they benefit in some way, it’s not a fair distribution of what the benefits are because one group or one community is substantially benefiting as a result of the participation.

JJF: Mm hm.

MP: And sort of a different way of thinking about it. . . So Wertheimer has these really funny examples to illustrate the point in his book. He says, imagine the two parties. Okay, so you have the two parties A and B. So imagine this person, let’s just call her Michelle. She’s driving on the road. It’s snowing and her car lands in a ditch. And then this other person is driving by and sees that the car has landed in the ditch. And let’s just say at market price to pull a car out of a ditch in the snow, let’s just say it costs $500. That’s the market price for it. But this person says, Hey, I can help you, benefit you, you know, get you out of that ditch. But it’s gonna cost you $3,000.

JJF: Right.

MP: And so the example there is just it’s sort of a silly example, but the idea is just that I did benefit

JJF: from being pulled out of the ditch.

MP: But they also took advantage of me.

JJF: Yes. Yes.

MP: Right? By basically taking advantage of my vulnerability because I don’t have any other options. And so I say “yes”. And so they pull me out, but then I pay a huge price, even though I benefit.

JJF: Yeah.

MP: And they benefit substantially. They make a lot more money.

JJF: Yeah, yeah, yeah. That is a silly example, but I think it’s very effective.

MP: But the idea is just that exploitation can happen, even if you understand the terms and you consent.

JJF: And even if you feel a benefit at the end.

MP: Even if you benefit in some way. And I think the main emphasis is on who benefits in the relationship and how much each of the parties involved benefit. And so exploitation can still take place.
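[To put Wertheimer’s distinction and the ditch example in rough arithmetic, as an illustrative sketch: the $500 market price and the $3,000 charge are the figures from the example above, and the value V that the stranded driver places on being pulled out is an added assumption.

\[
\text{Driver’s net benefit} = V - \$3{,}000 > 0 \quad (\text{assuming } V > \$3{,}000), \qquad
\text{Rescuer’s net benefit} = \$3{,}000 - \$500 = \$2{,}500.
\]

Both parties come out ahead, which is what makes the exploitation mutually advantageous rather than harmful; the unfairness lies in the rescuer capturing nearly all of the surplus by leveraging the driver’s lack of alternatives. In harmful exploitation, the exploited party’s net benefit would instead be negative.]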

JJF: Yeah.

MP: And some scholars, so this is Lena Janssen’s work, say that part of it is because one party maybe is taking advantage of the vulnerability of the other party inside of that relationship, even if they both benefit. And here, in neurodevice research, they are taking advantage of the fact that these folks are disabled, right? So if you have a spinal cord injury, we’re not going to injure you more if we sort of implant this. So we’re taking advantage of this opportunity. But then, even if you’re fulfilling altruistic motivations, we’re not giving you any other sort of benefit when you take part.

JJF: And there’s also the vulnerability and inequality that just exists between participants and researchers, or patients and surgeons. There’s a power inequality there already, and then you add onto it the vulnerabilities that you’ve talked about. I think that’s really interesting, and I think it’s important to note, because I could imagine people saying, but look, the patients or the participants do benefit, whether it’s a tangible benefit or not. They’re benefiting from being part of a community in the case of the research trials. And I could imagine both the participants and the patients benefiting from feeling kind of altruistic, feeling like, you know, you’re giving back to research. But there’s still a power imbalance here, and there’s still an inequality in terms of who benefits.

MP: Yeah. I think the other thing, maybe a broader point to note, is that sometimes with research, I mean, especially research involving disabled folks, it’s always described as, we will benefit this community in the long run, or something like that. But we have lots of research that’s been done on and with disabled folks, but not to their benefit in the end. It benefits other industries or something like that. So I also just want to be mindful of that history, and I think that’s why the engagement angle is so important, ensuring that the perspectives of brain pioneers and participants in these basic studies are taken into account. And I think also, like, we’ve never really asked them: what other benefits would you like?

JJF: Yeah, like, what would you like to see happen?

MP: What would you like to see happen? Would you like maybe some kind of additional payment? Would you like additional care?

JJF: Or additional support for your device?

MP: Yeah. Like, I think something built into the system that is responsive to the needs of these folks would be really important. Like, it shouldn’t be me deciding. It shouldn’t be researchers deciding. I think it importantly needs to be the patients and the participants who are advancing this science.

JJF: I think that’s really interesting. So if I understand correctly, it’s like maybe consent is not enough here, and we need to actually build room for agency into the design of these experiments, right?

MP: Mm hmm.

JJF: It’s not just that you can say yes or no, but that you actually will have a say in what the goals of this research are or in what happens after the trial closes or things like that.

MP: Yeah, I think, actually this maybe comes back to your point about power.

JJF: Yeah.

MP: Maybe distributing the power in a slightly different way so then these folks have a say in how this work turns out and how they participate and how they’re compensated, whatever that’s supposed to mean, even if it’s not money.

JJF: So, what would you like to see happening going forward with regards to this kind of research, either the opportunity studies or the experimental trials? Yeah, what kind of changes? You’ve mentioned some changes you’ve already seen, but what kind of things would you like to see going forward?

MP: I think this is probably just going to go back to some of the things we talked about already.

JJF: Let’s sum it up. I love it.

MP: Yeah. I think more participant engagement in this space. And I don’t just mean more research on the participants and the patients about their views. We can have more of that, and that’s great: qualitative interviews, focus groups, asking them about their experiences being part of these studies. But maybe using more collaborative approaches to engagement, maybe like community-based participatory research, where researchers and brain pioneers are partners in research, where they’re collaborating together and incorporating the perspectives of these folks inside of their research aims. I think that’s actually maybe the next step. And hopefully, you know, people are open to thinking about that. And that’s what I’m trying to do in the neuroethics space as well. And then I think just to think more seriously about the limits of human subjects research. How much of this can we do? Because the other thing I should note, too, is that once researchers write grants for this kind of thing and, say, it goes to the NIH for funding, they usually have already received IRB approval, so that’s institutional review boards. This is the institution that checks to make sure that there’s ethical treatment of human subjects in research, but it’s. . . I’m sort of a little worried that the Belmont Report and the principles that were created are perhaps ill equipped to handle the complexities of this research. Like, when that report came out in the 70s, I don’t think they were thinking that there would be physician investigators, where these folks are occupying this dual role. I think there was kind of this hard separation between research and clinical practice. But those lines are increasingly becoming blurred. And yeah, what does it mean to amplify risks in a study? So I’m really curious to know, how are IRBs approving these studies? And how are they using the Belmont principles? So roughly, the principles are justice, respect for persons, and beneficence. But we don’t have that benefit angle, right, inside of some of this work. So I’d like to know a little bit more about how IRBs are handling these cases. I think that would be important. It would be important to know how they’re evaluating the ethics of this research. And to add a further wrinkle about IRBs in the US: usually when people think about them, they are situated at academic institutions, but increasingly there are private IRBs in the US that are for profit. And so I wonder, are the private IRBs evaluating these studies differently than, say, an IRB at a university institution? So those are kind of just more questions; I don’t know. And so, like Neuralink, which I believe right now has two participants in a BCI study. How did they get approval from a private IRB? I’d be curious to know. Although I think that’s not really information that’s available out there.

JJF: So, it may be that we need updated research ethics board standards, and that, I think, goes along with your point that we need to start thinking about the limits of human subjects research. And, I mean, if this stuff hasn’t been updated since the 1970s, it may well be that we need updated codes, right? Updated codes of what is ethically appropriate research and what is not, given what our technological capabilities are now and what we can do. Also just given the social changes that have happened, whereby researchers and primary care physicians can be the same person, for example, or the same team of people. But then you were also talking about the. . . I did not know that research ethics boards were now sometimes, or increasingly, private companies. And yeah, I think it’s concerning that there might be for-profit companies doing this kind of work. But that ties in with another question that I wanted to ask you, which is particularly about the implantation of devices, so the trials rather than the opportunity studies. You’ve also mentioned Neuralink now. And I was going to ask: these devices that are now in your body and a part of your body and a part of your phenomenological experience of the world, particularly when you’re talking about the vision one that isn’t just in the lab. Like, who owns the devices that are in your body, or the data coming out of the devices that are in your body?

MP: Yeah, that’s something that comes up a lot. Um, questions about, I think, personal identity, for sure, your relationship to the device and how it sort of changes your orientation in the world.

JJF: Yeah.

MP: There’s been sort of a lot of debate about what they call, like, the personality disorder debate. Like, does it change your personality or something like that? And I think the scientists are kind of not particularly concerned about it, but it seems like the philosophers are on some level. I don’t really have skin in the game when it comes to that.

JJF: That’s fair. I wasn’t even thinking of the deep metaphysical questions, though I guess they can be there. I was more thinking, like, if you come to rely on something,

MP: Mm hmm.

JJF: You know, I’ll take something very basic. Like, I’m wearing glasses right now. I need them to look at the computer screen. I would be surprised if the company I bought them from showed up and was like, Hey, well, you’re done with these, you know what I mean? And took them away. And, I mean, I know that people participating in a trial are aware that they are participating in a trial, but five years is a really long time to become accustomed to a device, for example, as being part of your lived experience and your lived reality. And I just wonder how people feel when they discover the device is not actually theirs at the end of the five years. And maybe they knew that going in, maybe they didn’t.

MP: Yeah, I mean, I think there is some research out there that tells us that participants are not thinking that far ahead.

JJF: Yeah,

MP: right? So there was this really famous depression study involving DBS devices. And the study basically got halted early because of safety concerns. And there’s, you know, controversy around whether it was appropriate to stop the study at that point. But of the, I think, 90-something participants in that study, the depression study, 44 felt like they benefited, clinically benefited, from the device and wanted to keep it. But basically, there was no money allocated for that. They could either have the device removed, which would be paid for, or get one battery replacement, and that was it.

JJF: Wow.

MP: And so I think there, it’s like, it actually seems pretty cruel.

JJF: Yeah.

MP:  If it is making a big difference and if you have refractory depression, nothing else is working for you. And then, oh, safety concerns.

JJF: Study over.

MP: Study over. No additional care. If you want to keep it, you deal with it. People don’t have money to pay for that kind of thing. And so I think, at least in the neuroethics literature, the acknowledgment is that a lot of people think that’s not right. And this is why so many people have focused on this issue of post-trial care obligations. What do we owe participants once a study is over, especially if they find some kind of benefit, whether it’s to quality of life or whether it’s a clinical benefit? Do we owe them something? And I think there are practical considerations to be made, at least in the US context. Like, who’s going to pay for it? For how long? Like, how much benefit do you need for us to then guarantee the continued care? But I think that there is suffering there.

JJF: Yeah, another place where I would think that having the participants at the table for all these discussions is really, really important.

MP: Mm hmm. And I think that, you know, given the data that they’re providing to the device manufacturers, to the researchers, it seems like they should be owed something.

JJF: I want to thank you so much for meeting with me today, Michelle. This was so fascinating. Is there anything else that you’d like to leave our listeners with regarding brain computer interface trials, patient consent, research ethics, or anything else in this area?

MP: Yeah. I mean, I think we covered a lot of it. I think I would just say that, you know, anytime we do research, especially human subjects research, usually the participants and the patients are anonymized. And so I think we kind of take them for granted in some sense, but you can’t do that research without those people. And I think that acknowledging the way in which these people are putting their brains and their bodies on the line to help advance science is really crucial for building trust and also for equity in research, you know? So I would just say, at the end of the day, we really need to keep front and center the folks who are contributing to this research, who are the patients and the participants.

JJF:  It’s not just the scientists.

MP: It’s not just the scientists, and I do think they are owed something, but what they’re owed exactly should be determined with them and not just for them by some other entity out there. Also, we need to rethink IRBs.

JJF: Definitely. Thank you.

MP: Thank you, Jill.

JJF: I want to thank Michelle again for sharing her research on brain computer interface trials, patient consent, and human subjects research ethics with us today. And thank you, listener, for joining me for another episode of Cyborg Goddess. This podcast is created by me, Jennifer Jill Fellows, and it is part of the Harbinger Media Network. If you want to check out some of the other podcasts on that network, I’ve left a link in the show notes. Music for this episode was provided by Epidemic Sound. You can follow us on X (formerly Twitter) or BlueSky, or follow me on Mastodon. And if you enjoyed this episode, please consider leaving us a review. Until next time, everyone. Bye.
