No Easy Answers in Bioethics Podcast

Considering Consciousness in Neuroscience Research: Cabrera and Reimers - Episode 19

January 7, 2020

What can neuroscience tell us about human consciousness, the developing brains of babies, or lab-grown brain-like tissue? How do we define “consciousness” when it is a complex, much-debated topic? In this episode, Michigan State University researchers Dr. Laura Cabrera, Assistant Professor in the Center for Ethics, and Dr. Mark Reimers, Associate Professor in the Neuroscience Program, discuss the many layers of consciousness. Examining recent research on lab-grown brain organoids, they explore the moral and ethical considerations of such research, including how future technologies could challenge our definitions of consciousness and moral agency. They also distinguish consciousness from intelligence, touching on artificial intelligence.

This episode was produced and edited by Liz McDaniel in the Center for Ethics. Music: "While We Walk (2004)" by Antony Raijekov via Free Music Archive, licensed under an Attribution-NonCommercial-ShareAlike License.

Related Items

  • Trujillo CA, Gao R, Negraes PD, et al. Nested oscillatory dynamics in cortical organoids model early human brain network development. bioRxiv. 2018:358622. DOI: 10.1101/358622.
  • Sawai T, Sakaguchi H, Thomas E, Takahashi J, Fujita M. The Ethics of Cerebral Organoid Research: Being Conscious of Consciousness. Stem Cell Reports. 2019;13(3): 440-447. DOI: 10.1016/j.stemcr.2019.08.003.

Episode Transcript

Liz McDaniel: Hello and welcome to another episode of No Easy Answers in Bioethics, the podcast from the Center for Ethics and Humanities in the Life Sciences at the Michigan State University College of Human Medicine. This episode features Center for Ethics Assistant Professor Dr. Laura Cabrera, and Dr. Mark Reimers, Associate Professor in the Neuroscience Program in MSU’s College of Natural Science. Their conversation focuses on the complex topic of consciousness and the brain, including recent research on lab-grown brain organoids. They discuss moral and ethical considerations of such research, including how future technologies could challenge our definitions of consciousness and moral agency.

Laura Cabrera: Welcome everyone. I'm Dr. Laura Cabrera, an assistant professor of neuroethics at the Center for Ethics and Humanities in the Life Sciences. And today I have Dr. Mark Reimers once again joining me for this podcast. Dr. Reimers is an associate professor of neuroscience and biomedical engineering at the Institute for Quantitative Health, and for those of you who might have listened to one of the previous podcasts where Mark and I talk about other interesting issues, for this one we're going to talk about consciousness and the brain. And again, a lot of the conversations that Dr. Reimers and I have sit at the intersection of neuroscience and its ethical and philosophical implications. And what better topic than consciousness, right? Consciousness has been one of the most debated topics in philosophy and neuroscience. So, to start, Dr. Reimers, what can neuroscience tell us about consciousness?

Mark Reimers: Thanks Laura. Well, to be frank, neuroscience can't tell us a lot of what we would like to know about consciousness. But it can tell us something, and that's more than it could tell us, you know, two decades ago. So I think there's something to be proud of. Roughly speaking, we use the word "consciousness" in many different senses. And there's not a single neuroscience theory to explain all of those senses. There's not, as far as I can tell, a real singular kind of consciousness that explains all of the ways we use it in everyday life.

LC: Mm-hmm.

MR: That being said, I think we can say something about conscious awareness, although we can't really say something meaningful about perhaps higher consciousness or conscious deliberation, or at least we can say less about those. But let's talk about conscious awareness. What do we mean by that? We mean that you've had an experience, and you know that you've had the experience. You can talk about the experience, you can tell us that you've had the experience. As opposed to having an experience unconsciously when the same things may have happened to you but you're completely oblivious.

LC: Mm-hmm.

MR: And we can certainly, you know, talk about those kinds of things in many circumstances. But the way that that's measured experimentally is if you give a person a very brief sound or a very brief image, and then immediately follow it with some other sound or image that lasts for longer, and then the person may or may not be able to report the first sound or image. They may have no memory of the first, which may only last 100 milliseconds or 50 milliseconds or 30 milliseconds.

LC: Mm-hmm.

MR: So very very briefly. And if they are able to report it, then we can measure, we can look at what is different about their brain activity compared to when they had exactly the same very brief exposure but they can't report it.

LC: Mm-hmm.

MR: They think nothing's happened. And very roughly speaking what we can see, and this is not anything that I've done but work by a number of researchers, particularly I would mention Stan Dehaene in Paris, is that there's a sort of recurrence of activity. The sensory processing looks very similar in both cases in the sensory areas of the brain, but what seems to happen is about 200 milliseconds after the sensory activity, there seems to be a resurgence of prefrontal activity, particularly in the medial areas, which then seems to trigger a recurrence of the sensory activity about 300 or 400 milliseconds after the first wave of activity reflecting purely the sensory processing. And then the prefrontal areas and the sensory areas sort of continue in a dialogue for perhaps half a second. And they also engage several other areas of the brain, particularly the hippocampus. So it seems likely then that what we experience as a conscious experience is something that reflects a conversation between several different brain areas and the areas that have actually taken in the sensations that we think of as the experience.

LC: So this, I mean this is very interesting, and I guess it really touches on that very important point about, you know, what constitutes consciousness. And I guess a recent example might be interesting for the audience to hear about, if they haven't heard about it, at least in terms of consciousness: a recent paper in the journal 'Cell' talks about these cortical organoids, basically these three-dimensional blobs of brain-like tissue created from induced pluripotent stem cells. So you would wonder, you know, where is the line at which this brain-like tissue might start developing something like this type of conversation between different areas, so that we could call the organoid conscious. In the study they reported that, you know, as the organoids age, at like 2 months, for example, they started to notice certain network events, or coordinated firings of many neurons. So what does that mean? Does that mean that organoids like that, if we let them live long enough, can become conscious?

MR: Well Laura, that's a very good newspaper headline- [laughs]

LC: [Laughs]

MR: But I don't think it's, you know, really a big worry at the moment. So first of all, you know, we talk about brain waves or synchronized electrical activity, and we can apply that to all kinds of scales. So when, you know, a person's having an epileptic seizure, they're having, yes, a brain wave and it's synchronized activity. But it's not consciousness, it's not thought at all. Or if we are looking at a very early infant's brain, let's say in the womb, the fetus is having, you know, coordinated brain activity, and so does a rat fetus, and those are important for wiring up the early stages of development. But again, there are no experiences happening. So the waves of activity that they were describing in this paper are actually happening at a frequency of about once per second. Which seems fast to our storytelling minds, but is actually about 50 times slower than the timescale on which the interesting things happen in the brain.

LC: Mm-hmm.

MR: So if we wanted to look at actual activity that's sort of a signature of the brain at work, or processing or engaging with the world, then we're looking at oscillations that might be on the order of 30 or 40 or 50 oscillations or cycles per second. And the time scales there are in the tens of milliseconds rather than in the seconds. The coordinated activity that they refer to is what we call in actual brains a delta rhythm or a delta wave, and that's reflecting a sort of biochemical event where the membrane potential of lots of cells, which are communicating with each other on a cellular level, goes down or up more or less simultaneously. And so that makes the cells excitable for a while or less excitable for a while. And this is the sort of thing that happens when you're deeply asleep. You will have these big oscillations in amplitude across all the cells simultaneously, where they're all becoming very quiet or they're all becoming more arousable. It doesn't mean that they are in fact active.

LC: Mm-hmm.

MR: So that's what's being reported there. If you look at the pictures in the data that they actually report, there is no evidence of any faster activity that would be more characteristic of a brain at work.

LC: So now, you talked at the very beginning of this answer about, you know, fetuses.

MR: Mm-hmm.

LC: And so, at some point in the paper they talk about, you know, this activity that resembles the brain activity of premature babies. I know that's different from a fetus, but this raises two questions for me. One is, when do we know that babies, not fetuses, are conscious?

MR: Okay. Do you want me to answer that-

LC: Yeah.

MR: -And then come to the second? Sure. So, we don't know is the short answer. But the evidence we have suggests, and this is again work done partly by Stan Dehaene but more by his wife, and they showed that very early infants basically don't have that kind of distinction. They don't show any evidence of rapid communication between the prefrontal cortex and the sensory areas. But particularly over the first year, and particularly after 9 months, you start getting myelinated connections between the prefrontal cortex and the rest of the brain, and a myelinated connection means that the messages can be sent much more quickly, and so there's, you know, enough time for a message to be sent and to be returned before, you know, that brain region forgets what it was talking about to begin with. So when that rapid communication is available, then you start seeing that signature of brain activity that is associated with consciousness. Not only a sensory impression but also prefrontal activity, and then a resurgence of activity in the sensory areas, and a continued conversation between these areas. So that signature, you know, it's not an all or nothing thing. It doesn't sort of switch on right at 9 months, but it seems to, you know, steeply increase after 9 months and really continually increase over early childhood.

LC: And so the second, related question: it has been postulated that consciousness is an emergent property of brains. And you mentioned that at this point we shouldn't really worry about organoids being conscious, but, you know, what if with time we develop ways to keep these organoids alive and they can start developing myelination? Do you think in the future we could get to a point where these organoids might develop something more like what we call consciousness?

MR: Well I think it's always risky to say something can never happen in the distant future.

LC: Mm-hmm.

MR: Such predictions have been wrong again and again. However, I think that there is perhaps a fundamental common mistake that people make when they say even something which you and I would agree on, that consciousness is an emergent property, and most people would say, "of the brain." I mean what else could it be? Except that I think the "what else" is all of the structure of the input that's coming into that central nervous system from the world. I don't really believe that brains in vats, put in vats their whole lives, will attain much functionality. Our brains have evolved to deal with the kinds of sensory inputs and motor outputs that engage us with the real world. And I don't think that without those, they will attain anything like what we might call intelligence, much less consciousness.

LC: So now this raises yet another question, because we've been talking about human consciousness. But I guess at least some of us might see in other animals that they're conscious of something. So what can you tell us about animal consciousness?

MR: Again, not a whole lot. [Laughs]

LC: [Laughs]

MR: So the kinds of experiments that, again, Stan Dehaene and others have done with, you know, recording brain activity in response to stimuli and then having the animals—again, animals don't tell you stories about what they've experienced, but they can behave and make decisions in ways that are conditional on what, at least, we say they think they've experienced. And that's not always the same as what's actually happened. And in those kinds of experiments it seems that there's something similar to what we see in human beings. That is, there has to be some sort of prefrontal engagement with the sensory input and then reactivation of those sensory areas in order for an animal to act on, or make a clear decision about, something in a way that we would think of as like reporting, or like being able to say that you've had an experience. And we don't know if they're reactivating exactly the same cells, just broadly the same areas. So it seems at least for conscious awareness, and again I've said that that's only one sense in which we use the word consciousness, but at least for that kind of conscious awareness it seems that that's a more continuous property. And I don't expect that, you know, if you could give chimpanzees a voice box they would suddenly start telling you about their experience, because I think there are many other kinds of social interactions that are important for consciousness that chimpanzees don't often engage in.

LC: So, what do you think are the ethical implications of working with brain organoids if we are unsure how consciousness emerges? I know you mentioned that, you know, you think they would need a different type of input than what they're getting. But I also think it's problematic in a way that we don't have a clear line defining when a certain level of consciousness gives rise to, you know, moral considerations. And maybe if we set aside brain organoids, because maybe that's, you know, a question where we already know the answer, that they're never going to be a moral or ethical problem there, this gives us space to start thinking about babies, because, you know, there are a lot of medical decisions—

MR: Mm-hmm.

LC: That we make where we really do need to think about, well, when does a certain level of consciousness become a matter of moral concern?

MR: I think that that's an important question, Laura. My sense is that if we're going to... if we're going to live with the kind of technology that we're inventing, we'll need some way of formalizing the kinds of gradations of moral agency and consciousness. Right now, under the American and most Western legal systems, you know, agency is essentially a binary yes or no decision. Once you're over 18 you're considered responsible. If you're under 18, maybe not, for many things. So I think that we can't accommodate all of the sort of possibilities that are even arising now, much less that will arise with these new technologies, if we're committed to a yes or no version of moral agency or consciousness. So I think we will have to rethink those things. But I think it's going to make for some very difficult decisions, at least in terms of the kinds of absolutes and legal hard lines that we've been in the habit of working with.

LC: So I guess this gets, again, to the point that we're talking about different technologies, and so moving from brain organoids to start thinking about things like artificial intelligence. I don't know if you've seen the movie 'Ex Machina,' where clearly that's a theme they try to explore. And so I guess the question here is, do you think that artificial intelligence systems can be conscious? Will they be a different type of consciousness than human consciousness, and what might be the implications of that?

MR: So Laura again I would hesitate to say that something can, you know, can never happen, a broad class of things can never happen. I think that if... if we could see evidence of something closely analogous to the kind of sort of reactivation that is a hallmark of at least conscious awareness, then we might have a case for saying that these machines could be consciously aware. I'm not sure that that would be enough, but it would be at least a necessary condition, which I'm not aware that any current artificial intelligence actually exhibits.

LC: Mm-hmm.

MR: And I think that, you know, we mean so much more by consciousness, and in particular a sort of social obligation and awareness, much more than just awareness of a particular sensation. And it's hard to know what the analog of that would look like. We don't even know what that really looks like in the brain of human beings, much less what an analog might look like in a machine. So I wouldn't rule it out, but I don't see it becoming, you know, a moral issue any time in the foreseeable future. You know, that doesn't mean that artificial intelligence can't already be very intelligent, and perhaps we have other kinds of moral issues to deal with regarding artificial intelligence.

LC: So I guess one thing that others have tried to do, and they touch on it a little bit in the movie, was, you know, the Turing test. A test that was developed to try to see, you know, whether it was a machine or a human. And in the movie that test was not enough to test this artificially intelligent being. So do you think that, if we were to develop artificial intelligence that was hard to distinguish from a human, the Turing test would still be relevant? Or, you know, aren't we also, in a way, just things that receive input and produce output without really consciously knowing what we're doing? I don't know if my question is clear, but...

MR: Well I know we've talked about the Chinese Room analogy, and maybe that's what you're alluding to here?

LC: Um, yeah, that's partly it, I wanted to integrate those two things.

MR: So, you know, Alan Turing, who was, you know, a mathematical genius but socially rather inept, was trying to imagine, as a first attempt, how you might construct an artificial intelligence that could at least pass minimally for a human being. And I think that in those conditions it's pretty much been met. I mean, you know, when you talk to Siri. There are hundreds of thousands, if not millions, of people in China who regularly talk to a completely automated therapist, and, [laughs] well, they keep coming back. So obviously many aspects of human conversation can be effectively mimicked on a low-bandwidth line like a cell phone. I don't think that we should take, you know, the Turing test as sort of the absolute boundary for whether something is actually artificially intelligent. And I would think that it's unlikely that human intelligence is the only kind of model. There might well be other kinds of intelligences, some of which only machines may exhibit, that might be quite distinctive and not even be recognizable, that wouldn't pass a Turing test but might be quite formidable intelligences on their own. I'm not sure how we would define those.

LC: And I guess this raises an important distinction. So one thing would be to say that something is intelligent, that it has developed some form of intelligence, versus saying that something is conscious. Because you might, you know, have passed all your tests, and people might think how intelligent you are. But you might just really be memorizing things without really being aware of what you are learning, or aware of why things happen in a certain way.

MR: Well what you've just alluded to is the major problem for undergraduate education in mathematics. [Laughs]

LC: [Laughs]

MR: But yes. I think that, you know, people casually identify intelligence with consciousness, and I don't think they're really necessarily the same thing at all. And I would like, you know, people to take more account of the sort of social nature of consciousness. A lot of what goes through our minds when we're, you know, quiet, at least as far as our brain activity can show, is thoughts related to what other people think about us. You know, we're compulsive about, you know, where our social standing is, and we're very motivated, we will risk our lives in many cases. How many hundreds of people have plunged to their deaths getting the perfect selfie?

LC: [Laughs]

MR: [Laughs] You know, which is a great form of social status, but, you know, it's very risky. [Laughs] And so, you know, the compulsion that we feel about how other people view us is, I think, unusual in the animal kingdom, and is an aspect of consciousness that is poorly studied and very poorly understood. And that's, you know, the original sense of the word. "Consciousness" as a word is only about 400 years old. And it comes from a Latin legal term, "conscientia," which is sort of a "witness who knew the same things as," you know, the person under investigation. And the early uses of the word are entirely in the sense of what we would now call "conscience." That is, you know, you're aware of how other people would view you if you were on trial for an act that you've done. It's only in the last century that this sort of cognitive sense of the word "consciousness" has proliferated and come to dominate, so that we think that's the only sense of the word right now. But it really does have to do with, you know, deep engagement with how other people would view and judge your actions.

LC: Well, this is all the time we have for now, but this is definitely a good point to end this conversation, and we'll leave our audience with this thought that, you know, consciousness has more than one layer, and a lot of gradation. So thank you again Mark for joining us today. Hope we have you again for more interesting conversations.

MR: Well thank you Laura.

LM: Thank you for joining us today on No Easy Answers in Bioethics. Please visit us online at bioethics.msu.edu for full episode transcripts and other resources related to this episode. A special thank you to H-Net: Humanities and Social Sciences Online for hosting this series. This episode of No Easy Answers in Bioethics was produced and edited by Liz McDaniel in the Center for Ethics. Music is by Antony Raijekov via Free Music Archive.