Have you ever played the 1970s-era arcade game Pong? It’s basically primitive computer tennis: you hit a moving dot back and forth across the screen using a vertical line that slides up and down as a paddle. You win if you reach ten points before your opponent does. Even if you’ve never played, I expect you get the idea. Now imagine that instead of the paddle being controlled by your fingers curled around a joystick or poised over a keyboard, it’s controlled by a blob of brain cells…in a petri dish…hooked up to a microchip. And for fun let’s give the whole system a cheeky name—something like “DishBrain.” This might seem like science fiction, but “DishBrain” is the real name of a real thing, and it happened about three years ago.
In late 2022, researchers at Monash University in Australia reported in Neuron that they had grown not just rodent brain cells but also human ones, communicated with the neurons via electrical signals from a silicon microchip, and let the cells play Pong. The researchers concluded that the neuronal system exhibited “synthetic biological intelligence” and met formal criteria for sentience (more on this later).
I suspect you’ve never heard of this research or its recent advancements. Admittedly, I learned of most of it only in the last few months, and I teach classes on philosophy and technology for a living. I knew of much earlier experiments using rat neurons to control robots, but I only recently discovered that human cells have been in the mix for a few years. In 2023, researchers at Indiana University grew not just some cells on a silicon substrate but an “organoid brain”: the cells were allowed to organize themselves into a three-dimensional, organ-like structure, which quickly learned to perform speech recognition tasks and complex math problems with increasing accuracy. Not to be outdone by Monash’s cleverness at naming, the Indiana crew called their structure “Brainoware.” In the summer of 2024, international news outlets reported that researchers in China had developed software to enhance the human neuron-to-machine connection: collaborators from Tianjin University’s Haihe Laboratory of Brain-Computer Interaction and Human-Computer Integration and the Southern University of Science and Technology in Shenzhen found a way to integrate the many links required to control neuron-machine interactions into a single platform, which they called “MetaBOC”: an “open-source brain-on-chip intelligent complex information interaction system.” (Perhaps the old anti-drug public service announcements will soon go from “This is your brain on drugs” to “This is your brain-on-chip.”) None of these advancements made huge waves, and it’s not clear why: they have serious ethical implications as well as the potential to transform how humans interact with machines and artificial intelligence.
I’d like to use this space to consider several issues at stake in human neuron-microchip biocomputing. My aim is not necessarily to advocate for a specific position on the nature of these entities or the morality of the research but to highlight several limitations of applying prominent ontological concepts to this sort of human-AI hybrid and to raise awareness of some key ethical concerns that should be considered as the technology advances.
I will begin with a few fundamental ontological considerations, which will bleed into some important ethical concerns: are these brain-on-chip systems intelligent, and, importantly for ethics, are they sentient? These two concepts are among the most disputed in philosophy, and it’s especially challenging to determine just what these systems are given that they involve the brain, whose functions remain mysterious on many levels. It may be helpful to start with the descriptions of intelligence and sentience offered by the Monash team in research articles and interviews, in which they often speak on behalf of their biotechnology startup, Cortical Labs (whose stated aim is to build “biological computer chips”). They had this to say in their research published in Neuron: “Cultures display the ability to self-organize activity in a goal-directed manner in response to sparse sensory information about the consequences of their actions, which we term synthetic biological intelligence” (1). The researchers theorized that this sort of intelligence may arise in a system when it follows something called the “Free Energy Principle” (FEP), a concept developed by neuroscientist Dr. Karl Friston and colleagues. The FEP says that intelligent systems try to minimize unpredictability (which, if I may vastly oversimplify, corresponds to “variational free energy,” or VFE) in their environment, and they might do this in a few different ways: a system might, for example, alter its predictions about events in its environment so that the predictions conform more accurately to its sensations, or it might act on its environment to change its sensations. As the Monash researchers put it, “Under this theory, BNNs [biological neuronal networks] hold ‘beliefs’ about the state of the world, where learning involves updating these beliefs to minimize their VFE or actively change the world to make it less surprising” (4). And that’s exactly what DishBrain did in the virtual game world: it implemented changes to improve the accuracy of its predictions by controlling the Pong “paddle” more effectively.
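For readers who like to see the gist in miniature, here is a deliberately crude sketch, in a few lines of Python, of the two strategies just described. Everything in it is my own illustrative invention rather than anything drawn from the Neuron paper: a toy agent shrinks the gap between what it predicts and what it senses either by revising its prediction or by moving its “paddle.”

```python
# A toy sketch of the FEP idea described above (my own illustration, not the
# Cortical Labs setup or Friston's formal variational mathematics): an agent
# can reduce the mismatch between prediction and sensation either by revising
# its prediction (updating its "beliefs") or by acting on the world.

def update_belief(predicted_y, sensed_y, learning_rate=0.5):
    """Perceptual route: nudge the predicted ball height toward what was sensed."""
    return predicted_y + learning_rate * (sensed_y - predicted_y)

def move_paddle(paddle_y, predicted_y, step=1.0):
    """Active route: step the paddle toward where the ball is predicted to arrive."""
    if paddle_y < predicted_y:
        return paddle_y + step
    if paddle_y > predicted_y:
        return paddle_y - step
    return paddle_y

predicted, paddle = 0.0, 0.0
for sensed in [6.0, 6.0, 6.0]:  # the ball keeps arriving at height 6
    predicted = update_belief(predicted, sensed)  # predictions become less surprised
    paddle = move_paddle(paddle, predicted)       # behavior becomes more effective
    error = abs(sensed - predicted)               # a crude stand-in for "surprise"
    print(f"prediction={predicted:.2f}  paddle={paddle:.2f}  error={error:.2f}")
```

In DishBrain itself, of course, the analogous adjustments play out in living neural tissue responding to patterned electrical feedback, not in a few lines of code; the sketch is only meant to make the logic of “minimize surprise by predicting better or acting better” easier to hold in mind.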
It’s important to note, though, that although the FEP explains what it means for something to be both sentient and intelligent, these two concepts are not necessarily connected, at least not when defined as they are in the literature I’ve been discussing. The Monash team, following Friston et al., defines sentience as “responsive to sensory impressions,” a criterion that a brain-on-chip system does seem to meet when it alters its “swing” to improve its Pong game. But it seems possible for an entity to respond to sensory impressions, even if it’s not intelligent—depending on one’s definition of intelligence, of course—when, say, it recoils from a cause of pain. Even a non-biological entity that is arguably unintelligent might be said to respond to sensory input: my blood pressure monitor, for example, responds to its “impressions” of my body when it gives a reading. Moreover, and again, depending on how one defines “intelligence,” an entity might be intelligent but non-responsive to sensory impressions: an AI, for instance, might be able to solve complex problems—thereby meeting one common definition of intelligence—without the ability to “sense” anything. But what’s required for sentient intelligence, in accordance with the FEP, goes beyond merely responding to sensory impressions or solving complex problems: sentient intelligence requires goal-directed activity aimed at minimizing unpredictability in response to sensory impressions.
If these requirements for sentient intelligence are met by DishBrain and similar brains-on-chips, as their creators believe, then things are getting pretty serious: we would need to consider whether Pong-playing brain organoids have moral standing—that is, whether they deserve legal rights, humane treatment, or some other layer of ethical concern.
But perhaps we’re moving a bit too fast. Maybe the description of sentience as “responsive to sensory impressions” is problematic because it does not include phenomenal experience, which, for many philosophers, is an essential feature of a theory of consciousness: one might argue that a sentient entity not only responds to sensory impressions, but also has some sort of subjective experience of its response and environment. If brains-on-chips lack such experience, the argument goes, they aren’t sentient and therefore lack moral standing.
Before delving into this, let’s be clear that we’re dealing with at least two distinct questions: i) does an intelligent entity qua intelligent have moral standing, and ii) does a sentient entity qua sentient have moral standing? I’m going to focus on the second question because I take it to be the more controversial of the two: I think it’s likely that many of us would say intelligent things (in whatever way “intelligence” is defined) qua intelligent do not have moral standing, whereas opinions are probably more mixed as to whether sentient things qua sentient do. For instance, I wager that most people would say ChatGPT is intelligent in some sense yet has no moral standing: it deserves no rights, humane treatment (whatever that might mean for an AI), paid sick days, birthday cards, heartfelt apologies, etc. (This was the consensus, at any rate, among the students in my recent ethics and technology course “Are Robots People?”) Similarly, I bet most would say that a non-intelligent human—such as one who is severely cognitively impaired—does have moral standing. So, although some intelligent things may warrant our ethical consideration, many people believe that intelligence alone is neither necessary nor sufficient for this. Sentience, on the other hand, is arguably different. For many, sentience is a necessary condition, although maybe not a sufficient one, for moral standing: not all sentient things deserve our moral concern, but all those that do are sentient. Now, I realize some will disagree with this. For example, some will hold that even non-sentient entities have moral standing, while others will hold that sentience shouldn’t be a criterion at all, believing it cannot be understood or empirically verified. However, I take it they are in the minority.
There isn’t enough space here to do justice to the complexity of the vast debates about the philosophical definition of sentience. To avoid falling down the rabbit hole of requirements for a theory of consciousness, I’ll focus on the two aspects of sentience I’ve described so far: responsiveness to sensory impressions and phenomenal experience.
As I mentioned earlier, many philosophers will be dissatisfied with the first description because it fails to account for the second one, but this dissatisfaction has been addressed by at least one defender of brain-on-chip sentience, who claims that sentience is often mistakenly conflated with consciousness. As Dr. Brett Kagan, Chief Scientific Officer at Cortical Labs, stated in an interview with The Scientist in 2022, “I must stress we do not mean consciousness…Consciousness is this experience of what it feels like to be humans. Sentience, formally, and historically, is being able to sense the environment…and to respond to it.” Kagan and colleagues stressed this same point earlier in 2022 in an article in the American Journal of Bioethics Neuroscience: “While colloquially the terms are exchangeable, it is imprecise and may lead to some conceptual conflations and to the wrong ethical conclusion” (114).
Although Kagan has a point that sentience and consciousness are sometimes unhelpfully conflated, the “formal” and “historical” definition of sentience on which his assessment of DishBrain is based is not without problems. His description of sentience as “being able to sense the environment…and to respond to it” seems too broad to be helpful. As I suggested earlier, it would include basically anything that can register a sense impression. In addition, it’s not clear why we should prefer Kagan and colleagues’ description of sentience to any other or why we should believe that consciousness, rather than sentience, isn’t the more relevant concept in our ontological and ethical assessments of human neuron-microchip hybrids. And perhaps there is good reason to conflate sentience and consciousness—if, say, the concepts are actually equivalent or mutually dependent in some way!
All this makes one wonder what there is to gain by insisting that brains-on-chips are sentient, no matter how we define it, or by denying that consciousness is an applicable concept. Perhaps an answer is suggested by Kagan’s assertion that conflating sentience and consciousness “may lead to…the wrong ethical conclusion.” One way to interpret his point could be that brains-on-chips would have much less claim to moral consideration if they were sentient but not conscious: if they were conscious, we’d probably start questioning whether it’s ethical to use them in experiments, grow them in labs, and so on. As we’ve noted, sentience alone does not grant moral standing, nor does it guarantee protection from becoming a subject of experimentation. Indeed, as Kagan has pointed out, experiments are conducted on sentient non-human animals all the time. So, a cynic might conclude that the researchers have a self-interested motive for keeping consciousness out of the mix. A less cynical voice might say that researchers just want to avoid protracted and gnarly debates about a very complex and disputed concept, one that is also difficult to verify empirically—how would we know if a human brain organoid were conscious? Even if the less cynical perspective were right, why claim that brains-on-chips are sentient at all? A less charitable explanation suggests vanity: wouldn’t it be more groundbreaking, more intriguing, and—dare one say—more newsworthy to have generated, manipulated, and tested a sentient entity rather than a non-sentient one?
Could there be reasons independent of sentience that brains-on-chips deserve ethical consideration or special treatment? Perhaps the fact that the cells on the chips come from humans rather than, say, rodents or cockroaches is important? Being human certainly seems to be an essential factor in laws and morals applicable to experimenting on living beings, regardless of whether they’re intelligent, sentient, or conscious. For instance, it’s generally considered morally repugnant, as well as unlawful, to experiment on humans who have intellectual disabilities, are in a vegetative state, or are comatose. Even if someone no longer met the definition of a person (another highly contested concept), it seems there would be something about their being human that would grant them special protection. But this might lead us back where we started: when we ask whether a collection of neurons in a petri dish is human, we’re facing a question as difficult to answer as whether those neurons are sentient. In short, shifting the focus from sentience to humanity solves no problems.
If—for whatever reason—we can’t determine whether brains-on-chips are sentient or conscious, or we can’t find some reasonable grounds for granting them moral standing, then perhaps we should just throw up our hands and let researchers (or anyone who wants to) go wild: let them experiment however they wish on these whatevers-they-are, and hope for outcomes that don’t violate our moral intuitions or laws. (In fact, if you have the desire and the means, you can pick yourself up a few ready-to-use human brain organoids and some open-source brain-on-chip software!)
I think this response is worrisome but not unlikely, given the ways some brain-on-chip developers talk about the future of their creations. Although specific research goals have been identified (e.g., testing the effects of drugs and various diseases on the brain), the developers are at times disturbingly cavalier about the future of these systems, marveling at the potential for development in ways that are unspecifiable at present—ways that one can imagine might become morally problematic. Cortical Labs, for one, seems quite open to the possibility of creating a being whose future has yet to be determined, even though they’re explicit about what it is they’re not doing. An article in Medium authored by Cortical Labs at the end of 2021 stated, “We’re not trying to give computers a better learning algorithm. We’re not putting copies of ourselves onto computer chips. And we’re not making tiny humans for your pocket. In fact, we don’t know what we’re making, because nothing like this has ever existed. An entirely new being. A fusion of silicon and neuron.” The CEO of Cortical Labs, Dr. Hon Weng Chong, stated in an interview that one of his lab’s research goals is to see “what we might be able to build if we are to increase the complexity of the systems built using this substrate.”
While it’s certainly the case that outcomes of groundbreaking things—be they technological or otherwise—cannot always be predetermined, it’s crucial to establish some guidelines, safeguards, and definitions when we’re dealing with something that has the potential to change what we think about such weighty concepts as sentience, consciousness, and human. We would do well to ask in the early stages of development not only what these brain-on-chip things are, but also what they deserve from us. Before research into human-neuron hybrids becomes uncontrollable, we need a better sense of (among many things): how the hybrids will react in more complex environments, whether there is evidence of consciousness (however that is defined), whether new social hierarchies may emerge that slot neuron-chip hybrids below “100% natural humans,” and how ownership and responsibility might apply (e.g., are brains-on-chips like pets? Or more like self-driving cars?). It’s not too early to address these issues, but there might come a time when it’s too late—if and when the technology becomes ubiquitous and its regulation decentralized. Here I am reminded of the work of Joanna Bryson, an ethics and AI policy expert, who pointed out in a proposal for a code of ethics for builders of artificial people that “after you have built something and someone else owns it is not the time to try to control how it gets used.”
Many of the above concerns were expressed several years ago by ethicists in one of the few published moral assessments of brain-on-chip technology. Their conclusions, as well as the subsequent response by organoid-chip researchers, are noteworthy. In “Mapping the Ethical Issues of Brain Organoid Research and Application,” ethicists Sawai et al. base moral standing on consciousness, a state they doubt brains-on-chips will ever achieve. Nevertheless, they say their moral assessments will proceed under the assumption that brain organoids will eventually become conscious, and they recommend a precautionary approach due to uncertainties surrounding the ontological status of such systems. They advocate for the establishment of governmental guidelines and norms, public input, and collaboration among all interested parties regarding the ethics of developing human-neuron-machine hybrids.
In their response to Sawai et al., Cortical Labs emphasized the benefits of brain-on-chip research—e.g., medical advancements as well as a reduction in experimentation on non-human animals if the testing burden can be shifted onto organoids in Petri dishes—and asserted that an overly precautious approach is unwarranted in the absence of definite evidence that brains-on-chips are conscious or a consensus regarding how such evidence might be collected.
What is noteworthy about the responses on both sides is that they advocate for different paths forward despite agreeing that it’s extremely difficult, if not impossible, to determine whether brains-on-chips are conscious or will ever become so. One side takes the ontological uncertainty as a basis for caution and restriction, while the other takes it as a basis for more research.
Given the complexities surrounding consciousness, perhaps we need to identify a different foundation for moral standing. But we might find that all metrics—e.g., sentience, intelligence, humanity—lead back to the sticky notion of consciousness. Bryson, for one, has claimed that consciousness “must be the worst metric of ethical obligation one could propose, because no one actually knows what it means.” Her alternative is interesting: she suggests we have an ethical obligation to consider how we treat something that experiences suffering, which might be gauged in terms of the observable impact on its behavior. (Incidentally, she claims that humanoid artificial agents will never have moral standing so long as they continue to be built without feelings.) This may be a fruitful avenue if suffering does not require consciousness, but unfortunately, that’s not uncontroversial. Plus, it’s difficult to see how suffering would apply to something like a brain-on-chip: How, for instance, might we assess whether DishBrain is in anguish if it’s pained by its miserable backhand? What behavioral changes would indicate its displeasure at losing, or, for that matter, its joy at winning? And even if we could answer these questions, might we need to address the tricky issue of whether DishBrain has the capacity to choose how it acts? If it alters its behavior, could it have done otherwise or even opted out of playing altogether? Did anyone ask DishBrain if it wanted to play Pong? In the end, it may be the case that behavioral impact and suffering are too difficult to assess at the current stage of brain-on-chip development.
So perhaps DishBrain’s developers have a point that we need more research to determine what brains-on-chips are and how they should be treated. But ethicists also make a reasonable point that it seems dangerous to allow such technology to advance without regulation. Ideally, we will have some measures in place before “DishBrain” becomes “VatBrain” and then just “Brain”.