Artificial intelligence (AI) is predicted to become sentient anywhere from never to sometime in the next decade or two. If robots do become sentient, how would we know? Are we disincentivized to recognize sentient machines? How would we view the rights of non-human beings?
Last month, 17 renowned philosophers submitted an amicus brief to New York’s highest court on behalf of the Nonhuman Rights Project (NhRP). The brief “exhausts the legal and logical space with respect to the notion of non-human persons,” says Robert C. Jones, a philosopher and member of the coalition, who examines ethics and non-human cognition.
Jane Goodall, Steven Wise and the legal team at the NhRP are petitioning the court to broaden the legal definition of persons to include two great apes, Tommy and Kiko (case discussed here). In its current form, the law recognizes as legal persons both legal entities like corporations and natural humans (Homo sapiens). Legal persons enjoy rights, freedoms and protections under the law. Convincing a court that primates are enough like humans to deserve some of our rights could set a precedent for how we deal with sentient AI.
Proving sentience (the ability to perceive, feel and experience) isn’t the challenge in this case. NhRP executive director and attorney Kevin Schneider says there’s been a “wave of sentient being laws that have passed around the world,” with language like:
“We recognize animals are sentient beings… this does not change their status as property; this does not change their status as things.”
This is why the NhRP is looking to expand the definition of personhood by putting the chimpanzees’ autonomy into play. “Sentience is a necessary prerequisite for autonomy,” says Schneider. The NhRP argues that sentient beings as cognitively complex as chimpanzees need to be treated in such a way that respects their autonomy and self-determination. There’s only one legal category where non-humans can have their rights as autonomous beings respected: legal persons.
The implications for sentient AI.
If advanced AI systems become conscious, AI won’t have to prove its autonomy. AI is designed with highly sophisticated intelligence and autonomy based on open-ended utility functions that human programmers can currently rewrite to cater to human needs and values, until such time as AI has the will to stop us. What AI will have to prove is that it is sentient. Unlike with all current forms of biological life, we have no criteria for recognizing sentience in beings without biological brains and nervous systems. Many computer scientists and engineers say this simply isn’t a problem, because AI is not conscious. Here’s why it’s still a problem:
We don’t know what consciousness is (The Hard Problem).
Sentience and consciousness are often used interchangeably, but there are subtle differences. Sentience is the capacity for subjective perceptions, feelings and experience. Consciousness is being aware of yourself and your surroundings. “It’s the ‘what it’s like’ aspect of subjective experience. We all know what it’s like to be conscious. It’s so self-evident,” says Jones.
Yet no one has ever spotted it in a brain scan or picked consciousness up with surgical forceps and studied it. We don’t know its essence. The Hard Problem of consciousness is our most intimate mystery. We have no idea how awareness and experience come out of a purely physical process. No biologist, neuroscientist, philosopher or physicist is anywhere near solving The Hard Problem.
If we don’t know what consciousness is, it’s anyone’s guess what special configuration gives rise to awareness and how long it will be before AI wakes up. The only other intelligent beings we create that display highly sophisticated behavior are other humans. We don’t know how, or at what point during development, our human creations become sentient and conscious.
Although we don’t know precisely what consciousness is or how it works, we know it exists. Our moral and legal systems are based on the responsibility we have to other conscious beings. Without a rational, comprehensive and ethical set of criteria for recognizing signs of machine consciousness, we’re susceptible to repeating atrocities, like slavery, that we’ve inflicted on other autonomous sentient beings.
A Functional Theory of Mind
What hope does AI have that we might recognize its biology-free sentience? Science correlates our minds with our brains, so it may be counterintuitive to think of a mind coming from anything other than a brain. A functional theory of mind is the view that a mind exists in virtue of its function. If it functions like a mind (it has mental states, interests, beliefs and desires; it can suffer), then it’s a mind. The view allows for evidence that could end biology’s monopoly on mind-making.
“If people developing AI accept that it’s even possible, that it’s conceivable for AI to one day be conscious,” says Jones, “then by default they’re working with a functional theory of mind. If AI leads to the creation of minds, it ushers in a number of ethical and social justice issues.”
When should we start seriously talking about this?
An internationally recognized policy guideline known as the precautionary principle states that we should take precautions when a proposed action poses a plausible risk to the public. “We get into these situations over and over,” says Jones. “We ask ‘Why didn’t someone think to address this years ago?'”
Jones considers the precautionary principle as it applies to AI:
“If it turns out we are creating a race, because in a sense we’re creating a race of robots, and that race is falling into this same pattern of being exploited for labor while having no rights or recognition… if this pattern is being repeated,” he says while rubbing his temples, “if we have a bunch of machines doing our bidding and they have some potential to become sentient, then we are morally culpable.”
One of the arguments against expanding the definition of personhood to include non-humans uses a version of the precautionary principle itself: expanding personhood beyond humans might diminish human rights. We could be opening the floodgates to unforeseen ramifications.
We have no historical precedent for how to view intelligent, autonomous AI. The technological revolution modern life depends on offers little incentive to cast off our blinders and open ourselves up to recognizing sentience in machines. Even if we had a set of criteria for detecting the emergence of non-carbon-based consciousness, would we want to expand our human-made laws to protect the interests of non-humans? Is it nuts to talk about trying to hold down a more advanced species in a subordinate legal category? Is our moral integrity at risk if we try? Would humanity be at risk if we try?
“We can’t go into this project with blinders on,” says Jones. “We can’t go into this project saying ‘Oh whatever, we’ll figure it out when it needs figuring out.’ The project of artificial intelligence has to go hand in hand with the ethics. And that conversation ain’t happening, it just ain’t happening. There are very few conversations going on.”
Dr. Hans C. Mumm