A Different Kind of Worry about Artificial Intelligence

Michael Prinzing
The Practical Philosopher
Dec 1, 2016


Like many people, I’ve been thinking about Artificial Intelligence recently. The development of AI comes with a lot of ethical implications. Some get less attention than others.

Often, the people warning us about AI research are concerned about the so-called “Singularity”, the point at which humans are no longer the smartest beings on the planet. This is thought to bode ill for our species. There are plenty of other well-founded, and more immediate, concerns about the economic effects of automation. If there isn’t a major change in the way that wealth gets distributed in our societies, we’ll see income inequality grow dramatically as more and more jobs are automated. Unless something changes, those who own AI technology will grow wealthier and wealthier, while those who don’t will grow poorer and poorer.

In short, most of the concerns people raise about AI research are grounded in the interests of human beings.

I want to raise another kind of worry about AI research. This is a worry that focuses on the interests of the AIs themselves. If we succeed in creating artificial beings that are as cognitively (and maybe emotionally) sophisticated as we are, or even more so, then I think they will merit moral consideration. So, we should be very cautious about AI research both for our own sake, and for the sake of what we might create. (A few people have claimed something similar. See here and here for philosophers, and here for a non-philosopher.)

If we don’t recognize the moral status of AIs, then we are likely to treat them wrongly. If AIs have moral status, then we would not be permitted to force them to work for us — that’s called slavery. We would not be permitted to scrap them if they become inconvenient — that’s called murder. It should be obvious, then, that, if AIs do have moral status, it’s absolutely crucial that we recognize this.

The European Parliament has already started talking about creating a new legal category of “artificial persons” and about the rights that such people would have. (See here.) Many people, however, will think that all this sounds crazy. Wired magazine published an article in 2009 titled “Do Humanlike Machines Deserve Human Rights?” The author, Daniel Roth, writes, “[T]he challenge isn’t how we learn to accept robots — but whether we should care when they’re mistreated. And if we start caring about robot ethics, might we then go one insane step further and grant them rights?” In this article, Roth’s central example is an animatronic Elmo toy, which clearly isn’t conscious and doesn’t have any of the things we look for in a being with moral status. So, Roth can be forgiven for thinking that this whole question is “insane”.

But I think Roth completely misses the point. It would be crazy to give Tickle Me Elmo rights. But nobody thinks we should. What’s at issue is whether — as the title of his article indicates — humanlike machines would deserve the same kind of treatment that humans do. Surely “humanlike” includes such properties as being autonomous and conscious, and having desires, plans, and goals. If an artificial being were like this, I believe it certainly would deserve to be treated well.

Alright, that’s enough foreplay. Let’s get philosophical.

What I’m claiming is that an AI, if it were cognitively like a human being, would have moral status like a human being. So a crucial question is: What does it mean to have moral status?

To say that X has moral status, or that X merits moral consideration, is to say that when X’s interests are at stake, we are morally required to take X’s interests into consideration for X’s own sake. This goes beyond what we might call “moral relevance”, where even a rock can be morally relevant. A rock can belong to someone, and thus what one does with the rock is morally relevant. (You can’t just take someone’s property.) But, the rock doesn’t have interests. We don’t treat the rock in certain ways for the rock’s own sake. The rock is only morally relevant because of its owner. In other words, though it does have moral relevance, a rock can’t have moral status. This entails that, for something to have moral status, it must have interests. Of course, any cognitively complex AI would have interests. It would have goals and projects, and maybe desires and hopes.

Presumably humans are not the only beings with moral status. Non-human animals surely have at least some moral status — even if not as much as an adult human. Dogs, for instance, deserve not to be forced to fight each other for our pleasure. We owe this to dogs for their own sake. Something similar holds, I assume, for chimps, whales, and maybe a wide variety of other animals. Of course, at some point we get down to clams and other animals where it’s doubtful whether they merit any moral consideration. But, in any case, it should be uncontroversial that humans aren’t the only beings that have moral status. So, the question is: If moral status isn’t limited to humans, then why should it be limited to animals? Why think that artificial beings wouldn’t have moral status?

The standard philosophical views on the grounds of moral status (i.e., answers to the question “In virtue of what does a being have moral status?”) place a strong emphasis on the possession of sophisticated cognitive capacities. (See here for a very readable, if lengthy, overview of the philosophical literature on this question.) A simple view would be that to have moral status one must have certain cognitive capacities. The relevant capacity could be the capacity for autonomy (i.e., the ability to decide on goals and engage in practical reasoning), or self-awareness, or the ability to value, or to care about one’s future.

A simple view of this kind faces objections from people who think that infants and/or fetuses, which lack the relevant capacity (or capacities), have the same moral status as adult humans. One might solve this problem by stipulating that moral status doesn’t require the actual possession of these cognitive capacities, only the potential to develop them. (Of course, you’d need to explain why mere potential is morally significant. But let’s not worry about that for now.)

Some people would claim that even this bar is too high, that a being has moral status if it has even some fairly rudimentary cognitive capacities: e.g., the capacity to feel pleasure and pain, to have desires, and so on. Or, again, maybe we should say what’s important is the potential to develop such rudimentary cognitive capacities.

For present purposes we don’t need to settle this question or adopt any of the aforementioned views. The reason to mention the standard theories is to show that there doesn’t appear to be any reason — on any of the standard views — why a humanlike AI would not have humanlike moral status. In fact, I don’t see how any plausible view of the grounds of moral status could exclude cognitively complex AIs.

So, if that’s right, then we should be cautious about AI research not only because of what it could mean for our (i.e., human) wellbeing, but also because of what it would probably mean for the wellbeing of the AIs themselves. This isn’t to say that we shouldn’t pursue AI research, just that we have a great many reasons, not all of them grounded in our own interests, to be cautious.

If you’re interested in learning more about AI, and what philosophers are saying about it and its impact on our future, check out Nick Bostrom. Bostrom is a professor at my old home, Oxford University, where he founded the Future of Humanity Institute. Foreign Policy named him one of the top 100 Global Thinkers. His recent book, Superintelligence, explores some of the ways that superhuman AI could develop and, crucially, how we might cope with and survive it. (You can also find him on YouTube.)
