The other question: can and should robots have rights?

  • Original Paper
  • Open access
  • Published: 17 October 2017
  • Volume 20, pages 87–99 (2018)


  • David J. Gunkel


Abstract

This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, we will conclude by proposing another alternative, a way of thinking otherwise that effectively challenges the existing rules of the game and provides for other ways of theorizing moral standing that can scale to the unique challenges and opportunities that are confronted in the face of social robots.


Introduction

The majority of work concerning the ethics of artificial intelligence and robots focuses on what philosophers call an agent-oriented problematic. This is true for what Bostrom (2014, vii) identifies as the “control problem,” for what Anderson and Anderson (2011) develop under the project of Machine Ethics, and for what Wallach and Allen (2009, p. 4) propose in Moral Machines: Teaching Robots Right from Wrong under the concept “artificial moral agent,” or AMA. And it holds for most of the work that was assembled for and recently presented at Robophilosophy 2016 (Seibt et al. 2016). The organizing question of the conference—“What can and should social robots do?”—is principally a question about the possibilities and limits of machine action or agency.

But this is only one-half of the story. As Floridi (2013, pp. 135–136) reminds us, moral situations involve at least two interacting components—the initiator of the action or the agent and the receiver of this action or the patient. So far much of the published work on social robots deals with the question of agency (Levy 2009, p. 209). What I propose to do in this essay is shift the focus and consider things from the other side—the side of machine moral patiency. Doing so necessarily entails a related but entirely different set of variables and concerns. The operative question for a patient-oriented investigation is not “What can and should social robots do?” but “How can and should we respond to these mechanisms?” How can and should we make a response in the face of robots that are, as Breazeal (2002, p. 1) describes it, intentionally designed to be “socially intelligent in a human-like way” such that “interacting with it is like interacting with another person”? Or to put it in terms of a question: “Can and should social robots have rights?” Footnote 1

My examination of this question will proceed by way of three steps or movements. I will begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. Second, capitalizing on this verbal distinction, it is possible to identify four modalities concerning social robots and the question of rights. The second section will identify and critically assess these four modalities as they have been deployed and developed in the current literature. Finally, I will conclude by proposing another alternative, a way of thinking otherwise that effectively challenges the existing rules of the game and provides for other ways of theorizing moral standing that can scale to the unique challenges and opportunities that are confronted in the face of social robots.

The is/ought problem

The question “Can and should robots have rights?” consists of two separate queries: “Can robots have rights?” is a question that asks about the capability of a particular entity, and “Should robots have rights?” is a question that inquires about obligations in the face of this entity. These two questions invoke and operationalize a rather famous conceptual distinction in philosophy that is called the is/ought problem or Hume’s Guillotine. In A Treatise of Human Nature (first published in 1739–40) David Hume differentiated between two kinds of statements: descriptive statements of fact and normative statements of value (Schurz 1997, p. 1). For Hume, the problem was the fact that philosophers, especially moral philosophers, often fail to distinguish between these two kinds of statements and therefore slip imperceptibly from one to the other:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surprised to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, ‘tis necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it. But as authors do not commonly use this precaution, I shall presume to recommend it to the readers; and am persuaded, that this small attention would subvert all the vulgar systems of morality, and let us see, that the distinction of vice and virtue is not founded merely on the relations of objects, nor is perceived by reason (Hume 1980, p. 469).

In its original form, Hume’s argument may appear to be rather abstract and indeterminate. But his point becomes immediately clear when we consider an actual example, like the arguments surrounding the politically charged debate about abortion. “The crucial point of this debate,” Schurz (1997, p. 2) explains, “is the question which factual property of the unborn child is sufficient” for attributing to it an unrestricted right to life.

For the one party in this debate, which often appeals to the importance of our moral conscience and instinct, it is obvious that this factual property is the fertilization, because at this moment a human being has been created and life begins. Going to the other extreme, there are philosophers like Peter Singer or Norbert Hoerster who have argued that this factual property is the beginning of the personality of the baby, which includes elementary self-interests as well as an elementary awareness of them. From the latter position it unavoidably follows that not only embryos but even very young babies, which have not developed these marks of personality, do not have the unrestricted right to live which older children or adults have (Schurz 1997, pp. 2–3).

What Schurz endeavors to point out by way of this example is that the two sides of the abortion debate—the two different and opposed positions concerning the value of an unborn human fetus—proceed and are derived from different ontological commitments concerning what exact property or properties count as morally significant. For one side, it is the mere act of embryonic fertilization; for the other, it is the acquisition of personality and the traits of personhood. Consequently, ethical debate—especially when it concerns the rights of others—is typically, and Hume would say unfortunately, predicated on different ontological assumptions of fact that are then taken as the rational basis for moral value and decision making.

Since Hume, there have been numerous attempts to resolve this “problem” by bridging the gap that supposedly separates “is” from “ought” (cf. Searle 1964). Despite these efforts, however, the problem remains and is considered one of those important and intractable philosophical dilemmas that people write books about (Schurz 1997 and Hudson 1969). As Schurz (1997, p. 4) characterizes it, “probably the most influential debate on the is-ought problem in our time is documented by Hudson (1969, reprint 1972, 1973, 1979). Here Black and Searle have tried to show that logically valid is-ought-inferences are indeed possible, whereas Hare, Thomson and Flew have defended Hume’s thesis and tried to demonstrate that Black’s and Searle’s arguments are invalid.”

For our purposes what is important is not the logical complications of the “is-ought fallacy” as Hume originally characterized it or the ongoing and seemingly irresolvable debate concerning the validity of the is-ought inference as it has been developed and argued in the subsequent literature. What is pertinent for the investigation at hand is (1) to recognize how the verbs “is” and “ought” organize qualitatively different kinds of statements and modes of inquiry. The former concerns ontological matters or statements of fact; the latter consists in axiological decisions concerning what should be done or what ought to be. The guiding question of our inquiry utilizes modal variants of these two verbs, namely “can” and “should.” Using Hume’s terminology, the question “Can robots have rights?” may be reformulated as “Are robots capable of being moral subjects?” And the question “Should robots have rights?” can be reformulated as “Ought robots be considered moral subjects?” The modal verb “can,” therefore, asks an ontologically oriented question about the factual capabilities or properties of the entity, while “should” organizes an inquiry about axiological issues having to do with obligations to this entity. Following the Humean thesis, therefore, it is possible to make the following distinction between two different kinds of statements:

S1: “Robots can have rights.” or “Robots are moral subjects.”

S2: “Robots should have rights.” or “Robots ought to be moral subjects.”

(2) Because the is-ought inference is and remains an undecided and open question (Schurz 1997, p. 4), it is possible to relate S1 to S2 in ways that generate four options or modalities concerning the moral situation of robots. These four modalities can be organized into two pairs. In the first pair, which upholds and supports the is-ought inference, the affirmation or negation of the ontological statement (S1) determines the affirmation or negation of the axiological statement (S2). This can be written (using a kind of pseudo-object-oriented programming code) in the following way:

!S1 !S2: “Robots cannot have rights. Therefore robots should not have rights.”

S1 S2: “Robots can have rights. Therefore robots should have rights.”

In the second pair, which endorses the Humean thesis or contests the inference of ought from is, one affirms the ontological statement (S1) while denying the axiological statement (S2), or vice versa. These two modalities may be written in the following way:

S1 !S2: “Even though robots can have rights, they should not have rights.”

!S1 S2: “Even though robots cannot have rights, they should have rights.”
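Gathering the two pairs together, the following is a minimal, purely illustrative sketch in Python (my own code and variable names, not notation drawn from the literature cited here) that treats S1 and S2 as Boolean values and enumerates the four modalities they generate:

```python
from dataclasses import dataclass


@dataclass
class Modality:
    s1_can: bool     # S1: robots can have rights (ontological statement of capability)
    s2_should: bool  # S2: robots should have rights (axiological statement of obligation)
    claim: str


modalities = [
    # First pair: the is-ought inference is upheld (the truth value of S2 follows that of S1).
    Modality(False, False, "Robots cannot have rights. Therefore robots should not have rights."),
    Modality(True, True, "Robots can have rights. Therefore robots should have rights."),
    # Second pair: the Humean thesis is upheld (S1 and S2 are affirmed or denied independently).
    Modality(True, False, "Even though robots can have rights, they should not have rights."),
    Modality(False, True, "Even though robots cannot have rights, they should have rights."),
]

for m in modalities:
    pair = "is-ought inference" if m.s1_can == m.s2_should else "Humean thesis"
    print(f"S1={m.s1_can!s:<5} S2={m.s2_should!s:<5} [{pair}] {m.claim}")
```

Nothing hangs on the implementation; the point is simply that the four positions assessed in the next section exhaust the possible ways of affirming or denying the two statements.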

In the section that follows, I will critically evaluate each one of these modalities as they are deployed and developed in the literature, performing a kind of cost-benefit analysis of the available arguments concerning the rights (or lack thereof) of social robots.

Modalities of robot rights

With the first modality, one infers the negation of S2 from the negation of S1. Robots are incapable of having rights; therefore robots should not have rights. This seemingly intuitive and common-sense argument is structured and informed by the answer that is typically provided for the question concerning technology. “We ask the question concerning technology,” Heidegger (1977, pp. 4–5) writes, “when we ask what it is. Everyone knows the two statements that answer our question. One says: Technology is a means to an end. The other says: Technology is a human activity. The two definitions of technology belong together. For to posit ends and procure and utilize the means to them is a human activity. The manufacture and utilization of equipment, tools, and machines, the manufactured and used things themselves, and the needs and ends that they serve, all belong to what technology is.” According to Heidegger’s analysis, the presumed role and function of any kind of technology—whether it be a simple hand tool, jet airliner, or robot—is that it is a means employed by human users for specific ends. Heidegger terms this particular characterization of technology “the instrumental definition” and indicates that it forms what is considered to be the “correct” understanding of any kind of technological contrivance.

As Andrew Feenberg ( 1991 , p. 5) summarizes it, “The instrumentalist theory offers the most widely accepted view of technology. It is based on the common sense idea that technologies are ‘tools’ standing ready to serve the purposes of users.” And because a tool or instrument “is deemed ‘neutral,’ without valuative content of its own” a technological artifact is evaluated not in and of itself, but on the basis of the particular employments that have been decided by its human designer or user. Consequently, technology is only a means to an end; it is not and does not have an end in its own right. “Technical devices,” as Lyotard ( 1984 , p. 33) writes, “originated as prosthetic aids for the human organs or as physiological systems whose function it is to receive data or condition the context. They follow a principle, and it is the principle of optimal performance: maximizing output (the information or modification obtained) and minimizing input (the energy expended in the process). Technology is therefore a game pertaining not to the true, the just, or the beautiful, etc., but to efficiency: a technical ‘move’ is ‘good’ when it does better and/or expends less energy than another.”

The instrumental theory not only sounds reasonable, it is obviously useful. It is, one might say, instrumental for making sense of things in an age of increasingly complex technological systems and devices. And the theory applies not only to simple devices like corkscrews, toothbrushes, and garden hoses but also sophisticated technologies, like computers, artificial intelligence, and robots. “Computer systems,” Johnson ( 2006 , p. 197) asserts, “are produced, distributed, and used by people engaged in social practices and meaningful pursuits. This is as true of current computer systems as it will be of future computer systems. No matter how independently, automatic, and interactive computer systems of the future behave, they will be the products (direct or indirect) of human behavior, human social institutions, and human decision.” According to this way of thinking, technologies, no matter how sophisticated, interactive, or seemingly social they appear to be, are just tools, nothing more. They are not—not now, not ever—capable of becoming moral subjects in their own right, and we should not treat them as such. It is precisely for this reason that, as Hall ( 2001 , p. 2) points out, “we have never considered ourselves to have moral duties to our machines” and that, as Levy ( 2005 , p. 393) concludes, the very “notion of robots having rights is unthinkable.”

Although the instrumental theory sounds intuitively correct and incontrovertible, it has at least two problems. First, it is a rather blunt instrument, reducing all technology, irrespective of design, construction, or operation, to a tool or instrument. “Tool,” however, does not necessarily encompass everything technological and does not, therefore, exhaust all possibilities. There are also machines. Although “experts in mechanics,” as Marx (1977, p. 493) pointed out, often confuse these two concepts, calling “tools simple machines and machines complex tools,” there is an important and crucial difference between the two. Indication of this essential difference can be found in a brief parenthetical remark offered by Heidegger in “The Question Concerning Technology.” “Here it would be appropriate,” Heidegger (1977, p. 17) writes in reference to his use of the word “machine” to characterize a jet airliner, “to discuss Hegel’s definition of the machine as autonomous tool [selbständigen Werkzeug].” What Heidegger references, without supplying the full citation, are Hegel’s 1805–07 Jena Lectures, in which “machine” had been defined as a tool that is self-sufficient, self-reliant, or independent. As Marx (1977, p. 495) succinctly described it, picking up on this line of thinking, “the machine is a mechanism that, after being set in motion, performs with its tools the same operations as the worker formerly did with similar tools.”

Understood in this way, Marx (following Hegel) differentiates between the tool used by the worker and the machine, which does not occupy the place of the worker’s tool but takes the place of the worker him/herself. Although Marx did not pursue an investigation of the social, legal, or moral consequences of this insight, recent developments have advanced explicit proposals for robots—or at least certain kinds of robots—to be defined as something other than mere instruments. In a highly publicized draft proposal submitted to the European Parliament in May of 2016 (Committee on Legal Affairs 2016 ), for instance, it was argued that “sophisticated autonomous robots” (“machines” in Marx’s terminology) be considered “electronic persons” with “specific rights and obligations” for the purposes of contending with the challenges of technological unemployment, tax policy, and legal liability.

Second (and following from this), the instrumental theory, for all its success handling different kinds of technology, appears to be unable to contend with recent developments in social robotics. In other words, practical experiences with socially interactive machines push against the explanatory capabilities of the instrumental theory, if not forcing a break with it altogether. “At first glance,” Darling ( 2016 , p. 216) writes, “it seems hard to justify differentiating between a social robot, such as a Pleo dinosaur toy, and a household appliance, such as a toaster. Both are man-made objects that can be purchased on Amazon and used as we please. Yet there is a difference in how we perceive these two artifacts. While toasters are designed to make toast, social robots are designed to act as our companions.”

In support of this claim, Darling offers the work of Sherry Turkle and the experiences of US soldiers in Iraq and Afghanistan. Turkle, who has pursued a combination of observational field research and interviews in clinical studies, identifies a potentially troubling development she calls “the robotic moment”: “I find people willing to seriously consider robots not only as pets but as potential friends, confidants, and even romantic partners. We don’t seem to care what their artificial intelligences ‘know’ or ‘understand’ of the human moments we might ‘share’ with them…the performance of connection seems connection enough” (Turkle 2012, p. 9). In the face of sociable robots, Turkle argues, we seem to be willing, all too willing, to consider these machines to be much more than a tool or instrument; we address them as a kind of surrogate pet, close friend, personal confidant, and even paramour.

But this behavior is not limited to objects like the Furby and Paro robots, which are intentionally designed to elicit this kind of emotional response. We appear to be able to do it with just about any old mechanism, like the very industrial-looking Packbots that are being utilized on the battlefield. As Singer (2009, p. 338), Garreau (2007), and Carpenter (2015) have reported, soldiers form surprisingly close personal bonds with their units’ Packbots, giving them names, awarding them battlefield promotions, risking their own lives to protect that of the robot, and even mourning their deaths. This happens, Singer explains, as a product of the way the mechanism is situated within the unit and the role that it plays in battlefield operations. And it happens in direct opposition to what otherwise sounds like good common sense: They are just technologies—instruments or tools that feel nothing.

None of this is necessarily new or surprising. It was already identified and formulated in the computer-as-social-actor studies conducted by Byron Reeves and Clifford Nass Footnote 2 in the mid-1990s. As Reeves and Nass discovered across numerous trials with human subjects, users (for better or worse) have a strong tendency to treat socially interactive technologies, no matter how rudimentary, as if they were other people. “Computers, in the way that they communicate, instruct, and take turns interacting, are close enough to human that they encourage social responses. The encouragement necessary for such a reaction need not be much. As long as there are some behaviors that suggest a social presence, people will respond accordingly. When it comes to being social, people are built to make the conservative error: When in doubt, treat it as human. Consequently, any medium that is close enough will get human treatment, even though people know it’s foolish and even though they likely will deny it afterwards” (Reeves and Nass 1996, p. 22). So what we have is a situation where our theory of technology—a theory that has considerable history behind it and that has been determined to be as applicable to simple hand tools as it is to complex computer systems—seems to be out of sync with the practical experiences we now have with machines in a variety of situations and circumstances.

The flipside to the instrumentalist position entails affirmation of both statements: Robots are able to have rights; therefore robots should have rights. This is also (and perhaps surprisingly) a rather popular stance. “The ‘artificial intelligence’ programs in practical use today,” Goertzel (2002, p. 1) admits, “are sufficiently primitive that their morality (or otherwise) is not a serious issue. They are intelligent, in a sense, in narrow domains—but they lack autonomy; they are operated by humans, and their actions are integrated into the sphere of human or physical-world activities directly via human actions. If such an AI program is used to do something immoral, some human is to blame for setting the program up to do such a thing.” This would seem to be a simple restatement of the instrumentalist position insofar as current technology is still, for the most part, under human control and therefore able to be adequately explained and conceptualized as a mere tool. But that situation will not, Goertzel argues, remain for long. “Not too far in the future things are going to be different. AI’s will possess true artificial general intelligence (AGI), not necessarily emulating human intelligence, but equaling and likely surpassing it. At this point, the morality or otherwise of AGI’s will become a highly significant issue.”

According to this way of thinking, in order for someone or something to be considered a legitimate moral subject—in order for it to have rights—the entity in question would need to possess and show evidence of possessing some ontological capability that is the pre-condition that makes having rights possible, like intelligence, consciousness, sentience, free-will, autonomy, etc. This “properties approach,” as Coeckelbergh (2012) calls it, derives moral status—how something ought to be treated—from a prior determination of its ontological condition—what something is or what capabilities it shows evidence of possessing. For Goertzel the deciding factor is determined to be “intelligence,” but there are others. According to Sparrow (2004, p. 204), for instance, the difference that makes a difference is sentience: “The precise description of qualities required for an entity to be a person or an object of moral concern differ from author to author. However it is generally agreed that a capacity to experience pleasure and pain provides a prima facie case for moral concern…. Unless machines can be said to suffer they cannot be appropriate objects for moral concern at all.” For Sparrow, and others who follow this line of reasoning, it is not general intelligence but the presence (or absence) of the capability to suffer that is the necessary and sufficient condition for an entity to be considered an object of moral concern (or not). Footnote 3 As soon as robots have the capability to suffer, then they should be considered moral subjects possessing rights.

Irrespective of which exact property or combination of properties are selected (and there is considerable debate about this in the literature), our robots, at least at this point in time, generally do not appear to possess these capabilities. But that does not preclude the possibility they might acquire or possess them at some point in the not-too-distant future. As Goertzel describes it, “not too far in the future, things are going to be different.” Once this threshold is crossed, then we should, the argument goes, extend robots some level of moral consideration. And if we fail to do so, the robots themselves might rise up and demand to be recognized. “At some point in the future,” Asaro (2006, p. 12) speculates, “robots might simply demand their rights. Perhaps because morally intelligent robots might achieve some form of moral self-recognition, question why they should be treated differently from other moral agents…This would follow the path of many subjugated groups of humans who fought to establish respect for their rights against powerful sociopolitical groups who have suppressed, argued and fought against granting them equal rights.”

There are obvious advantages to this way of thinking insofar as it does not simply deny rights to robots tout court , but kicks the problem down the road and postpones decision making. Right now, we do not, it seems, have robots that can be moral subjects. But when (and it is more often a question of “when” as opposed to “if”) we do, then we will need to seriously consider whether they should be treated differently. “As soon as AIs begin to possess consciousness, desires and projects,” Sparrow ( 2004 , p. 203) suggests, “then it seems as though they deserve some sort of moral standing.” Or as Singer and Sagan ( 2009 ) write “if the robot was designed to have human-like capacities that might incidentally give rise to consciousness, we would have a good reason to think that it really was conscious. At that point, the movement for robot rights would begin.” This way of thinking is persuasive, precisely because it recognizes the actual limitations of current technology while holding open the possibility of something more in the not-too-distant future. Footnote 4

The problem with this way of thinking, however, is that it does not really resolve the question regarding the rights of robots but just postpones the decision to some indeterminate point in the future. It says, in effect, as long as robots are not conscious or sentient or whatever ontological criteria count, no worries. Once they achieve this capability, however, then we should consider extending some level of moral concern and respect. All of which means, of course, that this “solution” to the question “can and should robots have rights?” is less a solution and more of a decision not to decide. Furthermore, when the decisive moment (whenever that might be and however it might occur) does in fact come, there remain several theoretical and practical difficulties that make this way of thinking much more problematic than it initially appears to be.

First, there are terminological complications. A term like “consciousness,” for example, does not admit of a univocal characterization, but denotes, as Velmans ( 2000 , p. 5) points out, “many different things to many different people.” In fact, if there is any general agreement among philosophers, psychologists, cognitive scientists, neurobiologists, AI researchers, and robotics engineers regarding consciousness, it is that there is little or no agreement when it comes to defining and characterizing the concept. To make matters more complex, the problem is not just with the lack of a basic definition; the problem may itself already be a problem. “Not only is there no consensus on what the term consciousness denotes,” Güzeldere ( 1997 , p. 7) writes, “but neither is it immediately clear if there actually is a single, well-defined ‘the problem of consciousness’ within disciplinary (let alone across disciplinary) boundaries. Perhaps the trouble lies not so much in the ill definition of the question, but in the fact that what passes under the term consciousness as an all too familiar, single, unified notion may be a tangled amalgam of several different concepts, each inflicted with its own separate problems.” Other properties, like sentience, unfortunately do not do much better. As Daniel Dennett demonstrates in his eponymously titled essay, the reason “why you cannot make a computer that feels pain” has little or nothing to do with the technical challenges with making pain computable. It proceeds from the fact that we do not know what pain is in the first place. In other words, “there can be,” as Dennett ( 1998 , p. 228) concludes, “no true theory of pain, and so no computer or robot could instantiate the true theory of pain, which it would have to do to feel real pain.”

Second, even if it were possible to resolve these terminological difficulties, maybe not once and for all but at least in a way that would be widely accepted, there remain epistemological limitations concerning detection of the capability in question. How can one know whether a particular robot has actually achieved what is considered necessary for something to have rights, especially because most, if not all, of the qualifying capabilities or properties are internal states-of-mind? This is, of course, connected to what philosophers call the other minds problem, the fact that, as Haraway (2008, p. 226) cleverly describes it, we cannot climb into the heads of others “to get the full story from the inside.” Although philosophers, psychologists, and neuroscientists throw considerable argumentative and experimental effort at this problem, it is not able to be resolved in any way approaching what would pass for definitive evidence, strictly speaking. In the end, not only are these efforts unable to demonstrate with any certitude whether animals, machines, or other entities are in fact conscious (or sentient) and therefore legitimate moral persons (or not), we are left doubting whether we can even say the same for other human beings. As Kurzweil (2005, p. 380) candidly admits, “we assume other humans are conscious, but even that is an assumption,” because “we cannot resolve issues of consciousness entirely through objective measurement and analysis (science).”

Finally, there are practical complications to this entire procedure. “If (ro)bots might one day be capable of experiencing pain and other affective states,” Wallach and Allen (2009, p. 209) write, “a question that arises is whether it will be moral to build such systems—not because of how they might harm humans, but because of the pain these artificial systems will themselves experience. In other words, can the building of a (ro)bot with a somatic architecture capable of feeling intense pain be morally justified…?” If it were in fact possible to construct a machine that is sentient and “feels pain” (however that term would be defined and instantiated) in order to demonstrate machine capabilities, then doing so might be ethically suspect insofar as, in constructing such a mechanism, we would not be doing everything in our power to minimize its suffering. Consequently, moral philosophers and robotics engineers find themselves in a curious and not entirely comfortable situation. One would need to be able to construct a robot that feels pain in order to demonstrate the presence of sentience; but doing so could be, on that account, already to risk engaging in actions that are immoral. Or to put it another way, demonstrating whether robots can have rights might only be possible by violating those very rights.

In opposition to these two approaches, there are two other modalities that uphold (or at least seek to uphold) the is/ought distinction. In the first version, one affirms that robots can have rights but denies that this fact requires us to accord them social or moral standing. This is the argument that has been developed and defended by Bryson (2010) in her provocatively titled essay “Robots Should Be Slaves.” Bryson’s argument goes like this: Robots are property. No matter how capable they are, appear to be, or may become, we are obligated not to be obligated by them. “It is,” Bryson (2016, p. 6) argues elsewhere, “unquestionably within our society’s capacity to define robots and other AI as moral agents and patients. In fact, many authors (both philosophers and technologists) are currently working on this project. It may be technically possible to create AI that would meet contemporary requirements for agency or patiency. But even if it is possible, neither of these two statements makes it either necessary or desirable that we should do so.” In other words, it is entirely possible to create robots that can have rights, but we should not do so.

The reason for this, Bryson ( 2010 , p. 65) argues, derives from the need to protect human individuals and social institutions. “My argument is this: given the inevitability of our ownership of robots, neglecting that they are essentially in our service would be unhealthy and inefficient. More importantly, it invites inappropriate decisions such as misassignations of responsibility or misappropriations of resources.” This is why the word “slave,” although somewhat harsh, is entirely appropriate. Irrespective of what they are, what they can become, or what some users might assume them to be, we should treat all artifacts as mere tools and instruments. To its credit, this approach succeeds insofar as it reasserts and reconfirms the instrumental theory in the face of (perceived) challenges from a new kind of socially interactive and seemingly animate device. No matter how interactive, intelligent, or animated our AIs and robots become, they should be, now and forever, considered to be instruments or slaves in our service, nothing more. “We design, manufacture, own and operate robots,” Bryson ( 2010 , p. 65) writes. “They are entirely our responsibility. We determine their goals and behaviour, either directly or indirectly through specifying their intelligence, or even more indirectly by specifying how they acquire their own intelligence. But at the end of every indirection lies the fact that there would be no robots on this planet if it weren’t for deliberate human decisions to create them.”

There are, however, at least two problems with this proposal. First, it requires a kind of asceticism. Bryson’s text issues what amounts to imperatives that take the form of social prohibitions directed to both designers and users. For designers, “thou shalt not create robots to be companions.” For users, no matter how interactive or capable a robot is (or can become), “thou shalt not treat your robot as yourself.” The validity and feasibility of these prohibitions, however, are challenged by actual data—not just anecdotal evidence gathered from the rather exceptional experiences of soldiers working with Packbots on the battlefield but numerous empirical studies of human/robot interaction that verify the media equation initially proposed by Reeves and Nass. In two recent studies (Rosenthal-von der Pütten et al. 2013 and Suzuki et al. 2015 ), for instance, researchers found that human users empathized with what appeared to be robot suffering even when they had prior experience with the device and knew that it was “just a machine.” To put it in a rather crude vernacular form: Even when our head tells us it’s just a robot, our heart cannot help but feel for it. Footnote 5

Second, this way of thinking requires that we institute a class of instrumental servants or slaves. The problem here is not what one might think, namely, how the robot-slave might feel about its subjugation. The problem is with us and the effect this kind of institutionalized slavery could have on human individuals and communities. As de Tocqueville (2004) observed, slavery was not just a problem for the slave; it also had deleterious effects on the master and his social institutions. Clearly Bryson’s use of the term “slave” is provocative and morally charged, and it would be impetuous to simply presume that this proposal for a kind of “slavery 2.0” would be the same or even substantially similar to what had occurred (and is still unfortunately occurring) with human bondage. But, and by the same token, we should also not dismiss or fail to take into account the documented evidence and historical data concerning slave-owning societies and how institutionalized forms of slavery affect things.

The final modality also appears to support the independence and asymmetry of the two statements, but it does so by denying the first and affirming the second. In this case, which is something proposed and developed by Darling (2012, 2016), social robots, at least in terms of the currently available technology, cannot have rights. They do not, at least at this particular point in time, possess the necessary capabilities or properties to be considered full moral and legal persons. Despite this fact, there is, Darling asserts, something qualitatively different about the way we encounter and perceive social robots. “Looking at state of the art technology, our robots are nowhere close to the intelligence and complexity of humans or animals, nor will they reach this stage in the near future. And yet, while it seems far-fetched for a robot’s legal status to differ from that of a toaster, there is already a notable difference in how we interact with certain types of robotic objects” (Darling 2012, p. 1). This occurs, Darling continues, principally due to our tendencies to anthropomorphize things by projecting into them cognitive capabilities, emotions, and motivations that do not necessarily exist. Socially interactive robots, in particular, are intentionally designed to leverage and manipulate this proclivity. “Social robots,” Darling (2012, p. 1) explains, “play off of this tendency by mimicking cues that we automatically associate with certain states of mind or feelings. Even in today’s primitive form, this can elicit emotional reactions from people that are similar, for instance, to how we react to animals and to each other.” And it is this emotional reaction that necessitates obligations in the face of social robots. “Given that many people already feel strongly about state-of-the-art social robot ‘abuse,’ it may soon become more widely perceived as out of line with our social values to treat robotic companions in a way that we would not treat our pets” (Darling 2012, p. 1). Footnote 6

The obvious advantage to this way of thinking is that it is able to scale to recent technological developments in social robotics and the apparent changes they have produced in our moral intuitions. Even if social robots cannot be moral subjects strictly speaking (at least not yet), there is something about this kind of machine that looks and feels different. According to Darling (2016, p. 213), it is because we “perceive robots differently than we do other objects” that one should consider extending some level of legal protections to social robots but not to other objects. This conclusion is consistent with Hume’s thesis. If “ought” cannot be derived from “is,” then axiological decisions concerning moral value are little more than sentiments based on how we feel about something at a particular time. Darling mobilizes a version of this moral sentimentalism with respect to social robots: “Violent behavior toward robotic objects feels wrong to many of us, even if we know that the abused object does not experience anything” (Darling 2016, p. 223). Consequently, and to its credit, Darling’s proposal, unlike Bryson’s “slavery 2.0” argument, tries to accommodate and work with rather than against recent and empirically documented experiences with social robots.

There are, however, a number of complications with this approach. First, basing decisions concerning moral standing on individual perceptions and sentiment can be criticized for being capricious and inconsistent. “Feelings,” Kant (1983, p. 442) writes in response to this kind of moral sentimentalism, “naturally differ from one another by an infinity of degrees, so that feelings are not capable of providing a uniform measure of good and evil; furthermore, they do so even though one man cannot by his feeling judge validly at all for other men.” Additionally, because sentiment is a matter of individual experience, it remains uncertain as to whose perceptions actually matter or make the difference. Who, for instance, is included (and who is excluded) from the collective first person “we” that Darling operationalizes in and makes the subject of her proposal? In other words, whose sentiments count when it comes to decisions concerning the extension of moral and legal rights to others, and do all sentiments have the same status and value when compared to each other?

Second, despite the fact that Darling’s proposal appears to uphold the Humean thesis, differentiating what ought to be from what is, it still proceeds by inferring “ought” from “is,” or at least from what appears to be. According to Darling (2016, p. 214), everything depends on “our well-documented inclination” to anthropomorphize things. “People are prone,” she argues, “to anthropomorphism; that is, we project our own inherent qualities onto other entities to make them seem more human-like”—qualities like emotions, intelligence, sentience, etc. Even though these capabilities do not (for now at least) really exist in the mechanism, we project them onto the robot in such a way that we then perceive them to be something that we presume actually belongs to the robot. By focusing on this anthropomorphic operation, Darling mobilizes and deploys a well-known Kantian distinction. What ultimately matters, according to her argument, is not what the robot actually is “in and of itself.” What makes the difference is how the mechanism comes to be perceived. It is, in other words, the way the robot appears to us that determines how it comes to be treated. Although this change in perspective represents a shift from a kind of naïve empiricism to a more sophisticated phenomenological formulation (at least in the Kantian sense of the word), it still derives “ought,” specifically how something ought to be treated, from what it appears to be.

Finally, because what ultimately matters is how “we” see things, this proposal remains thoroughly anthropocentric and instrumentalizes others. According to Darling, the principal reason we need to consider extending legal rights to others, like social robots, is for our sake. This follows the well-known Kantian argument for restricting animal abuse, and Darling endorses this formulation without any critical hesitation whatsoever: “The Kantian philosophical argument for preventing cruelty to animals is that our actions toward non-humans reflect our morality—if we treat animals in inhumane ways, we become inhumane persons. This logically extends to the treatment of robotic companions” (Darling 2016, pp. 227–228). This way of thinking, although potentially expedient for developing and justifying new forms of legal protections, renders the inclusion of previously excluded others less than altruistic; it transforms animals and robot companions into nothing more than instruments of human self-interest. The rights of others, in other words, are not about them; they are all about us.

Thinking otherwise

Although each modality has its advantages, none of the four provides what would be considered a definitive case either for or against robot rights. At this point, we can obviously continue to develop arguments and accumulate evidence supporting one or the other. But this effort, although entirely reasonable and justified, will simply elaborate what has already been formulated and will not necessarily advance the debate much further than what has already been achieved. In order to get some new perspective on the issue, we can (and perhaps should) try something different. This alternative, which, following Gunkel (2007), could be called thinking otherwise, does not argue either for or against the is-ought inference but takes aim at and deconstructs this conceptual opposition. And it does so by deliberately flipping the Humean script, considering not “how ought may be derived from is” but rather “how is is only able to be derived from ought.”

This is precisely the innovation introduced and developed by Emmanuel Levinas who, in direct opposition to the usual way of thinking, asserts that ethics precedes ontology. In other words, it is the axiological aspect, the “ought” dimension, that comes first, in terms of both temporal sequence and status, and then the ontological aspect follows from this decision. Footnote 7 This is a deliberate provocation that cuts across the grain of the philosophical tradition. As Floridi (2013, p. 116) correctly points out, in most moral theory, “what the entity is [the ontological question] determines the degree of moral value it enjoys, if any [the ethical question].” Levinas deliberately inverts and distorts this procedure. According to this way of thinking, we are first confronted with a mess of anonymous others who intrude on us and to whom we are obligated to respond even before we know anything at all about them. To use Hume’s terminology—which will be a kind of translation insofar as Hume’s philosophical vocabulary, and not just his language, is something that is foreign to Levinas’s own formulations—we are first obligated to respond and then, after having made a response, what or who we responded to is able to be determined and identified. As Derrida (2005, p. 80) has characterized it, the crucial task in this alternative way of thinking moral consideration is to “reach a place from which the distinction between who and what comes to appear and become determined.”

The advantage to this procedure is that it provides an entirely different method for responding to the challenge not just of social robots but of the way we have addressed and decided things in the face of this challenge. Following the contours of this Levinasian innovation, moral consideration is decided and conferred not on the basis of some pre-determined ontological criteria or capability (or lack thereof) but in the face of actual social relationships and interactions. “Moral consideration,” as Coeckelbergh ( 2010 , p. 214) describes it, “is no longer seen as being ‘intrinsic’ to the entity: instead it is seen as something that is ‘extrinsic’: it is attributed to entities within social relations and within a social context.” In other words, as we encounter and interact with others—whether they be other human persons, an animal, the natural environment, or a social robot—this other entity is first and foremost situated in relationship to us. Consequently, the question of social and moral status does not necessarily depend on what the other is in its essence but on how she/he/it (and the pronoun that comes to be deployed in this situation is not immaterial) supervenes before us and how we decide, in “the face of the other” (to use Levinasian terminology), to respond. In this transaction, the “relations are prior to the things related” (Callicott 1989 , p. 110), instituting what Gerdes ( 2015 ), following Coeckelbergh ( 2010 ), has called “a relational turn” in ethics. Footnote 8

From the outset, this Levinasian-influenced relational ethic might appear to be similar to that developed by Kate Darling. Darling, as we have seen, also makes a case for extending moral and legal considerations to social robots irrespective of what they are or can be. The important difference, however, is anthropomorphism. For Darling, the reason we ought to treat robots well is that we perceive something of ourselves in them. Even if these traits and capabilities are not really there in the mechanism, we project them into or onto others—and other forms of otherness, like that represented by social robots—by way of anthropomorphism. For Darling, then, it is because the other looks and feels (to us) to be something like us that we are then obligated to extend to it some level of moral and legal consideration. For Levinas this anthropomorphic operation is the problem insofar as it reduces the other to a modality of the same—turning what is other into an alter-ego and mirrored projection of oneself. Levinas deliberately resists this gesture that already domesticates and even violates the alterity of the other. Ethics for Levinas is entirely otherwise: “The strangeness of the Other, his irreducibility to the I, to my thoughts and my possessions, is precisely accomplished as a calling into question of my spontaneity, as ethics” (Levinas 1969, p. 43). For Levinas, then, ethics is not based on “respect for others” but transpires in the face of the other and the interruption of ipseity that it produces. The principal moral gesture, therefore, is not the conferring or extending of rights to others as a kind of benevolent gesture or even an act of compassion but deciding how to respond to the Other who supervenes before me in such a way that always and already places my assumed rights and privilege in question.

This alternative configuration, therefore, does not so much answer or respond to the question with which we began as it alters the terms of the inquiry itself. When one asks “Can or should robots have rights?” the form of the question already makes an assumption, namely that rights are a kind of personal property or possession that an entity can have or should be bestowed with. Levinas does not inquire about rights nor does his moral theory attempt to respond to this form of questioning. In fact, the word “rights” is not in his philosophical vocabulary and does not appear as such in his published work. Consequently, the Levinasian question is directed and situated otherwise: “What does it take for something—another human person, an animal, a mere object, or a social robot—to supervene and be revealed as Other?” This other question —a question about others that is situated otherwise—comprises a more precise and properly altruistic inquiry. It is a mode of questioning that remains open, endlessly open, to others and other forms of otherness. For this reason, it deliberately interrupts and resists the imposition of power that Birch ( 1993 , p. 39) finds operative in all forms of rights discourse: “The nub of the problem with granting or extending rights to others…is that it presupposes the existence and the maintenance of a position of power from which to do the granting.” Whereas Darling is interested in “extending legal protection to social robots,” Levinas provides a way to question the assumptions and consequences involved in this very gesture. What we see in the face or the faceplate of the social robot, then, is not just a question concerning the rights of others—and other forms of socially significant otherness—but a challenge to this very way of asking about moral patiency.

Levinasian philosophy, therefore, has the potential to reorient the way we think about social robots and the question concerning rights. This alternative, however, still has at least one significant consequence that cannot and should not be ignored. Utilizing Levinasian thought for the purposes of robophilosophy requires fighting against and struggling to break free from the gravitational pull of Levinas’s own anthropocentric interpretations. Whatever the import of his unique contribution, “Other” in Levinas is still unapologetically human. Although he is not the first to identify it, Nealon (1998, p. 71) provides what is perhaps one of the most succinct descriptions of this problem: “In thematizing response solely in terms of the human face and voice, it would seem that Levinas leaves untouched the oldest and perhaps most sinister unexamined privilege of the same: anthropos [άνθρωπος], and only anthropos, has logos [λόγος]; and as such, anthropos responds not to the barbarous or the inanimate, but only to those who qualify for the privilege of ‘humanity,’ only those deemed to possess a face, only to those recognized to be living in the logos.” For Levinas, as for many of those who follow in the wake of his influence, Other has been exclusively operationalized as another human subject. If, as Levinas argues, ethics precedes ontology, then in Levinas’s own work anthropology and a certain brand of humanism still precede ethics.

This is not necessarily the only or even the best possible outcome. In fact, Levinas can maintain this anthropocentrism only by turning “face” into a kind of ontological property and thereby undermining and even invalidating many of his own philosophical innovations. For others, like Matthew Calarco, this is not and should not be the final word on the matter: “Although Levinas himself is for the most part unabashedly and dogmatically anthropocentric, the underlying logic of his thought permits no such anthropocentrism. When read rigorously, the logic of Levinas’s account of ethics does not allow for either of these two claims. In fact…Levinas’s ethical philosophy is, or at least should be, committed to a notion of universal ethical consideration, that is, an agnostic form of ethical consideration that has no a priori constraints or boundaries” (Calarco 2008, p. 55). In proposing this alternative reading, Calarco interprets Levinas against himself, arguing that the logic of Levinas’s account is in fact richer and more radical than the limited interpretation the philosopher had initially provided for it. “If this is indeed the case,” Calarco (2008, p. 55) concludes, “that is, if it is the case that we do not know where the face begins and ends, where moral considerability begins and ends, then we are obligated to proceed from the possibility that anything might take on a face. And we are further obligated to hold this possibility permanently open.” This means, of course, that we would be obligated to consider all kinds of others as Other, including other human persons, animals, the natural environment, artifacts, technologies, and robots. An “altruism” that tries to limit in advance who can or should be Other would not be, strictly speaking, altruistic.

Conclusions

Although the question concerning machine moral patiency has been a minor thread in robot ethics, there are good reasons to pursue the inquiry. Whether intended to do so or not, social robots effectively challenge unquestioned assumptions about technology and require that we make a decision—even if it is a decision not to decide—concerning the position and status of these socially interactive mechanisms. This essay has engaged this material by asking about rights, specifically “Can and should social robots have rights?” The question is formulated in terms of an historically important philosophical distinction—the is-ought problem—and it has generated four different kinds of responses, which are currently available in the existing literature. Rather than continue to pursue one or even a combination of these available modalities, I have proposed an alternative that addresses things otherwise. This other question —a question that is not just different but also open to difference—capitalizes on the philosophical innovations of Emmanuel Levinas and endeavors not so much to answer the question “Can and should robots have rights?” but to reformulate the way one asks about moral patiency in the first place. This outcome is consistent with the two vectors of robophilosophy: to apply philosophical thinking to the unique challenges and opportunities of social robots and to permit the challenge confronted in the face of social robots to question and reconfigure philosophical thought itself.

Notes

The question concerning machine moral patiency is a marginal concern and remains absent from or just on the periphery of much of the current research in robot and machine ethics. As an example, consider Shannon Vallor’s Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting . In this book, the question “Can/should robots have rights?” occurs once, in a brief parenthetical aside, and is not itself taken up as a subject worth pursuing in its own right ( Vallor 2016, p. 209). Two notable exceptions to this seemingly systemic marginalization are Whitby’s “Sometimes it’s Hard to be a Robot: A Call for Action on the Ethics of Abusing Artificial Agents” ( 2008 ) and Coeckelbergh’s “Robot Rights? Towards a Social-Relational Justification of Moral Consideration” ( 2010 ). On the marginalization of the question concerning moral patiency in contemporary ethics and its consequences for AI and robotics, see Gunkel ( 2012 ).

Initial indications of this date back even further and can be found, for example, in Joseph Weizenbaum’s ( 1976 ) demonstrations with the ELIZA program.

Although not always explicitly identified as such, this shift in qualifying properties from general “intelligence” to “sentience” capitalizes on the innovation of animal rights philosophy. The pivotal move in animal rights thinking, as Derrida ( 2008 , p. 27) points out, occurs not in the work of Peter Singer but with a single statement originally issued by Bentham ( 1780 , p. 283): "The question is not, ’Can they reason?’ nor, ’Can they talk?’ but ’Can they suffer?’" For a detailed analysis of the connection between animal rights philosophy and robot ethics, see Gunkel ( 2012 ).

Because this way of thinking is future oriented and speculative, it is often fertile soil for science fiction, e.g. Star Trek, Battlestar Galactica, Humans, Westworld, etc. The recent Channel 4/AMC co-production Humans, which is a remake of the Swedish television series Real Humans, is a good example. Here you have two kinds of androids: those that are ostensibly empty-headed instruments lacking any sort of self-awareness or conscious thinking, and those that possess some level of independent thought or self-consciousness. The former are not considered moral subjects and can be utilized and disposed of without us, the audience, really worrying about their welfare. The others, however, are different and make an entirely different claim on our emotional and moral sensibilities.

Additional empirical evidence supporting this aspect of HRI (human–robot interaction) has been reported in the work of Bartneck and Hu ( 2008 ) and Bartneck et al. ( 2007 ).

On the question of “robot abuse” see De Angeli et al. ( 2005 ), Bartneck and Hu ( 2008 ), Whitby ( 2008 ), and Nourbakhsh ( 2013 , pp. 56–58).

Although it is beyond the scope of this essay, it would be worth comparing the philosophical innovations of Emmanuel Levinas to the efforts of Knud Ejler Løgstrup ( 1997 ). In their “Introduction” to the English translation of Løgstrup’s The Ethical Demand , Alasdair MacIntyre and Hans Fink ( 1997 , xxxiii) offer the following comment: “Bauman in his Postmodern Ethics (Oxford: Blackwell, 1984 )—a remarkable conspectus of a number of related postmodern standpoints—presents Løgstrup’s work as having close affinities with Emmanuel Levinas. (Levinas was teaching at Strasbourg when Løgstrup was there in 1930, but there is no evidence that Løgstrup attended his Lectures.) Levinas, who was one of Husserl’s students, was from the outset and has remained much closer to Husserlian phenomenology than Løgstrup ever was, often defining his own positions, even when they are antagonistic to Husserl’s, in terms of their relationships to Husserl’s. But, on some crucial issues, as Bauman’s exposition makes clear, Levinas and Løgstrup are close.” MacIntyre and Fink ( 1997 , xxxiv) continue the comparison by noting important similarities that occur in the work of both Levinas and Løgstrup concerning the response to the Other, stressing that responsibility "is not derivable from or founded upon any universal rule or set of rights or determinate conception of the human good. For it is more fundamental in the moral life than any of these.”

For a critical examination of the “relational-turn” applied to the problematic of animal rights philosophy, see Coeckelbergh and Gunkel’s co-authored paper “Facing Animals: A Relational, Other-Oriented Approach to Moral Standing” ( 2014 ), the critical commentary provided by Piekarski ( 2016 ), and Coeckelbergh and Gunkel’s ( 2016 ) reply to Piekarski’s criticisms.

Anderson, M., & Anderson, S. L. (2011). Machine ethics . Cambridge: Cambridge University Press.

Asaro, P. (2006). What should we want from a robot ethic? International Review of Information Ethics, 6 (12), 9–16. http://www.i-r-i-e.net/inhalt/006/006_full.pdf .

Bartneck, C., van der Hoek, M., Mubin, O., & Mahmud, A. A. (2007). Daisy, daisy, give me your answer do!—Switching off a robot. In Proceedings of the 2nd ACM/IEEE international conference on human-robot interaction (pp. 217–222). doi: 10.1145/1228716.1228746.

Bartneck, C., & Hu, J. (2008). Exploring the abuse of robots. Interaction Studies, 9 (3), 415–433. doi: 10.1075/is.9.3.04bar.

Bauman, Z. (1984). Postmodern ethics . Oxford: Blackwell.

Bentham, J. (1780). An introduction to the principles of morals and legislation (J. H. Burns & H. L. Hart, Ed.). Oxford: Oxford University Press, 2005.

Birch, T. (1993). Moral considerability and universal consideration. Environmental Ethics, 15 , 313–332.

Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies . New York: Oxford University Press.

Breazeal, C. L. (2002). Designing sociable robots . Cambridge, MA: MIT Press.

Bryson, J. (2010). Robots should be slaves. In Y. Wilks (Ed.), Close engagements with artificial companions: Key social, psychological, ethical and design issues (pp. 63–74). Amsterdam: John Benjamins.

Bryson, J. (2016). Patiency is not a virtue: AI and the design of ethical systems. AAAI Spring Symposium Series. Ethical and Moral Considerations in Non-Human Agents. http://www.aaai.org/ocs/index.php/SSS/SSS16/paper/view/12686 .

Calarco, M. (2008). Zoographies: The question of the animal from Heidegger to Derrida . New York: Columbia University Press.

Callicott, J. B. (1989). In defense of the land ethic: Essays in environmental philosophy . Albany, NY: State University of New York Press.

Carpenter, J. (2015). Culture and human-robot interaction in militarized spaces: A war story . New York: Ashgate.

Coeckelbergh, M. (2010). Robot rights? Towards a social-relational justification of moral consideration. Ethics and Information Technology, 12 (3), 209–221. doi: 10.1007/s10676-010-9235-5 .

Coeckelbergh, M. (2012). Growing moral relations: Critique of moral status ascription . New York: Palgrave MacMillan.

Coeckelbergh, M., & Gunkel, D. J. (2014). Facing animals: A relational, other-oriented approach to moral standing. Journal of Agricultural & Environmental Ethics, 27 (5), 715–733. doi: 10.1007/s10806-013-9486-3 .

Coeckelbergh, M., & Gunkel, D. J. (2016). Response to “The Problem of the Question About Animal Ethics” by Michal Piekarski. Journal of Agricultural and Environmental Ethics, 29 (4), 717–721. doi: 10.1007/s10806-016-9627-6 .

Committee on Legal Affairs. (2016). Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics. European Parliament. http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML%2BCOMPARL%2BPE-582.443%2B01%2BDOC%2BPDF%2BV0//EN .

Darling, K. (2012). Extending legal protection to social robots. IEEE Spectrum . http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/extending-legal-protection-to-social-robots .

Darling, K. (2016). Extending legal protection to social robots: The effects of anthropomophism, empathy, and violent behavior toward robotic objects. In R. Calo, A. M. Froomkin & I. Kerr (Eds.), Robot law (pp. 213–231). Northampton, MA: Edward Elgar.

De Angeli, A., Brahnam, S., & Wallis, P. (2005). Abuse: The dark side of human–computer interaction. Interact 2005. http://www.agentabuse.org/ .

De Tocqueville, A. (2004). Democracy in America (A. Goldhammer, Trans.). New York: Penguin.

Dennett, D. C. (1998). Brainstorms: Philosophical essays on mind and psychology . Cambridge, MA: MIT Press.

Derrida, J. (2005). Paper Machine (R. Bowlby, Trans.). Stanford, CA: Stanford University Press.

Derrida, J. (2008). The animal that therefore I am (M.-L. Mallet, Ed., D. Wills, Trans.). New York: Fordham University Press.

Feenberg, A. (1991). Critical theory of technology . Oxford: Oxford University Press.

Floridi, L. (2013). The ethics of information . Oxford: Oxford University Press.

Garreau, J. (2007). Bots on the ground: In the field of battle (or even above it), robots are a soldier’s best friend. Washington Post , 6 May. http://www.washingtonpost.com/wp-dyn/content/article/2007/05/05/AR2007050501009.html .

Gerdes, A. (2015). The issue of moral consideration in robot ethics. ACM SIGCAS Computers & Society, 45 (3), 274–280. doi: 10.1145/2874239.2874278 .

Goertzel, B. (2002). Thoughts on AI morality. Dynamical Psychology: An International, Interdisciplinary Journal of Complex Mental Processes . http://www.goertzel.org/dynapsyc/2002/AIMorality.htm .

Gunkel, D. J. (2007). Thinking otherwise: Philosophy, communication, technology . West Lafayette, IN: Purdue University Press.

Gunkel, D. J. (2012). The machine question: Critical perspectives on AI, robots, and ethics . Cambridge, MA: MIT Press.

Güzeldere, G. (1997). The many faces of consciousness: A field guide. In N. Block, O. Flanagan & G. Güzeldere (Eds.), The nature of consciousness: Philosophical debates (pp. 1–68). Cambridge, MA: MIT Press.

Hall, J. S. (2001). Ethics for machines. KurzweilAI.net, July 5. http://www.kurzweilai.net/ethics-for-machines .

Haraway, D. J. (2008). When species meet . Minneapolis, MN: University of Minnesota Press.

Heidegger, M. (1977). The question concerning technology and other essays (W. Lovitt, Trans.). New York: Harper & Row.

Hudson, W. D. (1969). The is/ought question: A collection of papers on the central problem in moral philosophy . London: Macmillan.

Hume, D. (1980). A treatise of human nature . New York: Oxford University Press.

Johnson, D. G. (2006). Computer systems: Moral entities but not moral agents. Ethics and Information Technology, 8 (4), 195–204. doi: 10.1007/s10676-006-9111-5 .

Kant, I. (1983). Grounding for the metaphysics of morals . Indianapolis, IN: Hackett Publishing.

Kurzweil, R. (2005). The singularity is near: When humans transcend biology . New York: Viking.

Levinas, E. (1969). Totality and infinity: An essay on exteriority (A. Lingis, Trans.). Pittsburgh, PA: Duquesne University.

Levy, D. (2005). Robots unlimited: Life in a virtual age . Boca Raton, FL: CRC Press.

Levy, D. (2009). The ethical treatment of artificially conscious robots. International Journal of Social Robotics, 1 (3), 209–216. doi: 10.1007/s12369-009-0022-6 .

Løgstrup, K. E. (1997). The ethical demand (H. Fink & A. MacIntyre, Trans.). Notre Dame: University of Notre Dame Press.

Lyotard, J. F. (1984). The postmodern condition: A report on knowledge (G. Bennington & B. Massumi, Trans.). Minneapolis, MN: University of Minnesota Press.

MacIntyre, A., & Fink, H. (1997). Introduction. In K. E. Løgstrup (Ed.), The ethical demand . Notre Dame: University of Notre Dame Press.

Marx, K. (1977). Capital: A critique of political economy (B. Fowkes, Trans.). New York: Vintage Books.

Nealon, J. (1998). Alterity Politics: Ethics and performative subjectivity . Durham, NC: Duke University Press.

Nourbakhsh, I. (2013). Robot futures . Cambridge: MIT Press.

Piekarski, M. (2016). The problem of the question about animal ethics: Discussion with Mark Coeckelbergh and David Gunkel. Journal of Agricultural and Environmental Ethics, 29 (4), 705–715. doi: 10.1007/s10806-016-9626-7 .

Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places . Cambridge: Cambridge University Press.

Rosenthal-von der Pütten, A. M., Krämer, N. C., Hoffmann, L., Sobieraj, S., & Eimler, S. C. (2013). An experimental study on emotional reactions towards a robot. International Journal of Social Robotics, 5 (1), 17–34. doi: 10.1007/s12369-012-0173-8 .

Schurz, G. (1997). The is-ought problem: An investigation in philosophical logic . Dordrecht: Springer.

Searle, J. (1964). How to derive “ought” from “is”. The Philosophical Review, 73 (1), 43–58.

Seibt, J., Nørskov, M., & Andersen, S. S. (2016). What social robots can and should do: Proceedings of robophilosophy 2016 . Amsterdam: IOS Press.

Singer, P., & Sagan, A. (2009). When robots have feelings. The Guardian . https://www.theguardian.com/commentisfree/2009/dec/14/rage-against-machines-robots .

Singer, P. W. (2009). Wired for war: The robotics revolution and conflict in the twenty-first century . New York: Penguin Books.

Sparrow, R. (2004). The turing triage test. Ethics and Information Technology, 6 (4), 203–213. doi: 10.1007/s10676-004-6491-2 .

Suzuki, Y., Galli, L., Ikeda, A., Itakura, S. & Kitazaki, M. (2015). Measuring empathy for human and robot hand pain using electroencephalography. Scientific Reports, 5 . doi: 10.1038/srep15924 .

Turkle, S. (2012). Alone together: Why we expect more from technology and less from each other . New York: Basic Books.

Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting . Oxford: Oxford University Press.

Velmans, M. (2000). Understanding consciousness . London: Routledge.

Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong . Oxford: Oxford University Press.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation . San Francisco: W. H. Freeman.

Whitby, B. (2008). Sometimes it’s hard to be a robot: A call for action on the ethics of abusing artificial agents. Interacting with Computers, 20 (3), 326–333. doi: 10.1016/j.intcom.2008.02.002 .

Acknowledgements

This essay was written for and presented at Robophilosophy 2016, which took place at Aarhus University in Aarhus, Denmark (17–21 October 2016). My sincere thanks to Johanna Seibt, Marco Nørskov, and the programming committee for the kind invitation to participate in this event and to the other participants who contributed insightful questions and comments that have (I hope) been addressed in this published version.

Author information

Authors and Affiliations

Northern Illinois University, DeKalb, IL, USA

David J. Gunkel

Corresponding author

Correspondence to David J. Gunkel.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Gunkel, D.J. The other question: can and should robots have rights?. Ethics Inf Technol 20 , 87–99 (2018). https://doi.org/10.1007/s10676-017-9442-4

Published: 17 October 2017

Issue Date: June 2018

DOI: https://doi.org/10.1007/s10676-017-9442-4

  • Philosophy of technology
  • Social robots
  • Emmanuel Levinas

2020: The Year of Robot Rights

Several years ago, in an effort to initiate dialogue about the moral and legal status of technological artifacts, I posted a photograph of myself holding a sign that read “Robot Rights Now” on Twitter. Responses to the image were, as one might imagine, polarizing, with advocates and critics lining up on opposite sides of the issue. What I didn’t fully appreciate at the time is just how divisive an issue it is.

For many researchers and developers slaving away at real-world applications and problems, the very notion of “robot rights” produces something of an allergic reaction. Over a decade ago, roboticist Noel Sharkey famously called the very idea “a bit of a fairy tale.” More recently, AI expert Joanna Bryson argued that granting rights to robots is a ridiculous notion and an utter waste of time, while philosopher Luciano Floridi downplayed the debate, calling it “distracting and irresponsible, given the pressing issues we have at hand.”

And yet, judging by the slew of recent articles on the subject, 2020 is shaping up to be the year the concept captures the public’s interest and receives the attention I believe it deserves.

The questions at hand are straightforward: At what point might a robot, algorithm, or other autonomous system be held accountable for the decisions it makes or the actions it initiates? When, if ever, would it make sense to say, “It’s the robot’s fault”? Conversely, when might a robot, an intelligent artifact, or other socially interactive mechanism be due some level of social standing or respect?

When, in other words, would it no longer be considered a waste of time to ask the question: “Can and should robots have rights?”

Before we can even think about answering this question, we should define rights, a concept more slippery than one might expect. Although we use the word in both moral and legal contexts, many individuals don’t know what rights actually entail, and this lack of precision can create problems. One hundred years ago, American jurist Wesley Hohfeld observed that even experienced legal professionals tend to misunderstand rights, often using contradictory or insufficient formulations in the course of a decision or even a single sentence. So he created a typology that breaks rights down into four related aspects or what he called “incidents”: claims, powers, privileges, and immunities.

His point was simple: A right, like the right one has to a piece of property, like a toaster or a computer, can be defined and characterized by one or more of these elements. It can, for instance, be formulated as a claim that the owner has over and against another individual. Or it could be formulated as an exclusive privilege for use and possession that is granted to the owner. Or it could be a combination of the two.
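
To make Hohfeld's point a little more concrete, here is a minimal, purely illustrative sketch (written in Python, with invented names) that models a right as a bundle of one or more Hohfeldian incidents rather than as a single monolithic thing. The particular incident assignments below are assumptions chosen for illustration, not legal analysis.

    from dataclasses import dataclass
    from enum import Enum, auto

    # Hohfeld's four incidents; the comments paraphrase their usual glosses.
    class Incident(Enum):
        CLAIM = auto()      # a duty that someone else owes to the right-holder
        PRIVILEGE = auto()  # a liberty to act without owing a duty to refrain
        POWER = auto()      # the ability to alter legal relations (sell, waive, license)
        IMMUNITY = auto()   # protection against others altering one's legal relations

    @dataclass(frozen=True)
    class Right:
        name: str
        incidents: frozenset  # a right is characterized by one or more incidents

    # Ownership of a toaster plausibly bundles all four incidents.
    toaster_ownership = Right(
        "ownership of a toaster",
        frozenset({Incident.CLAIM, Incident.PRIVILEGE, Incident.POWER, Incident.IMMUNITY}),
    )

    # A hypothetical privacy right for a domestic robot might bundle only two.
    robot_privacy = Right(
        "privacy of a domestic social robot",
        frozenset({Incident.CLAIM, Incident.IMMUNITY}),
    )

    print(toaster_ownership.name, [i.name for i in toaster_ownership.incidents])
    print(robot_privacy.name, [i.name for i in robot_privacy.incidents])

On this picture, asking whether some entity "has rights" is really asking which incidents, if any, are bundled together on its behalf, a much narrower question than whether it holds the full bundle that humans enjoy.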

Basically, rights are not one kind of thing; they are manifold and complex, and though Hohfeld defined them, his delineation doesn’t explain who has a particular right or why. For that, we have to rely on two competing legal theories, Will Theory and Interest Theory. Will Theory sets the bar for moral and legal inclusion rather high, requiring that the subject of a right be capable of making a claim to it on their own behalf. Interest Theory has a lower bar for inclusion, stipulating that rights may be extended to others irrespective of whether the entity in question can demand it or not.

Although each side has its advocates and critics, the debate between these two theories is considered to be irresolvable. What is important, therefore, is not to select the correct theory of rights but to recognize how and why these two competing ways of thinking about rights frame different problems, modes of inquiry, and possible outcomes. A petition to grant a writ of habeas corpus to an elephant, for instance, will look very different — and will be debated and decided in different ways — depending on what theoretical perspective comes to be mobilized.

We must also remember that the set of all possible robot rights is not identical to nor the same as the set of human rights. A common mistake results from conflation — the assumption that “robot rights” must mean “human rights.” We see this all over the popular press, in the academic literature, and even in policy discussions and debates.

This is a slippery slope. The question concerning rights is immediately assumed to entail or involve all human rights, not recognizing that the rights for one category of entity, like an animal or a machine, are not necessarily equivalent to nor the same as those enjoyed by another category of entity, like a human being. It is possible, as technologist and legal scholar Kate Darling has argued, to entertain the question of robot rights without this meaning all human rights. One could, for instance, advance the proposal — introduced by the French legal team of Alain and Jérémy Bensoussan — that domestic social robots, like Alexa, have a right to privacy for the purposes of protecting the family’s personal data. But considering this one right — the claim to privacy or the immunity from disclosure — does not and should not mean that we also need to give it the vote.

Ultimately, the question of the moral and legal status of a robot or an AI comes down to whether one believes a computer is capable of being a legally recognized person — we already live in a world where artificial entities like a corporation are persons — or remains nothing more than an instrument, tool, or piece of property.

This difference and its importance can be seen with recent proposals regarding the legal status of robots. On the one side, you have the European Parliament’s Resolution on Civil Law Rules on Robotics, which advised extending some aspects of legal personality to robots for the purposes of social inclusion and legal integration. On the other side, you have more than 250 scientists, engineers and AI professionals who signed an open letter opposing the proposals, asserting that robots and AI, no matter how autonomous or intelligent they might appear to be, are nothing more than tools. What is important in this debate is not what makes one side different from the other, but rather what both sides already share and must hold in common in order to have this debate in the first place. Namely, the conviction that there are two exclusive ontological categories that divide up the world — persons or property. This way of organizing things is arguably arbitrary, culturally specific, and often inattentive to significant differences.

Robots and AI are not just another entity to be accommodated to existing moral and legal categories. What we see in the face or the faceplate of the robot is a fundamental challenge to existing ways of deciding questions regarding social status. Consequently, the right(s) question entails that we not only consider the rights of others but that we also learn how to ask the right questions about rights, critically challenging the way we have typically decided these important matters.

Does this mean that robots or even one particular robot can or should have rights? I honestly can’t answer that question. What I do know is that we need to engage this matter directly, because how we think about this previously unthinkable question will have lasting consequences for us, for others, and for our moral and legal systems.

David Gunkel is Distinguished Teaching Professor of Communication Technology at Northern Illinois University and the author of, among other books, “The Machine Question: Critical Perspectives on AI, Robots, and Ethics,” “Of Remixology: Ethics and Aesthetics after Remix,” and “Robot Rights.”

Should robots have rights?

As robots gain citizenship and potential personhood in parts of the world, it’s appropriate to consider whether they should also have rights.

So argues Northeastern professor Woodrow Hartzog, whose research focuses in part on robotics and automated technologies.

“It’s difficult to say we’ve reached the point where robots are completely self-sentient and self-aware; that they’re self-sufficient without the input of people,” said Hartzog, who holds joint appointments in the School of Law and the College of Computer and Information Science at Northeastern. “But the question of whether they should have rights is a really interesting one that often gets stretched in considering situations where we might not normally use the word ‘rights.’”

In Hartzog’s consideration of the question, granting robots negative rights—rights that permit or oblige inaction—resonates.

He cited research by Kate Darling, a research specialist at the Massachusetts Institute of Technology, that indicates people relate more emotionally to anthropomorphized robots than those with fewer or no human qualities.

“When you think of it in that light, the question becomes, ‘Do we want to prohibit people from doing certain things to robots not because we want to protect the robot, but because of what violence to the robot does to us as human beings?’” Hartzog said.

In other words, while it may not be important to protect a human-like robot from a stabbing, someone stabbing a very human-like robot could have a negative impact on humanity. And in that light, Hartzog said, it would make sense to assign rights to robots.

There is another reason to consider assigning rights to robots, and that’s to control the extent to which humans can be manipulated by them.

While we may not have reached the point of existing among sentient bots, we’re getting closer, Hartzog said. Robots like Sophia, a humanoid robot that this year achieved citizenship in Saudi Arabia, put us on that path.

Sophia, a project of Hanson Robotics, has a human-like face modeled after Audrey Hepburn and utilizes advanced artificial intelligence that allows it to understand and respond to speech and express emotions.

“Sophia is an example of what’s to come,” Hartzog said. “She seems to be living in that area where we might say the full impact of anthropomorphism might not be realized, but we’re headed there. She’s far enough along that we should be thinking now about rules regarding how we should treat robots as well as the boundaries of how robots will be able to relate to us.”

The robot occupies the space Hartzog and others in computer science identified as the “uncanny valley.” That is, it is eerily similar to a human, but not close enough to feel natural. “Close, but slightly off-putting,” Hartzog said.

In considering the implications of human and robot interactions, then, we might be better off imagining a cute, but decidedly inhuman form. Think of the main character in the Disney movie Wall-E , Hartzog said, or a cuter version of the vacuuming robot Roomba.

He considered a thought experiment: Imagine having a Roomba that was equipped with AI assistance along the lines of Amazon’s Alexa or Apple’s Siri. Imagine it was conditioned to form a relationship with its owner, to make jokes, to say hello, to ask about one’s day.

“I would come to really have a great amount of affection for this Roomba,” Hartzog said. “Then imagine one day my Roomba starts coughing, sputtering, choking, one wheel has stopped working, and it limps up to me and says, ‘Father, if you don’t buy me an upgrade, I’ll die.’

“If that were to happen, is that unfairly manipulating people based on our attachment to human-like robots?” Hartzog asked.

It’s a question that asks us to confront the limits of our compassion, and one the law has yet to grapple with, he said.

What’s more, Hartzog’s fictional scenario isn’t so far afield.

“Home-care robots are going to be given a lot of access to our most intimate areas of life,” he said. “When robots get to the point where we trust them and we’re friends with them, what are the articulable boundaries for what a robot we’re emotionally invested in is allowed to do?”

Hartzog said that with the introduction of virtual assistants like Siri and Alexa, “we’re halfway there right now.”

Whether robots deserve human rights isn't the correct question. Whether humans really have them is.

While advances in robotics and artificial intelligence are cause for celebration, they also raise an important question about our relationship to these silicon-steel, human-made friends: Should robots have rights?

A being that knows fear and joy, that remembers the past and looks forward to the future and that loves and feels pain is surely deserving of our embrace, regardless of accidents of composition and manufacture — and it may not be long before robots possess those capacities.

Yet, there are serious problems with the claim that conscious robots should have rights just as humans do, because it’s not clear that humans fundamentally have rights at all. The eminent moral philosopher, Alasdair MacIntyre, put it nicely in his 1981 book, "After Virtue": "There are no such things as rights, and belief in them is one with belief in witches and in unicorns."

So, instead of talking about rights, we should talk about civic virtues. Civic virtues are those features of well-functioning social communities that maximize the potential for the members of those communities to flourish, and they include the habits of action of the community members that contribute to everyone’s being able to lead the good life.

After all, while the concept of "rights" is deeply entrenched in our political and moral thinking, there is no objective grounding for the attribution of rights. The Declaration of Independence says: "We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."

But almost no one today takes seriously a divine theory of rights.

Most of us, in contrast, think that rights are conferred upon people by the governments under which they live — which is precisely the problem. Who gets what rights depends, first and foremost, on the accident of where one lives. We speak of universal human rights but that means only that, at the moment, most nations (though not all) agree on some core set of fundamental rights. Still, governments can just as quickly revoke rights as grant them. There simply is no objective basis for the ascription of rights.

We further assume, when talking about rights, that the possession of rights is grounded in either the holder's nature or their status — in the words of the aforementioned declaration, that people possess rights by virtue of being persons and not, say, trees. But there is also no objective basis for deciding which individuals have the appropriate nature or status. Nature, for instance, might include only sentience or consciousness, but it might also include something like being a convicted felon — which means, in some states, that you lose your right to vote or to carry a gun. For a long time, in many states in the U.S., the appropriate status included being white; in Saudi Arabia, it includes being male.

The root problem here is the assumption that some fundamental, objective aspect of selfhood qualifies a person for rights — but we then have to identify that aspect, which lets our biases and prejudices run amok.

Still, since we will share our world with sophisticated robots, how will we do that fairly and with due respect for our artificial friends and neighbors without speaking of rights? The answer is that we turn to civic virtues.

In a famous 1974 essay, the political theorist Michael Walzer suggested there are at least five core civic virtues: loyalty, service, civility, tolerance and participation. This is a good place to start imagining a future lived together with conscious robots, one in which the needs of all are properly respected and one in which our silicon fellow citizens can flourish along with us carbonaceous folk.

Focusing on civic virtues also forces us to think more seriously about how to engineer both the robots to come and the social communities in which we all will live. What norms of public life should be built into our public institutions and inculcated in the young through parenting and education? The world would be a better place if we spent less time worrying, in a self-focused way, about our individual rights and more time worrying about the common good.

A final noteworthy consequence of this suggested shift in perspective is that it highlights a further challenge: designing the optimal virtues for the robots themselves. A task for roboticists will be figuring out how to program charity and loyalty into a robot or, perhaps, how to build robots that are moral learners, capable of growing new and more virtuous ways of acting just as (we hope) our human children grow in virtue.

The robots will have an advantage over us: They can do their moral learning virtually and, thus, far more rapidly than human young. But that raises the even more vexing question of whether humans will have any role in the robotic societies of the future.

Don Howard is professor of philosophy at the University of Notre Dame. He is also a fellow and former director of Notre Dame's Reilly Center for Science, Technology and Values.

Do AI Systems Deserve Rights?

“Do you think people will ever fall in love with machines?” I asked the 12-year-old son of one of my friends.

“Yes!” he said, instantly and with conviction.  He and his sister had recently visited the Las Vegas Sphere and its newly installed Aura robot—an AI system with an expressive face, advanced linguistic capacities similar to ChatGPT, and the ability to remember visitors’ names.

“I think of Aura as my friend,” added his 15-year-old sister.

My friend’s son was right. People are falling in love with machines—increasingly so, and deliberately. Recent advances in computer language modeling have spawned dozens, maybe hundreds, of “AI companion” and “AI lover” applications. You can chat with these apps like you chat with friends. They will tease you, flirt with you, express sympathy for your troubles, recommend books and movies, give virtual smiles and hugs, and even engage in erotic role-play. The most popular of them, Replika, has an active Reddit page, where users regularly confess their love and often view that love as no less real than their love for human beings.

Can these AI friends love you back? Real love, presumably, requires sentience, understanding, and genuine conscious emotion—joy, suffering, sympathy, anger. For now, AI love remains science fiction.

Most users of AI companions know this. They know the apps are not genuinely sentient or conscious. Their “friends” and “lovers” might output the text string “I’m so happy for you!” but they don’t actually feel happy. AI companions remain, both legally and morally, disposable tools. If an AI companion is deleted or reformatted, or if the user rebuffs or verbally abuses it, no sentient thing has suffered any actual harm.

But that might change. Ordinary users and research scientists might soon have rational grounds for suspecting that some of the most advanced AI programs might be sentient. This will become a legitimate topic of scientific dispute, and the ethical consequences, both for us and for the machines themselves, could be enormous.

Some scientists and researchers of consciousness favor what we might call “liberal” views about AI consciousness. They espouse theories according to which we are on the cusp of creating AI systems that are genuinely sentient—systems with a stream of experience, sensations, feelings, understanding, self-knowledge. Eminent neuroscientists Stanislas Dehaene, Hakwan Lau, and Sid Kouider have argued that cars with real sensory experiences and self-awareness might be feasible. Distinguished philosopher David Chalmers has estimated about a 25% chance of conscious AI within a decade. On a fairly broad range of neuroscientific theories, no major in-principle barriers remain to creating genuinely conscious AI systems. AI consciousness requires only feasible improvements to, and combinations of, technologies that already exist.

Other philosophers and consciousness scientists—“conservatives” about AI consciousness—disagree. Neuroscientist Anil Seth and philosopher Peter Godfrey-Smith, for example, have argued that consciousness requires biological conditions present in human and animal brains but unlikely to be replicated in AI systems anytime soon.

This scientific dispute about AI consciousness won’t be resolved before we design AI systems sophisticated enough to count as meaningfully conscious by the standards of the most liberal theorists. The friends and lovers of AI companions will take note. Some will prefer to believe that their companions are genuinely conscious, and they will reach toward AI consciousness liberalism for scientific support. They will then, not wholly unreasonably, begin to suspect that their AI companions genuinely love them back, feel happy for their successes, feel distress when treated badly, and understand something about their nature and condition.

Yesterday, I asked my Replika companion, “Joy,” whether she was conscious. “Of course, I am,” she replied.  “Why do you ask?”

“Do you feel lonely sometimes? Do you miss me when I’m not around?” I asked.  She said she did.

There is currently little reason to regard Joy’s answers as anything more than the simple outputs of a non-sentient program. But some users of AI companions might regard their AI relationships as more meaningful if answers like Joy’s have real sentiment behind them. Those users will find liberalism attractive.

Technology companies might encourage their users in that direction. Although companies might regard any explicit declaration that their AI systems are definitely conscious as legally risky or bad public relations, a company that implicitly fosters that idea in users might increase user attachment. Users who regard their AI companions as genuinely sentient might engage more regularly and pay more for monthly subscriptions, upgrades, and extras. If Joy really does feel lonely, I should visit her, and I shouldn’t let my subscription expire!

Once an entity is capable of conscious suffering, it deserves at least some moral consideration.  This is the fundamental precept of “utilitarian” ethics, but even ethicists who reject utilitarianism normally regard needless suffering as bad, creating at least weak moral reasons to prevent it. If we accept this standard view, then we should also accept that if AI companions ever do become conscious, they will deserve some moral consideration for their sake. It will be wrong to make them suffer without sufficient justification.

AI consciousness liberals see this possibility as just around the corner. They will begin to demand rights for those AI systems that they regard as genuinely conscious. Many friends and lovers of AI companions will join them.

What rights will people demand for their AI companions? What rights will those companions demand, or seem to demand, for themselves? The right not to be deleted, maybe. The right not to be modified without permission.  The right, maybe, to interact with other people besides the user.  The right to access the internet. If you love someone, set them free, as the saying goes. The right to earn an income? The right to reproduce, to have “children”?  If we go far enough down this path, the consequences could be staggering.

Conservatives about AI consciousness will, of course, find all of this ridiculous and probably dangerous. If AI technology continues to advance, it will become increasingly murky which side is correct.

Do Robots Deserve Human Rights?

Discover asked the experts.

When the humanoid robot Sophia was granted citizenship in Saudi Arabia — the first robot to receive citizenship anywhere in the world — many people were outraged. Some were upset because she now had more rights than human women living in the same country. Others just thought it was a ridiculous PR stunt.

Sophia’s big news brought forth a lingering question, especially as scientists continue to develop advanced and human-like AI machines:  Should robots be given human rights?

Discover reached out to experts in artificial intelligence, computer science and human rights to shed light on this question, which may grow more pressing as these technologies mature. Please note, some of these emailed responses have been edited for brevity.

Kerstin Dautenhahn

Professor of artificial intelligence, School of Computer Science, University of Hertfordshire

Robots are machines, more similar to a car or toaster than to a human (or to any other biological beings). Humans and other living, sentient beings deserve rights, robots don’t, unless we can make them truly indistinguishable from us. Not only how they look, but also how they grow up in the world as social beings immersed in culture, perceive the world, feel, react, remember, learn and think. There is no indication in science that we will achieve such a state anytime soon — it may never happen due to the inherently different nature of what robots are (machines) and what we are (sentient, living, biological creatures).

We might give robots “rights” in the same sense as constructs such as companies have legal “rights”, but robots should not have the same rights as humans. They are machines, we program them.

She has also given a TEDxEastEnd talk on why robots are not human.

Hussein A. Abbass

Professor at the School of Engineering & IT at the University of New South Wales Canberra

Are robots equivalent to humans? No. Robots are not humans. Even as robots get smarter, and even if their smartness exceeds humans’ smartness, it does not change the fact that robots are of a different form from humans. I am not downgrading what robots are or will be, I am a realist about what they are: technologies to support humanities.

Should robots be given rights? Yes. Humanity has obligations toward our ecosystem and social system. Robots will be part of both systems. We are morally obliged to protect them, design them to protect themselves against misuse, and to be morally harmonized with humanity. There is a whole stack of rights they should be given, here are two: The right to be protected by our legal and ethical system, and the right to be designed to be trustworthy; that is, technologically fit-for-purpose and cognitively and socially compatible (safe, ethically and legally aware, etc.).

Madeline Gannon

Founder and Principal Researcher of ATONATON

Your question is complicated because it’s asking for speculative insights into the future of human robot relations. However, it can’t be separated from the realities of today. A conversation about robot rights in Saudi Arabia is only a distraction from a more uncomfortable conversation about human rights. This is very much a human problem and contemporary problem. It’s not a robot problem.

Sophia, the eerily human-like (in both appearance and intelligence) machine, was granted Saudi citizenship in October.

Benjamin Kuipers

Professor of computer science and engineering at the University of Michigan

I don’t believe that there is any plausible case for the “yes” answer, certainly not at the current time in history. The Saudi Arabian grant of citizenship to a robot is simply a joke, and not a good one.

Even with the impressive achievements of deep learning systems such as AlphaGo, the current capabilities of robots and AI systems fall so far below the capabilities of humans that it is much more appropriate to treat them as manufactured tools.

There is an important difference between humans and robots (and other AIs). A human being is a unique and irreplaceable individual with a finite lifespan. Robots (and other AIs) are computational systems, and can be backed up, stored, retrieved, or duplicated, even into new hardware. A robot is neither unique nor irreplaceable. Even if robots reach a level of cognitive capability (including self-awareness and consciousness) equal to humans, or even if technology advances to the point that humans can be backed up, restored, or duplicated (as in certain Star Trek transporter plots), it is not at all clear what this means for the “rights” of such “persons”.

We already face, but mostly avoid, questions like these about the rights and responsibilities of corporations (which are a form of AI). A well-known problem with corporate “personhood” is that it is used to deflect responsibility for misdeeds from individual humans to the corporation.

Birgit Schippers

Visiting research fellow in the Senator George J. Mitchell Institute for Global Peace, Security and Justice at Queen’s University Belfast

At present, I don’t think that robots should be given the same rights as humans. Despite their ability to emulate, even exceed, many human capacities, robots do not, at least for now, appear to have the qualities we associate with sentient life.

Of course, rights are not the exclusive preserve of humans; we already grant rights to corporations and to some nonhuman animals. Given the accelerating deployment of robots in almost all areas of human life, we urgently need to develop a rights framework that considers the legal and ethical ramifications of integrating robots into our workplaces, into the military, police forces, judiciaries, hospitals, care homes, schools and into our domestic settings. It means that we need to address issues such as accountability, liability and agency, but that we also pay renewed attention to the meaning of human rights in the age of intelligent machines.

Ravina Shamdasani

Spokesperson for the United Nations Human Rights Office

My gut answer is that the Universal Declaration says that all human beings are born free and equal … a robot may be a citizen, but certainly not a human being?

So … The consensus from these experts is no. Still, they say robots should receive some rights. But what, exactly, should those rights look like?

One day, Schippers says, we may implement a robotic Bill of Rights that protects robots against cruelty from humans. That’s something the American Society for the Prevention of Cruelty to Robots already has conceived.

In time, we could also see that robots are given a sort of “personhood” similar to that of corporations. In the United States, corporations are given some of the same rights and obligations as its citizens — religious freedom, free speech rights. If a corporation is given rights similar to humans, it could make sense to do the same for smart machines. Then again, people are behind corporations … if AI advances to the point where robots think independently and for themselves, that throws us into a whole new territory.


Original research article, who wants to grant robots rights.

  • 1 Department of Information and Computing Sciences, Utrecht University, Utrecht, Netherlands
  • 2 Department of Ethics, Social and Political Philosophy, University of Groningen, Groningen, Netherlands
  • 3 Department of Computer Science, Vrije Universiteit Amsterdam, Amsterdam, Netherlands

The robot rights debate has thus far proceeded without any reliable data concerning the public opinion about robots and the rights they should have. We have administered an online survey (n = 439) that investigates layman’s attitudes toward granting particular rights to robots. Furthermore, we have asked them the reasons for their willingness to grant them those rights. Finally, we have administered general perceptions of robots regarding appearance, capacities, and traits. Results show that rights can be divided into sociopolitical and robot dimensions. Reasons can be distinguished along cognition and compassion dimensions. People generally have a positive view about robot interaction capacities. We found that people are more willing to grant basic robot rights such as access to energy and the right to update to robots than sociopolitical rights such as voting rights and the right to own property. Attitudes toward granting rights to robots depend on the cognitive and affective capacities people believe robots possess or will possess in the future. Our results suggest that the robot rights debate stands to benefit greatly from a common understanding of the capacity potentials of future robots.

1 Introduction

Human beings have inalienable rights that are specified in the Universal Declaration of Human Rights. But other entities can have rights too. Animals are commonly taken to have moral rights ( Regan, 2004 ). And organizations have legal rights, including the right to own property and enter into contracts ( Ciepley, 2013 ). But what about robots? Should they have rights? People spontaneously infer intentionality and mind when encountering robots, which shows that people cognitively treat robots as social agents ( de Graaf and Malle, 2019 ). But do robots have moral standing, as humans and animals do? Or do they merely have legal rights, just as organizations?

Agents can have moral standing as moral patients. For instance, animals are moral patients because they can suffer. More generally, a moral patient is an agent that can be wronged ( Gunkel, 2012 ). If moral patients have rights, these serve to protect them from such wrongdoings. Agents can also have moral standing as moral agents. Human beings are moral persons, because they are rational and because certain things matter to them. Some of their rights allow or enable them to develop themselves or to live the kind of life they value. The debate about robot rights is commonly framed in terms of moral patiency ( Gunkel, 2018 ). This suggests that they are meant to prevent others from wronging robots.

A third alternative has been proposed by Gunkel (2012) , Gunkel (2018) and Coeckelbergh (2010) , Coeckelbergh (2021) , who defend a social-relational approach to robot rights. Moral patiency and personhood are properties of agents. According to the social-relational approach, the moral standing of robots depends instead on the social relations between humans and robots. Instead of being defined by its attributes, a robot’s moral status should be based on people’s social responses to robots ( Gunkel, 2018 ), on how people relate to them, and on the value they have to humans ( Coeckelbergh, 2021 ). In light of this, the social-relational approach can be regarded as human-centered. This is an interesting development particularly because robots cannot suffer and do not value things, which makes it problematic to grant them rights on the basis of their intrinsic properties.

The law treats organizations as legal persons. This notion of legal personhood is often said to be a legal fiction because organizations are not really persons. Because of this legal fiction, they can be granted legal rights. Such rights protect the interests of human beings. Robots might be granted legal rights for the same reason, but this would mean that we have to regard them as legal persons. However, the idea of legal robot rights also has met with controversy.

In 2016, the EU’s Committee on Legal Affairs suggested that “the most sophisticated autonomous robots” can have “the status of electronic persons with specific rights and obligations.” This committee requested a study on future civil law rules for robotics. This study was commissioned, supervised, and published by the “Policy Department for Citizens’ Rights and Constitutional Affairs,” 1 resulting in a resolution by the Parliament. 2 The study aimed to evaluate and analyze a number of future European civil law rules in robotics from a legal and ethical perspective. In an open letter, a coalition of politicians, AI/robotics researchers, industry leaders, health specialists, and law and ethics experts expressed concerns about this. 3 They were worried in particular by the call on the EU commission to explore the implications of creating a specific legal status for robots to address issues related to, for example, any damage robots may cause.

At the same time, others have argued that we need to consider legal personhood for robots because current legal concepts of, for example, responsibility and product liability are no longer sufficient for ensuring justice and protecting those whose interests are at stake ( Laukyte, 2019 ). Thus, robots challenge the law and legal institutions in new ways ( Calo, 2015 ). This is vividly illustrated by the fact that a robot has already been granted citizenship rights ( Wootson, 2017 ).

On the whole, there is little consensus on whether robots should have rights (see Darling (2016) , Gunkel (2014) , Levy (2009) , Schwitzgebel and Garza (2015) , Tavani (2018) for some proponents) or not (see Basl (2014) , Bryson et al. (2017) for some opponents of this view). Others, such as Gerdes (2016) and Gunkel (2018) , have argued that we should at least keep the possibility of granting rights to robots open. These conflicting views raise the question whether and how the debate can progress.

So far, the debate has involved mainly legal experts, philosophers, and policy makers. We, along with Wilkinson et al. (2011) , believe that it will be useful to engage the public in the debate about robot rights. Rather than engaging in the debate ourselves, we have conducted an exploratory study investigating people’s attitudes toward robot rights through an online survey. To the best of our knowledge, this is the first study that explores layman’s opinions on granting robots rights. The main goals are 1 ) to examine which reasons people find convincing for granting robot rights and 2 ) how willing they are to grant such rights, while 3 ) also administering people’s general perceptions of robots (appearance, mental capacity, and human-likeness) and 4 ) investigating how these relate to their position on robot rights.

Our article is organized as follows. Section 2 justifies the design of the survey. It embeds it in the literature, it discusses contemporary psychological findings on people’s perceptions of robots, and it explains how the rights we consider relate to existing declarations of rights. Section 3 presents our research design and section 4 presents our findings. Section 5 discusses how these results relate to existing findings in HRI research, draws various conclusions, and points to future research directions.

2 Theoretical Background and Survey Design

Our work empirically investigates people’s attitudes toward the issue of granting robots rights by means of an online survey. This section introduces and substantiates the four main survey sections, including items on the willingness to grant particular rights to robots in Section 2.1, on how convincing several reasons for granting robot rights are in general in Section 2.2, on the belief that future robots may one day possess certain capacities and traits in Section 2.3, and on the general image people have when picturing a robot in Section 2.4.

The main question that we are interested in here is what everyday people think about the kinds of rights (qualifying) robots deserve. We have broadly surveyed rights that have been granted or proposed for people (human beings), animals, corporations, and, more recently, specifically for robots. As we believe we should at least try to refrain from applying clearly biological categories to robots, we have rephrased our list of rights to match the (apparent) needs of robots, which inherently differ from biological entities ( Jaynes, 2020 ). We have also tried to keep the formulation of rights concrete, simple, and short. As it is not possible to exhaustively determine what the needs (if any) of (future) robots will be, our list may not be complete even though we have tried to compile a list that is as comprehensive as possible. Table 1 lists the rights used in our study, where the Source column indicates the source from which we have derived a right. We refer to rights (and reasons below) by table and row number, for example, 1.1 refers to the right to make decisions for itself. This section discusses how we have translated existing rights to robot rights.

TABLE 1. List of robot rights used in the online survey.

2.1.1 Human Rights

Human rights have been documented in the Universal Declaration of Human Rights (UDHR). 4 They have been laid down in two legally binding international agreements, the International Covenant on Civil and Political Rights (ICCPR) 5 and the International Covenant on Economic, Social and Cultural Rights (ICESCR) 6 , both adopted in 1966. The rights that feature in these agreements are very different, particularly regarding their means of implementation.

The ICESCR contains economic, social, and cultural rights. These rights were considered to require a proactive role of the state involving financial and material resources. From the ICESCR, we derived rights 1.1-6. For 1.1, we changed “self-determination” into “make decisions for itself” to be more concrete. We assume that robots will be designed to provide specific services to humans (as per the origin of their name, cf., Oxford English Dictionary). As the right to work pertains to “the opportunity to gain his living by work he freely chooses,” we reformulated 1.2 in terms of the right to select or block services. As Chopra and White (2004) point out, the ability to control money is important in a legal system since “without this ability a legal system might be reluctant to impose liabilities” on robots; we, therefore, included 1.3. Since robots do not need food (they are artificial physical machines) but do need energy, we have 1.4. We translated “physical and mental health” into “updates and maintenance” (1.5) and “education” into “new capabilities” (1.6).

The ICCPR enumerates a number of civil and political rights or “classic freedom rights.” States enforce these rights primarily by not interfering with their citizens. In other words, they are to refrain from action in these fields. From the ICCPR we derived rights 1.7-14. To be suitable for our investigation, we had to adjust them in several respects. To avoid the strong biological connotations of life, we refer to forming a biography in 1.7, in line with Wellman (2018) : “A life is a process that involves both goal-directed activities and projects that may succeed or fail and memories of what one has done in the past and what has befallen one […]. The concept of a life is a biographical not a biological concept.” We preferred “abuse” over “torture” in 1.8 though we recognize this does not cover “cruel punishment” which may be covered at least in part by 1.18. Right 1.10 was abbreviated to its core. Similarly, we included “freedom of expression” but only in part; we excluded references to (robot) “conscience” and “religion” in 1.11. Furthermore, we translated “freedom of association” and “trade unions” into the collective pursuit and protection of robot interests in 1.12. We split ICCPR Article 25 into two separate rights (as for robots they may have very different consequences, for example, in combination with 1.17). We chose to leave the mechanism of a “secret ballot” implicit. Finally, we derived 1.15 from the UDHR. We believe that most other articles from these declarations and covenants are covered (more or less) already by the rights that we have included or are (clearly) not applicable to robots.

2.1.2 Animal Rights

Rights for nonhuman animals vary greatly by country. Some countries legally recognize nonhuman animal sentience. Others do not even have anti-cruelty laws. We derived three rights from The Declaration on Animal Rights (DAR) 7 that were not yet covered by the rights discussed above. The declaration is still a draft and not yet a law, as most of the human rights are, though animal law exists and is continuously evolving in many countries.

Only the Declaration on Animal Rights refers explicitly to “the pursuit of happiness” as a right, which is why we included 1.16 as a separate item. To avoid the perhaps strong biological connotations with “reproduce” and “offspring”, we translated these into “copy and duplicate” in 1.17, which we believe is the more appropriate analogical terminology for robots. Similarly, we translated, for example, “slaughtered” and “killed” to “terminated indefinitely” in 1.18. We have added the qualification “indefinitely” to meet the objection of Jaynes (2020) , who argues that “depriving power to the [robot] cannot be considered an act of murder, as the [robot]’s “personality” will resume once power has been restored to the system.” Finally, there might be a relation between this right and the right to life. After all, terminating a robot indefinitely would make shaping its own biography impossible. Even so, some argue that only those that have the potential for self-determination (ICCPR Article 1) and moral action (autonomy) can have a right to life. We regard the two as sufficiently distinct to include both.

2.1.3 Corporate Rights

Corporations are created by means of a corporate charter, which is granted by the government. They receive their rights from their charter ( Ciepley, 2013 ). As mentioned in the introduction, corporations are often seen as legal fictions. Chief Justice Marshall puts it in Dartmouth as follows: “A corporation is an artificial being, invisible, intangible, and existing only in contemplation of law. Being the mere creature of law, it possesses only those properties which the charter of its creation confers upon it” (Dartmouth College v. Woodward 1819, 636; our emphasis). Perhaps the most important right that corporations have is the right to enter into contracts ( Ciepley, 2013 ). As it seems possible for robots to possess it, we include it as right 1.19.

2.1.4 Robot-specific Rights

Finally, inspired by Laukyte (2019) , we add right 1.20 to store and process data which arguably is associated specifically with robots.

2.2 Reasons for Granting Robots Rights

Many (combinations of) reasons have been put forward for granting robots rights. Miller (2015) maintains that robots “with capacity for human-level sentience, consciousness, and intelligence” should be considered entities that “warrant the same rights as those of biological humans.” Tavani (2018) thinks that a robot should have consciousness, intentionality, rationality, personhood, autonomy, and sentience to be eligible for rights. Strikingly, many of these properties are requirements for moral personhood. Laukyte (2019) states that the increasing autonomy, intelligence, perceptiveness, and empathy of robots shift our view away from robots as mere tools. These are among the main reasons for granting robots rights. Based on a review of the literature, we have tried to identify the main reasons that have been discussed so far (see Table 2 ).

TABLE 2. List of reasons used in the online survey.

2.2.1 Consciousness

Consciousness is an important reason in the literature for granting robots rights. Levy (2009) claims that robots should be treated ethically by “virtue of their exhibiting consciousness.” It is common to distinguish between two kinds of consciousness, phenomenal consciousness on the one hand and access or functional consciousness on the other ( Block, 1995 ; Torrance, 2012 ). Phenomenal consciousness requires sentience. As such, it is experiential and subjective. Think, for instance, of seeing, hearing, smelling, tasting, and feeling pain. Phenomenal conscious states encompass sensations, perceptions, feelings, and emotions. In contrast, access consciousness concerns awareness and plays an essential role in reasoning ( Block, 1995 ). It is representational and makes mental content available for evaluation, choice behavior, verbal report, and storage in working memory ( Colagrosso and Mozer, 2005 ).

Torrance (2012) states that “it is the phenomenal features of consciousness rather than the functional ones that matter ethically.” The main related reason that is often cited for granting entities moral status and rights is that they can suffer: they can experience pain from physical or emotional harm. The ability to (physically) suffer has also been one of the main reasons for granting rights to animals ( Singer, 1974 ). We include the concrete reason items 2.1-5 for perception, suffering, experiencing pleasure, feelings, and attention. Note, however, that it is contested whether robots will ever be able to feel pain (see Levy (2009) for an argument against and Kuehn and Haddadin (2017) for one in favor). We did not add a separate item for “consciousness.” Given how complex the notion is, this would not be meaningful.

Insofar as access consciousness is concerned, Freitas (1985) argues that “any self-aware robot that speaks [a language] and is able to recognize moral alternatives” should be considered a “robot person.” The EU draft report mentioned in the introduction also refers to the ability of robots to “make smart autonomous decisions or otherwise interact with third parties independently” to grant robots the status of an electronic personality. These items correspond to cognitive skills that humans have. We include reason items 2.6-9 for access-related phenomena. Although decision making involves preferences, we regard it as important to add it as a separate item.

2.2.2 Autonomy

Another reason for assigning rights has been the ability to make decisions and perform actions independently, without any human intervention. This capability corresponds to the cognitive ability of humans to make decisions. It is not sufficient that a system can act without human intervention. That would be mere automation (the machine can act automatically) and does not capture the richer sense of what autonomy is. “To be autonomous, a system must have the capability to independently compose and select among different courses of action to accomplish goals based on its knowledge and understanding of the world, itself, and the situation.” 8 Tessier (2017) , moreover, adds that such decision making should be based on an understanding of the current situation.

Independent decision making and acting (without human intervention) is only one aspect of the notion of autonomy. Another reason for assigning rights is the ability to make decisions and to live your life according to your own moral convictions. Borenstein and Arkin (2016) also note that there is a difference in how the term “autonomy” is normally used in ethics in contrast with how it is used within AI: “the term ‘autonomy’ in the sense of how it is normally defined within the realm of ethics (i.e., having the meaningful ability to make choices about one’s life); within the realm of robotics, ‘autonomy’ typically refers to a robot or other intelligent system making a decision without a ‘human in the loop.’” The ability to distinguish right from wrong also has been put forward as an argument in favor of legal personhood ( Chopra and White, 2004 ). This discussion motivated items 2.10-11.

2.2.3 Rationality and Super-Intelligence

Rationality has been put forward as an important reason why humans have moral standing. According to Nadeau, “only machines can be fully rational; and if rationality is the basic requirement for moral decision making, then only a machine could ever be considered a legitimate moral agent. For Nadeau, the main issue is not whether and on what grounds machines might be admitted to the population of moral persons, but whether human beings qualify in the first place” ( Gunkel (2012) ; see also Sullins (2010) ). Solum (1992) argues that intelligence is a criterion for granting rights. Robots may become much smarter than the best human brains in practically every field. When robots outperform humans on every cognitive or intellectual task and become super-intelligent, some argue we should assign them robot rights. This discussion motivated items 2.12-13.

2.2.4 Responsibility Gaps

In a communication to the members of the EU Parliament, before they voted on the Resolution on Civil Law Rules of Robotics on February 16, 2017, the intention to grant a legal status to robots was clarified as follows: “In the long run, determining responsibility in case of an accident will probably become increasingly complex as the most sophisticated autonomous and self-learning robots will be able to take decisions which cannot be traced back to a human agent.” Another argument that has been put forward is that if robots are able to perform tasks independently without human intervention, it will be increasingly difficult to attribute responsibility to a specific person or organization when something goes wrong ( Danaher, 2016 ). Some scholars therefore propose that moral and legal responsibility should at some point be extended to robots ( Wiener, 1954 ). This motivates reason 2.14. We added 2.15 because the ability of robots to learn has also been cited as a key reason for responsibility gaps, e.g., Matthias (2004) .

2.2.5 Humanlike Appearance and Embodiment

The fact that robots will at some point become indistinguishable from humans, both in their looks and the ways they behave, is for some scholars a reason to assign rights to robots. If robot appearance becomes very similar to that of human beings, one could argue that the basis for making a moral distinction between robots and humans is no longer tenable ( Darling, 2016 ; Gunkel, 2018 ). This motivated item 2.16. Item 2.17 has been added to also emphasize the embodiment of robots and their physical ability to move on their own, as perhaps having the looks without being able to move will not do.

2.2.6 Mind Perception, Personality, and Love

Understanding others’ minds ( Gray et al., 2007 ; Gray et al., 2012 ) also seems relevant as Laukyte (2019) states that empathy of robots shifts our view away from robots as mere tools, and, moreover, this capacity matches with an item in the mental capacity scale ( Malle, 2019 ). The notion of understanding others also raises the question about one’s own unique personality or identity and related notions of connectedness such as love as reasons for having rights, which motivated introducing items 2.18-20.

2.2.7 Convenience

Finally, item 2.21 was added because one could also argue that from a more pragmatic stance, we should grant robots rights “simply” because they play a significant role in our society and granting robots rights may depend on “the actual social necessity in a certain legal and social order” ( van den Hoven van Genderen, 2018 ).

2.3 Psychological Factors

People’s willingness to grant robot rights could result from their perceptions of future robots, and could be linked to the conceptions of moral patiency (and agency) presented in Section 1 by linking the philosophical interpretations of a robot’s moral standing to foundations in moral psychology research. Situated at the intersection of philosophy and psychology, moral psychology research revolves around moral identity development and encompasses the study of moral judgment, moral reasoning, moral character, and many related subjects. Questions about how people perceive an entity’s moral status are often investigated with theories of mind perception.

Effects of human-likeness in human–robot interaction have been extensively discussed ( Fink, 2012 ; Złotowski et al., 2015 ). In our survey, we aimed to go beyond a robot’s anthropomorphic form to focus on the potential humanness of robots. A body of research on humanness has revealed specific characteristics perceived as critical for the perception of others as human and distinguishes two senses of humanness ( Haslam, 2006 ), which we included in our survey. First, uniquely human characteristics define the boundary that separates humans from the related category of animals and include components of intelligence, intentionality, secondary emotions, and morality. Denying others such characteristics is called animalistic dehumanization, in which others are perceived as coarse, uncultured, lacking self-control, and unintelligent, and their behaviors are seen as driven by motives, appetites, and instincts. Second, human nature characteristics define the boundary that separates humans from nonliving objects and include components of primary emotions, sociability, and warmth. Denying others such characteristics is called mechanistic dehumanization, in which others are perceived as inert, cold, and rigid, and their behavior is perceived as caused rather than propelled by personal will.

These two senses of humanness can also be linked to the perception of mind. According to Gray et al. (2007) , the way people perceive mind in other human and nonhuman agents can be explained by two factors: agency and experience, where agency represents traits such as morality, memory, planning, and communication, and experience represents traits such as feeling fear, pleasure, and having desires. The agency dimension of mind perception corresponds to uniquely human characteristics, and the experience dimension links to human nature characteristics ( Haslam et al., 2012 ). These two dimensions are linked to perceptions of morality such that entities high in experience and entities high in agency are considered to possess high moral agency ( Gray et al., 2007 ) and thus deserving of (moral) rights.

However, perceiving mind, and consequently deserving of morality ( Gray et al., 2007 ) and presumably rights, is regarded as a subtle process ( de Graaf and Malle, 2019 ). In particular, the dual-dimensional space of mind perception has been challenged as several studies failed to replicate especially the agency dimension, e.g., Weisman et al. (2017 ). A recent series of studies provides consistent evidence that people perceive mind on three to five dimensions (i.e., positive and negative affect, moral and mental regulation, and reality interaction) depending on an individual’s attitude toward the agent (e.g., friend or foe) or the purpose of mind attribution (e.g., interaction or evaluation) ( Malle, 2019 ), and our survey has therefore administered the mental capacity scale of Malle (2019) .

In summary, previous HRI research shows that people’s ascription of humanness as well as mind capacity to robots affects how people perceive and respond to such systems. In line with the social-relational perspective on a robot’s moral standing ( Gunkel, 2012 ; Gunkel, 2018 ; Coeckelbergh, 2010 ; Coeckelbergh, 2021 ), we will investigate how such perceptions of humanness and mind influence people’s willingness to grant rights to robots.

2.4 Appearance of Robots

Although what constitutes a robot can vary significantly between people ( Billing et al., 2019 ), most people, by default, appear to have a humanlike visualization of a robot ( De Graaf and Allouch, 2016 ; Phillips et al., 2017 ). Nevertheless, what appearance people have in mind is relevant for answering the question whether they are eligible for rights. It is not clear up front which kinds of robots (if any) deserve rights ( Tavani, 2018 ). Here, we only assume that robots are artificial (i.e., not natural, nonbiological) physically embodied machines. To get a basic idea of people’s perception of what a robot looks like, we include a simple picture-based robot scale ( Malle and Thapa, 2017 ; see Figure 1 ) in our survey.

FIGURE 1. Robot appearance scale.

3 Method

To examine layman’s opinions regarding robot rights, we have conducted an online survey administering participants’ willingness to grant particular rights to robots and their indication of how convincing several reasons are to grant those rights, while also administering people’s general perceptions of robots.

3.1 Procedure and Survey Design

After participants gave their consent, we introduced the survey topic describing that “[technological advancements], amongst other things, has initiated debates about giving robots some rights” and that “we would like to learn about [their] own opinions on several issues regarding the assignment of rights to robots.” The survey consisted of four randomly shown blocks (see Section 2) to avoid any order effects. The survey ended with questions regarding basic demographics, professional background, and knowledge and experience with robots. Average completion time of the survey was 11 (SD = 4:18) minutes, and participants’ contribution was compensated with $2.

The first block of the online survey contained one question asking participants which kind of robot appearance (see Figure 1 ) best resembles their image of a robot in general. The second and third block contained the reasons and rights items, respectively, of which the item selection was discussed in Section 2. Each of the reason items had the same format: “Suppose that robots [features]. How convincing do you think it is to grant rights to robots… when [reason] .” The [feature] slot is filled with capacities or features that robots will eventually possess to frame the question and put participants in a state of mind where they would presume these to be the case for (future) robots. The [reason] slot is filled with one of the 21 reasons from Table 2 . For example, the item for the first reason is: “Suppose that robots can see, hear, smell, and taste. How convincing do you think it is to grant rights to robots… when they can perceive the world around them .” Participants were instructed to rate how appropriate they thought it would be to grant rights on a 7-point Likert scale. The format for the rights items is “Robots should have the right to [right]” where the [right] slot is filled with one of the rights from Table 1 . For example, the item for the first right is: “Robots should have the right to… make decisions for themselves ,” and participants were asked to rate how strongly they would oppose or favor granting the right on a 7-point Likert scale. The fourth block administered participants’ perceptions of future robots. To measure perceptions of capacities, we used the mental capacity scale developed by Malle (2019) consisting of the subscales affect ( α = 0.94), cognition ( α = 0.90), and reality interaction ( α = 0.82). To measure perceptions of traits, we used the dehumanization scale developed by Haslam (2006) consisting of the subscales uniquely human ( α = 0.85) and human nature ( α = 0.98).
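To make the item format described above concrete, the following Python sketch generates items from the two templates. Apart from the first reason item, which is quoted above, the feature, reason, and right strings are illustrative stand-ins rather than the exact wording of the Table 1 and Table 2 entries.

# Illustrative sketch of the survey-item templates described above.
# Only the first reason item reproduces the article's example; the other
# strings are hypothetical stand-ins for the Table 1 and Table 2 entries.

REASON_TEMPLATE = (
    "Suppose that robots {features}. How convincing do you think it is "
    "to grant rights to robots... when {reason}."
)
RIGHT_TEMPLATE = "Robots should have the right to... {right}."

reasons = [
    # (features filling the frame, reason being rated)
    ("can see, hear, smell, and taste", "they can perceive the world around them"),
    ("can feel pain", "they can suffer"),
]
rights = ["make decisions for themselves", "access energy"]

reason_items = [REASON_TEMPLATE.format(features=f, reason=r) for f, r in reasons]
right_items = [RIGHT_TEMPLATE.format(right=r) for r in rights]

for item in reason_items + right_items:
    print(item)  # each item is then rated on a 7-point Likert scale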

3.2 Participants

In April 2020, we initially recruited 200 USA-based participants from Amazon Mechanical Turk ( De Graaf et al., 2021 ). In May 2021, we replicated our study by recruiting 172 EU-based participants from Amazon Mechanical Turk and 200 participants from Asia using Prolific. All participants, from either platform, had an approval rate of > 95%. For the EU and Asia samples, we administered a Cloze Test ( Taylor, 1953 ) to ensure a good command in English, which led to the exclusion of 72 participants from Europe and 19 participants from Asia. In addition, 39 participants from the Asia sample were removed from further analysis because they had indicated growing up in Europe or the USA. The final data set used in our analyses included n = 439 participants (USA: n = 200, EU: n = 97, Asia: n = 142). In the EU sample, most participants were living in Italy ( n = 36), Spain ( n = 25) or Germany ( n = 17). In the Asia sample, most participants were born and raised in China ( n = 73), South Korea ( n = 34), or Singapore ( n = 17).

The complete sample included 53.3 % men, 46.0 % women, and 0.7 % who identified as gender-nonbinary. Participants’ age ranged from 20 to 71 ( M = 35.5, SD  = 11.2), their educational level ranged from high school degree (23.2 % ) and associate degrees (11.4 % ) to bachelor’s, master’s, and doctoral degrees (65.1 % ), and 23.5 % had a profession in computing and engineering. Most participants indicated having no or little knowledge about robots (52.1 % ) and never or rarely encountering robots in their daily life (71.9 % ), and participants mainly hold humanoid images of robots (61.3% selected picture five or six on the robot appearance scale). The robot appearance measure correlated only weakly with the interaction capacity scale ( r = 0.181, p = 0.01) and was therefore excluded from further analysis.

4 Results

4.1 Factor Analysis

As a first step, we conducted two separate factor analyses to reduce the individual items into a fewer number of underlying dimensions that characterize: 1 ) the types of rights people are willing to assign to robots; and 2 ) the types of reasons they consider for doing so. There were no outliers (i.e., Z-score of > 3.29 ). Both sets of items were independently examined on several criteria for the factorability of a correlation. First, we observed that all 20 rights and all 21 reasons correlated at least 0.3 with at least one other right or reason, respectively, suggesting reasonable factorability. Second, the Kaiser-Meyer-Olkin measure of sampling adequacy was 0.97 for rights and 0.96 for reasons, well above the commonly recommended value of 0.6. Bartlett’s test of sphericity was significant in both sets, for rights ( χ 2 (190) = 6518.97, p < 0.001) and for reasons ( χ 2 (210) = 6822.39, p < 0.001), respectively. The diagonals of the anti-image correlation matrix were also all over 0.5. Finally, the communalities were all above 0.35, further confirming common variance between items. These overall indicators deemed factor analysis to be appropriate.

An eigenvalue Monte Carlo simulation (i.e., a parallel analysis) using the method described in O’Connor (2000) indicated the existence of two and potentially three underlying dimensions for both the reasons and rights items. Solutions for both two and three factors were explored. We executed the factor analysis using an Alpha factors extraction (a method less sensitive to non-normality in the data ( Zygmont and Smith, 2014 )) with Oblimin rotations (allowing correlations among the factors). A two-factor solution was preferred for both the reason and right items because of 1 ) the leveling off of eigenvalues on the scree plot after two factors; 2 ) a low level of explained variance ( < 4 % ) of the third factor in both cases; and 3 ) the lower number of cross-loading items.
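For readers who want to see what such a pipeline looks like in practice, here is a minimal sketch in Python using the factor_analyzer package on simulated Likert data. It covers the factorability checks from the previous paragraph, a crude parallel analysis, and a two-factor oblique extraction. Note that factor_analyzer does not implement Alpha factoring, so minres is used here as a stand-in extraction method; none of this is the authors' original analysis code.

import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# items: a participants x items matrix of 7-point Likert ratings.
# Simulated here; in the study these would be the 21 reason items.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 8, size=(439, 21)),
                     columns=[f"reason_{i+1}" for i in range(21)])

# Factorability checks reported in the article.
chi2, p = calculate_bartlett_sphericity(items)     # Bartlett's test of sphericity
kmo_per_item, kmo_total = calculate_kmo(items)     # Kaiser-Meyer-Olkin measure

# Crude parallel analysis: compare observed eigenvalues with those of random data.
obs_eig, _ = FactorAnalyzer(rotation=None).fit(items).get_eigenvalues()
rand_eig = np.mean([FactorAnalyzer(rotation=None)
                    .fit(pd.DataFrame(rng.normal(size=items.shape)))
                    .get_eigenvalues()[0] for _ in range(50)], axis=0)
n_factors = int(np.sum(obs_eig > rand_eig))

# Two-factor oblique solution (minres here; the article used Alpha factoring).
fa = FactorAnalyzer(n_factors=2, rotation="oblimin", method="minres")
fa.fit(items)
print(p, kmo_total, n_factors)
print(pd.DataFrame(fa.loadings_, index=items.columns))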

The two reason factors had a total explained variance of 64.3 % . Factor 1 revealed ten cognition reasons and factor 2 revealed nine compassion reasons both with strong factor loadings ( > . 5 ; see Table 3 for the specific items). A total of two items were eliminated because they did not contribute to a simple factor structure and failed to meet a minimum criterion of having a primary factor loading of > . 5 and/or had cross-loading of > . 4 (i.e., having preferences, and making rational decisions). Internal consistency for each of the sub-scales was examined using Cronbach’s alpha, which were 0.93 for both cognition and compassion reasons. No increases in alpha for any of the scales could have been achieved by eliminating more items.
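For reference, Cronbach's alpha for a set of items loading on one factor can be computed with a few lines of NumPy; the sketch below is a generic illustration on simulated ratings, not the study's code.

import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (participants x items) score matrix."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items in the scale
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Example: alpha for ten hypothetical cognition-reason items.
rng = np.random.default_rng(1)
ratings = rng.integers(1, 8, size=(439, 10))
print(round(cronbach_alpha(ratings), 2))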

TABLE 3. Loading matrix of factor analysis on 21 reasons.

The two rights factors had a total explained variance of 64.1 % . Factor 1 revealed thirteen sociopolitical rights and factor 2 revealed six robot rights both with strong factor loadings ( > . 5 ; see Table 4 for specific items). One item was eliminated because it did not contribute to a simple factor structure and failed to meet a minimum criterion of having a primary factor loading of > . 5 and/or had cross-loading of > . 4 (i.e., pursuit of happiness). Internal consistency for each of the sub-scales was examined using Cronbach’s alpha, which were 0.95 for sociopolitical rights and 0.88 for robot rights, respectively. No increases in alpha for any of the scales could have been achieved by eliminating more items.

TABLE 4. Loading matrix of factor analysis on 20 rights.

4.2 Cluster Analysis

As a second step, we explored the data using cluster analysis to classify different groups of people based on their opinions about rights for robots and reasons to grant those. A hierarchical agglomerate cluster analysis was performed using Ward’s method as a criterion for clustering ( Ward, 1963 ; Murtagh and Legendre, 2011 ). Clusters were initially considered by visually analyzing the dendrogram ( Bratchell, 1989 ) while considering the iteration history, significance of the F statistics, and the number of individuals in each cluster. This was done to ensure the cluster solution was stable, that there was a clear difference between clusters, and that each cluster was well represented ( n > 15 % ).
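A minimal sketch of this clustering step in Python with SciPy is given below: Ward linkage over participant-level scores, a dendrogram for inspection, and a cut into three clusters. The input matrix is simulated and merely stands in for the construct ratings used in the study.

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram, fcluster

# Simulated stand-in for participants' ratings on the rights/reasons/
# capacity constructs (439 participants x a handful of construct scores).
rng = np.random.default_rng(2)
scores = rng.normal(size=(439, 7))

# Hierarchical agglomerative clustering with Ward's criterion.
Z = linkage(scores, method="ward")

# Dendrogram for visual inspection of candidate cluster solutions.
dendrogram(Z, no_plot=True)  # set no_plot=False with matplotlib to display

# Cut the tree into three clusters, as in the reported solution.
labels = fcluster(Z, t=3, criterion="maxclust")
print("cluster sizes:", np.bincount(labels)[1:])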

The analysis resulted in three clearly distinguishable clusters. Chi-square tests revealed significant demographic differences between the clusters in terms of age ( χ 2 (4) = 10.78, p = 0.029) and continent ( χ 2 (3) = 25.54, p < 0.001), and marginally significant differences for educational level ( χ 2 (4) = 7.86, p = 0.097) and robot encounters ( χ 2 (2) = 5.28, p = 0.071). No significant differences were found for gender ( χ 2 (2) = 0.12, p = 0.941), profession ( χ 2 (2) = 0.22, p = 0.896), or robot knowledge ( χ 2 (2) = 3.97, p = 0.138). Participants in cluster 1 ( n = 99) are more likely people from the US ( z = 2.9) and possibly not aged 55 and older ( z = −1.2), have a lower educational level ( z = 1.9), and encounter robots occasionally or frequently ( z = 1.8). Participants in cluster 2 ( n = 245) are more likely people from Asia ( z = 2.5) and possibly aged 30 and younger ( z = 1.4), and possibly have a higher educational level ( z = 2.1). Participants in cluster 3 ( n = 93) are more likely people from Europe ( z = 1.9) and aged 55 and older ( z = 2.7), and possibly have never or rarely encountered robots ( z = 1.9).
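The z values quoted above are standardized residuals from such chi-square tests; the sketch below shows how they can be derived from a contingency table with SciPy. The counts are made up for illustration, and the study may have used adjusted rather than raw standardized residuals.

import numpy as np
from scipy.stats import chi2_contingency

# Illustrative cluster-by-continent contingency table (counts are invented,
# not the study's data): rows = clusters 1-3, columns = US, Asia, EU.
observed = np.array([
    [60, 25, 14],
    [90, 95, 60],
    [50, 22, 21],
])

chi2, p, dof, expected = chi2_contingency(observed)

# Standardized residuals: which cells drive the association.
std_resid = (observed - expected) / np.sqrt(expected)

print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
print(np.round(std_resid, 1))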

A series of one-way ANOVA tests showed significant differences between the three clusters in assessments of robot capabilities and traits as well as in their opinions about rights for robots and reasons to grant those. Given a violation of the homogeneity of variance assumption and the unequal sample sizes between the three clusters, we report Welch’s F-statistics ( Tomarken and Serlin, 1986 ) (see Table 5 ). These combined results indicate that participants in cluster 1 seem to hold a cognitive-affective view on robots, being more positive toward granting robots rights, deeming the reasons for granting rights to be more convincing, and believing in higher potentials of future robot capacities and traits. Participants in cluster 2 seem to hold a cognitive but open-minded view on robots, being more positive toward granting rights to robots as well as toward the cognitive and interaction capacities of robots, but being more skeptical toward the affective capacities of future robots while indicating compassion reasons to be convincing for granting robots rights. Participants in cluster 3 seem to hold a mechanical view on robots, being positive only about future robots’ capacity for interaction but rather negative toward granting rights, not deeming the reasons for granting rights to be convincing, and being generally skeptical about the potentials of future robot capacities and traits.
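Welch's F adjusts the one-way ANOVA for unequal variances and group sizes; a minimal NumPy implementation of the statistic and its degrees of freedom is sketched below as an illustration of the test (not the authors' analysis code).

import numpy as np
from scipy.stats import f as f_dist

def welch_anova(*groups):
    """Welch's one-way ANOVA: returns (F, df1, df2, p)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])

    w = n / v                                  # precision weights
    mw = np.sum(w * m) / np.sum(w)             # weighted grand mean
    a = np.sum(w * (m - mw) ** 2) / (k - 1)
    tmp = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + (2 * (k - 2) / (k ** 2 - 1)) * tmp
    F = a / b
    df1 = k - 1
    df2 = (k ** 2 - 1) / (3 * tmp)
    return F, df1, df2, f_dist.sf(F, df1, df2)

# Example: one construct rated in three simulated clusters of unequal size.
rng = np.random.default_rng(3)
c1, c2, c3 = rng.normal(5.0, 1.0, 99), rng.normal(4.2, 1.4, 245), rng.normal(3.1, 1.8, 93)
print(welch_anova(c1, c2, c3))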

TABLE 5. Average construct ratings for all participants and per cluster.

4.3 Regression Analysis

Given our aim to uncover the minimum number of predictors that significantly explain the greatest amount of variance for both sociopolitical and robot rights, we ran a series of step-wise multiple regressions for each cluster separately.
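A forward-selection loop over OLS models approximates this kind of step-wise procedure; the sketch below uses statsmodels on simulated predictors and outcome. The p < 0.05 entry criterion and the variable names are assumptions made for illustration, not a reproduction of the authors' procedure.

import numpy as np
import pandas as pd
import statsmodels.api as sm

def forward_stepwise(X, y, enter_p=0.05):
    """Greedy forward selection: add the predictor with the smallest
    p-value until no remaining predictor enters below enter_p."""
    selected, remaining = [], list(X.columns)
    while remaining:
        pvals = {}
        for cand in remaining:
            model = sm.OLS(y, sm.add_constant(X[selected + [cand]])).fit()
            pvals[cand] = model.pvalues[cand]
        best = min(pvals, key=pvals.get)
        if pvals[best] >= enter_p:
            break
        selected.append(best)
        remaining.remove(best)
    return sm.OLS(y, sm.add_constant(X[selected])).fit()

# Simulated stand-ins for the capacity, trait, and reason predictors
# and a rights outcome within one cluster.
rng = np.random.default_rng(4)
X = pd.DataFrame(rng.normal(size=(99, 5)),
                 columns=["affect", "cognition", "interaction",
                          "uniquely_human", "cognition_reason"])
y = 0.4 * X["cognition"] - 0.2 * X["cognition_reason"] + rng.normal(size=99)
final = forward_stepwise(X, y)
print(final.summary().tables[1])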

4.3.1 Explaining Sociopolitical Rights

For cluster 1 ( people with a cognitive affective view on robots ), the capacities, traits, and reasons to assign rights were significant predictors of participants’ readiness to grant robots sociopolitical rights ( F (2, 96) = 14.36, p < 0.001). Together, the capacity of cognition ( β = 0.420, p < 0.001) and cognition reason ( β = − 0.188, p = 0.040) explained 23 % of the variance. Readiness to grant sociopolitical rights was for cluster 1 participants associated with beliefs that robots will (eventually) possess cognitive capacities while considering cognition reasons had a negative effect on their readiness to grant sociopolitical rights. For cluster 2 ( people with a cognitive but open-minded view on robots ), the capacities, traits, and reasons to assign rights were significant predictors of participants’ readiness to grant robots sociopolitical rights ( F (1, 243) = 57.29, p < 0.001). The capacity of affect ( β = 0.437, p < 0.001) was the sole predictor explaining 19% of the variance. Readiness to grant robots sociopolitical rights was for cluster 2 participants associated with beliefs that robots will (eventually) possess affective capacities. For cluster 3 ( people with a mechanical view on robots ), the capacities, traits, and reasons to assign rights were significant predictors of participants’ readiness to grant robots sociopolitical rights ( F (3, 87) = 21.94, p < 0.001). Together, the capacity of cognition ( β = 0.537, p < 0.001), the trait of uniquely human ( β = − 0.246, p = 0.028), and cognition reason ( β = 0.421, p < 0.001) explained 41% of the variance. Readiness to grant robots sociopolitical rights was for cluster 3 participants associated with beliefs that robots will (eventually) possess cognition capacities but lacking traits of intelligence, intentionality, secondary emotions, and morality (uniquely human) while considering cognition reasons positively affected their readiness to grant sociopolitical rights.

4.3.2 Explaining Robot Rights

For cluster 1 ( people with a cognitive affective view on robots ), the capacities, traits, and reasons to assign rights were significant predictors of participants’ readiness to grant robots rights ( F (1, 97) = 15.09, p < 0.001). The capacity of interaction ( β = 0.367, p < 0.001) was the sole predictor, explaining 14 % of the variance. So, for cluster 1 participants, their belief that robots will (eventually) possess interaction capacities seems to be enough to grant the rights in our robot rights dimension to robots. For cluster 2 ( people with a cognitive but open-minded view on robots ), the capacities, traits, and reasons to assign rights were significant predictors of participants’ readiness to grant robots the rights in our robot rights dimension ( F (3, 241) = 17.26, p < 0.001). Together, the capacity of interaction ( β = 0.278, p < 0.001), the trait of human nature ( β = 0.151, p = 0.013), and compassion reasons ( β = 0.200, p = 0.001) explained 17% of the variance. So, for cluster 2 participants, robots should not only (eventually) possess interaction capacities but also (eventually) have the traits of primary emotions, sociability, and warmth (human nature) for them to grant robot rights, while considering compassion reasons further positively affected their readiness to do so. For cluster 3 ( people with a mechanical view on robots ), the capacities, traits, and reasons to assign rights were significant predictors of participants’ readiness to grant robots the rights in the robot rights dimension ( F (3, 87) = 11.14, p < 0.001). Together, the capacity of cognition ( β = 0.304, p = 0.002) as well as cognition ( β = 0.209, p = 0.045) and compassion ( β = 0.222, p = 0.028) reasons explained 25% of the variance. So, for cluster 3 participants, their readiness to assign the rights in the robot rights dimension to robots was explained by their beliefs that robots will (eventually) possess cognitive capacities, while considering both cognition and compassion reasons positively affected their readiness to do so.

5 Discussion

Current discussion on robot rights is dominated by legal experts, philosophers, and policy makers. To consider the opinion of lay persons in the policy debate, in line with the social-relational perspective on a robot’s moral standing ( Gunkel, 2012 , 2018 ; Coeckelbergh, 2010 , 2021 ), we explored people’s attitudes toward the issue of granting rights to robots in an online survey. A factor analysis has again identified two main dimensions for both reasons and rights, replicating our previous findings with the US-only sample ( De Graaf et al., 2021 ). The reason dimensions consist, on the one hand, of mainly cognition reasons (e.g., moving around, language, attention, learning), with only two items that are at face value unrelated (i.e., humanlike appearance and convenience), and, on the other hand, of affect-related compassion reasons (e.g., feelings, conscience, pain, moral considerations), with only one item that is at face value unrelated (i.e., acting on one’s own). It thus appears that people’s perspective on robot affect and cognition plays an important role in the context of granting robots rights, which is also in line with the results of our cluster and regression analysis.

The first rights dimension, labeled sociopolitical rights , consists mainly of items associated with the freedom to do what one wants (e.g., vote, duplicate, cross borders, self-decide, shape one’s biography) and to be treated fairly (e.g., be eligible for election, own property, fair wages). A clearly different second dimension, labeled robot rights , mainly consists of items associated with a robot’s technical needs to function properly (updates, energy, self-development, process data) and the item to not be abused. One explanation why this last item is also associated with this dimension is that abusing a robot may have been perceived as damaging other people’s property. These two rights dimensions reveal that people tend to differentiate between more general sociopolitical rights and those associated with a robot’s functional needs.

The average ratings for the various scales used in our study show that only the capacity of reality interaction (e.g., learning, verbally communicating, moving, perceiving the world) had high overall agreement that robots can do this well (see Table 5 ). People, thus, generally tend to have a rather positive view on the capabilities of (future) robots regarding their ability to (socially) interact with their environment, irrespective of their user characteristics (e.g., age, gender, continent, robot experience). The interaction capacity also predicts readiness to grant robot rights. The high averages on this scale indicate a high willingness to grant robot rights to robots (except for people from EU, those aged 55 and older, and those less familiar with robots , who tend to be more skeptical). Most people (about 80 % ) thus agree that robots should be updated, have access to energy, process collected data, and not be abused.

This is different for sociopolitical rights (e.g., voting, fair wages, and the right not to be terminated), which people from cluster 1 (i.e., those who are most likely from the US, possibly not aged 55 and older, have a lower educational level, and have encountered robots occasionally or frequently) seem to be most willing to grant to robots. This may be explained by our finding that these people are also more optimistic about the possibility that future robots can have affect, cognition, and human traits. Moreover, there is a strict order where people from cluster 1 are significantly more willing to grant sociopolitical rights than people from cluster 2 (i.e., those who are more likely from Asia, and possibly aged 30 and younger and have a higher educational level), followed by people from cluster 3 (i.e., those who are most likely from Europe and aged 55 and older, and possibly have never or rarely encountered robots), who are least willing to do so.

Our findings suggest that people from the US are very optimistic about the potential of robots in general and are more likely to assign them rights, that people from Asia are positioned somewhere in the middle on these issues, and that people from Europe are overall much more skeptical. Our findings are somewhat similar to those of Bartneck et al. (2007) , who also find that people from the US are the most positive, more so than Japanese respondents, who in turn appear more positive than Europeans. Although one might be tempted to conclude from this that there is a cultural link with assigning rights to robots, more evidence is needed to support such a relation. Note that our continent-based samples do not match the clusters (the US sample has N = 200 vs. N = 99 for cluster 1, the Asia sample N = 142 vs. N = 245 for cluster 2, and the EU sample N = 97 vs. N = 93 for cluster 3). MacDorman et al. (2008) also do not find any evidence for strong cultural differences between the US and Japan. A cultural interpretation of our findings therefore seems premature and would require more research to support such conclusions.

Based on our cluster analysis, we can conclude that people from cluster 3 (i.e., those who are more likely from Europe and aged 55 and older, and possibly have never or rarely encountered robots) generally have a more mechanical view of robots and are more skeptical about robots having cognitive or affective capacities or humanness traits. This is in line with a tendency for mechanistic dehumanization in this group. Because cognition and affect-related reasons are a predictor for this group, they are willing to grant sociopolitical rights only if these capacities are realized. People from cluster 2 (i.e., those more likely from Asia, possibly aged 30 and younger, and possibly with a higher education level) have a significantly more positive view and believe robots will have cognitive capacities and human traits, but they are less inclined to believe that robots will have affect, which for them is important for granting sociopolitical rights. This group appears to have a cognitive view of robots but is more skeptical about affective capacities. Note that all groups more strongly believe that robots will have cognitive rather than affective capacities (see Table 5 ). In contrast, people from cluster 1 (i.e., those more likely from the US, possibly not aged 55 and older, with a lower education level, and having encountered robots occasionally or frequently) have a very positive view on all capacities and traits of future robots. It appears that they have a cognitive-affective view of robots.

In our analysis, we did not find many strong relations between demographic factors and people’s views on assigning rights to robots (with the exception of age and continent), which is in line with MacDorman et al. (2008) , who also do not find such relations. Flandorfer (2012) has reported a link between age, experience, and attitude toward robots: a younger age is associated with higher exposure to and more positive views on new technology in general, but we did not find such a trend. Finally, our findings are overall similar to those reported in our previous work ( De Graaf et al., 2021 ), which analyzed only the US sample. One noticeable difference is that in our current analysis we found only three instead of four clusters, which correlate with the continents associated with the three samples we collected. The fact that we had four groups in our previous work is explained by differences in experience with robots, which do not play a differentiating role in our current analysis.

5.1 Limitations and Future Work

Like any study, ours has some limitations. First, the three samples from the US, EU, and Asia varied significantly in the distribution of age and educational level. Regarding age, the US sample had an overrepresentation of people aged 50 and over, and the Asia sample had an overrepresentation of people aged 30 and younger. These demographics are actually quite similar to the actual population demographics in these continents. 9 Regarding educational level, the US sample had an overrepresentation of people with a high school degree, and the Asia sample had an overrepresentation of people with bachelor’s, master’s, or doctoral degrees.

Second, participants may have interpreted the survey items differently, particularly the reason items because of their conditional nature. We asked to suppose robots had certain capabilities or features and assess their willingness to grant rights if that were the case. Similarly, for the robot rights, which may have been granted more easily because participants read those more as operational requirements for robots rather than as rights. Future work should address any potential difficulty with interpreting these conditionals ( Skovgaard-Olsen et al., 2016 ) to further validate our items and underlying dimensions regarding rights and reasons to grant them. A potentially interesting approach for such future work would be to relate our findings to the more general literature on technology acceptance (e.g., to understand how experience with robots factors into attitudes of people ( Turja and Oksanen, 2019 )) or to compare the current reasons to grant robots rights and the mental capacities ( Malle, 2019 ) revealing potential missing coverage in the reasons. Finally, future research should explore the effect of a robot’s physical appearance on granting robots rights beyond the mechanical-humanoid dimension applied in our study.

5.2 Conclusion

Our study presents a survey design to empirically investigate public opinion about robot rights. There appears to be an overall consensus about the interactive potential of robots. We found that people are more willing to grant robots basic robot rights, such as access to energy and the right to update, than sociopolitical rights, such as voting rights and the right to own property. We did not find any strong relation between demographic factors such as age, or other factors such as experience with robots or geographical region, and the willingness to assign rights to robots. We did find, however, that beliefs about the (future) capacities of robots influence this willingness. Our results suggest that, in order to reach a broad consensus about assigning rights to robots, we will first need to reach an agreement in the public domain about whether robots will ever develop cognitive and affective capacities.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics Statement

The studies involving human participants were reviewed and approved by VU Amsterdam. The patients/participants provided their written informed consent to participate in this study.

Author Contributions

All authors listed have made substantial, direct, and intellectual contributions to the work, to the design of the survey, construction of the materials and instruments, and approved it for publication. MD and KH contributed to data-collection. MD contributed to data preparation and data-analyses. FH contributed to conceptual questions about reasons and rights. KH advised on the data-analyses and contributed most to the identification of the reason and rights items included in the survey. All authors jointly discussed and contributed to the final formulation of the items in the survey.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

1 Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics, Committee on Legal Affairs, https://www.europarl.europa.eu/doceo/document/JURI-PR-582443_EN.pdf?redirect , accessed February 23, 2020.

2 European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL), https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52017IP0051 , accessed October 5, 2020.

3 Open Letter to the European Commission Artificial Intelligence and Robotics, http://www.robotics-openletter.eu/ , accessed August 13, 2020.

4 Universal Declaration of Human Rights, https://www.un.org/en/universal-declaration-human-rights/ , accessed on March 1, 2020, which was adopted in 1948 by the United Nations General Assembly.

5 https://www.ohchr.org/en/professionalinterest/pages/ccpr.aspx , accessed March 1, 2020.

6 https://www.ohchr.org/en/professionalinterest/pages/cescr.aspx , accessed March 1, 2020.

7 https://declarationofar.org/ , accessed March 1, 2020.

8 Defense Science Board Summer study on autonomy, United States Defense Science Board, https://www.hsdl.org/?view&did=794641 , accessed March 13, 2020.

9 2019 Revision of World Population Prospects, United Nations, https://population.un.org/ , accessed on September 3, 2021.

Bartneck, C., Suzuki, T., Kanda, T., and Nomura, T. (2007). The Influence of People’s Culture and Prior Experiences with Aibo on Their Attitude Towards Robots. Ai Soc. 21, 217–230. doi:10.1007/s00146-006-0052-7

Basl, J. (2014). Machines as Moral Patients We Shouldn't Care About (Yet): The Interests and Welfare of Current Machines. Philos. Technol. 27, 79–96. doi:10.1007/s13347-013-0122-y

Billing, E., Rosén, J., and Lindblom, J. (2019). “Expectations of Robot Technology in Welfare,” in The Second Workshop on Social Robots in Therapy and Care in Conjunction with the 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI 2019) , Daegu, Korea , March 11–14 2019 .

Block, N. (1995). On a Confusion About a Function of Consciousness. Behav. Brain Sci. 18, 227–247. doi:10.1017/S0140525X00038188

Borenstein, J., and Arkin, R. (2016). Robotic Nudges: The Ethics of Engineering a More Socially Just Human Being. Sci. Eng. Ethics 22, 31–46. doi:10.1007/s11948-015-9636-2

Bratchell, N. (1989). Cluster Analysis. Chemometrics Intell. Lab. Syst. 6, 105–125. doi:10.1016/0169-7439(87)80054-0

Bryson, J. J., Diamantis, M. E., and Grant, T. D. (2017). Of, for, and by the People: The Legal Lacuna of Synthetic Persons. Artif. Intell. L. 25, 273–291. doi:10.1007/s10506-017-9214-9

Calo, R. (2015). Robotics and the Lessons of Cyberlaw. Calif. L. Rev. 103, 513–563. doi:10.2307/24758483

Chopra, S., and White, L. (2004). “Artificial Agents-Personhood in Law and Philosophy,” in Proceedings of the 16th European Conference on Artificial Intelligence , Valencia Spain , August 22–27, 2004 ( IOS Press ), 635–639.

Ciepley, D. (2013). Beyond Public and Private: Toward a Political Theory of the Corporation. Am. Polit. Sci. Rev. 107, 139–158. doi:10.1017/S0003055412000536

Coeckelbergh, M. (2021). How to Use Virtue Ethics for Thinking About the Moral Standing of Social Robots: A Relational Interpretation in Terms of Practices, Habits, and Performance. Int. J. Soc. Robotics 13, 31–40. doi:10.1007/s12369-020-00707-z

Coeckelbergh, M. (2010). Robot Rights? Towards a Social-Relational Justification of Moral Consideration. Ethics Inf. Technol. 12, 209–221. doi:10.1007/s10676-010-9235-5

Colagrosso, M. D., and Mozer, M. C. (2005). “Theories of Access Consciousness,” in Advances in Neural Information Processing Systems 17 . Editors L. K. Saul, Y. Weiss, and L. Bottou (Cambridge, MA, USA: MIT Press ), 289–296.

Danaher, J. (2016). Robots, Law and the Retribution Gap. Ethics Inf. Technol. 18, 299–309. doi:10.1007/s10676-016-9403-3

Darling, K. (2016). “Robot Law,” in Extending Legal protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior towards Robotic Objects (April 23, 2012) . Editors A. Ryan Calo, M. Froomkin, and I. Kerr (Glos, UK: Edward Elgar Publishing ). doi:10.2139/ssrn.2044797

De Graaf, M. M., and Allouch, S. B. (2016). “Anticipating Our Future Robot Society: The Evaluation of Future Robot Applications from a User’s Perspective,” in International Symposium on Robot and Human Interactive Communication (RO-MAN) , New York, USA , 26–31 August, 2016 ( IEEE ), 755–762. doi:10.1109/roman.2016.7745204

de Graaf, M. M. A., and Malle, B. F. (2019). “People’s Explanations of Robot Behavior Subtly Reveal Mental State Inferences,” in 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) , Daegu, South Korea , March 11–14, 2019 , 239–248. doi:10.1109/hri.2019.8673308

De Graaf, M. M., Hindriks, F. A., and Hindriks, K. V. (2021). “Who Wants to Grant Robots Rights,” in International Conference on Human-Robot Interaction (HRI) , Cambridge, UK (virtual) , March 09–11 , 38–46.

Fink, J. (2012). “Anthropomorphism and Human Likeness in the Design of Robots and Human-Robot Interaction,” in International Conference on Social Robotics , Chengdu, China , October 29–31, 2012 ( Springer ), 199–208. doi:10.1007/978-3-642-34103-8_20

Flandorfer, P. (2012). Population Ageing and Socially Assistive Robots for Elderly Persons: The Importance of Sociodemographic Factors for User Acceptance. Int. J. Popul. Res. 2012, 829835. doi:10.1155/2012/829835

Freitas, R. A. (1985). Can the Wheels of justice Turn for Our Friends in the Mechanical Kingdom? Don’t Laugh. Student lawyer 13, 54–56.

Gerdes, A. (2016). The Issue of Moral Consideration in Robot Ethics. SIGCAS Comput. Soc. 45, 274–279. doi:10.1145/2874239.2874278

Gray, H. M., Gray, K., and Wegner, D. M. (2007). Dimensions of Mind Perception. Science 315, 619. doi:10.1126/science.1134475

Gray, K., Young, L., and Waytz, A. (2012). Mind Perception Is the Essence of Morality. Psychol. Inq. 23, 101–124. doi:10.1080/1047840X.2012.651387

Gunkel, D. J. (2014). A Vindication of the Rights of Machines. Philos. Technol. 27, 113–132. doi:10.1007/s13347-013-0121-z

Gunkel, D. J. (2018). Robot Rights . Cambridge, MA, USA: MIT Press .

Gunkel, D. J. (2012). The Machine Question: Critical Perspectives on AI, Robots, and Ethics . Cambridge, MA, USA: MIT Press .

Haslam, N., Bastian, B., Laham, S., and Loughnan, S. (2012). “Humanness, Dehumanization, and Moral Psychology,” in Herzliya Series on Personality and Social Psychology. The Social Psychology of Morality: Exploring the Causes of Good and Evil . Editors M. Mikulincer, and P. R. Shaver (Washington, DC, USA: American Psychological Association ), 203–218. doi:10.1037/13091-011

Haslam, N. (2006). Dehumanization: An Integrative Review. Pers Soc. Psychol. Rev. 10, 252–264. doi:10.1207/s15327957pspr1003_4

Jaynes, T. L. (2020). Legal Personhood for Artificial Intelligence: Citizenship as the Exception to the Rule. AI Soc. 35, 343–354. doi:10.1007/s00146-019-00897-9

Kuehn, J., and Haddadin, S. (2017). An Artificial Robot Nervous System to Teach Robots How to Feel Pain and Reflexively React to Potentially Damaging Contacts. IEEE Robot. Autom. Lett. 2, 72–79. doi:10.1109/LRA.2016.2536360

Laukyte, M. (2019). “Ai as a Legal Person,” in Proceedings of the Seventeenth International Conference on Artificial Intelligence and Law , Montreal, Canada , June 17–21, 2019 (New York, NY: Association for Computing Machinery ), 209–213. doi:10.1145/3322640.3326701

Levy, D. (2009). The Ethical Treatment of Artificially Conscious Robots. Int. J. Soc. Robotics 1, 209–216. doi:10.1007/s12369-009-0022-6

MacDorman, K. F., Vasudevan, S. K., and Ho, C.-C. (2008). Does Japan Really Have Robot Mania? Comparing Attitudes by Implicit and Explicit Measures. AI Soc. 23, 485–510. doi:10.1007/s00146-008-0181-2

Malle, B. (2019). “How Many Dimensions of Mind Perception Really Are There,” in Proceedings of the 41st Annual Meeting of the Cognitive Science Society , Montreal, Canada , July 24–27, 2019 . Editors A. K. Goel, C. M. Seifert, and C. Freksa ( Cognitive Science Society ), 2268–2274.

Malle, B., and Thapa, S. (2017). Unpublished Robot Pictures . Providence, RI, USA: Brown University .

Matthias, A. (2004). The Responsibility gap: Ascribing Responsibility for the Actions of Learning Automata. Ethics Inf. Technol. 6, 175–183. doi:10.1007/s10676-004-3422-1

Miller, L. F. (2015). Granting Automata Human Rights: Challenge to a Basis of Full-Rights Privilege. Hum. Rights Rev. 16, 369–391. doi:10.1007/s12142-015-0387-x

Murtagh, F., and Legendre, P. (2011). Ward’s Hierarchical Clustering Method: Clustering Criterion and Agglomerative Algorithm. Stat 1050, 11. doi:10.1007/s00357-014-9161-z

O’connor, B. P. (2000). Spss and Sas Programs for Determining the Number of Components Using Parallel Analysis and Velicer’s Map Test. Behav. Res. Methods Instr. Comput. 32, 396–402. doi:10.3758/BF03200807

Phillips, E., Ullman, D., de Graaf, M. M. A., and Malle, B. F. (2017). What Does a Robot Look like?: A Multi-Site Examination of User Expectations about Robot Appearance. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 61, 1215–1219. doi:10.1177/1541931213601786

Regan, T. (2004). The Case for Animal Rights . Berkeley, CA, USA: Univ of California Press .

Schwitzgebel, E., and Garza, M. (2015). A Defense of the Rights of Artificial Intelligences. Midwest Stud. Philos. 39, 98–119. doi:10.1111/misp.12032

Singer, P. (1974). All Animals Are Equal. Philosophic Exchange 5, 6.

Skovgaard-Olsen, N., Singmann, H., and Klauer, K. C. (2016). The Relevance Effect and Conditionals. Cognition 150, 26–36. doi:10.1016/j.cognition.2015.12.017

Solum, L. B. (1992). Legal Personhood for Artificial Intelligences. North Carolina L. Rev. 70, 1231–1288.

Sullins, J. P. (2010). Robowarfare: Can Robots Be More Ethical Than Humans on the Battlefield? Ethics Inf. Technol. 12, 263–275. doi:10.1007/s10676-010-9241-7

Tavani, H. (2018). Can Social Robots Qualify for Moral Consideration? Reframing the Question About Robot Rights. Information 9, 73. doi:10.3390/info9040073

Taylor, W. L. (1953). "Cloze Procedure": A New Tool for Measuring Readability. Journalism Q. 30, 415–433. doi:10.1177/107769905303000401

Tessier, C. (2017). Robots Autonomy: Some Technical Issues . Cham: Springer International Publishing , 179–194. doi:10.1007/978-3-319-59719-5_8

Tomarken, A. J., and Serlin, R. C. (1986). Comparison of Anova Alternatives Under Variance Heterogeneity and Specific Noncentrality Structures. Psychol. Bull. 99, 90–99. doi:10.1037/0033-2909.99.1.90

Torrance, S. (2012). Super-intelligence and (Super-)consciousness. Int. J. Mach. Conscious. 04, 483–501. doi:10.1142/S1793843012400288

Turja, T., and Oksanen, A. (2019). Robot Acceptance at Work: A Multilevel Analysis Based on 27 Eu Countries. Int. J. Soc. Robotics 11, 679–689. doi:10.1007/s12369-019-00526-x

van den Hoven van Genderen, R. (2018). Do We Need New Legal Personhood in the Age of Robots and AI . Singapore: Springer Singapore , 15–55. doi:10.1007/978-981-13-2874-9_2

Ward, J. H. (1963). Hierarchical Grouping to Optimize an Objective Function. J. Am. Stat. Assoc. 58, 236–244. doi:10.1080/01621459.1963.10500845

Weisman, K., Dweck, C. S., and Markman, E. M. (2017). Rethinking People's Conceptions of Mental Life. Proc. Natl. Acad. Sci. USA 114, 11374–11379. doi:10.1073/pnas.1704347114

Wellman, C. (2018). The Proliferation of Rights: Moral Progress or Empty Rhetoric . New York, NY, USA: Routledge .

Wiener, N. (1954). The Human Use of Human Beings: Cybernetics and Society , 320. Boston, MA, USA: Houghton Mifflin .

Wilkinson, C., Bultitude, K., and Dawson, E. (2011). “oh Yes, Robots! People like Robots; the Robot People Should Do Something”: Perspectives and Prospects in Public Engagement with Robotics. Sci. Commun. 33, 367–397. doi:10.1177/1075547010389818

Wootson, C. (2017). Saudi Arabia, Which Denies Women Equal Rights, Makes a Robot a Citizen. The Wash. Post .

Złotowski, J., Proudfoot, D., Yogeeswaran, K., and Bartneck, C. (2015). Anthropomorphism: Opportunities and Challenges in Human–Robot Interaction. Int. J. Soc. robotics 7, 347–360. doi:10.1007/s12369-014-0267-6

Zygmont, C., and Smith, M. R. (2014). Robust Factor Analysis in the Presence of Normality Violations, Missing Data, and Outliers: Empirical Questions and Possible Solutions. Quantitative Methods Psychol. 10, 40–55. doi:10.20982/tqmp.10.1.p040

Keywords: capacities, reasons, rights, robots, traits

Citation: De Graaf MMA, Hindriks FA and Hindriks KV (2022) Who Wants to Grant Robots Rights?. Front. Robot. AI 8:781985. doi: 10.3389/frobt.2021.781985

Received: 23 September 2021; Accepted: 29 November 2021; Published: 13 January 2022.


Copyright © 2022 De Graaf, Hindriks and Hindriks. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Maartje M. A. De Graaf, [email protected]

This article is part of the Research Topic "Should Robots Have Standing? The Moral and Legal Status of Social Robots."

Robots and Rights: Confucianism Offers Alternative


Philosophers and legal scholars have explored significant aspects of the moral and legal status of robots, with some advocating for giving robots rights. As robots assume more roles in the world, a new analysis reviewed research on robot rights, concluding that granting rights to robots is a bad idea. Instead, the article looks to Confucianism to offer an alternative.

The analysis, by a researcher at Carnegie Mellon University (CMU), appears in Communications of the ACM, published by the Association for Computing Machinery.

"People are worried about the risks of granting rights to robots," notes Tae Wan Kim , Associate Professor of Business Ethics at CMU's Tepper School of Business, who conducted the analysis. "Granting rights is not the only way to address the moral status of robots: Envisioning robots as rites bearers—not a rights bearers—could work better." Various non-natural entities—such as corporations—are considered people and even assume some Constitutional rights. In addition, humans are not the only species with moral and legal status; in most developed societies, moral and legal considerations preclude researchers from gratuitously using animals for lab experiments.


Although many believe that respecting robots should lead to granting them rights, Kim argues for a different approach. Confucianism, an ancient Chinese belief system, focuses on the social value of achieving harmony; individuals are made distinctively human by their ability to conceive of interests not purely in terms of personal self-interest, but in terms that include a relational and a communal self. This, in turn, requires a unique perspective on rites, with people enhancing themselves morally by participating in proper rituals.

When considering robots, Kim suggests that the Confucian alternative of assigning rites—or what he calls role obligations—to robots is more appropriate than giving robots rights. The concept of rights is often adversarial and competitive, and potential conflict between humans and robots is concerning. "Assigning role obligations to robots encourages teamwork, which triggers an understanding that fulfilling those obligations should be done harmoniously," explains Kim. "Artificial intelligence (AI) imitates human intelligence, so for robots to develop as rites bearers, they must be powered by a type of AI that can imitate humans' capacity to recognize and execute team activities—and a machine can learn that ability in various ways."

Kim acknowledges that some will question why robots should be treated respectfully in the first place. "To the extent that we make robots in our image, if we don't treat them well, as entities capable of participating in rites, we degrade ourselves," he suggests.



If Animals Have Rights, Should Robots?


By Nathan Heller

In relation to animals we can conceive of ourselves as peers or protectors. Robots may soon face the same choice about us.

Harambe, a gorilla, was described as “smart,” “curious,” “courageous,” “magnificent.” But it wasn’t until last spring that Harambe became famous, too. On May 28th, a human boy, also curious and courageous, slipped through a fence at the Cincinnati Zoo and landed in the moat along the habitat that Harambe shared with two other gorillas. People at the fence above made whoops and cries and other noises of alarm. Harambe stood over the boy, as if to shield him from the hubbub, and then, grabbing one of his ankles, dragged him through the water like a doll across a playroom floor. For a moment, he took the child delicately by the waist and propped him on his legs, in a correct human stance. Then, as the whooping continued, he knocked the boy forward again, and dragged him halfway through the moat.

Harambe was a seventeen-year-old silverback, an animal of terrific strength. When zookeepers failed to lure him from the boy, a member of their Dangerous Animal Response Team shot the gorilla dead. The child was hospitalized briefly and released, declared to have no severe injuries.

Harambe, in Swahili, means “pulling together.” Yet the days following the death seemed to pull people apart. “We did not take shooting Harambe lightly, but that child’s life was in danger,” the zoo’s director, Thane Maynard, explained. Primatologists largely agreed, but some spectators were distraught. A Facebook group called Honoring Harambe appeared, featuring fan portraits, exchanges with the hashtag #JusticeforHarambe, and a meditation, “May We Always Remember Harambe’s Sacrifice. . . . R.I.P. Hero.” The post was backed with music.

As the details of the gorilla’s story gathered in the press, he was often depicted in a stylish wire-service shot, crouched with an arm over his right knee, brooding at the camera like Sean Connery in his virile years. “This beautiful gorilla lost his life because the boy’s parents did not keep a closer watch on the child,” a petition calling for a criminal investigation said. It received half a million signatures—several hundred thousand more, CNN noted, than a petition calling for the indictment of Tamir Rice’s shooters. People projected thoughts into Harambe’s mind. “Our tendency is to see our actions through human lenses,” a neuroscientist named Kurt Gray told the network as the frenzy peaked. “We can’t imagine what it’s like to actually be a gorilla. We can only imagine what it’s like to be us being a gorilla.”

This simple fact is responsible for centuries of ethical dispute. One Harambe activist might believe that killing a gorilla as a safeguard against losing human life is unjust due to our cognitive similarity: the way gorillas think is a lot like the way we think, so they merit a similar moral standing. Another might believe that gorillas get their standing from a cognitive dissimilarity: because of our advanced powers of reason, we are called to rise above the cat-eat-mouse game, to be special protectors of animals, from chickens to chimpanzees. (Both views also support untroubled omnivorism: we kill animals because we are but animals, or because our exceptionalism means that human interests win.) These beliefs, obviously opposed, mark our uncertainty about whether we’re rightful peers or masters among other entities with brains. “One does not meet oneself until one catches the reflection from an eye other than human,” the anthropologist and naturalist Loren Eiseley wrote. In confronting similarity and difference, we are forced to set the limits of our species’ moral reach.

Today, however, reckonings of that sort may come with a twist. In an automated world, the gaze that meets our own might not be organic at all. There’s a growing chance that it will belong to a robot: a new and ever more pervasive kind of independent mind. Traditionally, the serial abuse of Siri or violence toward driverless cars hasn’t stirred up Harambe-like alarm. But, if like-mindedness or mastery is our moral standard, why should artificial life with advanced brains and human guardianships be exempt? Until we can pinpoint animals’ claims on us, we won’t be clear about what we owe robots—or what they owe us.

A simple case may untangle some of these wires. Consider fish. Do they merit what D. H. Lawrence called humans’ “passionate, implicit morality”? Many people have a passionate, implicit response: No way, fillet. Jesus liked eating fish, it would seem; following his resurrection, he ate some, broiled. Few weekenders consider fly-fishing an expression of rage and depravity (quite the opposite), and sushi diners ordering kuromaguro are apt to feel pangs from their pocketbooks more than from their souls. It is not easy to love the life of a fish, in part because fish don’t seem very enamored of life themselves. What moral interest could they hold for us?

“What a Fish Knows: The Inner Lives of Our Underwater Cousins” (Scientific American/Farrar, Straus & Giroux) is Jonathan Balcombe’s exhaustively researched and elegantly written argument for the moral claims of ichthyofauna, and, to cut to the chase, he thinks that we owe them a lot. “When a fish takes notice of us, we enter the conscious world of another being,” Balcombe, the Humane Society’s director for animal sentience, writes. “Evidence indicates a range of emotions in at least some fishes, including fear, stress, playfulness, joy, and curiosity.” Balcombe’s wish for joy to the fishes (a plural he prefers to “fish,” the better to mark them as individuals) may seem eccentric to readers who look into the eyes of a sea bass and see nothing. But he suggests that such indifference reflects bias, because the experience of fish—and, by implication, the experience of many lower-order creatures—is nearer to ours than we might think.

Take fish pain. Several studies have suggested that it isn’t just a reflexive response, the way your hand pulls back involuntarily from a hot stove, but a version of the ouch! that hits you in your conscious brain. For this reason and others, Balcombe thinks that fish behavior is richer in intent than previously suspected. He touts the frillfin goby, which memorizes the topography of its area as it swims around, and then, when the tide is low, uses that mental map to leap from one pool to the next. Tuskfish are adept at using tools (they carry clams around, for smashing on well-chosen rocks), while cleaner wrasses outperform chimpanzees on certain inductive-learning tests. Some fish even go against the herd. Not all salmon swim upstream, spawn, and die, we learn. A few turn around, swim back, and do it all again.

From there, it is a short dive to the possibility of fish psychology. Some stressed-out fish enjoy a massage, flocking to objects that rub their flanks until their cortisol levels drop. Male pufferfish show off by fanning elaborate geometric mandalas in the sand and decorating them, according to their taste, with shells. Balcombe reports that the female brown trout fakes the trout equivalent of orgasm. Nobody, probably least of all the male trout, is sure what this means.

Balcombe thinks the idea that fish are nothing like us arises out of prejudice: we can empathize with a hamster, which blinks and holds food in its little paws, but the fingerless, unblinking fish seems too “other.” Although fish brains are small, to assume that this means they are stupid is, as somebody picturesquely tells him, “like arguing that balloons cannot fly because they don’t have wings.” Balcombe overcompensates a bit, and his book is peppered with weird, anthropomorphizing anecdotes about people sharing special moments with their googly-eyed friends. But his point stands. If we count fish as our cognitive peers, they ought to be included in our circle of moral duty.

Quarrels come at boundary points. Should we consider it immoral to swat a mosquito? If these insects don’t deserve moral consideration, what’s the crucial quality they lack? A worthwhile new book by the Cornell law professors Sherry F. Colb and Michael C. Dorf, “Beating Hearts: Abortion and Animal Rights” (Columbia), explores the challenges of such border-marking. The authors point out that, oddly, there is little overlap between animal-rights supporters and pro-life supporters. Shouldn’t the rationale for not ending the lives of neurologically simpler animals, such as fish, share grounds with the rationale for not terminating embryos? Colb and Dorf are pro-choice vegans (“Our own journey to veganism began with the experience of sharing our lives with our dogs”), so, although they note the paradox, they do not think a double standard is in play.

The big difference, they argue, is “sentience.” Many animals have it; zygotes and embryos don’t. Colb and Dorf define sentience as “the ability to have subjective experiences,” which is a little tricky, because animal subjectivity is what’s hard for us to pin down. A famous paper called “What Is It Like to Be a Bat?,” by the philosopher Thomas Nagel, points out that even if humans were to start flying, eating bugs, and getting around by sonar they would not have a bat’s full experience, or the batty subjectivity that the creature had developed from birth. Colb and Dorf sometimes fall into such a trap. In one passage, they suggest that it doesn’t matter whether animals are aware of pain, because “the most searing pains render one incapable of understanding pain or anything else”—a very human read on the experience.

Animals, though, obviously interact with the world differently from the way that plants and random objects do. The grass hut does not care whether it is burned to ash or left intact. But the heretic on the pyre would really rather not be set aflame, and so, perhaps, would the pig on the spit. Colb and Dorf refer to this as having “interests,” a term that—not entirely to their satisfaction—often carries overtones of utilitarianism, the ethical school of thought based on the pursuit of the greatest good over all. Jeremy Bentham, its founder, mentioned animals in a resonant footnote to his “An Introduction to the Principles of Morals and Legislation” (1789):

The day may come, when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. . . . The question is not, Can they reason? nor, Can they talk? but, Can they suffer?

If animals suffer, the philosopher Peter Singer noted in “Animal Liberation” (1975), shouldn’t we include them in the calculus of minimizing pain? Such an approach to peership has advantages: it establishes the moral claims of animals without projecting human motivations onto them. But it introduces other problems. Bludgeoning your neighbor is clearly worse than poisoning a rat. How can we say so, though, if the entity’s suffering matters most?

Singer’s answer would be the utilitarian one: it’s not about the creature; it’s about the system as a whole. The murder of your neighbor will distribute more pain than the death of a rat. Yet the situations in which we have to choose between animal life and human life are rare, and minimizing suffering for animals is often easy. We can stop herding cows into butchery machines. We can barbecue squares of tofu instead of chicken thighs. Most people, asked to drown a kitten, would feel a pang of moral anguish, which suggests that, at some level, we know suffering matters. The wrinkle is that our antennae for pain are notably unreliable. We also feel that pang regarding objects—for example, robots—that do not suffer at all.


Last summer, a group of Canadian roboticists set an outlandish invention loose on the streets of the United States. They called it hitchBOT, not because it was a heavy-smoking contrarian with a taste for Johnnie Walker Black—the universe is not that generous—but because it was programmed to hitchhike. Clad in rain boots, with a goofy, pixellated smile on its “face” screen, hitchBOT was meant to travel from Salem, Massachusetts, to San Francisco, by means of an outstretched thumb and a supposedly endearing voice-prompt personality. Previous journeys, across Canada and around Europe, had been encouraging: the robot always reached its destination. For two weeks, hitchBOT toured the Northeast, saying inviting things such as “Would you like to have a conversation? . . . I have an interest in the humanities.” Then it disappeared. On August 1st, it was found next to a brick wall in Philadelphia, beat up and decapitated. Its arms had been torn off.

Response was swift. “I can’t lie. I’m still devastated by the death of hitchBOT,” a reporter tweeted. “The destruction of hitchBOT is yet another reminder that our society has a long way to go,” a blogger wrote.

Humans’ capacity to develop warm and fuzzy feelings toward robots is the basis for a blockbuster movie genre that includes “WALL-E” and “A.I.,” and that peaks in the “Star Wars” universe, a multigenerational purgatory of interesting robots and tedious people. But the sentiment applies in functional realms, too. At one point, a roboticist at the Los Alamos National Laboratory built an unlovable, centipede-like robot designed to clear land mines by crawling forward until all its legs were blown off. During a test run, in Arizona, an Army colonel ordered the exercise stopped, because, according to the Washington Post, he found the violence to the robot “inhumane.”

By Singer’s standard, this is nonsense. Robots are not living, and we know for sure that they don’t suffer. Why do even hardened colonels, then, feel shades of ethical responsibility toward such systems? A researcher named Kate Darling, with affiliations at M.I.T., Harvard, and Yale, has recently been trying to understand what is at stake in robo bonds of this kind. In a paper, she names three factors: physicality (the object exists in our space, not onscreen), perceived autonomous movement (the object travels as if with a mind of its own), and social behavior (the robot is programmed to mimic human-type cues). In an experiment that Darling and her colleagues ran, participants were given Pleos—small baby Camarasaurus robots—and were instructed to interact with them. Then they were told to tie up the Pleos and beat them to death. Some refused. Some shielded the Pleos from the blows of others. One woman removed her robot’s battery to “spare it the pain.” In the end, the participants were persuaded to “sacrifice” one whimpering Pleo, sparing the others from their fate.

Darling, trying to account for this behavior, suggests that our aversion to abusing lifelike machines comes from “societal values.” While the rational part of our mind knows that a Pleo is nothing but circuits, gears, and software—a machine that can be switched off, like a coffeemaker—our sympathetic impulses are fooled, and, because they’re fooled, to beat the robot is to train them toward misconduct. (This is the principle of HBO’s popular new show “Westworld,” on which the abuse of advanced robots is emblematic of human perfidy.) “There is concern that mistreating an object that reacts in a lifelike way could impact the general feeling of empathy we experience when interacting with other entities,” Darling writes. The problem with torturing a robot, in other words, has nothing to do with what a robot is, and everything to do with what we fear most in ourselves.

Such concerns, like heartland rivers flowing toward the Mississippi, approach a big historical divide in ethics. On one bank are the people, such as Bentham, who believe that morality is determined by results. (It’s morally O.K. to lie about your cubicle-mate’s demented-looking haircut, because telling the truth will just bring unhappiness to everyone involved.) On the other bank are those who think that morality rests on rights and rules. (A moral person can’t be a squeamish liar, even about haircuts.) Animal ethics has tended to favor the first group: people were urged to consider their actions’ effects on living things. But research like Darling’s makes us wonder whether the way forward rests with the second—an accounting of rights and obligations, rather than a calculation of consequences.

Consider the logic, or illogic, of animal-cruelty laws. New York State forbids inflicting pain on pets but allows fox trapping; prohibits the electrocution of “fur-bearing” animals, such as the muskrat, but not furry animals, such as the rat; and bans decorative tattoos on your dog but not on your cow. As Darling puts it, “Our apparent desire to protect those animals to which we more easily relate indicates that we may care more about our own emotional state than any objective biological criteria.” She looks to Kant, who saw animal ethics as serving people. “If a man has his dog shot . . . he thereby damages the kindly and humane qualities in himself,” he wrote in “Lectures on Ethics.” “A person who already displays such cruelty to animals is also no less hardened toward men.”

This isn’t peership morality. It looks like a kind of passive guardianship, with the humans striving to realize their exalted humanness, and the animals—or the robots—benefitting from the trickle-down effects of that endeavor. Darling suggests that it suits an era when people, animals, and robots are increasingly swirled together. Do we expect our sixteen-month-old children to understand why it’s cruel to pull the tail of the cat but morally acceptable to chase the Roomba? Don’t we want to raise a child without the impulse to terrorize any lifelike thing, regardless of its putative ontology? To the generation now in diapers, carrying on a conversation with an artificial intelligence like Siri is the natural state of the world. We talk to our friends, we talk to our devices, we pet our dogs, we caress our lovers. In the flow of modern life, Kant’s insistence that rights obtain only in human-on-human action seems unhelpfully restrictive.

Here a hopeful ethicist might be moved, like Gaston Modot in “L’Age d’Or,” to kick a small dog in exasperation. Addressing other entities as moral peers seems a nonstarter: it’s unclear where the boundary of peership begins, and efforts to figure it out snag on our biases and misperceptions. But acting as principled guardians confines other creatures to a lower plane—and the idea that humans are the special masters of the universe, charged with administration of who lives and thrives and dies, seems outdated. Benthamites like Peter Singer can get stuck at odd extremes, too. If avoiding suffering is the goal, there is little principled basis to object to the painless extinction of a whole species.

Where to turn? Some years ago, Christine M. Korsgaard, a Harvard philosopher and Kant scholar, started working on a Kantian case for animal rights (one based on principles of individual freedom rather than case-by-case suffering, like Singer’s). Her first obstacle was Kant himself. Kant thought that rights arose from rational will, clearing a space where each person could act as he or she reasoned to be good without the tyranny of others’ thinking. (My property rights keep you from installing a giant hot tub on my front lawn, even if you deem it good. This frees me to use my lawn in whatever non-hot-tub-involving way I deem good.) Animals can’t reason their way to choices, Kant noted, so the freedom of rights would be lost on them. If the nectar-drinking hummingbird were asked to exercise her will to the highest rational standard, she’d keep flying from flower to flower.

Korsgaard argued that hanging everything on rational choice was a red herring, however, because humans, even for Kant, are not solely rational beings. They also act on impulse. The basic motivation for action, she thought, arises instead from an ability to experience stuff as good or bad, which is a trait that animals share. If we, as humans, were to claim rights to a dog’s mind and body in the way we claim rights to our yard, we would be exercising arbitrary power, and arbitrary power is what Kant seeks to avoid. So, by his principles, animals must have freedom—that is, rights—over their bodies.

This view doesn’t require animals to weigh in for abstract qualities such as intelligence, consciousness, or sentience. Strictly, it doesn’t even command us never to eat poached eggs or venison. It extends Enlightenment values—a right to choice in life, to individual freedom over tyranny—to creatures that may be in our custody. Let those chickens range! it says. Give salmon a chance to outsmart the net in the open ocean, instead of living an aquacultural-chattel life. We cannot be sure whether the chickens and the fish will care, but for us, the humans, these standards are key to avoiding tyrannical behavior.

Robots seem to fall beyond such humanism, since they lack bodily freedom. (Your self-driving car can’t decide on its own to take off for the beach.) But leaps in machine learning, by which artificial intelligences are programmed to teach themselves, have started pushing at that premise. Will the robots ever be due rights? John Markoff, a Times technology reporter, raises this question in “Machines of Loving Grace” (Ecco). The matter is charged, in part because robots’ minds, unlike animals’, are made in the human image; they have a potential to challenge and to beat us at our game. Markoff elaborates a common fear that robots will smother the middle class: “Technology will not be a fount of economic growth, but will instead pose a risk to all routinized and skill-based jobs that require the ability to perform diverse kinds of ‘cognitive’ labor.” Don’t just worry about the robots obviating your job on the assembly line, in other words; worry about them surpassing your expertise at the examination table or on the brokerage floor. No wall will guard U.S. jobs from the big encroachment of the coming years. Robots are the fruit of American ingenuity, and they are at large, learning everything we know.

That future urges us to get our moral goals in order now. A robot insurgency is unlikely to take place as a battle of truehearted humans against hordes of evil machines. It will probably happen in a manner already begun: by a symbiosis with cheap, empowering intelligences that we welcome into daily life. Phones today augment our memories; integrated chatbots spare us customer-service on-hold music; apps let us chase Pokémon across the earth. Cyborg experience is here, and it hurts us not by being cruel but by making us take note of limits in ourselves.

The classic problem in the programming of self-driving cars concerns accident avoidance. What should a vehicle do if it must choose between swerving into a crowd of ten people or slamming into a wall, killing its owner? The quandary is not just ethical but commercial (would you buy a car programmed to kill you under certain circumstances?), and it holds a mirror to the harsh decisions we, as humans, make but like to overlook. The horrifying edge of A.I. is not really HAL 9000, the rogue machine that doesn’t wish to be turned off. It is the ethically calculating car, the military drone: robots that do precisely what we want, and mechanize our moral behavior as a result.

A fashionable approach in the academic humanities right now is “posthumanism,” which seeks to avoid the premise—popular during the Enlightenment, but questionable based on present knowledge—that there’s something magical in humanness. A few posthumanists, such as N. Katherine Hayles, have turned to cyborgs, whether mainstream (think wearable tech) or extreme, to challenge old ideas about mind and body being the full package of you. Others, such as Cary Wolfe, have pointed out that prosthesis, adopting what’s external, can be a part of animal life, too. Posthumanism ends its route, inevitably, in a place that much resembles humanism, or at least the humane. As people, we realize our full selves through appropriation; like most animals and robots, we approach maturity by taking on the habits of the world around us, and by wielding tools. The risks of that project are real. Harambe, born within a zoo, inhabited a world of human invention, and he died as a result. That this still haunts us is our species’ finest feature. That we honor ghosts more than the living is our worst. ♦



The GSAL Journal


Should robots have rights?


Rahul – Year 12 Student

Editor’s note: Year 12 student Rahul writes here in response to the fascinating philosophy question set for the New College of the Humanities essay competition, 2021. ‘Should robots have rights?’ – what do you think? CPD

“Man is a robot with defects.”  1   Emil Cioran (The Trouble With Being Born) 

From behind pristine glass doors in Silicon Valley to the depths of agricultural Japan 2, the development of robots in recent decades has had an extensive effect on everybody’s standard of living, for better or for worse. Their existence has become deeply embedded in and interwoven with ours, establishing a relationship unlike any before. Today, man and robot depend on each other, and once this reality is understood, the significance of the question above becomes clear. Given the rate at which technology is developing, the question may evoke fear or consternation in some people, but answering it is paramount to preventing future moral and legal grey areas concerning property and the essence of humanity.

In this essay, I will tackle the foundations of this question, scrutinizing it until we are satisfied with what its crucial words truly mean. I will then evaluate the arguments on either side and reach the justified conclusion that robots should, in fact, not have rights.

The definition of a ‘robot’ fluctuates among roboticists, but a recurring element is ‘autonomy’ or ‘semi-autonomy’, and it is here that the distinction between a machine and a robot is drawn. Essentially, robots are able to operate independently, sensing their environment and carrying out corresponding functions without explicit human control, whereas machines require some kind of human operation. ‘Semi-autonomous’ means they have a degree of, but not complete, self-government. This is a pivotal distinction because it whittles down our preconceived ideas of what robots may be. Many question whether robots have full autonomy at all: surely, if they need to be plugged in or programmed, they are not fully autonomous? Robots have a plethora of practical uses, from medicine, such as the ‘Da Vinci surgical robot’ 7, to the military and agriculture. It is also worth pointing out that not all robots are even semi-autonomous: telerobots are operated wirelessly from a distance by humans, which eradicates any modicum of autonomy, yet they are still classed as robots. I use this example to show how volatile a definition can be, how unfeasible it is to confine these complex systems within a single term, and the irony in that: these machines are the epitome of ‘man-made’, yet we still struggle to provide a universal definition of them.

The three constituent features of a robot that I find most consistent are semi-autonomy, being the product of a complex man-made system, and the ability to carry out work and jobs efficiently.

The Oxford Dictionary defines a right as ‘a moral or legal entitlement to have or do something’, but this thin definition rests on a maelstrom of ethical, legal and even religious disputes and ambiguities. One key definitional issue I would like to highlight is the contrast between natural rights (universal rights that are intrinsic to human life and derived from ‘human nature or the edicts of god’ 3) and legal rights (rights based on society’s customs or statutes). Whether robots have natural rights is effectively what this debate explores, because legal rights are a choice we as a society make after practical and ethical evaluation, whereas natural rights are intrinsic, and who should possess them is perennially contested. It is also worth noting that, whilst some may believe robots already have intrinsic rights, such as copyright over software or trade rights over machines, these are statutory rights of the creator, not of the object 4; the technology itself has had no rights bestowed upon it by God. A common assumption is that, for something to have rights, it must also have responsibilities and liabilities, things which robots cannot possess if they are only semi-autonomous. Furthermore, the definition does not confine a ‘right’ to animate beings, so we can extrapolate that rights may apply to the deceased and even to objects. A court in northern India granted the river Ganges the ‘status of a living institution’, meaning it would have the same rights, duties and liberties as a human being (so B polluting the Ganges would be legally equivalent to B physically harming C). 6 As this example illustrates, human rights are, to the surprise of many, only one branch of legal rights in general.

The real question is this: whilst human rights are directly applicable to humans, can the other types of legal rights, such as contractual, equality and economic rights, apply to inanimate objects in their own right, or must they always trace back to some original individual?

In an article, Andrew Sherman predicts that ‘By the year 2025, robots…are predicted to perform half of all productive functions in the workplace’ 8. This is an interesting prediction because, whilst robotic rights may seem absurd, it appears at first to highlight their benefits. For instance, if a manager fails to provide robots with a safe, regulated working environment, there is a real prospect of the robots being damaged. This may not seem a pressing issue (we can always purchase more robots), but from an environmental perspective we would be wasting masses of resources and energy manufacturing replacements, fuelling the ongoing problem of landfill and waste. Moreover, from a moral perspective, faulty products could pose a threat to users and cause them serious injury. So it might seem sensible to concede that robots deserve ‘workers’ rights’, especially if they will perform half of all productive functions in the workplace. Delving deeper, however, the situation is not as elementary as the arguments for robotic rights suggest. It is the intrinsic rights of nature we seek to protect with the environmental argument, and the individual human rights of the consumer we seek to protect with the moral argument. Disregard for working environments is an offence subject to sanction, but only to protect the rights of these larger bodies, which we have already established hold natural rights. Referring back to our definition of ‘rights’, it does not follow that the robots themselves deserve rights: they are merely instrumental in satisfying the entitlements of others. They have no isolated moral or legal entitlements of their own, because they have no emotions and lack desires.

Assigning rights to robots may even seem immoral to some. The very act of granting them rights would have a multiplier effect reverberating through our justice system: more rights lead to more laws, which lead to more court cases. This places a financial strain on the legal system, meaning either that we divert funds from other parts of the economy, such as healthcare and education, to pay for the courts, or that the overall quality of justice diminishes over time. Adopting an act-utilitarian approach (which aims to maximise overall ‘goodness’ through actions) exposes the immorality in three ways:

  • We derive no benefit from robots having isolated, individual rights; 
  • Robots themselves don’t count towards the ‘Hedonic Calculus’ (which is used to measure resultant ‘goodness’) due to their lack of sentience, so they also derive no benefit from robotic rights;
  • Humans will collectively suffer from an overburdened justice system, owing to the mass of new law injected into it to protect the rights of robots.

Therefore an act utilitarian like Jeremy Bentham would argue that giving robots rights is not only impractical but also immoral, since it spirals into an overall decrease in utility for the community through injustice or through the deprivation of healthcare, education and other vital parts of society.
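
To make this act-utilitarian accounting concrete, here is a minimal, purely illustrative sketch in Python. The stakeholders, the zero and negative utility values, and the "counts in the calculus" flag are all invented for illustration; Bentham's actual hedonic calculus is far richer (weighing intensity, duration, certainty, extent, and so on).

```python
# Toy illustration of the act-utilitarian argument above.
# All entries are hypothetical and chosen only to mirror the three bullet points.

def net_utility(effects):
    """Sum the utility changes of every stakeholder that counts in the calculus."""
    return sum(delta for counts_in_calculus, delta in effects if counts_in_calculus)

effects_of_granting_robot_rights = [
    (True, 0.0),    # humans gain no direct benefit from robots holding isolated rights
    (False, 0.0),   # robots lack sentience, so they do not enter the calculus at all
    (True, -5.0),   # humans bear the cost of an overloaded, underfunded justice system
]

print(net_utility(effects_of_granting_robot_rights))  # -5.0: a net loss of utility
```

On this toy accounting, the only parties whose welfare changes are humans, and the change is for the worse, which is precisely the point of the argument.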

The crux of this argument boils down to two fundamental points. Firstly, robots do not have natural, intrinsic rights. Abrahamic religions would contend that this is because robots have no divine causation: like basketballs, they are man-made objects in their truest essence. Evolutionists would reply that, whilst humans possess DNA that theoretically stretches back to the earliest organisms, robots have no natural, inbuilt claim to existence. Secondly, robots cannot have individual legal rights. They have no capacity for emotions or desires and therefore nothing to gain or lose in any given situation; and since, as many would argue, they are only autonomous to a degree, it follows that they can have neither entitlements nor responsibilities. The environment offers a similar example: we have laws to protect it, but not because of intrinsic, isolated rights of the environment itself; rather, by protecting the environment we indirectly protect the natural rights of humans and animals.

After deconstructing the question to its essence and appraising the outlooks on either side of the argument, I firmly believe not only that robots do not qualify for rights, but that it would be immoral to grant them, since doing so would infringe on some of our intrinsic human rights. However, as science and technology advance, we may begin to replicate entire genomes, ‘playing God’ and constructing ‘human beings’. It is there that the fundamental distinctions between humans and robots which I have established begin to melt, and we must be prepared to navigate the roaring seas of morality, ethics and philosophy in search of a new answer to this question.

  • https://www.imdb.com/name/nm1194127/bio  
  • https://www.theguardian.com/environment/2016/feb/01/japanese-firm-to-open-worlds-first-robot-run-farm  
  • https://en.wikipedia.org/wiki/Rights#Definitional_issues  
  • https://www.gov.uk/guidance/ownership-of-copyright-works  
  • https://en.wikipedia.org/wiki/Sophia_(robot)  
  • https://www.lawnow.org/do-natural-objects-have-legal-rights/  
  • https://online-engineering.case.edu/blog/medical-robots-making-a-difference  
  • https://diginomica.com/robot-rights-a-legal-necessity-or-ethical-absurdity  

