“We Robot” Rights for Robots
March 2, 2012
A colleague forwarded the following conference announcement: We Robot 2012: Setting the Agenda. I’m never quite sure anymore, but I believe this announcement describes an actual conference, actually going to happen (as opposed to science fiction, where such things happened in the past century). The presentation titles look even more intriguing than those for my upcoming Microbiology meeting. For example:
- How Should The Law Think About Robots?
- Confronting Automated Law Enforcement [will Sharer nonviolence work, I wonder]
- Extending Legal Rights to Robots Based on Anthropomorphism [hmm, what about octopods?]
- Delegation, Relinquishment and Responsibility: The Prospect of Robot Experts [My favorite!]
- Don’t Robots Have Politics? [Sure, who makes all those robocalls?]
- Sex, Robots and Roboticization of Consent [Sigh. Where’s Woody Allen?]
- Military Robotics [Hunter-seekers, coming soon to your neighborhood]
- Liar Liar Pants on Fire! Examining the Constitutionality of Enhanced Robo-Interrogation [See? I told you they’re teaching computers to lie.]
Perhaps the most compelling–and unsettling–topic on the list is:
- Robots Working with Autism.
In fact, there is a lot of promising work using robots to help autistic children. But what does it say about us humans, if there exist beings whom autistic children prefer to us? Who’s going to judge humanity in the future?
12 Comments
First, looking at the robots themselves – some of the issues might arise just from how they’re constructed. For example, there’s no reason to assume that all robots will be humanoid (vs. a wrung-out sponge or a plate of spaghetti), a particular size (visible), encased in a metallic/ceramic body (vs. squishy biologics) or ‘individual units’ (vs. part of a hive/’cloud’). Because ‘natural’ entities’ bodily characteristics limit/define their capabilities, I’m assuming the same would apply to robots. I’ve no idea which types of robots are easier to manufacture, but I do feel that this would affect the development/usage of the robots we’d encounter first on a large scale, which in turn would shape the future of other robot categories/classes in the marketplace.
Second, I’m not that concerned about an autistic child who forms a greater attachment to a computer (or, the more commonplace occurrence at present, to a dog). I’m more concerned about the other direction of attachment: a single computer whose whole world literally revolves around one child (who will eventually grow up into an adult). I have a pretty good understanding of a human’s or dog’s capabilities, but not of the capabilities or limits of a robot/computer that’s sophisticated enough to interface with a child.
Apart from robots, there’s also the human/machine interface technology …
So far there’s nothing that I (a non-scientist) am aware of that discusses the biological/biochemical side effects of such a union. With biochemical agents (drugs), we now know to expect that any drug will have a much wider range of action (effects) than the one effect we specifically want. What happens when a machine, because it is linked directly to a brain, consistently overrides a major organ or the autonomic nervous system? In SF scenarios this usually just results in the hero/heroine being flat-out exhausted but fully recovered within a couple of weeks, without any long-term harm done. (SF has been pretty good at looking at potential psychological effects, though.) But if you look at transplant patients on large steroid doses for extended periods, you sometimes see serious acute and long-term renal, liver or cardiac damage, occasionally to the point of organ failure. Or another scenario: if the machine interface spoke directly to the reward and hunger centers in the brain, perhaps a lot of overweight people would finally lose the weight they’ve been struggling to lose – or you could see an anorexia epidemic. Same with addictions … (What if the machine/human interface itself becomes an addiction? Is it reversible without withdrawal symptoms?)
Whether robot or machine/human interface, we’ll need to ensure that there’s a safety override switch to put everything back on manual control, as well as understand the range of biochemical/physiological effects of such a union.
In two hundred years, how easily will we be able to define humans and robots as separate types of beings?
I suspect it is a little premature to consider robots as anything more than machines, but it is well worth thinking about what we should do when they start to become sentient. The issue is much the same as with animals.
What doesn’t appear to be included is avatars. I can imagine very human-like visual simulations that could be used to induce all sorts of human responses. If the avatars act of their own accord, and especially with some sentience, how should we treat them?
To me the bottom line is going to be sentience. But even this is tricky, as we protect human fetuses (more so in Europe than the US) and definitely babies. If a robot could develop a mind like a human’s, should its earlier, baby-like state be protected, even if it is more machine-like?
And suppose we build really good minds without consciousness. How do we treat that sort of mind in a machine?
I just hope that the conference doesn’t get swamped with philosophers, but instead draws other disciplines that might offer a better handle on these issues.
I have mixed feelings about this. To be honest, many of my feelings come from dealing with a “rights dispute” (basically, a bunch of mountain bikers want the right to ride freely through an area full of endangered species, without taking responsibility for the damage they cause). My general feeling right now is that there are a few jerks and a lot of ignorant people out there who want to go with the flow, and so the jerks are driving the agenda.
Personally, I think that if we’re talking about rights, we should be talking about responsibilities. In particular, we should be talking about the responsibilities humans are trying to shrug off onto machines. But we should also be talking about what giving responsibility to a machine means.
There are many places where humans are cutting themselves out of the decision loop, for various reasons that boil down to “it’s difficult, dangerous, boring, and/or emotionally uncomfortable to be there” (which covers military drones, police enforcement, dealing with repetitive behaviors, or giving (sexual) pleasure to those society generally regards as losers). In some of these cases (such as defusing IEDs, or taking a negotiator’s cell phone into a hostage situation), robots make sense. In others, I think humans should be taking responsibility, even if we don’t want to. I’m also pretty sure my argument won’t go anywhere, at least in a tech-addicted crowd.
Two other points: one is that our society seems to run on addictions (whether to oil, the online world, or drugs of all sorts), but we have this amazing idea that good ideas, well presented, will always carry the day. If logical arguments were that powerful, addiction would not be a problem. Instead, beating an addiction is a messy, painful, often unending process. This plays out on all levels. For example, look at scientific arguments about climate change, and the response of a society that’s addicted to fossil fuels. Any surprise that many people deny there’s a problem, even as they buy a Prius and turn the thermostat down?
Now, we’re getting addicted to robots (in the military, on the manufacturing line, and possibly in sex and therapy), and we’re trying to deal with this logically. What could possibly go wrong?
The other point is that robot “life” isn’t the same as human life, any more than a corporate “person” is the same as a human person. You can merge corporations, subdivide them, take them over, and so forth, none of which you can do to a person. Similarly, robots can be taken down to or built out of bins of parts, upgraded, have subsystems owned and operated by different entities, have their memories erased and restored, and so forth. None of these are things humans can do.
This is analogous to the differences among biological kingdoms and phyla: you can cut a fungal mycelium in half and get two live mycelia, but cut a human in half and you get one dead human. This is part of the reason why we’re classed in different kingdoms: what counts as an individual is different in each case.
Robots are a different form of “life” than humans, and the legal theories that apply to them will have to be different from those that apply to humans. Even such basic things as “what is an individual robot” need to be considered.
Although getting a robot to do something for us might put us at one remove from the decision-making for that action, we’ll still have ultimate responsibility for it. We’ll definitely need guidelines and/or best practices, especially regarding user responsibility and legal and financial liability. (In our society, if you frame something as a potential legal liability, people are more inclined to take it seriously.)
On the whole, though, I’m in favor of robots – possibly because of having read Asimov’s robot stories as a preteen, but more so because I feel that robots can help where humans aren’t able to because of physical, emotional or time limitations. How far we choose to develop robots is our responsibility.
Below is a brief description of a new computer tutor which could benefit many students – always available, infinitely patient, and able to tell better than the student whether there’s a comprehension problem.
http://www.sciencedaily.com/releases/2012/03/120302132546.htm
… “The new technology, which matches the interaction of human tutors, not only offers tremendous learning possibilities for students, but also redefines human-computer interaction. ‘AutoTutor’ and ‘Affective AutoTutor’ can gauge the student’s level of knowledge by asking probing questions; analyzing the student’s responses to those questions; proactively identifying and correcting misconceptions; responding to the student’s own questions, gripes and comments; and even sensing a student’s frustration or boredom through facial expression and body posture and dynamically changing its strategies to help the student conquer those negative emotions.” …
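Just to make that description concrete, here is a minimal sketch of what such an adaptive tutoring loop might look like. This is purely illustrative Python; every class, method name, threshold and number here is made up for the example, and none of it is drawn from the actual AutoTutor system:

```python
# Hypothetical sketch of an adaptive tutoring loop in the spirit of the
# AutoTutor description above. All names and numbers are illustrative,
# not the real AutoTutor API.

from dataclasses import dataclass

@dataclass
class StudentModel:
    """Running estimate of what the student knows and feels."""
    knowledge: float = 0.5      # 0.0 = novice, 1.0 = mastery
    frustration: float = 0.0    # the real system infers this from face/posture

    def update(self, answer_correct: bool, signs_of_frustration: bool) -> None:
        # Nudge the knowledge estimate toward the latest evidence.
        target = 1.0 if answer_correct else 0.0
        self.knowledge += 0.2 * (target - self.knowledge)
        # Frustration decays when things go well, builds when they don't.
        self.frustration = 0.5 * self.frustration + (0.5 if signs_of_frustration else 0.0)

def pick_question(model: StudentModel, questions: list[tuple[str, float]]) -> str:
    """Choose the question whose difficulty best matches estimated knowledge."""
    text, _ = min(questions, key=lambda q: abs(q[1] - model.knowledge))
    return text

def tutor_turn(model: StudentModel, answer_correct: bool, frustrated: bool) -> str:
    """One turn of the loop: update the model, then pick a teaching strategy."""
    model.update(answer_correct, frustrated)
    if model.frustration > 0.6:
        return "encourage"              # back off and rebuild confidence
    if not answer_correct:
        return "correct_misconception"
    return "probe_deeper"

# Example: a student answers wrongly twice while looking frustrated.
model = StudentModel()
print(tutor_turn(model, answer_correct=False, frustrated=True))  # -> correct_misconception
print(tutor_turn(model, answer_correct=False, frustrated=True))  # -> encourage (frustration built up)

questions = [("What is a robot?", 0.2),
             ("Define sentience.", 0.5),
             ("Critique Asimov's Laws.", 0.9)]
print(pick_question(model, questions))  # -> the easiest question, since the estimate dropped
```

The real system presumably does far more (natural-language analysis of answers, vision-based affect detection), but the loop structure the quote describes – estimate, select, respond, adapt – is what the sketch tries to capture.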
AIUI autism is often associated with seemingly obsessive/compulsive behaviours, for example the need to always do something in exactly the same way. This is just the sort of thing that a computer-controlled system is always going to be better at than a human.
A lot of great ideas here.
Are humans giving up too much responsibility for our humanity? Letting robots do our dirty work, like assassinating people in the Middle East?
And communicating with autistic people–shouldn’t our own humanity be able to deal with that better than a machine?
If a tutor can teach better than a human, on one level, that’s great. I use a tutor program to teach Mendelian genetics, and students learn in a week what used to take six weeks. But then we do other, more interesting things in the classroom. But at Kenyon they’re paying fifty grand per year. Outside Kenyon, the choice may be different – either to eliminate the teacher or to have those “massive open online courses” (MOOCs).
You’d think humans should communicate with other humans better than machines do. I was referring to specific issues with autism. For a better/more knowledgeable treatment of autism, I can’t do better than to suggest Elizabeth Moon’s “The Speed of Dark”, where she draws on her own experiences with an autistic son and makes the main character autistic.
I hope we get to a point where ONLY robots are allowed to assassinate. Or even serve warrants, or stop speeders. We then may get to a time where no human is comfortable with performing assassination. And if a property owner wants to blast the warrant server with both barrels to assert his understanding of his rights, why should that unfortunate paper carrier be a human? It is as if the State is holding that human officer hostage by placing him in front of an irate citizen.
Re: “And communicating with autistic people–shouldn’t our own humanity be able to deal with that better than a machine?”
‘Communicating’ and ‘caring’ are not equivalent/synonyms. Not all humans are equally gifted or effective communicators. If we really care for someone, then we should use the most effective tools/methods available – even if this means admitting to ourselves that we personally don’t have that capability.
Elizabeth Moon’s Speed of Dark was a very moving and informative book — thoroughly enjoyed it.
“where ONLY robots are allowed to assassinate”
That’s where we’re heading now. But how is that better?
Look at all the tiny drones coming out now, able to dart through window cracks and “hang” bat-like beneath tables. And assassinate? Why is that better?
I believe that human assassins are not born, they are made. Examples are military boot camp, SWAT training, and gang initiations. In a future where no humans are trained as murderers, human-on-human murder will be very rare. (Aside: it is already becoming rarer per capita worldwide.) I think it is possible that a greater trust will exist between people when it becomes nearly inconceivable that any human would try to kill another. We should stop training people to murder. But occasionally some murdering needs to be done, like stopping an insane serial killer. So some ‘member’ of society will have to be an effective killer. When we make that member a human, we damage that person, and we make everyone else wonder a bit whether the person next to them is a killer. Thus a few robot killers is the answer, and an improvement over the current system, which trains a large percentage of society (mostly men) to be killers. The movie Demolition Man (1993) captures this concept a bit, but unfortunately in it human-on-human violence saves the day!