Your Planet Sustainable? Your Tribe Harmonious? Your Life Vibrant?
Future Proof Ideas since 2005, by Erwin van Lun

Why we can’t solve the ethics of robots

Discussions questioning the ethics of robots appear regularly. How can we ensure that robots won't hurt us? The answer is simple: we can't.

We want to develop robots that can move through society unaided, but do we want robots that can defend themselves? Is a robot allowed to react to a punch? To a dog attack? A knife stab? To being pushed over by a child? Probably that's wise, because a robot will have an owner who paid a neat sum of money for it, and that owner won't want their robot to stand defenceless in the street. From every perspective, we probably want robots to avoid damage to themselves, to avoid confrontations with humans, and to avoid confrontations with animals: by running away, by making subtle, fast movements or, in the case of animals, by restraining the attacker in a way that keeps the robot unharmed. For that, the robot needs a hefty set of physical skills and a highly intelligent ethical system.

Therein lies the problem: robots are by definition connected to each other and to people. A robot can be reprogrammed. Once a robot has been bought, you can take it apart yourself or connect to it wirelessly and tweak its intelligence. The same physical skills can now be used offensively. How is a human to tell the difference? Yesterday the robot was friendly, today it is extremely hostile. That makes for an unreliable friend.

The solution for a peaceful society lies in the motive: a robot may be able to do harm, but why would it? If a robot knows that its fate, should it cause damage, is destruction, and 'survival' is its highest priority, it will do everything in its power to avoid causing damage and thereby avoid being damaged itself.
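The motive argument above can be read as a simple decision rule. A minimal sketch, assuming a hypothetical robot that scores each possible action by its estimated risk of triggering the robot's own destruction (all names and numbers here are illustrative, not any real robotics API):

```python
# Illustrative sketch: if 'survival' is the robot's highest priority and
# causing damage leads to destruction, the harm-avoiding action wins by
# construction. All action names and risk values are hypothetical.

def choose_action(actions):
    """Pick the action with the lowest estimated destruction risk.

    `actions` maps an action name to the estimated probability that
    performing it leads to the robot's destruction (e.g. as a penalty
    imposed for harming a human).
    """
    return min(actions, key=actions.get)

# A robot confronted by an attacker: striking back harms a human and so
# carries a near-certain destruction penalty; fleeing does not.
options = {
    "strike_back": 0.95,      # harming a human => robot is destroyed
    "restrain_gently": 0.30,  # risk of being seen as aggression
    "run_away": 0.10,
}
print(choose_action(options))  # prints "run_away"
```

The point of the sketch is only that no separate ethics module is needed for this behaviour: as long as destruction is the certain consequence of causing damage, self-preservation alone steers the robot away from harming anyone.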

Then there is the element of one-time destruction: a kind of suicide robot, programmed by humans to do as much damage as possible in one go. Or a whole army of such robots. The essence here is that it is programmed by humans. But in the peaceful world that is being created, we are structurally working on eradicating the motives of the people who do such things, and precisely with the media technology I write about so often. And so the circle is complete.

Comments

Reaction by Boris Toet on 30 December 2008 16:29

I think if we stick to some basic rules it will all be OK:

1/ Don't let ethical issues be programmed by human beings
2/ Don't give the robot a favourite soccer club
3/ Do not run any Microsoft software on the robot
