Future Proof Ideas since 2005, by Erwin van Lun

Killing robots

Protection device supplier Taser International (which also sells to consumers) is considering equipping robots with weapons. The American army has already been using the PackBot, made by iRobot, fitted with deadly weapons; a few hundred of these robots are currently active in Iraq and Afghanistan. The Taser plan goes a step further, however: in it, the police could use robots against civilians. Robots could, for example, give electric shocks to criminals until the police arrive (via ns). The question now is what happens if the robots make mistakes and people are accidentally wounded or even killed.

To what extent can robots be held responsible for their own actions? Do autonomous robots have their own responsibilities, rights, and duties? And shouldn't those be laid down in the constitution? Will we get something like a Global RobotID?

Or should we see robots as the children of grown-up parents, and make the parents responsible? But will it really be that simple to hold the producer of a robot responsible? With human parents there are two people involved, which makes things complicated enough as it is. But what if an entire company, with hundreds of employees, is responsible? Or the company together with all its suppliers?

And what happens if robots outlive people? Outlive companies? If they are autonomous, they can obtain their own power. And if something goes wrong, they can take their own steps to get themselves repaired. No humans need to be involved, and payment happens automatically: the robot, after all, has its own budget to take care of itself.

As the largest part of a robot's budget is spent on other robots, a real robot economy evolves, in which money circulates and robots service each other. They will turn to other robots more and more often, as those can help them better than humans can. Is there anything left for humans to do in the future?

Can a robot also spend its budget on developing itself? On experimenting? On learning new things? Even on building new robots that live longer? Need less power? Make less noise? Exactly the things robots were designed for.

But what if they then make mistakes? What if they don't program their child robots optimally, and those child robots hurt people? Are they responsible? And how could they then be punished? More to the point: how will we, as a small, vulnerable species, be able to punish them at all? Or will they be forced to reprogram, so that from then on they only do the shopping for the elderly?

And what happens if a robot has to determine who is the good guy and who is the bad guy in a fight? And who decides what is good and what is bad? That is more of an ethical question, and one that is also culturally determined. If a good robot fights a bad human and you have to intervene, does the human always come first? That, in the end, might be the simplest rule.

But what is a human? Someone who is 100% flesh and blood? Or can he have prosthetic arms and legs? Artificial organs? Maybe even an artificial body? What will a human be in the future, if we can imitate everything? Is the human then in the brain? Or in the heart?

Enough material to think about. The fact is that we already allow robots in the virtual world, and that in a couple of decades the number of physical robots will increase drastically. That will change the world enormously.

comments

Reaction by jees on 28 December 2008 10:30

Robots are better at dealing with crises because of their lack of emotions.
