Sunday, September 28, 2014

Should Robots Be Used In War?

Artificial intelligence has taken off since the start of the 2000s.  With faster processors and computing power growing at alarming rates, the prospect of robots being used more widely is increasingly achievable.  Of course, as with most technology, robots and artificial intelligence are inherently political.  Who will decide the multitude of moral questions these robots will have to face?  A human will have to make those decisions for all robots, since robots cannot weigh decisions the way humans can.  Our ability to feel is what drives our decisions.  For example, one of the issues the UN plans to discuss this year is the use of robots in the military.  If a robot is programmed to kill, what is to stop it from taking out an entire country so that the robot's manufacturer can win the war?  According to Isaac Asimov's laws, first introduced in 1942, "A robot may not injure a human being or, through inaction, allow a human being to come to harm."  How can we expect to use robots for military purposes yet expect those robots to follow the Three Laws of Robotics?  Eventually, war will be waged between the robots of one country and the robots of another.  Since a war in which robots only destroy other robots settles nothing, we would eventually resort to other means of fighting, such as biological and nuclear weapons.  You are not winning a war if the only casualties are robotic ones.
As of now, every piece of technology in use, whether in the military or not, is monitored by a human.  According to David Akerson, a lawyer and member of the International Committee for Robot Arms Control, "Right now everything we have is remotely controlled and there's always a human in the loop…We're heading the direction to give the decision to kill an algorithm" (Garling).  How can we trust an algorithm to decide whether a human being should be killed?  People say that because robots don't feel empathy, rage, or revenge, they would be more reliable at killing.  I disagree.  I think every human life is different, and no algorithm can decide whether or not someone should die.  Would an algorithm be able to read a person's emotions and know whether to kill him/her or question him/her for information?  There are too many factors to consider; no single algorithm can teach a hunk of metal whether or not to kill.
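To make Akerson's distinction concrete, here is a minimal sketch in Python of the structural difference between a human-in-the-loop design and a fully autonomous one.  Every name in it (Target, request_human_authorization, the 0.8 threshold) is a hypothetical illustration, not any real weapons-control interface.

```python
# Toy sketch of "a human in the loop" vs. "the decision to kill an
# algorithm."  All names and numbers here are hypothetical.

from dataclasses import dataclass

@dataclass
class Target:
    identifier: str
    threat_score: float  # output of some classifier, between 0.0 and 1.0

def request_human_authorization(target: Target) -> bool:
    """A human operator reviews the nomination and makes the final call."""
    answer = input(f"Engage {target.identifier} "
                   f"(threat score {target.threat_score:.2f})? [y/N] ")
    return answer.strip().lower() == "y"

def engage_human_in_the_loop(target: Target) -> bool:
    # The algorithm may nominate a target, but only a person authorizes it.
    return request_human_authorization(target)

def engage_fully_autonomous(target: Target) -> bool:
    # The scenario the article warns about: a threshold comparison,
    # not a moral judgment, decides whether someone dies.
    return target.threat_score > 0.8
```

The unsettling part is how small the difference is in code: remove one function call and the human disappears from the loop entirely.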
I think that we should research robotics and artificial intelligence, but I do not think robots should ever be made for military use.  I think that to live a safe life, robots should be made to follow Asimov's Three Laws of Robotics (listed below).  These three laws may seem simple in theory, but actually programming a robot to understand and abide by them has proven to be one of the most complex problems in the field of artificial intelligence; the sketch after the list hints at why.



1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
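To see why this is so hard, consider a deliberately naive Python sketch of what "programming the First Law" might look like.  Everything here is hypothetical; the point is how much hard, unsolved work hides inside the predicates the rule takes for granted.

```python
# A deliberately naive "First Law" checker.  Every function name is
# hypothetical.  The rule itself is two if-statements; the predicates
# it depends on are open problems in artificial intelligence.

def would_injure_a_human(action, world_state) -> bool:
    # Unsolved in general: requires recognizing humans reliably and
    # predicting the physical consequences of an action in an
    # open-ended environment.
    raise NotImplementedError("needs perception and long-range prediction")

def some_human_comes_to_harm(world_state) -> bool:
    # Also unsolved: "harm" is a moral concept, not a sensor reading,
    # and "through inaction" means reasoning about every action NOT taken.
    raise NotImplementedError("needs a formal, computable notion of harm")

def violates_first_law(action, world_state) -> bool:
    # "A robot may not injure a human being..."
    if action is not None and would_injure_a_human(action, world_state):
        return True
    # "...or, through inaction, allow a human being to come to harm."
    if action is None and some_human_comes_to_harm(world_state):
        return True
    return False
```

The two lines of rule logic are trivial; recognizing humans, predicting the consequences of an action, and defining "harm" are the actual complexity the paragraph above refers to.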
