There is no denying it: the man-machine reality is no longer confined to the books of science fiction or the cinematic realm of the likes of Star Wars. Robots, in all their subsidiary forms, be they androids or simple bots, are embedded in our everyday industrial lives. What still seems far from reality, however, is the creation of, and coexistence with, humanoid androids like the one in Terminator, along with the ethical debate over whether one should ever be created.
Back in the 1940s, American writer Isaac Asimov developed the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These three laws first appeared in his 1942 science fiction short story "Runaround." Pioneering as they may be, Asimov's Three Laws of Robotics pose more problems for roboticists than they solve.
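The Three Laws form a strict priority ordering: the First Law overrides the Second, which overrides the Third. That ordering can be sketched as a simple rule check. The `Action` fields and example action names below are illustrative assumptions, not anything from Asimov's text:

```python
# A minimal sketch of Asimov's Three Laws as an ordered rule check.
# The Action attributes and names are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False       # would directly injure a human
    allows_harm: bool = False       # inaction that lets a human come to harm
    ordered_by_human: bool = False  # a human commanded this action
    self_destructive: bool = False  # would damage the robot itself

def permitted(action: Action) -> bool:
    """Return True if the action passes the Three Laws, checked in priority order."""
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human or action.allows_harm:
        return False
    # Second Law: obey human orders (any conflict with the First Law
    # has already been ruled out by the check above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.self_destructive

print(permitted(Action("fetch coffee", ordered_by_human=True)))                     # True
print(permitted(Action("strike person", harms_human=True, ordered_by_human=True)))  # False
```

Even this toy version hints at the real difficulty: everything hangs on the robot correctly classifying an action as harmful in the first place, which is precisely the judgment the Laws assume but do not provide.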
Ethical issues will only grow more pressing as more advanced robots come into the picture.
In The Ethical Landscape of Robotics, Noel Sharkey argues that "the cognitive capabilities of robots do not match that of humans, and thus, lethal robots are unethical as they may make mistakes more easily than humans." Ronald Arkin, by contrast, believes that "although an unmanned system will not be able to perfectly behave on the battlefield, it can perform more ethically than human beings."
Ethical debates now concern not only the production of robots but also their use, for example as military aids. Many new uses for military robots are being developed by applying other technologies to robotics. The U.S. military alone expects a fifth of its combat units to be fully automated by 2020.