Robot autonomy has long fascinated science fiction enthusiasts and professional roboticists alike. But is it really wise for roboticists to push robots beyond the limits of their programming and grant them more autonomy than is technologically feasible?

Or will The Matrix turn out to be less fiction than we think?

Asimov's Three Laws of Robotics were meant to ensure that robots would remain safe and useful tools for humans, but some modern roboticists now argue that the rules don't mesh with current technology, and they propose a new set of robotics laws.

Asimov's original three Laws of Robotics:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such a protection does not conflict with the First or Second Law.

According to David Woods, a systems engineer at Ohio State University, and Robin Murphy, a rescue robotics expert at Texas A&M University, Asimov's Laws function better as a literary device than as an ethical guideline when dealing with robots that are not yet self-aware. They believe that engineers and programmers need a set of rules to govern their robots and the way they deploy them, both to ensure human safety and to allow robots to operate with minimal human oversight:

Their first revised law says that humans may not deploy robots without a work system that meets the highest legal and professional standards of safety and ethics. Their second revised law requires robots to respond to humans as appropriate for their roles, and assumes that robots are designed to respond to certain orders from a limited number of humans.

The third revised law proposes that robots have enough autonomy to protect their own existence, as long as such protection does not conflict with the first two laws and allows for smooth transfer of control between human and robot. That means a Mars rover should automatically know not to drive off a cliff, unless human operators specifically tell it to do so.
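The three revised laws describe an ordered vetting procedure: each check takes priority over the ones after it. A minimal, purely illustrative sketch of that logic (all names here, such as `vet_command` and `AUTHORIZED_OPERATORS`, are hypothetical and not from the article) might look like this:

```python
# Illustrative sketch of the revised laws as an ordered command check
# for a hypothetical rover. Not an actual implementation from the article.

from dataclasses import dataclass


@dataclass
class Command:
    issuer: str                      # who sent the order
    action: str                      # e.g. "drive_toward_cliff"
    overrides_safety: bool = False   # explicit human override of self-protection


# Revised second law: only a limited set of humans may give orders.
AUTHORIZED_OPERATORS = {"mission_control"}


def vet_command(cmd: Command,
                work_system_certified: bool,
                endangers_robot: bool) -> bool:
    """Return True if the robot should execute the command."""
    # Revised first law: no deployment without a certified work system.
    if not work_system_certified:
        return False
    # Revised second law: respond only to authorized operators.
    if cmd.issuer not in AUTHORIZED_OPERATORS:
        return False
    # Revised third law: protect own existence, unless humans
    # explicitly take control and override that protection.
    if endangers_robot and not cmd.overrides_safety:
        return False
    return True


# The rover refuses to drive off a cliff on its own initiative...
print(vet_command(Command("mission_control", "drive_toward_cliff"),
                  work_system_certified=True, endangers_robot=True))
# ...but yields control when operators explicitly order it to.
print(vet_command(Command("mission_control", "drive_toward_cliff",
                          overrides_safety=True),
                  work_system_certified=True, endangers_robot=True))
```

The ordering of the checks is the point: safety of the overall work system is vetted before obedience, and obedience before self-preservation, mirroring the priority structure Woods and Murphy propose.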


Source: MSNBC