Isaac Asimov’s First Law of Robotics states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. Asimov’s Laws of Robotics first appeared in his 1942 short story ‘Runaround,’ and they surface not only in other works of science fiction but also in reality, as the human race embarks on integrating artificial intelligence into everyday life.

A roboticist at the Bristol Robotics Laboratory in the UK, fascinated by Asimov’s First Law, recently conducted an experiment that placed a robot in an ethical trap: what if a robot programmed to save human lives found itself in a situation where multiple lives were at risk but it could save only one? The results were quite astounding.

Alan Winfield and his colleagues designed an experiment in which a robot had to save other automatons acting as proxies for humans. The proxies were headed for danger (falling into a hole), and the robot was programmed to prevent this from happening.

When a single proxy headed toward the hole, the robot successfully pushed it out of the way. When two proxies approached the hole at once, the robot sometimes managed to save only one, and a couple of times it even saved both. But in 14 of the 33 trials, the robot was so paralyzed by the decision over which one to save that it did nothing, and both proxies fell into the hole.
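This is not Winfield’s actual code, but the failure mode can be illustrated with a toy one-dimensional sketch. The assumptions here are mine: a greedy rescue rule that targets whichever proxy is closest to the hole, and a robot that refuses to commit when two proxies look equally urgent. With one proxy the robot intercepts it; with two symmetric proxies it dithers and loses both.

```python
# Toy sketch of the ethical-trap experiment (not Winfield's implementation).
# Hole sits at position `hole` on a line; proxies drift toward it each step;
# the robot moves faster and "saves" a proxy by getting within one stride of it.

def choose_target(proxies, hole, eps=0.5):
    """Pick the proxy closest to the hole, or None when two are
    so nearly equal in danger that the robot cannot commit."""
    ranked = sorted(proxies, key=lambda p: abs(p - hole))
    if len(ranked) > 1 and abs(abs(ranked[0] - hole) - abs(ranked[1] - hole)) < eps:
        return None  # dithering: both look equally urgent
    return ranked[0]

def simulate(robot, proxies, hole=0.0, robot_speed=2.0, proxy_speed=1.0, steps=30):
    """Run the scenario and return (number saved, number lost)."""
    proxies = list(proxies)
    saved, lost = [], []
    for _ in range(steps):
        target = choose_target(proxies, hole)
        if target is not None:
            robot += robot_speed if target > robot else -robot_speed
        survivors = []
        for p in proxies:
            if p == target and abs(p - robot) <= robot_speed:
                saved.append(p)          # intercepted: pushed clear of the hole
                continue
            p += proxy_speed if p < hole else -proxy_speed  # drift toward hole
            if abs(p - hole) < 1e-9:
                lost.append(p)           # fell into the hole
            else:
                survivors.append(p)
        proxies = survivors
        if not proxies:
            break
    return len(saved), len(lost)

print(simulate(20.0, [10.0]))          # one proxy: saved
print(simulate(20.0, [10.0, -10.0]))   # symmetric pair: robot freezes, both lost
print(simulate(20.0, [10.0, -5.0]))    # asymmetric pair: saves one, loses one
```

The deadlock is baked in by the tie-breaking rule, which is the point: a robot that keeps re-evaluating which life is more urgent, without a mechanism to commit, can end up saving no one.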

As society places more trust in artificial intelligence, this experiment certainly makes one wonder whether robots can even be equipped with an ethical code. In an age where autonomous cars may become a reality, what if a vehicle had to choose between saving its passengers and saving other motorists or pedestrians on the road? The same questions arise in the world of military combat and the potential for robot soldiers. Will these robots, unfazed by emotion, know when to engage in combat and when not to?

Winfield presented his findings on September 2 at the Towards Autonomous Robotic Systems meeting in Birmingham, UK. While Winfield describes his robot as an “ethical zombie,” when asked whether he believes robots will ever be able to make ethical choices, he responded, “My answer is: I have no idea.”

Source: New Scientist