I met Eric Molinsky of the great Imaginary Worlds podcast at a Brooklyn gaming store this week. In his most recent episode, about "The Robot Uprising," his interview guests touch on how the "slaves who turn against their masters" idea carries particular triggers for Americans, given centuries of legal human chattel slavery and its legacy in civil rights, race relations, and the slavery that persists today.
How would a robot handle current real-life decision-making needs — like, say, Google's self-driving cars — when programmed to follow some moral/ethical code (say, Isaac Asimov's original Three Laws of Robotics, listed below)?
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
When confronted with a choice between hitting a pedestrian and hitting a light pole to avoid a collision, the car will likely choose the light pole, accepting risk to its own passenger. However, human drivers — who can easily ignore such "moral programming" — will soon learn to intuit these "passive driver" robot cars, cut them off in traffic, drive aggressively around them, and otherwise exploit the "privileged authority" associated with human operators.
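The priority implied by the Three Laws — minimize harm to humans first, then protect the machine itself — can be sketched as a simple ranking rule. This is a hypothetical illustration only: the option names and harm scores below are invented, and a real autonomous-vehicle planner is vastly more complex than a sort over two numbers.

```python
# Hypothetical sketch of a Three Laws-style maneuver chooser.
# "human_harm" and "self_harm" are invented scores for illustration;
# real planners estimate risk from sensor data, not hand-set numbers.

def choose_maneuver(options):
    """Pick the option minimizing harm to humans first (First Law),
    and damage to the vehicle itself second (Third Law)."""
    return min(options, key=lambda o: (o["human_harm"], o["self_harm"]))

options = [
    {"name": "hit pedestrian", "human_harm": 10, "self_harm": 2},
    {"name": "hit light pole", "human_harm": 1,  "self_harm": 6},
]

print(choose_maneuver(options)["name"])  # prints "hit light pole"
```

Because human harm is compared before self harm, the car sacrifices itself (the light pole collision) whenever that spares a pedestrian — exactly the "passive driver" behavior that aggressive human drivers could learn to exploit.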