- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Each story involves strange robot behavior, of course, and the humans attempting to troubleshoot, diagnose, and repair the problem. Some mistakes are due to the nature of the Three Laws themselves: they are broad and leave much to interpretation. What qualifies as harm to a human? Physical pain? Emotional anguish? Can preventing long-term harm justify causing short-term harm? Other mistakes are due to conflicts between the laws, despite their rules of precedence: when laws conflict in a complicated way, so much of the robot's brain is absorbed in resolving the conflict that the robot behaves drunkenly.
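Asimov never writes down how the precedences are weighed, but for fun, here is a toy Python sketch (entirely my own invention, with made-up numbers) of how a weakly given order and a strengthened self-preservation drive could balance into exactly that drunken behavior:

```python
# A toy model, not Asimov's spec: treat each law as a numeric
# "potential" and let the robot take whichever step has the higher
# net potential. When a casually given order (Second Law) balances
# against a strengthened self-preservation drive (Third Law), the
# decision flips back and forth near the equilibrium point.

ORDER_PULL = 5.0  # hypothetical Second Law potential: a casual order to proceed

def self_preservation(closeness: float) -> float:
    # Hypothetical Third Law potential: grows as the robot nears the danger.
    # closeness runs from 0.0 (far away) to 1.0 (at the hazard).
    return 60.0 * closeness

def step(closeness: float) -> str:
    net = ORDER_PULL - self_preservation(closeness)
    return "advance" if net > 0 else "retreat"

# Around closeness ~0.083 the potentials balance, so the robot
# advances and retreats indefinitely, circling the hazard.
for c in (0.05, 0.08, 0.09, 0.12):
    print(f"{c:.2f}: {step(c)}")
```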
Robots require human robopsychologists to assess and diagnose them, and to provide therapy. Robots are surprisingly human in their deviousness, in their psychological hang-ups, and in their reasoning. This humanizing treatment of robots reminds me of Stanislaw Lem's The Cyberiad; robots with human problems are the source of much comedy in both books.
It's short and fun. Go read it!
*I'd like to see that proof!
This post's theme word is epanorthosis, "the immediate rephrasing of something said in order to correct it or to make it stronger. Usually indicated by: no, nay, rather, I mean, etc." Quite useful in issuing precise orders to robots.
This post was written like Isaac Asimov! I often feel that my thoughts form in the style of the latest writer I'm reading: here is a datum supporting that suspicion.