Monday, December 6, 2010

I, Robot rhetorical device

I came across a clever bit of writing that particularly tickled my fancy while reading Isaac Asimov's I, Robot. Its self-aware authorial voice was reminiscent of David Foster Wallace. (Or perhaps, given their respective places on the timeline, it was prescient of David Foster Wallace.) On page 148 of the 192-page-long edition I read:
Francis Quinn was a politician of the new school. That, of course, is a meaningless expression, as are all expressions of the sort. Most of the "new schools" we have were duplicated in the social life of ancient Greece, and perhaps, if we knew more about it, in the social life of ancient Sumeria and in the lake dwellings of prehistoric Scotland as well.

But, to get out from under what promises to be a dull and complicated beginning, it might be best to state hastily that Quinn...
What an excellent device! I would like to be proud enough of my writing to refuse to remove the dull bits, and instead simply acknowledge them as dull and move along.


This post's theme word is nihilarian, "one who does useless work."
This post written like H. P. Lovecraft. Although if I include the extended quote, it's Isaac Asimov.

Saturday, December 4, 2010

I, Robot

Isaac Asimov's delightful I, Robot has almost nothing to do with the movie of the same name. It is a set of charming vignettes detailing the early years of the development of the "positronic" robot brain, smarter than humans and equally self-aware. The only difference from humans is that the robots are bound to the Three Laws of Robotics:
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
These rules are "impressioned" into robot minds straight off the assembly line. Robot-building authorities repeatedly assure us that there is mathematical proof* that a robot brain would fail catastrophically (disabling the robot) before it could break any of the three laws. I, Robot is about the various ways in which the previous sentence does not mean what you think it means.

This involves strange robot behavior, of course, and the troubleshooting humans who attempt to diagnose and repair the problem. Some mistakes are due to the nature of the three laws, which are broad and leave much to interpretation: What qualifies as harm to a human? Physical pain? Emotional anguish? Can preventing long-term harm justify causing short-term harm? Some mistakes are due to conflicts between the laws, despite their rules of precedence: when laws conflict in a complicated way, so much of the robot's brain is absorbed in resolving the conflict that the robot behaves drunkenly.
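The strict precedence among the laws can be pictured as an ordered priority check. Here is a toy sketch of that idea (entirely my own invention, not anything from the book; all names and the harm/order/self-preservation flags are hypothetical simplifications, and a "positronic brain" is of course nothing like this):

```python
# Toy model of Three Laws precedence: First Law outranks Second,
# Second outranks Third. Hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    name: str
    harms_human: bool      # would injure a human, or allow harm by inaction
    disobeys_order: bool   # conflicts with an order given by a human
    destroys_self: bool    # endangers the robot's own existence

def permitted(action: Action) -> bool:
    """An action is permitted only if it violates neither of the
    first two laws; the First Law is checked before the Second."""
    if action.harms_human:
        return False       # First Law: absolute, no exceptions
    if action.disobeys_order:
        return False       # Second Law: binding once the First is satisfied
    return True            # Third Law never forbids a permitted action

def choose(actions: list) -> Optional[Action]:
    """Among permitted actions, prefer those that also satisfy the
    Third Law (self-preservation) -- but never at the cost of the
    first two laws."""
    legal = [a for a in actions if permitted(a)]
    safe = [a for a in legal if not a.destroys_self]
    return (safe or legal or [None])[0]
```

In this cartoon version the precedence is trivial to resolve; the book's comedy comes precisely from the cases this sketch ignores, where the flags themselves are ambiguous or the laws pull in opposite directions with nearly equal force.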

Robots require human robopsychologists to assess, diagnose, and provide therapy. Robots are surprisingly human in their deviousness, in their psychological hang-ups, and in their reasoning. This humanizing treatment of robots reminds me of Stanislaw Lem's The Cyberiad; robots with human problems are the source of much comedy in both books.

It's short and fun. Go read it!



*I'd like to see that proof!

This post's theme word is epanorthosis, "the immediate rephrasing of something said in order to correct it or to make it stronger. Usually indicated by: no, nay, rather, I mean, etc." Quite useful in issuing precise orders to robots.
This post written like Isaac Asimov! I often feel that my thoughts form in the style of the latest writer I'm reading: here is a datum supporting that suspicion.