Author: Isaac Asimov
Type: Fiction, short stories
I read it: August 2015
“There was a time when humanity faced the universe alone and without a friend. Now he has creatures to help him; stronger creatures than himself, more faithful, more useful, and absolutely devoted to him. Mankind is no longer alone. Have you ever thought of it that way?
They’re a cleaner and better breed than we are.”
So says Dr. Susan Calvin around the year 2057. Born in 1982, she rose through the ranks of the organization U.S. Robots and remained its most renowned “robopsychologist” through many generations of the machines across the decades. Interviewed by a journalist about the history of robotics, she reflects on the events that shaped what it meant to live alongside robots.
Here is a long review of a short book, with summaries and thoughts on each of the nine stories that make up the essential Asimov work I, Robot.
Robbie
The opening story sets the emotional core of the book and frames its primary dilemma: how much is a robot in charge of its own self? Robbie is a nursemaid robot, built before the laws caught up and banned robots as household helpers. The girl he cares for, Gloria, is obsessed with her metal friend, who is more or less a loyal dog. Built like a large, mute Bender from Futurama, Robbie has personality and a sincere drive to protect. Gloria’s mother is the fearful one:
“I don’t care how clever it is. It has no soul, and no one knows what it may be thinking. A child just isn’t made to be guarded by a thing of metal.”
The story doesn’t get into specifics; it stays focused on the bond between a human and an almost-human. To Dr. Calvin, Robbie is a memorable case study in robotics.
Runaround
The second story takes place in 2015 (!) and is the first to feature the hapless human duo Donovan and Powell. They have the thankless task of retrieving resources from the planet Mercury. They send one of their robots, SPD 13 (or Speedy), out into the field on a routine errand, but he glitches: he’s caught between the rule about obedience and the rule about self-preservation. This is the first time in the book that we get Asimov’s famous “three laws of robotics.”
As relayed by Powell they are:
“One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.
Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
The interplay between the three laws is the core of the subsequent stories.
Reason
Donovan and Powell have a new job working with experimental robots. They create Cutie (from QT-1; humans just can’t resist a nickname) by assembling him from parts shipped to them. Cutie instantly latches onto logic as his defining characteristic, postulating that “a chain of valid reasoning can end only with the determination of truth, and I’ll stick till I get there.” This proves frightening as he constructs his own reality and the humans lose their grip on commanding him. Cutie would rather worship the great machinery at the station, because to him its power is a more plausible creator than the idea that humans could have built him. At one point Cutie sums up the weakness of humans by highlighting their imperfection. “You are makeshift,” he states.
But Cutie’s logic is not infallible, and can’t lead him to truth. The three laws mean that he can only do what’s best for the humans in charge, which means simply doing his job extraordinarily well. This story hints at robot uprising but squashes it in the end.
Catch That Rabbit
This may be the weakest entry in the book. It once again features Donovan and Powell trying to figure out why their robot has seemingly gone off the rails. The robot is an experimental model that commands subordinates, likened to a hand controlling its fingers. Things go awry and the humans find themselves in mortal danger toward the end, until they can reason their way to why the robots are acting as they do. It’s further intellectual play, extrapolating the three laws into new situations.
“Liar!” breathes fresh energy into the robot dilemma. It features Susan Calvin directly, as well as one of her superiors, Alfred Lanning. They have a robot on their hands, Herbie, that can read minds. They get him alone in a room at different points to test whether he really knows what he claims to know. And Herbie can indeed read minds—but is he telling it straight?
When his obligations conflict, Calvin realizes that Herbie is slowly going insane by virtue of dealing with humans. He is ordered to obey by answering questions honestly, yet his circuitry also compels him to prevent human harm. He reasons that to tell humans the truth in all instances, especially on personal or emotional matters, is to violate the first law. Hence the title of the story, and Herbie’s exclamations when put under harsh questioning from Calvin:
“Stop! Close your mind! It is full of pain and frustration and hate! I didn’t mean it, I tell you! I tried to help! I told you what you wanted to hear. I had to!”
Little Lost Robot
Another instance of Susan Calvin needing to outwit a robot, this is also another story that addresses robot insubordination. The lost robot of the title has gone missing willingly, by blending in with a stock of robots exactly like himself. Calvin devises psychological games to root him out, though the robot believes the superior intellect of himself and his kind will keep the blinds over the eyes of the humans. There are guesses as to the stray robot’s motivations: “All normal life, consciously or otherwise, resents domination.” But if a robot’s life is still bound to the three laws, can it truly rebel?
Escape!
Poor Donovan and Powell are at it again. A gleaming new spaceship is built with a positronic brain to control it. At the end of a workday the two board the ship and the door closes behind them. They are at the whim of the ship as it navigates interstellar distances on its own. (Notably, these stories track the evolution of both robotic capabilities and human space travel, which lets Asimov use these concepts in his other work.)
This story tests the limits of the first law, toying with whether or not a robot could push humans to the brink of death—or even partially beyond—as long as the humans come out okay in the bigger picture.
“Evidence” may be the most intriguing tale of the bunch. Throughout Calvin’s career at U.S. Robots, the machines have been deemed acceptable for certain uses in space, but are blocked from existing in the public and private spheres. She relates that despite the technological advances in space, the core of robotic history has to do with “what has happened to the people here on Earth in the last fifty years.” She refers to the story of a politician who is accused of being a robot.
Byerley is an idealistic, humanistic candidate on an upward trajectory in the public eye. A bitter enemy starts a campaign to out him as a robot, because “actions such as his could come only from a robot, or from a very honorable and decent human being.” Given that the latter is so rare, the campaign gains traction, tapping into the distrust and fear that humans harbor toward highly capable robots. Certain fundamentalists are excited by the cause:
They were not a political party; they made pretense to no formal religion. Essentially they were those who had not adapted themselves to what had once been called the Atomic Age, in the days when atoms were a novelty. Actually, they were the Simple-Lifers, hungering after a life, which to those who lived it had probably appeared not so Simple, and who had not been, therefore, Simple-Lifers themselves.
Doesn’t this mindset seem all too plausible in our own world? (Asimov set this story in 2032.)
The story is personal yet far-thinking, and touches on the struggle to find a logical reason why we shouldn’t let robots help govern us. As Calvin summarizes:
“If a robot can be created capable of being a civil executive, I think he’d make the best one possible. By the Laws of Robotics, he’d be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice.”
It’s a compelling argument from a compelling story. And Asimov, in his best move, leaves it up to the reader to figure out whether or not Byerley really is a robot.
The Evitable Conflict
On the heels of “Evidence,” the final story keeps us out of space yet expands the question of robotics to a global scale. By 2044, the Regions of Earth have formed a Federation, and robots coordinate production worldwide. This followed a twentieth century that brought “a new cycle of wars…ideological wars…the emotions of religion applied to economic systems, rather than to extra-natural ones.” The Regions have stabilized into distinct yet more or less high-functioning areas where things run smoothly thanks to the robots. But when a few reports come in showing production glitches, an investigation begins.
Are humans sabotaging the world economies for personal or nationalistic reasons, or is something larger at play? Perhaps the robots are taking over…but what would that mean, exactly? These aren’t the evil automatons of our comic books, as Asimov takes pains to explain. One character puts it this way:
“The Machines are not superbrains in Sunday supplement sense,—although they are so pictured in the Sunday supplements. It is merely that in their own particular province of collecting and analyzing a nearly infinite number of data and relationships thereof, in nearly infinitesimal time, they have progressed beyond the possibility of detailed human control.”
The explanation is both mundane and fascinating. Perhaps it isn’t a stretch to imagine machines progressing beyond human control. After all, this is the stereotypical fear of anything we categorize under the “artificial intelligence” umbrella. But could it be possible to create machines so carefully wired and efficient that they take things into their own hands, not due to their own agenda but simply due to the dictates of their programming? Could they “progress beyond the possibility of detailed human control” for the purpose of serving the greater good of human society?
Asimov’s speculations about human-robot relationships are not an us-versus-them proposition, but a realistic inquiry into our actual future.