I thought I’d mix it up today and have a little fun with this essay. It was about time for me to take a break from the more serious stuff and write about genre fiction. My hope is that, even if you’re not a sci-fi nerd, you’ll like this essay.
Warning: This essay contains spoilers. That said, Asimov’s Foundation and Robot novels have been out for decades. Foundation came out 80 years ago. Foundation and Earth, the final novel in the series, came out almost 40 years ago. The statute of limitations on spoilers has long since expired.
I titled this essay provocatively, but I want to start by saying that Asimov is one of the greats. While he’s most known for his canonical science fiction, he wrote hundreds of books, including scientific treatises and a historical look at the Hebrew Bible from an atheist’s perspective. As far as I know, I’ve read almost every work of fiction he ever wrote (one of the notable exceptions being Fantastic Voyage II, because the first Fantastic Voyage bored me and reminded me too much of the movie Magic School Bus: Human Body, which I had to watch in elementary or middle school).
I really enjoyed most of Asimov’s stuff. Nightfall seems increasingly imaginative and important as time goes on. If you read one thing by Asimov, the novel version of Nightfall (which includes Daybreak) might be the thing to read. And perhaps no single individual influenced scientific and popular thought about robotics more than Asimov, whose “Three Laws” continue to be the jumping-off point for every discussion of ethical robotics.1
Also, I think Asimov has been treated unfairly by the screen adaptations of his work. I enjoyed the movie I, Robot, but other than the “Three Laws” and one minor character from the book, it has close to nothing to do with anything Asimov wrote. Likewise, from all accounts, the Foundation television series has taken little more than a premise and the name of a character from the Asimov book it’s supposedly based on.
So, I don’t want to be unfair to Asimov in this essay. But in hindsight, some of the ideas in his novels seem more and more problematic to me.
Some Background:
Asimov was a secular humanist, president of the American Humanist Association, and a friend of Kurt Vonnegut’s. Like Vonnegut, he was an outspoken atheist who nonetheless wasn’t hostile or antagonistic toward religion.
Most of Asimov’s science fiction fell into his “Future History” series, which begins in the late 20th Century and extends tens of thousands of years into the future. This includes his Robot novels, his Foundation novels, and the Empire trilogy.
The Robot novels begin with I, Robot (a collection of short stories) and chronicle the invention of artificially intelligent robots and faster-than-light travel, as well as humanity’s progression from an Earth-bound species to a spacefaring civilization populating dozens of star systems in the nearest part of the Milky Way. The novels explore the complex interaction between human beings and sentient, fully conscious robots, as well as the implications of Asimov’s Three Laws of Robotics:2
“First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.”
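Just to make the structure concrete: the Three Laws aren’t three independent rules but a strict priority ordering. Here’s a minimal sketch of that idea in Python. This is purely my own illustration, not anything from Asimov; the Action fields and the choose function are invented for the example.

```python
# A toy model (mine, not Asimov's) of the Three Laws as a lexicographic
# priority ordering over candidate actions: a robot picks the action with
# the smallest violation tuple, so a First Law violation can never be
# traded away for Second or Third Law compliance.

from typing import NamedTuple

class Action(NamedTuple):
    name: str
    injures_human: bool    # First Law: injuring a human, or allowing harm by inaction
    disobeys_order: bool   # Second Law: failing to obey a human order
    endangers_self: bool   # Third Law: risking the robot's own existence

def violation_key(a: Action) -> tuple:
    # Tuples compare position by position, and False sorts before True,
    # so earlier (higher-priority) laws dominate later ones.
    return (a.injures_human, a.disobeys_order, a.endangers_self)

def choose(actions: list[Action]) -> Action:
    return min(actions, key=violation_key)

# Example: ordered to retrieve something from a dangerous area.
options = [
    Action("refuse the order", injures_human=False, disobeys_order=True, endangers_self=False),
    Action("comply", injures_human=False, disobeys_order=False, endangers_self=True),
]
print(choose(options).name)  # "comply" -- the Second Law outranks the Third
```

The point of the tuple ordering is that the hierarchy is absolute: no amount of Second or Third Law compliance can offset a First Law violation, which is exactly the property Asimov’s short stories keep stress-testing.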
Foundation begins about fifty thousand years in the future, when human beings have civilized the entire Milky Way Galaxy, colonizing and developing millions of planets. Earth itself has been all but forgotten. Most of civilization is dominated by a Galactic Empire, which has provided peace and stability for twelve thousand years.
In the novel, a man named Hari Seldon develops a complex computational theory known as psychohistory, which essentially predicts the future of human civilization based on historical data on economic trends, population movements, cultural shifts, and a million other factors. He predicts that the Empire is about to collapse and that there will be thirty thousand years of dark ages and misery, but that he can set in motion events that might shorten the period of misery to one thousand years. (If you notice parallels to Rome, and the prophecy of the twelve eagles, you would be correct.)
The Foundation novels chronicle those next thousand years, and the efforts of those who come after Seldon to use psychohistory to navigate the dark ages and preserve human knowledge and civilization.
The Empire novels sit between the Robot and Foundation series in the internal chronology, linking the two by filling in some of the events that come in between them.
Psychohistory:
Asimov’s Three Laws may ultimately prove more influential than psychohistory. But there was a time when psychohistory seemed eminently plausible. The economist Paul Krugman has said that Asimov’s psychohistory is what first got him interested in studying economics. Two decades after Asimov published Foundation, President Kennedy said,
“The fact of the matter is that most of the problems, or at least many of them that we now face, are technical problems, are administrative problems. They are very sophisticated judgments which do not lend themselves to the great sort of ‘passionate movements’ which have stirred this country so often in the past. Now they deal with questions which are beyond the comprehension of most men.”
When I first read Foundation as a kid, I, too, was enthralled by the idea that intelligent minds with sophisticated computers and access to all the data in existence could predict the historical trends that govern human life. Of course, even at the time, I came down strongly in favor of free will. But Hari Seldon’s psychohistory seemed to allow for free will on the individual level, at the scale of small decisions that “don’t matter in the grand scheme of things.”
Today, I’m far less enthralled by psychohistory. Actually, I think it’s impossible. Not simply out of reach, but impossible. Even with computers more sophisticated than our feeble minds can comprehend, with access to all of the economic, political, historical, and scientific data that has ever existed, I don’t think it is possible to predict the future. At least not in the way that psychohistory seeks to.
One source I’d cite on this subject is Nassim Nicholas Taleb, who has done more than anyone alive to change the way we talk about risk and uncertainty. One of the primary themes of his Incerto is that we inhabit a world governed by randomness and chaos, above all the randomness produced by rare events with massive impact. He tells us that the knowledge we don’t have will always matter more than the knowledge we do, because the unexpected will always come from the places we least expect, about which we know the least.
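To make the chaos point concrete, here’s a minimal numerical illustration (my own sketch of the general idea, not Taleb’s argument or anything from Asimov). The logistic map is about the simplest chaotic system there is; two trajectories that start one part in a billion apart become completely decorrelated within a few dozen steps, and no realistic improvement in the initial measurement postpones that for long.

```python
# Sensitive dependence on initial conditions in the logistic map (r = 4),
# one of the simplest chaotic systems. A measurement error of one part in
# a billion roughly doubles each step on average, so prediction fails
# after about 30 steps no matter how good the model is.

def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1.0 - x)

a, b = 0.400000000, 0.400000001  # initial states differing by 1e-9
for step in range(1, 51):
    a, b = logistic(a), logistic(b)
    if step % 10 == 0:
        print(f"step {step:2d}: |difference| = {abs(a - b):.9f}")
# The gap grows from ~1e-9 to order 1 by around step 30, after which the
# two trajectories are effectively unrelated.
```

Forecasting a galaxy of quadrillions of people over centuries is not the logistic map, of course, but the underlying problem (errors that compound faster than data can shrink them) only gets worse with scale.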
Psychohistory is based on a theory of causality that allows little room for free will. In Foundation and Empire and Second Foundation, one extremely powerful individual (the Mule) throws a wrench into that, but even the impact of a “Great Man” can be corrected for by tweaking the model. More fundamentally, the idea of psychohistory is that once we can accurately predict the future outcomes of various courses of action, society’s problems will all be solved by the clever design of managers. In other words, we’ll no longer need democracy.
In the Foundation novels, this seems to work out great. Adept administrators guide the “Foundation” through the collapse of the Galactic Empire, and – because a hologram of the long-dead Hari Seldon is always there to tell them what the next problem will be and how to solve it – manage to begin peacefully restoring civilization to one corner of the galaxy. The citizens of Terminus (the Foundation’s home planet) live in peace and prosperity.
In reality, psychohistory doesn’t work. Which means that sort of society wouldn’t be so frictionless and peaceful. People don’t enjoy having administrators solve their problems for them, especially when it doesn’t work out perfectly. Unlike the men and women of Terminus, we don’t have a hologram of Hari Seldon to tell us what to do.
But more importantly, the idea that a computer model could predict the future with such accuracy as to remove all need for deliberation and choice in society is a totalitarian idea. The logical implication is that human beings should not be allowed to make decisions for themselves. When we do, the results are messy. When the computer does it, the results are clean.
Moreover, the only way a theory like psychohistory could work is if causality is so perfect as to remove most free will from human lives (in other words, we’re just fooling ourselves, and we don’t really make our own decisions). This is a popular idea, but one I find fatally flawed.3 And the logical implication of this idea, too, is that we should stop fooling ourselves: we should just give up the whole charade of free choice. If we don’t really have free will, why should we pretend we do? And why shouldn’t we let some algorithm tell us what to do?
But it gets worse.
The Zeroth Law of Robotics:
If you’ve seen the movie I, Robot, you’ll know that the intelligent robots derive a “Zeroth Law of Robotics” from the Three Laws, which basically states that their ultimate goal should be to ensure the survival of humanity. They can violate the other three laws in service of this law (hence, it comes before the First Law). In other words, they can kill individual humans, or even large numbers of humans, in service of the greater good.
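In terms of the toy priority model sketched earlier, deriving a Zeroth Law just means prepending a new, highest-priority component to the ordering, so that harm to humanity as a whole outranks harm to any individual. Again, this is my own illustration, not Asimov’s formalism:

```python
# Adding a Zeroth Law to the toy model: one more field at the front of the
# violation tuple. Now an action that injures an individual human can win,
# as long as the alternative endangers humanity as a whole.

from typing import NamedTuple

class Action(NamedTuple):
    name: str
    harms_humanity: bool   # Zeroth Law: endangering humanity as a whole
    injures_human: bool    # First Law
    disobeys_order: bool   # Second Law
    endangers_self: bool   # Third Law

def violation_key(a: Action) -> tuple:
    return (a.harms_humanity, a.injures_human, a.disobeys_order, a.endangers_self)

options = [
    Action("spare the individual", harms_humanity=True, injures_human=False,
           disobeys_order=False, endangers_self=False),
    Action("sacrifice the individual", harms_humanity=False, injures_human=True,
           disobeys_order=False, endangers_self=False),
]
print(min(options, key=violation_key).name)  # "sacrifice the individual"
```

One field at the front of a tuple is all it takes to turn “never harm a human” into “harm a human whenever the model says the greater good requires it.”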
In the movie, this sets up an “evil robots attempt to enslave humanity” plot. However, for Asimov, the Zeroth Law was a good thing. In his novels, the Zeroth Law is postulated by a benevolent and highly intelligent robot (Daneel Olivaw) who develops actual independence. He is no longer bound by the Three Laws. They’re still burned into his brain, but he transcends them.4 He’s kind of a philosopher-robot, in possession of deep wisdom and capable of original thought. He’s equal to (or perhaps greater than) human beings.
After the events of the Robot novels, robots are phased out and humanity bans them. This is actually a common trope in science fiction, about which I’ll write more at some point. In most books, there’s some sort of cataclysmic war between humans and artificial intelligence, but in Asimov’s Future History, that’s not the case. Interestingly, the taboo against artificial intelligence survives for tens of thousands of years, all the way through the Foundation novels.5
However, robots survive in the shadows, including Daneel Olivaw. Olivaw and his fellow robots hide outside of human society, occasionally venturing back in disguised as human beings (the disguise is impossible to detect). This allows them to manipulate people and events in order to control all of human society and guide it in the direction they believe is best for human flourishing. According to Asimov, this is a good thing.
At the end of Foundation and Earth, we learn that this has been going on behind the scenes for millennia. Asimov presents it as benign and beneficial, but it amounts to a secret cabal of robots, running around disguised to look like human beings, paternalistically pulling strings and orchestrating events because they appointed themselves guardians of humanity.
Foundation and Earth is the very last novel in Asimov’s Future History. (I said there would be spoilers.) But if paternalistic robots literally conspiring to save humans from themselves weren’t enough, we get something even more sinister at the very end of that novel.
The End of Future History:
As a kid, I actually thought the robots pulling the strings behind the scenes was cool. Any conspiracy is great fun if you’re in on it. Paternalism doesn’t feel restrictive if you imagine yourself to be on the team pulling the strings.
But the one thing my adolescent mind instinctively recoiled from was the decision made at the end of Foundation and Earth. In the previous novel, Foundation’s Edge, humans on one particular planet develop a hive mind. They retain individual awareness but become part of a superorganism, or group consciousness, known as Gaia. Each mind within Gaia can commune directly with other minds without losing touch with the physical reality of its individual body.
This sounds like hell to me. If forced to choose between hive mind and death, I choose death. But at the end of Foundation and Earth, humanity chooses to totally embrace the hive mind and create a new pan-galactic superorganism known as “Galaxia,” which will incorporate all individuals within the galaxy. (I mentioned this in another essay last November.)
Then the novel ends, and the whole series ends, because there’s nowhere to go after that. Humans have evolved to a higher plane, I suppose. The hive mind is pure collectivism, in which individual identity is literally subsumed into a greater mass.
Putting It Together:
Again, I like Asimov. He’s on anybody’s short list of “Most Important Science Fiction Authors of the 20th Century,” and you could probably leave off “the 20th Century” and still have it be a true statement.
But I think the combination of the collectivism implied by Galaxia, the technocratic paternalism of psychohistory (along with its ramifications for free will), the disturbing implications of the Zeroth Law of Robotics (i.e., a robot could commit murder if doing so advanced the greater good of humanity), and the manipulative conspiracy of Daneel Olivaw creates a rather dark picture.6 It’s not a galaxy I’d want to live in.
Which is a shame, because otherwise the Milky Way as envisioned by Asimov is a rich and interesting place, filled with incredible technology, diverse peoples,7 and plenty of examples of human flourishing. If anything, Asimov’s envisioned future is a better one for humanity, with less violence, greater proliferation of knowledge, more social stability, and less oppression than that envisioned by many other science fiction authors. Even the collapse of Asimov’s Galactic Empire isn’t as cataclysmic as the jihads in the Dune universe, nor as vicious as the casual violence of the Hyperion universe.
Indeed, the Future History is a secular utopia of scientific progress, at least in some respects.8 It isn’t the future I’d want to live in. But it isn’t hellish, and it isn’t meant to be. I don’t want to leave readers with the wrong impression. Asimov wasn’t an extremist. He imagined a future with relaxed social mores compared to the 1940s and ’50s, but it’s rather staid and traditionalist compared to the strangeness of Heinlein’s To Sail Beyond the Sunset or the radicalism of Kim Stanley Robinson’s 2312.
In fact, I think quite a lot of people rather like the idea of connected consciousness. Even more people probably like the idea of benevolent artificial intelligences guiding and guarding humanity behind the scenes, saving us from our own worst enemies: ourselves. And many economists, statisticians, social historians, and computational modelers love the idea of psychohistory.
The title of this essay is a joke. It seems absurd to imagine that Isaac Asimov was a totalitarian. That’s why I added the word “soft.” His novels aren’t filled with jackbooted thuggery or tyrannical dictators. But in some ways, subtlety just makes it all the more insidious. It isn’t the vile and revanchist autocrat you have to fear. It’s the one who greets you with a smile and pleasantly tells you not to worry, because everything is being taken care of “for the greater good.”
1. Obviously, from a technical perspective, you could name dozens of scientists and engineers more influential in robotics than Asimov. Also, I recognize that discussions of ethical robotics have moved on a great deal from the Three Laws. However, most discussions begin by referencing them or explaining their limitations, in part because they are so well known and easy to understand. Perhaps the most important thing about the Three Laws wasn’t the proposed framework itself, but the exhaustive exploration of its implications, nuances, and unintended consequences across a series of short stories and novels.
2. These laws are imprinted into the positronic brains of robots, which are incapable of violating them and will break down or self-destruct if they somehow manage to try.
3. Someday, I will write about that, but perhaps not in nonfiction. Determinism is a topic for another time.
4. Does that sound familiar?
5. The taboo/ban is another part of the trope. My favorite version of this is in Frank Herbert’s Dune, in which a commandment against creating any artificial intelligence has been written into the Orange Catholic Bible at something like a new Council of Nicaea.
6. In this case, a literal conspiracy. Not a theory about a conspiracy, but a shadowy organization that conspires to run things.
7. All human. Humans never meet any aliens in the Milky Way. However, one reason given for embracing the hive mind is fear of an eventual encounter with a hostile alien race.
8. Although, I suppose, if you replaced science with religion, it might resemble a theocratic society based on a kind of integralism that prioritizes order over anything else.