Foreign policy scholar Walter Russell Mead argues that everyone should read science-fiction (or perhaps more accurately, speculative fiction), not because it has the best prose, plots or characters, but because it gives us a chance to see outside of our usual frames of reference, and possibly encounter humanity in a new light.
Taken as a whole, the field of science fiction today is where most of the most interesting thought about human society can be found. At a time when many academics have become almost willfully obscure, when political science is increasingly dominated by arcane and uninspiring theories, and when a fog of political correctness makes some forms of (badly needed) debate and exploration off limits, science fiction has stepped forward to fill the gap.
— Mead, Walter Russell. “Literary Saturday: Science Fiction is a Genre That Everyone Should Read.” Via Meadia/The American Interest, 18 September 2010.
I’m sympathetic to Mr. Mead’s argument on several levels: as a former science-fiction reader who once thoroughly enjoyed the genre; as one who mines the currents of history for patterns that might be applicable today; and as one who holds Mead’s intellect in some esteem. I’m not sure any of these things can overpower the catalyst that drove me away from science fiction, which is the genre’s tendency to explore new frontiers of the human condition in exactly the same way.
I refer not to similar plots, characters or superficial elements, but to the underlying theme, which usually—when boiled down to its simplest elements—is a novel-length Facebook status update from the author, saying in essence, “Wouldn’t it be great if human nature were no longer a limiting factor, and we could dispense with x, which bugs the crap out of me?”
Sure, it would be great. And if your aunt had balls she’d be your uncle.
The reason I find much science-fiction/speculative fiction hard to take is that anyone with a decent knowledge of history, religion and anthropology will understand that humanity actually has hard-coded limits that will be nigh-impossible to transcend. Or more accurately, that our superficial layer of cultural software can be changed relatively easily, but most of what plagues us as a species is the result of our neurological firmware, which has evolved into its present condition over millennia and will take literal millennia to add any new lines of code.
Let us take, for example, emotion. Every human being is going to (at some point) feel love, hate, joy, sadness, anger, relief, pride, shame, et cetera. Much of the time we can decide whether we want to embrace or suppress specific feelings in a specific instant, but we don’t have any control over whether emotion itself occurs at all, and sometimes strongly felt emotion can override our reason. Our collective evolutionary, genetic and neurological heritage thus dictates that the emotion switch is stuck in the “on” position for all of us, with an occasional involuntary “override” capability. On a macro level this means humanity is subject to irrationality, and will be until evolutionary pressures, millions of years down the road, might produce a “cutoff” capability. Any human civilisation whose members still resemble actual Homo sapiens is going to have a certain amount of irrationality and illogic built in by default.
Lots of the firmware that creates annoying problems has a perfectly good, reasonable role. Like all manner of flora and fauna, we discriminate (in the objective—not pejorative—sense). The human brain is a generalisation machine; it looks for patterns inductively and deductively. It has evolved this mechanism in its firmware to help all humans adapt to our various physical and social environments; it is not merely an inculcated software artefact of one specific culture. Operating at a certain baseline level, the ability to identify a pattern, associate it with a predicted outcome, and act to achieve our desired outcome is good. That ability keeps us from eating food that has gone bad, from touching red-hot objects, and so on. But as everyone knows, we can easily run into problems by over-utilising this necessary survival skill (or by providing it lousy inputs).
Individuals can modify their software generalisations, if they are self-aware enough to a) know that those generalisations are happening and b) desire to either embrace or reject the conclusions. What is completely beyond the individual’s control is the firmware, the part of the brain subconsciously assimilating data and making generalisations, every moment of every day. We can and should pass laws to prevent certain kinds of discrimination in public life and commerce, but we can never hope to eliminate the firmware in the brain that inductively and deductively arrives at generalisation-hypotheses. It’s used in too many facets of human (and animal) existence. Ergo prejudice of one kind or another will always be with us as a species (even if, culturally, we conquer the usual racial / gender / orientation kind).
History and religion become useful wells to draw upon because they show us that man’s inner struggle hasn’t changed much over the past few thousand years. We may have newer gear—spacecraft and computers—but in the end we are governed by the same evolutionary programming handed down from our ancestors eons ago. We wonder at the vastness of the cosmos and our small place in it. We wonder how we might best fulfil our potential in a world where the outcomes are uncertain and the stakes so high. Sometimes that striving takes us into conflict with others who have the same (or a different) goal. How we decide on and pursue that goal depends on our cultural software, but our evolutionary firmware is the why. Nobody arrives on the planet wanting to sit still and do absolutely nothing until the day they pass on.
Background knowledge of history will also tell you what utopian experiments have already been tried and found wanting. (Hint: it’s all of them.) The great failing—if not defining feature—of utopian projects of all eras is that they generally try to paper over mankind’s firmware with less-sturdy cultural software. This might last for three or four generations at best, but ultimately our firmware will reassert itself.
Regrettably, when one reads science-fiction that tries to get around humanity’s firmware limitations, the authors tend to run for all-out transhumanism (whereby humanity’s limiting factors are solved by experimentation, genetic and biological manipulation, et cetera). But because the author is also writing the story for humans today—who generally don’t want to read about persons and things they cannot relate to—you get post-humans who are either 1) a little bit too human, which kind of nukes the premise of the story, or 2) sufficiently un-human as to be uninteresting to the human being who has to slog through the story in the here and now. It’s a hard, virtually impossible balance to strike, and as a result I find myself reaching for histories rather than science/speculative fiction.
And let’s not even get into the facepalm territory of why some authors give spacefaring civilisations with faster-than-light drives ultra-low-tech bullets and projectile weapons. Tomorrow’s Earth-bound fighters and gunships already have planned directed-energy weapon upgrades. And we’ll have those operationally deployed before the first manned interplanetary spacecraft sets out for Mars.