The Purpose of AI

I recently read an excellent cautionary tale (and with a romance to boot), David Walton’s Three Laws Lethal (2019).  The subject of “artificial intelligence” or AI (it isn’t really intelligence, but that’s another story) is hot.  To take only one rather specialized example, the Federal Communications Commission’s Consumer Advisory Committee last year carried out a brief survey of the roles of AI, both harmful and helpful, in dealing with robocalls and robotexts.  So it seems like an appropriate moment to take a look at Walton’s insights.

Frankenstein and the Three Laws

It’s well known that the early history of SF—starting with what’s considered by some to be the first modern SF story, Mary Shelley’s Frankenstein (1818)—is replete with tales of constructed creatures that turn on their creators and destroy them.  Almost as well known is how Isaac Asimov, as he explains in the introduction to his anthology The Rest of the Robots (1964), “quickly grew tired of this dull hundred-times-told tale.”  Like other tools, Asimov suggested, robots would be made with built-in safeguards.  Knives have hilts, stairs have banisters, electric wiring is insulated.  The safeguards Asimov devised around 1942, gradually and in conversation with SF editor John W. Campbell, were his celebrated Three Laws of Robotics:

I, Robot novel cover

1—A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The “positronic” or “Asenion” robots in many of Asimov’s stories were thus unable to attack their human creators (First Law).  They could not run wild and disobey orders (Second Law).  Asimov’s robot could not become yet another Frankenstein’s monster.

That still left plenty of room for fascinating stories—many of them logic puzzles built around the Three Laws themselves.  A number of the loopholes and ramifications of the Three Laws are described in the Wikipedia article linked above.  It even turned out to be possible to use robots to commit murder, albeit unwittingly, as in Asimov’s novel The Naked Sun (1956).  When he eventually integrated his robot stories with his Foundation series, he expanded the Three Laws construct considerably—but that’s beyond our scope here.

Autonomous Vehicles

To discuss Three Laws Lethal, I must of course issue a

Spoiler Alert!
Three Laws Lethal book cover

Walton’s book does cite Asimov’s laws just before Chapter One, but his characters don’t start out by trying to create Asenion-type humanoid robots.  They’re just trying to start a company to design and sell self-driving cars.

The book starts with a vignette in which a family riding in a “fully automated Mercedes” is caught in an accident.  To save the passengers from a falling tree, the car swerves out of the way, in the process hitting a motorcyclist in the next lane.  The motorcyclist is killed.  The resulting lawsuit by the cyclist’s wife turns up at various points in the story that follows.

Tyler Daniels and Brandon Kincannon are friends, contemporary Silicon Valley types trying to get funding for a startup.  Computer genius Naomi Sumner comes up with a unique way to make their automated taxi service a success:  she sets up a machine learning process by creating a virtual world with independent AIs that compete for resources to “live” on (ch. 2 and 5).  She names them “Mikes” after a famous science-fictional self-aware computer.  The Mikes win resources by driving real-world cars successfully.  In a kind of natural selection, the Mikes that succeed in driving get more resources and live longer:  the desired behavior is “reinforced.”
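For readers who want the mechanics spelled out, here is a minimal sketch in Python of the kind of selection loop the novel describes:  agents earn resources by completing trips, agents that run out of resources disappear, and the survivors are copied with small variations.  Everything in it (the names, the single “caution” number standing in for a learned driving policy, the payoffs) is my own illustrative assumption, not anything specified in the book.

```python
import random

class Mike:
    """A toy agent: 'caution' stands in for a learned driving policy."""
    def __init__(self, caution):
        self.caution = caution
        self.resources = 10.0   # the "life" budget the agent must maintain

    def drive_trip(self):
        # Safer policies crash less often; a crash costs resources,
        # a completed trip earns them.
        crash_chance = max(0.01, 0.5 - self.caution)
        return -5.0 if random.random() < crash_chance else 1.0

def run_generation(population, cap=50):
    survivors = []
    for mike in population:
        mike.resources += mike.drive_trip()
        if mike.resources > 0:
            survivors.append(mike)
    # The best-funded agents "reproduce": copies with slightly mutated
    # policies, and the population is trimmed to a fixed cap.
    survivors.sort(key=lambda m: m.resources, reverse=True)
    parents = survivors[: cap // 2]
    offspring = [Mike(p.caution + random.gauss(0, 0.05)) for p in parents]
    return (survivors + offspring)[:cap]

population = [Mike(random.random()) for _ in range(20)]
for _ in range(50):
    population = run_generation(population)

if population:
    mean_caution = sum(m.caution for m in population) / len(population)
    print(f"{len(population)} Mikes remain; average caution {mean_caution:.2f}")
else:
    print("The whole population died out.")
```

Run for a few dozen generations, the surviving population drifts toward more cautious driving:  the behavior that wins resources is the behavior that persists.  Notice that nothing in the loop cares about bystanders unless someone explicitly adds it, which is exactly the gap Walton’s plot turns on.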

Things start to go wrong almost at once, though.  The learning or reinforcement methods programmed into the AIs don’t include anything like the Three Laws.  A human being who’s been testing the first set of autonomous (Mike-guided) vehicles by trying to crash into them is killed by the cars themselves—since they perceive that human being as a threat.  Two competing fleets of self-guided vehicles see each other as adversaries and can be turned against their controllers’ enemies.  The story is both convincing—the AI development method sounds quite plausible and up-to-the-minute (at least to this layman)—and unnerving.

But the hypothetical AI system in the novel, it seems to me, casts some light on an aspect of AI that is not highlighted in Three Laws-type stories.

Having a Goal

The Mikes in Three Laws Lethal are implicitly given a purpose by being set up to fight for survival.  That purpose is survival itself.  We recall that a robot’s survival is also included as the third of the Three Laws—but in that context survival is subordinated to protecting humans and obeying orders.  Asimov’s robots are conceived as basically passive.  They would resist being destroyed (unless given orders to the contrary), but they don’t take positive action to seek the preservation or extension of their own existence.  The Mikes, however, like living beings, are motivated to affirmatively seek and maintain themselves.

If an AI population is given a goal of survival or expansion, then we’re all set up for Frankensteinian violations of the First Law.  That’s what the book depicts, although in a far more sophisticated and thoughtful way than the old-style SF potboilers Asimov so disliked.

At one point in Walton’s story, Naomi decides to “change the objective.  She didn’t want them to learn to drive anymore.  She wanted them to learn to speak” (ch. 23, p. 248)—in order to show they are sapient.  Changing the goal would change the behavior.  As another character puts it later on, “[i]t’s not a matter of preventing them from doing what they want” (as if a Law of Robotics were constraining them from pursuing a purpose, like a commandment telling humans what not to do).  Rather, “[w]e teach them what to want in the first place.”  (ch. 27, p. 288)

Goals and Ethics

Immanuel Kant

The Three Laws approach assumes that the robot or AI has been given a purpose—in Asimov’s conception, by being given orders—and the Laws set limits to the actions it can take in pursuing that purpose.  If the Laws can be considered a set of ethical principles, then they correspond to what’s called “deontological” ethics, a set of rules that constrain how a being is allowed to act.  What defines right action is based on these rules, rather than on consequences or outcomes.  In the terms used by philosopher Immanuel Kant, the categorical imperative, the basic moral law, determines whether we can lawfully act in accordance with our inclinations.  The inclinations, which are impulses directing us toward some goal or desired end, are taken for granted; restraining them is the job of ethics.

Some other forms of ethics focus primarily on the end to be achieved, rather than on the guardrails to be observed in getting there.  The classic formulation is that of Aristotle:  “Every art and every investigation, and similarly every action and pursuit, is considered to aim at some good.”  (Nicomachean Ethics, I.i, 1094a1)  Some forms of good-based or axiological ethics focus mostly on the results, as in utilitarianism; others focus more on the actions of the good or virtuous person.  When Naomi, in Walton’s story, talks about changing the objective of the AI(s), she’s implicitly dealing with an axiological or good-based ethic.

As we’ve seen above, Asimov’s robots are essentially servants; they don’t have purposes of their own.  There is a possible exception:  the proviso in the First Law that a robot may not through inaction allow harm to come to humans does suggest an implicit purpose of protecting humans.  In the original Three Laws stories, however, that proviso did not tend to stimulate the robots to affirmative action to protect or promote humans.  Later on, Asimov did use something like this pro-human interest to expand the robot storyline and connect it with the Foundation stories.  So my description of Three Laws robots as non-purposive is not absolutely precise.  But it does, I think, capture something significant about the Asenion conception of AI.

Selecting a Purpose

There has been some discussion, factual and fictional, about an AI’s possible purposes.  I see, for example, that there’s a Wikipedia page on “instrumental convergence,” which talks about the kinds of goals that might be given to an AI—and how an oversimplified goal might go wrong.  A classic example is that of the “paperclip maximizer.”  An AI whose only goal was to make as many paper clips as possible might end by turning the entire universe into paper clips, consistent with its sole purpose.  In the process, it might decide, as the Wikipedia article notes, “that it would be much better if there were no humans because humans might decide to switch it off,” which would diminish the number of paper clips.  (Apparently there’s actually a game built on this thought-experiment.  Available at office-supply stores near you, no doubt . . .)

A widget-producing machine like the paperclip maximizer has a simple and concrete purpose.  But the purpose need not be so mundane.  Three Laws Lethal has one character instilling the goal of learning to speak, as noted above.  A recent article by Lydia Denworth describes a real-life robot named Torso that’s being programmed to “pursue curiosity.”  (Scientific American, Dec. 2024, at 64, 68)

It should be possible in principle to program multiple purposes into an AI.  A robot might have the goal of producing paper clips, but also the goal of protecting human life, say.  But it would then also be necessary to include in the program some way of balancing or prioritizing the goals, since they would often conflict or compete with each other.  There’s precedent for this, too, in ethical theory, such as the traditional “principle of double effect” to evaluate actions that have both good and bad results.
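To make that balancing problem concrete, here is a minimal sketch of one way conflicting goals might be prioritized:  a hard safety constraint is checked first, and only then is production maximized among the remaining options.  The goals, numbers, and action names are hypothetical, invented purely for illustration; nothing here comes from the novel or from any real system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    human_risk: float   # estimated probability of harming a person
    paperclips: int     # expected paperclips produced

def choose_action(actions, max_risk=0.001):
    # Higher-priority goal: never exceed an acceptable risk to humans.
    safe = [a for a in actions if a.human_risk <= max_risk]
    if not safe:
        return Action("do nothing", 0.0, 0)
    # Lower-priority goal: among the safe options, maximize production.
    return max(safe, key=lambda a: a.paperclips)

options = [
    Action("melt down the delivery van", 0.20, 10_000),
    Action("run the wire spooler overnight", 0.0005, 1_200),
    Action("idle and wait for more wire", 0.0, 0),
]
print(choose_action(options).name)   # -> run the wire spooler overnight
```

Even this trivial version forces the designer to pick a threshold and an ordering.  The hard part, as the double-effect tradition suggests, is deciding what to do when no available option is cleanly safe.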

Note that we’ve been speaking of goals given to or programmed into the AI by a human designer.  Could an AI choose its own goals?  The question that immediately arises is, how or by what criteria would the AI make that choice?  That methodological or procedural question arises even before the more interesting and disturbing question of whether an AI might choose bad goals or good ones.  There’s an analogy here to the uncertainty faced by parents in raising children:  how does one (try to) ensure that the child will embrace the right ethics or value system?  I seem to recall that David Brin has suggested somewhere that the best way to develop beneficent AIs is actually to give them a childhood of a sort, though I can’t recall the reference at the moment.

Conclusions, Highly Tentative

The above ruminations suggest that if we want AIs that behave ethically, it may be necessary to give them both purposes and rules.  We want an autonomous vehicle that gets us to our destination speedily, but we want it to respect Asimov’s First Law about protecting humans in the process.  The more we consider the problem, the more it seems that what we want for our AI offspring is something like a full-blown ethical system, more complex and nuanced than the Three Laws, more qualified and guarded than Naomi’s survival-seeking Mikes.

This is one of those cases where contemporary science is actually beginning to implement something science fiction has long discussed.  (Just the other day, I read an article by Geoffrey Fowler (11/30/2024) about how Waymo robotaxis don’t always stop for a human at a crosswalk.)  Clearly, it’s time to get serious about how we grapple with the problem Walton so admirably sets up in his book.

Self-Spoofing

Visions of Sugarplums

Among the Hallmark Channel’s Christmas season offerings this year is a comedy called Sugarplummed, which has the distinction of being a spoof of the Hallmark Channel itself.

Satirizing the recurring tropes of the innumerable Hallmark Christmas movies isn’t a new game.  As noted in my 2019 post, takeoffs on this subgenre are legion.  Just today, a “Sally Forth” newspaper cartoon rang yet another change on the meta-tradition of Hallmark spoofery.

Sugarplummed movie poster (from https://www.imdb.com/title/tt33900751/mediaviewer/rm361928706/?ref_=tt_ov_i)

But it’s particularly entertaining to watch someone make fun of themselves.  In Sugarplummed, Emily, the harried mother in a family of four, is devoted to a schmaltzy series of TV movies featuring a woman named “Sugarplum.”  She’s thrown off the rails when the Sugarplum character magically materializes in the real world, determined to fulfill Emily’s wish for a perfect Christmas.  Sugarplum carries around a thick book that contains The Rules.  The Rules are a collection of mandates that embody all the Hallmark clichés we know so well.  Sugarplum’s world, in her home town of (naturally) “Perfection,” lives by these rules.  Why shouldn’t they work in Emily’s life as well?

For a while, they do, and we’re treated to a series of implausible “Christmas miracles” where everything works out perfectly.  Then, as the Hallmark plot summary puts it, “when Sugarplum’s magical fixes start to backfire one by one, Emily begins to question what an ideal holiday really is.”  The moral of the story is not to pursue perfection to the detriment of our ordinary relationships and joys—or, to put it another way, the familiar adage that “the perfect is the enemy of the good.”  That’s an apt reminder.

Enchantment and Reality

The Disney canon is another inviting target for satire.  The long history of Disney movies and shows contains plenty of well-worn recurring tropes and themes.  And, since much of Disney’s original focus was on children, those motifs tend to be on the sentimental or upbeat side, making the repetition even more attractive for satire.  (For some reason, the repetition of ugly or grimdark tropes, as in the plethora of dystopian fantasies now flooding the market, doesn’t attract as much jeering.)

Shrek (2001) did a pretty good job of spoofing Disney, the theme parks as well as the movies—not just in the details, but in the overall plot.  Instead of having the prince or princess turned into a monster or critter and restored in the end, Beauty and the Beast-style, Shrek’s inamorata Fiona turns out to be an ogre who’s been turned into a princess, and the happy ending ensues once she’s been restored to her natural homely form.

Of course, there’s a certain sniggering almost-hostility in making fun of somebody else’s clichés.  We can undermine that attitude when we make fun of ourselves.  Disney one-upped Shrek when it released the movie Enchanted in 2007.  Here, an animated character named Giselle, a perfect exaggeration of a Disney fairy tale heroine, is sent by a wicked witch to a terrible foreign land that works by entirely different rules:  New York City.  Now in live-action, we see Giselle try to cope with the grittier world we live in, where her set of Rules don’t apply.  (In one classic inversion, the climax sees Giselle attacking the monster to save the man she loves, rather than the other way around.)  We do get a happy ending, but it’s much more along the lines of learning to be a grown-up and living with (while improving on) imperfection.

The Science Fiction Version

The self-spoof approach isn’t limited to fantasy, or to the big screen.  Consider E.E. Smith’s Lensman series, the very model of classic space opera.  Satires of the space opera genre are also common, from Harry Harrison’s Star Smashers of the Galaxy Rangers to the 1980 Flash Gordon movie.  But in one case, Smith did it to himself.

In the last Lensman book, Children of the Lens (1947), Smith’s redoubtable hero Kimball Kinnison goes undercover as—of all things—a science fiction writer, Sybly White (see chapter 3).  We see a couple of excerpts from the SF novel Kinnison writes, which (of course) “was later acclaimed as one of Sybly White’s best.”  The excerpts parody, not just space opera generally, but Smith’s own ebullient, melodramatic style.  It’s charming to see Smith good-naturedly bringing onstage a caricature of himself.

Taking Ourselves Lightly

As I’ve noted elsewhere, humor can often involve a kind of meanness toward the target of the humor.  The charm of the self-satire is that the temptation toward meanness or hostility is absent.  Spoofing oneself is almost bound to be an affectionate parody, the kind where the humorist really likes the subject of the humor.  These often make for the best satires, untainted by any sourness or smugness and genuinely understanding how the satirized thing works.

I give Hallmark credit for Sugarplummed, then, especially because it shows the showrunners are not taking themselves too seriously.  That genial attitude is in tune with the Christmas season Hallmark is celebrating.  As Chesterton put it in Orthodoxy (ch. 8):  “Angels can fly because they can take themselves lightly.”

Civilization and the Rule of Law (reprise)

Another rerun, adapted from a post from seven years ago.

We’ve talked about how the Star Trek-Star Wars divide reflects preferences for a more lawful or more chaotic world; how F&SF stories often show us a defense of civilization against chaos; and how civilization makes science possible and rests in turn on human technology.  But both order and technology can be oppressive.  The missing element is the rule of law.

Universal Laws

It’s a crucial element of right governance that there are rules applying to everyone, as opposed to the arbitrary wishes of a dictator, who can make decisions based on favoritism, political preferences, or personal relationships.  The Wikipedia article describes rule of law as “the legal principle that law should govern a nation, as opposed to being governed by decisions of individual government officials.”

Rule of Law pyramid
(Rule of Law Institute of Australia)

As we saw in the recently updated post The Good King (reprise), the concept of the rule of law goes back at least to Aristotle.  It became a central principle of the American founders via the English tradition of John Locke.  “Rule of law implies that every citizen is subject to the law, including lawmakers themselves” (Wikipedia again).  It is thus in tension with kingship, where rule is almost by definition arbitrary and personal.  But one can have mixed cases—kings who are bound by certain laws, as in the British constitutional monarchy.

Without the rule of law, we depend on the good behavior of those who have power of some sort—physical, military, economic.  We slide toward the “war of each against all,” where might makes right and the vulnerable are the pawns of the strong.  Autocracy soon follows, as people look for any means to find safety from those who are powerful but unscrupulous.  Hence the quotation from John Christian Falkenberg, which I’ve used before:  “The rule of law is the essence of freedom.”  (Jerry Pournelle, Prince of Mercenaries (New York:  Baen 1989), ch. 21, p. 254.)  Strength itself, a good thing, is only safe under laws.

Test Cases

It’s easy to miss the importance of the rule of law.  We’re typically born into a society with better or worse laws, and criticize them from the inside.  It’s less common to find ourselves in straits where lawfulness as such has collapsed.  Regrettably, sizable numbers of people are exposed to such conditions in the world today.  But many of us are fortunate enough not to see them ourselves.  As always, fantasy and science fiction provide useful “virtual laboratories” for examining the possibilities.

Tunnel in the Sky (audiobook) cover

A classic SF case is where a group thrown into a “state of nature” attempts to set up a lawful society.  For example, in Heinlein’s Tunnel in the Sky (1955), students from a high-school class on survival techniques are given a final exam in which they are dropped onto an unspecified planet to survive for up to ten days.  When an astronomical accident leaves them stranded, they need to organize for the long term.  Rod Walker, the hero, becomes the leader-by-default of a growing group of young people.  The tension between this informal leadership and the question of forming an actual constitution—complete with committees, regulations, and power politics—makes up a central theme of the story.

The Postman movie poster

David Brin’s post-apocalyptic novel The Postman (1985), later made into a 1997 movie with Kevin Costner, illustrates the power of civil order, the unstated practices of a culture, as recalling—and perhaps fostering—the rule of law.  The hero, a wanderer who happens to have appropriated a dead postman’s uniform and mail sack, presents himself as a mail carrier for the “Restored United States of America” to gain shelter in one of the isolated fortress-towns, ruled by petty tyrants, that remain.  His desperate imposture snowballs into a spreading movement in which people begin to believe in this fiction, and this belief puts them on the road toward rebuilding civilization.  The result is a sort of field-test not only of civil order and government, but of what Plato famously imagined as the “noble lie.”

In Niven & Pournelle’s Lucifer’s Hammer (1977), a small community headed by a United States Senator hopes to serve as a nucleus for reconstructing civilization after a comet strike.  We see at the end the strong pull of personal rule or kingship:  as the Senator lies dying, the future of the community will be determined by which of the competing characters gains the personal trust and endorsement of the people—and the hand of the Senator’s daughter, a situation in which she herself recognizes the resurfacing of an atavistic criterion for rule.  Unstated, but perhaps implicit, is the nebulous idea that deciding in favor of scientific progress may also mean an eventual movement back toward an ideal of rule by laws, not by inherited power.

Seeking a Balance

The “laboratory” of F&SF is full of subversions, variations, and elaborations on the rule of law.  In particular, we should note the counter-trend previously discussed as “chaotic good.”  Laws can be stifling as well as liberating.

The Moon is a Harsh Mistress cover

Heinlein’s The Moon Is A Harsh Mistress (1966) imagines how the “rational anarchy” of a lunar prison colony is mobilized to throw off autocratic rule.  The healthy chaos of the libertarian Loonies is hardly utopian, but the story does make it seem appealing.  Interestingly, Heinlein returned to this setting with a kind of critique twenty years later in The Cat Who Walks Through Walls (1985), where the post-revolution lunar anarchy seems much less benign, seen from an outsider’s perspective.

Taran Wanderer book cover

While fantasy seems to concern itself with this issue much less than science fiction, consider the region called the “Free Commots” in Lloyd Alexander’s Chronicles of Prydain.  When protagonist Taran visits this area in the fourth book (Taran Wanderer), he finds a society of independent villages, where the most prominent citizens are master-craftspeople.  They neither have nor need a lord to organize them.  The Commots contrast favorably to the feudal or wilderness regions through which Taran travels.  A kind of anarchic democracy, as an ideal, thus sneaks into what otherwise seems to be a traditional aristocratic high fantasy.

One way of managing the tension between a government of laws and a culture of liberty is the principle of subsidiarity:  the notion that matters should be governed or controlled at the lowest possible organizational level where they can be properly handled.  It’s frequently illustrated in G.K. Chesterton’s ardent defenses of localism.  In The Napoleon of Notting Hill (1904), extreme localism is played for laughs—“half fun and full earnest,” to borrow Andrew Greeley’s phrase.  The more mature Tales of the Long Bow (1924), which might qualify as a sort of proto-steampunk story, treats the idea more seriously, in the form of an oddly high-tech (for 1924) revolt of local liberty against overweening and arbitrary national rule.

It remains true that we need good people as well as sound laws.  “Good men can make poor laws workable; poor men will wreak havoc with good laws” (James M. Landis, 1960; see this article at 432 & n.107).  The quality of a civilization can also be assessed by whether it fosters citizens who act with decency and good judgment even when there isn’t a law to constrain them (as in David Brin’s “Ritual of the Street Corner”).  After all, we neither can nor should try to create laws to govern everything.

But being willing to improvise well in situations where no law applies is different from considering oneself above the law, disdaining the constraints that apply to everybody else.  This is doubly and triply true of rulers, who are constantly tempted to arrogate power and dodge accountability to accomplish their ends.  If a ruler is allowed to get away with law-breaking, we’re headed for trouble.

Brin has noted that the stories that fill and shape our culture—movies, books, television—encourage a broad “suspicion of authority” that tells us all institutions are corrupt or useless, and so are most other people—so that the heroes and heroines of the stories can face and overcome challenges by their heroic actions.  Like most attitudes, suspicion of authority is helpful in moderation, but corrosive when it gets out of hand.  If that attitude leads us to throw over laws and institutions altogether in the hope that individual heroes or autocrats will save us, we need to keep in mind that benevolent dictatorship, unconstrained by law, is just one step away from despotism.

The Fragility of Civilization

When we grow up taking for granted the rule of law, we can fail to see how vulnerable it is—along with the civilization that it reflects and makes possible.

“The Establishment,” as they used to say in the 1960s, seems vast and invulnerable.  When we’re trying to make a change, it seems insuperable, so rigid that nothing can be done about it.  But this is an illusion.  The structure of civilization, good and bad, is fragile.  It’s easier than we think to throw away the rule of law, so painfully constructed (as Rod Walker found), in favor of shortcuts or easy answers to our problems.

One thing F&SF have brought us is a better sense of this vulnerability.  The spate of post-apocalyptic tales in recent years—zombie apocalypses, worldwide disasters, future dystopias like The Hunger Games, going all the way back to the nuclear-war stories of the 1950s—do help us appreciate that our civilization can go away.

But that collapse doesn’t require a disaster.  Civilization, and the rule of law, can erode gradually, insidiously, as in the “Long Night” stories we’ve talked about earlier.

Historically, the Sixties counterculture fostered anarchists who felt “the Establishment” was invulnerable.  Often with the best of intentions, they did more to undermine civil order than they expected.  Those who now see no better aim than breaking up the structures of democratic government and civil life—whether from the side of government, or from the grass roots—also fray the fabric of civilization.  The extrapolations of science fiction and fantasy illustrate why eroding the rule of law should not be taken lightly.

Near the bottom of Brin’s Web home page, he places the following:

I am a member of a civilization

It’s good that we have a rambunctious society, filled with opinionated individualists. Serenity is nice, but serenity alone never brought progress. Hermits don’t solve problems. The adversarial process helps us to improve as individuals and as a culture. Criticism is the only known antidote to error — elites shunned it and spread ruin across history. We do each other a favor (though not always appreciated) by helping find each others’ mistakes.

And yet — we’d all be happier, better off and more resilient if each of us were to now and then say:

“I am a member of a civilization.” (IAAMOAC)

Step back from anger. Study how awful our ancestors had it, yet they struggled to get you here. Repay them by appreciating the civilization you inherited.

It’s incumbent on all of us to cherish and defend the rule of law.  Give up civilization lightly, and we may not have the choice again.

The Good King (reprise)

Watching The Lord of the Rings movies again recently, to share the experience with my wife, brought to mind this post from eight years ago.  Here it is again, with minor updates.

I began to wonder some years back about the curious preference for monarchy in futuristic settings.  In the world at large, monarchies have been retreating in favor of republics and democracies, at least in theory, since 1776.  Why are SF writers so fond of equipping future societies with kings, emperors, and aristocracies?

Star Kingdoms

We can pass lightly over the old-time, pulp-type stories where royal rule is merely part of the local color:  Burroughs’ A Princess of Mars (1912), Edmond Hamilton’s The Star Kings (1949), E.E. Smith’s The Skylark of Space (1928) with its Osnomian royal families.  Here, like flashing swords and exotic costumes, monarchy is simply part of a deliberately anachronistic setting.  Similarly in high fantasy, where aristocracy comes naturally in the typical pseudo-medieval milieu.

But we see royal or aristocratic governments in more modern stories too.  Asimov’s Foundation stories are centered around a Galactic Empire. (See also the more recent Apple TV+ series of the same name.)  Since that series was based on Gibbon’s The History of the Decline and Fall of the Roman Empire, an Empire was inevitable.  Similarly in Star Wars, which draws heavily on Asimov.  Jerry Pournelle’s CoDominium future history has a First and a Second “Empire of Man.”  David Weber’s heroine Honor Harrington serves the “Star Kingdom of Manticore” (later “Star Empire”), modeled closely on England around 1810.  Lois McMaster Bujold’s Vorkosigan Saga contains a number of polities with different forms of government, but many of the stories focus on Barrayar, which has an Emperor.  Anne McCaffrey’s popular Pern series has no monarch, but has two parallel aristocracies (the feudal Holders and the meritocratic dragonriders).  It got to the point where I began to feel a decided preference for avoiding monarchical or imperial governments in SF storytelling.

The Lure of Kingship

Aragorn with crown

There’s something that attracts us in royalty—or we wouldn’t see so much of it.  I encountered this puzzlement directly.  As a kid reading The Lord of the Rings, I was as moved as anyone by the return of the true King.  I asked myself why.  If I don’t even approve of kingship in theory, why am I cheering for Aragorn?

The reasons we’re drawn to monarchy seem to include—

  • Kings are colorful. (So are princesses.)
  • Stability
  • Personal loyalty
  • Individual agency

The first point is obvious, but the others are worth examining.

Stability

It’s been pointed out that even in a constitutional government, a monarch provides a symbolic continuity that may help to hold a nation together.  British prime ministers may come and go, but the King, or Queen, is always there.  This gives some plausibility to the idea of a future society’s returning to monarchy.

Something like this stabilizing function is behind commoner Kevin Renner’s half-embarrassed harangue to Captain Rod Blaine, future Marquis of Crucis, in Niven & Pournelle’s The Mote in God’s Eye:  “maybe back home we’re not so thick on Imperialism as you are in the Capital, but part of that’s because we trust you aristocrats to run the show.  We do our part, and we expect you characters with all the privileges to do yours!”  (ch. 40)

Unfortunately, relying on the noblesse oblige of the aristocrats doesn’t always work out well.  It depends on who they are.  For every Imperial Britain, there’s a North Korea.  When the hereditary succession breaks down, you get a War of the Roses or Game of Thrones.

Too much depends on getting the right monarch.  By the law of averages, it doesn’t take long before you get a bad ruler, whether by inheritance or by “right of conquest”—and you’re up the well-known creek.

Personal Loyalty

Personal loyalty appeals to us more strongly than loyalty to an institution.  One can pledge allegiance to a state—but even the American Pledge of Allegiance starts with a symbol:  the flag, and then “the Republic for which it stands.”  Loyalty to an individual moves us more easily.

This kind of loyalty doesn’t have to be to a monarch.  Niven & Pournelle’s Oath of Fealty explores how loyalty among, and to, a trusted group of managers can form a stronger bond than the mere institutional connections of a typical modern bureaucracy.  One can be faithful to family (the root of the hereditary element in kingship), to friends, or even an institution or a people.  But it’s easiest with an individual.  This loyalty is the basis for the stability factor above.

Individual Agency

The vast machinery of modern government sometimes seems to operate entirely in the abstract, without real people involved.  “Moscow said today . . .”

In fact it’s always people who are acting.  But it’s easier to visualize this when you have a single person to focus on.  “When Grant advanced toward Richmond . . .”  In the extreme case, we have the ruler who claims to embody the state in his own person:  “L’état, c’est moi” (attributed to Louis XIV, the “Sun King” of France).

In a fascinating 2008 essay, Jo Walton quotes Bujold on political themes in SF:  “In fact, if romances are fantasies of love, and mysteries are fantasies of justice, I would now describe much SF as fantasies of political agency.”  A science fiction character is frequently involved in effecting a revolution, facing down a potential dictator, or establishing a new order—exercising autonomous power.  Walton links this notion of political agency to the fact that SF illustrates change:  “SF is the literature of changing the world.”  The world-changers can be outsiders, or they can be the rulers themselves—as in a number of the examples above.

It’s not surprising that we’re attracted to characters who act outside the normal rules.  We (especially Americans, perhaps) are fond of the idea that good people can act in ways that are untrammeled by the usual conventions.  Robin Hood is the classic example.  And the whole concept of the superhero—the uniquely powerful vigilante who can be relied on to act for the good—is powered by this attraction.

But this idealization of individual initiative is also dangerous.  Too much depends on getting the right hero—or the right monarch.  It can only work if the independent agent is seriously and reliably good:  virtuous, in the classical sense of virtue as a well-directed “habit” or fixed character trait.  Even then, we may be reluctant to give any hero unlimited power.  Too much is at stake if it goes wrong.

The Rule of Law

Our admiration for the powerful ruler is always in tension with our dedication to the rule of law:  “a government of laws, not of men,” in the well-known phrase attributed to John Adams.  We can see this as far back as Aristotle:  “law should rule rather than any single one of the citizens.  And following this same line of reasoning . . . even if it is better that certain persons rule, these persons should be appointed as guardians of the laws and as their servants.”  (Politics book III, ch. 16, 1287a)

No human being can be trusted with absolute authority.  This is the kernel of truth in the aphorism that “power tends to corrupt and absolute power corrupts absolutely.”  But we can’t get along without entrusting some power to someone.  When we do, it had better be someone who’s as trustworthy as possible.

The Ideal of the Good King

Thus the true king must be a virtuous person—a person of real excellence.  This is the ideal of an Aragorn or a King Arthur, whose return we’re moved to applaud (even against our better judgment).  It should be obvious that the same principles apply to the good queen—or emperor, empress, princess, prince, president, prime minister:  the leader we follow.  But I’ll continue using “king” for simplicity’s sake.

What virtues do we look for in a good monarch—aside from the obvious ones of justice, wisdom, courage, self-control?

If the ruler or rulers are going to be “servants of the laws,” they require humility.  A king who serves the law can’t claim to be its master.  Arrogance and hubris are fatal flaws in a ruler.  For example, we should always beware of the leader who claims he can do everything himself and is unable to work with others.

The good king is also selfless—seeking the common good of the people, not his own.  Self-aggrandizement is another fatal flaw.

In effect, what we’re looking for is a ruler who doesn’t want to rule:  a king who believes in the sovereignty and the excellence of common people.

Aragorn defers to Frodo

It’s significant that Aragorn, our model of the good king, is introduced in LotR as “Strider,” a scruffy stranger smoking in a corner of a common inn.  Even when he’s crowned in victory, he remembers to exalt the humble.  The movie has him tell the four hobbits, “You bow to no one.”  Tolkien’s text is more ceremonious:  “And then to Sam’s surprise and utter confusion he bowed his knee before them; and taking them by the hand . . . he led them to the throne, and setting them upon it, he turned . . . and spoke, so that his voice rang over all the host, crying:  ‘Praise them with great praise!’”  (Book VI, ch. 4, p. 232)

We see the same essential humility and selflessness in other admirable leaders, kings or not:  Taran in the Chronicles of Prydain, and the revolutionary princess in Lloyd Alexander’s Westmark trilogy; Niven & Pournelle’s Rod Blaine; Jack Ryan in Tom Clancy’s novels; “Dev” Logan, head of Omnitopia Inc. in Diane Duane’s Omnitopia Dawn—the unpretentious opposite of the “imperial CEO.”  America was fortunate enough to have such an example in the pivotal position of first President, George Washington.

The Alternative

At the other end of the spectrum, the most dangerous person to trust is an unprincipled and unscrupulous autocrat—someone convinced of his personal superiority and infallibility.  Giving power to an individual who has no interest in serving the common good, but only in self-aggrandizement, puts a nation in subjection to a Putin, a Mussolini, a Kim Jong-un.

The antithesis of the good king is the tyrant, who, however innocently he may start out, figures in our stories mainly as the oppressor to be overthrown.  It’s much better, if possible, to intercept such a potentially ruinous ruler before the tyranny comes into effect:  Senator Palpatine before he becomes Emperor, Nehemiah Scudder before he wins his first election.  Allowing the tyrant to gain power may make for good stories, but it generates very bad politics.

If we must have strong leaders, then in real life as well as in stories, character is key—and hubris is deadly.

Love By the Numbers

Computer Dating

It’s been some time since computers got involved with dating.  I don’t mean stories about romance with a robot—though those have been around for a long time too.  I’m thinking about stories where a couple becomes involved with each other via some sort of computer intermediary.

We have a subgenre of tales where a couple meets on an online platform—perhaps via chatting, as in You’ve Got Mail (1998), or gaming, as in Ready Player One and my own The World Around the Corner.  And of course the proliferation of actual online dating services—I met my wife that way—lends itself to romantic comedy.  Here, for example, is a list of the “10 best rom-coms about online dating,” which run the gamut from 1998 to 2021.

But we can even identify a sub-subgenre where the workings of the dating service itself are the basis for the meet-cute.  In Love, Guaranteed (2020), a lawyer takes on the case of a man who wants to sue a dating site that guarantees true love.  For the Christmas season, we have Mingle All the Way (2018), in which the designer of an app for busy people to find a plus-one without the stress of actual dating has to enroll in the app herself to convince an investor that the app is viable.  She ends up, of course, finding much more than a temporary plus-one.

Computers Aren’t Magic

If we’re going to construct a story that genuinely deals with computer-guided matching of couples, we need to think seriously about what computers can and can’t do.  It’s tempting just to wave around the word “algorithm” like a magic wand and assume that it can do all sorts of vaguely specified things—as if it were magic.  Our imaginary computer setup is a “black box”:  we don’t have to say how it works; it just somehow predicts which couples will get along.

Of course we can’t expect an author to produce an actual detailed account of how their hypothetical system works, any more than we expect any science fiction writer to tell us exactly how a faster-than-light drive would work, or an impenetrable force bubble.  If the writer could provide all the specs, they could patent the product and retire comfortably.  But, just as with any other SF premise, we have to make the imaginary invention plausible enough to engage the reader’s willing suspension of disbelief.

The Soulmate Equation cover

One recent rom-com, Christina Lauren’s The Soulmate Equation (2021), gets high marks for taking a sensible approach.  Freelance statistician and data analyst Jessica Davis runs into River Peña, a scientist whose startup company “has developed a service that connects people based on proprietary genetic profiling technology” (p. 18).  The technology is nothing so simple as a ‘love gene.’  Rather, in his initial study of over a thousand students, “a series of nearly forty genes were found to be tightly correlated with attraction” (p. 33).  River then worked backward:  “Could he find a genetic profile of people who had been happily married for longer than a decade?”  (p. 33)  The further research turned up “nearly two hundred genes that were linked to emotional compatibility long-term” (p. 34).

This has the sound of legitimate science.  Sample sizes are given, and the statistical universe is large enough to have some plausibility.  The imaginary research doesn’t establish why these gene sequences are related to compatibility:  it merely finds a correlation, which has yet to be explained.  And the number of genes involved indicates that the relationship must be highly complex.  The company’s tests (which sound rather like a 23andMe arrangement) don’t claim to predict romantic love with precision; they yield a percentage probability that two people will match well, from “Base Matches” up to 24% to “Titanium Matches” at 80-90%, and even three examples of “Diamond Matches” with a score higher than 90%.  If such testing were really possible, this sounds like a plausible account of how it would work.
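Purely to make the arithmetic concrete, here is a toy sketch of the sort of scoring the passage gestures at:  two people’s values at a couple of hundred hypothetical gene markers are compared, the weighted agreement is converted to a percentage, and the percentage is mapped onto the tiers quoted above.  The markers, weights, and formula are all invented for illustration; the novel never specifies one, and the tiers between 24% and 80% aren’t named in the excerpt.

```python
import random

N_MARKERS = 200
WEIGHTS = [random.random() for _ in range(N_MARKERS)]   # stand-in for fitted weights

def compatibility_percent(genome_a, genome_b):
    # Weighted fraction of markers on which the two genomes agree.
    total = sum(WEIGHTS)
    agreement = sum(w for w, a, b in zip(WEIGHTS, genome_a, genome_b) if a == b)
    return 100.0 * agreement / total

def match_tier(score):
    if score > 90:
        return "Diamond Match"
    if score >= 80:
        return "Titanium Match"
    if score <= 24:
        return "Base Match"
    return "intermediate tier (not named in the excerpt)"

# Two entirely made-up people with random 0/1 marker values.
alice = [random.randint(0, 1) for _ in range(N_MARKERS)]
bob = [random.randint(0, 1) for _ in range(N_MARKERS)]
score = compatibility_percent(alice, bob)
print(f"{score:.0f}% -> {match_tier(score)}")
```

With random genomes, of course, nearly every pair lands close to the middle of the scale; the conceit of the novel is that a real genetic signal would separate the Diamond pairs from the rest.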

Jessica takes the test on a dare, more or less, and (of course) is found to be a Diamond Match with River Peña himself, whom she’s already found a bit off-putting.  She warily agrees to try out a relationship with him, and hilarity ensues in classic romantic-comedy fashion.  It’s a delightful story, and the air of plausibility about the company and its compatibility testing helps support the reader’s enjoyment.

The Gene Factor

One of the reasons this superficial plausibility is important is that the underlying idea is so volatile.  Do we really believe a couple’s potential for romance could be predicted from their genome?  On the one hand, we find ourselves wanting to say that love is ineffable.  Romantic love is famously supposed to be unpredictable, unexplainable, defying reason.  On the other hand, we do also tend to feel that romantic compatibility rests on some kind of perceptible factors, and also that true love is somehow fated, that lovers are “made for each other.”  And given the limited but impressive successes modern biology has had in connecting genes to physical results, the genome is a tempting place to look for that kind of manifest destiny.

Whether we find that hypothesis likely will depend partly on whether we find the materialist theory convincing:  that human beings and their behavior can be entirely reduced to their physical structure.  Materialist approaches are widespread today, but one can also reasonably take the position that there are human characteristics not wholly reducible to biology.  (Even the book’s title refers to “soulmates,” not merely sexmates.)

But let’s say it’s plausible enough that at least the initial attractiveness of one person to another might be encoded in their genes.  I even recall reading somewhere that whether we like or dislike the way a certain person smells may be a wired-in indicator of genetic compatibility, which makes some evolutionary sense.  Can we generalize that notion, as Dr. Peña’s research is supposed to have done, to a genetic basis for long-term compatibility?

The Other Factors

Even on a physical basis, we are not merely a product of our genes.  While the “nature versus nurture” battle continues to simmer, we generally tend to be equally sure that our adult personalities are influenced by our cumulative life experiences as well as by our innate characteristics.  Identical twins don’t turn out to be identical people, even when they grow up in similar environments, much less if they grow up separately.  We can distinguish our temperament, the emotional potential we’re born with, from our “character,” the set of habits and tendencies and attitudes that owe a lot to experience and action over time.

Outward appearance is all we initially have to observe about a potential mate, but these aspects of temperament and character are more internal.  Necessarily, the internal factors are likely to come through more clearly as a relationship continues.  We’ve noted that “love at first sight” is a common enough experience of the beginning of love, but that its genuineness can only be validated retrospectively, over time.  Like the meet cute/meet hard situation, perhaps an initial attraction creates the opportunity to fall.  But that basic attraction is at most only the beginning of the story, not the end.  A couple’s ultimate compatibility is bound to depend on how their lives have shaped them as well as on their genetic endowments.

Even if nature and nurture combine to present a felicitous pairing, that’s still not the end of the story.  We are not compelled to love someone.  Irresistible attraction, sure:  finding someone especially fetching may (or may not) be entirely beyond our control.  But what we do about it isn’t.  What we develop out of that basic orientation toward each other is what really matters.

Here, The Soulmate Equation gratifyingly takes the right tack.  Responding to Jess’s skepticism about the significance of the test results, River says:

“I’ll believe the test if it says we are biologically compatible, but I’m not a scientific zealot, Jess.  I recognize the element of choice. . . . No one is going to force you to fall in love with me.”  (p. 130; see also p. 230)

Jess concludes:  “Destiny could also be a choice, she’d realized.  To believe or not, to be vulnerable or not, to go all in or not.”  (p. 351)

So I’m doubtful in the end that Dr. Peña’s algorithm could work quite as well as the story suggests.  Nonetheless, the book is an enjoyable romp, and also a thoughtful one.  Any tale that helps us think more insightfully about the puzzles of romantic love is a stimulating as well as an entertaining read.