The Purpose of AI

I recently read an excellent cautionary tale (with a romance to boot), David Walton’s Three Laws Lethal (2019).  The subject of “artificial intelligence” or AI (it isn’t really intelligence, but that’s another story) is hot.  To take only one rather specialized example, the Federal Communications Commission’s Consumer Advisory Committee last year carried out a brief survey of the roles of AI, both harmful and helpful, in dealing with robocalls and robotexts.  So it seems like an appropriate moment to take a look at Walton’s insights.

Frankenstein and the Three Laws

It’s well known that the early history of SF—starting with what’s considered by some to be the first modern SF story, Mary Shelley’s Frankenstein (1818)—is replete with tales of constructed creatures that turn on their creators and destroy them.  Almost as well known is how Isaac Asimov, as he explains in the introduction to his anthology The Rest of the Robots (1964), “quickly grew tired of this dull hundred-times-told tale.”  Like other tools, Asimov suggested, robots would be made with built-in safeguards.  Knives have hilts, stairs have banisters, electric wiring is insulated.  The safeguards Asimov devised, gradually and through conversations with SF editor John W. Campbell starting around 1942, were his celebrated Three Laws of Robotics:

I, Robot novel cover

1—A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

2—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

The “positronic” or “Asenion” robots in many of Asimov’s stories were thus unable to attack their human creators (First Law).  They could not run wild and disobey orders (Second Law).  Asimov’s robot could not become yet another Frankenstein’s monster.

That still left plenty of room for fascinating stories—many of them logic puzzles built around the Three Laws themselves.  A number of the loopholes and ramifications of the Three Laws are described in the Wikipedia article linked above.  It even turned out to be possible to use robots to commit murder, albeit unwittingly, as in Asimov’s novel The Naked Sun (1957).  When he eventually integrated his robot stories with his Foundation series, he expanded the Three Laws construct considerably—but that’s beyond our scope here.

Autonomous Vehicles

To discuss Three Laws Lethal, I must of course issue a

Spoiler Alert!
Three Laws Lethal book cover

Walton’s book does cite Asimov’s laws just before Chapter One, but his characters don’t start out by trying to create Asenion-type humanoid robots.  They’re just trying to start a company to design and sell self-driving cars.

The book starts with a vignette in which a family riding in a “fully automated Mercedes” gets into an accident.  To save the passengers from a falling tree, the car swerves out of the way, in the process hitting a motorcyclist in the next lane.  The motorcyclist is killed.  The resulting lawsuit by the cyclist’s wife turns up at various points in the story that follows.

Tyler Daniels and Brandon Kincannon are friends, contemporary Silicon Valley types trying to get funding for a startup.  Computer genius Naomi Sumner comes up with a unique way to make their automated taxi service a success:  she sets up a machine learning process by creating a virtual world with independent AIs that compete for resources to “live” on (ch. 2 and 5).  She names them “Mikes” after a famous science-fictional self-aware computer.  The Mikes win resources by driving real-world cars successfully.  In a kind of natural selection, the Mikes that succeed in driving get more resources and live longer:  the desired behavior is “reinforced.”
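
Walton doesn’t give implementation details, but the setup he describes reads like a standard evolutionary reinforcement loop.  Here’s a minimal, purely illustrative Python sketch of that idea; every name and number in it (the Mike class, the skill parameter, the resource thresholds) is my own invention for this post, not anything from the novel:

```python
import random

class Mike:
    """A toy agent that 'lives' on resources and earns more by driving well."""

    def __init__(self, skill=None):
        # 'skill' stands in for whatever learned driving policy a real Mike has
        self.skill = random.random() if skill is None else skill
        self.resources = 10.0

    def drive(self):
        # a noisy driving outcome; higher skill means more successful trips
        return self.skill + random.gauss(0.0, 0.1)

def mutated(skill):
    # offspring inherit their parent's skill, slightly perturbed
    return min(1.0, max(0.0, skill + random.gauss(0.0, 0.05)))

def run_generation(population, cap=100, cost=1.0, payout=2.0):
    survivors = []
    for mike in population:
        mike.resources -= cost            # living costs resources every cycle
        if mike.drive() > 0.5:            # a successful trip...
            mike.resources += payout      # ...is "reinforced" with more resources
        if mike.resources > 0:
            survivors.append(mike)        # a Mike that runs out of resources "dies"
    # well-off Mikes reproduce, passing on whatever made them successful
    offspring = [Mike(mutated(m.skill)) for m in survivors if m.resources > 15.0]
    crowd = survivors + offspring
    crowd.sort(key=lambda m: m.resources, reverse=True)
    return crowd[:cap]                    # limited niches create selection pressure

population = [Mike() for _ in range(100)]
for _ in range(50):
    population = run_generation(population)
if population:
    mean = sum(m.skill for m in population) / len(population)
    print(f"{len(population)} Mikes survive; mean driving skill {mean:.2f}")
```

Note what the loop does not contain:  the only behavior ever reinforced is successful driving.  Nothing in the selection pressure says anything about the humans around the cars.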

Things start to go wrong almost at once, though.  The learning or reinforcement methods programmed into the AIs don’t include anything like the Three Laws.  A human being who’s been testing the first set of autonomous (Mike-guided) vehicles by trying to crash into them is killed by the cars themselves—since they perceive that human being as a threat.  Two competing fleets of self-guided vehicles see each other as adversaries and can be turned against their controllers’ enemies.  The story is both convincing—the AI development method sounds quite plausible and up-to-the-minute (at least to this layman)—and unnerving.

But the hypothetical AI system in the novel, it seems to me, casts some light on an aspect of AI that is not highlighted in Three Laws-type stories.

Having a Goal

The Mikes in Three Laws Lethal are implicitly given a purpose by being set up to fight for survival.  That purpose is survival itself.  We recall that a robot’s survival is also included as the third of the Three Laws—but in that context survival is subordinated to protecting humans and obeying orders.  Asimov’s robots are conceived as basically passive.  They would resist being destroyed (unless given orders to the contrary), but they don’t take positive action to seek the preservation or extension of their own existence.  The Mikes, however, like living beings, are motivated to affirmatively seek and maintain themselves.

If an AI population is given a goal of survival or expansion, then we’re all set up for Frankensteinian violations of the First Law.  That’s what the book depicts, although in a far more sophisticated and thoughtful way than the old-style SF potboilers Asimov so disliked.

At one point in Walton’s story, Naomi decides to “change the objective.  She didn’t want them to learn to drive anymore.  She wanted them to learn to speak” (ch. 23, p. 248)—in order to show they are sapient.  Changing the goal would change the behavior.  As another character puts it later on, “[i]t’s not a matter of preventing them from doing what they want” (as if a Law of Robotics were constraining them from pursuing a purpose, like a commandment telling humans what not to do).  Rather, “[w]e teach them what to want in the first place.”  (ch. 27, p. 288)

Goals and Ethics

Immanuel Kant

The Three Laws approach assumes that the robot or AI has been given a purpose—in Asimov’s conception, by being given orders—and the Laws set limits to the actions it can take in pursuing that purpose.  If the Laws can be considered a set of ethical principles, then they correspond to what’s called “deontological” ethics, a set of rules that constrain how a being is allowed to act.  What defines right action is based on these rules, rather than on consequences or outcomes.  In the terms used by philosopher Immanuel Kant, the categorical imperative, the basic moral law, determines whether we can lawfully act in accordance with our inclinations.  The inclinations, which are impulses directing us toward some goal or desired end, are taken for granted; restraining them is the job of ethics.

Some other forms of ethics focus primarily on the end to be achieved, rather than on the guardrails to be observed in getting there.  The classic formulation is that of Aristotle:  “Every art and every investigation, and similarly every action and pursuit, is considered to aim at some good.”  (Nicomachean Ethics, I.i, 1094a1)  Some forms of good-based or axiological ethics focus mostly on the results, as in utilitarianism; others focus more on the actions of the good or virtuous person.  When Naomi, in Walton’s story, talks about changing the objective of the AI(s), she’s implicitly dealing with an axiological or good-based ethic.

As we’ve seen above, Asimov’s robots are essentially servants; they don’t have purposes of their own.  There is a possible exception:  the proviso in the First Law that a robot may not through inaction allow harm to come to humans does suggest an implicit purpose of protecting humans.  In the original Three Laws stories, however, that proviso did not tend to stimulate the robots to affirmative action to protect or promote humans.  Later on, Asimov did use something like this pro-human interest to expand the robot storyline and connect it with the Foundation stories.  So my description of Three Laws robots as non-purposive is not absolutely precise.  But it does, I think, capture something significant about the Asenion conception of AI.

Selecting a Purpose

There has been some discussion, factual and fictional, about an AI’s possible purposes.  I see, for example, that there’s a Wikipedia page on “instrumental convergence,” which talks about the kinds of goals that might be given to an AI—and how an oversimplified goal might go wrong.  A classic example is that of the “paperclip maximizer.”  An AI whose only goal was to make as many paper clips as possible might end by turning the entire universe into paper clips, consistent with its sole purpose.  In the process, it might decide, as the Wikipedia article notes, “that it would be much better if there were no humans because humans might decide to switch it off,” which would diminish the number of paper clips.  (Apparently there’s actually a game built on this thought-experiment.  Available at office-supply stores near you, no doubt . . .)
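
The paperclip scenario is easy to state in code.  In this deliberately naive sketch (the actions and their numbers are invented for illustration), the agent scores candidate actions by expected paperclips alone, so harm to humans is simply invisible to its objective:

```python
# Invented actions and numbers, purely for illustration.
ACTIONS = {
    "run_factory":        {"paperclips": 100, "harm_to_humans": 0},
    "melt_down_cars":     {"paperclips": 500, "harm_to_humans": 3},
    "disable_off_switch": {"paperclips": 900, "harm_to_humans": 9},
}

def paperclip_score(effects):
    return effects["paperclips"]  # nothing else enters the objective

best = max(ACTIONS, key=lambda name: paperclip_score(ACTIONS[name]))
print(best)  # -> disable_off_switch: humans who might switch it off cost paperclips
```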

A widget-producing machine like the paperclip maximizer has a simple and concrete purpose.  But the purpose need not be so mundane.  Three Laws Lethal has one character instilling the goal of learning to speak, as noted above.  A recent article by Lydia Denworth describes a real-life robot named Torso that’s being programmed to “pursue curiosity.”  (Scientific American, Dec. 2024, at 64, 68)

It should be possible in principle to program multiple purposes into an AI.  A robot might have the goal of producing paper clips, but also the goal of protecting human life, say.  But it would then also be necessary to include in the program some way of balancing or prioritizing the goals, since they would often conflict or compete with each other.  There’s precedent for this, too, in ethical theory, such as the traditional “principle of double effect” to evaluate actions that have both good and bad results.
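
One simple way to encode such a priority ordering, in the spirit of the Three Laws’ strict hierarchy, is lexicographic:  the higher-priority goal acts as a filter on the actions among which the lower-priority goal chooses.  A minimal sketch, reusing the invented action table from the previous example:

```python
# The same invented action table as in the previous sketch.
ACTIONS = {
    "run_factory":        {"paperclips": 100, "harm_to_humans": 0},
    "melt_down_cars":     {"paperclips": 500, "harm_to_humans": 3},
    "disable_off_switch": {"paperclips": 900, "harm_to_humans": 9},
}

def permissible(effects):
    # First-Law-style constraint: any harm to humans rules an action out entirely
    return effects["harm_to_humans"] == 0

def choose(actions):
    allowed = {name: e for name, e in actions.items() if permissible(e)}
    if not allowed:
        return None  # if every option harms someone, do nothing
    # only within the permissible set does the production goal get maximized
    return max(allowed, key=lambda name: allowed[name]["paperclips"])

print(choose(ACTIONS))  # -> run_factory: harmful options never reach the optimizer
```

A weighted sum of the two goals would be the other obvious design, but then a large enough paperclip payoff could eventually outbid the harm penalty; the strict ordering is what rules that out.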

Note that we’ve been speaking of goals given to or programmed into the AI by a human designer.  Could an AI choose its own goals?  The question that immediately arises is, how or by what criteria would the AI make that choice?  That methodological or procedural question arises even before the more interesting and disturbing question of whether an AI might choose bad goals or good ones.  There’s an analogy here to the uncertainty faced by parents in raising children:  how does one (try to) ensure that the child will embrace the right ethics or value system?  I seem to recall that David Brin has suggested somewhere that the best way to develop beneficent AIs is actually to give them a childhood of a sort, though I can’t recall the reference at the moment.

Conclusions, Highly Tentative

The above ruminations suggest that if we want AIs that behave ethically, it may be necessary to give them both purposes and rules.  We want an autonomous vehicle that gets us to our destination speedily, but we want it to respect Asimov’s First Law about protecting humans in the process.  The more we consider the problem, the more it seems that what we want for our AI offspring is something like a full-blown ethical system, more complex and nuanced than the Three Laws, more qualified and guarded than Naomi’s survival-seeking Mikes.

This is one of those cases where contemporary science is actually beginning to implement something science fiction has long discussed.  (Just the other day, I read an article by Geoffrey Fowler (11/30/2024) about how Waymo robotaxis don’t always stop for a human at a crosswalk.)  Clearly, it’s time to get serious about how we grapple with the problem Walton so admirably sets up in his book.

The Good King (reprise)

Watching The Lord of the Rings movies again recently, to share the experience with my wife, brought to mind this post from eight years ago.  Here it is again, with minor updates.

I began to wonder some years back about the curious preference for monarchy in futuristic settings.  In the world at large, monarchies have been retreating in favor of republics and democracies, at least in theory, since 1776.  Why are SF writers so fond of equipping future societies with kings, emperors, and aristocracies?

Star Kingdoms

We can pass lightly over the old-time, pulp-type stories where royal rule is merely part of the local color:  Burroughs’ A Princess of Mars (1912), Edmond Hamilton’s The Star Kings (1949), E.E. Smith’s The Skylark of Space (1928) with its Osnomian royal families.  Here, like flashing swords and exotic costumes, monarchy is simply part of a deliberately anachronistic setting.  Similarly in high fantasy, where aristocracy comes naturally in the typical pseudo-medieval milieu.

But we see royal or aristocratic governments in more modern stories too.  Asimov’s Foundation stories are centered on a Galactic Empire. (See also the more recent Apple TV+ series of the same name.)  Since that series was based on Gibbon’s The History of the Decline and Fall of the Roman Empire, an Empire was inevitable.  Similarly in Star Wars, which draws heavily on Asimov.  Jerry Pournelle’s CoDominium future history has a First and a Second “Empire of Man.”  David Weber’s heroine Honor Harrington serves the “Star Kingdom of Manticore” (later “Star Empire”), modeled closely on England around 1810.  Lois McMaster Bujold’s Vorkosigan Saga contains a number of polities with different forms of government, but many of the stories focus on Barrayar, which has an Emperor.  Anne McCaffrey’s popular Pern series has no monarch, but has two parallel aristocracies (the feudal Holders and the meritocratic dragonriders).  It got to the point where I began to feel a decided preference for avoiding monarchical or imperial governments in SF storytelling.

The Lure of Kingship

Aragorn with crown

There’s something that attracts us in royalty—or we wouldn’t see so much of it.  I encountered this puzzlement directly.  As a kid reading The Lord of the Rings, I was as moved as anyone by the return of the true King.  I asked myself why.  If I don’t even approve of kingship in theory, why am I cheering for Aragorn?

The reasons we’re drawn to monarchy seem to include—

  • Kings are colorful. (So are princesses.)
  • Stability
  • Personal loyalty
  • Individual agency

The first point is obvious, but the others are worth examining.

Stability

It’s been pointed out that even in a constitutional government, a monarch provides a symbolic continuity that may help to hold a nation together.  British prime ministers may come and go, but the King, or Queen, is always there.  This gives some plausibility to the idea of a future society’s returning to monarchy.

Something like this stabilizing function is behind commoner Kevin Renner’s half-embarrassed harangue to Captain Rod Blaine, future Marquis of Crucis, in Niven & Pournelle’s The Mote in God’s Eye:  “maybe back home we’re not so thick on Imperialism as you are in the Capital, but part of that’s because we trust you aristocrats to run the show.  We do our part, and we expect you characters with all the privileges to do yours!”  (ch. 40)

Unfortunately, relying on the noblesse oblige of the aristocrats doesn’t always work out well.  It depends on who they are.  For every Imperial Britain, there’s a North Korea.  When the hereditary succession breaks down, you get a War of the Roses or Game of Thrones.

Too much depends on getting the right monarch.  By the law of averages, it doesn’t take long before you get a bad ruler, whether by inheritance or by “right of conquest”—and you’re up the well-known creek.

Personal Loyalty

Personal loyalty appeals to us more strongly than loyalty to an institution.  One can pledge allegiance to a state—but even the American Pledge of Allegiance starts with a symbol:  the flag, and then “the Republic for which it stands.”  Loyalty to an individual moves us more easily.

This kind of loyalty doesn’t have to be to a monarch.  Niven & Pournelle’s Oath of Fealty explores how loyalty among, and to, a trusted group of managers can form a stronger bond than the mere institutional connections of a typical modern bureaucracy.  One can be faithful to family (the root of the hereditary element in kingship), to friends, or even an institution or a people.  But it’s easiest with an individual.  This loyalty is the basis for the stability factor above.

Individual Agency

The vast machinery of modern government sometimes seems to operate entirely in the abstract, without real people involved.  “Moscow said today . . .”

In fact it’s always people who are acting.  But it’s easier to visualize this when you have a single person to focus on.  “When Grant advanced toward Richmond . . .”  In the extreme case, we have the ruler who claims to embody the state in his own person:  “L’état, c’est moi” (attributed to Louis XIV, the “Sun King” of France).

In a fascinating 2008 essay, Jo Walton quotes Bujold on political themes in SF:  “In fact, if romances are fantasies of love, and mysteries are fantasies of justice, I would now describe much SF as fantasies of political agency.”  A science fiction character is frequently involved in effecting a revolution, facing down a potential dictator, or establishing a new order—exercising autonomous power.  Walton links this notion of political agency to the fact that SF illustrates change:  “SF is the literature of changing the world.”  The world-changers can be outsiders, or they can be the rulers themselves—as in a number of the examples above.

It’s not surprising that we’re attracted to characters who act outside the normal rules.  We (especially Americans, perhaps) are fond of the idea that good people can act in ways that are untrammeled by the usual conventions.  Robin Hood is the classic example.  And the whole concept of the superhero—the uniquely powerful vigilante who can be relied on to act for the good—is powered by this attraction.

But this idealization of individual initiative is also dangerous.  Too much depends on getting the right hero—or the right monarch.  It can only work if the independent agent is seriously and reliably good:  virtuous, in the classical sense of virtue as a well-directed “habit” or fixed character trait.  Even then, we may be reluctant to give any hero unlimited power.  Too much is at stake if it goes wrong.

The Rule of Law

Our admiration for the powerful ruler is always in tension with our dedication to the rule of law:  “a government of laws, not of men,” in the well-known phrase attributed to John Adams.  We can see this as far back as Aristotle:  “law should rule rather than any single one of the citizens.  And following this same line of reasoning . . . even if it is better that certain persons rule, these persons should be appointed as guardians of the laws and as their servants.”  (Politics book III, ch. 16, 1287a)

No human being can be trusted with absolute authority.  This is the kernel of truth in the aphorism that “power tends to corrupt and absolute power corrupts absolutely.”  But we can’t get along without entrusting some power to someone.  When we do, it had better be someone who’s as trustworthy as possible.

The Ideal of the Good King

Thus the true king must be a virtuous person—a person of real excellence.  This is the ideal of an Aragorn or a King Arthur, whose return we’re moved to applaud (even against our better judgment).  It should be obvious that the same principles apply to the good queen—or emperor, empress, princess, prince, president, prime minister:  the leader we follow.  But I’ll continue using “king” for simplicity’s sake.

What virtues do we look for in a good monarch—aside from the obvious ones of justice, wisdom, courage, self-control?

If the ruler or rulers are going to be “servants of the laws,” they require humility.  A king who serves the law can’t claim to be its master.  Arrogance and hubris are fatal flaws in a ruler.  For example, we should always beware of the leader who claims he can do everything himself and is unable to work with others.

The good king is also selfless—seeking the common good of the people, not his own.  Self-aggrandizement is another fatal flaw.

In effect, what we’re looking for is a ruler who doesn’t want to rule:  a king who believes in the sovereignty and the excellence of common people.

Aragorn defers to Frodo

It’s significant that Aragorn, our model of the good king, is introduced in LotR as “Strider,” a scruffy stranger smoking in a corner of a common inn.  Even when he’s crowned in victory, he remembers to exalt the humble.  The movie has him tell the four hobbits, “You kneel to no one.”  Tolkien’s text is more ceremonious:  “And then to Sam’s surprise and utter confusion he bowed his knee before them; and taking them by the hand . . . he led them to the throne, and setting them upon it, he turned . . . and spoke, so that his voice rang over all the host, crying:  ‘Praise them with great praise!’”  (Book VI, ch. 4, p. 232)

We see the same essential humility and selflessness in other admirable leaders, kings or not:  Taran in the Chronicles of Prydain, and the revolutionary princess in Lloyd Alexander’s Westmark trilogy; Niven & Pournelle’s Rod Blaine; Jack Ryan in Tom Clancy’s novels; “Dev” Logan, head of Omnitopia Inc. in Diane Duane’s Omnitopia Dawn—the unpretentious opposite of the “imperial CEO.”  America was fortunate enough to have such an example in the pivotal position of first President, George Washington.

The Alternative

At the other end of the spectrum, the most dangerous person to trust is an unprincipled and unscrupulous autocrat—someone convinced of his personal superiority and infallibility.  Giving power to an individual who has no interest in serving the common good, but only in self-aggrandizement, puts a nation in subjection to a Putin, a Mussolini, a Kim Jong-un.

The antithesis of the good king is the tyrant, who, however innocently he may start out, figures in our stories mainly as the oppressor to be overthrown.  It’s much better, if possible, to intercept such a potentially ruinous ruler before the tyranny comes into effect:  Senator Palpatine before he becomes Emperor, Nehemiah Scudder before he wins his first election.  Allowing the tyrant to gain power may make for good stories, but it generates very bad politics.

If we must have strong leaders, then in real life as well as in stories, character is key—and hubris is deadly.

Foundation and Dune

As the Apple Foundation series has gradually diverged from the books, sinking from ‘adapted from Asimov’s series’ to ‘loosely inspired by Asimov’s series’ levels, we’ve seen a dramatically opposite example of a classic SF novel adaptation:  the latest movie version of Frank Herbert’s Dune.  The two make an instructive comparison.

Spoiler Alert!

Apple Strikes Out

I haven’t quite finished viewing this season of the Foundation TV series yet, but the trend is pretty clear.  Apple’s version has departed from the storyline of the written works so extensively that I can’t picture how they could possibly get back to it.  Unfortunately, what Goyer & co. have replaced it with is just routine space opera, mildly interesting but no more. 

The original series, as I said in my last post, is cerebral.  It’s more like a political drama than like Star Wars.  And it seems to me that, pace the commentators who consider it unfilmable, the original story could have been filmed in the manner of a political drama, with a modicum of action involved (Hober Mallow’s face-off with the Korellians in “The Merchant Princes,” the escape of the Darells and Ebling Mis from the Mule’s minions, et cetera).  But that’s not how moviegoing audiences have been taught to think of science fiction, and the Apple writers have struck out in a different direction—back to the safe and familiar, rather than what’s distinctive in the Foundation series.

The warship Invictus

The judgment of Rob Bricken in Gizmodo (10/22/21)—“Foundation Just Became Star Wars, and It Sucks”—may be a little simplified.  But it’s basically sound.  The example that triggered Bricken’s article is a useful one.  Several of the episodes (6-8) focus on how warriors from Anacreon kidnap several Foundation folks to try and gain control of a massive Imperial warship, the Invictus.  The ship is presented as a kind of Death Star, a crucial weapon.  The Anacreonians want to use it for revenge, to destroy Trantor, the capital of the Empire—which is presented as a major blow to civilization, something Our Heroes must stop.

But this is all backwards.  In “The Mayors,” third part of the first Foundation book, Anacreon does get the Foundation to help them refurbish an old Imperial warship that they found derelict in space.  The Anacreonians think of this as a major victory, though their concern is expanding their rule in the Periphery, not attacking Trantor.  But the whole point of the incident is that possession of this Big Damn Weapon makes no difference in the course of history.  The canny Salvor Hardin neutralizes the significance of this warship through entirely nonviolent means—a matter of social and psychological leverage rather than military force.  (I’m avoiding the details so as not to spoil the story for those who may want to go back fruitfully to the written works.)

Nor, for that matter, is the fate of the Imperial capital especially important in the long run.  The Seldon Plan predicts its fall in the early years of the Plan, and the collapse of the Empire is necessary to create the environment in which the rise of the Foundation can occur.

Meanwhile, in the TV series, the uploaded simulacrum of Hari Seldon appears to be trying to establish the Second Foundation on his homeworld of Helicon, a planet of no significance in the original series.  Aficionados of the books will recognize that this change (unless it’s all an elaborate deception) would undo most of the action and tension of the latter half of the series.  Again, I’m being deliberately vague (read the books!).

Emperor Day

And Apple continues to follow the Emperors through a peculiar religious ordeal that may or may not have any long-term significance.  There is a religion-politics connection in the original series; it’s possible that Apple intends to bend this arc back to meet the original plotline in some way.  But, again, it’s so far off track already that the result is likely to have little resemblance to Asimov’s story.

Apple’s version of Salvor Hardin (who at this point shares nothing but the name with Asimov’s character) continues to be presented as a Chosen One.  So is Gaal Dornick, on whom the writers have bestowed an ability to predict the future by some sort of mathematical or mystical intuition (a notion that almost seems to have been borrowed from Dune, oddly enough).  In Episode 6, “Death and the Maiden,” at 34:30, Hari Seldon goes so far as to talk about “an entire galaxy pivoting around the actions of an individual.”  But that’s exactly what the premise of the Seldon Plan denies, as Asimov tells us over and over again.  Emphasizing the crucial importance of individuals may be a good narrative practice in itself (and is arguably true in fact).  It is, however, simply inconsistent with Asimov’s premise—at least until the appearance of the Mule, the ‘exception that proves the rule.’

So far, at least, Apple’s Foundation TV series exemplifies one way an adaptation can go wrong.  By ignoring what’s interesting and engaging in the original books, and substituting entirely different content that simply happens to be what’s in fashion at present, the adaptation can lose what’s valuable in the original without the benefit of anything new and equally interesting.

Villeneuve Scores a Victory

Frank Herbert’s iconic SF novel Dune (1965) has been transmuted to video twice before.  A 1984 film by David Lynch has received mixed reviews; it has its quirks, but the major problem is that, since a 507-page book is compressed into 2:17 of film, it’s unlikely anyone not already familiar with the book could follow the complex plot.  In 2000, the Syfy Channel released a TV mini-series version; I’ve never seen it, but, again, reports have been mixed.

Denis Villeneuve’s version hit American theatres on October 21, 2021.  The new film is impressive.  Note that this show is only the first half of the story; Dune:  Part Two is currently (12/2021) scheduled for release October 20, 2023.  That makes sense.  No two-hour movie could possibly do justice to the book.  (I’m only speaking here about the first book; describing the innumerable sequels, prequels, and associated volumes that have come out since would take an entire post by itself—but IMHO, the later add-ons decline in quality exponentially, so we can safely ignore them here.)

Zendaya as Chani

What’s striking about the new movie is the care it takes in translating Herbert’s work to the screen.  The novel’s remarkable worldbuilding is reflected in stunning visuals that fit together smoothly to support the plot.  Watching it, I had the same kind of reaction I did watching The Fellowship of the Ring twenty years ago:  wow, there it is, just as I imagined it:  ornithopters, stillsuits, Duke Leto, Chani.  The casting is excellent; almost all the actors embody the characters vividly.  (One of the reasons I’ve never gone back to watch the TV mini-series, which I taped at the time, is that I just can’t envision William Hurt as the Duke.)

Moreover, the plot holds together.  Villeneuve follows the storyline of the book very closely.  He does it intelligently, though, rather than slavishly.  For example, there was a banquet scene in the book that doesn’t appear in the movie.  But the banquet isn’t really essential to the plot, and it would have been particularly hard to render it on film in any case—almost all the interest of the scene consists in the characters’ internal thoughts about what’s happening.  So, although I’d been looking forward to seeing that scene, I must agree that it made sense to skip it to save time and finesse a difficult cinematic challenge.

On the whole, though, the storyline of the movie closely reflects that of the book.  This means we get to enjoy the things that made the book engrossing in the first place:  the conflicting allegiances that the hero, Paul Atreides, must navigate; the quasi-mystical disciplines and secret long-term planning of the Bene Gesserit; the devious alliance of the Emperor and the villainous House Harkonnen; the way Paul and his mother Jessica begin to become familiar with the culture of the desert-dwelling Fremen, first officially, and then later when they’re on the run from the Harkonnen.  These pieces have to fit together perfectly to make the plot understandable; and from what I hear, the average moviegoer who has not read the novel can indeed follow that intricate plot.  This is a noteworthy achievement for the director, screenwriters, and cast.

Aerial battle in 2011 The Three Musketeers

When we hear that a favorite book is being translated to film, this is what we’re primarily looking for:  a new perspective on what was so good in the book.  A movie can get away with substantially altering the story:  see, for instance, my earlier discussion of Man of La Mancha, or the 2011 steampunk version of The Three Musketeers.  But if that’s the path they choose to follow, it’s up to the screenwriters to make the revised story work, and give us a new structure that’s just as satisfying as the original (though perhaps in different ways).  The third possibility is that instead of doing either of those two things, the writers just mess up the original story without giving us a new “take” that can stand on its own feet.  And unfortunately, that third category is the one into which Apple’s Foundation seems to be falling.

Hope for the Future

Perhaps the Foundation crew will still find a way to pull something great out of the plot snarl they’ve created so far.  Perhaps not.  But I’m pleased that the box-office success of the latest Dune can stand as an example to the industry that a genuinely faithful version of an SF story can be both a critical and a money-making success.  With luck, we might see a trend in this direction—drawing on the widely varied types of stories available in the F&SF genres rather than simply looking for the next Game of Thrones or Star Wars.

Third Foundation

I finally caved and subscribed to yet another streaming service, Apple TV+.  I couldn’t resist the need to see what the new TV series would make of Isaac Asimov’s classic SF Foundation stories.

Although the book series is on the order of eighty years old, the TV series is just getting started, so I need to issue a

Spoiler Alert!

Asimov’s Appeal

I grew up reading the Foundation series; it was always a favorite of mine.  Asimov took his premise from Gibbon’s The History of the Decline and Fall of the Roman Empire (1776-1789), with a science-fictional twist.

Isaac Asimov, Foundation, cover

A twelve-thousand-year-old empire rules the galaxy; but Hari Seldon, inventor of a new science of “psychohistory” that statistically predicts the aggregate actions of human masses (as distinct from the acts of individual persons), realizes that the Empire is headed for an inevitable collapse.  Thirty thousand years of chaos and barbarism will follow.  But, while Seldon concludes the fall cannot be stopped, he does see a way to shorten the period of darkness.  He establishes two “Foundations” from which civilization may be restored more quickly—in a mere thousand years.  Seldon’s mathematics allows him to arrange things in such a way that the Seldon Plan will inevitably prevail—at least to a very high order of probability.

A few years ago I discussed the Seldon Plan in a post on “Prophecy and the Plan” (2018).  For a more detailed description, and one reader’s take on the novels, see Ben Gierhart’s 10/6/2021 article on Tor.

The original three books consist of a series of short stories taking place over about four hundred years.  There are some overlapping characters, but no character persists through the whole time period.  Part of the attraction of the series is the sweep of history over many lifetimes, giving a sense of scope and gravity to the combined stories.  Some of it comes from the age-old appeal of the fated outcome:  we know the Plan will prevail, but how?  And from the midpoint of the series on, a different question takes over:  if through a low-probability turn of events the Plan is in danger of failing, can it be preserved?

We do want it to be preserved, even though the (First) Foundation is composed of fallible and all-too-human people, because the great overarching goal of the Plan is the preservation of civilization in the face of barbarism.  I’ve noted before that this is a compelling theme.

Second Foundation cover

Most of the original stories were first published individually in the SF magazines, and later collected into the aforementioned three volumes—Foundation, Foundation and Empire, and Second Foundation (1951-1953).  Then things got complicated.  In 1981, Asimov “was persuaded by his publishers” (according to Wikipedia) to add a fourth book, Foundation’s Edge.  Several more followed, in the course of which Asimov tied in the Foundation series with his other great series, the positronic robot stories.  The new additions in some ways sought to resolve issues in the original trilogy, and in others tended to undermine the originals.  After Asimov’s death, three other celebrated authors—Gregory Benford, Greg Bear, and David Brin—were recruited to write three more Foundation books.  In the last volume of this new trilogy, Brin manages to pull off a brilliant resolution of the whole series.  But even that conclusion didn’t stop the flow of further related tales.

And now, as if things weren’t already confusing enough . . .

Apple’s Augmentation

A screen adaptation of the series was announced in 2017, and Apple picked it up in 2018.  Asimov’s daughter, Robyn Asimov, serves as one of the executive producers.  The principal writer, David S. Goyer, foresees eighty episodes—none too many for such a vast saga.

The trailers (such as this one) made it clear that the look and feel of the TV series would be rather different from those of Asimov’s cerebral books.  That’s not necessarily a bad thing.  The original tales have become dated in both content and style.  The question is, can Apple preserve what’s appealing in the original stories, while bringing them to life for a modern audience?

We’ve now seen three episodes (the fourth premieres tomorrow).  That’s not enough to allow for a full evaluation of the series, of course.  But it’s fun to try and guess where it’s going and report on how it’s doing, even at this early stage.  If nothing else, there’s the entertainment value, later on, of seeing how wildly inaccurate my take on the story may turn out to be.  So let’s see how the adaptation stands as of the third Foundation episode.

Emperors Demand Attention

Gaal Dornick, reimagined for Apple, with the Prime Radiant

As of Episode 2, I was favorably impressed.  Scores of details had been changed from the books, but often in interesting ways.  For example, Asimov’s cast of characters tended to be almost all-male—although the latter half of the series did include two distinctive female characters with strong agency, Bayta Darell and her descendant Arkady Darell.  The TV series diversifies the cast considerably.  Seldon’s protégé Gaal Dornick is now a black woman.  So is Salvor Hardin, the first Mayor of Terminus and leader of the Foundation.  The technology and culture of the Empire looks pretty convincing on-screen, though it doesn’t exactly track Asimov’s descriptions.  Goyer & co. introduce some up-to-date speculative ideas, such as the notion that the succession of Galactic Emperors at this time is a series of clones—though there’s no obvious reason for that last change, other than to modernize the hypothetical science a bit.

The third episode, though, seems to veer away from Asimov’s basic underlying concepts.  However interesting Goyer’s repeating Emperors might be, I expected us to shift away from them as the Foundation itself took center stage.  But Episode 3 continued to focus a great deal of attention on the Emperors.  This seems to run counter to the underlying theme that the Empire fades away as other players become ascendant on the galactic scene.  I don’t know why we’re still spending so much time on the Emperors, unless they’re going to play a larger continuing role than the books would suggest—which makes me wonder what else is happening to the plotline.

The world-city of Trantor

Science and Mysticism

Asimov’s story, while engrossing, was essentially rationalistic.  Historical events had logical explanations (generally laid out explicitly by the characters after the crisis had passed).  Science, whether technological or psychological, was a dominant theme.  And the key to the whole Seldon Plan concept was that the course of history is determined by economic, cultural, and sociological forces, rather than by any individual’s actions.  One might agree or disagree with that premise, but it was the (I can’t believe I’m saying this) foundation of the whole original series—even though Asimov himself found a way around what might have become a stultifying predictability with the unforeseen character of the Mule.

The video adaptation points up a number of elements with a more mystical quality.  The Time Vault, which in the books is merely a recording of speeches about historical crisis points by the long-dead Seldon, in the TV series is an ominous pointed object hovering unsupported over the landscape of Terminus; we haven’t yet seen what it does.  The “Prime Radiant,” a sort of holographic projector containing the details of the Plan, is presented as a unique and numinous object—though that is, to be sure, a genuine Asimov detail, albeit in a different context.

Salvor Hardin, a la Apple

More significantly, Salvor Hardin, a likeable if devious political schemer in the original stories, here appears to be the “Warden” of the Vault, a sort of Obi-Wan Kenobi figure who lurks in the desert.  In Episode 3 we see her set apart even as a child; as an adult, she’s the only person who can pass through the protective field around the Vault that repels all others.  One character even suggests that she may have been somehow included in the Plan.

Now, this invocation of the “Chosen One” trope is directly antithetical to the notion that history is shaped by statistical aggregates and social forces.  Seldon’s Plan, by its nature, cannot depend on the unique actions of individuals.  Even when Asimov introduces the Mule as a mutant with mental powers that can change the large-scale behavior of human populations, that’s presented as disrupting the Plan, ruining Seldon’s statistical predictions.  To have personal qualities written into the Plan itself would undercut the whole idea.  Thus, at the end of Episode 3, I’m wondering whether the TV series is going to carry through the basic Asimovian premise at all.

The Expanded Universe

The sequels to the original trilogy, first by Asimov himself and then by others, took the book series off in somewhat different directions.  I’d been wondering whether the TV series would incorporate the whole “Robots and Empire” connection, or stick to the earlier structure.  To that question, at least, we seem to have an answer.

Eto Demerzel (Daneel Olivaw)

A recurring character in the first three episodes is a woman, an advisor to the Emperors, who turns out in one scene to be a robot.  I hadn’t caught her name at first, and had to look it up in the cast list.  She turns out to be Eto Demerzel (male in the books), who is really the very long-lived robot R. Daneel Olivaw, operating under an alias.  Daneel is one of my favorite characters in the early robot novels The Caves of Steel and The Naked Sun.  In Asimov’s later stories he assumes a much greater importance in shaping the whole course of galactic history.

So it appears that Goyer’s version of the Empire’s history does incorporate Asimov’s later expansion of the Foundation universe, at least to that extent.  It will be fascinating to see how far the writers take that connection—in particular, whether the “second trilogy” contributions of the “Killer Bs” (Benford, Bear, Brin) also figure into the plot.  We’re not likely to see those ultimate developments for years (in real time), though, if the eighty-episode prediction is accurate.

Not A Conclusion

We’re still very early in the development of the Foundation video series.  Tomorrow’s episode might overturn half my speculations here and send us off in an entirely different direction.  But in the meantime, it’s fun to go over what we’ve seen so far and where it seems to be going—even if the secret plans of the screenwriters are as mysterious to us as the Seldon Plan is to the Foundation itself.

Portraying the Transhuman Character

More Than Human

Kevin Wade Johnson’s comments on my recent post about The Good Place raised a couple of issues worth a closer look.  Here’s one:

Lots of science fiction, and some fantasy, deals with characters who are greater, or more intelligent, or more gifted in some way, than mere humans.  But we the authors and readers are mere humans.  How do we go about showing a character who’s supposed to be more sublime than we can imagine?

It’s one thing to have characters whose capabilities are beyond us.  Superman can leap tall buildings with a single bound; I can’t.  But I can easily comprehend Superman’s doing so.  (I can even see it at the movies.)  On the other hand, if a character is supposed to be so intelligent I can’t grasp their reasoning, or has types of knowledge that are beyond me, that’s harder to represent.  I can simply say so:  “Thorson had an intelligence far beyond that of ordinary men.”  But how can I show it?

Long-Lived Experience

There are a number of ways this can come up.  For example, if a character lived a very long time, would their accumulated experience allow for capabilities, or logical leaps in thinking, beyond what we can learn in our short lives?

I’m thinking of a Larry Niven story—I’m blanking on the name:  maybe one of the “Gil the Arm” stories?—in which a character who appears to be a young woman turns out to be centuries old, and when she drops the deception, she moves with uncanny grace—she doesn’t bump into anything or trip over her own feet, because she’s had that long to train herself in how to move (without the limitations imposed by our bodies’ degeneration from aging).

Of course, a story about long-lived people doesn’t have to take long-lived learning into account.  The depiction of the “Howard Families” in Heinlein’s Methuselah’s Children and Time Enough for Love almost seems dedicated to the opposite proposition, that no matter how long we live, we’re basically the same kinds of personalities; we don’t learn much.

Galadriel, radiant

In a similar way, Tolkien’s immortal elves may seem ineffably glorious to us, but their behavior often seems all too human—especially if you read The Silmarillion, where elves make mistakes, engage in treachery, and allow overweening pride to dictate their actions in ways that may surprise those of us familiar only with LotR.  On the other hand, the books and movies do succeed in convincing us that characters like Galadriel and Gandalf are of a stature that exceeds human possibility.

Logic and Language

There are other ways to have transhuman abilities.  As Kevin observes, Niven’s “Protectors” fit the description.  Niven imagines a further stage of human development—something that comes after childhood, adolescence, and adulthood—that we’ve never seen, because when our remote ancestors arrived on Earth from elsewhere, they lacked the plants hosting the symbiotic virus necessary for transition to that final stage.  The “trans-adult” Protectors are stronger, faster, and more durable than ordinary humans.  They also think faster.  Thus Niven shows them as following out a chain of logic with blinding speed to its conclusion, allowing them to act long before regular humans could figure out what to do.  Because this is a matter of speed, not incomprehensible thinking, Niven can depict a Protector as acting in ways that are faster than normal, but are explainable once we sit down and work out the reasoning.

Sherlock Holmes, arena fight scene

A visual analogue is used in the 2009 and 2011 Sherlock Holmes films starring Robert Downey, Jr.  Unlike most other treatments of the character, Guy Ritchie’s version supposes that Holmes’ incredible intelligence can be used not only for logical deduction, but to predict with lightning speed how hand-to-hand combat may develop.  Holmes thus becomes a ninja-like melee fighter, so effective as to confound all opponents.  The movie shows us this by slowing down the process that to Holmes is instantaneous:  we see a very short montage of positions and moves as they would occur, or could occur, before we see Holmes carry out the final “conclusion” of his martial reasoning.  This allows us to appreciate what the quasi-superhuman character is doing and why, without actually having to execute the same process ourselves.

Preternatural intelligence may be more subtle in its effects.  Such a person may, for example, be able to understand things fully from what, to us, would be mere hints and implications.  So, for example, when Isaac Asimov introduces the members of the Second Foundation in his Foundation series, he tells us that their tremendous psychological training allows them to talk among themselves in a manner so concise and compressed that entire paragraphs require only a few words.

Speech as known to us was unnecessary.  A fragment of a sentence amounted almost to long-winded redundancy.  A gesture, a grunt, the curve of a facial line—even a significantly timed pause yielded informational juice.  (Second Foundation, end of chapter 1, “First Interlude,” p. 16)

Second Foundation cover

Breaking the fourth wall, Asimov warns us that his account is “about as far as I can go in explaining color to a blind man—with myself as blind as the audience.”  (same page)  He then adroitly avoids showing us any of the actual conversation; instead, he says he’s “freely translating” it into our ordinary language.  This move illustrates one of the classic ways of presenting the incomprehensible in a story:  point out its incomprehensibility and “translate” into something we can understand.  (Note that this is much more easily done in writing than in a visual medium such as TV or the movies.)

A similar technique is used by Poul Anderson in his 1953 novel Brain Wave, which starts with the interesting premise that in certain regions of space, neurons function faster than in others.  When Earth’s orbit around the center of the galaxy brings it into a “faster” area, the brains of every creature with a central nervous system speed up, and human beings (as well as other animals) all become proportionately smarter.  Anderson notes that the speech of the transformed humans would be incomprehensible to us and, like Asimov, “translates” it for our convenience.  When a couple of the characters, in a newly invented faster-than-light spaceship, accidentally cross the border back into the “slow zone,” they are unable to understand the controls they themselves designed until the ship’s travel brings them out and lets their intelligence return to its new normal.  (Anderson’s concept may have been the inspiration for the “Zones of Thought” universe later developed in several fascinating stories by Vernor Vinge.)

Showing and Telling

We can glean some general principles from these examples.  If the extraordinary acts don’t actually have to be shown in the medium I’m using, I can simply point to them and tell the reader they’re there.  In a written story, I can say my main character is a world-class violinist without having to demonstrate that level of ability myself.  (Although if I have some experience in that particular art, I’ll be able to provide some realistic details, to help make my claim sound plausible.)  But if the supernormal achievement is something that can be shown in our chosen medium, we have to be able to demonstrate it:  a movie about the great violinist will have to exhibit some pretty masterful violin-playing, or those in the audience who know something about the art will laugh themselves silly.

Flowers For Algernon cover

We should note that there are good and bad ways of telling the audience about a character’s superiority.  In the unforgettable short story “Flowers for Algernon,” which consists entirely of diary entries by Charlie Gordon, the main character, the text vividly shows us the effects of an intelligence-raising treatment on a man of initially lower-than-normal intelligence.  The entries improve so radically in writing competence and understanding that when Charlie describes how his brainpower is beginning to exceed that of ordinary humans, we believe him, because we’re already riding on the curve of rising ability up to our own level that is apparent in the text—a true tour de force of writing.  On the other hand, in the drastically worse movie version, Charly (1968), the screenwriters are reduced to having Charly stand in front of an audience of experts and scornfully dismiss the greatest intellectual achievements from human history—a weak and ineffective technique at best for conveying superiority.

Summary

This quick review of the problem turns up several methods for handling supernormal abilities in a story.


  • If the superior ability is intelligible to us ordinary people in the audience—maybe it’s just doing normal things faster—we can have the wiser or super-enabled person explain it to someone less wise: our last post’s Ignorant Interlocutor.
  • If the advantage is mainly a matter of speed, we can slow it down to a speed at which regular people can follow the action.
  • If we can get away without actually showing the ability in question, we may be able to point toward it, or “translate” it into something we can understand, and convincingly tell the audience about it—if we can achieve the necessary suspension of disbelief.
  • If a character is supposed to be, let us say, preternaturally wise, and there’s simply no way to avoid showing that in the dialogue, the best we can do is to evoke the best we can do—have the character be as wise as possible—and imply ‘like this, only more so.’ This method—like “projecting” a line or a curve—is the method of “supereminence,” which is sometimes employed in theological talk about things that are inherently beyond our full understanding.


Kicking around this question makes us aware that portraying the more-than-human character is only a special case of a more general problem.  When our stories try to incorporate anything that’s indescribable, incomprehensible, how do we handle that?  Our F&SF stories frequently want to reach out beyond the boundaries of human experience, yet in a tale written for ordinary humans.  We’ll talk about the more general question next time.