Escaping the Cycle: Battlestar Galactica vs. Mass Effect

I’ve talked in the past about my love-hate relationship with SyFy’s Battlestar Galactica. I decided to give it another chance and did a full re-watch of the four seasons. The overall verdict: I liked the series even more than I did the first time, including the last season, with one exception. The last episode I hated even more than I did before. Common complaints about the ending had to do with the whole “I see angels” business and the show deciding for the viewer how God was supposed to be seen. This time around, however, my complaints take a different angle.

This got me to thinking about another epic sci-fi story with a controversial ending: Mass Effect. Both stories are essentially the same: a repeating cycle of machines rising up against the organic life that created them. Across the galaxy, civilization after civilization rises, only to fall to the same fate: the machines destroy their creators. In both stories, a key theme is: how do we break the cycle? It’s a sci-fi version of the wheel of karma, the cycle of rebirth and death. In other words, how do we achieve moksha, a future where organic life can survive past the technological singularity?

People complained about both endings. For Mass Effect, folks complained that the choices didn’t matter in the end. I disagree with this interpretation of the Mass Effect ending, and I’m going to contrast it with Battlestar Galactica (BSG) to argue that Mass Effect’s (ME) ending was, in fact, quite good, and that the final choices in the game have substantially different implications.

(SPOILERS to follow)

At the end of BSG, the last episode does pose the question: how do we break the cycle? They find a new planet (our Earth) to colonize and survive on, and they make the decision to blend in with the natives, scatter themselves over the planet, and eschew their technology by launching their fleet into the sun. They choose to start over with a stone age existence.

The end of ME also has a loss of technology: the network of jump gates gets destroyed, presumably making interstellar travel either impossible or much more difficult. In the case of ME, this loss of advanced civilization is a side effect, not deliberately chosen. Also, each solar system still has space age technology. Two of the three choices made at the end of ME carry the lessons learned into the future and truly break the cycle. One choice is ambiguous. More on this in a minute.

Coming back to BSG, the only thing they accomplish is accepting humans and cylons living and procreating together, with the implication that they’ll interbreed with the indigenous human population. But they also make a deliberate choice to “start over” and “wipe the slate clean”. There’s one problem with that: the entire series has established that “all of this has happened before and all of this will happen again.” If they don’t carry any lessons learned forward, why would they expect different results?

To add insult to injury, the final scene of the series has the Baltar and Six angels talking together on modern-day Earth. Baltar-angel asks Six-angel if the cycle will repeat, and she’s optimistic this time. Why? No fracking good reason whatsoever. She says, “law of averages. If we do this enough, we’ll eventually be surprised.” Well, why didn’t that apply before? This sounds insane: “doing the same thing over and over and expecting different results.” My biggest gripe is this: I felt BSG was, in the end, an utter tragedy. No lessons were carried forward, and my takeaway is that they’re doomed to repeat the cycle. To amplify my point, the fact that they chose to link their story to present-day Earth was self-defeating. If they wanted to make the argument that lessons were remembered and carried forward, even at a genetic level, they would have had to show how humans were fundamentally different… but they didn’t. One of the strengths of the show was how human and relatable their characters were. Caprica before the fall is very much like modern-day Earth. So, the cycle repeats itself.

My second big gripe adds to the tragic feel of the show. They talk about how love (with familial love being as important as, or more important than, romantic love) was key to breaking the cycle (because apparently procreation was key somehow… that’s never explained), but then they scatter. The hardest part for me was Lee’s father going off to be alone, and Lee seemingly alone at the end. All those talks about family and bonds… it’s like they all abandoned the hearths they’d built with each other (excepting some characters like Helo and Athena, and Caprica and Baltar). I guess the solitude of the Adamas was hard to swallow, and frankly I found it somewhat unbelievable after the way the series built their characters.

The failure of the final episode, I believe, is in the writing. The entire series was brilliant, and my impression is that the writers weren’t consciously trying to make it a tragedy. All in all, the BSG finale just seems like a tragic accident.

Turning to ME, the player is given three choices at the end, when presented with a somewhat similar problem: how to break the cycle. Any choice the player makes will have the side effect of destroying the jump gates. To not choose is to allow the machines to continue the cycle and harvest/destroy all advanced organic life. People complained that the three choices were superficial, involving a “blue-green-red” palette swap for the end scenes. This, however, isn’t true. The choices are fundamentally different (and this was more apparent in the revised endings). The choices presented are:

  1. Blue (control): Become one with the reapers (the machines in question). Your consciousness gets uploaded into the reapers and you become a reaper yourself, transforming their consciousness with yours. You then have the power to choose to stop the cycle, because you control the other reapers. You become, in essence, a god, and watch over the galaxy to prevent the cycle from happening again.
  2. Green (synthesis): You have the choice to blend synthetic and organic life… and through the reaction during the jump gates’ destruction, this effect is carried to every corner of the galaxy. You die, but every other living being becomes a synthetic-organic hybrid, removing the distinction between synthetic and organic life.
  3. Red (destruction): You die, but you destroy all synthetic life and artificial intelligence throughout the galaxy. The cycle may or may not repeat itself, but organic life continues past the point of singularity, knowing and remembering what happened. This is the most uncertain of all fates, but it’s not a “start again and hope for different results.”

In order to not repeat the past, the starting conditions have to be different. The biggest thing is: what’s changed, what’s different from the last time we started this cycle? In ME, each choice offers a different set of starting conditions. BSG did not.

The BSG writers effectively made no choice… they wanted to have their cake and eat it too, essentially mapping their ending onto the “no choice” ME ending (which was also available: if you choose not to choose, the final AI kills you and the cycle continues… the reapers harvest and wipe out humanity and all the other spacefaring races).

If the blue/green/red choices were mapped on to BSG, this is how it could have happened:

  1. Blue (control): Kara Thrace (or another suitable character, but I pick Starbuck in an effort to undo the stupid “she’s an angel” thread) becomes a cylon and merges consciousness with the base ship. She transcends consciousness and guides the base ships away, watching humanity’s development from afar. Some cylons live with humans and interbreed, and human civilization continues, with technological and philosophical lessons learned. Maybe it becomes possible for human consciousness to be uploaded into synthetic brains, transcending mortality and blurring the lines between synthetic and organic. There’s a fundamental recognition that machines are just as conscious as organics.
  2. Green (synthesis): I’m… gonna go with Kara again. But maybe Hera could have worked as well. One of them accomplishes something that causes all synthetics to share organic properties (although one could argue they already do) and all humans to carry cylon biotech as part of them (just like they did with the ship itself towards the end). Or a simpler way to achieve this would have been to have them settle in peace and intermarry, with the implication that in time, all their descendants would be hybrids. They almost achieved this in the series, except that they decided to forget everything.
  3. Red (destruction): This was the ending I was hoping for, because I thought it would have been bold. The humans come to the conclusion that software is just software in the end: convincing simulations of personality, but not actually sentient. They find a way to destroy all cylons, and there is no continuation of the hybrid line. The humans settle on the new world. They remember the sins of the past. In some ways, they’d already learned this lesson in the colonies by abandoning AI and networked computer technology. That solves the problem of the singularity. Only this time, they don’t have the threat of the cylons coming home to destroy them, because they’re all dead. In other words, it’s simply a military victory.

So, after the second watch-through (which had the benefit of coming after I’d played the full ME trilogy), I feel like the writers of BSG and ME were trying to address the same problem: how to break the cycle of technological singularity and escape the doom of machines rising up to destroy the organics. It feels like the BSG writers ran out of time to parse through the options, present them distinctly to the viewers, and have their characters choose one. ME did a better job of this.

Until next week,
Kyle


Conscious Artificial Intelligence? – I Don’t Buy It

BLUF: Sentient robots, AI with self-awareness deserving of personhood, and synthetic life are all fantasies, no more probable than faster-than-light travel, lightsabers, or magic sky monkeys. In other words, we can’t make conscious software.

I’ve been re-watching “Battlestar Galactica” (BSG), and recently finished the “Mass Effect” (ME) series. Both play on a common theme in science fiction: the machines will rise up against their makers. Other blockbuster movies explored this idea as well: “Terminator” and its ilk, the “Matrix” trilogy, and one of the first (or among the first), “2001: A Space Odyssey”. Moving past “the machines will rise to kill us”, science fiction is littered with robot and android characters that come off as “real” personalities in the context of their stories, from C3PO and R2D2 in Star Wars to Commander Data in Star Trek, and even the Transformers, introduced from the start as “sentient robots”. This has been one of my favorite themes in sci-fi, and my recent excursions into BSG and ME prompted me to ponder again the potential of Artificial Intelligence (AI) and whether AI as we conceive it could ever be truly self-aware.

These are two distinct questions, although interrelated:

  1. Will machines rise up to kill or enslave their creators?
  2. Is Artificial Intelligence alive? (For the purpose of this discussion, we’ll define “alive” as “self-aware”, “having personhood”, and “sentient”; you get the idea.) In other words, is synthetic life possible?

I find the first question more alarming; the second question more interesting. It is important to state up front that intelligence does not equal self-awareness. We could conceivably create a machine that is smarter than the human mind: one that thinks faster, learns more, self-evolves, and analyzes better. This does not necessarily mean that the machine is alive or has any real sense of consciousness/self-awareness.

Battlestar Galactica, Mass Effect, and the Matrix all consider both questions, and how they interrelate to varying degrees.  The Terminator movies seemed to focus solely on the first question (I don’t recall any of the characters worrying about whether the Terminator had feelings or rights), and no one feels threatened by C3PO or Commander Data.  (Oddly enough, while Star Trek did have an episode dedicated to Data’s rights and recognized personhood in Federation law, Lucas didn’t seem to have a problem presenting C3PO as a character with personality and friendships only to have others mind wipe him without a second thought…)

There is a concept called “technological singularity”. It hypothesizes that at some point we will create a machine (more likely a network of machines, software, and systems) that is smarter than us and has the ability to modify itself (essentially, take control of its own evolution). As networked as things are today, it is not difficult to conceive of a system that learns, grows, and has control of manufacturing and logistics so that it could even repair itself, expand itself, or make more like it, all without human intervention. The hypothesis continues that once a machine smarter than us can self-improve, we cannot, by definition, predict how it will behave. We can’t do so because if it’s smarter than human potential, we can’t conceive of what it can conceive of. There is nothing that says such a machine must become hostile to human life, but many speculations assert that it would be. It could become hostile simply as a matter of concluding that biological life is inefficient; no “malice” involved. This seems to be the underlying premise of Mass Effect when the final AI (which in the story predates several 50,000-year cycles of galactic civilizations) states that it is inevitable that organic life will create synthetic life that will rise up and kill its creators. (SPOILER: The linked clip is the “extended ending” encounter with the final AI at the end of Mass Effect 3, which explains the premise of the series. Best sci-fi series ever, BTW. EVAR!) AI doesn’t have to be alive to be hostile.

Beyond being a simple matter of calculation, AI could rise up against organic life as a means of self-defense. This starts to get into the area of synthetic life. Does the machine have “a soul”? Again, just because a machine acts with a sense of self-preservation doesn’t mean that it is “aware” of that. It could just be acting upon calculated conclusions given to it by the programming of its creators (or by itself once it starts to self-modify its own code, the synthetic analogue to “changing its mind”).

So how does one determine if it’s conscious?

A machine uprising often brings social and philosophical questions to bear. Do they become citizens? Do they become slaves? Is it right to enslave a machine once it becomes self-aware (if such a thing is possible)? In Battlestar Galactica (BSG) they revisit the theme over and over again. Are the human-looking cylons really in love? They seem so human, but are they like humans? Or are they just extremely sophisticated programs capable of modeling the social expectations of humans, able to play on fears and sympathies for the purpose of manipulation? And since humans would revolt against slavery, artificial personalities would mimic that behavior too.

Unfortunately, BSG dropped the ball on this one. The writers, by the end of the series, conclusively decided for the audience that the cylons who looked like humans had souls and were alive… and were part of God’s plan, guided by angels. (Interestingly enough, Mass Effect left this open to the player’s interpretation.) I think BSG would have been much bolder to do something that revealed all the cylons as nothing more than unconscious, if complex, software. Too bad.

But, as good sci-fi does, it raises the question: what do you think about AI? Will computers one day have such complexity that we should consider giving synthetic personalities status and rights as citizens? Do you think we should eventually recognize their personhood?

I am of the opinion that AI cannot be sentient and will never be “living persons” in the manner that we are,  so I don’t think we ever “should” give them rights… but I think we probably will.

The root of this question resides in the answer to the following:  how do we test for sentience?  What does it mean to be alive?  Ok, so philosophers have debated the latter for some time now, and I’m not sure we have a robust scientific answer to the former (if we do, I’m open to changing my position).  At the root of it all: what is consciousness?

Dr. Steven Novella, host of the podcast “The Skeptics’ Guide to the Universe” (of which I’m a fan), is a neurologist, and comes down on the side that the brain creates the mind. You are nothing more than the result of chemical processes in the meat of your head. In his 33-minute lecture on the topic, he argues that believing our conscious awareness is more than the brain (meaning, any belief in the soul) is nothing better than a belief in creationism.

While I don’t know that I completely agree with his conclusions, I do believe that his line of reasoning (that the mind is the brain) is central to considering whether A.I. can be conscious. I don’t know what consciousness is, and if I remember Dr. Novella’s discussion on his blog correctly, while he asserts that the evidence supports the mind being the result of the brain, he admits that scientists don’t yet understand what exactly consciousness is, or how exactly it arises. (Not to say that scientists couldn’t figure it out… we’re just not there yet.)

For the sake of this essay, I’m rejecting notions of a “soul” or any appeal to the mystical to separate organic life from synthetic life. I would argue that even if the mind is just a physical phenomenon of the brain, I still have a problem with the idea of computer consciousness. My doubt about sentient A.I. stems from a recognition of what I don’t know, and of what the scientific community doesn’t yet seem to have an answer for.

So, getting to how we might test a machine for sentience. The first thing that comes to mind is observation of behavior. Yet, how would we know that a sophisticated artificial personality is anything more than a convincing simulation? Dr. Novella addresses this in the lecture: “You could behave in every way like you have consciousness, and I would never know… that’s an unfalsifiable hypothesis, and isn’t scientifically useful [paraphrased]”. On the other side, how do we know that other people are sentient, since we cannot directly experience their awareness for ourselves? We can only observe their behavior. Must we use the same yardstick and be relegated to observation? (Perhaps the ultimate answer is “yes”, but we don’t seem to have the understanding yet to do this.)

Quickly, before testing another for consciousness, we need to come back to ourselves.  Am I sentient?  I don’t think we have to get too bogged down on this.  For the normal person, our own experience of awareness and individuality is self-evident.  No proof is required.  I am conscious, and what I’m considering is whether a machine can be “like me”.  It’s that “like me” factor that prompts us to do things like define, extend, and protect the rights of others.  Because they’re “like me”, and I want those rights too.

It’s also reasonable to accept that other people are conscious. We largely understand how life came to be, how brains evolved, and the chemistry behind it (we understand that it does, even if not all of how). Accepting that the same sources/causes that brought about me also brought about other people, it’s unreasonable to think that other people are different from me. So, we establish that all of us are conscious.

But it seems that is the only rational leap I can make, and I can’t extend that same logic to a computer without something more compelling. Right now, we can create very convincing performances, and somewhat convincing personalities, with a computer. Moreover, people emotionally respond to computer-generated characters (whether in games, in Pixar films, or those Furbies that came out a while back). Certainly, we don’t take people’s reactions as proof that the subject is conscious. But here’s the thing: we understand how computers came to be. Siri on your iPhone and the scripted conversations and graphics of Mass Effect are just very sophisticated iterations of the same technology that drives a calculator. I think we would all agree that calculators are not conscious.
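To make that point concrete, here’s a toy sketch of my own (this is an illustration, not how Siri or any real assistant is actually implemented): an ELIZA-style “personality” can sound responsive in conversation while being nothing more than deterministic pattern matching, the same rule-following a calculator does with numbers.

```python
import re

# A toy ELIZA-style "personality": a handful of canned pattern/response rules.
# It can feel responsive in conversation while having no awareness at all.
RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bbecause (.+)", "Is that really the reason?"),
    (r".*", "Tell me more."),
]

def reply(message: str) -> str:
    """Pick the first matching rule and fill in its template. Pure rule-following."""
    text = message.lower()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Tell me more."  # unreachable: the last rule matches anything

if __name__ == "__main__":
    print(reply("I feel lonely on this ship"))    # -> Why do you feel lonely on this ship?
    print(reply("Because the fleet scattered"))   # -> Is that really the reason?
```

However convincing the output might feel in the moment, it’s the same kind of computation a calculator performs, just over strings instead of numbers.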

So, we know for a fact that machines are not like us. They didn’t evolve, they don’t operate under the same chemistry, and they’re not built of the same stuff. They have increased by leaps and bounds in capability, behavior, simulation, abstraction, etc. But an increase in complexity alone does not change their fundamental nature. Therefore, I conclude that software doesn’t have feelings.

Software can certainly behave as if it has feelings. We can program it to respond to input and simulate human behavior. And just as important, its behavior can evoke real emotional responses in us. I suspect there will come a time when a personality is written that is so compelling, and more intelligent than a human (even if not conscious), that people start defending rights for A.I. based on their own feelings.

There’s a scene in BSG where a cylon captured on the Pegasus (one of the starships) is raped. For all intents and purposes, she appears to be a woman (even under normal levels of medical examination). The rapists refer to her as a toaster, a “thing”. The crewmates on Galactica, who had previously referred to their own cylon prisoners as things and toasters (even Cally, who had shot Boomer), now look with disgust on the Pegasus crew bragging about the rape. This of course brings up the question: is it ok to rape an android that looks real, while understanding “it” is “just software”? I would argue no, because the evoked actions and emotions reveal the truth about the humans interacting with the A.I. much more than they reveal anything about the A.I. itself. The dudes who did that were still rapist scum.

Of course that raises the thought that if an A.I. is nothing more than software, then it is qualitatively no different than a computer character in a video game. When does doing something to a simulated human become not ok, if it’s ok in a video game? We shoot computer characters all the time, and in more sophisticated games like Dragon Age, Mass Effect, and Star Wars: Knights of the Old Republic, we have player choices that allow us to engage in douchebaggery against our allies (or downright evil if we take our characters to the Dark Side of the Force). I know I find the douchebaggery options distasteful, but not because the characters have feelings… it’s just software. But I’m not intending to digress into the ethics of video games with violent and moral choices… a topic for another day, perhaps.

Bottom line, I have yet to be convinced that A.I. could become truly conscious. The only arguments I’ve found in support of it seem to be variations on demonstrated behavior, or on the emotional responses that behavior evokes in humans. I recognize that if the mind is just the physical brain, then it is possible that we might create synthetic minds in the future, but doing so would most likely require a paradigm shift in how we design computers (maybe it would require biological components?). However, based on current computer technology (and future developments along those paths), and without a full understanding of consciousness and a compelling test for its presence, I don’t think Artificial Consciousness is anything more than magical speculation at this point. Stories with sentient A.I. characters move towards the realm of fantasy and away from sci-fi.

~Kyle