Conscious Artificial Intelligence? – I Don’t Buy It

BLUF:  Sentient robots, AI with self-awareness deserving of personhood, and synthetic life are all fantasies–no more probable than faster-than-light travel, lightsabers, or magic sky monkeys.  In other words, we can’t make conscious software.

I’ve been re-watching “Battlestar Galactica” (BSG), and recently finished the “Mass Effect” (ME) series.  Both play on a common theme in science fiction: the machines will rise up against their makers.  Other blockbusters–“Terminator” and its ilk, the “Matrix” trilogy, and one of the first, “2001: A Space Odyssey”–explored this idea as well.  Moving past “the machines will rise to kill us”, science fiction is littered with robot and android characters that come off as “real” personalities in the context of their stories, from C3PO and R2D2 in Star Wars to Commander Data in Star Trek, and even the Transformers, introduced from the start as “sentient robots”.  This has been one of my favorite themes in sci-fi, and my recent excursions into BSG and ME prompted me to ponder again the potential of Artificial Intelligence (AI) and whether AI as we conceive it could ever be truly self-aware.

These are two distinct questions, although interrelated:

  1. Will machines rise up to kill or enslave their creators?
  2. Is Artificial Intelligence alive?  (for the purpose of this discussion, we’ll define “alive” as “self-aware”, “having personhood”, and “sentient”–you get the idea).  In other words, is synthetic life possible?

I find the first question more alarming; the second more interesting.  It is important to state up front that intelligence does not equal self-awareness.  We can conceivably create a machine that is smarter than the human mind–one that thinks faster, learns more, evolves itself, and analyzes better.  This does not necessarily mean that the machine is alive or has any real sense of consciousness/self-awareness.

Battlestar Galactica, Mass Effect, and the Matrix all consider both questions, and how they interrelate to varying degrees.  The Terminator movies seemed to focus solely on the first question (I don’t recall any of the characters worrying about whether the Terminator had feelings or rights), and no one feels threatened by C3PO or Commander Data.  (Oddly enough, while Star Trek did have an episode dedicated to Data’s rights and recognized personhood in Federation law, Lucas didn’t seem to have a problem presenting C3PO as a character with personality and friendships only to have others mind wipe him without a second thought…)

There is a concept called “technological singularity”.  It hypothesizes that at some point we will create a machine (more likely a network of machines, software, and systems) that is smarter than us and has the ability to modify itself (essentially, to take control of its own evolution).  As networked as things are today, it is not difficult to conceive of a system that learns, grows, and has control of manufacturing and logistics so that it could even repair itself, expand itself, or make more like it, all without human intervention.  The hypothesis continues that once a machine smarter than us can self-improve, we cannot–by definition–predict how it will behave.  We can’t do so because if it’s smarter than human potential, we can’t conceive of what it can conceive of.  Nothing says such a machine must become hostile to human life, but many speculate that it would be.  It could become hostile simply as a matter of concluding that biological life is inefficient–no “malice” involved.  This seems to be the underlying premise of Mass Effect, when the final AI (which in the story has persisted through several 50,000-year cycles of galactic civilizations) states that it is inevitable that organic life will create synthetic life that will rise up and kill its creators.  (SPOILER: The linked clip is the “extended ending” encounter with the final AI at the end of Mass Effect 3, which explains the premise of the series.  Best sci-fi series ever, BTW.  EVAR!)  AI doesn’t have to be alive to be hostile.

Beyond being a simple matter of calculation, AI could rise up against organic life as a means of self defense.  This starts to get into the area of synthetic life.  Does the machine have “a soul?”  Again, just because a machine acts with a sense of self-preservation doesn’t mean that it is “aware” of that.  It could just be acting upon calculated conclusions given to it by the programming of its creators (or by itself once it starts to self-modify its own code–the synthetic analogue to “changing its mind”).

So how does one determine if it’s conscious?

A machine uprising often brings social and philosophical questions to bear.  Do they become citizens?  Do they become slaves?  Is it right to enslave a machine once it becomes self-aware (if such is possible)?  Battlestar Galactica revisits this theme over and over again.  Are the human-looking cylons really in love?  They seem so human, but are they like humans?  Or are they just extremely sophisticated programs capable of modeling the social expectations of humans, able to play on fears and sympathies for the purpose of manipulation?  And since humans would revolt against slavery, artificial personalities designed to mimic humans would mimic that behavior too.

Unfortunately, BSG dropped the ball on this one.  The writers, by the end of the series, conclusively decided for the audience that the cylons who looked like humans had souls and were alive… and were part of God’s plan, guided by angels.  (Interestingly enough, Mass Effect left this open to the player’s interpretation.)  I think BSG would have been much bolder to reveal all the cylons as nothing more than unconscious, if complex, software.  Too bad.

But, as good sci-fi does, it raises the question: what do you think about AI?  Will computers one day have such complexity that we should consider giving synthetic personalities status and rights as citizens?  Do you think we should eventually recognize their personhood?

I am of the opinion that AI cannot be sentient and will never be “living persons” in the manner that we are, so I don’t think we ever “should” give them rights… but I think we probably will.

The root of this question resides in the answer to the following:  how do we test for sentience?  What does it mean to be alive?  Ok, so philosophers have debated the latter for some time now, and I’m not sure we have a robust scientific answer to the former (if we do, I’m open to changing my position).  At the root of it all: what is consciousness?

Dr. Steven Novella, host of the podcast “The Skeptics’ Guide to the Universe” (of which I’m a fan), is a neurologist, and comes down on the side that the brain creates the mind.  You are nothing more than the result of chemical processes in the meat of your head.  In his 33-minute lecture on the topic, he argues that believing our conscious awareness is anything more than the brain (meaning, any belief in a soul) is no better than a belief in creationism.

While I don’t know that I completely agree with his conclusions, I do believe that his line of reasoning (that the mind is the brain) is central to considering whether A.I. can be conscious.  I don’t know what consciousness is, and if I remember Dr. Novella’s discussion on his blog correctly, while he asserts that the evidence supports the mind being the result of the brain, he admits that scientists don’t yet understand what exactly consciousness is, or how exactly it arises.  (Not to say that scientists couldn’t… we’re just not there yet.)

For the sake of this essay, I’m rejecting notions of “soul” or any appeal to the mystical to separate organic life from synthetic life.  I would argue that even if the mind is just a physical phenomenon of the brain, I still have a problem with the idea of computer consciousness.  My skepticism about sentient A.I. stems from a recognition of what I don’t know, and of what the scientific community doesn’t yet seem to have an answer for.

So, getting to how we might test a machine for sentience.  The first thing that comes to mind is observation of behavior.  Yet, how would we know that a sophisticated artificial personality is anything more than a convincing simulation?  Dr. Novella addresses this in the lecture: “You could behave in every way like you have consciousness, and I would never know…. that’s an unfalsifiable hypothesis, and isn’t scientifically useful [paraphrased]”.  On the other side, how do we know that other people are sentient, since we cannot directly experience their awareness for ourselves?  We can only observe their behavior.  Must we use the same yardstick and be relegated to observation?  (Perhaps the ultimate answer is “yes”, but we don’t seem to have the understanding yet to do this.)

Quickly, before testing another for consciousness, we need to come back to ourselves.  Am I sentient?  I don’t think we have to get too bogged down on this.  For the normal person, our own experience of awareness and individuality is self-evident.  No proof is required.  I am conscious, and what I’m considering is whether a machine can be “like me”.  It’s that “like me” factor that prompts us to do things like define, extend, and protect the rights of others.  Because they’re “like me”, and I want those rights too.

It’s also reasonable to accept that other people are conscious.  We largely understand how life came to be, how brains evolved, and the chemistry behind it (we understand the “does” even if not all of the “how”).  If I accept that the same sources/causes that brought about me also brought about other people, then it’s unreasonable to think that other people are fundamentally different from me.  So, we establish that all of us are conscious.

But it seems that is the only rational leap I can make, and I can’t extend the same logic to a computer without something else compelling.  Right now, we can create very convincing performances, and somewhat convincing personalities, with a computer.  Moreover, people respond emotionally to computer-generated characters (whether in games, in Pixar films, or those Furbies that came out a while back).  Certainly, we don’t take people’s reactions as proof that the subject is conscious.  But here’s the thing: we understand how computers came to be.  Siri on your iPhone and the scripted conversations and graphics of Mass Effect are just very sophisticated iterations of the same technology that drives a calculator.  I think we would all agree that calculators are not conscious.

So, we know for a fact that machines are not like us.  They didn’t evolve, they don’t operate under the same chemistry, and they’re not built of the same stuff.  They have increased leaps and bounds in capability, behavior, simulation, abstraction, etc.  But an increase in complexity alone does not change a thing’s fundamental nature.  Therefore, I conclude that software doesn’t have feelings.

Software can certainly behave as if it has feelings.  We can program it to respond to input and simulate human behavior.  And just as important, its behavior can evoke real emotional responses in us.  I suspect that there will come a time when a personality is written that is so compelling, and more intelligent than a human (even if not conscious), that people start defending rights for A.I. based on their feelings.
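To make the point concrete, here’s a minimal sketch (my own toy example, in the spirit of Weizenbaum’s ELIZA–the rules and phrasings are invented for illustration) of how a few lines of pattern-matching can produce responses that feel empathetic while “understanding” nothing at all:

```python
import re

# A handful of hypothetical reflection rules.  The program never models
# emotion; it only rewrites the user's own words back at them.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\byou\b", re.IGNORECASE), "We were discussing you, not me."),
]

def respond(line: str) -> str:
    """Return the first matching canned response, or a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            # Substitute the captured phrase into the template.
            return template.format(*match.groups())
    return "Please, tell me more."

print(respond("I feel lonely tonight"))
print(respond("I am worried about AI"))
print(respond("What do you do all day?"))
```

A user can have a surprisingly “caring” exchange with something like this, which is exactly why evoked emotional responses in us are such weak evidence of consciousness in the machine.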

There’s a scene in BSG where a cylon captured on the Pegasus (one of the starships) is raped.  For all intents and purposes, she appears to be a woman (even under normal levels of medical examination).  The rapists refer to her as a toaster, a “thing”.  The crewmates on Galactica, who had previously referred to their own cylon prisoners as things and toasters–even Cally, who had shot Boomer–now look with disgust on the Pegasus crew bragging about the rape.  This of course raises the question: is it OK to rape an android that looks real, while understanding “it” is “just software”?  I would argue no, because the evoked actions and emotions reveal the truth about the humans interacting with the A.I. much more than they reveal anything about the A.I. itself.  The dudes who did that were still rapist scum.

Of course, that raises the thought that if an A.I. is nothing more than software, then it is qualitatively no different from a computer character in a video game.  If it’s OK in a video game, when does doing something to a simulated human become not OK?  We shoot computer characters all the time, and in more sophisticated games like Dragon Age, Mass Effect, and Star Wars: Knights of the Old Republic, player choices allow us to engage in douchebaggery against our allies (or downright evil if we take our characters to the Dark Side of the Force).  I know I find the douchebaggery options distasteful, but not because the characters have feelings–it’s just software.  But I’m not intending to digress into the ethics of video games with violent and moral choices… a topic for another day, perhaps.

Bottom line, I have yet to be convinced that A.I. could become truly conscious.  The only arguments I’ve found in support of it seem to be variations on demonstrated behavior, or on humans’ emotional reactions to that behavior.  I recognize that if the mind is just the physical brain, then we might create synthetic minds in the future, but that would most likely require a paradigm shift in how we design computers (maybe it would require biological components?).  However, based on current computer technology (and future developments along those paths), and without a full understanding of consciousness or a compelling test for its presence, I don’t think Artificial Consciousness is anything more than magical speculation at this point.  Stories with sentient A.I. characters move toward the realm of fantasy and away from sci-fi.