What OS is running YOU?

In one of my earlier posts, I posited that it’s difficult to test whether a simulated intelligence is self-aware or not (i.e., whether we should grant an AI rights and personhood).  I concluded by asserting that choosing to believe in an AI’s self-awareness is tantamount to choosing to believe in an invisible deity.  This led to a series of conversations, some in the comments and more on Facebook, that got me to back away from this hard assertion.

Somewhat related to this, we now see discussions on whether we ourselves live in a simulated universe.  Just last week in the Skeptics Guide to the Universe Episode 379 (starting around the 12:15 mark) they discussed that scientists have performed small-scale simulations of reality.  The SGU then moved on to discuss a recent claim that we almost certainly, mathematically speaking, exist in a simulated universe right now.  (A description of this argument can be found here, although this is not necessarily the article to which the SGU was referring.)
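To give a flavor of the kind of reasoning involved, here is a toy sketch of the Bostrom-style argument.  This is not the actual math from the paper the SGU discussed, and every number in it is invented purely for illustration: the point is just that if simulated minds vastly outnumber “real” ones, the odds that any given observer is simulated approach certainty.

```python
# Toy illustration of the Bostrom-style simulation argument.
# All numbers are made up for illustration; the real argument is
# probabilistic and hinges on several contested assumptions.

real_civilizations = 1           # assume a single "base reality" civilization
sims_per_civilization = 1000     # assume it runs many ancestor simulations
minds_per_world = 10**10         # assume roughly the same population per world

real_minds = real_civilizations * minds_per_world
simulated_minds = real_civilizations * sims_per_civilization * minds_per_world

# If you are a randomly chosen mind, the chance that you are simulated:
p_simulated = simulated_minds / (real_minds + simulated_minds)
print(f"P(simulated) ≈ {p_simulated:.4f}")   # ≈ 0.9990 with these made-up numbers
```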

This got me to thinking.

Aside from the science-fiction story potential, the possibility that we are, ourselves, simulated beings is quite interesting.  There are many implications to this if it turns out to be true.  First, in considering this, I’m going to put aside any question about the soul, or faith.  If you believe in a soul and your understanding is defined by religious teaching, this speculation is just silly, and there’s no point in engaging.  However, if you contend that we don’t truly understand consciousness, and don’t automatically look to spiritual answers to explain it away, then this possibility bears thought.

First of all, if we live in a simulated universe, and we are also simulated intelligences experiencing self-awareness, this shoots to hell my earlier assertion that simulated intelligences cannot be self-aware.  If the math is correct that we almost certainly are simulated (or living in a simulated world), and given that I take my own self-awareness as axiomatic, then it is almost certain that I am simulated, and that we, in turn, will create self-aware intelligences.

As I think through this, I realize I’m conflating two things.  Living in a simulated world (like in the Matrix) is not the same as being a simulated being yourself, although the Matrix makes a case for the computer programs in the story also being sentient.  So, let’s back away from ourselves being simulated for a bit and consider whether we live in a virtual reality (VR) or not.  More specifically, let’s consider that we do live in such a world–what then would it mean?  (Remember, sci-fi is speculative fiction… so as sci-fi lovers, consumers, and writers, we should speculate on such things.)

Folks who postulate along such lines also try to come up with ways we might test for such a thing.  They’ve stated that a simulated world should have glitches, or discrete points of resolution (like pixels of reality).  However, the SGU notes that we already know what the pixel of reality would be–a Planck length–and the pixel of time–a chronon–so why wouldn’t someone capable of building such a perfect simulation model the world’s physics on real-world physics?  In fact, this is exactly what is proposed… a simulation of the fundamental forces of the world should, over time, produce simulated life.  In such a case, we have an untestable hypothesis.

This is what I refer to as the “all-powerful deceiver hypothesis”.  This is the idea that we live in a perfect simulation indistinguishable from reality.  It also echoes an idea I’ve heard from young-Earth creationists: that the Devil (or God) placed dinosaur bones and geological evidence pointing to an ancient Earth in order to test our belief in the Bible–the notion being that those who hold to faith (superstition in this case) even when presented with demonstrable facts get rewarded somehow.  I find such ideas largely without merit… it’s not practical to treat this world as imaginary.

There is also the thought that if something can be neither proven nor disproven, then the burden of proof is on the one making the positive assertion.  This is a claim made by atheists against the theist position–if you can neither prove nor disprove God’s existence, then the burden of proof lies with the person making the extraordinary claim that there exists an all-powerful creator.  Extraordinary claims require commensurate levels of extraordinary proof, and so the rational position becomes one of skepticism towards any god’s existence.

How is this different from the simulated world (SW) possibility?  At least for the present, the claim is that probabilistic math backs up the SW hypothesis.  Now, as I said, it’s not practical for me to go around informing my decisions based on such a belief.  In the absence of proof, I don’t actively believe we live in a SW.  But for the purposes of speculation…

…if we live in a SW, there is a creator, by definition.  Not a god, but a person, people, or civilization that constructed the SW.  Glitches could explain why people have non-repeatable experiences they interpret as miraculous.  Maybe the gods are programs of some sort, but I would argue that such wouldn’t be necessary (if the SW is modeled off of the laws of physics).

…if we live in a SW, that implies there is a real world (RW).  There are still real physics, and non-simulated beings of intelligence and self-awareness.

…if we live in a SW, we don’t know how nested we are.  It’s possible that we do experience an afterlife, at a higher SW level.  If there is a RW at the end, there is still a finite point at which existence ends (without looking to transcendent or religious explanations).

But the idea that’s really neat to me… what if us figuring out we’re in a simulation is “the point”… it’s the test that proves to our creators (programmers?) that we (or the system, meaning our universe) are self-aware.

Ooooh.

Ok, on that note, see y’all next week.  And give the SGU a listen… really good discussion on this.

Cheers,
Kyle

Conscious Artificial Intelligence? – I Don’t Buy It

BLUF:  Sentient robots, AI with self awareness deserving of personhood, and synthetic life are all fantasies–no more probable than faster-than-light travel, lightsabers, or magic sky monkeys.  In other words, we can’t make conscious software.

I’ve been re-watching “Battlestar Galactica” (BSG), and recently finished the “Mass Effect” (ME) series.  Both play on a common theme in science fiction: the machines will rise up against their makers.  Other blockbuster movies–“Terminator” and its ilk, the “Matrix” trilogy, and one of the earliest, “2001: A Space Odyssey”–also explored this idea.  Moving past “the machines will rise to kill us”, science fiction is littered with robot and android characters that come off as “real” personalities in the context of their stories, from C3PO and R2D2 in Star Wars to Commander Data in Star Trek, and even the Transformers, introduced outright as “sentient robots”.  This has been one of my favorite themes in sci-fi, and my recent excursions into BSG and ME prompted me to ponder again the potential of Artificial Intelligence (AI) and whether AI as we conceive it could ever be truly self-aware.

These are two distinct questions, although interrelated:

  1. Will machines rise up against us to kill or enslave their creators?
  2. Is Artificial Intelligence alive?  (for the purpose of this discussion, we’ll define “alive” as “self-aware”, “having personhood”, and “sentient”–you get the idea).  In other words, is synthetic life possible?

I find the first question more alarming; the second question more interesting.  It is important to state up front that intelligence does not equal self-awareness.  We can conceivably create a machine that is smarter, can think faster, learn more, self-evolve, and analyze better than the human mind.  This does not necessarily mean that the machine is alive or has any real sense of consciousness/self-awareness.

Battlestar Galactica, Mass Effect, and the Matrix all consider both questions, and how they interrelate to varying degrees.  The Terminator movies seemed to focus solely on the first question (I don’t recall any of the characters worrying about whether the Terminator had feelings or rights), and no one feels threatened by C3PO or Commander Data.  (Oddly enough, while Star Trek did have an episode dedicated to Data’s rights and recognized personhood in Federation law, Lucas didn’t seem to have a problem presenting C3PO as a character with personality and friendships only to have others mind wipe him without a second thought…)

There is a concept called “technological singularity”.  It hypothesizes that at some point we will create a machine (more likely a network of machines, software, and systems) that is smarter than us and has the ability to modify itself (essentially, take control of its own evolution).  As networked as things are today, it is not difficult to conceive of a system that learns, grows, and has control of manufacturing and logistics so that it could even repair itself, expand itself, or make more like it, all without human intervention.  The hypothesis continues that once a machine smarter than us can self-improve, we cannot–by definition–predict how it will behave.  We can’t do so because if it’s smarter than human potential, we can’t conceive of what it can conceive of.  There is nothing that says such a machine must become hostile to human life, but many speculations assert that it would be.  It could become hostile simply as a matter of concluding that biological life is inefficient–no “malice” involved.  This seems to be the underlying premise of Mass Effect when the final AI (which in the story predates several 50,000-year cycles of galactic civilizations) states that it is inevitable that organic life will create synthetic life that will rise up and kill their creators. (SPOILER: The linked clip is the “extended ending” encounter with the final AI at the end of Mass Effect 3, which explains the premise of the series.  Best sci-fi series ever, BTW.  EVAR!).   AI doesn’t have to be alive to be hostile.

Beyond being a simple matter of calculation, AI could rise up against organic life as a means of self defense.  This starts to get into the area of synthetic life.  Does the machine have “a soul?”  Again, just because a machine acts with a sense of self-preservation doesn’t mean that it is “aware” of that.  It could just be acting upon calculated conclusions given to it by the programming of its creators (or by itself once it starts to self-modify its own code–the synthetic analogue to “changing its mind”).

So how does one determine if it’s conscious?

A machine uprising often brings social and philosophical questions to bear.  Do they become citizens?  Do they become slaves?  Is it right to enslave a machine once it becomes self-aware (if such is possible)?  In Battlestar Galactica they revisit the theme over and over again.  Are the human-looking Cylons really in love?  They seem so human, but are they like humans?  Or are they just extremely sophisticated programs capable of modeling the social expectations of humans, able to play on fears and sympathies for the purpose of manipulation?  And since humans would revolt against slavery, artificial personalities might simply mimic that behavior.

Unfortunately, BSG dropped the ball on this one.  The writers, by the end of the series, conclusively decided for the audience that the Cylons who looked like humans had souls and were alive… and were part of God’s plan, guided by angels.  (Interestingly enough, Mass Effect left this open to the player’s interpretation.)  I think BSG would have been much bolder to do something that revealed all the Cylons as nothing more than unconscious, if complex, software.  Too bad.

But, as good sci-fi does, it raises the question.  What do you think about AI?  Will computers one day reach such complexity that we should consider giving synthetic personalities status and rights as citizens?  Do you think we should eventually recognize their personhood?

I am of the opinion that AI cannot be sentient and will never be “living persons” in the manner that we are,  so I don’t think we ever “should” give them rights… but I think we probably will.

The root of this question resides in the answer to the following:  how do we test for sentience?  What does it mean to be alive?  Ok, so philosophers have debated the latter for some time now, and I’m not sure we have a robust scientific answer to the former (if we do, I’m open to changing my position).  At the root of it all: what is consciousness?

Dr. Steven Novella, host of the podcast “The Skeptics Guide to the Universe” (of which I’m a fan), is a neurologist, and comes down on the side that the brain creates the mind.  You are nothing more than the result of chemical processes in the meat of your head.  In his 33-minute lecture on the topic he likens believing that our conscious awareness is more than the brain (meaning, any belief in the soul) to believing in creationism.

While I don’t know that I completely agree with his conclusions, I do believe that his line of reasoning (that the mind is the brain) is central to considering whether A.I. can be conscious.  I don’t know what consciousness is, and if I remember Dr. Novella’s discussion on his blog correctly, while he asserts that the evidence supports the mind being the result of the brain, he admits that scientists don’t yet understand what exactly consciousness is, or how exactly it arises.  (Not to say that scientists couldn’t… we’re just not there yet.)

For the sake of this essay, I’m rejecting notions of “soul” or any appeal to the mystical to separate organic life from synthetic life.  Even if the mind is just a physical phenomenon of the brain, I still have a problem with the idea of computer consciousness.  My skepticism about sentient A.I. stems from a recognition of what I don’t know, and of what the scientific community doesn’t yet seem to have an answer for.

So, getting to how we might test a machine for sentience.   The first thing that comes to mind is observation of behavior.  Yet how would we know that a sophisticated artificial personality is anything more than a convincing simulation?  Dr. Novella addresses this in the lecture: “You could behave in every way like you have consciousness, and I would never know…. that’s an unfalsifiable hypothesis, and isn’t scientifically useful [paraphrased]”.  On the other side, how do we know that other people are sentient, since we cannot directly experience their awareness for ourselves–we can only observe their behavior?  Must we use the same yardstick and be relegated to observation?  (Perhaps the ultimate answer is “yes”, but we don’t seem to have the understanding yet to do this.)

Quickly, before testing another for consciousness, we need to come back to ourselves.  Am I sentient?  I don’t think we have to get too bogged down on this.  For the normal person, our own experience of awareness and individuality is self-evident.  No proof is required.  I am conscious, and what I’m considering is whether a machine can be “like me”.  It’s that “like me” factor that prompts us to do things like define, extend, and protect the rights of others.  Because they’re “like me”, and I want those rights too.

It’s also reasonable to accept that other people are conscious.  We largely understand how life came to be, how brains evolved, and the chemistry behind it (we understand that it happens, even if not all of how).  Accepting that the same sources and causes that brought about me also brought about other people, it’s unreasonable to think that other people are fundamentally different from me.  So, we establish that all of us are conscious.

But it seems that is the only rational leap I can extend to other people, and I can’t extend that same logic to a computer without something else compelling.  Right now, we can create very convincing performances, and somewhat convincing personalities, with a computer.  Moreover, people emotionally respond to computer-generated characters (whether in games, in Pixar films, or those Furbies that came out a while back).  Certainly, we don’t take people’s reactions as proof that the subject is conscious.  But here’s the thing: we understand how computers came to be.  Siri on your iPhone and the scripted conversations and graphics of Mass Effect are just very sophisticated iterations of the same technology that drives a calculator.  I think we would all agree that calculators are not conscious.
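To make that point concrete, here’s a trivial sketch of my own (a toy example, not how Siri or any real assistant actually works) showing how a “personality” can be nothing but pattern matching and canned responses–the same deterministic machinery as a calculator, just dressed up:

```python
# A toy, ELIZA-style "personality": pattern matching plus canned text.
# It can sound responsive without there being anything it is like to be it.
import re

RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bare you (alive|conscious)\b", re.I), "What would it mean for me to be {0}?"),
    (re.compile(r"\bmy (\w+)\b", re.I), "Tell me more about your {0}."),
]

def respond(line: str) -> str:
    # Return the first canned template whose pattern matches the input.
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    return "Go on."

print(respond("I feel like the machines will rise up"))  # Why do you feel like the machines will rise up?
print(respond("Are you conscious?"))                      # What would it mean for me to be conscious?
```

A handful of rules already produces something people will chat with; scale the rule set up by a few orders of magnitude and you get a convincing performance, but nothing about the underlying mechanism has changed.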

So, we know for a fact that machines are not like us.  They didn’t evolve, they don’t operate under the same chemistry, and they’re not built of the same stuff.  They have increased leaps and bounds in capability, behavior, simulation, abstraction, etc.  But an increase in complexity alone does not change their fundamental nature.  Therefore, I conclude that software doesn’t have feelings.

Software can certainly behave as if it has feelings.  We can program it to respond to input and simulate human behavior.  And just as important, its behavior can evoke real emotional responses in us.  I suspect that there will come a time when a personality is written that is so compelling, and more intelligent than a human (even if not conscious), that people will start defending rights for A.I. based on their feelings.

There’s a scene in BSG where a Cylon captured on the Pegasus (one of the starships) is raped.  For all intents and purposes, she appears to be a woman (even under normal levels of medical examination).  The rapists refer to her as a toaster, a “thing”.  The crewmates on Galactica, who had previously referred to their own Cylon prisoners as things and toasters–even Cally, who had shot Boomer–now look with disgust on the Pegasus crew bragging about the rape.  This of course brings up the question: is it ok to rape an android that looks real, while understanding “it” is “just software”?  I would argue no, because the evoked actions and emotions reveal the truth about the humans interacting with the A.I. far more than they reveal anything about the A.I. itself.  The dudes who did that were still rapist scum.

Of course that raises the thought that if an A.I. is nothing more than software, then it is qualitatively no different from a computer character in a video game.  When does doing something to a simulated human stop being ok, if it’s ok in a video game?  We shoot computer characters all the time, and in more sophisticated games like Dragon Age, Mass Effect, and Star Wars: Knights of the Old Republic, we have player choices that allow us to engage in douchebaggery against our allies (or downright evil if we take our characters to the Dark Side of the Force).  I know I find the douchebaggery options distasteful, but not because the characters have feelings–it’s just software.  But I’m not intending to digress into the ethics of video games with violent and moral choices… a topic for another day perhaps.

Bottom line, I have yet to be convinced that A.I. could become truly conscious.  The only arguments I’ve found in support of it seem to be variations on demonstrating behavior, or on humans’ emotional responses to that behavior.  I recognize that if the mind is just the physical brain, then it is possible that we might create synthetic minds in the future, but such would most likely require a paradigm shift in how we design computers (maybe it would require biological components?).  However, based on current computer technology (and future developments along those paths), and without a full understanding of consciousness and a compelling test for its presence, I don’t think Artificial Consciousness is anything more than magical speculation at this point.  Stories with sentient A.I. characters move towards the realm of fantasy and away from sci-fi.

~Kyle