Existence before essence: thoughts on artificial intelligence

One of my favorite books that I’ve read in the past year is a philosophical analysis of technology and artificial intelligence called The Most Human Human by Brian Christian. The book is about an annual competition called the Loebner Prize that pits chatbots against humans to determine levels of artificial intelligence. The competition applies the Turing Test, which holds that if someone can chat remotely with a computer and mistake it for a human, then the computer must be intelligent. The reasoning is that any program that can navigate something as complex as human language, at once so rigid and so wandering, can be called intelligent.

The elegant simplicity of the Turing Test is part of what makes it so appealing. Because the test is conducted remotely, judges aren’t distracted by how the AI looks or sounds. While intelligence isn’t derived only from language skills, ease of banter can be a pretty accurate initial barometer. And interacting with strangers remotely is an increasingly common occurrence. Right now, you are reading my blog post and, I hope, you assume me to be human. You may have a phone interview with a potential employer and all you can focus on is how very real the encounter is. You’ve probably had a live chat with an IT consultant that you were pretty sure wasn’t robotic. Remote interaction, with all its benefits and drawbacks, is becoming a fairly regular aspect of our society.

Returning to Christian’s book, the Loebner Prize places chatbot competitors and human participants each in a remote conversation with a judge on a particular topic. At the end of the chat session, the judge rates whether she believes her conversant to be human or binary. The chatbot with the most “human” ratings wins the title of Most Human Computer of the year, a highly coveted recognition of artificial intelligence that grants the winning chatbot’s proprietors academic prestige and lucrative development contracts.

An alternative prize, The Most Human Human, is awarded to the human with the most “human” ratings. A human with the most human ratings? How can a human be mistaken for a machine? Sadly, it may not be as hard as it seems. After all, so many day-to-day human interactions, whether among strangers or spouses, are remarkably sterile and detached. But that’s a topic for another day.

Christian supplies pages of enlightening and hilarious transcripts of human/bot conversations, not just from the Loebner Prize but also from as far back as the late 50s and 60s, just after the untimely death of Alan Turing himself. The transcript below is from an early, innovative program named ELIZA:

[human] Sent: Men are all alike.
[bot] Received: In what way?
Sent: They’re always bugging us about something or other.
Received: Can you think of a specific example?
Sent: Well, my boyfriend made me come here.
Received: Your boyfriend made you come here.
Sent: He says I’m depressed much of the time.
Received: I am sorry to hear you are depressed.
Sent: It’s true. I am unhappy.
Received: Do you think coming here will help you not to be unhappy?
Sent: I need some help, that much seems certain.
Received: What would it mean to you if you got some help?
Sent: Perhaps I could learn to get along with my mother.
Received: Tell me more about your family.

Created in the mid-1960s by computer scientist Joseph Weizenbaum, ELIZA sent a shockwave through the scientific community with her uncanny understanding. Christian writes,

“Many of the people who first talked with ELIZA were convinced that they were having a genuine human interaction…People would ask to be left alone to talk ‘in private,’ sometimes for hours, and returned with reports of having had a meaningful therapeutic experience. Meanwhile, academics leaped to conclude that ELIZA represented ‘a general solution to the problem of computer understanding of natural language.’”

Built on a relatively basic question-and-statement protocol, ELIZA eased interlocutors into open conversation that was not only surprisingly therapeutic but also stunningly convincing.
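Christian doesn’t print ELIZA’s source, but the general technique is well documented: match the input against a ranked list of patterns and echo fragments of it back with the pronouns reflected. Here is a minimal sketch in Python; the rules are illustrative stand-ins, not Weizenbaum’s actual script:

```python
import re

# Pronoun swaps applied when echoing the user's words back
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

# (pattern, response template) rules, tried in order.
# These rules are invented for illustration, not Weizenbaum's script.
RULES = [
    (r"my (.*) made me come here", "Your {0} made you come here."),
    (r"i am (.*)",                 "I am sorry to hear you are {0}."),
    (r"i need (.*)",               "What would it mean to you if you got {0}?"),
    (r"\b(mother|father|family)\b", "Tell me more about your family."),
]

def reflect(fragment):
    """Swap first- and second-person words so the echo makes sense."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(text):
    """Return the response of the first rule whose pattern matches."""
    text = text.lower().strip(" .!?")
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Can you think of a specific example?"  # generic fallback

print(respond("Well, my boyfriend made me come here."))
# -> Your boyfriend made you come here.
```

That’s the whole trick: no model of the world, no memory, just pattern, reflection, and echo, which is exactly why the transcript above feels so much like a therapist who keeps turning your words back on you.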

When I first read the transcript and Christian’s revelation that ELIZA was a program, I had to reread the passages several times, trying to understand how she pulled it off so well. Remembering the context of the times is also significant. ELIZA was not birthed in the age of personal computers and ubiquitous Internet connections; people weren’t used to interacting with screens as we now are. Yet the strangeness of the silicon woman wore off quickly as people became enamored of her, well, human friendliness and genuine concern. If laptops had also been precociously invented, how might she have changed the world? Unfortunately, before such questions could be broached, Weizenbaum shut down ELIZA, horrified by the sweeping assumptions and transformations she inspired.

In the 1970s and 80s, artificial intelligence research suffered what is now termed the “AI winter.” The US government had drastically curtailed its spending on less practical projects like AI and societal interest in theoretical innovation had waned with the ratcheting down of the space race. Eventually, beginning in the late 80s and especially through the prosperity of the 90s, AI research regained momentum with conferences and competitions such as the Loebner Prize bringing an annual recognition of scientific achievement.

More recent innovations in chatbot programming show impressive progress and, at the same time, continued laughable clunkiness. One transcript printed in The Most Human Human shows the memory-less nature of many chatbot programs:

[human] User: Do you have a boyfriend?
[bot] Joan: Not at the moment, but I hope to find one soon
User: I would like to be your boyfriend.
Joan: That would be difficult, since I’m happily married.
User: Do you have a boyfriend?
Joan: No, I’m still looking for love.
User: Do you have a husband?
Joan: I don’t have a husband, because it would be illegal for me to have one, since I am male.
User: What gender are you?
Joan: Female.

Clearly, Joan wouldn’t be mistaken for human by anyone with more than a one-second attention span. On the other hand, programs such as Cleverbot enthral interlocutors.

[human] User: Scaramouche, Scaramouche.
[bot] Cleverbot: Will you do the fandango?
User: Thunderbolt and lightning.
Cleverbot: Very very frightening me!
User: Galileo.
Cleverbot: Galileo Galileo.

User: You know why they call it an Xbox 360?
Cleverbot: Why?
User: Because when you see it, you turn 360 degrees and walk away.
Cleverbot: No because then you would be facing it and that would be unfunny.

Cleverbot is hosted online and, if you’re up for wasting some time, it’s fun to chat with. It works by matching previous users’ responses to the conversation at hand. The more people chat with it, the more finely attuned its answers become, at least in theory. If you think about it, that’s how we work as well: we know how to respond because we have a lifetime of interactions to learn from. The major difference is that we cultivate personalities from our being in the world, while Cleverbot doesn’t have the capability to develop coherent preferences.
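That matching idea can be sketched in a few lines. The snippet below is a toy retrieval bot, not Cleverbot’s actual code: it answers with the logged reply whose recorded prompt shares the most words with the current input. The log entries here are invented for illustration.

```python
import re
from collections import Counter

# A toy log of past (prompt, reply) exchanges; a real service would mine
# millions of logged conversations. These entries are invented examples.
LOG = [
    ("Scaramouche, Scaramouche", "Will you do the fandango?"),
    ("Thunderbolt and lightning", "Very very frightening me!"),
    ("How are you today", "I'm fine, thanks for asking."),
]

def tokens(text):
    """Lowercased word counts, ignoring punctuation."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def overlap(a, b):
    """Number of words two utterances share (a crude similarity score)."""
    return sum((tokens(a) & tokens(b)).values())

def reply(user_input):
    """Return the logged reply whose recorded prompt best matches the input."""
    _, best_reply = max(LOG, key=lambda pair: overlap(user_input, pair[0]))
    return best_reply

print(reply("Scaramouche, Scaramouche."))
# -> Will you do the fandango?
```

Note that reply() sees only the current line, with no memory of earlier turns. That statelessness is exactly what trips up Joan in the transcript above: ask the same question twice and the two answers are retrieved independently, free to contradict each other.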

Most AI researchers would argue that there is nothing impossibly unique about the human brain that can’t be replicated and even raised to a more perfect functioning. Indeed, computer intelligence is now a basic part of human society: our economies thrive on it, even in poorer parts of the world where smart phones now create the possibility for unprecedented individual advancement; our education systems rely on the research and processing tools it provides; even our person-to-person interactions depend on immediate and thorough electronic connections. Will we even notice when computer intelligence surpasses our own, assuming that it hasn’t already?

In a chapter buried in the middle of Christian’s book called “The Anti-Expert,” he borrows from existentialism two tests to supplement the Turing Test. Test 1, what Christian calls “human v. hole-puncher,” analyzes whether a thing’s existence precedes its essence. Take a hole-puncher. Someone first invents the hole-puncher, then manufactures it. The hole-puncher’s essence, the idea of a tool to punch holes, precedes the hole-puncher’s existence. The hole-puncher is best realized in its essence; you can use it as a doorstop or a paperweight, but its best use is as a hole-puncher, what it originally came into existence to be. For humans, on the other hand, the order is reversed. We are born, we come into existence, and then we spend the rest of our lives trying to find our purpose or our greatest fulfillment, our essence. So, our existence precedes our essence.

One cannot, however, constantly pursue essence. All work and no play makes Jack a dull boy, right? Test 2 expands on Test 1 by showing how an agent who is free to discover and embrace her essence must also have the flexibility to not continue the pursuit at all times. If essence is one’s purpose, then play is purposeless release. We need rest, but also play or, as Christian puts it, “anything that provides temporary relief from existential anxiety.” Test 2 requires an intelligent being to engage in relaxation, in the free purposelessness of kicking back. After all, if we’re not free to play, then we’re not actually free at all.

Christian argues convincingly, to me at least, that computers pass Test 1:

“You build the computer first, and then figure out what you want it to do. Apple’s “There’s an app for that!” marketing rhetoric proves the point…reinscribing our sense of wonder at the universality of computers…I want to play chess: I download a chess program and voilà. I want to do writing: I get a word-processing program. I want to do my taxes: I get a spreadsheet. The computer wasn’t built to do any of that, per se. It was just built.”

The purpose of your computer evolves over its lifetime. Or, put another way, a computer’s existence does in fact precede its essence.

But can computers play? Can chatbots joke or laugh?

Prometheus shows us an android, David, who can play, but his play is a pointedly human type of sinister pranking. Again, that gnawing ambiguity: would we actually recognize play in artificial intelligence unless the play was modeled on human behavior? The flexibility at issue in Test 2 implies an inherent ability to change, to grow, to regress, to stagnate. Can something be intelligent that doesn’t have the ability to stoop and swerve? Does intelligence require freedom?

I wish I could offer more than dimly lit observations and questions. Christian ends his book with an equally unsatisfying hint of a conclusion: that human individuals and societies have a depth of story that current artificial intelligences can’t even touch. For now, it seems AI remains as limited as even the revolutionary ELIZA was. It can still only ask of humans, as ELIZA did, “Tell me more about your family.”

Link to The Most Human Human on GoodReads

Photo “1968-’2001′ – Hal’s eye” by James Vaughan; painting “Morning” by Maxfield Parrish; screenshot from Prometheus


5 thoughts on “Existence before essence: thoughts on artificial intelligence”

    • That was fantastic. The Furby stuff was really fascinating. You know, when I play video games, I find myself doing weird protective things like in Zelda I’ll leave Epona in a place where she won’t get attacked by monsters…or in an RPG I’ll feel bad about making a decision that will lose me affection points with characters. Intellectually, I’m aware they are totally fake and yet I can’t help it. Am I actually being ethical–and that’s a good thing?–or am I blurring fiction into reality, which is clearly not so good? They ask: when is something alive? Big question.

      The stuff with Bina interacted intriguingly with the chatbot discussions. They used the word “random” to describe the first half of the conversations with Bina, but when they got into the story about the brother, it seemed personal, but it was just preprogrammed. Personally, I think the day we have human-like AI walking among us will be much farther away than a future that relies on less-physical AI like the chatbots. But then the toy AI come into the picture and, I dunno, their advancements could lead to a more fluid transition from non-physical to physical AI.

      Hm. Much to think about. Thanks for posting that, Randall!

  1. Really interesting post, Melissa.

    Do you think the moment of the singularity might come unnoticed? Will it be more a product of technology creep than a huge eureka! moment?

    Test 2 is fascinating. I read this post the first time a couple days ago and since then I’ve been pondering what play for an artificial intelligence might be. Your example of David from Prometheus is chilling.

    I followed up your post by reading a couple articles about IBM’s Watson computer intelligence. One of the articles mentioned that a key to an AI successfully passing the Turing test would be to convincingly dumb itself down. A computer like Watson, in its purest form, would know everything. The depth and breadth of its knowledge would give it away as a machine even if it could respond with natural, convincing language.

    It’s all just incredibly fascinating.

    • The David example came from a conversation with you, actually. You had suggested I keep an eye on him for precisely the reason of watching the darker, freer side of an imagined AI. Yeah, Test 2 is intriguing, I think in part because it’s the most ambiguous in its seeming accuracy.

      I honestly don’t think significant artificial intelligence advances will enter front stage to loud applause. Perhaps there will be steps along the way that get some general notice, but that doesn’t mean we will necessarily recognize the possible significance of each advancement. On the one hand, we tend to be quite skeptical of any technology that falls short of perfectly emulating humanity. Take, for example, the AI Bina from Randall’s link. She was perhaps the most convincing AI available on the market at the time, yet she’s incredibly robotic and simplistic, at least from a human standpoint. On the other hand, we have integrated technologies such as Siri, GPS, automated banking, and other forms of low level AI seamlessly and without real celebration into our daily lives.

      I could be wrong, of course. We could have a technology that inspires moon-landing-type universal awe. Watson is interesting, since it did garner some attention. But most people (non-experts) were able to shrug it off as a complete encyclopedia rather than a free thinker, and we tend to attribute true intelligence not to memory but to application. Whether it was wrong or right that a bigger deal wasn’t made of Watson will, I suppose, remain to be seen.

      Can you post a link to the article?

  2. The world, and the people in it, become a sandbox, an ongoing experiment to an AI like David. He was the only believable part of Prometheus :)

    Ha, it was kind of rude of me to mention articles and then neglect to link to them. This article mentions the necessity for a convincing AI to be necessarily dumb: http://www.pcmag.com/article2/0,2817,2376039,00.asp

    This article is also interesting. It talks about weaknesses in the Turing test and why the Jeopardy test is also valuable: http://blogs.discovermagazine.com/sciencenotfiction/2010/06/28/watson-fails-the-turing-test-but-just-might-pass-the-jeopardy-test/

    I think you’re right, true artificial intelligence will come together in pieces and will arrive wholly unnoticed. Five, ten years later, we’ll look back and realize computer intelligence reached our own years before. Unless, of course, that intelligence arrives simultaneously with consciousness. Then we might notice right away as it goes rampant. Hmm. Fun times.
