One of my favorite books I’ve read in the past year is a philosophical analysis of technology and artificial intelligence called The Most Human Human by Brian Christian. The book is about an annual competition called the Loebner Prize that pits chatbots against humans to gauge the state of artificial intelligence. The competition is built on the Turing Test, which holds that if someone can chat remotely with a computer and mistake it for a human, then the computer must be intelligent. The reasoning is that any program that can navigate something as complex as human language, as rigid yet as wandering, can be called intelligent.
The elegant simplicity of the Turing Test is part of what makes it so appealing. Because the test is conducted remotely, judges aren’t distracted by how the AI looks or sounds. While intelligence isn’t derived only from language skills, ease of banter can be a pretty accurate initial barometer. And interacting with strangers remotely is an increasingly common occurrence. Right now, you are reading my blog post and, I hope, you assume me to be human. You may have a phone interview with a potential employer and take for granted how real the encounter is. You’ve probably had a live chat with an IT consultant that you were pretty sure wasn’t robotic. Remote interaction, with all its benefits and drawbacks, is becoming a regular aspect of our society.
Returning to Christian’s book: the Loebner Prize pairs each chatbot competitor and each human participant in a remote conversation with a judge on a particular topic. At the end of the chat session, the judge rates whether she believes her conversant to be human or binary. The chatbot with the most “human” ratings wins the title of Most Human Computer for the year, a highly coveted recognition of artificial intelligence that grants the winning chatbot’s proprietors academic prestige and lucrative development contracts.
An alternative prize, The Most Human Human, is awarded to the human participant with the most “human” ratings. A prize for the human judged most human? How could a person ever be mistaken for a machine? Sadly, it may not be as hard as it seems. After all, so many day-to-day human interactions, whether among strangers or spouses, are remarkably sterile and detached. But that’s a topic for another day.
Christian supplies pages of enlightening and hilarious transcripts of human/bot conversations, not just from the Loebner Prize but from as far back as the late 1950s and ’60s, just after the untimely death of Alan Turing himself. The transcript below is from an early, innovative program named ELIZA:
[human] Sent: Men are all alike.
[bot] Received: In what way?
Sent: They’re always bugging us about something or other.
Received: Can you think of a specific example?
Sent: Well, my boyfriend made me come here.
Received: Your boyfriend made you come here.
Sent: He says I’m depressed much of the time.
Received: I am sorry to hear you are depressed.
Sent: It’s true. I am unhappy.
Received: Do you think coming here will help you not to be unhappy?
Sent: I need some help, that much seems certain.
Received: What would it mean to you if you got some help?
Sent: Perhaps I could learn to get along with my mother.
Received: Tell me more about your family.
Created in the mid 1960s by MIT computer scientist Joseph Weizenbaum, ELIZA sent a shockwave through the scientific community with her uncanny apparent understanding. Christian writes,
“Many of the people who first talked with ELIZA were convinced that they were having a genuine human interaction…People would ask to be left alone to talk ‘in private,’ sometimes for hours, and returned with reports of having had a meaningful therapeutic experience. Meanwhile, academics leaped to conclude that ELIZA represented ‘a general solution to the problem of computer understanding of natural language.’”
Structured on a relatively basic protocol of matching keywords and reflecting statements back as questions, ELIZA eased interlocutors into open conversation that was not only surprisingly therapeutic, but also stunningly convincing.
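To make that protocol concrete, here is a minimal sketch in the spirit of ELIZA's keyword-and-reflection approach. This is not Weizenbaum's actual script; the rules, the `reflect` helper, and the `respond` function are all illustrative inventions, loosely modeled on lines from the transcript above.

```python
import re

# Pronoun swaps applied to the user's words before echoing them back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# (pattern, response template) pairs, checked in order; the first match wins.
RULES = [
    (r"i need (.*)", "What would it mean to you if you got {0}?"),
    (r"(.*) my (mother|father|family)(.*)", "Tell me more about your family."),
    (r"i am (.*)", "I am sorry to hear you are {0}."),
    (r"(.*)", "Can you think of a specific example?"),  # catch-all fallback
]

def reflect(phrase: str) -> str:
    """Swap first- and second-person words so the echo sounds natural."""
    return " ".join(REFLECTIONS.get(w, w) for w in phrase.lower().split())

def respond(utterance: str) -> str:
    """Return a canned, reflected response to one line of user input."""
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."
```

A handful of such rules is enough to reproduce exchanges like the one above ("I need some help" yields "What would it mean to you if you got some help?"), which is part of what made ELIZA's effect on people so startling: the trick is shallow, yet the conversation feels attentive.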
When I first read the transcript and Christian’s revelation that ELIZA was a program, I had to reread the passages several times, trying to understand how she pulled it off so well. Remembering the context of the times is also significant: ELIZA was not born into the age of personal computers and ubiquitous Internet connections, and people weren’t used to interacting with screens as we are now. Yet the strangeness of the silicon woman wore off quickly as people became enamored with her, well, human friendliness and seemingly genuine concern. If personal computers had arrived a few decades early, how might she have changed the world? Unfortunately, before such questions could be broached, Weizenbaum shut down ELIZA, horrified by the sweeping assumptions and transformations she inspired.
In the 1970s and 80s, artificial intelligence research suffered what is now termed the “AI winter.” The US government had drastically curtailed its spending on less practical projects like AI and societal interest in theoretical innovation had waned with the ratcheting down of the space race. Eventually, beginning in the late 80s and especially through the prosperity of the 90s, AI research regained momentum with conferences and competitions such as the Loebner Prize bringing an annual recognition of scientific achievement.
More recent innovations in chatbot programming show impressive progress and, at the same time, continued laughable clunkiness. One transcript printed in The Most Human Human shows the memory-less nature of many chatbot programs:
[human] User: Do you have a boyfriend?
[bot] Joan: Not at the moment, but I hope to find one soon
User: I would like to be your boyfriend.
Joan: That would be difficult, since I’m happily married.
User: Do you have a boyfriend?
Joan: No, I’m still looking for love.
User: Do you have a husband?
Joan: I don’t have a husband, because it would be illegal for me to have one, since I am male.
User: What gender are you?
Clearly, Joan wouldn’t be mistaken for human by anyone with more than a one-second attention span. On the other hand, programs such as Cleverbot enthrall interlocutors.
[human] User: Scaramouche, Scaramouche.
[bot] Cleverbot: Will you do the fandango?
User: Thunderbolt and lightning.
Cleverbot: Very very frightening me!
Cleverbot: Galileo Galileo.
User: You know why they call it an Xbox 360?
User: Because when you see it, you turn 360 degrees and walk away.
Cleverbot: No because then you would be facing it and that would be unfunny.
Cleverbot is hosted online and, if you’re up for wasting some time, it’s fun to chat with. Under the hood, it works by matching previous user responses to the conversation at hand. The more people chat with it, the more finely attuned its answers can be, at least in theory. If you think about it, that’s how we work as well: we know how to respond because we have a lifetime of interactions to learn from. The major difference is that we cultivate personalities from our being in the world, and Cleverbot doesn’t have the capability to develop coherent preferences.
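The matching idea can be sketched in a few lines. This toy is nothing like Cleverbot's real engine; the `RetrievalBot` class, its string-similarity scoring, and the seed lines are all assumptions made for illustration. The bot simply remembers what humans said in past conversations and replies with whatever a human once said after the stored line most similar to the current input.

```python
from difflib import SequenceMatcher

class RetrievalBot:
    """Toy retrieval-based bot: answer with whatever a past human said
    in reply to the remembered line most similar to the current input."""

    def __init__(self):
        self.memory = []  # list of (their_line, human_reply) pairs

    def learn(self, their_line: str, human_reply: str) -> None:
        """Record one observed exchange for future reuse."""
        self.memory.append((their_line.lower(), human_reply))

    def reply(self, line: str) -> str:
        if not self.memory:
            return "Tell me more."
        # Pick the remembered line closest to the input by string similarity.
        best = max(
            self.memory,
            key=lambda pair: SequenceMatcher(None, line.lower(), pair[0]).ratio(),
        )
        return best[1]
```

Seed it with the Bohemian Rhapsody exchange above (`bot.learn("Scaramouche, Scaramouche.", "Will you do the fandango?")`) and it will complete the lyric when prompted. It also shows why such bots contradict themselves like Joan does: each reply is borrowed from a different past human, with no single personality behind them.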
Most AI researchers would argue that there is nothing impossibly unique about the human brain, nothing that can’t be replicated and perhaps even improved upon. Indeed, computer intelligence is now a basic part of human society: our economies thrive on it, even in poorer parts of the world where smartphones now create the possibility for unprecedented individual advancement; our education systems rely on the research and processing tools it provides; even our person-to-person interactions depend on immediate and thorough electronic connections. Will we even notice when computer intelligence surpasses our own, assuming it hasn’t already?
In a chapter buried in the middle of Christian’s book called “The Anti-Expert,” he borrows two tests from existentialism to supplement the Turing Test. Test 1, what Christian calls “human v. hole-puncher,” analyzes whether a thing’s existence precedes its essence. Take a hole-puncher. Someone first invents the hole-puncher, then manufactures it. The hole-puncher’s essence, the idea of a tool to punch holes, precedes the hole-puncher’s existence. The hole-puncher is best realized in its essence; you can use it for a doorstop or a paperweight, but its best use is as a hole-puncher, what it originally came into existence to be. For humans, on the other hand, the order is reversed. We are born, we come into existence, and then we spend the rest of our lives trying to find our purpose or our greatest fulfillment, our essence. Our existence precedes our essence.
One cannot, however, constantly pursue essence. All work and no play makes Jack a dull boy, right? Test 2 expands on Test 1 by showing how an agent who is free to discover and embrace her essence must also have the flexibility to not continue the pursuit at all times. If essence is one’s purpose, then play is purposeless release. We need rest, but also play or, as Christian puts it, “anything that provides temporary relief from existential anxiety.” Test 2 requires an intelligent being to engage in relaxation, in the free purposelessness of kicking back. After all, if we’re not free to play, then we’re not actually free at all.
Christian argues convincingly, to me at least, that computers pass Test 1:
“You build the computer first, and then figure out what you want it to do. Apple’s “There’s an app for that!” marketing rhetoric proves the point…reinscribing our sense of wonder at the universality of computers…I want to play chess: I download a chess program and voilà. I want to do writing: I get a word-processing program. I want to do my taxes: I get a spreadsheet. The computer wasn’t built to do any of that, per se. It was just built.”
The purpose of your computer evolves over its lifetime. Or, put another way, a computer’s existence does in fact precede its essence.
But can computers play? Can chatbots joke or laugh?
The film Prometheus shows us a bot that can play, but its play is a pointedly human type of sinister pranking. Again that gnawing ambiguity: would we actually recognize play in an artificial intelligence unless the play was modeled off human behavior? The flexibility at issue in Test 2 implies an inherent ability to change, to grow, to regress, to stagnate. Can something be intelligent that doesn’t have the ability to stoop and swerve? Does intelligence require freedom?
I wish I could offer more than dimly lit observations and questions. Christian ends his book with an equally unsatisfying hint of a conclusion: that human individuals and societies have a depth of story that current artificial intelligences can’t even touch. For now, it seems, AI remains as limited as even the revolutionary ELIZA was. Chatbots can still only ask of humans, as ELIZA did, “Tell me more about your family.”
Link to The Most Human Human on Goodreads