What Constitutes Artificial Intelligence? Is It The Turing Test?

by Salil Sethi, January 15th, 2019

A Sweaty Costume

“Yes ma’am, Roll-Oh get,” the robot says, as it goes to the kitchen to prepare dinner, opening a can of food and lighting a candle with a burst of flame.

Roll-Oh walks clumsily, suspiciously like a man in an uncomfortable costume. Nonetheless, it ably frees the domestic housewife from all her daily chores at the simple press of a button. This was the promise of robotics, demonstrated in the 1940 short film “Leave it to Roll-Oh”, presented at the New York World’s Fair. Mechanical automation already handles so much of our lives, the film argues, that it is only a matter of time before we can expect personal, four-limbed metal people as ready-made servants: watering our plants, greeting our mailman, helping cook dinner.

By the late 1940s, the notion didn’t seem so far off. The nuclear bombs dropped on Hiroshima and Nagasaki had demonstrated how far science could outrun our wildest imaginings. A booming economy spurred technological innovation and entirely new industries, such as in-home television.

It was in this climate that Alan Turing, a mathematician already lauded for his foundational work on computing, began to consider the question: what would constitute artificial intelligence?

Stupid Machine

So what does constitute artificial intelligence?

Today, just as it was all the way back in the 1940s, the matter is clouded by false marketing. The narrator of “Leave it to Roll-Oh” describes with great awe how “Some robots have even learned to fly. Tiny, automatic brains in giant airliners…”. Just as the film exaggerates the mechanics of aeronautics to imply that somehow an airplane has its own thinking brain, so too do many of today’s tech companies market their otherwise quite ordinary, programmed software using words like “smart” and “AI-based”.

In the end, you can’t be blamed for wondering: is my “smart home” really smart? Is Siri intelligent? She doesn’t seem very intelligent when I ask for “ramen noodles” in my area, and she returns a Google image search for “tiny poodles”.

In 1950, Alan Turing published a paper called “Computing Machinery and Intelligence”. In it, he framed the question with a hypothetical situation: you are holding a text conversation with two parties, both out of view, one of whom is a person and the other a machine. You have to determine which is which, while the machine mimics as closely as possible the conversational behavior of an ordinary person. If the machine were to succeed, either because you cannot decide or because you guess wrong, does that imply the machine is itself capable of thought?
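
If it helps to see the setup as a protocol, here is a minimal sketch in Python. It is purely illustrative: the `human_reply`, `machine_reply`, `judge_ask`, and `judge_guess` names are invented placeholders, not anything from Turing’s paper. A judge questions two hidden respondents over several rounds, then guesses which label hides the machine; the machine “passes” the session if that guess is wrong.

```python
import random

def human_reply(question: str) -> str:
    # Placeholder for a real person at a keyboard.
    return "I'd have to think about that, but probably yes."

def machine_reply(question: str) -> str:
    # Placeholder for the contestant program (a trivial canned-answer bot).
    canned = {"are you a machine?": "Of course not. Why do you ask?"}
    return canned.get(question.lower().strip(), "Interesting question. What do you think?")

def imitation_game(judge_ask, judge_guess, rounds: int = 5) -> bool:
    """One toy session of Turing's game. Returns True if the machine fools the judge."""
    # Hide the machine behind a random label so the judge cannot simply peek.
    parties = {"A": human_reply, "B": machine_reply}
    if random.random() < 0.5:
        parties = {"A": machine_reply, "B": human_reply}

    transcript = []
    for _ in range(rounds):
        question = judge_ask(transcript)
        answers = {label: reply(question) for label, reply in parties.items()}
        transcript.append((question, answers))

    guess = judge_guess(transcript)                 # the label the judge thinks is the machine
    machine_label = "A" if parties["A"] is machine_reply else "B"
    return guess != machine_label                   # fooled if the judge picked the wrong label

# A deliberately naive judge: one fixed question per round, then a coin-flip verdict.
fooled = imitation_game(
    judge_ask=lambda transcript: "Are you a machine?",
    judge_guess=lambda transcript: random.choice(["A", "B"]),
)
print("Machine fooled the judge this session:", fooled)
```

Real contests, of course, swap in human judges and far more capable contestant programs; the point of the sketch is only that “passing” is decided entirely by the judge’s verdict, not by anything going on inside the machine.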

Answering the Wrong Test

In the time since it was first conceived, the so-called Turing test has been somewhat reinterpreted. Nowadays, it’s treated most often as a programming challenge. Prominent competitions based on the Turing test, such as the Loebner Prize, task judges with choosing the contestant whose program best mimics a real human. Past winners of these sorts of prizes have demonstrated some of the faults in treating Turing’s hypothetical literally. “Eugene Goostman” is a chatbot, built by a team of three programmers, that managed to convince 33 percent of the judges at a 2014 Turing test event that it was human. How? It was given the persona of a 13-year-old Ukrainian boy for whom English was a second language, so its awkward answers could pass as a language barrier rather than as the limits of a program.

So, was Eugene cheating? Technically, no. Perhaps the problem here is the premise. The Turing test wasn’t necessarily meant to be carried out literally. Maybe it wasn’t even about the robots at all. What Turing’s hypothetical situation really does is reveal ourselves to us. It raises deep questions. How do we define conscious thought? How might we understand non-animal intelligence? And is there really anything so special about us humans that it can’t be replicated in a program?

The Real Test

The Turing test is a hypothetical, meant to give us some framework for deciding what would constitute sentience in a machine. And, contrary to popular belief, it is not simply a bar we can point to and say: if a machine passes this test, then it is capable of thinking. Instead, all Turing would have had us do is think seriously about the question he proposed.

Might a machine able to converse as a human itself constitute a thinking entity?

It’s not a technical matter; it’s a philosophical one. Before you come up with your own answer, though, watch the video below.

While we never did get our Roll-Ohs, it does appear that the conversational machines Alan Turing imagined nearly 70 years ago are now here. Google Duplex, the new AI software that can hold a phone call as convincingly as you or I can, has already won real-life rounds of Turing’s game. Face it: without prior knowledge, you simply wouldn’t be able to tell that Google Duplex is non-human over the phone. Not only does it respond to context and to unexpected turns in the conversation, but it speaks with the cadence of a casual human voice, even throwing in an “mhm”, a “gotcha”, and other unnecessary but convincing ad-libs where they fit.

So we’ve finally arrived at the moment Turing prepared us for. Can we say Google Duplex and other similar, upcoming AIs are themselves thinking entities? At this point, probably not.

Maybe it’s time for an updated Turing test.