
ELIZA Reinterpreted: The World’s First Chatbot Was Not Intended as a Chatbot at All

by Machine Ethics, September 10th, 2024

Too Long; Didn't Read

Weizenbaum built ELIZA as a research tool for human-machine communication, not to simulate psychoanalytic conversations. Its true purpose was later misinterpreted.

Abstract and 1. Introduction

  2. Why ELIZA?
  3. The Intelligence Engineers
  4. Newell, Shaw, and Simon’s IPL Logic Theorist: The First True AIs
  5. From IPL to SLIP and Lisp
  6. A Critical Tangent into Gomoku
  7. Interpretation is the Core of Intelligence
  8. The Threads Come Together: Interpretation, Language, Lists, Graphs, and Recursion
  9. Finally ELIZA: A Platform, Not a Chat Bot!
  10. A Perfect Irony: A Lisp ELIZA Escapes and is Misinterpreted by the AI Community
  11. Another Wave: A BASIC ELIZA turns the PC Generation on to AI
  12. Conclusion: A certain danger lurks there
  13. Acknowledgements and References

2 Why ELIZA?

Why Joseph Weizenbaum built ELIZA does not appear to be much of a mystery; isn’t the answer just as simple as Pamela McCorduck put it in Machines Who Think [33]:


“[...] Weizenbaum got interested in language. Ed Feigenbaum introduced him to [...] Kenneth Colby, a psychiatrist who had [...] turned to computers as a possible way of gaining new insights into neurotic behavior. [...] In 1963, Weizenbaum went to MIT, [where] he designed a program that would answer [simple questions]. It was a short, tricky program, based on sleight of hand, and it led Weizenbaum to ask himself some very serious questions about mystification and the computer [...]. [If] you could do a simple question-answering machine, why not a complicated one? How different would complexity make such a machine? Could you seem to have complex responses based on simple rules? [...] Weizenbaum drove into work many a morning with his neighbor Victor Yngve, who had developed the COMIT language, for pattern matching. If you were going to play around with matching patterns, why not the patterns in English words and sentences? ELIZA was the result. ELIZA was intended to simulate—or caricature, as Weizenbaum himself suggests—the conversation between a Rogerian psychoanalyst and a patient, with the machine in the role of analyst.”


The reason that Weizenbaum gives McCorduck for choosing the Rogerian setting was the simplicity of creating an “illusion of mutual understanding”:


“What I mean here is the cocktail party conversation. Someone says something to you that you really don’t fully understand, but because of the context and lots of other things, you are in fact able to give a response which appears appropriate, and in fact the conversation continues for quite a long time. We do it all the time, not only at cocktail parties. Indeed, I think it’s a very necessary mechanism, because we can’t, even in serious discussion, probe to the limit of possible understanding. [...] That’s necessary. It’s not cheating.”[33, pp. 251–253]


When Weizenbaum was looking for a context where he could carry on that sort of illusion, he needed one where ignorance would not destroy the illusion of understanding: “For example [he goes on], in the psychiatric interview the psychiatrist says, tell me about the fishing fleet in San Francisco. One doesn’t say, “Look, he’s a smart man—how come he doesn’t know about the fishing fleet in San Francisco?” What he really wants to hear is what the patient has to say about it. [...]”[33, pp. 251–253] [italics as in the original]
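The mechanism being described, a small set of keyword rules plus a neutral fallback that keeps the conversation going when nothing matches, can be sketched in a few lines. The rules below are invented for illustration; Weizenbaum's original program was written in MAD-SLIP and driven by a far richer script (the famous DOCTOR script), so this is a caricature of the caricature, not his implementation.

```python
import re

# Illustrative ELIZA-style decomposition/reassembly rules (hypothetical,
# not from Weizenbaum's DOCTOR script). Each rule pairs a pattern that
# captures part of the user's utterance with a response template.
RULES = [
    (re.compile(r"\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]

# The non-committal fallback is what sustains the "illusion of mutual
# understanding": ignorance never shows, it just invites more talk.
DEFAULT = "Please go on."

def respond(utterance: str) -> str:
    """Return the first matching reassembly, else the neutral prompt."""
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I need a vacation"))  # -> Why do you need a vacation?
print(respond("It rained today"))    # -> Please go on.
```

Note how the fallback plays exactly the role the fishing-fleet anecdote describes: when the program has no relevant rule, it does not expose its ignorance but simply turns the floor back to the speaker.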


However, as mentioned above, the title of Weizenbaum’s 1966 paper belies – or at least complexifies – the post hoc account he gave McCorduck. The title of the paper is not “ELIZA – A Computer Program that Simulates or Caricatures the Conversation Between a Rogerian Psychoanalyst and a Patient”, but rather: “ELIZA – A Computer Program For the Study of Natural Language Communication Between Man And Machine”.[47] This, and other features of that paper that I go into below, as well as a recently discovered manuscript signed in Weizenbaum’s own hand[2], suggest that ELIZA was not, as McCorduck put it, “intended to simulate—or caricature [...] the conversation between a Rogerian psychoanalyst and a patient”, but rather that it was intended, just as the title of the paper says, as a platform for research in [human]-machine communication. To understand the details of this misinterpretation, we need to understand the context in which ELIZA appeared, and how ELIZA came into the public eye.


Author:

(1) Jeff Shrager, Blue Dot Change and Stanford University Symbolic Systems Program (Adjunct)( [email protected]).


This paper is available on arxiv under CC BY 4.0 license.