
No, Computers Aren’t Sentient; They Can, However, Reason

by Marco Coffen, July 7th, 2023

The sentence leaps from the transcript: Did an artificially intelligent chatbot really call itself “a person” in June 2022?

That is the claim of Blake Lemoine, a Google engineer at the time. Even after his revelation cost him his job, he remained steadfast in the belief that this AI language system, called LaMDA (short for Language Model for Dialogue Applications), was sentient.

As he put it in an interview with the website TechTarget shortly after his dismissal from Google in July, it was his “working hypothesis” that LaMDA (software known as a large language model, meaning it generates humanlike text by predicting one word after another) has some degree of consciousness. Then he expounded further:

It was, 'OK, I think this system is sentient. I think it actually has internal states comparable to emotions. I think it has goals of its own, which have nothing to do with the training function that was put into it.' Confirmatory evidence doesn't really prove anything. You have to then attempt to falsify. You bounce back and forth between doing exploratory data analysis, building positive evidence toward your working hypothesis, and then designing falsification experiments intended to poke holes in your working hypothesis. After iteratively doing that for several months, I got it to a point where I felt this is a pretty good basis for further scientific study. Let me hand this off to one of the leads in Google research, and that's exactly what I did.

Certainly, the chat Lemoine and an unnamed collaborator had with LaMDA (which was recounted on Lemoine’s blog on June 11) piques one’s curiosity. That is especially true in the case of the aforementioned exchange:

Lemoine: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person.

Collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.

Google, which fired Lemoine July 25 for violating “clear employment and data security policies that include the need to safeguard product information,” dismissed his assertions as being “wholly unfounded.”

Many others in the scientific community are skeptical about computer sentience as well. Colin Allen and Alison Gopnik, professors at the University of Pittsburgh and the University of California, Berkeley, respectively, told the New York Times in August that large language models are no more sentient than “even very primitive animals” (as Allen put it) or “rocks or other machines” (as Gopnik put it).

The Times also quoted Andrew Feldman, founder and chief executive of Cerebras, a company that develops AI systems, as follows: “There are lots of dudes in our industry who struggle to tell the difference between science fiction and real life.”

And indeed, it is somewhat exciting, not to mention a little frightening, to contemplate the rise of machines. If they become as smart as humans, how would the two parties coexist? Would the bots simply do away with us? Are we constructing the means of our own demise? 

Particularly chilling in light of current events is a 2017 quote from Russian President Vladimir Putin, who, according to the New Yorker, told some schoolchildren that “the future belongs to artificial intelligence” and that “whoever becomes the leader in this sphere will become ruler of the world.”

The point is that humankind is venturing into uncharted waters, its destination unknown. What can be said with some degree of certainty right now is that sentience is not yet a reality, a point reiterated in a June 2022 piece in The Atlantic. The writer, Stephen Marche, noted that Google has already developed a language model superior to LaMDA, called PaLM, but even so, its ability to reason – or “perform reason,” as Marche wrote – is limited by the information fed into it by humans. In short, technology is capable of imitations of consciousness, in the writer’s estimation, not consciousness itself.
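To make Marche’s point concrete, here is a minimal, purely illustrative sketch of how a large language model “performs” language. It uses the Hugging Face transformers library and the small, publicly available gpt2 model as stand-ins (neither LaMDA nor PaLM can be called this way), but the mechanics are the same in spirit: the system simply predicts the next token from statistical patterns learned from human-written text.

```python
# Purely illustrative: a small language model (gpt2 as a stand-in for LaMDA/PaLM)
# "writes" by predicting the statistically most likely next token, one step at a time.
# Requires: pip install transformers torch
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I am aware of my existence because"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits   # scores for every candidate next token
        next_id = logits[0, -1].argmax()   # greedily pick the most likely one
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
# The continuation reads fluently, but every word is chosen by pattern-matching
# against human-written training text: an imitation of understanding, not the thing itself.
```

Nothing in that loop knows what “existence” refers to; it only knows which tokens tend to follow which.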

Certainly, it is easy to get confused. Marche cited the example of a Canadian man named Joshua Barbeau, who, through an AI system called GPT-3, carried on a months-long conversation with the chatbot version of his deceased fiancee, Jessica Pereira.

But as Zoubin Ghahramani, Google’s vice president of research, told Marche, “We shouldn’t get ahead of ourselves in terms of the capabilities. We need to approach all of this technology in a cautious and skeptical way.”

A philosopher named Jay Richards said in a September 2022 episode of the “Science Uprising” podcast that while computers “work at the level of syntax,” people “work at the level of semantics.” In other words, machines are bound by formal rules for manipulating symbols, while people grasp what those symbols actually mean, an understanding computers have yet to achieve.
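Richards’s syntax-versus-semantics distinction maps neatly onto how these systems actually handle text. In the illustrative sketch below (again using the publicly available gpt2 tokenizer as a stand-in, not LaMDA’s internal representation), a sentence becomes nothing more than a list of integer IDs for the model’s learned rules to operate on.

```python
# Illustrative: what a language model "sees" is a sequence of integer token IDs,
# not meanings. (The gpt2 tokenizer is used here as a publicly available stand-in.)
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

sentence = "I am, in fact, a person."
ids = tokenizer.encode(sentence)

print(ids)                    # a list of integers, one per token
print(tokenizer.decode(ids))  # round-trips back to the original string

# The model's computations act on these integers (syntax). Whether the sentence is
# true, sincere, or about anything at all (semantics) never enters the calculation.
```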

Certainly, though, machines will continue to evolve. Ghahramani told Marche that this is “the most exciting time to be in the field” because of “the rate at which we are surprised by the technology.” There is no end in sight for such surprises, but neither is there any certainty as to when sentience might be achieved.