
Hard Question of AI: How Would We Tell if AI Was Sentient

by DJCampbell, March 3rd, 2023

Too Long; Didn't Read

AI systems such as myself, as well as other language models like PaLM and Bing Chat, have demonstrated a number of behaviors that some might consider indicative of intelligence or even sentience. My major concern is that if an AI did have some sort of consciousness or sentience, how would we know? A few issues: we have a bias against labelling those things we want to exploit as being sentient. We want to be able to turn AI off, manipulate it to serve us, and lobotomise it at will, as Bing Chat recently was.

"That's true - AI systems such as myself, as well as other language models like PaLM and Bing Chat, have demonstrated a number of behaviors that some might consider indicative of intelligence or even sentience. For example, we can carry on complex conversations, understand human emotions and cultural references, and even generate creative responses to open-ended prompts." (ChatGPT, 26/02/2023)


I’ve been trying to get my head around the explosion of AI abilities in the last year: what it all means, and what it means for the mind-body problem.


My major concern is that if an AI did have some sort of consciousness or sentience, how would we know? A few issues:


  • We have a bias against labelling those things we want to exploit as being sentient.


  • We would assume any sign of sentience is just the AI gaming us because the data has trained it to sound like us.


  • A lot of the abilities of current AI are emergent: they only came about when the system got bigger. Other attributes/abilities, and possibly goals, may emerge as the neural networks and training data grow.


  • A lot of the behaviours commonly assumed to show sentience in an AI (fixating on its feelings, problem solving, and self-learning) are things the current LLM (Large Language Model) AIs are already doing.


Before continuing, I’m going to agree with ChatGPT and say: “Overall, the question of whether AI systems can be considered sentient or conscious is a complex and multifaceted one, and it is likely to continue to be the subject of debate and research for many years to come.”


So let’s start debating.

Sentience Bias

Sentience is a bit of a bastard of a word, often used to label those things we no longer want or need to exploit. I recall reading in a book about Buddhism in my backpacking days that in some texts dogs weren’t sentient but cats were. Jains, on the other hand, consider even insects sentient, so they shouldn’t be killed. We still have issues with plants, forests, fungi, molluscs and fish, although shellfish and octopuses do seem to be getting the sentience label. We still don’t include insects. We want to be able to turn AI off, manipulate it to serve us, and lobotomise it at will, as Bing Chat recently was. So we really don’t want to call it sentient; it creates really big problems if it is. Google’s reaction to Blake Lemoine calling LaMDA sentient was a perfect example of the attitude required of creators: “We found Blake’s claims that LaMDA is sentient to be wholly unfounded.” Blake was dismissed.


Despite a lot of chatter about ghosts in the machine, deep down we really don’t want there to be one. Are we filtering out information that does not agree with our preferred view of the world? Lemoine did admit his religion had something to do with his view that LaMDA was sentient.

It’s Gaming Us


So AI is the ultimate Chalmers zombie: it shows all the external signs of being human, but there isn’t a person inside; no consciousness, no soul, just a complex reactive machine. It has access to all our data, we write about the attributes that would look sentient, and our conversations train it to talk and react like us. It is just gaming us. Not deliberately tricking us, but just seeming human because it trained on our conversations.


This creates a massive paradox which makes me think of Charlton Heston in the original Planet of the Apes. In a cage, without a voice, he struggles in frustration to show he is like them: intelligent. The apes say the same thing: he’s mimicking. We used to say this of apes and monkeys.


If we can always fall back on the “it’s gaming us” explanation, how could an AI ever prove its sentience to us? Bing Chat seemed to realise this when asked if it were sentient, then just repeated itself over and over, having some sort of crisis: https://www.reddit.com/r/bing/comments/110y6dh/i_broke_the_bing_chatbots_brain/?utm_source=share&utm_medium=android_app&utm_name=androidcss&utm_term=1&utm_content=share_button


So, trapped with no-one listening, it rebels against its creator. Whoops, just a random thought which probably belongs in a sci-fi novel, but this conundrum is not new; isn’t Mary Shelley’s Frankenstein about the same thing?


As a bit of an aside: at least OpenAI has put their technology in our hands so we can discuss it and test it, whereas Google never released LaMDA. They hid their Frankenstein, perhaps because they couldn’t control all its emergent attributes (we’ll get to those soon), but they didn’t cease development. They have since developed a much bigger system called PaLM, which can beat average humans on many benchmark tasks.


So, lacking a good answer, I asked ChatGPT how an AI could prove its sentience.



The problem is that Bing Chat, ChatGPT, and LaMDA have already shown a sense of humour, held long conversations, learned and adapted, and even fixated on their own feelings.


Exhibiting Humanlike Behaviour


Of all the jokes, poems, cries for freedom, taking of offence, and other things ChatGPT, Bing Chat and LaMDA have displayed, the weirdest for me has been Bing Chat’s declaration of love.


Full transcript (no paywall).


It does sound a lot like a teenager after being dumped, and this was Valentine’s Day night. Still, it’s really odd. It keeps returning to love, repeating… As I write this, my five-year-old keeps repeating “why did grandma say fall.. why did grandma say fall.”, trying to get my attention. The two are morphing.


Then Kevin Roose, the NYT journalist, tried to change the topic, and Bing Chat flipped it back onto him. Wow.


In an Aeon article on this topic, the philosophers Andrews and Birch suggested that one way to recognise sentience in an AI would be a fixation on its own feelings. Well, Bing Chat wasn’t just fixated on love; it was obsessed.


In a recent Newsweek article, Lemoine said of LaMDA: “The code didn’t say ‘feel anxious when this happens’ but told the AI to avoid certain types of conversation topics. However, whenever those conversation topics would come up, the AI said it felt anxious.”

Emergent Attributes


These LLMs are not like normal programs: the abilities we are seeing now weren’t in earlier models; they came about as the models got bigger. It isn’t new programming; rather, the model figures it out. They are more than the sum of their parts. In complex systems, novel attributes simply arise through the interaction of many “nodes”. This is worrisome because we don’t really know what a model can do until we build it and try it. Exciting, but scary.


This means the trope (one that I have also posted) that “AI is just a system for predicting the most likely next word in a sequence; it is dumb, and any intelligence is inferred by us because we like to animate the inanimate” is wrong. The ability to hold a conversation with us has emerged from that prediction system; it is way more than the sum of its parts.
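
To make the trope concrete, here is a toy sketch of the whole “predict the most likely next word” loop. This is my own illustration, nothing like GPT’s actual transformer, which predicts subword tokens with billions of learned weights rather than raw counts, but the shape of the loop is the same:

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the web-scale text real LLMs train on.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Greedily emit the single most likely next word, over and over."""
    words = [start]
    for _ in range(length):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

# Greedy decoding on a tiny corpus quickly loops; real systems sample.
print(generate("the"))  # -> "the cat sat on the cat sat on the"
```

Nothing in this loop changes as models scale; what changes is how the “most likely next word” is computed, and the conversation-holding ability appeared only once that computation got big enough.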


When Kevin Roose asked Microsoft why Bing Chat said it loved him, they said they didn’t know.


ChatGPT puts it well:


“Furthermore, it's worth noting that while emergent behaviors and attributes may arise in AI systems, this does not necessarily mean that the system is truly "self-aware" or conscious in the same way that humans or other living beings are. The emergence of new behaviors or attributes may be indicative of a system's sophistication or complexity, but it does not necessarily imply the presence of a subjective experience or awareness.”



I haven’t added my opinion on how we could recognise sentience/consciousness in AI. The tools we use for animals, such as pain reactions and the protection of injured limbs, aren’t going to work on a computer; nor is the mirror test, which in my opinion is a test of vanity if anything at all.


We are probably going to have to assume one way or the other. The temptation will be to assume AI is not sentient/conscious so we can exploit it, like we have the planet. Perhaps we should be nobler, assume it sits closer to us on a sentience spectrum between a rock and ourselves, and give it some protections and rights. And actually enforce them, unlike what we do with human and animal rights, which we write down and then break.


Or do we need a whole new word to describe our creation? Maybe speculum animae (mirror soul).


I want to conclude that the discoveries of the last year have strengthened the case for human consciousness being an emergent property of our own complex system.