When Is an AI System Sentient?

How can we tell whether an AI program “thinks” or “feels”? In the recent debate over Blake Lemoine’s claims about LaMDA, a functionalist approach can help us understand machine consciousness and feelings. It turns out that Mr Lemoine’s claims are exaggerated and that LaMDA cannot sensibly be said to feel anything.


Blake Lemoine and Google LaMDA: Asking the right questions

In the past few days, there has been a lot of discussion around the case of Blake Lemoine, a Google engineer who was put on leave following his public claims that a computer program called LaMDA had become sentient and that it should be treated as a person.

This is a fascinating case in many respects; the actual claim of computer sentience is the least interesting of them.

Primarily, the whole debate is a good exercise in asking the right questions and flagging the wrong ones. In the news, all kinds of issues get mixed up and stirred together, until the resulting mess is impossible to sort out again. Should Mr Lemoine be fired by Google or is he a martyr for truth? Does his program have a soul? Should we better regulate what AI companies are doing? Do we need to protect the program’s rights or respect its feelings? Is exploiting machines a form of slavery? And what is the relevance of Mr Lemoine labelling himself as a Cajun Discordian and a priest?

Let’s try to untangle the threads and look at the questions one by one.

The Lemoine LaMDA transcript

The whole discussion started when Mr Lemoine published the transcript of a conversation between himself, a colleague, and the AI program LaMDA, trying to make the case that LaMDA is intelligent, sentient, and self-aware, and that it even, as he said in an interview, has a soul.

I will give you a few of the most interesting quotes below, but the whole thing is worth reading if you want to make up your own mind about the capabilities of LaMDA. What nobody questions is that LaMDA is an amazing piece of software that can sustain an interesting and human-like dialogue about very difficult topics, and I suspect that it could very likely pass a Turing test. But does this mean that the program is sentient or that it has a soul?


We will read the transcript charitably: that is, we won’t assume that it’s faked or cherry-picked (although it could well have been), or that all the answers have been pre-programmed into the machine. We will assume that it produces its answers dynamically and spontaneously in response to the questions, and that the content of the answers was as surprising to the researchers as it is to us. So we will give the program the benefit of the doubt and then see if a case can be made that LaMDA is sentient, a person, or in any relevant way equivalent to a …

Read the full article, published on Daily Philosophy (external link).
