Google’s AI is not sentient. Not even slightly

A Google AI engineer has been put on leave after claiming that an AI has become sentient. However, this is an illusion, caused by a clever language model and a human anthropomorphising it, writes Gary Marcus.

Blaise Aguera y Arcas, polymath, novelist, and Google VP, has a way with words. When he found himself impressed with Google’s recent AI system LaMDA, he didn’t just say, “Cool, it creates really neat sentences that in some ways seem contextually relevant”, he said, rather lyrically, in an interview with The Economist on Thursday, 
“I felt the ground shift under my feet … increasingly felt like I was talking to something intelligent.”
Nonsense. Neither LaMDA nor any of its cousins (such as GPT-3) is remotely intelligent. All they do is match patterns, drawn from massive statistical databases of human language. The patterns might be cool, but the language these systems utter doesn't actually mean anything at all. And it sure as hell doesn't mean that these systems are sentient. Which doesn't mea…
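
Marcus's point here is a technical one, and a toy sketch can make it concrete. The code below is emphatically not how LaMDA works (LaMDA is a large transformer network, not a bigram table); it is a deliberately crude, hypothetical illustration of a system that does nothing but match patterns drawn from word statistics, yet can still emit locally plausible text. The corpus, the `follows` table, and the `generate` function are all invented for this sketch.

```python
# Toy sketch, NOT LaMDA's architecture: a bigram "language model" that
# only matches patterns from word-pair statistics. It has no grasp of
# meaning, yet its output can look superficially fluent.
from collections import defaultdict, Counter
import random

# A tiny invented corpus standing in for "massive databases of human language".
corpus = (
    "the ground shifted under my feet and i felt like i was "
    "talking to something intelligent and the ground felt strange"
).split()

# Count which word follows which: pure statistics, no understanding.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 10) -> str:
    """Emit words by sampling continuations in proportion to their counts."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the ground shifted under my feet and i felt like"
```

Scale that idea up by many orders of magnitude, with far subtler statistical regularities, and you get output convincing enough to shift the ground under an engineer's feet. It does not make the underlying operation anything other than pattern matching.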

Originally appeared on iai News.
