Can Machines Think?

The question of whether machines can think is more complex than it appears at first sight. The Turing Test attempted to provide a way to judge whether computers are intelligent, but pretending to be human in a chat is not the same as being intelligent. AlphaGo is undoubtedly intelligent in its domain, but could not pass a Turing Test.


We are surrounded by intelligent machines: smartphones that can answer questions and book seats in restaurants; fridges that warn you when the milk expires; cars that drive themselves; computers that play better chess than humans; Facebook tagging algorithms that recognize human faces. Still, one question is worth asking: these machines can perform all kinds of impressive tricks, but can they actually think?

The question is interesting because a lot depends on it. If machines can think like we do, will they at some point in the future be better at it than we are? Will they then become a threat to us? Might they develop feelings? Will machines be lazy or angry at us for asking them to work when they don’t want to? If they become conscious, will they claim rights? Will human rights have to be changed to apply to them? Will they have a right to vote? A right to be treated with respect? Will their dignity or their freedom become issues? Will they have to be protected from exploitation? Or will they find ways to exploit us?

Thinking about… ice-cream

Much depends on how one understands the question. “Does X think?” might mean several different things. It might mean, for instance, does it think like a human? In this case, we should expect the machine to have feelings, to be distracted or sleepy sometimes, or to make typos when writing. Because if it didn’t do all these things, it wouldn’t think like a human. – But then, what else is involved in “thinking like a human?” If I ask my phone’s intelligent assistant whether it likes ice-cream, what answer do I expect to get? – “Yes, I do like ice-cream, but only strawberry flavour.” Would this be a satisfactory answer? Obviously, the machine cannot mean that, since it doesn’t have the hardware to actually taste anything. So the response must be fake, just a series of words designed to deceive me into thinking that the machine actually understands what ice-cream tastes like. This doesn’t seem to be proof of high intelligence.

What if it responded: “What a stupid question! I cannot taste ice-cream, so how would I know?” This seems to be a better, more intelligent and honest answer, but it causes another problem. Now the machine doesn’t pretend to be a human any more. In fact, what makes this response a good response is precisely that it gives up the pretence of sounding “human.” So perhaps other aspects of intelligence don’t …

Read the full article, published on Daily Philosophy (external link).

