The concept of thinking machines remains a psychological construct. Artificial intelligence applications cannot fundamentally shape human existence; they cannot model, transform, or design our experiences in truly novel ways. Individual and interpersonal experiences therefore inhabit a world that is categorically different from the realm of artificial minds. Large language models and other forms of generative AI cannot reach the depth and complexity of our daily activities because they are wishful idealizations, incapable of capturing the vibrant, fluid, and continuous nature of human communication. Artificial intelligence remains a useful, task-oriented tool under our responsibility, but it should not be mistaken for an experience-altering revelation. The gap between human consciousness and artificial systems is not merely technological but ontological: it is rooted in the fundamental nature of embodied experience and authentic engagement with the world.
Mind Over Mechanism
Machines do not have a mind; they are unable to think, feel, or experience. In a recently published article in Aeon, Alva Noë argues that computers “don’t actually do anything.” What he means is that computers are not autonomous; they do not engage with the world as self-sufficient beings. Artificial intelligence models are not morally responsible, nor do they actively engage with the world and the objects and events in it. They work within a predetermined framework designed to deliver a specific output. In other words, artificial intelligence models are developed for specific purposes; their existence remains prearranged.
“The story of technology,” writes Noë, “has always been that of the ways we are entrained by the tools and systems that we ourselves have made.” We are the authors of this story; it didn’t write itself. Large language models are therefore tools that we use to navigate the world. These applications are made by us and for us; they are the products of intelligent beings and can be useful for a vast number of problems. Computational power is an efficient way of automating specific tasks. Machines can be used by intelligent beings to solve problems, but they are not themselves intelligent. As Noë writes, “If there is intelligence in the vicinity of pencils, shoes, cigarette lighters, maps or calculators, it is the intelligence of their users and inventors. The digital is no different.”
Our experiences are too complex to be successfully replicated by artificial systems. Muñoz, Bernacer, Noë, and Thompson argue in “Why AI will never be able to acquire human-level intelligence” that the biological foundation of human intelligence cannot be replicated by large language models, which, despite their practical applications, will never achieve true AGI due to their fundamental lack of physical embodiment. Our experiences cannot be separated from our …
Read the full article, which is published on Daily Philosophy.