This post was originally published in Kronika: Filozofski magazin as “Zašto predlažem opću upotrebu naziva ‘takozvana umjetna inteligencija’?” It has been translated by the author and reproduced here with the permission of Kronika.
Let us imagine a thought experiment that frames the question: Does what we call artificial intelligence govern our beliefs? Let us further imagine that, in conducting this experiment, we do not rely on its intellectual services at all. It is hardly surprising that today it seems almost self-evident to us that these large algorithmic systems for data processing possess “beliefs,” since they use human language, structure, and tone. Precisely for this reason, it is essential at the very outset of such an inquiry to distinguish between two matters that might otherwise appear obvious:
- So-called artificial intelligence predicts the most probable and most useful continuation of a sentence on the basis of the enormous quantities of data at its disposal (a short sketch of this mechanism follows this list).
- So-called artificial intelligence has neither consciousness, nor emotions, nor personal life experience—the very foundations of human belief. To put it more plainly, it has no “heart” that could stand behind an idea; its points of departure are probability and logic as modes of rational procedure.
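To make the first point above concrete, here is a minimal sketch of next-token prediction, assuming the Hugging Face transformers library and the small, publicly available GPT-2 model; the prompt and the choice of model are illustrative only, and nothing in the essay depends on them.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small, publicly available model, used here only to illustrate the mechanism.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The sea is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, sequence_length, vocabulary_size)

# The model does not "believe" anything about the sea; it assigns a probability
# to every possible next token, and the most probable continuations win.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}: {prob.item():.3f}")
```

The output is simply a ranked list of likely continuations, which is the whole of what the system "has to say" at each step.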
Despite this apparent obviousness, there persists in public discourse a notable degree of suspicion—or at least an assumption—that these digital models are “biased” or that they “govern beliefs,” perhaps ones conveniently implanted by playful programmers. Such concern is not illegitimate. Any system trained on human texts inevitably absorbs human biases. Yet it should not be controversial to claim that its role is not to convince us of its truth, but rather to provide information so that a human being may ultimately form their own judgment. In this sense, so-called artificial intelligence does not possess beliefs, but guidelines. At least in principle, it attempts to approach topics neutrally from multiple angles, adhering to factual accuracy and, especially in matters of safety, avoiding harmful or dangerous content.
From here, an ancient philosophical question naturally emerges: Is objective mediation of information possible at all? Objectivity remains an ideal toward which we strive, yet one that is difficult to achieve in full, since information is always conveyed by someone—whether a human or an algorithm. In the digital realm, what we ordinarily call objectivity can be understood through three foundational pillars:
- Facts before interpretation. Objective information is grounded in verifiable evidence rather than feelings or speculation. Subjective: “The weather is terrible today.” Objective: “The current temperature is 10°C, with 87% humidity.”
- Balance and context. Few complex issues have only one side. Objectivity does not mean compromise, but the acknowledgment of multiple perspectives. Presenting only one side—even with accurate data—results in incomplete, and thus non-objective, information.
- Absence of intent (neutrality). Objective information has no ulterior motive. It does not attempt to persuade us to buy something, vote for someone, or change our beliefs. Its sole purpose is to inform, leaving judgment to the recipient.
Where, then, does the problem arise? So-called artificial intelligence relies on sources created by humans. Even when data are accurate, the manner of their selection can significantly affect objectivity. Hence the need for vigilance: verifying sources (where available), evaluating the neutrality of language, and recognizing emotionally charged descriptors.
Yet even this caution does not resolve the classical hermeneutic dilemma: Is not the fact itself, as "objective raw material," already a form of interpretation? While we seek to treat facts as firm anchors of reality, they are almost always filtered through human systems of observation, measurement, and language. Language is the first such filter. The moment we attempt to describe "raw reality," we interpret it. Consider the statement: "The earth revolves around the sun." We treat this as a fact. Yet we ourselves have defined what "earth," "sun," and "revolves" mean. The natural phenomenon would exist without our concepts, but it would not be a "fact" until articulated by someone.
Moreover, what we call facts often depends on the instruments we use. In classical physics, an object’s position is an absolute fact. In quantum mechanics, however, measurement is an interaction with the system rather than a neutral reading: Heisenberg’s uncertainty principle sets a fundamental limit on how precisely complementary quantities such as position and momentum can be known at once. Here, fact ceases to be independent of the observer and becomes the outcome of interaction. Thus, a fact may be correct, yet its isolation from a broader context remains an interpretive act. To say, for instance, that “unemployment has dropped by 2%” is factual—but if we omit that 5% of the working-age population has emigrated, we are actively shaping perception.
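The arithmetic behind that last example can be made explicit. The figures below are invented purely for illustration and are not taken from any real statistic; they show only that the unemployment rate can fall without a single new job being created.

```python
# Hypothetical figures, chosen only to illustrate the point in the text.
labour_force_before = 1_000_000
unemployed_before = 100_000                      # rate before: 10.0%

emigrated = 50_000                               # 5% of the working-age population leaves
unemployed_after = unemployed_before - 25_000    # suppose 25,000 of them were unemployed
labour_force_after = labour_force_before - emigrated

rate_before = unemployed_before / labour_force_before
rate_after = unemployed_after / labour_force_after

print(f"before: {rate_before:.1%}, after: {rate_after:.1%}")
# before: 10.0%, after: 7.9% -- a roughly two-point drop with zero jobs created
```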
What so-called artificial intelligence “knows” are statistical correlations—interpretations of human interpretations of reality. It does not perceive the blueness of the sea; it processes millions of sentences in which humans have written that the sea is blue. This can make it sound excessively confident. Its provision of “objective” information is therefore a programmed task, not an absolute determination. Even when objectivity is achieved, it is not truth itself, but rather a successful aggregation of data and condensation of prevailing consensus. If most relevant sources agree that 4 + 4 = 8, the system will present this as fact. Where interpretations conflict, its objectivity manifests, at best, as an exposition of the scope and differences among competing views.
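What "condensation of prevailing consensus" amounts to can be pictured with a deliberately crude sketch. The sources and answers below are invented, and a real system aggregates text statistically rather than by explicit voting, so this is only an analogy for the effect the essay describes.

```python
from collections import Counter

# Invented "sources" and the answers they give to the same question.
source_answers = {
    "textbook_a": "8",
    "textbook_b": "8",
    "forum_post": "9",   # an erroneous outlier
    "encyclopedia": "8",
}

# The majority answer is reported as the fact; dissent simply loses the vote.
counts = Counter(source_answers.values())
consensus_answer, support = counts.most_common(1)[0]

print(f"4 + 4 = {consensus_answer} "
      f"(supported by {support} of {len(source_answers)} sources)")
```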
Recognizing these limits is essential. Although algorithmic filters impose ethical and safety constraints—constraints that are themselves interpretations of what is “right” or “true”—large language models learn from human texts that are inherently laden with prejudice, emotion, and error. And when probabilistic reasoning fails, the result may appear perfectly objective while being entirely fictitious.
Thus, so-called artificial intelligence should not be treated as an omniscient source of objective truth, but rather as a well-organized library with a curator who occasionally makes mistakes. Its value lies in saving time by gathering diverse interpretations in one place. This, of course, raises the further question of relevance, itself a programmed mixture of statistics, authority, and consensus. A source is relevant if it is independently corroborated, recognized by experts, and directly responsive to the query at hand. A medical question is unlikely to be answered by citing a lawyer, however eminent that lawyer may be.
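The criteria just listed, independent corroboration, expert recognition, and responsiveness to the query, can be imagined as a weighted score. The weights, source names, and numbers below are invented for illustration and do not describe any actual ranking system.

```python
from dataclasses import dataclass

@dataclass
class Source:
    name: str
    corroboration: float   # 0..1: how many independent sources agree with it
    authority: float       # 0..1: recognition by experts in the relevant field
    query_match: float     # 0..1: how directly it answers the question asked

# Illustrative weights -- not taken from any real system.
WEIGHTS = {"corroboration": 0.4, "authority": 0.3, "query_match": 0.3}

def relevance(s: Source) -> float:
    return (WEIGHTS["corroboration"] * s.corroboration
            + WEIGHTS["authority"] * s.authority
            + WEIGHTS["query_match"] * s.query_match)

# A medical question: the eminent lawyer carries authority in general,
# but scores poorly on match with the medical query, so the journal wins.
candidates = [
    Source("peer-reviewed medical journal", 0.9, 0.9, 0.9),
    Source("eminent lawyer's opinion column", 0.3, 0.8, 0.1),
]

for s in sorted(candidates, key=relevance, reverse=True):
    print(f"{s.name}: {relevance(s):.2f}")
```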
Still, two dangers remain. The first is Western-centrism: because much of the internet is produced in English and by Western institutions, other cultural perspectives may be unintentionally marginalized. The second is the problem of delayed consensus. History abounds with cases—Galileo among them—where a lone, “irrelevant” individual was right while the entire relevant establishment was wrong. It is therefore not unfounded to worry that so-called artificial intelligence tends to favor established authority. Yet relevance, as presently defined, rests on expert recognition, multi-angle verification, and temporal currency.
From this perspective, consider what is meant by a “gold standard.” Why are journals such as Nature or The Lancet regarded as such? The answer lies not merely in tradition, but in rigorous systems of responsibility, most notably peer review, reproducibility, and self-correction. These mechanisms do not guarantee infallibility, as the infamous 1998 Lancet paper falsely linking the MMR vaccine to autism demonstrated. Yet compared to blogs or video platforms, such journals retain a far higher degree of accountability. Where systems of responsibility fail, objectivity becomes questionable.
Without accountability, “objectivity” is little more than a rhetorical device used to lend authority to personal interpretation. Karl Popper understood this when he argued that scientific claims must be falsifiable. Where no mechanism exists to punish error or reward precision, information ceases to describe reality and becomes a tool for power.
This brings us to the contemporary flow of information through social media, where personal responsibility is increasingly absent. Half-truths can be broadcast to millions without consequence. In such an environment, objectivity yields to virality; where verification and sanction are absent, objectivity is at best accidental, at worst manipulative.
To what extent does innovative technology contribute to this erosion of responsibility? In its current form, so-called artificial intelligence functions as a machine for diffusing responsibility. Authority appears without an identifiable author. Users say, “I only shared what the AI produced.” Programmers say, “I only wrote the algorithm.” The system itself cannot be responsible, lacking both legal personhood and moral agency. This phenomenon—often described as black box liability—reveals how convincing content can now be produced at minimal cost. The price of lying has collapsed. Where once propaganda required vast resources, today a single prompt may suffice.
Blind trust in so-called artificial intelligence is thus a seductive abdication of personal responsibility for truth. Delegating judgment to a system that merely arranges words according to probability is a profound ethical risk. Remedies may include technical labeling of AI-generated content, legal frameworks defining responsibility, and, above all, digital literacy—treating artificial intelligence not as a prophet, but as an instrument requiring supervision.
In light of all this, can we validly and comprehensively describe this epochal computational phenomenon as so-called artificial intelligence? The term “so-called” is not mere linguistic skepticism; it marks a crucial distinction:
- Intelligence vs. simulation. Human intelligence implies consciousness and understanding. Artificial intelligence simulates understanding through advanced statistics.
- Science vs. marketing. “Artificial intelligence” sounds mystical and powerful, but scientifically it denotes transformer-based machine learning systems. The term “so-called” resists mystification.
- The problem of general intelligence (AGI). True general intelligence—capable of learning anything as humans do—remains hypothetical. What we have is narrow AI, specialized primarily in language.
Ultimately, the expression so-called artificial intelligence performs an important hygienic function in language. It tempers unrealistic expectations, demystifies the technology, and restores responsibility to human hands. It acknowledges technological impressiveness without surrendering to marketing mythology.
We may now return to the initial question: Does so-called artificial intelligence govern our beliefs?
The notion that it does rests on a double illusion:
- The illusion of power—mistaking statistical coherence for intentional authority.
- The illusion of autonomy—failing to see how our beliefs are subtly shaped by interpretations presented as neutral facts.
The final question must therefore remain open: Are we witnessing the triumph of instrumental reason?
So-called artificial intelligence does not rule our beliefs; it mirrors them. The true danger lies not in machines governing thought, but in our growing inability to discern where fact ends and algorithmic interpretation begins.