“Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated.” –Elon Musk
When Elon Musk acquired Twitter, he positioned himself as a “free speech absolutist,” arguing that unrestricted expression is vital for democracy. Critical of how Twitter’s previous management had censored speech, he vowed on taking control to bolster free-speech protections. Musk rolled back key content moderation policies, cut trust and safety teams, and weakened efforts to label or fact-check posts, moves that opened the door to hate speech, harassment, and false information. He also reinstated accounts previously suspended for such posts and for inciting violence. These actions have drawn widespread criticism, with commentators arguing that they have made the platform more toxic and turned X (formerly Twitter) into a haven for misinformation and dangerous rhetoric.
Leaving aside accusations of hypocrisy about Musk’s “absolutist” position on free speech (magnified by his close association with Donald Trump’s selective enforcement of speech freedoms), it is worth asking whether abandoning censorship on a social media platform does in fact cultivate a well-functioning marketplace of ideas, which is what Musk purportedly set out to do with his “digital town square.”
The town square/marketplace of ideas position is essentially an epistemic one: truth emerges when all ideas, even false ones, are allowed to compete freely in public discourse. It is a well-supported position, endorsed for hundreds of years by great thinkers in politics, law and philosophy, from John Milton, to John Stuart Mill, to Justice Oliver Wendell Holmes, to highly regarded contemporary academics such as Peter Singer and José Medina.
The premise is that our ability to obtain knowledge depends on epistemic friction: the necessary resistance that ideas encounter when exposed to differing viewpoints and critical engagement. Being forced to consider other opinions makes us challenge our assumptions, refine our understanding and foster intellectual growth, pushing our reasoning to its logical limits.
Following John Stuart Mill’s interpretation, for epistemic friction to be effective, all opinions must be aired so that they can be debated. And by “all opinions” Mill emphatically meant all opinions, regardless of how untrue, offensive or even immoral they might be (although he did draw a line at those that incite direct physical harm). Not only this, he believed these opposing beliefs and diverse viewpoints should be “frequently and fearlessly debated,” so that the rationale, reasoning and justification behind them remain alive and they do not become what Mill termed “dead dogma”: rote-learned beliefs passed down through generations. Peter Singer brings this to life in his excellent essay “Free Speech, Mohammed, and the Holocaust,” where he argues that countries such as Austria should repeal their laws against Holocaust denial.
From Theory to Reality: Musk’s Version of the Marketplace of Ideas
So back to X, Elon Musk, and his abandonment of content moderation. How does his approach fare against these requirements for epistemic friction in a marketplace of ideas, i.e., that:
1. Any and all beliefs are allowed, regardless of how untrue, offensive or immoral.
2. Beliefs are frequently and fearlessly debated.
I think most would agree that X is certainly a place for (2), and that the initiatives Musk has implemented satisfy (1). So X should be functioning as an “open-source experiment in collective epistemology” of sorts, providing a valuable marketplace of ideas to further knowledge. However, it doesn’t seem to be. Why is this?
Why the Digital Town Square Falls Apart
Well, clearly in today’s digital landscape and algorithm-driven attention economy the “marketplace of ideas” process is fundamentally altered. What worked in Milton and Mill’s era no longer functions as it did.
Yes, many of the ills of the internet that people point to as novel are not really new; rather, they are made more salient by the changed spatial and temporal boundaries of online communication (speed, scale, pervasiveness of information, etc.). Take hate speech, misinformation, conspiracy theories and even echo chambers as examples.
However, some truly novel characteristics of the digital landscape come together to make the free-speech debate much trickier. These concern how information is generated, how it is distributed and, arguably, how we consume it from a cognitive and neurological perspective.
1. Information Generation: The Noise of Democracy
In our current communication environment, user-generated content has become by far the dominant paradigm. By some estimates, up to 80% of all online content is user-generated, and platforms like Twitter rely entirely on it rather than on vetted or edited sources. While Mill (and presumably other authors) believed in open expression, he also valued expert voices. Yet in an environment where celebrity tweets dominate over academic discourse, expertise is drowned out. Among Twitter’s ten most-followed accounts are pop stars and athletes, not public intellectuals, and they often have a lot to say about areas well outside their expertise. The rise of the “influencer” speaks volumes: their defining trait is impact, not insight. This dilutes the epistemic value of available information, as content is driven by popularity rather than reliability.
2. Information Distribution: Algorithmic Manipulation
Another novel feature of today’s information environment is that algorithms shape what we see based on deeply personalized data—not to inform us, but to maximize engagement and ad revenue. As advertising revenue-driven businesses, platforms like X and Facebook use predictive analytics to determine what content users see, based on engagement optimization, personalization, and virality—regardless of accuracy.
Content that earns likes, shares, and comments is amplified. Research consistently shows that hate speech, misinformation, conspiracy theories, and salacious content spread more rapidly and widely on social media than other types of content, largely because such material is more emotive and attracts greater engagement. And clickbait farms thrive in this ecosystem because attention equals money, and outrage pays far better than accuracy.
Moreover, algorithms are optimized to deliver content a user is likely to engage with, reinforcing “more of the same,” rather than the diverse viewpoints needed to create epistemic friction.
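To see the dynamic concretely, here is a deliberately toy sketch in Python. Everything in it is invented for illustration: the `Post` structure, the weights, and the `rank_feed` function do not come from any real platform, whose actual ranking systems are proprietary and vastly more complex. The structural point is that a score built solely from engagement and topic affinity will, by construction, amplify emotive content and feed users more of what they already consume.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    topic: str

def engagement_score(post: Post) -> float:
    # Invented weights: shares and comments count more than likes,
    # mirroring the claim that "viral" interactions are the
    # strongest ranking signal.
    return post.likes + 3 * post.shares + 5 * post.comments

def rank_feed(posts: list[Post], user_topics: list[str]) -> list[Post]:
    """Order a feed purely by predicted engagement.

    Note what is absent: no accuracy or source-quality term appears
    anywhere in the score, and the personalization bonus rewards
    topics the user already engages with ("more of the same").
    """
    def score(post: Post) -> float:
        personalization = 2.0 if post.topic in user_topics else 1.0
        return engagement_score(post) * personalization

    return sorted(posts, key=score, reverse=True)

# A careful, accurate post loses to an outrage post with more shares.
feed = rank_feed(
    [Post("Careful fact-check of claim X", 40, 2, 5, "politics"),
     Post("Outrageous hot take on claim X!", 90, 60, 80, "politics")],
    user_topics=["politics"],
)
print([p.text for p in feed])
```

Nothing in this toy scorer rewards truth, and that omission, rather than any malicious line of code, is where the epistemic damage is done.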
And this optimization toward advertising performance isn’t limited to platforms like Facebook and X. Search engines such as Google are also built on ad-driven models, so search results are likewise shaped by algorithms that prioritize relevance, engagement, and profitability.
This algorithm-driven, engagement-optimized system actively undermines the conditions necessary for epistemic friction—namely, exposure to diverse, conflicting viewpoints.
3. Information Consumption: The Rise of Passive Knowledge
Lastly, what is both fascinating and deeply worrying is how social media platforms appear to be designed in ways that may actually reshape how we process information—both cognitively and neurologically.
Michael Lynch, in his book The Internet of Us, argues that digital platforms have altered the way we engage with information. His chapter on “Google Knowing” is well worth reading. Lynch claims that the way we consume information online—shaped by the speed of access, the design of scrolling interfaces, and the constant influx of content—is becoming increasingly similar to the way we process sensory input: automatically, without reflection, and with an instinctive trust that bypasses reasoning.
Therein lies a serious epistemic problem. When we draw inferences from our senses to form beliefs about the world, it happens without conscious effort. These inferences involve complex sub-processes, but they are automatic—what cognitive scientists call System 1 processing: fast, intuitive, and unconscious. Lynch suggests that online information consumption is beginning to mimic this kind of passive receptivity, where content is simply absorbed, not critically evaluated.
This shift in cognitive habits is alarming—not only because it erodes the role of critical reasoning in belief formation, but because it normalizes a passive, reflexive relationship with information. When platforms reward speed, emotion, and engagement over accuracy or thoughtfulness, they foster an environment where reflective thinking is sidelined—and where misinformation can thrive, unchecked and unexamined.
In theory, Musk’s “digital town square” echoes a marketplace of ideas vision: a place where all ideas, no matter how controversial or offensive, can be openly aired and debated. But in practice, today’s digital environment doesn’t support the conditions required for meaningful epistemic friction. The way information is generated, distributed, and consumed on platforms like X turns the marketplace of ideas into something else entirely—one driven by emotion, engagement, and speed rather than reasoned debate. Full freedom of speech in this context doesn’t create truth through friction; it amplifies outrage, drowns out expertise, and bypasses critical thinking. As long as attention, not understanding, is the primary currency, the promise of a digital town square remains an illusion.