To say the least, it’s not a great time to be a writer. Historian and philosopher Yuval Noah Harari claims AI is already a “better storyteller” than we are. This ability is troubling, he insists, because storytelling isn’t just something we’re good at; it’s our species’ superpower.
For the sake of argument, let’s imagine that despite all the AI hype and good reasons to suspect we’re living through a bubble that’s poised to burst, the technology will, over time, produce even more compelling writing. Even then, there’s something it won’t deliver. AI—at least anything like what currently exists—can’t provide existential solidarity.
The concept of “solidarity” has several meanings and a long political history, and some contemporary authors, like Zadie Smith, center their writing around it. “Existential solidarity,” which I’m coining here, is the comfort of hearing from other people who can speak to struggles that we identify with and care about. Crucially, these are people who live in fragile, mortal bodies, just like we do.
To explain why we desire existential solidarity and how our desire can be commodified and turned against us, I’ll share some thoughts inspired by reading British philosopher Gillian Rose’s memoir, Love’s Work (1995).
Love’s Work
Love’s Work was written while Rose was dying from ovarian cancer. The masterpiece sensitively combines formative memories and memorable interactions with philosophical thoughts on weighty issues. In the Introduction, Madeline Pulman-Jones accurately describes Rose as speaking in the “language of humanity” because her stories, while brief and episodic, illuminate broadly resonant, perhaps even universal themes.
Consider how Rose portrays her relationship with Father Dr. Patrick Gorman. When Rose shares details about this complicated man she loved and about their clandestine affair, her larger goal is to make the memories relatable by filtering them through reflections on the fragility of love.
Rose sets up the discussion by reimagining Tolstoy’s famous line about unhappy families. “All unhappy loves,” she claims, “are alike” because their pain is “the greatest loss.” We’re all vulnerable to this despair “for which there is no consolation,” Rose explains, because even at love’s fullest expression, promises of being together forever can be broken. Speaking to the “absolute” power of unilateral decisions, Rose’s poetic conclusion lingers long after you put down the book. “There is no democracy in any love relation: only mercy.”
Since Rose died in 1995, she couldn’t have received help from any technology like today’s AI. On a deeper level, it’s hard to imagine that if Rose were alive today, she would ask AI for guidance, much less allow it to speak on her behalf. Having struggled with dyslexia as a child, Rose developed identity-shaping pride in her intense and hard-won relationship with the intricacies of language; she knew several languages. For example, Rose was haunted by her grandfather uttering his dying words in High German when “Yiddish seemed his lingua franca.” And when an optician told Rose she had a “lazy” eye, she couldn’t let the “vicious metaphor” go. Having rebuked the optician for using such a loaded term, she reminds us of her industrious resolve. Rose defiantly declares that once she learned how to read, she became “determined” to “never rest in the work of deciphering dangerous and difficult scripts.”
Because Rose had such a highly disciplined and personal relationship to language, it’s unsettling that an AI, one that has never loved, suffered, or struggled with identity, can easily produce aphorisms resembling Rose’s best lines. Consider “to love is to risk living under a law you did not write, yet must obey when it is enforced.” It sounds profound, yet ChatGPT generated it.
Existential Solidarity
ChatGPT can channel a voice that sounds philosophical because it has been trained on massive amounts of writing, possibly including Rose’s books. Some philosophers, like Peter Singer, are embracing this ability. His chatbot doppelgänger, “Peter Singer AI,” conversationally spreads his views.
Singer’s enthusiasm for AI makes sense because his utilitarian positions often come across as a formula for reducing suffering and increasing happiness. Frankly, the outlook sounds just as convincing, or just as flawed, when generated by an AI. Indeed, the Singer bot didn’t have to pretend to have any experience caring for others when it recommended that I do two things: (1) “consider the principles of effective altruism” and (2) lend my support to “organizations that have been evaluated for their impact and cost-effectiveness.” Whether you buy the bot’s recommendations comes down to what you make of a particular organization, whether you think you’re obligated to help others when you can, and what you make of applying the recommended logic more broadly.
The bot’s limitations, however, became clear when I probed deeper. In recent years, the effective altruist movement has been heavily criticized, not least because disgraced crypto-philanthropist Sam Bankman-Fried endorsed it. I wanted Peter Singer AI to have a thoughtful conversation with me about whether Bankman-Fried went down the wrong path because of his commitment to core effective altruist ideas. Unfortunately, the bot could only engage superficially.
Here’s something that even more sophisticated bots can’t do. No bot has followed Rose’s (or anyone else’s) painful path of finding wisdom in experience. AI can sound wise, but, having neither lived nor loved, it can offer only abstractions in its lofty pronouncements. The bot’s words can indeed ring true, but because no person stands behind them, no AI can reassure us with existential solidarity: the comfort we receive when hearing from others who have traveled similar paths of feeling lost, deflated, or heartbroken.
Narratives that offer existential solidarity are essential to our well-being. They can bathe us in grace by offering reminders that some of our struggles, ones that can feel like personal failings or singular injustices, are, at their core, part of the human condition. When we turn to memoirs like the one Rose wrote, we receive reassurance that someone else could find meaning, even a sense of beauty, in painful, fundamentally human experiences that feel isolating.
Existential Sensitivity and Relatable Fiction
This way of thinking about existential solidarity raises an important question. Can’t fiction provide it even though the characters aren’t real? James Baldwin thought so. He famously told a reporter, “You think your pain and heartbreak are unprecedented in the history of the world, but then you read. It was Dostoevsky and Dickens who taught me that things that tormented me most were the very things that connected me with all the people who were alive, or who ever had been alive.”
Notice how Baldwin identifies with authors, not their characters. It’s an important distinction because it shows that Baldwin primarily connected with the lived experience that informed the fiction. Dickens’s moral imagination was shaped by his childhood misery working in a factory while his father was confined in debtors’ prison. Of course, fiction writers are imaginative and don’t have to live through everything they write about to become sensitized to resonant issues. Dickens, after all, wasn’t a one-trick pony who could only reflect on factory conditions.
Fiction writers can create worlds that make us feel seen and appreciated because they, like memoirists, live in vulnerable bodies: ones that age, ache, have needs that only others can satisfy, and die. It’s their embodied relation to the world that attunes them to other people’s vulnerabilities and suffering. This is the case because, as Carlos Montemayor, Jodi Halpern, and Abrol Fairweather argue, human embodiment makes genuine empathy and the moral attention it requires possible. Without embodiment, AI can only simulate its appearance.
Building on this insight, Baldwin could find existential solidarity in fiction because Dostoevsky and Dickens were attuned to the same human struggles that tormented him and were motivated to explore those struggles artistically. No AI possesses this judgment or motivation. None that currently exists, or, if Montemayor, Halpern, and Fairweather are right, none that will likely ever be built, can discern what matters so much to a human life that it deserves attention.
Commodifying Our Desires
Because a bot’s illusion of consciousness and caring can be compelling, people can be, and often are, comforted, inspired, and even made to feel seen by an AI’s words. Nevertheless, taking existential solidarity seriously means acknowledging that bots can’t draw from what they’ve gone through or how they’ve experienced a connection to others.
Unfortunately, the distinction between “person” and “persona” (which is all today’s AI can offer) is being increasingly undermined in our culture. There’s already a growing audience for bots that act like “they” have rich emotional lives, joy to spread, and hard-won insights gained from trying experiences. Three developments suggest we’re inching closer to a mass market for AI confessionals and related artificial “experiential” narratives.
The first development is the rise of AI influencers posting on social media. These bots share “hyper-realistic posts” with “heartfelt captions” and have “thousands upon thousands of adoring fans.” When people enter into parasocial relationships with the technology, they act as if bots have real experiences and feelings to share.
The second development is the popularization of AI companions designed to act like people. To be sure, human relationships often involve varying degrees of performative behavior, from the niceties of etiquette to keeping aspects of ourselves private. But if you can see a fully performative AI as your friend or romantic partner, why wouldn’t you be interested if it offered to share its memoir?
The third development is human memoir authors turning to AI-cloned versions of their voices to narrate audiobooks. Since voices, like faces, are an intimate part of our social identity, when listeners accept AI as the voice of personal experience, they’re nudged to take another step toward seeing AI as the source of that experience.
While the normalization of faux AI identities will continue, not everyone will want to read or listen to first-person accounts written by bots. Many of us will still crave existential solidarity: real people doing their best to share or explore triumphs and traumas. In principle, this should be fine; there will be different options to suit different tastes. And yet, the more we value the cultural production of existential solidarity, the greater the risk that the discourse will be commodified.
We’ve already seen how, before the current AI moment, human online influencers have successfully monetized the desire to keep it real by branding content with staged “authenticity.” In fact, we’ve reached the point where online influencers shamelessly plagiarize other people’s stories. Then, there’s the current podcast market. It’s saturated with raw confessionals that make it hard to know when a guest or host is being sincere or manipulative. And, of course, the whole reality TV genre has run on contrived artifice from the start.
So, sadly, the desire for existential solidarity will continue to create market pressures for sophisticated simulation. As a result, there will be more contrived human disclosures. And there will be more bots that act like “they” have gripping stories to share. There’s just no way around it.
With the bots, it’s up to us to decide if all that matters are words, or if we care about where they come from. With people, we have to accept the fact that sometimes we’ll be fooled by their attempts at engineering the appearance of intimacy. There’s simply no foolproof way to read people and detect sincerity. But given how rewarding the experience of existential solidarity can be, we can’t give up on looking for it. It’s one of the risks we need to live with because we are human beings with human needs.