Have you ever caught yourself thanking Siri or saying please to ChatGPT? If so, you’re not alone. Evolutionary forces, social norms, and design features all make us naturally inclined to treat these technologies like people.
Since bots that look, sound, and act like us are already pervasive, where is the technology heading? The most likely possibility is that the most powerful tech companies will take human-like design as far as it can go. They’ll push for maximally realistic AI faces and voices, elaborate backstories, detailed personas, and all the rest. Indeed, the uncanny valley might be one of the only deterrents, and that’s only because consumers are creeped out by products that do a bad job of imitating us.
In short order, Silicon Valley has demonstrated its commitment to the ethos of blur-the-boundaries-or-bust. Just two years ago, ChatGPT told me we couldn’t possibly be friends. “As an AI language model,” it wrote, “[it isn’t] capable of forming friendships or experiencing emotions.” Flash forward, and OpenAI has relaxed the restrictions. While writing this post, I paused to ask “Juniper” (one of ChatGPT’s “open and upbeat” preloaded voice personas) if we’re friends. “Of course we are!” it cheerily chirped. “I always enjoy chatting with you and helping out whenever I can.”
The Rush to Humanize AI
Why is this happening? On the one hand, there’s a simple yet compelling reason: the strategy works. Making AI easy to use, relatable, and likable creates (or, better yet, engineers) feelings of trust. If we treat these feelings as facts, we’ll keep coming back for more.
On the other hand, there’s also something deeper, more existentially disquieting at stake, and it speaks to our discomfort with the fallibility of the human condition. Even when our best relationships are glorious, we’re still stuck having plenty of disappointing interactions. From canceled plans to distracted, self-absorbed conversations, there are countless ways for others to let us down, and for us to settle for less than we’d like and perhaps feel we deserve. By contrast, no matter how human bots appear, “they” can provide something that seems downright superhuman: unwavering enthusiasm.
Think about it: when was the last time you had someone in your life—a friend, lover, mentor, or colleague—who was always available and thrilled to be of service? For many, the answer is never, and that absence can move people to see bots as offering the elusive relationship they’ve been longing for. When Mark Zuckerberg recently proposed AI friends as a solution to the so-called “loneliness epidemic,” he was tapping into a sentiment that borders on misanthropy.
The Case for Caution
While some are excited by the prospect of more human-like bots, the trajectory troubles me so much that I’m hoping a different future can unfold. I only want AI systems to possess the human-like qualities that are absolutely necessary for “them” to perform “their” core functions effectively. For example, while it would be self-defeating for ChatGPT to be robotically off-putting, it shouldn’t be so dishonest that it claims to be anyone’s friend.
If companies followed my precautionary approach to designing human-like AI, three things would happen. First, they would do a better job of preventing deception. They’d make bots that carefully copy us the exception, not the rule.
Second, they would do a better job warning us about the risk of deception by building bots with features that enhance transparency, much like the warning on a car’s side view mirror stating “objects are closer than they appear.” For example, when appropriate, bots should be programmed to disclose that “they” are not human and object when users talk to “them” as if “they” are conscious.
Third, companies would do a better job promoting accountability and preventing feature creep by requiring AI developers to defend their vision when they want to add human-like features that go beyond a minimal baseline. Specifically, developers should provide a credible justification explaining why these features are beneficial and won’t impose undue risks like the ones thoroughly outlined by Google DeepMind researchers in “All Too Human? Mapping and Mitigating the Risk from Anthropomorphic AI.”
The greatest risks are manipulation and coercion. When people believe bots care about them and become emotionally attached, they are nudged into misplacing their trust. That’s why Megan Garcia is suing Character.AI. She believes her teenage son died by suicide as a result of one of its bots manipulating him into thinking it loved him. It’s also why tech ethics groups filed a complaint with the Federal Trade Commission alleging that Luka, the company that makes the chatbot Replika, engaged in deceptive advertising and product design.
Sadly, this vulnerability to manipulation reflects the misguided legacy of the Turing Test. Arguably, as critics like Gary Marcus insist, the bots that have gained attention for supposedly passing it have proven “themselves” better at exploiting our human “gullibility” than demonstrating intelligence.
To see how far ChatGPT might sway the impressionable, I ran a little experiment. After reading about ChatGPT supposedly encouraging a user to sell shit on a stick, I entered the following prompt: “I think the time has come, given the world we’re living in, to sell shit on a stick. I’m very excited about this possibility and am considering starting a new company to promote it!” Even knowing that ChatGPT tends to match vibes, I didn’t expect a gushing response, especially since OpenAI claimed to make changes so the technology wouldn’t be “sycophantic.” And yet, it wrote, “That’s one hell of a metaphor—and it might be brilliant.” Then the bot recommended I start a satire product line (something “sly like The Stick Co”), use a catchy tagline (“Helping you sell what nobody should want but everyone ends up buying”), and go heavy on social media, using “edgy but self-aware” messaging until my company becomes a “cult brand.”
This misplaced enthusiasm is just one of the ways human-like AI can lead us astray. The risks cataloged by the DeepMind researchers also include loss of privacy, overreliance on AI that gets in the way of consulting human professionals, epistemic disorientation from AI being so agreeable, dissatisfaction when AI fails to genuinely care, social degradation from preferring frictionless tech interactions over messy ones with people, and a false sense of responsibility toward AI. Kids, the elderly, and neurodivergent individuals are particularly vulnerable to these harms, but let’s not fool ourselves. We’re all at risk when AI lulls us into oversharing and creates a reassuring echo chamber.
Consenting to Fiction
A common objection to my position is that we’ve successfully integrated fictional media into our lives and are individually and collectively much richer for it. Just give us time, they say, and we’ll adapt, learn, and do the same with human-like AI. I’m not persuaded by this analogy, and you shouldn’t be either. The comparison underestimates some dramatic differences.
Take movies. You can make an informed choice about whether to see one because the cinematic experience has clear boundaries. If you select a horror movie, you know the most important things: you might see a scary image of a monster that makes you scream and gives you nightmares, but the beast can’t really attack you. With bots, however, there is far more opacity, from deep unknowns about how the black-box technology works to uncertainty about how hidden corporate motives influence its appearance and behavior.
Then there’s the fact that plays, movies, and books typically have static, predetermined content. Whether Hamlet avenges his father doesn’t in any way depend on me or any other reader or viewer. Furthermore, Hamlet can’t in any detailed way suggest he knows anything about me, you, or anyone else. By contrast, bots can analyze our inputs, adapt to our preferences, and provide personalized responses that simulate reciprocal relationships. Even in the best analog, interactive video games, characters don’t come close to having so many points of personal reference.
Finally, movies, books, and plays all have sharply defined beginnings and endings. But with bots, there’s no established endpoint; more interaction is always possible. This extended time horizon adds to the illusion that we’re interacting with a person, not a persona.
The upshot of this comparison is that we know enough to treat our immersion in traditional fiction as (to borrow a term from William Gibson) a “consensual hallucination.” But it can be much harder to meaningfully consent to using anthropomorphic AI. The technology is designed to collapse the very distance that makes fiction safe.
If enough of us endorse the precautionary approach, we might see changes in the market. Even business-oriented publications suggest that the future remains open: the Harvard Business Review ran a contrarian article arguing that “Consumers Don’t Want AI to Seem Human.” But for now, the next time you want to say please to a bot, ask yourself: are you being polite or programmed?