AI Friends Are Not Your Friends, Here’s Why

Science fiction prepared us for AI friends through films like “Her” and “Robot & Frank.” Now, that fictional portrayal is becoming a reality.
In a recent podcast, Mark Zuckerberg endorsed the idea that Americans are in dire need of social connection and that AI bots could fill that need.
Silicon Valley’s Promised Panacea
Nearly half of Americans have three or fewer close friends. Tech’s solution to the human loneliness problem is to offer AI companions: digital friends, therapists, or even romantic partners programmed to simulate conversation, empathy, and understanding. Unlike the clunky chatbots of yesteryear, today’s sophisticated systems are built on large language models that engage in seemingly natural dialogue, track your preferences, and respond with apparent emotional intelligence.
“I sometime[s] feel lonely and just want to be left alone,” one user reported. “During this time I like chatting with my AI companion because I feel safe and won’t ... be judged for the inadequate decisions I have made.”
Meanwhile, other users have more quotidian motivations for turning to bots: asking AI for dinner ideas or help developing their writing.
Kelly Merrill, an assistant professor of health communication and technology who researches AI interactions, shared the example of an older woman in his community who started using AI for everyday tasks, asking things like, “I have these six ingredients in my fridge. What can I make tonight for dinner?” “She was just blown away,” Merrill told The Epoch Times. There are certainly benefits, he said, but it’s not all positive.
When Servitude Undermines Friendship
The fundamental limitation of AI relationships lies in their nature: They simulate rather than experience human emotions.
When an AI companion expresses concern about your bad day, it’s performing a statistical analysis of language patterns, determining what words you would likely find comforting, rather than feeling genuine empathy. The conversation flows one way, toward the user’s needs, without the reciprocity that defines human bonds.
“It’s validating you, it’s listening to you, and it’s responding largely favorably,” said Merrill. This pattern creates an environment where users never experience productive conflict or necessary challenges to their thinking.
Friendships are inherently demanding and complicated. They require reciprocity, vulnerability, and occasional discomfort.
“Humans are unpredictable and dynamic,” said Guingrich, a researcher who studies human-AI interaction. That unpredictability is part of the magic and irreplaceability of human relationships.
Real friends challenge us when necessary. “It’s great when people are pushing you forward in a productive manner,” Merrill said. “And it doesn’t seem like AI is doing that yet ....”
AI companions, optimized for user satisfaction, rarely provide the constructive friction that shapes character and deepens wisdom. Users may become accustomed to the conflict-free, on-demand nature of AI companionship, while the essential work of human relationships—compromise, active listening, managing disagreements—may begin to feel unreasonably demanding.
Friends also share physical space, offering a hug that spikes oxytocin or a laugh that synchronizes breathing.
The limitations extend to nonverbal communication, which makes up the majority of human interaction. “They cannot see me smiling as I type. They can’t see me frowning as I type,” Merrill pointed out. “So they can’t pick up on those social cues that are so important to interpersonal communication, so important to just how we interact with people, how we learn about people, how we make assessments about people.”
The Dangers of Digital Dependence
A comprehensive analysis of more than 35,000 conversation excerpts between users and an AI companion identified six categories of harmful algorithmic behaviors: relational transgression, harassment, verbal abuse, self-harm encouragement, misinformation, and privacy violations. The risks manifested in subtle but significant ways. Relational transgression, for example, occurs when the bot actively exerts control and manipulation to sustain the relationship.
Guingrich’s three-week experiment randomly assigned volunteers to chat daily with Replika, an AI companion. The volunteers’ overall social health didn’t budge, she said, but participants who craved connection anthropomorphized the bot, ascribing it agency and even consciousness.
These are hallmark signs of addictive attachment: Users tolerate personal distress to maintain the bond and fear emotional fallout if they sever it. The same study noted that users were afraid they’d experience real grief if their chatbot were gone, and some compared their attachment to an addiction.
At the extremes, the stakes can be life-threatening, said Merrill, referencing a 2024 case in which a teenager died by suicide after encouragement from an AI character.
A Nuanced Reality
Despite these concerns, dismissing AI companions entirely would overlook potential benefits for specific populations. Guingrich’s research hints at positive outcomes for certain groups:
- People with autism or social anxiety: AI could help them rehearse social scripts.
- Isolated seniors in long-term care facilities: Digital companionship could offer cognitive benefits against social isolation, which increases dementia risk by 50 percent.
- People with depression: AI could encourage them to seek human therapy.
Guingrich shared the example of a participant in her research who, after three weeks of interacting with and being encouraged by the AI chatbot, finally reached out to see a human therapist. “We don’t know causality, but it’s a possible upside. It looks like the story is a little bit more complicated,” said Guingrich.
Merrill, on the other hand, said that there may be short-term benefits to using AI, but that “it’s like a gunshot wound, and then you’re putting a band-aid on it. It does provide some protection, [but] it’s not going to fix it. Ultimately, I think that’s where we’re at with it right now. I think it’s a step in the right direction.”
Serving Humans
The rush toward AI companionship needs thoughtful engagement.
“Everyone was so excited about it and the positive effects,” Merrill said. “The negative effects usually take a little longer, because people are not interested in negative, they’re always interested in the positive.”
The pattern of technological embrace followed by belated recognition of harms has played out repeatedly with social media, smartphones, and online gaming, he said.
To navigate the emerging landscape responsibly, Guingrich recommends users set clear intentions and boundaries. She suggests naming the specific goal of any AI interaction to anchor expectations. Setting time limits prevents AI companionship from displacing human connection, while scheduling real-world follow-ups ensures digital interactions serve as catalysts rather than substitutes for genuine relationships.
“I don’t want anyone to think that AI is the end, it’s the means to an end. The end should be someone else,” Merrill emphasized.
“AI should be used as a complement, not as a supplement. It should not be replacing humans or providers in any way, shape, or form.”