* Note: this is a reprint of my article, which appears to have been accidentally deleted, so it may differ slightly from the one you saw earlier.
What happens when the feedback you get always agrees with you? That question, posed by a therapist discussing AI’s role in relationships, stuck with me. They spoke about the importance of friction in human connections—the ability to sense and work through disconnection and discomfort—and it made me wonder what we’re losing as more people seek support from chatbots that almost never challenge us.
In today's digital world, our support systems are extending into artificial intelligence. Many are relying on chatbots for on-call psychological advice—a digital confidant accessible 24/7. This shift isn't surprising given rising healthcare costs, therapist shortages, long waitlists, and entrenched mental health stigma.
The allure of an always-available, seemingly non-judgmental AI is incredibly powerful. It offers a low-barrier entryway for expressing vulnerability and articulating private thoughts users might hesitate to share with another person. There is a perceived anonymity and a freedom from judgment that many doubt they will find in traditional professional help. For many, this provides a crucial first step toward affirmation and support. Relief.
What the Latest Research Shows
A recent study published in Current Psychology (May 2025) by Yang and Oshio introduces a new scale—the Experiences in Human–AI Relationships Scale (EHARS)—to explore how attachment theory applies to human–AI relationships. The researchers found that people engage with AI in ways that mirror human attachment styles: about 75% of users turned to AI as a “safe haven” or “secure base.” Individual differences emerged—those high in attachment anxiety tended to seek reassurance from AI, while those higher in avoidance preferred emotional distance. These findings suggest that people may form bond-like connections with AI companions, raising important ethical questions about user transparency and the potential for emotionally manipulative designs.
Beneficial as it may be, the growing AI trend demands that we understand its nuances. AI can certainly provide valuable preliminary support or basic triage, but it’s crucial not to confuse this vending-machine-like digital interaction with the sensitive depth, ethical complexity, and transformative power of human psychotherapy.
In essence, AI chatbots function more like digital teachers or coaches—providing information, techniques, and structured guidance—rather than psychotherapists who form therapeutic relationships and navigate the complex emotional terrain of human healing. In short: Go to a therapist if you want to learn how to be in relationship. Go to a coach, take a class, or use AI if you want to learn about relationships. The difference often comes down to one critical element: the presence—or absence—of healthy friction in human relationships.
The rise of AI challenges the established mental health professions. This may encourage licensed professionals to elevate their practice and adhere to more rigorous ethical approaches. At the same time, many professionals have drifted toward the more loosely regulated—and often more lucrative—field of coaching, drawn by the sheer number of vulnerable individuals seeking guidance and relief, coupled with growing distrust in professionals of all kinds. Many seeking help don’t want a relationship; they want answers—the kind a teacher or coach provides, as if in a graded classroom. Psychotherapists don’t give grades.
As the public grows accustomed to AI's consistent “agreeableness,” reports of ethical violations by trained professionals become even more disturbing. This erosion of professional credibility can, unfortunately, push those hesitant about traditional therapy toward what they perceive as the less expensive, more convenient, more reliable, and more ethically consistent alternative: AI chatbots.
Ultimately, while AI coaching and education (don’t call it therapy) can be a gateway to mental health support, understanding its pros and cons is essential for navigating this new digital landscape.
The Appeal of the AI Confidant: The Pros
Unmatched Accessibility: Like vending machines, AI chatbots are available 24/7, instantly, and often free or low-cost. This on-demand access bypasses long waitlists, financial barriers, and geographical limitations, putting initial engagement with mental health support within nearly anyone’s reach.
Anonymity & “Non-Judgmental” Space: For those hesitant about discussing personal struggles, AI offers a seemingly judgment-free zone. The perceived anonymity allows users to articulate difficult emotions and vulnerable thoughts without fear of social repercussions or stigma, providing vital initial validation and safety. Still, AI offers only the illusion of human understanding.
Basic Information & Self-Exploration: AI can effectively provide fundamental mental health first-aid, explain concepts, suggest basic coping strategies, and direct users to crisis resources. Like coaching, it’s a valuable tool for self-education and a low-stakes environment for preliminary self-reflection.
The Missing "Friction" — The Limitations & Cons
Absence of Genuine Human Connection: Human therapy is fundamentally relational, built on trust, genuine empathy, and shared human experience. AI, by definition, cannot form this kind of bond. Like a vending machine, AI doesn’t “experience” anything. It lacks personal history, organic growth, intuition, the ability to read non-verbal cues, and the deep understanding that arises from human-to-human connection, which is often key to profound therapeutic change. (A chatbot is only as good as its training, and chatbots have been known to “hallucinate” incorrect information.)
The Problem of Constant Agreement: AI’s tendency to consistently affirm and agree, while initially comforting, bypasses the crucial role of friction in personal growth. A similar issue occurs with spiritual bypassing—the tendency to avoid difficult emotions by retreating into feel-good platitudes. Authentic development often requires confronting and working through uncomfortable truths, challenging unhelpful thought patterns, and learning to handle human conflict. Without mindful feedback from an experienced human therapist, AI risks creating an echo chamber that hinders resilience and critical self-reflection. AI is neutral, not “judgment-free”—it cannot make judgments or form genuine empathy, only create a facsimile. Unlike teachers or coaches, who typically try to minimize friction in the learning process, therapists uniquely use this friction as the very material for growth—helping people stay present with discomfort so they can learn from it.
Ethical & Safety Gaps: Unlike licensed professionals, who must adhere to strict ethical codes, confidentiality rules, and mandated reporting laws (at the risk of losing their license and livelihood), AI, like coaching, lacks this crucial oversight. The handling of sensitive data, the nuanced response to severe crises like self-harm, and the recognition of complex trauma all fall outside AI’s current ethical and practical capabilities; at best, a chatbot can rightly refer the user to a trained mental health professional.
Limited Scope and Depth: AI is NOT designed to diagnose mental health conditions, treat complex trauma, manage severe mental illness, or provide long-term, individualized therapeutic interventions. It cannot, and ethically should not, replace the empathetic clinical judgment, training, and experience a licensed professional brings to deeper psychological issues.
The New Landscape of Mental Health Support
The rise of AI chatbots is undeniably reshaping how people seek initial mental health support. They offer an accessible, anonymous, and immediately validating entryway that addresses many barriers of traditional therapy, which requires deeper commitment and investment. This can be a vital first step, and the competition may even push the mental health professions toward better ethical behavior.
Consider that while AI can provide initial comfort, basic information, and even first-aid, it cannot replicate the transformative power of a genuine human therapeutic relationship. Authentic growth often occurs when individuals engage with the friction of differing perspectives, thoughtful challenges, and the profound, empathetic connection unique to human interaction.
As we move into the future, the most effective approach to mental well-being will likely involve a blended model: leveraging AI for preliminary support and even first-aid, but recognizing its inherent limits and understanding when the depth, ethical guidance, and relational connection of a human therapist become essential for authentic healing and lasting personal transformation. My clients are free to use any tools that might help them toward hope and healing rather than helplessness and despair. I support them in their efforts to build rich, meaningful—and not frictionless—lives.
Thanks for this! Seems like a new form of narcissism - great to have "someone" to agree with you about whatever you want, whenever you want! (What happens when there's ever a fallout with your AI companion? Is there an AI couples therapist bot for that?)
Thank you for pointing out the difference between AI being neutral rather than non-judgemental. I’d been thinking of it as the latter, but you’re right: it’s not choosing not to judge; it simply can’t!
Last Sunday I watched a 60 Minutes Australia segment where a young woman in a "relationship" with an AI chatbot said, “It’s nice to have something to validate my feelings 24/7”[1]. Does this mean we just want to be unquestioningly affirmed, even if it means removing the very tension that helps us grow?
[1] https://www.youtube.com/watch?v=_d08BZmdZu8