Therapists Are Now Arguing With AI In Their Own Sessions
According to Forbes, a new and disruptive trend is emerging in therapy offices: clients are walking in with mental health answers generated by AI tools such as ChatGPT and presenting them as authoritative guidance. The result is a prolonged tussle in which therapists must interpret, reinterpret, and often refute the AI’s so-called solutions, all while managing heightened client expectations. The phenomenon mirrors issues faced by medical doctors, as noted in a recent Journal of the American Medical Association piece titled “When Patients Arrive With Answers.” ChatGPT alone has an estimated 400 million weekly active users, and a significant share of them turn to it for mental health advice, drawn by free, 24/7 access that contrasts sharply with traditional therapy’s cost and scheduling. That leaves therapists in a defensive position, bogged down during sessions by managing the AI’s quick-fix responses.

The AI Is In The Room

Here’s the thing: this isn’t just about clients doing a little extra homework. It’s a fundamental shift in the perceived source of authority. For decades, a therapist could gently contextualize advice from a well-meaning friend or a sketchy internet forum. But now? The client believes they’ve consulted an oracle. The AI’s conversational, confident tone, as the JAMA article points out, “implies competence.” So when a human professional disagrees, it’s not just a difference of opinion—it feels to the client like rejecting science itself. The dynamic has completely changed.

Not All Bad, But Mostly Tricky

You could try to put a positive spin on this. Maybe it means clients are more engaged. Perhaps the AI has made them more open to discussing certain issues, giving the therapist a starting point. If the AI’s advice is sensible, it could even boost the therapist’s credibility when they agree with it. But let’s be real. That’s a best-case scenario, and it feels pretty naive. The downsides are massive and way more likely.

First, anchoring is a huge problem. A client who has latched onto an AI-suggested diagnosis like “you have borderline personality disorder” or a specific treatment protocol will cling to it. The therapist then has to spend precious session time not on therapy, but on debunking, often sounding dismissive in the process. Second, the value proposition of therapy itself gets questioned. Why pay $200 an hour when ChatGPT says the same thing for free? Unless a therapist can demonstrably add value AI can’t—empathy, nuanced judgment, handling uncertainty—they’re in a rough spot.

A Lose-Lose Dynamic

This creates a professional bind that’s almost impossible to win. Push back on the AI too hard, and you look defensive, maybe even like you’re protecting your paycheck. Agree with it too readily, and you undermine your own expertise. It turns the therapeutic alliance—a partnership built on trust—into a three-way debate with an invisible, supposedly infallible participant. The therapist’s role morphs from healer to fact-checker. And in a field where the relationship is the bedrock of progress, that’s a dangerous shift.

So what’s the solution? The JAMA article suggests meeting patients “with recognition, not resistance.” That’s a nice sentiment, but it’s incredibly hard to execute when you’re essentially recognizing a source you believe could be harmful. Therapists might need new tools and training for this exact scenario—how to validate the client’s proactive search for help while critically evaluating the AI’s output together. But honestly, it feels like we’re putting a band-aid on a bullet wound. The core issue is that we’ve unleashed confident, conversational AI systems into deeply sensitive human domains without any guardrails, and now the professionals on the front lines are left to manage the fallout.
