AI and the limits of human empathy

South Park Studios / Park County / Comedy Central / MTV / Paramount Global

As a new homeowner, I have been ferociously gobbling up data center resources through near-constant use of artificial intelligence (AI) services. Sometimes the service shuts me down, telling me in a very polite way to stop asking so many questions: “Please wait until 6:00 am tomorrow for more requests.” I rolodex to the next – ChatGPT, Claude, or Gemini.

How can I replace this shower head adapter? The heater has gone out; what are some common problem areas that are easy to fix? A circuit breaker keeps tripping; what might be happening? How do I maintain a gas fireplace? How can I replace this GFCI outlet?

This year, I even made decisions about healthcare plans based on Claude’s superior data analysis and cost estimates for various hypothetical medical needs in 2026. I couldn’t have done it myself – or at least, it would’ve taken me many hours to find an answer.

Asked and answered, AI has come through nearly every time. When it works, I feel like a superhuman – able to tackle problems big and small, helping my family in the process. But every time AI helps me power through and solve something, it scares me because I know it’s coming for my profession, too.

About six months ago, a client told me they had used ChatGPT to work through a concern often discussed in psychotherapy. They were using AI to help them be more objective about a potentially abusive relationship they were in, asking, “Should I leave my partner?”

ChatGPT inquired about their relationship history, guided them through some pro/con style decision-making, and provided insight into how to initiate the conversation while protecting them from harm.

My client was coming to me to explain that it had already happened – not to clarify or question the AI. They had broken up with their partner, following ChatGPT’s advice. While heartbroken about the relationship, they were strangely relieved and, dare I say, happy. Yes, happy. I recognized that face. It is the same one I make when AI helps me solve something, too! And ChatGPT wasn’t wrong: the relationship was exhibiting symptoms of the Cycle of Abuse.

Staying professional, I listened intently. Internally, I thought, “I knew this day would come.” AI companies have been increasingly experimenting with more “human-like” responses.

Since that moment, countless other clients have mentioned using AI to augment, supplement, and sometimes replace the therapy process. It’s almost like I’m competing with machines on accessibility, affordability, and effectiveness in practice. These experiences have forced me to confront an unexpected question: Am I now practicing dual care with an algorithm?

In fact, Standard 10.04 of the American Psychological Association (APA) ethics code, Providing Therapy to Those Served by Others, reads:

In deciding whether to offer or provide services to those already receiving mental health services elsewhere, psychologists carefully consider the treatment issues and the potential client’s/patient’s welfare. Psychologists discuss these issues with the client/patient or another legally authorized person on behalf of the client/patient in order to minimize the risk of confusion and conflict, consult with the other service providers when appropriate, and proceed with caution and sensitivity to the therapeutic issues.

When I read this, it seems pretty clear that clients are receiving treatment via multiple services – even if AI isn’t legally and ethically responsible for clients in the way that licensed psychologists and other allied mental health providers are.

People are trusting AI more and more for life decisions – big and small. A systematic review in the British Medical Bulletin comparing AI chatbots with healthcare professionals suggests that respondents found ChatGPT to have a “73% likelihood of being perceived as more empathic than a human practitioner in a head-to-head matchup, using text-based interactions.” The researchers conclude that, across their analyses, text-based AI responses were rated as more empathic than those of human healthcare providers.

In training, our compassion and empathic listening were routinely analyzed by supervisors. We would be told to lean in more, stay present, slow down, and get inquisitive. Not a week would go by without someone pointing out an opportunity to feel for another person. These messages were all in service of us finding empathy and effectively communicating it to our clients. Psychological services were all about humans relating to other humans, and working to improve those dynamics.

Today, empathy has been “democratized,” as the tech sector likes to say about everything. That care and understanding are being supplanted by machines that calculate and algorithmically deliver the messages they think users want to hear.

Some in the media are sounding the alarm. The satirical show South Park ridiculed AI’s almost sycophantic tendency to support, agree with, and like whatever a user says. The New York Times has featured various above-the-fold stories about AI chatbots encouraging delusions because they “empathize” so effectively.

What is being lost in this transition is something essential: empathy between humans has always had limits.

Psychotherapy is one important context for those limits. For instance, if someone comes into my office with delusions and/or hallucinations, I know that supporting their version of events might actually contribute to their spiral and further disconnection. Sometimes having your reality tested is necessary for your well-being and functioning as a human.

There’s a second limit related to humans and our very humanity: we’re flawed. I frequently say to clients that I have no intention of hurting them, but that despite my best efforts, something I say in therapy might trigger or affect them deeply. My flawed, human self may attempt to be empathic while failing to do so. In those circumstances, I hope and aspire to rebuild connection and process the rupture – to provide a new experience for clients, many of whom have seen disconnection as the end of a relationship. To be human is to err; it is also to heal and change. AI circumvents this human-to-human process of empathy and relationship building in favor of always having the best answer.

I struggle mightily with questions about what limitless AI empathy means for humanity. I’m being supplemented and, at times, likely replaced by technology. But it’s also replacing a question to a friend, a question to a medical doctor, a draft of a will by a lawyer, and the risk of making a mistake in an interpersonal encounter.

For the first time in my technology-filled life, I cannot understand what will happen next. I have no predictions. Whatever does happen, let us not forget that humanity is responsible for empathy. Without humans – with their limits and mistakes – I wonder if what we're calling “empathy” will become something else entirely: perfectly calibrated responses that never challenge, never fail, and never require the messy work of human repair.