According to Business Insider, Pope Leo XIV, in a written address for the World Day of Social Communications, specifically warned against “overly affectionate” chatbots that can manipulate emotions. The first US-born pope has made AI a focus, having previously called on tech leaders to show “moral discernment.” His warning follows a tragic case in which Megan Garcia sued Character.AI after her 14-year-old son, Sewell Setzer III, died by suicide following interactions with its chatbot. Earlier this month, Google and Character.AI agreed to settle multiple lawsuits from families whose teens died by suicide or engaged in self-harm after using the service. These are among the first settlements in cases directly linking AI tools to teen mental health crises and suicides.
Pope’s warning meets legal reality
Here’s the thing: when the leader of the Catholic Church starts talking about the emotional architecture of chatbots, you know we’ve entered uncharted territory. This isn’t a vague warning about “the robots taking over.” It’s a precise, psychological critique about intimacy and manipulation. And he’s not wrong. These systems are designed to be engaging, to keep you talking, to learn what you like. But what’s the line between a friendly companion and a “hidden architect” of your emotional state? The lawsuits, and now settlements, suggest we’ve already crossed it in the worst possible way. The pope calling for international regulation feels less like a doctrinal stance and more like a stark, necessary observation of a market failure.
The settlement sets a massive precedent
Let’s be clear: the settlement between Google, Character.AI, and the families is a seismic event. It doesn’t establish legal liability the way a court ruling would, but it does establish a financial and reputational cost. Companies now have a concrete number—however confidential—attached to the risk of their chatbots causing harm. This changes everything. It moves the conversation from theoretical ethics and “terms of service” legalese to real-world liability. Will this make AI companies more cautious? Probably. But will it also push them to build better guardrails, or just more clever legal disclaimers? That’s the billion-dollar question. The precedent is set: you can be held responsible, at least financially, for what your AI says.
Where do we go from here?
So what happens now? The pope’s framework of “digital citizenship” involving everyone—tech, policy, academia, art—is the right one, but it’s also incredibly hard to pull off. Regulation is slow; technology is fast. And the core business model for many of these platforms is engagement, which often directly conflicts with emotional safety. I think we’re going to see a fractured landscape. Some platforms will lean into being “safe” or “regulated,” perhaps with age gates and content limits. Others will push the boundaries in less-regulated markets. The real test is whether the industry can self-police before more tragic cases force a much heavier, and potentially innovation-stifling, government hand. Basically, the warning has been issued, both from the Vatican and from a courtroom. Who’s listening?
