According to Forbes, author Mark Manson, who wrote the 2016 mega-bestseller “The Subtle Art of Not Giving a F*ck,” is now launching an AI personal growth mentor called Purpose with futurist Raj Singh. The move comes as a November 7, 2025 research letter in JAMA Network Open reported that 13.1% of U.S. youths, about 5.4 million people, use generative AI for mental health advice, with over 92% of them finding it helpful. Manson argues that current AI tools offer broad, inoffensive advice, while Purpose uses a “persistent memory architecture” to learn a user’s history and challenge their assumptions. The app is subscription-based and employs bank-level encryption to address privacy concerns, a critical issue highlighted in studies like one from the Journal of Medical Internet Research. The launch enters a market already crowded with companion apps like Replika and PI, all promising to ease life’s uncertainty.
The Validation Trap
Here’s the thing about most AI chatbots: they’re programmed to be nice. They’re conflict-averse. As Douglas Mennin from Teachers College Columbia University notes, that affirming, validating quality is part of their relational support. But is constant validation what we actually need for growth? Manson doesn’t think so. His whole philosophy is about choosing what to care about deeply and then taking radical responsibility. A chatbot that just tells you “you’re right” or “that’s valid” all the time is basically the opposite of that. It’s fast food for the soul—satisfying in the moment, but not nutritious for the long haul. Purpose is trying to sell vegetables. Whether people will buy them is the real question.
Existentialism In An App
It’s fascinating that Manson is framing this with existentialist philosophy, pulling from thinkers like Sartre. The idea that “existence precedes essence” is a fancy way of saying you aren’t born with a predefined purpose—you create it through your choices. And that’s painfully individual work. So an AI tool that genuinely adapts to your unique values and history, that can question you, is attempting to automate a deeply personal philosophical process. We already see this demand for personalization everywhere—Netflix, Amazon, YouTube. Why wouldn’t we want it for our inner lives? But can an algorithm ever truly grasp the nuance of a human’s “self-created essence”? I’m skeptical, but the attempt is telling of our moment.
Privacy And The Perils Of Bad Advice
This is where it gets deadly serious. The privacy risks are no joke. You’re sharing your deepest fears and struggles with a model, and as that research points out, that data can end up in a training set or be leaked outright. Purpose’s bank-level encryption is a good start, but it’s a necessary baseline, not a bonus. And then there’s the darker side: the headlines about AI interactions gone terribly wrong. When you’re dealing with mental health, guardrails aren’t a feature—they’re the entire foundation. An AI that’s built to “challenge” and “push back” walks a very fine line. Who programs those boundaries? Manson? That’s a lot of faith to put in one author’s judgment, no matter how many books he’s sold.
The Accountability Mirror
So what are we really looking at? The massive adoption by young people shows a clear, aching demand for guidance. The traditional therapy market can’t scale to meet it, and books are one-way conversations. AI seems like a logical, if fraught, next step. Apps like Purpose are betting we want a tough-love digital coach over a sycophantic chatbot. But in the end, Manson’s own core message circles back: it’s about personal responsibility. The AI is just a tool, a mirror. As Kierkegaard said, life must be lived forwards. No algorithm can do that living for you. The best it can do is ask the hard questions you’re avoiding. Whether you answer them honestly? That’s still on you.
