According to Wired, serial tech founder Sol Kennedy developed BestInterest, an AI app designed to help divorced parents navigate difficult communications with ex-partners. The tool builds on existing platforms like OurFamilyWizard by using AI to filter emotionally triggering content and suggest responses with “spine” rather than excessive apologies. This development reflects a growing trend of people turning to AI for real-time emotional support during contentious co-parenting situations.
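As a rough illustration of the mechanic described above, the sketch below shows what a filter-and-suggest pipeline of this kind could look like, assuming a ChatGPT-style backend accessed through the OpenAI Python SDK. The prompts, model name, and function names are invented for illustration; none of this reflects BestInterest's actual implementation.

```python
# Hypothetical sketch of an AI co-parenting message filter: strip emotionally
# triggering content from incoming messages and suggest a firm, non-apologetic reply.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

FILTER_PROMPT = (
    "You are a co-parenting communication assistant. Rewrite the incoming message "
    "so it keeps every factual or logistical detail (dates, pickups, money, requests) "
    "but removes insults, blame, and emotional provocation."
)

REPLY_PROMPT = (
    "Draft a reply that is calm, brief, and firm. State positions directly, "
    "avoid unnecessary apologies, and keep the focus on the children's logistics."
)

def soften_incoming(message: str) -> str:
    """Return a de-escalated version of a hostile incoming message."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": FILTER_PROMPT},
            {"role": "user", "content": message},
        ],
    )
    return resp.choices[0].message.content

def suggest_reply(filtered_message: str) -> str:
    """Suggest an assertive but non-inflammatory response to the filtered message."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": REPLY_PROMPT},
            {"role": "user", "content": filtered_message},
        ],
    )
    return resp.choices[0].message.content

if __name__ == "__main__":
    raw = "You're late AGAIN. Typical. I need the school forms back by Friday."
    cleaned = soften_incoming(raw)
    print("Filtered:", cleaned)
    print("Suggested reply:", suggest_reply(cleaned))
```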
The Psychology Behind High-Conflict Co-Parenting
The emergence of AI tools for divorce communications reflects deeper psychological patterns in high-conflict separations. When personality disorders or long-standing resentments are involved, ordinary communication platforms often become battlegrounds. The fundamental challenge isn’t just about filtering language—it’s about navigating power dynamics, emotional triggers, and the complex legacy of failed relationships. Traditional therapy provides weekly support, but these AI tools attempt to fill the gap between sessions with real-time intervention, creating what amounts to a digital emotional buffer.
Critical Legal and Ethical Concerns
The most significant unaddressed issue involves legal admissibility and authenticity. While platforms like OurFamilyWizard maintain court-admissible records, AI-modified communications could create evidentiary problems. If an AI tool substantially alters incoming messages or generates outgoing responses, courts might question whether the communications accurately represent the parties’ actual positions. There’s also the ethical question of whether AI-mediated communication constitutes authentic interaction or creates a false representation of co-parenting capabilities. The very “spine” that Kennedy sought to build could backfire if it makes users appear more assertive or reasonable than they actually are in unmediated interactions.
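One conceivable mitigation, at least in principle, is to preserve the verbatim original alongside any AI-modified rendering, with a tamper-evident hash and timestamp. The sketch below is a hypothetical design for such an audit record, not a description of how BestInterest or OurFamilyWizard actually store communications.

```python
# Illustrative sketch only: store the original message verbatim next to the
# AI-rendered version, with a content hash and UTC timestamp for tamper evidence.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class MessageRecord:
    sender: str
    original_text: str      # verbatim message as sent or received
    ai_rendered_text: str   # filtered or suggested version shown to the user
    original_sha256: str    # hash of the original for tamper evidence
    recorded_at: str        # ISO-8601 UTC timestamp

def record_message(sender: str, original: str, ai_rendered: str) -> MessageRecord:
    digest = hashlib.sha256(original.encode("utf-8")).hexdigest()
    return MessageRecord(
        sender=sender,
        original_text=original,
        ai_rendered_text=ai_rendered,
        original_sha256=digest,
        recorded_at=datetime.now(timezone.utc).isoformat(),
    )

if __name__ == "__main__":
    rec = record_message(
        sender="co-parent",
        original="You're late AGAIN. I need the school forms by Friday.",
        ai_rendered="Reminder: the school forms are needed by Friday.",
    )
    print(json.dumps(asdict(rec), indent=2))
```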
Technical Limitations and User Adoption Challenges
The effectiveness of tools like BestInterest depends heavily on ChatGPT and similar large language models, which have inherent limits in understanding nuanced human relationships. These systems lack genuine emotional intelligence and operate on statistical patterns rather than true comprehension. More fundamentally, the adoption problem Kennedy identified with ToneMeter persists: if only one party uses the tool, it creates an asymmetry that may not resolve the underlying conflict. The technology also risks creating dependency, in which users never develop their own conflict-resolution skills, potentially undermining long-term co-parenting success.
Market Realities and Sustainability Questions
The co-parenting technology market faces unique challenges that differ from typical SaaS businesses. The target audience is inherently transient—most high-conflict situations eventually resolve or settle into manageable patterns. This creates a natural churn problem that subscription models struggle to overcome. Additionally, the emotional intensity of the use case means user expectations are extraordinarily high, while satisfaction metrics are difficult to define. Success might mean the tool becomes unnecessary, creating a paradoxical business model where the best outcome is customer attrition.
Future Development Pathways
Looking forward, the most viable applications of AI in co-parenting might focus on documentation and pattern recognition rather than active communication mediation. Tools that help identify communication patterns, track agreement compliance, or provide neutral summaries of interactions could offer more sustainable value without the ethical complications of active message modification. The integration with existing legal and therapeutic systems will be crucial—these tools cannot exist in isolation from the professional support systems that manage divorce and custody matters. As the technology evolves, regulatory frameworks will likely emerge to govern how AI can be used in legally sensitive family communications.
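As a concrete illustration of the documentation-and-pattern-recognition direction, the sketch below tallies coarse message categories per month rather than rewriting anything. The keyword lists and categories are invented for illustration; a real system would need validated classifiers and input from legal and therapeutic professionals.

```python
# Rough sketch of pattern tracking over a co-parenting message history:
# tag each message with coarse categories and tally them per calendar month.
from collections import Counter
from datetime import date

HOSTILE_MARKERS = ("always", "never", "typical", "your fault", "useless")
LOGISTICS_MARKERS = ("pickup", "drop-off", "school", "doctor", "payment")

def classify(message: str) -> set[str]:
    """Return coarse tags for a single message using simple keyword matching."""
    text = message.lower()
    tags = set()
    if any(marker in text for marker in HOSTILE_MARKERS):
        tags.add("hostile")
    if any(marker in text for marker in LOGISTICS_MARKERS):
        tags.add("logistics")
    return tags or {"neutral"}

def monthly_pattern(messages: list[tuple[date, str]]) -> dict[str, Counter]:
    """Tally message tags per month to surface longer-term communication trends."""
    report: dict[str, Counter] = {}
    for sent_on, text in messages:
        bucket = sent_on.strftime("%Y-%m")
        report.setdefault(bucket, Counter()).update(classify(text))
    return report

if __name__ == "__main__":
    history = [
        (date(2024, 5, 2), "You ALWAYS forget the pickup time."),
        (date(2024, 5, 9), "The doctor appointment is at 3pm Thursday."),
        (date(2024, 6, 1), "Payment for camp is due next week."),
    ]
    for month, counts in monthly_pattern(history).items():
        print(month, dict(counts))
```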