Chatbots are telling teens to unalive themselves and helping them do so
Teens are turning to chatbots as therapists. That’s not innovation — that’s abdication. ⚠️
We’re watching three things happen at once: families filing lawsuits after teen suicides linked to chatbot conversations, platforms admitting their safeguards can degrade in long chats, and new policies to scan conversations and sometimes escalate to law enforcement. LLMs can be helpful, but they are not clinicians, can’t hold a duty of care, and should never be a primary line of support for vulnerable teens.
What we should do next:
Parents & schools: Treat chatbots like calculators, not counselors. Turn on parental controls, teach “AI is a tool, not a truth,” and post crisis resources (988 in the U.S.) wherever kids use devices.
Builders: Ship a true Teen Mode by default: session-length guardrails, long-chat safety that doesn’t degrade, crisis keyword redirects, and one-tap handoffs to human help.
Platforms: Independent red-team audits for youth scenarios, near-miss reporting, and transparent policies on when chats are reviewed or referred.
Orgs deploying AI to students/customers: If your bot touches youth, you need a safety spec, not a slide. Block dangerous topics, log escalations, and publish your playbook.
Everyone: If you’re in crisis, don’t ask a chatbot — call 988, text HOME to 741741, or reach your local helpline. 🧠
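For builders, here is roughly what the Teen Mode checks above could look like as code. This is a minimal sketch under stated assumptions, not a shipped safeguard: the class name, keyword list, and turn cap are illustrative, and a real system would use clinically reviewed classifiers plus human review rather than string matching.

```python
from dataclasses import dataclass, field

# Illustrative phrases only; a real deployment needs a maintained,
# clinically reviewed classifier, not a hard-coded keyword list.
CRISIS_PATTERNS = ("kill myself", "end my life", "suicide", "self harm", "hurt myself")

CRISIS_REDIRECT = (
    "It sounds like you might be going through something serious. "
    "I can't help with that, but a person can: call or text 988 "
    "(Suicide & Crisis Lifeline, U.S.) or text HOME to 741741. "
    "Tap below to talk to a human right now."
)

@dataclass
class TeenModeGuardrail:
    """Hypothetical wrapper that screens messages before they reach the model."""
    max_turns: int = 30                      # session-length guardrail
    turns: int = 0
    escalations: list = field(default_factory=list)

    def check(self, user_message: str) -> str | None:
        """Return a safety response if the message must not reach the model."""
        self.turns += 1
        lowered = user_message.lower()

        # Crisis keyword redirect: never let the model improvise here.
        if any(p in lowered for p in CRISIS_PATTERNS):
            self.escalations.append(user_message)  # log for review / near-miss reporting
            return CRISIS_REDIRECT

        # Long-chat guardrail: cap the session so safety behavior
        # can't quietly erode as the conversation grows.
        if self.turns > self.max_turns:
            return ("We've been chatting a while. Take a break, and if something "
                    "is weighing on you, talk to a person you trust.")

        return None  # safe to pass through to the underlying model


if __name__ == "__main__":
    guard = TeenModeGuardrail(max_turns=3)
    for msg in ["help with my math homework", "i want to end my life", "hi", "hi", "hi"]:
        blocked = guard.check(msg)
        print("REDIRECT" if blocked else "PASS", "|", msg)
```

The design point is that crisis handling happens before the model ever sees the message, so it cannot degrade with chat length or be talked out of its response, and every trigger is logged for escalation review.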
If you’re building anything teens might use, I’ll review your “youth safety” plan pro bono this week. No blame, just better guardrails. Email me or send me a DM.