Sam Altman says ChatGPT will start separating minors from adults with age-prediction tech, then route teens into a locked-down experience.
That means default parental controls, fewer features, and tighter filters on sex, self-harm, and other risky topics.
He frames it as a values trade-off: when freedom, privacy, and safety collide, teen safety wins.
There’s even talk of ID checks and of escalating to parents or authorities in cases of acute distress.

On paper, that sounds compassionate.
In practice, it’s a tectonic shift in how “the internet” treats young people—and it won’t stop at teens.
Age-guessing models will make mistakes, sweeping adults into kid mode and chilling sensitive but legitimate conversations; the quick math below shows how fast those errors pile up.
Mandatory verification will expand the surveillance footprint just as AI becomes the default interface for learning and healthcare questions.
Escalations to parents or police could deter vulnerable kids from seeking help, pushing them to darker corners of the web.
And once safety overrides privacy for one group, regulators and platforms will copy-paste that logic everywhere.
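To make the misclassification worry concrete, here’s a back-of-envelope sketch. Every number in it is an invented assumption for illustration, not an OpenAI figure: even a classifier that correctly identifies adults 98% of the time would route millions of adults into kid mode, simply because adults vastly outnumber minors.

```python
# Back-of-envelope base-rate math for an age classifier.
# All numbers below are hypothetical assumptions, not OpenAI figures.

adult_users = 500_000_000   # assumed adult user base
minor_users = 50_000_000    # assumed minor user base
specificity = 0.98          # assumed: 98% of adults correctly classified as adults
sensitivity = 0.95          # assumed: 95% of minors correctly flagged as minors

adults_flagged_as_minors = adult_users * (1 - specificity)
minors_flagged = minor_users * sensitivity
total_in_kid_mode = adults_flagged_as_minors + minors_flagged

# Of everyone routed into "kid mode", what fraction are actually adults?
adult_share_of_kid_mode = adults_flagged_as_minors / total_in_kid_mode

print(f"Adults wrongly routed into kid mode: {adults_flagged_as_minors:,.0f}")
print(f"Share of kid-mode users who are adults: {adult_share_of_kid_mode:.0%}")
```

With those made-up numbers, roughly ten million adults land in kid mode, and about one in six “kid mode” users is actually an adult. Tune the assumptions however you like; the base-rate problem doesn’t go away.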

My take: protect kids, yes—but not by normalizing identity checks and black-box “age scores” at the chat layer.
If we need guardrails, make them opt-in, transparent, locally enforced, and audited by third parties—not a precedent for ID-gated AI.
Because today it’s “for minors.” Tomorrow it’s for everyone.
