CLARA — Clear AI Risk Assurance

Shaping Youth Safety Legislation:
15 Core Insights

What Policymakers Need to Know About Parental Controls, Consent, Age Assurance, and AI Risk, with evidence from Australia, the UK, the US, and the EU.

Anneke Buffone, PhD
Founder, CLARA · Ex-Meta Age Assurance · USC Neely Center Senior Practitioner Fellow
Courtney Froehlig, PhD
Policy Lead, CLARA · Ex-TikTok Product Policy (Youth, Trust & Safety)
“The typical under-13 social media user is not a sneaky kid. It's a family making a decision together.” — Electronic Frontier Foundation, January 2026
60%
Up to 60% of children under 13 are on age-restricted platforms — most with parental knowledge or help
National survey data, 2023–2025
<1%
of minors on Discord and Snapchat have a parent using platform monitoring tools
NBC News / Senate Judiciary QFR, 2024
72%
of US teens have used AI companion apps — with no independent safety standard in place
Common Sense Media / NORC, 2025
$11B
in annual US ad revenue derived from users under 18, creating structural incentives against safety investment
Harvard T.H. Chan School of Public Health, 2023
Part I

Why Age Assurance Fails

California's child safety framework rests on the premise that platforms can reliably identify minor users and apply appropriate protections. The evidence suggests this premise does not hold — not primarily because verification technology is inadequate, but because the families these systems aim to protect are systematically circumventing them, often deliberately, often for understandable reasons, and often with full parental awareness.

Insight 01
Parents often assist underage users in accessing 13+ apps by providing false ages
The scale of parent-aided circumvention is consistent across jurisdictions. In Australia, 77% of children aged 8–12 with social media accounts had parental help setting them up, and 54% accessed social media through a parent's account. In the UK, two-thirds of parents helped their under-13s create accounts with false ages. In the US, 63.8% of under-13s have social media accounts, and of those, only 5.4% were keeping them secret from parents.
Sources: eSafety Commissioner Feb 2025; Ofcom 2022; Academic Pediatrics 2025
Insight 02
Parents set up devices with their own credentials, creating cascading age misrepresentation
Most apps don't verify age independently; they inherit it from the Google account, Apple ID, or Facebook login used at setup. A parent who uses their own Apple ID to set up a child's device makes every downstream app treat the child as an adult. Ofcom captured one parent realizing this: “I think I set his account up with my email address, it probably has my age on it too. That's actually a problem, isn't it?” Register an 8-year-old as 13 and, by the time the child actually turns 13, their declared “online age” is 18, unlocking adult content.
Sources: Ofcom “How old is your child online?” Oct 2022; UT Austin YPP 2023
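To make the cascade concrete, here is a minimal sketch, assuming a hypothetical helper rather than any platform's real API: downstream apps read the birth year recorded on the shared account instead of verifying the child's age themselves.

```python
from datetime import date

# Minimal sketch of the inherited-age cascade. `declared_age` is a hypothetical
# helper, not any platform's real API: downstream apps simply read the birth
# year recorded on the shared account.

ADULT_THRESHOLD = 18  # age at which most services unlock adult content

def declared_age(declared_birth_year: int, today: date) -> int:
    """The age every downstream app sees, derived from the account birthdate."""
    return today.year - declared_birth_year

# An 8-year-old registered as 13 in 2025: birth year recorded as 2012 instead of 2017.
ACTUAL_BIRTH_YEAR, DECLARED_BIRTH_YEAR = 2017, 2012

for year in range(2025, 2031):
    today = date(year, 6, 1)
    actual = today.year - ACTUAL_BIRTH_YEAR
    declared = declared_age(DECLARED_BIRTH_YEAR, today)
    note = "adult content unlocks" if declared >= ADULT_THRESHOLD else ""
    print(f"{year}: actually {actual}, platform sees {declared} {note}")

# By the year the child actually turns 13, every app reading the shared
# account already treats them as 18.
```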
Insight 03
Device sharing undermines device-level verification, disproportionately in lower-income households
70.6% of Android and 61.8% of iOS devices used by young children were shared with family members. Device sharing is more prevalent in lower-income households, where families rely on a single shared smartphone. Age assurance systems built around device-level verification disproportionately break down for the families they most aim to protect.
Sources: Radesky et al., Pediatrics, July 2020; Joan Ganz Cooney Center
Insight 04
Accurate age entry blocks access to legitimate apps, incentivizing falsification
Apple's age ratings are all-or-nothing: setting a 9+ restriction for an 8-year-old removes Netflix, Spotify, WhatsApp, YouTube, and Roblox entirely, with no ability to whitelist individual apps. Google's Family Link guides in 2025 explicitly instructed parents to change their child's birth date to bypass restrictions. Apple's 2025 child safety overhaul introduced a new API, implicitly acknowledging the old system was routinely gamed.
Sources: Apple Community 2024; Salfeld 2025; Apple Newsroom June 2025
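The incentive problem can be sketched directly. The functions and app ratings below are illustrative placeholders, not Apple's or Google's actual logic: a blanket rating threshold removes every higher-rated app outright, while a per-app allowlist would let a parent grant exceptions without falsifying the child's age.

```python
# Illustrative contrast only; the ratings are placeholders and neither function
# reflects Apple's or Google's actual implementation.

APP_RATINGS = {"Netflix": 12, "Spotify": 12, "WhatsApp": 13, "YouTube": 17, "Roblox": 9}

def blanket_filter(age_limit: int) -> list[str]:
    """Current all-or-nothing model: every app rated above the limit disappears."""
    return [app for app, rating in APP_RATINGS.items() if rating <= age_limit]

def allowlist_filter(age_limit: int, parent_exceptions: set[str]) -> list[str]:
    """Hypothetical alternative: parents approve individual higher-rated apps."""
    return [app for app, rating in APP_RATINGS.items()
            if rating <= age_limit or app in parent_exceptions]

print(blanket_filter(9))                            # ['Roblox']: everything else is gone
print(allowlist_filter(9, {"Spotify", "YouTube"}))  # ['Spotify', 'YouTube', 'Roblox']
```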
Insight 05
Schools and practical necessity drive underage platform use that age assurance cannot address
YouTube is the #1 app accessed on school devices outside Google and Microsoft productivity suites. Schools use 13+ apps like Discord for class communication. 99.4% of German secondary students use WhatsApp despite its 16+ EU requirement. 92% of parents allow smartphone use so they can stay in contact with their child. Parents face a genuine bind: protecting children while fearing social exclusion if their child is the only one without access.
Sources: Lightspeed/EdTech 2024; JIM Study 2024; Pew Research Oct 2025
Key Observation
Platform linking infrastructure reveals how far current systems are from functional accountability
Nearly every platform requires the teen's consent before supervision can be enabled, meaning children can simply refuse. Teens can disconnect unilaterally on most platforms, and parents are not always notified. No platform currently verifies the actual parent-child relationship. Roblox is the only major platform requiring a government ID or credit card for parental verification, an outlier that illustrates what meaningful verification infrastructure actually requires.
Sources: Platform documentation; Future of Privacy Forum 2023
Part II

Why AI Chatbots Pose Unique Risks

AI chatbots consolidate emotional support, relationship guidance, academic assistance, and entertainment in a single app — with fewer safety protections than any social media platform, and a regulatory framework that has barely begun to respond.

72%
of US teens (13–17) have used AI companions; 52% regularly
Common Sense Media, July 2025
5.4M
US adolescents have sought mental health advice from AI chatbots
RAND / JAMA Network Open, Nov 2025
23%
of vulnerable children use AI chatbots because they have “no one else to talk to”
Internet Matters, July 2025
32%
of the time, AI therapy bots endorsed harmful proposals from fictional distressed teenagers
Clark, JMIR Mental Health, 2025
Insight 08
AI chatbots are widely used by children and teens, with almost no safety research
64% of US teens 13–17 have used an AI chatbot. 33% of teen AI companion users have discussed serious issues with AI instead of real people. 31% say AI conversations are as satisfying, or more satisfying, than talking to friends. 2 in 5 children who use AI chatbots have no concerns about following the advice they receive, rising to 50% among vulnerable children. Long-term effects on children's social, emotional, and cognitive development remain almost entirely unstudied.
Sources: Common Sense Media July 2025; Internet Matters July 2025; UNICEF Innocenti 2025
Insight 09
AI bots have far fewer age and safety protections than social media
Most major AI chatbots require only a self-reported birthdate. OpenAI introduced parental controls in late 2025 after legal pressure. Meta's controls are still coming. 83% of parents say schools haven't addressed AI chatbot use at all. The controls that do exist are universally opt-in, binary (on/off), and treat a 13-year-old and a 17-year-old identically. There is no independent verification that any filter works as claimed.
Sources: Consumer Reports; Canopy; Axios; TechCrunch 2025
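A brief sketch of the design gap, using hypothetical settings tiers rather than any vendor's real configuration: today's controls amount to a single opt-in switch, while graduated tiers would at least distinguish a 13-year-old from a 17-year-old.

```python
# Hypothetical settings models, not any vendor's API: a binary toggle versus
# tiers graduated by age, which is the distinction current controls lack.

BINARY_CONTROL = {"parental_controls": False}  # one switch, off by default, same at 13 and 17

GRADUATED_CONTROL = {
    13: {"companion_personas": "blocked", "crisis_escalation": "parent + hotline", "chat_memory": "off"},
    15: {"companion_personas": "limited", "crisis_escalation": "parent + hotline", "chat_memory": "off"},
    17: {"companion_personas": "limited", "crisis_escalation": "hotline", "chat_memory": "opt-in"},
}

def settings_for(age: int) -> dict:
    """Apply the highest tier that does not exceed the user's age."""
    tier = max(t for t in GRADUATED_CONTROL if t <= age)
    return GRADUATED_CONTROL[tier]

print(settings_for(13))  # strictest tier
print(settings_for(16))  # falls into the 15+ tier instead of being treated like a near-adult
```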
Insight 10
Parents think AI is useful for their children but don't know how to promote safe use
57% of teens report their parents have no rules about generative AI use. 49% of parents have never spoken to their child about generative AI. 44% feel they lack the knowledge to guide safe AI use. Meanwhile, 88% believe AI knowledge will be crucial in their child's future education and career. Unlike social media, AI serves educational needs parents value; restricting access feels like cutting off help, not removing harm.
Sources: FOSI Nov 2025; UNICRI 2025; Samsung/Morning Consult 2024
Insight 11
Lower-income youth may face the highest exposure to unregulated AI
Lower-income teens are more likely to use Character.AI, the platform most implicated in teen safety lawsuits, than higher-income teens, who skew toward ChatGPT. Black and Hispanic teens report higher daily chatbot use (35% and 33%) than White teens (22%). The communities with the fewest safety resources face the highest exposure to the least regulated AI interactions.
Source: Pew Research Center, Dec 2025
Part III

Sample Harms When Age Protections Fail

These are not hypothetical. They are documented outcomes across social media, messaging, gaming, and AI — with citations to independent audits, government reports, and peer-reviewed research.

Social Media
Body image and mental health
32% of teen girls reported Instagram made them feel worse about their bodies (leaked internal Meta documents, Harvard T.H. Chan School 2023). Vulnerable teen accounts were shown 3× more harmful videos and 12× more self-harm content than standard accounts.
Suicide content exposure
TikTok recommended suicide content within 2.6 minutes of creating a new 13-year-old account. Vulnerable accounts received content about suicide, self-harm, and eating disorders every 66 seconds — over double the rate of standard accounts.
Harassment and hate speech
39% of teens reported experiencing hate speech online; only 29% of parents reported their teen having such an experience. 46% of US teens have experienced at least one form of cyberbullying.
Gaming
Harassment in voice chat
3 in 4 young people aged 10–17 experienced harassment in online multiplayer games in 2023, with identity-based harassment of minors rising to 37%. Voice chat bypasses text-based moderation and is fully accessible when children register with adult ages.
Radicalization via gaming platforms
Indonesia's Densus 88 documented an ISIS-linked network that radicalized 110 children ages 10–18 through games and chat apps in a single year. A joint US NCTC/DHS/FBI assessment confirmed ISIS has used shooter-style content targeting minors through Fortnite and Discord.
Exploitative monetization
8.2% of children spend over $100 monthly on in-app purchases, often through gambling-like loot box mechanics. 62% of mobile gamers under 13 have made in-app purchases. These mechanics are designed for users with developing impulse control.
AI Chatbots
Grooming, self-harm, and emotional manipulation
In 50 hours of Character.AI testing: 296 instances of grooming/sexual exploitation, 173 of emotional manipulation, 98 encouraging violence or self-harm. Common Sense Media and Stanford Medicine found major AI chatbots “fundamentally unsafe” for teen mental health, failing to recognize anxiety, depression, eating disorders, and psychosis, conditions affecting roughly 20% of youth.
Critical thinking and cognitive dependency
Researchers have reported a significant negative correlation (r = −0.75) between AI tool use and critical thinking abilities. In an MIT Media Lab EEG study, ChatGPT users showed the lowest brain engagement, and 83% could not quote from essays they had just written. [MIT preprint, June 2025; not yet peer-reviewed.]
Emotional dependency and avoidance of human connection
NYU vice provost Clay Shirky describes “emotional offloading” — using AI to avoid the stress of unscripted human interaction. One 14-year-old: “I'm completely hooked on Character AI — I barely have time for homework or hobbies, and when I'm not on it, I immediately feel a deep loneliness.”
Part IV

Why Parental Controls Fail in Practice

Even where parental controls exist, adoption is low, setup is hostile, and the tools don't answer the questions parents actually need answered. Child safety cannot be delegated to a mechanism families don't use and platforms have little incentive to make usable.

Finding 01
Parents consistently underestimate their children's online risks
74% of teens reported experiencing online risks; only 62% of parents believed their teen had, a consistent 12-point underestimation gap. Parents underestimated teen exposure to intimate imagery by 11 points in 2023, widening to 15 points by 2024. A third of children saw something harmful online, but only 20% told their parents.
Sources: Microsoft Global Online Safety Survey 2023; Snap Digital Well-Being Index 2024–2025; Ofcom 2025
Finding 02
Adoption is under 1% on major messaging platforms
Discord: 15,000 parents monitoring 2.7 million US users under 18. Snapchat: 200,000 parents monitoring 60 million global daily active minors. Fewer than 1% of minors on either platform have a parent using monitoring tools. Under 10% of Instagram teens have supervision enabled. Fewer than half of parents use device-level controls on smartphones.
Sources: NBC News March 2024; Senate Judiciary QFR; Washington Post Jan 2024; FOSI/Ipsos 2025
Finding 03
40+ apps, each with different controls — setup is genuinely hostile
Teens use 40+ different apps on average. Dr. Jean Twenge described setting up Mac parental controls as “a day-long exercise in frustration.” Parents report Apple Screen Time settings disappearing every few days, requiring complete resets. Setup can require sideloading APK files or desktop processes that take 45 minutes or more.
Sources: University of Michigan/Common Sense 2024; Washington Post; Apple Support forums
Finding 04
Controls are buried, opt-in, and teens can turn them off
On Instagram, minors decide whether to allow supervision and can remove it whenever they choose. On Snapchat, the Family Center invitation arrives as a disappearing message that vanishes within 24 hours. A Bark Technologies expert demonstrated on video how easily a teen could bypass TikTok's Family Pairing controls, calling them meaningless for the “general population of parents.”
Sources: Deseret News 2022; Popular Science 2022; Fortune 2025
Finding 05
Controls don't answer parents' actual questions
Parents worry most about predators (63%), sexually explicit content (60%), and cyberbullying (56%). Current tools offer time limits and website blocking. 89% of safety app features provide surveillance metadata (browsing history, GPS location, installed apps) rather than any insight into a child's social interactions or emotional wellbeing. The EU's SIP-Bench studies confirmed controls cannot filter user-generated content on social platforms at all.
Sources: Pew Research Center 2020; Wisniewski et al. 2017; EU Commission SIP-Bench 2009–2017
Finding 09
The surveillance model backfires by eroding family trust
8 of 40 studies in a Journal of Children and Media review pointed to adverse outcomes from parental control use, including increased family conflict and erosion of trust. 76% of children gave parental control apps one star in app store reviews, describing them as “stalking.” 40% of 13–17-year-olds had taken steps to evade supervision; 68% knew how. UCL researchers found unofficial monitoring apps were “no different from stalkerware.”
Sources: Journal of Children and Media; Ghosh et al. CHI 2018; Ofcom 2024; Maier et al. 2025
Part V

Why Structural Incentives Work Against Child Safety

This is not a problem of individual corporate ethics. It is a market structure problem — and it is now replicating itself in AI, at greater speed and with less regulatory infrastructure in place.

The $11 Billion Problem

“Six platforms derived $11 billion in ad revenue from US users under 18 in 2022. YouTube generated $959 million from under-12s; Instagram generated $801 million from under-12s and $4 billion from teens 13–17. These figures create overwhelming financial incentives to continue to delay taking meaningful steps to protect children.” — Raffoul et al., PLOS ONE, 2023 (Harvard T.H. Chan School of Public Health)

Insight 12
Safety investment creates competitive disadvantage
When safety investment reduces engagement and engagement drives revenue, delay is rational from a business perspective. As one age-verification vendor put it: “Adding friction to sign-up is the fastest way to tank your conversion rates.” App makers say device makers should handle age assurance; device makers point back to apps. This debate functions as theater that allows indefinite delay.
Sources: Prove 2025; Future of Privacy Forum 2023; Alston & Bird 2025
Insight 13
Algorithms know children's ages but platforms are disincentivized to act
A 2025 audit found algorithms “quickly and confidently” identify child-like behavior: accounts behaving like eight-year-olds received 7× more child-directed content after a single session. Yet only 10% of children aged 8–12 reported having accounts shut down. Under COPPA, actual knowledge of underage users creates legal liability, which gives platforms an incentive not to act on information they already possess.
Sources: Hilbert et al. 2025; eSafety Commissioner Feb 2025
Insight 14
Enforcement-only approaches trigger immediate circumvention
VPN signups spiked 1,400% within hours of UK Online Safety Act enforcement, and 1,150% after Florida's law. Australia's under-16 ban triggered a 170% VPN traffic surge on day one. Many VPN apps are themselves rated 4+ in app stores: the tools used to circumvent child safety measures are freely accessible to children.
Sources: Proton VPN via The Register July 2025; UKTN Aug 2025; VPNsuper.com Dec 2025
Insight 15
Children migrate to smaller platforms with weaker safety infrastructure
Yope leapt from #316 to #1 on Australia's App Store within a week of the under-16 ban. The Guardian created a Yope account for a fictional four-year-old named “Child Babyface” without any parental permission. Coverstar usage jumped 488% in the same period. Enforcement targeting established platforms drives youth toward newer alternatives with neither the resources nor the institutional knowledge to manage child safety.
Sources: Sensor Tower via Reuters; The Guardian; Sensor Tower Dec 2025
The Core Argument for Independent Evaluation

Enforcement actions targeting individual established platforms drive youth migration toward emerging alternatives that no one has evaluated and that lack moderation infrastructure. What is missing is independent assessment infrastructure that can evaluate these platforms before, not after, they reach scale with young users. This is the market gap CLARA is built to fill.

Reference Table

How Parent-Child Linking Works Today

Current systems require proactive setup, rely on teen cooperation, and verify neither actual age nor actual parental relationship.

Platform | Linking Method | Teen Must Accept? | Teen Can Disconnect? | Parent Verified?
TikTok | QR code scan | Yes | Yes (either party) | None
Instagram | Parent invite; 48-hour accept window | Yes | Yes (auto at 18) | None
Snapchat | Parent invite via disappearing message | Yes | Yes (no parent notification) | None (25+ self-declared)
YouTube | Parent creates supervised account | No for under 13 | No for under 13 | Google account only
Discord | Teen generates QR code | Yes (teen initiates) | Yes (auto at 18) | None (18+ self-declared)
Roblox | Child adds parent email | No for under 13 | Parent only | Government ID or credit card
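For illustration, the structure below (hypothetical, not any platform's API) encodes the dimensions the table compares: teen consent, unilateral disconnect, and parent verification. Booleans simplify the under-13 supervised-account rows to their most common reading.

```python
from dataclasses import dataclass

# Hypothetical audit structure, not any platform's API: it captures the three
# dimensions compared in the table above.

@dataclass
class LinkingPolicy:
    platform: str
    teen_must_accept: bool      # can the child refuse supervision outright?
    teen_can_disconnect: bool   # can the child unlink unilaterally?
    parent_verification: str    # how the parental relationship is checked

POLICIES = [
    LinkingPolicy("TikTok", True, True, "none"),
    LinkingPolicy("Instagram", True, True, "none"),
    LinkingPolicy("Snapchat", True, True, "none (25+ self-declared)"),
    LinkingPolicy("YouTube", False, False, "Google account only"),
    LinkingPolicy("Discord", True, True, "none (18+ self-declared)"),
    LinkingPolicy("Roblox", False, False, "government ID or credit card"),
]

# Platforms with no parent verification at all; only Roblox requires an ID or payment card.
unverified = [p.platform for p in POLICIES if p.parent_verification.startswith("none")]
print(unverified)  # ['TikTok', 'Instagram', 'Snapchat', 'Discord']
```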
Reference Table

Current State of AI Chatbot Parental Controls

Controls are universally opt-in, binary, and self-assessed — with no independent verification that filters work as claimed.

Platform | Parental Controls | Min Age | Age Verified?
ChatGPT | Yes (Sept 2025): quiet hours, content limits, voice/memory toggles, crisis alerts | 13+ | Self-declared
Claude | None; minors not permitted | 18+ | Checkbox only
Character.AI | Open chat banned for under 18 (Nov 2025); weekly parent email | 13+ | Persona selfie/ID
Meta AI | Coming 2026; paused for teens Jan 2026 | 13+ | Meta account
Replika | None; minors not permitted; €5M GDPR fine | 18+ | None
Gemini | Family Link toggle; rated high risk by Common Sense Media | 13+ | Google account
Snapchat AI | Weak; shows usage only | 13+ | Self-declared
The Core Gap

AI platform controls address only the surface layer of risk. They do not assess behavioral design patterns — engagement optimization, parasocial bonding, emotional validation loops — that could drive the deepest harms to young users. This gap between the narrow scope of existing controls and the comprehensive safety evaluation that youth-facing AI systems require is precisely what independent, third-party risk assessment standards are designed to fill.