Clear AI Risk Assurance

The system protecting children online needs to be rebuilt.

Platforms are not required to prove their safety systems work. The researchers who understand these failures from the inside are structurally prevented from studying them. Every country is measuring something different. And AI is moving faster than any existing framework can track.

60% — Up to 60% of children under 13 are on age-restricted platforms, most with a parent's knowledge or help (national survey data, 2023–2025)

72% — of US teens have used AI companion apps, with no independent safety standard in place (Common Sense Media / NORC, 2025)

<20% — average in-app parental control activation across major platforms (platform disclosures and independent audits)

50+ — jurisdictions legislating youth online safety with no shared measurement framework (regulatory landscape review, 2025)

The Problem

We don't let builders inspect their own buildings.

“We don't let car manufacturers grade their own crash tests. We don't let pharmaceutical companies self-certify their drugs. There is no coherent argument for why digital platforms serving children should be the exception.”

The evidence base for child online safety is structurally compromised. Researchers inside platforms know where the problems are and are not allowed to say so. Researchers outside often don't know what questions to ask, or can't secure funding that lets them ask freely. The organizations attempting independent work frequently depend on the industry they're meant to evaluate. The result is a field that cannot produce the honest, comparable data that policy requires.

01 — No proof required

Platforms must ship age gates and parental controls. They are not required to show that these work. Up to 60% of children under 13 are on age-restricted platforms — most with parental knowledge or help. In-app control activation runs below 20% across most major platforms. Controls that exist only on paper are not child safety. They are liability transfer.

02 — The independence problem

Many child safety organizations accept technology funding or have technology executives on their boards — for some kinds of work, that is a reasonable arrangement. But for measuring baseline harm rates and evaluating whether safety systems actually work, independence is not optional. Safety claims require an evaluator with no financial stake in the outcome.

03 — Fragmented global measurement

More than 50 jurisdictions are legislating youth online safety with no shared methodology. One country's privacy law may limit what another can access. Piecemeal national standards are possible — but global platforms operating across all of them will not invest deeply in compliance that can't be compared across borders. A globally coordinated, research-backed standard is the faster and more durable path.

04 — AI is a new and unmeasured risk

72% of US teens have used AI companion apps. Parents and teachers openly say they don't know how to keep children safe on these systems. Most age assurance for AI is a checkbox. We don't yet have reliable data on what children discuss with AI, at what rates, or what happens afterward. The harm patterns these systems introduce — emotional dependence, learning displacement, parasocial formation — are not detectable by any existing safety framework, and no certification standard currently exists.

Our Work

What we consult on, build, and advocate for.

CLARA is a three-person founder team. We are not a think tank, not a lobby, and not a platform watchdog that publishes annual reports no one reads. We consult, we build, and we advocate — in the places where independent expertise with inside-platform experience is rarest and most needed.

These aren't four parallel workstreams. The advisory work pays for the builds. The builds generate real-world data. The data makes the advisory worth hiring. We're not running four departments — we're running one flywheel at founder stage. We're looking for partner organizations and funders who want to help us hire the team to bring the full vision to reality.

01 — Research & Advisory

Consult on methodology. Advocate for standards.

We advise regulators, policymakers, and research bodies on how to evaluate youth-facing AI and social platforms — and on the questions the industry has systematically avoided asking. Survey design, red-team protocol development, standards co-design, legislative testimony. We are available to any regulatory or policy body working on youth digital safety. We are currently co-developing an age assurance survey with researchers from Jon Haidt's group and the Stanford Social Media Lab, and presenting to the California Privacy & Consumer Protection Committee in March 2026.

Age Assurance · Parental Controls · AI Safety · Policy Advocacy · Expert Testimony · Global Survey
Read: Shaping Youth Safety Legislation — 15 Core Insights →
02 — Red Team Testing

Test whether safety systems actually work.

We run adversarial testing on youth-facing AI systems using synthetic persona registries — simulated minor users interacting with logged-out chatbots and age-gated platforms to detect behavioral failures that companies cannot credibly self-report. This is the evaluation methodology the field lacks: not asking platforms what their safety features do, but testing whether those features hold under realistic conditions. Red team findings feed directly into CLARA's advisory and advocacy work.

Synthetic Personas · Behavioral Testing · AI Companions · Age Gates · Independent Audit
03 — Family Safety Coach

An iOS app that walks parents through the controls that actually exist.

Most parents don't activate parental controls because the controls are confusing, inconsistently named, and buried. The CLARA Family Safety Coach is an iOS app that walks a parent through their child's specific devices and apps — step by step — with a built-in coach that surfaces relevant research, practical tips, and what good controls actually look like. It's powered by CLARA's own control taxonomy and will integrate with the Claude API for real-time coaching. Currently in development.

iOS App · Parental Controls · Step-by-Step Walkthrough · AI Coach · In Development
04 — Family Programs

Build digital agency, not just digital rules.

The Tech Wellness Trainer is a browser plugin that monitors a child's interaction with AI and social platforms and flags whether the tool is working for the kid or working on them. Reclaim & Rewire is a five-day immersive program for tween families — ages 10–12 — that builds the skills and habits the Trainer requires to be meaningful. Both are grounded in behavioral science, education research, and addiction recovery evidence. Both generate real-world data that sharpens CLARA's evaluation instruments.

Ages 10–12 · Digital Agency · Browser Plugin · Reclaim & Rewire · Behavioral Science

How It Works Together

The four areas aren't four parallel workstreams — they're one integrated thesis executed in sequence.

The advisory and red team work is live now. It generates revenue and credibility, and it funds and informs everything else. The Family Safety Coach is Zack's primary build focus. Reclaim & Rewire is a pilot program, not a scaled operation. The global survey is where we're headed once we have the partnerships to anchor it.

Three people can hold all four because the research feeds the tools, the tools generate the data, and the data sharpens the advisory. It's one loop, not four jobs. What we're looking for are the partner organizations and funders who want to help us hire the team to take it to scale.

Current Work

Active, not aspirational.

These are the collaborations, engagements, and public appearances underway now.

Age Assurance Survey — United States
Collaborative survey on age assurance and circumvention, co-developed with researchers from Jon Haidt's group and the Stanford Social Media Lab.
In Development
Youth AI Usage — Grant Research
Joint grant with Dunigan Folk on youth AI usage patterns, what children are discussing with AI systems, and the safety implications for families and educators.
In Progress
UK Department for Education — AI Tutor Safety
Exploring partnership on independent safety evaluation methodology for AI tutoring systems in UK schools.
Early Conversations
School System AI Policy
Collaborating with a US school system on AI use policies and educator guidance grounded in behavioral evidence.
Active
California Privacy & Consumer Protection Committee
Presenting on age assurance evidence, circumvention research, and parental controls effectiveness in Sacramento.
March 17, 2026
All Tech Is Human — Age Assurance Podcast
Featured conversation on age assurance research and the limits of current safety systems, moderated by David Polgar.
Upcoming
Montclair Digital Life Council
Contributing to policy development and digital literacy education at the municipal level, where families actually live with these problems.
Ongoing