Clear AI Risk Assurance
Platforms are not required to prove their safety systems work. The researchers who understand these failures from the inside are structurally prevented from studying them. Every country is measuring something different. And AI is moving faster than any existing framework can track.
The Problem
“We don't let car manufacturers grade their own crash tests. We don't let pharmaceutical companies self-certify their drugs. There is no coherent argument for why digital platforms serving children should be the exception.”
The evidence base for child online safety is structurally compromised. Researchers inside platforms know where the problems are and are not allowed to say so. Researchers outside often don't know what questions to ask, or can't secure funding that lets them ask freely. The organizations attempting independent work frequently depend on the industry they're meant to evaluate. The result is a field that cannot produce the honest, comparable data that policy requires.
Platforms must ship age gates and parental controls. They are not required to show these work. Up to 60% of children under 13 are on age-restricted platforms — most with parental knowledge or help. In-app control activation runs below 20% across most major platforms. Controls that exist only on paper are not child safety. They are liability transfer.
Many child safety organizations accept technology funding or have technology executives on their boards — for some kinds of work, that is a reasonable arrangement. But for measuring baseline harm rates and evaluating whether safety systems actually work, independence is not optional. Safety claims require an evaluator with no financial stake in the outcome.
More than 50 jurisdictions are legislating youth online safety with no shared methodology. One country's privacy law may limit what another can access. Piecemeal national standards are possible — but global platforms operating across all of them will not invest deeply in compliance that can't be compared across borders. A globally coordinated, research-backed standard is the faster and more durable path.
72% of US teens have used AI companion apps. Parents and teachers openly say they don't know how to keep children safe on these systems. Most age assurance for AI is a checkbox. We don't yet have reliable data on what children discuss with AI, at what rates, or what happens afterward. The harm patterns these systems introduce — emotional dependence, learning displacement, parasocial formation — are not detectable by any existing safety framework, and no certification standard currently exists.
Our Work
CLARA is a three-person founder team. We are not a think tank, not a lobby, and not a platform watchdog that publishes annual reports no one reads. We consult, we build, and we advocate — in the places where independent expertise with inside-platform experience is rarest and most needed.
These aren't four parallel workstreams. The advisory work pays for the builds. The builds generate real-world data. The data makes the advisory worth hiring. We're not running four departments — we're running one flywheel at founder stage. We're looking for partner organizations and funders who want to help us hire the team to get the full vision to reality.
We advise regulators, policymakers, and research bodies on how to evaluate youth-facing AI and social platforms — and what questions the industry has systematically avoided asking. Survey design, red team protocol development, standards co-design, legislative testimony. We are available to any regulatory or policy body working on youth digital safety. We are currently co-developing an age assurance survey with researchers from Jon Haidt's group and the Stanford Social Media Lab, and presenting to the California Privacy & Consumer Protection Committee in March 2026.
Read: Shaping Youth Safety Legislation — 15 Core Insights →We run adversarial testing on youth-facing AI systems using synthetic persona registries — simulated minor users interacting with logged-out chatbots and age-gated platforms to detect behavioral failures that companies cannot credibly self-report. This is the evaluation methodology the field lacks: not asking platforms what their safety features do, but testing whether those features hold under realistic conditions. Red team findings feed directly into CLARA's advisory and advocacy work.
Most parents don't activate parental controls because the controls are confusing, inconsistently named, and buried. The CLARA Family Safety Coach is an iOS app that walks a parent through their child's specific devices and apps — step by step — with a built-in coach that surfaces relevant research, practical tips, and what good controls actually look like. It's powered by CLARA's own control taxonomy and will integrate with the Claude API for real-time coaching. Currently in development.
The Tech Wellness Trainer is a browser plugin that monitors a child's interaction with AI and social platforms and flags whether the tool is working for the kid or working on them. Reclaim & Rewire is a five-day immersive program for tween families — ages 10–12 — that builds the skills and habits the Trainer requires to be meaningful. Both are grounded in behavioral science, education research, and addiction recovery evidence. Both generate real-world data that sharpens CLARA's evaluation instruments.
How It Works Together
The four areas aren't four parallel workstreams — they're one integrated thesis executed in sequence.
The advisory and red team work is live now. It generates revenue and credibility, and it funds and informs everything else. The Family Safety Coach is Zack's primary build focus. Reclaim & Rewire is a pilot program, not a scaled operation. The global survey is where we're headed once we have the partnerships to anchor it.
Three people can hold all four because the research feeds the tools, the tools generate the data, and the data sharpens the advisory. It's one loop, not four jobs. What we're looking for are the partner organizations and funders who want to help us hire the team to take it to scale.
Current Work
These are the collaborations, engagements, and public appearances underway now.