Technology & Science
Meta Deploys AI Visual Age Detection to Purge Under-13 Accounts Globally
On 5–6 May 2026, Meta began activating an AI system that scans users’ photos and text for cues such as height, bone structure, birthday mentions and school references, auto-deactivating suspected under-13 profiles and expanding Teen Account safeguards to all 27 EU nations and Brazil and, for the first time, to Facebook in the U.S.
Focusing Facts
- Flagged accounts are immediately disabled and must pass Meta’s age-verification process to be restored; the visual-cue AI is already live in select countries and is slated to cover Instagram Live, Reels and Facebook Groups in all regions within months.
- The enforcement push follows an April 2026 New Mexico jury verdict that fined Meta $375 million for misleading safety claims and exposed it to a further $3.7 billion in potential penalties.
- Meta says it has auto-enrolled “hundreds of millions” of teens into restricted Teen Accounts since 2024 and will send U.S. parents in-app prompts this month to confirm their children’s ages.
Context
Tech platforms have been here before: after the 1998 U.S. COPPA law, sites like Yahoo! Kids walled off children’s areas, only to watch teens lie about birthdays. Today’s machine-inference strategy echoes 19th-century factory inspectors who measured child workers’ height to enforce labor laws: useful, but far from foolproof. Meta’s shift from self-reported data to probabilistic biometric cues marks a broader historical arc, the migration of governance from states to code, where algorithms police rules that lawmakers struggle to enforce. It also resurrects the privacy-versus-protection debate that led Facebook to shutter facial recognition in 2021, suggesting a pendulum swing back toward surveillance-lite. Over a 100-year horizon, this moment may signal the normalization of “ambient age verification,” a precursor to wider automated compliance systems that could one day underpin digital ID regimes. Alternatively, if public backlash mirrors past biometrics scandals, it may become another abandoned experiment in the long tug-of-war between youth safety, civil liberties, and platform economics.
Perspectives
Corporate self-publication
Meta corporate blog — Presents the new AI age-assurance tools as evidence that Meta is proactively safeguarding teens and removing under-13s while respecting privacy, since the system is “not facial recognition.” As a company under regulatory fire, Meta uses its house blog to frame the rollout as voluntary innovation, downplaying the privacy trade-offs and portraying the app-store-level age checks it lobbies for as a public-spirited solution.
Tech trade press
Android Headlines, Analytics Insight, etc. — Present the bone-structure-scanning AI as an impressive technical upgrade that will deactivate suspected underage accounts and extend Teen Account protections across Instagram, Facebook and multiple regions. These outlets largely recycle Meta’s press-release language and focus on feature specs, offering scant scrutiny of accuracy gaps or civil-liberty concerns and reflecting a reliance on corporate statements for quick tech coverage.
Industry commentary sceptical of strict regulation
Social Media Today, News.com.au — Acknowledge the new AI measures but stress that they still won’t stop determined kids, arguing that blanket bans and tough penalties are ineffective or unfair to platforms. By framing enforcement failures as inevitable and harsher rules as “unfair,” the commentary tilts toward industry interests, minimising the case for stronger child-safety regulation or privacy safeguards.