YouTube is testing a new feature that uses AI to determine whether a viewer is over eighteen. With Australia banning social media access for teenagers under sixteen, and the US Senate mulling similar restrictions, as the near-ban of TikTok suggests, Google seems to be preempting any new status quo by rolling out age auto-detection alongside a few other child safety features.
YouTube’s AI for kids
YouTube CEO Neal Mohan announced the AI age restriction in his Tuesday letter on the video site’s big bets for 2025. He mentions using machine learning to estimate a user’s age and distinguish adults from children, letting the platform automatically filter out inappropriate content and promote the kind of content better suited to each group. Engadget reports that the feature will work by examining a user’s searches, the categories of videos they watch, and the age of their account. If a user searches for topics like taxes, or their account is twenty years old, the algorithm can conclude that the user is an adult.
Should the AI model find a user underage, the app’s settings will steer them toward child-friendly content, blocking explicit videos and search results through the SafeSearch filter.
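The reported approach, combining behavioral signals like search topics and account age into an adult-or-minor guess, could be sketched roughly as below. This is purely a hypothetical illustration: the signal names, thresholds, and scoring weights are invented, and YouTube has not published the actual model.

```python
# Hypothetical sketch of signal-based age estimation. All thresholds,
# category names, and weights are invented for illustration; they do not
# reflect YouTube's real system.

ADULT_TOPICS = {"taxes", "mortgages", "retirement planning"}
KID_CATEGORIES = {"cartoons", "toy unboxing", "nursery rhymes"}

def estimate_is_adult(search_terms, account_age_years, watch_categories):
    """Combine simple behavioral signals into an adult/minor guess."""
    score = 0
    # Searches for adult-oriented topics (e.g. taxes) suggest an adult user.
    if any(term in ADULT_TOPICS for term in search_terms):
        score += 2
    # A long-lived account (here, ten-plus years) implies an adult owner.
    if account_age_years >= 10:
        score += 2
    # A watch history dominated by kids' categories suggests a minor.
    kid_watches = sum(1 for c in watch_categories if c in KID_CATEGORIES)
    if kid_watches > len(watch_categories) / 2:
        score -= 3
    return score > 0
```

A real classifier would presumably be a trained model over far richer features rather than a hand-weighted score, but the principle of inferring age from usage patterns instead of a self-reported birthday is the same.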
YouTube says testing will begin by year’s end, with a global rollout planned for 2026. Although the feature was announced for the video streaming site, it will reportedly be tested across other Google platforms as well.
Google is not the only company experimenting with new child protection measures. Last year, Meta announced an “adult classifier” tool for Instagram to identify underage users pretending to be adults. For parents concerned about the internet, these AI age monitors could serve as effective age restrictors. Adult content remains readily accessible to children, and a measure that adds more friction than a lone “Are you over 18?” prompt could go a long way.