Meta expands teen safety protections to 27 European Union countries
Advanced AI will now analyse profile clues to identify users lying about their age
Meta Platforms announced on Tuesday that it is significantly expanding its technical safeguards for teenage users across the European Union and on Facebook in the United States.
The shift comes as the social media giant faces intensifying pressure from international regulators over teen mental health, online abuse, and the proliferation of AI-generated illicit content.
The California-based tech giant revealed it will deploy advanced artificial intelligence to proactively identify accounts suspected of belonging to teenagers, even if those users have provided an adult birthdate.
The technology, which analyses entire profiles for contextual clues to determine a user's true age, will be rolled out across all 27 EU member states.
Furthermore, Meta is bringing these protections to Facebook in the United States for the first time, with a subsequent expansion to the United Kingdom and Europe scheduled for June.
The initiative also includes stronger anti-circumvention measures, designed to stop users Meta suspects are underage from creating new accounts.
The move follows significant legal pressure in the United States. On Monday, the state of New Mexico filed a motion asking a judge to declare Meta a "public nuisance" and requested a fine of $3.7 billion.
The lawsuit demands a total overhaul of Meta’s platforms to better protect young people. European countries are similarly pushing for stricter age-verification protocols.
Meta introduced "Teen Accounts" on Instagram last year, automatically placing younger users into private settings with restricted messaging.
By expanding these features to Facebook and broader territories, the company aims to demonstrate a commitment to safety amidst a global regulatory crackdown.