Facing persistent scrutiny over child safety, Meta is deploying a more assertive artificial intelligence system on Instagram, with tests beginning in the United States today, April 21. The enhanced AI aims to identify users suspected of being teenagers despite listing adult birthdays on their profiles. In a notable shift, when the system flags an account based on various signals, Instagram will automatically apply its suite of restrictive “Teen Account” settings, overriding the user-provided age in an intensifying effort to bolster protections for minors online.
This initiative represents an expansion of Instagram’s use of AI for age detection, a practice first detailed in 2024 that involved analyzing clues like birthday greetings in messages and how users engaged with content. The system launching today, however, moves beyond passive estimation to proactive intervention.
Meta stated in its announcement that the aim is to ensure more young users benefit from the platform’s protective defaults, highlighting that 97% of teens globally who were previously placed into Teen Accounts chose to keep those more private settings. Users whose accounts are automatically switched will be notified and offered a way to appeal if they believe the AI has made a mistake, an important recourse given the inherent limitations of age estimation technology.
Decoding the AI’s Signals
How does this AI make its determination without relying on facial analysis, which Meta denies using for this system? The company has indicated that the system analyzes various signals, including the type of content an account interacts with, information listed in the profile, and, significantly, the account’s creation date.
This builds on earlier techniques Meta discussed back in July 2021, which also involved looking for explicit birthday mentions (like “Happy 21st Bday!”) and sometimes comparing age information provided across Facebook and Instagram. These passive techniques stand in contrast to Instagram’s other age verification measures, such as the partnership with Yoti using video selfies introduced in June 2022, which require direct user action and primarily apply when users try to change their listed birthdate from under 18 to over 18.
What Teen Settings Entail
When the new AI system triggers the switch to a Teen Account, a specific set of restrictions is applied by default: the account is automatically set to private, direct messages are limited so they can only be received from followers or existing connections, and the visibility of content deemed sensitive is reduced, a measure Instagram began defaulting for teens in August 2022.
The teen settings also include usage notifications after 60 minutes and a default “sleep mode” that silences notifications between 10 PM and 7 AM. Alongside the technical rollout, Meta said it is also providing resources and tips to parents, developed with experts such as pediatric psychologist Dr. Ann-Louise Lockhart, on discussing age settings and online safety.
Regulatory Backdrop and Industry Tensions
Meta’s refinement of its internal age detection tools occurs under persistent regulatory examination. In May 2024, the European Commission launched formal proceedings under the Digital Services Act (DSA) against Meta, questioning if Facebook and Instagram adequately protected minors.
The Commission’s concerns included the potential for “behavioral addictions in children” stemming from interface designs and the effectiveness of Meta’s age verification. Former European Commission Executive Vice President Margrethe Vestager stated at the time, “With the Digital Services Act, we established rules that can protect minors when they interact online.”
That probe, which could result in substantial fines, followed an earlier DSA investigation announced in April 2024 concerning Meta’s handling of disinformation and deceptive ads.
This regulatory environment intersects with friction within the tech sector over child safety responsibilities. Just last month, after Utah passed the App Store Accountability Act, which requires app stores to manage age verification, Google criticized Meta’s advocacy for such laws.
As reported earlier, Google argued Meta was attempting to “offload their own responsibilities” onto app stores, a move Google claimed would introduce “new risks to the privacy of minors, without actually addressing the harms that are inspiring lawmakers to act.” Meta’s current enhancement of its own platform’s AI presents a different tactic for navigating these responsibility questions, working alongside, rather than relying solely on, efforts to mandate checks by third parties such as app stores.