Meta Backs EU-Wide ‘Digital Majority Age’ to Bolster Child Safety

Meta supports a pan-European “digital majority age” requiring parental consent, sparking a debate with Google over who is responsible for online child safety.

Meta, the parent company of Facebook and Instagram, has announced its support for a common “digital majority age” across the European Union. The proposal would require parents to approve app downloads for their younger teens, creating a consistent, industry-wide standard for online safety.

This move follows months of intense pressure, and not only from inside the EU. Parents have protested in New York City over child safety failures, and regulators are increasing their scrutiny. While Meta presents the plan as a solution, rivals like Google argue it unfairly shifts the responsibility for age verification onto app stores.

The announcement places Meta at the center of a global debate over how to best protect children online.

Meta Backs Pan-European Digital Age Rules

The proposal, which has backing from France, Spain, and Greece, would establish a unified age of digital adulthood. Below this age, minors would need explicit parental consent to access social media and other online services, a measure supported by 75% of EU parents in a recent poll.

Meta’s framework rests on three key pillars. First, it champions parental approval for app downloads. Second, it insists that these rules apply broadly across all digital services teens use, including gaming and streaming, not just social media platforms.

The company argues that focusing only on social media would miss the full picture of a teen’s digital life, which involves an average of 40 apps per week. It warns this narrow focus could inadvertently push young users toward unregulated and potentially less safe online spaces.

The third pillar is a call for robust, privacy-preserving age verification. Meta argues this is critical for any “digital majority” system to function effectively and has consistently advocated for this to be handled at a higher, more centralized level, a position detailed in previous policy papers.

The Responsibility Debate: Platforms vs. App Stores

This push for app-store-level verification places Meta in direct conflict with rivals like Google. The debate intensified after Utah passed its App Store Accountability Act in March, requiring app stores to manage age verification, a move Meta vocally supported.

Google publicly criticized the approach. “There are a variety of fast-moving legislative proposals being pushed by Meta and other companies in an effort to offload their own responsibilities to keep kids safe to app stores,” the company stated, warning that centralizing age data could create new privacy risks for minors without addressing the core harms inspiring lawmakers to act.

Meta counters that its approach empowers parents, in contrast to outright social media bans. “Bans take away parental authority, focus narrowly on one type of online service among the nearly two million apps available to teens, and overlook how teens use social media to connect with the world around them, grow and learn,” the company argues.

The company believes a centralized system is more efficient. “I think it makes much more sense that this is done at the ecosystem, app store, operating system level,” said Instagram’s policy chief, Tara Hopkins.

A Backdrop of Regulatory Pressure and New Tech Threats

Meta’s policy shift did not occur in a vacuum. It follows intense public and regulatory scrutiny. In April, grieving families, joined by Prince Harry and Meghan, protested outside the company’s New York City offices, demanding action on online harms.

The protest, organized by groups like the Heat Initiative, accused Meta of prioritizing its $164 billion in annual revenue over user safety. Parents shared stories of lethal fentanyl sales and sextortion facilitated through social media platforms.

This pressure builds on years of warnings, including the 2021 “Facebook Files,” which showed the company knew internally about Instagram’s negative mental health impacts on teen girls. The US Surgeon General has also called for warning labels on social media.

The European Commission has also launched formal proceedings against Meta under the powerful Digital Services Act (DSA). The investigation is examining potentially addictive design features and the effectiveness of Meta’s age verification tools.

When the proceedings were opened, European Commission Executive Vice President Margrethe Vestager said, “With the Digital Services Act, we established rules that can protect minors when they interact online,” signaling a new era of enforcement that could result in substantial fines for non-compliance.

The landscape of online threats is also rapidly evolving with AI. A recent investigation uncovered a disturbing trend of AI-generated gore and fetish content featuring cartoon characters on YouTube, reminiscent of the “Elsagate” scandal.

Meanwhile, privacy experts are raising alarms over data collection by new AI-powered toys, such as the upcoming “AI Barbie” from Mattel and OpenAI, which could monitor children’s expressions and interactions.

Proactive Measures and Lingering Concerns

In response, Meta has deployed new internal safety features. It now requires parental approval via its Family Center before teens under 16 can use Instagram Live. This builds on its “Teen Account” framework, which applies restrictive defaults that Meta claims 94% of parents find helpful.

More assertively, Meta is testing a proactive AI system in the US that identifies suspected underage users based on activity signals, not facial analysis. It then automatically applies the restrictive teen settings, including private accounts and DM limits, though an appeal process is available.

These actions follow the stalled progress of US federal legislation such as the Kids Online Safety Act (KOSA), which Meta lobbied against before the bill failed in late 2024. The impasse has pushed action to the state level, such as California’s ban on algorithmic feeds for minors.

Despite these steps, child safety advocates argue they are insufficient. Matthew Sowemimo of the UK’s NSPCC stated, “For these changes to be truly effective, they must be combined with proactive measures so dangerous content doesn’t proliferate on Instagram, Facebook and Messenger in the first place,” emphasizing the need to stop harmful content at its source rather than just managing its impact.

This call for proactive regulation is gaining traction. In Ireland, new rules from the media commission, Coimisiún na Meán, will take effect on July 21, mandating robust age verification for platforms showing adult content.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
