
In the wake of its first-ever Youth Safety and Well-Being Summit, which was held last month in Washington DC, Meta has called for global cooperation among governments to establish new, definitive requirements around key elements of child safety online, including provisions for access and detection, as well as rules around what is and is not acceptable content, particularly in relation to social apps.

Meta’s Youth Safety Summit brought together mental health experts, educators, researchers, policymakers and parents, who held a series of discussions around the key issues relating to child safety online, and how best to address the evolving requirements in this area.

Various reports have already indicated the depth of the problem – from the mental health impacts of negative self-comparison on Instagram, to children dying while undertaking dangerous stunts inspired by TikTok trends.

Social media apps already have age requirements, along with a variety of tools designed to detect and restrict youngsters from logging in and accessing inappropriate material. But most of these safeguards are easily circumvented, and with kids growing up online, they’re becoming far more savvy at evading such measures than their parents may suspect.

More advanced systems, however, are already in play, including facial recognition access gating (not ideal, given concerns around uploading kids’ images), as well as age-estimation software, which can determine the likely age of the account holder based on a range of factors.

Instagram is already working with third-party platforms on the latter, and Meta also notes that it’s implemented a range of additional measures to detect and stop kids from accessing its apps.

But it doesn’t want to go it alone, and it sees this, really, as a broader issue beyond its own remit.

As per Meta’s President of Global Affairs Nick Clegg:

“The European Union and the United States have tried to establish various fora by which key decision makers of the regulatory agencies in DC and the regulatory agencies in Brussels meet together (…)  the more they could do that with their counterparts, like India, it would be a very good thing for this agenda.”

Meta’s taken a similar approach with content regulation, implementing its own, external Oversight Board to scrutinize its internal decisions, while also calling on governments to take note of this approach, and establish more definitive rules that would apply to all online providers.

That would take some of these tough decisions out of Meta’s hands, reducing scrutiny on the company, while also establishing universal requirements for all platforms, which would improve safety overall.

There are questions here around potential restrictions on competition, in that start-ups may not have the resources to meet such requirements. That could solidify Meta’s dominance in the sector – yet, even with that consideration, the argument still makes sense.

And given the real-world impacts that we’ve seen as a result of social media-originated trends and shifts, it makes sense that governments should be looking to develop more definitive regulatory requirements, on a broad scale.

Specifically, Meta’s calling for regulation to address three key elements:

  1. How to verify age: so that young children can’t access apps not made for them and that teens can have consistent, age-appropriate experiences
  2. How to provide age-appropriate experiences: so that teens can expect similarly safe experiences across all apps that are tailored to their age and life-stage
  3. How to build parental controls: so that parents and guardians have the tools to navigate online experiences for their teens together

Meta notes that it will continue to develop its own approaches, but it would prefer to see more centralized, definitive regulation, by which all platforms would have to abide.

Given the potential for harm, the push makes sense, and it’ll be interesting to see if this becomes a bigger talking point among UN member states, to begin with, over the coming months.