
Meta, Google, TikTok and more have all signed on to the European Commission’s updated ‘Code of Practice on Disinformation’, which aims to increase enforcement action against concerted efforts to mislead users through various types of online manipulation.

As explained by the European Commission:

“Today, the Commission welcomes the publication of the strengthened Code of Practice on Disinformation. The 34 signatories, such as platforms, tech companies and civil society followed the 2021 Commission Guidance, and took into account the lessons learned from the COVID-19 crisis and Russia’s war of aggression in Ukraine.”

EU Code of Practice on Disinformation

The new reinforced agreement builds on the initial Code of Practice that was launched in 2018, which was the first official, cross-jurisdictional effort to combat the influence of online disinformation operations.

The definitions here are important – ‘misinformation’ is incorrect or misleading information, which is often shared unintentionally, as when a user shares a false article that they believe to be true. ‘Disinformation’ is a deliberate, coordinated effort to deceive – an important distinction from a legal enforcement standpoint, and a key pillar of this new Code.

The updated agreement aims to tackle disinformation programs by reducing financial incentives for such programs, empowering users with better tools to recognize, understand and flag disinformation, and expanding fact-checking operations to expedite detection and enforcement.

The Code will also now cover deepfakes and evolving forms of manipulation, with the platforms developing coordinated approaches to tackling such activity, which could be a big step towards improving detection and response.

The Code also includes measures to ensure transparency in political advertising, enabling users to easily recognize political ads through better labeling and information on sponsors, spend and display period.

“Signatories will have 6 months to implement the commitments and measures to which they have signed up. At the beginning of 2023, they will provide the Commission with their first implementation reports.”

It’s a big step, which could have a major positive impact in tackling such activity, with each of the platforms now being held accountable for enforcing these elements.

Meta has welcomed the new Code announcement.

Meta, of course, has long been pushing for broader industry regulation, in order to take the enforcement onus off the platforms acting in isolation.

Meta, as with all platforms, would prefer to be more hands-off, and let users communicate freely within legal bounds, but in recent times it’s been forced into making difficult decisions about what is and is not allowed within its apps, which has led, at times, to significant user backlash.

New regulations like this are a step towards broader oversight, which will level the playing field for all platforms, while also removing decisions on rule-breaking posts from each platform’s own moderation teams.

It’ll be interesting to see how the new regulations are enacted, and the impact that then has – and how the EU looks to respond to new issues and concerns in real-time.

There are always inherent risks in such approaches, as it comes down to who decides what is and isn’t correct. But the focus on ‘disinformation’ specifically limits the scope in this respect, homing in on clearly deliberate, concerted programs designed to deceive users for a defined objective.

It could be a major step, and one that may then see similar approaches adopted in other regions.

You can read the new EU Code of Practice on Disinformation here.