
Amid ongoing debate around the impact of misinformation shared online, and the role that social media, in particular, plays in the spread of false narratives, a new anti-disinformation push in Europe could significantly improve detection and response across the biggest digital platforms.

As reported by The Financial Times, Meta, Twitter, Google, Microsoft and TikTok are all planning to sign on to an updated version of the EU’s ‘anti-disinformation code’, which will introduce new requirements, and penalties, for dealing with misinformation.

As per FT:

“According to a confidential report seen by the Financial Times, an updated “code of practice on disinformation” will force tech platforms to disclose how they’re removing, blocking or curbing harmful content in advertising and in the promotion of content. Online platforms will have to counter “harmful disinformation” by developing tools and partnerships with fact-checkers that may include taking down propaganda, but also the inclusion of “indicators of trustworthiness” on independently verified information on issues like the war in Ukraine and the COVID-19 pandemic.”

The push would expand the tools that social platforms currently use to detect and remove misinformation, and may also see a new body formed to set rules around what qualifies as ‘misinformation’ in this context, which could take some of that onus off the platforms themselves.

Though that would also place more control in the hands of government-approved groups to determine what is and isn’t ‘fake news’, which, as we’ve seen in some regions, can also be used to quell public dissent.

Last year, Twitter was forced to block hundreds of accounts at the request of the Indian Government, due to users sharing ‘inflammatory’ remarks about Indian Prime Minister Narendra Modi. More recently, Russia has banned almost every non-local social media app over the distribution of news relating to the invasion of Ukraine, while the Chinese Government also has bans in place for most western social media platforms.

The implementation of laws to curb misinformation also, by default, puts the lawmakers themselves in charge of determining what falls under the ‘misinformation’ banner. On the surface, in most regions, that seems like a positive step, but the same power can be wielded in an authoritarian way.

In addition to this, the platforms would be required to provide a country-by-country breakdown of their efforts, as opposed to sharing only global or Europe-wide data.

The new regulations will eventually be incorporated into the EU’s Digital Services Act, which will force the platforms to take the required action, or risk fines of up to 6% of their global turnover.

And while this agreement relates specifically to European nations, similar proposals have already been put forward in other regions, with the Australian, Canadian and UK Governments all seeking to implement new laws that would force big tech platforms to limit the distribution of fake news.

As such, this latest push likely points to a broader, international approach to fake news and misinformation online, one that will hold digital platforms accountable for combating false reports in a timely, efficient manner.

Which is good; most would agree that misinformation has caused real harm in recent years, in various ways. But again, the complexities involved can make enforcement difficult, which points to the need for an overarching regulatory approach to define what, exactly, constitutes ‘fake news’, and who gets to decide that on a broad scale.

Referring cases to ‘fact checkers’ is one thing, but really, given the risks of misuse, there should be an official, objective body, detached from government, to provide oversight.

That, too, will be exceedingly difficult to implement. But again, the risk of enabling censorship through the selective targeting of ‘misinformation’ can pose just as significant a threat as false reports themselves.