With elections being held in several nations in 2024, Meta has reiterated its approach to tackling electoral misinformation, and outlined how it’s looking to combat newer threats, like generative AI, which can be used to mislead voters.

Meta’s President of Global Affairs Nick Clegg, a former UK Deputy Prime Minister, has provided an overview of three key elements of Meta’s updated civic protection approach, which he believes will be critical in the coming election cycles in various nations.

Those three focal elements are:

  • Political advertisers will have to disclose when they use AI or other digital techniques to create or alter a political or social issue ad. Meta unveiled this policy earlier in the month, and Clegg has reiterated that it will be a requirement, with penalties for political advertisers who fail to comply.
  • Meta will block new political, electoral and social issue ads during the final week of the U.S. election campaign. Meta implemented this rule in 2020, in order to stop campaigns from making last-minute claims that can’t be contested or fact-checked before voting begins. This is critical in relation to the first point, because while Meta does have penalties for deepfakes, a campaign may be willing to risk them if doing so could help to seed doubt about an opponent, particularly in the final days leading into a poll.
  • Meta will continue to combat hate speech and Coordinated Inauthentic Behavior, both of which have been key focuses for its moderation teams. Meta will continue to remove the worst examples, while also labeling updates from state-controlled media, to ensure more transparency in political messaging.

Clegg has also underlined Meta’s expanding moderation effort, which he describes as unmatched, and which has grown significantly over time, especially around political influence and interference:

“No tech company does more or invests more to protect elections online than Meta – not just during election periods but at all times. We have around 40,000 people working on safety and security, with more than $20 billion invested in teams and technology in this area since 2016. We’ve also built the largest independent fact-checking network of any platform, with nearly 100 partners around the world to review and rate viral misinformation in more than 60 languages.”

In some ways, this feels like a direct response to X, which, under owner Elon Musk, has eschewed modern approaches to content moderation in favor of leaning into the wisdom of the crowd. The idea, according to Musk at least, is to surface more universal, unfiltered truth, and to let the people, as opposed to social media executives, decide what is and is not correct.

That approach is likely to become more problematic during election cycles, with X already coming under fire for failing to address problematic posts that have contributed to civil unrest.

In this respect, Meta is taking on more direct accountability, which some will also view as corporate censorship. But after it was widely blamed for swaying voters in the 2016 U.S. election, Meta’s processes are now much more solidified, built around what it, and others, have assessed to be the best practice approach.

And Meta’s systems will be tested again in the new year, which will raise more questions around the influence of social platforms, and the capacity for anyone to amplify their messaging via social apps.

Meta’s hoping that its years of preparation will enable it to facilitate more relevant discussion, without manipulation of its tools.

You can read Nick Clegg’s full election safety overview here.