Meta’s once again looking to dispel the notion that its platforms have contributed to political division and angst, this time via a new series of scientific studies. The studies incorporate Meta data and user experiments, and underline, Meta says, that there’s no definitive link between algorithmic amplification and political polarization.
Though they don’t capture the full extent of the concern.
The studies, published in the academic journals Science and Nature, are based on analysis of Facebook and Instagram activity in the lead-up to the 2020 US Presidential Election, with Meta’s teams partnering with selected academic groups to facilitate the research.
For each paper, the researchers conducted a range of tests with participating users (who explicitly agreed to take part in the experiments), including the following (see the code sketch after this list):
- Preventing Facebook users from seeing any ‘reshared’ posts
- Displaying Instagram and Facebook feeds in reverse chronological order, instead of in an order curated by Meta’s algorithm
- Significantly reducing the number of posts Facebook users saw from ‘like-minded’ sources
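To make those treatments concrete, here’s a minimal sketch of each one as a feed transformation. This is purely illustrative: the `Post` fields, function names, and ranking signal are assumptions made for the example, not the researchers’ actual implementation or Meta’s ranking system.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical post model for illustration only; these field names are
# assumptions, not Meta's actual data schema.
@dataclass
class Post:
    author_id: str
    created_at: datetime
    is_reshare: bool
    engagement_score: float  # stand-in for whatever ranking signal is used
    like_minded: bool        # is the source politically aligned with the viewer?

def default_feed(posts: list[Post]) -> list[Post]:
    """Baseline: algorithmically curated, ranked by engagement signal."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def no_reshares_feed(posts: list[Post]) -> list[Post]:
    """Treatment 1: drop all reshared posts before ranking."""
    return default_feed([p for p in posts if not p.is_reshare])

def chronological_feed(posts: list[Post]) -> list[Post]:
    """Treatment 2: reverse-chronological order, no algorithmic curation."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def fewer_like_minded_feed(posts: list[Post], keep_ratio: float = 0.3) -> list[Post]:
    """Treatment 3: sharply reduce posts from politically aligned sources."""
    like_minded = [p for p in posts if p.like_minded]
    other = [p for p in posts if not p.like_minded]
    kept = like_minded[: int(len(like_minded) * keep_ratio)]
    return default_feed(other + kept)
```

Roughly speaking, the experiments swapped the default feed for one of these treatment variants for consenting participants, then compared downstream political attitudes between groups.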
The experiments were primarily designed to test the echo chamber hypothesis, which posits that social media algorithms entrench people’s views by showing them more of the content they agree with, and less of what they don’t. By manipulating these elements, the researchers examined the impact that each change had on political opinions and voting behavior, and found no clear link between social media algorithms and user leanings.
As per Meta:
“Although questions about social media’s impact on key political attitudes, beliefs, and behaviors are not fully settled, the experimental findings add to a growing body of research showing there is little evidence that key features of Meta’s platforms alone cause harmful ‘affective’ polarization or have meaningful effects on these outcomes.”
Which may be true. But it’s hard to measure the full extent of social media’s political impact in such an isolated way, by shifting individual variables and seeing what comes out.
Because the impacts are actually far broader than that. It’s not just direct social media engagement that’s shifted opinion, but the impact that algorithmic incentives have had on the media sector in general. For example, Facebook’s algorithm amplifies content that sparks more discussion, as that helps to fuel more engagement, and keep users interested. That, in turn, incentivizes media organizations to publish content that will spark more comments, and research has shown that high-arousal emotions, like anger and happiness, are the key drivers of comments on web posts.
On top of that, negative emotions drive more virality, which means the most reliable way to maximize the number of comments and replies is to post things that make people angry enough to respond.
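To make that incentive concrete, here’s a minimal sketch of an engagement-weighted ranking score, assuming (hypothetically) that comments, shares, and high-arousal reactions count for more than passive likes. The weights and field names are illustrative assumptions, not Meta’s actual ranking formula.

```python
# Illustrative only: a toy engagement score in which comments, shares, and
# high-arousal reactions outweigh passive likes. The weights below are
# assumptions for the example, not Meta's actual ranking formula.
REACTION_WEIGHTS = {
    "like": 1.0,
    "love": 2.0,
    "haha": 3.0,
    "angry": 3.0,  # high-arousal reactions weighted up
}

def engagement_score(reactions: dict[str, int], comments: int, shares: int) -> float:
    reaction_total = sum(REACTION_WEIGHTS.get(r, 1.0) * n for r, n in reactions.items())
    # Comments and shares signal active discussion, so they dominate the score.
    return reaction_total + 5.0 * comments + 4.0 * shares

# Under a score like this, a post that provokes 200 angry comments outranks
# one that quietly earns 500 likes, which is the publisher incentive
# described above.
print(engagement_score({"angry": 50}, comments=200, shares=20))  # 1230.0
print(engagement_score({"like": 500}, comments=5, shares=10))    # 565.0
```

If publishers are rewarded by a score shaped anything like this, the rational move is to optimize for provocation, not accuracy.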
Years of chasing digital engagement have pushed media organizations in this direction, not just on Facebook but across other digital platforms too, as algorithm-defined systems highlight the posts that generate the most shares and the most discussion.
So it’s not just direct platform engagement that influences such behavior, but how these systems have changed the incentive structure for publications. That’s why we’re seeing many more divisive takes and perspectives: the structures of the internet are built around this, and that effect won’t be uncovered by manipulating individual users’ feeds.
As such, you can’t absolve Meta’s systems of blame for political polarization, though they’re not the only culprit. But Meta does have the most reach, and thus the most impact. Indeed, according to the latest ‘Social Media and News’ study from Pew Research, Facebook is the biggest news source among social media platforms for U.S. adults, so it arguably has the most influence in this respect.
So while these studies do show that certain elements of social media usage don’t have as big an influence on political opinions as some suggest, they don’t account for this broader scope of influence, which would likely point to increased political division as a result of the shifting news landscape.
In fairness, the researchers themselves note this limitation; in their view, they can only do so much, and these experiments do focus on several key elements that some believe drive political polarization. The findings show that these elements, on their own, have little effect, which Meta is trumpeting as a vindication of its systems, even though the experiments are limited in scope.
What the studies do show, however, is that some theories about political polarization as a result of social media usage are flawed, and that changing specific algorithmic drivers may not have the transformative effect that many think.
In other words, it’s complex, there are no easy solutions, and pointing the finger at Meta alone may not be fair.
But again, Meta is now a key source of news for U.S. adults, and what it shows you does have an effect. Meta’s been shifting away from news content for some time anyway, as it’s now deriving far more engagement from AI-recommended Reels, which are boosting time spent while steering users away from news debate.
Maybe that’s a better pathway to reducing political angst, though it’s worth noting that the key driver here remains Meta’s own business needs, not the good of society.
Essentially, I wouldn’t be putting too much faith in Meta looking to ‘do the right thing’ in this context, nor in Meta’s own statements that clear it of any ills.
You can read Meta’s full summary of the findings here, and read the summary reports in Science and Nature.