X’s new, more “free speech”-aligned approach to content moderation is being put to the test, as various groups seemingly exploit its more lax enforcement processes to spread misinformation about the war in Israel, which was sparked by militant group Hamas launching a large-scale assault on Israeli citizens over the weekend.

Amid rising tensions, many people in the region have turned to X for real-time updates, which has once again made it a valuable breeding ground for partisan propaganda.

Indeed, according to analysis conducted by disinformation protection group Alethea, various coordinated groups are now posting false and inflammatory X updates related to the Israel-Hamas war.

As reported by NBC News:

[Various] accounts — many of which previously focused on more innocuous topics like professional basketball or life in Japan — previously showed no outward association, but suddenly began posting similar content over the weekend as news of the attacks broke. In many cases, the accounts would post the exact same phrases. It’s not clear if the accounts were created for the express purpose of posting the misinformation, or if they were hacked or sold.

In response, X says that it’s treating the conflict “as a crisis requiring the highest level of response”, which has resulted in updates to Community Notes to get them onto posts faster, in order to maximize crowd-sourced fact-checking. X has also announced a change to its Public Interest Policy, which will see more posts related to the conflict kept active, in the interests of ensuring users are better informed.

But the reports of widespread disinformation have also sparked deeper scrutiny, with EU Commissioner for Internal Market Thierry Breton issuing a public call to X owner Elon Musk directly, asking him to “urgently ensure that your systems are effective” in dealing with misinformation and hate speech in the app.

In response, Musk called on Breton to provide examples of these alleged infringements, to which Breton replied that Musk could act on reports like the one from Alethea, along with similar findings from other independent analysis groups.

Of course, Musk’s various supporters, who now get priority exposure in the app through X Premium, have taken Breton’s response as a signal that there’s actually no such evidence, which is largely Musk’s aim in taking such public stances. Musk has now deployed this tactic several times, effectively dismissing claims by calling for specific data to be shared in public, though he could just as easily provide the same level of specificity himself, by publishing data on exactly what X has and has not actioned.

In fact, Musk and his team are logically best placed to provide such disclosure, as outside research groups, many of which are now hampered by reduced insight due to X raising the price of its API access, are only able to assess a fraction of the overall posts in the app.

X has all of the data, and Elon’s keen to tout the open, transparent nature of his operation. Why not share all the info that it has to counter such claims, letting analysts dig into what’s actually happening from X’s perspective?

Then again, much of the concern stems from the fact that Musk has cut various elements of X’s moderation systems, reducing regional staff and replacing some functions with software, in addition to setting new rules around what is and is not acceptable in the app. Musk himself has also been sharing his own views on the conflict, and as the most-followed user in the app, he’s helped to fuel the broader debate around the facts of the conflict, drawing more attention to certain elements.

Which, in turn, is also spooking regulators and officials, though the actual evidence, from either side, is fairly thin right now, at least in terms of what’s been shared in public.

But effectively, no one believes that X will be able to maintain adequate enforcement of misinformation around major conflicts like this, given the noted changes at the app. That’s putting X under more scrutiny, and while other platforms are also dealing with misinformation in their own apps, it’s X, in particular, that’s under the microscope, a focus amplified even more by Elon’s own involvement in the discussion and discourse.

The situation is still evolving, so we don’t have the evidence as yet. But the signs, based on various reports, are that more mis- and disinformation is proliferating on X, and that the company’s heavier reliance on crowd-sourced fact-checking, via Community Notes, is likely not enough to address all incidents as effectively as past Twitter management would have.

But that system wasn’t perfect either, so it could be that X is being unfairly targeted, due again to its cost-cutting measures. X remains hugely influential in such situations, and officials are keeping a close eye on how it manages them, in line with Elon’s stated “free speech” approach. That could mean X comes under more pressure than ever, simply because it’s put the spotlight on itself in this respect.

It’s also somewhat interesting to note how X, and Elon himself, is addressing this latest incident, compared with its approach to other conflicts.

Elon, thus far, seems a lot less interested in commenting on government requests in India, or tensions in China, which could be because his other company, Tesla, is looking to expand its business interests in both regions. That’s less of a concern in Israel, which is another wrinkle to monitor within Elon’s various stances and statements.

In any event, we don’t have all the evidence right now, but independent groups are criticizing X’s reduced enforcement capacity, while X claims to be doing all that it can to address the problem, as quickly as possible.

It’s a crucial test, which could end up leading to a bigger review of the future of the app.