Conceptually, this should be a positive step, but in practice, the outcome could be much different.

Today, X owner Elon Musk announced a change to X’s creator ad revenue share program, aimed at stopping participants from sharing sensationalized, divisive, and sometimes fake reports in order to spark more replies and, thus, increase their monetization potential.

X’s creator ad revenue share scheme enables X Premium subscribers to make money from their posts, by giving them a share of the revenue from ads shown in their post reply streams. However, only ads shown to other X Premium subscribers count toward that payout.
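To make that rule concrete, here’s a purely hypothetical sketch of the eligibility logic. X hasn’t published its payout formula, so the field names and structure below are illustrative only, not how the platform actually calculates payments:

```python
# Hypothetical illustration of X's reply-stream ad eligibility rule.
# The actual payout formula isn't public; the field names here are made up.

def eligible_impressions(reply_ad_views):
    """reply_ad_views: list of dicts like {"viewer_is_premium": bool}."""
    # Only ads served to other X Premium subscribers count toward revenue share.
    return sum(1 for view in reply_ad_views if view["viewer_is_premium"])

views = [
    {"viewer_is_premium": True},
    {"viewer_is_premium": False},
    {"viewer_is_premium": True},
]
print(eligible_impressions(views))  # 2 of 3 impressions would count
```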

And given that relatively few X users are signed up to X Premium, and that the majority of those who are tend to be ideologically aligned with Elon Musk, the best way to maximize your income from this program is to build your posts around the issues that are most relevant to that audience, which, in large part, are defined by Musk’s own posts.

If Elon says something’s important, his many supporters will pay attention, which means that by posting about, say, the war in Iran, you’ll increase your chances of sparking more replies, and thus boost your monetization potential.

Ironically, Elon himself has provided an example of the type of post that would now be ineligible for monetization, as this post was tagged as misinformation via a Community Note. Or it had been, at least, but that Note has since been removed, likely because Elon’s supporters voted it down within the Notes system.

Which points to a fuzzy element of this new amendment: does it apply to all posts that receive a proposed Community Note, or only to posts where the Note actually reaches public view, based on consensus among the Notes community?

And what if a Note does get approved, and is displayed in the app, but then gets voted down again, as per the above example?

It’s pretty unclear, though Elon did add that “any attempts to weaponize @CommunityNotes to demonetize people will be immediately obvious, because all code and data is open source.”

Which is a fairly common refrain for Musk: that the system will simply correct itself through transparency.

Which probably won’t work in practice, because the Community Notes system itself has already been corrupted by various groups who work together to approve and reject notes based on their own interests.

Community Notes are approved or rejected based on consensus among Notes contributors, with an emphasis on Notes gaining approval from raters with opposing political perspectives, as inferred from a somewhat opaque formula tied to their in-app activity.
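The ranking code is indeed open source, and the real system relies on a matrix factorization model with many more signals, but a rough, simplified sketch of the cross-perspective agreement idea, with hypothetical names and thresholds, looks something like this:

```python
# Simplified, illustrative sketch of the "bridging" consensus idea behind
# Community Notes ranking. The real open source system uses a matrix
# factorization model, not raw vote counts; names and thresholds here are hypothetical.

def note_reaches_consensus(ratings, min_ratings=5, threshold=0.7):
    """ratings: list of (rater_leaning, rated_helpful) tuples, where
    rater_leaning ('left' or 'right') is inferred from rating history."""
    if len(ratings) < min_ratings:
        return False  # not enough raters to judge

    # A Note is only shown if raters on BOTH sides mostly agree it's helpful,
    # so a one-sided pile-on (up or down) shouldn't decide the outcome.
    for side in ("left", "right"):
        side_votes = [helpful for leaning, helpful in ratings if leaning == side]
        if not side_votes or sum(side_votes) / len(side_votes) < threshold:
            return False
    return True


# A Note rated helpful by only one side never reaches public view:
ratings = [("left", True)] * 6 + [("right", False)] * 6
print(note_reaches_consensus(ratings))  # False
```

Which is also why the multi-account and coordination tactics described below matter: a group working together can simulate exactly the kind of cross-perspective agreement the system is looking for.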

Though, as recently reported by Wired, some Community Notes contributors have been approved across multiple accounts, which effectively enables them to double- or triple-vote in support of their own notes, while groups of contributors have also banded together to “actively coordinate on a daily basis to upvote or downvote particular notes.”

The scale of that type of manipulation is impossible to know at this stage, but given that the entry requirements for Community Notes contributors are so low, and that Notes can and do have an impact on the perception of information in the app, you can bet that various groups are indeed targeting the tool within their broader communication and propaganda efforts.

So while this new amendment may have some impact in reducing the sharing of misinformation as a means of maximizing creator ad revenue, it’s unlikely to stop manipulation of the Community Notes system itself. And it could, conversely, encourage the kind of coordinated demonetization attacks that Musk alludes to, aimed at hurting accounts with opposing perspectives.

X has put a lot of faith in the Community Notes system as its method for combating misinformation in the app, though various reports have suggested that it’s not capable of meeting that challenge, and won’t stamp out lies and abuse in X posts, no matter how much Musk and Co. want that to be the case.

But now, it basically has to, because as part of its cost-cutting drive, X has removed, or significantly scaled back, the internal teams responsible for content moderation, which means the platform is now largely reliant on Community Notes as the primary filter for the content that users see in the app.

And with the vast majority of proposed Community Notes never shown in the app, due to disagreement among contributors over what’s true and what’s not, it doesn’t seem like a system that can effectively slow the spread of misinformation.

But maybe this new proviso will disincentivize at least some X users from posting sensationalized content for engagement.

I mean, it hasn’t stopped the app’s most prominent user, but sure, maybe it’ll work for everybody else.