
The 2022 news cycle has not been kind to Twitter.

On the back of the Elon Musk takeover saga, and more recent revelations that the company has been deliberately working to mislead investors and the market on various fronts, another story has now raised even more questions about Twitter's management, and what the heck is going on at Twitter HQ.

As reported by The Verge:

“In the spring of 2022, Twitter considered making a radical change to the platform. After years of quietly allowing adult content on the service, the company would monetize it. The proposal: give adult content creators the ability to begin selling OnlyFans-style paid subscriptions, with Twitter keeping a share of the revenue.”

Porn Twitter would certainly be one heck of a pivot, and the risks of not only directly acknowledging the presence of such content, but actively encouraging it, would be far-reaching: it could alienate advertisers who fear being associated with more controversial material, and invite more scrutiny from US regulators.

But neither of these is the reason that Twitter decided to abandon the project:

Before the final go-ahead to launch, though, Twitter convened 84 employees to form what it called a “Red Team.” The goal was “to pressure-test the decision to allow adult creators to monetize on the platform, by specifically focusing on what it would look like for Twitter to do this safely and responsibly”[…] What the Red Team discovered derailed the project: Twitter could not safely allow adult creators to sell subscriptions because the company was not – and still is not – effectively policing harmful sexual content on the platform.

Specifically, the Red Team found that Twitter ‘cannot accurately detect child sexual exploitation and non-consensual nudity at scale’, a problem that exists right now, with Twitter repeatedly falling short of agreed standards and processes to detect and remove such material.

The investigation found that as Twitter has grown, its investment in detecting harmful sexual content has not increased in step, with the company instead prioritizing growth over all else, leaving major gaps in its processes.

The revelations are another startling insight into the state of Twitter, which may or may not be riddled with bots, and already hosts so much porn content that a search for virtually any term in the app will eventually unearth some shocking video clip in-stream.

That, in itself, should see the app come under increasing regulatory scrutiny – while The Verge also notes that Twitter has actually become more of a focus for adult performers in recent years, due to Tumblr’s decision to ban adult content in 2018. That means Twitter is now one of the only mainstream platforms that allows users to upload sexually explicit photos and videos, which has prompted more people in the adult industry to use it as a promotional tool for their content and services.

And amid this, Twitter’s capacity to detect and remove harmful sexual content has been in steady decline. That seems like a disaster waiting to happen, with Twitter potentially one court case away from major penalties on this front.

Wonder how Elon feels about that?

Musk, of course, has been seeking to exit his $44 billion Twitter takeover bid, ostensibly because Twitter, in Musk’s view, has lied about the prevalence of bots and spam on its platform.

Twitter has repeatedly stated that bots and spam make up less than 5% of its active user count, but the Musk case has forced Twitter to reveal that it bases this assessment on very limited testing.

“Twitter’s quarterly estimates are based on daily samples of 100 mDAU, combined for a total sample of approximately 9,000 mDAU per quarter.”

That’s a total sample size of 9,000 accounts – or 0.0038% of Twitter’s audience. In this respect, Musk may well be right to question Twitter’s metrics. Meanwhile, further revelations from former Twitter security chief Peiter Zatko about the platform’s significant security vulnerabilities and flaws could also lead to closer examination of the company’s processes, and even fines for failures in this respect.
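As a quick sanity check on those figures, the stated 0.0038% is consistent with Twitter’s reported monetizable daily active user base at the time (the ~237.8 million mDAU figure from Twitter’s Q2 2022 earnings is an assumption here, not something stated in the filing quoted above):

```python
# Rough sanity check: what share of Twitter's audience does the
# quarterly bot-detection sample actually cover?
# Assumption: ~237.8 million mDAU, per Twitter's Q2 2022 earnings report.
total_mdau = 237_800_000
sample = 9_000  # ~100 accounts per day, aggregated over a quarter

fraction = sample / total_mdau
print(f"Sample covers {fraction:.4%} of mDAU")
```

Running this yields a coverage figure of roughly 0.0038%, matching the percentage quoted above.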

Add in these new claims about the company’s failure to detect and remove harmful sexual content, and Elon, if he does eventually become Tweeter in Chief, could be forced to pay out a raft of penalties among his first actions at the app. That could significantly impact the platform’s capacity to align with his grand vision of a future where tweets contribute to ‘preserving the light of consciousness’.

Based on the wording of the takeover agreement, I’m not sure that any of these new revelations can actually be factored into the Musk takeover either way. But it makes a lot more sense now why Twitter was willing to accept Musk’s buy-out bid, and why it worked to establish a contract with few exit clauses to lock him into the deal.

But this, of course, is an aside from the main concern – that Twitter is failing to protect vulnerable people through its inability to police harmful adult content, a failure that an internal review has acknowledged, and one that the company itself could see no way to fix.

That’s a major concern, and one that regulators should be pushing hard on – they will now likely seek to grill Twitter’s execs about these latest revelations.

What will that mean for the future of the platform? Nothing good – but if the trade-off is a better, safer online ecosystem that better protects users, then Twitter should be held to account in any capacity possible.