Amid various investigations into how it protects (or doesn’t) younger users, TikTok has announced a new set of filters and options to provide more ways to limit unwanted exposure in the app.

First off, TikTok has launched a new way for users to automatically filter out videos that include words or hashtags that they don’t want to see in their feed.

TikTok keyword filters

As you can see in this example, you can now block specific hashtags via the ‘Details’ tab when you action a clip. So if you don’t want to see any more videos tagged #icecream, for whatever reason (weird example, TikTok folks), you can now indicate that in your settings, and you can also block content containing chosen key terms within the description.

It’s not perfect, as the system doesn’t detect the actual video content, only what people have manually entered in their descriptions. So if you had a phobia of ice cream, there’s still a chance that you could be exposed to disturbing footage in the app, but it does provide another means of managing your experience.
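To illustrate that limitation, here’s a minimal sketch of description-based filtering in Python. The field names, normalization rules, and matching logic are all assumptions for illustration; TikTok hasn’t published how its filter actually matches terms.

# A minimal sketch of description-based keyword filtering.
# Field names ("description", "hashtags") and the matching
# rules are assumptions; TikTok's real logic is unpublished.

def is_filtered(video: dict, blocked_terms: set) -> bool:
    """Return True if the video's text metadata mentions a blocked term."""
    text = video.get("description", "").lower()
    tags = {t.lower().lstrip("#") for t in video.get("hashtags", [])}
    for term in blocked_terms:
        term = term.lower().lstrip("#")
        if term in tags or term in text:
            return True
    return False

# The filter only sees creator-entered text, so an untagged
# ice cream clip slips straight through.
blocked = {"#icecream"}
tagged = {"description": "so tasty", "hashtags": ["#icecream", "#dessert"]}
untagged = {"description": "look at this!", "hashtags": []}
print(is_filtered(tagged, blocked))    # True  - caught via hashtag
print(is_filtered(untagged, blocked))  # False - video itself is never inspected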

TikTok says that the option will be available to all users ‘within the coming weeks’.

TikTok’s also expanding its limits on content exposure relating to potentially harmful topics, like dieting, extreme fitness, and sadness, among others.

Last December, TikTok launched a series of tests to investigate how it might reduce the potentially harmful impacts of algorithmic amplification by limiting the number of videos in certain sensitive categories that are highlighted in users’ ‘For You’ feeds.

It’s now moving to the next stage of this project.

As explained by TikTok:

“As a result of our tests, we’ve improved the viewing experience so that viewers now see fewer videos about these topics at a time. We’re still iterating on this work given the nuances involved. For example, some types of content may have both encouraging and sad themes, such as disordered eating recovery content.”

This is an interesting area of research, which essentially seeks to stop people from stumbling down rabbit holes of internet information and becoming fixated on potentially harmful material. Restricting how much content on a given topic people can view at a time could have a positive impact on user behavior.
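As a rough illustration of the dispersal idea (not TikTok’s actual ranking code), a feed assembler might cap how many clips from any sensitive category can appear in a single batch of recommendations. The category labels and the cap value here are hypothetical:

# A rough sketch of topic dispersal: cap the number of videos
# from any sensitive category per batch of recommendations.
# Category labels and the cap are invented for illustration.

SENSITIVE = {"dieting", "extreme_fitness", "sadness"}
MAX_PER_BATCH = 2  # assumed cap, for illustration only

def disperse(ranked_videos: list, batch_size: int = 10) -> list:
    """Build a feed batch, deferring sensitive clips past the cap."""
    counts = {}
    batch = []
    for video in ranked_videos:
        topic = video.get("topic")
        if topic in SENSITIVE:
            if counts.get(topic, 0) >= MAX_PER_BATCH:
                continue  # skip rather than amplify the topic further
            counts[topic] = counts.get(topic, 0) + 1
        batch.append(video)
        if len(batch) == batch_size:
            break
    return batch

feed = [{"id": i, "topic": "dieting"} for i in range(5)] + [{"id": 9, "topic": "cooking"}]
print([v["id"] for v in disperse(feed)])  # [0, 1, 9] - only two dieting clips surface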

Finally, TikTok’s also working on a new ratings system for content, like movie classifications for TikTok clips.

“In the coming weeks, we’ll begin to introduce an early version to help prevent content with overtly mature themes from reaching audiences between ages 13-17. When we detect that a video contains mature or complex themes – for example, fictional scenes that may be too frightening or intense for younger audiences – a maturity score will be allocated to the video to help prevent those under 18 from viewing it across the TikTok experience.”

TikTok censored content
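TikTok hasn’t detailed how that score is calculated, but the gate itself is conceptually simple. Here’s a sketch using an invented 0–100 score scale and threshold, purely for illustration:

# A conceptual sketch of a maturity gate. The 0-100 scale and
# the threshold are invented; TikTok has only said a "maturity
# score" is allocated to videos with mature or complex themes.

MATURITY_THRESHOLD = 70  # hypothetical cutoff for mature themes

def can_view(viewer_age: int, video_maturity_score: int) -> bool:
    """Block under-18 viewers from videos scoring above the threshold."""
    if video_maturity_score >= MATURITY_THRESHOLD:
        return viewer_age >= 18
    return True

print(can_view(15, 85))  # False - mature clip hidden from a teen account
print(can_view(22, 85))  # True  - adult accounts are unaffected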

TikTok has also introduced new brand safety ratings to help advertisers avoid placing their promotions alongside potentially controversial content, and that same detection process could be applied here to better safeguard against mature themes and material.

Though it would be interesting to see how, exactly, TikTok’s system detects such content.

What kind of entity identification does TikTok have in place, what can its AI systems actually flag in videos, and based on what parameters?

I suspect that TikTok’s system is well advanced in this respect, which is why its algorithm is so effective at keeping users scrolling: it’s able to pick out the key elements of content that you’re more likely to engage with, based on your past behavior.

The more entities that TikTok can register, the more signals it has to match you with clips, and its system does seem to be getting very good at identifying more elements in uploaded videos.

As noted, the updates come as TikTok faces ongoing scrutiny in Europe over its failure to limit content exposure among young users. Last month, TikTok pledged to update its policies around branded content after an EU investigation found it to be ‘failing in its duty’ to protect children from hidden advertising and inappropriate content. On another front, reports have also suggested that many kids have severely injured themselves, some fatally, while taking part in dangerous challenges sparked by the app.

TikTok has introduced measures to combat this too, and it’ll be interesting to see whether these new tools help to reassure regulators that it’s doing all it can to keep its young audience safe.

Though I suspect it won’t. Short-form video relies on attention-grabbing gimmicks and stunts, which means that shocking, surprising and controversial material generally performs better in that environment.

As such, TikTok’s very model, at least in part, incentivizes this kind of material, which means that creators will keep posting potentially risky content in the hopes of going viral in the app.