81m videos removed from TikTok between April and June for violating the platform's Community Guidelines or Terms of Service

TikTok has released its Q2 Community Guidelines Enforcement Report, which details the volume and nature of violative content and accounts removed to protect the safety of its community and the integrity of its platform.

The report reveals that 81,518,334 videos were removed globally between April and June for violating TikTok’s Community Guidelines or Terms of Service. While that is a huge number, it represents less than 1 per cent of all videos uploaded to the platform. Of those videos, TikTok identified and removed 93 per cent within 24 hours of being posted and 94.1 per cent before a user reported them. 87.5 per cent of removed content had zero views, an improvement on the 81.8 per cent reported last quarter. To save you doing the maths, that leaves 12.5 per cent, or just over 10m videos, that had been viewed before removal.

TikTok said it is also continuing to make steady progress in its proactive detection of hateful behaviour, bullying, and harassment. 73.3 per cent of harassment and bullying videos were removed before any reports, compared to 66.2 per cent in the first quarter of this year, while 72.9 per cent of hateful behaviour videos were removed before any reports, compared to 67.3 per cent from January to March. TikTok attributed the progress to ongoing improvements to the systems that proactively flag hate symbols, words, and other abuse signals for further review by its safety teams.

The platform has also added prompts that encourage people to consider the impact of their words before posting a potentially unkind or violative comment. The effect of these prompts has already been felt, it said, with nearly four in 10 people choosing to withdraw and edit their comment. Though not everyone chooses to change their comments, TikTok said it is encouraged by the impact of features like this, and that it continues to develop and trial new interventions to prevent potential abuse.

It is expanding on these features with improved mute settings for comments and questions during livestreams. As of yesterday, the host or their trusted helper can temporarily mute an unkind viewer for a few seconds or minutes, or for the duration of the livestream. If an account is muted for any length of time, that person's entire comment history will also be removed. Livestream hosts can already turn off comments or limit potentially harmful ones using a keyword filter.
