Twitter has made several changes to how it handles online abuse and harassment over the last few weeks, after admitting last month that it had not been doing enough to combat online hate. In its latest effort to fight abuse, it has introduced the ability to filter the types of words and accounts a user sees, as well as being more proactive in identifying abuse and continuing to improve transparency.
Earlier this month, it began preventing permanently suspended people from creating new accounts, introduced a ‘safe search’, and made ‘low-quality’ replies less visible. It followed this by introducing time outs for people who use politically incorrect language on the platform.
“We’re continuing our work to make Twitter safer, moving faster than ever to do so,” said Ed Ho, Twitter VP of engineering, in a blog post. “During the past few weeks alone, we’ve made a number of changes on this front including updating how you can report abusive Tweets, stopping the creation of new abusive accounts, implementing safer search results, collapsing abusive or low-quality Tweets, and reducing notifications from conversations started by people you’ve blocked or muted.”
The latest raft of changes means that users will be able to mute ‘eggs’ – accounts without profile pictures – as well as accounts with unverified email addresses or phone numbers. In addition, users can mute words, phrases, or conversations from their home timeline for a day, week, month, or indefinitely.
Furthermore, Twitter says it will now work to identify abusive accounts while they are engaging in such behaviour – even when they have not been reported – and impose sanctions on these accounts, such as the time outs mentioned above.
Moreover, Twitter has promised continued ‘transparency and openness’ in its reporting process – including notifying users when it has received their report and whether further action has been taken.