Twitter is set to expand its Safety Mode feature to allow users to temporarily block accounts that send harmful and abusive tweets.
What’s new on Twitter?
The update will see accounts flagged and blocked for seven days for making hateful remarks or repeatedly sending uninvited comments.
Twitter said half of its users in North America, Australia, New Zealand and Ireland will be the first to have access to the improved Safety Mode feature.
When the company first introduced the feature in September 2021, it explained how it would work: "When the feature is turned on in your Settings, our systems will assess the likelihood of a negative engagement by considering both the Tweet's content and the relationship between the Tweet author and replier."
If the offending account is one that the user follows or frequently interacts with, the system will not automatically block it. "Our technology takes existing relationships into account, so accounts you follow or frequently interact with will not be autoblocked," Twitter said.
A related addition, Proactive Safety Mode, identifies potentially harmful replies and prompts users to consider turning on Safety Mode.
It was added based on feedback from users involved in the initial trial, who wanted help identifying unwelcome interactions.
Like other social media companies, Twitter has struggled to deal with abuse and harassment on its platform. The firm is under scrutiny from regulators, with a French court recently ruling that Twitter must demonstrate it fights online attacks.
The UK is also preparing legislation to force all social media sites to act swiftly on hate speech or face fines.