Twitter works to curb trolls with keyword muting, reporting

Online harassment is an increasing problem across social networks, and Twitter is looking to combat it.
Written by Jake Smith, Contributor

Twitter on November 15 took new steps to curb online harassment by focusing on controls, reporting, and enforcement, as abuse among users has risen across social networks.

Users can now mute specific keywords, phrases, and entire conversations so they no longer appear in their notifications. Previously, users could only mute other accounts.

Under Twitter's new rules, hateful conduct that targets people based on their race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease can be directly reported through the "hateful conduct" button.

Support teams that respond to these reports have been retrained on Twitter's enforcement policies through "special sessions on cultural and historical contextualization of hateful conduct." The social network has also built new internal tools to combat the abuse.

Twitter said in a blog post that it doesn't "expect these announcements to suddenly remove abusive conduct from Twitter. No single action by us would do that. Instead we commit to rapidly improving Twitter based on everything we observe and learn."

Social media is the most commonly cited venue for online harassment, according to the Pew Research Center, and state governments have worked to curb the issue.

