Effective tools for detecting harmful actions on social media are scarce, because this type of behavior is often ambiguous in nature and surfaces through seemingly superficial comments and criticism. Aiming to address this gap, a research team featuring Binghamton University computer scientist Jeremy Blackburn analyzed the behavioral patterns of abusive Twitter users and how they differ from those of other Twitter users. In this article, we take a closer look at the team's new algorithm, which they hope will reduce bullying and make social media platforms welcoming to everyone.
Aiming for a safer platform
According to Jeremy Blackburn, the computer scientist who designed the new anti-bullying algorithm for Twitter, the algorithms learn to tell the difference between bullies and typical users by weighing certain features as they are shown more examples. By applying machine learning to single out the users who cause unnecessary harm to others on the platform, the algorithms are able to identify abusive Twitter users with 90 percent accuracy.
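To make the idea of "weighing certain features" concrete, here is a minimal, purely illustrative sketch (not the team's actual model) of how a classifier can learn per-feature weights from labelled examples. The feature names and toy values are assumptions invented for the example.

```python
import math

# Hypothetical per-account features (assumed for illustration):
# (fraction of tweets with hashtags, rate of offensive words, reply rate)
examples = [
    ((0.1, 0.8, 0.9), 1),  # labelled abusive
    ((0.2, 0.7, 0.8), 1),
    ((0.6, 0.1, 0.2), 0),  # labelled typical
    ((0.5, 0.0, 0.3), 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=500, lr=0.5):
    """Logistic regression: gradient descent adjusts one weight per feature,
    so features that separate bullies from typical users gain influence."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # prediction error drives the weight update
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(examples)

def predict(x):
    """True if the weighted feature sum crosses the decision threshold."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5
```

After training, an account with bully-like feature values (e.g. a high offensive-word rate) is scored above the threshold, while a typical account is not; showing the model more examples refines the weights further.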

Preventing bullying
The researchers performed natural language processing and sentiment analysis on the tweets themselves, as well as a variety of social network analyses of the connections between users. They also developed algorithms to automatically classify two specific types of offensive online behavior: cyberbullying and cyberaggression. Blackburn and his team are now exploring proactive mitigation techniques to deal with harassment campaigns.
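As a flavor of what tweet-level sentiment analysis involves, the sketch below scores a tweet with a simple word-lexicon approach. This is an illustrative stand-in only; the lexicons are tiny invented examples, and the study used far more sophisticated NLP than this.

```python
# Toy lexicons (assumed for illustration, not from the study)
NEGATIVE = {"hate", "stupid", "die", "ugly"}
POSITIVE = {"love", "great", "thanks", "nice"}

def tweet_score(text):
    """Return (sentiment, offensive_ratio) for one tweet.

    sentiment: positive-word count minus negative-word count
    offensive_ratio: share of the tweet's words that are negative
    """
    words = [w.strip(".,!?").lower() for w in text.split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    return pos - neg, neg / max(len(words), 1)
```

Aggregated over an account's timeline, scores like these become input features for the behavior classifiers described above.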
A trustworthy algorithm
The algorithms were able to identify abusive users on Twitter with 90 percent accuracy. These are users who engage in harassing behavior, such as sending death threats or making racist remarks to other users.