Moderating offensive online comments has become an increasingly difficult and urgent challenge. But Kevin Munger, a PhD student in NYU’s Department of Politics, is hoping to change that with data science.
In his intriguing paper, “Tweetment Effects on the Tweeted: Experimentally Reducing Racist Harassment,” Munger demonstrates how his new comment moderation method, which uses ‘bot’ Twitter accounts, reduced offensive behavior for up to a month after intervention.
Munger developed his methodology through three major stages.
First, he collected a dataset of Twitter users who engaged in racist behavior by searching for accounts that had tweeted racial slurs at other users. Previous studies have been limited by restrictive samples, since users are unlikely to come forward as openly racist. But by using data science, Munger could independently discern racial harassment without relying on users to identify themselves.
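The collection step can be sketched as a simple dictionary search: flag tweets that both mention another user and contain a term from a list of slurs. This is a minimal illustration, not Munger’s actual pipeline; the slur list and sample tweets below are placeholders.

```python
# Placeholder dictionary of flagged terms -- stand-ins, not real slurs.
SLUR_DICTIONARY = {"slur_a", "slur_b"}

def is_harassing(tweet: dict) -> bool:
    """Flag a tweet that @-mentions another user AND contains a dictionary term."""
    words = [w.strip(".,!?").lower() for w in tweet["text"].split()]
    mentions_someone = any(w.startswith("@") for w in words)
    uses_slur = any(w in SLUR_DICTIONARY for w in words)
    return mentions_someone and uses_slur

# Toy data standing in for downloaded tweets.
tweets = [
    {"user": "u1", "text": "@target you slur_a"},
    {"user": "u2", "text": "nice weather today"},
]
flagged_users = {t["user"] for t in tweets if is_harassing(t)}
```

Requiring the @-mention is what makes this harassment detection rather than mere keyword matching: the slur must be directed at someone.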
Yet because “a dictionary method like searching for ethnic slurs cannot capture any information about the tone of a tweet,” Munger explained, the second stage of his research involved using the R package “streamR” to assign each user an ‘offensiveness score’ based on the average number of offensive words per tweet. The score let him discard users who had used racial slurs sarcastically. He then narrowed the dataset to 50 white adult males, as they are “the largest and most politically salient demographic engaging in racist online harassment of blacks.”
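The scoring step described above is just an average: offensive words divided by tweet count, per user. Here is a minimal sketch of that computation (in Python rather than R, with a placeholder term list); the function name and data shapes are illustrative assumptions.

```python
from collections import defaultdict

# Placeholder term list -- stand-ins for the study's dictionary.
OFFENSIVE_TERMS = {"slur_a", "slur_b"}

def offensiveness_scores(tweets):
    """Per user, return the average number of offensive words per tweet."""
    counts = defaultdict(lambda: [0, 0])  # user -> [offensive word count, tweet count]
    for t in tweets:
        words = [w.strip(".,!?").lower() for w in t["text"].split()]
        counts[t["user"]][0] += sum(w in OFFENSIVE_TERMS for w in words)
        counts[t["user"]][1] += 1
    return {user: off / n for user, (off, n) in counts.items()}
```

A user who tweets a slur once in many tweets (perhaps quoting or mocking it) ends up with a low score, which is how the averaging helps separate sarcastic uses from habitual harassers.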
Finally, Munger created a suite of Twitter ‘bots’ that would reply to the users who had tweeted offensive racial slurs, reminding them that their language was harmful. Interestingly, the bots followed a two-by-two design: each presented as either black or white, and had either few followers or many.
He discovered that users who had been tweeted at by his white male bot with a large number of Twitter followers were the most likely to reduce their use of racial slurs. Remarkably, his bots also led users to tweet racial slurs approximately 186 fewer times overall in the month after intervention, suggesting that his method is a promising way forward for tackling online harassment.
His research also raises a provocative question: what does it say about our online community if it is both the white male user who most frequently performs racial harassment, and the white male Twitter ‘bot’ who has the greatest power to moderate such harassment?
by Cherrie Kwok