Weeding Out the Trolls

By Bill Tressler


“Please don’t feed the trolls!”

This request is a familiar one to any card-carrying internet denizen. For as long as there have been message boards or comment sections, there have been trolls. Trolls, or users who make intentionally inflammatory posts in order to stir the pot, have become a mainstay of online communities.

They come in all shapes and sizes: the racist, the overly opinionated politico, the meme-spammer. Each kind has its own bag of tricks, and some are easier to detect than others. Some aren’t even trying to be trolls; they’re just folks with unpopular opinions or unfortunately bad grammar. Simply put, it can be hard to tell who’s trolling and who’s genuinely trying to contribute to the discussion.

A recent study, however, aims to make that process much, much easier. Professor Cristian Danescu-Niculescu-Mizil of Cornell University, along with Assistant Professor Jure Leskovec and PhD student Justin Cheng, both of Stanford University, collaborated on a study aiming to create an algorithm that could detect “Future-Banned Users,” or FBUs. The team chose the label FBU because “troll” is a fluid, hard-to-define term; while many FBUs are trolls and vice versa, the two are not completely synonymous.

With funding from a Google Faculty Research Award and a Stanford Graduate Fellowship, the team set out to observe the communities of three popular websites: Breitbart (a political news site), CNN (a general news site), and IGN (a gaming news and discussion site).

Cheng elaborates to BTR on the researchers’ methods and on how the results can be applied online.

“We wanted to study relatively large communities, but at the same time wanted to see if the observations we made would generalize across multiple communities,” says Cheng.

For a study like this, one of the most important steps is identifying the common traits of FBUs. Rather than trying to judge the malicious intent of individual posts, the researchers focused on the “signals” FBUs most commonly share. They found that FBUs were among the most active users, posting more frequently and on a smaller number of articles, and garnering more responses than the average user. FBUs also tended to become more anti-social in the time leading up to their banning, and their overall post quality declined as well.
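To make that concrete, those per-user signals could be summarized along these lines. This is only a rough sketch in Python: the Post fields and the particular measures are illustrative assumptions, not the study’s actual feature set or code.

```python
# Illustrative sketch only: field names and feature choices are assumptions
# for demonstration, not the study's actual pipeline.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Post:
    article_id: str
    text: str
    replies_received: int


def user_signals(posts: List[Post]) -> Dict[str, float]:
    """Summarize the kinds of per-user signals the study describes:
    activity level, how concentrated posting is across articles,
    and how many responses the user draws from the community."""
    n_posts = len(posts)
    n_articles = len({p.article_id for p in posts})
    return {
        "posts": float(n_posts),
        # Many posts spread over few articles means activity is concentrated
        # in a handful of threads, a pattern the researchers observed in FBUs.
        "posts_per_article": n_posts / max(n_articles, 1),
        "avg_replies": sum(p.replies_received for p in posts) / max(n_posts, 1),
    }


if __name__ == "__main__":
    history = [
        Post("article-1", "This headline is nonsense.", replies_received=6),
        Post("article-1", "You clearly didn't read my post.", replies_received=9),
        Post("article-2", "Wake up, people.", replies_received=4),
    ]
    print(user_signals(history))
```

Fed a user’s posting history, a summary like this captures how active, how concentrated, and how provocative an account is, without ever passing judgment on the intent of any single post.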

Over the course of 18 months, the researchers observed just under 1.75 million users and over 38 million posts across the three communities. Of the 343,926 IGN users observed, 5,706 (or 1.7 percent) were eventually banned for their posts. Of the 246,422 on Breitbart, 5,350 (or 2.2 percent) were banned. The big shocker came from CNN: of the 1,158,947 users observed, 37,627 (or 3.3 percent) were eventually banned.

Cheng was surprised by these results. Considering very little data currently exists on the prevalence of trolling, he and the team were unsure of what to expect. “Rather than trolling being a relatively rare occurrence, we found that a sizable proportion of users were banned on CNN: three in 100. This statistic suggested that antisocial behavior actually happens quite a bit and is worth understanding.”


These numbers are alarming, but perhaps misleadingly so. Cheng was quick to point out that “banned user” didn’t necessarily mean “troll”; a banned user might simply be someone who got on a moderator’s bad side one day. Using the current set of signals (post content, overall activity, response from the community, etc.), the researchers achieved a high detection success rate (80 percent, determined from observing only five to 10 posts per user), but still wound up misidentifying one in five users as antisocial. Simply put, the algorithm needs some serious fine-tuning before it can be solely relied upon to relieve beleaguered moderators.
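For a sense of what an 80 percent detection rate looks like in practice, here is a toy evaluation in Python. The data is random and synthetic, and a plain logistic regression stands in for whatever model the researchers actually used; the point is only to illustrate the accuracy-versus-misclassification trade-off, not to reproduce the study.

```python
# Toy evaluation: synthetic "users" stand in for real labeled data, and a
# logistic regression stands in for the study's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users = 2000

# Hypothetical per-user features summarized from a user's first 5-10 posts:
# [posts per article, average replies received, a crude post-quality score]
X = rng.normal(size=(n_users, 3))
# Synthetic labels: 1 = eventually banned (FBU), 0 = never banned.
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2]
     + rng.normal(scale=1.0, size=n_users) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# An accuracy in this ballpark (the study reports about 80 percent) still
# mislabels roughly one user in five, which is why the researchers stop
# short of recommending fully automated banning.
print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.0%}")
```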

Now that the observations are done and an initial algorithm is in place, the question remains: how do sites incorporate it into online moderation? Cheng suggests that it would be best used “as a tool to assist human moderators in managing their communities.” While the algorithm proved to be fairly accurate, the researchers believe it would be best utilized as a filter to home in on possible problem users so that the moderators, or the community as a whole, can decide their fate.
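In practice, that “filter, don’t ban” idea could be as simple as the following sketch. The flag_for_review function and its threshold are hypothetical, invented here for illustration; the study doesn’t prescribe a particular interface for moderators.

```python
# Hypothetical flagging workflow: risk scores come from some trained model
# (not shown). The key design choice is that nothing is banned automatically.
from typing import Dict, List


def flag_for_review(risk_scores: Dict[str, float], threshold: float = 0.8) -> List[str]:
    """Return usernames whose predicted risk crosses the threshold,
    sorted so moderators see the most suspect accounts first."""
    flagged = [user for user, score in risk_scores.items() if score >= threshold]
    return sorted(flagged, key=lambda user: risk_scores[user], reverse=True)


if __name__ == "__main__":
    scores = {"alice": 0.12, "bob": 0.91, "carol": 0.84, "dave": 0.47}
    # Moderators, or the community, make the final call on the flagged accounts.
    print(flag_for_review(scores))
```

The model only ranks and surfaces accounts; the ban decision stays with a human.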

Indeed, in its current state, this algorithm is best used as an instrument for moderators, rather than a replacement for them. While it may be incredibly tempting for a moderator to hand over banning duties to this more scientific method, they’d be doing so to the detriment of the community. Human language is nuanced, with differing dialects, senses of humor, and ever-shifting context, so 100 percent detection of a sarcastic or inflammatory post via an equation is nearly impossible.

The human element is crucial to proper moderation. Understanding who a person is and why they said something, as opposed to just what they said, is key to maintaining a healthy and active community. Without it, scores of innocent users would be banned for simply using similar speech patterns or posting at similar rates to trolls.

Until the day that an algorithm can pick up on sarcasm and dark humor, or differentiate between a naive question and a tongue-in-cheek one, online community moderation is best left to humans. This algorithm, however, has the potential to be a hugely effective tool for moderators, and may one day lead to a largely troll-free internet.

And that, fellow netizens, is a beautiful thought indeed.
