The problem with human moderators


If Big Tech in 2018 already has a theme, it’s that social networks are no longer passive platforms. Since the new year, both Facebook and YouTube have stepped up with new guidelines and processes to manage — and in some cases police — content on their networks.

All of this started well before the new year, of course. Twitter has been following through on a lengthy project to both clarify its content policies and take a more active role in saying who and what is allowed on its platform, most recently with its so-called “Nazi purge.” The current trend arguably started with Reddit, when then-CEO Ellen Pao pushed for tighter control of harassment and revenge porn on the site.

This digital reckoning now feels inevitable, but it was hastened by events over the last year. Anger at the big networks peaked after Facebook — the most influential of the bunch — was widely criticized for hosting fake news and politically charged ads with virtually no oversight. But while the old system of letting algorithms sort things out was clearly flawed, the networks’ reassertion of their role as gatekeepers is worrisome, too.

In the case of YouTube, the changes, announced yesterday, mostly involve demonetizing (that is, removing the ads from) videos from creators below a certain watch-time or subscriber threshold, which sounds fine. However, what the relatively clinical blog post doesn’t discuss is the new way YouTube will deal with big partner accounts: Human moderators will review their content — all of it — turning off monetization on any specific video they find objectionable.

Coming in the wake of Logan Paul’s infamous visit to Japan’s suicide forest and his subsequent, numerous apologies, it seems clear this introduction of human moderators is intended to head off incidents exactly like that. Presumably, if this system had been in place then, the moderator would have raised a hand and said, “Uh, guys…?”

Let’s be clear about what we’re talking about here: Demonetizing isn’t the same thing as deleting. This isn’t censorship per se, though it is sending a message to creators about what content is acceptable and what isn’t. The thinking is that, over time, YouTube creators will post less of the demonetized stuff and more videos that “contribute positively to the community,” in the words of YouTube’s Neal Mohan and Robert Kyncl.

Isn’t that a good thing? Maybe, but if it were as simple as enforcing YouTube’s community guidelines, a bot could do it, and we already know that doesn’t work. With humans involved, a different set of questions arises: Who are these humans? What qualifications or biases do they have? And what exactly raises a red flag in their minds?

The answer to that last question will likely vary depending on the answers to the first two. It doesn’t help that most terms of service and community guidelines are purposely vague, to give moderators wiggle room. In the case of Twitter, which once carried the unofficial label of “the free speech wing of the free speech party,” the policies have even been contradictory, and the network itself has sometimes appeared unsure why certain tweets are flagged, accounts suspended or verification stripped.

This isn’t a case for zero censorship. There are things virtually everyone would agree shouldn’t be on a network as popular and public-facing as Facebook or YouTube. Neo-Nazis spouting hateful ideology, graphic depictions of violence, direct threats — they all need to go.

But audiences have been clamoring for more content policing beyond just the most extreme. And by and large, the networks have acquiesced to the demand, staffing up to review more content by hand since algorithms can only do so much. But the companies are only as good as the humans they hire, and the job of content moderator is largely a thankless one — the daily slog of viewing vast amounts of objectionable content has a psychological toll attached.

Historically, the big tech companies haven’t been good at human intervention. In 2016, human moderators at Facebook were accused of purging conservative news from its trending topics section. That same year, Facebook removed a historic photo from the Vietnam War, justified its decision, then reversed it. Twitter’s CEO has basically admitted its enforcement policies have been a mess. Even Google, generally thought to be the most algorithmically driven of the bunch, isn’t immune from human failings: Back in 2012, it tried to challenge Facebook’s social media dominance with Google+, its own social network, and deliberately ranked companies’ Google+ pages higher than their Facebook pages in search results.

Put simply: We shouldn’t trust Twitter, Facebook, Google, YouTube, or any other private tech company to create a system that consistently punishes bad actors based on a common standard. Humans are driven by biases. Systems can help correct for those biases, but we can’t judge without knowing what those systems are. And if they don’t work as intended, that could leave us with a worse problem than when we started: turning each network into its own massive filter bubble, where anything deemed offensive is purged.

Every platform is now a content cop on the beat. Maybe they always were, but when you make loud public statements that you’re going to start policing content more actively, it means more calls to the police. That will inevitably mean more users getting kicked out of these networks, some as big as Logan Paul.

On the surface, that may feel OK. YouTube can afford to lose a few big personalities, and maybe it should. The more difficult question is what such actions say about the network. When users are punished for offensive content, what do those users’ sympathizers and supporters think — the ones who might not agree with inconsistent applications of poorly worded policies? How do they start to think of YouTube (and Twitter, Facebook, etc.)? How do they express their uneasiness, and where do they start spending their time?

I don’t know the answers to those questions. But I do know simple math: The more things you push out of a bubble, the smaller it’ll get. And you might not like what forms alongside it.