
Hidden in the shadows, web giants’ moderators go through hell


Why moderate the internet?

Wasn’t the internet created to be a platform for exchange where freedom of expression is key? So shouldn’t everyone be able to say whatever they want? In many ways, that is what the internet has become, but it isn’t exactly what it should be.

Freedom of expression, and the corresponding freedom of opinion, is a universal right guaranteed by human rights law and a cornerstone of all democracies. However, like all freedoms, it is subject to national limits designed to protect individuals (along with their reputations) as well as the public interest (particularly national security, i.e. the fight against terrorism).

Even if they often forget it, internet users cannot say everything they want, because national laws apply on the internet. For example, on YouTube you have to comply with YouTube’s privacy policy as well as national laws. So the question is: who polices Facebook, Google & Co.? We need to keep in mind that these are private companies, which complicates the task even further, and the state doesn’t have the power to intervene directly by removing what it doesn’t want to see.

Governments have made agreements with technology giants to ensure that local laws are respected. This hasn’t always been easy because of the lack of communication between the two sides, which initially led to Facebook being blocked in some countries, including Turkey. Beyond the problem of moderation, states are seeking solutions to modern technological problems (dependence on social networks, to name one).

How does moderation work?

Moderation on the internet (Google, Facebook, Twitter…) is done in two ways: with artificial intelligence, and with specialized human teams. It’s not that machines are replacing humans; the two work together in a complementary way. In a documentary released on the European TV network ARTE, one moderator put it simply: “An algorithm can’t do this.” Their work? Removing everything that is contrary to the rules of their company (and to human rights): child labor, sexuality, terrorism, online harassment, and so on…

We complain about Facebook’s lack of responsiveness in moderating, but without the moderators it would be even worse. / © shutterstock, AndroidPIT

Who are these people? Where do they come from? These are difficult questions to answer, because these mysterious employees generally refuse to talk about themselves and remain in the shadows (for their own safety, incidentally). ARTE’s documentary showed that Facebook uses third-party companies to moderate its network. While there are moderators all over the world, the documentary focused on an office in the Philippines where employees come in every day to moderate content. While they are happy to have a job that allows them to earn a living, many are also reluctant to go there, because they face the worst that can be found on the internet. Occupational psychologists have drawn a parallel between the trauma experienced by these moderators and that experienced by soldiers at war.

Their days consist of approving or deleting messages, photos and videos. Whether reported by users or detected by artificial intelligence, this content must be verified by a moderator, who decides whether it can remain online. Moderators specialize in a discipline that generally requires them to view photos or videos that are difficult to look at. Make no mistake: their work is not only a decision-making process; they also have to memorize many elements in order to do their job well (for example, the flags and slogans of the various terrorist groups).
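To make that workflow concrete, here is a minimal, purely hypothetical sketch of how an AI classifier and a human review queue might be combined. The function names, risk scores and thresholds below are invented for illustration; none of this reflects any platform’s actual system.

```python
from dataclasses import dataclass

# Hypothetical illustration only: the names, scores and thresholds below
# are assumptions made for this sketch, not any platform's real API.

@dataclass
class Post:
    id: int
    text: str
    reported_by_user: bool = False

def ai_risk_score(post: Post) -> float:
    """Stand-in for a machine-learning classifier (0.0 = harmless, 1.0 = clear violation)."""
    banned_terms = {"terror", "abuse"}  # toy word list for the sketch
    hits = sum(term in post.text.lower() for term in banned_terms)
    return min(1.0, 0.5 * hits)

def moderate(post: Post) -> str:
    score = ai_risk_score(post)
    if score >= 0.9:
        # Obvious violations can be removed automatically by the machine.
        return "removed (automatic)"
    if score >= 0.3 or post.reported_by_user:
        # Ambiguous or user-reported content goes to a human moderator,
        # who makes the final keep/remove decision.
        return "queued for human review"
    return "kept online"

print(moderate(Post(1, "cute cat video")))                              # kept online
print(moderate(Post(2, "graphic abuse footage", reported_by_user=True)))  # queued for human review
```

The point of the split is the complementarity the moderator’s quote describes: the machine absorbs the volume and the obvious cases, while everything that requires judgment lands in a human’s queue.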

“I’m going to quit, I have to stop because I’m a mess.”

The primary challenge for moderators is being psychologically and emotionally strong enough to endure the horrors they see on a daily basis. “Horrors” is really a euphemism here: torture, rape, extreme violence, pedophilia… These are just some examples of what moderators regularly encounter. And of course, there are countless instances of racism, homophobia and so on.

Live events can sometimes be very problematic. When a video is live, moderators cannot stop it until something happens that breaks the rules. For example, in the name of freedom of speech, a moderator cannot stop a stream, even if a suicide is taking place, until the person is dead. By contrast, when it comes to other content like images, moderators have to act fast: they have only 10 seconds to decide whether or not to censor the content.

“We’ve enslaved ourselves, but we need to realize that all this is not acceptable.” – Anonymous moderator

The other big problem for moderators is the incredible weight on their shoulders. “It’s not an easy job,” one of them said, since the slightest mistake can “cost one or more lives, start a war, or lead to harassment or suicide”. As a matter of fact, some moderators, depressed and emotionally exhausted, have also resorted to suicide.

Many problems with the concept

It’s interesting to note that from one platform to another, content may not suffer the same fate. For example, an image can be considered either a threat or satire. Nicole Wong, a former senior employee of Google and the Barack Obama administration, explained that moderation is sometimes very delicate. She gave the example of videos of Saddam Hussein’s death, some of which showed him hanging and others showed him dead. YouTube decided to ban only the videos of him dead because there was “no need to show this”, while the videos of him hanging were kept for “historical perspective”.

One of the moderators said, “the biggest mistake you can make is to keep nude pictures”. This puritanical view of the world may be the most problematic part, because while nudity is policed strictly, hatred has taken hold online. For example, under the pretext of freedom of expression, neo-Nazis are able to express their ideals and are becoming more and more numerous. Of course, Facebook isn’t the cause of these problems; it is just the instrument. But that doesn’t mean the network isn’t partly responsible.

On social networks people are easily upset, and they think that they understand a subject with just a few bits of information.

Ed Lingo, journalist

A former Facebook executive spoke about the mentality in Silicon Valley: “We don’t talk about the negatives.” Instead, they plead ignorance until, for example, the European Union starts butting in. Facebook is a company with a lot of expertise in IT, products, infrastructure and user experience, but no editorial or copyright skills. Today, politics is omnipresent on the internet, especially on Facebook and Twitter. So we need to find real solutions.

The most tragic thing is that these giants (and especially Facebook) may make big speeches with no point other than hearing the sound of their own voices, but the problems are getting worse. One ex-employee articulated the worst part of it: “The feeling of indignation is what really works. Facebook attracts more people with outrageous content than by removing it. Facebook helps to spread hatred.”

What do you think these tech giants should do to moderate content? Tell us your thoughts in the comments!


Source: https://www.androidpit.com/facebook-google-moderators-going-through-hell
