Mark Zuckerberg has announced the addition of 3,000 new moderators who will review objectionable content on the platform, especially live-streamed footage of murder, suicide and rape.

In his announcement, Zuckerberg revealed that the company already has 4,500 people around the world working on its ‘community operations team’, and that the new hires should improve the review process, including censoring content appropriately and removing extreme content more quickly. Last week the company left footage of a Thai man killing his 11-month-old daughter on Facebook Live for a whole day.

Instead of scrutinizing content before it is uploaded, Facebook relies on users of the social network to report inappropriate content. Moderators then review reported posts and remove them if they violate Facebook’s community standards (nudity, hate speech or glorified violence). The social network has pledged to work harder to identify and remove disturbing content, but doing so can take a psychological toll. A Facebook spokeswoman said that the company recognizes that the work can often be difficult and that every person is offered psychological support and wellness resources. The company said it also has a program in place to support people in these roles, which is evaluated annually.
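To make the report-then-review flow concrete, here is a minimal Python sketch of a report-driven moderation queue. Every name in it (Report, ModerationQueue, the standards set) is a hypothetical illustration under the assumptions above, not Facebook's actual system:

    from collections import deque
    from dataclasses import dataclass

    # Hypothetical category labels standing in for Facebook's community
    # standards; the real policy is far more detailed.
    COMMUNITY_STANDARDS = {"nudity", "hate speech", "glorified violence"}

    @dataclass
    class Report:
        post_id: str
        reason: str  # category chosen by the reporting user

    class ModerationQueue:
        """Illustrative sketch: content is published first, reported by
        users after the fact, then reviewed by a human moderator."""

        def __init__(self):
            self.pending = deque()

        def report(self, post_id, reason):
            # Nothing is screened before upload; users flag content
            # only once it is already live.
            self.pending.append(Report(post_id, reason))

        def review_next(self, violates_standards):
            # A human moderator decides each case; `violates_standards`
            # stands in for that judgment call.
            report = self.pending.popleft()
            action = "remove" if violates_standards(report) else "keep"
            return action, report.post_id

The moderator is modeled here as a callback rather than a rule, because, as the article goes on to note, the judgment itself resists automation.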

“You can have a situation where the words that are being typed by the end user are exactly the same, but one is a casual joke and the other is a serious thing that needs escalation. This requires intuition and human judgment. Algorithms can’t do that,” said Peter Friedman, chief executive of LiveWorld. Beyond the psychological toll, there is an enormous burden of judgment: moderators have to distinguish between child pornography and iconic Vietnam War photos, between the glorification of violent acts and the exposure of human rights abuses.

Decisions must be nuanced and culturally contextualized, or Facebook will be accused of infringing on freedom of speech. Moderators may also be required to make judgments about suicidal individuals. Facebook is testing artificial intelligence as a way of detecting comments indicative of suicidal thoughts, but human review would still be required.
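The pattern being tested amounts to AI-assisted triage with a mandatory human in the loop. The sketch below is purely illustrative; the scoring function, phrase list and threshold are assumptions, not anything Facebook has disclosed:

    # Toy risk scorer standing in for a trained model; the phrase list
    # and threshold are invented for illustration only.
    RISK_PHRASES = ("want to die", "end it all", "no reason to live")

    def risk_score(comment: str) -> float:
        hits = sum(phrase in comment.lower() for phrase in RISK_PHRASES)
        return min(1.0, hits / 2)

    def triage(comment: str, human_queue: list, threshold: float = 0.5) -> None:
        # The model only flags; a person always makes the final call.
        if risk_score(comment) >= threshold:
            human_queue.append(comment)

    queue: list = []
    triage("I feel like there is no reason to live", queue)
    assert queue  # flagged for human review, never acted on automatically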
