Monika Bickert, Facebook's head of global policy management, said the primary goal was to prevent harm, and that to a great extent, the company had been successful. But perfection, she said, is not possible. "We have billions of posts every day, we're identifying more and more potential violations using our technical systems," Bickert told the newspaper. "At that scale, even if you're 99 percent accurate, you're going to have a lot of mistakes."
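To put that figure in rough perspective (an illustrative calculation, not one Facebook has published): at one billion posts a day, a 99 percent accuracy rate would still leave 1,000,000,000 × 0.01 = 10,000,000 mistaken moderation decisions every single day.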
>>>>
Revealed: Facebook's secret censorship rule-book.
Thousands of slides of banned 'hate groups', phrases and emojis that 7,500 low-paid moderators are supposed to use to police 2 billion users
Daily Mail/UK Dec 28, 2018
A committee of young company lawyers and engineers has drawn up thousands of rules outlining what words and phrases constitute hate speech and should be removed from the social media platform across the globe. They have also drawn up a list of banned organisations laying down which political groups can use the platform in every country on earth, a New York Times investigation has revealed.

An army of 7,500 low-paid moderators, many of whom work for contractors that also run call centers, enforces the rules for 2 billion global users with reference to a baffling array of thousands of PowerPoint slides issued from Silicon Valley. They are under pressure to review each post in under ten seconds and to judge a thousand posts a day.
The company outsources much of this post-by-post moderation to firms that rely largely on unskilled workers, many of them hired from call centers.
Since the Cambridge Analytica scandal and accusations that 'fake news' has been used to spread misinformation across the platform, Facebook has been under pressure to be more open about how it moderates content and how it uses the data it holds.
As recently as this month, Israeli Prime Minister Benjamin Netanyahu's son, 27-year-old Yair, was temporarily suspended from Facebook for 'hate speech' over a series of posts about Muslims and Palestinians. He wrote in Hebrew: 'There will not be peace here until: 1. All the Jews leave the land of Israel. 2. All the Muslims leave the land of Israel. I prefer the second option.' In another post he added: 'Do you know where there are no attacks? In Iceland and Japan. That's because there are no Muslim populations.' Facebook initially removed the posts from its platform, but then decided to suspend him for 24 hours after he re-posted a screen grab showing the deleted posts.
>>>>
Why Facebook can't be 'fixed'
Erin Dunne, December 28, 2018
Facebook expanded rapidly. New markets, new users, and new countries quickly came online, and Facebook, focused on growth, was all too happy to watch the platform grow. That growth came with new problems, and Facebook, under increasing pressure, faced two starkly different options: moderate content or leave the platform an unmoderated free-for-all.

In the end, Facebook decided to moderate content. Bad press over abuses of the platform by violent and malicious actors had taken its toll. But actually following through on cutting posts, groups, and users that Facebook thought might be linked to violence proved far more difficult than the company seems to have anticipated.
As the Times investigation found, the supposed moderation often failed:
"Moderators were once told, for example, to remove fund-raising appeals for volcano victims in Indonesia because a co-sponsor of the drive was on Facebook's internal list of banned groups. In Myanmar, a paperwork error allowed a prominent extremist group, accused of fomenting genocide, to stay on the platform for months. In India, moderators were mistakenly told to take down comments critical of religion." But setting real challenges aside, once Facebook decided to moderate content, it also wanted to do so cheaply. The expertise necessary for high-quality guidelines, translation, and revisions is difficult to find and expensive. The result was outsourcing moderation to third-party firms ill-prepared to evaluate high volumes of content and lacking the necessary historical, political, and linguistic understanding.
Fixing those problems and paying more for moderators proficient in diverse languages and well-versed in current politics would likely be so costly as to overburden a company that never planned on playing the role of global speech police in the first place. Worse, even with such moderation, deciding just what should be allowed to stay while complying with Facebook's own community guidelines and with existing national and international laws would still be an all but impossible task.
So why did Facebook pick this more complex and deeply flawed path of moderation in the first place?