The Times: Big Tech's dirty little secret: how offensive content is allowed to stay online

For David Ibsen, executive director of the Counter Extremism Project, a US non-profit group that aims to tackle extremist ideology, the only shocking thing about the Dispatches programme was that Facebook allowed an investigative journalist to be hired as a moderator. “This really shows the seriousness with which Facebook takes moderation,” said Ibsen. “Imagine Facebook letting an investigative reporter into its engineering team. It wouldn’t happen. If safety of users is really the No 1 concern for Facebook you have to ask, why is the work outsourced?”

Ibsen said Facebook, which reported a record $5bn (€4.3bn) profit for the first quarter of 2018 despite the Cambridge Analytica scandal, could afford to hire thousands of moderators, but he did not believe the web giant was serious about tackling extremist content. “I maintain that when Facebook announces policy changes, it’s really just PR to get rid of the bad press,” he said.

Ibsen cites the example of an Isis video known as “cubs of the caliphate” showing a young boy executing a teenager. Ibsen said the video, uploaded in 2015 and used as a recruitment tool, was still on Facebook three years later. “We’ve been saying all along that the companies don’t want to remove bad content as it’s part of the business plan. But that contradicts what Mark Zuckerberg has been saying publicly. So sincerity from the top is not trickling down. Unless the moderators are empowered to aggressively remove this stuff, it’s not going to make a difference.”

Date
July 22, 2018
Daily Dose

Fact:

On April 3, 2017, a suicide bombing was carried out in the St. Petersburg metro on the day Vladimir Putin was due to visit the city, killing 15 people and injuring 64. An al-Qaeda affiliate, the Imam Shamil Battalion, claimed responsibility.
