YouTube Struggles to Win Back Worried Advertisers

YouTube’s recent decision to again raise the requirements for videos that can be paired with paid advertising demonstrates the company’s ongoing struggle to monitor and remove problematic content, including extremist and terrorist material. Last March, a media investigation led major brands to pull their ads from YouTube after the ads were found to appear alongside extremist and hateful videos.

Since this very public incident, the company has repeatedly struggled to implement effective policies and procedures to monitor and police content. Last year, the company introduced a rule requiring channels to reach at least 10,000 total views before becoming eligible for advertising.

Now, the Google-owned platform says it will require video creators to have at least 1,000 subscribers and 4,000 hours of watch time in the last 12 months before they can monetize their videos with advertisements. YouTube also said it would continue to monitor signals like spam and abuse flags to identify content that violates its terms of service and community guidelines, and that it would increase its content moderation staff. The updated policy is a step in the right direction, but it raises additional questions and concerns about the video-sharing platform’s ability to effectively monitor what happens on its site.

This most recent iteration of YouTube’s advertising policy represents a tacit admission that previous measures simply failed to address the scope of the problem. Moreover, such actions have only occurred in response to a mass exodus of advertising dollars, negative media attention, and scrutiny from U.S. and European policymakers.

Still, there is no guarantee that YouTube’s newest policy will be any more effective at preventing ads from appearing next to worrying content, including terrorist and extremist propaganda. After nearly a year of assuring the public that YouTube is “committed to being part of the solution” in the fight against online extremism and that it takes “these issues very seriously,” the company has been repeatedly forced to publish new statements and promise to fix its persistent problems in an effort to assuage advertisers and policymakers.

YouTube even recently admitted that its response to dangerous content was lacking. Chief Business Officer Robert Kyncl and Chief Product Officer Neal Mohan said in a joint blog post on January 16 that “it’s been clear over the last few months that we need a higher standard” [emphasis added]. It is unclear, however, whether this higher standard will be sufficient to combat extremist, hateful material online when previous standards have failed to do so.

That YouTube took almost a year to improve its platform is troubling, given that advertisers raised the issue publicly in March 2017 and that advocacy groups and policy experts have been expressing concern since the rise of ISIS in 2014. This timeline illustrates two things: (1) that tech firms take action only when their bottom line is at risk and their reputations are threatened, and (2) that even then, the proposed solutions can be inadequate.

In response to these past failures to adequately police policy-violating content, the Incorporated Society of British Advertisers (ISBA) has now called on Google and Facebook to establish an independent body to regulate and monitor the detection and removal of dangerous content. The group’s members include Lloyds Banking Group, Unilever, and Procter & Gamble. This latest move clearly reflects a level of advertiser concern and frustration that will not go away.
