Updated: Tracking Facebook’s Policy Changes

In March 2018, reports surfaced that the private information of up to 87 million Facebook users had been harvested by the firm Cambridge Analytica beginning in 2014. This was not the first time Facebook had received backlash for mishandling user data: the company first issued an apology on the issue in 2007, when a feature called Beacon tracked and shared users’ online activity without expressly asking them for permission. In 2013, after admitting to a year-long data breach that exposed the personal information of 6 million users, Facebook promised to “work doubly hard to make sure nothing like this happens again.” Yet once again, Facebook rushed to update its data policy only after the latest scandal broke, publishing at least five press releases in March and April 2018 detailing new measures and adjustments. (Reuters, New York Times, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom)

For over a decade, Facebook has faced criticism for the misuse of its platform on issues ranging from the publication of inappropriate content to user privacy and safety. Rather than taking preventive measures, Facebook has too often jumped to make policy changes only after damage has already been done. The Counter Extremism Project (CEP) has documented instances in which Facebook has made express policy changes following public accusations, a scandal, or pressure from lawmakers. While one would hope that Facebook is continuously working to improve security on its platform, there is no excuse for so many of its policy changes being reactive, and the pattern raises the question of what other scandals are in the making because of still-undiscovered lapses in Facebook’s current policies. (Reuters, New York Times, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom)

November 2007: Facebook receives backlash in response to its Beacon advertising feature, which tracks users’ actions on other websites and shares them with their friends on Facebook. (New York Times)

Subsequent Policy Change(s)

  • December 2007: Facebook provides the option to opt out of using Beacon. (Facebook Newsroom)

February 2009: North Carolina Attorney General Roy Cooper and Connecticut Attorney General Richard Blumenthal demand that Facebook and other social media sites enact more privacy controls to protect children and teenagers. According to Blumenthal, a preliminary number of sex offenders found on Facebook was “substantial.” (NBC News)

Subsequent Policy Change(s)

  • December 2009: Facebook announces the formation of the Facebook Safety Advisory Board, a group of five leading Internet safety organizations that Facebook will consult on issues related to online safety. (Facebook Newsroom)

August 2009: The Office of the Privacy Commissioner of Canada recommends policy changes to Facebook after a year-long investigation of Facebook’s privacy policies and controls. The investigation was launched following a complaint from the Canadian Internet Policy and Public Interest Clinic. (Facebook Newsroom, Office of the Privacy Commissioner of Canada)

Subsequent Policy Change(s)

  • August 2009: Facebook updates its Privacy Policy to better describe its practices and provide reasons for data collection. (Facebook Newsroom)
  • December 2009: Facebook announces new tools to help users review, understand, and update their privacy settings. (Facebook Newsroom)

May 2010: Facebook receives “intense criticism” from users over the complicated nature of the site’s privacy settings, including accusations that the site is trying to force people to share their data. (Guardian)

Subsequent Policy Change(s)

  • May 2010: Facebook announces that it will introduce simpler and more powerful controls for sharing personal information. (Facebook Newsroom)

November 2011: The Federal Trade Commission files an eight-count complaint against Facebook, charging the site with deceiving consumers by telling them they could keep their information on Facebook private and then repeatedly allowing it to be shared. According to the complaint, Facebook told consumers that third-party apps could access only the user data they needed to operate, when in reality the apps could access nearly all of a user’s personal data. Facebook was also charged with sharing user information with advertisers. (Federal Trade Commission, Federal Trade Commission, NBC News)

Subsequent Policy Change(s)

  • November 2011: Facebook reaches a settlement with the Federal Trade Commission in which it agrees to make several changes to its privacy control settings, such as obtaining consumers’ express consent before their information is shared. (Facebook Newsroom, Federal Trade Commission)

May 2013: A women’s activist campaign highlights pages on Facebook glorifying rape and violence against women, many of which had passed the site’s moderation process. Several businesses pull their ads from Facebook as a result. (CNN, Women, Action, & the Media, Reuters)

Subsequent Policy Change(s)

  • May 2013: Facebook announces that it will update its guidelines and moderator training to crack down on gender-based hate speech. (CNN, Facebook)
  • June 2013: Facebook implements a new review policy for pages and groups aimed at restricting ads from appearing alongside pages that contain any violent, graphic, or sexual content. (Facebook Newsroom, Reuters)

October 2013: The Daily Beast and The Verge reveal that Facebook and its photo- and video-sharing platform Instagram are being used for private firearms sales, and that the sites have no policies in place regulating such sales. (The Verge)

Subsequent Policy Change(s)

  • March 2014: Facebook announces new regulations regarding the private sale of firearms on its sites. (Facebook Newsroom)

2016: Throughout the year, U.K. and European lawmakers express concern that social media platforms have become a “vehicle of choice” for extremists to recruit and radicalize. Several governments threaten legislative action against the tech companies. (Telegraph, Reuters, Wired)

Subsequent Policy Change(s)

  • December 2016: Facebook, Microsoft, Twitter, and YouTube launch a shared industry database of “hashes” (digital “fingerprints” of extremist imagery) in an effort to curb the spread of terrorist content online; a simplified sketch of how such hash matching might work appears below. (Facebook Newsroom)
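Facebook’s announcement does not detail how the shared database operates, but the general approach is to compute a compact fingerprint of a known extremist image or video and compare new uploads against fingerprints contributed by the participating companies. The sketch below is a minimal illustration under those assumptions: it uses an exact cryptographic hash purely for simplicity, whereas systems of this kind typically rely on perceptual hashes that still match lightly edited copies, and all function names here are hypothetical.

```python
import hashlib

# Illustrative sketch only: a shared pool of "hashes" (fingerprints) of known
# extremist media, as contributed by participating companies. Real deployments
# generally use perceptual hashing so re-encoded or cropped copies still match;
# SHA-256 is used here only to keep the example self-contained.
shared_hash_database = set()

def fingerprint(file_bytes: bytes) -> str:
    """Compute a fingerprint for an uploaded image or video file."""
    return hashlib.sha256(file_bytes).hexdigest()

def contribute_hash(file_bytes: bytes) -> None:
    """A participating company adds the hash of a known extremist file."""
    shared_hash_database.add(fingerprint(file_bytes))

def matches_known_content(file_bytes: bytes) -> bool:
    """Check a new upload against the shared database before it is published."""
    return fingerprint(file_bytes) in shared_hash_database

if __name__ == "__main__":
    known_file = b"<bytes of a known extremist image>"  # hypothetical placeholder
    contribute_hash(known_file)
    print(matches_known_content(known_file))               # True: upload is flagged
    print(matches_known_content(b"<an unrelated photo>"))  # False: not flagged
```

Because only hashes are exchanged, companies can flag matching uploads without sharing the underlying images or videos with one another.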

November 2016: Facebook is first accused of allowing “fake news stories” to proliferate on its site, stories that may have swayed the 2016 presidential election. (Vox)

Subsequent Policy Change(s)

  • December 2016: Facebook introduces a new option to flag news stories as disputed and report potential hoaxes on the site. (Facebook Newsroom)
  • March 2017: Facebook introduces a third-party fact-checking tool on its site. (Guardian, Facebook Help)

March 2017: Facebook faces backlash after a report surfaces revealing that hundreds of U.S. Marines were sharing nude photos of female colleagues and making degrading comments about them in a private Facebook group. (Buzzfeed, Reveal)

Subsequent Policy Change(s)

  • April 2017: Facebook introduces new tools to “help people when intimate images are shared on Facebook without their permission.” (Facebook Newsroom)

May-June 2017: U.K. and European lawmakers increase pressure on tech companies, calling for new laws to punish companies that continue to host extremist material on their platforms. On May 1, the U.K. Home Affairs Committee publishes a report saying that tech companies are “shamefully far” from taking action to tackle illegal and hateful content. Also in May, the Guardian publishes details of Facebook’s content moderation guidelines on controversial topics such as sex, terrorism, and violence; the documents expose Facebook’s contradictory positions on these issues, which have resulted in the company’s ongoing inability to remove unwanted content. In June, U.K. Prime Minister Theresa May calls on fellow G7 members to pressure tech companies to do much more to remove hateful and extremist material. (CNBC, U.K. Home Affairs Committee, Guardian, Guardian)

Subsequent Policy Change(s)

  • June 2017: Facebook launches the Global Internet Forum to Counter Terrorism (GIFCT), a partnership with Microsoft, Twitter, and YouTube aimed at combating extremist content online. (Facebook Newsroom)
  • May 2018: Facebook releases a report on its efforts since October 2017 to enforce its Community Standards and remove inappropriate, hateful, and extremist content. (Facebook Newsroom)

September-October 2017: Facebook discloses that the Internet Research Agency, a Russian company linked to the Russian government, bought more than $100,000 worth of political ads and disseminated content that reached 126 million users on Facebook in an attempt to sow discord among American citizens prior to the 2016 presidential election. Facebook receives additional accusations, including from U.S. President Donald Trump, that misinformation and “fake news” were spread on the platform in an attempt to influence the election. (New York Times, New York Times, Twitter, Facebook)

Subsequent Policy Change(s)

  • October 2017: Facebook introduces new measures to increase transparency for all ads, including a new requirement for political advertisers to verify their identities. (Facebook Newsroom, Facebook Newsroom)
  • December 2017: Facebook replaces its “Disputed Flags” feature with a “Related Articles” feature, also aimed at fighting false news on the site. (Facebook Newsroom)
  • April 2018: Facebook announces the launch of new policies to increase transparency around ads and Pages on Facebook. (Facebook Newsroom)
  • April 2018: Facebook announces the launch of a new initiative to help assess social media’s impact on elections. (Facebook Newsroom)
  • May 2018: Facebook launches a new initiative called “Inside Feed,” an online resource that claims to be a “behind-the-scenes look at Facebook’s fight against false news.” (Mashable, Inside Feed)
  • May 2018: Facebook releases a 12-minute short film called “Facing Facts” about Facebook’s fight against fake news. (Mashable, Facebook Newsroom, Facebook Newsroom)
  • May 2018: Facebook launches a digital and print ad campaign in an attempt to educate the public on how to identify fake news. (Mashable, Facebook Newsroom)
  • May 2018: Facebook introduces a policy that requires political ads on its platforms to be labeled with the name(s) of the company or individual funding them. (Facebook Newsroom)
  • June 2018: Facebook introduces a policy that allows users to view more information about Pages on Facebook, including any advertisements that they might be running. (Facebook Newsroom)

March 2018: Facebook faces backlash about how it handles user data. U.S. and British lawmakers ask the company to explain how Cambridge Analytica was able to collect private information on more than 50 million Facebook accounts without alerting users. (New York Times)

Subsequent Policy Change(s)

  • March 2018: Facebook pledges to make changes to its data policies, and introduces new measures to make its privacy controls easier to find and use. (Facebook Newsroom, Facebook Newsroom)
  • April 2018: Facebook announces an update made to its data policy to “better spell out what data we collect and how we use it in Facebook, Instagram, Messenger and other products,” as well as additional updates to restrict data access on the site. Facebook also asks users to review their privacy settings. (Facebook Newsroom, Facebook Newsroom, Facebook Newsroom)
  • April 2018: Facebook announces a program called the Data Abuse Bounty to “reward people who report any misuse of data by app developers.” (Facebook Newsroom)
  • April 2018: Facebook runs TV ads promising that “Facebook will do more to keep you safe and protect your privacy.” (The Verge, Wired)
  • May 2018: Facebook announces plans to build a feature called “Clear History” that will give users more information about, and control over, the use of their personal data by third-party applications. (Facebook Newsroom)
  • May 2018: Facebook introduces a customized message on individual users’ News Feeds with detailed explanations of their chosen privacy settings. (Facebook Newsroom)

April–June 2018: In a Congressional hearing in April, U.S. lawmakers demand accountability from Facebook with respect to its handling of user data. In June, the Wall Street Journal reveals that Facebook struck deals with companies, including Nissan Motor Co. and the Royal Bank of Canada, in 2015 that gave them access to the personal data of users. (Wall Street Journal, Wall Street Journal, Chicago Tribune)

Subsequent Policy Change(s)

  • June 2018: Facebook releases a 747-page document to Congress detailing the deals it made in 2015 with at least 61 individual companies that gave them access to users’ personal data, claiming the arrangements were necessary to give app developers time to adjust to changes in Facebook’s policies. The disclosure contradicts the company’s previous statements that it restricted third-party access to user data in 2015. (Wall Street Journal)

July 2018: The British broadcaster Channel 4’s documentary series Dispatches reveals that Facebook instructed content moderators to allow certain right-wing Pages containing content that violates Facebook’s rules to remain on the site because they generate “a lot of revenue for Facebook.” The allegation comes after the program sent an undercover reporter to work as a content moderator at CPL Resources, a third-party firm in Dublin, Ireland, contracted by Facebook. (Guardian)

Subsequent Policy Change(s)

  • July 2018: Facebook promises to review and update its training practices for all content moderators and review its content moderating staff at CPL Resources “to ensure that anyone who behaves in ways that are inconsistent with Facebook’s values no longer works to review content on our platform.” (Verge, Facebook Newsroom)

July 2018: Facebook faces growing criticism from human rights groups and international media for allowing content on its site that has incited violence. In October 2017, United Nations investigators and human rights groups accused Facebook of allowing anti-Muslim hate speech on its site that facilitated violence against Rohingya Muslims in Myanmar. In April 2018, the New York Times published an article highlighting false news on Facebook that encouraged violence against Sri Lanka’s Muslims. Since April 2018, at least 24 people have been killed across India in lynchings and other violent mob attacks reportedly motivated by false reports of child kidnappings spread over the Facebook-owned messaging platform WhatsApp. (New York Times, New York Times, New York Times, Washington Post, BBC News)

Subsequent Policy Change(s)

  • July 2018: Facebook states that it will create partnerships with local civil society groups in Sri Lanka and Myanmar to identify false information that should be removed from its site, and promises to remove all content that could lead to physical harm. The policy as announced, however, does not apply to the Facebook-owned messaging app WhatsApp. (New York Times)
  • July 2018: WhatsApp announces plans to limit message forwarding in India. The announcement comes two days after the New York Times reported that Facebook’s latest round of policy changes would not apply to WhatsApp. (Washington Post, New York Times)

November 14, 2018: The New York Times publishes a report detailing how COO Sheryl Sandberg and other Facebook executives worked to downplay and spin bad news. The article reveals that Facebook failed to act against the Russian campaign to disrupt the 2016 U.S. presidential election and against Cambridge Analytica’s effort to collect millions of users’ private data. The New York Times also exposes how Facebook engaged a public relations firm to criticize its competitors while deemphasizing concerns about Facebook’s own problems. (New York Times, CNBC)

Subsequent Policy Change(s)

  • November 15, 2018: One day after the New York Times exposé is published, Facebook CEO Mark Zuckerberg announces that the company will establish an independent body to oversee its content moderation systems. Facebook does not announce any key details about the body’s membership or how it will ensure the oversight body’s independence. (Facebook, CNBC)
  • May 6, 2020: More than a year after Zuckerberg’s statement, Facebook announces the first members to its new Oversight Board. In an op-ed, members of the board explain that, in this initial phase, users will be able to appeal to the board only in cases where Facebook has removed their content. The board also writes that “[w]e will not be able to offer a ruling on every one of the many thousands of cases that we expect to be shared with us each year.” (New York Times, Facebook, Oversight Board)

March 2019: Facebook faces criticism from lawmakers, advertisers, and the media after gunman Brenton Harrison Tarrant uses Facebook Live to broadcast the March 15 shootings at two mosques in Christchurch, New Zealand, killing 50 people. New Zealand police have to alert Facebook to the presence of the 17-minute video, which is viewed approximately 4,000 times before Facebook removes it. (Fox News, CBS News, Washington Post, Washington Post, CNN)

Subsequent Policy Change(s)

  • April 15, 2019: A month after the attacks, Facebook states that it will stop users from posting live videos if they have violated the company’s community guidelines in the past. The policy change is announced ahead of Facebook’s meeting with the Australian government, which had recently passed new legislation threatening fines and prison sentences for executives should tech companies fail to remove terrorist or other objectionable content from their websites. (Sydney Morning Herald, Business Insider)
  • May 14, 2019: A day ahead of the launch of the Christchurch Call, a voluntary commitment by governments and tech companies to combat online extremism, Facebook institutes a “one-strike policy” and bans those who violate its new Facebook Live rules. In a blog post, Facebook states that “anyone who violates our most serious policies will be restricted from using Live for set periods of time – for example 30 days – starting on their first offense.” The Christchurch Call was spearheaded by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron. (BBC News, Facebook)

August 2019: On August 3, Patrick Crusius opens fire at a Walmart in El Paso, Texas, killing 22 people and injuring at least 27 others. Crusius, a 21-year-old white man from Allen, Texas, confesses to authorities that he sought to target Mexicans. Investigators believe Crusius published a racist and xenophobic manifesto—which discusses a “Hispanic invasion of Texas”—on 8chan’s “politically incorrect” online message board prior to his shooting rampage. Copies of the white supremacist manifesto subsequently appear on Facebook and Twitter. (USA Today, Vox, New York Times, CNET)

Subsequent Policy Change(s)

  • August 2019: In the aftermath of the attack, a Facebook spokeswoman states that posting the manifesto to praise the shooting violated the company’s terms of use. Facebook also says that it is blocking links to sites that contain the manifesto. (CNET)
  • September 2019: Facebook states that it is expanding its definition of terrorist organizations and that it would block links from fringe sites like 8chan from being shared on Facebook. The company issues the announcement a day before Facebook and other tech firms are scheduled to testify before the U.S. Senate Commerce Committee. (New York Times, Facebook)

June 2020: On June 17, a coalition of civil rights groups launches Stop Hate for Profit. The campaign calls upon Facebook’s advertisers to boycott ad spending on the social media platform for the month of July in light of Facebook’s handling of misinformation and hate speech. Over the following week, major corporations like The North Face, Verizon, Unilever, and Honda America commit to joining the advertising boycott. Backlash against the company had intensified as misinformation and hateful content continued to appear on the platform amid U.S. protests against racism and police brutality. In May, Facebook had also refused to label or remove a post by U.S. President Donald Trump that said, “when the looting starts, the shooting starts.” (New York Times, New York Times, Forbes, Washington Post)

Subsequent Policy Change(s)

  • June 26, 2020: CEO Mark Zuckerberg announces that Facebook will begin labeling political speech that violates its Community Standards and will remove posts that incite violence or attempt to suppress voting, even from political leaders. Zuckerberg also states that Facebook will expand its “ads policy to prohibit claims that people from a specific race, ethnicity, national origin, religious affiliation, caste, sexual orientation, gender identity or immigration status are a threat to the physical safety, health or survival of others.” (Wall Street Journal, Washington Post, Facebook)

Summer 2020: The Conference on Jewish Material Claims Against Germany leads a campaign directly calling on CEO Mark Zuckerberg to remove Holocaust denial posts from the popular social media platform. The #NoDenyingIt campaign encourages Holocaust survivors to post video messages addressing Zuckerberg on Facebook and Facebook-owned Instagram. Zuckerberg was also criticized in July 2018 when, in an interview with the tech website Recode, he said that Facebook should not remove Holocaust-denying content because he did not think users were necessarily “intentionally getting it wrong.” (Associated Press, PR Newswire, CNBC, BBC News)

Subsequent Policy Change(s)

  • October 12, 2020: Facebook updates its hate speech policy to ban content that “denies or distorts the Holocaust.” The company explains that its latest policy change “is supported by the well-documented rise in anti-Semitism globally and the alarming level of ignorance about the Holocaust, especially among young people.” Zuckerberg also writes in a Facebook post, “I’ve struggled with the tension between standing for free expression and the harm caused by minimizing or denying the horror of the Holocaust. My own thinking has evolved as I’ve seen data showing an increase in anti-Semitic violence, as have our wider policies on hate speech. Drawing the right lines between what is and isn’t acceptable speech isn’t straightforward, but with the current state of the world, I believe this is the right balance.” (Facebook, Facebook)

December 2020: Ireland’s Department of Justice proposes new hate speech laws. The proposed bill—which is set to go before the Irish Cabinet in spring 2021—will make, among other things, the sharing of hate speech on social media a criminal offense. The proposal also concludes that “a company accused of displaying or distributing hateful material should be able to defend itself by showing that it has reasonable measures in place to prevent dissemination of this type of material in general, was complying with those measures at the time and was unaware and had no reason to suspect that this particular content was inciteful.” (Irish Central, Irish Times, Ireland’s Department of Justice)

Subsequent Policy Change(s)

  • January 29, 2021: Facebook promises to tackle hate speech in Ireland and launches a new abuse-reporting option in its News Feed. Users are now able to click the three dots next to a post to report it as hate speech. (Irish Examiner)

February 2021: A U.S. federal judge approves a $650 million settlement—one of the largest for privacy violations—of a lawsuit against Facebook for allegedly using face-tagging and other biometric data without users’ permission. The class action lawsuit was filed in Illinois in 2015, with nearly 1.6 million Facebook users in the state submitting claims. (Associated Press, NBC News)

Subsequent Policy Change(s)

  • November 2, 2021: Facebook, which changed its corporate name to Meta in late October 2021, says it will drastically scale back its facial recognition system in the coming weeks. Specifically, the company will shut down its Face Recognition system and delete data associated with more than one billion people’s faces. Still, Facebook will continue to use its facial recognition technology for “a narrow set of cases,” including helping users to gain access to a locked account, verifying identity for financial products, and unlocking a personal device. (NBC News, CNN, Facebook)

August 2021: On August 15, the Taliban take over the Afghan government and ramp up their presence on social media platforms. The events reportedly cause confusion among tech companies over how to moderate Taliban content. (New York Times)

Subsequent Policy Change(s)

  • August 2021: Following the Taliban’s takeover, Facebook claims that it has been enforcing a ban on the group “for years.” The company also says, “We also have a dedicated team of Afghan experts, who are native Dari and Pashto speakers and have knowledge of local context…” Facebook notes that its policy applies across all of its platforms, including Instagram and WhatsApp. Nonetheless, there are reports that the Taliban continues to use WhatsApp. A New York Times reporter also flags Facebook and YouTube accounts belonging to a Taliban spokesman and asks the companies to comment; Facebook fails to respond, but the accounts, which were created in September 2020, are removed. (Recode, BBC News, New York Times)
