Tracking Facebook’s Policy Changes

In March 2018, reports surfaced that the private information of up to 87 million Facebook users had been harvested by the firm Cambridge Analytica since 2014. This was not the first time Facebook had received backlash for mishandling user data: Facebook first issued an apology on the issue in 2007, when a feature called Beacon tracked and shared users’ online activity without expressly asking them for permission. In 2013, after admitting to a year-long data breach that exposed the personal information of 6 million users, Facebook promised to “work doubly hard to make sure nothing like this happens again.” Nonetheless, Facebook rushed to update its data policy only after the latest scandal broke, publishing at least five press releases in March and April 2018 detailing new measures and adjustments. (Reuters, New York Times, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom)

For over a decade, Facebook has faced criticism for the misuse of its platform on issues ranging from the publication of inappropriate content to user privacy and safety. Rather than taking preventive measures, Facebook has too often jumped to make policy changes only after damage has already been done. The Counter Extremism Project (CEP) has documented instances in which Facebook has made express policy changes following public accusations, a scandal, or pressure from lawmakers. While one would hope that Facebook is continuously working to improve security on its platform, there is no excuse for so many of its policy changes being reactionary, and this pattern raises the question of what other scandals are in the making due to still-undiscovered lapses in Facebook’s current policies. (Reuters, New York Times, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom, Facebook Newsroom)

November 2007: Facebook receives backlash in response to its Beacon advertising feature, which tracks users’ actions on other websites and shares that activity with their friends on Facebook. (New York Times)

Subsequent Policy Change(s)

  • December 2007: Facebook provides the option to opt out of using Beacon. (Facebook Newsroom)

February 2009: North Carolina Attorney General Roy Cooper and Connecticut Attorney General Richard Blumenthal demand that Facebook and other social media sites enact more privacy controls to protect children and teenagers. According to Blumenthal, a preliminary number of sex offenders found on Facebook was “substantial.” (NBC News)

Subsequent Policy Change(s)

  • December 2009: Facebook announces the formation of the Facebook Safety Advisory Board, a group of five leading Internet safety organizations that Facebook will consult on issues related to online safety. (Facebook Newsroom)

August 2009: The Office of the Privacy Commissioner of Canada recommends policy changes to Facebook after a year-long investigation of Facebook’s privacy policies and controls. The investigation was launched following a complaint from the Canadian Internet Policy and Public Interest Clinic. (Facebook Newsroom, Office of the Privacy Commissioner of Canada)

Subsequent Policy Change(s)

  • August 2009: Facebook updates its Privacy Policy to better describe its practices and provide reasons for data collection. (Facebook Newsroom)
  • December 2009: Facebook announces new tools that users can use to better review, understand, and update their privacy settings. (Facebook Newsroom)

May 2010: Facebook receives “intense criticism” from users over the complicated nature of the site’s privacy settings, including accusations that the site is trying to force people to share their data. (Guardian)

Subsequent Policy Change(s)

  • May 2010: Facebook announces that it will introduce simpler and more powerful controls for sharing personal information. (Facebook Newsroom)

November 2011: The Federal Trade Commission files an eight-count complaint against Facebook, charging the site with deceiving consumers by telling them they could keep their information on Facebook private. According to the complaint, Facebook told consumers that third-party apps could access only the user information needed to operate, when in reality they could access nearly all of a user’s personal data. Facebook was also charged with sharing user information with advertisers. (Federal Trade Commission, Federal Trade Commission, NBC News)

Subsequent Policy Change(s)

  • November 2011: Facebook reaches a settlement with the Federal Trade Commission in which it agrees to make several changes to its privacy control settings, such as obtaining consumers’ express consent before their information is shared. (Facebook Newsroom, Federal Trade Commission)

May 2013: A women’s activist campaign highlights pages on Facebook glorifying rape and violence against women, many of which passed the site’s moderation process. Several businesses pull their ads from Facebook as a result. (CNN, Women, Action, & the Media, Reuters)

Subsequent Policy Change(s)

  • May 2013: Facebook announces that it will update its guidelines and moderator training to crack down on gender-based hate speech. (CNN, Facebook)
  • June 2013: Facebook implements a new review policy for pages and groups aimed at restricting ads from appearing alongside pages that contain any violent, graphic, or sexual content. (Facebook Newsroom, Reuters)

October 2013: The Daily Beast and The Verge reveal that Facebook and its photo- and video-sharing platform Instagram are being used for private firearms sales, and that the sites have no policies in place regulating such sales. (The Verge)

Subsequent Policy Change(s)

  • March 2014: Facebook announces new regulations regarding the private sale of firearms on its sites. (Facebook Newsroom)

2016: Throughout the year, U.K. and European lawmakers express concern that social media platforms have become a “vehicle of choice” for extremists to recruit and radicalize. Several governments threaten legislative action against the tech companies. (Telegraph, Reuters, Wired)

Subsequent Policy Change(s)

  • December 2016: Facebook, Microsoft, Twitter, and YouTube launch a shared industry database of “hashes” (digital “fingerprints” of extremist imagery) in an effort to curb the spread of terrorist content online. (Facebook Newsroom)

November 2016: Facebook is first accused of allowing “fake news stories” to proliferate on its site, stories that may have swayed the 2016 presidential election. (Vox)

Subsequent Policy Change(s)

  • December 2016: Facebook introduces a new option to flag news stories as disputed and report potential hoaxes on the site. (Facebook Newsroom)

March 2017: Facebook faces backlash after a report surfaces revealing that hundreds of U.S. Marines were sharing nude photos of female colleagues and making degrading comments about them in a private Facebook group. (Buzzfeed, Reveal)

Subsequent Policy Change(s)

  • April 2017: Facebook introduces new tools to “help people when intimate images are shared on Facebook without their permission.” (Facebook Newsroom)

May-June 2017: U.K. and European lawmakers increase pressure against tech companies, calling for new laws to punish companies that continue to host extremist material on their platforms. On May 1, the U.K. Home Affairs Committee publishes a report saying that tech companies are “shamefully far” from taking action to tackle illegal and hateful content. In June, U.K. Prime Minister Theresa May calls on fellow G7 members to pressure tech companies to do much more to remove hateful and extremist material. (CNBC, U.K. Home Affairs Committee, Guardian)  

Subsequent Policy Change(s)

  • June 2017: Facebook launches the Global Internet Forum to Counter Terrorism (GIFCT), a partnership with Microsoft, Twitter, and YouTube aimed at combating extremist content online. (Facebook Newsroom)
  • May 2018: Facebook releases a report on its efforts since October 2017 to enforce its Community Standards and remove inappropriate, hateful, and extremist content. (Facebook Newsroom)

September-October 2017: Facebook discloses that the Internet Research Agency, a Russian company linked to the Russian government, bought more than $100,000 worth of political ads and disseminated content that reached 126 million users on Facebook in an attempt to sow discord among American citizens prior to the 2016 presidential election. Facebook receives additional accusations, including from U.S. President Donald Trump, that misinformation and “fake news” were spread on the platform in an attempt to influence the election. (New York Times, New York Times, Twitter, Facebook)

Subsequent Policy Change(s)

  • October 2017: Facebook introduces new measures to increase transparency for all ads, including a new requirement for political advertisers to verify their identities. (Facebook Newsroom, Facebook Newsroom)
  • December 2017: Facebook replaces its “Disputed Flags” feature with a “Related Articles” feature, also aimed at fighting false news on the site. (Facebook Newsroom)
  • April 2018: Facebook announces the launch of new policies to increase transparency around ads and Pages on Facebook. (Facebook Newsroom)
  • April 2018: Facebook announces the launch of a new initiative to help assess social media’s impact on elections. (Facebook Newsroom)
  • May 2018: Facebook launches a new initiative called “Inside Feed,” an online resource that claims to be a “behind-the-scenes look at Facebook’s fight against false news.” (Mashable, Inside Feed)
  • May 2018: Facebook releases a 12-minute short film called “Facing Facts” about Facebook’s fight against fake news. (Mashable, Facebook Newsroom, Facebook Newsroom)
  • May 2018: Facebook launches a digital and print ad campaign in an attempt to educate the public on how to identify fake news. (Mashable, Facebook Newsroom)
  • May 2018: Facebook introduces a policy that requires political ads on its platforms to be labeled with the name(s) of the company or individual funding them. (Facebook Newsroom)

March 2018: Facebook faces backlash about how it handles user data. U.S. and British lawmakers ask the company to explain how Cambridge Analytica was able to collect private information on more than 50 million Facebook accounts without alerting users. (New York Times)

Subsequent Policy Change(s)

  • March 2018: Facebook pledges to make changes to its data policies, and introduces new measures to make its privacy controls easier to find and use. (Facebook Newsroom, Facebook Newsroom)
  • April 2018: Facebook announces an update made to its data policy to “better spell out what data we collect and how we use it in Facebook, Instagram, Messenger and other products,” as well as additional updates to restrict data access on the site. Facebook also asks users to review their privacy settings. (Facebook Newsroom, Facebook Newsroom, Facebook Newsroom)
  • April 2018: Facebook announces a program called the Data Abuse Bounty to “reward people who report any misuse of data by app developers.” (Facebook Newsroom)
  • April 2018: Facebook runs TV ads promising that “Facebook will do more to keep you safe and protect your privacy.” (The Verge, Wired)
  • May 2018: Facebook introduces plans to build a feature called “Clear History” that will allow users to have more information about and control over personal data usage from third-party applications. (Facebook Newsroom)
  • May 2018: Facebook introduces a customized message on individual users’ News Feeds with detailed explanations of their chosen privacy settings. (Facebook Newsroom)