Tech Advancements Allow Creation of Hard-to-Detect Phony Video, Audio Recordings
New York, NY – In advance of the upcoming Senate and House committee hearings featuring testimony from Facebook CEO Mark Zuckerberg, the Counter Extremism Project (CEP) today called on Facebook and other tech companies to take a more aggressive approach to the spread of fake news and information enabled by new, widely available artificial intelligence (AI) tools. From the abuse of the data of more than 85 million Americans in the Cambridge Analytica scandal, to the manipulation and spread of fake media content, to the promulgation of terrorist propaganda, Facebook’s consistent response has been either lackadaisical or one of all-out denial.
“As we have seen, the issue of fake news proliferating online is something that will not simply disappear,” said Dr. Hany Farid, CEP Senior Advisor and Dartmouth College Computer Science Professor. “In the wrong or malicious hands, new artificial intelligence technology can be weaponized to produce fake images, fake audio, and fake video. This becomes a serious threat to our democracy when someone can create footage of leaders doing and saying things that never happened. This manipulation of content and information can have real-life repercussions and serious national security implications. It is important for Facebook and other tech firms to acknowledge the extent of this problem and agree on a strategy to effectively expose and defeat the coming wave of doctored audio and video online.”
With studies showing that fake news tends to spread far faster than the truth, the potential harm unleashed by this technology could be immense. Tech companies, while aware of the burgeoning problem, have yet to take concrete steps to blunt its impact or to develop ways to prevent fake news from being uploaded and shared on their platforms. Clearly, they are not doing enough.
Experts Have Found That Fake News Often Spreads Faster & Has Further Reach Than Other Forms Of Media:
A 2018 MIT Study Found Fake News Spreads Faster And Reaches More People Than The Truth. “They found that whereas the truth rarely reached more than 1000 Twitter users, the most pernicious false news stories … routinely reached well over 10,000 people. False news propagated faster and wider for all forms of news – but the problem was particularly evident for political news.” (Katie Langin, “Fake News Spreads Faster Than True News On Twitter – Thanks To People Not Bots,” Science Magazine, 3/8/18)
- “An Analysis Of The Words In The Tweets Suggested That False News Instilled Fear, Disgust And Surprise, Whereas True News Was More Likely To Arouse Feelings Including Sadness, Joy And Trust.” “These features, especially the greater ‘novelty,’ might be expected to draw more attention to false news, the researchers say.” (Philip Ball, “’News’ Spreads Faster And More Widely When It’s False,” Nature, 3/8/18)
In 2017, Experts Said “The Very Nature Of Social Media Networks Could Be Making Us Peculiarly Vulnerable” To False News And That “Education And Filtering Technology Might Not Be Enough.” (Mark Buchanan, “Why Fake News Spreads So Fast On Facebook,” Bloomberg, 8/31/17)
Facebook’s Flagging Feature To Highlight Fake News On Its Platforms Has Been Found To Be Faulty. “The tagging of fake news is not consistent, and some stories that have been flagged continue to circulate without a warning. In other cases, traffic to fake news posts actually increased after Facebook applied the warning.” (Matthew Ingram, “Facebook’s Fact Checking Can Make Fake News Spread Even Faster,” Fortune, 5/16/17)
Tech Companies Have Been Criticized “For Enabling The Spread Of This Kind Of Misleading Or Incorrect Information, Because They Allow Any User Or Even Automated Bots To Post Legitimate-Looking Articles, Which Then Proceed To Spread Like Wildfire Through ‘Liking’ And ‘Sharing.’” (Ben Panko, “How Fake News Breaks Your Brain,” Smithsonian, 6/30/17)
CEP Senior Advisor Hany Farid Has Called For Tech Companies To Step Up & Deal With This Issue:
According To Dartmouth Computer Scientist Hany Farid, The Issue Is Likely To Get Worse. “It’s an arms race. As we are developing faster, folks are creating more sophisticated technology to augment audio, images and video. The way this is going to end is that you take the ability to create a perfect fake out of the hands of the amateur.” (Elizabeth Gibney, “The Scientist Who Spots Fake Videos,” Nature, 10/6/17)
- Farid: “We Have A Fake News Phenomenon That Is Not Going Away. Add To That Fake Images, Fake Audio, Fake Video And You Have An Explosion Of What I Would Call An Information War.” (CBS This Morning, 3/12/18)
- Farid: “I Feel Like If We Keep Going Down The Road We Are Going I Am Not Optimistic. I Think Things Are Going To Keep Deteriorating Online.” (Matt Reynolds, “Technology Can’t Rescue Us From AI-Generated Fake News,” Wired, 1/9/18)
- Farid: “For Decades, Technology Companies Have Claimed That They Cannot Rein In Online Abuses.” “With increased criticism and threats of regulatory legislation from the United Kingdom, the European Union, and United States, technology companies have finally said that they are going do more to rein in abuses. There is no question that this is a difficult problem, but there is also no question that reining in online abuses is not a priority for these companies, and so they have been frustratingly slow in responding to very real threats with very real consequences.” (Charlotte Albright, “Farid To Facebook (And Others): Root Out Extremist Content,” Dartmouth News, 6/29/17)
- Farid: “From Child Exploitation To Online Extremism, Illegal Sex Trade, Cyber-Bullying, Revenge Porn, Cyber-Crime, Fake News, Malware, And Trolling, The Internet, And Social Media In Particular, Are Becoming Poisonous. By Not Addressing These Abuses Head-On, These Companies Run The Risk Of Alienating Their Users, And More Importantly For Them, Advertisers.” (Charlotte Albright, “Farid To Facebook (And Others): Root Out Extremist Content,” Dartmouth News, 6/29/17)