On June 7, Tech Against Terrorism and the Global Internet Forum to Counter Terrorism (GIFCT) hosted a webinar on hash sharing and identifying terrorist content online. According to the email invitation, the webinar was the “first e-learning session,” in which participants would discuss the “benefits and challenges with image, video, and audio hashing in detecting, classifying, and moderating terrorist content online.”
Presenters, including representatives from Facebook, were meant to share information and best practices with smaller tech firms in their efforts to combat the spread of terrorist material online. Incredibly, this webinar represents the first e-session on hashing despite Facebook, Microsoft, Twitter, and YouTube’s joint announcement of the GIFCT in June 2017.
Despite its lack of meaningful action, Big Tech has not hesitated to promote the GIFCT and its stated activities. For example, in December 2017, the GIFCT touted that its database of terrorist content contained more than 40,000 hashes and that access had been shared with new members such as Ask.fm, Cloudinary, Instagram, JustPaste.it, LinkedIn, Oath (now Verizon Media), and Snap. The GIFCT failed to provide specifics, such as how much content had actually been identified, removed, and blocked using those 40,000 hashes. Most recently, the GIFCT claimed that the database had grown to 200,000 hashes but again failed to provide concrete details about how those hashes have been used to reduce the amount of terrorist content available online.
Following the webinar’s conclusion, Tech Against Terrorism and the GIFCT published and circulated a case study that was “intended to provide insight into how a Public-Private Partnership has contributed to the fight against terrorist use of the internet by using the [GIFCT’s] hash-sharing database to support small platforms.” The case study was conducted with JustPaste.it, one of the smaller tech platforms that is a member of the GIFCT, which received technical and operational support from Facebook.
Disturbingly, the case study revealed that: “Companies are NOT required to agree on shared content policies in order to participate, which means that if a company finds a match to the content database they are not obligated to report on that match to the consortium or anyone else. They are not even required to take action.” This proves that when the tech industry describes GIFCT’s ability to “substantially disrupt” terrorists’ misuse of sites and platforms, it is misleading the public. Participation and sharing of information in the GIFCT are completely voluntary, and thus GIFCT expansion and activity do not translate into substantive action by member companies to share, let alone remove, verifiably terrorist content.
The case study also proves that the GIFCT is, in fact, capable of facilitating cooperation between large and small tech firms, negating the tech industry’s argument that regulation mandating content removal is a massive burden for small- and medium-sized companies. In this case with JustPaste.it (which has one employee), Facebook provided support “at all stages of the technical integration,” a developer for technical assistance, and a handbook with a “detailed description of the technology and on-boarding procedure.” Indeed, when the GIFCT applies itself to its stated mission of supporting smaller tech companies, compliance in removing known terrorist content is clearly possible. JustPaste.it, which has hosted ISIS propaganda as well as a video in which al-Qaeda leader Ayman al-Zawahiri outlines strategies for the terror group’s fight against the United States and describes the U.S. as a supporter of global evil, also noted that it took approximately one month to begin potential content removal via the GIFCT’s hash-sharing database. Contrary to how it is framed in the case study, one month is not significantly “time consuming” in the overall scheme of implementing new compliance measures.
JustPaste.it’s use of the database also confirmed that the hash-sharing tool allows companies “to consult verified terrorist content and decreases dependence on user content reporting and government takedown requests.” In other words, shared hashes allow tech firms to be proactive in their search for and removal of known terrorist images and videos. Interestingly, the case study materials describe this as “independence” for companies in terms of their content moderation efforts, rather than raising content moderation as a free speech concern. When the GIFCT has an audience of tech experts, proactive content removal is discussed in terms of corporate efficiency. When the GIFCT, through tech lobbyists, is speaking to lawmakers threatening regulatory action, the same discussion is presented in a way that raises fears about infringing on civil and human rights.
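For readers unfamiliar with the mechanism, a minimal sketch of how hash-based matching works is below. It uses a plain SHA-256 cryptographic hash for illustration; in practice, systems like the GIFCT database typically rely on perceptual hashes (e.g., PDQ or PhotoDNA), which also match slightly altered copies of an image or video. The file contents and database entries here are hypothetical placeholders.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a SHA-256 fingerprint (hash) of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical shared database: member platforms exchange hashes of
# known terrorist media, never the media files themselves.
shared_hash_db = {
    fingerprint(b"known-propaganda-video-bytes"),
    fingerprint(b"known-propaganda-image-bytes"),
}

def is_known_content(upload: bytes) -> bool:
    """Proactively flag an upload whose hash matches the shared database,
    without waiting for a user report or government takedown request."""
    return fingerprint(upload) in shared_hash_db

print(is_known_content(b"known-propaganda-video-bytes"))  # exact re-upload: True
print(is_known_content(b"unrelated-user-content-bytes"))  # no match: False
```

Note that exact cryptographic hashes only catch byte-identical re-uploads; a single changed pixel produces a different hash, which is why industry systems favor perceptual hashing for images and video.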
The webinar confirms existing concerns about whether the GIFCT represents a serious effort to combat online extremism or merely a minimal attempt to assuage legislators’ criticisms about the spread of terrorist content on the Internet. Given that the GIFCT continues to resist the establishment of industry standards and reveals little about the actual use of its hashing database, it is clear that lawmakers must take regulatory action. As a purely voluntary exercise controlled and funded by the tech industry, the GIFCT and its members cannot be trusted to take consistently decisive and permanent action in the interest of public safety.