TECH TALK


EU Copyright Filtering Proposal

by Casandra Laskowski

Just as the flood of GDPR[1]-related privacy-policy-update emails ended, headlines turned to a controversial EU copyright reform proposal. There have been lively debates about this recent attempt to reform EU copyright law, and it has been a bit of a rollercoaster ride. Since I started drafting this post, there have been two votes on the proposal, and it is now up for debate again this fall. The proposal is quite large, but much of the criticism has centered on Articles 11 and 13. This post will focus on Article 13, which would require internet platforms to perform automatic filtering of all content uploaded by users.[2] This section has the potential to chill the creation of new works, stifle community content moderation, and complicate institutional repository management.

One complaint against this section is that the language is unclear. The Max Planck Institute has said it “creates legal uncertainty, in particular by its use of undefined legal concepts and barely understandable formulations.” For example, the proposal calls for copyright protection measures to be “appropriate and proportionate,” without defining what that might mean. Additionally, over 70 internet pioneers wrote a letter arguing that the versions of the text thus far do not “provide either clarity or consistency in their attempts to define which Internet platforms would be required to comply with the provision, and which may be exempt.”

This unclear scope is problematic because it could “create an expensive barrier to entry for smaller platforms and startups.” SPARC Europe, LIBER, and several other institutions wrote an open letter expressing worry that institutional repositories would face increased operating costs as they implement automatic filtering, as well as additional legal expenses to manage the risks of intermediary liability. All of this would have a negative impact on Open Access and Open Science. Because many repository entries are pre-prints, repository managers could be inundated with false positives to wade through.
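As a rough illustration of why pre-prints generate false positives, consider a toy text-similarity filter. Everything here is hypothetical (the work IDs, the threshold, the matching method); real systems such as YouTube's Content ID rely on proprietary audio/video fingerprinting, not simple text comparison. The point is only that an author's own pre-print naturally resembles the published version a rightsholder has registered, so a naive filter flags it:

```python
from difflib import SequenceMatcher

# Hypothetical index of "claimed" works registered by rightsholders.
CLAIMED_WORKS = {
    "journal-article-123": "We present a novel method for measuring stellar parallax.",
}

def similarity(a: str, b: str) -> float:
    """Return a ratio in [0, 1] of how closely two texts match."""
    return SequenceMatcher(None, a, b).ratio()

def filter_upload(upload_text: str, threshold: float = 0.8) -> list[str]:
    """Return IDs of claimed works the upload is 'too similar' to."""
    return [
        work_id
        for work_id, text in CLAIMED_WORKS.items()
        if similarity(upload_text, text) >= threshold
    ]

# An author uploads the pre-print of their *own* article. It closely
# resembles the registered published version, so the filter blocks it
# even though the upload is perfectly lawful: a false positive.
preprint = "We present a novel method for measuring stellar parallax."
flagged = filter_upload(preprint)
```

Each false positive like this lands in a human review queue, which is where the predicted increase in repository operating costs comes from.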

Article 13 also has the potential to chill speech and creation. Filtering algorithms, like the Content ID system employed by YouTube, are not error-proof. And while the proposal requires that platforms provide complaint and redress mechanisms, it does not specify what those should look like or how easy they should be to use. One creator had to fight a takedown after licensee Sony Music Entertainment erroneously claimed copyright over material the creator had licensed, and the process was opaque and tedious. Not all users are savvy enough to navigate complex processes if an algorithm flags their content; others might be capable but unwilling to invest the effort. Wikimedia also notes that requiring automatic filters “does not leave room for the types of community processes which have been so effective on the Wikimedia projects.”

On July 5, in a rare move, the European Parliament voted to reject the proposal, despite the Parliament's Committee on Legal Affairs having voted in favor of the strict policy on June 20. The Parliament vote was close: 318-278. This proposal has been debated for over two years, and the debate will reopen this fall. Even if a similar version of Article 13 passes, it may face additional legal challenges before implementation. The Max Planck Institute argues that the law conflicts with “Article 15 of the E-Commerce Directive[, which] prohibits Member States from imposing . . . general obligations to monitor the information which they transmit or store.” I recommend keeping an eye on the Deeplinks Blog, by the Electronic Frontier Foundation, or SPARC Europe to stay abreast of developments with the proposal.

Copyright 2018 by Casandra Laskowski.

About the author: Casandra Laskowski is a Reference Librarian and Lecturing Fellow at Duke Law. She received her J.D. from the University of Maryland School of Law, and her M.L.I.S. from the University of Arizona. Prior to pursuing her career as a law librarian, she worked as a geospatial analyst in the United States Army and served a fifteen-month tour of duty in Iraq. Her areas of interest include privacy, censorship, and the intersection of national security and individual liberty.

[1] If you are unclear about GDPR or just curious, IFLA has a great webinar you can watch.
[2] Article 11 creates an ancillary copyright, sometimes called a “link tax,” and this blog post provides a deep dive, including the history of ancillary copyright efforts in Germany and Spain.