
Policy Debates over EU Platform Liability Laws: New Human Rights Case Law in the Real World

April 13, 2016

This is the last of four posts on the European Court of Human Rights’ (ECHR) rulings in Delfi v. Estonia and MTE v. Hungary. In both cases, national courts held online news portals liable for comments posted by their users – even though the platforms did not know about the comments. Those rulings effectively required platforms to monitor and delete users’ online expression in order to avoid liability. The ECHR condoned this outcome in Delfi, which involved threats and hate speech. But in MTE, it held that a nearly identical ruling in a case involving trade defamation violated the European Convention on Human Rights (Convention).  The monitoring order in MTE harmed defendants’ ability to provide a “platform for third-parties to exercise their freedom of expression by posting comments” online, the Court said, and was therefore impermissible under Article 10 of the Convention.

The MTE and Delfi rulings are relevant for current policy discussions in the European Commission and elsewhere. The idea of requiring intermediaries to monitor user-generated content has come up repeatedly as part of the Digital Single Market initiative, most recently in this winter’s “platform liability” consultation. (I contributed to that consultation from my current position at Stanford CIS, and was involved in the Commission’s previous Notice and Action inquiry as an attorney for Google.)

As long as the eCommerce Directive remains intact, monitoring proposals like those being floated in Brussels should be largely hypothetical for protected Internet hosts. But if the eCommerce safe harbors were re-opened for debate, the Convention-based limits on monitoring that the ECHR identified could suddenly become very important.

Monitoring for content so bad that platforms “know it when they see it”

One key question, if lawmakers sought to alter current EU law and expand monitoring obligations, would be what kinds of legal violations intermediaries can actually identify. Delfi condoned a monitoring obligation for some fairly extreme content – hate speech and threats of violence – that the Court thought anyone should recognize as “on their face manifestly unlawful.” (Para. 117) MTE held that a monitoring requirement violated expression and information rights under Article 10 of the European Convention, in a case involving more legally ambiguous statements defaming a business.

The Court’s Article 10 concerns – that intermediaries monitoring user comments would systematically remove protected expression – are particularly important for “gray area” statements that cannot easily be identified as legal or illegal. Lawful content in this gray area is much more likely to be removed by mistake. For example, an intermediary might err on the side of removal if an accurate legal judgment would require factual investigation or nuanced legal analysis. If lawmakers did consider requiring platforms to monitor content, they would need to think carefully about what kinds of content actually fall within the “manifestly unlawful” or “know it when you see it” category.

Assuming, as the Delfi court did, that there are kinds of illegal content that human reviewers can readily identify, the next question is: can software identify it, too? It is unrealistic to expect platforms operating at Internet scale to hire and train legions of employees to review every piece of user content.  If lawmakers really went forward with requiring platforms to monitor content, the monitoring would almost certainly be carried out by machines.  To pass scrutiny under MTE, then, would a law need to prescribe only monitoring that can reliably be automated without significant over-removal of lawful speech?  Does such a thing exist?
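To make the over-removal worry concrete, here is a minimal, purely hypothetical sketch in Python of the kind of cheap automated filter a platform might deploy to satisfy a monitoring obligation. The blocklist terms and sample comments are invented assumptions, not drawn from any real moderation system; the point is only that pattern-matching cannot see the context that makes a statement lawful or unlawful.

```python
# Hypothetical sketch: a naive keyword-based "monitor" of the sort a platform
# might deploy to comply with a monitoring obligation. Terms and comments are
# invented for illustration; no real moderation system is being described.

BLOCKLIST = {"thief", "fraud", "scum"}  # assumed terms drawn from past unlawful comments


def flag_for_removal(comment: str) -> bool:
    """Flag a comment if it contains any blocklisted term (no context analysis)."""
    lowered = comment.lower()
    return any(term in lowered for term in BLOCKLIST)


comments = [
    "The owner is a thief, do not trust this company!",         # possibly defamatory
    "The court convicted the director of fraud last year.",      # lawful reporting of fact
    "Calling your critics 'scum' is exactly the problem here.",  # lawful commentary quoting abuse
]

for comment in comments:
    print(flag_for_removal(comment), "-", comment)

# All three comments are flagged, including the two lawful ones: the filter
# cannot distinguish an accusation from a news report or a quotation.
```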

Suppose the law mandated monitoring that in theory could be carried out accurately, but intermediaries in reality fell short of that standard and used cheaper tools that frequently removed lawful speech.  Would that practical result mean the law itself violated Article 10? Does the answer depend on the foreseeability of flawed, real-world implementation by intermediaries?

If the European Commission and other policy makers are serious about the “take down, stay down” idea, they will quickly arrive at questions like these.  The MTE and Delfi rulings provide important parts of the answers.

Obligations for platforms that “already have monitoring tools”

Proponents of monitoring obligations often argue that if an intermediary already has tools to detect duplicate content (or to otherwise detect unlawful content), it should have to use them. Copyright holders make this claim about YouTube’s Content ID; Max Mosley made it about the duplicate image detection Google uses to fight child abuse images; law enforcement has raised it with respect to pro-terrorist and hate speech content on numerous platforms. This argument in favor of policing by intermediaries is variously framed: as a matter of practicality, economic efficiency, traditional “due diligence” doctrine, and more. It also arises as a question of eCommerce Directive interpretation. The eCommerce-based argument is that when intermediaries scan and recognize content for business purposes, they become too “active” to claim the safe harbors of Articles 12-14, so Article 15’s protection from monitoring obligations does not apply.

Those of us who oppose monitoring obligations often respond that this line of thought creates perverse incentives: no platform will want to build tools to fight bad content voluntarily if doing so only increases its own obligations and potential liability. Besides, existing tools can be imprecise and take down good content with the bad – as the CJEU has recognized in overruling monitoring injunctions. The risk of over-removal is enough of a problem when intermediaries use the tools for narrow purposes like identifying child sexual abuse content. If they are compelled by law to use those tools on harder-to-identify content, the scope for error and removal of lawful speech increases. Imposing a legal obligation to monitor would also raise more acute Article 10 concerns than voluntary efforts, because it puts governments in the position of compelling private actors to delete controversial expression.
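To illustrate the gap between those two tasks, here is a minimal hypothetical sketch in Python. A SHA-256 hash stands in for real fingerprinting systems (which are more sophisticated, but face the same basic limit): exact-match fingerprinting reliably re-detects a known, already-adjudicated file, but it gets no purchase on defamation, where unlawfulness lies in facts and context rather than in a fixed sequence of bytes.

```python
# Hypothetical sketch: why tools built to re-detect *known* items do not
# transfer to gray-area content. SHA-256 is a stand-in for real fingerprinting
# systems; the examples are invented for illustration.

import hashlib


def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of a piece of content."""
    return hashlib.sha256(data).hexdigest()


# A file already identified as illegal can be re-detected reliably:
# any byte-identical re-upload produces the same fingerprint.
known_unlawful = {fingerprint(b"<bytes of a previously identified file>")}
reupload = b"<bytes of a previously identified file>"
print(fingerprint(reupload) in known_unlawful)  # True: the duplicate is caught

# Defamation has no fixed byte sequence to fingerprint. A trivial rewording
# defeats the match, and whether the words are lawful depends on facts and
# context that no fingerprint can capture.
comment_a = b"Company X cheated me out of my deposit."
comment_b = b"Company X cheated me out of a deposit."  # near-identical rewording
print(fingerprint(comment_a) == fingerprint(comment_b))  # False: no match
```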

The argument that intermediaries with existing monitoring tools can legally be compelled to monitor more fits strangely with the Delfi and MTE cases.  Some platforms, after all, may develop monitoring tools precisely because of the Delfi ruling. Can lawmakers bootstrap from there to say the platforms then must use those same tools to find other kinds of content – like the defamation at issue in MTE, for example? That would be a lot like Mosley’s argument: that because Google built tools to find and weed out child sex abuse imagery, it must use those same tools to find pictures that violated his privacy.

Delfi and MTE highlight the perversity of this reasoning.  A platform that institutes monitoring – whether voluntarily or to comply with Delfi – should not for that reason be denied protection under MTE. That outcome would create the wrong incentives for platforms, do harm to Internet users’ Article 10 rights, and defeat the purpose of these ECHR rulings.

Conclusion

This story is far from over. For one thing, the MTE case could still be appealed to the ECHR’s Grand Chamber. For another, the issues it raises about Internet monitoring and online free expression and information will recur in litigation and legislative battles in Europe and around the world. But the reasoning and outcome of the MTE case are an important step forward, one that should influence outcomes in other fora.

 

This article was originally published on the CIS Blog as “Policy Debates over EU Platform Liability Laws: New Human Rights Case Law in the Real World.”