Amended 9th Circuit Decision Does Not Clarify the Extent to Which Service Providers Can Manually Screen for Inappropriate User Content

In April 2017, the 9th Circuit Court of Appeals startled online service providers that allow users to post content (known as UGC, or user-generated content) by holding that using moderators to screen out inappropriate UGC before it becomes publicly available could cost a service provider the Digital Millennium Copyright Act (DMCA) safe harbor under 17 U.S.C. Section 512(c), which insulates service providers from third-party copyright infringement claims arising out of UGC. Such screening is a common practice, especially for brands that run UGC contests and for children's or family sites. Service providers have long understood that, unlike the immunity Section 230(c)(1) of the Communications Decency Act (CDA) grants them for most non-intellectual property claims (other than certain federal law violations) arising out of UGC, the DMCA safe harbor can be lost through content curation. However, they had relied on decisions like the 4th Circuit's 2004 opinion in CoStar Group v. LoopNet, which permitted cursory manual screening to weed out content that was infringing or clearly inappropriate for the venue, including content that violated topical venue rules – in that case, a rule that only real estate listing photos would be posted on a real estate listing service. On Aug. 30, 2017, the 9th Circuit amended its opinion in Mavrix Photographs v. LiveJournal, but it provided no more helpful guidance than the initial April 2017 opinion on where the line should be drawn – that is, how much manual moderator intervention results in a failure to meet the law's requirement that the UGC be stored at the direction of the user, not the service provider.

The appellate court remanded to the district court for fact-finding. If the alleged facts are proven on remand, the district court will likely not need to consider the nuances of where the line should be drawn. The plaintiff alleges that the defendant's moderators made substantial editorial curation decisions, allowing public display of only about a third of user submissions, based on guidelines reflecting what the defendant perceived to be the most popular content. The 9th Circuit instructed the court below to examine whether those activities went beyond "accessibility-enhancing activities," which prior 9th Circuit cases had held were permissible service provider involvement in the UGC uploading process. The Mavrix court noted that "accessibility-enhancing activities include automatic processes, for example, to reformat posts or perform some technical change. … Some manual service provider activities that screen for infringement or other harmful material like pornography can also be accessibility-enhancing." The court also noted that Section 512(m) protects monitoring undertaken to weed out infringing content. However, it characterized LoopNet – another circuit's opinion, as it observed – as having "extended accessibility-enhancing activities" when it permitted "'cursory' manual screening to determine whether photographs depicted real estate." So, in the 9th Circuit, can a service provider screen out more than pornography and likely copyright infringement? And if it uses live persons rather than an automated software process to do so, is it even less likely to retain its Section 512(c) protection? Today the answer remains unclear, and we are unlikely to get one from the 9th Circuit on remand of Mavrix under the facts of this case.

Accordingly, online service providers should consider limiting their screening to likely copyright infringement and the most objectively inappropriate content, such as pornography, and where possible should use automation rather than live persons to do so. This means a brand or service provider that screens out, for instance, UGC disparaging of it, or content that is simply inappropriate for the venue (e.g., anti-immigration political speech on a brand's user forum about wine and cheese), may well lose the safe harbor, at least in the 9th Circuit. Service providers will have to weigh the risk of losing the DMCA safe harbor against brand protection and venue integrity. Presumably, even in the 9th Circuit, the more objectively inappropriate the UGC (e.g., hate speech), the more likely the screening will be seen as "accessibility-enhancing." It is unclear whether the breadth of permitted DMCA content screening will be found to mirror the CDA's explicit allowance under Section 230(c)(2)(A) of blocking and removing content "that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." However, the 9th Circuit's questioning of LoopNet suggests that moderating "otherwise objectionable" content, at least, goes too far under the DMCA's statutory limitations. The Mavrix court's discussion of LoopNet was merely dicta – the panel said only that the 9th Circuit has yet to address the question – but service providers should keep in mind that the CDA and the DMCA offer different protections, and should mind the statutory differences between what they can do under the CDA while retaining that immunity and what they must not do under the DMCA to maintain that safe harbor, especially given the Mavrix panel's narrow interpretation of the DMCA.

There are a few other lessons in Mavrix for online service providers that publish UGC. The most notable – and the primary issue addressed by the 9th Circuit panel in the case – is that volunteer user moderators will be deemed agents of the service provider, and their acts attributed to it, if the common-law test for agency is met. This means service providers that operate user-moderated services need to be careful about how much control and influence they exert over those moderators and their activities, lest they lose their Section 512(c) protection when the moderators make content publication decisions that go beyond accessibility enhancement and screening out pornography and obvious copyright infringement. On the other hand, if the user moderators remain merely other users and not agents of the service, then their editorial curation results in a UGC posting by a user – albeit a different user – and not by the service. The court also reinforced the "red flag" rule that a service provider loses its safe harbor when it knows or should know that publicly available UGC is infringing, effectively imposing an accuracy requirement on any screening voluntarily undertaken by or on behalf of the service provider. Finally, the appellate court instructed the lower court to determine whether the service provider had the right and ability to control the UGC and received a financial benefit from the infringing content – a vicarious liability standard codified into the DMCA safe harbor qualifications.

Developing and operating a DMCA copyright infringement safe harbor program for UGC is far more complex than just registering a designated agent with the Copyright Office and running a notice-and-takedown program for infringing content. As in many other areas of the law, companies will need to make risk tolerance decisions, balancing competing interests, in determining whether to moderate or screen UGC and, if so, how and for what. For more information, contact the author.