Section 230 of the Communications Decency Act (CDA, codified 47 U.S.C. § 230) and the Safe Harbor provisions of the Digital Millennium Copyright Act (DMCA, codified 17 U.S.C. § 512) provide certain protections for operators of online services from some, but not all, third-party claims arising out of user content posted on those services. These protections are essential for the survival of services that host user content. However, there are many limitations to the scope of protection and conditions on what is required to be eligible for those protections. The law in this area continues to evolve, so we have summarized recent notable cases and legislation that publishers should consider in operating their user content programs.
Section 230 of the CDA protects providers of interactive computer services from liability in suits that may arise when other internet users post material to a provider’s platform, except for claims of intellectual property infringement (which in some circuits has been held to include publicity rights), violations of federal criminal law, and certain other narrow carve-outs. 47 U.S.C. § 230(c)(1).
- Can’t Avoid CDA Immunity Through Indirect Action:
The much-anticipated decision in Hassell v. Bird illustrates this principle. There, the California Supreme Court concluded that ordering a third-party platform operator (Yelp) to assist in removing defamatory content posted by a user would undermine the purpose of the CDA. 5 Cal. 5th 522 (2018). Reversing the Court of Appeal, the Supreme Court held that the plaintiff could not sidestep the CDA simply by declining to sue Yelp directly and instead seeking an injunction that bound Yelp; such litigation strategies cannot overcome CDA immunity for a platform provider. The court further reasoned that a platform’s mere failure to remove reviews from its website, after an order was obtained against the user who posted them, did not constitute “aiding and abetting” under California law, so CDA immunity continued to apply. The Supreme Court of the United States declined to take up an appeal in this case.
Additional recent case law and legislation further clarify the circumstances in which Section 230 immunity does and does not apply. For example, website providers are not immune from civil liability when (1) the provider is substantially involved in creating the content on its platform, (2) a specific statutory carveout to immunity applies, or (3) liability does not arise from the content published.
- Too Involved in the Creation of Content:
A 2018 case filed in the Southern District of New York against Facebook posed the question of whether Facebook was so involved in the creation of certain targeted advertisements on its website that it could be considered the creator of allegedly discriminatory content, thus barring Section 230 immunity. National Fair Housing Alliance et al. v. Facebook, Case No. 1:18-cv-02689-JGK (S.D.N.Y.). The Southern District of New York was not the first court asked to consider this underlying issue. The Ninth Circuit in Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC determined that Section 230 immunity did not apply to Roommates.com because, by posting a questionnaire and requiring users to answer it, Roommates was too involved in the creation of the content used to match individuals looking for roommates based on protected characteristics. 521 F.3d 1157 (9th Cir. 2008). On the other hand, the Seventh Circuit in Chicago Lawyers’ Comm. for Civil Rights Under Law, Inc. v. Craigslist, Inc. found that Section 230 immunity applied to Craigslist because Craigslist made no effort to categorize or classify users or postings in order to promote them. 519 F.3d 666 (7th Cir. 2008).
While the parties in National Fair Housing Alliance stipulated to dismiss this case in February 2019, what is important to take away from these cases is that whether a website provider is considered a “co-creator,” thus barring Section 230 immunity, depends on how much the provider is involved in creating content. The less involvement in promotion or creation of content, the more likely CDA immunity will apply.
- The Recent Immunity Carveout:
H.R. 1865, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), signed into law on April 11, 2018, amends Section 230 to clarify that the CDA does not shield website providers from civil liability for all content posted on their websites. Specifically, the amendment clarifies that Section 230 does not prohibit enforcement of federal and state law against website providers and users of interactive computer services when the content at issue concerns sexual exploitation of children or sex trafficking.
- When Liability Does Not Revolve Around Content Published:
Recently, the Georgia Court of Appeals clarified that CDA immunity does not apply when an interactive platform did not actually publish the content at issue. Maynard v. Snapchat, Inc., 346 Ga. App. 131 (2018). In that case, Snapchat was sued for negligence for providing a “speed filter” that lets users capture their speed while moving (e.g., driving). The driver in Maynard never completed the upload, and the plaintiff did not allege that a snap displaying the filter was ever posted. The court held that Section 230 immunity did not apply because Snapchat was not the publisher or speaker of the driver’s content. Similarly, the Wisconsin Court of Appeals held that the operator of a firearm-advertising website was not entitled to Section 230 immunity because the plaintiff’s claims were aimed at the design of the website, not the content posted on the provider’s platform. Daniel v. Armslist, LLC, 2018 WI App 32, review granted, 2018 WI 93. The court ignored two decades of precedent interpreting the CDA as a broad immunity. These cases illustrate the principle that bad facts lead to bad law.
- Exporting 230 Immunity:
Section 230 is a U.S. statute, but the recently negotiated USMCA (NAFTA 2.0) embraces Section 230-style immunity, extending its pro-free-speech policy to Mexico and Canada.
Like Section 230, the Safe Harbor provisions of the DMCA also provide certain protection for online services providers in the form of an affirmative defense to copyright infringement claims arising out of certain user content displayed at the direction of a user if certain “safe harbor” conditions have been met. 17 U.S.C. § 512.
- The New DMCA Designated Agent Requirement for Safe Harbor Protection:
On Nov. 1, 2016, the Copyright Office published an amendment to 37 CFR § 201.38 that changed the registration requirements for designated agents ‒ individuals designated to receive notifications of claimed copyright infringement ‒ under the DMCA. This new registration program sunset the old registry and requires entities that wish to avail themselves of Safe Harbor protection to register their designated agents through a new online registration system, paying a $6.00 fee per designated agent. Entities that had previously registered agents through the paper-based system were required to re-register online to maintain DMCA protection. Registrations must be renewed every three years.
Designated agents may be registered here: https://www.copyright.gov/dmca-directory/.
- Repeat Infringers:
Section 512 states that as a condition of eligibility for Safe Harbor protection, a service provider must adopt and reasonably implement a policy that provides for the termination of “repeat [copyright] infringers.” Recent case law has helped clarify the requirements of a permissible policy as well as the definition of repeat infringer.
The Fourth Circuit held in 2018 that a service provider that failed to reasonably implement a policy to terminate repeat infringers was not entitled to Safe Harbor protection. BMG Rights Mgmt. (US) v. Cox Commc’ns, Inc., 881 F.3d 293 (4th Cir. 2018). Also in 2018, the Ninth Circuit helped clarify the requirements of an acceptable termination policy, holding that a platform’s lack of a detailed written policy for terminating users who repeatedly infringe copyrights does not, by itself, forfeit DMCA Safe Harbor protection. Ventura Content, Ltd. v. Motherless, Inc., 885 F.3d 597 (9th Cir. 2018).
Regarding the definition of “repeat infringer,” the same BMG court held that the term should be defined broadly. The court rejected the defendant’s argument that “repeat infringers” in Section 512 means only “adjudicated infringers” (people who have been held liable by a court for multiple instances of copyright infringement), reasoning that the term sweeps more broadly.
- Expeditious Response:
If a platform provider responds expeditiously to remove, or disable access to, infringing material posted on its platform by another individual, the DMCA Safe Harbor provisions can, if the other eligibility requirements are met, protect the platform provider from liability. The power of this protection was recently discussed in Long v. Dorset, 2019 WL 861424 (N.D. Cal. Feb. 22, 2019). The Northern District of California granted Facebook’s motion to dismiss because it found that Facebook’s five-business-day response to the plaintiff’s initial email notifying it of allegedly infringing images, coupled with Facebook’s continued exchange of emails with the plaintiff to resolve the issue, satisfied the “responds expeditiously to remove” requirement of Section 512(b)(2)(E).
- An Important, But Unanswered, Question:
The DMCA requires that, for Safe Harbor protection to apply to an online publisher, user content be stored at the direction of the user, not the service provider. It remains unclear under what circumstances screening by a live person, rather than an automated software process, might preclude a finding that user content is being stored at the direction of the user.
In 2017, the Ninth Circuit held that where human moderators assist in selecting content submitted by users, an online service provider may not be eligible for Safe Harbor protection. Mavrix Photographs, LLC v. LiveJournal, Inc., 873 F.3d 1045 (9th Cir. 2017). The court distinguished the facts in Mavrix from automated activities in which service providers “did not actively participate in or supervise file uploading” (e.g., reformatting posts or performing other technological changes), and from manual screening for infringement and pornography, noting that such “accessibility enhancement” activities are the only kinds of screening the Ninth Circuit has accepted. The court acknowledged the Fourth Circuit’s LoopNet decision, which allowed human screening to enforce venue content rules (restricting content to real estate listings), but did not decide whether the Ninth Circuit would adopt that approach. Ultimately, the court expressed concern that human intervention interrupts the process of storing content at the direction of the user. In Mavrix, human moderators manually reviewed submissions and publicly posted only about one-third of them, based on whether a submission was thought to be trending and popular, clearly making editorial decisions well beyond what the Fourth Circuit approved in LoopNet.
Thus, it is unclear in the Ninth Circuit where the line should be drawn as to how much manual moderator intervention would result in the failure to meet the DMCA’s requirement that user content be stored at the direction of the user, not the service provider. It is also unclear where other circuits will draw the line after this decision.
A Small Side Note
A related copyright issue pertaining to online platforms is in-line linking. In-line linking places a line of HTML on one site that causes a webpage to display content directly from the third-party site where the content resides. A question presented to many courts is whether in-line linking is copyright infringement. Most courts that have considered the issue have ruled that direct liability for copyright infringement may not be imposed merely for creating a link to third-party content, except where the link connects to a live performance or other material streamed to the public. However, one 2018 case stands apart from the majority. In Goldman v. Breitbart News Network, LLC, a judge of the Southern District of New York expressly rejected the Ninth Circuit’s “server test,” under which courts consider whether the HTML instructions cause the image to be served directly from the computer on which it is stored. 302 F. Supp. 3d 585 (S.D.N.Y. 2018). The Ninth Circuit and other courts have found that this practice does not implicate the Copyright Act because no copy of the image is stored on the intermediary’s computer.
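To make the mechanics concrete, the following is a minimal HTML sketch (with hypothetical URLs) of the distinction the “server test” turns on: an in-line linked image is fetched by the reader’s browser directly from the third-party server where it resides, while a hosted copy is served from the publisher’s own server.

```html
<!-- In-line (embedded) link: no copy of the image is stored on the
     publisher's server. The reader's browser fetches the file directly
     from the third-party server where it resides. Under the Ninth
     Circuit's server test, this generally does not place a "copy" on
     the embedding site. (URL is hypothetical.) -->
<img src="https://third-party.example.com/photos/viral-photo.jpg"
     alt="Photo embedded from a third-party server">

<!-- Hosted copy: the publisher has stored its own reproduction of the
     image and serves it from its own server, which the server test
     treats differently. -->
<img src="/images/local-copy-of-photo.jpg"
     alt="Copy hosted on the publisher's own server">
```

Goldman rejected the premise that the location of the stored copy should control, reasoning that from the reader’s perspective both pages display the image identically.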
In Breitbart, the plaintiff sued a number of media organizations for embedding, without his permission, tweets containing a photograph he owned. Specifically, the issue was whether the media companies infringed the plaintiff’s copyright when they embedded in their articles tweets of an image that the plaintiff originally posted on Snapchat and that subsequently went viral on Twitter and other media platforms. The district court denied the media companies’ summary judgment motion, finding they might be liable for copyright infringement for in-line linking. The Second Circuit declined to hear an interlocutory appeal. On Jan. 4, 2019, the case partially settled. As a result, it remains unclear whether this issue will ever be resolved by the Second Circuit. Even if not, the district court holding is an outlier.
Online services that permit users to post and display messages, photos, profiles, reviews and other content should carefully administer user content programs to try to qualify, as much as practical, for the protections available under the CDA and DMCA. While there are gaps in the scope of protection and not all eligibility requirements may be met in all instances, these protections are important tools for online services to reduce their user content risks. For more information, contact the authors.