The Communications Decency Act (“CDA”)1 was crafted at the dawn of the Internet Age to provide immunity for any “interactive computer service”2 (“ICS”) from liability based on information provided by third parties.3 Colloquially known by its section number in Title 47 of the U.S. Code, Section 230 shielded the nascent industry by greatly limiting federal claims4 against an ICS and preempting conflicting state claims.5 Section 230’s immunity clause has generally been interpreted by courts to cover virtually all forms of third-party content published by platforms of all types (e.g., Facebook, Airbnb, Tinder, and Amazon), even if the information is patently inaccurate, illegal, or intended to deceive.
The cases included in this survey relate to a wide array of business law topics, including hosting a marketplace for products designed to inflict harm or for recruitment of—and communications by—like-minded individuals (Part II.A), liability for marking a competitor’s products as a security risk (Part II.B), loss of immunity due to a role in compiling information (Part II.C), removing content or users from a platform (Part II.D), and the Snapchat speed filter (Part II.E). The future of Section 230 is uncertain due to controversy surrounding the increased moderation by major platforms, including warning labels, user bans, and content removal, which prompted an executive order from the President encouraging increased scrutiny of Section 230’s application and multiple public statements calling for the repeal or amendment of the law itself (Part III). Given the wide-ranging application of Section 230’s immunity clause to companies that constitute a significant portion of the economy, all business lawyers should monitor its current applications, the impact of the recent executive order, and any future statutory amendments or new administrative rules.
Crime victims with significant injuries who attempt to recover civil damages are often forced to look for responsible parties other than the tortfeasor to obtain redress. Defendants in these civil suits can include insurers, co-conspirators, or entities that enable the crime to occur through their own negligent conduct. Armslist, a marketplace for firearms, is a frequent target of plaintiffs who fall victim to crimes committed by its customers.6 Unlawful use of firearms typically results in both extensive harms and a defendant who is judgment proof.
In the case of Stokinger v. Armslist, LLC,7 the plaintiff was a Boston police officer shot by a convicted felon using a gun purchased from an arms trafficker who used Armslist to purchase dozens of firearms for resale. The customers of the arms trafficker were typically those who could not legally purchase a firearm or wanted to purchase a firearm that could not be traced to the end user. The plaintiff argued that Armslist illegally facilitated the trafficking of firearms to prohibited owners.8
Armslist defended the suit by claiming immunity under Section 230. In arguing against the applicability of Section 230, Officer Stokinger put forth several arguments, including that the claim was based not on Armslist’s actions as a publisher but on its design and maintenance of the site, that Armslist knew or should have known that its website facilitates arms trafficking, and that there should be a presumption against preemption of state law.9 The court rejected each argument, holding that the plaintiff had not shown why his case should depart from precedent.10
In Force v. Facebook, Inc.,11 the Second Circuit held that Section 230 insulated Facebook from claims alleging that it provided material support to terrorists by doing too little to limit their use of the platform for recruitment and communication. However, the case is most noteworthy for Chief Judge Katzmann’s dissenting opinion on the issue of whether algorithmic content matching deviates substantially from the fairly neutral purveyor or presenter of information that was envisioned in 1996 when Section 230 was enacted.
In his dissent, Judge Katzmann asserted that the pendulum has swung too far toward immunity, stating: “[W]e today extend a provision that was designed to encourage computer service providers to shield minors from obscene material so that it now immunizes those same providers for allegedly connecting terrorists to one another.”12 Furthermore, despite the fact that Facebook does publish content created by these groups, “it strains the English language to say that in targeting and recommending these writings to users—and thereby forging connections, developing new social networks—Facebook is acting as ‘the publisher . . . of . . . information provided by another information content provider.’”13
Two cases in the Ninth Circuit addressed the applicability of Section 230 to the screening decisions made by the designers of cybersecurity software, such as spam filters, malware screens, and antivirus protection. The courts analyzed Section 230’s immunity shield in light of the presence or absence of an anticompetitive animus in the screening actions taken.
In Enigma Software Grp. USA, LLC v. Malwarebytes, Inc.,14 the Ninth Circuit heard the complaint of a software company claiming that the defendant, a directly competing malware-protection service, was flagging its product as a “potentially unwanted program.”15 The defendant, Malwarebytes, asserted that its actions were immunized by Section 230(c)(2)(A), which allows an ICS to filter “material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.”16
The court ruled that Section 230 does not immunize Malwarebytes against claims alleging deceptive business practices and tortious interference. It reasoned that “[u]sers would not reasonably anticipate providers blocking valuable online content in order to stifle competition. Immunizing anticompetitive blocking would, therefore, be contrary to another of the statute’s express policies: ‘remov[ing] disincentives for the . . . utilization of blocking and filtering technologies.’”17
Soon after, the Northern District of California decided Asurvio LP v. Malwarebytes Inc.,18 in which the same defendant was sued for similarly flagging and blocking the software of another company. While Asurvio’s software, like that of Malwarebytes, provided computer maintenance services, its product did not have the same primary function as that of Malwarebytes. The court applied the decision in Enigma but arrived at the opposite conclusion, holding that the Section 230 exception carved out in Enigma only applies when an ICS, like an antivirus or malware screening service, filters out software made by a company that is a “direct competitor.”19
Stephanie Lukis brought a putative class action suit against Whitepages Inc. and Instant Checkmate LLC, both purveyors of online background reports.20 Advertisements for each company’s services included free previews of the reports, which were available for individual sale or through a subscription allowing an unlimited number of reports each month.
Lukis alleged that the companies violated Illinois’ right of publicity law by using her name and other identifying information for a commercial purpose without her consent.21 Whitepages argued, inter alia, that Section 230 immunizes it as the mere publisher of information provided by another source.22 The court disagreed, holding that “Whitepages did not act as a mere passive transmitter or publisher of information that was ‘provided by another information content provider.’ . . . Rather, it is alleged to have actively compiled and collated, from several sources, information regarding Lukis.”23 Thus, Section 230 immunity was denied to the defendant companies.
Courts continue to wrangle with various forms of content moderation on ICSs. The actions that prompt these suits range from the removal of a single post to temporary suspensions, permanent deletion of an account, and, in extreme cases, blocking the offending user from creating a new account on the platform. In Wilson v. Twitter,24 Twitter did all of the above to the plaintiff, who used his account to publish hate speech aimed at the LGBT community. Wilson asserted several claims, including one under the Civil Rights Act of 1964, which the court rejected on the merits.25 In addition, the court held the claims were barred by Section 230, observing:
While this case does not represent the “typical” case envisioned by § 230 immunity, wherein a litigant seeks to hold an interactive computer service provider liable for publishing content from a third-party which the litigant finds objectionable, courts have readily found that the statutory immunity also applies to the factual scenario presented here, where the plaintiff objects to the removal of his or her own content.26
Similar conclusions were reached in other cases in which platforms removed the content, profiles, and/or platform access of users who violated the terms of service. In King v. Facebook, Inc.,27 the platform removed multiple posts by a Black activist and suspended his account several times. The plaintiff alleged that “Facebook’s treatment of black posters is not equivalent to its treatment of others, [in] that Facebook ‘has allowed whites to say the same thing that blacks have been banned for.’”28 The court held that, “[b]ecause each cause of action accuses Facebook of wrongful acts it took as a publisher, none survives the application of Section 230(c)(1) of the CDA.”29
In Federal Agency of News LLC v. Facebook, Inc.,30 the plaintiff brought several claims against Facebook for shutting down its account. The plaintiff—a company controlled by a Russian, state-sponsored media operation—created 470 Facebook pages and produced 80,000 pieces of content that reached 126 million Americans in an effort to influence the 2016 U.S. presidential election.31 Facebook removed the account on the ground that it violated Facebook’s terms of service. The court held that Facebook could rely on Section 230 as a defense, because the plaintiff’s “claims are based on Facebook’s decision not to publish [its] content.”32
In Domen v. Vimeo, Inc.,33 Vimeo successfully invoked Section 230 against claims based on its banning videos promoting or showing conversion therapy. Also referred to as Sexual Orientation Change Efforts (“SOCE”),34 conversion therapy seeks to turn a homosexual or transsexual person into a heterosexual, gender-normative person through behavior modification. This controversial therapy has been strongly opposed by the American Psychological Association for more than a decade,35 and several states have enacted restrictions or bans on its practice.36 Vimeo followed suit by adopting an anti-SOCE policy that resulted in the removal of the plaintiff’s content, and the court agreed that the removal was protected by Section 230. The opinion significantly contributes to the jurisprudence on SOCE by adding a reported opinion from a district court in the Second Circuit, as few cases have been reported outside of the Ninth Circuit.
Finally, in a case not involving Section 230, a court upheld YouTube’s innovative, intermediate method of content moderation in a suit brought by PragerU.37 Instead of removing PragerU’s content from the platform, YouTube categorized its videos as containing potentially mature content; YouTube’s categorization would limit a user’s access to PragerU’s videos only if the user previously activated YouTube’s Restricted Mode.38 Less than 2 percent of YouTube users activate the Restricted Mode of browsing and viewing videos.39 However, PragerU claimed that the classification of its videos as separate from other videos violated its First Amendment rights and that YouTube was engaged in false advertising in violation of the Lanham Act.40 The First Amendment claim failed because “[t]he Free Speech Clause of the First Amendment prohibits the government—not a private party—from abridging speech” and neither YouTube nor its parent company Google is a state actor.41 The false advertising claim likewise failed because YouTube’s statements about its content moderation policies are neither “commercial advertising or promotion” nor a “false or misleading representation of fact” requisite to establish a claim.42
Courts have issued conflicting opinions regarding one of the most popular mobile phone applications in the world. Snap Inc., the maker of Snapchat, has faced several actions for injuries stemming from the use of filters for user-generated content. Filters overlay a variety of information onto a user’s pictures. Some are static, acting much like a picture frame or stickers placed onto the picture, while others are dynamic, incorporating information from the phone’s sensors. The speed filter, for example, uses a phone’s GPS information to determine the current speed of the person taking the photo.43
In Lemmon v. Snap Inc.,44 three young men were killed in an automobile accident caused by the trio’s attempt to log an entry in Snapchat at more than 100 miles per hour. Their Snapchat post showed that they had logged one entry recording the vehicle traveling at 123 miles per hour. When the car crashed a few minutes later, it was estimated to have been traveling at 113 miles per hour.45 The plaintiffs alleged that:
Snap knew or should have known that . . . many of its users were drivers of, or passengers in, cars driven at speeds of 100 m.p.h. or more because they wanted to use Snapchat to capture a mobile photo or video showing them hitting 100 m.p.h. and then share the Snap with their friends.46
In Lemmon, the court referenced Maynard v. Snapchat, Inc.,47 a case in which the company’s Section 230 defense failed. In Maynard, a passenger in the speeding car described the wreck and the use of Snapchat’s speed filter feature as follows:
I looked up and noticed that we seemed to be accelerating. I looked in the front, and saw Christal McGee holding her phone. The screen had a speed on it, which was about 80 m.p.h. and climbing. I asked Christal if her phone was keeping up with the speed of the car. Christal said it was. I told her I was pregnant and asked her to slow down. Christal responded and said she was just trying to get the car to 100 m.p.h. to post it on Snapchat. She said “I’m about to post it.”48
Immediately following that statement, Maynard pulled out of his apartment complex, was struck by the speeding vehicle, and suffered permanent brain damage.49 The Maynard court held that Section 230 did not apply because the claim was not based on any third-party posts: The plaintiffs instead “seek to hold Snapchat liable for its own conduct, principally for the creation of the Speed Filter and its failure to warn users that the Speed Filter could encourage speeding and unsafe driving practices.”50
Acknowledging that its holding was inconsistent with that of Maynard—a state court decision out of Georgia—the Lemmon court explained that it was bound to apply Ninth Circuit precedent under which the plaintiff’s claim was barred because the speed filter was a “content-neutral tool,” meaning that users could have used the filter to post pictures of themselves not speeding.51
The application of Section 230 has remained relatively stable for the past quarter century, despite being at the center of an evolving industry. Its future, however, may depend on the impact of the Executive Order on Preventing Online Censorship that President Trump issued on May 28, 2020.52
Public discourse in recent years has included relentless accusations from both left- and right-wing news sources that the other side is propagating “fake news.” In response, some ICSs have begun to flag articles with ratings provided by independent fact checkers, many of which adhere to the Code of Principles set forth by the International Fact-Checking Network when determining the truthfulness of information shared online.53 A variety of media companies, ranging from small outlets to worldwide media titans such as Reuters and the Associated Press, have pledged to follow the Code of Principles and agreed to be assessed by the organization.54
On May 27, 2020, Twitter flagged two of President Trump’s tweets about mail-in ballots in California. Twitter explained: “We added a label to two @realDonaldTrump Tweets about California’s vote-by-mail plans as part of our efforts to enforce our civic integrity policy. We believe those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process.”55
The executive order issued just one day later called for a revamp of laws governing liability of Internet intermediaries. It states: “When large, powerful social media companies censor opinions with which they disagree, they exercise a dangerous power. They cease functioning as passive bulletin boards, and ought to be viewed and treated as content creators.”56 President Trump followed up the order the next day with a more direct message on Twitter: “REVOKE 230!”57
Of course, Section 230 cannot be repealed by executive order because it is codified law. The U.S. Congress would have to draft and approve a bill to modify the law and, although there has been increasing attention from lawmakers on the inner workings of technology giants like Google and Facebook, there is no indication of political will or actual plans to upend this cornerstone of the Internet.
What remains to be seen is how federal officials and agencies named in the executive order will respond to its directives. The order directs the heads of departments and agencies to, among other things, review federal spending on advertisements with platforms that restrict ads “due to viewpoint discrimination”;58 consider whether to take action in response to complaints collected through a “Tech Bias Reporting tool” launched by the White House;59 and request that the Federal Communications Commission propose regulations to “clarify” which actions can result in the loss of Section 230’s protections.60 The Attorney General is ordered to “establish a working group regarding the potential enforcement of State statutes that prohibit online platforms from engaging in unfair or deceptive acts or practices,” and the order directs the working group to “develop model legislation for consideration by legislatures in States where existing statutes do not protect Americans from such unfair and deceptive acts and practices.”61 Each of these actions is subject to, and will likely face, judicial review in the coming year.