Unless you’re a corporate lawyer or outside counsel for an online business that hosts third-party content, you have probably never heard of Section 230 of the Communications Decency Act, part of the mammoth 1996 Telecommunications Act. In Section 230, just 26 words (see immediately below) spell out why a site isn’t liable for what other people may choose to publish on it.
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
This “safe harbor” rule created the regulatory certainty that has allowed many of today’s largest Internet companies that host third-party content (from Facebook, Twitter, and Amazon to Yelp and Reddit) to operate. Don’t like the content they host? Tough. You can complain and express your own opinions, even request the correction of errors or the takedown of untrue content, but you can’t sue the host for being a host. In essence, Section 230 is the statutory glue behind everything that lets the Internet operate freely – and probably the basis for everything you love and hate about the resulting experience.
You’d think that after 20 years, the law would be solidly supported by judicial decisions and established precedent, but as recent court battles highlight, the scope and application of the law are a long way from settled. Attempts by legislators and judges to refine or even redefine Section 230’s boundaries are chipping away at the broad immunity websites have taken for granted, and some defenders of Section 230 now worry that the core protections enshrined in the law are under threat.
This change is most evident in the courts, where recent rulings have gone against Internet companies that depend on the safe harbor. In June 2016, for example, a judge ordered Yelp to remove reviews that were found to be “defamatory” even if they were “accurate.” In August 2016, Twitter was told it could not use a Section 230 defense in a lawsuit over unwanted tweets (and especially retweets). And in September, a panel of federal judges allowed a $10 million lawsuit against Match.com, brought by a woman who was stabbed by a man she’d met through the site, to proceed to trial.
Legal experts who work in this area are increasingly concerned by these trends. “At some point, you wonder if this is just judicial activism going crazy,” says Eric Goldman, a cyber law expert and professor at the Santa Clara University School of Law. “You wonder if judges are saying, ‘I know what Section 230 says. I just don’t agree with it.’”
Section 230 was born of a simple bipartisan compromise between House legislators Chris Cox and Ron Wyden in 1995, responding to the clear need to keep the Internet open and growing – and out of the hands of the plaintiffs’ bar. In hindsight, the concept is simple: Websites aren’t publishers. They’re intermediaries. To sue an online platform over a blog post that someone finds offensive would be like suing the New York Public Library for carrying a copy of Lady Chatterley’s Lover. For the embryonic consumer Internet, faced with a potential deluge of content-restricting lawsuits, Section 230 was a creative workaround — a patch — that allowed this new and rapidly evolving platform for the expression of ideas to grow into the network of successful commercial enterprises we see today.
For all that the Internet would have been different without Section 230, the law hasn’t been without critics. While it has defended free speech and open access to content, it has also allowed a lot of bad behavior to develop. Cyberbullying, doxing, and death threats all shelter under Section 230 to some extent. Many sites try to police these obnoxious forms of content, relying on permissions to curate content that are also enshrined in Section 230 (which does NOT classify them as publishers when they choose to “edit” content on their sites), but they don’t have to do so under the law. Legal immunity might be great for tech companies, but not if you’re the target of anonymous trolls out to ruin your online (and maybe real-world) life for no good reason.
Section 230 is at the heart of an emerging, although not yet very visible, debate: to what extent can we allow the commingling of the online and physical worlds under a single set of legal principles, when it’s increasingly clear that the “operating” rules can be and often are very different? Some physical-world rules exist as an expression of social norms or quality of life that we think are important. If the Internet is allowed to simply sweep all of that aside, we might be giving it more power than it deserves – or than we should want it to have.
Section 230’s safe harbor is an important concept, and we don’t want to lose it anytime soon, but let’s apply it with the right blend of intelligence and caution.