Joint statement endorsed by 28 civil society groups and 53 individuals
Policymakers have expressed concern about both harmful online speech and the content moderation practices of tech companies. Section 230, enacted as part of the bipartisan Communications Decency Act of 1996, says that Internet services, or “intermediaries,” are not liable for illegal third-party content except with respect to intellectual property, federal criminal prosecutions, communications privacy (ECPA), and sex trafficking (FOSTA). Of course, Internet services remain responsible for content they themselves create.
As civil society organizations, academics, and other experts who study the regulation of user-generated content, we value the balance between freely exchanging ideas, fostering innovation, and limiting harmful speech. Because this is an exceptionally delicate balance, Section 230 reform poses a substantial risk of failing to address policymakers’ concerns and harming the Internet overall. We hope the following principles help any policymakers considering amendments to Section 230.
Principle #1: Content creators bear primary responsibility for their speech and actions.
Content creators—including online services themselves—bear primary responsibility for their own content and actions. Section 230 has never interfered with holding content creators liable. Instead, Section 230 restricts only who can be liable for the harmful content created by others. Law enforcement online is as important as it is offline. If policymakers believe existing law does not adequately deter bad actors online, they should (i) invest more in the enforcement of existing laws, and (ii) identify and remove obstacles to the enforcement of existing laws. Importantly, while anonymity online can certainly constrain the ability to hold users accountable for their content and actions, courts and litigants have tools to pierce anonymity. And in the rare situation where truly egregious online conduct simply isn’t covered by existing criminal law, the law could be expanded. But if policymakers want to avoid chilling American entrepreneurship, they must not impose criminal liability on online intermediaries or their executives for unlawful user-generated content.
Principle #2: Any new intermediary liability law must not target constitutionally protected speech.
The government shouldn’t require—or coerce—intermediaries to remove constitutionally protected speech that the government cannot prohibit directly. Such demands violate the First Amendment. Moreover, imposing broad liability for user speech incentivizes services to err on the side of taking down speech, resulting in overbroad censorship, or even to avoid offering speech forums altogether.
Principle #3: The law shouldn’t discourage Internet services from moderating content.
To flourish, the Internet requires that site managers have the ability to remove legal but objectionable content—including content that would be protected under the First Amendment from censorship by the government. If Internet services could not prohibit harassment, pornography, racial slurs, and other lawful but offensive or damaging material, they couldn’t facilitate civil discourse. Even when Internet services have the ability to moderate content, their moderation efforts will always be imperfect given the vast scale of even relatively small sites and the speed with which content is posted. Section 230 ensures that Internet services can carry out this socially beneficial but error-prone work without exposing themselves to increased liability; penalizing them for imperfect content moderation or second-guessing their decision-making will only discourage them from trying in the first place. This vital principle should remain intact.
Principle #4: Section 230 does not, and should not, require “neutrality.”
Publishing third-party content online can never be “neutral.”[1] Indeed, every publication decision necessarily prioritizes some content at the expense of other content. Even an “objective” approach, such as presenting content in reverse chronological order, isn’t neutral because it prioritizes recency over other values. By protecting the prioritization, de-prioritization, and removal of content, Section 230 provides Internet services with the legal certainty they need to do the socially beneficial work of minimizing harmful content.
Principle #5: We need a uniform national legal standard.
Most Internet services cannot publish content on a state-by-state basis, so state-by-state variations in liability would force compliance with the most restrictive legal standard. In its current form, Section 230 prevents this dilemma by setting a consistent national standard—which includes potential liability under the uniform body of federal criminal law. Internet services, especially smaller companies and new entrants, would find it difficult, if not impossible, to manage the costs and legal risks of facing potential liability under state civil law, or of bearing the risk of prosecution under state criminal law.
Principle #6: We must continue to promote innovation on the Internet.
Section 230 encourages innovation in Internet services, especially by smaller services and startups that most need protection from potentially crushing liability. The law must continue to protect intermediaries not merely from liability, but from having to defend against excessive, often-meritless suits—what one court called “death by ten thousand duck-bites.” Without such protection, compliance, implementation, and litigation costs could strangle smaller companies even before they emerge, while larger, incumbent technology companies would be much better positioned to absorb these costs. Any amendment to Section 230 that is calibrated to what might be possible for the Internet giants will necessarily miscalibrate the law for smaller services.
Principle #7: Section 230 should apply equally across a broad spectrum of online services.
Section 230 applies to services that users never interact with directly. The further removed an Internet service—such as a DDoS protection provider or domain name registrar—is from an offending user’s content or actions, the blunter its tools to combat objectionable content become. Unlike social media companies or other user-facing services, infrastructure providers cannot take measures like removing individual posts or comments. Instead, they can only shutter entire sites or services, risking significant collateral damage to inoffensive or harmless content. Requirements drafted with user-facing services in mind will likely not work for these non-user-facing services.
[1] We are addressing neutrality only in content publishing. “Net neutrality,” or discrimination by Internet access providers, is beyond the scope of these principles.
Individual Signatories
Affiliations are for identification purposes only
- Prof. Susan Ariel Aaronson, Elliott School of International Affairs, George Washington University
- Prof. Enrique Armijo, Elon University School of Law
- Prof. Thomas C. Arthur, Emory University School of Law
- Farzaneh Badiei, Internet Governance Project, Georgia Institute of Technology (research associate)
- Prof. Derek Bambauer, University of Arizona James E. Rogers College of Law
- Prof. Jane Bambauer, University of Arizona James E. Rogers College of Law
- Prof. Annemarie Bridy, University of Idaho College of Law
- Prof. Anupam Chander, Georgetown Law
- Lydia de la Torre, Santa Clara University School of Law (fellow)
- Prof. Sean Flynn, American University Washington College of Law
- Prof. Brian L. Frye, University of Kentucky College of Law
- Prof. Elizabeth Townsend Gard, Tulane Law School
- Prof. Jim Gibson, University of Richmond, T. C. Williams School of Law
- Prof. Eric Goldman, Santa Clara University School of Law
- Prof. Edina Harbinja, Aston University UK
- Prof. Gus Hurwitz, University of Nebraska College of Law
- Prof. Michael Jacobs, DePaul University College of Law (emeritus)
- Daphne Keller, Stanford Center for Internet and Society
- Christopher Koopman, Center for Growth and Opportunity, Utah State University
- Brenden Kuerbis, Georgia Institute of Technology, School of Public Policy (researcher)
- Prof. Thomas Lambert, University of Missouri School of Law
- Prof. Sarah E. Lageson, Rutgers University-Newark School of Criminal Justice
- Prof. Stacey M. Lantagne, University of Mississippi School of Law
- Prof. Jyh-An Lee, The Chinese University of Hong Kong
- Prof. Mark A. Lemley, Stanford Law School
- Thomas M. Lenard, Senior Fellow and President Emeritus, Technology Policy Institute
- Prof. David Levine, Elon University School of Law
- Prof. Yvette Joy Liebesman, Saint Louis University School of Law
- Yong Liu, Hebei Academy of Social Sciences (researcher)
- Prof. Katja Weckstrom Lindroos, UEF Law School, University of Eastern Finland
- Prof. John Lopatka, Penn State Law
- Prof. Daniel A. Lyons, Boston College Law School
- Geoffrey A. Manne, President, International Center for Law & Economics; Distinguished Fellow, Northwestern University Center on Law, Business & Government
- Prof. Stephen McJohn, Suffolk University Law School
- David Morar, Elliott School of International Affairs, George Washington University (visiting scholar)
- Prof. Frederick Mostert, The Dickson Poon School of Law, King’s College London
- Prof. Milton Mueller, Internet Governance Project, Georgia Institute of Technology
- Prof. Ira S. Nathenson, St. Thomas University (Florida) School of Law
- Prof. Christopher Newman, Antonin Scalia Law School at George Mason University
- Prof. Fred Kennedy Nkusi, UNILAK
- David G. Post, Beasley School of Law, Temple University (retired)
- Prof. Betsy Rosenblatt, UC Davis School of Law (visitor)
- Prof. John Rothchild, Wayne State University Law School
- Prof. Christopher L. Sagers, Cleveland-Marshall College of Law
- David Silverman, Lewis & Clark Law School (adjunct)
- Prof. Vernon Smith, George L. Argyros School of Business and Economics & Dale E. Fowler School of Law, Chapman University
- Prof. Nicolas Suzor, QUT Law School
- Prof. Gavin Sutter, CCLS, School of Law, Queen Mary University of London
- Berin Szóka, President, TechFreedom
- Prof. Rebecca Tushnet, Harvard Law School
- Prof. Habib S. Usman, American University of Nigeria
- Prof. John Villasenor, Electrical Engineering, Public Policy, and Law at UCLA
- Prof. Joshua D. Wright, Antonin Scalia Law School at George Mason University
Institutional Signatories
- ALEC (American Legislative Exchange Council) Action
- Americans for Prosperity
- Center for Democracy & Technology
- Competitive Enterprise Institute
- Copia Institute
- Freedom Foundation of Minnesota
- FreedomWorks
- Information Technology and Innovation Foundation
- Innovation Economy Institute
- Innovation Defense Foundation
- Institute for Liberty
- The Institute for Policy Innovation (IPI)
- International Center for Law & Economics
- Internet Governance Project
- James Madison Institute
- Libertas Institute
- Lincoln Network
- Mississippi Center for Public Policy
- National Taxpayers Union
- New America’s Open Technology Institute
- Organization for Transformative Works
- Pelican Institute
- Rio Grande Foundation
- R Street Institute
- Stand Together
- Taxpayers Protection Alliance
- TechFreedom
- Young Voices