U.S. laws have traditionally given online services significant leeway to moderate user-generated content however they see fit. In particular, there is a long history of U.S. courts relying on Section 230 of the Communications Decency Act (“CDA 230”) to reject a wide range of claims seeking to hold online service providers liable for hosting, displaying, removing or blocking third-party content, including under contract, defamation, tort and civil rights laws. CDA 230 does not protect online service providers from all claims related to third-party content; for example, there are statutory exceptions for intellectual property infringement and criminal violations. Nevertheless, many commentators credit CDA 230 as one of the most important laws in the development of the Internet because it allowed online service providers to focus on growing their user bases without having to discharge unduly burdensome duties to continuously review, assess and moderate user-generated content.

In recent years, CDA 230 has come under scrutiny for its alleged impact on freedom of speech, online safety, and the spread of misinformation. We describe below recent examples of lawmakers and authorities seeking to impose more onerous content moderation restrictions or obligations on digital platforms. While some of these efforts have targeted specific social media companies or business models, these developments will affect any company offering a service in which users can communicate with one another, including multiplayer video games, dating websites, social streaming platforms, virtual hangout spaces, social e-commerce platforms, and more.

We therefore recommend that online service providers:

  • Review their policies, terms of service, content moderation mechanisms and reporting channels in light of the new content moderation laws in Florida, Texas, California, New York and other jurisdictions, as applicable;
  • Closely monitor content moderation developments in the U.S. in state and federal legislatures and court dockets at all levels; and
  • Continuously evaluate their content moderation policies to balance the protection of free speech with the shared objective of protecting users.

Trump Administration Executive Order on Preventing Online Censorship: In May 2020, after a social media platform labeled a Trump post “potentially misleading,” President Trump issued Executive Order 13925, entitled “Preventing Online Censorship,” which claimed that digital platforms were relying on CDA 230 immunity to engage in deceptive or arbitrary actions to censor user-generated content expressing viewpoints with which they disagree. Pursuant to the Executive Order, the Department of Justice released proposed amendments to CDA 230 that would have, among other things, allowed an online service provider to restrict access to user-generated content only where the provider had terms of use that clearly prohibited such content, the action was consistent with those terms, and the provider gave affected users a reasonable explanation of the action and a meaningful opportunity to respond.

Biden Administration’s Calls to Reform Section 230: In May 2021, soon after assuming office, President Biden revoked Trump’s Executive Order 13925. But President Biden has also called for CDA 230 to be pared back. Rather than taking issue with the perception that digital platforms wrongfully restrict access to political content, President Biden has taken issue with the perception that digital platforms do not sufficiently restrict access to certain objectionable content. For example, in September 2022, the Biden Administration released a readout of its listening session on “Tech Platform Accountability” calling on legislators to “Remove special legal protections for large tech platforms: Tech platforms currently have special legal protections under Section 230 of the Communications Decency Act that broadly shield them from liability even when they host or disseminate illegal, violent conduct or materials. The President has long called for fundamental reforms to Section 230.”

Florida Transparency in Technology Act (SB 7072): In May 2021, Governor DeSantis signed SB 7072 into law with the stated intention of preventing social media platforms from engaging in unfair censorship of Floridians. Much of the law is currently enjoined from taking effect, but if the U.S. Supreme Court reviews the law and finds it constitutional, the statute would impose significant content moderation obligations on qualifying social media platforms. For example, the law generally requires a social media platform to provide a precise and thorough explanation of why it censored a user whenever it does so, and to apply its censorship standards in a consistent manner. The law establishes a private right of action for inconsistent content moderation practices, which could entitle claimants to statutory damages of up to $100,000. The law also prohibits social media platforms from changing their content moderation policies more than once every 30 days, and gives users a right to opt out of certain content recommendation or de-prioritization algorithms.

Texas’ Act relating to censorship of digital expression (HB 20): In September 2021, Governor Abbott signed HB 20 into law with the stated intention of preventing social media platforms from engaging in wrongful censorship of Texans. Much of the law is currently enjoined from taking effect, but if the U.S. Supreme Court takes up a case involving the law and finds it constitutional, the statute would require social media platforms to publish a transparency report twice a year containing numerous categories of details and statistics about the platform’s content moderation operations during the preceding six-month period, such as how many user complaints of potential terms of service violations it received, how those complaints were handled, and their outcomes. The law also generally prohibits a social media platform from censoring a user based on the user’s viewpoint, the viewpoints represented in the user’s expression, or the user’s geographic location, and prevents email service providers from filtering email messages except in certain narrow circumstances, such as where an email is reasonably believed to contain malicious computer code or obscene material. The law establishes a private right of action for contraventions of its anti-spam-filtering provisions, and generally enables users to sue for declaratory and injunctive relief.

New York’s Act requiring social media networks to maintain hateful conduct reporting mechanisms (S 4511A): In June 2022, Governor Hochul signed S 4511A into law with the stated intention of combating the proliferation of hate on social media. The law defines a social media network as a for-profit provider or operator of an internet platform designed to enable users to share content with other users or to make such content available to the public. Unlike the content moderation statutes of Florida, Texas and California, New York’s law does not include size-based thresholds in its definition of qualifying social media networks. The law requires a social media network conducting business in New York to provide and maintain a clear and easily accessible mechanism for individual users to report incidents of hateful conduct, as well as a clear and concise policy on how the network will respond. “Hateful conduct” means the use of a social media network to vilify, humiliate, or incite violence against a group or class of persons on the basis of race, color, religion, ethnicity, national origin, disability, sex, sexual orientation, gender identity or gender expression. The New York Attorney General may seek penalties of up to $1,000 for each day on which a violation takes place.

California’s Social Media Content Moderation Law (AB 587): In September 2022, Governor Newsom signed AB 587 into law with the stated intention of protecting Californians from hate and disinformation spread online. Qualifying social media companies must post terms of service that meet certain format, language and content requirements. For example, the terms of service must include a description of the process that users must follow to flag violating content or users, a list of potential actions the social media company may take against violating content or users, and the social media company’s commitments on response and resolution times. The law takes aim at five categories of problematic content: (1) hate speech or racism; (2) extremism or radicalization; (3) disinformation or misinformation; (4) harassment; and (5) foreign political interference. Social media companies must submit lengthy and detailed reports twice a year to the California Attorney General describing their content moderation practices and breaking down how often content in each of the five categories was flagged, actioned, or resulted in disciplinary action against users. The California Attorney General may seek penalties of up to $15,000 per violation for each day on which a violation takes place.

Authors

Jonathan Tam is a partner in the San Francisco office focused on global privacy, advertising, intellectual property, content moderation and consumer protection laws. He is qualified to practice law in Canada and the U.S. and is passionate about helping clients achieve their commercial objectives while managing legal risks. He is well versed in the legal considerations that apply to many of the world’s cutting-edge technologies, including AI-driven solutions, wearables, connected cars, Web3, DAOs, NFTs, VR/AR, crypto, metaverses and the internet of everything.

Vivian Tse regularly advises U.S. and multinational companies on complex matters relating to international trade, regulatory compliance, and cross-border commercial transactions.