Student Submitted Article

Section 230: Necessary Protection or Social Media Bailout?

Article By: Arvind Salem

Photo Credit: www.aei.org


Section 230 of the Communications Decency Act of 1996 has come under fire from both Republicans and Democrats, though the two parties disapprove of it for different reasons. Republicans are concerned that big tech is using the section as a shield while it effectively censors conservatives. Democrats, on the other hand, are concerned that big tech is not doing enough to remove misinformation, yet the protections provided by Section 230 prevent lawmakers from imposing the necessary regulation. This raises a fundamental question: why was Section 230 enacted in the first place, and what is so controversial about it?

The impetus for Section 230 was a case brought against an online platform called Prodigy over content posted by one of its users. Prodigy argued that it was not responsible for its users' speech. However, because Prodigy engaged in content moderation, the court ruled that Prodigy was a publisher and could be held liable for any illegal or misleading content it published. In response to this decision, Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif., introduced Section 230 as a way of protecting tech companies from becoming liable for everything their users post, while also protecting them if they decide to engage in moderation to improve their platforms. This is what allows Twitter, YouTube, and Facebook to engage in content moderation without being sued, while not holding them liable for anything users post on their sites.

The law was necessary to protect the fledgling internet from being destroyed by lawsuits before it even got off the ground. Without Section 230, social media companies could be held liable both for moderating and for not moderating content. Content moderation could lead to lawsuits because it could be interpreted as denying free speech, and not moderating could lead to lawsuits because, if online service providers are treated as publishers, as in the Prodigy case, they are liable for everything their users post. Even if the internet could flourish in an environment where both moderating and not moderating content invited lawsuits, online service providers would either have to spend an incredible amount of resources vetting every piece of information, which would make these services more expensive, or give up content moderation altogether, in which case misinformation (hard to control even with content moderation) would skyrocket and users would be unable to tell fact from fiction. Section 230 was necessary at that time and has allowed the internet as we know it to thrive. But has it outlived its usefulness?

The debate around Section 230 revolves around two specific provisions: subsections (c)(1) and (c)(2). Section 230(c)(1) states that online service providers and users may not "be treated as the publisher or speaker of any information provided by another information content provider." This provision protects companies that do not moderate user-generated content. It ensures that online service providers are not held liable for comments or posts that are false, incite violence, or contain other forms of illegal content. Every online service provider that hosts user-generated content (including user comments or reviews) is protected by this provision, which ensures they are not sued over everything their users post. Examples of such sites are Facebook, YouTube, Amazon, Twitter, and Google. This is the part of Section 230 that angers Democrats, as it disincentivizes content moderation and allows misinformation to thrive. Democrats were especially angry at Facebook after the 2016 election, when they alleged that Russian misinformation cost them the election.

Section 230(c)(2) states that online service providers and users may not be "held liable" for any voluntary, "good faith" action "to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." This provision protects companies that do moderate their content. It ensures that companies like Instagram, Twitter, TikTok, Facebook, and YouTube can engage in content moderation without the worry of being sued. This is the part of Section 230 that angers Republicans, as it allows social media companies to censor Republicans through content moderation. For example, Twitter was able to ban sitting President Donald Trump and was protected, in part, by this provision, which angered many Republicans and led to increased calls to repeal Section 230.

Section 230(c)(2) empowers online service providers, and its language can be interpreted in a variety of ways; it is up to the courts to settle that controversy. Based on the language of the section, there are two requirements a company must meet to claim Section 230(c)(2) immunity. First, the provider must be acting in "good faith." This means it cannot simply say that it is enforcing its terms of service but must prove that it is acting in good faith. An example of "bad faith" would be acting in an anticompetitive manner (i.e., stifling competition). Second, online service providers may only moderate "objectionable" material (in addition to the other types of content enumerated in the statute, but "objectionable" is the only term that causes controversy). Since the Supreme Court has not ruled on what qualifies as "objectionable" material, there is no national standard, and how the provision is interpreted varies by jurisdiction. Some courts interpret it broadly, giving companies considerable leeway, with "objectionable" referring to any content that the provider or user deems objectionable; for context, courts may look at the company's existing policies regarding the same or similar content. Other courts interpret Section 230(c)(2) more narrowly, arguing that the legislative intent and history of the section imply that "objectionable" refers to content that is potentially offensive, not merely undesirable.

Section 230 has several exceptions. A defendant cannot invoke Section 230 immunity in a federal criminal prosecution (this does not extend to state prosecutions), in a case involving "any law pertaining to intellectual property," in cases involving the Electronic Communications Privacy Act of 1986 (ECPA) "or any similar state law," or in prosecutions involving sex trafficking. The first exception is clear. The second exception is quite vague, but it has usually been interpreted to mean that Section 230 cannot provide immunity when a plaintiff's claim involves intellectual property rights (copyright infringement, trademark infringement, patent infringement, etc.). The third exception pertains to the ECPA, a federal law that regulates wiretapping and other forms of electronic eavesdropping. Even in civil cases arising under this law, Section 230 immunity does not apply (which is why this is a separate exception). The last exception comes from the Allow States and Victims to Fight Online Sex Trafficking Act of 2017 (FOSTA). Violations for which companies may be prosecuted under this act include using their platform to engage in the sex trafficking of minors or engaging in sex trafficking involving force, fraud, or coercion. The guidelines mentioned here are not comprehensive; refer to the sources cited for an exhaustive list of guidelines and exceptions.


There have been many efforts to reform Section 230. President Trump issued the Executive Order on Preventing Online Censorship, which laid out the executive branch's position on Section 230; that order was later revoked by President Biden. During the previous Congress (the 116th), twenty-six bills were introduced that would have amended Section 230. Some proposed repealing the section entirely, while others proposed broadening its exceptions, for example by extending the federal exception to any federal suit rather than only federal criminal prosecutions. One bill, the Platform Accountability and Consumer Transparency (PACT) Act, would have amended Section 230 so that certain providers lose immunity under subsection (c)(1) if the provider knows about illegal activity on its platform and does not stop it within 24 hours.


So far, all bills in Congress attempting to reform Section 230 have failed, but efforts to revise the law are far from over. Section 230 remains a point of controversy as online service providers become more influential than ever.


Sources:



