Understanding Section 230 – Social Media Companies’ Get Out of Jail Free Card

Posted on May 17, 2024 in Explainers.

Section 230: phone screen showing social media apps

For most companies, if they sell a product that harms consumers, people can hold them liable in a court of law for the damages they suffer. Uniquely, however, this is not the case for social media and other online platforms.

In 1996, before social media even existed, the U.S. Congress passed a law, Section 230 of the Communications Decency Act, which immunizes “interactive computer services” from liability. This protection was intended to cultivate an internet industry that looked very different than it does today. Back then, only 20 million U.S. residents accessed the internet, spending an average of just 30 minutes per day online, and the most popular destinations were chatrooms and bulletin boards hosted by services like Prodigy and AOL.

Today the landscape of “interactive computer services” would be unrecognizable to the lawmakers who passed Section 230 in 1996. The internet industry has blossomed into a multi-trillion-dollar sector, and giant social media platforms like Facebook, Instagram, and YouTube dominate the media landscape, particularly among teens, who spend an average of almost 5 hours per day on these platforms.  

Times have changed, and yet our laws have not. Section 230 remains in place. It has become a shield protecting some of the world’s richest companies from responsibility for victims of their products. What made sense three decades ago simply does not make sense today.  

The Twenty-Six Words that Created the Internet

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 

Section 230 has two important sub-sections, each handing social media companies powerful tools to avoid accountability.  

  • The first sub-section, (c)(1), immunizes platforms from lawsuits that treat them as the “publisher or speaker” of content created by third-party users. This means, for instance, that the parents of children who overdose on drugs obtained via social media apps cannot sue the platforms, because Section 230 holds that platforms bear no responsibility for content created by drug dealers on their apps, even if their algorithms do nothing to stop it or actively promote it in children’s feeds.
  • The second sub-section, (c)(2), shields companies from liability for the decisions they make about what users can share on their platforms. This sub-section is known as the “Good Samaritan” portion of Section 230 because its intent was to encourage websites to proactively remove harmful content. However, while companies can and do use this freedom to remove objectionable content, nothing obliges them to do so. This means, for instance, that social media companies can remove deepfake pornography of women from their sites without risking lawsuits from its creators, but they are under no obligation to act at all.
Section 230: US Congress building
Photo by Louis Velazquez on Unsplash

How Section 230 Deprives Victims of Justice  

Section 230 was created by legislators as a shield allowing well-intentioned companies to clean up the internet. Today, it has evolved into a weapon that giant corporations use to escape justice and ignore victims. Invoking it has become a routine tactic for platforms seeking to protect themselves from liability and scrutiny under the law.

In 2017, Matthew Herrick filed a lawsuit against Grindr, a popular dating app, to hold the company accountable for its failure to enforce its own terms of service. A year earlier, a man had begun stalking him, creating fake Grindr accounts in his name and using them to direct men expecting sex to his workplace and home. Over the course of 10 months, nearly 1,400 men, as many as 23 in a day, arrived at his home and job.

Herrick filed about 100 complaints with Grindr asking it to close the false accounts, but the dating app refused to take any meaningful action, endangering his life. When he and his attorneys sued Grindr, the company invoked Section 230 in federal court to claim immunity from liability, successfully halting his lawsuit.

Matthew Herrick is not the only victim who has been denied justice due to Section 230. In June 2020, Kristin Bride awoke to find that her son, Carson, had died by suicide after receiving hundreds of abusive messages on the anonymous messaging app Yolo. When she sued Yolo in the U.S. for designing a product that was unsafe for children and enabled anonymous bullying, a district court dismissed her case in its entirety on the grounds of Section 230.

In 2015, Beatrice Gonzalez’s daughter, Nohemi, was killed during the terrorist attacks in Paris, France. Afterwards, the terrorist group ISIS claimed responsibility for the attacks on YouTube. Gonzalez sued several social media companies in the U.S., arguing they had aided ISIS by allowing it to use their platforms. Her case was dismissed on the grounds of Section 230, and though it eventually reached the Supreme Court, she was ultimately denied relief.

These stories demonstrate the problem at the core of Section 230: it places far too much trust in social media companies. By extending companies such expansive protections, Section 230’s framers assumed that companies would proactively take advantage of their liability shield to remove harmful and distressing content. Lawmakers thought that so-called ‘Good Samaritans’ would use their freedom to protect users.  

Instead of empowering ‘Good Samaritans’, Section 230 has become social media companies’ Get Out of Jail Free Card. The law deprives victims of justice when they experience online harm, even when it materializes offline and endangers their physical safety. This legal regime places the burden on users – particularly women, people of color, children, and queer individuals – to defend themselves from their stalkers, assaulters, harassers, and abusers. Often, this means that victims are silenced and forced to drop out of the digital communities that define 21st-century daily life. 

Section 230: teenager holding phone
Photo by Daria Nepriakhina 🇺🇦 on Unsplash

The Culture of Impunity behind Section 230  

Section 230 has created a culture of impunity in which the world’s wealthiest companies face no legal or financial consequences for their failures to protect users and remove harmful content. While these companies have been trusted to self-regulate with little oversight, online harms have proliferated: children’s and teens’ mental health suffers, antisemitism is accepted and amplified, and protections for our elections have been rolled back.

This is not to say that social media companies make no efforts to address online harms. During the internet’s early days, Section 230 enabled companies to build out complex systems of rules intended to restrict the flow of harmful content.  

However, as the platforms matured, they grew to understand that removing content also hurts their bottom line. Harmful content can be among the most engaging, and content moderation itself is challenging. This means that, left to their own devices, social media companies do not have strong incentives to invest enough resources in trust and safety, nor to design their services to be safe and healthy for users.

Today’s social media companies are not always Good Samaritans. While some do act in good faith, others do just enough clean-up to maintain a façade of caring about users. Removing Section 230’s liability protections wholesale would not solve this problem. A more nuanced approach would condition the liability protections conferred by Section 230 so that they only shield platforms that make reasonable efforts to address harms, even if not every piece of harmful content is removed. This would mean that, like other companies in the U.S., social media companies would have to take meaningful steps to protect their consumers.

This approach to Section 230 reform would strengthen the Accountability and Responsibility of social media companies, two bedrock principles of CCDH’s STAR Framework for social media regulation.

  • Accountability means that democratic and independent bodies, such as courts and strong regulators, can issue judgments and empower users to challenge unjust decisions made by social media companies.  
  • Responsibility is the idea that companies have a duty to protect their users, and when that duty is not upheld, companies and their senior executives face consequences, including fines and personal liability. 

Other jurisdictions have led the charge on reforming their liability shields to strengthen Accountability and Responsibility. In 2022, the European Union enacted its Digital Services Act, which gives companies protection from liability only if they act quickly to remove illegal content after it is reported to them. Similarly, in 2023, the United Kingdom passed the Online Safety Act, which obliges platforms to address the spread of illegal content and shields them from liability only if they conform with codes of practice. Each of these laws will significantly enhance Accountability and Responsibility in its jurisdiction.

The U.S. should follow their lead. Section 230 is clearly outdated. While it may have made sense in 1996, today the biggest corporations in the world enjoy special privileges possessed by no other industry, depriving victims of justice under the law. The principles of Accountability and Responsibility should serve as a starting point for reform of this law to make the internet a safer and fairer space for all. 

If you want to stay up to date on how regulation of social media companies advances, sign up for our mailing list to receive the latest on CCDH’s campaigns for social media reform to make the internet a safer place.
