X Content Moderation Failure

How Twitter/X continues to host posts we reported for extreme hate speech

Cover of CCDH's latest report on X's content moderation failure, showing the X logo and distorted screenshots of hateful tweets identified throughout the report.

New research by CCDH shows that X (formerly Twitter) continued to host 86% of a sample of 300 hateful posts a week after we reported them.

As of September 7, 2023, tweets promoting and glorifying antisemitism, anti-Black racism, neo-Nazism, white supremacy and/or other racism were still up.



CCDH also found dozens of advertisements for household brands, such as Apple and Disney, appearing next to hate speech – despite X CEO Linda Yaccarino’s claims to have “built brand safety and content moderation tools that have never existed before at this company”.

Researchers collected a sample of 300 posts containing hate speech, drawn from 100 accounts (three posts per account).

A week after the posts were reported to moderators (on August 30 and 31) via X's official reporting tools, researchers found that 259 of the 300 posts (86%) were still up.

90 of the 100 accounts also remained active.

Each post was in clear violation of at least one of X’s policies against hateful conduct, which prohibit incitement and harassment of others on the basis of protected characteristics. 

Some posts were also in violation of rules against slurs, dehumanization, hateful imagery and the targeting of others by referencing genocide.

Twitter/X is enabling dangerous content that could lead to real-life violence. The longer Congress delays social media regulation, the longer people’s safety will be in the hands of greedy and unreliable billionaires.

Platforms and their owners must be held accountable for enabling the spread of hate and misinformation.

You can access our database here. Viewer discretion is advised.