New Report: X (Twitter) continues to host 86% of posts reported by CCDH for extreme hate speech – while threatening to sue anti-hate groups

Posted on September 13, 2023 in Press releases.

Cover of CCDH's latest report on X's content moderation failures. The cover shows X's logo and distorted screenshots of hateful tweets identified throughout the report.

The Center for Countering Digital Hate (CCDH) reported 300 tweets containing extreme hate to X; the platform left up 259 of them

CCDH stands with the Anti-Defamation League (ADL) amid Musk’s latest legal threat against independent research into online hate

WASHINGTON, DC (September 13, 2023) – New research by the Center for Countering Digital Hate published today shows that X (formerly Twitter) continues to host nearly 86% of a set of 300 posts reported for hate speech – posts promoting and glorifying antisemitism, anti-Black racism, neo-Nazism, white supremacy and other forms of racism.

CCDH also found dozens of advertisements for household brands, such as Apple and Disney, appearing next to hate speech – despite X CEO Linda Yaccarino’s claims to have “built brand safety and content moderation tools that have never existed before at this company”.

Researchers collected a sample of 300 posts containing hate speech from 100 accounts (three posts per account). Taken together, the 100 accounts identified in the research have a combined total of 1,060,106 followers.

One week after the posts were reported to moderators (on August 30 and 31) via official reporting tools, researchers found that X left up 259 of 300 posts (86.33%).

90 of 100 accounts also remained active.

Each post was in clear violation of at least one of X’s policies against hateful conduct, which prohibit incitement and harassment of others on the basis of protected characteristics. 

Some posts were also in violation of rules against slurs, dehumanization, hateful imagery and the targeting of others by referencing genocide.

Examples of extreme posts left online by X include:

  • Posts denying the Holocaust, or mocking victims of the Holocaust
  • Posts glorifying the Nazis, including one describing Hitler as “a hero who will help secure a future for white children!”
  • Memes accusing Black people of being harmful to “A quiet, peaceful, functioning society”
  • Posts claiming “Blacks don’t need provoking before becoming violent. It’s in their nature.”
  • Posts condemning interracial relationships – specifically, encouraging others to “Stop Race mixing” and “break up with your non-white gf today”

Please see a full breakdown of results here.

CCDH’s report comes a week after Elon Musk, the owner of X, threatened to file a defamation lawsuit against the ADL (Anti-Defamation League), claiming that the ADL’s statements about rising hate speech on the platform negatively impacted its advertising revenue.

On July 31, Musk’s X Corp filed a lawsuit against CCDH over its reporting of the proliferation of hate and disinformation on the platform under Musk’s leadership. CCDH has previously said it views the lawsuit as a move “straight out of the authoritarian playbook.”

“Musk claims to be pro-free speech, yet he has proven incredibly thin-skinned, issuing legal threats when criticized for his botched management of Twitter/X. It is not free speech but hate speech which has truly prospered in his time at Twitter/X, emboldened by his interactions with far-right white supremacists and conspiracy theorists. Impunity for loud bigots creates a hostile environment for the victims of abuse and most folks who don’t want to venture into a cesspit of hatred every day,” said Imran Ahmed, founder and CEO of CCDH.

“Antisemitism is a form of defamation against the entire Jewish people, who have already suffered an unimaginable burden due to hate and lies. While Musk demands immediate legal action for what he claims is defamation against his character, he continues to refuse to be held accountable for the defamation he spreads against all Jews,” continued Ahmed.

For more than two decades, social media companies have hidden behind the legal protections conferred by Section 230 of the Communications Decency Act, passed in 1996, before modern social media platforms existed. The legislation states that these companies cannot be held liable as publishers for the hate, antisemitism, and disinformation that they push to billions.

“It is an intolerable inequality under the law, caused by the US Congress’ stark failure to pass meaningful transparency and accountability legislation for these enormous platforms that privately control, monetize and distort public discourse to suit their commercial agenda,” said Ahmed. “It is time for Congress to ensure that hate has consequences.”

CCDH has been at the forefront of researching the rise of hate speech on X since it was acquired by Musk in October 2022.

In December 2022, CCDH research revealed tweets containing hateful slurs were up between 33% and 202% in the month after the takeover.

The non-profit also recorded a 119% surge in tweets promoting the slur linking LGBTQ+ people with ‘child grooming’ in the four months after the takeover, and reported that the platform failed to act on 99 of 100 hateful posts by paid subscribers to the ‘Twitter Blue’ service.

And in August 2021, CCDH found that Twitter allowed 89% of antisemitic posts to remain on the platform after they were reported.

For this report, researchers identified a subset of 140 posts that promoted antisemitism, including racist caricatures of Jewish people and claims that Jewish people control the world. 

X continued hosting this content in 85.00% of cases (119 out of 140).

X continued to host 16 posts that researchers identified as containing Holocaust denial, including posts that mock victims of the Holocaust and present death camps as benign.

Ads for household brands appearing next to extreme hate speech

CCDH also found that advertisements from household brands, such as Apple and Disney, appeared next to examples of extreme hate speech recorded in the study. 

Researchers identified 38 such ads, which were recorded via accounts created by researchers. The ads appeared either on the “For You” feed or on the profiles of accounts promoting hate speech.

On the point of brand safety on X, CEO Linda Yaccarino has previously said: “since acquisition, we [X] have built brand safety and content moderation tools that have never existed before at this company.”

In hopes of winning back advertisers, X recently signed a new brand safety deal with digital ad-tech firm Integral Ad Science and introduced an industry-standard blocklist that it says will “protect advertisers from appearing adjacent to unsafe keywords in the Home Timeline”.