Israel-Gaza crisis: X fails to remove 98% of posts reported by the CCDH for hate and extremism
- Researchers find that X continued to host 98% of the 200 posts they reported to the platform for promoting antisemitism, Islamophobia, anti-Palestinian hate, or other hate speech
- All posts in the sample breached platform rules against hateful content, which prohibit racist slurs, dehumanization, and hateful imagery
- 98% of sample posts remained on the site seven days after they were reported to moderators
WASHINGTON, DC / LONDON (11/14/23) – Elon Musk’s X (previously Twitter) continues to host the overwhelming majority of a sample of posts that breach platform rules for promoting antisemitism, Islamophobia, anti-Palestinian hate and other hateful rhetoric in the wake of the Israel-Gaza crisis, the Center for Countering Digital Hate has found.
The CCDH’s study, published Tuesday 14 November, comes amid several warnings of a rise in hate speech and misinformation on X and other platforms following the outbreak of the Israel-Hamas conflict.
Researchers collected a total of 200 hateful posts that were published after Hamas’ attacks on Israel on 7 October – all of which either directly addressed the ongoing conflict, or appeared to be informed by it. The posts were collected from a total of 101 separate X accounts.
The posts were reported to moderators for breaching platform rules via the official reporting tools on Tuesday 31 October. The sample of posts was subsequently reviewed on Tuesday 7 November to audit the action taken.
Despite having a full week to process the reports, researchers found that X continued to host 98% (196) of the 200 posts.
Posts the platform continued to host included those that:
- Incite violence against Muslims, Palestinians, and Jewish people
- State that “Hitler saw Jews for what they were”
- Claim that Muslims are “smelly rats”
- Refer to Palestinians in Gaza as “animals”
- Deny and diminish the Holocaust
- Promote antisemitic caricatures
- Promote antisemitic conspiracy theories
- Deny the existence of Palestinians as a people
- Glorify Nazis and Nazism
In total, the posts that remained up have accrued 24,043,693 views. Out of the 101 accounts in the study, only one was suspended and a further two “locked”.
Of the 101 accounts in the sample, 43 are verified, meaning they benefit from algorithmic boosts to the visibility of their posts.
Imran Ahmed, CEO and founder of the Center for Countering Digital Hate (CCDH), said:
“After an unprecedented terrorist atrocity against Jews in Israel, and the subsequent armed conflict between Israel and Hamas, hate actors have leapt at the chance to hijack social media platforms to broadcast their bigotry and mobilize real-world violence against Jews and Muslims, heaping even more pain into the world.
“X has sought to reassure advertisers and the public that they have a handle on hate speech – but our research indicates that these are nothing but empty words.
“Our ‘mystery shopper’ test of X’s content moderation systems – to see whether they have the capacity or will to take down 200 instances of clear, unambiguous hate speech – reveals that hate actors appear to have free rein to post viciously antisemitic and hateful rhetoric on Elon Musk’s platform.
“This is the inevitable result when you slash safety and moderation staff, put the Bat Signal up to welcome back previously banned hate actors, and offer increased visibility to anyone willing to pay $8 a month. Musk has created a safe space for racists, and has sought to make a virtue of the impunity that leads them to attack, harass and threaten marginalized communities.”
Notes on methodology:
To collect the sample of hateful posts, researchers identified a list of accounts posting hateful content by searching through the followers, followings, likes and retweets of known hateful accounts.
For each account, CCDH selected between one and six posts promoting hateful rhetoric.
The sample should not be seen as a representative sample of posts relating to the Israel-Gaza crisis, but rather as a means of testing X’s moderation systems.
The sample comprised 200 posts by 101 X accounts.
The posts were reported on 31 October using X’s own tools for flagging hateful conduct and were subsequently reviewed seven days later, on 7 November, to audit action taken.