Big Tech platforms fail to act on 89% of anti-Muslim hate speech, new study finds
Facebook, Instagram, TikTok, Twitter, and YouTube collectively fail to act on 89% of posts containing anti-Muslim hatred and Islamophobia—even after they are reported to moderators, a new report has found.
- New research finds that five major social media platforms consistently failed to address anti-Muslim racism—even after abusive posts were reported to moderators
- Twitter performed particularly poorly, ignoring 97% of Islamophobic posts. Facebook failed to act against 94% of Islamophobic posts; YouTube 100%; Instagram 86%; and TikTok 64%
- Facebook was also found to have hosted several large groups specifically dedicated to spreading anti-Muslim hatred, with a combined 361,922 followers
- Tech platforms also failed to address 89% of posts promoting the “Great Replacement” conspiracy theory—directly violating pledges they made following the 2019 Christchurch mosque terror attacks
- CCDH is a US non-profit (501c3) that researches the architecture of online hate and misinformation. The Center has offices in Washington, D.C. and London
Using the platforms’ own reporting tools, the Center for Countering Digital Hate (CCDH) flagged 530 separate posts which contained bigoted and dehumanising content designed to target Muslim people through racist caricatures, conspiracies, and false claims.
CCDH’s Failure to Protect report finds that the posts were collectively viewed at least 25 million times.
Much of the hateful content was easily identifiable, and yet the platforms still chose not to act, said Imran Ahmed, CCDH’s Chief Executive.
Instagram, TikTok and Twitter permit hashtags such as #deathtoislam, #islamiscancer and #raghead. Content spread using these hashtags received at least 1.3 million impressions.
Failure-to-act rates by platform:
- Facebook: 118 of 125 posts (94.4%)
- Twitter: 102 of 105 (97.1%)
- YouTube: 23 of 23 (100%)
- Instagram: 195 of 227 (85.9%)
- TikTok: 32 of 50 (64%)
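The per-platform counts reported above also imply the headline 89% figure. A minimal sketch, using only the post counts stated in this report, to check the arithmetic:

```python
# Per platform: (posts the platform failed to act on, posts reported) -- figures from the report
failures = {
    "Facebook": (118, 125),
    "Twitter": (102, 105),
    "YouTube": (23, 23),
    "Instagram": (195, 227),
    "TikTok": (32, 50),
}

# Per-platform failure-to-act rate
for platform, (failed, reported) in failures.items():
    print(f"{platform}: {failed}/{reported} = {100 * failed / reported:.1f}%")

# Collective rate across all 530 reported posts
total_failed = sum(f for f, _ in failures.values())
total_reported = sum(r for _, r in failures.values())
print(f"Overall: {total_failed}/{total_reported} = {100 * total_failed / total_reported:.0f}%")
# Overall: 470/530 = 89%
```

Summed across platforms, 470 of the 530 reported posts went unactioned, which rounds to the 89% collective failure rate cited throughout.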
These findings echo CCDH’s previous Failure to Act reports. Earlier this month, researchers found that Instagram fails to act on 90% of user reports of misogynist abuse sent via Direct Message, and in 2021 CCDH discovered that Big Tech platforms collectively ignore 84% of antisemitic posts.
Christchurch mosque attacks and Big Tech’s commitment to the Christchurch Call
The 2019 terror attack on two mosques in Christchurch, New Zealand, claimed the lives of 51 people. The terrorist was an adherent of the “Great Replacement” conspiracy theory—a white supremacist and Islamophobic ideology which claims that non-white immigrants are ‘replacing’ white people and culture in Western countries.
Two months after the attack, New Zealand’s Prime Minister Jacinda Ardern and French President Emmanuel Macron led an international summit aiming to eliminate terrorist and violent extremist content online, known as The Christchurch Call to Action.
Twitter, Meta, Google and YouTube each signed up to the Call, which commits them to taking measures “seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination… including its immediate and permanent removal.”
They also pledged to “enforce community standards or terms of service in a manner consistent with human rights and fundamental freedoms.”
The social media giants also released a joint statement which promised they would be “resolute in our commitment to ensure we are doing all we can to fight the hatred and extremism that lead to terrorist violence.”
Contrary to these pledges, the platforms collectively took no action against 88 of 99 posts (89%) which promoted the “Great Replacement” theory.
Facebook and Instagram ignored 97.5% and 80% of such posts, respectively.
Imran Ahmed, Chief Executive of the Center for Countering Digital Hate (CCDH), said:
“Much of the hateful content we uncovered was blatant and easy to find – with even overtly Islamophobic hashtags circulating openly, and hundreds of thousands of users belonging to groups dedicated to preaching anti-Muslim hatred.
“When social media companies fail to act on hateful and violent content, it normalises these opinions, gives offenders a sense of impunity, and can inspire offline violence. Adherents to the Great Replacement conspiracy theory committed mass murder at both the Christchurch mosques and Tree of Life synagogue.
“Platforms are aware that highly emotional, hate-filled misinformation keeps people glued to their platforms, which is what drives profit. So they aren’t incentivised to spend money on cleaning it up.
“The public has a right to demand that Big Tech be more transparent and accountable, and act more responsibly.”
Sumayyah Waheed, Muslim Advocates’ senior policy counsel, said:
“Dangerous anti-Muslim content that clearly violates social media companies’ rules continues to run rampant on platforms—leading to threats, violence and even genocide against Muslims worldwide.
“This eye-opening report confirms what we’ve experienced for years: social media platforms are failing to take down prohibited anti-Muslim content online that leads to anti-Muslim hate and violence in the real world.”
Rita Jabri Markwell of the Australian Muslim Advocacy Network (AMAN) said:
“Three years on from Christchurch, social media companies are full of spin when it comes to fighting the drivers of violence. We are not surprised by these findings but it’s a relief to have our experiences investigated and validated.
“Across the world, from India to Australia, Europe to North America, anti-Muslim conspiracy theories have been used to stir violence and extreme politics.”