Hate Pays

How X accounts are exploiting the Israel-Gaza conflict to grow and profit

CCDH's new report shows that 10 X (Twitter) accounts promoting anti-Jewish, anti-Muslim, and anti-Palestinian hate grew four times faster after October 7 than before.


About

A new report by the Center for Countering Digital Hate shows how, since the outbreak of the Israel-Gaza conflict on October 7, accounts posting anti-Jewish and anti-Muslim content have seen a sharp rise in followers on X (Twitter). What’s worse, the report finds that X profits from anti-Jewish and anti-Muslim content relating to Israel-Gaza, serving ads from legacy brands near hate speech and misinformation.

An intro from CCDH CEO Imran Ahmed

Antisemitism is one of the most pernicious, persistent, and fundamental evils in our societies.

In all the work CCDH does, we see the symbols and language of antisemitism – its semiotics – threaded through other forms of hatred and conspiracy. For example, anti-vaccine disinformation spreaders invent evil cabals of malignant powerbrokers seeking to control the Earth, and repurpose Nazi-era imagery to those ends.

However, seeing antisemitism in its most literal form – targeting Jews for their identity and race – shocks like nothing else. We know where such unabashed racism and hatred leads – to the ghettos of Eastern Europe, the horrors of Auschwitz, the murderous terror in synagogues in Pittsburgh and Poway, and the butchery of October 7th, 2023 in southern Israel.

On that day, while the world recoiled in horror, some cynics saw an opportunity to profit. In this study, we examined accounts that exploited the content moderation failures on Elon Musk’s X, and the way its algorithms reward engagement – positive or negative – to profit from tragedy. The platform seemingly rewarded these accounts for producing controversial, sensational, and engaging content with turbocharged follower growth, greater visibility, and, where revenue sharing is permitted, increased revenues.

Social media platforms know this dynamic exists. It’s a byproduct of their drive to target us constantly with high-engagement content that increases time spent on the platform by engaging our minds with things that thrill, titillate, or terrify us. We know this gives disinformation and hate an advantage on those platforms. This isn’t a level playing field; it’s tilted in favor of hate and lies. Those preaching tolerance and goodwill have to ice-skate uphill to keep up.

Each account we study here should have been assessed for violations of community standards against hate speech. Yet none was closed down, despite repeated violations.

None of these accounts deserved the enormous visibility they received by cynically goading, upsetting, and terrorizing others into emotional responses. Using case studies, we show how these posts were given additional reach and engagement by critics. Indeed, CCDH warned in its first-ever report, Don’t Feed the Trolls, that even “negative” engagement counts as engagement on social media platforms, making content more attractive to the algorithms that decide what gets promoted in timelines and what does not.

This isn’t about ‘creators’ or platforms’ vaunted defense of free speech. Ultimately, this comes down to profit. Our timelines are curated by companies that want to monetize our attention, enabling malignant and successful ‘engagement farmers’ who jump on trending topics to boost their profits regardless of the lies and hate they spread. All too often, the mosaic of posts that makes up our timelines isn’t a reflection of the world around us but a race to the bottom of hate and sensationalism, with the most profitable ad placed next to it.

Our study demonstrates a disturbing reality: after October 7th, a day of enormous evil, the platforms rewarded speech that has no justification on any grounds.

We need platforms to improve how they curate content and to implement effective guardrails against promoting the most offensive hate speech. Any platform that fails to protect the integrity of its space deserves criticism and scrutiny.

This study seeks to create an informed discourse on how platform owners can create winners and losers on social media, and on why we have allowed companies to profit while toxic accounts divide communities and terrorize Jewish people, who face a wall of hatred when they log on each day. Elon Musk’s notion of free speech, combined with algorithms tuned to monetize content as aggressively as possible, has created a race to the bottom. This phenomenon is not unique to Twitter, but what makes X so dangerous is that the platform’s rollback of content moderation has left it in a death spiral, with content in an algorithmically accelerated race to the bottom. The accounts weaponizing tragedy to grow and profit are symptomatic of the toxic environment Mr Musk has created.

It isn’t rocket science; it’s just common sense to recognize that effective impunity for abusers, bigots, and racists destroys the ability of ordinary people to enjoy platforms and exercise their right to free speech. It’s time these platforms were forced to be more transparent and accountable. Lawmakers simply can no longer rely on platforms to enforce their own rules and standards – this is not an issue of enforcing terms and conditions but of negligence toward user safety and the widespread harm caused by the overall toxicity of a platform that all but welcomes and rewards antisemites for posting hate.