Social media’s role in mobilizing far-right UK rioters
Malicious speculation and false information spread across social media following the stabbing attack in Southport, UK – which took the lives of three girls aged between six and nine. Violent unrest erupted first in Southport, then in other locations around the UK. Mosques, hotels, and public buildings have been attacked, law enforcement officers have been injured, and innocent people from migrant and Muslim backgrounds have been targeted with abuse.
Social media platforms failed and allowed violent riots to erupt
There are many factors which contributed to this outbreak of violence. But by failing to identify and quell disinformation about the stabbing, social media companies contributed to the explosion of violence which has followed this tragic incident.
Early investigations into which social media platforms and messaging services were implicated in the violence have found that Telegram and X (formerly Twitter) have played a role. While Telegram is an encrypted messaging platform and therefore difficult to assess, X posts containing false, incomplete, or maliciously suggestive information are publicly accessible and have been seen by millions of users in the UK.
Algorithms privilege incendiary content
Much of CCDH’s published research has examined the relationship between social media algorithms and the promotion of hateful content online, showing how platforms’ recommendation systems amplify hateful and false information.
False claims about the Southport stabbing have been algorithmically promoted and disseminated. These claims have been shared and reshared on social media and often included anti-immigrant and anti-Muslim views. Disinformation has been circulated, evidently with the intention of inciting a violent offline response. For example, in one now-deleted post, an X user shared a screenshot from an account falsely claiming to be the parent of one of the children killed in the stabbing, which made claims about the attacker’s migration status. This false claim was widely circulated and picked up by a web of accounts masquerading as news outlets.
Another dynamic at play is a phenomenon that CCDH documented in our 2019 report Don’t Feed the Trolls: backlash against the original posts and attempts to correct the false information they contain increases online engagement with the falsehood, and thus increases its algorithmic promotion and visibility.
The financial rewards of hate
In moments of informational chaos, there are those who seek to profit by spreading false and hateful disinformation, knowing it will be rewarded by social media algorithms. CCDH has pointed out the incentives which allow malevolent accounts to exploit tragedies such as the Southport stabbing and to post incendiary content with the sole purpose of accumulating more followers and online engagement that they can turn into financial rewards from the platform.
It is likely that this dynamic is in play here, just as CCDH evidenced following the 7th of October attack on Israel. This incentive to profit from chaos is all the more concerning given Elon Musk’s reinstatement of previously banned bad actors, opening the gates for tragedy grifters to operate on his platform.
Previous CCDH research has documented the role of ‘blue tick’ accounts in the spread of toxic information. Purchasers of blue ticks receive special privileges from X, including promotion of their posts and greater visibility of their accounts to other users on the site. Reporting by the BBC and others has shown blue tick accounts around the world piggybacking off the riots to post unevidenced conspiracy theories which have inflamed the situation.
What can be done to hold social media companies to account for their role in the violence?
As UK Prime Minister Sir Keir Starmer and Home Secretary Yvette Cooper have said, social media companies must be held to account for their role in the wave of violence following the tragic Southport attack.
While the UK has passed the Online Safety Act, which places a legal responsibility on social media and search services to identify and remove illegal content on their platforms, this law is still being implemented by the regulator and will not be enforced until 2025. CCDH hopes that the powers granted by the Online Safety Act are sufficient to address situations like this explosion of disinformation, but the Government must keep the efficacy of the regime under constant review and grant new powers to the regulator as needs arise. While supporting the robust implementation of the Online Safety Act, CCDH will continue to campaign for policy interventions to address platforms’ inaction on hate and disinformation.
Our call on the UK Government:
- Make clear to platforms that they will be held to account for failing to remove and de-amplify hateful content and disinformation
- Utilize existing powers under the Public Order Act, the Online Safety Act, and criminal law to clamp down on hate speech and hate actors online
- Amend the Online Safety Act to grant the regulator proactive powers to intervene when disinformation spikes
- Empower researchers with statutory data access rights, increasing expert capacity to identify developing crises and alert authorities to developments before they become violent
- Work with groups representing targeted communities to provide support for those affected and involve them in discussions on tackling hate and disinformation targeting their communities.
Our call on social media platforms:
- De-amplify hateful narratives and clamp down on the virality of hate speech
- Stop all monetization related to the accounts of hate actors and those who incite violence
- Institute crisis response protocols for rapid deployment in times of information chaos
- Fundamentally alter systems so that they are safe by design, detect rule-violating posts before they are able to spread widely, and introduce friction into the viral spread of false information
- Enforce their existing policies on hateful conduct and illegal activity
- Stop closing independent oversight pathways and enable data access for researchers to increase the capacity to identify and respond to developing situations
Social media companies and policymakers must act now to stop the spread of hate and disinformation. If you care about stopping online-fueled violence across the UK, then add your name and demand action now.