How to report misinformation on social media
Disinformation and misinformation spread like wildfire on social media in times of conflict, war, and emergencies.
The amount of content shared on these platforms is so overwhelming that it can be hard for most people to discern what is true and what is false. This problem has worsened since some social media companies slashed their Trust and Safety teams – and are shamelessly failing to moderate content.
However, as users, we can take action to help stop the spread of content we know to be false. First, we should always verify information before amplifying it, being cautious about what we share and engage with.
Equally important is reporting false information when we see it. Users can report posts that spread lies, conspiracies, or misleading claims directly to the platforms. Each social media platform has its own Community Standards defining what can and cannot be posted.
Our research shows that platforms sometimes fail to act on reported content. Nonetheless, users should still report when misinformation and disinformation flood their timelines.
Here's how you can do it on each platform:
- Facebook
- Instagram
- TikTok
- LinkedIn
- YouTube
- WhatsApp
- Telegram
*Twitter/X no longer allows users to report posts as misleading, but it's still possible to report them for hate, abuse and harassment, child safety, violent speech, spam, privacy, suicide or self-harm, sensitive or disturbing media, deceptive identities, and violent and hateful entities.
If you're struggling to stay on social media in times of crisis, check out these tips to help you navigate these platforms and build information resilience.