STAR Framework: Safety by Design

Posted on January 11, 2023 in News.

Don’t leave it to chance [or profiteers] – embed safety by design from the beginning

As we start a new year, now is a good time to reflect on some of the key highlights from our research in 2022 and explain how our STAR Framework – A Global Standard for Regulating Social Media – provides a basis for better understanding and disrupting the online hate and harmful mis/disinformation that we see every day in our work. This is the first in a series of policy blogs on the STAR Framework, and it focuses specifically on Safety by Design.

What is the STAR Framework?

Before jumping into the research and analysis, you may be asking – what is STAR and where did it come from? Good question. We developed the STAR Framework in consultation with regulators, legislators, civil society and academics following CCDH’s Global Summit, held in Washington D.C. in May 2022. The Summit brought together some of the leading thinkers from the UK, US, EU, Canada, Australia, and New Zealand to reflect on the current state of online harm and on the solutions different jurisdictions were developing.

We saw a need to develop a values and research-driven framework to support global efforts to regulate Big Tech companies, and to ensure that these core standards were shared globally for maximum connectedness and effectiveness.  Because both malicious individuals and Big Tech operate across borders, regulation will be most effective when we work together and ensure it does too.

The core elements of the STAR Framework are: 

  • Safety by Design 
  • Transparency of algorithms, rules enforcement and economics (advertising) 
  • Accountability to independent and democratic bodies
  • Responsibility of companies and their senior executives.

A short explanation of each element is outlined in the table below, and you can read the full STAR Framework here.

What is Safety by Design?

As any good builder knows, the foundations of a house need to be safe before you build extensions and let people live in it. Safety by Design means that technology companies need to be proactive at the front end to ensure that their products and services are safe for the public, particularly minors. Rather than waiting for harm to occur, safety by design principles adopt a preventative, systems-based approach to harm. This includes embedding safety considerations through risk assessments and decisions when designing, implementing, and amending products and services.

Safety by design is the basic consumer standard that we expect from companies in other sectors.  The Office of the Australian eSafety Commissioner has written extensively on Safety by Design and has resources available for companies that are proactively seeking to embed this within their operations – making compliance easy.

2022 Research Highlights and What They Mean for Safety By Design

In 2022, CCDH produced several studies that demonstrate how Big Tech platforms are failing to meet safety by design standards in the current unregulated environment. 


Case Study: Metaverse

For us here at CCDH, 2022 began with talking to legislators and regulators about the findings from our unique investigation into VRChat, the most popular dedicated social app available on Meta’s VR platform. When they launched the Metaverse, Mark Zuckerberg and his head of PR, Nick Clegg, publicly promised that safety had been built in from day one – but CCDH researchers found that the opposite was true. VRChat, the most reviewed social app in Facebook’s VR Metaverse, was rife with abuse, harassment, racism and pornographic content. We found that users, including minors, were being exposed to abusive behavior once every seven minutes, such as:

  • Minors being exposed to graphic sexual content.
  • Bullying, sexual harassment and abuse of other users, including minors.
  • Minors being groomed to repeat racist slurs and extremist talking points.
  • Threats of violence and content mocking the 9/11 terror attacks.

As we found them, we reported all of these incidents to Facebook using their web reporting tool. All of our reports about users who abused and harassed other users went unanswered.

Frankly, this is not what we would have expected to see if safety by design was embedded, as the company executives had publicly promised. 

The frequency of the abuse and harassment that we identified – one incident every seven minutes – indicates that little thought or resource was put into designing the safety parameters of the platform or into proactive monitoring of compliance with the platform’s terms and conditions. In addition, we found no evidence that the system was responsive to complaints when abuse did occur, as all of our complaints went unanswered.

Abuse and harassment over chat were predictable risks that could and should have been identified and addressed through a risk assessment and design process, given that chat/messenger functionality already exists in Meta’s other products, such as Instagram and Facebook.

These known problems, and the harm they can cause, are amplified given the immersive nature of virtual reality. The fact that this product was marketed as safe is deeply concerning and seems to have been motivated solely by commercial benefits, including early release for Christmas sales, rather than a real commitment to public safety.


Case Study: Meta (Facebook and Instagram), Twitter, TikTok, YouTube

We conducted a number of studies in 2022 which looked at whether platforms were responsive to complaints and reports that content had breached their stated terms and conditions on hate speech and disinformation. These reports were made using the tools that the companies had available on their platforms – which you would expect to work, right?  

All of the major social media platforms are failing to effectively respond to reports that their standards have been breached.  

For example, in our study on anti-Muslim hate, we found that Facebook, Instagram, TikTok, Twitter, and YouTube collectively failed to act on 89% of posts containing anti-Muslim hatred and Islamophobic content reported to them: Facebook failed to take action against 94% of posts promoting anti-Muslim hate; Twitter 97%; YouTube 100%; Instagram 86%; and TikTok 64%.

This study reflects similar results from our other research.

In short, the big platforms are consistently failing to take appropriate action when reports are made using their systems. This means that either:

  • their reporting systems aren’t working, or
  • they have made a business decision not to resource the enforcement of their standards, making the publication of the standards themselves worthless.

These are not just statistics and theoretical problems. The impact on individuals and communities is very real. As Amber Heard commented in the Hidden Hate report:

“The amount of people who might be in a similar situation to what I was back in 2015 who look at what has happened to me and decide to not act in the interests of their safety, or their voice, is scary to think about.” Heard is concerned about other people with fewer resources than her who are impacted by online abuse: “If I can’t utilize this tool, if I can’t open Instagram, if I can’t engage at all, then what does it say about a person who doesn’t have the emotional resources that I have, that come with age and experience?”

As the Interim Director of the Human Rights Campaign (HRC), Joni Madison, noted in her joint introduction to the Digital Hate report:

“Violent rhetoric leads to stigma and radicalization, which leads to violence. Nearly 1 in 5 of any type of hate crime is now motivated by anti-LGBTQ+ bias, and the last two years have been the deadliest for transgender people, particularly Black transgender women, we have seen since we began tracking fatal violence against the community.”

We know that the failure of Big Tech companies to act on online hate in a dedicated, resourced and systematic way, embedding safety by design, can lead to a significant risk of offline harm to communities, as we have seen only too recently in tragic events in Colorado, Buffalo, El Paso, Halle, Christchurch, and Myanmar.

So how would a safety by design approach help change outcomes? It would mean that user safety was paramount. It would mean that there was a way to report abusive content and abusive users regardless of how messages were sent – for example, in our Hidden Hate study, we discovered that there was no way to report abusive content received in voice notes sent via DMs. It would mean that abuse, harassment and harmful content that was reported was acknowledged and dealt with through a responsive reporting system. And it would mean that policies were enforced and that safety had been factored into the design of each of the platform’s features.