7 Ways Meta is Harming Kids: Findings from the Company’s Internal Studies

Posted on February 22, 2024 in Explainers.


Meta, owner of Facebook and Instagram, faces growing public scrutiny over the child safety risks of its apps. In 2023, 42 U.S. state Attorneys General sued Meta for allegedly designing its apps in ways that harm children’s health and safety. This litigation has unearthed disturbing facts about the spread of harmful content on Instagram.

Among the recently unsealed documents is internal research on child safety risks that Meta disclosed to the state Attorneys General. This research – called the Bad Experiences and Encounters Framework (BEEF) survey – asked almost 240,000 Instagram users whether they had experienced 22 different kinds of online harms, ranging from misinformation and hate speech to unwanted sexual advances and self-harm.

Arturo Béjar, a former Meta employee who led the company’s online safety efforts, conducted the BEEF survey from June 27 to July 8, 2021. In November 2023, Béjar blew the whistle and testified before Congress about the child safety risks he identified while working at Meta. CCDH obtained a copy of the BEEF survey. Here are seven revelations you should know:

1) Large numbers of users reported encountering all types of harmful content.

The findings of the 2021 BEEF survey demonstrate that online harms are rife on Instagram. Users of all ages were asked whether they had encountered harms during the previous 7 days, and the results were damning:

  • 30.3% of users said they saw misinformation
  • 25.3% witnessed hate
  • 11.9% received unwanted sexual advances
  • 6.7% were exposed to self-harm content

For users who experienced harmful content, the BEEF survey followed up by asking how many encounters they had with that content during the previous 7 days. For instance, a user who encountered hate speech was asked how often they had seen hateful content. On average, users who reported encountering hate speech, self-harm content, or unwanted sexual advances said they saw this content 3 to 4 times.

2) Users said that Instagram made them feel worse about themselves, with negative consequences for their mental health.

The BEEF survey’s findings also show how Instagram can be harmful to kids’ mental health. The survey asked users, “Have you ever felt worse about yourself because of other people’s posts on Instagram?” 19.2% of respondents said yes. These users were then asked how many times they had felt this way in the previous 7 days; the average was about 4.

According to the Attorneys General’s complaint, researchers at Meta have raised fears that its products damage users’ mental health, especially that of teenage girls. For instance, the complaint cites an internal presentation noting that teens on Instagram suffered from “constant negative comparisons”, including 66% of teen girls and 40% of teen boys. In another presentation quoted by the complaint, Meta’s researchers concluded that these negative social comparisons “[c]an cause or exacerbate a number of issues,” including “body image, eating disorders, anxiety, loneliness, depression, envy, online aggression, [and] passive use.”

3) While Meta claims to prioritize child safety, children disproportionately experienced safety risks on Instagram.

A majority of users overall reported encountering at least one type of harm in the previous 7 days, but kids reported the most exposure. Among adult users, this was the case for 43.5% of those aged 35 to 44 and 31.2% of those aged 45 and up.

Kids fared even worse: 54.1% of teens aged 13 to 15 and 57.3% of teens aged 16 to 17 said they had encountered at least one harm in the previous 7 days. For a company that claims protecting children is a “top priority”, these statistics are damning: children experience harms at much higher rates than other groups of users.


4) Instagram’s design is riddled with safety risks, according to users.

Not all of Instagram’s features are created equal. The BEEF survey found that some were particularly associated with safety risks. To make this assessment, users who encountered an online harm were asked: “In the last 7 days, where in Instagram did you see this [harm]? Please select all that apply.” (Users could select multiple responses, so the percentages add up to more than 100%.)

Three features stood out:

  • Instagram’s Chat/Direct Messaging (DM) feature was where many users experienced sexual harassment and bullying. 68.6% of users who received unwanted sexual advances and 60.1% of users who were bullied said it happened on Instagram’s DM feature.
  • Instagram’s Feed/Stories feature often recommended content that made people feel worse about themselves or was hateful. 40.9% of users who felt worse about themselves and 31% of users who saw hate speech said Instagram’s Feed/Stories feature showed them the offending content.
  • Instagram’s Search/Explore feature also recommended content to many users that made them feel worse about themselves or was hateful. 36.3% of users who felt worse about themselves and 31.3% of users who witnessed hate said Instagram’s Search/Explore feature displayed the harmful content.

5) Exposure to online harms pushed large numbers of Instagram users offline.

The BEEF survey found that several types of harmful content caused a majority of users to feel discouraged from posting on Instagram. This applied to 56% of users who felt worse about themselves, 51.8% of users who encountered self-harm content, and 51.7% of users who were bullied. 

Similarly, after encountering harmful content, many users reported closing the Instagram app. This was the case for more than half of users who felt worse about themselves, 41.9% of users who were exposed to self-harm content, and 39.4% of users who saw hate speech.

6) Meta’s measurement of online harms doesn’t tell the whole story.

Meta’s chosen approach to measuring the spread of harmful content is called “prevalence”. Meta calculates prevalence by estimating how many views were received by content that violates its rules and then dividing by the total number of views on the platform. 

But child safety experts have faulted Meta’s prevalence metric as deceptive and incomplete. In his written testimony to Congress, Arturo Béjar criticized Meta’s approach to measuring online harms:

“Meta’s current approach to these issues only addresses a fraction of a percent of the harm people experience on the platform…[T]here is a material gap between their narrow definition of prevalence and the actual distressing experiences that are enabled by Meta’s products.”

Meta has every incentive to define the scope of online harms as narrowly as possible and downplay the spread of harmful content. This makes prevalence a useful metric to deny the problem and delay taking action.

Prevalence obscures the aggregate effects of harmful content on users. In calculating prevalence, high rates of harmful content can be masked behind a huge denominator (the total views on a platform), and harmful content that isn’t included in Meta’s rules is outright ignored. Prevalence also communicates nothing about the exact nature of harmful content in circulation.

For instance, in 2021, shortly after Béjar finished conducting the BEEF survey, a senior Meta executive announced that the prevalence of hate speech on Facebook was 0.05%. 

While this number may seem very small, Meta’s platforms have massive user bases and host billions of posts every day, so even a prevalence of 0.05% or less could mean that hate speech was viewed millions of times (a rough calculation below illustrates this). And this estimate doesn’t account for the kinds of hate speech that weren’t covered by Meta’s rules, nor does it provide any specific insight into where and against whom the hate speech was being directed, which is crucial for efforts to understand and address it.
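Here is a minimal back-of-the-envelope sketch of that arithmetic. The daily view count below is a purely hypothetical assumption chosen for illustration, not a figure reported by Meta; only the 0.05% prevalence comes from the executive’s announcement.

    # Back-of-the-envelope check: a tiny prevalence applied to a huge number
    # of daily views still implies millions of views of hate speech.
    # NOTE: total_daily_views is a hypothetical assumption for illustration,
    # not a figure reported by Meta.
    prevalence = 0.0005                     # 0.05%, the figure Meta announced for Facebook
    total_daily_views = 10_000_000_000      # hypothetical: 10 billion content views per day

    hate_speech_views_per_day = prevalence * total_daily_views
    print(f"{hate_speech_views_per_day:,.0f} views of hate speech per day")
    # Output: 5,000,000 views of hate speech per day

Even with a much smaller assumed denominator, the absolute number of views quickly reaches into the millions, which is exactly the scale that a headline figure of “0.05%” obscures.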

The BEEF survey represents a different approach to assessing online safety risks. By directly asking users about their experiences, Béjar was able to obtain a more complete and comprehensive understanding of harms on Instagram. 

Researchers should develop more alternatives to prevalence that can holistically measure online harms, including through direct access to platform data about harmful content. This knowledge would both contribute to our understanding of child safety risks and facilitate developing solutions to address them.

7) Meta can’t be trusted to protect children and must be held accountable for its misconduct.

After the BEEF survey was completed, Meta reacted by ignoring and suppressing its findings. Béjar personally emailed Meta’s senior leadership, including Mark Zuckerberg, urging them to act on what he found. Zuckerberg’s response: silence. 

Béjar was also allegedly prevented from sharing the survey’s findings with other Meta employees. Instead, he was forced to write about the child safety risks identified by the BEEF survey as if they were “hypothetical”. This is all while Meta continued to publish reports using prevalence to argue that the spread of online harms was being addressed. 

Meta simply can’t be trusted. It took the concerted efforts of a brave whistleblower and 42 attorneys general to bring the BEEF survey to light. One has to go back thirty years to the prosecution of Big Tobacco to see corporate misconduct rivaling that of Meta.  

CCDH calls on lawmakers to pass legislation to ensure that Meta and other large platforms are held accountable for online harms.

CCDH’s STAR Framework (Safety by Design, Transparency, Accountability, and Responsibility) sets the global standard for what this social media regulation could look like. If regulations consistent with STAR were implemented, Meta and other large platforms would possess better incentives to share research with the public and design their products with children’s safety in mind from the start.  


Call on policymakers to protect kids online. Social media self-regulation has failed. It’s time to hold these companies accountable.