Data Access for Researchers: The Key to Online Safety Regulation

Posted on April 17, 2023 in News.


Data access has become an increasingly complex and challenging issue for public interest researchers, especially on social media. Twitter is implementing stricter data access policies, and new research from CCDH shows that TikTok’s ‘transparency’ data is heavily redacted.

Efforts by social media companies to obfuscate the truth and frustrate research expose the urgent need for mandatory data access for researchers. A new amendment to the Online Safety Bill would create a mandatory data access process and should be adopted to ensure platform transparency.

Amending the Online Safety Bill to include data access for researchers

A new amendment to the UK’s Online Safety Bill would give the regulator the power to appoint independent researchers with access to platforms’ data. The amendment, from Lord Bethell and Lord Clement-Jones, details a process for Ofcom to issue a code of practice for researchers, enabling data access while ensuring user privacy and protecting trade secrets.

This amendment is vital to the success of the UK’s online safety regulation. Researcher access provides an important regulatory backstop for Ofcom, and by increasing oversight, makes it easier to horizon-scan and regulate platforms effectively. International comparators, like the European Union’s Digital Services Act, already include mandatory data access for researchers. 

Barriers to data access for researchers

The status quo leaves transparency in the hands of Big Tech companies with a vested interest in opacity. Even where they have voluntarily offered research insights, companies cut off access or provide incomplete data that obscures online harms on their platforms.

Twitter recently announced that the platform will no longer allow free research access to its Application Programming Interface (API). Access to the platform’s API enabled academics, researchers and public-interest organizations to study some of the most important issues impacting society. Researchers sounded the alarm about the imposition of fees starting at $42,000 a month for API access, as these fees will make almost all research projects unviable.
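To illustrate what is being priced out, below is a minimal sketch of the kind of research query that free API access made possible, using the open-source tweepy library; the bearer token and search terms are illustrative placeholders, not any specific project’s methodology.

```python
# Minimal sketch of a public-interest research query of the kind free
# API access enabled. Uses the open-source tweepy library; the bearer
# token and search query are illustrative placeholders.
import tweepy

client = tweepy.Client(bearer_token="RESEARCHER_BEARER_TOKEN")

# Pull recent English-language posts on a topic of study, excluding
# retweets, with engagement metrics attached to each result.
response = client.search_recent_tweets(
    query="eating disorder recovery -is:retweet lang:en",
    tweet_fields=["created_at", "public_metrics"],
    max_results=100,
)

for tweet in response.data or []:
    metrics = tweet.public_metrics
    print(tweet.created_at, metrics["retweet_count"], metrics["like_count"])
```

Under the new pricing, keeping even a modest pipeline of queries like this running would cost tens of thousands of dollars a month.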

Compare this to the situation at TikTok. TikTok offers research insights in its Creative Center, now one of the few tools researchers have to examine online harms. But new research from CCDH suggests this transparency is illusory, with TikTok suppressing information about dangerous content that exists on the platform.

TikTok obscures transparency data on eating disorder and self-harm videos with billions of views 

TikTok offers data on the popularity of hashtags among users in different age groups and locations. Researchers hoped that the tool would play an important role in identifying potential harms taking place on the platform, particularly those affecting children.

However, CCDH found TikTok withholding data about hashtags relating to important online harms, including 50 hashtags about suicide, self-harm and eating disorders.

Researchers seeking transparency data about these hashtags were instead directed back to the tool’s home page. Comparison with other data shows that potentially harmful videos posted to these missing hashtags had been viewed nearly 58 billion times.
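For readers curious how such behavior can be detected at scale, here is a hypothetical sketch that flags a hashtag as suppressed when a request for its transparency page is bounced rather than answered; the Creative Center URL pattern is an assumption for illustration, and the live tool may perform the redirect client-side instead.

```python
# Hypothetical sketch: flag a hashtag as suppressed when requesting its
# transparency page yields a redirect instead of data. The URL pattern
# below is an assumption for illustration, not a documented endpoint.
import requests

BASE = "https://ads.tiktok.com/business/creativecenter"  # assumed base URL

def hashtag_is_suppressed(tag: str) -> bool:
    resp = requests.get(
        f"{BASE}/inspiration/popular/hashtag/{tag}",  # assumed path
        allow_redirects=False,
        timeout=10,
    )
    # A 3xx response suggests the tool is bouncing the visitor back to
    # its home page rather than serving the hashtag's data.
    return resp.is_redirect

print(hashtag_is_suppressed("exampletag"))
```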

This new analysis demonstrates the inherent limitations of voluntary transparency. TikTok offers insights that are patchy and incomplete, with major gaps when it comes to priority online harms like suicide, self-harm and eating disorders. The result is that researchers are unable to analyze who is being exposed to certain types of potentially harmful content.

TikTok’s data insights platform failed for 50 hashtags even though they are widely used on the platform. Analysis of total views for each hashtag, visible via the TikTok app, shows that videos containing these 50 missing hashtags have a total of 57.9 billion views globally (a sketch of this tally follows the list below). Missing hashtags include:

  • A hashtag about self-harm which has amassed a total of 6.7 billion views globally
  • A hashtag used by eating disorder communities which has 2.8 billion views globally
  • A hashtag about suicide, which has amassed a total of 650 million views globally
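
The headline figure above is a simple aggregation. Here is a minimal sketch of how such a tally could be reproduced, assuming a hand-collected CSV of per-hashtag global view counts; the file name and columns are hypothetical.

```python
# Minimal sketch: sum the global view counts recorded for each missing
# hashtag. Assumes a hand-collected CSV with hypothetical columns
# "hashtag" and "global_views".
import csv

total_views = 0
with open("missing_hashtags.csv", newline="") as f:
    for row in csv.DictReader(f):
        total_views += int(row["global_views"])

print(f"Total views across missing hashtags: {total_views / 1e9:.1f} billion")
```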

While data for some hashtags related to these topics was available, others were missing from the tool, making it impossible to get a complete picture of harms on the platform. 

Other hashtags missing from TikTok’s transparency tools included ones about misogynist influencer Andrew Tate, Holocaust conspiracies and the Uyghur ethnic minority oppressed by the Chinese government.

In another omission, TikTok’s transparency tool notes that it excludes all data about the number of under-18s viewing content on the platform, making it impossible for researchers to know how much potentially harmful content is consumed by TikTok’s youngest users.

Mandatory data access for researchers prevents online harms

Whether it’s Twitter making access to its API effectively unaffordable, or TikTok obscuring online harms from researcher scrutiny, the barriers to researchers’ data access are rising everywhere.

This is why mandatory access for academics and public interest organizations is vital. Transparency should not be left to the whims of billionaire tech bosses, or redacted when it might risk company profits. 

The EU’s Digital Services Act recognized the importance of researchers’ data access. The UK’s Online Safety Bill must do the same by adopting the researcher access amendment.

CCDH will continue to hold social media companies to account for the consequences their technology and business choices have for individuals and society. We hope that peers adopt this amendment to help us do just that.