Bill C-292

Algorithmic Transparency and Non-Discrimination

What is Bill C-292, and why is it an important step towards algorithmic transparency?

What does the Bill do?

Canadian federal Bill C-292, introduced by MP Peter Julian, would give individuals more information about how online communication services use their personal information in the algorithms that determine the content they see or are prevented from seeing.  This includes how an algorithm prioritizes, assigns weight to or ranks different categories of personal information to:

  • make predictions, recommendations or decisions about a user; and 
  • withhold, amplify or promote content to that user. 

Bill C-292 also requires transparency for other purposes, such as the use of algorithmic processes and content moderation. 

The transparency requirements sit alongside non-discrimination provisions on protected grounds, such as race, marital status, disability and gender identity.  Under Bill C-292, an online communication service could not, for example, use an individual's personal information in a discriminatory way by making goods or services unavailable to them when they would normally be able to access them as a member of the public, or by targeting advertising (e.g. for credit or healthcare) in a way that discriminates against that person or removes that opportunity from them.

Why is this important?

Most people are not aware that they are being influenced by algorithms, let alone how these algorithms work or the impact that they have.  Public transparency is needed.  This Bill will help to achieve this goal and ensure more accountability from the online communication services that have daily access to (and are profiting from) our personal information and are controlling what we see.  This has a direct impact on what we think and how we act.  The practice of taking personal information to point people towards certain content or block other content poses a threat to personal freedom and safety, and has a direct impact on our cognitive functioning, social cohesion and understanding of the world.   For example, throughout the pandemic, this has led to many people being directed towards COVID-19 misinformation (such as anti-vax content) and extremist content.

Online services are a fundamental part of our infrastructure, and it is important that the freedoms and protections that we enjoy, including non-discrimination, are reflected and realized in this environment.  The routine use of online services means that these companies have an unparalleled level of control over the content we see and the information we receive. Rather than giving people the freedom to choose their own content, algorithms apply methods such as predictive analysis, machine learning techniques, deep learning, neural networks, regression analysis, and rule-based systems to promote whatever content will garner the greatest reaction and engagement, even if it is hateful or false. These highly personal, highly invasive systems have contributed to the polarization of economic, democratic, and social thought and correlate directly with a rise in the formation of radical hate groups, networks and extremism. 

Since the COVID-19 pandemic, online hate has reached an unprecedented high. As a society we are becoming increasingly polarized, and there has been an alarming increase in hate speech, disinformation, and extremism found on online platforms and via search engines. This harmful content, including conspiracy theories, disproportionately affects women and marginalized communities and perpetuates a history of discrimination. 

It is clear that this unregulated environment, in which content is promoted to drive engagement and continual use of the service, benefits the privileged executives of companies like Facebook and YouTube. The increase in profit provides an incentive to neglect the safety and security of people, particularly minority groups. Not only do these algorithms harm those targeted by increased hate, they also erode our freedom over our own decision-making processes and push us towards certain ways of thinking. 

The utility of online platforms and communication services is undeniable. However, we must also realize the great power they have in shaping our individual lives and society as a whole.  This bill seeks to change that operating environment.

CCDH support for Bill C-292

“If we are going to address online harm, we need to understand how the algorithms that order content work, and that is why this is a core component of our best-practice STAR framework for legislative efforts globally,” said Imran Ahmed, Chief Executive of the Center for Countering Digital Hate (CCDH).  “We support legislation that aims to enhance the safety, transparency and accountability of social media platforms and search engines.  If we are to have an informed debate over the responsibilities of platforms, we need transparency of algorithms, of the enforcement of community standards, and of the economics of those platforms, in particular advertising.  Bill C-292, tabled by Canadian Member of Parliament Peter Julian, takes an important step towards providing precisely that transparency.  If passed, it will give users more information about how online services operate, and help to embed human rights and democratic values through its non-discrimination obligations.”

More Information
Visit MP Julian’s website for more information about Bill C-292: https://www.peterjulian.ca/