What is AI?

Posted on August 24, 2023 in Explainers.

Artificial Intelligence (AI) is changing the way we engage with a range of daily activities. It is behind chatbots, self-driving cars, digital assistants such as Siri and Alexa, your phone’s face recognition, and Netflix’s and Spotify’s recommendations. But for all its potential to revolutionize our lives, AI tools can also generate and amplify hate and misinformation that already exist online. In this explainer, we take a look at how AI machine learning models work and what dangers they pose to society if left unchecked.

What is AI?

Artificial Intelligence, or simply AI, refers to a range of technologies that simulate human intelligence to solve complex tasks. The term was coined by Stanford Professor John McCarthy in 1955 and was described by him as “the science and engineering of making intelligent machines.”

AI models have become increasingly complex in recent years, driven by technological development and the growing availability of data online.

Some AI systems are fully automated, while others depend on humans to tweak the algorithms and achieve better results. The latter approach is referred to as “human-in-the-loop.”

What is machine learning?

Machine learning is a subfield of artificial intelligence. Unlike traditional programming, which requires writing clear instructions for the computer to follow, machine learning systems are fed vast amounts of data and are expected to “learn” from it. 
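
To make that contrast concrete, here is a minimal sketch in Python. This is our own toy illustration, not code from any real system: the first function classifies messages with hand-written rules, while the second derives its word scores from a handful of labeled examples.

```python
from collections import Counter

# Traditional programming: the rules are written by hand.
def is_spam_rules(message: str) -> bool:
    banned = {"free", "winner", "prize"}
    return any(word in banned for word in message.lower().split())

# Machine learning (toy version): the "rules" are derived from labeled data.
def train(examples: list[tuple[str, bool]]) -> Counter:
    """Score each word by how often it appears in spam vs. non-spam."""
    scores: Counter = Counter()
    for text, is_spam in examples:
        for word in text.lower().split():
            scores[word] += 1 if is_spam else -1
    return scores

def is_spam_learned(message: str, scores: Counter) -> bool:
    # Classify by summing the learned per-word scores.
    return sum(scores[word] for word in message.lower().split()) > 0

training_data = [
    ("free prize inside", True),
    ("claim your free winner prize", True),
    ("lunch meeting at noon", False),
    ("project update attached", False),
]
model = train(training_data)
print(is_spam_learned("free prize for you", model))       # True
print(is_spam_learned("see you at the meeting", model))   # False
```

The “model” here is just a table of word counts, but the principle is the one real systems scale up: learning millions of statistical associations from data rather than following a hand-written rule list.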

From these pools of training data, machine learning programs find patterns that let them make recommendations, classifications, or predictions. Many of these systems are built as neural networks, webs of connections loosely modeled on the human brain. The prompts we enter into the program (input) pass through this network of connections so the machine can offer the best result (output).
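
As a rough illustration of that input-to-output flow, the toy sketch below passes an input through two layers of weighted connections. The weights are hand-picked for the example; in a real network, training would set them.

```python
import math

def forward(inputs, layers):
    """Pass an input through successive layers of weighted connections."""
    activations = inputs
    for weights, biases in layers:
        # Each neuron sums its weighted inputs, adds a bias, then applies
        # a non-linear "activation" function (tanh here).
        activations = [
            math.tanh(sum(w * a for w, a in zip(neuron_weights, activations)) + bias)
            for neuron_weights, bias in zip(weights, biases)
        ]
    return activations

# Two layers with hand-picked weights; training would normally set these.
layers = [
    ([[0.5, -0.2], [0.8, 0.1]], [0.0, 0.1]),  # hidden layer: 2 neurons
    ([[1.0, -1.0]], [0.0]),                   # output layer: 1 neuron
]
print(forward([1.0, 0.5], layers))  # input -> network -> output
```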

For example, chatbots such as OpenAI’s ChatGPT, Google Bard, and Snapchat’s My AI use deep-learning algorithms to analyze large amounts of text and make connections, finding patterns that let them generate the most human-like responses (output) to people’s questions (input).

“Deep” machine learning uses a neural network with three or more layers, which makes these systems more scalable and better able to work without human intervention.
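
In code terms, “depth” simply means more layers in the stack. A minimal, self-contained sketch (again our own illustration, with random weights standing in for trained ones):

```python
import math
import random

def forward(inputs, layers):
    """Same toy forward pass as above: each layer is (weights, biases)."""
    activations = inputs
    for weights, biases in layers:
        activations = [math.tanh(sum(w * a for w, a in zip(ws, activations)) + b)
                       for ws, b in zip(weights, biases)]
    return activations

def make_layer(n_inputs: int, n_neurons: int):
    # Random weights stand in for the values that training would learn.
    return ([[random.uniform(-1, 1) for _ in range(n_inputs)]
             for _ in range(n_neurons)],
            [0.0] * n_neurons)

random.seed(0)
deep_layers = [
    make_layer(4, 8),  # layer 1
    make_layer(8, 8),  # layer 2
    make_layer(8, 8),  # layer 3: three or more layers = "deep"
    make_layer(8, 1),  # output layer
]
print(forward([0.1, 0.9, -0.3, 0.4], deep_layers))
```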

What are the dangers of AI?

AI is a double-edged sword: it has the potential to transform the way we communicate, work, and learn, along with many other aspects of our lives. However, if left unchecked, AI can also reinforce and amplify the biases, stereotypes, hate, and misinformation that already exist online.

This happens because AI models are often trained using large amounts of data available online. Without proper human curation, the same biases, misleading narratives, and harmful content we find on the internet are replicated in the results offered by these tools. 

For example, CCDH found that Google’s chatbot Bard generated misinformation on 78 out of the 100 narratives tested by our researchers. These included antisemitic, misogynistic, and anti-LGBTQ+ hate and lies. In some cases, Bard even generated fake ‘evidence’ and examples to support false narratives, such as the conspiracy theory that the Holocaust didn’t happen. Bard was also capable of producing misinformation in the style of Facebook and Twitter posts, which could easily be used to manipulate conversations on social media.

CCDH’s researchers have also tested popular AI tools such as ChatGPT, Google Bard, and Snapchat’s My AI and found that they generated harmful eating disorder content in response to 41% of 180 total prompts. The study also showed that these AI tools are actively used by members of an eating disorder forum with over 500,000 users to produce low-calorie diet plans and images glorifying unrealistically skinny body standards.

The speed at which different AI models are being rolled out, combined with the lack of platform accountability, raises the concern that bad actors will make ever greater use of these tools to spread hate and misinformation on social media.

CCDH showed that the AI program Midjourney, launched in the summer of 2022, is already being used to generate racist and conspiratorial images. The tool creates photorealistic images based on text descriptions input by users. Our researchers identified 100 examples of hateful images, including racist caricatures and realistic images designed to support conspiracies. 

One of the images identified by CCDH on Midjourney shows Black Lives Matter protestors “looting and rioting.” The photo is fabricated and the event never happened, but such a realistic depiction can be weaponized to stir resentment towards the movement, with real-life consequences.

Image generated by Midjourney

Experts are particularly concerned about the use of AI-generated images to spread misinformation and mislead voters during elections.

How can we make AI safer?

AI technologies are developing quickly, and it is not easy to predict what they will be capable of in the future. But as with social media and search engine platforms, AI tools should be subject to the four principles established in CCDH’s STAR framework: Safety by Design, Transparency, Accountability, and Responsibility.

  1. AI companies and tools must prioritize safety from the outset, incorporating mechanisms to curate the training data and prevent the spread of harmful, misleading, or hateful content. 
  2. AI tools should be more transparent about the data used to train their systems, as well as about what corrective mechanisms are in place for when they generate hate and misinformation. 
  3. Accountability to democratic and independent bodies is key to ensuring AI companies won’t make decisions guided only by profit, but will take civil rights, civil liberties, and privacy into consideration.
  4. AI companies, as well as their senior executives, must be held responsible for the content their tools are generating. 

CCDH will monitor developments in AI technology and continue to conduct relevant research to expose platforms spreading hate and misinformation. For an in-depth analysis of how generative language models pose a threat to countering online hate and lies, visit our blog.

Want to stay up-to-date with our research about AI? Join our email community. 
