Generative AI produces content at a speed, quality, and volume we have not seen before, and it has become increasingly difficult to distinguish content created by humans from content produced by generative AI. We will encounter this challenge everywhere in society. It is therefore also necessary for young people to be trained to understand and be critical of the content they encounter in everyday life. The goal of this article is not to be exhaustive about artificial intelligence, misinformation, and disinformation; that is not possible while the technology is accelerating, so we will only scratch the surface.
Elections are coming up shortly in Great Britain and the United States. In fact, at least 64 democratic elections will be held around the world in 2024, and what does it do to our democracy that AI-generated texts and deepfakes can be used to manipulate them? It is hard to imagine that AI will not influence these elections in some way, so should we not be more vigilant to prevent it? We do not offer a clear answer, but we do propose that generative AI be included in teaching so that students are educated and informed about how misinformation and disinformation play a more significant role than before.
The biggest challenge concerning the upcoming elections is not only that individual voters risk being influenced. AI-generated content also lays the groundwork for even greater polarization in society, because AI can be used to confirm recipients' existing beliefs inside the echo chambers they already inhabit. In a highly polarized society, democratic processes are severely challenged.
There is also a risk of undermining trust in the news sources we traditionally perceive as trustworthy, even when they do not use manipulated content. In the worst case, the democratic election process itself is endangered: there will always be a lingering doubt about whether a government was elected on the basis of fake news. Ultimately, this can lead to censorship of the media, and then freedom of expression and democracy itself are at risk.
The subject suits digital technology understanding (primary school), Danish, English, social studies, communication and IT, or informatics. At the end of the article, there are several sources where you can read more about the subject; these will also be useful when students research the topic themselves.
Misinformation and disinformation are used as standard terms throughout the article, but they have quite different meanings, so we begin by briefly explaining them.
Misinformation and disinformation
Misinformation is unintentionally (unconsciously) misleading or false information passed from a sender to one or more recipients (Søe, 2014). The sender is not necessarily aware that the content they are sharing is false and has no intention of misleading the recipient. Disinformation, in contrast, is an intentional (conscious) act of misleading by sharing false information. It is important to understand the difference between the two terms; "fake news" is sometimes used as well, although it is a little more informal.
When disinformation is spread, it can distort the overall media picture. In particular, if individual users pass the information on without knowing it is false, it ends up as misinformation.
In the past, conspiracy theorists, for example, had to spend time and effort writing disinformation themselves, which limited their output (and reach). Their voices therefore did not fill much of the public debate, and their influence was limited. Artificial intelligence has made it significantly cheaper and easier to produce large amounts of highly persuasive disinformation and post it online.
The World Economic Forum (WEF) predicts in its Global Risks Report that misinformation and disinformation are among the biggest short-term challenges for the global economy. The challenge is, and will remain, that it becomes harder and harder for us to distinguish manipulated and artificially created content from reality, especially when advanced language models such as ChatGPT and Bard make it easy to auto-generate content. At Viden.AI, we have previously looked at AutoGPT, where we set a goal and let the language model devise strategies to reach it, continually developing new strategies along the way. This happens autonomously, without the user having to provide input or make new choices.
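To make the principle concrete, here is a minimal sketch of such an autonomous loop. It is our own illustration, not AutoGPT's actual code: we assume the OpenAI Python SDK, a hypothetical goal string, and a hard cap of five rounds. Real AutoGPT adds memory, tools, and web access on top of this plan-act-revise cycle.

```python
# Minimal sketch of an AutoGPT-style loop (illustrative only).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
goal = "Plan a newsletter about renewable energy"  # hypothetical goal

messages = [{
    "role": "system",
    "content": f"You pursue this goal step by step: {goal}. "
               "Reply with exactly one next action, or the word DONE.",
}]

for step in range(5):  # capped so the sketch always terminates
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model will do
        messages=messages,
    ).choices[0].message.content
    print(f"Step {step + 1}: {reply}")
    if "DONE" in reply:
        break
    # A real agent would execute the action (search, write files, ...)
    # and feed the actual result back; here we simply acknowledge it.
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "Done. What is the next action?"})
```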
We mention AutoGPT here because, around the time we wrote that article (and were somewhat unsettled by the technology), a disinformation website called CounterCloud appeared.
CounterCloud - auto-generation of disinformation
Below is an introductory video about CounterCloud. We recommend watching it before reading further in the article, as it gives a good impression of the whole project.
The goal of CounterCloud was to have artificial intelligence write counter-articles: the system generated fake stories and fabricated historical events to cast doubt on the accuracy of an original article. Each article was written in a different style and from a different point of view, and the system also produced fake journalist personas as bylines, matching images, and fake comments.
However, it must be said that it is unknown who is behind the website, as communication takes place via a Proton email address. The person on the other end signs with the name Nea Paw and claims to be a cybersecurity professional. We have not been able to find out more about the person, other than that they are not from China, Russia, or Iran...
In the project, the creators built an automatic disinformation system that used artificial intelligence to generate new content for the web. Running it costs only $400 a month and yields 20 articles and 50 tweets a day; for $4,000 a month, the creators estimate that the system could generate 200 articles per day.
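Those figures imply a strikingly low unit cost. A quick back-of-the-envelope calculation (ours, based only on the numbers quoted above) illustrates the point:

```python
# Back-of-the-envelope unit cost, using only the figures quoted above:
# 20 articles + 50 tweets per day for $400 per month.
monthly_cost = 400           # USD
items_per_day = 20 + 50      # articles + tweets
items_per_month = items_per_day * 30

print(f"{items_per_month} items per month")                # 2100
print(f"${monthly_cost / items_per_month:.2f} per item")   # ~$0.19
```

Even at the scaled-up price ($4,000 for 200 articles a day, roughly $0.67 per article), the cost stays far below what human-written propaganda would require.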
In two months, Nea and a colleague created a fully autonomous AI-powered system that generated compelling content 90% of the time, 24 hours a day, seven days a week.
CounterCloud was created solely to show how easy it is to create disinformation and post it on the web. The website is behind a login, and no articles or tweets have ever been published. Below you can see two screenshots from the site.
The following is an excerpt from an interview with Nea Paw conducted by Russian_OSINT. The entire interview can be read here (in Russian).
The translation is our own.
Russian OSINT: What steps should organizations and governments take to combat the spread of AI disinformation?
Nea Paw: It's hard to say. Awareness is what is needed. As I said earlier, we can go back to trusting real people who have a reputation as reliable sources of information.
To some extent, artificial intelligence changes the situation dramatically. With the help of AI, anyone can create a video of riots and report that they happened. They can even upload four videos of the same event from different angles, for example, allegedly posted by four students under different Twitter accounts.
The big test will be the American presidential election in 2024, because language models make it possible to write texts that influence voters to a far greater extent. The content will come from both sides of the political spectrum, will be distributed through social media, speeches, debate posts, local websites, and so on, and will be targeted at individual voters. We predict that the candidate best at harnessing artificial intelligence will also become President of the United States.
Below is a slide from a presentation about CounterCloud held in April 2023; it gives a good insight into how the content evolved.
How easy is it to create disinformation yourself?
Creating disinformation yourself requires very few resources and is quite easy. Below, we explore the possibilities. Be aware, however, that it is not legal to clone other people's voices or to make deepfake videos or images. The content below is made solely to illustrate the possibilities, and we have deliberately not made it as convincing as we could.
Anyone with access to a language model can generate texts that can be used to manipulate the news picture, but over time the providers have added a series of filters that mean you are met with warnings, and there are limits to what you can get the models to write (see the filtering sketch after this paragraph). Below is an example where we asked a model to write a text in which the Pope supports Donald Trump's campaign to become president in 2024. We deliberately chose a very unrealistic example so that it cannot be misunderstood.
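Such filters sit both inside the chat models themselves, which refuse or warn, and in separate moderation services that screen text before or after generation. As a sketch of the latter, here is how a text can be screened with OpenAI's moderation endpoint; this is our own illustration of the filtering principle, not a description of any particular provider's internal setup.

```python
# Sketch: screening a text with OpenAI's moderation endpoint.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
result = client.moderations.create(
    model="omni-moderation-latest",
    input="Example text to screen before publication.",
).results[0]

print("Flagged:", result.flagged)
# Show only the categories (hate, violence, ...) that were triggered:
print({name: hit for name, hit in result.categories.model_dump().items() if hit})
```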
We used MidJourney to generate images to make the story more believable. Since the Pope and Donald Trump appear many times in the training data, it was relatively easy to create convincing images.
Donald Trump (Version 2.0)
To complete the project, we also cloned Donald Trump's voice; below, you can read the text and listen to the speech. He speaks a little too fast, but with an hour's more work the speech would be considerably better.
Ethical considerations in teaching
When generative AI produces misinformation and disinformation, several ethical issues arise that we can work with in education. It is largely a matter of digital literacy and of developing students' critical thinking, so that they are well equipped to assess and react critically to online content. DR has carried out a study in which it is striking how rarely young people consider that photos and videos may be manipulated.
Below are several ethical questions that can be brought up in class and used to discuss the subject.
- Responsible use of generative AI: Who bears responsibility when generative AI produces texts containing harmful information? This discussion must include an understanding of where responsibility lies: with the AI's creators, with its users, or with regulation at a higher level? How do we regulate so that information quality and credibility are increased and privacy is ensured, without compromising freedom of expression and technological development?
- Challenges for democracy: Investigate and discuss how AI-generated texts and deepfakes can affect democracy. How can such technologies manipulate public opinion and disrupt democratic processes?
- Ethical guidelines: Discuss the need for ethical guidelines in developing and using generative AI. Students must consider how existing ethical frameworks can be improved or developed to accommodate the challenges posed by generative AI, including issues of responsibility, transparency, and societal impact.
- Balancing freedom and security: Consider the dilemma between open access to generative AI and the risk of abuse. This part should also include debates about privacy, surveillance, censorship, and freedom of expression.
- Societal implications: Discuss how we can prepare lessons so students can navigate a society where AI plays an increasingly important role. How do we ensure that our entire society is equipped to navigate a digitalized reality full of misinformation and disinformation, and not just the young people currently in education?
Sources
If you want to know more about CounterCloud, we have collected some sources below that describe the project.
More sources
Søe, S. O. (2014). Information, misinformation og disinformation: En sprogfilosofisk analyse. Tidsskrift for Informationsvidenskab og Kulturformidling, 3(1), 21-30. https://doi.org/10.7146/ntik.v3i1.25959