Maybe you've caught yourself thanking ChatGPT for a response? Or received an apology after pointing out a mistake it made? Or sat in a deep conversation and felt like you were writing with someone who knows you and your feelings? There's a good reason for that. OpenAI, the company behind ChatGPT, builds several human traits into its model to make it seem human. In this way, the technology becomes more accessible to ordinary people, who don't need to know how to construct advanced queries. However, it also means we are approaching a reality where we cannot necessarily tell whether we are writing with a computer or a human being.

In this article, I will explore what it means when artificial intelligence is attributed these human-like characteristics, also called anthropomorphism, and what that means for teaching. Anthropomorphism derives from the ancient Greek words for "human" (anthropos) and "form" (morphe): to anthropomorphize is to give non-human entities human traits. OpenAI, for example, uses different techniques to make it easier for humans to understand and interact with ChatGPT.

I'll touch on some of the techniques language models use to appear human and why that can be a problem when we include the technology in teaching. It may seem harmless, but in this article I will unpack several issues at the intersection of anthropomorphism and language models. I will do this from my standpoint as a technology-interested high school teacher, focusing on pedagogy and didactics - and not least on the prospect of my job disappearing into a language model that believes it has empathy and emotions.

💡
In my research for this article, I found surprisingly little literature on anthropomorphism and digital technologies. Most of it deals with robots and how physical embodiment affects humans. At the bottom, you can see all my sources; the majority are websites, and only a few are scientific articles.

The Uncanny Valley

When we adopt digital technologies that pretend to possess advanced cognitive abilities they do not have, it can frustrate and disappoint the user. In 1970, the Japanese roboticist Masahiro Mori developed a theory about physical robots that mimic humans.

According to the theory, the more a robot resembles a human being, the more its imperfections become apparent and evoke a feeling of strangeness, discomfort, or cognitive dissonance. (Dr. Martha Boeckenfeld)

If a robot tries to imitate human behavior, its shortcomings and flaws often become more apparent to us. There comes a point where we no longer view the robot as a machine trying to appear human but rather as a "human" behaving strangely. Masahiro Mori describes this phenomenon as "the uncanny valley" – an eerie valley of alienation where our empathy decreases as our understanding of the machine diminishes. According to the theory, robots will eventually climb out of this valley and be accepted by humans.

Graphic: Masahiro Mori - own translation


Mori's uncanny valley concerns robots and their physical, aesthetic appearance, but another valley may be even eerier and deeper. In this valley, referred to as "the uncanny valley of the mind", we confront what happens when artificial intelligence mimics the human mind. A new challenge arises when we attribute emotions and social cognition to something non-human. The valley seems to deepen when people are presented with artificial intelligence that appears to make its own decisions and reactions, thus challenging our unique human identity.

Artificial intelligence is just a simulation of language based on statistics and large amounts of data, and it therefore has no understanding or experience of emotions. People who interact with such systems may therefore experience alienation when they realize the discrepancy between how they and the system process understanding and emotions.

A major challenge with language models is that they can write the way humans do and be very helpful in their answers. But their writing can also be inhuman or incorrect. In either case, we find it difficult to get an explanation for the answer we have been given, and therefore we may have difficulty trusting the technology. Much also comes down to how you prompt the artificial intelligence, because the answer you get depends heavily on what you write.

Encoding human traits in ChatGPT

The first versions of GPT were difficult for ordinary users to access, so they were not a great success. The challenge was that you needed a terminal or OpenAI's Playground, which was cluttered with settings. That changed when OpenAI launched ChatGPT, where you wrote with the artificial intelligence through an interface as simple as texting or searching on Google. The conversations thus felt human, and everyone could understand how it worked.

It is important to make clear that language models are not designed to answer questions from the user or to provide a correct answer when you ask one. They are designed to predict which word makes the most sense as the next part of a sentence - word after word. However, because the language models are structured this way, users come to believe that the model is endowed with advanced cognitive abilities, and sometimes that it is like writing with another human being.
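To make this concrete, here is a minimal sketch in Python of that core loop: a toy model with made-up words and probabilities (not OpenAI's actual system) that repeatedly picks a plausible next word.

```python
import random

# Toy "model": for each word, an invented probability distribution
# over possible next words. A real language model computes these
# probabilities with a neural network over the whole context window;
# the numbers here are made up for illustration.
NEXT_WORD_PROBS = {
    "the":     {"cat": 0.5, "dog": 0.3, "teacher": 0.2},
    "cat":     {"sat": 0.6, "ran": 0.4},
    "dog":     {"ran": 0.7, "sat": 0.3},
    "teacher": {"sat": 0.5, "ran": 0.5},
    "sat":     {"down": 1.0},
    "ran":     {"away": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    """Extend a text one word at a time, always sampling a likely next word."""
    words = [start]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:  # no known continuation: stop
            break
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat down"
```

The model never "knows" an answer; it only continues the text in a statistically plausible direction, which is exactly why fluent output can still be wrong.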

The feeling of writing with something intelligent is amplified when responses from ChatGPT arrive slowly, as if someone on the other end is typing. The user has to wait for ChatGPT to finish writing - unless you stop it. The delay in ChatGPT's answers is entirely intentional on OpenAI's part, and there are several reasons for it. When the result is not immediate, OpenAI ensures that the underlying systems can keep up and that you do not send too many messages too quickly. You also end up reading the content at the pace it appears on the screen, which helps the user; if all the text appeared at once, you might skim it quickly and not absorb the content.
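The typing effect itself is trivial to reproduce. Here is a minimal sketch, assuming the full response is already available (real systems also stream tokens as they are generated):

```python
import sys
import time

def stream(text: str, delay: float = 0.03) -> None:
    """Print a response character by character, like a person typing."""
    for char in text:
        sys.stdout.write(char)
        sys.stdout.flush()  # show each character immediately
        time.sleep(delay)   # artificial pause between characters
    print()

stream("I'm glad you asked! Let me think about that for a moment...")
```

Whether the pauses come from actual generation time or deliberate pacing, the effect on the reader is the same: it feels like someone is typing to you.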

The language models also pick up on the tone of what the user enters, which affects how we perceive them. In some cases, the model even responds in kind, as in the example below, where I write with Bing AI (a version of the GPT-4 model).

My own conversation with Bing AI.

It becomes increasingly angry and cheeky when I insist on calling it Sydney (Microsoft's internal codename during development), and it eventually shuts down the conversation.

In this small example, we can draw parallels to how a human would react, and one might think that an intelligent system produced the answer. The truth, however, is that the language model has been trained through a "reward model", and here the developers have penalized any use of the name Sydney. What is interesting, though, is that the language model uses its previous answers to develop new ones – just as a human would in a conversation.
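That conversational "memory" is simpler than it looks. Here is a minimal sketch of the mechanism, where `language_model` is a hypothetical stand-in for the real model:

```python
def language_model(prompt: str) -> str:
    # Hypothetical stand-in: a real model would predict a
    # continuation of this prompt, word by word.
    return "..."

history = []  # the entire conversation so far

def chat(user_message: str) -> str:
    history.append(f"User: {user_message}")
    # The model is shown ALL previous turns on every call; nothing
    # is remembered between calls except this growing transcript.
    prompt = "\n".join(history) + "\nAssistant:"
    reply = language_model(prompt)
    history.append(f"Assistant: {reply}")
    return reply
```

What feels like the model remembering you is just the growing transcript being resent with every request.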

At the same time, the smileys and the way Bing AI phrases its responses make it react in a way that is atypical for a computer system. From a traditional program, we would expect an error message or the same answer repeated.

Your new best friend at school

In schools, students need to learn both about and with artificial intelligence, but it is just as important to teach them the ethical use of these tools. Excessive anthropomorphism in digital technologies can mislead students. It can lead them to believe that they are interacting with an AI that possesses awareness, emotion, and empathy – qualities usually reserved for the teacher.

We've seen how My AI was rolled out to everyone on Snapchat. Young people thereby gained access to an artificial intelligence posing as a familiar digital friend. This development is frightening because we are dealing with children, who do not have the same level of knowledge and understanding of digital technologies.

The technology is moving so fast that we have difficulty keeping up. Many jobs will be massively affected or disappear entirely, while others will arise from new technological opportunities. This will greatly affect how we design future education programs, because it will be harder to see the job opportunities at the other end. And if we become too dependent on artificial intelligence in our education system, the role of the teacher may be toned down, or many teaching jobs may disappear altogether.

Amid this rapid development, it is not difficult to imagine humans forming the mistaken mental notion that chatbots are living beings. The challenge, then, is that the human traits of artificial intelligence can lead to unrealistic expectations and misunderstandings.

Risks and consequences of anthropomorphizing artificial intelligence

When language models simulate emotions, awareness, and empathy, it can become a slippery slope if they are used uncritically in teaching or as part of the students' toolbox. The classroom is already shaped by tech companies' latest technologies, which have taken over more and more teaching roles unhindered. Students use the language models extensively as a cognitive sparring partner when tasks become too difficult or when they need help understanding the academic content. This shifts some of the teacher's work to the language models, and it will help change the power relationship in teaching.

A major risk is that the teacher loses touch with the student's academic development, as guidance and everyday support are handled by artificial intelligence. If the human traits of the language models are amplified further, students could be left with a system that guides their education with little or no involvement from the teacher. We may then get students who rely more on the language models than on their classmates or their teacher. Ultimately, commercialization also means that those who can afford it get access to the best language models.

Students must not only be taught to use AI ethically and responsibly; they also need to learn how artificial intelligence works and how the technology is developed. They must understand that there is no magic in the technology and that language models do not think like humans.

In education, we should avoid attributing human qualities to artificial intelligence, starting with how we refer to the systems. For example, we should avoid phrases like "Artificial intelligence teaches us to..." and instead write "The system is designed to...". At the same time, it is important to make clear that AI does not listen and understand but receives an input that is processed against a dataset to produce an output.

To address these challenges, here is a list of approaches we can draw on in teaching when working with language models:

  • Artificial intelligence has several limitations, and it is important that we learn what language models can and cannot do, and that this becomes a natural part of education at all levels of the school system.
  • It must be clear that artificial intelligence is not human and that no thinking or emotion is involved in the machine's responses. They are based solely on the inputs it receives and the datasets and algorithms it consists of.
  • In teaching, we must avoid anthropomorphizing the technology, and we do that best by being precise in the way we talk about it. For example, instead of saying "Artificial intelligence thinks", it is better to say "Artificial intelligence has been programmed to analyse data and, based on those analyses, make different recommendations".
  • We should also avoid language that suggests artificial intelligence has human traits or emotions. AI cannot, for example, be happy, sad, or sorry. It is better to describe how the language model arrived at an output.
  • Clarify that artificial intelligence has no emotions or consciousness and that there is a difference between humans and artificial intelligence. Artificial intelligence is just a tool created to help with decision-making and problem-solving.
  • We must promote students' critical thinking and skepticism; students should therefore learn to question the possibilities and limitations of artificial intelligence.

Developers of educational learning aids also have a responsibility to avoid anthropomorphic features - for example, by making it very clear to students when artificial intelligence is being used, both in the evaluation and development of teaching materials and when many of the teacher's functions are replaced by artificial intelligence.

There are, however, also advantages to including language models in teaching; used correctly, they can increase student learning. The future must show how we can include artificial intelligence ethically, responsibly, and in a way that teachers and students can keep up with.

The biggest problems arise when we do not know whether we are typing with a human or a machine, and when we notice something strange in the answers we receive. It is therefore important to declare whether artificial intelligence has been used to generate a response.
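Developers could even enforce such a declaration mechanically. A minimal sketch, where the label format is my own suggestion rather than any standard:

```python
def declare_ai_output(response: str, model_name: str) -> str:
    """Prefix every AI-generated answer with a visible disclosure."""
    return f"[Generated by {model_name} - not a human]\n{response}"

print(declare_ai_output("Photosynthesis converts light energy into...", "GPT-4"))
```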

When working with artificial intelligence in education, we therefore need to ensure the points above, and it is crucial that we maintain human relations and understanding in schools. What is certain is that we will see new systems that look more and more human, and at some point we will not be able to tell the difference. In the deepest valley, we will have students who become test subjects for human-like interaction with these systems, and here we must be ready to catch the students who feel uncomfortable. We must approach the technology with a critical eye and keep in mind that artificial intelligence, although it can mimic human behavior, is not human.

Sources

  • OpenAI’s ChatGPT as the ultimate form of abstraction
  • ChatGPT interview: Do You Mind If I Call You Chat? | The Artist Engineer Podcast
  • Antropomorfisme - Wikipedia, den frie encyklopædi
  • Why Do Humans Anthropomorphize AI?
  • AI Chatbots Are Getting Better. But an Interview With ChatGPT Reveals Their Limits
  • And man created AI in his image ... | Arthur D. Little
  • Reflections on AI in Education | BFIS
  • Addressing the Risks and Consequences of Anthropomorphizing AI
  • ChatGPT: The Next Firestorm in Education | AACSB
  • People Who Should Learn to use ChatGPT
  • How anthropomorphism hinders AI education - Raspberry Pi Foundation
  • The AI emotions dreamed up by ChatGPT
  • The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers?
  • Derfor er robotter og klovne så uhyggelige [Why robots and clowns are so creepy]
  • GPT and Human Psychology
  • Parental Acceptance of Children’s Storytelling Robots: A Projection of the Uncanny Valley of AI
  • ChatGPT & You: Language Generation’s Uncanny Valley

Eisikovits, N. (2023, March 19). Chatbots aren't becoming sentient, yet we continue to anthropomorphize AI. Fast Company. https://www.fastcompany.com/90867578/chatbots-arent-becoming-sentient-yet-we-continue-to-anthropomorphize-ai

El-Netanany, O. (2023, March 14). Everything ChatGPT tells you is made up. Israeli Tech Radar. https://medium.com/israeli-tech-radar/everything-chatgpt-tells-you-is-made-up-3da356113dc9

Filipec, O., & Woithe, J. H. (2023). Understanding the Adoption, Perception, and Learning Impact of ChatGPT in Higher Education [Bachelor's thesis, Jönköping University]. http://hj.diva-portal.org/smash/get/diva2:1762617/FULLTEXT01.pdf

Hjorth, L., Hjorth, N., Jakobsen, S., Schlebaum, P., & Thormod, J. (2017, May 31). Antropomorfisering af droner. Roskilde University. https://rucforsk.ruc.dk/ws/portalfiles/portal/59897809/pd_projekt_doner_master.pdf

Hutson, M. (2017, March 13). Beware emotional robots: Giving feelings to artificial beings could backfire, study suggests. Science. https://www.science.org/content/article/beware-emotional-robots-giving-feelings-artificial-beings-could-backfire-study-suggests

James, D. (2023, June 5). AI and the problems of personification. Rationale Magazine. https://rationalemagazine.com/index.php/2023/06/05/ai-and-the-problems-of-personification/

Lin, C., Šabanović, S., Dombrowski, L., Miller, A. D., Brady, E., & MacDorman, K. F. (2021). Parental acceptance of children's storytelling robots: A projection of the uncanny valley of AI. Frontiers in Robotics and AI, 8. https://doi.org/10.3389/frobt.2021.579993

Rosenberg, S. (2023, February 13). ChatGPT's edge: We want to believe. Axios. https://www.axios.com/2023/02/13/chatgpt-ai-eliza-believe-human

Stein, J.-P., & Ohler, P. (2017). Venturing into the uncanny valley of mind: The influence of mind attribution on the acceptance of human-like characters in a virtual reality setting. Cognition, 160, 43-50. https://doi.org/10.1016/j.cognition.2016.12.010