Professional Grief: AI and Feelings of Powerlessness in Teaching

This is the third article in a series of seven on professional grief. In the previous article, I explored how AI can challenge the teacher's professional identity. If you haven't read the first two articles, I recommend starting here:

Professional Grief: When Artificial Intelligence Transforms the Teacher’s Reality

Professional Grief: When Artificial Intelligence Challenges the Teacher’s Professional Identity

The encounter with generative AI in teaching can evoke a deep sense of powerlessness in many teachers because, suddenly, AI systems can perform tasks that previously required human expertise. This can shake our experience of control and autonomy in the classroom. In this article, I draw on concrete examples from my own everyday practice and from conversations with other teachers to illuminate these new challenges.

Katrine Friisberg points out that using AI involves relinquishing power because "algorithms that we don't know choose the information we get access to" (Friisberg, 2023). When teachers and students increasingly rely on the recommendations of AI systems, they can feel that the technology controls them rather than the other way around.

This feeling can be amplified when students use AI to "short-circuit" teaching, for example by submitting ChatGPT-generated assignments. Suddenly, the teacher can feel reduced to a passive spectator in their own teaching. The constant flow of new AI tools adds to the feeling of being unable to keep up. According to Rafferty (2023), teachers are often expected to become "AI experts almost overnight" - an impossible task that can easily lead to feelings of inadequacy and powerlessness in the face of technology. Even when we are curious about the technology, we can experience a form of powerlessness when we realize what tools students suddenly have at their disposal without them grasping what the technology means for their learning.

Lacking a Language for Dialogue with Students About AI

Let me start with an example from my high school. The first-year students have just begun, and as part of their start we hold the obligatory IT introduction, where I visit the classes, review the school's rules, and introduce the students to the IT tools used in teaching. One of my slides shows an overview of written assignments and states that the school uses Urkund to investigate whether students have plagiarized. While I'm going through this slide, a student raises their hand and asks, "Can you find out if students have used ChatGPT?"

In the moment, I interpreted the question negatively, assuming the student merely wanted to know how much they could get away with by using ChatGPT instead of doing the assignments themselves. Upon further reflection, however, I realized other motives could lie behind it. Maybe the student was curious about the technology and wanted an open dialogue about its possibilities and limitations in a learning context. Perhaps the student sought guidance on how to use ChatGPT ethically and responsibly as a supplement to their learning.

In any case, I felt powerless in that particular situation because I hadn't prepared a well-thought-out answer and, more generally, lacked a language for talking clearly with students about language models. My answer was that we don't have systems to detect it, but that the teachers are skilled and can see through the use of AI in students' assignments. That answer may not have been entirely accurate, because research shows that teachers often have difficulty determining when and how generative AI has contributed to an assignment (Fleckenstein et al., 2024).

The student's question, which initially evoked a feeling of powerlessness, has perhaps therefore opened up a critical reflection. How can teachers best respond to students' questions and behavior regarding AI tools? How do we balance setting clear frameworks against inviting a curious and constructive dialogue?

Powerlessness in the Classroom and During Exams

In my daily work and in conversations with other teachers, we see how some students become increasingly dependent on the technology, to the point where every single assignment bears clearer imprints of ChatGPT than of the student's own voice. A concrete example was a student who wrote in their exam report: "It sounds like a good plan to create attention and motivate young people to become blood donors. Here is the corrected version of your text:" For me, this was a clear sign that the student had used AI as an aid, which I had also previously observed in class.

But what do you do as a teacher in such a situation? I am not allowed to confront the student directly during the exam; instead, I have to pass my concern on to the principal and participate in a series of meetings because of one sentence. On the one hand, we teachers are responsible for ensuring that students live up to the academic and ethical standards that apply during exams. If we discover that a student has used ChatGPT in a way that can be interpreted as exam cheating, we must react and report it to management. At the same time, this puts us in a dilemma. What if the student used ChatGPT as a legitimate aid? What if the "corrected version" the student refers to is merely the result of using the AI to get feedback and suggestions for improving their text? Is that still cheating?

The examiner and I assessed that we did not have sufficient grounds to open a case, so we took no further action against the student. Ultimately, it is up to the principal to assess whether a student has cheated by using ChatGPT, and this assessment must sometimes be made on the basis of a presumption rather than concrete evidence. This situation can leave management, too, with a feeling of powerlessness.

Teachers find themselves in a gray area where our judgment and discretion suddenly become crucial. We can no longer refer to clear rules or systems but must assess for ourselves when the use of AI crosses a line - an assessment that can have significant consequences for the individual student. It is a heavy burden, and one that quickly leads to a feeling of powerlessness. No matter what we do, we risk making a decision that is unfair to the student or that compromises our professionalism and values.

At the same time, this situation raises questions about our autonomy and control as teachers. If we constantly have to act as detectives and try to expose whether students have used ChatGPT, aren't we at risk of being reduced to a kind of controller? What does it do to our relationship with the students if they experience us as someone primarily out to "catch" them? It can create a feeling of powerlessness because our actual task - to facilitate learning and support the students' development - is suddenly overshadowed by this need for control.

ChatGPT Challenges Teachers' Professionalism

Another example comes from a teacher who experienced students using ChatGPT to challenge his professionalism in history class. The teacher had taught about World War II, and the students had investigated the topic using ChatGPT rather than their textbook or Google searches. It turned out that the students trusted the language model's output more than the teacher's explanation. This triggered a significant discussion between the students and the teacher about Winston Churchill's role during the war. The problem was that ChatGPT presented Churchill from an American point of view rather than from a broader Western perspective. The result was a confrontation in which the students perceived the AI as better informed than the teacher.

The teacher felt powerless and even experienced a form of existential crisis because of the situation. Suddenly, his professional authority and many years of expertise were being challenged by an AI in which the students had more confidence. This raised uncomfortable questions: What is the teacher's role if the students trust the machines more anyway? Is one, as a professional, reduced to a kind of moderator for the students' interaction with AI?

Here, Bandura's concept of self-efficacy, which I touched upon in the previous article, comes into play again (Bandura, 2013). The teacher's belief in their ability to master both the academic material and the new technologies is crucial to the experience of control and autonomy. If we have high pedagogical and technical self-efficacy, we will be better equipped to see students' use of AI as an opportunity for dialogue and critical reflection rather than as an attack on our authority. We can assert our expertise not by rejecting AI but by using it as a starting point for discussing different perspectives, quality-assessing sources, and training students' judgment. Conversely, teachers with low self-efficacy risk feeling inferior to and threatened by AI systems - an experience that only reinforces the powerlessness.

Although the example of the student challenging the history teacher is thought-provoking, it is not necessarily representative of all students' use of ChatGPT. In many cases, students' interaction with AI can be an occasion for deeper learning and the development of critical thinking. One could imagine students using ChatGPT to generate different perspectives on a historical event and, instead of uncritically accepting the AI's output, using it as a starting point for a critical comparison with other sources. What differences and similarities are there? What biases or shortcomings can be identified in the AI's presentation? Students can better understand source criticism and historical analysis by asking these questions. In this process, AI does not become a substitute for the teacher; instead, it is a tool that can promote students' curiosity and critical thinking.

A survey of American teachers shows the same trend: students uncritically adopt AI-generated answers, even when they are wrong, and struggle to work persistently with complex tasks (Grose, 2024). As one English teacher put it, students are quicker to give up if they don't understand something immediately. They also fear being perceived as stupid by their classmates and assume their own thought processes should be as fast as the AI's.

Several teachers, however, also point to constructive ways of addressing these challenges. Some use newer books in their teaching, since AI has more difficulty generating comprehensive analyses of them. Others replace traditional tests with tailored questions and classroom activities that AI can't handle. In general, the teachers are skeptical but pragmatic. They know that AI is here to stay, but they doubt its learning potential and are aware of the pitfalls - in contrast to certain decision-makers who uncritically hail AI's possibilities and thus risk taking shortcuts with students' learning and education.

Ultimately, it may be about finding a new balance. On the one hand, we must maintain our responsibility to ensure academic integrity and fairness in exams while also allowing students to explore and use these new tools ethically and transparently. But this is a tricky balance, and pursuing the perfect solution can lead to a feeling of powerlessness.

Conclusion

The examples in this article show how generative AI can evoke a feeling of powerlessness in teachers. We face dilemmas where technology challenges our authority, autonomy, and relationships with students.

Concrete situations where a student may have used ChatGPT for an exam assignment put the teacher in a dilemma between enforcing the rules and showing trust. At the same time, they raise questions about how we define cheating and legitimate aids in a world where generative AI has become part of everyone's everyday life. To navigate these dilemmas, we need clear guidelines for using the technology ethically and responsibly in teaching. Here, I believe that management at educational institutions must be ready to support teachers through this transition period. When teachers feel powerless and challenged by generative AI, they need support, clear guidelines, and space to develop and experiment with new forms of teaching themselves.

However, although the integration of generative AI in teaching can seem overwhelming, we, as teachers and students, are not powerless victims of the technology. On the contrary, we can actively shape the development and exploit AI's potential to create new and engaging learning experiences. This requires us to dare to think in new ways and to adapt our didactics and pedagogy to the new conditions. How can we use AI to stimulate students' creativity, critical thinking, and problem-solving skills? How can we design teaching that incorporates AI ethically and meaningfully? By proactively exploring these questions, we can help steer the development in a direction that benefits our students and strengthens their competencies for a future with AI.

Finally, it is about building the necessary competencies among teachers and students. We need continuing education to understand and apply AI ethically and pedagogically. This applies not only to technical skills but also to the ability to reflect critically on the role of technology in our subjects and society as a whole. In this way, AI can also help open up new forms of learning, where students, to a greater extent, become autonomous, independent, and creative. It is less about reproducing knowledge and more about applying it in new ways. By viewing AI as an opportunity to rethink our pedagogical practice, we may be able to help create an education that not only prepares students for a future with AI but also gives them the tools to shape it actively.

Feelings of loss of control and powerlessness are a natural reaction to AI's disruption. In the following article, I will focus on another consequence — how technology changes the teacher-student relationship.

Sources

Bandura, A. (2013). Self-efficacy. Kognition og Pædagogik, 22(83).

Fleckenstein, J., Meyer, J., Jansen, T., Keller, S. D., Köller, O., & Möller, J. (2024). Do teachers spot AI? Evaluating the detectability of AI-generated texts among student essays. Computers and Education: Artificial Intelligence, 6, 100209.

Friisberg, K. (2023, May 5). 7 ting du skal vide om AI i skolen. SDU. https://www.sdu.dk/da/nyheder/7-ting-du-skal-vide-om-ai-i-skolen

Grose, J. (2024, August 14). What Teachers Told Me About A.I. in School. The New York Times. https://www.nytimes.com/2024/08/14/opinion/ai-schools-teachers-students.html

Rafferty, A. (2023, February 13). How will the use of AI in education impact the roles of teachers? The Learning Counsel. https://thelearningcounsel.com/articles/how-will-the-use-of-ai-in-education-impact-the-roles-of-teachers/