Recently, the debate about artificial intelligence in education has been shaped by the discussion of the chatbot SkoleGPT. The controversy arose after an article from Danmarks Radio (DR), which dramatically claimed that SkoleGPT could provide detailed descriptions of self-harm and suicide. The coverage focused on the potential risks to children and young people using the technology.
I will briefly outline the challenge; perhaps more heavily moderated language models are not the way forward.

However, several sources criticize DR's approach as one-sided and sensationalist. They point out that DR actively sought out the controversial output without disclosing its method or mentioning the many instances in which SkoleGPT did not provide problematic responses. They also criticize the loss of nuance and background information, including the very purpose of SkoleGPT.
Ove Christensen, a special consultant in digital competency enhancement, believes that the focus should instead be on how students best learn to understand and critically relate to AI. In a LinkedIn article, he argues that the problem is not the security holes in SkoleGPT, but the lack of systematic education in AI and digital judgment in public schools.
Ove Christensen also points out that a safer SkoleGPT would require significant investment. Still, the central problem is the absence of a Danish or Nordic AI model developed under democratic control, which leaves schools dependent on commercial AI solutions that lack sufficient regulation.

The issue has also had political consequences. The Socialist People's Party (SF) has asked Education Minister Mattias Tesfaye (S) to explain how more thorough testing of AI systems for schools will be ensured. The party's education spokesperson, Sigurd Agersnap, believes restrictions are needed on the responses that chatbots for school use can give students.

Laurits Rasmussen, CEO and co-founder of Pathfindr, suggests in his debate article in Skolemonitor that the technology be used under supervision, with moderators who oversee conversations and intervene if students show signs of distress. He compares AI to other tools in schools that can be dangerous if misused, such as knives in the craft room or chemicals in the physics lab.
"If we completely shut down all content that can be harmful in some contexts but beneficial in others, we are pushing our children onto other platforms—completely without any supervisors," writes Laurits Rasmussen.
We also recommend reading Malte von Sehested's article in IT Torvet, which points out that AI education is not systematic, leaving students poorly equipped to navigate digitally.

Other platforms face the same challenge
The challenges we see are not unique to AI chatbots; social media platforms like TikTok and Instagram are already battling issues related to suicide and self-harm. The bigger problem is likely that an ordinary Google search could return the same results as those reported by DR, and with a little effort one could find more open models that lack guardrails entirely. At the same time, dubious posts circulate on TikTok and Instagram, where children are even more exposed to self-harm content.
Language models in education
The discussion about SkoleGPT illustrates schools' fundamental dilemma: How do we balance the need to protect students with the necessity of preparing them for a digital future?
One should ask whether it is not better to use a more open language model in education, one the teacher shapes didactically for the students, rather than letting students navigate the technology on their own.
As the debate continues, it is clear that the solution lies both in improving safety filters and in strengthening systematic education in technology understanding and digital judgment in schools.




