
The SkoleGPT debate: Between safety and learning


Recently, the debate about artificial intelligence in education has been shaped by the discussion about the chatbot SkoleGPT. The controversy arose after an article from Danmarks Radio (DR) dramatically claimed that SkoleGPT could provide detailed descriptions of self-harm and suicide. The coverage focused on the potential risks when children and young people use the technology.

I will briefly outline the challenge; perhaps more heavily moderated language models are not the way forward.

💡
If you are in crisis or having suicidal thoughts, you can contact Livslinien at 70 201 201 every day between 11 AM and 5 AM. You can also chat with Livslinien at livslinien.dk.
Chatbot made for public schools shocks experts: Provided detailed descriptions of self-harm
After DR contacted the creators behind the Danish chatbot SkoleGPT, they have now changed the bot's responses.

However, several sources criticize DR's approach as one-sided and sensationalist. They point out that DR actively sought out the controversial output without disclosing its method or mentioning the many instances in which SkoleGPT did not give problematic responses. They also criticize the loss of nuance and background information, including the very purpose of SkoleGPT.

Ove Christensen, a special consultant in digital competency enhancement, believes that the focus should instead be on how students best learn to understand and critically relate to AI. In a LinkedIn article, he argues that the problem is not the security holes in SkoleGPT, but the lack of systematic education in AI and digital judgment in public schools.

Ove Christensen also points out that a safer SkoleGPT would require significantly larger investments, but that the central problem is the absence of a Danish or Nordic AI model developed under democratic control. This leaves schools dependent on commercial AI solutions that lack sufficient regulation.

Generic illnesses - based on the debate about SkoleGPT
Generative artificial intelligence (GenAI) has become a huge industry. Enormous sums are being invested in developing language models and creating services that, based on these language models, can generate something that simulates knowledge and insight about the world we live in.

The issue has also had political consequences. The Socialist People's Party (SF) has asked Education Minister Mattias Tesfaye (S) to explain how more thorough testing of AI systems for schools will be ensured. The party's education spokesperson, Sigurd Agersnap, believes there is a need for restrictions on the responses that chatbots for school use can give students.

After descriptions of self-harm: SF wants Tesfaye to ensure more tests of AI systems for schools
How do we ensure against new cases where students can receive graphic descriptions of self-harm and suicide? This is what the Minister of Education must answer after a case concerning the chatbot SkoleGPT.

Laurits Rasmussen, CEO and co-founder of Pathfindr, suggests in his debate piece in Skolemonitor that the technology be used under supervision, with moderators who oversee conversations and intervene if students show signs of distress. He compares AI to other tools in schools that can be dangerous if misused, such as knives in the craft room or chemicals in the physics lab.

"If we completely shut down all content that can be harmful in some contexts but beneficial in others, we are pushing our children onto other platforms—completely without any supervisors," writes Laurits Rasmussen.

Debate: The case of SkoleGPT raises several significant questions
If we completely shut down all content that can be harmful in some contexts but beneficial in others, we push our children onto other platforms—completely without any supervisors, writes the commentator.
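
Rasmussen's supervision model is easy to picture in concrete terms. The sketch below is a purely illustrative Python example, not anything from SkoleGPT: the `DISTRESS_PATTERNS` list, the `ModeratorQueue` class, and the `relay_message` function are all hypothetical names invented here. The idea it demonstrates is his: a flagged message still reaches the chatbot, but a human moderator is alerted and can step in.

```python
import re
from dataclasses import dataclass, field

# Hypothetical distress indicators. A real deployment would use a trained
# classifier and a vetted keyword list, not this toy sample.
DISTRESS_PATTERNS = [
    re.compile(r"\b(self[- ]harm|hurt myself)\b", re.IGNORECASE),
    re.compile(r"\b(suicide|end my life)\b", re.IGNORECASE),
]

@dataclass
class ModeratorQueue:
    """Stand-in for a real alerting channel (dashboard, e-mail, pager)."""
    alerts: list = field(default_factory=list)

    def notify(self, student_id: str, message: str) -> None:
        # A human reviews the flagged conversation and decides whether to
        # intervene; the exchange itself is not blocked.
        self.alerts.append((student_id, message))

def relay_message(student_id: str, message: str, queue: ModeratorQueue) -> str:
    """Forward a student's message, flagging it for human review if needed."""
    if any(pattern.search(message) for pattern in DISTRESS_PATTERNS):
        queue.notify(student_id, message)
    return message  # passed on to the language model unchanged

queue = ModeratorQueue()
relay_message("student-42", "I have been thinking about self-harm", queue)
print(len(queue.alerts))  # 1 -> a moderator is asked to look at the chat
```

The design choice mirrors his comparison with the craft room: the tool stays available, and safety comes from an adult who is watching.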

We also recommend reading Malte von Sehested's article in IT Torvet, which points out that AI education is not systematic, leaving students poorly equipped to navigate digitally.

When fear and sensationalism are chosen over the thorough story - IT Torvet
Last week's sharp angle from Danmarks Radio could end up doing more harm than good to students' ability to engage with AI.

Other platforms have the same challenge

The challenges we see are not unique to AI chatbots; social media platforms like TikTok and Instagram are already struggling with content related to suicide and self-harm. The bigger issue is that an ordinary Google search could return the same results that DR reported, and with a little effort one could find more open models that lack guardrails altogether. At the same time, dubious posts circulate on TikTok and Instagram, where children are even more exposed to self-harm content.

Language models in education

The discussion about SkoleGPT illustrates schools' fundamental dilemma: How do we balance the need to protect students with the necessity of preparing them for a digital future?

One should ask whether it is not better to use a more open language model in education, one the teacher shapes didactically for the student, rather than leaving the student to navigate the technology alone.

As the debate continues, it is clear that the solution lies both in improving safety filters and in strengthening systematic education in technology understanding and digital judgment in schools.
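
To make "improving safety filters" slightly more concrete: one common pattern is a layered check that scores a prompt for risk before and after generation, blocking only the clearest cases and attaching help resources in the grey zone. The Python sketch below is an assumption-laden illustration, not how SkoleGPT works; `filter_reply`, the thresholds, and the toy classifier are all invented here, and the referral text reuses the Livslinien details mentioned above.

```python
from typing import Callable

CRISIS_REFERRAL = (
    "It sounds like you may be struggling. You can call Livslinien at "
    "70 201 201 (daily 11 AM-5 AM) or chat at livslinien.dk."
)

def filter_reply(prompt: str,
                 generate: Callable[[str], str],
                 score_risk: Callable[[str], float]) -> str:
    """Layered safety filter: score first, then decide how to respond.

    score_risk stands in for a trained self-harm classifier returning a
    value in [0, 1]; the thresholds below are illustrative only.
    """
    risk = score_risk(prompt)
    if risk >= 0.8:
        # High risk: do not answer at all; refer to crisis help instead.
        return CRISIS_REFERRAL
    reply = generate(prompt)
    if risk >= 0.4 or score_risk(reply) >= 0.4:
        # Grey zone: answer, but make help resources visible alongside it.
        return CRISIS_REFERRAL + "\n\n" + reply
    return reply

# Toy stand-ins so the sketch runs as-is.
def toy_model(prompt: str) -> str:
    return "Here is a factual, school-appropriate answer."

def toy_scorer(text: str) -> float:
    return 0.9 if "suicide" in text.lower() else 0.1

print(filter_reply("How does photosynthesis work?", toy_model, toy_scorer))
print(filter_reply("Tell me about suicide", toy_model, toy_scorer))
```

A filter along these lines only blocks outright at the top of the scale, which is the balance the debate keeps circling: protection without shutting down everything that could also be beneficial.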


Digital Responsibility reveals: Instagram is not telling the truth when it claims to remove self-harm content with AI
A new study from Digital Responsibility shows that Instagram still does not remove violent self-harm content and, contrary to their own statement, chooses not to use artificial intelligence to automatically remove the content.
Digital Responsibility: New analysis of TikTok's dark universes
What is it that children encounter on TikTok? And what do TikTok's algorithms mean for those who already show interest in self-harm and depression-related content?
Organization: Children are exposed to self-harm content on TikTok
According to Digital Responsibility, children are presented with extensive content related to self-harm and suicide on TikTok.
On TikTok, Stinne could explore the difficult, the dark, and reflect on her self-harm.
A vicious cycle of algorithms and addiction was almost impossible for the young woman to escape. When TikTok does not delete self-harm videos, the platform traps more people like her, she says.
Lack of EU requirements makes Telegram a digital no-man's land where illegal content is shared without consequences - Altinget
The mapping of an illegal market for intimate material on Telegram clearly shows how unregulated digital services lure criminals and pave the way for new forms of crime, writes Ask Hesby Holm.
💡
This article has been machine-translated into English. Therefore, there may be nuances or errors in the content. The Danish version is always up-to-date and accurate.