Here is this week's newsletter on generative artificial intelligence in the education sector, featuring a selection of articles and news from Denmark and abroad.

When selecting content for our newsletter, we usually focus on opportunities and challenges in teaching. This week, however, we are both saddened and concerned by the AI surveillance taking place in American schools. In several places, AI systems have been deployed to monitor students' written communication in order to prevent violence, self-harm, and bullying. In Europe, such systems would be classified as high-risk AI under the EU AI Act and therefore could not be deployed legally without extensive adjustments and safeguards.

At Viden.AI, we have previously written about AI learning objects, and now Systime has built AI activities into its teaching materials for students to use in class. Meanwhile, Christian Dalsgaard has investigated high school students' use of AI and offers concrete recommendations for how AI can be integrated to promote learning. Conversely, several educators warn in Kristeligt Dagblad that AI-assisted cheating is out of control in Danish high schools and advocate for more analog teaching.

Additionally, we have selected the following interesting news and articles:

  • A small research study shows that a short introduction to AI does not make students engage more critically with AI's responses.
  • In Denmark, technology comprehension is primarily an elective subject in primary and lower secondary school, though elements of it are gradually being integrated across other subjects. China, by contrast, has adopted a national strategy that introduces education in artificial intelligence specifically to develop technological talent and strengthen the country's position in the AI industry.
  • A new study from the Tow Center for Digital Journalism digs deeper into the problems of using language models for search. It documents numerous issues with credibility and citations in AI tools, issues that need to be addressed in teaching.
  • If Denmark is to get a good Danish language model, large amounts of training data are needed. The Danish Language Model Consortium is therefore reaching out to both public and private organizations to donate Danish text data.
  • This week, a fully AI-generated research paper passed peer review before being withdrawn as part of a planned experiment. With just a few clicks, it is now possible to conduct "research" without human involvement. What does this do to our knowledge society, and are we heading toward a knowledge crisis?

Happy reading – and don't forget the articles we didn't have time to elaborate on.


AI surveillance in American schools creates data security risks and raises ethical concerns

Several schools in the USA use AI-driven surveillance to monitor students' online activity and prevent violence, self-harm, and bullying. Systems like Gaggle scan students' written communication and alert school staff to potentially dangerous situations. Schools see the monitoring as necessary to protect students, but a case from Vancouver Public Schools shows that it also poses serious security risks.

A security flaw in Vancouver allowed journalists to access 3,500 unencrypted student documents containing sensitive information. AI surveillance has also inadvertently exposed the sexuality or gender identity of LGBTQ+ students, and students feel mistrust toward their school when their private thoughts are monitored. Critics also point out that the system generates many false alarms and does not necessarily reduce the risk of violence or suicide.

Schools use AI to monitor kids, hoping to prevent violence. Our investigation found security risks
Schools are turning to AI-powered surveillance technology to monitor students on school-issued devices like laptops and tablets.

Systime develops AI activities for high school education

With the Danish ministry's new recommendations for using generative AI in high school education, Systime is focusing on developing AI-based interactive activities that support students' learning. Inspired by Christian Dalsgaard's research, Systime has developed a strategy that ensures AI is used as a learning-enhancing tool rather than a replacement for students' thinking.

To ensure quality and safety, the interactive activities are restricted to the relevant subject matter, tested by teachers and students, and designed to comply with the GDPR. Systime emphasizes that dialogue with teachers is crucial for developing the AI learning materials further and ensuring they support students' independent learning processes.

https://systime.dk/systimenyt/post/vi-skaber-ai-interaktiviteter-der-fremmer-laering


Learning processes with generative AI should be strengthened in high school

Researcher Christian Dalsgaard has investigated how high school students use generative AI in their schoolwork. Instead of focusing on cheating and plagiarism, he has analyzed students' working methods and learning processes in collaboration with Aarhus Business College, Herning Gymnasium, and Uddannelsescenter Ringkøbing-Skjern.

His research shows that students use AI for task preparation, inspiration, text revision, feedback, explanations, and enhanced understanding.

Depending on how AI is used, each of these methods can either enhance or hinder learning. Christian Dalsgaard points out that students do not primarily use AI to copy answers but rather as a sparring partner that helps them understand and structure their work. He believes schools should focus on teaching students to balance the use of AI as a learning aid against the risk of uncritical dependence on the technology, and he argues that we need to move away from a black-and-white view of cheating and instead focus on the learning processes that AI can strengthen.

Learning processes with generative AI should be strengthened

AI cheating among students is out of control

Educators in Danish high schools are experiencing a significant increase in cheating using AI, particularly ChatGPT.

"It has fallen like a bomb across the entire youth education sector. It has been horrible this past year, how quickly it has happened," says Anne Solbjerg, a Danish teacher at Frederikssund Gymnasium.

Several educators confirm that students perform significantly better at home than in exam-like tests, suggesting extensive use of AI.

According to teachers, the problem particularly affects academically weak students who struggle to express themselves in writing. Students confirm that ChatGPT has become a widely used tool, with some using it to secure high grades or compensate for a lack of self-confidence.

In response, several high schools are reverting to handwritten submissions and analog tests.

"I encourage a massive analog counter-offensive. Back to pen and paper, even if they first have to learn to write by hand again. Otherwise, they won't learn anything more," says senior lecturer Mikael Busch to Kristeligt Dagblad.

The Minister of Education, Mattias Tesfaye, noted in Kristeligt Dagblad that the government has issued recommendations for using AI in education and is considering increasing the use of pen-and-paper tests.

Warning from educators: Students' cheating with artificial intelligence has exploded
Several educators in youth education report that students' written proficiency is in free fall. Artificial intelligence has normalized cheating to an extent they fear will harm young people's future opportunities.

Behind a paywall


Short-term AI teaching does not change students' use of ChatGPT

A small study from the University of Tartu in Estonia indicates that a brief introduction to AI does not reduce high school students' tendency to uncritically trust ChatGPT, even when the model provides incorrect answers.

The study involved 36 Estonian high school students, divided into two groups: a control group and an intervention group that received teaching materials about ChatGPT's limitations and how to use AI critically. Following this, students were asked to solve math problems using ChatGPT, with half of the model's responses deliberately containing errors.

The results showed that, on average, students accepted incorrect answers in 52.1% of cases, indicating widespread over-reliance on ChatGPT. Surprisingly, the teaching did not reduce this reliance; instead, it led students to ignore ChatGPT's correct answers more often.

The study concludes that a short AI literacy intervention is insufficient to reduce reliance on AI-based recommendations. For students to develop a critical approach to AI, more comprehensive and systematic learning trajectories are needed to explain AI's limitations and train them to think more slowly and analytically.

Short-term AI literacy intervention does not reduce over-reliance on incorrect ChatGPT recommendations
In this study, we examined whether a short-form AI literacy intervention could reduce the adoption of incorrect recommendations from large language models. High school seniors were randomly assigned to either a control or an intervention group, which received an educational text explaining ChatGPT’s working mechanism, limitations, and proper use. Participants solved math puzzles with the help of ChatGPT’s recommendations, which were incorrect in half of the cases. Results showed that students adopted incorrect suggestions 52.1% of the time, indicating widespread over-reliance. The educational intervention did not significantly reduce over-reliance. Instead, it led to an increase in ignoring ChatGPT’s correct recommendations. We conclude that the usage of ChatGPT is associated with over-reliance and it is not trivial to increase AI literacy to counter over-reliance.

China introduces AI education in primary schools to strengthen technological leadership

From September, schools in Beijing will introduce AI courses at both primary and secondary level as part of a national strategy to develop technological talent and promote China's AI industry. The teaching will comprise at least eight hours a year and be integrated into subjects such as IT and natural sciences.

The Chinese education authorities plan to collaborate with universities, tech companies, and research institutions to develop courses and create innovative learning environments with AI-assisted teaching. AI will be used as a teaching tool and part of leisure activities, research projects, and after-school services.

At the primary school level, the focus will be on developing students' AI thinking, while high school students will receive more practice-oriented courses on AI applications and innovation. Additionally, AI ethics will be part of the curriculum to ensure students learn to use generative AI responsibly and ethically.

Revolution in primary schools in China: Artificial intelligence instead of times tables
By 2030, all Chinese schools must teach artificial intelligence down to first-grade level.
China to launch AI courses for primary, secondary school students
The move, which will see schools providing at least eight hours of AI classes a year, furthers China’s AI ambitions.
China’s capital city is making AI education mandatory, even for elementary schoolers
Beijing’s decision to make AI education mandatory comes as China powers ahead in the AI race, with its homegrown startups gaining global attention.

AI-based search engines have a serious citation problem

A new study from the Tow Center for Digital Journalism at Columbia University in New York reveals serious problems with citations in AI-based search engines. Researchers tested eight AI search tools and found that they often distort news content, ignore publishers' blocks, and generate fake citations.


The study showed that the AI chatbots answered incorrectly 60% of the time, often with convincing certainty. Premium versions did not necessarily perform better; they were even more likely to give confidently incorrect answers.

Moreover, several AI models ignored web publishers’ blocks, meaning they pulled information from sources that had attempted to exclude them.

Another issue is that AI tools often incorrectly attribute news content to the wrong sources or link to incorrect or non-existent URLs, undermining the credibility of their answers and news media’s ability to gain traffic and revenue. Even when AI companies have licensing agreements with news publishers, that does not guarantee their content will be accurately cited.

The conclusion is that AI-based search engines still face major challenges regarding accuracy, transparency, and citation, which can have serious consequences for news media and users relying on AI for information retrieval.

AI Search Has A Citation Problem
We Compared Eight AI Search Engines. They’re All Bad at Citing News.
The Dark Side of AI Search Nobody’s Telling You About (But Should)
A February 2025 BBC study reveals why we should all be at least a little worried about searching with AI

Danish language models require Danish data - a lot of data!

Large amounts of Danish text data are needed to develop language models that function optimally in a Danish context. Otherwise, Denmark risks falling behind in global AI development. The government's AI strategy emphasizes the importance of responsible data collection, but experts point out that this alone is insufficient.

Therefore, the Danish Language Model Consortium is actively working to engage both public and private organizations in donating Danish text data. The goal is to ensure that AI models developed in Denmark create real value for businesses and society.

Organizations wishing to contribute can sign up via the Danish Language Model Consortium's website.

Join the Danish Language Model Consortium - Alexandra Institute
Help drive the development of responsible Danish language models! The Danish Language Model Consortium is a value-based community open to anyone who can subscribe to its aims and principles.

AI-generated research article passed peer review

A scientific article created by Sakana AI’s system passed peer review at an AI workshop before being withdrawn as part of a planned experiment. This marks the first time a fully AI-generated research article has passed standard peer review.

The experiment demonstrates both the potential and limitations of AI-generated research articles and raises essential questions about how scientific journals should handle this technology in the future.

AI-generated paper passes peer review before planned withdrawal
A scientific paper generated by Sakana AI’s system passed peer review at an AI workshop before being withdrawn as planned.

Other news of the week

With AI Changing Everything, Here’s How Teachers Can Shape the New Culture of Learning | KQED
The impact of artificial intelligence is growing, and that means educators and parents must keep a closer eye on how learning is evolving as well.
No End in Sight for AI’s Invasion into Higher Education
Perhaps the AI apocalypse will paradoxically encourage educators to adopt more traditional approaches to teaching and giving assignments.
We must become even better at using it. Otherwise, we will end up living in the USA's and China's world
We must not let fear of artificial intelligence paralyze us. In the future, the technology will give humanity veritable superpowers, says a journalist and author of a new book about artificial intelligence.
OpenAI declares AI race “over” if training on copyrighted works isn’t fair use
National security hinges on unfettered access to AI training data, OpenAI says.
Can we prevent cheating during exams? Schools need to learn more about artificial intelligence
245 school leaders and IT managers from the country's independent schools are back in the classroom in Nyborg to learn more about artificial intelligence.
More AI competencies on the way to vocational education
Quizmaster, resume writer, teaching assistant: Artificial intelligence can take on many roles at vocational colleges - both in the classroom and in practical training. A new project aims to ensure that teachers acquire the right skills to prepare students for a reality where the technology is integrated.
Municipalities make it clear: Copilot and ChatGPT do not free up labor | Version2
Neither the government, KL, nor the municipalities expect the most hyped AI solutions to free up labor in the public sector. That would require artificial intelligence to come much closer to citizens, but GDPR stands in the way.
University students describe how they adopt AI for writing and research in a general education course - Scientific Reports
AI summaries are coming to Notepad
Drawing shapes is getting easier, too.
AI’s Power to Pace Learning
AI isn’t just a gas pedal for learning. Its real power is in modulating speed, balancing rapid insight with deep, meaningful understanding.
💡
This article has been machine-translated into English. Therefore, there may be nuances or errors in the content. The Danish version is always up-to-date and accurate.