Here is this week's newsletter on generative artificial intelligence in the education sector, featuring a selection of articles and news from Denmark and abroad.
When selecting content for our newsletter, we usually focus on opportunities and challenges in teaching. This week, however, we are both saddened and concerned by the AI surveillance taking place in American schools. In several places, AI systems have been implemented to monitor students' written communication in order to prevent violence, self-harm, and bullying. In Europe, such systems would be classified as high-risk AI under the EU's AI Act and therefore could not be implemented legally without extensive adjustments and safety measures.
At Viden.AI, we have previously written about AI learning objects, and now Systime has developed AI activities for its teaching materials that students can use in class. Meanwhile, Christian Dalsgaard has investigated high school students' use of AI and offers concrete recommendations for how AI can be integrated to promote learning. Conversely, several educators warn in Kristeligt Dagblad that cheating with AI is out of control in Danish high schools and advocate for more analog teaching.
Additionally, we have selected the following interesting news and articles:
- A small research study shows that a short introduction to AI does not make students engage more critically with AI's responses.
- In Denmark, technology comprehension is primarily an elective subject in primary education, though it is also gradually being integrated across various subjects. China, by contrast, has adopted a national strategy that introduces education in artificial intelligence specifically to develop technological talent and strengthen the country's position in the AI industry.
- A new study from the Tow Center for Digital Journalism digs deeper into the problems of using language models for search. It points to numerous issues with credibility and citations in AI tools, issues that must be addressed in teaching.
- If Denmark hopes to get a good Danish language model, large amounts of training data are required. The Danish Language Model Consortium is therefore calling on both public and private organizations to donate Danish text data.
- This week, an AI-generated research paper passed peer review, although it was withdrawn before publication. With just a few clicks, it is now possible to conduct "research" without human involvement. What does this mean for our knowledge society, and are we heading toward a knowledge crisis?
Happy reading – and don't forget the articles we didn't have time to cover in depth.
AI surveillance in American schools exposes data security flaws and raises ethical concerns
Several schools in the USA use AI-driven surveillance to monitor students' online activity and prevent violence, self-harm, and bullying. Systems like Gaggle scan students' written communication and alert school staff to potentially dangerous situations. Schools see the surveillance as necessary to protect students, but a case from Vancouver Public Schools shows that it also poses serious security risks.
A security flaw in Vancouver allowed journalists to access 3,500 unencrypted student documents containing sensitive information. Moreover, the AI surveillance has exposed LGBTQ+ students' sexual orientation or gender identity without their consent, and students feel distrust toward the school when their private thoughts are monitored. Critics also point out that the system generates many false alarms and does not necessarily reduce the risk of violence or suicide.
Systime develops AI activities for high school education
Following the Danish Ministry of Education's new recommendations for using generative AI in high school education, Systime is focusing on developing AI-based interactive activities that support students' learning. Inspired by Christian Dalsgaard's research, Systime has developed a strategy to ensure that AI is used as a learning-enhancing tool rather than a replacement for students' own thinking.
To ensure quality and safety, the interactions are academically limited, tested by teachers and students, and designed to comply with GDPR requirements. Systime emphasizes that dialogue with teachers is crucial for further developing the AI learning materials and ensuring they support students' independent learning processes.
https://systime.dk/systimenyt/post/vi-skaber-ai-interaktiviteter-der-fremmer-laering
Learning processes with generative AI should be strengthened in high school
Researcher Christian Dalsgaard has investigated how high school students use generative AI in their schoolwork. Instead of focusing on cheating and plagiarism, he has analyzed students' working methods and learning processes in collaboration with Aarhus Business College, Herning Gymnasium, and Uddannelsescenter Ringkøbing-Skjern.
His research shows that students use AI for task preparation, inspiration, text revision, feedback, explanations, and enhanced understanding.
Depending on how AI is used, each of these approaches can either enhance or hinder learning. Christian Dalsgaard points out that students do not primarily use AI to copy answers but rather as a sparring partner that helps them understand and structure their work. He believes schools should teach students to balance the use of AI as a learning aid against the risk of uncritical dependence on the technology, and he argues that we need to move away from a black-and-white view of cheating and instead focus on the learning processes that AI can strengthen.
AI cheating among students is out of control
Educators in Danish high schools are experiencing a significant increase in cheating using AI, particularly ChatGPT.
"It has fallen like a bomb across the entire youth education sector. It has been horrible this past year, how quickly it has happened," says Anne Solbjerg, a Danish teacher at Frederikssund Gymnasium.
Several educators confirm that students perform significantly better at home than in exam-like tests, suggesting extensive use of AI.
According to teachers, the problem particularly affects academically weak students who struggle to express themselves in writing. Students confirm that ChatGPT has become a widely used tool, with some using it to secure high grades or compensate for a lack of self-confidence.
In response, several high schools are reverting to handwritten submissions and analog tests.
"I encourage a massive analog counter-offensive. Back to pen and paper, even if they first have to learn to write by hand again. Otherwise, they won't learn anything more," says senior lecturer Mikael Busch to Kristeligt Dagblad.
The Minister of Education, Mattias Tesfaye, noted in Kristeligt Dagblad that the government has issued recommendations for using AI in education and is considering increasing the use of pen-and-paper tests.
Behind a paywall
Short-term AI teaching does not change students' use of ChatGPT
A small study from the University of Tartu in Estonia indicates that a brief introduction to AI does not reduce high school students' tendency to uncritically trust ChatGPT, even when the model provides incorrect answers.
The study involved 36 Estonian high school students, divided into two groups: a control group and an intervention group that received teaching materials about ChatGPT's limitations and how to use AI critically. Following this, students were asked to solve math problems using ChatGPT, with half of the model's responses deliberately containing errors.
The results showed that students accepted incorrect answers in 52.1% of cases on average, indicating widespread over-reliance on ChatGPT. Surprisingly, the teaching did not reduce this over-reliance; instead, students in the intervention group more often rejected ChatGPT's correct answers.
The study concludes that a short AI literacy intervention is insufficient to reduce reliance on AI-based recommendations. For students to develop a critical approach to AI, more comprehensive and systematic learning trajectories are needed to explain AI's limitations and train them to think more slowly and analytically.
China introduces AI education in primary schools to strengthen technological leadership
From September, schools in Beijing will introduce AI courses at both primary and secondary levels as part of a national strategy to develop technological talent and promote China's AI industry. The teaching will comprise at least eight hours per year and be integrated into subjects such as IT and natural sciences.
The Chinese education authorities plan to collaborate with universities, tech companies, and research institutions to develop courses and create innovative learning environments with AI-assisted teaching. AI will be used as a teaching tool and part of leisure activities, research projects, and after-school services.
At the primary school level, the focus will be on developing students' AI thinking, while high school students will receive more practical courses focusing on AI applications and innovation. Additionally, AI ethics will be part of the curriculum to ensure students learn to use generative AI responsibly and ethically.
AI-based search engines have a serious citation problem
A new study from the Tow Center for Digital Journalism at Columbia University in New York reveals serious problems with citations in AI-based search engines. Researchers tested eight AI search tools and found that they often distort news content, ignore publishers' blocks, and generate fake citations.
The study showed that the AI chatbots provided incorrect answers to more than 60% of queries, often with convincing certainty. Premium versions did not necessarily perform better; they were even more likely to give confidently incorrect answers.
Moreover, several AI models ignored web publishers’ blocks, meaning they pulled information from sources that had attempted to exclude them.
Another issue is that AI tools often incorrectly attribute news content to the wrong sources or link to incorrect or non-existent URLs, undermining the credibility of their answers and news media’s ability to gain traffic and revenue. Even when AI companies have licensing agreements with news publishers, that does not guarantee their content will be accurately cited.
The conclusion is that AI-based search engines still face major challenges regarding accuracy, transparency, and citation, which can have serious consequences for news media and users relying on AI for information retrieval.
Danish language models require Danish data – a lot of data!
Large amounts of Danish text data are needed to develop language models that function optimally in a Danish context. Otherwise, Denmark risks falling behind in global AI development. The government's AI strategy emphasizes the importance of responsible data collection, but experts point out that this alone is insufficient.
Therefore, the Danish Language Model Consortium is actively working to engage both public and private organizations in donating Danish text data. The goal is to ensure that AI models developed in Denmark create real value for businesses and society.
Organizations wishing to contribute can sign up via the Danish Language Model Consortium's website.
AI-generated research article passed peer review
A scientific article created by Sakana AI's "The AI Scientist" system passed peer review at an ICLR workshop before being withdrawn as part of a planned experiment. This marks the first time a fully AI-generated research article has passed standard peer review.
The experiment demonstrates both the potential and limitations of AI-generated research articles and raises essential questions about how scientific journals should handle this technology in the future.