This week, we focus on a new study from the University of Copenhagen, which shows that many high school students fear being accused of plagiarism, even when they have not attempted to cheat. This is mainly due to a lack of insight into how plagiarism detection tools operate.

Jeppe Klitgaard Stricker has published a new article introducing the concept of "the synthetic knowledge crisis" and discussing how generative AI produces texts that sound credible but lack the academic depth characteristic of actual knowledge. Jeppe uses terms like “Digital Plastic” and “Slop” to describe this AI-generated content that appears as knowledge without being so.

We also refer to several posts from last week published on Folkeskolen.dk, all focusing on AI in education:

This week's other interesting articles came from Gymnasieskolen and Zetland:

  • According to Kasper Nissen, high schools should integrate AI into teaching with an open approach that supports learning, rather than clinging to outdated evaluation methods that are becoming increasingly difficult to enforce.
  • Additionally, Ida Ebbensgaard argues in a new book that AI is not only a challenge but also an opportunity to enhance creativity, innovation, and efficiency. However, this requires that we understand AI's potential and limitations.

You can also read this week's other news at the end of the newsletter.

Happy reading!

💡
This article has been machine-translated into English. Therefore, there may be nuances or errors in the content. The Danish version is always up-to-date and accurate.

Students fear unfair plagiarism cases

A new study led by researcher Mads Paludan Goddiksen from the University of Copenhagen shows that many high school students fear being accused of plagiarism, even when they did not intend to cheat.

Graphic: https://gymnasieskolen.dk/articles/aerlige-elever-er-bange-for-at-blive-taget-i-plagiat/

Students have limited insight into what happens when their assignments are checked for plagiarism in the school's software. Many mistakenly believe that the system automatically determines whether cheating has occurred, which is not the case. AI tools like ChatGPT have further blurred the boundaries between plagiarism and the legitimate use of technology. Studies also show that students often struggle to identify plagiarism unless it is obvious.

Mads Paludan Goddiksen recommends that schools become more transparent about the plagiarism process and clarify that human judgment always determines whether cheating has occurred. He also calls for more education on proper citation practices and academic integrity so that students better understand when a text is plagiarized.
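To make concrete why software output alone cannot establish cheating, here is a minimal illustrative sketch of the kind of n-gram overlap scoring that text-matching tools rely on. This is not any vendor's actual algorithm, and the sample texts are invented; the point is that a properly quoted, cited passage can score just as high as plagiarism, which is why the human judgment the study calls for remains essential.

```python
import re

def ngrams(text, n=5):
    """Return the set of n-word sequences in a text (punctuation stripped)."""
    words = re.findall(r"[a-z]+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

# A correctly quoted and cited passage still overlaps heavily with its source:
quoted = ('As Smith (2020) writes, "the climate crisis demands immediate '
          'collective action by every government" (p. 4).')
source = "The climate crisis demands immediate collective action by every government."

print(f"overlap: {overlap_score(quoted, source):.2f}")  # high score, yet no cheating
```

The software can only report textual overlap; deciding whether that overlap is plagiarism or legitimate citation is exactly the step a human must take.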

Honest students fear being caught in plagiarism

Read the entire study below.

The dark side of text-matching software: worries and counterproductive behaviour among European upper secondary school and bachelor students - International Journal for Educational Integrity
Text-matching software (TMS) is a standard part of efforts to prevent and detect plagiarism in upper secondary and higher education. While there are many studies on the potential benefits of using this technology, few studies look into potential unintended side effects. These side effects include students worrying about being accused of plagiarism due to TMS output, even though they did not intentionally plagiarise. Although such worries are frequently mentioned in the literature, little is known about how prevalent they are, why they occur and how students react to them. This paper aims to fill this knowledge gap. The data for the study comprise 36 interviews with upper secondary and Bachelor students from three European countries combined with survey data from 3,424 students from seven European countries representing a broad range of disciplines. The study found that a substantial proportion of the two groups of students – 47% of upper secondary and 55% of Bachelor students – had experienced TMS-related worries during their current studies. Furthermore, there were substantial differences across countries. Students worry partly because they have a poor understanding of how TMS is used in their institution, and partly because they know that plagiarism is taken very seriously. The study shows that TMS-related worries can lead students to become very focused on not being caught plagiarising, to such an extent that some adopt citation practices that they believe are suboptimal. The paper concludes that institutions using TMS should always combine it with training for students and teachers. Students should be clearly informed about how TMS is used and should develop an understanding of plagiarism and good citation practice that goes beyond the narrow focus on any overlap between texts elicited by the software.

The synthetic knowledge crisis: AI and the hollowing out of knowledge

In a new article, Jeppe Klitgaard Stricker writes about a current AI issue: the synthetic knowledge crisis. The central problem is not AI-generated content in itself, but that it often appears as legitimate knowledge without having passed through the traditional mechanisms of scientific quality assurance, such as peer review, academic debate, and methodological transparency.

This creates educational challenges, where students increasingly risk confusing superficial, AI-generated texts with in-depth academic understanding.

At the same time, Jeppe points out that the research community is a co-creator of the problem, as productivity and quantifiable results are often valued higher than insight and depth. This can lead to an academic environment where the quantity of research becomes more important than its quality.

Therefore, the choice is clear: Either we accept a future in which synthetic knowledge replaces genuine insight, or we maintain the processes that ensure the value of knowledge.

Jeppe proposes that educational institutions reform teaching and research, focusing more on research-based discussions and depth than productivity.

Read the entire article on Stricker.ai – and remember to subscribe to Jeppe's newsletter for valuable insights and perspectives on artificial intelligence and education.

The Synthetic Knowledge Crisis
In AI contexts, we talk about bias and misinformation, but I believe we overlook a problem that is at least as significant: the synthetic knowledge crisis. Leon Furze uses the term Digital Plastic, and Simon Willison talks about Slop, referring to the type of AI-generated content that has barely any substance. Regardless of which words we use to describe the phenomenon, the issue is clear: our traditional understanding of what is worth knowing, and of how we know something, is being hollowed out.

AI as a co-researcher: The potential of a digital scientific assistant

A new research project introduces an AI-based co-scientist designed to assist researchers in formulating and improving scientific hypotheses. The system is based on a multi-agent model that employs an iterative approach to generate, debate, and evolve hypotheses – inspired by the scientific method.

The AI co-researcher does not replace researchers but acts as an advanced assistant that can analyze large amounts of data and propose new ideas in a structured and transparent manner. Researchers interact with the system by stating their research goals in natural language and can continuously provide feedback to adjust the output.
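The "generate, debate, and evolve" loop described above can be sketched as a simple tournament. This is emphatically not the actual AI co-scientist system (which uses LLM agents built on Gemini 2.0); the `score` critic and `mutate` step below are placeholder stand-ins, shown only to illustrate the iterative rank-select-refine pattern.

```python
def score(hypothesis):
    """Stand-in 'debate' critic: the real system has agents argue over
    novelty and plausibility; here we simply reward longer rationales."""
    return len(hypothesis.split())

def mutate(hypothesis):
    """Stand-in 'evolve' step: the real system refines hypotheses with an
    LLM; here we just append a refinement marker."""
    return hypothesis + " (refined)"

def co_scientist_loop(hypotheses, rounds=3):
    """Toy tournament: rank the pool, keep the top half, evolve survivors."""
    pool = list(hypotheses)
    for _ in range(rounds):
        pool.sort(key=score, reverse=True)                  # debate / rank
        survivors = pool[: max(1, len(pool) // 2)]          # tournament selection
        pool = survivors + [mutate(h) for h in survivors]   # evolve
    return max(pool, key=score)

seeds = [
    "Drug A inhibits pathway X",
    "Gene B drives fibrosis via epigenetic silencing of C",
]
best = co_scientist_loop(seeds)
```

The design point the paper makes is that spending more compute on such iterative rounds (test-time scaling) measurably improves hypothesis quality.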

Towards an AI co-scientist
Scientific discovery relies on scientists generating novel hypotheses that undergo rigorous experimental validation. To augment this process, we introduce an AI co-scientist, a multi-agent system built on Gemini 2.0. The AI co-scientist is intended to help uncover new, original knowledge and to formulate demonstrably novel research hypotheses and proposals, building upon prior evidence and aligned to scientist-provided research objectives and guidance. The system’s design incorporates a generate, debate, and evolve approach to hypothesis generation, inspired by the scientific method and accelerated by scaling test-time compute. Key contributions include: (1) a multi-agent architecture with an asynchronous task execution framework for flexible compute scaling; (2) a tournament evolution process for self-improving hypotheses generation. Automated evaluations show continued benefits of test-time compute, improving hypothesis quality. While general purpose, we focus development and validation in three biomedical areas: drug repurposing, novel target discovery, and explaining mechanisms of bacterial evolution and anti-microbial resistance. For drug repurposing, the system proposes candidates with promising validation findings, including candidates for acute myeloid leukemia that show tumor inhibition in vitro at clinically applicable concentrations. For novel target discovery, the AI co-scientist proposed new epigenetic targets for liver fibrosis, validated by anti-fibrotic activity and liver cell regeneration in human hepatic organoids. Finally, the AI co-scientist recapitulated unpublished experimental results via a parallel in silico discovery of a novel gene transfer mechanism in bacterial evolution. These results, detailed in separate, co-timed reports, demonstrate the potential to augment biomedical and scientific discovery and usher an era of AI empowered scientists.

AI chatbots distort facts - even with reliable sources

A BBC study has revealed that AI chatbots continue to produce misleading answers, even when they rely on fact-checked news sources.

The researchers tested leading AI models and found that:

  • 19% of AI-generated answers with BBC references contained factual errors.
  • 51% of the answers had significant journalistic shortcomings, such as unclear distinctions between facts and opinions, lack of context, or unsubstantiated claims.
  • 13% of the cited BBC sources were either distorted or non-existent.

Examples include misrepresenting NHS recommendations about smoking and vaping and erroneously describing police collaboration with private security companies regarding shoplifting. The AI models also provided a skewed portrayal of the Middle East conflict without basis in the cited sources.

The BBC emphasizes that these errors are serious, as a well-functioning society relies on a shared understanding of facts. The prevalence of inaccurate AI-generated information can lead to misinformation and undermine trust in the media.

Chatbots Are Hallucinating - No Matter What · Data Ethics Think Tank
Even if news sites around the world, which today block AI bots or…

AI in schools: Should it be about learning or efficiency?

Ronni Laursen writes in an op-ed on Folkeskolen.dk that it is naive to think that AI can replace teachers. He criticizes the Danish Employers' Association (DA) for suggesting that AI could free up 9,000 teaching positions, as experiences from previous digitization projects show that top-down implementation often fails.

He draws parallels to previous attempts to standardize teaching through learning platforms, which many teachers deliberately chose not to use. He warns that AI in schools could lead to standardized teaching where automated solutions replace flexibility and pedagogical judgment.

While AI can be a helpful tool to support teaching – for example, by customizing material for the individual student or automating feedback – it should not be used to reduce the number of teachers.

AI in schools: We have seen the naivety before
It is naive to think that AI can replace teachers, writes Ronni Laursen in today’s post, but artificial intelligence should assist teachers in their daily work.

AI in schools: Can circle pedagogy strengthen students' critical thinking?

In Folkeskolen, Lene Rachel Andersen writes that AI's entry into schools requires a pedagogical shift that strengthens students' ability to reflect critically rather than merely reproduce knowledge. She points out that AI has already created three challenges in education: students can use AI to complete their assignments without learning anything; they do not develop independent thinking and writing skills; and they often trust AI more than their teachers, even when AI provides incorrect answers.

Read the entire post on folkeskolen.dk

Artificial Intelligence: Circle pedagogy can vaccinate students against wrong and easy answers
It is important that the school adheres to the strong, Nordic tradition of education, says the think tank director.

Public school must embrace technology – not reject it

DI Digital supports the government’s proposal for mobile-free public schools, as private smartphones can disrupt lessons and negatively affect social dynamics. However, they point out that it is also essential to integrate technology that can strengthen children’s digital skills.

Technological literacy should not remain a mere elective in the upper grades but become mandatory for all students. Digital teaching materials can increase motivation and create variation in lessons, and children must learn to use and understand technology.

Although the government does not plan to make technological literacy mandatory, DI Digital will continue to work to ensure that all students acquire the skills needed to navigate a digitalized world.

Public schools must not turn their backs on technology - DI Digital
The Welfare Commission published its long-awaited report on Tuesday with 35 recommendations that together aim to address the challenges related to the well-being of children and adolescents. The commission’s recommendations on a mobile phone-free primary school captured attention, not least due to the government’s proposal for a ban on phones in schools.

Chatbots in teaching: From control to learning

In a post in Gymnasieskolen, Kasper Nissen suggests that language models like ChatGPT represent a paradigm shift in the education sector. Instead of perceiving AI as a threat to traditional teaching, he highlights the potential for appropriate implementation to free up teachers' and students' resources.

Rather than viewing AI as a problem because students can use it to cheat on written assignments, he proposes changing the evaluation so that homework no longer serves as a control tool but purely as an educational resource. If written assignments are no longer used for grading but for learning and feedback, the fear of cheating becomes irrelevant. The few annual, controlled exams will still reveal students who let AI do all the work.

Teachers can also gain more time for valuable feedback because AI can take over mechanical corrections of grammar and structure while human instructors focus on interpretation, style, and nuance. In this way, AI becomes not an enemy but a tool that provides teachers and students greater freedom to focus on what truly matters.

Kasper Nissen believes that AI should be integrated into teaching with an open approach that supports learning rather than maintaining outdated evaluation methods that become increasingly difficult to enforce.

The liberating potential of chatbots

AI as a superpower: Potential, risks, and the way forward

Ida Ebbensgaard has just published the book Ægte, and in an article in Zetland she highlights how AI can enhance human creativity and problem-solving while also presenting significant challenges.

To harness AI’s potential without losing control, Ida Ebbensgaard argues that we must master the technology – not reject it. AI can be compared to electricity or the internet in its transformative power and should be seen as a tool that supports but does not replace human judgment.

Read Ida's article below, and consider ordering the book from Zetland.

The machine gives humans superpowers. Here’s why we need to use AI much more
Anyone can use artificial intelligence. It isn’t difficult. You just need to get started.
Ægte. A small book about artificial intelligence
What does artificial intelligence mean for us humans? Ida Ebbensgaard has written a small book that argues that precisely because technology is so prevalent, genuine attention and relationships will increase in value: The more machine, the more human. And she offers suggestions on how to approach the change in practice –

This week's other news

Black box, big power: Why AI needs to learn to explain itself
For AI to truly be effective and reliable, users need to understand how the models make decisions. But how do we create transparency in artificial intelligence?
Artificial intelligence can give consumers lower electricity bills
Read more here.
Here is the task you cannot outsource: As a leader in 2025, you must be an AI superuser: Management - Mandag Morgen
Most leaders feel on shaky ground regarding AI, a new analysis shows. But there is no way around finding a footing in the booming technology. Sooner rather than later. We asked a tech expert, a career advisor, and a top executive how leaders tame artificial intelligence.
Artificial intelligence should sharpen management's focus on the working environment - OFFENTLIG LEDELSE
We cannot talk about artificial intelligence without talking about the working environment – and leadership, says Steffen Bohni, director of Det Nationale Forskningscenter for Arbejdsmiljø (the National Research Centre for the Working Environment). Artificial intelligence will affect and change both work tasks and employees – and therefore also the working environment.