This week, we focus on a new study from the University of Copenhagen, which shows that many high school students fear being accused of plagiarism, even when they have not attempted to cheat. This is mainly due to a lack of insight into how plagiarism detection tools operate.
Jeppe Klitgaard Stricker has published a new article introducing the concept of "the synthetic knowledge crisis" and discussing how generative AI produces texts that sound credible but lack the academic depth characteristic of actual knowledge. Jeppe uses terms like “Digital Plastic” and “Slop” to describe this AI-generated content that appears as knowledge without being so.
- While Jeppe warns against the risk of superficial AI-generated knowledge, an interesting new study shows how AI could potentially enhance scientific practice.
- At the same time, a new BBC survey reveals that AI chatbots such as ChatGPT, Gemini, and Perplexity often distort facts, even when drawing on credible news sources.
We also refer to several posts from last week published on Folkeskolen.dk, all focusing on AI in education:
- If schools are to remain places for learning and formation, we must ensure that AI does not merely become a tool for efficiency.
- We need to help students develop critical thinking and independent thought in a time when AI challenges traditional teaching methods.
- The government's plans for mobile-free public schools must not lead to a general rejection of technology. DI Digital argues that technological literacy should be mandatory to prepare students for a digital future.
This week's other interesting articles came from Gymnasieskolen and Zetland:
- In high school, according to Kasper Nissen, we should integrate AI into teaching with an open approach that supports learning rather than maintaining outdated evaluation methods that become increasingly difficult to enforce.
- Additionally, Ida Ebbensgaard argues in a new book that AI is not only a challenge but also an opportunity to enhance creativity, innovation, and efficiency. However, this requires that we understand AI's potential and limitations.
You can also read this week's other news at the end of the newsletter.
Happy reading!
Students fear unfair plagiarism cases
A new study led by researcher Mads Paludan Goddiksen from the University of Copenhagen shows that many high school students fear being accused of plagiarism, even when they did not intend to cheat.
Students have limited insight into what happens when their assignments are checked for plagiarism in the school's software. Many mistakenly believe that the system automatically determines whether cheating has occurred, which is not the case. AI tools like ChatGPT have further blurred the boundaries between plagiarism and the legitimate use of technology. Studies also show that students often struggle to identify plagiarism unless it is obvious.
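The study's central distinction, that detection software only reports textual overlap while a human must judge whether cheating actually occurred, can be illustrated with a minimal sketch. This is a hypothetical toy example using word-trigram overlap; real detection systems use far more sophisticated fingerprinting and database matching:

```python
def ngrams(text, n=3):
    # Lowercase word trigrams; real detectors use richer fingerprints.
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity_score(submission, source, n=3):
    # Jaccard overlap of trigram sets: the tool outputs a number, not a verdict.
    a, b = ngrams(submission, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

submission = "the industrial revolution transformed european society in profound ways"
source = "the industrial revolution transformed european society during the nineteenth century"

score = similarity_score(submission, source)
# The software stops here: a teacher must decide whether the overlap
# reflects quotation, common phrasing, or actual plagiarism.
print(f"Similarity: {score:.2f}")
```

The point of the sketch is exactly what the study recommends making transparent to students: the system flags similarity, and human judgment determines whether cheating has occurred.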
Mads Paludan Goddiksen recommends that schools become more transparent about the plagiarism process and clarify that human judgment always determines whether cheating has occurred. He also calls for more education on proper citation practices and academic integrity so that students better understand when a text is plagiarized.
Read the entire study below.
The synthetic knowledge crisis: AI and the hollowing out of knowledge
In a new article, Jeppe Klitgaard Stricker writes about a current issue concerning AI: the synthetic knowledge crisis. The central problem is not AI-generated content as such, but that it often appears as legitimate knowledge without undergoing the traditional mechanisms of scientific quality assurance, such as peer review, academic debate, and methodological transparency.
This creates educational challenges, where students increasingly risk confusing superficial, AI-generated texts with in-depth academic understanding.
At the same time, Jeppe points out that the research community is a co-creator of the problem, as productivity and quantifiable results are often valued higher than insight and depth. This can lead to an academic environment where the quantity of research becomes more important than its quality.
Therefore, the choice is clear: Either we accept a future in which synthetic knowledge replaces genuine insight, or we maintain the processes that ensure the value of knowledge.
Jeppe proposes that educational institutions reform teaching and research, focusing more on research-based discussions and depth than productivity.
Read the entire article on Stricker.ai – and remember to subscribe to Jeppe's newsletter for valuable insights and perspectives on artificial intelligence and education.
AI as a co-researcher: The potential of a digital scientific assistant
A new research project introduces an AI-based co-scientist designed to assist researchers in formulating and improving scientific hypotheses. The system is based on a multi-agent model that employs an iterative approach to generate, debate, and evolve hypotheses – inspired by the scientific method.
The AI co-researcher does not replace researchers but acts as an advanced assistant that can analyze large amounts of data and propose new ideas in a structured and transparent manner. Researchers interact with the system by stating their research goals in natural language and can continuously provide feedback to adjust the output.
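The generate, debate, and evolve loop described above can be sketched schematically. This is a hypothetical illustration of the iterative multi-agent pattern only, not the actual system, which runs its agents on large language models; here the agents are stand-in functions with seeded random scoring:

```python
import random

def generate(goal, rng):
    # Stand-in for an LLM agent proposing candidate hypotheses.
    return [f"{goal}: hypothesis variant {rng.randint(0, 999)}" for _ in range(4)]

def debate(hypotheses, rng):
    # Stand-in for a critic agent scoring each hypothesis; here random.
    return {h: rng.random() for h in hypotheses}

def evolve(scored, rng):
    # Keep the top-scoring half and mutate them into refined variants.
    top = sorted(scored, key=scored.get, reverse=True)[: len(scored) // 2]
    return top + [f"{h} (refined)" for h in top]

def co_scientist(goal, rounds=3, seed=42):
    # Researchers state a goal in natural language; the agents iterate on it.
    rng = random.Random(seed)
    pool = generate(goal, rng)
    for _ in range(rounds):
        pool = evolve(debate(pool, rng), rng)
    # The surviving candidates go back to the human researcher for assessment.
    return pool

print(co_scientist("Why do some bacteria resist phage infection"))
```

The design point the sketch captures is that the system proposes and ranks, while the researcher remains the one who states the goal, gives feedback, and judges the output.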
AI chatbots distort facts – even with reliable sources
A BBC study has revealed that AI chatbots continue to produce misleading answers, even when they rely on fact-checked news sources.
The researchers tested leading AI models and found that:
- 19% of AI-generated answers with BBC references contained factual errors.
- 51% of the answers had significant journalistic shortcomings, such as unclear distinctions between facts and opinions, lack of context, or unsubstantiated claims.
- 13% of the cited BBC sources were either distorted or non-existent.
Examples include misrepresenting NHS recommendations about smoking and vaping and erroneously describing police collaboration with private security companies regarding shoplifting. The AI models also provided a skewed portrayal of the Middle East conflict without basis in the cited sources.
The BBC emphasizes that these errors are serious, as a well-functioning society relies on a shared understanding of facts. The prevalence of inaccurate AI-generated information can lead to misinformation and undermine trust in the media.
AI in schools: Should it be about learning or efficiency?
Ronni Laursen writes in an op-ed on Folkeskolen.dk that it is naive to think that AI can replace teachers. He criticizes the Danish Employers' Association (DA) for suggesting that AI could free up 9,000 teaching positions, as experiences from previous digitization projects show that top-down implementation often fails.
He draws a parallel to previous attempts to standardize teaching through learning platforms, which many teachers deliberately chose not to use. He warns that AI in schools could lead to standardized teaching where automated solutions replace flexibility and pedagogical judgment.
While AI can be a helpful tool to support teaching – for example, by customizing material for the individual student or automating feedback – it should not be used to reduce the number of teachers.
AI in schools: Can circle pedagogy strengthen students' critical thinking?
In Folkeskolen, Lene Rachel Andersen writes that AI's entry into schools requires a pedagogical shift that strengthens students' ability to reflect critically rather than merely reproduce knowledge. She points out that AI has already created three challenges in education: students can use AI to complete their assignments without learning anything, they do not develop independent thinking and writing skills, and they often trust AI more than their teachers – even when AI provides incorrect answers.
Read the entire post on Folkeskolen.dk.
Public school must embrace technology – not reject it
DI Digital supports the government’s proposal for mobile-free public schools, as private smartphones can disrupt lessons and negatively affect social dynamics. However, they point out that it is also essential to integrate technology that can strengthen children’s digital skills.
Technological literacy, currently an elective in the upper grades, should be mandatory for all students. Digital teaching materials can increase motivation and create variation in lessons, and children must learn to use and understand technology.
Although the government does not plan to make technological literacy mandatory, DI Digital will continue to push for all students to acquire the skills they need to navigate a digitalized world.
Chatbots in teaching: From control to learning
In a post in Gymnasieskolen, Kasper Nissen suggests that language models like ChatGPT represent a paradigm shift in the education sector. Instead of perceiving AI as a threat to traditional teaching, he highlights how a well-considered implementation could free up teachers' and students' resources.
Rather than viewing AI as a problem because students can use it to cheat on written assignments, he proposes changing the evaluation so that homework no longer serves as a control tool but purely as an educational resource. If written assignments are no longer used for grading but for learning and feedback, the fear of cheating becomes irrelevant. The few annual, controlled exams will still reveal students who let AI do all the work.
Teachers can also gain more time for valuable feedback, because AI can take over mechanical corrections of grammar and structure while human instructors focus on interpretation, style, and nuance. In this way, AI becomes not an enemy but a tool that gives teachers and students greater freedom to focus on what truly matters.
Kasper Nissen believes that AI should be integrated into teaching with an open approach that supports learning rather than maintaining outdated evaluation methods that become increasingly difficult to enforce.
AI as a superpower: Potential, risks, and the way forward
Ida Ebbensgaard, who has just published the book Ægte, highlights in an article in Zetland how AI can enhance human creativity and problem-solving while also presenting significant challenges.
To harness AI’s potential without losing control, Ida Ebbensgaard argues that we must master the technology – not reject it. AI can be compared to electricity or the internet in its transformative power and should be seen as a tool that supports but does not replace human judgment.
Read Ida's article below and possibly order the book from Zetland.
This week's other news
This article has been machine-translated into English, so it may contain translation errors or lost nuances. The Danish version is always up to date and accurate.