
News of the week: EU's AI rules come into effect


Welcome to this week's delayed newsletter. Our main story is that the first rules of the EU AI Regulation have now come into effect. This also means that staff at educational institutions working with AI must have a solid understanding of the technology. Are all educational institutions ready for this?

It is also noteworthy that the Data Protection Agency has chosen to deactivate Microsoft's AI service Copilot due to concerns about data protection and lack of transparency. This could significantly impact how we work with AI in schools if Copilot is excluded. Jeppe Stricker writes in his latest newsletter about digital inequality and the challenges that arise from dependence on large tech companies.

Last week, we published two new articles on Viden.AI: one on the open-source language model DeepSeek R1 and one on logo design with AI learning objects. Both are presented below.

In addition, we have chosen to highlight a number of other news items in this newsletter.

We will be closed for winter vacation in week 7, but we will return with the newsletter again on Monday in week 8.

Happy reading!


The first rules of the EU AI Regulation come into effect

From February 2, 2025, companies and authorities that develop, distribute, or use AI must comply with the first rules of the EU AI Regulation. The purpose of the rules is to ensure that AI does not pose a threat to security, civil rights, or human dignity.

The regulation includes, among other things, a ban on certain AI applications, including systems that read emotional states in workplaces or educational institutions, as well as AI for social scoring of citizens, as these technologies can lead to discrimination and exclusion.

Furthermore, companies and authorities working with artificial intelligence must ensure a sufficient level of AI literacy among their personnel and any other individuals involved in operating or using AI systems on their behalf. This applies regardless of what the AI systems are used for: all relevant actors in the AI value chain must have the skills to understand and comply with the AI Regulation where it applies.

This is particularly important for teachers, who must educate students in AI and use AI systems in teaching. Teachers need to have a solid understanding of:

  1. How AI works and the possibilities and limitations of the technology.
  2. The ethical considerations and potential risks associated with the use of AI, including the risk of bias, discrimination, and violation of privacy.
  3. The legal framework for the use of AI, including the rules and requirements of the AI regulation.
  4. How to teach students to use AI in a responsible and appropriate manner.

Therefore, schools should prioritize retraining and competency development for teachers in the field of AI. This can be done, for example, through courses, workshops, or collaboration with experts. Teachers need to be equipped to navigate the new AI landscape and to be able to guide students.

At the same time, it is important that schools have clear guidelines and policies for the use of AI in teaching, which comply with the AI regulation. Schools should continuously assess and monitor their use of AI to ensure that it is responsible and does not violate students' rights.

The AI Regulation entered into force on August 1, 2024, and its requirements will be phased in gradually until 2027. Companies are advised to get an overview of their AI systems and assess which risk category their technology falls into.

Guidance on AI Skills

First Rules in the AI Regulation Come into Effect
Companies and authorities that develop, distribute, or use artificial intelligence must comply with the first rules in the AI Regulation from today.

Mandatory AI training for employees in the EU: your guide to compliance

Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators
Today, the European Commission published a set of ethical guidelines for educators on the use of AI and data in education.

The Data Protection Agency warns against Microsoft Copilot due to GDPR challenges

The Data Protection Agency has chosen to deactivate Microsoft's AI service Copilot throughout the organization due to concerns about data protection and lack of transparency. Copilot collects personal data, but Microsoft does not provide sufficient insight into how this data is processed and stored.

This makes it difficult to assess whether the service complies with the GDPR. Issues with Microsoft's handling of personal data are not new: the City of Copenhagen has previously acknowledged that it continues with unlawful IT contracts because the systems are crucial for its welfare services.

In the Netherlands, educational institutions have advised against the use of Copilot for the same reasons, while the Norwegian data protection authority has highlighted security risks associated with the service's access to data in the Microsoft 365 environment. The Data Protection Agency in Denmark agrees with Norway that Copilot, as currently designed, cannot be used by public authorities without significant uncertainties.

Organizations are therefore encouraged to gain full oversight of data processing before implementing the system – or refrain from using it.

Data Protection Agency warns against Microsoft AI: We have deactivated it ourselves | Version2
What happens to the data when Microsoft's AI product Copilot is used is so opaque that the Data Protection Agency cannot find a legal way to use it. South of Denmark, its use is discouraged altogether.

AI, data security, and democratic challenges in the education sector

In his newsletter, Jeppe Stricker writes about how AI creates complex democratic dilemmas in the education sector. The Data Protection Agency has refused to use Microsoft Copilot due to concerns about data security, while the Chinese AI provider DeepSeek has launched a free language model that raises questions about transparency, censorship, and economic accessibility.

These issues could lead to a new form of digital inequality, where only wealthy institutions have access to transparent and secure AI solutions. Without a comprehensive political strategy, institutions are left to navigate a complex technological landscape without the necessary support.

Does artificial intelligence create new democratic problems?
The Data Protection Agency says no to Copilot, and Chinese AI technology challenges fundamental understandings of transparency, equal access, and independence as educational core values. Within just a single week, two significant events have shown how the education sector is caught at the crossroads between technological possibilities and democratic values: the Data Protection Agency has refused to use Microsoft Copilot out of concern for data security, and the Chinese tech provider DeepSeek has launched a freely available language model.

DeepSeek: Potentials and challenges in teaching

On Viden AI, we have published an article about DeepSeek R1, an open-source language model from Chinese DeepSeek, which has quickly gained recognition for matching the level of ChatGPT. The model has generated substantial interest, partly because it can be downloaded and customized freely, offering schools and educators new opportunities to work with AI in a controlled environment.
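For readers who want to see what "working with AI in a controlled environment" could look like in practice, here is a minimal sketch of running a distilled DeepSeek R1 model locally, so prompts and answers never leave the machine. It assumes the Hugging Face transformers library and the publicly distributed DeepSeek-R1-Distill-Qwen-1.5B checkpoint; the model choice, prompt, and settings are illustrative assumptions, not a setup prescribed in the article.

```python
# Minimal sketch: run a distilled DeepSeek R1 model locally so that prompts and
# answers stay on the machine. Assumes the Hugging Face `transformers` library
# and the DeepSeek-R1-Distill-Qwen-1.5B checkpoint; all names and parameters
# are illustrative, not a recommendation from the article.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B",
    device_map="auto",  # use a GPU if one is available
)

prompt = "Explain, in two sentences, what a language model is."
result = generator(prompt, max_new_tokens=200, do_sample=False)
print(result[0]["generated_text"])
```

Because the weights run locally, no student data has to be sent to DeepSeek's servers, which is precisely the concern raised below.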

However, DeepSeek collects large amounts of user data, and since these data are stored on servers in China, this raises serious GDPR challenges. The Italian data protection authority has already asked DeepSeek to account for its handling of personal data, which underscores that the service, accessed through its website or app, is not suitable for use in teaching.

💡
If you received this article as an email last week, you probably noticed that there were issues with Danish characters. This was due to a technical malfunction in our blog system, and we apologize for the error.
DeepSeek: Potentials and challenges in teaching
DeepSeek R1, an open-source language model from Chinese DeepSeek, has quickly gained popularity and recognition for matching the level of ChatGPT.

Logo design with AI learning objects

On Viden.AI, we have published an article on the subject Communication and IT, in which we have developed and tested AI learning objects as part of the teaching. Instead of using ChatGPT, we have built small AI-based learning resources that support specific parts of the teaching process and assist students in their creative work.

The idea is to use AI when it makes sense and adds value, and to utilize very targeted and specialized tools in teaching.
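To make the idea of a narrowly targeted learning object concrete, here is a minimal sketch of how such a tool might be wired up: a fixed, subject-specific system prompt placed in front of a general language model, so that students only meet an assistant scoped to one task. The OpenAI client is used as an example backend; the model name, the system prompt, and the whole logo-feedback scenario are illustrative assumptions, not the actual learning objects described in the article.

```python
# Sketch of a narrowly scoped "AI learning object": a fixed system prompt that
# constrains a general model to one task in one subject. The OpenAI client is
# only an example backend; model name and prompt are illustrative, and this is
# not the implementation used at Odense Technical Gymnasium.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SYSTEM_PROMPT = (
    "You are a feedback assistant in the subject Communication and IT. "
    "Students describe a logo draft, and you reply with exactly three concrete "
    "suggestions about shape, colour, and target audience. Do not produce a "
    "finished logo, and do not answer questions outside this task."
)

def logo_feedback(student_description: str) -> str:
    """Return constrained feedback on a student's logo description."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_description},
        ],
    )
    return response.choices[0].message.content

print(logo_feedback("A green leaf combined with the letter V for a bicycle shop."))
```

The constraint is the point: the tool supports one well-defined step of the creative process instead of acting as an all-purpose chatbot.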

Read the article here:

Logo design with AI learning objects
In this article, I explore how artificial intelligence can be used as a practical tool in teaching. Based on two specific AI learning objects from Odense Technical Gymnasium, I demonstrate how the technology can support and streamline selected parts of the teaching process.

Videnskab.dk: How to use ChatGPT without losing learning outcomes

Videnskab.dk has written an article on how students can use ChatGPT without compromising their learning. AI can be a valuable tool, but if used uncritically, students risk losing the opportunity to develop a deep understanding.

Read the article here:

How to use ChatGPT without getting dumber
Learning researchers' recommendations for those of you who use ChatGPT for assignments in high school or at university.

AI is changing demand in the labor market

A new study shows that AI has a significant impact on the labor market. In certain areas, such as translation and copywriting, demand has dropped markedly since the launch of ChatGPT, while demand is rising for skills in machine learning, chatbot development, and creative content.

Researchers from, among others, the University of Copenhagen have analyzed over three million freelance jobs and concluded that AI does not eliminate jobs in general, but redistributes them. Short-term and routine tasks are the most vulnerable, while more complex and creative tasks are experiencing growth.

The study indicates that the labor market requires greater flexibility and that the education system should focus on broad competencies such as adaptability and curiosity.

Artificial intelligence creates new winners and losers in the labor market | University of Copenhagen - Faculty of Social Sciences
Demand for a number of professional competencies changed significantly when ChatGPT was launched at the end of 2022, according to new international studies. But the picture is complex. While there were fewer jobs in simple text writing and translation, demand increased for other qualifications.

OpenAI launches Deep Research

OpenAI has introduced a new feature for ChatGPT, called Deep Research, which can perform complex, multi-step research tasks autonomously. The feature plans and conducts searches, adapts to real-time information, and presents results with source references in a sidebar.

Users can ask questions using text, images, and files like PDFs and spreadsheets. The responses, which can take 5-30 minutes to generate, will also soon be able to include embedded images and diagrams.
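OpenAI has not published the internals of Deep Research, so the sketch below is only a conceptual toy of what an autonomous, multi-step research loop involves: plan a query, search, collect evidence and sources, refine, and repeat. The stubbed search function and every name in it are hypothetical; a real system would call a web-search API and use a language model to write the cited report.

```python
# Toy illustration of a "plan, search, collect, refine" loop. This is NOT
# OpenAI's Deep Research implementation; `search` is a stub standing in for a
# real web-search call, and a model would normally write the final cited report.
def search(query: str) -> list[dict]:
    """Stand-in for a web search; returns canned results."""
    return [{"url": "https://example.org", "snippet": f"Placeholder finding for '{query}'."}]

def research(question: str, max_steps: int = 3) -> dict:
    notes, sources = [], []
    query = question
    for step in range(max_steps):
        for hit in search(query):          # run a search and collect evidence
            notes.append(hit["snippet"])
            sources.append(hit["url"])
        query = f"{question} (follow-up {step + 1})"  # refine based on what was found
    return {"question": question, "notes": notes, "sources": sorted(set(sources))}

print(research("How are schools preparing for the EU AI Regulation?"))
```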

The function is first being launched for Pro users with up to 100 requests per month, while Plus, Team, and Enterprise users will have limited access.

ChatGPT's agent can now do deep research for you
More accurate, and more resource intensive.

Study: Danish researchers are divided in their use of ChatGPT

A new study from Aarhus University shows that Danish researchers are divided in their use and perception of ChatGPT. While some see AI as a revolutionary research tool, others find it limited or problematic.

Historian Benjamin Breen has tested ChatGPT for transcription, translation, and iconographic analysis of historical sources – with surprisingly good results. However, he points out that AI primarily produces analyses at the master's or PhD level and lacks real innovation.

In biomedicine, AI has made breakthroughs in protein research, while humanities researchers mainly use ChatGPT for text editing and formatting. Archaeologist David Stott is skeptical and believes that AI can only reproduce existing knowledge, which limits its applicability in his field.

The study reveals that 40% of researchers use ChatGPT as a sparring partner for hypothesis development, while only 8.5% use it to write abstracts. Literature professor Mads Rosendahl Thomsen sees potential in AI as a reflection tool but emphasizes the importance of transparency in scientific work.

Study: This is how Danish researchers use ChatGPT
Danish researchers' use of generative artificial intelligence has been mapped. Experiences range from "useless" to "revolutionary."

P1 Debate: A Danish ChatGPT?

In P1 Debate, the discussion revolves around whether Denmark should invest in the development of its own AI chatbots or leave the market to large tech giants and foreign technologies.

Panel:

  • Lisbeth Bech-Nielsen, digitalization spokesperson, SF.
  • Torben Blach, project manager, Alexandra Institute.
  • Dina Raabjerg, digitalization spokesperson, Conservatives.
  • Martin Ågerup, debater and economist, former director of the think tank Cepos.
  • Anders Søgaard, professor, University of Copenhagen, expert in artificial intelligence and language.
  • Birgitte Vind, digitalization spokesperson, Social Democrats.

Host: Morten Runge.

Listen to the debate here: https://www.dr.dk/lyd/p1/p1-debat/p1-debat-2025/p1-debat-en-dansk-chatgpt-11162501055

AI author debuts: What does it mean for literature?

The P1 program K-Live discusses the striking debut of the AI author Rosy Lett, who has written a novel without human involvement. This raises the question: how much human influence is actually required to create good literature?

The panel examines whether AI can replace human creativity, and how it will affect the future of literature.

Listen here: https://www.dr.dk/lyd/p1/k-live/k-live-2025/k-live-med-parnasset-en-kunstig-litteraer-debut-11032501053

Other news of the week

Developer of SchoolGPT "completely disagrees" with minister: "You will never be able to stop a development like this"
Read more here.
Artificial intelligence is a bomb under our work environment
Goodbye literature, goodbye knowledge • POV International
The desire to read is disappearing, not only among children and young people but throughout society, where books and knowledge are slowly losing their status. If we do not regain respect for literature and education, we risk weakening both our critical sense and our shared understanding of the world.
DeepSeek Debates: Chinese Leadership On Cost, True Training Cost, Closed Model Margin Impacts
The DeepSeek Narrative Takes the World by Storm DeepSeek took the world by storm. For the last week, DeepSeek has been the only topic that anyone in the world wants to talk about. As it currently s…
DeepSeek leaked sensitive information: User chat history and passwords found online
The hyped Chinese chatbot DeepSeek, which has caused several stocks to fall, has exposed a large amount of unsecured and sensitive data on the internet.
AI in the Classroom: TCAPS Looks to the Future of Learning
Artificial intelligence (AI) is poised to reshape many aspects of modern life, including education. Traverse City Area Public Schools (TCAPS) leaders discussed the emerging technology this week, including how teachers and students are using AI now and might going forward, privacy safeguards and other concerns, and the risk-reward balance of a tool supporters believe will …
UK’s teacherless AI classroom: Innovation or risky experiment?
The U.K.’s trial of a teacherless AI classroom stirred debate over AI’s role in education, with experts questioning its long-term impact and…
New Arizona AI charter school has been rejected in 4 states
An AI-powered virtual charter school that was approved in Arizona has been rejected in Arkansas, Utah, North Carolina, and Pennsylvania.
When it comes to AI, invest in education and skills to remain relevant
The personal finance answer to artificial intelligence is to invest in your education, skills, and knowledge.
DeepSeek will help you make a bomb and hack government databases - 9to5Mac
Tests by security researchers revealed that DeepSeek failed literally every single safeguard requirement for a generative AI system, being fooled…
Artificial intelligence can contribute to inclusion in Denmark's public schools
Letter to the editor: Inclusion in our public schools is a beautiful goal – a school where all students, regardless of abilities or challenges, feel welcome. But reality is often more complicated. Resource shortages, overloaded teachers, and social challenges make it a complex task. Could artificial intellige…
Debate: Artificial intelligence and inclusion in Svendborg's public school
Svendborg: Inclusion in Svendborg's public school is a beautiful goal – a school where all students, regardless of abilities or challenges, feel welcome. But reality is often more complicated. Resource shortages, overloaded teachers, and social challenges make it a complex task. Could artificial inte…

This article has been machine-translated into English. Therefore, there may be nuances or errors in the content. The Danish version is always up-to-date and accurate.