News of the week: EU's AI regulations come into effect.

Welcome to this week's delayed newsletter. Our main story is that the first rules of the EU AI regulation have now come into effect. This also means that employees at educational institutions working with AI need to have a solid understanding of the technology. Are all educational institutions ready for that?

It is also notable that the Danish Data Protection Agency has chosen to deactivate Microsoft's AI service Copilot internally due to concerns over data protection and a lack of transparency. This could have significant implications for how we work with AI in schools if Copilot is ruled out. Jeppe Stricker writes in his latest newsletter about digital inequality and the challenges that come with dependence on large tech companies.

Last week, we published two new articles on Viden.AI, both of which are covered below.

In addition, this newsletter highlights a number of news stories from the past week.

We will be closed for winter vacation in week 7 but will return with the newsletter again on Monday in week 8.

Happy reading with this newsletter!


The first rules of the EU AI regulation come into effect

As of February 2, 2025, companies and authorities that develop, distribute, or use AI must comply with the first rules of the EU AI regulation. The purpose of these rules is to ensure that AI does not pose a threat to security, civil rights, or human dignity.

The regulation includes, among other things, a ban on certain AI applications, including systems that read emotional states in workplaces or educational institutions, as well as AI for social scoring of citizens, as these technologies can lead to discrimination and exclusion.

Additionally, companies and authorities working with artificial intelligence must ensure a sufficient level of AI skills among their staff and any other individuals involved in the operation or use of AI systems on their behalf. This applies regardless of what the AI systems are used for. All relevant actors in the AI value chain must have the skills to understand and comply with the AI regulation where it applies.

This is particularly important for teachers who must teach students about AI and use AI systems in their teaching. Teachers need to have a solid understanding of:

  1. How AI works and what opportunities and limitations the technology has.
  2. The ethical considerations and potential risks associated with the use of AI, including the risk of bias, discrimination, and invasion of privacy.
  3. The legal frameworks for the use of AI, including the rules and requirements of the AI regulation.
  4. How to teach students to use AI responsibly and appropriately.

Schools should therefore prioritize training and professional development for teachers in the field of AI. This could be through courses, workshops, or collaboration with experts. Teachers must be equipped to navigate the new AI landscape and guide their students.

At the same time, it is important for schools to have clear guidelines and policies for the use of AI in teaching that are in accordance with the AI regulation. Schools should continuously assess and monitor their use of AI to ensure that it is responsible and does not violate students' rights.

The AI regulation entered into force on August 1, 2024, and its rules will be phased in gradually until 2027. Companies are advised to create an overview of their AI systems and assess which risk category their technology falls into.

Guidance on AI skills
First rules in the AI regulation come into effect
Companies and authorities that develop, distribute, or use artificial intelligence must comply with the first rules of the AI regulation from today.
Mandatory AI training for employees in the EU: your guide to compliance
Ethical guidelines on the use of artificial intelligence and data in teaching and learning for educators
Today, the European Commission published a set of ethical guidelines for educators on the use of AI and data in education.

The Data Protection Agency warns against Microsoft Copilot due to GDPR challenges

The Danish Data Protection Agency has chosen to deactivate Microsoft's AI service Copilot throughout its own organization due to concerns over data protection and a lack of transparency. Copilot collects personal data, but Microsoft does not provide sufficient insight into how this data is processed and stored.

This makes it difficult to assess whether the service complies with the GDPR. Problems with Microsoft's handling of personal data are not new: the City of Copenhagen has previously acknowledged that it continues to operate under unlawful IT contracts, as its systems are crucial for delivering welfare benefits.

In the Netherlands, educational institutions have advised against using Copilot for the same reasons, while the Norwegian Data Protection Authority has pointed out the security risks associated with the service's access to data in the Microsoft 365 environment. The Data Protection Agency in Denmark agrees with Norway that Copilot, as currently designed, cannot be used by public authorities without significant uncertainties.

Organizations are therefore encouraged to gain a full overview of data processing before implementing the system – or refrain from using it.

The Data Protection Agency warns against Microsoft AI: We have also turned it off | Version2
What happens to data when using Microsoft's AI product Copilot is so opaque that the Data Protection Agency cannot find a legal way to use it themselves. South of Denmark, its use is advised against entirely.

AI, data security, and democratic challenges in the education sector

In his newsletter, Jeppe Stricker writes about how AI creates complex democratic dilemmas in the education sector. The Data Protection Agency has refused to use Microsoft Copilot due to concerns over data security, while the Chinese AI provider DeepSeek has launched a free language model that raises questions about transparency, censorship, and economic accessibility.

These issues could lead to a new form of digital inequality, where only wealthy institutions have access to transparent and secure AI solutions. Without a comprehensive political strategy, institutions are left to navigate a complex technological landscape without the necessary support.

Is artificial intelligence creating new democratic issues?
The Data Protection Agency says no thanks to Copilot, and Chinese AI technology challenges fundamental understandings of transparency, equal access, and independence as educational core values. In just one week, two significant events have shown how the education sector is caught at the intersection of technological possibilities and democratic values: the Data Protection Agency has refused to use Microsoft Copilot for the sake of data security, and the Chinese tech provider DeepSeek has launched a free language model.

DeepSeek: Potential and challenges in education

At Viden.AI, we have published an article about DeepSeek R1, an open-source language model from the Chinese company DeepSeek, which has quickly gained recognition for matching the level of ChatGPT. The model has garnered significant interest, among other reasons because it can be downloaded and customized freely, providing schools and educators with new opportunities for working with AI in a controlled environment.

However, DeepSeek collects large amounts of user data, and because that data is stored on servers in China, this raises serious GDPR challenges. The Italian data protection authority has already asked DeepSeek to account for its handling of personal data, which underscores that the service, via its website or app, is unsuitable for use in education.

💡
If you received this article as an email last week, you probably noticed that there were issues with Danish characters. This was due to a technical error in our blog system, and we apologize for the mistake.
DeepSeek: Potential and challenges in education
DeepSeek R1, an open-source language model from the Chinese company DeepSeek, has quickly gained popularity and recognition for matching ChatGPT’s level.

Logo design with AI learning objects

At Viden.AI, we have published an article on the subject Communication and IT, in which we have developed and tested AI learning objects as part of our teaching. Instead of using ChatGPT, we have developed small AI-based learning resources that support specific parts of the teaching process and assist students in their creative work.

The idea is to use AI when it makes sense and adds value, and to utilize very specific and specialized tools in teaching.

Read the article here:

Logo design with AI learning objects
In this article, I explore how artificial intelligence can be used as a practical tool in education. Based on two concrete AI learning objects from Odense Technical Gymnasium, I demonstrate how the technology can support and streamline selected parts of the teaching process.

Videnskab.dk: How to use ChatGPT without losing educational value

Videnskab.dk has written an article on how students can use ChatGPT without compromising their learning. AI can be a useful tool, but if used uncritically, students risk losing the opportunity to develop a deep understanding.

Read the article here:

How to use ChatGPT without getting dumber
Learning research recommendations for those using ChatGPT for assignments in high school or university.

AI changes job demand in the labor market

A new study shows that AI has a significant impact on the labor market. In certain areas, such as translation and copywriting, demand has fallen sharply since the launch of ChatGPT. At the same time, there is increasing demand for skills in machine learning, chatbot development, and creative content.

Researchers from, among others, the University of Copenhagen have analyzed over three million freelance jobs and concluded that AI does not remove jobs in general, but redistributes them. Short-term and routine tasks are most at risk, while more complex and creative tasks are experiencing growth.

The study suggests that the labor market requires greater flexibility and that the education system should focus on broad competencies such as adaptability and curiosity.

Artificial intelligence creates new winners and losers in the labor market | University of Copenhagen - Faculty of Social Sciences
The demand for a range of professional skills changed significantly when ChatGPT was launched at the end of 2022, according to a new international study. But the picture is complex. While there were fewer jobs in simple copywriting and translation, demand for other qualifications increased.

OpenAI launches Deep Research

OpenAI has introduced a new feature for ChatGPT called Deep Research, which can perform complex, multi-step research tasks autonomously. The feature plans and executes searches, adapts to real-time information, and presents results with citations in a sidebar.

Users can ask questions using text, images, and files, such as PDFs and spreadsheets. The responses, which can take 5-30 minutes to generate, will also be able to include embedded images and diagrams in the future.

The feature will launch first for Pro users with up to 100 queries per month, while Plus, Team, and Enterprise users will have limited access.

ChatGPT’s agent can now do deep research for you
More accurate, and more resource intensive.

Study: Danish researchers are divided in their use of ChatGPT

A new study from Aarhus University shows that Danish researchers are divided in their use and perception of ChatGPT. While some see AI as a revolutionary research tool, others find it limited or problematic.

The historian Benjamin Breen has tested ChatGPT for transcription, translation, and iconographic analysis of historical sources – with surprisingly good results. However, he points out that AI primarily produces analyses at the master's or PhD level and lacks genuine innovation.

In biomedicine, AI has made breakthroughs in protein research, while humanities researchers mainly use ChatGPT for text editing and formatting. Archaeologist David Stott is skeptical and believes AI can only reproduce existing knowledge, limiting its usefulness in his field.

The study shows that 40% of researchers use ChatGPT as a sounding board for hypothesis development, while only 8.5% use it to write abstracts. Literature professor Mads Rosendahl Thomsen sees potential in AI as a reflection tool but emphasizes the importance of transparency in academic work.

Study: How Danish researchers use ChatGPT
The use of generative artificial intelligence by Danish researchers has been mapped. Experiences range from "useless" to "revolutionary."

P1 Debate: A Danish ChatGPT?

P1 Debat discusses whether Denmark should invest in developing its own AI chatbots or leave the market to the major tech giants and foreign technologies.

Panel:

  • Lisbeth Bech-Nielsen, digitalization spokesperson, SF.
  • Torben Blach, project manager, Alexandra Institute.
  • Dina Raabjerg, digitalization spokesperson, Conservatives.
  • Martin Ågerup, debater and economist, former director of the think tank Cepos.
  • Anders Søgaard, professor, University of Copenhagen, expert in artificial intelligence and language.
  • Birgitte Vind, digitalization spokesperson, Social Democrats.

Host: Morten Runge.

Listen to the debate here: https://www.dr.dk/lyd/p1/p1-debat/p1-debat-2025/p1-debat-en-dansk-chatgpt-11162501055

AI author debuts: What does it mean for literature?

The P1 program K-Live discusses the remarkable debut of AI author Rosy Lett, who has written a novel without human intervention. This raises the question: how much human influence is actually required to create good literature?

The panel explores whether AI can replace human creativity and how this will affect the future of literature.

Listen here: https://www.dr.dk/lyd/p1/k-live/k-live-2025/k-live-med-parnasset-en-kunstig-litteraer-debut-11032501053

This week's other news

The developer of SkoleGPT "completely disagrees" with the minister: "You will never be able to stop a development in this way"
Read more here.
Artificial intelligence is a bomb under our working environment
Goodbye literature, goodbye knowledge • POV International
The love for reading is disappearing, not only among children and young people but in society as a whole, where books and knowledge are slowly losing their status. If we do not regain respect for literature and education, we risk weakening both our critical sense and our shared understanding of the world.
DeepSeek Debates: Chinese Leadership On Cost, True Training Cost, Closed Model Margin Impacts
The DeepSeek Narrative Takes the World by Storm DeepSeek took the world by storm. For the last week, DeepSeek has been the only topic that anyone in the world wants to talk about. As it currently s…
DeepSeek leaked sensitive information: User chat histories and passwords found online
The hyped Chinese chatbot DeepSeek, which has caused several stocks to plummet, has exposed a large amount of unsecured and sensitive data on the internet.
AI in the Classroom: TCAPS Looks to the Future of Learning
Artificial intelligence (AI) is poised to reshape many aspects of modern life, including education. Traverse City Area Public Schools (TCAPS) leaders discussed the emerging technology this week, including how teachers and students are using AI now and might going forward, privacy safeguards and other concerns, and the risk-reward balance of a tool supporters believe will …
UK’s teacherless AI classroom: Innovation or risky experiment?
The U.K.’s trial of a teacherless AI classroom stirred debate over AI’s role in education, with experts questioning its long-term impact and…
New Arizona AI charter school has been rejected in 4 states
An AI-powered virtual charter school that was approved in Arizona has been rejected in Arkansas, Utah, North Carolina, and Pennsylvania.
When it comes to AI, invest in education and skills to remain relevant
The personal finance answer to artificial intelligence is to invest in your education, skills, and knowledge.
DeepSeek will help you make a bomb and hack government databases - 9to5Mac
Tests by security researchers revealed that DeepSeek failed literally every single safeguard requirement for a generative AI system, being fooled…
Artificial intelligence can contribute to inclusion in Denmark's public schools
Letter to the editor: Inclusion in our public school is a beautiful goal – a school where all students, regardless of abilities or challenges, feel welcome. But the reality is often more complicated. Resource shortages, overloaded teachers, and social challenges make it a complex task. Could artificial intellige…
Debate: Artificial intelligence and inclusion in Svendborg's public school
Svendborg: Inclusion in Svendborg's public school is a beautiful goal – a school where all students, regardless of abilities or challenges, feel welcome. But the reality is often more complicated. Resource shortages, overloaded teachers, and social challenges make it a complex task. Could artificial inte…

This article has been machine-translated into English. Therefore, there may be nuances or errors in the content. The Danish version is always up-to-date and accurate.