Matomo

News of the week: The EU adopts the AI ​​regulation

· 9 min read
News of the week: The EU adopts the AI ​​regulation

There has been little news about artificial intelligence and education in the past week; therefore, this is a shorter newsletter than usual. What is new, however, is that the AI ​​Regulation has been adopted and sets the standard for the safe and ethical use of artificial intelligence.

At Viden.AI, we bring much news about how artificial intelligence is developing and how technology can be used in education. Although we find many good examples that make sense for learning, we also want schools and teachers to take a critical view of the technology. Therefore, this week, we focus on two articles that specifically argue against using artificial intelligence in education. Ben Williamson sets out 21 arguments against including artificial intelligence in teaching, while Thomas Aastrup Rømer criticizes Aarhus University's new direction when it is allowed for exams.

In addition to this week's news, we have also found a scientific article that reveals how language models show signs of hidden racism and how they reinforce prejudice. In this section, we have also included a video about a new program called Devin, which the company behind it calls an autonomous AI software engineer.


The EU adopts the AI Regulation

On Wednesday, 13 March 2024, the European Parliament approved the AI ​​Regulation, which introduces significant security measures around the use of artificial intelligence in Europe. The law, negotiated with the member states in December 2023, was adopted by 523 votes in favor, 46 against, and 49 abstentions. The law is expected to be finally adopted before the end of the legislative period. It will enter into force 20 days after publication in the EU's Official Gazette, with complete application 24 months after entry.

We have previously written about AI ​​regulation, and otherwise, we have collected some newer sources below:

What does EU Artificial Intelligence regulation mean for AI in education?
A look at how the EU AI Act potentially affects AI’s use in educational settings
The EU AI Act passed — here’s what comes next
The EU Act will come into force in 2025.
MEPs approve world’s first comprehensive AI law
The EU’s AI Act seeks to counter the risks associated with the rapidly growing AI sector.
Artificial Intelligence Act: MEPs adopt landmark law | News | European Parliament
On Wednesday, Parliament approved the Artificial Intelligence Act that ensures safety and compliance with fundamental rights, while boosting innovation.

AI in education – a public problem

Ben Williamson, Chancellor's Fellow at the University of Edinburgh and author of Big Data in Education: The Digital Future of Learning, Policy, and Practice, has written a blog post with 21 critical points about AI in education. In the article, Williamson highlights concerns that the concept of artificial intelligence is vaguely defined and that it helps to mystify the actual functions and possibilities of the technology. At the same time, the ongoing hype is helping to create an unrealistic notion of AI's potential, also helping to oversell the technology in education.

According to Williamson, there is a lack of evidence for the benefits that AI is supposed to contribute to teaching, but also knowledge of the problems that can arise around personal data, bias, discrimination, and environmental consequences. Williamson advocates that the challenges and risks associated with AI be discussed and addressed responsibly.

We would recommend reading all 21 points, too. These are interesting considerations that everyone working with AI and education should consider. Read the article below:

AI in education is a public problem

Artificial Intelligence creates debate at Aarhus University

Some experts, including Thomas Aastrup Rømer, author and expert in learning theory, have expressed concern about the introduction of artificial intelligence at Aarhus University.

Previously, Aarhus University had banned the use of artificial intelligence for exams, but it has subsequently changed its position and sees the technology as a tool to improve learning processes and democratize education. At the same time, Niels Lehmann, vice-dean for education, argues that competencies within artificial intelligence are in demand on the labor market and that this can help raise students' skills. Thomas Aastrup Rømer criticizes the university's approach sharply, believing it goes beyond the student's in-depth learning and education.

He warns that the development may lead to students on university courses only working superficially with the material. This development signals a collapse of university philosophy, where the educational environment becomes a kindergarten. He is particularly concerned that this phasing in of new technologies does not consider the long-term consequences for students and society.

Read the entire post on Berlingske (behind a paywall and in danish):

Kunstig intelligens til eksamen skal hjælpe de svageste studerende: Universitetet bliver en »børnehave«, advarer tidligere lektor
Læs mere her.

News of the week

Eksperters spåkugle: Her er AI i kommunikationsfaget om 10 år
Kunstig intelligens har taget k-scenen med storm og skabt en bølge af forventning, men også udløst bekymringer. Så hvordan ser fremtiden egentlig ud? Kommagasinet har bedt fire eksperter beskrive deres vildeste AI-scenarier.
Inddrag medarbejderne i AI-beslutninger
Medarbejderne skal efteruddannes i brugen af AI og inddrages i beslutninger om AI-værktøjer. Det vil øge produktiviteten og trivslen, siger Louise Harder Fischer, lektor ved IT-Universitetet.
Iværksætter: I en digital tidsalder skal empati være et grundfag på alle uddannelser
For at finde vej i en verden med techgiganter, kunstig intelligens og sociale platforme er det på tide at sætte empati på dagsordenen. Skoleelever skal for eksempel lære at skelne mellem følelser og empati, mener dagens kronikør

Bag betalingsmur

Artificial Intelligence In Education: Teachers’ Opinions On AI In The Classroom
In recent years, the meteoric rise of artificial intelligence (AI) has sent shockwaves through society on both economic and cultural levels. Seemingly poised to become as ubiquitous as email, this rapidly evolving technology is transforming many aspects of daily life—including how we teach and learn
KI-forsker: Derfor er arbeidet med norske språkmodeller så viktig - Digital Norway
Denne våren lanseres flere norske alternativer til språkmodellene GPT-4 og Googles Gemini. Vi tok tak i én av forskerne bak for å høre hvorfor.
Debat: Vi smadrer klimaet med kunstige kattevideoer
Vi bør skrue ned for de populære AI-tjenester, som bruger enorme mængder af vand og energi på at producere underholdning. AI skal bruges der, hvor det optimerer brugen af sparsomme ressourcer.
AI Act kan blive en snubletråd for Europa: “Man kan ikke regulere sig til en førsteplads i det globale AI-kapløb” | Version2
Den nye AI-forordning fra EU er en prøveballon, som både kan give europæiske borgere et boost på sin retssikkerhed, men også bringe unionens virksomheder bagud i AI-kapløbet. Her er tredje afsnit om de EU-lovpakker, du skal forholde dig til i din virksomhed.
Det gør mig ked af det, at jeg ikke stoler på mine elever
rektor oensker kunstig intelligens i undervisningen
Nyheder fra TV 2/Bornholm.
Generativ AI fumler med jura på en måde, der viser at revolutionen ikke er nært forestående | Radar
Forskere fra Stanford University viser med forsøg, at flere af de mest udbredte AI-sprogmodeller vrøvler i stor stil, når de bliver forelagt juridiske spørgsmål.

Scientific articles

Below, we select articles or tools with a slightly more scientific perspective. These are articles that we read ourselves to stay up-to-date, but we also know that most people find them uninteresting.

AI reinforces racial stereotypes

Researchers have found that ChatGPT and Google Gemini exhibit covert racism, especially if the person speaks an ethnolect such as African American Vernacular English (AAVE). An article from Cornell University highlights that AI systems can reinforce the prejudices inherent in the language models when used in the company's recruitment. For example, the researchers found that people who spoke AAVE, according to the language models, were set to have lower intelligence and be less suitable for work.

The models were significantly more likely to describe AAVE speakers as “stupid” and “lazy”, assigning them to lower-paying jobs.

The article focuses exclusively on employee recruitment, but this is worth considering when using language models in teaching. We can unconsciously reinforce prejudices and stereotypes if we are not aware of them while being critical of the output the model brings.

As AI tools get smarter, they’re growing more covertly racist, experts find
ChatGPT and Gemini discriminate against those who speak African American Vernacular English, report shows
Dialect prejudice predicts AI decisions about people’s character, employability, and criminality
Hundreds of millions of people now interact with language models, with uses ranging from serving as a writing aid to informing hiring decisions. Yet these language models are known to perpetuate systematic racial prejudices, making their judgments biased in problematic ways about groups like African…

Devin - the first AI software engineer

The company Cognition has developed Devin, which can plan and solve complex tasks autonomously. At the same time, the program can make decisions based on experience and improve itself over time. Devin emulates a software development environment and has access to write code and search the web.

If you want to test Devin, you can sign up for their waiting list below:

Request Access to Devin
To start using Devin for engineering work, please request access.
Introducing Devin, the first AI software engineer