There has been little news about artificial intelligence and education in the past week, so this is a shorter newsletter than usual. What is new, however, is that the AI Regulation has been adopted, setting the standard for the safe and ethical use of artificial intelligence.
At Viden.AI, we share a lot of news about how artificial intelligence is developing and how the technology can be used in education. Although we find many good examples that make sense for learning, we also want schools and teachers to take a critical view of the technology. Therefore, this week, we focus on two articles that specifically argue against using artificial intelligence in education. Ben Williamson sets out 21 arguments against including artificial intelligence in teaching, while Thomas Aastrup Rømer criticizes Aarhus University's new decision to allow artificial intelligence at exams.
In addition to this week's news, we have found a scientific article revealing how language models show signs of covert racism and reinforce prejudice. In this section, we have also included a video about a new program called Devin, which the company behind it calls an autonomous AI software engineer.
The EU adopts the AI Regulation
On Wednesday, 13 March 2024, the European Parliament approved the AI Regulation, which introduces significant security measures around the use of artificial intelligence in Europe. The law, negotiated with the member states in December 2023, was adopted by 523 votes in favor, 46 against, and 49 abstentions. The law is expected to be finally adopted before the end of the legislative period. It will enter into force 20 days after publication in the Official Journal of the EU and will apply in full 24 months after entry into force.
We have previously written about the AI Regulation, and we have collected some newer sources below:
AI in education – a public problem
Ben Williamson, Chancellor's Fellow at the University of Edinburgh and author of Big Data in Education: The Digital Future of Learning, Policy, and Practice, has written a blog post with 21 critical points about AI in education. In the post, Williamson argues that the concept of artificial intelligence is vaguely defined and that this helps to mystify the technology's actual functions and possibilities. At the same time, the ongoing hype creates an unrealistic notion of AI's potential and helps oversell the technology in education.
According to Williamson, there is a lack of evidence for the benefits AI is supposed to bring to teaching, as well as a lack of knowledge about the problems that can arise around personal data, bias, discrimination, and environmental consequences. Williamson argues that the challenges and risks associated with AI must be discussed and addressed responsibly.
We recommend reading all 21 points; they raise considerations that everyone working with AI and education should reflect on. Read the article below:
Artificial Intelligence creates debate at Aarhus University
Some experts, including Thomas Aastrup Rømer, author and expert in learning theory, have expressed concern about the introduction of artificial intelligence at Aarhus University.
Previously, Aarhus University had banned the use of artificial intelligence for exams, but it has since changed its position and now sees the technology as a tool to improve learning processes and democratize education. Niels Lehmann, vice-dean for education, argues that competencies in artificial intelligence are in demand on the labor market and that this can help raise students' skills. Thomas Aastrup Rømer sharply criticizes the university's approach, arguing that it comes at the expense of students' in-depth learning and education.
He warns that this development may lead students to work only superficially with the material, and he sees it as a collapse of the university's philosophy, reducing the educational environment to a kindergarten. He is particularly concerned that the phasing in of new technologies does not consider the long-term consequences for students and society.
Read the entire post in Berlingske (behind a paywall and in Danish):
News of the week
Scientific articles
Below, we highlight articles or tools with a slightly more scientific perspective. These are articles we read ourselves to stay up to date, though we know they are not for everyone.
AI reinforces racial stereotypes
Researchers have found that ChatGPT and Google Gemini exhibit covert racism, especially toward speakers of an ethnolect such as African American Vernacular English (AAVE). An article from Cornell University highlights that AI systems can reinforce the prejudices embedded in language models when they are used in company recruitment. For example, the researchers found that the language models judged AAVE speakers to have lower intelligence and to be less suitable for employment.
The models were significantly more likely to describe AAVE speakers as “stupid” and “lazy” and to assign them to lower-paying jobs.
The article focuses exclusively on employee recruitment, but its findings are worth considering when using language models in teaching: we can unconsciously reinforce prejudices and stereotypes if we are not aware of them and do not remain critical of the models' output.
Devin - the first AI software engineer
The company Cognition has developed Devin, which can plan and solve complex tasks autonomously. The program can also make decisions based on experience and improve itself over time. Devin works in its own software development environment and has access to write code and search the web.
If you want to test Devin, you can sign up for their waiting list below: