
News of the week: HTX students reveal security vulnerabilities in ExamCookie

HTX students reveal security flaws in ExamCookie, Mistral AI launches an EDU license, and researchers are working on more sustainable AI.


Week 7 means winter vacation for many, and that applies to us at Viden.AI as well - although there are a few student submissions that need to be graded. Nevertheless, there were a few news stories that were too interesting for us to completely relax.

Especially the story Version2 published last week about security flaws in ExamCookie, showing that the system can easily be circumvented.

New language models are continuously being developed, along with new opportunities to use them in teaching. It is particularly interesting that the French company Mistral AI has released an EDU version of Le Chat Pro, which students can use for about 35 DKK per month. At the same time, new initiatives are underway to develop good open-source European language models.

AI use is often described as a significant environmental burden, and in its current form that is likely true. However, researchers from SDU are working on a method that could significantly reduce energy consumption. Specifically, they are testing a technique that minimizes the number of bits per parameter in language models. It may sound technical, but fundamentally it is about switching from calculations with decimal numbers to whole numbers - a change that can make AI much more energy efficient.

At the University of Bergen, doubts have arisen about whether a lecturer has used AI to grade students' exams and provide feedback on their exam papers.

Sam Altman, CEO of OpenAI, participated in the podcast Re:Thinking, where they discuss what skills humans should have in a future characterized by AI.

Furthermore, there are a number of other news stories that could be interesting to delve into.

Happy reading!

💡
This article has been machine-translated into English. Therefore, there may be nuances or errors in the content. The Danish version is always up-to-date and accurate.

HTX students reveal security flaws in ExamCookie

Version2 reports that two 3rd-year students from HTX have discovered that the code behind the exam program ExamCookie can be easily accessed and analyzed. They have published a guide on GitHub showing how to bypass the program's monitoring features using DLL injection techniques. They also demonstrate that simply renaming the browser executable is enough to evade monitoring and use AI tools such as ChatGPT without being detected.

The students demonstrate significant vulnerabilities in ExamCookie, including the ability to disable screenshot features and clipboard logging. Two IT security experts express concern, as this could potentially make it harder to detect cheating during exams.

Moreover, other students found a remarkable comment in ExamCookie's HTML code: “It is completely wrong with the upcoming mink scandal. Mette Frederiksen is the biggest liar yet seen in Christiansborg!” According to a security expert, the comment indicates low coding standards and an "unserious product."

In the HTML code of ExamCookie version 1414, the comment reads: “It is completely wrong with the upcoming mink scandal. Mette Frederiksen is the biggest liar yet seen in Christiansborg!” Illustration: Version 2

The director of ExamCookie was not aware of the comment, which has now been removed. He also dismisses the criticism, asserting that it is only theoretically possible to circumvent the program. Schools will be alerted if students attempt to cheat, he states. In response to the criticism, ExamCookie has released an updated version with enhanced security.

The case raises questions about whether technological solutions can effectively prevent cheating during exams, and whether there is a need for a more thorough revision of exam formats in light of AI developments.

💡
At Viden.AI, we have also investigated the techniques that the students describe on GitHub, and we assess that performing DLL injection requires considerable knowledge, even when following their guide. Renaming the browser will prevent a notification in ExamCookie, but it does not prevent monitoring of the clipboard or the capturing of students' screenshots. In cases of reasonable suspicion, there will therefore still be a significant risk of detection, even with a renamed browser. However, like the two security experts, we are very puzzled by the low coding standards and by the fact that the source code was audited without the developer comment about Mette Frederiksen being detected.
Students warn: This is how easily school ChatGPT defenses can be evaded | Version2
Artificial intelligence makes it easy to cheat on exams, so when over 100,000 Danish high school students take their mid-term exams in these weeks, every mouse click is monitored. However, this monitoring proves to be easy to bypass.

Behind paywall


Mistral AI launches mobile app

The French company Mistral AI has launched its AI assistant Le Chat as a mobile app for iOS and Android. Previously, Mistral primarily focused on businesses and professional users, but now the company is directly competing with OpenAI’s ChatGPT, Google Gemini, and the Chinese AI assistant DeepSeek.

For the education sector, Mistral offers an EDU license for about 35 DKK per month, providing students and educators with a more open and tailored AI solution compared to the closed systems from American actors. Alternatively, the system can be used for free, but then one accepts that one's data is used to improve the system.

French Mistral launches a mobile app: Will compete with ChatGPT, Gemini, and DeepSeek
The French AI startup Mistral AI has just launched a mobile version of its AI assistant, Le Chat, for both iOS and Android – but the company faces enormous competition from players like OpenAI’s ChatGPT, Google Gemini, and not least Chinese DeepSeek.

European collaboration on open language models

A consortium of 20 leading European research institutions, companies, and EuroHPC centers has entered into a collaboration to develop a family of open, multilingual language models under the project OpenEuroLLM. The initiative aims to strengthen Europe’s digital sovereignty and competitiveness in AI by making advanced language models available to businesses, industry, and public organizations.

Open Euro LLM
A series of foundation models for transparent AI in Europe

Researchers work to make AI searches more sustainable

The use of AI chatbots like ChatGPT requires significant amounts of energy and water, which has an environmental cost. A single search can consume many times more energy than a Google search, and as millions of people use chatbots daily, this results in enormous total consumption.

To address this challenge, researchers from the Department of Mathematics and Computer Science at the University of Southern Denmark are working on making large language models more energy-efficient. Lukas Galke and Peter Schneider-Kamp have been granted resources on the supercomputer LEONARDO to develop more sustainable models. They focus on reducing energy consumption during inference - that is, during user interaction - rather than during the training of models.

The researchers hope to minimize the number of bits per parameter in language models, which could significantly reduce energy consumption. If their method succeeds, a search could potentially become 30 times more energy efficient than today.
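To make the idea of "fewer bits per parameter" concrete, here is a minimal sketch of post-training int8 quantization. This is our own illustration of the general technique, not the SDU researchers' actual method: float32 weights are mapped to 8-bit integers with a single scale factor, cutting memory per parameter by four and allowing cheaper integer arithmetic during inference.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float weights to 8-bit integers with one shared scale factor."""
    scale = np.abs(weights).max() / 127.0  # largest magnitude maps to ±127
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

# Hypothetical layer weights, purely for illustration
w = np.array([0.12, -0.83, 0.45, 0.0, 0.91], dtype=np.float32)
q, s = quantize_int8(w)
w_approx = dequantize(q, s)
# Each weight now occupies 1 byte instead of 4, and the rounding
# error is bounded by half the scale factor.
```

Research like the SDU project pushes this further, toward even fewer bits per parameter, where the trade-off between accuracy loss and energy savings becomes the central question.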

Are you wasting chatbot energy without realizing it?
Chatbots consume enormous amounts of energy when they are developed and when we users search in them. Researchers now want to create more sustainable chatbots so that we can search with a slightly better conscience.

University cancels grading after suspicion of errors

35 out of 71 students at the University of Bergen will receive new grading for their exam in European history and politics. The decision comes after complaints regarding grading justifications raised concerns within the university's administration.

Several students speculated whether the examiner had used AI to assess the exam papers, but the university emphasizes that the complaints do not directly mention AI. However, after a series of random checks, the university could not confirm that the grading had been conducted in a professionally responsible manner and therefore decided to cancel the assessments for the affected students.

Students suspected that the examiner used AI. Now they will receive new grading
35 of 71 students will receive new grading for their exam in European history and politics at the University of Bergen. The department regrets what has happened.

Podcast: The ability to ask questions will become the most important skill in the age of AI

Sam Altman, CEO of OpenAI, predicts that AI will fundamentally reshape the economy and labor market. In a conversation with psychologist Adam Grant on the podcast Re:Thinking, Altman emphasizes that raw intelligence will no longer be the most important skill in the future job market. Instead, the ability to ask good questions and connect information in new ways will be crucial.

Altman explains that intelligence was previously measured by how much knowledge a person could remember. But in an age where AI can store and recall information far better than humans, it is more important to identify patterns and connections rather than just memorizing facts. He compares this development to the rise of the internet, where teachers in his school days tried to ban the use of Google in teaching. Instead of making us dumber, it enabled us to solve more complex tasks.

Grant summarizes Altman's point by stating that it will become more important to be a “connector” of information than a “collector” of facts. Creativity and the ability to see connections across different fields of knowledge will be central competencies in the age of AI.

Sam Altman on the future of AI and humanity
Podcast Episode · ReThinking · 01/07/2025 · 40m

This week's other news

Gemini
Gemini 2.0: our most capable AI model yet, built for the agentic era.
When the oligarchs kiss Trump's ring, the struggle for tech regulation becomes a fight for the future of democracy - Altinget.dk
With Trump as president, we see a new oligarchy where power and money go hand in hand. This gives tech billionaires access to political influence, threatening democracy. Therefore, we should limit the power of big tech companies and ensure that technology is used to build a more just society, writes Lisbeth Bech-Nielsen (SF).
AI has killed the analytical essay – so what? (a call to action)
Australia’s social media ban shows how extreme the technology debate has become – there’s a better way
A more balanced approach to social media use might be needed.
Digital Technology Understanding: From Good Intentions to Real Learning
ChatGPT comes to 500,000 new users in OpenAI’s largest AI education deal yet
Still banned at some schools, ChatGPT gains an official role at California State University.
