In this week's newsletter, the focus is on DTU, where students will in future work with artificial intelligence as part of their teaching and at exams. Meanwhile, Italians are nervous about ChatGPT: Italy's Data Protection Authority believes that OpenAI collects users' data and then trains its language model on it, thereby violating the GDPR, and has asked OpenAI to respond within 30 days.
On Thursday, we also published an article about misinformation and disinformation, and the fear of how they will affect several upcoming elections around the world.


DTU opens the door to artificial intelligence in teaching

The Technical University of Denmark (DTU) is the first university to announce that artificial intelligence, including language models such as ChatGPT, will be integrated into teaching and examination processes in 2024.
The university writes that this is part of its objective to apply natural and technical science for the benefit of society, and of its ambition to offer some of Europe's leading engineering programs. It will adapt teaching methods and exam questions and use artificial intelligence more systematically in courses and research.
The university has developed guidelines for the ethical use of AI, focusing on academic integrity. These guidelines encourage students to use AI responsibly, including proper attribution in their submissions.

💡
AI at DTU
Guidelines for the use of AI

It must be clearly stated if anything in a submission is not your own work. Failing to do so is considered exam cheating, cf. DTU's honor code and its rules on good academic practice and exam cheating.

1. AI can generate output that is inaccurate or wrong. AI is only an aid.

2. You are the guarantor of the quality of your work and that what you submit is correct.

3. AI-generated output may contain copyrighted material without this being apparent. It is your responsibility to ensure you do not infringe copyright.

4. AI can be biased and trained on particular viewpoints. You must therefore always be critical of its output.

5. AI recycles the information you feed in. You must therefore avoid entering sensitive information.

6. An exam with 'All aids, but no internet access' means you may not use AI.

DTU adheres to the scientific publisher Elsevier's guidelines for the use of AI.
Universitet vil bruge kunstig intelligens i undervisning | Nyheder | DR
Artificial intelligence will be a highly systematic part of teaching at the Technical University of Denmark (DTU).
DTU åbner for brug af kunstig intelligens i undervisning
Artificial intelligence (AI) is part of engineers' field of work and is to be integrated more systematically into teaching and, in the longer term, into exams at DTU.
Eksamen med åbent internet og kunstig intelligens
Lecturers at DTU are gaining experience with the use of artificial intelligence in teaching and at exams.

Italy's Data Protection Authority claims that ChatGPT violates the GDPR

Italy's Data Protection Authority, Garante per la protezione dei dati personali, has officially notified OpenAI that it has found violations of data protection law. Last year, ChatGPT was blocked for four weeks over the regulator's privacy concerns, which OpenAI subsequently fixed or clarified. In that connection, the authority launched a "fact-finding activity", and it now claims to have found violations of the GDPR.
According to the BBC, the breaches relate to the mass collection of users' data, which is then used to train the algorithm. The regulator is also concerned that younger users may be exposed to inappropriate content.
OpenAI has 30 days to respond to these accusations and explain their side of the story.
In its final decision on the case, the Italian Data Protection Authority will take into account the ongoing work of the special working group established by the European Data Protection Board (EDPB).

ChatGPT bryder GDPR, lyder det fra Italiens datatilsyn | Version2
The breaches reportedly concern the mass collection of user data to train the algorithm, and that younger users are potentially exposed to inappropriate content generated by ChatGPT.

New articles on Viden.AI: Disinformation

In 2024, at least 64 democratic elections will be held worldwide, but how will AI-generated texts and deepfakes affect these elections?

Map of all the democratic elections held in 2024. Source: https://time.com/6550920/world-elections-2024/

We investigated this in a short article, and during our research we discovered a website designed to generate and spread disinformation for only $400 per month. We have been in contact with the site's creators, who gave us essential insight into the problem and raised a serious question: how many similar systems already exist, and what impact do they have on the world's democracies?

We're surprised this topic isn't getting more attention. Perhaps the general population has not yet realized that artificial intelligence can be - and probably already is - abused.

The solution to meet this challenge is to educate young people (indeed the entire population!) in understanding digital technology. How else can they critically navigate the digitized world we live in?

Ethical aspects of chatbots in education - misinformation and disinformation
When generative AI produces content, it does so at a speed, quality, and volume that we have not seen before, and it has become increasingly difficult for us to distinguish between content created by humans and that produced by generative AI. We will encounter this challenge everywhere in our society.
Hver tredje borger: Kunstig intelligens vil skade politiske valg
Read more here.

News of the week

Forsker: AI-tempoet i skolen skal sættes ned for at undgå tekno-panik | Radar
The current discussions about AI and education need recalibrating, argues the Australian professor behind a recent research article on the subject. His views are backed by Danish colleagues.

Behind paywall

Falsk forskning oversvømmer videnskabelige magasiner
Read more here.
A historical day for the AI Act - The AI Act reaches an important milestone politically - DI Digital
After a whirlwind negotiation in December, European member states finally approved the AI Act at the Coreper level, leaving it (almost) ready for its final destination.
Digital dannelse: Det kræver undervisning i det digitale – og pauser fra det
Digital tools should be a natural part of upper secondary school life, but there must also be breaks from the digital world. That is the starting point for digital life at Randers Statsskole.
SDU-dekaner: Myterne om humaniora nægter at dø – men virkeligheden fortæller en anden historie
The narrative of the humanities as a producer of perpetual students and unemployed graduates does not reflect today's humanities programs and students, write Lars Grassmé Binderup and Simon Møberg Torp.
Kunstig intelligens kan udfordre os mere end klima
Misinformation has become an everyday occurrence, and it would be naive to believe that artificial intelligence (AI) plays no part. But how will misinformation be further amplified by AI?
Kunstig intelligens bør være en borgerret
AI can make a huge positive difference in many areas and help especially vulnerable citizens who may not have the insight or capacity to secure the best solution for themselves.
Lærerne står alene med enorm opgave: Tillidsrepræsentanter efterlyser handling
Majority of educators call for govt monitoring of AI development, use cases
This strong consensus indicated a widespread recognition of the potential risks and ethical considerations associated with AI, noted the report. The report is based on a survey of 6,313 educators, ranging from primary school and high school teachers to college professors and education professionals, …
Most Top News Sites Block AI Bots. Right-Wing Media Welcomes Them
Nearly 90 percent of top news outlets like ‘The New York Times’ now block AI data collection bots from OpenAI and others. Leading right-wing outlets like NewsMax and Breitbart mostly permit them.
AI-visionsudspil skal være med til at sætte retningen for Danmark - IT-Branchen
With five strong proposals, IT-Branchen's new AI vision paper sets a clear direction for how Denmark can embrace AI and the technologies of the future to a far greater extent, to the benefit of the whole country.
Vi står over for den største teknologiske revolution siden elektriciteten, mener Ulrik Vestergaard: »Det er et lys i mørket«
Read more here.

Behind paywall


The geek corner

Meta launches Code Llama 70B

Meta has just released Code Llama 70B, a coding-focused language model with 70 billion parameters. Meta calls it "the largest and best-performing model in the Code Llama family". Before downloading it, you should know that it takes up 131 GB and requires a powerful GPU and a lot of RAM to run.
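The 131 GB figure lines up with a simple rule of thumb: weights stored at 16-bit precision take about two bytes per parameter. A minimal back-of-the-envelope sketch (the helper function is our own; actual file sizes vary slightly with format and metadata):

```python
# Rough estimate of a model's weight footprint from parameter count
# and numeric precision. This is an approximation, not an exact file size.

def weight_size_gib(n_params: int, bytes_per_param: float = 2.0) -> float:
    """Approximate size of the model weights in GiB.

    bytes_per_param: 4.0 for fp32, 2.0 for fp16/bf16, ~0.5 for 4-bit quantization.
    """
    return n_params * bytes_per_param / 2**30

# Code Llama 70B stored at 16-bit precision:
print(f"fp16:  {weight_size_gib(70_000_000_000):.0f} GiB")       # ~130 GiB, close to the ~131 GB download
print(f"4-bit: {weight_size_gib(70_000_000_000, 0.5):.0f} GiB")  # ~33 GiB after 4-bit quantization
```

The same arithmetic explains why quantized variants are popular: at 4 bits per parameter the model shrinks to roughly a quarter of its fp16 size, at some cost in output quality.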

EvalPlus is an evaluation tool for testing the code generation of large language models. Here, Code Llama 70B scores 65.2, quite a bit lower than GPT-4 (85.4).
You can test the model here without logging in:

Code Llama 70B | NVIDIA NGC
Code Llama is an LLM capable of generating code, and natural language about code, from both code and natural language prompts.
Meta Releases Code Generation Model Code Llama 70B, Nearing GPT-3.5 Performance
Code Llama 70B is Meta’s new code generation AI model. Thanks to its 70 billion parameters, it is “the largest and best-performing model in the Code Llama family”, Meta says.
Introducing Code Llama, a state-of-the-art large language model for coding
Code Llama, which is built on top of Llama 2, is free for research and commercial use.