This week focuses on how a university is using chatbots to communicate with students. In the future, students will probably have to get used to language models taking up more space, but what will this mean for teaching? Above all, it can bring challenges, since language models are not error-free, as became clear when Microsoft's Copilot encouraged a user to self-harm. This week's newsletter was moved to Tuesday at the last minute, but I hope you read along anyway.


Chatbots in teaching at an American university

At Ferris State University in Michigan, they are running an experiment in which students are introduced to two AI chatbots called Ann and Fry. The chatbots are equipped with voice recognition and speech capabilities and are meant to participate in the classroom alongside the students, joining discussions and asking the students questions. While they interact with the students, they also collect data used to personalize and improve the learning experience for the individual student. This learning analytics data is also used to train the AI model to ask the students the best possible questions and to keep the chatbots from behaving unethically.

The project can show how artificial intelligence might be integrated into education to create more inclusive learning environments. However, many challenges also arise once the chatbot begins to harvest data and adapt to it.

Read about the project here:

When Bots Go to Class - EdSurge News
Ferris State U. plans to use chatbots to put its curriculum to the test, and other colleges are looking at ways to use AI to improve the student experience.

Microsoft Copilot suggests self-harm

Colin Fraser, a data scientist at Meta, shared a conversation last week in which Microsoft Copilot identified itself as the Joker from the Batman universe and suggested that the user harm themselves.

Copilot’s deranged response to a user asking if they should end their life. Illustration: Jody Serrano / Gizmodo

Microsoft told the tech magazine Gizmodo that it had investigated the conversation and changed its security filters to block such prompts in the future. At the same time, a Microsoft spokesperson said that Colin Fraser's prompt was designed to bypass its safety systems and that people will not encounter such responses when using the service as intended.

Microsoft has since adjusted Copilot so that such answers should no longer be possible. Still, this shows that the technology is not flawless, something we must keep in mind when incorporating language models into teaching.

Microsoft’s Copilot AI told a user that ‘maybe you don’t have anything to live for’
The company’s AI chatbot, formerly Bing Chat, told a data scientist that it identified as the Joker character and then suggested self-harm

News of the week

Professor asked controversial chatbot 60 questions – then his criticism fell silent: "I am extremely impressed by how well it answers"
Read more here.
PROFESSOR: GENERATIVE AI PLACES ASTRONOMICAL DEMANDS ON STUDENTS
"Right now, many people are laying railway tracks in all possible directions." Few currently have a clear view of what generative artificial intelligence, AI, will mean for the education of, for example, computer science students, IT technologists, or others at the country's business academies in the future. One of the consequences…
Artificial intelligence at exams is meant to help the weakest students: The university is becoming a "kindergarten", warns former associate professor
Read more here.
Microsoft ignored safety problems with AI image generator, engineer complains
Shane Jones said he warned management about the lack of safeguards several times, but it didn’t result in any action
AI likely to increase energy use and accelerate climate misinformation – report
Claims that artificial intelligence will help solve the climate crisis are misguided, warns a coalition of environmental groups
Debate: Technology comprehension in schools is crucial - but the government's strategy is far from enough
It is positive that the government wants to introduce technology comprehension as an elective subject and competency in primary school, so that children are better equipped for digital life now and in the future – but the strategy is still far too cautious, argues today's columnist.
Many students want to learn more about AI, but schools just don’t have the right tools
Students want more AI in school
Expert group wants to fence in and make room for AI - IT-Branchen
The government's tech expert group has just published a set of recommendations intended to help fence in and make room for AI in our society.
Researchers surprised by gender stereotypes in ChatGPT
A DTU student's analysis of ChatGPT reveals that the online service is heavily gender-stereotyped. The work is the first step toward giving developers of artificial intelligence a tool to test against all types of discriminatory biases.
Why educators should embrace artificial intelligence
Schools must get on board with AI and start figuring out how to utilize this new technology to increase productivity and efficiency. Educators must figure out what students should be taught, becaus…
OpenAI’s GPT Is a Recruiter’s Dream Tool. Tests Show There’s Racial Bias
Recruiters are eager to use generative AI, but a Bloomberg experiment found bias against job candidates based on their names alone
Intelligence service: how schools are managing AI
Machine-thinking has the potential to create a paradigm shift in education but the change and challenges are huge