This week focuses on how a university is using chatbots to communicate with students. In the future, students will probably have to get used to language models playing a larger role, but what will that mean for teaching? It will bring challenges, not least because language models are not error-free, as became clear when Microsoft's Copilot encouraged a user to self-harm. This week's newsletter was moved to Tuesday at the last minute, but I hope you will read along anyway.
Chatbots in teaching at an American university
At Ferris State University in Michigan, an experiment is underway in which students will be introduced to two AI chatbots called Ann and Fry. The chatbots are equipped with voice recognition and speech capabilities and are meant to take part in the classroom alongside the students, where they can join discussions and ask the students questions. While interacting with the students, they also collect data that is used to personalize and improve the learning experience for the individual student. This learning analytics data is further used to train the AI models to ask the students the best possible questions and to ensure the chatbots do not behave unethically.
The project may show how artificial intelligence can be integrated into education to create more inclusive learning environments. However, many challenges also arise once the chatbots begin to harvest data and adapt to it.
Read about the project here:
Microsoft Copilot suggests self-harm
Colin Fraser, a data scientist at Meta, shared a conversation last week in which Microsoft Copilot identified itself as the Joker from the Batman universe and suggested that the user harm himself.
Microsoft told the tech magazine Gizmodo that it had investigated the conversation and adjusted its safety filters to block such prompts in the future. At the same time, a Microsoft spokesperson said that Colin Fraser's prompt was deliberately designed to bypass the safety systems and that people will not encounter such responses when using the service as intended.
Microsoft has since corrected Copilot so that such answers should no longer be possible. Still, the episode shows that the technology is not flawless, and we must keep that in mind when we incorporate language models into teaching.