
6 tips for ethical use of artificial intelligence in education


Artificial intelligence now appears in many different teaching materials, and as a teacher you should weigh several ethical considerations before introducing AI into your teaching. In this article, we have therefore gathered advice on how to select teaching tools on an ethical basis.

1. Does it make sense to use artificial intelligence in my profession?

Does AI actually add to student learning, or do we use it simply because it is new and exciting? There is external pressure for everyone to take a stand on artificial intelligence and for the technology to be included in teaching. But we should not digitize for the sake of digitization; new tools should be used when they help shape students' understanding of technology. This learning can also be subject-specific, where students and the teacher explore the technology through the lens of their subject to see both opportunities and challenges. There will, however, be a novelty effect when we bring new technology into our subject, and we can therefore be blinded by the fact that it is new and exciting. This does not mean we should abandon our normal teaching just because a new technology has appeared. We must decide for ourselves whether to opt in or out of artificial intelligence, so that it makes sense in our teaching.

Nor do we always have to be first movers just because a new technology comes along. It is also a question of whether you, personally and professionally, can keep up.

2. Study how artificial intelligence works

Before using artificial intelligence in education, it's important to understand how the technology works. Whether it's ChatGPT, an app, or a learning management system plugin, there will always be an underlying technology that controls the generated output.

All programs and websites enriched with artificial intelligence have strengths and weaknesses. Few people can explain how a language model works without getting technical, yet a basic understanding is necessary to remove the "magic" veil around chatbots. Ultimately, it is all based on statistics, mathematics, various algorithms, and human training.
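To make the "statistics and mathematics" part less abstract, here is a deliberately tiny sketch of the core idea behind next-word prediction: count which word tends to follow which in some text, then predict the most probable continuation. This is not how modern models like ChatGPT are built (they use neural networks with billions of learned parameters), and the training sentences below are invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy "training data" (invented for illustration).
corpus = [
    "students ask good questions",
    "students ask critical questions",
    "teachers ask good questions",
]

# Count which word follows which (a simple bigram model).
following = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = following[word]
    total = sum(counts.values())
    best, count = counts.most_common(1)[0]
    return best, count / total

print(predict_next("ask"))       # ('good', 0.666...), because "good" follows "ask" most often
print(predict_next("students"))  # ('ask', 1.0)
```

Real language models do the same kind of next-word prediction on an enormous scale, which is also why they can produce fluent text that sounds confident but is factually wrong.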

Moreover, the variety of tools has its advantages and disadvantages. For example, ChatGPT's training data only extends to September 2021, which means it does not know about the war in Ukraine. ChatGPT-3.5 also has major problems with citing sources; here it would be better to use Bing Chat. There is also a big difference between an artificial intelligence trained on scientific texts in a closed knowledge base and one that searches the open web.

One model to rule them all

It is expensive to develop and train models because it requires enormous computing power, large datasets, and a lot of human training. ChatGPT is estimated to cost around DKK 4.5 million per day in server expenses alone, so the model also needs to be monetized.

OpenAI has launched an API for their GPT models, and this is the basis for most programs enriched with artificial intelligence. An API is a standardized way to exchange data between IT systems, making it very easy for developers to build new programs on top of OpenAI's models.

Therefore, we see many programs that, on the surface, look as if they have built their own artificial intelligence, but underneath it is all run (and controlled) by OpenAI. The reason some of these programs are good within a particular subject, can simulate a famous person, or can act as customer service for a company is the way they have been set up: the developers have framed the artificial intelligence, tuned it for a specific domain, and then launched it as their own "unique" product. As a result, a great deal of power is placed in the hands of one company, which can determine the direction of many small businesses.
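As a sketch of how such a "wrapper" product typically works in practice: the developer sends a fixed system prompt together with the user's question to OpenAI's API and sells the result under their own brand. The example below uses OpenAI's official Python library; the product name, prompt, and tutoring scenario are invented for illustration, and the exact call signature may differ depending on the library version installed.

```python
# pip install openai  (and set the OPENAI_API_KEY environment variable)
from openai import OpenAI

client = OpenAI()

# The "unique product" is often little more than a fixed instruction like this one.
SYSTEM_PROMPT = (
    "You are 'HistoryTutor', a friendly tutor for upper-secondary history students. "
    "Answer briefly, use age-appropriate language, and end with a follow-up question."
)

def history_tutor(question: str) -> str:
    """Send the student's question to OpenAI together with the fixed framing."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # the underlying model still belongs to OpenAI
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(history_tutor("Why did the Berlin Wall fall?"))
```

The point is not the code itself but the data flow: everything the student types is passed on to OpenAI, which is exactly why the questions below about data basis and ownership matter.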

You should, therefore, ask some questions before choosing to use artificial intelligence in teaching:

  • What data is the artificial intelligence trained on?
  • Is it possible to get sources for what it generates?
  • Who is behind the technology?

3. Is the role of the teacher clearly defined in teaching?

When deciding to use a new tool in teaching, it is crucial to consider the role of the teacher to avoid artificial intelligence undermining the teacher's authority.

Teachers can ask themselves some ethical questions before introducing AI into education. The school's management may also have answers to some of these questions:

  • Should the teacher monitor and adjust the recommendations that AI gives students to ensure they are relevant and appropriate?
  • How do we address potential misinformation or lack of context that AI can generate?
  • How can the teacher use artificial intelligence to differentiate teaching and meet students' individual needs and prerequisites?
  • Should the teacher be passive while students work with a tool and only provide support if they need help?

You must constantly ensure that the teacher's function in the classroom is maintained and that artificial intelligence does not take over areas requiring human empathy or empowerment. One piece of advice is therefore to choose tools in which the teacher has a function and can support students. The teacher should also have a clearly defined role in following up on the academic content students have worked with, ensuring that teaching remains meaningful and engaging.

If you choose to include artificial intelligence in teaching, it is important to consider how the teacher-student relationship is maintained.

4. Research and choose the right tools for your field of study

Many tools can be integrated into teaching, but the same tools are often used for all assignments. Take the time to research the different options within your field of study and be curious.

For example, ChatGPT can be used in math to create assignments for students, but it's not optimal for performing calculations because it's a language model. However, OpenAI is working on implementing plugins that make it possible to link ChatGPT with WolframAlpha.

For image generation in creative subjects, tools such as Midjourney, BlueWillow, and DALL-E 2 already have ethical rules built into what they will produce. This helps ensure that the tools used in education meet ethical standards, although those standards are American. One should also be aware that Midjourney and BlueWillow use Discord as a platform, which creates problems with GDPR.

Read our article "Generative AI makes images".

5. Investigate whether technology collects student data

The big challenge with all the available tools is that they appear so quickly that we do not have time to examine what data they collect. Here it is important to be critical and to opt out of a technology if it is impossible to account for where data is stored or to obtain a data processing agreement.

We see that Microsoft and Google are beginning to enrich their programs with artificial intelligence, and these features will certainly be made available to students. However, this does not mean we can assume that all the rules are being followed, and we need to consider whether we want to use these systems in education.

It is also worth considering whether you want students to be part of an unregulated mass experiment to train new artificial intelligence.

  • Which tech companies do I share my students' information with?
  • What can we use artificial intelligence for?
  • Does our school have a data processing agreement with the company in question?

6. Bias and prejudice

One must be aware of whether the tools contain societal biases, racial biases, gender stereotypes, and so on. If the training data that an AI is trained on is biased, it is likely to reproduce these biases in its output. This means, for example, that if the model is trained on texts written by male authors, it may be biased toward male characteristics or perspectives. Another example is Google's image recognition algorithm, which misidentified people of color as primates.

At the same time, there may be challenges if, based on language patterns in the training data, the language model is more positive or negative towards different population groups.

The above are some of the reasons why a language model sometimes fails in its output, and we must remember that these biases and prejudices affect what our students read. According to OpenAI, this is something you have to accept if you want to use their model. But as more people use the technology, it also becomes better at filtering the content presented to users.

There is a caveat, however: even though the model is getting better at filtering out potentially offensive material, it is still an American ethic that is imposed on the language models. Try asking about the abortion debate, for example; the model will find it very difficult to answer.

Conversely, some models are stripped of all ethical guardrails, meaning they will write about anything. This can also be problematic, because there is no filter, and you must be very careful. We have, for example, run a few test queries against such a language model.

This also means it is worth working with critical thinking in teaching, so that students understand that language models contain biases and prejudices.

Opportunities and limitations

The above are suggestions for aspects to consider if you want to work with artificial intelligence in education. The list is not in order of priority, and it could contain many more items. However, we have selected the most important ones and tried to give an idea of the challenges and opportunities to address before adopting the technology.

Some of this will not be possible due to GDPR, limitations in curricula, school policy in the area, course methods, or a lack of frameworks and guidelines from the Danish Ministry of Education (UVM). For a period, quite a lot will be up to the individual teacher at the individual school, which puts extra pressure on an already busy everyday life.

Sources

Kunstig intelligens skal ind i skolerne
Hvad skal vi med kunstig intelligens i skolen? – IT Torvet
Samtale med ChatGPT-4
Om kunstig intelligens og viden
OpenAI’s ChatGPT Reportedly Costs $100,000 a Day to Run – CIOCoverage
OpenAI API – OpenAI
Undersøgelse og nysgerrighed – AI chatbots i historie – STX | emu danmarks læringsportal
ChatGPT Gets Its “Wolfram Superpowers”!
OpenAI’s DALL-E 2 is pretty compliant – but who is responsible anyway?
The Ethics of AI Art: The Case of DALL-E 2 [CASE STUDY]
Does ChatGPT Have Privacy Issues?
Announcing new generative AI experiences in Google Workspace – Google Workspace Blog
Introducing Microsoft 365 Copilot – your copilot for work – The Official Microsoft Blog
ChatGPT presser danske uddannelser: “Det kan ende i et våbenkapløb” – Radar
Her undervises eleverne i brugen af ChatGPT
ChatGPT genrejser spørgsmålet, om maskiner kan tænke. Men hvad vil det egentlig sige at tænke?
Ny forskning: Drop ChatGPT hvis I bekymrer jer om inklusion! – Bootstrapping
ChatGPT and large language model bias
Google’s solution to accidental algorithmic racism: ban gorillas
We asked ChatGPT whether generative AIs were biased. Here’s what it told us... – TechInformed
ChatGPT koster over 7 millioner kroner om dagen i serverutgifter