Critical AI Literacy

Introduction to the Critical AI Literacy Module

Welcome! This module is designed to help foster a critical and reflective mindset regarding novel AI technologies, with a specific focus on text-based generative AI technologies such as ChatGPT.

The module should take approximately 2 hours to complete, and aims to help you get a better understanding of the terminology, workings, limitations, (ethical) implications, and possible applications of novel AI technologies. It will not prescribe whether and how you should use these technologies, but should enable you to make informed decisions for yourself regarding your involvement with new AI technologies.

Learning outcomes

After completing this module, you…

  • …can explain the basics of what Generative AI is;
  • …can critically reflect on how Generative AI works;
  • …can evaluate some of the ethical implications regarding the use of generative AI models;
  • …can describe what some applications and conditions for the responsible use of generative AI models are.
     

The following questions are discussed in this module:

  • What is AI?
  • How do generative AI models work?
  • What are the limitations of generative AI models?
  • What ethical implications exist surrounding the use of generative AI models?
  • What are some applications, considerations, and conditions for the responsible use of generative AI models?

If after completing this course you find yourself wanting more information about AI and its use in education, please visit our EDU Support page on this subject (https://edusupport.rug.nl/ai).

'AI Literacy', Microsoft Designer

What is AI?

Since the introduction of consumer-friendly and powerful AI technologies such as ChatGPT, there has been a lot of talk about AI and its workings. To understand and reflect on these novel technologies, it is useful to define a number of terms and processes first. You can click through the different pages to explore the definitions of a number of important terms and processes.

Once you feel comfortable with the terms, you can go to the quiz on the next page to test your knowledge.

Artificial Intelligence, Generative AI, and Large Language Models


Since programs such as ChatGPT have become widely available, terms such as Artificial Intelligence (AI), Generative AI (GenAI), and Large Language Model (LLM) have quickly
entered the vocabulary of our daily lives. On this page, we define these three terms together, as they all relate to each other: they represent different levels of the same concept, with Artificial Intelligence being the broadest in scope, and Large Language Model being the narrowest.

‘The differences between 'regular' AI, GenAI, and LLMs’, Microsoft Designer.


You can click on the tabs of the accordion below to get the relevant definitions of these three terms.

Artificial Intelligence (AI)

Artificial intelligence has two connected definitions. The first, less important for this module, is that it names a (sub)field of computer science. The second is what this field of computer science actually studies: computer technologies centered around the goal of having computers perform human tasks. This includes tasks such as (image) classification, language generation, and complex decision making. Artificial intelligence as a field started as early as the 1950s, and since then many AI technologies have been created. Many of us use AI technologies in daily life through things such as route planners, song and video suggestions in our social media apps, and spell checkers in our word processors.

Generative AI (GenAI)

Generative AI is the term used for a subset of AI technologies that aim to create new things that previously did not exist, such as text, music, images, or video. In other words, these technologies have a generative function. This distinct goal differentiates generative AI from predictive AI, which focuses on predicting future events on the basis of current data (e.g., weather forecasts), and from descriptive AI, which focuses on identifying patterns and summarizing information in data (e.g., automated keyword suggestions). Generative AI has seen a large increase in popularity in recent years due to breakthroughs in how AI models are trained and function. One of the most popular examples of Generative AI is ChatGPT.

Large Language Model (LLM)

Large Language Models (LLMs) are a type of Generative AI technology that is trained on enormous amounts of language data, for the equivalent of hundreds of human years of reading (made possible by processing data in parallel). As a result, they are able to generate human-like text. As generative models, LLMs produce texts that are new, rather than remixing existing texts through copying and pasting.

Machine Learning (ML)

AI technologies can be created by letting algorithms learn from data without much human supervision or coding. This process is generally known as machine learning. With machine learning, the computer is given a large amount of so-called training data, as well as an instruction to do something with it. It can then learn about the relationships between data points with some human help (e.g., from data that humans have already labeled) or, as is common with the technology behind LLMs, mostly on its own. Through parallel processing, machine learning can work through large amounts of data very quickly. The resulting AI technologies can be very efficient and powerful, but given the limited involvement of humans, it can also be very difficult to explain how these technologies arrive at their outcomes.

In the case of LLMs, where the instruction is to generate human-like text, the computer learns patterns in the training data by itself by processing a large corpus of language data. It then uses those patterns to achieve the desired result.
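To make this concrete, below is a minimal sketch of supervised machine learning using the (real) scikit-learn library in Python. The tiny dataset and its labels are invented purely for illustration:

    # Minimal supervised machine learning sketch (invented toy data).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    texts = ["win money now", "meeting at noon", "cheap money fast", "lunch tomorrow?"]
    labels = [1, 0, 1, 0]  # human-provided labels: 1 = spam, 0 = not spam

    vectorizer = CountVectorizer()   # turn text into numerical features
    X = vectorizer.fit_transform(texts)

    model = LogisticRegression()
    model.fit(X, labels)             # the algorithm learns patterns from the data

    print(model.predict(vectorizer.transform(["free money"])))  # [1]: flagged as spam

Here the human help consists of the labels; with the technology behind LLMs, the patterns are instead learned largely from the raw text itself.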
 

'Large amounts of information being transferred to a machine', Microsoft Designer

Neural Networks

Neural networks are a way for AI technologies to store and process information, modeled loosely on the workings of the human brain. A neural network consists of layers of so-called ‘neurons’. In the first layer, the input given by a user is broken up into smaller pieces of information, and each piece is stored in a different neuron. A layer, consisting of multiple neurons, thus represents the full information the network has at that stage.

Each neuron in a layer then sends its information to one or more neurons in the next layer of the network. The information also gets altered as it is sent forward: it can become more important, less important, or even be discarded. What happens to the information in the connection between two neurons is determined by a so-called parameter. During machine learning, the parameters that dictate how information changes as it moves through the network are set based on the patterns the computer finds in the training data. Eventually, the neurons in the final layer of the network produce an output that is presented back to the user.
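As an illustration, the following minimal Python (numpy) sketch shows how information flows forward through a tiny two-layer network. The weights here are random stand-ins for parameters that would normally be set during machine learning:

    # Minimal feed-forward pass through a toy neural network.
    import numpy as np

    def layer(values, weights, biases):
        # Each connection scales the information it passes forward; near-zero
        # weights make a piece of information unimportant or discard it.
        return np.maximum(0, weights @ values + biases)  # ReLU activation

    rng = np.random.default_rng(0)
    x = rng.normal(size=4)                                 # input split into 4 pieces
    W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)   # parameters of layer 1
    W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)   # parameters of layer 2

    hidden = layer(x, W1, b1)       # first layer: 8 neurons
    output = layer(hidden, W2, b2)  # final layer produces the output
    print(output)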

You can watch the animation below to see how a neural network fits within a Generative AI application.
 

Pre-training & Fine-tuning

Machine learning for Large Language Models has two phases: Pre-training & Fine-tuning

During pre-training, the computer is “left alone” with large amounts of text data (billions of pages of text) and an instruction to learn how to create human-like texts. This machine learning phase results in an AI model that possesses an internal mapping of how human language is structured, called a foundational model. This mapping is a high-dimensional table (with far more dimensions than the three we humans can traverse) that encodes the many ways different elements of human language relate to each other. In this table, elements that are similar are closer together than elements that are dissimilar. The table can then be used to train a neural network, which in turn can be used to generate rudimentary new texts. After pre-training, the AI model can therefore adequately complete the instruction it was given at the start of machine learning, but it is not yet optimized enough to be made available to the public.

During fine-tuning, the goal is to further optimize the model so that it can better handle specific tasks or generate specific outcomes. In this phase, human workers give the model feedback on its responses. This is needed for LLMs because there are many different language tasks - chatting, summarizing, reporting, joking - which all differ in how a text should look. Furthermore, the foundational model can produce false, hurtful, or dangerous statements. What types of feedback are given to the model depend on the intended use of the model and the intentions of the developers (more on this later). Without the fine-tuning phase, the model output is less reliable and useful, and not fit for use by the general public.
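The two phases can be summarized in pseudocode-like Python. All names below (corpus, model, human_feedback, and so on) are hypothetical placeholders for illustration, not a real training API:

    # Sketch of the two machine learning phases for an LLM (pseudocode-like).
    def pretrain(model, corpus):
        for text in corpus:                    # billions of pages, no human supervision
            for i in range(1, len(text)):
                # next-token objective: learn to predict each token from its context
                model.learn(context=text[:i], target=text[i])
        return model                           # result: the foundational model

    def fine_tune(model, prompts):
        for prompt in prompts:                 # curated, task-specific prompts
            response = model.generate(prompt)
            rating = human_feedback(response)  # human workers rate the output
            model.adjust(prompt, response, rating)  # nudge toward preferred answers
        return model                           # result: a model fit for public use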
 

Transformers & Attention

To deal with the complexities of human language, new text-based generative AI technologies use a relatively new type of neural network called a transformer. Transformers have a distinctive feature known as attention, which allows them to prioritize certain words over others and to analyze more carefully the relations between important words, punctuation, and sentence structure. Thanks to this attention feature, newer text-based generative AI technologies can better understand the context of the input and the overall conversation, and respond adequately.
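For the technically curious, the core attention computation can be written in a few lines of Python (numpy). This is the standard ‘scaled dot-product attention’; the query, key, and value matrices below are random stand-ins for what a trained transformer would compute from its input:

    # Scaled dot-product attention: weigh every token against every other token.
    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def attention(Q, K, V):
        scores = Q @ K.T / np.sqrt(K.shape[-1])  # relevance of each token to each other token
        weights = softmax(scores)                # per-token priorities that sum to 1
        return weights @ V                       # blend token information by importance

    rng = np.random.default_rng(0)
    Q = K = V = rng.normal(size=(5, 16))         # 5 tokens, 16-dimensional vectors
    print(attention(Q, K, V).shape)              # (5, 16): one updated vector per token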
 

'Artificial intelligence utilizing an attention mechanism to transform information’, Microsoft Designer

Quiz - What is AI?

How do GenAI models work?

At this point in time, humans cannot explain exactly how text-based generative AI models, better known as Large Language Models (LLMs), arrive at their output. However, it is possible to explain the general steps involved in the construction of that output.

Five general steps

The process from a user’s input - or prompt, as it is generally called - to an AI-generated output follows a seemingly simple five-step process. Using the prompt and the context of the conversation, the model tries to predict the next word that would follow the given input. Once it has decided which word to add, it repeats the process many times over, continuously looking at the original input plus the new words it has added, until it eventually decides to add a stop command. This process is aptly known as next-word prediction.

Next-word prediction can be broken down into five general steps: tokenization, vectorization, embedding, passing through the transformer neural network, and output generation and selection. Using the tabs below, you can go through each step, as well as some important notes on the selection of the next word. The first animation shows the general cycle of next-word prediction; the animations in each step show in more detail what happens during that step, explaining the different elements of the first animation.

1. Tokenization

The first step of next-word prediction is to cut the input up into smaller chunks that the model understands, called tokens. Although the process is called next-word prediction, tokens can in fact be more than whole words: they can be parts of words, punctuation marks, or even groups of words. In the case of ChatGPT’s first version, the model had a dictionary of over 50,000 tokens it could use to cut up the prompt.
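You can inspect tokenization yourself with OpenAI’s open-source tiktoken library. The 'gpt2' encoding used in this sketch corresponds to the roughly 50,000-token vocabulary mentioned above:

    # Splitting a sentence into tokens with tiktoken (pip install tiktoken).
    import tiktoken

    enc = tiktoken.get_encoding("gpt2")
    tokens = enc.encode("Tokenization splits text into chunks!")
    print(tokens)                             # a list of integer token ids
    print([enc.decode([t]) for t in tokens])  # the chunks: words, word parts, punctuation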

2. Vectorization

As computers work in numerical space, each token is turned into a list of values known as a vector. This vector is essentially a coordinate for where the token sits in the multidimensional table that encodes human language, created during the pre-training phase of machine learning. Each value in the list represents one of the dimensions of that table. To highlight the complexity of human language: ChatGPT‘s first version used vectors with a length of over 12,000 values.

3. Embedding

The vectors of all the tokens in the input/prompt - plus a few additional ones representing features of the text, such as the position of each token in the input or the similarity between two words - are combined into a table known as an embedding. This embedding is the numerical representation of the input and captures semantic and syntactic information.
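As a rough sketch, vectorization and embedding amount to looking up each token id in a large table of learned vectors and adding information such as token position on top. The sizes and token ids below are toy values; recall that real models use vocabularies of around 50,000 tokens and vectors of over 12,000 values:

    # Toy vectorization + embedding: token ids index rows of a learned table.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, dim = 50_000, 64
    token_vectors = rng.normal(size=(vocab_size, dim))  # stand-in for the pre-trained table
    position_vectors = rng.normal(size=(2048, dim))     # encodes each token's position

    token_ids = [312, 9041, 13]                         # hypothetical ids from step 1
    embedding = np.stack([token_vectors[t] for t in token_ids])
    embedding += position_vectors[: len(token_ids)]     # add positional information
    print(embedding.shape)                              # (3, 64): one row per token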

4. Neural Network

The embedding can now be fed forward through the transformer neural network. For each vectorized token (including the ones earlier in the input that already have tokens following them), the transformer uses its attention mechanism to try to predict the next token, taking the context of the conversation into consideration by assigning different weights to different tokens in the input. The predicted token that follows the final vectorized token in the prompt is saved, and the transformer feeds the embedding through to a new layer, where the process is repeated, slightly altered based on the output of the previous layer. In ChatGPT’s first version, this process was repeated 96 times.

5. Output generation and selection

In the final layer of the transformer neural network, a list of vectors is created, alongside a list of probabilities representing how likely each vector (which is a numerical representation of a token) would be to follow the input in natural language. The probabilities in this list are moderated by fine-tuning, and do not necessarily reflect the true probabilities you would get based only on the raw data the model was trained on. The list is translated back into readable words, and eventually one of the tokens is chosen to be added (the higher its probability, the more likely it is to be chosen).

The process then repeats from step 1, until the neural network chooses to add the stop command.
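Putting the five steps together, the whole next-word-prediction loop can be sketched as follows. Here, model and its methods are hypothetical placeholders for the steps described above:

    # Pseudocode-like sketch of the full generation loop.
    def generate(prompt, model, max_tokens=200):
        tokens = model.tokenize(prompt)                 # step 1
        for _ in range(max_tokens):
            probabilities = model.predict_next(tokens)  # steps 2-5: a probability per token
            next_token = model.sample(probabilities)    # higher probability = likelier pick
            if next_token == model.STOP:                # the model decides it is done
                break
            tokens.append(next_token)                   # repeat with input + new token
        return model.detokenize(tokens)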

Notes on output selection

As can be seen in the animations, LLMs do not simply focus on one token, but rather create a list of possibilities, and are not guaranteed to pick the most probable token to add. This probabilistic approach to language generation makes LLMs, and specifically chatbots such as ChatGPT, feel more human by providing more surprising or unexpected answers.

Which token gets chosen depends on a number of factors. For instance, there are so-called model ‘hyperparameters’ - settings for the model as a whole - that dictate how likely the model is to pass over the most probable token. This includes settings such as word differentiation, which makes words that have already been used often less likely to be chosen again. Of particular importance is the hyperparameter known as ‘temperature’. The higher the temperature of a model, the more likely it is that words with a lower probability of occurring naturally still get selected as output. This setting usually ranges from 0 to 1, and most generative AI text models have it set around 0.8 to produce more varied responses. For most generative AI models, however, this setting is not available to users, so it is unclear exactly how random the responses can be.
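A small numpy example shows how temperature reshapes the output probabilities before a token is sampled. The raw scores are invented for illustration:

    # How temperature changes the probability distribution over candidate tokens.
    import numpy as np

    def probabilities(scores, temperature):
        scaled = np.asarray(scores) / temperature
        e = np.exp(scaled - scaled.max())
        return e / e.sum()

    scores = [4.0, 3.0, 1.0]          # raw model scores for three candidate tokens
    print(probabilities(scores, 0.2)) # low temperature: the top token dominates
    print(probabilities(scores, 0.8)) # typical setting: noticeably more varied picks
    print(probabilities(scores, 1.5)) # high temperature: closer to an even split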

Additionally, developers can put in guardrails to protect against harmful or offensive language and information that slips through the fine-tuning. These guardrails are not directly embedded in the AI model; rather, they act as a final check to make sure that language that was not trained out through fine-tuning is still kept in check. These guardrails are updated frequently, and are later used as a basis for further refinement of the AI model when it is updated.

Augmented Language Models

The description of how LLMs are created on the earlier page, as well as the process of output generation, hints that the pre-training in which an LLM finds patterns in language is the key to how it can respond to prompts. This may be at odds with things you might have heard about LLMs being connected to the internet and constantly improving and updating their knowledge base. Indeed, Microsoft’s Copilot LLM gives internet sources for queries, seemingly confirming this suspicion.

However, this is not the case: Copilot, for example, is what is known as an Augmented Language Model, or ALM. At its core, the ability to produce language does come from pre-training, but an ALM can incorporate data from outside its training data (for example by using a so-called 'vector database') by placing this newer data in the (hidden) context of the conversation. When asked to summarize current events, the LLM sends the prompt to the existing Bing search engine, which returns a few results, and the LLM summarizes that information using its existing “map” of language. Thus, the neural network behind an ALM is not constantly updating; it simply uses extensive context to give seemingly up-to-date results.
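The retrieve-and-summarize pattern behind an ALM can be sketched in pseudocode-like Python. search_bing and llm below are hypothetical placeholders, not a real API:

    # Sketch of an Augmented Language Model answering a current-events question.
    def answer_current_events(question):
        results = search_bing(question)   # an external tool fetches fresh data
        context = "\n".join(r.snippet for r in results)
        prompt = (
            "Using only the sources below, answer the question.\n"
            f"Sources:\n{context}\n\n"
            f"Question: {question}"
        )
        return llm.generate(prompt)       # the unchanged LLM summarizes the sources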

'Artificial intelligence utilizing tools to augment itself’, Microsoft Designer

Quiz - How do GenAI models work

What are the limitations of the output of GenAI?

In the previous two sections of this module, we outlined four key insights regarding the ways in which Large Language Models (LLMs) construct their responses:

  • LLMs are initially reactive: they are wholly dependent on the input given to them to start creating.
  • LLMs are probabilistic: the output they present to one user can be very different from the output they present to another user, even if the same prompt has been used.
  • LLMs do not have a long-term vision: they can only take into consideration what has already been generated, not what might be logical to generate further ahead.
  • LLMs are heavily context-dependent, which means that all previously inputted and outputted text will influence a new response.

Stemming from these insights, there are several limitations associated with GenAI output. In this section, we explore four of them.

General limitations

Due to the way LLMs work, there are certain limitations to their output that should be taken into account when working with these kinds of tools.

‘An artificial intelligence that has limitations when being worked with’, Microsoft Designer

Below we discuss some of these limitations.

Lack of truthfulness

LLMs are trained on next-word prediction. This means that they use statistics to estimate the most plausible next word in a sentence. These plausible and convincing responses can nevertheless be incorrect or made up (‘hallucinating AI’).

The model cannot verify its own output or assess its reliability. Always use your critical thinking skills when reading the output and always verify it against other sources of information.

In the example below, ChatGPT is giving an erroneous answer, as the third female Prime Minister of the United Kingdom was Liz Truss. She was in office in 2022. Moreover, given that this prompt was entered in June 2024, ChatGPT is providing information about a fictional future date.

Lack of sources

GenAI systems do not cite the specific sources on which they base their answers, which makes it difficult to verify their claims. They can also hallucinate references. GenAI systems are different from search engines such as Google, so you should not use them in the same way when looking for scientific information. Although some models such as ChatGPT Plus and Microsoft Copilot have begun to include links to internet sources in their output, you should always critically evaluate and cross-reference all AI-generated output.

Below is an example of ChatGPT hallucinating references. Even though the references appear plausible, neither of these articles seems to exist. The DOI codes for the articles are also made up.

Generic or reductive output

The output of GenAI can be quite generic or reductive. Especially when the prompt is very short, simple, or unspecific, the language used in the response can be rather bland, formulaic, and uninspiring.

In the following example, the prompt is very basic and generic, and the output is equally dull:

 

To increase the quality of the output, try to make your prompts as specific as possible. In the example below, the prompt is much more precise: it specifies who the intended audience is, it clearly states the aim of the message, and it offers instructions on how to deliver the message. This information is reflected in the output, as the language is more engaging and dynamic.

Inconsistent output & reproducibility

Due to how text-based AI models work, they generate inconsistent output. When given the same input (e.g. a prompt), the model will give you a different result each time. This makes it difficult to consistently reproduce content from such models.

The example below illustrates that two identical prompts entered on the same day still produce a different result each time. Not only do the language and structure of the output differ in several ways, but the content is also different. For example, the first answer states that Michael Phelps competed in five Olympic Games, whereas the second output mentions only four Olympic Games.
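You can try this experiment yourself against a model API. Below is a minimal sketch using the (real) OpenAI Python SDK; the model name and prompt are illustrative, and the two answers will typically differ even though the calls are identical:

    # Two identical requests, usually two different answers.
    from openai import OpenAI

    client = OpenAI()  # assumes an API key is set in the environment
    prompt = "How many Olympic Games did Michael Phelps compete in?"

    for attempt in (1, 2):
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"Attempt {attempt}:", response.choices[0].message.content)

Lowering the temperature parameter reduces this variation, though in practice it does not fully guarantee identical output.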

Since reproducibility is often a key aspect of academic research, it has been suggested that the use of AI models that generate unreproducible output for research purposes may lead to a ‘reproducibility crisis’ in science. Therefore, the inconsistent output and its impact on reproducibility should always be taken into account when considering using GenAI for academic purposes.

CLEAR prompting framework

Reflecting on the insights gained from learning about the limitations of text-based Generative AI (LLMs), you can use a framework for interacting with these models to work better with the way they create output. The CLEAR framework by Leo S. Lo is such a framework (see this paper for more information; an example prompt follows the list below). It states that any time you interact with an LLM, you and your prompt should be:

  • Concise - do not give more information than needed, give enough space for a proper reaction, and avoid unnecessary context. Otherwise, the model might get 'distracted';
  • Logical - write out a plan for the structure of the response to counteract the lack of logical thought of the model, especially when looking for a long response;
  • Explicit - give clear instructions as to what you want to see in the response to direct the attention mechanism to the most important aspects of your prompt. Also, you can specify the way you want the model to format the output;
  • Adaptive - do not give the same prompt for different topics to account for potential gaps in the dataset/pattern recognitions of the model. Try out different formulations and frames for your prompt to get the model to produce different outcomes;
  • Reflective - consider any incremental value of a follow-up prompt to make the decision on whether or not the current conversation session (context) can still lead to better results. Especially when just starting a conversation, following up with adjustments or specifications to what you mentioned in your initial prompt can help to fine-tune the output.
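For example, a prompt written with the CLEAR criteria in mind might look like the following (the role, audience, and word count are invented for illustration):

    Act as a study advisor. Write a 150-word summary of the difference
    between pre-training and fine-tuning for first-year psychology students.
    Structure it as a one-sentence definition of each, followed by one
    concrete example. Use plain, non-technical language.

The prompt is concise, states explicitly what the output should contain and how it should be formatted, and leaves room for reflective follow-up adjustments.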

Quiz - Limitations of GenAI models

What ethical implications exist surrounding GenAI?

Generative AI tools are accompanied by ethical challenges. Some of these challenges are specifically related to their use in higher education, while others intersect with communities, the environment, and humanity as a whole.

Work through the pages to find out more about some of the main ethical concerns related to the development and use of Generative AI.

The role of humans in AI ethics

When considering how the responses of AI models are constructed, it is important to remember that LLMs lack the internal thought or reasoning that shapes human responses. They create text simply based on patterns in language, learned by processing vast amounts of language data. However, which responses a model actually gives to a user is largely determined by the human feedback it received during the fine-tuning phase, which is a product of the values and aims of the AI model's developer. OpenAI, the developer of ChatGPT, stated that it had three guiding principles for the fine-tuning of its models (OpenAI, 2022):

  • helpfulness
  • truthfulness
  • harmlessness

However, journalists have found that other developers, such as Microsoft, have put other values such as persuasiveness higher than truthfulness on their priorities list (New York Times, 2023).

Therefore, it is important to remember that AI-generated content is rooted in human choices, values, and flaws. By understanding the process of how responses are generated, we hope you can (begin to) understand that AI models are not all-knowing machines that can solve any problem given to them, but should be understood as tools that can fail from time to time.

 

‘A helpful, truthful, and harmless generative AI’, Microsoft Designer.

Bias and stereotypes

Generative AI models cannot tell the difference between true and false or right and wrong. They only contain information from their training data, which often consists of information obtained from large parts of the internet. Human biases and stereotypes present in this training data - such as those related to race, gender, ethnicity, and socioeconomic status - may therefore be reflected in the output. One dataset often used for training Generative AI models is the massive (9.5-plus petabytes), freely available archive of web crawl data provided by Common Crawl. These datasets may not be entirely free of bias and other problematic content, and a Generative AI model trained on them may therefore include such content in its output.

While Big Tech companies like OpenAI have built so-called ‘guardrails’ to prevent unethical, hateful, and discriminatory results from being generated, there remains a risk of bias because of the biases inherently present in the training data. On top of that, the biases of the people training the models are also reflected in the output.  

For example, if a model is trained on a dataset that associates certain jobs with specific genders, the model is more likely to generate output confirming these stereotypes. You should always check the AI-generated output for bias, stereotypes, and other harmful content. When asking Bing - now Microsoft Copilot - to create an image of ‘a biologist working in a state-of-the-art laboratory’, the generated image was more likely to depict a white male scientist than a female scientist of color.

‘A biologist working in a state-of-the-art laboratory’, Microsoft Designer.

 

GenAI developers are aware of these biases and have worked hard to address them. However, this has raised a whole set of new issues. In February 2024, Google sparked controversy when its GenAI model Google Gemini appeared to have become reluctant to generate images of white people in an attempt to make the output of its image generator more diverse. For example, a query to generate ‘image of the pope’ resulted in images of a Black and a female pope. And when asking for pictures of ‘a US senator from the 1800s’, the results included what appeared to be Black and Native American women.

‘A US senator from the 1800s’, Google Gemini. Posted in https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical 

 

These results, though more diverse, are historically inaccurate. Google has since apologized, writing on X that “Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.” (quoted in The Verge, 2024). Big Tech companies will continue to work to address these issues.

Whenever you evaluate AI-generated output, apply your critical thinking skills to identify any potential biases or stereotypes in the output, and cross-reference the information with academic sources (accessed via the University Library, for example) to offer a more balanced view.

Lack of diversity in training data

The datasets used to train foundational models such as GPT-3.5 (the model behind ChatGPT) are often limited in their scope. As the map from the Internet Health Report 2022 shows, more than 60 percent of the so-called ‘benchmark datasets’ - datasets that big tech companies use to test the performance of their models - come from the United States. There is very little data from South America and almost none from Africa and Russia.

Source: Mozilla Foundation. Facts and Figures about AI - The Internet Health Report 2022. https://2022.internethealthreport.org/facts/

 

Using GenAI could reduce cultural and linguistic diversity and lead to the marginalization of underrepresented groups. ChatGPT, for example, has mainly been trained on English data and data written in a few other languages, meaning that source materials in English are overrepresented. Consequently, minority voices can be left out, as these are less present in or even absent from the training data.

If this AI-generated output is subsequently used to further train the AI models, this could lead to an even further reduction of diversity and complexity in the output. Namely, using so-called ‘synthetic data’ as training material unintentionally creates a feedback loop which can perpetuate previously existing biases. When a model relies heavily on such synthetic data for its training purposes, there is even a risk of ‘model collapse’, which is when the AI model ends up generating overly repetitive or low-quality output.

To learn more about the concept of 'model collapse', watch the video below:

Disinformation

The ability of GenAI to create realistic and plausible text, video, audio, and code makes false, biased, or politically motivated media faster and easier to produce. A prime example is the so-called ‘deepfake’: an image or audio/video recording that has been manipulated to depict someone doing or saying something they never actually did or said. The two AI-generated photos below, for example, are realistic yet fake depictions of Pope Francis riding a white motorcycle and wearing a white puffer jacket, respectively.

Source: https://www.nytimes.com/2023/04/08/technology/ai-photos-pope-francis.html

 

Deepfakes can be used to spread disinformation and to promote hate speech or politically biased content. Always try to verify the authenticity of images or recordings by tracing them back to their original source. One way of doing this is by conducting a so-called ‘reverse image search’; search engines such as Google offer this option. Also, be aware that any images you share on the internet may be incorporated into GenAI training data and might be manipulated and used in unethical ways.

Environmental costs

It is difficult to calculate the exact environmental costs of GenAI. The size of the model, the training approach used and the capabilities of the tool influence how much energy and water the model uses. Likewise, there are very different energy needs for training a model and for using it. That said, the energy costs of training and running the tools are estimated to be considerable.

One recent peer-reviewed study estimates that by 2027, the annual electricity usage of AI servers could equal the annual electricity usage of a country like the Netherlands or Sweden. This amounts to 0.5 percent of the world’s current electricity use.

As for water consumption - used for electricity generation and cooling of the data servers - according to one study, ChatGPT could use up to 500 ml of water per 5-50 prompts.

 

‘The environmental costs of generative AI’, Microsoft Designer.

Exploitative labor to train GenAI models

Some GenAI tools have been trained using ‘reinforcement learning from human feedback’ (see Pre-training & Fine-tuning). For this kind of training, human workers review a prompt and the generated output to give the model feedback about the accuracy and helpfulness of the output. Workers also have to check whether the output is appropriate.

While this approach is not new - social media platforms are also known to employ humans to moderate their content - ChatGPT’s company OpenAI was criticised for outsourcing this practice to low-wage employees in Kenya. These employees must review toxic and explicit content to make the tools safer to use, but this often comes at the cost of their own mental well-being. More generally, companies are known to outsource AI-related work to employees in low-income countries in order to reduce costs. For example, a 2023 academic study shows how French start-ups have outsourced AI-related tasks to low-paid workers in Madagascar.

 

‘Exploitative labour to train generative AI models’, Microsoft Designer.

Unequal accessibility

Cost is a barrier for many students in accessing generative AI tools. Some providers offer both free and paid versions of their tool, which can lead to unequal accessibility among students. Those who can afford the paid version of a tool may have an unfair advantage in assignments that incorporate the use of GenAI.

With ChatGPT, for example, there are important differences between the free account and the paid account (ChatGPT Plus) that have an impact on its accessibility:

  • There is high global demand, but the number of simultaneous users is limited. Users with a paid account are given priority and have access even during peak times; access with the free account is less predictable and works best during off-peak hours.
  • The paid version works faster than the free version, especially with longer content.
  • Users with a paid account are given access to new models, features, and updates first. For example, since September 2023 ChatGPT can browse the internet (so it is not limited to the information up to September 2021 included in its training data), you can ask questions using your voice, and you can upload images. But all of these features only work for users with a paid account.

Moreover, if students are learning online from other countries, the use of particular GenAI tools may be restricted due to government regulation or censorship.

Copyright issues

The use of GenAI raises complex questions about copyright:

  • Who - if anyone - owns the copyright on AI-generated output?
  • Does copyright infringement occur when the training data that was used to generate this output contains copyrighted material?
  • Can you use copyrighted materials to formulate prompts?

At this moment, it is difficult to provide clear-cut answers to these questions. It will take time for lawmakers and policy makers to catch up with the latest developments.

For the time being, adhere to the following guidelines:

  • Do not enter copyrighted materials into the GenAI tool.
  • Observe caution with entering your own materials into the GenAI tool, as your prompts may be used to fine-tune the model.
  • Always acknowledge and cite AI-generated output in your work. Referencing conventions are still under development; however, some of the major citation style guides, including APA, MLA, and Chicago, already offer preliminary referencing guidelines on their websites.
  • When you are unsure how to cite AI-generated content, include a note in your work that explains where and how you used the AI tool and which prompts you used to do so.

 

‘Copyright’, Microsoft Designer.

Data privacy issues

The data from almost all generative AI tools is processed and stored on US servers and must comply with the privacy laws and regulations applicable there. This legislation is less strict than European laws and regulations.

Most privacy policies state that information about you can be collected. Providers may gather information about your visit, how you use the services, and your interactions. Other details such as your IP address, location, and used devices are also recorded. Your information can (and will) be used for maintaining, improving, and analyzing the tools. In other words: your input will probably be used to further train the GenAI tools.

That is why it is important not to provide GenAI systems with personal data, privacy-sensitive information, or copyright-protected material.

 

‘Data privacy issues’, Microsoft Designer.

Quiz - Ethical implications of GenAI models

What are some applications, considerations and conditions for using GenAI?

This section is split into three parts:

  • educational applications for students
  • considerations for teachers
  • conditions for responsible use

Depending on your role in the classroom, you can choose one of the first two parts to read. In these parts, we provide a few examples of ways you can use Generative AI tools to assist you in your studies or teaching; each part also has a quiz to test your knowledge of the information presented in it. In the Conditions part, we outline a few ground rules that you can use to determine if and how you should engage with Generative AI tools and their output.

Educational applications for students

There are numerous ways in which text-based generative AI models can assist you in your studies. Go through the following tabs to explore a few examples of how you could use GenAI during your studies. Please note that this is just a small sample of the many possibilities that exist. Moreover, some of the examples below can be combined to form new applications. In all cases, it is important to describe to the AI tool:

  • what you want to do
  • which perspective/role you want the tool to take
  • how the output should be formatted

Please note: Before using GenAI for an assignment, always check with your course instructor about what is and isn’t allowed! Unauthorized or insufficiently attributed use of GenAI is considered fraud, as stated in the UG-wide policy document for the use of AI in Teaching.

For more information on how to write effective prompts when using GenAI in any of the below ways, visit this website.

 

Transcribe or translate interviews

There are AI tools that can transcribe spoken text into written text, and vice versa. You can, for example, use these tools when you conduct interviews and have to process the gathered information. In this case, the use of such an AI tool that can automatically transcribe the spoken text into written form can save you quite some time. Alternatively, you may use AI tools to translate interviews into another language.

Possibility engine

GenAI can generate alternative ways of expressing an idea. Enter the same prompt several times to examine alternative responses. You can then reflect on these different alternatives.

Summarize texts

When there is a lot of text to read, for example when conducting a literature analysis, you can use an AI tool to summarize the different texts to be able to quickly scan which parts are relevant to read in more detail and which are not. Do bear in mind, however, that these responses may contain inaccuracies or hallucinations, so always approach these summaries with caution and a critical eye!

Generate search terms

When you need to conduct a literature search and are unsure which search terms to use, GenAI can help you generate search terms when you describe the topic to it.

Help with coding

When you learn to write computer code, you may struggle with writing syntax, defining variables, and implementing functions and loops correctly. Certain AI tools, for example ChatGPT, are trained in a variety of programming languages. Such an AI tool can quickly debug code, offer suggestions for improvement, and provide feedback. In this way, you receive quick feedback and the AI tools aid you in your learning process.

Socratic opponent

GenAI can act as an opponent to develop an argument. You can enter prompts into a GenAI model following the structure of a conversation or debate. This can help you to challenge and further sharpen your argument.

Coach

GenAI can guide you through a complex task by providing instructions and taking you through these instructions, one step at a time.

Study buddy

AI can help you reflect on learning material or explain new concepts. You can describe your current level of understanding to the GenAI and ask for ways to help you study the material. The tool can also be used to help you prepare for other tasks such as job interviews.

Motivator

AI offers games and challenges to extend learning. You can ask the GenAI tool for ideas about how to extend your learning after providing a summary of the current level of knowledge (e.g. quizzes, exercises).

Quizmaster

You can use diverse GenAI tools to shape your exam preparation process. For example, you can ask GenAI to generate mock questions or quizzes.

Research process

The following diagram might offer some inspiration on how to use text-based generative AI during the research process:

Quiz - Applications for students

Considerations for teachers

There are different types of AI tools that have the potential to enhance education and allow for innovative approaches to the educational process. Please go through the following tabs to explore what factors you need to take into account when considering the use of GenAI in the classroom.

For help with generating prompts for educational design, consider this website of the VU ('Vrije Universiteit Amsterdam'). For more general information on how to write effective prompts, visit this website.

 

Course design

When (re)designing a course, an important model to keep in mind is the constructive alignment triangle. This model shows that the course's intended learning outcomes should align with the teaching and learning activities of the course, as well as with the assessment methods (both formative and summative). When you as a teacher plan on using an AI tool in your course, for instance in the assessment, it is important that students learn to work with this tool during the course's teaching and learning activities. Depending on whether the use of an AI tool is a goal in itself or merely used as an aid in teaching and learning, it can be included in the learning outcomes.

Learning outcomes

If you want to adapt your courses and programmes to the rapid and continuous advancement of AI tools, it is good to question whether all current learning outcomes (at both course and programme level) are still relevant. Since the rapid dispersion of AI in different fields of work can change future professional demands, it is recommended to reassess which (new) competencies students will need in order to succeed in their future careers and whether the learning outcomes address these. As AI tools can help with some of the lower-level cognitive tasks students have to perform, you could decide to formulate learning outcomes that focus on higher cognitive levels. Lastly, skills related to working with AI tools, such as effective prompting, may be a relevant addition to the learning outcomes of programmes or courses.

Practically, consider whether the wording and verbs of the learning outcomes have to change to reflect the students’ learning process better. For example, consider the change from 'a student can write a policy document' to 'a student can develop a policy document'. The product the student ultimately hands in is the same, but the focus is more on the process rather than the outcome.

Skills students still need to learn when AI is widely available

Whether or not students still need to learn a certain skill when readily available AI tools can do this for them is a question without a clear-cut answer. If it is important that your students learn skills for which they could also use an AI program, there are ways to prevent or limit the use of an AI tool. For instance, if you want students to learn writing skills you can have them work on their assignment in an ‘AI-free’ environment such as a classroom or an exam hall. Even if learning a specific skill is not the main goal of your course, it is still important for students to be able to competently judge the output of an AI tool. Therefore, learning a skill, at least to a certain degree, is still necessary and important.

Learning and assessment

The availability of GenAI tools may have an impact on assessment. There are concerns that students may generate (parts of) an essay using GenAI. Tools are available that promise to detect when a piece of text is AI-generated. The performance of these tools is not foolproof, however, and they can lead to situations where a student is falsely accused of AI plagiarism (false positives).

A good way to deal with this concern is to reshape assessment practices in such a way that the focus is more on continuous assessment (combining formative and summative practices in a course) and feedback rather than merely high-stakes summative assessment at the end of a course. For instance, looking at the capabilities of GenAI to quickly generate feedback, students could be asked to interact with such a tool and document the outcome to give insight into their learning process. For such process-oriented assessments, you may ask students to try out different prompts and ask them to critically reflect and evaluate the output they receive.

Enhancing student learning

AI can be used to enhance student learning. For example, students learning how to program computer code often struggle with writing correctly formatted syntax that does what they envision it to do. AI tools that have been trained on datasets that include programming languages can quickly debug code and offer suggestions for improvement. In this way students receive quick feedback without having to wait for their teacher.

AI tools can enhance student learning regarding different types of skills, such as writing skills and analytical or critical thinking. It is important to emphasize to students that results from an AI tool can function as a starting point, but always require further evaluation and adaptation from the students themselves. Assignments should be designed in such a way that such behavior is stimulated. In addition, always be clear to students up front about when they are and are not allowed to use AI in your courses and how they are required to report on their use of AI.

AI can also be used to create personalized learning experiences for students, for example through the use of adaptive learning environments such as SlimStampen. When used correctly and responsibly, AI programs can tremendously help students in the development of their skills and knowledge.

Quiz - Considerations for teachers

Conditions for responsible use

Because of the limitations and ethical considerations listed in earlier sections of this module, it is essential that you fact-check and critically evaluate all AI-generated output. The ‘EDIT prompting framework’ is a useful tool to critically evaluate AI-generated output:

  • Evaluate - Evaluate your AI output content for language, facts, and structure
  • Determine - Determine accuracy and corroborate with sources
  • Identify - Identify biases and misinformation in output
  • Transform - Transform content to reflect adjustments and new findings

The following diagram is another useful resource to check whether you can safely use ChatGPT:

 

 

Finally, view the video on this page to learn three rules of thumb on how to use GenAI responsibly:

References and future readings

Section: What is AI?

Artificial Intelligence


IBM. (n.d.). What is Artificial Intelligence. ibm.com. Retrieved from: https://www.ibm.com/topics/artificial-intelligence

Last, B., & Sprakel, T. (2023). Chatten met Napoleon. Boom.

Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. Retrieved from: https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/. Licensed under CC BY 4.0 International License

The University of Queensland Library. (2023). Artificial Intelligence: Digital Essentials. Retrieved from: https://uq.pressbooks.pub/digital-essentials-artificial-intelligence/. Licensed under CC BY-NC 4.0 International License

 

Generative AI (GenAI)


Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000386693. Licensed under CC BY-SA 4.0 International License

Pasick, A. (2023). Artificial Intelligence Glossary: Neural Networks and Other Terms Explained. The New York Times. Retrieved from: https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html

Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. Retrieved from: https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/. Licensed under CC BY 4.0 International License

The University of Queensland Library. (2023). Artificial Intelligence: Digital Essentials. Retrieved from: https://uq.pressbooks.pub/digital-essentials-artificial-intelligence/. Licensed under CC BY-NC 4.0 International License

 

Large Language Models (LLM)


EDU Support. (2023). Artificial Intelligence (AI) Tools in education. Educational Support and Innovation (ESI). Retrieved from: https://edusupport.rug.nl/2365784080

Pasick, A. (2023). Artificial Intelligence Glossary: Neural Networks and Other Terms Explained. The New York Times. Retrieved from: https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html

Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. Retrieved from: https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/. Licensed under CC BY 4.0 International License

 

Machine Learning (ML)


EDU Support. (2023). Artificial Intelligence (AI) Tools in education. Educational Support and Innovation (ESI). Retrieved from: https://edusupport.rug.nl/2365784080

Last, B., & Sprakel, T. (2023). Chatten met Napoleon. Boom.

Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. Retrieved from: https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/. Licensed under CC BY 4.0 International License

The University of Queensland Library. (2023). Artificial Intelligence: Digital Essentials. Retrieved from: https://uq.pressbooks.pub/digital-essentials-artificial-intelligence/. Licensed under CC BY-NC 4.0 International License

Yasar, K. (2023). Black box AI. TechTarget. Retrieved from: https://www.techtarget.com/whatis/definition/black-box-AI

 

Neural Networks


Hardesty, L. (2017). Explained: Neural networks. MIT News. Retrieved from: https://news.mit.edu/2017/explained-neural-networks-deep-learning-0414

Last, B., & Sprakel, T. (2023). Chatten met Napoleon. Boom.

Pasick, A. (2023). Artificial Intelligence Glossary: Neural Networks and Other Terms Explained. The New York Times. Retrieved from: https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html

The University of Queensland Library. (2023). Artificial Intelligence: Digital Essentials. Retrieved from: https://uq.pressbooks.pub/digital-essentials-artificial-intelligence/. Licensed under CC BY-NC 4.0 International License

 

Pre-training & Fine-tuning


Abideen, Z. (2023). Autoregressive Models for Natural Language Processing. Medium. Retrieved from: https://medium.com/@zaiinn440/autoregressive-models-for-natural-language-processing-b95e5f933e1f

Encord. (n.d.). Generative Pre-Trained Transformer (GPT). Encord.com. Retrieved from: https://encord.com/glossary/gpt-definition/

Last, B., & Sprakel, T. (2023). Chatten met Napoleon. Boom.

OpenAI. (2022). Aligning language models to follow instructions. Openai.com. Retrieved from: https://openai.com/index/instruction-following/

Radford, A., Narasimhan, K., Salimans, T., & Sutskever, I. (2018). Improving Language Understanding by Generative Pre-Training. Semantic Scholar. Retrieved from: https://www.semanticscholar.org/paper/Improving-Language-Understanding-by-Generative-Radford-Narasimhan/cd18800a0fe0b668a1cc19f2ec95b5003d0a5035

 

Transformers & Attention


Abideen, Z. (2023). Attention Is All You Need: The Core Idea of the Transformer. Medium. Retrieved from: https://medium.com/@zaiinn440/attention-is-all-you-need-the-core-idea-of-the-transformer-bbfa9a749937

Last, B., & Sprakel, T. (2023). Chatten met Napoleon. Boom.

Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000386693. Licensed under CC BY-SA 4.0 International License

Pasick, A. (2023). Artificial Intelligence Glossary: Neural Networks and Other Terms Explained. The New York Times. Retrieved from: https://www.nytimes.com/article/ai-artificial-intelligence-glossary.html

Section: How do GenAI models work?

The five general steps of the next-word prediction process


Abideen, Z. (2023). Autoregressive Models for Natural Language Processing. Medium. Retrieved from: https://medium.com/@zaiinn440/autoregressive-models-for-natural-language-processing-b95e5f933e1f

Alagumalai, V. (2023). Demystifying the Architecture of ChatGPT: A Deep Dive. LinkedIn. Retrieved from: https://www.linkedin.com/pulse/demystifying-architecture-chatgpt-deep-dive-vijayarajan-a/

Collins, K. (2023). How ChatGPT could embed a ‘Watermark’ in the Text It Generates. The New York Times. Retrieved from: https://www.nytimes.com/interactive/2023/02/17/business/ai-text-detection.html

Kyle Hill. (2023). ChatGPT Explained Completely [Video]. YouTube. Retrieved from: https://www.youtube.com/watch?v=-4Oso9-9KTQ

Last, B., & Sprakel, T. (2023). Chatten met Napoleon. Boom.

Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000386693. Licensed under CC BY-SA 4.0 International License

Wolfram, S. (2023). What is ChatGPT doing… and why does it work?. Stephen Wolfram Writings. Retrieved from: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/

Section: What are the limitations of the output of GenAI models?

Inconsistent output & reproducibility


Ball, P. (2023). Is AI leading to a reproducibility crisis in science? Nature. Retrieved from: https://www.nature.com/articles/d41586-023-03817-6

 

CLEAR prompting framework


Lo, L. S. (2023). The CLEAR path: A framework for enhancing information literacy through prompt engineering. The Journal of Academic Librarianship, 49(4). https://doi.org/10.1016/j.acalib.2023.102720

 

Throughout section


Miao, F., & Holmes, W. (2023). Guidance for generative AI in education and research. UNESCO. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000386693. Licensed under CC BY-SA 4.0 International License

Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. Retrieved from: https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/. Licensed under CC BY 4.0 International License

Sabzalieva, E., & Valentini, A. (2023). ChatGPT and Artificial Intelligence in higher education: Quick start guide. UNESCO. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000385146/PDF/385146eng.pdf.multi. Licensed under CC BY-SA 3.0 International License

TLC Science. (2024). Responsible use of Generative Artificial Intelligence (GenAI) in higher education. University of Amsterdam. Retrieved from https://rise.articulate.com/share/MyfLgG-cXE1a7XBuctQhndpJB-BgpYny#/. Licensed under CC BY-NC-SA 4.0 International License

Section: What ethical implications exist surrounding GenAI?

Opening section


OpenAI. (2022). Aligning language models to follow instructions. Openai.com. Retrieved from: https://openai.com/index/instruction-following/

Ramponi, M. (2022). How ChatGPT actually works. AssemblyAI. Retrieved from: https://www.assemblyai.com/blog/how-chatgpt-actually-works/

Weise, K., & Metz, C. (2023). When A.I. Chatbots Hallucinate. The New York Times. Retrieved from: https://www.nytimes.com/2023/05/01/business/ai-chatbots-hallucination.html

 

Bias and stereotypes


Baack, S. (2024). Training Data For The Price Of A Sandwich. Mozilla Foundation. Retrieved from: https://foundation.mozilla.org/en/research/library/generative-ai-training-data/common-crawl/. Licensed under CC BY 4.0 International License

Robertson, A. (2024). Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis. The Verge. Retrieved from: https://www.theverge.com/2024/2/21/24079371/google-ai-gemini-generative-inaccurate-historical

 

Lack of diversity in training data


Mozilla Foundation. (2022). Internet Health Report 2022. Retrieved from: https://2022.internethealthreport.org/facts/

 

Disinformation


Huang, K. (2023). Why Pope Francis Is the Star of A.I.-Generated Photos. The New York Times. Retrieved from: https://www.nytimes.com/2023/04/08/technology/ai-photos-pope-francis.html

 

Environmental costs


Erdenesanaa, D. (2023). A.I. Could Soon Need as Much Electricity as an Entire Country. The New York Times. Retrieved from: https://www.nytimes.com/2023/10/10/climate/ai-could-soon-need-as-much-electricity-as-an-entire-country.html

O’Brien, M., & Fingerhut, H. (2023). A.I. Tools fueled a 34% spike in Microsoft’s water consumption, and one city with its data centers is concerned about the effect on residential supply. Fortune. Retrieved from: https://fortune.com/2023/09/09/ai-chatgpt-usage-fuels-spike-in-microsoft-water-consumption/

 

Exploitative labour to train GenAI models


Le Ludec, C., Cornet, M., & Casilli, A. A. (2023). The problem with annotation. Human labour and outsourcing between France and Madagascar. Big Data & Society, 10(2). https://doi.org/10.1177/20539517231188723

Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time. Retrieved from: https://time.com/6247678/openai-chatgpt-kenya-workers/

 

Whole section


Sabzalieva, E., & Valentini, A. (2023). ChatGPT and Artificial Intelligence in higher education: Quick start guide. UNESCO. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000385146/PDF/385146eng.pdf.multi. Licensed under CC BY-SA 3.0 IGO License

TLC Science. (2024). Responsible use of Generative Artificial Intelligence (GenAI) in higher education. University of Amsterdam. Retrieved from: https://rise.articulate.com/share/MyfLgG-cXE1a7XBuctQhndpJB-BgpYny#/. Licensed under CC BY-NC-SA 4.0 International License

Van Gurp, S. / Avans Hogeschool (2023). Infographic Generatieve AI voor studenten / Generative AI for students. Edusources. Retrieved from: https://edusources.nl/materials/45e2751d-1076-4979-9232-a847bcb966dd. Licensed under CC BY-NC-SA 4.0 International License

Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. Retrieved from: https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/. Licensed under CC BY 4.0 International License

EDU Support. (2023). Artificial Intelligence (AI) Tools in education. Educational Support and Innovation (ESI). Retrieved from: https://edusupport.rug.nl/2365784080

Section: What are some applications and conditions for using GenAI?

Educational applications for students


EDU Support. (2023). Artificial Intelligence (AI) Tools in education. Educational Support and Innovation (ESI). Retrieved from: https://edusupport.rug.nl/2365784080

Paul R. MacPherson Institute for Leadership, Innovation and Excellence in Teaching. (2023). Generative Artificial Intelligence in Teaching and Learning at McMaster University. Retrieved from: https://ecampusontario.pressbooks.pub/mcmasterteachgenerativeai/. Licensed under CC BY 4.0 International License

Sabzalieva, E., & Valentini, A. (2023). ChatGPT and Artificial Intelligence in higher education: Quick start guide. UNESCO. Retrieved from: https://unesdoc.unesco.org/ark:/48223/pf0000385146/PDF/385146eng.pdf.multi. Licensed under CC BY-SA 3.0 IGO License

 

Conditions for responsible use


Blakeslee, S. (2004). The CRAAP Test. LOEX Quarterly, 31(3), 6–7. https://commons.emich.edu/loexquarterly/vol31/iss3/4 / https://library.csuchico.edu/sites/default/files/craap-test.pdf

Lukes, D. (2023). Learn to Be a Prompt Engineer: PREP and other guides to uncovering the secrets to ChatGPT mastery. LinkedIn. Retrieved from: https://www.linkedin.com/pulse/learn-prompt-engineer-prep-other-guides-uncovering-secrets-lukes/

Tiulkanov, A. (2023). Is it high time to take ChatGPT offline? LinkedIn. Retrieved from: https://www.linkedin.com/pulse/high-time-take-chatgpt-offline-aleksandr-tiulkanov/ / https://www.linkedin.com/posts/tyulkanov_a-simple-algorithm-to-decide-whether-to-use-activity-7021766139605078016-x8Q9/

TLC Science. (2024). Responsible use of Generative Artificial Intelligence (GenAI) in higher education. University of Amsterdam. Retrieved from: https://rise.articulate.com/share/MyfLgG-cXE1a7XBuctQhndpJB-BgpYny#/. Licensed under CC BY-NC-SA 4.0 International License

  • The arrangement Critical AI Literacy was made with Wikiwijs by Kennisnet. Wikiwijs is an educational platform where you can find, create, and share learning materials.

    Author
    Martijn Blikmans-Middel
    Last modified
    2025-05-08 10:05:48
    License

    This learning material is published under the Creative Commons Attribution 4.0 International license. This means that, as long as you give attribution, you are free to:

    • Share - copy and redistribute the material in any medium or format
    • Adapt - remix, transform, and build upon the material

    for any purpose, including commercial purposes.

    More information about the CC Attribution 4.0 International license.

    Additional information about this learning material

    The following additional information is available about this learning material:

    Description
    This is a Wikiwijs version of the Critical AI Literacy module of the Rijksuniversiteit Groningen, originally made available in the university's Brightspace environment. The module covers the definition, workings, limitations and opportunities, and ethical implications of (text-based) Generative AI models. Through a mix of videos, images, and text, the learner covers the basics of text-based Generative AI models in approximately two hours. This training is intended to establish a basic knowledge of Generative AI, so that potential users can make an informed choice about whether or not they want to use an AI tool. The module mentions a number of ground rules and tips for working well with AI tools, but is not aimed at the optimal or most efficient use of these tools. The module was originally created by Lilian Tabois, Martijn Blikmans-Middel, Alicia Streppel, Rob Nijenkamp, and Yvonne de Jong. This Wikiwijs version was made by Martijn Blikmans-Middel.
    Education level
    WO - Bachelor; WO - Master;
    Learning content and objectives
    Computer science (Informatica);
    End user
    pupil/student
    Difficulty
    average
    Learning time
    2 hours 0 minutes
    Keywords
    ai, ai literacy, basic skills, critical literacy, digital literacy, implications, limitations, technology, use cases, working