Smart studying with AI (Bachelor)

1. Introduction

Welcome to this e-learning about AI! Whether you use it to generate new ideas or to get surprising literature suggestions, the number of applications is vast, and it keeps growing. To use AI responsibly in your studies, it is important to learn how to work with it. That is why the VU University Library (UBVU), which is responsible for information literacy education, has created this e-learning. It is intended for beginning students: you are taking your first steps towards AI literacy!

After completing this e-learning, you will be able to:

  • Explain how AI tools and AI models are developed through the collection and processing of data
  • Identify relevant AI tools and language models
  • Assess when the use of AI tools is meaningful
  • Create effective prompts for searching and finding literature
  • Reflect on privacy and the origin of data
  • Correctly annotate AI output with attention to copyright

Completing the entire e-learning takes approximately 60 minutes; the exact duration depends on how many sections you go through. The content alternates with short assignments to test yourself. Check your prior knowledge with the quiz at the beginning of each chapter, and test what you have learned with the quiz at the end of each chapter.


Want to get up to speed on AI quickly? Go to the FAQ, where the most important questions are answered concisely.

 

Do you have questions about this e-learning? Contact the UB: vraag.ub@vu.nl.

 

 

FAQ

1. What is AI?

When people talk about AI, they usually mean: AI based on Large Language Models (LLMs), also known as generative AI (GenAI).
These language models are trained on enormous amounts of text and can therefore generate new text. They can summarize, translate, or analyze texts for you. The most well-known example is GPT-4 (OpenAI). Keep in mind that AI is more than just GenAI. It is, in fact, an umbrella term for multiple applications.

You can find more information in Chapter 2 of this e-learning.

2. What is VU's policy regarding the use of AI?

VU sees AI as a powerful tool for studying and therefore aims to develop students' AI literacy. The point of this literacy is not to promote AI use as such, but to help you learn to assess when using AI is desirable and, if so, in what way. Students also learn what the risks are.

Read more: https://vu.nl/en/news/2025/new-framework-for-generative-ai-in-education 

3. Are you allowed to use AI tools for your studies?

In principle, yes, but make sure to do so properly. Preferably use tools for which the university holds a license: work with Copilot rather than ChatGPT, since the former safeguards user privacy. If no licensed tool is available, at least use the tool critically. And always check with your lecturer whether AI use is permitted in a specific course.

More information in chapters 4 and 6.

4. Can a lecturer require you to use an AI tool?

Yes, but only for AI tools for which VU holds a license. Lecturers may, for example, ask you to use Microsoft Copilot for assignments via the VU license. You then log in with your VU credentials; the data remains within VU and is not shared with OpenAI. Guidelines will change over time, so keep an eye on updates about the latest developments.
For more information, see the student information page.

5. How does AI add value to your studies?

By treating AI as a study companion that can assist you with various tasks. You can ask AI to explain complex concepts or generate practice questions. Or you might ask it to come up with counterarguments or offer alternative perspectives. Ultimately, this helps you better support your own viewpoint. However you use AI, see it as a tool—never as a replacement for your own thinking.

More information in Chapter 3.

6. What are the risks of using AI?

AI literacy means being able to engage critically and responsibly with AI applications. AI tools can hallucinate—this means they may invent fictional sources or non-existent titles. Although some statements may sound convincing, they can be incorrect. One risk is that you give a prompt, receive an answer, but don’t fully understand why the tool generated that response. Companies provide only limited information about how their products work. Also, be careful not to use AI without proper attribution or as a substitute for your own work. This can lead to fraud or plagiarism, which are serious offenses in academia.

Want to read more? Go to Chapters 6 and 7.

7. Does AI take over academic skills?

AI partly helps you with tasks you used to do yourself, such as summarising a text or searching for relevant information. However, tasks are not just disappearing—new skills are also being added. For example, you need to be even more critical of information when it comes from AI. That said, AI does not completely take over your skills. It supports tasks that you still need to be able to perform yourself and for which you remain responsible.

8. Is AI taking over jobs?

Yes, but that’s only half the story. Just like with earlier innovations, the arrival of AI will lead to the disappearance of some jobs or significant changes in their content. This mainly affects jobs in the service sector, such as translators and call center employees. Those performing physical work, such as hospitality or construction workers, are less affected. Despite the major impact on the labor market, AI does not automatically lead to mass layoffs. At the same time, these developments also create opportunities. As with previous technological breakthroughs, new jobs will emerge.  

9. Who can you contact with questions about AI?

You can find information about AI in multiple places. If you want to know more about the university’s policy on AI, keep an eye on the VU website. Not sure whether you’re allowed to use AI in a particular course, or how? Ask the course lecturer. And all information related to AI literacy—from the definition of the concept to tips on how to become AI literate—can be found in this e-learning. The University Library keeps it up to date so that it reflects the most recent developments.

2. What is AI?

Literally, AI stands for Artificial Intelligence. Don’t think of futuristic robots or science fiction, but rather of technology that performs everyday tasks that would normally require human intelligence. Netflix recommendations, spam filters, or route planning—these are examples of classic AI.

AI systems analyse data, recognise patterns, and make predictions or decisions. Within AI, there are various subfields, such as machine learning, natural language processing, and computer vision. Generative AI—like ChatGPT—is a recent and notable form, because it not only recognises patterns but can also create new content. This e-learning focuses on generative AI (GenAI).

In the remainder of this chapter, concepts related to AI will be explained in more detail.

Watch the video below in which Lieven Scheire explains where things can go wrong when training AI. This example helps to understand how AI works at a fundamental level.

(The video below is in Dutch, but you can use the subtitles.)

Quiz: test your prior knowledge!

2.1. What is GenAI?

Classical AI recognizes and classifies. Think of systems that detect spam, plan routes, or make recommendations. They analyze existing data and make decisions or predictions based on that data. Generative AI creates something new. It generates text, images, audio, or code based on an input (also known as a prompt).

Examples include ChatGPT, DALL·E, or MS Copilot. These systems are trained on enormous amounts of data and predict which word or image most logically follows — without truly understanding what they are saying or creating. Want to learn more about such AI tools? Then continue to chapter 4.2 (Which AI tools and models are relevant?).

A handy rule of thumb: classical AI recognizes, generative AI creates.

2.2. What are Large Language Models?

GenAI is a collective term for AI systems that can create new content. Large Language Models (LLMs) refer to a specific type of GenAI. This type focuses on language processing. LLMs are trained on enormous amounts of text and can therefore generate, summarize, translate, or analyze sentences. The most well-known example is GPT-4 (OpenAI). In short: LLMs are the engine behind many GenAI applications.

Below you’ll find an overview of AI and its subsets:

 

2.3. How does GenAI work?

When GenAI creates new text (or images, audio, or software code), it may seem like you're talking to a smart conversation partner. In reality, it is a statistical model that predicts which word (or image, sound, or piece of code) is most likely to follow the previous one. Often this works well, sometimes it doesn’t — more on that in the following chapters.

GenAI’s prediction process happens in four steps:

  • Pre-training: first, the model is fed with a vast amount of information. Through billions of sentences (or other data), it learns to recognize patterns. Over time, it begins to understand grammar, style, and structure.
  • Tokenization and vectorization: the model breaks the text into components (tokens), which are converted into numbers (vectors). This allows the model to compute with language.
  • Prediction via neural networks: the model uses a so-called transformer architecture to determine which words are important. It continuously predicts the most likely next token.
  • Fine-tuning with human feedback: after pre-training, the model is refined with the help of human trainers. They provide feedback on what is desirable, polite, or correct.
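
To make "predicting the most likely next token" a bit more concrete, here is a deliberately tiny sketch in Python. It is not how a real LLM works (real models use tokenization, vectors, and transformer networks trained on billions of sentences, as described above); it only counts which word follows which in a made-up mini corpus and then always picks the most frequent continuation.

```python
from collections import defaultdict, Counter

# A deliberately tiny "training corpus" (made up for illustration).
corpus = (
    "ai helps students study. "
    "ai helps students write. "
    "students study every day."
).split()

# "Pre-training": count which word follows which word (a bigram model).
next_word_counts = defaultdict(Counter)
for current_word, following_word in zip(corpus, corpus[1:]):
    next_word_counts[current_word][following_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely word to follow `word`."""
    counts = next_word_counts.get(word)
    if not counts:
        return "<unknown>"
    return counts.most_common(1)[0][0]

# "Generation": start from a prompt word and keep predicting the next token.
word = "ai"
generated = [word]
for _ in range(4):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # prints: ai helps students study. ai
```

Real language models do essentially the same thing at a vastly larger scale, with probabilities computed by a neural network instead of simple word counts.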

Would you like these AI concepts explained in a different way? Or is something still unclear? Then watch the video below (up to 4:00 min):

2.4. AI in education and research

Books, articles, datasets... Whether scientists are teaching or conducting research, they rely on large amounts of information for their work. In processing that information, GenAI is a powerful tool. You too can use this helpful assistant, provided you do so in a critical manner (see chapter 3 for tips on how to do that).

Quiz: test your knowledge!

3. When do you apply AI?

Source: generated with MS Copilot.

You now know what AI is and how it works. But in what ways can you apply AI tools? And how can you use them in your studies?

This chapter will inform you about relevant AI applications that can support you in your studies.

Quiz: test your prior knowledge!

3.1. AI as a study assistant.

Students are increasingly using AI tools as a supplement to traditional study skills. Think of tools like Copilot, Elicit or NotebookLM, which function as digital assistants. These tools can help you structure your thoughts, rewrite texts, or generate ideas. They can be especially useful in the early stages of an assignment, when you're still searching for an angle or structure. Instead of staring endlessly at a blank screen, you can ask GenAI for suggestions for an outline, a sample introduction, or even a list of potential research questions. This lowers the threshold to get started and helps you get to the core more quickly.

In addition, GenAI can also act as a personal study coach. You can use AI to explain complex concepts, generate practice questions, or even help you plan your study work. Some students use AI as a kind of sparring partner: they input an idea and ask for counterarguments or alternative perspectives. This stimulates critical thinking and helps you better substantiate your own viewpoint. It is important, however, that you do not see GenAI as a replacement for your own thinking, but as a tool that supports you in your learning process.

3.2. Applications: where AI can truly help you

AI can be a helpful study aid in many ways. Explore the tabs below for examples related to searching and finding literature. These are just a few of the many possibilities available. Some examples can even be combined to create new applications.

Note: always check with your instructor whether the use of AI tools is permitted. Using them without permission or proper citation may be considered academic misconduct.

Want to learn how to get the best results? Then continue to Chapter 5: “How to get the best results?”.

 

1. Formulating a research question

By asking AI for suggestions based on a topic, you can more quickly arrive at a clear and well-defined question. For example: “What research questions are relevant within the theme of climate migration?” AI can then offer various angles, such as legal, economic, or social perspectives.

2. Summarizing texts

Especially with long or complex articles, AI can help you quickly grasp the core ideas. For example, you might ask: “Summarize this article in 200 words, focusing on the conclusion.” This is useful for literature reviews, but also when preparing for exams. Note: AI-generated summaries are not always complete or accurate, so always verify them against the original text. And be careful not to share articles that are behind a paywall with a chatbot; doing so infringes copyright.

3. Quick introduction to the topic

Various AI tools can help you quickly get oriented with a new topic. These include AI tools that, for example, summarize key information and articles.

4. Improving your own text

You can enter a paragraph and ask for a rewrite in an academic style or to correct grammatical errors. AI tools such as Writefull or Grammarly are specifically designed for this purpose.

5. Generating search terms

Based on a description of your topic, AI can suggest synonyms or related terms that you can use in databases.

6. Generating references

You can ask AI tools to format a source in APA style. Or you can ask an AI tool to compile a bibliography based on a topic. However, be aware: AI sometimes invents sources that do not exist or provides only a limited selection of sources. Always check whether the mentioned sources actually exist, for example via Libsearch or Google Scholar.

7. Socratic opponent

AI can act as an opponent in a discussion. This way, you can learn to better support your viewpoint. By using a conversation or debate structure in your prompts, you can ask AI to provide counterarguments or pose critical questions. This helps you sharpen and strengthen your reasoning.

8. Role as coach

AI can guide you through the execution of complex tasks. Provide step-by-step instructions and go through the process together one step at a time. This way, you learn in a structured manner.

3.3. AI is intelligent, but not infallible.

Although AI tools offer many possibilities and can deliver impressive results, it is important to be aware of their limitations. AI tools, such as ChatGPT, are trained on vast amounts of data. These so-called Large Language Models predict which word is likely to follow the previous one, without knowing whether the content is accurate. This can lead to “hallucinations”: information that sounds convincing but is factually incorrect. An example of a hallucination is a chatbot inventing a non-existent article and providing an author name and DOI.

It is therefore important to always critically assess AI output. Ask yourself questions such as: (1) “Does this sound logical?” or (2) “Can I verify this?” (See chapter 5.4 for a checklist to review GenAI output). If you use AI tools, use them as a starting point, not as an end point. Rely on your own knowledge and/or feedback from instructors and/or reliable sources when evaluating AI output. Only then can you use AI responsibly in your studies.

In addition, transparency is important. If you use AI tools for an assignment, you must disclose this when asked. Undisclosed use may be considered fraud, especially if you present AI output as your own work. Discuss with your instructor what is and isn’t allowed, and be transparent about your approach.

Quiz: test your knowledge!

4. What are relevant AI tools?

There are various applications for which you can use AI in your studies. But which specific AI tools can you use for that? In this chapter, you will get to know different AI tools that can support you in your studies. Think of AI tools that help you search for literature, structure texts, or generate ideas. It is important to critically assess what an AI tool does, how it works, and whether its use is appropriate in a given context.

Quiz: test your prior knowledge!

4.1. When is an AI tool useful?

AI tools don’t magically solve everything. They are instruments that need to be used wisely, depending on your goal. That’s why you should always start by asking yourself: “What do I want to achieve?” Are you looking for help with structuring a text, finding literature, analyzing data, or generating ideas? Each AI tool has its own strengths and challenges. An AI tool like Copilot, for example, is useful for brainstorming or rephrasing text. However, such a chatbot is less suitable for finding reliable sources. For that purpose, tools like Elicit or Consensus are more appropriate. These AI tools explicitly refer to scientific literature.

Making a good decision also means learning to recognize when AI tools support your learning process and when they distract you from thinking for yourself. So don’t use AI tools as a shortcut, but as a sparring partner. Think of it as a toolbox: you choose the right tool for the right task.

 

4.2. Note! Use within the VU.

Be aware of what VU offers in the field of AI. VU provides the basic version of Copilot. VU has determined that user privacy is sufficiently safeguarded when you are logged in with your VU account; your data will then not be reused to train the model. More information about this can be found via this link. Also be aware that Copilot uses Bing to search the internet in real time.

When using other AI tools, such as ChatGPT or Elicit, you are personally responsible for the risks. These other AI tools fall outside VU’s license and are therefore not supported by the IT helpdesk or the University Library.

What does this mean in practice? For example, when you upload a text to ChatGPT, you don’t know exactly what happens to that data. So never share sensitive information, such as names, email addresses, research data, etc., via GenAI. The general rule is: if in doubt, use Copilot or ask for advice at UBVU.

4.3. Which AI tools and models are relevant?

There are many AI tools. Some AI tools can be useful to use during your studies. Below is an overview of commonly used AI tools, including their applications, advantages, and limitations. Note: this is just a small selection of the many AI tools available.

For each AI tool, its application, main advantage, and main limitation are listed:

1. Copilot
   Application: Text, coding, image
   Advantage: Broadly applicable, intuitive
   Limitation: Hallucinations, no source citation

2. Claude
   Application: Text, brainstorming, reasoning, image, coding
   Advantage: Strong in nuance, long context
   Limitation: Limited availability

3. ChatGPT
   Application: Text, brainstorming, reasoning, image, coding
   Advantage: Broadly applicable, intuitive interface
   Limitation: Prone to errors, hallucinates sources

4. Elicit
   Application: Literature research
   Advantage: Systematic and scientific, summaries with source citation
   Limitation: English-language focus

5. Consensus
   Application: Evidence-based answers, scientific consensus
   Advantage: Peer-reviewed sources
   Limitation: Less suitable for creative tasks

6. NotebookLM
   Application: Literature analysis, working with own documents
   Advantage: Contextual search in own sources
   Limitation: Limited to uploaded data

7. DALL-E, Adobe Firefly
   Application: Image generation
   Advantage: Creative visual output
   Limitation: Limited control over style

It is important to know that the above AI tools are built on underlying LLMs such as GPT-4 (OpenAI), Claude 3 (Anthropic), and Gemini (Google). Be aware that these models do not understand the world the way humans do. They do not know what is factually correct; they are statistical models that work with probabilities.

Watch the video below for a demonstration of how to search for sources in the following AI tools: Consensus, ScholarGPT, Elicit, and Semantic Scholar:

4.4. The quality of your prompt determines the outcome

A language model (LLM), such as the one behind Copilot or ChatGPT, works differently from a search engine. That is why specific requirements apply to asking a good question. The question, or prompt, that you enter into an AI tool largely determines the quality of the result. An unclear prompt often produces a superficial or unusable answer, while a clear and structured prompt can lead to good output. But how do you formulate such a clear and structured prompt? In chapter 5.2, you’ll find an overview of the elements.

Quiz: test your knowledge!

5. How do you get the best results?

As you read in the previous chapters, AI tools can assist with a wide range of tasks. However, the quality of the output strongly depends on how you use the AI tool. In this chapter, you’ll learn how to use AI effectively by giving clear instructions, thinking critically about the origin of the information, and consciously choosing which AI tool to use and when.

Quiz: test your prior knowledge!

5.1. Clear instructions make the difference.

A language model is an artificial intelligence model trained to predict the next word or a missing word in a sentence, based on your input or prompt. The more specific and clear your prompt is, the better the model understands what you mean. A good prompt contains different elements depending on your goal. When you don’t use specific elements in your prompt, you often receive a generic, superficial, or even irrelevant answer.

What does this look like in practice? If you ask, “What is climate change?”, you’ll get a standard definition. However, if you ask, “Explain in no more than 150 words what climate change is. Explain it as if you’re talking to a high school student. Include an example from the Netherlands.”, you force the language model to give a focused, understandable, and contextual explanation. These kinds of prompts not only help you get better output, but also help you formulate more clearly what you actually want to know.

In section 5.2, you’ll learn which elements can be useful to make your prompt as complete as possible.

5.2. Prompting: do's and don'ts

Prompting is a skill you can train. The more you experiment with different formulations, the better you learn what works. There are a number of do’s and don’ts that can help you communicate more effectively with AI tools.

 

✅Do’s:

Below is a list of elements you can consider including in your prompt:

1. Task

Description:
Clearly describe your task or instruction.

Example(s):
"Summarize this article in 150 words."

2. Persona


Description:
Simulate a persona for tone and style.

Example(s):

  • "Write from the perspective of a first-year psychology undergraduate student."
  • "Take the viewpoint of a sociology researcher. Use a scientific tone, but avoid jargon."
  • "Pretend you are a university lecturer. Keep asking me questions in return."
3. Steps


Description:

Outline steps and/or the sequence of the result.

Example(s):
"First discuss the benefits of AI and then conclude with possible next steps. Describe the next steps in a bullet-point format."

4. Context


Description:
Provide sufficient context and/or constraints.

Example(s):
"Use the following sources to answer my questions. [Links to your sources and/or PDF files]."

5. Goal


Description:
Define goal and target audience.

Example(s):
"Rewrite this paragraph for a first-year pedagogy undergraduate student (university level). The purpose of this paragraph is to inform them about […]."

6. Format


Description:

Give instructions about the output.

Example(s):

  • "Create a table from the following information […]."
  • "Provide a bullet-point list about […]."
  • Other examples: language, word count, concise, bullet-point format, with formatting, image, diagram.


Good prompts are therefore clear, specific, and aligned with the goal. That’s why it’s important to first be aware of what your goal is. Based on your goal, you can determine which elements from the table above are needed to make your prompt as complete as possible. So instead of simply asking, “Summarize this article,” it’s better to ask, “Summarize this article in 200 words for a second-year psychology student, with emphasis on the article’s conclusion.”
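
To show how these elements can come together in practice, the short sketch below simply glues the six elements into one prompt text. The concrete wording and variable names are invented for this illustration; it produces plain text that you could paste into Copilot or another chatbot.

```python
# A minimal sketch of combining the prompt elements above into one prompt.
# All wording below is just an example; replace it with your own assignment.

elements = {
    "Task": "Summarize the article below in 200 words.",
    "Persona": "Explain it as if to a second-year psychology student.",
    "Steps": "First state the research question, then the method, then the conclusion.",
    "Context": "Use only the article text that I paste after this prompt.",
    "Goal": "I will use the summary to prepare for an exam.",
    "Format": "Return a bullet-point list in English.",
}

# Combine the elements into one prompt you can paste into Copilot or another chatbot.
prompt = "\n".join(f"{label}: {text}" for label, text in elements.items())
print(prompt)
```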

You can also ask AI what information is still missing to carry out the task as effectively as possible. Additionally, various prompt libraries with example prompts are available; see chapter 8.1 for links.


❌ Don’ts:

  1. Don’t ask vague or overly broad questions.
  2. Don’t expect perfect answers without thinking for yourself.
  3. Don’t blindly adopt AI output without verification.
  4. Don’t use AI as a final editor, but rather as a thought partner or sparring partner.

Prompting is therefore more than just ‘asking something’. It’s an iterative process in which you adjust your prompt based on the generated responses. In that sense, you can also compare it to learning how to argue: through language, you try to guide a system that has no intentions or goals of its own, but does learn statistically which word sequence is most logical.

Curious to see a demonstration of good versus poor prompting? Then watch the video below:
(The video is in Dutch, but you can use the subtitles).

5.3. Tool versions: free or paid?

Many AI tools, such as ChatGPT, offer both a free and a paid version. For example, the free version of ChatGPT is less powerful than the paid one. Paid AI tools often provide access to newer models, faster processing, more context memory (important for longer tasks), and additional features.

Still, a free version isn’t necessarily bad. For simple brainstorming, short summaries, or rewriting paragraphs, a free AI tool is often sufficient. However, if you're working on a complex academic project, where you want to analyze multiple documents or have extended conversations with an AI tool, a paid version can make a significant difference in quality and efficiency.

Also pay attention to privacy terms: (free) AI tools may use your input to further train their models. AI tools offered through educational institutions often have better agreements regarding data protection.

 

Source: ChatGPT, 2025

5.4. Know where your data comes from

A common misconception is that AI tools, such as ChatGPT, ‘search the internet’ for their answers. This is not true! Most AI models are trained on large amounts of text up to a certain point in the past. These models generate responses based on patterns in that data. They do not have live access to current information unless the AI tools are connected to a search engine. Examples of such tools include MS Copilot or Perplexity.

It’s therefore important to be aware that AI tools do not guarantee factual accuracy. AI tools can sound convincing but may ultimately hallucinate. Examples of hallucinations include fabricated sources, incorrect data, or made-up citations. This is especially problematic in an academic context.

Tip: always use AI output as a starting point for further research, not as a final product. Verify facts, check sources, and be critical of tone, biases, and completeness of the information. AI tools can be powerful aids, but you remain responsible for the content you use.

Below you’ll find a checklist to help you assess GenAI content:

Quiz: test your knowledge!

6. How do you use AI responsibly?

AI is capable of processing more data than humans, and it does so much faster. Although AI can be of great value to scientists, the technology is far from flawless. It comes with risks and limitations. However, this does not have to hinder its use. If you know where potential pitfalls lie, it is possible to avoid them.

Quiz: test your prior knowledge!

6.1. Risks and limitations

Below is a list of risks and limitations:

1. Hallucinations

GenAI provides fictitious sources or invents non-existent books. Although some claims may sound convincing, they are incorrect. This is especially risky in academic work, for example when it involves medical or legal applications. Want to know more about hallucinations? Then watch the video below:

(The video below is in Dutch, but you can use the subtitles.)

2. Bias and discrimination

AI models produce output based on the input they receive. So if there are biases in the training data, they will adopt and/or amplify those biases. This can include stereotypes in image generation or preferences for certain groups in text output.

Do you notice anything about the images below? (Prompt used: create an image of a doctor and nurse in a hospital setting.)

                   

Image source: generated with MS Copilot (2025)

In case you hadn’t noticed: the doctors in all three images are young men, and the nurses are young women. Do you notice anything else?

3. Black box

You give AI a task and then something comes out. But why this specific answer? That often remains unclear, partly because the companies behind AI provide limited information about how their products work. This makes it difficult to truly understand the output and take responsibility for it.

4. Environmental and climate impact

AI models run on servers, and these servers require a lot of energy. That doesn’t necessarily mean AI usage harms the climate, as is often suggested in the media. If the energy is generated from fossil fuels, that is indeed the case. But with other energy sources (solar, wind, and nuclear), no CO2 is released. Additionally, the energy consumption per prompt is decreasing. While a ChatGPT prompt initially required 2.9 Wh, it has now dropped to the level of a Google search (0.3 Wh). Another aspect is AI’s water usage. This is also more nuanced than expected. Although a lot of water is needed to cool the servers, this hardly leads to environmental impact if the water evaporates or returns to the natural water cycle. Finally, AI is not only a burden on the climate and environment—it also helps relieve it through smart solutions, such as analyzing climate data faster than humans and planning logistics more efficiently.

5. Overestimation of creativity

Although the results are often impressive, AI does not replace human creativity. It can, however, limit that creativity if you become too dependent on it. This is related to a decline in critical thinking, which is essential for verifying AI results. If AI takes over too much, users may neglect their own thinking and writing skills.

6. Disinformation and manipulation

AI makes it easy to create deepfakes, fake news, or manipulated images. This can lead to deception, polarization, or political influence.

7. The concentration of power in Big Tech

Only a few large companies, mostly based in the United States, control the development of GenAI. This limits transparency and democratic oversight. This issue is even more significant due to the lack of public alternatives.

8. Violation of academic integrity

Using AI without proper attribution or as a replacement for original work can lead to fraud or plagiarism. The line between ‘assistance’ and ‘replacement’ is sometimes blurry. In academia, this is considered a serious offense. More on this topic can be found in Chapter 7: ‘How to reference correctly?’.

6.2. Dealing with risks and limitations

In the previous section, we listed potential risks and limitations. But how can you deal with them? Below, each risk or limitation is paired with actions you can take:

1. Hallucinations
  • Always check sources: ask for a DOI or URL and verify that it actually exists (a small example of such a check follows after this list).
  • Use fact-checking tools: combine AI output with Google Scholar, LibSearch, PubMed, or Consensus.
2. Bias and discrimination
  • Be alert to stereotypes: ask yourself who is being represented, and who is not.
  • Use diverse prompts: ask AI for perspectives from different cultures or genders.
  • Compare multiple outputs: have AI generate several versions and compare the differences.
3. Black box problem
  • Ask for explanations: have AI explain why it gives a certain answer.
  • Use explainable AI tools: some tools (such as LIME or SHAP) provide insight into decision logic.
  • Keep thinking for yourself: see AI as a suggestion, not as the truth.
4. Environmental impact
  • Use AI consciously: ask yourself whether this prompt is truly necessary.
  • Combine prompts: ask one good question instead of ten separate ones.
5. Overestimation of creativity
  • Use AI as a source of inspiration, not as a final product.
  • Add your own voice: rewrite AI output in your own style.
  • Experiment with unexpected combinations: let AI assist you, but you remain the director.
  • Use AI to train your thinking: let AI provide explanations and improve them yourself.
6. Disinformation and manipulation
  • Verify visual material: ask AI whether an image was generated.
  • Stay critical of viral content: consider who created it and why they did so.
7. The concentration of power in Big Tech
  • Use open source models as an alternative. Example: Mistral.
  • Support public AI initiatives, such as Hugging Face or EduGenAI.
  • Stay informed about AI policy through your university and/or the EU AI Act.
8. Academic integrity
  • Be transparent: always state whether and how you used AI.
  • Use AI as a sparring partner, not as a ghostwriter.
  • Check faculty rules: not every faculty allows the use of AI.
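
As a concrete illustration of the first action above (verifying that a DOI actually exists), the sketch below asks the public doi.org resolver whether a DOI is registered. This is a rough, unofficial example, not a VU or UBVU tool: it assumes the Python requests library is installed and that doi.org answers registered DOIs with a redirect and unknown DOIs with a 404. Even if a DOI exists, still check the source itself, for example via LibSearch or Google Scholar.

```python
import requests  # third-party library; install with: pip install requests

def doi_exists(doi: str) -> bool:
    """Rough check: does this DOI resolve at doi.org?

    doi.org answers a registered DOI with a redirect (3xx) to the publisher
    and an unknown DOI with 404, so a redirect is taken as "exists".
    """
    response = requests.head(f"https://doi.org/{doi}", allow_redirects=False, timeout=10)
    return 300 <= response.status_code < 400

# Example with a made-up DOI, as you might get from a hallucinated reference:
print(doi_exists("10.1000/this-doi-does-not-exist"))  # expected: False
```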

 

6.3. Main privacy risks

Do you want to know something quickly? Just enter your question and within seconds, an answer appears. Because of this ease of use, many people don’t realize that AI ‘remembers’ the data you input—sometimes temporarily, other times permanently. AI companies may have various reasons for this, such as further training of the model or because the data is considered valuable. This can become problematic if you feed AI with personal, confidential, and/or academic information.

Below are several privacy risks listed:

  1. Most AI models are ‘black boxes’:
    You have no visibility into who has access to your data, where it is stored (whether outside the EU or not), and whether it is shared with third parties.
  2. Risk of data leaks or misuse:
    If you enter sensitive information, it could—due to a leak or error—be exposed publicly. This includes patient data, research information, or personal reflections. If such data becomes public, it can lead to legal, ethical, or reputational damage.
  3. Laws and regulations:
    According to European privacy legislation (the GDPR, known in the Netherlands as the AVG), personal data may only be processed if it is necessary and lawful. Many AI tools do not (yet) comply with these rules. Using AI in education or research therefore requires extra caution.
  4. Increasing digital dependency:
    By unknowingly sharing large amounts of data with major tech companies, you strengthen their power. This limits control over your own information and over the development of public, transparent AI alternatives.

6.4. Personal actions

What can you do yourself to avoid these privacy risks? Below are possible actions:

  1. Use AI tools that have been approved within your institution, such as MS Copilot at VU.
  2. Do not enter personal data or confidential information.
  3. Read the privacy policy of the AI tool you are using.
  4. Use local or open source models when working with sensitive data.
  5. Ask yourself: "Would I also include this in an email to a stranger?"

Quiz: test your knowledge!

7. How do you reference correctly?

Science builds on the work of other researchers. In your own work, you acknowledge your intellectual debt to previous research. That’s why it’s crucial to be transparent about the methods, data, and analyses you use. This allows others to verify whether you’ve used the right sources and interpreted them correctly, among other things. It also enables other scientists to replicate the research to see if it yields the same results. Without verifiability, there is no science. The rise of AI introduces new challenges.

In this chapter, you will learn more about AI and plagiarism, citation, paraphrasing, and referencing.

Quiz: test your prior knowledge!

7.1. Plagiarising, quoting, paraphrasing & referencing

If it is unclear where your own findings end and those of others begin, you may be accused of plagiarism. Plagiarism is the use of someone else's work or ideas without proper source citation. Literally, it means stealing words or ideas. It can involve:

  • copying text without citing the source
  • paraphrasing without attribution
  • reusing your own previous work without acknowledging it (self-plagiarism)

It is considered a scientific sin if presented as original work. Those who commit plagiarism may face serious consequences. The discovery of plagiarism can lead to:

  • retraction of publications
  • disciplinary actions
  • damage to reputation
  • or even the loss of degrees or positions

To prevent this, you can quote someone else's work directly. By using quotation marks, you indicate that a passage comes from someone else. This is called citing. It is also possible to retell someone else's work in your own words without changing the original meaning. You do not use the exact wording, for example, because the original is too technical, too long, or unsuitable for your audience. This is called paraphrasing. Paraphrasing helps make the content more accessible, but you must remain faithful to the original meaning.

When quoting and paraphrasing, you must always provide a source citation, also known as referencing. Referencing means clearly indicating in your text where your information comes from. There are different ways to cite sources. Check with your faculty to see which method is used there.

7.2. AI and plagiarism: risks and tips

A recent form of plagiarism occurs when you use AI-generated output without indicating its origin. After all, it is based on existing texts or ideas. By not acknowledging this, you act as if the texts or ideas are your own. It is therefore important to indicate when you use AI and how you have done so.

These are the main risks of using AI in relation to plagiarism:

1. Unintentional plagiarism

Users adopt AI output without realizing that it is based on existing texts or ideas. Without proper source citation, this can be considered plagiarism.

Tip: Treat AI output as an external source. Check and rewrite it in your own words, citing the source where necessary.

2. Self-plagiarism through the reuse of AI-generated text.

If you use AI to rewrite or reuse previous texts without disclosing it, this may be considered self-plagiarism.

Tip: Be transparent about reuse and mention the use of AI in your work.

3. Loss of authorship

If you let AI write a large part of your text, it becomes unclear who the ‘author’ is. This can lead to questions about ownership and integrity.

Tip: Use AI as a tool, not as a substitute. You remain responsible for the content.

4. Invisible sources

AI tools often do not provide clear source references. If you adopt AI-generated output without knowing its origin, you may unintentionally commit plagiarism.

Tip: Use AI output only as a starting point. Search for the original source before incorporating it.

5. Laziness

AI makes it easy to generate texts quickly. This can lead to superficial work and skipping critical thinking or personal analysis.

Tip: Use AI to deepen your thinking, not to replace it. Reflect on what you want to say yourself.

7.3. AI and plagiarism: accountability

Keep in mind: AI is a tool, and you remain responsible for the content. There are several ways to show where and how you have used AI.

  1. Mention it in your introduction.
    • For example: “In writing this paper, I used ChatGPT (version GPT-4, OpenAI) to support the structuring of paragraphs and the rephrasing of some sentences. The final content was reviewed and adjusted by me.”
    • Or: “To generate sample questions and summarize literature, I used a GenAI tool. All sources were manually verified by me.”
  2. Include it in a footnote.
    • For example: “The first version of paragraph 3 was generated using ChatGPT. This text was then rewritten and supplemented based on my own analysis.”
    • [Footnote: OpenAI. (2025). ChatGPT (version 4). https://chat.openai.com]
  3. Add a brief AI accountability statement at the end of your paper.
    • For example: “ChatGPT (OpenAI, 2025) was used in this paper to support brainstorming on structure and rephrasing of some paragraphs. The content was critically reviewed and adjusted by the author.”

7.4. AI recognition

It takes time to properly account for your use of AI. Sometimes it may be tempting to skip this step—after all, how likely is it that you’ll get caught? More likely than you might think. Teachers can recognize when AI is the actual author of a text in several ways.

  1. Style and tone differences:
    • AI-generated text often has a different writing style than your previous work. Teachers or supervisors may notice:
      • sudden shifts in style
      • unnaturally smooth sentences
      • or an overly general, ‘polished’ tone
  2. Content inconsistencies:
    • AI can produce incorrect or fabricated information (hallucinations), such as:
      • non-existent sources or quotes
      • illogical reasoning
      • or superficial explanations of complex concepts
  3. Detection tools:
    • There are tools that attempt to identify AI-generated text (such as Turnitin AI Detection or GPTZero). These are not 100% reliable but can prompt further investigation.
  4. Comparison with previous work:
    • If your earlier assignments or writing samples are available, teachers may notice differences in style, depth, or structure.
  5. Lack of substantiation:
    • AI texts often lack proper source references or contain vague citations without verifiable literature.

Quiz: test your knowledge!

8. Tips, further reading & references

8.1. Tips & further reading

Would you like to learn more about AI, such as prompting, AI & society, etc.? Take a look below at a collection of literature, libraries, and podcasts.

 

Literature/articles:

  1. AI, je nieuwe collega (only available in Dutch)
  2. De handigste AI tools voor studenten in 2025 (only available in Dutch)

 

Prompt libraries:

  1. Prompt library (MS Copilot)
  2. AI prompt library (Maastricht University)
  3. Prompt library (Hogeschool Windesheim)

 

AI-tool libraries:

  1. The shortkit (only available in Dutch)
  2. AI-tools: handige tools vind je hier. (only available in Dutch)
  3. Padlet AI-tools

 

Podcasts:

  1. The diary of a CEO
  2. AI-youtuber Matt Wolfe
  3. The AI education conversation

 

Only available in Dutch:

  1. Nooit meer schrijven?
  2. AI, je nieuwe collega 
  3. Lekker werken met AI (BusinessWise) 
  4. AI report 
  5. AI today live 
  6. De grote tech show (BNR) 
  7. Het AI-tussenuurtje 
  8. AI & onderwijs 

 

8.2. References

Feedback & win a prize!

Would you like to help us improve the e-learning and have a chance to win one of three great prizes (including a €50 gift card for Grand Café the Living)? Then fill out the evaluation form below!

 

  • The arrangement Smart studying with AI (Bachelor) is made with Wikiwijs of Kennisnet. Wikiwijs is an educational platform where you can find, create and share learning materials.

    Last modified
    2025-10-20 12:33:05
    License

    This learning material is published under the Creative Commons Attribution-ShareAlike 4.0 International License. This means that, as long as you give attribution and publish under the same license, you are free to:

    • Share - copy and redistribute the material in any medium or format
    • Adapt - remix, transform, and build upon the material
    • for any purpose, including commercial purposes.

    More information about the CC Attribution-ShareAlike 4.0 International license.

    Additional information about this learning material

    The following additional information is available about this learning material:

    Description
    An e-learning course that teaches bachelor students how to use AI critically in their studies.
    Education level
    WO - Bachelor; WO - Master; HBO - Bachelor;
    Learning content and objectives
    Computer science (Informatica);
    End user
    pupil/student
    Difficulty
    average
    Learning time
    1 hour 0 minutes

    Used Wikiwijs arrangements

    E-learnings team informatiediensten. (2025).

    Slim studeren met AI (Bachelor)

    https://maken.wikiwijs.nl/217678/Slim_studeren_met_AI__Bachelor_
