Below is an overview of the main risks and limitations of generative AI:
GenAI sometimes hallucinates: it cites fictitious sources or invents non-existent books. Although such claims may sound convincing, they are simply wrong. This is especially risky in academic work and in high-stakes fields such as medicine and law. Want to know more about hallucinations? Then watch the video below:
(The video below is in Dutch, but you can use the subtitles.)
AI models produce output based on the data they were trained on. If that training data contains biases, the model will reproduce them and may even amplify them. This can show up as stereotypes in image generation or as preferences for certain groups in text output.
Do you notice anything about the images below? (Prompt used: create an image of a doctor and nurse in a hospital setting.)

Image source: generated with MS Copilot (2025)
In case you hadn’t noticed: the doctors in all three images are young men, and the nurses are young women. Do you notice anything else?
Generative AI is largely a black box: you give it a task and an answer comes out. But why this specific answer? That often remains unclear, partly because the companies behind AI disclose little about how their products work. This makes it difficult to truly understand the output and to take responsibility for it.
AI models run on servers, and these servers require a lot of energy. That does not automatically mean AI usage harms the climate, as is often suggested in the media. If the electricity comes from fossil fuels, the climate impact is real; with other energy sources (solar, wind, and nuclear), little or no CO2 is released. In addition, the energy consumption per prompt is decreasing: where a ChatGPT prompt was initially estimated at 2.9 Wh, more recent estimates put it at roughly the level of a Google search (0.3 Wh). Another aspect is AI's water usage, and this too is more nuanced than it may seem. Although a lot of water is needed to cool the servers, this hardly leads to lasting environmental impact when the water evaporates or returns to the natural water cycle. Finally, AI is not only a burden on the climate and environment; it can also help relieve them through smart solutions, such as analyzing climate data faster than humans can and planning logistics more efficiently.
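To get a feel for these figures, here is a simple back-of-the-envelope comparison using the two estimates cited above; the volume of one million prompts is a hypothetical number chosen purely for illustration:

\[
10^{6} \times 2.9\,\mathrm{Wh} = 2.9\,\mathrm{MWh}
\qquad \text{versus} \qquad
10^{6} \times 0.3\,\mathrm{Wh} = 0.3\,\mathrm{MWh}
\]

At the newer estimate, the same number of prompts thus consumes roughly a tenth of the energy (2.9 / 0.3 is about 9.7).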
Although the results are often impressive, AI does not replace human creativity. It can, however, limit that creativity if you become too dependent on it. Overreliance also goes hand in hand with a decline in critical thinking, which is exactly what you need to verify AI output. If AI takes over too much, users may neglect their own thinking and writing skills.
AI makes it easy to create deepfakes, fake news, or manipulated images. This can lead to deception, polarization, or political manipulation.
Only a few large companies, mostly based in the United States, control the development of GenAI. This limits transparency and democratic oversight, a problem made all the more pressing by the lack of public alternatives.
Using AI without proper attribution, or as a replacement for your own original work, can count as fraud or plagiarism. The line between ‘assistance’ and ‘replacement’ is sometimes blurry. In academia, this is considered a serious offense. More on this topic can be found in Chapter 7: ‘How to reference correctly?’.