1. Hallucinations
- Always check sources: ask for a DOI or URL and verify that it actually exists.
- Use fact-checking tools: combine AI output with Google Scholar, LibSearch, PubMed, or Consensus.
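The DOI check above can be partly automated: a DOI has a predictable syntax, so a quick format check catches obviously malformed or fabricated identifiers before you even open the resolver. A minimal Python sketch (the regex and the helper name are illustrative, not a standard API):

```python
import re

# Common DOI form: "10." + 4-9 digit registrant code + "/" + suffix
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def doi_resolver_url(doi):
    """Return the doi.org resolver URL for a well-formed DOI, else None."""
    if DOI_PATTERN.match(doi):
        return "https://doi.org/" + doi
    return None

print(doi_resolver_url("10.1038/nature14539"))  # https://doi.org/10.1038/nature14539
print(doi_resolver_url("not-a-doi"))            # None
```

A syntactically valid DOI can still be invented, so you must still open the resulting doi.org URL and check that it resolves to the claimed publication.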

2. Bias and discrimination
- Be alert to stereotypes: ask yourself who is being represented, and who is not.
- Use diverse prompts: ask AI for perspectives from different cultures or genders.
- Compare multiple outputs: have AI generate several versions and compare the differences.

3. The black-box problem
- Ask for explanations: have AI explain why it gives a certain answer.
- Use explainable-AI tools: tools such as LIME and SHAP show which input features most influenced a model's prediction.
- Keep thinking for yourself: see AI as a suggestion, not as the truth.
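To give a flavour of what explanation tools like LIME and SHAP do, here is a toy occlusion-style attribution in plain Python: replace one feature at a time with a baseline value and record how much the model's output changes. This is a simplified sketch of the general idea, not the actual LIME or SHAP algorithm, and the linear `model` is a made-up stand-in:

```python
def model(x):
    """Hypothetical toy model: a weighted sum of three features."""
    weights = [0.5, -0.2, 0.8]
    return sum(w * v for w, v in zip(weights, x))

def occlusion_attribution(model, x, baseline):
    """Attribute a prediction to each feature by swapping that feature
    for its baseline value and measuring how the output changes."""
    full = model(x)
    attributions = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline[i]
        attributions.append(full - model(perturbed))
    return attributions

print(occlusion_attribution(model, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))
```

For this linear model the attributions are (up to floating-point error) simply weight times feature value, which makes the toy easy to sanity-check; real explainers apply the same perturb-and-compare idea to models whose internals are opaque.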

4. Environmental impact
- Use AI consciously: ask yourself whether this prompt is truly necessary.
- Combine prompts: ask one good question instead of ten separate ones.

5. Overestimation of creativity
- Use AI as a source of inspiration, not as a final product.
- Add your own voice: rewrite AI output in your own style.
- Experiment with unexpected combinations: let AI assist you, but you remain the director.
- Use AI to train your thinking: let AI provide explanations and improve them yourself.

6. Disinformation and manipulation
- Verify visual material: ask AI whether an image was generated.
- Stay critical of viral content: consider who created it and why they did so.

7. The concentration of power in Big Tech
- Use open-source models, such as Mistral, as an alternative.
- Support public AI initiatives, such as Hugging Face or EduGenAI.
- Stay informed about AI policy through your university and/or the EU AI Act.

8. Academic integrity
- Be transparent: always state whether and how you used AI.
- Use AI as a sparring partner, not as a ghostwriter.
- Check faculty rules: not every faculty allows the use of AI.