Although AI tools offer many possibilities and can deliver impressive results, it is important to be aware of their limitations. AI tools such as ChatGPT are trained on vast amounts of text. These so-called Large Language Models predict which word is most likely to follow the preceding text, without knowing whether the resulting content is accurate. This can lead to “hallucinations”: information that sounds convincing but is factually incorrect. A typical example is a chatbot inventing a non-existent article, complete with a plausible-sounding author name and DOI.
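To make the prediction mechanism concrete, here is a minimal toy sketch in Python. The word table and its probabilities are entirely invented for illustration (a real model learns billions of such statistics from data); the point is that the model only picks a statistically likely continuation and has no notion of truth.

```python
import random

# Toy next-word table (probabilities are made up for illustration).
next_word_probs = {
    ("was", "written", "by"): {
        "Smith": 0.4,    # sounds plausible, but may be pure invention
        "Jones": 0.35,
        "Garcia": 0.25,
    },
}

def predict_next(context):
    """Sample the next word from the probability distribution."""
    probs = next_word_probs[context]
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights)[0]

print("was written by", predict_next(("was", "written", "by")))
# The chosen word is statistically likely but never fact-checked:
# this is how a confident yet false citation ("hallucination") arises.
```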
It is therefore important to critically assess AI output. Ask yourself questions such as “Does this sound logical?” and “Can I verify this?” (see chapter 5.4 for a checklist to review GenAI output). If you use AI tools, treat them as a starting point, not an end point. Draw on your own knowledge, feedback from instructors, and reliable sources when evaluating AI output. Only then can you use AI responsibly in your studies.
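For the invented-DOI example above, one practical way to answer “Can I verify this?” is to check whether the DOI is actually registered. The sketch below assumes the public doi.org handle lookup endpoint (https://doi.org/api/handles/) returns HTTP 200 for registered DOIs and 404 for unknown ones; the example DOI is hypothetical.

```python
import urllib.error
import urllib.request

def doi_exists(doi):
    """Return True if doi.org reports the DOI as registered."""
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # doi.org answers 404 for DOIs that were never registered.
        return False

# Hypothetical DOI copied from a chatbot answer; check before citing.
print(doi_exists("10.1234/made-up-by-a-chatbot"))  # expected: False
```

A negative result is a strong warning sign, but no automated check replaces actually reading the cited source.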
In addition, transparency is important. If you use AI tools for an assignment, you must disclose this when asked. Undisclosed use may be considered fraud, especially if you present AI output as your own work. Discuss with your instructor what is and isn’t allowed, and be transparent about your approach.