6.3. Main privacy risks

Do you want to know something quickly? Just enter your question and, within seconds, an answer appears. Because of this ease of use, many people don't realize that AI 'remembers' the data you input, sometimes temporarily, other times permanently. AI companies may retain this data for various reasons, such as further training the model or because the data itself is considered valuable. This can become problematic if you feed an AI tool personal, confidential, or academic information.

Several of the main privacy risks are listed below:

  1. Most AI models are ‘black boxes’:
    You have no visibility into who has access to your data, where it is stored (inside or outside the EU), or whether it is shared with third parties.
  2. Risk of data leaks or misuse:
    If you enter sensitive information, it could be exposed publicly through a leak or an error. This includes patient data, research information, and personal reflections. If such data becomes public, it can lead to legal, ethical, or reputational damage.
  3. Laws and regulations:
    Under European privacy legislation (the GDPR, known in the Netherlands as the AVG), personal data may only be processed if doing so is necessary and lawful. Many AI tools do not (yet) comply with these rules. Using AI in education or research therefore requires extra caution.
  4. Increasing digital dependency:
    By unknowingly sharing large amounts of data with major tech companies, you strengthen their position. This limits your control over your own information and hampers the development of public, transparent AI alternatives.
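The caution urged above can be made concrete in practice: before sending a prompt to an external AI service, obvious personal identifiers can be stripped out locally. The sketch below is a minimal, hypothetical Python example; the regex patterns and placeholder labels are illustrative assumptions, they only catch email addresses and simple phone numbers, and real anonymization requires far more thorough tooling.

```python
import re

# Minimal sketch (NOT a complete anonymizer): mask obvious personal
# identifiers before a prompt leaves your machine. The patterns below
# are illustrative and only catch emails and simple phone numbers.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a placeholder tag like [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical prompt containing personal data that should not be shared:
prompt = "Summarize the file sent by jan.devries@example.org, phone +31 6 1234 5678."
print(redact(prompt))
# → Summarize the file sent by [EMAIL], phone [PHONE].
```

A filter like this reduces, but does not eliminate, the risk: names, addresses, and context-dependent identifiers slip through simple patterns, so the safest choice remains not entering sensitive data at all.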