Introduction to Research Data Management

Introduction

This course provides an introduction to Research Data Management (RDM) and guides you in creating your own Data Management Plan (DMP). Throughout the modules, you will explore key processes and best practices in Research Data Management, with a strong focus on applying them to your own project. By the end of the course, you will be equipped to develop a comprehensive data management plan. Additionally, recommendations on appropriate tools and best practices will be provided throughout the course.

 

Keep an eye out for these boxes; they might contain useful tips and tricks.

 

Before you start, it is recommended to read through this list of definitions related to data management; these terms will be used throughout the course.

 


Structure of the course

The main text will provide all the information you need to put the subject into practice. Each chapter will be explored through a combination of text, videos, and exercises to enhance your understanding.

At the end of each chapter, you will complete two exercises:

  • One will be integrated into the webpage.
  • The other will require you to write down key information, forming a solid foundation for your RDM & Privacy Framework, which will be fundamental to developing your data management plan.

Exercise Breakdown:

  1. Project-Focused Exercise – You will reflect on your own project, evaluating how the chapter’s topic applies to your work. Think of this as preparation for your data management plan.
  2. Quiz or Task – This will test your understanding of the material. If you answer incorrectly, you can retake it until you achieve a perfect score.

The course follows a logical sequence based on the sections of a data management plan. While it is recommended to go through each section in order, you are free to revisit previous sections or jump ahead if you’re eager to explore a particular topic. The key is to ensure you complete all sections and exercises.

 

Each chapter includes additional resources which can be used to dive deeper into a topic; there you will find further examples and explanations.

 


DMPonline

DMPonline is an online tool used to create, develop, and receive feedback on your data management plan.

DMPonline offers various templates in which you can set up your DMP. We strongly recommend that you use the VU template, which is called VU DMP template 2021 (NWO & ZonMw certified) v1.4. Below you’ll find an explanation of how to access this template. If you need to write a DMP for the funding agencies NWO, ZonMw, or ERC, you can use the VU template as well.


VU template

You can find the VU DMP template in DMPonline. It includes concise guidance on how to complete your DMP.

You can select the VU template by taking the following steps (see also the picture below).

  1. On your dashboard, click on Create plan.
  2. Enter the title of your research project (you don’t have to select the check box for mock testing).
  3. Select Vrije Universiteit Amsterdam as your primary research organisation.
  4. For the question on primary funding organisation, select the check box on the right, saying that no funder is associated with your plan.

 

 

 

If you’re aiming to write a full DMP based on the VU Amsterdam DMP template, please make sure you don’t select the GDPR registration form.

 

 

 

It is a legal requirement to register all data processing activities, such as research projects. Using the VU template automatically registers your project in the GDPR registration system. If your project does not have a Data Management Plan (DMP), or if you used a different DMP template, you must complete the GDPR registration form separately.

Research Data Management Basics


Effective research data management is essential for ensuring the quality, integrity, and success of your project. Every decision you make regarding your data, from collection to dissemination, can have a significant impact on your research outcomes. By understanding the key stages of the research process and planning your activities accordingly, you can identify potential challenges early, implement necessary safeguards, and maintain a structured approach to your work.

This chapter explores the Research Lifecycle, a framework that outlines the various stages of research, helping you visualize your project from start to finish. By mapping out your tasks within this lifecycle, you can create a clear timeline, improve efficiency, and anticipate potential roadblocks, ultimately leading to a smoother research experience.

The Research Data Lifecycle

How you manage your research data has a direct impact on the overall quality and integrity of your research. To fully understand how certain decisions affect your project, it is important to identify the key stages of research and map out your activities accordingly. By doing so, you can easily recognise potential bottlenecks, implement appropriate safeguards, and take a proactive approach to your project. Of course, unexpected challenges will always arise, but with good management and planning their impact on your progress can be reduced.

Research Lifecycle

The Research Lifecycle is a commonly used framework that outlines the stages involved in research. It maps research from the point of conceptualization to the final stage of dissemination. The lifecycle is broken down into six stages:

 

  • Discover and Initiate
  • Plan and Design
  • Collect and Store
  • Process and Analyse
  • Document and Preserve
  • Publish and Share

 

Tasks will often span multiple stages, but in general most tasks can be categorised into one of the six stages of the research lifecycle. If you can map out the tasks which will take place in your research, it will be easier to create a clear timeline and plan for your overall project.

If you fail to plan, you plan to fail.

 


Discover and Initiate

In this initial stage of research, it is common to explore your ideas and look at what data is relevant to your research question. This may include tasks such as evaluating existing datasets, discovering relevant literature, and formulating ideas.

Reusing data is a great way to build upon existing research and contribute to a sustainable research landscape. Improvements to data sharing infrastructure and Open Science practices have led to an increase in accessible and reusable datasets. Before you jump into creating your own dataset, it is important to check whether a suitable dataset already exists. To do so, there are several useful data repositories which can help your search.

 

Once an appropriate dataset has been identified, it is important to evaluate whether it is suitable for reuse. This consists of checking the terms and conditions of access and use (license), evaluating possible ways to access and use the data, the associated costs and time commitments, and the format of the data and metadata.

 

Searching through available datasets is a valuable step in the research process. Even if you do not find a suitable dataset, the process will give you a broader view of the topic and help you identify gaps in the current research.

 

Plan and Design

The planning and design stage of your research is the foundation for the remainder of the project. In this phase you will coordinate with any collaborators and colleagues on the best approach for your project, define the roles and responsibilities of everyone involved and ensure that all legal and ethical requirements have been considered.

The following documents are typically prepared during this period:

 

 

 

 

The VU offers a variety of support services for researchers at every stage of the research lifecycle. If you are unsure where to start, contact the central research data management team or take a look at the Research and Impact Support Portal.

It is advised to connect with support services in the early stages of your research project. Tasks such as legal agreements take time to complete; reaching out early helps avoid delays to your research project and ensures you can continue with your research.

 

Collect and Store

Collection

Now that your planning is complete, it's time to begin collecting your data. The tools you use to collect your data will differ depending on your project - for example, you may use Qualtrics, an MRI machine, or a recording device.

Research is not only about the data itself but also about how it is collected. Documenting your data collection process is essential for reproducible research. Be diligent in recording the steps taken to collect your dataset.

This can include but is not limited to documenting:

  • machine settings (e.g., calibration, parameters),
  • specific hardware used (e.g., device model, serial number),
  • methodology followed (e.g., procedures, protocols),
  • decisions made during data collection (e.g., protocol changes),
  • software packages and versions used,
  • anything that is relevant to understand how data was collected.

 

This not only allows others to replicate your process but also helps you understand your dataset later.
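The kind of documentation listed above can also be captured in a small machine-readable file stored alongside the data it describes. The sketch below is a minimal illustration in Python; every device name, setting, and protocol shown is a hypothetical example, not a prescribed format:

```python
import json
import platform
from datetime import date

# A hypothetical record of how a dataset was collected; adapt the fields
# to whatever is relevant for your own project.
collection_record = {
    "collection_date": date.today().isoformat(),
    "device": {"model": "AudioRecorder X2", "serial": "SN-0042"},  # hypothetical
    "settings": {"sample_rate_hz": 44100, "channels": 2},          # hypothetical
    "protocol": "Interview protocol v1.3",                         # hypothetical
    "software": {"python": platform.python_version()},
    "notes": "Deviation: participant 7 was interviewed online instead of on site.",
}

# Store the record alongside the data files it describes.
with open("collection_metadata.json", "w") as f:
    json.dump(collection_record, f, indent=2)
```

A plain-text lab notebook entry serves the same purpose; what matters is that the settings, devices, and decisions are written down somewhere permanent.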

 

Storage

Where you store your data during the research process will be determined by the characteristics of your project and the data you will collect. The project's needs and data sensitivity play a large role in selecting the most suitable storage location. In a later chapter, we will explore the topic of secure and suitable data storage.

 

 

 

 

 

 

 

Process and Analyse

 

Once data collection is complete, the next step is to prepare your data for analysis. This stage involves cleaning your data and selecting appropriate tools to explore your findings. The tools and methods you use will depend on the nature of your research and the type of data you've collected.


 

Data cleaning: Preparing your data for analysis.

Before analysis can begin, your data must be clean and suitable for use. Data cleaning is the process of transforming raw data into a usable format. Since raw data is rarely ready for immediate analysis, this step is essential.

Depending on your data type, cleaning may involve:

  • formatting and restructuring your data,
  • removing duplicates,
  • handling missing values,
  • removing unnecessary or irrelevant data,
  • validating data for consistency and accuracy.

You may also use software tools to assist with data cleaning and preparation, such as transcription software. Always ensure the tools you use are secure and supported by the VU. If you're unsure, contact the Research Data Support Desk for guidance.
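As a minimal illustration of the cleaning steps listed above, the sketch below uses the pandas library (assuming it is available in your environment); the column names and values are hypothetical:

```python
import pandas as pd

# Hypothetical raw survey data: a duplicate row, a missing value,
# and an untidy column name.
df = pd.DataFrame({
    "participant_id": [1, 2, 2, 3, 4],
    "age": [25, 31, 31, None, 47],
    "Response Score ": [3, 5, 5, 4, 2],
})

df = df.rename(columns=lambda c: c.strip().lower().replace(" ", "_"))  # tidy names
df = df.drop_duplicates()              # remove duplicate records
df = df.dropna(subset=["age"])         # handle missing values (here: drop the row)
assert df["participant_id"].is_unique  # validate consistency

print(df.shape)  # (3, 3)
```

Whichever tool you use, document each cleaning decision (such as dropping rather than imputing missing values) so that the path from raw to processed data stays traceable.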


 

Data Analysis: Exploring your findings

With your data cleaned and structured, you can begin the Data Analysis phase. This involves examining your data to answer your research questions and identify patterns, trends, or relationships.

Depending on your methodology and data type, you may use:

  • Quantitative tools: Programming languages like R or Python, or statistical software such as SPSS
  • Qualitative tools: Text analysis software like ATLAS.ti for coding and interpreting open-ended responses or transcripts

When working with code or analysis scripts:

  • Write clear, well-documented code,
  • Use comments to explain your logic and decisions,
  • Create a codebook that defines your variables, values, and data structures.

This promotes transparency, reproducibility, and easier collaboration.
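A codebook can itself be a simple machine-readable file. The sketch below shows one possible structure in Python, written out as JSON; the variable names and value labels are hypothetical examples, not a required format:

```python
import json

# A hypothetical codebook describing each variable in a dataset.
codebook = {
    "participant_id": {"type": "integer", "description": "Unique participant number"},
    "age": {"type": "integer", "unit": "years", "description": "Age at time of survey"},
    "condition": {
        "type": "categorical",
        "values": {"1": "control", "2": "intervention"},
        "description": "Experimental group assignment",
    },
}

# Save the codebook next to the data it documents.
with open("codebook.json", "w") as f:
    json.dump(codebook, f, indent=2)
```

A plain-text or CSV codebook works just as well; what matters is that every variable, unit, and coded value is defined somewhere your future self and collaborators can find it.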

 

Version control is an essential aspect of the documentation process and should be established at this stage, if not already implemented.

Document and Preserve

Now, your project is finished and it's time to move your data from active research storage to the archive. The archive is a long-term storage space which ensures that once the data is submitted, it cannot be altered or lost. It is VU policy to archive your data for (at least) 10 years. This stage ensures integrity and transparency in your research.  

 

A dataset consists of the following documents:

  • Raw or cleaned data (if the cleaned data has been archived, the provenance documentation is also required)

  • Software (& version) needed to open the files when no preferred formats for the data can be provided

Publish and Share

The research landscape heavily promotes Open Science, and under this umbrella falls Open Data Publishing. Sharing your data with others can help promote collaboration, further scientific research, and increase the recognition of your research. Publishing your data in a repository is a great way to share your research and increase your research outputs.

 

The way you publish your research data depends on the nature of your project, the type of data involved, and any legal or ethical considerations. Generally, there are three common approaches to data publishing.  

1. Open Data Publishing:

  • The data is fully accessible to the public via an online repository. Anyone can view, download, and reuse the data, provided they comply with the terms of the associated license (e.g., Creative Commons).

2. Restricted Data Publishing:

  • The data can be reused, but the access is controlled. This may apply to the entire dataset or only specific parts of it. Potential users must submit a request and meet specific requirements, often outlined in the reuse license, before accessing the data. While the data itself is not publicly accessible, metadata and descriptions of the data are available online.
  • Data may be restricted due to potential privacy risks.

3. Closed Data Publishing:

  • The data itself is not published. Only a description of the dataset (its metadata) is made available online, for example because of legal, ethical, or privacy constraints. The data remains archived and available for verification purposes only.
Regardless of the chosen publishing approach, data must remain available for verification purposes. This ensures that the integrity of the research can be confirmed if questions arise about the validity of the findings.

 

 

 

 

Once you have determined what data you can and cannot share publicly, you can decide which repository to publish in. You can find more information on specific repositories here. Each repository has its own requirements, but it is always required to include the code you used to analyse your data, as well as clear reuse documentation and a reuse license.

Summary

Organisation and planning are key to a successful research project. Best practices and policies are often in place to ensure data management is considered at all stages of the research lifecycle. However, data management is not just an administrative task; it is key for promoting good scientific practices and for many more reasons.

Good data management helps: increase the integrity of your research; contribute to the impact of your research; improve the quality of your research; support future reuse of data; and make your job easier as a researcher!

 

Task 1:

Think of your own research. Can you imagine what tasks you will take on?

  1. Write down all the different tasks which you anticipate will occur during the entire research project. The more detailed you make this list, the better overview you will have of your project.
  2. Once you have a complete list created, assign them to the appropriate stages of the lifecycle. If some tasks will take place over multiple stages, you can note them in both sections.

 

This list can be the starting point for your planning; if you want to go one step further, you can place these tasks on a timeline. This will help keep you focused and help you set realistic goals.

Some examples of tasks are: selecting a research topic, choosing a research methodology, thinking about resources (money, people), discussing ideas with colleagues and/or supervisors, writing down and finalising the details of the research proposal and the project’s research design, preparing and submitting an application for ethics approval, arranging software needed for data analyses, preparing manuscripts for journal publication, writing a data management plan, selecting a data storage location, etc.

 

The VU's data management plan template is modelled on the research lifecycle, so you can use the information gathered throughout the course when designing your own data management plan.


 

Task 2:

Data package and Data assets

The data package


 

Understanding how data is structured within a research project is crucial for effective data management. Each individual piece of data or documentation is known as a Data Asset, which may evolve as the research progresses. These assets collectively form a Data Package, the final compilation of all collected, analyzed, and processed data, along with the necessary contextual information to ensure its future usability.

In this chapter, we will explore the different types of data assets, how they change throughout the research process, and how to effectively organize them. By properly managing your data assets, you can enhance the quality, reproducibility, and long-term impact of your research.

Research data is often thought of as numerical data; however, research data is full of variety and can differ immensely. It includes all physical and digital information collected, observed, generated, or created for analysis. Additionally, administrative documents such as key files, informed consent forms, and interview guides are essential components of research data that contribute to the FAIRness of a project.

 

 

FAIRness refers to your data being Findable, Accessible, Interoperable, and Reusable. You will often see this term referred to when discussing research data. The goal of FAIR data is to ensure transparency, longevity, and availability of research data. Applying these four principles to your research data increases the usability of your data and ensures it remains valuable now and into the future. Good data management contributes to the FAIRness of your data. For more information on what FAIR means to your research check out this resource.

 

 

 

 

 

What is a data asset?

When we think of research data, we often only consider numerical data; however, this does not truly reflect the diversity of research data. Research data includes all information (physical and digital) collected, observed, generated, or created for the purpose of analysis to produce and validate original research results. Administrative documentation such as a key file, informed consent and interview guides should also be recognized as important elements of research data.  

 

The term 'Data Asset' refers to an individual piece of data or documentation within a project. This can be a CSV file containing your raw data, the transcripts of an interview, or the results of an EEG scan. These data assets may transform throughout your project: for example, once you have cleaned a dataset, the cleaned version is a new data asset and should be treated as one. As you progress through your project, you will create and collect various data assets.

All of these assets together form a 'Data Package'. This is the final product of your research data. This package should include the data you collected, analysed, and processed, along with the contextual information that describes it (metadata). It should also contain the guidelines and protocols you followed, details about how the data was collected, its potential future uses, and any other information needed to understand, verify, or reuse the dataset. Preparing a well-documented data package is especially important when it comes to archiving.

Think back to task 1 of the previous chapter. Can you think of the data assets which will be created during each task?

 

 

Throughout the lifetime of a project, many data assets will be collected and created. The organisation of these data assets is the key to good data management. You can categorize your data assets into four groups:

  1. Raw Data
  2. Processed Data
  3. Analysed Data
  4. Other Data

Data categorisations

 

Raw data

Raw data is the original, unprocessed data collected at the start of your research. It has not been cleaned, analysed, or reorganized. Depending on your project, raw data can take many forms.

Examples include:

  • audio recordings or interviews,
  • responses from questionnaires,
  • measurements collected by devices,
  • or biological samples.

What counts as raw data depends on your research context. If you are reusing a dataset that was cleaned by another research group, that version may be considered your raw data. Likewise, if a laboratory provides you with processed outputs, those files are your raw data for the purpose of your project.

 

Processed data

Once you have collected the raw data, it will then be used to create your processed data. This is raw data that has been altered in some way, often with the intent to make it suitable for analysis. This includes processes such as cleaning the data, pseudonymizing the data, and performing statistical analysis. Within the research lifecycle, we can consider this data in a preparation stage. Processing raw data produces a new, structured dataset that is ready for analysis and can be used to answer your research question.
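As a minimal illustration of one such processing step, the sketch below pseudonymises a small set of records in Python: direct identifiers are replaced by generated codes, and the key linking codes back to names is kept in a separate object that, in practice, would be stored securely and apart from the research data. All names and values are hypothetical:

```python
import secrets

# Hypothetical records containing a direct identifier (the name).
records = [
    {"name": "John Doe", "score": 3},
    {"name": "Jane Roe", "score": 5},
]

# The key maps pseudonyms back to names. In a real project it must be
# stored securely, with restricted access, separate from the research data.
key = {}
pseudonymised = []
for record in records:
    code = "P" + secrets.token_hex(4)  # random code, e.g. "P3f9a1c2e"
    key[code] = record["name"]
    pseudonymised.append({"participant": code, "score": record["score"]})

print(pseudonymised)
```

Note that pseudonymised data is still personal data under the GDPR, because the key makes re-identification possible.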

 

Analysed data

Once your data has been cleaned, formatted, and organised, the next step is analysis. Analysed data refers to the output of this stage - the results generated from applying statistical, computational, or qualitative methods to your processed dataset.

This is typically the version of the data you use to support your findings and present in publications. It may include:

  • Graphs or charts generated from statistical tests (e.g., logistic regression plots)
  • Summary tables or model outputs
  • Coding schemes or thematic frameworks from qualitative analysis
  • Visualisations that communicate patterns, trends, or relationships in your data

 

 

Other data

Finally, you have the remainder of your data assets. This category can also be considered your metadata and supporting documentation. Throughout your project, you will have amassed documentation that provides valuable context for your other data assets. Including these data assets ensures that others (and you yourself) can understand and interpret your data package at a later stage. Examples of documentation include README files, codebooks, software packages, interview guides, metadata files, and any other files used to document your process.

 

 

Below are two examples of how to structure your data assets.

The first example consists of interview data which is transcribed and analysed using self-written R code.

 

 

The example below consists of experimental physical data.

 

 

Summary

Within a research project, individual pieces of data or documentation are referred to as Data Assets. These assets evolve throughout the research process, for example, when raw data is cleaned or transformed into a new version. Properly tracking and managing these assets is essential for maintaining data integrity.

The final collection of all data assets forms the Data Package, which includes:

  • Collected, analysed, and processed data
  • Metadata that describes the data
  • Research protocols and guidelines
  • Information on data collection and potential future use

A well-organized data package is key to effective archiving and future reuse. To streamline data management, data assets can be categorized into four groups: Raw Data, Processed Data, Analyzed Data, and Other Data. Proper organization of these assets ensures clarity, reproducibility, and long-term usability of research data.


Task 1:

Think about the data assets you will collect for your research project.

  1. List all the data assets you will create and collect for your project (once again, the more detail the better!).
  2. Categorise them into raw, processed and analysed data. Think of the data assets which will transform throughout your project and the ones which will remain consistent.

 

Having a complete list of data assets helps in the later stages of the data management plan and gives you an overview of your data throughout the entire research lifecycle.

 

Personal Data

Personal data


 

In today's research landscape, handling personal data responsibly is a critical aspect of ensuring the integrity of your project and the protection of individuals' rights. Personal data refers to any information that can identify a person, either directly or indirectly, and includes a wide range of data types such as names, contact details, or even more sensitive information like health records or interview responses.

As you collect and process personal data in your research, it is essential to be aware of the legal frameworks governing its use. One of the most important regulations in this area is the General Data Protection Regulation (GDPR), which sets strict rules on how personal data should be managed, stored, and shared to safeguard privacy.

This chapter will guide you through the key concepts surrounding personal data, its legal protections under the GDPR, and how to ensure your research complies with these regulations. You will learn about the rights of individuals, the responsibilities of researchers, and the measures necessary to protect personal data throughout its lifecycle, from collection to storage, and eventual disposal. By understanding and applying these principles, you can ensure that your research respects privacy while maintaining high ethical standards.

What is personal data?

'Personal data' refers to any information related to an identified or identifiable natural person.

A 'natural person' means the data relate to a living person. This information can be objective or subjective; it can differentiate one individual from another and says something about them.

In general, if you are using data collected about people, it is always best to assume it is personal data.  

 

Personal data can be objective or subjective:

Objective personal data

This information is factual and can be verified. It does not involve opinions, feelings, or personal judgments.

Example:

  • Name: John Doe

  • Date of Birth: August 15, 2005

  • Grade Level: 11th Grade

  • GPA: 3.8

  • Attendance Record: 96% present

These are factual pieces of information that can be confirmed through school records.

 

Subjective personal data

This refers to opinions, feelings, or personal perspectives. It is based on how someone feels or thinks, not on measurable facts.

Example:

  • Favorite Subject: “I love literature because it allows me to express myself creatively.”

  • Self-Assessment: “I think I’m a great team leader in group projects.”

  • Career Goals: “I hope to become a psychologist because I enjoy helping others understand themselves.”

  • Learning Style: “I learn best through visual aids and hands-on activities.”

These reflect personal opinions, preferences, or beliefs that may vary from person to person.

 

 

Personal data is a broad term and covers more data types than we might initially think. Not only does it include information which can be directly linked to an individual, but it also encompasses information which can be used as a puzzle piece to re-identify someone.

 

Directly and indirectly identifiable data

Directly identifiable personal data includes information such as name, address or photographs/video recordings of faces, for which little to no effort and no additional information are required to determine to whom the data belong. These types of data are what we most commonly relate to as personal data. However, it is important to be aware that personal data can also be indirectly identifiable.

Indirectly identifiable data require more effort as well as additional information to determine to whom the data belong. Indirectly identifiable personal data include genetic information, data that are unique to an individual, datasets with extreme or unusual values (e.g., extreme physical measurements unique to elite athletes, highly unique employment history) or any other characteristics about a person (e.g., ethnicity, gender, occupation and/or education) that, when combined into one record, can single out that person as unique in your dataset. Indirectly identifiable data may not immediately identify an individual, but they do provide the potential for identification of that individual.

When collecting any data from participants, it is best practice to minimize the amount of personal data collected. You should think critically about why you need specific information from your participants and always strive to minimize the personal data you collect.
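One practical way to spot indirectly identifying combinations is to count how many records share each combination of quasi-identifiers; any combination that occurs only once singles out a person in your dataset (a basic k-anonymity-style check). The sketch below illustrates this in Python with a hypothetical dataset:

```python
from collections import Counter

# Hypothetical records: no names, but combinations of indirect
# identifiers can still single a person out.
records = [
    {"occupation": "teacher", "city": "Amsterdam", "age_band": "30-39"},
    {"occupation": "teacher", "city": "Amsterdam", "age_band": "30-39"},
    {"occupation": "athlete", "city": "Utrecht", "age_band": "20-29"},
]

quasi_identifiers = ("occupation", "city", "age_band")
counts = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)

# Any combination occurring only once makes that record unique in the dataset.
unique_combinations = [combo for combo, n in counts.items() if n == 1]
print(len(unique_combinations))  # 1: the athlete record stands out
```

If such unique combinations appear, consider collecting coarser categories (e.g., age bands instead of exact ages) or dropping the variable entirely.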

 

 

 

The General Data Protection Regulation (GDPR/AVG)

The GDPR outlines an additional category of personal data which requires extra care: the special categories of personal data. When working with the following special categories, stricter conditions apply:

  • Racial or ethnic origin

  • Political opinions

  • Religion or philosophical beliefs

  • Trade union membership

  • Health (including any type of physical measurement, or assessment of mental well-being or cognitive function, even in a non-clinical population)

  • Sex life or sexual orientation

  • Personal data concerning criminal convictions and personal data concerning unlawful or objectionable conduct for which a ban has been imposed

  • Biometric data (fingerprints, iris-scans)

  • Genetic data

Contact your Privacy Champion if you think this applies to your research.

7 principles of the GDPR

You are not expected to be a privacy expert. Instead, you should understand why the privacy of your participants is important and requires additional care and consideration when planning your research. Support is available to help you uphold privacy standards in your research. If you have any questions or concerns, you can always contact your Privacy Champion for guidance.

Under the GDPR, there are seven key principles for data protection. You should keep these in mind when working with personal data.

 

Lawfulness, fairness and transparency:

  • Have a valid legal ground for processing personal data.

  • Be clear, honest and open with your participants on how you will process their data.

  • Process data in a way which is fair.

 

Purpose limitation:

  • Be clear and specify why the data will be processed from the start of the project.
  • Only use the data for this purpose.
  • Inform your participants about the purpose of data processing.

 

Data minimisation:

  • Only collect data which is relevant to your research.
  • Be critical of why you need someone's personal data; if it is not necessary, don't collect it.
  • Periodically review the personal data you have, and delete what is no longer required.

 

Accuracy:

  • Take reasonable steps to ensure data is accurate and up to date.
  • If you discover data is incorrect, take steps to resolve this and document the process.

 

Storage limitation:

  • Be clear and transparent with participants regarding the retention period of their data.
  • Do not keep data for longer than necessary.

 

Integrity and Confidentiality:

  • Implement the appropriate measures to ensure the protection of personal data.
  • Speak to your supervisor or data steward to ensure these measures are correct.

 

Accountability:

  • Take responsibility for how you handle your data.
  • Document clearly how you will handle your data.
  • Be aware of the processes for reporting a data breach and follow them if necessary.

Summary

Personal data applies to any data that can be used to (re)identify a person. It is important to remember that we live in a world with increasing datafication, meaning more and more personal data is available that can be used to re-identify participants. Protecting personal data means being aware of ways in which identification is possible. For example, someone's job title, years of experience, and field of work may not be directly identifiable on their own. But combined with data available publicly on LinkedIn, this information can be used to identify someone and find further details about them.

If you are unsure whether your data would be considered personal data, reach out to your supervisor or a Data Steward.

Task 1

You will need the list of data assets created in the previous chapter to complete this task.

  1. Evaluate each data asset and determine whether they contain any personal data.
  2. Take note of each data asset that contains personal data.
  3. Take note of each data asset that contains a special category of personal data.

Working with personal data

Working with personal data


 

This chapter will explore essential aspects to consider when working with personal data, including the legal grounds for processing it. You will learn about the different legal bases for collecting and using personal data, such as consent, public interest, and legitimate interest, and how to determine which applies to your research.

 

Next, we will dive into the differences between anonymous and pseudonymous data. Understanding these distinctions is key to data protection: anonymous data cannot be traced back to an individual, while pseudonymous data still allows for potential identification with additional information. You will explore how these types of data impact privacy risks, legal requirements, and methods for managing data throughout your research project.

Finally, this chapter will present some key security measures required when working with personal data. These measures help ensure data remains confidential throughout the research project. This section will outline practical steps for safeguarding data, including access controls, secure storage solutions, file encryption, and general best practices. By implementing these measures, your research can uphold its ethical standards and maintain the trust of participants whose data is being used.

By the end of this chapter, you will have a clearer understanding of how to handle personal data ethically and legally, ensuring compliance with data protection laws while protecting the privacy of individuals involved in your research.

Legal ground

You will often hear the term 'processing' when discussing personal data. This is an umbrella term which refers to anything you do to the data. This can include collecting data, reusing data, cleaning data, (long term) storage, sharing data, and even deleting data.

 

When processing personal data, we must first have a 'legal ground'. We touched briefly on this in the last section, but now we will learn what this means in practice. In research, the most common legal ground for processing personal data is 'informed consent'. It is a researcher's responsibility to appropriately inform their participants on what will happen to their data, what it will be used for, and how they can report issues or concerns relating to data privacy. This is typically done within 2-3 documents:

  1. Informed consent:

    • This form clearly describes the data you will collect and for what purpose. It should include an outline of what is expected during participation. If you intend to make data available for reuse, this should also be addressed. The language and terminology used in the form should be comprehensible and tailored to the participant, considering the target audience's age, capability, and language skills. Consent should always be voluntary, and participants should always be allowed to cease their participation at any stage of the research.
  2. Participant information letter:

    • This information letter outlines the purpose of the data being collected and provides additional context about the research project. It should be written in clear and accessible language so that participants can easily understand it. Participants must be given sufficient time to ask questions and express any concerns they may have. A copy of the information letter should be provided for participants to keep and refer to at any time during or after the study.
  3. Privacy statement (if required):

    • This statement should clearly outline all processing activities that will occur with the data, list all parties who can access the data, and inform participants how to report concerns about data misuse or data protection.
    • This statement should be available to participants both during and after the research project. If your project has a dedicated website, you can publish it here. Alternatively, you can upload it to the Open Science Framework (OSF) and generate a persistent identifier (PID) link for long-term access.

 

There are other legal grounds for processing personal data within the GDPR. A member of the VU privacy team must evaluate whether these are suitable, so do not make this decision yourself. If you think they may apply to your project, reach out to the faculty Privacy Champion.

 

 

 

Templates are available for all the documents listed above; contact either your supervisor or Data Steward to access these.

Anonymous and pseudonymous data

When collecting data from individuals, it is important to minimise the amount of personal information gathered. Wherever possible, avoid collecting directly identifiable data, such as names, addresses, or contact details, unless it is essential to address your research question. This approach helps protect participants' privacy and reduces the risk of re-identification.

 

If personal data must be collected, it should be de-identified to the greatest extent possible. Two commonly used terms related to de-identification are anonymous and pseudonymous, but they refer to different concepts.

Anonymisation does not equal pseudonymisation

 

Anonymous data means a participant can never be (re-)identified from the data contained in a dataset, even if the data is merged with another dataset. Full anonymity is very difficult and takes time and consideration to achieve; always consider the likelihood of re-identification when evaluating if the data is indeed anonymous. If you are unsure, reach out for support.

Examples of anonymous data:

  • aggregated data (e.g., 17% of participants have preference X)
  • randomised data, where, within a group of participants, the data have been randomly swapped so that the participants cannot be traced back to a person (simulation dataset for universities)
Be aware: small sample sizes and unanimous answers can impact the anonymity of a dataset. This should be considered when evaluating whether you can classify your data as truly anonymous.
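To make the aggregation example concrete, here is a minimal sketch of how individual-level rows are replaced by a single summary statistic (the response values below are hypothetical):

```python
from collections import Counter

# Hypothetical individual-level responses; on their own, these rows
# could be linked back to participants via other variables
responses = ["X", "Y", "X", "X", "Y", "X"]

# Aggregation keeps only the summary statistic and discards the rows
counts = Counter(responses)
share_x = counts["X"] / len(responses)
print(f"{share_x:.0%} of participants have preference X")
```

Note the caveat above still applies: with a very small sample, even an aggregate can be identifying (for example, if 100% of a three-person group gave the same answer).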

 

Pseudonymous data means a participant can still be re-identified from the dataset, but only with the help of some additional information.

Techniques for pseudonymisation:

  • removing directly identifiable information such as a name and replacing it with a random identification number
    • The random IDs are stored in a so-called key file
  • generalisation (categorising variables into groups)
  • removing outliers
If you use a key file to store the link between identifiable information and the associated random IDs, this should be stored in a separate location from the remainder of your research data. This increases the security of your data and reduces the risk of unauthorised re-identification of participants if your research data is compromised.
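As an illustration, replacing names with random IDs and building a separate key file might look like the following sketch (the field names and example records are hypothetical, not a prescribed VU workflow):

```python
import secrets

def pseudonymise(records, id_field="name"):
    """Replace the identifying field with a random ID.

    Returns the pseudonymised records and a key mapping
    (random ID -> original value) to be stored separately.
    """
    key = {}
    out = []
    for rec in records:
        pid = "P" + secrets.token_hex(4)  # random, non-meaningful ID
        key[pid] = rec[id_field]
        new_rec = dict(rec)
        new_rec[id_field] = pid
        out.append(new_rec)
    return out, key

records = [{"name": "Alice", "score": 7}, {"name": "Bob", "score": 9}]
pseudo, key_file = pseudonymise(records)
# 'pseudo' travels with the research data; 'key_file' must be stored
# in a separate, access-controlled location, as described above.
```

The key point of the design is that the research data alone no longer identifies anyone; re-identification requires the key file, which is why the two must never be stored together.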

 

 

Below you will find a table which portrays the differences between terms.

 

 

De-identification table
De-identification table

Security Measures

The seven principles of the GDPR, which were discussed in the previous chapter, outline the requirements when working with personal data. How we apply appropriate security measures is guided by these principles to ensure potential data privacy risks are mitigated.

This section will discuss some of the most common security measures applied to (personal) research data. This is not an exhaustive list; always discuss whether you have applied the appropriate security measures to your research data with your faculty Privacy Champion.

 

Data access control:

Restricting access to personal data ensures that only authorised individuals can view or process the data. This includes setting up role-based permissions, using secure login procedures (e.g., multi-factor authentication), and maintaining logs of who accessed the data and when. Regular reviews of access rights should be performed to prevent unauthorised use.
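As a toy illustration of role-based permissions (the roles and dataset names below are invented for the example; real access control is configured in the storage platform itself, not in your analysis code):

```python
# Each dataset lists which project roles may access it (hypothetical)
PERMISSIONS = {
    "interview_transcripts": {"pi", "researcher"},
    "anonymised_results": {"pi", "researcher", "student"},
}

def can_access(role: str, dataset: str) -> bool:
    """Grant access only when the role is explicitly listed (least privilege)."""
    return role in PERMISSIONS.get(dataset, set())

print(can_access("student", "interview_transcripts"))  # denied by default
```

The design choice to return False for any unlisted role or unknown dataset mirrors the least-privilege principle: access is denied unless explicitly granted.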

 

Secure data storage and transfer:

Personal data must be stored in secure environments, such as institutionally approved servers or encrypted cloud services. The options available at the VU will be discussed in the following chapter. You should always avoid using unapproved data storage platforms (e.g., Dropbox or Google Drive).

Data transfers, especially over the internet, should always be done using secure methods. The VU supports the use of Surf Filesender and Zivver for secure data transfer, but only when absolutely necessary.

  • Avoid using personal email or unapproved platforms for sending or sharing sensitive data.

 

File encryption:

Encrypting files containing personal data adds an extra layer of protection for data that requires additional safeguards. This ensures that even if the files are accessed or intercepted by unauthorised parties, the contents remain unreadable without the correct decryption key. Both full-disk encryption and file-level encryption tools (such as Cryptomator) can be used.

  • Always consult your supervisor or data steward before implementing encryption, as improper use may lead to data becoming inaccessible.

 

Third-party software/tools:

Only approved third-party tools and services should be used when processing or storing personal data. These tools must comply with relevant data protection laws and institutional policies. It is important to review their privacy policies and data processing agreements, and to ensure that data is not transferred outside approved jurisdictions without appropriate safeguards (such as Standard Contractual Clauses or adequacy decisions).

  • Free, browser-based tools, such as SurveyMonkey, online document converters, or transcription websites, are not permitted for processing personal data due to privacy and security concerns. Always consult your Privacy Champion to ensure you're using approved and compliant tools.

 

Research projects that handle highly sensitive data, such as highly identifiable data and/or data collected from a vulnerable population, may require additional security measures beyond those discussed in this section. If your project works with highly sensitive personal data, you should reach out to your Privacy Champion because a full risk assessment may be required. If you are unsure whether your data is highly sensitive or not, speak to a Data Steward or Privacy Champion.

Summary

When working with personal data, you must ensure you are complying with regulatory and ethical guidance. This requires you to have a level of awareness regarding your responsibility as a researcher when handling personal data. It is not expected that you will be a privacy expert, but you should think critically when collecting personal data and reach out for further support if necessary.

In this chapter, we looked at what a legal ground is for processing personal data and the difference between anonymous and pseudonymous data. The following tasks will demonstrate how this knowledge is applicable in the research process.

 

If you think your legal ground is not informed consent, contact your privacy champion to discuss this further. The decision to use a legal ground outside of informed consent can only be made with insight from a privacy expert.

Task 1:

Thinking of the project you will work on, consider what legal ground will be used when processing personal data. On a page, list all the personal data collected from participants and indicate the legal ground that will be used. Use the following format:

Personal data Legal ground
   
   
   
   

Task 2:

Press 'start' to commence the second task.

Data storage

Data storage


 

In this chapter, you will learn about the key requirements to consider when choosing where to store your research data. VU offers several storage options for researchers; this chapter outlines the benefits and limitations of each. Before selecting a storage solution, it is important to first understand the types of data you will be working with. This will help ensure that you choose the most appropriate storage option based on the specific needs of your project.

Storage is something we encounter daily, whether it's photographs on our phones, tax records, or old boxes tucked away in the shed. The way we store these items can greatly affect our ability to find and retrieve them safely when needed.

For example, you wouldn't store an expensive sports car in the same place as an old tractor, as you'd be concerned about damage or theft of the car. Similarly, you wouldn’t park a bus in a private car space, as it simply wouldn’t fit.

The same logic applies to research data. The nature of your data will guide you in selecting the most appropriate storage option. Consider factors such as the sensitivity of your data, its volume, and how you plan to interact with it once it is stored. These considerations will help you choose the right solution for your research needs.

 

Storage considerations

There are several storage options available to VU researchers, each with its own set of benefits and limitations. When deciding where to store your research data, it is important to first assess your specific requirements and use this evaluation to make an informed decision about which storage option is most suitable for your research data. This section will highlight the key factors that influence storage selection, while the following section will help align these factors with the available storage options.

 

Data Quantity:

  • Depending on the kind of data you work with, storage capacity can be an important factor in your decision. This refers to the total file size and number of files you will produce throughout your research. This consideration is particularly relevant if you work with data such as EEG, MRI, fMRI, video files, or high-resolution imaging. Data volumes exceeding 500GB can be classified as large.

 

 

 

Data Sensitivity:

  • Regulations and policies for handling sensitive (personal) data require specific conditions for data storage. If working with personal data, it is important to consider the vulnerability of the population and the identifiability of the information collected; the higher these are, the more sensitive your data is. Features such as restricted access, two-factor authentication, and encryption increase the suitability of the storage location.

 

 

Data Sharing:

  • If you are working with organisations outside of the VU, not all storage options will be suitable. It is important to consider who you will be collaborating with and whether they need access to the data. Collaboration does not always require that all parties have access to all of the data, so considering the level of access will also help determine where to store your data (e.g., at the folder and sub-folder level).

 

Physical or Digital Data:

  • Data comes in a variety of formats. As we move towards digitisation, there is an increased focus on the digital data produced during research. However, when working with data which is physical, such as paper questionnaires, informed consent, physical samples, etc., you should also choose a suitable storage location for these physical data.
The above factors are discussed with requirements in mind, but it is also important to consider which requirements do not apply. For example, if your data is non-sensitive and non-personal, you should not add security measures that are unnecessary.

Storage options

This section provides an overview of all storage options available at the VU. While it is useful to understand the benefits and limitations of each, not all options are accessible to students. Be sure to check with your supervisor early in your project to determine which storage solutions are available to you.

 


Yoda

Benefits:

  • Approved for high privacy/ confidentiality risks
  • Storage of large volumes of data that don't need to be frequently accessed for processing/ analyzing
  • Creation of structured metadata to describe your research data (FAIR)
  • Access possible for external users (2FA)
  • Cost covered up to 500GB
  • Archiving and data publishing available through YODA
  • Links with PURE

Limitations:

  • Data will likely need to be copied locally prior to data processing/ analysis
  • Lacks desktop sync
  • Does not allow for access management at a folder and sub-folder level; everyone in the YODA group has access to all folders and sub-folders

 

Research Drive

Benefits:

  • Approved for high privacy/ confidentiality risks
  • Storage of large volumes of data that need to be regularly accessed for processing/ analyzing
  • Similar to SurfDrive, but works on a project level rather than individual
  • Has a desktop sync client for easy management of locally copied data
  • Facilitates access management at the folder and sub-folder level
  • Access possible for external users
  • Cost covered up to 500GB

Limitations:

  • Requires encryption for very high risk data
  • Requires syncing of data locally before processing/ analyzing
  • Does not offer structured metadata
  • Not suitable for archiving or publishing

     

SciStor

Benefits:

  • Storage of very large volumes of data that need to be regularly accessed for processing/ analyzing
  • Data can be accessed directly from SciStor without copying locally
  • Best option for high performance computing
  • Allows for access management at the folder and sub-folder level

Limitations:

  • Access not possible for external (non-VU) users
  • No coverage of storage costs, but costs are kept low
  • Additional measures required for very-high risk data
  • Access rights managed entirely by IT for Research, changes can only be made upon request
  • Not suitable for archiving or publishing

     

Sharepoint

Benefits:

  • Replacement for the previously used G-Drive
  • Similar to OneDrive, but ensures data storage is linked to a project rather than an individual
  • Allows for access management at a folder and sub-folder level
  • Access possible for external users

Limitations:

  • Requires encryption for high risk data
  • Very easy to grant access to data, meaning unauthorised data access can happen by mistake
  • Difficult to maintain an overview of who has access to folders and Teams channels
  • Does not offer structured metadata
  • Not suitable for archiving or publishing
  • Managed externally by Microsoft, unlike VU- or SURF-hosted services; this external management may raise additional security and access concerns

 

SurfDrive: (TO BE FILLED IN BASED ON CURRENT ADVICE 24/06/25)

Benefits:

 

Limitations:

 

Summary

Before selecting a storage location for your research data, you must first identify the key requirements for your project and data. This chapter has introduced important factors to consider when defining these requirements, along with an overview of the most common storage options available at VU. However, in some cases, a project may fall outside the scope of these standard requirements, and the existing storage options may not offer the necessary features. If this applies to your project, you should contact the data support staff to discuss your needs and determine whether a custom solution is required.

Don't forget: If you use a key file to store the link between identifiable information and the associated random ID, this should be stored in a separate location from the remainder of your research data. This increases the security of your data and reduces the risk of unauthorised re-identification of participants if your research data is compromised.

Task 1:

  1. Identify Your Requirements: Consider factors such as data sensitivity, security, accessibility, storage capacity, and collaboration needs.
  2. Review Available Storage Options: Explore the standard storage solutions provided by VU and compare them against your requirements.
  3. Assess Compatibility: Ensure the chosen storage option meets compliance, backup, and quantity needs.
  4. Seek Guidance if Needed: If your project has unique requirements that existing options don’t meet, consult the data support staff for advice on a custom solution.
  5. Make Your Selection: Choose the most suitable storage option and set up your data management plan accordingly.

 

Metadata and documentation

Documentation and Metadata


 

In research, the value of your data extends beyond its initial collection and analysis; it also lies in how well it is documented and described for future use. In this chapter, you will learn the importance of good documentation and detailed metadata, and how to create both.

 

This is where metadata and documentation play a crucial role. Metadata refers to the structured and unstructured information that describes, explains, or locates your data, providing context and making it easier to discover, interpret, and reuse. Proper documentation, on the other hand, includes detailed explanations about the data’s creation, structure, and how it should be handled or analysed.

Together, metadata and documentation ensure that your research data remains comprehensible and accessible over time, even by individuals who were not involved in its original collection. In this chapter, we will explore the importance of metadata and documentation in research, best practices for creating them, and how they contribute to data integrity, sharing, and long-term preservation. Whether you are working with datasets, surveys, or experimental results, understanding how to effectively document your research is essential for maintaining transparency and ensuring the reproducibility of your work.

 

Metadata

Metadata is essential for managing and understanding research data, but it can take different forms depending on how it is organised and used.

Structured metadata refers to metadata that is organized in a defined, consistent format, often within a specific schema or database. This allows for easy searchability, categorization, and integration across different platforms, such as data repositories. 

Unstructured metadata consists of more flexible, free-form information that doesn’t follow a specific format. This could include descriptions, notes, or other contextual information that provides valuable insights but may not fit neatly into a standardized framework. While unstructured metadata may require more effort to manage and analyse, it often captures nuances and details that structured metadata cannot.
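The distinction can be made concrete with a small sketch (the field names and values below are illustrative, not a prescribed schema):

```python
# Structured metadata: fixed, machine-readable fields that a
# repository could index and search (illustrative schema)
structured = {
    "title": "Survey on study habits",
    "creator": "A. Researcher",
    "date_collected": "2024-03-15",
    "keywords": ["education", "survey"],
}

# Unstructured metadata: free-form context that fits no fixed field
# but is essential for correct interpretation of the data
unstructured = (
    "Data were collected during exam week, which may explain the "
    "unusually low response rate in the final batch."
)
```

Notice that the structured record is easy to search and compare across projects, while the unstructured note captures a nuance no fixed field would hold.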

 

Project and file level Metadata

 

Metadata can be created at different levels to describe and provide context for your data. At a minimum, you will have project-level metadata, which offers an overview of the entire research project. This may include information such as the project title, creator(s), institutional affiliation, funding details, project location, research objectives, and any ethical approvals. Project-level metadata helps others (and your future self) understand the broader context of your research.

In addition to project-level metadata, granular or file-level metadata can also be created. This refers to metadata associated with specific datasets, files, or pieces of analysis. For example, codebooks provide valuable contextual information for variables, while comments within your code can describe the purpose of functions or variables, making your scripts easier to understand and reuse. Similarly, annotations or notes on interview transcripts can help explain themes, coding decisions, or analytical choices made during qualitative analysis.

Regardless of the level, metadata should be rich, consistent, and meaningful. Well-documented metadata improves the transparency, reusability, and long-term value of your data by ensuring that both you and others can accurately interpret and work with the data, even months or years after the original project has ended.

 

Metadata refers to information that describes your data; essentially, it's 'data about data'. In many cases, terms like metadata and documentation are used interchangeably. For instance, materials such as ReadMe files or codebooks are often considered unstructured metadata but may be included within your documentation folder. There is no strict rule about whether to label such files as metadata or documentation. What's important is that you provide enough contextual information to help others, and yourself, understand and interpret your data.

 

Documentation

Effective documentation is a cornerstone of high-quality, responsible research. It ensures that your data, methods, and findings are not only understandable and accessible to others, but also reproducible, transparent, and sustainable over time. Good documentation allows your research to be verified, built upon, or reused, whether by collaborators, other researchers, or even your future self.

Documentation goes beyond simply recording your results. It encompasses detailed descriptions of your research design, how data were collected, processed, and analysed, as well as the rationale behind key decisions made throughout your project. It includes notes on software used, code written, data cleaning steps taken, and any transformations or assumptions applied. Without this contextual information, data can quickly lose its meaning and value.

Why Documentation Matters

Documentation is important because it helps make your research process clear and transparent, building trust in your results. It also allows others, and your future self, to reproduce your work and verify the findings. Good documentation ensures that your data and methods can be reused in a future project or by other researchers. Additionally, it supports continuity in long-term and collaborative projects by keeping important knowledge available, even if team members change or the project is revisited after an extended period.

What to Document

Depending on your discipline and methodology, documentation may include several key components.

  • Project-level documentation covers the purpose, objectives, team members, funding sources, ethical approvals, timelines, and overall context of your research.
  • Data-level documentation involves details such as file naming conventions, variable definitions, units of measurement, formats, data sources, and quality control processes.
  • Methodological documentation describes sampling strategies, data collection protocols, instruments or tools used, software settings, codebooks, or interview guides.
  • Analytical documentation includes scripts or code with detailed comments, version histories, statistical methods used, rationale for methodological choices, and interpretation notes.
  • Finally, standardised metadata consists of structured descriptive information that helps others understand and locate your data.
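A minimal README skeleton pulling several of these elements together might look like this (the headings are a suggestion, not a prescribed VU template):

```
Project: <project title>
Creator(s): <names and affiliations>
Date range of data collection: <start - end>

Files:
  data/survey_responses.csv  - cleaned survey responses, one row per participant
  docs/codebook.pdf          - variable definitions and units of measurement

Methods: <brief description of collection and processing steps>
Licence / reuse conditions: <how the data may be reused>
Contact: <who to contact with questions>
```

Even a short file like this lets someone outside the project open the folder and know what they are looking at.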

Best Practices for Creating Documentation

  • Start early and update regularly: Don’t wait until the end of your project to begin documenting. Make it part of your workflow.

  • Be clear and consistent: Use standardized terminology, consistent formats, and clear language.

  • Use tools wisely: Consider using electronic lab notebooks, version control systems (e.g., Git), DMPonline, or templates provided by the VU.

  • Include links between data and documentation: Ensure users can easily locate relevant documentation for any dataset or file.

  • Consider your audience: Write your documentation so that someone outside your project can understand it, even if they are not within your discipline.

 

Summary

This chapter highlights the importance of metadata and documentation in research data management. Metadata provides structured or unstructured information that describes, explains and contextualises your data, making it easier to find, understand, and reuse. Structured metadata follows a predefined format or schema, allowing for better organisation and easier retrieval, while unstructured metadata offers more flexible, free-form descriptions, capturing additional context and nuances.

Documentation plays an equally crucial role by ensuring that the processes behind data collection, analysis, and interpretation are clearly explained and reproducible. It includes detailed descriptions of research methods, assumptions, and any decisions made throughout the project. Good documentation enhances transparency, allows others to understand and reproduce your work, and helps to ensure that your research remains accessible and valuable over time.

Together, metadata and documentation are vital for ensuring the integrity, accessibility, and long-term usability of your research data, facilitating collaboration and enabling future researchers to build upon your work.


Task 1:

Apply to Your Research
Think of a research project you are working on (or imagine a simple one, like a survey-based study).

  1. Write down three pieces of structured metadata you would include for your dataset.

  2. Write a short paragraph of unstructured metadata that provides additional context for the data.

  3. List two key elements you would include in your project documentation to ensure reproducibility.


Task 2: (to be embedded)

Identify Metadata Types
Look at the following examples of information. For each, indicate whether it is an example of structured metadata, unstructured metadata, or documentation:

Archiving

Archiving your data


 

We have reached the final stages of the research lifecycle. Your data has been through a lot, from initial collection to detailed analysis, and now it’s time to give it a permanent home where it can shine for the world to see.

In this chapter, you will learn the important steps of archiving your data: where to archive it, what should be included in the archive, and how to share your data responsibly and effectively with others.

It can be tempting to move on to the next project at this point, but doing so risks leaving work incomplete and missing a valuable opportunity; while the research publication is often viewed as the main output of a project, the dataset itself is a crucial scholarly product. A well-managed data package can lead to citations, reuse by other researchers, and recognition in your field. And it’s not just others who benefit; having a clear, archived record of your data makes it easier for you to revisit, verify, and build on your own work in the future.

Proper archiving is not just about organization; it's about contributing to the culture of transparency, reproducibility, and open science. Your data set could answer questions you haven't yet imagined or support future discoveries.

If you're interested in reading more about archiving at the VU, check out the faculty archiving guidelines.

Where to begin

Archiving refers to the process of storing your data in a long-term storage facility. The version of the data that is archived should be complete, with all data and information used throughout the study. This includes the documentation and metadata you have created along the way and any other information necessary to interpret and potentially reuse your data.

Archiving research data is a requirement, especially if there is an associated publication. Archiving ensures your results can be verified at a later date. Long-term storage for research data is usually located in a repository with a 'lock' placed on the data. This lock ensures the data cannot be edited or altered at a later stage, allowing data integrity to be maintained and research integrity to be evaluated.

Repositories typically provide a persistent identifier, usually a DOI (Digital Object Identifier). This is a code which can be used to always link back to your data, even if the platform in which it is hosted is altered or changed.

 

 

Steps to archiving

Right now, archiving might seem a long way off, so feel free to make a note of this section and return to it in the later stages of your research. Please refer to the Faculty Archiving Guidelines for a comprehensive understanding of the requirements and their rationale.

 

The following steps are a reiteration of the good research data management practices we've been exploring throughout this course. By incorporating these steps into your workflow during the research process, rather than leaving them until the end, you will reduce your overall workload and ensure more accurate and consistent documentation. Many of these practices are already defined in your data management plan, which is why developing that plan early on is such an important part of effective data management.

 

Steps to archiving your data:

  1. Organize your data
    • Clean your files: remove duplicates, temporary files, or test versions.
    • Use clear file names: follow a consistent naming convention.
    • Structure your folders logically: group files by data type, collection date, or experiment phase.
  2. Document your data package (metadata & ReadMe)
    • Create a README file: describe what each file contains, how it was created, and any necessary context for reuse or interpretation.
    • Include metadata: use a formal metadata schema if possible (e.g. Dublin Core, DataCite).
  3. Check for sensitive or confidential data
    • Anonymize human subject data where necessary and possible.
    • Redact or restrict sensitive elements that can't be shared.
    • Review consent forms and consider what data sharing is possible (document this also).
  4. Ensure data integrity and reusability
    • Prefer open, non-proprietary file formats where possible (e.g. CSV rather than a proprietary spreadsheet format).
    • Verify that all files open correctly and are free of corruption.
    • Consider generating checksums so others can confirm the files are unaltered.
  5. Package the data
    • Bundle:
      • Research data,
      • Code/scripts,
      • Documentation files (README, metadata, methodology, licenses)
    • Compress the files logically (e.g. ZIP the entire package or subfolders by type).
  6. Select an appropriate repository
    • Choose a trusted archive that aligns with your research area, such as:
      • Disciplinary repositories (e.g. GenBank, ICPSR, Dryad)
      • Institutional repository (Yoda)
      • General-purpose repositories (OSF, DataverseNL, Zenodo)
    • If your dataset contains sensitive/personal data, Yoda is the recommended option.
  7. Assign a License and DOI
    • Choose a clear license (e.g. Creative Commons or a VU custom license) that explains how others can use the data.
    • Use a repository which will assign a DOI (Digital Object Identifier) for citation and persistent access.
  8. Verify and submit
    • Double-check all files and metadata for accuracy and completeness.
    • Submit your data package and confirm that it's discoverable and accessible.
  9. Cite and promote your data
    • Include the DOI in your publications.
    • Register your dataset in Pure if you archive it in a repository outside the VU.
    • Promote your dataset in presentations or on your professional profiles (e.g. ORCID, LinkedIn).
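The packaging and integrity steps above (steps 4 and 5) can be sketched in a short Python script. This is a minimal illustration, not an official VU tool: the folder layout and file names are hypothetical, and most repositories also offer their own upload interfaces.

```python
import hashlib
import zipfile
from pathlib import Path


def package_dataset(data_dir: str, archive_name: str) -> Path:
    """Bundle a cleaned project folder into a ZIP archive for deposit.

    Assumes data_dir already contains the organized data, README, and
    metadata files described in the steps above (hypothetical layout).
    A SHA-256 checksum manifest is added so integrity can be verified
    after download.
    """
    root = Path(data_dir)

    # Record a checksum for every file in the package.
    manifest_lines = []
    for f in sorted(root.rglob("*")):
        if f.is_file():
            digest = hashlib.sha256(f.read_bytes()).hexdigest()
            manifest_lines.append(f"{digest}  {f.relative_to(root)}")
    (root / "CHECKSUMS.sha256").write_text("\n".join(manifest_lines) + "\n")

    # Compress the whole folder (including the manifest) into one ZIP.
    archive = root.parent / f"{archive_name}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(root.rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(root))
    return archive
```

A reviewer (or your future self) can then recompute the checksums after downloading the archive to confirm nothing was altered in transit or storage.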

What to include?

You should archive whatever is necessary to properly interpret your data. This includes the data itself, but also documentation about the data and the research process, as well as any code or scripts used in your research.

 

  • All research data that was created, processed, and analysed
  • Documentation
  • Metadata
  • License
  • ReadMe file
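As a starting point, a README for the archived package might follow a skeleton like the one below. The headings and folder names are illustrative, not a formal standard; adapt them to your project and discipline.

```markdown
# <Dataset title>

Authors, affiliation, and contact email
Date of collection / dataset version

## Contents
- data/      raw and processed data files
- scripts/   code used for processing and analysis
- docs/      methodology notes and additional metadata

## Methodology
Brief description of how the data were collected and processed.

## Reuse
License, preferred citation (including the DOI once assigned),
and any access restrictions or consent-related limitations.
```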


 

Summary

Archiving research data is a critical final step in the research data lifecycle. It ensures that data remains accessible, understandable, and reusable over the long term, both for your future self and for other researchers. Proper archiving supports transparency, replicability, and the broader goals of open science.

A well-archived dataset includes not only the data itself but also comprehensive metadata and documentation. Choosing an appropriate repository, preferably a trusted, discipline-specific or institutional one helps safeguard the data against loss or degradation, and ensures that it can be cited correctly.

By planning for archiving early in your research process and adhering to best practices, you contribute to a more robust and reliable scientific record. Archiving is not just about storage; it's about preserving the value of your research for years to come.


Task 1:

Choose a Repository
Choose a suitable repository for archiving your (real or hypothetical) research dataset. Briefly answer:

  1. Is the repository discipline-specific, institutional, or general-purpose?

  2. What features make it appropriate for your data?

  3. Does the repository provide a DOI or other persistent identifier?

 

Closing Summary



Congratulations on completing all the chapters of this online course!

You now possess a strong foundation in Research Data Management (RDM) and have acquired practical strategies to responsibly handle and protect research data. From initial study design to final data archiving, you’ve explored best practices for each stage of the research data lifecycle. Our hope is that you will carry this knowledge into your own work, applying it in ways that promote ethical, organised, and thoughtful research.

Throughout the course, the tasks and exercises you completed were designed not only to reinforce your understanding, but also to help you begin shaping your own Data Management Plan (DMP). Developing a strong DMP is a crucial next step, and we encourage you to refine your plan by seeking feedback from peers, supervisors, or a data management professional.

To support you in this process, you can use DMPonline, as introduced in Chapter 1. There, you’ll find tailored templates and guidance, and you can link the platform directly to your VU Amsterdam account. If you're unsure about any section of your plan or need further clarification, don’t hesitate to reach out to a data management expert - either by emailing rdm@vu.nl or by contacting your faculty data steward.

If you plan to attend a follow-up Research Data Management workshop, we recommend bringing your completed exercises from Task 1 with you. These will serve as a solid starting point for discussions and feedback during the session.


Final Words
Good data management is not just a technical skill - it’s a commitment to quality, transparency, and respect for those who contribute to and are affected by your research. By embedding ethical RDM practices into your work, you contribute to better science and a more trustworthy research environment.

We wish you success in all your future research endeavours. Stay curious, stay responsible, and don’t forget - support is always available when you need it.

 

 

Additional information resources