6. Risk Assessment & Regulation

6.1. Introduction: the essence of risk assessment

Author: Ad Ragas

Reviewer: Martien Janssen

 

Learning objectives

After this module, you should be able to:

  • explain the terms risk, hazard, risk assessment, risk management and solution-focused risk assessment;
  • explain the different steps of the risk assessment process, the relation between these steps and how the principle of tiering works;
  • give an example of a risk indicator;
  • indicate the most important advantages and disadvantages of the risk assessment paradigm.

 

Key words

Risk, hazard, tiering, problem definition, exposure assessment, effect assessment, risk characterization

 

 

Introduction

We assess risks on a daily basis, although we may not always be aware of it. For example, when we cross the street, we – often implicitly – assess the benefits of crossing and weigh these against the risks of getting hit by a vehicle. If the risks are considered too high, we may decide not to cross the street, or to walk a bit further and cross at a safer spot with traffic lights.

Risk assessment is common practice for a wide range of activities in society, for example for building bridges, protection against floods, insurance against theft and accidents, and the construction of a new industrial plant. The principle is always the same: we use the available knowledge to assess the probability of potential adverse effects of an activity as well as we can. And if these risks are considered too high, we consider options to reduce or avoid the risk.

 

Terminology

Risk assessment of chemicals aims to describe the risks resulting from the use of chemicals in our society. In chemical risk assessment, risk is commonly defined as “the probability of an adverse effect after exposure to a chemical”. This is a very practical definition that provides natural scientists and engineers the opportunity to quantify risk using “objective” scientific methods, e.g. by quantifying exposure and the likelihood of adverse effects. However, it should be noted that this definition ignores more subjective aspects of risk, typically studied by social scientists, e.g. the perceptions of people and (dealing with) knowledge gaps. This subjective dimension can be important for risk management. For example, risk managers may decide to take action if a risk is perceived as high by a substantial part of the population, even if the associated health risks have been assessed as negligible by natural scientists and engineers.

Next to the term “risk”, the term “hazard” is often used. The difference between both terms is subtle, but important. A hazard is defined as the inherent capacity of a chemical (or agent/activity) to cause adverse effects. The labelling of a substance as “carcinogenic” is an example of a hazard-based action. The inherent capacity of the substance to trigger cancer, as for example demonstrated in an in vitro assay or an experiment with rats or mice, can be sufficient reason to label a substance as “carcinogenic”. Hazard is thus independent of the actual exposure level of a chemical, whereas risk is not.

Risk assessment is closely related to risk management, i.e. the process of dealing with risks in society. Decisions to accept or reduce risks belong to the risk management domain and involve consideration of the socio-economic implications of the risks as well as the risk management options. Whereas risk assessment is typically performed by natural scientists and engineers, often referred to as “risk assessors”, risk management is performed by policy makers, often referred to as “risk managers”.

Risk assessment and risk management are often depicted as sequential processes, where assessment precedes management. However, strict separation of both processes is not always possible and management decisions may be needed before risks are assessed. For example, risk assessment requires political agreement on what should be protected and at what level, which is a risk management issue (see Section on Protection Goals). Similarly, the identification, description and assessment of uncertainties in the assessment is an activity that involves risk assessors as well as risk managers. Finally, it is often more efficient to define alternative management options before performing a risk assessment. This enables the assessment of the current situation and alternative management scenarios (i.e., potential solutions) in one round. The scenario with the maximum risk reduction that is also feasible in practice would then be the preferred management option. This mapping of solutions and concurrent assessment of the associated risks is also known as solution-focused risk assessment.

 

Risk assessment steps and tiering

Chemical risk assessment is typically organized in a limited number of steps, which may vary depending on the regulatory context. Here, we distinguish four steps (Figure 1):

 

  1. Problem definition (sometimes also called hazard identification), during which the scope of the assessment is defined;
  2. Exposure assessment, during which the extent of exposure is quantified;
  3. Effect assessment (sometimes also called hazard or dose-response assessment), during which the relationship between exposure and effects is established;
  4. Risk characterization, during which the results of the exposure and effect assessments are combined into an estimate of risk and the uncertainty of this estimate is described.

 

Figure 1. Risk assessment consists of four steps (problem definition, exposure assessment, effect assessment & risk characterization) and provides input for risk management.

 

The four risk assessment steps are explained in more detail below. They are often repeated multiple times before a final conclusion on the acceptability of the risk is reached. This repetition is called tiering (Figure 2). It typically starts with a simple, conservative assessment; in subsequent tiers, more data are added to the assessment, resulting in less conservative assumptions and risk estimates. Tiering is used to focus the available time and resources for assessing risks on those chemicals that potentially lead to unacceptable risks. Detailed data are gathered only for chemicals showing potential risk in the lower, more conservative tiers.
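To make the tiering principle concrete, the sketch below loops over increasingly refined tiers and stops as soon as the risk indicator (exposure divided by reference level, see Risk characterization below) drops below 1. The tier names, exposure estimates and reference level are invented for illustration and are not taken from any guidance.

```python
# Minimal sketch of a tiered assessment loop (tier names and numbers are hypothetical).
# Each tier refines the exposure estimate; the loop stops once the risk indicator < 1.

def risk_indicator(exposure, reference):
    """Risk indicator = exposure level / reference level (see Risk characterization)."""
    return exposure / reference

tiers = [
    # (tier name, exposure estimate in ug/L, reference level in ug/L)
    ("Tier 1: worst-case screening", 12.0, 2.0),
    ("Tier 2: refined exposure model", 5.0, 2.0),
    ("Tier 3: measured concentrations", 1.4, 2.0),
]

for name, exposure, reference in tiers:
    ri = risk_indicator(exposure, reference)
    print(f"{name}: risk indicator = {ri:.1f}")
    if ri < 1.0:
        print("Risk acceptable at this tier; no further refinement needed.")
        break
else:
    print("Risk indicator still >= 1 in the highest tier: consider risk mitigation.")
```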

The order of the exposure and effect assessment steps has been a topic of debate among risk assessors and managers. Some argue that effect assessment should precede exposure assessment because effect information is independent of the exposure scenario and can be used to decide how exposure should be determined, e.g., information on toxicokinetics can be relevant to determine the exposure duration of interest. Others argue that exposure should precede effect assessment since assessing effects is expensive and unnecessary if exposure is negligible. The current consensus is that the preferred order should be determined on a case-by-case basis with parallel assessment of exposure and effects and exchange of information between the two steps as the preferred option.

 

Figure 2: The principle of tiering in risk assessment. Initially risks are assessed using limited data and conservative assumptions and tools. When the predicted risk turns out unacceptable (Risk >1; see below), more data are gathered and less conservative tools are used.

 

 

Problem definition

The scope of the assessment is determined during the problem definition phase. Questions typically answered in the problem definition include:

  • What is the nature of the problem and which chemical(s) is/are involved?
  • What should be protected, e.g. the general population, specific sensitive target groups, aquatic ecosystems, terrestrial ecosystems or particular species, and at what level?
  • What information is already available, e.g. from previous assessments?
  • What are the available resources for the assessment?
  • What is the assessment order and will tiering be applied?
  • What exposure routes will be considered?
  • What is the timeframe of the assessment, e.g. are acute or (sub)chronic exposures considered?
  • What risk metric will be used to express the risk?
  • How will uncertainties be addressed?

 

Problem definition is not a task for risk assessors only, but should preferably be performed in a collaborative effort between risk managers, risk assessors and stakeholders. The problem definition should try to capture the worries of stakeholders as well as possible. This is not always an easy task as these worries may be very broad and sometimes also poorly articulated. Risk assessors need a clearly demarcated problem and they can only assess those aspects for which assessment methods are available. The dialogue should make transparent which aspects of the stakeholder concerns will be assessed and which will not. Being transparent about this can avoid disappointments later in the process, e.g. if aspects considered important by stakeholders were not accounted for because suitable risk assessment methods were lacking. For example, if stakeholders are worried about the acute and chronic impacts of pesticide exposure, but only the acute impacts will be addressed, this should be made clear at the beginning of the assessment.

The problem definition phase results in a risk assessment plan detailing how the risks will be assessed given the available resources and within the available timeframe.

 

Exposure assessment

An important aspect of exposure assessment is the determination of an exposure scenario. An exposure scenario describes the situation for which the exposure is being assessed. In some cases, this exposure situation may be evident, e.g. soil organisms living at a contaminated site. However, especially when we want to assess potential risks of future substance applications, we have to come up with a typical exposure scenario. Such scenarios are for example defined before a substance is allowed to be used as a food additive or before a new pesticide is allowed on the market. Exposure scenarios are often conservative, meaning that the resulting exposure estimate will be higher than the expected average exposure.

The exposure metric used to assess the risk depends on the protection target. For ecosystems, a medium concentration is often used, such as the water concentration for aquatic systems, the sediment concentration for benthic systems and the soil concentration for terrestrial systems. These concentrations can either be measured or predicted using a fate model (see Section 3.8) and may or may not take into account bioavailability (see Section 3.6). For human risk assessment, the exposure metric depends on the exposure route. An air concentration is often used to cover inhalation, the average daily intake from food and water to cover oral exposure, and uptake through skin for dermal exposure. Uptake through multiple routes can also be combined in a dose metric for internal exposure, such as the Area Under the Curve (AUC) in blood (see Section 6.3.1). Exposure metrics for specific wildlife species (e.g. top predators) and farm animals are often similar to those for humans. Measuring and modelling route-specific exposures is generally more complex than quantifying a simple medium concentration, because it not only requires quantification of the substance concentration in the contact medium (e.g., the concentration in drinking water), but also quantification of the contact intensity (e.g., how much water is consumed per day). Especially oral exposure can be difficult to quantify because it covers a wide range of contact media (e.g. food products) and intensities that vary from organism to organism.
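As an illustration of route-specific exposure quantification, the sketch below combines medium concentrations and contact intensities into a daily dose per kg body weight. All numbers are hypothetical and complete absorption via every route is assumed.

```python
# Minimal sketch of a multi-route human exposure estimate (all values hypothetical).
# Dose per route = concentration in contact medium x contact intensity / body weight.

body_weight_kg = 70.0

routes = {
    # route: (concentration in contact medium, daily contact intensity)
    "drinking water": (2.0e-3, 2.0),   # mg/L  x L/day
    "food":           (5.0e-3, 1.5),   # mg/kg x kg/day
    "air":            (1.0e-4, 20.0),  # mg/m3 x m3/day (inhalation)
}

daily_dose = {
    route: concentration * intake / body_weight_kg   # mg per kg body weight per day
    for route, (concentration, intake) in routes.items()
}

for route, dose in daily_dose.items():
    print(f"{route}: {dose:.2e} mg/kg bw/day")
print(f"total over all routes (assuming complete absorption): "
      f"{sum(daily_dose.values()):.2e} mg/kg bw/day")
```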

 

Effect assessment

The aim of the effect assessment is to estimate a reference exposure level, typically an exposure level which is expected to cause no or very limited adverse effects. There are many different types of reference levels in chemical risk assessment, each used in a different context. The most common reference level for ecological risk assessment is the Predicted No Effect Concentration (PNEC). This is the water, soil, sediment or air concentration at which no adverse effects at the ecosystem level are expected. In human risk assessment, a myriad of different reference levels is used, e.g. the Acceptable Daily Intake (ADI), the oral and inhalatory Reference Dose (RfD), the Derived No Effect Level (DNEL), the Point of Departure (PoD) and the Virtually Safe Dose (VSD). Each of these reference levels is used in a specific context, e.g. for addressing a specific exposure route (the ADI is oral), regulatory domain (the DNEL is used in the EU for REACH, whereas the RfD is used in the USA), substance type (the VSD is typical for genotoxic carcinogens) or risk assessment method (the PoD is typical for the Margin of Safety approach).

What all reference levels have in common is that they reflect a certain level of protection for a specific protection goal. In ecological risk assessment, the protection goal typically is the ecosystem, but it can also be a specific species or even an organism. In human risk assessment, the protection goal typically comprises all individuals of the human population. The definition of protection goals is a normative issue and is therefore not a task of risk assessors, but of politicians. The protection levels defined by politicians typically involve a high level of abstraction, e.g. “the entire ecosystem and all individuals of the human population should be protected”. Such abstract protection goals do not always match with the methods used to assess the risks. For example, if one assumes that one molecule of a genotoxic carcinogen can trigger a lethal tumour, 100% protection for all individuals of the human population is feasible only by banning all genotoxic carcinogens (reference level = 0). Likewise, the safe concentration for an ecosystem is infinitely small if one assumes that the sensitivity of the species in the system follows a lognormal distribution which asymptotically approaches the x-axis. Hence, the abstract protection goals have to be operationalized, i.e. defined in more practical terms that match the methods used for assessing effects. This is often done in a dialogue between scientific experts and risk managers. An example is the “one in a million lifetime risk estimated with a conservative dose-response model”, which is used by many (inter)national organizations as a basis for setting reference levels for genotoxic carcinogens. Likewise, the concentration at which the no observed effect concentration (NOEC) is exceeded for no more than 5% of the species is often used as a basis for deriving a PNEC.

Once a protection goal has been operationalized, it must be translated into a corresponding exposure level, i.e. the reference level. This is typically done using the outcomes of (eco)toxicity tests, i.e. tests with laboratory animals such as rats, mice and dogs for human reference levels and with primary consumers, invertebrates and vertebrates for ecological reference levels. Often, the toxicity data are plotted in a graph with the exposure level on the x-axis and the effect or response level on the y-axis. A mathematical function is then fitted to the data; the so-called dose-response relationship. This dose-response relationship is subsequently used to derive an exposure level that corresponds to a predefined effect or response level. Finally, this exposure level is extrapolated to the ultimate protection goal, accounting for phenomena such as differences in sensitivity between laboratory and field conditions, between tested species and the species to be protected, and the (often very large) variability in sensitivity in the human population or the ecosystem. This extrapolation is done by dividing the exposure level that corresponds to a predefined effect or response level by one or more assessment or safety factors. These assessment factors do not have a purely scientific basis: they do not only account for physiological differences that have actually been demonstrated, but also for uncertainties in the assessment, and they should ensure that the derived reference level is a conservative estimate. The determination of reference levels is an art in itself and is further explained in sections 6.3.1 for human risk assessment and 6.3.2 for ecological risk assessment.
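The sketch below illustrates this chain with hypothetical test data: a two-parameter log-logistic dose-response curve is fitted, an EC10 is derived as point of departure, and an assessment factor of 100 (an arbitrary example value) is applied to obtain a reference level. It is a simplified sketch, not a prescribed derivation procedure.

```python
# Minimal sketch: fit a two-parameter log-logistic dose-response curve to
# hypothetical toxicity data, take the EC10 as point of departure and divide it
# by an assessment factor to obtain a conservative reference level.
import numpy as np
from scipy.optimize import curve_fit

def log_logistic(conc, ec50, slope):
    """Fraction of maximal effect at a given exposure concentration."""
    return 1.0 / (1.0 + (ec50 / conc) ** slope)

conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])        # mg/L (hypothetical test)
effect = np.array([0.02, 0.05, 0.18, 0.55, 0.85, 0.97])  # observed effect fractions

(ec50, slope), _ = curve_fit(log_logistic, conc, effect, p0=[1.0, 1.0])

def ec_x(x):
    """Concentration causing a fraction x effect, from the fitted curve."""
    return ec50 * (x / (1.0 - x)) ** (1.0 / slope)

ec10 = ec_x(0.10)
assessment_factor = 100.0            # example value covering extrapolation uncertainty
reference_level = ec10 / assessment_factor

print(f"EC50 = {ec50:.2f} mg/L, EC10 = {ec10:.2f} mg/L")
print(f"reference level = EC10 / {assessment_factor:.0f} = {reference_level:.4f} mg/L")
```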

 

Risk characterization

The aim of risk characterization is to come up with a risk estimate, including associated uncertainties. A comparison of the actual exposure level with the reference level provides an indication of the risk:

\(Risk\ Indicator = \frac{Exposure\ Level}{Reference\ Level}\)

If the reference level reflects the maximum safe exposure level, then the risk indicator should be below unity (1.0). A risk indicator higher than 1.0 indicates a potential risk. It is a “potential risk” because many conservative assumptions may have been made in the exposure and effect assessments. A risk indicator above 1.0 can thus lead to two different management actions: (1) if available resources (time, money) allow and the assessment was conservative, additional data may be gathered and a higher tier assessment may be performed, or (2) mitigation options may be considered to reduce the risk. Assessment of the uncertainties is very important in this phase, as it reveals how conservative the assessment was and how it can be improved by gathering additional data or applying more advanced risk assessment tools.

Risks can also be estimated using a margin-of-safety approach. In this approach, the reference level used has not yet been extrapolated from the tested species to the protection goal, e.g. by applying assessment factors for interspecies and interindividual differences in sensitivity. As such, the reference level is not a conservative estimate. In this case, the risk indicator reflects the “margin of safety” between actual exposure and the non-extrapolated reference level. Depending on the situation at hand, the margin-of-safety typically should be 100 or higher. The main difference between the traditional and the margin-of-safety approach in risk assessment is the timing for addressing the uncertainties in the effect assessment.
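A minimal numerical sketch contrasting the two approaches, using invented numbers only:

```python
# Minimal sketch contrasting the two risk characterization approaches
# (all numbers hypothetical).

exposure = 0.5                     # predicted exposure concentration, mg/L
lowest_noec = 20.0                 # lowest NOEC of the tested species, mg/L

# (1) Traditional approach: compare exposure with an extrapolated reference level
assessment_factor = 100.0
reference_level = lowest_noec / assessment_factor
risk_indicator = exposure / reference_level
print(f"risk indicator = {risk_indicator:.1f} (potential risk if > 1)")

# (2) Margin-of-safety approach: compare exposure with the non-extrapolated value
margin_of_safety = lowest_noec / exposure
print(f"margin of safety = {margin_of_safety:.0f} (often required to be >= 100)")
```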

 

Reflection

Figure 3 illustrates the risk assessment paradigm using the DPSIR chain (Section 1.2). It shows how reference exposure levels are derived from protection goals, i.e. the maximum level of impact that we consider acceptable. The actual exposure level is either measured or predicted using estimated emission levels and dispersion models. When measured exposure levels are used, this is called retrospective or diagnostic risk assessment: the environment is already polluted and the assessor wants to know whether the risk is acceptable and which substances are contributing to it. When the environment is not yet polluted, predictive tools can be used. This is called prospective risk assessment: the assessor wants to know whether a projected activity will result in unacceptable risks. Even if the environment is already polluted, the risk assessor may still decide to prefer predicted over measured exposure levels, e.g. if measurements are too expensive. This is possible only if the pollution sources are well-characterized. Retrospective (diagnostic) and prospective risk assessments can differ substantially in terms of problem definitions and methods used, and are therefore discussed in separate sections in this online book.

 

Figure 3: The risk assessment paradigm and the DPSIR chain.

 

Figure 3 can also be used to illustrate some important criticism on the current risk assessment paradigm, i.e. the comparison between the actual exposure level and a reference level. In current assessments, only one point of the dose-response relationship is used to assess risk, i.e. the reference level. Critics argue that this is suboptimal and a waste of resources because the dose-response information is not used to assess the actual risk. A risk indicator with a value of 2.0 implies that the exposure is twice as high as the reference level, but this does not give an indication of how many individuals or species are affected or of the intensity of the effect. If the dose-response relationship were used to determine the risk, this would result in a better-informed risk estimate.

A final critical remark is that risk assessment is often performed on a substance-by-substance basis. Dealing with mixtures of chemicals is difficult because each mixture has a unique composition in terms of compounds and concentration ratios between compounds. This makes it difficult to determine a reference level for mixtures. Mixture toxicology is slowly progressing and several methods are now available to address mixtures, i.e. whole mixture methods and compound-based approaches (Section 6.3.6). Another promising development is the use of effect-based methods (Section 6.4.2). These methods do not assess risk based on chemical concentration, but on the toxicity measured in an environmental sample. In terms of DPSIR, these methods assess risks at the level of impacts rather than at the level of state or pressures.

 

6.2. Ecosystem services and protection goals

In preparation

6.3. Predictive risk assessment approaches and tools

6.3.1. Environmental realistic scenarios (PECs) – Human

under review

6.3.2. Environmental realistic scenarios (PECs) – Eco

Authors: Jos Boesten, Theo Brock

Reviewer: Ad Ragas, Andreu Rico

 

Learning objectives:

You should be able to:

  • explain the role of exposure scenarios in environmental risk assessment (ERA)
  • describe the need for, and the basic principles of, defining exposure assessment goals
  • link exposure and effect assessments and describe the role of environmental scenarios in future ERAs

 

Keywords: pesticides, exposure, scenarios, assessment goals, effects

 

 

Role of exposure scenarios in environmental risk assessment (ERA)

An exposure scenario describes the combination of circumstances needed to estimate exposure by means of models. For example, scenarios for modelling pesticide exposure can be defined as a combination of abiotic (e.g. properties and dimensions of the receiving environment and related soil, hydrological and climate characteristics) and agronomic (e.g. crops and related pesticide application) parameters that are thought to represent a realistic worst-case situation for the environmental context in which the exposure model is to be run. A scenario for exposure of aquatic organisms could be, for example, a ditch with a minimum water depth of 30 cm alongside a crop growing on a clay soil, with annual applications of a pesticide, using a 20-year time series of weather data and including pesticide exposure via spray drift deposition and leaching from drainpipes. Such a scenario would require modelling of spray drift, leaching from drainpipes and exposure in surface water, ending up in a 20-year time series of the exposure concentration. In this chapter, we explain the use of exposure scenarios in prospective ERA by giving examples for the regulatory assessment of pesticides in particular.

 

Need for defining exposure assessment goals

Between about 1995 and 2001, groundwater and surface water scenarios were developed for EU pesticide registration; these are also referred to as the FOCUS scenarios. The European Commission indicated that these should represent ‘realistic worst-cases’, a political concept which leaves considerable room for scientific interpretation. Risk assessors and managers agreed that the intention was to generate 90th percentile exposure concentrations. The concept of a 90th percentile exposure concentration assumes a statistical population of concentrations, of which 90% are lower than this 90th percentile (and thus 10% are higher). This 90th percentile approach has since been followed for most environmental exposure assessments for pesticides at EU level.

 

The selection of the FOCUS groundwater and surface water scenarios involved a considerable amount of expert judgement because this selection could not yet be based on well-defined GIS procedures and databases on properties of the receiving environment. The EFSA exposure assessment for soil organisms was the first environmental exposure assessment that could be based on a well-defined GIS procedure, using EU maps of parameters like soil organic matter, density of crops and weather. During the development of this exposure assessment, it became clear that the concept of a 90th percentile exposure concentration is too vague: it is essential to also define the statistical population of concentrations from which this 90th percentile is taken. Based on this insight, the EFSA Panel on Plant Protection Products and their Residues (PPR) developed the concept of exposure assessment goals, which has become the standard within EFSA for developing regulatory exposure scenarios for pesticides.

 

Procedure for defining exposure assessment goals

Figure 1 shows how an exposure assessment goal for the risk assessment of aquatic organisms can be defined following this EFSA procedure. The left part specifies the temporal dimensions and the right part the spatial dimensions. In box E1, the Ecotoxicologically Relevant type of Concentration (ERC) is defined, e.g. the freely dissolved pesticide concentration in water for pelagic organisms. In box E2, the temporal dimension of this concentration is defined, e.g. the annual peak or the time-weighted average concentration for a pre-defined period. Based on these elements, the multi-year temporal population of concentrations can be generated for one single water body (E5), which would consist of, for example, 20 peak concentrations in the case of a 20-year time series. The spatial part requires definition of the type of water body (e.g. ditch, stream or pond; box E3) and the spatial dimension of this body (e.g. having a minimum water depth of 30 cm; box E4). Based on these, the spatial population of water bodies can be defined (box E6), e.g. all ditches with a minimum water depth of 30 cm alongside fields treated with the pesticide. Finally, in box E7, the percentile combination to be taken from the spatial-temporal population of concentrations is defined. Specification of the exposure assessment goals does not only involve scientific information, but also political choices, because this specification influences the strictness of the exposure assessment. For instance, in the case of exposure via spray drift, a minimum water depth of 30 cm in box E4 leads to a peak concentration in the water that is about three times lower than that for a minimum water depth of 10 cm.

 

Figure 1. Scheme of the seven elements of the exposure assessment goal for aquatic organisms.

 

The schematic approach of Figure 1 can easily be adapted to other exposure assessment goals.
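As a rough numerical illustration of element E7, the sketch below draws hypothetical annual peak concentrations for a population of water bodies and takes one example percentile combination (the 90th spatial percentile of the temporal median). The random data and the chosen combination are assumptions for illustration only; the actual percentile combination in a regulatory scenario is a policy choice and may differ.

```python
# Minimal sketch of element E7: take a percentile from the spatial-temporal
# population of concentrations (random numbers stand in for exposure-model output).
import numpy as np

rng = np.random.default_rng(seed=1)

n_water_bodies = 500   # spatial population, e.g. ditches with >= 30 cm water depth
n_years = 20           # temporal population, e.g. annual peak concentrations

# Hypothetical modelled annual peak concentrations (ug/L), one row per water body
annual_peaks = rng.lognormal(mean=0.0, sigma=1.0, size=(n_water_bodies, n_years))

# Example percentile combination: 90th spatial percentile of the temporal median
temporal_median = np.percentile(annual_peaks, 50, axis=1)   # one value per water body
exposure_estimate = np.percentile(temporal_median, 90)

print(f"90th spatial percentile of the median annual peak: {exposure_estimate:.2f} ug/L")
```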

 

Interaction between exposure and effect assessment for organisms

Nearly all the environmental protection goals for pesticides involve assessment of risk for organisms; only groundwater and drinking water from surface water are based on a concentration of 0.1 μg/L, which is not related to possible ecotoxicological effects. The risk assessment for organisms is a combination of an exposure assessment and an effect assessment, as illustrated by Figure 2.

 

Figure 2. Overview of the risk assessment of organisms based on parallel tiered effect and exposure assessments.

 

Both the effect and the exposure assessment are tiered approaches with simple and conservative first tiers and less simple and more realistic higher tiers. A lower exposure tier may consist of a simple conservative scenario, whereas a higher exposure tier may, for example, be based on a scenario selected using sophisticated spatial modelling. The top part of the scheme shows the link to the risk managers, who are responsible for the overall level of protection. This overall level of protection is linked to the so-called Specific Protection Goals, which will be explained in Section 6.5.3 and form the basis for the definition of the effect and exposure assessment goals. The exposure assessment goals and resulting exposure scenarios should therefore be consistent with the Specific Protection Goals (e.g. algae and fish may require different scenarios). When linking the two assessments, it has to be ensured that the type of concentration delivered by the exposure assessment is consistent with that required by the effect assessment (e.g. do not use time-weighted average concentrations in acute effect assessment). Figure 2 shows that in the assessment procedure information always flows from the exposure assessment to the effect assessment, because the risk assessment conclusion is based on the effect assessment.

 

A relatively new development is to assess exposure and effects at the landscape level. This typically is a combination of higher-tier effect and exposure assessments. In such an approach, the dynamics of exposure are first assessed for the full landscape and then combined with the dynamics of effects, for example based on spatially-explicit population models for species typical of that landscape. Such an approach makes a separate definition of the exposure and effect scenario redundant because it aims to deliver the exposure and effect assessment in an integrated way in space and time. Such an integrated approach requires the definition of “environmental scenarios”. Environmental scenarios integrate both the parameters needed to define the exposure (exposure scenario) and those needed to calculate direct and indirect effects and recovery (ecological scenario) (see Figure 3). However, it will probably take at least a decade before landscape-level approaches, including agreed-upon environmental scenarios, will be implemented for regulatory use in prospective ERA.

 

Figure 3. Conceptual framework of the role of an environmental scenario in prospective ERA (adapted after Rico et al. 2016).

 

 

References

 

Boesten, J.J.T.I. (2017). Conceptual considerations on exposure assessment goals for aquatic pesticide risks at EU level. Pest Management Science 74, 264-274.

Brock, T.C.M., Alix, A., Brown, C.D., et al. (2010). Linking aquatic exposure and effects: risk assessment of pesticides. SETAC Press & CRC Press, Taylor & Francis Group, Boca Raton, FL, 398 pp.

Rico, A., Van den Brink, P.J., Gylstra, R., Focks, A., Brock, T.C.M. (2016). Developing ecological scenarios for the prospective aquatic risk assessment of pesticides. Integrated Environmental Assessment and Management 12, 510-521.

 

6.3.3. Setting reference levels for human health protection

in preparation

6.3.4. Setting safe standards for ecosystem protection

Authors: Els Smit, Eric Verbruggen

Reviewers: Alexandra Kroll, Inge Werner

 

Learning objectives

You should be able to:

  • explain what a reference level for ecosystem protection is;
  • explain the basic concepts underlying the assessment factor approach for deriving PNECs;
  • explain why secondary poisoning needs specific consideration when deriving a PNEC using the assessment factor approach.

 

Key words: PNEC, quality standards, extrapolation, assessment factor

 

Introduction

The key question in environmental risk assessment is whether environmental exposure to chemicals leads to unacceptable risks for human and ecosystem health. This question is answered by comparing the measured or predicted concentrations in water, soil, sediment, or air with a reference level. Reference levels represent a dose (intake rate) or concentration in water, soil, sediment or air below which unacceptable effects are not expected. The definition of ‘no unacceptable effects’ may differ between regulatory frameworks, depending on the protection goal. The focus of this section is the derivation of reference levels for aquatic ecosystems as well as for predators feeding on exposed aquatic species (secondary poisoning), but the derivation of reference values for other environmental compartments follows the same principles.

 

Terminology and concepts

Various technical terms are in use as reference values, e.g. the Predicted No Effect Concentration (PNEC) for ecosystems or the Acceptable Daily Intake (ADI) for humans (Section on Human toxicology). The term “reference level” is a broad and generic term, which can be used independently of the regulatory context or protection goal. In contrast, the term “quality standard” is associated with some kind of legal status, e.g., inclusion in environmental legislation like the Water Framework Directive (WFD). Other terms exist, such as the terms ‘guideline value’ or ‘screening level’ which are used in different countries to indicate triggers for further action. While the scientific basis of these reference values may be similar, their implementation and the consequences of exceedance are not. It is therefore very important to clearly define the context of the derivation and the terminology used when deriving and publishing reference levels.

 

PNEC

A frequently used reference level for ecosystem protection is the Predicted No Effect Concentration (PNEC). The PNEC is the concentration below which adverse effects on the ecosystem are not expected to occur. PNECs are derived per compartment and apply to the organisms that are directly exposed. In addition, for chemicals that accumulate in prey, PNECs for secondary poisoning of predatory birds and mammals are derived. The PNEC for direct ecotoxicity is usually based on results from single species laboratory toxicity tests. In some cases, data from field studies or mesocosms may be included.

A basic PNEC derivation for the aquatic compartment is based on data from single species tests with algae, water fleas and fish. Effects on the level of a complex ecosystem are not fully represented by effects on isolated individuals or populations in a laboratory set-up. However, data from laboratory tests can be used to extrapolate to the ecosystem level if it is assumed that protection of ecosystem structure ensures protection of ecosystem functioning, and that effects on ecosystem structure can be predicted from species sensitivity.

 

Accounting for Extrapolation Uncertainty: Assessment Factor (AF) Approach

To account for the uncertainty in the extrapolation from single species laboratory tests to effects on real-life ecosystems, the lowest available test result is divided by an assessment factor (AF). In establishing the size of the AF, a number of uncertainties must be addressed to extrapolate from single-species laboratory data to a multi-species ecosystem under field conditions. These uncertainties relate to intra- and inter-laboratory variation in toxicity data, variation within and between species (biological variance), test duration and differences between the controlled laboratory set-up and the variable field situation. The value of the AF depends on the number of studies, the diversity of species for which data are available, the type and duration of the experiments, and the purpose of the reference level. Different AFs are needed for reference levels for e.g. intermittent release, short-term concentration peaks or long-term (chronic) exposure. In particular, reference levels for intermittent release and short-term exposure may be derived on the basis of acute studies, but short-term tests are less predictive for a reference level for long-term exposure and larger AFs are needed to cover this. Table 1 shows the generic AF scheme that is used to derive PNECs for long-term exposure of freshwater organisms in the context of the European regulatory framework for industrial chemicals (REACH; see Section on REACH environment). This scheme is also applied for the authorisation of biocidal products and pharmaceuticals, and for the derivation of long-term water quality standards for freshwater under the EU Water Framework Directive. Further details on the application of this scheme, e.g., how to compare acute and chronic data and how to deal with irregular datasets, are presented in guidance documents (see suggested reading: EC, 2018; ECHA, 2008). Similar schemes exist for marine waters, sediment, and soil. However, for the latter two compartments often too little experimental information is available and risk limits have to be calculated by extrapolation from aquatic data using the Equilibrium Partitioning concept. The derivation of Regulatory Acceptable Concentrations (RAC) for plant protection products (PPPs) is also based on the extrapolation of laboratory data, but follows a different approach focussing on generating data for specific taxonomic groups, taking account of the mode of action of the PPP (see suggested reading: EFSA, 2013).

 

Table 1. Basic assessment factor scheme used for the derivation of PNECs for freshwater ecosystems used in several European regulatory frameworks. Consult the original guidance documents for full schemes and additional information (see suggested reading: EC, 2018; ECHA, 2008).

Available data and corresponding assessment factor:

  • At least one short-term L(E)C50 from each of three trophic levels (fish, invertebrates (preferred Daphnia) and algae): AF = 1000
  • One long-term EC10 or NOEC (either fish or Daphnia): AF = 100
  • Two long-term results (e.g. EC10 or NOECs) from species representing two trophic levels (fish and/or Daphnia and/or algae): AF = 50
  • Long-term results (e.g. EC10 or NOECs) from at least three species (normally fish, Daphnia and algae) representing three trophic levels: AF = 10
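A simplified sketch of how such a scheme can be applied is given below. The ecotoxicity data are hypothetical, and the real guidance contains additional checks (e.g. comparing acute and chronic endpoints, and which trophic level the single chronic value belongs to) that are omitted here.

```python
# Simplified sketch of the assessment factor scheme of Table 1 (hypothetical data).
# The real scheme contains additional rules, e.g. on combining acute and chronic data.

chronic_noecs = {"algae": 0.8, "Daphnia": 0.5, "fish": None}   # long-term NOECs, mg/L
acute_l_ec50s = {"algae": 3.2, "Daphnia": 1.6, "fish": 2.4}    # short-term L(E)C50s, mg/L

chronic_values = [v for v in chronic_noecs.values() if v is not None]

if len(chronic_values) >= 3:
    assessment_factor, basis = 10, min(chronic_values)
elif len(chronic_values) == 2:
    assessment_factor, basis = 50, min(chronic_values)
elif len(chronic_values) == 1:
    assessment_factor, basis = 100, chronic_values[0]
else:
    assessment_factor, basis = 1000, min(acute_l_ec50s.values())

pnec = basis / assessment_factor
print(f"lowest relevant endpoint = {basis} mg/L, AF = {assessment_factor}, "
      f"PNEC = {pnec:.3f} mg/L")
```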

 

Application of Species Sensitivity Distribution (SSD) and Other Additional Data

The AF approach was developed to account for the uncertainty arising from extrapolation from (potentially limited) experimental datasets. If enough data are available for species other than algae, daphnids and fish, statistical methods can be applied to derive a PNEC. Within the concept of the species sensitivity distribution (SSD), the distribution of the sensitivity of the tested species is used to estimate the concentration at which 5% of all species in the ecosystem are affected (HC5; see section on SSDs). When used for regulatory purposes in European frameworks, the dataset should meet certain requirements regarding the number of data points and the representation of taxa in the dataset, and an AF is applied to the HC5 to cover the remaining uncertainty from the extrapolation from lab to field.

Where available, results from semi-field experiments (mesocosms, see section on Community ecotoxicology) can also be used, either on their own or to underpin the PNEC derived from the AF or SSD approach. SSDs and mesocosm studies are also used in the context of the authorisation of PPPs.

 

Reference levels for secondary poisoning

Substances might be toxic to wildlife because of bioaccumulation in prey or a high intrinsic toxicity to birds and mammals. If this is the case, a reference level for secondary poisoning is derived for a simple food chain: water → fish or mussel → predatory bird or mammal. The toxicity data from bird or mammal tests are transformed into safe concentrations in prey. This can be done by simply recalculating concentrations in laboratory feed into concentrations in fish using default conversion factors (see e.g., ECHA, 2008). For the derivation of water quality standards under the WFD, a more sophisticated method was introduced that uses knowledge on the energy demand of predators and energy content in their food to convert laboratory data to a field situation. Also, the inclusion of other, more complex and sometimes longer food chains is possible, for which field bioaccumulation factors are used rather than laboratory-derived values.

 

Suggested additional reading

EC (2018). Common Implementation Strategy for the Water Framework Directive (2000/60/EC). Guidance Document No. 27. Technical Guidance For Deriving Environmental Quality Standards. Updated version 2018. Brussels, Belgium. European Commission. https://circabc.europa.eu/ui/group/9ab5926d-bed4-4322-9aa7-9964bbe8312d/library/ba6810cd-e611-4f72-9902-f0d8867a2a6b/details

ECHA (2008). Guidance on information requirements and chemical safety assessment Chapter R.10: Characterisation of dose [concentration]-response for environment. Helsinki, Finland. European Chemicals Agency. May 2008. https://echa.europa.eu/documents/10162/13632/information_requirements_r10_en.pdf/bb902be7-a503-4ab7-9036-d866b8ddce69

EFSA (2013). Guidance on tiered risk assessment for plant protection products for aquatic organisms in edge-of-field surface waters. EFSA Journal 2013; 11(7): 3290 https://efsa.onlinelibrary.wiley.com/doi/epdf/10.2903/j.efsa.2013.3290

Traas, T.P., Van Leeuwen, C. (2007). Ecotoxicological effects. In: Van Leeuwen, C., Vermeire, T.C. (Eds.). Risk Assessment of Chemicals: an Introduction, Chapter 7. Springer.

 

6.3.5. Species Sensitivity Distributions (SSDs)

Authors: Leo Posthuma, Dick de Zwart

Reviewers: Ad Ragas, Keith Solomon

 

Learning objectives:

You should be able to:

  • explain that differences exist in the reaction of species to exposure to a chemical;
  • explain that these differences can be described by a statistical distribution;
  • derive a Species Sensitivity Distribution (SSD) from sensitivity data;
  • derive a benchmark concentration from an SSD;
  • derive a predicted impact from an SSD.

 

Keywords: Species Sensitivity Distribution (SSD), benchmark concentration, Potentially Affected Fraction of species (PAF)

 

 

Introduction

The relationship between dose or concentration (X) and response (Y) is key in risk assessment of chemicals (see section on Concentration-response relationships). Such relationships are often determined in laboratory toxicity tests; a selected species is exposed under controlled conditions to a series of increasing concentrations to determine endpoints such as the No Observed Effect Concentration (NOEC), the EC50 (the Effect Concentration causing 50% effect on a studied endpoint such as growth or reproduction), or the LC50 (the Lethal Concentration causing 50% mortality). For ecological risk assessment, multiple species are typically tested to characterise the (variation in) sensitivities across species or taxonomic groups within the ecosystem. In the mid-1980s it was observed that–like many natural phenomena–a set of ecotoxicity endpoint data, representing effect concentrations for various species, follows a bell-shaped statistical distribution. The cumulative distribution of these data is a sigmoid (S-shaped) curve. It was recognized that this distribution has particular utility for assessing, managing and protecting environmental quality with regard to chemicals. The bell-shaped distribution was thereupon named a Species Sensitivity Distribution (SSD). Since then, the use of SSD models has grown steadily. Currently, the model is used for various purposes, providing important information for decision-making.

Below, the dual utility of SSD models for environmental protection, assessment and management is shown first. Thereupon, the derivation and use of SSD models are elaborated in a stepwise sequence.

 

The dual utility of SSD models

A species sensitivity distribution (SSD) is a distribution describing the variance in sensitivity of multiple species exposed to a hazardous compound. The statistical distribution is often plotted using a log-scaled concentration axis (X), and a cumulative probability axis (Y, varying from 0 – 1; Figure 1).

 

Figure 1. A species sensitivity distribution (SSD) model, its data, and its dual use (from Y→X and from X→Y). Dots represent the ecotoxicity endpoints (e.g., NOECs, EC50s, etc.) of different species.

 

Figure 1 shows that different species (here the dots represent 3 test data for algal species, 2 data for invertebrate species and 2 data for fish species) have different sensitivities to the studied chemical. First, the ecotoxicity data are collected and log10-transformed. Second, the data set can be visually inspected by plotting the bell-shaped distribution of the log-transformed data; deviations from the expected bell shape can be visually identified in this step. They may originate from causes such as a low number of data points or be indicative of a selective mode of action of the toxicant, such as a high sensitivity of insects to insecticides. Third, common statistical software for deriving the two parameters of the log-normal model (the mean and the standard deviation of the ecotoxicity data) can be applied, or the SSD can be described with a dedicated software tool such as ETX (see below), including a formal evaluation of the ‘goodness of fit’ of the model to the data. With the estimated parameters, the fitted model can be plotted, and this is often done in the intuitively attractive form of the S-shaped cumulative distribution. This curve then serves two purposes. First, the curve can be used to derive a so-called Hazardous Concentration on the X-axis: a benchmark concentration that can be used as a regulatory criterion to protect the environment (Y→X). That is, chemicals with different toxicities have different SSDs, with the more hazardous compounds plotted to the left of the less hazardous compounds. By selecting a protection level on the Y-axis–representing a certain fraction of species affected, e.g. 5%–one derives a compound-specific concentration standard. Second, one can derive the fraction of tested species probably affected at an ambient concentration (X→Y), which can be measured or modelled. Both uses are popular in contemporary environmental protection, risk assessment, and management.

 

Step 1: Ecotoxicity data for the derivation of an SSD model

The SSD model for a chemical and an environmental compartment (e.g., surface water, soil or sediment) is derived based on pertinent ecotoxicity data. These are typically extracted from scientific literature or ecotoxicity databases. Examples of such databases are the U.S. EPA’s Ecotox database, the European REACH data sets and the EnviroTox database, which contains quality-evaluated studies. The researcher selects the chemical and the compartment of interest, and subsequently extracts all test data for the appropriate endpoint (e.g., ECx values). The set of test data is tabulated and ranked from most to least sensitive. Multiple data for the same species are assessed for quality and only the best data are used. If there is more than one toxicity value for a species after the selection process, the geometric mean value is commonly derived and used. A species should only be represented once in the SSD. Data are often available for frequently tested species, representing different taxonomic and/or trophic levels. A well-known triplet of frequently tested species is “Algae, Daphnids and Fish”, as this triplet is a required minimum set for various regulations in the realm of chemical safety assessment (see section on Regulatory frameworks). For various compounds, the number of test data can be more than a hundred, whilst for most compounds only few data of acceptable quality are available.

 

Step 2. The derivation and evaluation of an SSD model

Standard statistical software (e.g. a spreadsheet program) or a dedicated software model such as ETX can be used to derive an SSD from available data. Commonly, the fit of the model to the data set is checked to avoid misinterpretation. Misfit may be shown using common statistical testing (Goodness of Fit tests) or by visual inspection and ecological interpretation of the data points. That is, when a chemical specifically affects one group of species (e.g., insects having a high sensitivity for insecticides), the user may decide to derive an SSD model for specific groups of species. In doing so, the outcome will consist of two or more SSDs for a single compound (e.g., an SSDInsect and an SSDOther when the compound is an insecticide, whilst the SSDOther might be split further if appropriate). These may show a better goodness of fit of the model to the data, but – more importantly – they reflect the use of key knowledge of mode of action and biology prior to ‘blindly’ applying the model fit procedure.

 

Step 3a. The SSD model used for environmental protection

The oldest use of the SSD model is the derivation of reference levels such as the PNEC (Y→X). That is, given the policy goal to fully protect ecosystems against adverse effects of chemical exposures (see Section on Ecosystem services and protection goals), the protective use is as follows. First, the user defines which ecotoxicity data are used. In the context of environmental protection, these have often been NOECs or low-effect levels (ECx, with low x, such as EC10) from chronic tests. This yields an SSD-NOEC or SSD-ECx. Then, the user selects a level of Y, that is: the maximum fraction of species for which the defined ecotoxicity endpoint (NOEC or ECx) may be exceeded, e.g., 0.05 (a fraction of 0.05 equals 5% of the species). Next, the user derives the Hazardous Concentration for 5% of the species (Y→X). At the HC5, 5% of the species are exposed to concentrations greater than their NOEC, but, conversely, 95% of the species are exposed to concentrations less than their NOEC. It is often assumed that the structural and functional integrity of ecosystems is sufficiently protected at the HC5 level if the SSD is based on NOECs. Therefore, many authorities use this level to derive regulatory PNECs (Predicted No Effect Concentrations) or Environmental Quality Standards (EQS). These concepts are used as official reference levels in risk assessment; the former is the preferred term in the context of prospective chemical safety assessments, and the latter is used in retrospective environmental quality assessment. Sometimes an extra assessment factor varying between 1 and 5 is applied to the HC5 to account for remaining uncertainties. Using SSDs for a set of compounds yields a set of HC5 values, which–in fact–represent a relative ranking of the chemicals by their potential to cause harm.
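A minimal sketch of this protective use (Y→X), assuming a log-normal SSD fitted to hypothetical chronic NOECs and applying no additional assessment factor:

```python
# Minimal sketch: fit a log-normal SSD to hypothetical chronic NOECs (ug/L)
# and derive the median estimate of the HC5 (no confidence limits, no extra AF).
import numpy as np
from scipy import stats

noecs = np.array([3.2, 7.5, 12.0, 18.0, 25.0, 40.0, 95.0, 210.0])  # one value per species
log_noecs = np.log10(noecs)

mu = log_noecs.mean()
sigma = log_noecs.std(ddof=1)

# HC5: concentration at which the fitted cumulative distribution reaches 5%
hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)
print(f"HC5 = {hc5:.1f} ug/L (5% of the species have their NOEC below this value)")
```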

 

Step 3b.The SSD model used for environmental quality assessment

The SSD model can also be used to explore how much damage is caused by environmental pollution. In this case, a predicted or measured ambient concentration is used to derive a Potentially Affected Fraction of species (PAF). The fraction ranges from 0–1 but, in practice, it is often expressed as a percentage (e.g., “24% of the species are likely to be affected”). According to this approach, users often have monitored or modelled exposure data from various water bodies, or soil or sediment samples, so that they can evaluate whether any of the studied samples contain a concentration higher than the regulatory reference level (previous section) and, if so, how many species are affected. Evidently, the user must clearly express what type of damage is quantified, as damage estimates based on an SSD-NOEC or an SSD-EC50 quantify the fractions of species affected beyond the no effect level and at the 50% effect level, respectively. This use of SSDs for a set of environmental samples yields a set of PAF values, which, in fact, represent a relative ranking of the pollution levels at the different sites in their potential to cause harm.
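The reverse use (X→Y) can be sketched as follows, using SSD parameters of the same order as those fitted in the previous sketch; the parameter values and the ambient concentration are hypothetical.

```python
# Minimal sketch: potentially affected fraction (PAF) of species at an ambient
# concentration, using SSD parameters comparable to those fitted above.
import numpy as np
from scipy import stats

mu, sigma = 1.38, 0.59     # mean and sd of the log10 NOECs (hypothetical SSD)
ambient = 10.0             # measured or modelled ambient concentration, ug/L

paf = stats.norm.cdf(np.log10(ambient), loc=mu, scale=sigma)
print(f"At {ambient} ug/L, about {100 * paf:.0f}% of the species are exposed "
      f"above their NOEC")
```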

 

Practical uses of SSD model outcomes

SSD model outcomes are used in various regulatory and practical contexts.

  1. The oldest use of the model, setting regulatory standards, is applied globally. Organizations like the European Union and the OECD, as well as many countries, apply SSD models to set (regulatory) standards. Those standards are then used prospectively, to evaluate whether the planned production, use or release of a (novel) chemical is sufficiently safe. If the predicted concentration exceeds the criterion, this is interpreted as a warning. Depending on the regulatory context, the compound may be regulated, e.g., prohibited from use, or its use limited. The data used to build SSD models for deriving regulatory standards are often chronic test data and no or low effect endpoints. The resulting standards have been evaluated in validation studies regarding the question whether they are sufficiently protective. Note that some jurisdictions have both protective standards and trigger values for remediation, based on SSD modelling.
  2. The next use is in environmental quality assessment and management. In this case, the predicted or measured concentration of a chemical in an environmental compartment is often first compared to the reference level. This may already trigger management activities if the reference values have a regulatory status, such as a clean-up operation. The SSD may, however, be used to provide more detailed information on the expected magnitude of impact, so that environmental management can prioritize the most-affected sites for earlier remediation. The use of SSDs needs to be tailored to the situation. That is, if the exposure concentrations form an array close to the reference value, the use of SSD-NOECs is a logical step, as this ranks the site pollution levels (via the PAFs) regarding the potentially affected fraction of species experiencing slight exceedances of the no effect level. If the study area contains highly polluted sites, that approach may show that all measured concentrations are in the upper tail of the SSD-NOEC sigmoid (horizontal part). In such cases, the SSD-EC50 provides information on across-site differences in expected impacts larger than the 50% effect level.
  3. The third use is in Life Cycle Assessment of products. This use is comparative, so that consumers can select the most benign product, whilst producers can identify ‘hot spots’ of ecotoxicity in their production chains. A product often contains a suite of chemicals, so that the SSD model is applied to each chemical and the PAF-type outcomes are aggregated over all chemicals. The model USEtox is the UN global consensus model for this application.

 

Today, these three forms of use of SSD models have an important role in the practice of environmental protection, assessment and management on the global scale, which relates to their intuitive meaning, their ease of use, and the availability of a vast number of ecotoxicity data in the global databases.

6.3.6. Mixtures

under review

6.3.7. Predicting ecotoxicity from chemical structure and mode of action (MOA)

Author: Joop Hermens

Reviewers: Monika Nendza and Emiel Rorije

 

Date uploaded: 15th March 2024

 

Learning objectives:

You should be able to:

  • explain why in silico methods are relevant in risk assessment and mention different in silico approaches that are applied.
  • explain the concept of quantitative structure-activity relationships and mention a few methodologies that are applied to derive a QSAR.
  • understand the importance of classification into modes of action and give examples of a few major modes of action (MOA) classes.
  • classify chemicals into a certain MOA class and apply a QSAR model for class 1 chemicals.

 

Keywords: quantitative structure-activity relationship (QSAR), Modes of Action (MOA) based classification schemes, octanol-water partition coefficient, excess toxicity

 

Introduction

The number of chemicals for which potential risks to the environment have to be estimated is enormous. Section 6.5 on ‘Regulatory Frameworks’ discusses the EU regulation on REACH (Registration, Evaluation, Authorisation and Restriction of Chemicals) and gives an indication of the number of chemicals that are registered under REACH. Because of the high number of chemicals, there is a strong need for predictive methods, including Read-across from related chemicals, Weight of Evidence approaches, and calculations based on chemical structures (quantitative structure-activity relationships, QSARs).

 

This section discusses the following topics:

  • Major prediction methodologies
  • Classification of chemicals based on chemical structure into modes of action (MOA)
  • Predicting ecotoxicity from chemical structure

 

Prediction methodologies

A major in silico prediction methodology is based on quantitative structure-activity relationships (QSARs) (ECHA 2017a). A QSAR is a mathematical model that relates ecotoxicity data (the Y-variable) with one or a combination of structural descriptors and/or physical-chemical properties (the X-variable or variables) for a series of chemicals (see Figure 1).

 

Figure 1. The principle of a QSAR.

Note: LC50, EC50: concentrations with 50 % effect on survival (LCxx: Lethal Concentration xx%) or on sublethal parameters (ECxx: Effect Concentration xx%), NOEC: No-Observed Effect Concentrations regarding effects on growth or reproduction, or in general the most sensitive parameter. A QSAR is related to molecular events and, therefore, concentrations should always be based on molar units.

 

Most models are based on linear regression between Y and X. Different techniques can be used to develop a QSAR including a simple graphical presentation, linear regression equations between Y and X or multiple parameter equations based on more than one property (Y versus X1, X2, etc.). Also, multivariate techniques, such as Principal Component Analysis (PCA) and Partial Least Square Analysis (PLS), are applied. More information on these techniques can be found in section 3.4.3 ‘Quantitative structure-property relationships (QSPRs)’.

 

Multi-parameter linear regression takes the form of:

Y(i) = a1 X1(i) + a2 X2(i) + a3 X3(i) + … + b                          (1)

 

See Box 1 for more details.

 

Nowadays, Machine Learning techniques, like Support Vector Machines (SVM), Random Forest (RF) or neural networks, are also applied to establish a mathematical relationship between toxicological effect data and all kinds of chemical properties. Their advantage is that they can model non-linear relationships, but this comes at the expense of the interpretability of the model. Machine Learning techniques and the QSAR models based on them are outside the scope of this section.

 

Box 1: Statistics and validation of QSARs

 

Multiple-parameter linear regression

 

Multiple linear regression equations take the form of

 

Y(i) = a1X1(i) + a2X2(i) + a3X3(i) + … + b                                                                                                     (1)

 

where Y(i) is the value of the dependent parameter of chemical i,

X1(i)–X3(i) are the values of the independent parameters (the chemical properties) of chemical i,

a1–a3 are regression coefficients and b is the intercept of the linear equation.
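
As an illustration of how such a multiple-parameter regression can be fitted in practice, the minimal sketch below applies ordinary least squares to a small, purely hypothetical data set; all descriptor and effect values are invented for demonstration and are not taken from this module.

```python
import numpy as np

# Hypothetical training set: Y values (e.g. log(1/LC50)) and two descriptors
# (X1 = log Kow, X2 = an arbitrary electronic descriptor). All numbers are invented.
Y = np.array([3.1, 3.9, 4.6, 5.2, 6.0])
X = np.array([
    [1.0, 0.2],
    [2.0, 0.1],
    [3.0, 0.3],
    [4.0, 0.2],
    [5.0, 0.4],
])

# Add a column of ones so that the intercept b is estimated together with a1 and a2
# (equation 1: Y = a1*X1 + a2*X2 + b).
X_design = np.column_stack([X, np.ones(len(Y))])

# Ordinary least-squares fit
coefficients, *_ = np.linalg.lstsq(X_design, Y, rcond=None)
a1, a2, b = coefficients

# Goodness of fit: r2 = 1 - SS_residual / SS_total
Y_pred = X_design @ coefficients
r2 = 1 - np.sum((Y - Y_pred) ** 2) / np.sum((Y - Y.mean()) ** 2)

print(f"Y = {a1:.3f}*X1 + {a2:.3f}*X2 + {b:.3f}   (r2 = {r2:.3f})")
```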

 

Statistical quality of the model

The overall quality of the equation is presented via the Pearson’s correlation coefficient (r) and the standard error of estimate (s.e.). The closer r is to 1.0, the better the fit of the relationship. The square of r (r2) represents the fraction of the variation in the Y variable that is explained by the X-variable(s).

The significance of the influence of a certain X parameter in the relationship is indicated by the confidence interval of the regression coefficient.

 

Validation of the model (Eriksson et al. 2003)

The model is developed using a so-called “training set” that consists of a limited number of carefully selected chemicals. The validity of such a model should be tested by applying it to a “validation set”, i.e. a set of compounds for which experimental data can be compared with the predictions, but which have not been used in establishing the (mathematical form of) the model. Another validation tool is cross-validation. In cross-validation, the data are divided into a number of groups and a number of parallel models are developed from the reduced data sets, each with one of the groups left out. The predictions for the left-out chemicals are compared with the actual data, and the differences are used to calculate the so-called “cross-validated” r2 or Q2 from the correlation between observed and predicted values for the left-out chemicals. In the so-called leave-one-out (LOO) approach, one chemical is left out at a time and predicted from a model calculated from the remaining compounds. The LOO approach is often considered to yield a too optimistic estimate of the true model predictivity. Some modelling techniques apply a wide set (hundreds) of molecular descriptors (experimental and/or theoretical). This may lead to overfitted models, and in these cases a good validation procedure is essential, as overfitting will automatically lead to poor external predictive performance (a low Q2).
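
The leave-one-out procedure described above can be made concrete with the short sketch below. It fits a simple one-descriptor model on hypothetical data and computes Q2 as 1 minus the ratio of the predictive residual sum of squares to the total sum of squares; the data are invented for illustration only.

```python
import numpy as np

# Hypothetical one-descriptor QSAR data set (e.g. Y = log(1/LC50), X = log Kow);
# values are invented for illustration only.
X = np.array([0.5, 1.2, 2.0, 2.8, 3.5, 4.1, 4.8])
Y = np.array([1.0, 1.8, 2.4, 3.1, 3.9, 4.3, 5.1])

def fit_line(x, y):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    a, b = np.polyfit(x, y, deg=1)
    return a, b

# Leave-one-out cross-validation: each chemical is predicted by a model
# fitted on the remaining chemicals.
predictions = np.empty_like(Y)
for i in range(len(Y)):
    mask = np.arange(len(Y)) != i
    a, b = fit_line(X[mask], Y[mask])
    predictions[i] = a * X[i] + b

press = np.sum((Y - predictions) ** 2)      # predictive residual sum of squares
ss_total = np.sum((Y - Y.mean()) ** 2)
q2 = 1 - press / ss_total

a, b = fit_line(X, Y)
r2 = 1 - np.sum((Y - (a * X + b)) ** 2) / ss_total
print(f"fitted model: Y = {a:.2f}*X + {b:.2f}, r2 = {r2:.3f}, LOO Q2 = {q2:.3f}")
```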

 


 

OECD (2004) identified a number of principles for (Q)SAR validation. The principles state that “to facilitate the consideration of a (Q)SAR model for regulatory purposes, it should be associated with the following information: (i) a defined endpoint, (ii) an unambiguous algorithm, (iii) a defined domain of applicability, (iv) appropriate measures of goodness-of-fit, robustness and predictivity and (v) a mechanistic interpretation, if possible.”

 

The Y-variable in a QSAR can for example be fish LC50 data (concentration killing 50 % of the fish) or NOEC (no-observed effect concentrations) for effects on the growth of Daphnia magna, after a specific exposure duration (e.g. LC50 to fish after 96 hours). The X-variable may include properties such as molecular weight, the octanol-water partition coefficient (KOW), electronic and topological descriptors (e.g. from quantum mechanics calculations), or descriptors related to the chemical structure, such as the presence or absence or the number of different functional groups. Uptake and bioaccumulation of organic chemicals depend on their hydrophobicity, and the octanol-water partition coefficient is a parameter that reflects differences in hydrophobicity. The effect of electronic or steric parameters is often related to the potency of chemicals to interact with the receptor or target, or more directly to their reactivity (towards specific biological targets). More information on chemical properties is given in section 3.4.3 ‘Quantitative structure-property relationships (QSPR)’ and section 3.4.1 ‘Relevant chemical properties’.

 

Read-across is the appropriate data-gap filling method for “qualitative” endpoints like skin sensitisation or mutagenicity, for which a limited number of results are possible (e.g. positive, negative, equivocal). Read-across is frequently applied in predicting human health-related endpoints. Furthermore, read-across is recommended for “quantitative” endpoints (e.g. the 96h-LC50 for fish) if only a small number of analogues with experimental results can be identified. In that case it is simply assumed that the quantitative value of the endpoint for the substance of interest is identical to the value for the closest structural analogue for which experimental data are available. More information on read-across can be found in ECHA (2017b).

 

Classification of chemicals based on chemical structure into modes of action (MOA) and QSAR equations

Information on mechanisms and modes of action is essential when developing predictive methods in integrated testing strategies (Vonk et al. 2009). “Mode of action” has a broader meaning than “mechanism of action”: mode of action refers to changes at the cellular level, while mechanism of action refers to the interaction of a chemical with a specific molecular target. In QSAR research the terminology is not always clearly defined, and mode of action is used both in the broad sense (change at the cellular level) and in the narrow sense (interaction with a target). A QSAR should preferably be developed for a series of chemicals with a known and similar mechanism or mode of action (OECD 2004). Several schemes to classify chemicals according to their mode of action (MOA) are available. Well-known MOA classification systems are those from Verhaar et al. (1992) and the US Environmental Protection Agency (US-EPA) (Russom et al. 1997). The latter classification scheme is based on a number of information sources, including results from fish physiological and behavioural studies, joint toxicity data and similarity in chemical structure. The EPA scheme distinguishes a number of groups, including: narcotics (or baseline toxicants), oxidative phosphorylation uncouplers, respiratory inhibitors, electrophiles/proelectrophiles, and acetylcholinesterase (AChE) inhibitors. The Verhaar scheme is relatively simple and distinguishes four broad classes: Class 1, inert chemicals; Class 2, less inert chemicals; Class 3, reactive chemicals; and Class 4, specifically acting chemicals. Classes 1 and 2 are also known as non-polar and polar narcosis, respectively. Classes 3 and 4 include chemicals with so-called “excess toxicity”, i.e. chemicals that are more toxic than baseline toxicants (see Box 2 and Figure 4). Automated versions of the Verhaar classification system are available in the OECD QSAR Toolbox and in Toxtree (Enoch et al. 2008). Other classification systems apply more categories (Barron et al. 2015; Busch et al. 2016). More information about mechanisms and modes of action is given in section 4.2 ‘Toxicodynamics & Molecular Interactions’.

 

Expert systems can assign a MOA class to a chemical and predict the toxicity of large data sets. Specific QSAR models may be available for a certain MOA (Figure 2), although one should realize that validated QSARs are available only for a limited number of MOAs (see also under ECOSAR). Rule-based expert systems rely on chemical-structure rules (e.g. the presence of specific substructures in a molecule), such as those identified in Box 2 for a number of chemical classes and MOAs.

 

Figure 2. The approach to select QSARs for predicting toxicity. The QSARs are MOA specific.

 

The Verhaar classification scheme was developed based on acute fish toxicity data. A major class of chemicals comprises compounds with a non-specific mode of action, also called narcosis-type chemicals or baseline toxicants. Class 1 in this scheme includes aromatic and aliphatic (chloro)hydrocarbons, alcohols, ethers and ketones. In ecotoxicology, baseline (or narcosis-level) toxicity denotes the minimal effects caused by unspecific non-covalent interactions of xenobiotics with membrane components, i.e. membrane perturbations (Nendza et al. 2017). This MOA is non-specific and every organic chemical exerts it as a baseline or minimum effect (see section 4.2). The effect (mortality or a sublethal effect) occurs at a roughly constant concentration in the cell membrane; the internal lethal concentration (ILC) is around 50 mmol/kg lipid and is independent of the octanol-water partition coefficient (KOW). Box 2 gives an overview of the Verhaar classification scheme and also includes chemical structures within each class and short descriptions of the modes of action.

 

Box 2: Examples of chemicals in each of the classes (Verhaar class 1 to class 4)

Class 1 chemicals: inert chemicals

 

MOA: non-polar narcosis

Non-specific mechanism. Effect is related to presence of a chemical in cell membranes. Effect will occur at a constant concentration in a cell membrane.

Class 2 chemicals: less inert chemicals

 

MOA: polar narcosis

Similar to class 1, with hydrogen bonding in addition to thermodynamic partitioning.

 

Class 3 chemicals: reactive chemicals (electrophiles)

 

MOA: related to reactivity

Electrophiles may react with a nucleophile. Nucleophilic groups are for example NH2, OH, SH groups and are present in amino acids (and proteins) and DNA bases. Exposure to these chemicals may lead for example to mutagenicity or carcinogenicity (DNA damage), protein damage or skin irritation.

 

Class 4 chemicals: specific acting chemicals

 

MOA: specific mechanism

Several chemicals have a specific MOA. Insecticides such as lindane and DDT specifically interact with the nervous system. Organophosphates are neurotoxicants that inhibit the enzyme acetylcholinesterase (AChE).

 

 

LC50 data of class 1 chemicals show a strong inverse relationship with hydrophobicity (KOW). This decrease of the LC50 with increasing KOW is logical, because the LC50 is inversely related to the bioconcentration factor (BCF) and the BCF increases with KOW (see equation 2 and Figure 3).

 

LC50 = ILC / BCF                                                                                                                                  (2)

 

Figure 3. Relation between (i) concentration with 50 % mortality (log LC50), (ii) bioconcentration factor (log BCF) and (iii) internal lethal concentration (ILC) and the octanol-water partition coefficient (log KOW). Also see Figure 2 in section 4.1.7.
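
The reasoning behind equation 2 and Figure 3 can be illustrated with a small numerical sketch: if the internal lethal concentration is roughly constant (taken here as 50 mmol/kg lipid, as mentioned above) and log BCF increases linearly with log KOW, the predicted log LC50 automatically decreases linearly with log KOW. The BCF-KOW regression used below is an assumed, illustrative relationship and is not taken from this module.

```python
import numpy as np

ILC_MMOL_PER_KG = 50.0  # internal lethal concentration for baseline toxicity (mmol/kg lipid)

def log_bcf(log_kow: float) -> float:
    # Assumed, illustrative BCF-KOW regression (coefficients are placeholders,
    # not taken from this module).
    return 0.85 * log_kow - 0.70

def predicted_log_lc50(log_kow: float) -> float:
    # Equation 2 in log form: log LC50 = log ILC - log BCF
    # (LC50 in mmol/L when BCF is expressed in L/kg).
    return np.log10(ILC_MMOL_PER_KG) - log_bcf(log_kow)

for log_kow in [1.0, 2.0, 3.0, 4.0, 5.0]:
    print(f"log Kow = {log_kow:.1f} -> predicted log LC50 (mmol/L) = {predicted_log_lc50(log_kow):.2f}")
```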

 

Figure 4A shows the relationship between guppy log LC50 and log KOW for 50 compounds that act via narcosis (class 1 chemicals). The line in Figure 4A represents the so-called minimum or baseline toxicity. Figure 4B additionally shows LC50 data for the other classes (classes 2, 3 and 4). LC50s of class 2 compounds (polar narcosis) are systematically lower than baseline toxicity at the same log KOW. The distinction between non-polar and polar narcosis was introduced by Schultz and Veith (Schultz et al. 1986). The LC50 values of reactive and specifically acting chemicals (classes 3 and 4, respectively) are mostly below baseline toxicity (see Figure 4B).

 

 

 

 

Figure 4. Correlation between log LC50 data and the octanol-water partition coefficients (log KOW) for class 1 (Figure 4A, top) and classes 2, 3 and 4 chemicals (Figure 4B, bottom). Data are from Verhaar et al. (1992).

 

Several QSARs have been published for class 1 chemicals, for different species including fish, crustaceans and algae, and for effects on survival (LC50), growth (EC50) or no-observed effect concentrations (NOEC). Some examples are presented in Table 1. The equations have the following format:

 

log 14-d LC50 (mol/L) = -0.869 log Kow - 1.19, n=50, r2=0.969, Q2=0.957, s.e.=0.31                                    (3)

 

The intercept of the equations gives information about the sensitivity of the test. The intercept of equation 7 (-2.30) is 1.11 lower than the intercept of equation 5 (-1.19). This difference of 1.11 is on a logarithmic scale, and the slopes of the two equations are similar (-0.869 versus -0.898). This means that the test on sublethal effects (NOEC) is a factor of 13 (10^1.11) more sensitive than the LC50 test, which is in reasonable agreement with the standard assessment factor of 10 used for extrapolating from an LC50 to a NOEC.
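
This sensitivity comparison can be checked with a few lines of code using equations 5 and 7 from Table 1 below: at log KOW = 0 the predicted LC50/NOEC ratio equals 10^1.11 (about 13), and because the slopes differ slightly the ratio increases somewhat for more hydrophobic chemicals. This is only a small numerical sketch of the argument in the text.

```python
def log_lc50_guppy(log_kow: float) -> float:
    # Equation 5 (Table 1): 14-d LC50 for Poecilia reticulata, in mol/L
    return -0.869 * log_kow - 1.19

def log_noec_zebrafish(log_kow: float) -> float:
    # Equation 7 (Table 1): 28-d NOEC for Brachydanio rerio, in mol/L
    return -0.898 * log_kow - 2.30

for log_kow in [0.0, 2.0, 4.0]:
    ratio = 10 ** (log_lc50_guppy(log_kow) - log_noec_zebrafish(log_kow))
    print(f"log Kow = {log_kow:.1f}: predicted LC50 / NOEC = {ratio:.0f}")
```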

 

These QSAR equations for class 1 chemicals are relatively simple and include the octanol-water partition coefficient (KOW) as the only parameter. QSAR models for reactive chemicals and specifically acting compounds are far more complex, because the intrinsic toxicity (reactivity and potency to interact with the target) and also biotransformation to active metabolites affect the toxicity and thus the effect concentration.

 

An example of a QSAR for reactive chemicals is presented in Box 3. This example also shows how a QSAR is derived.

 

The ‘excess toxicity’ value (Te), also called the toxic ratio (TR), provides an easy way to interpret toxicity data. Excess toxicity (Te) is calculated as the ratio of the LC50 value estimated for baseline toxicity (using the KOW regression) and the experimental LC50 value (equation 4).

 

Te = LC50(baseline, estimated) / LC50(experimental)                                                                 (4)
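
A minimal sketch of the Te calculation (equation 4) is shown below, using the baseline QSAR of equation 6 (Pimephales promelas, Table 1) and, as an example, the experimental value for ethyl acrylate given in Box 3 further down (log KOW = 1.32, experimental log LC50 = -4.6 mol/L); the computed excess toxicity of about 124 matches the tabulated value.

```python
def log_lc50_baseline(log_kow: float) -> float:
    # Equation 6 (Table 1): baseline 96-h LC50 for Pimephales promelas, in mol/L
    return -0.846 * log_kow - 1.39

def excess_toxicity(log_kow: float, log_lc50_experimental: float) -> float:
    # Equation 4: Te = LC50(baseline, estimated) / LC50(experimental),
    # computed here on the log scale.
    return 10 ** (log_lc50_baseline(log_kow) - log_lc50_experimental)

# Example from Box 3: ethyl acrylate (log Kow = 1.32, experimental log LC50 = -4.6 mol/L)
te = excess_toxicity(log_kow=1.32, log_lc50_experimental=-4.6)
print(f"Excess toxicity Te = {te:.0f}")   # approximately 124
```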

 

 

Table 1. QSARs for class 1 chemicals.

| Species | Endpoint | QSAR | Statistics | Eqn. # |
|---|---|---|---|---|
| FISH | | | | |
| Poecilia reticulata | log 14-d LC50 (mol/L) | -0.869 log Kow - 1.19 | n=50, r2=0.969, Q2=0.957, s.e.=0.31 | 5 |
| Pimephales promelas | log 96-h LC50 (mol/L) | -0.846 log Kow - 1.39 | n=58, r2=0.937, Q2=0.932, s.e.=0.36 | 6 |
| Brachydanio rerio | log 28-d NOEC (mol/L) | -0.898 log Kow - 2.30 | n=27, r2=0.917, Q2=0.906, s.e.=0.33 | 7 |
| CRUSTACEANS | | | | |
| Daphnia magna | log 48-h LC50 (mol/L) | -0.941 log Kow - 1.32 | n=49, r2=0.948, Q2=0.944, s.e.=0.34 | 8 |
| Daphnia magna | log 16-d NOEC (mol/L) | -1.047 log Kow - 1.85 | n=10, r2=0.968, Q2=0.954, s.e.=0.39 | 9 |
| ALGAE | | | | |
| Chlorella vulgaris | log 3-h EC50 (mol/L) | -0.954 log Kow - 0.34 | n=34, r2=0.916, Q2=0.905, s.e.=0.32 | 10 |

n is the number of compounds, r2 is the correlation coefficient, Q2 is the cross-validated r2 and s.e. is the standard error of estimate.

LC50: concentration with 50 % effect on survival

NOEC: no-observed effect concentration for sublethal effects (growth, reproduction)

EC50: concentration with 50 % effect on growth

The equations are taken from EC_project (1995)

 

 

Box 3: Example of a QSAR

 

Data set: acute toxicity (LC50) of reactive chemicals

 

Chemicals: 15 reactive chemicals, including α,β-unsaturated carboxylates

Y: log LC50 to Pimephales promelas (in mol/L)

X1: log kGSH, the rate constant for the reaction with glutathione (in (mol/L)-1 min-1)

X2: log KOW octanol-water partition coefficient

 

Te: excess toxicity in comparison with calculated base-line toxicity (calculated with equation 4).

log LC50 base-line (mol/L) = -0.846 log KOW – 1.39 (see equation 6 in Table 1)

 

Dataset

| α,β-unsaturated carboxylates | log LC50 [mol/L] | log kGSH [(mol/L)-1 min-1] | log KOW | log LC50 base-line [mol/L] | Te |
|---|---|---|---|---|---|
| Isobutyl methacrylate | -3.64 | -0.73 | 2.66 | -3.64 | 1.0 |
| Methyl methacrylate | -2.59 | -0.70 | 1.38 | -2.56 | 1.1 |
| Isopropyl methacrylate | -3.53 | -1.00 | 2.25 | -3.29 | 1.7 |
| Hexyl acrylate | -5.15 | 1.31 | 3.44 | -4.30 | 7.1 |
| Benzyl methacrylate | -4.58 | -0.49 | 2.87 | -3.82 | 5.8 |
| 2-Ethoxyethyl methacrylate | -3.76 | -0.60 | 1.45 | -2.62 | 14 |
| Tetrahydrofurfuryl methacrylate | -3.69 | -0.52 | 1.30 | -2.49 | 16 |
| Isobutyl acrylate | -4.79 | 1.62 | 2.22 | -3.27 | 33 |
| Diethyl fumarate | -4.58 | 2.05 | 1.84 | -2.95 | 43 |
| Ethyl acrylate | -4.6 | 1.60 | 1.32 | -2.51 | 124 |
| Acrylonitrile | -3.57 | 0.87 | 0.23 | -1.58 | 97 |
| Acrylamide | -2.81 | -0.33 | -0.67 | -0.82 | 97 |
| Hydroxypropyl acrylate | -4.59 | 1.47 | 0.35 | -1.69 | 800 |
| 2-Hydroxyethyl acrylate | -4.38 | 1.71 | -0.21 | -1.21 | 1500 |
| Acrolein | -6.74 | 3.92 | 0.10 | -1.47 | 180000 |
 

QSAR based on two parameters:

log 96-h LC50 = -(0.67 ± 0.09) log kGSH - (0.31 ± 0.11) log KOW - (3.33 ± 0.21)

r2 = 0.82, s.e. = 0.47

 

The relatively small standard errors of the regression coefficients show that both parameters are significant.

The LC50 decreases with increasing KOW, reflecting the effect of hydrophobicity on accumulation.

The LC50 decreases with increasing reactivity: more reactive chemicals are more toxic.
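
As a sketch of how such a two-parameter QSAR can be derived, the code below fits equation 1 to the tabulated Box 3 data by ordinary least squares. Because the tabulated values are rounded, the resulting coefficients will only approximately reproduce those reported above.

```python
import numpy as np

# Box 3 data: log LC50 (mol/L), log kGSH ((mol/L)-1 min-1), log Kow, per compound
data = [
    # name,                            log LC50, log kGSH, log Kow
    ("Isobutyl methacrylate",             -3.64, -0.73,  2.66),
    ("Methyl methacrylate",               -2.59, -0.70,  1.38),
    ("Isopropyl methacrylate",            -3.53, -1.00,  2.25),
    ("Hexyl acrylate",                    -5.15,  1.31,  3.44),
    ("Benzyl methacrylate",               -4.58, -0.49,  2.87),
    ("2-Ethoxyethyl methacrylate",        -3.76, -0.60,  1.45),
    ("Tetrahydrofurfuryl methacrylate",   -3.69, -0.52,  1.30),
    ("Isobutyl acrylate",                 -4.79,  1.62,  2.22),
    ("Diethyl fumarate",                  -4.58,  2.05,  1.84),
    ("Ethyl acrylate",                    -4.60,  1.60,  1.32),
    ("Acrylonitrile",                     -3.57,  0.87,  0.23),
    ("Acrylamide",                        -2.81, -0.33, -0.67),
    ("Hydroxypropyl acrylate",            -4.59,  1.47,  0.35),
    ("2-Hydroxyethyl acrylate",           -4.38,  1.71, -0.21),
    ("Acrolein",                          -6.74,  3.92,  0.10),
]

y = np.array([row[1] for row in data])                   # log LC50
x = np.array([[row[2], row[3], 1.0] for row in data])    # log kGSH, log Kow, intercept term

coef, *_ = np.linalg.lstsq(x, y, rcond=None)
y_pred = x @ coef
r2 = 1 - np.sum((y - y_pred) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"log LC50 = {coef[0]:.2f} log kGSH + {coef[1]:.2f} log Kow + {coef[2]:.2f}  (r2 = {r2:.2f})")
```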

 

The examples discussed above are QSARs with one or only a few X variables. Other QSPR approaches use large numbers of parameters derived from chemical graphs. The CODESSA software, for example, generates molecular (494) and fragment (944) descriptors, classified as (i) constitutional, (ii) topological, (iii) geometrical, (iv) charge related, and (v) quantum chemical (Katritzky et al. 2009). Some models are based on structural fragments in a molecule. Fish toxicity data were analysed with this approach and up to 941 descriptors were calculated for each chemical in the data sets studied (Katritzky et al. 2001). Most of these data are the same as the ones presented in Figure 4. Two- to five-parameter correlations were calculated for the four Verhaar classes. The correlations for class 4 toxicants were less satisfactory, most likely because the QSAR combined different mechanisms in one model. This approach applies a wide set (hundreds) of molecular descriptors, which may lead to overfitted models. In such cases, validation of the model is essential (Eriksson et al. 2003).

 

Expert systems

Several expert systems have been developed that apply QSARs and other in silico methods to predict ecotoxicity profiles and fill data gaps. Two are briefly discussed here: the ECOSAR program from the US-EPA (Environmental Protection Agency) and the QSAR Toolbox from the OECD (Organisation for Economic Cooperation and Development).

 

ECOSAR

The Ecological Structure Activity Relationships (ECOSAR) Class Program is a computerized predictive system that estimates aquatic toxicity. The program has been developed by the US-EPA. As mentioned on their website: “The program estimates a chemical's acute (short-term) toxicity and chronic (long-term or delayed) toxicity to aquatic organisms, such as fish, aquatic invertebrates, and aquatic plants, by using computerized Structure Activity Relationships (SARs)".

 

Key characteristics of the program include:

  • Grouping of structurally similar organic chemicals with available experimental effect levels that are correlated with physicochemical properties in order to predict toxicity of new or untested industrial chemicals.
  • Programming of a classification scheme in order to identify the most representative class for new or untested chemicals.
  • Continuous update of aquatic QSARs based on collected or submitted experimental studies from both public and confidential sources.

 

The ECOSAR software is freely available from the US-EPA as a downloadable program without licensing requirements. Information on its use and set-up is provided in the ECOSAR Operation Manual v2.0 and the ECOSAR Methodology Document v2.0.

 

OECD QSAR Toolbox

The OECD Toolbox is a software application intended for filling data gaps in (eco)toxicity. The toolbox includes the following features:

  1. Identification of relevant structural characteristics and potential mechanism or mode of action of a target chemical.
  2. Identification of other chemicals that have the same structural characteristics and/or mechanism or mode of action.
  3. Use of existing experimental data to fill the data gap(s).

 

Data gaps can be filled via classical read-across or trend analysis using data from analogues or via the application of QSAR models.

 

The OECD QSAR Toolbox is a comprehensive and powerful system that requires expertise and experience to use. It can be downloaded at https://www.oecd.org/chemicalsafety/risk-assessment/oecd-qsar-toolbox.htm. Guidance documents and training materials are also available there, as well as a link to the video tutorials on ECHA’s YouTube channel.

 

When using the OECD QSAR Toolbox to identify suitable analogues for a read-across approach to estimate substance properties, it is very important not only to look at the similarity of the chemical structures, but also to take into account any information on the likely mode of action (from experimental data or from estimation models, the so-called ‘profiles’ in the OECD QSAR Toolbox). The example in Box 4 underlines the importance of assigning the correct MOA.

 

Box 4: Chemical domain: small change in structure - large consequences for toxicity. The importance of assigning the correct MOA

 

To illustrate the limitations of the read-across approach, as well as underlining the importance of being able to correctly assign the ‘real’ MOA to a chemical structure we can look at two very close structural analogues:

 

 

|  | 1-chloro-2,4-dinitrobenzene | 1,2-dichloro-4-nitrobenzene |
|---|---|---|
| CAS RN | 97-00-7 | 99-54-7 |
| Log KOW | 2.17 | 3.04 |
| Mol weight | 203 g/mol | 192 g/mol |

 

Both substances have the same three types of functional groups (an aromatic six-membered ring, a nitro substituent and a chloro substituent) and the same substitution pattern on the ring (1,2,4-positions). The only structural difference between them is the number of each substituent: one nitro substituent is replaced by a second chloro substituent. When calculating chemical similarity coefficients between the two substances (often used as a starting point to determine the ‘best’ structural analogues for read-across purposes), these two substances will be considered 100% similar by the majority of existing chemical similarity coefficients, as these often only compare the presence/absence of functional groups, and not their number.
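
The point about similarity coefficients can be illustrated with a toy calculation. Below, each substance is represented only by the presence or absence of three functional-group features, which is a drastic simplification of real fingerprint-based similarity; with such presence/absence features the Tanimoto coefficient for the two substances is 1.0 (100% similar), even though they differ in the number of substituents.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two sets of presence/absence features."""
    return len(a & b) / len(a | b)

# Presence/absence of functional-group features (the number of substituents is ignored,
# which is exactly the limitation discussed in the text).
chloro_dinitro_benzene = {"aromatic 6-ring", "nitro", "chloro"}   # 1-chloro-2,4-dinitrobenzene
dichloro_nitro_benzene = {"aromatic 6-ring", "nitro", "chloro"}   # 1,2-dichloro-4-nitrobenzene

print(tanimoto(chloro_dinitro_benzene, dichloro_nitro_benzene))   # 1.0 -> considered "100% similar"
```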

 

Looking at the chemical structures and the examples given for the Verhaar classification scheme (Box 2), one could easily come to the conclusion that both substances belong to Class 2: less inert, or polar narcosis type chemicals.

 

Applying the class 2 polar narcosis QSAR for the Pimephales promelas 96-hr LC50, as reported in EC_project (1995):

log LC50 (mol/L) = -0.73 log KOW – 2.16     (n = 86, r2 = 0.90, Q2 = 0.90, s.e. = 0.33)

 

yields estimates of the LC50 of 36.5 mg/L for the dinitro compound and 8 mg/L for the dichloro compound (see the table below). When looking at experimentally determined acute (96-hr) fish toxicity data for these two compounds, the estimate for the dichloro compound is quite close to reality (96-hr LC50 for Oryzias latipes of 4.7 mg/L), even though no data are available for the exact same species (Pimephales promelas). The estimate for the dinitro compound, however, greatly underestimates the toxicity: the experimental 96-hr LC50 for Oryzias latipes is as low as 0.16 mg/L, a factor of 230 lower than estimated by the polar narcosis QSAR.
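
The numbers above can be reproduced directly from the polar narcosis QSAR and the molecular weights listed earlier; a minimal sketch follows.

```python
def lc50_mg_per_l(log_kow: float, mol_weight_g_per_mol: float) -> float:
    # Class 2 polar narcosis QSAR (EC_project 1995): log LC50 (mol/L) = -0.73*log Kow - 2.16
    log_lc50_mol_per_l = -0.73 * log_kow - 2.16
    return (10 ** log_lc50_mol_per_l) * mol_weight_g_per_mol * 1000.0  # mol/L -> mg/L

# 1-chloro-2,4-dinitrobenzene: log Kow = 2.17, MW = 203 g/mol
print(f"dinitro compound:  {lc50_mg_per_l(2.17, 203):.1f} mg/L")   # ~36.5 mg/L
# 1,2-dichloro-4-nitrobenzene: log Kow = 3.04, MW = 192 g/mol
print(f"dichloro compound: {lc50_mg_per_l(3.04, 192):.1f} mg/L")   # ~8.0 mg/L

# Comparison with the experimental 96-hr LC50 values for Oryzias latipes
print(f"ratio dinitro:  {lc50_mg_per_l(2.17, 203) / 0.16:.0f}")    # ~230
print(f"ratio dichloro: {lc50_mg_per_l(3.04, 192) / 4.7:.1f}")     # ~1.7
```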

 

The explanation lies in the MOA assignment: 1,2-dichloro-4-nitrobenzene indeed has a polar narcosis type MOA, but 1-chloro-2,4-dinitrobenzene is actually an alkylating substance (unspecific reactive, Class 3 MOA), because the electronic interactions of the 2,4-dinitro substitution make the 1-chloro substituent highly reactive towards nucleophilic groups (e.g. in DNA or proteins). This reactivity leads to an increased toxicity.

 

It should be noted that software implementations of MOA classification schemes, like the Verhaar classification scheme in the Toxtree software or as implemented in the OECD QSAR Toolbox, identify both nitrobenzene substances as Class 3, unspecific reactive MOA. The OASIS MOA classification scheme and the ECOSAR classification scheme do distinguish between mono-nitrobenzenes as inert and di-nitrobenzenes as (potentially) reactive substances, and thus (correctly) assign different MOAs to these two substances. ECOSAR subsequently has a separate polynitrobenzene grouping, with its own log KOW-based linear regression QSAR for fish toxicity. In the summary table below, the ECOSAR estimates of the 96-hr LC50 for fish in general are also given for comparison. Even so, the polynitrobenzene model still underestimates the toxicity of the alkylating agent 1-chloro-2,4-dinitrobenzene by a factor of 25.

 

 

|  | 1-chloro-2,4-dinitrobenzene | 1,2-dichloro-4-nitrobenzene |
|---|---|---|
| QSAR 96-hr LC50 (Pimephales promelas) | 36.5 mg/L | 8.02 mg/L |
| Experimental 96-hr LC50 (Oryzias latipes) | 0.16 mg/L | 4.7 mg/L |
| Ratio QSAR / experimental | 228 | 1.7 |
| OECD QSAR Toolbox MOA assignment: Verhaar (modified) | Class 3 (unspecific reactive) | Class 3 (unspecific reactive) |
| MOA by OASIS | Reactive unspecified | Basesurface narcotics |
| ECOSAR classification | Polynitrobenzenes | Neutral Organics |
| ECOSAR LC50 fish 96-hr | 4.01 mg/L (polynitrobenzene model); 94.6 mg/L (neutral organics model) | 16.2 mg/L (neutral organics model) |

 

 

 

References

Barron, M.G., Lilavois, C.R., Martin, T.M. (2015). MOAtox: A comprehensive mode of action and acute aquatic toxicity database for predictive model development. Aquatic Toxicology 161, 102-107.

Busch, W., Schmidt, S., Kuhne, R., Schulze, T., Krauss, M., Altenburger, R. (2016). Micropollutants in European rivers: A mode of action survey to support the development of effect-based tools for water monitoring. Environmental Toxicology and Chemistry 35, 1887-1899.

EC_project (1995). Overview of structure-activity relationships for environmental endpoints. Report prepared within the framework of the project "QSAR for Prediction of Fate and Effects of Chemicals in the Environment", an international project of the Environmental Technologies RTD Programme (DG XII/D-1) of the European Commission under contract number EV5V-CT92-0211. Research Institute of Toxicology, Utrecht University, Utrecht, The Netherlands.

ECHA (2017a). Non-animal approaches: Current status of regulatory applicability under the REACH, CLP and Biocidal Products regulations. European Chemicals Agency, Helsinki, Finland.

ECHA (2017b). Read-Across Assessment Framework (RAAF). https://echa.europa.eu/documents/10162/13628/raaf_en.pdf. European Chemicals Agency, Helsinki, Finland.

Enoch, S.J., Hewitt, M., Cronin, M.T.D., Azam, S., Madden, J.C. (2008). Classification of chemicals according to mechanism of aquatic toxicity: An evaluation of the implementation of the Verhaar scheme in Toxtree. Chemosphere 73, 243-248.

Eriksson, L., Jaworska, J., Worth, A.P., Cronin, M.T.D., McDowell, R.M., Gramatica, P. (2003). Methods for reliability and uncertainty assessment and for applicability evaluations of classification- and regression-based QSARs. Environmental Health Perspectives 111, 1361-1375.

Katritzky, A.R., Slavov, S., Radzvilovits, M., Stoyanova-Slavova, I., Karelson, M. (2009). Computational chemistry approaches for understanding how structure determines properties. Zeitschrift Fur Naturforschung Section B-a Journal of Chemical Sciences 64, 773-777.

Katritzky, A.R., Tatham, D.B., Maran, U. (2001). Theoretical descriptors for the correlation of aquatic toxicity of environmental pollutants by quantitative structure-toxicity relationships. Journal of Chemical Information and Computer Sciences 41, 1162-1176.

Nendza, M., Müller, M., Wenzel, A. (2017). Classification of baseline toxicants for QSAR predictions to replace fish acute toxicity studies. Environmental Science: Processes Impacts 19, 429-437.

OECD (2004). The report from the expert group on (quantitative) structure-activity relationships [(q)sars] on the principles for the validation of (Q)SARs, OECD series on testing and assessment, number 49. Organisation for Economic Cooperation and Development, Paris, France.

Russom, C.L., Bradbury, S.P., Broderius, S.J., Hammermeister, D.E., Drummond, R.A. (1997). Predicting modes of toxic action from chemical structure: Acute toxicity in the fathead minnow (Pimephales promelas). Environmental Toxicology and Chemistry 16, 948-967.

Schultz, T.W., Holcombe, G.W., Phipps, G.L. (1986). Relationships of quantitative structure-activity to comparative toxicity of selected phenols in the Pimephales-promelas and Tetrahymena-pyriformis test systems. Ecotoxicology and Environmental Safety 12, 146-153.

Verhaar, H.J.M., van Leeuwen, C.J., Hermens, J.L.M. (1992). Classifying environmental pollutants. 1: Structure-activity relationships for prediction of aquatic toxicity. Chemosphere 25, 471-491.

Vonk, J.A., Benigni, R., Hewitt, M., Nendza, M., Segner, H., van de Meent D., et al. (2009). The Use of Mechanisms and Modes of Toxic Action in Integrated Testing Strategies: The Report and Recommendations of a Workshop held as part of the European Union OSIRIS Integrated Project. Atla-Alternatives to Laboratory Animals 37, 557-571.

6.4. Diagnostic risk assessment approaches and tools

Author: Michiel Kraak

Reviewers: Ad Ragas and Kees van Gestel

 

Learning objectives:

You should be able to

  • define and distinguish hazard and risk
  • distinguish predictive tools (toxicity tests) and diagnostic tools (bioassays)
  • list bioassays and risk assessment tools at different levels of biological organization, ranging from laboratory to field approaches

 

Keywords: hazard assessment, risk assessment, prognosis, diagnosis, effect based monitoring, bioassays, effect directed analysis, mesocosm, biomonitoring, TRIAD approach, eco-epidemiology.

 

To determine whether organisms are at risk when exposed to certain concentrations of hazardous compounds in the field, the toxicity of environmental samples can be analysed. For this purpose, several approaches and techniques have been developed, known as diagnostic tools. The tools described in Sections 6.4.1-6.4.8 have in common that they make use of living organisms to assess environmental quality. This is generally achieved by performing bioassays in which selected test species are exposed to (concentrates or dilutions of) environmental samples, after which their performance (survival, growth, reproduction, etc.) is measured. The species selected as test organisms for bioassays are generally the same as those selected for toxicity tests (see section on Selection of ecotoxicity test organisms).

Each level of biological organization has its own battery of test methods. At the lowest level of biological organization, a wide variety of in vitro bioassays is available (see section Effect-based monitoring: in vitro bioassays). These comprise tests based on cell lines, but bacteria and zebrafish embryos are also employed. If the response of a bioassay to an environmental sample exceeds a predefined effect-based trigger value, the response is considered indicative of ecological risks. Yet, the compounds causing the observed toxicity are initially unknown. These can subsequently be elucidated with Effect-Directed Analysis (see section Effect-Directed Analysis): the sample causing the effect is subjected to fractionation and the fractions are tested again. This procedure is repeated until the sample is reduced to a few individual compounds, which can then be identified, allowing confirmation of their contribution to the observed toxic effects.

At higher levels of biological organization, a wide variety of in vivo tests and test organisms is available, including terrestrial and aquatic plants and animals (see section Effect-based monitoring: in vivo bioassays). Yet, different test species tend to respond very differently to specific toxicants and to specific field-collected samples. Hence, the results of a single-species bioassay may not reliably reflect the risk of exposure to a specific environmental sample. To avoid over- and underestimation of environmental risks, it is therefore advisable to employ a battery of in vitro and in vivo bioassays. In a case study on effect-based water quality assessment, the great potential of this approach was demonstrated, resulting in a ranking of sites based on ecological risks rather than on the absence or presence of compounds (see section Effect-based water quality assessment).

At the highest levels of biological organization, effect-based monitoring tools include bioassays performed in mesocosms (see section Community Ecotoxicology in practice) and in the field itself, the so-called in situ bioassays (see section Biomonitoring: in situ bioassays and contaminant concentrations in organisms). Cosm studies represent a bridge between the laboratory and the natural world. The strength of mesocosms lies in the combination of ecological realism, the ability to manipulate different environmental parameters, and the opportunity to replicate treatments.

In the field, the aim of biomonitoring is the in situ assessment of environmental quality on a regular basis in time, using living organisms (see section Biomonitoring: in situ bioassays and contaminant concentrations in organisms). Organisms are collected from reference sites and exposed in cages or on artificial substrates at the study sites, after which they are recollected and either their condition is analysed (in situ bioassay), or the internal concentrations of specific target compounds are measured, or both (see section Biomonitoring: in situ bioassays and contaminant concentrations in organisms).

Finally, two approaches will be introduced that help to bridge policy goals and ecosystem responses to perturbation: the TRIAD approach and eco-epidemiology. The TRIAD approach is a tool for site-specific ecological risk assessment, combining and integrating information on contaminant concentrations, bioassay results and ecological field inventories in a ‘Weight of Evidence’ approach (see section TRIAD approach). Eco-epidemiology is defined as the study of the distribution and causation of impacts of multiple stressor exposures in ecosystems, and the application of this study to reduce ecological impacts (see section Eco-epidemiology).

6.4.1. Effect-based monitoring: In vitro bioassays

Author: Timo Hamers

Reviewer: Beate Escher

 

Learning objectives:

You should be able to

  • explain why effect-based monitoring is “more comprehensive” than chemical-analytical monitoring
  • name several characteristics which make in vitro bioassays suitable for effect-based monitoring purposes
  • give examples of most widely used bioassays
  • describe the principles of a reporter gene assay, an enzyme induction assay, and an enzyme inhibition assay
  • indicate how results from effect-based monitoring with in vitro bioassays can be interpreted in terms of environmental risk

 

Key words: effect-based monitoring; cell line; reporter gene assay; toxicity profile; trigger value

 

 

Effect-based monitoring

Diagnosis of the chemical status of the environment is traditionally performed by the analytical detection of a limited number of chemical compounds. Environmental quality is then assessed by making a compound-by-compound comparison between the measured concentration of an individual contaminant and its environmental quality standard (EQS). Such a compound-by-compound approach, however, cannot cover the full spectrum of contaminants, given the unknown identity of the vast majority of compounds released into the environment. It also ignores the presence of unknown breakdown products formed during degradation processes and the presence of compounds at concentration levels below the analytical limit of detection. Furthermore, it overlooks the combined effects of the contaminants present in the complex environmental mixture.

To overcome these shortcomings, effect-based monitoring has been proposed as a comprehensive and cost-effective, complementary strategy to chemical analysis for the diagnosis of environmental chemical quality. In effect-based monitoring the toxic potency of the complex mixture is determined as a whole by testing environmental samples in bioassays. Bioassays are defined as “biological test systems that consist of whole organisms or parts of organisms (e.g., tissues, cells, proteins), which show a measurable and potentially biologically relevant response when exposed to natural or xenobiotic compounds, or complex mixtures present in environmental samples” (Hamers et al. 2010).

Bioassays making use of whole organisms are further referred to as in vivo bioassays (in vivo means “while living”). In vivo bioassays have relatively high ecological relevance as they provide information on survival, reproduction, growth, or behaviour of the species tested. In vivo bioassays will be addressed in a separate section.

 

In vitro bioassays

Bioassays making use of tissues, cells or proteins are called in vitro bioassays (in vitro means “in glass”), as in the past they were typically performed in test tubes or petri dishes made from glass. Nowadays, in vitro bioassays are more often performed in microtiter well plates containing multiple (6, 12, 24, 48, 96, 384, or 1536) test containers (called “wells”) per plate. Most in vitro bioassays show a very mechanism-specific response, which is for instance indicative of the inhibition of a specific enzyme or the activation of a specific molecular receptor.

In addition to providing mechanism-specific information about the complex mixture present in the environment, in vitro bioassays have several other advantages. Small test volumes, for instance, make in vitro assays suitable for testing small samples. If sampling volumes are not restricted, the small volume of in vitro bioassays allows pre-concentrated samples (i.e. extracts) to be tested. Moreover, in vitro bioassays have short test durations (incubation periods usually range from 15 minutes to 48 hours) and can be performed at relatively high throughput, i.e. multiple samples can be tested per microtiter plate experiment. Microtiter plate experiments require an easy read-out (e.g. luminescence, fluorescence, optical density), which is typically a direct measure of the toxic potency to which the bioassay was exposed. Finally, using cells or proteins for toxicity testing raises fewer ethical objections than the use of intact organisms in in vivo bioassays.

Cell-based in vitro bioassays can make use of different types of cells. Cells can be isolated from animal tissue and grown in medium in cell culture flasks. If a flask grows full, the cells can be diluted in fresh medium and distributed over several new flasks (i.e. “passaging”). For cells freshly isolated from animal tissue (called primary cells), however, the number of passages is limited, because these cells can only go through a limited number of cell doublings. The use of primary cells in environmental monitoring is therefore not preferred, as the preparation of cell cultures is time-consuming and requires the use of animals. Moreover, the composition and activity of the cells may change from batch to batch. Instead, environmental monitoring often makes use of cell lines. A cell line is a cell culture derived from a single cell that has been immortalized, allowing the cell to divide indefinitely. Immortalization is obtained by selecting a (mutated) cancer cell from a donor animal or human being, or by inducing a mutation in a healthy cell after isolation, using chemicals or viruses. The advantage of a cell line is that all cells are genetically identical and can be used for an indefinite number of experiments. The drawback of cell lines is that the cells are cancer cells that do not behave like healthy cells in an intact organism. For instance, cancer cells have lost their differentiated properties and have a short cell cycle due to increased proliferation (see section on In vitro toxicity testing).

 

Examples of in vitro bioassays

Reporter gene bioassays are a type of in vitro bioassays that are frequently used in effect-based monitoring. Such bioassays make use of genetically modified cell lines or bacteria that contain an incorporated gene construct encoding for an easily measurable protein (i.e. the reporter protein). This gene construct is developed in such a way that its expression is triggered by a specific interaction between the toxic compound and a cellular receptor. If the receptor is activated by the toxic compound, transcription and translation of the reporter protein takes place, which can be easily measured as a change in colour, fluorescence, or luminescence.

 

The most well-known reporter gene bioassays are steroid hormone-sensitive bioassays. These bioassays are based on the principle by which steroid hormones act, i.e. activation of a receptor protein followed by translocation of the hormone-receptor complex to the nucleus, where it binds to a hormone-responsive element of the DNA, thereby initiating transcription and translation of steroid hormone-dependent genes. In a hormone-responsive reporter gene bioassay, the reporter gene construct is also under transcriptional control of a hormone-responsive element. Activation of the steroid hormone receptor by an endocrine disrupting compound thus leads to expression of the reporter protein, which can easily be measured. Estrogenic activity, for instance, is typically measured in cell lines in which a plasmid encoding the reporter protein luciferase is stably transfected into the cellular genome (Figure 1). Expression of this enzyme is under transcriptional control of an estrogen-responsive element (ERE). Upon exposure to an environmental sample, estrogenic compounds present in the sample may enter the cell and bind and activate the estrogen receptor (ER). The activated ER forms a dimer with another activated ER and is translocated to the nucleus, where the dimer binds to the ERE, causing transcription and translation of the luciferase reporter gene. After 24 hours, the exposure is terminated and the amount of luciferase enzyme can easily be quantified by lysing the cells and adding the energy source ATP and the substrate luciferin. Luciferin is converted by luciferase in a reaction that is accompanied by the emission of light (i.e. the same reaction as occurs in fireflies and glow-worms). The amount of light produced by the cells is quantified in a luminometer and is a direct measure of the estrogenic potency of the complex mixture to which the cells were exposed.

 

Figure 1: Principle of an estrogen responsive reporter gene assay: estrogenic compounds (red) enter the cell and activate the estrogen receptor (ER; triangle). Activated ERs form a dimer that is translocated to the nucleus where they bind to estrogen response elements (EREs). The regular subsequent pathway is indicated in black: estrogen responsive genes are transcribed into mRNA and translated into proteins that cause feminizing effects. The reporter gene pathway is indicated in blue: the reporter gene, which is also under transcriptional control of the ERE, is transcribed and translated into the reporter protein luciferase. Upon opening of the cell (lysis) and addition of the substrate luciferin and ATP as energy source, light is produced, which is a direct measure for the amount of luciferase produced, and thereby also for the estrogenic potency to which the cells were exposed.
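
Once the luminescence has been measured for a dilution series of a sample, the estrogenic potency is usually summarized by a concentration-response fit (e.g. an EC50). The sketch below fits a four-parameter logistic curve to purely synthetic luminescence data; it only illustrates this data-analysis step and is not the actual assay protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_parameter_logistic(conc, bottom, top, ec50, hill):
    """Four-parameter logistic concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

# Synthetic luminescence data for a dilution series (relative enrichment factor on the
# x-axis, relative light units on the y-axis); values are invented for illustration.
concentration = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
luminescence = np.array([105, 130, 240, 520, 890, 1050, 1090])

# Initial guesses: bottom, top, EC50, Hill slope
p0 = [100.0, 1100.0, 0.3, 1.0]
params, _ = curve_fit(four_parameter_logistic, concentration, luminescence, p0=p0)

bottom, top, ec50, hill = params
print(f"fitted EC50 = {ec50:.2f} (same units as the x-axis), Hill slope = {hill:.2f}")
```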

 

Another classic bioassay, for the detection of dioxin-like compounds, is the ethoxyresorufin-O-deethylase (EROD) bioassay (Figure 2). The EROD bioassay is an enzyme induction bioassay that makes use of a hepatic cell line (i.e. derived from liver cells). Similar to what was described for estrogenic compounds, dioxin-like compounds can enter these cells upon exposure to an environmental sample, and bind and activate a receptor protein, in this case the arylhydrocarbon receptor (AhR) (see section on Receptor interactions). The activated AhR is subsequently translocated to the nucleus where it forms a dimer with another transcription factor (ARNT) that binds to the dioxin-responsive element (DRE), causing transcription and translation of dioxin-responsive genes. One of these genes encodes CYP1A1, a typical Phase I biotransformation enzyme. Upon lysis of the cells and addition of the substrate ethoxyresorufin, CYP1A1 de-ethylates this substrate into resorufin, a fluorescent reaction product that can be measured easily. As such, the amount of fluorescence is a direct measure of the dioxin-like potency to which the cells were exposed.

 

Figure 2: Simplified representation of the EROD (ethoxyresorufin-O-deethylase) assay. Dioxin-like compounds enter the hepatic cell and bind to the arylhydrocarbon receptor (AhR), which is translocated to the nucleus where it binds to dioxin-responsive elements (DREs) in the DNA. This causes transcription and translation of cytochrome P-4501A1 (CYP1A1). After 24h of incubation, the cells are lysed and the substrate ethoxyresorufin is added, which is oxidized by CYP1A1 into the fluorescent (pink) product resorufin

 

Another classic bioassay is the acetylcholinesterase (AChE) inhibition assay for the detection of organophosphate and carbamate insecticides (Figure 3). By making a covalent bond to the active site of the AChE enzyme, these compounds are capable of inhibiting the hydrolysis of the neurotransmitter acetylcholine (ACh) (see section on Protein inactivation). The in vitro AChE inhibition assay makes use of the principle that AChE can also hydrolyse an alternative substrate called acetylthiocholine (ATCh) into acetic acid and thiocholine (TCh). AChE inhibition leads to a decreased rate of TCh formation, which can be measured using an indicator, called Ellman’s reagent. This indicator reacts with the thiol (-SH) group of TCh, resulting in a yellow breakdown product that can easily be measured photometrically. In the bioassay, purified AChE (commercially available for instance from electric eel) is incubated with an environmental sample in the presence of ATCh and Ellman’s reagent. A decrease in the rate by which the yellow reaction product is formed is a direct measure for the inhibition of the AChE activity.

 

Figure 3: Principle of AChE inhibition: The normal hydrolysis of the neurotransmitter ACh by AChE is shown in the top row (1). The inhibition of AChE by the organophosphate insecticide dichlorvos is shown in the middle row (2). The phosphate ester-group does not release from the AChE active site, causing a decrease in AChE available for ACh hydrolysis. The principle of the AChE inhibition assay is shown in the bottom row (3). The remaining AChE activity is measured using an alternative substrate ATCh. The thiocholine product can be measured using the DTNB indicator (Ellman’s reagent), which reacts with the thiol group, leading to a disulphide and a free TNB molecule. The yellow colour of the latter allows photometric quantification of the reaction.

 

Another bioassay that is used to detect mutagenic compounds in environmental samples is the Ames assay, which has been described in the section on Carcinogenicity and Genotoxicity.

 

Interpretation of the toxicity profile

In practice, multiple mechanism-specific in vitro bioassays are often combined into a test battery to cover the spectrum of toxicological endpoints in an (eco)system. As such, the battery can be considered a safety net that signals the presence of toxic compounds at low concentrations. However, the question of which combination of in vitro tests provides sufficient coverage of the toxicological endpoints of concern is still open.

 

Still, testing an environmental sample in a battery of mechanism-specific in vitro bioassays yields a toxicity profile of the sample, indicating its toxic potency towards different endpoints. Two main strategies have been described to interpret in vitro toxicity profiles in terms of risk. In the “benchmark strategy”, the toxicity profiles are compared to one or more reference profiles (Figure 4). A reference profile may be defined as the profile that is generally observed in environmental samples from locations with good chemical and/or ecological quality. The benchmark approach indicates to what extent the observed toxicity profile deviates from a toxicity profile corresponding to the desired environmental quality. It also indicates the endpoints that are most affected by the environmental sample.

 

Figure 4: Example of a benchmark approach, in which toxicity profiles for sediment samples from different water systems (different shades of blue) have been compared to their own reference profile (all green boxes). The colours green-yellow-orange-red indicate an increasing bioassay response. The different bioassays are indicated at the top of the figure. The tree-like structure (dendrogram) at the right indicates the relative distance between the different toxicity profiles. It clearly distinguishes between reference sites and clean sites on the one hand and harbour sites on the other hand. In between are samples from shipping lanes (Moerdijk and Nieuwe Maas). Zierikzee Inner Harbor is clearly a location with a deviating toxicity profile that is not similar to the other harbour sites. Redrawn from Hamers et al. (2010) by Wilma IJzerman.

 

In the “trigger value strategy” the response of each individual bioassay is compared to a bioassay response level at which chemicals are not expected to cause adverse effects at higher levels of biological organization. This endpoint-specific “safe” bioassay response level is called an effect-based trigger (EBT) value. The method for deriving EBT values is still under development. It can be based on different criteria, such as laboratory toxicity data, field concentrations, or EU environmental quality standards (EQS) of individual compounds, which are translated into bioassay-specific effect-levels (see section on Effect-based water quality assessment).
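
A minimal sketch of the trigger value strategy is given below: the measured response of each bioassay in the battery is divided by its effect-based trigger (EBT) value, and endpoints with a ratio above 1 are flagged. The bioassay names and EBT values used here are hypothetical placeholders, not derived trigger values.

```python
# Hypothetical bioassay responses for one water sample, expressed in the same
# (bio)analytical units as the corresponding effect-based trigger (EBT) values.
sample_responses = {
    "estrogenicity (E2-equivalents, ng/L)": 0.9,
    "dioxin-like activity (TEQ, pg/L)": 12.0,
    "AChE inhibition (% of control)": 4.0,
}

# Hypothetical EBT values (placeholders for illustration only).
effect_based_triggers = {
    "estrogenicity (E2-equivalents, ng/L)": 0.5,
    "dioxin-like activity (TEQ, pg/L)": 50.0,
    "AChE inhibition (% of control)": 10.0,
}

for endpoint, response in sample_responses.items():
    ratio = response / effect_based_triggers[endpoint]
    flag = "EXCEEDS trigger" if ratio > 1 else "below trigger"
    print(f"{endpoint}: response/EBT = {ratio:.2f} -> {flag}")
```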

 

In addition to the benchmark and trigger value approaches focusing on environmental risk assessment, effect-based monitoring with in vitro bioassays can also be used for effect-directed analysis (EDA). EDA focuses on samples that cause bioassay responses that cannot be explained by the chemicals that were analyzed in these samples. The goal of EDA is to detect and identify emerging contaminants that are responsible for the unexplained bioassay response and are not chemically analyzed because their presence or identity is unknown. In EDA, in vitro bioassay responses to fractionated samples are used to steer the chemical identification process of unknown compounds with toxic properties in the bioassays (see section on Effect-Directed Analysis).

 

Further reading:

Hamers, T., Leonards, P.E.G., Legler, J., Vethaak, A.D., Schipper, C.A. (2010). Toxicity profiling: an integrated effect-based tool for site-specific sediment quality assessment. Integrated Environmental Assessment and Management 6, 761-773

6.4.2. Effect Directed Analysis

Author: Marja Lamoree

Reviewers: Timo Hamers, Jana Weiss

 

Learning goals:

You should be able to

  • explain the complementary nature of the analytical/chemical and biological/toxicological techniques used in Effect-Directed Analysis
  • explain the purpose of Effect-Directed Analysis
  • describe the steps in the Effect-Directed Analysis process
  • describe when the application of Effect-Directed Analysis is most useful

 

Keywords: extraction, bioassay testing, fractionation, identification, confirmation

 

 

In general, the quality of the environment may be monitored by two complementary approaches: i) quantitative chemical analysis of selected (priority) pollutants and ii) effect-based monitoring using in vitro and in vivo bioassays. Compared to the more classical chemical-analytical approach that has been used for decades, effect-based monitoring is currently applied in an explorative manner and has not yet matured into a routinely implemented monitoring tool that is anchored in legislation. However, in an international framework, developments to formalize the role of effect-based monitoring and to standardize the use of bioassay testing for environmental quality assessment are underway.

A weakness of the chemical approach is that, because target compounds are preselected for quantitative analysis, other compounds that are relevant for environmental quality may be missed. In comparison, inclusiveness is one of the advantages of effect-based monitoring: all compounds – not only a few pre-defined ones – having a specific effect will contribute to the total, measured biological activity (see section In vitro bioassays). In turn, the effect-based approach strongly benefits from chemical-analytical support to pinpoint which compounds are responsible for the observed activity and to enable measures for environmental protection, e.g. reducing the emission or discharge of a specific toxic compound into the environment.

In Effect-Directed Analysis (EDA), the strengths of analytical chemical techniques and effect-based testing are combined with the aim to identify novel compounds that show activity in a biological analysis and that would have gone unnoticed using the chemical and the effect-based approach separately. A schematic representation of EDA is shown in Figure 1 and the various steps are described below in more detail. There is no limitation regarding the sample matrix: EDA has been applied to e.g. water, soil/sediment and biota samples. It is used for in-depth investigations at locations that are suspected to be contaminated but where the compounds responsible for the observed adverse effects are not known. In addition to environmental quality assessment, EDA is applied in the fields of food security analysis and drug discovery. In Table 1 examples of EDA studies are given.

 

Figure 1. Schematic representation of Effect-Directed Analysis (EDA).

 

1. Extract

The first step is the preparation of an extract of the sample. For soil/sediment samples, a sieving step prior to the actual extraction may be necessary in order to remove large particles and obtain a sample that is well-defined in terms of particle size (e.g. <200 μm). Examples of biota samples are whole organism homogenates or parts of the organism, such as blood and liver. For the extraction of the samples, analytical techniques such as liquid/liquid or solid phase extraction are applied to concentrate the compounds of interest and to remove matrix constituents that may interfere with the later steps of the EDA.

 

2. Biological analysis

The choice of endpoint to include in an EDA study is very important, as it dictates the nature of the toxicity of the compounds that may be identified (see Section on Toxicodynamics and Molecular Interaction). For application in EDA, typically in vitro bioassays that are carried out in multiwell (≥ 96 well) plates can be used, because of their low cost, high throughput and ease of use (see Section on In vitro bioassays), although sometimes in vivo assays (see Section on In vivo bioassays) are applied too.

 

Table 1. Examples of EDA studies, including endpoint, type of bioassay, sample matrix and compounds identified.

| Endpoint | Type of bioassay | Sample matrix | Type of compounds identified |
|---|---|---|---|
| In vitro | | | |
| Estrogenicity | Cell-based reporter gene | Sediment | Endogenous hormones |
| Anti-androgenicity | Cell-based reporter gene | Sediment | Plasticizers, organophosphorus flame retardants, synthetic fragrances |
|  | idem | Water | Pharmaceuticals, pesticides, plasticizers, flame retardants, UV filters |
| Mutagenicity | Bacterial luminescence reporter strain | Water | Benzotriazoles |
| Thyroid hormone disruption | Radioligand binding | Polar bear plasma | Metabolites of PCBs, nonylphenols |
| In vivo | | | |
| Photosystem II toxicity | Pulse Amplitude Modulation fluorometry | Water | Pesticides |
| Endocrine disruption | Snail reproduction | Sediment | Phthalates, synthetic fragrances, alkylphenols |

 

3. Fractionation

Fractionation of the extract is achieved by the application of chromatography, resulting in the separation of the – in most cases – multitude of different compounds that are present in an extract of an environmental sample. Chromatographic separation is obtained after the migration of compounds through a sorbent bed. In most cases, the separation principle is based on the distribution of compounds between the liquid mobile phase and the solid stationary phase (liquid chromatography, or LC), but a chromatographic separation using the partitioning between the gas phase and a sorbent bed (gas chromatography, or GC) is also possible. At the end of the separation column, at specified time intervals fractions can be collected that are simpler in composition in comparison to the original extract: a reduction in the number of compounds per fraction is obtained. The collected fractions are tested in the bioassay and the responsive fractions are selected for further chemical analysis and identification (step 4). The time intervals for fraction collection vary between a few minutes in older applications and a few seconds in new applications of EDA, which enables fractionation directly into multiwell plates for high throughput bioassay testing. In cases where fractions are collected during time intervals in the order of minutes, the fractions are still so complex that a second round of fractionation to obtain fractions of reduced complexity is often necessary for the identification of compounds that are responsible for the observed effect (see Figure 2).
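To illustrate how responsive fractions are pinpointed after fractionation, the following Python sketch selects the fractions whose bioassay response exceeds a threshold and reports the corresponding retention-time windows for follow-up chemical analysis. It is a minimal illustration only: the collection interval, the responses and the 20% threshold are hypothetical and not taken from a specific EDA study.

```python
# Minimal sketch (hypothetical data): selecting bioassay-responsive fractions
# collected at fixed time intervals during the chromatographic separation.

def responsive_fractions(responses, interval_s, start_s=0.0, threshold=20.0):
    """Return (fraction index, start time, end time) for fractions whose
    bioassay response (e.g. % effect) exceeds the threshold."""
    selected = []
    for i, effect in enumerate(responses):
        if effect >= threshold:
            t_start = start_s + i * interval_s
            selected.append((i, t_start, t_start + interval_s))
    return selected

# Example: 96 fractions collected every 10 s into a multiwell plate
effects = [0.0] * 96
effects[23], effects[24], effects[57] = 35.0, 62.0, 28.0   # hypothetical responses
for idx, t0, t1 in responsive_fractions(effects, interval_s=10.0):
    print(f"fraction {idx}: {t0:.0f}-{t1:.0f} s -> chemical analysis")
```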

 

Figure 2. Schematic representation of extract fractionation and selection of fractions for further testing, identification and confirmation.

 

4. Chemical Analysis

Chemical analysis for the identification of the compounds that cause the effect in the bioassay is usually done by LC coupled to mass spectrometric (MS) detection. To obtain high mass accuracy that facilitates compound identification, high resolution mass spectrometry (HR-MS) is generally applied. Fractions obtained after one or two fractionation steps are injected into the LC-MS system. In studies where fractionation into multiwell plates is used (and thus small fractions in the order of microliters are collected), only one round of fractionation is applied. In these cases, identification and fraction collection can be done in parallel, using a splitter after the chromatographic column that directs part of the eluent from the column to the well plate and the other part to the MS (see Figure 3). This is called high throughput EDA (HT-EDA).

 

Figure 3. Schematic representation of high-throughput Effect-Directed Analysis (HT-EDA).

 

5. Identification

HR-MS is needed to determine the molecular mass with high accuracy (e.g. 119.12423 Dalton), from which the molecular formula of the compound (e.g. C6H5N3) can be derived. Optimally, HR-MS instrumentation is equipped with an MS-MS mode, in which compound fragmentation is induced by collisions with other molecules, resulting in fragments that are specific for the original compound. Fragmentation spectra obtained using the MS-MS mode of HR-MS instruments help to elucidate the structure of the compounds eluting from the column; see Figure 4 for an example.

 

Figure 4. Example of a chemical structure corresponding to an accurate mass of 119.12423 Dalton and the corresponding molecular formula C6H5N3: 1,2,3-benzotriazole.

 

Other information, such as log Kow, may be calculated using dedicated software packages that use elemental composition and structure as input. To aid the identification process, compound and mass spectral libraries are used, as well as more novel databases containing toxicity information (e.g. PubChem BioAssay, ToxCast). Mass spectrometry vendor software, public/web-based databases and databases compiled in-house enable suspect screening to identify compounds that are known, e.g. because they are applied in consumer products or construction materials. When MS signals cannot be attributed to known compounds or their metabolites/transformation products, the approach is called non-target screening, and additional techniques such as Nuclear Magnetic Resonance (NMR) spectroscopy may aid the identification. The identification process is complicated and often time-consuming, and results in a list of tentatively identified compounds that needs to be evaluated for confirmation.
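As a minimal illustration of this matching step, the Python sketch below compares a measured monoisotopic mass with masses calculated for a small list of candidate molecular formulas and reports matches within a mass-accuracy window. The candidate list, the measured mass and the 5 ppm tolerance are assumptions for illustration only; real identification workflows additionally use isotope patterns, retention times, MS-MS fragments and library scores.

```python
# Minimal sketch of suspect screening by accurate mass: compare a measured
# monoisotopic mass with masses calculated for candidate molecular formulas.

MONOISOTOPIC = {"C": 12.0, "H": 1.00782503, "N": 14.0030740, "O": 15.9949146}

def formula_mass(formula):
    """Monoisotopic mass of a formula given as {element: count}."""
    return sum(MONOISOTOPIC[el] * n for el, n in formula.items())

def match(measured_mass, candidates, tol_ppm=5.0):
    """Return candidates whose calculated mass lies within tol_ppm."""
    hits = []
    for name, formula in candidates.items():
        calc = formula_mass(formula)
        ppm = (measured_mass - calc) / calc * 1e6
        if abs(ppm) <= tol_ppm:
            hits.append((name, round(calc, 5), round(ppm, 2)))
    return hits

suspects = {"1,2,3-benzotriazole (C6H5N3)": {"C": 6, "H": 5, "N": 3},
            "phenol (C6H6O)": {"C": 6, "H": 6, "O": 1}}
# Hypothetical measured neutral monoisotopic mass (~119.0483 for C6H5N3)
print(match(119.0484, suspects))
```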

 

6. Confirmation

For an unequivocal confirmation of the identity of a tentatively identified compound, it is necessary to obtain a standard of the compound to investigate whether its analytical chemical behaviour corresponds to that of the tentatively identified compound in the environmental sample. In addition, the biological activity of the standard should be measured and compared with the earlier obtained data. In case both the chemical analysis and bioassay testing results support the identification, confirmation of compound identity is achieved.

In principle, the confirmation step of an EDA study is very straightforward, but in practice the required standards are often not commercially available. Dedicated synthesis is time-consuming and costly, and the confirmation step is therefore often a bottleneck in EDA studies.

 

EDA is particularly suitable for samples collected at specific locations where comprehensive chemical analysis of priority pollutants and other relevant chemicals has already been conducted, and where ecological quality assessment has revealed that the local conditions are compromised (see other Sections on Diagnostic risk assessment approaches and tools). Samples that show a significant difference between the observed (in vitro) bioassay response and the activity calculated according to the concept of Concentration Addition (see Section on Mixture Toxicity) – using the relative potencies and concentrations of the compounds known to be active in that bioassay – especially need further in-depth investigation. EDA can be implemented at these ‘hotspots’ of environmental contamination to unravel the identity of compounds that have an effect, but that were not included in the chemical monitoring of environmental quality. Knowledge of the main drivers of toxicity at a specific location supports the accurate decision making that is necessary for environmental protection.
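The comparison between the observed bioassay activity and the activity explained by the quantified chemicals can be sketched as follows, assuming Concentration Addition. The relative potencies (REP), concentrations and the 50% decision rule in this Python sketch are hypothetical and only serve to illustrate how candidate sites for EDA could be flagged.

```python
# Minimal sketch: compare the bioassay activity explained by the quantified
# chemicals (concentration addition, expressed as a bioanalytical equivalent)
# with the activity measured in the bioassay. All values are hypothetical.

def chemically_explained_beq(concentrations, relative_potencies):
    """Concentration addition: BEQ_chem = sum(REP_i * C_i)."""
    return sum(relative_potencies[c] * conc for c, conc in concentrations.items())

concentrations = {"compound_A": 2.0, "compound_B": 0.5}   # ng/L, hypothetical
rep = {"compound_A": 0.01, "compound_B": 0.2}             # potency relative to the reference

beq_chem = chemically_explained_beq(concentrations, rep)  # 0.12 ng EQ/L
beq_bio = 0.80                                            # measured in the bioassay, ng EQ/L

explained = beq_chem / beq_bio
print(f"{explained:.0%} of the measured activity is explained by known compounds")
if explained < 0.5:   # arbitrary decision rule, for illustration only
    print("large unexplained activity -> candidate site for EDA")
```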

 

6.4.3. Effect-based monitoring: In vivo bioassays


 

Authors: Michiel Kraak, Carlos Barata

Reviewers: Kees van Gestel, Jörg Römbke

 

Learning objectives:

You should be able to:

  • define in vivo bioassays and explain how they are performed.
  • give examples of the most commonly used in vivo bioassays per environmental compartment.
  • motivate the necessity to incorporate several in vivo bioassays into a bioassay battery.

 

Key words: risk assessment, diagnosis, effect based monitoring, in vivo bioassays, environmental compartment, bioassay battery

 

Introduction

To determine whether organisms are at risk when exposed to hazardous compounds present at contaminated field sites, the toxicity of environmental samples can be analysed. To this purpose, several diagnostic tools have been developed, including a wide variety of in vitro, in vivo and in situ bioassays (see sections on In vitro bioassays and on In situ bioassays). In vivo bioassays make use of whole organisms (in vivo means “in the living organism”). The species selected as test organisms for in vivo bioassays are generally the same as the ones selected for single species toxicity tests (see sections 4.3.4, 4.3.5, 4.3.6 and 4.3.7 on the Selection of ecotoxicity test organisms). Likewise, the endpoints measured in in vivo bioassays are the same as those in single species ecotoxicity tests (see section on Endpoints). In vivo bioassays therefore have a relatively high ecological relevance, as they provide information on the survival, reproduction, growth, or behaviour of the species tested.

A major difference between toxicity tests and bioassays is the selection of the controls. In laboratory toxicity experiments the controls consist of non-spiked ‘clean’ test medium (see section on Concentration response relationships). In bioassays the choice of the controls is more complicated. Non-treated test medium may be incorporated as a control to check the health and quality of the test organisms, but control media, like standard test water or artificial soil and sediment, may differ in numerous aspects from natural environmental samples. Therefore, the control should preferably be a test medium that has exactly the same physicochemical properties as the contaminated sample, except for the chemical pollutants being present. This ideal situation, however, hardly ever exists. Hence, it is recommended to also incorporate environmental samples from less or non-contaminated reference sites into the bioassay and to compare the responses of the organisms to samples from contaminated sites with those to samples from reference sites. Alternatively, controls can be selected as the least contaminated environmental samples from a gradient of pollution, or as the dilution of the contaminated sample required to obtain no effect; artificial control medium or medium from a reference site can be used as dilution medium.

 

The most commonly used in vivo bioassays

For the soil compartment, the earthworms Eisenia fetida, E. andrei and Lumbricus rubellus, the enchytraeid Enchytraeus crypticus and the collembolan Folsomia candida are most frequently selected as in vivo bioassay test organisms. An example of employing earthworms to assess the ecotoxicological effects of Pb-contaminated soils is given in Figure 1. The figure shows the total Pb concentrations in different field soils taken from a soccer field (S), a bullet plot (B), grassland (G1, G3) and forest (F0, F1, F3) sites near a shooting range. The pH of the grassland soils was near neutral (pH(CaCl2) = 6.5-6.8), but the pH was rather low (3.2-3.7) for all other field sites. Earthworms exposed to these soils showed a significantly reduced reproductive output at the most contaminated sites (Figure 1). At the less contaminated sites, earthworm responses were also affected by the difference in soil pH, leading to low juvenile numbers in the acid soil F0 but high numbers in the near-neutral reference soil R3 and the field soil G3. In fact, earthworm reproduction was highest in the latter soil, even though it contained an elevated concentration of 355 ± 54 mg Pb/kg dry soil. In soil G1, which contained almost twice as much Pb (656 ± 60 mg Pb/kg dry soil), reproduction was much lower and also reduced compared to the control, suggesting the presence of an additional, unknown stressor (Luo et al., 2014).

 

Figure 1. Reproduction of the earthworm Eisenia andrei after 4 weeks of exposure to control soils (LF2.2, R1, R2, R3) and field soils (S, B0, G1, G3, F0, F1, F3) from a Pb pollution gradient near a shooting range. Shown are the mean relative numbers of juveniles ± SD (n=4-5), compared to the control Lufa 2.2 (LF2.2) soil, as a function of average total Pb concentrations in the soils. Data from Luo et al. (2014).

 

For water, predominantly daphnids are employed, mainly Daphnia magna, but sometimes other daphnid species or other aquatic invertebrates are selected. Bioassays with several primary producers are also available. An example of exposing daphnids (Chydorus sphaericus) to water samples is shown in Figure 2. The bars show the toxicity of the water samples and the diamonds the concentrations of cholinesterase inhibitors, as a proxy for the presence of insecticides. The toxicity of the water samples was higher when the concentrations of insecticides were also higher. Hence, in this case, the observed toxicity is well explained by the measured compounds. Yet, it has to be realized that this is the exception rather than the rule, since in most cases a large portion of the toxic effects observed in surface waters cannot be attributed to the compounds measured by water authorities and, moreover, interactions between compounds are not covered by such analytical data (see section on Effect-based water quality assessment).

 

Figure 2. Toxicity of water samples to daphnids (Chydorus sphaericus)(bars) and the concentrations of cholinesterase inhibitors, as a proxy for the presence of insecticides (diamonds). Data from Pieters et al. (2008).

 

For sediments, oligochaetes and chironomids are selected as test organisms, but sometimes also rooting macrophytes and benthic diatoms. An example of exposing chironomids (Chironomus riparius) to contaminated sediments is shown in Figure 3. Whole sediment bioassays with chironomids allow the assessment of sensitive species-specific sublethal endpoints (see section on Chronic toxicity), in this case emergence. Figure 3 shows that more chironomids emerged on the reference sediment than on the contaminated sediment and that the chironomids on the reference sediment also emerged faster than on the contaminated sediment.

 

Figure 3. Emergence of chironomids (Chironomus riparius) on a reference (blue line) and a contaminated sediment (red line). Data from Nienke Wieringa.

 

For sediment, benthic diatoms are also selected as in vivo bioassay test organisms. Figure 4 shows the growth of the benthic diatom Nitzschia perminuta after 4 days of exposure to 160 sediment samples. The dotted line represents control growth. The growth of the diatoms ranged from higher than the controls to no growth at all, raising the question of which deviation from the control should be considered a significant adverse effect.

 

Figure 4. Growth of the benthic diatom Nitzschia perminuta after 4 days of exposure to 160 sediment samples. The dotted line represents control growth. Data from Harm van der Geest.

 

In vivo bioassay batteries

Environmental quality assessments are often performed with a single test species, like the four examples given above. Yet, toxicity is species and compound specific, which may result in large margins of uncertainty in environmental quality assessments and, consequently, in over- or underestimation of environmental risks. Obvious examples include herbicides, which would only induce responses in bioassays with primary producers, and, the other way around, insecticides, which induce strong effects on insects and to a lesser extent on other animals, but would be completely overlooked in bioassays with primary producers. To reduce these uncertainties and to increase ecological relevance it is therefore advised to incorporate more test species belonging to different taxa in a bioassay battery (see section on Effect-based water quality assessment).

 

References

Luo, W., Verweij, R.A., Van Gestel, C.A.M. (2014). Determining the bioavailability and toxicity of lead to earthworms in shooting range soils using a combination of physicochemical and biological assays. Environmental Pollution 185, 1-9.

Pieters, B.J., Bosman-Meijerman, D., Steenbergen, E., Van den Brandhof, E.-J., Van Beelen, P., Van der Grinten, E., Verweij, W., Kraak, M.H.S. (2008). Ecological quality assessment of Dutch surface waters using a new bioassay with the cladoceran Chydorus sphaericus. Proceedings Netherlands Entomological Society Meetings 19, 157-164.

6.4.4. Effect-based water quality assessment


 

Authors: Milo de Baat, Michiel Kraak

Reviewers: Ad Ragas, Ron van der Oost, Beate Escher

 

Learning objectives:

You should be able to

  • list the advantages and drawbacks of an effect-based monitoring approach in comparison to a compound-based approach for water quality assessment.
  • motivate the necessity of employing a bioassay battery in effect-based monitoring approaches.
  • explain the expression of bioassay responses in terms of toxic/bioanalytical equivalents of reference compounds.
  • translate the outcome of a bioassay battery into a ranking of contaminated sites based on ecotoxicological risk.

 

Keywords: Effect-based monitoring, water quality assessment, bioassay battery, effect-based trigger values, ecotoxicological risk assessment

 

Introduction

Traditional chemical water quality assessment is based on the analysis of a list of a varying, but limited, number of priority substances. Nowadays, the use of many of these compounds is restricted or banned, and concentrations of priority substances in surface waters are therefore decreasing. At the same time, industries have switched to a plethora of alternative compounds, which may enter the aquatic environment and seriously impact water quality. Hence, priority substance lists are outdated: the selected compounds are frequently absent, while many compounds with higher relevance are not listed as priority substances. Consequently, a large portion of the toxic effects observed in surface waters cannot be attributed to compounds measured by water authorities, and toxic risks to freshwater ecosystems are thus caused by mixtures of a myriad of (un)known, unregulated compounds. Understanding these risks requires a paradigm shift towards new monitoring methods that do not depend solely on chemical analysis of priority substances, but consider the biological effects of the entire micropollutant mixture first. Therefore, there is a need for effect-based monitoring strategies that employ bioassays to identify environmental risk. Responses in bioassays are caused by all bioavailable (un)known compounds and their metabolites, whether or not they are listed as priority substances.

 

Table 1. Example of the bioassay battery employed by the SIMONI approach of Van der Oost et al. (2017) that can be applied to assess surface water toxicity. Effect-based trigger values (EBT) were previously defined by Escher et al. (2018) (PAH, anti-AR and ER CALUX) and Van der Oost et al. (2017).

 

Bioassay | Endpoint | Reference compound | EBT | Unit

in situ
Daphnia in situ | Mortality | n/a | 20 | % mortality

in vivo
Daphniatox | Mortality | n/a | 0.05 | TU
Algatox | Algal growth inhibition | n/a | 0.05 | TU
Microtox | Luminescence inhibition | n/a | 0.05 | TU

in vitro CALUX
cytotox | Cytotoxicity | n/a | 0.05 | TU
DR | Dioxin(-like) activity | 2,3,7,8-TCDD | 50 | pg TEQ/L
PAH | PAH activity | benzo(a)pyrene | 6.21 | ng BapEQ/L
PPARγ | Lipid metabolism inhibition | rosiglitazone | 10 | ng RosEQ/L
Nrf2 | Oxidative stress | curcumin | 10 | µg CurEQ/L
PXR | Toxic compound metabolism | nicardipine | 3 | µg NicEQ/L
p53 -S9 | Genotoxicity | n/a | 0.005 | TU
p53 +S9 | Genotoxicity (after metabolism) | n/a | 0.005 | TU
ER | Estrogenic activity | 17β-estradiol | 0.1 | ng EEQ/L
anti-AR | Antiandrogenic activity | flutamide | 14.4 | µg FluEQ/L
GR | Glucocorticoid activity | dexamethasone | 100 | ng DexEQ/L

in vitro antibiotics
T | Bacterial growth inhibition (Tetracyclines) | oxytetracycline | 250 | ng OxyEQ/L
Q | Bacterial growth inhibition (Quinolones) | flumequine | 100 | ng FlqEQ/L
B+M | Bacterial growth inhibition (β-lactams and Macrolides) | penicillin G | 50 | ng PenEQ/L
S | Bacterial growth inhibition (Sulfonamides) | sulfamethoxazole | 100 | ng SulEQ/L
A | Bacterial growth inhibition (Aminoglycosides) | neomycin | 500 | ng NeoEQ/L


Bioassay battery

The regular application of effect-based monitoring largely relies on the ease of use, endpoint specificity, costs and scale of the bioassays used, as well as on the ability to interpret the measured responses. To ensure sensitivity to a wide range of potential stressors, while still providing specific endpoint sensitivity, a successful bioassay battery like the example given in Table 1 can include in situ whole-organism assays (see section on Biomonitoring and in situ bioassays), and should include laboratory-based whole-organism in vivo assays (see section on In vivo bioassays) and mechanism-specific in vitro assays (see section on In vitro bioassays). Adverse effects in the whole-organism bioassays point to general toxic pressure and have a high ecological relevance. In vitro or small-scale in vivo assays that respond to specific drivers of adverse effects allow for focused identification and subsequent confirmation of (groups of) toxic compounds with specific modes of action. Bioassay selection can also be based on the Adverse Outcome Pathway (AOP) concept (see section on Adverse Outcome Pathways), which describes relationships between molecular initiating events and adverse outcomes. Combining different types of bioassays, ranging from whole-organism tests to in vitro assays targeting specific modes of action, can thus greatly aid in narrowing down the number of candidate compounds that cause environmental risks. For example, if bioanalytical responses at a higher organisational level are observed (the orange and black pathways in Figure 1), responses in specific molecular pathways (blue, green, grey and red in Figure 1) can help to identify certain (groups of) compounds responsible for the observed effects.

 

Figure 1. From toxicokinetics via molecular responses to population responses. Redrawn from Escher et al. (2018) by Wilma IJzerman.

 

Toxic and bioanalytical equivalent concentrations

The severity of the adverse effect of an environmental sample in a bioassay is expressed as toxic equivalent (TEQ) concentrations for toxicity in in vivo assays or as bioanalytical equivalent (BEQ) concentrations for responses in in vitro bioassays. The toxic equivalent concentrations and bioanalytical equivalent concentrations represent the joint toxic potency of all unknown chemicals present in the sample that have the same mode of action (see section on Toxicodynamics and molecular interactions) as the reference compound and act concentration-additively (see section on Mixture toxicity). The toxic equivalent concentrations and bioanalytical equivalent concentrations are expressed as the concentration of a reference compound that causes an effect equal to the entire mixture of compounds present in an environmental sample. Figure 2 depicts a typical dose-response curve for a molecular in vitro assay that is indicative of the presence of compounds with a specific mode of action targeted by this in vitro assay. A specific water sample induced an effect of 38% in this assay, equivalent to the effect of approximately 0.02 nM bioanalytical equivalents.
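The conversion of a measured sample effect into a bioanalytical equivalent concentration can be sketched as follows: the effect of the sample is interpolated onto the dose-response curve of the reference compound and read off as a concentration. The log-logistic parameters (EC50, slope, maximum effect) in this Python sketch are hypothetical; with these illustrative values a 38% effect corresponds to roughly 0.02 nM, comparable to the reading from Figure 2.

```python
# Minimal sketch: derive a bioanalytical equivalent (BEQ) concentration by
# interpolating the sample effect onto the dose-response curve of the
# reference compound. All curve parameters are hypothetical.

def logistic_effect(conc, ec50, slope, e_max=100.0):
    """Effect (%) of the reference compound at a given concentration."""
    return e_max / (1.0 + (ec50 / conc) ** slope)

def beq_from_effect(effect, ec50, slope, e_max=100.0):
    """Invert the dose-response curve: concentration causing 'effect' %."""
    return ec50 / ((e_max / effect - 1.0) ** (1.0 / slope))

ec50, slope = 0.03, 1.2          # nM, hypothetical reference-compound curve
sample_effect = 38.0             # % effect measured for the water sample
print(f"BEQ = {beq_from_effect(sample_effect, ec50, slope):.3f} nM")  # ~0.020 nM
```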

 

Effect-based trigger values

The identification of ecological risks from bioassay battery responses follows from the comparison of bioanalytical signals to previously determined thresholds, defined as effect-based trigger values (EBT), that should differentiate between acceptable and poor water quality. Since bioassays potentially respond to the mixture of all compounds present in a sample, effect-based trigger values are expressed as toxic or bioanalytical equivalents of concentrations of model compounds for the respective bioassay (Table 1).  

 

Figure 2. Dose response relationship for a reference compound in an in vitro bioassay. The blue lines show that a specific water sample induced an effect of 38%, representing approximately 0.02 nM bioanalytical equivalents.

 

Ranking of contaminated sites based on effect-based risk assessment

Once the toxic potency of a sample in a bioassay is expressed as toxic equivalent concentrations or bioanalytical equivalent concentrations, this response can be compared to the effect-based trigger value for that assay, thus determining whether or not there is a potential ecological risk from contaminants in the investigated water sample. The ecotoxicity profiles of the surface water samples generated by a bioassay battery allow for calculation and ranking of a cumulative ecological risk for the selected locations. In the example given in Figure 3, water samples of six locations were subjected to the SIMONI bioassay battery of Van der Oost et al. (2017), consisting of 17 in situ, in vivo and in vitro bioassays. Per site and per bioassay the response is compared to the corresponding effect-based trigger value and classified as ‘no response’ (green), ‘response below the effect-based trigger value’ (yellow) or ‘response above the effect-based trigger value’ (orange). Next, the cumulative ecological risk per location is calculated.

The resulting integrated ecological risk score allows ranking of the selected sites based on the presence of ecotoxicological risks rather than on the presence of a limited number of target compounds. This in turn permits water authorities to invest money where it matters most: identification of compounds causing adverse effects at locations with indicated ecotoxicological risks. Initially, the compounds causing the observed exceedance of the effect-based trigger values will not be known; these can subsequently be elucidated with targeted or non-target chemical analysis, which will only be necessary at locations with indicated ecological risks. A potential follow-up step could be to investigate the drivers of the observed effects by means of effect-directed analysis (see section on Effect-directed analysis).
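A simplified sketch of this classification and ranking step is given below. The effect-based trigger values for the ER and anti-AR CALUX assays are taken from Table 1, but the site responses and the exceedance-based cumulative score are hypothetical and do not reproduce the actual SIMONI risk indication of Van der Oost et al. (2017).

```python
# Minimal sketch: classify bioassay responses against effect-based trigger
# values (EBT) and rank sites by an illustrative cumulative exceedance score.

EBT = {"ER CALUX": 0.1,        # ng EEQ/L (Table 1)
       "anti-AR CALUX": 14.4}  # µg FluEQ/L (Table 1)

sites = {"site 1": {"ER CALUX": 0.03, "anti-AR CALUX": 2.1},
         "site 2": {"ER CALUX": 0.25, "anti-AR CALUX": 20.0},
         "site 3": {"ER CALUX": 0.0,  "anti-AR CALUX": 5.6}}   # hypothetical BEQs

def classify(response, ebt):
    if response == 0:
        return "no response"
    return "above EBT" if response > ebt else "below EBT"

def risk_score(responses):
    """Illustrative cumulative score: sum of response/EBT ratios above 1."""
    return sum(max(r / EBT[a] - 1.0, 0.0) for a, r in responses.items())

for site, responses in sorted(sites.items(), key=lambda s: -risk_score(s[1])):
    labels = {a: classify(r, EBT[a]) for a, r in responses.items()}
    print(site, round(risk_score(responses), 2), labels)
```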

 

Figure 3. Heat map showing the response of 17 in situ, in vivo and in vitro bioassays to six surface water samples. The integrated risk score (SIMONI Risk Indication; Van der Oost et al., 2017) is classified as ‘low risk’ (green), ‘potential risk’ (orange) or ‘risk’ (red).

 

References

Escher, B. I., Aїt-Aїssa, S., Behnisch, P. A., Brack, W., Brion, F., Brouwer, A., et al. (2018). Effect-based trigger values for in vitro and in vivo bioassays performed on surface water extracts supporting the environmental quality standards (EQS) of the European Water Framework Directive. Science of the Total Environment 628-629, 748-765.

Van der Oost, R., Sileno, G., Suarez-Munoz, M., Nguyen, M.T., Besselink, H., Brouwer, A. (2017). SIMONI (Smart Integrated Monitoring) as a novel bioanalytical strategy for water quality assessment: part I – Model design and effect-based trigger values. Environmental Toxicology and Chemistry 36, 2385-2399.

 

Additional reading

Altenburger, R., Ait-Aissa, S., Antczak, P., Backhaus, T., Barceló, D., Seiler, T.-B., et al. (2015). Future water quality monitoring — Adapting tools to deal with mixtures of pollutants in water resource management. Science of the Total Environment 512-513, 540–551.

Escher, B.I., Leusch, F.D.L. (2012). Bioanalytical Tools in Water Quality Assessment. IWA publishing, London (UK).

Hamers, T., Legradi, J., Zwart, N., Smedes, F., De Weert, J., Van den Brandhof, E-J., Van de Meent, D., De Zwart, D. (2018). Time-Integrative Passive sampling combined with TOxicity Profiling (TIPTOP): an effect-based strategy for cost-effective chemical water quality assessment. Environmental Toxicology and Pharmacology 64, 48-59.

6.4.5. Biomonitoring: in situ bioassays and contaminant concentrations in organisms

Author: Michiel Kraak

Reviewers: Ad Ragas, Suzanne Stuijfzand, Lieven Bervoets

 

Learning objectives:

You should be able to

  • name tools specifically designed for ecological risk assessment in the field.
  • define biomonitoring and describe biomonitoring procedures.
  • list the characteristics of suitable biomonitoring organisms.
  • list the most commonly used biomonitoring organisms per environmental compartment.
  • argue the advantages and disadvantages of in situ bioassays.
  • argue the advantages and disadvantages of measuring contaminant concentrations in organisms.

 

Key words: Biomonitoring, test organisms, in situ bioassays, contaminant concentrations in organisms, environmental quality

 

 

Introduction

Several approaches and tools are available for diagnostic risk assessment. Tools specially developed for field assessments include the TRIAD approach (see section on TRIAD approach), in situ bioassays and biomonitoring. In ecotoxicology, biomonitoring is defined as the use of living organisms for the in situ assessment of environmental quality. Passive biomonitoring and active biomonitoring are distinguished. For passive biomonitoring, organisms are collected at the site of interest and their condition is assessed or the concentrations of specific target compounds in their tissues are analysed, or both. By comparing individuals from reference and contaminated sites an indication of the impact on local biota at the site of interest is obtained. For active biomonitoring, organisms are collected from reference sites and exposed in cages or artificial substrates at the study sites. Ideally, reference organisms are simultaneously exposed at the site of origin to control for potential effects of the experimental set-up on the test organisms. As an alternative to field collected animals, laboratory cultured organisms may be employed. After exposure at the study sites for a certain period of time, the organisms are recollected and either their condition is analysed (in situ bioassay) or the concentrations of specific target compounds are measured in the organisms, or both.

The results of biomonitoring studies may be used for management decisions, e.g. when accumulation of contaminants has been demonstrated in the field and especially when the sources of the pollution have been identified. However, the use of biomonitoring studies in environmental management has not been captured in formal protocols or guidelines like those of the Water Framework Directive (WFD) or – to a lesser extent – the TRIAD approach and effect-based quality assessments. Biomonitoring studies are typically applied on a case-by-case basis and their application therefore strongly depends on the expertise and resources available for the assessment. The text below explains and discusses the most important aspects of biomonitoring techniques used in diagnostic risk assessment.

 

Selection of biomonitoring test organisms

The selection of adequate organisms for biomonitoring partly follows the selection of test organisms for toxicity tests (see section on the Selection of test organisms). Suitable biomonitoring organisms:

  • Are sedentary, since sedentary organisms may adapt more easily to the in situ experimental setup than more mobile organisms, for which caging may be an additional stress factor. Moreover, for sedentary organisms  the relationship between the accumulated compounds and the environmental quality at the exposure site is straightforward, although this is more relevant to passive than to active biomonitoring.
  • Are representative of the community of interest and native to the study sites, since this ensures that the biomonitoring organisms tolerate the local conditions other than contamination, preventing stressors other than contamination from affecting their performance. Obviously, it is also undesirable to introduce exotic species into new environments.
  • Are long living, at least substantially longer than the exposure duration and preferably large enough to obtain sufficient material for chemical analysis.
  • Are easy to handle.
  • Respond to a gradient of environmental quality, if the purpose of the biomonitoring study is to analyse the condition of the organisms after recollection (in situ bioassay).
  • Accumulate contaminants without being killed, if the purpose of the biomonitoring study is to measure contaminant concentrations in the organisms after recollection.
  • Are large enough to obtain sufficient biomass for the analysis of the target compounds above the limits of detection, if the purpose of the biomonitoring study is to measure contaminant concentrations in the organisms after recollection.

 

Based on the above listed criteria, in the marine environment mussels belonging to the genus Mytilus are predominantly selected. The genus Mytilus has the additional advantage of a global distribution, although represented by different species. This facilitates the comparison of contaminant concentrations in the organisms all around the globe. Lugworms have occasionally also been used for biomonitoring in marine systems. For freshwater, the cladoceran Daphnia magna is most frequently employed, although occasionally other species are selected, including mayflies, snails, worms, amphipods, isopods, caddisflies and fish. Given the positive experience with marine mussels, freshwater bivalves are also employed as biomonitoring organisms. Sometimes primary producers have been used, mainly periphyton. Due to the complexity of the sediment and soil compartments, few attempts have been made to expose organisms in situ, mainly restricted to chironomids on sediment.

 

In situ exposure devices

An obvious requirement of the in situ exposure devices is that the test organisms do not suffer from (sub)lethal effects of the experimental setup. If the organisms are large enough, cages may be used, as for freshwater and marine mussels. For daphnids, a simple glass jar with a permeable lid suffices. For riverine insects, the device should allow the natural flow of the stream to pass, while preventing the organisms from escaping. In the device shown in Figure 1a, tubes containing caddisfly larvae are connected to floating tubes, keeping the larvae at a constant depth of 65 cm. In the tubes, the caddisfly larvae are able to settle and build nets on artificial substrate, a plastic doormat with bristles standing out.

An elegant device for in situ colonization by periphyton was developed by Blanck (1985) (Figure 1b). Sand-blasted glass discs (1.5 cm² surface area) are used as artificial substratum for algal attachment. The substrata are placed vertically in the water, parallel to the current, by means of polyethylene racks, each rack supporting a total of 170 discs. After the colonization period, the periphyton-covered glass discs can be harvested, offering the unique possibility to perform laboratory or field experiments with entire algal and microbial communities, replicated 170 times.

 

Figure 1. Left: Experimental set-up for in situ exposure of caddisfly larvae according to Stuijfzand et al. (1999, derived from Vuori (1995)). Right: Experimental set-up for in situ colonization of periphyton according to Ivorra et al. (1999), derived from Blanck (1985). Drawn by Wilma IJzerman.

 

In situ bioassays

After exposure at the study sites for a certain period of time, the organisms are recollected and their condition can be analysed (Figure 2). The endpoint is mostly survival, especially in routine monitoring programs. If the in situ exposure lasts long enough, effects on species-specific sublethal endpoints can also be assessed. For daphnids and snails this is reproduction, and for isopods growth. For aquatic insects (mayflies, caddisflies, damselflies, chironomids), emergence has been assessed as a sensitive, ecologically relevant endpoint (Barmentlo et al., 2018).

 

Figure 2. In situ exposure experiment. (A). Preparing damselfly containing jars. (B) Exposure of the in situ jars in ditches. (C). Retrieved jar containing a single damselfly larva. (D). Close up of the damselfly larva ready for inspection. Photos by Henrik Barmentlo.

 

In situ bioassays come closest to the actual field situation. Organisms are directly exposed at the site of interest and respond to all joint stressors present. Yet, this is also the limitation of the approach: if organisms do respond, it remains unknown what causes the observed adverse effects. This could be (a combination of) any natural or anthropogenic physical or chemical stress factor. In situ bioassays can therefore best be combined with laboratory bioassays (see section on Bioassays) and the analysis of physico-chemical parameters, in line with the TRIAD approach (see section on TRIAD approach). If the adverse effects are also observed in the bioassays under controlled laboratory conditions, then poor water quality is most likely the cause. The water sample may then be subjected to target analysis, suspect or non-target screening, or effect-directed analysis (EDA). If adverse effects are observed in situ but not in the laboratory, then the presence of hazardous compounds is most likely not the cause. Instead, the effects may be attributable to e.g. low pH, low oxygen concentrations or high temperatures, which may be verified by physico-chemical analysis in the field.

 

Online biomonitoring

A specific application of in situ bioassays are the online systems for continuous water quality monitoring. In these systems, behaviour is generally the endpoint (see section on Endpoints). Organisms are exposed in a laboratory setting in situ (on shore or on a boat) in an experimental device to a continuous flow of surface water. If the water quality changes, the organisms respond by changing their behaviour. Above a certain threshold an alarm may go off and, for instance, the intake of surface water for drinking water preparation can be temporarily stopped.

 

Contaminant concentrations in organisms

As an addition or as an alternative to analysing the condition of the exposed biomonitoring organisms upon retrieval, contaminant concentrations in the organisms can be analysed. This has several advantages over chemical analysis of environmental samples. Biomonitoring organisms may be exposed for days to weeks at the site of interest, providing time-integrated measurements of contaminant concentrations, in contrast to the chemical analysis of grab samples. This way, biomonitoring organisms actually serve as ‘biological passive samplers’ (see section on Experimental methods of assessing available concentrations of organic chemicals). Another advantage of measuring contaminant concentrations in organisms is that organisms only take up the bioavailable (fraction of) substances, which is ecologically highly relevant information that remains unknown if chemical analysis is performed on water, sediment, soil or air samples. Yet, elevated concentrations in organisms do not necessarily imply toxic effects, and therefore these measurements are best complemented with determining the condition of the organisms, as described above. Moreover, analysing contaminants in organisms may be more expensive than measurements in environmental samples, due to a more complex sample preparation. Weighing the advantages and disadvantages, the explicit strength of biomonitoring programs is that they provide insight into the spatial and temporal variation in bioavailable contaminant concentrations. In Figure 3 two examples are given. The left panel shows the concentrations of PCBs in zebra mussels at different sampling sites in Flanders, Belgium (Bervoets et al., 2004). The right panel shows the rapid (within 2 weeks) Cd accumulation and depuration in biofilms translocated from a reference to a polluted site and from a polluted to a reference site, respectively (Ivorra et al., 1999).
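Time courses such as the Cd accumulation and depuration in the translocated biofilms of Figure 3 are often described with a one-compartment, first-order uptake and elimination model. The Python sketch below is such a minimal description; the rate constants and water concentrations are hypothetical and only illustrate the shape of accumulation and depuration curves, not the actual biofilm data.

```python
# Minimal sketch of first-order uptake and elimination (one compartment).
# All rate constants and concentrations are hypothetical.

import math

def internal_conc(t_days, c_water, c0=0.0, k_uptake=50.0, k_elim=0.3):
    """Internal concentration after t days at a constant water concentration.
    k_uptake in L/kg/d, k_elim in 1/d, concentrations in consistent units."""
    c_ss = k_uptake / k_elim * c_water                  # steady-state level
    return c_ss + (c0 - c_ss) * math.exp(-k_elim * t_days)

# Accumulation: biofilm moved from a clean site (c0 = 0) to a polluted site
print([round(internal_conc(t, c_water=1.0), 1) for t in (0, 7, 14)])
# Depuration: biofilm moved from the polluted site back to a clean site
c_start = internal_conc(14, c_water=1.0)
print([round(internal_conc(t, c_water=0.0, c0=c_start), 1) for t in (0, 7, 14)])
```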

 

Figure 3. Left panel: Mean concentration of PCBs in 25 pooled zebra mussels at different sampling sites in Flanders, Belgium. Comparison between indigenous (black bars) and transplanted mussels (grey bars), from Bervoets et al. (2004). Right panel: Cd concentrations in local and translocated biofilms (R: Reference site; P: polluted site) from Ivorra et al. (1999). Drawn by Wilma IJzerman.

 

References

Barmentlo, S.H., Parmentier, E.M., De Snoo, G.R., Vijver, M.G. (2018). Thiacloprid-induced toxicity influenced by nutrients: evidence from in situ bioassays in experimental ditches. Environmental Toxicology and Chemistry 37, 1907-1915.

Bervoets, L., Voets, J., Chu, S.G., Covaci, A., Schepens, P., Blust, R. (2004). Comparison of accumulation of micropollutants between indigenous and transplanted zebra mussels (Dreissena polymorpha). Environmental Toxicology and Chemistry 23, 1973-1983.

Blanck, H. (1985). A simple, community level, ecotoxicological test system using samples of periphyton. Hydrobiologia 124, 251-261.

Ivorra, N., Hettelaar, J., Tubbing, G.M.J., Kraak, M.H.S., Sabater, S., Admiraal, W. (1999). Translocation of microbenthic algal assemblages used for in situ analysis of metal pollution in rivers. Archives of Environmental Contamination and Toxicology 37, 19-28.

Stuijfzand, S.C., Engels, S., Van Ammelrooy, E., Jonker, M. (1999). Caddisflies (Trichoptera: Hydropsychidae) used for evaluating water quality of large European rivers. Archives of Environmental Contamination and Toxicology 36, 186-192.

Vuori, K.M. (1995). Species- and population-specific responses of translocated hydropsychid larvae (Trichoptera, Hydropsychidae) to runoff from acid sulphate soils in the River Kyronjoki, western Finland. Freshwater Biology 33, 305-318.

6.4.6. TRIAD approach for site-specific ecological risk assessment

Author: Michiel Rutgers

Reviewers: Kees van Gestel, Michiel Kraak, Ad Ragas

 

Learning goals:

You should be able

  • to describe the principles of the TRIAD approach
  • to explain the importance of weight of evidence in risk assessment
  • to use the results for an assessment by applying the TRIAD approach

 

Keywords: Triad, site-specific ecological risk assessment, weight of evidence

 

 

Like the other diagnostic tools described in the previous sections (see sections on In vivo bioassays, In vitro bioassays, Effect-directed analysis, Effect-based water quality assessment and Biomonitoring), the TRIAD approach is a tool for site-specific ecological risk assessment of contaminated sites (Jensen et al., 2006; Rutgers and Jensen, 2011). Yet, it differs from the previous approaches by combining and integrating different techniques through a ‘weight of evidence’ approach. To this purpose, the TRIAD combines information on contaminant concentrations (environmental chemistry), the toxicity of the mixture of chemicals present at the site ((eco)toxicology), and observations of ecological effects (ecology) (Figure 1).

The mere presence of contaminants is only an indication that ecological effects may occur. Additional data can help to better assess the ecological risks. For instance, information on the actual toxicity of the contaminated site can be obtained from the exposure of test organisms to (extracts of) environmental samples (bioassays), while information on ecological effects can be obtained from an inventory of the community composition at the specific site. When these disciplines converge to corresponding levels of ecological effects, a weight of evidence is established, making it possible to finalize the assessment and to support a decision on contaminated site management.

 

Figure 1: The TRIAD approach integrating information on contaminant concentrations (environmental chemistry), bioassays ((eco)toxicology) and ecological field inventories (ecology) into a weight of evidence for site-specific ecological risk assessment (adapted from Chapman, 1988).

 

The TRIAD approach thus combines the information obtained from three lines of evidence (LoE):

  1. LoE Chemistry: risk information obtained from the measured contaminant concentrations and information on their fate in the ecosystem and how they can evoke ecotoxicological effects. This can include exposure modelling and bioavailability considerations.
  2. LoE Toxicity: risk information obtained from (eco)toxicity experiments exposing test organisms to (extracted) samples of the site. These bioassays can be performed on site or in the laboratory, under controlled conditions.
  3. LoE Ecology: risk information obtained from the observation of actual effects in the field. This is deduced from data of ecological field surveys, most often at the community level. This information may include data on the composition of soil communities or other community metrics and on ecosystem functioning.

 

The three lines of evidence form a weight of evidence when they converge, meaning that when the independent lines of evidence indicate a comparable risk level, there is sufficient evidence for providing advice to decision makers about the ecological risk at a contaminated site. When there is no convergence in the risk information obtained from the three lines of evidence, uncertainty is large. Further investigations are then required to provide unambiguous advice.

 

Table 1. Basic data for site-specific environmental risk assessment (SS-ERA) sorted per line of evidence (LoE). Data and methods are described in Van der Waarde et al. (2001) and Rutgers et al. (2001).

Tests and abbreviations used in the table:

  • Toxic Pressure metals (sum TP metals). The toxic pressure of the mixture of metals in the sample, calculated as the potentially affected fraction in a Species Sensitivity Distribution with NOEC values (see Section on SSDs) and a simple rule for mixture toxicity (response addition; Section on Mixture toxicity).
  • Microtox. A bioassay with the luminescent bacterium Aliivibrio fischeri, formerly known as Vibrio fischeri. Luminescence is reduced when toxicity is high.
  • Lettuce Growth and Lettuce Germination. A bioassay with the growth performance and the germination percentage of lettuce (seeds).
  • Bait Lamina. The bait-lamina test consists of vertically inserting 16-hole-bearing plastic strips filled with a plant material preparation into the soil. This gives an indication of the feeding activity of soil animals.
  • Nematodes abundance and Nematodes Maturity Index 2-5. The biomass and the Maturity Index (MI) of the nematode community in soil samples provide information about soil health (Van der Waarde et al. 2001).

 

The results of a site-specific ecological risk assessment (SS-ERA) applying the TRIAD approach are first organized in basic tables for each sample and line of evidence separately. Table 1 shows an example. This table also collects supporting data, such as soil pH and organic matter content. Subsequently, these basic data are processed into ecological risk values by applying a risk scale running from zero (no effects) to one (maximum effect). An example of a metric used is the multi-substance Potentially Affected Fraction of species (msPAF) for the mixture of contaminants (see Section on SSDs). These risk values are then collected in a TRIAD table (Table 2), for each endpoint separately, integrated per line of evidence individually, and finally integrated over the three lines of evidence. The level of agreement between the three lines of evidence is also given a score. Weighting values are applied, e.g. equal weights for all ecological endpoints (depending on the number of methods and endpoints), and equal weights for each line of evidence (33%). When differential weights are preferred, for instance when some data are judged to be unreliable, or some endpoints are considered more important than others, the respective weight factors and the arguments for applying them must be provided in the same table and accompanying text.

 

Table 2. Soil Quality TRIAD table demonstrating scaled risk values for two contaminated sites (A, B) and a Reference site (based on real data, only for illustration purposes). Risk values are collected per endpoint, grouped according to respective Lines of Evidence (LoE), and finally integrated into a TRIAD value for risks. The deviation indicates a level of agreement between LoE (default threshold 0.4). For site B, a Weight of Evidence (WoE) is demonstrated (D<0.4) making decision support feasible. By default equal weights can be used throughout. Differential weights should be indicated in the table and described in the accompanying text.
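A simplified sketch of this integration step is given below: scaled risk values are averaged per line of evidence with equal weights, integrated over the three lines of evidence, and the spread between the lines of evidence is compared with the 0.4 agreement threshold mentioned above. The numbers in the sketch are invented, and the formal scaling and weighting procedure of ISO 19204 and Rutgers and Jensen (2011) is more elaborate than this illustration.

```python
# Minimal sketch of TRIAD integration with equal weights (illustrative data).
# Scaled risk values: 0 = no effect, 1 = maximum effect.

site_b = {  # scaled risk per endpoint, grouped per line of evidence
    "Chemistry": {"sum TP metals": 0.35},
    "Toxicity": {"Microtox": 0.30, "Lettuce growth": 0.45},
    "Ecology": {"Bait lamina": 0.50, "Nematode MI": 0.40},
}

def integrate(triad, agreement_threshold=0.4):
    # average per line of evidence (equal endpoint weights)
    loe_risk = {loe: sum(v.values()) / len(v) for loe, v in triad.items()}
    # integrate over the three lines of evidence (equal LoE weights)
    integrated = sum(loe_risk.values()) / len(loe_risk)
    # spread between lines of evidence as a simple agreement measure
    deviation = max(loe_risk.values()) - min(loe_risk.values())
    weight_of_evidence = deviation < agreement_threshold
    return loe_risk, round(integrated, 2), round(deviation, 2), weight_of_evidence

print(integrate(site_b))
```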

 

 

References

ISO (2017). ISO 19204: Soil quality -- Procedure for site-specific ecological risk assessment of soil contamination (soil quality TRIAD approach). International Standardization Organization, Geneva. https://www.iso.org/standard/63989.html.

Jensen, J., Mesman, M. (Eds.) (2006). LIBERATION, Ecological risk assessment of contaminated land, decision support for site specific investigations. ISBN 90-6960-138-9, Report 711701047, RIVM, Bilthoven, The Netherlands.

Rutgers, M., Bogte, J.J., Dirven-Van Breemen, E.M., Schouten, A.J. (2001) Locatiespecifieke ecologische risicobeoordeling – praktijkonderzoek met een Triade-benadering. RIVM-rapport 711701026, Bilthoven.

Rutgers, M., Jensen, J. (2011). Site-specific ecological risk assessment. Chapter 15, in: F.A. Swartjes (Ed.), Dealing with Contaminated Sites – from Theory towards Practical Application, Springer, Dordrecht. pp. 693-720.

Van der Waarde, J.J., Derksen, J.G.M, Peekel, A.F., Keidel, H., Bloem, J., Siepel, H. (2001) Risicobeoordeling van bodemverontreiniging met behulp van een triade benadering met chemische analyses, bioassays en biologische veldinventarisaties. Eindrapportage NOBIS 98-1-28, Gouda.

6.4.7. Eco-epidemiology

Authors: Leo Posthuma, Dick de Zwart

Reviewers: Allan Burton, Ad Ragas

 

Learning objectives:

You should be able to:

  • explain that and how effects of chemicals and their mixtures can be demonstrated in monitoring data sets;
  • explain that effects can be characterized with various impact metrics;
  • formulate whether and how the choice of impact sensitivity metric is relevant for the sensitivity and outcomes of a diagnostic assessment;
  • explain how ecological and ecotoxicological analysis methods relate;
  • explain how eco-epidemiological analyses are helpful in validating ecotoxicological models utilized in ecotoxicological risk assessment and management.

 

Keywords: eco-epidemiology, mixture pollution, diagnosis, impact magnitude, probable causes, validation

 

 

Introduction

Approaches for environmental protection, assessment and management differ between ‘classical’ stressors (such as excess nutrients and pH) and chemical pollution. For the ‘classical’ environmental stress factors, ecologists use monitoring data to develop concepts and methods to prevent and reduce impacts. Although there are some clear-cut examples of chemical pollution impacts [e.g., the decline in vulture populations in South East Asia due to diclofenac (Oaks et al. 2004), and the suite of examples in the book ‘Silent Spring’ (Carson 1962)], ecotoxicologists have commonly assessed the stress from chemical pollution by evaluating exposures vis-à-vis laboratory toxicity data. Current pollution often consists of complex mixtures of chemicals, with highly variable patterns in space and time. This poses problems when one wants to evaluate whether observed impacts in ecosystems can be attributed to chemicals or their mixtures. Eco-epidemiological methods have been established to discern such pollution stress. These methods provide the diagnostic tools to identify the impact magnitude and the key chemicals that cause impacts in ecosystems. The use of these methods is further relevant for validating the laboratory-based risk assessment approaches developed in ecotoxicology.

 

The origins of eco-epidemiology

Risk assessments of chemicals provide insights into expected exposures and impacts, commonly for separate chemicals. These are predictive outcomes with a high relevance for decision making on environmental protection and management. The validation of those risk assessments is key to avoiding wrong protection and management decisions, but it is complex. It consists of comparing predicted risk levels to observed effects, which raises the question of how to discern effects of chemical pollution in the field. This question can be answered based on the principles of ecological bio-assessments combined with those of human epidemiology. A bio-assessment is a study of stressors and ecosystem attributes, made to delineate causes of impacts via (often statistical) associations between biotic responses and particular stressors. Epidemiology is defined as the study of the distribution and causation of health and disease conditions in specified populations. Applied epidemiology serves as a scientific basis to help counteract the spread of human health problems. Dr. John Snow is often referred to as the ‘father of epidemiology’. Based on observations on the incidence, locations and timing of the 1854 cholera outbreak in London, he attributed the disease to contaminated water taken from the Broad Street pump well, counteracting the prevailing idea that the disease was transmitted via air. His proposals to control the disease were effective. Likewise, eco-epidemiology – in its ecotoxicological context – has been defined as the study of the distribution and causation of impacts of multiple stressor exposures in ecosystems. In its applied form, it supports the reduction of ecological impacts of chemical pollution. Human-health eco-epidemiology is concerned with environment-mediated disease.

The first literature mention of eco-epidemiological analyses on chemical pollution stems from 1984 (Bro-Rasmussen and Løkke 1984). Those authors described eco-epidemiology as a discipline necessary to validate the risk assessment models and approaches of ecotoxicology. In its initial years, progress in eco-epidemiological research was slow due to practical constraints such as a lack of monitoring data, computational capacity and epidemiological techniques.

 

Current eco-epidemiology

Current eco-epidemiological studies in ecotoxicology aim to diagnose the impacts of chemical pollution in ecosystems, and utilize a combination of approaches in order to diagnose the role of chemical mixtures in causing ecological impacts in the field. The combination of approaches consists of:

1. Collection of monitoring data on abiotic characteristics and the occurrence and/or abundance of biotic species, for the environmental compartment under study;

2. If needed: data optimization, usually to align abiotic and biotic monitoring data, including the chemicals;

3. Statistical analysis of the data set using eco-epidemiological techniques to delineate impacts and probable causes, according to the approaches followed in ‘classical’ ecological bio-assessments;

4. Interpretation and use of the outcomes for either validation of ecotoxicological models and approaches, or for control of the impacts sensu Dr. Snow.

 

Key examples of chemical effects in nature

Although impacts of chemicals in the environment were known before 1962, Rachel Carson’s book Silent Spring (see Section on the history of Environmental toxicology) can be seen as early and comprehensive eco-epidemiological study that synthesized the available information of impacts of chemicals in ecosystems. She considered effects of chemicals a novel force in natural selection when she wrote: “If Darwin were alive today the insect world would delight and astound him with its impressive verification of his theories of survival of the fittest. Under the stress of intensive chemical spraying the weaker members of the insect populations are being weeded out.”

Clear examples of chemical impacts on species are still reported. Amongst the best-known examples is a study on vultures: the population of Indian vultures declined by more than 95% due to exposure to diclofenac, which was used intensively as a veterinary drug (Oaks et al. 2004). The analysis of chemical impacts in nature has, however, become more complex over time. The diversity of chemicals produced and used has vastly increased, and environmental samples contain thousands of chemicals at often low concentrations. Hence, contemporary eco-epidemiology is complex. Nonetheless, various studies have demonstrated that contemporary mixture exposures affect species assemblages. Starting from large-scale monitoring data and following the four steps mentioned above, De Zwart et al. (2006) were able to show that effects on fish species assemblages could be attributed to both habitat characteristics and chemical mixtures. Kapo and Burton Jr (2006) showed the impacts of multiple stressors and chemical mixtures on aquatic species assemblages with similar types of data, but slightly different techniques. Eco-epidemiological studies of the effects of chemicals and their mixtures currently cover different geographies, species groups, stressors and chemicals/mixtures. The potential utility of eco-epidemiological studies was reviewed by Posthuma et al. (2016). The review showed that mixture impacts occur, and that they can be separated from natural variability and multiple-stressor impacts. That means that water managers can develop management plans to counteract stressor impacts. Thereby, the study outcomes are used to prioritize management towards the sites that are most affected, and towards the chemicals that contribute most to those effects. Based on sophisticated statistical analyses, Berger et al. (2016) suggested that chemicals can induce effects in the environment at concentrations much lower than expected based on laboratory experiments. Schäfer et al. (2016) argued that eco-epidemiological studies that cover both mixtures and other stressors are essential for environmental quality assessment and management. In practice, however, the analysis of the potential impacts of chemical mixtures is often still separate from the analysis of impacts of other stressors.

 

Steps in eco-epidemiological analysis

Various regulations, such as the EU Water Framework Directive (see section on the Water Framework Directive), require the collection of monitoring data, followed by bio-assessment. Therefore, monitoring data sets are increasingly available. The data set is subsequently curated and/or optimized for the analyses. Data curation and management steps involve, amongst others, harmonizing the taxonomic names of species and ensuring that the metrics for abiotic and biotic variables represent the conditions at the same place and time as much as possible. Next, the data set is expanded with novel variables, e.g. a metric for the toxic pressure exerted by chemical mixtures. An example of such a metric is the multi-substance Potentially Affected Fraction of species (msPAF). This metric translates measured or predicted concentrations into the Potentially Affected Fraction of species (PAF) per chemical, and these values are then aggregated for the total mixture (De Zwart and Posthuma 2005). Such aggregation is crucial, as adding each chemical of interest as a separate variable would require an ever-expanding number of sampling sites to maintain the statistical power needed to diagnose impacts and probable causation.
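
To make this concrete, the sketch below computes single-substance PAF values from hypothetical log-normal SSDs and aggregates them into an msPAF. All chemical names, SSD parameters and concentrations are invented for illustration, and the aggregation shown uses response addition only, whereas De Zwart and Posthuma (2005) combine concentration addition within a mode of action with response addition between modes of action.

```python
# Minimal sketch (simplified, hypothetical inputs): converting measured concentrations
# into a multi-substance Potentially Affected Fraction of species (msPAF).
import numpy as np
from scipy.stats import norm

# Hypothetical SSD parameters per chemical: mean and sd of log10(toxicity, ug/L)
ssd_params = {
    "chemical_A": (1.5, 0.7),
    "chemical_B": (2.3, 0.5),
}
# Hypothetical measured concentrations (ug/L) at one monitoring site
measured = {"chemical_A": 4.0, "chemical_B": 12.0}

# Single-substance PAF: fraction of species affected at the measured concentration,
# read from a log-normal SSD
paf = {
    chem: norm.cdf(np.log10(conc), loc=ssd_params[chem][0], scale=ssd_params[chem][1])
    for chem, conc in measured.items()
}

# Aggregation to msPAF, here assuming independent action (response addition) only;
# the published method also applies concentration addition within a mode of action.
mspaf = 1.0 - np.prod([1.0 - p for p in paf.values()])
print(paf, round(mspaf, 3))
```

The msPAF value obtained in this way can be added to the monitoring data set as a single mixture-pressure variable per site.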

The interpretation of the outcomes of the statistical analyses of the data set is the final step. Here, it must be acknowledged that statistical association is not equal to causation, and that care must be taken when explaining the findings as indicative of mixture effects. Depending on the context of the study, this may then trigger a refined assessment, alignment with other methods to collect evidence, or direct use in an environmental management program.

 

Eco-epidemiological methods

A very basic eco-epidemiological method is quantile regression. Whereas common regression methods explore the magnitude of the change in the mean of the response variable (e.g., biodiversity) in relation to a predictor variable (e.g., pollutant stress), quantile regression looks at the tails of the distribution of the response variable. How this principle operates is illustrated in Figure 1. When a monitoring data set contains one stressor variable at different levels (i.e., a gradient of data), the observations typically take the shape of a common stressor-response relationship (see section on Concentration-effect relationships). If the monitoring sites are affected by an additional stressor, the maximum performance under the first stressor cannot be reached, so that the XY-points for this situation fall in the area below the curve. Further addition of stressor variables and levels fills this space under the curve. When the raw data plotted as XY show an ‘empty area’ lacking XY-points, e.g. in the upper right corner, it is likely that the predictor variable acts as a stressor that limits the response variable, for example: chemicals limit biodiversity. Quantile regression calculates an upper percentile (e.g., the 95th percentile) of the Y-values in assigned subgroups of X-values (“bins”), as sketched in the example below. Such a procedure yields a picture such as Figure 1.

 

Figure 1. The principle of quantile regression in identification of a predictor variable (= stressor) that acts as a limiting factor to a response variable (= performance). It is common to derive e.g. the 95th percentile of the Y values in a ‘bin’ of X values to derive a stressor-impact curve. As illustration, the 95th percentile is marked only for the first bin of X values, with the blackened star.
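
The binned upper-percentile procedure described above can be sketched as follows. The monitoring data are simulated here, and a dedicated quantile-regression routine (fitting a continuous quantile curve) would typically be used in real analyses.

```python
# Minimal sketch (simulated data) of the binned upper-percentile approach: a response
# limited by one stressor, pushed further down by unmeasured additional stressors.
import numpy as np

rng = np.random.default_rng(42)
stressor = rng.uniform(0, 10, 500)             # e.g. a mixture toxic pressure gradient
ceiling = 100 / (1 + np.exp(stressor - 5))     # limiting stressor-response curve
response = ceiling * rng.uniform(0, 1, 500)    # other stressors keep values below the ceiling

# 95th percentile of the response within bins of the stressor variable
bins = np.linspace(0, 10, 11)
bin_index = np.digitize(stressor, bins)
for i in range(1, len(bins)):
    y_in_bin = response[bin_index == i]
    if y_in_bin.size:
        print(f"bin {bins[i-1]:.0f}-{bins[i]:.0f}: 95th percentile = {np.percentile(y_in_bin, 95):.1f}")
```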

 

More complex methods for the analysis of (bio)monitoring data have been developed and applied. These methods are closely associated with those developed for, and utilized in, applied ecology. Well-known examples are ‘species distribution models’ (SDMs), which are used to describe the abundance or presence of species as a function of multiple environmental variables. A well-known SDM is the bell-shaped curve relating species abundance to water pH: numbers of individuals of a species are commonly low at low and high pH, and the SDM is characterized as an optimum model for species abundance (Y) versus pH (X). Statistical models can also describe species abundance, presence or biodiversity as a function of multiple stressors, for example via Generalized Linear Models. These have the general shape of:

 

\(\log(Abundance) = (a \cdot pH + a' \cdot pH^2) + (b \cdot OM + b' \cdot OM^2) + \ldots + e\),

 

with a, a’, b and b’ being estimated by fitting the model to the data, whilst pH and OM are the abiotic stressor variables (acidity and Organic Matter, respectively); the quadratic terms are added to allow for optimum- and minimum-shaped relationships. When SSD models (see Section on Species Sensitivity Distribution) are used to predict the multi-substance Potentially Affected Fraction of species, the resulting mixture stress proxy can be analysed together with the other stressor variables, as sketched below. Analyses of monitoring data from the United States and the Netherlands have, for example, shown that the abundance of >60% of the taxa is co-affected by mixtures of chemicals. An example study is provided by Posthuma et al. (2016).
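
As a minimal sketch of how such a model can be fitted, the example below uses ordinary least squares on simulated, log-transformed abundance data with an intercept, linear and quadratic abiotic terms and an msPAF term. Real analyses would typically use observed monitoring data and a proper GLM fitting routine; all variables and coefficients here are hypothetical.

```python
# Minimal sketch (assumed, simulated data): least-squares fit of the model above.
import numpy as np

rng = np.random.default_rng(1)
n = 200
pH = rng.uniform(5.0, 9.0, n)
OM = rng.uniform(1.0, 20.0, n)      # organic matter content (%)
msPAF = rng.uniform(0.0, 0.5, n)    # mixture toxic pressure proxy

# Simulated "true" relationship: optimum around pH 7, decline with msPAF
log_abundance = (4.0 - 0.5 * (pH - 7.0) ** 2 + 0.05 * OM - 3.0 * msPAF
                 + rng.normal(0.0, 0.3, n))

# Design matrix: intercept, linear and quadratic abiotic terms, and the mixture proxy
X = np.column_stack([np.ones(n), pH, pH**2, OM, OM**2, msPAF])
coef, *_ = np.linalg.lstsq(X, log_abundance, rcond=None)
print(dict(zip(["intercept", "pH", "pH^2", "OM", "OM^2", "msPAF"], coef.round(3))))
```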

 

Prospective mixture impact assessments

In addition to the retrospective analysis of monitoring data in search of chemical impacts, recent studies also provide examples of prospective assessments of mixture effects. Different land uses imply different chemical use patterns, summarized as ‘signatures’. Agricultural land use, for example, will yield intermittent emissions of crop-specific plant protection products, aligning with the growing season. Populated areas will show continuous emissions of household chemicals and discontinuous emissions of chemicals in street run-off associated with heavy rain events. The application of emission, fate and ecotoxicity models showed that aquatic ecosystems are exposed to these ‘signatures’, with associated predicted impact magnitudes (Holmes et al. 2018; Posthuma et al. 2018). Although such prospective assessments do not prove ecological impacts, they can assist in avoiding impacts by preventing the emission ‘signatures’ that are identified as potentially most hazardous.

 

The use of eco-epidemiological output

Eco-epidemiological analysis outputs serve two purposes, closely related to prospective and retrospective risk assessment of chemical pollution:

1. Validation of ecotoxicological models and approaches;

2. Derivation of control measures, to reduce impacts of diagnosed probable causes of impacts.

 

If needed, multiple lines of evidence can be combined, such as in the Triad approach (see section on TRIAD) or in approaches that consider more than three lines of evidence (Chapman and Hollert, 2006). The more important a correct diagnosis is, the more the user may want to rely on multiple lines of evidence.

First, the validation of ecotoxicological models and approaches is crucial, to avoid that important environmental protection, assessment and management activities rely on approaches that bear little relationship to effects in the field. Eco-epidemiological analyses have, for example, been used to validate the protective benchmarks used in chemical-oriented environmental policies.

Second, the outcomes of an eco-epidemiological analysis can be used to control the causes of impacts on ecosystems. Some studies have, for example, identified a statistical association between observed impacts (species expected but absent) and the pollution of surface waters with mixtures of metals. Though local experts first doubted this association because of the lack of industrial activities involving metals in the area, they later found the association plausible given the presence of old spoil heaps from past mining activities. Metals appeared to leach into the surface waters at low rates, and the leached mixtures co-varied with the absence of species (De Zwart et al. 2006).

 

References

Berger, E., Haase, P., Oetken, M., Sundermann, A. (2016). Field data reveal low critical chemical concentrations for river benthic invertebrates. Science of The Total Environment 544, 864-873.

Bro-Rasmussen, F., Løkke, H. (1984). Ecoepidemiology - a casuistic discipline describing ecological disturbances and damages in relation to their specific causes; exemplified by chlorinated phenols and chlorophenoxy acids. Regulatory Toxicology and Pharmacology 4, 391-399.

Carson, R. (1962). Silent spring. Boston, Houghton Mifflin.

Chapman, P.M., Hollert, H. (2006). Should the sediment quality triad become a tetrad, a pentad, or possibly even a hexad? Journal of Soils and Sediments 6, 4-8.

De Zwart, D., Dyer, S.D., Posthuma, L., Hawkins, C.P. (2006). Predictive models attribute effects on fish assemblages to toxicity and habitat alteration. Ecological Applications 16, 1295-1310.

De Zwart, D., Posthuma, L. (2005). Complex mixture toxicity for single and multiple species: Proposed methodologies. Environmental Toxicology and Chemistry 24, 2665-2676.

Holmes, C.M., Brown, C.D., Hamer, M., Jones, R., Maltby, L., Posthuma, L., Silberhorn, E., Teeter, J.S., Warne, M.S.J., Weltje, L. (2018). Prospective aquatic risk assessment for chemical mixtures in agricultural landscapes. Environmental Toxicology and Chemistry 37, 674-689.

Kapo, K.E., Burton Jr, G.A. (2006). A geographic information systems-based, weights-of-evidence approach for diagnosing aquatic ecosystem impairment. Environmental Toxicology and Chemistry 25, 2237-2249.

Oaks, J.L., Gilbert, M., Virani, M.Z., Watson, R.T., Meteyer, C.U., Rideout, B.A., Shivaprasad, H.L., Ahmed, S., Chaudhry, M.J., Arshad, M., Mahmood, S., Ali, A., Khan, A.A. (2004). Diclofenac residues as the cause of vulture population decline in Pakistan. Nature 427(6975), 630-633.

Posthuma, L., Brown, C.D., de Zwart, D., Diamond, J., Dyer, S.D., Holmes, C.M., Marshall, S., Burton, G.A. (2018). Prospective mixture risk assessment and management prioritizations for river catchments with diverse land uses. Environmental Toxicology and Chemistry 37, 715-728.

Posthuma, L., De Zwart, D., Keijzers, R., Postma, J. (2016). Water systems analysis with the ecological key factor 'toxicity'. Part 2. Calibration. Toxic pressure and ecological effects on macrofauna in the Netherlands. Amersfoort, the Netherlands, STOWA.

Posthuma, L., Dyer, S.D., de Zwart, D., Kapo, K., Holmes, C.M., Burton Jr, G.A. (2016). Eco-epidemiology of aquatic ecosystems: Separating chemicals from multiple stressors. Science of The Total Environment 573, 1303-1319.

Posthuma, L., Suter, II, G.W., Traas, T.P. (Eds.) (2002). Species Sensitivity Distributions in Ecotoxicology. Boca Raton, FL, USA, Lewis Publishers.

Schäfer, R.B., Kühn, B., Malaj, E., König, A., Gergs, R. (2016). Contribution of organic toxicants to multiple stress in river ecosystems. Freshwater Biology 61, 2116–2128.

6.5. Regulatory Frameworks

Regulatory frameworks

 

Authors: Charles Bodar and Joop de Knecht

Reviewers: Kees van Gestel

 

Learning objectives:

You should be able to

  • explain how the potential environmental risks of chemicals are legally being controlled in the EU and beyond
  • mention the different regulatory bodies involved in the regulation of different categories of chemicals
  • explain the purpose of the Classification, Labelling and Packaging (CLP) approach and how it differs from the risk assessment of chemicals

 

Keywords: chemicals, environmental regulations, hazard, risk

 

Introduction

There is no single, overarching global regulatory framework to manage the risks of all chemicals. Instead, different regulations or directives have been developed for different categories of chemicals. These categories are typically related to the usage of the chemicals. Important categories are industrial chemicals (solvents, plasticizers, etc.), plant protection products, biocides and human and veterinary drugs. Some chemicals may belong to more than one category. Zinc, for example, is used in the building industry, but it also has biocidal applications (antifouling agent) and zinc oxide is used as a veterinary drug. In the European Union, each chemical category is subject to specific regulations or directives providing the legal conditions and requirements to guarantee a safe production and use of chemicals. A key element of all legal frameworks is the requirement that sufficient data on a chemical should be made available. Valid data on production and identity (e.g. chemical structure), use volumes, emissions, environmental fate properties and the (eco)toxicity of a chemical are the essential building blocks for a sound assessment and management of environmental risks. Rules for the minimum data set that should be provided by the actors involved (e.g. producers or importers) are laid down in various regulatory frameworks. With this data, both hazard and risk assessments can be carried out according to specified technical guidelines. The outcome of the assessment is then used for risk management, which is focused on minimizing any risk by taking measures, ranging from requests for additional data to restrictions on the particular use or a full-scale ban of a chemical.

 

REACH

REACH is a regulation of the European Union, adopted to improve the protection of human health and the environment from the risks that can be posed by chemicals, while enhancing the competitiveness of the EU chemicals industry. REACH stands for Registration, Evaluation, Authorisation and Restriction of Chemicals. The REACH regulation entered into force on 1st June 2007 to streamline and improve the former legislative frameworks on new and existing chemical substances. It replaced approximately forty community regulations and directives by one single regulation.

REACH establishes procedures for collecting and assessing information on the properties, hazards and risks of substances. REACH applies to a very broad spectrum of chemicals, from industrial to household applications. It requires EU manufacturers and importers to register their chemical substances if they are produced or imported in annual amounts of > 1 tonne, unless the substance is exempted from registration under REACH. At quantities of > 10 tonnes, manufacturers, importers and down-stream users are responsible for showing that their substances do not adversely affect human health or the environment.

The amount of standard information required to show safe use depends on the quantity of the substance that is manufactured or imported. Before testing on vertebrate animals like fish and mammals, the use of alternative methods must be considered. The European Chemicals Agency (ECHA) coordinates and facilitates the REACH program. For production volumes above 10 tonnes per year, industry has to prepare a risk assessment, taking into account all risk management measures envisaged, and document this in a chemical safety assessment (CSA). A CSA should include an exposure assessment, a hazard or dose-response assessment, and a risk characterization showing risk characterisation ratios below 1.0, i.e. safe use (see sections on REACH Human and REACH Eco).

 

Classification, Labelling and Packaging (CLP)

The EU CLP regulation requires manufacturers, importers or downstream users of substances or mixtures to classify, label and package their hazardous chemicals appropriately before placing them on the market. When the relevant information (e.g. ecotoxicity data) on a substance or mixture meets the classification criteria in the CLP regulation, the hazards of the substance or mixture are identified by assigning a certain hazard class and category. An important CLP hazard class is ‘Hazardous to the aquatic environment’, which is divided into categories based on specified criteria; Category Acute 1, for example, represents the most acutely toxic chemicals (LC50/EC50 ≤ 1 mg/L). CLP also sets detailed criteria for the labelling elements, such as the well-known pictograms (Figure 1).

Figure 1: Pictogram used to indicate hazardousness to the environment. (source: https://www.pictogrammenwinkel.nl/index.php?main_page=product_info&products_id=5988, with permission)

 

Plant protection products regulation

Plant protection products (PPPs) are pesticides that are mainly used to keep crops healthy and to prevent them from being damaged by disease and infestation. They include, among others, herbicides, fungicides, insecticides, acaricides, plant growth regulators and repellents (see section on Crop Protection Products). PPPs fall under EU Regulation (EC) No 1107/2009, which determines that PPPs cannot be placed on the market or used without prior authorization. The European Food Safety Authority (EFSA) coordinates the EU regulation on PPPs.

 

Biocides regulation

The distinction between biocides and PPP is not always straightforward, but as a general rule of thumb the PPP regulation applies to substances used by farmers for crop protection while the biocides regulation covers all other pesticide applications. Different applications of the same active ingredient, one as a PPP and the other as a biocide, may thus fall under different regulations. Biocides are used to protect humans, animals, materials or articles against harmful organisms like pests or bacteria, by the action of the active substances contained in the biocidal product. Examples of biocides are antifouling agents, preservatives and disinfectants.

According to the EU Biocidal Products Regulation (BPR), all biocidal products require an authorization before they can be placed on the market, and the active substances contained in the biocidal product must have been approved beforehand. The European Chemicals Agency (ECHA) coordinates and facilitates the BPR. As in other legislative frameworks, the environmental risk assessment of biocides is mainly performed by comparing predicted compartmental concentrations (PEC) with the concentration below which unacceptable effects on organisms will most likely not occur (PNEC).

 

Veterinary and human pharmaceuticals regulation

Since 2006, EU law requires an environmental risk assessment (ERA) for all new applications for a marketing authorization of human and veterinary pharmaceuticals. For both product types, guidance documents have been developed for conducting an ERA in two phases. The first phase estimates the exposure of the environment to the drug substance; based on an action limit, the assessment may be terminated at this stage. In the second phase, information about the fate and effects of the substance in the environment is obtained and assessed. For conducting an ERA, a base set of data, including ecotoxicity data, is required. For veterinary medicines, the ERA is part of a risk-benefit analysis, in which the positive therapeutic effects are weighed against any environmental risks, whereas for human medicines environmental concerns are excluded from the risk-benefit analysis. The European Medicines Agency (EMA) is responsible for the scientific evaluation, supervision and safety monitoring of medicines in the EU.

 

Harmonization of testing

Testing chemicals is an important aspect of risk assessment, e.g. testing for toxicity, for degradation or for a physicochemical property like the Kow (see Chapter 3). The outcome of a test may vary depending on the conditions, e.g. temperature, test medium or light conditions. For this reason, there is an incentive to standardize test conditions and to harmonize testing procedures between agencies and countries. This also avoids duplication of testing and leads to a more efficient and effective testing system.

The Organization for Economic Co-operation and Development (OECD) assists its member governments in developing and implementing high-quality chemical management policies and instruments. One of the key activities to achieve this goal is the development of harmonized guidelines to test and assess the risks of chemicals leading to a system of mutual acceptance of chemical safety data among OECD countries. The OECD also developed Principles of Good Laboratory Practice (GLP) to ensure that studies are of sufficient quality and rigor and are verifiable. The OECD also facilitates the development of new tools to obtain more safety information and maintain quality while reducing costs, time and animal testing, such as the OECD QSAR toolbox.

6.5.1. REACH human

Authors:      Theo Vermeire

Reviewers:  Tim Bowmer

 

Learning objective:

You should be able to:

  • outline how human risk assessment of chemicals is performed under REACH;
  • explain the regulatory function of human risk assessment in REACH.

 

Keywords: REACH, chemical safety assessment, human, RCR, DNEL, DMEL

 

 

Human risk assessment under REACH

The REACH Regulation aims to ensure a high level of protection of human health and the environment, including the promotion of alternative methods for assessment of hazards of substances, as well as the free circulation of substances on the internal market while enhancing competitiveness and innovation.  Risk assessment under REACH aims to realize such a level of protection for humans that the likelihood of adverse effects occurring is low, taking into account the nature of the potentially exposed population (including sensitive groups) and the severity of the effect(s). Industry therefore has to prepare a risk assessment (in REACH terminology: chemical safety assessment, CSA) for all relevant stages in the life cycle of the chemical, taking into account all risk management measures envisaged, and document this in the chemical safety report (CSR). Risk characterization in the context of a CSA is the estimation of the likelihood that adverse effect levels occur due to actual or predicted exposure to a chemical. The human populations considered,  or protection goals,  are workers, consumers and humans exposed via the environment. In risk characterization, exposure levels are compared to reference levels to yield “risk characterization ratios” (RCRs) for each protection goal. RCRs are derived for all endpoints (e.g. skin and eye irritation, sensitization, repeated dose toxicity) and time scales. It should be noted that these RCRs have to be derived for all stages in the life-cycle of a compound.

 

Environmental exposure assessment for humans

Humans can be exposed through the environment directly via inhalation of indoor and ambient air, soil ingestion and dermal contact, and indirectly via food products and drinking water (Figure 1). REACH does not consider direct exposure via soil.

 

Figure 1. Main exposure routes considered in REACH for environmental exposure of humans.

 

In the REACH exposure scenario, assessment of human exposure through the environment can be divided into three steps:

  1. Determination of the concentrations in intake media (air, soil, food, drinking water);
  2. Determination of the total daily intake of these media;
  3. Combining concentrations in the media with total daily intake (and, if necessary, using a factor for bioavailability through the route of uptake concerned).

 

A fourth step may be the consideration of aggregated exposure taking into account exposure to the same substance in consumer products and at the workplace. Moreover, there may be similar substances, acting via the same mechanism of action, that may have to be considered in the exposure assessment, for instance, as a worst case, by applying the concept of dose or concentration addition.
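
A minimal numerical sketch of the three steps listed above is given below. The media, concentrations, intake rates and bioavailability factors are illustrative placeholders, not the default values prescribed in the REACH guidance.

```python
# Minimal sketch (hypothetical inputs) of combining media concentrations with
# daily intake rates and bioavailability to obtain a total daily dose.
media = {
    # medium: (concentration, daily intake, bioavailability fraction)
    "air (mg/m3)":           (2.0e-3, 20.0, 1.0),    # m3 of air inhaled per day
    "drinking water (mg/L)": (3.0e-3, 2.0, 1.0),     # L of water per day
    "fish (mg/kg)":          (0.20, 0.115, 1.0),     # kg of fish per day
    "crops (mg/kg)":         (0.05, 1.2, 1.0),       # kg of plant products per day
}
body_weight = 70.0  # kg

# Step 3: combine concentration, intake and bioavailability, summed over all media
total_dose = sum(conc * intake * bio for conc, intake, bio in media.values()) / body_weight
print(f"total daily dose: {total_dose * 1000:.1f} ug/kg bw/day")
```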

The section on Environmental realistic scenarios (PECs) – Human explains the concept of exposure scenarios and how concentrations in environmental compartments are derived.

 

Hazard identification and dose-response assessment

The aim of hazard identification is to classify chemicals and to select key data for the dose-response assessment to derive a safe reference level, which in REACH terminology is called the DNEL (Derived No Effect Level) or DMEL (Derived Minimal Effect Level). For human end-points, a distinction is made between substances considered to have a threshold for toxicity and those without a threshold. For threshold substances, a No-Observed-Adverse Effect Level (NOAEL) or Lowest-Observed-Adverse-Effect Level (LOAEL) is derived, typically from toxicity studies with laboratory animals such as rats and mice. Alternatively a Benchmark Dose (BMD) can be derived by fitting a dose-response model to all observations. These toxicity values are then extrapolated to a DNEL using assessment factors to correct for uncertainty and variability. The most frequently used assessment factors are those for interspecies differences and those for intraspecies variability (see section on Setting safe standards). Additionally, factors can be applied to account for remaining uncertainties such as those due to a poor database.

For substances considered to exert their effect by a non-threshold mode of action, especially mutagenicity and carcinogenicity, it is generally assumed, as a default assumption, that even at very low levels of exposure residual risks cannot be excluded. That said, recent progress has been made on establishing scientific, ‘health-based’ thresholds for some genotoxic carcinogens. For non-threshold genotoxic carcinogens it is recommended to derive a DMEL, if the available data allow. A DMEL is a cancer risk value considered to be of very low concern, e.g. a 1 in a million tumour risk after lifetime exposure to the chemical and using a conservative linear dose-response model. There is as yet no EU-wide consensus on acceptable levels of cancer risk.

 

Risk characterization

Safe use of substances is demonstrated when:

•         RCRs are below one, both at local and regional level. For threshold substances, the RCR is the ratio of the estimated exposure (concentration or dose) and the DNEL; for non-threshold substances the DMEL is used.

•        The likelihood and severity of an event, such as an explosion, occurring due to the physicochemical properties of the substance as determined in the hazard assessment are negligible.

 

A risk characterization needs to be carried out for each exposure scenario (see Section on Environmental realistic scenarios (PECs) – Human) and human population. The assessment consists of a comparison of the exposure of each human population known to be or likely to be exposed with the appropriate DNELs or DMELs and an assessment of the likelihood and severity of an event occurring due to the physicochemical properties of the substance.

 

 

Example of a deterministic assessment  (Vermeire et al., 2001)

 

Exposure assessment

Based on an emission estimation for the processing of dibutylphthalate (DBP) as a softener in plastics, the concentrations in environmental compartments were estimated. Based on modelling as schematically presented in Figure 1, the total human dose was determined to be 93 µg.kgbw-1.d-1 (see the overview below).

PEC-air: 2.4 µg.m-3

PEC-surface water: 2.8 µg.l-1

PEC-grassland soil: 0.15 mg.kg-1

PEC-porewater agricultural soil: 3.2 µg.l-1

PEC-porewater grassland soil: 1.4 µg.l-1

PEC-groundwater: 3.2 µg.l-1

Total Human Dose: 93 µg.kgbw-1.d-1

 

Effects assessment

The total dose should be compared to a DNEL for humans. DBP is not considered a genotoxic carcinogen but is toxic to reproduction, and therefore the risk assessment is based on endpoints assumed to have a threshold for toxicity. The lowest NOAEL of DBP was observed in a two-generation reproduction test in rats: at the lowest dietary dose level (52 mg.kgbw-1.d-1 for males and 80 mg.kgbw-1.d-1 for females), a reduced number of live pups per litter and decreased pup weights were seen in the absence of maternal toxicity. The lowest dose level of 52 mg.kgbw-1.d-1 was chosen as the NOAEL. The DNEL was derived by applying an overall assessment factor of 1000, accounting for interspecies differences, human variability and uncertainties due to a non-chronic exposure period.

 

Risk characterisation

The deterministic estimate of the RCR would be based on the deterministic exposure estimate of 0.093 mg.kgbw-1.d-1 and the deterministic DNEL of 0.052 mg.kgbw-1.d-1. The deterministic RCR would then be 1.8, based on the NOAEL. Since this is higher than one, this assessment indicates a concern, requiring a refinement of the assessment or risk management measures.
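
The deterministic calculation above can be reproduced in a few lines; the numbers are those reported in this example.

```python
# Minimal sketch reproducing the deterministic DBP example above.
noael = 52.0                 # mg/kg bw/day, two-generation reproduction study in rats
assessment_factor = 1000.0   # interspecies, intraspecies and remaining uncertainties
dnel = noael / assessment_factor          # 0.052 mg/kg bw/day

exposure = 0.093             # mg/kg bw/day, total human dose via the environment
rcr = exposure / dnel
print(f"DNEL = {dnel:.3f} mg/kg bw/day, RCR = {rcr:.1f}")   # RCR ~ 1.8 > 1: concern
```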

 

Additional reading

Van Leeuwen C.J., Vermeire T.G. (Eds.) (2007) Risk assessment of chemicals: an introduction. Springer, Dordrecht, The Netherlands, ISBN 978-1-4020-6102-8 (e-book), https://doi.org/10.1007/978-1-4020-6102-8.

Vermeire, T., Jager, T., Janssen, G., Bos, P., Pieters, M. (2001) A probabilistic human health risk assessment for environmental exposure to dibutylphthalate. Journal of Human and Ecological Risk Assessment 7, 1663-1679.

 

6.5.2. REACH environment

Author: Joop de Knecht

Reviewers: Watze de Wolf

 

Keywords: REACH, European chemicals regulation

 

 

Introduction

REACH establishes procedures for collecting and assessing information on the properties, hazards and risks of substances. At quantities of > 10 tonnes, the manufacturers, importers, and down-stream users must show that their substances do not adversely affect human health or the environment for the uses and operational conditions registered. The amount of standard information required to show safe use depends on the quantity of the substance that is manufactured or imported. This section explains how risks to the environment are assessed in REACH.

 

Data requirements

As a minimum requirement, all substances manufactured or imported in quantities of 1 tonne or more per year need to be tested in acute toxicity tests with Daphnia and algae, and information on biodegradability should be provided (Table 1). Physical-chemical properties relevant for the environmental fate assessment that have to be provided at this tonnage level are water solubility, vapour pressure and the octanol-water partition coefficient. At 10 tonnes or more, this should be supplemented with an acute toxicity test on fish and an activated sludge respiration inhibition test. At this tonnage level, an adsorption/desorption screening and a hydrolysis test should also be performed. If the chemical safety assessment, performed at 100 tonnes or more when a substance is classified based on hazard information, indicates the need to investigate the effects on aquatic organisms further, the chronic toxicity to these aquatic species should be determined. If the substance has a high potential for bioaccumulation (for instance a log Kow > 3), the bioaccumulation in aquatic species should also be determined. The registrant should also determine the acute toxicity to terrestrial species or, in the absence of these data, consider the equilibrium partitioning method (EPM) to assess the hazard to soil organisms. To further investigate the fate of the substance in surface water, sediment and soil, simulation tests on its degradation should be conducted and, when needed, further information on adsorption/desorption should be provided. At 1000 tonnes or more, chronic tests on terrestrial and sediment-living species should be conducted if further refinement of the safety assessment is needed. Before testing on vertebrate animals like fish and mammals, the use of alternative methods and all other options must be considered to comply with the regulations regarding (the reduction of) animal testing.

 

Table 1 Required ecotoxicological and environmental fate information as defined in REACH

1-10 t/y

  • Acute Aquatic toxicity (invertebrates, algae)
  • Ready biodegradability

10-100 t/y

  • Acute Aquatic toxicity (fish)
  • Activated sludge respiration, inhibition
  • Hydrolysis as a function of pH
  • Adsorption/ desorption screening test

100-1000 t/y

  • Chronic Aquatic toxicity (invertebrates, fish)
  • Bioaccumulation
  • Surface water, soil and sediment simulation (degradation) test
  • Acute terrestrial toxicity
  • Further information on adsorption/desorption

≥ 1000 t/y

  • Further fate and behaviour in the environment of the substance and/or degradation products
  • Chronic terrestrial toxicity
  • Sediment toxicity
  • Avian toxicity

 

Safety assessment

For substances that are classified based on hazard information the registrant should assess the environmental safety of a substance, by comparing the predicted environmental concentration (PEC) with the Predicted No Effect Concentration (PNEC), resulting in a Risk Characterisation Ratio (RCR=PEC/PNEC). The use of the substance is considered to be safe when the RCR <1.

Chapter R.16 of the ECHA guidance offers methods to estimate the PEC based on the tonnage, use and operational conditions, standardised through a set of use descriptors, particularly the Environmental Release Categories (ERCs). These ERCs are linked to conservative default release factors to be used as a starting point for a first-tier environmental exposure assessment. When substances are emitted via waste water, the physical-chemical and fate properties of the substance are used to predict its behaviour in the Wastewater Treatment Plant (WWTP). Subsequently, the release of treated wastewater is used to estimate the concentration in fresh and marine surface water. The concentration in sediment is estimated from the PEC in water and an experimental or estimated sediment-water partitioning coefficient (Kpsed). Soil concentrations are estimated from deposition from air and the application of sludge from a WWTP. The guidance offers default values for all relevant parameters, so that a generic local PEC can be calculated and considered applicable to all local emissions in Europe, although the default values can be adapted to specific conditions if justified. The local risk for wide-dispersive uses (e.g. from consumers or small, non-industrial companies) is estimated for a default WWTP serving 10,000 inhabitants. In addition, a regional assessment is conducted for a standard area, a region represented by a typical densely populated EU area located in Western Europe (i.e. about 20 million inhabitants, distributed in a 200 x 200 km area). For calculating the regional PECs, a multi-media fate-modelling approach is used (e.g. the SimpleBox model; see Section on Multicompartment fate modelling). All releases to each environmental compartment for each use, assumed to constitute a constant and continuous flux, are summed and averaged over the year, and steady-state concentrations in the environmental compartments are calculated. The regional concentrations are used as background concentrations in the calculation of the local concentrations.

The PNEC is calculated using the lowest toxicity value and an assessment factor (AF) that depends on the amount of information available (see section on Setting safe standards or Chapter R.10 of the REACH guidance). If only the minimum set of acute aquatic toxicity data is available, i.e. LC50s or EC50s for algae, daphnids and fish, a default AF of 1000 is used. When long-term No Observed Effect Concentrations (NOECs) are available for one, two or three trophic levels, default AFs of 100, 50 and 10 are applied, respectively. The idea behind lowering the AF as more data become available is that the uncertainty around the PNEC is reduced; a simple sketch of this first-tier derivation is given below.
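
A minimal sketch of this first-tier PNEC derivation is shown below. The toxicity values are hypothetical, and the rule for choosing the assessment factor is reduced to a simple data count; the REACH guidance contains additional considerations, e.g. on which trophic levels the long-term data cover.

```python
# Minimal sketch (simplified, hypothetical inputs) of the first-tier aquatic PNEC.
def pnec_aquatic(acute_l_ec50s, chronic_noecs):
    """Lowest toxicity value divided by a default assessment factor (AF)."""
    if chronic_noecs:
        # AF depends on how many trophic levels are covered by long-term NOECs
        af = {1: 100.0, 2: 50.0}.get(len(chronic_noecs), 10.0)
        return min(chronic_noecs) / af
    # Only the acute base set (algae, daphnia, fish) available: AF = 1000
    return min(acute_l_ec50s) / 1000.0

# Hypothetical acute base set only (mg/L)
print(round(pnec_aquatic(acute_l_ec50s=[1.2, 3.5, 0.8], chronic_noecs=[]), 4))                 # 0.0008 mg/L
# Hypothetical long-term NOECs for three trophic levels (mg/L)
print(round(pnec_aquatic(acute_l_ec50s=[1.2, 3.5, 0.8], chronic_noecs=[0.10, 0.25, 0.31]), 4)) # 0.01 mg/L
```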

In the absence of ecotoxicological data for soil and/or sediment-dwelling organisms, the PNECsoil and/or PNECsed may be provisionally calculated using the EPM. This method uses the PNECwater for aquatic organisms and the suspended matter/water partitioning coefficient as inputs. For substances with a log Kow >5 (or with a corresponding log Kp value), the PEC/PNEC ratio resulting from the EPM is increased by a factor of 10 to take into account possible uptake through the ingestion of sediment. If the PEC/PNEC is greater than 1 a sediment test must be conducted. If one, two or three long-term No Observed Effect Concentrations (NOECs) from sediment invertebrate species representing different living and feeding conditions are available, the PNEC can be derived using default AFs of 100, 50 and 10, respectively.
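
The sketch below illustrates the EPM logic in its simplest form: the aquatic PNEC is scaled with a suspended matter/sediment-water partition coefficient, and the factor of 10 is applied for substances with log Kow > 5. All numbers are hypothetical, and the ECHA guidance uses specific unit conventions and default phase properties that are not reproduced here.

```python
# Minimal sketch (simplified, hypothetical inputs) of the equilibrium partitioning method.
pnec_water = 0.01     # mg/L, derived from aquatic toxicity data
kp_susp = 250.0       # L/kg, suspended matter-water partition coefficient
log_kow = 5.6

pnec_sed = kp_susp * pnec_water      # mg/kg, provisional sediment PNEC
pec_sed = 4.0                        # mg/kg, hypothetical predicted sediment concentration
rcr = pec_sed / pnec_sed
if log_kow > 5:
    rcr *= 10.0                      # extra factor for uptake via ingestion of sediment
print(f"provisional PNECsed = {pnec_sed:.1f} mg/kg, RCR = {rcr:.1f}")
# RCR > 1: a sediment toxicity test must be conducted
```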

For data rich chemicals, the PNEC can be derived using Species Sensitivity Distributions (SSD) or other higher-tier approaches.

6.5.3. Pesticides (EFSA)

under review

6.5.4. Environmental Risk Assessment of Pharmaceuticals in Europe

Author: Gerd Maack

Reviewers: Ad Ragas, Julia Fabrega, Rhys Whomsley

 

Learning objectives:

You should be able to

  • explain the philosophy and objective of the environmental risk assessment of pharmaceuticals;
  • mention the key aspects of the tiered approach of the assessment;
  • identify the exposure routes for human and veterinary medicinal products and explain the respective consequences for the assessment.

 

Keywords: Human pharmaceuticals, veterinary pharmaceuticals, environmental impact, tiered approach

 

 

Introduction

Pharmaceuticals are a crucial element of modern medicine and confer significant benefits to society. About 4,000 active pharmaceutical ingredients are being administered worldwide in prescription medicines, over-the-counter medicines, and veterinary medicines. They are designed to be efficacious and stable, as they need to pass different barriers, e.g. the skin, the gastrointestinal tract (GIT), or even the blood-brain barrier, before reaching the target cells. Each of these systems has a different pH and different lipophilicity, and the GIT is in addition colonised with specific bacteria, specialized in digesting, dissolving and disintegrating organic molecules. As a consequence of this stability, most pharmaceutical ingredients are stable in the environment as well and could cause effects on non-target organisms.

 

The active ingredients comprise a variety of synthetic chemicals produced by pharmaceutical companies in both the industrialized and the developing world at a rate of 100,000 tons per year.

While pharmaceuticals are stringently regulated in terms of efficacy and safety for patients, as well as for target animal safety and user and consumer safety, their potential effects on non-target organisms and the environment are regulated comparatively weakly.

The authorisation procedure requires an environmental risk assessment (ERA) to be submitted by the applicants for each new human and veterinary medicinal product. The assessment encompasses the fate and behaviour of the active ingredient in the environment and its ecotoxicity based on a catalogue of standardised test guidelines.

In the case of veterinary pharmaceuticals, constraints to reduce risk and thus ensure safe usage can be stipulated in most cases. In the case of human pharmaceuticals, it is far more difficult to ensure risk reduction by restricting the drug's use, for practical and ethical reasons: because of their unique benefits, a restriction is not considered reasonable. This is reflected in the legal framework, as a potential effect on the environment is not included in the final benefit-risk assessment for a marketing authorisation.

 

Exposure pathways

Human pharmaceuticals

Human pharmaceuticals enter the environment mainly via surface waters, through sewage systems and sewage treatment plants. The main entry pathways are excretion and inappropriate disposal. Typically, only a fraction of the medicinal product taken is metabolised by the patient, meaning that a major share of the active ingredient is excreted unchanged into the wastewater system. Furthermore, the metabolites themselves are sometimes pharmacologically active. No wastewater treatment plant is able to remove all active ingredients, so medicinal products are commonly found in surface water, to some extent in groundwater, and sometimes even in drinking water. However, the concentrations in drinking water are orders of magnitude lower than therapeutic concentrations. An additional exposure pathway for human pharmaceuticals is the spreading of sewage sludge on soil, if the sludge is used as fertilizer on farmland. For more details, see the link “The Drugs We Wash Away: Pharmaceuticals, Drinking Water and the Environment”.

 

Veterinary pharmaceuticals

Veterinary pharmaceuticals, on the other hand, enter the environment mainly via soil, either indirectly, when the slurry and manure from intensive livestock production are spread onto agricultural land as fertiliser, or directly from pasture animals. Pasture animals may additionally excrete directly into surface water. Pharmaceuticals can also enter the environment indirectly via manure that is first used in biogas plants.

 

Figure 1: Entry pathways of human and veterinary medicinal products. See text for more details (reproduced with permission from the German Environment Agency).

 

 

Assessment schemes

Despite the differences mentioned above, the general scheme of the environmental risk assessment of human and veterinary pharmaceuticals is similar. Both assessments start with an exposure assessment. Only if specific trigger values are reached is an in-depth assessment of the fate, behaviour and effects of the active ingredient necessary.

 

Environmental risk assessment of human pharmaceuticals

In Europe, an ERA for human pharmaceuticals has to be conducted according to the Guideline on Environmental Risk Assessment of Medicinal Products for Human Use (EMA 2006). This ERA consists of two phases. Phase I is a pre-screening that estimates the exposure in surface water; if this Predicted Environmental Concentration (PEC) does not reach the action limit of 0.01 µg/L, the ERA can, in most cases, stop. If the action limit is reached or exceeded, a base set of aquatic toxicology and fate and behaviour data needs to be supplied in Phase II Tier A, and a risk assessment comparing the PEC with the Predicted No Effect Concentration (PNEC) needs to be conducted. If a risk is still identified for a specific compartment in this step, a substance- and compartment-specific refinement and risk assessment in Phase II Tier B needs to be conducted (Figure 2).

 

Phase I: Estimation of Exposure

In Phase I, the PEC calculation is restricted to the aquatic compartment. The estimation should be based on the drug substance only, irrespective of its route of administration, pharmaceutical form, metabolism and excretion. The initial calculation of the PEC in surface water assumes:

  • The predicted amount used per capita per year is evenly distributed over the year and throughout the geographic area (Doseai);
  • The fraction of the population using the medicinal product is expressed as the market penetration factor (Fpen), in other words ‘how many people will take the medicinal product?’; normally a default value of 1% is used;
  • The sewage system is the drug’s main route of entry into surface water.

The following formula is used to estimate the PEC in surface water:

 

\(PEC_{surfacewater} = {(DOSE_{ai}\ *\ F_{pen})\over (WASTE_{inhab}\ *\ DILUTION)}\)

 

PECsurfacewater = Predicted Environmental Concentration in surface water [mg.l-1]

DOSEai = Maximum daily dose consumed per capita [mg.inh-1.d-1]

Fpen = Fraction of market penetration (= 1% by default)

WASTEinhab = Amount of wastewater per inhabitant per day (= 200 l by default)

DILUTION = Dilution Factor (= 10 by default)

 

Three factors in this formula, i.e. Fpen, WASTEinhab and the dilution factor, take default values, meaning that the PECsurfacewater in Phase I depends entirely on the dose of the active ingredient. The Fpen can be refined by providing reasonably justified market penetration data, e.g. based on published epidemiological data.

If the PECsurfacewater value is equal to or above 0.01 μg/l (mean dose ≥ 2 mg cap-1 d-1), a Phase II environmental fate and effect analysis should be performed. Otherwise, the ERA can stop. However, in some cases, the action limit may not be applicable. For instance, medicinal substances with a log Kow > 4.5 are potential PBT candidates and should be screened for persistence (P), bioaccumulation potential (B), and toxicity (T) independently of the PEC value. Furthermore, some substances may affect vertebrates or lower animals at concentrations lower than 0.01 μg/L. These substances should always enter Phase II and a tailored risk assessment strategy should be followed which addresses the specific mechanism of action of the substance. This is often true for e.g. hormone active substances (see section on Endocrine disruption). The required tests in a Phase II assessment (see below) need to cover the most sensitive life stage, and the most sensitive endpoint needs to be assessed. This means for instance that for substances affecting reproduction, the organism needs to be exposed to the substance during gonad development and the reproductive output needs to be assessed.
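
The Phase I calculation and the action-limit check can be sketched as follows; the daily dose is a hypothetical example, while the other inputs are the defaults listed under the formula.

```python
# Minimal sketch of the Phase I PEC calculation and the 0.01 ug/L action-limit check.
def pec_surfacewater_ug_per_l(dose_ai_mg_per_day, f_pen=0.01,
                              waste_l_per_inhab_day=200.0, dilution=10.0):
    pec_mg_per_l = (dose_ai_mg_per_day * f_pen) / (waste_l_per_inhab_day * dilution)
    return pec_mg_per_l * 1000.0      # convert mg/L to ug/L

dose = 20.0                           # hypothetical maximum daily dose (mg per capita per day)
pec = pec_surfacewater_ug_per_l(dose)
print(f"PECsurfacewater = {pec:.3f} ug/L")    # 0.100 ug/L for a 20 mg/day dose
if pec >= 0.01:
    print("action limit reached: Phase II fate and effects assessment required")
```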

 

Phase II: Environmental Fate and Effects Analysis

A Phase II Tier A assessment is conducted by evaluating the PEC/PNEC ratio, based on a base set of fate and effect data and the predicted environmental concentration. If a potential environmental impact is indicated, further testing might be needed to refine the PEC and PNEC values in Tier B.

Under certain circumstances, effects on sediment-dwelling organisms and a terrestrial environmental fate and effects analysis are also required. Experimental studies should follow standard test protocols, e.g. OECD guidelines. It is not acceptable to use QSAR estimations, modelling or extrapolation from, e.g., a substance with a similar mode of action and molecular structure (read-across). This is in clear contrast to other regulations such as REACH.

Human pharmaceuticals are used all year round, without major fluctuations and peaks. The only exception is substances used against colds and influenza, which show a clear peak in consumption in autumn and winter. In developed countries in Europe and North America, antibiotics display a similar peak, as they are often prescribed in combination with substances used against viral infections. The guideline reflects this exposure scenario and asks explicitly for long-term effect tests for all three trophic levels: algae, aquatic invertebrates and vertebrates (i.e., fish).

To assess the physico-chemical fate, the sorption behaviour and the fate in a water/sediment system should be determined, amongst other tests.

 

Figure 2: Scheme of conducting an ERA for Human Medicinal Products according to the EMA guideline

 

 

If, after refinement, the possibility of environmental risks cannot be excluded, precautionary and safety measures may consist of:

  • An indication of potential risks presented by the medicinal product for the environment.
  • Product labelling, Summary Product Characteristics (SPC), Package Leaflet (PL) for patient use, product storage and disposal.

Labelling should generally aim at minimising the quantity discharged into the environment through appropriate mitigation measures.

 

Environmental risk assessment of veterinary pharmaceuticals

In the EU, an Environmental Risk Assessment (ERA) is conducted for all veterinary medicinal products. The structure of an ERA for Veterinary Medicinal Products (VMPs) is quite similar to that of the ERA for Human Medicinal Products. It is also tier-based and starts with an exposure assessment in Phase I. Here, the potential for environmental exposure is assessed based on the intended use of the product. It is assumed that products with limited environmental exposure will have negligible environmental effects, and the assessment can thus stop in Phase I. Some VMPs that might otherwise stop in Phase I as a result of their low environmental exposure may require additional hazard information to address particular concerns associated with their intrinsic properties and use. This approach is comparable to the assessment of human pharmaceutical products described above.

 

Phase I: Estimation of Environmental Exposure

For the exposure assessment, a decision tree was developed (Figure 3). The decision tree consists of a number of questions, and the answers to the individual questions determine the extent of the environmental exposure of the product. The goal is to determine whether environmental exposure is sufficiently significant to require data on hazard properties for characterizing a risk. Products with a low environmental exposure are considered not to pose a risk to the environment, and hence these products do not need further assessment. However, if the outcome of the Phase I assessment is that the use of the product leads to significant environmental exposure, additional environmental fate and effect data are required. Examples of products with a low environmental exposure are, among others, products for companion animals only and products that result in a Predicted Environmental Concentration in soil (PECsoil) of less than 100 µg/kg, based on a worst-case estimation.

 

Figure 3: Phase I Decision Tree for Veterinary Medicinal Products (VMPs); (VICH 2000)

 

 

Phase II: Environmental Fate and Effects Analysis

A Phase II assessment is necessary if either the trigger of 100 µg/kg in the terrestrial branch or the trigger of 1 µg/L in the aquatic branch is reached, as sketched below. It is also necessary if the substance is a parasiticide for food-producing animals. A Phase II is also required for substances that would in principle stop in Phase I, but for which there are indications that an environmental risk at very low concentrations is likely due to their hazardous profile (e.g., endocrine-active medicinal products). This is comparable to the assessment for human pharmaceutical products.
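
The trigger logic described above can be summarized in a small decision function. The numerical triggers are those given in the text; the additional flags are illustrative simplifications of the guideline questions.

```python
# Minimal sketch (assumed simplification) of the Phase II trigger logic for VMPs.
def phase_ii_required(pec_soil_ug_per_kg=None, pec_water_ug_per_l=None,
                      parasiticide_for_food_animals=False, hazard_based_concern=False):
    if parasiticide_for_food_animals or hazard_based_concern:
        return True
    if pec_soil_ug_per_kg is not None and pec_soil_ug_per_kg >= 100.0:   # terrestrial branch
        return True
    if pec_water_ug_per_l is not None and pec_water_ug_per_l >= 1.0:     # aquatic branch
        return True
    return False

print(phase_ii_required(pec_soil_ug_per_kg=140.0))                              # True
print(phase_ii_required(pec_soil_ug_per_kg=30.0))                               # False
print(phase_ii_required(pec_soil_ug_per_kg=30.0, hazard_based_concern=True))    # True
```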

 

For Veterinary Medicinal Products, the Phase II assessment is also sub-divided into several tiers (Figure 4). For Tier A, a base set of studies assessing the physical-chemical properties, the environmental fate and the effects of the active ingredient is required. In Tier A, acute effect tests are suggested, assuming a more peak-like exposure scenario due to, e.g., the application of manure and dung on fields and meadows, in contrast to the continuous exposure to human pharmaceuticals. If a risk is identified for a specific trophic level, e.g. dung fauna or algae (PEC/PNEC ≥ 1; see Introduction to Chapter 6), long-term tests for this level have to be conducted in Tier B. For trophic levels without an identified risk, the assessment can stop. If the risk still applies with these long-term studies, a further refinement with field studies can be conducted in Tier C. Here, cooperation with a competent authority is strongly recommended, as these field studies are tailored to the individual case. In addition, and independently of this, risk mitigation measures can be imposed to reduce the exposure concentration (PEC). These may include, among others, the requirement that animals remain stabled for a certain period after treatment, to ensure that the concentration of the active ingredient in excreta is low enough to avoid adverse effects on dung fauna and their predators. Alternatively, treated animals may be denied access to surface water if the active ingredient has harmful effects on aquatic organisms.

 

Figure 4: Scheme for conducting an ERA for Veterinary Medicinal Products (VMPs) according to the EMA guidelines (VICH 2000; VICH 2004).

 

 

Conclusion

The Environmental Risk Assessment of Human and Veterinary Medicinal Products is a straightforward, tiered process with the possibility to exit at several steps in the assessment procedure. Depending on the dose, the physico-chemical properties and the anticipated use, this can be quite early in the procedure. On the other hand, for very potent substances with specific modes of action, the guidelines are flexible enough to allow specific assessments covering these modes of action.

The ERA guideline for human medicinal products entered into application in 2006, and many data gaps exist for products approved prior to 2006. Although there is a legal requirement for an ERA dossier for all marketing authorisation applications, new applications for pharmaceuticals that were on the market before 2006 are only required to submit ERA data under certain circumstances (e.g. a significant increase in usage). Even for some blockbuster drugs, like ibuprofen, diclofenac and metformin, full information on fate, behaviour and effects on non-target organisms is currently lacking.

Furthermore, systematic post-authorisation monitoring and evaluation of potential unintended ecotoxicological effects does not exist. The market authorisation for pharmaceuticals does not expire, in contrast to e.g. an authorisation of pesticides, which needs to be renewed every 10 years.

For Veterinary Medicinal Products, an in-depth ERA is necessary for food-producing animals only. An ERA for non-food animals can stop with question 3 in Phase I (Figure 3), as it is considered that the use of products for companion animals leads to negligible environmental concentrations, which is not necessarily the case. Here, the guideline does not reflect the state of the art of scientific and regulatory knowledge. For example, the market authorisation as a pesticide or biocide has been withdrawn or strongly restricted for some potent insecticides, like imidacloprid and fipronil, both of which are authorised for use in companion animals.

 

Further Reading

Pharmaceuticals in the Environment:https://www.umweltbundesamt.de/en/publikationen/pharmaceuticals-in-the-environment-the-global
Recommendations for reducing micro-pollutants in waters: https://www.umweltbundesamt.de/publikationen/recommendations-for-reducing-micropollutants-in

6.5.5. European Water Framework Directive

Author:      Piet Verdonschot

Reviewers: Peter von der Ohe, Michiel Kraak

 

Learning objectives:

After finishing this module, you should be able to:

  • summarize the key aspects of the Water Framework Directive, its objectives and philosophy;
  • explain the methodological reasoning behind the Water Framework Directive;
  • understand the role of toxic substances in the Directive and the relation to other stressors.

 

Key words:

EU Water Framework Directive, water types, quality elements, ecological quality ratio, priority substances

 

Introduction

Early water legislation on the European level only began in the seventies with standards for rivers and lakes used for drinking water abstraction and aiming to control the discharge of particular harmful substances. In the early eighties, quality targets were set for drinking water, fishing waters, shellfish waters, bathing waters and groundwater. The main emission control instrument was the Dangerous Substances Directive. Within a decade, the Urban Waste Water Treatment Directive (1991), the Nitrates Directive (1991), the Drinking Water Directive (1998) and the Directive for Integrated Pollution Prevention and Control (1996) followed. Finally, on 23 October 2000, the "Directive 2000/60/EC of the European Parliament and of the Council establishing a framework for the Community action in the field of water policy" or, in short, the EU Water Framework Directive (WFD) was adopted (European Commission, 2000). The key aim of this directive is to achieve good ecological and good chemical status for all waters by 2027. This is captured in the following objectives:

  • expanding the scope of water protection to all waters (not only waters intended for particular uses), surface waters and groundwater;
  • achieving "good status" for all waters by a set deadline;
  • water management based on river basins;
  • combined approach of emission limit values and water quality standards;
  • ensuring that the user bears the true costs of providing and using water;
  • getting the citizen involved more closely;
  • streamlining legislation.

Instead of administrative or political boundaries, the natural geographical and hydrological unit (the river basin) was set as the unit of water management. For each river basin, independent from national frontiers, a river basin management plan needs to be established and updated every six years. Herein, the general protection of ecological water quality, specific protection of unique and valuable habitats, protection of drinking water resources, and protection of bathing water are integrated, assessed and, where necessary, translated into an action plan. Basically, the key requirement of the Directive is that the environment as an entity is protected to a high level, in other words the protection of the ecological integrity applies to all waters. Within five months after the WFD came into force, the Common Implementation Strategy (CIS) was established. The CIS includes, for instance, guidance documents on technical aspects, key events and additional resource documents related to different aspects of the implementation. The links at the end of this chapter provide access to these additional key documents.

 

WFD Methodology

The WFD’s integrated approach to managing water bodies addresses three key components of aquatic ecosystems: water quality, water quantity and physical structure. Furthermore, it implies that the ecological status of water bodies must be determined relative to near-natural reference conditions, which represent a ‘high ecological status’. The determination of ‘good ecological status’ (Figure 1) is based on the quality of the biological community and the hydrological and chemical characteristics, which may deviate only slightly from these reference conditions. To describe reference conditions, a typology of water bodies is needed. In the WFD, water bodies are categorized as rivers, lakes, transitional or coastal waters. Within each of these categories, the type of water body is differentiated based on an ecoregion approach in combination with at least three additional factors: altitude, catchment area and geology (and depth for lakes). The objective of the typology is to ensure that type-specific biological reference conditions can be determined. Additional (optional) descriptors may be used where necessary to achieve sufficient differentiation. Waters in each category are classified as natural, heavily modified or artificial, depending on their origin and the degree of human-induced change.

The WFD requires ecological status and chemical status classification schemes for surface water bodies that differ between the four major water categories, i.e. rivers, lakes, transitional waters and coastal waters. Natural rivers and lakes are assessed in relation to their ecological and chemical reference status, whereas heavily modified and artificial water bodies are assessed in relation to their ecological potential and chemical status. The classification schemes for ecological status and potential make use of several quality elements (QEs; Annex V):

  • Biological quality elements: composition and abundance of algae, macroalgae, higher water plants, benthic invertebrates, and fish;
  • Hydro-morphological quality elements (e.g. water flow, substrate, river morphology);
  • General physicochemical quality elements (e.g. nutrients, chloride, oxygen condition, temperature, transparency, salinity, river basin-specific pollutants);
  • Environmental Quality Standards (EQSs) for synthetic and non-synthetic pollutants.

For the ecological status and ecological potential classification schemes, the Directive provides normative definitions of the degree of human disturbance to each relevant quality element that is consistent with each of the classes for (potential) ecological status (Figure 1). These definitions have been expanded and used in the development of classification tools (assessment systems) and appropriate numeric class boundaries for each quality element. The results of applying these classification tools or assessment systems are used to determine the status (quality class) of each water body or group of water bodies.

Once reference conditions are established, the departure from these can be measured. Boundaries have been defined for the degree of deviation from the reference conditions for each of the WFD ecological status classes. Annex V 1.4.1 of the Directive states: “the results of the (classification) system shall be expressed as ecological quality ratios (EQRs) for the purposes of classification of ecological status. These ratios shall represent the relationship between the values of the biological parameters observed for a given body of surface water and the values for these parameters in the reference conditions applicable to that body. The ratio shall be expressed as a numerical value between zero and one, with high ecological status represented by values close to one and bad ecological status by values close to zero.” (Figure 1). Boundaries are thus the values that separate the 5 classes.

The reference conditions form the anchor point for the whole ecological assessment. The outcomes (scores) of all WFD Quality Elements are combined to inform the overall quality classification of a water body. Here, the one-out-all-out principle applies, meaning that the lowest score for an individual Quality Element determines the final score.
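To make the EQR and the one-out-all-out rule concrete, the minimal sketch below computes EQRs for a few biological quality elements and classifies a water body. The observed values, reference values and class boundaries are hypothetical illustrations; real boundaries are set per water type and quality element.

```python
# Minimal sketch of the EQR calculation and the one-out-all-out rule.
# Observed values, reference values and class boundaries are hypothetical.
QUALITY_CLASSES = [  # (lower EQR boundary, class), checked from high to bad
    (0.8, "high"), (0.6, "good"), (0.4, "moderate"), (0.2, "poor"), (0.0, "bad"),
]

def eqr(observed, reference):
    """Ecological Quality Ratio: observed value relative to the reference value (0-1)."""
    return max(0.0, min(1.0, observed / reference))

def classify(eqr_value):
    """Return the status class whose lower boundary the EQR reaches."""
    for boundary, quality_class in QUALITY_CLASSES:
        if eqr_value >= boundary:
            return quality_class
    return "bad"

# Hypothetical biological quality element scores for one water body (observed, reference)
observations = {"macrophytes": (22, 40), "benthic invertebrates": (35, 50), "fish": (18, 25)}
eqrs = {qe: eqr(obs, ref) for qe, (obs, ref) in observations.items()}

# One-out-all-out: the worst-scoring quality element determines the overall status
worst_qe = min(eqrs, key=eqrs.get)
print({qe: round(value, 2) for qe, value in eqrs.items()})
print(f"Overall status: {classify(eqrs[worst_qe])} (determined by {worst_qe})")
```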

 

Figure 1. Ecological Quality Ratio (altered after Vincent et al., 2002).

 

Priority substances

The environmental quality standards (values for specific pollutants) are set to ensure that aquatic organisms in the water body are not exposed to acute or chronic toxicity, that no accumulation in the ecosystem and no loss of habitats or biodiversity occurs, and that there is no threat to human health. Substances identified as presenting a significant risk to or via the aquatic environment are listed as priority substances. According to the Directive on Environmental Quality Standards (Directive 2008/105/EC), good chemical status is reached for a water body when it complies with the Environmental Quality Standards (EQSs) for all priority substances and for eight other pollutants that are not on the priority substances list. The EQSs set concentration limits for 33 priority substances, 13 of which are designated as priority hazardous substances in surface waters. These concentration limits are derived following the methodologies explained in Section 6.3.4. Furthermore, the Directive on EQSs offers the possibility of applying EQSs for sediment and biota instead of those for water. It also opens the possibility of designating mixing zones adjacent to discharge points, where concentrations of priority substances may be expected to exceed their EQS. Finally, authorities can add basin- or catchment-specific EQSs.

Environmental Quality Standards can be expressed as a maximum allowable concentration (MAC-EQS) or as an annual average value (AA-EQS). For all priority substances, Member States need to establish an inventory of emissions, discharges and losses. To improve the legislation, the EU also (1) introduced biota standards for several substances, (2) improved the efficiency of monitoring and the clarity of reporting for certain substances behaving as ubiquitous persistent, bioaccumulative and toxic (PBT) substances, and (3) added a watch-list mechanism designed to allow targeted EU-wide monitoring of substances of possible concern, to support the prioritization process in future reviews of the priority substances list.
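As a minimal sketch of how compliance with these two types of standard could be checked, the example below compares a year of hypothetical monthly monitoring data against an AA-EQS and a MAC-EQS. The concentrations and standard values are illustrative assumptions, not values taken from the Directive.

```python
# Minimal sketch of checking monitoring data against an annual average EQS (AA-EQS)
# and a maximum allowable concentration EQS (MAC-EQS). All numbers are hypothetical.
from statistics import mean

def complies(monthly_concentrations_ug_l, aa_eqs_ug_l, mac_eqs_ug_l):
    """The substance complies when the annual mean stays at or below the AA-EQS
    and no single measurement exceeds the MAC-EQS."""
    annual_average_ok = mean(monthly_concentrations_ug_l) <= aa_eqs_ug_l
    peak_ok = max(monthly_concentrations_ug_l) <= mac_eqs_ug_l
    return annual_average_ok and peak_ok

# Twelve hypothetical monthly measurements of one priority substance (µg/L)
measurements = [0.02, 0.03, 0.05, 0.04, 0.02, 0.06, 0.03, 0.02, 0.04, 0.05, 0.03, 0.02]
print("Complies with AA-EQS and MAC-EQS:",
      complies(measurements, aa_eqs_ug_l=0.05, mac_eqs_ug_l=0.1))
```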

 

Status classification

Taken together, the classification of surface water bodies follows the scheme provided in Figure 2.

 

Figure 2. Elements of ‘good status’ of surface waters.

 

In summary, under the WFD, the ecological quality status assessment of surface water bodies is primarily based on the biological quality elements phytoplankton, fish, and benthic flora and fauna. In the Netherlands, the worst Biological Quality Element score is taken as the overall final score (one-out-all-out principle). Furthermore, adequate assessment of stream and river hydro-morphology requires consideration of any modifications to the flow regime, sediment transport, river morphology, lateral channel mobility (channel migration) and river continuity. For (groups of) substances, the WFD requires an assessment of their relevance. A substance is relevant when it exceeds its Environmental Quality Standard; in that case the boundary between good and moderate status is exceeded and the water body is de-classified. The overall assessment follows the scheme given in Figure 3.

 

Figure 3. Decision tree for determining the ecological status of surface water bodies based on biological, hydromorphological and physicochemical quality elements according to the normative definitions in Annex V: 1.2 (WFD).

 

References

European Commission (2000). Directive 2000/60/EC. Establishing a framework for community action in the field of water policy. European Commission PE-CONS 3639/1/100 Rev 1, Luxembourg.

Vincent, C., Heinrich, H., Edwards, A., Nygaard, K., Haythornthwaite, J. (2002). Guidance on typology, reference conditions and classification systems for transitional and coastal waters. CIS working group, 2, 119.

6.5.6. Policy on soil and groundwater regulation

Author: Frank Swartjes

Reviewers: Kees van Gestel, Ad Ragas, Dietmar Müller-Grabherr

 

Learning objectives:

After this module, you should be able to:

  • explain how different countries regulate soil contamination issues
  • list some differences between different policy systems on soil and groundwater regulations
  • describe how risk assessment procedures are implemented in policy

 

Keywords: Policy on soil contamination, Water Framework Directive, screening values comparison, Thematic Soil Strategy, Common Forum

 

 

History

Like a bombshell, soil contamination hit the political agenda in the United States and in Europe through a number of disasters in the late 1970s and early 1980s. The starting point was the 1978 Love Canal disaster in upstate New York, USA, where a school and a number of residences had been built on a former landfill containing thousands of tonnes of dangerous chemical waste; the case became a national media event. In Europe, the residential site of Lekkerkerk in the Netherlands became an infamous national event in 1979. Again, a residential area had been built on a former waste dump, which included chemical waste from the paint industry, and channels and ditches had been filled in with chemical waste-containing materials.

Since these events, soil contamination-related policies emerged one after the other in different countries around the world. Crucial elements of these policies were a benchmark date for a ban on bringing pollutants into or onto the soil (‘prevention’), including a strict policy (e.g. a duty of care) for contamination caused after the benchmark date, financial liability for polluting activities, tools for assessing the quality of soil and groundwater, and management solutions (remediation technologies and facilities for disposal).

 

Evolution in soil policies

Objectives in soil policies often evolve over time, and these changes go along with the development of new concepts and approaches for implementing policies. In general, soil policies develop from maximum risk control towards a functional approach. The corresponding tools for implementation usually develop from a set of screening values towards the systematic use of frameworks, enabling sound environmental protection while improving the cost-benefit balance. Consequently, soil policy implementation usually goes through different stages. In general terms, four stages can be distinguished: maximum risk control, the use of screening values, the use of frameworks, and a functional approach. Maximum risk control follows the precautionary principle and is a stringent way of assessing and managing contamination by trying to avoid any risk. Procedures based on screening values allow a distinction between polluted and non-polluted sites, where the former require some kind of intervention. The scientific underpinning of the earliest generations of screening values was limited and expert judgement played an important role. Later, more sophisticated screening values emerged, based on risk assessment. This resulted in screening values for individual contaminants within the contaminant groups metals and metalloids, other inorganic contaminants (e.g. cyanides), polycyclic aromatic hydrocarbons (PAHs), monocyclic aromatic hydrocarbons (including BTEX: benzene, toluene, ethylbenzene and xylenes), persistent organic pollutants (including PCBs and dioxins), volatile organic contaminants (including trichloroethylene, tetrachloroethylene, 1,1,1-trichloroethane and vinyl chloride), petroleum hydrocarbons and, in a few countries only, asbestos. For some contaminants such as PAHs, sum screening values for groups were derived in several countries, based on toxicity equivalents. In a procedure based on frameworks, the same screening values generally act as a trigger for further, more detailed site-specific investigations in one or two additional assessment steps. In the functional approach, soil and groundwater must be suited for the land use (e.g. agricultural or residential land) and the functions (e.g. drinking water abstraction, irrigation) they relate to. Some countries skip the maximum risk control stage, and sometimes also the screening values stage, and adopt a framework and/or a functional approach directly.

 

European collaboration and legislation

In Europe, collaboration was strengthened by concerted actions such as CARACAS (Concerted Action on Risk Assessment for Contaminated Sites in the European Union; 1996-1998) and CLARINET (Contaminated Land Rehabilitation Network for Environmental Technologies; 1998-2001). These concerted actions were followed up by fruitful international networks that are still active today: the Common Forum, a network of contaminated land policy makers, regulators and technical advisors from environment authorities in European Union member states and European Free Trade Association countries, and NICOLE (Network for Industrially Co-ordinated Sustainable Land Management in Europe), a leading forum on industrially co-ordinated sustainable land management in Europe. NICOLE promotes co-operation between industry, academia and service providers on the development and application of sustainable technologies.

In 2000, the EU Water Framework Directive (WFD; Directive 2000/60/EC) was adopted, followed in 2006 by the Groundwater Directive (Directive 2006/118/EC) (European Parliament and the Council of the European Union, 2019b). The environmental objectives are defined by the WFD. Moreover, ‘good chemical status’ and the ‘no deterioration clause’ apply to groundwater bodies. The ‘prevent and limit’ objective aims to control direct or indirect contaminant inputs to groundwater, and distinguishes between ‘preventing’ the input of hazardous substances into groundwater and ‘limiting’ the input of other, non-hazardous substances. Furthermore, the European Commission adopted a Soil Thematic Strategy, with soil contamination being one of the seven identified threats. A proposal for a Soil Framework Directive, launched in 2006 with the objective to protect soils across the EU, was formally withdrawn in 2014 because of a lack of support from some countries.

 

Policies in the world

Today, most countries in Europe and North America, Australia and New Zealand, and several countries in Asia and Central and South America have regulations on soil and groundwater contamination. These policies, however, differ substantially in stage, extent and format. Some policies only cover prevention, e.g. blocking or controlling the input of chemicals onto the soil surface and into groundwater bodies. Other policies cover prevention, risk-based quality assessment and risk management procedures, and include elaborate technical tools that enable a sound and uniform approach. In particular in larger countries such as the USA, Germany and Spain, policies differ between states or provinces. And even in countries with a policy at the federal level, the responsibilities for different steps in the soil contamination chain differ considerably between the layers of authority (national, regional and municipal).

Figure 1 shows the European countries that have a procedure based on frameworks (as described above), including risk-based screening values. It is difficult, if not impossible, to summarise all policies on soil and groundwater protection worldwide. Instead, some general aspects of these policies are given here. A first basic element in nearly all soil and groundwater policies, relating to the prevention of contamination, is the declaration of a formal point in time after which polluting soil and groundwater is considered an illegal act. For soil and groundwater quality assessment and management, most policies follow the risk-based land management approach as the ultimate form of the functional approach described above. Central to this approach are the risks for specific targets that need to be protected up to a specified level. Different protection targets are considered. Not surprisingly, human health is the primary protection target adopted in nearly all countries with soil and groundwater regulations. Moreover, the ecosystem is an important protection target for soil, while for groundwater the ecosystem as a protection target is still under discussion. Another general characteristic of mature soil and groundwater policies is the function-specific approach. The basic principle of this approach is that land must be suited for its purpose. As a consequence, the appraisal of a contaminated site in a residential area, for instance, follows a much more stringent concept than that of an industrial site.

 

Figure 1. European countries that have a soil policy procedure based on frameworks (see text), including risk-based screening values. Figure prepared by Frank Swartjes.

 

Risk assessment tools

Risk assessment tools often form the technical backbone of policies. Since the late 1980s, risk assessment procedures for soil and groundwater quality appraisal have been developed. In the late 1980s, the exposure model CalTOX was developed by the Californian Department of Toxic Substances Control in the USA, followed a few years later by the CSOIL model in the Netherlands (Van den Berg, 1991/1994/1995). In Figure 2, the flow chart of the Dutch CSOIL exposure model is given as an example. As in most exposure models, three elements are recognized in CSOIL: (1) contaminant distribution over the soil compartments; (2) contaminant transfer from (the different compartments of) the soil into contact media; and (3) direct and indirect exposure of humans. The major exposure pathways are soil ingestion, crop consumption and inhalation of indoor vapours (Elert et al., 2011). Today, several exposure models exist (see Figure 3 for some ‘national’ European exposure models). However, these exposure models may give quite different exposure estimates for the same exposure scenario (Swartjes, 2007). A strongly simplified sketch of how such a model aggregates exposure pathways is given below, after Figure 3.

 

Figure 2. Flow chart of the Dutch CSOIL exposure model.

 

Figure 3. Some ‘national’ European soil exposure models, projected on the country in which they are used. Figure prepared by Frank Swartjes.
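The sketch below illustrates, in strongly simplified form, how an exposure model combines pathways such as soil ingestion, crop consumption and indoor vapour inhalation into an aggregate daily intake that can be compared with a tolerable daily intake. The equations and parameter values are hypothetical simplifications for illustration only, not the actual CSOIL formulation.

```python
# Minimal, illustrative sketch of how an exposure model aggregates exposure pathways.
# Equations and parameter values are hypothetical simplifications, not the CSOIL equations.

def soil_ingestion(c_soil_mg_kg, ingestion_g_day, body_weight_kg):
    """Daily intake via soil ingestion (mg per kg body weight per day)."""
    return c_soil_mg_kg * (ingestion_g_day / 1000.0) / body_weight_kg

def crop_consumption(c_soil_mg_kg, bcf_plant, consumption_kg_day, body_weight_kg):
    """Daily intake via home-grown crops, using a soil-to-plant bioconcentration factor."""
    c_crop_mg_kg = c_soil_mg_kg * bcf_plant
    return c_crop_mg_kg * consumption_kg_day / body_weight_kg

def vapour_inhalation(c_air_mg_m3, inhalation_m3_day, body_weight_kg):
    """Daily intake via inhalation of indoor vapours."""
    return c_air_mg_m3 * inhalation_m3_day / body_weight_kg

# Hypothetical site and receptor parameters
total_intake = (
    soil_ingestion(c_soil_mg_kg=50.0, ingestion_g_day=0.1, body_weight_kg=70.0)
    + crop_consumption(c_soil_mg_kg=50.0, bcf_plant=0.02, consumption_kg_day=0.3, body_weight_kg=70.0)
    + vapour_inhalation(c_air_mg_m3=0.001, inhalation_m3_day=20.0, body_weight_kg=70.0)
)

tolerable_daily_intake = 0.01  # mg per kg body weight per day, hypothetical
print(f"Aggregate intake: {total_intake:.4f} mg/kg bw/day")
print("Exceeds TDI" if total_intake > tolerable_daily_intake else "Below TDI")
```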

 

Moreover, procedures were developed for ecological risk assessment, including Species Sensitivity Distributions (see the section on SSDs), which are based on empirical relations between the concentration in soil or groundwater and the percentage of species or ecological processes that experience adverse effects (PAF: Potentially Affected Fraction). For site-specific risk assessment, the TRIAD approach was developed, based on three lines of evidence: chemistry, toxicity and ecological field surveys (see the section on the TRIAD approach).
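A minimal numerical sketch of such an empirical relation is given below: assuming a log-normal Species Sensitivity Distribution, the Potentially Affected Fraction at a given concentration is the cumulative fraction of species whose sensitivity values lie below that concentration. The toxicity data used here are hypothetical illustrations.

```python
# Minimal sketch of a log-normal Species Sensitivity Distribution (SSD):
# the Potentially Affected Fraction (PAF) at an environmental concentration is the
# cumulative distribution of log10 species sensitivity values (e.g. NOECs).
import math
import statistics

def paf(concentration, log10_noecs):
    """Fraction of species exposed above their NOEC at the given concentration."""
    mu = statistics.mean(log10_noecs)
    sigma = statistics.stdev(log10_noecs)
    z = (math.log10(concentration) - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# Hypothetical NOECs (mg/kg soil) for eight species
noecs = [0.5, 1.2, 3.0, 4.5, 10.0, 22.0, 35.0, 80.0]
log10_noecs = [math.log10(x) for x in noecs]

for c in (1.0, 5.0, 50.0):
    print(f"PAF at {c} mg/kg: {paf(c, log10_noecs):.0%}")
```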

Within the HERACLES network, another attempt was made to summarize the different EU policies on polluted soil and groundwater. A strong plea was made for the harmonisation of risk assessment tools (Swartjes et al., 2009). The authors also described a procedure for harmonization based on the development of a toolbox with standardized and flexible risk assessment tools. Flexible tools are meant to cover national or regional differences in cultural, climatic and geological conditions (e.g. soil type, depth of the groundwater table). It is generally acknowledged, however, that policy decisions should be taken at the national level. Earlier, in 2007, an analysis had been published of the differences between soil and groundwater screening values and between the underlying regulatory frameworks and human health and ecological risk assessment procedures (Carlon, 2007). Although screening values are difficult to compare, since the frameworks and objectives of screening values differ significantly, a general conclusion can be drawn for, e.g., the screening values at the potentially unacceptable risk level (often used as ‘action’ values, i.e. values that trigger further research or intervention when exceeded). For the 20 metals considered, most soil screening values (from 13 countries or regions) differ by a factor of 10 to 100 between the lowest and highest values. For the 23 organic pollutants considered, most soil screening values (from 15 countries or regions) differ by a factor of 100 to 1000, and for some organic pollutants by more than four orders of magnitude. These conclusions are mainly relevant from a policy viewpoint. Technically, they are less relevant, since the screening values are derived for a combination of different protection targets, with different tools and based on different policy decisions. Differences in screening values are explained by differences in geographical, biological and socio-cultural factors between countries and regions, different national regulatory and policy decisions, and variability in scientific and technical tools.
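To show how such spread statistics are obtained, the short sketch below computes the factor, and the corresponding number of orders of magnitude, between the lowest and highest screening value for a single contaminant across a handful of jurisdictions. The values are hypothetical and not taken from Carlon (2007).

```python
# Minimal sketch of summarizing the spread in national soil screening values as the
# factor (and orders of magnitude) between the lowest and highest value.
# The values below are hypothetical, not taken from Carlon (2007).
import math

screening_values_mg_kg = {"country A": 0.5, "country B": 3.0, "country C": 12.0, "country D": 55.0}

lowest = min(screening_values_mg_kg.values())
highest = max(screening_values_mg_kg.values())
factor = highest / lowest
print(f"Spread: factor {factor:.0f} ({math.log10(factor):.1f} orders of magnitude)")
```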

 

 

References

Carlon, C. (Ed.) (2007). Derivation methods of soil screening values in Europe. A review and evaluation of national procedures towards harmonisation, JRC Scientific and Technical report EUR 22805 EN.

Elert, M., Bonnard, R., Jones, C., Schoof, R.A., Swartjes, F.A. (2011). Human Exposure Pathways. Chapter 11 in: Swartjes, F.A. (Ed.), Dealing with Contaminated Sites. From theory towards practical application. Springer Publishers, Dordrecht.

Swartjes, F.A. (2007). Insight into the variation in calculated human exposure to soil contaminants using seven different European models. Integrated Environmental Assessment and Management 3, 322–332.

Swartjes, F.A., D’Allesandro, M., Cornelis, Ch., Wcislo, E., Müller, D., Hazebrouck, B., Jones, C., Nathanail, C.P. (2009). Towards consistency in risk assessment tools for contaminated sites management in the EU. The HERACLES strategy from the end of 2009 onwards. National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands, RIVM Report 711701091.

Van den Berg, R. (1991/1994/1995). Exposure of humans to soil contamination. A quantitative and qualitative analyses towards proposals for human toxicological C‑quality standards (revised version of the 1991/ 1994 reports). National Institute for Public Health and the Environment (RIVM), Bilthoven, The Netherlands, RIVM-report no. 725201011.

 

Further reading

Swartjes, F.A. (Ed.) (2011). Dealing with Contaminated Sites. From theory towards practical application. Springer Publishers, Dordrecht.

Rodríguez-Eugenio, N., McLaughlin, M., Pennock, D. (2018). Soil Pollution: a hidden reality. Rome, FAO.

 

6.5.7. Drinking water

in preparation

6.6. Risk management and risk communication


Author: Ad Ragas

Reviewer: Herman Eijsackers

Learning objectives

After this module, you should be able to:

  • indicate which stakeholders are involved in chemical risk assessment and what their interests are;
  • characterize the main policy principles used in chemical risk management;
  • outline the main characteristics of the IRGC framework.

Keywords: risk management, DPSIR, stakeholders, ALARA, precautionary principle

Chemical risk management is the process that aims to control the risks caused by the production and use of chemicals in society. In order to understand the risk management process, it is important to understand the interests and stakeholders involved in the production and use of chemicals on the one side and the adverse effects of chemicals on the other. This can be illustrated with the DPSIR chain that was introduced in section 1.2.

Figure 1: The processes, interests and stakeholders involved in a chemical risk issue, illustrated by means of the DPSIR chain. White boxes with solid lines represent the DPSIR chain, red boxes represent the main stakeholders, and green boxes represent the associated and conflicting values and interests of the stakeholders.

The DPSIR chain in Figure 1 shows that there are different groups in society that have an interest in the production and use of chemicals, i.e. the consumers using the products in which these chemicals are contained and the producers and retailers that make money from the production and sale of these products. On the other hand, there are stakeholders that have an interest in the endpoints affected by the chemicals when these reach the environment, e.g. people who work in production facilities, members of the general public worried about their health, and people worried about ecosystem health. These stakeholder groups can partly overlap, e.g. the health of consumers benefitting from chemical products may be affected by the adverse impacts of those same products. However, there is often some kind of incongruity between the people benefitting and the people affected, e.g. when future generations are confronted with pollution problems caused by current generations, or when people living downstream in a river basin are confronted with pollution caused upstream. This can result in a conflict of interests: the people affected demand action from the people benefitting. If pollution can easily be avoided, this will not be a problem (after all, consumers and producers are not aiming to pollute the environment), but action is not always that simple. The government then comes into the picture as an important mediating agency, since it can define rules that all stakeholders have to adhere to. Scientists also play an important role, e.g. by studying the extent of the risks (risk assessors) and by developing interventions that can reduce risks. Risk management thus is a process that involves different stakeholders, each with different interests and a different role to play in the process.

 

Linear risk management

There are different ways to organize the risk management process. The conventional way is to arrange it as a linear process, roughly consisting of the following steps:

  1. recognition and definition of the risk problem, often led by the government (politicians and policy makers), sometimes after pressure from different stakeholder groups in society;
  2. establishment of the nature and extent of the risk by scientists, often in isolation;
  3. identification and selection of risk reduction measures (if needed), often led by the government and often in collaboration with primary stakeholders (i.e. the producers and consumers);
  4. communication of the risk management strategy (i.e., the risk and risk reduction measures) to the stakeholders, including the general public. This communication process is typically led by the government and often consists of a unidirectional flow of information (primarily scientific information) from the government to the stakeholders.

This way of arranging the risk management process is strongly rooted in the belief that chemical risk is a strictly defined concept that can be objectively measured and quantified by means of scientific methods. It is reflected in many risk regulations such as the system of environmental quality standards (EQSs) of the European Water Framework Directive, exposure standards for the workplace and air quality regulations.

 

Risk management principles

The aim of chemical risk management is to control and reduce the risks of chemicals. A quantitative estimate of the risk is an important ingredient in this process, but definitely not the only one. Chemical risk management is often based on the application of various policy principles that can be applied in isolation or combination. Some important principles in the risk management of chemicals include:

  • As Low As Reasonably Achievable (ALARA). This principle stresses the fact that environmental pollution should be avoided whenever feasible. The principle is often applied regardless of the risk caused by a pollutant and plays an important role in environmental licensing of production facilities.
  • Polluter Pays Principle (PPP). This principle states that the polluter should pay for polluting the environment. This principle forms the basis for the taxation of polluting activities such as waste disposal and sewer discharges. Ideally, these taxes should then be used to prevent and reduce environmental contamination (e.g. to build and maintain wastewater treatment plants; WWTPs), but this is not always the case.
  • Precautionary Principle. This is probably one of the most debated principles when it comes to environmental regulation and it is also highly relevant for chemical risk assessment. There are various definitions of the precautionary principle, but the most widely accepted definition states that scientific certainty is not required before taking preventive measures. The precautionary principle thus is a way to deal with uncertainty. It enables policy makers to take preventive action in the absence of absolute certainty. However, there must be a legitimate reason for concern. This is also what complicates operationalisation of the precautionary principle: when is there “sufficient reason for concern to take preventive action”? This ultimately is a normative choice that strongly depends on the stakes and interests involved.
  • No data, no market. This policy principle is one of the fundamental principles underlying Europe’s chemical legislation (REACH; section 6.5). It is related to the polluter pays principle in the sense that it puts the burden of proof that a risk is acceptable (and the associated costs) on the shoulders of those who produce or import a chemical. The principle is increasingly used in environmental legislation: producers should prove that the chemicals and products that they put on the market are safe and sustainable.

 

The IRGC framework

Over the last few decades, the belief that risk is a strictly defined concept that can be objectively quantified has increasingly been challenged. That chemical risk is not really a strictly defined concept becomes clear when one realizes that mixture effects have been ignored for decades but are now increasingly included in chemical risk assessments. Moreover, not all stakeholders value risks in the same way, as explained in section 6.7 on risk perception. Although the scientists and risk assessors performing the risk assessment generally do their best to assess risk as objectively as possible, they must make subjective assumptions. The choice of endpoints, the definition of unacceptable effects and the magnitude of uncertainty factors are controversial topics, based on implicit political choices. Questions about risk often have no scientific answers, or the answers are multiple and contestable. This has led to suggestions to rearrange the traditional linear risk management process into a process in which stakeholders are much more involved. One of these suggestions is the framework developed by the International Risk Governance Council (Figure 2; IRGC, 2017). This framework provides guidance for the early identification and handling of risks, involving multiple stakeholders. It recommends an inclusive approach to frame, assess, evaluate, manage and communicate important risk issues, which are often marked by complexity, uncertainty and ambiguity. The framework is generic and can be tailored to various risks and organisations. It comprises four interlinked elements and three cross-cutting aspects:

1. Pre-assessment – Identification and framing.

  • Leads to framing the risk, early warning, and preparations for handling it,
  • Involves relevant actors and stakeholder groups, so as to capture the various perspectives on the risk, its associated opportunities, and potential strategies for addressing it.

2. Appraisal – Assessing the technical and perceived causes and consequences of the risk.

  • Develops and synthesises the knowledge base for the decision on whether or not a risk should be taken and/or managed and, if so,
  • Identifies and selects what options may be available for preventing, mitigating, adapting to or sharing the risk.

3. Characterisation and evaluation – Making a judgement about the risk and the need to manage it. 

  • Process of comparing the outcome of risk appraisal (risk and concern assessment) with specific criteria,
  • Determines the significance and acceptability of the risk, and
  • Prepares decisions.

4. Management – Deciding on and implementing risk management options.

  • Designs and implements the actions and remedies required to avoid, reduce (prevent, adapt, mitigate), transfer or retain the risks.

5. Cross-cutting aspects – Communicating, engaging with stakeholders, considering the context.

  • Crucial role of open, transparent and inclusive communication,
  • Importance of engaging stakeholders to both assess and manage risks, and
  • Need to deal with risk in a way that fully accounts for the societal context of both the risk and the decision that will be taken.

 

Figure 2: The risk governance framework of the IRGC (2017).

 

References

IRGC [International Risk Governance Council], 2017. Introduction to the IRGC risk governance framework. Revised version. Lausanne: EPFL International Risk Governance Center.

6.7. Risk perception

Author: Fred Woudenberg

Reviewer: Ortwin Renn

 

Learning objectives:

  • To list and memorize the most important determinants of risk perception
  • To look at and try to understand the influence of risks in situations or activities which you or others encounter or undertake in daily life
  • To actively look for as many situations and activities as possible in which the annual risk of getting sick, being injured or dying has little influence on risk perception
  • To look for examples in which experts (if possible, ask them) react like lay people in their own daily lives

 

Key words: Risk perception, fear, worry, risk, context

 

 

Introduction

If risk perception had a first law like toxicology has with Paracelsus’ “Sola dosis facit venenum” (see section on History of Environmental Toxicology) it would be:

“People fear things that do not make them sick and get sick from things they do not fear.”

 

People can, for instance, worry greatly about a newly discovered soil pollution site in their neighborhood, which they hear about at a public meeting they have driven to in their diesel car, and, when returning home, light an extra cigarette without thinking, to relieve the stress.

 

Sources: Left: File photo: Kalamazoo City Commissioner Don Cooney and Kalamazoo residents march to protest capping the Allied Paper Landfill, May 2013. https://www.wmuk.org/post/will-bioremediation-work-allied-microbial-ecologist-skeptical. Right: https://desmotivaciones.es/352654/Obama

 

The explanation for this first law is quite simple. The annual risk of getting sick, being injured or dying has only a limited influence on the perception of a risk. Other factors are more important. Figure 1 shows a model of risk perception in its most basic form.

 

Figure 1. Simplified model of risk perception

 

In the middle of this figure, there is a list of factors which determine risk perception to a large extent. In any given situation, each of them can end up on the left, safe side or on the right, dangerous side. The model is a simplification. Research since the late 1960s has led to a collection of many more factors that are often interconnected (for lectures by some well-known researchers, see examples 1, 2, 3, 4 and 5).

 

Why do people fear soil pollution?

An example can show this interconnection and the discrepancy between the annual health risks (at the top of Figure 1) and the other factors. The risk of harmful health effects for people living on polluted soil is often very small. The factor ‘risk’ thus ends up on the left, safe side. Most of the other factors end up on the right. People do not voluntarily choose to have polluted soil in their garden. They have absolutely no control over the situation or over an eventual remediation. For this, they depend on authorities and companies. Nowadays, trust in authorities and companies is low. Many people will suspect that these authorities care more about their money than about the health and well-being of their citizens and neighbours. A newly discovered soil pollution will get local media attention, and this will certainly be the case if there is controversy. If the distrusted authorities share their conclusion that the risks are low, many people will suspect that they are withholding information and are not being completely open. Especially saying that there is ‘no cause for alarm’ will only make people worry more (see a funny example). People will not believe the authorities’ conclusion that the risk is low, so effectively all factors end up on the dangerous side.

 

Why smokers are not afraid

For smoking a cigarette, the evaluation is the other way around. Almost everybody knows that smoking is dangerous, but people light their cigarettes themselves. Most people at least think they have control over their smoking habit, as they can decide to stop at any moment (but, being addicted, they probably highly overestimate their level of control). For their information or for taking measures, people do not depend on others, and no information about smoking is withheld. Some smokers suffer from what is called optimistic bias, the idea that misery only happens to others. They always have the example of their grandfather who started smoking at 12 and still ran a marathon at 85.

People can be upset if they learn that cigarette companies purposely make cigarettes more addictive. It makes them feel the company takes over control which people greatly resent. This, and not the health effects, can make people decide to quit smoking. This also explains why passive smoking is more effective than active smoking in influencing people’s perceptions. Although the risk of passive smoking is 100 times smaller than the risk of active smoking, most factors end up at the right, dangerous side, making passive smoking maybe 100 times more objectionable and worrisome than active smoking.

 

Experts react like lay people at home

Many people are surprised to find out that calculated or estimated health risks influence risk perception so little. But we experience it in our own daily lives, especially when we add another factor to the model: advantages. All of us perform risky activities because they are necessary, come with advantages or are simply fun. Most of us take part in daily traffic, with an annual risk of dying far higher than 1 in a million. Once, twice or even more times a year we go on holiday, with a multitude of risks: transport, microbes, robbery, divorce. The thrill seekers among us go diving, mountain climbing or parachute jumping without even knowing the annual fatality rates. If the stakes are high, people can knowingly risk their lives in order to improve them, as the thousands of migrants trying to cross the Mediterranean illustrate, or even give their lives for a higher cause, like soldiers at war (Winston Churchill in 1940: "I have nothing to offer but blood, toil, tears and sweat").

An example from the other side may show this even more clearly. No matter how small a risk is, it can be totally unacceptable and nonsensical. Suppose the government starts a new lottery with an extremely small chance of winning, say one in a billion. Every citizen must play and tickets are free. So far nothing strange, but there is a twist. The main and only prize of the lottery is a public execution, broadcast live on national TV. The government will probably not make itself very popular with this absurd lottery. When the government, as is still done, tells people they have to accept a small risk because they accept larger risks from activities they choose themselves, it makes people feel they have been given a ticket in the above-mentioned lottery. This is how people can feel if the government tells them that the risk of the polluted soil they live on is extremely small and that it would be wiser for them to quit smoking.

 

All risks have a context

A main lesson which can be learned from the study of risk perception is that risks always occur in a context. A risk is always part of a situation or activity which has many more characteristics than just the chance of getting sick, being injured or dying. We do not judge risks, we judge situations and activities of which the risk is often only a small part. Risk perception occurs in a rich environment. After 50 years of research a lot has been discovered, but predicting how angry or afraid people will be in a new, unknown situation is still a daunting task.

This module, 6. Risk Assessment & Regulation, was created with Wikiwijs (Kennisnet) and last modified on 21 November 2024. The material is published under the Creative Commons Attribution 4.0 International licence: it may be shared and adapted in any medium or format, for any purpose, including commercial purposes, provided appropriate attribution is given. Source arrangement: Environmental Toxicology, an open online textbook, https://maken.wikiwijs.nl/147644/Environmental_Toxicology__an_open_online_textbook
