EPA-Expo-Box (A Toolbox for Exposure Assessors)
Uncertainty and Variability
The following frequently asked questions (FAQs) provide information on the concepts of variability and uncertainty in exposure assessment. A list of resources that provide guidance on assessing uncertainty and variability in exposure and risk assessments follows. For further information refer to the course materials, including the participant reading packet for EXA 407: Assessing Uncertainty and Variability in the Context of Exposure Assessment developed for the Risk Assessment Training Experience (RATE) program.
FAQs on Uncertainty and Variability
- What is the difference between variability and uncertainty?
- What factors contribute to variability and uncertainty in exposure assessment?
- How do variability and uncertainty affect risk assessment?
- How can an exposure assessment be designed to ensure variability is well-characterized and uncertainty is limited?
- How are variability and uncertainty addressed in risk assessment?
What is the difference between variability and uncertainty?
Variability refers to the inherent heterogeneity or diversity of data in an assessment. It is "a quantitative description of the range or spread of a set of values" (U.S. EPA, 2011) and is often expressed through statistical metrics, such as variance, standard deviation, and interquartile range, that reflect the spread of the data. For example, body weight varies among members of a study population. The average body weight of the population can be characterized by collecting data; collecting an exact measured body weight from each study participant allows a better understanding of the population's average body weight than estimating body weights using an indirect approach (e.g., approximating based on visual inspection). But the assessor cannot change the individual body weights of the study population and therefore cannot decrease the variability in the population.

Uncertainty refers to a lack of data or an incomplete understanding of the context of the risk assessment decision. It can be either qualitative or quantitative (U.S. EPA, 2011). Qualitative uncertainty may be due to a lack of knowledge about the factors that affect exposure, whereas quantitative uncertainty may come from the use of imprecise measurement methods. For example, chemical concentrations in environmental media can be approximated using assumptions (more uncertainty) or described using measured data (less uncertainty). Uncertainty can be introduced when defining exposure assumptions, identifying individual parameters (i.e., data), making model predictions, or formulating judgments of the risk assessment.
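The statistical metrics named above can be computed directly. The sketch below uses illustrative body weight values (not data from any EPA survey) to show how variance, standard deviation, and interquartile range each summarize the spread of a sample:

```python
# Sketch: the variability metrics named above (variance, standard
# deviation, interquartile range) for a hypothetical sample of adult
# body weights. The weights are illustrative values, not EPA data.
import statistics

body_weights_kg = [58.2, 63.5, 70.1, 74.8, 81.3, 68.9, 77.6, 60.4, 85.0, 72.2]

mean_bw = statistics.mean(body_weights_kg)
variance = statistics.variance(body_weights_kg)  # sample variance
std_dev = statistics.stdev(body_weights_kg)      # sample standard deviation

# Interquartile range: difference between the 75th and 25th percentiles.
q1, _, q3 = statistics.quantiles(body_weights_kg, n=4)
iqr = q3 - q1

print(f"mean = {mean_bw:.1f} kg, variance = {variance:.1f}, "
      f"std dev = {std_dev:.1f} kg, IQR = {iqr:.1f} kg")
```

Collecting more measurements would sharpen the estimate of the mean (less uncertainty), but the spread captured by these metrics is a property of the population itself.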
What factors contribute to variability and uncertainty in exposure assessment?
In an exposure assessment, variability may be present in the measured exposure data and in the exposed population itself, while uncertainty might result from how the exposure scenario, the parameters, and the models are characterized.

For example, when considering a population's exposure to urban air pollution, variability may exist in the measured pollution concentrations because of when, where, and how the different measurements were taken (e.g., weekday vs. weekend measurements near roads in a city's financial district can be very different; measurements from personal air monitoring devices will vary across individuals due to behavior, such as how much time they spend outdoors). Variability may also exist in the population itself, which could explain variability in the exposure measurements. For example, members of a study population who bike to work on busy roads may have higher exposure to air pollution than members who commute in a vehicle with the windows closed, and younger members of the study population may have faster breathing rates than older members, resulting in greater exposures.

Uncertainty in risk assessment can be present in the characterization of the exposure scenario, the parameter estimates, and the model predictions.
For example, grouping individuals with unique measured exposure levels into categories of exposure ranges can introduce aggregation errors and subsequent uncertainty. Incomplete analysis might occur if a certain exposure pathway is not considered, introducing uncertainty in the total estimate of exposure.
Parameter estimates can have uncertainty due to random errors in measurement or sampling techniques (e.g., imprecise monitoring instruments or the choice of a less precise technique) or systematic biases in measurements (e.g., total exposure estimates that consistently omit the contribution of a specific exposure route). Parameter estimates can also include uncertainty due to the use of surrogate data, misclassification, or random sampling error.
Finally, model uncertainty occurs due to a lack of information or gaps in scientific theory required to make accurate predictions. Model uncertainty can be the result of incorrect inference of correlations or relationships within the model, oversimplification of situations in the model, or incompleteness of the model. Use of surrogate data instead of specific, measured data or a failure to account for correlations between variables can also contribute to model uncertainty.
How do variability and uncertainty affect risk assessment?
When variability is not characterized and uncertainty is high there is less confidence in the exposure and risk estimates; characterizing variability and reducing uncertainty increases the confidence in the estimates. A risk assessment report should also address variability and uncertainty to increase transparency and understanding of the assessment. Addressing variability and uncertainty can inform decision makers about the reliability of results and guide the process of refining the exposure assessment.
However, not all exposure evaluations are of the same complexity, and thus the level of complexity in evaluating uncertainty and variability can vary from one assessment to another. A tiered approach, starting with a simple assessment, is sometimes used to determine whether additional evaluation is required to further address uncertainty and variability. See the Tiers and Types Tool Set of EPA-Expo-Box for more information on using a tiered approach for exposure assessment.
How can an exposure assessment be designed to ensure variability is well-characterized and uncertainty is limited?
Variability and uncertainty will exist in any assessment. Before conducting an exposure assessment, it might be helpful to consider the following questions to ensure that the study design limits uncertainty and that potential variability is appropriately characterized.
Note that this list is intended as a starting point; it is not a complete list of items that should be considered. Each assessment design will have unique considerations for uncertainty and variability. Further, based on the nature of the assessment, some of these questions might not be applicable (e.g., some assessments do not use measured data).
- Will the assessment collect environmental media concentrations or tissue concentrations (or other biomarkers) as a marker of exposure?
- What is the detection limit of the equipment used to measure chemical concentrations in environmental media or tissue samples?
- What is the sensitivity of the methods used to identify outcomes?
- Which characteristics of the study population might play a role in understanding the findings?
How are variability and uncertainty addressed in risk assessment?
Variability can be presented in a number of ways, including tabular outputs, probability distributions, or qualitative discussion. Numerical or graphical descriptions of variability include percentiles, ranges of values, mean values, and variance measures (such as confidence intervals or standard deviations). A probability distribution can be presented graphically, for example as a central tendency value together with a confidence interval or standard deviation.
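As a sketch of the numerical descriptions listed above, the following builds a simple tabular summary of variability from a small set of hypothetical exposure measurements (the values and units are illustrative assumptions):

```python
# Sketch of a tabular presentation of variability: percentiles, range,
# mean, and standard deviation for hypothetical exposure measurements.
import statistics

exposures = [3.1, 4.7, 2.8, 5.9, 4.2, 3.6, 6.4, 2.5, 5.1, 4.0]  # e.g., ug/m3

pcts = statistics.quantiles(exposures, n=20)  # cut points in 5% steps
summary = {
    "min": min(exposures),
    "5th percentile": pcts[0],
    "median": statistics.median(exposures),
    "95th percentile": pcts[18],
    "max": max(exposures),
    "mean": statistics.mean(exposures),
    "std dev": statistics.stdev(exposures),
}
for label, value in summary.items():
    print(f"{label:>16}: {value:.2f}")
```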
NRC’s Science and Judgment in Risk Assessment (1994) outlines the following techniques for addressing variability: ignoring variability; disaggregating variability; using an average value or maximum or minimum value; and bootstrapping or probabilistic techniques (e.g., Monte Carlo analysis).
- Ignore Variability. Completely ignoring variability is not recommended, but it can be acceptable in conjunction with other techniques. Ignoring variability requires first considering the possible consequences of doing so: it is useful only when the variability is small and all estimates are close to the assumed value. One example is EPA's default assumption that all adults weigh 70 kg. This estimate ignores the variability in adult body weights, but it is correct to within 25% for most adults. Therefore, "ignoring" variability in adult body weight requires prior knowledge of that variability, followed by a deliberate decision not to use it.
- Disaggregate Variability. Variability can be better characterized in the report by disaggregating the data into categories. For example, to characterize inter-individual variability, general population data can be disaggregated into categories by sex or age. To characterize temporal variability, data taken at different time points can be disaggregated by sampling time.
- Use Minimum/Maximum or Average Values. Using an average value is not the same as ignoring variability; the average is a reliably estimated value with well-characterized bounds on its distribution. An average would not be useful if the variability is dichotomous, in which case the average does not correspond to any value that actually occurs. For best- or worst-case scenarios, a minimum or maximum value is sometimes used instead of a value that characterizes the variability, as long as it is acknowledged that the assessment is based on an extreme situation.
- Bootstrapping or Probabilistic Techniques. Variability may also be addressed using probabilistic techniques, such as Monte Carlo analysis, that calculate a distribution of risk from repeated sampling of the probability distributions of variables. When the distribution for a parameter is unknown, bootstrapping can be used to estimate confidence intervals around specific exposure parameters by resampling from empirical distributions.
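The two probabilistic techniques above can be sketched with a simple exposure calculation. Everything here is an illustrative assumption, not a method from the source document: the dose equation (concentration x intake rate / body weight) and all distribution parameters are hypothetical.

```python
# Sketch: Monte Carlo analysis and bootstrapping for a hypothetical
# dose estimate. Equation and distributions are illustrative only.
import random
import statistics

random.seed(1)

def simulate_dose():
    conc = random.lognormvariate(0.0, 0.5)           # concentration (mg/L)
    intake = random.normalvariate(2.0, 0.4)          # water intake (L/day)
    body_weight = random.normalvariate(70.0, 12.0)   # body weight (kg)
    return conc * intake / max(body_weight, 1.0)     # dose (mg/kg-day)

# Monte Carlo analysis: repeated sampling from the input distributions
# yields a distribution of doses; percentiles characterize its spread.
doses = sorted(simulate_dose() for _ in range(10_000))
p50 = doses[len(doses) // 2]
p95 = doses[int(len(doses) * 0.95)]

# Bootstrapping: resample the empirical doses with replacement to put an
# approximate 95% confidence interval around the mean dose.
boot_means = sorted(
    statistics.mean(random.choices(doses, k=len(doses))) for _ in range(200)
)
ci_low, ci_high = boot_means[4], boot_means[194]

print(f"median dose = {p50:.3f}, 95th percentile = {p95:.3f} mg/kg-day")
print(f"bootstrap ~95% CI for mean dose: ({ci_low:.3f}, {ci_high:.3f})")
```

Note the division of labor: the Monte Carlo step propagates the assumed input distributions through the model, while the bootstrap step works only from the resulting empirical distribution, as described above for parameters whose distribution is unknown.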
Unlike variability, uncertainty can often be reduced by collecting more and better data. Quantitative methods to address uncertainty include non-probabilistic approaches, such as sensitivity analysis, and probabilistic methods, such as Monte Carlo analysis. Uncertainty can also be addressed in a qualitative discussion that presents the level of uncertainty, identifies data gaps, and explains any subjective decisions or instances where professional judgment was used.
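A one-at-a-time sensitivity analysis, one of the non-probabilistic approaches mentioned above, can be sketched as follows. The dose equation, baseline values, and input ranges are illustrative assumptions, not values from the source document:

```python
# Sketch of a one-at-a-time sensitivity analysis: vary each input of a
# hypothetical dose equation across a plausible range while holding the
# others at baseline, and compare the resulting swings in the output.

def dose(conc, intake, body_weight):
    """Hypothetical dose (mg/kg-day) = concentration x intake / body weight."""
    return conc * intake / body_weight

baseline = {"conc": 1.0, "intake": 2.0, "body_weight": 70.0}
ranges = {
    "conc": (0.5, 2.0),           # mg/L
    "intake": (1.0, 3.0),         # L/day
    "body_weight": (50.0, 90.0),  # kg
}

# The size of each output swing ranks the inputs by influence, pointing
# to the parameters where reducing uncertainty matters most.
swings = {}
for name, (lo, hi) in ranges.items():
    low = dose(**dict(baseline, **{name: lo}))
    high = dose(**dict(baseline, **{name: hi}))
    swings[name] = abs(high - low)

for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: output swing = {swing:.4f} mg/kg-day")
```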
Resources for Assessing Uncertainty and Variability
The table below provides links to and descriptions of some resources that provide guidance on assessing uncertainty and variability in exposure and risk assessments.
Tools for Assessing Uncertainty and Variability
NRC (National Research Council). (1994). Science and judgment in risk assessment. Washington, DC: National Academy Press. http://www.nap.edu/openbook.php?isbn=030904894X
U.S. EPA (U.S. Environmental Protection Agency). (2011). Exposure factors handbook 2011 edition (final) [EPA Report]. (EPA/600/R-09/052F). http://cfpub.epa.gov/ncea/cfm/recordisplay.cfm?deid=236252