Region 8

HH: Exposure Assessment

In this step, the risk assessor seeks to quantify the amount of exposure from site contamination that is likely to occur for each type of population (e.g., residents, workers, visitors) and for each exposure pathway (e.g., ingestion of contaminated water, inhalation of chemicals in air, ingestion of contaminated soil or food) that is identified as being of potential concern in the Site Conceptual Model. These calculations require data on the amount (concentration) of each site-related chemical in each environmental medium, as well as knowledge of the amount of contact each population has with each medium. As described in RAGS I Part A, the basic equation usually takes the following form:

DI = C × (IR / BW) × (EF × ED / AT)

where:

DI = daily intake of chemical (mg/kg-d)
C = concentration of chemical in an environmental medium (e.g., mg/kg for soil or food, mg/L for water, mg/m³ for air)
IR = intake rate of the environmental medium (e.g., kg/day for food or soil, L/day for water, m³/day for air)
BW = body weight (kg)
EF = exposure frequency (days/yr)
ED = exposure duration (years)
AT = averaging time (days)

Note that the term IR/BW is a description of the basic contact rate with a medium (e.g., L of water per kg body weight per day) and the second term (EF × ED/AT) adjusts for cases where exposure is not continuous. For example, if a person was exposed for 50 days/year for 20 years of a lifetime (70 years), the value of this term would be 50/365 × 20/70 = 0.039.
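To make the arithmetic concrete, the intake equation can be sketched in Python. All parameter values below are hypothetical, chosen only to reproduce the 0.039 adjustment factor in the example above:

```python
def daily_intake(C, IR, BW, EF, ED, AT):
    """Average daily intake (mg/kg-d): DI = C × (IR/BW) × (EF × ED / AT)."""
    return C * (IR / BW) * (EF * ED / AT)

# Hypothetical values: 2 mg/L in drinking water, 2 L/day ingested by a
# 70 kg adult, exposed 50 days/yr for 20 years of a 70-year lifetime.
di = daily_intake(C=2.0, IR=2.0, BW=70.0, EF=50, ED=20, AT=70 * 365)

# The time-adjustment term alone: 50/365 × 20/70 ≈ 0.039
adjustment = (50 * 20) / (70 * 365)
```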

When the same individual may be exposed beginning as a child and extending into adulthood, exposure should be calculated as the time-weighted average (TWA) lifetime exposure for evaluating non-cancer and cancer risks as recommended in RAGS I Part A.
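One way to sketch the TWA calculation for a combined child/adult scenario is to sum the intake over each life stage and divide by the full averaging time. The intake rates, body weights, and durations below are illustrative placeholders, not EPA defaults:

```python
def twa_intake(C, periods, AT_years):
    """Time-weighted average intake (mg/kg-d) across life stages.
    periods: list of (IR, BW, EF_days_per_yr, ED_years) tuples."""
    total = sum(C * (IR / BW) * EF * ED for IR, BW, EF, ED in periods)
    return total / (AT_years * 365)

# Hypothetical soil-ingestion scenario: a child ingesting 200 mg/day for
# 6 years at 15 kg, then an adult ingesting 100 mg/day for 20 years at 70 kg.
C_soil = 100.0      # mg chemical per kg soil
mg_to_kg = 1e-6     # convert soil intake from mg/day to kg/day
periods = [(200 * mg_to_kg, 15.0, 350, 6),
           (100 * mg_to_kg, 70.0, 350, 20)]
twa = twa_intake(C_soil, periods, AT_years=70)
```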

There is often wide variability in the amount of contact between different individuals within a population. Thus, human contact with an environmental medium is best thought of as a distribution of possible values rather than a specific value. Usually, emphasis is placed on two different points of this distribution:

Average or Central Tendency Exposure (CTE): CTE refers to individuals who have average or typical intake of environmental media.

Upper Bound or Reasonable Maximum Exposure (RME): RME refers to people who are at the high end of the exposure distribution (approximately the 95th percentile). The RME scenario is intended to assess exposures that are higher than average, but are still within a realistic range of exposure.

Because the calculations of CTE and RME risk are done using single numbers (point estimates) for each input value, this approach is usually referred to as the point estimate method. In some cases, the risk assessor may wish to describe each exposure parameter not by a single number but as a distribution. This is referred to as probabilistic risk assessment (PRA). In this case, computations require computer-based methods (Monte Carlo simulation) and the output is also a distribution rather than a point estimate. This approach provides a more complete description of the range of exposures that occur in the exposed population and also helps increase the accuracy of combining exposure levels across different pathways.

In some cases, human exposure may be measured directly (biomonitoring) rather than calculated based on assumed exposure parameters. For example, exposure to lead is often evaluated by measuring the amount of lead in blood, and exposure to arsenic is often evaluated by measuring the amount of arsenic in urine or in hair. While direct measurement bypasses many of the uncertainties associated with calculating human exposure, this approach is limited by providing data only on current conditions. In addition, if exposure is occurring from more than one source, direct measurement does not distinguish between the sources.



Selecting Contaminants of Potential Concern (COPCs)

At a site, data are often available on the concentration of a wide variety of analytes. However, not all of the analytes are necessarily related to releases at the site, nor are all necessarily of equal concern. Contaminants of Potential Concern (COPCs) are a subset of all analytes that are selected for quantitative evaluation in the risk assessment. There are a variety of different methods that can be used for selecting COPCs, including a) comparison of on-site levels to background, b) an analysis of detection frequency, and c) an assessment of relative risk.

The general process and steps for selecting COPCs are described in Evaluating and Identifying Contaminants of Concern for Human Health (Region 8 Superfund Technical Guidance RA-03, September 1994). Other valuable guidance on the selection of COPCs includes:

Comparison to Background

  • Guidance for Comparing Background and Chemical Concentrations in Soil for CERCLA Sites (PDF) (OSWER 9285.7-41, September 2002) (89 pp, 1.3 MB)
  • Comparing Statistical Tests for Detecting Soil Contamination Greater Than Background (J.W. Hardin and R.O. Gilbert, Pacific Northwest Laboratory, Richland, WA, PNL 8989, UC-630, December 1993, prepared for the USDOE)
  • Region 8 Background Soil Arsenic Concentrations (XLS) (2.5 MB)
    Presents a summary of arsenic concentrations in background soils from EPA Region 8, stratified by state (Colorado, Montana, North Dakota, South Dakota, Utah and Wyoming) and land use (native grassland, rangeland or agriculture, urban mixed, mining, Superfund site, mineralized area, and reclaimed area). Data are compiled from multiple studies, primarily based on USGS datasets. Summary statistics (N, mean, 95 percent UCLM, percentiles) are listed in tables and plotted graphically by state.

Risk-Based Toxicity Screen


Exposure Factors

In order to quantify human exposure to chemicals in the environment, it is necessary to calculate the level of contact between people and each contaminated environmental medium. The basic equations used to perform these calculations are provided in RAGS I Part A.

For every exposure pathway of potential concern, it is expected that there will be differences between different individuals in the level of exposure at a specific location due to differences in intake rates, body weights, exposure frequencies, and exposure durations. There is normally a wide range of average daily intakes between different members of an exposed population.

Because of this, all daily intake calculations must specify what part of the range of doses is being estimated. Typically, attention is focused on intakes that are "average" or are otherwise near the central portion of the range and on intakes that are near the upper end of the range (e.g., the 95th percentile). These two exposure estimates are referred to as Central Tendency Exposure (CTE) and Reasonable Maximum Exposure (RME), respectively. (Note that this variability in exposure between different members of the population should not be confused with the uncertainty that is often encountered in attempting to estimate either CTE or RME daily chemical intake levels).
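The CTE and RME estimates use the same intake equation and differ only in the parameter values plugged in. A minimal sketch with hypothetical water-ingestion parameters (illustrative only, not EPA defaults):

```python
def daily_intake(C, IR, BW, EF, ED, AT):
    # DI = C × (IR/BW) × (EF × ED / AT), in mg/kg-d
    return C * (IR / BW) * (EF * ED / AT)

# Hypothetical parameter sets: same medium concentration, but the RME
# receptor drinks more water, more often, for more years.
cte = dict(C=1.0, IR=1.4, BW=70.0, EF=234, ED=9,  AT=9 * 365)
rme = dict(C=1.0, IR=2.3, BW=70.0, EF=350, ED=26, AT=26 * 365)

di_cte = daily_intake(**cte)
di_rme = daily_intake(**rme)
```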

EPA has developed default exposure parameters for evaluating a number of the most common exposure scenarios, especially scenarios for residents and workers. In February 2014, EPA issued OSWER Directive 9200.1-120, Human Health Evaluation Manual, Supplemental Guidance: Update of Standard Default Exposure Factors. This guidance update supplements RAGS Part A through Part E, and supersedes and replaces certain portions of OSWER Directive 9285.6-03 (March 1991).

In general, default values may be used in all exposure calculations. However, in some cases, it may be suspected that the national default exposure parameters for a scenario are too low or too high for the specific conditions present at a site. In this case, the default values may be replaced if reliable site-specific data are available to provide more appropriate values. If no reliable site-specific data are available, the defaults should be used, and any uncertainty associated with those default values should be discussed in the uncertainty section.

In some cases (e.g., some recreational scenarios), EPA has not yet identified national default exposure parameters. In these cases, the preferred approach is to perform a reliable site-specific study to estimate the parameters (e.g., perform a demographic survey to characterize frequency and duration of dirt-bike riding in an area of concern). In the absence of such reliable site-specific data, values may sometimes be derived by consulting the Exposure Factors Handbook. If no values can be derived from either of these sources, screening level calculations may be performed using parameters that are based on professional judgment. However, all exposure and risk estimates derived from professional judgment-based exposure parameters should be clearly identified and properly discussed in the uncertainty section.


Probabilistic Risk Assessment (PRA)

Equations for computing human exposure contain a number of terms that are inherently variable. For example, not all people have the same body weight. Rather, there is a distribution of body weights across different people. The same is true for intake rates, exposure frequencies, and exposure durations. If data are available to describe the distribution of each of these terms, then a mathematical method is needed to combine the distributions.

While there are a number of different methods available, the most common and convenient is Monte Carlo simulation. In this approach, each term in the exposure model is described by a distribution rather than a single value. The computer draws a value at random from each distribution, computes the exposure, and saves the value. This process is repeated many times, resulting in a distribution of exposure values. This distribution provides a more complete description of exposure than the point estimate approach and helps ensure that values selected for CTE and RME exposures are realistic.
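A minimal Monte Carlo sketch of this process, assuming NumPy and purely illustrative input distributions (the distribution shapes and parameters below are placeholders, not recommended values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of random draws

# Hypothetical input distributions for a water-ingestion scenario:
C  = 5.0                                                   # mg/L, fixed EPC
IR = rng.lognormal(mean=np.log(1.5), sigma=0.4, size=n)    # L/day
BW = rng.normal(70.0, 12.0, size=n).clip(min=30.0)         # kg
EF = rng.triangular(180, 350, 365, size=n)                 # days/yr
ED = rng.uniform(1, 30, size=n)                            # years
AT = ED * 365                                              # days (averaged over ED)

# One exposure value per draw; the result is a distribution, not a point.
di = C * (IR / BW) * (EF * ED / AT)                        # mg/kg-d

cte_estimate = np.median(di)          # central tendency of the distribution
rme_estimate = np.percentile(di, 95)  # upper (~95th percentile) exposure
```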

Key guidance documents dealing with PRA include the following:



Biomonitoring

In some cases, biomonitoring may be a useful tool to help evaluate current exposure levels at a site. This requires that a population of humans is present at the site and that a method is available for measuring the level of exposure in that population. In general, the results of the biomonitoring may be compared to other (reference) populations to help understand the magnitude of the site-related exposure, and/or may be compared to health-based guidelines for the maximum level of exposure that is considered acceptable. Important guidance documents on planning, performing, and interpreting biomonitoring studies are presented below.


Exposure Point Concentrations

Concept of an Exposure Point

An exposure point (also called an exposure area or exposure unit) is a location within which an exposed receptor may reasonably be assumed to move at random and where contact with an environmental medium (e.g., soil) is equally likely at all sub-locations.

Because the key attribute of an exposure point is the assumption of random exposure, the single most important factor to consider in selecting an exposure point is the expected behavior of individuals in the exposed population. For example, if the population of concern consists of current residents, then the site is usually divided into a series of different exposure units that are based on residential property boundaries. This is the same as assuming that a resident (adult and/or child) moves at random across their own property. In real life, it is possible that any particular resident does not move entirely at random within their property but frequents certain locations more often than others. However, it is important to remember that Superfund risk assessments usually do not seek to estimate risk to any one particular current resident or family, but to a hypothetical population of people living at the location, who may select other parts of the yard to visit.

Nevertheless, because of the possibility of non-random behavior within a yard, it may sometimes be appropriate to subdivide the yard into two or more exposure points (e.g., front yard, back yard). This illustrates a general conceptual principle: if it is not realistic to assume random exposure in some proposed exposure area, then the area may be divided into two or more sub-areas such that random exposure within each sub-area is considered reasonable. The obvious disadvantage of selecting too many sub-areas is that it substantially increases the amount of data needed to support the risk assessment (see discussion below, Calculation of the Exposure Point Concentration (EPC)).

If the land use is not residential, the same principle still applies. For example, consider a wildlife refuge where the exposed population consists of wildlife refuge workers. If it is assumed that the worker may roam across the whole site at random, then the entire site may be selected as the exposure point. If it is assumed that the worker may preferentially be exposed in some sub-area (e.g., a visitor center), then that sub-area might be selected as one exposure point and the rest of the site as a second.

Another factor that is often considered in selecting exposure points is the existing pattern of environmental contamination. For example, if soil in an area has been contaminated by releases from a nearby facility, and there is a clear spatial "footprint" of decreasing concentrations as a function of distance from the source, then it may be most convenient to select groups of homes (blocks, neighborhoods) as exposure points, rather than focusing on individual properties. This is equivalent to assuming that the degree of contamination is approximately the same at all locations within the exposure point and has the advantage that fewer samples per property may be sufficient to characterize exposure.

Regardless of what strategy is selected for choosing exposure points at a site, it is important that the risk assessment provide a thoughtful discussion of the factors that were considered and the rationale supporting the approach selected.


Calculation of the Exposure Point Concentration (EPC)

An exposure point concentration (EPC) is an estimate of the true arithmetic mean concentration of a chemical in a medium at an exposure point, selected as described above (Concept of an Exposure Point). The choice of the arithmetic mean as the most appropriate statistic for characterizing exposure at an exposure point is based on the fundamental assumption of random exposure within the exposure point.

For example, assume that the total daily intake at the exposure point is "IR" and that, on average, 1/100 comes from each of 100 sub-areas of the exposure unit. Then the total exposure is given by:

Total Intake = (IR/100) × C1 + (IR/100) × C2 + ... + (IR/100) × C100

where:

IR/100 = intake rate of the medium in each sub-area
Ci = concentration at sub-area "i"

Rearranging this equation yields:

Total Intake = IR × (C1 + C2 + ... + C100)/100
= IR × Mean Concentration
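This identity is easy to verify numerically; the concentrations and intake rate below are arbitrary:

```python
# Hypothetical concentrations in 5 equal-sized sub-areas (mg/kg):
conc = [10.0, 20.0, 5.0, 40.0, 25.0]
IR = 0.0001  # kg soil/day, spread equally across the sub-areas

# Sub-area-by-sub-area sum: Σ (IR/n) × Ci
total = sum((IR / len(conc)) * c for c in conc)

# Rearranged form: IR × mean concentration
mean_based = IR * sum(conc) / len(conc)
```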

However, because the true arithmetic mean concentration cannot be calculated with certainty from a limited number of measurements, EPA recommends that the 95 percent upper confidence limit (UCL) on the arithmetic mean at each exposure point be used when calculating exposure and risk at that location (see Supplemental Guidance to RAGS: Calculating the Concentration Term (PDF) (Publication 9285.7-081, May 1992) (8 pp, 67 K)). If the 95 percent UCL exceeds the highest detected concentration, the highest detected value is used instead (see RAGS I Part A).
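For roughly normal data, one common UCL formula is the one-sided Student's t UCL: mean + t(0.95, n−1) × s/√n. A sketch with hypothetical measurements (the t critical value for 9 degrees of freedom is hard-coded to keep the example dependency-free):

```python
import math
from statistics import mean, stdev

# Hypothetical soil measurements from one exposure unit (mg/kg):
x = [120, 85, 210, 95, 150, 300, 110, 75, 180, 130]
n = len(x)
xbar = mean(x)
s = stdev(x)  # sample standard deviation

# One-sided 95% Student's t UCL (appropriate only if data are ~normal):
t_095 = 1.833  # t quantile at 95%, n - 1 = 9 degrees of freedom
ucl95 = xbar + t_095 * s / math.sqrt(n)

# Per RAGS I Part A, cap the EPC at the maximum detected concentration:
epc = min(ucl95, max(x))
```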

The equation used to compute the 95 percent UCL of a data set depends on the distribution (normal, lognormal, other) of the values. In the past, it was common practice to test each environmental data set for normality and, if it did not pass, to assume that the data set was lognormal. While mathematically convenient, this approach is inherently limited: no environmental data set is ever truly lognormal, and the assumption can substantially overestimate the true UCL. To address this problem, EPA has recently developed software (ProUCL) that computes the UCL for a given data set by a variety of alternative statistical approaches (including several that do not require the assumption of normality or lognormality) and then recommends specific UCL values as the most appropriate for that particular data set. For this reason, Region 8 recommends the use of ProUCL as the default approach for computing exposure point concentrations in most cases. The software and User's Guide for ProUCL may be obtained at the following link:

If the ProUCL software is not selected for use at a site, Region 8 recommends following the guidance for computing UCLs as detailed in:

Additional documents that provide useful guidance on computing exposure point concentrations include the following:


Dealing with Biased Data

The basic unit of a risk assessment is an exposure unit, and the key description of exposure is the arithmetic mean concentration within an exposure unit. If the data from within an exposure unit are collected either randomly or systematically, the methods for computing the mean (and confidence limits around the mean) are relatively straightforward. However, in some cases, the available data are neither random nor systematic, but biased: more samples have been collected from areas with high concentrations than from areas with low concentrations. This unequal sampling density complicates computing the mean, but techniques are available for adjusting for it. Important guidance documents on how to make these adjustments include the following:
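One common style of adjustment is declustering: weighting each sample by the fraction of the exposure unit it represents, so that a densely sampled hot spot does not dominate the mean. A minimal sketch with hypothetical strata and concentrations (the stratum areas and values are invented for illustration):

```python
# Hypothetical biased data set: 3 samples from a hot spot covering 10%
# of the exposure unit, 2 samples from the remaining 90%.
samples = [(500.0, "hot"), (450.0, "hot"), (520.0, "hot"),
           (20.0, "rest"), (30.0, "rest")]
area = {"hot": 0.10, "rest": 0.90}    # fraction of exposure unit per stratum
count = {"hot": 3, "rest": 2}         # samples per stratum

# Naive mean: every sample weighted equally (biased high here).
naive_mean = sum(c for c, _ in samples) / len(samples)

# Declustered mean: each sample weighted by (stratum area) / (stratum count),
# so the weights sum to 1 and each stratum contributes per its true share.
declustered = sum(c * area[z] / count[z] for c, z in samples)
```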
