Uncertainty can arise at any stage in the development of a regulatory analysis. Within each stage of the analysis, uncertainty can result from any number of factors, including insufficient data, an incomplete understanding of the physical or economic process being modeled, model specification, and the inherent uncertainty in the results of any statistical analysis. A thorough treatment of all sources of uncertainty in a regulatory analysis is beyond the scope of this document. Therefore, the remainder of this discussion focuses on the uncertainties associated with the valuation of changes in environmental impacts. 

There are three general sources of uncertainty in the economic analysis of a regulatory action: input uncertainty, model uncertainty, and estimation uncertainty. Each of these sources of uncertainty is described below: 

 Input uncertainty—This is a general term for two sources of uncertainty inherent in the data on which an economic analysis is based: the distribution of possible values of the environmental impact being valued and measurement error. The former arises when the impacts being valued are themselves the product of a modeling effort, as is often the case when the impact of interest is a change in exposure to a pollutant or a change in the probability of a certain outcome occurring. The latter, measurement error, is common when the data used in an economic analysis are incomplete and analyst judgment or proxy measures are used to fill data gaps. The uncertainty surrounding such inputs is naturally carried through the economic analysis.
 Model uncertainty—This type of uncertainty arises because any statistical model is a simplification of a behavioral or economic process. Simplification is often unavoidable, both because most behavioral and economic processes are highly complex and because the nature of the process being modeled may not be completely understood. In modeling such processes, analysts must often rely on a series of assumptions and abstractions, each of which can affect the precision of the analytical results.
 Estimation uncertainty—Economic analyses in which the analyst estimates a parametric model yield results that are inherently variable. Statistical estimation produces parameter estimates that carry sampling error: each estimate is best viewed not as a single certain value but as a draw from a probability distribution of likely values. This uncertainty in the parameter estimates naturally translates into uncertainty in the predicted outcome variable, so the analyst must take care to report appropriate probability ranges around each estimated parameter or prediction.
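To make estimation uncertainty concrete, the following sketch fits a simple linear model by ordinary least squares and reports the sampling-based interval around the estimated coefficient. All variable names and numbers (willingness to pay regressed on income) are invented for illustration and do not come from this guidance:

```python
import numpy as np

# Hypothetical illustration: regress willingness to pay (WTP) on income.
# The point estimate of the slope comes with a standard error, so the
# honest summary is an interval, not a single number. Data are simulated.
rng = np.random.default_rng(0)
n = 200
income = rng.uniform(20, 100, n)                 # thousands of dollars
true_intercept, true_slope, noise_sd = 5.0, 0.5, 4.0
wtp = true_intercept + true_slope * income + rng.normal(0, noise_sd, n)

# Ordinary least squares fit
X = np.column_stack([np.ones(n), income])
beta, *_ = np.linalg.lstsq(X, wtp, rcond=None)

# Residual variance and covariance matrix of the estimates
resid = wtp - X @ beta
sigma2 = resid @ resid / (n - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)
se_slope = np.sqrt(cov[1, 1])

# An approximate 95% interval characterizes the estimation uncertainty
# that carries into any prediction built from this model.
low, high = beta[1] - 1.96 * se_slope, beta[1] + 1.96 * se_slope
print(f"slope estimate: {beta[1]:.3f}, 95% interval: ({low:.3f}, {high:.3f})")
```

Reporting the interval rather than the point estimate alone is what the text above means by assigning probability ranges to estimated parameters.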

 
Because of the uncertainties described above, the results
of an EA must be presented in such a way that the full range of uncertainty
is transparent. There are five basic methods for characterizing
uncertainty:
 scenario analysis—estimating a range of possible outcomes, such as worst-case and best-case scenarios, in addition to the most likely outcome
 Delphi methods—using input from a group of experts to
characterize the potential likelihood of possible outcomes
 sensitivity analysis—identifying assumptions made about
key input variables (e.g., the level of exposure or the discount
rate) and conducting the analysis over a range of plausible
values for these variables to determine the effect of each assumption
on the resulting point estimates
 meta-analysis—combining data or results from a number of different studies to estimate a more general model or to characterize the range or distribution of key input variables
 Monte Carlo and other probabilistic methods—simulating
a distribution of the results by randomly drawing from the probability
distributions of input variables and repeating the analysis
numerous times
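The last method above can be sketched in a few lines. The benefit formula (exposed population × per-person risk reduction × value per case avoided) and every distribution and parameter in it are hypothetical stand-ins chosen only to show the mechanics of repeated random draws:

```python
import numpy as np

# Minimal Monte Carlo sketch: draw each uncertain input from an assumed
# distribution, compute benefits for each draw, and summarize the
# resulting distribution of outcomes. All distributions are invented.
rng = np.random.default_rng(42)
n_draws = 10_000

population     = rng.normal(1_000_000, 50_000, n_draws)     # exposed persons
risk_reduction = rng.triangular(1e-6, 5e-6, 2e-5, n_draws)  # per-person risk change
value_per_case = rng.lognormal(np.log(5e6), 0.3, n_draws)   # $ per case avoided

benefits = population * risk_reduction * value_per_case

# Report the distribution of results, not just a point estimate.
p5, p50, p95 = np.percentile(benefits, [5, 50, 95])
print(f"benefits ($): 5th={p5:,.0f}  median={p50:,.0f}  95th={p95:,.0f}")
```

The percentiles give exactly what the text calls for: a characterization of the full range of possible benefit values rather than a single number.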

 
The draft EPA white paper on uncertainty recommends that, at a minimum, the analyst identify the key assumptions and qualitatively assess the potential impact of each assumption on the results of the analysis (Hagler Bailly Consulting, Inc., 1997).
In addition, sensitivity analysis should be conducted
to further characterize the impact of alternative values of key variables
whenever possible. Scenario analysis and Delphi methods are useful
when sensitivity analysis fails to adequately characterize the range of
possible outcomes, particularly in situations in which there is a small
risk of an extreme outcome. Meta-analysis and probabilistic methods
are often superior to the other methods: meta-analysis provides
a more complete characterization of key input variables, and probabilistic
methods provide a probability distribution for the full range of possible
cost and benefit values.
Because Delphi methods, meta-analysis, and probabilistic
methods often require substantial time and financial resources, the analyst
should weigh their likely contribution to the policy implications
of the EA results. For analyses in which benefits unambiguously
exceed costs, a sensitivity analysis should be adequate. However,
in cases in which the results vary significantly with the underlying
assumptions, other methods of characterizing the range and distribution
of both input variables and results should be considered.
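A sensitivity analysis of the kind recommended above can be as simple as recomputing the result over a range of plausible values for one key assumption. The benefit stream, horizon, and candidate discount rates below are invented for illustration only:

```python
import numpy as np

# Sensitivity-analysis sketch: recompute the present value of a
# hypothetical stream of annual benefits under several plausible
# discount rates, making the effect of that one assumption visible.
annual_benefit = 1_000_000      # dollars per year (invented)
years = np.arange(1, 21)        # 20-year horizon (invented)

def present_value(rate):
    """Present value of the constant benefit stream at a given rate."""
    return float(np.sum(annual_benefit / (1 + rate) ** years))

for rate in (0.03, 0.05, 0.07):
    print(f"discount rate {rate:.0%}: PV = ${present_value(rate):,.0f}")
```

Presenting the result at each rate side by side lets the reader judge how much the conclusions depend on the discount-rate assumption.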
12 Both the OMB EA guidance (OMB,
1996) and the EPA working paper addressing uncertainty in EAs
(Hagler Bailly Consulting, Inc., 1997) include a discussion
of certainty equivalents. Certainty equivalents are a
theoretical construct for the value that a risk-averse individual
places on an uncertain outcome. This type of uncertainty
does not represent a lack of knowledge on the part of the analyst
but rather a component of an individual’s response to risk.
Therefore, certainty equivalents are more appropriately
addressed in a discussion of the expected benefits of a regulatory
action rather than in a discussion of analytical uncertainty.


13 Risk and risk assessment are discussed in detail
in Section 7 of this guidance document. 

14 For an example of the use of
meta-analysis in determining the value of a statistical life to
use in the analysis of environmental programs, readers are
referred to EPA’s analysis of the benefits and costs
of the CAA (EPA, 1996a). Analysts interested in
conducting meta-analyses are referred to Hedges and
Olkin (1985) and Cook et al. (1992).


15 An example of the use of
Monte Carlo simulation in the analysis of a regulation is
the CAA retrospective analysis (EPA, 1996a). More
detailed discussions of probabilistic methods, including
Monte Carlo simulations, can be found in most intermediate
or advanced statistics and econometrics texts.


