ERP Results

Note: EPA no longer updates this information, but it may be useful as a reference or resource.


Results from 11 programs in 8 states suggest that the ERP measurement approach is providing valuable data about sector performance and project efficacy; the table below presents these results. Sectors where ERP is applied generally show improved performance, sometimes substantial, after the first round of compliance assistance and self-certification has been completed. Highlights from the first full cycle of these programs include:

  • 44 of the 190 indicators (23 percent) showed statistically significant performance increases.
  • No indicators showed statistically significant declines in performance.
  • 19 indicators (10 percent) showed no change in performance; for all of these indicators, 100 percent of facilities were meeting expectations at both the baseline and follow-up inspections.
  • States observed average performance increases of 10 percentage points or more in 7 of the 11 ERPs.

In addition to these findings, preliminary analysis of longer-term ERPs suggests that performance continues to improve, or is maintained, over time.

Please note that several external factors, such as existing regulations or trade association outreach efforts, may also have influenced these results. Therefore, not all of the gains achieved during an ERP can necessarily be attributed to the program itself. Nonetheless, whether or not substantial improvements can be attributed to an ERP, this measurement approach ensures that, at a minimum, regulators gain important information about sector characteristics and performance.

First Round of Post-Certification Results Compared to Baseline Results (One Full ERP Cycle) [1]

Find more detailed information about any of these programs on the States & Sectors page of this website.

State [2] | Sector | Self-Certification | # of Indicators | # Improving (# Significant) | # Worsening (# Significant) | # No Change, 100% Achievement [3] | Average Indicator Change (Percentage Points) [4]
DE | Auto Body | Voluntary | 19 | 17 (13) | 1 (N/A) | 1 | 30
FL | Auto Repair | Mandatory | 17 | 13 (7) | 3 (0) | 1 | 7
MA | Dry Cleaners | Mandatory | 15 | 5 (0) | 5 (0) | 5 | 5
MA | Photo Processors | Mandatory | 8 | 3 (1) | 2 (0) | 3 | 12
MA | Printers | Mandatory | 25 | 17 (1) | 6 (0) | 2 | 13
MD | Auto Body/Repair | Voluntary | 5 | 4 (1) | 1 (0) | 0 | 12
ME | Auto Body | Voluntary | 22 | 18 (3) | 4 (N/A) | 0 | 10
MI | Dry Cleaners | Voluntary | 7 | 3 (2) | 4 (0) | 0 | 1
RI | Auto Body | Voluntary | 24 | 19 (7) | 3 (N/A) | 2 | 21
RI | Auto Salvage | Voluntary | 14 | 11 (5) | 0 (N/A) | 3 | 34
WI | Printers | Voluntary | 34 | 20 (4) | 12 (0) | 2 | 3
Average | All Sectors | N/A | 17 | 12 (4) | 4 (0) | 2 | N/A
TOTAL | All Sectors | N/A | 190 | 130 (44) | 41 (0) | 19 | N/A
Notes:
a) The following states used a 95% confidence level and a one-tailed test for significance: Rhode Island and Maine. Significance was not tested for indicators that showed worsening performance; this is marked in the table with (N/A).
b) The following states used a 95% confidence level and a two-tailed test for significance: Florida, Maryland, Massachusetts and Wisconsin.
c) The following state used a 90% confidence level and a one-tailed test for significance: Delaware. Results were the same when recalculated at a 95% confidence level with a one-tailed test. Significance was not tested for indicators that showed worsening performance; this is marked in the table with (N/A).
d) The following state used a 90% confidence level and a two-tailed test for significance: Michigan. (An illustrative version of this type of significance test appears after these notes.)
e) For average percentage point change calculations, we excluded indicators for which the baseline and follow-up achievement rates were both 100 percent (i.e., the percentage point change was zero). Percentage point changes for each indicator are calculated by subtracting the observed percentage of shops following a certain behavior at the baseline from the observed percentage of shops following the behavior at post-certification.
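Notes a) through d) report only the confidence level and whether a one- or two-tailed test was used; they do not name the exact test statistic. The sketch below is one plausible illustration, assuming a standard two-proportion z-test comparing the baseline and post-certification random samples. The shop counts are hypothetical, and the states' actual methods may have differed.

# Illustrative only: assumes a two-proportion z-test; all counts are hypothetical.
from math import sqrt
from statistics import NormalDist

def improvement_is_significant(base_hits, base_n, post_hits, post_n,
                               confidence=0.95, one_tailed=True):
    """Test whether the post-certification achievement rate exceeds the baseline rate."""
    p_base, p_post = base_hits / base_n, post_hits / post_n
    pooled = (base_hits + post_hits) / (base_n + post_n)
    se = sqrt(pooled * (1 - pooled) * (1 / base_n + 1 / post_n))
    if se == 0:
        return False  # no variation, e.g. 100 percent achievement in both samples
    z = (p_post - p_base) / se
    alpha = 1 - confidence
    cutoff = NormalDist().inv_cdf(1 - (alpha if one_tailed else alpha / 2))
    return z > cutoff

# Hypothetical indicator: 30 of 60 baseline shops vs. 42 of 60 post-certification
# shops achieving it, tested at a 95% confidence level with a one-tailed test.
print(improvement_is_significant(30, 60, 42, 60))  # True (z is about 2.24)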

[1] While we present these results together, we should be careful when comparing results across different programs. Results can vary between programs for several reasons, including resources available to the program, initial levels of performance, and whether self-certification was voluntary or mandatory.

[2] The results from the Minnesota Feedlots ERP are not included in this table because it relied on a different measurement approach that was not directly comparable to the other programs. We have also excluded the results for the Vermont and Rhode Island UST ERPs because those results have not yet been finalized.

[3] This column counts the number of indicators for which 100 percent of facilities were achieving the indicator at both the baseline and follow-up inspections; for these indicators, there was no change in performance between the baseline and follow-up inspections.

[4] The percentage point change for an individual indicator is calculated by subtracting the percentage of randomly inspected facilities achieving the indicator at baseline from the percentage achieving the indicator in the post-certification random sample. For instance, if the percentage of randomly inspected facilities achieving an indicator increases from 50% to 70%, the percentage point change is 20. The average indicator change (percentage points) for an ERP is the simple mean of all the percentage point changes, positive or negative. Indicators that showed no change from a 100% achievement level at baseline are not included in the average, because improvement is not possible in that circumstance. (See the previous footnote for a description of that category of indicators.)
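As a concrete illustration of the calculation described in this footnote, the short sketch below computes percentage point changes and the average indicator change for a handful of indicators. The indicator names and achievement rates are invented for the example, not drawn from any of the programs above.

# Hypothetical data: (baseline %, post-certification %) achievement rates per indicator.
indicators = {
    "labels hazardous waste drums": (50.0, 70.0),     # +20 percentage points
    "keeps waste manifests on file": (100.0, 100.0),  # 100% at both inspections
    "covers outdoor waste containers": (80.0, 75.0),  # -5 percentage points
}

changes = []
for name, (baseline_pct, post_pct) in indicators.items():
    # Indicators already at 100% achievement at both inspections are excluded,
    # since no improvement is possible for them.
    if baseline_pct == 100.0 and post_pct == 100.0:
        continue
    changes.append(post_pct - baseline_pct)

# Simple mean of the remaining changes, positive or negative.
average_change = sum(changes) / len(changes)
print(f"Average indicator change: {average_change:+.1f} percentage points")  # +7.5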
