Atmospheric Modeling and Analysis Research
Operational Performance Evaluation of Air Quality Model Simulations
An air quality model simulation, such as one using the Community Multiscale Air Quality (CMAQ) model, has three main components: the input meteorology, the input emissions, and the air quality model simulation itself. The meteorological data are provided by models such as the MM5 mesoscale model system and the Weather Research and Forecasting (WRF) model.
The quality of the meteorological data, specifically how well predicted values such as temperature or wind speed compare with the observed state of the atmosphere, is critical to the performance of the air quality model, which relies on the meteorological data to accurately simulate pollutants in the atmosphere. An important aspect of any air quality simulation is therefore evaluating the quality of the predicted meteorological data, which is accomplished by comparing model-simulated values against observed data. This type of evaluation is referred to as operational evaluation. A similar operational evaluation of the air quality model simulation is performed using available observed air quality measurements.
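At its core, an operational evaluation reduces paired model predictions and observations to summary statistics such as mean bias and root-mean-square error. The sketch below illustrates the idea with hypothetical paired 2-m temperature values; it is not AMET code, just a minimal example of the two statistics.

```python
import math

# Hypothetical paired values: observed and model-predicted 2-m temperature (K)
# at the same stations and times. Real evaluations draw the observed values
# from surface observation networks.
observed  = [288.2, 291.5, 285.0, 290.1, 287.3]
predicted = [289.0, 290.8, 286.1, 291.0, 286.9]

n = len(observed)
# Mean bias: average of (model - observation); positive means over-prediction.
mean_bias = sum(p - o for p, o in zip(predicted, observed)) / n
# Root-mean-square error: typical magnitude of an individual prediction error.
rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

print(f"Mean bias: {mean_bias:+.2f} K")
print(f"RMSE:      {rmse:.2f} K")
```

In practice these statistics are computed per station, per hour, or per season, so that systematic errors (for example, a persistent warm bias at night) can be isolated.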
As the developers of the CMAQ model, EPA researchers frequently evaluate CMAQ simulations as part of the testing process as the model evolves with state-of-the-art science. Examples of changes to the modeling system that may require testing include updates and corrections to the model code, changes in the model inputs (e.g., meteorology and emissions), and any other changes that may impact the model predictions (e.g., specified boundary conditions).
As computing power has increased over time, and continues to increase, the frequency of model simulations has also increased, while the time required to run a simulation has decreased. In addition, the duration of model simulations has grown from a week or several weeks to multiple months or years. With this increase in the number and duration of air quality simulations comes an increase in the time required to thoroughly evaluate each simulation.
To evaluate a simulation within a reasonable amount of time, EPA developed the Atmospheric Model Evaluation Tool (AMET), which helps researchers evaluate the operational performance of a meteorological or air quality simulation. A brief description of AMET is given below, and a link to the AMET code can be found in the tools section.
AMET combines the open-source MySQL database software, the open-source R statistical software, and Fortran and Perl scripts into an organized and powerful system for processing meteorological and air quality model output and evaluating the performance of model predictions. AMET uses the Fortran and Perl scripts to pair observed meteorological and air quality data with model estimates, populates a MySQL relational database with the paired data, and then uses R scripts to generate statistics and plots that show operational model performance. Many R scripts are included with the release version of AMET, available through the Community Modeling and Analysis System (CMAS) center website, but users familiar with R can modify existing scripts or create new ones to suit their evaluation needs. See Appel et al., 2011 (Overview of the Atmospheric Model Evaluation Tool (AMET) v1.1 for evaluating meteorological and air quality models) for a detailed description of AMET. Below is an example plot from AMET.
Scatter plot of observed versus CMAQ predicted sulfate for August 2006 created by AMET.
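The pair-then-query workflow described above can be sketched in a few lines. This is a simplified illustration only: it uses Python and an in-memory SQLite database in place of AMET's actual Fortran/Perl pairing scripts, MySQL database, and R analysis scripts, and the station IDs, times, and sulfate values are hypothetical.

```python
import sqlite3

# Hypothetical observed and modeled sulfate (ug/m3), keyed by (site, time).
obs   = {("SITE_A", "2006-08-01T12:00"): 4.1,
         ("SITE_B", "2006-08-01T12:00"): 3.6}
model = {("SITE_A", "2006-08-01T12:00"): 4.9,
         ("SITE_B", "2006-08-01T12:00"): 3.2,
         ("SITE_C", "2006-08-01T12:00"): 2.8}  # no matching observation

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE pairs (site TEXT, time TEXT, obs REAL, mod REAL)")

# Pairing step: keep only site/time combinations present in both datasets.
for key, ob in obs.items():
    if key in model:
        db.execute("INSERT INTO pairs VALUES (?, ?, ?, ?)", (*key, ob, model[key]))

# Evaluation step: query the paired data for a performance statistic.
n_pairs, mean_bias = db.execute(
    "SELECT COUNT(*), AVG(mod - obs) FROM pairs").fetchone()
print(f"{n_pairs} pairs, mean bias {mean_bias:+.2f} ug/m3")
```

Storing the paired data in a relational database is what makes this design flexible: any subset (one network, one season, one region) can be pulled out with a query and fed to the plotting and statistics scripts.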
EPA is also a leader in the Air Quality Model Evaluation International Initiative (AQMEII), which is a collaborative model evaluation activity between numerous groups in North America and Europe.
- Appel, K.W., Chemel, C., Roselle, S.J., Francis, X.V., Sokhi, R.S., Rao, S.T., and Galmarini, S.: Examination of the Community Multiscale Air Quality (CMAQ) model performance over the North American and European domains, Atmospheric Environment (AQMEII special issue), in press, 2012.
- Appel, K.W., Foley, K.M., Bash, J.O., Pinder, R.W., Dennis, R.L., Allen, D.J., and Pickering, K.: A multi-resolution assessment of the Community Multiscale Air Quality (CMAQ) model v4.7 wet deposition estimates for 2002–2006, Geosci. Model Dev., 4, 357-371, doi:10.5194/gmd-4-357-2011, 2011.
- Appel, K.W., Gilliam, R.C., Davis, N., Zubrow, A., and Howard, S.C.: Overview of the Atmospheric Model Evaluation Tool (AMET) v1.1 for evaluating meteorological and air quality models, Environ. Modell. Softw., 26(4), 434-443, 2011.
- Foley, K.M., Roselle, S.J., Appel, K.W., Bhave, P.V., Pleim, J.E., Otte, T.L., Mathur, R., Sarwar, G., Young, J.O., Gilliam, R.C., Nolte, C.G., Kelly, J.T., Gilliland, A.B., and Bash, J.O.: Incremental testing of the Community Multiscale Air Quality (CMAQ) modeling system version 4.7, Geosci. Model Dev., 3, 205-226, 2010.
- Appel, K.W., Roselle, S.J., Gilliam, R.C., and Pleim, J.E.: Sensitivity of the Community Multiscale Air Quality (CMAQ) model v4.7 results for the eastern United States to MM5 and WRF meteorological drivers, Geosci. Model Dev., 3, 169-188, 2010.
- Swall, J.L. and Foley, K.M.: The impact of spatial correlation and incommensurability on model evaluation, Atmospheric Environment, 43, 1204-1217, 2009.
- Appel, K.W., Bhave, P.V., Gilliland, A.B., Sarwar, G., and Roselle, S.J.: Evaluation of the Community Multiscale Air Quality (CMAQ) model version 4.5: Sensitivities impacting model performance; Part II - particulate matter, Atmospheric Environment, 2008.
- Appel, K.W., Gilliland, A.B., Sarwar, G., and Gilliam, R.C.: Evaluation of the Community Multiscale Air Quality (CMAQ) model version 4.5: Sensitivities impacting model performance, Atmospheric Environment, 2007.
- Gilliam, R.C., Hogrefe, C., and Rao, S.T.: New methods for evaluating meteorological models used in air quality applications, Atmospheric Environment, 40(26), 5073-5086, 2006.