Similar articles (20 results)
1.
Currently used dispersion models, such as the AMS/EPA Regulatory Model (AERMOD), process routinely available meteorological observations to construct model inputs. Thus, model estimates of concentrations depend on the availability and quality of meteorological observations, as well as the specification of surface characteristics at the observing site. We can be less reliant on these meteorological observations by using outputs from prognostic models, which are routinely run by the National Oceanic and Atmospheric Administration (NOAA). The forecast fields are available daily over a grid system that covers all of the United States. These model outputs can be readily accessed and used for dispersion applications to construct model inputs with little processing. This study examines the usefulness of these outputs through the relative performance of a dispersion model that has input requirements similar to those of AERMOD. The dispersion model was used to simulate observed tracer concentrations from a Tracer Field Study conducted in Wilmington, California, in 2004 using four different sources of inputs: (1) onsite measurements; (2) National Weather Service measurements from a nearby airport; (3) readily available forecast model outputs from the Eta Model; and (4) readily available and more spatially resolved forecast model outputs from the MM5 prognostic model. The comparison of the results from these simulations indicates that comprehensive models, such as MM5 and Eta, have the potential to provide adequate meteorological inputs for currently used short-range dispersion models such as AERMOD.

2.
This paper examines the use of Moderate Resolution Imaging Spectroradiometer (MODIS) observed active fire data (pixel counts) to refine the National Emissions Inventory (NEI) fire emission estimates for major wildfire events. This study was motivated by the extremely limited information available for many years of the United States Environmental Protection Agency (US EPA) NEI about the specific location and timing of major fire events. The MODIS fire data provide twice-daily snapshots of the locations and breadth of fires, which can be helpful for identifying major wildfires that typically persist for a minimum of several days. A major wildfire in Mallory Swamp, FL, is used here as a case study to test a reallocation approach for temporally and spatially distributing the state-level fire emissions based on the MODIS fire data. Community Multiscale Air Quality (CMAQ) model simulations using these reallocated emissions are then compared with another simulation based on the original NEI fire emissions. We compare total carbon (TC) predictions from these CMAQ simulations against observations from the Interagency Monitoring of Protected Visual Environments (IMPROVE) surface network. Comparisons at three IMPROVE sites demonstrate substantial improvements in the temporal variability and overall correlation for TC predictions when the MODIS fire data are used to refine the fire emission estimates. These results suggest that if limited information is available about the spatial and temporal extent of a major wildfire, remotely sensed fire data can be a useful surrogate for developing the fire emissions estimates for air quality modeling purposes.

3.
Different methods for the field-scale estimation of contaminant mass discharge in groundwater at control planes based on multi-level well data are numerically analysed for the expected estimation error. We consider "direct" methods based on time-integrated measuring of mass flux, as well as "indirect" methods, where estimates are derived from concentration measurements. The appropriateness of the methods is evaluated by means of modelled data provided by simulation of mass transport in a three-dimensional model domain. Uncertain heterogeneous aquifer conditions are addressed by means of Monte Carlo simulations with aquifer conductivity as a random space function. We investigate extensively the role of the interplay between the spatial resolution of the sampling grid and aquifer heterogeneity with respect to the accuracy of the mass discharge estimation. It is shown that estimation errors can be reduced only if spatial sampling intervals are in due proportion to spatial correlation length scales. The ranking of the methods with regard to estimation error is shown to be heavily dependent on both the given sampling resolution and prevailing aquifer heterogeneity. Regarding the "indirect" estimation methods, we demonstrate the great importance of a consistent averaging of the parameters used for the discharge estimation.
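The contrast between the two families of methods, and the averaging-consistency point, can be sketched numerically. In the snippet below the grid geometry, conductivities, and concentrations are invented illustrative values, not data from the study: the "direct" estimate sums measured mass fluxes over the control plane, while the "indirect" estimate combines concentrations with Darcy fluxes computed from conductivity and an assumed uniform hydraulic gradient.

```python
def direct_estimate(fluxes, cell_area):
    """'Direct' method: sum time-integrated mass fluxes [g/m^2/d]
    measured at each multi-level port, scaled by the control-plane
    area [m^2] each port represents."""
    return sum(f * cell_area for f in fluxes)

def indirect_estimate(concs, conductivities, gradient, cell_area):
    """'Indirect' method: combine measured concentrations [g/m^3] with
    Darcy fluxes q = K * i from conductivity [m/d] and a hydraulic
    gradient [-]."""
    return sum(c * k * gradient * cell_area
               for c, k in zip(concs, conductivities))

cell_area = 1.0        # m^2 of control plane per sampling point (assumed)
gradient = 0.005       # uniform hydraulic gradient (assumed)
concs = [12.0, 40.0, 5.0, 0.5]           # g/m^3 at four ports
conductivities = [8.0, 3.0, 15.0, 6.0]   # m/d at the same ports

# With perfectly co-located, consistent measurements the two agree...
fluxes = [c * k * gradient for c, k in zip(concs, conductivities)]
md_direct = direct_estimate(fluxes, cell_area)  # g/d
md_indirect = indirect_estimate(concs, conductivities, gradient, cell_area)

# ...while the estimation errors studied here arise when K and c are
# sampled on coarse grids or averaged inconsistently, e.g. pairing a
# plane-averaged conductivity with point concentrations:
k_avg = sum(conductivities) / len(conductivities)
md_inconsistent = sum(c * k_avg * gradient * cell_area for c in concs)
```

Because high-conductivity and high-concentration zones are correlated in real plumes, the inconsistent average can badly bias the estimate even when every individual measurement is exact.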

4.
We investigated, using model simulations, the changes occurring in the distribution of dense non-aqueous phase liquid (DNAPL) mass (Sn) within the source zone during depletion through dissolution, and the resulting changes in the contaminant flux distribution (J) at the source control plane (CP). Two numerical codes (ISCO3D and T2VOC) were used to simulate selected scenarios of DNAPL dissolution and transport in three-dimensional, heterogeneous, spatially correlated, random permeability fields with emplaced sources. Data from the model simulations were interpreted based on population statistics (mean, standard deviation, coefficient of variation) and spatial statistics (centroid, second moments, variograms). The mean and standard deviation of the Sn and J distributions decreased with source mass depletion by dissolution. The decrease in mean and standard deviation was proportional for the J distribution, resulting in a constant coefficient of variation (CV), while for the Sn distribution the mean decreased faster than the standard deviation. The spatial distributions exhibited behavior similar to the population distributions, i.e., the CP flux distribution was more stable (defined by temporally constant second moments and range of variograms) than the Sn distribution. These observations appeared to be independent of the heterogeneity of the permeability (k) field (variance of the log permeability field=1 and 2.45), correlation structure (positive vs. negative correlation between the k and Sn domains) and the DNAPL dissolution model (equilibrium vs. rate-limited), for the cases studied. Analysis of data from a flux monitoring field study (Hill Air Force Base, Utah) at a DNAPL source CP before and after source remediation also revealed temporal invariance of the contaminant flux distribution. These modeling and field observations suggest that the temporal evolution of the contaminant flux distribution can be estimated if the initial distribution is known. However, the findings are preliminary, and broader implications for sampling strategies for remediation performance assessment need to be evaluated in additional modeling and experimental studies.

5.
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated. These potential sources of error include model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2; this led to an error in the transport direction and hence an error in tracer distribution. High resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer. Turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic scale features. Errors in the position of the cold front relative to the tracer release location of only 1 h resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.

6.
Poor air quality is still a threat for human health in many parts of the world. In order to assess measures for emission reductions and improved air quality, three-dimensional atmospheric chemistry transport modeling systems are used in numerous research institutions and public authorities. These models need accurate emission data in appropriate spatial and temporal resolution as input. This paper reviews the most widely used emission inventories on global and regional scales and looks into the methods used to make the inventory data model-ready. Shortcomings of using standard temporal profiles for each emission sector are discussed, and new methods to improve the spatiotemporal distribution of the emissions are presented. These methods are often neither top-down nor bottom-up approaches but can be seen as hybrid methods that use detailed information about the emission process to derive spatially varying temporal emission profiles. These profiles are subsequently used to distribute bulk emissions such as national totals on appropriate grids. The wide range of natural emissions is also summarized, and the calculation methods are described. Almost all types of natural emissions depend on meteorological information, which is why they are highly variable in time and space and frequently calculated within the chemistry transport models themselves. The paper closes with an outlook for new ways to improve model-ready emission data, for example, by using external databases about road traffic flow or satellite data to determine actual land use or leaf area. In a world where emission patterns change rapidly, it seems appropriate to use new types of statistical and observational data to create detailed emission data sets and keep emission inventories up-to-date.

Implications: Emission data are probably the most important input for chemistry transport model (CTM) systems. They need to be provided in high spatial and temporal resolution and on a grid that is in agreement with the CTM grid. Simple methods to distribute the emissions in time and space need to be replaced by sophisticated emission models in order to improve the CTM results. New methods, e.g., for ammonia emissions, provide grid cell–dependent temporal profiles. In the future, large data fields from traffic observations or satellite observations could be used for more detailed emission data.
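The basic preprocessing step the review describes, distributing a bulk emission total onto model time steps with a sector-specific temporal profile, can be sketched as follows. The diurnal weights are an invented, traffic-like assumption, not values from any inventory, and a flat day-of-year split stands in for the monthly and day-of-week profiles a real preprocessor would also apply.

```python
annual_total = 8760.0  # t/yr for one grid cell and emission sector (assumed)

# relative diurnal weights (24 h): an assumed traffic-like double peak
weights = [0.2, 0.1, 0.1, 0.1, 0.2, 0.5, 1.0, 1.5, 1.2, 1.0, 0.9, 0.9,
           1.0, 1.0, 1.0, 1.1, 1.3, 1.5, 1.2, 0.9, 0.7, 0.5, 0.4, 0.3]
profile = [w / sum(weights) for w in weights]  # normalized: sums to 1

# flat split over the year for brevity; real systems chain monthly,
# day-of-week and diurnal profiles (or process-based emission models)
daily_total = annual_total / 365.0
hourly = [daily_total * p for p in profile]  # emission per hour of day

# mass conservation: redistribution must not create or destroy emissions
assert abs(sum(hourly) * 365.0 - annual_total) < 1e-6
```

The hybrid methods discussed in the paper replace the fixed `weights` list with profiles that vary by grid cell, e.g. ammonia profiles driven by local temperature, while keeping exactly this conservation constraint.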


7.
The RAM model provided by the U.S. EPA has been applied to the metropolitan Detroit area for SO2 concentrations and is compared to concentrations predicted by a numerical model and to field data obtained by the 14-station air-sampling network maintained by the Wayne County Air Pollution Control Division. Great care was taken to develop the emission inventory. Based upon examination of the temporal and spatial correspondence of the respective model predictions and observed concentrations, the correlation coefficients for the 24-hour averaged data, the correlation coefficients for over 700 3-hour averaged observations, and the cumulative frequency distributions of the model output and observations, it is concluded that the numerical model provides a superior predictive tool to evaluate cause-and-effect relations, but that the RAM model, at far lower cost, predicts the correct magnitude of the worst events. Hence, RAM might well be used in the Detroit area for statistically based regulatory decisions.

8.
This paper presents results from a series of numerical experiments designed to evaluate operational long-range dispersion model simulations, and to investigate the effect of different temporal and spatial resolution of meteorological data from numerical weather prediction models on these simulations. Results of Lagrangian particle dispersion simulations of the first tracer release of the European Tracer Experiment (ETEX) are presented and compared with measured tracer concentrations. The use of analyzed data of higher resolution from the European Centre for Medium-Range Weather Forecasts (ECMWF) model produced significantly better agreement between the concentrations predicted with the dispersion model and the ETEX measurements than the use of lower resolution Navy Operational Global Atmospheric Prediction System (NOGAPS) forecast data. Numerical experiments were performed in which the ECMWF model data with lower vertical resolution (4 instead of 7 levels below 500 mb), lower temporal resolution (12 h instead of 6 h intervals), and lower horizontal resolution (2.5° instead of 0.5°) were used. Degrading the horizontal or temporal resolution of the ECMWF data resulted in decreased accuracy of the dispersion simulations. These results indicate that flow features resolved by the numerical weather prediction model data at approximately 45 km horizontal grid spacing and 6 h time intervals, but not resolved at 225 km spacing and 12 h intervals, made an important contribution to the long-range dispersion.

9.
Air quality field data, collected as part of the fine particulate matter Supersites Program and other field measurements programs, have been used to assess the degree of intraurban variability for various physical and chemical properties of ambient fine particulate matter. Spatial patterns vary from nearly homogeneous to quite heterogeneous, depending on the city, parameter of interest, and the approach or method used to define spatial variability. Secondary formation, which is often regional in nature, drives fine particulate matter mass and the relevant chemical components toward high intraurban spatial homogeneity. Those particulate matter components that are dominated by primary emissions within the urban area, such as black carbon and several trace elements, tend to exhibit greater spatial heterogeneity. A variety of study designs and data analysis approaches have been used to characterize intraurban variability. High temporal correlation does not imply spatial homogeneity. For example, there can be high temporal correlation but with spatial heterogeneity manifested as smooth spatial gradients, often emanating from areas of high emissions such as the urban core or industrial zones.
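The point that high temporal correlation does not imply spatial homogeneity can be illustrated with two synthetic monitor records: site B tracks site A day by day but at twice the level (a smooth spatial gradient), giving perfect correlation alongside clear heterogeneity. The coefficient of divergence used below is one common heterogeneity metric in such intraurban studies; all values are made up.

```python
import math

site_a = [10.0, 14.0, 9.0, 20.0, 16.0, 12.0]  # daily PM2.5, ug/m^3
site_b = [2.0 * x for x in site_a]            # same pattern, twice the level

def pearson(x, y):
    """Temporal (Pearson) correlation between two monitor time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def coefficient_of_divergence(x, y):
    """Spatial heterogeneity metric: 0 for identical sites, -> 1 for
    completely disjoint ones."""
    n = len(x)
    return math.sqrt(sum(((a - b) / (a + b)) ** 2
                         for a, b in zip(x, y)) / n)

r = pearson(site_a, site_b)                       # exactly 1: correlated
cod = coefficient_of_divergence(site_a, site_b)   # 1/3: heterogeneous
```

Since site B is a constant multiple of site A, every paired term of the divergence is (a − 2a)/(a + 2a) = −1/3, so the series are perfectly correlated in time yet far from spatially homogeneous.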

10.
The prediction of spatial variation of the concentration of a pollutant governed by various sources and sinks is a complex problem. Gaussian air pollutant dispersion models such as AERMOD of the United States Environmental Protection Agency (USEPA) can be used for this purpose. AERMOD requires steady and horizontally homogeneous hourly surface and upper air meteorological observations. However, observations with such frequency are not easily available for most locations in India. To overcome this limitation, the planetary boundary layer and surface layer parameters required by AERMOD were computed using the Weather Research and Forecasting (WRF) Model (version 2.1.1) developed by the National Center for Atmospheric Research (NCAR). We have developed a preprocessor for offline coupling of WRF with AERMOD. Using this system, the dispersion of respirable particulate matter (RSPM/PM10) over Pune, India, has been simulated. Data from the emissions inventory development and field-monitoring campaign (13–17 April 2005) conducted under the Pune Air Quality Management Program of the Ministry of Environment and Forests (MoEF), India and USEPA, have been used to drive and validate AERMOD. Comparison between the simulated and observed temperature and wind fields shows that WRF is capable of generating reliable meteorological inputs for AERMOD. The comparison of observed and simulated concentrations of PM10 shows that the model generally underestimates the concentrations over the city. However, data from this single case study would not be sufficient to draw conclusions about the suitability of regionally averaged meteorological parameters for driving Gaussian models like AERMOD, and additional simulations with different WRF parameterizations along with improved pollutant source data will be required for enhancing the reliability of the WRF–AERMOD modeling system.

11.
A new dynamic adaptive grid algorithm has been developed for use in air quality modeling. This algorithm uses a higher order numerical scheme—the piecewise parabolic method (PPM)—for computing advective solution fields; a weight function capable of promoting grid node clustering by moving grid nodes; and a conservative interpolation equation using PPM for redistributing the solution field after movement of grid nodes. Applications of the algorithm to a model problem, in which emissions from a point source disperse through the atmosphere in time, show that the algorithm is able to capture not only the regional ozone plume distribution, but also the small-scale plume structure near the source. In contrast, the small-scale plume structure was not captured in the corresponding static grid solution. Performance achieved in model problem simulations indicates that the algorithm has the potential to provide accurate air quality modeling solutions at costs that may be significantly less than those incurred in obtaining equivalent static grid solutions.

13.
This paper establishes that an isotropic spatial correlation function in the form of a modified Bessel function of the second kind, first order, can be used to model the spatial variability of a pollution concentration field over a sufficiently long period of time in which the variability due to meteorological factors has been smoothed out. The corresponding cumulative semivariogram is derived and fitted by nonlinear least squares to monthly averaged ozone data at 18 monitoring stations of the Sydney region. The good fit of the model indicates that the Sydney airshed has homogeneous and isotropic subregions whose radius of influence is about 17 km. The Bessel function form of the spatial correlation has a physical meaning as it is derived from the diffusion equation; hence, it is expected that the model can be used, in general, to represent the spatial variability of a smoothed homogeneous and isotropic concentration field.
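A sketch of the correlation model described above, rho(h) = (h/a) * K1(h/a) with radius of influence a of about 17 km, where K1 is the modified Bessel function of the second kind, first order. In practice one would call `scipy.special.kv(1, x)`; here K1 is evaluated from its integral representation so the example needs only the standard library. The normalization rho(0) = 1 follows from x*K1(x) -> 1 as x -> 0.

```python
import math

def bessel_k1(x, steps=20000, t_max=20.0):
    """K1(x) = integral from 0 to inf of exp(-x*cosh t)*cosh t dt,
    approximated by the trapezoid rule (adequate for x ~ 0.05..3)."""
    dt = t_max / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * dt
        w = 0.5 if i in (0, steps) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(t)
    return total * dt

def rho(h, a=17.0):
    """Isotropic spatial correlation rho(h) = (h/a) * K1(h/a); the
    default length scale a = 17 km is the radius of influence reported
    for the Sydney airshed."""
    if h == 0.0:
        return 1.0  # limit of x*K1(x) as x -> 0
    x = h / a
    return x * bessel_k1(x)

# correlation decays monotonically with station separation (km)
print(round(rho(1.0), 3), round(rho(17.0), 3), round(rho(50.0), 3))
```

At one correlation length, rho(a) = K1(1), about 0.60, which gives a concrete feel for how quickly monthly-mean ozone decorrelates across the 18-station network.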

14.
Underlying levels of atmospheric pollutants, assumed to be governed by smoothing mechanisms due to atmospheric dispersion, can be estimated from global emissions source databases on greenhouse gases and ozone-depleting compounds. However, spatial data may be contaminated with noise or even missing or zero-valued at many locations. Therefore, a problem that arises is how to extract the underlying smooth levels. This paper sets out a structural spatial model that assumes data evolve across a global grid constrained by second-order smoothing restrictions. The frequency-domain approach is particularly suitable for global datasets, reduces the computational burden associated with two-dimensional models and avoids cumbersome zero-inflated skewed distributions. Confidence intervals of the underlying levels are also obtained. An application to the estimation of global levels of atmospheric pollutants from anthropogenic emissions illustrates the technique which may also be useful in the analysis of other environmental datasets of similar characteristics.

15.
The paper presents a comprehensive model evaluation focusing on the meaning and shortcomings of accuracy measures used to determine model quality according to European Union (EU) directives on air quality. European wide simulations employing the chemical transport model REM-CALGRID for the year 2002 were compared with O3, NO2, SO2 and PM10 observations of the German measurement network. The EU model quality objective, which is based on maximum relative errors, tends to penalise (i) the overestimation of very low measured concentrations in the case of annual averages and (ii) the underestimation of extremely high measured concentrations in the case of short-term values. As a more robust alternative, a model accuracy measure is presented, which corresponds to the allowed number of exceedances of the corresponding short-term air quality limit values. The influence of the spatial heterogeneity of the observations in relation to the spatial resolution of the model is investigated by spatial averaging of observation data. Because of this heterogeneity, any model with a 25 km resolution would fail to simulate about 20% of all NO2 and SO2 stations and 5–10% of all O3 and PM10 stations in Germany according to the EU model quality objectives for short-term averages.

16.
The aim of this study was to determine whether nested generic box models can be used to predict spatial variance. An inter-comparison study was performed for the nested box model SimpleBox, and the spatially resolved model LOTOS-EUROS, using PCB-153 emissions in Europe as an example. We compared the two models concerning (1) average environmental concentrations, (2) spatial concentration variances, (3) spatial concentration patterns (maps), and (4) agreement with measured concentrations for the air and soil compartments. In SimpleBox, the spatial concentration variances and patterns were calculated subsequently for each separate grid cell surrounded by a regional and a continental shell with homogeneous, averaged circumstances. Average European PCB-153 concentrations calculated by LOTOS-EUROS and SimpleBox for the period 1981-2000 agree well for the air and soil compartments. Moreover, the predicted concentrations of both models are in line with the measured PCB-153 concentrations in Europe during that period. For PCB-153, the prediction of spatial concentration variances with the nested multimedia fate model SimpleBox performs adequately in most cases, except for the lower concentration boundary in the air compartment. It is concluded that SimpleBox can be used to predict the spatial maximum and average concentrations of PCB-153 in the air and soil compartments. The proposed method has to be tested systematically for different types of compounds, emission scenarios, environmental compartments and spatial scales in order to allow conclusions about the general applicability of the method.

17.
Analysis of carbon monoxide budget in North China
Peng L  Zhao C  Lin Y  Zheng X  Tie X  Chan LY 《Chemosphere》2007,66(8):1383-1389
A global chemical transport model (MOZART-2; model of ozone and related tracers, version 2) was used to assess physical and chemical processes that control the budget of tropospheric carbon monoxide (CO) in North China. Satellite observations of CO from the measurements of pollution in the troposphere (MOPITT) instrument are combined with model results for the analysis. The comparison between the model simulations and the satellite observations of total column CO (TCO) shows that the model can reproduce the spatial and temporal distributions. However, the model results underestimate TCO by 23% in North China. This underestimation of TCO may be caused by uncertainties in the emissions. The tropospheric CO budget analysis suggests that in North China, surface emission is the largest source of tropospheric CO. The main sinks of tropospheric CO in this region are chemical reaction and stratosphere-troposphere exchange. The analysis also shows that most of the CO inflow to the Pacific region comes from the upwind regions of North China. This transport of CO is significant during winter and spring.

18.
The Southern California Air Quality Study database provides a valuable resource with which to test urban-scale photochemical models and to achieve a better understanding of the atmospheric dynamics of pollutant formation. The CIT model was evaluated using the SCAQS database according to traditional model performance guidelines. A first application, reported previously, focused on model enhancement and application of the model to the 27–29 August 1987 episode. This study evaluates the CIT model using the 24–25 June SCAQS episode, providing further evaluation of the model. Results show that the CIT airshed model can follow the diurnal variations of reactive species and the transport for relatively unreactive species. The normalized gross error for ozone was 31% in June compared to 38% in August. However, to fully judge model performance in proper perspective, a question arises: “How well do the measurements reflect the air quality surrounding the monitoring station, not just in that location?” This is an important but seldom quantitatively considered factor, not only in model evaluation but in the study of health effects as well. Analyses indicate that individual concentration measurements only approximately represent the true volume-averaged concentrations within a computational grid cell and that significant spatial variations exist. Thus any evaluation of models using these data sets should take these local variations into consideration. A series of tests found that the local inhomogeneities had a normalized gross error in the range of 25–45% depending on the pollutant. In this context, the performance of the CIT model is consistent with known modeling limitations such as emissions inventories and sub-grid scale variation of observations.

19.
Detailed hourly precipitation data are required for long-range modeling of dispersion and wet deposition of particulate matter and water-soluble pollutants using the CALPUFF model. In sparsely populated areas such as the north central United States, ground-based precipitation measurement stations may be too widely spaced to offer a complete and accurate spatial representation of hourly precipitation within a modeling domain. The availability of remotely sensed precipitation data by satellite and the National Weather Service array of next-generation radars (NEXRAD) deployed nationally provide an opportunity to improve on the paucity of data for these areas. Before adopting a new method of precipitation estimation in a modeling protocol, it should be compared with the ground-based precipitation measurements, which are currently relied upon for modeling purposes. This paper presents a statistical comparison between hourly precipitation measurements for the years 2006 through 2008 at 25 ground-based stations in the north central United States and radar-based precipitation measurements available from the National Centers for Environmental Prediction (NCEP) as Stage IV data at the nearest grid cell to each selected precipitation station. It was found that the statistical agreement between the two methods depends strongly on whether the ground-based hourly precipitation is measured to within 0.1 in/hr or to within 0.01 in/hr. The results of the statistical comparison indicate that it would be more accurate to use gridded Stage IV precipitation data in a gridded dispersion model for a long-range simulation than to rely on precipitation data interpolated between widely scattered rain gauges.

Implications:

The current reliance on ground-based rain gauges for precipitation events and hourly data for modeling of dispersion and wet deposition of particulate matter and water-soluble pollutants results in potentially large discontinuities in data coverage and the need to extrapolate data between monitoring stations. The use of radar-based precipitation data, which are available for the entire continental United States and nearby areas, would resolve these data gaps and provide a complete and accurate spatial representation of hourly precipitation within a large modeling domain.
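The sensitivity to gauge resolution reported above can be illustrated with a toy paired comparison. The "radar" series below is synthetic truth, not NCEP Stage IV data, and the two gauges differ only in their reporting resolution (0.01 vs 0.1 in/hr):

```python
truth = [0.00, 0.03, 0.12, 0.27, 0.08, 0.00, 0.41, 0.06]  # in/hr, synthetic

radar = truth[:]                             # pretend radar recovers truth
gauge_fine = [round(t, 2) for t in truth]    # gauge reporting to 0.01 in/hr
gauge_coarse = [round(t, 1) for t in truth]  # gauge reporting to 0.1 in/hr

def mean_abs_error(x, y):
    """Mean absolute difference between two co-located hourly series."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

mae_fine = mean_abs_error(radar, gauge_fine)
mae_coarse = mean_abs_error(radar, gauge_coarse)

# the coarser gauge disagrees with the (here perfect) radar much more:
# apparent radar-gauge agreement depends strongly on gauge resolution
assert mae_coarse > mae_fine
```

Even with a radar that is exactly right, the 0.1 in/hr gauge reports zeros for light rain and rounds every event, so part of any measured radar-gauge disagreement is an artifact of gauge resolution rather than radar error.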


20.
Source term estimation algorithms compute unknown atmospheric transport and dispersion modeling variables from concentration observations made by sensors in the field. Insufficient spatial and temporal resolution in the meteorological data as well as inherent uncertainty in the wind field data make source term estimation and the prediction of subsequent transport and dispersion extremely difficult. This work addresses the question: how many sensors are necessary in order to successfully estimate the source term and meteorological variables required for atmospheric transport and dispersion modeling? The source term estimation system presented here uses a robust optimization technique – a genetic algorithm (GA) – to find the combination of source location, source height, source strength, surface wind direction, surface wind speed, and time of release that produces a concentration field that best matches the sensor observations. The approach is validated using the Gaussian puff as the dispersion model in identical twin numerical experiments. The limits of the system are tested by incorporating additive and multiplicative noise into the synthetic data. The minimum requirements for data quantity and quality are determined by an extensive grid sensitivity analysis. Finally, a metric is developed for quantifying the minimum number of sensors necessary to accurately estimate the source term and to obtain the relevant wind information.
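A minimal sketch of the approach, with invented numbers and heavy simplifications: the search space is reduced to two unknowns (source location and strength), and a generic Gaussian kernel stands in for the Gaussian puff model. A real system, as the abstract describes, would also estimate source height, wind direction, wind speed, and release time, and would add noise to the synthetic observations.

```python
import math
import random

random.seed(0)  # reproducible identical-twin experiment

SENSORS = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]  # sensor x-positions (m)
L = 15.0  # assumed dispersion length scale of the stand-in kernel

def forward(x0, q):
    """Stand-in dispersion model: Gaussian kernel from a source of
    strength q at position x0 to each sensor (not a real puff model)."""
    return [q * math.exp(-((x - x0) ** 2) / (2.0 * L * L)) for x in SENSORS]

TRUE_X0, TRUE_Q = 40.0, 5.0
observations = forward(TRUE_X0, TRUE_Q)  # synthetic "sensor readings"

def cost(ind):
    """Sum of squared mismatches between predicted and observed."""
    return sum((p - o) ** 2 for p, o in zip(forward(*ind), observations))

def mutate(ind):
    x0, q = ind
    return (x0 + random.gauss(0.0, 5.0),
            max(0.1, q + random.gauss(0.0, 0.5)))

# tiny elitist GA: keep the best half, refill with mutated survivors
population = [(random.uniform(0.0, 100.0), random.uniform(0.1, 10.0))
              for _ in range(40)]
for _ in range(60):
    population.sort(key=cost)
    survivors = population[:20]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(20)]

best = min(population, key=cost)  # should land close to (40.0, 5.0)
```

With only one or two sensors the cost surface here would become flat or multimodal and the GA could match the observations with many wrong (x0, q) pairs, which is exactly the sensor-count question the paper quantifies.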
