Similar Articles
20 similar articles found (search time: 31 ms)
1.
The current requirements and status of air quality modeling of hazardous pollutants are reviewed. Many applications require the ability to predict the local impacts from industrial sources or large roadways as needed for community health characterization and evaluating environmental justice concerns. Such local-scale modeling assessments can be performed by using Gaussian dispersion models. However, these models have a limited ability to handle chemical transformations. A new generation of Eulerian grid-based models is now capable of comprehensively treating transport and chemical transformations of air toxics. However, they typically have coarse spatial resolution, and their computational requirements increase dramatically with finer spatial resolution. The authors present and discuss possible advanced approaches that can combine the grid-based models with local-scale information.

2.
Model development and testing tend to concentrate on how well models represent “reality” or reproduce measurements. However, there are many sources of uncertainty in modelling atmospheric pollution, and those responsible for decisions on abatement strategies need to use modelled scenarios without fear that inaccuracies and assumptions in the modelling may mislead them. This paper explores how techniques from risk assessment may be used to examine a modelling study systematically. Those assumptions and uncertainties which could have significant consequences, whether arising from the data used, the modelling itself, or factors omitted and incompleteness, may be identified using hazard and operability studies. This helps to target supporting studies—possibly using more complex models, or Monte Carlo uncertainty analysis—and to indicate potential implications to the decision makers. As a case study we have used work undertaken on uncertainties with the Abatement Strategies Assessment Model for the Task Force on Integrated Assessment Modelling under the Convention on Long-range Transboundary Air Pollution of the UN Economic Commission for Europe.

3.
Emissions of pollutants such as SO2 and NOx from external combustion sources can vary widely depending on fuel sulfur content, load, and transient conditions such as startup, shutdown, and maintenance/malfunction. While monitoring will automatically reflect variability from both emissions and meteorological influences, dispersion modeling has typically been conducted with a single constant peak emission rate. To respond to the need to account for emissions variability in addressing probabilistic 1-hr ambient air quality standards for SO2 and NO2, we have developed a statistical technique, the Emissions Variability Processor (EMVAP), which can account for emissions variability in dispersion modeling through Monte Carlo sampling from a specified frequency distribution of emission rates. Based upon initial AERMOD modeling of 1 to 5 years of actual meteorological conditions, EMVAP is used as a postprocessor to AERMOD to simulate hundreds or even thousands of years of concentration predictions. This procedure uses emissions varied hourly with a Monte Carlo sampling process based upon the user-specified emissions distribution, from which a probabilistic estimate of the controlling concentration can be obtained. EMVAP can also accommodate an advanced Tier 2 NO2 modeling technique that uses a varying ambient ratio method approach to determine the fraction of total oxides of nitrogen that are in the form of nitrogen dioxide. For the case of the 1-hr National Ambient Air Quality Standards (NAAQS, established for SO2 and NO2), a “critical value” can be defined as the highest hourly emission rate that would be simulated to satisfy the standard using air dispersion models assuming constant emissions throughout the simulation.
The critical value can be used as the starting point for a procedure like EMVAP that evaluates the impact of emissions variability and uses this information to determine an appropriate value to use for a longer term (e.g., 30-day) average emission rate that would still provide protection for the NAAQS under consideration. This paper reports on the design of EMVAP and its evaluation on several field databases that demonstrate that EMVAP produces a suitably modest overestimation of design concentrations. We also provide an example of an EMVAP application that involves a case in which a new emission limitation needs to be considered for a hypothetical emission unit that has infrequent higher-than-normal SO2 emissions.
Implications: Emissions of pollutants from combustion sources can vary widely depending on fuel sulfur content, load, and transient conditions such as startup and shutdown. While monitoring will automatically reflect this variability in measured concentrations, dispersion modeling is typically conducted with a single peak emission rate assumed to occur continuously. To realistically account for emissions variability in addressing probabilistic 1-hr ambient air quality standards for SO2 and NO2, the authors have developed a statistical technique, the Emissions Variability Processor (EMVAP), which can account for emissions variability in dispersion modeling through Monte Carlo sampling from a specified frequency distribution of emission rates.
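The core EMVAP idea, re-sampling hourly emission rates from a frequency distribution against fixed unit-emission dispersion results, can be sketched in a few lines. Everything below is illustrative: the dilution factors, the emission distribution, and the simplified design-value form (annual 99th percentile of hourly concentrations) are assumptions, not values or forms from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical unit-emission dilution factors (s/m^3) for one year of hours,
# as would come from an AERMOD run with a constant 1 g/s emission rate.
n_hours = 8760
chi_over_q = rng.lognormal(mean=-13.0, sigma=1.2, size=n_hours)

# Assumed frequency distribution of hourly SO2 emission rates (g/s):
# mostly normal operation, occasionally elevated (e.g., startup).
def sample_emissions(n):
    normal = rng.normal(100.0, 10.0, size=n)
    elevated = rng.normal(300.0, 30.0, size=n)
    is_elevated = rng.random(n) < 0.02          # 2% of hours elevated
    return np.where(is_elevated, elevated, normal)

# Simulate many synthetic "years" by re-sampling emissions against the same
# meteorology, then take the 99th percentile of hourly concentrations as a
# simplified stand-in for the 1-hr design-value form.
def design_value(n_years=200):
    vals = []
    for _ in range(n_years):
        conc = chi_over_q * sample_emissions(n_hours)   # g/m^3
        vals.append(np.percentile(conc, 99))
    return float(np.mean(vals))

constant_peak = np.percentile(chi_over_q * 300.0, 99)   # constant peak rate
variable = design_value()
print(variable < constant_peak)   # variability lowers the design value
```

The comparison at the end illustrates the paper's motivation: assuming the peak rate occurs continuously overstates the probabilistic design concentration when high emissions are actually infrequent.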

4.
Kirk Hatfield, Chemosphere, 1992, 25(12): 1753-1762
Land use regulations and air quality standards can be effective tools to control air pollution. Atmospheric transport/chemistry simulation models could be used to develop suitable regulations and standards; however, these models are not as efficient as air quality management models developed by embedding the governing equations for atmospheric transport/chemistry into an optimization framework. Formulations of two steady-state air quality management models are presented to facilitate the development or evaluation of land use strategies to protect regional air quality from pollution generated by distributed point or nonpoint sources. Both models are linear programs constructed with equations that describe steady-state atmospheric pollutant fate and transport. The first model determines feasible pollutant loading patterns for multiple land use activities to accommodate the greatest regional population. The second model ascertains patterns of expanded land use which have a minimum impact on air quality. The primary goal of this paper is to explain how air pollution and land use modeling may be coupled to create an effective management tool to aid scientists and engineers with decisions affecting air quality and land use. The secondary goal is to show the types of air quality and regulatory information which could be obtained from these models. This latter goal is attained through general conclusions drawn as a consequence of applying ‘duality theory.’
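The structure of such a management model, a linear program that maximizes accommodated population subject to air quality standards at receptors, can be illustrated with a toy two-activity example. The transfer coefficients, standards, and objective weights below are hypothetical, and the solver simply enumerates the vertices of the two-variable feasible region rather than using a general LP library.

```python
import numpy as np

# Toy steady-state management model: choose land-use activity levels x1, x2
# to maximize the population accommodated, subject to air quality standards
# at two receptors. a[i][j] is the concentration at receptor i per unit of
# activity j, as a dispersion model would supply. All numbers are made up.
a = np.array([[0.4, 0.9],      # receptor 1
              [0.7, 0.3]])     # receptor 2
standard = np.array([10.0, 10.0])    # allowed concentrations
pop_per_unit = np.array([3.0, 1.0])  # population per unit activity

# For a 2-variable LP the optimum lies at a vertex of the feasible region:
# the origin, the axis intercepts of each constraint, and the intersection
# of the two constraint lines.
candidates = [np.zeros(2)]
for i in range(2):
    for j in range(2):
        x = np.zeros(2)
        x[j] = standard[i] / a[i, j]
        candidates.append(x)
candidates.append(np.linalg.solve(a, standard))

feasible = [x for x in candidates
            if np.all(x >= -1e-9) and np.all(a @ x <= standard + 1e-9)]
best = max(feasible, key=lambda x: pop_per_unit @ x)
print(best, pop_per_unit @ best)
```

In a realistic application the constraint matrix would come from many receptors and sources, and the dual variables of the LP (the "duality theory" the abstract mentions) would price how binding each air quality standard is.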

5.
6.
We present a numerical study of scalar transport released from a line source downstream of a square obstacle to investigate the capabilities and limitations of gradient-transport modeling in predicting atmospheric dispersion. The standard k-ε and k-ω models and a Reynolds stress transport closure are employed and compared to predict the time-averaged turbulent flow field, while a standard gradient–diffusion model is initially adopted to relate the scalar flux to mean gradients of the concentration field. The analysis of two algebraic closures for turbulent scalar fluxes based on the generalized-gradient–diffusion hypothesis and its quadratic extension is also presented. In spite of the rather simple flow setup, where both the flow and the scalar fields can be assumed homogeneous in the spanwise direction, the analysis clarifies several critical issues concerning gradient-transport type models. We established the dominant role of predicted turbulent kinetic energy on scalar dispersion when a scalar diffusivity is employed, irrespective of the Reynolds stress closure adopted for the averaged momentum equation. Moreover, the standard gradient–diffusion hypothesis failed to predict the streamwise component of the scalar flux, which is characterized by a counter-gradient-transport mechanism. Although the resulting contribution in the averaged scalar transport equation is small in the present flow configuration, this limitation can become severe for strongly inhomogeneous flows in the presence of point sources, where the spread of the scalar plume is essentially three-dimensional. The predictive capabilities of gradient-transport type modeling are found clearly improved using algebraic closures, which appear to represent a promising tool for predicting atmospheric dispersion in complex flows when unsteady transport mechanisms are not dominant.
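The contrast between the two closure families can be shown numerically: with an isotropic eddy diffusivity the modelled flux is always anti-parallel to the concentration gradient, whereas the generalized form, which contracts the Reynolds-stress tensor with the gradient, can produce a streamwise flux component even when the streamwise mean gradient vanishes. The model constants and the sample stress tensor below are typical illustrative values, not numbers from the study.

```python
import numpy as np

# Two closures for the turbulent scalar flux <u_i' c'>:
#   SGDH: -（nu_t / Sc_t) * dC/dx_i, with nu_t = C_mu k^2 / eps (isotropic)
#   GGDH: -C_s (k/eps) <u_i' u_j'> dC/dx_j (uses the full stress tensor)
C_MU, SC_T, C_S = 0.09, 0.7, 0.3

def sgdh_flux(k, eps, grad_c):
    nu_t = C_MU * k**2 / eps
    return -(nu_t / SC_T) * grad_c

def ggdh_flux(k, eps, reynolds_stress, grad_c):
    return -C_S * (k / eps) * reynolds_stress @ grad_c

k, eps = 0.5, 0.1
grad_c = np.array([0.0, -2.0, 0.0])     # cross-stream gradient only
stress = np.array([[0.4, 0.1, 0.0],     # anisotropic <u_i' u_j'> (example)
                   [0.1, 0.3, 0.0],
                   [0.0, 0.0, 0.3]])

f_sgdh = sgdh_flux(k, eps, grad_c)
f_ggdh = ggdh_flux(k, eps, stress, grad_c)
print(f_sgdh)   # zero streamwise component
print(f_ggdh)   # nonzero streamwise component via the shear stress
```

This is exactly the failure mode the abstract describes: with no streamwise mean gradient, SGDH predicts zero streamwise flux by construction, while the algebraic (GGDH-type) closure does not.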

7.
The predictive potential of air quality models, and thus their value in emergency management and public health support, is critically dependent on the quality of their meteorological inputs. The atmospheric flow is the primary cause of the dispersion of airborne substances. The scavenging of pollutants by cloud particles and precipitation is an important sink of atmospheric pollution and subsequently determines the spatial distribution of the deposition of pollutants. The long-standing problem of the spin-up of clouds and precipitation in numerical weather prediction models limits the accuracy of the prediction of short-range dispersion and deposition from local sources. The resulting errors in the atmospheric concentration of pollutants also affect the initial conditions for the calculation of the long-range transport of these pollutants. Customarily, the spin-up problem is avoided by using only NWP (Numerical Weather Prediction) forecasts with a lead time greater than the spin-up time of the model. Because uncertainty increases with forecast range, this reduces the quality of the associated forecasts of the atmospheric flow. This article discusses recent improvements, through diabatic initialization, in the spin-up of large-scale precipitation in the Hirlam NWP model. A synthetic example using a puff dispersion model demonstrates the effect of these improvements on the deposition and dispersion of pollutants with a high scavenging coefficient, such as sulphur, and with a low scavenging coefficient, such as cesium-137. The analysis presented in this article leads to the conclusion that, at least for situations where large-scale precipitation dominates, the improved model has a limited spin-up, so that its full forecast range can be used. The implication for dispersion modeling is that the improved model is particularly useful for short-range forecasts and the calculation of local deposition.
The sensitivity of the hydrological processes to proper initialization implies that the spin-up problem may recur with changes in the model and increased model resolution. Spin-up should be an ongoing concern for atmospheric modelers.
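The role the scavenging coefficient plays here can be made concrete with the usual first-order washout model, where the airborne fraction decays as exp(-Λt) during precipitation. The coefficients below are illustrative orders of magnitude only, not values from the paper, but they show why precipitation errors matter far more for a strongly scavenged species than for a weakly scavenged one.

```python
import math

# First-order below-cloud washout: C(t) = C0 * exp(-Lambda * t), so the
# fraction of the airborne mass deposited after t seconds of rain is
# 1 - exp(-Lambda * t). Lambda values are illustrative assumptions.
def fraction_deposited(scavenging_coeff_per_s, hours_of_rain):
    t = hours_of_rain * 3600.0
    return 1.0 - math.exp(-scavenging_coeff_per_s * t)

high = fraction_deposited(1e-4, 3.0)   # strongly scavenged species
low = fraction_deposited(1e-6, 3.0)    # weakly scavenged species
print(round(high, 2), round(low, 3))   # → 0.66 0.011
```

Three hours of spuriously spun-up (or missing) rain removes about two thirds of a strongly scavenged pollutant but only about one percent of a weakly scavenged one, which is why the spin-up of large-scale precipitation dominates short-range deposition errors.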

8.
Emission data needed as input for the operation of atmospheric models should be more than spatially and temporally resolved: another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge of emission heights, this paper provides typical default values for the driving parameters (stack height and flue gas temperature, velocity, and flow rate) for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights, applying equations used in Gaussian dispersion models, shows significant differences depending on source and air pollutant, as well as differences from the approaches currently used in atmospheric transport modelling.
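A bottom-up calculation of this kind combines stack height with a plume-rise estimate driven by exactly the parameters the paper tabulates. As one common example (an assumption here, the paper does not specify its equations), the Briggs final-rise formulas for buoyant plumes in neutral/unstable conditions can be applied; the stack parameters below are hypothetical.

```python
# Effective emission height = stack height + plume rise. The Briggs
# final-rise formulas for buoyant plumes (as used in Gaussian screening
# models) are one common choice; shown for neutral/unstable conditions.
G = 9.81  # m/s^2

def buoyancy_flux(stack_diam_m, exit_vel_m_s, gas_temp_K, ambient_temp_K):
    # Buoyancy flux parameter F (m^4/s^3).
    return G * exit_vel_m_s * stack_diam_m**2 / 4.0 * \
        (gas_temp_K - ambient_temp_K) / gas_temp_K

def effective_height(stack_h_m, stack_diam_m, exit_vel_m_s,
                     gas_temp_K, ambient_temp_K, wind_m_s):
    F = buoyancy_flux(stack_diam_m, exit_vel_m_s, gas_temp_K, ambient_temp_K)
    if F < 55.0:
        rise = 21.425 * F**0.75 / wind_m_s
    else:
        rise = 38.71 * F**0.6 / wind_m_s
    return stack_h_m + rise

# Hypothetical power-plant stack (values illustrative, not from the paper).
h_eff = effective_height(stack_h_m=150.0, stack_diam_m=5.0,
                         exit_vel_m_s=20.0, gas_temp_K=420.0,
                         ambient_temp_K=288.0, wind_m_s=5.0)
print(round(h_eff))
```

For a tall buoyant stack the effective height can be several times the physical stack height, which is why defaulting it crudely in transport models distorts modelled concentrations.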

9.
Uncertainty factors in atmospheric dispersion models may reduce the reliability of model predictions. A model's ability to assimilate measurement data is helpful for improving its predictions. In this paper, data assimilation based on the ensemble Kalman filter (EnKF) is introduced into a Monte Carlo atmospheric dispersion model (MCADM) designed for assessing the consequences of an accidental release of radionuclides. A twin experiment was performed in which simulated ground-level dose rates were assimilated. Uncertainties in the source term and in the turbulence intensity of the wind field are considered separately. Methodologies and preliminary results of the application are described. It is shown that data assimilation can reduce the discrepancy between the model forecast and the true situation: about 80% of the error caused by uncertainty in the source term is removed, and about 50% of that caused by uncertainty in the turbulence intensity.
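The heart of an EnKF twin experiment is the analysis step: an ensemble of uncertain source terms is pushed through the forward model, and the ensemble covariance between state and predicted observation supplies the Kalman gain. The sketch below uses a deliberately trivial linear forward model (a stand-in for the dispersion model) and synthetic numbers; it is a minimal illustration of the update, not the paper's MCADM system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: ground-level dose rate proportional to source strength.
def forward(q):
    return 0.5 * q   # hypothetical dose rate per unit source strength

true_q = 10.0
obs = forward(true_q) + rng.normal(0.0, 0.1)   # noisy "measured" dose rate
obs_var = 0.1**2

# Ensemble of prior source-strength estimates, biased high on purpose.
n_ens = 100
ensemble = rng.normal(20.0, 5.0, size=n_ens)

# EnKF analysis step: gain from ensemble covariances, perturbed observations.
predicted = forward(ensemble)
cov_qy = np.cov(ensemble, predicted)[0, 1]
var_y = np.var(predicted, ddof=1) + obs_var
gain = cov_qy / var_y
perturbed_obs = obs + rng.normal(0.0, 0.1, size=n_ens)
analysis = ensemble + gain * (perturbed_obs - predicted)

prior_err = abs(ensemble.mean() - true_q)
post_err = abs(analysis.mean() - true_q)
print(prior_err, post_err)   # assimilation pulls the mean toward the truth
```

In the actual application the state also includes wind-field turbulence intensity and the forward model is the Monte Carlo dispersion code, but the update algebra is the same.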

10.
Receptor modeling techniques like chemical mass balance are used to attribute pollution levels at a point to different sources. Here we analyze the composition of particulate matter and use the source profiles of sources prevalent in a region to estimate quantitative source contributions. In dispersion modeling, on the other hand, the emission rates of various sources together with meteorological conditions are used to determine the concentration levels at a point or in a region. The predictions using these two approaches are often inconsistent. In this work these differences are attributed to errors in the emission inventory. An algorithm for coupling receptor and dispersion models is proposed to reduce the differences between the two predictions and determine the emission rates accurately. The proposed combined approach helps reconcile the differences arising when the two approaches are used in a stand-alone mode. This work assumes that the models themselves are perfect and uses a model-to-model comparison to illustrate the concept.
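A minimal version of such a coupling scales each source's inventory emission rate by the ratio of its receptor-model contribution to its dispersion-model contribution and iterates. This generic scheme is an illustration of the reconciliation idea, not the specific algorithm proposed in the paper; all numbers are synthetic.

```python
import numpy as np

# Transfer coefficients: concentration at the receptor per unit emission
# rate for each source, as a dispersion model would provide (hypothetical).
transfer = np.array([0.02, 0.05, 0.01])        # (ug/m^3) per (g/s)
true_emissions = np.array([100.0, 40.0, 250.0])
receptor_contrib = transfer * true_emissions   # "observed" apportionment

# Start from an erroneous emission inventory and iteratively rescale each
# source until the dispersion-model contributions match the receptor model.
emissions = np.array([150.0, 20.0, 400.0])
for _ in range(10):
    modelled = transfer * emissions
    emissions = emissions * receptor_contrib / modelled

print(emissions)   # recovers the true emission rates
```

With one receptor per source, as here, the fix is exact in one pass; with many shared receptors the same ratio update becomes an iterative fitting problem, which is where a formal coupling algorithm earns its keep.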

11.
Changes in ecosystem function at Rocky Mountain National Park (RMNP) are occurring because of emissions of nitrogen and sulfate species along the Front Range of the Colorado Rocky Mountains, as well as sources farther east and west. The nitrogen compounds include both oxidized and reduced nitrogen. A year-long monitoring program of various oxidized and reduced nitrogen species was initiated to better understand their origins as well as the complex chemistry occurring during transport from source to receptor. Specifically, the goals of the study were to characterize the atmospheric concentrations of nitrogen species in gaseous, particulate, and aqueous phases (precipitation and clouds) along the east and west sides of the Continental Divide; identify the relative contributions to atmospheric nitrogen species in RMNP from within and outside of the state of Colorado; identify the relative contributions to atmospheric nitrogen species in RMNP from emission sources along the Colorado Front Range versus other areas within Colorado; and identify the relative contributions to atmospheric nitrogen species from mobile sources, agricultural activities, and large and small point sources within the state of Colorado. Measured ammonia concentrations are combined with modeled releases of conservative tracers from ammonia source regions around the United States to apportion ammonia to its respective sources, using receptor modeling tools.

Implications: Increased deposition of nitrogen in RMNP has been demonstrated to contribute to a number of important ecosystem changes. The rate of deposition of nitrogen compounds in RMNP has crossed a crucial threshold called the “critical load.” This means that changes are occurring to park ecosystems and that these changes may soon reach a point where they are difficult or impossible to reverse. Several key issues need attention to develop an effective strategy for protecting park resources from adverse impacts of elevated nitrogen deposition. These include determining the importance of previously unquantified nitrogen inputs within the park and identification of important nitrogen sources and transport pathways.

12.
In order to estimate the health benefits of reducing mobile source emissions, analysts typically use detailed atmospheric models to estimate the change in population exposure that results from a given change in emissions. However, this may not be feasible in settings where data are limited or policy decisions are needed in the short term. Intake fraction (iF), defined as the fraction of emissions of a pollutant or its precursor that is inhaled by the population, is a metric that can be used to compare exposure assessment methods in a health benefits analysis context. To clarify the utility of rapid-assessment methods, we calculate particulate matter iFs for the Mexico City Metropolitan Area using five methods, some more resource intensive than others. First, we create two simple box models to describe dispersion of primary fine particulate matter (PM2.5) in the Mexico City basin. Second, we extrapolate iFs for primary PM2.5, ammonium sulfate, and ammonium nitrate from US values using a regression model. Third, we calculate iFs by assuming a linear relationship between emissions and population-weighted concentrations of primary PM2.5, ammonium nitrate, and ammonium sulfate (a particle composition method). Finally, we estimate PM iFs from detailed atmospheric dispersion and chemistry models run for only a short period of time. Intake fractions vary by up to a factor of five, from 23 to 120 per million for primary PM2.5. Estimates of 60, 7, and 0.7 per million for primary PM, secondary ammonium sulfate, and secondary ammonium nitrate, respectively, represent credible central estimates, with an approximate factor of two uncertainty surrounding each estimate. Our results emphasize that multiple rapid-assessment methods can provide meaningful estimates of iFs in resource-limited environments, and that formal uncertainty analysis, with special attention to model biases and uncertainty, would be important for health benefits analyses.
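The simplest of the methods the abstract lists, a one-compartment box model, fits in a few lines: at steady state the basin-average concentration is Q/(u·H·W), so emissions cancel out of the intake fraction and only population, breathing rate, and ventilation remain. The inputs below are generic illustrative values, not the study's Mexico City parameters.

```python
# One-compartment box model of intake fraction (iF): the fraction of
# emitted mass inhaled by the exposed population. Illustrative inputs only.
def intake_fraction(population, breathing_rate_m3_s,
                    mixing_height_m, box_width_m, wind_speed_m_s):
    """Steady-state box: C = Q/(u*H*W), so iF = P * BR / (u*H*W)."""
    ventilation = wind_speed_m_s * mixing_height_m * box_width_m  # m^3/s
    return population * breathing_rate_m3_s / ventilation

# Hypothetical megacity-scale inputs.
iF = intake_fraction(population=20e6,
                     breathing_rate_m3_s=12.0 / 86400,  # ~12 m^3/day
                     mixing_height_m=1000.0,
                     box_width_m=50e3,
                     wind_speed_m_s=2.0)
print(round(iF * 1e6, 1), "per million")   # → 27.8 per million
```

The result lands within the 23–120 per million range the study reports for primary PM2.5, which is the point of a rapid-assessment method: order-of-magnitude exposure estimates from a handful of readily available inputs.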

13.
Black carbon (BC), a constituent of particulate matter, is emitted from multiple combustion sources, complicating determination of contributions from individual sources or source categories from monitoring data. In close proximity to an airport, these may include aircraft emissions, other emissions on the airport grounds, and nearby major roadways, and it would be valuable to determine the factors most strongly related to measured BC concentrations. In this study, continuous BC concentrations were measured at five monitoring sites in proximity to a small regional airport in Warwick, Rhode Island, from July 2005 to August 2006. Regression was used to model the relative contributions of aircraft and related sources, using real-time flight activity (departures and arrivals) and meteorological data, including mixing height, wind speed, and wind direction. The latter two were included as a nonparametric smooth spatial term using thin-plate splines applied to wind velocity vectors and fit in a linear mixed model framework. Standard errors were computed using a moving-block bootstrap to account for temporal autocorrelation. Results suggest significant positive associations between hourly departures and arrivals at the airport and BC concentrations within the community, with departures having a more substantial impact. Generalized additive models for wind speed and direction were consistent with significant contributions from the airport, major highway, and multiple local roads. Additionally, inverse mixing height, temperature, precipitation, and, at one location, relative humidity were associated with BC concentrations. Median contribution estimates indicate that aircraft departures and arrivals (and other sources coincident in space and time) contribute approximately 24–28% of the BC concentrations at the monitoring sites in the community.
Our analysis demonstrated that a regression-based approach with detailed meteorological and source characterization can provide insights about source contributions, which could be used to devise control strategies or to provide monitor-based comparisons with source-specific atmospheric dispersion models.
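The moving-block bootstrap mentioned above is worth sketching, because it is what makes the standard errors honest for autocorrelated time series: resampling contiguous blocks preserves the within-block correlation that a naive i.i.d. formula ignores. The sketch below applies it to the mean of a synthetic AR(1) series rather than to the study's regression coefficients; the series, block length, and replicate count are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic autocorrelated "BC-like" hourly series: AR(1) with rho = 0.8.
n, block = 2000, 48
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()
x += 5.0

def block_bootstrap_se(series, block_len, n_boot=500):
    """Moving-block bootstrap SE of the mean: resample contiguous blocks."""
    m = len(series)
    n_blocks = m // block_len
    means = []
    for _ in range(n_boot):
        starts = rng.integers(0, m - block_len + 1, size=n_blocks)
        sample = np.concatenate([series[s:s + block_len] for s in starts])
        means.append(sample.mean())
    return float(np.std(means))

naive_se = x.std(ddof=1) / np.sqrt(n)   # ignores autocorrelation
block_se = block_bootstrap_se(x, block)
print(naive_se, block_se)
```

For positively autocorrelated data the block-bootstrap standard error comes out substantially larger than the naive one, which is exactly why the study needed it before declaring the flight-activity associations significant.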

14.
Source term estimation algorithms compute unknown atmospheric transport and dispersion modeling variables from concentration observations made by sensors in the field. Insufficient spatial and temporal resolution in the meteorological data, as well as inherent uncertainty in the wind field data, make source term estimation and the prediction of subsequent transport and dispersion extremely difficult. This work addresses the question: how many sensors are necessary in order to successfully estimate the source term and meteorological variables required for atmospheric transport and dispersion modeling? The source term estimation system presented here uses a robust optimization technique – a genetic algorithm (GA) – to find the combination of source location, source height, source strength, surface wind direction, surface wind speed, and time of release that produces a concentration field that best matches the sensor observations. The approach is validated using the Gaussian puff as the dispersion model in identical twin numerical experiments. The limits of the system are tested by incorporating additive and multiplicative noise into the synthetic data. The minimum requirements for data quantity and quality are determined by an extensive grid sensitivity analysis. Finally, a metric is developed for quantifying the minimum number of sensors necessary to accurately estimate the source term and to obtain the relevant wind information.
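The GA-based estimation loop can be illustrated end to end on a reduced problem: a known "true" source generates synthetic sensor readings through a forward dispersion model, and a small GA searches source location and strength to minimize the mismatch. The steady plume stand-in (in place of the paper's Gaussian puff), the dispersion coefficient, and the GA settings below are all simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simplified steady Gaussian-type plume for a ground-level source; a crude
# stand-in for the Gaussian puff model used in the paper (sigma_y = 0.1 x).
def plume(x, y, src_x, src_y, q, u=3.0):
    dx = np.maximum(x - src_x, 1.0)            # downwind receptors only
    sigma_y = 0.1 * dx
    return q / (np.pi * u * sigma_y**2) * \
        np.exp(-(y - src_y)**2 / (2 * sigma_y**2))

# Synthetic sensor network "observing" a known true source (twin setup).
sx = rng.uniform(200.0, 1000.0, size=25)
sy = rng.uniform(-300.0, 300.0, size=25)
truth = np.array([50.0, 20.0, 8.0])            # (src_x, src_y, strength)
obs = plume(sx, sy, *truth)

def cost(p):
    return np.sqrt(np.mean((plume(sx, sy, *p) - obs) ** 2))

# Minimal GA: pairwise tournament selection plus Gaussian mutation. The
# current best individual always wins its pairing, so best cost never rises.
pop = rng.uniform([0.0, -100.0, 1.0], [200.0, 100.0, 20.0], size=(60, 3))
init_best = min(cost(p) for p in pop)
for _ in range(80):
    costs = np.array([cost(p) for p in pop])
    winners = np.where(costs[::2] < costs[1::2],
                       np.arange(0, 60, 2), np.arange(1, 60, 2))
    parents = pop[winners]
    children = parents + rng.normal(0.0, [5.0, 5.0, 0.5], size=parents.shape)
    pop = np.vstack([parents, children])

best = pop[np.argmin([cost(p) for p in pop])]
print(best, cost(best))
```

The full system additionally searches wind direction, wind speed, release height, and release time, and the sensor-count question becomes: how small can the `sx`, `sy` network get before this optimization stops converging to the truth.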

15.
16.

The main objective of this study was to investigate the capabilities of the receptor-oriented inverse mode Lagrangian Stochastic Particle Dispersion Model (LSPDM) with the 12-km resolution Mesoscale Model 5 (MM5) wind field input for the assessment of source identification from seven regions impacting two receptors located in the eastern United States. The LSPDM analysis was compared with a standard version of the Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) single-particle backward-trajectory analysis using inputs from MM5 and the Eta Data Assimilation System (EDAS) with horizontal grid resolutions of 12 and 80 km, respectively. The analysis included four 7-day summertime events in 2002; residence times in the modeling domain were computed from the inverse LSPDM runs and HYSPLIT-simulated backward trajectories started from receptor-source heights of 100, 500, 1000, 1500, and 3000 m. Statistics were derived using normalized values of LSPDM- and HYSPLIT-predicted residence times versus Community Multiscale Air Quality model-predicted sulfate concentrations used as baseline information. From 40 cases considered, the LSPDM identified first- and second-ranked emission region influences in 37 cases, whereas HYSPLIT-MM5 (HYSPLIT-EDAS) identified the sources in 21 (16) cases. The LSPDM produced a higher overall correlation coefficient (0.89) compared with HYSPLIT (0.55–0.62). The improvement of using the LSPDM is also seen in the overall normalized root mean square error values of 0.17 for LSPDM compared with 0.30–0.32 for HYSPLIT. The HYSPLIT backward trajectories generally tend to underestimate near-receptor sources because of a lack of stochastic dispersion of the backward trajectories and to overestimate distant sources because of a lack of treatment of dispersion. Additionally, the HYSPLIT backward trajectories showed a lack of consistency in the results obtained from different single vertical levels for starting the backward trajectories.
To alleviate problems due to selection of a backward-trajectory starting level within a large complex set of 3-dimensional winds, turbulence, and dispersion, results were averaged from all heights, which yielded uniform improvement against all individual cases.

Implications: Backward-trajectory analysis is one of the standard procedures for determining the spatial locations of possible emission sources affecting given receptors, and it is frequently used to enhance receptor modeling results. This analysis simplifies some of the relevant processes such as pollutant dispersion, and additional methods have been used to improve receptor-source relationships. A methodology of inverse Lagrangian stochastic particle dispersion modeling was used in this study to complement and improve standard backward-trajectory analysis. The results show that inverse dispersion modeling can identify regional sources of haze in national parks and other regions of interest.

17.
18.
The characteristics of an unknown source of emissions in the atmosphere are identified using an Adaptive Evolutionary Strategy (AES) methodology based on ground concentration measurements and a Gaussian plume model. The AES methodology selects an initial set of source characteristics, including position, size, mass emission rate, and wind direction, from which a forward dispersion simulation is performed. The error between the simulated concentrations from the tentative source and the observed ground measurements is calculated. Then the AES algorithm prescribes the next tentative set of source characteristics. The iteration proceeds towards minimum error, corresponding to convergence towards the real source. The proposed methodology was used to identify the source characteristics of 12 releases from the Prairie Grass dispersion field experiment, two for each atmospheric stability class, ranging from very unstable to stable atmosphere. The AES algorithm was found to have advantages over a simple canonical ES and a Monte Carlo (MC) method, which were used as benchmarks.

19.
Identification of hot spots for urban fine particulate matter (PM2.5) concentrations is complicated by the significant contributions from regional atmospheric transport and the dependence of spatial and temporal variability on averaging time. We focus on PM2.5 patterns in New York City, which includes significant local sources, street canyons, and upwind contributions to concentrations. A literature synthesis demonstrates that long-term (e.g., one-year) average PM2.5 concentrations at a small number of widely-distributed monitoring sites would not show substantial variability, whereas short-term (e.g., 1-h) average measurements with high spatial density would show significant variability. Statistical analyses of ambient monitoring data as a function of wind speed and direction reinforce the significance of regional transport but show evidence of local contributions. We conclude that current monitor siting may not adequately capture PM2.5 variability in an urban area, especially in a mega-city, reinforcing the necessity of dispersion modeling and methods for analyzing high-resolution monitoring observations.
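The averaging-time effect described above is easy to reproduce synthetically: give every site a shared regional signal plus an independent local contribution, and the spatial spread that dominates hourly data nearly vanishes in the annual means. All numbers below are synthetic and illustrative only.

```python
import numpy as np

rng = np.random.default_rng(7)

# Shared regional PM2.5 signal plus independent local contributions at
# each of five monitoring sites (synthetic, in arbitrary ug/m^3 units).
n_hours, n_sites = 8760, 5
regional = 10.0 + rng.normal(0.0, 2.0, n_hours)
sites = regional + rng.normal(0.0, 4.0, (n_sites, n_hours))

# Spatial spread across sites: typical hour vs. one-year averages.
hourly_spatial_sd = np.std(sites, axis=0).mean()
annual_spatial_sd = np.std(sites.mean(axis=1))
print(hourly_spatial_sd, annual_spatial_sd)
```

Because the independent local terms average toward zero over a year while the shared regional term affects all sites identically, sparse networks of long-term averages look deceptively uniform, which is the monitoring-design problem the study raises.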

20.
For operational or research purposes (dispersion computations of radioactive effluents during nuclear emergency situations, simulations of chemical pollution in the vicinity of thermal power plants), different models of passive dispersion in the atmosphere have been developed at the Environment Department of EDF's R&D Division. This report presents a comparison of the performance of three such models: DIFTRA (a Lagrangian puff model, with an operational goal), DIFEUL (three-dimensional Eulerian), and DIFPAR (a Monte Carlo particle model) for the simulation of the first ETEX release, an international tracer campaign during which a passive tracer cloud was followed over Europe. The results obtained in this study give model-to-measurement differences of the same order as those observed during an international model comparison exercise using data from the Chernobyl release, the ATMES exercise. In addition to the standard statistical scores used in evaluating the performance of transport models, two asymmetric scores (in contradistinction to the Figure of Merit in Space) are proposed: “efficiency” and “power”. Their aim is to separate the two ways in which a model may be wrong: by predicting the presence of pollutant where none is measured, or conversely by predicting absence where pollutant is actually detected.
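The distinction between the two error modes can be sketched with a presence/absence contingency computation. The abstract does not give the exact definitions of "efficiency" and "power", so the interpretation below, overlap normalized by the predicted-presence area versus by the measured-presence area, is an assumption; it captures the asymmetry the authors describe (false alarms versus misses) in the spirit of splitting the Figure of Merit in Space.

```python
import numpy as np

# Asymmetric presence/absence scores for a tracer cloud (interpretation,
# not the paper's exact formulas): "efficiency" penalises predicting
# tracer where none was measured; "power" penalises missing measured tracer.
def presence_scores(predicted, measured, threshold=0.1):
    p = predicted > threshold
    m = measured > threshold
    overlap = np.sum(p & m)
    efficiency = overlap / np.sum(p) if np.sum(p) else 0.0  # vs. false alarms
    power = overlap / np.sum(m) if np.sum(m) else 0.0       # vs. misses
    return efficiency, power

# Toy concentrations at six sampling stations (synthetic values).
pred = np.array([0.0, 0.5, 0.8, 0.3, 0.0, 0.2])
meas = np.array([0.0, 0.4, 0.0, 0.6, 0.5, 0.05])
eff, pow_ = presence_scores(pred, meas)
print(eff, pow_)   # the two scores differ: the model over- and under-predicts differently
```

A single symmetric overlap score would hide whether a model tends to smear the cloud too widely (low efficiency) or to miss parts of it (low power), which is exactly the diagnostic information the two scores separate.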
