Similar Articles
20 similar articles found.
1.
The contribution of vehicular traffic to air pollutant concentrations is often difficult to establish. This paper utilizes both time-series and simulation models to estimate vehicle contributions to pollutant levels near roadways. The time-series model used generalized additive models (GAMs) and fitted pollutant observations to traffic counts and meteorological variables. A one-year period (2004) was analyzed on a seasonal basis using hourly measurements of carbon monoxide (CO) and particulate matter less than 2.5 μm in diameter (PM2.5) monitored near a major highway in Detroit, Michigan, along with hourly traffic counts and local meteorological data. Traffic counts showed statistically significant and approximately linear relationships with CO concentrations in fall, and piecewise linear relationships in spring, summer, and winter. The same period was simulated using emission and dispersion models (Motor Vehicle Emissions Factor Model/MOBILE6.2; California Line Source Dispersion Model/CALINE4). CO emissions derived from the GAM were similar, on average, to those estimated by MOBILE6.2. The same analyses for PM2.5 showed that GAM emission estimates were much higher (by 4–5 times) than the dispersion model results, and that the traffic-PM2.5 relationship varied seasonally. This analysis suggests that the simulation model performed reasonably well for CO, but it significantly underestimated PM2.5 concentrations, a likely result of underestimating PM2.5 emission factors. Comparisons between statistical and simulation models can help identify model deficiencies and improve estimates of vehicle emissions and near-road air quality.
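As a rough illustration of the GAM step (not the paper's code), the sketch below fits smooth terms for traffic and meteorology to synthetic hourly CO and extracts the partial dependence of CO on traffic, the quantity the abstract compares against MOBILE6.2/CALINE4 estimates; the pygam package, the column names, and the data are all assumptions.

```python
# A minimal sketch of fitting a GAM to hourly CO; package, columns, and data assumed.
import numpy as np
import pandas as pd
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "traffic": rng.poisson(4000, n),          # hourly vehicle count (hypothetical)
    "wspd":    rng.gamma(2.0, 1.5, n),        # wind speed, m/s
    "temp":    rng.normal(10.0, 8.0, n),      # temperature, deg C
})
# Synthetic CO (ppm): rises with traffic, falls with wind speed.
df["co"] = 0.2 + 1e-4 * df["traffic"] - 0.05 * df["wspd"] + rng.normal(0, 0.1, n)

X = df[["traffic", "wspd", "temp"]].to_numpy()
y = df["co"].to_numpy()

# One smooth term per predictor; the traffic spline can capture the roughly
# linear or piecewise-linear shapes reported for the different seasons.
gam = LinearGAM(s(0) + s(1) + s(2)).fit(X, y)

# Partial dependence of CO on traffic: the slope of this curve plays the role
# of an emission-like response that can be compared with simulation estimates.
grid = gam.generate_X_grid(term=0)
co_response = gam.partial_dependence(term=0, X=grid)
gam.summary()
```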

2.
We have analyzed the possibility of predicting hourly averages of sulfur dioxide concentrations in the atmosphere at a site not far from the downtown area in the city of Santiago, Chile. We have compared the forecasts produced assuming persistence, linear regressions and feed-forward neural networks. The effect of meteorological conditions is included by using forecasted values of temperature, relative humidity and wind speed at the time of the intended prediction as inputs to the different models. The best predictions for hourly averages are obtained with a three-layer neural network whose inputs are hourly averages of sulfur dioxide concentrations every 6 h on the previous day plus the actual values of the meteorological variables. When the network is trained with 1995 data, the error in predictions made 8 h in advance for 1996 data is on the order of 30%.
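A minimal sketch of this kind of three-layer network is shown below, using scikit-learn's MLPRegressor; the seven-feature layout (four previous-day SO2 averages plus forecast temperature, relative humidity, and wind speed) and the synthetic data are assumptions, not the authors' configuration.

```python
# A minimal sketch of a three-layer feed-forward network for hourly SO2 forecasting.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 5000
# Features: previous-day SO2 averages every 6 h (4 values) plus forecast
# temperature, relative humidity, and wind speed at the prediction time.
X = rng.normal(size=(n, 7))
y = 0.5 * X[:, 0] + 0.3 * X[:, 4] - 0.2 * X[:, 6] + 0.1 * rng.normal(size=n)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0),  # one hidden layer
)
model.fit(X[:4000], y[:4000])       # e.g. train on one year of hours ...
pred = model.predict(X[4000:])      # ... then forecast the next period
rmse = float(np.sqrt(np.mean((pred - y[4000:]) ** 2)))
print(f"hold-out RMSE: {rmse:.3f}")
```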

3.
In this study, we introduce the prospect of using prognostic model-generated meteorological output as input to steady-state dispersion models by identifying possible advantages and disadvantages and by presenting a comparative analysis. Because output from prognostic meteorological models is now routinely available and is used for Eulerian and Lagrangian air quality modeling applications, we explore the possibility of using such data in lieu of traditional National Weather Service (NWS) data for dispersion models. We apply these data in an urban application where comparisons can be made between the two meteorological input data types. Using the American Meteorological Society/U.S. Environmental Protection Agency Regulatory Model (AERMOD) air quality dispersion model, hourly and annual average concentrations of benzene are estimated for the Philadelphia, PA, area using both hourly MM5 model-generated meteorological output and meteorological data taken from the NWS site at the Philadelphia International Airport. Our intent is to stimulate a discussion of the relevant issues and inspire future work that examines many of the questions raised in this paper.

4.
In order to make projections for future air-quality levels, a robust methodology is needed that succeeds in reconstructing present-day air-quality levels. At present, climate projections for meteorological variables are available from coupled Atmosphere-Ocean Global Climate Models (AOGCMs), but their temporal and spatial resolution is insufficient for air-quality assessment. Therefore, a variety of methods are tested in this paper for their ability to hindcast maximum 8-hourly levels of O3 and daily mean PM10 from observed meteorological data. The methods are based on a multiple linear regression technique combined with the automated Lamb weather classification. Moreover, we studied whether the above-mentioned multiple regression analysis still holds when driven by operational ECMWF (European Centre for Medium-Range Weather Forecasts) meteorological data. The main results show that a weather type classification prior to the regression analysis is superior to a simple linear regression approach. In contrast to PM10 downscaling, seasonal characteristics should be taken into account during the downscaling of O3 time series. Apart from a lower explained variance due to intrinsic limitations of the regression approach itself, a lower variability of the meteorological predictors (resolution effect) and model deficiencies, this synoptic-regression-based tool is generally able to reproduce the relevant statistical properties of the observed O3 distributions important in terms of European air quality Directives and air quality mitigation strategies. For PM10, the situation is different: the meteorological variables considered in this study were insufficient to explain the observed PM10 variability.
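The weather-type-stratified regression can be sketched as below: one multiple linear regression per Lamb weather type, fitted to daily predictors. The type labels, predictor names, and data are illustrative assumptions, not the paper's configuration.

```python
# A minimal sketch of weather-type-stratified regression downscaling.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1500
df = pd.DataFrame({
    "temp_max":  rng.normal(18, 7, n),                    # daily max temperature
    "wspd":      rng.gamma(2.0, 1.5, n),                  # mean wind speed
    "rh":        rng.uniform(30, 95, n),                  # relative humidity
    "lamb_type": rng.choice(["A", "C", "W", "NW", "SE"], n),
    "o3_max8h":  rng.gamma(9.0, 8.0, n),                  # observed max 8-h O3
})

predictors = ["temp_max", "wspd", "rh"]
models = {}
for wtype, grp in df.groupby("lamb_type"):
    # Separate regression coefficients for each synoptic weather type.
    models[wtype] = LinearRegression().fit(grp[predictors].to_numpy(),
                                           grp["o3_max8h"].to_numpy())

def downscale(row):
    """Hindcast one day's max 8-h O3 using the regression for its weather type."""
    x = row[predictors].to_numpy(dtype=float).reshape(1, -1)
    return float(models[row["lamb_type"]].predict(x)[0])

print(downscale(df.iloc[0]))
```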

5.
Detailed hourly precipitation data are required for long-range modeling of dispersion and wet deposition of particulate matter and water-soluble pollutants using the CALPUFF model. In sparsely populated areas such as the north central United States, ground-based precipitation measurement stations may be too widely spaced to offer a complete and accurate spatial representation of hourly precipitation within a modeling domain. The availability of remotely sensed precipitation data by satellite and the National Weather Service array of next-generation radars (NEXRAD) deployed nationally provide an opportunity to improve on the paucity of data for these areas. Before adopting a new method of precipitation estimation in a modeling protocol, it should be compared with the ground-based precipitation measurements, which are currently relied upon for modeling purposes. This paper presents a statistical comparison between hourly precipitation measurements for the years 2006 through 2008 at 25 ground-based stations in the north central United States and radar-based precipitation estimates available from the National Centers for Environmental Prediction (NCEP) as Stage IV data at the nearest grid cell to each selected precipitation station. It was found that the statistical agreement between the two methods depends strongly on whether the ground-based hourly precipitation is measured to within 0.1 in/hr or to within 0.01 in/hr. The results of the statistical comparison indicate that it would be more accurate to use gridded Stage IV precipitation data in a gridded dispersion model for a long-range simulation than to rely on precipitation data interpolated between widely scattered rain gauges.

Implications:

The current reliance on ground-based rain gauges for precipitation events and hourly data for modeling of dispersion and wet deposition of particulate matter and water-soluble pollutants results in potentially large discontinuities in data coverage and the need to extrapolate data between monitoring stations. The use of radar-based precipitation data, which are available for the entire continental United States and nearby areas, would resolve these data gaps and provide a complete and accurate spatial representation of hourly precipitation within a large modeling domain.


6.
Two models frequently used to simulate the dispersion of pollutants in the atmosphere have been compared. This is necessary because only a well-tested and well-calibrated simulation model can faithfully represent the actual dispersion of pollutants. The models evaluated (HYSPLIT_4 with its four variants and MEDIA) were run using the same meteorological dataset (for 23–26 October 1994) from the French model ARPEGE as input. The following aspects were compared: the space and time evolution of the pollutant cloud; the variation of statistical parameters in time and space; and the differences between the simulated and measured concentrations in time for six different stations. The results emphasise the characteristics of the two models and their abilities in the framework of air quality monitoring.

7.
In homeland security applications, it is often necessary to characterize the source location and strength of a potentially harmful contaminant. Correct source characterization requires accurate meteorological data such as wind direction. Unfortunately, available meteorological data are often inaccurate or unrepresentative, having insufficient spatial and temporal resolution for precise modeling of pollutant dispersion. To address this issue, a method is presented that simultaneously determines the surface wind direction and the pollutant source characteristics. This method compares monitored receptor data to pollutant dispersion model output and uses a genetic algorithm (GA) to find the combination of source location, source strength, and surface wind direction that best matches the dispersion model output to the receptor data. A GA optimizes variables using principles from genetics and evolution. The approach is validated with an identical-twin experiment using synthetic receptor data and a Gaussian plume equation as the dispersion model. Given sufficient receptor data, the GA is able to reproduce the wind direction, source location, and source strength. Additional runs incorporating white noise into the receptor data to simulate real-world variability demonstrate that the GA is still capable of computing the correct solution, as long as the magnitude of the noise does not exceed that of the receptor data.
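A compact illustration of the approach is sketched below: a ground-level Gaussian plume forward model and a simple genetic algorithm that searches for source position, emission rate, and wind direction in an identical-twin setup. The dispersion-coefficient curves, receptor layout, and GA settings are assumptions rather than the paper's configuration.

```python
# A minimal sketch of GA-based source/wind inversion against a Gaussian plume.
import numpy as np

rng = np.random.default_rng(1)

def plume(params, rx, ry, u=3.0):
    """Ground-level concentration at receptors (rx, ry) for a ground-level source."""
    xs, ys, q, wd = params                                  # x, y [m], rate [g/s], wind dir [rad]
    dx = (rx - xs) * np.cos(wd) + (ry - ys) * np.sin(wd)    # downwind distance
    dy = -(rx - xs) * np.sin(wd) + (ry - ys) * np.cos(wd)   # crosswind distance
    dx = np.maximum(dx, 1.0)                                # ignore upwind receptors
    sy, sz = 0.22 * dx**0.9, 0.20 * dx**0.85                # crude sigma-y, sigma-z curves
    return q / (np.pi * u * sy * sz) * np.exp(-0.5 * (dy / sy) ** 2)

# Identical-twin setup: synthetic observations from a hidden "true" source.
rx, ry = rng.uniform(-500, 1500, 30), rng.uniform(-1000, 1000, 30)
truth = np.array([100.0, -50.0, 5.0, np.deg2rad(20.0)])
obs = plume(truth, rx, ry)

lo = np.array([-500.0, -1000.0, 0.1, 0.0])
hi = np.array([1500.0, 1000.0, 20.0, 2.0 * np.pi])

def cost(pop):
    return np.array([np.sum((plume(p, rx, ry) - obs) ** 2) for p in pop])

pop = rng.uniform(lo, hi, size=(200, 4))
for _ in range(300):
    err = cost(pop)
    i, j = rng.integers(0, len(pop), size=(2, len(pop)))
    parents = np.where((err[i] < err[j])[:, None], pop[i], pop[j])   # tournament selection
    mates = parents[rng.permutation(len(parents))]
    mask = rng.random(parents.shape) < 0.5
    children = np.where(mask, parents, mates)                        # uniform crossover
    children += rng.normal(0.0, 0.02, children.shape) * (hi - lo)    # Gaussian mutation
    children[0] = pop[np.argmin(err)]                                # elitism: keep best
    pop = np.clip(children, lo, hi)

best = pop[np.argmin(cost(pop))]
print("recovered [x, y, q, wind dir]:", best.round(3))
print("true      [x, y, q, wind dir]:", truth.round(3))
```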

8.
In the paper, the performance of two Bulgarian dispersion models is tested against the European Tracer Experiment (ETEX) first-release database. The first one is the LED puff model, which was the core of the Bulgarian Emergency Response System during all releases of ETEX. The second one is the newly created Eulerian dispersion model EMAP. These models have two important features: they are PC-oriented and they use quite a limited amount of input meteorological information. First, a number of runs with various source configurations are made on meteorological data produced by ECMWF. The aim of these runs is to verify the models' ability to simulate reliably the ETEX first release. To this end, a set of statistical criteria selected in ATMES (Atmospheric Transport Models Evaluation Study; see Klug et al., 1992) is used. The best runs for both models are obtained when the source is represented as a column extending from the ground to heights of 400–700 m. These runs took part in the second phase of ETEX (ETEX-II), the so-called ATMES-type exercise, where EMAP ranked ninth and LED fourteenth among 34 models. Here, additional sets of EMAP runs are presented: in the first set the value of the horizontal diffusion coefficient is varied, and in the other sets different meteorological data sets are tested. The results of the first set show that values of Kh = 4–6 × 10⁴ m² s⁻¹ produce fields that fit the experimental data best. The other sets of runs show that the higher the frequency of the meteorological data, the better the simulation. The results can be improved by linear interpolation of the meteorological parameters with time, with the best fit obtained when interpolating at each time step.
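The ATMES-style scoring used to rank such runs typically combines bias, scatter, and spatial-overlap measures; a minimal sketch of a few of them is given below, applied to hypothetical paired observed and predicted tracer concentrations (not the ETEX data).

```python
# A minimal sketch of ATMES/ETEX-style evaluation statistics on paired samples.
import numpy as np

def fractional_bias(obs, pred):
    return 2.0 * (obs.mean() - pred.mean()) / (obs.mean() + pred.mean())

def nmse(obs, pred):
    return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

def factor_of_two(obs, pred):
    ok = (obs > 0) & (pred > 0)
    ratio = pred[ok] / obs[ok]
    return np.mean((ratio >= 0.5) & (ratio <= 2.0))

def figure_of_merit_in_space(obs, pred, threshold):
    """Percentage overlap of observed and predicted areas above a threshold."""
    a, b = obs >= threshold, pred >= threshold
    return 100.0 * np.sum(a & b) / max(np.sum(a | b), 1)

rng = np.random.default_rng(2)
obs = rng.gamma(1.5, 0.4, 500)                      # hypothetical tracer samples, ng/m3
pred = obs * rng.lognormal(0.1, 0.6, 500)           # a model that scatters around them
print("FB  :", round(fractional_bias(obs, pred), 3))
print("NMSE:", round(nmse(obs, pred), 3))
print("FA2 :", round(float(factor_of_two(obs, pred)), 3))
print("FMS :", round(figure_of_merit_in_space(obs, pred, 0.5), 1))
```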

9.
The paper introduces a new methodology for the prediction of daily PM10 concentrations, in line with the regulatory framework introduced through the EU Directive 2008/50/EC. The proposed approach is based on the efficient utilisation of the data collected over short time intervals (hourly) rather than the daily values used to derive the daily regulatory threshold. It is sufficiently simple and easily applicable in operational forecasting systems, with the ability to accept as inputs both historical data and exogenous parameters, such as meteorological variables. The application of the proposed methodology is demonstrated using data from five air pollutant monitoring stations in Athens over a five-year period (2000–2004), as well as compatible meteorological data from the NCEP (National Centers for Environmental Prediction). A set of different models, both univariate and multivariate and both linear and non-linear, was tested to reveal the effectiveness of the proposed approach. The analysis of all examined datasets provides conclusive evidence that the newly developed procedure, which utilises data collected over a shorter horizon, can significantly increase the forecasting ability of any model developed from daily historic PM10 data, under all examined metrics.

10.
In previous work [Kovalets, I., Andronopoulos, S., Bartzis, J.G., Gounaris, N., Kushchan, A., 2004. Introduction of data assimilation procedures in the meteorological pre-processor of atmospheric dispersion models used in emergency response systems. Atmospheric Environment 38, 457–467], the authors developed data assimilation (DA) procedures and implemented them within a diagnostic meteorological pre-processor (MPP) to enable simultaneous use of meteorological measurements with numerical weather prediction (NWP) data. The DA techniques were directly validated, showing a clear improvement of the MPP output quality in comparison with meteorological measurement data. In the current paper it is demonstrated that the application of DA procedures in the MPP, to combine meteorological measurements with NWP data, has a noticeable positive effect on the performance of an atmospheric dispersion model (ADM) driven by the MPP output. This result is particularly important for emergency response systems used for accidental releases of pollutants, because it provides the possibility to combine meteorological measurements with NWP data in order to achieve more reliable dispersion predictions. This is also an indirect way to validate the DA procedures applied in the MPP. The above goal is achieved by applying the Lagrangian ADM DIPCOT, driven by meteorological data calculated by the MPP code both with and without the use of DA procedures, to simulate the first European tracer experiment (ETEX I). The performance of the ADM in each case was evaluated by comparing the predicted and the experimental concentrations with the use of statistical indices and concentration plots. The comparison of resulting concentrations using the different sets of meteorological data showed that the activation of DA in the MPP code clearly improves the performance of the dispersion calculations in terms of plume shape and dimensions, location of maximum concentrations, statistical indices, and time variation of concentration at the detector locations.

11.
The performance of the AERMOD air dispersion model under low wind speed conditions, especially for applications with only one level of meteorological data and no direct turbulence measurements or vertical temperature gradient observations, is the focus of this study. The analysis documented in this paper addresses evaluations for low wind conditions involving tall stack releases for which multiple years of concurrent emissions, meteorological data, and monitoring data are available. AERMOD was tested on two field-study databases, each with several SO2 monitors, hourly emissions data, and sub-hourly meteorological data (e.g., 10-min averages), using several technical options: the regulatory default mode, various low wind speed beta options, and the available sub-hourly meteorological data. These field study databases included (1) Mercer County, a North Dakota database featuring five SO2 monitors within 10 km of the Dakota Gasification Company's plant and the Antelope Valley Station power plant in an area of both flat and elevated terrain, and (2) a flat-terrain setting database with four SO2 monitors within 6 km of the Gibson Generating Station in southwest Indiana. Both sites featured regionally representative 10-m meteorological databases, with no significant terrain obstacles between the meteorological site and the emission sources. The low wind beta options show improvement in model performance, helping to reduce some of the overprediction biases currently present in AERMOD when run with regulatory default options. The overall findings with the low wind speed testing on these tall stack field-study databases indicate that AERMOD low wind speed options have a minor effect for flat terrain locations, but can have a significant effect for elevated terrain locations. The use of the low wind speed options also improves the consistency of the meteorological conditions associated with the highest observed and predicted concentration events. The available sub-hourly modeling results using the Sub-Hourly AERMOD Run Procedure (SHARP) are relatively unbiased and show that this alternative approach should be seriously considered to address situations dominated by low-wind meander conditions.

Implications: AERMOD was evaluated with two tall stack databases (in North Dakota and Indiana) in areas of both flat and elevated terrain. AERMOD cases included the regulatory default mode, low wind speed beta options, and use of the Sub-Hourly AERMOD Run Procedure (SHARP). The low wind beta options show improvement in model performance (especially in higher terrain areas), helping to reduce some of the overprediction biases currently present in regulatory default AERMOD. The SHARP results are relatively unbiased and show that this approach should be seriously considered to address situations dominated by low-wind meander conditions.

12.
Emissions of pollutants such as SO2 and NOx from external combustion sources can vary widely depending on fuel sulfur content, load, and transient conditions such as startup, shutdown, and maintenance/malfunction. While monitoring will automatically reflect variability from both emissions and meteorological influences, dispersion modeling has typically been conducted with a single constant peak emission rate. To respond to the need to account for emissions variability in addressing probabilistic 1-hr ambient air quality standards for SO2 and NO2, we have developed a statistical technique, the Emissions Variability Processor (EMVAP), which can account for emissions variability in dispersion modeling through Monte Carlo sampling from a specified frequency distribution of emission rates. Based upon initial AERMOD modeling of 1 to 5 years of actual meteorological conditions, EMVAP is used as a postprocessor to AERMOD to simulate hundreds or even thousands of years of concentration predictions. This procedure uses emissions varied hourly with a Monte Carlo sampling process that is based upon the user-specified emissions distribution, from which a probabilistic estimate can be obtained of the controlling concentration. EMVAP can also accommodate an advanced Tier 2 NO2 modeling technique that uses a varying ambient ratio method approach to determine the fraction of total oxides of nitrogen that are in the form of nitrogen dioxide. For the case of the 1-hr National Ambient Air Quality Standards (NAAQS, established for SO2 and NO2), a “critical value” can be defined as the highest hourly emission rate that would be simulated to satisfy the standard using air dispersion models assuming constant emissions throughout the simulation. The critical value can be used as the starting point for a procedure like EMVAP that evaluates the impact of emissions variability and uses this information to determine an appropriate value to use for a longer term (e.g., 30-day) average emission rate that would still provide protection for the NAAQS under consideration. This paper reports on the design of EMVAP and its evaluation on several field databases that demonstrate that EMVAP produces a suitably modest overestimation of design concentrations. We also provide an example of an EMVAP application that involves a case in which a new emission limitation needs to be considered for a hypothetical emission unit that has infrequent higher-than-normal SO2 emissions.
Implications: Emissions of pollutants from combustion sources can vary widely depending on fuel sulfur content, load, and transient conditions such as startup and shutdown. While monitoring will automatically reflect this variability in measured concentrations, dispersion modeling is typically conducted with a single peak emission rate assumed to occur continuously. To realistically account for emissions variability in addressing probabilistic 1-hr ambient air quality standards for SO2 and NO2, the authors have developed a statistical technique, the Emissions Variability Processor (EMVAP), which can account for emissions variability in dispersion modeling through Monte Carlo sampling from a specified frequency distribution of emission rates.
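A minimal sketch of the post-processing idea, under simplified assumptions (a single receptor, a lognormal emissions distribution, and a plain 99th percentile of daily 1-hr maxima as the design value), is given below; it is not the EMVAP code.

```python
# A minimal sketch of Monte Carlo emissions-variability post-processing of
# unit-emission hourly dispersion results; all numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

hours_per_year = 8760
n_years = 1000                       # Monte Carlo "years" built from 1 modeled year

# Hypothetical hourly dispersion output for a unit emission rate (1 g/s) at one receptor.
unit_conc = rng.gamma(0.8, 2.0, hours_per_year)          # ug/m3 per g/s

def sample_emissions(n):
    """Assumed emission-rate frequency distribution (e.g., from CEMS records)."""
    return rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n)   # g/s

design_values = np.empty(n_years)
for k in range(n_years):
    conc = unit_conc * sample_emissions(hours_per_year)   # hourly varying emissions
    daily_max = conc.reshape(365, 24).max(axis=1)         # daily maximum 1-hr values
    design_values[k] = np.percentile(daily_max, 99)       # simplified design statistic

# Probabilistic estimate of the controlling concentration across Monte Carlo years.
print("median design value:", round(float(np.median(design_values)), 1))
print("95th percentile    :", round(float(np.percentile(design_values, 95)), 1))
```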

13.
Assimilating concentration data into an atmospheric transport and dispersion model can provide information to improve downwind concentration forecasts. The forecast model is typically a one-way coupled set of equations: the meteorological equations impact the concentration, but the concentration does not generally affect the meteorological field. Thus, indirect methods of using concentration data to influence the meteorological variables are required. The problem studied here involves a simple wind field forcing Gaussian dispersion. Two methods of assimilating concentration data to infer the wind direction are demonstrated. The first method is Lagrangian in nature and treats the puff as an entity using feature extraction coupled with nudging. The second method is an Eulerian field approach akin to traditional variational approaches, but minimizes the error by using a genetic algorithm (GA) to directly optimize the match between observations and predictions. Both methods show success at inferring the wind field. The GA-variational method, however, is more accurate but requires more computational time. Dynamic assimilation of a continuous release modeled by a Gaussian plume is also demonstrated using the genetic algorithm approach.

14.
A livestock odor dispersion model (LODM) was developed to predict odor concentration and odor frequency using routine hourly meteorological data as input. The odor concentrations predicted by the LODM were compared with the results obtained from other commercial models (Industrial Source Complex Short-Term model version 3 and CALPUFF) to evaluate its appropriateness. Two sets of field odor plume measurement data were used to validate the model. The model-predicted mean odor concentrations and odor frequencies were compared with those measured. Results show that this model performs well in predicting odor concentrations and odor frequencies.

15.
The study of extreme values and prediction of ozone data is an important topic of research when dealing with environmental problems. Classical extreme value theory is usually used in air-pollution studies. It consists of fitting a parametric generalised extreme value (GEV) distribution to a data set of extreme values, and using the estimated distribution to compute return levels and other quantities of interest. Here, we propose to estimate these values using nonparametric functional data methods. Functional data analysis is a relatively new statistical methodology that generally deals with data consisting of curves or multi-dimensional variables. In this paper, we use this technique, jointly with nonparametric curve estimation, to provide alternatives to the usual parametric statistical tools. The nonparametric estimators are applied to real samples of maximum ozone values obtained from several monitoring stations belonging to the Automatic Urban and Rural Network (AURN) in the UK. The results show that the nonparametric estimators work satisfactorily, outperforming classical parametric estimators. Functional data analysis is also used to predict stratospheric ozone concentrations. We show an application using the data set of mean monthly ozone concentrations in Arosa, Switzerland, and the results are compared with those obtained by classical time series (ARIMA) analysis.
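For reference, the classical parametric benchmark mentioned above can be sketched in a few lines with scipy: fit a GEV to block maxima and read off return levels. The block size and the synthetic data are assumptions.

```python
# A minimal sketch of the classical GEV return-level calculation.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(3)

# Hypothetical daily maximum ozone (ppb) for 20 years; annual maxima as blocks.
daily = rng.gamma(9.0, 6.0, size=(20, 365))
annual_maxima = daily.max(axis=1)

# Fit the GEV (scipy's shape parameter c corresponds to -xi in the usual convention).
c, loc, scale = genextreme.fit(annual_maxima)

# T-year return level: the value exceeded on average once every T years.
for T in (5, 10, 50):
    level = genextreme.isf(1.0 / T, c, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.1f} ppb")
```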

16.
The characteristics of an unknown source of emissions in the atmosphere are identified using an Adaptive Evolutionary Strategy (AES) methodology based on ground concentration measurements and a Gaussian plume model. The AES methodology selects an initial set of source characteristics, including position, size, mass emission rate, and wind direction, from which a forward dispersion simulation is performed. The error between the simulated concentrations from the tentative source and the observed ground measurements is calculated. Then the AES algorithm prescribes the next tentative set of source characteristics. The iteration proceeds towards minimum error, corresponding to convergence towards the real source. The proposed methodology was used to identify the source characteristics of 12 releases from the Prairie Grass dispersion field experiment, two for each atmospheric stability class, ranging from very unstable to stable atmospheres. The AES algorithm was found to have advantages over a simple canonical ES and a Monte Carlo (MC) method, which were used as benchmarks.

17.
A model was developed for remote terminal use to compare the costs of alternate designs of air quality monitoring networks with varying sophistication, ranging from totally manual to completely automated systems. Of special interest is the isolation of manual sample analysis and manual data analysis for comparison with instrumental sensors and automated data processing.

The model allows for 10 levels of sophistication, and 6 were used in sample runs. As many as 50 sampling-site locations, with three different site types, may be specified, and each site type may have any configuration of chemical and/or meteorological sensors. Amortization, labor rates, instrument costs and lifetimes, telephone line charges, and other variables are readily changed by the user as desired.

It was concluded that for systems with fewer than 20–25 sensors, fully automated systems may not be justified on cost alone, at least for producing the same data (hourly) as the manual or semiautomatic systems. (Much more than hourly data is obtained with the automated system, of course.) For larger systems, labor costs put the nonautomated systems at a disadvantage after a few years of operation.

18.
The success of the application of computer modeling to decision-making will depend on the degree to which the scientifically valid “cause-and-effect” features of the air pollution system are represented. For this reason, dynamic simulation models are to be preferred to statistical and empirical models. A digital simulation model based on a stoichiometrically logical chemical mechanism and trajectory estimating routines was constructed, using Los Angeles source, meteorological and geographic input. The basic physical concept underlying the simulation model is the process of evolution of photochemical pollution in a parcel of air as it moves in a dynamic urban emission/meteorological environment along a given urban wind trajectory. Both the photochemical evolution and the trajectory are numerically integrated by a standard linear multistep predictor-corrector method. Concentrations of photochemical reactants and products (i.e., primary and secondary contaminants) are determined by this numerical integration, which also includes appropriate terms for relevant effects. In five preliminary validation runs, simulated NO2, NO, and O3 values were within 20% or 0.05 ppm of those observed at air monitoring stations located near the termini of the runs. The trajectories were plotted on the basis of hourly meteorological data for 22 stations. Six control strategy exercises were conducted to illustrate the application of the model to problem-solving situations.
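The core numerical idea, integrating a chemical mechanism in a parcel advected along a trajectory, can be illustrated with a greatly reduced box model; the sketch below includes only the NO/NO2/O3 photostationary reactions, and the rate constants, emission term, and initial mixing ratios are assumed values, not the paper's mechanism.

```python
# A minimal photochemical box-model sketch for an advected air parcel.
import numpy as np
from scipy.integrate import solve_ivp

j_no2 = 0.4        # NO2 photolysis rate [1/min], midday-like value (assumed)
k_no_o3 = 25.0     # NO + O3 -> NO2 rate constant [1/(ppm*min)] (approximate)

def emissions_no(t):
    """Assumed fresh NO emission into the parcel along its path [ppm/min]."""
    return 2e-4 * np.exp(-t / 120.0)   # decays as the parcel leaves the source area

def rhs(t, y):
    no, no2, o3 = y
    photolysis = j_no2 * no2           # NO2 + hv -> NO + O3
    titration = k_no_o3 * no * o3      # NO + O3 -> NO2
    d_no = photolysis - titration + emissions_no(t)
    d_no2 = titration - photolysis
    d_o3 = photolysis - titration
    return [d_no, d_no2, d_o3]

y0 = [0.05, 0.03, 0.02]                # initial NO, NO2, O3 [ppm] (assumed)
sol = solve_ivp(rhs, (0.0, 480.0), y0, method="LSODA", max_step=1.0)

print("final NO, NO2, O3 (ppm):", sol.y[:, -1].round(3))
```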

19.
The relation of ambient levels of hydrocarbons to the products of atmospheric photochemistry has proved to be an elusive problem. Models to account for the photochemical processes are available based on laboratory examination of simulated atmospheres. Likewise, dispersion models are available which, for nonreacting species, can predict air quality given knowledge of emission rates and meteorological variables. However, integration of the dispersion model with the photochemical model is as yet an unsolved problem. In this study an empirical approach was applied in which the only assumption made was that there exists a relationship between early morning average hydrocarbon concentrations and subsequent maximum hourly average oxidant concentrations. A direct examination of all available days in several cities shows that, at any given hydrocarbon level, there exists a limit on the amount of oxidant which can be generated. Specifically it shows that the average 6:00–9:00 A.M. concentration of 0.3 ppm C nonmethane hydrocarbon can be expected to produce a maximum hourly average oxidant concentration of up to 0.1 ppm.
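The empirical upper-limit analysis can be mimicked as below: bin days by their morning NMHC average and take a high percentile of the same-day maximum hourly oxidant as the envelope. The synthetic data and the choice of the 95th percentile are assumptions.

```python
# A minimal sketch of an empirical upper-envelope analysis of oxidant vs. NMHC.
import numpy as np

rng = np.random.default_rng(7)

nmhc = rng.gamma(2.0, 0.25, 3000)                        # 6-9 a.m. NMHC [ppm C], synthetic
# Oxidant bounded by an NMHC-dependent ceiling, with day-to-day scatter below it.
oxidant = rng.uniform(0.1, 1.0, 3000) * (0.25 * nmhc / (0.5 + nmhc))

bins = np.linspace(0.0, 2.0, 11)
idx = np.digitize(nmhc, bins)
for b in range(1, len(bins)):
    sel = idx == b
    if sel.sum() < 20:
        continue
    ceiling = np.percentile(oxidant[sel], 95)            # empirical upper envelope
    print(f"NMHC {bins[b-1]:.1f}-{bins[b]:.1f} ppm C: "
          f"max hourly oxidant up to ~{ceiling:.3f} ppm")
```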

20.
Study on a grading index system for haze evaluation in Tianjin
Based on pollutant concentrations and meteorological records for haze days in Tianjin from 2003 to 2007, principal component analysis identified five main factors influencing haze: SO2, relative humidity, total cloud cover, PM10, and wind speed. Frequency statistics were compiled from the historical records of the three meteorological factors (relative humidity, total cloud cover, and wind speed), and grading criteria were established for each of them. A grading index system for haze evaluation in Tianjin was then constructed using the grey clustering method. The grading results show that days of light and severe haze are relatively few in Tianjin, with moderate haze dominating; light haze occurs mostly in spring and summer, while severe haze occurs mainly in winter and least often in spring. Overall, haze pollution is most severe in winter.
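As a rough illustration of the factor-screening step (the grey clustering grading is not reproduced here), the sketch below runs PCA on standardized haze-day variables and inspects the loadings; the variable names and synthetic data are assumptions, not the Tianjin records.

```python
# A minimal sketch of PCA-based screening of haze-related variables.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
df = pd.DataFrame({
    "SO2":   rng.gamma(3.0, 20.0, 400),       # ug/m3 (synthetic)
    "PM10":  rng.gamma(4.0, 40.0, 400),       # ug/m3
    "RH":    rng.uniform(40, 95, 400),        # relative humidity [%]
    "cloud": rng.uniform(0, 10, 400),         # total cloud cover [tenths]
    "wspd":  rng.gamma(2.0, 1.2, 400),        # wind speed [m/s]
})

X = StandardScaler().fit_transform(df)
pca = PCA(n_components=3).fit(X)

loadings = pd.DataFrame(pca.components_, columns=df.columns,
                        index=[f"PC{i+1}" for i in range(3)])
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
print(loadings.round(2))   # large absolute loadings flag the dominant haze factors
```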
