Similar documents (20 results)
1.
Wind-driven rain (WDR) is an important factor in the dry and wet deposition of atmospheric pollutants on building facades. In the past, different calculation models for WDR deposition on building facades have been developed and progressively improved. Today, the models that are most advanced and most frequently used are the semi-empirical model in the ISO Standard for WDR assessment (ISO), the semi-empirical model by Straube and Burnett (SB) and the CFD model by Choi. This paper compares the three models by applying them to four idealised buildings under steady-state conditions of wind and rain. In each case, the reference wind direction is perpendicular to the windward facade. For the CFD model, validation of wind-flow patterns and WDR deposition fluxes was performed in earlier studies. The CFD results are therefore considered as the reference case, and the performance of the two semi-empirical models is evaluated by comparison with the CFD results based on two criteria: (1) ability to model the wind-blocking effect on the WDR coefficient; and (2) ability to model the variation of the WDR coefficient with horizontal rainfall intensity Rh. It is shown that, unlike the CFD model, neither the ISO nor the SB model can reproduce the wind-blocking effect. The ISO model incorrectly provides WDR coefficients that are independent of Rh, while the SB model shows a dependency that is opposite to that predicted by CFD. In addition, the SB model can produce very large overestimations of the WDR deposition fluxes at the top and side edges of buildings (up to more than a factor of 5). The capabilities and deficiencies of the ISO and SB models, as identified in this paper, should be considered when applying these models for WDR deposition calculations. The results in this paper will be used for improvement and further development of these models.

2.
The conventional Gaussian plume equation for ground-level concentrations was used to estimate hourly average sulfur dioxide concentrations at selected points in Louisville, KY, on specific days during 1973. Area emission sources were not included in the model since they are not substantial. The trajectory of the emissions from each continuous point source was calculated by a procedure that allowed for spatial variability in wind direction. All other meteorological parameters were held constant during each hour. The twenty-four individual hourly estimates at each location for a given day were arithmetically averaged, yielding a daily mean. The model predictions were compared to actual measurements conducted by Jefferson County Air Pollution Control District personnel using the West-Gaeke sampling procedure. The sample correlation coefficient for all predictions was low, but after only about 30% of the predictions were eliminated on statistical grounds, the sample correlation coefficient increased to 0.72. The statistical analysis appeared to discard a reasonable number of predictions on the basis of observed variability in the measured air quality.
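For illustration, the ground-level form of the Gaussian plume equation for an elevated point source (with full reflection at the surface) can be sketched as below. The dispersion coefficients are taken as inputs rather than derived from stability class and downwind distance, and all names are illustrative assumptions rather than the study's actual implementation.

import math

def ground_level_concentration(Q, u, y, sigma_y, sigma_z, H):
    """Ground-level concentration (g/m3) downwind of an elevated point source.

    Q: emission rate (g/s); u: mean wind speed (m/s); y: crosswind distance (m);
    sigma_y, sigma_z: horizontal and vertical dispersion coefficients (m) evaluated
    at the receptor's downwind distance; H: effective stack height (m).
    Assumes total reflection of the plume at the ground.
    """
    return (Q / (math.pi * u * sigma_y * sigma_z)
            * math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
            * math.exp(-H ** 2 / (2.0 * sigma_z ** 2)))

Hourly concentrations computed in this way for each point source, summed over sources and averaged over the 24 hours of a day, would yield the kind of daily mean compared against the West-Gaeke measurements.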

3.
Urban air pollution has traditionally been modeled using annual or, at best, seasonal emissions inventories and climatology. These averaging techniques may introduce uncertainty into the analysis if specific emissions (e.g. SO2) are correlated with dispersion factors on a short-term basis. This may well be the case for space heating emissions. An analysis of this problem, using hourly climatological and residential emission estimates for six U.S. cities, indicates that the errors introduced using such averages are modest (~±12%) for annual average concentrations. Maximum hourly concentrations vary considerably more, since maximum heat demand and worst-case dispersion are in general not coincident. The paper thus provides a basis for estimating more realistic air pollution impacts due to residential space heating.

4.
A study was conducted on the Brigham Young University campus during January and February 2015 to identify winter-time sources of fine particulate material in Utah Valley, Utah. Fine particulate mass and components and related gas-phase species were all measured on an hourly averaged basis. Light scattering was also measured during the study. Included in the sampling was the first-time source apportionment application of a new monitoring instrument for the measurement of fine particulate organic marker compounds on an hourly averaged basis. Organic marker compounds measured included levoglucosan, dehydroabietic acid, stearic acid, pyrene, and anthracene. A total of 248 hourly averaged data sets were available for a positive matrix factorization (PMF) analysis of sources of both primary and secondary fine particulate material. A total of nine factors were identified. The presence of wood smoke emissions was associated with levoglucosan, dehydroabietic acid, and pyrene markers. Fine particulate secondary nitrate, secondary organic material, and wood smoke accounted for 90% of the fine particulate material. Fine particle light scattering was dominated by sources associated with wood smoke and secondary ammonium nitrate with associated modeled fine particulate water.

Implications: The identification of sources and secondary formation pathways leading to observed levels of PM2.5 (particulate matter with an aerodynamic diameter <2.5 μm) is important in making regulatory decisions on pollution control. The use of organic marker compounds in this assessment has proven useful; however, data obtained on a daily, or longer, sampling schedule limit the value of the information because diurnal changes associated with emissions and secondary aerosol formation cannot be identified. A new instrument, the gas chromatography–mass spectrometry (GC-MS) organic aerosol monitor, allows for the determination of these compounds on an hourly averaged basis. The demonstrated potential value of hourly averaged data in a source apportionment analysis indicates that significant improvement in the data used for making regulatory decisions is possible.
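The PMF step described above is normally run with dedicated receptor-modeling software (e.g. EPA PMF); as a rough stand-in, the sketch below factors an hourly species matrix with scikit-learn's non-negative matrix factorization, which lacks PMF's uncertainty weighting. The matrix shape, species count and random placeholder data are illustrative assumptions only.

import numpy as np
from sklearn.decomposition import NMF

# X: hours x species matrix of hourly averaged concentrations (non-negative),
# e.g. 248 hourly samples by the measured ions, carbon fractions and organic markers.
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(248, 12))  # placeholder data

model = NMF(n_components=9, init="nndsvda", max_iter=1000, random_state=0)
contributions = model.fit_transform(X)  # hourly contribution of each of the 9 factors
profiles = model.components_            # species profile of each factor

# A factor whose profile is dominated by levoglucosan, dehydroabietic acid and pyrene
# would be interpreted as wood smoke, mirroring the interpretation step in the study.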


5.
A method based on hourly NWS cloud amount reports is presented for developing a simple model to account for cloud cover in the determination of the nitrogen dioxide photolysis rate constant, k1. The model is parameterized and verified with direct UV radiometer and k1 measurements (vs. time of day) collected by Sickles et al. at Research Triangle Park. Application of our model to situations with variable cloud conditions indicates that significant improvement in k1 prediction is obtained by including the influence of cloud cover. Comparison of our model with the radiative transfer calculations of Peterson indicates that the particular parameterization of k1 given here is most representative of average albedo and relatively heavy aerosol loading conditions. Comparison of ozone predictions using hourly averaged k1 and instantaneous k1 under conditions of varying cloud cover suggests that the errors resulting from averaging k1 are largest when variations in solar zenith angle are significant over the hour.

6.
A critical step in the modeling of the carbon monoxide (CO) impacts of mobile sources is predicting an 8-hour CO concentration given a modeled "worst-case" 1-hour concentration. Often, this is done by a multiplicative persistence factor. A meteorological persistence factor (MPF) accounts for the variability over 8 hours of wind speed, wind direction, stability class, and temperature. A vehicular persistence factor (VPF) reflects the lower traffic volumes during the off-peak hours.

Hourly meteorological data for ten years for four cities in Florida were obtained from the National Climatic Data Center. The CALINE3 model was used to obtain hourly CO concentrations, which were combined to derive MPFs for each city. Similarly, VPFs were derived from hourly vehicle counts from one busy roadway in each city. The mean VPF multiplied by the second-highest MPF was defined as the worst-case total persistence factor (TPF). These worst-case TPFs increased significantly as more hours of nighttime were included in the 8-hour averaging time, but were fairly consistent from city to city. In general, the results suggest worst-case TPFs in the range of 0.4 to 0.5, lower than has been recommended by EPA in the past.
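A minimal sketch of how persistence factors of this kind can be combined is shown below; the 8-hour windowing, the placeholder input series and all variable names are illustrative assumptions rather than the paper's actual procedure.

import numpy as np

def eight_hour_persistence(hourly_values, start):
    """Ratio of the 8-h mean beginning at hour 'start' to the 1-h value at 'start'."""
    return np.mean(hourly_values[start:start + 8]) / hourly_values[start]

# Illustrative inputs: one year of modeled hourly CO (meteorology varying, emissions
# held constant) and hourly traffic counts; random placeholders stand in for both.
rng = np.random.default_rng(1)
hourly_co = rng.lognormal(mean=0.0, sigma=0.5, size=8760)
hourly_traffic = rng.poisson(lam=1500, size=8760).astype(float)

starts = range(0, 8760 - 8)
mpfs = np.array([eight_hour_persistence(hourly_co, s) for s in starts])
vpfs = np.array([eight_hour_persistence(hourly_traffic, s) for s in starts])

# Worst-case total persistence factor: mean VPF times the second-highest MPF.
tpf = vpfs.mean() * np.sort(mpfs)[-2]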

7.
Detailed hourly precipitation data are required for long-range modeling of dispersion and wet deposition of particulate matter and water-soluble pollutants using the CALPUFF model. In sparsely populated areas such as the north central United States, ground-based precipitation measurement stations may be too widely spaced to offer a complete and accurate spatial representation of hourly precipitation within a modeling domain. The availability of remotely sensed precipitation data by satellite and the National Weather Service array of next-generation radars (NEXRAD) deployed nationally provide an opportunity to improve on the paucity of data for these areas. Before adopting a new method of precipitation estimation in a modeling protocol, it should be compared with the ground-based precipitation measurements, which are currently relied upon for modeling purposes. This paper presents a statistical comparison between hourly precipitation measurements for the years 2006 through 2008 at 25 ground-based stations in the north central United States and radar-based precipitation measurements available from the National Center for Environmental Predictions (NCEP) as Stage IV data at the nearest grid cell to each selected precipitation station. It was found that the statistical agreement between the two methods depends strongly on whether the ground-based hourly precipitation is measured to within 0.1 in/hr or to within 0.01 in/hr. The results of the statistical comparison indicate that it would be more accurate to use gridded Stage IV precipitation data in a gridded dispersion model for a long-range simulation, than to rely on precipitation data interpolated between widely scattered rain gauges.

Implications:

The current reliance on ground-based rain gauges for precipitation events and hourly data for modeling of dispersion and wet deposition of particulate matter and water-soluble pollutants results in potentially large discontinuities in data coverage and the need to extrapolate data between monitoring stations. The use of radar-based precipitation data, which are available for the entire continental United States and nearby areas, would resolve these data gaps and provide a complete and accurate spatial representation of hourly precipitation within a large modeling domain.
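As a sketch of the kind of paired comparison described above, the helper below computes simple agreement statistics between co-located hourly gauge and radar (Stage IV) series; the variable names and the rounding used to mimic gauge resolution are illustrative assumptions.

import numpy as np

def agreement_stats(gauge, radar):
    """Mean bias, RMSE and correlation between paired hourly precipitation series (in/hr)."""
    gauge = np.asarray(gauge, dtype=float)
    radar = np.asarray(radar, dtype=float)
    diff = radar - gauge
    return diff.mean(), np.sqrt((diff ** 2).mean()), np.corrcoef(gauge, radar)[0, 1]

# Sensitivity to gauge resolution: repeat the comparison with the gauge series
# rounded to 0.1 in/hr and to 0.01 in/hr, mirroring the stratification of stations.
# stats_coarse = agreement_stats(np.round(gauge, 1), radar)
# stats_fine   = agreement_stats(np.round(gauge, 2), radar)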


8.
We have analyzed the possibility of predicting hourly averages of sulfur dioxide concentrations in the atmosphere at a site not far from the downtown area in the city of Santiago, Chile. We have compared the forecasts produced assuming persistence, linear regressions and feed-forward neural networks. The effect of meteorological conditions is included by using forecasted values of temperature, relative humidity and wind speed at the time of the intended prediction as inputs to the different models. The best predictions for hourly averages are obtained with a three-layer neural network that has hourly averages of sulfur dioxide concentrations every 6 h on the previous day plus the actual values of the meteorological variables as input. When the network is trained with 1995 data, the error in predictions made 8 h in advance for 1996 data is of the order of 30%.
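A minimal sketch of a three-layer feed-forward regressor along these lines is given below, using scikit-learn's MLPRegressor as a stand-in for whatever network implementation the authors used; the feature layout, hidden-layer size and placeholder data are assumptions.

import numpy as np
from sklearn.neural_network import MLPRegressor

# Each row: [SO2 at 00, 06, 12, 18 h of the previous day,
#            forecast temperature, relative humidity, wind speed at prediction time];
# target: observed hourly SO2 at the prediction time. Random placeholders below.
rng = np.random.default_rng(2)
X = rng.random((2000, 7))
y = rng.random(2000)

net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
net.fit(X[:1500], y[:1500])          # train on one year ("1995 data")
pred = net.predict(X[1500:])         # predict the following year ("1996 data")
relative_error = np.mean(np.abs(pred - y[1500:]) / np.maximum(y[1500:], 1e-9))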

9.
The objective of this study is to compare the use of several indices of exposure in describing the relationship between O3 and reduction in agricultural crop yield. No attempt has been made to determine which exposure-response models best fit the data sets examined. Hourly mean O3 concentration data, based on two to three measurements per hour, were used to develop indices of exposure from soybean and winter wheat experiments conducted in open-top chambers at the Boyce Thompson Institute NCLAN field site in Ithaca, New York. The comparative efficacy of cumulative indices (i.e. number of occurrences equal to or above specific hourly mean concentrations, sum of all hourly mean concentrations equal to or above a selected level, and the weighted sum of all hourly mean concentrations) and means calculated over an experimental period to describe the relationship between exposure to O3 and reductions in the yield of agricultural crops was evaluated. None of the exposure indices consistently provided a best fit with the Weibull and linear models tested. The selection of the model appears to be important in determining the indices that best describe the relationship between exposure and response. The focus of selecting a model should be on fitting the data points as well as on adequately describing biological responses. The investigator should be careful to couple the model with data points derived from indices relevant to the length of exposure. While we have used a small number of data sets, our analysis indicates that exposure indices that weight peak concentrations differently than lower concentrations of an exposure regime can be used in the development of exposure-response functions. Because such indices may have merit from a regulatory perspective, we recommend that additional data sets be used in further analyses to explore the biological rationale for various indices of exposure and their use in exposure-response functions.

10.
Urban and non-urban rural ozone (O3) concentrations are high in Bulgaria and often exceed the European AOT40 ecosystem standard as well as the AOT60 human health standard. This paper presents preliminary estimates to establish background, non-urban O3 concentrations for the southern region of Bulgaria. Ozone concentrations from three distinctly different sites are presented: a mountain site influenced by mountain-valley wind flow; a coastal site influenced by sea-breeze wind flow; and a 1700-m mountain peak site without 'local' wind flow characteristics. The latter offers the best estimate of 46-50 ppb for a background O3 level. The highest non-urban hourly value, 118 ppb, was measured at the mountain-valley site.

11.
A harmonized comparative performance evaluation of A Unified Regional Air-quality Modelling System (AURAMS) v1.3.1b and Community Multiscale Air Quality (CMAQ) v4.6 air-quality modelling systems was conducted on the same North American grid for July 2002 using the same emission inventories, emissions processor, and input meteorology. Comparison of AURAMS- and CMAQ-predicted O3 concentrations against hourly surface measurement data showed a lower normalized mean bias (NMB) of 20.7% for AURAMS versus 46.4% for CMAQ. However, AURAMS and CMAQ had more similar normalized mean errors (NMEs) of 46.9% and 54.2%, respectively. Both models did similarly well in predicting daily 1-h O3 maximums; however, AURAMS performed better in calculating daily minimums. CMAQ's poorer performance for O3 is partly due to its inability to correctly predict nighttime lows. Total PM2.5 hourly surface concentration was under-predicted by both AURAMS and CMAQ with NMBs of −10.4% and −65.2%, respectively. However, as with O3, both models had similar NMEs of 68.0% and 70.6%, respectively. In general, AURAMS performance was better than CMAQ for all major PM2.5 species except nitrate and elemental carbon. Both models significantly under-predicted total organic aerosols (TOAs), although the mean AURAMS concentration was over four times larger than CMAQ's. The under-prediction of TOA was partly due to the exclusion of forest-fire emissions. Sea-salt aerosol made up approximately 50.2% of the AURAMS total PM2.5 surface concentration versus only 6.2% in CMAQ when averaged over all grid cells. When averaged over land cells only, sea-salt still contributed 13.9% to the total PM2.5 mass in AURAMS versus 2.0% in CMAQ.
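The NMB and NME statistics quoted above follow the standard definitions used in air-quality model evaluation (model-minus-observation sums normalized by the observation sum); a small helper illustrating them, with illustrative variable names, is shown below.

import numpy as np

def nmb_nme(model, obs):
    """Normalized mean bias and normalized mean error, in percent, over paired hourly values."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    nmb = 100.0 * np.sum(model - obs) / np.sum(obs)
    nme = 100.0 * np.sum(np.abs(model - obs)) / np.sum(obs)
    return nmb, nme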

12.
There is an ongoing debate as to which components of the ambient ozone (O3) exposure dynamics best explain adverse crop yield responses. A key issue is the relative importance of peak versus mid-range hourly ambient O3 concentrations. While this paper does not discount the importance of peak atmospheric O3 concentrations when they occur at times when plants are conducive to uptake, it describes the corresponding importance of the more frequently occurring mid-range O3 concentrations. The probability of co-occurrence of high O3 concentrations and O3-uptake-limiting factors is provided using coherent data sets of O3 concentration, air temperature, air humidity, mean horizontal wind velocity and global radiation measured at representative US and German air quality monitoring sites. Using the PLant-ATmosphere INteraction (PLATIN) model, the influence of the aforementioned meteorological parameters on ozone uptake is examined. In addition, the limitations of describing the O3 exposure for plants under ambient, chamberless conditions by SUM06, AOT40 or W126 exposure indices are discussed.
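For reference, the three exposure indices named above are cumulative functions of hourly O3 concentrations; the sketch below follows their commonly used definitions (hourly values in ppm), with the caveat that the averaging window and daylight-hour restrictions vary between applications and are assumed here to be handled by the caller.

import math

def exposure_indices(hourly_ppm):
    """SUM06, AOT40 and W126 (all in ppm-h) from hourly O3 values in ppm.
    The series is assumed to be restricted already to the relevant daylight hours
    and growing-season window."""
    sum06 = sum(c for c in hourly_ppm if c >= 0.060)          # sum of hours at/above 60 ppb
    aot40 = sum(c - 0.040 for c in hourly_ppm if c > 0.040)   # exceedance above 40 ppb
    w126 = sum(c / (1.0 + 4403.0 * math.exp(-126.0 * c)) for c in hourly_ppm)  # sigmoidal weighting
    return sum06, aot40, w126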

13.
The frequency of co-occurrences for SO2/NO2, SO2/O3 and O3/NO2 at rural and remote monitoring sites in the United States was characterized for the months of May-September for the years 1978–1982. Minimum hourly concentrations of 0.03 and 0.05 ppm of each gas were used as the criteria for defining a ‘co-occurrence’. The objectives of this study were to:
  • (1) identify the types of co-occurrence patterns and their frequency;
  • (2) identify whether the frequency of hourly simultaneous co-occurrences increased substantially when the minimum concentration was lowered (e.g. from 0.05 to 0.03 ppm) for each pollutant; and
  • (3) determine whether the frequency of co-occurrences showed large year-to-year variation.
For all pollutant pairs and co-occurrence thresholds (i.e. 0.03 and 0.05 ppm), the frequency of daily and hourly co-occurrences was low for most sites. Year-to-year variability was found to be insignificant; most of the monitoring sites experienced co-occurrences of any type less than 12% of the 153 days. Based on our observations, researchers attempting to assess the potential effects of SO2/NO2, SO2/O3 and O3/NO2 in the United States should construct simulated exposure regimes so that
  • (1) hourly simultaneous and daily simultaneous-only co-occurrences are fairly rare and
  • (2) when co-occurrences are present, complex-sequential and sequential-only co-occurrence patterns predominate.

14.
Ambient ozone and crop loss: establishing a cause-effect relationship (total citations: 6; self-citations: 0; citations by others: 6)
This paper provides the results of a retrospective mathematical analysis of the US NCLAN (National Crop Loss Assessment Network) open-top chamber data. Some 77% of the 73 crop harvests examined showed no statistically significant yield differences between NF (non-filtered open-top chamber) and AA (chamberless, ambient air) treatments (no easily discernible chamber effects on yield). However, among these cases only seven acceptable examples showed statistically significant yield reductions in NF compared to the CF (charcoal filtered open-top chamber) treatment. An examination of the combined or cumulative hourly ambient O3 frequency distribution for cases with yield loss in NF compared to a similar match of cases without yield loss showed that the mean, median and the various percentiles were all higher (≥3×) in the former in contrast to the latter scenario. The combined frequency distribution of hourly O3 concentrations for the cases with yield loss in NF was clearly separated from the corresponding distribution with no yield loss, at O3 concentrations > 49 ppb. Univariate linear regressions between various O3 exposure parameters and percent yield losses in NF showed that the cumulative frequency of occurrence of O3 concentrations between 50 and 87 ppb was the best predictor (adjusted R2 = 0.712 and p = 0.011). This analysis also showed that the frequency distribution of hourly concentrations up to 87 ppb O3 represented a critical point, since the addition of the frequency distributions of > 87 ppb O3 did not improve the R2 values. In fact, as the frequency of hourly O3 concentrations included in the regression approached 50-100 ppb, the R2 value decreased substantially and the p value increased correspondingly. Further, univariate linear regressions between the frequencies of occurrence of various O3 concentrations between 50 and 90 ppb and (a) cases with no yield difference in NF and (b) cases with yield increase in NF compared to the CF treatment (positive effect) provided no meaningful statistical relationship (adjusted R2 = 0.000) in either category. These results support the conclusion that additional evaluation of the frequency of occurrence of hourly O3 concentrations between 50 and 87 ppb for cases with yield reductions could provide a meaningful ambient O3 standard, objective or guideline for vegetation.
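A sketch of the kind of univariate regression described above is shown below: the predictor is the number of hours with O3 in the 50-87 ppb band over the exposure period, regressed against percent yield loss. The data arrays are placeholders, not NCLAN values.

import numpy as np
from scipy import stats

def hours_in_band(hourly_ppb, low=50.0, high=87.0):
    """Cumulative frequency (count of hours) with hourly O3 within [low, high] ppb."""
    hourly_ppb = np.asarray(hourly_ppb, dtype=float)
    return int(np.sum((hourly_ppb >= low) & (hourly_ppb <= high)))

# One predictor value and one percent-yield-loss value per crop harvest (placeholders).
freq_50_87 = np.array([120.0, 240.0, 310.0, 90.0, 410.0, 275.0, 180.0])
yield_loss_pct = np.array([4.0, 9.0, 12.5, 2.5, 16.0, 11.0, 7.0])

fit = stats.linregress(freq_50_87, yield_loss_pct)
# fit.slope, fit.rvalue ** 2 and fit.pvalue correspond to the kind of regression
# summaries (R2 and p value) reported in the abstract.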

15.
This paper presents a new approach to quantify emissions from fugitive gaseous air pollution sources. The authors combine Computed Tomography (CT) with Path-Integrated Optical Remote Sensing (PI-ORS) concentration data in a new field beam geometry. Path-integrated concentrations are sampled in a vertical plane downwind from the source along several radial beam paths. An innovative CT technique, which applies the Smooth Basis Function Minimization method to the beam data in conjunction with measured wind data, is used to estimate the total flux from the fugitive source. The authors conducted a synthetic data study to evaluate the proposed methodology under different meteorological conditions, beam geometry configurations, and simulated measurement errors. The measurement errors were simulated based on data collected with an Open-Path Fourier Transform Infra-Red system. This approach was found to be robust for the simulated errors and for a wide range of fluctuating wind directions. In the very sparse beam geometry examined (eight beam paths), successful emission rates were retrieved over a 70 degrees range of wind directions under extremely large measurement error conditions.  相似文献   

16.
Numerous ozone exposure statistics were calculated using hourly ozone data from crop yield loss experiments previously conducted for alfalfa, fresh market and processing tomatoes, cotton, and dry beans in an ambient ozone gradient near Los Angeles, California. Exposure statistics examined included peak (maximum daily hourly) and mean concentrations above specific threshold levels, and concentrations during specific time periods of the day. Peak and mean statistics weighted for ozone concentration and time period statistics weighted for hour of the day were also determined. Polynomial regression analysis was used to relate each of 163 ozone statistics to crop yield. Performance of the various statistics was rated by comparing residual mean square (RMS) values. The analyses demonstrated that no single statistic was best for all crop species. Ozone statistics with a threshold level performed well for most crops, but the optimum threshold level was dependent upon crop species and varied with the particular statistics calculated. The data indicated that daily hours of exposure above a critical high-concentration threshold related well to crop yield for alfalfa, market tomatoes, and dry beans. The best statistic for cotton yield was an average of all daily peak ozone concentrations. Several different types of ozone statistics performed similarly for processing tomatoes. These analyses suggest that several ozone summary statistics should be examined in assessing the relationship of ambient ozone exposure to crop yield. Where no clear statistical preference is indicated among several statistics, those most biologically relevant should be selected.

17.
A multi-variate, non-linear statistical model is described to simulate passive O3 sampler data to mimic the hourly frequency distributions of continuous measurements using climatologic O3 indicators and passive sampler measurements. The main meteorological parameters identified by the model were air temperature, relative humidity, solar radiation and wind speed, although other parameters were also considered. Together, air temperature, relative humidity and passive sampler data by themselves could explain 62.5-67.5% (R2) of the corresponding variability of the continuously measured O3 data. The final correlation coefficients (r) between the predicted hourly O3 concentrations from the passive sampler data and the true, continuous measurements were 0.819-0.854, with an accuracy of 92-94% for the predictive capability. With the addition of soil moisture data, the model can lead to a first-order approximation of atmospheric O3 flux and plant stomatal uptake. Additionally, if such data are coupled to multi-point plant response measurements, meaningful cause-effect relationships can be derived in the future.

18.
This paper presents a new approach to quantify emissions from fugitive gaseous air pollution sources. The authors combine Computed Tomography (CT) with Path-Integrated Optical Remote Sensing (PI-ORS) concentration data in a new field beam geometry. Path-integrated concentrations are sampled in a vertical plane downwind from the source along several radial beam paths. An innovative CT technique, which applies the Smooth Basis Function Minimization method to the beam data in conjunction with measured wind data, is used to estimate the total flux from the fugitive source. The authors conducted a synthetic data study to evaluate the proposed methodology under different meteorological conditions, beam geometry configurations, and simulated measurement errors. The measurement errors were simulated based on data collected with an Open-Path Fourier Transform Infra-Red system. This approach was found to be robust for the simulated errors and for a wide range of fluctuating wind directions. In the very sparse beam geometry examined (eight beam paths), successful emission rates were retrieved over a 70° range of wind directions under extremely large measurement error conditions.

19.
The post-harvest burning of agricultural fields is commonly used to dispose of crop residue and provide other desired services such as pest control. Despite careful regulation of burning, smoke plumes from field burning in the Pacific Northwest commonly degrade air quality, particularly for rural populations. In this paper, ClearSky, a numerical smoke dispersion forecast system for agricultural field burning that was developed to support smoke management in the Inland Pacific Northwest, is described. ClearSky began operation during the summer through fall burn season of 2002 and continues to the present. ClearSky utilizes Mesoscale Meteorological Model version 5 (MM5v3) forecasts from the University of Washington, data on agricultural fields, a web-based user interface for defining burn scenarios, the Lagrangian CALPUFF dispersion model and web-served animations of plume forecasts. The ClearSky system employs a unique hybrid source configuration, which treats the flaming portion of a field as a buoyant line source and the smoldering portion of the field as a buoyant area source. Limited field observations show that this hybrid approach yields reasonable plume rise estimates using source parameters derived from recent field burning emission field studies. The performance of this modeling system was evaluated for 2003 by comparing forecast meteorology against meteorological observations, and comparing model-predicted hourly averaged PM2.5 concentrations against observations. Examples from this evaluation illustrate that while the ClearSky system can accurately predict PM2.5 surface concentrations due to field burning, the overall model performance depends strongly on meteorological forecast error. Statistical evaluation of the meteorological forecast at seven surface stations indicates a strong relationship between topographical complexity near the station and absolute wind direction error, with wind direction errors increasing from approximately 20° for sites in open areas to 70° or more for sites in very complex terrain. The analysis also showed some days with good forecast meteorology, with absolute mean error in wind direction less than 30°, when ClearSky correctly predicted PM2.5 surface concentrations at receptors affected by field burns. On several other days with similar levels of wind direction error the model did not predict apparent plume impacts. In most of these cases, there were no reported burns in the vicinity of the monitor and, thus, it appeared that other, non-reported burns were responsible for the apparent plume impact at the monitoring site. These cases do not provide information on the performance of the model, but rather indicate that further work is needed to identify all burns and to improve burn reports in an accurate and timely manner. There were also a number of days with wind direction errors exceeding 70° when the forecast system did not correctly predict plume behavior.

20.
Maritime greenhouse gas emissions are projected to increase significantly by 2050, highlighting the need for reliable inventories as a first step in analyzing ship emission control policies. The impact of ship power models on marine emissions inventories has garnered little attention, with most inventories employing simple, load-factor-based models to estimate ship power consumption. The availability of more expansive ship activity data provides the opportunity to investigate the inventory impacts of adopting complex power models. Furthermore, ship parameter fields can be sparsely populated in ship registries, making gap-filling techniques and averaging processes necessary. Therefore, it is important to understand the impact of averaged ship parameters on ship power and emission estimations. This paper examines power estimation differences between results from two complex, resistance-based and two simple, load-factor-based power models on a baseline inventory with unique ship parameters. These models are additionally analyzed according to their sensitivities toward averaged ship parameters. Automated Identification System (AIS) data from a fleet of commercial marine vessels operating over a 6-month period off the coast of the southwestern United States form the basis of the analysis. To assess the inventory impacts of using averaged ship parameters, fleet-level carbon dioxide (CO2) emissions are calculated using ship parameter data averaged across ship types and their subtype size classes. Each of the four ship power models is used to generate four CO2 emissions inventories, and results are compared with baseline estimates for the same sample fleet where no averaged values were used. The results suggest that a change in power model has a relatively high impact on emission estimates. They also indicate relatively little sensitivity, by all power models, to the use of ship characteristics averaged by ship and subtype.

Implications: Commercial marine vessel emissions inventories were calculated using four different models for ship engine power. The calculations used 6 months of Automated Identification System (AIS) data from a sample of 248 vessels as input data. The results show that more detailed, resistance-based models tend to estimate a lower propulsive power, and thus lower emissions, for ships than traditional load-factor-based models. Additionally, it was observed that emission calculations using averaged values for physical ship parameters had a minimal impact on the resulting emissions inventories.
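A minimal sketch of the simple load-factor approach referred to above is given below: installed power scaled by a propeller-law load factor, multiplied by activity time and a CO2 emission factor. All parameter values (including the emission factor) are illustrative assumptions, and the resistance-based alternatives are not shown.

def load_factor_co2(installed_kw, actual_speed_kn, design_speed_kn, hours,
                    ef_g_per_kwh=620.0):
    """CO2 in tonnes for one activity segment using a load-factor power model.
    The load factor follows the propeller law (speed ratio cubed, capped at 1);
    the default emission factor is an illustrative value in g CO2 per kWh."""
    load_factor = min((actual_speed_kn / design_speed_kn) ** 3, 1.0)
    power_kw = installed_kw * load_factor
    return power_kw * hours * ef_g_per_kwh / 1.0e6

# Example: a 12,000 kW vessel sailing at 14 kn (design speed 20 kn) for 3 hours.
co2_tonnes = load_factor_co2(12000.0, 14.0, 20.0, 3.0)

Summing such segment estimates over the AIS track of each vessel, and over the fleet, yields an inventory of the kind compared in the study.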

