Similar Articles
1.
Both forward and backward transport modeling methods are being developed for characterizing sources in atmospheric releases of toxic agents. Forward modeling methods, which describe atmospheric transport from sources to receptors, run forward transport-and-dispersion or computational fluid dynamics models many times and compare the resulting dispersion fields with observations from multiple sensors. They include Bayesian updating and inference schemes using stochastic Monte Carlo or Markov chain Monte Carlo sampling techniques. Backward, or inverse, modeling methods use a single model run in the reverse direction, from the receptors, to estimate the upwind sources; they include adjoint and tangent linear models, Kalman filters, and variational data assimilation, among others. This survey paper discusses these source estimation methods and lists the key references. The need to assess uncertainties in source characterization using atmospheric transport and dispersion models is emphasized.
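A minimal sketch of the forward Bayesian approach described above, assuming a toy ground-level Gaussian plume as the forward model and synthetic sensor data; the plume coefficients, sensor layout, and noise level are all illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_plume(q, x, y, u=5.0, sy=0.08, sz=0.06):
    """Toy ground-level Gaussian plume: emission rate q [g/s] at the origin,
    receptors at downwind distance x [m] and crosswind offset y [m]."""
    sigy, sigz = sy * x, sz * x
    return q / (np.pi * u * sigy * sigz) * np.exp(-0.5 * (y / sigy) ** 2)

# Synthetic "truth" and noisy sensor observations.
xr = np.array([200.0, 400.0, 800.0]); yr = np.array([0.0, 50.0, -30.0])
q_true = 12.0
obs = gaussian_plume(q_true, xr, yr) * (1 + 0.1 * rng.standard_normal(3))

def log_post(q, sigma=0.1):
    if q <= 0:                      # flat prior on positive emission rates
        return -np.inf
    resid = (obs - gaussian_plume(q, xr, yr)) / (sigma * obs)
    return -0.5 * np.sum(resid ** 2)

# Metropolis random walk over the unknown emission rate.
q, lp, chain = 1.0, log_post(1.0), []
for _ in range(5000):
    q_new = q + 0.5 * rng.standard_normal()
    lp_new = log_post(q_new)
    if np.log(rng.uniform()) < lp_new - lp:
        q, lp = q_new, lp_new
    chain.append(q)

print(f"posterior mean q = {np.mean(chain[1000:]):.2f} g/s (truth {q_true})")
```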

2.
Carbon monoxide monitoring with continuous samplers is carried out in most major urban centres in the world and generally forms the basis for air quality assessments. Such assessments become less reliable as the proportion of data missing because of equipment failure and calibration periods increases. This paper presents a semi-empirical model for predicting atmospheric carbon monoxide concentrations near roads, for the purpose of interpolating missing data without any traffic or emissions information. By being optimized site-specifically, the model produces reliable predictions while remaining computationally simple. The model was developed for, and evaluated at, both a suburban site and an inner-city site in Hamilton, New Zealand. Its performance statistics were significantly better than those of other simple interpolation methods, with little additional computational complexity.
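The abstract does not give the model's exact form; the sketch below only illustrates the general idea of a site-specifically optimized semi-empirical fit, assuming diurnal traffic harmonics diluted by inverse wind speed as predictors (an assumption, not the authors' formulation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hourly predictors over 60 days: time of day and wind speed (synthetic stand-ins).
hours = np.arange(24 * 60) % 24
wind = 1.0 + 2.0 * rng.random(hours.size)

# Synthetic CO series with a traffic-like diurnal cycle diluted by wind.
co = (0.5 + np.sin(2*np.pi*hours/24 - 2)**2 + 0.3*np.sin(4*np.pi*hours/24)) / wind
co *= 1 + 0.1 * rng.standard_normal(co.size)

# Site-specific semi-empirical fit: diurnal harmonics scaled by 1/wind,
# so every regressor (including the constant) carries the dilution term.
X = np.column_stack([np.ones_like(hours, dtype=float),
                     np.sin(2*np.pi*hours/24), np.cos(2*np.pi*hours/24),
                     np.sin(4*np.pi*hours/24), np.cos(4*np.pi*hours/24)]) / wind[:, None]
beta, *_ = np.linalg.lstsq(X, co, rcond=None)

# Interpolate a block of "missing" hours from meteorology alone.
missing = slice(500, 548)
pred = X[missing] @ beta
print("RMSE over gap:", np.sqrt(np.mean((pred - co[missing]) ** 2)))
```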

3.
Elevated concentrations of arsenic were detected in surface soils adjacent to a smelting complex in northern Canada. We evaluated the cancer risks posed by arsenic exposure in two communities by combining geostatistical simulation with demographic data and dose–response models within a single framework. The spatial distribution of arsenic was first estimated using a geostatistical circulant-embedding simulation method. We then evaluated exposures from inadvertent ingestion, inhalation, and dermal contact. Risks of skin cancer and three internal cancers were estimated at both grid and census-unit scales using parametric dose–response models. The results indicate that local residents could face non-negligible cancer risks (mainly skin and liver cancer). Uncertainties in the risk estimates are discussed with respect to arsenic concentrations, the exposed population, and the dose–response model. Reducing these uncertainties would require additional soil sampling, epidemiological records, and complementary studies of land use, demographic variation, outdoor activities, and the bioavailability of arsenic.
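For concreteness, a hedged sketch of the dose–response step for the ingestion and dermal pathways (inhalation is analogous); all exposure factors and the slope factor below are illustrative placeholders, not the study's values:

```python
# Illustrative exposure factors (not the study's inputs).
C = 45.0              # soil arsenic concentration, mg/kg
IR_soil = 100.0       # incidental soil ingestion rate, mg/day
EF, ED = 350.0, 30.0  # exposure frequency [d/yr] and duration [yr]
BW, AT = 70.0, 70 * 365.0          # body weight [kg], averaging time [d]
SA, AF, ABS = 5700.0, 0.07, 0.03   # skin area [cm2], adherence, dermal absorption

# Lifetime average daily doses, mg/(kg*day); 1e-6 converts mg/kg soil to kg/mg.
ladd_ing  = C * IR_soil * 1e-6 * EF * ED / (BW * AT)
ladd_derm = C * SA * AF * ABS * 1e-6 * EF * ED / (BW * AT)

# Linear dose-response: excess lifetime cancer risk = dose * slope factor.
SF_oral = 1.5   # (mg/kg/day)^-1, illustrative oral slope factor for arsenic
risk = (ladd_ing + ladd_derm) * SF_oral
print(f"excess lifetime cancer risk ~ {risk:.1e}")
```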

4.
Numerical reactive transport models are often used to assess aquifers contaminated with reactive groundwater solutes and to investigate mitigation scenarios. The ability to accurately simulate the fate and transport of solutes, however, is often impeded by a lack of information on the parameters that define chemical reactions. In this study, we employ a steady-state ensemble Kalman filter (EnKF), a data assimilation algorithm, to improve estimates of a spatially variable first-order rate constant λ by assimilating solute concentration measurements into reactive transport simulation results. The methodology is applied to a steady-state, synthetic aquifer system in which a contaminant is leached to the saturated zone and undergoes first-order decay. Multiple sources of uncertainty are investigated, including the hydraulic conductivity of the aquifer and the statistical parameters that define the spatial structure of the parameter field. For the latter scenario, an iterative method is employed to identify the statistical mean of λ in the reference system. Results from all simulations show that the filter successfully conditions the λ ensemble to the reference λ field. Sensitivity analyses demonstrate that the estimation of λ depends on the number of concentration measurements assimilated, the locations from which they are collected, the error assigned to the measurement values, and the correlation length of the λ fields.
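A compact sketch of the EnKF analysis step for a spatially variable λ, with a stand-in one-dimensional decay model in place of a real reactive transport code; grid size, ensemble size, and priors are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_ens, n_obs = 50, 100, 10

def forward(lam):
    """Stand-in reactive transport model: maps a lambda field to
    steady-state concentrations at n_obs points along a 1-D flow path."""
    c = np.exp(-np.cumsum(lam) * 0.1)     # first-order decay along the path
    return c[::n_cells // n_obs][:n_obs]

lam_true = 0.5 + 0.2 * np.sin(np.linspace(0, 3, n_cells))
obs_err = 0.01
d = forward(lam_true) + obs_err * rng.standard_normal(n_obs)

# Prior ensemble of log-lambda fields (log keeps lambda positive).
L = np.log(0.4) + 0.3 * rng.standard_normal((n_ens, n_cells))
H = np.array([forward(np.exp(l)) for l in L])     # predicted observations

# EnKF analysis: condition the parameter ensemble on the concentration data.
A = L - L.mean(0); Y = H - H.mean(0)
C_ly = A.T @ Y / (n_ens - 1)                      # parameter-data covariance
C_yy = Y.T @ Y / (n_ens - 1) + obs_err**2 * np.eye(n_obs)
K = C_ly @ np.linalg.inv(C_yy)                    # Kalman gain
D = d + obs_err * rng.standard_normal((n_ens, n_obs))   # perturbed observations
L_post = L + (D - H) @ K.T

print("prior RMSE    :", np.sqrt(np.mean((np.exp(L).mean(0) - lam_true) ** 2)))
print("posterior RMSE:", np.sqrt(np.mean((np.exp(L_post).mean(0) - lam_true) ** 2)))
```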

5.
Photochemical grid models are addressing an increasing variety of air quality related issues, yet procedures and metrics used to evaluate their performance remain inconsistent. This impacts the ability to place results in quantitative context relative to other models and applications, and to inform the user and affected community of model uncertainties and weaknesses. More consistent evaluations can serve to drive improvements in the modeling process as major weaknesses are identified and addressed. The large number of North American photochemical modeling studies published in the peer-reviewed literature over the past decade affords a rich data set from which to update previously established quantitative performance “benchmarks” for ozone and particulate matter (PM) concentrations. Here we exploit this information to develop new ozone and PM benchmarks (goals and criteria) for three well-established statistical metrics over spatial scales ranging from urban to regional and over temporal scales ranging from episodic to seasonal. We also recommend additional evaluation procedures, statistical metrics, and graphical methods for good practice. While we primarily address modeling and regulatory settings in the United States, these recommendations are relevant to any such applications of state-of-the-science photochemical models. Our primary objective is to promote quantitatively consistent evaluations across different applications, scales, models, model inputs, and configurations. The purpose of benchmarks is to understand how good or poor the results are relative to historical model applications of similar nature and to guide model performance improvements prior to using results for policy assessments. To that end, it also remains critical to evaluate all aspects of the model via diagnostic and dynamic methods. A second objective is to establish a means to assess model performance changes in the future. Statistical metrics and benchmarks need to be revisited periodically as model performance and the characteristics of air quality change in the future.

Implications: We address inconsistent procedures and metrics used to evaluate photochemical model performance, recommend a specific set of statistical metrics, and develop updated quantitative performance benchmarks for those metrics. We promote quantitatively consistent evaluations across different applications, scales, models, inputs, and configurations, thereby (1) improving the user’s ability to quantitatively place results in context and guide model improvements, and (2) better informing users, regulators, and stakeholders of model uncertainties and weaknesses prior to using results for policy assessments. While we primarily address U.S. modeling and regulatory settings, these recommendations are relevant to any such applications of state-of-the-science photochemical models.
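As a sketch of the kind of consistent evaluation advocated here, the snippet below computes three widely used metrics (normalized mean bias, normalized mean error, and correlation) and compares them against benchmark values; the numeric goals shown are placeholders, not the benchmarks published in the paper:

```python
import numpy as np

def performance_stats(obs, mod):
    """Three common photochemical-model evaluation metrics."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    nmb = (mod - obs).sum() / obs.sum()        # normalized mean bias
    nme = np.abs(mod - obs).sum() / obs.sum()  # normalized mean error
    r = np.corrcoef(obs, mod)[0, 1]            # correlation coefficient
    return nmb, nme, r

# Illustrative hourly ozone pairs (ppb); benchmark values are placeholders.
obs = np.array([42.0, 55.0, 63.0, 71.0, 58.0, 49.0])
mod = np.array([40.0, 59.0, 60.0, 75.0, 52.0, 50.0])
nmb, nme, r = performance_stats(obs, mod)
goal = {"NMB": 0.05, "NME": 0.15, "r": 0.75}   # hypothetical goals
print(f"NMB={nmb:+.2%} (goal within ±{goal['NMB']:.0%}), "
      f"NME={nme:.2%} (goal <{goal['NME']:.0%}), r={r:.2f} (goal >{goal['r']})")
```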


6.
Site uncertainties significantly influence predictions of groundwater flow and contaminant transport. Both aleatoric and epistemic uncertainties arise in site characterization and should be represented with appropriate uncertainty theories. When one theory best represents one parameter while a different theory is more suitable for another, aleatoric (random) and epistemic (nonrandom) uncertainties must be propagated jointly, and the computational challenges of such hybrid propagation through groundwater flow and contaminant transport models are significant. A fuzzy-stochastic nonlinear model was developed in this paper to incorporate these two types of uncertain site information while reducing the computational cost. The results show that (1) the computational cost of the nonlinear model is lower than that of the sparse grid algorithm and Monte Carlo methods; (2) the uncertainty of hydraulic conductivity (K) influences the water head and solute distribution at the observation wells far more than other uncertain parameters, such as the storage coefficient and the distribution coefficient (Kd); and (3) the combination of multiple uncertain parameters substantially affects the simulation results. Neglecting site uncertainties may lead to unrealistic predictions.
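A minimal sketch of hybrid propagation, assuming a triangular fuzzy hydraulic conductivity (epistemic) and a lognormal storage coefficient (aleatoric) pushed jointly through a toy head-drop response; the response function and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def head_drop(K, S):
    """Toy steady-flow response: head drop across an aquifer block;
    monotone in K, so interval endpoints bound the output."""
    return 1.0 / K + 0.2 * S

# Epistemic parameter: triangular fuzzy K with support [5, 20], mode 10 (m/d).
def K_interval(alpha, lo=5.0, mode=10.0, hi=20.0):
    return lo + alpha * (mode - lo), hi - alpha * (hi - mode)

# Aleatoric parameter: lognormal storage coefficient.
S_samples = rng.lognormal(mean=np.log(1e-3), sigma=0.5, size=2000)

# Hybrid propagation: at each alpha-cut of K, run Monte Carlo over S and
# keep the envelope, yielding fuzzy bounds on a head-drop percentile.
for alpha in (0.0, 0.5, 1.0):
    k_lo, k_hi = K_interval(alpha)
    p95 = [np.percentile(head_drop(k, S_samples), 95) for k in (k_lo, k_hi)]
    print(f"alpha={alpha:.1f}: 95th-pct head drop in [{min(p95):.3f}, {max(p95):.3f}]")
```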

7.
A robust spatial model interpolating topsoil cadmium concentrations monitored by the Taiwan EPA was constructed to assess the health risks posed by this contamination in Changhua County, Taiwan. Kriging methods within a Geographic Information System (GIS) were used to estimate the range and severity of topsoil contamination. Optimised kriging models and geostatistical analyses revealed that much of the farm topsoil in the catchment area was polluted above the levels established for human health. Cadmium hotspots (10–600 mg/kg) were identified in a highly populated region of the county.
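A hand-rolled ordinary kriging sketch in the spirit of the study, with an assumed exponential variogram and synthetic cadmium samples; the variogram parameters and data are illustrative, not the Changhua County values:

```python
import numpy as np

rng = np.random.default_rng(4)

def exp_variogram(h, sill=1.0, vrange=500.0, nugget=0.1):
    """Exponential variogram model (h in metres)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))

def ordinary_krige(xy, z, xy0):
    """Ordinary kriging estimate and variance at one target point xy0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = exp_variogram(d)
    np.fill_diagonal(A[:n, :n], 0.0)   # gamma(0) = 0 by definition
    A[-1, -1] = 0.0                    # unbiasedness constraint row/column
    b = np.ones(n + 1)
    b[:n] = exp_variogram(np.linalg.norm(xy - xy0, axis=1))
    w = np.linalg.solve(A, b)          # weights plus Lagrange multiplier
    return w[:n] @ z, w @ b            # estimate and kriging variance

# Synthetic Cd samples (mg/kg) over a 2 km x 2 km block; values illustrative.
xy = rng.uniform(0.0, 2000.0, size=(40, 2))
cd = 2.0 + 0.004 * xy[:, 0] + 0.3 * rng.standard_normal(40)
est, kvar = ordinary_krige(xy, cd, np.array([1000.0, 1000.0]))
print(f"kriged Cd at block centre: {est:.2f} mg/kg (s.d. {np.sqrt(kvar):.2f})")
```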

8.
Uncertain factors in atmospheric dispersion models can undermine the reliability of model predictions, and the ability of a model to assimilate measurement data helps improve them. In this paper, data assimilation based on the ensemble Kalman filter (EnKF) is introduced into a Monte Carlo atmospheric dispersion model (MCADM) designed for assessing the consequences of an accidental release of radionuclides. A twin experiment was performed in which simulated ground-level dose rates were assimilated, considering uncertainties in the source term and in the turbulence intensity of the wind field separately. The methodology and preliminary results of the application are described. It is shown that data assimilation can reduce the discrepancy between the model forecast and the true situation: about 80% of the error caused by source-term uncertainty is removed, and about 50% of that caused by turbulence-intensity uncertainty.
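A scalar sketch of the twin-experiment setup: a "truth" run generates synthetic dose-rate observations, a biased forecast ensemble is conditioned on them with an EnKF update, and the error reduction is reported. The dose-rate model and all values are toy stand-ins, not MCADM:

```python
import numpy as np

rng = np.random.default_rng(5)

def dose_rate(Q, dist):
    """Toy ground-level dose-rate model, linear in source term Q."""
    return Q / dist**2

# Twin experiment: the "truth" run generates synthetic observations,
# while the forecast starts from a biased source-term guess.
dist = np.array([300.0, 600.0, 1200.0])
Q_true, Q_guess = 1.0e10, 3.0e10                 # Bq/s, illustrative
obs = dose_rate(Q_true, dist) * (1 + 0.05 * rng.standard_normal(3))

ens = Q_guess * np.exp(0.5 * rng.standard_normal(200))   # forecast ensemble
pred = np.array([dose_rate(q, dist) for q in ens])

# Scalar EnKF analysis on the log source term.
x = np.log(ens); X = x - x.mean(); Y = pred - pred.mean(0)
R = np.diag((0.05 * obs) ** 2)                   # observation-error covariance
K = (X @ Y / (len(ens) - 1)) @ np.linalg.inv(Y.T @ Y / (len(ens) - 1) + R)
x_post = x + (obs + 0.05 * obs * rng.standard_normal((200, 3)) - pred) @ K

err0 = abs(np.exp(x.mean()) - Q_true) / Q_true
err1 = abs(np.exp(x_post.mean()) - Q_true) / Q_true
print(f"relative source-term error: {err0:.0%} before, {err1:.0%} after assimilation")
```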

9.
Dairy farms comprise a complex landscape of groundwater pollution sources. The objective of our work is to develop a method to quantify nitrate leaching to shallow groundwater from different management units at dairy farms. Total nitrate loads are determined by sequentially calibrating a sub-regional-scale and a farm-scale three-dimensional groundwater flow and transport model against observations at different spatial scales. These observations include local measurements of groundwater heads and nitrate concentrations in an extensive monitoring well network, providing data at a scale of a few meters, and measurements of discharge rates and nitrate concentrations in a tile-drain network, providing data integrated across multiple farms. The measurement scales differ from the spatial scales of the calibration parameters, which are the recharge and nitrogen leaching rates of individual management units. The calibration procedure offers a conceptual framework for using field measurements at different spatial scales to estimate recharge N concentrations at the management-unit scale, yielding a map of the spatially varying impact of dairy farming on groundwater nitrogen. The method is applied to a dairy farm located in a relatively vulnerable hydrogeologic region of California. Potential sources within the farm are divided into three categories representing different manure management units: animal exercise yards and feeding areas (corrals), liquid manure holding ponds, and manure-irrigated forage fields. Estimated average nitrogen leaching is 872, 807, and 486 kg/ha/year for corrals, ponds, and fields, respectively. The results are used to evaluate the accuracy of the nitrogen mass balances often used by regulatory agencies to assess groundwater impacts; calibrated leaching rates compare favorably to field- and farm-scale nitrogen mass balances. These data and interpretations provide a basis for developing improved management strategies.

10.
Available models of solute transport in heterogeneous formations fall short of providing a complete characterization of the predicted concentration. This is a serious drawback, especially in risk analysis, where confidence intervals and probabilities of exceeding threshold values are required. Our contribution to filling this gap is a probability distribution model for the local concentration of conservative tracers migrating in heterogeneous aquifers. The model accounts for dilution, mechanical mixing within the sampling volume, and spreading due to formation heterogeneity. It is developed by modeling local concentration dynamics with an Itô stochastic differential equation (SDE) that, under the hypothesis of statistical stationarity, leads to the Beta probability density function (pdf) for the solute concentration. This model is highly flexible in capturing the smoothing effect of the sampling volume and the associated reduction of the probability of exceeding large concentrations. Furthermore, it is fully characterized by the first two moments of the solute concentration, the same information required by standard geostatistical techniques employing Normal or Log-Normal distributions. We show that in the absence of pore-scale dispersion and for point concentrations the pdf model converges to the binary distribution of Dagan [Dagan, G., 1982. Stochastic modeling of groundwater flow by unconditional and conditional probabilities, 2, The solute transport. Water Resour. Res. 18 (4), 835–848], while it approaches the Normal distribution for sampling volumes much larger than the characteristic scale of the aquifer heterogeneity. We also demonstrate that the same model, with spatial moments replacing statistical moments, can be applied to estimate the proportion of the plume volume where solute concentrations are above or below critical thresholds. Application of this model to point and vertically averaged bromide concentrations from the first Cape Cod tracer test, and to a set of numerical simulations, confirms these findings and shows for the first time the superiority of the Beta model over both Normal and Log-Normal models in interpreting field data. Finally, we show that assuming a priori that local concentrations are normally or log-normally distributed may severely underestimate the probability of exceeding large concentrations.
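A short sketch of using the moment-parameterized Beta model to compute an exceedance probability and contrast it with the Normal assumption; the moments and threshold are illustrative:

```python
import numpy as np
from scipy import stats

# Beta model for normalized local concentration Z = C/C0 in [0, 1],
# parameterized by its first two moments (the same inputs standard
# geostatistical approaches require).
mean, var = 0.3, 0.03            # illustrative concentration moments
nu = mean * (1 - mean) / var - 1.0   # valid only if var < mean * (1 - mean)
a, b = mean * nu, (1 - mean) * nu
beta = stats.beta(a, b)

# Probability of exceeding a regulatory threshold, Beta vs Normal.
z_crit = 0.6
print(f"Beta   P(Z > {z_crit}) = {beta.sf(z_crit):.4f}")
print(f"Normal P(Z > {z_crit}) = {stats.norm(mean, np.sqrt(var)).sf(z_crit):.4f}")
```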

11.
A fuzzy composting process model
Composting processes are normally complicated by a variety of uncertainties arising from incomplete or imprecise information about real-world systems, yet few previous studies have developed effective approaches for incorporating such uncertainties within composting process models. To fill this gap, a fuzzy composting process model (FCPM) for simulating composting under uncertainty was developed, based mainly on the integration of a fractional fuzzy vertex method with a comprehensive composting model. The degrees of influence of the projected uncertain factors were also examined. Two scenarios were investigated in applying the FCPM. In the first, the model was run under deterministic conditions and verified against a pilot-scale experiment; the results indicated that the proposed composting model is an excellent vehicle for demonstrating the complex interactions that occur during composting. In the second, the FCPM was applied under uncertainty, with six input parameters treated as uncertain and represented by fuzzy membership functions. The results indicated that uncertainties in the input parameters produce significant deviations in system predictions, and that the FCPM generates satisfactory system outputs with less computational effort. Analyses of the degree of influence of the system inputs describe the impacts of the uncertainties on system responses, so that suitable measures can be adopted either to reduce system uncertainty through well-directed reduction of the uncertainties of the high-influence parameters, or to reduce the computational requirement by neglecting negligible factors.
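A minimal illustration of the vertex method underlying the FCPM's fuzzy component: at each alpha level the model is evaluated at every combination of interval endpoints and the envelope is kept. The composting response function and fuzzy inputs below are invented for illustration:

```python
from itertools import product

def temp_rise(kinetics, moisture, aeration):
    """Toy composting response: peak temperature rise in degrees C."""
    return 40.0 * kinetics * moisture / (1.0 + 0.5 * aeration)

# Triangular fuzzy inputs as (low, mode, high); values illustrative.
fuzzy_inputs = {"kinetics": (0.8, 1.0, 1.3),
                "moisture": (0.5, 0.6, 0.7),
                "aeration": (0.2, 0.5, 0.9)}

def alpha_cut(tri, alpha):
    """Interval of a triangular fuzzy number at membership level alpha."""
    lo, m, hi = tri
    return lo + alpha * (m - lo), hi - alpha * (hi - m)

# Vertex method: evaluate the model at every endpoint combination
# of the alpha-cut intervals and keep the min/max envelope.
for alpha in (0.0, 0.5, 1.0):
    cuts = [alpha_cut(t, alpha) for t in fuzzy_inputs.values()]
    vals = [temp_rise(*v) for v in product(*cuts)]
    print(f"alpha={alpha:.1f}: temperature rise in [{min(vals):.1f}, {max(vals):.1f}] C")
```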

12.
In this study, a time-varying statistical model, TVAREX, is proposed for forecasting daily averaged PM10 concentrations in coastal cities. It is a Kalman-filter-based autoregressive model with exogenous inputs that depend on selected meteorological properties on the day of prediction. The TVAREX model was evaluated against an artificial neural network (ANN) model trained with the Levenberg–Marquardt backpropagation algorithm on the same set of inputs. The error statistics of the TVAREX model were in general comparable to those of the ANN model, but TVAREX was more effective at capturing PM10 pollution episodes thanks to its online nature, an appealing advantage for implementation.
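A sketch of the time-varying ARX idea, assuming the coefficients drift as a random walk tracked by a Kalman filter; the synthetic PM10 generator and the chosen regressors are illustrative, not the TVAREX specification:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic daily PM10 with one AR lag and one meteorological input.
n = 300
wind = 2.0 + rng.random(n)
pm = np.zeros(n); pm[0] = 40.0
for t in range(1, n):
    pm[t] = 0.6 * pm[t-1] + 30.0 / wind[t] + 3.0 * rng.standard_normal()

# Time-varying ARX via Kalman filter: state = [AR coefficient, met coefficient],
# assumed to drift as a random walk (process noise q).
theta = np.zeros(2); P = 10.0 * np.eye(2)
q, r = 1e-4 * np.eye(2), 9.0
pred = np.zeros(n)
for t in range(1, n):
    P = P + q                                # random-walk state propagation
    h = np.array([pm[t-1], 1.0 / wind[t]])   # regressors on prediction day
    pred[t] = h @ theta                      # one-day-ahead forecast
    S = h @ P @ h + r                        # innovation variance
    K = P @ h / S                            # Kalman gain
    theta = theta + K * (pm[t] - pred[t])    # online coefficient update
    P = P - np.outer(K, h) @ P

rmse = np.sqrt(np.mean((pred[50:] - pm[50:]) ** 2))
print(f"one-day-ahead RMSE after burn-in: {rmse:.1f} ug/m3")
```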

13.
In this paper, a bootstrapped wavelet neural network (BWNN) was developed for predicting monthly ammonia nitrogen (NH4+–N) and dissolved oxygen (DO) in the Harbin region, northeast China. The Morlet wavelet basis function (WBF) was employed as the nonlinear activation function of a traditional three-layer artificial neural network (ANN). Prediction intervals (PIs) were constructed from the uncertainties attributable to the model structure and to data noise. The performance of the BWNN model was compared with four other models: a traditional ANN, a WNN, a bootstrapped ANN, and an autoregressive integrated moving average model. The results showed that the BWNN could handle severely fluctuating, non-seasonal water quality time series and outperformed the other four models. The uncertainty from data noise was smaller than that from the model structure for NH4+–N, whereas the converse held for the DO series. Total uncertainty was largest in the low-flow period because of the complicated processes during the freeze-up of the Songhua River. Furthermore, a data missing-and-refilling scheme was designed; BWNNs performed better for structural data missing (SD) than for incidental data missing (ID). For both ID and SD, temporal imputation was satisfactory for filling the NH4+–N series, whereas spatial imputation suited the DO series. This filling-and-forecasting BWNN method was applied to other areas with genuinely missing data, and the results demonstrated its efficiency. The methods introduced here will thus help managers make informed decisions.
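A simplified sketch of bootstrap prediction intervals, with a harmonic regression standing in for the wavelet neural network; the ensemble spread approximates model-structure uncertainty and the residual variance approximates data noise. Series and model are synthetic:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic monthly water-quality series (stand-in for NH4+-N, mg/L).
t = np.arange(120, dtype=float)
y = 1.5 + 0.4 * np.sin(2 * np.pi * t / 37) + 0.15 * rng.standard_normal(t.size)

# Bootstrap ensemble: refit on resampled data; ensemble spread estimates
# model-structure uncertainty, residual variance estimates data noise.
B, preds = 500, []
X = np.column_stack([np.ones_like(t), np.sin(2*np.pi*t/37), np.cos(2*np.pi*t/37)])
for _ in range(B):
    idx = rng.integers(0, t.size, t.size)        # resample with replacement
    beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    preds.append(X @ beta)
preds = np.array(preds)

model_var = preds.var(0)                         # structural uncertainty
noise_var = np.mean((y - preds.mean(0)) ** 2)    # data-noise uncertainty
half = 1.96 * np.sqrt(model_var + noise_var)     # ~95% prediction interval
cover = np.mean(np.abs(y - preds.mean(0)) <= half)
print(f"95% PI empirical coverage: {cover:.0%}")
```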

14.
The intake fraction (iF) measures the portion of a source's emissions that is inhaled by an exposed population over a defined period of time. This study examines spatial and population-based iF distributions of a known human carcinogen, benzene, from a ubiquitous urban source, local vehicular traffic, in the Helsinki Metropolitan Area using three computational methods. The first uses the EXPAND model (EXPosure to Air pollution, especially to Nitrogen Dioxide and particulate matter), which combines spatial and temporal information on population activity patterns with urban-scale and street-canyon dispersion models to predict spatial population exposure distributions. The second uses data from the personal monitoring study EXPOLIS (Air Pollution Exposure Distributions of Adult Urban Populations in Europe) to estimate intake fractions for the individuals in the study. The third, a one-compartment box model, provides estimates within an order of magnitude or better for non-reactive agents in an urban area. Population intake fractions are higher with the personal monitoring method (median iF 30 per million, mean iF 39 per million) than with the spatial model (annual mean iF 10 per million) and the box model (median iF 4 per million, mean iF 7 per million). In particular, this study presents detailed intake fraction distributions at several different levels (spatial, individual, and generic) for the same urban area.
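The one-compartment box model reduces to a single formula, iF = P·QB/(u·H·W); the sketch below evaluates it with illustrative values for a Helsinki-sized area (not the study's inputs) and lands in the same few-per-million range reported above:

```python
# One-compartment box model: the city is a well-mixed box of width W and
# mixing height H ventilated by wind speed u; the inhaled fraction of
# emissions is iF = P * QB / (u * H * W). Values are illustrative.
P = 1.0e6               # exposed population
QB = 14.0 / 86400.0     # breathing rate, m3/s (~14 m3/day per person)
u = 4.0                 # mean wind speed, m/s
H = 400.0               # mixing height, m
W = 20_000.0            # box width across the wind, m

iF = P * QB / (u * H * W)
print(f"intake fraction ~ {iF * 1e6:.1f} per million")
```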

15.
A large database of temporal trends in physical, ecological, and socio-economic data was developed within the EUROCAT project. The aim was to estimate nutrient fluxes under different socio-economic scenarios at the catchment and coastal-zone levels of the Po catchment (northern Italy), with reference to the Water Quality Objectives of the Water Framework Directive (WFD 2000/60/CE) and of Italian legislation. Emission data derived from national, regional, and local sources refer to both point and non-point sources. While non-point (diffuse) sources are simply integrated into the nutrient flux model, point sources are irregularly distributed. Intensive farming in the Po valley is one of the main pressure factors driving groundwater pollution in the catchment, so understanding the spatial variability of groundwater nitrate concentrations is critical to developing a Water Quality Management Plan. To use the scattered point-source data as input to our biogeochemical and transport models, their values and associated uncertainty had to be predicted at unsampled locations. This study reports the spatial distribution and uncertainty of groundwater nitrate concentration at a test site in the Po watershed using a probabilistic approach based on geostatistical sequential Gaussian simulation, which yields a series of equally probable stochastic images of the spatial distribution of nitrate concentration across the area. Post-processing of many simulations allowed contaminated and uncontaminated areas to be mapped and provided a model of the uncertainty in the spatial distribution of nitrate concentrations.
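A sketch of the post-processing step only, with random fields standing in for true sequential Gaussian simulation realizations; grid size, field statistics, and classification cutoffs are illustrative, while 50 mg/L is the usual drinking-water nitrate limit:

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in for SGS output: R equally probable realizations of nitrate
# concentration (mg/L) on a small grid; values are illustrative.
R, ny, nx = 200, 20, 20
base = 35.0 + 10.0 * rng.standard_normal((ny, nx))       # mean field
realizations = base + 8.0 * rng.standard_normal((R, ny, nx))

# Post-processing: per-cell probability of exceeding the 50 mg/L limit,
# estimated as the fraction of realizations above it.
p_exceed = (realizations > 50.0).mean(axis=0)

contaminated = p_exceed > 0.8      # confidently above the limit
uncontaminated = p_exceed < 0.2    # confidently below it
print(f"{contaminated.mean():.0%} of cells contaminated, "
      f"{uncontaminated.mean():.0%} uncontaminated, "
      f"{1 - contaminated.mean() - uncontaminated.mean():.0%} uncertain")
```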

17.
This paper reviews four statistical methods commonly used in environmental data analysis and uses real case study data to discuss potential pitfalls in their application. The four methods are percentiles and confidence intervals, the correlation coefficient, regression analysis, and analysis of variance (ANOVA). For percentiles and confidence intervals, the pitfall is automatically assuming a normal distribution for environmental data, which so often follow a log-normal distribution. For the correlation coefficient, it is using a wide range of data points in which the maximum value may trivialize the smaller data points and thereby skew the coefficient. For regression analysis, it is the propagation of input-variable uncertainties into the regression prediction, which may be more uncertain still. For ANOVA, it is accepting a hypothesis on a weak argument and implying a strong conclusion. As demonstrated in this paper, very different conclusions may be drawn from statistical analysis if these pitfalls go unrecognized. Reminders and insights drawn from the pitfalls are given at the end of the article.
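A short demonstration of the first pitfall: estimating an upper percentile of log-normal data under a normality assumption badly underestimates the tail. The data here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(9)

# Log-normal "environmental" variable: ln(X) ~ N(1, 1).
data = rng.lognormal(mean=1.0, sigma=1.0, size=500)

z99 = 2.3263                                      # standard-normal 99th pct
p99_true = np.exp(1.0 + 1.0 * z99)                # analytic log-normal p99
p99_normal = data.mean() + z99 * data.std(ddof=1) # wrong: normal assumption
p99_lognorm = np.exp(np.log(data).mean()
                     + z99 * np.log(data).std(ddof=1))

print(f"true p99 ~ {p99_true:.1f}")
print(f"normal-assumption estimate : {p99_normal:.1f}  (underestimates the tail)")
print(f"log-normal estimate        : {p99_lognorm:.1f}")
```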

18.
Land-use regression (LUR) models have increasingly been applied for air pollution mapping, typically at the city level. Although such models generally predict spatial variability well, their structure differs widely between studies. The observed differences may be artefacts of data and methodology, or may reflect underlying differences in source or dispersion characteristics. If the former, more standardised methods using common data sets could be beneficial. We compared land-use regression models for NO2 and PM10 developed with a consistent protocol in Great Britain (GB) and the Netherlands (NL). Models were constructed from 2001 annual mean concentrations recorded by the national air quality networks, with predictor variables relating to traffic, population, land use, and topography. Four sets of models were developed for each country. First, predictor variables derived from data sets common to both countries were used in a pooled analysis that included a country indicator and interaction terms between country and the identified predictors. Second, the common data sets were used to develop individual baseline models for each country. Third, each country-specific baseline model was applied, after calibration, in the other country to explore transferability. Fourth, a model was developed using the best available predictor variables for each country. A common model for GB and NL explained NO2 concentrations well (adjusted R2 0.64), with no significant differences in intercept or slopes between the two countries. The country-specific model developed on common variables improved the prediction for NL but not for GB. Models based on common data performed only slightly worse than models optimised with local data, whereas models transferred to the other country performed substantially worse than the country-specific models. In conclusion, care is needed both in transferring models across study areas and in developing large inter-regional LUR models.
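A toy version of the pooled analysis: a linear LUR with a country indicator and country-by-predictor interaction terms, fitted to synthetic sites; predictors, coefficients, and fit statistics are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(10)

# Synthetic monitoring sites: traffic and population-density predictors
# with a shared slope structure across two countries.
n = 120
country = rng.integers(0, 2, n)               # 0 = GB, 1 = NL (indicator)
traffic = rng.random(n); popdens = rng.random(n)
no2 = 15 + 20*traffic + 8*popdens + 2*country + 2*rng.standard_normal(n)

# Pooled land-use regression: common slopes plus a country indicator and
# country-by-predictor interactions to test whether slopes differ.
X = np.column_stack([np.ones(n), traffic, popdens, country,
                     country * traffic, country * popdens])
beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
pred = X @ beta
r2 = 1 - np.sum((no2 - pred)**2) / np.sum((no2 - no2.mean())**2)

names = ["intercept", "traffic", "popdens", "country",
         "country:traffic", "country:popdens"]
for nm, b in zip(names, beta):
    print(f"{nm:>16s}: {b:+.2f}")
print(f"R2: {r2:.2f}")
```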
