Similar Documents
1.
Statistical Issues in Assessing Anthropogenic Background for Arsenic
Conceptual and statistical issues surrounding the estimation of a background concentration distribution for arsenic are reviewed. How the background area is defined and how samples are collected are shown to affect the shape and location of the probability density function, which in turn affects the estimation and precision of the associated distributional parameters. The overall background concentration distribution is conceptualized as a mixture of a natural background distribution, an anthropogenic background distribution, and a distribution designed to accommodate the possibility that samples from contaminated sites are included in the background sample set. This concept is extended to a discussion of issues surrounding the estimation of natural and anthropogenic background distributions for larger geographic areas. Finally, the mixture model is formally defined and statistical approaches to estimating its parameters are discussed.
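The mixture model itself is not specified in this abstract; purely as a rough illustration of the idea, the sketch below fits a two-component mixture to log-transformed arsenic concentrations with scikit-learn. The synthetic data, the two-component choice, and the lognormal-component assumption are illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch: fit a two-component mixture (e.g., "natural" vs.
# "anthropogenic" background) to log-transformed arsenic concentrations.
# Synthetic data and lognormal components are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic background sample: mostly natural, partly anthropogenically elevated.
natural = rng.lognormal(mean=np.log(3.0), sigma=0.5, size=300)   # mg/kg
anthro = rng.lognormal(mean=np.log(12.0), sigma=0.4, size=60)    # mg/kg
conc = np.concatenate([natural, anthro])

log_conc = np.log(conc).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(log_conc)

for w, mu, var in zip(gmm.weights_, gmm.means_.ravel(), gmm.covariances_.ravel()):
    print(f"weight={w:.2f}, geometric mean={np.exp(mu):.1f} mg/kg, "
          f"log-sd={np.sqrt(var):.2f}")
```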

2.
The chemical mass balance (CMB) receptor model is commonly used in source apportionment studies as a means for attributing measured airborne particulate matter (PM) to its constituent emission sources. Traditionally, error terms (e.g., measurement and source profile uncertainty) associated with the model have been treated in an additive sense. In this work, however, arguments are made for the assumption of multiplicative errors, and the effects of this assumption are realized in a Bayesian probabilistic formulation which incorporates a ‘modified’ receptor model. One practical, beneficial effect of the multiplicative error assumption is that it automatically precludes the possibility of negative source contributions, without requiring additional constraints on the problem. The present Bayesian treatment further differs from traditional approaches in that the source profiles are inferred alongside the source contributions. Existing knowledge regarding the source profiles is incorporated as prior information to be updated through the Bayesian inferential scheme. Hundreds of parameters are therefore present in the expression for the joint probability of the source contributions and profiles (the posterior probability density function, or PDF), whose domain is explored efficiently using the Hamiltonian Markov chain Monte Carlo method. The overall methodology is evaluated and results compared to the US Environmental Protection Agency's standard CMB model using a test case based on PM data from Fresno, California.
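The authors' full formulation (joint inference of profiles and contributions via Hamiltonian Monte Carlo) is far richer than can be shown here; the sketch below illustrates only the multiplicative-error idea, with a fixed profile matrix and a random-walk Metropolis sampler over log source contributions. The profile matrix, data, and prior are assumptions made for the example.

```python
# Illustrative sketch of a CMB-style model with multiplicative (lognormal) errors:
# measured species x ~ (F @ s) * exp(noise). Sampling log(s) keeps contributions
# positive automatically. Profiles F, data, and priors are made up for the example.
import numpy as np

rng = np.random.default_rng(1)
F = np.array([[0.30, 0.05],      # species-by-source profile matrix (held fixed here)
              [0.10, 0.40],
              [0.05, 0.20],
              [0.55, 0.35]])
s_true = np.array([8.0, 3.0])                    # "true" source contributions (ug/m3)
sigma = 0.2                                      # log-space measurement error
x = (F @ s_true) * np.exp(rng.normal(0, sigma, size=F.shape[0]))

def log_post(log_s):
    s = np.exp(log_s)
    resid = np.log(x) - np.log(F @ s)            # multiplicative error -> log residuals
    loglik = -0.5 * np.sum(resid**2) / sigma**2
    logprior = -0.5 * np.sum((log_s - np.log(5.0))**2) / 1.0**2   # weak lognormal prior
    return loglik + logprior

# Random-walk Metropolis over log contributions (the paper uses Hamiltonian MCMC).
theta = np.log(np.array([5.0, 5.0]))
lp = log_post(theta)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, size=theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(np.exp(theta))
samples = np.array(samples[5000:])               # drop burn-in
print("posterior mean contributions:", samples.mean(axis=0))
```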

3.
Engineering projects involving hydrogeology are faced with uncertainties because the earth is heterogeneous, and typical data sets are fragmented and disparate. In theory, predictions provided by computer simulations using calibrated models constrained by geological boundaries provide answers to support management decisions, and geostatistical methods quantify safety margins. In practice, current methods are limited by the data types and models that can be included, computational demands, or simplifying assumptions. Data Fusion Modeling (DFM) removes many of the limitations and is capable of providing data integration and model calibration with quantified uncertainty for a variety of hydrological, geological, and geophysical data types and models. The benefits of DFM for waste management, water supply, and geotechnical applications are savings in time and cost through the ability to produce visual models that fill in missing data and predictive numerical models to aid management optimization. DFM has the ability to update field-scale models in real time using PC or workstation systems and is ideally suited for parallel processing implementation. DFM is a spatial state estimation and system identification methodology that uses three sources of information: measured data, physical laws, and statistical models for uncertainty in spatial heterogeneities. What is new in DFM is the solution of the causality problem in the data assimilation Kalman filter methods to achieve computational practicality. The Kalman filter is generalized by introducing information filter methods due to Bierman coupled with a Markov random field representation for spatial variation. A Bayesian penalty function is implemented with Gauss–Newton methods. This leads to a computational problem similar to numerical simulation of the partial differential equations (PDEs) of groundwater. In fact, extensions of PDE solver ideas to break down computations over space form the computational heart of DFM. State estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. Further, a system identification theory has been derived based on statistical likelihood principles. A maximum likelihood theory is provided to estimate statistical parameters such as Markov model parameters that determine the geostatistical variogram. Field-scale application of DFM at the DOE Savannah River Site is presented and compared with manual calibration. DFM calibration runs converge in less than 1 h on a Pentium Pro PC for a 3D model with more than 15,000 nodes. Run time is approximately linear with the number of nodes. Furthermore, conditional simulation is used to quantify the statistical variability in model predictions such as contaminant breakthrough curves.
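Bierman's information filter with a Markov random field prior cannot be reproduced from the abstract; as a minimal sketch of the information-form (inverse-covariance) measurement update that the abstract alludes to, assuming a linear observation model z = Hx + v and invented numbers:

```python
# Minimal information-filter measurement update: the state is carried as an
# information matrix Y = P^-1 and information vector y = P^-1 x, so sparse
# measurements simply add H^T R^-1 H and H^T R^-1 z. All values are illustrative.
import numpy as np

# Prior on a 3-node hydraulic-conductivity field (log10 K), loosely coupled.
Y = np.linalg.inv(np.array([[1.0, 0.5, 0.2],
                            [0.5, 1.0, 0.5],
                            [0.2, 0.5, 1.0]]))
y = Y @ np.array([-4.0, -4.0, -4.0])          # prior mean log10 K

# One direct measurement of the middle node (index 1) with variance R.
H = np.array([[0.0, 1.0, 0.0]])
R = np.array([[0.1]])
z = np.array([-3.2])

Y_post = Y + H.T @ np.linalg.inv(R) @ H
y_post = y + (H.T @ np.linalg.inv(R) @ z).ravel()

P_post = np.linalg.inv(Y_post)                # recover covariance and mean
x_post = P_post @ y_post
print("posterior mean log10 K:", x_post)
print("posterior std:", np.sqrt(np.diag(P_post)))
```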

4.
Two models frequently used to simulate the dispersion of pollutants in the atmosphere have been compared. Such comparisons are necessary because only a well-tested and well-calibrated simulation model can faithfully represent the real dispersion of pollutants. The models evaluated (HYSPLIT_4 with its four variants, and MEDIA) were run with the same meteorological dataset (23-26 October 1994) from the French model ARPEGE as input. The following criteria were compared: the space and time evolution of the pollutant cloud, the variation of statistical parameters in time and space, and the differences between simulated and measured concentration time series at six different stations. The results emphasise the characteristics of the two models and their abilities in the framework of air quality monitoring.
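The abstract does not list the exact statistical criteria used; the sketch below computes a few metrics commonly used to compare paired simulated and measured concentrations (fractional bias, normalized mean square error, correlation, and the factor-of-two fraction). The concentration arrays are placeholders, not data from the study.

```python
# Common paired model-evaluation statistics for dispersion models.
# The concentration arrays below are placeholders, not data from the study.
import numpy as np

measured = np.array([12.0, 8.5, 3.2, 0.9, 5.1, 7.7])    # e.g., Bq/m3 or ug/m3
simulated = np.array([10.4, 9.8, 2.1, 1.5, 4.0, 9.0])

def evaluation_stats(obs, mod):
    fb = 2.0 * (obs.mean() - mod.mean()) / (obs.mean() + mod.mean())
    nmse = np.mean((obs - mod) ** 2) / (obs.mean() * mod.mean())
    r = np.corrcoef(obs, mod)[0, 1]
    fac2 = np.mean((mod / obs >= 0.5) & (mod / obs <= 2.0))
    return {"FB": fb, "NMSE": nmse, "r": r, "FAC2": fac2}

print(evaluation_stats(measured, simulated))
```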

5.
Plume rise downwind of a large stationary gas turbine was measured in the field, and the conditions were then scaled in the laboratory. In the laboratory, the plume exit conditions, wind velocity and temperature profiles, and wind direction were matched. It was found that for high-temperature exhaust, the buoyancy is best matched by calculating a dimensionless density difference. With properly calculated buoyancy length scales, the plume trajectories were compared and found to agree quite well. The probability distributions of the entrainment constant and the variation of its average value with downwind distance were also compared. The field data showed about 15% greater plume rise. The median entrainment constant was about 10% greater for the laboratory test, and the shape of the probability distribution matched very closely.
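The paper's exact scaling procedure is not given in the abstract; as a hedged illustration of the quantities it mentions, the sketch below computes a dimensionless density difference, a buoyancy flux, a buoyancy length scale, and a textbook Briggs "2/3-law" plume trajectory. The stack and ambient values are invented, and the Briggs formula stands in for whatever trajectory comparison the authors actually used.

```python
# Illustrative buoyant-plume scaling (not the paper's exact method).
# Stack/ambient values are invented; the Briggs 2/3 law is the standard
# near-field trajectory for a buoyant plume in a crosswind.
import numpy as np

g = 9.81            # m/s2
T_a = 288.0         # ambient temperature, K
T_s = 720.0         # exhaust temperature, K (large gas turbine, assumed)
w = 20.0            # exhaust exit velocity, m/s
r = 2.5             # stack exit radius, m
u = 6.0             # mean wind speed at stack height, m/s

# Dimensionless density difference (ideal gas at the same pressure):
delta_rho_over_rho = (T_s - T_a) / T_s

# Buoyancy flux parameter and buoyancy length scale:
F_b = g * delta_rho_over_rho * w * r**2          # m4/s3
l_b = F_b / u**3                                 # m

# Briggs "2/3 law" plume rise versus downwind distance x:
x = np.linspace(10.0, 500.0, 50)                 # m
z = 1.6 * F_b**(1/3) * x**(2/3) / u              # plume centreline rise, m

print(f"buoyancy flux F_b = {F_b:.0f} m^4/s^3, length scale l_b = {l_b:.1f} m")
print(f"plume rise at x = 500 m: {z[-1]:.0f} m")
```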

6.
Detailed hourly precipitation data are required for long-range modeling of dispersion and wet deposition of particulate matter and water-soluble pollutants using the CALPUFF model. In sparsely populated areas such as the north central United States, ground-based precipitation measurement stations may be too widely spaced to offer a complete and accurate spatial representation of hourly precipitation within a modeling domain. The availability of remotely sensed precipitation data by satellite and the National Weather Service array of next-generation radars (NEXRAD) deployed nationally provides an opportunity to improve on the paucity of data for these areas. Before adopting a new method of precipitation estimation in a modeling protocol, it should be compared with the ground-based precipitation measurements, which are currently relied upon for modeling purposes. This paper presents a statistical comparison between hourly precipitation measurements for the years 2006 through 2008 at 25 ground-based stations in the north central United States and radar-based precipitation measurements available from the National Centers for Environmental Prediction (NCEP) as Stage IV data at the grid cell nearest to each selected precipitation station. It was found that the statistical agreement between the two methods depends strongly on whether the ground-based hourly precipitation is measured to within 0.1 in/hr or to within 0.01 in/hr. The results of the statistical comparison indicate that it would be more accurate to use gridded Stage IV precipitation data in a gridded dispersion model for a long-range simulation than to rely on precipitation data interpolated between widely scattered rain gauges.

Implications:

The current reliance on ground-based rain gauges for precipitation events and hourly data for modeling of dispersion and wet deposition of particulate matter and water-soluble pollutants results in potentially large discontinuity in data coverage and the need to extrapolate data between monitoring stations. The use of radar-based precipitation data, which is available for the entire continental United States and nearby areas, would resolve these data gaps and provide a complete and accurate spatial representation of hourly precipitation within a large modeling domain.
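As a rough illustration of the kind of paired comparison described, the sketch below compares synthetic hourly gauge and gridded radar estimates and shows how the gauge reporting resolution alone (0.1 vs. 0.01 in/hr) changes the agreement statistics. All values are synthetic, not the study's data.

```python
# Synthetic illustration: agreement between hourly gauge and Stage IV radar
# precipitation depends on the gauge reporting resolution (0.1 vs 0.01 in/hr).
import numpy as np

rng = np.random.default_rng(2)
true_precip = rng.gamma(shape=0.4, scale=0.08, size=2000)      # in/hr, mostly light
radar = np.clip(true_precip + rng.normal(0, 0.01, 2000), 0, None)

gauge_hi_res = np.round(true_precip, 2)     # gauges reporting to 0.01 in/hr
gauge_lo_res = np.round(true_precip, 1)     # gauges reporting to 0.1 in/hr

def stats(gauge, radar):
    bias = np.mean(radar - gauge)
    r = np.corrcoef(gauge, radar)[0, 1]
    return f"bias={bias:+.3f} in/hr, r={r:.3f}"

print("0.01 in/hr gauges vs radar:", stats(gauge_hi_res, radar))
print("0.10 in/hr gauges vs radar:", stats(gauge_lo_res, radar))
```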


7.
A numerical model of surfactant enhanced solubilization was developed and applied to the simulation of nonaqueous phase liquid recovery in two-dimensional heterogeneous laboratory sand tank systems. Model parameters were derived from independent, small-scale, batch and column experiments. These parameters included viscosity, density, solubilization capacity, surfactant sorption, interfacial tension, permeability, capillary retention functions, and interphase mass transfer correlations. Model predictive capability was assessed for the evaluation of the micellar solubilization of tetrachloroethylene (PCE) in the two-dimensional systems. Predicted effluent concentrations and mass recovery agreed reasonably well with measured values. Accurate prediction of enhanced solubilization behavior in the sand tanks was found to require the incorporation of pore-scale, system-dependent, interphase mass transfer limitations, including an explicit representation of specific interfacial contact area. Predicted effluent concentrations and mass recovery were also found to depend strongly upon the initial NAPL entrapment configuration. Numerical results collectively indicate that enhanced solubilization processes in heterogeneous, laboratory sand tank systems can be successfully simulated using independently measured soil parameters and column-measured mass transfer coefficients, provided that permeability and NAPL distributions are accurately known. This implies that the accuracy of model predictions at the field scale will be constrained by our ability to quantify soil heterogeneity and NAPL distribution.
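The multiphase model itself cannot be condensed into a few lines; purely as an illustration of the rate-limited interphase mass transfer the abstract emphasizes, the sketch below integrates a linear-driving-force solubilization equation, dC/dt = k_la (C_s - C), with invented parameters in place of the paper's correlations.

```python
# Illustrative rate-limited solubilization of entrapped PCE into a flushing
# surfactant solution: dC/dt = k_la * (C_s - C). Parameters are invented.
import numpy as np
from scipy.integrate import solve_ivp

k_la = 0.5          # lumped mass-transfer rate coefficient, 1/h
C_s = 250.0         # apparent (micellar) solubility of PCE, mg/L

def dCdt(t, C):
    return k_la * (C_s - C)

sol = solve_ivp(dCdt, t_span=(0.0, 12.0), y0=[0.0], dense_output=True)
t = np.linspace(0.0, 12.0, 7)
print("time (h):", t)
print("effluent PCE (mg/L):", np.round(sol.sol(t).ravel(), 1))
```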

8.
The usefulness of water quality simulation models for environmental management is explored with a focus on prediction uncertainty. The specific objective is to demonstrate how the usability of a flow and transport model (here: MACRO) can be enhanced by developing and analyzing its output probability distributions based on input variability. This infiltration-based model was designed to investigate preferential flow effects on pollutant transport. A statistical sensitivity analysis is used to identify the most uncertain input parameters based on model outputs. Probability distribution functions of input variables were determined based on field-measured data obtained under alternative tillage treatments. Uncertainty of model outputs is investigated using a Latin hypercube sampling scheme (LHS) with restricted pairing for model input sampling. Probability density functions (pdfs) are constructed for water flow rate, atrazine leaching rate, total accumulated leaching, and atrazine concentration in percolation water. Results indicate that consideration of input parameter uncertainty produces a 20% higher mean flow rate along with two to three times larger atrazine leaching rate, accumulated leachate, and concentration than that obtained using mean input parameters. Uncertainty in predicted flow rate is small but that in solute transport is an order of magnitude larger than that of corresponding input parameters. Macropore flow is observed to contribute to the variability of atrazine transport results. Overall, the analysis provides a quantification of prediction uncertainty that is found to enhance a user's ability to assess risk levels associated with model predictions.
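Restricted pairing (correlation control) is not reproduced here; a minimal sketch of plain Latin hypercube sampling of a few MACRO-like inputs, using SciPy's qmc module and assumed marginal distributions, looks like this (parameter names and distributions are assumptions, not the study's):

```python
# Plain Latin hypercube sampling of model inputs (restricted pairing omitted).
# Parameter names and marginal distributions are assumptions for illustration.
import numpy as np
from scipy.stats import qmc, norm, lognorm

n = 200
sampler = qmc.LatinHypercube(d=3, seed=0)
u = sampler.random(n)                          # uniform [0,1) design, shape (n, 3)

# Map uniform scores through assumed marginals (inverse-CDF transform):
k_sat = lognorm.ppf(u[:, 0], s=0.8, scale=20.0)     # saturated conductivity, mm/h
sorption = norm.ppf(u[:, 1], loc=1.2, scale=0.2)    # atrazine sorption parameter
macroporosity = 0.01 + u[:, 2] * (0.05 - 0.01)      # volume fraction, uniform

inputs = np.column_stack([k_sat, sorption, macroporosity])
print("sampled input matrix shape:", inputs.shape)
print("first three rows:\n", np.round(inputs[:3], 3))
```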

9.
Mowat FS, Bundy KJ. Chemosphere, 2002, 49(5): 499-513.
In order to evaluate sediment toxicity, a mathematical algorithm was developed to compute the toxicity of multiple-component mixtures acting in an additive manner. A statistical approach was devised to determine the presence of potential interactive effects among mixture components. The algorithm uses three kinds of data to obtain an integrative approach to sediment toxicity assessment: Microtox toxicity data (EC50 values), sediment pollutant concentration measurements, and sequential extraction (SEQ) data to investigate metal partitioning. To simplify the analysis of complex mixtures using a prioritization scheme based on intrinsic toxicity and relative abundance, a toxicity index (TI) was employed as an indicator of adverse ecological impact. In general, ranking contaminants with the TI approach was found to be most efficient in reducing computational time, and concentrations based on bioavailability data from SEQ were found to be the best theoretical predictor of the experimental mixture toxicity value. Only a few pollutants, those present at the greatest abundance, were needed to provide a good approximation of the EC50 calculated when all components were included. Not only does this substantially reduce the computational time needed to determine the EC50, but it could also, in some cases, dramatically reduce the pollutant monitoring effort required to track toxicity effectively. This approach would have substantial implications for both risk assessment and remediation strategies, making them more efficient by focusing on the priority pollutants identified.
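The paper's toxicity index is not defined in the abstract; the sketch below shows the standard concentration-addition (toxic-unit) calculation for an additive mixture EC50, together with a crude abundance-times-potency ranking of the kind a TI might use. Component names and numbers are invented.

```python
# Concentration-addition estimate of a mixture EC50 and a crude
# potency-times-abundance ranking. All component values are invented.
import numpy as np

names = ["Cu", "Zn", "phenanthrene", "Pb"]
ec50 = np.array([2.0, 8.0, 1.0, 15.0])        # single-substance EC50s, mg/L
conc = np.array([40.0, 120.0, 5.0, 60.0])     # sediment-derived exposure, mg/L basis

fractions = conc / conc.sum()                 # mixture composition
# Concentration addition: 1 / EC50_mix = sum_i (p_i / EC50_i)
ec50_mix = 1.0 / np.sum(fractions / ec50)
print(f"additive mixture EC50 ~ {ec50_mix:.2f} mg/L (total mixture concentration)")

# Simple ranking by toxic units (abundance / potency), a TI-like prioritization:
toxic_units = conc / ec50
for name, tu in sorted(zip(names, toxic_units), key=lambda t: -t[1]):
    print(f"{name:>13}: {tu:.1f} toxic units")
```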

10.
The methods presented in this work provide a potential tool for characterizing contaminant source zones in terms of mass flux. The problem was conceptualized by considering contaminant transport through a vertical "flux plane" located between a source zone and a downgradient region where contaminant concentrations were measured. The goal was to develop a robust method capable of providing a statement of the magnitude and uncertainty associated with estimated contaminant mass flux values. In order to estimate the magnitude and transverse spatial distribution of mass flux through a plane, the problem was considered in an optimization framework. Two numerical optimization techniques were applied, simulated annealing (SA) and minimum relative entropy (MRE). The capabilities of the flux plane model and the numerical solution techniques were evaluated using data from a numerically generated test problem and a nonreactive tracer experiment performed in a three-dimensional aquifer model. Results demonstrate that SA is more robust and converges more quickly than MRE. However, SA is not capable of providing an estimate of the uncertainty associated with the simulated flux values. In contrast, MRE is not as robust as SA, but once in the neighborhood of the optimal solution, it is quite effective as a tool for inferring mass flux probability density functions, expected flux values, and confidence limits. A hybrid (SA-MRE) solution technique was developed in order to take advantage of the robust solution capabilities of SA and the uncertainty estimation capabilities of MRE. The coupled technique provided probability density functions and confidence intervals that would not have been available from an independent SA algorithm, and they were obtained more efficiently than if provided by an independent MRE algorithm.
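Neither the flux-plane parameterization nor the MRE step can be reproduced from the abstract; as a small, hedged illustration of the optimization idea only, the sketch below recovers a discretized transverse flux distribution from noisy downgradient concentrations through an assumed linear transfer matrix, using SciPy's dual_annealing in place of the authors' SA implementation.

```python
# Toy flux-plane inversion: recover mass flux in a few transverse cells from
# downgradient concentrations via an assumed linear transfer matrix, using
# simulated annealing. The transfer matrix and data are synthetic.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(3)
n_cells, n_obs = 5, 12
# Assumed transfer matrix: each observation is a smoothed view of the flux plane.
A = np.exp(-0.5 * ((np.arange(n_obs)[:, None] * n_cells / n_obs
                    - np.arange(n_cells)[None, :]) / 1.2) ** 2)
flux_true = np.array([0.1, 0.8, 2.0, 0.6, 0.1])          # g/day per cell
obs = A @ flux_true + rng.normal(0, 0.05, size=n_obs)     # noisy concentrations

def misfit(flux):
    return np.sum((A @ flux - obs) ** 2)

bounds = [(0.0, 5.0)] * n_cells                            # non-negative fluxes
result = dual_annealing(misfit, bounds=bounds, seed=0, maxiter=500)
print("true flux:     ", flux_true)
print("estimated flux:", np.round(result.x, 2))
print("total mass flux estimate:", round(result.x.sum(), 2), "g/day")
```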

11.
Regional haze regulations require progress toward reducing atmospheric haze as measured by the particle scattering coefficient of visible light. From a practical perspective, this raises the following question: Given a decrease in extinction, what is the probability that people will notice an improvement in visibility? This paper proposes a quantitative definition of the probability of a perceptible increase in visibility given a decrease in light extinction and a general method to estimate this probability from perception measurements made in the field under realistic conditions. Using data from a recent study of visibility perception by 8 observers, it is estimated that a 2-4 deciview change gives a 67% maximum probability of detecting the improvement. Stated another way, the odds of seeing a difference are at most 2:1 for a change of 2-4 deciviews. A 90% probability requires a change of at least 3.5-7.0 deciviews. The limitations and possible bias in the results of this study are discussed. These results may have a major effect on the cost-benefit analysis of regulatory actions to improve visibility.
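The study's perception model is not given in the abstract; the sketch below only shows the standard deciview haze index, dv = 10 ln(b_ext / 10 Mm^-1), and an assumed logistic detection curve chosen to loosely echo the quoted probabilities. The logistic parameters are hypothetical, not fitted to the paper's data.

```python
# Deciview haze index and a hypothetical logistic detection-probability curve.
# The logistic parameters are invented to loosely echo the abstract's numbers.
import numpy as np

def deciview(b_ext_inv_Mm):
    """Haze index from total light extinction (inverse megameters)."""
    return 10.0 * np.log(b_ext_inv_Mm / 10.0)

b_before, b_after = 120.0, 90.0            # 1/Mm, illustrative extinction values
dv_change = deciview(b_before) - deciview(b_after)

def p_detect(delta_dv, midpoint=2.5, slope=1.2, ceiling=0.9):
    """Hypothetical probability that observers notice a delta_dv improvement."""
    return ceiling / (1.0 + np.exp(-(delta_dv - midpoint) / slope))

print(f"extinction change {b_before}->{b_after} 1/Mm = {dv_change:.1f} dv")
print(f"assumed detection probability: {p_detect(dv_change):.0%}")
```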

12.
The problem of determining the source of an emission from the limited information provided by a finite and noisy set of concentration measurements obtained from real-time sensors is an ill-posed inverse problem. In general, this problem cannot be solved uniquely without additional information. A Bayesian probabilistic inferential framework, which provides a natural means for incorporating both errors (model and observational) and prior (additional) information about the source, is presented. Here, Bayesian inference is applied to find the posterior probability density function of the source parameters (location and strength) given a set of concentration measurements. It is shown how the source–receptor relationship required in the determination of the likelihood function can be efficiently calculated using the adjoint of the transport equation for the scalar concentration. The posterior distribution of the source parameters is sampled using a Markov chain Monte Carlo method. The inverse source determination method is validated against real data sets acquired in a highly disturbed flow field in an urban environment. The data sets used to validate the proposed methodology include a water-channel simulation of the near-field dispersion of contaminant plumes in a large array of building-like obstacles (Mock Urban Setting Trial) and a full-scale field experiment (Joint Urban 2003) in Oklahoma City. These two examples demonstrate the utility of the proposed approach for inverse source determination.
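The adjoint-based likelihood cannot be reproduced from the abstract alone; as a heavily reduced sketch of the inference step only, the example below uses a simple exponential-decay source–receptor kernel (an assumption, not the paper's transport model) and a random-walk Metropolis sampler to recover a source location and strength from noisy concentrations.

```python
# Toy Bayesian source reconstruction: infer source location x0 and strength q
# from noisy receptor concentrations, assuming a simple exponential kernel
# c(x) = q * exp(-|x - x0| / L). The kernel stands in for the real transport model.
import numpy as np

rng = np.random.default_rng(4)
L = 50.0                                   # assumed dispersion length scale, m
receptors = np.linspace(0.0, 400.0, 15)    # receptor positions, m
x0_true, q_true, noise = 180.0, 5.0, 0.1

def forward(x0, q):
    return q * np.exp(-np.abs(receptors - x0) / L)

obs = forward(x0_true, q_true) + rng.normal(0, noise, receptors.size)

def log_post(x0, log_q):
    if not (0.0 <= x0 <= 400.0):
        return -np.inf                      # flat prior on location within the domain
    resid = obs - forward(x0, np.exp(log_q))
    return -0.5 * np.sum(resid**2) / noise**2 - 0.5 * (log_q**2) / 4.0

theta = np.array([200.0, 0.0])              # initial guess (x0, log q)
lp = log_post(*theta)
chain = []
for _ in range(30000):
    prop = theta + rng.normal(0, [5.0, 0.05])
    lp_prop = log_post(*prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append([theta[0], np.exp(theta[1])])
chain = np.array(chain[10000:])
print("posterior mean (x0, q):", np.round(chain.mean(axis=0), 2))
```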

13.
The purposes of the present study were the evaluation of the genotoxic effect of a mixture of paraquat and linuron commercial formulations, the evaluation of possible interaction between the two pesticides, and the comparison of different statistical tests used for the data analysis. For these purposes the bone marrow micronucleus test was used. Dose levels were determined on the basis of the LD50 of each component. According to the experimental design, three dose levels of the mixture were administered to groups of five Wistar rats of each sex. Sampling time was 24 h for all groups, plus 48 h for the highest-dose and positive-control groups. In addition, the formulations were studied separately at two dose levels. Cyclophosphamide was used as the positive control. Different parametric and nonparametric tests were used for the statistical analysis of the results. Under the study conditions, the mixture of the two pesticides was not shown to be genotoxic; however, cytotoxicity was indicated, with no signs of interaction. Only minor discrepancies were observed between the statistical tests used for the data analysis.
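The specific tests applied are not named in the abstract; as a small illustration of comparing a parametric and a nonparametric analysis of micronucleus counts, assuming two hypothetical dose groups:

```python
# Hypothetical micronucleus counts per 1000 polychromatic erythrocytes for a
# control and a treated group, analysed with a parametric and a nonparametric test.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

control = np.array([2, 3, 1, 4, 2])       # invented counts, 5 animals per group
treated = np.array([3, 5, 4, 6, 3])

t_stat, p_t = ttest_ind(control, treated)
u_stat, p_u = mannwhitneyu(control, treated, alternative="two-sided")

print(f"t-test:         p = {p_t:.3f}")
print(f"Mann-Whitney U: p = {p_u:.3f}")
```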

14.
Estimating risks of groundwater contamination often requires schemes for representing and propagating uncertainties relative to model input parameters. The most popular method is the Monte Carlo method, whereby cumulative probability distributions are randomly sampled in an iterative fashion. The shortcoming of the approach, however, arises when probability distributions are arbitrarily selected in situations where available information is incomplete or imprecise. In such situations, alternative modes of information representation can be used, for example the nested intervals known as “possibility distributions”. In practical situations of groundwater risk assessment, it is common that certain model parameters may be represented by single probability distributions (representing variability) because there are data to justify these distributions, while others are more faithfully represented by possibility distributions (representing imprecision) due to the partial nature of available information. This paper applies two recent methods, designed for the joint propagation of variability and imprecision, to a groundwater contamination risk assessment. Results of the joint-propagation methods are compared to those obtained using both interval analysis and the Monte Carlo method with a hypothesis of stochastic independence between model parameters. The two joint-propagation methods provide results in the form of families of cumulative distributions of the probability of exceeding a certain value of groundwater concentration. These families are delimited by an upper and a lower cumulative distribution, called Plausibility and Belief, respectively, after evidence theory. Slight differences between the results of the two joint-propagation methods are explained by the different assumptions regarding parameter dependencies. Results highlight the point that non-conservative results may be obtained if single cumulative probability distributions are arbitrarily selected for model parameters in the face of imprecise information and the Monte Carlo method is used under the assumption of stochastic independence. The proposed joint-propagation methods provide upper and lower bounds for the probability of exceeding a tolerance threshold. As this may seem impractical in a risk-management context, it is proposed to introduce “a-posteriori subjectivity” (as opposed to the “a-priori subjectivity” introduced by the arbitrary selection of single probability distributions) by defining a single indicator of evidence as a weighted average of Plausibility and Belief, with weights to be defined according to the specific context.
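The joint-propagation algorithms themselves are not detailed in the abstract; the sketch below illustrates the general idea with one probabilistic input (Monte Carlo samples) and one imprecise input carried as an interval (the support of a possibility distribution; a full treatment would sweep all alpha-cuts), yielding lower (Belief) and upper (Plausibility) bounds on the probability of exceeding a threshold. The toy model and all numbers are invented.

```python
# Hybrid propagation of one probabilistic and one possibilistic parameter through
# a toy model, giving Belief/Plausibility bounds on the probability of exceeding a
# limit. The model C = source / (dilution * retardation) and all values are invented.
import numpy as np

rng = np.random.default_rng(5)
n_mc = 5000

# Probabilistic input (variability): dilution factor, lognormal.
dilution = rng.lognormal(mean=np.log(50.0), sigma=0.4, size=n_mc)

# Possibilistic input (imprecision): retardation factor known only as an interval
# [1.2, 4.0] (the alpha = 0 cut of a triangular possibility distribution).
r_lo, r_hi = 1.2, 4.0

source, threshold = 1000.0, 15.0      # mg/d source term; mg/L regulatory limit

# C decreases with retardation, so the interval endpoints bound the output directly.
c_max = source / (dilution * r_lo)    # optimistic for exceedance
c_min = source / (dilution * r_hi)    # pessimistic for exceedance

belief = np.mean(c_min > threshold)        # exceeds for every value in the interval
plausibility = np.mean(c_max > threshold)  # exceeds for some value in the interval

print(f"Belief (lower bound):       P(C > {threshold} mg/L) >= {belief:.3f}")
print(f"Plausibility (upper bound): P(C > {threshold} mg/L) <= {plausibility:.3f}")
```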

15.
The CIT/UCD three-dimensional source-oriented externally mixed air quality model is tested during a severe photochemical smog episode (Los Angeles, 7–9 September 1993) using two different chemical mechanisms that describe the formation of ozone and secondary reaction products. The first chemical mechanism is the secondary organic aerosol mechanism (SOAM) that is based on SAPRC90 with extensions to describe the formation of condensable organic products. The second chemical mechanism is the Caltech atmospheric chemistry mechanism (CACM) that is based on SAPRC99 with more detailed treatment of organic oxidation products. The predicted ozone concentrations from the CIT/UCD/SOAM and the CIT/UCD/CACM models agree well with the observations made at most monitoring sites with a mean normalized error of approximately 0.4–0.5. Good agreement is generally found between the predicted and measured NOx concentrations except during morning rush hours of 6–10 am when NOx concentrations are under-predicted at most locations. Total VOC concentrations predicted by the two chemical mechanisms agree reasonably well with the observations at three of the four sites where measurements were made. Gas-phase concentrations of phenolic compounds and benzaldehyde predicted by the UCD/CIT/CACM model are higher than the measured concentrations whereas the predicted concentrations of other aromatic compounds approximately agree with the measured values. The fine airborne particulate matter mass concentrations (PM2.5) predicted by the UCD/CIT/SOAM and UCD/CIT/CACM models are slightly greater than the observed values during evening hours and lower than observed values during morning rush hours. The evening over-predictions are driven by an excess of nitrate, ammonium ion and sulfate. The UCD/CIT/CACM model predicts higher nighttime concentrations of gaseous precursors leading to the formation of particulate nitrate than the UCD/CIT/SOAM model. Elemental carbon and total organic mass are under-predicted by both models during morning rush hour periods. When this latter finding is combined with the NOx under-predictions that occur at the same time, it suggests a systematic bias in the diesel engine emissions inventory. The mass of particulate total organic carbon is under-predicted by both the UCD/CIT/SOAM and UCD/CIT/CACM models during afternoon hours. Elemental carbon concentrations generally agree with the observations at this time. Both the UCD/CIT/SOAM and UCD/CIT/CACM models predict low concentrations of secondary organic aerosol (SOA) (<3.5 μg m−3), indicating that both models could be missing SOA formation pathways. The representation of the aerosol as an internal mixture vs. a source-oriented external mixture did not significantly affect the predicted concentrations during the current study.

16.
Field data of physical properties in heterogeneous crystalline bedrock, like porosity and fracture aperture, is associated with uncertainty that can have a significant impact on the analysis of solute transport in rock fractures. Solutions to the central temporal moments of the residence time probability density function (PDF) are derived in a closed form for a solute Dirac pulse. The solutions are based on a model that takes into account advection along the fracture plane, diffusion into the rock matrix and sorption kinetics in the rock matrix. The most relevant rock properties, including fracture aperture and several matrix properties, as well as flow velocity, are assumed to be spatially random along transport pathways. The mass transport is first solved in a general form along one-dimensional pathways, but the results can be extended to multi-dimensional flows simply by substituting the expected travel time for inert water parcels. Based on data obtained with rock samples taken at the Äspö Hard Rock Laboratory in Sweden, the solutions indicate that the heterogeneity of the rock properties contributes to increasing significantly both the variance and the skewness of the residence time probability density function for a pulse travelling in a fracture. The Äspö data suggests that the bias introduced in the variance of the residence time PDF by neglecting the effect of heterogeneity of the rock properties on the radionuclide migration is very large for fractures thinner than a few tenths of a millimetre.
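The closed-form moment solutions cannot be reproduced from the abstract; as a simple numerical companion, the sketch below computes the mean, variance, and skewness of a residence-time PDF by direct integration of its temporal moments, using an illustrative gamma-shaped PDF in place of the paper's solution.

```python
# Central temporal moments (mean, variance, skewness) of a residence-time PDF f(t),
# computed by numerical integration. A gamma-shaped PDF stands in for the paper's
# closed-form solution; it is purely illustrative.
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import gamma

t = np.linspace(0.0, 500.0, 20001)            # years
f = gamma.pdf(t, a=2.5, scale=12.0)           # illustrative residence-time PDF

m0 = trapezoid(f, t)                          # should be ~1 (normalization)
mean = trapezoid(t * f, t) / m0
var = trapezoid((t - mean) ** 2 * f, t) / m0
skew = trapezoid((t - mean) ** 3 * f, t) / m0 / var ** 1.5

print(f"normalization = {m0:.4f}")
print(f"mean residence time = {mean:.1f} yr, variance = {var:.1f} yr^2, skewness = {skew:.2f}")
```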

