Similar Documents
1.
Contamination source identification is a crucial step in environmental remediation. The exact contaminant source locations and release histories are often unknown due to a lack of records and therefore must be identified through inversion. Coupled identification of source locations and release histories is a complex nonlinear optimization problem, and existing strategies have important practical limitations: many studies rely on analytical solutions for point sources, formulate and solve the problem via nonlinear optimization, and seldom consider model uncertainty. In practice, model uncertainty can be significant because of uncertainty in model structure and parameters and error in numerical solutions, and an inaccurate model can lead to erroneous inversion of contaminant sources. In this work, a constrained robust least squares (CRLS) estimator is combined with a branch-and-bound global optimization solver to identify source release histories and source locations iteratively: CRLS recovers the release history, while the global optimization solver searches for the locations. CRLS is a robust estimator developed to incorporate directly a modeler's prior knowledge of model uncertainty and measurement error; this robustness is essential for ill-conditioned systems. Because the release-history recovery is decoupled from the location search, the total solution time can be reduced significantly. Our numerical experiments show that the combination of CRLS with the global optimization solver achieved better performance than the combination of a non-robust estimator, i.e., the nonnegative least squares (NNLS) method, with the same solver.
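The abstract does not reproduce the CRLS formulation itself; as a minimal sketch of the underlying idea, the snippet below recovers a release history s from observations c = As + noise, comparing plain NNLS against a Tikhonov-regularized nonnegative solve that stands in for a robust estimator. The transfer matrix A and true history are synthetic, not from the paper.

```python
# Hedged sketch (not the paper's CRLS estimator): recover a source release
# history s from observed concentrations c = A s + noise, where A is a
# transfer matrix that a transport model would normally supply. A
# Tikhonov-regularized nonnegative solve stands in for a robust estimator;
# plain NNLS is the non-robust baseline.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n = 40                                    # release-history time steps
t = np.arange(n)
A = np.exp(-0.5 * ((t[None, :] - t[:, None] - 3) / 4.0) ** 2)  # smooth synthetic kernel
s_true = np.exp(-0.5 * ((t - 15) / 3.0) ** 2)                  # true release pulse
c = A @ s_true + 0.05 * rng.standard_normal(n)                 # noisy observations

# Non-robust baseline: nonnegative least squares
s_nnls, _ = nnls(A, c)

# Regularized nonnegative solve: min ||As - c||^2 + lam^2 ||s||^2 with s >= 0,
# written as NNLS on the augmented system [A; lam*I] s ~ [c; 0]
lam = 0.5
A_aug = np.vstack([A, lam * np.eye(n)])
c_aug = np.concatenate([c, np.zeros(n)])
s_reg, _ = nnls(A_aug, c_aug)

print("NNLS error:       ", np.linalg.norm(s_nnls - s_true))
print("regularized error:", np.linalg.norm(s_reg - s_true))
```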

2.
Finding the location and concentration of contaminant sources is an important step in groundwater remediation and management. This discovery typically requires the solution of an inverse problem, which can be formulated as an optimization problem whose objective function is the sum of the squared errors between the observed and predicted values of contaminant concentration at the observation wells. Studies show that source identification accuracy depends on the observation locations (i.e., network geometry) and the frequency of sampling; thus, finding a set of optimal monitoring well locations is very important for characterizing the source. The objective of this study is to propose a sensitivity-based method for the optimal placement of monitoring wells that incorporates two uncertainties: the source location and the hydraulic conductivity. An optimality metric called D-optimality, in combination with a distance metric that tends to spread monitoring locations as far apart from each other as possible, is developed for finding optimal monitoring well locations for source identification; a minimal sketch of this criterion follows below. To address uncertainty in hydraulic conductivity, a method is proposed for integrating multiple well designs based on multiple hydraulic conductivity realizations. A genetic algorithm is used as the search technique for this discrete combinatorial optimization problem. The procedure was applied to a hypothetical problem based on the well-known Borden Site data in Canada. The results show that the criterion-based selection proposed in this paper provides improved source identification performance when compared to a uniformly distributed placement of wells.
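As a hedged illustration of the selection criterion (not the paper's code), the sketch below scores a candidate set of wells by D-optimality, det(JᵀJ), where J holds sensitivities of well concentrations to the unknown source parameters, plus a term rewarding well separation. The sensitivity matrix is synthetic, and exhaustive search replaces the genetic algorithm for brevity.

```python
# Hedged sketch of D-optimal well selection with a distance metric.
# J_full would come from a transport model's sensitivities; here it is random.
import itertools
import numpy as np

rng = np.random.default_rng(1)
n_candidates, n_params, n_pick = 12, 3, 4
xy = rng.uniform(0, 100, size=(n_candidates, 2))        # candidate well coords
J_full = rng.standard_normal((n_candidates, n_params))  # dC/d(source params)

def score(idx, w=0.05):
    J = J_full[list(idx), :]
    sign, logdet = np.linalg.slogdet(J.T @ J)           # D-optimality criterion
    if sign <= 0:
        return -np.inf
    d = [np.linalg.norm(xy[i] - xy[j]) for i, j in itertools.combinations(idx, 2)]
    return logdet + w * min(d)                          # reward well separation

# Exhaustive search over subsets stands in for the paper's genetic algorithm
best = max(itertools.combinations(range(n_candidates), n_pick), key=score)
print("selected wells:", best, "score:", round(score(best), 3))
```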

3.
The methods presented in this work provide a potential tool for characterizing contaminant source zones in terms of mass flux. The problem was conceptualized by considering contaminant transport through a vertical "flux plane" located between a source zone and a downgradient region where contaminant concentrations were measured. The goal was to develop a robust method capable of providing a statement of the magnitude and uncertainty associated with estimated contaminant mass flux values. In order to estimate the magnitude and transverse spatial distribution of mass flux through a plane, the problem was considered in an optimization framework. Two numerical optimization techniques were applied, simulated annealing (SA) and minimum relative entropy (MRE). The capabilities of the flux plane model and the numerical solution techniques were evaluated using data from a numerically generated test problem and a nonreactive tracer experiment performed in a three-dimensional aquifer model. Results demonstrate that SA is more robust and converges more quickly than MRE. However, SA is not capable of providing an estimate of the uncertainty associated with the simulated flux values. In contrast, MRE is not as robust as SA, but once in the neighborhood of the optimal solution, it is quite effective as a tool for inferring mass flux probability density functions, expected flux values, and confidence limits. A hybrid (SA-MRE) solution technique was developed in order to take advantage of the robust solution capabilities of SA and the uncertainty estimation capabilities of MRE. The coupled technique provided probability density functions and confidence intervals that would not have been available from an independent SA algorithm, and they were obtained more efficiently than if provided by an independent MRE algorithm.
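A minimal simulated-annealing sketch of the flux-plane idea is given below (not the paper's implementation): cell-by-cell mass fluxes through a vertical plane are estimated from downgradient concentrations, with a synthetic linear operator G standing in for the transport model.

```python
# Minimal SA sketch: estimate nonnegative fluxes q through flux-plane cells
# so that G q matches downgradient observations. G and q_true are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_obs = 10, 15
G = rng.uniform(0, 1, size=(n_obs, n_cells))       # stand-in transport operator
q_true = np.maximum(0, rng.normal(1.0, 0.5, n_cells))
c_obs = G @ q_true + 0.02 * rng.standard_normal(n_obs)

def misfit(q):
    return np.sum((G @ q - c_obs) ** 2)

q = np.ones(n_cells)                               # initial guess
T = 1.0
for it in range(20000):
    q_new = np.maximum(0, q + rng.normal(0, 0.05, n_cells))  # perturb, keep q >= 0
    dE = misfit(q_new) - misfit(q)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Metropolis acceptance rule
        q = q_new
    T *= 0.9995                                    # geometric cooling schedule

print("recovered fluxes:", np.round(q, 2))
```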

4.
This work applies optimization and an Eulerian inversion approach presented by Bagtzoglou and Baun in 2005 to reconstruct contaminant plume time histories and to identify the likely source of atmospheric contamination, using data from a real test site for the first time. The present-day distribution of an atmospheric contaminant plume, together with data points reflecting the plume history, allows the reconstruction and provides the plume velocity, distribution, and probable source. The method was tested on a hypothetical case and with data from the Forest Atmosphere Transfer and Storage (FACTS) experiment at the Duke experimental forest site. In the scenarios presented herein, as well as in numerous cases tested for verification purposes, the model conserved mass, successfully located the peak of the plume, and captured the motion of the plume well, but underestimated the contaminant peak.

5.
Releases of pollution into rivers must be handled with special consideration of environmental standards. For this purpose, it is essential to specify the contribution of each pollution source to the contamination of water resources. In this study, a mathematical model is proposed for determining the locations and concentration release histories of polluting point sources from measured downstream river concentrations via an inverse problem framework. The inverse solution is based on the integral equation obtained by applying the Green's function method to the one-dimensional advection-dispersion contaminant transport equation. Discretization of this integral equation results in a linear, over-determined, ill-posed system of algebraic equations, which is solved using the Tikhonov regularization method. Several examples and some real field data are investigated to illustrate the abilities of the proposed model. Results imply that the proposed method is effective and can identify pollution sources in rivers with acceptable accuracy.
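The inversion structure described above can be sketched under simplifying assumptions: a single known source location, the 1-D advection-dispersion Green's function for an instantaneous release, and a Tikhonov-regularized solve for the release history. All parameter values below are illustrative, not from the paper.

```python
# Sketch: build a forward matrix from the 1-D ADE Green's function, then
# invert a noisy concentration record for the release history via Tikhonov.
import numpy as np

v, D, x_obs = 0.5, 0.05, 5.0        # velocity, dispersion coeff., observation point
t_rel = np.arange(1, 41) * 0.5      # candidate release times
t_obs = np.arange(1, 61) * 0.5      # observation times

def green(x, tau):
    # 1-D ADE response at x, a time tau after a unit instantaneous release
    tau = np.maximum(tau, 1e-9)
    return np.exp(-(x - v * tau) ** 2 / (4 * D * tau)) / np.sqrt(4 * np.pi * D * tau)

# Forward matrix: column j is the response to a release at t_rel[j]
G = np.array([[green(x_obs, to - tr) if to > tr else 0.0
               for tr in t_rel] for to in t_obs])

rng = np.random.default_rng(3)
s_true = np.exp(-0.5 * ((t_rel - 8.0) / 1.5) ** 2)       # true release history
c = G @ s_true + 1e-3 * rng.standard_normal(len(t_obs))  # noisy observations

lam = 1e-2                                               # Tikhonov parameter
s_hat = np.linalg.solve(G.T @ G + lam**2 * np.eye(len(t_rel)), G.T @ c)
print("max recovery error:", np.abs(s_hat - s_true).max())
```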

6.
A new simulation-optimization methodology is developed for cost-effective sampling network design associated with long-term monitoring of large-scale contaminant plumes. The new methodology is similar in concept to that of Reed et al. (Reed, P.M., Minsker, B.S., Valocchi, A.J., 2000a. Cost-effective long-term groundwater monitoring design using a genetic algorithm and global mass interpolation. Water Resour. Res. 36 (12), 3731-3741) in that an optimization model based on a genetic algorithm is coupled with a flow and transport simulator and a global mass estimator to search for optimal sampling strategies. However, this study introduces the first and second moments of a three-dimensional contaminant plume as new constraints in the optimization formulation and demonstrates the proposed methodology through a real-world application. The new moment constraints significantly increase the accuracy of the plume interpolated from the sampled data relative to the plume simulated by the transport model. The plume interpolation approaches employed in this study are ordinary kriging (OK) and inverse distance weighting (IDW); a sketch of the IDW and moment bookkeeping follows below. The proposed methodology is applied to the monitoring of plume evolution during a pump-and-treat operation at a large field site. It is shown that potential cost savings of up to 65.6% may be achieved without any significant loss of accuracy in mass and moment estimations. The IDW-based interpolation method is computationally more efficient than the OK-based method and results in greater potential cost savings; however, the OK-based method leads to more accurate mass and moment estimations. A comparison of the sampling designs obtained with and without the moment constraints points to their importance in ensuring a robust long-term monitoring design that is both cost-effective and accurate in mass and moment estimations. Additional analysis demonstrates the sensitivity of the optimal sampling design to the various coefficients included in the objective function of the optimization model.
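The sketch below shows the interpolation and moment bookkeeping assumed in the formulation (illustrative, not the authors' code): inverse distance weighting from sampled points to a grid, then the zeroth and first plume moments that the moment constraints compare against the transport-model plume.

```python
# IDW interpolation of a sampled plume onto a grid, plus zeroth/first moments.
# Sample locations and concentrations are synthetic.
import numpy as np

rng = np.random.default_rng(4)
xs = rng.uniform(0, 100, size=(25, 2))                   # sampled locations
cs = np.exp(-np.sum((xs - 50.0) ** 2, axis=1) / 400.0)   # sampled concentrations

gx, gy = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([gx.ravel(), gy.ravel()])

def idw(points, values, targets, power=2.0):
    d = np.linalg.norm(targets[:, None, :] - points[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-6) ** power               # inverse-distance weights
    return (w @ values) / w.sum(axis=1)

c_grid = idw(xs, cs, grid)
mass = c_grid.sum()                                       # zeroth moment (x cell area)
centroid = (c_grid[:, None] * grid).sum(axis=0) / mass    # first moment
print("plume mass proxy:", round(mass, 1), "centroid:", np.round(centroid, 1))
```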

7.
Sources of groundwater contamination are often difficult to characterize, yet characterizing them is essential for effective remediation of polluted groundwater resources. This study demonstrates an application of a linked simulation-optimization methodology to estimate the release history from spatially distributed sources of pollution at an illustrative abandoned mine-site. In linked simulation-optimization approaches, a numerical groundwater flow and transport simulation model is linked to the optimization model. In this study, the topographic and geologic characteristics of the abandoned mine-site were simulated using a three-dimensional (3D) numerical groundwater flow model, and transport of the contaminant in the groundwater was simulated using a 3D transient advective-dispersive contaminant transport model; adsorption and chemical reaction of the contaminant were not considered. Adaptive simulated annealing (ASA) was employed for solving the optimization problem. The optimization algorithm generates candidate solutions corresponding to the various unknown groundwater source characteristics. Each candidate solution is used as input to the numerical groundwater transport simulation model to generate the concentration of pollutant in the study area. This information is used to calculate the objective function value, which the optimization algorithm uses to improve the candidate solution; the process continues until an optimal solution is obtained (see the skeleton below). Optimal solutions obtained in this study show that the linked simulation-optimization methodology is potentially applicable for the characterization of spatially distributed pollutant sources, typically present at abandoned mine-sites.
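A skeleton of the linked simulation-optimization loop is sketched below. It is illustrative only: an analytical plume expression stands in for the 3D flow and transport simulator, and SciPy's dual_annealing stands in for ASA.

```python
# Linked simulation-optimization skeleton: candidate source parameters ->
# forward model -> misfit objective -> annealing update, repeated to optimum.
import numpy as np
from scipy.optimize import dual_annealing

obs_xy = np.array([[30.0, 10.0], [50.0, -5.0], [70.0, 15.0]])
c_obs = np.array([0.8, 0.5, 0.2])            # measured concentrations

def forward(params):
    # Stand-in transport model: steady plume from a source at (x0, y0) with
    # strength m; a real study would call the numerical simulator here.
    x0, y0, m = params
    d2 = np.sum((obs_xy - [x0, y0]) ** 2, axis=1)
    return m * np.exp(-d2 / 500.0)

def objective(params):
    return np.sum((forward(params) - c_obs) ** 2)   # misfit to be minimized

bounds = [(0.0, 100.0), (-50.0, 50.0), (0.1, 10.0)]  # x0, y0, strength
res = dual_annealing(objective, bounds, seed=5, maxiter=300)
print("estimated source:", np.round(res.x, 2), "misfit:", res.fun)
```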

8.
Development of TMDLs (total maximum daily loads) is often facilitated by using the software system BASINS (Better Assessment Science Integrating point and Nonpoint Sources). One of the key elements of BASINS is the watershed model HSPF (Hydrological Simulation Program Fortran) developed by USEPA. Calibration of HSPF is a very tedious and time-consuming task; more than 100 parameters are involved in the calibration process. In the current research, three nonlinear automatic optimization techniques are applied and compared, and an efficient way to calibrate HSPF is suggested. Parameter optimization using local and global optimization techniques for the watershed model is discussed. Approaches to automatic calibration of HSPF are presented using the nonlinear parameter estimator PEST (Parameter Estimation Tool) with its Gauss-Marquardt-Levenberg (GML) method, the Random Multiple Search Method (RSM), and the Shuffled Complex Evolution method developed at the University of Arizona (SCE-UA). Sensitivity analysis was conducted, and the most and least sensitive parameters were identified. It was noted that sensitivity depends on the number of adjustable parameters: as more parameters are optimized simultaneously, a wider range of parameter values can maintain the model in a calibrated state. The impact of GML, RSM, and SCE-UA settings on the ability to find the global minimum of the objective function (OF) was studied, and the best settings are suggested. All three methods proved to be more efficient than manual HSPF calibration. Optimization results obtained by these methods are very similar, although in most cases RSM outperforms GML and SCE-UA outperforms RSM. GML is a very fast method; it can perform as well as SCE-UA when its variables are properly adjusted, the initial guess is good, and insensitive parameters are eliminated from the optimization process. SCE-UA is very robust and convenient to use, and a logical definition of its key variables in most cases leads to the global minimum.
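GML is essentially a Levenberg-Marquardt scheme. As a hedged sketch of that style of calibration, the snippet below fits a toy two-parameter storage model to synthetic flows with SciPy's Levenberg-Marquardt solver; real PEST/HSPF runs wrap the model as an external executable and involve far more parameters.

```python
# Toy Levenberg-Marquardt calibration in the spirit of PEST's GML method.
# The linear-reservoir "model" and data are invented for illustration.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)
rain = rng.gamma(2.0, 2.0, size=100)           # synthetic rainfall series

def simulate(k, c):
    # toy linear reservoir: storage gains rain, drains at rate k; flow = c*k*S
    s, flows = 0.0, []
    for r in rain:
        s += r - k * s
        flows.append(c * k * s)
    return np.array(flows)

q_obs = simulate(0.3, 0.8) + 0.05 * rng.standard_normal(100)

def residuals(p):
    return simulate(p[0], p[1]) - q_obs        # observed-minus-simulated flows

fit = least_squares(residuals, x0=[0.1, 0.5], method="lm")
print("calibrated (k, c):", np.round(fit.x, 3))
```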

9.
A quantitative methodology is described for the field-scale performance assessment of natural attenuation using plume-scale electron and carbon balances. This provides a practical framework for the calculation of global mass balances for contaminant plumes, using mass inputs from the plume source, background groundwater, and plume residuals in a simplified box model. Biodegradation processes and reactions included in the analysis are identified from the electron acceptors, electron donors, and degradation products present in these inputs. Parameter values used in the model are obtained from data acquired during typical site investigation and groundwater monitoring studies for natural attenuation schemes. The approach is evaluated for a UK Permo-Triassic Sandstone aquifer contaminated with a plume of phenolic compounds. Uncertainty in the model predictions and sensitivity to parameter values were assessed by probabilistic modelling using Monte Carlo methods. Sensitivity analyses were compared for different input parameter probability distributions and a base case using fixed parameter values, with an identical conceptual model and data set. Results show that consumption of oxidants by biodegradation is approximately balanced by the production of CH4 and total dissolved inorganic carbon (TDIC), which is conserved in the plume. Under this condition, either the plume electron or carbon balance can be used to determine contaminant mass loss, which is equivalent to only 4% of the estimated source term; this corresponds to a first-order, plume-averaged half-life of more than 800 years. The electron balance is particularly sensitive to uncertainty in the source term and dispersive inputs. Reliable historical information on contaminant spillages and detailed site investigation are necessary to characterise the source term accurately, and the dispersive influx is sensitive to variability in the plume mixing-zone width. Consumption of aqueous oxidants greatly exceeds that of mineral oxidants in the plume, but the electron acceptor supply is insufficient to meet the electron donor demand and the plume will grow. The aquifer's potential for degradation of these contaminants is limited by high contaminant concentrations and the supply of bioavailable electron acceptors; natural attenuation will increase only after increased transport and dilution.

10.
Source term estimation algorithms compute unknown atmospheric transport and dispersion modeling variables from concentration observations made by sensors in the field. Insufficient spatial and temporal resolution in the meteorological data, as well as inherent uncertainty in the wind field data, makes source term estimation and the prediction of subsequent transport and dispersion extremely difficult. This work addresses the question: how many sensors are necessary in order to successfully estimate the source term and the meteorological variables required for atmospheric transport and dispersion modeling? The source term estimation system presented here uses a robust optimization technique, a genetic algorithm (GA), to find the combination of source location, source height, source strength, surface wind direction, surface wind speed, and time of release that produces a concentration field that best matches the sensor observations. The approach is validated using the Gaussian puff as the dispersion model in identical twin numerical experiments. The limits of the system are tested by incorporating additive and multiplicative noise into the synthetic data. The minimum requirements for data quantity and quality are determined by an extensive grid sensitivity analysis. Finally, a metric is developed for quantifying the minimum number of sensors necessary to accurately estimate the source term and to obtain the relevant wind information.
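A compact GA sketch of the source-term estimation idea is given below. It is a stand-in, not the authors' system: a simple isotropic Gaussian plume replaces the Gaussian puff model, and only (x0, y0, strength) are evolved rather than the full six-variable source term.

```python
# Compact GA: evolve candidate (x0, y0, q) triples so a stand-in plume model
# matches synthetic sensor readings. Truncation selection + Gaussian mutation.
import numpy as np

rng = np.random.default_rng(7)
sensors = rng.uniform(0, 100, size=(8, 2))

def plume(src, pts):
    x0, y0, q = src
    d2 = np.sum((pts - [x0, y0]) ** 2, axis=1)
    return q * np.exp(-d2 / 800.0)

truth = np.array([40.0, 60.0, 5.0])
readings = plume(truth, sensors) + 0.05 * rng.standard_normal(len(sensors))

def fitness(src):
    return -np.sum((plume(src, sensors) - readings) ** 2)

lo, hi = np.array([0.0, 0.0, 0.1]), np.array([100.0, 100.0, 10.0])
pop = rng.uniform(lo, hi, size=(50, 3))
for gen in range(200):
    f = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(f)[-10:]]                   # keep 10 fittest
    children = parents[rng.integers(0, 10, size=40)] \
        + rng.normal(0, 2.0, size=(40, 3))               # Gaussian mutation
    pop = np.clip(np.vstack([parents, children]), lo, hi)

best = pop[np.argmax([fitness(p) for p in pop])]
print("best candidate (x0, y0, q):", np.round(best, 2))
```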

11.
Site uncertainties significantly influence groundwater flow and contaminant transport predictions. Aleatoric and epistemic uncertainties are both identified in site characterization and represented using appropriate uncertainty theories. When one theory best represents one parameter while a different theory is more suitable for another, aleatoric (random) and epistemic (nonrandom) uncertainties must be propagated jointly. The computational challenges of this joint propagation through groundwater flow and contaminant transport models are significant. A fuzzy-stochastic nonlinear model was developed in this paper to incorporate these two types of uncertain site information and reduce the computational cost. The results show that (1) the computational cost of the nonlinear model is reduced compared with that of the sparse grid algorithm and Monte Carlo methods; (2) the uncertainty in hydraulic conductivity (K) influences the water head and solute distribution at the observation wells more strongly than other uncertain parameters, such as the storage coefficient and the distribution coefficient (Kd); and (3) the combination of multiple uncertain parameters substantially affects the simulation results. Neglecting site uncertainties may lead to unrealistic predictions.

12.
In this work, the performance and theoretical background of two of the most commonly used receptor modelling methods in aerosol science, principal components analysis (PCA) and positive matrix factorization (PMF), as well as multivariate curve resolution by alternating least squares (MCR-ALS) and by weighted alternating least squares (MCR-WALS), are examined. The performance of the four methods was initially evaluated under standard operational conditions, and modifications regarding data pre-treatment were then included. The methods were applied using raw and scaled data, with and without uncertainty estimates. Strong similarities were found among the sources identified by PMF and MCR-WALS (weighted models), whereas discrepancies were obtained with MCR-ALS (unweighted model). Weighting of the input data by means of uncertainty estimates was found to be essential to obtain robust and accurate factor identification. The use of scaled (as opposed to raw) data highlighted the contribution of trace elements to the compositional profiles, which was key to the correct interpretation of the nature of the sources. Our results validate the performance of MCR-WALS for aerosol pollution studies.
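All four receptor models factor a samples-by-species matrix X into nonnegative source contributions G and profiles F, X ≈ GF. As a hedged stand-in, the sketch below uses scikit-learn's unweighted NMF on synthetic data; note that the study's central finding, that weighting by measurement uncertainty matters, is exactly what plain NMF omits.

```python
# Receptor-model-style factorization via (unweighted) NMF on synthetic data.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(8)
n_samples, n_species, n_sources = 200, 15, 3
G_true = rng.gamma(2.0, 1.0, size=(n_samples, n_sources))   # contributions
F_true = rng.gamma(2.0, 1.0, size=(n_sources, n_species))   # source profiles
X = G_true @ F_true + 0.05 * rng.standard_normal((n_samples, n_species))

model = NMF(n_components=n_sources, init="nndsvda", max_iter=500)
G = model.fit_transform(np.clip(X, 0, None))    # estimated contributions
F = model.components_                           # estimated source profiles
print("reconstruction error:", round(model.reconstruction_err_, 3))
```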

13.
A growing interest in security and occupant exposure to contaminants has revealed a need for fast and reliable identification of contaminant sources during incidental situations. To determine potential contaminant source positions in outdoor environments, current state-of-the-art modeling methods use computational fluid dynamics simulations on parallel processors. In indoor environments, current tools match accidental contaminant distributions with cases from precomputed databases of possible concentration distributions. These methods require intensive computations in pre- and postprocessing. On the other hand, neural networks have emerged as a tool for rapid concentration forecasting of outdoor environmental contaminants such as nitrogen oxides or sulfur dioxide. All of these modeling methods depend on the type of sensors used for real-time measurements of contaminant concentrations; a review of existing sensor technologies revealed that no perfect sensor exists, but the intensity of work in this area promises better results in the near future. The main goal of the presented research study was to extend neural network modeling from outdoor to indoor identification of source positions, making this technology applicable to building indoor environments. The developed neural network Locator of Contaminant Sources was also used to optimize the number and placement of contaminant concentration sensors for real-time prediction of indoor contaminant source positions; such prediction should take place within seconds of receiving real-time contaminant concentration sensor data. For the purpose of neural network training, a multizone program provided distributions of contaminant concentrations for known source positions throughout a test building. Trained networks produced an output indicating contaminant source positions based on measured concentrations in different building zones (a minimal stand-in is sketched below). A validation case based on a real building layout and experimental data demonstrated the ability of this method to identify contaminant source positions. Future research will focus on integration with real sensor networks and on model improvements for much more complicated contamination scenarios.
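The sketch below is an illustrative stand-in for the neural-network locator: a small MLP is trained to map a vector of zone concentrations to the zone containing the source. The training pairs here are synthetic; in the study they came from a multizone airflow and contaminant program.

```python
# Train an MLP classifier: zone-concentration vector -> source-zone label.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(9)
n_zones, n_cases = 8, 2000
mixing = rng.uniform(0.05, 0.3, size=(n_zones, n_zones))  # inter-zone transport
np.fill_diagonal(mixing, 1.0)                             # source zone reads highest

labels = rng.integers(0, n_zones, size=n_cases)           # true source zones
X = mixing[labels] + 0.02 * rng.standard_normal((n_cases, n_zones))  # noisy sensors

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X[:1600], labels[:1600])
print("holdout accuracy:", clf.score(X[1600:], labels[1600:]))
```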

14.
This paper presents the development of a hybrid bi-level programming approach for supporting multi-stage groundwater remediation design. To investigate remediation performance, a subsurface model was employed to simulate contaminant transport. A mixed-integer nonlinear optimization model was formulated in order to evaluate different remediation strategies. Multivariate relationships based on a filtered stepwise clustering analysis were developed to facilitate the incorporation of the simulation model within a nonlinear optimization framework. By using the developed statistical relationships, the predictions needed for calculating the objective function value can be obtained quickly during the search process. The main advantage of the developed approach is that the remediation strategy can be adjusted from stage to stage, which makes the optimization more realistic. The proposed approach was examined through its application to a real-world aquifer remediation case in western Canada. The optimization results based on this application can help decision makers comprehensively evaluate remediation performance.

15.
Field-scale characterisations of contaminant plumes in groundwater, as well as source zone delineations, are associated with uncertainties that can be considerable. A major source of uncertainty in environmental datasets is the variability of sampling results, a direct consequence of the heterogeneity of environmental matrices. We develop a methodology for quantifying uncertainties in field-scale mass flow and average concentration estimates using integral pumping tests (IPTs), in which the contaminant concentration is measured as a function of time in a pumping well. This procedure increases the sampling volume and reduces the effect of small-scale variability that may bias point-scale measurements. In particular, using IPTs, the interpolation uncertainty of conventional point-scale measurements is transformed into a quantifiable uncertainty related to the (unknown) plume position relative to the pumping well. We show that this plume position uncertainty generally influenced the predicted mass flows and average concentrations (of acenaphthene, benzene, and CHCs) to a greater extent than a boundary condition uncertainty related to the local water balance, considering 19 control planes at a highly heterogeneous industrial site in southwest Germany. Furthermore, large (order of magnitude) uncertainties occurred only if the conditions were strongly heterogeneous in the nearest vicinity of the well. We also develop a consistent methodology for assessing the combined effect of uncertainty in hydraulic conditions and uncertainty in reactive transport parameters when delineating both contaminant source zones and zones absent of sources, based on (downgradient) IPTs.

16.
Assessment of chemical contamination at large industrial complexes with long and sometimes unknown histories of operation represents a challenging environmental problem. The spatial and temporal complexity of the contamination may be due to changes in production processes, differences in chemical transport, and the physical heterogeneity of the soil and aquifer materials. Traditional mapping techniques are of limited value for sites where dozens of chemicals with diverse transport characteristics may be scattered over large spatial areas without documentation of disposal histories. In this context, a site with a long and largely undocumented disposal history of shallow groundwater contamination is examined using principal component analysis (PCA). The dominant chemical groups and chemical "modes" at the site were identified. PCA results indicate that five primary and three transition chemical groups can be identified in the space of the first three eigenvectors of the correlation matrix, which account for 61% of the total variance of the data. These groups represent a significant reduction in the dimension of the original data (116 chemicals). It is shown that each group represents a class of chemicals with similar chemo-dynamic properties and/or environmental response. Finally, the groups are mapped back onto the site map to delineate contaminant source areas for each class of compounds. The approach serves as a preliminary step in subsurface characterization and as a data reduction strategy for source identification, subsurface modeling, and remediation planning.
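A minimal PCA sketch of this workflow follows, with synthetic data in place of the site's 116-chemical dataset: standardize (so PCA operates on the correlation matrix), extract the leading components, and check how much variance the first three explain.

```python
# PCA on standardized (correlation-matrix) data, mimicking the described workflow.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(10)
n_samples, n_chems = 120, 30
latent = rng.standard_normal((n_samples, 3))            # three hidden "modes"
loadings = rng.standard_normal((3, n_chems))
X = latent @ loadings + 0.5 * rng.standard_normal((n_samples, n_chems))

Z = StandardScaler().fit_transform(X)      # standardize -> correlation-matrix PCA
pca = PCA(n_components=3).fit(Z)
print("variance explained by first 3 PCs:",
      round(pca.explained_variance_ratio_.sum(), 2))
scores = pca.transform(Z)                  # sample coordinates, mappable to the site
```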

17.
In this field study, two approaches to assessing contaminant mass discharge were compared: the sampling of multilevel wells (MLS) and the integral groundwater investigation (or integral pumping test, IPT), which makes use of the concentration-time series obtained from pumping wells. The MLS approach used concentrations, hydraulic conductivity, and gradient rather than direct chemical flux measurements, while the IPT made use of a simplified analytical inversion. The two approaches were applied at a control plane located approximately 40 m downgradient of a gasoline source at Canadian Forces Base Borden, Ontario, Canada, and yielded similar estimates of the mass discharging across the control plane. The sources of uncertainty in the mass discharge in each approach were evaluated, including the uncertainties inherent in the underlying assumptions and procedures. The maximum uncertainty of the MLS method was about 67%, and about 28% for the IPT method, in this specific field situation. For the MLS method, the largest relative uncertainty (62%) was attributed to the limited sampling density (0.63 points/m²), through a novel comparison with a denser sampling grid nearby; a five-fold increase in sampling grid density would have been required to reduce the overall relative uncertainty of the MLS method to about the same level as that of the IPT method. Uncertainty in the complete coverage of the control plane provided the largest relative uncertainty (37%) in the IPT method. While the MLS and IPT methods are attractive tools for assessing contaminant mass discharge, the large relative uncertainty found for either method in this reasonably well-monitored and simple aquifer suggests that results in more complex plumes in more heterogeneous aquifers should be viewed with caution.
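The MLS side of the comparison reduces to simple bookkeeping: mass discharge is the sum over control-plane cells of concentration times Darcy flux times cell area. The values below are illustrative only, not from the Borden study.

```python
# MLS-style mass-discharge bookkeeping over a discretized control plane.
import numpy as np

rng = np.random.default_rng(11)
n_cells = 24
area = np.full(n_cells, 1.5)                                   # cell areas, m^2
K = rng.lognormal(mean=np.log(1e-4), sigma=0.8, size=n_cells)  # conductivity, m/s
grad = 0.005                                                   # hydraulic gradient
conc = rng.lognormal(mean=np.log(10.0), sigma=1.0, size=n_cells)  # conc., g/m^3

q = K * grad                              # Darcy flux per cell, m/s
discharge = np.sum(conc * q * area)       # mass discharge, g/s
print("mass discharge:", round(discharge * 86400, 3), "g/day")
```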

18.
The effectiveness of removal of nonaqueous phase liquids (NAPLs) from the entrapment source zone of the subsurface has been limited by soil heterogeneity and the inability to locate all entrapped sources. The goal of this study was to demonstrate the uncertainty in the degree of source removal associated with aquifer heterogeneity. In this demonstration, source zone NAPL removal using surfactant-enhanced dissolution was considered. Model components that simulate the processes of natural aqueous-phase dissolution and surfactant-enhanced dissolution were incorporated into an existing contaminant transport code. The dissolution modules of the simulator used previously developed Gilland-Sherwood-type phenomenological models of NAPL dissolution to estimate mass transfer coefficients that are upscalable to the multidimensional flow conditions found at field sites. The model was used to simulate mass removal from 10 NAPL entrapment zone configurations based on previously conducted two-dimensional tank experiments. These entrapment zones represent the NAPL distribution in spatially correlated random fields of aquifer hydraulic conductivity. The numerical simulations, representing two-dimensional conditions, show that the effectiveness of mass removal depends on the aquifer heterogeneity that controls both NAPL entrapment and the delivery of the surfactant to the locations of entrapped NAPLs. Flow bypassing resulting from heterogeneity, together with the reduction of relative permeability due to NAPL entrapment, reduces the delivery efficiency of the surfactant, thus prolonging the remediation time needed to achieve desired end-point NAPL saturations and downstream dissolved concentrations. In some extreme cases, the injected surfactant completely bypassed the NAPL source zones. It was also found that mass depletion rates for different NAPL source configurations vary significantly. The study shows that heterogeneity results in uncertainties in the mass removal and in the achievable end-points that are directly related to dissolved contaminant plume development downstream of the NAPL entrapment zone.

19.
Engineering projects involving hydrogeology are faced with uncertainties because the earth is heterogeneous and typical data sets are fragmented and disparate. In theory, predictions provided by computer simulations using calibrated models constrained by geological boundaries provide answers to support management decisions, and geostatistical methods quantify safety margins. In practice, current methods are limited by the data types and models that can be included, by computational demands, or by simplifying assumptions. Data Fusion Modeling (DFM) removes many of these limitations and is capable of providing data integration and model calibration with quantified uncertainty for a variety of hydrological, geological, and geophysical data types and models. The benefits of DFM for waste management, water supply, and geotechnical applications are savings in time and cost, through the ability to produce visual models that fill in missing data and predictive numerical models that aid management optimization. DFM can update field-scale models in real time using PC or workstation systems and is ideally suited for parallel processing implementation. DFM is a spatial state estimation and system identification methodology that uses three sources of information: measured data, physical laws, and statistical models for uncertainty in spatial heterogeneities. What is new in DFM is the solution of the causality problem in data-assimilation Kalman filter methods to achieve computational practicality. The Kalman filter is generalized by introducing the information filter methods due to Bierman, coupled with a Markov random field representation for spatial variation; a Bayesian penalty function is implemented with Gauss-Newton methods. This leads to a computational problem similar to the numerical simulation of the partial differential equations (PDEs) of groundwater; in fact, extensions of PDE solver ideas to break down computations over space form the computational heart of DFM. State estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. Further, a system identification theory has been derived based on statistical likelihood principles: a maximum likelihood theory is provided to estimate statistical parameters, such as the Markov model parameters that determine the geostatistical variogram. A field-scale application of DFM at the DOE Savannah River Site is presented and compared with manual calibration. DFM calibration runs converge in less than 1 h on a Pentium Pro PC for a 3D model with more than 15,000 nodes, and run time is approximately linear in the number of nodes. Furthermore, conditional simulation is used to quantify the statistical variability in model predictions such as contaminant breakthrough curves.
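A hedged, textbook-level sketch of the information-filter update that DFM builds on (not the DFM code itself) is shown below: a Markov-random-field-style smoothness prior is fused with sparse point measurements in information form, Y = P⁻¹, y = Yx.

```python
# Information-filter fusion of a spatial prior with sparse measurements.
# The 1-D transect, prior weights, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(12)
n = 50                                          # nodes along a 1-D transect
# MRF-style prior: penalize differences between neighboring nodes
D = np.diff(np.eye(n), axis=0)                  # first-difference operator
Y_prior = D.T @ D * 4.0 + 1e-3 * np.eye(n)      # prior information matrix
y_prior = np.zeros(n)                           # prior information vector

# Sparse point measurements z = H x + noise (e.g., log-K data at 5 nodes)
idx = [5, 15, 25, 35, 45]
H = np.zeros((len(idx), n))
H[range(len(idx)), idx] = 1.0
z = np.array([0.5, 1.0, 0.2, -0.4, 0.1])
R_inv = np.eye(len(idx)) / 0.01                 # measurement precision

Y_post = Y_prior + H.T @ R_inv @ H              # information-filter update
y_post = y_prior + H.T @ R_inv @ z
x_hat = np.linalg.solve(Y_post, y_post)         # state estimate
var = np.diag(np.linalg.inv(Y_post))            # posterior variances
print("estimate at node 25:", round(x_hat[25], 3),
      "+/-", round(var[25] ** 0.5, 3))
```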
