Similar Documents
1.
A partially probabilistic blood lead prediction model has been developed, based on the US Environmental Protection Agency's integrated exposure-uptake-biokinetic blood lead model (IEUBK model). This study translated the IEUBK model into a spreadsheet format. The uptake submodel incorporates uncertainty distributions for exposure and bioavailability parameters. The biokinetic submodel is duplicated with a table incorporating partitioning and decay of lead levels in the body. As a case study, the probabilistic model is applied to a lead exposure scenario involving a former smelter site in Sandy, Utah. The probabilistic model produces less biased estimates of means and standard deviations than the deterministic model. Parameter uncertainty is propagated through the model by Monte Carlo simulation, which makes sensitivity analysis possible and allows the driving variables to be identified.
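
Below is a minimal sketch of this uncertainty-propagation step, assuming illustrative lognormal and beta input distributions (placeholders, not the IEUBK parameterization): sample the uncertain inputs, push them through a simplified uptake calculation, and rank-correlate inputs with the output to identify the driving variables.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical uncertainty distributions -- placeholders, not the IEUBK defaults
soil_pb = rng.lognormal(mean=np.log(400.0), sigma=0.5, size=n)  # soil lead, ug/g
ingestion = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=n)  # soil intake, g/day
bioavail = rng.beta(a=6.0, b=14.0, size=n)                      # absorbed fraction

uptake = soil_pb * ingestion * bioavail  # lead uptake, ug/day

print(f"mean = {uptake.mean():.1f} ug/day, sd = {uptake.std():.1f}")
print(f"95th percentile = {np.percentile(uptake, 95):.1f} ug/day")

# Sensitivity analysis: rank-correlate each input with the output to find
# the driving variables
for name, x in [("soil_pb", soil_pb), ("ingestion", ingestion),
                ("bioavail", bioavail)]:
    rho, _ = spearmanr(x, uptake)
    print(f"{name:9s} Spearman rho = {rho:.2f}")
```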

2.
Cost-effective hotspot identification is an important issue in hazardous waste site characterization and evaluation. Composite sampling techniques are known to be cost effective when the cost of measurement is substantially higher than the cost of sampling. Although compositing incurs no loss of information on the means, information on individual sample values is lost due to compositing. In particular, if the interest is in identifying the largest individual sample value, the composite sampling techniques are not able to do so. Under certain assumptions, it may be possible to satisfactorily predict individual sample values using the composite sample data, but it is not generally possible to identify the largest individual sample value. In this paper, we propose two methods of identifying the largest individual sample value with some additional measurement effort. Both methods are modifications of the simple sweep-out method proposed earlier. Since analytical results do not seem to be feasible, performance of the proposed methods is assessed via simulation. The simulation results show that both the proposed methods, namely the locally sequential sweep-out and the globally sequential sweep-out, are better than the simple sweep-out method. Prepared with partial support from the Statistical Analysis and Computing Branch, Environmental Statistics and Information Division, Office of Policy, Planning, and Evaluation, United States Environmental Protection Agency, Washington, DC under a Cooperative Agreement Number CR-821531. The contents have not been subjected to Agency review and therefore do not necessarily reflect the views of the Agency and no official endorsement should be inferred.
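
For orientation, here is a rough reconstruction of the simple sweep-out idea, under the stated assumption of nonnegative sample values (so no individual value can exceed its composite's total); the paper's locally and globally sequential variants refine the order in which the additional measurements are taken. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
k = 4                                   # individual samples per composite
values = rng.lognormal(3.0, 1.0, 80)    # hypothetical individual sample values
composites = values.reshape(-1, k)      # composite j pools samples 4j .. 4j+3

comp_mean = composites.mean(axis=1)     # what the lab reports per composite
bounds = k * comp_mean                  # an individual value cannot exceed the
                                        # composite total (values nonnegative)
active = np.ones(len(comp_mean), dtype=bool)
best, n_extra = -np.inf, 0

# Sweep out composites until no unmeasured composite could conceal a value
# larger than the current best individual measurement
while active.any() and bounds[active].max() > best:
    j = np.flatnonzero(active)[bounds[active].argmax()]
    best = max(best, composites[j].max())  # measure this composite's samples
    n_extra += k
    active[j] = False

print(f"true max = {values.max():.2f}, recovered max = {best:.2f}")
print(f"extra individual measurements: {n_extra} of {values.size}")
```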

3.
Ranked set sampling: an annotated bibliography
The paper provides an up-to-date annotated bibliography of the literature on ranked set sampling. The bibliography includes all pertinent papers known to the authors, and is intended to cover applications as well as theoretical developments. The annotations are arranged in chronological order and are intended to be sufficiently complete and detailed that a reading from beginning to end would provide a statistically mature reader with a state-of-the-art survey of ranked set sampling, including historical development, current status, and future research directions and applications. A final section of the paper gives a listing of all annotated papers, arranged in alphabetical order by author. This paper was prepared with partial support from the United States Environmental Protection Agency under a Cooperative Agreement Number CR-821531. The contents have not been subject to Agency review and therefore do not necessarily reflect the views or policies of the Agency and no official endorsement should be inferred.

4.
Quantifying a composite sample results in a loss of information on the values of the constituent individual samples. As a consequence of this information loss, it is impossible to identify individual samples having large values based on composite sample measurements alone. However, under certain circumstances, it is possible to identify individual samples having large values without exhaustively measuring all individual samples. In addition to composite sample measurements, a few additional measurements on carefully selected individual samples are sufficient to identify the individual samples having large values. In this paper, we present a statistical method to recover extremely large individual sample values using composite sample measurements. An application to site characterization is used to illustrate the method. The paper has been prepared with partial support from the United States Environmental Protection Agency under Cooperative Agreement Number CR-815273. The contents have not been subject to Agency review and therefore do not necessarily reflect the views or policies of the Agency and no official endorsement should be inferred.

5.
Cleanup standards at hazardous waste sites include (i) numeric standards (often risk-based), (ii) background standards in which the remediated site is compared with data from a supposedly clean region, and (iii) interim standards in which the remediated site is compared with preremediation data from the same site. The latter are especially appropriate for verifying progress when an innovative, but unproven, technology is used for remediation. Standards of type (i) require one-sample statistical tests, while those of type (ii) and type (iii) call for two-sample tests. This paper considers two-sample tests with an emphasis upon the type (iii) scenario. Both parametric (likelihood ratio) and nonparametric (linear rank) protocols are examined. The methods are illustrated with preremediation data from a site on the National Priorities List. The results indicate that nonparametric procedures can be quite competitive (in terms of power) with distributional modelling provided a near-optimal rank test is selected. Suggestions are given for identifying such rank tests. The results also confirm the importance of sound baseline sampling; no amount of post-remediation sampling can overcome baseline deficiencies. This paper has been prepared with partial support from the United States Environmental Protection Agency under a Cooperative Agreement Number CR-815273. The contents have not been subject to Agency review and therefore do not necessarily reflect the views or policies of the Agency and no official endorsement should be inferred.
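
As a small illustration of the type (iii) comparison with a linear rank test, here is a sketch using synthetic lognormal concentrations; the Wilcoxon rank-sum (Mann-Whitney) test is one member of the rank-test family discussed, not necessarily the near-optimal choice the paper identifies.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical concentrations (mg/kg): remediation roughly halves the median
pre  = rng.lognormal(mean=np.log(120.0), sigma=0.8, size=40)  # baseline
post = rng.lognormal(mean=np.log(60.0),  sigma=0.8, size=40)  # post-remediation

# One-sided rank test: are post-remediation values stochastically smaller?
stat, p = mannwhitneyu(post, pre, alternative="less")
print(f"Wilcoxon rank-sum (Mann-Whitney U): U = {stat:.0f}, p = {p:.4f}")
```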

6.
In phased sampling, data obtained in one phase are used to design the sampling network for the next phase. Given N total observations, anywhere from 1 to N phases are possible. Experiments were conducted with one-phase, two-phase, and N-phase design algorithms on surrogate models of sites with contaminated soils. The sampling objective was to identify, through interpolation, subunits of the site that required remediation. The cost-effectiveness of alternate methods was compared by using a loss function. More phases are better, but in economic terms the improvement is marginal. The optimal total number of samples is essentially independent of the number of phases. For two-phase designs, allocating 75% of the samples to the first phase is near optimal; allocating 20% or less is actually counterproductive. The U.S. Environmental Protection Agency (EPA), through its Office of Research and Development (ORD), partially funded and collaborated in the research described here. It has been subjected to the Agency's peer review and has been approved as an EPA publication. The U.S. Government has a non-exclusive, royalty-free licence in and to any copyright covering this article.
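
The loss-function comparison can be sketched as follows, under invented assumptions (a synthetic hotspot field, nearest-neighbour interpolation, and a 5:1 cost ratio of missed contamination to needless cleanup); none of the numbers come from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 20 x 20 site with one hotspot plus background noise (mg/kg)
x, y = np.meshgrid(np.arange(20), np.arange(20), indexing="ij")
true = 500 * np.exp(-((x - 6) ** 2 + (y - 13) ** 2) / 18) + rng.gamma(2, 10, (20, 20))
action_level = 150.0

def interpolate(pts):
    """Nearest-neighbour interpolation from sampled cells to the whole grid."""
    d = (x[..., None] - pts[:, 0]) ** 2 + (y[..., None] - pts[:, 1]) ** 2
    return true[pts[:, 0], pts[:, 1]][d.argmin(axis=-1)]

def loss(est):
    fp = ((est > action_level) & (true <= action_level)).sum()  # needless cleanup
    fn = ((est <= action_level) & (true > action_level)).sum()  # missed hot subunit
    return fp + 5 * fn  # assumed cost ratio: a miss costs 5x a needless cleanup

n = 40
# One-phase design: all n samples placed at random
pts1 = np.column_stack(np.unravel_index(rng.choice(400, n, replace=False), (20, 20)))

# Two-phase design: 75% random, then 25% placed next to the hottest phase-1 hits
n1 = int(0.75 * n)
ptsA = pts1[:n1]
hot = ptsA[true[ptsA[:, 0], ptsA[:, 1]].argsort()[-(n - n1):]]
ptsB = np.clip(hot + rng.integers(-1, 2, size=hot.shape), 0, 19)

print(f"one-phase loss = {loss(interpolate(pts1)):.0f}")
print(f"two-phase loss = {loss(interpolate(np.vstack([ptsA, ptsB]))):.0f}")
```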

7.
The U.S. Environmental Protection Agency uses environmental models to inform rulemaking and policy decisions at multiple spatial and temporal scales. As decision-making has moved towards integrated thinking and assessment (e.g. media, site, region, services), the increasing complexity and interdisciplinary nature of modern environmental problems have necessitated a new generation of integrated modeling technologies. Environmental modelers are now faced with the challenge of determining how data from manifold sources, types of process-based and empirical models, and hardware/software computing infrastructure can be reliably integrated and applied to protect human health and the environment. In this study, we demonstrate an Integrated Modeling Framework that allows us to predict the state of freshwater ecosystem services within and across the Albemarle-Pamlico Watershed, North Carolina and Virginia (USA). The Framework consists of three facilitating technologies: Data for Environmental Modeling automates the collection and standardization of input data; the Framework for Risk Assessment of Multimedia Environmental Systems manages the flow of information between linked models; and the Supercomputer for Model Uncertainty and Sensitivity Evaluation is a hardware and software parallel-computing interface with pre/post-processing analysis tools, including parameter estimation, uncertainty and sensitivity analysis. In this application, five environmental models are linked within the Framework to provide multimedia simulation capabilities: the Soil Water Assessment Tool predicts watershed runoff; the Watershed Mercury Model simulates mercury runoff and loading to streams; the Water Quality Analysis and Simulation Program predicts water quality within the stream channel; the Habitat Suitability Index model predicts physicochemical habitat quality for individual fish species; and the Bioaccumulation and Aquatic System Simulator predicts fish growth and production, as well as exposure and bioaccumulation of toxic substances (e.g., mercury). Using this Framework, we present a baseline assessment of two freshwater ecosystem services, water quality and fisheries resources, in headwater streams throughout the Albemarle-Pamlico. A stratified random sample of 50 headwater streams is used to draw inferences about the target population of headwater streams across the region. Input data are developed for a twenty-year baseline simulation in each sampled stream using current land use and climate conditions. Monte Carlo sampling (n = 100 iterations per stream) is also used to demonstrate some of the Framework's experimental design and data analysis features. To evaluate model performance and accuracy, we compare initial (i.e., uncalibrated) model predictions (water temperature, dissolved oxygen, fish density, and methylmercury concentration within fish tissue) against empirical field data. Finally, we 'roll up' the results from individual streams to assess freshwater ecosystem services at the regional scale.
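
To make the linkage pattern concrete, here is a minimal sketch of a model chain in which each component reads from and extends a shared state, loosely mimicking how such a framework passes information between linked models; the functions and coefficients are invented placeholders, not the actual SWAT/WMM/WASP/HSI/BASS science.

```python
from typing import Callable, Dict, List

State = Dict[str, float]
Model = Callable[[State], State]

def link(models: List[Model]) -> Model:
    """Compose models so each reads, then extends, a shared state dict."""
    def chain(state: State) -> State:
        for model in models:
            state = model(state)
        return state
    return chain

# Toy stand-ins for the five linked models (placeholder equations only)
def swat(s: State) -> State: s["runoff_mm"] = 0.35 * s["precip_mm"]; return s
def wmm(s: State) -> State:  s["hg_ug_per_L"] = 0.004 * s["runoff_mm"]; return s
def wasp(s: State) -> State: s["do_mg_per_L"] = max(4.0, 9.0 - 0.5 * s["hg_ug_per_L"]); return s
def hsi(s: State) -> State:  s["habitat_index"] = min(1.0, s["do_mg_per_L"] / 8.0); return s
def bass(s: State) -> State: s["fish_hg_mg_per_kg"] = 1.2 * s["hg_ug_per_L"]; return s

pipeline = link([swat, wmm, wasp, hsi, bass])
print(pipeline({"precip_mm": 1100.0}))
```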

8.
This work discusses some concepts pertaining to the theory and practice of environmental modelling in view of the results of several model validation exercises performed by the group “Model validation for radionuclide transport in the system watershed-river and in estuaries” of project EMRAS (Environmental Modelling for Radiation Safety), supported by the IAEA (International Atomic Energy Agency). The analyses performed here concern models applied to real scenarios of environmental contamination. In particular, the reasons for the uncertainty of the models and the EBUA (empirically based uncertainty analysis) methodology are discussed. The foundations of the multi-model approach in environmental modelling are presented and motivated. An application of EBUA to the results of a multi-model exercise concerning three models aimed at predicting the wash-off of radionuclide deposits from the Pripyat floodplain (Ukraine) is described. The multi-model approach is ultimately a tool for uncertainty analysis, and EBUA offers the opportunity to evaluate the uncertainty levels of predictions in multi-model applications.
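
As a first-cut illustration of the multi-model idea (not the EBUA methodology itself, which evaluates uncertainty empirically against observations), the spread among model predictions can serve as a crude uncertainty level; the three toy decay curves below are invented.

```python
import numpy as np

t = np.arange(0, 30)  # days after deposition

# Hypothetical predictions from three wash-off models (toy exponentials,
# not the actual EMRAS models)
preds = np.vstack([
    100.0 * np.exp(-0.05 * t),
    105.0 * np.exp(-0.08 * t),
    90.0  * np.exp(-0.04 * t),
])

mean = preds.mean(axis=0)
lo, hi = preds.min(axis=0), preds.max(axis=0)
rel_spread = (hi - lo) / mean  # inter-model spread as a crude uncertainty level

print(f"day 10: mean = {mean[10]:.1f}, range = [{lo[10]:.1f}, {hi[10]:.1f}], "
      f"relative spread = {rel_spread[10]:.2f}")
```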

9.
The US Environmental Protection Agency's Office of Research and Development has initiated the Environmental Monitoring and Assessment Program (EMAP) to monitor status and trends in the condition of the nation's near coastal waters, forests, wetlands, agro-ecosystems, surface waters, deserts and rangelands. The programme is also intended to evaluate the effectiveness of Agency policies at protecting ecological resources occurring in these systems. Monitoring data collected for all ecosystems will be integrated for regional and national status and trends assessments. The near coastal component of EMAP consists of estuaries, coastal waters, and the Great Lakes. Near coastal ecosystems have been regionalized and classified, and an integrated sampling strategy has been developed. EPA and NOAA have agreed to coordinate and, to the extent possible, integrate the near coastal component of EMAP with the NOAA National Status and Trends Program. A demonstration project was conducted in estuaries of the mid-Atlantic region (Chesapeake Bay to Cape Cod) in the summer of 1990. In 1991, monitoring continued in mid-Atlantic estuaries and was initiated in estuaries of a portion of the Gulf of Mexico. Preliminary results indicate: there are no insurmountable logistical problems with sampling on a regional scale; several of the selected indicators are practical and sensitive on the regional scale; and an efficient effort in future years will provide valuable information on condition of estuarine resources at regional scales.

10.
Research needed to resolve the uncertainties of cancer risk from ingestion of arsenic in drinking water is described. The recommendations fall into two categories reflecting the areas of greatest uncertainty regarding the assessment of arsenic risk: research on the mechanism of cancer, and research on the metabolism and detoxification of arsenic. The recommendations are discussed in light of risk assessment and risk management issues, stressing the need for scientists to interpret research findings for decision managers. This document has been reviewed in accordance with the US Environmental Protection Agency Policy and approved for publication. Mention of trade names or commercial products does not constitute endorsement or recommendations for use.

11.
Environmental epidemiology and health risk and impact assessment have long grappled with problems of uncertainty in data and their relationships. These uncertainties have become more challenging because of the complex, systemic nature of many of the risks. A clear framework defining and quantifying uncertainty is needed. Three dimensions characterise uncertainty: its nature, its location and its level. In terms of its nature, uncertainty can be both intrinsic and extrinsic. The former reflects the effects of complexity, sparseness and nonlinearity; the latter arises through inadequacies in available observational data, measurement methods, sampling regimes and models. Uncertainty occurs in three locations: conceptualising the problem, analysis and communicating the results. Most attention has been devoted to characterising and quantifying the analysis—a wide range of statistical methods has been developed to estimate analytical uncertainties and model their propagation through the analysis. In complex systemic risks, larger uncertainties may be associated with conceptualisation of the problem and communication of the analytical results, both of which depend on the perspective and viewpoint of the observer. These imply using more participatory approaches to investigation, and more qualitative measures of uncertainty, not only to define uncertainty more inclusively and completely, but also to help those involved better understand the nature of the uncertainties and their practical implications.

12.
The United States Environmental Protection Agency's Environmental Monitoring and Assessment Program (EMAP) is designed to describe status, trends and spatial pattern of indicators of condition of the nation's ecological resources. The proposed sampling design for EMAP is based on a triangular systematic grid and employs both variable probability and double sampling. The Horvitz-Thompson estimator provides the foundation of the design-based estimation strategy used in EMAP. However, special features of EMAP designed to accommodate the complexity of sampling environmental resources on a national scale require modifications of standard variance estimation procedures as well as development of new techniques. An overview of variance estimation methods proposed for application to EMAP's sampling strategy for discrete resources is presented.
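
A compact sketch of the design-based core, assuming a Poisson variable-probability design so that the standard variance estimator stays simple; EMAP's triangular grid and double sampling are what require the modified estimators the paper surveys.

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical discrete resource: 500 lakes with a condition indicator y
N = 500
y = rng.gamma(3.0, 2.0, N)

# Variable-probability (Poisson) design: inclusion probability proportional
# to an auxiliary size measure, scaled for an expected sample size of 60
size = rng.uniform(1.0, 10.0, N)
pi = 60 * size / size.sum()
sampled = rng.random(N) < pi

# Horvitz-Thompson estimator of the population total: sum of y_i / pi_i
ht_total = (y[sampled] / pi[sampled]).sum()

# Unbiased variance estimator under Poisson sampling:
# sum over the sample of (1 - pi_i) * (y_i / pi_i)^2
se = np.sqrt(((1 - pi[sampled]) * (y[sampled] / pi[sampled]) ** 2).sum())
print(f"true total = {y.sum():.1f}, HT estimate = {ht_total:.1f} (SE {se:.1f})")
```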

13.
Knowledge of animal abundance is fundamental to many ecological studies. Frequently, researchers cannot determine true abundance, and so must estimate it using a method such as mark-recapture or distance sampling. Recent advances in abundance estimation allow one to model heterogeneity with individual covariates or mixture distributions and to derive multimodel abundance estimators that explicitly address uncertainty about which model parameterization best represents truth. Further, it is possible to borrow information on detection probability across several populations when data are sparse. While promising, these methods have not been evaluated using mark-recapture data from populations of known abundance, and thus far have largely been overlooked by ecologists. In this paper, we explored the utility of newly developed mark-recapture methods for estimating the abundance of 12 captive populations of wild house mice (Mus musculus). We found that mark-recapture methods employing individual covariates yielded satisfactory abundance estimates for most populations. In contrast, model sets with heterogeneity formulations consisting solely of mixture distributions did not perform well for several of the populations. We show through simulation that a higher number of trapping occasions would have been necessary to achieve good estimator performance in this case. Finally, we show that simultaneous analysis of data from low abundance populations can yield viable abundance estimates.
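
For orientation, here is the simplest two-occasion mark-recapture estimator (Chapman's bias-corrected Lincoln-Petersen), assuming a homogeneous capture probability; the covariate and mixture models the paper evaluates generalize exactly this setting.

```python
import numpy as np

rng = np.random.default_rng(8)
N_true = 120                     # true abundance (known here, unknown in the field)
p = 0.3                          # assumed per-occasion capture probability

first = rng.random(N_true) < p   # occasion 1: capture and mark
second = rng.random(N_true) < p  # occasion 2: capture again
n1 = first.sum()
n2 = second.sum()
m2 = (first & second).sum()      # recaptures (marked animals caught again)

# Chapman's bias-corrected Lincoln-Petersen estimator
N_hat = (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
print(f"n1={n1}, n2={n2}, recaptures={m2}, N_hat={N_hat:.1f} (true {N_true})")
```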

14.
Environmental site assessments are frequently executed for monitoring and remediation performance evaluation purposes, especially in total petroleum hydrocarbon (TPH)-contaminated areas, such as gas stations. As a key issue, reproducibility of the assessment results must be ensured, especially if attempts are made to compare results between different institutions. Although it is widely known that uncertainties associated with soil sampling are much higher than those with chemical analyses, field guides or protocols to deal with these uncertainties are not stipulated in detail in the relevant regulations, causing serious errors and distortion of the reliability of environmental site assessments. In this research, uncertainties associated with soil sampling and sample reduction for chemical analysis were quantified using laboratory-scale experiments and the theory of sampling. The research results showed that the TPH mass assessed by sampling tends to be overestimated and sampling errors are high, especially for the low range of TPH concentrations. Homogenization of soil was found to be an efficient method to suppress uncertainty, but high-resolution sampling could be an essential way to minimize this.
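
A toy theory-of-sampling simulation under an assumed "rare hot grain" contamination model (all numbers invented) reproduces the pattern the paper reports: the relative error of a small analytical subsample grows sharply as the lot concentration drops.

```python
import numpy as np

rng = np.random.default_rng(2)

def relative_sampling_error(frac_hot, n_grains=50_000, sub=500, reps=1_000):
    """Toy sample-reduction model: the analyte rides on rare 'hot' grains and
    a small analytical subsample is drawn from the homogenized lot."""
    conc = np.zeros(n_grains)
    n_hot = max(1, int(frac_hot * n_grains))
    conc[rng.choice(n_grains, n_hot, replace=False)] = 10_000.0
    lot_mean = conc.mean()
    est = np.array([conc[rng.choice(n_grains, sub, replace=False)].mean()
                    for _ in range(reps)])
    return lot_mean, est.std() / lot_mean  # relative standard deviation

for frac in (0.01, 0.001, 0.0001):  # lower concentration = rarer hot grains
    mu, rsd = relative_sampling_error(frac)
    print(f"lot mean = {mu:7.2f}   relative sampling error = {rsd:.2f}")
```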

15.

Goal and Scope

Human biomonitoring estimates the concentration of xenobiotics in a population from comparatively small samples, which necessarily gives rise to sampling errors. These errors are quantified here.

Methods

For a fictitious population of 200,000 persons, xenobiotic concentration distributions of varying width were simulated. Samples of varying size were drawn at random, and the sampling error, defined as the proportional difference between the geometric means of the sample and the population, was determined.

Results and Conclusions

The sampling error depends on the sample size and on the width of the concentration distribution. It can be estimated for any xenobiotic with a lognormal concentration distribution and a sample size between 10 and 50,000; an equation was derived for this estimation.

Outlook

When presenting and interpreting results of human biomonitoring, the sampling error must be considered, together with the uncertainty of the measurement.
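
A small simulation in the spirit of the study, with an assumed lognormal width and a Monte Carlo estimate in place of the paper's derived equation:

```python
import numpy as np

rng = np.random.default_rng(9)

# Fictitious population of 200,000 persons with lognormal xenobiotic levels
pop = rng.lognormal(mean=1.0, sigma=0.8, size=200_000)
gm_pop = np.exp(np.log(pop).mean())  # population geometric mean

def sampling_error_95(n, reps=500):
    """95th percentile of the proportional difference between sample and
    population geometric means."""
    errs = np.empty(reps)
    for i in range(reps):
        sample = rng.choice(pop, size=n, replace=False)
        errs[i] = abs(np.exp(np.log(sample).mean()) / gm_pop - 1.0)
    return np.percentile(errs, 95)

for n in (10, 100, 1_000, 10_000):
    print(f"n = {n:>6}: sampling error (95th pct) = {sampling_error_95(n):.3f}")
```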

16.
Environmental and Ecological Statistics - This paper presents an extension of the Geostatistical model under preferential sampling in order to accommodate possible local repulsion effects. This...

17.
Statistical methods as developed and used in decision making and scientific research are of recent origin. The logical foundations of statistics are still under discussion, and some care is needed in applying the existing methodology and interpreting results. Some pitfalls in statistical data analysis are discussed, and the importance of cross-examination of data (or exploratory data analysis) before using specific statistical techniques is emphasized. Comments are made on the treatment of outliers, the choice of stochastic models, the use of multivariate techniques and the choice of software (expert systems) in statistical analysis. The need for developing new methodology with particular relevance to environmental research and policy is stressed. Dr Rao is Eberly Professor of Statistics and Director of the Penn State Center for Multivariate Analysis. He has received PhD and ScD degrees from Cambridge University, and has been awarded numerous honorary doctorates from universities around the world. He is a Fellow of the Royal Society, UK; Fellow of the Indian National Science Academy; Foreign Honorary Member of the American Academy of Arts and Science; Life Fellow of King's College, Cambridge; and Founder Fellow of the Third World Academy of Sciences. He is Honorary Fellow and President of the International Statistical Institute and the Biometric Society, and an elected Fellow of the Institute of Mathematical Statistics. He has made outstanding contributions to virtually all important topics of theoretical and applied statistics, and many results bear his name. He has been Editor of Sankhya and the Journal of Multivariate Analysis, and serves on international advisory boards of several professional journals, including Environmetrics and the Journal of Environmental Statistics. This paper is based on the keynote address to the Seventh Annual Conference on Statistics of the United States Environmental Protection Agency.

18.
Environmental and Ecological Statistics - Model averaging is commonly used to allow for model uncertainty in parameter estimation. As well as providing a point estimate that is a natural compromise...

19.
Models of the geographic distributions of species have wide application in ecology. But the nonspatial, single-level regression models that ecologists have often employed do not deal with problems of irregular sampling intensity or spatial dependence, and do not adequately quantify uncertainty. We show here how to build statistical models that can handle these features of spatial prediction and provide richer, more powerful inference about species niche relations, distributions, and the effects of human disturbance. We begin with a familiar generalized linear model and build in additional features, including spatial random effects and hierarchical levels. Since these models are fully specified statistical models, we show that it is possible to add complexity without sacrificing interpretability. This step-by-step approach, together with attached code that implements a simple, spatially explicit, regression model, is structured to facilitate self-teaching. All models are developed in a Bayesian framework. We assess the performance of the models by using them to predict the distributions of two plant species (Proteaceae) from South Africa's Cape Floristic Region. We demonstrate that making distribution models spatially explicit can be essential for accurately characterizing the environmental response of species, predicting their probability of occurrence, and assessing uncertainty in the model results. Adding hierarchical levels to the models has further advantages in allowing human transformation of the landscape to be taken into account, as well as additional features of the sampling process.
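
As a generative sketch of the data level of such a model (invented coordinates, covariate, and parameters), the structure is a generalized linear model plus a spatially correlated random effect; fitting it in a fully Bayesian framework, as the paper does, requires MCMC machinery beyond a few lines.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.special import expit

rng = np.random.default_rng(4)

# Hypothetical survey: 300 sites in the unit square with one covariate
n = 300
coords = rng.random((n, 2))
elev = rng.normal(size=n)  # standardized environmental covariate

# Spatial random effects: a Gaussian process with exponential covariance
K = np.exp(-cdist(coords, coords) / 0.2) + 1e-8 * np.eye(n)
w = np.linalg.cholesky(K) @ rng.normal(size=n)

# Data level: presence/absence is Bernoulli on the logit scale
beta0, beta1 = -0.5, 1.2
p = expit(beta0 + beta1 * elev + w)  # GLM linear predictor + spatial effect
y = rng.binomial(1, p)

# A further hierarchical level could make detection depend on sampling
# intensity or on human transformation of the landscape at each site.
print(f"prevalence = {y.mean():.2f}, spatial-effect sd = {w.std():.2f}")
```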
