Found 20 similar documents; search time: 0 ms
1.
Quantitative inference from environmental contaminant data is drawn almost exclusively from within the classic Neyman/Pearson (N/P) hypothesis-testing model, in which the mean serves as the fundamental quantitative measure but which is constrained by random sampling and the assumption of normality in the data. Permutation/randomization-based inference, originally put forward by R. A. Fisher, derives probability directly from the proportion of the occurrences of interest and does not depend on the distribution of the data or on random sampling. The underlying logic and the interpretation of significance differ between the two models, but inference using either can often be applied successfully. However, data examples from airborne environmental fungi (mold), asbestos in settled dust, and 1,2,3,4-tetrachlorobenzene (TeCB) in soil demonstrate potentially misleading inference using traditional N/P hypothesis testing based upon means and variances compared to permutation/randomization inference using differences in frequency of detection (Δf_d). Bootstrapping and permutation testing, which are extensions of permutation/randomization, confirm the p values calculated via Δf_d and should be used to verify the appropriateness of a given data analysis under either model.
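A minimal sketch of the kind of permutation test on Δf_d described here, with detection coded 0/1; the two site vectors and the number of permutations are invented placeholders, not data from the study:

```python
# Permutation test on the difference in detection frequency (delta f_d)
# between two groups of samples; 1 = analyte detected, 0 = not detected.
import numpy as np

rng = np.random.default_rng(42)
site_a = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
site_b = np.array([0, 0, 1, 0, 0, 1, 0, 0, 0, 1])

observed = site_a.mean() - site_b.mean()   # observed delta f_d

pooled = np.concatenate([site_a, site_b])
n_a, n_perm, count = len(site_a), 10_000, 0
for _ in range(n_perm):
    rng.shuffle(pooled)                    # reassign group labels at random
    diff = pooled[:n_a].mean() - pooled[n_a:].mean()
    if abs(diff) >= abs(observed):
        count += 1

p_value = count / n_perm   # proportion of relabelings as extreme as observed
print(f"delta f_d = {observed:.2f}, permutation p = {p_value:.4f}")
```

The p value is simply the proportion of label permutations at least as extreme as the observed difference, so no distributional assumption is needed.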
2.
Larry G. Blackwood 《Environmental monitoring and assessment》1991,18(1):25-40
The occurrence of censored data due to less-than-detectable measurements is a common problem with environmental data. The methods of survival analysis, although designed primarily for right-censored data and time-measured variables, can be adapted to apply to censored environmental data. These methods have several theoretical and practical advantages over many existing techniques for dealing with less-than-detectable measurements. Work performed under the auspices of the U.S. Department of Energy, DOE Contract No. DE-AC07-76ID01570.
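One standard way to make this adaptation is to negate the concentrations so that left-censoring at a detection limit becomes right-censoring, then apply a survival estimator. A rough sketch with a hand-rolled Kaplan-Meier estimator and invented data (tie conventions are ignored for brevity):

```python
# Left-censored concentrations ("<DL") handled via the negation trick:
# a value left-censored at DL becomes right-censored at -DL.
import numpy as np

conc = np.array([5.0, 2.0, 0.5, 0.5, 3.1, 1.2, 0.5, 4.4])   # measured values
detected = np.array([1, 1, 0, 0, 1, 1, 0, 1], dtype=bool)    # False = "<DL"

t = -conc
order = np.argsort(t)
t, d = t[order], detected[order]

n = len(t)
at_risk = n - np.arange(n)                 # observations still at risk
surv = np.cumprod(np.where(d, (at_risk - 1) / at_risk, 1.0))

# Survival on the negated scale gives the CDF of the original concentrations.
for ti, p in zip(-t[d], surv[d]):
    print(f"P(conc < {ti:.1f}) approx {p:.3f}")
```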
3.
A. H. El-Shaarawi P. B. Kauss M. K. Kirby M. Walsh 《Environmental monitoring and assessment》1989,13(2-3):295-304
In recent years, intensive biological monitoring studies have been carried out on the Niagara River by the Ontario Ministry of the Environment. The basic objective was to determine the relative bioavailability of trace contaminants at various locations in the river and to identify sources. A recurring difficulty with the generated data is that substantial portions of the sample concentrations of many toxic pollutants are below the limits of detection established by analytical laboratories. Under the assumption that the data are lognormally distributed, the likelihood ratio test for the equality of several means under Type I censoring is derived, and its use for evaluating the spatial variability of trace contaminants in the river is illustrated.
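A sketch of such a likelihood-ratio test for two stations, fitting a censored normal model on the log scale by maximum likelihood; the data, detection limit, and the choice of a shared sigma are illustrative assumptions, not the paper's exact formulation:

```python
# LRT for equal lognormal means with Type I left-censoring at a fixed DL.
import numpy as np
from scipy import stats, optimize

def nll(mus, sigma, data):
    """Censored-normal negative log-likelihood on the log scale."""
    total = 0.0
    for mu, (x, cens, log_dl) in zip(mus, data):
        total -= stats.norm.logpdf(x[~cens], mu, sigma).sum()      # detects
        total -= cens.sum() * stats.norm.logcdf(log_dl, mu, sigma) # "<DL"
    return total

rng = np.random.default_rng(1)
dl = 0.1                                          # common detection limit
raw = [rng.lognormal(m, 0.7, 40) for m in (-1.0, -0.5)]
data = [(np.log(np.maximum(x, dl)), x < dl, np.log(dl)) for x in raw]

# Alternative: separate means; null: one common mean (shared sigma).
alt = optimize.minimize(lambda p: nll(p[:2], np.exp(p[2]), data),
                        x0=[0.0, 0.0, 0.0])
null = optimize.minimize(lambda p: nll([p[0], p[0]], np.exp(p[1]), data),
                         x0=[0.0, 0.0])

lrt = 2 * (null.fun - alt.fun)                    # asymptotically chi2(1)
print(f"LRT = {lrt:.2f}, p = {stats.chi2.sf(lrt, df=1):.4f}")
```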
4.
Claudia von Brömssen Jens Fölster Martyn Futter Kerstin McEwan 《Environmental monitoring and assessment》2018,190(9):558
Long-term water quality monitoring is of high value for environmental management as well as for research. Artificial level shifts in time series due to method improvements, flaws in laboratory practices or changes in laboratory are a common limitation for analysis that is, however, often ignored. Statistical estimation of such artefacts is complicated by the simultaneous existence of trends, seasonal variation and effects of other influencing factors, such as weather conditions. Here, we investigate the performance of generalised additive mixed models (GAMM) to simultaneously identify one or more artefacts associated with artificial level shifts, longitudinal effects related to temporal trends and seasonal variation, and to model the serial correlation structure of the data. In the same model, it is possible to estimate separate residual variances for different periods, so as to identify whether artefacts influence not only the mean level but also the dispersion of a series. Even with an appropriate statistical methodology, it is difficult to quantify artificial level shifts and make appropriate adjustments to the time series. The underlying temporal structure of the series is especially important. As long as there is no prominent underlying trend in the series, the shift estimates are rather stable and show little variation. If an artificial shift occurs during a slow downward or upward tendency, it is difficult to separate the two effects, and shift estimates can be both biased and highly variable. In the case of a change in method or laboratory, we show that running the analyses with both methods in parallel strongly improves estimates of artefact effects on the time series, even if certain problems remain. Because of the difficulty of estimating artificial level shifts, posterior adjustment is problematic and can lead to time series that can no longer be used for trend analysis or other analyses based on the longitudinal structure of the series. Before a change in analytical method or laboratory is carried out, it should be considered whether the change is absolutely necessary. If it cannot be avoided, the two methods considered, or the two laboratories contracted, should be run in parallel for a considerable period of time so as to enable a good assessment of the changes introduced to the data series.
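A heavily simplified stand-in for the GAMM approach, to show the structure of the problem: a linear model with a trend term, seasonal harmonics, and a step dummy at a known method change. The full GAMM (smooth trend, serial correlation, period-specific variances) needs dedicated software such as R's mgcv; the monthly series below is synthetic.

```python
# Estimate an artificial level shift alongside trend and seasonality.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 240                                    # 20 years of monthly samples
t = np.arange(n)
shift = (t >= 120).astype(float)           # lab change at month 120
y = (10 + 0.005 * t                        # weak underlying trend
     + 1.5 * np.sin(2 * np.pi * t / 12)    # seasonality
     + 0.8 * shift                         # artificial level shift
     + rng.normal(0, 0.5, n))

X = sm.add_constant(np.column_stack([
    t,                                     # trend
    np.sin(2 * np.pi * t / 12),            # seasonal harmonics
    np.cos(2 * np.pi * t / 12),
    shift,                                 # level-shift dummy
]))
fit = sm.OLS(y, X).fit()
print(fit.params)                          # last coefficient estimates the shift
```

As the abstract notes, the shift coefficient and the trend coefficient become hard to separate when the change falls within a slow tendency.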
5.
Normality transformations for environmental data from compound normal-lognormal distributions
Larry G. Blackwood 《Environmental monitoring and assessment》1995,35(1):55-75
The combination of lognormally distributed quantities of interest with normally distributed random measurement error produces data that follow a compound normal-lognormal (NLN) distribution. When the measurement error is large enough, such data do not approximate normality, even after a logarithmic transformation. This paper reports the results of a search for a transformation method for NLN data that is not only technically appropriate but also easy to implement. Three transformation families were found to work relatively well. These families are compared in terms of success in achieving normality and robustness, using simulated NLN data and actual environmental data believed to follow an NLN distribution. The exponential family of transformations was found to give the best overall results. This work was supported by the U.S. Department of Energy, Office of Environmental Restoration and Waste Management, under DOE Idaho Field Office Contract DE-AC07-76ID01570.
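The abstract does not specify the exponential transformation family itself, so the sketch below only illustrates the general selection strategy (pick the transformation parameter that maximizes a normality criterion, here the Shapiro-Wilk W), using simulated NLN data and a shifted-log family as a stand-in:

```python
# Select a transformation parameter by a normality criterion on NLN data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
y = rng.lognormal(1.0, 0.5, 200) + rng.normal(0, 1.0, 200)   # NLN data

best = None
for c in np.linspace(0.1, 10.0, 100):      # shift-parameter grid
    z = np.log(y - y.min() + c)            # shifted-log transformation
    w = stats.shapiro(z).statistic         # Shapiro-Wilk W, higher is better
    if best is None or w > best[1]:
        best = (c, w)

print(f"best shift c = {best[0]:.2f}, Shapiro-Wilk W = {best[1]:.4f}")
```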
6.
V. T. Farewell 《Environmental monitoring and assessment》1989,13(2-3):285-294
When investigating trace substances in ambient water, a proportion of water sample concentrations is usually below limits of detection. In medical and industrial reliability studies, comparisons are often made of time-to-event data which include right-censored observations indicating only that an observation is greater than a specified value. In this paper, consideration is given to the application of non-parametric procedures, widely used in the analysis of time-to-event data, to water quality data which are left-censored. A non-parametric estimate of the cumulative distribution function for left-censored water quality data can be generated quite easily. For the comparison of levels of trace substances, it is necessary to combine an unconditional likelihood for the proportion of observations below a detection limit with a partial likelihood for the portion of the distribution above the detection limit in order to make use of regression methodology. The details of this are outlined, and an example is given which compares levels of toxic substances at the head and mouth of the Niagara River. When comparisons are based on matched-pair data, further modifications are necessary; a development paralleling that for time-to-event data is given. Consideration is also given to model extensions which allow for dependence between observations at the same location over a period of time. The presentation is introductory and designed to illustrate the potential of some available methodology for use in the analysis of water quality data.
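One simple way to build the non-parametric CDF estimate mentioned above is to let the observed proportion below the detection limit anchor the lower tail and fill in the rest with the empirical CDF of the detected values; this sketch uses invented data and is not the paper's full likelihood construction:

```python
# Non-parametric CDF for left-censored data with a single detection limit.
import numpy as np

dl = 0.2
obs = np.array([0.2, 0.2, 0.35, 0.5, 0.2, 0.8, 1.1, 0.45])  # "<DL" stored as DL
censored = np.array([1, 1, 0, 0, 1, 0, 0, 0], dtype=bool)

p_below = censored.mean()                  # unconditional P(X < DL)
detected = np.sort(obs[~censored])

# F(x) = p_below + (1 - p_below) * conditional ECDF for x >= DL
for x in detected:
    rank = np.searchsorted(detected, x, side="right") / len(detected)
    print(f"F({x:.2f}) = {p_below + (1 - p_below) * rank:.3f}")
```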
7.
Roger H. Green 《Environmental monitoring and assessment》1984,4(3):293-301
In environmental studies, statistics is too often used as a salvage operation, or as an attempt to show significance in the absence of any clear hypothesis. Good design is needed, not fancier statistics. Too often we pursue short-term problems that are in fashion rather than study the long-term environmental deterioration that really matters. Since change, often unpredictable change, is an intrinsic part of nature, it is pointless to fight all environmental change. We must choose our level of concern and then influence environmental change where we can. The judgement on whether a given change is bad cannot be left to the statistician or to statistical tests; the politician, in consultation with the ecologist, is responsible for it. The statistical significance of a hypothesized impact-related change should be tested against year-to-year variation in the unimpacted situation rather than against replicate sampling error. This is another argument for long-term studies. Attributes of good design and appropriate criterion and predictor variables are discussed. Paper presented at a Symposium held on 20–21 April 1982, in Edmonton, Alberta, Canada.
8.
In this paper we show the possibility of using expert system tools for environmental data management. We describe the domain-independent expert system shell SAK and Knowledge EXplorer, a system which learns rules from data. We demonstrate the functionality of Knowledge EXplorer on an example of water quality evaluation.
9.
Larry G. Blackwood 《Environmental monitoring and assessment》1992,21(3):193-210
The lognormal distribution has become a common choice for representing intrinsically positive and often highly skewed environmental data in statistical analysis. However, the implications of its use are often not carefully considered. With an emphasis on radiological monitoring applications, this paper reviews what assuming lognormality means for data analysis and interpretation. The relationship of applying normal-theory methods to log-transformed data to multiplicative errors and to hypothesis testing on the original scale is also discussed. Work performed under the auspices of the U.S. Department of Energy, DOE Contract No. DE-AC-07-76ID01570.
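A small illustration of the point about working on the log scale: a confidence interval for a difference of means of log data back-transforms to a multiplicative interval for the ratio of geometric means, so the test in the original scale concerns ratios, not arithmetic differences. Data and group sizes below are simulated.

```python
# Log-scale comparison of two lognormal samples; CI back-transforms to a
# multiplicative interval for the ratio of geometric means.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
a = rng.lognormal(0.0, 0.6, 30)
b = rng.lognormal(0.3, 0.6, 30)

la, lb = np.log(a), np.log(b)
diff = lb.mean() - la.mean()
se = np.sqrt(la.var(ddof=1) / len(la) + lb.var(ddof=1) / len(lb))
tcrit = stats.t.ppf(0.975, df=len(la) + len(lb) - 2)

lo, hi = diff - tcrit * se, diff + tcrit * se
print(f"geometric-mean ratio: {np.exp(diff):.2f}, "
      f"95% CI ({np.exp(lo):.2f}, {np.exp(hi):.2f})")
```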
10.
Iyer CS Sindhu M Kulkarni SG Tambe SS Kulkarni BD 《Journal of environmental monitoring : JEM》2003,5(2):324-327
Measurements of temperature, salinity, dissolved oxygen, nitrogen as ammonia, nitrate and nitrite, and phosphate, along with chlorophyll, were carried out at three stations in the coastal waters of Cochin, southwest India, at two levels of the water column over a period of five years. The data set was factorised using principal component analysis (PCA) to extract linear relationships existing among the set of variables. The scores generated from the PCA were displayed graphically by means of boxplots and biplots, which helped in the interpretation of the data. The major factors conditioning the system are related to the input of fresh water from the estuary of the Periyar river and the high organic load of the bottom sediment in the coastal area, which results in a reducing environment, as reflected in the parameters of dissolved oxygen, ammoniacal-nitrogen and nitrite-nitrogen. Another factor contributing to the variation in the system is related to the unloading activity in the port area. The present approach provides a logical way to interpret the complex data of the physico-chemical measurements.
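A minimal sketch of the PCA step, assuming the usual practice of standardizing physico-chemical variables before extraction; the data matrix below is a synthetic stand-in for the eight measured variables:

```python
# Standardize the water-quality variables, then extract PC scores for
# downstream boxplots and biplots.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Rows = water samples; columns = temperature, salinity, DO, NH4-N,
# NO3-N, NO2-N, PO4-P, chlorophyll (synthetic stand-in data).
X = rng.normal(size=(120, 8))

scores = PCA(n_components=3).fit_transform(StandardScaler().fit_transform(X))
print(scores[:5])        # first five samples in PC space
```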
11.
Renate D. Kimbrough 《Environmental monitoring and assessment》1982,2(1-2):95-103
To determine whether a population has been affected by a chemical, evidence of exposure must be established. The mere presence of a chemical in the surroundings of a population may not, in all instances, result in actual exposure. Not all such exposures will cause health effects; nor is it always possible to establish that illness has or will result from exposure to chemicals. The inability to establish health effects in humans cannot a priori be translated to mean that a specific chemical is harmless. On the other hand, it must be determined whether health studies would be fruitful. If exposure was so minimal that no health effects are expected, then no health studies should be conducted.
12.
Traditionally, process capability indices are developed by assuming that the process output data are independent and normally distributed. In most environmental applications, however, the process data are autocorrelated. An autocorrelated process, if treated as independent, can lead to erroneous decision making and unnecessary quality loss. In this paper, three new capability indices with unbiased estimators are proposed to relax the independence assumption for the nominal-the-best and smaller-the-better cases. Furthermore, we use the mean squared error (MSE) and the mean absolute percent error (MAPE) to compare the accuracy of the proposed indices with previous autocorrelated indices. The results show that the proposed capability indices outperform their predecessors.
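The paper's specific indices are not reproduced here; as a sketch of the general idea, the snippet below fits an AR(1) model and computes a capability index from the implied stationary standard deviation rather than treating the observations as independent. The process, target, and specification limits are all invented.

```python
# Capability index with an AR(1)-adjusted sigma instead of the naive one.
import numpy as np

rng = np.random.default_rng(11)
x = np.empty(500)                             # AR(1) process around target 50
x[0] = 50.0
for i in range(1, 500):
    x[i] = 50 + 0.6 * (x[i - 1] - 50) + rng.normal(0, 1.0)

rho = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-1 autocorrelation
resid_sd = np.std(x[1:] - 50 - rho * (x[:-1] - 50), ddof=2)
stationary_sd = resid_sd / np.sqrt(1 - rho**2)

usl, lsl = 56.0, 44.0                         # hypothetical spec limits
cp = (usl - lsl) / (6 * stationary_sd)
print(f"rho = {rho:.2f}, Cp with AR(1)-adjusted sigma = {cp:.2f}")
```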
13.
We review ways in which the new discipline of ecoinformatics is changing how environmental monitoring data are managed, synthesized, and analyzed. Rapid improvements in information technology and strong interest in biodiversity and sustainable ecosystems are driving a vigorous phase of development in ecological databases. Emerging data standards and protocols enable these data to be shared in ways that have previously been difficult. We use the U.S. Environmental Protection Agency’s National Coastal Assessment (NCA) as an example. The NCA has collected biological, chemical, and physical data from thousands of stations around the U.S. coasts since 1990. NCA data that were collected primarily to assess the ecological condition of the U.S. coasts can be used in innovative ways, such as biogeographical studies to analyze species invasions. NCA application of ecoinformatics tools leads to new possibilities for integrating the hundreds of thousands of NCA species records with other databases to address broad-scale and long-term questions such as environmental impacts, global climate change, and species invasions.
14.
The dynamic light scattering (DLS) technique can determine the concentration and size distribution of nanoscale particles in aqueous solutions by analyzing photon interactions. This study evaluated the applicability of photon count rate data from DLS analyses for measuring levels of biogenic and manufactured nanoscale particles in wastewater. Statistical evaluations were performed using secondary wastewater effluent and a Malvern Zetasizer. Dynamic light scattering analyses were performed equally by two analysts over a period of two days using five dilutions and twelve replicates for each dilution. Linearity evaluation using the sixty-sample analysis yielded a regression coefficient R² = 0.959. The accuracy analysis for the various dilutions indicated a recovery of 100 ± 6%. Precision analyses indicated low variance coefficients for the impact of analysts, days, and within-sample error. Variation between analysts was apparent only in the most diluted sample (intermediate precision ~12%), where the photon count rate was close to the instrument detection limit. Variation between days was apparent in the two most concentrated samples, which indicates that wastewater samples must be analyzed for nanoscale particle measurement on the day of collection. Upon addition of 10 mg l⁻¹ of nanosilica to wastewater effluent samples, the measured photon count rates were within 5% of the estimated values. The results indicate that photon count rate data can effectively complement the various techniques currently available for detecting nanoscale particles in wastewaters.
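A sketch of the linearity and recovery checks in the design above (five dilution levels, twelve replicates each): regress photon count rate on dilution, report R², and express each measurement as percent recovery against the fitted value. All numbers are simulated, not the study's data.

```python
# Linearity (R^2) and recovery for a dilution series of count-rate data.
import numpy as np
from scipy import stats

dilution = np.repeat([1.0, 0.5, 0.25, 0.125, 0.0625], 12)  # 5 levels x 12 reps
rng = np.random.default_rng(2)
counts = 400 * dilution + rng.normal(0, 12, dilution.size) # kcps, synthetic

fit = stats.linregress(dilution, counts)
print(f"R^2 = {fit.rvalue**2:.3f}")

expected = fit.intercept + fit.slope * dilution
recovery = 100 * counts / expected                         # percent recovery
print(f"mean recovery = {recovery.mean():.0f}% +/- {recovery.std(ddof=1):.0f}%")
```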
15.
In estimating spatial means of environmental variables of a region from data collected by convenience or purposive sampling, validity of the results can be ensured by collecting additional data through probability sampling. The precision of the estimator that uses the probability sample can be increased by interpolating the values at the nonprobability sample points to the probability sample points, and using these interpolated values as an auxiliary variable in the difference or regression estimator. These estimators are (approximately) unbiased, even when the nonprobability sample is severely biased, as in preferential samples. The gain in precision, compared to the estimator based on the probability sample alone in combination with Simple Random Sampling, is controlled by the correlation between the target variable and the interpolated variable. This correlation is determined by the size (density) and spatial coverage of the nonprobability sample, and by the spatial continuity of the target variable. In a case study, the average ratio of the variances of the simple regression estimator and the estimator based on the probability sample alone was 0.68 for preferential samples of size 150 with moderate spatial clustering, and 0.80 for preferential samples of similar size with strong spatial clustering. In the latter case the simple regression estimator was substantially more precise than the simple difference estimator.
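A sketch of the estimator described above, under simplifying assumptions (nearest-neighbour interpolation rather than kriging, and a dense random grid to approximate the regional mean of the auxiliary variable); all coordinates and values are invented:

```python
# Simple regression estimator of a spatial mean, using interpolated
# nonprobability-sample values as the auxiliary variable.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(9)

# Nonprobability (convenience) sample: spatially clustered in one corner.
xy_np = rng.uniform(0, 1, (150, 2)) * [0.4, 0.4]
z_np = 5 + 3 * xy_np[:, 0] + rng.normal(0, 0.5, 150)

# Probability sample: simple random locations with measured target values.
xy_p = rng.uniform(0, 1, (30, 2))
z_p = 5 + 3 * xy_p[:, 0] + rng.normal(0, 0.5, 30)

tree = cKDTree(xy_np)
x_aux = z_np[tree.query(xy_p)[1]]          # auxiliary at probability points

grid = rng.uniform(0, 1, (2000, 2))        # dense grid over the region
x_bar_region = z_np[tree.query(grid)[1]].mean()

b = np.cov(x_aux, z_p)[0, 1] / np.var(x_aux, ddof=1)
est = z_p.mean() + b * (x_bar_region - x_aux.mean())
print(f"regression estimate of spatial mean: {est:.2f}")
```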
16.
Dobson J Gardner M Miller B Jessep M Toft R 《Journal of environmental monitoring : JEM》1999,1(1):91-95
This paper reports an approach to the assessment of the validity of environmental monitoring data: a 'data filter'. The strategy has been developed through the UK National Marine Analytical Quality Control (AQC) Scheme for application to data collected for the UK National Marine Monitoring Plan, although the principles described are applicable more widely. The proposed data filter is divided into three components: Part A, 'QA/QC', an assessment of the laboratory's practices in Quality Assurance/Quality Control; Part B, 'fitness for purpose', an evaluation of the standard of accuracy that can be demonstrated by the activities in (A) in relation to the intended application of the data; and Part C, the overall assessment on which data will be accepted as usable or rejected as being of suspect quality. A pilot application of the proposed approach is reported. The approach described in this paper is intended to formalise the assessment of environmental monitoring data for fitness for a chosen purpose. The issues important to fitness for purpose are discussed and assigned a relative priority order on which to judge the reliability/usefulness of monitoring data.
17.
Karim C. Abbaspour Rainer Schulin Ernst Schläppi Hannes Flühler 《Environmental Modeling and Assessment》1996,1(3):151-158
A data worth model is presented for the analysis of alternative sampling schemes in a project where decisions have to be made under uncertainty. This model is part of a comprehensive risk analysis algorithm with the acronym BUDA. The statistical framework in BUDA is Bayesian in nature and incorporates both parameter uncertainty and natural variability. In BUDA, a project iterates among the analyst, the decision maker, and the field work. As part of the analysis, a data worth model calculates the value of a data campaign before the actual field work, thereby allowing the identification of an optimal data collection scheme. A goal function that depicts the objectives of a project is used to discriminate among the alternatives. A Latin hypercube sampling scheme is used to propagate parameter uncertainties to the goal function. In our example, the uncertain parameters are those describing the geostatistical properties of saturated hydraulic conductivity in a Molasse environment. Our results indicate that failing to account for parameter uncertainty produces unrealistically optimistic results, while ignoring the spatial structure can lead to inefficient use of the existing data.
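A sketch of the uncertainty-propagation step: draw parameter sets with Latin hypercube sampling and push them through a goal function. The goal function and parameter bounds below are placeholders, not BUDA's.

```python
# Propagate parameter uncertainty to a goal function via Latin hypercube
# sampling (scipy.stats.qmc).
import numpy as np
from scipy.stats import qmc

def goal(params):
    """Placeholder goal function of two uncertain parameters."""
    k_mean, k_range = params
    return -(k_mean - 2.0) ** 2 - 0.5 * k_range   # e.g. negative expected cost

sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=1000)                        # points in the unit square
# Scale to assumed bounds, e.g. log10-K mean and geostatistical range.
params = qmc.scale(u, l_bounds=[1.0, 0.1], u_bounds=[3.0, 2.0])

values = np.apply_along_axis(goal, 1, params)
print(f"goal mean = {values.mean():.2f}, "
      f"5th percentile = {np.percentile(values, 5):.2f}")
```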
18.
Three statistical models are used to predict the upper percentiles of the distribution of air pollutant concentrations from restricted data sets recorded over yearly time intervals. The first is an empirical quantile-quantile model. It requires, firstly, that a more complete data set be available from a base site within the same airshed and, secondly, that the base and restricted data sets are drawn from the same distributional form. A two-sided Kolmogorov-Smirnov two-sample test is applied to test the validity of the latter assumption, a test not requiring the assumption of a particular distributional form. The second model represents the a priori selection of a distributional model for the air quality data. To demonstrate this approach, the two-parameter lognormal, gamma and Weibull models and the one-parameter exponential model were separately applied to all the restricted data sets. The third model employs a model identification procedure on each data set and selects the best-fitting model.
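A sketch of the first two models under stated simplifications: the KS two-sample check followed by a crude median-ratio quantile mapping (the paper's Q-Q model is presumably richer), and the a priori parametric fits read off at an upper percentile. Concentration data are simulated.

```python
# (1) KS two-sample test and a crude quantile-quantile mapping;
# (2) a priori parametric fits with an upper-percentile readout.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
base = rng.lognormal(3.0, 0.5, 365)        # full year at the base site
restricted = rng.lognormal(3.0, 0.5, 60)   # restricted record, same airshed

ks = stats.ks_2samp(base, restricted)
print(f"KS p = {ks.pvalue:.3f}")           # large p: same form is plausible
# Crude Q-Q mapping: scale the base 99th percentile by the median ratio.
q99_qq = np.quantile(base, 0.99) * np.median(restricted) / np.median(base)
print(f"Q-Q mapped 99th percentile = {q99_qq:.1f}")

for dist in (stats.lognorm, stats.gamma, stats.weibull_min, stats.expon):
    params = dist.fit(restricted)
    print(dist.name, f"99th percentile = {dist.ppf(0.99, *params):.1f}")
```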
19.
In this paper we describe a statistical analysis of the inter-laboratory data summarized in Rosati et al. (2008) to assess the performance of an analytical method for detecting the presence of dust from the collapse of the World Trade Center (WTC) on September 11, 2001. The focus of the inter-lab study was the measurement of the concentration of slag wool fibers in dust, which was considered an indicator of WTC dust. Eight labs were each provided with two blinded samples of three batches of dust that varied in slag wool concentration. Analysis of the data revealed that three of the labs, which did not meet the measurement quality objectives set forth prior to the experimental work, were statistically distinguishable from the five labs that did meet the quality objectives. The five labs, as a group, demonstrated better measurement capability, although their ability to distinguish between the batches was somewhat mixed. This work provides important insights for the planning and implementation of future studies involving examination of dust samples for physical contaminants. It demonstrates (a) the importance of controlling the amount of dust analyzed, (b) the need to take additional replicates to improve count estimates, and (c) the need to address issues related to the execution of the analytical methodology to ensure that all labs meet the measurement quality objectives.
20.
Ruzica Micic Snezana Mitic Biljana Arsic Anja Jokic Milan Mitic Danijela Kostic Aleksandra Pavlovic Milan Cekerevac Ljiljana Nikolic-Bujanovic Zaklina Spalevic 《Environmental monitoring and assessment》2015,187(6):389
Zinc, copper, iron, chromium and cobalt are essential elements for human health, showing toxicity only at high concentrations, while lead and cadmium are extremely toxic even as traces. It is therefore important to monitor the contents of toxic metals in vegetables. A large number of vegetables are grown and used in nutrition in Kosovo. The concentrations of selected elements in vegetables (radish, onion, garlic and spinach) from Kosovo were determined using the ICP-OES method. Oral intakes of metals and health risk indices were calculated. Statistical analysis indicated numerous positive correlations between the concentrations of selected elements in the vegetables. Principal component analysis yielded 15 new variables characterized by their eigenvalues. The health quotients for the heavy metals followed the decreasing order Zn = Mn > Pb > Cu > Ni > Fe > Cd > Co > Cr. The health quotients for all investigated heavy metals were below 1 (one), which is considered safe. The vegetables from Kosovo are mainly safe for use in the everyday diet.
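A worked sketch of the health-quotient arithmetic behind such rankings: estimated daily intake (concentration times consumption per body weight) divided by an oral reference dose, with HQ below 1 read as safe. Every number below is a placeholder for illustration, not a value from the study.

```python
# Hazard quotient: HQ = EDI / RfD, with EDI in mg per kg body weight per day.
daily_veg_intake_kg = 0.3          # vegetable consumption, fresh weight
body_weight_kg = 70.0
conc_mg_per_kg = {"Zn": 25.0, "Pb": 0.4, "Cd": 0.05}        # metal in vegetable
rfd_mg_per_kg_day = {"Zn": 0.3, "Pb": 0.0035, "Cd": 0.001}  # placeholder RfDs

for metal, c in conc_mg_per_kg.items():
    edi = c * daily_veg_intake_kg / body_weight_kg           # mg/kg bw/day
    hq = edi / rfd_mg_per_kg_day[metal]
    print(f"{metal}: EDI = {edi:.5f} mg/kg/day, HQ = {hq:.2f}")
```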