Similar Articles
20 similar articles were retrieved.
1.
Efficiency of composite sampling for estimating a lognormal distribution (total citations: 1; self-citations: 0; citations by others: 1)
In many environmental studies, measuring the amount of a contaminant in a sampling unit is expensive. In such cases, composite sampling is often used to reduce data collection costs. Composite sampling is known to be beneficial for estimating the mean of a population, but not necessarily for estimating the variance or other parameters. Because some applications, such as Monte Carlo risk assessment, require an estimate of the entire distribution, and because the lognormal model is commonly used in environmental risk assessment, in this paper we investigate the efficiency of composite sampling for estimating a lognormal distribution. In particular, we examine the magnitude of the savings in the number of measurements over simple random sampling, and how those savings depend on the composite size and the parameters of the distribution, using simulation and asymptotic calculations.
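A minimal simulation sketch of the tradeoff (not the authors' estimator): each composite measurement below is the arithmetic mean of k lognormal field samples, and the lognormal parameters are recovered by simple moment matching. The composite sizes, sample sizes, and the moment-matching step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_lognormal_from_composites(mu, sigma, n_measurements, k, reps=2000):
    """Moment-based estimates of (mu, sigma) from composites of size k.

    Each composite measurement is the arithmetic mean of k individual
    lognormal(mu, sigma) field samples; k = 1 corresponds to simple
    random sampling with one measurement per field sample.
    """
    est = []
    for _ in range(reps):
        x = rng.lognormal(mu, sigma, size=(n_measurements, k)).mean(axis=1)
        m, v = x.mean(), x.var(ddof=1)
        pop_var = k * v                      # back out the individual-sample variance
        s2 = np.log(1.0 + pop_var / m**2)    # moment matching to a lognormal
        est.append((np.log(m) - s2 / 2.0, np.sqrt(s2)))
    return np.array(est)

# compare bias and spread of (mu_hat, sigma_hat) for several composite sizes
for k in (1, 4, 8):
    e = estimate_lognormal_from_composites(mu=0.0, sigma=1.0, n_measurements=30, k=k)
    print(k, e.mean(axis=0), e.std(axis=0))
```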

2.
When an environmental sampling objective is to classify every sample unit as contaminated or not, composite sampling with selective retesting can substantially reduce costs by reducing the number of units that require direct analysis. The tradeoff, however, is increased complexity that carries its own hidden costs. For this reason, we propose a model for assessing the relative cost, expressed as the ratio of the total expected cost with compositing to the total expected cost without compositing (initial exhaustive testing). Expressions are derived for three retesting protocols: (i) exhaustive, (ii) sequential, and (iii) binary split. The effects of both false positive and false negative rates are also derived and incorporated. The derived expressions for relative cost are illustrated for a range of values of the various cost components that reflect typical costs incurred in hazardous waste site monitoring. The results allow those designing sampling plans to evaluate whether any of these compositing/retesting protocols will be cost effective for particular applications.
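For the exhaustive-retest protocol, the expected number of analyses has the familiar Dorfman-type closed form. The sketch below computes an analysis-count-only relative cost (analyses per unit with compositing divided by one analysis per unit under initial exhaustive testing); treating the composite test as having fixed sensitivity and specificity regardless of how many members are contaminated, and ignoring compositing overhead, are simplifications relative to the paper's full cost model.

```python
def relative_cost_exhaustive(p, k, se=1.0, sp=1.0):
    """Relative cost (analyses only) of compositing k units with exhaustive
    retesting of every member of a positive composite, versus testing each
    unit directly.

    p  : probability that an individual unit is contaminated
    se : sensitivity of the composite test (assumed constant)
    sp : specificity of the composite test (assumed constant)
    """
    prob_clean = (1.0 - p) ** k
    p_positive = se * (1.0 - prob_clean) + (1.0 - sp) * prob_clean
    expected_tests = 1.0 + k * p_positive   # 1 composite test + k retests if positive
    return expected_tests / k               # < 1 means compositing saves analyses

for k in (2, 4, 8, 16):
    print(k, round(relative_cost_exhaustive(p=0.05, k=k), 3))
```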

3.
Ratio estimation of the parametric mean for a characteristic measured on plants sampled by a line intercept method is presented and evaluated via simulation using different plant dispersion patterns (Poisson, regular cluster, and Poisson cluster), plant width variances, and numbers of lines. The results indicate that, on average, the estimates are close to the parametric mean under all three dispersion patterns. For a fixed number of lines, the variability of the estimates is similar across dispersion patterns, with variability under the Poisson pattern slightly smaller than under the cluster patterns. No variance estimates were negative under the Poisson pattern, but some were negative under the cluster patterns for smaller numbers of lines. Variance estimates approach zero at a similar rate for all spatial patterns as the number of lines increases. Ratio estimation of the parametric mean in line intercept sampling works best, in terms of approximate unbiasedness and variability of the estimates, under the Poisson pattern with larger numbers of lines, compared with other combinations of spatial pattern, plant width variance, and number of lines.
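In line intercept sampling a plant is intercepted with probability proportional to its width, so a ratio-type estimator of the per-plant mean weights each intercepted plant by the inverse of its width. A sketch assuming the per-line form Σ(y_i/w_i) / Σ(1/w_i), pooled over lines; the paper's exact estimator and its variance estimator may differ, and the toy data are purely illustrative.

```python
import numpy as np

def lis_ratio_mean(lines):
    """Ratio estimate of the mean characteristic per plant from line
    intercept data.

    `lines` is a list of (y, w) pairs per transect line, where y[i] is the
    measured characteristic and w[i] the width (intercept length) of the
    i-th intercepted plant; inclusion probability is proportional to w[i],
    so both totals are inverse-width weighted.
    """
    num = sum((y / w).sum() for y, w in lines)    # estimated total of y
    den = sum((1.0 / w).sum() for y, w in lines)  # estimated number of plants
    return num / den

# toy data: two transect lines over a plant population
rng = np.random.default_rng(1)
lines = []
for _ in range(2):
    w = rng.uniform(0.2, 1.0, size=15)     # plant widths
    y = 5.0 * w + rng.normal(0, 0.5, 15)   # characteristic correlated with width
    lines.append((y, w))
print(lis_ratio_mean(lines))
```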

4.
The initial use of composite sampling involved the analysis of many negative samples with relatively high laboratory cost (Dorfman sampling). We propose a method of double compositing and compare its efficiency with Dorfman sampling. The variability of composite measurements is itself of environmental interest (e.g., for locating hot spots). The precision of these estimates depends on the kurtosis of the distribution: leptokurtic distributions (γ₂ > 0) gain precision as the number of field samples is increased, while the opposite effect is obtained for platykurtic distributions. In the lognormal case, coverage probabilities are reasonable for σ < 0.5. The Poisson distribution can be associated with temporal compositing, which is of particular interest where radioactive measurements are taken. Sample size considerations indicate that the total sampling effort is directly proportional to the length of time sampled. If there is background radiation, then increasing levels of this radiation require larger sample sizes to detect the same difference in radiation.
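The role of kurtosis follows from the standard large-sample result Var(s²) ≈ σ⁴(2/(n−1) + γ₂/n) for n i.i.d. values with excess kurtosis γ₂; averaging k field samples into a composite scales the excess kurtosis of the measured values by roughly 1/k, which improves the precision of variance estimates when γ₂ > 0 and worsens it when γ₂ < 0. The quick Monte Carlo check below verifies the formula itself and is illustrative only, not the paper's double-compositing calculation.

```python
import numpy as np

rng = np.random.default_rng(2)

def var_of_sample_variance(sample_fn, n, reps=20000):
    """Monte Carlo variance of the sample variance s^2 for samples of size n."""
    s2 = np.array([sample_fn(n).var(ddof=1) for _ in range(reps)])
    return s2.var()

n = 25
# platykurtic (uniform, gamma_2 = -1.2) vs leptokurtic (Laplace, gamma_2 = 3),
# both scaled to unit variance so only the kurtosis differs
cases = [
    ("uniform", lambda n: rng.uniform(-np.sqrt(3), np.sqrt(3), n), -1.2, 1.0),
    ("laplace", lambda n: rng.laplace(0, 1 / np.sqrt(2), n), 3.0, 1.0),
]
for name, fn, gamma2, sigma2 in cases:
    approx = sigma2**2 * (2.0 / (n - 1) + gamma2 / n)
    print(name, round(var_of_sample_variance(fn, n), 4), round(approx, 4))
```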

5.
Analyzing soils for contaminants can be costly. Generally, discrete samples are gathered from within a study area, analyzed by a laboratory, and the results are used in a site-specific statistical analysis. Because of the heterogeneity of soils within study areas, the sample population may show large variability and skewness. This necessitates collecting a large number of samples to obtain reliable inference on the mean contaminant concentration and to understand the spatial patterns for future remediation. Composite, or incremental, sampling is a commonly applied method in which multiple discrete samples are gathered and physically combined, so that each combination of discrete samples requires a single laboratory analysis; this reduces cost and can improve estimates of the mean concentration. While incremental sampling can reduce cost and improve mean estimates, current implementations do not readily facilitate the characterization of spatial patterns or the detection of regions of elevated constituent concentrations within study areas. The methods we present in this work provide efficient estimation and inference for the mean contaminant concentration over the entire spatial area and enable the identification of high-contaminant regions within the area of interest. We develop sample design methodologies that explicitly define the characteristics of these designs (such as the sample grid layout) and quantify the number of incremental samples that must be obtained under a design criterion to control false positive and false negative (Type I and Type II) decision errors. We present the sample design theory and specifications as well as results on simulated and real data.
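Controlling Type I and Type II decision errors typically reduces to a standard sample-size calculation for a test of the mean against an action level. A generic sketch of that calculation is shown below; the normal approximation, the one-sided z-test, and the numeric values of σ and the detectable shift δ are illustrative assumptions and are not the paper's design-specific formulas.

```python
import math
from scipy import stats

def n_for_mean_test(sigma, delta, alpha=0.05, beta=0.10):
    """Smallest n for which a one-sided z-test of the mean against an action
    level detects a true shift of size delta with Type I error alpha and
    Type II error beta:  n >= ((z_{1-alpha} + z_{1-beta}) * sigma / delta)^2.
    """
    z_a = stats.norm.ppf(1 - alpha)
    z_b = stats.norm.ppf(1 - beta)
    return math.ceil(((z_a + z_b) * sigma / delta) ** 2)

# e.g. detect a 2 mg/kg exceedance when the field standard deviation is 4 mg/kg
print(n_for_mean_test(sigma=4.0, delta=2.0))
```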

6.
Determining the optimum number of increments in composite sampling (total citations: 1; self-citations: 0; citations by others: 1)
Composite sampling can be more cost effective than simple random sampling. This paper considers how to determine the optimum number of increments to use in composite sampling. Composite sampling terminology and theory are outlined, and a method is developed that accounts for different sources of variation in compositing and data analysis. This method is used to define and understand the process of determining the optimum number of increments that should be used in forming a composite. The blending variance is shown to have a smaller range of possible values than previously reported when estimating the number of increments in a composite sample. Accounting for differing levels of the blending variance significantly affects the estimated number of increments.
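A common variance model for n composites, each blended from k increments and analyzed once, is Var(mean) = (σ²_field/k + σ²_blend + σ²_meas)/n; the smallest k that meets a variance target can then be found by direct search. The sketch below uses that generic model with illustrative variance components only; it is not the paper's specific method, nor its bounds on the blending variance.

```python
def composite_mean_var(k, n, var_field, var_blend, var_meas):
    """Variance of the grand mean of n composites, each blended from k
    increments and measured once (generic compositing variance model)."""
    return (var_field / k + var_blend + var_meas) / n

def optimum_increments(var_target, n, var_field, var_blend, var_meas, k_max=100):
    """Smallest number of increments k whose composite mean meets var_target."""
    for k in range(1, k_max + 1):
        if composite_mean_var(k, n, var_field, var_blend, var_meas) <= var_target:
            return k
    return None  # unattainable: blending/measurement variance dominates

# illustrative values only
print(optimum_increments(var_target=0.05, n=4,
                         var_field=1.0, var_blend=0.02, var_meas=0.05))
```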

7.
Environmental and Ecological Statistics (abstract not available).

8.
Composite sampling offers the great promise of efficiency and economy in environmental decision making. However, if careful attention is not paid to matching the support of the sample to that required for making the desired decision, the promise goes unfulfilled. Obviously this attention must be applied in the design phase of a composite sampling strategy. Less obvious is the potential for alteration of the designed sample support during sample collection and assay. The consequences of not paying attention to these aspects of sample design and assay are discussed in this issue paper and illustrated with a series of examples taken from the authors' consulting experience.

9.

For many clustered populations, prior information for an initial stratification exists, but the exact pattern of the population concentration may not be predictable. In this situation, stratified adaptive cluster sampling (SACS) may provide more efficient estimates than other conventional sampling designs for estimating the parameters of rare and clustered populations. Of practical interest, we propose a generalized ratio estimator with a single auxiliary variable under the SACS design. Expressions for the approximate bias and mean squared error (MSE) of the proposed estimator are derived. Numerical studies are carried out to compare the performance of the proposed generalized estimator with the usual mean and combined ratio estimators under conventional stratified random sampling (StRS), using a real population of redwood trees in California and an artificial population generated by a Poisson cluster process. Simulation results show that the proposed class of estimators may provide more efficient results than the other estimators considered in this article for the estimation of highly clumped populations.
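For reference, the conventional StRS comparator is the combined ratio estimator Ŷ_RC = (Σ_h W_h ȳ_h / Σ_h W_h x̄_h)·X̄, where W_h are stratum weights and X̄ is the known auxiliary mean. The sketch below implements only this comparator, not the authors' generalized SACS estimator (which replaces stratum sample means with adaptive-cluster network means); the toy strata are illustrative.

```python
import numpy as np

def combined_ratio_estimate(strata, x_bar_pop):
    """Combined ratio estimator of the population mean of y under
    stratified random sampling (the StRS comparator, not the SACS form).

    `strata` is a list of dicts with stratum weight W (= N_h / N) and
    sampled y, x arrays; `x_bar_pop` is the known population mean of x.
    """
    y_strat = sum(s["W"] * np.mean(s["y"]) for s in strata)
    x_strat = sum(s["W"] * np.mean(s["x"]) for s in strata)
    return (y_strat / x_strat) * x_bar_pop

rng = np.random.default_rng(3)
strata = []
for W, mu in [(0.6, 2.0), (0.4, 8.0)]:            # second stratum more clustered
    x = rng.poisson(mu, size=20).astype(float) + 1.0
    y = 1.5 * x + rng.normal(0, 1, 20)            # y roughly proportional to x
    strata.append({"W": W, "x": x, "y": y})
print(combined_ratio_estimate(strata, x_bar_pop=0.6 * 3.0 + 0.4 * 9.0))
```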


10.
In this paper a method is presented for collecting vertically and horizontally integrated, volume-weighted composite samples for the analysis of water chemistry and plankton. The method, which requires proper knowledge of lake morphometry parameters, includes proposed standard procedures for determining sampling interval thickness, the maximum depth of sampling, the selection of sampling stations, and the distribution of discrete samples. An example of the outcome of the method in a lake with uncomplicated basin morphometry is given, and the results are discussed against the background of general lake basin morphometry data. The aim of the paper is to start a debate about the optimization (statistical as well as ecological) of volume-weighted composite sampling.
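The core arithmetic is that each depth stratum contributes in proportion to the lake volume it represents, so the composite estimates the volume-weighted lake mean. A sketch under that assumption only; the stratum volumes and concentrations are illustrative, and the paper's procedures for choosing interval thickness and stations are not reproduced.

```python
def volume_weighted_composite(volumes, concentrations):
    """Volume-weighted mean concentration: each depth stratum's discrete
    sample is weighted by the lake volume of that stratum (taken from the
    morphometry / hypsographic data)."""
    total = sum(volumes)
    return sum(v * c for v, c in zip(volumes, concentrations)) / total

def aliquot_volumes(volumes, composite_volume_ml=1000.0):
    """Aliquot (ml) to draw from each stratum's discrete sample so that the
    physical composite reproduces the same volume weighting."""
    total = sum(volumes)
    return [composite_volume_ml * v / total for v in volumes]

strata_volumes = [5.2e6, 3.1e6, 1.4e6, 0.3e6]   # m^3 per depth stratum (illustrative)
tp = [12.0, 14.5, 18.0, 25.0]                   # e.g. total phosphorus, ug/L
print(volume_weighted_composite(strata_volumes, tp))
print(aliquot_volumes(strata_volumes))
```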

11.
The paper deals with the problem of estimating diversity indexes for an ecological community. First, the species abundances are estimated unbiasedly and consistently using designs based on n random and independent selections of plots, points, or lines over the study area. The problem of sampling elusive populations is also considered. Finally, the diversity index estimates are obtained as functions of the abundance estimates. The resulting estimators turn out to be asymptotically (large n) unbiased, although considerable bias may occur for small n. Accordingly, jackknifing is used to reduce the bias.
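As an illustration of the bias-reduction step, a leave-one-out jackknife over the n sampling units can be applied to any diversity index computed from the abundance data; the sketch shows it for the Shannon index with plot counts. The paper's abundance estimators for plot, point, and line designs are not reproduced here.

```python
import numpy as np

def shannon(counts):
    """Shannon diversity H' from species counts pooled over sampling units."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def jackknife_index(unit_by_species, index=shannon):
    """First-order jackknife bias correction of a diversity index computed
    from an n-units-by-species count matrix."""
    n = unit_by_species.shape[0]
    theta_hat = index(unit_by_species.sum(axis=0))
    loo = np.array([index(np.delete(unit_by_species, i, axis=0).sum(axis=0))
                    for i in range(n)])
    return n * theta_hat - (n - 1) * loo.mean()   # jackknife estimate

rng = np.random.default_rng(4)
counts = rng.poisson(lam=[8, 4, 2, 1, 0.5], size=(12, 5))  # 12 plots, 5 species
print(shannon(counts.sum(axis=0)), jackknife_index(counts))
```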

12.
Macdonald and Pitcher's method of decomposing a size-frequency histogram into cohorts (mathematical optimization of the fit of the distribution function to the histogram) has been used to estimate the composition of random samples drawn from populations with known cohort structure. The large-sample behaviour of the method is in accordance with the results of asymptotic theory. With sample sizes typical of those used in many ecological studies, good estimates often could not be obtained without imposing constraints upon the estimation procedure, even when the number of age classes in the population was known. If the number of age classes was not known, it was frequently difficult to determine from small samples. When unconstrained solutions were obtainable, confidence limits about the estimates were often very wide. Our results, together with the theoretical literature, indicate that if the Petersen method (whereby several modes on a size-frequency histogram are taken to represent single age classes and all age classes are assumed to be present) does not work, accurate estimates of demographic parameters are unlikely to be obtainable using more rigorous methods. In view of these difficulties, we recommend that an optimization method, such as that described by Macdonald and Pitcher, be used to estimate demographic parameters, and that standard errors of the estimates be reported. Optimization methods give an indication when the data are inadequate to obtain accurate parameter estimates, either by failing to converge or by placing large standard errors about the estimates. Graphical methods do not give a clear warning of this and should be avoided except where the modes on the size-frequency histogram are very well separated and sample sizes are large. Often, assumptions must be made about population parameters to enable their estimation. This may involve constraining some parameters to particular values, assuming a fixed relationship between cohort mean sizes and their standard deviations, or assuming that individuals grow according to a von Bertalanffy curve. Any such assumptions need detailed justification in each case.
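Macdonald and Pitcher fit a constrained finite mixture to the binned size-frequency data by maximum likelihood. As a rough stand-in, an unconstrained normal mixture can be fitted to raw lengths with scikit-learn; this reproduces the basic decomposition into cohorts but not the grouped-data likelihood, parameter constraints, or standard errors the papers rely on. The simulated cohorts below are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)

# simulated lengths from three overlapping cohorts (means, sds, sizes assumed)
lengths = np.concatenate([
    rng.normal(20, 2.0, 150),
    rng.normal(30, 3.0, 100),
    rng.normal(42, 4.0, 50),
]).reshape(-1, 1)

gm = GaussianMixture(n_components=3, n_init=10, random_state=0).fit(lengths)
order = np.argsort(gm.means_.ravel())
print("means:", gm.means_.ravel()[order])
print("sds:  ", np.sqrt(gm.covariances_.ravel()[order]))
print("props:", gm.weights_[order])
```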

13.
J. A. Downing, Marine Biology 103(2): 231–234 (1989)
Normative variance functions can be used to accurately predict sampling requirements, but such empirically derived formulae are continuous functions that can predict levels of sampling precision that cannot logically occur in discrete population samples. General formulae are presented that allow calculation of upper and lower boundary constraints on attainable levels of sampling precision. These boundary constraints would have a significant influence on sampling design only where populations are so sparse that samples consist mainly of presence–absence data. A previously published empirical equation for predicting the number of samples required to estimate a freshwater benthos population correctly shows that using a small sampler can yield up to a 50-fold reduction in the amount of sediment processed, regardless of these constraints. A previously published empirical equation for predicting sampling variance, based on over 3000 sets of replicate samples of marine benthos populations, suggests that using small samplers rather than large ones requires processing between one-half and one-twentieth of the sediment for the same level of precision. It is concluded that discussions of sampling optimization should be based on knowledge of real sampling costs.

14.
M. J. Riddle, Marine Biology 103(2): 225–230 (1989)
To calculate the number of samples required to estimate population density to a specified precision, a prior estimate of the sample variance is needed. Using data from the freshwater benthic literature, Downing (1979, 1980a) calculated a regression equation to predict the sample variance from sampler size and population density. He used the predicted sample variance to calculate the number of samples, over a range of sampler sizes, required to estimate a range of population densities to a specified precision. He concludes that massive savings (1300 to 5000%) in the total surface area sampled may be achieved by using sample units of small surface area. These conclusions are misleading. The data set used for the regression does not adequately cover the combination of a low-density population sampled by a device of small surface area. The benthic community of Belhaven Bay, East Lothian, Scotland, was sampled in 1982 with a 0.1 m² grab and a 0.0018 m² corer, providing 112 sets of replicate data that were used to test the hypothesis that, for a specified precision of the mean, a considerable saving in the total area sampled may be obtained by sampling with a device of small surface area. The benthos of Loch Creran, Argyll, Scotland, was sampled with contiguous corer samples on four occasions in 1980 and 1981, providing 234 independent sets of replicate data. Contiguous samples were grouped to form several simulated series of samples of increasing surface area. A sampler of small surface area provided a saving in total area sampled of about 20%. Whether such a small saving is justifiable will depend on the extra field expenses incurred by taking many small samples.
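The calculation at issue in both papers is the standard one: with a prior variance estimate s², the number of replicates needed for a relative standard error D of the mean is n = s² / (D² x̄²), and the total sediment processed is n times the sampler area. The sketch below uses a generic Taylor power law s² = a·m^b as a stand-in for Downing's empirical regression; the coefficients a and b and the density value are illustrative assumptions, not his published values, so the size of any "saving" here is purely demonstrative.

```python
import math

def n_required(mean_per_sample, var_per_sample, rel_precision=0.2):
    """Replicates needed so that SE(mean)/mean <= rel_precision."""
    return math.ceil(var_per_sample / (rel_precision * mean_per_sample) ** 2)

def total_area(sampler_area_m2, density_per_m2, a=1.5, b=1.3, rel_precision=0.2):
    """Total surface area processed for a given sampler, with the per-sample
    variance taken from an assumed Taylor power law s^2 = a * m^b."""
    m = density_per_m2 * sampler_area_m2          # expected count per sample
    s2 = a * m ** b
    return n_required(m, s2, rel_precision) * sampler_area_m2

for area in (0.1, 0.0018):                        # grab vs. small corer
    print(area, round(total_area(area, density_per_m2=200.0), 4))
```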

15.
A new tool based on the principle of maximum entropy is presented for enhanced hot spot detection using composite sampling, with no (or minimal) requirement for composite breakdown. The methodology presented here is easy to implement and facilitates the use of multiple criteria for evaluating attainment of site remediation objectives. The new methodology provides very simple decision rules that can easily be used by non-statisticians and complements the use of composites for the control of residual mean concentrations.

16.
17.
18.
Composite sampling techniques for identifying the largest individual sample value seem to be cost effective when the composite samples are internally homogeneous. However, since it is not always possible to form homogeneous composite samples, these methods can lead to higher costs than expected. In this paper we propose a two-way composite sampling design as a way to improve on the cost effectiveness of the methods available to identify the largest individual sample value.

19.
Along with increased activity in source sampling for organics, there have been many improvements in the methods of acquiring samples. Much has been learned about how best to proceed, and a number of potentially serious pitfalls have been discovered, characterized, and circumvented. Unfortunately, communication of all of this new technology has not always been effective.

This paper reviews some of the more important fundamental principles involved in stack sampling for organics, briefly describes and discusses recently developed equipment, and points out a few of the more serious pitfalls to be avoided. Extensive references are provided, many of which are often overlooked by newcomers to the field. The conclusion is reached that it is possible to consistently obtain high-quality samples of organic materials from stationary source stacks, although doing so demands knowledge and caution.

20.
Emergy studies have suffered criticism due to the lack of uncertainty analysis, and this shortcoming may have directly hindered the wider application and acceptance of the methodology. Recently, to fill this gap, the sources of uncertainty in emergy analysis were described, and analytical and stochastic methods were put forward to estimate the uncertainty in unit emergy values (UEVs). However, the most common method used to determine UEVs is the emergy table-form model, and only a stochastic method (i.e., the Monte Carlo method) was provided to estimate the uncertainty of values calculated in this way. To simplify the determination of uncertainties in emergy analysis using table-form calculations, we introduced two analytical methods provided by the Guide to the Expression of Uncertainty in Measurement (GUM), the Variance method and the Taylor method, to estimate the uncertainty of emergy table-form calculations for two different types of data, and compared them with the stochastic method in two case studies. The results showed that, when replicate data are available at the system level, i.e., the same data on inputs and output are measured repeatedly in several independent systems, the Variance method is the simplest and most reliable method for determining the uncertainty of the model output, since it considers the underlying covariance of the inputs and requires no assumptions about the probability distributions of the inputs. However, when replicate data are only available at the subsystem level, i.e., repeat samples are measured on subsystems without a specific correspondence between an output and a certain suite of inputs, the Taylor method is a better option for calculating uncertainty, since it requires less information and is easier to understand and perform than the Monte Carlo method.
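As a concrete reference, first-order Taylor propagation (the GUM "law of propagation of uncertainty") combines input standard uncertainties through the partial derivatives of the output function, u²(y) = Σᵢ Σⱼ (∂f/∂xᵢ)(∂f/∂xⱼ) u(xᵢ, xⱼ), and can be checked against a Monte Carlo run. The table-form function f and the input uncertainties below are placeholders, not values from the paper's case studies, and independence of the inputs is assumed for the diagonal covariance.

```python
import numpy as np

def taylor_uncertainty(f, x, cov, eps=1e-6):
    """First-order (GUM) propagation: u_y^2 = J @ cov @ J with numerical
    partial derivatives J of f evaluated at the input vector x."""
    x = np.asarray(x, dtype=float)
    J = np.empty_like(x)
    for i in range(x.size):
        h = eps * max(abs(x[i]), 1.0)
        xp, xm = x.copy(), x.copy()
        xp[i] += h
        xm[i] -= h
        J[i] = (f(xp) - f(xm)) / (2 * h)        # central-difference derivative
    return float(np.sqrt(J @ cov @ J))

# placeholder table-form model: UEV = (sum of input emergies) / output energy
def uev(v):
    return (v[0] + v[1] + v[2]) / v[3]

x = np.array([3.0e15, 1.2e15, 0.8e15, 2.5e6])   # inputs (sej) and output (J)
u = np.array([0.3e15, 0.2e15, 0.1e15, 0.2e6])   # standard uncertainties
cov = np.diag(u**2)                             # assume independent inputs

print("Taylor:", taylor_uncertainty(uev, x, cov))

# Monte Carlo check of the same propagation
rng = np.random.default_rng(6)
draws = rng.normal(x, u, size=(100000, 4))
print("Monte Carlo:", np.array([uev(d) for d in draws]).std())
```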
