Similar articles
20 similar articles retrieved.
1.
We compare the performance of a number of estimators of the cumulative distribution function (CDF) for the following scenario: imperfect measurements are taken on an initial sample from a finite population, and perfect measurements are obtained on a small calibration subset of the initial sample. The estimators we consider include two naive estimators using perfect and imperfect measurements; the ratio, difference and regression estimators for a two-phase sample; a minimum-MSE estimator; Stefanski and Bay's (1996) SIMEX estimator; and two proposed estimators. The proposed estimators take the form of a weighted average of perfect and imperfect measurements. They are constructed by minimizing variance among the class of weighted averages subject to an unbiasedness constraint, and they differ in the manner of estimating the weight parameters. The first uses direct sample estimates; the second tunes the unknown parameters to an underlying normal distribution. We compare the root mean square error (RMSE) of the proposed estimators against the other potential competitors through computer simulations. Our simulations show that the second proposed estimator has the smallest RMSE among the nine compared, and that the reduction in RMSE is substantial when the calibration sample is small and the measurement error is medium or large.
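As a rough sketch of the weighted-average construction described above (assuming, for illustration only, that both component CDF estimators are unbiased; the authors' actual estimators and weights may differ), the combined estimator and the variance-minimizing weight take the familiar form
$$\hat F_w(t) = w\,\hat F_c(t) + (1-w)\,\hat F_e(t), \qquad
w^{*} = \frac{\operatorname{Var}\!\big(\hat F_e(t)\big) - \operatorname{Cov}\!\big(\hat F_c(t),\hat F_e(t)\big)}{\operatorname{Var}\!\big(\hat F_c(t)\big) + \operatorname{Var}\!\big(\hat F_e(t)\big) - 2\operatorname{Cov}\!\big(\hat F_c(t),\hat F_e(t)\big)},$$
where $\hat F_c$ is based on the calibration (perfect) measurements and $\hat F_e$ on the error-prone measurements; the two proposed estimators differ in how the variance and covariance terms are estimated.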

2.
Although not design-unbiased, the ratio estimator is recognized as more efficient when a certain degree of correlation exists between the variable of primary interest and the auxiliary variable. The Rao–Blackwell method is another commonly used procedure for improving estimation efficiency. Various improved ratio estimators under adaptive cluster sampling (ACS) that make use of the auxiliary information together with the Rao–Blackwellized univariate estimators have been proposed in previous studies. In this article, the variances and the associated variance estimators of these improved ratio estimators are proposed to give a thorough framework of statistical inference under ACS. Performance of the proposed variance estimators is evaluated in terms of the absolute relative percentage bias and the empirical mean-squared error. As expected, results show that both the absolute relative percentage bias and the empirical mean-squared error decrease as the initial sample size increases for all the variance estimators. Confidence intervals based on these variance estimators and the finite-population Central Limit Theorem are evaluated by their coverage rate and interval width. These confidence intervals suffer a disadvantage similar to that of the conventional ratio estimator; hence, alternative confidence intervals based on a certain type of adjusted variance estimator are constructed and assessed in this article.
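For reference, the two evaluation criteria named above are typically computed from $R$ simulation replicates as
$$\mathrm{ARB}(\hat v) = 100\times\frac{\big|\tfrac{1}{R}\sum_{r=1}^{R}\hat v_r - V\big|}{V}, \qquad
\widehat{\mathrm{MSE}}(\hat v) = \frac{1}{R}\sum_{r=1}^{R}\big(\hat v_r - V\big)^2,$$
where $\hat v_r$ is the variance estimate from replicate $r$ and $V$ is the true (simulation) variance; this is a standard convention and may differ in detail from the article's definitions.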

3.
Rao-Blackwellization is used to improve the unbiased Hansen–Hurwitz and Horvitz–Thompson estimators in adaptive cluster sampling by taking the conditional expectation of the original unbiased estimators given the sufficient or minimal sufficient statistic. In principle, the same idea can be used to find better ratio estimators; however, the calculations, which must take all possible combinations into account, can be extremely tedious in practice, and simplified analytical forms of such ratio estimators are not currently available. For practical use, several improved ratio estimators in adaptive cluster sampling are proposed in this article. The proposed ratio estimators are not the true Rao-Blackwellized versions of the original ones but make use of the Rao-Blackwellized univariate estimators. How to calculate the proposed estimators is illustrated, and their performance is evaluated using both a bivariate Poisson clustered process and a real data set. The simulation results indicate that the proposed improved ratio estimators provide considerably better estimates than the original ones.

4.
Thompson (1990) introduced the adaptive cluster sampling design and developed two unbiased estimators, the modified Horvitz-Thompson (HT) and Hansen-Hurwitz (HH) estimators, for this sampling design, noting that these estimators are not functions of the minimal sufficient statistic. He applied the Rao-Blackwell theorem to improve them. Despite having smaller variances, these latter estimators have not received attention because a suitable method or algorithm for computing them was not available. In this paper we obtain closed forms of the Rao-Blackwell versions which can easily be computed. We also show that Rao-Blackwellization yields a greater variance reduction for the HH estimator than for the HT estimator. When the condition for extra samples is $\{y : y > 0\}$, one can expect some Rao-Blackwell improvement in the HH estimator but not in the HT estimator. Two examples are given.
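The Rao-Blackwell construction referred to here conditions an unbiased estimator on the (minimal) sufficient statistic $D$:
$$\hat\theta_{RB} = E\big(\hat\theta \mid D\big), \qquad
\operatorname{Var}\big(\hat\theta\big) = \operatorname{Var}\big(\hat\theta_{RB}\big) + E\big[\operatorname{Var}\big(\hat\theta \mid D\big)\big] \ \ge\ \operatorname{Var}\big(\hat\theta_{RB}\big),$$
so unbiasedness is preserved and the variance can only decrease; the paper's contribution is a computable closed form for this conditional expectation in adaptive cluster sampling.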

5.
With increasing concern over chemicals that are potential health hazards at low levels, the determination of limits of detection has undergone considerable scrutiny. Most traditional detection limit estimators suffer from extensive statistical and/or conceptual limitations. In this paper, traditional detection limit estimators are described and critically evaluated. Using the terminology of Currie (1968), methods are categorized into decision limits versus detection limits, and further categorized into single-concentration design versus calibration design methodologies. While the single-concentration design methods are useful for fixing ideas and clarifying definitions, they are shown to be extremely limited in practice since the dependence of variability on concentration can neither be estimated nor incorporated. Calibration-based detection limit estimators are described, compared and contrasted. Generalizations to non-constant variance, multiple future detection decisions and simultaneous control of Type I and II errors are provided. The various calibration-based methods are illustrated using real data, and experimental design issues for detection limit studies are discussed.

6.
When sample observations are expensive or difficult to obtain, ranked set sampling is known to be an efficient method for estimating the population mean, and in particular to improve on the sample mean estimator. Using best linear unbiased estimators, this paper considers the simple linear regression model with replicated observations. Use of a form of ranked set sampling is shown to be markedly more efficient for normal data when compared with the traditional simple linear regression estimators.

7.
In this article we consider asymptotic properties of the Horvitz-Thompson and Hansen-Hurwitz types of estimators under the adaptive cluster sampling variants obtained by selecting the initial sample by simple random sampling without replacement and by unequal probability sampling with replacement. We develop an asymptotic framework which assumes that the number of units in the initial sample, as well as the number of units and networks in the population, tends to infinity, but that the network sizes are bounded. Using this framework we prove that, under each of the two variants of adaptive sampling mentioned above, both the Horvitz-Thompson and Hansen-Hurwitz types of estimators are design-consistent and asymptotically normally distributed. In addition, we show that the ordinary estimators of their variances are also design-consistent.
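For orientation, the two estimator types have the familiar non-adaptive forms for a population total $\tau$: under without-replacement sampling with inclusion probabilities $\pi_i$, and under with-replacement sampling with $n$ draws and draw probabilities $p_i$,
$$\hat\tau_{HT} = \sum_{i\in s}\frac{y_i}{\pi_i}, \qquad
\hat\tau_{HH} = \frac{1}{n}\sum_{i=1}^{n}\frac{y_i}{p_i};$$
the adaptive cluster sampling variants studied in the article replace $y_i$ by network-level quantities and use the corresponding network intersection probabilities.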

8.
Geostatistics is a set of statistical techniques increasingly used to characterize spatial dependence in spatially referenced ecological data. A common feature of geostatistics is predicting values at unsampled locations from nearby samples using the kriging algorithm. Modeling spatial dependence in sampled data is necessary before kriging and is usually accomplished with the variogram and its traditional estimator. Other types of estimators, known as non-ergodic estimators, have been used in ecological applications. Non-ergodic estimators were originally suggested as a method of choice when sampled data are preferentially located and exhibit a skewed frequency distribution. Preferentially located samples can occur, for example, when areas with high values are sampled more intensively than other areas. Earlier studies compared the visual appearance of variograms from traditional and non-ergodic estimators; here we evaluate the estimators' relative performance in prediction. We also show algebraically that a non-ergodic version of the variogram is equivalent to the traditional variogram estimator. Simulations designed to investigate the effects of data skewness and preferential sampling on variogram estimation and kriging showed that the traditional variogram estimator outperforms the non-ergodic estimators under these conditions. We also analyzed data on carabid beetle abundance, which exhibited large-scale spatial variability (trend) and a skewed frequency distribution. Detrending the data followed by robust estimation of the residual variogram is demonstrated to be a successful alternative to the non-ergodic approach.
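The "traditional estimator" referred to above is usually the Matheron method-of-moments estimator of the semivariogram,
$$\hat\gamma(h) = \frac{1}{2\,|N(h)|}\sum_{(i,j)\in N(h)}\big(z(\mathbf{s}_i) - z(\mathbf{s}_j)\big)^2,$$
where $N(h)$ is the set of location pairs separated (approximately) by lag $h$ and $z(\mathbf{s}_i)$ is the observed value at location $\mathbf{s}_i$; broadly speaking, the non-ergodic alternatives replace the squared differences with covariance- or correlogram-based quantities computed from lag-specific means.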

9.
Closed capture-recapture (CR) estimators have been used extensively to estimate population size. Most closed CR approaches have been developed and evaluated for discrete-time models, but there has been little effort to evaluate their continuous-time counterparts. Continuous-time estimators that allow capture probabilities to vary over time, developed using maximum likelihood theory by Craig (1953) and Darroch (1958) and martingale theory by Becker (1984), were evaluated using Monte Carlo simulation. Overall, the ML estimators had a smaller MSE. The estimators performed well when model assumptions were upheld, and were somewhat robust to heterogeneity in capture probabilities; however, they were not robust to behavioural effects in the capture probabilities. Time-lag effects (periods when animals might be unavailable for immediate recapture) on continuous-time estimates were also investigated, and the results indicated a positive bias that was greater for smaller populations. There was no gain in performance when using a continuous-time estimator versus a discrete-time estimator on the same simulated data. Usefulness of the continuous-time approach may be limited to study designs where animals are easier to sample using continuous-time methodology.

10.
An estimating function approach to the inference of catch-effort models
A class of catch-effort models, which allows for heterogeneous removal probabilities, is proposed for closed populations. The model includes three types of removal probabilities: multiplicative, Poisson and logistic. The usual removal and generalized removal models then become special cases. The equivalence of the proposed model and a special type of capture-recapture model is discussed. A unified estimating function approach is used to estimate the initial population size. For the homogeneous model, the resulting population size estimator based on optimal estimating functions is asymptotically equivalent to the maximum likelihood estimator. One advantage of our approach is that it can be extended to handle heterogeneous populations, for which maximum likelihood estimators do not exist. The bootstrap method is applied to construct variance estimators and confidence intervals. We illustrate the method with two real data examples. Results of a simulation study investigating the performance of the proposed estimation procedure are presented.
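As a hedged illustration of the kinds of removal-probability forms involved (the article's exact parametrizations may differ), catch-effort models often relate the removal probability $p_j$ in period $j$ to the effort $e_j$ via
$$p_j = 1 - \theta^{\,e_j} \quad (\text{multiplicative},\ 0<\theta<1), \qquad
p_j = 1 - e^{-k e_j} \quad (\text{Poisson-type},\ k>0),$$
with a logistic link providing a third option; heterogeneity enters by allowing these probabilities to vary across animals.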

11.
Nonparametric mean estimation using partially ordered sets
In ranked-set sampling (RSS), the ranker must give a complete ranking of the units in each set. In this paper, we consider a modification of RSS that allows the ranker to declare ties. Our sampling method is simply to break the ties at random so that we obtain a standard ranked-set sample, but also to record the tie structure for use in estimation. We propose several different nonparametric mean estimators that incorporate the tie information, and we show that the best of these estimators is substantially more efficient than estimators that ignore the ties. As part of our comparison of estimators, we develop new results about models for ties in rankings. We also show that there are settings where, to achieve more efficient estimation, ties should be declared not just when the ranker is actually unsure about how the units rank, but also when the ranker is sure about the ranking yet believes that the units are close.

12.
Beissinger SR, Peery MZ. Ecology 2007, 88(2): 296-305.
Reducing extinction risk for threatened species requires determining which demographic parameters are depressed and causing population declines. Museum collections may constitute a unique, underutilized resource for measuring demographic changes over long time periods using age-ratio analysis. We reconstruct the historic demography of a U.S. federally endangered seabird, the Marbled Murrelet (Brachyramphus marmoratus), from specimens collected approximately 100 years ago for comparison with predictions from comparative analyses and with results from contemporary field studies using both age-ratio analysis and conventional demographic estimators. Reproduction in the late 1800s and early 1900s matched predictions from comparative analysis, but was 8-9 times greater than contemporary estimates, whereas adult survival was unchanged. Historic reproductive rates would support stable populations, but contemporary levels should result in population declines. Contemporary demographic estimates derived from age-ratio analysis were similar to estimates from conventional estimators. Using museum specimens to reconstruct historic demography provides a unique approach to identify causes of decline and to set demographic benchmarks for recovery of endangered species that meet most assumptions of age-ratio analysis.

13.
Thompson (1990) introduced the adaptive cluster sampling design. This sampling design has been shown to be a useful method for estimating the parameters of a clustered and scattered population (Roesch, 1993; Smith et al., 1995; Thompson and Seber, 1996). Two estimators, the modified Hansen-Hurwitz (HH) and Horvitz-Thompson (HT) estimators, are available to estimate the mean or total of a population. Empirical results from previous research indicate that the modified HT estimator has smaller variance than the modified HH estimator. We analytically compare the properties of these two estimators. Some results are obtained in favor of the modified HT estimator, so practitioners are strongly recommended to use the HT estimator despite the computational simplicity of the HH estimator.
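For reference, with an initial simple random sample without replacement of $n$ units from $N$, Thompson's modified estimators of the population mean can be written (in the standard notation; details such as edge-unit handling are omitted here) as
$$\hat\mu_{HH} = \frac{1}{n}\sum_{i=1}^{n} w_i, \qquad
\hat\mu_{HT} = \frac{1}{N}\sum_{k\,\in\,\text{networks intersected}}\frac{y_k^{*}}{\alpha_k},
\qquad \alpha_k = 1 - \binom{N-x_k}{n}\Big/\binom{N}{n},$$
where $w_i$ is the mean of the $y$-values in the network containing the $i$-th initial unit, $y_k^{*}$ and $x_k$ are the total and size of network $k$, and $\alpha_k$ is the probability that the initial sample intersects network $k$.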

14.
This paper develops statistical inference for the population mean and total using stratified judgment post-stratified (SJPS) samples. The SJPS design selects a judgment post-stratified sample from each stratum; hence, in addition to the stratum structure, it induces additional ranking structure within the stratum samples. SJPS is constructed from a finite population using either a with-replacement or a without-replacement sampling design. Inference is constructed under both randomization theory and a super-population model. In both approaches, the paper shows that the estimators of the population mean and total are unbiased. The paper also constructs unbiased estimators for the variance (mean square prediction error) of the sample mean (predictor of the population mean), and develops confidence and prediction intervals for the population mean. The empirical evidence shows that the proposed estimators perform better than their competitors in the literature.
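A minimal sketch of the stratified combination (the article's estimators carry additional structure from the within-stratum rankings): with stratum weights $W_h = N_h/N$ and a judgment post-stratified estimator $\hat\mu_h$ computed within stratum $h$,
$$\hat\mu_{SJPS} = \sum_{h=1}^{H} W_h\,\hat\mu_h, \qquad \hat\tau_{SJPS} = N\,\hat\mu_{SJPS},$$
and, under independent sampling across strata, a variance estimator combines the within-stratum variance estimators with the same weights squared.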

15.
Jobe RT. Ecology 2008, 89(1): 174-182.
One hypothesis for why estimators of species richness tend to underestimate total richness is that they do not explicitly account for increases in species richness due to spatial or environmental turnover in species composition (beta diversity). I analyze the similarity of a data set of native trees in Great Smoky Mountains National Park, USA, and assess the robustness of these estimators against recently developed ones that incorporate turnover explicitly: the total species accumulation method (T-S) and a method based on the distance decay of similarity. I show that the T-S estimator can give reliable estimates of species richness, given an appropriate grouping of sites. The estimator based on distance decay of similarity performed poorly, for two main reasons: sample size effects and the assumption that distance decay of similarity exhibits a power-law relationship. First, I show that estimators based on distance-decay relationships exhibit systematically lower rates of distance decay for samples with few individuals per site, independent of environmental variation. Second, the data presented here and many other survey data sets exhibit exponential rather than power-law distance-decay relationships. Richness estimators that explicitly incorporate beta diversity can be improved by beginning from an exponential distance-decay relationship and adjusting for the systematic errors introduced by small sample sizes.
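The two functional forms at issue are, for similarity $S$ between sites a distance $d$ apart,
$$S(d) = S_0\,e^{-d/\lambda} \quad (\text{exponential decay}) \qquad \text{versus} \qquad S(d) = c\,d^{-b} \quad (\text{power law}),$$
so the exponential form is linear in a plot of $\log S$ against $d$, whereas the power law is linear in $\log S$ against $\log d$; the parametrization shown here is generic rather than the article's.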

16.
Estimates of a population's growth rate and process variance from time-series data are often used to calculate risk metrics such as the probability of quasi-extinction, but temporal correlations in the data arising from sampling error, intrinsic population factors, or environmental conditions can bias process variance estimators and detrimentally affect risk predictions. It has been claimed (McNamara and Harding, Ecol Lett 7:16-20, 2004) that estimates of the long-term variance that incorporate observed temporal correlations in population growth are unaffected by sampling error; however, no estimation procedures were proposed for time-series data. We develop a suite of such long-term variance estimators, and use simulated data with temporally autocorrelated population growth and sampling error to evaluate their performance. In some cases, we get nearly unbiased long-term variance estimates despite ignoring sampling error, but the utility of these estimators is questionable because of large estimation uncertainty and difficulties in estimating the correlation structure in practice. Process variance estimators that ignored temporal correlations generally gave more precise estimates of the variability in population growth and of the probability of quasi-extinction. We also found that estimation of the probability of quasi-extinction was greatly improved when quasi-extinction thresholds were set relatively close to population levels. Because of precision concerns, we recommend using simple models for risk estimates despite potential biases, and limiting inference to quantifying relative risk, e.g., changes in risk over time for a single population or comparative risk among populations.
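One textbook version of the "long-term variance" idea (a sketch only; the article develops a suite of such estimators for time-series data): if the annual growth increments follow a stationary AR(1) process with marginal variance $\sigma^2$ and lag-1 autocorrelation $\rho$, the variance relevant to long-horizon projections is
$$\sigma^2_{\infty} = \sigma^2\,\frac{1+\rho}{1-\rho},$$
which exceeds $\sigma^2$ for positively autocorrelated growth and is smaller for negatively autocorrelated growth.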

17.
Practical considerations often motivate employing variable probability sampling designs when estimating characteristics of forest populations. Three distribution function estimators, the Horvitz-Thompson estimator, a difference estimator, and a ratio estimator, are compared following variable probability sampling in which the inclusion probabilities are proportional to an auxiliary variable, X. Relative performance of the estimators is affected by several factors, including the distribution of the inclusion probabilities, the correlation (ρ) between X and the response Y, and the position along the distribution function being estimated. Both the ratio and difference estimators are superior to the Horvitz-Thompson estimator. The difference estimator gains better precision than the ratio estimator toward the upper portion of the distribution function, but the ratio estimator is superior toward the lower end of the distribution function. The point along the distribution function at which the difference estimator becomes more precise than the ratio estimator depends on the sampling design, as well as on the coefficient of variation of X and on ρ. A simple confidence interval procedure provides close to nominal coverage for intervals constructed from both the difference and ratio estimators, with the exception that coverage may be poor for the lower tail of the distribution function when using the ratio estimator.
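As a sketch of the estimators being compared (the difference and ratio forms in the article also exploit the auxiliary variable X and may differ from these generic versions), the Horvitz-Thompson estimator of the finite-population distribution function and a simple ratio-type (Hájek) alternative are
$$\hat F_{HT}(t) = \frac{1}{N}\sum_{i\in s}\frac{\mathbb{1}(y_i\le t)}{\pi_i}, \qquad
\hat F_{R}(t) = \frac{\sum_{i\in s}\mathbb{1}(y_i\le t)/\pi_i}{\sum_{i\in s} 1/\pi_i},$$
with $\pi_i \propto x_i$ under the probability-proportional-to-X designs considered.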

18.
Forest surveys performed over a large scale (e.g. national inventories) involve several phases of sampling. The first phase is usually performed by means of a systematic search of the study region, in which the region is partitioned into regular polygons of the same size and points are randomly or systematically selected, one per polygon. In most cases, first-phase points are selected and recognized in orthophotos or very high resolution satellite images available for the whole study area. Disregarding the subsequent phases, the first phase of sampling can be effectively adopted to select small woodlots and tree rows, in the sense that a unit is selected when at least one first-phase point falls within it. On the basis of such a scheme of sampling, approximately unbiased estimators of abundance, coverage and other physical attributes readily measurable from orthophotos (e.g. tree-row length) are proposed, together with estimators of the corresponding variances. A simulation study is performed in order to check the performance of the estimators under several distributions of units over the study area (random, clustered, spatially trended).
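A minimal sketch of why such first-phase selection yields approximately unbiased estimators, under simplifying assumptions of mine rather than the paper's (each small woodlot lies entirely inside one tessellation polygon of area $A$, and one point is placed uniformly at random in each polygon): woodlot $i$ of area $a_i$ is then selected with probability $\pi_i = a_i/A$, so a Horvitz-Thompson-type estimator of the number of woodlots is
$$\hat M = \sum_{i\in s}\frac{1}{\pi_i} = \sum_{i\in s}\frac{A}{a_i},$$
and attributes such as tree-row length are estimated analogously by weighting each selected unit by $1/\pi_i$.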

19.

For many clustered populations, prior information on an initial stratification exists, but the exact pattern of the population concentration may not be predictable. In this situation, stratified adaptive cluster sampling (SACS) may provide more efficient estimates than other conventional sampling designs for the estimation of rare and clustered population parameters. For practical use, we propose a generalized ratio estimator with a single auxiliary variable under the SACS design. Expressions for the approximate bias and mean squared error (MSE) of the proposed estimator are derived. Numerical studies are carried out to compare the performance of the proposed generalized estimator with the usual mean and combined ratio estimators under conventional stratified random sampling (StRS), using a real population of redwood trees in California and an artificial population generated by a Poisson cluster process. Simulation results show that the proposed class of estimators may provide more efficient results than the other estimators considered in this article for the estimation of a highly clumped population.
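For reference, the combined ratio estimator used as a benchmark under stratified random sampling is
$$\hat{\bar Y}_{RC} = \frac{\bar y_{st}}{\bar x_{st}}\,\bar X, \qquad \bar y_{st} = \sum_{h} W_h\,\bar y_h,\quad \bar x_{st} = \sum_{h} W_h\,\bar x_h,\quad W_h = \frac{N_h}{N},$$
where $\bar X$ is the known population mean of the auxiliary variable; the generalized ratio estimator proposed under SACS replaces the within-stratum sample means by their adaptive (network-based) counterparts, which describes the general idea rather than the article's exact form.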


20.
Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture–recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored.
