Similar Documents
20 similar documents found (search time: 46 ms)
1.
Although not design-unbiased, the ratio estimator is recognized as more efficient when a certain degree of correlation exists between the variable of primary interest and the auxiliary variable. Meanwhile, the Rao–Blackwell method is another commonly used procedure to improve estimation efficiency. Various improved ratio estimators under adaptive cluster sampling (ACS) that make use of the auxiliary information together with the Rao–Blackwellized univariate estimators have been proposed in past research studies. In this article, the variances and the associated variance estimators of these improved ratio estimators are proposed for a thorough framework of statistical inference under ACS. Performance of the proposed variance estimators is evaluated in terms of the absolute relative percentage bias and the empirical mean-squared error. As expected, results show that both the absolute relative percentage bias and the empirical mean-squared error decrease as the initial sample size increases for all the variance estimators. To evaluate the confidence intervals based on these variance estimators and the finite-population Central Limit Theorem, the coverage rate and the interval width are used. These confidence intervals suffer a disadvantage similar to that of the conventional ratio estimator. Hence, alternative confidence intervals based on a certain type of adjusted variance estimators are constructed and assessed in this article.
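As a minimal illustration of the classical ratio estimator underlying this line of work (a sketch only, not the ACS-specific Rao–Blackwellized estimators of the article), the estimator scales the known population mean of the auxiliary variable by the sample ratio of the two variables:

```python
import numpy as np

def ratio_estimate(y, x, x_pop_mean):
    """Classical ratio estimator of the population mean of y.

    y, x       : paired sample values of the study and auxiliary variables
    x_pop_mean : known population mean of the auxiliary variable
    """
    y = np.asarray(y, dtype=float)
    x = np.asarray(x, dtype=float)
    return (y.mean() / x.mean()) * x_pop_mean

# y is roughly proportional to x, which is the situation in which
# the ratio estimator beats the plain sample mean
y = [10.0, 20.0, 30.0]
x = [1.0, 2.0, 3.0]
print(ratio_estimate(y, x, x_pop_mean=2.5))  # 25.0
```

The gain over the sample mean of y grows with the correlation between y and x, which is the efficiency argument the abstract opens with.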

2.
The theory of conventional line transect surveys is based on an essential assumption that 100% detection of animals right on the transect lines can be achieved. When this assumption fails, independent observer line transect surveys are used. This paper proposes a general approach, based on a conditional likelihood, which can be carried out either parametrically or nonparametrically, to estimate the abundance of non-clustered biological populations using data collected from independent observer line transect surveys. A nonparametric estimator is specifically proposed which combines the conditional likelihood and the kernel smoothing method. It has the advantage that it allows the data themselves to dictate the form of the detection function, free of any subjective choice. The bias and the variance of the nonparametric estimator are given. Its asymptotic normality is established which enables construction of confidence intervals. A simulation study shows that the proposed estimator has good empirical performance, and the confidence intervals have good coverage accuracy.

3.
An estimating function approach to the inference of catch-effort models
A class of catch-effort models, which allows for heterogeneous removal probabilities, is proposed for closed populations. The model includes three types of removal probabilities: multiplicative, Poisson and logistic. The usual removal and generalized removal models then become special cases. The equivalence of the proposed model and a special type of capture-recapture model is discussed. A unified estimating function approach is used to estimate the initial population size. For the homogeneous model, the resulting population size estimator based on optimal estimating functions is asymptotically equivalent to the maximum likelihood estimator. One advantage of our approach is that it can be extended to handle heterogeneous populations in which the maximum likelihood estimators do not exist. The bootstrap method is applied to construct variance estimators and confidence intervals. We illustrate the method by two real data examples. Results of a simulation study investigating the performance of the proposed estimation procedure are presented.

4.
Atmospheric carbon dioxide concentration (ACDC) level is an important factor for predicting temperature and climate changes. We analyze the conditional variance of a function of the ACDC level, the ACDC level growth rate (ACDCGR), using generalised autoregressive conditional heteroskedasticity (GARCH) models and GARCH models with leverage effects. The data are a subset of the well-known Mauna Loa atmospheric carbon dioxide record. We test for the presence of stylized facts in the ACDCGR time series. The performance of the GARCH model is compared to that of the EGARCH, TGARCH and PGARCH models. The model-fit measures AIC, BIC and likelihood are calculated for each fitted model. The results confirm the presence of some of the important stylized facts in the ACDCGR time series, but the leverage effect is not significant. The out-of-sample, one-step-ahead forecasting performances of the models are evaluated using RMSE and MAE metrics. The EGARCH model with Student's $t$ disturbances showed the best fit and a valid forecasting performance. A bootstrap algorithm is employed to calculate confidence intervals for future values of the ACDCGR time series and its volatility. The constructed bootstrap confidence intervals showed a reasonable performance.
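For intuition, the conditional-variance recursion at the core of a GARCH(1,1) model can be sketched in a few lines. The parameter values below are purely illustrative; nothing here is fitted to the Mauna Loa record:

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:
        sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1},
    initialised at the unconditional variance omega / (1 - alpha - beta)."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = omega / (1.0 - alpha - beta)
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

# a volatility spike at t=1 raises the conditional variance at t=2
print(garch11_variance([0.0, 1.0, 0.0], omega=0.1, alpha=0.1, beta=0.8))
```

The leverage-effect variants compared in the abstract (EGARCH, TGARCH, PGARCH) modify this recursion so that negative and positive shocks affect the variance asymmetrically.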

5.
Species reproduction is an important determinant of population dynamics. As such, this is an important parameter in environmental risk assessment. The closure principle computational approach test (CPCAT) was recently proposed as a method to derive a NOEC/LOEC for reproduction count data, such as the number of juvenile Daphnia. The Poisson distribution used by CPCAT can be too restrictive as a model of the data-generating process. In practice, the generalized Poisson distribution can be more appropriate, as it allows for inequality of the population mean \(\mu\) and the population variance \(\sigma ^2\). It is of fundamental interest to explore the statistical power of CPCAT and the probability of determining a regulatory relevant effect correctly. In a simulation study, we varied the data-generating process between the Poisson distribution (\(\mu =\sigma ^2\)) and the generalized Poisson distribution, allowing for over-dispersion (\(\mu <\sigma ^2\)) and under-dispersion (\(\mu >\sigma ^2\)). The results indicated that the probability of detecting the LOEC/NOEC correctly was \(\ge 0.8\) provided the effect was at least 20% above or below the mean level of the control group, mean reproduction of the control was at least 50 individuals, and over-dispersion was absent. Under-dispersion increased, whereas over-dispersion reduced, the statistical power of the CPCAT. Using the well-known Hampel identifier, we propose a simple and straightforward method to assess whether the data-generating process of real data could be over- or under-dispersed.
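The Hampel identifier mentioned at the end is a robust median/MAD outlier rule. A generic sketch follows (the abstract does not detail how the article links the rule to dispersion, so this shows only the identifier itself):

```python
import numpy as np

def hampel_outliers(x, t=3.0):
    """Flag values outside median +/- t * scaled MAD (the Hampel identifier).
    The factor 1.4826 makes the MAD consistent with the standard deviation
    under normality."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))
    return np.abs(x - med) > t * mad

counts = np.array([48, 50, 51, 49, 50, 90])  # one suspiciously large brood count
print(hampel_outliers(counts))  # flags only the value 90
```

Because it is based on the median and the MAD, the rule is not distorted by the very observations it is meant to flag, unlike a mean/standard-deviation cutoff.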

6.
Analyzing animal movements using Brownian bridges
Horne JS, Garton EO, Krone SM, Lewis JS. Ecology 2007, 88(9): 2354-2363
By studying animal movements, researchers can gain insight into many of the ecological characteristics and processes important for understanding population-level dynamics. We developed a Brownian bridge movement model (BBMM) for estimating the expected movement path of an animal, using discrete location data obtained at relatively short time intervals. The BBMM is based on the properties of a conditional random walk between successive pairs of locations, dependent on the time between locations, the distance between locations, and the Brownian motion variance that is related to the animal's mobility. We describe two critical developments that enable widespread use of the BBMM, including a derivation of the model when location data are measured with error and a maximum likelihood approach for estimating the Brownian motion variance. After the BBMM is fitted to location data, an estimate of the animal's probability of occurrence can be generated for an area during the time of observation. To illustrate potential applications, we provide three examples: estimating animal home ranges, estimating animal migration routes, and evaluating the influence of fine-scale resource selection on animal movement patterns.
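The conditional random walk between two successive fixes is a Brownian bridge; its moments in one dimension can be sketched as below (the article's model is two-dimensional and additionally accounts for location error, which this sketch omits):

```python
def brownian_bridge_moments(a, b, t1, t2, t, sigma2):
    """Mean and variance of a 1-D Brownian bridge at time t, conditioned on
    position a at time t1 and position b at time t2, with Brownian motion
    variance sigma2."""
    alpha = (t - t1) / (t2 - t1)             # fraction of the interval elapsed
    mean = a + alpha * (b - a)               # straight-line interpolation
    var = sigma2 * (t - t1) * (t2 - t) / (t2 - t1)
    return mean, var

# halfway between fixes the uncertainty is largest; it shrinks to zero
# at the two observed locations
m, v = brownian_bridge_moments(a=0.0, b=10.0, t1=0.0, t2=2.0, t=1.0, sigma2=1.0)
print(m, v)  # 5.0 0.5
```

Integrating such per-interval densities over the whole track is what yields the occurrence distribution the abstract describes.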

7.
Individual heterogeneity and correlations between life history traits play a fundamental role in life history evolution and population dynamics. Unobserved individual heterogeneity in survival can be a nuisance for estimation of age effects at the individual level by causing bias due to mortality selection. We jointly analyze survival and breeding output from successful breeding attempts in an island population of Silvereyes (Zosterops lateralis chlorocephalus) by fitting models that incorporate age effects and individual heterogeneity via random effects. The number of offspring produced increased with age of parents in their first years of life but then eventually declined with age. A similar pattern was found for the probability of successful breeding. Annual survival declined with age even when individual heterogeneity was not accounted for. The rate of senescence in survival, however, depends on the variance of individual heterogeneity and vice versa; hence, both cannot be simultaneously estimated with precision. Model selection supported individual heterogeneity in breeding performance, but we found no correlation between individual heterogeneity in survival and breeding performance. We argue that individual random effects, unless unambiguously identified, should be treated as statistical nuisance or taken as a starting point in a search for mechanisms rather than given direct biological interpretation.

8.
A dynamic and heterogeneous species abundance model generating the lognormal species abundance distribution is fitted to time series of species data from an assemblage of stoneflies and mayflies (Plecoptera and Ephemeroptera) of an aquatic insect community collected over a period of 15 years. In each year except one, we analyze 5 parallel samples taken at the same time of the season, giving information about the over-dispersion in the sampling relative to the Poisson distribution. Results are derived from a correlation analysis, where the correlation in the bivariate normal distribution of log abundance is used as a measure of similarity between communities. The analysis enables decomposition of the variance of the lognormal species abundance distribution into three components due to heterogeneity among species, stochastic dynamics driven by environmental noise, and over-dispersion in sampling, accounting for 62.9, 30.6 and 6.5% of the total variance, respectively. Corrected for sampling, the heterogeneity and stochastic components accordingly account for 67.3 and 32.7% of the among-species variance in log abundance. Using this method, it is possible to disentangle the effects of heterogeneity and stochastic dynamics by quantifying these components and correctly remove sampling effects on the observed species abundance distribution.
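The sampling-corrected shares follow from the three reported components by dropping the sampling term and renormalising; a quick arithmetic check:

```python
# Variance components of log abundance, in percent of the total variance
heterogeneity, stochastic, sampling = 62.9, 30.6, 6.5

# Removing the sampling component and renormalising reproduces the
# sampling-corrected shares quoted in the abstract
among_species = heterogeneity + stochastic
het_share = round(100 * heterogeneity / among_species, 1)
sto_share = round(100 * stochastic / among_species, 1)
print(het_share, sto_share)  # 67.3 32.7
```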

9.
Adaptive cluster sampling (ACS) is an efficient sampling design for estimating parameters of rare and clustered populations. It is widely used in ecological research. The modified Hansen-Hurwitz (HH) and Horvitz-Thompson (HT) estimators based on small samples under ACS often have highly skewed distributions. In such situations, confidence intervals based on the traditional normal approximation can lead to unsatisfactory results, with poor coverage properties. Christman and Pontius (Biometrics 56:503–510, 2000) showed that bootstrap percentile methods are appropriate for constructing confidence intervals from the HH estimator. But Perez and Pontius (J Stat Comput Simul 76:755–764, 2006) showed that bootstrap confidence intervals from the HT estimator are even worse than the normal approximation confidence intervals. In this article, we consider two pseudo empirical likelihood (EL) functions under the ACS design. One leads to the HH estimator and the other leads to an HT-type estimator known as the Hájek estimator. Based on these two empirical likelihood functions, we derive confidence intervals for the population mean. Using a simulation study, we show that the confidence intervals obtained from the first EL function perform as well as the bootstrap confidence intervals from the HH estimator, while the confidence intervals obtained from the second EL function perform much better than the bootstrap confidence intervals from the HT estimator in terms of coverage rate.

10.
Analysis of brood sex ratios: implications of offspring clustering
Generalized linear models (GLMs) are increasingly used in modern statistical analyses of sex ratio variation because they are able to determine variable design effects on binary response data. However, in applying GLMs, authors frequently neglect the hierarchical structure of sex ratio data, thereby increasing the likelihood of committing 'type I' error. Here, we argue that whenever clustered (e.g., brood) sex ratios represent the desired level of statistical inference, the clustered data structure ought to be taken into account to avoid invalid conclusions. Neglecting the between-cluster variation and the finite number of clusters in determining test statistics, as implied by using likelihood-ratio-based χ²-statistics in conventional GLMs, results in biased (usually overestimated) test statistics and pseudoreplication of the sample. Random variation in the sex ratio between clusters (broods) can often be accommodated by scaling the residual binomial (error) variance for overdispersion and using F-tests instead of χ²-tests. More complex situations, however, require the use of generalized linear mixed models (GLMMs). By introducing higher-level random effects in addition to the residual error term, GLMMs allow estimation of fixed-effect and interaction parameters while accounting for random effects at different levels of the data. GLMMs are required, first, whenever there are covariates at the offspring level of the data but inferences are to be drawn at the brood level, and second, when interactions of effects at different levels of the data are to be estimated, since random fluctuation of parameters can be taken into account only in GLMMs. Data structures requiring the use of GLMMs to avoid erroneous inferences are often encountered in ecological sex ratio studies.

11.
Efficient statistical mapping of avian count data
We develop a spatial modeling framework for count data that is efficient to implement in high-dimensional prediction problems. We consider spectral parameterizations for the spatially varying mean of a Poisson model. The spectral parameterization of the spatial process is very computationally efficient, enabling effective estimation and prediction in large problems using Markov chain Monte Carlo techniques. We apply this model to creating avian relative abundance maps from North American Breeding Bird Survey (BBS) data. Variation in the ability of observers to count birds is modeled as spatially independent noise, resulting in over-dispersion relative to the Poisson assumption. This approach represents an improvement over existing approaches used for spatial modeling of BBS data, which are either inefficient for continental-scale modeling and prediction or fail to accommodate important distributional features of count data, leading to inaccurate accounting of prediction uncertainty.

12.
Perez and Pontius (J Stat Comput Simul 76:755–764, 2006) introduced several bootstrap methods under adaptive cluster sampling using a Horvitz–Thompson-type estimator. Using a simulation study, they showed that their proposed methods provide confidence intervals with highly understated coverage rates. In this article, we first show that their bootstrap methods provide biased bootstrap estimates. We then define two bootstrap methods, based on the method of Gross (Proceedings of the Survey Research Methods Section. American Statistical Association, Alexandria, VA, pp 181–184, 1980) and Bootstrap With Replacement, that provide unbiased bootstrap estimates of the population mean with bootstrap variances matching the corresponding unbiased variance estimator. Using a simulation study, we show that the bootstrap confidence intervals based on our proposed methods have better performance than those based on available bootstrap methods, in the sense of having coverage proportions closer to the nominal coverage level. We also compare the proposed intervals to empirical-likelihood-based intervals in small samples.
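For reference, the generic percentile bootstrap that these comparisons build on can be sketched as follows. This is plain i.i.d. resampling; the design-consistent ACS variants discussed above additionally encode the survey structure, which this sketch does not attempt:

```python
import numpy as np

def percentile_bootstrap_ci(sample, stat=np.mean, n_boot=2000, level=0.95, seed=0):
    """Percentile bootstrap confidence interval for a statistic:
    resample with replacement, recompute the statistic, take empirical
    quantiles of the bootstrap replicates."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample, dtype=float)
    boots = np.array([
        stat(rng.choice(sample, size=sample.size, replace=True))
        for _ in range(n_boot)
    ])
    lo, hi = np.quantile(boots, [(1 - level) / 2, (1 + level) / 2])
    return lo, hi

lo, hi = percentile_bootstrap_ci([4, 8, 15, 16, 23, 42])
print(lo, hi)  # an interval around the sample mean 18.0
```

The coverage problems cited in the abstract arise precisely because naive resampling like this ignores the unequal inclusion probabilities of an adaptive design.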

13.
Estimating temporal variance in animal demographic parameters is of particular importance in population biology. We implement Schall's algorithm for incorporating temporal random effects in survival models using recovery data. Our frequentist approach is based on a formulation of band-recovery models with random effects as generalized linear mixed models and a linearization of the link function conditional on the random effects. A simulation study shows that our procedure provides unbiased and precise estimates. The method is then applied to two case studies using recovery data on fish and birds.

14.
T. Brey. Marine Biology 1990, 106(3): 503-508
In field studies, somatic production of animals is often calculated by means of the increment summation method, which is based on consecutive samples from the population. The main disadvantage of this method is the lack of any measure of variability; the statistical significance of the calculated production value is therefore uncertain. This paper shows that in many cases a nonparametric statistical approach called the “bootstrap” can be used to overcome this problem. By means of this procedure, the natural variability of production and production-to-biomass ratios can be assessed by 95% confidence intervals, standard deviations or related parameters from a sample of limited size.

15.
Analysis of capture-recapture data often involves maximizing a complex likelihood function with many unknown parameters. Statistical inference based on selection of a proper model depends on successful attainment of this maximum. An EM algorithm is developed for obtaining maximum likelihood estimates of capture and survival probabilities conditional on first capture from standard capture-recapture data. The algorithm does not require the use of numerical derivatives, which may improve precision and stability relative to other estimation schemes. The asymptotic covariance matrix of the estimated parameters can be obtained using the supplemented EM algorithm. The EM algorithm is compared to a more traditional Newton-Raphson algorithm with both a simulated and a real dataset. The two algorithms result in the same parameter estimates, but the Newton-Raphson variance estimates depend on a numerically estimated Hessian matrix that is sensitive to the choice of step size.

16.
Respondent Experience and Contingent Valuation of Environmental Goods
Respondent experience (i.e., a respondent's information set) has long been suspected to influence contingent valuation estimates of environmental values. We assess the influence of experience by explicitly modeling the relationship between respondent experience and both fitted individual resource values and the conditional variance of these estimated values. Using three different joint specifications for experience and willingness to pay (WTP), namely normal/censored-normal, Poisson/censored-normal, and zero-inflated Poisson/censored-normal, we find discrete jumps in resource values as experience increases from zero, and that more-experienced respondents have smaller conditional variances. Simulation of arbitrary levels of experience allows standardization of the amount of information when developing welfare estimates.

17.
Lindén A, Mäntyniemi S. Ecology 2011, 92(7): 1414-1421
A Poisson process is a commonly used starting point for modeling stochastic variation of ecological count data around a theoretical expectation. However, data typically show more variation than implied by the Poisson distribution. Such overdispersion is often accounted for by using models with different assumptions about how the variance changes with the expectation. The choice of these assumptions can naturally have apparent consequences for statistical inference. We propose a parameterization of the negative binomial distribution, where two overdispersion parameters are introduced to allow for various quadratic mean-variance relationships, including the ones assumed in the most commonly used approaches. Using bird migration as an example, we present hypothetical scenarios on how overdispersion can arise due to sampling, flocking behavior or aggregation, environmental variability, or combinations of these factors. For all considered scenarios, mean-variance relationships can be appropriately described by the negative binomial distribution with two overdispersion parameters. To illustrate, we apply the model to empirical migration data with a high level of overdispersion, gaining clearly different model fits with different assumptions about mean-variance relationships. The proposed framework can be a useful approximation for modeling marginal distributions of independent count data in likelihood-based analyses.
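A quick way to see the quadratic mean-variance relationship is to simulate standard negative binomial counts and compare the sample variance with the Poisson benchmark (a generic one-overdispersion-parameter sketch, not the article's two-parameter version):

```python
import numpy as np

# A negative binomial with mean mu and size k has variance mu + mu**2 / k,
# so it is overdispersed relative to a Poisson with the same mean.
rng = np.random.default_rng(1)
mu, k = 10.0, 2.0
p = k / (k + mu)          # NumPy's success-probability parameterisation
counts = rng.negative_binomial(k, p, size=100_000)

print(counts.mean())      # close to mu = 10
print(counts.var())       # close to mu + mu**2 / k = 60, far above the Poisson value 10
```

Letting the variance be a general quadratic in the mean, as the proposed parameterization does, nests this model together with the other commonly assumed mean-variance forms.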

18.
We propose a method for a Bayesian hierarchical analysis of count data that are observed at irregular locations in a bounded domain of R^2. We model the data as having been observed on a fine regular lattice, where we do not have observations at all the sites. The counts are assumed to be independent Poisson random variables whose means are given by a log-Gaussian process. In this article, the Gaussian process is assumed to be either a Markov random field (MRF) or a geostatistical model, and we compare the two models on an environmental data set. To make the comparison, we calibrate priors for the parameters in the geostatistical model to priors for the parameters in the MRF. The calibration is obtained empirically. The main goal is to predict the hidden Poisson-mean process at all sites on the lattice, given the spatially irregular count data; to do this we use an efficient MCMC algorithm. The spatial Bayesian methods are illustrated on radioactivity counts analyzed by Diggle et al. (1998).

19.
Parameters derived from photosynthesis-irradiance (P-I) models, although often empirical in nature, are useful indicators of the photoadaptive state of phytoplankton in culture and in situ. However, objective criteria for determining significant changes in P-I curves are rarely provided, because confidence intervals for parameters of non-linear models are not estimated easily. Examination of least-squares residuals in parameter space and Monte Carlo approaches have been used to estimate confidence regions around parameter values, but the computationally intensive nature of these methods has prevented their routine application. We present an alternative method of estimating confidence intervals for parameters of P-I curves that runs quickly on a microcomputer and is easily combined with common parameter-estimation routines. This algorithm was tested using a 3-parameter P-I model and curves describing a wide range of photoadaptive states, with different numbers of observations and different amounts of inherent variability. The method produced results comparable to the Monte Carlo technique. This analysis makes it possible to specify the sample size required to define parameters with acceptable confidence as a function of data variance and photoadaptive state. In most reasonable situations, 25 observations are sufficient.

20.
Ranked-set sampling from a finite population is considered in this paper. Three sampling protocols are described, and procedures for constructing nonparametric confidence intervals for a population quantile are developed. Algorithms for computing coverage probabilities for these confidence intervals are presented, and the use of interpolated confidence intervals is recommended as a means to approximately achieve coverage probabilities that cannot be achieved exactly. A simulation study based on finite populations of sizes 20, 30, 40, and 50 shows that the three sampling protocols follow a strict ordering in terms of the average lengths of the confidence intervals they produce. This study also shows that all three ranked-set sampling protocols tend to produce confidence intervals shorter than those produced by simple random sampling, with the difference being substantial for two of the protocols. The interpolated confidence intervals are shown to achieve coverage probabilities quite close to their nominal levels. Rankings done according to a highly correlated concomitant variable are shown to reduce the level of the confidence intervals only minimally. An example to illustrate the construction of confidence intervals according to this methodology is provided.
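A balanced ranked-set sampling cycle can be sketched as follows. Perfect rankings are assumed here for simplicity; in practice, ranking is typically done by eye or via a concomitant variable, as the abstract discusses:

```python
import numpy as np

def ranked_set_sample(population, set_size, n_cycles, rng):
    """Balanced ranked-set sample: in each cycle draw set_size sets of
    set_size units each, rank each set, and keep the i-th order statistic
    from the i-th set (ranking assumed perfect)."""
    sample = []
    for _ in range(n_cycles):
        for i in range(set_size):
            group = rng.choice(population, size=set_size, replace=False)
            sample.append(np.sort(group)[i])   # i-th smallest of the i-th set
    return np.array(sample)

rng = np.random.default_rng(0)
population = np.arange(1, 51)                  # finite population of size 50
s = ranked_set_sample(population, set_size=3, n_cycles=4, rng=rng)
print(s)  # 12 measured units (set_size per cycle * n_cycles)
```

Only set_size units are actually measured per cycle even though set_size**2 are ranked, which is where the efficiency gain over simple random sampling comes from.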


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号