Similar Documents (20 results)
1.
Closed capture-recapture (CR) estimators have been used extensively to estimate population size. Most closed CR approaches have been developed and evaluated for discrete-time models, but little effort has been devoted to evaluating their continuous-time counterparts. Continuous-time estimators that allow capture probabilities to vary over time, developed using maximum likelihood theory by Craig (1953) and Darroch (1958) and martingale theory by Becker (1984), were evaluated using Monte Carlo simulation. Overall, the maximum likelihood (ML) estimators had the smaller mean squared error (MSE). The estimators performed well when model assumptions were upheld and were somewhat robust to heterogeneity in capture probabilities, but they were not robust to behavioural effects in the capture probabilities. Time-lag effects (periods when animals might be unavailable for immediate recapture) on continuous-time estimates were also investigated; results indicated a positive bias that was greater for smaller populations. There was no gain in performance when using a continuous-time estimator rather than a discrete-time estimator on the same simulated data. Usefulness of the continuous-time approach may therefore be limited to study designs in which animals are easier to sample with continuous-time methodology.
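The estimator evaluated here can be illustrated with a short Monte Carlo sketch. Under equal capture rates, per-animal capture counts are Poisson, and the continuous-time M0 likelihood leads to the equation d = N(1 - exp(-n/N)), where n is the total number of capture events and d the number of distinct animals caught; solving for N gives the MLE. This is a generic illustration of the approach, not the authors' simulation code, and the population size, rate, and duration below are made up:

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(42)

def ml_estimate(n_events, n_distinct):
    """Continuous-time M0 MLE: solve d = N * (1 - exp(-n/N)) for N."""
    if n_distinct >= n_events:          # no recaptures: no finite MLE
        return np.inf
    f = lambda N: N * (1.0 - np.exp(-n_events / N)) - n_distinct
    return brentq(f, n_distinct, 1e9)

N_true, rate, T, reps = 200, 0.02, 100.0, 1000
estimates = []
for _ in range(reps):
    counts = rng.poisson(rate * T, size=N_true)   # capture events per animal
    estimates.append(ml_estimate(counts.sum(), int((counts > 0).sum())))
estimates = np.array(estimates)
rmse = np.sqrt(((estimates - N_true) ** 2).mean())
print(f"mean(N_hat) = {estimates.mean():.1f}, RMSE = {rmse:.1f}")
```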

2.
Program MARK provides more than 65 data types in a common configuration for the estimation of population parameters from mark-encounter data. Encounter information from live captures, live resightings, and dead recoveries can be incorporated to estimate demographic parameters. Available estimates include survival (S or ϕ), rate of population change (λ), transition rates between strata (Ψ), emigration and immigration rates, and population size (N). Although N is the parameter most often desired by biologists, it is one of the most difficult parameters to estimate precisely and without bias, even for a geographically and demographically closed population. The set of closed-population estimation models available in Program MARK incorporates time (t) and behavioural (b) variation and individual heterogeneity (h) in the estimation of capture and recapture probabilities in a likelihood framework. The full range of models from M0 (the null model with all capture and recapture probabilities equal) to Mtbh is possible, including the ability to include temporal, group, and individual covariates when modelling capture and recapture probabilities. Both the full likelihood formulation of Otis et al. (1978) and the conditional formulation of Huggins (1989, 1991) and Alho (1990) are provided in Program MARK, and all of these models are incorporated into the robust design (Kendall et al. 1995, 1997; Kendall and Nichols 1995) and robust-design multistrata (Hestbeck et al. 1991, Brownie et al. 1993) data types. Model selection is performed with AICc (Burnham and Anderson 2002), and model averaging (Burnham and Anderson 2002) is available to provide estimates of N with standard errors that reflect model-selection uncertainty.
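As an illustration of the full-likelihood formulation of Otis et al. (1978) mentioned above, the sketch below profiles the M0 likelihood over the common capture probability p (whose profile MLE is n/(Nt)) and maximizes over N. This is a minimal stand-in, not Program MARK, and the toy counts are hypothetical:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize_scalar

def m0_profile_loglik(N, n_total, M, t):
    """Otis et al. full M0 log-likelihood, profiled over p_hat = n/(N*t)."""
    p = n_total / (N * t)
    return (gammaln(N + 1) - gammaln(N - M + 1)
            + n_total * np.log(p) + (N * t - n_total) * np.log1p(-p))

def m0_mle(n_total, M, t):
    """Maximize the profile likelihood over N (treated as continuous)."""
    res = minimize_scalar(lambda N: -m0_profile_loglik(N, n_total, M, t),
                          bounds=(M + 1e-6, 100.0 * M), method="bounded")
    return res.x

# toy data: t = 5 occasions, 120 total captures, 70 distinct animals
print(f"N_hat = {m0_mle(120, 70, 5):.1f}")
```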

3.
We utilize mixture models and nonparametric maximum likelihood estimation both to develop a likelihood ratio test (LRT) for a common simplifying assumption and to allow heterogeneity within premarked cohort studies. Our methods allow estimation of the entire probability model; thus one can not only estimate many parameters of interest but also bootstrap from the estimated model to predict many quantities, including the standard deviations of estimators. Simulations suggest that our LRT has the appropriate protection for Type I error and often has good power. In practice, our LRT is important for determining the appropriateness of estimators and for examining whether a simple design with only one capture period could be used for a future similar study.
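To convey the flavour of such a test, here is a sketch under assumptions of my own rather than the authors' nonparametric formulation: a likelihood ratio test of a homogeneous binomial capture model against a two-component binomial mixture, calibrated by parametric bootstrap because mixture LRTs do not satisfy the usual chi-square regularity conditions:

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = 6                                     # capture occasions

def nll_single(theta, x):
    return -binom.logpmf(x, t, theta[0]).sum()

def nll_mix(theta, x):
    w, p1, p2 = theta
    lp = np.logaddexp(np.log(w) + binom.logpmf(x, t, p1),
                      np.log1p(-w) + binom.logpmf(x, t, p2))
    return -lp.sum()

def lrt_stat(x):
    """2 * (mixture loglik - single-p loglik); single start for brevity."""
    f0 = minimize(nll_single, [0.3], args=(x,), bounds=[(1e-4, 1 - 1e-4)])
    f1 = minimize(nll_mix, [0.5, 0.2, 0.6], args=(x,),
                  bounds=[(1e-3, 1 - 1e-3)] * 3)
    return 2.0 * (f0.fun - f1.fun), f0.x[0]

# capture counts for 100 premarked animals, truly heterogeneous
grp = rng.random(100) < 0.5
x = np.where(grp, rng.binomial(t, 0.15, 100), rng.binomial(t, 0.55, 100))
obs, p0 = lrt_stat(x)

# parametric bootstrap of the null distribution (homogeneous p)
null = [lrt_stat(rng.binomial(t, p0, x.size))[0] for _ in range(200)]
print(f"LRT = {obs:.2f}, bootstrap p = {np.mean(np.array(null) >= obs):.3f}")
```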

4.
This paper presents the progressive upper level set (PULSE) scan statistic for geospatial hotspot detection, together with its software implementation. PULSE is a refinement of the upper level set (ULS) scan statistic; like ULS, it is based on an arbitrarily shaped scan window and can be adapted to a network setting. Like some other likelihood-based scanning devices, the ULS scan statistic identifies maximum likelihood estimate (MLE) zones that tend to be 'stringy' and sprawling: its search path increases the possibility of including extraneous cells in the MLE zone and, to a smaller extent, of excluding cells that belong to a true hotspot. The PULSE scan statistic improves on ULS in two ways. First, it begins its search for the most likely zone with a large population of candidate zones obtained by modifying the ULS tree structure and continues the search with a genetic algorithm. Second, to reduce the chance of generating an excessively stringy MLE zone containing extraneous cells, PULSE uses the cardinality and compactness of zones along with their likelihoods as the fitness function in the genetic algorithm, and applies several pertinent criteria, including the evenness of intra-zone cellular response ratios, to determine the MLE zone. To reduce computation, the Gumbel distribution of extreme values is used to determine the p-value of the MLE zone. The better results come at the cost of increased processing time. An evaluative performance study is presented.
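The Gumbel device mentioned above is easy to sketch: fit a Gumbel (extreme-value) distribution to a modest number of null replicates of the maximum log-likelihood ratio and read the p-value from the fitted tail, rather than running enough replicates to resolve small p-values empirically. The numbers below are stand-ins, not output of the PULSE software:

```python
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(7)

# stand-in: maximum log-likelihood-ratio statistics from 99 null replicates
# (in a real scan, each value is the best zone's LLR under H0)
null_max_llr = rng.gumbel(loc=6.0, scale=1.4, size=99)

observed_llr = 11.2                      # LLR of the candidate MLE zone
loc, scale = gumbel_r.fit(null_max_llr)  # ML fit of the extreme-value model
p_gumbel = gumbel_r.sf(observed_llr, loc, scale)
p_empirical = (1 + np.sum(null_max_llr >= observed_llr)) / (1 + null_max_llr.size)
print(f"Gumbel-based p = {p_gumbel:.4f}, empirical p = {p_empirical:.4f}")
```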

5.
An estimating function approach to the inference of catch-effort models
A class of catch-effort models, which allows for heterogeneous removal probabilities, is proposed for closed populations. The model includes three types of removal probabilities: multiplicative, Poisson, and logistic; the usual removal and generalized removal models then become special cases. The equivalence of the proposed model and a special type of capture-recapture model is discussed. A unified estimating function approach is used to estimate the initial population size. For the homogeneous model, the resulting population size estimator based on optimal estimating functions is asymptotically equivalent to the maximum likelihood estimator. One advantage of our approach is that it can be extended to handle heterogeneous populations in which the maximum likelihood estimators do not exist. The bootstrap method is applied to construct variance estimators and confidence intervals. We illustrate the method with two real data examples, and results of a simulation study investigating the performance of the proposed estimation procedure are presented.
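For orientation, the homogeneous removal model that the paper generalizes can be written down in a few lines: a k-pass removal likelihood with constant capture probability, maximized numerically, with a parametric bootstrap standing in for the paper's bootstrap variance and interval construction. The catch data below are invented:

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

rng = np.random.default_rng(3)

def removal_nll(theta, catches):
    """Negative log-likelihood of the k-pass removal model, constant p."""
    N, p = theta
    k, T = len(catches), sum(catches)
    logq = np.log1p(-p)
    ll = gammaln(N + 1) - gammaln(N - T + 1)            # choose the captured
    ll += sum(c * (np.log(p) + i * logq) for i, c in enumerate(catches))
    ll += (N - T) * k * logq                            # never captured
    return -ll

def removal_mle(catches):
    T = sum(catches)
    res = minimize(removal_nll, x0=[2.0 * T, 0.3], args=(catches,),
                   bounds=[(T, 50.0 * T), (1e-3, 1 - 1e-3)])
    return res.x

catches = [142, 89, 52]                                 # three removal passes
N_hat, p_hat = removal_mle(catches)

# parametric bootstrap for variance and a percentile interval
boot = []
for _ in range(500):
    alive, sim = int(round(N_hat)), []
    for _ in range(len(catches)):
        c = int(rng.binomial(alive, p_hat))
        sim.append(c)
        alive -= c
    boot.append(removal_mle(sim)[0])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"N_hat = {N_hat:.0f}, p_hat = {p_hat:.2f}, 95% CI = ({lo:.0f}, {hi:.0f})")
```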

6.
In capture-recapture experiments, fish populations can be studied by two different sampling procedures. In both, tagged fish are released on capture; untagged fish are tagged and released in the first procedure but retained in the second. Using the two sampling techniques, Rafail (1971a,b) gave expressions for estimating an assumed constant (C) of proportionality between the capture probabilities of tagged and untagged fish; these expressions are simplified here to forms that are easier to calculate. Estimating this constant aids in estimating the abundance and mortality rates of untagged fish, which are assumed to differ from those of tagged fish.

7.
We present a novel, non-parametric, frequentist approach to capture-recapture data based on a ratio estimator, which offers several advantages. First, as a non-parametric model, it requires neither a known underlying distribution for the parameters nor the associated assumptions, eliminating the need for post-hoc corrections or additional modeling to account for heterogeneity and other violated assumptions. Second, the model treats trials as dependent, so cluster sampling is handled naturally and no additional adjustments are necessary. Third, it accounts for ordering, exploiting the fact that a system with a small population will show a greater frequency of recaptures early in the survey compared with an identical system with a larger population. We provide a mathematical proof that our estimator attains asymptotic minimum variance under open systems. We apply the model to a data set of bottlenose dolphins (Tursiops truncatus) and compare the results to those from classic closed models. We show that the model has an impressive rate of convergence, and we demonstrate an inverse relationship between population size and the proportion of the population that needs to be sampled to achieve the same accuracy of abundance estimates. The model is flexible and can be applied to ecological situations as well as other settings that lend themselves to capture-recapture sampling.

8.
A ranked set sample (RSS), if not balanced, is simply a sample of independent order statistics generated from the same underlying distribution F. Kvam and Samaniego (1994) derived maximum likelihood estimates of F for a general RSS. In many applications, including some in the environmental sciences, prior information about F is available to supplement the data-based inference; in such cases, Bayes estimators should be considered for improved estimation. Bayes estimation (under the squared error loss function) of the unknown distribution function F is investigated for such samples. Additionally, the Bayes generalized maximum likelihood estimator (GMLE) is derived, and an iterative scheme based on the EM algorithm is used to produce the GMLE of F. For squared error loss, simple solutions are uncommon, and a procedure for finding the Bayes estimate using the Gibbs sampler is illustrated. The methods are illustrated with data from the Natural Environment Research Council of Great Britain (1975), representing water discharge of floods on the Nidd River in Yorkshire, England.

9.
Model averaging, specifically information-theoretic approaches based on Akaike's information criterion (IT-AIC approaches), has had a major influence on statistical practice in ecology and evolution. A neglected issue, however, is that, in common with most other model-fitting approaches, IT-AIC methods are sensitive to missing observations. The commonest way of handling missing data is complete-case analysis (deleting from the dataset every case that contains any missing values). It is well known that this reduces estimation precision (or statistical power) and biases parameter estimates; the implications for model selection, however, have not been explored. Here we employ an example from behavioural ecology to illustrate how missing data can affect the conclusions drawn from model selection or hypothesis testing, and we show how missing observations can be recovered, giving accurate estimates of IT-related indices (e.g., AIC and Akaike weights) as well as parameters (and their standard errors), by using multiple imputation. We use this example to illustrate key concepts from missing-data theory and as a basis for discussing available methods for handling missing data. It is intended as a practically oriented case study for behavioural ecologists deciding how to handle missing data in their own datasets, and as a first attempt to consider the problems of conducting model selection and averaging in the presence of missing observations.
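The multiple-imputation idea can be sketched for a simple Gaussian linear model: impute the missing predictor several times (here by regression imputation with noise), fit the model to each completed dataset, and average AIC values and coefficients across imputations. The pooling rule and data below are simplifications for illustration, not the procedure or dataset of the paper:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(scale=0.6, size=n)
y = 1.5 * x - 0.5 * z + rng.normal(size=n)
miss = rng.random(n) < 0.3                 # 30% of x missing at random
x_obs = np.where(miss, np.nan, x)

def ols_aic(X, y):
    """Gaussian OLS with AIC = n*log(RSS/n) + 2k (k = coefs + variance)."""
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(res[0]) if res.size else float(((y - X @ beta) ** 2).sum())
    k = X.shape[1] + 1
    return beta, len(y) * np.log(rss / len(y)) + 2 * k

aics, betas = [], []
cc = ~miss
for _ in range(20):                        # 20 imputations
    # proper imputation: regress x on z in complete cases, add noise
    Xz = np.column_stack([np.ones(cc.sum()), z[cc]])
    b, res, *_ = np.linalg.lstsq(Xz, x_obs[cc], rcond=None)
    sigma = np.sqrt(float(res[0]) / (cc.sum() - 2))
    x_imp = x_obs.copy()
    x_imp[miss] = b[0] + b[1] * z[miss] + rng.normal(scale=sigma, size=miss.sum())
    beta, aic = ols_aic(np.column_stack([np.ones(n), x_imp, z]), y)
    aics.append(aic); betas.append(beta)

_, aic_cc = ols_aic(np.column_stack([np.ones(cc.sum()), x_obs[cc], z[cc]]), y[cc])
print(f"complete-case AIC = {aic_cc:.1f} (n={cc.sum()}), "
      f"MI-averaged AIC = {np.mean(aics):.1f} (n={n})")
print("MI-pooled coefficients:", np.round(np.mean(betas, axis=0), 3))
```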

10.
Assessing conservation strategies requires reliable estimates of abundance. Because detecting all individuals is most often impossible in free-ranging populations, estimation procedures have to account for a detection probability of less than 1. Capture-recapture methods allow biologists to cope with this issue of detectability. Nevertheless, capture-recapture models for open populations are built on the assumption that all individuals share the same detection probability, even though detection heterogeneity among individuals has been shown to lead to underestimates of abundance in closed populations. We developed multievent capture-recapture models for an open population and propose an associated estimator of population size; both account for individual detection heterogeneity (IDH). We considered a two-class mixture model with weakly and highly detectable individuals to account for IDH. In a noninvasive capture-recapture study of wolves based on genotypes identified in feces and hair, we found that ignoring IDH led to a large underestimation of population size (27% on average).

11.
The assumption of demographic closure in the analysis of capture-recapture data under closed-population models is of fundamental importance. Yet little progress has been made in the development of omnibus tests of the closure assumption. We present a closure test for time-specific data that, in principle, tests the null hypothesis of the closed-population model Mt against the open-population Jolly-Seber model as a specific alternative. The test statistic is chi-squared and can be decomposed into informative components that help determine the nature of closure violations. The test is most sensitive to permanent emigration, least sensitive to temporary emigration, and of intermediate sensitivity to permanent or temporary immigration. It is a versatile tool for testing the assumption of demographic closure in the analysis of capture-recapture data.

12.
The estimation of animal population parameters such as capture probability, population size, or population density is an important issue in many ecological applications. Capture-recapture data may be considered as repeated observations that are often correlated over time; if these correlations are not taken into account, parameter estimates may be biased, possibly producing misleading results. We propose a generalized estimating equations (GEE) approach to account for correlation over time instead of assuming independence, as is done in traditional closed-population capture-recapture studies. We also account for heterogeneity among observed individuals and for over-dispersion, modelling capture probabilities as a function of covariates. GEE versions of all closed-population capture-recapture models and their corresponding estimating equations are proposed. We evaluate the effect of accounting for correlation structures on capture-recapture model selection based on the quasi-likelihood information criterion (QIC). An example is used for an illustrative application and for comparison with currently used methodology, and a Horvitz-Thompson-like estimator is used to obtain estimates of population size based on conditional arguments. A simulation study evaluates the performance of the GEE approach, which performs well for estimating population parameters, particularly when capture probabilities are high. The simulation results also reveal that the estimated population size varies with the nature of the correlation among capture occasions.
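A minimal sketch of the GEE machinery, assuming hypothetical per-occasion capture indicators rather than the paper's data: a binomial GEE with an exchangeable working correlation fitted in statsmodels, with QIC (available as GEEResults.qic() in recent statsmodels releases) used for model and working-correlation comparison. A Horvitz-Thompson-style abundance estimate would then sum 1/p_hat(caught at least once) over the observed animals:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_animals, n_occ = 150, 6

# hypothetical capture histories: one binary record per animal per occasion,
# capture probability driven by body mass plus latent individual heterogeneity
mass = rng.normal(size=n_animals)
lin = -0.5 + 0.7 * mass + rng.normal(scale=0.8, size=n_animals)
p = 1.0 / (1.0 + np.exp(-lin))
y = rng.binomial(1, p[:, None], size=(n_animals, n_occ))

df = pd.DataFrame({
    "caught": y.ravel(),
    "mass": np.repeat(mass, n_occ),
    "occasion": np.tile(np.arange(n_occ), n_animals),
    "animal": np.repeat(np.arange(n_animals), n_occ),
})

# binomial GEE with an exchangeable working correlation across occasions
model = sm.GEE.from_formula("caught ~ mass + C(occasion)", groups="animal",
                            data=df, family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(res.summary())
print("QIC, QICu:", res.qic())   # quasi-likelihood information criterion
```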

13.
The growth pattern of Loxechinus albus in southern Chile was studied using size-at-age data obtained by reading growth bands on the genital plates. Scatter plots of size-at-age for samples collected at three different locations indicated that growth is linear between ages 2 and 10. Five growth models, including linear, asymptotic, and non-asymptotic functions, were fitted to the data, and model selection was conducted with the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). The AIC identified the Tanaka model as the most suitable for two of the three sites, whereas the BIC led to the selection of the linear model for all sites. Our results show that the growth pattern of L. albus differs from the predominantly asymptotic pattern reported for other sea urchin species.
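This kind of model screening is easy to reproduce in outline: fit linear and asymptotic (von Bertalanffy) curves by least squares and compare Gaussian AIC and BIC. The Tanaka model is omitted for brevity, and the simulated size-at-age data below are hypothetical, not the L. albus measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(13)
age = np.repeat(np.arange(2, 11), 8).astype(float)        # ages 2-10
size = 4.0 + 6.0 * age + rng.normal(scale=4.0, size=age.size)

def linear(t, a, b):
    return a + b * t

def von_bertalanffy(t, linf, k, t0):
    return linf * (1.0 - np.exp(-k * (t - t0)))

def gaussian_ic(y, yhat, n_par):
    """AIC and BIC from least-squares residuals (+1 parameter for sigma)."""
    n, rss = y.size, float(((y - yhat) ** 2).sum())
    ll = -0.5 * n * (np.log(2.0 * np.pi * rss / n) + 1.0)
    k = n_par + 1
    return 2.0 * k - 2.0 * ll, k * np.log(n) - 2.0 * ll

fits = [("linear", linear, (0.0, 5.0), (-np.inf, np.inf)),
        ("von Bertalanffy", von_bertalanffy, (150.0, 0.05, 0.0),
         ([1.0, 1e-4, -20.0], [5e3, 2.0, 20.0]))]
for name, f, p0, bounds in fits:
    popt, _ = curve_fit(f, age, size, p0=p0, bounds=bounds)
    aic, bic = gaussian_ic(size, f(age, *popt), len(popt))
    print(f"{name:16s} AIC = {aic:7.1f}  BIC = {bic:7.1f}")
```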

14.
In the statistical modeling of a biological or ecological phenomenon, selecting an optimal model among a collection of candidates is a critical issue. To identify an optimal candidate model, a number of model selection criteria have been developed and investigated based on estimating Kullback's (1968) directed or symmetric divergence. Criteria that target the directed divergence include the Akaike (1973, 1974) information criterion, AIC, and the "corrected" Akaike information criterion (Hurvich and Tsai 1989), AICc; criteria that target the symmetric divergence include the Kullback information criterion, KIC, and the "corrected" Kullback information criterion, KICc (Cavanaugh 1999, 2004). For overdispersed count data, simple modifications of AIC and AICc have been increasingly utilized: specifically, the quasi-Akaike information criterion, QAIC, and its corrected version, QAICc (Lebreton et al. 1992). In this paper, we propose analogues of QAIC and QAICc based on estimating the symmetric as opposed to the directed divergence: QKIC and QKICc. We evaluate the selection performance of AIC, AICc, QAIC, QAICc, KIC, KICc, QKIC, and QKICc in a simulation study, and illustrate their practical utility in an ecological application. In our application, we use the criteria to formulate statistical models of the tick (Dermacentor variabilis) load on the white-footed mouse (Peromyscus leucopus) in northern Missouri.
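The quasi-criteria have simple closed forms. The sketch below assumes the standard QAIC = -2 log L / c_hat + 2k and QAICc = QAIC + 2k(k+1)/(n-k-1) of Lebreton et al. (1992), and takes QKIC = -2 log L / c_hat + 3k by the directed-versus-symmetric analogy described in the abstract; that last form, the omitted corrected variants, and the example numbers are assumptions on my part, not formulas copied from the paper:

```python
import numpy as np

def qaic(loglik, k, c_hat):
    """QAIC = -2 log L / c_hat + 2k (Lebreton et al. 1992)."""
    return -2.0 * loglik / c_hat + 2.0 * k

def qaicc(loglik, k, c_hat, n):
    """Small-sample corrected QAIC."""
    return qaic(loglik, k, c_hat) + 2.0 * k * (k + 1) / (n - k - 1)

def qkic(loglik, k, c_hat):
    """Kullback analogue: 3k penalty instead of 2k (assumed form)."""
    return -2.0 * loglik / c_hat + 3.0 * k

# two candidate overdispersed count models (hypothetical numbers);
# convention: add 1 to k for estimating c_hat itself
for name, ll, k in [("M1", -412.7, 4), ("M2", -409.9, 7)]:
    print(name, round(qaic(ll, k + 1, 1.8), 1),
          round(qaicc(ll, k + 1, 1.8, 120), 1),
          round(qkic(ll, k + 1, 1.8), 1))
```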

15.
When animals die in traps in a mark-recapture study, straightforward likelihood inference is possible in a class of models that do not involve heterogeneity, including M0, Mt, and Mb (White et al. 1982). We include three Markov chain "persistence" models and show that they provide good fits in a trapping study of deer mice in the Cascade-Siskiyou National Monument of southern Oregon, where trapping mortality was high.

16.
Repertoire size, the number of unique song or syllable types in a bird's repertoire, is a widely used measure of song complexity, but it is difficult to calculate exactly in species with large repertoires. A new method of repertoire size estimation applies species-richness estimation procedures from community ecology, but such capture-recapture approaches have not been widely tested. Here, we establish standardized sampling schemes and estimation procedures using capture-recapture models for syllable repertoires from 18 bird species, and suggest how these may be used to tackle problems of repertoire estimation. Different models, with different assumptions regarding the heterogeneity of the use of syllable types, performed best for different species with different song organizations. For most species, models assuming heterogeneous probability of occurrence of syllables (so-called detection probability) were selected, owing to the presence of both rare and frequent syllables. Capture-recapture estimates of syllable repertoire size from our small sample did not differ significantly from previous estimates based on larger samples of count data; however, simple enumeration of syllables in 15 songs yielded significantly lower estimates than previous reports. Hence, heterogeneity in the detection probability of syllables should be addressed when estimating repertoire size: it is neglected by simple enumeration but is taken into account when repertoire size is estimated with appropriate capture-recapture models adjusted for species-specific song organization. We suggest that such approaches, combined with standardized sampling, be applied in species with potentially large repertoires, whereas in species with small repertoires and homogeneous syllable usage, enumeration may be satisfactory. Although researchers often use repertoire size as a measure of song complexity, listeners are unlikely to count entire repertoires and may rely on other cues, such as syllable detection probability.
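One standard species-richness estimator of the kind referred to above is the Chao1 lower bound, which uses the numbers of syllable types seen exactly once (f1) and exactly twice (f2); it is shown here only as a representative capture-recapture-style estimator, not necessarily the model selected in the paper, and the tallies are invented:

```python
import numpy as np

def chao1(counts):
    """Chao1 estimate of repertoire size from syllable-type counts.
    counts[i] = number of times syllable type i was recorded."""
    counts = np.asarray(counts)
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())
    f2 = int((counts == 2).sum())
    if f2 > 0:
        return s_obs + f1 * f1 / (2.0 * f2)
    return s_obs + f1 * (f1 - 1) / 2.0   # bias-corrected form when f2 = 0

# hypothetical tallies: 14 syllable types observed in a song sample
tallies = [9, 7, 6, 5, 4, 3, 3, 2, 2, 1, 1, 1, 1, 1]
print(f"observed types = {sum(c > 0 for c in tallies)}, "
      f"Chao1 estimate = {chao1(tallies):.1f}")
```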

17.
Analysis of capture-recapture data often involves maximizing a complex likelihood function with many unknown parameters, and statistical inference based on selection of a proper model depends on successful attainment of this maximum. An EM algorithm is developed for obtaining maximum likelihood estimates of capture and survival probabilities, conditional on first capture, from standard capture-recapture data. The algorithm does not require numerical derivatives, which may improve precision and stability relative to other estimation schemes. The asymptotic covariance matrix of the estimated parameters can be obtained using the supplemented EM algorithm. The EM algorithm is compared to a more traditional Newton-Raphson algorithm on both a simulated and a real dataset. The two algorithms yield the same parameter estimates, but the Newton-Raphson variance estimates depend on a numerically estimated Hessian matrix that is sensitive to the choice of step size.

18.
Akaike's information criterion (AIC) is increasingly used in analyses in the field of ecology. This measure allows one to compare and rank multiple competing models and to estimate which of them best approximates the "true" process underlying the biological phenomenon under study. Behavioural ecologists have been slow to adopt this statistical tool, perhaps because of unfounded fears regarding the complexity of the technique. Here, using recent examples from the behavioural ecology literature, we provide a simple introductory guide to AIC: what it is, how and when to apply it, and what it achieves. We discuss multimodel inference using AIC, a procedure that should be used where no single model is strongly supported. Finally, we highlight a few of the pitfalls and problems that can be encountered by novice practitioners.
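The core AIC quantities the guide discusses take only a few lines to compute: AIC itself, differences from the best model, and Akaike weights, which sum to one and feed multimodel inference (e.g., weighted averaging of predictions). The log-likelihoods and parameter counts below are made up:

```python
import numpy as np

def akaike_weights(logliks, n_params):
    """AIC, Delta-AIC, and Akaike weights for a set of candidate models."""
    aic = -2.0 * np.asarray(logliks) + 2.0 * np.asarray(n_params)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return aic, delta, w / w.sum()

aic, delta, w = akaike_weights([-204.1, -202.8, -202.6], [3, 4, 6])
for i, (a, d, wi) in enumerate(zip(aic, delta, w), 1):
    print(f"model {i}: AIC = {a:6.1f}, dAIC = {d:4.1f}, weight = {wi:.2f}")
```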

19.
In the mid-1980s the Dutch NOx air-quality monitoring network was reduced from 73 to 32 rural and city background stations, leading to higher spatial uncertainties. In this study, several other sources of information are used to help reduce uncertainties in parameter estimation and spatial mapping. For parameter estimation we used Bayesian inference; for mapping we used kriging with external drift (KED), including secondary information from a dispersion model. The methods were applied to atmospheric NOx concentrations at rural and urban scales. We compared Bayesian estimation with restricted maximum likelihood estimation, and KED with universal kriging; as a reference we also included ordinary least squares (OLS). The parameter estimation and spatial interpolation methods were compared by cross-validation. Bayesian analysis resulted in an error reduction of 10 to 20% compared with restricted maximum likelihood, whereas KED resulted in an error reduction of 50% compared with universal kriging. Where observations were sparse, predictions were substantially improved by including the dispersion model output and by using available prior information. No major improvement was observed over OLS, presumably because the dispersion model output contains so much good information that no additional spatial residual random field is required to explain the data. In all, we conclude that the reduction in the monitoring network can be compensated by modern geostatistical methods, and that a traditional simple statistical model is of almost equal quality.
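Kriging with external drift can be sketched compactly: the kriging system is augmented with drift functions (here an intercept plus the dispersion-model output), and the weights come from one linear solve. This is a textbook KED solver with an assumed exponential covariance and invented station data, not the study's geostatistical pipeline:

```python
import numpy as np

rng = np.random.default_rng(19)

def exp_cov(h, sill=1.0, corr_range=30.0):
    """Exponential covariance model for the residual field."""
    return sill * np.exp(-h / corr_range)

def ked_predict(xy, z, drift, xy0, drift0):
    """Kriging with external drift; drift terms are [1, dispersion output]."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    K = exp_cov(d)                                   # data-data covariance
    F = np.column_stack([np.ones(n), drift])         # drift design matrix
    A = np.block([[K, F], [F.T, np.zeros((2, 2))]])
    k0 = exp_cov(np.linalg.norm(xy - xy0, axis=1))
    b = np.concatenate([k0, [1.0, drift0]])
    lam = np.linalg.solve(A, b)[:n]                  # kriging weights
    return float(lam @ z)

# hypothetical stations: NOx-like field = trend in dispersion output + noise
xy = rng.uniform(0.0, 100.0, size=(25, 2))
disp = 20.0 + 0.3 * xy[:, 0]                         # stand-in model output
z = 5.0 + 0.8 * disp + rng.normal(scale=2.0, size=25)

xy0 = np.array([50.0, 50.0])
pred = ked_predict(xy, z, disp, xy0, drift0=20.0 + 0.3 * 50.0)
print(f"KED prediction at {xy0}: {pred:.1f}")
```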

20.
Kodell and West (1993) describe two methods for calculating pointwise upper confidence limits on the risk function with normally distributed responses, using a certain definition of adverse quantitative effect. Banga et al. (2000) have shown, however, that these normal-theory methods break down when applied to skewed data. We accordingly develop a risk analysis model and associated likelihood-based methodology for responses that follow either a gamma or a reciprocal gamma distribution. The model supposes that the shape (index) parameter k of the response distribution is held fixed while the logarithm of the scale parameter is a linear model in the dose level. Existence and uniqueness of the maximum likelihood estimates are established. Asymptotic likelihood-based upper and lower confidence limits on the risk are solutions of the Lagrange equations associated with a constrained optimization problem; starting values for an iterative solution are obtained by replacing the Lagrange equations with the lowest-order terms in their asymptotic expansions. Three methods are then compared for calculating confidence limits on the risk: (i) the aforementioned starting values (LRAL method), (ii) full iterative solution of the Lagrange equations (LREL method), and (iii) bounds obtained using approximate normality of the maximum likelihood estimates with standard errors derived from the information matrix (MLE method). Simulation is used to assess coverage probabilities of the resulting upper confidence limits when the log of the scale parameter is quadratic in the dose level. Results indicate that coverage for the MLE method can be off by as much as 15 percentage points and converges very slowly to nominal levels as the sample size increases. Coverage for the LRAL and LREL methods, on the other hand, is close to nominal unless (a) the sample size is small, say N < 25, (b) the index parameter is small, say k ≤ 1, and (c) the direction of adversity is to the left for the gamma distribution or to the right for the reciprocal gamma distribution.

