Similar Literature
20 similar records found.
1.
We derive some statistical properties of the distribution of two Negative Binomial random variables conditional on their total. This type of model can be appropriate for paired count data with Poisson over-dispersion such that the variance is a quadratic function of the mean. It arises in many ecological applications, including comparative fishing studies of two vessels and/or gears. The parameter of interest is the ratio of pair means. We show that the conditional means and variances differ from those of the more commonly used Binomial model with variance adjusted for over-dispersion, and from the Beta-Binomial model. The conditional Negative Binomial model is complicated because, unlike in the Poisson case, conditioning does not eliminate the nuisance parameters. Maximum likelihood estimation with the unconditional Negative Binomial model can result in biased estimates of the over-dispersion parameter and poor confidence intervals for the ratio of means when there are many nuisance parameters. We propose three approaches to deal with nuisance parameters in the conditional Negative Binomial model. We also study a random effects Binomial model for this type of data, and we develop an adjustment to the full-sample Negative Binomial profile likelihood to reduce the bias caused by nuisance parameters. We use simulations to examine the bias, precision, and accuracy of estimators and confidence intervals under these methods. We conclude that the maximum likelihood method based on the full-sample Negative Binomial adjusted profile likelihood produces the best statistical inferences for the ratio of means when paired counts have Negative Binomial distributions. However, when there is uncertainty about the type of Poisson over-dispersion, a Binomial random effects model is a good choice.
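As a rough numerical companion to the abstract above, here is a minimal sketch of the ratio-of-means setting with invented paired catches: the station means, dispersion parameter k, and true ratio are all hypothetical, and the estimator is a simple pooled ratio with a percentile bootstrap rather than the conditional or adjusted profile likelihoods the paper develops.

```python
import numpy as np

rng = np.random.default_rng(42)

# Paired catches for two gears at 30 stations. Negative Binomial with
# mean mu and dispersion k, so that var = mu + mu^2 / k (quadratic in mu).
k = 2.0                                              # over-dispersion (assumed common)
rho = 1.5                                            # true ratio of pair means (B / A)
mu_a = rng.gamma(shape=2.0, scale=5.0, size=30)      # station effects = nuisance params
mu_b = rho * mu_a

def rnegbin(mu, k):
    # numpy's negative_binomial(n, p) has mean n(1-p)/p; set n=k, p=k/(k+mu)
    return rng.negative_binomial(k, k / (k + mu))

y_a, y_b = rnegbin(mu_a, k), rnegbin(mu_b, k)

# Pooled ratio estimator with a station-level percentile bootstrap CI.
def ratio(idx):
    return y_b[idx].sum() / y_a[idx].sum()

boot = [ratio(rng.integers(0, 30, size=30)) for _ in range(2000)]
print("ratio estimate:", ratio(np.arange(30)))
print("95% bootstrap CI:", np.percentile(boot, [2.5, 97.5]))
```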

2.
Advances in computing power in the past 20 years have led to a proliferation of spatially explicit, individual-based models of population and ecosystem dynamics. In forest ecosystems, the individual-based models encapsulate an emerging theory of "neighborhood" dynamics, in which fine-scale spatial interactions regulate the demography of component tree species. The spatial distribution of component species, in turn, regulates spatial variation in a whole host of community and ecosystem properties, with subsequent feedbacks on component species. The development of these models has been facilitated by the development of new methods of analysis of field data, in which critical demographic rates and ecosystem processes are analyzed in terms of the spatial distributions of neighboring trees and physical environmental factors. The analyses are based on likelihood methods and information theory, and they allow a tight linkage between the models and their explicit parameterization from field data. Maximum likelihood methods have a long history of use for point and interval estimation in statistics. In contrast, likelihood principles have only gradually emerged in ecology as the foundation for an alternative to traditional hypothesis testing. The alternative framework stresses the process of identifying and selecting among competing models, or in the simplest case, among competing point estimates of a parameter of a model. There are four general steps involved in a likelihood analysis: (1) model specification, (2) parameter estimation using maximum likelihood methods, (3) model comparison, and (4) model evaluation. Our goal in this paper is to review recent developments in the use of likelihood methods and modeling for the analysis of neighborhood processes in forest ecosystems. We focus on a single class of processes, seed dispersal and seedling dispersion, because recent papers provide compelling evidence of the potential power of the approach and illustrate some of the statistical challenges in applying the methods.
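To make steps (1) and (2) concrete, the sketch below specifies a two-dimensional exponential dispersal kernel and recovers its scale parameter by maximum likelihood from simulated seed-to-parent distances. The kernel form, sample size, and parameter values are assumptions for illustration, not those of any particular study.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Step 1, model specification: 2D isotropic kernel f(r) proportional to
# exp(-r / beta). The implied density of seed-to-parent distances is
#   p(r) = r * exp(-r / beta) / beta^2,  i.e. Gamma(shape=2, scale=beta).
r = rng.gamma(2.0, 8.0, size=200)        # simulated distances; true beta = 8 m

# Step 2, maximum likelihood estimation (on log scale to keep beta > 0).
def nll(log_beta):
    beta = np.exp(log_beta)
    return -np.sum(np.log(r) - r / beta - 2.0 * np.log(beta))

fit = minimize(nll, x0=np.log(5.0))
print("beta_hat =", np.exp(fit.x[0]))    # steps 3-4 would compare/evaluate models
```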

3.
Estimates of animal performance often use the maximum of a small number of laboratory trials, a method with several statistical disadvantages. Sample maxima always underestimate the true maximum performance, and the degree of the bias depends on sample size. Here, we suggest an alternative approach that involves estimating a specific performance quantile (e.g., the 0.90 quantile). We use the information on within-individual variation in performance to obtain a sampling distribution for the residual performance measures; we use this distribution to estimate a desired performance quantile for each individual. We illustrate our approach using simulations and with data on sprint speed in lizards. The quantile method has several advantages over the sample maximum: it reduces or eliminates bias, it uses all of the data from each individual, and its accuracy is independent of sample size. Additionally, we address the estimation of correlations between two different performance measures, such as sample maxima, quantiles, or means. In particular, because of sampling variability, we propose that the correlation of sample means estimates the correlation of population maxima better than the correlation of sample maxima does.
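A minimal sketch of the quantile idea with invented sprint-speed data: pool within-individual residuals to estimate the residual distribution, then assign each individual its mean plus the pooled 0.90 residual quantile. Trial counts and variance components are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 5 sprint-speed trials (m/s) for each of 20 lizards.
indiv_means = rng.normal(2.0, 0.3, size=20)
trials = indiv_means[:, None] + rng.normal(0.0, 0.15, size=(20, 5))

# Sample maxima underestimate true maximal performance, with bias that
# depends on the number of trials per individual.
sample_max = trials.max(axis=1)

# Quantile method: pooled residuals give a sampling distribution for the
# within-individual variation; read off its 0.90 quantile per individual.
residuals = trials - trials.mean(axis=1, keepdims=True)
q90 = np.quantile(residuals, 0.90)
perf_q90 = trials.mean(axis=1) + q90

print("sample maxima:", np.round(sample_max[:3], 2))
print("0.90-quantile estimates:", np.round(perf_q90[:3], 2))
```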

4.
Non-Gaussian spatial responses are usually modeled using a spatial generalized linear mixed model with location-specific latent variables. The likelihood function of this model cannot usually be given in closed form, so the maximum likelihood approach is very challenging. Several numerical algorithms for calculating maximum likelihood estimates of this model have been presented. In this paper, an approximate method is considered for estimating the parameters, and a new algorithm is introduced that is much faster than existing algorithms but just as accurate: the Approximate Expectation Maximization Gradient algorithm. The performance of the proposed algorithm is illustrated with a simulation study and a real data set.

5.
To choose among conservation actions that may benefit many species, managers need to monitor the consequences of those actions. Decisions about which species to monitor from a suite of different species being managed are hindered by natural variability in populations and uncertainty in several factors: the ability of the monitoring to detect a change, the likelihood of the management action being successful for a species, and how representative species are of one another. However, the literature provides little guidance about how to account for these uncertainties when deciding which species to monitor to determine whether the management actions are delivering outcomes. We devised an approach that applies decision science and selects the best complementary suite of species to monitor to meet specific conservation objectives. We created an index for indicator selection that accounts for the likelihood of successfully detecting a real trend due to a management action and whether that signal provides information about other species. We illustrated the benefit of our approach by analyzing a monitoring program for invasive predator management aimed at recovering 14 native Australian mammals of conservation concern. Our method selected the species that provided more monitoring power at lower cost relative to the current strategy and traditional approaches that consider only a subset of the important considerations. Our benefit function accounted for natural variability in species growth rates, uncertainty in the responses of species to the prescribed action, and how well species represent others. Monitoring programs that ignore uncertainty, likelihood of detecting change, and complementarity between species will be more costly and less efficient and may waste funding that could otherwise be used for management.

6.
Gauthier G, Besbeas P, Lebreton JD, Morgan BJ. Ecology, 2007, 88(6): 1420-1429.
There are few analytic tools available to formally integrate information coming from population surveys and demographic studies. The Kalman filter is a procedure that facilitates such integration. Based on a state-space model, we can obtain a likelihood function for the survey data using a Kalman filter, which we may then combine with a likelihood for the demographic data. In this paper, we used this combined approach to analyze the population dynamics of a hunted species, the Greater Snow Goose (Chen caerulescens atlantica), and to examine the extent to which it can improve previous demographic population models. The state equation of the state-space model was a matrix population model with fecundity and regression parameters relating adult survival and harvest rate estimated in a previous capture-recapture study. The observation equation combined the output from this model with estimates from an annual spring photographic survey of the population. The maximum likelihood estimates of the regression parameters from the combined analysis differed little from the values of the original capture-recapture analysis, though their precision improved. The model output was found to be insensitive to a wide range of coefficient of variation (CV) in fecundity parameters. We found a close match between the surveyed and smoothed population size estimates generated by the Kalman filter over an 18-year period, and the estimated CV of the survey (0.078-0.150) was quite compatible with its assumed value (approximately 0.10). When we used the updated parameter values to predict future population size, the model underestimated the surveyed population size by 18% over a three-year period. However, this could be explained by a concurrent change in the survey method. We conclude that the Kalman filter is a promising approach to forecast population change because it incorporates survey information in a formal way compared with ad hoc approaches that either neglect this information or require some parameter or model tuning.
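A univariate toy version of the state-space machinery, assuming a log-scale population with constant growth rate; this is a local-level Kalman filter with invented inputs, not the matrix population model of the paper, but it shows how the filter yields both smoothed states and a likelihood for the survey data.

```python
import numpy as np

# State:       log N_t = log N_{t-1} + log_lambda + process noise, var q
# Observation: y_t     = log N_t + survey error, var r
def kalman_filter(y, log_lambda, q, r, m0, p0):
    m, p = m0, p0
    filtered, loglik = [], 0.0
    for yt in y:
        m_pred, p_pred = m + log_lambda, p + q        # prediction step
        s = p_pred + r                                # innovation variance
        gain = p_pred / s                             # Kalman gain
        loglik += -0.5 * (np.log(2 * np.pi * s) + (yt - m_pred) ** 2 / s)
        m = m_pred + gain * (yt - m_pred)             # update step
        p = (1 - gain) * p_pred
        filtered.append(m)
    return np.array(filtered), loglik

# Hypothetical 18-year survey of log population size with CV ~ 0.10.
rng = np.random.default_rng(4)
truth = np.log(500_000) + np.cumsum(rng.normal(0.05, 0.05, size=18))
y = truth + rng.normal(0.0, 0.10, size=18)
states, ll = kalman_filter(y, log_lambda=0.05, q=0.05**2, r=0.10**2,
                           m0=y[0], p0=0.5)
```

In a full analysis, `ll` would be maximized over the model parameters and then combined with the likelihood from the demographic data.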

7.
An estimating function approach to the inference of catch-effort models
A class of catch-effort models, which allows for heterogeneous removal probabilities, is proposed for closed populations. The model includes three types of removal probabilities: multiplicative, Poisson, and logistic. The usual removal and generalized removal models then become special cases. The equivalence of the proposed model and a special type of capture-recapture model is discussed. A unified estimating function approach is used to estimate the initial population size. For the homogeneous model, the resulting population size estimator based on optimal estimating functions is asymptotically equivalent to the maximum likelihood estimator. One advantage of our approach is that it can be extended to handle heterogeneous populations, for which maximum likelihood estimators do not exist. The bootstrap method is applied to construct variance estimators and confidence intervals. We illustrate the method with two real data examples. Results of a simulation study investigating the performance of the proposed estimation procedure are presented.
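For orientation, here is the classical homogeneous special case, the Leslie catch-effort regression, where catch-per-unit-effort declines linearly with cumulative prior catch. The removal data are invented, and this textbook estimator stands in for, rather than reproduces, the paper's estimating-function approach.

```python
import numpy as np

catch  = np.array([142.0, 98.0, 71.0, 53.0, 39.0])   # hypothetical removals
effort = np.array([1.0, 1.0, 1.0, 1.0, 1.0])         # effort per occasion
cpue = catch / effort

# Leslie model: E[CPUE_t] = q * (N0 - K_t), with K_t the catch removed
# before occasion t; the slope gives -q and the intercept gives q * N0.
K = np.concatenate([[0.0], np.cumsum(catch)[:-1]])
slope, intercept = np.polyfit(K, cpue, 1)
q_hat = -slope
n0_hat = intercept / q_hat
print(f"catchability = {q_hat:.4f}, initial population = {n0_hat:.0f}")
```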

8.
Closed capture-recapture (CR) estimators have been used extensively to estimate population size. Most closed CR approaches have been developed and evaluated for discrete-time models, but there has been little effort to evaluate their continuous-time counterparts. Continuous-time estimators that allow capture probabilities to vary over time, developed using maximum likelihood theory by Craig (1953) and Darroch (1958) and using martingale theory by Becker (1984), were evaluated using Monte Carlo simulation. Overall, the ML estimators had a smaller MSE. The estimators performed well when model assumptions were upheld, and were somewhat robust to heterogeneity in capture probabilities. However, the estimators were not robust to behavioural effects in the capture probabilities. Time lag effects (periods when animals might be unavailable for immediate recapture) on continuous-time estimates were also investigated; results indicated a positive bias that was greater for smaller populations. There was no gain in performance when using a continuous-time estimator versus a discrete-time estimator on the same simulated data. Usefulness of the continuous-time approach may be limited to study designs where animals are easier to sample using continuous-time methodology.

9.
Cohen Y. Ecological Modelling, 2009, 220(13-14): 1613-1619.
Methods for modeling population dynamics in probability using the generalized point process approach are developed. The life history of these populations is such that seasonal reproduction occurs during a short time. Several models are developed and analyzed. Data on two species, a colonial spider (Stegodyphus dumicola) and a migratory bird (the wood thrush, Hylocichla mustelina), are used to estimate model parameters with appropriate log-likelihood functions. For the spiders, the model is fitted to provide evolutionarily feasible colony sizes based on maximum likelihood estimates of fecundity and survival data. For the migratory bird species, maximum likelihood estimates are derived for the fecundity and survival rates of young and adult birds and for the immigration rate. The presented approach allows computation of quantities of interest such as the probability of extinction and the average time to extinction.

10.
On estimating the exponent of power-law frequency distributions
White EP, Enquist BJ, Green JL. Ecology, 2008, 89(4): 905-912.
Power-law frequency distributions characterize a wide array of natural phenomena. In ecology, biology, and many physical and social sciences, the exponents of these power laws are estimated to draw inference about the processes underlying the phenomenon, to test theoretical models, and to scale up from local observations to global patterns. Therefore, it is essential that these exponents be estimated accurately. Unfortunately, the binning-based methods traditionally used in ecology and other disciplines perform quite poorly. Here we discuss more sophisticated methods for fitting these exponents based on cumulative distribution functions and maximum likelihood estimation. We illustrate their superior performance at estimating known exponents and provide details on how and when ecologists should use them. Our results confirm that maximum likelihood estimation outperforms other methods in both accuracy and precision. Because of the use of biased statistical methods for estimating the exponent, the conclusions of several recently published papers should be revisited.
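The continuous (Pareto) maximum likelihood estimator the authors advocate has a closed form; the sketch below applies it to simulated data with a known exponent. Treating xmin as known is a simplifying assumption here.

```python
import numpy as np

def powerlaw_mle(x, xmin):
    # For x_i >= xmin with density ~ x^(-alpha):
    #   alpha_hat = 1 + n / sum(log(x_i / xmin)),
    #   se(alpha_hat) ~= (alpha_hat - 1) / sqrt(n).
    x = np.asarray(x, dtype=float)
    x = x[x >= xmin]
    n = x.size
    alpha = 1.0 + n / np.sum(np.log(x / xmin))
    return alpha, (alpha - 1.0) / np.sqrt(n)

rng = np.random.default_rng(0)
x = rng.pareto(1.5, size=5000) + 1.0       # true exponent alpha = 2.5, xmin = 1
print(powerlaw_mle(x, xmin=1.0))           # close to (2.5, ~0.02)
```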

11.
We utilize mixture models and nonparametric maximum likelihood estimation both to develop a likelihood ratio test (LRT) for a common simplifying assumption and to allow heterogeneity within premarked cohort studies. Our methods allow estimation of the entire probability model, so one can not only estimate many parameters of interest but also bootstrap from the estimated model to predict quantities such as the standard deviations of estimators. Simulations suggest that our LRT maintains the appropriate Type I error rate and often has good power. In practice, our LRT is important for determining the appropriateness of estimators and for examining whether a simple design with only one capture period could be used for a future similar study.

12.
Estimates of biodiversity change are essential for the management and conservation of ecosystems. Accurate estimates rely on selecting representative sites, but monitoring often focuses on sites of special interest. How such site-selection biases influence estimates of biodiversity change is largely unknown. Site-selection bias potentially occurs across four major sources of biodiversity data, decreasing in likelihood from citizen science, museums, national park monitoring, and academic research. We defined site-selection bias as a preference for sites that are either densely populated (i.e., abundance bias) or species rich (i.e., richness bias). We simulated biodiversity change in a virtual landscape and tracked the observed biodiversity at a sampled site. The site was selected either randomly or with a site-selection bias. We used a simple spatially resolved, individual-based model to predict the movement or dispersal of individuals in and out of the chosen sampling site. Site-selection bias exaggerated estimates of biodiversity loss in sites selected with a bias by on average 300–400% compared with randomly selected sites. Based on our simulations, site-selection bias resulted in positive trends being estimated as negative trends: richness increase was estimated as 0.1 in randomly selected sites, whereas sites selected with a bias showed a richness change of −0.1 to −0.2 on average. Thus, site-selection bias may falsely indicate decreases in biodiversity. We varied sampling design and characteristics of the species and found that site-selection biases were strongest in short time series, for small grains, organisms with low dispersal ability, large regional species pools, and strong spatial aggregation. Based on these findings, to minimize site-selection bias, we recommend use of systematic site-selection schemes; maximizing sampling area; calculating biodiversity measures cumulatively across plots; and use of biodiversity measures that are less sensitive to rare species, such as the effective number of species. Awareness of the potential impact of site-selection bias is needed for biodiversity monitoring, the design of new studies on biodiversity change, and the interpretation of existing data.
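A toy illustration of the richness-bias mechanism with entirely invented numbers: if the monitored site is the one that looked richest at the first survey, regression to the mean alone produces an apparent decline even though nothing in the landscape changed.

```python
import numpy as np

rng = np.random.default_rng(5)

# 100 sites whose true richness is constant; observed richness fluctuates.
true_richness = 20.0
obs_t0 = true_richness + rng.normal(0.0, 3.0, size=100)
obs_t1 = true_richness + rng.normal(0.0, 3.0, size=100)

random_site = rng.integers(0, 100)
biased_site = int(np.argmax(obs_t0))      # chosen because it looked richest

# The biased site shows a spurious decline purely by regression to the mean.
print("random site trend:", obs_t1[random_site] - obs_t0[random_site])
print("biased site trend:", obs_t1[biased_site] - obs_t0[biased_site])
```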

13.
A new statistical testing approach using a weighted logrank statistic is developed for rodent tumorigenicity assays that have a single terminal sacrifice but not cause-of-death data. Instead of using cause-of-death assignment by pathologists, the number of fatal tumors is estimated by a constrained nonparametric maximum likelihood estimation method. For data lacking cause-of-death information, the Peto test is modified with estimated numbers of fatal tumors and a Fleming–Harrington-type weight, which is based on an estimated tumor survival function. A bootstrap resampling method is used to estimate the weight function. The proposed testing method with the weight adjustment appears to improve the performance in various situations of single-sacrifice animal experiments. A Monte Carlo simulation study for the proposed test is conducted to assess size and power of the test. This testing approach is illustrated using a real data set.

14.
The maximum likelihood estimator for estimating proportions by group testing is biased. An expression for the approximate bias has been previously presented, which enables the creation of a less biased estimator by removing the term of \(O(n^{-1})\). However, in this previous work the term of \(O(n^{-2})\) was incorrectly derived. This note gives a correct derivation, and examines the relative contribution of the two terms.
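A sketch of the group-testing MLE, using a Monte Carlo check of its small-sample bias rather than the analytic \(O(n^{-1})\) expansion discussed in the note; pool size, prevalence, and number of pools are hypothetical.

```python
import numpy as np

def group_test_mle(t_pos, n, s):
    # n pools of size s, t_pos positive pools; a pool is positive with
    # probability theta = 1 - (1 - p)^s, so p_hat = 1 - (1 - t_pos/n)^(1/s).
    return 1.0 - (1.0 - t_pos / n) ** (1.0 / s)

rng = np.random.default_rng(7)
p, n, s = 0.05, 40, 10
theta = 1.0 - (1.0 - p) ** s
t = rng.binomial(n, theta, size=100_000)
t = np.minimum(t, n - 1)                  # exclude the all-positive outcome
est = group_test_mle(t, n, s)
print("mean estimate:", est.mean(), " bias:", est.mean() - p)   # bias > 0
```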

15.
Long-billed curlews (Numenius americanus) appear unique among scolopacid shorebirds studied so far in possessing a significant sex bias in natal philopatry. We resighted 9 curlews, color-banded as chicks, that at least attempted to breed; 8 of these were males. Male curlews also cooperate extensively with neighbors in mobbing potential chick predators. This mutualistic behavior may have evolved through kin selection among philopatric males. If so, we would expect such an evolutionary consequence to lead to a similar sex bias in breeding area fidelity. Yet our resightings of color-banded adults over 4 consecutive years indicate that males and females were equally likely to return to previous nesting territories. Excessive disturbance, such as capture and nest loss within a single breeding season, was correlated with the likelihood of breeding dispersal by females but not males. This suggests potentially stronger breeding area fidelity in males.

16.
The International Union for the Conservation of Nature and Natural Resources (IUCN), the world's largest and most important global conservation network, has listed approximately 16,000 species worldwide as threatened. The most important tool for recognizing and listing species as threatened is population viability analysis (PVA), which estimates the probability of extinction of a population or species over a specified time horizon. The most common PVA approach is to apply it to a single time series of population abundance. This approach ignores covariability of local populations. Covariability can be important because high synchrony of local populations reduces the effective number of local populations and leads to greater extinction risk. What is needed is a way of extending PVA to model the correlation structure among multiple local populations. Multivariate state-space modeling is applied to this problem, and alternative estimation methods are compared. The multivariate state-space technique is applied to endangered populations of Pacific salmon in the USA. Simulations demonstrated that the correlation structure can strongly influence population viability and is best estimated using restricted maximum likelihood instead of maximum likelihood.

17.
Capturing the spread of biological invasions in heterogeneous landscapes is a complex modelling task in which information on both dispersal and population dynamics must be integrated. Spatial stochastic simulation and phenology models have rarely been combined to assist in the study of human-assisted long-distance dispersal events. Here we develop a process-based, spatially explicit, landscape-extent simulation model that considers the spread and detection of invasive insects. Natural and human-assisted dispersal mechanisms are modelled with an individual-based approach using negative exponential and negative power law dispersal kernels and gravity models. The model incorporates a phenology sub-model that uses daily temperature grids to predict the timing of the population dynamics in each habitat patch. The model was applied to the study of the invasion of Europe by the important maize pest western corn rootworm (WCR), Diabrotica virgifera ssp. virgifera. We parameterized and validated the model using maximum likelihood and simulation methods from the historical invasion of WCR in Austria. WCR was found to follow stratified dispersal, in which international transport networks in the Danube basin played a key role in the occurrence of long-distance dispersal events. Detection measures were found to be effective, and altitude had a significant effect on limiting the spread of WCR. Spatial stochastic simulation combined with phenology models, maximum likelihood methods, and predicted-versus-observed regression showed a high degree of flexibility and captured the salient features of WCR spread in Austria. This modelling approach is useful because it allows one to fully exploit the often limited and heterogeneous information available on the population dynamics and dispersal of alien invasive insects.
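A minimal sketch contrasting the two kernel families named above via inverse-CDF sampling; the parameter values are arbitrary, and in a real application these draws would be embedded in the spatially explicit, individual-based simulation.

```python
import numpy as np

rng = np.random.default_rng(11)

def exp_kernel(n, beta):
    # Negative exponential dispersal distances with mean beta.
    return rng.exponential(beta, size=n)

def power_kernel(n, alpha, d0=1.0):
    # Negative power law (Pareto-type): P(D > d) = (d / d0)^(1 - alpha),
    # d >= d0, alpha > 1; sampled by inverting the CDF.
    u = rng.random(n)
    return d0 * u ** (1.0 / (1.0 - alpha))

# The power-law kernel has a far heavier tail: rare long-distance jumps
# of the kind driven by transport networks in stratified dispersal.
print("exp 99th percentile:  ", np.quantile(exp_kernel(10_000, beta=5.0), 0.99))
print("power 99th percentile:", np.quantile(power_kernel(10_000, alpha=2.5), 0.99))
```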

18.
Many simulation studies have examined the properties of distance sampling estimators of wildlife population size. When distances are generated from a detection model and fitted using the same model, the estimators are known to perform well. In practice, however, the true model is unknown, so standard practice includes model selection, typically using model comparison tools like the Akaike Information Criterion. Here we examine the performance of standard distance sampling estimators under model selection. We compare line and point transect estimators with distances simulated from two detection functions, hazard-rate and exponential power series (EPS), over a range of sample sizes. To mimic the real-world context where the true model may not be part of the candidate set, EPS models were not included as candidates, except for the half-normal parameterization. We found that median bias depended on sample size (the estimators being asymptotically unbiased) and on the form of the true detection function: negative bias (up to 15% for line transects and 30% for point transects) when the shoulder of maximum detectability was narrow, and positive bias (up to 10% for line transects and 15% for point transects) when it was wide. Generating unbiased simulations requires careful choice of detection function or very large datasets. Practitioners should collect data that result in detection functions with a shoulder similar to a half-normal and use the monotonicity constraint. Narrow-shouldered detection functions can be avoided through good field procedures, and those with a wide shoulder are unlikely to occur, due to heterogeneity in detectability.
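For reference, the recommended half-normal case has a closed-form MLE; the sketch below estimates density from simulated perpendicular distances, with no truncation or model selection, and all inputs invented.

```python
import numpy as np

def halfnormal_density(distances, line_length):
    # Half-normal detection g(x) = exp(-x^2 / (2 sigma^2)):
    # sigma_hat^2 = mean(x^2); f(0) = 1 / (sigma * sqrt(pi/2));
    # D_hat = n * f(0) / (2 * L) for total transect length L.
    x = np.asarray(distances, dtype=float)
    sigma = np.sqrt(np.mean(x ** 2))
    f0 = 1.0 / (sigma * np.sqrt(np.pi / 2.0))
    return x.size * f0 / (2.0 * line_length)

rng = np.random.default_rng(3)
d = np.abs(rng.normal(0.0, 25.0, size=120))   # distances in metres
print("density per m^2:", halfnormal_density(d, line_length=10_000.0))
```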

19.
The theory of conventional line transect surveys is based on an essential assumption that 100% detection of animals right on the transect lines can be achieved. When this assumption fails, independent observer line transect surveys are used. This paper proposes a general approach, based on a conditional likelihood, which can be carried out either parametrically or nonparametrically, to estimate the abundance of non-clustered biological populations using data collected from independent observer line transect surveys. A nonparametric estimator is specifically proposed which combines the conditional likelihood and the kernel smoothing method. It has the advantage that it allows the data themselves to dictate the form of the detection function, free of any subjective choice. The bias and the variance of the nonparametric estimator are given. Its asymptotic normality is established which enables construction of confidence intervals. A simulation study shows that the proposed estimator has good empirical performance, and the confidence intervals have good coverage accuracy.

20.
Analysis of capture-recapture data often involves maximizing a complex likelihood function with many unknown parameters. Statistical inference based on selection of a proper model depends on successful attainment of this maximum. An EM algorithm is developed for obtaining maximum likelihood estimates of capture and survival probabilities conditional on first capture from standard capture-recapture data. The algorithm does not require the use of numerical derivatives, which may improve precision and stability relative to other estimation schemes. The asymptotic covariance matrix of the estimated parameters can be obtained using the supplemented EM algorithm. The EM algorithm is compared to a more traditional Newton-Raphson algorithm with both a simulated and a real dataset. The two algorithms result in the same parameter estimates, but Newton-Raphson variance estimates depend on a numerically estimated Hessian matrix that is sensitive to step size choice.
