Similar articles
20 similar articles found (search time: 562 ms)
1.
Model averaging, specifically information theoretic approaches based on Akaike’s information criterion (IT-AIC approaches), has had a major influence on statistical practices in the field of ecology and evolution. However, a neglected issue is that, in common with most other model fitting approaches, IT-AIC methods are sensitive to the presence of missing observations. The commonest way of handling missing data is the complete-case analysis (the complete deletion from the dataset of cases containing any missing values). It is well known that this results in reduced estimation precision (or reduced statistical power) and biased parameter estimates; however, the implications for model selection have not been explored. Here we employ an example from behavioural ecology to illustrate how missing data can affect the conclusions drawn from model selection or hypothesis testing. We show how missing observations can be recovered to give accurate estimates for IT-related indices (e.g. AIC and Akaike weight) as well as parameters (and their standard errors) by utilizing ‘multiple imputation’. We use this paper to illustrate key concepts from missing data theory and as a basis for discussing available methods for handling missing data. The example is intended to serve as a practically oriented case study for behavioural ecologists deciding how to handle missing data in their own datasets, and also as a first attempt to consider the problems of conducting model selection and averaging in the presence of missing observations.
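The Akaike weights this abstract refers to are straightforward to compute once each candidate model's AIC is known. A minimal sketch (the AIC values below are hypothetical, not taken from the study):

```python
import math

def akaike_weights(aics):
    """Akaike weight w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC)."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC values for three candidate models.
weights = akaike_weights([100.0, 102.0, 110.0])
print([round(w, 3) for w in weights])  # → [0.727, 0.268, 0.005]
```

Under multiple imputation one would refit the model set to each imputed dataset and pool the results; the weight calculation itself is unchanged.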

2.
Scientific thinking may require the consideration of multiple hypotheses, which often call for complex statistical models at the level of data analysis. The aim of this introduction is to provide a brief overview on how competing hypotheses are evaluated statistically in behavioural ecological studies and to offer potentially fruitful avenues for future methodological developments. Complex models have traditionally been treated by model selection approaches using threshold-based removal of terms, i.e. stepwise selection. A recently introduced method for model selection applies an information-theoretic (IT) approach, which simultaneously evaluates hypotheses by balancing between model complexity and goodness of fit. The IT method has been increasingly propagated in the field of ecology, while a literature survey shows that its spread in behavioural ecology has been much slower, and model simplification using stepwise selection is still more widespread than IT-based model selection. Why has the use of IT methods in behavioural ecology lagged behind other disciplines? This special issue examines the suitability of the IT method for analysing data with multiple predictors, which researchers encounter in our field. The volume brings together different viewpoints to aid behavioural ecologists in understanding the method, with the hope of enhancing the statistical integration of our discipline.

3.
Link WA  Barker RJ 《Ecology》2006,87(10):2626-2635
Statistical thinking in wildlife biology and ecology has been profoundly influenced by the introduction of AIC (Akaike's information criterion) as a tool for model selection and as a basis for model averaging. In this paper, we advocate the Bayesian paradigm as a broader framework for multimodel inference, one in which model averaging and model selection are naturally linked, and in which the performance of AIC-based tools is naturally evaluated. Prior model weights implicitly associated with the use of AIC are seen to highly favor complex models: in some cases, all but the most highly parameterized models in the model set are virtually ignored a priori. We suggest the usefulness of the weighted BIC (Bayesian information criterion) as a computationally simple alternative to AIC, based on explicit selection of prior model probabilities rather than acceptance of default priors associated with AIC. We note, however, that both procedures only approximate the use of exact Bayes factors. We discuss and illustrate technical difficulties associated with Bayes factors, and suggest approaches to avoiding these difficulties in the context of model selection for a logistic regression. Our example highlights the predisposition of AIC weighting to favor complex models and suggests a need for caution in using the BIC for computing approximate posterior model weights.
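The contrast the abstract draws between AIC and BIC weighting can be made concrete by computing both sets of weights for the same fitted models. A hedged sketch with hypothetical log-likelihoods and parameter counts (the penalties 2k and k·log n are the standard AIC and BIC complexity terms):

```python
import math

def ic_weights(ics):
    """Normalized model weights from any information criterion (smaller is better)."""
    best = min(ics)
    rel = [math.exp(-(v - best) / 2.0) for v in ics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical log-likelihoods and parameter counts: a simple vs. a richer model.
loglik = [-50.0, -45.0]
k = [2, 6]
n = 100  # sample size

aic = [-2.0 * ll + 2.0 * kk for ll, kk in zip(loglik, k)]
bic = [-2.0 * ll + kk * math.log(n) for ll, kk in zip(loglik, k)]

w_aic = ic_weights(aic)  # puts most weight on the richer model
w_bic = ic_weights(bic)  # puts most weight on the simple model
```

With these numbers AIC favors the richer model while BIC, whose log n penalty grows with sample size, favors the simpler one, illustrating the implicit prior weighting the paper discusses.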

4.
There has been a great deal of recent discussion of the practice of regression analysis (or more generally, linear modelling) in behaviour and ecology. In this paper, I wish to highlight two factors that have been under-considered, collinearity and measurement error in predictors, as well as to consider what happens when both exist at the same time. I examine the consequences for conventional regression analysis (ordinary least squares, OLS) as well as for model averaging methods, typified by information theoretic approaches based around Akaike’s information criterion. Collinearity causes variance inflation of estimated slopes in OLS analysis, as is well known. In the presence of collinearity, model averaging reduces this variance for predictors with weak effects, but can also lead to parameter bias. When collinearity is strong or when all predictors have strong effects, model averaging relies heavily on the full model including all predictors, and hence its results and those of OLS are essentially the same. I highlight that it is not safe to simply eliminate collinear variables without due consideration of their likely independent effects, as this can lead to biases. Measurement error is also considered, and I show that it can lead to extreme biases when predictors are collinear, have strong effects, but differ in their degree of measurement error. I highlight techniques for dealing with and diagnosing these problems. These results reinforce that automated model selection techniques should not be relied on in the analysis of complex multivariable datasets.
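The variance inflation described here is routinely diagnosed with the variance inflation factor, 1/(1 − R²) from regressing each predictor on the others. A sketch on simulated collinear predictors (the correlation of 0.9 and sample size are arbitrary choices for illustration):

```python
import numpy as np

def vif(x, others):
    """Variance inflation factor: 1 / (1 - R^2) from regressing x on the others."""
    X = np.column_stack([np.ones(len(x))] + list(others))
    beta, *_ = np.linalg.lstsq(X, x, rcond=None)
    resid = x - X @ beta
    r2 = 1.0 - resid.var() / x.var()
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)
x2 = 0.9 * x1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n)  # collinear with x1

print(vif(x1, [x2]))  # roughly 1 / (1 - 0.81), i.e. around 5
```

A VIF well above 1 signals that the OLS slope variance for that predictor is inflated by its correlation with the others.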

5.
The infinite dimensional model (IDM) is an approach that has been developed for the analysis of phenotypic variation in function-valued traits such as growth trajectories and continuous reaction norms. This model is particularly suited for analysing the potential for, and the constraints on, growth evolving under selection on body size. Despite its applicability to a broad range of study systems, the IDM has been applied in only a handful of studies, as it is mathematically demanding for scientists not familiar with quantitative genetics methods. Here, we present a user-friendly R implementation of the IDM and demonstrate its performance with growth data on the nine-spined stickleback (Pungitius pungitius). In addition to rearing experiments, individual size-at-age trajectories are often measured in the wild in mark-recapture studies or estimated retrospectively from scales or bones. Therefore, our R implementation of the IDM should be applicable to many studies conducted in the wild and in the lab, and should make the methodologically challenging IDM approach more easily accessible in fields where quantitative genetics methods are less routinely used.

6.
Wildlife resource selection studies typically compare used to available resources; selection or avoidance occurs when use is disproportionately greater or less than availability. Comparing used to available resources is problematic because results are often greatly influenced by what is considered available to the animal. Moreover, placing relocation points within resource units is often difficult due to radiotelemetry and mapping errors. Given these problems, we suggest that an animal’s resource use be summarized at the scale of the home range (i.e., the spatial distribution of all point locations of an animal) rather than by individual points that are considered used or available. To account for differences in use-intensity throughout an animal’s home range, we model resource selection using kernel density estimates and polytomous logistic regression. We present a case study of elk (Cervus elaphus) resource selection in South Dakota to illustrate the procedure. There are several advantages to our proposed approach. First, the investigator does not have to define resource availability, a difficult and often arbitrary decision; instead, the technique compares the intensity of animal use throughout the home range, which also avoids classifying locations rigidly as used or unused. Second, location coordinates do not need to be placed within mapped resource units, which is problematic given mapping and telemetry error. Finally, resource use is considered at an appropriate scale for management, because most wildlife resource decisions are made at the level of the patch. Despite the advantages of this use-intensity procedure, future research should address spatial autocorrelation and develop spatial models for ordered categorical variables.
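The use-intensity surface described above is a kernel density estimate over an animal's relocation points. A minimal numpy sketch (the coordinates, sample size, and bandwidth are hypothetical, not from the elk study):

```python
import numpy as np

def kde_intensity(points, cells, bandwidth=0.3):
    """Gaussian kernel density estimate of use intensity.

    points: (n, 2) relocation coordinates; cells: (m, 2) evaluation cells.
    """
    sq_dist = ((cells[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    kernel = np.exp(-sq_dist / (2.0 * bandwidth**2))
    return kernel.mean(axis=1) / (2.0 * np.pi * bandwidth**2)

rng = np.random.default_rng(1)
locs = rng.normal(scale=0.5, size=(200, 2))   # hypothetical relocations
cells = np.array([[0.0, 0.0], [2.0, 2.0]])    # home-range centre vs. periphery
center, edge = kde_intensity(locs, cells)
```

The resulting intensities could then be binned into ordered use categories as the response for the polytomous regression step.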

7.
Complex signal function: developing a framework of testable hypotheses
The basic building blocks of communication are signals, assembled in various sequences and combinations, and used in virtually all inter- and intra-specific interactions. While signal evolution has long been a focus of study, there has been a recent resurgence of interest and research in the complexity of animal displays. Much past research on signal evolution has focused on sensory specialists, or on single signals in isolation, but many animal displays involve complex signaling, or the combination of more than one signal or related component, often serially and overlapping, frequently across multiple sensory modalities. Here, we build a framework of functional hypotheses of complex signal evolution based on content-driven (ultimate) and efficacy-driven (proximate) selection pressures (sensu Guilford and Dawkins 1991). We point out key predictions for various hypotheses and discuss different approaches to uncovering complex signal function. We also differentiate a category of hypotheses based on inter-signal interactions. Throughout our review, we hope to make three points: (1) a complex signal is a functional unit upon which selection can act, (2) both content and efficacy-driven selection pressures must be considered when studying the evolution of complex signaling, and (3) individual signals or components do not necessarily contribute to complex signal function independently, but may interact in a functional way. Communicated by A. Cockburn

8.
Persistence of species in fragmented landscapes depends on dispersal among suitable breeding sites, and dispersal is often influenced by the "matrix" habitats that lie between breeding sites. However, measuring effects of different matrix habitats on movement and incorporating those differences into spatially explicit models to predict dispersal is costly in terms of time and financial resources. Hence a key question for conservation managers is: Do more costly, complex movement models yield more accurate dispersal predictions? We compared the abilities of a range of movement models, from simple to complex, to predict the dispersal of an endangered butterfly, the Saint Francis' satyr (Neonympha mitchellii francisci). The value of more complex models differed depending on how value was assessed. Although the most complex model, based on detailed movement behaviors, best predicted observed dispersal rates, it was only slightly better than the simplest model, which was based solely on distance between sites. Consequently, a parsimony approach using information criteria favors the simplest model we examined. However, when we applied the models to a larger landscape that included proposed habitat restoration sites, in which the composition of the matrix was different than the matrix surrounding extant breeding sites, the simplest model failed to identify a potentially important dispersal barrier, open habitat that butterflies rarely enter, which may completely isolate some of the proposed restoration sites from other breeding sites. Finally, we found that, although the gain in predicting dispersal with increasing model complexity was small, so was the increase in financial cost. Furthermore, a greater fit continued to accrue with greater financial cost, and more complex models made substantially different predictions than simple models when applied to a novel landscape in which butterflies are to be reintroduced to bolster their populations. 
This suggests that more complex models might be justifiable on financial grounds. Our results caution against a pure parsimony approach to deciding how complex movement models need to be to accurately predict dispersal through the matrix, especially if the models are to be applied to novel or modified landscapes.

9.
Patch Size and Connectivity Thresholds for Butterfly Habitat Restoration
Recovery of endangered species in highly fragmented habitats often requires habitat restoration. Selection of restoration sites typically involves too many options and too much uncertainty to reach a decision based on existing reserve design methods. The Fender's blue butterfly (Icaricia icarioides fenderi) survives in small, isolated patches of remnant prairie in Oregon's Willamette Valley—a habitat for which <0.5% of the original remains. Recovery of this species will require considerable habitat restoration. We investigated the potential of biologically based rules of thumb and more complex models to serve as tools in making land acquisitions. Based on Fender's blue dispersal behavior and demography, we have estimated that restored patches should be <1 km from existing habitat and at least 2 ha. We compared these rules to the results of two modeling approaches: an incidence function model and a spatially explicit simulation of demography and dispersal behavior. Not surprisingly, the simple rules and complex models all conclude that large (>2 ha), connected (<1 km) patches have the highest restoration value. The dispersal model, however, suggests that small, connected patches have more restoration value than large, isolated patches, whereas the incidence function model suggests that size and connectivity are equally important. These differences stem from model assumptions. We used incidence functions to predict long-term, stochastic, steady-state conditions and dispersal simulations to predict short-term (25-year) colonization dynamics. To apply our results in the context of selecting restoration sites on the ground, we recommend selecting nearby sites when short-term colonization dynamics are expected to be an important aspect of a species' biology.

10.
Deterministic, size-structured models are widely used to describe consumer-resource interactions. Such models typically ignore potentially large random variability in juvenile development rates. We present simple representations of this variability and show five approaches to calculating the model parameters for Daphnia pulex interacting with its algal food. Using our parameterized models of growth variability, we investigate the robustness of a recently proposed stabilizing mechanism for Daphnia populations. Growth rate variability increases the range of enrichments over which small-amplitude cycles or quasi-cycles occur, thus increasing the plausibility that the underlying mechanism contributes to the prevalence of small-amplitude cycles in the field and in experiments. More generally, our approach allows us to relate commonly available information on variance of development times to population stability.

11.
Geostatistical model averaging based on conditional information criteria
Variable selection in geostatistical regression is an important problem, but it has not been well studied in the literature. In this paper, we focus on spatial prediction and consider a class of conditional information criteria indexed by a penalty parameter. Applying a fixed criterion leads to an unstable predictor: the predictor is discontinuous in the response variables, because a small change in the response may cause a different model to be selected. We therefore stabilize the predictor by local model averaging, resulting in a predictor that is not only continuous but also differentiable, even after plugging in estimated model parameters. Stein’s unbiased risk estimate is then applied to select the penalty parameter, leading to a data-dependent penalty that is adaptive to the underlying model. Numerical experiments show the superiority of the proposed model averaging method over some commonly used variable selection methods. In addition, the proposed method is applied to a mercury data set for lakes in Maine.

12.
In ecological and behavioral research, drawing reliable conclusions from statistical models with multiple predictors is usually difficult if all predictors are simultaneously in the model. The traditional way of handling multiple predictors has been the use of threshold-based removal-introduction algorithms, that is, stepwise regression, which currently receives considerable criticism. A more recent and increasingly propagated modelling method for multiple predictors is the information theoretic (IT) approach, which quantifies the relative suitability of multiple, potentially non-nested models based on a balance of model fit and the accuracy of estimates. Here, we examine three shortcomings of stepwise regression (subjective critical values, model uncertainty, and parameter estimation bias) that the IT approach has been suggested to avoid. We argue that, in certain circumstances, the IT approach may be sensitive to these issues as well. We point to areas where further testing and development could enhance the performance of IT methods and ultimately lead to robust inferences in behavioral ecology.

13.
In conservation biology, uncertainty about the choice of a statistical model is rarely considered. Model-selection uncertainty occurs whenever one model is chosen over plausible alternative models to represent understanding about a process and to make predictions about future observations. The standard approach to representing prediction uncertainty involves the calculation of prediction (or confidence) intervals that incorporate uncertainty about parameter estimates contingent on the choice of a "best" model chosen to represent truth. However, this approach to prediction based on statistical models tends to ignore model-selection uncertainty, resulting in overconfident predictions. Bayesian model averaging (BMA) has been promoted in a range of disciplines as a simple means of incorporating model-selection uncertainty into statistical inference and prediction. Bayesian model averaging also provides a formal framework for incorporating prior knowledge about the process being modeled. We provide an example of the application of BMA in modeling and predicting the spatial distribution of an arboreal marsupial in the Eden region of southeastern Australia. Other approaches to estimating prediction uncertainty are discussed.
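A BMA prediction is simply a posterior-weighted average of per-model predictions; with the common BIC approximation to the marginal likelihood it can be sketched as follows (the BIC values and predicted probabilities are hypothetical, not from the marsupial study):

```python
import math

# Hypothetical BIC values and predicted occupancy probabilities at a new site.
bic = [210.3, 212.1, 218.0]
preds = [0.42, 0.55, 0.31]

# Approximate posterior model probabilities: proportional to exp(-BIC / 2).
best = min(bic)
rel = [math.exp(-(b - best) / 2.0) for b in bic]
post = [r / sum(rel) for r in rel]

# The model-averaged prediction incorporates model-selection uncertainty.
bma_pred = sum(w * p for w, p in zip(post, preds))
```

Prediction intervals built on `bma_pred` are wider than those conditioned on a single "best" model, which is the point the abstract makes about overconfidence.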

14.
Two basic and competing approaches for measuring the benefits of pollution abatement have found support in the recent literature: the property value approach and the health damage function approach. The purpose of this paper is to show that conditions will often exist when the property value approach will not accurately measure all benefits, and conditions will always be present that cause the health damage function approach to underestimate benefits. In general, neither approach can stand alone. It is possible, however, that the two approaches can be combined in such a way as to improve the measurement of abatement benefits. We present an approach for combining these two methods by introducing an “information coefficient” that measures the degree of knowledge about pollution effects held by the public. Approaches to estimating the information coefficient are suggested.

15.
Non-independent mate selection occurs when the choice behavior of a female is altered by the interactions between other females and males. In the fiddler crab Uca mjoebergi, males court mate-searching females by waving their one greatly enlarged claw. When a female approaches a male, he initiates high-intensity waving. We conducted one natural mate choice experiment and two mate choice experiments using custom-built robotic crabs. We show that the decision of one female to approach a group of males increases the probability that another female will approach and visit a male from the same group. We suggest that this behavior is best explained by the ‘stimulus enhancement’ hypothesis, where the presence of a female near a group of males makes them more likely to be detected by other females due to an increase in male display rate.

16.

Background

Semi-natural plant communities such as field boundaries play an important ecological role in agricultural landscapes, e.g., provision of refuge for plants and other species, food web support, or habitat connectivity. To prevent undesired effects of herbicide applications on these communities and their structure, herbicide registration and application are regulated by risk assessment schemes in many industrialized countries. Standardized individual-level greenhouse experiments are conducted on a selection of crop and wild plant species to characterize the effects on non-target plants of herbicide loads potentially reaching off-field areas. Uncertainties regarding the protectiveness of such risk assessment approaches might be addressed by assessment factors, which are often under discussion. As an alternative approach, plant community models can be used to predict potential effects on plant communities of interest by extrapolating the individual-level effects measured in the standardized greenhouse experiments. In this study, we analyzed the reliability and adequacy of the plant community model IBC-grass (individual-based plant community model for grasslands) by comparing model predictions with empirically measured effects at the plant community level.

Results

We showed that the effects predicted by the model IBC-grass were in accordance with the empirical data. Based on the species-specific dose responses (calculated from empirical effects in monocultures measured 4 weeks after application), the model was able to realistically predict short-term herbicide impacts on communities when compared to empirical data.

Conclusion

The results presented in this study demonstrate how the current standard greenhouse experiments, which measure herbicide impacts at the individual level, can be coupled with the model IBC-grass to estimate effects at the plant community level. In this way, the model can be used as a tool in ecological risk assessment.
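The species-specific dose responses that feed such a community model are commonly fitted with a log-logistic curve; a minimal sketch (the log-logistic form, EC50, and slope here are illustrative assumptions, not parameters from the study):

```python
def log_logistic(dose, ec50, slope):
    """Two-parameter log-logistic dose response: fraction of control biomass."""
    if dose == 0:
        return 1.0
    return 1.0 / (1.0 + (dose / ec50) ** slope)

# Hypothetical EC50 = 10 (dose units) and slope = 2 for one species in monoculture.
effects = [log_logistic(d, ec50=10.0, slope=2.0) for d in (0, 5, 10, 20)]
print(effects)  # → [1.0, 0.8, 0.5, 0.2]
```

Curves like this, one per species, are what an individual-based community model can sample from when extrapolating greenhouse effects to the community level.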

17.
Statistical methods emphasizing formal hypothesis testing have dominated the analyses used by ecologists to gain insight from data. Here, we review alternatives to hypothesis testing including techniques for parameter estimation and model selection using likelihood and Bayesian techniques. These methods emphasize evaluation of weight of evidence for multiple hypotheses, multimodel inference, and use of prior information in analysis. We provide a tutorial for maximum likelihood estimation of model parameters and model selection using information theoretics, including a brief treatment of procedures for model comparison, model averaging, and use of data from multiple sources. We discuss the advantages of likelihood estimation, Bayesian analysis, and meta-analysis as ways to accumulate understanding across multiple studies. These statistical methods hold promise for new insight in ecology by encouraging thoughtful model building as part of inquiry, providing a unified framework for the empirical analysis of theoretical models, and by facilitating the formal accumulation of evidence bearing on fundamental questions.

18.
The performance of statistical methods for modeling resource selection by animals is difficult to evaluate with field data because true selection patterns are unknown. Simulated data based on a known probability distribution, though, can be used to evaluate statistical methods. Models should estimate true selection patterns if they are to be useful in analyzing and interpreting field data. We used simulation techniques to evaluate the effectiveness of three statistical methods used in modeling resource selection. We generated 25 use locations per animal and included 10, 20, 40, or 80 animals in samples of use locations. To simulate species of different mobility, we generated use locations at four levels according to a known probability distribution across DeSoto National Wildlife Refuge (DNWR) in eastern Nebraska and western Iowa, USA. We either generated 5 random locations per use location or 10,000 random locations (total) within 4 predetermined areas around use locations to determine how the definition of availability and the number of random locations affected results. We analyzed simulated data using discrete choice, logistic-regression, and a maximum entropy method (Maxent). We used a simple linear regression of estimated and known probability distributions and area under receiver operating characteristic curves (AUC) to evaluate the performance of each method. Each statistical method was affected differently by number of animals and random locations used in analyses, level at which selection of resources occurred, and area considered available. Discrete-choice modeling resulted in precise and accurate estimates of the true probability distribution when the area in which use locations were generated was ≥ the area defined to be available. 
Logistic-regression models were unbiased and precise when the area in which use locations were generated and the area defined to be available were the same size; the fit of these models improved with increased numbers of random locations. Maxent resulted in unbiased and precise estimates of the known probability distribution when the area in which use locations were generated was small (home-range level) and the area defined to be available was large (study area). Based on AUC analyses, all models estimated the selection distribution better than random chance. Results from AUC analyses, however, often contradicted results of the linear regression method used to evaluate model performance. Discrete-choice modeling was best able to estimate the known selection distribution in our study area regardless of sample size or number of random locations used in the analyses, but we recommend further studies using simulated data over different landscapes and different resource metrics to confirm our results. Our study offers an approach and guidance for others interested in assessing the utility of techniques for modeling resource selection in their study area.
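The AUC used above to score the fitted models can be computed directly from the rank-sum (Mann-Whitney) identity: the proportion of used/available pairs the model ranks correctly. A sketch with hypothetical scores (not values from the DNWR simulations):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability a used point outscores an available point (ties count 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for q in neg_scores:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical predicted selection scores at used vs. available locations.
used = [0.8, 0.7, 0.6, 0.55]
avail = [0.5, 0.4, 0.65, 0.3]
print(auc(used, avail))  # → 0.875
```

An AUC of 0.5 is random ranking and 1.0 is perfect separation, which is why it can disagree with a regression of estimated against known probabilities: AUC only measures ranking, not calibration.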

19.
Interspecific interactions are often difficult to elucidate, particularly with large vertebrates at large spatial scales. Here, we describe a methodology for estimating interspecific interactions by combining stable isotopes with bioenergetics. We illustrate this approach by modeling the population dynamics and species interactions of a suite of vertebrates on Santa Cruz Island, California, USA: two endemic carnivores (the island fox and island spotted skunk), an exotic herbivore (the feral pig), and their shared predator, the Golden Eagle. Sensitivity analyses suggest that our parameter estimates are robust, and natural history observations suggest that our overall approach captures the species interactions in this vertebrate community. Nonetheless, several factors provide challenges to using isotopes to infer species interactions. Knowledge regarding species-specific isotopic fractionation and diet breadth is often lacking, necessitating detailed laboratory studies and natural history information. However, when coupled with other approaches, including bioenergetics, mechanistic models, and natural history, stable isotopes can be powerful tools in illuminating interspecific interactions and community dynamics.

20.
A benefit function transfer obtains estimates of willingness-to-pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the benefit transfer model. A more expensive alternative for estimating WTP is to analyze only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian model averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small data set is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample about eightfold when the forest is ‘poolable’.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号