Similar Literature
20 similar documents found (search time: 31 ms)
1.
de Valpine P  Rosenheim JA 《Ecology》2008,89(2):532-541
Robust analyses of noisy, stage-structured, irregularly spaced, field-scale data incorporating multiple sources of variability and nonlinear dynamics remain very limited, hindering understanding of how small-scale studies relate to large-scale population dynamics. We used a novel, complementary Bayesian and frequentist state-space model analysis to ask how density, temperature, plant nitrogen, and predators affect cotton aphid (Aphis gossypii) population dynamics in weekly data from 18 field-years and whether estimated effects are consistent with small-scale studies. We found clear roles of density and temperature but not of plant nitrogen or predators, for which Bayesian and frequentist evidence differed. However, overall predictability of field-scale dynamics remained low. This study demonstrates stage-structured state-space model analysis incorporating bottom-up, top-down, and density-dependent effects for within-season (nearly continuous time), nonlinear population dynamics. The analysis combines Bayesian posterior evidence with maximum-likelihood estimation and frequentist hypothesis testing using average one-step-ahead residuals.
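The one-step-ahead residual diagnostic used above can be sketched with a much simpler, unstructured model. The counts and parameters below are hypothetical, and a plain Ricker map stands in for the paper's stage-structured state-space model:

```python
import math

def ricker_step(n, r, K):
    """One-step prediction of a Ricker model: N[t+1] = N[t] * exp(r * (1 - N[t]/K))."""
    return n * math.exp(r * (1.0 - n / K))

def one_step_residuals(series, r, K):
    """Log-scale one-step-ahead residuals: log(observed) - log(predicted)."""
    return [math.log(series[t + 1]) - math.log(ricker_step(series[t], r, K))
            for t in range(len(series) - 1)]

# Hypothetical weekly aphid counts
counts = [10.0, 25.0, 55.0, 90.0, 110.0, 105.0]
res = one_step_residuals(counts, r=1.0, K=120.0)
```

Systematic structure in such residuals would indicate model misspecification; the paper applies the same idea, averaged across series, for frequentist hypothesis testing.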

2.
Advances in computing power in the past 20 years have led to a proliferation of spatially explicit, individual-based models of population and ecosystem dynamics. In forest ecosystems, the individual-based models encapsulate an emerging theory of "neighborhood" dynamics, in which fine-scale spatial interactions regulate the demography of component tree species. The spatial distribution of component species, in turn, regulates spatial variation in a whole host of community and ecosystem properties, with subsequent feedbacks on component species. The development of these models has been facilitated by development of new methods of analysis of field data, in which critical demographic rates and ecosystem processes are analyzed in terms of the spatial distributions of neighboring trees and physical environmental factors. The analyses are based on likelihood methods and information theory, and they allow a tight linkage between the models and explicit parameterization of the models from field data. Maximum likelihood methods have a long history of use for point and interval estimation in statistics. In contrast, likelihood principles have only more gradually emerged in ecology as the foundation for an alternative to traditional hypothesis testing. The alternative framework stresses the process of identifying and selecting among competing models, or in the simplest case, among competing point estimates of a parameter of a model. There are four general steps involved in a likelihood analysis: (1) model specification, (2) parameter estimation using maximum likelihood methods, (3) model comparison, and (4) model evaluation. Our goal in this paper is to review recent developments in the use of likelihood methods and modeling for the analysis of neighborhood processes in forest ecosystems. 
We will focus on a single class of processes, seed dispersal and seedling dispersion, because recent papers provide compelling evidence of the potential power of the approach, and illustrate some of the statistical challenges in applying the methods.

3.
Line-transect analysis is a widely used method of estimating plant and animal density and abundance. A Bayesian approach to a basic line-transect analysis is developed for a half-normal detection function. We extend the model of Karunamuni and Quinn [Karunamuni, R.J., Quinn II, T.J., 1995. Bayesian estimation of animal abundance for line-transect sampling. Biometrics 51, 1325–1337] by including a binomial likelihood function for the number of objects detected. The method computes a joint posterior distribution on the effective strip width and the density of objects in the sampled area. Analytical and computational methods for binned and unbinned perpendicular distance data are provided. Existing information about effective strip width and density can be brought into the analysis via prior distributions. The Bayesian approach is compared to a standard line-transect analysis using both real and simulated data. Results of the Bayesian and non-Bayesian analyses are similar when there are no prior data on effective strip width or density, but the Bayesian approach performs better when such data are available from previous or related studies. Practical methods for including prior data on effective strip width and density are suggested. A numerical example shows how the Bayesian approach can provide valid estimates when the sample size is too small for the standard approach to work reliably. The proposed Bayesian approach can form the basis for developing more advanced analyses.
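Setting the Bayesian machinery aside, the half-normal point estimates underlying such an analysis are straightforward. A minimal sketch with hypothetical survey numbers:

```python
import math

def effective_strip_width(sigma):
    """Effective strip width for a half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)): the integral of g over [0, inf),
    which equals sigma * sqrt(pi / 2)."""
    return sigma * math.sqrt(math.pi / 2.0)

def density_estimate(n_detected, line_length, sigma):
    """Point estimate of density: detections per unit area effectively
    surveyed, where that area is 2 * L * ESW (both sides of the line)."""
    return n_detected / (2.0 * line_length * effective_strip_width(sigma))

# Hypothetical survey: 40 detections along 10 km of transect, sigma = 0.05 km
d = density_estimate(40, 10.0, 0.05)
```

The Bayesian version of the paper places a joint posterior on the strip width and density rather than plugging in a single sigma.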

4.
Model averaging, specifically information theoretic approaches based on Akaike’s information criterion (IT-AIC approaches), has had a major influence on statistical practice in the field of ecology and evolution. However, a neglected issue is that, in common with most other model fitting approaches, IT-AIC methods are sensitive to the presence of missing observations. The commonest way of handling missing data is complete-case analysis (the deletion from the dataset of all cases containing any missing values). It is well known that this results in reduced estimation precision (or reduced statistical power) and biased parameter estimates; however, the implications for model selection have not been explored. Here we employ an example from behavioural ecology to illustrate how missing data can affect the conclusions drawn from model selection or hypothesis testing. We show how missing observations can be recovered to give accurate estimates for IT-related indices (e.g. AIC and Akaike weight) as well as parameters (and their standard errors) by utilizing ‘multiple imputation’. We use this paper to illustrate key concepts from missing data theory and as a basis for discussing available methods for handling missing data. The example is intended to serve as a practically oriented case study for behavioural ecologists deciding how to handle missing data in their own datasets, and also as a first attempt to consider the problems of conducting model selection and averaging in the presence of missing observations.
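The IT-related indices mentioned (AIC and Akaike weights) are simple to compute once each candidate model's maximised log-likelihood is known. A minimal sketch with hypothetical values:

```python
import math

def aic(log_lik, n_params):
    """Akaike's information criterion: -2 log L + 2 k."""
    return -2.0 * log_lik + 2.0 * n_params

def akaike_weights(aics):
    """Akaike weights: relative likelihoods exp(-0.5 * delta_AIC),
    normalised so the weights sum to 1 over the candidate set."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical candidate set: (maximised log-likelihood, number of parameters)
models = [(-120.3, 3), (-119.8, 4), (-119.7, 6)]
aics = [aic(ll, k) for ll, k in models]
weights = akaike_weights(aics)
```

The paper's point is that both the log-likelihoods and hence these weights shift when cases with missing values are simply deleted, which multiple imputation can correct.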

5.
Bayesian Methods in Conservation Biology
Abstract: Bayesian statistical inference provides an alternative way to analyze data that is likely to be more appropriate for conservation biology problems than traditional statistical methods. I contrast Bayesian techniques with traditional hypothesis-testing techniques using examples applicable to conservation. I use a trend analysis of two hypothetical populations to illustrate how easy it is to understand Bayesian results, which are given in terms of probability. Bayesian trend analysis indicated that the two populations had very different chances of declining at biologically important rates. For example, the probability that the first population was declining faster than 5% per year was 0.00, compared to a probability of 0.86 for the second population. The Bayesian results appropriately identified which population was of greater conservation concern. The Bayesian results contrast with those obtained with traditional hypothesis testing. Hypothesis testing indicated that the first population, which the Bayesian analysis indicated had no chance of declining at > 5% per year, was declining significantly, because it was declining at a slow rate and the abundance estimates were precise. Despite the high probability that the second population was experiencing a serious decline, hypothesis testing failed to reject the null hypothesis of no decline because the abundance estimates were imprecise. Finally, I extended the trend analysis to illustrate Bayesian decision theory, which allows for choice among more than two decisions and allows explicit specification of the consequences of various errors. The Bayesian results again differed from the traditional results: the decision analysis led to the conclusion that the first population was declining slowly and the second population was declining rapidly.
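The kind of probability statement quoted above follows directly from a posterior distribution on the trend. A minimal sketch, assuming a normal posterior on the log-scale annual slope (the posterior means and standard deviations below are hypothetical, chosen to mimic the two populations described):

```python
import math

def normal_cdf(x, mean=0.0, sd=1.0):
    """Normal cumulative distribution function via the error function."""
    return 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))

def prob_decline_faster_than(rate, post_mean, post_sd):
    """P(annual growth rate < -rate), assuming a normal posterior on the
    log-scale trend slope; a 5% annual decline corresponds to log(0.95)."""
    return normal_cdf(math.log(1.0 - rate), mean=post_mean, sd=post_sd)

# Hypothetical posteriors: (log-slope mean, sd)
p1 = prob_decline_faster_than(0.05, post_mean=-0.01, post_sd=0.005)  # slow, precise
p2 = prob_decline_faster_than(0.05, post_mean=-0.08, post_sd=0.030)  # fast, imprecise
```

With these hypothetical numbers, the first probability is essentially zero and the second is above 0.8, mirroring the contrast the abstract describes.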

6.
In the mid-1980s, the Dutch NOx air quality monitoring network was reduced from 73 to 32 rural and city background stations, leading to higher spatial uncertainties. In this study, several other sources of information are used to help reduce uncertainties in parameter estimation and spatial mapping. For parameter estimation, we used Bayesian inference. For mapping, we used kriging with external drift (KED), including secondary information from a dispersion model. The methods were applied to atmospheric NOx concentrations on rural and urban scales. We compared Bayesian estimation with restricted maximum likelihood estimation, and KED with universal kriging. As a reference we also included ordinary least squares (OLS). The parameter estimation and spatial interpolation methods were compared by cross-validation. Bayesian analysis resulted in an error reduction of 10 to 20% compared with restricted maximum likelihood, whereas KED resulted in an error reduction of 50% compared with universal kriging. Where observations were sparse, the predictions were substantially improved by inclusion of the dispersion model output and by using available prior information. No major improvement was observed relative to OLS, presumably because much good information is contained in the dispersion model output, so that no additional spatial residual random field is required to explain the data. In all, we conclude that the reduction in the monitoring network could be compensated for by modern geostatistical methods, and that a traditional simple statistical model is of almost equal quality.

7.
A benefit function transfer obtains estimates of willingness-to-pay (WTP) for the evaluation of a given policy at a site by combining existing information from different study sites. This has the advantage that more efficient estimates are obtained, but it relies on the assumption that the heterogeneity between sites is appropriately captured in the benefit transfer model. A more expensive alternative for estimating WTP is to analyze only data from the policy site in question while ignoring information from other sites. We make use of the fact that these two choices can be viewed as a model selection problem and extend the set of models to allow for the hypothesis that the benefit function is only applicable to a subset of sites. We show how Bayesian model averaging (BMA) techniques can be used to optimally combine information from all models. The Bayesian algorithm searches for the set of sites that can form the basis for estimating a benefit function and reveals whether such information can be transferred to new sites for which only a small data set is available. We illustrate the method with a sample of 42 forests from the U.K. and Ireland. We find that BMA benefit function transfer produces reliable estimates and can increase the information content of a small sample about eightfold when the forest is ‘poolable’.
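The averaging step itself can be sketched with a BIC approximation to the posterior model probabilities. The per-model WTP estimates and BIC values below are hypothetical; the paper's algorithm additionally searches over which sites are poolable:

```python
import math

def bic_weights(bics):
    """Approximate posterior model probabilities from BIC values,
    assuming equal prior probabilities across models."""
    best = min(bics)
    rel = [math.exp(-0.5 * (b - best)) for b in bics]
    total = sum(rel)
    return [r / total for r in rel]

def bma_estimate(estimates, bics):
    """Model-averaged estimate: posterior-probability-weighted mean of the
    per-model estimates (here, WTP)."""
    w = bic_weights(bics)
    return sum(wi * ei for wi, ei in zip(w, estimates))

# Hypothetical per-model WTP estimates and BICs
wtp = bma_estimate([4.2, 5.1, 3.8], [310.0, 312.5, 318.0])
```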

8.
Observed spatial patterns in natural systems may result from processes acting across multiple spatial and temporal scales. Although spatially explicit data on processes that generate ecological patterns, such as the distribution of disease over a landscape, are frequently unavailable, information about the scales over which processes operate can be used to understand the link between pattern and process. Our goal was to identify scales of mule deer (Odocoileus hemionus) movement and mixing that exerted the greatest influence on the spatial pattern of chronic wasting disease (CWD) in northcentral Colorado, USA. We hypothesized that three scales of mixing (individual, winter subpopulation, or summer subpopulation) might control spatial variation in disease prevalence. We developed a fully Bayesian hierarchical model to compare the strength of evidence for each mixing scale. We found strong evidence that the finest mixing scale corresponded best to the spatial distribution of CWD infection. There was also evidence that land ownership and habitat use play a role in exacerbating the disease, along with the known effects of sex and age. Our analysis demonstrates how information on the scales of spatial processes that generate observed patterns can be used to gain insight when process data are sparse or unavailable.

9.
An important aspect of species distribution modelling is the choice of the modelling method because a suboptimal method may have poor predictive performance. Previous comparisons have found that novel methods, such as Maxent models, outperform well-established modelling methods, such as the standard logistic regression. These comparisons used training samples with small numbers of occurrences per estimated model parameter, and this limited sample size may have caused poorer predictive performance due to overfitting. Our hypothesis is that Maxent models would outperform a standard logistic regression because Maxent models avoid overfitting by using regularisation techniques and a standard logistic regression does not. Regularisation can be applied to logistic regression models using penalised maximum likelihood estimation. This estimation procedure shrinks the regression coefficients towards zero, causing biased predictions if applied to the training sample but improving the accuracy of new predictions. We used Maxent and logistic regression (standard and penalised) to analyse presence/pseudo-absence data for 13 tree species and evaluated the predictive performance (discrimination) using presence-absence data. The penalised logistic regression outperformed standard logistic regression and equalled the performance of Maxent. The penalised logistic regression may be considered one of the best methods to develop species distribution models trained with presence/pseudo-absence data, as it is comparable to Maxent. Our results encourage further use of the penalised logistic regression for species distribution modelling, especially in those cases in which a complex model must be fitted to a sample with a limited size.
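The shrinkage effect described can be sketched with a ridge-type (L2) penalty fitted by plain gradient descent on a tiny hypothetical presence/pseudo-absence sample; real applications would use dedicated penalised-ML routines with a cross-validated penalty:

```python
import math

def fit_logistic(xs, ys, penalty=0.0, lr=0.1, n_iter=5000):
    """Logistic regression with an optional L2 (ridge) penalty on the slope,
    fitted by gradient descent on the penalised negative log-likelihood.
    The intercept is left unpenalised, as is conventional."""
    b0, b1 = 0.0, 0.0
    n = len(xs)
    for _ in range(n_iter):
        g0 = g1 = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
            g0 += (p - y)
            g1 += (p - y) * x
        g1 += penalty * b1  # ridge term shrinks the slope toward zero
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

# Hypothetical presences (1) and pseudo-absences (0) along one gradient
xs = [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5]
ys = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
_, slope_mle = fit_logistic(xs, ys, penalty=0.0)
_, slope_pen = fit_logistic(xs, ys, penalty=2.0)
```

The penalised slope is biased toward zero relative to the maximum-likelihood slope, which is exactly the trade-off the abstract describes: worse in-sample fit, better new predictions.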

10.
Individual-based models (IBMs) have improved in quality and reliability in recent years through an approach called pattern-oriented modelling (POM). POM proposes guidelines for developing models that reproduce multiple patterns observed in the field and for testing systematically how well the IBMs reproduce them. POM studies have generally used traditional goodness-of-fit methods such as sum-of-squares evaluation or ad hoc comparisons of fitting errors and variations. Model selection, however, can be a rigorous statistical approach based on information theory and information criteria such as Akaike's information criterion (AIC) or the deviance information criterion (DIC). So far, no attempt has been made to link POM to these rigorous techniques. The main obstacles are: (a) the difficulty of obtaining likelihood functions for IBMs’ parameters and (b) the difficulty of obtaining posterior distributions of IBMs’ parameters given the patterns to reproduce. The first part of this paper addresses problem (a) by proposing and explaining how to calculate a deviance measure (POMDEV) for models developed in a POM context. The second part, while addressing problem (b), proposes an information criterion for model selection in a POM context (the pattern-oriented modelling information criterion: POMIC). This criterion does not yet have the same theoretical foundation as, e.g., AIC, but uses formal analogies to the DIC. In the third part, POMIC is tested in a modelling exercise, which shows its potential to use multiple patterns for selecting among multiple candidate submodels and eventually select the most parsimonious and well-fitting model version. We conclude that POMIC, although a heuristically derived approach, can greatly improve the POM framework.

11.
Brook BW  Bradshaw CJ 《Ecology》2006,87(6):1445-1451
Population limitation is a fundamental tenet of ecology, but the relative roles of exogenous and endogenous mechanisms remain unquantified for most species. Here we used multi-model inference (MMI), a form of model averaging based on information theory (Akaike's Information Criterion), to evaluate the relative strength of evidence for density-dependent and density-independent population dynamical models in long-term abundance time series of 1198 species. We also compared the MMI results to more classic methods for detecting density dependence: Neyman-Pearson hypothesis testing and best-model selection using the Bayesian Information Criterion or cross-validation. Using MMI on our large database, we show that density dependence is a pervasive feature of population dynamics (median MMI support for density dependence = 74.7-92.2%), and that this holds across widely different taxa. The weight of evidence for density dependence varied among species but increased consistently with the number of generations monitored. Best-model selection methods yielded similar results to MMI (a density-dependent model was favored in 66.2-93.9% of species time series), while the hypothesis-testing methods detected density dependence less frequently (32.6-49.8%). There were no obvious differences in the prevalence of density dependence across major taxonomic groups under any of the statistical methods used. These results underscore the value of using multiple modes of analysis to quantify the relative empirical support for a set of working hypotheses that encompass a range of realistic population dynamical behaviors.
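For a single time series, the best-model comparison can be sketched by fitting density-independent and Gompertz-type density-dependent models to log growth rates and comparing AIC. The abundance series below is hypothetical, and the AIC is the Gaussian least-squares form up to an additive constant:

```python
import math

def ols(xs, ys):
    """Simple least squares: returns (intercept, slope, residual sum of squares)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    rss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    return intercept, slope, rss

def aic_gaussian(rss, n, k):
    """Gaussian AIC up to an additive constant: n * log(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical abundance series with strong density dependence
N = [20, 45, 80, 110, 100, 115, 95, 120, 105, 110]
logN = [math.log(x) for x in N]
r = [logN[t + 1] - logN[t] for t in range(len(N) - 1)]
n = len(r)

# Density-independent model: r_t = a  (k = 2: mean + variance)
mean_r = sum(r) / n
rss0 = sum((ri - mean_r) ** 2 for ri in r)
aic_di = aic_gaussian(rss0, n, 2)

# Gompertz density-dependent model: r_t = a + b * log(N_t)  (k = 3)
_, b, rss1 = ols(logN[:-1], r)
aic_dd = aic_gaussian(rss1, n, 3)
```

A negative slope b and a lower AIC for the second model constitute evidence for density dependence; MMI would convert the AIC difference into model weights rather than simply picking the best model.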

12.
Abstract:  Soberón and Llorente (1993) proposed pure-birth stochastic processes as theoretical models for species-accumulation curves, and these processes have frequently been used to describe the progress of biological inventories. We describe, in algorithmic form, an alternative statistical analysis based on a likelihood approach (Díaz-Francés & Gorostiza 2002) that provides mathematical rigor to the ideas in Soberón and Llorente (1993) and improves the estimation of the models by incorporating the facts that the variance of the error is not constant and that the observations are correlated. Additionally, we used the likelihood ratios between candidate models as an objective procedure for model selection, allowing comparison of the goodness of fit of the various models. The software for these statistical methods can now be downloaded from the Internet. We used two butterfly data sets to illustrate the use of the methods and the software.

13.
Models for the analysis of habitat selection data incorporate covariates in an independent multinomial selections model (McCracken et al. 1998) and in an extension of that model that includes a persistence parameter (Ramsey and Usner 2003). In both cases, all parameters are assumed to be fixed through time. Radio telemetry data collected for habitat selection studies typically consist of animal relocations through time, suggesting the need for an extension to these models. We use a Bayesian approach that allows the habitat selection probabilities, the persistence parameter, or both to change with season. These extensions are particularly important when movement patterns are expected to differ seasonally and/or when availabilities of habitats change throughout the study period due to weather or migration. We implement and compare the models using radio telemetry data for westslope cutthroat trout in two streams in eastern Oregon.

14.
Abstract:  Over the last decade, criticisms of null-hypothesis significance testing have grown dramatically, and several alternative practices, such as confidence intervals, information theoretic, and Bayesian methods, have been advocated. Have these calls for change had an impact on the statistical reporting practices in conservation biology? In 2000 and 2001, 92% of sampled articles in Conservation Biology and Biological Conservation reported results of null-hypothesis tests. In 2005 this figure dropped to 78%. There were corresponding increases in the use of confidence intervals, information theoretic, and Bayesian techniques. Of those articles reporting null-hypothesis testing—which still easily constitute the majority—very few report statistical power (8%) and many misinterpret statistical nonsignificance as evidence for no effect (63%). Overall, results of our survey show some improvements in statistical practice, but further efforts are clearly required to move the discipline toward improved practices.

15.
Clough Y 《Ecology》2012,93(8):1809-1815
The need to model and test hypotheses about complex ecological systems has led to a steady increase in use of path analytical techniques, which allow the modeling of multiple multivariate dependencies reflecting hypothesized causation and mechanisms. The aim is to achieve the estimation of direct, indirect, and total effects of one variable on another and to assess the adequacy of whole models. Path analytical techniques based on maximum likelihood currently used in ecology are rarely adequate for ecological data, which are often sparse, multi-level, and may contain nonlinear relationships as well as nonnormal response data such as counts or proportion data. Here I introduce a more flexible approach in the form of the joint application of hierarchical Bayes, Markov chain Monte Carlo algorithms, Shipley's d-sep test, and the potential outcomes framework to fit path models as well as to decompose and estimate effects. An example based on the direct and indirect interactions between ants, two insect herbivores, and a plant species demonstrates the implementation of these techniques, using freely available software.
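Shipley's d-sep test combines the p-values of the k independence claims implied by the path model into Fisher's C statistic, which is compared to a chi-square distribution with 2k degrees of freedom. A minimal sketch with hypothetical p-values (the closed-form survival function below is valid because 2k is always even):

```python
import math

def fishers_c(p_values):
    """Fisher's C statistic for Shipley's d-sep test: C = -2 * sum(log p_i)."""
    return -2.0 * sum(math.log(p) for p in p_values)

def chi2_sf_even_df(c, df):
    """Chi-square survival function for even df, using the closed form
    P(X > c) = exp(-c/2) * sum_{i < df/2} (c/2)^i / i!."""
    k = df // 2
    half = c / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

# Hypothetical p-values from the model's independence claims
ps = [0.42, 0.13, 0.55, 0.08]
C = fishers_c(ps)
p_model = chi2_sf_even_df(C, 2 * len(ps))  # large p => model not rejected
```

A small p_model would indicate that the data contradict at least one conditional independence implied by the hypothesized causal structure.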

16.
Freshwater aquatic systems in North America are being invaded by many different species, ranging from fish, mollusks, and cladocerans to various bacteria and viruses. These invasions have serious ecological and economic impacts. Human activities such as recreational boating are an important pathway for dispersal. Gravity models are used to quantify the dispersal effect of human activity. Gravity models currently used in ecology are deterministic. This paper proposes the use of stochastic gravity models in ecology, which provide new capabilities both in model building and in potential model applications. These models allow us to use standard statistical inference tools such as maximum likelihood estimation and model selection based on information criteria. To facilitate prediction, we use only those covariates that are easily available from common data sources and can be forecast in the future. This is important for forecasting the spread of invasive species in the geographical and temporal domains. The proposed model is portable; that is, it can be used to estimate relative boater traffic, and hence relative propagule pressure, for lakes not covered by current boater surveys. This makes our results broadly applicable to various invasion prediction and management models.
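A stochastic gravity model of this kind can treat trip counts as Poisson with a log-linear mean in lake attractiveness and distance, which is what makes maximum likelihood and information-criterion selection available. A minimal sketch; the coefficients and covariate values are hypothetical, for illustration only:

```python
import math

def gravity_mean(attractiveness, distance, b0=-1.0, b_attr=0.8, b_dist=1.5):
    """Expected trips from a source to a lake under a log-linear gravity model:
    log(mu) = b0 + b_attr * log(W) - b_dist * log(d). Coefficients hypothetical."""
    return math.exp(b0 + b_attr * math.log(attractiveness)
                    - b_dist * math.log(distance))

def poisson_log_lik(counts, mus):
    """Poisson log-likelihood of observed trip counts given model means,
    the quantity maximised in ML estimation and used in AIC-type selection."""
    return sum(y * math.log(mu) - mu - math.lgamma(y + 1)
               for y, mu in zip(counts, mus))

# Relative propagule pressure on two hypothetical lakes
near_small = gravity_mean(attractiveness=10.0, distance=5.0)
far_large = gravity_mean(attractiveness=100.0, distance=50.0)
```

Because the fitted mean depends only on forecastable covariates, the same expression can score unsurveyed lakes, which is the portability the abstract emphasises.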

17.
The statistical analysis of environmental data from remote sensing and Earth system simulations often entails the analysis of gridded spatio-temporal data, with a hypothesis test being performed for each grid cell. When the whole image or a set of grid cells is analyzed for a global effect, the problem of multiple testing arises. When no global effect is present, we expect α% of all grid cells to be false positives, and spatially autocorrelated data can give rise to clustered spurious rejections that can be misleading in an analysis of spatial patterns. In this work, we review standard solutions for the multiple testing problem and apply them to spatio-temporal environmental data. These solutions are independent of the test statistic, and any test statistic can be used (e.g., tests for trends or change points in time series). Additionally, we introduce permutation methods and show that they have more statistical power. Real-world data are used to provide examples of the analysis, and the performance of each method is assessed in a comprehensive simulation study that, unlike previous studies, compares the statistical power of the presented methods. In conclusion, we present several statistically rigorous methods for analyzing spatio-temporal environmental data while controlling false positives. These methods allow the use of any test statistic in a wide range of applications in environmental sciences and remote sensing.
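One standard, test-statistic-agnostic solution to the multiple testing problem (whether or not it is among those the paper reviews) is the Benjamini-Hochberg false-discovery-rate procedure. A minimal sketch over hypothetical per-grid-cell p-values:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg FDR procedure: returns a parallel list of booleans
    marking which hypotheses are rejected at false discovery rate alpha.
    Reject the k smallest p-values, where k is the largest rank with
    p_(k) <= (k / m) * alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * alpha / m:
            k_max = rank
    rejected = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            rejected[i] = True
    return rejected

# Hypothetical per-grid-cell p-values
ps = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
rej = benjamini_hochberg(ps, alpha=0.05)
```

Note that plain BH assumes independence or positive dependence between tests; the clustered rejections the abstract warns about under spatial autocorrelation are one motivation for the permutation-based alternatives it introduces.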

18.
Model practitioners increasingly place emphasis on rigorous quantitative error analysis in aquatic biogeochemical models and the existing initiatives range from the development of alternative metrics for goodness of fit, to data assimilation into operational models, to parameter estimation techniques. However, the treatment of error in many of these efforts is arguably selective and/or ad hoc. A Bayesian hierarchical framework enables the development of robust probabilistic analysis of error and uncertainty in model predictions by explicitly accommodating measurement error, parameter uncertainty, and model structure imperfection. This paper presents a Bayesian hierarchical formulation for simultaneously calibrating aquatic biogeochemical models at multiple systems (or sites of the same system) with differences in their trophic conditions, prior precisions of model parameters, available information, measurement error or inter-annual variability. Our statistical formulation also explicitly considers the uncertainty in model inputs (model parameters, initial conditions), the analytical/sampling error associated with the field data, and the discrepancy between model structure and the natural system dynamics (e.g., missing key ecological processes, erroneous formulations, misspecified forcing functions). The comparison between observations and posterior predictive monthly distributions indicates that the plankton models calibrated under the Bayesian hierarchical scheme provided accurate system representations for all the scenarios examined. Our results also suggest that the Bayesian hierarchical approach allows overcoming problems of insufficient local data by “borrowing strength” from well-studied sites and this feature will be highly relevant to conservation practices of regions with a high number of freshwater resources for which complete data could never be practically collected. 
Finally, we discuss the prospect of extending this framework to spatially explicit biogeochemical models (e.g., more effectively connect inshore with offshore areas) along with the benefits for environmental management, such as the optimization of the sampling design of monitoring programs and the alignment with the policy practice of adaptive management.

19.
In this work we propose a Bayesian ecological analysis in which a latent variable summarizes data on emissions of atmospheric pollutants. We specified a hierarchical Bayesian model with spatially structured and unstructured random terms and a nested latent factor model. This can be considered a combination of the convolution spatial model of Besag et al. (1991) and an ecological regression analysis in which a latent variable plays the role of the covariate. The unified approach allows the uncertainty in the latent score estimation to be properly accounted for in the regression analysis. The Bayesian latent factor model is used to summarize the information on environmental pressure derived from three stressors: carbon monoxide, nitrogen oxides, and inhalable particles. We found evidence of a positive correlation between lung cancer mortality and the environmental pressure indicator among males in Tuscany, Italy, 1995–1999. High environmental pressure appears to be restricted to fourteen municipalities (the top 5% of the latent factor distribution). The model identified two areas with high point-source emissions.

20.
In geostatistics, both kriging and smoothing splines are commonly used to generate an interpolated map of a quantity of interest. The geoadditive model proposed by Kammann and Wand (J R Stat Soc: Ser C (Appl Stat) 52(1):1–18, 2003) represents a fusion of kriging and penalized spline additive models. Complex data issues, including non-linear covariate trends, multiple measurements at a location and clustered observations are easily handled using the geoadditive model. We propose a likelihood based estimation procedure that enables the estimation of the range (spatial decay) parameter associated with the penalized splines of the spatial component in the geoadditive model. We present how the spatial covariance structure (covariogram) can be derived from the geoadditive model. In a simulation study, we show that the underlying spatial process and prediction of the spatial map are estimated well using the proposed likelihood based estimation procedure. We present several applications of the proposed methods on real-life data examples.
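For intuition, a covariogram simply gives covariance as a function of separation distance. A minimal sketch of one common parametric form, the exponential model with a nugget (the paper instead derives the covariogram implied by the fitted geoadditive model rather than assuming a form):

```python
import math

def exp_covariogram(h, sill, rng, nugget=0.0):
    """Exponential covariance function C(h) = sill * exp(-h / range),
    with a nugget added at zero separation. The range parameter 'rng'
    controls the spatial decay rate that the paper's procedure estimates."""
    c = sill * math.exp(-h / rng)
    return c + nugget if h == 0.0 else c

# Covariance decays with distance; beyond a few ranges it is near zero
c0 = exp_covariogram(0.0, sill=2.0, rng=10.0, nugget=0.5)
c_far = exp_covariogram(30.0, sill=2.0, rng=10.0, nugget=0.5)
```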
