Similar Documents
20 similar documents found.
1.
Shipley B 《Ecology》2010,91(9):2794-2805
Maximum entropy (maxent) models assign probabilities to states that (1) agree with measured macroscopic constraints on attributes of the states and (2) are otherwise maximally uninformative and are thus as close as possible to a specified prior distribution. Such models have recently become popular in ecology, but classical inferential statistical tests require assumptions of independence during the allocation of entities to states that are rarely fulfilled in ecology. This paper describes a new permutation test for such maxent models that is appropriate for very general prior distributions and for cases in which many states have zero abundance and that can be used to test for conditional relevance of subsets of constraints. Simulations show that the test gives correct probability estimates under the null hypothesis. Power under the alternative hypothesis depends primarily on the number and strength of the constraints and on the number of states in the model; the number of empty states has only a small effect on power. The test is illustrated using two empirical data sets to test the community assembly model of B. Shipley, D. Vile, and E. Garnier and the species abundance distribution models of S. Pueyo, F. He, and T. Zillio.
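For orientation only (this is not Shipley's exact procedure), a permutation test compares an observed fit statistic against its distribution under random reshuffling of the observed abundances. A minimal Python sketch, where `stat_fn` is a hypothetical placeholder for whatever maxent fit statistic is of interest:

```python
import numpy as np

def permutation_p_value(observed_stat, abundances, stat_fn, n_perm=999, seed=None):
    """Generic permutation test: reshuffle the observed abundances across states
    and recompute a fit statistic to build a null distribution.
    `stat_fn` is a hypothetical placeholder, not Shipley's actual test statistic."""
    rng = np.random.default_rng(seed)
    null_stats = np.empty(n_perm)
    for i in range(n_perm):
        null_stats[i] = stat_fn(rng.permutation(abundances))
    # One-sided p-value with the usual +1 correction for permutation tests
    return (np.sum(null_stats >= observed_stat) + 1) / (n_perm + 1)
```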

2.
We derive some statistical properties of the distribution of two Negative Binomial random variables conditional on their total. This type of model can be appropriate for paired count data with Poisson over-dispersion such that the variance is a quadratic function of the mean. This statistical model is appropriate in many ecological applications, including comparative fishing studies of two vessels and/or gears. The parameter of interest is the ratio of pair means. We show that the conditional means and variances are different from the more commonly used Binomial model with variance adjusted for over-dispersion, or the Beta-Binomial model. The conditional Negative Binomial model is complicated because, unlike in the Poisson case, it does not eliminate nuisance parameters. Maximum likelihood estimation with the unconditional Negative Binomial model can result in biased estimates of the over-dispersion parameter and poor confidence intervals for the ratio of means when there are many nuisance parameters. We propose three approaches to deal with nuisance parameters in the conditional Negative Binomial model. We also study a random effects Binomial model for this type of data, and we develop an adjustment to the full-sample Negative Binomial profile likelihood to reduce the bias caused by nuisance parameters. We use simulations with these methods to examine bias, precision, and accuracy of estimators and confidence intervals. We conclude that the maximum likelihood method based on the full-sample Negative Binomial adjusted profile likelihood produces the best statistical inferences for the ratio of means when paired counts have Negative Binomial distributions. However, when there is uncertainty about the type of Poisson over-dispersion, a Binomial random effects model is a good choice.
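As a rough illustration only (not the authors' estimators), paired over-dispersed counts can be simulated and the ratio of pair means estimated naively by conditioning on pair totals, the Binomial-type treatment whose shortcomings the abstract discusses. All parameter values below are made up:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs, k = 200, 2.0            # k: Negative Binomial dispersion (made-up value)
mu1, mu2 = 5.0, 8.0              # made-up pair means; true ratio mu2/mu1 = 1.6

def rnb(mu, k, size):
    # NB parameterized by mean mu and dispersion k: success probability p = k / (k + mu)
    return rng.negative_binomial(k, k / (k + mu), size)

x1, x2 = rnb(mu1, k, n_pairs), rnb(mu2, k, n_pairs)

# Naive Binomial-type treatment: condition on pair totals and pool
p_hat = x2.sum() / (x1 + x2).sum()
print(p_hat / (1 - p_hat))       # rough estimate of the ratio of pair means
```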

3.
A central goal of behavioral ecology is to quantify and explain variation in behavior. While much previous work has focused on differences in mean behavior across groups or treatments, we present a complementary approach studying changes in the distribution of the response variable. This is important because changes in the edges of a distribution may be more informative than changes in the mean if behavior at the edges of a distribution better reflects behavioral constraints. Quantile regression estimates the rate of change of conditional quantiles of a response variable and thus allows the study of changes in any part of its distribution. Although quantile regression is gaining popularity in the ecological literature, it is strikingly unused in behavioral ecology. Here, we demonstrate the usefulness of this method by analyzing the relationship between the starting distance (SD) at which an observer approaches a focal animal and its flight initiation distance (FID, the distance between the observer and the animal when it decides to flee). In particular, we used a simple model of flight initiation distance to show that in most situations ordinary least-squares regression cannot be used to analyse the SD–FID relationship. Quantile regression conducted on the lowest quantiles appears more robust, and we applied this approach to data from four bird species. Overall, changes in the lowest FID values appeared to be the most informative for determining whether a species displays a “flush early” strategy, which has been hypothesized to be a general rule. We hope this example will bring quantile regression to the attention of behavioral ecologists as a valuable tool to add to their statistical toolbox.
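A minimal sketch of regressing a low conditional quantile of FID on SD, using statsmodels and hypothetical simulated data (not the authors' data or code):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data: starting distance (SD) and flight initiation distance (FID)
rng = np.random.default_rng(0)
SD = rng.uniform(5, 100, 300)
FID = np.minimum(SD, rng.uniform(2, 60, 300))   # FID cannot exceed SD

X = sm.add_constant(SD)
# Fit the 10th conditional percentile rather than the conditional mean
model = sm.QuantReg(FID, X).fit(q=0.10)
print(model.params)   # intercept and slope of the lower edge of the FID distribution
```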

4.
Landscape pattern is of primary interest to landscape ecologists, and landscape metrics are used to quantify landscape pattern. Metrics are commonly defined and calculated on raster-based land cover maps. One metric is the contagion, which exists in several versions (e.g., unconditional and conditional) and is used as a measure of fragmentation. However, mapped data are sometimes in vector-based format, or there may be no mapped data but only a point sample. In this study a definition of contagion for such cases is investigated. The metric is an extension of the usual contagion, based on pairs of points at varying distances, and gives a function of the distance. In this study the extended contagion is calculated for vector-based delineations of real landscapes and for simulated ones. Both unconditional and conditional contagions are studied using two classification systems. The unconditional contagion function was decreasing and convex, with upper and lower limits highly correlated with the Shannon diversity index, thus carrying only area-proportion information. The spatial information lies in the speed with which the function converges to the lower limit; using a proxy function this can be expressed by a single parameter b, with high values for fragmented landscapes. No proxy function was found for the conditional contagion, for which only qualitative information was obtained. The extended contagion is applicable both in patch mosaic models of landscapes and in gradient-based models, where landscape characteristics change continuously without distinct borders between patches. The extended contagion can be useful in sample-based surveys where no map of the entire landscape is available.
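As a loose illustration of the pair-based idea (not the exact metric defined in the study), one can compute, for each distance bin, the fraction of point pairs that share a class; `class_codes` and `bins` are hypothetical inputs:

```python
import numpy as np
from scipy.spatial.distance import pdist

def same_class_curve(coords, class_codes, bins):
    """Fraction of point pairs in each distance bin that share a class: a crude
    stand-in for a pair-based, distance-dependent contagion-type function
    (not the exact metric of the paper). `class_codes` are integer class labels."""
    class_codes = np.asarray(class_codes, float)
    d = pdist(np.asarray(coords, float))
    same = pdist(class_codes[:, None], lambda a, b: float(a[0] == b[0]))
    idx = np.digitize(d, bins)
    return np.array([same[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(bins))])
```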

5.
Akaike’s information criterion (AIC) is increasingly being used in analyses in the field of ecology. This measure allows one to compare and rank multiple competing models and to estimate which of them best approximates the “true” process underlying the biological phenomenon under study. Behavioural ecologists have been slow to adopt this statistical tool, perhaps because of unfounded fears regarding the complexity of the technique. Here, we provide, using recent examples from the behavioural ecology literature, a simple introductory guide to AIC: what it is, how and when to apply it and what it achieves. We discuss multimodel inference using AIC—a procedure which should be used where no one model is strongly supported. Finally, we highlight a few of the pitfalls and problems that can be encountered by novice practitioners.
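A minimal sketch of the basic AIC bookkeeping (AIC, ΔAIC and Akaike weights) from hypothetical maximized log-likelihoods and parameter counts:

```python
import numpy as np

def akaike_weights(log_likelihoods, n_params):
    """AIC, delta-AIC and Akaike weights for a set of candidate models.
    Inputs are hypothetical: maximized log-likelihoods and parameter counts."""
    ll = np.asarray(log_likelihoods, float)
    k = np.asarray(n_params, float)
    aic = -2.0 * ll + 2.0 * k
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return aic, delta, w / w.sum()

# Example with three competing models (made-up numbers)
aic, delta, weights = akaike_weights([-120.3, -118.9, -118.7], [2, 3, 5])
print(weights)   # relative support for each model
```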

6.
If a nearly natural population system is displaced from its equilibrium, an important task of conservation ecology may be to control it back into equilibrium. In this paper a trophic chain is considered, and control systems are obtained by changing certain model parameters into control variables. For equilibrium control, two approaches are proposed. First, for a fixed time interval, local controllability into equilibrium is proved, and, applying tools of optimal control, it is also shown how an appropriate open-loop control can be determined that actually controls the system into the equilibrium in a given time. The second problem considered is controlling the system to a new, desired equilibrium. This problem is solved by constructing a closed-loop control which asymptotically steers the trophic chain into the new equilibrium. In this way, a controlled regime shift is realized.

7.
We consider the problem of assessing long-term trends in ozone concentrations measured at a single site located in an urban area. Among the many methods proposed in the literature to eliminate the confounding effect of changing weather conditions, we employ a stratification of daily maxima based on regression trees. Within each stratum, conditional independence and a Weibull distribution are assumed for the maxima. The long-term trend is defined non-parametrically by the sequence of yearly medians. Models are estimated following the Bayesian approach. The alternative assumptions of common and stratum-specific trends are compared, and a model with a common trend for all strata is selected for the analyzed real dataset. The conditional independence assumption is checked by comparison with a model including an autoregressive component.

8.
Planktonic patches are defined as areas where the abundance of plankters is above a threshold value τ. The estimation of patch size and shape can be approached using spatial statistical tools, using truncated random fields or indicator random fields as classifiers. In all cases there is a risk of false-positive and false-negative errors. In this paper we present the results of a comparative study on the performance of four commonly used methods: conditional simulation and kriging, both in the original measurement units of the data and under an indicator transform. We used a misclassification cost function to compare the four methods. Our results show that conditional simulation in the original measurement units attains the lowest misclassification cost. We also illustrate how the point at which this minimum is attained can be used to choose an optimal cut-off value for binary classification.
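A toy sketch of choosing a cut-off by minimizing a misclassification cost, assuming hypothetical exceedance probabilities and validation data and made-up false-positive/false-negative costs (not the authors' cost function or geostatistical code):

```python
import numpy as np

def optimal_cutoff(p_exceed, truth, cost_fp=1.0, cost_fn=5.0):
    """Choose the probability cut-off that minimizes a misclassification cost.
    `p_exceed` is a hypothetical array of exceedance probabilities (e.g. from
    kriging or conditional simulation); `truth` is a boolean validation array.
    The two costs are made-up values."""
    cutoffs = np.linspace(0.01, 0.99, 99)
    costs = [cost_fp * np.sum((p_exceed >= c) & ~truth)
             + cost_fn * np.sum((p_exceed < c) & truth) for c in cutoffs]
    best = int(np.argmin(costs))
    return cutoffs[best], costs[best]
```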

9.
《Ecological modelling》2005,187(4):475-490
Fortnightly observations of water quality parameters, discharge and water temperature along the River Elbe have been subjected to a multivariate data analysis. A previous study [Petersen, W., Bertino, L., Callies, U., Zorita, E., 2001. Process identification by principal component analysis of river-quality data. Ecol. Model. 138, 193–213] applied principal component analysis (PCA) to show that 60% of the variability in the data set can be explained through just two linear combinations of the eight original variables. In the present paper more advanced multivariate methods are applied to the same data set; these are expected to better support interpretation in terms of the underlying system dynamics. The first method, graphical modelling, represents interaction structures in terms of a set of conditional independence constraints between pairs of variables given the values of all other variables. Assuming data from a multinormal distribution, conditional independence constraints are expressed by zero partial correlations. Different graphical structures, with nodes for each variable and connecting edges between them, can be assessed with regard to their likelihood. The second method, canonical correlation analysis (CCA), is applied to study the correlation structures of external forcing and water quality parameters. Results of CCA turn out to be consistent with the dominant patterns of variability obtained from PCA. The percentages of variability explained by external forcing, however, are estimated to be smaller. Fitting graphical models allows a more detailed representation of interaction structures. For instance, for given discharge and temperature, correlated variations in the concentrations of oxygen and nitrate can be modelled as being mediated by variations in pH, which serves as a proxy for algal activity. Considerably simplified graphical models do not much affect the outcomes of either PCA or CCA, and hence it is concluded that these graphical models successfully represent the main interaction structures captured by the covariance matrix of the data. The analysed conditional independence patterns provide constraints to be satisfied by directed probabilistic networks, for instance.
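As a small illustration of the graphical-modelling ingredient, partial correlations can be read off the inverse covariance (precision) matrix; this generic sketch is not the paper's fitting procedure:

```python
import numpy as np

def partial_correlations(data):
    """Partial correlation matrix from the inverse covariance (precision) matrix
    of an (n_samples x n_variables) array. Near-zero entries suggest conditional
    independence of the pair given all other variables, the kind of constraint
    assessed in Gaussian graphical models."""
    precision = np.linalg.inv(np.cov(np.asarray(data), rowvar=False))
    d = np.sqrt(np.diag(precision))
    pcorr = -precision / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr
```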

10.
One of the most studied phenomena in ecology is density-dependent regulation. The model most frequently used to study this behaviour is the theta-logistic model. However, disagreement has developed within the ecology community pertaining to the interpretation of this model’s parameters, and thus as to appropriate values for the parameters to assume. In particular, the parameter θ has been allowed to take negative values, resulting in the ‘growth rate parameter’ being estimated as negative for species which are extant and exhibit no signs of becoming extinct in the short term. Here we explain this phenomenon by formulating the theta-logistic model in the manner in which the original logistic model was formulated by Verhulst (1838), thereby providing a simple interpretation of the model parameters and thus restrictions on the values the parameters may assume. We conclude that θ should (almost always) be restricted to values greater than −1. This has implications for studies assessing the form of density dependence from data. Additionally, another model appearing in the literature is presented which provides a more flexible model of density dependence at the expense of only one additional parameter.
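For reference, a minimal sketch of the theta-logistic growth model with made-up parameter values (θ = 1 recovers the ordinary logistic model):

```python
import numpy as np

def theta_logistic_growth(N, r, K, theta):
    """Theta-logistic growth rate: dN/dt = r * N * (1 - (N / K) ** theta)."""
    return r * N * (1.0 - (N / K) ** theta)

# Crude forward-Euler trajectory with made-up parameter values
N = 10.0
for _ in range(200):
    N += 0.1 * theta_logistic_growth(N, r=0.5, K=100.0, theta=1.5)
print(round(N, 1))   # the trajectory approaches the carrying capacity K
```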

11.
Repertoire size, the number of unique song or syllable types in the repertoire, is a widely used measure of song complexity in birds, but it is difficult to calculate this exactly in species with large repertoires. A new method of repertoire size estimation applies species richness estimation procedures from community ecology, but such capture-recapture approaches have not been tested extensively. Here, we establish standardized sampling schemes and estimation procedures using capture-recapture models for syllable repertoires from 18 bird species, and suggest how these may be used to tackle problems of repertoire estimation. Different models, with different assumptions regarding the heterogeneity of the use of syllable types, performed best for different species with different song organizations. For most species, models assuming heterogeneous probability of occurrence of syllables (so-called detection probability) were selected due to the presence of both rare and frequent syllables. Capture-recapture estimates of syllable repertoire size from our small sample did not differ significantly from previous estimates using larger samples of count data. However, the enumeration of syllables in 15 songs yielded significantly lower estimates than previous reports. Hence, heterogeneity in detection probability of syllables should be addressed when estimating repertoire size. This is neglected when using simple enumeration procedures, but is taken into account when repertoire size is estimated by appropriate capture-recapture models adjusted for species-specific song organization characteristics. We suggest that such approaches, in combination with standardized sampling, should be applied in species with potentially large repertoire size. On the other hand, in species with small repertoire size and homogeneous syllable usage, enumeration may be satisfactory. Although researchers often use repertoire size as a measure of song complexity, listeners to songs are unlikely to count entire repertoires, and they may rely on other cues, such as syllable detection probability.
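As one familiar example of a species-richness-style estimator applied to syllables (the Chao1 lower bound; not necessarily the capture-recapture model selected for any species in the study):

```python
from collections import Counter

def chao1_estimate(syllable_sequence):
    """Chao1 lower-bound richness estimator for a sampled syllable sequence."""
    counts = Counter(syllable_sequence)
    s_obs = len(counts)
    f1 = sum(1 for c in counts.values() if c == 1)   # syllable types seen once
    f2 = sum(1 for c in counts.values() if c == 2)   # syllable types seen twice
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0           # bias-corrected variant
    return s_obs + f1 * f1 / (2.0 * f2)

print(chao1_estimate(list("ABACABDDEABFACAB")))     # estimated repertoire size
```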

12.
After several decades during which applied statistical inference in research on animal behaviour and behavioural ecology has been heavily dominated by null hypothesis significance testing (NHST), a new approach based on information-theoretic (IT) criteria has recently become increasingly popular, and occasionally it has been considered to be generally superior to conventional NHST. In this commentary, I discuss some limitations the IT-based method may have under certain circumstances. In addition, I review some recent articles published in the fields of animal behaviour and behavioural ecology and point to some common failures, misunderstandings and issues that frequently appear in the practical application of IT-based methods. Based on this, I give some hints about how to avoid common pitfalls in the application of IT-based inference and when to choose one or the other approach, and I discuss under which circumstances mixing the two approaches might be appropriate.

13.
Multidimensional Markov chain models in geosciences have often been built on multiple chains, one in each direction, with these 1-D chains assumed to be independent of each other. Thus, unwanted transitions (i.e., transitions of multiple chains to the same location with unequal states) inevitably occur and have to be excluded in estimating the states at unobserved locations. Consequently, this may result in unreliable estimates, such as underestimation of small classes (i.e., classes with smaller-than-average areas) in simulated realizations. This paper presents a single-chain-based multidimensional Markov chain model for estimation (i.e., prediction and conditional stochastic simulation) of the spatial distribution of subsurface formations from borehole data. The model assumes that a single Markov chain moves in a lattice space, interacting with its nearest known neighbors through different transition probability rules in different cardinal directions. The conditional probability distribution of the Markov chain at the location to be estimated is formulated in explicit form by following Bayes’ theorem and the conditional independence of sparse data in cardinal directions. Since no unwanted transitions are involved, the model can estimate all classes fairly. Transiogram models (i.e., 1-D continuous Markov transition probability diagrams) are used to provide transition probability input at the needed lags to generalize the model. Therefore, conditional simulation can be conducted directly and efficiently. The model provides an alternative for heterogeneity characterization of subsurface formations.
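As a rough sketch of the single-chain idea, directional transition probabilities from known neighbors can be combined into a conditional class distribution under a conditional-independence (naive-Bayes-style) assumption; this illustrates the general principle, not Li's exact formula:

```python
import numpy as np

def combine_directional_transitions(neighbor_states, transition_mats, marginal):
    """Naive-Bayes-style combination of directional transition probabilities into
    a conditional class distribution for an unknown cell, assuming the known
    neighbors are conditionally independent given that cell.
    transition_mats[d][i, j] = P(class j at the target | class i at neighbor d);
    `marginal` holds the marginal class proportions."""
    log_m = np.log(np.asarray(marginal, float))
    log_p = log_m.copy()
    for state, T in zip(neighbor_states, transition_mats):
        log_p += np.log(np.asarray(T, float)[state, :]) - log_m
    p = np.exp(log_p - log_p.max())
    return p / p.sum()
```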

14.
《Ecological modelling》2004,175(2):151-167
Throughfall may contribute large amounts of nutrients to forest soils via the leaching of accumulated dry particulates on the canopy, and by altering incoming precipitation, it may have some control on the acid–base status of the soil. Unfortunately, information about throughfall in forests is sparse, and thus scientists must deal with this gap in knowledge before conducting regional applications of dynamic soil acidification models. The first objective of this paper was to test the possibility of developing regression equations that could allow modellers to estimate throughfall nutrient fluxes using wet deposition nutrient fluxes as input data. The second objective was to test the relative importance of this simplification in regional applications of the dynamic soil–atmosphere model Soil Acidification in Forested Ecosystems (SAFE), using one published application of this model as the base case. Annual throughfall nutrient fluxes were estimated successfully from annual wet deposition fluxes for individual ions. The success of these relationships was, however, inversely proportional to the intensity with which an ion was involved in exchange reactions: models generally performed better with more conservative ions. The simulation of the soil acid–base status with SAFE suggested that it was appropriate to use the throughfall estimates yielded by the regression equations. Also, testing of the SAFE output using different regression equations for throughfall showed that, in the case of base cations, the key to modelling the soil acid–base status was to produce accurate throughfall estimates of Ca and Mg, and that K had marginal effects. However, a small bias in solution pH was introduced, as the balance between alkalinity and acidity in the different categories of deposition appeared to diverge from the base case (measured) values. The use of our approach at other sites may indicate whether or not there is a systematic bias in the regressions. Yet, the results suggest that the regression equations are appropriate for the purpose of modelling the soil acid–base status at the scale of the landscape because they ensure that the same set of assumptions about throughfall is used for each application.

15.
Scientific thinking may require the consideration of multiple hypotheses, which often call for complex statistical models at the level of data analysis. The aim of this introduction is to provide a brief overview of how competing hypotheses are evaluated statistically in behavioural ecology studies and to offer potentially fruitful avenues for future methodological developments. Complex models have traditionally been treated by model selection approaches using threshold-based removal of terms, i.e. stepwise selection. A recently introduced method for model selection applies an information-theoretic (IT) approach, which simultaneously evaluates hypotheses by balancing model complexity against goodness of fit. The IT method has been increasingly promoted in the field of ecology, while a literature survey shows that its spread in behavioural ecology has been much slower, and model simplification using stepwise selection is still more widespread than IT-based model selection. Why has the use of IT methods in behavioural ecology lagged behind other disciplines? This special issue examines the suitability of the IT method for analysing data with multiple predictors, which researchers frequently encounter in our field. The volume brings together different viewpoints to aid behavioural ecologists in understanding the method, with the hope of enhancing the statistical integration of our discipline.

16.
The estimation of animal population parameters, such as capture probability, population size, or population density, is an important issue in many ecological applications. Capture–recapture data may be considered as repeated observations that are often correlated over time. If these correlations are not taken into account, parameter estimates may be biased, possibly producing misleading results. We propose a generalized estimating equations (GEE) approach to account for correlation over time, instead of assuming independence as in traditional closed-population capture–recapture studies. We also account for heterogeneity among observed individuals and over-dispersion, modelling capture probabilities as a function of covariates. The GEE versions of all closed-population capture–recapture models and their corresponding estimating equations are proposed. We evaluate the effect of accounting for correlation structures on capture–recapture model selection based on the quasi-likelihood information criterion (QIC). An example is used for an illustrative application and for comparison to currently used methodology. A Horvitz–Thompson-like estimator is used to obtain estimates of population size based on conditional arguments. A simulation study is conducted to evaluate the performance of the GEE approach in capture–recapture studies. The GEE approach performs well for estimating population parameters, particularly when capture probabilities are high. The simulation results also reveal that the estimated population size varies with the nature of the correlation among capture occasions.
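A minimal sketch of a binary GEE with an exchangeable working correlation, using statsmodels on hypothetical capture histories (not the authors' models); the QIC call assumes a reasonably recent statsmodels version:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical capture histories: one row per individual x occasion, binary capture
rng = np.random.default_rng(2)
n_ind, n_occ = 100, 5
df = pd.DataFrame({
    "ind": np.repeat(np.arange(n_ind), n_occ),
    "occasion": np.tile(np.arange(n_occ), n_ind),
})
df["weight"] = np.repeat(rng.normal(20, 3, n_ind), n_occ)    # made-up individual covariate
p = 1.0 / (1.0 + np.exp(-(-2.0 + 0.1 * df["weight"])))       # capture probability model
df["captured"] = rng.binomial(1, p)

# GEE with an exchangeable working correlation over occasions within individuals
model = sm.GEE.from_formula("captured ~ weight", groups="ind", data=df,
                            family=sm.families.Binomial(),
                            cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
print(result.qic())   # QIC; may return (QIC, QICu) depending on the statsmodels version
```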

17.
For two or more classes (or types) of points, nearest neighbor contingency tables (NNCTs) are constructed using nearest neighbor (NN) frequencies and are used in testing spatial segregation of the classes. Pielou’s test of independence and Dixon’s cell-specific, class-specific, and overall tests are the tests based on NNCTs (i.e., they are NNCT-tests). These tests are designed and intended for use under the null pattern of random labeling (RL) of completely mapped data. However, it has been shown that Pielou’s test is not appropriate for testing segregation against the RL pattern, while Dixon’s tests are. In this article, we compare Pielou’s and Dixon’s NNCT-tests, introduce one-sided versions of Pielou’s test, and extend the use of NNCT-tests to testing complete spatial randomness (CSR) of points from two or more classes (henceforth called CSR independence). We assess the finite-sample performance of the tests by an extensive Monte Carlo simulation study and demonstrate that Dixon’s tests are also appropriate for testing CSR independence, but Pielou’s test and the corresponding one-sided versions are liberal for testing CSR independence or RL. Furthermore, we show that Pielou’s tests are only appropriate when the NNCT is based on a random sample of (base, NN) pairs. We also prove the consistency of the tests under their appropriate null hypotheses. Moreover, we investigate the edge (or boundary) effects on the NNCT-tests and compare the buffer zone and toroidal edge correction methods for these tests. We illustrate the tests on a real-life and an artificial data set.
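A small sketch of building an NNCT from mapped points using scipy's k-d tree; the table itself is only the input to tests such as Dixon's, which are not implemented here:

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_contingency_table(coords, labels):
    """Nearest-neighbor contingency table: entry (i, j) counts points of class i
    whose nearest neighbor belongs to class j."""
    coords = np.asarray(coords, float)
    labels = np.asarray(labels)
    tree = cKDTree(coords)
    # k=2 because the closest hit to each point is the point itself
    _, idx = tree.query(coords, k=2)
    nn_labels = labels[idx[:, 1]]
    classes = np.unique(labels)
    table = np.zeros((classes.size, classes.size), dtype=int)
    for a, b in zip(labels, nn_labels):
        table[np.searchsorted(classes, a), np.searchsorted(classes, b)] += 1
    return classes, table
```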

18.
In behavioral ecology, the overall sex ratio in a population of birds is often tested to see whether it differs from a 50/50 ratio. In recent publications, the binomial test or the χ2 test is carried out, although the sexes of chicks within the same nest may not be independent. The lack of independence arises because female birds can adjust the sex ratio in an adaptive way, as demonstrated in recent studies. To take this dependence into consideration, the Wilcoxon signed-rank test based on the within-brood differences between the proportions of sons and daughters was performed in a study investigating great tit hatchling sex ratios. We compare this test with a test based on an optimally weighted estimator recently proposed for medical studies with clustered binary data. According to our simulation results, this novel test is more powerful than the Wilcoxon signed-rank test and should be used for the analysis of avian sex ratios. The methods are illustrated with real data from the great reed warbler.
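A minimal sketch of the Wilcoxon signed-rank test applied to within-brood differences between the proportions of sons and daughters, on made-up brood data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical brood data: brood size and number of sons for each nest
rng = np.random.default_rng(3)
brood_size = rng.integers(4, 10, size=40)
sons = rng.binomial(brood_size, 0.55)              # made-up true sex ratio

# Within-brood difference between proportions of sons and daughters
diff = sons / brood_size - (brood_size - sons) / brood_size
stat, p = wilcoxon(diff)                           # tests whether the median difference is zero
print(stat, p)
```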

19.
Geostatistical model averaging based on conditional information criteria
Variable selection in geostatistical regression is an important problem but has not been well studied in the literature. In this paper, we focus on spatial prediction and consider a class of conditional information criteria indexed by a penalty parameter. Instead of applying a fixed criterion, which leads to an unstable predictor that is discontinuous with respect to the response variables because a small change in the response may cause a different model to be selected, we further stabilize the predictor by local model averaging, resulting in a predictor that is not only continuous but also differentiable even after plugging in estimated model parameters. Then Stein’s unbiased risk estimate is applied to select the penalty parameter, leading to a data-dependent penalty that is adaptive to the underlying model. Some numerical experiments show the superiority of the proposed model averaging method over some commonly used variable selection methods. In addition, the proposed method is applied to a mercury data set for lakes in Maine.

20.
Organisms respond to their surroundings at multiple spatial scales, and different organisms respond differently to the same environment. Existing landscape models, such as the "fragmentation model" (or patch-matrix-corridor model) and the "variegation model," can be limited in their ability to explain complex patterns for different species and across multiple scales. An alternative approach is to conceptualize landscapes as overlaid species-specific habitat contour maps. Key characteristics of this approach are that different species may respond differently to the same environmental conditions and at different spatial scales. Although similar approaches are being used in ecological modeling, there is much room for habitat contours as a useful conceptual tool. By providing an alternative view of landscapes, a contour model may stimulate more field investigations stratified on the basis of ecological variables other than human-defined patches and patch boundaries. A conceptual model of habitat contours may also help to communicate ecological complexity to land managers. Finally, by incorporating additional ecological complexity, a conceptual model based on habitat contours may help to bridge the perceived gap between pattern and process in landscape ecology. Habitat contours do not preclude the use of existing landscape models and should be seen as a complementary approach most suited to heterogeneous human-modified landscapes.
