Similar Literature
20 similar records retrieved (search time: 564 ms)
1.
Large, fine-grained samples are ideal for predictive species distribution models used for management purposes, but such datasets are not available for most species and conducting such surveys is costly. We attempted to overcome this obstacle by updating previously available coarse-grained logistic regression models with small fine-grained samples using a recalibration approach. Recalibration involves re-estimation of the intercept or slope of the linear predictor and may improve calibration (the level of agreement between predicted and actual probabilities). If reliable estimates of occurrence likelihood are required (e.g., for species selection in ecological restoration), calibration should be preferred to other model performance measures. This updating approach is not expected to improve discrimination (the ability of the model to rank sites according to species suitability), because the rank order of predictions is not altered. We tested different updating methods and sample sizes with tree distribution data from Spain. Updated models were compared to models fitted using only fine-grained data (refitted models). Updated models performed reasonably well at fine scales and outperformed refitted models with small samples (10-100 occurrences). If a coarse-grained model is available (or could be easily developed) and fine-grained predictions are to be generated from a limited sample size, updating previous models may be a more accurate option than fitting a new model. Our results encourage further studies on model updating in other situations where species distribution models are used under conditions different from those of their training (e.g., different time periods, different regions).
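A minimal sketch of the recalibration idea, assuming hypothetical placeholders: `beta_coarse` holds the coefficients of the previously fitted coarse-grained logistic model (intercept column included in `X_fine`), and `X_fine`, `y_fine` are the small fine-grained sample. statsmodels implements the two standard updates: an intercept-only shift (old linear predictor used as an offset) and a joint intercept-plus-slope logistic recalibration.

```python
import numpy as np
import statsmodels.api as sm

def recalibrate(beta_coarse, X_fine, y_fine):
    """Update a coarse-grained logistic model with a small fine-grained sample."""
    lp = X_fine @ beta_coarse                      # old linear predictor
    # (1) Intercept-only update: slope fixed at 1, only the intercept shifts.
    m_intercept = sm.GLM(y_fine, np.ones((len(lp), 1)),
                         family=sm.families.Binomial(), offset=lp).fit()
    # (2) Logistic recalibration: re-estimate intercept and slope jointly.
    m_slope = sm.GLM(y_fine, sm.add_constant(lp),
                     family=sm.families.Binomial()).fit()
    return m_intercept, m_slope
```

Both updates are monotone transformations of the old linear predictor, so the rank order of predictions, and hence discrimination, is unchanged, exactly as noted above.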

2.
Inverse parameter estimation of individual-based models (IBMs) is a research area still in its infancy, in a context where conventional statistical methods are not well suited to confronting this type of model with data. In this paper, we propose an original evolutionary algorithm designed for the calibration of complex IBMs, i.e., those characterized by high stochasticity, parameter uncertainty and numerous non-linear interactions between parameters and model output. Our algorithm is a variant of the population-based incremental learning (PBIL) genetic algorithm with a specific “optimal individual” operator. The method is presented in detail and applied to the individual-based model OSMOSE. The performance of the algorithm is evaluated and the estimated parameters are compared with an independent manual calibration. The results show that automated, convergent methods for inverse parameter estimation are a significant improvement over existing ad hoc methods for the calibration of IBMs.
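A generic PBIL loop over binary-encoded parameters, shown only to illustrate the algorithm family the paper builds on; the OSMOSE-specific “optimal individual” operator is not reproduced here, and `fitness` (a model-data discrepancy to minimize) is a hypothetical user-supplied function.

```python
import numpy as np

def pbil(fitness, n_bits, pop_size=50, lr=0.1, generations=200, seed=0):
    """Minimize `fitness` over binary strings via population-based incremental learning."""
    rng = np.random.default_rng(seed)
    p = np.full(n_bits, 0.5)                       # probability vector over bits
    best, best_fit = None, np.inf
    for _ in range(generations):
        pop = rng.random((pop_size, n_bits)) < p   # sample a population from p
        fits = np.array([fitness(ind) for ind in pop])
        elite = pop[fits.argmin()]                 # best individual this generation
        if fits.min() < best_fit:
            best, best_fit = elite.copy(), fits.min()
        p = (1 - lr) * p + lr * elite              # shift p toward the elite
    return best, best_fit
```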

3.
With the advancement of computational systems and the development of model integration concepts, the complexity of environmental model systems has increased. In contrast, theory and knowledge about environmental systems, as well as the capability for environmental systems analyses, have remained largely unchanged. As a consequence, model conceptualization, data gathering, and validation face new challenges that can hardly be tackled by modellers alone. In this discourse-like review, we argue that modelling with reliable simulations of human-environmental interactions necessitates linking modelling and simulation research much more strongly to fields such as landscape ecology, community ecology, eco-hydrology, etc. It thus becomes more and more important to identify the adequate degree of complexity in environmental models (which is not only a technical or methodological question), to ensure data availability, and to test model performance. Equally important, providing problem-specific answers to environmental problems using simulation tools requires addressing end-user and stakeholder requirements during the early stages of problem development. In doing so, we avoid treating modelling and simulation as an end in itself.

4.
The problem of distinguishing density-independent (DI) from density-dependent (DD) demographic time series is important for understanding the mechanisms that regulate populations of animals and plants. We address this problem in a novel way by means of Statistical Learning Theory (SLT); SLT is built around the idea of the VC-dimension, a complexity index for classes of parameterized functions. Though VC-dimensions of nonlinear models are generally unknown, in the linear case the VC-dimension actually corresponds to the number of free parameters; this allows one to straightforwardly apply the model selection framework developed within SLT, called Structural Risk Minimization (SRM). We generate noisy artificial time series, both DI and DD, and use SRM to recognize the model underlying the data, choosing among a suite of both density-dependent and density-independent demographies. We show that SRM significantly outperforms traditional model selection approaches, such as the Schwarz Information Criterion and Final Prediction Error, in recognizing both density dependence and independence.
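A sketch of SRM-style selection over candidate linear models, using one textbook form of the VC generalization bound for classification (the paper's exact bound may differ); for linear models the VC dimension h is the number of free parameters, as stated above.

```python
import numpy as np

def vc_bound(emp_risk, h, n, eta=0.05):
    """Guaranteed risk: empirical risk + VC confidence term (h = VC dimension,
    n = sample size, eta = confidence level)."""
    conf = np.sqrt((h * (np.log(2 * n / h) + 1) - np.log(eta / 4)) / n)
    return emp_risk + conf

def srm_select(emp_risks, vc_dims, n):
    """Pick the model (by index) with the smallest guaranteed risk."""
    bounds = [vc_bound(r, h, n) for r, h in zip(emp_risks, vc_dims)]
    return int(np.argmin(bounds)), bounds
```

Unlike criteria that penalize only the parameter count, the confidence term here also shrinks with sample size n, so richer model structures become admissible as more data accumulate.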

5.
The ODD protocol: A review and first update

6.
As large carnivores recover throughout Europe, their distribution needs to be studied to determine their conservation status and assess the potential for human-carnivore conflicts. However, efficient monitoring of many large carnivore species is challenging due to their rarity, elusive behavior, and large home ranges. Their monitoring can include opportunistic sightings from citizens in addition to designed surveys. Two types of detection error may occur in such monitoring schemes: false negatives and false positives. False-negative detections can be accounted for in species distribution models (SDMs) that deal with imperfect detection. False-positive detections, due to species misidentification, have rarely been accounted for in SDMs; generally, researchers use ad hoc data-filtering methods to discard ambiguous observations prior to analysis. These practices may discard valuable ecological information on the distribution of a species. We investigated the costs and benefits of including data types that may contain false positives in SDMs of large carnivores, rather than discarding them. We used a dynamic occupancy model that simultaneously accounts for false negatives and false positives to jointly analyze data that included both unambiguous and ambiguous detections. We used simulations to compare the performance of our model with that of a model fitted to unambiguous data only, testing the two models in four scenarios in which the parameters that control false-positive detections and true detections varied. We applied our model to data from the monitoring of the Eurasian lynx (Lynx lynx) in the European Alps. The addition of ambiguous detections increased the precision of parameter estimates. For the Eurasian lynx, incorporating ambiguous detections produced more precise estimates of the ecological parameters and revealed additional occupied sites in areas where the species is likely expanding. Overall, we found that ambiguous data should be considered when studying the distribution of large carnivores, through the use of dynamic occupancy models that account for misidentification.
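A minimal static (single-season) occupancy likelihood that allows false-positive detections, in the spirit of the misidentification models discussed above; the paper's model is a dynamic, multi-season extension, and the detection matrix `y` below is simulated as a stand-in.

```python
import numpy as np
from scipy.optimize import minimize

def nll(theta, y):
    """Negative log-likelihood: psi = occupancy, p11 = true detection,
    p10 = false-positive detection at unoccupied sites."""
    psi, p11, p10 = 1.0 / (1.0 + np.exp(-theta))  # logit scale -> probabilities
    d, J = y.sum(axis=1), y.shape[1]
    occ = psi * p11**d * (1 - p11)**(J - d)        # occupied: true detections
    emp = (1 - psi) * p10**d * (1 - p10)**(J - d)  # empty: misidentifications
    return -np.log(occ + emp).sum()

y = (np.random.default_rng(1).random((200, 4)) < 0.3).astype(int)  # toy histories
fit = minimize(nll, x0=np.zeros(3), args=(y,), method="Nelder-Mead")
```

Such models are only weakly identifiable from ambiguous data alone; constraints such as p11 > p10, or the joint use of unambiguous and ambiguous detections as in the paper, are needed in practice.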

7.
The increasing complexity of ecosystem models represents a major difficulty in tuning model parameters and analyzing simulated results. To address this problem, this study develops a hierarchical scheme that simplifies the Biome-BGC model into three functionally cascaded tiers and analyzes them sequentially. The first-tier model focuses on leaf-level ecophysiological processes; it simulates evapotranspiration and photosynthesis with prescribed leaf area index (LAI). The restriction on LAI is then lifted in the following two model tiers, which analyze how carbon and nitrogen are cycled at the whole-plant level (the second tier) and in all litter/soil pools (the third tier) to dynamically support the prescribed canopy. In particular, this study analyzes the steady state of these two model tiers with a set of equilibrium equations that are derived from Biome-BGC algorithms and based on the principle of mass balance. Instead of spinning up the model for thousands of climate years, these equations estimate the carbon/nitrogen stocks and fluxes of the target (steady-state) ecosystem directly from the results obtained by the first-tier model. The model hierarchy is examined with model experiments at four AmeriFlux sites. The results indicate that the proposed scheme can effectively calibrate Biome-BGC to simulate observed fluxes of evapotranspiration and photosynthesis, and that the carbon/nitrogen stocks estimated by the equilibrium analysis approach are highly consistent with the results of model simulations. The scheme developed in this study may therefore serve as a practical guide to calibrating and analyzing Biome-BGC; it also provides an efficient way to solve the problem of model spin-up, especially for applications over large regions. The same methodology may help analyze other similar ecosystem models as well.
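The spin-up shortcut in miniature: for a linear, donor-controlled pool model dC/dt = u + AC, the steady state solves AC* = -u directly rather than by integrating for thousands of simulated years. The three-pool toy below is illustrative only and does not reproduce Biome-BGC's actual pool structure; all rates and fractions are made up.

```python
import numpy as np

k = np.array([0.5, 0.1, 0.01])          # turnover rates: litter, fast soil, slow soil
f21, f32 = 0.4, 0.05                    # transfer fractions between pools
A = np.array([[-k[0],       0.0,   0.0],
              [f21 * k[0], -k[1],  0.0],
              [0.0,  f32 * k[1], -k[2]]])
u = np.array([2.0, 0.0, 0.0])           # external input (e.g., litterfall) to pool 1
C_star = np.linalg.solve(A, -u)         # steady-state stocks, no spin-up needed
```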

8.
Species distribution models (SDMs) based on statistical relationships between occurrence data and underlying environmental conditions are increasingly used to predict spatial patterns of biological invasions and to prioritize locations for early detection and control of invasion outbreaks. However, invasive species distribution models (iSDMs) face special challenges because (i) they typically violate the SDM assumption that the organism is in equilibrium with its environment, and (ii) species absence data are often unavailable or believed to be too difficult to interpret. This often leads researchers to generate pseudo-absences for model training or to use presence-only methods, and to blur the distinction between predictions of potential versus actual distribution. We examined the hypothesis that true-absence data, when accompanied by dispersal constraints, improve the prediction accuracy and ecological understanding of iSDMs that aim to predict the actual distribution of biological invasions. We evaluated the impact of presence-only, true-absence and pseudo-absence data on model accuracy using an extensive dataset on the distribution of the invasive forest pathogen Phytophthora ramorum in California. Two traditional presence/absence models (a generalized linear model and classification trees) and two alternative presence-only models (ecological niche factor analysis and maximum entropy) were developed based on 890 field plots of pathogen occurrence and several climatic, topographic, host vegetation and dispersal variables. The effects of all three possible types of occurrence data on model performance were evaluated with receiver operating characteristic (ROC) curves and omission/commission error rates. The results show that prediction of the actual distribution was less accurate when we ignored true absences and dispersal constraints. Presence-only models and models without dispersal information tended to over-predict the actual range of invasions. Models based on pseudo-absence data exhibited accuracies similar to presence-only models but produced spatially less feasible predictions. We suggest that true-absence data are a critical ingredient not only for accurate calibration but also for ecologically meaningful assessment of iSDMs that focus on predictions of actual distributions.
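A sketch of the accuracy assessment described above, AUC from the ROC curve plus omission and commission error rates at a chosen threshold, using scikit-learn; `y_true` and `scores` stand in for observed presence/absence and model predictions.

```python
from sklearn.metrics import roc_auc_score, confusion_matrix

def accuracy_report(y_true, scores, threshold=0.5):
    """AUC plus omission/commission error rates at `threshold`."""
    pred = (scores >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, pred).ravel()
    omission = fn / (fn + tp)      # true presences predicted absent
    commission = fp / (fp + tn)    # true absences predicted present
    return {"auc": roc_auc_score(y_true, scores),
            "omission": omission, "commission": commission}
```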

9.
Uncertainty plays a major role in Integrated Coastal Zone Management (ICZM). A large part of this uncertainty stems from our lack of knowledge of the integrated functioning of the coastal system and from the increasing need to act in a pro-active way. Increasingly, coastal managers are forced to take decisions based on information that is surrounded by uncertainties. Different types of uncertainty can be identified, and the roles of uncertainty in decision making, scientific uncertainty and model uncertainty in ICZM are discussed. The issue of spatial variability, which is believed to be extremely important in ICZM and represents a primary source of complexity and uncertainty, is also briefly introduced. Some principles for complex model building are described as an approach to handling, in a balanced way, the available data, information, knowledge and experience. The practical method of sensitivity analysis is then introduced as a method for a posteriori evaluation of uncertainty in simulation models. We conclude by emphasising the need to define an analysis plan in order to handle model uncertainty in a balanced way during the decision-making process.

10.
Recent trends in lake and stream water quality modeling indicate a conflict between the search for improved accuracy through increasing model size and complexity, and the search for applicability through simplification of already existing models. Much of this conflict turns on the fact that what can be simulated in principle is simply not matched by what can be observed and verified in practice. This paper is concerned with that conflict. Its aim is to introduce and clarify some of the arguments surrounding two issues of key importance in resolving the conflict: uncertainty in the mathematical relationships hypothesized for a particular model (calibration and model structure identification), and uncertainty associated with the predictions obtained from the model (prediction error analysis). These are issues concerning the reliability of models and model-based forecasts. The paper argues, in particular, that there is an intimate relationship between prediction and model calibration. This relationship is especially important in accounting for uncertainty in the development and use of models. Using this argument it is possible to state a dilemma which captures some limiting features of both large and small models.

11.
12.
The considerable complexity often included in biophysical models leads to the need to specify a large number of parameters and inputs, which are available with various levels of uncertainty. Moreover, models may behave counter-intuitively, particularly when there are nonlinearities in multiple input-output relationships. Quantitative knowledge of the sensitivity of models to changes in their parameters is hence a prerequisite for operational use of models. This can be achieved using sensitivity analysis (SA) via methods which differ in specific characteristics, including the computational resources required to perform the analysis. Running SA on biophysical models across several contexts requires flexible and computationally efficient SA approaches, which must also be able to account for possible interactions among parameters. A number of SA experiments were performed on a crop model for the simulation of rice growth (Water Accounting Rice Model, WARM) in Northern Italy. SAs were carried out using the Morris method, three regression-based methods (Latin hypercube, random, and quasi-random LpTau sampling), and two methods based on variance decomposition: the Extended Fourier Amplitude Sensitivity Test (E-FAST) and Sobol', with the latter adopted as benchmark. Aboveground biomass at physiological maturity was selected as the reference output to facilitate the comparison of alternative SA methods. Rankings of crop parameters (from the most to the least relevant) were generated from sensitivity experiments using different SA methods and alternate parameterizations for each method, and the top-down coefficient of concordance (TDCC) was calculated as a measure of agreement between rankings. With few exceptions, significant TDCC values were obtained both for different parameterizations within each method and for the comparison of each method to Sobol'. The substantial stability observed in the rankings seems to indicate that, for a crop model of average complexity such as WARM, resource-intensive SA methods may not be needed to identify the most relevant parameters. In fact, the simplest of the SA methods used (the Morris method) produced results comparable to those obtained by more computationally expensive methods.
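A Savage-score implementation of the top-down coefficient of concordance (TDCC, Iman and Conover 1987) used above to compare parameter rankings across SA methods; rows of `rankings` are rankings with 1 = most influential parameter.

```python
import numpy as np

def tdcc(rankings):
    """Top-down coefficient of concordance for b rankings of k items."""
    rankings = np.asarray(rankings)        # shape: (n_methods, n_params)
    b, k = rankings.shape
    # Savage score of rank r: sum of 1/i for i = r..k (emphasizes top ranks).
    savage = np.array([np.sum(1.0 / np.arange(r, k + 1)) for r in range(1, k + 1)])
    ss = savage[rankings - 1]              # Savage score of each assigned rank
    col_sums = ss.sum(axis=0)
    h_k = np.sum(1.0 / np.arange(1, k + 1))
    return (np.sum(col_sums**2) - b**2 * k) / (b**2 * (k - h_k))
```

TDCC weights agreement on the top ranks most heavily, which suits SA, where only the most influential parameters matter; identical rankings give a value of 1.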

13.
We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on the resulting classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by resubstitution rates were similar for each lichen species irrespective of the underlying sample survey form. Cross-validation estimates of prediction accuracy were lower than resubstitution accuracies for all species and both design types, and in all cases were closer to the true prediction accuracies based on the EVALUATION data set. We argue that greater emphasis should be placed on calculating and reporting cross-validation accuracy rates rather than simple resubstitution accuracy rates. Evaluation of the DESIGN and PURPOSIVE tree models on the EVALUATION data set shows significantly lower prediction accuracy for the PURPOSIVE tree models relative to the DESIGN models, indicating that non-probabilistic sample surveys may generate models with limited predictive capability. These differences were consistent across all four lichen species, with 11 of the 12 possible species and sample survey type comparisons having significantly lower accuracy rates. Some differences in accuracy were as large as 50%. The classification tree structures also differed considerably, both among and within the modelled species, depending on the sample survey form. Overlap in the predictor variables selected by the DESIGN and PURPOSIVE tree models ranged from only 20% to 38%, indicating that the classification trees fit the two evaluated survey forms on different sets of predictor variables. The magnitude of these differences in predictor variables throws doubt on ecological interpretations derived from prediction models based on non-probabilistic sample surveys.
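The resubstitution-versus-cross-validation contrast above, in a scikit-learn sketch; `X` and `y` are hypothetical predictor and presence/absence arrays, and the tree settings are arbitrary.

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def compare_accuracies(X, y):
    """Resubstitution accuracy (optimistic) vs. 10-fold cross-validated accuracy."""
    tree = DecisionTreeClassifier(min_samples_leaf=10).fit(X, y)
    resub = tree.score(X, y)                       # scored on its own training data
    cv = cross_val_score(DecisionTreeClassifier(min_samples_leaf=10),
                         X, y, cv=10).mean()       # closer to true prediction accuracy
    return resub, cv
```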

14.
Does the choice of climate baseline matter in ecological niche modelling?
Ecological niche models (ENMs) have multiple applications in ecology, evolution and conservation planning. They relate the known locations of a species to characteristics of its environment (usually climate) over its geographical range. Most ENMs are trained using standard 30-year (1961-1990) or 50-year (1951-2000) baselines to represent current climate conditions. Species occurrence records used as input to the models, however, are frequently collected from time periods that differ from those from which the climate is derived. Since climate variability can be significant within and outside baselines, and the distributions of some plants and animals (e.g., annual plants, insects) can adjust to environmental conditions on much shorter time scales, this mismatch between collection records and climatic baselines may affect the utility and accuracy of model outputs. We investigated how the choice of baseline periods influenced modelling efforts, anticipating that climate baselines derived from the same temporal period as the species records would yield improved ENMs. Ten simulated species’ distributions were modelled using an ENM (Maxent) for (a) occurrences and climates within the same temporal period, based on eighteen 10-year baselines within the 20th century, and (b) all available samples and climate baselines from 1951-2000 and 1961-1990. Each model was projected onto all the available 10-year climate scenarios and compared to the models trained on the corresponding scenario. We show that temporal mismatches between species occurrences and climate baselines can result in significantly poorer distribution models. Such temporal mismatch may be unavoidable for many studies, but we emphasize here the need to match the time range of samples and climate data whenever possible.
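A sketch of the temporal matching recommended above: assign each occurrence record to the 10-year climate baseline covering its collection year before training. The non-overlapping decade grouping and column names are illustrative only; the paper's eighteen baselines need not partition the century this way.

```python
import pandas as pd

# Hypothetical occurrence records with collection years.
records = pd.DataFrame({"species": ["sp1", "sp1", "sp2"],
                        "year": [1957, 1983, 1996]})
decade = (records["year"] // 10) * 10                    # e.g., 1983 -> 1980
records["baseline"] = decade.astype(str) + "-" + (decade + 9).astype(str)
# Each record can now be paired with climate layers from its own baseline.
```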

15.
We investigate how the viability and harvestability predicted by population models are affected by details of model construction. Based on this analysis we discuss some of the pitfalls associated with the use of classical statistical techniques for resolving the uncertainties associated with modeling population dynamics. The management of the Serengeti wildebeest (Connochaetes taurinus) is used as a case study. We fitted a collection of age-structured and unstructured models to a common set of available data and compared model predictions in terms of wildebeest viability and harvest. Models that depicted demographic processes in strikingly different ways fitted the data equally well. However, upon further analysis it became clear that models that fit the data equally well could nonetheless have very different management implications. In general, model structure had a much larger effect on viability analysis (e.g., time to collapse) than on optimal harvest analysis (e.g., harvest rate that maximizes harvest). Some modeling decisions, such as including age-dependent fertility rates, did not affect management predictions, but others had a strong effect (e.g., choice of model structure). Because several suitable models of comparable complexity fitted the data equally well, traditional model selection methods based on the parsimony principle were not practical for judging the value of alternative models. Our results stress the need to implement analytical frameworks for population management that explicitly consider the uncertainty about the behavior of natural systems.

16.
Repertoire size, the number of unique song or syllable types in the repertoire, is a widely used measure of song complexity in birds, but it is difficult to calculate exactly in species with large repertoires. A new method of repertoire size estimation applies species richness estimation procedures from community ecology, but such capture-recapture approaches have not been widely tested. Here, we establish standardized sampling schemes and estimation procedures using capture-recapture models for syllable repertoires from 18 bird species, and suggest how these may be used to tackle problems of repertoire estimation. Different models, with different assumptions regarding the heterogeneity of the use of syllable types, performed best for different species with different song organizations. For most species, models assuming heterogeneous probability of occurrence of syllables (so-called detection probability) were selected due to the presence of both rare and frequent syllables. Capture-recapture estimates of syllable repertoire size from our small sample did not differ significantly from previous estimates using larger samples of count data. However, the enumeration of syllables in 15 songs yielded significantly lower estimates than previous reports. Hence, heterogeneity in the detection probability of syllables should be addressed when estimating repertoire size. This is neglected in simple enumeration procedures, but is taken into account when repertoire size is estimated by appropriate capture-recapture models adjusted for species-specific song organization characteristics. We suggest that such approaches, in combination with standardized sampling, should be applied in species with potentially large repertoire sizes. On the other hand, in species with small repertoire size and homogeneous syllable usage, enumeration may be satisfactory. Although researchers often use repertoire size as a measure of song complexity, listeners to songs are unlikely to count entire repertoires and may rely on other cues, such as syllable detection probability.
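In the species-richness spirit described above, Chao1 is one simple abundance-based estimator of repertoire size (the paper fits fuller capture-recapture models adjusted to each species' song organization); it corrects the observed count upward using the numbers of syllable types seen exactly once and exactly twice.

```python
from collections import Counter

def chao1(syllable_sequence):
    """Chao1 lower-bound estimate of true repertoire size from a syllable sample."""
    counts = Counter(syllable_sequence)
    s_obs = len(counts)                              # observed syllable types
    f1 = sum(1 for c in counts.values() if c == 1)   # types seen exactly once
    f2 = sum(1 for c in counts.values() if c == 2)   # types seen exactly twice
    if f2 == 0:                                      # bias-corrected form
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1**2 / (2.0 * f2)

print(chao1(list("abacabdeafbgch")))  # toy sequence of syllable labels
```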

17.
We propose a method for a Bayesian hierarchical analysis of count data that are observed at irregular locations in a bounded domain of R^2. We model the data as having been observed on a fine regular lattice, where we do not have observations at all of the sites. The counts are assumed to be independent Poisson random variables whose means are given by a log-Gaussian process. In this article, the Gaussian process is assumed to be either a Markov random field (MRF) or a geostatistical model, and we compare the two models on an environmental data set. To make the comparison, we calibrate priors for the parameters in the geostatistical model to priors for the parameters in the MRF; the calibration is obtained empirically. The main goal is to predict the hidden Poisson-mean process at all sites on the lattice, given the spatially irregular count data; to do this we use an efficient MCMC algorithm. The spatial Bayesian methods are illustrated on radioactivity counts analyzed by Diggle et al. (1998).
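A toy Metropolis sampler for the Poisson/log-Gaussian layer described above, with an independent Gaussian prior on each latent log-mean; the MRF and geostatistical spatial priors of the paper are omitted for brevity, and the counts are simulated.

```python
import numpy as np

rng = np.random.default_rng(42)
mu0, sig0 = 1.0, 1.0                                  # Gaussian prior on log-means
y = rng.poisson(np.exp(rng.normal(mu0, sig0, 50)))    # simulated counts at 50 sites

def site_logpost(z):
    """Elementwise log posterior (up to a constant): Poisson likelihood + prior."""
    return y * z - np.exp(z) - (z - mu0) ** 2 / (2 * sig0 ** 2)

z = np.log(y + 0.5)                                   # initialize latent field
draws = []
for _ in range(5000):
    prop = z + rng.normal(0.0, 0.5, z.shape)          # random-walk proposal
    # Site-wise accept/reject is valid here: the posterior factorizes under
    # the independent prior (a spatial prior would couple the sites).
    accept = np.log(rng.random(z.shape)) < site_logpost(prop) - site_logpost(z)
    z = np.where(accept, prop, z)
    draws.append(z.copy())
lam_hat = np.exp(np.array(draws[1000:]).mean(axis=0)) # posterior mean intensities
```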

18.
Little is known about the factors controlling the distribution and abundance of snow petrels in Antarctica. Studying habitat selection through modeling may provide useful information on the relationships between this species and its environment, which is especially relevant in a climate change context, where habitat availability may change. Validating the predictive capability of habitat selection models with independent data is a vital step in assessing the performance of such models and their potential for predicting species’ distributions in poorly documented areas. From the results of ground surveys conducted in the Casey region (2002–2003, Wilkes Land, East Antarctica), habitat selection models based on a dataset of 4000 nests were created to predict the nesting distribution of snow petrels as a function of topography and substrate. In this study, the Casey models were tested at Mawson, 3800 km away from Casey. The locations and characteristics of approximately 7700 snow petrel nests were collected during ground surveys (summer 2004–2005). Using GIS, predictive maps of nest distribution were produced for the Mawson region with the models derived from the Casey datasets, and predictions were compared to the observed data. Model performance was assessed using classification matrices and receiver operating characteristic (ROC) curves. Overall correct classification rates for the Casey models varied from 57% to 90%. However, two geomorphologically different sub-regions (coastal islands and inland mountains) were clearly distinguished in terms of habitat selection, both by the Casey model predictions and by the specific variations in coefficients of terms in new models derived from the Mawson datasets. Observed variations in snow petrel aggregations were found to be related to local habitat availability. We discuss the applicability of various types of models (GLM, CT) and investigate the effect of scale on the prediction of snow petrel habitats. While the Casey models created with data collected at the nest scale did not perform well at Mawson due to regional variations in nest micro-characteristics, the predictive performance of models created with data compiled at a coarser scale (habitat units) was satisfactory. Substrate type was the most robust predictor of nest presence between Casey and Mawson. This study demonstrates that it is possible to predict the presence of snow petrel nests at large scales based on simple predictors such as topography and substrate, which can be obtained from aerial photography. Such methodologies have valuable applications in the management and conservation of this top predator and associated resources, and may be applied to other Antarctic, sub-Antarctic and lower-latitude species in a variety of habitats.

19.
Coral reefs are threatened ecosystems, so it is important to have predictive models of their dynamics. Most current models of coral reefs fall into two categories. The first is simple heuristic models, which provide an abstract understanding of the possible behaviour of reefs in general but do not describe real reefs. The second is complex simulations whose parameters are obtained from a range of sources, such as literature estimates. We cannot estimate the parameters of these models from a single data set, and we have little idea of the uncertainty in their predictions. We have developed a compromise between these two extremes, which is complex enough to describe real reef data, but simple enough that we can estimate parameters for a specific reef from a time series. In previous work, we fitted this model to a long-term data set from Heron Island, Australia, using maximum likelihood methods. To evaluate predictions from this model, we need estimates of the uncertainty in our parameters. Here, we obtain such estimates using Bayesian Metropolis-coupled Markov chain Monte Carlo. We do this for versions of the model in which corals are aggregated into a single state variable (the three-state model), and in which corals are separated into four state variables (the six-state model), in order to determine the appropriate level of aggregation. We also estimate the posterior distribution of predicted trajectories in each case. In both cases, the fitted trajectories were close to the observed data, but we had doubts about the biological plausibility of some parameter estimates. We suggest that informative prior distributions incorporating expert knowledge may resolve this problem. In the six-state model, the posterior distribution of state frequencies after 40 years contained two divergent community types: one dominated by free space and soft corals, and one dominated by acroporid, pocilloporid, and massive corals. The three-state model predicts only a single community type. We conclude that the three-state model hides too much biological heterogeneity, but we need more data if we are to obtain reliable predictions from the six-state model. It is likely that there will be similarly large, but currently unevaluated, uncertainty in the predictions of other coral reef models, many of which are much more complex and harder to fit to real data.
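A minimal Metropolis-coupled MCMC (parallel tempering) skeleton of the kind referred to above: several chains sample tempered versions of the same posterior and periodically propose state swaps between neighbouring temperatures, with only the cold chain retained. `log_post`, the temperature ladder, and the step size are hypothetical placeholders, not the paper's settings.

```python
import numpy as np

def mc3(log_post, x0, temps=(1.0, 0.5, 0.25), n_iter=10000, step=0.1, seed=0):
    """Metropolis-coupled MCMC; returns draws from the cold (temps[0]=1) chain."""
    rng = np.random.default_rng(seed)
    xs = [np.array(x0, dtype=float) for _ in temps]
    cold_draws = []
    for _ in range(n_iter):
        for j, t in enumerate(temps):              # within-chain Metropolis step
            prop = xs[j] + rng.normal(0, step, xs[j].shape)
            if np.log(rng.random()) < t * (log_post(prop) - log_post(xs[j])):
                xs[j] = prop
        j = rng.integers(len(temps) - 1)           # propose a neighbour swap
        a = (temps[j] - temps[j + 1]) * (log_post(xs[j + 1]) - log_post(xs[j]))
        if np.log(rng.random()) < a:
            xs[j], xs[j + 1] = xs[j + 1], xs[j]
        cold_draws.append(xs[0].copy())
    return np.array(cold_draws)
```

The hot chains explore flattened versions of the posterior and feed distant states to the cold chain via swaps, which helps with the multimodal posteriors (e.g., the two divergent community types) described above.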

20.
Research questions at the regional, national and global scales frequently require the upscaling of existing models. At large scales, simple model aggregation may have a prohibitive computational cost and lead to over-detailed problem representation. Methods that guide model simplification and revision have the potential to support the choice of the appropriate level of detail or heterogeneity within upscaled models. Efficient upscaling will retain only the heterogeneity that contributes to accurate aggregated results. This approach to model revision is challenging, because the automatic generation of alternative models is difficult and the set of possible revised models is very large. In the case where simplification alone is considered, there are at least 2^n - 1 possible simplified models, where n is the number of model variables. Even with the availability of High Performance Computing, it is not possible to evaluate every possible simplified model if the number of model variables is greater than roughly 35. To address these issues, we propose a method that extends an existing procedure for simplifying and aggregating mechanistic models based on replacing model variables with constants. The method generates simplified models by selectively aggregating existing model variables, retaining the existing model structure while reducing the size of the set of possible models and ordering them into a search tree. The tree is then searched selectively. We illustrate the method using a catchment-scale optimization model with c. 50,000 variables (Farm-adapt) in the context of adaptation to climatic change. The method was successful in identifying redundant model variables and an adequate model 10% smaller than the original model. We discuss how the procedure can be extended to other large models and compare the method to those proposed by others. We conclude by urging model developers to regard their models as a starting point and to consider the need for alternative models during model development.
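A greedy caricature of the selective search described above: since exhaustive evaluation of all 2^n - 1 simplifications is infeasible, freeze at each step the variable whose replacement by a constant degrades an accuracy score the least, stopping when any further freeze exceeds a tolerance. `evaluate` is a hypothetical scorer returning model error for a given set of frozen variables, and `variables` is an iterable of variable names.

```python
def greedy_simplify(variables, evaluate, tol=0.01):
    """Freeze variables one at a time while the error increase stays within tol."""
    frozen = set()
    base_err = evaluate(frozen)              # error of the full, unsimplified model
    while len(frozen) < len(variables):
        candidates = [(evaluate(frozen | {v}), v)
                      for v in variables if v not in frozen]
        best_err, best_v = min(candidates)   # cheapest next variable to freeze
        if best_err - base_err > tol:        # any further freeze costs too much
            break
        frozen.add(best_v)
    return frozen
```

This evaluates at most n(n+1)/2 models instead of 2^n - 1, at the cost of possibly missing combinations that only pay off jointly; the paper's tree-ordered selective search is a more systematic middle ground.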
