Similar Literature
A total of 20 similar documents were found.
1.
Hidden process models are a conceptually useful and practical way to simultaneously account for process variation in animal population dynamics and measurement errors in observations and estimates made on the population. Process variation, which can be both demographic and environmental, is modeled by linking a series of stochastic and deterministic subprocesses that characterize processes such as birth, survival, maturation, and movement. Observations of the population can be modeled as functions of true abundance with realistic probability distributions to describe observation or estimation error. Computer-intensive procedures, such as sequential Monte Carlo methods or Markov chain Monte Carlo, condition on the observed data to yield estimates of both the underlying true population abundances and the unknown population dynamics parameters. Formulation and fitting of a hidden process model are demonstrated for Sacramento River winter-run Chinook salmon (Oncorhynchus tshawytscha).
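The hidden-process idea lends itself to a compact illustration. Below is a minimal sketch of a bootstrap particle filter conditioning a toy stochastic exponential-growth model on lognormally noisy survey estimates; the process model, observation model, and all parameter values are illustrative assumptions, not the salmon model from the paper.

```python
# Minimal bootstrap particle filter for a hidden-process population model.
# Illustrative only: the exponential-growth process and lognormal observation
# error are stand-ins, not the salmon model described in the abstract.
import numpy as np

rng = np.random.default_rng(1)

# Simulate "true" abundances and noisy survey estimates.
T, r, proc_sd, obs_sd = 30, 0.05, 0.1, 0.2
true_n = np.empty(T)
true_n[0] = 1000.0
for t in range(1, T):
    true_n[t] = true_n[t - 1] * np.exp(r + proc_sd * rng.standard_normal())
obs = true_n * np.exp(obs_sd * rng.standard_normal(T))  # lognormal obs error

# Bootstrap particle filter: propagate particles through the process model,
# weight them by the observation likelihood, and resample.
P = 5000
particles = rng.lognormal(mean=np.log(1000.0), sigma=0.5, size=P)
filtered_mean = np.empty(T)
for t in range(T):
    if t > 0:
        particles = particles * np.exp(r + proc_sd * rng.standard_normal(P))
    # Observation density kernel: log(obs_t) ~ Normal(log(N_t), obs_sd)
    logw = -0.5 * ((np.log(obs[t]) - np.log(particles)) / obs_sd) ** 2
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filtered_mean[t] = np.sum(w * particles)
    particles = rng.choice(particles, size=P, replace=True, p=w)  # resample

print("true abundances:", np.round(true_n[-5:]))
print("filtered means :", np.round(filtered_mean[-5:]))
```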

2.
Abstract: The nonuse (or passive) value of nature is important but time-consuming and costly to quantify with direct surveys. In the absence of estimates of these values, there will likely be less investment in conservation actions that generate substantial nonuse benefits, such as conservation of native species. To help prevent decisions about the allocation of conservation dollars from being skewed by the lack of estimates of nonuse values, these values can be estimated indirectly by environmental value transfer (EVT). EVT uses existing data or information from a study site such that the estimated monetary value of an environmental good is transferred to another location, the policy site. A major challenge in the use of EVT is the uncertainty about the sign and size of the error (i.e., the percentage by which the transferred value exceeds the actual value) that results from transferring direct estimates of nonuse values from a study site to a policy site, the site to which the value is transferred. An EVT is most useful if the decision-making framework does not require highly accurate information and when the conservation decision is constrained by time and financial resources. To account for uncertainty in the decision-making process, a decision heuristic that guides the decision process and illustrates the possible decision branches can be followed. To account for the uncertainty associated with the transfer of values from one site to another, we developed a risk and simulation approach that uses Monte Carlo simulations to evaluate the net benefits of conservation investments and takes into account different possible distributions of transfer error. This method does not reduce transfer error, but it provides a way to account for the effect of transfer error in conservation decision making. Our risk and simulation approach and decision-based framework on when to use EVT offer better-informed decision making in conservation.
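The risk-and-simulation step can be sketched in a few lines: draw transfer errors from an assumed distribution, convert the transferred value into an implied actual value, and summarize the distribution of net benefits. All monetary figures and error distributions below are hypothetical.

```python
# Sketch of the risk-and-simulation idea: propagate an assumed distribution of
# value-transfer error through a net-benefit calculation via Monte Carlo.
# The transferred value, cost, and error distributions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n_sim = 100_000

transferred_value = 5.0e6   # nonuse value transferred from a study site ($)
conservation_cost = 3.5e6   # cost of the conservation investment ($)

# Transfer error as a fraction: actual value = transferred / (1 + error).
# Compare two assumed error distributions.
error_normal = rng.normal(loc=0.4, scale=0.3, size=n_sim)
error_uniform = rng.uniform(low=-0.2, high=1.0, size=n_sim)

for label, err in [("normal error", error_normal), ("uniform error", error_uniform)]:
    actual_value = transferred_value / (1.0 + err)
    net_benefit = actual_value - conservation_cost
    print(f"{label}: P(net benefit > 0) = {np.mean(net_benefit > 0):.2f}, "
          f"mean net benefit = {net_benefit.mean() / 1e6:.2f} M$")
```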

3.
In this paper we make use of stochastic volatility models to analyse the behaviour of a series of weekly average ozone measurements. The models considered here have previously been used in problems related to financial time series. Two models are considered and their parameters are estimated using a Bayesian approach based on Markov chain Monte Carlo (MCMC) methods. Both models are applied to data provided by the monitoring network of the Metropolitan Area of Mexico City. The selection of the best model for this specific data set is performed using the Deviance Information Criterion and the Conditional Predictive Ordinate method.
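For readers unfamiliar with the model class, the sketch below simulates the canonical discrete-time stochastic volatility process (an AR(1) on the log-variance) that such analyses typically build on; the parameter values are arbitrary and the MCMC fitting step is not shown.

```python
# Sketch of a canonical discrete-time stochastic volatility process:
# log-variance follows an AR(1).  Parameter values are arbitrary
# illustrations, not estimates for the ozone series.
import numpy as np

rng = np.random.default_rng(42)
T = 200
mu, phi, sigma_eta = -1.0, 0.95, 0.2   # level, persistence, vol-of-vol

h = np.empty(T)                        # latent log-volatility
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()

y = np.exp(h / 2) * rng.standard_normal(T)   # observed (mean-centered) series
print("sample standard deviation of y:", y.std().round(3))
```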

4.
Knape J, de Valpine P. Ecology 2012, 93(2): 256-263
We show how a recent framework combining Markov chain Monte Carlo (MCMC) with particle filters (PFMCMC) may be used to estimate population state-space models. With the purpose of utilizing the strengths of each method, PFMCMC explores hidden states by particle filters, while process and observation parameters are estimated using an MCMC algorithm. PFMCMC is exemplified by analyzing time series data on a red kangaroo (Macropus rufus) population in New South Wales, Australia, using MCMC over model parameters based on an adaptive Metropolis-Hastings algorithm. We fit three population models to these data: a density-dependent logistic diffusion model with environmental variance, an unregulated stochastic exponential growth model, and a random-walk model. Bayes factors and posterior model probabilities show that there is little support for density dependence and that the random-walk model is the most parsimonious model. The particle filter Metropolis-Hastings algorithm is a brute-force method that may be used to fit a range of complex population models. Implementation is straightforward and less involved than standard MCMC for many models, and marginal densities for model selection can be obtained with little additional effort. The cost is mainly computational, resulting in long running times that may be improved by parallelizing the algorithm.
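A stripped-down sketch of the PFMCMC idea: a Metropolis-Hastings chain over one parameter, with the likelihood at each iteration replaced by a bootstrap particle-filter estimate. The random-walk state-space model, priors, and tuning constants are illustrative assumptions, not the kangaroo models of the paper.

```python
# Sketch of particle-filter MCMC: Metropolis-Hastings over the process
# standard deviation of a random-walk state-space model, with the likelihood
# estimated by a bootstrap particle filter at every step.
import numpy as np

rng = np.random.default_rng(7)

# Simulated data: log-abundance random walk with observation noise.
T, proc_sd_true, obs_sd = 40, 0.15, 0.3
x = np.cumsum(proc_sd_true * rng.standard_normal(T)) + 5.0
y = x + obs_sd * rng.standard_normal(T)

def pf_loglik(proc_sd, n_part=500):
    """Particle-filter estimate of the log-likelihood for a given proc_sd."""
    particles = rng.normal(5.0, 1.0, n_part)
    ll = 0.0
    for t in range(T):
        particles = particles + proc_sd * rng.standard_normal(n_part)
        logw = -0.5 * ((y[t] - particles) / obs_sd) ** 2 - np.log(obs_sd)
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())
        particles = rng.choice(particles, n_part, replace=True, p=w / w.sum())
    return ll

# Random-walk Metropolis-Hastings on log(proc_sd), flat prior on the log scale.
n_iter, step = 2000, 0.2
theta = np.log(0.3)
ll = pf_loglik(np.exp(theta))
draws = []
for _ in range(n_iter):
    prop = theta + step * rng.standard_normal()
    ll_prop = pf_loglik(np.exp(prop))
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop      # keep the stored likelihood estimate
    draws.append(np.exp(theta))

print("posterior mean of process sd ≈", np.round(np.mean(draws[500:]), 3))
```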

5.
We develop regional-scale eutrophication models for lakes, ponds, and reservoirs to investigate the link between nutrients and chlorophyll-a. The Bayesian TREED (BTREED) model approach allows association of multiple environmental stressors with biological responses and quantification of uncertainty sources in the empirical water quality model. Nutrient data for lakes, ponds, and reservoirs across the United States were obtained from the Environmental Protection Agency (EPA) National Nutrient Criteria Database. The nutrient data consist of measurements of both stressor variables (such as total nitrogen and total phosphorus) and response variables (such as chlorophyll-a) used in the BTREED model. Markov chain Monte Carlo (MCMC) posterior exploration guides a stochastic search through a rich suite of candidate trees toward models that better fit the data. The Bayes factor provides a goodness-of-fit criterion for comparison of the resulting models. We randomly split the data into training and test sets; the training data were used in model estimation, and the test data were used to evaluate the out-of-sample predictive performance of the model. An average relative efficiency of 1.02 between the training and test data for the four highest log-likelihood models suggests good out-of-sample predictive performance. Reduced model uncertainty relative to over-parameterized alternative models makes the BTREED models useful for nutrient criteria development, providing the link between nutrient stressors and meaningful eutrophication response.

6.
Behavioural models for both humans and other animals often assume economic rationality on the part of decision makers. Economic rationality supposes that outcomes can be assigned objective values within a stable valuation framework and that choices are made to maximise a decision maker's expected payoff. Yet both human and animal behaviour is often not economically rational. Here, we compare economically rational decision-making strategies with a strategy (trade-off contrasts) that has been proposed to account for decision-making behaviour in humans that departs from axiomatic rationality. We model the fitness of these strategies in a simple environment where choices are made on repeated occasions, there is stochastic fluctuation in the choices available at any given time, and there is uncertainty about which choices will be available in the future. Our results show that, for at least some of the model parameter space, non-rational decision strategies achieve higher fitness than economically rational strategies. The differences were comparable in magnitude to selection differentials observed in nature.

7.
This paper considers the modeling and forecasting of daily maximum hourly ozone concentrations in Laranjeiras, Serra, Brazil, through dynamic regression models. In order to take into account the natural skewness and heavy-tailedness of the data, a linear regression model with autoregressive errors and innovations following a member of the family of scale mixtures of skew-normal distributions was considered. Pollutants and meteorological variables were considered as predictors, along with some deterministic factors, namely weekdays and seasons. The Oceanic Niño Index was also considered as a predictor. The estimated model was able to explain the correlation structure of the ozone time series satisfactorily. An out-of-sample forecast study was also performed. The skew-normal and skew-t models displayed quite competitive point forecasts compared to the similar model with Gaussian innovations. In terms of forecast intervals, however, the skewed models performed much better, with more accurate prediction intervals. These findings were corroborated empirically by a Monte Carlo forecast experiment.
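Why skewed innovations matter for interval forecasts can be illustrated with synthetic data: a Gaussian fit allocates tail probability asymmetrically, whereas a skew-normal fit gives roughly balanced tails. The sketch below uses scipy's skewnorm as a stand-in; it is not the dynamic regression model from the paper.

```python
# Compare 95% intervals from a Gaussian fit and a skew-normal fit when the
# data-generating process is skewed.  Purely synthetic, not the ozone series.
import numpy as np
from scipy import stats

a_true = 4.0                                   # skewness of the synthetic DGP
train = stats.skewnorm.rvs(a_true, size=500, random_state=1)
test = stats.skewnorm.rvs(a_true, size=5000, random_state=2)

# Interval endpoints from each fitted distribution.
mu, sd = train.mean(), train.std(ddof=1)
g_lo, g_hi = stats.norm.ppf([0.025, 0.975], loc=mu, scale=sd)
a_hat, loc_hat, scale_hat = stats.skewnorm.fit(train)
s_lo, s_hi = stats.skewnorm.ppf([0.025, 0.975], a_hat, loc_hat, scale_hat)

for name, lo, hi in [("Gaussian", g_lo, g_hi), ("skew-normal", s_lo, s_hi)]:
    below, above = np.mean(test < lo), np.mean(test > hi)
    print(f"{name:12s} [{lo:5.2f}, {hi:5.2f}]  "
          f"P(below) = {below:.3f}  P(above) = {above:.3f}")
```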

8.
Ensemble Bayesian model averaging using Markov Chain Monte Carlo sampling
Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155–1174, 2005) recommended the Expectation–Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed DiffeRential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model streamflow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
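A minimal sketch of the Gaussian BMA mixture and its EM training step (the abstract's alternative is MCMC with DREAM, which is not reproduced here). The synthetic "ensemble members" and their biases are invented for illustration, and the bias-correction step of the full method is omitted.

```python
# Gaussian BMA mixture for a forecast ensemble, trained with a basic EM loop:
#   y_i ~ sum_k w_k * N(f_ik, sigma^2)
import numpy as np

rng = np.random.default_rng(11)
n, K = 400, 3
truth = rng.normal(15.0, 5.0, n)                      # verifying observations
bias = np.array([0.5, -1.0, 2.0])                     # synthetic member biases
noise = np.array([1.0, 2.0, 3.0])                     # synthetic member noise
forecasts = truth[:, None] + bias + noise * rng.standard_normal((n, K))

w = np.full(K, 1.0 / K)                               # initial weights
sigma2 = 4.0                                          # initial variance
for _ in range(200):
    dens = np.exp(-0.5 * (truth[:, None] - forecasts) ** 2 / sigma2) \
           / np.sqrt(2 * np.pi * sigma2)
    z = w * dens
    z /= z.sum(axis=1, keepdims=True)                 # E-step responsibilities
    w = z.mean(axis=0)                                # M-step: mixture weights
    sigma2 = np.sum(z * (truth[:, None] - forecasts) ** 2) / n   # variance

print("BMA weights:", np.round(w, 3), " predictive sd:", round(np.sqrt(sigma2), 2))
```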

9.
Coral reefs are threatened ecosystems, so it is important to have predictive models of their dynamics. Most current models of coral reefs fall into two categories. The first is simple heuristic models, which provide an abstract understanding of the possible behaviour of reefs in general but do not describe real reefs. The second is complex simulations whose parameters are obtained from a range of sources such as literature estimates. We cannot estimate the parameters of these models from a single data set, and we have little idea of the uncertainty in their predictions. We have developed a compromise between these two extremes, which is complex enough to describe real reef data, but simple enough that we can estimate parameters for a specific reef from a time series. In previous work, we fitted this model to a long-term data set from Heron Island, Australia, using maximum likelihood methods. To evaluate predictions from this model, we need estimates of the uncertainty in our parameters. Here, we obtain such estimates using Bayesian Metropolis-coupled Markov chain Monte Carlo. We do this for versions of the model in which corals are aggregated into a single state variable (the three-state model), and in which corals are separated into four state variables (the six-state model), in order to determine the appropriate level of aggregation. We also estimate the posterior distribution of predicted trajectories in each case. In both cases, the fitted trajectories were close to the observed data, but we had doubts about the biological plausibility of some parameter estimates. We suggest that informative prior distributions incorporating expert knowledge may resolve this problem. In the six-state model, the posterior distribution of state frequencies after 40 years contained two divergent community types, one dominated by free space and soft corals, and one dominated by acroporid, pocilloporid, and massive corals. The three-state model predicts only a single community type. We conclude that the three-state model hides too much biological heterogeneity, but we need more data if we are to obtain reliable predictions from the six-state model. It is likely that there will be similarly large, but currently unevaluated, uncertainty in the predictions of other coral reef models, many of which are much more complex and harder to fit to real data.
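Metropolis-coupled MCMC can be illustrated on a toy bimodal target: heated chains explore freely and occasionally swap states with the cold chain. The sketch below shows only the sampler; it has nothing to do with the reef model itself.

```python
# Metropolis-coupled MCMC (parallel tempering) on a toy bimodal target.
import numpy as np

rng = np.random.default_rng(5)

def logpost(x):
    # Toy bimodal "posterior": mixture of two well-separated normals.
    return np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)

temps = np.array([1.0, 0.5, 0.2, 0.05])     # temps[0] is the cold (target) chain
states = np.zeros(len(temps))
cold_samples = []
for it in range(20000):
    # Within-chain random-walk Metropolis updates at each temperature.
    for j, beta in enumerate(temps):
        prop = states[j] + rng.normal(0, 1.5)
        if np.log(rng.uniform()) < beta * (logpost(prop) - logpost(states[j])):
            states[j] = prop
    # Propose swapping a randomly chosen adjacent pair of chains.
    j = rng.integers(len(temps) - 1)
    dbeta = temps[j] - temps[j + 1]
    if np.log(rng.uniform()) < dbeta * (logpost(states[j + 1]) - logpost(states[j])):
        states[j], states[j + 1] = states[j + 1], states[j]
    cold_samples.append(states[0])

cold = np.array(cold_samples[5000:])
print("fraction of cold-chain samples in the right-hand mode:",
      np.mean(cold > 0).round(2))
```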

10.
Studying evolutionary mechanisms in natural populations often requires testing multifactorial scenarios of causality involving direct and indirect relationships among individual and environmental variables. It is also essential to account for the imperfect detection of individuals to provide unbiased demographic parameter estimates. To cope with these issues, we developed a new approach combining structural equation models with capture-recapture models (CR-SEM) that allows the investigation of competing hypotheses about individual and environmental variability observed in demographic parameters. We employ Markov chain Monte Carlo sampling in a Bayesian framework to (1) estimate model parameters, (2) implement a model selection procedure to evaluate competing hypotheses about causal mechanisms, and (3) assess the fit of models to data using posterior predictive checks. We illustrate the value of our approach using two case studies on wild bird populations. We first show that CR-SEM can be useful to quantify the action of selection on a set of phenotypic traits with an analysis of selection gradients on morphological traits in Common Blackbirds (Turdus merula). In a second case study on Blue Tits (Cyanistes caeruleus), we illustrate the use of CR-SEM to study evolutionary trade-offs in the wild, while accounting for varying environmental conditions.

11.
Testing the Generality of Bird-Habitat Models
Bird-habitat models are frequently used as predictive modeling tools—for example, to predict how a species will respond to habitat modifications. We investigated the generality of the predictions from this type of model. Multivariate models were developed for Golden Eagle (Aquila chrysaetos), Raven (Corvus corax), and Buzzard (Buteo buteo) living in northwest Scotland. Data were obtained for all habitat and nest locations within an area of 2349 km². This assemblage of species is relatively static with respect to both occupancy and spatial positioning. The area was split into five geographic subregions: two on the mainland and three on the adjacent Island of Mull, which has one of the United Kingdom's richest raptor assemblages. Because data were collected for all nest locations and habitats, it was possible to build models that did not incorporate sampling error. A range of predictive models was developed using discriminant analysis and logistic regression. The models differed with respect to the geographical origin of the data used for model development. The predictive success of these models was then assessed by applying them to validation data. The models showed a wide range of predictive success, ranging from only 6% of nest sites correctly predicted to 100% correctly predicted. Model validation techniques were used to ensure that the models' predictions were not statistical artefacts. The variability in prediction success seemed to result from methodological and ecological processes, including the data recording scheme and interregional differences in nesting habitat. The results from this study suggest that conservation biologists must be very careful about making predictions from such studies because we may be working with systems that are inherently unpredictable.
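The cross-region validation exercise can be mimicked with synthetic data: fit a logistic habitat model in one subregion and evaluate the proportion of nest sites it correctly predicts in another. Covariates, coefficients, and sample sizes below are invented, not the Scottish raptor data.

```python
# Fit a logistic habitat model in one synthetic "subregion" and validate it
# against another whose habitat-nesting relationship differs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

def make_region(n, coef):
    """Synthetic region: two habitat covariates and nest presence/absence."""
    X = rng.normal(size=(n, 2))                  # e.g. elevation, prey index
    p = 1 / (1 + np.exp(-(X @ coef - 1.0)))
    y = rng.binomial(1, p)
    return X, y

# Regions differ in how habitat relates to nesting (interregional differences).
X_a, y_a = make_region(500, np.array([1.5, 1.0]))
X_b, y_b = make_region(500, np.array([0.3, 1.8]))

model = LogisticRegression().fit(X_a, y_a)
for name, X, y in [("same region ", X_a, y_a), ("other region", X_b, y_b)]:
    pred = model.predict(X)
    nest_correct = np.mean(pred[y == 1] == 1)    # share of nest sites predicted
    print(f"{name}: nest sites correctly predicted = {nest_correct:.0%}")
```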

12.
A centered spatial-temporal autologistic model is developed for analyzing spatial-temporal binary data observed on a lattice over time. We propose expectation-maximization pseudolikelihood and Monte Carlo expectation-maximization likelihood estimation, and also consider Bayesian inference, to obtain estimates of the model parameters. Further, we compare the statistical efficiency of the three approaches for various sizes of sampling lattices and numbers of sampling time points. Regarding prediction, we use Monte Carlo methods to obtain predictive distributions at future time points and compare the performance of the model with the uncentered spatial-temporal autologistic regression model. The methodology is demonstrated via simulation studies and a real data example concerning a southern pine beetle outbreak in North Carolina.

13.
A set of stochastic differential equations has been used to model an aquatic ecosystem. The randomness in the system has been introduced through initial conditions of the state variables, parameters, and input variables (light and temperature). These models were analysed using Monte Carlo simulation procedures and the results were similar to those observed in the experimental and field data. They were different, however, from the results of a deterministic simulation. This approach allows us to incorporate the maximum degree of information in the model and to study the behavior of the system without arbitrarily manipulating the values of the parameters. Some possible refinements and generalizations of this approach are also discussed.
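A sketch of the Monte Carlo treatment of a stochastic differential equation: Euler-Maruyama paths of a logistic-growth SDE, with randomness entering through initial conditions and parameters as the abstract describes. The equation and all values are illustrative, not the original aquatic ecosystem model.

```python
# Euler-Maruyama Monte Carlo ensemble for a logistic growth SDE with
# randomized initial conditions and parameters.
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps, dt = 1000, 365, 1.0

# Randomized initial state and parameters (the "maximum information" idea).
x = rng.normal(1.0, 0.2, n_paths)            # initial biomass
r = rng.normal(0.05, 0.01, n_paths)          # growth rate
K = rng.normal(10.0, 1.0, n_paths)           # carrying capacity
sigma = 0.05                                 # environmental noise intensity

for _ in range(n_steps):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    x = x + r * x * (1 - x / K) * dt + sigma * x * dW
    x = np.clip(x, 0.0, None)                # biomass cannot go negative

print("ensemble mean:", x.mean().round(2),
      " 5-95% range:", np.round(np.percentile(x, [5, 95]), 2))
```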

14.
Providing insight into decisions to hunt and trade bushmeat can facilitate improved management interventions, which typically include enforcement, alternative employment, and donation of livestock. Conservation interventions to regulate bushmeat hunting and trade have hitherto been based on assumptions of utility-maximizing (i.e., personal-benefit-maximizing) behavior, which influences the types of incentives designed. However, if individuals instead strive to minimize regret, interventions may be misguided. We tested support for 3 hypotheses regarding decision rules through a choice experiment in Tanzania. We estimated models based on the assumptions of random utility maximization (RUM) and pure random regret minimization (P-RRM) and combinations thereof. One of these models had an attribute-specific decision rule and another had a class-specific decision rule. The RUM model outperformed the P-RRM model, but the attribute-specific model performed better. Allowing respondents with different decision rules and preference heterogeneity within each decision rule in a class-specific model performed best, revealing that 55% of the sample used a P-RRM decision rule. Individuals using a P-RRM decision rule responded less to enforcement, salary, and livestock donation than did individuals using the RUM decision rule. Hence, 3 common strategies (enforcement, alternative income-generating activities, and providing livestock as a substitute protein) are likely less effective in changing the behavior of more than half of the respondents. Only salary elicited a large (i.e., elastic) response, and only for one RUM class. Policies to regulate the bushmeat trade based solely on the assumption of individuals maximizing utility may fail for a significant proportion of the sample. Despite the superior performance of models that allow both RUM and P-RRM decision rules, there are drawbacks that must be considered before use in the Global South, where very little is known about the social psychology of decision making.
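The two decision rules can be contrasted on a toy choice set: utility maximization (RUM) versus a pure random-regret rule. The attributes, coefficients, and the particular regret formula used here are illustrative assumptions, not the fitted Tanzanian choice models.

```python
# Contrast RUM logit choice probabilities with a pure random-regret rule on a
# toy three-alternative choice set.  Attributes and coefficients are invented.
import numpy as np

# Three alternatives x two attributes (e.g. expected income, risk of sanction).
X = np.array([[3.0, 1.0],
              [2.0, 0.2],
              [1.0, 0.1]])
beta = np.array([0.8, -1.5])        # income is good, sanction risk is bad

# RUM: choose to maximize utility V_i = beta . x_i (logit probabilities).
V = X @ beta
p_rum = np.exp(V) / np.exp(V).sum()

# Regret rule (one common piecewise-linear formulation): alternative i
# accumulates regret whenever another alternative j beats it on an attribute;
# choose to minimize regret.
R = np.zeros(len(X))
for i in range(len(X)):
    for j in range(len(X)):
        if j != i:
            R[i] += np.maximum(0.0, beta * (X[j] - X[i])).sum()
p_rrm = np.exp(-R) / np.exp(-R).sum()

print("RUM choice probabilities   :", np.round(p_rum, 2))
print("regret-rule probabilities  :", np.round(p_rrm, 2))
```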

15.
Three general methods to calculate soil contaminant cleanup levels are assessed: the truncated lognormal approach, Monte Carlo analysis, and the house-by-house approach. When these methods are used together with a lead risk assessment model, they yield estimated soil lead cleanup levels that may be required in an attempt to achieve specified target blood lead levels for a community. The truncated lognormal approach is exemplified by the Society for Environmental Geochemistry and Health (SEGH) model, Monte Carlo analysis is exemplified by the US EPA's LEAD Model, and the house-by-house approach is used with a structural equation model to calculate site-specific soil lead cleanup levels. The various cleanup methods can each be used with any type of lead risk assessment model. Although all examples given here are for lead, the cleanup methods can, in principle, be used as well with risk assessment models for other chemical contaminants to derive contaminant-specific soil cleanup levels.
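The Monte Carlo approach to cleanup levels can be sketched as a search over candidate soil lead concentrations, keeping the highest level for which the simulated blood lead distribution meets a target exceedance rate. The exposure slope, baseline, and variability values below are hypothetical, not parameters of the SEGH or EPA models.

```python
# Monte Carlo sketch: find the highest soil lead level whose simulated blood
# lead distribution keeps exceedances of a target below an allowed fraction.
# All exposure parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(9)
n_children = 50_000
target_pbb, max_exceed = 10.0, 0.05     # ug/dL target, allowed exceedance rate

def frac_above_target(soil_pb):
    """Fraction of simulated children exceeding the blood lead target."""
    baseline = rng.lognormal(np.log(2.0), 0.4, n_children)   # non-soil sources
    slope = rng.lognormal(np.log(0.007), 0.3, n_children)    # ug/dL per ppm
    blood = baseline + slope * soil_pb
    return np.mean(blood > target_pbb)

# Scan candidate cleanup levels and keep the highest acceptable one.
levels = np.arange(100, 2001, 50)
ok = [lvl for lvl in levels if frac_above_target(lvl) <= max_exceed]
print("estimated cleanup level (ppm):", max(ok) if ok else "none acceptable")
```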

16.
Abstract: Regional conservation planning increasingly draws on habitat suitability models to support decisions regarding land allocation and management. Nevertheless, statistical techniques commonly used for developing such models may give misleading results because they fail to account for 3 factors common in data sets of species distribution: spatial autocorrelation, the large number of sites where the species is absent (zero inflation), and uneven survey effort. We used spatial autoregressive models fit with Bayesian Markov Chain Monte Carlo techniques to assess the relationship between older coniferous forest and the abundance of Northern Spotted Owl nest and activity sites throughout the species' range. The spatial random-effect term incorporated in the autoregressive models successfully accounted for zero inflation and reduced the effect of survey bias on estimates of species–habitat associations. Our results support the hypothesis that the relationship between owl distribution and older forest varies with latitude. A quadratic relationship between owl abundance and older forest was evident in the southern portion of the range, and a pseudothreshold relationship was evident in the northern portion of the range. Our results suggest that proposed changes to the network of owl habitat reserves would reduce the proportion of the population protected by up to one-third, and that proposed guidelines for forest management within reserves underestimate the proportion of older forest associated with maximum owl abundance and inappropriately generalize threshold relationships among subregions. Bayesian spatial models can greatly enhance the utility of habitat analysis for conservation planning because they add the statistical flexibility necessary for analyzing regional survey data while retaining the interpretability of simpler models.

17.
An air pollution control model is developed for open-pit metal mines that will aid decision makers in selecting a cost-effective solution. Open-pit metal mines contribute to air pollution and, without effective control techniques, risk violating environmental guidelines. This paper establishes a stochastic approach to conceptualizing the air pollution control model so as to attain a sustainable solution. The model is formulated so that decision makers can select the least costly treatment method using linear programming with a defined objective function and multiple constraints. Furthermore, an integrated fuzzy-based risk assessment approach is applied to examine uncertainties and evaluate ambient air quality systematically. The applicability of the optimized model is explored through a case study of an open-pit metal mine in North America. The method also incorporates meteorological data as input to accommodate local conditions. The uncertainties in the inputs and in the predicted concentrations are addressed by probabilistic analysis using the Monte Carlo simulation method. The output results are used to select cost-effective pollution control technologies for PM2.5, PM10, NOx, SO2, and greenhouse gases. The risk level is divided into three types (loose, medium, and strict) using a triangular fuzzy membership approach based on different environmental guidelines. Fuzzy logic is then used to identify environmental risk through stochastically simulated cumulative distribution functions of pollutant concentration. Thus, the integrated modeling approach can be used as a decision tool for selecting cost-effective technology to control air pollution.
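The least-cost selection step can be posed as a small linear program: choose application levels of candidate control measures so that required pollutant removals are met at minimum cost. Costs, removal rates, and requirements below are made up, and the fuzzy risk assessment layer is not shown.

```python
# Least-cost selection of control measures as a linear program:
# minimize cost @ x subject to removal @ x >= required, 0 <= x <= 1.
import numpy as np
from scipy.optimize import linprog

# Decision variables: application level (0-1) of four control measures.
cost = np.array([120.0, 80.0, 200.0, 60.0])          # $k per unit applied

# Removal achieved per unit of each measure, for PM10, NOx, SO2 (tonnes/yr).
removal = np.array([[30.0, 10.0, 45.0,  5.0],   # PM10
                    [ 2.0, 20.0,  5.0,  1.0],   # NOx
                    [ 1.0,  5.0, 25.0,  0.5]])  # SO2
required = np.array([40.0, 15.0, 20.0])         # tonnes/yr to remove

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the removal
# requirement removal @ x >= required becomes -removal @ x <= -required.
res = linprog(c=cost, A_ub=-removal, b_ub=-required,
              bounds=[(0, 1)] * len(cost), method="highs")

print("feasible:", res.success)
print("application levels:", np.round(res.x, 2),
      " total cost ($k):", round(res.fun, 1))
```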

18.
We present a strategy for using an empirical forest growth model to reduce uncertainty in predictions made with a physiological process-based forest ecosystem model. The uncertainty reduction is carried out via Bayesian melding, in which information from prior knowledge and a deterministic computer model is conditioned on a likelihood function. We used predictions from the empirical forest growth model G-HAT in place of field observations of aboveground net primary productivity (ANPP) in a deciduous temperate forest ecosystem. Using Bayesian melding, priors for the inputs of the process-based forest ecosystem model PnET-II were propagated through the model, and likelihoods for the PnET-II output ANPP were calculated using the G-HAT predictions. Posterior distributions for ANPP and many PnET-II inputs obtained using the G-HAT predictions largely matched posteriors obtained using field data. Since empirical growth models are often more readily available than extensive field data sets, the method represents a potential gain in efficiency for reducing the uncertainty of process-based model predictions when reliable empirical models are available but high-quality data are not.
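A highly simplified sketch of the melding idea: draw process-model inputs from their priors, push them through a toy deterministic model, and reweight by a likelihood centered on an empirical-model prediction of the output (a sampling-importance-resampling step; full Bayesian melding also pools priors on the output). The toy "productivity model" and all numbers below are stand-ins, not PnET-II or G-HAT.

```python
# Simplified prior-propagation-plus-reweighting sketch of the melding idea.
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Priors on two hypothetical process-model inputs.
leaf_n = rng.uniform(1.5, 3.0, n)          # foliar nitrogen (%)
water = rng.uniform(0.3, 1.0, n)           # water availability index

def anpp_model(leaf_n, water):
    """Toy stand-in for a process model's ANPP output (g C m-2 yr-1)."""
    return 400.0 * leaf_n * water

anpp = anpp_model(leaf_n, water)

# "Data": an empirical-model prediction of ANPP with its uncertainty.
anpp_obs, obs_sd = 650.0, 80.0
logw = -0.5 * ((anpp - anpp_obs) / obs_sd) ** 2
w = np.exp(logw - logw.max())
w /= w.sum()

idx = rng.choice(n, size=n, replace=True, p=w)        # resample by weight
print("posterior mean ANPP:", anpp[idx].mean().round(0))
print("posterior mean foliar N:", leaf_n[idx].mean().round(2))
```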

19.
Testing the Accuracy of Population Viability Analysis

20.
Abstract: Whenever population viability analysis (PVA) models are built to help guide decisions about the management of rare and threatened species, an important component of model building is the specification of a habitat model describing how a species is related to landscape or bioclimatic variables. Model-selection uncertainty may arise because there is often a great deal of ambiguity about which habitat model structure best approximates the true underlying biological processes. The standard approach to incorporating habitat models into PVA is to assume the best habitat model is correct, ignoring habitat-model uncertainty and alternative model structures that may lead to quantitatively different conclusions and management recommendations. Here we provide the first detailed examination of the influence of habitat-model uncertainty on the ranking of management scenarios from a PVA model. We evaluated and ranked 6 management scenarios for the endangered southern brown bandicoot (Isoodon obesulus) with PVA models, each derived from plausible competing habitat models developed with logistic regression. The ranking of management scenarios was sensitive to the choice of the habitat model used in PVA predictions. Our results demonstrate the need to incorporate methods into PVA that better account for model uncertainty and highlight the sensitivity of PVA to decisions made during model building. We recommend that researchers search for and consider a range of habitat models when undertaking model-based decision making and suggest that routine sensitivity analyses should be expanded to include an analysis of the impact of habitat-model uncertainty and assumptions.
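How habitat-model uncertainty can reorder management scenarios is easy to illustrate: two plausible habitat models imply different carrying capacities for each scenario, and a simple stochastic PVA ranks the scenarios by quasi-extinction risk under each. All numbers below are invented; this is not the bandicoot analysis.

```python
# Rank management scenarios by quasi-extinction risk under two competing
# habitat models that imply different carrying capacities.
import numpy as np

rng = np.random.default_rng(6)

# Carrying capacity implied for three scenarios under two habitat models.
K = {"habitat model A": {"scenario 1": 60, "scenario 2": 90, "scenario 3": 120},
     "habitat model B": {"scenario 1": 110, "scenario 2": 70, "scenario 3": 80}}

def quasi_extinction_prob(k, n_reps=2000, years=50, r=0.1, sd=0.5, n0=40):
    """Ricker-type stochastic projection; quasi-extinction = dropping below 10."""
    n = np.full(n_reps, float(n0))
    extinct = np.zeros(n_reps, dtype=bool)
    for _ in range(years):
        n = n * np.exp(r * (1 - n / k) + sd * rng.standard_normal(n_reps))
        extinct |= n < 10
    return extinct.mean()

for model, scenarios in K.items():
    risk = {s: quasi_extinction_prob(k) for s, k in scenarios.items()}
    ranking = sorted(risk, key=risk.get)
    print(model, {s: round(p, 2) for s, p in risk.items()}, "-> best:", ranking[0])
```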
