Similar Articles
20 similar articles were retrieved (search time: 46 ms).
1.
We introduce a methodology to infer zones of high potential for the habitat of a species, useful for management of biodiversity, conservation, biogeography, ecology, or sustainable use. Inference is based on a set of sites where the presence of the species has been reported. Each site is associated with covariate values, measured on discrete scales. We compute the predictive probability that the species is present at each node of a regular grid. Possible spatial bias for sites of presence is accounted for. Since the resulting posterior distribution does not have a closed form, a Markov chain Monte Carlo (MCMC) algorithm is implemented. However, we also describe an approximation to the posterior distribution, which avoids MCMC. Relevant features of the approach are that specific notions of data acquisition such as sampling intensity and detectability are accounted for, and that available a priori information regarding areas of distribution of the species is incorporated in a clear-cut way. These concepts, arising in the presence-only context, are not addressed in alternative methods. We also consider an uncertainty map, which measures the variability for the predictive probability at each node on the grid. A simulation study is carried out to test and compare our approach with other standard methods. Two case studies are also presented.

2.
Recently, public health professionals and other geostatistical researchers have shown increasing interest in boundary analysis, the detection or testing of zones or boundaries that reveal sharp changes in the values of spatially oriented variables. For areal data (i.e., data which consist only of sums or averages over geopolitical regions), Lu and Carlin (Geogr Anal 37: 265–285, 2005) suggested a fully model-based framework for areal wombling using Bayesian hierarchical models with posterior summaries computed using Markov chain Monte Carlo (MCMC) methods, and showed the approach to have advantages over existing non-stochastic alternatives. In this paper, we develop Bayesian areal boundary analysis methods that estimate the spatial neighborhood structure using the value of the process in each region and other variables that indicate how similar two regions are. Boundaries may then be determined by the posterior distribution of either this estimated neighborhood structure or the regional mean response differences themselves. Our methods do require several assumptions (including an appropriate prior distribution, a normal spatial random effect distribution, and a Bernoulli distribution for a set of spatial weights), but also deliver more in terms of full posterior inference for the boundary segments (e.g., direct probability statements regarding the probability that a particular border segment is part of the boundary). We illustrate three different remedies for the computing difficulties encountered in implementing our method. We use simulation to compare among existing purely algorithmic approaches, the Lu and Carlin (2005) method, and our new adjacency modeling methods. We also illustrate more practical modeling issues (e.g., covariate selection) in the context of a breast cancer late detection data set collected at the county level in the state of Minnesota.

3.
Hierarchical modeling of abundance in space or time using closed-population mark-recapture under heterogeneity (model M_h) presents two challenges: (i) finding a flexible likelihood in which abundance appears as an explicit parameter and (ii) fitting the hierarchical model for abundance. The first challenge arises because abundance not only indexes the population size, it also determines the dimension of the capture probabilities in heterogeneity models. A common approach is to use data augmentation to include these capture probabilities directly into the likelihood and fit the model using Bayesian inference via Markov chain Monte Carlo (MCMC). Two such examples of this approach are (i) explicit trans-dimensional MCMC, and (ii) superpopulation data augmentation. The superpopulation approach has the advantage of simple specification that is easily implemented in BUGS and related software. However, it reparameterizes the model so that abundance is no longer included, except as a derived quantity. This is a drawback when hierarchical models for abundance, or related parameters, are desired. Here, we analytically compare the two approaches and show that they are more closely related than might appear superficially. We exploit this relationship to specify the model in a way that allows us to include abundance as a parameter and that facilitates hierarchical modeling using readily available software such as BUGS. We use this approach to model trends in grizzly bear abundance in Yellowstone National Park from 1986 to 1998.

4.
This paper addresses the question of studying the joint structure of three data tables R, L and Q. In our motivating ecological example, the central table L is a sites-by-species table that contains the number of organisms of a set of species that occurs at a set of sites. At the margins of L are the sites-by-environment data table R and the species-by-trait data table Q. For relating the biological traits of organisms to the characteristics of the environment in which they live, we propose a statistical technique called RLQ analysis (R-mode linked to Q-mode), which consists in the general singular value decomposition of the triplet (R^t D_I L D_J Q, D_q, D_p), where D_I, D_J, D_q, D_p are diagonal weight matrices, which are chosen in relation to the type of data that is being analyzed (quantitative, qualitative, etc.). In the special case where the central table is analysed by correspondence analysis, RLQ maximizes the covariance between linear combinations of columns of R and Q. An example in bird ecology illustrates the potential of this method for community ecologists.
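A minimal numpy sketch of the core RLQ computation, assuming the generalized SVD of the triplet can be obtained from the ordinary SVD of the metric-weighted cross matrix (the usual duality-diagram convention); the matrix sizes, uniform weights, and random data are purely illustrative, not the paper's bird-ecology example.

```python
import numpy as np

# Illustrative dimensions: I sites, J species, p environmental variables, q traits.
rng = np.random.default_rng(0)
I, J, p, q = 30, 12, 4, 3
R = rng.random((I, p))      # sites x environment
L = rng.random((I, J))      # sites x species (abundances)
Q = rng.random((J, q))      # species x traits

# Diagonal weight matrices (here uniform; in practice derived from the analysis
# of L, e.g. correspondence-analysis row and column weights).
D_I = np.eye(I) / I
D_J = np.eye(J) / J
D_p = np.eye(p) / p
D_q = np.eye(q) / q

# Cross matrix of the triplet (R^t D_I L D_J Q, D_q, D_p).
Z = R.T @ D_I @ L @ D_J @ Q          # p x q

# Generalized SVD of the triplet via ordinary SVD of the weighted matrix
# (assuming the metrics enter as square roots on rows and columns).
Zw = np.sqrt(D_p) @ Z @ np.sqrt(D_q)
U, s, Vt = np.linalg.svd(Zw, full_matrices=False)
print("RLQ singular values (co-inertia axes):", s)
```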

5.
Chl a- and C-normalized pigment ratios were studied in two dinophytes (Prorocentrum minimum and Karlodinium micrum), three haptophytes (Chrysochromulina leadbeateri, Prymnesium parvum cf. patelliferum, Phaeocystis globosa), two prasinophytes (Pseudoscourfieldia marina, Bathycoccus prasinos) and the raphidophyte Heterosigma akashiwo, in low (LL, 35 μmol photons m⁻² s⁻¹) and high light (HL, 500 μmol photons m⁻² s⁻¹). Pigment ratios in LL and HL were compared against a general rule of photoacclimation: LL versus HL ratios ≥ 1 are typical for light-harvesting pigments (LHP) and < 1 for photoprotective carotenoids. Peridinin, prasinoxanthin, gyroxanthin-diester and 19′-butanoyloxy-fucoxanthin were stable chemotaxonomic markers, with less than 25% variation between LL and HL Chl a-normalized ratios. As expected, Chls exhibited LL/HL Chl a ratios > 1, with some exceptions such as Chl c3 in P. globosa and MV Chl c3 in C. leadbeateri. LL/HL Chl a ratios of photosynthetic carotenoids were close to 1, except for Hex-fuco in P. globosa (four-fold higher Chl a ratio in HL versus LL). Although pigment ratios in P. globosa clearly responded to the light conditions, the diadinoxanthin–diatoxanthin cycle remained almost unaltered at HL. Total averaged pigment and LHP to C ratios were significantly higher in LL than in HL, reflecting the photoacclimation status of the studied species. By contrast, the same Chl a-normalized ratios were weakly affected by light intensity due to co-variation with Chl a. Based on our data, we suggest that the interpretation of PPC and LHP is highly dependent on biomass normalization (Chl a vs. C).

6.
Hierarchical modeling for extreme values observed over space and time
We propose a hierarchical modeling approach for explaining a collection of spatially referenced time series of extreme values. We assume that the observations follow generalized extreme value (GEV) distributions whose locations and scales are jointly spatially dependent where the dependence is captured using multivariate Markov random field models specified through coregionalization. In addition, there is temporal dependence in the locations. There are various ways to provide appropriate specifications; we consider four choices. The models can be fitted using a Markov Chain Monte Carlo (MCMC) algorithm to enable inference for parameters and to provide spatio–temporal predictions. We fit the models to a set of gridded interpolated precipitation data collected over a 50-year period for the Cape Floristic Region in South Africa, summarizing results for what appears to be the best choice of model.
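As a toy illustration of the GEV building block only (not the authors' hierarchical spatio-temporal model with coregionalized random fields), the sketch below fits site-wise GEV distributions by maximum likelihood with scipy; the simulated maxima and parameter values are placeholders.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Simulated annual precipitation maxima for a few sites (placeholder values).
n_sites, n_years = 5, 50
true_shape = -0.1   # scipy's c = -xi, i.e. the sign convention differs from the GEV shape xi
maxima = genextreme.rvs(true_shape, loc=40.0, scale=10.0,
                        size=(n_sites, n_years), random_state=rng)

# Independent ML fits per site; a hierarchical model would instead tie the
# site-level locations and scales together with a spatial (e.g. Markov random field) prior
# and add temporal dependence in the locations.
for s in range(n_sites):
    c, loc, scale = genextreme.fit(maxima[s])
    print(f"site {s}: shape={-c:+.2f}, location={loc:.1f}, scale={scale:.1f}")
```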

7.
A large-eddy simulation with the transitional structure function (TSF) subgrid model we previously proposed was performed to investigate turbulent flow with thermal influence over an inhomogeneous canopy, represented as alternating large and small roughness elements. The aerodynamic and thermodynamic effects of the layer of large roughness elements were modelled by adding a drag term to the three-dimensional Navier–Stokes equations and a heat source/sink term to the scalar equation, respectively. The layer of small roughness elements was treated with the method described by Moeng (1984, J. Atmos. Sci. 41, 2052–2062) for a homogeneous rough surface. The horizontally averaged statistics, such as the mean vertical profiles of wind velocity and air temperature, are in reasonable agreement with the field observations of Gao et al. (1989, Boundary-Layer Meteorol. 47, 349–377) over a homogeneous canopy. Not surprisingly, the calculated instantaneous velocity and temperature fields show that the roughness elements considerably changed the turbulent structure within the canopy. The adjustment of the mean vertical profiles of velocity and temperature was studied and found to be qualitatively comparable with the theoretical results of Belcher et al. (2003, J. Fluid Mech. 488, 369–398). The urban heat island (UHI) was investigated by imposing a heat source in the region of large roughness elements. An elevated inversion layer, a phenomenon often observed in urban areas (Sang et al., J. Wind Eng. Ind. Aerodyn. 87, 243–258), was successfully simulated above the canopy. The cool island (CI) was also investigated by imposing a heat sink to crudely model the evaporation of the plant canopy. An inversion layer within the canopy was found to be very stable and robust.

8.
Rice's theory for the statistical properties of random noise currents has been employed in the context of concentration fluctuations in dispersing plumes. Within this context, the theory has been extended to calculate the distribution of excursion times above a small threshold for arbitrary spacings between an up-crossing and the successive down-crossing. This approach has then been applied to a second-order stochastic model for the evolution of odour concentrations and their time derivative (the simple model), and to the superstatistical extension of this model [Reynolds (2007) Phys. Fluids]. In agreement with the measurements of Yee and coworkers [Yee et al. (1993) Boundary-Layer Meteorol. 65; Yee et al. (1994) J. Appl. Meteorol. 33], both formulations predict a distribution of excursion times that can be well approximated by a power-law profile with exponent close to −3/2. For the superstatistical model the power-law profile extends over approximately three or more decades; for the simple model this range is smaller. Compared to the simple model, predictions of the superstatistical model are in better agreement with the measurements.
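A small numpy sketch of the excursion-time statistic itself: detect up-crossings and down-crossings of a small threshold in a concentration time series and histogram the excursion durations, whose distribution is the quantity compared against the −3/2 power law. The series here is synthetic noise, not a plume model.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.01                                   # sampling interval (s)
c = np.abs(rng.normal(size=200_000))        # synthetic "concentration" series (placeholder)
threshold = 0.1 * c.mean()                  # small threshold, as in the excursion-time analysis

above = c > threshold
crossings = np.diff(above.astype(int))      # +1 at an up-crossing, -1 at a down-crossing
ups = np.flatnonzero(crossings == 1) + 1
downs = np.flatnonzero(crossings == -1) + 1
if downs.size and ups.size and downs[0] < ups[0]:
    downs = downs[1:]                       # drop a down-crossing with no preceding up-crossing
n = min(ups.size, downs.size)
durations = (downs[:n] - ups[:n]) * dt      # excursion times above the threshold

# Log-spaced histogram; a power law with exponent ~ -3/2 appears as a straight
# line when plotted on log-log axes.
bins = np.logspace(np.log10(dt), np.log10(durations.max()), 30)
hist, edges = np.histogram(durations, bins=bins, density=True)
print(hist)
```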

9.
Knape J, de Valpine P. Ecology, 2012, 93(2): 256–263
We show how a recent framework combining Markov chain Monte Carlo (MCMC) with particle filters (PFMCMC) may be used to estimate population state-space models. With the purpose of utilizing the strengths of each method, PFMCMC explores hidden states by particle filters, while process and observation parameters are estimated using an MCMC algorithm. PFMCMC is exemplified by analyzing time series data on a red kangaroo (Macropus rufus) population in New South Wales, Australia, using MCMC over model parameters based on an adaptive Metropolis-Hastings algorithm. We fit three population models to these data: a density-dependent logistic diffusion model with environmental variance, an unregulated stochastic exponential growth model, and a random-walk model. Bayes factors and posterior model probabilities show that there is little support for density dependence and that the random-walk model is the most parsimonious model. The particle filter Metropolis-Hastings algorithm is a brute-force method that may be used to fit a range of complex population models. Implementation is straightforward and less involved than standard MCMC for many models, and marginal densities for model selection can be obtained with little additional effort. The cost is mainly computational, resulting in long running times that may be improved by parallelizing the algorithm.
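A bare-bones bootstrap particle filter of the kind PFMCMC wraps inside a Metropolis-Hastings loop over the fixed parameters: it returns an estimate of the marginal log-likelihood with the hidden states integrated out. The stochastic exponential-growth dynamics, lognormal observation model, and toy data below are placeholders, not the paper's kangaroo models.

```python
import numpy as np

def pf_loglik(y, r, q, obs_sd, n_part=500, rng=None):
    """Bootstrap particle filter log-likelihood estimate for a stochastic
    exponential growth model observed with lognormal error (placeholder model)."""
    rng = rng or np.random.default_rng()
    x = np.log(np.full(n_part, y[0]))        # initialise log-abundance particles at the first count
    loglik = 0.0
    for obs in y:
        x = x + r + rng.normal(scale=np.sqrt(q), size=n_part)              # process step
        logw = -0.5 * ((np.log(obs) - x) / obs_sd) ** 2 - np.log(obs_sd)   # obs log-density (unnormalised)
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())       # running estimate of log p(y_1:t | r, q, obs_sd)
        idx = rng.choice(n_part, size=n_part, p=w / w.sum())               # multinomial resampling
        x = x[idx]
    return loglik

# In PFMCMC this noisy estimate replaces the intractable likelihood inside the
# Metropolis-Hastings acceptance ratio for (r, q, obs_sd).
y = np.array([2.3, 2.1, 2.8, 3.0, 2.7, 3.4, 3.1])   # toy abundance indices
print(pf_loglik(y, r=0.05, q=0.01, obs_sd=0.2, rng=np.random.default_rng(3)))
```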

10.
A number of newly synthesized phthalidylamines and o-benzoylbenzamide derivatives were evaluated for selected biological activities. The synthesis was accomplished by condensation of 3-acetoxyphthalide 1 with morpholine, piperidine, N,N-diisobutyl- and N,N-dibenzylamine, and piperazine, which afforded the N-(3-phthalidyl)amines 3a–d and 4, respectively, while with N,N-diisopropylamine, o-formyl-N,N-diisopropylbenzamide 5a was formed exclusively. On the other hand, the reaction of 3-acetoxy-3-phenylphthalide 2 with secondary amines afforded the o-benzoylbenzamide derivatives 5b–c and 6 in high yield. The structures of the reaction products were established from their spectral data. These products were screened for antifungal, antibacterial and genotoxic effects. It was found that all tested compounds have antifungal activity. Compounds 1, 2, 3d and 5b were found to be active against Escherichia coli, Bacillus subtilis and Staphylococcus aureus. Genotoxicity assessment using the Ames test showed that compounds 1 and 2 have weak base-pair substitution mutagenicity, while a clear base-pair substitution mutagenic activity was shown by 3a using the TA100 strain of Salmonella typhimurium. Compound 4 showed frameshift mutagenicity, while a weak oxidative mutagenic action was revealed by 6. No change in the mutagenicity of the tested chemicals was observed after using the S9 metabolic activation system.

11.
Ecological Modelling, 2005, 182(2): 183–197
In this paper, we estimate the winter respiration (oxygen depletion per unit area of hypolimnetic surface) in a hyper-eutrophic shallow lake (Tuusulanjärvi) in the northern hemisphere (Finland, northern Europe, latitude 60°26′, longitude 25°03′) under ice-cover periods in the years 1970–2003. We present a dynamic nonlinear model that can be used to predict the oxygen regime in the following years and to dimension the artificial oxygenation capacity needed to prevent fish kills in the lake. Respiration is estimated in a Bayesian framework using a Markov chain Monte Carlo (MCMC) method (an adaptive Metropolis–Hastings algorithm). This allows for analyses and predictions that take into account all the uncertainties in the model and the data, pool information from different sources (laboratory experiments and lake data), and quantify the uncertainties within a full statistical approach. The mean estimated respiration in the study period was 301 ± 105 mg m⁻² d⁻¹, which is at the upper limit of the winter respiration of eutrophic Canadian lakes at the same latitude. The reference respiration rate k (d⁻¹) at 4 °C showed cyclic behaviour with an approximately nine-year cycle and had a statistically significant negative trend throughout the study period. The temperature coefficient and respiration rate of the model proved to be highly correlated and unidentifiable with the given data. Future winters can be predicted using the posterior information coming from past observations; as new observations arrive, they are added to the analysis. The methods are shown to be applicable to the dimensioning of artificial oxygenation devices and to anticipating the need for oxygenation during the winter.

12.
We consider the numerical approaches for the least squares estimation of the parameter vector p in the initial value problem y′ = g(t, y, p), y(t₀) = y₀(p) when observations are available on some or all components of the vector y(t). Special attention is paid to the development of techniques which, although not global, are less sensitive to initial parameter estimates than the standard approach employing the sensitivity equations. Experience indicates that interactive approaches can be very valuable when good starting parameter approximations are unavailable. We describe the main features of our interactive parameter fitting package PARFIT. This package contains standard techniques employing the sensitivity equations as well as special algorithms designed to improve poor parameter estimates. These special algorithms have been selected and developed with user interaction in mind. We describe in detail one special approach designed for the case when observations are not available on all state variables. An example (using computer generated observations) is presented to illustrate this approach. Finally, the power of an interactive approach is demonstrated with two examples involving attempts to model physically observed phenomena.
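A compact sketch of the standard (non-interactive) approach the abstract contrasts against: simulate y′ = g(t, y, p) and minimise the squared residuals to observations, here with scipy's solver and finite-difference Jacobians rather than the sensitivity equations or PARFIT's interactive algorithms. The logistic model, data, and starting values are invented for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def g(t, y, p):
    """Toy dynamics: logistic growth, p = (growth rate r, capacity K)."""
    r, K = p
    return [r * y[0] * (1.0 - y[0] / K)]

t_obs = np.linspace(0.0, 10.0, 15)
# Synthetic observations generated from "true" parameters (0.8, 5.0) plus noise.
truth = solve_ivp(g, (0.0, 10.0), [0.2], t_eval=t_obs, args=((0.8, 5.0),)).y[0]
y_obs = truth + np.random.default_rng(4).normal(scale=0.05, size=t_obs.size)

def residuals(p):
    sol = solve_ivp(g, (0.0, 10.0), [0.2], t_eval=t_obs, args=(tuple(p),))
    return sol.y[0] - y_obs

fit = least_squares(residuals, x0=[0.3, 3.0])   # deliberately poor starting guess
print("estimated (r, K):", fit.x)
```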

13.
The numerical time-dependent three-dimensional model of heavy gas dispersion in the atmospheric boundary layer [Kovalets, I.V. and Maderich, V.S.: 2001, Int. J. Fluid Mech. Res. 30, 410–429] has been improved by parameterizing momentum and heat fluxes at the Earth's surface using Monin–Obukhov similarity theory. Three parameterizations of heat exchange with the surface were considered: (A) the formula of Yaglom, A.M. and Kader, B.A. [1974, J. Fluid Mech. 62, 601–623] for forced convection, (B) an interpolation formula for mixed convection and (C) the similarity relationship for mixed convection of Kader, B.A. and Yaglom, A.M. [1990, J. Fluid Mech. 212, 637–662]. Two case studies were considered. In the first, based on the experiment of Zhu et al. [J. Hazard. Mater. 62, 161–186], the interaction of an isothermal heavy gas plume with an atmospheric surface layer was simulated. It was found that stable stratification in the cloud essentially suppresses the turbulence in the plume, reducing the turbulent momentum flux to as little as 1/5 of the undisturbed value. This reduction strongly influences the velocities in the atmospheric boundary layer above the cloud, increasing the mean velocity by a factor of up to 1.3 in comparison with the undisturbed value. A simulation of cold heavy gas dispersion was carried out in the second case, based on the field experiment BURRO 8. It was shown that both forced and free convection under moderate wind speeds significantly influence the plume. The relative rms and bias errors in predicting the plume's height were σH ≈ 30% and εH = −10%, respectively, for parameterization B, while for A and C the errors were σH ≈ 80% and εH ≈ −65%. It is therefore advised to use the simple parameterization B in dense gas dispersion models.

14.
The effects of light exposure on the photosynthetic activity of kleptoplasts were studied in the sacoglossan mollusc Elysia viridis. The photosynthetic activity of ingested chloroplasts was assessed in vivo by non-destructively measuring photophysiological parameters using pulse amplitude modulation (PAM) fluorometry. Animals kept under starvation were exposed to two contrasting light conditions, 30 μmol photons m⁻² s⁻¹ (low light, LL) and 140 μmol photons m⁻² s⁻¹ (high light, HL), and changes in photosynthetic activity were monitored by measuring the maximum quantum yield of photosystem II (PSII), Fv/Fm, the minimum fluorescence, Fo, related to chlorophyll a content, and by measuring rapid light-response curves (RLC) of relative electron transport rate (rETR). RLCs were characterised by the initial slope of the curve, αRLC, related to the efficiency of light capture, and the maximum rETR level, rETRm,RLC, determined by the carbon-fixation metabolism. Starvation induced a decrease in all photophysiological parameters. However, the retention of photosynthetic activity (number of days for which Fv/Fm > 0), as well as the rate and the patterns of its decrease over time, varied markedly with light exposure. Under HL conditions, a rapid, exponential decrease was observed for Fv/Fm, αRLC and rETRm,RLC, with Fo not showing any consistent trend of variation, and retention times ranged between 6 and 15 days. These results suggested that the retention of chloroplast functionality is limited by photoinactivation of the PSII reaction center protein D1. In contrast, under LL conditions, a slower decrease in all parameters was found, with retention times varying from 15 to 57 days. Fv/Fm, αRLC and rETRm,RLC exhibited a bi-phasic pattern composed of a long phase of slow decrease in values followed by a rapid decline, whilst Fo decayed exponentially. These results were interpreted as resulting from lower rates of D1 photoinactivation under low light and from the gradual decrease in carbon provided by photosynthesis due to the reduction of functional photosynthetic units.
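A sketch of how αRLC and rETRm can be extracted from a rapid light curve, assuming the common saturating-exponential light-response model of Webb et al. (1974); the PAR steps and rETR values are invented, and the abstract does not state which RLC model the authors actually fitted.

```python
import numpy as np
from scipy.optimize import curve_fit

def rlc(par, alpha, retr_max):
    """Saturating-exponential light-response model (Webb et al. 1974 form)."""
    return retr_max * (1.0 - np.exp(-alpha * par / retr_max))

# Hypothetical rapid light curve: actinic PAR steps and measured relative ETR.
par = np.array([0, 25, 50, 100, 150, 250, 350, 500], dtype=float)
retr = np.array([0.0, 6.5, 11.8, 19.0, 23.5, 27.8, 29.4, 30.1])

(alpha, retr_max), _ = curve_fit(rlc, par, retr, p0=[0.3, 25.0])
print(f"alpha_RLC = {alpha:.3f}, rETRm_RLC = {retr_max:.1f}")
```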

15.
The curculionid beetle Naupactus bipes (Germar, 1824) (Coleoptera: Curculionidae: Brachycerinae) has shown a feeding preference for leaves of Piper gaudichaudianum, demonstrating an unexpected specificity for an insect considered to be a generalist. The leaves of P. gaudichaudianum contain the prenylated chromenes gaudichaudianic acid (4, major compound) and its methyl ester (5), in addition to a chromene (3) lacking one prenyl residue. In addition to 4, roots contain the chromone methyl ester (1) and methyl taboganate (2, major compound). Feeding on roots, larvae of N. bipes sequester exclusively the root-specific compounds 1 and 2. Adult beetles sequester the leaf-specific chromenes 3 and 4, but were found to also contain compounds 1 and 2, which are absent in leaves. Therefore, it is suggested that 1 and 2 are sequestered by larvae and can be found in the body of adult insects after long-term storage. In addition, 3 and 4, the major compounds in leaves, were found to be associated with the eggs.

16.
In this paper we make use of some stochastic volatility models to analyse the behaviour of a weekly ozone average measurements series. The models considered here have been used previously in problems related to financial time series. Two models are considered and their parameters are estimated using a Bayesian approach based on Markov chain Monte Carlo (MCMC) methods. Both models are applied to the data provided by the monitoring network of the Metropolitan Area of Mexico City. The selection of the best model for that specific data set is performed using the Deviance Information Criterion and the Conditional Predictive Ordinate method.
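For the model-selection step, a small sketch of how the Deviance Information Criterion is computed from MCMC output (DIC = mean deviance + p_D, with p_D = mean deviance minus the deviance at the posterior mean); the Gaussian likelihood and the stand-in posterior draws below are placeholders, not the paper's stochastic volatility models.

```python
import numpy as np

def dic(loglik_draws, loglik_at_posterior_mean):
    """DIC = posterior mean deviance + effective number of parameters p_D."""
    dbar = -2.0 * np.mean(loglik_draws)        # posterior mean deviance
    dhat = -2.0 * loglik_at_posterior_mean     # deviance at the posterior mean of the parameters
    p_d = dbar - dhat
    return dbar + p_d, p_d

# Toy example: normal model for weekly ozone averages with unknown mean mu.
rng = np.random.default_rng(5)
y = rng.normal(60.0, 8.0, size=100)            # fake weekly averages
mu_draws = rng.normal(y.mean(), 8.0 / np.sqrt(y.size), size=2000)   # stand-in posterior draws

def loglik(mu):
    return np.sum(-0.5 * np.log(2 * np.pi * 8.0**2) - (y - mu) ** 2 / (2 * 8.0**2))

ll_draws = np.array([loglik(m) for m in mu_draws])
print(dic(ll_draws, loglik(mu_draws.mean())))  # lower DIC indicates the preferred model
```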

17.
Traditional Markov chain Monte Carlo (MCMC) sampling of hidden Markov models (HMMs) involves latent states underlying an imperfect observation process, and generates posterior samples for top-level parameters concurrently with nuisance latent variables. When potentially many HMMs are embedded within a hierarchical model, this can result in prohibitively long MCMC runtimes. We study combinations of existing methods, which are shown to vastly improve computational efficiency for these hierarchical models while maintaining the modeling flexibility provided by embedded HMMs. The methods include discrete filtering of the HMM likelihood to remove latent states, reduced data representations, and a novel procedure for dynamic block sampling of posterior dimensions. The first two methods have been used in isolation in existing application-specific software, but are not generally available for incorporation in arbitrary model structures. Using the NIMBLE package for R, we develop and test combined computational approaches using three examples from ecological capture–recapture, although our methods are generally applicable to any embedded discrete HMMs. These combinations provide several orders of magnitude improvement in MCMC sampling efficiency, defined as the rate of generating effectively independent posterior samples. In addition to being computationally significant for this class of hierarchical models, this result underscores the potential for vast improvements to MCMC sampling efficiency which can result from combinations of known algorithms.
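A minimal sketch of the first ingredient, filtering the discrete latent states out of an HMM likelihood with the forward algorithm, so that MCMC only needs to sample the top-level parameters. The three-state transition and emission matrices are arbitrary illustrative values in plain Python, not NIMBLE code.

```python
import numpy as np

def hmm_loglik(obs, trans, emit, init):
    """Log-likelihood of a discrete HMM with the latent states summed out
    (forward algorithm with per-step normalisation for numerical stability)."""
    alpha = init * emit[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Example: 3 latent states (e.g. alive at site A, alive at site B, dead) and
# 2 possible observations (detected, not detected). Values are illustrative only.
trans = np.array([[0.80, 0.10, 0.10],
                  [0.05, 0.85, 0.10],
                  [0.00, 0.00, 1.00]])
emit = np.array([[0.6, 0.4],
                 [0.5, 0.5],
                 [0.0, 1.0]])
init = np.array([0.5, 0.5, 0.0])
capture_history = [0, 0, 1, 1, 1]   # 0 = detected, 1 = not detected
print(hmm_loglik(capture_history, trans, emit, init))
```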

18.
An approach for defining the quality of surface sediments of limited areas in terms of heavy metal contents is proposed. Sediments were sampled on a two-dimensional grid in order to check for possible different sources of pollution in the case study, a harbour zone. Non-residual metals were determined by ICP-AES in cold dilute hydrochloric acid leachates of the sediments. An "enrichment factor", r, can be computed for each metal: metals with r values exceeding unity can be considered as indicators of metal pollution. A "total enrichment factor", R, was proposed in order to assess the degree of pollution of the sediments at each site. R is a dimensionless value that accounts for the presence of metals that exceed threshold values determined by background concentrations.
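A small worked example of the per-metal enrichment factor and a site-level summary. The abstract does not give the exact formula for the total enrichment factor R, so the aggregation below (summing only the excess above background of the metals that exceed it) is an assumption for illustration, as are the metals, concentrations, and background values.

```python
import numpy as np

# Hypothetical leachate concentrations (mg/kg) at one site and local background values.
metals =     ["Cu", "Pb", "Zn", "Cd", "Ni"]
site =       np.array([85.0, 120.0, 310.0, 1.8, 22.0])
background = np.array([30.0,  40.0, 150.0, 0.4, 25.0])

r = site / background          # per-metal enrichment factor
polluting = r > 1.0            # r > 1 flags the metal as a pollution indicator

# Assumed form of the total enrichment factor: accumulate only the excess above
# background (r_i - 1) of the metals that exceed it, which keeps R dimensionless
# and equal to zero for an unpolluted site.
R = np.sum(r[polluting] - 1.0)

for m, ri in zip(metals, r):
    print(f"{m}: r = {ri:.2f}")
print(f"total enrichment factor R = {R:.2f}")
```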

19.
The recent global financial crisis has highlighted the need for balanced and efficient investments in the reduction of the greenhouse effect caused by emissions of CO2 on a global scale. In a previous paper, the authors proposed a mathematical model describing the dynamic relation of CO2 emission with investment in reforestation and clean technology, together with an efficient allocation of resources to reduce the greenhouse effect. Here, this model is used to provide estimates of the investments needed in land reforestation and in the adoption of clean technologies for an optimum emission and abatement of CO2 over the period 1996–2014. The required investments are computed to minimize deviations with respect to the emission targets proposed in the Kyoto Protocol for European countries. The emission target can be achieved by 2014 with investments in reforestation peaking in 2004, and a reduction of the expected GDP of 42% relative to 2006. Investments in clean technology should increase between 2008 and 2010, with maximum transfer figures around 70 million US dollars. Total (cumulative) costs are, however, relatively high, depending on the price of carbon abatement and the rate at which the expected CO2 concentration in the atmosphere should be reduced. The results highlight the advantages for policy makers of being able to manage investments in climate policy more efficiently, controlling optimum transfers based on a portfolio of actions that tracks a pre-defined CO2 concentration target.

20.
This paper presents statistical methodology for analyzing longitudinal binary responses in which a sudden change in the response occurs over time. Probability plots, transition matrices and change-point models, as well as more advanced techniques such as generalized autoregression models and hidden Markov chains, are presented and applied to a study of the activity of Rhipicephalus appendiculatus, the major vector of Theileria parva, the agent of a fatal disease in cattle. This study presents individual measurements on female R. appendiculatus as they terminate their diapause (resting state) and become active. Understanding activity patterns is very important for a better understanding of the ecology of R. appendiculatus. The model indicates that activity and non-activity act in an absorbing way, meaning that once a tick becomes active it tends to remain active. The change-point model estimates that the sudden change in activity happens on December 10. The reaction of ticks to acceleration and to changes in rainfall and temperature indicates that ticks can sense climatic changes. The study revealed underlying, not visually observable, states during the diapause development of the adult tick of R. appendiculatus. These states could be related to phases during the dynamic event of diapause development and post-diapause activity in R. appendiculatus.
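A tiny sketch of the change-point idea for a binary activity series: profile the likelihood of two Bernoulli activity probabilities over every candidate change day and pick the maximiser. The dates and data are simulated placeholders, and the paper's actual models additionally handle covariates and more elaborate dependence structures.

```python
import numpy as np

rng = np.random.default_rng(6)
# Simulated daily activity (1 = active) for one tick over 120 days,
# with the activity probability jumping at day 70 (placeholder data).
true_cp = 70
y = np.concatenate([rng.binomial(1, 0.05, true_cp),
                    rng.binomial(1, 0.70, 120 - true_cp)])

def bernoulli_loglik(x):
    p = x.mean()
    p = min(max(p, 1e-9), 1 - 1e-9)   # guard against log(0)
    return x.sum() * np.log(p) + (x.size - x.sum()) * np.log(1 - p)

# Profile log-likelihood over candidate change points (split before day k).
candidates = range(1, y.size)
ll = [bernoulli_loglik(y[:k]) + bernoulli_loglik(y[k:]) for k in candidates]
cp_hat = list(candidates)[int(np.argmax(ll))]
print("estimated change point: day", cp_hat)
```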
