Similar Literature
20 similar records found (search time: 93 ms)
1.
2.
The rate of northern migration of the Africanized honey bee (AHB) in the United States has recently slowed dramatically. This paper investigates the impact of migration on the equilibrium size distributions of a particular stochastic multipopulation model, namely a coupled logistic power law model. The bivariate equilibrium size distribution of the model is derived and illustrated with parameter values used to describe AHB population dynamics. In the model, the difference between the equilibrium sizes of the two populations is a measure of the effect of migration. The distribution of this difference may be approximated by a normal distribution. The mean and variance parameters for the normal are predicted accurately by a second-order regression model based on the migration rate and the maximum size of the first population. The methodology is general, and should be useful in studying the migration effect in many other applications with one-way migration.

3.
Stochastic matrix population models are often used to help guide the management of animal populations. For a long-lived species, environmental stochasticity in adult survival will play an important role in determining outcomes from the model. One of the most common methods for modelling such stochasticity is to randomly select the value of adult survival for each year from a distribution with a specified mean and standard deviation. We consider four distributions that can provide realistic models for stochasticity in adult survival. For values of the mean and standard deviation that cover the range we would expect for long-lived species, all four distributions have similar shapes, with small differences in their skewness and kurtosis. This suggests that many of the outcomes from a population model will be insensitive to the choice of distribution, assuming that distribution provides a realistic model for environmental stochasticity in adult survival. For a generic age-structured model, the estimate of the long-run stochastic growth rate is almost identical for the four distributions, across this range of values for the mean and standard deviation. Model outcomes based on short-term projections, such as the probability of a decline over a 20-year period, are more sensitive to the choice of distribution.
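As a minimal illustration of the approach described above (not the paper's own code), the sketch below draws adult survival each year from a beta distribution with a specified mean and standard deviation, projects a generic two-stage matrix model, and estimates the long-run stochastic growth rate as the time-average of log annual growth. All parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

def beta_params(mean, sd):
    # Method-of-moments conversion from (mean, sd) to beta shape parameters.
    common = mean * (1 - mean) / sd ** 2 - 1
    return mean * common, (1 - mean) * common

def stochastic_growth_rate(mean_s=0.9, sd_s=0.05, fecundity=0.3,
                           juvenile_s=0.5, years=5000):
    # Long-run stochastic growth rate: time-average of log annual growth,
    # with adult survival redrawn each year from the beta distribution.
    a, b = beta_params(mean_s, sd_s)
    n = np.array([0.5, 0.5])               # juveniles, adults (sums to 1)
    log_growth = np.empty(years)
    for t in range(years):
        A = np.array([[0.0, fecundity],
                      [juvenile_s, rng.beta(a, b)]])
        n = A @ n
        total = n.sum()
        log_growth[t] = np.log(total)
        n = n / total                      # renormalise; only structure carries over
    return float(log_growth.mean())

print(round(stochastic_growth_rate(), 3))
```

Swapping the beta for another distribution with the same mean and standard deviation (e.g. a rescaled logit-normal) would test the paper's insensitivity claim directly.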

4.
Recruitment data for 18 marine fish stocks are smoothed using 10 parametric families of probability distributions. Comparative fit of the 10 families is assessed by means of the maximized log-likelihood. Results indicate that the gamma distribution provides an overall good fit in the right-hand tail of the data, but that some adjustment to the gamma distribution is called for in the left-hand tail. Weight functions and weighted distributions are suggested as one means of achieving the needed adjustment.
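A hedged sketch of the comparative-fit step: fit candidate families by maximum likelihood and rank them by maximized log-likelihood. The data here are a synthetic stand-in for a recruitment series, and the three families are only a subset of the ten considered in the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in for a recruitment series (units arbitrary).
recruits = rng.gamma(shape=2.0, scale=50.0, size=200)

# Three candidate parametric families, fitted by maximum likelihood
# with the location parameter pinned at zero.
families = {"gamma": stats.gamma, "lognorm": stats.lognorm,
            "weibull": stats.weibull_min}

loglik = {}
for name, dist in families.items():
    params = dist.fit(recruits, floc=0)
    loglik[name] = float(dist.logpdf(recruits, *params).sum())

# Rank families by maximized log-likelihood (higher is better).
best = max(loglik, key=loglik.get)
print(best, {k: round(v, 1) for k, v in loglik.items()})
```

Because all three families here have the same number of free parameters, the raw log-likelihoods are directly comparable; with unequal parameter counts one would compare AIC instead.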

5.
Testing the Accuracy of Population Viability Analysis   (cited by 3: 0 self-citations, 3 by others)

6.
Natural selection can be considered as optimising fitness. Because ‘mean’ fitness is maximized with respect to the genotypes of carriers, traditional theory can be viewed as a statistical theory of natural selection. Probabilistic optimisation is a way to incorporate uncertainty into optimality analyses of natural selection, where environmental uncertainty is expressed as a probability distribution. Its canonical form is a weighted average of fitness with respect to a given probability distribution. This concept should be applicable to three different levels of uncertainty: (1) behavioural variation within an individual, (2) variation among individuals within a generation, and (3) temporal change over generations (geometric mean fitness). The first two levels are straightforward, with much empirical support, but the last, geometric mean fitness, has not been well understood. Here we study geometric mean fitness by taking its logarithm, so that log growth rates become the fitness values. After a further transformation of the log growth rates, fitness becomes a linear function of them; a simple average over these distributions then serves as the fitness measure across generations, and variance discounting or consideration of the entire probability distribution becomes unnecessary. We discuss some characteristic features of probabilistic optimisation in general. Our view can be considered a probabilistic view of natural selection, in contrast with the traditional statistical view of natural selection.
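The key point about geometric mean fitness can be shown with a toy calculation (not from the paper): a genotype with the higher arithmetic mean growth can still lose once fitness is averaged on the log scale.

```python
import numpy as np

# Annual growth factors across generations for two genotypes (invented):
# A grows steadily; B has the higher arithmetic mean but is volatile.
w_A = np.array([1.10, 1.10, 1.10, 1.10])
w_B = np.array([1.60, 0.75, 1.60, 0.75])

def geometric_mean_fitness(w):
    # On the log scale, fitness across generations is just the arithmetic
    # average of log growth rates - no separate variance discount needed.
    return float(np.exp(np.mean(np.log(w))))

print(w_B.mean() > w_A.mean())        # B wins on arithmetic mean fitness
print(geometric_mean_fitness(w_A), geometric_mean_fitness(w_B))
```

Here B's geometric mean fitness is sqrt(1.60 × 0.75) ≈ 1.095, below A's 1.10, so A is favoured over generations despite B's larger arithmetic mean.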

7.
Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation.
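As background to the log-series side of the comparison, here is a small sketch of Fisher's classic log series (standard theory, not the paper's MCMC fit): the expected number of species represented by exactly n individuals is α·xⁿ/n, and summing over n recovers the familiar expression for total species richness.

```python
import numpy as np

def log_series_counts(alpha, x, n_max):
    # Fisher's log series: the expected number of species represented by
    # exactly n individuals is alpha * x**n / n, for 0 < x < 1.
    n = np.arange(1, n_max + 1)
    return alpha * x ** n / n

alpha, x = 20.0, 0.99                     # illustrative parameter values
counts = log_series_counts(alpha, x, 10000)
# Total expected species richness: S = -alpha * ln(1 - x).
print(round(float(counts.sum()), 1), round(float(-alpha * np.log(1 - x)), 1))
```

The singleton class is always the largest under this model, which is why the log series behaves as a sampling distribution: rarer classes dominate at small sample sizes.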

8.
In this work, a mathematical model of concentration distribution is developed for a steady, uniform open channel turbulent flow laden with sediments, incorporating the effect of secondary current through the velocity distribution together with the stratification effect due to the presence of sediments. The effect of particle-particle interaction at the reference level and the effects of incipient motion probability, non-ceasing probability and pick-up probability of the sediment particles at the reference concentration are taken into account. The proposed model is compared with the Rouse equation as well as verified with existing experimental data. Good agreement between computed values and experimental data indicates that secondary current influences the suspension of particles significantly. The direction and magnitude (strength) of the secondary current lead to different patterns of concentration distribution, and theoretical analysis shows that the type II profile (where the maximum concentration appears at a significant height above the channel bed surface) always corresponds to an upward direction and greater magnitude of secondary current.

9.
An analysis of concentration time series measured in a boundary-layer wind tunnel at the University of Hamburg is presented. The measurements were conducted with a detailed aerodynamic model of the Oklahoma City (OKC) central business district (CBD) at the scale of 1:300 and were part of the Joint Urban 2003 (JU2003) project. Concentration statistics, as well as concentration probability density (PDF) and exceedance probability (EDF) functions were computed for street- and roof-level sites for three different wind directions. Taking into account the different length scales and wind speeds in the wind-tunnel (WT) and full-scale experiments, dimensionless concentrations and a dimensionless time scale are computed for the comparison with data from the JU2003 full-scale tracer experiments, conducted in OKC in 2003. Using such dimensionless time, the WT time series cover a ~20 times longer time span than the JU2003 full-scale time series, which are analysed in detail in an accompanying, first part of this paper. The WT time series are thus divided into 20 consecutive blocks of equal length and the statistical significance of parameters based on relatively short records is assessed by studying the variability of the concentration statistics and probability functions for the different blocks. In particular at sites closer to the plume edge, the results for the individual blocks vary significantly and at such sites statistics from short records are not very representative. While the location of three sampling sites in the WT closely matched the sites during the full-scale experiments, the prevailing wind directions during the JU2003 releases were not exactly matched. The comparison between full-scale and WT concentration parameters should thus primarily be interpreted in a qualitative rather than direct quantitative sense. 
Given the differences in mean wind directions and concerns about the representativeness of full-scale concentration statistics, the WT and full-scale results compared well. The 98th percentile concentrations for almost all full-scale releases analysed are within the scatter of the percentiles observed in the block analysis of the WT time series. Furthermore, the concentration percentiles appear linearly correlated with the fluctuation intensities, and the linear relationships determined in the wind tunnel agree well with the full-scale results.

10.
Spatial vegetation patterns are recognized as sources of valuable information that can be used to infer the state and functionality of semiarid ecosystems, particularly in the context of both climate and land use change. Recent studies have suggested that the patch-size distribution of vegetation in drylands can be described using power-law metrics, and that these scale-free distributions deviate from power-law linearity with characteristic scale lengths under the effects of increasing aridity or human disturbance, providing an early sign of desertification. These findings have been questioned by several modeling approaches, which have identified the presence of characteristic scale lengths in the patch-size distribution of semiarid periodic landscapes. We analyze the relationship between fragmentation of vegetation patterns and their patch-size distributions in semiarid landscapes showing different degrees of periodicity (i.e., banding). Our assessment is based on the study of vegetation patterns derived from remote sensing in a series of semiarid Australian Mulga shrublands subjected to different disturbance levels. We use the patch-size probability density and cumulative probability distribution functions from both nondirectional and downslope analyses of the vegetation patterns. Our results indicate that the shape of the patch-size distribution of vegetation changes with the methodology of analysis applied and specific landscape traits, breaking the universal applicability of the power-law metrics. Characteristic scale lengths are detected in (quasi) periodic banded ecosystems when the methodology of analysis accounts for critical landscape anisotropies, using downslope transects in the direction of flow paths. In addition, a common signal of fragmentation is observed: the largest vegetation patches become increasingly less abundant under the effects of disturbance.
This effect also explains deviations from power-law behavior in disturbed vegetation which originally showed scale-free patterns. Overall, our results emphasize the complexity of structure assessment in dryland ecosystems, while recognizing the usefulness of the patch-size distribution of vegetation for monitoring semiarid ecosystems, especially through the cumulative probability distributions, which showed high sensitivity to fragmentation of the vegetation patterns. We suggest that preserving large vegetation patches is a critical task for the maintenance of the ecosystem structure and functionality.
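A sketch of why the loss of the largest patches shows up in power-law metrics (illustrative only; all parameters invented): truncating the heavy tail of a scale-free patch-size sample inflates the maximum-likelihood estimate of the power-law exponent.

```python
import numpy as np

rng = np.random.default_rng(7)

def powerlaw_sample(alpha, xmin, size, rng):
    # Inverse-CDF sampling of a continuous power law p(x) ~ x**-alpha, x >= xmin.
    return xmin * (1 - rng.random(size)) ** (-1 / (alpha - 1))

def mle_alpha(x, xmin):
    # Continuous power-law exponent by maximum likelihood (Clauset-style).
    x = x[x >= xmin]
    return 1 + len(x) / np.sum(np.log(x / xmin))

patches = powerlaw_sample(alpha=2.0, xmin=1.0, size=5000, rng=rng)
# "Disturbance" thins the largest patches, truncating the heavy tail.
disturbed = patches[patches < np.quantile(patches, 0.99)]

print(round(mle_alpha(patches, 1.0), 2), round(mle_alpha(disturbed, 1.0), 2))
```

In practice one would also test whether a truncated power law fits the disturbed sample better than a pure power law, which is the deviation the paper associates with fragmentation.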

11.
We develop a stochastic model for the time-evolution of scalar concentrations and temporal gradients in concentration experienced by observers moving within inhomogeneous plumes that are dispersing within turbulent flows. In this model, scalar concentrations and their gradients evolve jointly as a Markovian process. Underlying the model formulation is a natural generalisation of Thomson’s well mixed condition [Thomson DJ (1987) J Fluid Mech 180:529–556]. As a consequence model outputs are necessarily compatible with statistical properties of scalars observed in experiment that are used here as model input. We then use the model to examine how insects aloft within the atmospheric boundary-layer can locate odour sources by modulating their flight patterns in response to odour cues. Mechanisms underlying odour-mediated flights have been studied extensively at laboratory-scale but an understanding of these flights over landscape scales is still lacking. Insect flights are simulated by combining the stochastic model with a simple model of insect olfactory response. These simulations show the strong influence of wind speed on the distributions of the times taken by insects to locate the source. In accordance with experimental observations [Baker TC, Vickers NJ (1997) In: Insect pheromone research: new directions, pp 248–264; Mafra-Neto A, Cardé RT (1994) Nature 369:142–144], flight patterns are predicted to become straighter and shorter, and source location is predicted to become more likely as the mean wind speed increases. The most probable arrival time to the source decreases with the mean wind speed. It is shown that scale-free movement patterns arising from olfactory-driven foraging stem directly from the power-law distribution of concentration excursion times above/below a threshold level and are robust with respect to variations in Reynolds number. 
Flight lengths are well represented by a power law distribution in agreement with the observed patterns of foraging bumblebees [Heinrich B (1979) Oecologia 40(3):235–245].

12.
A frequent assumption in environmental risk assessment is that the underlying distribution of an analyte concentration is lognormal. However, the distribution of a random variable whose log has a t-distribution has infinite mean. Because of the proximity of the standard normal and t-distribution, this suggests that a distribution such as the gamma or truncated normal, with smaller right tail probabilities, might make a better statistical model for mean estimation than the lognormal. In order to assess the effect of departures from lognormality on lognormal-based statistics, we simulated complete lognormal, truncated normal, and gamma data for various sample sizes and coefficients of variation. In these cases, departures from lognormality were not easily detected with the Shapiro-Wilk test. Various lognormal-based estimates and tests were compared with alternative methods based on the ordinary sample mean and standard error. The examples were also considered in the presence of random left censoring, with the mean and standard error of the product limit estimate replacing the ordinary sample mean and standard error. The results suggest that in the estimation of, or tests about, a mean, if the assumption of lognormality is at all suspect, then lognormal-based approaches may not be as good as the alternative methods.
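The simulation design can be sketched as follows (a simplified stand-in for the paper's study, with invented parameters): when the data are actually gamma, the lognormal-based point estimate of the mean is biased upward relative to the ordinary sample mean.

```python
import numpy as np

rng = np.random.default_rng(3)
true_mean = 2.0 * 5.0            # gamma(shape=2, scale=5) has mean 10
n, reps = 30, 2000

bias_ln, bias_sm = [], []
for _ in range(reps):
    x = rng.gamma(2.0, 5.0, size=n)
    logs = np.log(x)
    # Lognormal-based point estimate of the mean, exp(m + s^2/2),
    # here misspecified because the data are gamma, not lognormal.
    est_ln = np.exp(logs.mean() + logs.var(ddof=1) / 2)
    bias_ln.append(est_ln - true_mean)
    bias_sm.append(x.mean() - true_mean)

print(round(float(np.mean(bias_ln)), 2), round(float(np.mean(bias_sm)), 2))
```

The sample mean stays essentially unbiased under misspecification, which is the core of the paper's recommendation to prefer sample-mean-based methods when lognormality is suspect.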

13.
Kodell and West (1993) describe two methods for calculating pointwise upper confidence limits on the risk function with normally distributed responses and using a certain definition of adverse quantitative effect. But Banga et al. (2000) have shown that these normal theory methods break down when applied to skew data. We accordingly develop a risk analysis model and associated likelihood-based methodology when the response follows either a gamma or reciprocal gamma distribution. The model supposes that the shape (index) parameter k of the response distribution is held fixed while the logarithm of the scale parameter is a linear model in terms of the dose level. Existence and uniqueness of the maximum likelihood estimates is established. Asymptotic likelihood-based upper and lower confidence limits on the risk are solutions of the Lagrange equations associated with a constrained optimization problem. Starting values for an iterative solution are obtained by replacing the Lagrange equations by the lowest order terms in their asymptotic expansions. Three methods are then compared for calculating confidence limits on the risk: (i) the aforementioned starting values (LRAL method), (ii) full iterative solution of the Lagrange equations (LREL method), and (iii) bounds obtained using approximate normality of the maximum likelihood estimates with standard errors derived from the information matrix (MLE method). Simulation is used to assess coverage probabilities for the resulting upper confidence limits when the log of the scale parameter is quadratic in the dose level. Results indicate that coverage for the MLE method can be off by as much as 15 percentage points and converges very slowly to nominal coverage levels as the sample size increases.
Coverage for the LRAL and LREL methods, on the other hand, is close to nominal levels unless (a) the sample size is small, say N < 25, (b) the index parameter is small, say k ≤ 1, and (c) the direction of adversity is to the left for the gamma distribution or to the right for the reciprocal gamma distribution.

14.
This study attempts to improve upon statistical downscaling (SD) models based on the classical approach using canonical correlation analysis, in order to generate temperature scenarios over Greece. By considering the long-term trends of the predictor variables (1,000–500 hPa thickness geopotential heights, from NCEP data) and the predictand variables (observed mean maximum summer temperatures over Greece), a new SD model is constructed. Regression models using generalized least squares estimators are developed in order to eliminate the trends within the time series. The advantages of the suggested method over the classical one are quantified in terms of several distinct performance criteria, e.g., the mean squared error of the downscaled values relative to the observations. Finally, the suggested SD models are used to evaluate the effects of a future climate scenario (IPCC-SRES A2) on mean maximum summer temperatures over Greece. The results of the climate projection indicate a temperature increase for the period 2070–2100 that is smaller than the corresponding increase from the classical approach.

15.
Consider a lattice of locations in one dimension at which data are observed. We model the data as a random hierarchical process. The hidden process is assumed to have a (prior) distribution that is derived from a two-state Markov chain. The states correspond to the mean values (high and low) of the observed data. Conditional on the states, the observations are modelled, for example, as independent Gaussian random variables with identical variances. In this model, there are four free parameters: the Gaussian variance, the high and low mean values, and the transition probability in the Markov chain. A parametric empirical Bayes approach requires estimation of these four parameters from the marginal (unconditional) distribution of the data and we use the EM-algorithm to do this. From the posterior of the hidden process, we use simulated annealing to find the maximum a posteriori (MAP) estimate. Using a Gibbs sampler, we also obtain the maximum marginal posterior probability (MMPP) estimate of the hidden process. We use these methods to determine where change-points occur in spatial transects through grassland vegetation, a problem of considerable interest to plant ecologists.
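A simplified sketch of the MAP step (assuming the four parameters are already known, whereas the paper first estimates them by EM and then finds the MAP by simulated annealing): with two states and Gaussian noise, the MAP state sequence of this hidden Markov model can be computed exactly with the Viterbi recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a transect: a two-state Markov chain of low/high mean levels,
# observed with independent Gaussian noise (all parameter values invented).
p_stay, sigma = 0.95, 1.0
means = np.array([0.0, 3.0])
states = [0]
for _ in range(199):
    states.append(states[-1] if rng.random() < p_stay else 1 - states[-1])
states = np.array(states)
y = means[states] + rng.normal(0.0, sigma, size=states.size)

def viterbi(y, means, sigma, p_stay):
    # Exact MAP state sequence for known parameters.
    n, k = len(y), len(means)
    log_trans = np.log(np.array([[p_stay, 1 - p_stay],
                                 [1 - p_stay, p_stay]]))
    log_emit = -0.5 * ((y[:, None] - means[None, :]) / sigma) ** 2
    score = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    score[0] = np.log(0.5) + log_emit[0]
    for t in range(1, n):
        cand = score[t - 1][:, None] + log_trans   # cand[i, j]: state i -> j
        back[t] = cand.argmax(axis=0)
        score[t] = cand.max(axis=0) + log_emit[t]
    path = np.empty(n, dtype=int)
    path[-1] = int(score[-1].argmax())
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

map_states = viterbi(y, means, sigma, p_stay)
print(round(float((map_states == states).mean()), 2))
```

Change-points are then simply the positions where the MAP sequence switches state; the paper's simulated-annealing and Gibbs-sampler machinery becomes important when the parameters themselves are uncertain.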

16.
Obtaining Environmental Favourability Functions from Logistic Regression   (cited by 6: 0 self-citations, 6 by others)
Logistic regression is a statistical tool widely used for predicting species’ potential distributions starting from presence/absence data and a set of independent variables. However, logistic regression equations compute probability values based not only on the values of the predictor variables but also on the relative proportion of presences and absences in the dataset, which does not adequately describe the environmental favourability for or against species presence. A few strategies have been used to circumvent this, but they usually imply an alteration of the original data or the discarding of potentially valuable information. We propose a way to obtain from logistic regression an environmental favourability function whose results are not affected by an uneven proportion of presences and absences. We tested the method on the distribution of virtual species in an imaginary territory. The favourability models yielded similar values regardless of the variation in the presence/absence ratio. We also illustrate the method with the distribution of the Pyrenean desman (Galemys pyrenaicus) in Spain. The favourability model yielded more realistic potential distribution maps than the logistic regression model. Favourability values can be regarded as the degree of membership of the fuzzy set of sites whose environmental conditions are favourable to the species, which enables applying the rules of fuzzy logic to distribution modelling. They also allow for direct comparisons between models for species with different presence/absence ratios in the study area. This makes them more useful to estimate the conservation value of areas, to design ecological corridors, or to select appropriate areas for species reintroductions. Received: June 2005 / Revised: July 2005
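The favourability transformation associated with this line of work is usually written F = (P/(1−P)) / (n1/n0 + P/(1−P)), where P is the logistic-regression probability and n1, n0 are the numbers of presences and absences in the training data. A minimal sketch (not the authors' code, with invented sample counts):

```python
def favourability(p, n1, n0):
    # Favourability transformation: removes the effect of the
    # presence/absence ratio (n1/n0) from logistic probabilities.
    # F = 0.5 exactly where the local probability equals the
    # overall prevalence, regardless of how unbalanced the data are.
    odds = p / (1 - p)
    return odds / (n1 / n0 + odds)

# A species recorded present at 100 of 1000 sites (prevalence 0.1):
# a site predicted at the prevalence itself is environmentally neutral.
print(favourability(0.1, 100, 900))   # -> 0.5
```

Because F is anchored at 0.5 for "average" conditions, favourability values from models trained on different presence/absence ratios can be compared directly, which is the property the abstract emphasises.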

17.
The strong fluctuating component in the measured concentration time series of a dispersing gaseous pollutant in the atmospheric boundary layer, and the hazard level associated with short-term concentration levels, demonstrate the necessity of calculating the magnitude of turbulent concentration fluctuations using computational simulation models. Moreover, the computation of concentration fluctuations in cases of dispersion in realistic situations, such as built-up areas or street canyons, is of special practical interest for hazard assessment purposes. In this paper, the formulation and evaluation of a model for concentration fluctuations, based on a transport equation, are presented. The model is applicable in cases of complex geometry. It is included in the framework of a computational code developed for simulating the dispersion of buoyant pollutants over complex geometries. The experimental data used for the model evaluation concerned the wind-tunnel dispersion of a passive gas in a street canyon between four identical rectangular buildings. The experimental concentration fluctuations data have been derived from measured high frequency concentrations. The concentration fluctuations model is evaluated by comparing the model's predictions with the observations in the form of scatter plots, quantile-quantile plots, contour plots and statistical indices such as the fractional bias, the geometric mean variance and the factor-of-two percentage. From the above comparisons it is concluded that the overall model performance in the present complex geometry case is satisfactory. The discrepancies between model predictions and observations are attributed to inaccuracies in prescribing the actual wind tunnel boundary conditions to the computational code.

18.
Model-based grouping of species across environmental gradients   (cited by 1: 0 self-citations, 1 by others)
We present a novel approach to the statistical analysis and prediction of multispecies data. The approach allows the simultaneous grouping and quantification of multiple species’ responses to environmental gradients. The underlying statistical model is a finite mixture model, where mixing is performed over the individual species’ responses to environmental gradients. Species with similar responses are grouped with minimal information loss. We term these groups species archetypes. Each species archetype has an associated GLM that can be used to predict distributions with appropriate measures of uncertainty. Initially, we illustrate the concept and method using artificial data and then with application to real data comprising 200 species from the Great Barrier Reef (GBR) lagoon on 13 oceanographic and geological gradients from 12°S to 24°S. The 200 species from the GBR are well represented by 15 species archetypes. The model is interpreted through maps of the probability of presence for a fine scale set of locations throughout the study area. Maps of uncertainty are also produced to provide statistical context. The presence of each species archetype was strongly influenced by oceanographic gradients, principally temperature, oxygen and salinity. The number of species in each group ranged from 4 to 34. The method has potential application to the analysis of multispecies distribution patterns and for multispecies management.

19.
The environmental quality of land can be assessed by calculating relevant threshold values, which differentiate between concentrations of elements resulting from geogenic and diffuse anthropogenic sources and concentrations generated by point sources of elements. A simple process allowing the calculation of these typical threshold values (TTVs) was applied across a region of highly complex geology (Northern Ireland) to six elements of interest: arsenic, chromium, copper, lead, nickel and vanadium. Three methods for identifying domains (areas where a readily identifiable factor can be shown to control the concentration of an element) were used: k-means cluster analysis, boxplots and empirical cumulative distribution functions (ECDF). The ECDF method was most efficient at determining areas of both elevated and reduced concentrations and was used to identify domains in this investigation. Two statistical methods for calculating normal background concentrations (NBCs) and upper limits of geochemical baseline variation (ULBLs), currently used in conjunction with legislative regimes in the UK and Finland respectively, were applied within each domain. The NBC methodology was constructed to run within a specific legislative framework, and its use on this soil geochemical data set was influenced by the presence of skewed distributions and outliers. In contrast, the ULBL methodology was found to calculate more appropriate TTVs that were generally more conservative than the NBCs. TTVs indicate what a “typical” concentration of an element would be within a defined geographical area and should be considered alongside the risk that each of the elements pose in these areas to determine potential risk to receptors.

20.
A stochastic individual-based model (IBM) of mosquitofish population dynamics in experimental ponds was constructed in order to increase, virtually, the number of replicates of control populations in an ecotoxicology trial, and thus to increase the statistical power of the experiments. In this context, great importance had to be paid to model calibration as this conditions the use of the model as a reference for statistical comparisons. Accordingly, model calibration required that both mean behaviour and variability behaviour of the model were in accordance with real data. Currently, identifying parameter values from observed data is still an open issue for IBMs, especially when the parameter space is large. Our model included 41 parameters: 30 driving the model expectancy and 11 driving the model variability. Under these conditions, the use of “Latin hypercube” sampling would most probably have “missed” some important combinations of parameter values. Therefore, complete factorial design was preferred. Unfortunately, due to the constraints of the computational capacity, cost-acceptable “complete designs” were limited to no more than nine parameters, the calibration question becoming a parameter selection question. In this study, successive “complete designs” were conducted with different sets of parameters and different parameter values, in order to progressively narrow the parameter space. For each “complete design”, the selection of a maximum of nine parameters and their respective n values was carefully guided by sensitivity analysis. Sensitivity analysis was decisive in selecting parameters that were both influential and likely to have strong interactions. According to this strategy, the model of mosquitofish population dynamics was calibrated on real data from two different years of experiments, and validated on real data from another independent year. This model includes two categories of agents; fish and their living environment. 
Fish agents have four main processes: growth, survival, puberty and reproduction. The outputs of the model are the length frequency distribution of the population and the 16 scalar variables describing the fish populations. In this study, the length frequency distribution was parameterized by 10 scalars in order to be able to perform calibration. The recently suggested notion of “probabilistic distribution of the distributions” was also applied to our case study, and was shown to be very promising for comparing length frequency distributions (as such).


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号