Similar Documents
20 similar documents found.
1.
A dynamic and heterogeneous species abundance model generating the lognormal species abundance distribution is fitted to 15 years of time series data on an assemblage of stoneflies and mayflies (Plecoptera and Ephemeroptera) from an aquatic insect community. In each year except one, we analyze 5 parallel samples taken at the same time of the season, giving information about the over-dispersion in the sampling relative to the Poisson distribution. Results are derived from a correlation analysis, where the correlation in the bivariate normal distribution of log abundance is used as a measure of similarity between communities. The analysis enables decomposition of the variance of the lognormal species abundance distribution into three components: heterogeneity among species, stochastic dynamics driven by environmental noise, and over-dispersion in sampling, accounting for 62.9%, 30.6% and 6.5% of the total variance, respectively. Corrected for sampling, the heterogeneity and stochastic components accordingly account for 67.3% and 32.7% of the among-species variance in log abundance. This method makes it possible to disentangle the effects of heterogeneity and stochastic dynamics by quantifying these components and correctly removing sampling effects from the observed species abundance distribution.
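A minimal Python sketch of this decomposition idea, with all parameter values hypothetical rather than taken from the paper: species heterogeneity and yearly environmental noise act on log abundance, and negative binomial sampling supplies the over-dispersion around the Poisson expectation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical variance components on the log-abundance scale.
S, T, R = 50, 15, 5              # species, years, parallel samples per year
var_h, var_e = 1.2, 0.6          # heterogeneity vs. environmental variance

species_eff = rng.normal(0.0, np.sqrt(var_h), S)       # static heterogeneity
year_eff = rng.normal(0.0, np.sqrt(var_e), (S, T))     # stochastic dynamics
log_mean = 2.0 + species_eff[:, None] + year_eff

# Over-dispersed sampling: negative binomial around the Poisson expectation.
mu = np.repeat(np.exp(log_mean)[:, :, None], R, axis=2)
k = 5.0                                                # NB dispersion parameter
counts = rng.negative_binomial(k, k / (k + mu))

# Crude moment-based decomposition of var(log(count + 1)).
y = np.log(counts + 1.0)
v_sampling = y.var(axis=2, ddof=1).mean()                 # among replicates
v_stochastic = y.mean(axis=2).var(axis=1, ddof=1).mean()  # among years
v_heterogeneity = y.mean(axis=(1, 2)).var(ddof=1)         # among species
total = v_sampling + v_stochastic + v_heterogeneity
for name, v in [("heterogeneity", v_heterogeneity),
                ("stochastic", v_stochastic), ("sampling", v_sampling)]:
    print(f"{name}: {v / total:.1%}")
```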

2.
Ranked set sampling can provide an efficient basis for estimating parameters of environmental variables, particularly when sampling costs are intrinsically high. Various ranked set estimators are considered for the population mean and contrasted in terms of their efficiencies and usefulness, with special concern for sample design considerations. Specifically, we consider the effects of the form of the underlying random variable, optimisation of efficiency, and how to allocate sampling effort to best effect (e.g. one large sample or several smaller ones of the same total size). The various prospects are explored for two important positively skewed random variables (lognormal and extreme value) and explicit results are given for these cases. Whilst it turns out that the best approach is to use the largest possible single sample and the optimal ranked set best linear estimator (ranked set BLUE), we find some interesting qualitatively different conclusions for the two skew distributions.
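A sketch of the balanced ranked set estimator and its efficiency gain over simple random sampling for a lognormal variable; it assumes perfect ranking and hypothetical parameters, and does not implement the ranked set BLUE discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def rss_mean(draw, m, cycles):
    """Balanced ranked set sample mean: each cycle draws m sets of m units,
    ranks each set, and keeps the i-th order statistic of the i-th set."""
    kept = []
    for _ in range(cycles):
        sets = np.sort(draw((m, m)), axis=1)
        kept.extend(sets[np.arange(m), np.arange(m)])
    return np.mean(kept)

draw = lambda size: rng.lognormal(0.0, 1.0, size)   # positively skewed case
m, cycles, reps = 4, 5, 2000                        # total n = m * cycles = 20

rss = np.array([rss_mean(draw, m, cycles) for _ in range(reps)])
srs = np.array([draw(m * cycles).mean() for _ in range(reps)])
print("relative efficiency (var SRS / var RSS):", srs.var() / rss.var())
```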

3.
A frequent assumption in environmental risk assessment is that the underlying distribution of an analyte concentration is lognormal. However, the distribution of a random variable whose log has a t-distribution has infinite mean. Because of the proximity of the standard normal and t-distribution, this suggests that a distribution such as the gamma or truncated normal, with smaller right tail probabilities, might make a better statistical model for mean estimation than the lognormal. In order to assess the effect of departures from lognormality on lognormal-based statistics, we simulated complete lognormal, truncated normal, and gamma data for various sample sizes and coefficients of variation. In these cases, departures from lognormality were not easily detected with the Shapiro-Wilk test. Various lognormal-based estimates and tests were compared with alternate methods based on the ordinary sample mean and standard error. The examples were also considered in the presence of random left censoring, with the mean and standard error of the product limit estimate replacing the ordinary sample mean and standard error. The results suggest that, in estimating or testing a mean, if the assumption of lognormality is at all suspect, then lognormal-based approaches may not be as good as the alternative methods.
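A small simulation in the spirit of the study, with hypothetical sample sizes and parameters: the Shapiro-Wilk test applied to log-transformed data has limited power to flag non-lognormal alternatives, while the naive lognormal-based mean estimate can be visibly biased on gamma data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps = 25, 2000

draws = {
    "lognormal":  lambda: rng.lognormal(0.0, 1.0, n),
    "gamma":      lambda: rng.gamma(1.0, 1.0, n),
    "trunc_norm": lambda: stats.truncnorm.rvs(0.0, np.inf, size=n,
                                              random_state=rng),
}
for name, draw in draws.items():
    # Shapiro-Wilk on log(x) as a test of lognormality: its rejection rate
    # shows how hard the departures are to detect at this sample size.
    reject = np.mean([stats.shapiro(np.log(draw())).pvalue < 0.05
                      for _ in range(reps)])
    print(f"{name}: rejection rate {reject:.2f}")

# Bias of the lognormal-based mean estimate exp(ybar + s^2 / 2) on gamma data.
x = rng.gamma(1.0, 1.0, 100_000)
y = np.log(x)
print("lognormal-based:", np.exp(y.mean() + y.var(ddof=1) / 2),
      "ordinary mean:", x.mean())    # true mean is 1.0
```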

4.
Three general methods to calculate soil contaminant cleanup levels are assessed: the truncated lognormal approach, Monte Carlo analysis, and the house-by-house approach. When used together with a lead risk assessment model, they yield the estimated soil lead cleanup levels that may be required to achieve specified target blood lead levels for a community. The truncated lognormal approach is exemplified by the Society for Environmental Geochemistry and Health (SEGH) model, Monte Carlo analysis by the US EPA's LEAD Model, and the house-by-house approach is used with a structural equation model to calculate site-specific soil lead cleanup levels. Each cleanup method can be used with any type of lead risk assessment model. Although all examples given here are for lead, the cleanup methods can, in principle, also be used with risk assessment models for other chemical contaminants to derive contaminant-specific soil cleanup levels.
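A toy Monte Carlo illustration of the general idea only, not the SEGH model or the US EPA LEAD Model: every distribution, dose-response slope, and threshold below is a hypothetical placeholder, and bisection finds the largest cleanup cap that keeps the simulated blood lead exceedance rate under a chosen limit.

```python
import numpy as np

rng = np.random.default_rng(4)

def exceedance(cleanup, n=100_000, target=10.0):
    """Fraction of simulated blood lead values above `target` ug/dL after
    capping soil lead at `cleanup` ppm. All numbers are placeholders."""
    soil = rng.lognormal(6.0, 1.0, n)                  # soil lead, ppm
    soil = np.minimum(soil, cleanup)                   # remediation cap
    # Hypothetical dose-response: geometric-mean blood lead rises with soil lead.
    blood = rng.lognormal(np.log(2.0) + 0.4 * np.log(soil / 100.0), 0.5, n)
    return (blood > target).mean()

# Bisect for the largest cleanup level keeping exceedance below 2%.
lo, hi = 50.0, 5000.0
for _ in range(30):
    mid = 0.5 * (lo + hi)
    if exceedance(mid) > 0.02:
        hi = mid            # too many exceedances: cap must be stricter
    else:
        lo = mid
print("estimated cleanup level (ppm):", round(lo))
```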

5.
The initial use of composite sampling involved the analysis of many negative samples at relatively high laboratory cost (Dorfman sampling). We propose a method of double compositing and compare its efficiency with Dorfman sampling. The variability of composite measurement samples is of environmental interest (hot spots). The precision of these estimates depends on the kurtosis of the distribution: leptokurtic distributions (γ2 > 0) gain precision as the number of field samples is increased, while the opposite holds for platykurtic distributions. In the lognormal case, coverage probabilities are reasonable for σ < 0.5. The Poisson distribution can be associated with temporal compositing, of particular interest where radioactive measurements are taken. Sample size considerations indicate that the total sampling effort is directly proportional to the length of time sampled. If there is background radiation, then increasing levels of this radiation require larger sample sizes to detect the same difference in radiation.
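A sketch comparing expected analyses per individual under classical Dorfman screening and under a two-stage (double compositing) scheme; the formulas are the standard group-testing expectations for independent samples, not expressions taken from the paper.

```python
def dorfman_tests_per_unit(p, k):
    """Expected analyses per individual under single-stage Dorfman screening
    with composites of size k and independent prevalence p."""
    return 1.0 / k + (1.0 - (1.0 - p) ** k)

def double_tests_per_unit(p, k, g):
    """Double compositing: super-composites of g*k units are tested first,
    then the g sub-composites of each positive super-composite, then the
    k individuals of each positive sub-composite."""
    q = 1.0 - p
    return 1.0 / (g * k) + (1.0 - q ** (g * k)) / k + (1.0 - q ** k)

for p in (0.001, 0.01, 0.05):
    single = min(dorfman_tests_per_unit(p, k) for k in range(2, 51))
    double = min(double_tests_per_unit(p, k, g)
                 for k in range(2, 51) for g in range(2, 21))
    print(f"p = {p}: Dorfman {single:.3f}, double compositing {double:.3f}")
```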

6.
Estimating prevalence using composites
We are interested in estimating the fraction of a population that possesses a certain trait, such as the presence of a chemical contaminant in a lake. A composite sample drawn from a population has the trait in question whenever one or more of the individual samples making up the composite has the trait. Let the true fraction of the population that is contaminated be p. Classical estimators of p, such as the MLE and the jackknife, have been shown to be biased. In this study, we introduce a new shrinking estimator which can be used when doing composite sampling. The properties of this estimator are investigated and compared with those of the MLE and the jackknife.
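A quick simulation of the bias that motivates the paper, assuming a composite tests positive whenever any member is contaminated; the estimator below is the classical MLE, and the paper's shrinking estimator is not reproduced here. Parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

def mle_prevalence(T, n, k):
    """Classical MLE of individual prevalence from T positive composites
    out of n, each composite pooling k individuals."""
    return 1.0 - (1.0 - T / n) ** (1.0 / k)

p, k, n, reps = 0.05, 10, 20, 50_000
pos_prob = 1.0 - (1.0 - p) ** k      # composite positive if any member is
T = rng.binomial(n, pos_prob, reps)
T = np.minimum(T, n - 1)             # the MLE is undefined at T = n
print("true p:", p, " mean MLE:", mle_prevalence(T, n, k).mean())
```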

7.
The objective of a long-term soil survey is to determine the mean concentrations of several chemical parameters for pre-defined soil layers and to compare them with the corresponding values in the past. A two-stage random sampling procedure is used to achieve this goal. In the first stage, n subplots are selected from N subplots by simple random sampling without replacement; in the second stage, m sampling sites are chosen within each of the n selected subplots. Thus n · m soil samples are collected for each soil layer. The idea of the composite sample design comes from the challenge of reducing very expensive laboratory analyses: the m laboratory samples from one subplot and one soil layer are physically mixed to form a composite sample. From each of the n selected subplots, one composite sample per soil layer is analyzed in the laboratory, thus n per soil layer in total. In this paper we show that the cost is reduced by the factor m - 1 when the composite sample alternative is used instead of two-stage sampling; however, the variance of the composite sample mean is increased. In the case of positive intraclass correlation the increase is less than 12.5%; in the case of negative intraclass correlation the increase depends on the properties of the variable as well. For the univariate case we derive the optimal number of subplots and sampling sites. A case study is discussed at the end.
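A simulation sketch of one plausible mechanism for this cost-variance trade-off: the composite design pays for analytic (laboratory) error only n times instead of n·m times, at the price of a larger variance of the mean. All variance components below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)

n, m, reps = 10, 5, 30_000
rho, sigma2, tau = 0.3, 1.0, 0.3   # intraclass corr., field var., lab s.d.

sb, sw = np.sqrt(rho * sigma2), np.sqrt((1 - rho) * sigma2)
subplot_eff = rng.normal(0.0, sb, (reps, n, 1))
sites = 10.0 + subplot_eff + rng.normal(0.0, sw, (reps, n, m))

# Two-stage design: every one of the n*m field samples is analysed.
two_stage = (sites + rng.normal(0.0, tau, sites.shape)).mean(axis=(1, 2))

# Composite design: the m samples per subplot are mixed and analysed once,
# so analytic error enters n times instead of n*m times.
composite = (sites.mean(axis=2) + rng.normal(0.0, tau, (reps, n))).mean(axis=1)

print("var two-stage:", two_stage.var(), " var composite:", composite.var())
print("lab analyses: two-stage", n * m, " composite", n)
```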

8.
In this paper a method for collection of vertically and horizontally integrated volume-weighted composite samples for analysis of water chemistry and plankton is presented. The method, which requires proper knowledge of lake morphometry parameters, includes proposed standard procedures for determining sampling interval thickness, maximum sampling depth, selection of sampling stations, and distribution of discrete samples. An example of the outcome of the method in a lake with uncomplicated basin morphometry is given, and the results are discussed against the background of general lake basin morphometry data. The aim of the paper is to start a debate about optimization (statistical as well as ecological) of volume-weighted composite sampling.
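A minimal sketch of the volume-weighting arithmetic, with hypothetical layer volumes and concentrations: aliquots proportional to layer volume make the mixed sample integrate the whole water column.

```python
import numpy as np

# Hypothetical hypsographic data: volume of each depth layer (1 m intervals).
layer_volume = np.array([4.0, 3.2, 2.5, 1.6, 0.8, 0.3])   # 10^6 m^3
layer_conc   = np.array([12., 11., 10., 9., 7., 5.])      # ug/L at midpoints

# Aliquot taken from each discrete sample in proportion to layer volume,
# so the mixed (composite) sample integrates the whole water column.
aliquot = layer_volume / layer_volume.sum()
composite_conc = np.sum(aliquot * layer_conc)
print("volume-weighted mean concentration:", composite_conc)
```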

9.
Uncertainties about biological data and human effects often delay decisions on management of endangered species. Some decision makers argue that uncertainty about the risk posed to a species should lead to precautionary decisions, whereas others argue for delaying protective measures until there is strong evidence that a human activity is having a serious effect on the species. We have developed a method that incorporates uncertainty into the estimate of risk so that delays in action can be reduced or eliminated. We illustrate our method with an actual situation of a deadlock over how to manage Hector's dolphin (Cephalorhynchus hectori). The management question is whether sufficient risk is posed to the dolphins by mortalities in gillnets to warrant regulating the fisheries. In our quantitative risk assessment, we use a population model that incorporates both demographic (between-individual) and environmental (between-year) stochasticity. We incorporate uncertainty in estimates of model parameters by repeatedly running the model for different combinations of survival and reproductive rates. Each value is selected at random from a probability distribution that represents the uncertainty in estimating that parameter. Before drawing conclusions, we perform sensitivity analyses to see whether model assumptions alter conclusions and to recommend priorities for future research. In this example, uncertainty did not alter the conclusion that there is a high risk of population decline if current levels of gillnet mortality continue. Sensitivity analyses revealed this to be a robust conclusion. Thus, our analysis removes uncertainty in the scientific data as an excuse for inaction.
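A compact Monte Carlo sketch of the approach: an inner projection with demographic and environmental stochasticity, wrapped in an outer loop that draws survival and reproduction from distributions representing estimation uncertainty. All parameter values are hypothetical, not the Hector's dolphin estimates.

```python
import numpy as np

rng = np.random.default_rng(7)

def project(N0, years, s_mean, f, env_sd, reps):
    """Project abundance with demographic (binomial/Poisson) and
    environmental (between-year) stochasticity."""
    N = np.full(reps, N0)
    for _ in range(years):
        s = np.clip(s_mean + rng.normal(0.0, env_sd, reps), 0.0, 1.0)
        survivors = rng.binomial(N, s)
        N = survivors + rng.poisson(f * survivors)
    return N

# Parameter uncertainty: each run draws survival and fecundity from
# distributions representing estimation uncertainty (values hypothetical).
draws, declines = 1000, 0
for _ in range(draws):
    s_hat = rng.beta(80, 20)         # survival centred near 0.80
    f_hat = rng.gamma(40, 0.005)     # offspring per survivor, near 0.20
    N50 = project(N0=500, years=50, s_mean=s_hat, f=f_hat,
                  env_sd=0.05, reps=200)
    declines += np.median(N50) < 250
print("P(population halves within 50 yr):", declines / draws)
```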

10.
Uncertainty characterization for emergy values
Statistical estimation of uncertainty has not typically accompanied published emergy values, yet, as with any other quantitative model, uncertainty is embedded in these values; the lack of uncertainty characterization not only makes their accuracy opaque, it also prevents the use of emergy values in statistical tests of hypotheses. This paper first describes sources of uncertainty in unit emergy values (UEVs) and then presents a framework for estimating this uncertainty with analytical and stochastic models, with model choice dependent on how the UEV is calculated and what kinds of uncertainty are quantified. The analytical model can incorporate a broader spectrum of uncertainty types than the stochastic model, including model and scenario uncertainty, which may be significant in emergy models, but it is only appropriate for the most basic emergy calculations. Although less comprehensive in its incorporation of uncertainty, the proposed stochastic method is suitable for all types of UEVs. The distributions of unit emergy values approximate the lognormal distribution, with variations depending on the types of uncertainty quantified as well as the way the UEVs are calculated. While both methods of estimating uncertainty in UEVs have their limitations at this stage of development, this paper provides methods for incorporating uncertainty into emergy and demonstrates how it can be depicted and propagated, so that it can be used in future emergy analyses and permit emergy to be more readily incorporated into other methods of environmental assessment, such as LCA.
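A sketch of the stochastic (Monte Carlo) option for a simple UEV calculation, treating each input flow and unit emergy value as lognormal with a geometric standard deviation; every number below is a hypothetical placeholder, and the resulting UEV distribution is itself close to lognormal.

```python
import numpy as np

rng = np.random.default_rng(8)

# UEV = sum_i(flow_i * uev_i) / yield, with every term lognormal.
# (median, geometric s.d.) pairs below are hypothetical placeholders.
flows = {"fuel": (2.0e3, 1.2), "electricity": (5.0e2, 1.1),
         "services": (1.0e2, 1.5)}
uevs = {"fuel": (6.6e4, 1.3), "electricity": (1.6e5, 1.4),
        "services": (1.1e6, 2.0)}

n = 100_000
emergy = np.zeros(n)
for item in flows:
    med_f, gsd_f = flows[item]
    med_u, gsd_u = uevs[item]
    emergy += (rng.lognormal(np.log(med_f), np.log(gsd_f), n)
               * rng.lognormal(np.log(med_u), np.log(gsd_u), n))
yield_ = rng.lognormal(np.log(1.0e3), np.log(1.1), n)
uev_out = emergy / yield_

gm, gsd = np.exp(np.log(uev_out).mean()), np.exp(np.log(uev_out).std())
print(f"UEV geometric mean {gm:.3g}, geometric s.d. {gsd:.3f}")
```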

11.
Loehle C. Ecology, 2006, 87(9): 2221-2226
Abundance distributions are a central characteristic of ecosystems. Certain distributions have been derived from theoretical models of community organization, and therefore the fit of data to these distributions has been proposed as a test of these theories. However, it is shown here that the geometric sequence distribution can be derived directly from the empirical relationship between population density and body size, with the assumption of random or uniform body size distributions on a log scale (as holds at local scales). The geometric sequence model provides a good to excellent fit to empirical data. The presence of noise in the relationship between population density and body size creates a curve that begins to approximate a lognormal species abundance distribution as the noise term increases. For continental-scale data in which the body size distribution is not flat, the result of sampling tends again toward the lognormal. Repeat sampling over time smooths out species population fluctuations and damps out the noise, giving a more precise geometric sequence abundance distribution. It is argued that the direct derivation of this distribution from empirical relationships gives it priority over distributions derived from complex theoretical community models.
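A short simulation of the argument, with hypothetical parameters: uniform log body sizes and a power-law density-size relationship yield a near-linear rank plot of log abundance (a geometric sequence), and adding noise bends it toward the lognormal shape.

```python
import numpy as np

rng = np.random.default_rng(9)

S, b = 60, 0.75                         # species count, density-size exponent
log_size = rng.uniform(0.0, 4.0, S)     # uniform body sizes on a log scale

for noise_sd in (0.0, 0.5, 1.0):
    log_density = 5.0 - b * log_size + rng.normal(0.0, noise_sd, S)
    ranked = np.sort(log_density)[::-1]
    # A geometric sequence is linear in log abundance versus rank; noise
    # bends the curve toward a lognormal-style rank-abundance shape.
    coef = np.polyfit(np.arange(S), ranked, 1)
    resid = ranked - np.polyval(coef, np.arange(S))
    print(f"noise {noise_sd}: slope {coef[0]:.3f}, "
          f"max |residual| {np.abs(resid).max():.3f}")
```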

12.
An assessment is presented of the distribution characteristics of heavy metals in urban topsoil from the city of Xuzhou. The concentrations of Ag, Al, As, Au, Ba, Be, Bi, Cd, Co, Cr, Cu, Fe, Ga, Hg, Li, Mn, Mo, Ni, Pb, Pd, Pt, Sb, Sc, Se, Sn, V and Zn were determined in 21 soil samples. Examination of lognormal distribution plots indicates that the diagrams for Al, Be, Fe, Ga, Li, and V are almost linear, suggesting that these metals are largely unaffected by anthropogenic activities, while the plots for As, Cd, Cu, Pb, Pd, Pt, Se, Zn and others are not linear, probably because anthropogenic activities deliver these metals to the soils. Al is used for mineralogical normalization of the data. An evaluation of background values for topsoil is also carried out by means of lognormal distribution plots. The results show that the background values obtained from the lognormal distribution plots are comparable to the values for uncontaminated soils of Xuzhou obtained in previous work, except for Cd and Hg. At present, no explanation for the exceptions Cd and Hg can be given.
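A sketch of the screening logic using probability plots on synthetic data (the Xuzhou concentrations are not reproduced here): linearity of the lognormal plot, summarized by the probability-plot correlation r, separates a geogenic-like metal from one with an anthropogenic subpopulation, and the fitted line gives a background upper bound.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Hypothetical concentrations: one metal behaving geogenically (single
# lognormal) and one with an added anthropogenic subpopulation.
geogenic = rng.lognormal(3.0, 0.4, 21)
mixed = np.concatenate([rng.lognormal(3.0, 0.4, 14),
                        rng.lognormal(5.0, 0.6, 7)])

for name, x in (("geogenic-like", geogenic), ("anthropogenic-like", mixed)):
    # Linearity of the lognormal probability plot, judged by the
    # probability-plot correlation coefficient r.
    (_, _), (slope, intercept, r) = stats.probplot(np.log(x), dist="norm")
    upper = np.exp(intercept + 2.0 * slope)   # geometric mean * gsd^2
    print(f"{name}: r = {r:.3f}, background upper bound ~ {upper:.0f}")
```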

13.
Ecological Modelling, 2005, 181(2-3): 203-213
Assessment of population dynamics is central to population ecology and conservation. In structured populations, matrix population models based on demographic data have been widely used to assess such dynamics. Although highlighted in several studies, the influence of heterogeneity among individuals in demographic parameters, and of the possible correlation among these parameters, has usually been ignored, mostly because of difficulties in estimating such individual-specific parameters. In the kittiwake (Rissa tridactyla), a long-lived seabird species, differences in survival and breeding probabilities among individual birds are well documented. Several approaches have been used in the animal ecology literature to establish the association between survival and breeding rates. However, most are based on observed heterogeneity between groups of individuals, an approach that seldom accounts for individual heterogeneity. Few attempts have been made to build models permitting estimation of the correlation between vital rates. For example, survival and breeding probability of individual birds were jointly modelled using logistic random effects models by [Cam, E., Link, W.A., Cooch, E.G., Monnat, J., Danchin, E., 2002. Individual covariation in life-history traits: seeing the trees despite the forest. Am. Naturalist, 159, in press]. This is the only example in wildlife animal populations we are aware of. Here we adopt survival analysis approaches from epidemiology. We model survival and breeding probability jointly using a normally distributed random effect (frailty). Conditionally on this random effect, survival time is modelled assuming a lognormal distribution, and breeding is modelled with a logistic model. Since deaths are observed in year-long intervals, we also take into account that the data are interval censored. The joint model is estimated using classical frequentist methods as well as MCMC techniques in WinBUGS. The association between survival and breeding attempts is quantified using the standard deviation of the random frailty parameters. We apply our joint model to a large data set of 862 birds followed from 1984 to 1995 in Brittany (France). Survival is positively correlated with breeding, indicating that birds with a greater inclination to breed also had higher survival.
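A simulation sketch of the joint structure only, not the estimation: a shared normal frailty enters both a lognormal, interval-censored survival time and a logistic breeding probability, inducing the positive survival-breeding association. All coefficients are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(11)

n = 862
frailty = rng.normal(0.0, 0.6, n)            # shared individual random effect

# Lognormal survival time, interval-censored into whole years of follow-up.
log_T = 1.8 + 1.0 * frailty + rng.normal(0.0, 0.5, n)
years_alive = np.minimum(np.ceil(np.exp(log_T)).astype(int), 12)

# Breeding each year: logit(p) depends on the same frailty.
p_breed = 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * frailty)))
breedings = rng.binomial(years_alive, p_breed)

# The shared frailty induces a positive survival-breeding association.
rate = breedings / years_alive
print("corr(longevity, breeding rate):",
      np.corrcoef(years_alive, rate)[0, 1])
```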

14.
Estimates of animal performance often use the maximum of a small number of laboratory trials, a method that has several statistical disadvantages. Sample maxima always underestimate the true maximum performance, and the degree of the bias depends on sample size. Here, we suggest an alternative approach that involves estimating a specific performance quantile (e.g., the 0.90 quantile). We use information on within-individual variation in performance to obtain a sampling distribution for the residual performance measures, and use this distribution to estimate the desired performance quantile for each individual. We illustrate our approach using simulations and with data on sprint speed in lizards. The quantile method has several advantages over the sample maximum: it reduces or eliminates bias, it uses all of the data from each individual, and its accuracy is independent of sample size. Additionally, we address the estimation of correlations between two different performance measures, such as sample maxima, quantiles, or means. In particular, because of sampling variability, we propose that the correlation of sample means does a better job of estimating the correlation of population maxima than the correlation of sample maxima.
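A sketch of the residual-pooling idea behind the quantile method, on synthetic sprint-speed data with hypothetical parameters; it is not the authors' exact estimator.

```python
import numpy as np

rng = np.random.default_rng(12)

def performance_quantile(trials, q=0.90):
    """Individual-specific performance quantile: pool within-individual
    residuals across all individuals, then add the shared residual
    quantile back to each individual mean (trials: one row per animal)."""
    means = trials.mean(axis=1, keepdims=True)
    residuals = (trials - means).ravel()
    return means.ravel() + np.quantile(residuals, q)

# Hypothetical sprint-speed data: 30 lizards, 5 trials each (m/s).
ability = rng.normal(2.5, 0.3, (30, 1))
trials = ability - np.abs(rng.normal(0.0, 0.2, (30, 5)))  # trials fall short

print("mean sample maximum:", trials.max(axis=1).mean())
print("mean 0.90 quantile: ", performance_quantile(trials).mean())
```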

15.
Normal theory procedures for calculating upper confidence limits (UCLs) on the risk function for continuous responses work well when the data come from a normal distribution. However, if the data come from an alternative distribution, applying the normal theory procedures may lead to serious over- or under-coverage, depending on the alternative distribution. In this paper we conduct simulation studies to investigate the sensitivity of three normal theory UCL procedures to departures from normality. Data from several gamma, reciprocal gamma, and lognormal distributions are considered. The normal theory procedures are applied to both the raw data and the log-transformed data.
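A minimal coverage simulation of the same flavor: a one-sided normal-theory UCL for the mean is applied to normal, gamma, and lognormal samples, and its actual coverage is compared with the nominal 95%. Sample sizes and parameters are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(13)

def normal_ucl(x, conf=0.95):
    """One-sided normal-theory upper confidence limit for the mean."""
    n = len(x)
    return x.mean() + stats.t.ppf(conf, n - 1) * x.std(ddof=1) / np.sqrt(n)

n, reps = 20, 20_000
cases = {
    "normal":    (lambda: rng.normal(10.0, 2.0, n), 10.0),
    "gamma":     (lambda: rng.gamma(1.0, 2.0, n), 2.0),
    "lognormal": (lambda: rng.lognormal(0.0, 1.0, n), np.exp(0.5)),
}
for name, (draw, true_mean) in cases.items():
    cover = np.mean([normal_ucl(draw()) >= true_mean for _ in range(reps)])
    print(f"{name}: coverage of the nominal 95% UCL = {cover:.3f}")
```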

16.
Estimates of a population’s growth rate and process variance from time-series data are often used to calculate risk metrics such as the probability of quasi-extinction, but temporal correlations in the data from sampling error, intrinsic population factors, or environmental conditions can bias process variance estimators and detrimentally affect risk predictions. It has been claimed (McNamara and Harding, Ecol Lett 7:16–20, 2004) that estimates of the long-term variance that incorporate observed temporal correlations in population growth are unaffected by sampling error; however, no estimation procedures were proposed for time-series data. We develop a suite of such long-term variance estimators, and use simulated data with temporally autocorrelated population growth and sampling error to evaluate their performance. In some cases, we get nearly unbiased long-term variance estimates despite ignoring sampling error, but the utility of these estimators is questionable because of large estimation uncertainty and difficulties in estimating correlation structure in practice. Process variance estimators that ignored temporal correlations generally gave more precise estimates of the variability in population growth and of the probability of quasi-extinction. We also found that the estimation of probability of quasi-extinction was greatly improved when quasi-extinction thresholds were set relatively close to population levels. Because of precision concerns, we recommend using simple models for risk estimates despite potential biases, and limiting inference to quantifying relative risk; e.g., changes in risk over time for a single population or comparative risk among populations.
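A sketch of one such long-term variance estimator, the lag-0 variance plus twice the summed autocovariances, on simulated data with AR(1) environmental noise and added sampling error; parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(14)

def longrun_variance(r, max_lag=5):
    """Long-term variance of log growth rates: the lag-0 variance plus
    twice the summed autocovariances up to max_lag."""
    r = r - r.mean()
    n = len(r)
    gamma = [np.dot(r[lag:], r[:n - lag]) / n for lag in range(max_lag + 1)]
    return gamma[0] + 2.0 * sum(gamma[1:])

# AR(1) environmental noise in growth plus sampling error on log counts.
T, phi, sig, tau = 30, 0.5, 0.1, 0.2
eps = np.zeros(T)
for t in range(1, T):
    eps[t] = phi * eps[t - 1] + rng.normal(0.0, sig)
logN_obs = 5.0 + np.cumsum(-0.01 + eps) + rng.normal(0.0, tau, T)

r_obs = np.diff(logN_obs)
# Sampling error inflates the naive variance but largely cancels in the
# long-run estimator through the induced negative lag-1 autocovariance.
print("naive:", r_obs.var(ddof=1), " long-run:", longrun_variance(r_obs))
```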

17.
Many environmental sampling problems involve some specified regulatory or contractual limit (RL). Often the interest is in estimating the percentile of the underlying contaminant concentration distribution corresponding to RL. The focus of this paper is on obtaining a point estimate and a lower confidence limit for that percentile when all observations are nondetectable, with the i-th observation known to be less than some detection limit DLi, where DLi ≤ RL. Since composite samples are being considered, it is not unreasonable to assume an underlying normal distribution.
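As a distribution-free complement to the paper's normal-theory approach (not the method proposed there), a lower confidence bound for the percentile follows directly from the number of nondetects:

```python
# With n nondetects (every observation < DL <= RL), a distribution-free
# lower confidence bound on p = F(RL) >= F(DL) follows from
# P(all n obs < DL) = p**n when F(DL) = p, giving p_lower = alpha**(1/n)
# at confidence level 1 - alpha.
def percentile_lower_bound(n, alpha=0.05):
    return alpha ** (1.0 / n)

for n in (5, 10, 20, 50):
    print(n, round(percentile_lower_bound(n), 3))
# n = 20 nondetects: with 95% confidence, at least 86% of the concentration
# distribution lies below DL, and hence below RL.
```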

18.
There are presently few tools available for estimating epidemic risks from forest pathogens, and hence for informing proactive disease management. In this study we demonstrate that a bioclimatic niche model can be used to examine questions of epidemic risk in temperate eucalypt plantations. The bioclimatic niche model CLIMEX was used to identify regional variation in climate suitability for Mycosphaerella leaf disease (MLD), a major cause of foliage damage in temperate eucalypt plantations around the world. Using historical observations of MLD damage, we were able to convert the relative score of climatic suitability generated by CLIMEX into a severity ranking ranging from low to high, providing for the first time a direct link between risk and impact, and allowing us to explore disease severity in a way meaningful to forest managers. We determined that the 'Compare Years' function in CLIMEX can be used for site-specific risk assessment to identify the severity, frequency and seasonality of MLD epidemics. We also explored appropriate scales of risk assessment for forest managers. Applying the CLIMEX model of MLD with a 0.25° or coarser grid size to areas of sharp topographic relief frequently misrepresented the risk posed by MLD, because considerable variation occurred between individual forest sites encompassed within a single grid cell. This highlights the need for site-specific risk assessment to address many questions pertinent to managing risk in plantations.

19.
Many of the most popular sampling schemes used in forestry are probability proportional to size methods. These methods are also referred to as size-biased because sampling is actually from a weighted form of the underlying population distribution. Length- and area-biased sampling are special cases of size-biased sampling where the probability weighting comes from a lineal or areal function of the random variable of interest, respectively. Often, interest is in estimating a parametric probability density of the data. In forestry, the Weibull function has been used extensively for such purposes. Estimating equations for method of moments and maximum likelihood for two- and three-parameter Weibull distributions are presented. Fitting is illustrated with an example from an area-biased angle-gauge sample of standing trees in a woodlot. Finally, some specific points concerning the form of the size-biased densities are reported.
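A sketch of maximum likelihood fitting for an area-biased (order-2 size-biased) Weibull sample, using the weighted density f*(x) = x^2 f(x) / E[X^2]; the data are simulated by rejection sampling, and all parameter values are hypothetical.

```python
import numpy as np
from scipy import optimize, special, stats

rng = np.random.default_rng(15)

def size_biased_nll(params, x, alpha=2):
    """Negative log-likelihood of a size-biased Weibull sample of order
    alpha (alpha = 2 for area-biased angle-gauge data)."""
    c, b = params                               # shape, scale
    if c <= 0 or b <= 0:
        return np.inf
    log_f = stats.weibull_min.logpdf(x, c, scale=b)
    # E[X^alpha] = b**alpha * Gamma(1 + alpha / c) for a Weibull(c, b).
    log_mu = alpha * np.log(b) + special.gammaln(1.0 + alpha / c)
    return -np.sum(alpha * np.log(x) + log_f - log_mu)

# Simulate area-biased data by rejection from a Weibull(c=2.2, b=25) population:
# accept each tree with probability proportional to its squared diameter.
pop = rng.weibull(2.2, 200_000) * 25.0
keep = rng.uniform(0.0, 1.0, pop.size) < (pop / pop.max()) ** 2
x = pop[keep][:500]

res = optimize.minimize(size_biased_nll, x0=[1.5, 20.0], args=(x,),
                        method="Nelder-Mead")
print("size-biased MLE (shape, scale):", res.x)   # close to (2.2, 25)
```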

20.
Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference about species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form: the first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation.
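A self-contained sketch of the Bayesian log-series piece of this framework, fitted with a small random-walk Metropolis sampler on hypothetical abundance data; the hierarchical lognormal model and Poisson sampling layer of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(16)

def logseries_loglik(x, counts):
    """Log-likelihood of species abundances under the log series,
    P(n) = x**n / (n * (-log(1 - x)))."""
    return (np.sum(counts * np.log(x) - np.log(counts))
            - counts.size * np.log(-np.log1p(-x)))

counts = np.array([1, 1, 1, 2, 2, 3, 4, 6, 9, 14, 25, 40])  # hypothetical site

expit = lambda v: 1.0 / (1.0 + np.exp(-v))

def log_post(v):
    # Flat prior on x in (0, 1), plus the Jacobian of the logit transform.
    x = expit(v)
    return logseries_loglik(x, counts) + np.log(x) + np.log(1.0 - x)

# Random-walk Metropolis on v = logit(x).
v, samples = 0.0, []
for it in range(20_000):
    v_new = v + rng.normal(0.0, 0.3)
    if np.log(rng.uniform()) < log_post(v_new) - log_post(v):
        v = v_new
    if it >= 5_000:
        samples.append(expit(v))

x_post = np.array(samples)
alpha_post = counts.sum() * (1.0 - x_post) / x_post   # Fisher's alpha
print("posterior mean x:", x_post.mean())
print("95% interval for Fisher's alpha:",
      np.percentile(alpha_post, [2.5, 97.5]))
```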
