Similar Articles
20 similar articles found (search time: 875 ms)
1.
McGill BJ, Maurer BA, Weiser MD. Ecology, 2006, 87(6): 1411-1423
We describe a general framework for testing neutral theory. We summarize similarities and differences between ten different versions of neutral theory. Two central predictions of neutral theory are that species abundance distributions will follow a zero-sum multinomial distribution and that community composition will change over space due to dispersal limitation. We review all published empirical tests of neutral theory. With the exception of one type of test, all tests fail to support neutral theory. We identify and perform several new tests. Specifically, we develop a set of best practices for testing the fit of the zero-sum multinomial (ZSM) against a lognormal null hypothesis and apply this to a data set, concluding that the lognormal outperforms neutral theory on robust tests. We explore whether a priori parameterization of neutral theory is possible, and we conclude that it is not. We show that non-curve-fitting predictions readily derived from neutral theory are easily falsifiable. In toto, the current weight of evidence is overwhelmingly against neutral theory. We suggest some next steps for neutral theory.

2.
Whether general environmental exposures to endocrine disrupting chemicals (including pesticides and dioxin) might induce decreased sex ratios (male/female ratio at birth) is discussed. To address this issue, the authors looked for a space-time clustering test which could detect local areas of significantly low risk, assuming a Bernoulli distribution. If the endocrine disruptor hypothesis holds true, and if the sex ratio is a sentinel health event indicative of new reproductive hazards ascribed to environmental factors, then in a given region, either a cluster of low male/female ratio among newborn babies would be expected in the vicinity of polluting municipal solid waste incinerators (MSWIs) (supporting the dioxin hypothesis), or local clusters would be expected in some rural areas where large amounts of pesticides are sprayed. Among cluster detection tests, the spatial scan statistic has been widely used in various applications to scan for areas with high rates, and rarely (if ever) for areas with low rates. Therefore, the goal of this paper was to check the properties of the scan statistic under a given scenario (Bernoulli distribution, search for clusters with low rates) and to assess its added value in addressing the sex ratio issue. This study took place in the Franche-Comté region (France), mainly rural, comprising three main MSWIs, among which only one had high dioxin emission levels in the past. The study population consisted of 192,490 boys and 182,588 girls born during the 1975–1999 period. On the whole, the authors conclude that: (i) spatial and space-time scan statistics provide attractive features for addressing the sex ratio issue; (ii) the sex ratio is not markedly affected across space and does not provide a reliable screening measure for detecting reproductive hazards ascribed to environmental factors.
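The low-rate Bernoulli scan described in this abstract can be illustrated with a minimal sketch (not the authors' implementation): circular windows are grown around each location, a one-sided Bernoulli log-likelihood ratio is computed for windows with a deficit of cases inside, and the significance of the maximum is assessed by Monte Carlo permutation of the case labels. All function and variable names are illustrative.

```python
import math
import random

def bernoulli_llr_low(c_in, n_in, c_tot, n_tot):
    """One-sided Bernoulli log-likelihood ratio; nonzero only for
    windows with a LOWER rate inside than outside."""
    p_in = c_in / n_in
    c_out, n_out = c_tot - c_in, n_tot - n_in
    p_out = c_out / n_out
    p0 = c_tot / n_tot
    if p_in >= p_out:                      # no deficit inside: not a low cluster
        return 0.0
    def ll(c, n, p):
        if p in (0.0, 1.0):                # 0*log(0) terms vanish
            return 0.0
        return c * math.log(p) + (n - c) * math.log(1 - p)
    return (ll(c_in, n_in, p_in) + ll(c_out, n_out, p_out)
            - ll(c_tot, n_tot, p0))

def scan_low(points, n_sims=999, seed=0):
    """points: list of (x, y, is_case). Scans circular windows centred
    on each point (up to half the points); returns the maximum LLR and
    its Monte Carlo p-value under random relabelling."""
    rng = random.Random(seed)
    labels = [p[2] for p in points]
    xy = [(p[0], p[1]) for p in points]
    n_tot, c_tot = len(points), sum(labels)

    def best_llr(labs):
        best = 0.0
        for xi, yi in xy:
            order = sorted(range(n_tot),
                           key=lambda j: (xy[j][0] - xi) ** 2
                                       + (xy[j][1] - yi) ** 2)
            c_in = n_in = 0
            for j in order[:n_tot // 2]:
                n_in += 1
                c_in += labs[j]
                best = max(best, bernoulli_llr_low(c_in, n_in, c_tot, n_tot))
        return best

    observed = best_llr(labels)
    as_extreme = sum(best_llr(rng.sample(labels, n_tot)) >= observed
                     for _ in range(n_sims))
    return observed, (as_extreme + 1) / (n_sims + 1)
```

The same skeleton covers high-rate scans by reversing the inequality in `bernoulli_llr_low`.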

3.
This paper presents a statistical method for detecting distinct scales of pattern for mosaics of irregular patches, by means of perimeter–area relationships. Krummel et al. (1987) were the first to develop a method for detecting different scaling domains in a landscape of irregular patches, but this method requires investigator judgment and is not completely satisfactory. Grossi et al. (2001) suggested a modification of Krummel's method in order to detect the change points between different scaling domains objectively. Their procedure is based on the selection of the best piecewise linear regression model using a set of statistical tests. Even though the change points were estimated, the null distributions used for testing purposes were those appropriate for known change points. The present paper investigates the effect that estimating the change points has on the underlying distribution theory. The procedure we suggest is based on the selection of the best piecewise linear regression model using a likelihood ratio (LR) test. Each segment of the piecewise linear model corresponds to a fractal domain. Breakpoints between different segments are unknown, so the piecewise linear models are non-linear. In this case, the frequency distribution of the LR statistic cannot be approximated by a chi-squared distribution. Instead, Monte Carlo simulation is used to obtain an empirical null distribution of the LR statistic. The suggested method is applied to three patch types (CORINE biotopes) located in the Val Baganza watershed of Italy.

4.
For two or more classes (or types) of points, nearest neighbor contingency tables (NNCTs) are constructed using nearest neighbor (NN) frequencies and are used in testing spatial segregation of the classes. Pielou’s test of independence and Dixon’s cell-specific, class-specific, and overall tests are the tests based on NNCTs (i.e., they are NNCT-tests). These tests are designed and intended for use under the null pattern of random labeling (RL) of completely mapped data. However, it has been shown that Pielou’s test is not appropriate for testing segregation against the RL pattern, while Dixon’s tests are. In this article, we compare Pielou’s and Dixon’s NNCT-tests, introduce one-sided versions of Pielou’s test, and extend the use of NNCT-tests to testing complete spatial randomness (CSR) of points from two or more classes (henceforth called CSR independence). We assess the finite sample performance of the tests by an extensive Monte Carlo simulation study and demonstrate that Dixon’s tests are also appropriate for testing CSR independence, but Pielou’s test and the corresponding one-sided versions are liberal for testing CSR independence or RL. Furthermore, we show that Pielou’s tests are only appropriate when the NNCT is based on a random sample of (base, NN) pairs. We also prove the consistency of the tests under their appropriate null hypotheses. Moreover, we investigate the edge (or boundary) effects on the NNCT-tests and compare the buffer zone and toroidal edge correction methods for these tests. We illustrate the tests on a real-life and an artificial data set.

5.
We develop a spectral framework for testing the hypothesis of complete spatial randomness (CSR) for a spatial point pattern. Five formal tests based on the periodogram (sample spectrum) proposed in Mugglestone (1990) are considered. A simulation study is used to evaluate and compare the power of the tests against clustered, regular and inhomogeneous alternatives to CSR. A subset of the tests is shown to be uniformly more powerful than the others against the alternatives considered. The spectral tests are also compared with three widely used space-domain tests that are based on the mean nearest-neighbor distance, the reduced second-order moment function (K-function), and a bivariate Cramér-von Mises statistic. The test based on the scaled cumulative R-spectrum is more powerful than the space-domain tests for detecting clustered alternatives to CSR, especially when the number of events is small.

6.
The statistical analysis of environmental data from remote sensing and Earth system simulations often entails the analysis of gridded spatio-temporal data, with a hypothesis test being performed for each grid cell. When the whole image or a set of grid cells is analyzed for a global effect, the problem of multiple testing arises. When no global effect is present, we expect a proportion α of all grid cells to be false positives, and spatially autocorrelated data can give rise to clustered spurious rejections that can be misleading in an analysis of spatial patterns. In this work, we review standard solutions for the multiple testing problem and apply them to spatio-temporal environmental data. These solutions are independent of the test statistic, and any test statistic can be used (e.g., tests for trends or change points in time series). Additionally, we introduce permutation methods and show that they have more statistical power; unlike earlier simulation studies, ours compares the statistical power of all presented methods comprehensively. Real-world data are used to provide examples of the analysis, and the performance of each method is assessed in a simulation study. In conclusion, we present several statistically rigorous methods for analyzing spatio-temporal environmental data and controlling the false positives. These methods allow the use of any test statistic in a wide range of applications in environmental sciences and remote sensing.
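One permutation approach of the kind reviewed in this abstract is the max-statistic method: the same permutation is applied to every grid cell and the maximum statistic over the whole grid is recorded, yielding a single familywise threshold that automatically respects the spatial dependence between cells. The sketch below uses a simple mean-difference statistic as a stand-in; any per-cell statistic could be plugged in, and all names are illustrative.

```python
import random
import statistics

def max_stat_permutation(grid, n_perm=499, alpha=0.05, seed=1):
    """grid: dict cell_id -> list of values over time. Per-cell statistic:
    |difference in means| between the first and second half of the series.
    Permuting time points identically across cells and taking the MAX
    statistic gives one familywise threshold for the whole grid."""
    rng = random.Random(seed)
    cells = list(grid)
    t = len(grid[cells[0]])
    half = t // 2

    def stat(series, order):
        first = [series[i] for i in order[:half]]
        second = [series[i] for i in order[half:]]
        return abs(statistics.mean(first) - statistics.mean(second))

    identity = list(range(t))
    observed = {c: stat(grid[c], identity) for c in cells}

    null_max = []
    for _ in range(n_perm):
        order = rng.sample(range(t), t)    # one permutation for ALL cells
        null_max.append(max(stat(grid[c], order) for c in cells))
    null_max.sort()
    threshold = null_max[int((1 - alpha) * n_perm)]
    return {c: observed[c] > threshold for c in cells}
```

Because the maximum is taken over all cells per permutation, the probability of any false rejection across the grid is controlled at roughly alpha, without a per-cell Bonferroni penalty.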

7.
The spatial scan statistic is a widely applied tool for cluster detection. The spatial scan statistic evaluates the significance of a series of potential circular clusters using Monte Carlo simulation to account for the multiplicity of comparisons. In most settings, the extent of the multiplicity problem varies across the study region. For example, urban areas typically have many overlapping clusters, while rural areas have few. The spatial scan statistic does not account for these local variations in the multiplicity problem. We propose two new spatially-varying multiplicity adjustments for spatial cluster detection, one based on a nested Bonferroni adjustment and one based on local averaging. Geographic variations in power for the spatial scan statistic and the two new statistics are explored through simulation studies, and the methods are applied to both the well-known New York leukemia data and data from a case–control study of breast cancer in Wisconsin.

8.
Binary matrices originating from presence/absence data on species (rows) distributed over sites (columns) have been a subject of much controversy in ecological biogeography. Under the null hypothesis that every matrix is equally likely, the distributions of some test statistics measuring co-occurrences between species are sought, conditional on the row and column totals being fixed at the values observed for some particular matrix. Many ad hoc methods have been proposed in the literature, but at least some of them do not provide uniform random samples of matrices. In particular, some swap algorithms have not accounted for the number of neighbors each matrix has in the universe of matrices with a set of fixed row and column sums. We provide a Monte Carlo method using random walks on graphs that gives correct estimates for the distributions of statistics. We exemplify its use with one statistic.
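One simple random-walk scheme that does yield uniform samples with fixed margins is the "trial swap": propose a uniformly random 2×2 submatrix and swap it only if it is a checkerboard, counting failed proposals as steps in place. Because the proposal is symmetric, the walk's stationary distribution over matrices with the given row and column sums is uniform — the property the abstract notes that naive "search until a swappable pair is found" algorithms lack. This sketch is illustrative and not the paper's exact algorithm.

```python
import random
from itertools import combinations

def trial_swap_sample(matrix, n_steps=10000, seed=0):
    """Approximately uniform sampling of 0/1 matrices with fixed row and
    column sums. Each step proposes a random 2x2 submatrix and swaps it
    only if it is a checkerboard ([[1,0],[0,1]] or [[0,1],[1,0]]).
    Failed proposals are self-loops, keeping the walk symmetric."""
    rng = random.Random(seed)
    m = [row[:] for row in matrix]
    n_rows, n_cols = len(m), len(m[0])
    for _ in range(n_steps):
        r1, r2 = rng.sample(range(n_rows), 2)
        c1, c2 = rng.sample(range(n_cols), 2)
        a, b = m[r1][c1], m[r1][c2]
        c, d = m[r2][c1], m[r2][c2]
        if a == d and b == c and a != b:   # checkerboard: swap preserves margins
            m[r1][c1], m[r1][c2] = b, a
            m[r2][c1], m[r2][c2] = d, c
    return m

def cooccurrence(m):
    """Example test statistic: number of species pairs sharing a site."""
    return sum(any(m[i][k] and m[j][k] for k in range(len(m[0])))
               for i, j in combinations(range(len(m)), 2))
```

Running `trial_swap_sample` repeatedly and recomputing `cooccurrence` on each sample builds the null distribution against which the observed statistic is compared.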

9.
Spisula solidissima (Dillwyn, 1817) is a large, suspension-feeding bivalve whose range extends from Nova Scotia to South Carolina. This species is harvested commercially. Shell length and age data were collected for this species from 1980 to 1994 during surveys of population size and structure. These data were used to examine the relationship between the growth rate of S. solidissima and intraspecific density. The null hypothesis was that density (represented by number of individuals per tow) would have no effect on rate of growth. A negative relationship would support the alternative hypothesis, that intraspecific competition had taken place. This analysis focused on the surfclam population offshore from the Delmarva Peninsula, USA because: (1) a major recruitment event occurred in 1977, (2) clam fishermen had reported “stunted” surfclams in that area, (3) a wide range of local densities was available to examine, and (4) the existence of a closed area within the study area set up an interesting contrast with areas left open to harvesting. Maps of surfclam abundance across the Delmarva region demonstrate that areas of highest density have generally remained in the same location through time. The results suggested that intraspecific competition has been important in structuring this population. Based on data from 1980 to 1992, shell length was significantly reduced at high density, and a significant interaction between age and density was observed. Growth modeling indicated decreased asymptotic lengths and growth rates with increasing density. In nine out of ten pairwise randomization tests, fitted von Bertalanffy growth curves representing different densities were significantly different from each other. High densities of clams have persisted in the area that was closed to harvesting for 11 years (1980 to 1991). In 1994, length at age was significantly less in this closed area compared to that in the surrounding area. This effect was apparent in clams from 3 to 17 years of age, and most pronounced in the cohort that recruited to the Delmarva region in high numbers in 1977. Lower growth rates within the closed area have management implications for the optimal duration of closures. Received: 16 May 1997 / Accepted: 7 October 1997

10.
Ecologists wish to understand the role of traits of species in determining where each species occurs in the environment. For this, they wish to detect associations between species traits and environmental variables from three data tables: species count data from sites, associated environmental data, and species trait data from databases. These three tables leave a missing part, the fourth corner. The fourth-corner correlations between quantitative traits and environmental variables, heuristically proposed 20 years ago, fill this corner. Generalized linear (mixed) models have been proposed more recently as a model-based alternative. This paper shows that the squared fourth-corner correlation times the total count is precisely the score test statistic for testing the linear-by-linear interaction in a Poisson log-linear model that also contains species and sites as main effects. For multiple traits and environmental variables, the score test statistic is proportional to the total inertia of a doubly constrained correspondence analysis. When the count data are over-dispersed compared to the Poisson, or when there are other deviations from the model such as unobserved traits or environmental variables that interact with the observed ones, the score test statistic does not have the usual chi-square distribution. For these types of deviations, row- and column-based permutation methods (and their sequential combination) are proposed to control the type I error without undue loss of power (unless no deviation is present), as illustrated in a small simulation study. The issues for valid statistical testing are illustrated using the well-known Dutch Dune Meadow data set.

11.
Judicious Use of Multiple Hypothesis Tests
Abstract: When analyzing a table of statistical results, one must first decide whether adjustment of significance levels is appropriate. If the main goal is hypothesis generation or initial screening for potential conservation problems, then it may be appropriate to use the standard comparisonwise significance level to avoid Type II errors (not detecting real differences or trends). If the main goal is rigorous testing of a hypothesis, however, then an adjustment for multiple tests is needed. To control the familywise Type I error rate (the probability of rejecting at least one true null hypothesis), sequential modifications of the standard Bonferroni method, such as Holm's method, will provide more statistical power than the standard Bonferroni method. Additional power may be achieved through procedures that control the false discovery rate (FDR) (the expected proportion of false positives among tests found to be significant). Holm's sequential Bonferroni method and two FDR-controlling procedures were applied to the results of multiple-regression analyses of the relationship between habitat variables and the abundance of 25 species of forest birds in Japan, and the FDR-controlling procedures provided considerably greater statistical power.
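The two families of adjustment discussed in this abstract are easy to state concretely. Below is a minimal sketch of Holm's step-down method (familywise error control) and of the Benjamini-Hochberg step-up procedure, a standard FDR-controlling procedure; on a typical set of p-values BH rejects more hypotheses, which is the extra power referred to above. Function names are illustrative.

```python
def holm(pvals, alpha=0.05):
    """Holm's step-down Bonferroni: compare the i-th smallest p-value to
    alpha / (m - i); stop at the first failure. Controls the familywise
    Type I error rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                           # all larger p-values also fail
    return reject

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up procedure: find the largest k with p_(k) <= k*q/m and
    reject the k smallest p-values. Controls the false discovery rate."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:
            k = rank
    for i in order[:k]:
        reject[i] = True
    return reject
```

For example, with p-values (0.01, 0.02, 0.03, 0.04, 0.2) at level 0.05, Holm rejects only the first hypothesis while BH rejects the first four, illustrating the power difference.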

12.
Fisher (1950) introduced the variance or dispersion index test statistic to test for deviations from the Poisson distribution. For this test, approximate critical values exist for large sample sizes. If the number of observations is small, this approximation can lead to a wrong conclusion. For small samples, the exact critical values can only be derived by enumeration of all possibilities. Tables of critical values for overdispersion already exist (e.g., Rao and Chakravarti, 1956). However, in many biological situations underdispersion, a more-regular-than-Poisson distribution, is a common phenomenon. Therefore, we have tabulated in this paper the one-tailed critical values for a small number of observations under the null hypothesis (H0) that the random variable is Poisson distributed against the alternative hypothesis of underdispersion. With the help of this table, the hypothesis that the observations in a data set are Poisson distributed can be tested easily with the variance test. The tables are illustrated with examples from the literature and some observations from our own research. In general, the χ2 approximation gives a smaller significance level than the exact variance test.
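For small samples, an alternative to consulting tabulated exact critical values is to simulate the conditional null: given the observed total, Poisson counts are equiprobable-multinomial, so the exact distribution of the dispersion index can be approximated by Monte Carlo. This simulation route is our illustration, not the paper's tables; names are illustrative.

```python
import random

def dispersion_index(counts):
    """Fisher's variance (dispersion index) statistic:
    D = sum((x_i - mean)^2) / mean, approximately chi-square with n - 1
    degrees of freedom under the Poisson null for large samples."""
    n = len(counts)
    mean = sum(counts) / n
    return sum((x - mean) ** 2 for x in counts) / mean

def underdispersion_pvalue(counts, n_sims=9999, seed=0):
    """One-tailed small-sample test of H0: Poisson against the
    underdispersion alternative. Conditional on the total, Poisson counts
    are multinomial with equal cell probabilities, which is simulated
    directly; the p-value is the fraction of simulated D values at or
    below the observed one (small D = more regular than Poisson)."""
    rng = random.Random(seed)
    n, total = len(counts), sum(counts)
    d_obs = dispersion_index(counts)
    hits = 0
    for _ in range(n_sims):
        sim = [0] * n
        for _ in range(total):
            sim[rng.randrange(n)] += 1      # drop each unit in a random cell
        if dispersion_index(sim) <= d_obs:
            hits += 1
    return (hits + 1) / (n_sims + 1)
```

A perfectly regular sample such as (5, 5, 5, 5, 5, 5) has D = 0 and yields a very small p-value, matching the intuition that such regularity is unlikely under a Poisson process.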

13.
We propose a novel tool for testing hypotheses concerning the adequacy of environmentally defined factors for local clustering of diseases, through the comparative evaluation of the significance of the most likely clusters detected under maps whose neighborhood structures were modified according to those factors. A multi-objective genetic algorithm scan statistic is employed for finding spatial clusters in a map divided into a finite number of regions, whose adjacency is defined by a graph structure. This cluster finder maximizes two objectives, the spatial scan statistic and the regularity of cluster shape. Instead of specifying locations for the possible clusters a priori, as is currently done for cluster finders based on focused algorithms, we alter the usual adjacency induced by the common geographical boundary between regions. In our approach, the connectivity between regions is reinforced or weakened according to certain environmental features of interest associated with the map. We build various plausible scenarios, each time modifying the adjacency structure on specific geographic areas in the map, and run the multi-objective genetic algorithm to select the best cluster solutions for each of the selected scenarios. The statistical significances of the most likely clusters are estimated through Monte Carlo simulations. The clusters with the lowest estimated p-values, along with their corresponding maps of enhanced environmental features, are displayed for comparative analysis. The probability of cluster detection is therefore increased or decreased according to changes made in the adjacency graph structure, related to the selection of environmental features. The eventual identification of the specific environmental conditions which induce the most significant clusters enables the practitioner to accept or reject different hypotheses concerning the relevance of geographical factors. Numerical simulation studies and an application to malaria clusters in Brazil are presented.

14.
Establishing IUCN Red List Criteria for Threatened Ecosystems
Abstract: The potential for conservation of individual species has been greatly advanced by the International Union for Conservation of Nature's (IUCN) development of objective, repeatable, and transparent criteria for assessing extinction risk that explicitly separate risk assessment from priority setting. At the IV World Conservation Congress in 2008, the process began to develop and implement comparable global standards for ecosystems. A working group established by the IUCN has begun formulating a system of quantitative categories and criteria, analogous to those used for species, for assigning levels of threat to ecosystems at local, regional, and global levels. A final system will require definitions of ecosystems; quantification of ecosystem status; identification of the stages of degradation and loss of ecosystems; proxy measures of risk (criteria); classification thresholds for these criteria; and standardized methods for performing assessments. The system will need to reflect the degree and rate of change in an ecosystem's extent, composition, structure, and function, and have its conceptual roots in ecological theory and empirical research. On the basis of these requirements and the hypothesis that ecosystem risk is a function of the risk of its component species, we propose a set of four criteria: recent declines in distribution or ecological function, historical total loss in distribution or ecological function, small distribution combined with decline, or very small distribution. Most work has focused on terrestrial ecosystems, but comparable thresholds and criteria for freshwater and marine ecosystems are also needed. These are the first steps in an international consultation process that will lead to a unified proposal to be presented at the next World Conservation Congress in 2012.

15.
Several non-dynamic, scale-invariant, and scale-dependent dynamic subgrid-scale (SGS) models are utilized in large-eddy simulations of shear-driven neutral atmospheric boundary layer (ABL) flows. The popular Smagorinsky closure and an alternative closure based on Kolmogorov’s scaling hypothesis are used as SGS base models. Our results show that, in the context of the neutral ABL regime, the dynamic modeling approach is extremely useful and reproduces several established results (e.g., surface layer similarity theory) with fidelity. The scale-dependence framework, in general, improves the near-surface statistics from the Smagorinsky model-based simulations. We also note that the local averaging-based dynamic SGS models perform significantly better than their planar averaging-based counterparts. Lastly, we find a broadly consistent superiority of the Smagorinsky-based SGS models over the corresponding Kolmogorov’s-scaling-hypothesis-based SGS models for predicting the inertial-range scaling of spectra.

16.
The preference of the hermit crab Calcinus californiensis among six species of shells was tested in two different experiments. The first experiment used pair-wise trials, analyzing preference by chi-square tests under two different constructions of the null hypothesis: one based on no preference among shell species, the other comparing the number of crabs changing to a particular shell species when two options were given versus when no options were offered. The second experiment was a multiple-alternative test based on a rank ordering of shell preference. This method has both statistical and resource-saving advantages over the traditional pair-wise comparisons. The sequence of shell preference was similar regardless of the procedure used. The preferred shell species are heavy, which might be associated with hydrodynamic advantages and with protection against predation. The shell preference matches the pattern of shell occupancy, indicating that shell use in nature is determined by the crab’s preference. The information generated may be used in further research on shell preference as a methodological alternative.

17.
The assumption of demographic closure in the analysis of capture-recapture data under closed-population models is of fundamental importance. Yet, little progress has been made in the development of omnibus tests of the closure assumption. We present a closure test for time-specific data that, in principle, tests the null hypothesis of closed-population model M_t against the open-population Jolly-Seber model as a specific alternative. This test is chi-square distributed and can be decomposed into informative components that can be interpreted to determine the nature of closure violations. The test is most sensitive to permanent emigration, least sensitive to temporary emigration, and of intermediate sensitivity to permanent or temporary immigration. It is a versatile tool for testing the assumption of demographic closure in the analysis of capture-recapture data.

18.
Fitting generalised linear models (GLMs) with more than one predictor has become the standard method of analysis in evolutionary and behavioural research. Often, GLMs are used for exploratory data analysis, where one starts with a complex full model including interaction terms and then simplifies by removing non-significant terms. While this approach can be useful, it is problematic if significant effects are interpreted as if they arose from a single a priori hypothesis test. This is because model selection involves cryptic multiple hypothesis testing, a fact that has only rarely been acknowledged or quantified. We show that the probability of finding at least one ‘significant’ effect is high even if all null hypotheses are true (e.g. 40% when starting with four predictors and their two-way interactions). This probability is close to theoretical expectations when the sample size (N) is large relative to the number of predictors including interactions (k). In contrast, type I error rates strongly exceed even those expectations when model simplification is applied to models that are over-fitted before simplification (low N/k ratio). The increase in false-positive results arises primarily from an overestimation of effect sizes among significant predictors, leading to upward-biased effect sizes that often cannot be reproduced in follow-up studies (‘the winner's curse’). Despite having their own problems, full model tests and P value adjustments can be used as a guide to how frequently type I errors arise by sampling variation alone. We favour the presentation of full models, since they best reflect the range of predictors investigated and ensure a balanced representation of non-significant results as well.

19.
20.
Space limitation in larval settlement can play an important role in the population dynamics of marine species. A novel statistical test for space limitation based on quadrat counts of individuals is described. The test is based on identifying a significant relationship between the relative dispersion of quadrat counts and overall mean density. An application to a time series of quadrat counts of recently settled American lobsters Homarus americanus covering the period 1993–2007 in Casco Bay, Maine, USA (43°45′N; 69°58′W), is presented. For this data set, the null hypothesis that space is not limiting could not be rejected (P = 0.10).


Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)