Similar Literature
 20 similar documents found (search time: 15 ms)
1.
For two or more classes (or types) of points, nearest neighbor contingency tables (NNCTs) are constructed using nearest neighbor (NN) frequencies and are used in testing spatial segregation of the classes. Pielou’s test of independence and Dixon’s cell-specific, class-specific, and overall tests are the tests based on NNCTs (i.e., they are NNCT-tests). These tests are designed and intended for use under the null pattern of random labeling (RL) of completely mapped data. However, it has been shown that Pielou’s test is not appropriate for testing segregation against the RL pattern while Dixon’s tests are. In this article, we compare Pielou’s and Dixon’s NNCT-tests, introduce the one-sided versions of Pielou’s test, and extend the use of NNCT-tests to testing complete spatial randomness (CSR) of points from two or more classes (called CSR independence, henceforth). We assess the finite sample performance of the tests by an extensive Monte Carlo simulation study and demonstrate that Dixon’s tests are also appropriate for testing CSR independence, but Pielou’s test and the corresponding one-sided versions are liberal for testing CSR independence or RL. Furthermore, we show that Pielou’s tests are only appropriate when the NNCT is based on a random sample of (base, NN) pairs. We also prove the consistency of the tests under their appropriate null hypotheses. Moreover, we investigate the edge (or boundary) effects on the NNCT-tests and compare the buffer zone and toroidal edge correction methods for these tests. We illustrate the tests on a real-life and an artificial data set.
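The NNCT construction described in entry 1, tallying each base point's class against the class of its nearest neighbor, can be sketched as follows. The `nnct` helper and the simulated CSR points with random labeling are illustrative assumptions, not code from the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def nnct(points, labels):
    """Build a nearest neighbor contingency table: rows index the class of
    each base point, columns the class of its nearest neighbor."""
    tree = cKDTree(points)
    # k=2 because the nearest neighbor of each point is the point itself
    _, idx = tree.query(points, k=2)
    nn_labels = labels[idx[:, 1]]
    classes = np.unique(labels)
    table = np.zeros((len(classes), len(classes)), dtype=int)
    for i, ci in enumerate(classes):
        for j, cj in enumerate(classes):
            table[i, j] = np.sum((labels == ci) & (nn_labels == cj))
    return classes, table

rng = np.random.default_rng(0)
pts = rng.random((100, 2))                    # CSR points in the unit square
labs = rng.choice(np.array(["A", "B"]), 100)  # random labeling of two classes
classes, table = nnct(pts, labs)
print(table.sum())  # 100: one (base, NN) pair per point
```

Dixon's cell-specific tests then compare each `table[i, j]` with its expectation under RL; that step is omitted here.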

2.
The testing for an association between two categorical variables using count data is commonplace in the behavioral sciences. Here, we present evidence that influential biostatistical textbooks give contradictory and incomplete advice on good practice in the analysis of such contingency table data. We survey the statistical literature and offer guidance on such analyses. Specifically, we call for greater use of exact testing rather than tests which use an asymptotic chi-squared distribution. That is, we suggest that researchers take a conservative approach and only perform asymptotic testing where there is little doubt that it is appropriate. We recommend a specific criterion for such decision-making. Where asymptotic testing is appropriate, we recommend chi-squared over the G-test and recommend against the implementation of Yates (or any other) correction. We also provide advice on the effective use of exact testing for associations in contingency tables. Lastly, we highlight issues that need to be considered when using the commonly recommended Fisher’s exact test.
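Entry 2's recommendation, asymptotic chi-squared only where it is clearly appropriate and exact testing otherwise, can be illustrated with `scipy.stats`. The minimum-expected-count-of-5 rule used below is the classic textbook criterion, offered here as a placeholder; the paper recommends its own specific criterion, which is not reproduced:

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# A 2x2 contingency table with small counts
table = np.array([[1, 8],
                  [10, 2]])

# Asymptotic chi-squared test without Yates correction
chi2, p_asym, dof, expected = chi2_contingency(table, correction=False)

# If any expected count is small, fall back to exact testing
if expected.min() < 5:
    _, p = fisher_exact(table)
else:
    p = p_asym
print(round(p, 4))
```

For this table the smallest expected count is below 5, so Fisher's exact test is used and the association is significant at the 5% level.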

3.
We develop a spectral framework for testing the hypothesis of complete spatial randomness (CSR) for a spatial point pattern. Five formal tests based on the periodogram (sample spectrum) proposed in Mugglestone (1990) are considered. A simulation study is used to evaluate and compare the power of the tests against clustered, regular and inhomogeneous alternatives to CSR. A subset of the tests is shown to be uniformly more powerful than the others against the alternatives considered. The spectral tests are also compared with three widely used space-domain tests that are based on the mean nearest-neighbor distance, the reduced second-order moment function (K-function), and a bivariate Cramér-von Mises statistic. The test based on the scaled cumulative R-spectrum is more powerful than the space-domain tests for detecting clustered alternatives to CSR, especially when the number of events is small.
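A minimal Monte Carlo version of the space-domain test based on mean nearest-neighbor distance mentioned in entry 3 (one of the comparison tests, not the spectral tests themselves) might look like the sketch below; the pattern sizes and simulation count are arbitrary choices:

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_nn_distance(pts):
    """Mean distance from each point to its nearest neighbor."""
    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=2)  # column 0 is the point itself
    return d[:, 1].mean()

rng = np.random.default_rng(3)
n, n_sim = 50, 199

# Clustered pattern: all points packed into one corner of the unit square
clustered = rng.random((n, 2)) * 0.2
obs = mean_nn_distance(clustered)

# Monte Carlo reference distribution under CSR on the unit square
sims = np.array([mean_nn_distance(rng.random((n, 2))) for _ in range(n_sim)])

# One-sided Monte Carlo p-value: clustering shortens NN distances
p = (1 + np.sum(sims <= obs)) / (n_sim + 1)
print(p)
```

The clustered pattern yields a mean NN distance far below the CSR simulations, so the test rejects CSR.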

4.
Testing the Accuracy of Population Viability Analysis

5.
Recently, public health professionals and other geostatistical researchers have shown increasing interest in boundary analysis, the detection or testing of zones or boundaries that reveal sharp changes in the values of spatially oriented variables. For areal data (i.e., data which consist only of sums or averages over geopolitical regions), Lu and Carlin (Geogr Anal 37: 265–285, 2005) suggested a fully model-based framework for areal wombling using Bayesian hierarchical models with posterior summaries computed using Markov chain Monte Carlo (MCMC) methods, and showed the approach to have advantages over existing non-stochastic alternatives. In this paper, we develop Bayesian areal boundary analysis methods that estimate the spatial neighborhood structure using the value of the process in each region and other variables that indicate how similar two regions are. Boundaries may then be determined by the posterior distribution of either this estimated neighborhood structure or the regional mean response differences themselves. Our methods do require several assumptions (including an appropriate prior distribution, a normal spatial random effect distribution, and a Bernoulli distribution for a set of spatial weights), but also deliver more in terms of full posterior inference for the boundary segments (e.g., direct statements of the probability that a particular border segment is part of the boundary). We illustrate three different remedies for the computing difficulties encountered in implementing our method. We use simulation to compare among existing purely algorithmic approaches, the Lu and Carlin (2005) method, and our new adjacency modeling methods. We also illustrate more practical modeling issues (e.g., covariate selection) in the context of a breast cancer late detection data set collected at the county level in the state of Minnesota.

6.
Abstract: Informally gathered species lists are a potential source of data for conservation biology, but most remain unused because of questions of reliability and statistical issues. We applied two alternative analytical methods (contingency tests and occupancy modeling) to a 35-year data set (1973–2007) to test hypotheses about local bird extinction. We compiled data from bird lists collected by expert amateurs and professional scientists in a 2-km² fragment of lowland tropical forest in coastal Ecuador. We tested the effects of the following on local extinction: trophic level, sociality, foraging specialization, light tolerance, geographical range area, and biogeographic source. First we assessed extinction on the basis of the number of years in which a species was not detected on the site and used contingency tests with each factor to compare the frequency of expected and observed extinction events among different species categories. Then we defined four multiyear periods that reflected different stages of deforestation and isolation of the study site and used occupancy modeling to test extinction hypotheses singly and in combination. Both types of analyses supported the biogeographic source hypothesis and the species-range hypothesis as causes of extinction; however, occupancy modeling indicated the model incorporating all factors except foraging specialization best fit the data.

7.
The statistical analysis of environmental data from remote sensing and Earth system simulations often entails the analysis of gridded spatio-temporal data, with a hypothesis test being performed for each grid cell. When the whole image or a set of grid cells are analyzed for a global effect, the problem of multiple testing arises. When no global effect is present, we expect α% of all grid cells to be false positives, and spatially autocorrelated data can give rise to clustered spurious rejections that can be misleading in an analysis of spatial patterns. In this work, we review standard solutions for the multiple testing problem and apply them to spatio-temporal environmental data. These solutions are independent of the test statistic, and any test statistic can be used (e.g., tests for trends or change points in time series). Additionally, we introduce permutation methods and show that they have more statistical power. Real-world data are used to provide examples of the analysis, and the performance of each method is assessed in a simulation study; unlike previous simulation studies, ours compares the statistical power of all the presented methods comprehensively. In conclusion, we present several statistically rigorous methods for analyzing spatio-temporal environmental data while controlling the false positives. These methods allow the use of any test statistic in a wide range of applications in environmental sciences and remote sensing.
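A quick simulation of the point made in entry 7: under a global null, roughly α% of grid cells are falsely rejected, and a family-wise correction suppresses them. Bonferroni is used here as the simplest standard solution, not the permutation method the paper advocates, and the grid size and test (a per-cell trend test) are illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
n_cells, n_time = 1000, 30

# Independent pure-noise time series for each grid cell; test each for a trend
t = np.arange(n_time)
pvals = np.empty(n_cells)
for i in range(n_cells):
    y = rng.standard_normal(n_time)
    pvals[i] = stats.linregress(t, y).pvalue

raw_rejections = np.mean(pvals < alpha)            # close to alpha by construction
bonf_rejections = np.mean(pvals < alpha / n_cells)  # family-wise control
print(raw_rejections, bonf_rejections)
```

With spatially autocorrelated noise the raw rejections would additionally cluster in space, which is the paper's motivating problem.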

8.
Ulrich W, Gotelli NJ. Ecology, 2010, 91(11): 3384–3397
The influence of negative species interactions has dominated much of the literature on community assembly rules. Patterns of negative covariation among species are typically documented through null model analyses of binary presence/absence matrices in which rows designate species, columns designate sites, and the matrix entries indicate the presence (1) or absence (0) of a particular species in a particular site. However, the outcome of species interactions ultimately depends on population-level processes. Therefore, patterns of species segregation and aggregation might be more clearly expressed in abundance matrices, in which the matrix entries indicate the abundance or density of a species in a particular site. We conducted a series of benchmark tests to evaluate the performance of 14 candidate null model algorithms and six covariation metrics that can be used with abundance matrices. We first created a series of random test matrices by sampling a metacommunity from a lognormal species abundance distribution. We also created a series of structured matrices by altering the random matrices to incorporate patterns of pairwise species segregation and aggregation. We next screened each algorithm-index combination with the random and structured matrices to determine which tests had low Type I error rates and good power for detecting segregated and aggregated species distributions. In our benchmark tests, the best-performing null model does not constrain species richness, but assigns individuals to matrix cells proportional to the observed row and column marginal distributions until, for each row and column, total abundances are reached. Using this null model algorithm with a set of four covariance metrics, we tested for patterns of species segregation and aggregation in a collection of 149 empirical abundance matrices and 36 interaction matrices collated from published papers and posted data sets. More than 80% of the matrices were significantly segregated, which reinforces a previous meta-analysis of presence/absence matrices. However, using two of the metrics we detected a significant pattern of aggregation for plants and for the interaction matrices (which include plant-pollinator data sets). These results suggest that abundance matrices, analyzed with an appropriate null model, may be a powerful tool for quantifying patterns of species segregation and aggregation.
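A sketch of the best-performing null model described in entry 8: individuals are assigned to matrix cells with probability proportional to the observed row and column marginal totals until every row and column total is reached. The rejection-sampling loop below is one plausible reading of that verbal description, not the authors' code:

```python
import numpy as np

def marginal_fill_null(matrix, rng):
    """Generate one null abundance matrix preserving row and column totals.

    Individuals are placed one at a time into cells with probability
    proportional to the observed row x column marginal product; placements
    that would overfill a row or column total are rejected."""
    rows = matrix.sum(axis=1)
    cols = matrix.sum(axis=0)
    prob = np.outer(rows, cols).astype(float)
    prob /= prob.sum()
    flat = prob.ravel()
    null = np.zeros_like(matrix)
    total = matrix.sum()
    while null.sum() < total:
        cell = rng.choice(prob.size, p=flat)
        i, j = divmod(cell, matrix.shape[1])
        if null[i].sum() < rows[i] and null[:, j].sum() < cols[j]:
            null[i, j] += 1
    return null

rng = np.random.default_rng(2)
obs = np.array([[5, 0, 2],
                [1, 3, 0],
                [0, 2, 4]])
null = marginal_fill_null(obs, rng)
print(null.sum(axis=1), null.sum(axis=0))  # marginals match the observed matrix
```

Because the row and column deficits always sum to the same remaining total, an admissible cell always exists and the loop terminates. A covariance metric would then be compared between the observed matrix and many such null matrices.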

9.
10.
Abstract: Growing threats to biodiversity in the tropics mean there is an increasing need for effective monitoring that balances scientific rigor with practical feasibility. Alternatives to professional techniques are emerging that are based on the involvement of local people. Such locally based monitoring methods may be more sustainable over time, allow greater spatial coverage and quicker management decisions, lead to increased compliance, and help encourage attitude shifts toward more environmentally sustainable practices. Nevertheless, few studies have yet compared the findings or cost-effectiveness of locally based methods with professional techniques or investigated the power of locally based methods to detect trends. We gathered data on bushmeat-hunting catch and effort using a professional technique (accompanying hunters on hunting trips) and two locally based methods in which data were collected by hunters (hunting camp diaries and weekly hunter interviews) in a 15-month study in Equatorial Guinea. Catch and effort results from locally based methods were strongly correlated with those of the professional technique, and the spatial locations of hunting trips reported in the locally based methods accurately reflected those recorded with the professional technique. We used power simulations of catch and effort data to show that locally based methods can reliably detect meaningful levels of change (20% change with 80% power at significance level [α] = 0.05) in multispecies catch per unit effort. Locally based methods were the most cost-effective for monitoring. Hunter interviews collected catch and effort data on 240% more hunts per person hour and 94% more hunts per unit cost spent on monitoring than the professional technique. Our results suggest that locally based monitoring can offer an accurate, cost-effective, and sufficiently powerful method to monitor the status of natural resources. To establish such a system in Equatorial Guinea, the current lack of national and local capacity for monitoring and management must be addressed.

11.
Judicious Use of Multiple Hypothesis Tests
Abstract: When analyzing a table of statistical results, one must first decide whether adjustment of significance levels is appropriate. If the main goal is hypothesis generation or initial screening for potential conservation problems, then it may be appropriate to use the standard comparisonwise significance level to avoid Type II errors (not detecting real differences or trends). If the main goal is rigorous testing of a hypothesis, however, then an adjustment for multiple tests is needed. To control the familywise Type I error rate (the probability of rejecting at least one true null hypothesis), sequential modifications of the standard Bonferroni method, such as Holm's method, will provide more statistical power than the standard Bonferroni method. Additional power may be achieved through procedures that control the false discovery rate (FDR) (the expected proportion of false positives among tests found to be significant). Holm's sequential Bonferroni method and two FDR-controlling procedures were applied to the results of multiple-regression analyses of the relationship between habitat variables and the abundance of 25 species of forest birds in Japan, and the FDR-controlling procedures provided considerably greater statistical power.
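The procedures named in entry 11 are short enough to state in code. Below are minimal implementations of Holm's sequential Bonferroni and the Benjamini-Hochberg FDR step-up rule (one common FDR-controlling procedure, used here as an example; the paper applies two such procedures). On the toy p-values the FDR procedure rejects one more hypothesis, illustrating its extra power:

```python
import numpy as np

def holm(pvals, alpha=0.05):
    """Holm's sequential Bonferroni: test the smallest p at alpha/m,
    the next at alpha/(m-1), ..., stopping at the first non-rejection."""
    m = len(pvals)
    order = np.argsort(pvals)
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        if pvals[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break
    return reject

def benjamini_hochberg(pvals, q=0.05):
    """BH step-up rule controlling the FDR: find the largest k with
    p_(k) <= k*q/m and reject the k smallest p-values."""
    m = len(pvals)
    order = np.argsort(pvals)
    sorted_p = np.asarray(pvals)[order]
    below = sorted_p <= q * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

p = np.array([0.001, 0.008, 0.012, 0.040, 0.300])
print(holm(p).sum(), benjamini_hochberg(p).sum())  # 3 4
```

Holm stops at 0.040 (which exceeds 0.05/2), while BH accepts it (0.040 ≤ 4 × 0.05/5), so the FDR procedure makes four rejections to Holm's three.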

12.
Larval dispersal is an important component of marine reserve networks. Two conceptually different approaches to incorporate dispersal connectivity into spatial planning of these networks exist, and it is an open question as to when either is most appropriate. Candidate reserve sites can be selected individually based on local properties of connectivity or on a spatial dependency-based approach of selecting clusters of strongly connected habitat patches. The first acts on individual sites, whereas the second acts on linked pairs of sites. We used a combination of larval dispersal simulations representing different seascapes and case studies of biophysical larval dispersal models in the Coral Triangle region and the province of Southeast Sulawesi, Indonesia, to compare the performance of these 2 methods in the spatial planning software Marxan. We explored the reserve design performance implications of different dispersal distances and patterns based on the equilibrium settlement of larvae in protected and unprotected areas. We further assessed different assumptions about metapopulation contributions from unprotected areas, including the case of 100% depletion and more moderate scenarios. The spatial dependency method was suitable when dispersal was limited, a high proportion of the area of interest was substantially degraded, or the target amount of habitat protected was low. Conversely, when subpopulations were well connected, the 100% depletion was relaxed, or more habitat was protected, protecting individual sites with high scores in metrics of connectivity was a better strategy. Spatial dependency methods generally produced more spatially clustered solutions with more benefits inside than outside reserves compared with site-based methods. Therefore, spatial dependency methods potentially provide better results for ecological persistence objectives over enhancing fisheries objectives, and vice versa. Different spatial prioritization methods for using connectivity are appropriate for different contexts, depending on dispersal characteristics, unprotected area contributions, habitat protection targets, and specific management objectives.

13.
Spatial concurrent linear models, in which the model coefficients are spatial processes varying at a local level, are flexible and useful tools for analyzing spatial data. One approach places stationary Gaussian process priors on the spatial processes, but in applications the data may display strong nonstationary patterns. In this article, we propose a Bayesian variable selection approach based on wavelet tools to address this problem. The proposed approach does not involve any stationarity assumptions on the priors, and instead we impose a mixture prior directly on each wavelet coefficient. We introduce an option to control the priors such that high resolution coefficients are more likely to be zero. Computationally efficient MCMC procedures are provided to address posterior sampling, and uncertainty in the estimation is assessed through posterior means and standard deviations. Examples based on simulated data demonstrate the estimation accuracy and advantages of the proposed method. We also illustrate the performance of the proposed method for real data obtained through remote sensing.

14.
In the laboratory sciences, good experimental design minimises the effects of any disturbing variables so that hypotheses are amenable to relatively unambiguous testing. But in the field sciences such variables cannot be controlled and data are inherently variable. Subsequent hypothesis testing must rely on a careful statistical interpretation of noisy data. This paper describes one systematic approach to interpreting the results from surveys of metal-contaminated soils. Since contaminating metals are also present naturally in soil, anthropogenic excesses are recognised through statistical tests on the data. The nature of pollution processes also leads to the generation of distinct spatial patterns, which may be evaluated through appropriate computer-graphic techniques.

15.
Conservation issues are often complicated by sociopolitical controversies that reflect competing philosophies and values regarding natural systems, animals, and people. Effective conservation outcomes require managers to engage myriad influences (social, cultural, political, and economic, as well as ecological). The contribution of conservation scientists who generate the information on which solutions rely is constrained if they are unable to acknowledge how personal values and disciplinary paradigms influence their research and conclusions. Conservation challenges involving controversial species provide an opportunity to reflect on the paradigms and value systems that underpin the discipline and practice of conservation science. Recent analyses highlight the ongoing reliance on normative values in conservation. We frame our discussion around controversies over feral horses (Equus ferus caballus) in the Canadian West and New Zealand and suggest that a lack of transparency and reflexivity regarding normative values continues to prevent conservation practitioners from finding resilient conservation solutions. We suggest that growing scrutiny and backlash to many normative conservation objectives necessitates formal reflexivity methods in conservation biology research, similar to those required of researchers in social science disciplines. Moreover, given that much conservation research and action continues to prioritize Western normative values regarding nature and conservation, we suggest that adopting reflexive methods more broadly is an important step toward more socially just research and practice. Formalizing such methods and requiring reflexivity in research will not only encourage reflection on how personal and disciplinary value systems influence conservation work but could more effectively engage people with diverse perspectives and values in conservation and encourage more novel and resilient conservation outcomes, particularly when dealing with controversial species.

16.
Advances in computing power in the past 20 years have led to a proliferation of spatially explicit, individual-based models of population and ecosystem dynamics. In forest ecosystems, the individual-based models encapsulate an emerging theory of "neighborhood" dynamics, in which fine-scale spatial interactions regulate the demography of component tree species. The spatial distribution of component species, in turn, regulates spatial variation in a whole host of community and ecosystem properties, with subsequent feedbacks on component species. The development of these models has been facilitated by development of new methods of analysis of field data, in which critical demographic rates and ecosystem processes are analyzed in terms of the spatial distributions of neighboring trees and physical environmental factors. The analyses are based on likelihood methods and information theory, and they allow a tight linkage between the models and explicit parameterization of the models from field data. Maximum likelihood methods have a long history of use for point and interval estimation in statistics. In contrast, likelihood principles have only more gradually emerged in ecology as the foundation for an alternative to traditional hypothesis testing. The alternative framework stresses the process of identifying and selecting among competing models, or in the simplest case, among competing point estimates of a parameter of a model. There are four general steps involved in a likelihood analysis: (1) model specification, (2) parameter estimation using maximum likelihood methods, (3) model comparison, and (4) model evaluation. Our goal in this paper is to review recent developments in the use of likelihood methods and modeling for the analysis of neighborhood processes in forest ecosystems. We will focus on a single class of processes, seed dispersal and seedling dispersion, because recent papers provide compelling evidence of the potential power of the approach, and illustrate some of the statistical challenges in applying the methods.

17.
Spatial statistical models that use flow and stream distance
We develop spatial statistical models for stream networks that can estimate relationships between a response variable and other covariates, make predictions at unsampled locations, and predict an average or total for a stream or a stream segment. There have been very few attempts to develop valid spatial covariance models that incorporate flow, stream distance, or both. The application of typical spatial autocovariance functions based on Euclidean distance, such as the spherical covariance model, are not valid when using stream distance. In this paper we develop a large class of valid models that incorporate flow and stream distance by using spatial moving averages. These methods integrate a moving average function, or kernel, against a white noise process. By running the moving average function upstream from a location, we develop models that use flow, and by construction they are valid models based on stream distance. We show that with proper weighting, many of the usual spatial models based on Euclidean distance have a counterpart for stream networks. Using sulfate concentrations from an example data set, the Maryland Biological Stream Survey (MBSS), we show that models using flow may be more appropriate than models that only use stream distance. For the MBSS data set, we use restricted maximum likelihood to fit a valid covariance matrix that uses flow and stream distance, and then we use this covariance matrix to estimate fixed effects and make kriging and block kriging predictions. Received: July 2005 / Revised: March 2006

18.
GIS and geostatistics: Essential partners for spatial analysis
Initially, geographical information systems (GIS) concentrated on two issues: automated map making, and facilitating the comparison of data on thematic maps. The first required high quality graphics, vector data models and powerful databases; the second is based on grid cells that can be manipulated by suites of mathematical operators collectively termed map algebra. Both kinds of GIS are widely available and are taught in many universities and technical colleges. After more than 20 years of development, most standard GIS provide both kinds of functionality and good quality graphic display, but until recently they have not included the methods of statistics and geostatistics as tools for spatial analysis. Recently, standard statistical packages have been linked to GIS for both exploratory data analysis and statistical analysis and hypothesis testing. Standard statistical packages include methods for the analysis of random samples of cases or objects that are not necessarily co-located in space; if the results of statistical analysis display a spatial pattern, then that is because the underlying data also share that pattern. Geostatistics addresses the need to make predictions of sampled attributes (i.e., maps) at unsampled locations from sparse, often expensive data. To make up for lack of hard data, geostatistics has concentrated on the development of powerful methods based on stochastic theory. Though there have been recent moves to incorporate ancillary data in geostatistical analyses, insufficient attention has been paid to using modern methods of data display for the visualization of results. GIS can serve geostatistics by aiding geo-registration of data, facilitating spatial exploratory data analysis, providing a spatial context for interpolation and conditional simulation, as well as providing easy-to-use and effective tools for data display and visualization. The value of geostatistics for GIS lies in the provision of reliable interpolation methods with known errors, methods of upscaling and generalization, and for supplying multiple realizations of spatial patterns that can be used in environmental modeling. These stochastic methods are improving understanding of how errors in models of spatial processes accrue from errors in data or incompleteness in the structure of the models. New developments in GIS, based on ideas taken from map algebra, cellular automata and image analysis, are providing high level programming languages for modeling dynamic processes such as erosion or the development of alluvial fans and deltas. Research has demonstrated that these models need stochastic inputs to yield realistic results. Non-stochastic tools such as fuzzy subsets have been shown to be useful for spatial analysis when probabilistic approaches are inappropriate or impossible. The conclusion is that in spite of differences in history and approach, the linkage of GIS, statistics and geostatistics provides a powerful and complementary suite of tools for spatial analysis in the agricultural, earth and environmental sciences.

19.
The purpose of this paper is to develop a set of associated statistical tests for spatial clustering. In particular, a set of three associated tests will be developed; these will correspond to the three types of tests set out by Besag and Newell (general tests, focused tests, and tests for the detection of clustering). The associated tests draw primarily, though not exclusively, upon existing tests and results. The principal contributions are based upon the score statistic for focused tests, which has been an important approach to testing for clustering around environmental hazards. The first contribution consists of the formulation of a global statistic for general tests that corresponds to focused score statistics, along with an assessment of the distribution of the statistic under the null hypothesis of no raised incidence. The local score statistics used for focused tests will have the property of summing to the global statistic used for the corresponding general test. Attention is also given to the maximum local score statistic for the “test for the detection of clustering”. The critical values of this statistic which are required for testing the null hypothesis are described. Application of the methods is made to leukemia data for central New York State.

20.
For conservation decision making, species’ geographic distributions are mapped using various approaches. Some such efforts have downscaled versions of coarse-resolution extent-of-occurrence maps to fine resolutions for conservation planning. We examined the quality of the extent-of-occurrence maps as range summaries and the utility of refining those maps into fine-resolution distributional hypotheses. Extent-of-occurrence maps tend to be overly simple, omit many known and well-documented populations, and likely frequently include many areas not holding populations. Refinement steps involve typological assumptions about habitat preferences and elevational ranges of species, which can introduce substantial error in estimates of species’ true areas of distribution. However, no model-evaluation steps are taken to assess the predictive ability of these models, so model inaccuracies are not noticed. Whereas range summaries derived by these methods may be useful in coarse-grained, global-extent studies, their continued use in on-the-ground conservation applications at fine spatial resolutions is not advisable in light of reliance on assumptions, lack of real spatial resolution, and lack of testing. In contrast, data-driven techniques that integrate primary data on biodiversity occurrence with remotely sensed data that summarize environmental dimensions (i.e., ecological niche modeling or species distribution modeling) offer data-driven solutions based on a minimum of assumptions that can be evaluated and validated quantitatively to offer a well-founded, widely accepted method for summarizing species’ distributional patterns for conservation applications.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号