Similar Articles
20 similar articles found (search time: 31 ms)
1.
Nearest neighbor (NN) methods are widely employed for drawing inferences about spatial point patterns of two or more classes. We introduce a method for testing reflexivity in the NN structure (i.e., NN reflexivity) based on a contingency table, henceforth called the reflexivity contingency table (RCT). The RCT is based on the NN relationships among the data points and has been used in the literature for testing niche specificity, but we demonstrate that it is actually more appropriate for testing the NN reflexivity pattern. We derive the asymptotic distribution of the entries of the RCT under random labeling and introduce tests of reflexivity based on these entries. We also consider Pielou’s approach to the RCT and show that it is not appropriate for completely mapped spatial data. We determine the appropriate null hypotheses and the underlying conditions/assumptions required for all tests considered. We investigate the finite-sample performance of the tests in terms of empirical size and power by extensive Monte Carlo simulations and illustrate the methods on two real-life ecological data sets.
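The construction of such a table can be sketched in a few lines. The 2×2 layout below (reflexive vs. non-reflexive (base, NN) pairs crossed with same vs. different class) is an illustrative reading of the abstract, not necessarily the exact RCT used in the paper:

```python
import numpy as np
from scipy.spatial import cKDTree

def reflexivity_table(points, labels):
    """Illustrative 2x2 reflexivity contingency table.

    Rows: the (base, NN) pair is reflexive (mutual NNs) or not.
    Columns: base and NN belong to the same class or not.
    The paper's exact RCT layout may differ from this sketch.
    """
    tree = cKDTree(points)
    _, idx = tree.query(points, k=2)  # k=2: the first hit is the point itself
    nn = idx[:, 1]
    table = np.zeros((2, 2), dtype=int)
    for i, j in enumerate(nn):
        row = 0 if nn[j] == i else 1            # mutual (reflexive) NNs?
        col = 0 if labels[i] == labels[j] else 1  # same class?
        table[row, col] += 1
    return table

# Two well-separated same-class pairs: every point and its NN are
# mutual neighbors of the same class, so all 4 pairs land in cell (0, 0).
pts = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 0.0], [5.0, 1.0]])
lab = np.array([0, 0, 1, 1])
print(reflexivity_table(pts, lab))
```

Tests of reflexivity would then be built on the distribution of these cell counts under random labeling.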

2.
For two or more classes (or types) of points, nearest neighbor contingency tables (NNCTs) are constructed using nearest neighbor (NN) frequencies and are used in testing spatial segregation of the classes. Pielou’s test of independence and Dixon’s cell-specific, class-specific, and overall tests are tests based on NNCTs (i.e., they are NNCT-tests). These tests are designed and intended for use under the null pattern of random labeling (RL) of completely mapped data. However, it has been shown that Pielou’s test is not appropriate for testing segregation against the RL pattern, while Dixon’s tests are. In this article, we compare Pielou’s and Dixon’s NNCT-tests, introduce one-sided versions of Pielou’s test, and extend the use of NNCT-tests to testing complete spatial randomness (CSR) of points from two or more classes (henceforth called CSR independence). We assess the finite-sample performance of the tests by an extensive Monte Carlo simulation study and demonstrate that Dixon’s tests are also appropriate for testing CSR independence, but Pielou’s test and its one-sided versions are liberal for testing CSR independence or RL. Furthermore, we show that Pielou’s tests are appropriate only when the NNCT is based on a random sample of (base, NN) pairs. We also prove the consistency of the tests under their appropriate null hypotheses. Moreover, we investigate edge (or boundary) effects on the NNCT-tests and compare the buffer zone and toroidal edge correction methods for these tests. We illustrate the tests on a real-life and an artificial data set.

3.
Non-parametric statistical tests are commonly used in the behavioral sciences. Researchers need to be aware that non-parametric methods involving ranks can perform unreliably because of very small amounts of noise added during the storage and manipulation of values by computers, which spuriously reduces the number of ties. To avoid this problem, researchers should round values to an appropriate number of decimal places prior to the ranking procedure, ensuring that data points whose values cannot be separated at the precision of their measurement receive identical ranks. We also recommend exact rather than asymptotic evaluation of p values in non-parametric statistical tests.
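The failure mode is easy to reproduce with any rank-based routine; the snippet below uses `scipy.stats.rankdata` to show how floating-point noise breaks an intended tie and how rounding to the measurement precision restores it:

```python
import numpy as np
from scipy.stats import rankdata

# Two observations that are identical at measurement precision (0.3),
# except that one has picked up floating-point noise along the way.
a = 0.1 + 0.2                # 0.30000000000000004, not 0.3
data = np.array([a, 0.3, 0.1])

print(rankdata(data))               # [3. 2. 1.] -- the tie is spuriously broken
print(rankdata(np.round(data, 1)))  # [2.5 2.5 1. ] -- rounding restores the tie
```

With the spurious tie broken, a rank test would treat the two values as distinct, changing its null distribution and hence its p value.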

4.
The literature on modelling a predator’s prey selection describes many intuitive indices, few of which have both reasonable statistical justification and tractable asymptotic properties. Here, we provide a simple model that meets both of these criteria, while extending previous work to include an array of data from multiple species and time points. Further, we apply the expectation–maximisation algorithm to compute estimates if exact counts of the number of prey species eaten in a particular time period are not observed. We conduct a simulation study to demonstrate the accuracy of our method, and illustrate the utility of the approach for field analysis of predation using a real data set, collected on wolf spiders using molecular gut-content analysis.

5.
Testing the Accuracy of Population Viability Analysis (total citations: 3; self-citations: 0; by others: 3)

6.
In the present study, an attempt was made to compare the statistical tools used for analysing the data of repeated dose toxicity studies with rodents conducted in 45 countries with those used in Japan. The study revealed that there was no congruence among the countries in the use of statistical tools for analysing the data obtained from these studies. For example, to analyse such data, Scheffé's multiple range and Dunnett-type (joint-type Dunnett) tests are commonly used in Japan, but in other countries the use of these statistical tools is not so common. However, the statistical techniques used for testing these data for homogeneity of variance and for inter-group comparisons do not differ much between Japan and other countries. In Japan, the data are generally not tested for normality, and the same is true of most of the countries investigated. In the present investigation, of the 127 studies examined, the data of only 6 were analysed for both homogeneity of variance and normal distribution. For examining homogeneity of variance, we propose Levene's test, since the commonly used Bartlett's test may indicate heterogeneity of variance across all the groups even when only a slight heterogeneity of variance is present in any one of them. We suggest that the data be examined for both homogeneity of variance and normal distribution. For groups that do not show heterogeneity of variance, we recommend Dunnett's test to find significant differences among the groups, and for those that do show heterogeneity of variance, we recommend Steel's test.
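The recommended workflow can be sketched with SciPy. Note the assumptions: `scipy.stats.dunnett` exists only in SciPy >= 1.11, and Steel's test has no SciPy implementation, so Bonferroni-corrected Mann-Whitney tests stand in for it here as a rough approximation:

```python
import numpy as np
from scipy import stats

def compare_to_control(control, groups, alpha=0.05):
    """Levene screen, then a many-to-one comparison against the control.

    Sketch of the workflow recommended above. Steel's test is replaced by
    Bonferroni-corrected Mann-Whitney tests, which is only an approximation.
    """
    p_levene = stats.levene(control, *groups).pvalue
    if p_levene > alpha and hasattr(stats, "dunnett"):  # SciPy >= 1.11
        # Variances look homogeneous: Dunnett's many-to-one test.
        pvals = stats.dunnett(*groups, control=control).pvalue
    else:
        # Heterogeneous variances (or old SciPy): nonparametric fallback.
        pvals = np.minimum(
            [stats.mannwhitneyu(g, control).pvalue * len(groups)
             for g in groups], 1.0)
    return p_levene, np.asarray(pvals)

rng = np.random.default_rng(0)
control = rng.normal(0.0, 1.0, 20)
doses = [rng.normal(m, 1.0, 20) for m in (0.0, 0.5, 1.5)]
p_levene, pvals = compare_to_control(control, doses)
```

Levene's test is used for the screen precisely because, unlike Bartlett's test, it is robust to departures from normality.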

7.
Alternative livelihood project (ALP) is a widely used term for interventions that aim to reduce the prevalence of activities deemed to be environmentally damaging by substituting them with lower impact livelihood activities that provide at least equivalent benefits. ALPs are widely implemented in conservation, but in 2012, an International Union for Conservation of Nature resolution called for a critical review of such projects based on concern that their effectiveness was unproven. We focused on the conceptual design of ALPs by considering their underlying assumptions. We placed ALPs within a broad category of livelihood‐focused interventions to better understand their role in conservation and their intended impacts. We dissected 3 flawed assumptions about ALPs based on the notions of substitution, the homogenous community, and impact scalability. Interventions based on flawed assumptions about people's needs, aspirations, and the factors that influence livelihood choice are unlikely to achieve conservation objectives. We therefore recommend use of a sustainable livelihoods approach to understand the role and function of environmentally damaging behaviors within livelihood strategies; differentiate between households in a community that have the greatest environmental impact and those most vulnerable to resource access restrictions to improve intervention targeting; and learn more about the social–ecological system within which household livelihood strategies are embedded. Rather than using livelihood‐focused interventions as a direct behavior‐change tool, it may be more appropriate to focus on either enhancing the existing livelihood strategies of those most vulnerable to conservation‐imposed resource access restrictions or on use of livelihood‐focused interventions that establish a clear link to conservation as a means of building good community relations. However, we recommend that the term ALP be replaced by the broader term livelihood‐focused intervention. This avoids the implicit assumption that alternatives can fully substitute for natural resource‐based livelihood activities.

8.
This paper presents a statistical method for detecting distinct scales of pattern in mosaics of irregular patches by means of perimeter–area relationships. Krummel et al. (1987) were the first to develop a method for detecting different scaling domains in a landscape of irregular patches, but their method requires investigator judgment and is not completely satisfying. Grossi et al. (2001) suggested a modification of Krummel's method to objectively detect the change points between different scaling domains. Their procedure is based on selecting the best piecewise linear regression model using a set of statistical tests. However, although the change points were estimated, the null distributions used for testing were those appropriate for known change points. The present paper investigates the effect that estimating the change points has on the underlying distribution theory. The procedure we suggest is based on selecting the best piecewise linear regression model using a likelihood ratio (LR) test. Each segment of the piecewise linear model corresponds to a fractal domain. Breakpoints between segments are unknown, so the piecewise linear models are non-linear; in this case the null distribution of the LR statistic cannot be approximated by a chi-squared distribution. Instead, Monte Carlo simulation is used to obtain an empirical null distribution of the LR statistic. The suggested method is applied to three patch types (CORINE biotopes) located in the Val Baganza watershed of Italy.
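The core idea, an LR-type statistic whose null distribution is simulated rather than taken from chi-squared tables because the breakpoint is estimated, can be sketched as follows. This is a simplified one-breakpoint version with a grid-searched change point, not the authors' exact procedure:

```python
import numpy as np

def _rss(x, y):
    """Residual sum of squares and coefficients of a least-squares line."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r), beta

def lr_statistic(x, y):
    """LR-type statistic: best two-segment linear fit vs a single line.
    The breakpoint is estimated by grid search, which is exactly why
    chi-squared theory does not apply to this statistic."""
    rss0, _ = _rss(x, y)
    rss1 = min(
        _rss(x[x <= b], y[x <= b])[0] + _rss(x[x > b], y[x > b])[0]
        for b in np.quantile(x, np.linspace(0.2, 0.8, 25)))
    return len(x) * np.log(rss0 / rss1)

def mc_pvalue(x, y, n_sim=199, seed=0):
    """Empirical null distribution: simulate from the fitted one-line model."""
    rng = np.random.default_rng(seed)
    _, beta = _rss(x, y)
    fitted = beta[0] + beta[1] * x
    sigma = np.std(y - fitted, ddof=2)
    obs = lr_statistic(x, y)
    sims = [lr_statistic(x, fitted + rng.normal(0, sigma, len(x)))
            for _ in range(n_sim)]
    return (1 + sum(s >= obs for s in sims)) / (n_sim + 1)
```

Applied to log-perimeter vs log-area data, a small Monte Carlo p value would indicate more than one fractal domain.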

9.
Spatial statistical models that use flow and stream distance (total citations: 6; self-citations: 1; by others: 6)
We develop spatial statistical models for stream networks that can estimate relationships between a response variable and other covariates, make predictions at unsampled locations, and predict an average or total for a stream or a stream segment. There have been very few attempts to develop valid spatial covariance models that incorporate flow, stream distance, or both. The application of typical spatial autocovariance functions based on Euclidean distance, such as the spherical covariance model, is not valid when using stream distance. In this paper we develop a large class of valid models that incorporate flow and stream distance by using spatial moving averages. These methods integrate a moving average function, or kernel, against a white noise process. By running the moving average function upstream from a location, we develop models that use flow, and by construction they are valid models based on stream distance. We show that with proper weighting, many of the usual spatial models based on Euclidean distance have a counterpart for stream networks. Using sulfate concentrations from an example data set, the Maryland Biological Stream Survey (MBSS), we show that models using flow may be more appropriate than models that only use stream distance. For the MBSS data set, we use restricted maximum likelihood to fit a valid covariance matrix that uses flow and stream distance, and then we use this covariance matrix to estimate fixed effects and make kriging and block kriging predictions. Received: July 2005 / Revised: March 2006

10.
What happens when those who provide conservation advice are required to take policy and management action based on that advice? Conservation advocates and scientists often try to prompt regulatory change that has significant implications for government without facing the challenge of managing such change. Through a case study, we placed ourselves in the role of the government of Thailand, facing obligations to seahorses (Hippocampus spp.) under the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). These obligations include ensuring that its exports of seahorses do not damage wild populations. We applied a CITES-approved framework (which we developed) to evaluate the risks of such exports to 2 seahorse species. We used the framework to evaluate the pressures that put wild populations of the species at risk; whether current management mitigates the risk or offsets these pressures; and whether the species is responding as hoped to management policy. We based our analysis on information in published and grey literature, local knowledge, citizen science data, results of government research, and expert opinion. To meet CITES obligations, exports of both species would need to be prohibited until more precautionary adaptive management emerged. The risk of any exports of Hippocampus trimaculatus was above a tolerable level because of a lack of appropriate management to mitigate risks. In contrast, the risk of any exports of Hippocampus kuda could become tolerable if monitoring were put in place to assess the species’ response to management. The process we developed for Authorities to determine risk in response to CITES guidelines was challenging to implement even without the need for government to consider social implications of conservation action. Despite the imperfections of our risk evaluation, however, it still served to support adaptive management. Conservationists need to keep implementation in mind when offering advice.

11.
Social media data are being increasingly used in conservation science to study human–nature interactions. User-generated content, such as images, video, text, and audio, and the associated metadata can be used to assess such interactions. A number of social media platforms provide free access to user-generated content. However, as with any research involving people, scientific investigations based on social media data require compliance with the highest standards of data privacy and data protection, even when the data are publicly available. Should social media data be misused, the risks to individual users' privacy and well-being can be substantial. We investigated the legal basis for using social media data while ensuring data subjects' rights through a case study based on the European Union's General Data Protection Regulation. The risks associated with using social media data in research include accidental or purposeful misidentification, which has the potential to cause psychological or physical harm to an identified person. To collect, store, protect, share, and manage social media data in a way that prevents such risks, researchers should minimize data, anonymize data, and follow strict data management procedures. Risk-based approaches, such as a data privacy impact assessment, can be used to identify and minimize privacy risks to social media users, demonstrate accountability, and comply with data protection legislation. We recommend that conservation scientists consider these points carefully when devising their research objectives so as to facilitate responsible use of social media data in conservation science research, for example, in conservation culturomics and investigations of the illegal wildlife trade online.

12.
Tuomisto H, Ruokolainen K. Ecology 2006, 87(11):2697–2708
Which statistical methods are appropriate for testing hypotheses about the origin of beta diversity has been actively discussed recently, especially whether one should use the raw-data approach (e.g., canonical analyses such as RDA and CCA) or the distance approach (e.g., the Mantel test and multiple regression on distance matrices). Most of the confusion seems to stem from uncertainty as to what the response variable is in the different approaches. Here our aim is to clarify this issue. We also show that, although both the raw-data approach and the distance approach can often be used to address the same ecological hypothesis, they target fundamentally different predictions of those hypotheses. As the two approaches shed light on different aspects of the ecological hypotheses, they should be viewed as complementary rather than alternative ways of analyzing data. However, in some cases only one of the approaches may be appropriate. We argue that S. P. Hubbell's neutral theory can only be tested using the distance approach, because its testable predictions are stated in terms of distances, not in terms of raw data. In all cases, the decision on which method to choose must be based on which one addresses the question at hand; it cannot be based on which one provides the highest proportion of explained variance in simulation studies.
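As a concrete instance of the distance approach, the simple Mantel test can be written as a permutation procedure in a few lines (a generic implementation, not tied to any particular package):

```python
import numpy as np

def mantel(D1, D2, n_perm=999, seed=0):
    """Permutation Mantel test: correlation between two distance matrices.

    Rows and columns of D2 are permuted jointly, which preserves its
    internal distance structure under the null of no association with D1.
    """
    rng = np.random.default_rng(seed)
    iu = np.triu_indices_from(D1, k=1)       # upper-triangle entries only
    v1 = D1[iu]
    r_obs = np.corrcoef(v1, D2[iu])[0, 1]
    n = D1.shape[0]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        v2p = D2[np.ix_(p, p)][iu]           # permute rows and columns together
        if abs(np.corrcoef(v1, v2p)[0, 1]) >= abs(r_obs):
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)
```

Here the response variable is explicitly a vector of pairwise distances, which is exactly the distinction the abstract draws between the two approaches.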

13.
14.
In settings where measurements are costly and/or difficult to obtain but ranking of the potential sample data is relatively easy and reliable, statistical methods based on a ranked-set sampling approach can lead to substantial improvement over analogous methods associated with simple random samples. Previous nonparametric work in this area has concentrated almost exclusively on the one- and two-sample location problems. In this paper we develop ranked-set sample procedures for the m-sample location setting where the treatment effect parameters follow a restricted umbrella pattern. Distribution-free testing procedures are developed both for the case where the peak of the umbrella is known and for the case where it is unknown. Small-sample and asymptotic null distribution properties are provided for the peak-known test statistic.
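The basic mechanics of ranked-set sampling, and the efficiency gain it buys in the simplest (one-sample mean) case, can be illustrated with a short simulation. This is a generic illustration; the umbrella-pattern tests in the paper build on this sampling scheme but are more involved:

```python
import numpy as np

def rss_cycle(draw, k, rng):
    """One ranked-set sampling cycle: draw k sets of k units, rank each
    set (ranking is assumed cheap), and measure only the i-th ranked
    unit of the i-th set -- k measurements from k*k drawn units."""
    return np.array([np.sort(draw(k, rng))[i] for i in range(k)])

def compare_variances(k=3, n_cycles=5000, seed=0):
    """Variance of the RSS mean vs a simple random sample of equal size."""
    rng = np.random.default_rng(seed)
    draw = lambda n, rng: rng.normal(0.0, 1.0, n)
    rss_means = [rss_cycle(draw, k, rng).mean() for _ in range(n_cycles)]
    srs_means = [draw(k, rng).mean() for _ in range(n_cycles)]
    return np.var(rss_means), np.var(srs_means)

v_rss, v_srs = compare_variances()
print(v_rss, v_srs)  # the RSS mean is noticeably more precise
```

The gain comes from the ranked-set sample containing one observation from each stratum of the order statistics, which is why RSS-based procedures can dominate their simple-random-sample analogues at equal measurement cost.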

15.
Estimates of a population’s growth rate and process variance from time-series data are often used to calculate risk metrics such as the probability of quasi-extinction, but temporal correlations in the data from sampling error, intrinsic population factors, or environmental conditions can bias process variance estimators and detrimentally affect risk predictions. It has been claimed (McNamara and Harding, Ecol Lett 7:16–20, 2004) that estimates of the long-term variance that incorporate observed temporal correlations in population growth are unaffected by sampling error; however, no estimation procedures were proposed for time-series data. We develop a suite of such long-term variance estimators, and use simulated data with temporally autocorrelated population growth and sampling error to evaluate their performance. In some cases, we get nearly unbiased long-term variance estimates despite ignoring sampling error, but the utility of these estimators is questionable because of large estimation uncertainty and difficulties in estimating correlation structure in practice. Process variance estimators that ignored temporal correlations generally gave more precise estimates of the variability in population growth and of the probability of quasi-extinction. We also found that the estimation of probability of quasi-extinction was greatly improved when quasi-extinction thresholds were set relatively close to population levels. Because of precision concerns, we recommend using simple models for risk estimates despite potential biases, and limiting inference to quantifying relative risk; e.g., changes in risk over time for a single population or comparative risk among populations.
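For reference, the simplest estimator in this family, the one that ignores both sampling error and temporal correlation, is just the sample mean and variance of the log growth rates (the kind of "simple model" the authors ultimately recommend for relative-risk work):

```python
import numpy as np

def log_growth_stats(counts):
    """Naive estimates from a census time series: mean and variance of
    log population growth rates, ignoring sampling error and temporal
    correlation entirely."""
    counts = np.asarray(counts, dtype=float)
    r = np.diff(np.log(counts))
    return r.mean(), r.var(ddof=1)

# Deterministic 5% annual growth: mean log-rate log(1.05), variance ~0.
mu, s2 = log_growth_stats(100 * 1.05 ** np.arange(12))
```

The long-term variance estimators studied in the paper extend this by adding autocovariance terms of the `r` series; the code above is only the correlation-free baseline.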

16.
Detecting rare species is important for both threatened species management and invasive species eradication programs. Conservation scent dogs provide an olfactory survey tool that has advantages over traditional visual and auditory survey techniques for some cryptic species. From the literature, we identified 5 measures important in evaluating the use of scent dogs: precision, sensitivity, effort, cost, and comparison with other techniques. We explored the scale at which performance is evaluated and examined when field testing under real working conditions is achievable. We also identified cost differences among studies. We examined 61 studies published in 1976–2018 that reported conservation dog performance, and considered the inconsistencies in the reporting of scent dog performance among these studies. The majority of studies reported some measure of performance; however, only 8 studies reported all 3 aspects necessary for performance evaluation: precision, sensitivity, and effort. Although effort was considered in 43 studies, inconsistent methods and incomplete reporting prevented meaningful evaluation of performance and comparison among studies. Differences in cost between similar studies were influenced by geographical location and how the dog and handler were sourced for the study. To develop consistent reporting for evaluation, we recommend adoption of sensitivity, precision, and effort as standard performance measures. We recommend reporting effort as the total area and total time spent searching and reporting sensitivity and precision as proportions of the sample size. Additionally, reporting of costs, survey objectives, dog training and experience, type of detection task, and human influences will provide better opportunities for comparison within and among studies.

17.
Success in conservation biology depends upon a synergistic combination of short-term tactics and long-term strategies. Although the former are often necessary to forestall immediate habitat losses, the latter provide the critical framework of understanding needed to develop effective short-term priorities. Although many conservation biologists now emphasize short-term tactics, prudence dictates the concomitant establishment of long-term research projects aimed at answering fundamental ecological questions.
One deterrent to amassing such long-term databases, aside from time and funding, is a lack of suitable sites where such studies can be conducted on a large scale. We describe two established major land-holding networks in the United States that could serve as appropriate places to develop long-term studies: the National Science Foundation's Long-Term Ecological Research (LTER) sites and the U.S. Department of Energy's National Environmental Research Parks (NERPs). Because they consist of established research facilities, both networks can provide conservation biologists with "low-cost" baseline information on ecological processes, as well as access to a number of representative terrestrial and aquatic habitats under more or less "controlled" conditions suitable for long-term studies. We recommend that conservation biologists explore the possibility of using these or similarly available sites in their research programs.

18.
Determining whether the diet of predators has changed is an important ecological problem, and appropriate methodology is needed to test for differences or changes in diet. It is known that the fatty acid (FA) signature in a predator’s adipose tissue predictably reflects the prey consumed and that, consequently, a change in FA signatures can be largely attributed to changes in the predator’s diet composition. The use of FA signatures as a means of detecting change in diet presents some statistical challenges, however, since FA signatures are compositional and sample sizes relative to the dimension of a signature are often small because of biological constraints. Furthermore, FA signatures often contain zeros, precluding the direct use of traditional compositional data analysis methods. In this paper, we provide methodology for carrying out valid statistical tests for detecting changes in FA signatures, and we illustrate both the independent and the paired cases using simulation studies and real-life seabird and seal data. We conclude that the statistical challenges of FA data are overcome through nonparametric tests applied in the multivariate setting, with suitable test statistics capable of handling the zeros present in the data.
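One concrete instance of such a nonparametric multivariate test is a permutation test built on the two-sample energy statistic, which needs no log-ratio transform and therefore tolerates zeros. This is an illustrative choice; the paper's specific test statistics may differ:

```python
import numpy as np

def energy_stat(X, Y):
    """Two-sample energy statistic for multivariate samples (rows)."""
    def mean_dist(A, B):
        return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1).mean()
    n, m = len(X), len(Y)
    return (n * m / (n + m)) * (2 * mean_dist(X, Y)
                                - mean_dist(X, X) - mean_dist(Y, Y))

def perm_test(X, Y, n_perm=199, seed=0):
    """Permutation p-value for the independent two-sample case: group
    labels are exchangeable under the null of identical signatures."""
    rng = np.random.default_rng(seed)
    obs = energy_stat(X, Y)
    Z = np.vstack([X, Y])
    n = len(X)
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(len(Z))
        if energy_stat(Z[p[:n]], Z[p[n:]]) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)
```

Because only Euclidean distances between whole signatures enter the statistic, compositions containing zeros pose no difficulty, unlike log-ratio-based methods.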

19.
Population viability analysis programs are being used increasingly in research and management applications, but there has not been a systematic study of the congruence of different programs' predictions based on a single data set. We performed such an analysis using four population viability analysis computer programs: GAPPS, INMAT, RAMAS/AGE, and VORTEX. The standardized demographic rates used in all programs were generalized from hypothetical increasing and decreasing grizzly bear (Ursus arctos horribilis) populations. Idiosyncrasies of input format for each program led to minor differences in intrinsic growth rates that translated into striking differences in estimates of extinction rates and expected population size. In contrast, the addition of demographic stochasticity, environmental stochasticity, and inbreeding costs caused only a small divergence in viability predictions. However, the addition of density dependence caused large deviations between the programs despite our best attempts to use the same density-dependent functions. Population viability programs differ in how density dependence is incorporated, and the necessary functions are difficult to parameterize accurately. Thus, we recommend that unless data clearly suggest a particular density-dependent model, predictions based on population viability analysis should include at least one scenario without density dependence. Further, we describe output metrics that may differ between programs; development of future software could benefit from standardized input and output formats across different programs.

20.
Vortices could play an important role in the occurrence of certain biological phenomena, such as the massive proliferation of harmful algae in bodies of water. Many measures exist for detecting vortices in fluids, but little is known about the stochastic behavior of these quantities when the data contain statistical noise. Consequently, they provide no control over the probability of false positives and give little information about the risk of false negatives. Obtaining such control requires a statistical testing procedure. In this paper, we develop a test for vortices in random current fields using only the directions of the current observed at points on a regular grid. We construct a change-point test for spatially ordered angular data to detect the presence of a local vortex. A global vortex detection procedure based on this test is developed and applied to a data set from a lagoon located in the south of France. It is shown that this procedure can detect the presence of multiple vortices with good accuracy.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号