Similar articles
8 similar articles found.
1.
A new species abundance estimator is proposed for point-to-plant sampling in a design-based framework. The method is based on the relationship between each species' abundance and the probability density function of the relative squared point-to-plant distance. Using this result, a kernel estimator for species abundance is provided, and the nearest-neighbor method is suggested for bandwidth selection. The proposed estimator requires neither assumptions about the species point patterns nor corrections for sampling near the edges of the study region. Moreover, the estimator shows suitable statistical properties as well as good practical performance, as demonstrated in a simulation study.
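The abstract gives no code, but its two named ingredients (a kernel density estimator with nearest-neighbor bandwidth selection) can be sketched in one dimension as follows; the Gaussian kernel, the choice of k, and the simulated data are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def knn_bandwidths(sample, k):
    """Local bandwidth for each point: distance to its k-th nearest neighbor."""
    d = np.abs(sample[:, None] - sample[None, :])
    d.sort(axis=1)
    return d[:, k]  # column 0 holds the zero self-distance

def adaptive_kde(grid, sample, k=50):
    """Gaussian kernel density estimate with per-point k-NN bandwidths."""
    h = knn_bandwidths(sample, k)                      # one bandwidth per datum
    u = (grid[:, None] - sample[None, :]) / h
    kern = np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)
    return (kern / h).mean(axis=1)
```

Because every per-point kernel integrates to one, the resulting estimate also integrates to one over the real line, whatever the local bandwidths are.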

2.
The selection function (which shows how the frequency of sampling units with value X = x at one point in time must change in order to produce the distribution observed at a later time) is proposed for describing changes over time in an environmentally important variable X. It is shown that the theory of selection functions, as used in the study of natural selection and of resource selection by animals, requires some modification in this new application. A selection function is a useful tool in long-term monitoring studies because all changes in a distribution can be examined (rather than just changes in single parameters such as the mean), and because graphical presentations of the selection function are easy for non-statisticians to understand. Estimation of the selection function is discussed using a method appropriate for normal distributions, and bootstrapping is suggested for assessing the precision of estimates and for testing for significant differences between samples taken at different times. The methods are illustrated using data on water chemistry variables from a study of the effects of acid precipitation in Norway.
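A minimal sketch of the two estimation steps the abstract mentions: fit normal densities to the two samples and take their ratio as the selection function, then use a percentile bootstrap for uncertainty. The samples here are simulated placeholders, not the Norwegian water chemistry data.

```python
import numpy as np

def normal_pdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def selection_function(x, sample_t1, sample_t2):
    """Ratio of normal densities fitted to the later and earlier samples."""
    f1 = normal_pdf(x, sample_t1.mean(), sample_t1.std(ddof=1))
    f2 = normal_pdf(x, sample_t2.mean(), sample_t2.std(ddof=1))
    return f2 / f1

def bootstrap_band(x, s1, s2, n_boot=500, seed=0):
    """Percentile bootstrap interval for the selection function at points x."""
    rng = np.random.default_rng(seed)
    w = np.array([selection_function(
            x,
            rng.choice(s1, size=s1.size, replace=True),
            rng.choice(s2, size=s2.size, replace=True))
        for _ in range(n_boot)])
    return np.percentile(w, [2.5, 97.5], axis=0)
```

If the later distribution has shifted upward, the fitted selection function rises with x: units with large X have become relatively more frequent.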

3.
A nonparametric kernel density estimation model for species sensitivity distributions
To address the shortcomings of current parametric approaches to modeling species sensitivity distributions, we propose, for the first time, a species sensitivity distribution model based on nonparametric kernel density estimation, together with a corresponding optimal bandwidth and a goodness-of-fit test. Taking inorganic mercury as a case study, the nonparametric kernel density method and three traditional parametric models were each used to derive an acute water quality criterion for inorganic mercury for the protection of aquatic life in China. The results show that the nonparametric kernel density method is substantially more robust and more accurate than the traditional parametric models in deriving the inorganic mercury criterion, and constructs a better species sensitivity distribution curve. The proposed method enriches the theoretical methodology of water quality criteria and provides strong support for better protection of aquatic life.
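As a rough illustration of the approach (not the paper's own bandwidth rule, which the abstract does not specify), the sketch below fits a Gaussian kernel density to hypothetical log-transformed acute toxicity values, using Silverman's rule for the bandwidth, and inverts the smoothed CDF to get the HC5, the concentration expected to protect 95% of species. All data values are invented.

```python
import numpy as np
from math import erf

def kde_cdf(q, sample, h):
    """CDF of a Gaussian KDE: mean of normal CDFs centered at the data."""
    z = (q - sample) / (h * np.sqrt(2.0))
    return float(np.mean([0.5 * (1.0 + erf(v)) for v in z]))

# hypothetical acute toxicity values (ug/L) for several species, log-transformed
log_tox = np.log10(np.array([2.1, 4.7, 8.3, 12.0, 15.5, 31.0, 44.0, 90.0]))
h = 1.06 * log_tox.std(ddof=1) * log_tox.size ** (-0.2)  # Silverman's rule

# HC5 by bisection: find the log-concentration where the smoothed CDF is 0.05
lo, hi = log_tox.min() - 3 * h, log_tox.max() + 3 * h
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if kde_cdf(mid, log_tox, h) < 0.05 else (lo, mid)
hc5 = 10 ** (0.5 * (lo + hi))
```

A parametric SSD would replace the kernel CDF with, say, a fitted log-normal or log-logistic CDF; the nonparametric version avoids committing to any of those shapes.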

4.
Forest fire is regarded as one of the most significant factors leading to land degradation. When evaluating fire hazard or producing fire risk zone maps, quantitative analyses using historic fire data are often required, and throughout these modeling and multi-criteria analysis processes, the fire event itself is taken as the dependent variable. However, there are two main problematic issues in analyzing historic fire data. The first difficulty arises from the fact that it is in point format, whereas a continuous surface is frequently needed for statistically analyzing the relationship of fire events with other factors, such as anthropogenic, topographic and climatic conditions. Another, and probably the most bothersome, challenge is to overcome the inaccuracy inherent in historic fire data in point format, since the exact coordinates of ignition points are mostly unknown. In this study, kernel density mapping, a widely used method for converting discrete point data into a continuous raster surface, was used to map the historic fire data in Mumcular Forest Sub-district in Muğla, Turkey. The historic fire data was transferred onto the digital forest stand map of the study area, where the exact locations of ignition points are unknown; however, the exact number of ignition points in each compartment of the forest stand map is known. Different random distributions of ignition points were produced, and for each random distribution, kernel density maps were produced by applying two distinct kernel functions with several smoothing parameter options. The obtained maps were compared through correlation analysis in order to illustrate the effects of randomness, choice of kernel function and smoothing parameter. The proposed method gives a range of values rather than a single bandwidth value; however, it provides a more reliable approach than subjectively comparing maps with different bandwidths by eye.
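The workflow can be sketched in miniature: scatter the known number of ignition points at random within each compartment (simple rectangles stand in for real compartment polygons here), build kernel density rasters for two smoothing parameters, and correlate the rasters. The geometry, counts, and bandwidths are all placeholder assumptions.

```python
import numpy as np

def kernel_surface(points, xs, ys, h):
    """2-D Gaussian kernel density raster from a set of ignition points."""
    gx, gy = np.meshgrid(xs, ys)
    dens = np.zeros_like(gx)
    for px, py in points:
        dens += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * h * h))
    return dens / (2 * np.pi * h * h * len(points))

rng = np.random.default_rng(1)
# two rectangular "compartments" with known ignition counts 5 and 3
pts = np.vstack([rng.uniform([0, 0], [1, 1], size=(5, 2)),
                 rng.uniform([1, 0], [2, 1], size=(3, 2))])
xs = np.linspace(0, 2, 81)
ys = np.linspace(0, 1, 41)
map_a = kernel_surface(pts, xs, ys, h=0.15)
map_b = kernel_surface(pts, xs, ys, h=0.30)
# correlation between rasters quantifies the effect of the bandwidth choice
r = np.corrcoef(map_a.ravel(), map_b.ravel())[0, 1]
```

Repeating this over many random point placements yields the distribution of correlations that the study uses in place of a single subjective bandwidth comparison.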

5.
Bayesian hierarchical models were used to assess trends of harbor seals, Phoca vitulina richardsi, in Prince William Sound, Alaska, following the 1989 Exxon Valdez oil spill. Data consisted of 4–10 replicate observations per year at 25 sites over 10 years. We had multiple objectives, including estimating the effects of covariates on seal counts, and estimating trend and abundance, both per site and overall. We considered a Bayesian hierarchical model to meet our objectives. The model consists of a Poisson regression model for each site. For each observation the logarithm of the mean of the Poisson distribution was a linear model with the following factors: (1) intercept for each site and year, (2) time of year, (3) time of day, (4) time relative to low tide, and (5) tide height. The intercept for each site was then given a linear trend model for year. As part of the hierarchical model, parameters for each site were given a prior distribution to summarize overall effects. Results showed that at most sites, (1) trend is down; counts decreased yearly, (2) counts decrease throughout August, (3) counts decrease throughout the day, (4) counts are at a maximum very near to low tide, and (5) counts decrease as the height of the low tide increases; however, there was considerable variation among sites. To obtain an overall trend we used a weighted average of the trend at each site, where the weights depended on the overall abundance of a site. Results indicate a 3.3% decrease per year over the time period.
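The per-site building block, a Poisson regression with log link, can be sketched without the Bayesian layer. The fit below uses iteratively reweighted least squares on simulated covariates (an intercept and one trend term), so the design and coefficients are illustrative, not the seal data or the paper's full hierarchical model.

```python
import numpy as np

def fit_poisson(X, y, n_iter=25):
    """Poisson GLM with log link, fit by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)
        z = eta + (y - mu) / mu                 # working response
        W = mu                                  # working weights
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, n)])  # intercept + trend covariate
true_beta = np.array([1.0, -0.5])                          # negative slope: declining counts
y = rng.poisson(np.exp(X @ true_beta))
beta_hat = fit_poisson(X, y)
```

In the hierarchical version, each site gets its own coefficient vector, and those vectors share a common prior that pools information across sites.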

6.
Kernel density estimators are often used to estimate the utilization distributions (UDs) of animals. Kernel UD estimates have a strong theoretical basis and perform well, but are usually reported without estimates of error or uncertainty. It is intuitively and theoretically appealing to estimate the sampling error in kernel UD estimates using bootstrapping. However, standard equations for kernel density estimates are complicated and computationally expensive. Bootstrapping requires computing hundreds or thousands of probability densities and is impractical when the number of observations or the area of interest is large. We used the fast Fourier transform (FFT) and the discrete convolution theorem to create a bootstrapping algorithm fast enough to run on commonly available desktop or laptop computers. Application of the FFT method to a large (n > 20,000) set of radio telemetry data would provide a 99.6% reduction in computation time (i.e., 1.6 hours as opposed to 444) for 1000 bootstrap UD estimates. Bootstrap error contours were computed using data from a radio-collared polar bear (Ursus maritimus) in the Beaufort Sea north of Alaska.
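The core trick can be shown in one dimension: bin the observations onto a regular grid, then evaluate the kernel smooth as a circular convolution via the FFT, so each bootstrap replicate costs O(m log m) in the grid size rather than O(nm) in the sample size. This is a sketch of the technique, not the authors' 2-D implementation.

```python
import numpy as np

def kde_fft(data, grid, h):
    """Gaussian KDE evaluated on an equally spaced grid via FFT convolution."""
    m = grid.size
    dx = grid[1] - grid[0]
    # bin counts so that bin centers coincide with the grid points
    counts, _ = np.histogram(data, bins=m,
                             range=(grid[0] - dx / 2, grid[-1] + dx / 2))
    # kernel sampled at circular grid distances (symmetric, centered at index 0)
    idx = np.arange(m)
    dist = np.minimum(idx, m - idx) * dx
    kern = np.exp(-0.5 * (dist / h) ** 2) / (h * np.sqrt(2 * np.pi))
    # discrete convolution theorem: convolve counts with the kernel in one pass
    dens = np.fft.irfft(np.fft.rfft(counts) * np.fft.rfft(kern), n=m)
    return dens / data.size
```

The grid must extend several bandwidths beyond the data, otherwise the circular convolution wraps kernel mass around the edges.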

7.
We present a robust sampling methodology for estimating population size using line transect and capture-recapture procedures in aerial surveys. Aerial surveys usually underestimate population density because animals are missed. Combining capture-recapture and line transect sampling methods with multiple observers allows violation of the assumption that all animals on the centreline are sighted from the air. We illustrate our method with an example involving inanimate objects, which shows evidence of failure of the assumption that all objects on the centreline have probability 1 of being detected. A simulation study is implemented to evaluate the performance of three variations of the Lincoln-Petersen estimator: the overall estimator, the stratified estimator, and the general stratified estimator based on the combined likelihood proposed in this paper. The stratified Lincoln-Petersen estimator based on the combined likelihood is found to be generally superior to the other estimators.
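The basic two-observer Lincoln-Petersen idea underlying all three variants is simple to state in code (this is the classic estimator and its standard Chapman bias correction, not the paper's stratified combined-likelihood version, which the abstract does not spell out):

```python
def lincoln_petersen(n1, n2, m2):
    """Classic two-sample estimator: observer 1 sights n1 animals, observer 2
    sights n2, and m2 are sighted by both (the 'recaptures')."""
    return n1 * n2 / m2

def chapman(n1, n2, m2):
    """Bias-corrected Chapman variant; also defined when m2 == 0."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1
```

For example, if observer 1 detects 100 objects, observer 2 detects 80, and 40 are detected by both, the Lincoln-Petersen estimate is 200. Stratifying by distance from the centreline, as the paper does, applies the same idea within bins where detection probability is roughly constant.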

8.
Models that predict distribution are now widely used to understand the patterns and processes of plant and animal occurrence as well as to guide conservation and management of rare or threatened species. Application of these methods has led to corresponding studies evaluating the sensitivity of model performance to requisite data and other factors that may lead to imprecise or false inferences. We expand upon these works by providing a relative measure of the sensitivity of model parameters and predictions to common sources of error, bias, and variability. We used a one-at-a-time sample design and GPS location data for woodland caribou (Rangifer tarandus caribou) to assess one common species-distribution model: a resource selection function. Our measures of sensitivity included change in coefficient values, prediction success, and the area of mapped habitats following the systematic introduction of geographic error and bias in occurrence data, thematic misclassification of resource maps, and variation in model design. Results suggested that error, bias and model variation have a large impact on the direct interpretation of coefficients. Prediction success and definition of important habitats were less responsive to the perturbations we introduced to the baseline model. Model coefficients, prediction success, and area of ranked habitats were most sensitive to positional error in species locations, followed by sampling bias, misclassification of resources, and variation in model design. We recommend that researchers report, and practitioners consider, levels of error and bias introduced to predictive species-distribution models. Formal sensitivity and uncertainty analyses are the most effective means for evaluating and focusing improvements on input data and considering the range of values possible from imperfect models.
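The one-at-a-time design is generic enough to sketch: perturb each input in turn while holding the others at baseline, and record the change in model output. The toy resource selection score, its coefficients, and the perturbation sizes below are placeholder assumptions, not the caribou model.

```python
import numpy as np

def oat_sensitivity(model, baseline, deltas):
    """One-at-a-time sensitivity: change in model output when each input is
    perturbed individually, all other inputs held at their baseline values."""
    y0 = model(baseline)
    effects = {}
    for name, d in deltas.items():
        x = dict(baseline)       # copy so other inputs stay at baseline
        x[name] += d
        effects[name] = model(x) - y0
    return effects

# toy stand-in for a fitted resource selection function (linear score)
def rsf_score(x):
    return 1.2 * x["elevation"] - 0.8 * x["road_density"] + 0.4 * x["forest_cover"]

base = {"elevation": 0.5, "road_density": 0.2, "forest_cover": 0.7}
effects = oat_sensitivity(rsf_score, base, {k: 0.1 for k in base})
```

Ranking the inputs by the magnitude of their effect gives the kind of ordering the study reports (positional error first, then sampling bias, misclassification, and model design).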
