Similar Documents
20 similar documents found.
1.
Observations on axes which lack information on the direction of propagation are referred to as axial data. Such data are often encountered in the environmental sciences, e.g. observations on the propagation of cracks or on faults in mining walls. Even though such observations are recorded as angles, circular probability models are inappropriate for such data, since the constraint that observations lie only in [0, π) needs to be enforced. Probability models for axial data are argued here to have a general structure stemming from wrapping a circular distribution onto a semicircle. In particular, we consider the most popular circular model, the von Mises or circular normal distribution, and derive the corresponding axial normal distribution. Certain properties of this distribution are established. Maximum likelihood estimation of its parameters is shown to be, in contrast to trigonometric moment estimation, surprisingly appealing numerically. Finally, we illustrate our results with several real-life axial data sets. Received: September 2004 / Revised: December 2004
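A minimal sketch of the classical angle-doubling device for axial data, one way to realize the wrapping-on-a-semicircle construction described above: axes in [0, π) are doubled onto the full circle, a von Mises fit is obtained there, and the mean axis is recovered by halving. The one-shot concentration formula is a moment-style approximation standing in for the paper's full MLE.

```python
import numpy as np
from scipy.stats import vonmises

def fit_axial_von_mises(theta):
    """Fit a von Mises-type model to axial data theta in [0, pi) by angle doubling."""
    phi = 2.0 * theta                            # map axes onto the full circle
    C, S = np.cos(phi).mean(), np.sin(phi).mean()
    mu_axis = (0.5 * np.arctan2(S, C)) % np.pi   # mean axis, back in [0, pi)
    Rbar = np.hypot(C, S)                        # mean resultant length
    # crude moment-based concentration estimate (the MLE would solve A(kappa) = Rbar)
    kappa = Rbar * (2 - Rbar**2) / (1 - Rbar**2)
    return mu_axis, kappa

rng = np.random.default_rng(1)
axes = (vonmises.rvs(3.0, size=500, random_state=rng) / 2.0) % np.pi
print(fit_axial_von_mises(axes))
```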

2.
This paper compares procedures based on the extended quasi-likelihood, pseudo-likelihood and quasi-likelihood approaches for testing homogeneity of several proportions for over-dispersed binomial data. The type I errors of the Wald tests using the model-based and robust variance estimates, the score test, and the extended quasi-likelihood ratio test (deviance reduction test) were examined by simulation. The extended quasi-likelihood method performs less well when mean responses are close to 1 or 0. The model-based Wald test based on the quasi-likelihood performs best in maintaining the nominal level. The score test performs less well when the intracluster correlations are large or heterogeneous. In summary: (i) both the quasi-likelihood and pseudo-likelihood methods appear to be acceptable, but care must be taken when overfitting a variance function with small sample sizes; (ii) the extended quasi-likelihood approach is the least favourable method because its nominal level is much too high; and (iii) the robust variance estimator performs poorly, particularly when the sample size is small.
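As a rough illustration (not one of the exact procedures compared in the paper), a quasi-likelihood-style score test can adjust the usual Pearson homogeneity statistic by an estimated over-dispersion factor; the Pearson estimate of φ used here is an assumption.

```python
import numpy as np
from scipy.stats import chi2

def ql_homogeneity_test(groups):
    """groups: list of (y, n) arrays, one pair per treatment group (clusters).
    Quasi-likelihood-style test of equal proportions, with a Pearson
    estimate of the over-dispersion factor phi from the group-means model."""
    G = len(groups)
    p_g = [y.sum() / n.sum() for y, n in groups]
    M = sum(len(y) for y, _ in groups)
    phi = sum(((y - n * p)**2 / (n * p * (1 - p))).sum()
              for (y, n), p in zip(groups, p_g)) / (M - G)
    p0 = sum(y.sum() for y, _ in groups) / sum(n.sum() for _, n in groups)
    X2 = sum((y.sum() - n.sum() * p0)**2 / (n.sum() * p0 * (1 - p0))
             for y, n in groups)
    stat = X2 / phi                      # dispersion-adjusted chi-square
    return stat, chi2.sf(stat, G - 1)

rng = np.random.default_rng(0)
data = [(rng.binomial(10, 0.2, 15), np.full(15, 10)) for _ in range(3)]
print(ql_homogeneity_test(data))
```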

3.
The proper management of an ecological population is greatly aided by solid information about its species' abundances. For the general heterogeneous Poisson species abundance setting, we develop the non-parametric MLE for the entire probability model, namely for the total number N of species and the generating distribution F for the expected values of the species' abundances. Estimation of the entire probability model allows us to develop generator-based measures of ecological diversity and evenness which support inference over similar regions. Our methods also produce a goodness-of-fit test for the model as well as a likelihood ratio test for heterogeneity in the expected values of the species' abundances. These estimates and tests are examined in detail in the paper. In particular, we apply our methods to important data from the National Breeding Bird Survey and discuss how they can also be applied to sweep net sampling data. To further examine our methods, we provide simulations for several illustrative situations.

4.
A frequent assumption in environmental risk assessment is that the underlying distribution of an analyte concentration is lognormal. However, the distribution of a random variable whose log has a t-distribution has infinite mean. Because of the proximity of the standard normal and t-distributions, this suggests that a distribution such as the gamma or truncated normal, with smaller right-tail probabilities, might make a better statistical model for mean estimation than the lognormal. In order to assess the effect of departures from lognormality on lognormal-based statistics, we simulated complete lognormal, truncated normal, and gamma data for various sample sizes and coefficients of variation. In these cases, departures from lognormality were not easily detected with the Shapiro-Wilk test. Various lognormal-based estimates and tests were compared with alternative methods based on the ordinary sample mean and standard error. The examples were also considered in the presence of random left censoring, with the mean and standard error of the product limit estimate replacing the ordinary sample mean and standard error. The results suggest that, when estimating or testing a mean, if the assumption of lognormality is at all suspect, then lognormal-based approaches may not be as good as the alternative methods.
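A sketch of the kind of simulation the paper describes, under assumed settings: gamma data with a given coefficient of variation are generated, and the Shapiro-Wilk test is applied to the log-transformed values to see how often the (false) lognormality assumption is flagged.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, cv = 20, 1.0                        # sample size and coefficient of variation
shape = 1.0 / cv**2                    # gamma shape giving the target CV
reps, rejections = 2000, 0
for _ in range(reps):
    x = rng.gamma(shape, scale=1.0 / shape, size=n)    # mean 1, CV as set
    # Shapiro-Wilk applied to log(x) is a test of the lognormality assumption
    if stats.shapiro(np.log(x)).pvalue < 0.05:
        rejections += 1
print(f"power to detect non-lognormality: {rejections / reps:.3f}")
```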

5.
A ranked set sample (RSS), if not balanced, is simply a sample of independent order statistics generated from the same underlying distribution F. Kvam and Samaniego (1994) derived maximum likelihood estimates of F for a general RSS. In many applications, including some in the environmental sciences, prior information about F is available to supplement the data-based inference. In such cases, Bayes estimators should be considered for improved estimation. Bayes estimation (using the squared error loss function) of the unknown distribution function F is investigated with such samples. Additionally, the Bayes generalized maximum likelihood estimator (GMLE) is derived. An iterative scheme based on the EM algorithm is used to produce the GMLE of F. For the case of squared error loss, simple solutions are uncommon, and a procedure to find the Bayes estimate using the Gibbs sampler is illustrated. The methods are illustrated with data from the Natural Environmental Research Council of Great Britain (1975), representing water discharge of floods on the Nidd River in Yorkshire, England.

6.
A capture-recapture model with heterogeneity and behavioural response
We develop the non-parametric maximum likelihood estimator (MLE) of the full Mbh capture-recapture model, which utilizes both initial capture and recapture data and permits both heterogeneity (h) between animals and behavioural (b) response to capture. Our MLE procedure utilizes non-parametric maximum likelihood estimation of mixture distributions (Lindsay, 1983; Lindsay and Roeder, 1992) and the EM algorithm (Dempster et al., 1977). Our MLE provides the first non-parametric estimate of the bivariate capture-recapture distribution. Since non-parametric maximum likelihood estimation exists for the submodels Mh (allowing heterogeneity only), Mb (allowing behavioural response only) and M0 (allowing no changes), we develop maximum likelihood-based model selection, specifically the Akaike information criterion (AIC) (Akaike, 1973). The AIC procedure does well in detecting behavioural response but has difficulty in detecting heterogeneity.
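The full Mbh mixture MLE is beyond a short sketch, but the simplest submodel M0 illustrates the profile-likelihood and AIC machinery: with r distinct animals and n total captures over K occasions, p can be profiled out in closed form and N maximized by search. The toy data are hypothetical.

```python
import numpy as np
from scipy.special import gammaln

def m0_profile_loglik(N, r, n, K):
    """Log-likelihood of model M0 at population size N, profiling out p."""
    p = n / (K * N)                               # profile MLE of p given N
    return (gammaln(N + 1) - gammaln(N - r + 1)
            + n * np.log(p) + (K * N - n) * np.log1p(-p))

# toy data: r distinct animals seen, n total captures over K occasions
r, n, K = 40, 65, 5
Ns = np.arange(r, 500)
ll = np.array([m0_profile_loglik(N, r, n, K) for N in Ns])
N_hat = Ns[ll.argmax()]
aic = -2 * ll.max() + 2 * 2                       # two parameters: N and p
print(N_hat, aic)
```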

7.
Hidden Markov models for circular and linear-circular time series
We introduce a new class of circular time series based on hidden Markov models. These are compared with existing models, their properties are outlined, and issues relating to parameter estimation are discussed. The new models conveniently describe multi-modal circular time series as dependent mixtures of circular distributions. Two examples from biology and meteorology are used to illustrate the theory. Finally, we introduce a hidden Markov model for bivariate linear-circular time series and use it to describe larval movement of the fly Drosophila. Received: September 2003 / Revised: March 2004
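A sketch of the forward-algorithm log-likelihood for an HMM with von Mises state-dependent distributions, i.e. the "dependent mixture of circular distributions" idea; the two-state parameter values are illustrative assumptions, and an actual fit would maximize this function over the parameters.

```python
import numpy as np
from scipy.stats import vonmises
from scipy.special import logsumexp

def hmm_vm_loglik(angles, Gamma, mus, kappas, delta):
    """Forward-algorithm log-likelihood of an m-state HMM whose
    state-dependent distributions are von Mises."""
    logB = np.stack([vonmises.logpdf(angles, k, loc=m)
                     for m, k in zip(mus, kappas)], axis=1)   # (T, m)
    logGamma = np.log(Gamma)
    la = np.log(delta) + logB[0]                              # forward variables
    for t in range(1, len(angles)):
        la = logsumexp(la[:, None] + logGamma, axis=0) + logB[t]
    return logsumexp(la)

# toy circular series evaluated under an assumed 2-state model
Gamma = np.array([[0.9, 0.1], [0.2, 0.8]])
theta = np.random.default_rng(3).uniform(-np.pi, np.pi, 200)
print(hmm_vm_loglik(theta, Gamma, mus=[0.0, np.pi / 2],
                    kappas=[4.0, 2.0], delta=[0.5, 0.5]))
```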

8.
On estimating the exponent of power-law frequency distributions
White EP, Enquist BJ, Green JL. Ecology, 2008, 89(4): 905-912
Power-law frequency distributions characterize a wide array of natural phenomena. In ecology, biology, and many physical and social sciences, the exponents of these power laws are estimated to draw inference about the processes underlying the phenomenon, to test theoretical models, and to scale up from local observations to global patterns. Therefore, it is essential that these exponents be estimated accurately. Unfortunately, the binning-based methods traditionally used in ecology and other disciplines perform quite poorly. Here we discuss more sophisticated methods for fitting these exponents based on cumulative distribution functions and maximum likelihood estimation. We illustrate their superior performance at estimating known exponents and provide details on how and when ecologists should use them. Our results confirm that maximum likelihood estimation outperforms other methods in both accuracy and precision. Because of the use of biased statistical methods for estimating the exponent, the conclusions of several recently published papers should be revisited.
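A sketch of the continuous-case maximum likelihood estimator, α̂ = 1 + n / Σ ln(x_i / x_min), the Clauset-style MLE consistent with the abstract's recommendation (the paper's exact implementation details may differ), checked against data simulated with a known exponent.

```python
import numpy as np

def powerlaw_mle(x, xmin):
    """Continuous power-law exponent by maximum likelihood,
    alpha_hat = 1 + n / sum(log(x / xmin)), with its standard error."""
    x = x[x >= xmin]
    n = len(x)
    alpha = 1.0 + n / np.log(x / xmin).sum()
    return alpha, (alpha - 1) / np.sqrt(n)

# recover a known exponent from simulated data (inverse-CDF sampling)
rng = np.random.default_rng(0)
alpha_true, xmin = 2.5, 1.0
x = xmin * (1 - rng.uniform(size=10_000)) ** (-1 / (alpha_true - 1))
print(powerlaw_mle(x, xmin))    # approximately (2.5, 0.015)
```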

9.
Kodell and West (1993) describe two methods for calculating pointwise upper confidence limits on the risk function with normally distributed responses, using a certain definition of adverse quantitative effect. But Banga et al. (2000) have shown that these normal-theory methods break down when applied to skewed data. We accordingly develop a risk analysis model and associated likelihood-based methodology when the response follows either a gamma or reciprocal gamma distribution. The model supposes that the shape (index) parameter k of the response distribution is held fixed while the logarithm of the scale parameter is a linear model in terms of the dose level. Existence and uniqueness of the maximum likelihood estimates are established. Asymptotic likelihood-based upper and lower confidence limits on the risk are solutions of the Lagrange equations associated with a constrained optimization problem. Starting values for an iterative solution are obtained by replacing the Lagrange equations by the lowest-order terms in their asymptotic expansions. Three methods are then compared for calculating confidence limits on the risk: (i) the aforementioned starting values (LRAL method), (ii) full iterative solution of the Lagrange equations (LREL method), and (iii) bounds obtained using approximate normality of the maximum likelihood estimates with standard errors derived from the information matrix (MLE method). Simulation is used to assess coverage probabilities for the resulting upper confidence limits when the log of the scale parameter is quadratic in the dose level. Results indicate that coverage for the MLE method can be off by as much as 15 percentage points and converges very slowly to nominal coverage levels as the sample size increases. Coverage for the LRAL and LREL methods, on the other hand, is close to nominal levels unless (a) the sample size is small, say N < 25, (b) the index parameter is small, say k ≤ 1, and (c) the direction of adversity is to the left for the gamma distribution or to the right for the reciprocal gamma distribution.
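A sketch of maximum likelihood point estimation in the gamma version of this model (shape k common across doses, log scale linear in dose), on hypothetical data; the paper's likelihood-ratio confidence limits additionally require the constrained Lagrange machinery described above.

```python
import numpy as np
from scipy import stats, optimize

def negloglik(params, y, dose):
    """Gamma model: shape k shared across doses, log-scale linear in dose."""
    logk, b0, b1 = params
    k = np.exp(logk)
    return -stats.gamma.logpdf(y, k, scale=np.exp(b0 + b1 * dose)).sum()

rng = np.random.default_rng(1)
dose = np.repeat([0.0, 1.0, 2.0], 30)
y = rng.gamma(4.0, scale=np.exp(0.2 + 0.3 * dose))       # simulated responses
fit = optimize.minimize(negloglik, x0=[1.0, 0.0, 0.0], args=(y, dose))
logk, b0, b1 = fit.x
# point estimate of the risk that a response falls below a cutoff c at dose d
# (adversity to the left, as for the gamma case discussed above)
c, d = 2.0, 1.5
print(stats.gamma.cdf(c, np.exp(logk), scale=np.exp(b0 + b1 * d)))
```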

10.
In most real-data situations in the one-way design, both the underlying distribution and the shape of the dose-response curve are a priori unknown. The power of a trend test strongly depends on both. However, tests which are routinely used to analyze toxicological assays must be robust. We use nonparametric tests with different scores (powerful for different distributions) and different contrasts (powerful for different shapes) and take the maximum of all test statistics as a new test statistic. Simulation results indicate that this maximum test, which is a nonparametric multiple contrast test, stabilizes the power across various shapes and distributions. The investigated tests are applied to data from a toxicological assay.
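A sketch of a permutation version of such a maximum test, with illustrative (assumed) contrast choices; the published procedure standardizes the contrasts and uses multivariate quantiles rather than permutation.

```python
import numpy as np
from scipy.stats import rankdata

def max_contrast_test(groups, n_perm=5000, seed=0):
    """Nonparametric maximum test: rank scores, several trend contrasts
    (linear, control-vs-rest, first-vs-last as illustrative choices),
    p-value by permutation of group labels."""
    k = len(groups)
    contrasts = [np.linspace(-1, 1, k),                        # linear trend
                 np.array([-1] + [1 / (k - 1)] * (k - 1)),     # control vs rest
                 np.array([-1] + [0] * (k - 2) + [1])]         # first vs last
    y = np.concatenate(groups)
    labels = np.repeat(np.arange(k), [len(g) for g in groups])

    def stat(ranks, labels):
        means = np.array([ranks[labels == j].mean() for j in range(k)])
        return max(abs(c @ means) for c in contrasts)          # maximum statistic

    ranks = rankdata(y)
    t_obs = stat(ranks, labels)
    rng = np.random.default_rng(seed)
    t_null = [stat(ranks, rng.permutation(labels)) for _ in range(n_perm)]
    return t_obs, np.mean(np.array(t_null) >= t_obs)

g = [np.random.default_rng(i).normal(0.2 * i, 1, 10) for i in range(4)]
print(max_contrast_test(g))
```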

11.
We develop a spectral framework for testing the hypothesis of complete spatial randomness (CSR) for a spatial point pattern. Five formal tests based on the periodogram (sample spectrum) proposed in Mugglestone (1990) are considered. A simulation study is used to evaluate and compare the power of the tests against clustered, regular and inhomogeneous alternatives to CSR. A subset of the tests is shown to be uniformly more powerful than the others against the alternatives considered. The spectral tests are also compared with three widely used space-domain tests that are based on the mean nearest-neighbor distance, the reduced second-order moment function (K-function), and a bivariate Cramér-von Mises statistic. The test based on the scaled cumulative R-spectrum is more powerful than the space-domain tests for detecting clustered alternatives to CSR, especially when the number of events is small.
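A rough sketch inspired by the scaled cumulative R-spectrum idea, with the frequency set and scaling as simplifying assumptions: the point-pattern periodogram is computed on integer frequencies, its cumulative sum (ordered by frequency magnitude) is compared to the uniform line, and the maximal deviation is calibrated by Monte Carlo simulation of CSR.

```python
import numpy as np

def periodogram(x, y, pmax=8):
    """Point-pattern periodogram I(p,q) = |sum_j exp(-2*pi*i*(p*x+q*y))|^2 / n
    on integer frequencies over the unit square; (0,0) and conjugates excluded."""
    n = len(x)
    I, r = [], []
    for p in range(-pmax, pmax + 1):
        for q in range(0, pmax + 1):
            if q == 0 and p <= 0:
                continue
            F = np.exp(-2j * np.pi * (p * x + q * y)).sum()
            I.append(abs(F) ** 2 / n)
            r.append(np.hypot(p, q))
    return np.array(I), np.array(r)

def csr_test(x, y, n_sim=199, seed=0):
    """Monte Carlo test of CSR via the cumulative periodogram's maximal
    deviation from its CSR expectation."""
    def stat(x, y):
        I, r = periodogram(x, y)
        cum = np.cumsum(I[np.argsort(r)]) / I.sum()
        return np.abs(cum - np.linspace(1 / len(I), 1, len(I))).max()
    t_obs = stat(x, y)
    rng = np.random.default_rng(seed)
    t_sim = [stat(rng.uniform(size=len(x)), rng.uniform(size=len(x)))
             for _ in range(n_sim)]
    return np.mean([t >= t_obs for t in t_sim + [t_obs]])   # Monte Carlo p-value

pts = np.random.default_rng(7).uniform(size=(2, 60))
print(csr_test(pts[0], pts[1]))
```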

12.
Clough Y. Ecology, 2012, 93(8): 1809-1815
The need to model and test hypotheses about complex ecological systems has led to a steady increase in the use of path analytical techniques, which allow the modeling of multiple multivariate dependencies reflecting hypothesized causation and mechanisms. The aim is to estimate the direct, indirect, and total effects of one variable on another and to assess the adequacy of whole models. Path analytical techniques based on maximum likelihood currently used in ecology are rarely adequate for ecological data, which are often sparse, multi-level, and may contain nonlinear relationships as well as non-normal response data such as counts or proportions. Here I introduce a more flexible approach in the form of the joint application of hierarchical Bayes, Markov chain Monte Carlo algorithms, Shipley's d-sep test, and the potential outcomes framework to fit path models as well as to decompose and estimate effects. An example based on the direct and indirect interactions between ants, two insect herbivores, and a plant species demonstrates the implementation of these techniques using freely available software.
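One concrete ingredient named above is Shipley's d-sep test, which combines the p-values of the k independence claims implied by the hypothesized DAG into Fisher's C statistic; a minimal sketch follows, where the p-values are placeholders that would come from fitted submodels.

```python
import numpy as np
from scipy import stats

def fishers_C(pvalues):
    """Shipley's d-sep test: C = -2 * sum(ln p_i) is chi-squared with 2k df
    under the hypothesized causal structure, where the p_i test the k
    independence claims implied by missing paths in the DAG."""
    pvalues = np.asarray(pvalues)
    C = -2 * np.log(pvalues).sum()
    df = 2 * len(pvalues)
    return C, stats.chi2.sf(C, df)

# placeholder p-values from the independence claims of a hypothetical path
# model, each obtained from whatever (mixed, non-normal) submodel fits the data
print(fishers_C([0.42, 0.31, 0.77]))
```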

13.
Iwao's quadratic regression and Taylor's Power Law (TPL) are commonly used to model the variance as a function of the mean for sample counts of insect populations which exhibit spatial aggregation. The modeled variance and distribution of the mean are typically used in pest management programs to decide whether the population is above the action threshold in any management unit (MU) (e.g., orchard, forest compartment). For nested or multi-level sampling, the usual two-stage modeling procedure first obtains the sample variance for each MU and sampling level using ANOVA and then fits a regression of variance on the mean for each level using either the Iwao or TPL variance model. Here this approach is compared to the single-stage procedure of fitting a generalized linear mixed model (GLMM) directly to the count data, with both approaches demonstrated using two-level sampling. GLMMs and additive GLMMs (AGLMMs) with conditional Poisson variance function, as well as the extension to the negative binomial, are described. Generalization to more than two sampling levels is outlined. Formulae for calculating optimal relative sample sizes (ORSS) and the operating characteristic curve for the control decision are given for each model. The ORSS are independent of the mean in the case of the AGLMMs. The application described is estimation of the variance of the mean number of leaves per shoot occupied by immature stages of a defoliator of eucalypts, the Tasmanian Eucalyptus leaf beetle, based on a sample of trees within plots from each forest compartment. Historical population monitoring data were fitted using the above approaches.
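A minimal sketch of the regression step of the usual two-stage TPL procedure (the paper's point being that a single-stage GLMM fitted directly to the counts can be preferable); the per-unit means and variances shown are hypothetical.

```python
import numpy as np

def fit_tpl(means, variances):
    """Taylor's Power Law, var = a * mean^b, fitted two-stage style by
    ordinary least squares on the log-log scale."""
    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    return np.exp(log_a), b

# hypothetical per-management-unit sample means and variances of counts
m = np.array([0.5, 1.2, 3.0, 7.5, 16.0])
v = np.array([0.9, 2.8, 10.5, 45.0, 160.0])
print(fit_tpl(m, v))    # (a, b); b > 1 indicates spatial aggregation
```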

14.
A non-parametric kernel density estimation model for species sensitivity distributions
To address the shortcomings of existing parametric approaches to modelling species sensitivity distributions, we propose, for the first time, a species sensitivity distribution model based on non-parametric kernel density estimation, together with a corresponding optimal bandwidth and testing method. Taking inorganic mercury as a case study, the non-parametric kernel density method and three traditional parametric models were used to derive acute water quality criteria for inorganic mercury for the protection of aquatic life in China. The results show that the non-parametric kernel density method is substantially more robust and accurate than the traditional parametric models in deriving the water quality criterion for inorganic mercury and constructs a better species sensitivity distribution curve. The method enriches the theoretical methodology of water quality criteria and provides strong support for better protection of aquatic organisms.
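A minimal sketch of the idea, under assumptions: a Gaussian KDE (with scipy's default Scott bandwidth standing in for the paper's optimal window width) is fitted to log10 toxicity values, and the HC5 is read off the fitted CDF; the toxicity values are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_hc5(log10_toxicity):
    """Non-parametric SSD: Gaussian KDE over log10 species sensitivity
    values; HC5 is the 5th percentile of the fitted distribution."""
    kde = gaussian_kde(log10_toxicity)            # bandwidth: Scott's rule
    lo, hi = log10_toxicity.min() - 2, log10_toxicity.max() + 2
    grid = np.linspace(lo, hi, 2000)
    cdf = np.cumsum(kde(grid))                    # numerical CDF on the grid
    cdf /= cdf[-1]
    return 10 ** grid[np.searchsorted(cdf, 0.05)]

# hypothetical acute toxicity values (ug/L) for a set of species
lc50 = np.array([1.2, 3.5, 7.9, 15.0, 22.0, 48.0, 110.0, 260.0])
print(kde_hc5(np.log10(lc50)))
```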

15.
The fact that maternal exposure to some chemicals during pregnancy can adversely affect the structure and function of the nervous system in the offspring is well established. Government agencies have long been concerned with the regulation of developmental neurotoxicants and safe perinatal exposures. However, despite this concern, current guidelines provide only broad and nonspecific recommendations and lack clear directions for a model-based approach to risk estimation. In this paper we propose a dose-response model for the nonquantal data obtained from developmental neurotoxicological experiments. To account for the critical issue of correlation among responses from pups in the same litter, the so-called intralitter correlation, a hierarchical distributional structure is used to derive the underlying unconditional distribution of responses. The maximum likelihood method is used to estimate model parameters, and the covariance matrix of the estimates is derived. An example is used to illustrate the results.

16.
We derive some statistical properties of the distribution of two Negative Binomial random variables conditional on their total. This type of model can be appropriate for paired count data with Poisson over-dispersion such that the variance is a quadratic function of the mean, and it arises in many ecological applications, including comparative fishing studies of two vessels and/or gears. The parameter of interest is the ratio of pair means. We show that the conditional means and variances differ from those of the more commonly used Binomial model with variance adjusted for over-dispersion, and of the Beta-Binomial model. The conditional Negative Binomial model is complicated because it does not eliminate nuisance parameters as in the Poisson case. Maximum likelihood estimation with the unconditional Negative Binomial model can result in biased estimates of the over-dispersion parameter and poor confidence intervals for the ratio of means when there are many nuisance parameters. We propose three approaches to dealing with nuisance parameters in the conditional Negative Binomial model. We also study a random effects Binomial model for this type of data, and we develop an adjustment to the full-sample Negative Binomial profile likelihood to reduce the bias caused by nuisance parameters. We use simulations to examine the bias, precision, and accuracy of these estimators and confidence intervals. We conclude that the maximum likelihood method based on the full-sample Negative Binomial adjusted profile likelihood produces the best statistical inferences for the ratio of means when paired counts have Negative Binomial distributions. However, when there is uncertainty about the type of Poisson over-dispersion, a Binomial random effects model is a good choice.
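For orientation, a sketch of the Poisson baseline case in which conditioning does work: given pair totals, the first count is binomial with success probability π = ρ/(1+ρ), so a Clopper-Pearson interval for π back-transforms to an interval for the ratio of means ρ. This assumes the Poisson case; the paper's point is that Negative Binomial over-dispersion breaks this simplification.

```python
import numpy as np
from scipy.stats import beta

def poisson_conditional_ratio(x, y, conf=0.95):
    """Poisson case: conditional on pair totals t_i = x_i + y_i,
    x_i ~ Binomial(t_i, pi) with pi = rho / (1 + rho), rho the ratio of
    means.  Clopper-Pearson interval for pi, back-transformed to rho."""
    sx, st = x.sum(), x.sum() + y.sum()
    a = (1 - conf) / 2
    lo = beta.ppf(a, sx, st - sx + 1)         # exact binomial limits for pi
    hi = beta.ppf(1 - a, sx + 1, st - sx)
    to_rho = lambda p: p / (1 - p)
    return to_rho(sx / st), (to_rho(lo), to_rho(hi))

rng = np.random.default_rng(5)
x, y = rng.poisson(4.0, 30), rng.poisson(2.0, 30)
print(poisson_conditional_ratio(x, y))        # rho_hat near 2
```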

17.
The assessment of adverse effects in terrestrial ecosystems is of central importance to any Environmental Risk Assessment of Industrial Chemicals at an EU level. A conceptual proposal in this regard is clearly outlined in the ‘Technical Guidance Document’, which is currently undergoing revision; nonetheless, from an industrial point of view, some questions remain unresolved. The design of ecotoxicological tests should not focus exclusively on reproducibility under defined laboratory conditions; of equal if not greater importance is the ecological relevance of tests, i.e. the ability of a (chronic) test system to adequately reflect substance-related effects on wild species at the population level. For readily biodegradable substances in particular, the choice of an appropriate test substrate and an optimized feeding regime, as well as the analytical confirmation of nominal test concentrations, is vital. Bioaccumulation of industrial chemicals in soil-dwelling organisms usually takes place via the pore water phase. Oligochaete worms such as Eisenia and Lumbricus have proven to be promising candidates for an experimental approach. However, there is still an urgent need to develop a balanced understanding of how to evaluate the results of such studies. Variability is inherently high in terrestrial systems, making any differentiation between natural fluctuations of parameters and substance-induced effects on the structures and functions of ecosystems a difficult task. In order to strengthen its predictive value for population- or ecosystem-related effects, any experimental study has to fulfill specific quality criteria (e.g. acknowledged test procedure; Good Laboratory Practice; appropriate methodological approach, including statistics; meaningful endpoints; clear linkage of results and experimental design). Only if these criteria are met can test results be used for regulatory purposes.

18.
In geostatistics, both kriging and smoothing splines are commonly used to generate an interpolated map of a quantity of interest. The geoadditive model proposed by Kammann and Wand (J R Stat Soc: Ser C (Appl Stat) 52(1):1–18, 2003) represents a fusion of kriging and penalized spline additive models. Complex data issues, including non-linear covariate trends, multiple measurements at a location and clustered observations, are easily handled using the geoadditive model. We propose a likelihood-based estimation procedure that enables estimation of the range (spatial decay) parameter associated with the penalized splines of the spatial component in the geoadditive model. We show how the spatial covariance structure (covariogram) can be derived from the geoadditive model. In a simulation study, we show that the underlying spatial process and the predicted spatial map are estimated well using the proposed likelihood-based estimation procedure. We present several applications of the proposed methods to real-life data examples.
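A low-rank-kriging sketch in the spirit of the geoadditive model, under assumptions: fixed effects plus penalized exponential radial basis functions at spatial knots, fitted by ridge-type least squares. The range rho and penalty lam, which the paper estimates by likelihood, are fixed inputs here.

```python
import numpy as np

def fit_geoadditive(coords, X, y, knots, rho=1.0, lam=1.0):
    """Fixed effects X plus penalized radial basis functions at spatial
    knots, exp(-||s - kappa|| / rho), fitted by penalized least squares."""
    D = np.linalg.norm(coords[:, None, :] - knots[None, :, :], axis=2)
    Z = np.exp(-D / rho)                       # low-rank spatial basis
    C = np.hstack([X, Z])
    P = np.zeros((C.shape[1], C.shape[1]))
    P[X.shape[1]:, X.shape[1]:] = lam * np.eye(Z.shape[1])   # penalize Z only
    beta = np.linalg.solve(C.T @ C + P, C.T @ y)
    return beta, C @ beta                      # coefficients, fitted surface

rng = np.random.default_rng(2)
coords = rng.uniform(size=(100, 2))
X = np.column_stack([np.ones(100), coords[:, 0]])   # intercept + covariate
y = np.sin(3 * coords[:, 0]) + coords[:, 1] + rng.normal(0, 0.1, 100)
knots = rng.uniform(size=(20, 2))
beta, fitted = fit_geoadditive(coords, X, y, knots, rho=0.3, lam=0.1)
print(np.round(beta[:2], 2))                   # fixed-effect coefficients
```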

19.
Most of the statistical techniques used to evaluate data obtained from toxicity studies are based on the assumption that the data show a normal distribution and homogeneity of variance. A literature review of toxicity studies on laboratory animals reveals that in most cases homogeneity of variance alone is examined for the data obtained from these studies. But data that show homogeneity of variance need not always show a normal distribution. In fact, most of the data derived from toxicity studies, including hematological and biochemical parameters, show a non-normal distribution. On examining the normality of data obtained from various toxicity studies using different normality tests, we observed that the Shapiro-Wilk test is more appropriate than the Kolmogorov-Smirnov test, the Lilliefors test, normal probability paper analysis and the Chi-square test. But there are situations, especially in long-term toxicity studies, where normality is not shown by one or more of the dosage groups. In this situation, we propose that the data may be analyzed using Dunnett's multiple comparison test after excluding the data of the groups that do not show normality. However, the biological relevance of the excluded data has to be carefully scrutinized. We also observed that the tendency of the data to show a normal distribution seems to be related to the age of the animals. The present paper describes various tests commonly used to test normality and their power, and emphasizes the need to subject data obtained from toxicity studies to both normality and homogeneity tests. A flow chart suggesting the statistical techniques that may be used for data showing either a normal or a non-normal distribution is also proposed.
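A minimal sketch of the per-dose-group Shapiro-Wilk screening step suggested above; the dose groups and values are hypothetical.

```python
import numpy as np
from scipy import stats

def normality_by_group(groups, alpha=0.05):
    """Apply the Shapiro-Wilk test to each dosage group before choosing
    between parametric and non-parametric comparisons."""
    results = {}
    for name, x in groups.items():
        w, p = stats.shapiro(x)
        results[name] = {"W": round(w, 3), "p": round(p, 3),
                         "normal": p >= alpha}
    return results

rng = np.random.default_rng(0)
doses = {"control": rng.normal(10, 2, 12),
         "low":     rng.normal(11, 2, 12),
         "high":    rng.lognormal(2.4, 0.5, 12)}   # a deliberately skewed group
print(normality_by_group(doses))
```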

20.
An estimating function approach to the inference of catch-effort models
A class of catch-effort models, which allows for heterogeneous removal probabilities, is proposed for closed populations. The model includes three types of removal probabilities: multiplicative, Poisson and logistic. The usual removal and generalized removal models then become special cases. The equivalence of the proposed model and a special type of capture-recapture model is discussed. A unified estimating function approach is used to estimate the initial population size. For the homogeneous model, the resulting population size estimator based on optimal estimating functions is asymptotically equivalent to the maximum likelihood estimator. One advantage of our approach is that it can be extended to handle heterogeneous populations for which maximum likelihood estimators do not exist. The bootstrap method is applied to construct variance estimators and confidence intervals. We illustrate the method with two real data examples. Results of a simulation study investigating the performance of the proposed estimation procedure are presented.
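A sketch of the simplest (homogeneous, multiplicative) removal model in this class: with constant capture probability p, p profiles out in closed form and the initial size N is found by maximizing the profile log-likelihood. The catches are hypothetical; the paper's estimating-function approach extends beyond this to heterogeneous probabilities.

```python
import numpy as np
from scipy.special import gammaln

def removal_profile_loglik(N, catches):
    """Constant-p removal model: profile log-likelihood in the initial
    population size N, with p profiled out in closed form."""
    k, T = len(catches), sum(catches)
    S = sum(i * c for i, c in enumerate(catches))    # sum of (pass-1)*catch
    p = T / (T + S + k * (N - T))                    # profile MLE of p given N
    return (gammaln(N + 1) - gammaln(N - T + 1)
            + T * np.log(p) + (S + k * (N - T)) * np.log1p(-p))

catches = [120, 70, 40, 25]                          # removals per pass
Ns = np.arange(sum(catches), 1000)
ll = np.array([removal_profile_loglik(N, catches) for N in Ns])
print("N_hat =", Ns[ll.argmax()])
```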
