Similar Literature
20 similar records found
1.
The comparison of increasing doses of a treatment with a negative control is frequently part of toxicological studies. For normally distributed data, Williams (1971, 1972) introduced a maximum likelihood test under total order restriction, but until now there appears to have been no solution for the arbitrary unbalanced case. Following the idea proposed by Robertson et al. (1988), we apply Williams's basic concept to the class of multiple contrast tests for the general unbalanced parametric set-up. Simulation results for size and power and two examples for estimating the minimal toxic dose (MTD) are given.
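
To make the idea concrete, here is a minimal Python sketch (not the authors' exact procedure) of a Williams-type multiple contrast test for an unbalanced one-way layout. The sample-size weighting of the contrasts follows the usual Williams-type construction; calibrating the maximum contrast statistic by permutation rather than by a multivariate t distribution is an assumption of this sketch.

```python
import numpy as np

def williams_contrasts(ns):
    """Williams-type contrast matrix for a control plus k dose groups.

    Contrast j compares the control mean with the sample-size-weighted
    pooled mean of the j highest dose groups.
    """
    ns = np.asarray(ns, dtype=float)
    k = len(ns) - 1                       # number of dose groups
    C = np.zeros((k, k + 1))
    for j in range(1, k + 1):
        top = ns[k - j + 1:]              # the j highest dose groups
        C[j - 1, 0] = -1.0
        C[j - 1, k - j + 1:] = top / top.sum()
    return C

def max_contrast_stat(groups, C):
    ns = np.array([len(g) for g in groups], dtype=float)
    means = np.array([np.mean(g) for g in groups])
    # pooled variance estimate across all groups
    ss = sum(((np.asarray(g) - m) ** 2).sum() for g, m in zip(groups, means))
    s2 = ss / (ns.sum() - len(groups))
    t = C @ means / np.sqrt(s2 * (C ** 2 / ns).sum(axis=1))
    return t.max()

def williams_mct(groups, n_perm=5000, seed=None):
    rng = np.random.default_rng(seed)
    C = williams_contrasts([len(g) for g in groups])
    obs = max_contrast_stat(groups, C)
    pooled = np.concatenate([np.asarray(g, float) for g in groups])
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm = np.split(pooled, np.cumsum(sizes)[:-1])
        if max_contrast_stat(perm, C) >= obs:
            count += 1
    return obs, (count + 1) / (n_perm + 1)

# control plus three increasing doses, deliberately unbalanced
rng = np.random.default_rng(1)
groups = [rng.normal(0, 1, 12), rng.normal(0.2, 1, 8),
          rng.normal(0.6, 1, 10), rng.normal(1.0, 1, 6)]
stat, p = williams_mct(groups, seed=2)
print(f"max Williams contrast t = {stat:.2f}, permutation p = {p:.3f}")
```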

2.
Cleanup standards at hazardous waste sites include (i) numeric standards (often risk-based), (ii) background standards in which the remediated site is compared with data from a supposedly clean region, and (iii) interim standards in which the remediated site is compared with preremediation data from the same site. The latter are especially appropriate for verifying progress when an innovative, but unproven, technology is used for remediation. Standards of type (i) require one-sample statistical tests, while those of types (ii) and (iii) call for two-sample tests. This paper considers two-sample tests with an emphasis on the type (iii) scenario. Both parametric (likelihood ratio) and nonparametric (linear rank) protocols are examined. The methods are illustrated with preremediation data from a site on the National Priorities List. The results indicate that nonparametric procedures can be quite competitive (in terms of power) with distributional modelling provided a near-optimal rank test is selected. Suggestions are given for identifying such rank tests. The results also confirm the importance of sound baseline sampling; no amount of post-remediation sampling can overcome baseline deficiencies. This paper has been prepared with partial support from the United States Environmental Protection Agency under Cooperative Agreement Number CR-815273. The contents have not been subject to Agency review and therefore do not necessarily reflect the views or policies of the Agency, and no official endorsement should be inferred.
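
A minimal sketch of the type (iii) two-sample setting with made-up lognormal concentration data: a nonparametric linear rank test (Wilcoxon/Mann-Whitney) alongside a parametric counterpart under a distributional model.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre  = rng.lognormal(mean=1.0, sigma=0.8, size=30)   # preremediation concentrations
post = rng.lognormal(mean=0.4, sigma=0.8, size=25)   # postremediation concentrations

# Nonparametric two-sample comparison:
# H0: no downward shift from pre- to post-remediation.
u = stats.mannwhitneyu(post, pre, alternative="less")
print(f"Mann-Whitney U = {u.statistic:.1f}, one-sided p = {u.pvalue:.4f}")

# Parametric counterpart under a lognormal model: t test on the log scale.
t = stats.ttest_ind(np.log(post), np.log(pre), alternative="less")
print(f"t = {t.statistic:.2f}, one-sided p = {t.pvalue:.4f}")
```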

3.
A new statistical testing approach using a weighted logrank statistic is developed for rodent tumorigenicity assays that have a single terminal sacrifice but not cause-of-death data. Instead of using cause-of-death assignment by pathologists, the number of fatal tumors is estimated by a constrained nonparametric maximum likelihood estimation method. For data lacking cause-of-death information, the Peto test is modified with estimated numbers of fatal tumors and a Fleming–Harrington-type weight, which is based on an estimated tumor survival function. A bootstrap resampling method is used to estimate the weight function. The proposed testing method with the weight adjustment appears to improve the performance in various situations of single-sacrifice animal experiments. A Monte Carlo simulation study for the proposed test is conducted to assess size and power of the test. This testing approach is illustrated using a real data set.
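
The following is a generic Fleming-Harrington weighted logrank statistic in plain numpy, with the weight taken from the pooled Kaplan-Meier curve. It illustrates the weighting device only; it is not the paper's modified Peto test with estimated fatal-tumor counts, and the toy data and weight choice are assumptions of this sketch.

```python
import numpy as np

def fh_weighted_logrank(time, event, group, rho=1.0, gamma=0.0):
    """Fleming-Harrington G(rho, gamma) weighted logrank statistic.

    Weights w(t) = S(t-)^rho * (1 - S(t-))^gamma use the pooled
    Kaplan-Meier estimate S evaluated just before each event time.
    """
    time, event, group = map(np.asarray, (time, event, group))
    times = np.unique(time[event == 1])   # sorted distinct event times
    S_prev = 1.0                          # pooled KM just before current time
    num, var = 0.0, 0.0
    for t in times:
        at_risk = time >= t
        n, n1 = at_risk.sum(), (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        w = S_prev ** rho * (1 - S_prev) ** gamma
        num += w * (d1 - d * n1 / n)      # observed minus expected, weighted
        if n > 1:
            var += w**2 * d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
        S_prev *= 1 - d / n               # update pooled KM after time t
    return num / np.sqrt(var)

# toy single-sacrifice-style data: terminal sacrifice censors at t = 12
rng = np.random.default_rng(2)
t0, t1 = rng.exponential(10, 40), rng.exponential(6, 40)
raw = np.concatenate([t0, t1])
time = raw.clip(max=12.0)
event = (raw < 12.0).astype(int)
group = np.repeat([0, 1], 40)
z = fh_weighted_logrank(time, event, group, rho=1.0, gamma=0.0)
print(f"weighted logrank Z = {z:.2f}")
```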

4.
Analysis of capture-recapture data often involves maximizing a complex likelihood function with many unknown parameters. Statistical inference based on selection of a proper model depends on successful attainment of this maximum. An EM algorithm is developed for obtaining maximum likelihood estimates of capture and survival probabilities, conditional on first capture, from standard capture-recapture data. The algorithm does not require the use of numerical derivatives, which may improve precision and stability relative to other estimation schemes. The asymptotic covariance matrix of the estimated parameters can be obtained using the supplemented EM algorithm. The EM algorithm is compared to a more traditional Newton-Raphson algorithm with both a simulated and a real dataset. The two algorithms result in the same parameter estimates, but Newton-Raphson variance estimates depend on a numerically estimated Hessian matrix that is sensitive to step size choice.
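
The paper's EM targets capture and survival probabilities in an open-population model; as a much simpler illustration of the same device (treating never-captured animals as the missing data), here is an EM-style iteration for the closed-population model M0 with a constant capture probability. The model choice and all names are stand-ins, not the authors' algorithm.

```python
import numpy as np

def em_closed_population(capture_counts, T, tol=1e-10, max_iter=1000):
    """EM-style estimation for closed-population model M0.

    capture_counts: captures per observed animal (each caught >= 1 time
    over T occasions). Animals never captured are the missing data:
    the E-step imputes their expected number, the M-step updates p.
    """
    counts = np.asarray(capture_counts)
    D = len(counts)                  # distinct animals observed
    total = counts.sum()             # total number of captures
    p = 0.5
    for _ in range(max_iter):
        q = 1.0 - p
        # E-step: expected number of animals never captured
        n0 = D * q**T / (1.0 - q**T)
        # M-step: MLE of p from the completed data
        p_new = total / (T * (D + n0))
        if abs(p_new - p) < tol:
            p = p_new
            break
        p = p_new
    q = 1.0 - p
    return p, D + D * q**T / (1.0 - q**T)   # (p_hat, N_hat)

# simulate: N = 100 animals, T = 5 occasions, true p = 0.3
rng = np.random.default_rng(3)
caps = rng.binomial(5, 0.3, size=100)
observed = caps[caps > 0]            # zero-history animals are unobserved
p_hat, N_hat = em_closed_population(observed, T=5)
print(f"p_hat = {p_hat:.3f}, N_hat = {N_hat:.1f}")
```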

5.
We utilize mixture models and nonparametric maximum likelihood estimation both to develop a likelihood ratio test (LRT) for a common simplifying assumption and to allow for heterogeneity within premarked cohort studies. Our methods allow estimation of the entire probability model, so one can not only estimate many parameters of interest but also bootstrap from the estimated model to predict many quantities, including the standard deviations of estimators. Simulations suggest that our LRT has the appropriate protection for Type I error and often has good power. In practice, our LRT is important for determining the appropriateness of estimators and for examining whether a simple design with only one capture period could be used for a future similar study.
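
A hedged illustration of a bootstrap-calibrated likelihood ratio test, using a parametric one- versus two-component Gaussian mixture as a stand-in for the paper's nonparametric mixture setting; the model, the bootstrap scheme, and the sample data are all assumptions of the sketch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def lrt_stat(x):
    x = x.reshape(-1, 1)
    g1 = GaussianMixture(1, random_state=0).fit(x)
    g2 = GaussianMixture(2, n_init=5, random_state=0).fit(x)
    # score() returns the mean log-likelihood per observation
    return 2 * len(x) * (g2.score(x) - g1.score(x))

rng = np.random.default_rng(4)
x = np.concatenate([rng.normal(0, 1, 60), rng.normal(2.5, 1, 40)])

obs = lrt_stat(x)
# parametric bootstrap under H0 (a single normal component)
mu, sd = x.mean(), x.std(ddof=1)
boot = np.array([lrt_stat(rng.normal(mu, sd, len(x))) for _ in range(200)])
p = (1 + (boot >= obs).sum()) / (1 + len(boot))
print(f"LRT = {obs:.2f}, bootstrap p = {p:.3f}")
```

The bootstrap calibration sidesteps the non-standard asymptotics of mixture LRTs, which is the usual reason for resampling in this setting.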

6.
Developmental toxicity studies are widely used to investigate the potential risk of environmental hazards. In dose–response experiments, subjects are randomly allocated to groups receiving various dose levels. Tests for trend are then often applied to assess possible dose effects. Recent techniques for risk assessment in this area are based on fitting dose–response models. The complexity of such studies implies a number of non-trivial challenges for model development and the construction of dose-related trend tests, including the hierarchical structure of the data, litter effects inducing extra variation, the functional form of the dose–response curve, the adverse event at the dam or fetus level, the inference paradigm, etc. The purpose of this paper is to propose a Bayesian trend test based on a non-linear power model for the dose effect, using an appropriate model for clustered binary data. Our work is motivated by the analysis of developmental toxicity studies, in which the offspring of exposed and control rodents are examined for defects. Simulations show the performance of the method over a number of samples generated under typical experimental conditions.
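
The sketch below is not the authors' model, but it assembles the ingredients they describe: a power-of-dose term, a beta-binomial likelihood for litter effects, and a posterior trend assessment, fitted with a simple random-walk Metropolis sampler. The priors, parameterization, tuning constants, and data are all assumptions of the sketch.

```python
import numpy as np
from scipy.stats import betabinom, norm

# litter-level data: dose, litter size, affected fetuses (made up)
dose = np.repeat([0.0, 0.25, 0.5, 1.0], 10)
n = np.random.default_rng(5).integers(8, 14, size=40)
y = np.random.default_rng(6).binomial(n, 0.05 + 0.15 * dose**1.5)

def log_post(theta):
    a, b, lg, lphi = theta            # intercept, slope, log gamma, log phi
    g, phi = np.exp(lg), np.exp(lphi)
    p = 1 / (1 + np.exp(-(a + b * dose**g)))     # power dose-response
    # beta-binomial absorbs litter effects (extra-binomial variation)
    ll = betabinom.logpmf(y, n, p * phi, (1 - p) * phi).sum()
    prior = norm.logpdf([a, b, lg, lphi], 0, [10, 10, 1, 2]).sum()
    return ll + prior

rng = np.random.default_rng(7)
theta = np.array([-2.0, 1.0, 0.0, 2.0])
lp = log_post(theta)
draws = []
for i in range(20000):
    prop = theta + rng.normal(0, 0.08, 4)        # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if i >= 5000 and i % 10 == 0:                # burn-in, then thin
        draws.append(theta.copy())

beta_draws = np.array(draws)[:, 1]
print("posterior P(beta > 0) =", (beta_draws > 0).mean())
```

A posterior probability of a positive slope near 1 plays the role of a significant trend test result.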

7.
In vivo DNA damage in the earthworm Eisenia foetida under copper exposure
The dynamic effect of Cu2+ exposure dose on in vivo DNA damage in the earthworm Eisenia foetida was studied with the alkaline single-cell gel electrophoresis (comet) assay. The results showed that tail DNA content and tail length in coelomocytes were non-normally distributed in both the Cu2+-free control and the Cu2+-treated groups (p < 0.05). After 72 h of exposure, tail DNA content peaked at 41.44% in the 125 mg·L-1 Cu2+ group, and tail length peaked at 33.79 μm in the 100 mg·L-1 group. The frequency of damage in both endpoints increased with Cu2+ dose, and tail DNA content and tail length differed significantly between the control and treated groups (p < 0.05). Spearman nonparametric correlation analysis showed a significant correlation between percent tail DNA and tail length (p < 0.01, n = 21), and Cu2+ exposure concentration showed a clear dose-effect relationship with both endpoints (p < 0.01). DNA damage was greatest (grade 3) after 72 h of exposure at 125 mg·L-1 Cu2+. Earthworm DNA biomarkers are thus an important indicator for the genotoxicological diagnosis of heavy-metal pollution, and the alkaline SCGE assay is an effective means of detecting in vivo DNA damage in Eisenia foetida exposed to Cu2+.
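
The Spearman analysis reported above is straightforward to reproduce on comet-assay endpoints; the values below are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

# hypothetical per-group comet-assay summaries (illustrative only)
tail_dna = np.array([5.1, 7.9, 12.4, 18.0, 24.6, 33.2, 41.4])  # % tail DNA
tail_len = np.array([4.0, 6.2, 10.1, 15.3, 21.8, 30.5, 33.8])  # tail length, um
dose     = np.array([0, 25, 50, 75, 100, 112.5, 125])           # mg/L Cu2+

rho, p = stats.spearmanr(tail_dna, tail_len)
print(f"tail DNA vs tail length: rho = {rho:.2f}, p = {p:.4f}")
rho_d, p_d = stats.spearmanr(dose, tail_dna)
print(f"dose vs tail DNA:        rho = {rho_d:.2f}, p = {p_d:.4f}")
```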

8.
Nonparametric tests in meta-analysis: a worked application of resampling tests
Meta-analysis is a statistical method for synthesizing the results of multiple independent experiments on a common topic. Nonparametric meta-analysis by resampling tests makes no assumption about the distribution of the primary data and can therefore be used when that distribution is unknown. The bootstrap can provide a confidence interval for the overall effect size but cannot test whether within-group heterogeneity is significant; combining the bootstrap with randomization tests effectively fills this gap and allows one to judge whether between-group differences are significant. A worked example shows that resampling tests are less conservative than parametric tests while giving results that differ little from them.
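
A minimal sketch of the two resampling devices described: a bootstrap confidence interval for the overall effect size, and a randomization test for a between-group difference. The effect sizes are made up, and the unweighted mean is a simplification of real meta-analytic practice.

```python
import numpy as np

rng = np.random.default_rng(8)
# effect sizes from independent studies, two a-priori groups (made up)
g1 = np.array([0.12, 0.35, 0.41, 0.08, 0.52, 0.27])
g2 = np.array([0.61, 0.78, 0.44, 0.95, 0.70])

# bootstrap CI for the overall mean effect size
all_es = np.concatenate([g1, g2])
boot = [rng.choice(all_es, len(all_es), replace=True).mean()
        for _ in range(9999)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"overall effect = {all_es.mean():.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")

# randomization test for the between-group difference
obs = g2.mean() - g1.mean()
count = 0
for _ in range(9999):
    perm = rng.permutation(all_es)
    if abs(perm[:len(g2)].mean() - perm[len(g2):].mean()) >= abs(obs):
        count += 1
print(f"group difference = {obs:.3f}, "
      f"randomization p = {(count + 1) / 10000:.4f}")
```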

9.
The analysis of habitat selection in radio-tagged animals is approached by comparing the proportions of use against the proportions of availability observed for each habitat type. Since the data are linearly dependent, with singular variance-covariance matrices, standard multivariate statistical tests cannot be applied. To bypass the problem, compositional data analysis is customarily performed via a log-ratio transform of the sample observations. This procedure is criticized in this paper, emphasizing several drawbacks that may arise from the use of compositional analysis. An alternative nonparametric solution is proposed in the framework of multiple testing. Habitat use is assessed separately for each habitat type by means of the sign test performed on the original observations. The resulting p values are combined in an overall test statistic whose significance is determined by permuting the sample observations. The theoretical findings of the paper are checked by simulation studies. Applications to case studies previously considered in the literature are discussed.
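
A rough sketch of the scheme: per-habitat sign tests on the original use-availability differences, combined into an overall statistic calibrated by permutation. Using Fisher's combination of p values and sign-flipping as the permutation device are assumptions of this sketch, not necessarily the paper's exact choices.

```python
import numpy as np
from scipy.stats import binomtest

def habitat_sign_tests(use, avail, n_perm=5000, seed=None):
    """Per-habitat sign tests on use - availability, combined by permutation.

    use, avail: (animals x habitat types) proportion matrices. The overall
    statistic is Fisher's combination of the per-habitat sign-test p values;
    its null distribution is built by randomly flipping the sign of each
    animal's difference vector.
    """
    rng = np.random.default_rng(seed)
    diff = np.asarray(use) - np.asarray(avail)

    def fisher_stat(d):
        ps = []
        for j in range(d.shape[1]):
            pos = (d[:, j] > 0).sum()
            nz = (d[:, j] != 0).sum()
            ps.append(binomtest(pos, nz, 0.5).pvalue if nz else 1.0)
        return -2 * np.log(ps).sum(), ps

    obs, per_habitat_p = fisher_stat(diff)
    count = 0
    for _ in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        if fisher_stat(diff * flips)[0] >= obs:
            count += 1
    return per_habitat_p, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(9)
avail = rng.dirichlet([2, 2, 2, 2], size=15)
use = rng.dirichlet([3, 2, 2, 1], size=15)   # habitat 1 over-used, 4 avoided
p_each, p_overall = habitat_sign_tests(use, avail, seed=10)
print("per-habitat p:", np.round(p_each, 3), " overall p:", round(p_overall, 4))
```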

10.
Determining whether the diet of predators has changed is an important ecological problem, and appropriate methodology is needed in order to test for differences or changes in diet. It is known that the fatty acid (FA) signature in a predator's adipose tissue predictably reflects the prey consumed and that, consequently, a change in the FA signatures can be largely attributed to changes in the predator's diet composition. The use of FA signatures as a means of detecting change in diet presents some statistical challenges, however, since FA signatures are compositional and sample sizes relative to the dimension of a signature are often small due to biological constraints. Furthermore, FA signatures often contain zeros, precluding the direct use of traditional compositional data analysis methods. In this paper, we provide the methodology to carry out valid statistical tests for detecting changes in FA signatures, and we illustrate both independent and paired cases using simulation studies and real-life seabird and seal data. We conclude that the statistical challenges using FA data are overcome through the use of nonparametric tests applied to the multivariate setting, with suitable test statistics capable of handling the zeros that are present in the data.
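
A minimal illustration of a nonparametric test that works directly on compositional signatures containing zeros: a permutation test whose statistic is the distance between group mean signatures. The statistic choice and the simulated data are assumptions of this sketch, not the authors' specific tests.

```python
import numpy as np

def fa_permutation_test(x, y, n_perm=9999, seed=None):
    """Permutation test for a difference between two sets of FA signatures.

    Statistic: Euclidean distance between the group mean signatures.
    It operates on the raw compositional vectors, so zeros need no
    transformation (unlike log-ratio methods).
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x), np.asarray(y)
    pooled = np.vstack([x, y])
    obs = np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        px, py = pooled[idx[:len(x)]], pooled[idx[len(x):]]
        if np.linalg.norm(px.mean(axis=0) - py.mean(axis=0)) >= obs:
            count += 1
    return obs, (count + 1) / (n_perm + 1)

# hypothetical 8-FA signatures with zeros, small unequal samples
rng = np.random.default_rng(10)
before = rng.dirichlet([4, 3, 2, 1, 1, 1, 0.5, 0.5], size=9)
after  = rng.dirichlet([2, 2, 2, 2, 1, 1, 0.5, 0.5], size=7)
before[before < 0.01] = 0.0        # mimic values below detection
after[after < 0.01] = 0.0
stat, p = fa_permutation_test(before, after, seed=11)
print(f"distance between mean signatures = {stat:.3f}, p = {p:.4f}")
```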

11.
We develop a spectral framework for testing the hypothesis of complete spatial randomness (CSR) for a spatial point pattern. Five formal tests based on the periodogram (sample spectrum) proposed in Mugglestone (1990) are considered. A simulation study is used to evaluate and compare the power of the tests against clustered, regular and inhomogeneous alternatives to CSR. A subset of the tests is shown to be uniformly more powerful than the others against the alternatives considered. The spectral tests are also compared with three widely used space-domain tests that are based on the mean nearest-neighbor distance, the reduced second-order moment function (K-function), and a bivariate Cramér-von Mises statistic. The test based on the scaled cumulative R-spectrum is more powerful than the space-domain tests for detecting clustered alternatives to CSR, especially when the number of events is small.
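
For comparison, one of the space-domain tests mentioned, a Monte Carlo test of CSR based on the mean nearest-neighbor distance, is easy to sketch. Working on the unit square with no edge correction is a simplification assumed here; the spectral tests themselves are beyond this sketch.

```python
import numpy as np

def mean_nn_distance(pts):
    # pairwise distances with the self-distance masked out
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

def csr_test(pts, n_sim=999, seed=None):
    """Monte Carlo test of complete spatial randomness on the unit square."""
    rng = np.random.default_rng(seed)
    n = len(pts)
    obs = mean_nn_distance(pts)
    sims = np.array([mean_nn_distance(rng.uniform(size=(n, 2)))
                     for _ in range(n_sim)])
    # small mean NN distance suggests clustering, large suggests regularity
    p_lo = (1 + (sims <= obs).sum()) / (n_sim + 1)
    p_hi = (1 + (sims >= obs).sum()) / (n_sim + 1)
    return obs, 2 * min(p_lo, p_hi)

# clustered pattern: offspring scattered around a few parent locations
rng = np.random.default_rng(11)
parents = rng.uniform(size=(5, 2))
pts = (parents[rng.integers(0, 5, 60)] + rng.normal(0, 0.03, (60, 2))) % 1.0
obs, p = csr_test(pts, seed=12)
print(f"mean NN distance = {obs:.4f}, Monte Carlo p = {p:.4f}")
```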

12.
Maximum entropy modeling of species geographic distributions
The availability of detailed environmental data, together with inexpensive and powerful computers, has fueled a rapid increase in predictive modeling of species environmental requirements and geographic distributions. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, absence data are not available for most species. In this paper, we introduce the use of the maximum entropy method (Maxent) for modeling species geographic distributions with presence-only data. Maxent is a general-purpose machine learning method with a simple and precise mathematical formulation, and it has a number of aspects that make it well-suited for species distribution modeling. In order to investigate the efficacy of the method, here we perform a continental-scale case study using two Neotropical mammals: a lowland species of sloth, Bradypus variegatus, and a small montane murid rodent, Microryzomys minutus. We compared Maxent predictions with those of a commonly used presence-only modeling method, the Genetic Algorithm for Rule-Set Prediction (GARP). We made predictions on 10 random subsets of the occurrence records for both species, and then used the remaining localities for testing. Both algorithms provided reasonable estimates of the species' range, far superior to the shaded outline maps available in field guides. All models were significantly better than random in both binomial tests of omission and receiver operating characteristic (ROC) analyses. The area under the ROC curve (AUC) was almost always higher for Maxent, indicating better discrimination of suitable versus unsuitable areas for the species. The Maxent modeling approach can be used in its present form for many applications with presence-only datasets, and merits further research and development.
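
The ROC/AUC comparison is easy to reproduce for any pair of presence-only models; the scores below are simulated stand-ins for model output, not actual Maxent or GARP predictions.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(12)
# hypothetical suitability scores at presence and test/background localities
presence_a, background_a = rng.beta(5, 2, 100), rng.beta(2, 4, 1000)  # sharper
presence_b, background_b = rng.beta(4, 3, 100), rng.beta(2, 4, 1000)  # weaker

for name, pres, back in [("model A", presence_a, background_a),
                         ("model B", presence_b, background_b)]:
    y = np.r_[np.ones(len(pres)), np.zeros(len(back))]
    s = np.r_[pres, back]
    # AUC: probability a presence locality outscores a background one
    print(f"{name}: AUC = {roc_auc_score(y, s):.3f}")
```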

13.
Using Japanese facility-level data from an OECD survey, we estimate the effects of implementation of ISO14001 and publication of environmental reports on the facilities’ environmental performance. While most previous studies focused on an index of emissions toxicity, this study examines three areas of impacts, none of which have been explored in the literature: natural resource use, solid waste generation, and wastewater effluent. The study is also unique in that the effectiveness of ISO14001 is considered in relation to environmental regulations. Our findings are summarized as follows. First, both ISO14001 and report publication help reduce all three impacts; the former appears more effective in all areas except wastewater. Second, environmental regulations do not weaken the effect of ISO14001. Third, assistance programs offered by local governments—a voluntary approach—promote facilities’ adoption of ISO14001. These findings suggest that governments can use command-and-control and voluntary approaches concurrently.

14.
We consider problems of inference for the wrapped skew-normal distribution on the circle. A centered parametrization of the distribution is introduced, and simulation is used to compare the performance of method of moments and maximum likelihood estimation for its parameters. Maximum likelihood estimation is shown, in general, to be superior. The operating characteristics of two moment-based tests, for wrapped normal and wrapped half-normal parent populations, respectively, are also explored. The former test is easy to apply, maintains the nominal significance level well, and is generally highly powerful. The latter test does not hold the nominal significance level so well, although it is very powerful against negatively skew alternatives. Likelihood-based tests for the two distributions are also discussed. A real data set from the ornithological literature is used to illustrate the application of the developed methodology and its extension to finite mixture modelling.
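
As a small illustration of the moment-based side of this machinery, here is method-of-moments estimation for the symmetric special case, the wrapped normal; the skew-normal extension and the tests themselves are beyond this sketch.

```python
import numpy as np

def wrapped_normal_moments(theta):
    """Method-of-moments estimates for a wrapped normal sample (radians).

    mu is the sample mean direction; sigma follows from the mean
    resultant length R via rho = exp(-sigma^2 / 2) for the wrapped normal.
    """
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    mu = np.arctan2(S, C)
    R = np.hypot(C, S)                 # mean resultant length
    sigma = np.sqrt(-2.0 * np.log(R))
    return mu, sigma

# simulate wrapped normal directions: wrap N(0.8, 0.6) onto [-pi, pi)
rng = np.random.default_rng(13)
theta = (rng.normal(0.8, 0.6, 200) + np.pi) % (2 * np.pi) - np.pi
mu_hat, sigma_hat = wrapped_normal_moments(theta)
print(f"mu_hat = {mu_hat:.3f}, sigma_hat = {sigma_hat:.3f}")
```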

15.
Release of elements from municipal solid waste incineration fly ash
The element-release behavior of municipal solid waste incineration fly ash was explored through a leaching test with continuous set-point pH (pHstat test) and serial single reaction cell (SSRC) tests. First, the relationship between element release and acid neutralizing capacity (ANC) consumption was examined with a pHstat test. Four types of release behavior were identified, characteristic of different elements: (1) release curves that were almost linear with ANC consumption (Ca, Zn, and Cd); (2) release that was significantly faster than ANC consumption (Na, K, and Cl); (3) curves that featured a strong increase with ANC consumption, after a transient release, followed by an almost equal decrease (Si and S); and (4) release that was strongly retarded compared with ANC consumption (Cr, Cu, and Pb). In the SSRC system, the existence of a pH front and a wash-out phenomenon was demonstrated. Combining the results from the SSRC test with the kinetic analysis of the ANC system in the pHstat test, it was inferred that less than one-third of the ANC measured from a batch pH titration plays a neutralization role in a field situation. The methodologies described may provide a powerful set of tools for systematic evaluation of element release from solid wastes.

16.
This study pursues external validation of contingent valuation by comparing survey results with the voting outcome of a Corvallis, Oregon, referendum to fund a riverfront improvement project through increased property taxes. Survey respondents hypothetically make a voting decision—with no financial consequences—on the upcoming referendum. The survey sample consists of respondents verified to have voted in the election. We use available precinct-level election data to compare the proportion of “yes” survey and referendum votes as well as estimate voting models and mean willingness to pay (WTP) based on the two sets of data. We find that survey responses match the actual voting outcome and WTP estimates based on the two are not statistically different. Contrary to similar studies, our statistical results do not depend on re-coding the majority of “undecided” survey responses to “no.” Furthermore, such a re-coding of responses may be inappropriate for our data set.

17.

Background

European chemicals legislation (registration, evaluation, authorisation and restriction of chemical substances (REACH)) requires a broad assessment of chemicals with respect to, inter alia, their health-relevant properties. Due to the very large number of substances to be assessed and the limited current toxicological knowledge of their properties, REACH implicitly requires a paradigm change: away from knowledge generated mainly from costly animal experiments towards the use of mechanistic findings. Moreover, effect mechanisms at the biochemical or cellular level are essential when conclusions are to be drawn about "new" endpoints and mixtures of xenobiotics. This study (funded by the German Federal Environment Agency) describes examples of biochemical processes in the mammalian organism and how xenobiotics interfere with them. Interference with physiological processes that is expected to lead to adverse health effects is characterised as a "toxicity pathway". The study describes toxicological endpoints not usually covered in routine animal testing and the respective toxicity pathways.

Results and conclusions

Screening for chemicals which exert effects via common toxicity pathways and subsequently conducting targeted short-term tests may generate new information about the toxicity of chemicals without performing extensive substance-by-substance animal experiments. Information on common toxicity pathways may also provide input for the assessment of mixture effects. The U.S. Environmental Protection Agency is working intensively on this concept. It involves the use of enormous amounts of data on relevant biochemical and cellular processes, which are generated by "high-throughput screening" methods and then combined with substance-specific kinetic data, experimental apical test outcomes and modelling. Current limitations in the regulatory use of this integrated approach to risk assessment are outlined.

18.
A stochastic individual-based model (IBM) of mosquitofish population dynamics in experimental ponds was constructed in order to increase, virtually, the number of replicates of control populations in an ecotoxicology trial, and thus to increase the statistical power of the experiments. In this context, particular attention had to be paid to model calibration, as this conditions the use of the model as a reference for statistical comparisons. Accordingly, calibration required that both the mean behaviour and the variability behaviour of the model were in accordance with real data. Identifying parameter values from observed data is still an open issue for IBMs, especially when the parameter space is large. Our model included 41 parameters: 30 driving the model expectancy and 11 driving the model variability. Under these conditions, the use of "Latin hypercube" sampling would most probably have missed some important combinations of parameter values, so complete factorial designs were preferred. Unfortunately, given the constraints of computational capacity, cost-acceptable complete designs were limited to no more than nine parameters, turning the calibration question into a parameter-selection question. In this study, successive complete designs were conducted with different sets of parameters and different parameter values in order to progressively narrow the parameter space. For each complete design, the selection of at most nine parameters and their respective values was carefully guided by sensitivity analysis, which was decisive in selecting parameters that were both influential and likely to have strong interactions. Following this strategy, the model was calibrated on real data from two different years of experiments and validated on real data from another, independent year. The model includes two categories of agents: fish and their living environment. Fish agents have four main processes: growth, survival, puberty and reproduction. The outputs of the model are the length frequency distribution of the population and 16 scalar variables describing the fish populations. For calibration, the length frequency distribution was parameterized by 10 scalars. The recently suggested notion of a "probabilistic distribution of the distributions" was also applied to our case study and was shown to be very promising for comparing length frequency distributions as such.
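
A toy sketch of the calibration strategy: a complete factorial design over a small set of influential parameters, scoring each combination against both the mean and the variability of an observed summary. The parameter names, ranges, target value, and stand-in model are all invented for illustration.

```python
import itertools
import numpy as np

# invented calibration grid over a handful of influential parameters
param_grid = {
    "growth_rate":  [0.02, 0.04, 0.06],
    "survival_juv": [0.85, 0.90, 0.95],
    "clutch_size":  [20, 40, 60],
    "maturity_len": [22.0, 26.0],
}

def run_model(params, rng):
    """Stand-in for one stochastic IBM run; returns a summary statistic."""
    base = (params["growth_rate"] * params["clutch_size"]
            * params["survival_juv"])
    return base / params["maturity_len"] + rng.normal(0, 0.005)

target = 0.055                      # observed summary from the real ponds
rng = np.random.default_rng(14)
results = []
for combo in itertools.product(*param_grid.values()):
    params = dict(zip(param_grid, combo))
    # replicate runs capture run-to-run variability, not just the mean
    reps = [run_model(params, rng) for _ in range(20)]
    results.append((abs(np.mean(reps) - target), np.std(reps), params))

best = min(results, key=lambda r: r[0])
print("best parameter combination:", best[2])
print(f"|mean error| = {best[0]:.4f}, run-to-run SD = {best[1]:.4f}")
```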

19.
Behavioral ecologists are often faced with a situation where they need to compare the central tendencies of two samples. The standard tools of the t test and the Mann–Whitney U test (equivalent to the Wilcoxon rank-sum test) are unreliable when the variances of the groups differ, and the problem is particularly severe when sample sizes also differ between groups. The unequal-variance t test (Welch test) may not be suitable for nonnormal data. Here, we propose the use of Brunner and Munzel's generalized Wilcoxon test, followed by randomization to allow for small sample sizes. This tests whether the probability of an individual from one population being larger than an individual from the other deviates from random expectation. This probability may sometimes be a clearer and more informative measure of difference between the groups than a difference in more commonly used measures of central tendency (such as the mean). We provide a recipe for carrying out a statistical test of the null hypothesis that this probability is 50% and demonstrate the effectiveness of this technique for sample sizes typical in behavioral ecology. Although the test is not available in any commercial software package, it is relatively straightforward to implement for anyone with some programming ability. Furthermore, implementations in R and SAS are freely available on the internet.
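
scipy now provides the large-sample Brunner-Munzel test directly, and a randomization version for small samples can be layered on top by permuting group labels. This is a sketch of that idea, not the authors' published recipe.

```python
import numpy as np
from scipy.stats import brunnermunzel

rng = np.random.default_rng(15)
# unequal variances and unequal sample sizes, as in the motivating problem
x = rng.normal(10.0, 1.0, 12)
y = rng.normal(10.8, 3.0, 30)

# large-sample Brunner-Munzel test
res = brunnermunzel(x, y)
print(f"BM statistic = {res.statistic:.3f}, asymptotic p = {res.pvalue:.4f}")

# randomization version for small samples: permute group labels and
# recompute the BM statistic to build its null distribution
obs = abs(res.statistic)
pooled = np.concatenate([x, y])
n_perm, count = 2000, 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    count += abs(brunnermunzel(perm[:len(x)], perm[len(x):]).statistic) >= obs
print(f"randomization p = {(count + 1) / (n_perm + 1):.4f}")
```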

20.
The large, sparse arrays of species counts arising in both field and experimental community studies do not lend themselves to standard statistical tests based on multivariate normality. Instead, a valid and more revealing approach uses informal display methods, such as clustering or multi-dimensional scaling ordination (MDS), based on a biologically-motivated definition of pairwise similarity of samples in terms of species composition. Formal testing methods are still required, however, to establish that real assemblage differences exist between sites, times, experimental treatments, pollution states, etc. Earlier work has described a range of Mantel-type permutation or randomisation procedures, making no distributional assumptions, which are termed ANOSIM tests because of their dependence only on (rank) similarities and the analogy to one- and two-way ANOVA. This paper extends these tests to cover an important practical case, previously unconsidered: that of a two-way layout without replication. Such cases arise for single samples (or pseudo-replicates) taken in a baseline monitoring survey of several sites over time, or a mesocosm experiment in which treatments are replicated only once within each experimental block. Significance tests are given for the overall presence of a treatment (or time) effect, based on a measure of concordance between the rank similarities of samples within each block (or site); the roles of the two factors can be reversed to obtain a test for block effects. As in the analogous univariate ANOVA test, the method relies on the absence or relative weakness of treatment × block interactions. Its scope is illustrated with data from two experimental and two field studies, involving meiofaunal communities from soft-sediment and macro-algal habitats. It is seen also to accommodate a modest degree of missing data. Whilst the failure to replicate adequately is not encouraged—a richer inference is available with genuine replication—the paper does provide a limited way forward for hypothesis testing in the absence of replicates.
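
A rough sketch of the unreplicated two-way idea: measure the concordance of among-treatment rank dissimilarities across blocks and calibrate it by permuting treatment labels within blocks. The Bray-Curtis dissimilarity and averaged pairwise Spearman correlations are assumptions of this sketch, not necessarily the paper's exact statistic.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def concordance_stat(data):
    """Average between-block Spearman correlation of rank dissimilarities.

    data: (blocks x treatments x species) abundance array. For each block,
    compute the among-treatment Bray-Curtis dissimilarities; concordant
    patterns across blocks indicate a treatment effect.
    """
    patterns = [pdist(block, metric="braycurtis") for block in data]
    b = len(patterns)
    rhos = [spearmanr(patterns[i], patterns[j])[0]
            for i in range(b) for j in range(i + 1, b)]
    return np.mean(rhos)

def two_way_anosim_no_reps(data, n_perm=999, seed=None):
    rng = np.random.default_rng(seed)
    obs = concordance_stat(data)
    count = 0
    for _ in range(n_perm):
        # permute treatment labels independently within each block
        perm = np.array([block[rng.permutation(block.shape[0])]
                         for block in data])
        count += concordance_stat(perm) >= obs
    return obs, (count + 1) / (n_perm + 1)

# 4 blocks (sites), 5 treatments, 12 species; shared treatment profile
rng = np.random.default_rng(16)
effect = rng.gamma(2.0, 1.0, size=(5, 12))
data = np.stack([rng.poisson(3.0 * effect) for _ in range(4)]).astype(float)
obs, p = two_way_anosim_no_reps(data, seed=17)
print(f"average rho = {obs:.3f}, permutation p = {p:.4f}")
```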
