Similar Articles
20 similar articles found
1.
A primary objective in quantitative risk assessment is the characterization of risk, defined as the likelihood of an adverse effect caused by an environmental toxin or chemical agent. In modern risk-benchmark analysis, attention centers on the “benchmark dose” at which a fixed benchmark level of risk is achieved, with a lower confidence limit on this dose being of primary interest. In practice, a range of benchmark risks may be under study, so the individual lower confidence limits on benchmark dose must be corrected for simultaneity in order to maintain a specified overall level of confidence. For the case of quantal data, simultaneous methods have been constructed that appeal to the large-sample normality of parameter estimates. The suitability of these methods for use with small sample sizes is considered. A new bootstrap technique is proposed as an alternative to the large-sample methodology and is evaluated via a simulation study and examples from environmental toxicology.
R. Webster West
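To make the setting concrete, below is a minimal sketch of a parametric-bootstrap lower confidence limit (BMDL) for a benchmark dose under a logistic dose–response model. The dose groups, counts, and 10% benchmark risk are hypothetical, and this simple bootstrap variant is an illustration, not the paper's exact procedure.

```python
# Parametric-bootstrap BMDL sketch for quantal data (hypothetical values).
import numpy as np
from scipy.optimize import minimize, brentq

doses = np.array([0.0, 10.0, 50.0, 150.0])   # hypothetical dose groups
n = np.array([50, 50, 50, 50])               # animals per group
y = np.array([2, 5, 14, 30])                 # affected animals (hypothetical)
BMR = 0.10                                   # benchmark (extra) risk

def prob(d, a, b):
    return 1.0 / (1.0 + np.exp(-(a + b * d)))

def negloglik(theta, y_obs):
    p = np.clip(prob(doses, *theta), 1e-10, 1 - 1e-10)
    return -np.sum(y_obs * np.log(p) + (n - y_obs) * np.log(1 - p))

def bmd(theta):
    # dose at which extra risk (P(d)-P(0))/(1-P(0)) equals the BMR
    a, b = theta
    extra = lambda d: (prob(d, a, b) - prob(0.0, a, b)) / (1 - prob(0.0, a, b))
    return brentq(lambda d: extra(d) - BMR, 1e-8, 1e6)

fit = minimize(negloglik, x0=[-3.0, 0.02], args=(y,), method="Nelder-Mead")
rng = np.random.default_rng(1)
boot = []
for _ in range(2000):                        # parametric bootstrap resamples
    y_star = rng.binomial(n, prob(doses, *fit.x))
    refit = minimize(negloglik, x0=fit.x, args=(y_star,), method="Nelder-Mead")
    try:
        boot.append(bmd(refit.x))
    except ValueError:                       # skip non-monotone refits
        pass
print("BMD =", bmd(fit.x), "BMDL (5th pct) =", np.percentile(boot, 5))
```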

2.
Although benchmark-dose methodology has existed for more than 20 years, benchmark doses (BMDs) still have not fully supplanted the no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) as points of departure from the experimental dose–response range for setting acceptable exposure levels of toxic substances. Among the issues involved in replacing the NOAEL (LOAEL) with a BMD are (1) which added risk level(s) above background risk should be targeted as benchmark responses (BMRs), (2) whether to apply the BMD methodology to both carcinogenic and noncarcinogenic toxic effects, and (3) how to model continuous health effects that are not observed in a natural risk-based context the way dichotomous health effects are. This paper addresses these issues and recommends specific BMDs to replace the NOAEL and LOAEL.
Ralph L. Kodell
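For reference, the risk measures involved in choosing a BMR can be written down directly; this is the conventional formalization used in the BMD literature, not anything specific to this paper:

```latex
% Added and extra risk at dose d, relative to background risk P(0):
\[
  A(d) = P(d) - P(0), \qquad R(d) = \frac{P(d) - P(0)}{1 - P(0)} .
\]
% The BMD is the dose at which the chosen measure reaches the benchmark
% response (BMR), e.g. for extra risk:
\[
  \mathrm{BMD} = \inf\{\, d > 0 : R(d) = \mathrm{BMR} \,\}.
\]
```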

3.
Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose–response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
Kenny S. Crump
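The core computational idea — evaluating a likelihood whose terms involve an integral over unobserved quantities by Monte Carlo — can be sketched as follows. The distributions and parameters here are hypothetical and far simpler than the styrene analysis:

```python
# Monte Carlo integration of a likelihood with a latent variable (toy model).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def mc_likelihood(x_obs, theta, n_draws=100_000):
    """L(theta) = integral of f(x_obs | z, theta) g(z) dz,
    approximated by averaging the conditional density over draws z ~ g."""
    z = rng.standard_normal(n_draws)        # latent quantity, z ~ g = N(0, 1)
    # illustrative conditional model: x | z ~ Normal(theta + z, 1)
    return norm.pdf(x_obs, loc=theta + z, scale=1.0).mean()

print(mc_likelihood(1.3, theta=0.5))
```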

4.
To establish allowable daily intakes for humans from animal bioassay experiments, benchmark doses corresponding to low levels of risk have been proposed to replace the no-observed-adverse-effect level for non-cancer endpoints. When the experimental outcomes are quantal, each animal can be classified as with or without the disease. The proportion of affected animals is observed as a function of dose, and calculation of the benchmark dose is relatively simple. For quantitative responses, on the other hand, one method is to convert the continuous data to quantal data and proceed with benchmark dose estimation. Another method, which has found more popularity (Crump, Risk Anal 15:79–89, 1995), is to fit an appropriate dose–response model to the continuous data and directly estimate the risk and benchmark doses. The normal distribution has often been used in the past as a dose–response model. However, for non-symmetric data, the normal distribution can lead to erroneous results. Here, we propose the use of the class of beta-normal distributions and demonstrate its application in risk assessment for quantitative responses. The most important feature of this class of distributions is its generality, encompassing a wide range of distributional shapes and including the normal distribution as a special case. The properties of the model are briefly discussed, and risk estimates are derived based on the asymptotic properties of the maximum likelihood estimates. An example is used for illustration.
Mehdi Razzaghi
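A minimal sketch of the beta-normal density, in the parameterization commonly attributed to Eugene, Lee and Famoye: the normal CDF is plugged into a Beta(α, β) density, giving a flexible family that reduces to the normal when α = β = 1. The shape parameters below are arbitrary illustrations.

```python
# Beta-normal density: f(x) = beta_pdf(Phi(z); a, b) * phi(z) / sigma.
import numpy as np
from scipy.stats import norm, beta as beta_dist

def beta_normal_pdf(x, alpha, bta, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return beta_dist.pdf(norm.cdf(z), alpha, bta) * norm.pdf(z) / sigma

x = np.linspace(-4, 4, 9)
print(beta_normal_pdf(x, alpha=2.0, bta=0.7))               # a skewed member
print(np.allclose(beta_normal_pdf(x, 1, 1), norm.pdf(x)))   # normal special case
```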

5.
Model averaging (MA) has been proposed as a method of accommodating model uncertainty when estimating risk. Although the use of MA is inherently appealing, little is known about its performance under general modeling conditions. We investigate the use of MA for estimating excess risk using a Monte Carlo simulation. Dichotomous response data are simulated under various assumed underlying dose–response curves, and nine dose–response models (from the USEPA benchmark dose model suite) are fit to obtain both model-specific and MA risk estimates. The benchmark dose (BMD) estimates from the MA method, as well as estimates from other commonly used strategies, e.g., the best-fitting model or the model resulting in the smallest BMD, are compared to the true benchmark dose to better understand both the bias and the coverage behavior of the estimation procedures. The MA method has a small bias when estimating the BMD, similar to the bias of BMD estimates derived from the assumed model. Further, when a broader range of models is included in the family considered in the MA process, the MA lower-bound estimate provides coverage close to the nominal level, which is superior to the other strategies considered. This approach provides an alternative method for risk managers to estimate risk while incorporating model uncertainty.
Matthew W. Wheeler
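One simple way to form a model-averaged estimate is with information-criterion weights; the sketch below applies Akaike weights to per-model BMD estimates. The numbers are placeholders, not output from the USEPA model suite, and the paper's MA may combine models on the dose–response scale rather than averaging BMDs directly.

```python
# Model averaging with Akaike weights (placeholder inputs).
import numpy as np

loglik = np.array([-120.4, -119.8, -121.1])   # fitted log-likelihoods (hypothetical)
k      = np.array([2, 3, 3])                  # parameters per model
bmd    = np.array([12.1, 9.4, 10.8])          # per-model BMD estimates (hypothetical)

aic = -2 * loglik + 2 * k
delta = aic - aic.min()
w = np.exp(-delta / 2)
w /= w.sum()                                  # Akaike weights sum to 1
print("MA BMD =", np.sum(w * bmd))
```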

6.
Consider the removal experiment used to estimate population sizes. Statistical methods for testing the homogeneity of capture probabilities of animals, including a graphical diagnostic and a formal test, are presented and illustrated with real biological examples. Simulation is used to assess the test and compare it with the χ² test.
Chang Xuan Mao
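For context, here is the classical two-occasion removal estimator, which presumes exactly the homogeneity of capture probabilities that the paper's diagnostics are designed to check; the counts are hypothetical.

```python
# Two-occasion (Zippin-type) removal estimator under homogeneous capture.
def removal_estimate(u1, u2):
    """u1, u2: animals removed on occasions 1 and 2 (requires u1 > u2).
    With constant capture probability p: E[u1] = N*p, E[u2] = N*(1-p)*p."""
    p_hat = (u1 - u2) / u1          # capture probability
    n_hat = u1 ** 2 / (u1 - u2)     # population size
    return n_hat, p_hat

print(removal_estimate(120, 48))    # hypothetical removal counts
```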

7.
When animals die in traps in a mark-recapture study, straightforward likelihood inference is possible in a class of models. The class includes M0, Mt, and Mb of White et al. (Los Alamos National Laboratory, LA-8787-NERP, pp 235, 1982), i.e., the models that do not involve heterogeneity. We include three Markov chain “persistence” models and show that they provide good fits in a trapping study of deer mice in the Cascade–Siskiyou National Monument of southern Oregon, where trapping mortality was high.
Fred L. Ramsey

8.
Confidence intervals for the mean of the delta-lognormal distribution
Data that are skewed and contain a relatively high proportion of zeros can often be modelled using a delta-lognormal distribution. We consider three methods of calculating a 95% confidence interval for the mean of this distribution, and use simulation to compare the methods across a range of realistic scenarios. The best method, in terms of coverage, is the one based on the profile likelihood. It gives error rates within 1% (lower limit) or 3% (upper limit) of the nominal level, unless the sample size is small and the level of skewness is moderate to high. Our results also apply to the delta-lognormal linear model, when we wish to calculate a confidence interval for the expected value of the response variable given the values of one or more explanatory variables. We illustrate the three methods using data on red cod densities, taken from a fisheries trawl survey in New Zealand.
David Fletcher
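A sketch of the point estimate of the delta-lognormal mean, p·exp(μ + σ²/2), paired with a nonparametric bootstrap interval as a simple stand-in for the profile-likelihood interval the paper recommends; the simulated data are placeholders for the red cod densities.

```python
# Delta-lognormal mean with a percentile-bootstrap interval (toy data).
import numpy as np

rng = np.random.default_rng(7)
x = np.where(rng.random(200) < 0.3, 0.0, rng.lognormal(1.0, 0.8, 200))

def dln_mean(sample):
    """MLE of the mean: p * exp(mu + sigma^2 / 2) from the positive part."""
    pos = np.log(sample[sample > 0])
    p = pos.size / sample.size
    return p * np.exp(pos.mean() + pos.var() / 2)

boot = [dln_mean(rng.choice(x, size=x.size)) for _ in range(4000)]
print("mean =", dln_mean(x), "95% CI =", np.percentile(boot, [2.5, 97.5]))
```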

9.
The concept of the renewal property is extended to processes indexed by a multidimensional time parameter. The definition given includes not only partial sum processes, but also Poisson processes and many other point processes whose jump points are not totally ordered. Various properties of renewal processes are discussed. Renewal processes are proposed as a basis for modelling the spread of a forest fire under a prevailing wind.
B. Gail Ivanoff

10.
Coverage, i.e., the area covered by the target attribute in the study region, is a key parameter in many surveys. Coverage estimation is usually performed by adopting a replicated protocol based on line-intercept sampling, coupled with a suitable linear homogeneous estimator. Since coverage is a parameter that can usefully be represented as the integral of a suitable function, improved Monte Carlo strategies for implementing the replicated protocol are introduced in order to achieve estimators with small variance. In addition, new theoretical results on Monte Carlo integration methods are given to deal with the integrand functions arising in this coverage estimation setting.
Lucio Barabesi
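The underlying idea — coverage written as an integral and estimated by Monte Carlo, with the replication scheme chosen to cut variance — can be illustrated with antithetic variates; the integrand below is an arbitrary stand-in for a coverage function.

```python
# Crude vs. antithetic Monte Carlo integration of a stand-in coverage function.
import numpy as np

rng = np.random.default_rng(3)
f = lambda x: np.sqrt(x) * (1 - x)            # hypothetical coverage profile

u = rng.random(10_000)
crude = f(u).mean()                           # plain Monte Carlo estimate
antithetic = 0.5 * (f(u) + f(1 - u)).mean()   # pairs negatively correlated draws
print(crude, antithetic)                      # same target, smaller variance
```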

11.
The influence of multiple anchored fish aggregating devices (FADs) on the spatial behavior of yellowfin (Thunnus albacares) and bigeye tuna (T. obesus) was investigated by equipping all thirteen FADs surrounding the island of Oahu (HI, USA) with automated sonic receivers (“listening stations”) and intraperitoneally implanting individually coded acoustic transmitters in 45 yellowfin and 12 bigeye tuna. The FAD network thus became a multi-element passive observatory of the residence and movement characteristics of tuna within the array. Yellowfin tuna were detected within the FAD array for up to 150 days, while bigeye tuna were observed only up to a maximum of 10 days after tagging. Only eight yellowfin tuna (out of 45) and one bigeye tuna (out of 12) visited FADs other than their FAD of release. Those nine fish tended to visit the nearest neighboring FADs and, in general, spent more time at their FAD of release than at the others. Fish visiting the same FAD several times or visiting other FADs tended to stay longer in the FAD network. A majority of tagged fish exhibited some synchrony when departing the FADs, but not all tagged fish departed a FAD at the same time: small groups of tagged fish left together while others remained. We hypothesize that tuna (at an individual or collective level) consider local conditions around any given FAD to be representative of the environment on a larger scale (e.g., the entire island), and that when those conditions become unfavorable the tuna move to a completely different area. Thus, while the anchored FADs surrounding the island of Oahu might concentrate fish and make them more vulnerable to fishing, at a meso-scale they might not entrain fish longer than if there were no (or very few) FADs in the area. At the existing FAD density, the ‘island effect’ is more likely than the FADs to be responsible for the general presence of fish around the island. We recommend further investigation of this hypothesis.
Laurent Dagorn (Corresponding author)
Kim N. Holland
David G. Itano

12.
This paper explores the use of, and problems that arise in, kernel smoothing and parametric estimation of the relationships between wildfire incidence and various meteorological variables. Such relationships may be treated as components in separable point process models for wildfire activity. The resulting models can be used for comparative purposes in order to assess the predictive performance of the Burning Index.
Frederic Paik Schoenberg
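A minimal Nadaraya–Watson kernel smoother of fire counts against a single meteorological covariate illustrates the kind of component relationship the paper estimates; the simulated data and bandwidth are hypothetical, not the Burning Index records analysed in the paper.

```python
# Nadaraya-Watson kernel regression of fire counts on temperature (toy data).
import numpy as np

rng = np.random.default_rng(5)
temp = rng.uniform(10, 40, 300)                  # covariate, e.g. temperature
fires = rng.poisson(np.exp(-4 + 0.12 * temp))    # daily fire counts

def nw_smooth(x0, x, y, h=2.0):
    # Gaussian kernel weights between evaluation points x0 and data x
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

grid = np.linspace(10, 40, 7)
print(nw_smooth(grid, temp, fires))              # smoothed incidence curve
```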

13.
In this paper, some properties and analytic expressions for the Poisson lognormal distribution, such as its moments, likelihood function and related derivatives, are discussed. A sharp approximation of the integrals underlying the Poisson lognormal probabilities is provided, and the choice of initial values in the fitting procedure is analyzed. Building on these results, a new procedure for maximum likelihood fitting of the truncated Poisson lognormal distribution is described. The method and results are illustrated on real data. The computer program for the calculations is freely available.
Rudolf Izsák
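The integral defining the Poisson lognormal pmf has no closed form. The sketch below evaluates it by Gauss–Hermite quadrature — one standard approximation for this kind of integral, not necessarily the author's; the parameter values are illustrative.

```python
# Poisson-lognormal pmf: P(k) = integral of Poisson(k; e^(mu + sigma*z)) phi(z) dz.
import numpy as np
from scipy.special import gammaln

def poisson_lognormal_pmf(k, mu, sigma, n_nodes=60):
    # Gauss-Hermite handles int e^(-t^2) f(t) dt; substitute z = sqrt(2)*t
    t, w = np.polynomial.hermite.hermgauss(n_nodes)
    lam = np.exp(mu + sigma * np.sqrt(2.0) * t)
    logpmf = k * np.log(lam) - lam - gammaln(k + 1)   # Poisson log-pmf at nodes
    return (w * np.exp(logpmf)).sum() / np.sqrt(np.pi)

print([round(poisson_lognormal_pmf(k, 0.5, 1.0), 4) for k in range(6)])
```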

14.
Geochemical mapping is a technique rooted in mineral exploration, but it has now found worldwide application in studies of the urban environment. Such studies, involving multidisciplinary teams that include geochemists, have to present their results in a way that non-geochemists can comprehend. A legislatively driven demand for urban geochemical data, connected with the need to identify contaminated land and carry out subsequent health risk assessments, has given rise to greater worldwide interest in the urban geochemical environment. Herein, the aims and objectives of some urban studies are reviewed, and commonly used terms such as baseline and background are defined. Geochemists need to consider more carefully what is meant by the term “urban”. Whilst the unique make-up of every city precludes a single recommended approach to a geochemical mapping strategy, more should be done to standardise the sampling and analytical methods. How (from a strategic and presentational point of view) and why we do geochemical mapping studies is discussed.
Christopher C. Johnson

15.
Missing covariate values in linear regression models are an important problem facing environmental researchers. Existing missing-value treatment methods such as Multiple Imputation (MI), the EM algorithm and Data Augmentation (DA) assume that both observed and unobserved data come from the same distribution, most commonly a multivariate normal or conditionally multivariate normal family. These methods do not attempt to model the missing-data mechanism, relying instead on the assumption of Missing At Random (MAR). We present a DA method which does not rely on the MAR assumption and can model both missing-data mechanisms and covariate structure. The method utilizes the Gibbs sampler as a tool for incorporating these structures and mechanisms. We apply the method to an ecological data set that relates fish condition to environmental variables. The presented DA method detects relationships that are not detected when other missing-data methods are employed.
Edward L. Boone
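A minimal sketch of data augmentation with a Gibbs sampler for a regression with a missing covariate. For brevity the variances are treated as known, the prior on the coefficients is flat, and the missingness mechanism is not modelled (i.e., MAR is assumed), so this is far simpler than the paper's non-MAR method.

```python
# Gibbs sampler alternating imputation of missing x with draws of beta.
import numpy as np

rng = np.random.default_rng(11)
n, beta_true, sigma, mu_x, tau = 200, np.array([1.0, 2.0]), 1.0, 0.0, 1.0
x = rng.normal(mu_x, tau, n)
y = beta_true[0] + beta_true[1] * x + rng.normal(0, sigma, n)
miss = rng.random(n) < 0.3                      # 30% of x unobserved
x_cur = np.where(miss, mu_x, x)                 # start missing values at the mean

beta, draws = np.zeros(2), []
for it in range(3000):
    # 1) impute missing x from its normal conditional given y and current beta
    prec = 1 / tau**2 + beta[1]**2 / sigma**2
    mean = (mu_x / tau**2 + beta[1] * (y - beta[0]) / sigma**2) / prec
    x_cur[miss] = rng.normal(mean[miss], 1 / np.sqrt(prec))
    # 2) draw beta from its conditional given the completed data (flat prior)
    X = np.column_stack([np.ones(n), x_cur])
    V = np.linalg.inv(X.T @ X) * sigma**2
    beta = rng.multivariate_normal(np.linalg.solve(X.T @ X, X.T @ y), V)
    if it >= 1000:                              # discard burn-in
        draws.append(beta)
print(np.mean(draws, axis=0))                   # posterior means near (1, 2)
```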

16.
Infectious disease surveillance has become an international top priority due to the perceived risk of bioterrorism. This is driving the improvement of real-time geospatial surveillance systems for monitoring disease indicators, which is expected to have many benefits beyond detecting a bioterror event. West Nile Virus surveillance in New York State (USA) is highlighted as a working system that uses dead American Crows (Corvus brachyrhynchos) to prospectively indicate viral activity prior to onset in humans. A cross-disciplinary review is then presented to argue that this system, and infectious disease surveillance in general, can be improved by complementing spatial cluster detection of an outcome variable with predictive “risk mapping” that incorporates spatiotemporal data on the environment, climate and human population through the flexible class of generalized linear mixed models.
Glen D. Johnson
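As a toy version of the proposed risk mapping, the sketch below fits a Poisson regression on environmental covariates with a log-population offset. A fixed-effects GLM stands in for the mixed models advocated in the paper, and all data are simulated placeholders.

```python
# Poisson "risk mapping" sketch: covariates plus a log-population offset.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_areas = 400
temp = rng.normal(0, 1, n_areas)                  # climate covariate
wet  = rng.normal(0, 1, n_areas)                  # environmental covariate
pop  = rng.integers(1_000, 50_000, n_areas)       # population at risk
cases = rng.poisson(np.exp(-9 + 0.4 * temp + 0.3 * wet) * pop)

X = sm.add_constant(np.column_stack([temp, wet]))
fit = sm.GLM(cases, X, family=sm.families.Poisson(), offset=np.log(pop)).fit()
risk = fit.predict(X, offset=np.zeros(n_areas))   # per-capita relative risk surface
print(fit.params, risk[:5])
```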

17.
We consider a stochastic fire growth model, with the aim of predicting the behaviour of large forest fires. Such a model can describe not only average growth, but also the variability of the growth. Implementing such a model in a computing environment allows one to obtain probability contour plots, burn size distributions, and distributions of time to specified events. Such a model also allows the incorporation of a stochastic spotting mechanism.
Reg J. Kulperger
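A toy stochastic fire-growth simulation with a crude spotting mechanism shows how burn-size distributions arise from repeated runs. All probabilities are arbitrary illustrations, and the model is far simpler than the one studied in the paper.

```python
# Stochastic grid fire-growth with nearest-neighbour spread and random spotting.
import numpy as np

rng = np.random.default_rng(9)
N, p_spread, p_spot = 100, 0.25, 0.002
burning = np.zeros((N, N), dtype=bool)
burning[N // 2, N // 2] = True                   # single ignition point

for step in range(60):
    # each burning cell independently ignites its 4-neighbours
    nbr = np.zeros_like(burning)
    nbr[1:, :] |= burning[:-1, :]; nbr[:-1, :] |= burning[1:, :]
    nbr[:, 1:] |= burning[:, :-1]; nbr[:, :-1] |= burning[:, 1:]
    ignite = nbr & (rng.random((N, N)) < p_spread)
    # spotting: embers land at uniformly random distant cells
    n_spots = rng.binomial(burning.sum(), p_spot)
    rows, cols = rng.integers(0, N, n_spots), rng.integers(0, N, n_spots)
    burning[rows, cols] = True
    burning |= ignite
print("burned cells after 60 steps:", burning.sum())
# repeating the run many times yields an empirical burn-size distribution
```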

18.
A complex multivariate spatial point pattern of a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We initially consider a maximum likelihood approach to inference, where problems arise due to the unknown interaction radii of the plants. We then demonstrate that a Bayesian approach provides a flexible framework for incorporating prior information concerning the interaction radii. From an ecological perspective, we are able both to confirm existing knowledge of species’ interactions and to generate new biological questions and hypotheses about them.
Rasmus P. Waagepetersen

19.
20.
Line-intersect sampling based on segmented transects is adopted in many forest inventories to quantify important ecological indicators such as coarse woody debris attributes. Adopting a design-based approach, Affleck, Gregoire and Valentine (2005, Environ Ecol Stat 12:139–154) recently proposed a sampling protocol for this line-intersect setting and suggested an estimation method based on linear homogeneous estimators. However, their proposal does not encompass the estimation procedure currently adopted in some national forest inventories. Hence, the present paper introduces a unifying perspective on both methods. Moreover, it is shown that the two procedures give rise to coincident estimators in almost all the usual field applications. Finally, some strategies for efficient segmented-transect replications are considered.
Lucio Barabesi
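For reference, the classical line-intersect estimator of coarse woody debris volume per unit area (usually credited to Van Wagner) pools the squared diameters at the intersection points over the total transect length; the numbers below are hypothetical.

```python
# Line-intersect CWD volume estimate: V = pi^2 * sum(d_i^2) / (8 * L).
import numpy as np

diameters_m = np.array([0.12, 0.07, 0.30, 0.18, 0.09])  # at intersection points
L = 150.0                                                # total transect length (m)
volume_per_ha = np.pi**2 * np.sum(diameters_m**2) / (8 * L) * 10_000
print(volume_per_ha, "m^3 per hectare")
```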
