Similar articles
20 similar articles found (search time: 31 ms)
1.
In modern environmental risk analysis, inferences are often desired on those low dose levels at which a fixed benchmark risk is achieved. In this paper, we study the use of confidence limits on parameters from a simple one-stage model of risk historically popular in benchmark analysis with quantal data. Based on these confidence bounds, we present methods for deriving upper confidence limits on extra risk and lower bounds on the benchmark dose. The methods are seen to extend automatically to the case where simultaneous inferences are desired at multiple doses. Monte Carlo evaluations explore characteristics of the parameter estimates and the confidence limits under this setting.
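The one-stage quantal model admits a closed-form benchmark dose, which can be sketched as follows (a minimal illustration; the value of `beta_hat` is hypothetical, and the paper's confidence-limit machinery is not reproduced here):

```python
import math

def extra_risk(dose, beta):
    # One-stage quantal model: extra risk R(d) = 1 - exp(-beta * d)
    return 1.0 - math.exp(-beta * dose)

def benchmark_dose(bmr, beta):
    # Invert the extra-risk function: BMD = -ln(1 - BMR) / beta
    return -math.log(1.0 - bmr) / beta

# Hypothetical slope estimate from quantal data
beta_hat = 0.02
bmd_10 = benchmark_dose(0.10, beta_hat)  # dose at which extra risk reaches 10%
```

A lower confidence limit on the BMD then follows by substituting an upper confidence limit on the slope, since the BMD is decreasing in beta.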
R. Webster West

2.
To establish allowable daily intakes for humans from animal bioassay experiments, benchmark doses corresponding to low levels of risk have been proposed to replace the no-observed-adverse-effect level for non-cancer endpoints. When the experimental outcomes are quantal, each animal can be classified with or without the disease. The proportion of affected animals is observed as a function of dose and calculation of the benchmark dose is relatively simple. For quantitative responses, on the other hand, one method is to convert the continuous data to quantal data and proceed with benchmark dose estimation. Another method, which has found more popularity (Crump, Risk Anal 15:79–89, 1995), is to fit an appropriate dose–response model to the continuous data, and directly estimate the risk and benchmark doses. The normal distribution has often been used in the past as a dose–response model. However, for non-symmetric data, the normal distribution can lead to erroneous results. Here, we propose the use of the class of beta-normal distributions and demonstrate its application in risk assessment for quantitative responses. The most important feature of this class of distributions is its generality, encompassing a wide range of distributional shapes including the normal distribution as a special case. The properties of the model are briefly discussed and risk estimates are derived based on the asymptotic properties of the maximum likelihood estimates. An example is used for illustration.
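The "hybrid" definition of risk for a continuous response can be sketched under the simpler normal model that the beta-normal class generalizes (all parameter values are hypothetical, the linear mean is an assumption, and the adverse direction is taken to be low response):

```python
from statistics import NormalDist

def risk(dose, beta0, beta1, sigma, cutoff):
    # P(response falls below an adverse cutoff) when the response is
    # normal with mean mu(d) = beta0 + beta1 * dose (hypothetical linear mean)
    mu = beta0 + beta1 * dose
    return NormalDist(mu, sigma).cdf(cutoff)

def extra_risk_continuous(dose, beta0, beta1, sigma, cutoff):
    # Extra risk over background, as used in benchmark analysis
    p0 = risk(0.0, beta0, beta1, sigma, cutoff)
    pd = risk(dose, beta0, beta1, sigma, cutoff)
    return (pd - p0) / (1.0 - p0)
```

Replacing the normal distribution with a beta-normal one changes only the tail-probability calculation; the extra shape parameters then let the estimated risk depart from the symmetric case.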
Mehdi Razzaghi

3.
Model averaging (MA) has been proposed as a method of accommodating model uncertainty when estimating risk. Although the use of MA is inherently appealing, little is known about its performance using general modeling conditions. We investigate the use of MA for estimating excess risk using a Monte Carlo simulation. Dichotomous response data are simulated under various assumed underlying dose–response curves, and nine dose–response models (from the USEPA Benchmark dose model suite) are fit to obtain both model specific and MA risk estimates. The benchmark dose estimates (BMDs) from the MA method, as well as estimates from other commonly selected models, e.g., best fitting model or the model resulting in the smallest BMD, are compared to the true benchmark dose value to better understand both bias and coverage behavior in the estimation procedure. The MA method has a small bias when estimating the BMD that is similar to the bias of BMD estimates derived from the assumed model. Further, when a broader range of models are included in the family of models considered in the MA process, the lower bound estimate provided coverage close to the nominal level, which is superior to the other strategies considered. This approach provides an alternative method for risk managers to estimate risk while incorporating model uncertainty.
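One common weighting scheme for model averaging uses Akaike weights; a minimal sketch (the nine-model suite and the paper's exact weighting are not reproduced, and averaging per-model BMD estimates is only one of several possible MA estimators):

```python
import math

def akaike_weights(aics):
    # Convert AIC values into normalized model weights
    deltas = [a - min(aics) for a in aics]
    raw = [math.exp(-0.5 * d) for d in deltas]
    total = sum(raw)
    return [r / total for r in raw]

def model_averaged_bmd(bmds, aics):
    # Weighted average of per-model benchmark dose estimates
    return sum(w * b for w, b in zip(akaike_weights(aics), bmds))
```

Models fitting the data poorly receive weights near zero, so the averaged estimate is dominated by the better-supported dose–response shapes.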
Matthew W. Wheeler

4.
Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose response data reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration methods.
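The core computational idea, integrating a likelihood over unobserved quantities by simulation, can be sketched generically (this is plain Monte Carlo integration, not the paper's specific combination of analytic and Monte Carlo steps; the argument functions are placeholders supplied by the user):

```python
import math
import random

def mc_marginal_likelihood(loglik_given_z, draw_z, n_draws=10_000, seed=0):
    # Approximate L(data) = E_z[ L(data | z) ] by averaging the likelihood
    # over draws of the latent quantity z from its assumed distribution
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        total += math.exp(loglik_given_z(draw_z(rng)))
    return total / n_draws
```

The same marginal likelihood can then be maximized, or profiled, under whichever BMD definition is appropriate, even when the published data are only summary statistics.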
Kenny S. Crump

5.
Although benchmark-dose methodology has existed for more than 20 years, benchmark doses (BMDs) still have not fully supplanted the no-observed-adverse-effect level (NOAEL) and lowest-observed-adverse-effect level (LOAEL) as points of departure from the experimental dose–response range for setting acceptable exposure levels of toxic substances. Among the issues involved in replacing the NOAEL (LOAEL) with a BMD are (1) which added risk level(s) above background risk should be targeted as benchmark responses (BMRs), (2) whether to apply the BMD methodology to both carcinogenic and noncarcinogenic toxic effects, and (3) how to model continuous health effects that are not observed in a natural risk-based context, as dichotomous health effects are. This paper addresses these issues and recommends specific BMDs to replace the NOAEL and LOAEL.
Ralph L. Kodell

6.
Infectious disease surveillance has become an international top priority due to the perceived risk of bioterrorism. This is driving the improvement of real-time geo-spatial surveillance systems for monitoring disease indicators, which is expected to have many benefits beyond detecting a bioterror event. West Nile Virus surveillance in New York State (USA) is highlighted as a working system that uses dead American Crows (Corvus brachyrhynchos) to prospectively indicate viral activity prior to human onset. A cross-disciplinary review is then presented to argue that this system, and infectious disease surveillance in general, can be improved by complementing spatial cluster detection of an outcome variable with predictive “risk mapping” that incorporates spatiotemporal data on the environment, climate and human population through the flexible class of generalized linear mixed models.
Glen D. Johnson

7.
Consider the removal experiment used to estimate population sizes. Statistical methods towards testing the homogeneity of capture probabilities of animals, including a graphical diagnostic and a formal test, are presented and illustrated by real biological examples. Simulation is used to assess the test and compare it with the χ2 test.
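For concreteness, the classical two-occasion removal estimator (often attributed to Zippin) gives closed-form estimates under the homogeneous capture probability that the paper's diagnostics are designed to check; the removal counts used in the comment are hypothetical:

```python
def removal_estimates(u1, u2):
    # Two-occasion removal estimator: u1, u2 are the numbers of animals
    # removed on occasions 1 and 2; assumes a common capture probability p
    if u1 <= u2:
        raise ValueError("estimator requires u1 > u2")
    p_hat = (u1 - u2) / u1          # estimated capture probability
    n_hat = u1 ** 2 / (u1 - u2)     # estimated population size
    return n_hat, p_hat
```

A systematic lack of fit of such homogeneous-p estimates is exactly the kind of signal that a formal homogeneity test, or a graphical diagnostic, makes visible.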
Chang Xuan Mao

8.
The influence of multiple anchored fish aggregating devices (FADs) on the spatial behavior of yellowfin (Thunnus albacares) and bigeye tuna (T. obesus) was investigated by equipping all thirteen FADs surrounding the island of Oahu (HI, USA) with automated sonic receivers (“listening stations”) and intra-peritoneally implanting individually coded acoustic transmitters in 45 yellowfin and 12 bigeye tuna. Thus, the FAD network became a multi-element passive observatory of the residence and movement characteristics of tuna within the array. Yellowfin tuna were detected within the FAD array for up to 150 days, while bigeye tuna were only observed up to a maximum of 10 days after tagging. Only eight yellowfin tuna (out of 45) and one bigeye tuna (out of 12) visited FADs other than their FAD of release. Those nine fish tended to visit nearest neighboring FADs and, in general, spent more time at their FAD of release than at the others. Fish visiting the same FAD several times or visiting other FADs tended to stay longer in the FAD network. A majority of tagged fish exhibited some synchronicity when departing the FADs but not all tagged fish departed a FAD at the same time: small groups of tagged fish left together while others remained. We hypothesize that tuna (at an individual or collective level) consider local conditions around any given FAD to be representative of the environment on a larger scale (e.g., the entire island) and when those conditions become unfavorable the tuna move to a completely different area. Thus, while the anchored FADs surrounding the island of Oahu might concentrate fish and make them more vulnerable to fishing, at a meso-scale they might not entrain fish longer than if there were no (or very few) FADs in the area. At the existing FAD density, the ‘island effect’ is more likely to be responsible for the general presence of fish around the island than the FADs. We recommend further investigation of this hypothesis.
Laurent Dagorn (Corresponding author)
Kim N. Holland
David G. Itano

9.
Polygon-based thematic maps can be composed of boundaries that exist by definition—i.e., bona fide boundaries—or those that exist relative to a specific interpretation of a spatial phenomenon—i.e., fiat boundaries. The construction of maps composed of fiat boundaries is usually based on a subjective interpretive methodology that is affected by the data used to construct the map and the minimum mapping unit employed. That fiat boundaries are not the same as bona fide boundaries affects their use in computer-based spatial decision support tools. This is discussed both in terms of an analysis conducted at one specific moment, and in respect to increasingly common multi-temporal analysis.
Kim Lowell

10.
Confidence intervals for the mean of the delta-lognormal distribution
Data that are skewed and contain a relatively high proportion of zeros can often be modelled using a delta-lognormal distribution. We consider three methods of calculating a 95% confidence interval for the mean of this distribution, and use simulation to compare the methods, across a range of realistic scenarios. The best method, in terms of coverage, is that based on the profile-likelihood. This gives error rates that are within 1% (lower limit) or 3% (upper limit) of the nominal level, unless the sample size is small and the level of skewness is moderate to high. Our results will also apply to the delta-lognormal linear model, when we wish to calculate a confidence interval for the expected value of the response variable, given the value of one or more explanatory variables. We illustrate the three methods using data on red cod densities, taken from a fisheries trawl survey in New Zealand.
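A plug-in estimate of the delta-lognormal mean is straightforward (a sketch only; the profile-likelihood interval favored in the paper requires constrained maximization of the likelihood and is not shown):

```python
import math
from statistics import fmean, pvariance

def delta_lognormal_mean(data):
    # Plug-in estimate of E[Y] = p * exp(mu + sigma^2 / 2), where p is the
    # proportion of nonzero observations and (mu, sigma^2) are the ML
    # estimates computed from the logs of the nonzero values
    nonzero = [y for y in data if y > 0]
    p = len(nonzero) / len(data)
    logs = [math.log(y) for y in nonzero]
    return p * math.exp(fmean(logs) + pvariance(logs) / 2)
```

Intervals built around such plug-in estimates can undercover for small samples or strong skewness, which is the regime where the abstract notes the profile-likelihood method also degrades.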
David Fletcher

11.
Determining the optimum number of increments in composite sampling
Composite sampling can be more cost effective than simple random sampling. This paper considers how to determine the optimum number of increments to use in composite sampling. Composite sampling terminology and theory are outlined and a method is developed which accounts for different sources of variation in compositing and data analysis. This method is used to define and understand the process of determining the optimum number of increments that should be used in forming a composite. The blending variance is shown to have a smaller range of possible values than previously reported when estimating the number of increments in a composite sample. Accounting for differing levels of the blending variance significantly affects the estimated number of increments.
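Under a simple additive variance model (an assumption for illustration, not the decomposition derived in the paper), the smallest number of increments k meeting a target variance solves sigma_inc^2 / k + sigma_blend^2 + sigma_meas^2 <= target:

```python
import math

def increments_needed(var_increment, var_blend, var_meas, target_var):
    # Smallest k with var_increment / k + var_blend + var_meas <= target_var,
    # where var_blend is the blending variance and var_meas the analytical one
    residual = target_var - var_blend - var_meas
    if residual <= 0:
        raise ValueError("target unattainable: fixed variance components too large")
    return math.ceil(var_increment / residual)
```

The paper's observation that the blending variance has a narrower range than previously reported matters here: overstating var_blend shrinks the residual and inflates the estimated number of increments.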
John E. Hathaway

12.
A complex multivariate spatial point pattern of a plant community with high biodiversity is modelled using a hierarchical multivariate point process model. In the model, interactions between plants with different post-fire regeneration strategies are of key interest. We consider initially a maximum likelihood approach to inference where problems arise due to unknown interaction radii for the plants. We next demonstrate that a Bayesian approach provides a flexible framework for incorporating prior information concerning the interaction radii. From an ecological perspective, we are able both to confirm existing knowledge on species’ interactions and to generate new biological questions and hypotheses on species’ interactions.
Rasmus P. Waagepetersen

13.
In this paper we examine the use of data augmentation techniques for simplifying iterative simulation in the context of both Bayesian and classical statistical inference for survival rate estimation. We examine two distinct model families common in population ecology to illustrate our ideas, ring-recovery models and capture–recapture models, and we present the computational advantage of this approach. We discuss also the fact that problems associated with identifiability in the classical framework can be overcome using data augmentation, but highlight the dangers in doing so under both inferential paradigms.
I. C. Olsen

14.
Rarefaction estimates how many species are expected in a random sample of individuals from a larger collection and allows meaningful comparisons among collections of different sizes. It assumes random spatial dispersion. However, two common dispersion patterns, within-species clumping and segregation among species, can cause rarefaction to overestimate the species richness of a smaller continuous area. We use field studies and computer simulations to determine (1) how robust rarefaction is to nonrandom spatial dispersion and (2) whether simple measures of spatial autocorrelation can predict the bias in rarefaction estimates. Rarefaction does not estimate species richness accurately for many communities, especially at small sample sizes. Measures of spatial autocorrelation of the more abundant species do not reliably predict amount of bias. Survey sites should be standardized to equal-sized areas before sampling. When sites are of equal area but differ in number of individuals sampled, rarefaction can standardize collections. When communities are sampled from different-sized areas, the mean and confidence intervals of species accumulation curves allow more meaningful comparisons among sites. Electronic supplementary material  The online version of this article (doi:) contains supplementary material, which is available to authorized users.
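Individual-based (Hurlbert) rarefaction has a closed form; a minimal sketch, which embodies precisely the random-dispersion assumption the study shows can fail:

```python
from math import comb

def rarefied_richness(counts, n):
    # Expected number of species in a random subsample of n individuals
    # drawn without replacement from the abundance vector `counts`
    total = sum(counts)
    return sum(1 - comb(total - c, n) / comb(total, n) for c in counts)
```

With within-species clumping, a contiguous sample of n individuals contains fewer species on average than this expectation, which is the overestimation bias the authors document.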
Daniel Simberloff

15.
Hierarchical mark-recapture models offer three advantages over classical mark-recapture models: (i) they allow expression of complicated models in terms of simple components; (ii) they provide a convenient way of modeling missing data and latent variables in a way that allows expression of relationships involving latent variables in the model; (iii) they provide a convenient way of introducing parsimony into models involving many nuisance parameters. Expressing models using the complete data likelihood we show how many of the standard mark-recapture models for open populations can be readily fitted using the software WinBUGS. We include examples that illustrate fitting the Cormack–Jolly–Seber model, multi-state and multi-event models, models including auxiliary data, and models including density dependence.
Darryl I. MacKenzie

16.
This paper explores the use of, and problems that arise in, kernel smoothing and parametric estimation of the relationships between wildfire incidence and various meteorological variables. Such relationships may be treated as components in separable point process models for wildfire activity. The resulting models can be used for comparative purposes in order to assess the predictive performance of the Burning Index.
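The kernel-smoothing component can be illustrated with a standard Nadaraya-Watson estimator (illustrative only; the paper's separable point-process components and the Burning Index comparison are not reproduced):

```python
import math

def kernel_smooth(x0, xs, ys, bandwidth):
    # Nadaraya-Watson estimate of E[y | x = x0] with a Gaussian kernel:
    # a weighted average of ys, with weights decaying in distance from x0
    weights = [math.exp(-0.5 * ((x0 - x) / bandwidth) ** 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

Applied to, say, wildfire counts against a meteorological covariate, such smooths supply the nonparametric relationships that the separable model composes.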
Frederic Paik Schoenberg

17.
When animals die in traps in a mark-recapture study, straightforward likelihood inferences are possible in a class of models. The class includes M0, Mt, and Mb of White et al. (Los Alamos National Laboratory, LA-8787-NERP, pp 235, 1982), that is, the models that do not involve heterogeneity. We include three Markov chain “persistence” models and show that they provide good fits in a trapping study of deer mice in the Cascade-Siskiyou National Monument of Southern Oregon, where trapping mortality was high.
Fred L. Ramsey

18.
19.
The concept of the renewal property is extended to processes indexed by a multidimensional time parameter. The definition given includes not only partial sum processes, but also Poisson processes and many other point processes whose jump points are not totally ordered. Various properties of renewal processes are discussed. Renewal processes are proposed as a basis for modelling the spread of a forest fire under a prevailing wind.
B. Gail Ivanoff

20.
Ecological studies enable investigation of geographic variations in exposure to environmental variables, across groups, in relation to health outcomes measured on a geographic scale. Such studies are subject to ecological biases, including pure specification bias which arises when a nonlinear individual exposure-risk model is assumed to apply at the area level. Introduction of the within-area variance of exposure should induce a marked reduction in this source of ecological bias. Assuming several measurements per area of exposure and no confounding risk factors, we study the model including the within-area exposure variability when Gaussian within-area exposure distribution is assumed. The robustness is assessed when the within-area exposure distribution is misspecified. Two underlying exposure distributions are studied: the Gamma distribution and an unimodal mixture of two Gaussian distributions. In case of strong ecological association, this model can reduce the bias and improve the precision of the individual parameter estimates when the within-area exposure means and variances are correlated. These different models are applied to analyze the ecological association between radon concentration and childhood acute leukemia in France.
Léa Fortunato


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号