20 related articles found (search time: 468 ms)
1.
Brooke E. Buckley, Walter W. Piegorsch, R. Webster West. Environmental and Ecological Statistics 2009, 16(1):53-62
In modern environmental risk analysis, inferences are often desired on those low dose levels at which a fixed benchmark risk
is achieved. In this paper, we study the use of confidence limits on parameters from a simple one-stage model of risk historically
popular in benchmark analysis with quantal data. Based on these confidence bounds, we present methods for deriving upper confidence
limits on extra risk and lower bounds on the benchmark dose. The methods are seen to extend automatically to the case where
simultaneous inferences are desired at multiple doses. Monte Carlo evaluations explore characteristics of the parameter estimates
and the confidence limits under this setting.
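In the one-stage model, risk takes the form R(d) = 1 - exp(-(beta0 + beta1*d)), so the extra risk and the benchmark dose have closed forms. A minimal sketch with a hypothetical slope value (not the authors' code; a BMDL would replace the slope estimate with its upper confidence bound, giving a lower bound on the dose):

```python
import math

def extra_risk(d, beta1):
    """Extra risk for the one-stage model R(d) = 1 - exp(-(beta0 + beta1*d)):
    RE(d) = (R(d) - R(0)) / (1 - R(0)) = 1 - exp(-beta1*d)."""
    return 1.0 - math.exp(-beta1 * d)

def benchmark_dose(bmr, beta1):
    """Dose at which extra risk equals the benchmark risk (BMR):
    solve 1 - exp(-beta1*d) = bmr  =>  d = -ln(1 - bmr) / beta1."""
    return -math.log(1.0 - bmr) / beta1

beta1_hat = 0.02                          # hypothetical fitted slope
bmd10 = benchmark_dose(0.10, beta1_hat)   # dose giving 10% extra risk
print(round(bmd10, 3))
```

Simultaneous inference at several benchmark risks amounts to repeating this inversion at each BMR with appropriately widened confidence bounds.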
Corresponding author: R. Webster West
2.
Model averaging (MA) has been proposed as a method of accommodating model uncertainty when estimating risk. Although the use
of MA is inherently appealing, little is known about its performance using general modeling conditions. We investigate the
use of MA for estimating excess risk using a Monte Carlo simulation. Dichotomous response data are simulated under various
assumed underlying dose–response curves, and nine dose–response models (from the USEPA Benchmark dose model suite) are fit
to obtain both model specific and MA risk estimates. The benchmark dose estimates (BMDs) from the MA method, as well as estimates
from other commonly selected models, e.g., best fitting model or the model resulting in the smallest BMD, are compared to
the true benchmark dose value to better understand both bias and coverage behavior in the estimation procedure. The MA method
has a small bias when estimating the BMD that is similar to the bias of BMD estimates derived from the assumed model. Further,
when a broader range of models is included in the family considered in the MA process, the lower bound estimate provides coverage close to the nominal level, which is superior to the other strategies considered. This approach provides
an alternative method for risk managers to estimate risk while incorporating model uncertainty.
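One common MA scheme weights the per-model estimates by Akaike weights; a sketch with hypothetical AIC and BMD values (the USEPA model suite has its own weighting details, which are not reproduced here):

```python
import math

def akaike_weights(aics):
    """Convert AIC values to model-averaging weights:
    w_i proportional to exp(-(AIC_i - AIC_min)/2), normalized to sum to 1."""
    amin = min(aics)
    raw = [math.exp(-(a - amin) / 2.0) for a in aics]
    total = sum(raw)
    return [r / total for r in raw]

# Hypothetical AICs and per-model BMD estimates for three dose-response fits
aics = [100.2, 101.0, 105.7]
bmds = [4.1, 3.6, 6.2]
w = akaike_weights(aics)
bmd_ma = sum(wi * bi for wi, bi in zip(w, bmds))
print([round(wi, 3) for wi in w], round(bmd_ma, 2))
```

Better-fitting models (smaller AIC) dominate the average, but no single model is selected outright, which is how MA accommodates model uncertainty.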
Corresponding author: Matthew W. Wheeler
3.
Benchmark calculations often are made from data extracted from publications. Such data may not be in a form most appropriate
for benchmark analysis, and, as a result, suboptimal and/or non-standard benchmark analyses are often applied. This problem
can be mitigated in some cases using Monte Carlo computational methods that allow the likelihood of the published data to
be calculated while still using an appropriate benchmark dose (BMD) definition. Such an approach is illustrated herein using
data from a study of workers exposed to styrene, in which a hybrid BMD calculation is implemented from dose response data
reported only as means and standard deviations of ratios of scores on neuropsychological tests from exposed subjects to corresponding
scores from matched controls. The likelihood of the data is computed using a combination of analytic and Monte Carlo integration
methods.
Corresponding author: Kenny S. Crump
4.
Lucio Barabesi. Environmental and Ecological Statistics 2007, 14(4):483-494
Line-intersect sampling based on segmented transects is adopted in many forest inventories to quantify important ecological
indicators such as coarse woody debris attributes. By assuming a design-based approach, Affleck, Gregoire and Valentine (2005,
Environ Ecol Stat 12:139–154) have recently proposed a sampling protocol for this line-intersect setting and have suggested
an estimation method based on linear homogeneous estimators. However, their proposal does not encompass the estimation procedure
currently adopted in some national forest inventories. Hence, the present paper aims to introduce a unifying perspective for
both methods. Moreover, it is shown that the two procedures give rise to coincident estimators for almost all the usual field
applications. Finally, some strategies for efficient segmented-transect replications are considered.
Corresponding author: Lucio Barabesi
5.
We propose a hierarchical modeling approach for explaining a collection of spatially referenced time series of extreme values.
We assume that the observations follow generalized extreme value (GEV) distributions whose locations and scales are jointly
spatially dependent where the dependence is captured using multivariate Markov random field models specified through coregionalization.
In addition, there is temporal dependence in the locations. There are various ways to provide appropriate specifications;
we consider four choices. The models can be fitted using a Markov Chain Monte Carlo (MCMC) algorithm to enable inference for
parameters and to provide spatio–temporal predictions. We fit the models to a set of gridded interpolated precipitation data
collected over a 50-year period for the Cape Floristic Region in South Africa, summarizing results for what appears to be
the best choice of model.
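The GEV building block can be sketched via its quantile function, x_p = mu + (sigma/xi)*((-ln p)^(-xi) - 1) for xi != 0 (hypothetical location/scale/shape values; the paper's hierarchical spatial structure is not reproduced here):

```python
import math
import random

def gev_quantile(p, mu, sigma, xi):
    """Inverse CDF of the GEV distribution (xi != 0 case)."""
    return mu + (sigma / xi) * ((-math.log(p)) ** (-xi) - 1.0)

def gev_sample(n, mu, sigma, xi, seed=0):
    """Draw GEV variates by inversion of uniform random numbers."""
    rng = random.Random(seed)
    return [gev_quantile(rng.random(), mu, sigma, xi) for _ in range(n)]

# Hypothetical annual-maximum precipitation parameters for one grid cell
mu, sigma, xi = 50.0, 10.0, 0.1
x = sorted(gev_sample(20000, mu, sigma, xi))
# The empirical 0.99 quantile should sit near the theoretical 100-year level
print(round(x[int(0.99 * len(x))], 1), round(gev_quantile(0.99, mu, sigma, xi), 1))
```

In the hierarchical model the (mu, sigma) surfaces would themselves be spatially dependent random fields rather than fixed numbers.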
Corresponding author: Alan E. Gelfand
6.
Frederic Paik Schoenberg, Jamie Pompa, Chien-Hsun Chang. Environmental and Ecological Statistics 2009, 16(2):251-269
This paper explores the use of, and problems that arise in, kernel smoothing and parametric estimation of the relationships
between wildfire incidence and various meteorological variables. Such relationships may be treated as components in separable
point process models for wildfire activity. The resulting models can be used for comparative purposes in order to assess the
predictive performance of the Burning Index.
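A kernel-smoothed relationship of the kind described can be sketched with a Nadaraya-Watson estimator (hypothetical data; not the authors' estimator or bandwidth choice):

```python
import math

def nw_smooth(x0, xs, ys, h):
    """Nadaraya-Watson kernel regression estimate at x0 with a Gaussian
    kernel of bandwidth h: a locally weighted average of the responses."""
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# Hypothetical daily (temperature, wildfire-count) pairs
xs = [10, 15, 20, 25, 30, 35]
ys = [0, 1, 1, 3, 5, 9]
val = nw_smooth(27.5, xs, ys, h=3.0)
print(round(val, 2))
```

In a separable point-process model, smoothed components like this one would multiply together to form the conditional intensity.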
Corresponding author: Frederic Paik Schoenberg
7.
B. Gail Ivanoff. Environmental and Ecological Statistics 2009, 16(2):153-171
The concept of the renewal property is extended to processes indexed by a multidimensional time parameter. The definition
given includes not only partial sum processes, but also Poisson processes and many other point processes whose jump points
are not totally ordered. Various properties of renewal processes are discussed. Renewal processes are proposed as a basis
for modelling the spread of a forest fire under a prevailing wind.
Corresponding author: B. Gail Ivanoff
8.
9.
In this paper we examine the use of data augmentation techniques for simplifying iterative simulation in the context of both
Bayesian and classical statistical inference for survival rate estimation. We examine two distinct model families common in
population ecology to illustrate our ideas, ring-recovery models and capture–recapture models, and we present the computational
advantage of this approach. We also discuss how problems associated with identifiability in the classical framework can be overcome using data augmentation, but highlight the dangers of doing so under both inferential paradigms.
Corresponding author: I. C. Olsen
10.
Chang Xuan Mao. Environmental and Ecological Statistics 2007, 14(4):473-481
Consider the removal experiment used to estimate population sizes. Statistical methods for testing the homogeneity of capture probabilities of animals, including a graphical diagnostic and a formal test, are presented and illustrated by real
biological examples. Simulation is used to assess the test and compare it with the χ2 test.
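For the simplest two-occasion removal design, the classical Moran-Zippin estimator illustrates the homogeneous-capture-probability model whose assumption the diagnostics above probe (hypothetical catch counts, not the paper's data):

```python
def removal_estimate(u1, u2):
    """Two-occasion removal (Moran-Zippin) estimator under a homogeneous
    capture probability p: p_hat = (u1 - u2)/u1, N_hat = u1**2/(u1 - u2),
    where u1 and u2 are the catches on the first and second occasions."""
    if u1 <= u2:
        raise ValueError("estimator requires u1 > u2")
    p_hat = (u1 - u2) / u1
    n_hat = u1 ** 2 / (u1 - u2)
    return n_hat, p_hat

# Hypothetical catches on two removal occasions
n_hat, p_hat = removal_estimate(60, 30)
print(n_hat, p_hat)
```

Heterogeneous capture probabilities distort the geometric decline in successive catches that this estimator relies on, which is what homogeneity tests look for.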
Corresponding author: Chang Xuan Mao
11.
External drift kriging of NOx concentrations with dispersion model output in a reduced air quality monitoring network
Jan van de Kassteele, Alfred Stein, Arnold L. M. Dekkers, Guus J. M. Velders. Environmental and Ecological Statistics 2009, 16(3):321-339
In the mid-1980s the Dutch NOx air quality monitoring network was reduced from 73 to 32 rural and city background stations, leading to higher spatial uncertainties.
In this study, several other sources of information are being used to help reduce uncertainties in parameter estimation and
spatial mapping. For parameter estimation, we used Bayesian inference. For mapping, we used kriging with external drift (KED)
including secondary information from a dispersion model. The methods were applied to atmospheric NOx concentrations on rural and urban scales. We compared Bayesian estimation with restricted maximum likelihood estimation and
KED with universal kriging. As a reference we also included ordinary least squares (OLS). Comparison of several parameter
estimation and spatial interpolation methods was done by cross-validation. Bayesian analysis resulted in an error reduction
of 10 to 20% as compared to restricted maximum likelihood, whereas KED resulted in an error reduction of 50% as compared to
universal kriging. Where observations were sparse, the predictions were substantially improved by inclusion of the dispersion
model output and by using available prior information. No major improvement was observed as compared to OLS, the cause presumably
being that much good information is contained in the dispersion model output, so that no additional spatial residual random
field is required to explain the data. In all, we conclude that the reduction in the monitoring network could be compensated for by modern geostatistical methods, and that a simple traditional statistical model is of almost equal quality.
Corresponding author: Jan van de Kassteele
12.
The influence of multiple anchored fish aggregating devices (FADs) on the spatial behavior of yellowfin (Thunnus albacares) and bigeye tuna (T. obesus) was investigated by equipping all thirteen FADs surrounding the island of Oahu (HI, USA) with automated sonic receivers
(“listening stations”) and intra-peritoneally implanting individually coded acoustic transmitters in 45 yellowfin and 12 bigeye
tuna. Thus, the FAD network became a multi-element passive observatory of the residence and movement characteristics of tuna
within the array. Yellowfin tuna were detected within the FAD array for up to 150 days, while bigeye tuna were only observed
up to a maximum of 10 days after tagging. Only eight yellowfin tuna (out of 45) and one bigeye tuna (out of 12) visited FADs
other than their FAD of release. Those nine fish tended to visit nearest neighboring FADs and, in general, spent more time
at their FAD of release than at the others. Fish visiting the same FAD several times or visiting other FADs tended to stay
longer in the FAD network. A majority of tagged fish exhibited some synchrony when departing the FADs, but not all tagged fish departed a FAD at the same time: small groups of tagged fish left together while others remained. We hypothesize that
tuna (at an individual or collective level) consider local conditions around any given FAD to be representative of the environment
on a larger scale (e.g., the entire island) and when those conditions become unfavorable the tuna move to a completely different
area. Thus, while the anchored FADs surrounding the island of Oahu might concentrate fish and make them more vulnerable to
fishing, at a meso-scale they might not entrain fish longer than if there were no (or very few) FADs in the area. At the existing
FAD density, the ‘island effect’ is more likely to be responsible for the general presence of fish around the island than
the FADs. We recommend further investigation of this hypothesis.
Corresponding author: Laurent Dagorn. Contacts: Kim N. Holland, David G. Itano
13.
Den Boychuk, W. John Braun, Reg J. Kulperger, Zinovi L. Krougly, David A. Stanford. Environmental and Ecological Statistics 2009, 16(2):133-151
We consider a stochastic fire growth model, with the aim of predicting the behaviour of large forest fires. Such a model can
describe not only average growth, but also the variability of the growth. Implementing such a model in a computing environment
allows one to obtain probability contour plots, burn size distributions, and distributions of time to specified events. Such
a model also allows the incorporation of a stochastic spotting mechanism.
Corresponding author: Reg J. Kulperger
14.
Lance A. Waller. Environmental and Ecological Statistics 2008, 15(3):259-263
The three papers included in this special issue represent a set of presentations in an invited session on disease ecology
at the 2005 Spring Meeting of the Eastern North American Region of the International Biometric Society. The papers each address
statistical estimation and inference for particular components of different disease processes and, taken together, illustrate
the breadth of statistical issues arising in the study of the ecology and public health impact of disease. As an introduction,
we provide a very brief overview of the area of “disease ecology”, a variety of synonyms addressing different aspects of disease
ecology, and present a schematic structure illustrating general components of the underlying disease process, data collection
issues, and different disciplinary perspectives ranging from microbiology to public health surveillance.
Corresponding author: Lance A. Waller
15.
Rudolf Izsák. Environmental and Ecological Statistics 2008, 15(2):143-156
This paper discusses properties and analytic expressions for the Poisson lognormal distribution, such as its moments, maximum likelihood function, and related derivatives. The author provides a sharp approximation of the integrals underlying the Poisson lognormal probabilities and analyzes the choice of initial values in the fitting procedure. Based on these, he describes a new procedure for maximum likelihood fitting of the truncated Poisson lognormal distribution.
The method and results are illustrated on real data. The computer program for calculations is freely available.
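The defining integral P(K = k) = ∫ Poisson(k; lambda) LogNormal(lambda; mu, sigma) d(lambda) can be evaluated by Gauss-Hermite quadrature after the substitution lambda = exp(mu + sqrt(2)*sigma*x); a sketch with hypothetical parameters (this is a standard quadrature, not the author's approximation):

```python
import math
import numpy as np

def poisson_lognormal_pmf(k, mu, sigma, order=60):
    """Poisson-lognormal probability of count k via Gauss-Hermite quadrature:
    nodes x and weights w approximate the integral over the mixing density."""
    x, w = np.polynomial.hermite.hermgauss(order)
    lam = np.exp(mu + math.sqrt(2.0) * sigma * x)
    # Poisson pmf computed on the log scale for numerical stability
    pois = np.exp(-lam + k * np.log(lam) - math.lgamma(k + 1))
    return float(np.sum(w * pois) / math.sqrt(math.pi))

# Hypothetical abundance parameters; probabilities should sum to ~1 over k
probs = [poisson_lognormal_pmf(k, mu=0.5, sigma=1.0) for k in range(200)]
print(round(sum(probs), 4))
```

The mean of the counts equals exp(mu + sigma^2/2), the mean of the mixing lognormal, which gives a quick sanity check on the quadrature.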
Corresponding author: Rudolf Izsák
16.
David Fletcher. Environmental and Ecological Statistics 2008, 15(2):175-189
Data that are skewed and contain a relatively high proportion of zeros can often be modelled using a delta-lognormal distribution.
We consider three methods of calculating a 95% confidence interval for the mean of this distribution, and use simulation to
compare the methods, across a range of realistic scenarios. The best method, in terms of coverage, is that based on the profile-likelihood.
This gives error rates that are within 1% (lower limit) or 3% (upper limit) of the nominal level, unless the sample size is
small and the level of skewness is moderate to high. Our results will also apply to the delta-lognormal linear model, when
we wish to calculate a confidence interval for the expected value of the response variable, given the value of one or more
explanatory variables. We illustrate the three methods using data on red cod densities, taken from a fisheries trawl survey
in New Zealand.
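The target of these confidence intervals is the delta-lognormal mean, p * exp(mu + sigma^2/2), where p is the nonzero proportion and (mu, sigma^2) are the log-scale mean and variance of the nonzero values. A sketch of the point estimate with hypothetical data (the profile-likelihood interval itself is not reproduced here):

```python
import math

def delta_lognormal_mean(data):
    """Point estimate of the mean of a delta-lognormal variable:
    p_hat * exp(mu_hat + s2_hat / 2), using the sample proportion of
    nonzero values and the log-scale sample mean/variance."""
    nonzero = [y for y in data if y > 0]
    p = len(nonzero) / len(data)
    logs = [math.log(y) for y in nonzero]
    mu = sum(logs) / len(logs)
    s2 = sum((v - mu) ** 2 for v in logs) / (len(logs) - 1)
    return p * math.exp(mu + s2 / 2.0)

# Hypothetical density data with excess zeros
data = [0, 0, 0, 1.2, 0.8, 3.5, 0, 2.0, 0.5, 6.1]
print(delta_lognormal_mean(data))
```

Because exp(mu + s2/2) is a nonlinear function of the estimates, symmetric Wald intervals can perform poorly, which motivates the profile-likelihood approach the paper recommends.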
Corresponding author: David Fletcher
17.
We present a method for detecting the zones where an irregularly sampled variable changes abruptly in the plane. Such zones
are called Zones of Abrupt Change (ZACs). This method not only allows estimation of ZACs, but also testing of their statistical
significance against the null hypothesis of a stationary correlated random field. The sampling pattern, in particular its
local density, is crucial in the detection of potential ZACs. In this paper, we address the problem of evaluating the sampling
pattern by assessing the power of the local test used for detecting ZACs. It is shown that mapping the power allows us to
identify zones where ZACs may or may not be detected. The methodology is applied to a soil data set sampled at eight different
dates in an agricultural field. Detecting ZACs for the soil water content allowed us to identify permanent structures in the
agricultural field related to the boundaries between different soil types. Mapping the power for various sampling densities
proved to be useful to determine the minimal sampling density necessary for detecting ZACs.
Corresponding author: Edith Gabriel
18.
R. Webster West, Daniela K. Nitcheva, Walter W. Piegorsch. Environmental and Ecological Statistics 2009, 16(1):63-73
A primary objective in quantitative risk assessment is the characterization of risk which is defined to be the likelihood
of an adverse effect caused by an environmental toxin or chemical agent. In modern risk-benchmark analysis, attention centers on the “benchmark dose” at which a fixed benchmark level of risk is achieved, with lower confidence limits on this dose
being of primary interest. In practice, a range of benchmark risks may be under study, so that the individual lower confidence
limits on benchmark dose must be corrected for simultaneity in order to maintain a specified overall level of confidence.
For the case of quantal data, simultaneous methods have been constructed that appeal to the large sample normality of parameter
estimates. The suitability of these methods for use with small sample sizes will be considered. A new bootstrap technique
is proposed as an alternative to the large sample methodology. This technique is evaluated via a simulation study and examples
from environmental toxicology.
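A percentile parametric bootstrap for an upper confidence limit on extra risk can be sketched as follows (hypothetical quantal data; the paper's method targets simultaneous limits across a range of benchmark risks, which this single-dose sketch does not attempt):

```python
import random

def extra_risk(x0, n0, x1, n1):
    """Extra risk from control (x0/n0) and dosed (x1/n1) groups:
    (p1 - p0) / (1 - p0)."""
    p0, p1 = x0 / n0, x1 / n1
    return (p1 - p0) / (1.0 - p0)

def bootstrap_upper_limit(x0, n0, x1, n1, level=0.95, reps=4000, seed=1):
    """Parametric bootstrap: resample binomial counts at the fitted
    proportions and take the percentile upper limit of the extra risk."""
    rng = random.Random(seed)
    p0, p1 = x0 / n0, x1 / n1
    stats = []
    for _ in range(reps):
        b0 = sum(rng.random() < p0 for _ in range(n0))
        b1 = sum(rng.random() < p1 for _ in range(n1))
        if b0 < n0:                      # skip degenerate resamples
            stats.append(extra_risk(b0, n0, b1, n1))
    stats.sort()
    return stats[int(level * len(stats))]

# Hypothetical quantal data: 2/50 control responses, 9/50 at the test dose
est = extra_risk(2, 50, 9, 50)
upper = bootstrap_upper_limit(2, 50, 9, 50)
print(round(est, 3), upper > est)
```

Unlike the large-sample normal approximation, the bootstrap distribution needs no symmetry assumption, which matters for the small sample sizes the paper considers.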
Corresponding author: R. Webster West
19.
Léa Fortunato, Chantal Guihenneuc-Jouyaux, Margot Tirmarche, Dominique Laurier, Denis Hémon. Environmental and Ecological Statistics 2009, 16(3):341-353
Ecological studies enable investigation of geographic variations in exposure to environmental variables, across groups, in
relation to health outcomes measured on a geographic scale. Such studies are subject to ecological biases, including pure
specification bias which arises when a nonlinear individual exposure-risk model is assumed to apply at the area level. Introduction
of the within-area variance of exposure should induce a marked reduction in this source of ecological bias. Assuming several
measurements per area of exposure and no confounding risk factors, we study the model including the within-area exposure variability
when Gaussian within-area exposure distribution is assumed. The robustness is assessed when the within-area exposure distribution
is misspecified. Two underlying exposure distributions are studied: the Gamma distribution and a unimodal mixture of two Gaussian distributions. In the case of strong ecological association, this model can reduce the bias and improve the precision
of the individual parameter estimates when the within-area exposure means and variances are correlated. These different models
are applied to analyze the ecological association between radon concentration and childhood acute leukemia in France.
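Pure specification bias arises because, for a log-linear individual risk model with Gaussian within-area exposure, the average individual risk is E[exp(beta*X)] = exp(beta*mean + beta^2*var/2), not exp(beta*mean). A small numeric illustration with hypothetical values:

```python
import math

def area_level_risk_naive(beta, mean):
    """Naive aggregate risk: plug the area-level exposure mean into the
    individual log-linear model."""
    return math.exp(beta * mean)

def area_level_risk_adjusted(beta, mean, var):
    """Average individual risk under Gaussian within-area exposure,
    via the normal moment generating function:
    E[exp(beta*X)] = exp(beta*mean + beta^2*var/2)."""
    return math.exp(beta * mean + beta ** 2 * var / 2.0)

# Hypothetical slope and within-area exposure mean/variance
beta, mean, var = 0.01, 100.0, 400.0
print(round(area_level_risk_naive(beta, mean), 3),
      round(area_level_risk_adjusted(beta, mean, var), 3))
```

Ignoring the within-area variance term systematically understates the aggregate risk, which is why introducing that variance reduces the ecological bias.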
Corresponding author: Léa Fortunato
20.
John E. Hathaway, G. Bruce Schaalje, Richard O. Gilbert, Brent A. Pulsipher, Brett D. Matzke. Environmental and Ecological Statistics 2008, 15(3):313-327
Composite sampling can be more cost-effective than simple random sampling. This paper considers how to determine the optimum
number of increments to use in composite sampling. Composite sampling terminology and theory are outlined and a method is
developed which accounts for different sources of variation in compositing and data analysis. This method is used to define
and understand the process of determining the optimum number of increments that should be used in forming a composite. The
blending variance is shown to have a smaller range of possible values than previously reported when estimating the number
of increments in a composite sample. Accounting for differing levels of the blending variance significantly affects the estimated
number of increments.
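A generic variance decomposition illustrates the trade-off (a textbook-style sketch with hypothetical variance components, not the authors' formulation): the between-increment variance shrinks as 1/k, while blending and analytical error do not.

```python
def composite_variance(k, var_sampling, var_blending, var_analysis):
    """Variance of a measurement on a composite of k increments:
    between-increment variance averages down as 1/k; blending and
    analytical variance are incurred once per composite."""
    return var_sampling / k + var_blending + var_analysis

def optimum_increments(var_sampling, var_blending, var_analysis, target, k_max=100):
    """Smallest number of increments whose composite variance meets the
    target, or None if no k up to k_max suffices."""
    for k in range(1, k_max + 1):
        if composite_variance(k, var_sampling, var_blending, var_analysis) <= target:
            return k
    return None

# Hypothetical variance components (same units, e.g. (mg/kg)^2)
print(optimum_increments(var_sampling=9.0, var_blending=0.5,
                         var_analysis=0.5, target=2.0))
```

Because blending variance sets a floor that no number of increments can reduce, narrowing its plausible range, as the paper does, directly changes the optimum k.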
Corresponding author: John E. Hathaway