We investigate several methods commonly used to obtain a benchmark dose and show that those based on full likelihood or profile likelihood methods might have severe shortcomings. We propose two new profile likelihood-based approaches that overcome these problems. Another contribution is the extension of benchmark dose determination to non-full-likelihood models, such as quasi-likelihood and generalized estimating equations, which are widely used in settings such as developmental toxicity, where clustered data are encountered. This widening of the scope of application is made possible by the use of (robust) score statistics. The benchmark dose methods are applied to a data set from a developmental toxicity study.
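The abstract gives no formulas, so as an illustration only, here is a minimal Python sketch of a profile-likelihood lower bound (BMDL) on a benchmark dose for a simple logistic dose-response model with quantal data. The data, the 10% benchmark response, and all function names are assumptions of the sketch, not the authors' specification.

```python
# Illustrative sketch (not the authors' method): profile-likelihood lower bound
# on a benchmark dose (BMD) for a logistic dose-response model with quantal data.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

# Hypothetical quantal data: dose, number affected, group size
dose = np.array([0.0, 0.1, 0.5, 1.0])
affected = np.array([2, 5, 14, 25])
n = np.array([50, 50, 50, 50])
BMR = 0.10  # benchmark response (extra risk), an assumed value

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def negloglik(a, bmd):
    # Reparameterize so that the BMD is itself a model parameter:
    # requiring extra risk = BMR at dose = bmd fixes the slope b.
    p0 = logistic(a)
    p_bmd = p0 + BMR * (1.0 - p0)
    b = (np.log(p_bmd / (1.0 - p_bmd)) - a) / bmd
    eta = np.clip(a + b * dose, -30, 30)
    p = np.clip(logistic(eta), 1e-10, 1 - 1e-10)
    return -np.sum(affected * np.log(p) + (n - affected) * np.log(1.0 - p))

def profile(bmd):
    # Minimize over the nuisance intercept a for a fixed BMD value
    return minimize_scalar(negloglik, args=(bmd,), bounds=(-10, 10),
                           method="bounded").fun

# Profile the likelihood over a grid of candidate BMD values
grid = np.linspace(0.01, 1.0, 200)
prof = np.array([profile(b) for b in grid])
mle_idx = np.argmin(prof)
cutoff = prof[mle_idx] + 0.5 * chi2.ppf(0.90, df=1)  # one-sided 95% bound

# BMDL: smallest BMD whose profiled log-likelihood stays above the cutoff
inside = grid[prof <= cutoff]
print("BMD estimate:", grid[mle_idx], "BMDL:", inside.min())
```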
Recently, public health professionals and other geostatistical researchers have shown increasing interest in boundary analysis, the detection or testing of zones or boundaries that reveal sharp changes in the values of spatially oriented variables. For areal data (i.e., data which consist only of sums or averages over geopolitical regions), Lu and Carlin (Geogr Anal 37: 265–285, 2005) suggested a fully model-based framework for areal wombling using Bayesian hierarchical models with posterior summaries computed using Markov chain Monte Carlo (MCMC) methods, and showed the approach to have advantages over existing non-stochastic alternatives. In this paper, we develop Bayesian areal boundary analysis methods that estimate the spatial neighborhood structure using the value of the process in each region and other variables that indicate how similar two regions are. Boundaries may then be determined by the posterior distribution of either this estimated neighborhood structure or the regional mean response differences themselves. Our methods do require several assumptions (including an appropriate prior distribution, a normal spatial random effect distribution, and a Bernoulli distribution for a set of spatial weights), but also deliver more in terms of full posterior inference for the boundary segments (e.g., direct probability statements regarding the probability that a particular border segment is part of the boundary). We illustrate three different remedies for the computing difficulties encountered in implementing our method. We use simulation to compare among existing purely algorithmic approaches, the Lu and Carlin (2005) method, and our new adjacency modeling methods. We also illustrate more practical modeling issues (e.g., covariate selection) in the context of a breast cancer late detection data set collected at the county level in the state of Minnesota.
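As a rough illustration of the kind of posterior summary described above (not the authors' implementation), the following sketch computes direct posterior probabilities that individual border segments are part of a boundary from hypothetical MCMC draws of the regional mean responses; the adjacency list, the threshold, and all numbers are placeholders.

```python
# Illustrative sketch (not the authors' code): posterior boundary probabilities
# for border segments, given MCMC draws of regional mean responses.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 regions, adjacency list of shared borders (i, j),
# and 1000 posterior draws of the regional mean response mu.
borders = [(0, 1), (1, 2), (2, 3), (0, 3)]
mu_draws = rng.normal(loc=[0.0, 0.1, 1.2, 1.3], scale=0.2, size=(1000, 4))

threshold = 0.5  # assumed cutoff for a "sharp change" across a border

for i, j in borders:
    diff = np.abs(mu_draws[:, i] - mu_draws[:, j])
    # Direct probability statement: posterior probability that this border
    # segment is part of the boundary
    prob = np.mean(diff > threshold)
    print(f"border ({i},{j}): P(|mu_i - mu_j| > {threshold}) = {prob:.2f}")
```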
This paper considers distinctions between lognormal and mixture models. Emphasis is placed on two-component mixtures in which the lower-valued subpopulation has a large mixing parameter. The density of this sort of mixture can easily be mistaken for a lognormal density. In order to compare such a mixture to a lognormal, it is demonstrated that Galton's two-parameter lognormal model and Pearson's five-parameter normal mixture are special, or limiting, cases of the same general mixture model. Consideration is given to the lognormal threshold parameter in order to devise a tool that can help distinguish mixtures from lognormals. Based on the threshold parameter, piloted procedures can help measure whether or not a curve is friable, in the sense that a brittle curve is better represented as a mixture than as a skewed lognormal. It is also shown that generalizations of Galton's product risk model can be represented in terms of the threshold parameter. Based on this parameter, a tool called a curve tensiometer was designed to be applied as a graphical friability check in the ecological context of Fisher's classic Iris data and in the environmental context of a Santa Monica Bay fish consumption study.
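The paper's graphical tool is not reproduced here; as a loose illustration of the underlying comparison only, the sketch below fits a single lognormal and a two-component mixture on the log scale and compares them by BIC. The simulated data and the use of scikit-learn's GaussianMixture are assumptions of this sketch, not the paper's procedure.

```python
# Illustrative sketch (not the paper's method): compare a single lognormal fit
# with a two-component mixture on the log scale, using BIC as a rough check.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Hypothetical data: a mixture whose lower-valued subpopulation dominates
x = np.concatenate([rng.lognormal(mean=0.0, sigma=0.4, size=900),
                    rng.lognormal(mean=1.5, sigma=0.3, size=100)])
logx = np.log(x).reshape(-1, 1)

one = GaussianMixture(n_components=1).fit(logx)  # single lognormal on log scale
two = GaussianMixture(n_components=2).fit(logx)  # two-component mixture

print("BIC, lognormal:", one.bic(logx))
print("BIC, 2-component mixture:", two.bic(logx))
print("estimated mixing weights:", two.weights_)
```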
Statistical methods as developed and used in decision making and scientific research are of recent origin. The logical foundations of statistics are still under discussion, and some care is needed in applying the existing methodology and interpreting results. Some pitfalls in statistical data analysis are discussed, and the importance of cross-examination of data (or exploratory data analysis) before using specific statistical techniques is emphasized. Comments are made on the treatment of outliers, the choice of stochastic models, the use of multivariate techniques, and the choice of software (expert systems) in statistical analysis. The need for developing new methodology with particular relevance to environmental research and policy is stressed.

Dr Rao is Eberly Professor of Statistics and Director of the Penn State Center for Multivariate Analysis. He has received PhD and ScD degrees from Cambridge University and has been awarded numerous honorary doctorates from universities around the world. He is a Fellow of the Royal Society, UK; Fellow of the Indian National Science Academy; Foreign Honorary Member of the American Academy of Arts and Sciences; Life Fellow of King's College, Cambridge; and Founder Fellow of the Third World Academy of Sciences. He is Honorary Fellow and President of the International Statistical Institute and the Biometric Society, and an elected Fellow of the Institute of Mathematical Statistics. He has made outstanding contributions to virtually all important topics of theoretical and applied statistics, and many results bear his name. He has been Editor of Sankhya and the Journal of Multivariate Analysis, and serves on the international advisory boards of several professional journals, including Environmetrics and the Journal of Environmental Statistics. This paper is based on the keynote address to the Seventh Annual Conference on Statistics of the United States Environmental Protection Agency.
The National Contaminant Biomonitoring Program (NCBP) was initiated in 1967 as a component of the National Pesticide Monitoring Program. It consists of the periodic collection of freshwater fish and other samples and the analysis of the concentrations of persistent environmental contaminants in these samples. For the analysis, the common approach has been to apply a mixed two-way ANOVA model to the combined data. A main disadvantage of this method is that it cannot reveal detailed temporal trends in the concentrations, since the data are grouped. In this paper, we present an alternative approach that performs a longitudinal analysis of the data using random-effects models. In the new approach, no grouping is needed and the data are treated as samples from continuous stochastic processes, which seems more appropriate than ANOVA for this problem.
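As a hedged illustration of the random-effects alternative (not the NCBP analysis itself), the sketch below fits a longitudinal model with station-level random intercepts to simulated log concentrations using statsmodels; the variable names, sampling years, and data are invented for the example.

```python
# Illustrative sketch (not the NCBP analysis): a random-effects longitudinal
# model for log concentrations, with a random intercept for each station.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Hypothetical data: 20 stations sampled in six collection years
stations = np.repeat(np.arange(20), 6)
year = np.tile(np.array([1976, 1978, 1980, 1982, 1984, 1986]), 20)
log_conc = (1.0 - 0.05 * (year - 1976)               # overall declining trend
            + rng.normal(0, 0.3, size=20)[stations]  # station random intercept
            + rng.normal(0, 0.05, size=len(year)))   # residual noise
df = pd.DataFrame({"station": stations, "year": year - 1976, "log_conc": log_conc})

# Time is kept continuous, so no grouping of years is needed
# (unlike the two-way ANOVA approach on combined data).
model = smf.mixedlm("log_conc ~ year", df, groups=df["station"])
print(model.fit().summary())
```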
Objective: This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving.
Method: Data from an existing naturalistic driving study were used. Fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (prestarting of visual–manual task) and distracted (poststarting of visual–manual task) driving periods. Average relative spectral power in a low frequency range (0–0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and further compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses.
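For concreteness, here is a minimal sketch of the two kinds of measures described above, computed for a simulated yaw-rate signal: the share of spectral power below 0.5 Hz and the standard deviation within 10-s windows. The sampling rate, Welch periodogram settings, and the signal itself are assumptions of the sketch, not values from the study.

```python
# Illustrative sketch (not the study's code): relative spectral power in the
# 0-0.5 Hz band and 10-s standard deviations for a vehicle-control signal.
import numpy as np
from scipy.signal import welch

fs = 10.0  # assumed sampling rate in Hz
rng = np.random.default_rng(3)
t = np.arange(0, 60, 1 / fs)
yaw_rate = 0.2 * np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.05, t.size)

# Frequency domain: Welch power spectrum, then the share of power below 0.5 Hz
freqs, power = welch(yaw_rate, fs=fs, nperseg=256)
low_band = freqs <= 0.5
relative_low_power = power[low_band].sum() / power.sum()

# Time domain: standard deviation within consecutive 10-s windows
window = int(10 * fs)
sd_10s = np.array([yaw_rate[i:i + window].std()
                   for i in range(0, yaw_rate.size - window + 1, window)])

print("relative power in 0-0.5 Hz:", round(relative_low_power, 3))
print("10-s standard deviations:", np.round(sd_10s, 3))
```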
Results: Results of the mixed-model analyses in both the time and frequency domains showed significant degradation in lateral control performance after engaging in visual–manual tasks while driving. The sensitivity analyses suggested that the frequency domain analysis was less sensitive to the choice of frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for the variation calculations. Different time interval selections can result in significantly different standard deviation values, whereas the average spectral power analysis of yaw rate in both the low and high frequency bandwidths showed consistent results: higher variation values were observed during distracted driving than during nondistracted driving.
Conclusions: This study suggests that driver state detection needs to consider behavior changes during the prestarting periods, instead of focusing only on periods with a physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator for distraction detection than longitudinal control measures. In addition, the frequency domain analyses proved to be a more robust and consistent method for assessing driving performance than the time domain analyses.
Earthen embankment dams comprise 85% of all major operational dams in the United States. Assessment of peak flow rates for these earthen dams and of the impacts of dam failure is of high interest to engineers and planners. Regression analysis is a frequently used risk assessment approach for earthen dams. In this paper, we present a decision support tool for assessing the applicability of nine regression equations commonly used by practitioners. Using data from 108 case studies, six parameters were found to be significant predictors of peak flow, which serves as the metric for risk analysis. We present our work on an expanded earthen dam break database that links the regression equations to their underlying data. A web application, a regression selection tool, is also presented to assess the appropriateness of a given model for a given test point. This graphical display allows users to visualize how their data point compares with the data used to fit the regression equation. These contributions improve estimates and better inform decision makers regarding operational and safety decisions.
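As an illustration of the kind of applicability check such a tool might perform (not the authors' web application), the sketch below flags whether a candidate dam's characteristics fall within the range of a hypothetical case-study database before applying a placeholder peak-flow regression; the equation form and coefficients are invented for the example, not a published model.

```python
# Illustrative sketch (not the authors' tool): flag whether a new dam's
# characteristics lie inside the range of the case-study data behind a
# regression equation before using it to predict peak breach outflow.
import numpy as np

# Hypothetical case-study database: dam height (m), reservoir volume (10^6 m^3)
cases = np.array([[ 6.0,  0.5],
                  [12.0,  4.0],
                  [25.0, 20.0],
                  [40.0, 80.0]])

def in_range(point, data):
    # Simple applicability check: each parameter within the observed min/max
    return bool(np.all(point >= data.min(axis=0)) and
                np.all(point <= data.max(axis=0)))

def peak_flow(height_m, volume_1e6m3, a=0.6, b=1.2, c=0.3):
    # Placeholder power-law regression Qp = a * H^b * V^c (coefficients assumed)
    return a * height_m**b * volume_1e6m3**c

candidate = np.array([18.0, 10.0])
if in_range(candidate, cases):
    print("within data range; predicted peak flow:", round(peak_flow(*candidate), 1))
else:
    print("candidate lies outside the case-study data; regression may not apply")
```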