Full-text access type
Paid full text | 266 articles |
Free | 72 articles |
Free (domestic) | 4 articles |
Subject classification
Safety science | 58 articles |
Waste treatment | 6 articles |
Environmental management | 34 articles |
General | 138 articles |
Basic theory | 68 articles |
Pollution and control | 17 articles |
Assessment and monitoring | 7 articles |
Society and environment | 7 articles |
Disasters and prevention | 7 articles |
Publication year
2025 | 4 articles |
2024 | 14 articles |
2023 | 19 articles |
2022 | 15 articles |
2021 | 10 articles |
2020 | 10 articles |
2019 | 16 articles |
2018 | 12 articles |
2017 | 12 articles |
2016 | 11 articles |
2015 | 16 articles |
2014 | 11 articles |
2013 | 13 articles |
2012 | 18 articles |
2011 | 21 articles |
2010 | 11 articles |
2009 | 23 articles |
2008 | 14 articles |
2007 | 19 articles |
2006 | 5 articles |
2005 | 8 articles |
2004 | 10 articles |
2003 | 6 articles |
2002 | 9 articles |
2001 | 6 articles |
2000 | 8 articles |
1999 | 2 articles |
1998 | 2 articles |
1997 | 4 articles |
1995 | 3 articles |
1993 | 3 articles |
1992 | 1 article |
1991 | 2 articles |
1989 | 2 articles |
1979 | 1 article |
1975 | 1 article |
342 results found (search time: 15 ms)
51.
Nobuhisa Kashiwagi. Environmetrics, 2004, 15(8): 777-796
A chemical mass balance method is proposed for the case where the existence of an unknown source is suspected. In general, when the existence of an unknown source is assumed in statistical receptor modeling, unknown quantities such as the composition of the unknown source and the contributions of the assumed sources become unidentifiable. To estimate these unknown quantities while avoiding the identification problem, a Bayes model for chemical mass balance is constructed in the form of composition, without using prior knowledge of the unknown quantities beyond natural constraints. The covariance of ambient observations given in the form of composition is defined in several ways. Markov chain Monte Carlo is used to evaluate the posterior means and variances of the unknown quantities as well as the likelihood for the proposed model. The likelihood is used for selecting the best-fitting covariance model. A simulation study is carried out to check the performance of the proposed method. Copyright © 2004 John Wiley & Sons, Ltd.
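The core idea above, treating both the unknown source profile and the source contributions as compositional unknowns and sampling them with MCMC, can be illustrated with a minimal random-walk Metropolis sketch. This is not Kashiwagi's model (the paper defines the covariance of compositional observations in several ways and selects among them by likelihood); the independent Gaussian error, the flat prior on unconstrained parameters, and all numerical values below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(5)

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# hypothetical set-up: p chemical species, two known source profiles and
# one unknown source, ambient data given as a composition
p = 6
F_known = np.array([softmax(rng.normal(size=p)) for _ in range(2)]).T   # p x 2
f_unknown_true = softmax(rng.normal(size=p))
c_true = np.array([0.5, 0.3, 0.2])            # source contributions, sum to 1
y = (np.column_stack([F_known, f_unknown_true]) @ c_true
     + rng.normal(0, 0.01, size=p))
y = np.clip(y, 1e-6, None)
y = y / y.sum()                                # treat observation as a composition

sigma = 0.01                                   # assumed measurement error sd

def log_post(theta):
    """Flat prior on the unconstrained parameters; Gaussian likelihood."""
    c = softmax(theta[:3])                     # contributions on the simplex
    f_u = softmax(theta[3:])                   # unknown source profile
    mu = np.column_stack([F_known, f_u]) @ c
    return -0.5 * np.sum((y - mu) ** 2) / sigma ** 2

# random-walk Metropolis on the unconstrained scale (symmetric proposal)
theta = np.zeros(3 + p)
cur = log_post(theta)
draws = []
for it in range(20_000):
    prop = theta + rng.normal(0, 0.1, size=theta.size)
    new = log_post(prop)
    if np.log(rng.uniform()) < new - cur:
        theta, cur = prop, new
    if it >= 5_000 and it % 10 == 0:           # burn-in and thinning
        draws.append(softmax(theta[:3]))

draws = np.array(draws)
print("posterior mean contributions:", draws.mean(axis=0).round(2))
print("true contributions          :", c_true)
```

With a single ambient composition and one unknown source the problem is only weakly identified, which is precisely the situation the natural constraints in the paper are intended to handle.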
52.
Many Monte Carlo simulation studies have been done in the field of risk analysis. This article demonstrates the importance of using predictive distributions (the estimated distributions of the explanatory variable, accounting for uncertainty in the point estimation of parameters) in the simulations. We explore different types of predictive distributions for the normal distribution, the lognormal distribution and the triangular distribution. The triangular distribution poses particular problems, and we found that estimation using quantile least squares was preferable to maximum likelihood. Copyright © 2001 John Wiley & Sons, Ltd.
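A minimal sketch of the contrast the article draws, for the lognormal case: a plug-in simulation fixes the fitted parameters, while a predictive simulation re-draws them from their sampling (noninformative-posterior) distributions on every iteration, which typically fattens the upper tail. The sample size, parameter values, and the chi-square/normal draws used to propagate parameter uncertainty are illustrative assumptions, not the article's exact recipe.

```python
import numpy as np

rng = np.random.default_rng(1)

# observed sample used to fit the input distribution (hypothetical data)
x = rng.lognormal(mean=1.0, sigma=0.5, size=30)
log_x = np.log(x)
n = len(x)
mu_hat, s_hat = log_x.mean(), log_x.std(ddof=1)

n_sim = 100_000

# plug-in simulation: ignores uncertainty in the fitted parameters
plug_in = rng.lognormal(mu_hat, s_hat, size=n_sim)

# predictive simulation: re-draw (mu, sigma) for every realisation,
# using (n-1)s^2/sigma^2 ~ chi2(n-1) and mu | sigma ~ N(mu_hat, sigma^2/n)
sigma_draw = s_hat * np.sqrt((n - 1) / rng.chisquare(n - 1, size=n_sim))
mu_draw = rng.normal(mu_hat, sigma_draw / np.sqrt(n))
predictive = rng.lognormal(mu_draw, sigma_draw)

for name, sample in [("plug-in", plug_in), ("predictive", predictive)]:
    print(f"{name:10s}  95th pct = {np.percentile(sample, 95):8.2f}"
          f"   99th pct = {np.percentile(sample, 99):8.2f}")
```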
53.
Statistical methods are needed for evaluating many aspects of air pollution regulations increasingly adopted by many different governments in the European Union. Atmospheric particulate matter (PM) is an important air pollutant for which regulations have been issued recently. A challenging task here is to evaluate the regulations based on data monitored on a heterogeneous network where PM has been observed at a number of sites and a surrogate has been observed at some other sites. This paper develops a hierarchical Bayesian joint space-time model for the PM measurements and its surrogate, between which the exact relationship is unknown, and applies the methods to analyse spatio-temporal data obtained from a number of sites in Northern Italy. The model is implemented using MCMC techniques and methods are developed to meet the regulatory demands. These enable full inference with regard to process unknowns, calibration, validation, predictions in time and space, and evaluation of regulatory standards. Copyright © 2008 John Wiley & Sons, Ltd.
54.
When a continuous population is sampled, the spatial mean is often the target parameter if the design‐based approach is assumed. In this case, auxiliary information may be suitably used to increase the accuracy of the spatial mean estimators. To this end, regression models are usually considered at the estimation stage in order to implement regression estimators. Since the spatial mean may be obviously represented as a bivariate integral, the strategies for placing the sampling locations are actually Monte Carlo integration methods. Hence, the regression‐based estimation is equivalent to the control‐variate integration method. In this setting, we suggest more refined Monte Carlo integration strategies which may drastically increase the regression estimator accuracy. Copyright © 2005 John Wiley & Sons, Ltd.
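A small numerical sketch of the equivalence noted above: under uniform random sampling, the regression estimator of a spatial mean is a control-variate Monte Carlo integration in which the auxiliary variable, whose spatial mean is known, plays the control. The surfaces, the sampling design, and the estimated regression coefficient are illustrative assumptions, not the refined integration strategies the paper actually proposes.

```python
import numpy as np

rng = np.random.default_rng(2)

def x(s):       # auxiliary (control) surface with known spatial mean
    return s[:, 0] * s[:, 1]

def y(s):       # target surface whose spatial mean is to be estimated
    return np.sin(3 * s[:, 0]) + 0.5 * s[:, 1] ** 2 + 0.3 * x(s)

x_true_mean = 0.25          # integral of u*v over the unit square

n, reps = 100, 2000
plain, cv = [], []
for _ in range(reps):
    s = rng.uniform(0, 1, size=(n, 2))       # uniform random sampling design
    ys, xs = y(s), x(s)
    plain.append(ys.mean())
    # control-variate / regression estimator: correct y-bar using x-bar
    beta = np.cov(ys, xs)[0, 1] / np.var(xs, ddof=1)
    cv.append(ys.mean() + beta * (x_true_mean - xs.mean()))

print("plain MC        std:", np.std(plain))
print("control-variate std:", np.std(cv))
```

The reduction in the standard deviation of the estimator across replicates is exactly the accuracy gain that the regression estimator delivers over the plain sample mean.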
55.
Peter J. Toscas. Environmetrics, 2010, 21(6): 632-644
Environmental monitoring data are often spatially correlated and left-censored. In this paper a previously proposed Bayesian approach to handling spatially correlated data is modified so that bias-corrected estimates of variance and spatial correlation parameters are attained. The methodology is applied to a water quality data set from the Ecosystem Health Monitoring Program (EHMP) in south-east Queensland, and the results are contrasted with those based on variance estimates that are not corrected for bias, to show that the latter can lead to unreliable inferences. A simulation study is conducted which shows that the bias-corrected estimates of variance and correlation parameters are less biased than uncorrected estimates of these parameters and that the credible intervals for the parameters from bias-corrected analyses are wider than those from the uncorrected analyses. The simulation also suggests that predictions of below-detection values are generally overestimated by both bias-corrected and uncorrected analyses, but the latter predictions are more biased. For predictions of detectable concentrations the simulations suggest that bias-corrected and uncorrected analyses are equally biased and both underestimate the true values. Copyright © 2009 John Wiley & Sons, Ltd.
56.
David M. Holland, Victor De Oliveira, Lawrence H. Cox, Richard L. Smith. Environmetrics, 2000, 11(4): 373-393
Emission reductions were mandated in the Clean Air Act Amendments of 1990 with the expectation of concomitant reductions in ambient concentrations of atmospherically-transported pollutants. To evaluate the effectiveness of the legislated emission reductions using monitoring data, this paper proposes a two-stage approach for the estimation of regional trends and their standard errors. In the first stage, a generalized additive model (GAM) is fitted to airborne sulfur dioxide (SO2) data at each of 35 sites in the eastern United States to estimate the form and magnitude of the site-specific trend (defined as percent total change) from 1989 to 1995. This analysis is designed to adjust the SO2 data for the influences of meteorology and season. In the second stage, the estimated trends are treated as samples with site-dependent measurement error from a Gaussian random field with a stationary covariance function. Kriging methodology is adapted to construct spatially-smoothed estimates of the true trend for three large regions in the eastern U.S. Finally, a Bayesian analysis with Markov chain Monte Carlo (MCMC) methods is used to obtain regional trend estimates and their standard errors, which take account of the estimation of the unknown covariance parameters as well as the stochastic variation of the random fields. Both spatial estimation techniques produced similar results in terms of regional trend and standard error. Copyright © 2000 John Wiley & Sons, Ltd.
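A compressed sketch of the two-stage logic, with stand-ins for both stages: a per-site linear trend fitted by least squares replaces the meteorology- and season-adjusting GAM, and a GLS regional mean with an exponential spatial covariance plus site-specific error variances replaces the kriging/MCMC machinery. All data and covariance parameters below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# --- stage 1 (illustrative stand-in for the per-site GAM fits) ------------
# for each monitoring site, estimate a linear time trend and its standard
# error from an already-adjusted log-SO2 series
n_sites, n_times = 35, 7 * 52
coords = rng.uniform(0, 10, size=(n_sites, 2))
t = np.linspace(0, 1, n_times)

trend_hat = np.empty(n_sites)
trend_se = np.empty(n_sites)
for s in range(n_sites):
    true_slope = -0.4 + 0.05 * coords[s, 0] / 10           # synthetic signal
    obs = 2.0 + true_slope * t + rng.normal(0, 0.3, n_times)
    X = np.column_stack([np.ones(n_times), t])
    beta, res, *_ = np.linalg.lstsq(X, obs, rcond=None)
    sigma2 = res[0] / (n_times - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    trend_hat[s] = beta[1]
    trend_se[s] = np.sqrt(cov[1, 1])

# --- stage 2: regional trend as a GLS mean of the site trends -------------
# spatial covariance of the true trends (exponential) plus the site-specific
# measurement-error variances from stage 1 on the diagonal
def exp_cov(c, sill=0.01, rng_par=3.0):
    d = np.linalg.norm(c[:, None, :] - c[None, :, :], axis=-1)
    return sill * np.exp(-d / rng_par)

C = exp_cov(coords) + np.diag(trend_se ** 2)
ones = np.ones(n_sites)
w = np.linalg.solve(C, ones)
regional_trend = w @ trend_hat / (w @ ones)
regional_se = np.sqrt(1.0 / (w @ ones))
print(f"regional trend: {regional_trend:.3f} +/- {regional_se:.3f}")
```

The paper goes further by estimating the covariance parameters themselves (via kriging and, separately, via Bayesian MCMC), so the regional standard error also reflects that extra uncertainty.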
57.
Due to the increased availability of measurements of various geophysical processes, a need has arisen for statistical methods suitable for the analysis of very large nonstationary spatial data sets. The nearest-neighbor Gaussian process (NNGP) models are among the latest and most popular Gaussian process-based models that reduce computational complexity and memory storage. Bayesian inference is based on a parametric covariance function that is often assumed to be stationary or known. Given that NNGP models are more sensitive to the stationarity assumption than other reduction methods, there is a need to build nonstationary covariance functions within the NNGP models. However, constructing a nonstationary covariance function and/or matrix may itself be computationally expensive in the presence of big data. In this paper, we develop an efficient two-stage approach that deals with nonstationarity and the computational complexity in the presence of a big spatial data set. We propose a new low-cost data-driven tree-structured partitioning technique to divide the spatial region into distinct subregions. Given the partitions, we construct computationally efficient nonstationary covariance functions for NNGP models. We demonstrate the performance of our approach through simulation experiments and an application to the global Total Ozone Mapping Spectrometer (TOMS) data set, in which the proposed approach performs well in terms of both prediction accuracy and computational complexity.
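The partitioning step described above can be illustrated with a generic kd-tree-style median split; the paper's actual criterion is data-driven rather than purely geometric, so treat this as a structural sketch only, with hypothetical sizes and thresholds.

```python
import numpy as np

def partition(coords, idx=None, max_pts=500):
    """Recursively split a point set along the wider coordinate axis at the
    median, until each leaf holds at most `max_pts` sites.  Returns a list of
    index arrays, one per subregion."""
    if idx is None:
        idx = np.arange(len(coords))
    if len(idx) <= max_pts:
        return [idx]
    pts = coords[idx]
    axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))   # wider axis
    cut = np.median(pts[:, axis])
    left = idx[pts[:, axis] <= cut]
    right = idx[pts[:, axis] > cut]
    if len(left) == 0 or len(right) == 0:      # degenerate split, stop here
        return [idx]
    return partition(coords, left, max_pts) + partition(coords, right, max_pts)

# toy usage: 20,000 irregular sites split into manageable subregions, within
# which separate (locally stationary) covariance functions could be fitted
rng = np.random.default_rng(3)
coords = rng.uniform(-180, 180, size=(20_000, 2))
leaves = partition(coords, max_pts=2_000)
print(len(leaves), "subregions with sizes", [len(l) for l in leaves][:5], "...")
```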
58.
59.
Standard analyses of spatial data assume that measurement and prediction locations are measured precisely. In this paper we consider how appropriate methods of estimation and prediction change when this assumption is relaxed and the locations are subject to positional error. We describe basic models for positional error and assess their impact on spatial prediction. Using both simulated data and lead concentration pollution data from Galicia, Spain, we show how the predictive distributions of quantities of interest change after allowing for the positional error, and describe scenarios in which positional errors may affect the qualitative conclusions of an analysis. The subject of positional error is of particular relevance in assessing the exposure of an individual to an environmental pollutant when the position of the individual is tracked using imperfect measurement technology. Copyright © 2010 John Wiley & Sons, Ltd.
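A rough sketch of why positional error matters for spatial prediction: the naive analysis treats recorded coordinates as exact, whereas a crude Monte Carlo correction averages the kriging prediction over plausible true locations. The exponential covariance, the GPS error standard deviation, and the averaging scheme are illustrative assumptions, simpler than the formal positional-error models developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def exp_cov(a, b, sill=1.0, rng_par=0.3):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return sill * np.exp(-d / rng_par)

# synthetic zero-mean field observed at locations recorded with GPS error
n = 60
true_loc = rng.uniform(0, 1, size=(n, 2))
K = exp_cov(true_loc, true_loc) + 1e-6 * np.eye(n)
z = rng.multivariate_normal(np.zeros(n), K)

target = np.array([[0.5, 0.5]])

def krige(loc, z, target):
    """Simple (zero-mean) kriging predictor at the target location."""
    C = exp_cov(loc, loc) + 1e-6 * np.eye(len(loc))
    c0 = exp_cov(loc, target)
    w = np.linalg.solve(C, c0)
    return float(w[:, 0] @ z)

sigma_pos = 0.05                      # positional error sd (hypothetical)
rec_loc = true_loc + rng.normal(0, sigma_pos, size=true_loc.shape)

# naive analysis: pretend the recorded (noisy) locations are exact
naive_pred = krige(rec_loc, z, target)

# crude error-aware analysis: average over plausible true locations
preds = np.array([krige(rec_loc + rng.normal(0, sigma_pos, rec_loc.shape),
                        z, target) for _ in range(200)])

print(f"naive prediction      : {naive_pred:.3f}")
print(f"error-aware prediction: {preds.mean():.3f} +/- {preds.std():.3f}")
```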
60.
Hydraulic capture is one of the most widely used techniques for remediating or containing groundwater contamination, and determining the optimal capture (pumping) rate is a key problem in its implementation. To address the shortcomings of traditional deterministic methods for computing the optimal capture rate, this paper starts from the stochastic nature of hydrogeological parameters and applies the Monte Carlo method, grounded in stochastic theory, to a case study of how the spatial variability of hydraulic conductivity affects a hydraulic capture system for groundwater contaminants, seeking a new way to estimate the optimal capture rate. The results show that the deterministic method yields an optimal capture rate of 110 m3/d, at which the contaminants in the polluted zone are just fully captured. However, stochastic simulation of the effect of the aquifer's spatially variable hydraulic conductivity on the capture system shows that pumping at this deterministically derived optimal rate (110 m3/d) does not always capture all of the groundwater contaminants, with a steady average risk of failure as high as 24%. The Monte Carlo method, which fully accounts for the spatial variability of aquifer hydraulic conductivity, is therefore more reliable than the traditional deterministic method, and this paper proposes a new approach that determines the optimal capture rate stochastically, from the acceptable risk of the capture system.
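The risk calculation described above can be mimicked with a highly simplified analytic stand-in for the study's numerical flow model: a single extraction well in uniform regional flow, whose far-upstream capture-zone half-width is Q/(2*B*K*i). Sampling the hydraulic conductivity K from a lognormal distribution and checking whether the plume still fits inside the capture zone gives a Monte Carlo estimate of the failure risk at the deterministic design rate; all parameter values are assumptions, and the numbers will not reproduce the study's 24% figure exactly.

```python
import numpy as np

rng = np.random.default_rng(4)

# simplified single-well capture-zone check (analytic stand-in for the
# study's numerical flow/transport model) -- all values are hypothetical
B = 20.0                    # aquifer thickness (m)
i = 0.005                   # regional hydraulic gradient (-)
plume_half_width = 40.0     # half-width of the contaminant plume (m)
Q = 110.0                   # pumping rate from the deterministic design (m3/d)

# lognormal hydraulic conductivity: geometric mean 10 m/d, ln-K sd 0.5
# (assumed summary of the spatial variability)
n_real = 10_000
K = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=n_real)

# far-upstream half-width of the capture zone of a single well in
# uniform flow:  y_max = Q / (2 * B * K * i)
y_max = Q / (2.0 * B * K * i)

failure = y_max < plume_half_width          # plume not fully captured
print(f"estimated risk of incomplete capture at Q=110 m3/d: {failure.mean():.1%}")

# pumping rate needed to keep the risk below an acceptable level, e.g. 5%
Q_needed = 2.0 * B * np.quantile(K, 0.95) * i * plume_half_width
print(f"capture rate for a 5% acceptable risk: {Q_needed:.0f} m3/d")
```

The last line illustrates the paper's closing suggestion: choose the capture rate from an acceptable risk level rather than from a single deterministic conductivity value.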