  Subscription full text   1478 articles
  Free   196 articles
  Free (domestic)   134 articles
Safety science   255 articles
Waste treatment   14 articles
Environmental management   340 articles
General   604 articles
Basic theory   208 articles
Pollution and its control   34 articles
Assessment and monitoring   158 articles
Society and environment   103 articles
Disasters and their control   92 articles
  2024   5 articles
  2023   31 articles
  2022   59 articles
  2021   81 articles
  2020   64 articles
  2019   66 articles
  2018   50 articles
  2017   81 articles
  2016   105 articles
  2015   60 articles
  2014   66 articles
  2013   96 articles
  2012   90 articles
  2011   109 articles
  2010   73 articles
  2009   88 articles
  2008   56 articles
  2007   102 articles
  2006   74 articles
  2005   56 articles
  2004   51 articles
  2003   48 articles
  2002   43 articles
  2001   29 articles
  2000   37 articles
  1999   18 articles
  1998   20 articles
  1997   25 articles
  1996   16 articles
  1995   9 articles
  1994   15 articles
  1993   6 articles
  1992   8 articles
  1991   6 articles
  1990   4 articles
  1989   4 articles
  1988   7 articles
  1987   4 articles
  1986   3 articles
  1985   6 articles
  1983   4 articles
  1981   5 articles
  1980   7 articles
  1978   3 articles
  1977   3 articles
  1975   1 article
  1974   2 articles
  1973   6 articles
  1971   3 articles
  1970   1 article
Sort order:  1808 results in total; search time 859 ms
351.
Recently, public health professionals and other geostatistical researchers have shown increasing interest in boundary analysis, the detection or testing of zones or boundaries that reveal sharp changes in the values of spatially oriented variables. For areal data (i.e., data which consist only of sums or averages over geopolitical regions), Lu and Carlin (Geogr Anal 37: 265–285, 2005) suggested a fully model-based framework for areal wombling using Bayesian hierarchical models, with posterior summaries computed by Markov chain Monte Carlo (MCMC) methods, and showed the approach to have advantages over existing non-stochastic alternatives. In this paper, we develop Bayesian areal boundary analysis methods that estimate the spatial neighborhood structure using the value of the process in each region and other variables that indicate how similar two regions are. Boundaries may then be determined from the posterior distribution of either this estimated neighborhood structure or the regional mean response differences themselves. Our methods do require several assumptions (including an appropriate prior distribution, a normal spatial random effect distribution, and a Bernoulli distribution for a set of spatial weights), but they also deliver more in terms of full posterior inference for the boundary segments (e.g., direct statements of the probability that a particular border segment is part of the boundary). We illustrate three different remedies for the computing difficulties encountered in implementing our method. We use simulation to compare existing purely algorithmic approaches, the Lu and Carlin (2005) method, and our new adjacency modeling methods. We also illustrate more practical modeling issues (e.g., covariate selection) in the context of a breast cancer late detection data set collected at the county level in the state of Minnesota.
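As an illustration of the posterior summary described above, the following is a minimal sketch in Python (an assumption for illustration, not the paper's implementation): given MCMC draws of the 0/1 spatial adjacency indicators for each shared border segment, the posterior probability that a segment is part of a boundary can be read off as the fraction of draws in which the two regions are not treated as neighbors.

    import numpy as np

    def boundary_probabilities(w_draws):
        """w_draws: array of shape (n_draws, n_border_segments) holding 0/1 MCMC
        samples of the adjacency indicators (1 = the two regions act as neighbors
        in that draw). Returns, per segment, the posterior probability that the
        segment is a boundary, i.e. the fraction of draws with the indicator 0."""
        w_draws = np.asarray(w_draws)
        return 1.0 - w_draws.mean(axis=0)

    # Hypothetical example: 1000 draws for three border segments.
    rng = np.random.default_rng(0)
    draws = rng.binomial(1, p=[0.9, 0.5, 0.1], size=(1000, 3))
    print(boundary_probabilities(draws))  # roughly [0.1, 0.5, 0.9]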
352.
On thresholds and environmental curve tensiometers   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper considers distinctions between lognormal and mixture models. Emphasis is placed on two-component mixtures in which the lower-valued subpopulation has a large mixing parameter. The density of this sort of mixture can easily be mistaken for a lognormal density. In order to compare such a mixture to a lognormal, it is demonstrated that Galton's two-parameter lognormal model and Pearson's five-parameter normal mixture are special, or limiting, cases of the same general mixture model. Consideration is given to the lognormal threshold parameter in order to devise a tool that can help distinguish mixtures from lognormals. Based on the threshold parameter, graphical procedures can help measure whether or not a curve is friable, in the sense that a brittle curve is better represented as a mixture than as a skewed lognormal. It is also shown that generalizations of Galton's product risk model can be represented in terms of the threshold parameter. Based on these results, a tool called a curve tensiometer was designed to be applied as a graphical friability check in the ecological context of Fisher's classic Iris data and in the environmental context of a Santa Monica Bay fish consumption study.
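To make the lognormal-versus-mixture ambiguity concrete, here is a minimal sketch with synthetic data and a generic AIC comparison, not the paper's curve tensiometer: on the log scale, a two-component Gaussian mixture is compared with a single Gaussian, which corresponds to comparing a two-component lognormal mixture with a single lognormal on the original scale.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(2)
    # Synthetic data: a large lower-valued subpopulation plus a smaller
    # higher-valued one, easily mistaken for a single lognormal.
    x = np.concatenate([rng.lognormal(0.0, 0.4, 800), rng.lognormal(1.5, 0.3, 200)])
    log_x = np.log(x).reshape(-1, 1)

    # Compare one- and two-component Gaussian mixtures fitted to log(x).
    for k in (1, 2):
        gm = GaussianMixture(n_components=k, random_state=0).fit(log_x)
        print(f"{k} component(s): AIC = {gm.aic(log_x):.1f}")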
353.
Statistical methods as developed and used in decision making and scientific research are of recent origin. The logical foundations of statistics are still under discussion, and some care is needed in applying the existing methodology and interpreting results. Some pitfalls in statistical data analysis are discussed, and the importance of cross-examination of data (or exploratory data analysis) before using specific statistical techniques is emphasized. Comments are made on the treatment of outliers, the choice of stochastic models, the use of multivariate techniques, and the choice of software (expert systems) in statistical analysis. The need for developing new methodology with particular relevance to environmental research and policy is stressed. Dr Rao is Eberly Professor of Statistics and Director of the Penn State Center for Multivariate Analysis. He has received PhD and ScD degrees from Cambridge University and has been awarded numerous honorary doctorates from universities around the world. He is a Fellow of the Royal Society, UK; Fellow of the Indian National Science Academy; Foreign Honorary Member of the American Academy of Arts and Sciences; Life Fellow of King's College, Cambridge; and Founder Fellow of the Third World Academy of Sciences. He is an Honorary Fellow and President of the International Statistical Institute and the Biometric Society, and an elected Fellow of the Institute of Mathematical Statistics. He has made outstanding contributions to virtually all important topics of theoretical and applied statistics, and many results bear his name. He has been Editor of Sankhya and the Journal of Multivariate Analysis, and serves on the international advisory boards of several professional journals, including Environmetrics and the Journal of Environmental Statistics. This paper is based on the keynote address to the Seventh Annual Conference on Statistics of the United States Environmental Protection Agency.
354.
Addressing the problems that currently exist with environmental monitoring data, this paper offers suggestions on how to raise the standing of environmental monitoring data.
355.
In foundation pit excavation, it is essential to adhere to information-based construction management and to monitor pit safety by a variety of means. Analysis of the monitoring data makes it possible to detect problems arising during excavation in a timely manner, take targeted measures, and adjust the construction scheme so that the work can proceed smoothly.
356.
To meet the requirements for the submission of geological data and to improve the quality of the data submitted, this paper takes East China as an example, analyzes the problems existing in geological data submission, and offers several suggestions for improvement.
357.
The National Contaminant Biomonitoring Program (NCBP) was initiated in 1967 as a component of the National Pesticide Monitoring Program. It consists of the periodic collection of freshwater fish and other samples and the analysis of the concentrations of persistent environmental contaminants in those samples. For the analysis, the common approach has been to apply a mixed two-way ANOVA model to the combined data. A main disadvantage of this method is that it cannot reveal a detailed temporal trend in the concentrations, since the data are grouped. In this paper, we present an alternative approach that performs a longitudinal analysis of the information using random effects models. In the new approach, no grouping is needed and the data are treated as samples from continuous stochastic processes, which seems more appropriate than ANOVA for this problem.
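A minimal sketch of the kind of random-effects model the abstract refers to, with hypothetical variable names and simulated data rather than the actual NCBP records: station-level random intercepts and a continuous year trend replace the grouping that the two-way ANOVA requires.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    stations = np.repeat(np.arange(20), 8)                # 20 stations, 8 collections each
    year = np.tile(np.linspace(1970, 1984, 8), 20)        # collection years (continuous)
    station_effect = rng.normal(0.0, 0.3, 20)[stations]   # station-level random intercept
    log_conc = 2.0 - 0.05 * (year - 1970) + station_effect + rng.normal(0.0, 0.2, stations.size)
    df = pd.DataFrame({"log_conc": log_conc, "year": year, "station": stations})

    # Random intercept per station; `year` enters as a continuous trend,
    # so no grouping of collection periods is needed.
    fit = smf.mixedlm("log_conc ~ year", df, groups=df["station"]).fit()
    print(fit.summary())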
358.
Objective: This article investigated and compared frequency domain and time domain characteristics of drivers' behaviors before and after the start of distracted driving.

Method: Data from an existing naturalistic driving study were used. Fast Fourier transform (FFT) was applied for the frequency domain analysis to explore drivers' behavior pattern changes between nondistracted (prestarting of visual–manual task) and distracted (poststarting of visual–manual task) driving periods. Average relative spectral power in a low frequency range (0–0.5 Hz) and the standard deviation in a 10-s time window of vehicle control variables (i.e., lane offset, yaw rate, and acceleration) were calculated and further compared. Sensitivity analyses were also applied to examine the reliability of the time and frequency domain analyses.

Results: Mixed model analyses in both the time and frequency domains showed significant degradation in lateral control performance after engaging in visual–manual tasks while driving. Sensitivity analyses suggested that the frequency domain analysis was less sensitive to the choice of frequency bandwidth, whereas the time domain analysis was more sensitive to the time intervals selected for the variation calculations. Different time interval selections can produce significantly different standard deviation values, whereas average spectral power analysis of yaw rate in both low and high frequency bandwidths consistently showed higher variation during distracted driving than during nondistracted driving.

Conclusions: This study suggests that driver state detection needs to consider behavior changes during the prestarting period, instead of focusing only on periods with a physical presence of distraction, such as cell phone use. Lateral control measures can be a better indicator for distraction detection than longitudinal controls. In addition, frequency domain analysis proved to be a more robust and consistent method for assessing driving performance than time domain analysis.
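A minimal sketch of the two driving-performance metrics described in the Method paragraph above, using synthetic yaw-rate data and an assumed 10 Hz sampling rate rather than the study's pipeline: the average relative spectral power below 0.5 Hz from an FFT, and the standard deviation within 10-s time windows.

    import numpy as np

    def relative_low_freq_power(signal, fs, band=(0.0, 0.5)):
        """Fraction of total spectral power that falls in `band` (Hz)."""
        x = np.asarray(signal, dtype=float) - np.mean(signal)
        freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
        power = np.abs(np.fft.rfft(x)) ** 2
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return power[in_band].sum() / power.sum()

    def windowed_std(signal, fs, window_s=10.0):
        """Standard deviation of the signal in consecutive 10-s windows."""
        n = int(window_s * fs)
        x = np.asarray(signal, dtype=float)[: (len(signal) // n) * n]
        return x.reshape(-1, n).std(axis=1)

    rng = np.random.default_rng(3)
    fs = 10.0                                # assumed sampling rate (Hz)
    t = np.arange(0, 60, 1.0 / fs)           # one minute of synthetic yaw-rate data
    yaw_rate = 0.2 * np.sin(2 * np.pi * 0.1 * t) + 0.05 * rng.standard_normal(t.size)
    print(relative_low_freq_power(yaw_rate, fs))
    print(windowed_std(yaw_rate, fs))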

359.
Earthen embankment dams comprise 85% of all major operational dams in the United States. Assessment of peak flow rates for these earthen dams and of the impacts of dam failure is of high interest to engineers and planners. Regression analysis is a frequently used risk assessment approach for earthen dams. In this paper, we present a decision support tool for assessing the applicability of nine regression equations commonly used by practitioners. Using data from 108 case studies, six parameters were found to be significant predictors of peak flow as a metric for risk analysis. We present our work on an expanded earthen dam break database that relates the regression equations to their underlying data. A web application, a regression selection tool, is also presented for assessing the appropriateness of a given model for a given test point. Its graphical display allows users to visualize how their data point compares with the data used to fit the regression equation. These contributions improve estimates and better inform decision makers regarding operational and safety decisions.
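One simple way a selection tool of this kind might flag extrapolation, sketched here with hypothetical dam parameters rather than the actual 108-case database: check whether a candidate dam's values fall inside the ranges spanned by the case studies underlying a given regression equation.

    import numpy as np

    def in_fitted_range(case_data, test_point):
        """case_data: (n_cases, n_params); test_point: (n_params,).
        Returns, per parameter, True if the test value lies within the
        min-max range of the case-study data."""
        cases = np.asarray(case_data, dtype=float)
        point = np.asarray(test_point, dtype=float)
        return (point >= cases.min(axis=0)) & (point <= cases.max(axis=0))

    # Hypothetical parameters: dam height (m) and reservoir volume (1e6 m^3).
    cases = np.array([[10.0, 1.2], [25.0, 30.0], [15.0, 5.5], [40.0, 120.0]])
    print(in_fitted_range(cases, [20.0, 8.0]))   # [ True  True ] -> within range
    print(in_fitted_range(cases, [60.0, 8.0]))   # [False  True ] -> height extrapolated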
360.
Pollutant discharge from rural domestic sewage in China and the efficiency of its environmental treatment   (Total citations: 2; self-citations: 0; citations by others: 2)
The treatment of rural domestic sewage in China remains a serious challenge. Using the results of the Second National Census of Pollution Sources, this study analyzed rural domestic sewage discharge in China in 2017 and evaluated treatment efficiency from the perspectives of technical efficiency and economic efficiency, based on the entropy weight method and a data envelopment analysis (DEA) model. The results show that: (1) COD discharged in rural domestic sewage accounts for about 50.8% of total domestic-source discharge, while NH3-N, TN, and TP account for 35.0%, 30.5%, and 38.7% respectively; sewage and pollutants are mainly discharged directly into water bodies, directly onto farmland, or via other routes, which together account for more than 80%. (2) In coastal areas with high water use, the proportion of flush toilets exceeds 80%, yet only 11.0% of rural domestic sewage is effectively treated, so the technical efficiency of rural domestic sewage treatment in China is only 8.6%; regional differences are large, with 23 provinces below 10%, and technical efficiency is generally low. (3) Compared with Zhejiang, Shanghai, Shanxi, Inner Mongolia, Qinghai, and Tibet (each with an economic efficiency of 1), the economic efficiency of rural domestic sewage treatment in the other provinces has considerable room for improvement. The environmental Kuznets curves (EKC) relating per capita COD and NH3-N discharge intensities in rural domestic sewage to rural residents' per capita disposable income both show an inverted U-shaped relationship, whereas the EKCs for per capita TN and TP discharge intensities are N-shaped, with a fluctuating upward trend. Overall, there is substantial room to improve the efficiency of rural domestic sewage treatment in China; compared with urban areas, rural environmental protection infrastructure still lags behind, and efforts guided by source reduction, classified treatment, and recycling, together with strengthened overall planning and phased implementation, are needed to raise the level of rural sewage treatment in China.
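A minimal sketch of an input-oriented CCR data envelopment analysis model of the kind mentioned in the abstract, using made-up regional inputs and outputs rather than the census data; the entropy weighting step is omitted.

    import numpy as np
    from scipy.optimize import linprog

    def dea_ccr_input_oriented(X, Y):
        """X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs). Returns a CCR
        input-oriented efficiency score in (0, 1] for each decision-making unit."""
        X, Y = np.asarray(X, dtype=float), np.asarray(Y, dtype=float)
        n, m = X.shape
        s = Y.shape[1]
        scores = np.empty(n)
        for o in range(n):
            c = np.zeros(1 + n)
            c[0] = 1.0                                    # minimize theta
            A_ub, b_ub = [], []
            for i in range(m):                            # sum_j lambda_j * x_ij <= theta * x_io
                A_ub.append(np.concatenate(([-X[o, i]], X[:, i])))
                b_ub.append(0.0)
            for r in range(s):                            # sum_j lambda_j * y_rj >= y_ro
                A_ub.append(np.concatenate(([0.0], -Y[:, r])))
                b_ub.append(-Y[o, r])
            bounds = [(0, None)] * (1 + n)                # theta >= 0, lambda_j >= 0
            res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                          bounds=bounds, method="highs")
            scores[o] = res.x[0]
        return scores

    # Hypothetical example: 3 regions, 2 inputs (spending, facilities), 1 output (COD removed).
    X = [[100.0, 5.0], [80.0, 4.0], [120.0, 7.0]]
    Y = [[50.0], [45.0], [40.0]]
    print(dea_ccr_input_oriented(X, Y))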