Full-text access
| Access type | Articles |
| --- | --- |
| Paid full text | 235 |
| Free | 51 |
| Free (domestic) | 51 |
Subject classification
| Subject | Articles |
| --- | --- |
| Safety science | 70 |
| Environmental management | 35 |
| General | 140 |
| Basic theory | 54 |
| Pollution and control | 6 |
| Assessment and monitoring | 13 |
| Society and environment | 3 |
| Disasters and prevention | 16 |
Publication year
| Year | Articles |
| --- | --- |
| 2024 | 6 |
| 2023 | 21 |
| 2022 | 40 |
| 2021 | 22 |
| 2020 | 15 |
| 2019 | 11 |
| 2018 | 6 |
| 2017 | 11 |
| 2016 | 6 |
| 2015 | 9 |
| 2014 | 6 |
| 2013 | 5 |
| 2012 | 16 |
| 2011 | 12 |
| 2010 | 15 |
| 2009 | 16 |
| 2008 | 9 |
| 2007 | 21 |
| 2006 | 15 |
| 2005 | 10 |
| 2004 | 6 |
| 2003 | 10 |
| 2002 | 8 |
| 2001 | 2 |
| 2000 | 8 |
| 1999 | 6 |
| 1998 | 3 |
| 1997 | 3 |
| 1995 | 5 |
| 1994 | 1 |
| 1993 | 2 |
| 1992 | 1 |
| 1990 | 1 |
| 1989 | 1 |
| 1985 | 1 |
| 1984 | 1 |
| 1982 | 1 |
| 1981 | 1 |
| 1978 | 2 |
| 1973 | 1 |
| 1972 | 1 |
A total of 337 results were retrieved (search time: 15 ms).
71.
Aerosol optical depth (AOD), elevation, annual precipitation, annual mean temperature, annual mean wind speed, population density, GDP density, and NDVI were selected as influencing factors, and a random forest model combined with feature-importance ranking and partial dependence plots was used to study the factors shaping the spatial distribution of PM2.5 concentration in China and their regional differences. The results show that: (1) compared with multiple regression, a generalized additive model, and a BP neural network, the random forest model estimated PM2.5 concentration with the highest accuracy and is therefore suitable for studying the drivers of PM2.5 pollution; (2) PM2.5 concentration first rises and then levels off with increasing AOD, population density, and GDP density; first falls and then levels off with increasing precipitation, wind speed, and NDVI; and falls, rises, and then falls again with increasing elevation and temperature; (3) AOD has the strongest influence on the spatial distribution of PM2.5, explaining 37.96% of its spatial variation, while annual precipitation has the weakest influence, explaining only 5.75%; (4) the relationships between the influencing factors and PM2.5 concentration are spatially heterogeneous, with the same factor affecting PM2.5 to different degrees in different geographic regions; the influence of AOD on ...
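The workflow described above, a random forest regressor plus feature importances and a partial dependence curve, can be sketched with scikit-learn. This is a minimal illustration, not the authors' code; the file name and column names (aod, pm25, and so on) are hypothetical.

```python
# Minimal sketch (not the authors' code): random forest regression of PM2.5 on
# candidate drivers, with impurity-based feature importances and a partial
# dependence curve. The CSV file and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import PartialDependenceDisplay
from sklearn.model_selection import train_test_split

df = pd.read_csv("pm25_grid.csv")  # hypothetical gridded dataset
features = ["aod", "elevation", "precip", "temp", "wind", "pop_density",
            "gdp_density", "ndvi"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["pm25"], test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)
rf.fit(X_train, y_train)
print("R^2 on held-out cells:", rf.score(X_test, y_test))

# Rank drivers by importance, then inspect the marginal response to AOD.
for name, imp in sorted(zip(features, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
PartialDependenceDisplay.from_estimator(rf, X_train, ["aod"])
```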
72.
Many agricultural, biological, and environmental studies involve detecting temporal changes of a response variable, based on data observed at sampling sites in a spatial region and repeatedly over several time points. That is, data are repeated measures over time and are potentially correlated across space. The traditional repeated-measures analysis allows for time dependence but assumes that the observations at different sampling sites are mutually independent, which may not be suitable for field data that are correlated across space. In this paper, a nonparametric large-sample inference procedure is developed to assess the time effects while accounting for the spatial dependence using a block bootstrap. For illustration, the methodology is applied to describe the population changes of root-lesion nematodes over time in a production field in Wisconsin.
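A spatial block bootstrap of the kind described can be sketched as follows. This is a toy illustration on synthetic data, not the paper's procedure; the grouping of sites into blocks, the time-effect statistic, and the null-imposing recentering are all assumptions.

```python
# Minimal sketch (assumed data layout, not the paper's code): a spatial block
# bootstrap for a time-effect statistic with repeated measures at many sites.
# Sites are grouped into contiguous spatial blocks; whole blocks are resampled
# with replacement so within-block spatial dependence is preserved.
import numpy as np

rng = np.random.default_rng(0)
n_blocks, sites_per_block, n_times = 20, 5, 4
# y[block, site, time]: synthetic repeated measures, spatially correlated within blocks
block_effect = rng.normal(0, 1, size=(n_blocks, 1, 1))
y = block_effect + rng.normal(0, 1, size=(n_blocks, sites_per_block, n_times))

def time_effect_stat(data):
    """Range of time-point means: large values suggest change over time."""
    means = data.mean(axis=(0, 1))          # mean per time point
    return means.max() - means.min()

obs = time_effect_stat(y)
boot = []
for _ in range(2000):
    idx = rng.integers(0, n_blocks, size=n_blocks)   # resample whole blocks
    yb = y[idx]
    # Recenter each draw so its time means are equal, mimicking the null of no time effect
    boot.append(time_effect_stat(yb - yb.mean(axis=(0, 1)) + y.mean()))
p_value = np.mean(np.array(boot) >= obs)
print(f"observed statistic {obs:.3f}, block-bootstrap p-value {p_value:.3f}")
```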
73.
In this paper we address the problem of estimating the variance of a normal population based on both a balanced and an unbalanced ranked set sample (RSS), a modification of the original RSS of McIntyre (1952). We propose several methods of estimating the variance by combining different unbiased between- and within-rank estimators, and compare their performance.
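For readers unfamiliar with ranked set sampling, the sketch below simulates a balanced RSS from a normal population (assuming perfect ranking) and reports the naive sample variance of the measured units; it does not reproduce the paper's combined between/within estimators.

```python
# Minimal sketch (not the paper's estimators): generating a balanced ranked set
# sample (RSS) from a normal population and computing a simple variance estimate.
import numpy as np

rng = np.random.default_rng(1)

def balanced_rss(k, cycles, mu=0.0, sigma=2.0):
    """One balanced RSS: in each cycle, draw k sets of k units and keep the
    i-th order statistic from the i-th set (perfect ranking assumed)."""
    out = []
    for _ in range(cycles):
        for i in range(k):
            ranked = np.sort(rng.normal(mu, sigma, size=k))
            out.append(ranked[i])
    return np.array(out)

k, cycles, sigma, reps = 4, 10, 2.0, 5000
est = [np.var(balanced_rss(k, cycles, sigma=sigma), ddof=1) for _ in range(reps)]
print(f"true variance {sigma**2:.2f}, mean naive RSS sample variance {np.mean(est):.2f}")
```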
74.
Agricultural non-point source pollution remains a serious problem, and fine-grained identification of the pollution-risk status of individual land parcels at the village scale calls for further study. This paper examines Muhe Village, Nantuo Town, Fuling District, Chongqing, a typical village in the Three Gorges Reservoir area, combining UAV multispectral imaging, surveys of farmer behavior, and a random forest algorithm to identify land features and partition the area into generalized-parcel grids. By measuring total nitrogen (TN) and total phosphorus (TP...
75.
A real-time prediction model for dissolved oxygen (DO) was built using Pearson correlation analysis, variable-importance scoring, and the random forest method, and was applied to buoy data from Shenzhen Bay to forecast DO 1, 3, 6, and 12 h ahead. The results show that the optimal model inputs are five water-quality indicators: pH, water temperature, chlorophyll-a, oxidation-reduction potential, and blue-green algae. The correlation coefficient of the 1 h forecast exceeds 0.9, and the 6 h forecast can to some extent meet engineering requirements, but forecasts of low-DO events must be made within a 3 h horizon.
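A lagged random forest forecaster of this general shape could look like the sketch below; the file and column names are hypothetical, hourly buoy sampling is assumed, and this is not the study's actual model.

```python
# Minimal sketch (hypothetical column names, not the study's model): forecasting
# dissolved oxygen h hours ahead with a random forest on current buoy readings,
# assuming the buoy reports hourly.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

buoy = pd.read_csv("shenzhen_bay_buoy.csv", parse_dates=["time"])  # hypothetical file
inputs = ["ph", "water_temp", "chl_a", "orp", "bga"]  # the five selected indicators

def fit_horizon(df, horizon_hours):
    """Train a random forest mapping current readings to DO `horizon_hours` later."""
    X = df[inputs].iloc[:-horizon_hours]
    y = df["do"].shift(-horizon_hours).iloc[:-horizon_hours]  # DO at t + horizon
    split = int(0.7 * len(X))                       # simple chronological split
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(X.iloc[:split], y.iloc[:split])
    pred = rf.predict(X.iloc[split:])
    return np.corrcoef(pred, y.iloc[split:])[0, 1]  # forecast vs. observed correlation

for h in (1, 3, 6, 12):
    print(f"{h} h ahead: r = {fit_horizon(buoy, h):.2f}")
```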
76.
Polona Kalan, Katarina Košmelj, Charles Taillie, Anton Cedilnik, John H. Carson. Environmental and Ecological Statistics, 2003, 10(4): 469-482
The objective of this paper is to quantify and compare the loss functions of the standard two-stage design and its composite sample alternative in the context of multivariate soil sampling. The loss function is defined (conceptually) as the ratio of cost over information and measures design inefficiency. The efficiency of the design is the reciprocal of the loss function. The focus of this paper is twofold: (a) we define a measure of multivariate information using the Kullback–Leibler distance, and (b) we derive the variance-covariance structure for two soil sampling designs: a standard two-stage design and its composite sample counterpart. Randomness in the mass of soil samples is taken into account in both designs. A pilot study in Slovenia is used to demonstrate the calculations of the loss function and to compare the efficiency of the two designs. The results show that the composite sample design is more efficient than the two-stage design. The efficiency ratio is 1.3 for pH, 2.0 for C, 2.1 for N, and 2.5 for CEC. The multivariate efficiency ratio is 2.3. These ratios primarily reflect cost ratios; influence of the information is small.
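As a concrete reference point for the kind of information measure used here, the sketch below evaluates the Kullback–Leibler divergence between two multivariate normal distributions; the covariance matrices are invented and the calculation is illustrative only, not the paper's loss-function computation.

```python
# Minimal sketch (not the paper's derivation): Kullback–Leibler divergence between
# two multivariate normal distributions. Covariance matrices below are made up.
import numpy as np

def kl_mvn(mu0, S0, mu1, S1):
    """KL( N(mu0, S0) || N(mu1, S1) ) for d-dimensional normals."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Example: 4 analytes (e.g. pH, C, N, CEC) under two hypothetical covariance structures.
mu = np.zeros(4)
S_a = np.eye(4)
S_b = np.array([[1.0, 0.3, 0.2, 0.1],
                [0.3, 1.0, 0.4, 0.2],
                [0.2, 0.4, 1.0, 0.3],
                [0.1, 0.2, 0.3, 1.0]])
print(f"KL divergence: {kl_mvn(mu, S_a, mu, S_b):.4f}")
```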
77.
Binary matrices originating from presence/absence data on species (rows) distributed over sites (columns) have been a subject of much controversy in ecological biogeography. Under the null hypothesis that every matrix is equally likely, the distributions of some test statistics measuring co-occurrences between species are sought, conditional on the row and column totals being fixed at the values observed for some particular matrix. Many ad hoc methods have been proposed in the literature, but at least some of them do not provide uniform random samples of matrices. In particular, some swap algorithms have not accounted for the number of neighbors each matrix has in the universe of matrices with a set of fixed row and column sums. We provide a Monte-Carlo method using random walks on graphs that gives correct estimates for the distributions of statistics. We exemplify its use with one statistic.
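The neighbor-count issue can be handled with a Metropolis–Hastings correction on the swap walk, as in the sketch below. This is a generic illustration of the idea, not the authors' algorithm, and it enumerates all checkerboard swaps at each step, which is only practical for small matrices.

```python
# Minimal sketch (not the authors' algorithm): a Metropolis–Hastings swap walk on
# 0/1 matrices with fixed row and column sums. Counting the checkerboard swaps
# available from each matrix (its number of neighbours) and correcting for it in
# the acceptance ratio keeps the stationary distribution uniform, which a naive
# swap walk does not guarantee.
import itertools
import numpy as np

def checkerboards(A):
    """All (r1, r2, c1, c2) whose 2x2 submatrix is a swappable checkerboard."""
    rows, cols = A.shape
    out = []
    for r1, r2 in itertools.combinations(range(rows), 2):
        for c1, c2 in itertools.combinations(range(cols), 2):
            sub = A[np.ix_([r1, r2], [c1, c2])]
            if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
                out.append((r1, r2, c1, c2))
    return out

def mh_swap_walk(A, steps, rng):
    A = A.copy()
    for _ in range(steps):
        nbrs = checkerboards(A)
        r1, r2, c1, c2 = nbrs[rng.integers(len(nbrs))]
        B = A.copy()
        B[np.ix_([r1, r2], [c1, c2])] = 1 - B[np.ix_([r1, r2], [c1, c2])]  # flip the 2x2 block
        # Accept with probability min(1, deg(A)/deg(B)) so the chain is uniform.
        if rng.random() < len(nbrs) / len(checkerboards(B)):
            A = B
    return A

rng = np.random.default_rng(2)
A0 = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
sample = mh_swap_walk(A0, steps=200, rng=rng)
print(sample, sample.sum(axis=0), sample.sum(axis=1))  # margins are preserved
```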
78.
A traditional method of summarizing the spatial distribution of species is the observed species-area curve. Often the observed species-area curve is surprisingly close to the expected species-area curve under the hypothesis of random placement of individuals. This has been used as evidence supporting the hypothesis. In this paper, we argue that using the observed species-area curve to test the general random placement hypothesis is highly inefficient. We present a testing method based on the classical χ² test for over-dispersion which is not only more efficient but also applicable to situations where complete abundance information is unavailable. We also discuss three alternatives to the hypothesis. The focus of this paper is on these and other general issues relevant to communities of different types. No applications are included in this paper.
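As a reminder of the classical test the authors build on, the sketch below applies the χ² index-of-dispersion test to a single species' quadrat counts; the counts are made up, and the sketch does not implement the paper's species-area methodology.

```python
# Minimal sketch (illustrative, not the paper's test): the classical chi-squared
# index-of-dispersion test for one species' quadrat counts. Under random placement
# the counts are close to Poisson, so (n-1)*s^2/xbar ~ chi^2 with n-1 df; a large
# value signals over-dispersion (clumping).
import numpy as np
from scipy import stats

counts = np.array([0, 3, 1, 7, 0, 0, 5, 2, 0, 9, 1, 0])  # made-up quadrat counts
n = len(counts)
dispersion = (n - 1) * counts.var(ddof=1) / counts.mean()
p_value = stats.chi2.sf(dispersion, df=n - 1)
print(f"index of dispersion {dispersion:.1f}, one-sided p-value {p_value:.4f}")
```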
79.
Carlos Díaz-Avalos, Celia Bulit, David J. S. Montagnes. Environmental and Ecological Statistics, 2006, 13(2): 163-181
Planktonic patches are defined as areas where the abundance of plankters is above a threshold value τ. The estimation of patch size and shape can be approached using spatial statistical tools, with truncated random fields or indicator random fields as classifiers. In all cases there is the risk of false positive and false negative errors. In this paper we present the results of a comparative study of the performance of four commonly used methods: conditional simulation and kriging, both in the original measurement units of the data and under an indicator transform. We used a misclassification cost function to compare the four methods. Our results show that conditional simulation in the original measurement units attains the lowest misclassification cost. We also illustrate how the point at which this minimum is attained can be used to choose an optimal cut-off value for binary classification.
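A misclassification cost comparison of this kind can be sketched as follows; the abundance fields are synthetic, the unit costs are arbitrary, and the code illustrates only the cost-versus-cut-off search, not the kriging or conditional simulation steps.

```python
# Minimal sketch (illustrative only): a misclassification cost function for
# threshold-exceedance maps, and the search for the cut-off that minimises it.
# The "truth" and "predicted" fields are synthetic stand-ins for estimated
# plankton abundance surfaces; costs c_fp and c_fn are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
tau = 1.0                                   # patch-defining abundance threshold
truth = rng.lognormal(mean=0.0, sigma=0.5, size=(50, 50))
predicted = truth * rng.lognormal(mean=0.0, sigma=0.2, size=truth.shape)  # noisy estimate

def misclassification_cost(pred, true, cutoff, c_fp=1.0, c_fn=2.0):
    """Cost of declaring pred > cutoff a patch when the target is true > tau."""
    in_patch = true > tau
    declared = pred > cutoff
    fp = np.sum(declared & ~in_patch)       # false positives
    fn = np.sum(~declared & in_patch)       # false negatives
    return c_fp * fp + c_fn * fn

cutoffs = np.linspace(0.5, 2.0, 61)
costs = [misclassification_cost(predicted, truth, c) for c in cutoffs]
best = cutoffs[int(np.argmin(costs))]
print(f"optimal cut-off {best:.2f}, minimum cost {min(costs):.0f}")
```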
80.