371.
Results are reported from an application of the state space formulation and the Kalman filter to real-time forecasting of daily river flows. It is shown that the application of filtering techniques improves the overall forecasting performance of the model. As is true for most hydrologic systems, the model is not completely known. Therefore, the procedures pertaining to on-line parameter and noise statistics estimation, as presented in the first paper, are implemented. The example in this paper shows that these techniques also perform satisfactorily when applied to a real-world situation.
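The abstract does not spell out the state-space model, so the following is only a minimal sketch, assuming a scalar AR(1) flow state observed with noise; it shows the predict/update cycle a Kalman filter uses to issue one-day-ahead flow forecasts. All parameter values and the synthetic data are illustrative assumptions, not the paper's model.

```python
import numpy as np

def kalman_forecast(flows, a=0.9, q=4.0, r=1.0):
    """One-step-ahead forecasts for a daily flow series.

    Assumed model (illustrative): x_t = a*x_{t-1} + w_t,  y_t = x_t + v_t,
    with process-noise variance q and observation-noise variance r.
    """
    x, p = flows[0], r          # initialise the state with the first observation
    forecasts = []
    for y in flows[1:]:
        # predict step: propagate state and uncertainty one day ahead
        x_pred, p_pred = a * x, a * a * p + q
        forecasts.append(x_pred)
        # update step: correct the prediction with the new observation
        k = p_pred / (p_pred + r)          # Kalman gain
        x = x_pred + k * (y - x_pred)
        p = (1.0 - k) * p_pred
    return np.array(forecasts)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_flow = 50 + np.cumsum(rng.normal(0, 1, 100))   # synthetic daily flows
    obs = true_flow + rng.normal(0, 1, 100)
    print(kalman_forecast(obs)[:5])
```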
372.
Focusing on the downstream propagation dynamics of the sand flow released by a tailings dam break, this study combines hydrological and hydrodynamic theory with numerical methods to build a mathematical model of the slurry motion after dam failure, and uses the VOF model to simulate the post-failure flow numerically. Taking a failed tailings dam as an example, the downstream propagation of the released flow was simulated; the simulated final affected area and mud depth both agree with field observations made after the failure, confirming the validity and correctness of the simulation.
373.
Remote sensing retrieval of ET_0 over the eastern agricultural region of Qinghai Province based on ridge estimation
Using meteorological data for 2003–2005 from 12 stations in the eastern agricultural region of Qinghai, ten-day (dekadal) ET_0 values at each station were computed with the Penman-Monteith formula, and their relationships with three factors — elevation (DEM), land surface temperature (LST), and the normalized difference vegetation index (NDVI) — were examined. The remote sensing data were extracted and matched to a temporal resolution of one dekad and a spatial resolution of 1 km, and multivariate retrieval models were built against the computed ET_0. Because the three explanatory variables are strongly correlated (the coefficient of determination R² between LST and NDVI averages about 0.7), ordinary least squares regression cannot be used directly to build the model; ridge estimation was therefore adopted to avoid the impact of this collinearity. The results show that, for dekads 10–33 of 2003, the regional two-variable model built by ridge estimation achieved a minimum retrieval accuracy of 76.19%, and the regional three-variable model a minimum of 83.54%. Compared with models built by the traditional method, the checkpoint root-mean-square error decreased by about 1.1 and the minimum retrieval accuracy improved by roughly 11%, which meets the needs of practical applications.
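As a minimal sketch of the ridge-estimation step described above (not the paper's actual model or data), the following fits ET_0 against standardized DEM, LST and NDVI with an L2 penalty using scikit-learn's Ridge; the correlation structure of the synthetic predictors and the ridge parameter alpha are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 200                                            # synthetic dekadal records (illustrative)
dem = rng.uniform(1700, 3600, n)                   # elevation, m
lst = 30 - 0.006 * dem + rng.normal(0, 1, n)       # LST correlated with DEM (assumed)
ndvi = 0.2 + 0.01 * lst + rng.normal(0, 0.03, n)   # NDVI correlated with LST (assumed)
et0 = 2.0 + 0.08 * lst + 3.0 * ndvi - 0.0002 * dem + rng.normal(0, 0.2, n)

# Standardize predictors, then apply an L2 (ridge) penalty to counter collinearity
X = StandardScaler().fit_transform(np.column_stack([dem, lst, ndvi]))
model = Ridge(alpha=1.0)
model.fit(X, et0)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
```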
374.
In this study, the sludge anaerobic digestion process of a real-scale wastewater treatment plant (Hurma WWTP) was modeled with the Anaerobic Digestion Model No. 1 (ADM1). The aim was to generate data that improve understanding of the process and support prediction of its operating conditions and performance, as a basis for future investments in anaerobic sludge stabilization.

Data from the real-scale anaerobic sludge digestion process were evaluated in terms of the known process and state variables as well as process yields. The average VS removal yield, methane production yield, and methane production rate of the anaerobic sludge digestion unit were calculated as 46.4%, 0.49 m³ CH₄/kg VS removed, and 0.33 m³ CH₄/(m³·day), respectively. ADM1 was applied to predict the behavior of the real-scale anaerobic digester processing sewage sludge under dynamic conditions. To estimate the variables of the real-scale process with high accuracy and to obtain good model prediction performance, the four parameters with the strongest effect on the structured ADM1 (the disintegration rate constant and the carbohydrate, protein, and lipid hydrolysis rate constants) were estimated with the parameter estimation module of the Aquasim program; their values were found to be 0.101, 10, 10, and 9.99, respectively. Considering the number of kinetic parameters and processes included in ADM1, together with the dynamic and non-linear behavior of the real-scale digester, the model simulations were in good agreement with the measured biogas flow rate, methane flow rate, pH, total alkalinity, and volatile fatty acids.
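ADM1 itself is far too large to reproduce here; as a much-simplified, hypothetical illustration of how a hydrolysis rate constant of the kind estimated above can be fitted to measurements, the sketch below fits a first-order decay model to a synthetic degradable-solids series with scipy's curve_fit. It does not emulate Aquasim's estimation module, and all values are assumed.

```python
import numpy as np
from scipy.optimize import curve_fit

# First-order hydrolysis kinetics: degradable solids S(t) = S0 * exp(-k_hyd * t)
def first_order(t, s0, k_hyd):
    return s0 * np.exp(-k_hyd * t)

t = np.linspace(0, 20, 21)                     # days (synthetic sampling times)
true_s0, true_k = 25.0, 0.25                   # assumed "true" values, g VS/L and 1/d
rng = np.random.default_rng(2)
s_obs = first_order(t, true_s0, true_k) + rng.normal(0, 0.5, t.size)

# Nonlinear least squares fit of S0 and k_hyd to the noisy observations
(p_s0, p_k), cov = curve_fit(first_order, t, s_obs, p0=[20.0, 0.1])
print(f"estimated S0 = {p_s0:.2f} g VS/L, k_hyd = {p_k:.3f} 1/d")
```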

375.
In the chemical industry, sensors are used to monitor leaks and emissions of hazardous materials for hazard warning and risk assessment, so as to ensure safe production. Traditional layouts place sensors in a single layer, which causes large deviations in the estimated release height and degrades the accuracy of source term estimation (STE). In this study, a dual-layer sensor layout scheme is proposed. Numerical experiments verify that, with an equal number of sensors and equal detection errors, the improved schemes benefit the accuracy of the STE results. The influence of the heights of the sensors and of the leak source on the STE results is studied. The results show that a dual-layer scheme with adjacent intervals placed high in the potential search space is highly favorable for locating the leak, whereas a scheme arranged near the ground improves the estimation accuracy of the source intensity. The study also compares the STE results of computational fluid dynamics (CFD) simulated scenarios under different sensor schemes and verifies the effectiveness of the proposed dual-layer, adjacent-interval sensor deployment under turbulent conditions.
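The paper's CFD scenarios and STE algorithm are not reproduced here; as a minimal sketch of the inverse problem, the code below assumes a steady Gaussian plume forward model, generates synthetic readings for a dual-layer sensor grid, and recovers the source position and release rate by nonlinear least squares. The plume coefficients, sensor coordinates, wind speed, and noise level are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

U = 3.0                                       # assumed wind speed along +x, m/s

def plume(q, x0, y0, z0, sx, sy, sz):
    """Gaussian plume concentration (with ground reflection) at sensor (sx, sy, sz)."""
    dx = np.maximum(sx - x0, 1e-3)            # downwind distance from the source
    sig_y = 0.22 * dx ** 0.9                  # assumed dispersion coefficients
    sig_z = 0.20 * dx ** 0.9
    lateral = np.exp(-(sy - y0) ** 2 / (2 * sig_y ** 2))
    vertical = (np.exp(-(sz - z0) ** 2 / (2 * sig_z ** 2))
                + np.exp(-(sz + z0) ** 2 / (2 * sig_z ** 2)))
    return q / (2 * np.pi * U * sig_y * sig_z) * lateral * vertical

# Dual-layer grid: the same horizontal positions at 1 m and 6 m height (assumed)
xs, ys = np.meshgrid(np.arange(20, 101, 20), np.arange(-40, 41, 20))
sensors = np.vstack([np.column_stack([xs.ravel(), ys.ravel(), np.full(xs.size, h)])
                     for h in (1.0, 6.0)])

true = (5.0, 0.0, 10.0, 3.0)                  # q [kg/s], x0, y0, z0 (synthetic truth)
rng = np.random.default_rng(3)
readings = plume(*true, *sensors.T) * (1 + rng.normal(0, 0.05, len(sensors)))

def cost(theta):
    q, x0, y0, z0 = theta
    return np.sum((plume(q, x0, y0, z0, *sensors.T) - readings) ** 2)

est = minimize(cost, x0=[1.0, -5.0, 5.0, 1.0], method="Nelder-Mead")
print("estimated (q, x0, y0, z0):", np.round(est.x, 2))
```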
376.
Analysis of capture-recapture data often involves maximizing a complex likelihood function with many unknown parameters. Statistical inference based on selection of a proper model depends on successful attainment of this maximum. An EM algorithm is developed for obtaining maximum likelihood estimates of capture and survival probabilities, conditional on first capture, from standard capture-recapture data. The algorithm does not require numerical derivatives, which may improve precision and stability relative to other estimation schemes. The asymptotic covariance matrix of the estimated parameters can be obtained using the supplemented EM algorithm. The EM algorithm is compared with a more traditional Newton-Raphson algorithm on both a simulated and a real data set. The two algorithms give the same parameter estimates, but the Newton-Raphson variance estimates depend on a numerically estimated Hessian matrix that is sensitive to the choice of step size.
377.
With certain data, a bimodal log-likelihood function can arise when the combined removal and signs-of-activities estimator is used; bimodal log-likelihoods may, in turn, yield disjoint confidence intervals at certain confidence levels. We set forth the hypothesis that the bimodality is caused by violation of the equal-catchability assumption of the removal model, which leads the combined estimator to merge contradictory data and models. Simulations exploring the effect of violating the removal-model assumptions on estimation and inference showed that unequal capture probability influenced the frequency of bimodal likelihoods; similarly, extreme values of the capture probability influenced the number of excessively large confidence intervals produced. A sex-specific combined estimator is developed as a remedial model tailored to the problem. The simulations suggest that the signs-of-activities estimator and the sex-specific estimator perform equally well over the range of simulations presented, though the signs-of-activities estimator is easier to implement.
378.
An analysis of counts of sample size N=2 arising from a survey of the grass Bromus commutatus identified several factors which might seriously affect the estimation of parameters of Taylor's power law for such small sample sizes. The small-sample estimation of Taylor's power law was studied by simulation. For each of five small sample sizes, N=2, 3, 5, 15 and 30, samples were simulated from populations for which the underlying known relationship between variance and mean was given by σ² = cμ^d. One thousand samples generated from the negative binomial distribution were simulated for each of the six combinations of c=1, 2 and 11, and d=1, 2, at each of four mean densities, μ=0.5, 1, 10 and 100, giving 4000 samples for each combination. Estimates of Taylor's power law parameters were obtained for each combination by regressing log10 s² on log10 m, where s² and m are the sample variance and mean, respectively. Bias in the parameter estimates, b and log10 a, reduced as N increased and increased with c for both values of d, and these relationships were described well by quadratic response surfaces. The factors which affect small-sample estimation are: (i) exclusion of samples for which m = s² = 0; (ii) exclusion of samples for which s² = 0 but m > 0; (iii) correlation between log10 s² and log10 m; (iv) restriction on the maximum variance expressible in a sample; (v) restriction on the minimum variance expressible in a sample; (vi) underestimation of log10 s² for skew distributions; and (vii) the limited set of possible values of m and s². These factors and their effect on the parameter estimates are discussed in relation to the simulated samples. The effects of maximum variance restriction and underestimation of log10 s² were found to be the most severe. We conclude that Taylor's power law should be used with caution if the majority of samples from which s² and m are calculated have size, N, less than 15. An example is given of the estimated effect of bias when Taylor's power law is used to derive an efficient sampling scheme.
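As an illustration of the simulation protocol described above (with one assumed parameter combination rather than the full grid), the sketch below draws 1000 small samples from a negative binomial distribution whose population variance follows σ² = cμ^d, excludes zero-variance samples, and regresses log10 s² on log10 m to estimate b and log10 a; with such small N the estimates are biased, which is the effect the study quantifies.

```python
import numpy as np

rng = np.random.default_rng(4)
c, d = 11.0, 2.0                  # assumed Taylor's power law parameters (variance = c * mean^d)
N = 5                             # small sample size per sample
n_samples = 1000
means = [0.5, 1.0, 10.0, 100.0]   # mean densities, as in the study

log_m, log_s2 = [], []
for mu in means:
    var = c * mu ** d
    # negative binomial with mean mu and variance var (requires var > mu)
    k = mu ** 2 / (var - mu)                    # dispersion parameter
    p = k / (k + mu)
    counts = rng.negative_binomial(k, p, size=(n_samples, N))
    m = counts.mean(axis=1)
    s2 = counts.var(axis=1, ddof=1)
    keep = s2 > 0                               # exclude samples with zero sample variance
    log_m.append(np.log10(m[keep]))
    log_s2.append(np.log10(s2[keep]))

x = np.concatenate(log_m)
y = np.concatenate(log_s2)
b, log_a = np.polyfit(x, y, 1)                  # slope estimates d, intercept estimates log10(c)
print(f"b = {b:.3f} (true d = {d}),  log10 a = {log_a:.3f} (true log10 c = {np.log10(c):.3f})")
```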
379.
We consider problems of inference for the wrapped skew-normal distribution on the circle. A centered parametrization of the distribution is introduced, and simulation used to compare the performance of method of moments and maximum likelihood estimation for its parameters. Maximum likelihood estimation is shown, in general, to be superior. The operating characteristics of two moment based tests, for wrapped normal and wrapped half-normal parent populations, respectively, are also explored. The former test is easy to apply, maintains the nominal significance level well and is generally highly powerful. The latter test does not hold the nominal significance level so well, although it is very powerful against negatively skew alternatives. Likelihood based tests for the two distributions are also discussed. A real data set from the ornithological literature is used to illustrate the application of the developed methodology and its extension to finite mixture modelling. Received: September 2003 / Revised: April 2005
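The paper's centered parametrization and moment-based tests are not reproduced here; as a minimal sketch of maximum likelihood fitting for the wrapped skew-normal, the code below approximates the wrapped density by summing the linear skew-normal density over a finite number of wraps and maximizes the log-likelihood with scipy. The truncation limit, starting values, and synthetic data are assumptions.

```python
import numpy as np
from scipy.stats import skewnorm
from scipy.optimize import minimize

def wrapped_skewnorm_pdf(theta, xi, omega, alpha, n_wraps=10):
    """Wrapped skew-normal density on [0, 2*pi), truncated to +/- n_wraps wraps."""
    k = np.arange(-n_wraps, n_wraps + 1)
    # sum the linear skew-normal density over unwrapped copies theta + 2*pi*k
    return skewnorm.pdf(theta[:, None] + 2 * np.pi * k[None, :],
                        a=alpha, loc=xi, scale=omega).sum(axis=1)

def neg_log_lik(params, data):
    xi, log_omega, alpha = params               # log-parametrize omega to keep it positive
    pdf = wrapped_skewnorm_pdf(data, xi, np.exp(log_omega), alpha)
    return -np.sum(np.log(pdf + 1e-300))

# Synthetic circular data: wrap a skew-normal sample onto [0, 2*pi)
rng = np.random.default_rng(5)
sample = np.mod(skewnorm.rvs(a=4.0, loc=1.0, scale=0.8, size=200, random_state=rng),
                2 * np.pi)

fit = minimize(neg_log_lik, x0=[0.5, 0.0, 1.0], args=(sample,), method="Nelder-Mead")
xi_hat, omega_hat, alpha_hat = fit.x[0], np.exp(fit.x[1]), fit.x[2]
print(f"xi = {xi_hat:.2f}, omega = {omega_hat:.2f}, alpha = {alpha_hat:.2f}")
```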
380.
Taking parameter estimation for the water quality mathematical model of the river network in Wuxi County as an example, five rivers were selected for field observation according to their characteristics, flow conditions, and types of pollution sources. By combining the analysis results from the different rivers, a parameter estimation method suitable for water quality mathematical models of plain river networks was developed.