11.
饶素梅  于敏 《环境科技》2007,20(A01):69-71
SO2 calibration curves prepared by three different methods showed no significant differences and had consistent slopes, and measurements of standard samples all fell within their certified value ranges, indicating that the Na2SO3 standardization step can be omitted. The concentration range and slope values specified for the formaldehyde method are also called into question.
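A minimal sketch of the comparison described above: fit each calibration curve by least squares and compare slopes, intercepts, and residuals. The concentrations and absorbances below are illustrative placeholders, not the paper's data.

```python
# Compare SO2 calibration curves from two preparation methods
# (hypothetical absorbance data, for illustration only).
import numpy as np

conc = np.array([0.0, 2.0, 4.0, 8.0, 12.0, 16.0])       # ug SO2 per sample
abs_method_a = np.array([0.002, 0.061, 0.120, 0.238, 0.356, 0.474])
abs_method_b = np.array([0.001, 0.060, 0.121, 0.240, 0.357, 0.476])

for name, y in [("method A", abs_method_a), ("method B", abs_method_b)]:
    slope, intercept = np.polyfit(conc, y, 1)            # least-squares line
    residuals = y - (slope * conc + intercept)
    print(f"{name}: slope={slope:.4f}, intercept={intercept:.4f}, "
          f"max residual={np.max(np.abs(residuals)):.4f}")
```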
12.
This paper describes a quantitative radioactivity analysis method especially suited to environmental samples with low-level activity. The method, a multi-group approximation based on the total-absorption and Compton spectra of gamma rays, is coherently formalized, and a computer algorithm based on it was designed to analyze low-level-activity NaI(Tl) gamma-ray spectra of environmental samples. Milk powder from 1988 was used as the example case, and a dedicated analysis of uncertainty estimation is included. Gamma sensitivity is defined and numerically evaluated. The results reproduced the calibration data well, attesting to the reliability of the method. The uncertainty analysis shows that the uncertainty of the assessed activity is tied to that of the calibration activity data. More than 77% of the measured 1461-keV photons of 40K were counted at clearly lower energies. Pile-up of single-line photons (137Cs) appears negligible compared to that of a two-line cascade (134Cs). The detection limit varies with radionuclide and spectrum region and is related to the gamma sensitivity of the detection system; the best detection limit always lies in a spectrum region that holds a line of the radionuclide and has the highest sensitivity. The most radioactive milk powder sample showed an activity concentration of 21 ± 1 Bq g⁻¹ for 137Cs, 323 ± 13 Bq g⁻¹ for 40K, and no detectable 134Cs.
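The multi-group approximation can be pictured as a small linear inverse problem: counts in a few energy groups are modelled as a linear mix of per-nuclide group responses, and activities are recovered by non-negative least squares. A minimal sketch with a hypothetical response matrix (not the paper's calibration data):

```python
# Multi-group spectrum unfolding as non-negative least squares.
import numpy as np
from scipy.optimize import nnls

# Rows: energy groups; columns: nuclides (Cs-137, Cs-134, K-40).
# Entry (i, j) = expected count rate in group i per Bq of nuclide j
# (hypothetical values standing in for detector calibration data).
response = np.array([
    [0.80, 0.30, 0.05],   # group around 662 keV
    [0.10, 0.90, 0.08],   # group covering the Cs-134 cascade lines
    [0.02, 0.05, 0.60],   # group around 1461 keV
])

measured = np.array([20.5, 3.1, 190.0])       # counts/s per group (hypothetical)
activities, resid = nnls(response, measured)  # activities constrained >= 0
print("Estimated activities (Bq):", activities, "residual norm:", resid)
```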
13.
Engineering projects involving hydrogeology face uncertainty because the earth is heterogeneous and typical data sets are fragmented and disparate. In theory, predictions from computer simulations using calibrated models constrained by geological boundaries provide answers to support management decisions, and geostatistical methods quantify safety margins. In practice, current methods are limited by the data types and models that can be included, by computational demands, or by simplifying assumptions. Data Fusion Modeling (DFM) removes many of these limitations and can provide data integration and model calibration with quantified uncertainty for a variety of hydrological, geological, and geophysical data types and models. For waste management, water supply, and geotechnical applications, DFM saves time and cost by producing visual models that fill in missing data and predictive numerical models that aid management optimization. DFM can update field-scale models in real time on PC or workstation systems and is well suited to parallel-processing implementation. DFM is a spatial state estimation and system identification methodology that uses three sources of information: measured data, physical laws, and statistical models for uncertainty in spatial heterogeneities. What is new in DFM is its solution of the causality problem in data-assimilation Kalman filter methods, which makes the computation practical. The Kalman filter is generalized by introducing Bierman's information filter methods coupled with a Markov random field representation of spatial variation. A Bayesian penalty function is implemented with Gauss–Newton methods, leading to a computational problem similar to numerical simulation of the partial differential equations (PDEs) of groundwater flow; indeed, extensions of PDE-solver ideas for decomposing computations over space form the computational heart of DFM. State estimates and uncertainties can be computed for heterogeneous hydraulic conductivity fields in multiple geological layers from the usually sparse hydraulic conductivity data and the often more plentiful head data. Further, a system identification theory has been derived from statistical likelihood principles: a maximum likelihood theory estimates statistical parameters, such as the Markov model parameters that determine the geostatistical variogram. A field-scale application of DFM at the DOE Savannah River Site is presented and compared with manual calibration. DFM calibration runs converge in less than 1 h on a Pentium Pro PC for a 3D model with more than 15,000 nodes, and run time is approximately linear in the number of nodes. Furthermore, conditional simulation is used to quantify the statistical variability in model predictions such as contaminant breakthrough curves.
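A minimal sketch of the information-filter measurement update at the core of this family of methods, where the information matrix Y = P⁻¹ grows additively with each datum. The two-node "field", the smoothness prior, and all matrices are hypothetical stand-ins for the paper's large groundwater models.

```python
# Information-filter measurement update (information form of the Kalman filter).
import numpy as np

def information_update(Y, y, H, R, z):
    """Fuse measurement z = H x + noise(0, R) into information form."""
    R_inv = np.linalg.inv(R)
    Y_new = Y + H.T @ R_inv @ H     # information accumulates additively
    y_new = y + H.T @ R_inv @ z
    return Y_new, y_new

# Two coupled nodes with a weak Markov-style smoothness prior (hypothetical).
Y = np.array([[1.0, -0.5], [-0.5, 1.0]])   # prior information matrix
y = np.zeros(2)                            # prior information vector

H = np.array([[1.0, 0.0]])                 # only node 0 is observed
R = np.array([[0.1]])                      # measurement noise variance
Y, y = information_update(Y, y, H, R, np.array([2.0]))

x_hat = np.linalg.solve(Y, y)              # state estimate = Y^-1 y
cov = np.linalg.inv(Y)                     # posterior uncertainty
print("estimate:", x_hat, "\ncovariance:\n", cov)
```

Note how the prior coupling pulls the unobserved node toward the measured one, which is the mechanism that lets sparse conductivity data inform an entire field.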
14.
A stochastic individual-based model (IBM) of mosquitofish population dynamics in experimental ponds was constructed to virtually increase the number of replicates of control populations in an ecotoxicology trial, and thus the statistical power of the experiments. In this context, model calibration is critical, since it conditions the use of the model as a reference for statistical comparisons. Accordingly, calibration required that both the mean behaviour and the variability of the model agree with real data. Identifying parameter values from observed data remains an open issue for IBMs, especially when the parameter space is large. Our model included 41 parameters: 30 driving the model expectancy and 11 driving the model variability. Under these conditions, "Latin hypercube" sampling would most probably have missed some important combinations of parameter values, so a complete factorial design was preferred. Unfortunately, given the constraints of computational capacity, affordable complete designs were limited to no more than nine parameters, turning the calibration question into a parameter selection question. In this study, successive complete designs were conducted with different sets of parameters and different parameter values in order to progressively narrow the parameter space. For each complete design, the selection of at most nine parameters and their respective values was carefully guided by sensitivity analysis, which was decisive in selecting parameters that were both influential and likely to interact strongly. Following this strategy, the model of mosquitofish population dynamics was calibrated on real data from two different years of experiments and validated on real data from another, independent year. The model includes two categories of agents: fish and their living environment. Fish agents have four main processes: growth, survival, puberty, and reproduction. The outputs of the model are the length frequency distribution of the population and 16 scalar variables describing the fish populations. The length frequency distribution was parameterized by 10 scalars to make calibration feasible. The recently suggested notion of a "probabilistic distribution of the distributions" was also applied to our case study and proved very promising for comparing length frequency distributions as such.
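A minimal sketch of building one such complete factorial design over a sensitivity-selected subset of parameters; the parameter names and levels are hypothetical, not the model's 41 parameters.

```python
# Complete factorial design over a small, sensitivity-selected parameter set.
import itertools

levels = {
    "growth_rate":   [0.8, 1.0, 1.2],
    "survival_prob": [0.90, 0.95],
    "clutch_size":   [20, 30, 40],
}

# Every combination of levels = one simulation run of the IBM.
design = [dict(zip(levels, combo))
          for combo in itertools.product(*levels.values())]

print(f"{len(design)} runs")   # 3 * 2 * 3 = 18 combinations
for run in design[:3]:
    print(run)
```

With nine parameters at three levels each this already means 3⁹ = 19,683 runs, which is why the paper's selection step is the expensive part of the calibration.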
15.
Large, fine-grained samples are ideal for predictive species distribution models used for management purposes, but such datasets are not available for most species, and conducting such surveys is costly. We attempted to overcome this obstacle by updating previously available coarse-grained logistic regression models with small fine-grained samples using a recalibration approach. Recalibration re-estimates the intercept or slope of the linear predictor and may improve calibration (the level of agreement between predicted and actual probabilities). If reliable estimates of occurrence likelihood are required (e.g., for species selection in ecological restoration), calibration should be preferred over other measures of model performance. This updating approach is not expected to improve discrimination (the ability of the model to rank sites by species suitability), because the rank order of predictions is not altered. We tested different updating methods and sample sizes with tree distribution data from Spain. Updated models were compared to models fitted using only fine-grained data (refitted models). Updated models performed reasonably well at fine scales and outperformed refitted models with small samples (10–100 occurrences). If a coarse-grained model is available (or could easily be developed) and fine-grained predictions must be generated from a limited sample, updating the previous model may be more accurate than fitting a new one. Our results encourage further studies on model updating in other situations where species distribution models are used under conditions different from their training (e.g., different time periods or different regions).
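A minimal sketch of the recalibration idea: the coarse model's linear predictor becomes the single covariate of a new logistic regression fitted to the small fine-grained sample, re-estimating intercept and slope while leaving the rank order of predictions intact. Data and coefficients are simulated, not the paper's.

```python
# Recalibrate a coarse-grained logistic model on a small fine-grained sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Coarse model's linear predictor at 60 fine-grained sites (small sample).
eta = rng.normal(0.0, 1.5, size=60)
p_true = 1 / (1 + np.exp(-(0.4 + 0.8 * eta)))   # truth is miscalibrated vs. eta
presence = rng.binomial(1, p_true)              # observed presence/absence

# Very large C makes the fit effectively unpenalised.
recal = LogisticRegression(C=1e6).fit(eta.reshape(-1, 1), presence)
print("new intercept:", recal.intercept_[0], "new slope:", recal.coef_[0, 0])

# Because the slope is a monotone transform of eta, site rankings (and hence
# discrimination) are unchanged; only calibration improves.
```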
16.
An important aspect of species distribution modelling is the choice of modelling method, because a suboptimal method may predict poorly. Previous comparisons have found that novel methods, such as Maxent models, outperform well-established methods such as standard logistic regression. Those comparisons used training samples with few occurrences per estimated model parameter, and this limited sample size may have degraded predictive performance through overfitting. Our hypothesis is that Maxent models outperform standard logistic regression because Maxent avoids overfitting through regularisation, which standard logistic regression lacks. Regularisation can be applied to logistic regression via penalised maximum likelihood estimation, which shrinks the regression coefficients towards zero, biasing predictions on the training sample but improving the accuracy of new predictions. We used Maxent and logistic regression (standard and penalised) to analyse presence/pseudo-absence data for 13 tree species and evaluated predictive performance (discrimination) using presence/absence data. The penalised logistic regression outperformed standard logistic regression and equalled the performance of Maxent. Penalised logistic regression may therefore be considered one of the best methods for developing species distribution models trained with presence/pseudo-absence data, being comparable to Maxent. Our results encourage wider use of penalised logistic regression for species distribution modelling, especially when a complex model must be fitted to a sample of limited size.
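A minimal sketch contrasting effectively unpenalised and L2-penalised logistic regression on a small simulated sample with several predictors; the data are not the paper's tree data, and the penalty strength is arbitrary.

```python
# Penalised vs. standard logistic regression on a small training sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n_train, n_test, n_env = 40, 500, 8        # few occurrences, many predictors

X_train = rng.normal(size=(n_train, n_env))
X_test = rng.normal(size=(n_test, n_env))
beta = np.array([1.0, -0.8, 0.5] + [0.0] * (n_env - 3))   # sparse truth
y_train = rng.binomial(1, 1 / (1 + np.exp(-X_train @ beta)))
y_test = rng.binomial(1, 1 / (1 + np.exp(-X_test @ beta)))

standard = LogisticRegression(C=1e6).fit(X_train, y_train)   # ~unpenalised
penalised = LogisticRegression(C=0.5).fit(X_train, y_train)  # ridge (L2) penalty

for name, m in [("standard", standard), ("penalised", penalised)]:
    auc = roc_auc_score(y_test, m.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}, "
          f"|coef| sum = {np.abs(m.coef_).sum():.2f}")   # shrinkage is visible
```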
17.
Radon adsorption by activated-charcoal collectors such as PicoRad radon detectors is known to be strongly affected by temperature and relative humidity, yet quantitative models are still needed for accurate radon estimation in a variable environment. Here we introduce a temperature calibration formula, based on gas adsorption theory, that evaluates the radon concentration in air from the average temperature, the collection time, and the liquid scintillation count rate. On the basis of calibration experiments in the 25 m³ radon chamber at the National Institute of Radiological Sciences in Japan, we found that radon adsorption efficiency may vary by up to a factor of two over temperatures typical of indoor conditions. We expect our results to be useful for establishing standardized protocols for optimized radon assessment in dwellings and workplaces.
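The abstract does not reproduce the paper's calibration formula, so the sketch below only illustrates the general shape such a correction can take under gas adsorption theory, with an exp(b/T) efficiency term; the functional form, the constants, and the simple decay correction are all assumptions, not the paper's result.

```python
# Illustrative temperature-corrected radon estimate from a count rate.
import math

def radon_concentration(count_rate, t_collect_h, temp_c,
                        cal_factor=1.0, b=2000.0, t_ref_c=20.0):
    """Convert a liquid-scintillation count rate to a radon concentration.

    cal_factor : detector efficiency at the reference temperature (assumed)
    b          : adsorption temperature coefficient in kelvin (assumed)
    """
    t_k = temp_c + 273.15
    t_ref_k = t_ref_c + 273.15
    # Adsorption efficiency relative to the reference temperature:
    # warmer charcoal holds less radon, so efficiency < 1 above t_ref.
    efficiency = math.exp(b * (1.0 / t_k - 1.0 / t_ref_k))
    # Simple decay correction for Rn-222 (half-life 3.82 d) over collection.
    decay = math.exp(-math.log(2) * t_collect_h / (3.82 * 24))
    return count_rate / (cal_factor * efficiency * decay)

print(radon_concentration(count_rate=5.0, t_collect_h=48, temp_c=28))
```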
18.
This paper develops a framework for regional-scale flood modeling that integrates NEXRAD Level III rainfall, GIS, and a hydrological model (HEC-HMS/RAS). The San Antonio River Basin (about 4000 square miles, 10,000 km²) in Central Texas, USA, is the study domain because the region is subject to frequent severe flash flooding. A major flood in the summer of 2002 serves as the test case for the modeling framework. The model consists of a rainfall-runoff model (HEC-HMS) that converts precipitation excess to overland flow and channel runoff, and a hydraulic model (HEC-RAS) that simulates unsteady flow through the river channel network based on the HEC-HMS-derived hydrographs. HEC-HMS is run on a 4 × 4 km grid over the domain, a resolution consistent with that of the NEXRAD rainfall obtained from the local river authority. Watershed parameters are calibrated manually to produce a good simulation of discharge at 12 subbasins. With the calibrated discharge, HEC-RAS produces floodplain polygons comparable to satellite imagery. The modeling framework incorporates a portion of the recently developed GIS tool Map to Map, created at a local scale, and extends it to a regional scale. The results of this research will benefit future modeling efforts by providing a tool for hydrological forecasts of flooding at a regional scale. Although designed for the San Antonio River Basin, this regional-scale model may serve as a prototype for applications in other areas of the country.
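The abstract does not state which loss method HEC-HMS was configured with; as an illustrative stand-in, here is a minimal sketch of the SCS curve-number calculation commonly used in HEC-HMS to turn a storm total into precipitation excess on each grid cell.

```python
# SCS curve-number (CN) precipitation-excess calculation per grid cell.

def scs_runoff_mm(rainfall_mm: float, curve_number: float) -> float:
    """Precipitation excess (mm) for a storm total, per the SCS-CN method."""
    s = 25400.0 / curve_number - 254.0      # potential retention S (mm)
    ia = 0.2 * s                            # initial abstraction
    if rainfall_mm <= ia:
        return 0.0
    return (rainfall_mm - ia) ** 2 / (rainfall_mm - ia + s)

# 4 km NEXRAD-style cells with different land cover (hypothetical CN values).
for cn in (70, 85, 95):
    print(f"CN={cn}: runoff = {scs_runoff_mm(120.0, cn):.1f} mm "
          f"from 120 mm of rain")
```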
19.
The forest vegetation simulator (FVS) model was calibrated for use in Ontario, Canada, to predict the growth of forest stands. Using data from permanent sample plots from different regions of Ontario, new models were derived for diameter at breast height (dbh) growth rate, survival rate, stem height, and species-group density index for large trees, and for height and dbh growth rate for small trees. The dataset included black spruce (Picea mariana (Mill.) B.S.P.) and jack pine (Pinus banksiana Lamb.) for the boreal region; sugar maple (Acer saccharum Marsh.), white pine (Pinus strobus L.), red pine (Pinus resinosa Ait.), and yellow birch (Betula alleghaniensis Britton) for the Great Lakes–St. Lawrence region; and balsam fir (Abies balsamea (L.) Mill.) and trembling aspen (Populus tremuloides Michx.) for both regions. The new models were validated against an independent dataset of permanent sample plots located in Quebec. They predicted biologically consistent growth patterns, whereas some of the original models from the Lake States version of FVS occasionally did not, and they fitted the calibration (Ontario) data better than the original FVS models. The validation against independent data from Quebec showed that the new models generally had lower prediction error than the original FVS models.
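A minimal sketch of the validation step: compare the prediction error of the original and recalibrated growth models on an independent dataset. The model forms, coefficients, and data below are hypothetical stand-ins for the FVS equations.

```python
# Compare prediction error of two dbh-growth models on independent data.
import numpy as np

rng = np.random.default_rng(2)
dbh = rng.uniform(5, 40, size=200)                 # validation trees (cm)
growth_obs = 0.30 * dbh * np.exp(-0.04 * dbh) + rng.normal(0, 0.1, 200)

def rmse(pred, obs):
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

pred_original = 0.25 * dbh * np.exp(-0.03 * dbh)   # original coefficients
pred_refit = 0.30 * dbh * np.exp(-0.04 * dbh)      # recalibrated coefficients

print("original model RMSE:", rmse(pred_original, growth_obs))
print("recalibrated  RMSE:", rmse(pred_refit, growth_obs))
```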
20.
This paper derives the relevant calculation formulas from the linear (first-order) relationships between ammonia-nitrogen content and corrected absorbance, between concentration and corrected absorbance, and between corrected absorbance and ammonia-nitrogen content at different standard-solution volumes, helping beginners master and apply these formulas flexibly.
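A minimal sketch of the linear relations described above: fit corrected absorbance against ammonia-nitrogen content for a standard series, then invert the fitted line to compute the content of an unknown sample. The values are illustrative, not taken from the text.

```python
# Linear calibration: corrected absorbance vs. ammonia-nitrogen content.
import numpy as np

nh3n_ug = np.array([0.0, 2.0, 5.0, 10.0, 20.0])      # NH3-N content (ug)
corrected_abs = np.array([0.000, 0.041, 0.103, 0.205, 0.409])

slope, intercept = np.polyfit(nh3n_ug, corrected_abs, 1)
print(f"A = {slope:.4f} * m + {intercept:.4f}")

def content_from_absorbance(a: float) -> float:
    """Invert the calibration line: m = (A - intercept) / slope."""
    return (a - intercept) / slope

print("unknown sample:", content_from_absorbance(0.150), "ug NH3-N")
```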