Similar Literature
 20 similar documents found
1.
Based on an analysis of the problems with existing pollutant discharge declaration and registration methods, a scientific and practical plan for building a dynamic management system for discharge declaration and registration is proposed.

2.
No aspect of environmental management can do without the acquisition of pollution source information. A comprehensive, accurate, and up-to-date picture of the discharge status of pollution sources within a jurisdiction is a precondition for strengthening environmental management and making sound decisions under market-economy conditions, and it is, above all, the prerequisite for implementing total pollutant discharge control. In the past, however, this information was commonly incomplete, inaccurate, and untimely; many departments demanded data from grassroots environmental protection agencies and discharging units through many channels and many report forms (the so-called "three nots" and "three manys"), and the reporting units bore no legal responsibility for the data. This situation urgently needed to change. By the end of 1998, Zhejiang Province's province-wide pollutant discharge declaration and registration campaign, carried out over a year and a half, had provided a useful exploration of new channels for pollution source information. 1 Discharge declaration and registration is the main chann…

3.
This paper outlines the software design and application of the Jiangsu Province Pollutant Discharge Declaration Information Management System (2001 edition), which is based on the province's new discharge declaration reporting system, focusing on the design principles, technical approach, and data entry and reporting workflow of this environmental information application.

4.
A Comparative Study of the Pollutant Discharge Permit Systems of China and Sweden
This paper compares the approval procedures, requirements, and contents of China's water pollutant discharge permits and Sweden's integrated discharge permits. It focuses on the role and function of the environmental court in the Swedish integrated permit application process, as well as the drafting and issuance of Swedish permit documents. On this basis, it identifies, at the government, enterprise, and public levels, the lessons that Sweden's integrated discharge permit system offers for building and improving China's discharge permit system.

5.
This paper analyzes the problems encountered in implementing the pollutant discharge declaration and registration system in county-level cities and offers recommendations.

6.
To address the difficulty of verifying waste discharge quantities in the medical sector, waste generation at medical institutions was surveyed, compiled, and analyzed. After the statistical variables for medical waste discharge were determined, corresponding medical waste discharge coefficients were derived for use in verifying medical waste quantities in discharge declarations.

7.
Based on the Regulations on the National Pollution Source Census and drawing on experience with industrial pollution source surveys, discharge declaration, and environmental statistics, this paper discusses the quality requirements of the pollution source census and the factors constraining it, and proposes corresponding countermeasures that can serve as a reference for census work.

8.
To investigate domestic pollutant discharge by urban residents, six representative residential communities in Tianjin were selected for surveys and monitoring of water consumption, wastewater volume, and wastewater quality, and the water-use data and per-capita domestic discharge coefficients of typical communities were analyzed. In the urban districts, the per-capita domestic discharge coefficients of COD, ammonia nitrogen, TN, and TP were 22.34-58.20, 5.45-9.11, 7.43-11.22, and 0.57-0.97 g/d, respectively; in the suburban districts they were 13.34-40.04, 0.99-8.26, 1.33-8.79, and 0.19-0.87 g/d, so the urban coefficients were markedly higher than the suburban ones. Using water-volume data from statistical yearbooks, city-wide per-capita domestic discharge coefficients for urban residents were then verified as 37.68, 6.92, 8.35, and 0.76 g/d for COD, ammonia nitrogen, TN, and TP, respectively.
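The per-capita coefficient used here is, in essence, the pollutant mass discharged per resident per day, which can be estimated from a monitored concentration and a per-capita wastewater volume. Below is a minimal sketch of that arithmetic; the function name and the illustrative concentration and water-use figures are assumptions for illustration, not data from the study.

```python
def per_capita_coefficient(concentration_mg_per_l: float,
                           wastewater_l_per_person_day: float) -> float:
    """Per-capita discharge coefficient in g/(person*day).

    concentration_mg_per_l: monitored pollutant concentration in wastewater (mg/L)
    wastewater_l_per_person_day: per-capita wastewater volume (L/(person*day))
    """
    # mg/L * L/(person*day) = mg/(person*day); divide by 1000 to convert to grams
    return concentration_mg_per_l * wastewater_l_per_person_day / 1000.0

# Hypothetical example: 250 mg/L COD and 150 L of wastewater per person per day
print(per_capita_coefficient(250.0, 150.0))  # 37.5 g/d, of the same order as the city-wide COD value
```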

9.
To address the incompleteness of heavy-metal generation and discharge coefficients for the electroplating industry in the current national Manual of Pollutant Generation and Discharge Coefficients, Bao'an and Longgang districts of Shenzhen, where electroplating enterprises are concentrated, were taken as the study area. Using a comprehensive survey combined with case measurements, the typical plating types of copper, nickel, and zinc were divided into 18 "four-same" combinations. Generation and discharge coefficients were calculated for each combination from the survey data, and their reasonableness and applicability were verified by case measurements. The results show that the generation quantities calculated by the coefficient method are in error by less than 16% and the discharge quantities by less than 30%, so the coefficients can be extended to accounting for heavy-metal pollutant discharges in the electroplating industry.
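The coefficient method described here scales a generation or discharge coefficient by the corresponding activity level and then checks the estimate against a case measurement. A minimal sketch of that estimate and the relative-error check follows; the coefficient value, activity level, and measured quantity are hypothetical.

```python
def coefficient_estimate(coefficient_g_per_unit: float, activity_units: float) -> float:
    """Estimated pollutant quantity (g) = coefficient (g per unit of activity) * activity level."""
    return coefficient_g_per_unit * activity_units

def relative_error(estimated: float, measured: float) -> float:
    """Relative error of the coefficient-based estimate against a case measurement."""
    return abs(estimated - measured) / measured

# Hypothetical electroplating line: 0.8 g Cu discharged per m^2 plated, 1.2e4 m^2 per year,
# compared against a hypothetical measured discharge of 1.0e4 g
est = coefficient_estimate(0.8, 1.2e4)
print(est, relative_error(est, measured=1.0e4))  # acceptable if within the 30% bound reported above
```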

10.
Based on data from the First National Pollution Source Census, the COD generation and discharge intensity of industrial sources in Rugao City and its sectoral composition were analyzed. The contributions of different industry groups to GDP and to COD are clearly inverted: the major COD generators and dischargers are concentrated in some of the city's traditional industries, and the treatment level of key pollution source enterprises is markedly higher than that of ordinary ones. On this basis, the paper discusses the COD generation and discharge level of Rugao's industrial sources, the significance of industrial restructuring in reducing COD intensity, and the main approaches to strengthening pollution control in traditional industries.

11.
Derivation of ambient water quality criteria for formaldehyde
D. W. Hohreiter, D. K. Rigg. Chemosphere, 2001, 45(4-5): 471-486.
This paper describes the derivation of aquatic life water quality criteria for formaldehyde, developed in accordance with the United States Environmental Protection Agency's (USEPA's) Guidelines for Deriving Numerical National Water Quality Criteria for the Protection of Aquatic Organisms and Their Uses. The initial step in deriving water quality criteria was to conduct an extensive literature search to assemble available acute and chronic toxicity data for formaldehyde. The literature search identified a large amount of information on acute toxicity of formaldehyde to fish and aquatic invertebrates. These acute data were evaluated with respect to data quality, and poor quality or uncertain data were excluded from the database. The resulting database met the USEPA requirements for criteria derivation by having data for at least one species in at least eight different taxonomic families. One shortcoming of the literature-derived database, however, was that few studies involved analytical confirmation of nominal formaldehyde concentrations and reported toxicity endpoints. Also, there were relatively few data on chronic toxicity. The acute toxicity data set consisted of data for 12 species of fish, 3 species of amphibians, and 11 species of invertebrates. These data were sufficient, according to USEPA guidelines, to calculate a final acute value (FAV) of 9.15 mg/l, and an acute aquatic life water quality criterion (one-half the FAV) of 4.58 mg/l. A final acute-chronic ratio (ACR) was calculated using available chronic toxicity data and USEPA-recommended conservative default assumptions to account for missing data. Using the FAV and the final ACR (5.69), the final chronic aquatic life water quality criterion was determined to be 1.61 mg/l.
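The two criteria quoted in the abstract follow directly from the final acute value: the acute criterion is one-half the FAV, and the chronic criterion is the FAV divided by the final acute-chronic ratio. A minimal check of that arithmetic, using only the values reported above, is sketched below.

```python
# Values reported in the abstract
fav = 9.15   # final acute value (FAV), mg/L
acr = 5.69   # final acute-chronic ratio (ACR)

acute_criterion = fav / 2.0     # one-half the FAV; the abstract reports 4.58 mg/L
chronic_criterion = fav / acr   # FAV divided by the final ACR; the abstract reports 1.61 mg/L

print(f"acute criterion:   {acute_criterion:.2f} mg/L")
print(f"chronic criterion: {chronic_criterion:.2f} mg/L")
```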

12.
To assess the accuracy of the raw data reported by online VOC monitoring instruments, a method for evaluating their data-identification capability was established. The results show that the eight types of online VOC monitors evaluated differ in data identification: the mean relative deviation between raw data and manually audited data ranged from −100% to 56,652%, and the deviation was larger for low-carbon species than for high-carbon species. Based on the application case analyses, a "data identification index" is proposed to quantitatively distinguish the data-identification capability of different online VOC monitors. The method provides a new assessment metric for future evaluations of online VOC monitoring instruments and can also be used to judge the rapid-analysis capability of other online monitoring equipment.
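The comparison in this abstract rests on the relative deviation between each instrument's raw value and the corresponding manually audited value, averaged over the paired records. A minimal sketch of that statistic is shown below, assuming paired raw and audited concentration lists; the numbers are illustrative only and are not from the study.

```python
def mean_relative_deviation(raw, audited):
    """Mean relative deviation (%) of raw instrument values against manually audited values."""
    if len(raw) != len(audited):
        raise ValueError("raw and audited series must be the same length")
    # skip records where the audited value is zero to avoid division by zero
    deviations = [(r - a) / a * 100.0 for r, a in zip(raw, audited) if a != 0]
    return sum(deviations) / len(deviations)

# Hypothetical paired records for one species (ppb); a raw value of zero gives the -100% floor
raw_ppb = [1.8, 2.4, 0.0, 5.1]
audited_ppb = [1.2, 2.0, 1.0, 5.0]
print(mean_relative_deviation(raw_ppb, audited_ppb))  # average deviation across the four records
```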

13.
Although networks of environmental monitors are constantly improving through advances in technology and management, instances of missing data still occur. Many methods of imputing values for missing data are available, but they are often difficult to use or produce unsatisfactory results. I-Bot (short for "Imputation Robot") is a context-intensive approach to the imputation of missing data in data sets from networks of environmental monitors. I-Bot is easy to use and routinely produces imputed values that are highly reliable. I-Bot is described and demonstrated using more than 10 years of California data for daily maximum 8-hr ozone, 24-hr PM2.5 (particulate matter with an aerodynamic diameter <2.5 μm), mid-day average surface temperature, and mid-day average wind speed. I-Bot performance is evaluated by imputing values for observed data as if they were missing, and then comparing the imputed values with the observed values. In many cases, I-Bot is able to impute values for long periods with missing data, such as a week, a month, a year, or even longer. Qualitative visual methods and standard quantitative metrics demonstrate the effectiveness of the I-Bot methodology. Implications: Many resources are expended every year to analyze and interpret data sets from networks of environmental monitors. A large fraction of those resources is used to cope with difficulties due to the presence of missing data. The I-Bot method of imputing values for such missing data may help convert incomplete data sets into virtually complete data sets that facilitate the analysis and reliable interpretation of vital environmental data.
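I-Bot itself is not described here in enough detail to reproduce, but the evaluation strategy in the abstract, hiding observed values as if they were missing, imputing them, and comparing imputed against observed, can be sketched generically. The snippet below shows that leave-out-and-compare loop with a placeholder carry-forward imputer standing in for I-Bot; every name and value in it is an assumption for illustration.

```python
import random

def carry_forward(masked):
    """Placeholder imputer: fill each gap with the most recent observed value."""
    filled, last = [], None
    for v in masked:
        if v is not None:
            last = v
        filled.append(last if v is None else v)
    return filled

def evaluate_imputation(series, impute, holdout_fraction=0.1, seed=0):
    """Hide a fraction of observed values, impute them, and compare with the originals."""
    rng = random.Random(seed)
    held_out = rng.sample(range(1, len(series)), int(len(series) * holdout_fraction))
    masked = [None if i in held_out else v for i, v in enumerate(series)]
    filled = impute(masked)
    errors = [abs(filled[i] - series[i]) for i in held_out]
    return sum(errors) / len(errors)  # mean absolute error over the hidden values

# Hypothetical daily maximum 8-hr ozone series (ppb)
ozone = [41.0, 43.5, 44.0, 47.2, 45.1, 42.9, 40.3, 39.8, 44.6, 46.0]
print(evaluate_imputation(ozone, carry_forward, holdout_fraction=0.3))
```

Any imputation routine, including one as elaborate as I-Bot, could be dropped in for carry_forward and scored the same way.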

14.
In the last 5 yr, the capabilities of earth-observing satellites and the technological tools to share and use satellite data have advanced sufficiently to consider using satellite imagery in conjunction with ground-based data for urban-scale air quality monitoring. Satellite data can add synoptic and geospatial information to ground-based air quality data and modeling. An assessment of the integrated use of ground-based and satellite data for air quality monitoring, including several short case studies, was conducted. Findings identified current U.S. satellites with potential for air quality applications, with others available internationally and several more to be launched within the next 5 yr; several of these sensors are described in this paper as illustrations. However, use of these data for air quality applications has been hindered by historical lack of collaboration between air quality and satellite scientists, difficulty accessing and understanding new data, limited resources and agency priorities to develop new techniques, ill-defined needs, and poor understanding of the potential and limitations of the data. Specialization in organizations and funding sources has limited the resources for cross-disciplinary projects. To successfully use these new data sets requires increased collaboration between organizations, streamlined access to data, and resources for project implementation.

16.
Monitoring and sampling of air quality data are costly and labor intensive, and the necessary effort increases progressively with the required accuracy. Some loss of data because of instrument breakdown, data transmission failure, or service and calibration procedures is more or less unavoidable. Calculation of characteristic parameters such as means or percentiles, needed for information compression and for comparison with air quality standards, does not require complete data sets, since successive primary data such as half-hour means are not independent of each other: emission patterns and periodically recurring or comparatively slowly changing transmission conditions make these data autocorrelated. Using air quality data for various pollutants (NO2, SO2, CO, O3) from the Austrian public monitoring networks over the last decade, various patterns of data loss are simulated and used to compute air quality parameters (fractiles, semi-annual means, daily means). The variation interval of these parameters is compared with the equivalent parameters obtained from the complete data sets. Autocorrelation functions of these data are also calculated and discussed briefly. Finally, the applicability of parameters obtained from truncated data sets to air quality management decisions is discussed and compared with the Austrian standard. The results indicate an error of only a few percent, depending on the type of data loss, when these parameters are computed from incomplete data sets with up to 50% data loss. A reduction of monitoring effort without substantial loss of information is therefore possible.
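The core of this study, recomputing characteristic parameters such as percentiles from deliberately truncated data sets and comparing them with the values from the complete record, can be expressed very compactly. The sketch below simulates random data loss on a synthetic series of half-hour means and compares a high percentile before and after; the data and the loss pattern are assumptions for illustration, not the Austrian network data.

```python
import random

def percentile(values, q):
    """q-th percentile (0-100) by linear interpolation between order statistics."""
    s = sorted(values)
    pos = (len(s) - 1) * q / 100.0
    lower = int(pos)
    frac = pos - lower
    return s[lower] if lower + 1 >= len(s) else s[lower] + frac * (s[lower + 1] - s[lower])

rng = random.Random(1)
# One year of synthetic half-hour NO2 means (17,520 values), purely illustrative
complete = [rng.gauss(40.0, 12.0) for _ in range(17520)]

# Simulate 50% random data loss and compare the 98th percentile
truncated = [v for v in complete if rng.random() > 0.5]
full_p98 = percentile(complete, 98)
trunc_p98 = percentile(truncated, 98)
print(full_p98, trunc_p98, abs(trunc_p98 - full_p98) / full_p98 * 100)  # deviation in percent
```

Repeating this for other loss patterns (blocks of consecutive gaps, periodic outages) is how the sensitivity of the parameters to the type of data loss would be explored.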

17.
Data from well-designed experiments provide the strongest evidence of causation in biodiversity studies. However, for many species the collection of these data is not scalable to the spatial and temporal extents required to understand patterns at the population level. Only citizen science projects can gather sufficient quantities of data, but data collected from volunteers are inherently noisy and heterogeneous. Here we describe a ‘Big Data’ approach to improve the data quality in eBird, a global citizen science project that gathers bird observations. First, eBird’s data submission design ensures that all data meet high standards of completeness and accuracy. Second, we take a ‘sensor calibration’ approach to measure individual variation in eBird participants’ ability to detect and identify birds. Third, we use species distribution models to fill in data gaps. Finally, we provide examples of novel analyses exploring population-level patterns in bird distributions.

18.
A neural network modeling study was carried out for the activated sludge wastewater treatment system of the Songjiang wastewater treatment plant. After anomalous records were removed from the actual operating data according to mechanism-based and range-based criteria, the samples were randomly divided into training, validation, and test sets. The number of hidden-layer nodes was determined by trial and error to avoid a network structure that is too large or too small, and the validation samples were used to monitor training in real time so as to avoid "overtraining". This addresses the two main difficulties of neural network modeling and yields a reliable and effective neural network model of the activated sludge system. The model was then used to simulate the operation of the activated sludge system. The study shows that neural network techniques can be applied well to activated sludge system modeling; the model generalizes well and has good practical value.
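The procedure described, randomly splitting operating records into training, validation, and test sets and halting training once the validation error stops improving, is standard early stopping. Below is a minimal, framework-free sketch of that control loop; the train_one_epoch and validation_error callables are placeholders for whatever network and data the study used, and are assumptions on my part rather than the authors' implementation.

```python
def train_with_early_stopping(train_one_epoch, validation_error,
                              max_epochs=1000, patience=20):
    """Stop training once the validation error has not improved for `patience` epochs,
    which is the guard against "overtraining" described in the abstract."""
    best_error = float("inf")
    best_epoch = 0
    for epoch in range(max_epochs):
        train_one_epoch()            # one pass over the training samples
        error = validation_error()   # error on the held-out validation samples
        if error < best_error:
            best_error, best_epoch = error, epoch
        elif epoch - best_epoch >= patience:
            break                    # validation error has stopped improving
    return best_epoch, best_error

# Tiny demonstration with a synthetic validation-error curve
errors = iter([0.9, 0.5, 0.3, 0.25, 0.24, 0.26, 0.27, 0.28] + [0.3] * 50)
print(train_with_early_stopping(lambda: None, lambda: next(errors), patience=3))
# stops shortly after the validation error bottoms out
```

In the study's terms, the trial-and-error choice of hidden-layer size would amount to rerunning this loop for several candidate sizes and keeping the one with the lowest validation error, before the final evaluation on the untouched test samples.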

19.
Monitoring and laboratory data play integral roles alongside fate and exposure models in comprehensive risk assessments. The principle in the European Union Technical Guidance Documents for risk assessment is that measured data may take precedence over model results but only after they are judged to be of adequate reliability and to be representative of the particular environmental compartments to which they are applied. In practice, laboratory and field data are used to provide parameters for the models, while monitoring data are used to validate the models' predictions. Thus, comprehensive risk assessments require the integration of laboratory and monitoring data with the model predictions. However, this interplay is often overlooked. Discrepancies between the results of models and monitoring should be investigated in terms of the representativeness of both. Certainly, in the context of the EU risk assessment of existing chemicals, the specific requirements for monitoring data have not been adequately addressed. The resources required for environmental monitoring, both in terms of manpower and equipment, can be very significant. The design of monitoring programmes to optimise the use of resources and the use of models as a cost-effective alternative are increasing in importance. Generic considerations and criteria for the design of new monitoring programmes to generate representative quality data for the aquatic compartment are outlined and the criteria for the use of existing data are discussed. In particular, there is a need to improve the accessibility to data sets, to standardise the data sets, to promote communication and harmonisation of programmes and to incorporate the flexibility to change monitoring protocols to amend the chemicals under investigation in line with changing needs and priorities.

20.
For the activated sludge wastewater treatment system of the Songjiang wastewater treatment plant, a neural network modeling study was conducted. After anomalous records were removed from the actual operating data, the samples were randomly divided into training, validation, and test sets. The number of hidden-layer nodes was determined by trial and error, the validation samples were used to monitor the training process in real time to avoid "overtraining", and the initial connection weights were varied repeatedly to locate the global minimum, yielding a neural-network-based mathematical model of the activated sludge system with good generalization. The model was then used to analyze the simulation and control of the system's operation. The example study shows that neural network techniques can be applied well to the modeling and control of activated sludge systems and have good theoretical and practical significance.
