Similar Articles
1.
Synthetic nanoparticles have already been detected in the aquatic environment, so knowledge of their biodegradability is of utmost importance for risk assessment, yet such information is currently not available. The biodegradability of fullerenes; single-, double-, and multi-walled as well as COOH-functionalized carbon nanotubes; and cellulose and starch nanocrystals in the aqueous environment was therefore investigated according to OECD standards. The biodegradability of the starch and cellulose nanoparticles was also compared with that of their macroscopic counterparts. Fullerenes and all carbon nanotubes did not biodegrade at all, while the starch and cellulose nanoparticles biodegraded to levels similar to their macroscopic counterparts, although neither comfortably met the criterion for ready biodegradability (60% within 28 days). The cellulose and starch nanoparticles were also found to degrade faster than their macroscopic counterparts, owing to their higher surface area. These findings are the first report of the biodegradability of organic nanoparticles in the aquatic environment, an important accumulation environment for man-made compounds.
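The pass/fail criterion mentioned above can be sketched as a simple check: a substance is "readily biodegradable" if it reaches 60% degradation within the 28-day test window. The degradation curves below are invented for illustration, not data from the study.

```python
# Minimal sketch of the OECD ready-biodegradability pass/fail check:
# pass if >= 60% degradation is reached by day 28. Curves are illustrative.

def is_readily_biodegradable(percent_degraded_by_day, threshold=60.0, window_days=28):
    """percent_degraded_by_day: dict mapping test day -> cumulative % degradation."""
    return any(pct >= threshold
               for day, pct in percent_degraded_by_day.items()
               if day <= window_days)

# Hypothetical curves: carbon nanotubes show no degradation at all,
# starch nanocrystals degrade substantially but fall short of 60%.
cnt_curve    = {7: 0.0, 14: 0.0, 28: 0.0}
starch_curve = {7: 18.0, 14: 35.0, 28: 52.0}

print(is_readily_biodegradable(cnt_curve))     # False
print(is_readily_biodegradable(starch_curve))  # False
```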

2.
3.
4.
Prediction for biodegradability of chemicals by an empirical flowchart
A method for predicting the aerobic biodegradability of chemicals was developed based on empirical knowledge. A flowchart was derived from rule-of-thumb relationships between biodegradability and the number of functional groups and substructures in a given skeletal structure. The flowchart classifies chemicals as readily biodegradable, not readily biodegradable, or not predictable. It was validated using MITI data for 177 mono benzene derivatives and 168 acyclic compounds, yielding correct predictions at the 94% and 88% level, respectively.
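A rule-based flowchart of this kind can be sketched as a small classifier over skeleton type and functional-group counts. The specific rules below are invented for illustration; the paper's actual flowchart rules are not reproduced here.

```python
# Illustrative sketch of a rule-of-thumb biodegradability flowchart:
# classify a compound from its skeleton and functional-group counts.
# The rules are hypothetical, not the paper's.

def classify_biodegradability(skeleton, groups):
    """groups: dict of functional-group name -> count."""
    if skeleton not in ("benzene", "acyclic"):
        return "not predictable"
    # Hypothetical rules: multiple halogens hinder degradation,
    # esters and hydroxyls favour it.
    if groups.get("halogen", 0) >= 2:
        return "not readily biodegradable"
    if groups.get("ester", 0) + groups.get("hydroxyl", 0) >= 1:
        return "readily biodegradable"
    return "not readily biodegradable"

print(classify_biodegradability("benzene", {"hydroxyl": 1}))  # readily biodegradable
print(classify_biodegradability("acyclic", {"halogen": 3}))   # not readily biodegradable
```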

5.
Road transport is often the main source of air pollution in urban areas, and there is an increasing need to estimate its contribution precisely so that pollution-reduction measures (e.g. emission standards, scrappage programs, traffic management, ITS) are designed and implemented appropriately. This paper presents a meta-analysis of 50 studies dealing with the validation of various types of traffic emission model, including ‘average speed’, ‘traffic situation’, ‘traffic variable’, ‘cycle variable’, and ‘modal’ models. The validation studies employ measurements in tunnels, ambient concentration measurements, remote sensing, laboratory tests, and mass-balance techniques. One major finding of the analysis is that several models are only partially validated or not validated at all. The mean prediction errors are generally within a factor of 1.3 of the observed values for CO2, within a factor of 2 for HC and NOx, and within a factor of 3 for CO and PM, although differences as high as a factor of 5 have been reported. A positive mean prediction error for NOx (i.e. overestimation) was established for all model types and practically all validation techniques. In the case of HC, model predictions have been moving from underestimation to overestimation since the 1980s. The large prediction error for PM may be associated with different PM definitions between models and observations (e.g. size, measurement principle, exhaust/non-exhaust contribution). Statistical analyses show that the mean prediction error is generally not significantly different (p < 0.05) when the data are categorised according to model type or validation technique. Thus, there is no conclusive evidence that more complex models systematically perform better in terms of prediction error than less complex models. In fact, less complex models appear to perform better for PM.
Moreover, the choice of validation technique does not systematically affect the result, with the exception of a CO underprediction when validation is based on ambient concentration measurements and inverse modelling. The analysis identified two vital elements currently lacking in traffic emissions modelling: 1) guidance on the allowable error margins for different applications/scales, and 2) estimates of prediction errors. It is recommended that current and future emission models incorporate the capability to quantify prediction errors, and that clear international guidelines on expected accuracy be developed.
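The "within a factor of N" error measure used throughout the meta-analysis can be sketched directly: a prediction is within a factor of N of the observation if the larger of the two ratios pred/obs and obs/pred does not exceed N. The values below are invented for illustration.

```python
# Sketch of the factor-of-N error measure: symmetric in over- and
# under-prediction. Example values are illustrative, not study data.

def error_factor(predicted, observed):
    return max(predicted / observed, observed / predicted)

def within_factor(predicted, observed, n):
    return error_factor(predicted, observed) <= n

# Hypothetical NOx case: 80% overestimation is still within a factor of 2.
print(within_factor(1.8, 1.0, 2))   # True
# Hypothetical PM case: a 4x overestimate exceeds the factor-of-3 band.
print(within_factor(4.0, 1.0, 3))   # False
```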

6.
7.
8.
Ozone prediction has become an important activity in many U.S. ozone nonattainment areas. In this study, we describe the ozone prediction program in the Atlanta metropolitan area and analyze the performance of this program during the 1999 ozone-forecasting season. From May to September, a team of 10 air quality regulators, meteorologists, and atmospheric scientists made a daily prediction of the next-day maximum 8-hr average ozone concentration. The daily forecast was aided by two linear regression models, a 3-dimensional air quality model, and the no-skill ozone persistence model. The team's performance is compared with the numerical models using several numerical indicators. Our analysis indicated that (1) the team correctly predicted next-day peak ozone concentrations 84% of the time, (2) the two linear regression models performed better than the 3-dimensional air quality model, (3) persistence was a strong predictor of ozone concentrations with a performance of 78%, and (4) about half of the team's incorrect predictions could have been prevented with improved meteorological predictions.
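The two headline numbers above (84% for the team, 78% for persistence) are both hit rates of categorical forecasts, where the persistence baseline simply predicts that tomorrow will look like today. A minimal sketch, with invented forecast data:

```python
# Sketch of forecast hit rate and the no-skill persistence baseline
# (tomorrow's category = today's). Data are invented, not the 1999 season.

def hit_rate(forecasts, observations):
    hits = sum(f == o for f, o in zip(forecasts, observations))
    return hits / len(observations)

def persistence_forecast(observations):
    # Each day's observation becomes the forecast for the next day.
    return observations[:-1]

obs  = ["low", "high", "high", "high", "low", "low"]
team = ["low", "high", "low", "high", "low", "high"]

print(round(hit_rate(team, obs), 2))                           # 0.67
print(round(hit_rate(persistence_forecast(obs), obs[1:]), 2))  # 0.6
```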

9.
Quantitative structure-activity relationships (QSARs) urgently need to be applied in regulatory programs. Many QSAR models can predict the effect of a wide range of substances on different endpoints, particularly in the case of ecotoxicity, but it is difficult to choose the most appropriate model on the basis of the requirements of the application. During the EC-funded project DEMETRA (www.demetra-tox.net), a large number of QSAR models were developed for the prediction of different ecotoxicological endpoints. The individual DEMETRA models for rainbow trout LC50 after 96 h, water flea LC50 after 48 h, and honey bee LD50 after 48 h were used as a QSAR database to test the advantages of a new index for evaluating model uncertainty. This index takes into consideration the number of outliers (weighted by the total number of compounds) and their root mean square error. Application to the DEMETRA QSAR database indicated that the index can identify the models with the best performance with regard to outliers, and can be used, together with other classical statistical measures (e.g., the squared correlation coefficient), to support the evaluation of QSAR models.
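An uncertainty index of the kind described, combining the outlier fraction with the outliers' root mean square error, can be sketched as below. The weighting scheme here is an illustrative assumption, not the exact DEMETRA formula.

```python
import math

# Sketch of an outlier-based uncertainty index: the fraction of
# residuals beyond a cutoff, multiplied by the RMSE of those outliers.
# The exact DEMETRA formula is not reproduced; this is an assumption.

def outlier_index(residuals, cutoff=1.0):
    outliers = [r for r in residuals if abs(r) > cutoff]
    if not outliers:
        return 0.0
    fraction = len(outliers) / len(residuals)
    rmse = math.sqrt(sum(r * r for r in outliers) / len(outliers))
    return fraction * rmse

# Invented residuals (log units) for two hypothetical QSAR models.
good_model = [0.1, -0.2, 0.3, 1.2, -0.1]
poor_model = [0.5, -2.0, 2.5, 1.8, -0.4]

print(outlier_index(good_model) < outlier_index(poor_model))  # True
```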

10.
This paper introduces different quantitative structure-biodegradability relationship (QSBR) models, objectively compares the correlation and validity of each model, and describes the application of each QSBR model in detail. The study shows that only models that judge biodegradability using a broad range of molecular structures are truly effective.

11.
Many poorly water-soluble compounds fail regulatory ready biodegradation tests as the method of test material preparation limits the bioavailability of the chemical. The recognised method for delivery of poorly soluble materials into biodegradability tests consists of coating test material inside the test vessel or onto inert substrates (i.e., glass cover slide, boiling beads, filter paper, or Teflon stir bar) that are placed inside the vessels. Volatile solvents are often used to augment this process. Although these substrates work fairly well for delivering many poorly soluble materials into biodegradability tests, they have not been effective in keeping low density, poorly water-soluble substances in the test medium. Soon after medium is added to the test vessels, these chemicals break loose from the substrates and float on the surface where they have limited contact with micro-organisms in the test medium. Hence, there is a reduced potential for measuring substantial biodegradability in the test. This paper describes the work undertaken to establish a standard method of adding low density, poorly water-soluble substances into test vessels of biodegradability studies to ensure these materials remain in contact with micro-organisms in the test medium. The substances are prepared for testing by adsorption onto silica gel followed by dispersion into the culture medium. This method of delivery may provide greater intra- and inter-laboratory consistency in biodegradability test results for low density, poorly water-soluble substances and it may more closely mimic the probable transport and fate of these substances in the environment.

12.
This study assesses the biodegradation potential of a number of fatty amine derivatives in tests following the OECD guidelines for ready biodegradability. Several methods are used to reduce toxicity and improve the bioavailability of the fatty amine derivatives in these tests. Alkyl-1,3-diaminopropanes and octadecyltrimethylammonium chloride are toxic to microorganisms at the concentrations used in OECD ready biodegradability tests. The concentration of these fatty amine derivatives in the aqueous phase can be reduced by reacting humic or lignosulphonic acids with the derivatives, or through the addition of silica gel to the test bottles. Using these non-biodegradable auxiliary substances, positive ready biodegradability test results were obtained with tallow-1,3-diaminopropane and octadecyltrimethylammonium chloride. Demonstrating the ready biodegradability of the water-insoluble dioctadecylamine under the prescribed standard conditions is almost impossible due to the limited bioavailability of this compound. However, ready biodegradability results were achieved by using very low initial test substance concentrations and by introducing an organic phase; the contents of the bottles used to assess the biodegradability of dioctadecylamine were always mixed. The false negative biodegradability results obtained with the fatty amine derivatives studied are the result of toxic effects and/or limited bioavailability. The aids investigated therefore improve ready biodegradability testing.

13.
A model for the prediction of emission of volatile organic compounds (VOCs) from dry building material was developed based on mass transfer theory. The model considers both diffusion and convective mass transfer, and it accounts for the fact that, in most cases, the initial distribution of VOCs within the material is non-uniform. Under the condition that the initial amount of VOCs contained in the building material is the same, six different types of initial VOC distributions were studied in order to show their effects on the characteristics of emission. The results show that, for short-term predictions, the effects are significant and thus cannot be neglected. Because the initial distribution of VOCs is very difficult to determine directly, a conjugate gradient method with an adjoint problem for estimating functions was developed, which can be used to inversely estimate the initial distribution of VOCs within the material without a priori information on the functional form of the unknown function. Simulated measurements with and without measurement errors were used to validate the algorithm. This method successfully recovered all of the aforementioned six types of initial VOC distributions. A deviation between the exact and predicted initial condition near the bottom of the material was noticed, and a twin chamber method is proposed to obtain more accurate results. With accurate knowledge of the initial distribution of VOCs, source models will be able to yield more accurate predictions.
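The claim that the initial distribution matters for short-term emission can be illustrated with a toy 1D model: two profiles carrying the same total VOC mass, one uniform and one concentrated near the surface, emit at different rates early on. This is a deliberately crude explicit-diffusion sketch with invented parameters, not the paper's mass-transfer model.

```python
# Toy 1D diffusion sketch: material divided into slabs, slab 0 touches
# the air and loses VOC each step; slabs exchange by diffusion.
# All parameters are illustrative.

def emitted_after(profile, steps, d=0.2):
    c = list(profile)            # VOC concentration per slab; c[0] is the surface
    emitted = 0.0
    for _ in range(steps):
        flux_out = d * c[0]      # surface slab emits to room air
        emitted += flux_out
        new = c[:]
        new[0] -= flux_out
        for i in range(len(c) - 1):          # internal slab-to-slab diffusion
            j = d * (c[i + 1] - c[i])
            new[i] += j
            new[i + 1] -= j
        c = new
    return emitted

uniform      = [1.0, 1.0, 1.0, 1.0]     # same total mass (4.0) ...
surface_rich = [2.5, 1.0, 0.4, 0.1]     # ... but concentrated near the surface

# Short-term emission differs markedly between the two profiles.
print(emitted_after(surface_rich, 5) > emitted_after(uniform, 5))  # True
```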

14.
Empirical QSAR models are only valid in the domain in which they were trained and validated; applying a model to substances outside that domain can lead to grossly erroneous predictions. Partial least squares (PLS) regression provides prediction diagnostics that can be used to decide whether or not a substance is within the model domain, i.e. whether the model prediction can be trusted. QSAR models for four different environmental endpoints are used to demonstrate the importance of appropriate training set selection and how the reliability of QSAR predictions can be increased by outlier diagnostics. All models showed consistent results: test set prediction errors were very similar in magnitude to training set estimation errors when prediction outlier diagnostics were used to detect and remove outliers in the prediction data, while test set prediction errors for substances classified as outliers were much larger. The difference in the number of outliers between models with a randomly and a systematically selected training set illustrates well the need for representative training data.
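The domain check described can be sketched very simply: flag a query substance as a prediction outlier if any of its descriptor values falls outside the range spanned by the training set. This is a crude range-based applicability domain, not the PLS DModX-style diagnostics the abstract refers to; the descriptor data are invented.

```python
# Crude applicability-domain sketch: a substance is "in domain" only if
# every descriptor lies within the training set's range (optionally with
# a margin). PLS-based diagnostics are more refined; data are invented.

def descriptor_ranges(training_set):
    cols = list(zip(*training_set))
    return [(min(c), max(c)) for c in cols]

def in_domain(descriptors, ranges, margin=0.0):
    return all(lo - margin <= x <= hi + margin
               for x, (lo, hi) in zip(descriptors, ranges))

# Hypothetical 2-descriptor training set (e.g. logKow, molar volume).
train = [(1.2, 0.5), (2.0, 0.9), (1.5, 0.7), (2.4, 1.1)]
ranges = descriptor_ranges(train)

print(in_domain((1.8, 0.8), ranges))  # True: interpolation, prediction trusted
print(in_domain((5.0, 0.8), ranges))  # False: extrapolation, prediction suspect
```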

15.
The widely used ECOSAR computer programme for QSAR prediction of chemical toxicity towards aquatic organisms was evaluated using large data sets of industrial chemicals with varying molecular structures. Experimentally derived toxicity data covering acute effects on fish, Daphnia, and green algae growth inhibition for in total more than 1,000 randomly selected substances were compared to the predictions of the ECOSAR programme in order (1) to assess the capability of ECOSAR to correctly classify the chemicals into defined classes of aquatic toxicity according to the rules of EU regulation and (2) to determine the number of correct predictions within tolerance factors from 2 to 1,000. Regarding ecotoxicity classification, 65% (fish), 52% (Daphnia) and 49% (algae) of the substances were correctly predicted into the classes "not harmful", "harmful", "toxic" and "very toxic". At all trophic levels, about 20% of the chemicals were underestimated in their toxicity. The class of "not harmful" substances (experimental LC/EC50 > 100 mg l-1) represents nearly half of the whole data set. The percentages of correct predictions of toxic effects on fish, Daphnia and algae growth inhibition were 69%, 64% and 60%, respectively, when a tolerance factor of 10 was allowed. Focussing on those experimental results which were verified by analytically measured concentrations, the predictability of Daphnia and algae toxicity was improved by approximately three percentage points, whereas for fish no improvement was determined. The calculated correlation coefficients demonstrated poor correlation when the complete data set was taken, but showed good results for some of the ECOSAR chemical classes. The results are discussed in the context of literature data on the performance of ECOSAR and other QSAR models.
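The two evaluation steps described can be sketched directly: map an LC/EC50 value (mg/l) to the four EU aquatic-toxicity classes used above, and check whether a prediction falls within a given tolerance factor of the experimental value. The class boundaries follow the usual EU scheme (≤1 very toxic, ≤10 toxic, ≤100 harmful, >100 not harmful); the numeric examples are invented.

```python
# Sketch of the classification and tolerance-factor checks used in the
# ECOSAR evaluation. Example values are illustrative.

def toxicity_class(lc50_mg_per_l):
    if lc50_mg_per_l <= 1:
        return "very toxic"
    if lc50_mg_per_l <= 10:
        return "toxic"
    if lc50_mg_per_l <= 100:
        return "harmful"
    return "not harmful"

def within_tolerance(predicted, experimental, factor=10):
    return max(predicted / experimental, experimental / predicted) <= factor

print(toxicity_class(0.4))          # very toxic
print(toxicity_class(150.0))        # not harmful
print(within_tolerance(3.0, 20.0))  # True: within a factor of 10
```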

16.
17.
Several techniques have been developed over the last decade for the ensemble treatment of atmospheric dispersion model predictions. Two have received most of the attention: multi-model and ensemble prediction system (EPS) modeling. The multi-model approach relies on simulations produced by different atmospheric dispersion models using meteorological data from potentially different weather prediction systems. The EPS-based ensemble is generated by running a single atmospheric dispersion model with the ensemble weather prediction members. In this paper we compare both approaches with the help of statistical indicators, using simulations performed for the ETEX-1 tracer experiment, and evaluate both ensembles against measurement data. Among the most relevant results is that the multi-model median and the mean of the EPS-based ensemble produced the best results; we therefore consider a combination of the multi-model and EPS-based approaches a promising direction for further research.
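The two ensemble summaries compared above are simple to sketch: the median across a multi-model ensemble and the mean across EPS members. The concentration values below are invented, not ETEX-1 data; note how the median damps a single badly biased model.

```python
import statistics

# Sketch of the two ensemble summaries: multi-model median vs. mean of
# an EPS-based ensemble. Values are illustrative, not ETEX-1 data.

multi_model_preds = [1.2, 0.8, 5.0, 1.0, 1.1]   # different dispersion models
eps_preds         = [1.3, 1.0, 1.2, 0.9, 1.1]   # one model, EPS members

multi_model_median = statistics.median(multi_model_preds)
eps_mean = statistics.mean(eps_preds)

print(multi_model_median)   # 1.1 -- the 5.0 outlier model barely matters
print(round(eps_mean, 2))   # 1.1
```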

18.

19.
20.
Recent progress in developing artificial neural network (ANN) metamodels has paved the way for reliable use of these models in the prediction of air pollutant concentrations in urban atmosphere. However, improvement of prediction performance, proper selection of input parameters and model architecture, and quantification of model uncertainties remain key challenges to their practical use. This study has three main objectives: to select an ensemble of input parameters for ANN metamodels consisting of meteorological variables that are predictable by conventional weather forecast models and variables that properly describe the complex nature of pollutant source conditions in a major city, to optimize the ANN models to achieve the most accurate hourly prediction for a case study (city of Tehran), and to examine a methodology to analyze uncertainties based on ANN and Monte Carlo simulations (MCS). In the current study, the ANNs were constructed to predict criteria pollutants of nitrogen oxides (NOx), nitrogen dioxide (NO2), nitrogen monoxide (NO), ozone (O3), carbon monoxide (CO), and particulate matter with aerodynamic diameter of less than 10 μm (PM10) in Tehran based on the data collected at a monitoring station in the densely populated central area of the city. The best combination of input variables was comprehensively investigated taking into account the predictability of meteorological input variables and the study of model performance, correlation coefficients, and spectral analysis. Among numerous meteorological variables, wind speed, air temperature, relative humidity and wind direction were chosen as input variables for the ANN models. The complex nature of pollutant source conditions was reflected through the use of hour of the day and month of the year as input variables and the development of different models for each day of the week. 
ANN models were then constructed and validated, and a methodology for computing prediction intervals (PIs) and the probability of exceeding air quality thresholds was developed by combining ANNs and MCS based on Latin Hypercube Sampling (LHS). The results showed that proper ANN models can serve as reliable metamodels for the prediction of hourly air pollutant levels in urban environments. High correlations were obtained, with R2 above 0.82 between modeled and observed hourly pollutant levels for CO, NOx, NO2, NO, and PM10; predicted O3 levels, however, were less accurate. The combined use of ANNs and MCS appears very promising for analyzing air pollution prediction uncertainties. Replacing deterministic predictions with probabilistic PIs can enhance the reliability of ANN models and provides a means of quantifying prediction uncertainties.
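The uncertainty step described above can be sketched end to end: draw stratified input samples via Latin Hypercube Sampling, push them through the predictive model, and read a prediction interval and an exceedance probability off the spread of outputs. The "model" below is a trivial stand-in for a trained ANN, and all numbers are invented.

```python
import random

# Sketch of the MCS/LHS uncertainty workflow: LHS-perturbed inputs ->
# model predictions -> 90% prediction interval and exceedance probability.
# toy_model stands in for a trained ANN; parameters are illustrative.

def latin_hypercube(n, rng):
    # One uniform draw per equal-probability stratum, then shuffled.
    samples = [(i + rng.random()) / n for i in range(n)]
    rng.shuffle(samples)
    return samples

def toy_model(wind_speed):
    return 100.0 / (1.0 + wind_speed)   # pollutant level falls as wind rises

rng = random.Random(42)
winds = [2.0 + 4.0 * u for u in latin_hypercube(500, rng)]  # 2-6 m/s
preds = sorted(toy_model(w) for w in winds)

lo = preds[int(0.05 * len(preds))]      # 5th percentile
hi = preds[int(0.95 * len(preds))]      # 95th percentile
p_exceed = sum(p > 30.0 for p in preds) / len(preds)  # P(level > threshold)

print(lo < hi)                  # True: a non-degenerate 90% interval
print(0.0 <= p_exceed <= 1.0)   # True
```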
