Similar Literature
20 similar documents found (search time: 609 ms)
1.
The characteristic features of the distribution of pesticide residues in crop units and single sample increments were studied based on more than 19,000 residue concentrations measured in root vegetables, leafy vegetables, and small-, medium- and large-size fruits, representing 20 different crops and 46 pesticides. Log-normal, gamma and Weibull distributions were found to provide the best fit for the relative frequency distributions of individual residue data sets, with the lognormal distribution giving the overall best fit. The relative standard deviation (CV) of residues in the various crops ranged from 15% to 170%. The 100–120 residue values in any one data set were too few to identify the potential effects of factors such as the chemical and physical properties of the pesticides and the nature of the crops. Therefore, the average of the CV values obtained from the individual data sets was calculated and considered to be the best estimate of the likely variability of unit crop residues for treated-field samples (CV = 0.8) and market samples (CV = 1.1). The larger variation of residues in market samples was attributed to the potential mixing of lots and to the varying proportion of non-detects. The expected average variability of residues in composited samples can be calculated from these typical values, taking the sample size into account.
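The scaling of composite-sample variability with sample size mentioned at the end of this abstract follows from standard error propagation: for n independent units, the CV of the composite shrinks by a factor of the square root of n. A minimal sketch, assuming uncorrelated unit residues; the function name is illustrative, and only the CV = 0.8 treated-field figure comes from the abstract:

```python
import math

def composite_cv(unit_cv, n):
    """CV of a composite sample of n crop units,
    assuming residues in the units are uncorrelated."""
    return unit_cv / math.sqrt(n)

# typical unit-to-unit CV of 0.8 for treated-field crops (from the abstract),
# composited over 10 units
print(round(composite_cv(0.8, 10), 3))
```

So a 10-unit composite would be expected to show roughly a quarter of the unit-to-unit variability, which is why composite residue data understate unit residue variation.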

2.
The lognormal, Weibull, and type V Pearson distributions were selected to fit the concentration frequency distributions of particulate matter with an aerodynamic diameter ≤10 μm (PM10) and SO2 in the Taiwan area. Air quality data from three stations, Hsin-Chu, Shalu, and Gain-Jin, were fitted with the three distributions and compared with the measured data. The parameters of the unimodal and bimodal fitted distributions were obtained by the methods of maximum likelihood and nonlinear least squares, respectively. The root mean square error (RMSE), index of agreement (d), and Kolmogorov-Smirnov (K-S) test were used as criteria to judge the goodness-of-fit of the three distributions. The results show that the frequency distributions of PM10 concentration at the Hsin-Chu and Shalu stations are unimodal, whereas the distribution at Gain-Jin is bimodal. The distribution type of PM10 concentration varied greatly among areas and could be influenced by local meteorological conditions. The SO2 concentration distributions were all unimodal. The results also show that the lognormal distribution is the most appropriate for representing the PM10 distribution, while the Weibull and lognormal distributions are more suitable for representing the SO2 distribution. Moreover, the days exceeding the air quality standard (AQS) (PM10 > 125 μg/m3) in the coming year at the Hsin-Chu, Shalu, and Gain-Jin stations are successfully predicted by the theoretical distributions.
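The fit-then-test workflow used in this abstract (maximum-likelihood lognormal fit, then a Kolmogorov-Smirnov statistic) can be sketched with the standard library alone. The "PM10" sample below is synthetic, and the K-S statistic is computed directly against the fitted CDF rather than through a library routine:

```python
import math
import random

def lognorm_cdf(x, mu, sigma):
    """CDF of a lognormal distribution with log-scale parameters mu, sigma."""
    return 0.5 * (1 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

def fit_lognormal(data):
    """Maximum-likelihood fit: mean and std of the log-transformed data."""
    logs = [math.log(x) for x in data]
    mu = sum(logs) / len(logs)
    sigma = (sum((v - mu) ** 2 for v in logs) / len(logs)) ** 0.5
    return mu, sigma

def ks_statistic(data, mu, sigma):
    """Kolmogorov-Smirnov distance between the empirical CDF and the fit."""
    xs = sorted(data)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        f = lognorm_cdf(x, mu, sigma)
        d = max(d, abs((i + 1) / n - f), abs(f - i / n))
    return d

random.seed(0)
sample = [math.exp(random.gauss(3.0, 0.5)) for _ in range(500)]  # synthetic "PM10"
mu, sigma = fit_lognormal(sample)
print(round(mu, 2), round(sigma, 2), round(ks_statistic(sample, mu, sigma), 3))
```

A small K-S distance (well below the usual critical value of about 1.36/sqrt(n)) indicates the fitted distribution is an acceptable description of the data.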

3.
Although networks of environmental monitors are constantly improving through advances in technology and management, instances of missing data still occur. Many methods of imputing values for missing data are available, but they are often difficult to use or produce unsatisfactory results. I-Bot (short for “Imputation Robot”) is a context-intensive approach to the imputation of missing data in data sets from networks of environmental monitors. I-Bot is easy to use and routinely produces imputed values that are highly reliable. I-Bot is described and demonstrated using more than 10 years of California data for daily maximum 8-hr ozone, 24-hr PM2.5 (particulate matter with an aerodynamic diameter <2.5 μm), mid-day average surface temperature, and mid-day average wind speed. I-Bot performance is evaluated by imputing values for observed data as if they were missing, and then comparing the imputed values with the observed values. In many cases, I-Bot is able to impute values for long periods with missing data, such as a week, a month, a year, or even longer. Qualitative visual methods and standard quantitative metrics demonstrate the effectiveness of the I-Bot methodology.

Implications: Many resources are expended every year to analyze and interpret data sets from networks of environmental monitors. A large fraction of those resources is used to cope with difficulties due to the presence of missing data. The I-Bot method of imputing values for such missing data may help convert incomplete data sets into virtually complete data sets that facilitate the analysis and reliable interpretation of vital environmental data.

4.
Air quality inside Asian temples is typically poor because of the burning of incense. This study measured and analyzed concentrations of fine (PM2.5) and coarse (PM2.5-10) particulate matter and their metal elements inside a temple in central Taiwan. Experimental results showed that the concentrations of the metals Cd, Ni, Pb, and Cr inside the temple were higher than those reported for rural, suburban, urban, and industrial areas in other studies. Three theoretical parent distributions (lognormal, Weibull, and gamma) were used to fit the measured data. The lognormal distribution was the most appropriate for representing the frequency distributions of PM10, PM2.5, and their metal elements. Furthermore, the central limit theorem, an H-statistic-based scheme, and parametric and nonparametric bootstrap methods were used to estimate confidence intervals for mean pollutant concentrations. The estimated upper confidence limits (UCLs) of the means were very consistent across methods because the sample coefficient of variation (CV) was < 1. When the sample CV was > 1, the H-statistic-based method tended to overestimate the UCL compared with the other methods. Confidence intervals for pollutant concentrations at different percentiles were evaluated using parametric and nonparametric bootstrap methods. The probabilities of pollutants exceeding a critical concentration were also calculated.
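A nonparametric bootstrap UCL of the mean, one of the methods named in this abstract, can be sketched in a few lines. This is a plain percentile-method bootstrap (not the H-statistic or BCa variants), and the concentration values are invented for illustration:

```python
import random

def bootstrap_ucl(data, level=0.95, n_boot=2000, seed=1):
    """Nonparametric percentile-bootstrap upper confidence limit for the mean."""
    rng = random.Random(seed)
    n = len(data)
    means = sorted(sum(rng.choice(data) for _ in range(n)) / n
                   for _ in range(n_boot))
    return means[int(level * n_boot) - 1]

# invented pollutant concentrations (e.g. metal element, ng/m3)
conc = [12.0, 8.5, 30.2, 5.1, 44.7, 9.9, 15.3, 7.2, 22.8, 11.4]
mean = sum(conc) / len(conc)
print(round(mean, 2), round(bootstrap_ucl(conc), 2))
```

For right-skewed data like these, the bootstrap UCL sits noticeably above the sample mean; the abstract's observation is that the different UCL methods agree when the sample CV stays below 1.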

5.
Complaints from neighbours about odour pollution from livestock farming are increasing. Therefore, some countries have already developed guidelines to address odour from livestock. These guidelines are used to assess the separation distance needed between livestock buildings and residential areas such that odour is not perceived as an annoyance. In all of these guidelines, the separation distance is calculated as a function of the emission rate, mainly via power functions with an exponent between 0.3 and 0.5. The Austrian regulatory dispersion model, a Gaussian model, was used to calculate the frequency distribution of the dilution factor for 12 classes of distances between 50 and 500 m downwind from the source. These data were fitted to an extended Weibull distribution of the dilution factor to determine the exponent of the power function describing the separation distance as a function of the emission. The exponent has a value of about 0.72. This result, achieved with wind and stability statistics representative of the Austrian flatlands north of the Alps, indicates a stronger dependence of the separation distance on the odour emission than suggested by the guidelines.
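The power-law form of such guideline formulas is simple to state in code. A sketch, where the scaling constant k is hypothetical and only the exponent of about 0.72 comes from the abstract:

```python
def separation_distance(emission, k=1.0, exponent=0.72):
    """Separation distance as a power function of the odour emission rate.
    k is a hypothetical scaling constant; exponent ~0.72 is from the abstract
    (guidelines typically use 0.3-0.5 instead)."""
    return k * emission ** exponent

# with exponent 0.72, doubling the emission multiplies the distance by 2**0.72,
# i.e. about 1.65 -- a stronger response than the ~1.23-1.41 implied by 0.3-0.5
ratio = separation_distance(200.0) / separation_distance(100.0)
print(round(ratio, 2))
```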

6.
Atmospheric lead concentration distribution in Northern Taiwan
Lu HC  Tsai CJ  Hung IF 《Chemosphere》2003,52(6):1079-1088
Atmospheric lead concentrations were measured randomly, approximately once per week, at five traffic sites in northern Taiwan from September 1994 to May 1995. Three types of theoretical distributions (lognormal, Weibull and gamma) were selected to fit the frequency distribution of the measured lead concentrations. Four goodness-of-fit criteria were used to judge which theoretical distribution is the most appropriate to represent the frequency distributions of atmospheric lead. The results show that atmospheric lead concentrations in total suspended particulates fit the lognormal distribution reasonably well in northern Taiwan. The intervals of the fitted theoretical cumulative frequency distributions (CFDs) successfully contain the measured data when the population mean is estimated with a 95% confidence interval. In addition, atmospheric lead concentrations exceeding a critical concentration are also predicted from the fitted theoretical CFDs.

7.
Multivariate analysis of environmental data sets requires the absence of missing values or their substitution by small values. However, if the data are transformed logarithmically prior to the analysis, this solution cannot be applied, because the logarithm of a small value might become an outlier. Several methods for substituting missing values can be found in the literature, although none of them guarantees that no distortion of the structure of the data set is produced. We propose a method for assessing these distortions which can be used for deciding whether or not to retain the samples or variables containing missing values, and for investigating the performance of different substitution techniques. The method analyzes the structure of the distances among samples using Mantel tests. We present an application of the method to PCDD/F data measured in samples of terrestrial moss as part of a biomonitoring study.

8.
The COMPLEX I and COMPLEX II Gaussian dispersion models for complex terrain applications have been made available by EPA. Various terrain treatment options under IOPT(25) can be selected for a particular application, one of which [IOPT(25) = 1] is an algorithm similar to that of the VALLEY model. A model performance evaluation exercise involving three of the available options with both COMPLEX models was carried out using SF6 tracer measurements taken during worst-case stable impaction conditions in complex terrain at the Harry Allen Plant site in southern Nevada. The models did not reproduce observed concentrations on an event-by-event basis, with correlation coefficients for 1-h concentrations of 0-0.3. When observed and calculated cumulative frequency distributions for 1-h and 3-h concentrations were compared, a close correspondence between observations and concentrations calculated with COMPLEX I, IOPT(25) = 2 or 3 was noted; both options consistently overestimated observed concentrations. With IOPT(25) = 1, upper percentile (maximum) values in the calculated frequency distribution exceeded the corresponding IOPT(25) = 2 or 3 values by roughly a factor of 2, and observed values by 2.5-5. COMPLEX II typically produced maximum values 2-4 times as great as COMPLEX I for the same terrain treatment option. From these results it is concluded that: 1) the physically unrealistic sector-spread approach used in VALLEY and COMPLEX I under stable impaction conditions is a surrogate for wind direction variation, and 2) the doubling of the plume centerline concentration due to ground reflection under terrain impingement conditions that is included in IOPT(25) = 1 is inappropriate.

These findings were found to be consistent with an analysis of nonconcurrent observed and calculated SO2 χ/Q frequency distributions for 1, 3, and 24 hours near the Four Corners Plant in New Mexico. The comparison involved a four-year calculated χ/Q data set and a two-year observed χ/Q data set at the worst-case high-terrain impact location near the plant.

9.
This paper deals with modeling observed frequency distributions of air quality data measured in the area of Venice, Italy. The paper discusses the application of the generalized gamma distribution (ggd), which has not been commonly applied to air quality data even though it embodies most of the distribution models used for air quality analyses. The approach yields important simplifications for statistical analyses. A comparison of the ggd with other relevant models (standard gamma, Weibull, lognormal), carried out on daily sulphur dioxide concentrations in the area of Venice, underlines the efficiency of ggd models in portraying experimental data.

10.
Species sensitivity distributions (SSDs) are increasingly used both in ecological risk assessment and in the derivation of water quality criteria. However, there has been debate about the choice of an appropriate approach for deriving water quality criteria based on SSDs, because the various methods can generate different values. The objective of this study was to compare the differences among the various methods. Data sets of acute toxicities of 12 substances to aquatic organisms, representing a range of classes with different modes of action, were studied. Nine typical statistical approaches, including parametric and nonparametric methods, were used to construct SSDs for the 12 chemicals. Water quality criteria, expressed as the hazardous concentration for 5% of species (HC5), were derived by several approaches. All approaches produced comparable results, and the values generated by the different approaches were significantly correlated. Variability among the HC5 estimates decreased with increasing sample size and was similar among the statistical methods applied. Of the statistical methods selected, the bootstrap method represented the best-fitting model for all chemicals, while the log-triangle and Weibull models were the best among the parametric methods evaluated. The bootstrap method is the primary choice for deriving water quality criteria when data points are sufficient (more than 20). If the available data are few, the other methods should all be constructed and the one that best describes the distribution of the data selected.
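One common nonparametric route to an HC5, resampling the species sensitivity data and taking an empirical 5th percentile, can be sketched as follows. The EC50 values are hypothetical, and the percentile rule used here is deliberately crude (for n = 20 it reduces to the resample minimum):

```python
import random

def bootstrap_hc5(toxicity, n_boot=2000, seed=7):
    """Nonparametric bootstrap estimate of the HC5
    (the 5th percentile of the species sensitivity distribution)."""
    rng = random.Random(seed)
    n = len(toxicity)
    hc5s = []
    for _ in range(n_boot):
        resample = sorted(rng.choice(toxicity) for _ in range(n))
        idx = max(0, int(0.05 * n) - 1)  # crude empirical 5th-percentile index
        hc5s.append(resample[idx])
    hc5s.sort()
    return hc5s[n_boot // 2]  # median of the bootstrap HC5 values

# hypothetical acute EC50s (mg/L) for 20 species
ec50 = [0.8, 1.2, 2.5, 3.1, 4.4, 5.0, 6.2, 7.7, 8.3, 9.9,
        11.2, 13.5, 15.0, 18.4, 22.1, 27.6, 33.0, 41.8, 55.2, 70.3]
print(bootstrap_hc5(ec50))
```

The spread of the `hc5s` list also gives a confidence interval for the HC5; with only 20 species it is wide, which is why the abstract recommends the bootstrap only when more than 20 data points are available.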

11.
Abstract

Air quality inside Asian temples is typically poor because of the burning of incense. This study measured and analyzed concentrations of fine (PM2.5) and coarse (PM2.5–10) particulate matter and their metal elements inside a temple in central Taiwan. Experimental results showed that the concentrations of metals Cd, Ni, Pb, and Cr inside the temple were higher than those at rural, suburban, urban, and industrial areas in other studies. Three theoretical parent distributions (lognormal, Weibull, and gamma) were used to fit the measured data. The lognormal distribution was the most appropriate distribution for representing frequency distributions of PM10, PM2.5, and their metal elements.

Furthermore, the central limit theorem, an H-statistic-based scheme, and parametric and nonparametric bootstrap methods were used to estimate confidence intervals for mean pollutant concentrations. The estimated upper confidence limits (UCLs) of the means were very consistent across methods because the sample coefficient of variation (CV) was <1. When the sample CV was >1, the H-statistic-based method tended to overestimate the UCL compared with the other methods. Confidence intervals for pollutant concentrations at different percentiles were evaluated using parametric and nonparametric bootstrap methods. The probabilities of pollutants exceeding a critical concentration were also calculated.

12.
A stochastic Weibull probability model was developed and verified to simulate the underlying frequency distributions of hourly ozone (O3) concentrations (exposure dynamics) using single, weekly mean values obtained from a passive (sodium nitrite absorbent) sampler. The simulation was based on data derived from a co-located continuous monitor. Although for now the model output may be considered specific to the elevation and location of the study site, the results were extremely good. This approach to approximating the O3 exposure dynamics can be extended to other sites with similar data sets and to developing a generalized understanding of stochastic O3 exposure-plant response relationships, conferring measurable benefits on the future use of passive O3 samplers in the absence of continuous monitoring.

13.
Lu HC 《Chemosphere》2004,54(7):805-814
Three theoretical parent frequency distributions (lognormal, Weibull and gamma) were used to fit the complete set of PM10 data in central Taiwan. The gamma distribution is the best of the three at representing high PM10 concentrations. However, the parent distribution sometimes diverges in predicting the high PM10 concentrations. Therefore, two predicting methods, Method I (the two-parameter exponential distribution) and Method II (the asymptotic distribution of extreme values), were used to fit the high PM10 concentration distributions more accurately. The results fitted by the two-parameter exponential distribution match the actual high PM10 data better than those fitted by the parent distributions. Both predicting methods can successfully predict the return period and the exceedances over a critical concentration in a future year. Moreover, the estimated emission source reductions of PM10 required to meet the air quality standard by Method I and Method II are very close, ranging from 34% to 48% in central Taiwan.
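The two-parameter exponential tail fit and the return-period estimate it yields (Method I in this abstract) can be sketched like this. The daily PM10 record below is fabricated, and the fit uses the simple mean-excess estimator:

```python
import math

def fit_exponential_tail(data, threshold):
    """Two-parameter exponential fit to exceedances of a threshold:
    F(x) = 1 - exp(-(x - threshold) / beta) for x > threshold."""
    exceed = [x for x in data if x > threshold]
    beta = sum(x - threshold for x in exceed) / len(exceed)  # mean excess
    rate = len(exceed) / len(data)  # fraction of days above the threshold
    return beta, rate

def return_period_days(x, threshold, beta, rate):
    """Expected number of days between exceedances of level x."""
    return 1.0 / (rate * math.exp(-(x - threshold) / beta))

# fabricated daily PM10 record: 350 unremarkable days plus 10 high episodes
pm10 = [80.0] * 350 + [130, 135, 142, 150, 158, 166, 175, 184, 195, 210]
beta, rate = fit_exponential_tail(pm10, 125.0)
print(round(beta, 1), round(return_period_days(200.0, 125.0, beta, rate)))
```

Inverting the same formula for a target return period gives the concentration level, which is how an exceedance count for a future year can be projected from the fitted tail.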

14.
The lognormal distribution is often used as a default model for regression analysis of particle size distribution (PSD) data; however, its goodness-of-fit to particulate matter (PM) sampled from animal buildings, and its comparison to other PSD models, have not been well examined. This study aimed to evaluate and compare the goodness-of-fit of six PSD models to total suspended particulate matter (TSP) samples collected from 15 animal buildings. Four particle size analyzers were used for PSD measurement. The models' goodness-of-fit was evaluated based on adjusted R2, Akaike's information criterion (AIC), and mean squared error (MSE) values. Results showed that the models' approximation of measured PSDs differed with particle size analyzer. The lognormal distribution model offered overall good approximations to measured PSD data, but was inferior to the gamma and Weibull distribution models when applied to PSD data derived from the Horiba and Malvern analyzers. Single-variable models, including the exponential, Khrgian-Mazin, and Chen's empirical models, provided relatively poor approximations and, thus, were not recommended for future investigations. A further examination of model-predicted PSD parameters revealed that even the best-fit model of the six could significantly misestimate the mean diameter, median diameter, and variance. However, compared with the other models, the best-fit model still offered the best estimates of the mean and median diameters, whereas the best predicted variances were given by the gamma distribution model.
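An AIC comparison of two candidate distribution models can be sketched with stdlib tools only. Hedged: the gamma fit below uses method-of-moments rather than maximum likelihood, and the "particle size" sample is synthetic lognormal data, so the lognormal model is expected to win here:

```python
import math
import random

def lognormal_aic(data):
    """AIC of a maximum-likelihood lognormal fit (2 parameters)."""
    logs = [math.log(x) for x in data]
    n = len(logs)
    mu = sum(logs) / n
    var = sum((v - mu) ** 2 for v in logs) / n
    ll = sum(-math.log(x) - 0.5 * math.log(2 * math.pi * var)
             - (math.log(x) - mu) ** 2 / (2 * var) for x in data)
    return 2 * 2 - 2 * ll

def gamma_aic(data):
    """AIC of a method-of-moments gamma fit (2 parameters)."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / n
    k, theta = mean ** 2 / var, var / mean  # shape and scale by moments
    ll = sum((k - 1) * math.log(x) - x / theta
             - k * math.log(theta) - math.lgamma(k) for x in data)
    return 2 * 2 - 2 * ll

random.seed(2)
sample = [math.exp(random.gauss(1.0, 0.8)) for _ in range(400)]  # synthetic sizes
print(lognormal_aic(sample) < gamma_aic(sample))  # lower AIC = preferred model
```

Both models have two parameters, so here AIC reduces to a likelihood comparison; with models of different complexity, the 2k penalty is what AIC adds over raw goodness-of-fit.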

15.
The purpose of this project was to investigate the relationship of ambient air quality measurements between two analytical methods, referred to as the total oxidant method and the chemiluminescent method. These two well-documented analytical methods were run simultaneously, side by side, at a site on the Houston ship channel. They were calibrated daily. The hourly averages were analyzed by regression techniques, and confidence intervals were calculated for the regression lines. Confidence intervals for point estimates were also calculated. These analyses were performed on all data sets with values greater than 10 parts per billion and again with values greater than 30 parts per billion. A regression line was also calculated for a second set of data from the preceding year. These data were generated before a chromium trioxide scrubber was installed to eliminate possible chemical interferences with the KI method.

The results show that, in general, the chemiluminescent ozone method tends to produce values as much as two times higher than the simultaneous total oxidant values. In one set of data, an 80 ppb chemiluminescent ozone value predicted a value of 43.9 ppb total oxidant, with a 95% confidence interval of 7.7 to 80.4 ppb. In the second set of data, an 80 ppb chemiluminescent ozone value predicted a value of 78 ppb total oxidant, with a 95% confidence interval of 0.4 to 156 ppb. Other statistical analyses confirmed that either measurement was a very poor predictor of the other.
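The regression-with-prediction-interval calculation behind numbers like these can be sketched as follows (ordinary least squares; a normal quantile of 1.96 stands in for the proper Student t value, and the paired readings are invented):

```python
import math

def ols_prediction_interval(x, y, x0, z=1.96):
    """Fit y = a + b*x by least squares and return an approximate 95%
    prediction interval at x0 (z replaces the Student t quantile)."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    a = ybar - b * xbar
    # residual variance with n - 2 degrees of freedom
    s2 = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    se = math.sqrt(s2 * (1 + 1 / n + (x0 - xbar) ** 2 / sxx))
    yhat = a + b * x0
    return yhat - z * se, yhat, yhat + z * se

# invented paired hourly averages (ppb): chemiluminescent vs. total oxidant
chem = [10.0, 20.0, 30.0, 40.0, 50.0]
oxid = [6.0, 11.0, 14.0, 22.0, 24.0]
lo, mid, hi = ols_prediction_interval(chem, oxid, 80.0)
print(round(lo, 1), round(mid, 1), round(hi, 1))
```

Note that `x0 = 80` lies outside the fitted range, which widens the interval through the `(x0 - xbar)**2 / sxx` term; the very wide intervals in the abstract reflect the same effect plus large residual scatter.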

16.
The objective of this research was to determine the optimum allocation of design points for estimating relative yield losses from ozone pollution when the true and fitted yield-ozone dose-response relationship follows the Weibull model. The optimum design depends on the values of the Weibull model parameters. A transformation was developed which allowed the optimum design (by the determinant criterion) for one parametric situation to be translated to any other, permitting the search for optimum designs to be restricted to one set of Weibull parameters. Optimum designs were determined for the case where the Weibull parameters are assumed known, and the effects of deviating from the optimum designs were investigated. Several alternative design strategies were considered for protecting against incorrectly guessing the Weibull model parameters when their true values are not known.

17.
A relatively simple Gaussian-type diffusion simulation model for calculating urban carbon monoxide (CO) concentrations as a function of local meteorology and the distribution of traffic is described. The model can be used in two ways: (1) in the synoptic mode, in which hourly concentrations at one or many receptor points are calculated from historical or forecast traffic and meteorological data; and (2) in the climatological mode, in which concentration frequency distributions are calculated on the basis of long-term sequences of input data. For model evaluation purposes, an extensive field study involving meteorological and air-quality measurements was conducted during November-December 1970 in San Jose, Calif., which has an automated network to provide traffic data throughout the central business district. Model refinements made on the basis of the data from this experimental program include the addition of a street-canyon submodel to compensate for the important aerodynamic effects of buildings on CO concentrations at streetside receptors. The magnitude of these effects was underscored by the concentrations measured on opposite sides of the street in San Jose, which frequently differed by a factor of two or more. Evaluation of the revised model has shown that calculated and observed concentration frequency distributions for street-canyon sites are in good agreement. Hour-average predictions are well correlated with observations (correlation coefficient of about 0.6 to 0.7), and about 80 percent of the calculated values are within 3 ppm of the observed hour-average concentrations, which ranged as high as 16 ppm.

18.
BACKGROUND, AIMS AND SCOPE: According to the German Federal Soil Protection Act, the natural function of soil as a habitat for human beings, animals, plants and soil organisms is, among other things, to be protected by deriving soil values for important chemicals with regard to their amounts in the environment, their persistence and/or their toxicity. This contribution presents the results of the mathematical derivation of such values for nine metals and ten organic substances from soil ecotoxicological effect values available in the literature for microbial processes, plants and soil invertebrates. MATERIAL AND METHODS: Ecotoxicological data were mostly extracted from published papers and reports and had to originate from valid studies that were performed according to internationally standardised guidelines (e.g. ISO) or were otherwise well documented, plausible and performed according to accepted laboratory practice. As test results, both structural (i.e., effects on mortality, growth or reproduction) and functional (i.e., effects on microbial activity or organic matter breakdown) parameters were included. The derivation of soil values was performed using the distribution-based extrapolation (DIBAEX) model with EC50s (effective concentrations) as input data. RESULTS: For 19 compounds, soil values could be calculated. In 18 of these 19 cases, clear laboratory ecotoxicological effects (i.e., EC50 values) below the calculated soil value have been found in the literature. DISCUSSION: In the few cases where a comparison with field studies is possible, effects have been observed in the same order of magnitude as the calculated soil values. A comparison with other similar approaches confirmed the plausibility of the calculated values. CONCLUSIONS: The DIBAEX method is a feasible and widely accepted method for deriving soil values from ecotoxicological input data. Data availability was already satisfactory for some substances, but other substances, especially organics, were only poorly covered. The soil values presented here were based on EC50 input data; however, depending on the protection level aimed at by using soil values in legislation, it might be appropriate to use other input data, such as NOECs, in the derivation process. RECOMMENDATIONS AND PERSPECTIVES: It is recommended to generate an appropriate number of data for further relevant substances by means of a test battery or multi-species approaches such as terrestrial model ecosystems. These tests should also consider the influence of the bioavailability of substances. A final recommendation of legally binding soil values demands a plausibility check of the mathematically derived values. This should include a comparison with natural background concentrations, soil values for other pathways and soil values used in the legislation of other countries. Finally, expert judgement always has to be considered.

19.
In this paper, a bootstrapped wavelet neural network (BWNN) was developed for predicting monthly ammonia nitrogen (NH4+–N) and dissolved oxygen (DO) in the Harbin region of northeast China. The Morlet wavelet basis function (WBF) was employed as the nonlinear activation function of a traditional three-layer artificial neural network (ANN) structure. Prediction intervals (PIs) were constructed according to the uncertainties calculated from the model structure and data noise. The performance of the BWNN model was also compared with four other models: a traditional ANN, a WNN, a bootstrapped ANN, and an autoregressive integrated moving average model. The results showed that the BWNN could handle severely fluctuating and non-seasonal time series of water quality data, and it performed better than the other four models. The uncertainty from data noise was smaller than that from the model structure for NH4+–N; conversely, the uncertainty from data noise was larger for the DO series. In addition, total uncertainties in the low-flow period were the largest, owing to complicated processes during the freeze-up period of the Songhua River. Further, a data missing-refilling scheme was designed, and BWNNs performed better for structural data missing (SD) than for incidental data missing (ID). For both ID and SD, the temporal method was satisfactory for filling the NH4+–N series, whereas spatial imputation was suitable for the DO series. This filling-and-forecasting BWNN method was applied to other areas suffering from real missing data, and the results demonstrated its efficiency. Thus, the methods introduced here will help managers to make informed decisions.

20.
Spectral confocal microscope visualizations of microsphere movement in unsaturated porous media showed that attachment at the air-water-solid (AWS) interface was an important retention mechanism. These visualizations can aid in resolving the functional form of the retention rates of colloids at the AWS interface. In this study, soil adsorption isotherm equations were adapted by replacing the chemical concentration in the water, as the independent variable, with the cumulative number of colloids passing by. In order of increasing number of fitted parameters, the functions tested were the Langmuir adsorption isotherm, the logistic distribution, and the Weibull distribution. The functions were fitted against colloid concentrations obtained from time series of images acquired with a spectral confocal microscope for three experiments in which either plain or carboxylated polystyrene latex microspheres were pulsed into a small flow chamber filled with cleaned quartz sand. Both moving and retained colloids were quantified over time. In fitting the models to the data, the agreement improved with increasing number of model parameters. The Weibull distribution gave the best overall fit. The logistic distribution did not fit the initial retention of microspheres well, but the fit was otherwise good. The Langmuir isotherm fitted only the longest time series well. The results can be explained as follows: initially, when colloids are first introduced, the rate of retention is low; once colloids are at the AWS interface, they act as anchor points for other colloids to attach to, thereby increasing the retention rate as clusters form; once the available attachment sites diminish, the retention rate decreases.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号