Similar Literature
20 similar documents retrieved.
1.
Nonlinear programming techniques are frequently used to design optimum monitoring networks. These mathematically rigorous techniques are difficult to implement or become cumbersome when other design criteria must be considered. This paper presents a more pragmatic approach to the design of an optimal monitoring network to estimate human exposure to hazardous air pollutants. In this approach, an air quality simulation model is used to produce representative air quality patterns, which are then combined with population patterns to obtain typical exposure patterns. These combined patterns are used to determine ‘figures of merit’ for each potential monitoring site, which in turn identify and rank the most favorable sites. The spatial covariance structure of the air quality patterns is used to draw a ‘sphere of influence’ around each site to identify and eliminate redundant monitoring sites. This procedure determines the minimum number of sites required to achieve the desired spatial coverage. This methodology was used to design an optimal ambient air monitoring network for assessing population exposure to hazardous pollutants in the southeastern Ohio River valley.
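As a rough illustration of the kind of figure-of-merit ranking and covariance-based redundancy screening described above, the following Python sketch ranks candidate sites by mean exposure and drops sites that are strongly correlated with already-selected ones. The synthetic patterns, the population weights, and the 0.8 correlation cutoff are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical illustration of a figure-of-merit ranking with redundancy
# elimination, loosely following the approach described above.
# conc: simulated concentration patterns, shape (n_patterns, n_sites)
# pop:  population weight at each candidate site, shape (n_sites,)
rng = np.random.default_rng(0)
conc = rng.lognormal(mean=0.0, sigma=0.5, size=(100, 12))
pop = rng.uniform(1.0, 10.0, size=12)

# Figure of merit: mean exposure (concentration x population) at each site.
merit = (conc * pop).mean(axis=0)

# Correlation between sites stands in for the spatial covariance structure;
# a site inside another site's "sphere of influence" is treated as redundant.
corr = np.corrcoef(conc, rowvar=False)
REDUNDANCY_CUTOFF = 0.8  # assumed threshold, not from the paper

selected = []
for site in np.argsort(merit)[::-1]:          # best figure of merit first
    if all(abs(corr[site, s]) < REDUNDANCY_CUTOFF for s in selected):
        selected.append(site)

print("selected sites (by index):", selected)
```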

2.
A new simulation-optimization methodology is developed for cost-effective sampling network design associated with long-term monitoring of large-scale contaminant plumes. The new methodology is similar in concept to the one presented by Reed et al. (Reed, P.M., Minsker, B.S., Valocchi, A.J., 2000a. Cost-effective long-term groundwater monitoring design using a genetic algorithm and global mass interpolation. Water Resour. Res. 36 (12), 3731-3741) in that an optimization model based on a genetic algorithm is coupled with a flow and transport simulator and a global mass estimator to search for optimal sampling strategies. However, this study introduces the first and second moments of a three-dimensional contaminant plume as new constraints in the optimization formulation, and demonstrates the proposed methodology through a real-world application. The new moment constraints significantly increase the accuracy of the plume interpolated from the sampled data relative to the plume simulated by the transport model. The plume interpolation approaches employed in this study are ordinary kriging (OK) and inverse distance weighting (IDW). The proposed methodology is applied to the monitoring of plume evolution during a pump-and-treat operation at a large field site. It is shown that potential cost savings of up to 65.6% may be achieved without any significant loss of accuracy in mass and moment estimations. The IDW-based interpolation method is computationally more efficient than the OK-based method and results in more potential cost savings. However, the OK-based method leads to more accurate mass and moment estimations. A comparison of the sampling designs obtained with and without the moment constraints points to their importance in ensuring a robust long-term monitoring design that is both cost-effective and accurate in mass and moment estimations. Additional analysis demonstrates the sensitivity of the optimal sampling design to the various coefficients included in the objective function of the optimization model.
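The moment constraints referred to above are built from the zeroth, first, and second spatial moments of the interpolated plume. A minimal numpy sketch of those quantities on a regular grid is given below; the grid, cell size, and placeholder concentration field are invented for illustration.

```python
import numpy as np

# Illustrative computation of plume mass and spatial moments on a regular
# 3-D grid, the quantities the moment constraints above are built from.
# c: interpolated concentration on the grid; x, y, z: cell-center coordinates.
nx, ny, nz = 20, 20, 5
x, y, z = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz), indexing="ij")
rng = np.random.default_rng(1)
c = rng.exponential(scale=1.0, size=(nx, ny, nz))  # placeholder plume

cell_volume = 1.0  # assumed unit grid spacing
mass = c.sum() * cell_volume                        # zeroth moment (total mass)

# First moments: centre of mass of the plume.
xc = (c * x).sum() / c.sum()
yc = (c * y).sum() / c.sum()
zc = (c * z).sum() / c.sum()

# Second central moments: spread of the plume about its centre of mass.
sxx = (c * (x - xc) ** 2).sum() / c.sum()
syy = (c * (y - yc) ** 2).sum() / c.sum()
szz = (c * (z - zc) ** 2).sum() / c.sum()

print(f"mass={mass:.1f}, centre=({xc:.2f}, {yc:.2f}, {zc:.2f})")
print(f"second moments: sxx={sxx:.2f}, syy={syy:.2f}, szz={szz:.2f}")
```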

3.
Ozone is a harmful air pollutant at ground level, and its concentrations are measured with routine monitoring networks. Due to the heterogeneous nature of ozone fields, the spatial distribution of the ozone concentration measurements is very important. Therefore, the evaluation of distributed monitoring networks is of both theoretical and practical interest. In this study, we assess the efficiency of the ozone monitoring network over France (BDQA) by investigating a network reduction problem. We examine how well a subset of the BDQA network can represent the full network. The performance of a subnetwork is taken to be the root mean square error (rmse) of the hourly ozone mean concentration estimations over the whole network given the observations from that subnetwork. Spatial interpolations are conducted for the ozone estimation taking into account the spatial correlations. Several interpolation methods, namely ordinary kriging, simple kriging, kriging about the means, and consistent kriging about the means, are compared for a reliable estimation. Exponential models are employed for the spatial correlations. It is found that the statistical information about the means significantly improves the kriging results, and that it is necessary to consider the correlation model to be hourly-varying and daily stationary. The network reduction problem is solved using a simulated annealing algorithm. Significant improvements can be obtained through these optimizations. For instance, removing optimally half the stations leads to an estimation error of the order of the standard observational error (10 μg m⁻³). The resulting optimal subnetworks are dense in urban agglomerations around Paris (Île-de-France) and Nice (Côte d’Azur), where high ozone concentrations and strong heterogeneity are observed. The optimal subnetworks are probably dense near frontiers because beyond these frontiers there are no observations to reduce the uncertainty of the ozone field. For large rural regions, the stations are uniformly distributed. The fractions between urban, suburban and rural stations are rather constant for optimal subnetworks of larger size (beyond 100 stations). By contrast, for smaller subnetworks, the urban stations dominate.
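Ordinary kriging with an exponential covariance model, one of the interpolation schemes compared above, can be sketched compactly in Python. The station coordinates, ozone values, sill, and correlation length below are invented placeholders, not BDQA data.

```python
import numpy as np

def ordinary_kriging(xy_obs, z_obs, xy_target, sill=1.0, corr_len=50.0):
    """Ordinary kriging with an exponential covariance model.
    sill and corr_len (correlation length) are illustrative parameters."""
    def cov(d):
        return sill * np.exp(-d / corr_len)

    n = len(z_obs)
    d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Kriging system: covariances plus the unbiasedness (Lagrange) constraint.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = cov(d_obs)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = cov(np.linalg.norm(xy_obs - xy_target, axis=-1))
    weights = np.linalg.solve(A, b)[:n]
    return weights @ z_obs

# Toy example: estimate ozone at an unmonitored point from four stations.
stations = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 40.0], [40.0, 40.0]])
ozone = np.array([80.0, 95.0, 70.0, 90.0])   # hypothetical values, µg/m3
print(ordinary_kriging(stations, ozone, np.array([20.0, 20.0])))
```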

4.
Abstract

A growing interest in security and occupant exposure to contaminants revealed a need for fast and reliable identification of contaminant sources during incidental situations. To determine potential contaminant source positions in outdoor environments, current state-of-the-art modeling methods use computational fluid dynamics simulations on parallel processors. In indoor environments, current tools match accidental contaminant distributions with cases from precomputed databases of possible concentration distributions. These methods require intensive computations in pre- and postprocessing. On the other hand, neural networks emerged as a tool for rapid concentration forecasting of outdoor environmental contaminants such as nitrogen oxides or sulfur dioxide. All of these modeling methods depend on the type of sensors used for real-time measurements of contaminant concentrations. A review of the existing sensor technologies revealed that no perfect sensor exists, but the intensity of work in this area promises better results in the near future. The main goal of the presented research study was to extend neural network modeling from the outdoor to the indoor identification of source positions, making this technology applicable to building indoor environments. The developed neural network Locator of Contaminant Sources was also used to optimize the number and allocation of contaminant concentration sensors for real-time prediction of indoor contaminant source positions. Such prediction should take place within seconds after receiving real-time contaminant concentration sensor data. For the purpose of neural network training, a multizone program provided distributions of contaminant concentrations for known source positions throughout a test building. Trained networks had an output indicating contaminant source positions based on measured concentrations in different building zones. A validation case based on a real building layout and experimental data demonstrated the ability of this method to identify contaminant source positions. Future research intentions are focused on integration with real sensor networks and model improvements for much more complicated contamination scenarios.
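The following scikit-learn sketch illustrates the general idea of a neural-network source locator, mapping zone concentrations to the zone containing the source; it is not the authors' Locator of Contaminant Sources, and the synthetic training data merely stand in for multizone-model output.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Sketch of the idea behind a neural-network source locator: train a small
# feedforward network to map zone concentrations to the zone containing the
# source. Synthetic data stand in for the multizone simulation output.
rng = np.random.default_rng(2)
n_zones, n_cases = 6, 600
source_zone = rng.integers(0, n_zones, size=n_cases)

# Fake concentration pattern: high in the source zone, lower elsewhere.
conc = rng.normal(0.1, 0.02, size=(n_cases, n_zones))
conc[np.arange(n_cases), source_zone] += 1.0

model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(conc, source_zone)

# "Measured" concentrations from sensors -> predicted source zone.
measured = rng.normal(0.1, 0.02, size=(1, n_zones))
measured[0, 3] += 1.0
print("predicted source zone:", model.predict(measured)[0])
```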

5.
The application of artificial intelligence techniques for performance optimization of the fuel lean gas reburn (FLGR) system is investigated. A multilayer, feedforward artificial neural network is applied to model static nonlinear relationships between the distribution of injected natural gas into the upper region of the furnace of a coal-fired boiler and the corresponding oxides of nitrogen (NOx) emissions exiting the furnace. Based on this model, optimal distributions of injected gas are determined such that the largest NOx reduction is achieved for each value of total injected gas. This optimization is accomplished through the development of a new optimization method based on neural networks. This new optimal control algorithm, which can be used as an alternative generic tool for solving multidimensional nonlinear constrained optimization problems, is described and its results are successfully validated against an off-the-shelf tool for solving mathematical programming problems. Encouraging results obtained using plant data from one of Commonwealth Edison's coal-fired electric power plants demonstrate the feasibility of the overall approach. Preliminary results show that the use of this intelligent controller will also enable the determination of the most cost-effective operating conditions of the FLGR system by considering, along with the optimal distribution of the injected gas, the cost differential between natural gas and coal and the open-market price of NOx emission credits. Further study, however, is necessary, including the construction of a more comprehensive database, needed to develop high-fidelity process models and to add carbon monoxide (CO) emissions to the model of the gas reburn system.
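A generic way to reproduce the two steps described above, a feedforward surrogate of the gas-distribution-to-NOx relationship followed by a constrained search over injection splits, is sketched below with scikit-learn and SciPy; the toy response surface and the fixed total gas value are assumptions, and this is not the paper's own optimization algorithm.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.neural_network import MLPRegressor

# Sketch: a feedforward network as a surrogate for the gas-distribution-to-NOx
# relationship, then a constrained search for the injection split that
# minimizes predicted NOx at a fixed total gas flow. This is a generic
# stand-in, not the paper's optimization algorithm.
rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(500, 3))          # gas split to 3 injectors
nox = (100.0 - 40.0 * X[:, 0] - 25.0 * X[:, 1] - 10.0 * X[:, 2]
       + 30.0 * X[:, 0] * X[:, 1] + rng.normal(0.0, 1.0, 500))  # toy response

surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0).fit(X, nox)

total_gas = 1.2  # fixed total injected gas (arbitrary units)
result = minimize(
    lambda x: surrogate.predict(x.reshape(1, -1))[0],
    x0=np.full(3, total_gas / 3),
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - total_gas}],
    bounds=[(0.0, 1.0)] * 3,
    method="SLSQP",
)
print("optimal gas split:", np.round(result.x, 3))
```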

6.
This paper presents the ongoing research at the Swiss Federal Institute of Technology (ETH), Zurich, related to the design and implementation of an IDSS (integrated decision support system) dedicated to integrated risk assessment and safety management, with application to the transportation of hazardous substances. One of the goals is to construct advanced models to estimate (a) the optimal routing for the transportation of hazardous substances and (b) optimal plans for emergency preparedness and management in case of potential (severe) accidents. As an output of this activity, the aim is to design tools that represent risk to the public on a GIS calculation platform generically applicable to Switzerland. Though a database of some 700 chemicals is available, epichlorohydrin is the hazardous substance targeted in the first phase.

7.
8.

9.
Newark Bay, New Jersey, is particularly vulnerable to ecological damage from petroleum and chemical spills, as a result of the enclosed nature and shallow depth of the bay, the high frequency of shipping traffic, and the numerous chemical and petroleum transfer terminals located along its shores. To evaluate the potential impacts to the natural resources of this coastal estuarine ecosystem, chemical and petroleum accidents reported to the US Coast Guard (USCG) between 1982 and 1991 were compiled to determine the frequency and volume of these incidents in Newark Bay and in each of its major tributaries. Records obtained from the USCG National Response Center's computerized database indicated that more than 1453 accidental incidents, resulting in the release of more than 18 million US gallons of hazardous materials and petroleum products, occurred throughout Newark Bay during this period. The bulk of the materials released to the aquatic environment consisted of petroleum products, specifically No. 6 Fuel Oil (103 spills, 12 829 272 US gal) and gasoline (207 spills, 48 816 US gal). The majority of the reported incidents occurred in the Arthur Kill and its tributaries, as well as in the Kill Van Kull and the Passaic River. The results of this study indicated that the accidental discharge of petroleum and hazardous chemicals represents a significant source of chemical pollution in Newark Bay. Based on the frequency of spills and the volume of materials released to the aquatic environment, it is likely that these events are having a deleterious effect on the Newark Bay ecosystem.

10.
Deposition is one of the main loss terms for ammonia and ammonium from the atmosphere. It is also the input to ecosystems that can lead to drastic changes and effects. Deposition networks are needed to evaluate the need for and the effect of policies to reduce nitrogen emissions, but also for studying deposition parameters and for developing deposition models. As with ambient concentrations of ammonia, deposition, especially dry deposition, varies strongly in space and in time. Furthermore, the bi-directional surface-atmosphere exchange of ammonia makes the combination of ambient concentration measurements with inferential models inadequate. Developing deposition monitoring networks with reasonable accuracy and representativeness is therefore not straightforward. In Europe several projects have addressed deposition monitoring. From these results it is concluded that a monitoring strategy should consist of a network with a limited number of super sites combined with a larger number of sites where low-cost methods are applied, together with models for generalisation.

11.
A methodology for determining regional ozone design values and the expected number of exceedances is described. The methodology was applied to databases covering one year or less from four U.S. urban areas: Houston, Los Angeles, Philadelphia, and St. Louis. The effects of reducing the number of stations in a network were tested, and it was concluded that networks of nine or ten appropriately selected stations are adequate for estimating design values. Using the methodology described, the expected number of exceedances tends to be underestimated when using smaller networks; however, this appears to be an artifact of the conservative approach taken in developing the methodology.

12.
Privatization of municipal solid waste (MSW) collection can improve service quality and reduce cost. To reduce the risk of an incapable company serving an entire collection area and to establish a competitive market, a large collection area should be divided into two or more subregions, with each subregion served by a different company. MSW subregion districting is generally done manually, based on the planner's intuition. Major drawbacks of a manual approach include districting plans with poor road network integrity, for which it is difficult to design efficient collection routes, difficulty in finding the optimal districting plan, and the lack of a consistent way to measure the differences among subregions to avoid unfair competition. To determine an MSW collection subregion districting plan, this study presents a mixed-integer optimization model that incorporates factors such as compactness, road network integrity, collection cost, and regional proximity. Two cases are presented to demonstrate the applicability of the proposed model. In both cases, districting plans with good road network integrity and regional proximity have been generated successfully.

13.
The establishment of an efficient surface water quality monitoring (WQM) network is a critical component in the assessment, restoration and protection of river water quality. A periodic evaluation of the monitoring network is mandatory to ensure effective data collection and possible redesign of the existing network in a river catchment. In this study, the efficacy and appropriateness of the existing water quality monitoring network in the Kabbini River basin of Kerala, India is presented. Significant multivariate statistical techniques, namely principal component analysis (PCA) and principal factor analysis (PFA), have been employed to evaluate the efficiency of the surface water quality monitoring network, with monitoring stations as the evaluated variables, for the interpretation of the complex data matrix of the river basin. The main objective is to identify significant monitoring stations that must essentially be included in assessing annual and seasonal variations of river water quality. Moreover, the significance of seasonal redesign of the monitoring network was also investigated to capture valuable information on water quality from the network. Results identified a few monitoring stations as insignificant in explaining the annual variance of the dataset. Moreover, the seasonal redesign of the monitoring network through a multivariate statistical framework was found to capture valuable information from the system, thus making the network more efficient. Cluster analysis (CA) classified the sampling sites into different groups based on similarity in water quality characteristics. The PCA/PFA identified significant latent factors standing for different pollution sources such as organic pollution, industrial pollution, diffuse pollution and faecal contamination. Thus, the present study illustrates that various multivariate statistical techniques can be effectively employed in the sustainable management of water resources.
Highlights:
• The effectiveness of the existing river water quality monitoring network is assessed.
• The significance of seasonal redesign of the monitoring network is demonstrated.
• Rationalization of water quality parameters is performed in a statistical framework.
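A minimal sketch of the PCA step, with monitoring stations treated as the variables, is shown below; stations whose loadings on the retained components are uniformly small contribute little to the explained variance and would be flagged for review. The synthetic data and the 0.3 loading cutoff are assumptions, not results from the Kabbini study.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of using PCA with monitoring stations as variables: stations whose
# loadings on the retained components are uniformly small explain little of
# the basin-wide variance and are candidates for removal. Data are synthetic.
rng = np.random.default_rng(4)
n_samples, n_stations = 48, 8               # e.g. monthly samples, 8 stations
common = rng.normal(size=(n_samples, 1))    # shared pollution signal
data = (common @ rng.uniform(0.5, 1.5, (1, n_stations))
        + rng.normal(scale=0.3, size=(n_samples, n_stations)))

pca = PCA(n_components=0.9)                 # keep components explaining 90%
pca.fit(data)

loadings = np.abs(pca.components_)          # shape (n_components, n_stations)
max_loading = loadings.max(axis=0)
LOW = 0.3                                   # assumed cutoff, not from the paper
for station, load in enumerate(max_loading):
    flag = "low contribution" if load < LOW else "significant"
    print(f"station {station}: max |loading| = {load:.2f} ({flag})")
```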

14.
A wind tunnel study was performed to examine some turbulent characteristics and statistical properties of the concentration field developing from the steady release of a tracer gas at street level in a canyon amidst urban roughness. The experiment was conducted with the approaching wind direction perpendicular to the street axis and with a street width to building height aspect ratio equal to one. Concentration time series were recorded at 70 points within the test street cross-section and above. Mean concentrations, variances and related turbulent quantities, as well as other statistical quantities including quantiles, were computed. Concentration spectra and autocorrelation functions were also examined. The emphasis here is on the results concerning mean concentrations and the variance of concentration fluctuations. The main objective of this paper is to put forward the potential benefits of the experimental approach taken in this study. Through a simple and already widely studied configuration, it is aimed to show how, for modelling purposes, this approach can help improve our understanding of the mechanisms of dispersion of pollution from car exhausts in built-up areas and, with further measurements, how it could assist in drawing up specifications for siting monitoring networks.

15.
Lu WZ, Wang WJ. Chemosphere, 2005, 59(5): 693-701.
Monitoring and forecasting of air quality parameters are popular and important topics of atmospheric and environmental research today, owing to the health impacts of exposure to air pollutants present in urban air. Accurate models for air pollutant prediction are needed because such models allow forecasting and diagnosing potential compliance or non-compliance in both the short and the long term. Artificial neural networks (ANNs) are regarded as a reliable and cost-effective method for such tasks and have produced some promising results to date. Although ANNs have attracted increasing attention from environmental researchers, their inherent drawbacks, e.g., local minima, over-fitting during training, poor generalization performance, and the difficulty of determining an appropriate network architecture, impede their practical application. The support vector machine (SVM), a novel type of learning machine based on statistical learning theory, can be used for regression and time series prediction and has been reported to perform well, with some promising results. The work presented in this paper examines the feasibility of applying SVM to predict air pollutant levels in advancing time series, based on the monitored air pollutant database of the Hong Kong downtown area. At the same time, the functional characteristics of SVM are investigated. The experimental comparisons between the SVM model and the classical radial basis function (RBF) network demonstrate that the SVM is superior to the conventional RBF network in predicting air quality parameters over different time series and shows better generalization performance than the RBF model.
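A minimal example of SVM regression for one-step-ahead prediction from lagged values, in the spirit of the comparison above, can be written with scikit-learn's SVR; the synthetic series, lag depth, and hyperparameters are placeholders rather than the Hong Kong data or the study's settings.

```python
import numpy as np
from sklearn.svm import SVR

# Sketch of SVM regression for one-step-ahead pollutant prediction from
# lagged values. The series and hyperparameters are placeholders, not the
# Hong Kong data or the settings used in the study.
rng = np.random.default_rng(5)
t = np.arange(600)
series = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

LAGS = 6
X = np.column_stack([series[i:i - LAGS] for i in range(LAGS)])
y = series[LAGS:]

split = 500
model = SVR(kernel="rbf", C=10.0, epsilon=0.5, gamma="scale")
model.fit(X[:split], y[:split])

pred = model.predict(X[split:])
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"hold-out RMSE: {rmse:.2f}")
```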

16.
In the event of an accidental atmospheric release of radionuclides from a nuclear power plant, accurate real-time forecasting of the activity concentrations of radionuclides is acutely required by decision makers for the preparation of adequate countermeasures. Yet the accuracy of the forecasted plume is highly dependent on the source term estimation. Inverse modelling and data assimilation techniques should help in that respect. However, the plume can locally be thin and could elude a significant part of the radiological monitoring network surrounding the plant. Deploying mobile measuring stations following the accident could help to improve the source term estimation. In this paper, a method is proposed for the sequential reconstruction of the plume, by coupling a sequential data assimilation algorithm based on inverse modelling with an observation targeting strategy. The targeting strategy consists of seeking the optimal locations of the mobile monitors at time t + 1 based on all available observations up to time t. The performance of the sequential assimilation with and without targeting of observations has been assessed in a realistic framework. The study focuses on the Bugey nuclear power plant (France) and its surroundings within 50 km of the plant. The existing surveillance network is used and realistic observational errors are assumed. The targeting scheme leads to a better estimation of the source term as well as of the activity concentrations in the domain. The mobile stations tend to be deployed along plume contours, where activity concentration gradients are important. It is shown that the information carried by the targeted observations is very significant compared with the information content of the fixed observations. A simple test on the impact of model error from meteorology shows that the targeting strategy remains very useful in a more uncertain context.

17.
Ambient air observations of hazardous air pollutants (HAPs), also known as air toxics, derived from routine monitoring networks operated by states, local agencies, and tribes (SLTs), are analyzed to characterize national concentrations and risk across the nation for a representative subset of the 187 designated HAPs. Observations from the National Air Toxics Trend Sites (NATTS) network of 27 stations located in most major urban areas of the contiguous United States have provided a consistent record of the HAPs identified as posing the greatest risk since 2003 and have also captured concentration patterns similar to those of the nearly 300 sites operated by SLTs. Relatively high concentration volatile organic compounds (VOCs) such as benzene, formaldehyde, and toluene exhibit the highest annual average concentration levels, typically ranging from 1 to 5 µg/m3. Halogenated compounds (except for methylene chloride), semivolatile organic compounds (SVOCs), and metals exhibit concentrations typically 2–3 orders of magnitude lower. Formaldehyde is the highest national risk driver based on estimated cancer risk and, nationally, has not exhibited significant changes in concentration, likely associated with the large pool of natural isoprene and formaldehyde emissions. Benzene, toluene, ethylbenzene, and 1,3-butadiene are ubiquitous VOC HAPs with large mobile source contributions that continue to exhibit declining concentrations over the last decade. Common chlorinated organic compounds such as ethylene dichloride and methylene chloride exhibit increasing concentrations. The variety of physical and chemical attributes and measurement technologies across 187 HAPs results in a broad range of method detection limits (MDLs) and cancer risk thresholds that challenge confidence in risk results for low concentration HAPs with MDLs near or greater than risk thresholds. From a national monitoring network perspective, the ability of the HAPs observational database to characterize the multiple pollutant and spatial scale patterns influencing exposure is severely limited and positioned to benefit from leveraging a variety of emerging measurement technologies.

Implications: Ambient air toxics observation networks have limited ability to characterize the broad suite of hazardous air pollutants (HAPs) that affect exposures across multiple spatial scales. While our networks are best suited to capture major urban-scale signals of ubiquitous volatile organic compound HAPs, incorporation of sensing technologies that address regional- and local-scale exposures should be pursued to address major gaps in spatial resolution. Caution should be exercised in interpreting HAPs observations based on data proximity to minimum detection limits and risk thresholds.

18.
This article describes two statistical methods that enable air pollution control agencies to assess the effectiveness of the spatial distribution of current stationary ozone monitoring networks by providing measures of site redundancy. These methods analyze site redundancy by determining the degree to which ozone measurements at one site can be successfully predicted from data collected at other monitoring sites. The first method, the similarity (SIM) measure, calculates redundancy based on the percentage of common operational days during which two monitoring stations report similar daily maximum ozone concentrations. The second method, a modeling technique, relates site redundancy in ozone measurement to an R-squared statistic from an autoregressive model. The model uses meteorological components recorded at a central location and ozone concentrations reported by the network’s other monitoring stations. Both techniques can assist in effective allocation of limited monitoring resources and offer a statistical approach to ambient air monitoring network design. The techniques are applied to data collected at six ozone monitoring stations in Harris County, Texas, during an eight-year period in the 1980s. The methods identified two sites in the six-site network that exhibit a high degree of redundancy.
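The SIM measure described above can be sketched in a few lines: the fraction of common operating days on which two sites report daily maxima within a tolerance of each other. The tolerance and the synthetic series below are assumptions, not the article's threshold or the Harris County data.

```python
import numpy as np

def similarity(max_a, max_b, tol=10.0):
    """Fraction of common operating days on which two sites report daily
    maximum ozone within `tol` of each other (the tolerance is an assumed
    value, not the article's threshold). NaN marks missing days."""
    both = ~np.isnan(max_a) & ~np.isnan(max_b)
    return np.mean(np.abs(max_a[both] - max_b[both]) <= tol)

# Synthetic daily maxima (ppb) for two nearby sites over one season.
rng = np.random.default_rng(6)
base = rng.gamma(shape=6.0, scale=10.0, size=120)
site_a = base + rng.normal(0, 4, 120)
site_b = base + rng.normal(0, 4, 120)
site_b[rng.choice(120, 10, replace=False)] = np.nan   # some missing days

print(f"SIM measure: {similarity(site_a, site_b):.2f}")
```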

19.
The information presented in this paper is directed to those with the responsibility of designing and operating air quality monitoring networks. An analytical model for the location of monitor sites, based upon maximizing a sum of coverage factors for each source, is developed. A heuristic solution method from the facilities location analysis literature is used to solve the model. Results of an example problem are presented and compared with the monitoring network currently in place. The model is shown to be a valuable addition to the methods available to the air quality monitoring network designer. Needs for further research are pointed out.
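A greedy sketch in the spirit of the coverage-maximizing formulation above is given below; the coverage factors and the budget of monitors are invented, and the article's own heuristic from the facilities location literature may differ.

```python
import numpy as np

# Greedy sketch of coverage-maximizing monitor siting, in the spirit of the
# facilities-location formulation above (not the article's exact heuristic).
# coverage[i, j] = coverage factor that candidate site j provides for source i.
rng = np.random.default_rng(7)
n_sources, n_candidates = 10, 15
coverage = rng.uniform(0.0, 1.0, size=(n_sources, n_candidates))

N_MONITORS = 4                      # assumed budget of monitors
chosen = []
for _ in range(N_MONITORS):
    best_site, best_gain = None, -1.0
    for j in range(n_candidates):
        if j in chosen:
            continue
        # Objective: each source is credited with the best coverage it gets
        # from any selected site; pick the site giving the largest objective.
        covered = coverage[:, chosen + [j]].max(axis=1).sum()
        if covered > best_gain:
            best_site, best_gain = j, covered
    chosen.append(best_site)

print("selected monitor sites:", chosen, f"(final objective = {best_gain:.2f})")
```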

20.
In environmental monitoring, it is important that the monitoring system emit early warnings when undesired events occur. These events may be sudden or of a more subtle nature. In the design of such monitoring systems, a proper balance between cost and risk must be achieved. There are two classic types of risk connected with early warning systems, namely the risk of not detecting significant changes and the risk of false alarms. The purpose of this paper is to describe a method for comparing the performance of different monitoring systems, considering the classic types of risk and cost. The method is applied to the monitoring of lichen cover as a test case. The expected utility has been used as a measure of performance. When estimating the probabilities of the events, spatial microsimulation and Monte Carlo simulation techniques have been used. The monitoring programs studied are based on satellite images, aerial photos, field samples, and land-cover maps. The major conclusions of this study are that standardized quality measures are extremely useful for evaluating the usability of environmental monitoring methods. In addition, when estimating gains and costs, spatial microsimulation techniques are useful. To improve the method, however, macroconstraints should also be used for aligning the simulation model.
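A Monte Carlo sketch of comparing monitoring designs by expected utility, balancing missed detections, false alarms, and cost, is shown below; all probabilities, costs, and losses are assumed values for illustration, not figures from the lichen-cover study.

```python
import numpy as np

# Sketch of comparing monitoring designs by expected utility under Monte Carlo
# sampling: the event probabilities, detection rates, costs and losses are all
# assumed values for illustration, not figures from the lichen-cover study.
rng = np.random.default_rng(8)
N = 100_000
P_CHANGE = 0.1                       # prior probability of an undesired change

designs = {
    #            detection prob., false-alarm prob., annual cost
    "satellite": (0.70,            0.05,              20.0),
    "aerial":    (0.85,            0.10,              60.0),
    "field":     (0.95,            0.02,             150.0),
}
LOSS_MISS = 1000.0                   # loss when a real change goes undetected
LOSS_FALSE_ALARM = 50.0              # cost of reacting to a false alarm

for name, (p_det, p_fa, cost) in designs.items():
    change = rng.random(N) < P_CHANGE
    alarm = np.where(change, rng.random(N) < p_det, rng.random(N) < p_fa)
    loss = (np.where(change & ~alarm, LOSS_MISS, 0.0)
            + np.where(~change & alarm, LOSS_FALSE_ALARM, 0.0))
    expected_utility = -(cost + loss.mean())
    print(f"{name:10s} expected utility = {expected_utility:8.1f}")
```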
