Related Articles
A total of 20 related records were retrieved.
1.
A methodology is developed to include wind flow effects in land use regression (LUR) models for predicting nitrogen dioxide (NO2) concentrations for health exposure studies. NO2 is widely used in health studies as an indicator of traffic-generated air pollution in urban areas. Incorporating high-resolution interpolated observed wind direction from a network of 38 weather stations into a LUR model improved NO2 concentration estimates in densely populated, high-traffic and industrial/business areas of the Toronto-Hamilton urban airshed (THUA) of Ontario, Canada. These small-area variations in air pollution concentrations, which are probably more important for health exposure studies, may not be detected by a sparse continuous air pollution monitoring network or by conventional interpolation methods. Observed wind fields were also compared with wind fields generated by the Global Environmental Multiscale-High resolution Model Application Project (GEM-HiMAP) to explore the feasibility of using wind fields simulated by a regional weather forecasting model in LUR models when observed data are sparse or unavailable. While GEM-HiMAP predicted wind fields well at large scales, it was unable to resolve wind flow patterns at smaller scales. These results suggest caution and careful evaluation of simulated wind fields from regional weather forecasting models before incorporating them into human exposure models for health studies. This study has demonstrated that wind fields may be integrated into the land use regression framework. Such integration has a discernible influence both on the overall model prediction and, perhaps more importantly for health effects assessment, on the relative spatial distribution of traffic pollution throughout the THUA. The methodology developed in this study may be applied in other large urban areas across the world.
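A minimal sketch of the land-use regression idea described above: NO2 at monitoring sites is regressed on land-use predictors, here including a hypothetical wind-weighted traffic variable. All variable names, coefficients and data are illustrative assumptions, not the study's actual model.

```python
import numpy as np

# Illustrative LUR design matrix: each row is a monitoring site.
# Columns (all hypothetical): intercept, traffic density, industrial
# land fraction, population density, and a wind-weighted traffic term
# (traffic upwind of the site weighted by observed wind frequency).
rng = np.random.default_rng(0)
n_sites = 38
traffic = rng.uniform(0, 1, n_sites)
industrial = rng.uniform(0, 1, n_sites)
population = rng.uniform(0, 1, n_sites)
upwind_traffic = traffic * rng.uniform(0.5, 1.0, n_sites)  # toy wind weighting

X = np.column_stack([np.ones(n_sites), traffic, industrial,
                     population, upwind_traffic])
no2 = (10 + 15 * traffic + 5 * industrial + 4 * population
       + 8 * upwind_traffic + rng.normal(0, 2, n_sites))  # synthetic NO2 (ppb)

# Ordinary least squares fit of the LUR model.
beta, *_ = np.linalg.lstsq(X, no2, rcond=None)
predicted = X @ beta
r2 = 1 - np.sum((no2 - predicted) ** 2) / np.sum((no2 - no2.mean()) ** 2)
print("coefficients:", np.round(beta, 2), " R^2:", round(r2, 3))
```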

2.
《Environmental Forensics》2013,14(4):229-238
Hydrologic and water quality (H/WQ) models are being used with increasing frequency to devise alternative pollution control strategies. It has been recognized that such models may have a large degree of uncertainty associated with their predictions, and that this uncertainty can significantly impact the utility of the model. In this study, the ARRAMIS (Advanced Risk & Reliability Assessment Model) software package was used to analyze the uncertainty of SWAT2000 (Soil and Water Assessment Tool) outputs concerning nutrient and sediment losses from agricultural lands. ARRAMIS applies a Monte Carlo simulation technique coupled with a Latin hypercube sampling (LHS) scheme. The technique was applied to the Warner Creek watershed, located in the Piedmont physiographic region of Maryland, and it provides an interval estimate (a range of values with an associated probability) rather than a point estimate of a particular pollutant constituent. Uncertainty of model outputs was investigated using an LHS scheme with restricted pairing for the model input sampling. Probability distribution functions (pdfs) were constructed from the results of the 50 model simulations. Model output distributions of interest in this analysis were stream flow, sediment, organic nitrogen (organic-N), organic phosphorus (organic-P), nitrate, ammonium, and mineral phosphorus (mineral-P) transported with water. The developed pdfs allow model outputs to be reported at any desired probability level. Results indicate that consideration of input parameter uncertainty produces a 64% smaller mean stream flow along with an approximately 8.2% larger sediment loading than obtained using mean input parameters. In contrast, the mean outputs for nutrients such as nitrate, ammonium, organic-N, and organic-P (but not mineral-P) were almost the same as those obtained using mean input parameters. The uncertainty in predicted stream flow and sediment loading is large, but that for nutrient loadings is comparable to that of the corresponding input parameters. This study concluded that using the best possible distributions for the input parameters, reflecting the impact of soil and land use diversity in a small watershed on SWAT2000 model outputs, may be more accurate than using average values for each input parameter.
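A minimal sketch of Monte Carlo simulation with Latin hypercube sampling, in the spirit of the procedure described above. The toy watershed model and parameter ranges are assumptions for illustration; ARRAMIS and SWAT2000 themselves are not reproduced here.

```python
import numpy as np
from scipy.stats import qmc

# Latin hypercube sample of 50 input parameter sets in 3 dimensions,
# mirroring the 50 model simulations mentioned above.
sampler = qmc.LatinHypercube(d=3, seed=1)
unit_sample = sampler.random(n=50)

# Hypothetical parameter ranges: curve number, soil erodibility, slope.
l_bounds = [60.0, 0.10, 0.01]
u_bounds = [90.0, 0.40, 0.10]
params = qmc.scale(unit_sample, l_bounds, u_bounds)

def toy_watershed_model(cn, k, slope):
    """Stand-in for a SWAT-like response: returns (flow, sediment)."""
    flow = 500.0 * (cn / 75.0) ** 3       # mm/yr, toy nonlinearity
    sediment = 2.0 * k * slope * flow     # t/ha/yr, toy relation
    return flow, sediment

outputs = np.array([toy_watershed_model(*p) for p in params])

# Interval estimates (2.5th-97.5th percentiles) rather than point estimates.
lo, hi = np.percentile(outputs, [2.5, 97.5], axis=0)
print("flow interval:    ", lo[0].round(1), "-", hi[0].round(1), "mm/yr")
print("sediment interval:", lo[1].round(2), "-", hi[1].round(2), "t/ha/yr")
```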

3.
Heinzl H  Mittlböck M  Edler L 《Chemosphere》2007,67(9):S365-S374
When estimating human health risks from exposure to TCDD using toxicokinetic and toxicodynamic models, it is important to understand how model choice and the assumptions necessary for modeling add to the uncertainty of risk estimates. Several toxicokinetic models have been proposed for the risk assessment of dioxins; in particular, the elimination kinetics in humans has been a matter of constant debate. For a long time, a simple linear elimination kinetics was the common choice, and it was used for the statistical analysis of the largest occupationally exposed cohort, the German Boehringer cohort. We challenge this assumption by considering, amongst others, a nonlinear modified Michaelis-Menten-type elimination kinetics, the so-called Carrier kinetics. Using the area under the lipid TCDD concentration-time curve as the dose metric, we model the time to cancer-related death using the Cox proportional hazards model as the toxicodynamic model. This risk assessment set-up was simulated in order to quantify the uncertainty of both the dose (TCDD body burden) and the risk estimates, depending on the kinetic model used, variations in the carcinogenic effect of TCDD, and variations in the latency period (lag time). If past exposure is estimated assuming a linear elimination kinetics although a Carrier kinetics actually holds, then truly high exposures will be underestimated in the statistical analysis and truly low exposures will be overestimated. This bias carries over to the estimated individual concentration-time curves and to the TCDD dose metric values derived from them. Using biased dose values when estimating a dose-response relationship ultimately leads to biased risk estimates. The extent of bias and the decrease in precision are quantified in selected scenarios through this simulation approach. Our findings are in concordance with recent results in the field of dioxin risk assessment. They also reinforce the general demand for scheduled uncertainty assessments in risk analyses.
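A worked numerical sketch of the two elimination assumptions discussed above: first-order (linear) kinetics versus a Michaelis-Menten-type (Carrier-like) kinetics, with the area under the concentration-time curve (AUC) as the dose metric. The rate constants, dosing history and units are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

def simulate_elimination(dose_rate, years, kind="linear",
                         k=0.07, vmax=25.0, km=100.0, dt=0.01):
    """Euler integration of lipid TCDD concentration under a constant
    intake (first 10 years) followed by elimination only.
    All units are arbitrary/illustrative."""
    n = int(years / dt)
    c = np.zeros(n)
    for i in range(1, n):
        if kind == "linear":
            elim = k * c[i - 1]                      # first-order elimination
        else:                                        # Michaelis-Menten type
            elim = vmax * c[i - 1] / (km + c[i - 1])
        intake = dose_rate if i * dt < 10 else 0.0
        c[i] = c[i - 1] + (intake - elim) * dt
    return c

for kind in ("linear", "michaelis-menten"):
    conc = simulate_elimination(dose_rate=30.0, years=40.0, kind=kind)
    auc = conc.sum() * 0.01  # AUC dose metric (dt = 0.01 yr)
    print(f"{kind:17s} peak={conc.max():7.1f}  AUC={auc:9.1f}")
```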

4.
5.
The small-scale spatial variability of air pollution observed in urban areas has created concern about the representativeness of measurements used in exposure studies. It is suspected that limit values for traffic-related pollutants may be exceeded near busy streets even when they are respected at urban background sites. In order to assess spatial concentration gradients and identify weather conditions that might induce air pollution episodes in urban areas, different sampling and modelling techniques were studied. Two intensive monitoring campaigns were carried out in typical street canyons in Paris during winter and summer. Steep cross-road and vertical concentration gradients were observed within the canyons, in addition to large differences between roadside and background levels. Low winds and winds parallel to the street axis were identified as the worst dispersion conditions. The correlation between the measured compounds gave an insight into their sources and fate. An empirical relationship between CO and benzene was established. Two relatively simple mathematical models and an algorithm describing vertical pollutant dispersion were used. The combination of monitoring and modelling techniques proposed in this study can be seen as a reliable and cost-effective method for assessing air quality in urban micro-environments. These findings may have important implications for designing monitoring studies to support investigation of the health effects of traffic-related air pollution.

6.
Abstract

Many large metropolitan areas experience elevated concentrations of ground-level ozone pollution during the summertime “smog season”. Local environmental or health agencies often need to make daily air pollution forecasts for public advisories and for input into decisions regarding abatement measures and air quality management. Such forecasts are usually based on statistical relationships between weather conditions and ambient air pollution concentrations. Multivariate linear regression models have been widely used for this purpose, and well-specified regressions can provide reasonable results. However, pollution-weather relationships are typically complex and nonlinear—especially for ozone—properties that might be better captured by neural networks. This study investigates the potential for using neural networks to forecast ozone pollution, as compared to traditional regression models. Multiple regression models and neural networks are examined for a range of cities under different climate and ozone regimes, enabling a comparative study of the two approaches. Model comparison statistics indicate that neural network techniques are somewhat (but not dramatically) better than regression models for daily ozone prediction, and that all types of models are sensitive to different weather-ozone regimes and the role of persistence in aiding predictions.
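A minimal sketch of the comparison described above: a multiple linear regression versus a small neural network for daily ozone prediction from weather covariates. The synthetic data, the network size, and the choice of predictors (temperature, wind speed, persistence) are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(2)
n = 500
temp = rng.uniform(15, 38, n)       # daily maximum temperature (deg C)
wind = rng.uniform(0.5, 8.0, n)     # wind speed (m/s)
o3_prev = rng.uniform(20, 120, n)   # previous day's ozone (persistence)

# Synthetic ozone (ppb) with a nonlinear temperature response.
ozone = 0.08 * temp**2 - 4.0 * wind + 0.3 * o3_prev + rng.normal(0, 8, n)

X = np.column_stack([temp, wind, o3_prev])
X_tr, X_te, y_tr, y_te = train_test_split(X, ozone, random_state=0)

models = {
    "linear regression": LinearRegression(),
    "neural network": make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0),
    ),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name:18s} test RMSE: {rmse:5.1f} ppb")
```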

7.
Chen YC  Ma HW 《Chemosphere》2006,63(5):751-761
Many environmental multimedia risk assessment models have been developed and widely used along with the increasing sophistication of risk assessment methods. Despite the considerable improvement, uncertainty remains a primary threat to the credibility of, and users' confidence in, model-based risk assessments. In particular, it has been indicated that scenario and model uncertainty may significantly affect the assessment outcome. Furthermore, the uncertainty resulting from choosing different models has been shown to be more important than that caused by parameter uncertainty. Based on the relationship between exposure pathways and estimated risk results, this study develops a screening procedure to compare the relative suitability of candidate multimedia models, which facilitates the reduction of uncertainty due to model selection. The MEPAS, MMSOILS, and CalTOX models, combined with Monte Carlo simulation, are applied to a realistic groundwater-contaminated site to demonstrate the process. It is also shown that the identification of important parameters and exposure pathways, and implicitly the subsequent design of uncertainty reduction and risk management measures, would be better informed.

8.
Quantitative assessment of human exposures and health effects due to air pollution involves detailed characterization of the impacts of air quality on exposure and dose. A key challenge is to integrate these three components on a consistent spatial and temporal basis, taking into account linkages and feedbacks. The current state of practice for such assessments is to exercise emission, meteorology, air quality, exposure, and dose models separately, and to link them together by using the output of one model as input to the subsequent downstream model. Quantification of variability and uncertainty has been an important topic in the exposure assessment community for a number of years. Variability refers to differences in the value of a quantity (e.g., exposure) over time, over space, or among individuals. Uncertainty refers to lack of knowledge regarding the true value of a quantity. An emerging challenge is how to quantify variability and uncertainty in integrated assessments over the source-to-dose continuum by considering contributions from individual as well as linked components. For a case study of fine particulate matter (PM2.5) in North Carolina during July 2002, we characterize the variability and uncertainty associated with each of the linked concentration, exposure and dose models, and use a conceptual framework to quantify and evaluate the implications of coupled model uncertainties. We find that the resulting overall uncertainties due to the combined effects of variability and uncertainty are smaller (usually by a factor of 3–4) than the crudely multiplied model-specific overall uncertainty ratios. Future research will need to examine the impact of potential dependencies among the model components by conducting a truly coupled modeling analysis.
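A toy illustration of why coupled uncertainties combine to less than crude multiplication suggests. It assumes, purely hypothetically, independent lognormal uncertainty factors for the concentration, exposure, and dose components; the sigma values are not the study's estimates.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical lognormal uncertainty factors for three linked components.
sigmas = {"concentration": 0.30, "exposure": 0.35, "dose": 0.25}
samples = {name: rng.lognormal(mean=0.0, sigma=s, size=n)
           for name, s in sigmas.items()}

def ratio_95(x):
    """Overall uncertainty ratio: 97.5th over 2.5th percentile."""
    lo, hi = np.percentile(x, [2.5, 97.5])
    return hi / lo

# Component-wise ratios, crudely multiplied together.
crude = np.prod([ratio_95(v) for v in samples.values()])

# Ratio of the actual coupled (multiplicative) source-to-dose chain.
coupled = ratio_95(samples["concentration"]
                   * samples["exposure"]
                   * samples["dose"])

print(f"crude product of ratios: {crude:6.1f}")
print(f"coupled chain ratio:     {coupled:6.1f}")
print(f"crude / coupled:         {crude / coupled:4.1f}")
```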

9.
GOAL, SCOPE AND BACKGROUND: This paper uses two case studies of U.S. Department of Energy nuclear weapons complex installations to illustrate the integration of expedited site characterization (ESC) and multimedia modeling in the remedial action decision-making process. CONCEPTUAL SITE MODELS, MULTIMEDIA MODELS, AND EXPEDITED SITE CHARACTERIZATION: Conceptual site models outline assumptions about contaminants and the spatial/temporal distribution of potential receptors. Multimedia models simulate contaminant transport and fate through multiple environmental media, estimate potential human exposure via specific exposure pathways, and estimate the risk of cancer and non-cancer health outcomes. ESC relies on using monitoring data to quantify the key components of an initial conceptual site model that is modified iteratively using the multimedia model. CASE STUDIES: Two case studies are presented that used the ESC approach: Los Alamos National Laboratory (LANL) and Pantex. LANL released radionuclides, metals, and organic compounds into canyons surrounding the facility. Past waste management operations at the Pantex Plant included burning chemical wastes in unlined pits, burying wastes in unlined landfills, and discharging plant wastewaters into on-site surface waters. CONCLUSIONS: The case studies indicate that using multimedia models with the ESC approach can inform assessors about what, where, and how much site characterization data need to be collected to reduce the uncertainty associated with risk assessment. Lowering the degree of uncertainty reduces the time and cost associated with assessing potential risk and increases the confidence that decision makers have in the assessments performed.

10.
Land-use regression models have increasingly been applied for air pollution mapping, typically at the city level. Though the models generally predict spatial variability well, their structure differs widely between studies. The observed differences in the models may be due to artefacts of data and methodology or to underlying differences in source or dispersion characteristics. If the former, more standardised methods using common data sets could be beneficial. We compared land-use regression models for NO2 and PM10, developed with a consistent protocol, in Great Britain (GB) and the Netherlands (NL). Models were constructed on the basis of 2001 annual mean concentrations from the national air quality networks. Predictor variables used for modelling related to traffic, population, land use and topography. Four sets of models were developed for each country. First, predictor variables derived from data sets common to both countries were used in a pooled analysis, including an indicator for country and interaction terms between country and the identified predictor variables. Second, the common data sets were used to develop individual baseline models for each country. Third, the country-specific baseline models were applied after calibration in the other country to explore transferability. The fourth model was developed using the best possible predictor variables for each country. A common model for GB and NL explained NO2 concentrations well (adjusted R2 0.64), with no significant differences in intercept and slopes between the two countries. The country-specific model developed on common variables improved the prediction for NL but not for GB. The performance of models based upon common data was only slightly worse than that of models optimised with local data. Models transferred to the other country performed substantially worse than the country-specific models. In conclusion, care is needed both in transferring models across different study areas and in developing large inter-regional LUR models.

11.
In particulate air pollution mortality time series studies, the exposure measure used is typically the current day's or the previous day's air pollution concentration, or a multi-day moving average concentration. Distributed lag models (DLMs), which allow air pollution effects to be spread differentially over multiple days, are seen as an improvement over single- or multi-day moving average exposure measures. However, at the current time, the statistical properties of DLMs as a measure of air pollution exposure have not been investigated. In this paper, a simulation study is used to investigate the performance of DLMs as a measure of air pollution exposure in comparison with single- and multi-day moving average exposure measures, under various forms for the true effect of air pollution on mortality. The simulation study shows that DLMs offer a more robust measure of the effect of air pollution on mortality and avoid the potential for a large negative bias compared with single- or multi-day moving average exposure measures. This is important because, in many U.S. cities, particulate air pollution concentrations are observed only once every six days, meaning it is often only possible to use single-day exposure measures. The results from this paper help quantify the magnitude of the negative bias that can result from using single-day exposure measures. The implications of this work for future air pollution mortality time series studies are discussed. The data used in this paper are concurrent daily time series of mortality, weather, and particulate air pollution from Cook County, IL, for the period 1987-1994.
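A minimal sketch of a distributed lag model: daily mortality is regressed on the current and several previous days' pollution concentrations, and the lag coefficients sum to the total effect. The synthetic data, the true lag weights, and the 7-day lag window are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_days, max_lag = 1000, 6
pm = rng.gamma(shape=4, scale=10, size=n_days)   # daily PM concentrations

# Synthetic mortality: PM effect spread over lags 0..6 (true weights).
true_lag_effects = np.array([0.05, 0.04, 0.03, 0.02, 0.01, 0.005, 0.0])
mortality = 50.0 + rng.normal(0, 2, n_days)
for lag, beta in enumerate(true_lag_effects):
    mortality[max_lag:] += beta * pm[max_lag - lag : n_days - lag]

# Distributed-lag design matrix: intercept plus one column per lag.
X = np.column_stack([np.ones(n_days - max_lag)] +
                    [pm[max_lag - lag : n_days - lag]
                     for lag in range(max_lag + 1)])
y = mortality[max_lag:]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated lag effects:", np.round(coef[1:], 3))
print("total effect (sum of lags):", round(coef[1:].sum(), 3))
```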

12.

Background, aim, and scope

We strive to predict the consequences of genetically modified plants (GMPs) being cultivated openly in the environment, as human and animal health, biodiversity, agricultural practice and farmers’ economy could be affected. It is therefore unfortunate that the risk assessment of GMPs is burdened by uncertainty. One of the reasons for the uncertainty is that GMPs interact with the ecosystems at the release site, thereby creating variability. This variability, e.g. in gene flow, makes consequence analysis difficult. The review illustrates the great uncertainty of results from gene-flow analysis.

Main features

Many independent experiments have been performed on the individual processes in gene flow. The results comprise information from laboratory, growth-chamber and field trials, and they were generated using molecular or phenotypic markers and analysis of fitness parameters. The extent of spontaneous introgression in natural populations was also monitored. Modelling was used as an additional tool to identify key parameters in gene flow.

Results

The GM plant may affect the environment directly, or indirectly by dispersal of the transgene. The magnitude of transgene dispersal depends on the GM crop, the agricultural practice and the environment of the release site. From case to case, these three factors produce variability that is reflected in widely different likelihoods of transgene dispersal and fitness of introgressed plants. In the present review, this is illustrated through a range of examples, mostly from our own research on oilseed rape, Brassica napus. In the Brassica cases, the variability affected all five main steps in the process of gene dispersal. The modelling performed suggests that, in Brassica, differences in fitness among plant genome classes could be a dominant factor in the establishment and survival of introgressed populations.

Discussion

Up to now, experimental analyses have mainly focused on studying the many individual processes of gene flow. This can be criticised, as these experiments are normally carried out in widely different environments and with different genotypes, thus providing bits and pieces that are difficult to assemble. Only a few gene-flow studies have been performed in natural populations and over several plant generations, though this could give a more coherent and holistic view.

Conclusion

The variability inherent in the processes of gene flow in Brassica is apparent, and remedies are needed. One possibility is to subject the study species to additional experiments and monitoring, but this is costly and will likely not cover all possible scenarios. Another remedy is modelling gene flow. Modelling is a valuable tool for identifying key factors in the gene-flow process for which more knowledge is needed, and for identifying parameters and processes that are relatively insensitive to change and therefore require less attention in future data collection. But the interdependence between models and experimental data is extensive, as models depend on experimental data for their development and testing.

Recommendations

More and more transgenic varieties harbouring genes that might potentially affect the environment (e.g. drought tolerance, salt tolerance, disease tolerance, pharmaceutical genes) are being grown worldwide. This calls for a thorough risk assessment. However, in Brassica, the limited and uncertain knowledge on gene flow is an obstacle to this. Modelling of gene flow should be optimised, and modelling outputs verified in targeted field studies and at the landscape level. Last but not least, it is important to remember that transgene flow in itself is not necessarily a threat; it is the consequences of gene flow that may jeopardise ecosystems and agricultural production. This emphasises the importance of consequence analysis of genetically modified plants.

13.
Abstract

This work assessed the usefulness of a current air quality model (American Meteorological Society/Environmental Protection Agency Regulatory Model [AERMOD]) for predicting air concentrations and deposition of perfluorooctanoate (PFO) near a manufacturing facility. Air quality models play an important role in providing information for verifying permitting conditions and for exposure assessment purposes. It is important to ensure that traditional modeling approaches are applicable to perfluorinated compounds, which are known to have unusual properties. Measured field data were compared with modeling predictions to show that AERMOD adequately located the maximum air concentration in the study area, provided representative or conservative air concentration estimates, and demonstrated bias and scatter not significantly different from those reported for other compounds. Surface soil/grass concentrations resulting from modeled deposition flux also showed acceptable bias and scatter compared with measured concentrations of PFO in soil/grass samples. Errors in predictions of air concentrations or deposition may be best explained by meteorological input uncertainty and by conservatism in the PRIME algorithm used to account for building downwash. In general, AERMOD was found to be a useful screening tool for modeling the dispersion and deposition of PFO in air near a manufacturing facility.

14.
A weight-of-evidence approach was used by the US National Acid Precipitation Assessment Program (NAPAP) to assess the sensitivity of the chemistry and biology of lakes and streams to hypothesized changes in sulfate deposition over the next 50 years. The analyses focused on projected effects in response to differences in the magnitude and timing of changes in sulfate deposition in the north-eastern United States, the Mid-Appalachian Highlands, and the Southern Blue Ridge Province. A number of tools were used to provide the weight of evidence required to have confidence in an assessment with many uncertainties, both because of the complexity of the systems for which projections of future conditions were made and because of limited historical data. The MAGIC model provided the projections of chemical changes in response to alternative deposition scenarios. Projected chemical conditions were input into biological models that evaluate effects on fish populations. The sensitivity of water chemistry and brook trout resources to the hypothesized changes in deposition was found to be greatest in the Adirondacks and Mid-Atlantic Highlands. Under the hypothesized sulfur deposition reduction scenarios, chemical conditions suitable for fish were projected to improve 20-30 years sooner than under the scenario that assumed no new legislated controls. Other lines of evidence, e.g. other models, field observations, and paleolimnological findings, were used to evaluate uncertainty in the projections. Model parameter/calibration uncertainty for the chemical models and population sampling uncertainty were explicitly quantified. Model structural uncertainties were bracketed using model comparisons, recent measured changes, and paleolimnological reconstructions of historical changes in lake chemistry.

15.
Predicting long-term mean pollutant concentrations in the vicinity of airports, roads and other industrial sources is frequently of concern in regulatory and public health contexts. Many emissions are represented geometrically as ground-level line or area sources. Well-developed modelling tools such as AERMOD and ADMS are able to model dispersion from finite (i.e. non-point) sources with considerable accuracy, drawing upon an up-to-date understanding of boundary layer behaviour. Due to mathematical difficulties associated with line and area sources, computationally expensive numerical integration schemes have been developed. For example, some models decompose area sources into a large number of line sources orthogonal to the mean wind direction, for which an analytical (Gaussian) solution exists. Models also employ a time-series approach, which involves computing mean pollutant concentrations for every hour over one or more years of meteorological data. This can give rise to computer runtimes of several days for the assessment of a site. While this may be acceptable for assessment of a single industrial complex, airport, etc., this level of computational cost precludes national or international policy assessments at the level of detail available with dispersion modelling. In this paper, we extend previous work [S.R.H. Barrett, R.E. Britter, 2008. Development of algorithms and approximations for rapid operational air quality modelling. Atmospheric Environment 42 (2008) 8105–8111] to line and area sources. We introduce approximations which allow for the development of new analytical solutions for long-term mean dispersion from line and area sources, based on hypergeometric functions. We describe how these solutions can be parameterized from a single point-source run of an existing advanced dispersion model, thereby accounting for all processes modelled in the more costly algorithms. The parameterization method combined with the analytical solutions for long-term mean dispersion is shown to produce results several orders of magnitude more efficiently, with a loss of accuracy that is small compared to the absolute accuracy of advanced dispersion models near sources. The method can be readily incorporated into existing dispersion models, and may allow additional computation time to be expended on modelling dispersion processes more accurately in future, rather than on accounting for source geometry.
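A sketch of the costly baseline approach the paragraph describes: an area source decomposed into many crosswind line sources, each evaluated with the standard closed-form Gaussian ground-level solution. The dispersion parameterisation sigma_z(x) = a*x^b and all numerical values are illustrative assumptions, not the paper's hypergeometric solutions.

```python
import numpy as np

def sigma_z(x, a=0.12, b=0.85):
    """Toy vertical dispersion parameter (m) at downwind distance x (m)."""
    return a * x**b

def crosswind_line_source(q_line, x, u):
    """Ground-level concentration from an infinite ground-level crosswind
    line source: C = sqrt(2/pi) * q / (u * sigma_z(x))."""
    return np.sqrt(2.0 / np.pi) * q_line / (u * sigma_z(x))

def area_source(q_area, x_receptor, length, u, n_strips=1000):
    """Decompose an area source (occupying x = 0..length upwind of the
    receptor) into n_strips crosswind line strips and sum contributions."""
    edges = np.linspace(0.0, length, n_strips + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dx = edges[1] - edges[0]
    distances = x_receptor - centers      # downwind distance of each strip
    return crosswind_line_source(q_area * dx, distances, u).sum()

# Receptor 500 m downwind of the far edge of a 200 m deep area source
# emitting 1e-6 g/(m^2 s), wind speed 3 m/s. Values are illustrative.
c = area_source(q_area=1e-6, x_receptor=700.0, length=200.0, u=3.0)
print(f"long-term mean concentration: {c:.2e} g/m^3")
```

The analytical solutions in the paper replace exactly this kind of strip-by-strip numerical summation, which is what makes them orders of magnitude faster.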

16.
Conservation efforts are increasingly supported by ecosystem service assessments. These assessments depend on complex multi-disciplinary methods and rely on a number of assumptions which reduce complexity. If assumptions are ambiguous or inadequate, misconceptions and misinterpretations may arise when interpreting the results of assessments. An interdisciplinary understanding of assumptions in ecosystem service science is needed to provide consistent conservation recommendations. Here, we synthesise and elaborate on 12 prevalent types of assumptions in ecosystem service assessments. These comprise the conceptual and ethical foundations of the ecosystem service concept; assumptions in data collection, indication, mapping, and modelling; assumptions in socio-economic valuation and value aggregation; and assumptions about using assessment results for decision-making. We recommend that future assessments increase transparency about assumptions, and test and validate them and their potential consequences for assessment reliability. This will support the uptake of assessment results in conservation science, policy and practice.

17.
Local air quality management requires the use of screening and advanced modelling tools that are able to predict roadside pollution levels under a variety of meteorological and traffic conditions. So far, more than 200 air pollution hotspots have been identified by local authorities in the UK, many of them associated with NO2 and/or PM10 exceedances in heavily trafficked urban streets that may be classified as street canyons or canyon intersections. This is due to the increased traffic-related emissions and reduced natural ventilation in such streets. Specialised dispersion models and empirical adjustment factors have commonly been used to account for the entrapment of pollutants in street canyons. However, most of the available operational tools have been validated using experimental datasets from relatively deep canyons (H/W⩾1) in continental Europe. The particular characteristics of low-rise street canyons (H/W<1), which are a typical feature of urban/sub-urban areas in the UK, have rarely been taken into account. The main objective of this study is to review current practice and evaluate three widely used regulatory dispersion models, WinOSPM, ADMS-Urban 2.0 and AEOLIUS Full. The model evaluation relied on two comprehensive datasets, which included CO, PM10 and NOx measurements, traffic information and relevant meteorological data from two busy street canyons in Birmingham and London over a 1-year period. The performance of the selected models was tested for different times of the day, days of the week and varying wind conditions. Furthermore, the ability of the models to reproduce roadside NO2/NOx concentration ratios using simplified chemistry schemes was evaluated for one of the sites. Finally, the advantages and limitations of current regulatory street canyon modelling practice in the UK, as well as needs for future research, are identified and discussed.

18.
In many metropolitan areas, traffic is the main source of air pollution. The high concentrations of pollutants in streets have the potential to affect human health. Therefore, estimation of air pollution at the street level is required for health impact assessment. This task has been carried out in many developed countries by a combination of air quality measurements and modeling. This study focuses on how to apply a dispersion model to cities in the developing world, where model input data and data from air quality monitoring stations are limited or of varying quality. The research uses the Operational Street Pollution Model (OSPM), developed by the National Environmental Research Institute in Denmark, for a case study in Hanoi, the capital of Vietnam. OSPM predictions for five streets were evaluated against air pollution measurements of nitrogen oxides (NOx), sulfur dioxide (SO2), carbon monoxide (CO), and benzene (BNZ) that were available from previous studies. Hourly measurements and passive sample measurements collected over 3-week periods were compared with model outputs, applying emission factors from previous studies. In addition, so-called "backward calculations" were performed to adapt the emission factors to Hanoi conditions. The average fleet emission factors estimated can be used for emission calculations on other streets in Hanoi and in other locations in Southeast Asia with similar vehicle types. This study also emphasizes the need to further reduce uncertainties in input data for street-scale air pollution modeling in Vietnam, namely by providing reliable emission factors and high-quality hourly air pollution measurements.
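A minimal sketch of the "backward calculation" idea: if the modeled street contribution scales linearly with the fleet emission factor, the factor can be rescaled so that modeled concentrations match measurements. The function name and all numbers are illustrative assumptions, not the OSPM implementation.

```python
def backward_emission_factor(ef_initial, c_observed, c_modeled, c_background):
    """Rescale an initial fleet emission factor so that the modeled
    street increment matches the observed increment above background.
    Assumes the street contribution is linear in the emission factor."""
    observed_increment = c_observed - c_background
    modeled_increment = c_modeled - c_background
    if modeled_increment <= 0:
        raise ValueError("modeled increment must be positive")
    return ef_initial * observed_increment / modeled_increment

# Toy example: CO on a Hanoi street (concentrations in mg/m^3,
# emission factor in g/km; all values hypothetical).
ef = backward_emission_factor(ef_initial=12.0,
                              c_observed=3.8,
                              c_modeled=2.9,
                              c_background=0.8)
print(f"adapted fleet emission factor: {ef:.1f} g/km")
```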

19.
Soil pollution data scatter strongly even at small spatial scales. Composite sampling is therefore recommended for pollution assessment. Different statistical methods are available to provide information about the accuracy of the sampling process. Autocorrelation and variogram analysis can be applied to investigate spatial relationships. Analysis of variance is a useful method for homogeneity testing. The main source of the total measurement uncertainty is the uncertainty arising from sampling. The sample mass required for analysis can also be estimated using an analysis of variance. The number of increments to be taken for a composite sample can be estimated by means of simple statistical formulae. Analytical results of composite samples obtained from different procedures for combining increments can be compared by means of multiple mean comparisons. The applicability of statistical methods and their advantages are demonstrated in a case study investigating metals in soil at a very small spatial scale. The paper describes important statistical tools for the quantitative assessment of the sampling process. Detailed results clearly depend on the purpose of sampling, the spatial scale of the object under investigation and the specific case study, and have to be determined for each particular case.
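A sketch of the kind of simple statistical formula mentioned above for the number of increments in a composite sample: n = (t * CV / e)^2, where CV is the relative sampling standard deviation and e the tolerable relative error. The iterative solution and the example values are illustrative assumptions.

```python
import math
from scipy import stats

def n_increments(cv, rel_error, alpha=0.05, n0=30, max_iter=100):
    """Number of increments n for a composite sample so that the mean
    meets a tolerable relative error e at confidence level 1 - alpha:
    n = (t_{1-alpha/2, n-1} * CV / e)^2, solved iteratively because
    the t quantile itself depends on n."""
    n = float(n0)
    for _ in range(max_iter):
        t = stats.t.ppf(1 - alpha / 2, df=max(int(n) - 1, 1))
        n_new = (t * cv / rel_error) ** 2
        if abs(n_new - n) < 0.5:
            return math.ceil(n_new)
        n = n_new
    return math.ceil(n)

# Example: 40% relative scatter between increments, 15% tolerable error.
print("increments needed:", n_increments(cv=0.40, rel_error=0.15))
```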

20.
Emission projections are important for environmental policy, both to evaluate the effectiveness of abatement strategies and to determine future compliance with legislation. Moreover, including uncertainty is an essential added value for decision makers. In this work, projection values and their associated uncertainty are computed for pollutant emissions corresponding to the most significant activities in the national atmospheric emission inventory in Spain. Until now, projections have been calculated under three main scenarios: “without measures” (WoM), “with measures” (WM) and “with additional measures” (WAM). For the first, regression techniques had been applied, which are inadequate for time-dependent data. For the other scenarios, values had been computed taking into account expected activity growth, as well as policies and measures. However, only point forecasts had been produced. In this work, statistical methodology is applied for: a) the inclusion of projection intervals for future time points, where the width of the intervals is a measure of uncertainty; b) for the WoM scenario, ARIMA models to capture the dynamics of the processes; and c) for the WM scenario, the bootstrap as an additional non-parametric tool, which does not rely on distributional assumptions and is thus more general. The advantages of using ARIMA models with uncertainty for the WoM scenario are shown. Moreover, presenting the WM scenario allows one to observe whether projected emission values fall within the intervals, thus showing whether the measures to be taken to reach the scenario imply a significant improvement. Results also show how bootstrap techniques incorporate stochastic modelling to produce forecast intervals for the WM scenario.
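A minimal sketch of the two interval techniques described above: a parametric ARIMA projection interval for a WoM-style series, and a residual-bootstrap interval around a deterministic trend as a stand-in for the WM scenario calculation. The synthetic emission series, the ARIMA order, and the trend model are illustrative assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)

# Synthetic annual emission series (kt), 1990-2020: trend plus noise.
years = np.arange(1990, 2021)
emissions = 500.0 - 4.0 * (years - 1990) + rng.normal(0, 12, years.size)

# a) WoM-style projection: ARIMA model with parametric forecast intervals.
res = ARIMA(emissions, order=(1, 1, 1)).fit()
fc = res.get_forecast(steps=10)
print("2030 ARIMA projection:", round(fc.predicted_mean[-1], 1),
      "95% interval:", np.round(fc.conf_int(alpha=0.05)[-1], 1))

# b) WM-style projection: deterministic trend with residual-bootstrap
#    intervals, avoiding distributional assumptions on the errors.
t = (years - 1990).astype(float)
coef = np.polyfit(t, emissions, deg=1)
fitted = np.polyval(coef, t)
resid = emissions - fitted
t_future = np.arange(t[-1] + 1, t[-1] + 11)

boot = np.empty((2000, t_future.size))
for i in range(2000):
    y_star = fitted + rng.choice(resid, size=resid.size, replace=True)
    coef_star = np.polyfit(t, y_star, deg=1)
    boot[i] = (np.polyval(coef_star, t_future)
               + rng.choice(resid, size=t_future.size, replace=True))
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("2030 bootstrap interval:", round(lo[-1], 1), "-", round(hi[-1], 1))
```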
