Similar Literature
20 similar articles retrieved
1.
In March 2011, the Interstate Technology & Regulatory Council (ITRC) Contaminated Sediments Team published a web‐based Technical and Regulatory Guidance on the concepts, processes, and uses of bioavailability in a risk decision‐making framework at a contaminated sediment site. Bioavailability processes, as defined by the National Research Council (NRC; 2003), are the “individual physical, chemical, and biological interactions that determine the exposure of plants and animals to chemicals associated with soils and sediments.” Bioavailability assessment tools aid in the assessment of human and ecological exposure and the development of site‐specific remedial objectives. The guidance provides information on the processes that may affect contaminant bioavailability within sediments and that govern exposure of ecological and human receptors; supports the development of conceptual site models (CSMs); and describes available tools (biological, chemical, and physical) and models that are used to measure and characterize the fate, transport, and potential bioavailability of contaminants. Case studies, referenced throughout the document, demonstrate the practical application of bioavailability measures. The guidance also describes the proper application of traditional and emerging sediment remediation technologies to support the selection of a remedy that is protective of human health and the environment. © 2013 Wiley Periodicals, Inc.

2.
A recent United States Environmental Protection Agency (US EPA) Expert Panel on Dense Nonaqueous Phase Liquid (DNAPL) Source Remediation concluded that the decision‐making process for implementing source depletion is hampered by quantitative uncertainties and that few useful predictive tools are currently available for evaluating the benefits. This article provides a new planning‐level approach to aid the process. Four simple mass balance models were used to provide estimates of the reduction in the remediation time frame (RTF) for a given amount of source depletion: step function, linear decay, first‐order decay, and compound. As a shared framework for assessment, all models use the time required to remediate groundwater concentrations below a particular threshold (e.g., goal concentration or mass discharge rate) as a metric. This value is of interest in terms of providing (1) absolute RTF estimates in years as a function of current mass discharge rate, current source mass, the remediation goal, and the source‐ reduction factor, and (2) relative RTF estimates as a fraction of the remediation time frame for monitored natural attenuation (MNA). Because the latter is a function of the remediation goal and the remaining fraction (RF) of mass following remediation, the relative RTF can be a valuable aid in the decision to proceed with source depletion or to use a long‐term containment or MNA approach. Design curves and examples illustrate the nonlinear relationship between the fraction of mass remaining following source depletion and the reduction in the RTF in the three decay‐based models. For an example case where 70 percent of the mass was removed by source depletion and the remediation goal (Cg/C0) was input as 0.01, the improvement in the RTF (relative to MNA) ranged from a 70 percent reduction (step function model) to a 21 percent reduction (compound model). Because empirical and process knowledge support the appropriateness of decay‐based models, the efficiency of source depletion in reducing the RTF is likely to be low at most sites (i.e., the percentage reduction in RTF will be much lower than the percentage of the mass that is removed by a source‐depletion project). Overall, the anticipated use of this planning model is in guiding the decision‐making process by quantifying the relative relationship between RTF and source depletion using commonly available site data. © 2005 Wiley Periodicals, Inc.  相似文献   
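To make the nonlinear relationship between mass removal and remediation time frame (RTF) concrete, the short Python sketch below implements two of the simpler planning-level relationships described above under the common simplifying assumption that source concentration is proportional to remaining source mass; the exact formulations used in the article may differ.

```python
from math import log

def relative_rtf_step(rf: float) -> float:
    """Step-function source: concentration stays constant until the mass is
    exhausted, so remediation time scales directly with the remaining mass fraction."""
    return rf

def relative_rtf_first_order(rf: float, cg_over_c0: float) -> float:
    """First-order source decay, assuming concentration proportional to remaining mass.
    RTF relative to MNA:  t_dep / t_MNA = 1 + ln(RF) / ln(C0 / Cg)."""
    if rf <= cg_over_c0:              # depletion alone already meets the goal
        return 0.0
    return 1.0 + log(rf) / log(1.0 / cg_over_c0)

# Example from the abstract: 70% mass removal (RF = 0.3), remediation goal Cg/C0 = 0.01
rf, goal = 0.30, 0.01
print(f"step function: {100 * (1 - relative_rtf_step(rf)):.0f}% RTF reduction")
print(f"first-order  : {100 * (1 - relative_rtf_first_order(rf, goal)):.0f}% RTF reduction")
```

For the abstract's example, the step-function relationship reproduces the reported 70 percent RTF reduction, while the first-order form gives roughly a 26 percent reduction, falling between the reported step-function and compound-model values.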

3.
This article aims to develop a general model for the evaluation of ecological-economic efficiency that will serve as an information support tool for decision making at the corporate, municipal, and regional levels. It encompasses cost-benefit analysis in solid waste management by applying a sustainability-promoting approach that is explicitly related to monetary measures. A waste management's efficient decision (WAMED) model based on cost-benefit analysis is proposed and developed to evaluate the ecological-economic efficiency of solid waste management schemes. The employment of common business administration methodology tools is featured. A classification of competing waste management models is introduced to facilitate evaluation of the relevance of the previously introduced WAMED model. Suggestions are made for how to combine the previously introduced EUROPE model, based on the equality principle, with the WAMED model to create economic incentives to reduce solid waste management-related emissions. A hypothetical case study presents the practical application of the proposed cost-benefit analysis-based theory to the landfilling concept. It is concluded that the presented methodology reflects an integrated approach to decreasing negative impacts on the environment and on the health of the population, while increasing economic benefits through the implementation of solid waste management projects.
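As a rough illustration of the kind of monetized cost-benefit comparison the WAMED approach builds on, the sketch below computes a simple benefit-cost ratio for a landfilling scheme; all line items and figures are hypothetical and are not taken from the article.

```python
def ecological_economic_efficiency(benefits: dict, costs: dict) -> float:
    """Benefit-cost ratio over monetized items; a value above 1.0 suggests the
    scheme creates net value once environmental items are priced in."""
    return sum(benefits.values()) / sum(costs.values())

# Illustrative landfilling scheme (all figures hypothetical, in thousand EUR per year)
benefits = {"gate fees": 900, "landfill gas energy": 150, "avoided transport": 60}
costs = {"operation": 700, "monitoring": 80, "monetized emission damage": 220}

print(f"benefit-cost ratio = {ecological_economic_efficiency(benefits, costs):.2f}")
```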

4.
Vapor intrusion (VI) assessment is complicated by spatial and temporal variability, largely due to compounded interactions among the many individual factors that influence the vapor migration pathway from subsurface sources to indoor air. Past research on highly variable indoor air datasets demonstrates that conventional sampling schemes can result in false negative determinations of potential risk corresponding to reasonable maximum exposures (RME). While high‐frequency chemical analysis of individual chlorinated volatile organic compounds (CVOCs) in indoor air is conceptually appealing, it remains largely impractical when numerous buildings are involved and particularly for long‐term monitoring. As more is learned about the challenges with indoor air sampling for VI assessment, it has become clear that alternative approaches are needed to help guide discrete sampling efforts and reduce sampling requirements while maintaining acceptable confidence in exposure characterization. Indicators, tracers, and surrogates (ITS), which include a collection of quantifiable metrics and tools, have been suggested as a potential solution for making VI pathway assessment and long‐term monitoring more informative, efficient, and cost‐effective. This review, compilation, and evaluation of ITS demonstrates how even low numbers of indoor air CVOC samples can provide high levels of confidence for representing the RME levels (e.g., 95th percentile) often sought by regulatory agencies for less‐than‐chronic effects. A two‐part compilation of available evidence for select low‐cost ITS is presented, with Part 1 focused on introducing the concepts of ITS, meteorologically based ITS, and the evidence from data‐rich studies to support lower cost CVOC VI assessments. Part 1 includes the results of quantitative analyses on two robust residential building VI datasets, where numerous supplemental metrics were collected concurrently with indoor air concentration data. These are supplemented with additional less‐intensive studies in different circumstances. These analyses show that certain ITS metrics and tools, including differential temperature, differential pressure, and radon (in Part 2), can provide benefits to VI assessment and long‐term monitoring. These benefits include indicators that narrow the assessment period needed to capture RME conditions, tracers that enhance understanding of the conceptual site model and aid in the identification of preferential pathways, and surrogates that support or substitute for CVOC sampling results. The results of this review provide insight into the scientifically supportable uses of ITS.
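The sketch below illustrates, on purely synthetic data, the indicator idea described above: if an easily measured metric such as indoor-outdoor differential temperature correlates with CVOC entry, a handful of samples timed to that indicator can bracket the 95th-percentile (RME) concentration better than the same number of randomly timed samples. The data-generating relationship and all numbers are assumptions, not results from the reviewed datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily data for one heating season: indoor TCE (ug/m3) rises when the
# indoor-outdoor temperature difference (dT) drives stack-effect depressurization.
days = 180
dT = rng.normal(12, 6, days).clip(min=0)                      # deg C, illustrative
tce = 0.15 * dT + rng.lognormal(mean=-1.0, sigma=0.6, size=days)

rme_true = np.percentile(tce, 95)                             # "true" RME from daily data

# Discrete sampling: 6 random days versus 6 days flagged by the dT indicator
random_days = rng.choice(days, 6, replace=False)
indicator_days = np.argsort(dT)[-6:]                          # largest temperature differentials

print(f"95th percentile, all days  : {rme_true:.2f}")
print(f"max of 6 random samples    : {tce[random_days].max():.2f}")
print(f"max of 6 indicator samples : {tce[indicator_days].max():.2f}")
```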

5.
This paper reviews several models developed to support decision making in municipal solid waste management (MSWM). The concepts underlying sustainable MSWM models can be divided into two categories: one incorporates social factors into decision-making methods, and the other includes public participation in the decision-making process. In most research efforts, the public is merely informed or takes part in discussion, and has little effect on decision making. Few studies have considered public participation in the decision-making process, and the methods have sought to strike a compromise between concerned criteria, not between stakeholders. However, the source of the conflict arises from the stakeholders' complex web of values, and such conflict affects the feasibility of implementing any decision. The purpose of this study is to develop a sustainable decision-making model for MSWM that overcomes these shortcomings. The proposed model combines multicriteria decision making (MCDM) and a consensus analysis model (CAM). The CAM is built to aid decision making when MCDM methods are utilized and, subsequently, a novel sustainable decision-making model for MSWM is developed. The main feature of the CAM is the assessment of the degree of consensus between stakeholders for particular alternatives. A case study of food waste management in Taiwan is presented to demonstrate the practicality of this model.
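A minimal sketch of the combined idea follows, assuming a simple average-score MCDM aggregation and a crude spread-based consensus index; the actual CAM formulation in the paper is not reproduced here, and the stakeholder scores are invented for illustration.

```python
import numpy as np

# Hypothetical scores (0-10) for three food-waste alternatives, as judged by four
# stakeholder groups against their own criteria weights (illustrative values only).
# rows: stakeholders, columns: [composting, anaerobic digestion, incineration]
scores = np.array([
    [7.5, 8.0, 4.0],   # residents
    [6.0, 8.5, 5.0],   # environmental agency
    [5.5, 6.0, 7.5],   # waste operator
    [7.0, 7.5, 5.5],   # local government
])

group_score = scores.mean(axis=0)                # aggregate MCDM score per alternative
disagreement = scores.std(axis=0)                # spread across stakeholders
consensus = 1.0 - disagreement / scores.max()    # crude degree-of-consensus index in (0, 1]

for name, s, c in zip(["composting", "digestion", "incineration"], group_score, consensus):
    print(f"{name:12s} score={s:.2f}  consensus={c:.2f}")
```

An alternative that scores well but shows low consensus signals a stakeholder conflict that could block implementation, which is exactly the situation the CAM is meant to surface.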

6.
The decision to mitigate exposures from vapor intrusion (VI) is typically based on limited data from 24‐hour air samples. It is well documented that these data do not accurately represent long‐term average exposures linked to adverse health effects. Limited decision guidance is currently available to determine the most appropriate sampling strategy, considering the cost of sampling alternatives along with the economic consequences of exposure‐related health effects. We present a decision model that introduces economic and statistical considerations in evaluating alternative VI sampling methods. The model characterizes the best sampling method by factoring economic and health consequences of exposure, the variability of exposure, the cost of sampling and mitigation, and the likelihood of false‐negatives and false‐positives. Decision‐makers can use results to select the sample size that maximizes net benefit. Conceptual and mathematical models are presented linking biological, statistical, and economic considerations to assess the cost and effectiveness of different sampling strategies. The model relates an average exposure concentration, determined statistically, to abatement costs and to the monetary value of health deterioration. The value of the information provided by different strategies is calculated and used to select the optimum sampling method. Simulations show that longer‐term sampling methods tend to be more accurate and cost‐effective than short‐term samples. The ideal sampling strategy shows significant seasonal variation (it is typically optimal to use longer samples in the winter) and also varies significantly with the stringency of regulatory standards. Longer‐term sample collection provides a more accurate representation of average VI exposure and reduces the likelihood of type I and type II errors. This reduces expected costs of mitigation and exposure (e.g., health consequences, legal and regulatory penalties), which in some cases can be quite significant. The model herein shows how these savings are balanced against the additional costs of longer‐term sampling.  相似文献   
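The sketch below is a simplified stand-in for this kind of decision model: it Monte-Carlo-simulates the expected cost of a short versus a longer time-integrated sample when daily concentrations vary lognormally around a long-term mean. All concentrations, variability, and cost figures are hypothetical assumptions, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_MEAN = 2.5      # ug/m3 long-term average indoor concentration (hypothetical)
SCREEN = 2.0         # regulatory screening level (hypothetical)
GSD = 3.0            # geometric standard deviation of daily concentrations (assumed)
COST = {"sample_24h": 300, "sample_2wk": 900, "mitigate": 15_000, "missed_exposure": 60_000}

def expected_cost(duration_days: int, sample_cost: float, n_trials: int = 50_000) -> float:
    """Longer averaging shrinks the error of the mean estimate, changing the
    false-positive / false-negative balance and hence the expected total cost."""
    sigma = np.log(GSD)
    mu = np.log(TRUE_MEAN) - sigma**2 / 2
    daily = rng.lognormal(mu, sigma, size=(n_trials, duration_days))
    decide_mitigate = daily.mean(axis=1) > SCREEN
    truly_above = TRUE_MEAN > SCREEN
    consequence = np.where(decide_mitigate, COST["mitigate"],
                           COST["missed_exposure"] if truly_above else 0.0)
    return sample_cost + consequence.mean()

print(f"24-hour sample: ${expected_cost(1, COST['sample_24h']):,.0f} expected cost")
print(f"2-week sample : ${expected_cost(14, COST['sample_2wk']):,.0f} expected cost")
```

With these inputs the longer sample has the lower expected cost because its average is less likely to fall below the screening level when the true long-term mean exceeds it, that is, it produces fewer costly false negatives.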

7.
Vapor intrusion characterization and response efforts must consider four key interactive factors: background indoor air constituents, preferential vapor migration pathways, complex patterns of vapor distribution within buildings, and temporal concentration variability caused by pressure differentials within and exterior to structures. An additional challenge is found at sites contaminated by trichloroethylene (TCE), which in the United States has very low indoor air screening levels due to acute risk over short exposure durations for sensitive populations. Timely and accurate characterization of vapor intrusion has been constrained by traditional passive time‐averaging sampling methods. This article presents three case studies of a robust new methodology for vapor intrusion characterization particularly suited for sites where there is a critical need for rapid response to exposure exceedances to minimize health risks and liabilities. The new methodology comprises low‐detection‐level field analytical instrumentation with grab sample and continuous monitoring capabilities for key volatile constituents integrated with pressure differential measurements and web‐based reporting. The system also provides automated triggered alerts to project teams and capability for integration with engineered systems for vapor intrusion control. The three case studies illustrate key findings and lessons learned during system deployment at two sites undergoing characterization studies and one site undergoing thermal remediation of volatile contaminants.  相似文献   
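A minimal sketch of the triggered-alert element of such a system: compute a short rolling time-weighted average from continuous analyzer readings and flag exceedances of an action level. The readings, window, and action level are illustrative assumptions, not the deployed system's logic.

```python
from statistics import fmean

# Hypothetical hourly readings from a continuous analyzer: (hour, TCE in ug/m3)
readings = list(enumerate([0.4, 0.5, 0.7, 1.1, 1.9, 2.6, 3.4, 2.8, 1.5, 0.9, 0.6, 0.5]))

ACTION_LEVEL = 2.0      # illustrative short-term action level, ug/m3
WINDOW = 3              # rolling window (hours) for a short-term average

def alerts(series, level, window):
    """Flag any rolling time-weighted average above the action level so a
    response (ventilation check, mitigation system check) can be triggered promptly."""
    out = []
    for i in range(window - 1, len(series)):
        twa = fmean(c for _, c in series[i - window + 1 : i + 1])
        if twa > level:
            out.append((series[i][0], round(twa, 2)))
    return out

print(alerts(readings, ACTION_LEVEL, WINDOW))   # hours where the 3-hour TWA exceeds 2.0
```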

8.
Vapor intrusion characterization efforts are challenging due to complexities associated with indoor background sources, preferential subsurface migration pathways, indoor and shallow subsurface concentration dynamics, and representativeness limitations associated with manual monitoring and characterization methods. For sites experiencing trichloroethylene (TCE) vapor intrusion, the potential for acute risks poses additional challenges, as the need for rapid response to acute toxicity threshold exceedances is critical in order to minimize health risks and associated liabilities. Currently accepted discrete time‐integrated vapor intrusion monitoring methods that employ passive diffusion–adsorption and canister samplers often do not result in sufficient temporal or spatial sampling resolution in dynamic settings, have a propensity to yield false negative and false positive results, and are not able to prevent receptors from acute exposure risks, as sample processing times exceed exposure durations of concern. Multiple lines of evidence have been advocated for in an attempt to reduce some of these uncertainties. However, implementation of multiple lines of evidence does not afford rapid response capabilities and typically relies on discrete time‐integrated sample collection methods prone to nonrepresentative results due to concentration dynamics. Recent technology innovations have resulted in the deployment of continuous monitoring platforms composed of multiplexed laboratory‐grade analytical components integrated with quality control features, telemetry, geographical information systems, and interpolation algorithms for automatically generating geospatial time‐stamped renderings and time‐weighted averages through a cloud‐based data management platform. Automated alerts and responses can be engaged within 1 minute of a threshold exceedance detection. Superior temporal and spatial resolution also results in optimized remediation design and mitigation system performance confirmation. While continuous monitoring has been acknowledged by the regulatory community as a viable option for providing superior results when addressing spatial and temporal dynamics, until very recently, these approaches have been considered impractical due to cost constraints and instrumentation limitations. Recent instrumentation advancements via automation and multiplexing allow for rapid and continuous assessment and response from multiple locations using a single instrument. These advancements have reduced costs to the point where they are now competitive with discrete time‐integrated methods. In order to gain more regulatory and industry support for these viable options, there is an immediate need to perform a realistic cost comparison between currently approved discrete time‐integrated methods and newly fielded continuous monitoring platforms. Regulatory support for continuous monitoring platforms will result in more effectively protecting the public, provide property owners with information sufficient to more accurately address potential liabilities, reduce unnecessary remediation costs for situations where risks are minimal, lead to more effective and surgical remediation strategies, and allow practitioners to most effectively evaluate remediation system performance. To address this need, a series of common monitoring scenarios and associated assumptions were derived and cost comparisons performed. Scenarios included variables such as number of monitoring locations, duration, costs to meet quality control requirements, and number of analyses performed within a given monitoring campaign. Results from this effort suggest that for relatively larger sites where five or more locations will be monitored (e.g., large buildings, multistructure industrial complexes, educational facilities, or shallow groundwater plumes with significant spatial footprints under residential neighborhoods), procurement of continuous monitoring services is often less expensive than implementation of discrete time‐integrated monitoring services. For instance, for a 1‐week monitoring campaign, the cost per analysis for continuous monitoring ranges from approximately 1 to 3 percent of the discrete time‐integrated method cost for the scenarios investigated. Over this same one‐week duration, for discrete time‐integrated options, the number of sample analyses equals the number of data collection points (which ranged from 5 to 30 for this effort). In contrast, the number of analyses per week for the continuous monitoring option equals 672, or four analyses per hour. This investigation also suggests that continuous automated monitoring can be cost‐effective for multiple one‐week campaigns on a quarterly or semi‐annual basis in lieu of discrete time‐integrated monitoring options. In addition to cost benefits, automated responses are embedded within the continuous monitoring service and, therefore, provide acute TCE risk‐preventative capabilities that are not possible using discrete time‐integrated passive sampling methods, as the discrete time‐integrated services include analytical efforts that require more time than the exposure duration of concern. ©2016 Wiley Periodicals, Inc.
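The per-analysis arithmetic in the abstract can be reproduced directly; in the sketch below the weekly campaign costs are assumptions chosen only to land within the reported 1 to 3 percent range, while the analysis counts (four per hour, 672 per week; one per discrete location) follow the text.

```python
# Per-analysis arithmetic for a 1-week campaign (analysis counts follow the abstract;
# the two campaign costs are assumed values, not prices from the article).
HOURS_PER_WEEK = 24 * 7
analyses_continuous = 4 * HOURS_PER_WEEK       # four analyses per hour -> 672 per week
locations = 5                                   # smallest scenario described in the abstract
analyses_discrete = locations                   # one time-integrated sample per location

cost_continuous_campaign = 20_000               # $ per week of multiplexed service (assumed)
cost_discrete_campaign = locations * 1_500      # $ per sampler incl. lab and QC (assumed)

per_analysis_cont = cost_continuous_campaign / analyses_continuous
per_analysis_disc = cost_discrete_campaign / analyses_discrete
print(f"continuous: {analyses_continuous} analyses/week, ${per_analysis_cont:,.2f} per analysis")
print(f"discrete  : {analyses_discrete} analyses/week, ${per_analysis_disc:,.2f} per analysis")
print(f"continuous per-analysis cost = {per_analysis_cont / per_analysis_disc:.1%} of discrete")
```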

9.
Making remediation and risk management decisions for widely‐distributed chemicals is a challenging aspect of contaminated site management. The objective of this study is to present an initial evaluation of the ubiquitous, ambient environmental distribution of poly‐ and perfluoroalkyl substances (PFAS) within the context of environmental decision‐making at contaminated sites. PFAS are anthropogenic contaminants of emerging concern with a wide variety of consumer and industrial sources and uses that result in multiple exposure routes for humans. The combination of widespread prevalence and low screening levels introduces considerable uncertainty and potential costs in the environmental management of PFAS. PFAS are not naturally‐occurring, but are frequently detected in environmental media independent of site‐specific (i.e., point source) contamination. Information was collected on background and ambient levels of two predominant PFAS, perfluorooctane sulfonate and perfluorooctanoate, in North America in both abiotic media (soil, sediment, surface water, and public drinking water supplies) and selected biotic media (human tissues, fish, and shellfish). The background or ambient information was compiled from multiple published sources, organized by medium and concentration ranges, and evaluated for geographical trends and, when available, also compared to health‐based screening levels. Data coverage and quality varied from wide‐ranging and well‐documented for soil, surface water, and serum data to more localized and less well‐documented for sediment and fish and shellfish tissues and some uncertainties in the data were noted. Widespread ambient soil and sediment concentrations were noted but were well below human health‐protective thresholds for direct contact exposures. Surface water, drinking water supply waters (representing a combination of groundwater and surface water), fish and shellfish tissue, and human serum levels ranged from less than to greater than available health‐based threshold values. This evaluation highlights the need for incorporating literature‐based or site‐specific background into PFAS site evaluation and decision‐making, so that source identification, risk management, and remediation goals are properly focused and to also inform general policy development for PFAS management.  相似文献   

10.
Permeable reactive barriers (PRBs) have traditionally been constructed via trenching backfilled with granular, long‐lasting materials. Over the last decade, direct push injection PRBs with fine‐grained injectable reagents have gained popularity as a more cost‐efficient and less‐invasive approach compared to trenching. A direct push injection PRB was installed in 2005 to intercept a 2,500 feet (760 meter) long carbon tetrachloride (CT) groundwater plume at a site in Kansas. The PRB was constructed by injecting EHC® in situ chemical reduction reagent slurry into a line of direct push injection points. EHC is composed of slow‐release plant‐derived organic carbon plus microscale zero‐valent iron (ZVI) particles, specifically formulated for injection applications. This project was the first full‐scale application of EHC into a flow‐through reactive zone and provided valuable information about substrate longevity and PRB performance over time. Groundwater velocity at the site is high (1.8 feet per day) and sulfate‐rich (~120 milligrams per liter), potentially affecting the rate of substrate consumption and the PRB reactive life. CT removal rates peaked 16 months after PRB installation with >99% removal observed. Two years post‐installation removal rates decreased to approximately 95% and have since stabilized at that level for the 12 years of monitoring data available after injection. Geochemical data indicate that the organic carbon component of EHC was mostly consumed after 2 years; however, reducing conditions and a high degree of chloromethane treatment were maintained for several years after total organic carbon concentrations returned to background. Redox conditions are slowly reverting and have returned close to background conditions after 12 years, indicating that the PRB may be nearing the end of its reactive life. Direct measurements of iron have not been performed, but stoichiometric demand calculations suggest that the ZVI component of EHC may, in theory, last for up to 33 years. However, the ZVI component by itself would not be expected to support the level of treatment observed after the organic carbon substrate had been depleted. A longevity of up to 5 years was originally estimated for the EHC PRB based on the maximum expected longevity of the organic carbon substrate. While the organic carbon was consumed faster than expected, the PRB has continued to support a high degree of chloromethane treatment for a significantly longer time period of over 12 years. Recycling of biomass and the contribution from a reduced iron sulfide mineral zone are discussed as possible explanations for the sustained reducing conditions and continued chloromethane treatment.  相似文献   
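A stoichiometric longevity estimate of the kind mentioned above can be sketched as an electron balance: equivalents available from the emplaced ZVI divided by the annual equivalent demand carried through the barrier by groundwater. Only the velocity (1.8 feet per day) and the approximate sulfate concentration come from the abstract; the ZVI loading, porosity, and other acceptor concentrations are assumptions, and treating all sulfate as an abiotic iron demand is a deliberately conservative simplification.

```python
# Order-of-magnitude ZVI longevity from stoichiometric electron demand.
FE_MW, FE_EQ = 55.85, 2                    # g/mol and electron equivalents per mol Fe(0)

zvi_mass_g = 5.0e5                         # ZVI per m2 of PRB face, hypothetical (500 kg/m2)
velocity_m_per_yr = 1.8 * 0.3048 * 365     # 1.8 ft/day -> ~200 m/yr (from the abstract)
porosity = 0.30                            # assumed

# Competing electron acceptors: (concentration mg/L, e- per mol, molar mass g/mol).
# Assigning all sulfate to the iron is conservative; much of it is reduced microbially.
acceptors = {
    "sulfate": (120.0, 8, 96.06),          # ~site value per the abstract
    "oxygen":  (2.0, 4, 32.00),            # assumed
    "CT":      (0.5, 8, 153.82),           # assumed, full dechlorination
}

water_flux_L = velocity_m_per_yr * porosity * 1000     # L/yr through each m2 of face
eq_demand = sum(c / 1000.0 / mw * n * water_flux_L for c, n, mw in acceptors.values())
eq_supply = zvi_mass_g / FE_MW * FE_EQ

print(f"estimated ZVI life ~ {eq_supply / eq_demand:.0f} years")
```

With these assumed inputs the estimate lands near 30 years, the same order of magnitude as the "up to 33 years" cited in the abstract, but the result is very sensitive to the assumed iron loading and acceptor fluxes.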

11.
A dual isotope technology based on compound‐specific stable isotope analysis of carbon and hydrogen (2D‐CSIA) was recently developed to help identify sources and monitor in situ degradation of the contaminant 1,4‐dioxane (1,4‐D) in groundwater. Site investigation and optimized remediation have been the focus of thousands of CSIA applications completed for volatile organic contaminants (VOCs) worldwide. CSIA for the water‐miscible 1,4‐D, however, has been technically challenging. The most widely available commercial sample preparation method for VOCs, purge and trap, cannot efficiently extract 1,4‐D from water for a reliable CSIA measurement, especially when the concentration is below 100 μg/L. Such a high reporting limit has prevented CSIA from being used for effective site investigation and remediation monitoring at most 1,4‐D contaminated sites, where 1,4‐D is often present at very low ppb levels. This article outlines the recent breakthrough in 2D‐CSIA technology for 1,4‐D in water, reported down to ~1 μg/L for carbon and ~10 μg/L to 20 μg/L for hydrogen using solid‐phase extraction based on EPA Method 522, and its benefit is highlighted through a case study at a 1,4‐D contaminated site. ©2016 Wiley Periodicals, Inc.
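Once 1,4-D isotope ratios can be measured at low concentrations, a common way CSIA data are turned into an estimate of in situ degradation is a Rayleigh-type calculation. The sketch below shows that calculation with invented delta values and an assumed carbon enrichment factor; it is not the dual-isotope interpretation from the case study.

```python
def fraction_remaining(delta_0: float, delta_t: float, epsilon: float) -> float:
    """Rayleigh model: (1000 + delta_t) / (1000 + delta_0) = f ** (epsilon / 1000),
    solved for the fraction f of the compound remaining undegraded."""
    return ((1000.0 + delta_t) / (1000.0 + delta_0)) ** (1000.0 / epsilon)

# Illustrative values only: source-zone and downgradient delta13C for 1,4-dioxane,
# and an assumed carbon enrichment factor (epsilon, permil) taken to represent lab studies.
delta_source, delta_down = -31.0, -27.5
epsilon_c = -4.0

f = fraction_remaining(delta_source, delta_down, epsilon_c)
print(f"fraction remaining ~ {f:.2f}; extent of degradation ~ {(1 - f) * 100:.0f}%")
```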

12.
In the past decade, management of historically contaminated land has largely been based on prevention of unacceptable risks to human health and the environment, to ensure a site is “fit for use.” More recently, interest has been shown in including sustainability as a decision‐making criterion. Sustainability concerns include the environmental, social, and economic consequences of risk management activities themselves, and also the opportunities for wider benefit beyond achievement of risk‐reduction goals alone. In the United Kingdom, this interest has led to the formation of a multistakeholder initiative, the UK Sustainable Remediation Forum (SuRF‐UK). This article presents a framework for assessing “sustainable remediation”; describes how it links with the relevant regulatory guidance; reviews the factors considered in sustainability; and looks at the appraisal tools that have been applied to evaluate the wider benefits and impacts of land remediation. The article also describes how the framework relates to recent international developments, including emerging European Union legislation and policy. A large part of this debate has taken place in the “grey” literature, which we review. It is proposed that a practical approach to integrating sustainability within risk‐based contaminated land management offers the possibility of a substantial step forward for the remediation industry, and a new opportunity for international consensus. © 2011 Wiley Periodicals, Inc.  相似文献   

13.
Environmental monitoring, data processing, and reporting methods are expensive, labor‐ and resource‐intensive, time‐consuming, and often inaccurate. An innovative project management platform was developed for integrating environmental monitoring sensors, telemetry, geographical information systems, models, and geostatistical algorithms for automatically generating contour maps and time‐stamped renderings of sensor attributes and multivariate analyses. More specifically, algorithms converting sensor‐derived head and solute concentration values allow for automated monitoring of mass flux and discharge to evaluate groundwater remediation system performance and contaminant discharges from aquifers to surface‐water receptors. Life‐cycle costs and carbon footprints were reduced due to the elimination of energy and labor expenditures associated with transportation, data collection, laboratory efforts, report generation, and information dissemination. A brief summary of two demonstrations of this sensor‐based water resources management application is presented. © 2011 Wiley Periodicals, Inc.  相似文献   
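The mass-flux conversion described above amounts to applying Darcy's law cell by cell across a transect and summing the result. The sketch below shows that arithmetic with hypothetical sensor values; the platform's actual algorithms and geostatistical interpolation are not reproduced.

```python
# Hypothetical transect of monitoring cells downgradient of a source.  Each cell
# reports a head gradient (from paired head sensors) and a solute concentration.
K = 8.0e-5            # hydraulic conductivity, m/s (assumed homogeneous)

# (head gradient [-], concentration [mg/L, i.e., g/m3], cell area [m2])
cells = [
    (0.004, 1.2, 15.0),
    (0.006, 3.8, 15.0),
    (0.005, 0.6, 15.0),
]

def mass_discharge(cells, K):
    """Darcy flux q = K * i; mass flux J = q * C; mass discharge = sum(J * area)."""
    total = 0.0
    for gradient, conc_g_per_m3, area_m2 in cells:
        q = K * gradient                        # m/s
        total += q * conc_g_per_m3 * area_m2    # g/s
    return total

md_g_per_day = mass_discharge(cells, K) * 86_400
print(f"mass discharge across transect ~ {md_g_per_day:.2f} g/day")
```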

14.
Vapor intrusion characterization efforts can be challenging due to complexities associated with background indoor air constituents, preferential subsurface migration pathways, and response time and representativeness limitations associated with conventional low‐frequency monitoring methods. For sites experiencing trichloroethylene (TCE) vapor intrusion, the potential for acute risks poses additional challenges, as the need for rapid response to exposure exceedances becomes critical in order to minimize health risks and associated liabilities. Continuous monitoring platforms have been deployed to monitor indoor and subsurface concentrations of key volatile constituents, atmospheric pressure, and pressure differential conditions that can result in advective transport. These systems can be composed of multiplexed laboratory‐grade analytical components integrated with telemetry and geographical information systems for automatically generating time‐stamped renderings of observations and time‐weighted averages through a cloud‐based data management platform. Integrated automatic alerting and responses can also be engaged within one minute of risk exceedance detection. The objectives at a site selected for testing included continuous monitoring of vapor concentrations and related surface and subsurface physical parameters to understand exposure risks over space and time and to evaluate potential mechanisms controlling risk dynamics, which could then be used to design a long‐term risk reduction strategy. High‐frequency data collection, processing, and automated visualization efforts have resulted in greater understanding of natural processes such as dynamic contaminant vapor intrusion risk conditions potentially influenced by localized barometric pumping induced by temperature changes. For the selected site, temporal correlation was observed between dynamic indoor TCE vapor concentration, barometric pressure, and pressure differential. This correlation was observed with a predictable daily frequency even for very slight diurnal changes in barometric pressure and associated pressure differentials measured between subslab and indoor regimes, and it suggests that advective vapor transport and intrusion can result in elevated indoor TCE concentrations well above risk levels even with low‐to‐modest pressure differentials. This indicates that vapor intrusion can occur in response to diurnal pressure dynamics in coastal regions and suggests that similar natural phenomena may control vapor intrusion dynamics in other regions exhibiting similar pressure, geochemical, hydrogeologic, and climatic conditions. While dynamic indoor TCE concentrations have been observed in this coastal environment, questions remain regarding whether this hydrogeologic and climatic setting represents a special case, and how best to determine when continuous monitoring should be required to most appropriately minimize exposure durations as early as possible. ©2017 Wiley Periodicals, Inc.
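The temporal correlation described above can be checked with a simple lagged-correlation calculation on the continuous records. The sketch below does this on synthetic diurnal data in which indoor TCE follows the subslab-to-indoor pressure differential with a two-hour lag; the signals are invented and stand in for the site's actual high-frequency measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(0, 24 * 7)                      # one week of hourly records

# Synthetic diurnal signals: subslab-to-indoor pressure differential (Pa) and indoor
# TCE (ug/m3) responding to it with a short lag; purely illustrative.
dp = 1.5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.3, hours.size)
tce = 2.0 + 1.2 * np.roll(dp, 2) + rng.normal(0, 0.4, hours.size)

def lagged_correlation(x, y, max_lag=6):
    """Pearson correlation of y against x shifted by 0..max_lag hours."""
    return {lag: float(np.corrcoef(x[: x.size - lag or None], y[lag:])[0, 1])
            for lag in range(max_lag + 1)}

corr = lagged_correlation(dp, tce)
best = max(corr, key=corr.get)
print(f"best lag = {best} h, r = {corr[best]:.2f}")   # expect a lag near 2 hours here
```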

15.
An Erratum has been published for this article in Remediation 14(4) 2004, 141. The selection of remediation options for the management of unacceptable risks at contaminated sites is hindered by insufficient information on their performance under different site conditions. Therefore, there is a need to define “operating windows” for individual remediation options to summarize their performance under a variety of site conditions. The concept of the “operating window” has been applied as both a performance optimization tool and decision support tool in a number of different industries. Remediation‐option operating windows could be used as decision support tools during the “options appraisal” stage of the Model Procedures (CLR 11), proposed by the Environment Agency (EA) for England and Wales, to enhance the identification of “feasible remediation options” for “relevant pollutant linkages.” The development of remediation‐option operating windows involves: 1) the determination of relationships between site conditions (“critical variables”) and option performance parameters (e.g., contaminant degradation or removal rates) and 2) the identification of upper‐ and lower‐limit values (“operational limits”) for these variables that define the ranges of site conditions over which option performance is likely to be sufficient (the “operating window”) and insufficient (the “operating wall”) for managing risk. Some research has used case study data to determine relationships between critical variables and subsurface natural attenuation (NA) process rates. Despite the various challenges associated with the approach, these studies suggest that available case study data can be used to develop operating windows for monitored natural attenuation (MNA) and, indeed, other remediation options. It is envisaged that the development of remediation‐option operating windows will encourage the application of more innovative remediation options as opposed to excavation and disposal to landfill and/or on‐site containment, which remain the most commonly employed options in many countries. © 2004 Wiley Periodicals, Inc.  相似文献   
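In its simplest form, applying an operating window is a screen of measured site conditions against the upper and lower operational limits of each critical variable. The sketch below shows such a screen; the variables and limits are illustrative placeholders, not values from CLR 11 or the cited case studies.

```python
# Minimal "operating window" screen for a candidate remediation option.
OPERATING_WINDOW = {                          # critical variable: (lower limit, upper limit)
    "pH": (6.0, 8.5),
    "dissolved_oxygen_mg_L": (0.0, 0.5),      # e.g., anaerobic biodegradation needs low DO
    "sulfate_mg_L": (0.0, 500.0),
    "temperature_C": (10.0, 25.0),
}

site_conditions = {"pH": 5.4, "dissolved_oxygen_mg_L": 0.3,
                   "sulfate_mg_L": 240.0, "temperature_C": 14.0}

outside = {k: v for k, v in site_conditions.items()
           if not (OPERATING_WINDOW[k][0] <= v <= OPERATING_WINDOW[k][1])}

print("within operating window" if not outside else f"outside the window for: {outside}")
```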

16.
In situ chemical oxidation (ISCO) typically delivers oxidant solutions into the subsurface for contaminant destruction. Contaminants available to the oxidants, however, are limited by the mass transfer of hydrophobic contaminants into the aqueous phase. ISCO treatments therefore often leave sites with temporarily clean groundwater which is subject to contaminant rebound when sorbed and free phase contaminants leach back into the aqueous phase. Surfactant Enhanced In situ Chemical Oxidation (S‐ISCO®) uses a combined oxidant‐surfactant solution to provide optimized contaminant delivery to the oxidants for destruction via desorption and emulsification of the contaminants by the surfactants. This article provides an overview of S‐ISCO technology, followed by an implementation case study at a coal tar contaminated site in Queens, New York. Included are data points from the site which demonstrate how S‐ISCO delivers desorbed contaminants without uncontrolled contaminant mobilization, as desorbed and emulsified contaminants are immediately available to the simultaneously injected oxidant for reaction. ©2016 Wiley Periodicals, Inc.  相似文献   

17.
Many Superfund/hazardous chemical sites include waterbodies whose sediments contain hazardous chemicals. With the need to assess, rank, and remediate contaminated sediments at such sites, as well as in other waterways, regulators seek a simple, quantitative assessment approach that feeds easily into a decision‐making scheme. Numeric, co‐occurrence‐based “sediment quality guidelines” (SQGs) have emerged with the appearance of administrative simplicity. However, the very foundation of the co‐occurrence approach, based on the total concentrations of a chemical(s) in sediment, is technically invalid; its application relies on additional technically invalid presumptions. Use of technically invalid evaluation approaches renders any assessment of the significance of sediment contamination unreliable. This article reviews the technical roots and assumptions of the co‐occurrence‐based SQGs, the fundamental flaws in the rationale behind their development and application, and their misapplication for sediment quality evaluation. It also reviews concepts and approaches for the more reliable evaluation, ranking, and cleanup assessment of contaminated sediments at Superfund sites and elsewhere. © 2005 Wiley Periodicals, Inc.

18.
The increasing need for biomass for energy and feedstocks, along with the need to divert organic, methane‐generating wastes from landfills, may provide the economic leverage necessary to return marginal land to functional and economic use, and this is strongly supported by policy at the European Union (EU) level. The use of land to produce biomass for energy production or feedstocks for manufacturing processes (such as plastics and biofuels) has, however, become increasingly contentious, with a number of environmental, economic, and social concerns raised. The REJUVENATE project has developed a decision support framework to help land managers and other decision makers identify potential concerns related to sustainability and what types of biomass reuse for marginal land might be possible, given their particular circumstances. The decision‐making framework takes a holistic approach to decision making rather than viewing biomass production simply as an adjunct of a planned phytoremediation project. The framework is serviceable in Germany, Sweden, and the United Kingdom. These countries have substantive differences in their land and biomass reuse circumstances. However, all can make use of the set of common principles of crop, site, value, and project risk management set out by REJUVENATE. This implies that the framework should have wider applicability across the EU. This article introduces the decision support framework. © 2011 Wiley Periodicals, Inc.

19.
A fish‐consumption advisory is currently in effect in a seven‐mile stretch of the Grasse River in Massena, New York, due to elevated levels of PCBs in fish tissue. One remedial approach that is being evaluated to reduce the PCB levels in fish from the river is in situ capping. An in‐river pilot study was conducted in the summer of 2001 to assess the feasibility of capping PCB‐containing sediments of the river. The study consisted of the construction of a subaqueous cap in a seven‐acre portion of the river using various combinations of capping materials and placement techniques. Optimal results were achieved with a 1:1 sand/topsoil mix released from a clamshell bucket either just above or several feet below the water surface. A longer‐term monitoring program of the capped area commenced in 2002. Results of this monitoring indicated that: 1) the in‐place cap has remained intact since installation; 2) there was no evidence of PCB migration into or through the cap; 3) groundwater advection through the cap is not an important PCB transport mechanism; and 4) macroinvertebrate colonization of the in‐place cap is continuing. Additional follow‐up monitoring in the spring of 2003 indicated that a significant portion of the cap and, in some cases, the underlying sediments had been disturbed in the period following the conclusion of the 2002 monitoring work. An analysis of river conditions in the spring of 2003 indicated that a significant ice jam had formed in the river directly over the capping pilot study area, and that the resulting increase in river velocities and turbulence in the area resulted in the movement of both cap materials and the underlying sediments. The pilot cap was not designed to address ice jam–related forces on the cap, as the occurrence of ice jams in this section of the river had not been known prior to the observations conducted in the spring of 2003. These findings will preclude implementation of the longer‐term monitoring program that had been envisioned for the pilot study. The data collected immediately after cap construction in 2001 and through the first year of monitoring in 2002 serve as the basis for the conclusions presented in this article. It should be recognized that, based on the observations made in the spring of 2003, some of these conclusions are no longer valid for the pilot study area. The occurrence of ice jams in the lower Grasse River and their effects on sediments and PCBs within the system are currently under investigation. © 2003 Wiley Periodicals, Inc.

20.
This article discusses a process for finding insights that will allow federal agencies and environmental professionals to more effectively manage contaminated sites. The process is built around what Etzioni (1968) called mixed‐scanning, that is, perpetually doing both comprehensive and detailed analyses and periodically re‐scanning for new circumstances that change the decision‐making environment. The article offers a checklist of 127 items, which is one part of the multiple‐stage scanning process. The checklist includes questions about technology; public, worker, and ecological health; economic cost and benefits; social impacts; and legal issues. While developed for a DOE high‐level radioactive waste application, the decision‐making framework and specific questions can be used for other large‐scale remediation and management projects. © 2002 Wiley Periodicals, Inc.  相似文献   

