2,995 results found; search took 15 ms. Results 141–150 are shown below.
141.
Why has the United States not adopted global warming policies? Because the inner circle of the corporate elite has opposed them, despite some corporate support for cap-and-trade and other measures. To demonstrate this, the pro and anti positions taken by the think tanks that have led the policy debate in the post-Kyoto period are analyzed. The corporate and upper-class social ties of these think tanks' directors reveal a corporate elite split between an inner circle opposing such policies and a 'public interest sector' of corporate law firms and media corporations, along with top executives from higher education and other nonprofits, that supports policies addressing global warming. For major global warming policies to be adopted, the corporate inner circle will need to become supportive and forge a class-wide corporate consensus on the need to address global warming.
142.
One approach to uncertainty assessment in flood inundation modeling is to use an ensemble of models with different conceptualizations, parameters, and initial and boundary conditions that capture the factors contributing to uncertainty. However, the high computational expense of many hydraulic models renders their use impractical for ensemble forecasting. To address this challenge, we developed a rating curve library method for flood inundation forecasting. This method involves pre-running a hydraulic model with multiple inflows and extracting rating curves, which prescribe a relation between streamflow and stage at various cross sections along a river reach. For a given streamflow, the flood stage at each cross section is interpolated from the pre-computed rating curve library to delineate flood inundation depths and extents at much lower computational cost. In this article, we describe the workflow for our rating curve library method and the Rating Curve based Automatic Flood Forecasting (RCAFF) software that automates it. We also investigate the feasibility of using this method to transform ensemble streamflow forecasts into local, probabilistic flood inundation delineations for Onion and Shoal Creeks in Austin, Texas. While our results show that water surface elevations from RCAFF are comparable to those from the hydraulic models, the ensemble streamflow forecasts used as input to RCAFF are the largest source of uncertainty in predicting observed floods.
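The core lookup this abstract describes can be sketched in a few lines. The library structure, cross-section names, and flow/stage values below are illustrative assumptions, not RCAFF's actual code or data:

```python
import numpy as np

# Hypothetical pre-computed rating curve library: for each cross section,
# paired arrays of steady inflows (m^3/s) and modeled stages (m) produced
# by pre-running a hydraulic model. Names and values are illustrative only.
rating_library = {
    "XS-1": {"flow": np.array([10.0, 50.0, 100.0, 200.0]),
             "stage": np.array([1.2, 2.1, 2.9, 3.8])},
    "XS-2": {"flow": np.array([10.0, 50.0, 100.0, 200.0]),
             "stage": np.array([0.8, 1.6, 2.3, 3.1])},
}

def lookup_stage(cross_section, streamflow):
    """Interpolate flood stage from the pre-computed rating curve."""
    rc = rating_library[cross_section]
    return float(np.interp(streamflow, rc["flow"], rc["stage"]))

# An ensemble of streamflow forecasts maps to a distribution of stages
# at each cross section at negligible computational cost.
ensemble_flows = [60.0, 85.0, 130.0]
stages = [lookup_stage("XS-1", q) for q in ensemble_flows]
```

Because each lookup is a table interpolation rather than a hydraulic simulation, the per-member cost of an ensemble forecast is trivial, which is what makes the probabilistic delineation practical.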
143.
The National Flood Interoperability Experiment (NFIE) initiated a transformation in national hydrologic forecasting by providing streamflow forecasts at high spatial resolution over the whole country. Large-scale, high-resolution hydrologic modeling of this type requires flexible and scalable tools to handle the resulting computational loads. High-throughput computing (HTC) and cloud computing provide ideal resources for large-scale modeling because they are cost-effective and highly scalable, but using these tools requires specialized training that is not always common among hydrologists and engineers. To facilitate the use of HTC resources, the National Science Foundation (NSF)-funded project CI-WATER has developed a set of Python tools that automate the tasks of provisioning and configuring an HTC environment in the cloud and creating and submitting jobs to that environment. These tools are packaged into two Python libraries: CondorPy and TethysCluster. Together, these libraries provide a comprehensive toolkit for accessing HTC to support hydrologic modeling. Two use cases demonstrate the toolkit, including a web app that was used to support the NFIE national-scale modeling.
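The submission pattern these libraries automate, many independent model runs farmed out and gathered back, can be illustrated with a local stand-in from the standard library. This sketch does not use the CondorPy or TethysCluster APIs, and `run_model` is a hypothetical placeholder:

```python
from concurrent.futures import ThreadPoolExecutor

def run_model(reach_id):
    """Placeholder for one independent hydrologic model run. In an HTC
    setting, each call would become its own cluster job; this function
    and its return value are illustrative only."""
    return reach_id, float(reach_id) * 2.0

reaches = [101, 102, 103, 104]
# High-throughput computing exploits exactly this independence: each
# run is submitted as a separate job and the results are gathered.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_model, reaches))
```

The design point is that the runs share no state, so the same map-then-gather shape scales from a local pool to thousands of cloud jobs without changing the model code.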
144.
The Farm Animal Welfare Council's concept of a Good Life describes a quality of life for an animal over and above that of a mere life worth living. The concept needs explanation and clarification to be meaningful, particularly for consumers who purchase farm animal produce. It could allow assurance schemes to apply the label by combining two assessments: the potential of each method of production, conceptualised in ways expected to enhance consumer engagement, such as 'naturalness' and 'freedom'; and the concept of a life worth living as a safeguard threshold, based on each animal's overall affective states, below which no animal's actual welfare should fall. This may provide a framework for developing the Good Life concept within scientific and sociological fields, allowing reliable and influential use by assessors, consumers and retailers.
145.
By discharging excess stormwater at rates that more frequently exceed the critical flow for stream erosion, conventional detention basins often contribute to increased channel instability in urban and suburban systems, which can be detrimental to aquatic habitat and water quality as well as to adjacent property and infrastructure. However, these ubiquitous assets, valued at approximately $600,000 per km² in a representative suburban watershed, are ideal candidates for reversing such cycles of channel degradation, because improving their functionality would not necessarily require property acquisition or heavy construction. The objective of this research was to develop a simple, cost-effective device that could be installed in detention basin outlets to reduce the erosive power of relatively frequent storm events (less than roughly the two-year recurrence interval) while providing a passive bypass to maintain flood control performance during infrequent storms (such as the 100-year event). Results from a pilot installation show that the Detain H2O device reduced the cumulative sediment transport capacity of the pre-retrofit condition by more than 40% and contributed to reduced flashiness and prolonged baseflows in receiving streams. When scaling the strategy across a watershed, these results suggest that potential gains in water quality and stream channel stability could be achieved at costs that are orders of magnitude less than comparable benefits from newly constructed stormwater control measures.
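The pre- versus post-retrofit comparison can be illustrated with a simple excess-flow power law as a proxy for cumulative sediment transport capacity. The power-law form, the critical flow, and the hydrographs below are illustrative assumptions, not the study's formulation or data:

```python
def cumulative_transport_capacity(flows_m3s, q_critical, dt_s=3600.0, exponent=1.5):
    """Proxy for cumulative sediment transport capacity: excess discharge
    above the critical (erosion-threshold) flow, raised to a power and
    integrated over the hydrograph. The power-law form is a common
    simplification, not the study's exact formulation."""
    return sum(max(q - q_critical, 0.0) ** exponent * dt_s for q in flows_m3s)

# Illustrative hourly outflow hydrographs (m^3/s) with a critical flow of 1.0.
# The retrofit releases the same volume with lower, longer peaks.
pre_retrofit  = [0.5, 2.0, 3.5, 2.5, 1.2, 0.8]
post_retrofit = [0.5, 1.2, 1.8, 1.6, 1.4, 1.1]

pre  = cumulative_transport_capacity(pre_retrofit, q_critical=1.0)
post = cumulative_transport_capacity(post_retrofit, q_critical=1.0)
reduction = 1.0 - post / pre
```

Because transport capacity grows nonlinearly with excess flow, even a modest reduction in peak discharge can cut the cumulative erosive work by a large fraction, which is the mechanism behind the reported greater-than-40% reduction.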
146.
The data mining/groundwater modeling methodology developed in McDade et al. (2013) was applied to determine whether matrix diffusion is a plausible explanation for the lower-concentration but persistent chlorinated solvent plumes in the groundwater-bearing units at three different pump-and-treat systems. Capture-zone maps were evaluated, and eight wells were identified that did not draw water from any of the historical source areas but captured water from the sides of the plume. Two groundwater models were applied to study the persistence of the plumes in the absence of contributions from the historical source zones. In the wells modeled, the observed mass discharge generally decreased by about one order of magnitude or less over 4 to 10 years of pumping, during which 1.8 to 17 pore volumes were extracted. In five of the eight wells, the matrix diffusion model fit the data much better than the advection-dispersion-retardation model, indicating that matrix diffusion better explains the persistent plume. In the three other wells, confounding factors made it difficult to confirm that matrix diffusion processes were active: a capture zone changing over time (caused by changes in pumping rates in adjacent extraction wells), potential interference from a high-concentration unremediated source zone, and the limited number of pore volumes removed. Overall, the results from the five wells indicate that mass discharge rates from the pumping wells will continue to show a characteristic "long tail" of mass removal from zones affected by active matrix diffusion processes. Future site management activities should include matrix diffusion processes in the conceptual site models for these three sites. © 2013 Wiley Periodicals, Inc.
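The contrast between the two decline behaviors can be sketched with simplified functional forms: roughly exponential decline under advection-dominated flushing versus a slow tail, scaling roughly as t^(-1/2), under back-diffusion from low-permeability zones. Both functions and all values are illustrative assumptions, not the fitted models from the study:

```python
import math

def flushing_decline(c0, pore_volumes, k=1.0):
    """Idealized advection-dispersion-retardation behavior: concentration
    declines roughly exponentially with pore volumes flushed."""
    return c0 * math.exp(-k * pore_volumes)

def matrix_diffusion_tail(c0, time_years, t_ref=1.0):
    """Idealized matrix-diffusion behavior: back-diffusion from low-
    permeability zones produces a slow ~t^(-1/2) tail. Simplified
    functional form for illustration, not the study's fitted model."""
    return c0 * math.sqrt(t_ref / max(time_years, t_ref))

# After 10 years, the diffusion-limited tail retains far more mass
# discharge than exponential flushing would predict.
c_flush = flushing_decline(100.0, pore_volumes=10.0)
c_tail = matrix_diffusion_tail(100.0, time_years=10.0)
```

This divergence is why the abstract's finding matters for site management: a plume governed by matrix diffusion declines by roughly one order of magnitude and then persists, rather than flushing out on the timescale an advection-only model would predict.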
147.
Information regarding air emissions from shale gas extraction and production is critically important given that production is occurring in highly urbanized areas across the United States. The objectives of this exploratory study were to collect ambient air samples in residential areas within 61 m (200 feet) of shale gas extraction/production and to determine whether a "fingerprint" of chemicals can be associated with shale gas activity. Statistical analyses correlating fingerprint chemicals with methane, equipment, and processes of extraction/production were performed. Ambient air sampling in residential areas of shale gas extraction and production was conducted in six counties in the Dallas/Fort Worth (DFW) Metroplex from 2008 to 2010. The 39 locations tested were identified by clients who requested monitoring. Seven sites were sampled on 2 days (typically months later, in another season), and two sites were sampled on 3 days, resulting in 50 sets of monitoring data. Twenty-four-hour passive samples were collected using Summa canisters, and gas chromatography/mass spectrometry analysis was used to identify the organic compounds present. Methane was present at concentrations above laboratory detection limits in 49 of the 50 sampling data sets. Most of the areas investigated had atmospheric methane concentrations considerably higher than reported urban background concentrations (1.8–2.0 ppmv). Other chemical constituents were found to be correlated with the presence of methane. A principal components analysis (PCA) identified multivariate patterns of concentrations that potentially constitute signatures of emissions from different phases of operation at natural gas sites. The first factor identified through the PCA proved most informative: extreme negative values were strongly and statistically associated with the presence of compressors at sample sites. The seven chemicals strongly associated with this factor (o-xylene, ethylbenzene, 1,2,4-trimethylbenzene, m- and p-xylene, 1,3,5-trimethylbenzene, toluene, and benzene) thus constitute a potential fingerprint of emissions associated with compression.

Implications: Information regarding air emissions from shale gas development and production is critically important given that production is now occurring in highly urbanized areas across the United States. Methane, the primary shale gas constituent, contributes substantially to climate change; other natural gas constituents are known to have adverse health effects. This study goes beyond previous Barnett Shale field studies by encompassing a wider variety of production equipment (wells, tanks, compressors, and separators) and a wider geographical region. The principal components analysis, unique to this study, provides valuable information regarding the ability to anticipate associated shale gas chemical constituents.
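The PCA step can be sketched with a singular value decomposition of the centered data matrix. The concentration matrix below is a small, purely illustrative stand-in for the study's monitoring data:

```python
import numpy as np

# Hypothetical matrix of chemical concentrations (rows = sampling sites,
# columns = compounds such as benzene, toluene, xylenes); values are
# illustrative only, not measurements from the study.
X = np.array([
    [1.0, 2.0, 0.5, 0.2],
    [1.2, 2.4, 0.6, 0.3],
    [8.0, 9.5, 4.0, 3.5],   # e.g., a site near a compressor, elevated BTEX
    [0.9, 1.8, 0.4, 0.2],
])

# Principal components analysis via SVD of the centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U * s                    # site scores on each component
loadings = Vt                     # compound loadings defining each "fingerprint"
explained = s**2 / np.sum(s**2)   # fraction of variance per component
```

Sites whose scores on the first component are extreme share a common multivariate pattern of compounds; the loadings of that component are what the abstract calls the fingerprint associated with compression.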

148.
The U.S. Environmental Protection Agency (EPA) initiated the national PM2.5 Chemical Speciation Monitoring Network (CSN) in 2000 to support evaluation of long-term trends and to better quantify the impact of sources on particulate matter (PM) concentrations in the size range below 2.5 μm aerodynamic diameter (PM2.5; fine particles). The network peaked at more than 260 sites in 2005. In response to the 1999 Regional Haze Rule and the need to better understand the regional transport of PM, EPA also augmented the long-existing Interagency Monitoring of Protected Visual Environments (IMPROVE) visibility monitoring network in 2000, adding nearly 100 additional IMPROVE sites in rural Class 1 Areas across the country. Both networks measure the major chemical components of PM2.5 using historically accepted filter-based methods. Components measured by both networks include major anions, carbonaceous material, and a series of trace elements. CSN also measures ammonium and other cations directly, whereas IMPROVE estimates ammonium assuming complete neutralization of the measured sulfate and nitrate. IMPROVE also measures chloride and nitrite. In general, the field and laboratory approaches used in the two networks are similar; however, there are numerous, often subtle differences in sampling and chemical analysis methods, shipping, and quality control practices. These could potentially affect merging the two data sets when used to understand better the impact of sources on PM concentrations and the regional nature and long-range transport of PM2.5. This paper describes, for the first time in the peer-reviewed literature, these networks as they have existed since 2000, outlines differences in field and laboratory approaches, provides a summary of the analytical parameters that address data uncertainty, and summarizes major network changes since the inception of CSN.
Implications: Two long-term chemical speciation particle monitoring networks have operated simultaneously in the United States since 2001, when the EPA began regular operations of its PM2.5 Chemical Speciation Monitoring Network (IMPROVE began in 1988). These networks use similar field sampling and analytical methods, but there are numerous, often subtle differences in equipment and methodologies that can affect the results. This paper describes these networks since 2000 (the inception of CSN) and their differences, and summarizes the analytical parameters that address data uncertainty, providing researchers and policymakers with background information they may need (e.g., for the 2018 PM2.5 designation and State Implementation Plan process; McCarthy, 2013) to assess results from each network and decide how these data sets can be mutually employed for enhanced analyses. Changes in CSN and IMPROVE that have occurred over the years are also described.
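The ammonium estimate attributed to IMPROVE, assuming complete neutralization of measured sulfate and nitrate, follows directly from ammonium sulfate and ammonium nitrate stoichiometry; the concentrations below are illustrative:

```python
# Molar masses (g/mol) of the relevant ions.
M_NH4, M_SO4, M_NO3 = 18.04, 96.06, 62.00

def estimated_ammonium(sulfate, nitrate):
    """NH4+ mass implied by fully neutralized sulfate and nitrate:
    (NH4)2SO4 carries two ammonium per sulfate, NH4NO3 one per nitrate.
    The resulting mass factors are ~0.375 and ~0.29."""
    return (2 * M_NH4 / M_SO4) * sulfate + (M_NH4 / M_NO3) * nitrate

# Illustrative measured concentrations in ug/m^3.
nh4 = estimated_ammonium(sulfate=3.0, nitrate=1.5)
```

CSN, by contrast, measures ammonium directly, so comparing the two data sets at collocated sites is one way to check how well the full-neutralization assumption holds.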
149.
Two industrial sites were investigated based on years of available hydrogeologic information and monitoring data for soil and groundwater. The collected data were forensically evaluated using age-dating and fingerprinting methods. The sites had previously housed a gas station, a laundry/dry-cleaning service, and a car wash with petroleum underground storage tanks (USTs), and as a result were exposed to a number of toxic contaminants at relatively high concentrations. Source control was necessary for successful remediation and the ultimate removal of the remaining compounds from these industrial sites. Although contaminated soil around the source was excavated during the remedial action and the high concentrations of contaminants were reduced, typical groundwater contaminants, such as total petroleum hydrocarbons as gasoline (TPH-G); benzene, toluene, ethylbenzene, and xylenes (BTEX); and oxygenates including methyl tert-butyl ether (MTBE), diisopropyl ether (DIPE), ethyl tert-butyl ether (ETBE), tert-amyl methyl ether (TAME), and tert-butyl alcohol (TBA), were persistently found around the source points. The shape and strength of the contaminant plumes changed over all monitoring periods. Thus, additional source control appears necessary for complete removal of the source contamination, which must be verified with regular groundwater and soil monitoring. For the study sites, monitored natural attenuation was relatively feasible as a long-term plan; however, it did not offer a complete remediation solution, because residual toxic compounds might have affected the surrounding residential areas at concentrations above their health limits. Therefore, as a remediation strategy, a combination of clean-up technology and monitored natural attenuation is recommended over either approach used separately.
150.
A former bulk fuel terminal in North Carolina is a groundwater phytoremediation demonstration site where 3,250 hybrid poplars, willows, and pine trees were planted from 2006 to 2008 over approximately 579,000 L of residual gasoline, diesel, and jet fuel. Since 2011, the groundwater altitude has been lower in the planted area than outside it. Soil-gas analyses showed a 95 percent mass loss for total petroleum hydrocarbons (TPH) and a 99 percent mass loss for benzene, toluene, ethylbenzene, and xylenes (BTEX). BTEX and methyl tert-butyl ether concentrations have decreased in groundwater. Interpolations of free-phase fuel product gauging data show reduced thicknesses across the site and pooling of fuel product where poplar biomass is greatest. Isolated clusters of tree mortality have persisted in areas with high TPH and BTEX mass. Toxicity assays showed impaired water use for willows and poplars exposed to the site's fuel product, but Populus survival was higher than that of the willows or pines on-site, even in a noncontaminated control area. All four Populus clones survived well at the site. © 2014 Wiley Periodicals, Inc.
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)