851.
We examined the impact of variation in habitat quality on migrating Rufous Hummingbirds (Selasphorus rufus) in the California Sierra Nevada. Like other migratory species, these birds depend on "stopover" habitats en route for feeding and replenishing depleted energy stores. During seven years of study, the quality of the stopover habitat (assessed in terms of the density of nectar food resources) varied widely due to natural variation in flowering. In years when stopover habitat quality was poor, incoming body masses were low and stopover durations were long. Population densities of migrant hummingbirds at the study site were coupled to habitat quality both within and among years. These observations demonstrate important effects of stopover habitat variation on the physiological, behavioral, and population ecology of migrating hummingbirds. High-quality stopover habitats are critical links between breeding and wintering areas for many species, and their preservation should be considered an essential component of strategies aiming to conserve migratory bird populations.
852.
Previous studies have reported a recent decline in breeding populations of migratory songbirds in eastern and central North America. Several explanations have been suggested: deforestation on the wintering grounds in the tropics, and habitat loss, increased predation pressure, and increased cowbird parasitism on the breeding range. We used these factors to assign 47 species of insectivorous passerines to groups with contrasting vulnerability, and then used the North American Breeding Bird Survey to analyze population trends in these groups on a large continental scale. Variables indexing susceptibility to predation on the breeding ground were most strongly correlated with population trends from 1968 to 1987. During the period from 1978 to 1987, migratory status was also significantly associated with population trends: long-distance migrants to the neotropics exhibited a small, nonsignificant decreasing trend, whereas residents and short-distance migrants increased strongly. During the same time period, the group of species with low nest location, open nests, and high cowbird parasitism declined significantly. Although it is difficult to separate the effects of multiple factors, our analyses suggest that predation on the breeding ground in North America has played a larger role in the decline of migratory songbirds than deforestation on the wintering grounds in the tropics.
853.
854.
Filtration of Bacillus subtilis spores and the F-RNA phage MS2 on a field scale in a coarse alluvial gravel aquifer was evaluated from the authors' previously published data. An advection-dispersion model coupled with first-order attachment kinetics was used in this study to interpret microbial concentration vs. time breakthrough curves (BTCs) at sampling wells. Based on attachment rates (k_att) determined by applying the model to the breakthrough data, filter factors (f) were calculated and compared with f values estimated from the slopes of log(c_max/c_0) vs. distance plots. These two independent approaches resulted in nearly identical filter factors, suggesting that both approaches are useful in determining reductions in microbial concentrations over transport distance. Applying the graphic approach to analyse spatial data, we also estimated f values for different aquifers using information provided by other published field studies. The results show that values of f, in units of log(c_max/c_0) m^-1, are consistently on the order of 10^-2 for clean coarse gravel aquifers, 10^-3 for contaminated coarse gravel aquifers, and generally 10^-1 for sandy fine gravel aquifers and river and coastal sand aquifers. For each aquifer category, the f values for bacteriophages and bacteria are of the same order of magnitude. The f values estimated in this study indicate that every one-log reduction in microbial concentration in groundwater requires a few tens of meters of travel in clean coarse gravel aquifers, but a few hundred meters in contaminated coarse gravel aquifers. In contrast, a one-log reduction generally requires only a few meters of travel in sandy fine gravel aquifers and sand aquifers.
Considering that the highest concentration in human effluent is on the order of 10^4 pfu/l for enteroviruses and 10^6 cfu/100 ml for faecal coliform bacteria, a 7-log reduction in microbial concentration would comply with the drinking water standards for downgradient wells under natural gradient conditions. Based on the results of this study, a 7-log reduction would require 125-280 m of travel in clean coarse gravel aquifers, 1.7-3.9 km in contaminated coarse gravel aquifers, 33-61 m in clean sandy fine gravel aquifers, 33-129 m in contaminated sandy fine gravel aquifers, and 37-44 m in contaminated river and coastal sand aquifers. These recommended setback distances are for a worst-case scenario, assuming direct discharge of raw effluent into the saturated zone of an aquifer. Filtration theory was applied to calculate collision efficiency (alpha) from model-derived attachment rates (k_att), and the results are compared with those reported in the literature. The calculated alpha values vary by two orders of magnitude, depending on whether collision efficiency is estimated from the effective particle size (d10) or the mean particle size (d50). Collision efficiency values for MS2 are similar to those previously reported in the literature (e.g., DeBorde, D.C., Woessner, W.W., Kiley, Q.T., Ball, P., 1999. Rapid transport of viruses in a floodplain aquifer. Water Res. 33 (10), 2229-2238). However, the collision efficiency values calculated for Bacillus subtilis spores were unrealistic, suggesting that filtration theory is not appropriate for theoretically estimating the filtration capacity of poorly sorted coarse gravel aquifer media. This is not surprising, as filtration theory was developed for uniform sand filters and does not consider particle size distribution. Thus, we do not recommend the use of filtration theory to estimate the filter factor or setback distances. Either of the methods applied in this work (BTC or concentration vs. distance analyses), both of which take into account aquifer heterogeneities and site-specific conditions, appears to be most useful in determining filter factors and setback distances.
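The relationship above between the filter factor f and setback distance is simple arithmetic: distance = (target log reduction) / f. A minimal sketch, using the round order-of-magnitude f values quoted in the abstract (which reproduce the right order of magnitude, not the study's site-specific ranges):

```python
# Sketch (not from the paper): travel distance needed for a target
# log10 reduction, given a filter factor f in log10(c_max/c_0) per metre.

def setback_distance(log_reduction: float, f: float) -> float:
    """Travel distance (m) to achieve `log_reduction` orders of magnitude
    of concentration reduction at filter factor f (log10 per metre)."""
    return log_reduction / f

# Order-of-magnitude filter factors quoted in the abstract (log10 m^-1):
f_values = {
    "clean coarse gravel": 1e-2,
    "contaminated coarse gravel": 1e-3,
    "sandy fine gravel / sand": 1e-1,
}

for aquifer, f in f_values.items():
    d = setback_distance(7.0, f)  # 7-log reduction target from the abstract
    print(f"{aquifer}: ~{d:.0f} m")
```

Note the site-specific ranges reported in the study (e.g., 125-280 m for clean coarse gravel) differ from these round-number estimates by a small factor, because the actual f values are not exactly powers of ten.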
855.
Preliminary analysis based on an aggregate model of global carbon emissions suggests that constraining emissions to the levels that would be imposed by compliance with the results of the Kyoto negotiations can increase the discounted cost of ultimately limiting atmospheric concentrations. Kyoto targets can be either too restrictive or too permissive depending upon the (currently unknown) trajectory of carbon emissions over the near- to medium-term and the (as yet unspecified) concentration target that frames long-term policy. The discounted cost of meeting low concentration targets like 450 ppmv is diminished by allowing large sinks and/or by imposing more restrictive near-term emissions benchmarks (even if only Annex B countries are bound by the Kyoto accord). Conversely, the cost of achieving high concentration targets like 650 ppmv is diminished by disallowing sinks and/or by imposing less restrictive emissions benchmarks. Intermediate concentration targets like 550 ppmv look like high concentration targets (favoring no sinks and expanded near-term emissions) along low emissions paths, but they look like low concentration targets (favoring the opposite) along high emissions paths. Emissions trajectories that lie above the median, but not excessively so, represent cases for which adjustments in the Kyoto emissions benchmarks and/or negotiated allowances for sinks have the smallest effect on the cost of mitigation.
856.
Five different assays, Gibbs, Prussian Blue, Folin-Ciocalteau, fluorescence quenching of added phenol, and precipitation of phenolics with bovine serum albumin (BSA), were investigated for their suitability in measuring the phenolic content of freshwaters. Phenol and a hydrolysable tannic acid were used as standards for monophenolics and polyphenolics, respectively. The individual and simultaneous application of both standards in doubly distilled water and filtered freshwater samples showed no matrix interference for the Gibbs, Prussian Blue and Folin-Ciocalteau assays. The quenching of phenol fluorescence and incomplete precipitation of added tannic acid in the freshwater samples were thought to originate from complexation. The Gibbs assay was specific for monophenolics (monohydroxybenzenes), with a Criterion of Detection (CoD) of 0.027 mg l^-1. Evaluating the assay with twenty-two monophenolics of lignin origin showed that, apart from phenol itself, the phenolic acids vanillic, isovanillic, ferulic and syringic had a linear response between 0 and 10 microM. The other monophenolics were not responsive in the Gibbs assay. The oxidation-based Prussian Blue and Folin-Ciocalteau assays had CoDs of 0.169 and 0.025 mg l^-1, respectively. The ratio of the responses of the two assays for each sample was taken as an indication of the degree of polymerisation of the phenolic content. The Folin-Ciocalteau assay was used directly on the samples, on samples spiked with tannic acid at 2 and 4 mg l^-1, and after precipitation of phenolics with BSA. The difference in tannic acid equivalents before and after treatment assayed the amount of protein-precipitated phenolics. The results of all assays allowed differentiation between monophenolics (Gibbs), polyphenolics (Prussian Blue), total phenolics (Folin-Ciocalteau), complexation of added phenol, and protein-precipitated phenolics. The reaction mechanisms underlying the assays were matched onto those occurring during humification. The assays were applied to six filtered freshwater samples and to two humic and two fulvic acids. The results showed a different pattern for each site and illustrated the varying reactivity of the 'phenolic content' of freshwater.
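The two derived quantities described above, the assay-response ratio used as a polymerisation index and the protein-precipitated fraction from before/after BSA readings, reduce to simple arithmetic. A hedged sketch; the function names and sample readings are hypothetical, not from the study:

```python
# Sketch (assumed conventions, not the paper's protocol): derived
# quantities from assay readings expressed in tannic acid equivalents.

def polymerisation_index(prussian_blue: float, folin: float) -> float:
    """Ratio of the two oxidation-based assay responses, taken above as
    an indicator of the degree of polymerisation of the phenolics."""
    return prussian_blue / folin

def protein_precipitated(folin_before: float, folin_after: float) -> float:
    """Phenolics removed by BSA precipitation: the difference in tannic
    acid equivalents before vs. after treatment."""
    return folin_before - folin_after

# Hypothetical readings (mg l^-1 tannic acid equivalents):
print(polymerisation_index(2.4, 3.0))   # ≈ 0.8
print(protein_precipitated(3.0, 1.2))   # ≈ 1.8
```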
857.
The Salinas River watershed along the central coast of California, U.S.A., supports rapidly growing urban areas and intensive agricultural operations. The river drains to an estuarine National Wildlife Refuge and a National Marine Sanctuary. The occurrence, spatial patterns, sources and causes of aquatic toxicity in the watershed were investigated by sampling four sites in the main river and four sites in representative tributaries during 15 surveys between September 1998 and January 2000. In 96 hr toxicity tests, significant Ceriodaphnia dubia mortality was observed in 11% of the main river samples, 87% of the samples from a channel draining an urban/agricultural watershed, 13% of the samples from channels conveying agricultural tile drain runoff, and 100% of the samples from a channel conveying agricultural surface furrow runoff. In six of nine toxicity identification evaluations (TIEs), the organophosphate pesticides diazinon and/or chlorpyrifos were implicated as causes of observed toxicity, and these compounds were the most probable causes of toxicity in two of the other three TIEs. Every sample collected in the watershed that exhibited greater than 50% C. dubia mortality (n = 31) had sufficient diazinon and/or chlorpyrifos concentrations to account for the observed effects. Results are interpreted with respect to potential effects on other ecologically important species.
858.
Vapor intrusion characterization efforts are challenging due to complexities associated with indoor background sources, preferential subsurface migration pathways, indoor and shallow subsurface concentration dynamics, and representativeness limitations associated with manual monitoring and characterization methods. For sites experiencing trichloroethylene (TCE) vapor intrusion, the potential for acute risks poses additional challenges, as the need for rapid response to acute toxicity threshold exceedances is critical in order to minimize health risks and associated liabilities. Currently accepted discrete time-integrated vapor intrusion monitoring methods that employ passive diffusion-adsorption and canister samplers often do not provide sufficient temporal or spatial sampling resolution in dynamic settings, have a propensity to yield false negative and false positive results, and cannot protect receptors from acute exposure risks, as sample processing times exceed exposure durations of concern. Multiple lines of evidence have been advocated in an attempt to reduce some of these uncertainties. However, implementing multiple lines of evidence does not afford rapid response capabilities and typically relies on discrete time-integrated sample collection methods prone to nonrepresentative results due to concentration dynamics. Recent technology innovations have resulted in the deployment of continuous monitoring platforms composed of multiplexed laboratory-grade analytical components integrated with quality control features, telemetry, geographical information systems, and interpolation algorithms for automatically generating geospatial time-stamped renderings and time-weighted averages through a cloud-based data management platform. Automated alerts and responses can be engaged within 1 minute of a threshold exceedance detection.
Superior temporal and spatial resolution also results in optimized remediation design and mitigation system performance confirmation. While continuous monitoring has been acknowledged by the regulatory community as a viable option for providing superior results when addressing spatial and temporal dynamics, until very recently these approaches have been considered impractical due to cost constraints and instrumentation limitations. Recent instrumentation advancements via automation and multiplexing allow for rapid and continuous assessment and response from multiple locations using a single instrument. These advancements have reduced costs to the point where they are now competitive with discrete time-integrated methods. In order to gain more regulatory and industry support for these viable options, there is an immediate need to perform a realistic cost comparison between currently approved discrete time-integrated methods and newly fielded continuous monitoring platforms. Regulatory support for continuous monitoring platforms will more effectively protect the public, provide property owners with information sufficient to more accurately address potential liabilities, reduce unnecessary remediation costs for situations where risks are minimal, lead to more effective and surgical remediation strategies, and allow practitioners to most effectively evaluate remediation system performance. To address this need, a series of common monitoring scenarios and associated assumptions were derived and cost comparisons performed. Scenarios included variables such as number of monitoring locations, duration, costs to meet quality control requirements, and number of analyses performed within a given monitoring campaign.
Results from this effort suggest that for relatively larger sites where five or more locations will be monitored (e.g., large buildings, multistructure industrial complexes, educational facilities, or shallow groundwater plumes with significant spatial footprints under residential neighborhoods), procurement of continuous monitoring services is often less expensive than implementation of discrete time-integrated monitoring services. For instance, for a 1-week monitoring campaign, the cost per analysis for continuous monitoring ranges from approximately 1 to 3 percent of discrete time-integrated method costs for the scenarios investigated. Over this same 1-week duration, for discrete time-integrated options, the number of sample analyses equals the number of data collection points (which ranged from 5 to 30 for this effort). In contrast, the number of analyses per week for the continuous monitoring option equals 672, or four analyses per hour. This investigation also suggests that continuous automated monitoring can be cost-effective for multiple 1-week campaigns on a quarterly or semi-annual basis in lieu of discrete time-integrated monitoring options. In addition to cost benefits, automated responses are embedded within the continuous monitoring service and, therefore, provide acute TCE risk-preventative capabilities that are not possible using discrete time-integrated passive sampling methods, as the discrete time-integrated services include analytical efforts that require more time than the exposure duration of concern. ©2016 Wiley Periodicals, Inc.
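The per-analysis arithmetic above can be sketched directly: 4 analyses per hour over a 1-week campaign yields 672 analyses, versus one analysis per discrete sampling location. The campaign cost figure below is hypothetical, for illustration only; the article does not publish its cost inputs:

```python
# Sketch: analysis counts and cost per analysis for a 1-week campaign,
# assuming the continuous platform's stated rate of 4 analyses per hour.

HOURS_PER_WEEK = 24 * 7

def continuous_analyses(weeks: float, per_hour: int = 4) -> int:
    """Total analyses a continuous platform performs over `weeks`."""
    return int(weeks * HOURS_PER_WEEK * per_hour)

def cost_per_analysis(total_cost: float, n_analyses: int) -> float:
    """Campaign cost divided by number of analyses performed."""
    return total_cost / n_analyses

n_cont = continuous_analyses(1)   # 672 analyses in one week
n_disc = 30                       # upper end of locations in the scenarios

# Hypothetical equal campaign cost of $10,000 (illustrative only):
print(round(cost_per_analysis(10_000, n_cont), 2))  # ≈ 14.88 per analysis
print(round(cost_per_analysis(10_000, n_disc), 2))  # ≈ 333.33 per analysis
```

Even at equal total campaign cost, the per-analysis cost differs by a factor of n_cont / n_disc, which is what drives the 1-3 percent figure cited above.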
859.
860.
Although proactive followership behavior is often viewed as instrumental to group success, leaders do not always respond favorably to the actions of overly eager followers. Guided by a constructivist perspective, we investigated how interpretations of followership differ across the settings in which acts of leadership and followership emerge. In thematically analyzing data from semi-structured interviews with leaders of high-performing teams, we depict how the construal of follower behaviors relates to various contextual factors underscoring leader-follower interactions. Prototypical characteristics were described in relation to ideal followership (i.e., active independent thought, ability to process self-related information accurately, collective orientation, and relational transparency). However, proactive followership behaviors were subject to the situational and relational demands that were salient during leader-follower interactions. Notably, the presence of third-party observers, the demands of the task, stage in the decision-making process, suitability of the targeted issue, and relational dynamics influenced which follower behaviors were viewed as appropriate from the leader's perspective. These findings provide insight into when leaders are more likely to endorse proactive followership, suggesting that proactive followership requires an awareness of how to calibrate one's actions in accordance with prevailing circumstances. Copyright © 2015 John Wiley & Sons, Ltd.