31.
Consistency of estimators of change and state becomes an issue when sample data come from a mix of permanent and temporary observation units. A joint maximum likelihood estimator of state and change yields estimates of state that depend on both earlier and later survey results, and may therefore differ from estimates of state derived from a single-date analysis of the sample data. A constrained estimator of change in relative categorical frequencies that eliminates this potential inconsistency is proposed, and a model-based estimator of its sampling variance is developed. The performance of the constrained estimator is quantified against six criteria and against a joint maximum likelihood estimator in simulated sampling from 15 populations with three combinations of permanent and temporary samples, four to six categorical class attributes, and constant size between sampling dates. Bias of the constrained estimators was negligible but larger than for the joint maximum likelihood estimators. Mean absolute deviations and variances of the constrained estimators were generally on par with those of the joint estimators. Root mean square errors and achieved coverage of nominal confidence intervals of the constrained estimators were occasionally better. A generalized variance function for the constrained estimates of change is provided as a computational shortcut.
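The consistency constraint described above can be illustrated with a simple stand-in: given hypothetical transition counts from permanent plots and single-date state estimates from the full sample, iterative proportional fitting (a generic raking technique, not the paper's actual estimator) rescales the transition proportions so their margins reproduce the single-date state estimates:

```python
import numpy as np

def constrain_transitions(counts, state1, state2, iters=200):
    """Rake a permanent-plot transition-count matrix so its row and column
    sums match single-date state estimates (iterative proportional fitting).
    An illustrative stand-in for a constrained change estimator, not the
    paper's actual method."""
    P = counts / counts.sum()
    for _ in range(iters):
        P = P * (state1 / P.sum(axis=1))[:, None]   # match time-1 marginals
        P = P * (state2 / P.sum(axis=0))[None, :]   # match time-2 marginals
    return P

counts = np.array([[40.0, 10.0], [5.0, 45.0]])  # hypothetical permanent-plot counts
state1 = np.array([0.55, 0.45])                 # single-date state estimate, time 1
state2 = np.array([0.40, 0.60])                 # single-date state estimate, time 2
P = constrain_transitions(counts, state1, state2)
change = P.sum(axis=0) - P.sum(axis=1)          # net change in class shares
```

By construction the constrained change estimates sum to zero, so estimated state at either date agrees with its single-date estimator.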
32.
Chris J. Matthews David B. Newton Roger D. Braddock Bofu Yu 《Environmental Modeling and Assessment》2007,12(1):27-41
Recently, the New Morris Method has been presented as an effective sensitivity analysis tool for mathematical models. The New Morris Method estimates the sensitivity of an output parameter to a given set of input parameters (first-order effects) and the extent to which these parameters interact with each other (second-order effects). The method requires the specification of two parameters (runs and resolution) that control the sampling of the output parameter used to determine its sensitivity to various inputs. The criteria for these parameters were set based on the analysis of a well-behaved analytical function (see Cropp and Braddock, Reliab. Eng. Syst. Saf. 78:77–83, 2002), which may not be applicable to other physical models that describe complex processes. This paper investigates the appropriateness of the criteria of Cropp and Braddock (2002), and hence the effectiveness of the New Morris Method, in determining the sensitivity behaviour of two hydrologic models: the Soil Erosion and Deposition System and the Griffith University Representation of Urban Hydrology. First, the paper separately analyses the sensitivity of an output parameter to a set of input parameters (first- and second-order effects) for each model and discusses the physical meaning of these sensitivities. This is followed by an investigation of the sampling criteria, exploring the convergence of the sensitivity behaviour for each model as the sampling of the parameter space is increased. By comparing these trends with the convergence behaviour reported by Cropp and Braddock (2002), we determine how well the New Morris Method estimates the sensitivity for each model and whether the sampling criteria are appropriate for these models. It is shown that the New Morris Method can provide additional insight into the functioning of these models and that, under a different metric, the sensitivity behaviour of these models does converge, confirming the sampling criteria set by Cropp and Braddock.
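As background, the basic one-at-a-time Morris screening step that the New Morris Method builds on can be sketched as follows. The run/resolution criteria and the second-order extension discussed in the abstract are not reproduced here, and the toy model is purely illustrative:

```python
import random
import statistics

def morris_screen(f, k, runs=30, levels=4, seed=0):
    """One-at-a-time Morris screening of f on the unit cube [0,1]^k.
    Returns (mu_star, sigma) per input: the mean absolute elementary
    effect and its spread. A sketch of the basic first-order method
    only, not the New Morris Method's second-order analysis."""
    rng = random.Random(seed)
    delta = levels / (2 * (levels - 1))  # standard Morris step size
    effects = [[] for _ in range(k)]
    for _ in range(runs):
        # random base point, scaled so x + delta stays inside [0, 1]
        x = [rng.randrange(levels - 1) / (levels - 1) * (1 - delta)
             for _ in range(k)]
        fx = f(x)
        for i in range(k):
            xi = list(x)
            xi[i] += delta
            effects[i].append((f(xi) - fx) / delta)
    mu_star = [statistics.fmean(abs(e) for e in es) for es in effects]
    sigma = [statistics.pstdev(es) for es in effects]
    return mu_star, sigma

# toy model: output depends strongly on x0, weakly on x1, not at all on x2
mu_star, sigma = morris_screen(lambda x: 10 * x[0] + x[1], 3)
```

For this linear toy model mu_star recovers the coefficients (10, 1, 0) and sigma is zero, since a linear function has no interactions; nonzero sigma in a real model is what flags interaction or nonlinearity.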
33.
J. Christian Franson William L. Hohman Joseph L. Moore Milton R. Smith 《Environmental monitoring and assessment》1996,43(2):181-188
We used 363 blood samples collected from wild canvasback ducks (Aythya valisineria) at Catahoula Lake, Louisiana, U.S.A. to evaluate the effect of sample storage time on the efficacy of erythrocytic protoporphyrin as an indicator of lead exposure. The protoporphyrin concentration of each sample was determined by hematofluorometry within 5 min of blood collection and after refrigeration at 4 °C for 24 and 48 h. All samples were analyzed for lead by atomic absorption spectrophotometry. Based on a blood lead concentration of 0.2 ppm wet weight as positive evidence for lead exposure, the protoporphyrin technique resulted in overall error rates of 29%, 20%, and 19% and false negative error rates of 47%, 29% and 25% when hematofluorometric determinations were made on blood at 5 min, 24 h, and 48 h, respectively. False positive error rates were less than 10% for all three measurement times. The accuracy of the 24-h erythrocytic protoporphyrin classification of blood samples as positive or negative for lead exposure was significantly greater than the 5-min classification, but no improvement in accuracy was gained when samples were tested at 48 h. The false negative errors were probably due, at least in part, to the lag time between lead exposure and the increase of blood protoporphyrin concentrations. False negatives resulted in an underestimation of the true number of canvasbacks exposed to lead, indicating that hematofluorometry provides a conservative estimate of lead exposure. The U.S. Government's right to retain a non-exclusive, royalty-free licence in and to any copyright is acknowledged.
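The error rates reported above can be reproduced in miniature. The 0.2 ppm threshold is from the study, but the data below are hypothetical, for illustration only:

```python
def error_rates(lead_ppm, test_positive, threshold=0.2):
    """Overall, false-negative, and false-positive error rates for a
    screening test judged against a blood-lead threshold (0.2 ppm wet
    weight in the study). Inputs: measured lead values and the paired
    screening-test calls."""
    truth = [x >= threshold for x in lead_ppm]
    overall = sum(t != p for t, p in zip(truth, test_positive)) / len(truth)
    exposed = [p for t, p in zip(truth, test_positive) if t]
    clean = [p for t, p in zip(truth, test_positive) if not t]
    # false-negative rate among truly exposed; false-positive rate among unexposed
    fn = sum(not p for p in exposed) / len(exposed) if exposed else 0.0
    fp = sum(clean) / len(clean) if clean else 0.0
    return overall, fn, fp

lead = [0.05, 0.10, 0.25, 0.30, 0.50, 0.15]           # hypothetical ppm values
protoporphyrin_flag = [False, True, False, True, True, False]
overall, fn, fp = error_rates(lead, protoporphyrin_flag)
```

A false negative here is an exposed bird the fluorometric screen misses, which is why the abstract describes hematofluorometry as a conservative estimate of exposure.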
34.
A peat core from an ombrotrophic bog in Switzerland provides the first complete, long-term record (14 500 years) of atmospheric Ag and Tl deposition. The lack of enrichment of Ag and Tl in the basal peat layer shows that mineral dissolution in the underlying sediments has not contributed measurably to the Ag and Tl inventories in the peat column, and that Ag and Tl were supplied exclusively by atmospheric deposition. The temporal and spatial distributions of modern peaks in Ag and Tl concentrations are similar to those of Pb, which is known to be immobile in peat profiles. Silver and Tl, therefore, are effectively immobile in the peat bog also, allowing an atmospheric deposition chronology to be reconstructed. Silver concentrations vary by up to a factor of 114 and Tl by up to a factor of 241. While Holocene climate change and land use history can explain the variation in metal concentrations and enrichment factors (EF) in ancient peats (i.e. pre-dating the Roman Period), anthropogenic sources have to be invoked to explain the very high EF values (up to 123 in the case of Ag and 12 in the case of Tl) in peat samples since the middle of the 19th Century. The "natural background" EF of Tl in ancient peats is remarkably close to unity, indicating a lack of significant enrichment of this element in atmospheric aerosols due to chemical weathering of crustal rocks. Silver, on the other hand, shows a pronounced enrichment from 8030 to 5230 14C years BP (12x compared to crustal rocks); this may be due to weathering phenomena or biological processes, both of which are driven by climate. Even compared to the natural enrichment of Ag during the mid-Holocene, however, the enrichments of Ag and Tl in modern peats from the Industrial Period are at least an order of magnitude greater. The Pb/Ag and Tl/Ag ratios show that Pb and Tl are preferentially released, compared to Ag, during smelting of the argentiferous Pb ores mined during the Roman and Medieval Periods.
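The enrichment factors (EF) quoted above are conventionally computed by double-normalizing to a conservative crustal reference element; a minimal sketch, with hypothetical concentrations:

```python
def enrichment_factor(metal_sample, ref_sample, metal_crust, ref_crust):
    """Crustal enrichment factor: (M/R)_sample / (M/R)_crust, where R is a
    conservative lithogenic reference element (e.g. Sc or Ti). EF near 1
    indicates purely crustal supply; EF >> 1 indicates an extra (e.g.
    anthropogenic) input. All concentrations below are hypothetical."""
    return (metal_sample / ref_sample) / (metal_crust / ref_crust)

# hypothetical concentrations in mg/kg: Ag and a Ti-like reference element,
# in a modern peat slice versus average crustal rock
ef_ag = enrichment_factor(0.62, 50.0, 0.05, 500.0)
```

Normalizing to the reference element cancels out simple dilution by mineral dust, so only genuine excess metal raises the EF.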
35.
36.
A water quality assessment was conducted on three Appalachian streams polluted by coal mining at the Big South Fork National River and Recreation Area, Tennessee and Kentucky. Results showed that sulfate was an excellent parameter for detecting the effects of coal mining and that sulfate analyses used in conjunction with conductivity readings provided the best detection index. Acidity and pH readings were relatively insensitive indicators, reflecting the mining pollution only after sulfate concentrations already indicated severe pollution levels. Hydrologist, Big South Fork NRRA, during the study; presently at Cape Lookout National Seashore. Hydrologist, NPS, during the study; presently with USDA-Forest Service, Washington, DC.
37.
Givelet N, Le Roux G, Cheburkin A, Chen B, Frank J, Goodsite ME, Kempter H, Krachler M, Noernberg T, Rausch N, Rheinberger S, Roos-Barraclough F, Sapkota A, Scholz C, Shotyk W 《Journal of environmental monitoring : JEM》2004,6(5):481-492
For detailed reconstructions of atmospheric metal deposition using peat cores from bogs, a comprehensive protocol for working with peat cores is proposed. The first step is to locate and evaluate suitable sampling sites in accordance with the principal goal of the study, the period of time of interest and the precision required. Using state-of-the-art procedures and field equipment, peat cores are collected in such a way as to provide high-quality records for paleoenvironmental study. Pertinent observations gathered during the fieldwork are recorded in a field report. Cores are kept frozen at -18 °C until they can be prepared in the laboratory. Frozen peat cores are precisely cut into 1 cm slices using a stainless steel band saw with stainless steel blades. The outside edges of each slice are removed with a titanium knife to avoid any contamination which might have occurred during sampling and handling. Each slice is split, with one half kept frozen for future studies (archived) and the other half further subdivided for physical, chemical and mineralogical analyses. Physical parameters such as ash and water contents, bulk density and the degree of decomposition of the peat are determined using established methods. A subsample is dried overnight at 105 °C in a drying oven and milled in a centrifugal mill with a titanium sieve. Prior to any expensive and time-consuming chemical procedures and analyses, the resulting powdered samples, after manual homogenisation, are measured for more than twenty-two major and trace elements using non-destructive X-ray fluorescence (XRF) methods. This approach provides a wealth of geochemical data documenting the natural geochemical processes which occur in the peat profiles and their possible effect on the trace metal profiles.
The development, evaluation and use of peat cores from bogs as archives of high-resolution records of atmospheric deposition of mineral dust and trace elements have led to the development of many analytical procedures which now permit the measurement of a wide range of elements in peat samples, such as lead and lead isotope ratios, mercury, arsenic, antimony, silver, molybdenum, thorium, uranium and the rare earth elements. Radiometric methods (the 14C bomb pulse, 210Pb and conventional 14C dating) are combined to allow reliable age-depth models to be reconstructed for each peat profile.
38.
Harper M, Hallmark TS, Andrew ME, Bird AJ 《Journal of environmental monitoring : JEM》2004,6(10):819-826
Personal and area air samples were taken at a scrap lead smelter operation in a bullet manufacturing facility. Samples were taken using the 37-mm styrene-acrylonitrile closed-face filter cassette (CFC, the current US standard device for lead sampling), the 37-mm GSP or "cone" sampler, the 25-mm Institute of Occupational Medicine (IOM) inhalable sampler, and the 25-mm Button sampler (developed at the University of Cincinnati). Polyvinylchloride filters were used for sampling. The filters were pre- and post-weighed, and analyzed for lead content using a field-portable X-ray fluorescence (XRF) analyzer. The filters were then extracted with dilute nitric acid in an ultrasonic extraction bath and the solutions were analyzed by inductively coupled plasma optical emission spectroscopy. The 25-mm filters were analyzed using a single XRF reading, while three readings on different parts of the filter were taken from the 37-mm filters. The single reading from the 25-mm filters was adjusted for the nominal area of the filter to obtain the mass loading, while the three readings from the 37-mm filters were inserted into two different algorithms for calculating the mass loadings, and the algorithms were compared. The IOM sampler was designed for material collected in the body of the sampler to be part of the collected sample as well as that on the filter. Therefore, the IOM sampler cassettes were rinsed separately to determine if wall-loss corrections were necessary. All four samplers gave very good correlations between the two analytical methods above the limit of detection of the XRF procedure. The limit of detection for the 25-mm filters (5 microg) was lower than for the 37-mm filters (10 microg). The percentage of XRF results that were within 25% of the corresponding ICP results was evaluated. In addition, the bias from linear regression was estimated. 
Linear regression for the Button sampler and the IOM sampler using single readings, and for the GSP using all tested techniques for total filter loading, gave acceptable XRF readings at loadings equivalent to sampling at the OSHA 8-hour Action Level and Permissible Exposure Limit. However, the CFC only gave acceptable results when the center reading corrected for filter area was used, which was surprising and may be a result of the limited data set. In addition to linear regression, simple estimation of bias indicated reasonable agreement between XRF and ICP results for single XRF readings on the Button sampler filters (82% of individual results within the criterion), on the IOM sampler filters (77% or 61%; see text), and on the GSP sampler filters using the OSHA algorithm (78%). As a result of this pilot project, all three samplers were considered suitable for inclusion in further field research studies.
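The two agreement measures used in the abstract, the fraction of XRF readings within 25% of the paired ICP value and the bias from linear regression, can be sketched with hypothetical paired data:

```python
def within_pct(xrf, icp, tol=0.25):
    """Fraction of XRF readings within +/- tol of the paired ICP value
    (the 25% agreement criterion used in the study)."""
    return sum(abs(x - r) <= tol * r for x, r in zip(xrf, icp)) / len(icp)

def ols_slope_intercept(x, y):
    """Ordinary least squares fit y = a + b*x. A slope far from 1 indicates
    proportional bias between the two methods; an intercept far from 0
    indicates constant bias."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

# hypothetical paired lead masses in micrograms (not the study's data)
icp = [12.0, 30.0, 55.0, 80.0, 120.0]   # reference ICP results
xrf = [10.5, 33.0, 50.0, 85.0, 155.0]   # paired field-portable XRF readings
frac = within_pct(xrf, icp)
a, b = ols_slope_intercept(icp, xrf)
```

With these hypothetical values, four of five readings meet the 25% criterion and the slope exceeds 1, i.e. the XRF reads high at larger loadings.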
39.
Bhanarkar AD, Srivastava A, Joseph AE, Kumar R 《Environmental monitoring and assessment》2005,109(1-3):73-80
Air pollution in the workplace environment due to industrial operations has been found to cause serious occupational health hazards. Similarly, heat stress is still the most neglected occupational hazard in tropical and subtropical countries such as India, where the hot climate augments heat exposure close to sources like furnaces. In this study an attempt is made to assess air pollution and heat exposure levels for workers in the workplace environment of a glass manufacturing unit located in the State of Gujarat, India. Workplace air samples were collected for SPM, SO2, NO2 and CO2 at eight locations. The results showed 8-hourly average concentrations of SPM: 165–9118 μg/m3, SO2: 6–9 μg/m3 and NO2: 5–42 μg/m3, which were below the threshold limit values for the workplace environment. The level of CO2 in workplace air was found to be in the range 827–2886 mg/m3, which was below the TLV but much higher than the normal concentration of CO2 in air (585 mg/m3). Indoor heat exposure was studied near the furnace and at various locations in the industrial complex. The heat exposure parameters, including the air temperature, the wet bulb temperature and the globe temperature, were measured. The Wet Bulb Globe Temperature (WBGT), an indicator of heat stress, exceeded the ACGIH TLV limits most of the time at all locations in the workplace areas. Recommended durations of work and rest have also been estimated.
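The WBGT index referred to above is a fixed weighted sum of temperature readings (the standard ISO 7243 / ACGIH weightings); a minimal sketch with hypothetical furnace-side readings:

```python
def wbgt_indoor(t_nwb, t_globe):
    """Indoor / no-solar-load WBGT (ISO 7243):
    0.7 * natural wet bulb + 0.3 * globe temperature, in deg C."""
    return 0.7 * t_nwb + 0.3 * t_globe

def wbgt_outdoor(t_nwb, t_globe, t_dry):
    """Outdoor WBGT with solar load:
    0.7 * natural wet bulb + 0.2 * globe + 0.1 * dry bulb, in deg C."""
    return 0.7 * t_nwb + 0.2 * t_globe + 0.1 * t_dry

# hypothetical readings near a glass furnace, deg C
wbgt = wbgt_indoor(29.0, 45.0)
```

The wet bulb term dominates because evaporative cooling is the body's main defense; the globe term captures radiant load from hot surfaces such as furnaces. The computed index is then compared against the ACGIH work/rest TLV tables for the relevant workload.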
40.
Charles S. Tapiero 《Environmental Modeling and Assessment》2005,9(4):201-206
Conclusion. In this paper we have considered a specific environmental game emphasizing both control-prevention efforts and the propensity to pollute of a firm which adopts a given pollution abatement technology. A random payoff game was constructed and solved under a risk-neutral assumption and under quadratic utilities for both the firm and the environmental controller. The game thus defined provides a wide range of interpretations and potential approaches for selecting control-inspection policies to prevent environmental risks. There are of course many facets of this problem which have not been considered in sufficient depth: for example, more complex control mechanisms and liabilities, the effects of insurance and risk sharing, the application of cooperative efforts, and the subvention of pollution abatement investments (through tax incentives and the like) [5,7]. These are topics for further research. The basic presumption of this paper is that it is very difficult to fully enforce pollution prevention by firms; as a result, some control is needed to ensure that appropriate prevention efforts are carried out.
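The control-inspection trade-off can be illustrated with a textbook inspector/polluter game, used here as a generic stand-in for the paper's random-payoff, quadratic-utility model: in the mixed-strategy equilibrium each side randomizes just enough to make the other indifferent.

```python
from fractions import Fraction

def inspection_equilibrium(c, f, k, d):
    """Mixed-strategy equilibrium of a textbook inspector/polluter game
    (a generic stand-in, not the paper's model).
    Firm: complying costs c; polluting costs 0 if uninspected, fine f if caught.
    Inspector: inspecting costs k; undetected pollution causes damage d.
    At equilibrium each side mixes so the other is indifferent."""
    p_inspect = Fraction(c, f)   # firm indifferent: c = p_inspect * f
    q_pollute = Fraction(k, d)   # inspector indifferent: k = q_pollute * d
    return p_inspect, q_pollute

# hypothetical cost parameters (integer units for exact arithmetic)
p, q = inspection_equilibrium(c=2, f=10, k=1, d=5)
```

Note the familiar comparative statics: a larger fine f lowers the inspection rate needed, while a cheaper inspection (smaller k) lowers the equilibrium pollution propensity, which is the flavor of trade-off the paper's control-inspection analysis generalizes.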