  Paid full text   84
  Free   2
  Free (domestic)   2
Safety science   11
Waste treatment   2
Environmental management   18
General   6
Basic theory   36
Pollution and control   7
Assessment and monitoring   3
Society and environment   4
Disasters and control   1
  2020   2
  2019   2
  2015   4
  2014   2
  2013   1
  2012   3
  2011   13
  2010   3
  2009   16
  2008   10
  2007   4
  2006   6
  2005   5
  2004   2
  2003   3
  2000   2
  1997   2
  1995   1
  1993   2
  1992   2
  1990   2
  1973   1
Sorted by relevance: 88 results in total (search time: 31 ms)
31.
In the wake of the resource constraints on external farm inputs faced by farmers in developing countries, sustainable agriculture practices that rely on renewable local or farm resources present desirable options for enhancing agricultural productivity. In this study, plot-level data from Tigray, a semi-arid region of Ethiopia, are used to investigate the factors influencing farmers' decisions to adopt agricultural practices, with a particular focus on conservation tillage, compost and chemical fertilizer. A trivariate probit model is used to analyze the determinants of adoption of these practices. In addition, stochastic dominance analysis is used to compare the productivity impacts of compost with those of chemical fertilizer, based on a six-year cross-sectional farm-level dataset. Our results indicate heterogeneity in the factors that influence adoption decisions for the three practices, and show the importance of both plot and household characteristics in influencing those decisions. In particular, we found that household endowments and access to information, among other factors, significantly affect the choice of sustainable farming practices. Furthermore, the stochastic dominance analysis supported the contention that sustainable farming practices enhance productivity; they even proved superior to chemical fertilizers, justifying the need to investigate the factors that influence adoption of these practices and to use this knowledge to formulate policies that encourage adoption.
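The stochastic dominance comparison used above can be illustrated with a small sketch. First-order dominance holds when one distribution's empirical CDF lies at or below the other's everywhere. The data here are synthetic (one sample is a constant shift of the other, so dominance holds by construction); the variable names are illustrative and not taken from the study.

```python
import numpy as np

def first_order_dominates(a, b):
    """True if sample `a` first-order stochastically dominates sample `b`:
    the empirical CDF of `a` is at or below that of `b` at every point."""
    grid = np.sort(np.concatenate([a, b]))
    cdf_a = np.searchsorted(np.sort(a), grid, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), grid, side="right") / len(b)
    return bool(np.all(cdf_a <= cdf_b))

rng = np.random.default_rng(0)
fertilizer = rng.normal(1.5, 0.5, 1000)  # hypothetical yield distribution
compost = fertilizer + 0.5               # shifted copy: dominance by construction

print(first_order_dominates(compost, fertilizer))   # True
print(first_order_dominates(fertilizer, compost))   # False
```

With real plot-level yields the two samples would be independent draws, and second-order dominance (comparing integrated CDFs) is often checked when first-order dominance fails.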
32.
The feasibility of achieving the EU 2 °C climate target has been assessed with the ETSAP TIAM global energy systems model. Cost-effective global and regional mitigation scenarios for carbon dioxide, methane, nitrous oxide and F-gases were calculated under alternative assumptions about emissions trading. In the mitigation scenarios, an 85% reduction in CO2 emissions from the baseline is needed, and very significant changes in the energy system towards emission-free sources take place during this century. The largest new technology groups are carbon capture and storage (CCS), nuclear power, wind power, advanced bioenergy technologies and energy efficiency measures. CCS technologies contribute an annual emission reduction of 5.5 Pg CO2 by 2050 and 12 Pg CO2 by 2100. Large-scale forestation measures were also found to be cost-efficient, reaching their maximum impact of 7.7 Pg CO2 in annual emission reductions in 2080. The effects of uncertainties in climate sensitivity have been analysed with stochastic scenarios.
33.
Using a time-varying stochastic frontier model, this paper examines the technical efficiency of firms in the iron and steel industry to identify the factors contributing to the industry's efficiency growth. Industry observers and policymakers most frequently cite three possible sources of efficiency growth: privatization, economies of scale, and vintage of equipment. Our study corroborates these factors. Based on our findings, which pertain to 52 iron and steel firms over the period 1978–1997, privatization is likely to improve the efficiency of iron and steel firms to a great extent, as has been observed in various industries. This study also provides systematic evidence that iron and steel production exhibits economies of scale. In addition, newer vintages of equipment are found to be closely correlated with higher levels of efficiency, clearly indicating that investment in new plant and equipment is critical in the pursuit of efficiency in the iron and steel industry.
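The frontier idea can be sketched in a few lines. This is not the paper's time-varying stochastic frontier; it is the simpler corrected-OLS (COLS) deterministic frontier on simulated data, which still conveys how efficiency is measured as distance below a best-practice production function. All data and parameter values are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 52                                        # one firm-level cross-section
log_inputs = rng.normal(0, 1, (n, 2))         # hypothetical log(labor), log(capital)
true_beta = np.array([0.6, 0.5])              # elasticities summing to > 1
inefficiency = rng.exponential(0.2, n)        # one-sided inefficiency term
log_output = 1.0 + log_inputs @ true_beta - inefficiency + rng.normal(0, 0.05, n)

# OLS fit of the log-linear (Cobb-Douglas) production function
X = np.column_stack([np.ones(n), log_inputs])
beta_hat, *_ = np.linalg.lstsq(X, log_output, rcond=None)

# COLS: shift the fitted function up so it envelops all observations
resid = log_output - X @ beta_hat
efficiency = np.exp(resid - resid.max())      # in (0, 1]; 1 = on the frontier

print(round(float(efficiency.mean()), 3))
print(round(float(beta_hat[1] + beta_hat[2]), 2))  # estimated scale elasticity
```

A scale elasticity above 1 corresponds to the economies of scale the paper reports; the stochastic frontier version additionally separates inefficiency from symmetric noise by maximum likelihood.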
34.
Mercury is recognized internationally as an important pollutant, since mercury and its compounds are persistent, bioaccumulative and toxic, and pose risks to humans and ecosystems. A critical aspect of mercury cycling is its bioaccumulation, mainly as methylmercury, along the aquatic food web, resulting in a high risk of human exposure through consumption of contaminated fish. Since lake acidity (pH) and mercury methylation are correlated, control of lake pH through liming is a possible option for mitigating mercury bioaccumulation. This work proposes to use optimal control theory to derive time-dependent lake liming strategies for tighter control of lake pH. Since the behavior of freshwater ecosystems such as lakes is often associated with considerable uncertainties, a robust and realistic analysis should incorporate them. This work models the time-dependent uncertain variations in the basic lake pH value and derives liming profiles in the presence of such seasonal pH fluctuations. Established techniques from real options theory are employed to model the uncertainty as a stochastic process, and stochastic optimal control is used to derive the liming profiles. The approach is critically evaluated through applications to several case-study lakes. Considering the substantial costs associated with liming operations, the work formulates a multi-objective problem highlighting the tradeoff between accurate pH control and liming cost. The results of the control problem are also compared with heuristics-based liming. The results, while highlighting the success of time-dependent liming, reveal certain aspects that may be helpful to a decision maker. The analysis is expected to make liming operations more reliable, thereby providing one more tool for managing the harmful effects of mercury pollution.
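A minimal sketch of the modelling idea: treat pH fluctuations as a mean-reverting (Ornstein-Uhlenbeck) stochastic process and add a liming control input. The proportional-control rule below is a heuristic stand-in for the paper's stochastic optimal control, and all parameter values (reversion rate, base pH, gain, target) are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.01, 10.0
n = int(T / dt)
theta, mu, sigma = 1.5, 5.6, 0.3   # hypothetical reversion rate, base pH, volatility
target, gain = 6.5, 2.0            # desired pH and proportional liming gain

ph = np.empty(n)
ph[0] = mu
lime = np.zeros(n)                 # liming dose applied at each step
for t in range(1, n):
    dose = gain * max(target - ph[t - 1], 0.0)   # lime only when pH is too low
    lime[t] = dose
    # Euler-Maruyama step of the OU process plus the liming control input
    ph[t] = (ph[t - 1] + theta * (mu - ph[t - 1]) * dt + dose * dt
             + sigma * np.sqrt(dt) * rng.normal())

print(round(float(ph[n // 2:].mean()), 2))       # steady-state pH under liming
```

The optimal-control version replaces the fixed gain with a dose profile chosen to trade off deviation from the pH target against cumulative liming cost, which is exactly the multi-objective tradeoff the abstract describes.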
35.
A stochastic individual-based model (IBM) of mosquitofish population dynamics in experimental ponds was constructed in order to increase, virtually, the number of replicates of control populations in an ecotoxicology trial, and thus to increase the statistical power of the experiments. In this context, great importance had to be paid to model calibration, since this conditions the use of the model as a reference for statistical comparisons. Accordingly, calibration required that both the mean behaviour and the variability behaviour of the model were in accordance with real data. Currently, identifying parameter values from observed data is still an open issue for IBMs, especially when the parameter space is large. Our model included 41 parameters: 30 driving the model's mean behaviour and 11 driving its variability. Under these conditions, "Latin hypercube" sampling would most probably have missed some important combinations of parameter values, so a complete factorial design was preferred. Unfortunately, due to the constraints of computational capacity, cost-acceptable "complete designs" were limited to no more than nine parameters, turning the calibration question into a parameter selection question. In this study, successive "complete designs" were conducted with different sets of parameters and different parameter values, in order to progressively narrow the parameter space. For each "complete design", the selection of at most nine parameters and their respective numbers of levels was carefully guided by sensitivity analysis, which was decisive in selecting parameters that were both influential and likely to have strong interactions. Following this strategy, the model of mosquitofish population dynamics was calibrated on real data from two different years of experiments, and validated on real data from another, independent year. The model includes two categories of agents: fish and their living environment. Fish agents have four main processes: growth, survival, puberty and reproduction. The outputs of the model are the length frequency distribution of the population and 16 scalar variables describing the fish populations. In this study, the length frequency distribution was parameterized by 10 scalars in order to be able to perform calibration. The recently suggested notion of "probabilistic distribution of the distributions" was also applied to our case study, and was shown to be very promising for comparing length frequency distributions as such.
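The complete factorial design described above is straightforward to enumerate: every combination of every candidate level of every selected parameter becomes one simulation run. The parameter names and levels below are purely illustrative (the paper's actual parameters come from its sensitivity analysis); the sketch shows why the run count explodes and why parameter selection is essential.

```python
from itertools import product

# Hypothetical subset of influential parameters and candidate levels,
# standing in for the ones the study selected via sensitivity analysis.
levels = {
    "growth_rate":   [0.8, 1.0, 1.2],
    "survival_base": [0.90, 0.95],
    "fecundity":     [20, 30, 40],
}

# One dict per simulation run, covering every combination of levels.
design = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(design))  # 3 * 2 * 3 = 18 runs
```

With nine parameters at three levels each, the same enumeration would already require 3^9 = 19,683 runs, which is why the study caps each "complete design" at nine parameters and iterates, whereas Latin hypercube sampling trades that exhaustiveness for a fixed budget of space-filling points.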
36.
A dynamic and heterogeneous species abundance model generating the lognormal species abundance distribution is fitted to time series of species data from an assemblage of stoneflies and mayflies (Plecoptera and Ephemeroptera) in an aquatic insect community collected over a period of 15 years. In each year except one, we analyze 5 parallel samples taken at the same time of the season, which give information about the over-dispersion in the sampling relative to the Poisson distribution. Results are derived from a correlation analysis, in which the correlation in the bivariate normal distribution of log abundance is used as a measure of similarity between communities. The analysis enables decomposition of the variance of the lognormal species abundance distribution into three components: heterogeneity among species, stochastic dynamics driven by environmental noise, and over-dispersion in sampling, accounting for 62.9%, 30.6% and 6.5% of the total variance, respectively. Corrected for sampling, the heterogeneity and stochastic components account for 67.3% and 32.7% of the among-species variance in log abundance. With this method it is possible to disentangle the effects of heterogeneity and stochastic dynamics by quantifying these components and correctly removing sampling effects from the observed species abundance distribution.
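The arithmetic of the decomposition is worth making explicit: the "corrected for sampling" shares are simply the first two variance components re-normalised after the sampling component is removed. Using component magnitudes chosen to match the shares reported above (illustrative numbers, not the fitted variances themselves):

```python
# Hypothetical variance components of log abundance, scaled so their
# shares match those reported in the abstract.
sigma2_het, sigma2_env, sigma2_samp = 0.629, 0.306, 0.065
total = sigma2_het + sigma2_env + sigma2_samp

shares = [round(100 * v / total, 1) for v in (sigma2_het, sigma2_env, sigma2_samp)]
print(shares)      # [62.9, 30.6, 6.5]

# Corrected for sampling: re-normalise over the biological components only
bio = sigma2_het + sigma2_env
corrected = [round(100 * v / bio, 1) for v in (sigma2_het, sigma2_env)]
print(corrected)   # [67.3, 32.7]
```

This reproduces the 67.3/32.7 split from the 62.9/30.6/6.5 decomposition, confirming the two sets of figures in the abstract are mutually consistent.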
37.
The effects of noise on neuronal dynamical systems are of much current interest. Here, we investigate noise-induced changes in the rhythmic firing activity of single Hodgkin–Huxley neurons. With additive input current there is, in the absence of noise, a critical mean value μ = μc above which sustained periodic firing occurs. With initial conditions at resting values, for a range of mean values μ near the critical value, we found that the firing rate is greatly reduced by noise, even of quite small amplitude. Furthermore, the firing rate may pass through a pronounced minimum as the noise increases. This behavior is opposite in character to stochastic resonance and coherence resonance. We found that these phenomena occurred even when the initial conditions were chosen randomly or when the noise was switched on at a random time, indicating the robustness of the results. We also examined the effects of conductance-based noise on Hodgkin–Huxley neurons and obtained similar results, leading to the conclusion that the phenomena occur across a wide range of neuronal dynamical systems. Further, these phenomena should occur in diverse applications where a stable limit cycle coexists with a stable focus.
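The setting can be reproduced with a compact Euler-Maruyama simulation of the standard squid-axon Hodgkin-Huxley equations with an additive noisy input current. This is a minimal sketch, not the paper's exact protocol: parameter values are the textbook set, and the spike count is a simple threshold-crossing tally.

```python
import numpy as np

def simulate_hh(i_mean, sigma, t_max=200.0, dt=0.01, seed=3):
    """Euler-Maruyama simulation of a Hodgkin-Huxley neuron driven by an
    additive input current with mean `i_mean` and noise amplitude `sigma`;
    returns the number of spikes (upward crossings of 0 mV)."""
    rng = np.random.default_rng(seed)
    c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4

    def vtrap(x, y):          # x / (1 - exp(-x/y)), with the x -> 0 limit y
        return y if abs(x) < 1e-7 else x / (1.0 - np.exp(-x / y))

    v, m, h, n = -65.0, 0.0529, 0.5961, 0.3177   # resting-state values
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        a_m, b_m = 0.1 * vtrap(v + 40, 10), 4.0 * np.exp(-(v + 65) / 18)
        a_h, b_h = 0.07 * np.exp(-(v + 65) / 20), 1.0 / (1 + np.exp(-(v + 35) / 10))
        a_n, b_n = 0.01 * vtrap(v + 55, 10), 0.125 * np.exp(-(v + 65) / 80)
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_mean - i_ion) / c_m + sigma * np.sqrt(dt) * rng.normal()
        m += dt * (a_m * (1 - m) - b_m * m)
        h += dt * (a_h * (1 - h) - b_h * h)
        n += dt * (a_n * (1 - n) - b_n * n)
        if v > 0 and not above:
            spikes += 1
        above = v > 0
    return spikes

print(simulate_hh(10.0, 0.0))   # sustained periodic firing above the critical current
```

Sweeping `i_mean` toward the critical value and `sigma` over small amplitudes with this routine is the kind of experiment in which the abstract's noise-induced reduction in firing rate appears.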
38.
In this paper, I show the existence and characteristics of equilibrium in a non-renewable resource market where extraction costs are non-convex and the market price is subject to stochastic shocks, an empirically relevant setting. In my model, firms may be motivated to hold inventories to facilitate production smoothing, which allows them to continue producing at a smooth pace whenever extraction ceases, e.g. when reserves are exhausted. This aspect of the model then supports a competitive equilibrium in the presence of non-convex costs. Casual empirical evidence is provided that supports the central features of my model for a variety of non-renewable resources, lending credence to the explanation for equilibrium I propose.
39.
When looking for the best course of management decisions to efficiently conserve metapopulation systems, a classic approach in the ecology literature is to model the optimisation problem as a Markov decision process and find an optimal control policy using exact stochastic dynamic programming techniques. Stochastic dynamic programming is an iterative procedure that seeks to optimise a value function at each timestep by evaluating the benefits of each action in each state of the system defined in the Markov decision process. Although stochastic dynamic programming methods provide an optimal solution to conservation management questions in a stochastic world, their applicability to metapopulation problems has always been limited by the so-called curse of dimensionality: adding new state variables inevitably results in much larger (often exponential) increases in the size of the state space, which can make solving superficially small problems impossible. The high computational requirements of stochastic dynamic programming mean that only simple metapopulation management problems can be analysed. In this paper we overcome the complexity burden of exact stochastic dynamic programming and present the benefits of the on-line sparse sampling algorithm proposed by Kearns, Mansour and Ng (2002). The algorithm is particularly attractive for problems with large state spaces, as its running time is independent of the size of the state space. This appealing improvement is achieved at a cost: the solutions found are no longer guaranteed to be optimal. We apply the algorithm of Kearns et al. (2002) to a hypothetical fish metapopulation problem in which the management objective is to maximise the number of occupied patches over the management time horizon. Our model has multiple management options to combat the threats of water abstraction and waterhole sedimentation. We compare the optimal solution with the results of the on-line sparse sampling algorithm for a simple 3-waterhole case and find that three look-ahead steps minimise the error between the optimal solution and the approximation. This paper introduces to conservation management a new algorithm that provides a way to avoid the curse of dimensionality, with the potential to let us approximate solutions to much more complex metapopulation management problems in the future.
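The sparse sampling idea can be sketched directly: estimate each action value by drawing a fixed number of successor states from a generative model and recursing to a fixed look-ahead depth. The toy metapopulation below (occupancy of up to 3 waterholes, a costly "manage" action versus a free "wait" action) is illustrative only and is not the paper's model.

```python
import random

random.seed(4)
ACTIONS = ["manage", "wait"]

def step(state, action):
    """Toy generative model: `state` = number of occupied waterholes (0..3).
    Managing raises the chance occupancy grows but carries a cost of 0.2."""
    p = 0.9 if action == "manage" else 0.3
    nxt = min(3, max(0, state + (1 if random.random() < p else -1)))
    return nxt, nxt - (0.2 if action == "manage" else 0.0)

def sparse_sample_q(state, action, depth, c=16, gamma=0.9):
    """Q-value estimate via the sparse sampling tree of Kearns, Mansour and
    Ng (2002): average over `c` sampled successors, recursing to `depth`."""
    if depth == 0:
        return 0.0
    total = 0.0
    for _ in range(c):
        nxt, reward = step(state, action)
        best = max(sparse_sample_q(nxt, a, depth - 1, c, gamma) for a in ACTIONS)
        total += reward + gamma * best
    return total / c

best_action = max(ACTIONS, key=lambda a: sparse_sample_q(1, a, depth=3))
print(best_action)
```

The tree has on the order of (c x |ACTIONS|)^depth nodes, so the running time depends on the sample width and look-ahead depth but not on the number of states, which is exactly why the algorithm sidesteps the curse of dimensionality at the price of an approximate answer.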
40.
Gas detection systems are a critical layer of protection in process safety. Leak scenario probability and detector reliability are two key factors in the optimization of gas detector placement, yet they are easily neglected in previous studies, which may lead to inaccurate evaluation of the optimization solutions. In this study, a stochastic programming (SP) optimization method is proposed that considers both factors. To quantitatively represent the probability of leak scenarios, a complete accident scenario set (CASS) is built by combining leak sources and wind fields. The computational fluid dynamics (CFD) method is then adopted for consequence modeling of gas dispersion, and a Markov model is developed to predict detector reliability. With the objective of minimal cumulative detection time (MCDT), the SP formulation considering scenario probability and detector reliability (MCDT-SPR) is proposed, and the optimization formulations are solved with the particle swarm optimization (PSO) algorithm. A case study of a diesel hydrogenation refining unit validates that the approach is promising for improving detection efficiency. The method is practical and well matched to actual industrial environments, where leak scenarios and detector reliability change dynamically.
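The optimization loop can be sketched with a deliberately simplified objective: a probability-weighted scenario set and a detection-time proxy (distance from each leak to its nearest detector) in place of the CFD dispersion model and Markov reliability model. All coordinates, probabilities and PSO settings below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical scenario set: leak locations on a 2-D plane with probabilities
leaks = np.array([[1.0, 1.0], [4.0, 1.0], [2.5, 4.0]])
probs = np.array([0.5, 0.3, 0.2])

def cost(x):
    """Probability-weighted detection time, proxied by the distance from
    each leak to its nearest detector (x holds k detector coordinates)."""
    det = x.reshape(-1, 2)
    d = np.linalg.norm(leaks[:, None, :] - det[None, :, :], axis=2).min(axis=1)
    return float(probs @ d)

def pso(cost, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, lo=0.0, hi=5.0):
    """Plain global-best particle swarm optimization over a box domain."""
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest, pval = x.copy(), np.array([cost(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pval
        pbest[better], pval[better] = x[better], f[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())

placement, best_cost = pso(cost, dim=4)   # two detectors, an (x, y) pair each
print(round(best_cost, 3))
```

In the full MCDT-SPR formulation the inner `cost` would instead sum probability-weighted CFD detection times, discounted by the Markov-model availability of each detector, while the outer PSO loop stays essentially the same.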

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号