Similar documents
 20 similar documents found (search time: 156 ms)
1.
Rapidly varied open channel flows are characterized by curvilinear streamlines, resulting in a pressure field that departs from the hydrostatic assumption of standard gradually varied flow theory. This is relevant to environmental hydraulic problems such as the undular hydraulic jump and flow over round-crested weirs, for which streamline curvature effects are significant. The inclusion of the curvilinear streamline effect in an extended energy equation was first proposed by Fawer, and most of the extended energy equations currently employed are modified forms of his original approach. The aim of the present study is to highlight and remind engineers of the outstanding theory presented by Fawer. Herein, his approach for steady open channel flow with curved streamlines is revisited and compared with experimental observations. Computational methods are presented in detail and, based on the present results, more recent and more complex models for these problems are similar to Fawer's original proposal and, in some instances, hardly more accurate. Based on this study, a useful framework for theoretical models of steady open channel flows with curved streamlines is proposed.

2.
The eddy covariance technique, which is used in the determination of net ecosystem CO2 exchange (NEE), is subject to significant errors when advection that carries CO2 in the mean flow is ignored. We measured horizontal and vertical advective CO2 fluxes at the Niwot Ridge AmeriFlux site (Colorado, USA) using a measurement approach consisting of multiple towers. We observed relatively high rates of both horizontal (Fhadv) and vertical (Fvadv) advective fluxes at low surface friction velocities (u*), which were associated with downslope katabatic flows. We observed that Fhadv was confined to a relatively thin layer (0-6 m thick) of subcanopy air that flowed beneath the eddy covariance sensors principally at night, carrying with it respired CO2 from the soil and lower parts of the canopy. The observed Fvadv came from above the canopy and was presumably due to the convergence of drainage flows at the tower site. The magnitudes of both Fhadv and Fvadv were similar, of opposite sign, and increased with decreasing u*, meaning that they most affected estimates of the total CO2 flux on calm nights with low wind speeds. The mathematical sign, temporal variation and dependence on u* of both Fhadv and Fvadv were determined by the unique terrain of the Niwot Ridge site. Therefore, the patterns we observed may not be broadly applicable to other sites. We evaluated the influence of advection on the cumulative annual and monthly estimates of the total CO2 flux (Fc), which is often used as an estimate of NEE, over six years using the dependence of Fhadv and Fvadv on u*. When the sum of Fhadv and Fvadv was used to correct monthly Fc, we observed values that differed from the monthly Fc calculated using the traditional u*-filter correction by -16 to 20 g C m-2 mo-1; the mean percentage difference in monthly Fc for these two methods over the six-year period was 10%. When the sum of Fhadv and Fvadv was used to correct annual Fc, we observed a 65% difference compared to the traditional u*-filter approach. Thus, the errors to the local CO2 budget, when Fhadv and Fvadv are ignored, can become large when compounded in cumulative fashion over long time intervals. We conclude that the "micrometeorological" correction (using observations of Fhadv and Fvadv) and the "biological" correction (using the u* filter and the temperature vs. Fc relationship) differ on fundamental mechanistic grounds. The micrometeorological correction is based on aerodynamic mechanisms and shows no correlation to drivers of biological activity. Conversely, the biological correction is based on climatic responses of organisms and has no physical connection to aerodynamic processes. In those cases where they impose corrections of similar magnitude on the cumulative Fc sum, the result is due to a serendipitous similarity in scale but has no clear mechanistic explanation.
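For illustration of the correction step described above, the sketch below adds hypothetical half-hourly advective terms to the turbulent flux before summing to a monthly total, and contrasts this with a simple u*-filter. All numbers, the 0.2 m s-1 threshold and the parameterised respiration value are invented; this is not the authors' processing chain.

    import numpy as np

    # Hypothetical half-hourly fluxes (umol CO2 m-2 s-1); in the study these
    # came from the multi-tower measurements, not from random numbers.
    rng = np.random.default_rng(0)
    n = 48 * 30                          # one month of half-hours
    fc    = rng.normal(-2.0, 3.0, n)     # turbulent eddy-covariance flux
    fhadv = rng.normal( 1.0, 0.5, n)     # horizontal advective flux
    fvadv = rng.normal(-1.0, 0.5, n)     # vertical advective flux
    ustar = rng.uniform(0.05, 0.8, n)    # friction velocity (m s-1)

    seconds = 1800.0
    to_gC = 12e-6 * seconds              # umol m-2 s-1 over 30 min -> g C m-2

    # "Micrometeorological" correction: add measured advection to Fc.
    nee_advection = np.sum((fc + fhadv + fvadv) * to_gC)

    # Conventional "biological" u*-filter: replace calm periods (assumed
    # threshold 0.2 m s-1) with a parameterised respiration value (assumed 3.0).
    fc_filtered = np.where(ustar < 0.2, 3.0, fc)
    nee_ustar = np.sum(fc_filtered * to_gC)

    print(f"monthly NEE, advection-corrected: {nee_advection:7.1f} g C m-2")
    print(f"monthly NEE, u*-filtered:         {nee_ustar:7.1f} g C m-2")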

3.
Behavioural ecologists often study complex systems in which multiple hypotheses could be proposed to explain observed phenomena. For some systems, simple controlled experiments can be employed to reveal part of the complexity; often, however, observational studies that incorporate a multitude of causal factors may be the only (or preferred) avenue of study. We assess the value of recently advocated approaches to inference in both contexts. Specifically, we examine the use of information theoretic (IT) model selection using Akaike’s information criterion (AIC). We find that, for simple analyses, the advantages of switching to an IT-AIC approach are likely to be slight, especially given recent emphasis on biological rather than statistical significance. By contrast, the model selection approach embodied by IT approaches offers significant advantages when applied to problems of more complex causality. Model averaging is an intuitively appealing extension to model selection. However, we were unable to demonstrate consistent improvements in prediction accuracy when using model averaging with IT-AIC; our equivocal results suggest that more research is needed on its utility. We illustrate our arguments with worked examples from behavioural experiments.  相似文献   
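As a hedged illustration of the model-averaging step mentioned above, the sketch below computes AIC, Akaike weights and a weighted prediction for a small set of least-squares models; the data and candidate models are invented.

    import numpy as np

    # Invented data: one response, one predictor.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 10, 40)
    y = 2.0 + 0.5 * x + rng.normal(0, 1.0, x.size)

    def aic_ls(y, yhat, k):
        """AIC for a least-squares model with k parameters (including sigma)."""
        n = y.size
        rss = np.sum((y - yhat) ** 2)
        return n * np.log(rss / n) + 2 * k

    candidates = {1: "constant", 2: "linear", 3: "quadratic"}
    aics, preds = {}, {}
    for n_coef, name in candidates.items():
        coeffs = np.polyfit(x, y, n_coef - 1)
        yhat = np.polyval(coeffs, x)
        aics[name] = aic_ls(y, yhat, k=n_coef + 1)
        preds[name] = yhat

    # Akaike weights: relative support for each candidate model.
    best = min(aics.values())
    raw = {m: np.exp(-0.5 * (a - best)) for m, a in aics.items()}
    total = sum(raw.values())
    weights = {m: w / total for m, w in raw.items()}

    # Model-averaged prediction: weight each model's fitted values.
    y_avg = sum(weights[m] * preds[m] for m in weights)
    print({m: round(w, 3) for m, w in weights.items()})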

4.
A typical two-phase debris flow exhibits a high and steep flow head consisting of rolling boulders and cobbles with intermittent or fluctuating velocity. The relative motion between the solid phase and the liquid phase is obvious. The motion of a two-phase debris flow depends not only on the rheological properties of the flow, but also on the energy transmission between the solid and liquid phases. Several models have been developed to study two-phase debris flows. An essential shortcoming of most of these models is that they omit the interaction between the two phases and fail to distinguish the roles of the different materials in two-phase debris flows. This paper analyzes the intermittent behaviour of two-phase debris flows based on videos of debris flows in the field and on flume experiments. In the experiments, tracer particles were used to measure the velocity of the solid phase, and the velocity of the liquid phase was estimated from the water velocity at the surface of the debris flow. The experiments showed that the height of the head of the two-phase debris flow increased gradually in the initiation stage and reached equilibrium at a certain distance from the start of the flow. Both the height growth and the velocity of the flow head fluctuated. Model equations were established, and the analyses showed that the average velocity of the two-phase debris flow head is proportional to the flood discharge and inversely proportional to the volume of the flow head.

5.
Exploring the response of an ecosystem, and the subsequent tradeoffs among its biological community, to human perturbations remains a key challenge for the implementation of an ecosystem approach to fisheries (EAF). To address this and related issues, we developed two network (or energy budget) models, Ecopath and Econetwrk, for the Gulf of Maine ecosystem. These models included 31 network "nodes" or biomass state variables across a broad range of trophic levels, with the present emphasis on elucidating the role of small pelagics. After initial network balancing, various perturbation scenarios were evaluated to explore how potential changes to different fish, fisheries and lower trophic levels can affect model outputs. Across all scenarios and interpretations thereof, there was minimal change at the second trophic level, and most of the "rebalancing" after a perturbation occurred via alteration of the diet matrix. Yet the model results from perturbations to a balanced energy budget fall into one of three categories. First, some model results were intuitive and in obvious agreement with established ecological and fishing theory. Second, some model results were counter-intuitive upon initial observation, seemingly contradictory to known ecological and fishing theory, but upon further examination were explainable given the constraints of an equilibrium energy budget. Finally, some results were counter-intuitive and difficult to reconcile with theory or with further examination of the equilibrium constraints. A detailed accounting of biomass flows for example scenarios explores some of the non-intuitive results more rigorously. Collectively, these results imply a need to carefully track the biomass flows resulting from any given perturbation and to critically evaluate the conditions under which a new equilibrium is obtained for these types of models, which has implications for dynamic simulations based on them. Given these caveats, the role of small pelagics as a prominent component of this ecosystem remains a robust conclusion. We discuss how one might use this approach in the context of further developing an EAF, recognizing that a more holistic, integrated perspective will be required as we continue to evaluate tradeoffs among marine biological communities.

6.
Emergy algebra: Improving matrix methods for calculating transformities
Transformity is one of the core concepts in Energy Systems Theory and it is fundamental to the calculation of emergy. Accurate evaluation of transformities and other emergy per unit values is essential for the broad acceptance, application and further development of emergy methods. Since the rules for the calculation of emergy are different from those for energy, particular calculation methods and models have been developed for use in the emergy analysis of networks, but double counting errors still occur because of errors in applying these rules when estimating the emergies of feedbacks and co-products. In this paper, configurations of network energy flows were classified into seven types based on commonly occurring combinations of feedbacks, splits, and co-products. A method of structuring the network equations for each type using the rules of emergy algebra, which we called “preconditioning” prior to calculating transformities, was developed to avoid double counting errors in determining the emergy basis for energy flows in the network. The results obtained from previous approaches, the Track Summing Method, the Minimum Eigenvalue Model and the Linear Optimization Model, were reviewed in detail by evaluating a hypothetical system, which included several types of interactions and two inputs. A Matrix Model was introduced to simplify the calculation of transformities and it was also tested using the same hypothetical system. In addition, the Matrix Model was applied to two real case studies, which previously had been analyzed using the existing method and models. Comparison of the three case studies showed that if the preconditioning step to structure the equations was missing, double counting would lead to large errors in the transformity estimates, up to 275 percent for complex flows with feedback and co-product interactions. After preconditioning, the same results were obtained from all methods and models. The Matrix Model reduces the complexity of the Track Summing Method for the analysis of complex systems, and offers a more direct and understandable link between the network diagram and the matrix algebra, compared with the Minimum Eigenvalue Model or the Linear Optimization Model.  相似文献   
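A toy sketch of the matrix idea, assuming a simple three-node network with splits only (no feedbacks or co-products, so no preconditioning is needed); the flows and the emergy input are invented and this is not the hypothetical system analysed in the paper.

    import numpy as np

    # Toy 3-node network (splits only), invented numbers.
    # f[i, j] = energy flow from node i to node j (J per unit time).
    f = np.array([[0.0, 100.0, 20.0],
                  [0.0,   0.0, 60.0],
                  [0.0,   0.0,  0.0]])
    z = np.array([1.2e5, 0.0, 0.0])       # external emergy inputs (sej per unit time)
    out = np.array([120.0, 60.0, 10.0])   # total energy output of each node (J per unit time)

    # Emergy balance for splits: M_j = sum_i M_i * f[i, j] / out_i + z_j,
    # i.e. (I - A^T) M = z with A[i, j] = f[i, j] / out_i.
    a = f / out[:, None]
    m = np.linalg.solve(np.eye(3) - a.T, z)

    transformity = m / out                 # sej per J of output
    print(transformity)

Networks with feedbacks and co-products require the preconditioning step described in the paper before a linear solve of this kind is valid.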

7.
Ecological Modelling, 2007, 201(1): 89-96
Conditional value at risk (CVaR) was developed as a coherent measure of expected loss given that actual loss exceeds some value at risk (VaR) threshold. To date the concept has been primarily used to support quantitative risk assessment for investment decisions and portfolio management, using stochastic financial models to minimise the risk of unacceptable monetary loss. Intriguingly, the models and concepts are potentially adaptable to water resources planning and operational problems. This paper explores the application of CVaR within the context of identifying the risk of macro-economic damage to the fishery resources of Tonle Sap given reduced volumes of flow on the mainstream Mekong during the flood season. Emphasis is placed on simulating the linkages between the seasonally available flows in the Mekong mainstream, Tonle Sap water levels, annual fish catch and its economic value. We present scenarios using real hydrological and fish catch data along with exploratory concepts of contingency fund costs in terms of national and international aid requirements. The objective is to estimate the potential economic loss at a prescribed level of probability and to illustrate how VaR and CVaR may be calculated in this context. We demonstrate the properties of these risk measures through their behaviour under continuous and discontinuous loss distributions. We show that CVaR has advantages over VaR even under a relatively simple modelling approach. In the case where a loss distribution has discontinuities, VaR is potentially a poor measure of risk as it can vary unacceptably with a small increase in probability level. CVaR is stable in these situations. Here we find that when the loss distribution is continuous the CVaR is only marginally higher than the VaR. However, for the more realistic model where the loss distribution is discontinuous, the CVaR is substantially greater. We demonstrate the potential use of these two risk measures on a simple set of models of the Tonle Sap fishery in Cambodia. The sustainability of this fishery is crucial to the country in order to avoid even further dependence on international donor aid. Estimating the financial risk to which the national government and potential aid donors might be exposed given any damage to the fishery is the essence of this exploratory study of VaR and CVaR.
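A minimal numerical sketch of the two risk measures, using an invented loss sample in place of the Tonle Sap flow and catch models.

    import numpy as np

    def var_cvar(losses, alpha=0.95):
        """Empirical value at risk and conditional value at risk.

        VaR is the alpha-quantile of the loss distribution; CVaR is the mean
        loss in the tail at or beyond that quantile, so CVaR >= VaR.
        """
        losses = np.asarray(losses)
        var = np.quantile(losses, alpha)
        cvar = losses[losses >= var].mean()
        return var, cvar

    # Invented annual economic losses (million USD), e.g. from a catch-value model.
    rng = np.random.default_rng(42)
    losses = rng.lognormal(mean=2.0, sigma=0.6, size=10_000)

    var95, cvar95 = var_cvar(losses, alpha=0.95)
    print(f"VaR(95%)  = {var95:6.1f} M USD")
    print(f"CVaR(95%) = {cvar95:6.1f} M USD")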

8.
Ecological Modelling, 2007, 208(1): 9-16
Food webs are constructed as structural directed graphs that describe “who eats whom,” but it is common to interpret them as energy flow diagrams where predation represents an energy transfer from the prey to the predator. It is the aim of this work to demonstrate that food webs are incomplete as energy flow diagrams if they ignore passive flows to detritus (dead organic material). While many ecologists do include detritus in conceptual and mathematical models, the detrital omission is still commonly found. Often detritus is either ignored or treated as an unlimited energy source, yet all organisms contribute to the detritus pool, which can be an energy source for other species in the system. This feedback loop is of high importance, since it increases the number of pathways available for energy flows, revealing the significance of indirect effects, and making the functional role of the top predators less clear. In this work we propose the modified niche model by adding a detritus compartment to the niche model. We demonstrate the effect of structural loops that result from feeding on detritus, by comparing empirical data sets to five different assembly models: (1) cascade, (2) constant connectance, (3) niche, (4) modified niche (original in this work), and (5) cyber-ecosystem. Of these models, only the last two explicitly include detritus. We show that when passive flows to detritus are included in the food web structure, the structure becomes more robust to the removal of individual nodes or connections. In addition, we show that food web models that include the detritus feedback loop perform better with respect to several structural network metrics.  相似文献   
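The structural change can be sketched as follows: build a standard niche-model web and then append a detritus node that every species contributes to and that a subset of detritivores feeds on. The connectance value and the way detritivores are chosen below are illustrative assumptions, not the authors' exact modified niche model.

    import numpy as np

    def niche_model(s, connectance, rng):
        """Standard niche model: returns an s x s adjacency matrix a[i, j] = 1
        if species j eats species i."""
        n = np.sort(rng.uniform(0, 1, s))            # niche values
        beta = 1.0 / (2.0 * connectance) - 1.0
        r = n * rng.beta(1.0, beta, s)               # feeding ranges
        c = rng.uniform(r / 2.0, n)                  # range centres
        a = np.zeros((s, s), dtype=int)
        for j in range(s):
            prey = (n >= c[j] - r[j] / 2.0) & (n <= c[j] + r[j] / 2.0)
            a[prey, j] = 1
        return a

    def add_detritus(a, rng, frac_detritivores=0.3):
        """Append a detritus compartment: all species contribute to it (passive
        flow), and a random subset of species feeds on it (the feedback loop)."""
        s = a.shape[0]
        b = np.zeros((s + 1, s + 1), dtype=int)
        b[:s, :s] = a
        b[:s, s] = 1                                  # every species -> detritus
        detritivores = rng.random(s) < frac_detritivores
        b[s, :s][detritivores] = 1                    # detritus -> detritivores
        return b

    rng = np.random.default_rng(7)
    web = niche_model(s=30, connectance=0.15, rng=rng)
    web_d = add_detritus(web, rng)
    print(web.sum(), "links without detritus;", web_d.sum(), "links with detritus")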

9.
Increasing difficulties associated with balancing consumptive demands for water and achieving ecological benefits in aquatic ecosystems provide opportunities for new ecosystem-scale ecological response models to assist managers. Using an Australian estuary as a case study, we developed a novel approach to create a data-derived state-and-transition model. The model identifies suites of co-occurring birds, fish, benthic invertebrates and aquatic macrophytes (as ‘states’) and the changing physico-chemical conditions that are associated with each (‘transitions’). The approach first used cluster analysis to identify sets of co-occurring biota. Differences in the physico-chemical data associated with each state were identified using classification trees, with the biotic distinctness of the resultant statistical model tested using analysis of similarities. The predictive capacity of the model was tested using new cases. Two models were created using different time-steps (annual and quarterly) and then combined to capture both longer-term trends and more-recent declines in ecological condition. We identified eight ecosystem states that were differentiated by a mix of water-quantity and water-quality variables. Each ecosystem state represented a distinct biotic assemblage under well-defined physico-chemical conditions. Two ‘basins of attraction’ were identified, with four tidally-influenced states, and another four independent of tidal influence. Within each basin, states described a continuum of relative health, manifest through declining taxonomic diversity and abundances. The main threshold determining relative health was whether freshwater flows had occurred in the region during the previous 339 days. Canonical analyses of principal coordinates tested the predictive capacity of the model and demonstrated that the variance in the environmental data set was well captured (87%) with 52% of the variance in the biological data set also captured. The latter increased to >80% when long- and short-term biological data were analysed separately, indicating that the model described the available data for the Coorong well. This approach thus created a data-derived, multivariate model, where neither states nor transitions were determined a priori. The approach did not over-fit the data, was robust to patchy or missing data, the choice of initial clustering technique and random errors in the biological data set, and was well-received by local natural resource managers. However, the model did not capture causal relationships and requires additional testing, particularly during future episodes of ecological recovery. The approach shows significant promise for simplifying management definitions of ecological condition and, via scenario analyses, can be used to assist in manager decision-making of large, complex aquatic ecosystems in the future.  相似文献   
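A minimal sketch of the two statistical steps named above (clustering biota into states, then explaining state membership with a classification tree), using invented data and scikit-learn; the analyses of similarities and canonical analyses of principal coordinates used in the study are not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(3)
    n_sites = 120

    # Invented data: abundances of 12 taxa and 4 physico-chemical variables
    # (e.g. salinity, depth, days since freshwater flow, turbidity) per sample.
    biota = rng.poisson(lam=rng.uniform(1, 20, 12), size=(n_sites, 12))
    env = rng.normal(size=(n_sites, 4))

    # Step 1: identify "states" as clusters of co-occurring biota.
    states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(biota)

    # Step 2: explain state membership from the physico-chemical variables.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(env, states)
    print(export_text(tree, feature_names=["salinity", "depth",
                                           "days_since_flow", "turbidity"]))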

10.
Based on numerical experiments with a new physiologically structured population model, we demonstrate that predator physiology under low food and starving conditions can have substantial implications for population dynamics in predator-prey interactions. We focused on Daphnia-algae interactions as a model system and developed a new dynamic energy budget (DEB) model for individual daphnids. This model integrates the κ-rule approach common to net assimilation models into a net-production model, but uses a fixed allocation of net-productive energy in juveniles. The new DEB model agrees well with the results of life history experiments with Daphnia. Compared to a pure κ-rule model, the new allocation scheme leads to significantly earlier maturation at low food levels and thus is in better agreement with the data. Incorporation of the new DEB model into a physiologically structured population model using a box-car elevator technique revealed that the dynamics of Daphnia-algae interactions are highly sensitive to the assumptions on the energy allocation of juveniles under low food conditions. Additionally, we show that other energy allocation rules of our DEB model concerning decreasing food levels and starving conditions at the individual level also have strong implications for Daphnia-algae interactions at the population level. With increasing carrying capacity of the algae, a stable equilibrium with coexistence of Daphnia and algae occurs and then shifts to limit cycles. The amplitudes of the limit cycles increase with increasing percentage of sustainable weight loss. If a κ-rule energy allocation is applied to juveniles, the stable equilibrium occurs for a much narrower range of algal carrying capacities, the algal concentration at equilibrium is about two times larger, and the range of algal carrying capacities at which daphnids become extinct extends to higher carrying capacities than in the new DEB model. Because predator-prey dynamics are very sensitive to predator physiology under low food and starving conditions, empirical constraints on predator physiology under these conditions are essential when comparing model results with observations in laboratory experiments or in the field.

11.

Goal and Scope

Details about the ecological function of lake shores as ecotones between land and lakes are not well-known. These ecotones are also heavily exploited and, in part, considerably changed. Whereas anthropogenic nutrient loading is decreasing, structural changes are increasing. Unfortunately, there is a deficit in methods of evaluation and decision processes.

Main Focus

Even the EU Water Framework Directive did not remedy this deficit, as lake shores were included only implicitly. In this article several evaluation methods and their conceptual groundwork are presented. However, these methods were not developed for lake shore research. Therefore, criteria are proposed which could fulfill the specific demands of lake shore assessments. The management of lake shores should consider structural and biological parameters, and be acceptable to local residents.

Results and Conclusions

In addition to conventional biodiversity methods, the ecology of lake shores could also be represented by a functional food web, for example for benthic invertebrates. But even the quantification of biodiversity alone creates many problems. A simple biodiversity index cannot meet all the demands placed on a method of evaluation in complex situations, especially when coupled with additional information on structure, practicability, costs, etc. For these reasons, assessments for future management cannot be based on such an index.

Outlook

A possible approach to include this complexity in assessments is to apply mathematical models and theoretical order concepts.  相似文献   

12.
Ecosystems are often modeled as stocks of matter or energy connected by flows. Network environ analysis (NEA) is a set of mathematical methods for using powers of matrices to trace energy and material flows through such models. NEA has revealed several interesting properties of flow-storage networks, including the dominance of indirect effects and the tendency for networks to create mutually positive interactions between species. However, the applicability of NEA is greatly limited by the fact that it can only be applied to models at constant steady states. In this paper, we present a new, computationally oriented approach to environ analysis called dynamic environ approximation (DEA). As a test of DEA, we use it to compute compartment throughflow in two implementations of a model of energy flow through an oyster reef ecosystem. We use a newly derived equation to compute model throughflow and compare its output to that of DEA. We find that DEA approximates the exact results given by this equation quite closely (in this particular case, with a mean Euclidean error ranging between 0.0008 and 0.21), which gives a sense of how closely it reproduces other NEA-related quantities that cannot be exactly computed; we also discuss how to reduce this error. An application to calculating indirect flows in ecosystems is also discussed, and the dominance of indirect effects in a nonlinear model is demonstrated.
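For readers unfamiliar with the steady-state quantities that DEA approximates, the sketch below computes throughflow and the integral (direct plus indirect) flow matrix for an invented three-compartment network; it is not the oyster reef model and uses only the standard steady-state NEA relations.

    import numpy as np

    # Toy steady-state network, invented numbers.
    # f[i, j] = flow from compartment j to compartment i (energy per unit time).
    f = np.array([[0.0, 0.0, 0.0],
                  [8.0, 0.0, 1.0],
                  [2.0, 5.0, 0.0]])
    z = np.array([10.0, 0.0, 0.0])     # boundary (external) inputs

    # Input-based throughflow: boundary input plus internal inflows.
    t = z + f.sum(axis=1)

    # Dimensionless flow matrix and integral flow matrix N = (I - G)^-1.
    g = f / t                          # g[i, j] = f[i, j] / T_j
    n_int = np.linalg.inv(np.eye(3) - g)

    # N maps boundary inputs to throughflow: T = N z.
    print(t, n_int @ z)

    # Indirect flow intensity: everything in N beyond the identity and G.
    indirect = n_int - np.eye(3) - g
    print("indirect / direct flow intensity:", indirect.sum() / g.sum())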

13.
Abstract:  Caughley (1994) argued that researchers working on threatened populations tended to follow the "small population paradigm" or the "declining population paradigm," and that greater integration of these paradigms was needed. Here I suggest that two related paradigms exist at the broader spatial scale, namely the metapopulation paradigm and habitat paradigm, and that these two paradigms also need to be integrated if we are to provide sound management advice. This integration is not trivial, and I outline five problems that need to be addressed: (1) habitat variables may not measure habitat quality, so site-specific data on vital rates are needed to resolve the effects of habitat quality and metapopulation dynamics; (2) measurements of vital rates may be confounded by movements; (3) vital rates may be density dependent; (4) vital rates may be affected by genotype; and (5) vital rates cannot be measured in unoccupied patches. I reviewed papers published in Conservation Biology from 1994 to 2003 and found 41 studies that analyzed data from 10 or more sites to understand the factors limiting species' distributions. Five of the analyses presented were purely within the metapopulation paradigm, 14 were purely within the habitat paradigm, 17 involved elements of both paradigms, and 7 were theoretically ambiguous (2 papers presented 2 distinct analyses and were counted twice). This suggests that many researchers appreciate the need to integrate the paradigms. Only one study, however, used data on vital rates to resolve the effects of habitat quality and metapopulation dynamics (problem 1), and this study did not address problems 2–5. I conclude that more intensive research incorporating site-specific data on vital rates and movement is needed to complement the numerous analyses of distributional data being produced.  相似文献   

14.
Ecological network analysis: network construction   总被引:1,自引:0,他引:1  
Ecological Modelling, 2007, 208(1): 49-55
Ecological network analysis (ENA) is a systems-oriented methodology for analyzing within-system interactions in order to identify holistic properties that are otherwise not evident from direct observations. Like any analysis technique, the accuracy of the results is only as good as the data available, but the additional challenge is that the data need to characterize an entire ecosystem's flows and storages. Thus, data requirements are substantial. As a result, not many network models have in fact been constructed, and development of the network analysis methodology has progressed largely within the purview of a few established models. In this paper, we outline the steps of one approach to constructing network models. Lastly, we also provide a brief overview of the algorithmic methods used to construct food web typologies when empirical data are not available. It is our aim that such an effort aids other researchers in considering the construction of such models and encourages further refinement of this procedure.

15.
Akaike’s information criterion (AIC) is increasingly being used in analyses in the field of ecology. This measure allows one to compare and rank multiple competing models and to estimate which of them best approximates the “true” process underlying the biological phenomenon under study. Behavioural ecologists have been slow to adopt this statistical tool, perhaps because of unfounded fears regarding the complexity of the technique. Here, we provide, using recent examples from the behavioural ecology literature, a simple introductory guide to AIC: what it is, how and when to apply it and what it achieves. We discuss multimodel inference using AIC—a procedure which should be used where no one model is strongly supported. Finally, we highlight a few of the pitfalls and problems that can be encountered by novice practitioners.  相似文献   
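A short worked illustration of the criterion itself, AIC = 2k - 2 ln(L), with invented maximised log-likelihoods and parameter counts for three competing models of the same data set.

    # Invented maximised log-likelihoods and parameter counts.
    models = {
        "null":            {"loglik": -214.3, "k": 2},
        "sex":             {"loglik": -205.1, "k": 3},
        "sex + body mass": {"loglik": -204.6, "k": 4},
    }

    # AIC = 2k - 2 ln(L); smaller values indicate better expected out-of-sample fit.
    for name, m in models.items():
        m["aic"] = 2 * m["k"] - 2 * m["loglik"]

    best = min(m["aic"] for m in models.values())
    for name, m in models.items():
        print(f"{name:15s}  AIC = {m['aic']:7.1f}   dAIC = {m['aic'] - best:5.1f}")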

16.
Ecotoxicological investigations focus on biological systems and their response to chemically induced stress. Experimental techniques are much more developed than deterministic dynamic modelling. In this methodological contribution a technique is presented, based on lattice theory. This technique, also called the Hasse diagram technique, allows data analysis with respect to comparative evaluation. Hasse diagrams are used
  • to suggest a possible measure of microbial diversity,
  • to analyze dependencies between phospholipid fatty acids and simple geochemical parameters on an ordinal scale, and
  • to visualise complex results of interactions of humic substances with xenobiotics.
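A small sketch of the partial-order idea behind the Hasse diagram technique: one object dominates another if it scores at least as high on every attribute, and the diagram draws only the cover relations. The objects and attribute values below are invented.

    # Hypothetical objects (e.g. sampling sites) scored on three ordinal attributes.
    objects = {
        "A": (3, 2, 5),
        "B": (2, 2, 4),
        "C": (1, 1, 1),
        "D": (3, 1, 2),
    }

    def dominates(x, y):
        """x >= y in every attribute and x differs from y."""
        return all(a >= b for a, b in zip(x, y)) and x != y

    # Order relation: which pairs of objects are comparable?
    order = {(p, q) for p in objects for q in objects
             if dominates(objects[p], objects[q])}

    # Cover relation (the edges drawn in a Hasse diagram): p covers q if p > q
    # and no third object lies strictly between them.
    covers = {(p, q) for (p, q) in order
              if not any((p, r) in order and (r, q) in order for r in objects)}

    print("comparable pairs:", sorted(order))
    print("Hasse diagram edges:", sorted(covers))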

    17.
    Mean concentration fields of strongly advected non-buoyant discharges are characterised with a double-Gaussian assumption. Comparisons with experimental data show that the approximation provides a reasonable representation of the cross-sectional profiles. The self-similarity of these profiles enables their form to be represented by two additional parameters, one describing the relative separation of the peaks and the other the ratio of the cross-sectional spreads. Values for these additional parameters are determined from experimental data. This systematic approach to characterising the strongly advected flows provides a consistent framework for determining spreading rates and concentration ratios, such as the peak to centreline maximum and the peak to top hat. The double-Gaussian framework also provides a basis for comparisons with the CorJet and VisJet numerical models. In addition the double-Gaussian assumption is employed to interpret data obtained using the Light Attenuation technique. This is a relatively simple measuring system, which provides depth integrated concentration information. The data obtained using this technique is shown to be generally consistent with that from previous studies.  相似文献   
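A sketch of the double-Gaussian assumption with the two additional shape parameters (relative peak separation and spread ratio) made explicit; the functional form and the parameter values below are illustrative and are not the fitted values reported in the study.

    import numpy as np

    def double_gaussian(y, z, c0, b, sep, spread_ratio):
        """Cross-sectional mean concentration assumed to be the sum of two
        equal Gaussians placed symmetrically about the plume centreline.

        y, z         : coordinates across the section (same units as b)
        c0           : peak concentration scale
        b            : lateral spread of each Gaussian
        sep          : relative separation of the two peaks (offset = sep * b)
        spread_ratio : ratio of the two cross-sectional spreads
        """
        bz = spread_ratio * b
        off = sep * b
        return 0.5 * c0 * (
            np.exp(-((y - off) ** 2) / (2 * b ** 2) - z ** 2 / (2 * bz ** 2))
            + np.exp(-((y + off) ** 2) / (2 * b ** 2) - z ** 2 / (2 * bz ** 2))
        )

    # Example: ratio of the off-axis peak to the centreline value for one
    # illustrative parameter set.
    peak = double_gaussian(1.2, 0.0, 1.0, 1.0, 1.2, 0.7)
    centre = double_gaussian(0.0, 0.0, 1.0, 1.0, 1.2, 0.7)
    print(peak / centre)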

    18.
    Economics of the fishery has focused on the wastefulness of common pool resource exploitation. Pure open access fisheries dissipate economic rents and degrade biological stocks. Biologically managed fisheries also dissipate rents but are thought to hold biological stocks at desired levels. We develop and estimate an empirical bioeconomic model of the Gulf of Mexico gag fishery that questions the presumptive success of biological management. Unlike previous bioeconomic life history studies, we provide a way to circumvent calibration problems by embedding our estimation routine directly in the dynamic bioeconomic model. We nest a standard biological management model that accounts for complex life history characteristics of the gag. Biological intuition suggests that a spawning season closure will reduce fishing pressure and increase stocks, and simulations of the biological management model confirm this finding. However, simulations of the empirical bioeconomic model suggest that these intended outcomes of the spawning closure do not materialize. The behavioral response to the closure appears to be so pronounced that it offsets the restriction in allowable fishing days. Our results indicate that failure to account for fishing behavior may play an important role in fishery management failures.  相似文献   

    19.
    Persistence of species in fragmented landscapes depends on dispersal among suitable breeding sites, and dispersal is often influenced by the "matrix" habitats that lie between breeding sites. However, measuring effects of different matrix habitats on movement and incorporating those differences into spatially explicit models to predict dispersal is costly in terms of time and financial resources. Hence a key question for conservation managers is: Do more costly, complex movement models yield more accurate dispersal predictions? We compared the abilities of a range of movement models, from simple to complex, to predict the dispersal of an endangered butterfly, the Saint Francis' satyr (Neonympha mitchellii francisci). The value of more complex models differed depending on how value was assessed. Although the most complex model, based on detailed movement behaviors, best predicted observed dispersal rates, it was only slightly better than the simplest model, which was based solely on distance between sites. Consequently, a parsimony approach using information criteria favors the simplest model we examined. However, when we applied the models to a larger landscape that included proposed habitat restoration sites, in which the composition of the matrix was different than the matrix surrounding extant breeding sites, the simplest model failed to identify a potentially important dispersal barrier, open habitat that butterflies rarely enter, which may completely isolate some of the proposed restoration sites from other breeding sites. Finally, we found that, although the gain in predicting dispersal with increasing model complexity was small, so was the increase in financial cost. Furthermore, a greater fit continued to accrue with greater financial cost, and more complex models made substantially different predictions than simple models when applied to a novel landscape in which butterflies are to be reintroduced to bolster their populations. This suggests that more complex models might be justifiable on financial grounds. Our results caution against a pure parsimony approach to deciding how complex movement models need to be to accurately predict dispersal through the matrix, especially if the models are to be applied to novel or modified landscapes.  相似文献   

    20.
    An important component of the biological assessment of stream condition is an evaluation of the direct or indirect effects of human activities or disturbances. The concept of a "reference condition" is increasingly used to describe the standard or benchmark against which current condition is compared. Many individual nations, and the European Union as a whole, have codified the concept of reference condition in legislation aimed at protecting and improving the ecological condition of streams. However, the phrase "reference condition" has many meanings in a variety of contexts. One of the primary purposes of this paper is to bring some consistency to the use of the term. We argue the need for a "reference condition" term that is reserved for referring to the "naturalness" of the biota (structure and function) and that naturalness implies the absence of significant human disturbance or alteration. To avoid the confusion that arises when alternative definitions of reference condition are used, we propose that the original concept of reference condition be preserved in this modified form of the term: "reference condition for biological integrity," or RC(BI). We further urge that these specific terms be used to refer to the concepts and methods used in individual bioassessments to characterize the expected condition to which current conditions are compared: "minimally disturbed condition" (MDC); "historical condition" (HC); "least disturbed condition" (LDC); and "best attainable condition" (BAC). We argue that each of these concepts can be narrowly defined, and each implies specific methods for estimating expectations. We also describe current methods by which these expectations are estimated including: the reference-site approach (condition at minimally or least-disturbed sites); best professional judgment; interpretation of historical condition; extrapolation of empirical models; and evaluation of ambient distributions. Because different assumptions about what constitutes reference condition will have important effects on the final classification of streams into condition classes, we urge that bioassessments be consistent in describing the definitions and methods used to set expectations.  相似文献   
