Similar Articles
20 similar articles found.
1.
The ability to detect and to develop a precise and accurate estimate of the entrainment mortality fraction is an important step in projecting power plant impacts on future fish population levels. Recent work indicates that these mortalities may be considerably less than 100% for some fish species in the early life stages. Point estimates of the entrainment mortality fraction have been developed based on probabilistic arguments, but the precision of these estimates has not been studied beyond the simple statistical test of the null hypothesis that no entrainment mortality exists. The ability to detect entrainment mortality is explored as a function of the sample sizes (numbers of organisms collected) at the intake and discharge sampling stations of a power plant and of the proportion of organisms found alive in the intake samples (intake survival). Minimum detectable entrainment mortality, confidence interval width, and type II error (probability of accepting the null hypothesis of no entrainment mortality when there is mortality) are considered. Increasing sample size and/or decreasing sampling mortality will decrease the minimum detectable entrainment mortality, confidence interval width, and type II error for a given level of type I error. The results of this study are considered in the context of designing useful monitoring programs for determining the entrainment mortality fraction. Preliminary estimates of intake survival and the entrainment mortality fraction can be used to obtain estimates of the sample size needed for a specified level of confidence interval width or type II error. Final estimates of the intake survival and the entrainment mortality fraction can be used to determine the minimum detectable entrainment mortality and the type II error.
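A minimal sketch of how the type II error behaves for given sample sizes and intake survival, assuming a one-sided two-proportion z test with discharge survival modeled as intake survival times (1 − mortality); the sample sizes, survival, and mortality values are illustrative, not taken from the paper:

```python
from scipy.stats import norm

def type_ii_error(n_i, n_d, p_i, m, alpha=0.05):
    """Probability of missing a true entrainment mortality fraction m.

    n_i, n_d: organisms sampled at the intake and discharge stations;
    p_i: intake survival; discharge survival is assumed to be p_i * (1 - m).
    """
    p_d = p_i * (1.0 - m)
    p_bar = (n_i * p_i + n_d * p_d) / (n_i + n_d)                 # pooled survival under H0
    se0 = (p_bar * (1 - p_bar) * (1 / n_i + 1 / n_d)) ** 0.5      # std. error under H0
    se1 = (p_i * (1 - p_i) / n_i + p_d * (1 - p_d) / n_d) ** 0.5  # std. error under H1
    z_crit = norm.ppf(1 - alpha)                                  # one-sided critical value
    # beta = P(fail to reject "no mortality" | true mortality m)
    return norm.cdf((z_crit * se0 - (p_i - p_d)) / se1)

# e.g., 500 organisms per station, 80 percent intake survival, 10 percent mortality:
print(type_ii_error(500, 500, 0.8, 0.10))   # about 0.09
```

Increasing n_i and n_d, or raising intake survival, drives the returned type II error down, matching the abstract's qualitative conclusion.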

2.
ABSTRACT: A stochastic dynamic programming model is applied to a small hydroelectric system. The variation in number of stage iterations and the computer time required to reach steady state conditions with changes in the number of storage states is investigated. The increase in computer time required to develop the storage probability distributions with increase in the number of storage states is reviewed. It is found that for an average of seven inflow states, the largest number of storage states for which it is computationally feasible to develop the storage probability distributions is nine. It is shown that using dynamic program results based on a small number of storage states produces unrealistically skewed storage probability distributions. These skewed distributions are attributed to “trapping” states at the low end of the storage range.
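As a rough illustration of how a storage probability distribution arises, the sketch below iterates the Markov chain induced by an operating policy over a few discretized storage states; the transition matrix is invented for illustration, not taken from the study, and is chosen so that probability mass piles up in the low-storage "trapping" state:

```python
import numpy as np

def steady_state(P, tol=1e-10, max_iter=100_000):
    """Iterate pi <- pi @ P until the storage distribution stops changing."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])   # start from a uniform distribution
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).max() < tol:
            break
        pi = nxt
    return pi

# Three storage states (low, mid, high); rows sum to 1.
P = np.array([[0.7, 0.3, 0.0],
              [0.4, 0.4, 0.2],
              [0.1, 0.5, 0.4]])
print(steady_state(P))   # roughly [0.52, 0.36, 0.12], skewed toward low storage
```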

3.
ABSTRACT: The spatial and temporal variability of dissolved oxygen (DO), biochemical oxygen demand (BOD), nitrate concentration, and total coliform (TC) was investigated at nine sampling stations distributed along the main rivers of the Piracicaba River Basin, a 12,400 km2 catchment located in São Paulo State, one of the most developed regions of Brazil. Spatially, a downstream impoverishment of water quality conditions was observed, as seen in the decrease of DO and the increase of BOD, nitrate, and TC. These changes were probably caused by the accumulating downstream discharge of domestic and industrial sewage. Temporal evaluation of 18 years of data showed that DO decreased with time for the majority of the sampling stations, while BOD, nitrate, and TC increased. A law, approved at the end of 1991, proposed a new water tax on river water extraction for industrial and agricultural use. The amount of this tax is determined according to the water quality of the extracted water. Therefore, the evaluation of the water quality status in this basin is a first step to help resource managers determine the values for this tax.

4.
ABSTRACT: The main objective of this paper is to present a stochastic dynamic programming model useful in determining the optimal operating policy of a single multipurpose surface reservoir. It is the unreliability of forecasting future streamflow that makes reservoir operation a stochastic process. In this paper the stochastic nature of the streamflow is taken into account by considering the correlation between the streamflows of each pair of consecutive time intervals. This interdependence is used to calculate the probability of transition from a given state and stage to its succeeding ones. A dynamic programming model with a physical equation and a stochastic recursive equation is developed to find the optimum operational policy. For illustrative purposes, the model is applied to a real surface water reservoir system.
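A hedged sketch of the backward recursion described here: the physical (mass balance) equation links release to the storage transition, and the recursive equation takes expectations over the inflow transition probabilities. All dimensions, the uniform transition matrix, and the square-root benefit function are stand-ins, not values from the paper:

```python
import numpy as np

n_s, n_q, n_t = 5, 3, 12            # storage states, inflow states, stages
P = np.full((n_q, n_q), 1.0 / n_q)  # P[q, q2] = Prob(next inflow q2 | inflow q)
inflow = np.array([10.0, 20.0, 30.0])
storage = np.linspace(0.0, 100.0, n_s)

def benefit(release):
    return release ** 0.5            # stand-in single-purpose benefit function

V = np.zeros((n_s, n_q))             # value at the final stage
for t in reversed(range(n_t)):
    V_new = np.full((n_s, n_q), -np.inf)
    for s in range(n_s):
        for q in range(n_q):
            for s2 in range(n_s):
                release = storage[s] + inflow[q] - storage[s2]  # physical equation
                if release < 0:
                    continue                                    # infeasible transition
                val = benefit(release) + P[q] @ V[s2]           # stochastic recursion
                V_new[s, q] = max(V_new[s, q], val)
    V = V_new
# The optimal policy is the ending storage s2 achieving the max for each (s, q, t).
```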

5.
ABSTRACT: A review of nonparametric tests for trend leads to the conclusion that the Mann-Whitney, Spearman, and Kendall tests are the best choice for trend detection in water quality time series. Recently these tests have been adapted to account for dependence and seasonality in such series (Lettenmaier, 1976; Hirsch et al., 1972; Hirsch and Slack, 1984). For monotonic trends, a procedure is proposed for selecting the pertinent test according to the characteristics of the time series, and the practical limitations of the tests are also brought out. This procedure has been applied to identify the appropriate trend detection test for the time series of nine water quality parameters at Lake Laflamme (Québec). When a time series can be tested with the Mann-Whitney, Kendall, Spearman, or Lettenmaier (1976) test, the number of observations required to detect trends of a given magnitude, for selected significance and power levels, can be calculated with the power function of the t test. When the test proposed by Hirsch et al. (1984), Hirsch and Slack (1984), or Farrell (1980) needs to be used, the number of observations can only be estimated approximately from the results of empirical power studies.
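For reference, a compact sketch of the Mann-Kendall statistic underlying several of the tests cited, in its independent, non-seasonal form (normal approximation with continuity correction, no tie or serial-correlation adjustment):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Two-sided Mann-Kendall trend test (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / var_s ** 0.5 if s != 0 else 0.0  # continuity correction
    return z, 2 * (1 - norm.cdf(abs(z)))                    # statistic, p-value

rng = np.random.default_rng(1)
series = 0.05 * np.arange(40) + rng.normal(0, 1, 40)        # weak upward trend
print(mann_kendall(series))
```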

6.
ABSTRACT: A stochastic estimation of low flow in the upper reaches of streams is needed for the planning, development, and management of water resources and/or water use systems. In this paper, the definition and development procedure for the stochastic flow duration curve is presented and applied to five catchments located in eastern Japan and to two catchments in western Thailand. The probability distribution of N‐year daily discharge data is extracted at various percentages of time for which specified discharges are equaled or exceeded in a water year. Such a distribution is usually represented by a straight line on log‐normal probability paper. However, some of the probability plots for the annual minimum daily discharge are best represented by a straight line on Weibull probability paper. The effectiveness of the stochastic flow duration curve for the evaluation of flow regime is illustrated through its application. The ten-year probability of the discharge exceeded 97 percent of the time may be recognized as an index of low flow. The recession shape of the lower part of the flow duration curve depends on the strength of low flow persistence.
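A small sketch of the building block involved, the empirical flow duration curve for one water year, with the discharge exceeded 97 percent of the time read off as the low-flow index; the synthetic log-normal flows are an assumption for illustration:

```python
import numpy as np

def flow_duration(q):
    """Return (exceedance probability, discharge) for one year of daily flows."""
    q_sorted = np.sort(q)[::-1]                       # descending discharges
    exceed = np.arange(1, len(q) + 1) / (len(q) + 1)  # Weibull plotting position
    return exceed, q_sorted

rng = np.random.default_rng(0)
q = np.exp(rng.normal(1.0, 0.8, 365))   # synthetic log-normal daily discharges
p, qs = flow_duration(q)
q97 = np.interp(0.97, p, qs)            # discharge exceeded 97 percent of the time
print(q97)
```

Repeating this for each of N years and ranking the resulting Q97 values gives the probability distribution the abstract plots on log-normal or Weibull paper.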

7.
ABSTRACT: Numbers and record lengths of precipitation stations were surveyed in the conterminous United States using climatological data published in 1975 by the National Weather Service (NWS). The total numbers of nonrecording (8247) and recording (3036) gages were about the same as in the 1940s and less than in the late 1950s; about 70 percent of the nonrecording gages have record lengths of 25 years or more. State network densities increased exponentially with population density and long-term average precipitation. Except for a few states, precipitation stations maintained by the NWS are adequate in numbers to ensure a 95 percent statistical probability that state sample means will estimate true means within ± 5 percent.
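The 95 percent / ±5 percent statement rests on the standard sample-size criterion n ≥ (z·CV/e)²; a sketch with an assumed coefficient of variation (the paper's state-level CVs are not reproduced here):

```python
from math import ceil
from scipy.stats import norm

def stations_needed(cv, rel_error=0.05, confidence=0.95):
    """Gages needed so the state sample mean is within rel_error of the true mean."""
    z = norm.ppf(1 - (1 - confidence) / 2)     # 1.96 for 95 percent confidence
    return ceil((z * cv / rel_error) ** 2)

print(stations_needed(cv=0.20))   # assumed 20 percent CV -> 62 gages
```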

8.
Typical tasks of river monitoring network design include the selection of the water quality parameters, selection of sampling and measurement methods for these parameters, identification of the locations of sampling stations, and determination of the sampling frequencies. These primary design considerations may require a variety of objectives, constraints, and solutions. In this study we focus on the optimal river water quality monitoring network design aspect of the overall monitoring program and propose a novel methodology for the analysis of this problem. In the proposed analysis, the locations of sampling sites are determined such that the contaminant detection time is minimized for the river network while achieving maximum reliability for the monitoring system performance. The Altamaha River system in the State of Georgia, USA, is chosen as an example to demonstrate the proposed methodology. The results show that the proposed model can be effectively used for the optimal design of monitoring networks in river systems.
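A deliberately small toy of the two design objectives: pick the station subset that detects a spill soonest and, among ties, most reliably. The travel times, per-station detection probabilities, and single-spill scenario are invented; the paper's actual river-network model is far richer:

```python
from itertools import combinations

travel = {"A": 2.0, "B": 5.0, "C": 8.0, "D": 3.5}      # hours for a spill to reach each site
detect = {"A": 0.90, "B": 0.80, "C": 0.95, "D": 0.70}  # per-site detection reliability

def score(sites):
    time = min(travel[s] for s in sites)       # first selected site the spill reaches
    miss = 1.0
    for s in sites:
        miss *= 1.0 - detect[s]                # network misses only if every site misses
    return time, 1.0 - miss

# choose 2 of 4 sites: minimize detection time, then maximize reliability
best = min(combinations(travel, 2), key=lambda s: (score(s)[0], -score(s)[1]))
print(best, score(best))                       # ('A', 'C') in this toy
```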

9.
Wind is one of the fastest growing renewable energy resources in the electric power system. The availability of wind energy is volatile in nature due to the stochastic behavior of wind speed and the non-linear wind power curve of the wind turbine generator. Because of this imprecision and uncertainty, estimating the availability of wind power has become a challenging issue. In this paper, a Markov Fuzzy Reward technique is proposed for determining the reliability of a wind farm by assessing the availability of wind power. In this technique, the availability of wind power is estimated by treating both the wind farm and the demand as multi-state systems. In addition to availability, reliability indices such as the number of absolute failures, mean time to deficiency, and probability of failures of a wind farm are assessed over a time horizon, which can provide useful information for the power system planner at the wind farm installation stage. A comparative study reveals the efficacy of the proposed Markov Fuzzy Reward approach over the conventional Markov Reward approach.
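For orientation, a sketch of the conventional Markov reward baseline the paper compares against (the fuzzy extension is not attempted here): stationary probabilities of a multi-state wind model, with a unit reward when output covers demand. The transition probabilities, state outputs, and demand are illustrative assumptions:

```python
import numpy as np

P = np.array([[0.90, 0.08, 0.02],    # hourly transitions between
              [0.10, 0.80, 0.10],    # low / medium / high wind states
              [0.02, 0.08, 0.90]])
output = np.array([0.0, 8.0, 20.0])  # MW available in each state
demand = 5.0                         # MW demanded

# stationary distribution: left eigenvector of P for eigenvalue 1
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

reward = (output >= demand).astype(float)   # reward 1 when the state covers demand
print(pi, pi @ reward)                      # state probabilities, availability
```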

10.
Profiles of retained colloids in porous media have frequently been observed to be hyper-exponential or non-monotonic with transport depth under unfavorable attachment conditions, whereas filtration theory predicts an exponential profile. In this work we present a stochastic model for colloid transport and deposition that allows various hypotheses for such deviations to be tested. The model is based on the conventional advective dispersion equation that accounts for first-order kinetic deposition and release of colloids. One or two stochastic parameters can be considered in this model, including the deposition coefficient, the release coefficient, and the average pore water velocity. In the case of one stochastic parameter, the probability density function (PDF) is characterized using log-normal, bimodal log-normal, or a simple two species/region formulation. When two stochastic parameters are considered, then a joint log-normal PDF is employed. Simulation results indicated that variations in the deposition coefficient and the average pore water velocity can both produce hyper-exponential deposition profiles. Bimodal formulations for the PDF were also able to produce hyper-exponential profiles, but with much lower variances in the deposition coefficient. The shape of the deposition profile was found to be very sensitive to the correlation of deposition and release coefficients, and to the correlation of pore water velocity and deposition coefficient. Application of the developed stochastic model to a particular set of colloid transport and deposition data indicated that chemical heterogeneity of the colloid population could not fully explain the observed behavior. Alternative interpretations were therefore proposed based on variability of the pore size and the water velocity distributions.
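A minimal numerical sketch of the one-stochastic-parameter case: averaging the single-rate exponential profile over a log-normal deposition-coefficient PDF (dispersion and release neglected) produces the hyper-exponential bending described. All parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
v = 0.5                                        # pore water velocity (cm/min), assumed
k = rng.lognormal(np.log(0.1), 1.0, 20_000)    # log-normal deposition coeff. (1/min)
x = np.linspace(0.0, 30.0, 61)                 # depth below the column inlet (cm)

# ensemble-average retained profile; each sub-population decays as exp(-k x / v)
profile = np.array([np.mean((k / v) * np.exp(-k * xi / v)) for xi in x])
print(profile[[0, 20, 40, 60]])
# On a semi-log plot this bends upward (hyper-exponential), unlike the
# single-rate filtration-theory profile (k0 / v) * exp(-k0 x / v).
```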

11.
ABSTRACT: This paper focuses on the investigation of the existence of chaotic behavior in the Singapore rainfall data. The procedure for determining the minimum number of variables essential, and the number of variables sufficient, to model the dynamics of the rainfall process was studied. An analysis of the rainfall behavior over different time periods was also conducted. The correlation dimension was used as the basis for discriminating between stochastic and chaotic behavior. Daily rainfall records for durations of 30, 20, 10, 5, 4, 3, 2, and 1 years from six stations were analyzed. The delay time for the phase-space reconstruction was computed using the autocorrelation function approach. The results provide positive evidence of the existence of chaotic behavior in the daily rainfall data. The minimum number of variables essential to model the dynamics of the rainfall process was identified to be 3, while the number of variables sufficient to model the dynamics of the rainfall process ranges from 11 to 18. The results also suggest that the attractor dimensions of rainfall data of longer time periods are higher than those of shorter time periods. The study suggests that a minimum of 1500 data points is required for the computation of the correlation dimension of the rainfall data.
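A bare-bones Grassberger-Procaccia sketch of the correlation-sum computation behind the correlation dimension (the slope of log C(r) against log r over small r is the dimension estimate); the embedding dimension, delay, and random stand-in series are illustrative:

```python
import numpy as np

def correlation_sum(x, m, tau, r):
    """Fraction of embedded point pairs closer than r (dimension m, delay tau)."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)           # distinct pairs only
    return (d[iu] < r).mean()

rng = np.random.default_rng(3)
x = rng.random(500)                        # stand-in for a daily rainfall series
for r in (0.1, 0.2, 0.4):
    print(r, correlation_sum(x, m=3, tau=1, r=r))
```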

12.
Sampling scheme design is an important step in the management of polluted sites. It largely controls the accuracy of remediation cost estimates. In practice, however, sampling is seldom designed to comply with a given level of remediation cost uncertainty. In this paper, we present a new technique that allows one to estimate the number of samples that should be taken at a given stage of investigation to reach a forecasted level of accuracy. The uncertainty is expressed both in terms of volume of polluted soil and overall cost of remediation. This technique provides a flexible tool for decision makers to define the amount of investigation worth conducting from an environmental and financial perspective. The technique is based on nonlinear geostatistics (conditional simulations) to estimate the volume of soil that requires remediation and excavation, and on a function allowing estimation of the total cost of remediation (including investigations). The geostatistical estimation accounts for support effect, information effect, and sampling errors. The cost calculation includes mainly investigation, excavation, remediation, and transportation. The application of the technique to a former smelting work site (lead pollution) demonstrates how the tool can be used. In this example, the forecasted volumetric uncertainty decreases rapidly for a relatively small number of samples (20-50) and then reaches a plateau (after 100 samples). The uncertainty related to the total remediation cost decreases while the expected total cost increases. Based on these forecasts, we show how a risk-prone decision maker would probably decide to take 50 additional samples while a risk-averse decision maker would take 100 samples.
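A compressed sketch of the volume-and-cost uncertainty step: each conditional simulation yields a concentration map, cells above the action level define the excavation volume, and a unit cost converts volumes to a cost distribution. Independent log-normal fields stand in for proper conditional simulations, and the threshold and costs are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_sims, n_cells = 200, 50 * 50       # conditional simulations, grid cells
cell_volume = 10.0                   # m^3 of soil per cell (assumed)
threshold = 300.0                    # mg/kg lead action level (assumed)
unit_cost = 80.0                     # cost per m^3 excavated and treated (assumed)

sims = rng.lognormal(np.log(150.0), 0.9, size=(n_sims, n_cells))
volumes = (sims > threshold).sum(axis=1) * cell_volume   # m^3 per simulation
costs = volumes * unit_cost
print(np.percentile(costs, [5, 50, 95]))   # remediation cost uncertainty band
```

Rerunning this with simulations conditioned on progressively more samples is what lets the forecasted uncertainty band be plotted against sample count.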

13.
ABSTRACT: Recent developments in water quality monitoring have generated interest in combining non-probability and probability data to improve water quality assessment. The Interagency Task Force on Water Quality Monitoring has taken the lead in exploring data combination possibilities. In this paper we take a developed statistical algorithm for combining the two data types and present an efficient process for implementing the desired data augmentation. In a case study, simulated Environmental Protection Agency (EPA) Environmental Monitoring and Assessment Program (EMAP) probability data are combined with auxiliary monitoring station data. Auxiliary stations were identified in the STORET water quality database. The sampling frame is constructed using ARC/INFO and EPA's Reach File-3 (RF3) hydrography data. The procedures for locating auxiliary stations, constructing an EMAP-SWS sampling frame, simulating pollutant exposure, and combining EMAP and auxiliary stations were developed as a decision support system (DSS). In the case study with EMAP, the DSS was used to quantify the expected increases in estimate precision. The benefit of using auxiliary stations in EMAP estimates was measured as the decrease in the standard error of the estimate.
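The paper's combination algorithm is not reproduced here; as a stand-in for the idea that auxiliary stations shrink the standard error, a simple inverse-variance weighting of two independent estimates:

```python
def combine(est_a, se_a, est_b, se_b):
    """Inverse-variance weighted combination of two independent estimates."""
    w_a, w_b = 1.0 / se_a ** 2, 1.0 / se_b ** 2
    est = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return est, (w_a + w_b) ** -0.5   # combined estimate and its (smaller) SE

# probability-sample estimate plus an auxiliary-station estimate (made-up values)
print(combine(3.2, 0.40, 3.5, 0.60))   # combined SE is about 0.33 < 0.40
```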

14.
ABSTRACT: The U.S. Environmental Protection Agency has proposed a sample survey design to answer questions about the ecological condition, and trends in condition, of U.S. ecological resources. To meet the objectives, the design relies on a probability sample of the resource population of interest (e.g., a random sample of lakes) each year, on which measurements are made during an index period. Natural spatial and temporal variability and variability in the sampling process all affect the ability to describe the status of a population and the sensitivity for trend detection. We describe the important components of variance and estimate their magnitude for indicators of trophic condition of lakes to illustrate the process. We also describe models for trend detection and use them to demonstrate the sensitivity of the proposed design to detect trends. If the variance structure that develops during the probability surveys is like that synthesized from available databases and the literature, then trends of the specified magnitudes in common indicators of trophic condition should be detectable within about a decade for Secchi disk transparency (0.5–1 percent/year) and total phosphorus (2–3 percent/year), but not for chlorophyll-a (> 3–4 percent/year), which will take longer.
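A sketch of the timescale calculation behind "detectable within about a decade," using the ordinary power formula for a least-squares slope with one survey per year; the trend rate and pooled residual standard deviation are illustrative, not the paper's variance components:

```python
from scipy.stats import norm

def years_to_detect(b, sigma, alpha=0.05, power=0.8):
    """Years of annual surveys needed to detect a linear trend of b per year."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n = 2
    while True:
        ssx = n * (n * n - 1) / 12.0             # sum of (t - tbar)^2 for t = 1..n
        if abs(b) * ssx ** 0.5 / sigma >= z:     # slope SE = sigma / sqrt(ssx)
            return n
        n += 1

print(years_to_detect(b=0.02, sigma=0.10))   # 2 percent/yr vs 10 percent noise -> 14 yr
```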

15.
ABSTRACT: Existing ambient water quality monitoring programs have resulted in data which are often unsuitable for assessment of water quality trends. A primary concern in designing a stream quality monitoring network is the selection of a temporal sampling strategy. It is extremely important that data for trend assessment be collected uniformly in time: such a strategy yields greatly superior trend detection power compared to stratified sampling strategies. In general, it is desirable that sampling frequencies be at least monthly but not greater than biweekly; higher sampling frequencies usually result in little additional information. An upper limit on trend detectability exists such that, for both five and ten year base periods, it is often impossible to detect trends in time series where the ratio of the trend magnitude to the time series standard deviation is less than about 0.5. For the same record lengths, trends in records with trend to standard deviation ratios greater than about one can usually be detected with very high power when a uniform sampling strategy is followed.
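A Monte Carlo sketch of the detectability limit quoted above: with uniform monthly sampling over a five-year base period, power collapses once the total trend falls below roughly half the noise standard deviation. The simple regression test and white-noise errors are assumptions:

```python
import numpy as np
from scipy.stats import linregress

def detection_power(ratio, n_years=5, per_year=12, reps=2000, alpha=0.05):
    """Fraction of simulations in which a total trend of (ratio * sd) is detected."""
    rng = np.random.default_rng(5)
    n = n_years * per_year
    t = np.arange(n) / n                          # uniform sampling in time
    hits = 0
    for _ in range(reps):
        y = ratio * t + rng.normal(0.0, 1.0, n)   # total trend = ratio * sd
        hits += linregress(t, y).pvalue < alpha
    return hits / reps

for r in (0.25, 0.5, 1.0, 2.0):
    print(r, detection_power(r))
```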

16.
A spectral formalism was developed and applied to quantify the sampling errors due to spatial and/or temporal gaps in soil moisture measurements. A design filter was developed to compute the sampling errors for discrete measurements in space and time. This filter has the advantage of a general form applicable to various types of sampling design. The lack of temporal measurements of the two‐dimensional soil moisture field made it difficult to compute the spectra directly from observed records. Therefore, the wave number frequency spectra of soil moisture data derived from stochastic models of rainfall and soil moisture were used. Parameters for both models were estimated using data from the Southern Great Plains Hydrology Experiment (SGP97) and the Oklahoma Mesonet. The sampling error of the spatial average soil moisture measurement by airborne L‐band microwave remote sensing during the SGP97 hydrology experiment is estimated to be 2.4 percent. Under the same climate conditions and soil properties as the SGP97 experiment, equally spaced ground probe networks at intervals of 25 and 50 km are expected to have about 16 percent and 27 percent sampling error, respectively. Satellite designs with temporal gaps of two and three days are expected to have about 6 percent and 9 percent sampling errors, respectively.
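The spectral design filter itself is not reproduced here; as a greatly simplified time-domain stand-in, the snippet below measures how much a 30-day average drifts when a persistent (AR(1)) soil moisture series is observed only every two, three, or seven days. The persistence and window length are assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n, rho = 5000, 0.95                 # days, assumed day-to-day persistence
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):               # AR(1) stand-in for a soil moisture record
    x[t] = rho * x[t - 1] + rng.normal(0.0, (1 - rho ** 2) ** 0.5)

windows = x[: (n // 30) * 30].reshape(-1, 30)    # 30-day averaging windows
for gap in (1, 2, 3, 7):
    err = windows[:, ::gap].mean(axis=1) - windows.mean(axis=1)
    rms = float(np.sqrt((err ** 2).mean()))
    print(gap, round(100 * rms / x.std(), 1))    # sampling error, percent of sd
```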

17.
ABSTRACT: The effects of the size of the Δt time step used in the integration of the implicit difference equations of unsteady open-channel flow are determined for numerous typical hydrographs with durations on the order of days or even weeks. Truncation errors related to the size of the Δt time step cause a numerical distortion (dispersion and attenuation) of the computed transient. The magnitude of the distortion is related directly to the size of the time step, the length of the channel reach, and the channel resistance, and inversely to the time of rise of the hydrograph. The type of finite difference expression which replaces spatial derivatives and non-derivative terms in the partial differential equations of unsteady flow has an important influence on the magnitude of the numerical distortion, as well as on the numerical stability of the implicit difference equations. Time step sizes in the range of 3 to 6 hrs generally tend to minimize the combination of required computation time and numerical distortion for transients having a time of rise on the order of several days.
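A toy of the Δt effect on a much simpler model: backward-Euler (implicit) routing through a linear storage element attenuates the computed peak more as the time step grows, echoing the distortion described. The wave shape, storage constant, and step sizes are illustrative, not results from the full unsteady-flow equations:

```python
import numpy as np

def routed_peak(dt_hr, t_rise_hr=48.0, k_hr=6.0):
    """Peak outflow from implicit (backward-Euler) linear storage routing."""
    n = int(5 * t_rise_hr / dt_hr)
    q, peak = 1.0, 1.0
    for i in range(1, n + 1):
        t = i * dt_hr
        # inflow rises to 2 at t_rise, recedes to 1, then stays flat
        inflow = 1.0 + np.sin(np.pi * min(t, 2 * t_rise_hr) / (2 * t_rise_hr))
        q = (q + dt_hr / k_hr * inflow) / (1.0 + dt_hr / k_hr)  # k dq/dt = I - q
        peak = max(peak, q)
    return peak

for dt in (1.0, 3.0, 6.0, 12.0):
    print(dt, round(routed_peak(dt), 4))   # computed peak attenuates as dt grows
```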

18.
ABSTRACT: This paper addresses two components of the problem of estimating the magnitude of step trends in surface water quality. The first is finding a robust estimator appropriate to the data characteristics expected in water-quality time series. The Hodges-Lehmann class of estimators is found to be robust in comparison to other nonparametric and moment-based estimators. A seasonal Hodges-Lehmann estimator is developed and shown to have desirable properties. Second, the effectiveness of various sampling strategies is examined using Monte Carlo simulation coupled with application of this estimator. The simulation is based on a large set of total phosphorus data from the Potomac River. To assure that the simulated records have realistic properties, the data are modeled in a multiplicative fashion incorporating flow, hysteresis, seasonal, and noise components. The results demonstrate the importance of balancing the lengths of the two sampling periods and balancing the number of data values between the two periods. The inefficiency of sampling at frequencies much in excess of 12 samples per year is demonstrated. Rotational sampling designs are discussed, and efficient designs, at least for this river and constituent, are shown to involve more than one year of active sampling at frequencies of about 12 per year.
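A sketch of the non-seasonal Hodges-Lehmann step-trend estimator, the median of all pairwise differences between the "after" and "before" periods (the paper's seasonal variant applies this within seasons first); the data here are synthetic, not the Potomac phosphorus record:

```python
import numpy as np

def hodges_lehmann_step(before, after):
    """Median of all pairwise (after - before) differences."""
    return float(np.median(np.subtract.outer(after, before)))

rng = np.random.default_rng(6)
before = rng.lognormal(0.0, 0.5, 60)        # e.g., five years at 12 samples/year
after = 0.8 * rng.lognormal(0.0, 0.5, 60)   # a 20 percent step decrease
print(hodges_lehmann_step(before, after))   # negative step estimate
```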

19.
Summary: After discussing methods for and the difficulties of determining optimal land use, particularly in relation to conservation and sustainability issues, prospects for establishing conservation networks to preserve the wilderness characteristics of the Cape York Peninsula area are considered. According to a number of international studies, nature conservation in this region should be given a high priority. While Cape York is sparsely settled, it is not, however, a complete wilderness. Mining, cattle ranching, forestry, fishing, tourism, and land use by Aborigines frequently conflict with nature conservation in this region. But most of the land currently belongs to the Crown (State), even though Crown title is now subject to counter-claims by Aborigines following the Mabo case, which is outlined, and most is held as leasehold by its users. In theory, leasehold from the Crown should give considerable scope for altering land use in the region and instituting a system of conservation networks in the area based on core protected areas, such as those suggested by the Wildlife Preservation Society of Queensland. Nevertheless, strategic land use planning for Cape York Peninsula is difficult because knowledge about the stock of natural resources and current land uses in the region is very imperfect, and conflicts between interest groups at the regional, State, and national level are unlikely to allow easy, harmonious resolution of land use disputes. But an encouraging sign in favour of nature conservation as a land use in Cape York Peninsula is its low economic opportunity cost, except where it comes into conflict with mining. Net returns from extensive pastoralism appear to be negative and economic returns from forestry are low. Tourism could be compatible with conservation. Potential conflicts with mining could be taken into account in the early planning stages of conservation networks by gazetting very large nature reserves and at a later time allowing some portions to be assigned for mining. The royalties from such mining might be used as transfer payments to benefit further conservation efforts in the region. Dr Andreas E. Hohl is a staff member and Professor Clem A. Tisdell is a Department Head of the Department of Economics at the University of Queensland.

20.
ABSTRACT: The effects of potential climate change on water resources in the Delaware River basin were determined. The study focused on two important water-resource components in the basin: (1) storage in the reservoirs that supply New York City, and (2) the position of the salt front in the Delaware River estuary. Current reservoir operating procedures provide for releases from the New York City reservoirs to maintain the position of the salt front in the estuary downstream from freshwater intakes and ground-water recharge zones in the Philadelphia metropolitan area. A hydrologic model of the basin was developed to simulate changes in New York City reservoir storage and the position of the salt front in the Delaware River estuary given changes in temperature and precipitation. Results of simulations indicated that storage depletion in the New York City reservoirs is a more likely effect of changes in temperature and precipitation than is the upstream movement of the salt front in the Delaware River estuary. In contrast, the results indicated that a rise in sea level would have a greater effect on movement of the salt front than on storage in the New York City reservoirs. The model simulations also projected that, by decreasing current mandated reservoir releases, a balance can be reached wherein the negative effects of climate change on storage in the New York City reservoirs and the position of the salt front in the Delaware River estuary are minimized. Finally, the results indicated that natural variability in climate is of such magnitude that its effects on water resources could overwhelm the effects of long-term trends in precipitation and temperature.
