Similar Documents
20 similar documents found, search time 28 ms
1.
ABSTRACT: The parameters of the extreme value type 1 distribution were estimated for 55 annual flood data sets by seven methods. These are the methods of (1) moments, (2) probability weighted moments, (3) mixed moments, (4) maximum likelihood estimation, (5) incomplete means, (6) principle of maximum entropy, and (7) least squares. The method of maximum likelihood estimation was found to be the best and the method of incomplete means the worst. The differences between the methods of principle of maximum entropy, probability weighted moments, moments, and least squares were only minor. The difference between these methods and the method of maximum likelihood was not pronounced.
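Two of the compared estimators are easy to reproduce with SciPy's Gumbel (EV1) routines; a minimal sketch follows. The synthetic sample and its parameter values are illustrative assumptions, not one of the study's 55 flood series.

```python
import numpy as np
from scipy import stats

def ev1_moments(x):
    """Method-of-moments estimates of the EV1 (Gumbel) location and scale."""
    scale = np.std(x, ddof=1) * np.sqrt(6.0) / np.pi
    loc = np.mean(x) - 0.5772156649 * scale  # Euler-Mascheroni constant
    return loc, scale

# Synthetic annual-flood sample (assumed parameters, for illustration only)
rng = np.random.default_rng(42)
sample = stats.gumbel_r.rvs(loc=100.0, scale=25.0, size=500, random_state=rng)

loc_mom, scale_mom = ev1_moments(sample)
loc_mle, scale_mle = stats.gumbel_r.fit(sample)  # maximum likelihood fit
```

Comparing the two estimates against the known generating parameters mirrors, in miniature, the kind of comparison the study performs across methods.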

2.
ABSTRACT: The purpose of this article is to discuss the importance of uncertainty analysis in water quality modeling, with an emphasis on the identification of the correct model specification. A wetland phosphorus retention model is used as an example to illustrate the procedure of using a filtering technique for model structure identification. Model structure identification is typically done through model parameter estimation. However, due to the many sources of error in both model parameterization and observed variables and data, error-in-variables is often a problem. Therefore, it is not appropriate to use the least squares method for parameter estimation. Two alternative methods for parameter estimation are presented. The first is the maximum likelihood estimator, which assumes independence of the observed response variable values. In anticipation of a possible violation of the independence assumption, a second method, which couples a maximum likelihood estimator with a Kalman filter, is also presented. Furthermore, a Monte Carlo simulation algorithm is presented as a preliminary method for judging whether the model structure is appropriate.
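The filtering idea can be illustrated with a minimal scalar Kalman filter. The AR(1) state dynamics and noise variances below are invented for illustration; they are not the wetland phosphorus retention model of the article.

```python
import numpy as np

def kalman_1d(z, a, q, r, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter for x_{t+1} = a*x_t + w_t (var q),
    z_t = x_t + v_t (var r). Returns the filtered state estimates."""
    x, p, out = x0, p0, []
    for zt in z:
        x, p = a * x, a * a * p + q            # predict
        k = p / (p + r)                        # Kalman gain
        x, p = x + k * (zt - x), (1 - k) * p   # update with observation
        out.append(x)
    return np.array(out)

# Synthetic AR(1) "truth" observed through heavy noise (assumed values)
rng = np.random.default_rng(1)
truth = np.zeros(500)
for t in range(1, 500):
    truth[t] = 0.9 * truth[t - 1] + rng.normal(0.0, np.sqrt(0.1))
obs = truth + rng.normal(0.0, 1.0, 500)
filt = kalman_1d(obs, a=0.9, q=0.1, r=1.0)
```

Because the filter exploits the serial structure of the state, the filtered series tracks the truth more closely than the raw observations do, which is the motivation for coupling a filter to the likelihood.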

3.
The principle of maximum entropy (POME) was used to derive the two-parameter gamma distribution used frequently in synthesis of instantaneous or finite-period unit hydrographs. The POME yielded the minimally prejudiced gamma distribution by maximizing the entropy subject to two appropriate constraints which were the mean of real values and the mean of the logarithms of real values of the variable. It provided a unique method for parameter estimation. Experimental data were used to compare this method with the methods of moments, cumulants, maximum likelihood estimation, and least squares.
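For the gamma family, matching the two POME constraints (the sample mean and the mean of logarithms) leads to the same estimating equations as maximum likelihood, so SciPy's ML fit can stand in for the POME fit in a quick comparison with the method of moments. The sample below is synthetic, with assumed parameters.

```python
import numpy as np
from scipy import stats

def gamma_moments(x):
    """Method-of-moments gamma estimates: shape = mean^2/var, scale = var/mean."""
    m, v = np.mean(x), np.var(x, ddof=1)
    return m * m / v, v / m

rng = np.random.default_rng(3)
sample = stats.gamma.rvs(a=2.5, scale=4.0, size=2000, random_state=rng)

shape_mom, scale_mom = gamma_moments(sample)
# ML fit with location fixed at zero (two-parameter gamma); for this family
# the ML equations match the sample mean and mean-log, as POME requires.
shape_mle, _, scale_mle = stats.gamma.fit(sample, floc=0)
```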

4.
ABSTRACT: The minimization of the sum of absolute deviations and the minimization of the absolute maximum deviation (mini-max) were transformed into equivalent linear programs for the estimation of parameters in a transient and linear hydrologic system. It is demonstrated that these two methods yield viable parameter estimates that are globally optimal and reproduce properly the timing and magnitude of hydrologic events and associated variables such as total runoff. The two linear estimation methods compared favorably with the popular least-squares nonlinear estimation method. The generality of the theoretical developments shows that linear program equivalents are adequate competitors of nonlinear methods of hydrologic estimation and parameter calibration.
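The first transformation can be sketched concretely: least-absolute-deviations regression becomes a linear program by introducing one bound variable per residual. The toy design matrix and data below are assumptions for illustration, not the paper's hydrologic system.

```python
import numpy as np
from scipy.optimize import linprog

def lad_fit(X, y):
    """Least-absolute-deviations regression posed as a linear program:
    minimize sum(e) subject to -e <= y - X @ b <= e and e >= 0."""
    n, p = X.shape
    c = np.concatenate([np.zeros(p), np.ones(n)])        # objective: sum of e
    A_ub = np.block([[X, -np.eye(n)], [-X, -np.eye(n)]]) # the two-sided bound
    b_ub = np.concatenate([y, -y])
    bounds = [(None, None)] * p + [(0, None)] * n        # b free, e >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p]

# Noise-free toy data: y = 2 + 3t, so the LP recovers b = [2, 3] exactly
t = np.arange(10.0)
X = np.column_stack([np.ones_like(t), t])
y = 2.0 + 3.0 * t
b = lad_fit(X, y)
```

The solution is globally optimal because the problem is a linear program, which is exactly the appeal the abstract describes.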

5.
ABSTRACT: The principle of maximum entropy (POME) was used to derive an alternative method for parameter estimation for the three parameter lognormal (TPLN) distribution. Six sets of annual peak discharge data were used to evaluate this method and compare it with the methods of moments and maximum likelihood estimation.

6.
ABSTRACT: This paper presents criteria for establishing the identification status of the inverse problem for confined aquifer flow. Three linear estimation methods (ordinary least squares, two-stage least squares, and three-stage least squares) and one nonlinear method (maximum likelihood) are used to estimate the matrices of parameters embedded in the partial differential equation characterizing confined flow. Computational experience indicates several advantages of maximum likelihood over the linear methods.

7.
ABSTRACT: The total phosphorus (TP) concentrations in South Florida rainfall have been recorded at weekly intervals with a detection limit (DL) of 3.5 μg/L. Because a large fraction of the data is reported as below the DL, appropriate statistical methods are needed for data analysis. Thus, an attempt was made to identify an appropriate method to estimate the mean and variance of the data. In particular, a method to separate the statistics for the below-DL portion from the estimated population statistics is proposed. The estimated statistics of the censored data are compared with the statistics of the uncensored data available from recent years' laboratory records. It was found that the one-step restricted maximum likelihood method is the most accurate for the wet TP data, and that the proposed method to combine the estimated statistics for the TP < DL portion and the sample statistics for the TP ≥ DL portion improves estimates compared to the conventional maximum likelihood estimates.
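The conventional censored-data likelihood the paper improves upon can be sketched as follows, assuming for simplicity a normal population: observed values contribute density terms and each below-DL value contributes a CDF term. The sample, its parameters, and the DL value here are synthetic assumptions, not the South Florida record, and the one-step restricted ML method itself is not reproduced.

```python
import numpy as np
from scipy import stats, optimize

def censored_normal_mle(obs, n_cens, dl):
    """ML estimates (mu, sigma) when n_cens values are reported only
    as below the detection limit dl (left-censored)."""
    def nll(theta):
        mu, log_sigma = theta
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(obs, mu, sigma).sum()   # detected values
        ll += n_cens * stats.norm.logcdf(dl, mu, sigma)  # below-DL values
        return -ll
    res = optimize.minimize(nll, x0=[obs.mean(), np.log(obs.std())])
    return res.x[0], np.exp(res.x[1])

# Synthetic concentrations with an assumed DL censoring the lower tail
rng = np.random.default_rng(5)
full = rng.normal(10.0, 2.0, 2000)
dl = 9.0
obs = full[full >= dl]
mu_hat, sigma_hat = censored_normal_mle(obs, int((full < dl).sum()), dl)
```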

8.
ABSTRACT: An assumption of scale is inherent in any environmental monitoring exercise. The temporal or spatial scale of interest defines the statistical model which would be most appropriate for a given system and thus affects both sampling design and data analysis. Two monitoring objectives which are strongly tied to scale are the estimation of average conditions and the evaluation of trends. For both of these objectives, the time or spatial scale of interest strongly influences whether a given set of observations should be regarded as independent or serially correlated and affects the importance of serial correlation in choosing statistical methods. In particular, serial correlation has a much different effect on the estimation of long-term means than it does on the estimation of specific-period means. For estimating trends, the distinction between serial correlation and trend is scale-dependent. An explicit consideration of scale in monitoring system design and data analysis is, therefore, most important for producing meaningful statistical information.
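The effect of serial correlation on the precision of an estimated mean can be quantified directly. A sketch assuming AR(1) correlation (an assumption for illustration; the abstract does not specify a correlation model):

```python
import numpy as np

def var_of_mean_ar1(sigma2, n, rho):
    """Variance of the mean of n observations with AR(1) serial correlation
    rho: (sigma2/n) * [1 + 2 * sum_{k=1}^{n-1} (1 - k/n) * rho^k]."""
    k = np.arange(1, n)
    return sigma2 / n * (1.0 + 2.0 * np.sum((1.0 - k / n) * rho ** k))

v_indep = var_of_mean_ar1(1.0, 100, 0.0)  # independent case: sigma2/n = 0.01
v_corr = var_of_mean_ar1(1.0, 100, 0.5)   # positive correlation inflates it
```

With rho = 0.5 the variance of the mean is roughly three times the independent-sample value, which is why treating correlated observations as independent overstates the precision of an estimated long-term mean.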

9.
10.
In general, the choice among reservoirs for water supply or flow augmentation is a multiobjective problem. Choices are based in part on the yield available from water supply reservoirs or, in the case of flow augmentation reservoirs, on the increase in low flows at downstream locations. Detailed estimates of these effects may be too costly for basin planning purposes. Thus this paper presents methods for rapid estimation of those quantities for New Hampshire. For water supply reservoirs, a composite empirical relation between Y95 (the draft available 95 percent of the time) and storage ratio, S*, is developed from previous studies in the region. For flow augmentation reservoirs, empirical relations between S* and degree of regulation, R*, are applied to each upstream regulating reservoir. Values of regulation are then summed and divided by the mean flow at the downstream reach of interest. This parameter, (ΣR)*, is then related to the increase in flow available 95 percent of the time by an empirical relation.

11.
ABSTRACT: Data splitting is used to compare methods of determining “homogeneous” hydrologic regions. The methods compared use cluster analysis based on similarity of hydrologic characteristics or similarity of characteristics of a stream's drainage basin. Data for 221 stations in Arizona are used to show that the methods, which are a modification of DeCoursey's scheme for defining regions, improve the fit of estimation data to the model, but that it is necessary to have an independent measure of predictive accuracy, such as that provided by data splitting, to demonstrate improved predictive accuracy. The methods used the complete linkage algorithm for cluster analysis and computed weighted average estimates of hydrologic characteristics at ungaged sites.
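Complete-linkage cluster analysis of station characteristics can be sketched with SciPy. The two well-separated synthetic groups below are an assumption standing in for the 221 Arizona stations and their basin characteristics.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical standardized basin characteristics (rows = gauging stations)
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 0.3, size=(20, 3))
group_b = rng.normal(3.0, 0.3, size=(20, 3))
features = np.vstack([group_a, group_b])

Z = linkage(features, method="complete")          # complete-linkage algorithm
labels = fcluster(Z, t=2, criterion="maxclust")   # cut tree into two regions
```

As the abstract stresses, a good within-sample partition like this one still needs an independent check (data splitting) before any claim of improved predictive accuracy.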

12.
13.
ABSTRACT: Evaluation of the Great Lakes Environmental Research Laboratory's (GLERL's) physically-based monthly net basin supply forecast method reveals component errors and the effects of model improvements for use on the Laurentian Great Lakes. While designed for probabilistic outlooks, it is assessed for giving deterministic outlooks along with other net basin supply forecast methods of the U.S. Army Corps of Engineers and Environment Canada, and with a stochastic approach commissioned by the Corps. The methods are compared to a simple climatological forecast and to actual time series of net basin supplies. Actual net basin supplies are currently determined by estimating all components directly, instead of as water-balance residuals. This is judged more accurate and appropriate for both forecasting and simulation. GLERL's physically-based method forecasts component supplies while the other methods are based on residual supplies. These other methods should be rederived to be based on component supplies. For each of these other methods, differences between their outlooks and residual supplies are used as error estimates for the rederived methods and component supplies. The evaluations are made over a recent period of record high levels followed by a record drought. Net basin supply outlooks are better than climatology, and GLERL's physically-based method performs best with regard to either component or residual net basin supplies. Until advances are made in long-range climate outlooks, deterministic supply outlooks cannot be improved significantly.

14.
ABSTRACT: In recent years, several approaches to hydrologic frequency analysis have been proposed that enable one to direct attention to that portion of an overall probability distribution that is of greatest interest. The majority of the studies have focused on the upper tail of a distribution for flood analyses, though the same ideas can be applied to low flows. This paper presents an evaluation of the performances of five different estimation methods that place an emphasis on fitting the lower tail of the lognormal distribution for estimation of the ten-year low-flow quantile. The methods compared include distributional truncation, MLE treatment of censored data, partial probability weighted moments, LL-moments, and expected moments. It is concluded that while there are some differences among the alternative methods in terms of their biases and root mean square errors, no one method consistently performs better than the others, particularly with recognition that the underlying population distribution is unknown. Therefore, it seems perfectly legitimate to select a method on the basis of other criteria, such as ease of use. It is also shown in this paper that the five alternative methods can perform about as well as, if not better than, an estimation strategy involving fitting the complete lognormal distribution using L-moments.
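Several of the compared methods build on probability-weighted moments. The first two full-sample L-moments can be computed as below (the partial and LL variants the paper evaluates weight the lower tail differently and are not shown).

```python
import numpy as np

def sample_l_moments(x):
    """First two sample L-moments via probability-weighted moments:
    l1 = b0 (the mean) and l2 = 2*b1 - b0."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(1, n + 1) - 1) / (n - 1) * x) / n
    return b0, 2.0 * b1 - b0

l1, l2 = sample_l_moments([1.0, 2.0, 3.0, 4.0])  # l1 = 2.5, l2 = 5/6
```

l2 is a dispersion measure analogous to (half) the mean pairwise difference, which makes L-moment fits less sensitive to extreme observations than ordinary moment fits.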

15.
Efforts to assess forest ecosystem carbon stocks, biodiversity, and fire hazards have spurred the need for comprehensive assessments of forest ecosystem dead wood (DW) components around the world. Currently, information regarding the prevalence, status, and methods of DW inventories occurring in the world’s forested landscapes is scattered. The goal of this study is to describe the status, DW components measured, sample methods employed, and DW component thresholds used by national forest inventories that currently inventory DW around the world. Study results indicate that most countries do not inventory forest DW. Globally, we estimate that about 13% of countries inventory DW using a diversity of sample methods and DW component definitions. A common feature among DW inventories was that most countries had only just begun DW inventories and employ very low sample intensities. There are major hurdles to harmonizing national forest inventories of DW: differences in population definitions, lack of clarity on sample protocols/estimation procedures, and sparse availability of inventory data/reports. Increasing database/estimation flexibility, developing common dimensional thresholds of DW components, publishing inventory procedures/protocols, releasing inventory data/reports to international peer review, and increasing communication (e.g., workshops) among countries inventorying DW are suggestions forwarded by this study to increase DW inventory harmonization.

16.
Hydrologic modeling of urban watersheds for designs and analyses of stormwater conveyance facilities can be performed in either an event-based or continuous fashion. Continuous simulation requires, among other things, the use of a time series of rainfall amounts. However, for urban drainage basins, which are typically small, the temporal resolution of the rainfall time series must be quite fine, often on the order of 5 to 15 minutes. This poses a significant challenge because rainfall-gauging records are usually kept only for hourly or longer time steps. The time step sizes in stochastic rainfall generators are usually also too large for application to urban runoff modeling situations. Thus, there is a need for methods by which hourly rainfall amounts can be disaggregated to shorter time intervals. This paper presents and compares a number of approaches to this problem, which are based on the use of polynomial approximating functions. Results of these evaluations indicate that a disaggregation method presented by Ormsbee (1989) is a relatively good performer when storm durations are short (2 hours), and that a quadratic spline-based approach is a good choice for longer-duration storms. Based on these results, the Ormsbee technique is recommended because it provides good performance and can be applied easily to long time series of precipitation records. The quadratic spline-based approach is recommended as a close second choice because it performed most consistently, but it remains more difficult to apply than the Ormsbee technique. Results of this study also indicate that, on average, all of the disaggregation methods evaluated introduce a severe negative bias into maximum rainfall intensities. This is cause for some well-justified concern, as the characteristics of runoff hydrographs are quite sensitive to maximum storm intensities. Thus, there is a need to continue the search for simple yet effective hourly rainfall disaggregation methods.
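A generic mass-conserving disaggregation can be sketched as follows. This is neither Ormsbee's (1989) method nor the paper's quadratic spline; it is a simplified assumption-laden stand-in that interpolates sub-interval weights from neighbouring hourly totals and rescales them so each hour's sub-interval amounts sum exactly to the observed hourly total.

```python
import numpy as np

def disaggregate_hourly(hourly, steps=4):
    """Split hourly rainfall totals into `steps` equal sub-intervals.

    Sub-interval weights come from linear interpolation between the
    neighbouring hourly totals; each hour's weights are rescaled so the
    amounts sum to the observed hourly total (mass conservation).
    A simplified illustrative sketch only."""
    hourly = np.asarray(hourly, dtype=float)
    padded = np.concatenate([[hourly[0]], hourly, [hourly[-1]]])
    out = []
    for i, total in enumerate(hourly):
        # shape: rise from the previous-hour edge to this hour's value,
        # then fall toward the next-hour edge
        w = np.interp(np.linspace(0.0, 1.0, steps + 2)[1:-1],
                      [0.0, 0.5, 1.0],
                      [0.5 * (padded[i] + total), total,
                       0.5 * (total + padded[i + 2])])
        w = np.clip(w, 0.0, None)
        if w.sum() > 0:
            out.extend(total * w / w.sum())
        else:
            out.extend(np.full(steps, total / steps))
    return np.array(out)

hourly = np.array([0.0, 2.0, 8.0, 4.0, 0.0, 1.0])
fine = disaggregate_hourly(hourly, steps=4)  # 15-minute amounts
```

Note that any smoothing scheme of this kind tends to flatten sub-hourly peaks, which is exactly the negative intensity bias the study reports for all the methods it evaluated.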

17.
ABSTRACT: The designs of stream channel naturalization, rehabilitation, and restoration projects are inherently fraught with uncertainty. Although a systematic approach to design can be described, the likelihood of success or failure of the design is unknown due to uncertainties within the design and implementation process. In this paper, a method for incorporating uncertainty in decision-making during the design phase is presented that uses a decision analysis method known as Failure Modes and Effects Analysis (FMEA). The approach is applied to a channel rehabilitation project in north-central Pennsylvania. FMEA considers risk in terms of the likelihood of a component failure, the consequences of failure, and the level of difficulty required to detect failure. Ratings developed as part of the FMEA can provide justification for decision making in determining design components that require particular attention to prevent failure of the project and the appropriate compensating actions to be taken.
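In standard FMEA practice, the three ratings the abstract names combine into a risk priority number (occurrence × severity × detection) used to rank failure modes. The failure modes and ratings below are hypothetical illustrations, not those of the Pennsylvania project.

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    name: str
    occurrence: int  # 1-10: likelihood of the component failure
    severity: int    # 1-10: consequence of the failure
    detection: int   # 1-10: difficulty of detecting the failure

    @property
    def rpn(self) -> int:
        """Risk priority number: occurrence * severity * detection."""
        return self.occurrence * self.severity * self.detection

# Hypothetical design components of a channel rehabilitation project
modes = [
    FailureMode("bank revetment scour", 6, 8, 4),
    FailureMode("grade-control undermining", 3, 9, 5),
    FailureMode("riparian planting loss", 7, 3, 2),
]
ranked = sorted(modes, key=lambda m: m.rpn, reverse=True)
```

The ranking directs design attention (and compensating actions) toward the components with the highest combined risk.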

18.
ABSTRACT: Alternative approaches suggested for modeling multiseries of water resources systems are reviewed and compared. Most approaches fall within the general framework of multivariate ARMA models. Formal modeling procedures suggest a three-stage iterative process, namely: model identification, parameter estimation and diagnostic checks. Although a number of statistical tools are already available to follow such modeling process, in general, it is not an easy task, especially if high order vector ARMA models are used. However, simpler ARMA models such as the contemporaneous and the transfer-function models may be sufficient for most applications in water resources. Two examples of modeling bivariate and trivariate streamflow series are included. Alternative modeling procedures are used and compared by using data generation techniques. The results obtained suggest that low order models, as well as contemporaneous ARMA models, reproduce quite well the main statistical characteristics of the time series analyzed. It is assumed that the same conclusions apply for most water resources time series.
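The simplest member of this family, a first-order vector AR model, can be fitted by ordinary least squares. The bivariate series below is simulated from an assumed coefficient matrix rather than taken from observed streamflow.

```python
import numpy as np

def fit_var1(y):
    """OLS fit of a first-order vector AR model y_t = A @ y_{t-1} + e_t
    (zero-mean series assumed). Returns the estimated matrix A."""
    Y, X = y[1:], y[:-1]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)  # solves X @ B ~ Y
    return B.T                                  # so that y_t ~ A @ y_{t-1}

# Simulate a bivariate AR(1) streamflow-like series (assumed coefficients)
rng = np.random.default_rng(7)
A_true = np.array([[0.6, 0.2], [0.1, 0.5]])
y = np.zeros((5000, 2))
for t in range(1, 5000):
    y[t] = A_true @ y[t - 1] + rng.normal(0.0, 1.0, 2)

A_hat = fit_var1(y)
```

The data-generation-and-refit loop here is a miniature version of the comparison strategy the abstract describes: simulate from a known model, re-estimate, and check that the main statistical characteristics are reproduced.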

19.
ABSTRACT: The estimator equations obtained using invariant imbedding are used to estimate the parameters in river or stream pollution models. By using these equations, the parameters can be estimated directly from differential equations representing the pollution model and from measured noisy data such as BOD and DO. Another advantage of this approach is that a sequential estimation scheme is obtained. By using this sequential scheme, only current data are needed to estimate current or future values of the unknown parameters. Consequently, a large amount of computer time and computer memory can be saved. Furthermore, not only the parameters but also the concentrations of pollutants can be estimated. Thus, it also forms an effective forecasting technique. The classical least squares criterion is used in the estimation. Several examples are solved to illustrate the technique. (KEY WORDS: dynamic modeling; water pollution; invariant imbedding; forecasting; least squares criterion; estimation)
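Recursive least squares shares the key property described here: the estimate is updated from the current observation alone, so past data need not be stored. It is offered only as an illustrative analogue of sequential estimation under the least squares criterion, not the invariant-imbedding estimator equations themselves.

```python
import numpy as np

class RecursiveLS:
    """Sequential least squares: each new (x, y) pair updates the
    parameter estimate in place; no history is retained."""

    def __init__(self, p):
        self.theta = np.zeros(p)
        self.P = np.eye(p) * 1e6  # large initial uncertainty

    def update(self, x, y):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (1.0 + x @ Px)                          # gain vector
        self.theta = self.theta + k * (y - x @ self.theta)
        self.P = self.P - np.outer(k, Px)                # shrink uncertainty

# Recover a two-parameter linear model y = 2 + 3t, one observation at a time
est = RecursiveLS(2)
for t in np.linspace(0.0, 5.0, 50):
    est.update([1.0, t], 2.0 + 3.0 * t)
```

Because only the current estimate and its uncertainty matrix are carried forward, the memory and compute savings the abstract highlights follow directly.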

20.
ABSTRACT: A detailed review of current methods and criteria used in parameter estimation in hydrology is presented. The effect of errors in the data set and the effect of interactions between methods of analysis, criteria, data set errors, and modeling assumptions are reviewed and discussed briefly. It is concluded that study of techniques, criteria, data set errors, and particularly the interactions between these, is essential to further progress in hydrologic modeling.
