Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
ABSTRACT: The total phosphorus (TP) concentrations in South Florida rainfall have been recorded at weekly intervals with a detection limit (DL) of 3.5 μg/L. Because a large fraction of the data is reported as below the DL, appropriate statistical methods are needed for data analysis. Thus, an attempt was made to identify an appropriate method to estimate the mean and variance of the data. In particular, a method to separate the statistics for the below-DL portion from the estimated population statistics is proposed. The estimated statistics of the censored data are compared with the statistics of the uncensored data available from recent years' laboratory records. It was found that the one-step restricted maximum likelihood method is the most accurate for the wet TP data, and that the proposed method of combining the estimated statistics for the TP < DL portion with the sample statistics for the TP ≥ DL portion improves estimates compared with the conventional maximum likelihood estimates.
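A minimal sketch of this kind of censored-data estimation, assuming normally distributed weekly TP values with a single detection limit; the observed values, the count of below-DL results, and the use of the law of total mean and variance to combine the two portions are illustrative assumptions, not the paper's data or exact procedure:

    import numpy as np
    from scipy import stats, optimize

    DL = 3.5                                        # detection limit, ug/L
    obs = np.array([5.1, 4.2, 7.8, 3.9, 6.4, 4.8])  # reported values >= DL (hypothetical)
    n_below = 7                                     # results reported as < DL

    def neg_loglik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        ll = stats.norm.logpdf(obs, mu, sigma).sum()      # detected values
        ll += n_below * stats.norm.logcdf(DL, mu, sigma)  # censored values
        return -ll

    fit = optimize.minimize(neg_loglik, x0=[obs.mean(), np.log(obs.std())])
    mu, sigma = fit.x[0], np.exp(fit.x[1])

    # Mean and variance of the below-DL portion implied by the fitted normal
    # (upper-truncated at the DL), combined with the sample statistics of the
    # >= DL portion via the law of total mean and variance.
    beta = (DL - mu) / sigma
    lam = stats.norm.pdf(beta) / stats.norm.cdf(beta)
    mean_below = mu - sigma * lam
    var_below = sigma**2 * (1 - beta * lam - lam**2)

    w = n_below / (n_below + obs.size)
    mean_above, var_above = obs.mean(), obs.var(ddof=1)
    mean_all = w * mean_below + (1 - w) * mean_above
    var_all = (w * (var_below + (mean_below - mean_all) ** 2)
               + (1 - w) * (var_above + (mean_above - mean_all) ** 2))
    print("fitted mu, sigma:", round(mu, 2), round(sigma, 2))
    print("combined mean, sd:", round(mean_all, 2), round(np.sqrt(var_all), 2))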

2.
ABSTRACT: Routine data collection currently consumes a large amount of the total resources devoted to water quality management. All too often data collection becomes an end in itself, with little thought given to the purpose of the data collection. The problem generally stems from a lack of proper routine surveillance system design and a failure on the part of the designers to initially identify the data needs of the management program. This study attempts, in a general way, to delineate the data needs of a water quality management program. This first required an identification of the activities involved in water quality management. The activities were then discussed in terms of the types of information needed to successfully complete their assigned tasks. Several detailed examples are given. The results of the discussion are summarized and several strategies are proposed to relate the results to surveillance system design.

3.
ABSTRACT: A method to partition the variation in concentrations of water chemistry parameters in a river is described. The approach consists of fitting a family of curves for each chemical parameter. Each curve indicates the response of the parameter to river flow for a particular time period or location. An analysis of covariance is then used to identify statistically significant differences between curves. Such differences result largely from two factors: (1) the discharge of effluents and (2) river flow-concentration relationships. The deviations from the fitted curves indicate month-to-month variations unrelated to river flow that are controlled by factors such as temperature-related seasonal patterns. Underlying statistical assumptions are discussed with respect to water chemistry data. The technique is applied to a data set consisting of monthly samples of 22 water chemistry parameters from the Sulphur River of Texas and Arkansas. Several patterns of response to river flow and to two effluent discharges were revealed.
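A minimal sketch of the curve-fitting and analysis-of-covariance step, assuming a log-log concentration-flow relationship and two station groups; the data frame, column names, and coefficients are hypothetical, not the Sulphur River data set:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(0)
    n = 60
    df = pd.DataFrame({
        "log_flow": rng.normal(2.0, 0.5, n),
        "station": rng.choice(["upstream", "downstream"], n),
    })
    # Hypothetical dilution-type response with an offset below an effluent discharge.
    df["log_conc"] = (1.5 - 0.4 * df["log_flow"]
                      + np.where(df["station"] == "downstream", 0.3, 0.0)
                      + rng.normal(0, 0.1, n))

    # Family of curves: one flow-concentration line per station group.
    full = smf.ols("log_conc ~ log_flow * C(station)", data=df).fit()
    reduced = smf.ols("log_conc ~ log_flow", data=df).fit()

    # F-test (analysis of covariance): do the curves differ between stations?
    print(anova_lm(reduced, full))
    # Residuals from the full fit approximate the month-to-month variation
    # unrelated to river flow.
    df["residual"] = full.resid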

4.
ABSTRACT: Cumulative distribution functions (c.d.f.'s) for water quality random variables may be estimated using data from a routine grab sampling program. The c.d.f. may then be used to estimate the probability that a single grab sample will violate a given stream standard and to determine the anticipated number of violations in a given number of samples. Confidence limits about a particular point on the c.d.f. may be used to reflect the accuracy with which the sample estimate represents the true c.d.f. Methods are presented here for calculating such confidence limits using both a normal model and a nonparametric model. Examples are presented to illustrate the usefulness of an estimated c.d.f. and associated confidence limits in assessing whether an observed number of standard violations is the result of natural variability or represents real degradation in water quality.
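A minimal sketch of estimating the violation probability from a fitted normal c.d.f. and attaching nonparametric (Clopper-Pearson) confidence limits to the observed exceedance fraction; the grab-sample values and the standard are hypothetical:

    import numpy as np
    from scipy import stats

    x = np.array([4.1, 5.3, 6.0, 3.8, 7.2, 5.5, 4.9, 6.8, 5.1, 4.4])  # grab samples
    standard = 6.5

    # Normal model: P(a single grab sample exceeds the standard).
    p_normal = 1 - stats.norm.cdf(standard, loc=x.mean(), scale=x.std(ddof=1))
    m = 12
    print("P(violation):", round(p_normal, 3),
          " expected violations in", m, "samples:", round(m * p_normal, 1))

    # Nonparametric model: observed exceedance fraction with exact 95% limits.
    k, n = int((x > standard).sum()), x.size
    lo = stats.beta.ppf(0.025, k, n - k + 1) if k > 0 else 0.0
    hi = stats.beta.ppf(0.975, k + 1, n - k) if k < n else 1.0
    print("exceedance fraction:", k / n, " 95% CI:", (round(lo, 3), round(hi, 3)))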

5.
ABSTRACT: Water quality monitoring cannot address every information need through one data collection procedure. This paper discusses the goals and related procedures for designing water quality monitoring programs. The discussion focuses on the broad information needs of those agencies operating water quality networks. These information needs include the ability to assess trends and environmental impacts, determine compliance with objectives or standards, estimate mass transport, and perform general surveillance. Each of these information needs has different data requirements. This paper outlines these goals and discusses factors to consider in developing a monitoring plan on a site by site basis.

6.
ABSTRACT: A comprehensive data analysis study is carried out for detecting trends and other statistical characteristics in water quality time series measured in Long Point Bay, Lake Erie. In order to glean an optimal amount of useful information from the available data, the exploratory and confirmatory data analysis stages are adhered to. To test a range of hypotheses regarding the statistical properties of the time series, a wide variety of both parametric and nonparametric techniques are employed. A particularly useful nonparametric method for discovering trends is the seasonal Mann-Kendall test.
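A minimal sketch of the seasonal Mann-Kendall test on simulated monthly data; tie and serial-correlation corrections are omitted for brevity, so this illustrates the statistic rather than the full procedure used in the study:

    import numpy as np
    from scipy import stats

    def mann_kendall_s(x):
        """Mann-Kendall S statistic and its variance (no tie correction)."""
        n = len(x)
        s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
        var = n * (n - 1) * (2 * n + 5) / 18.0
        return s, var

    def seasonal_mann_kendall(series, n_seasons=12):
        """Sum S and Var(S) over seasons, then form the usual Z statistic."""
        s_tot = var_tot = 0.0
        for season in range(n_seasons):
            s, var = mann_kendall_s(series[season::n_seasons])
            s_tot += s
            var_tot += var
        z = (s_tot - np.sign(s_tot)) / np.sqrt(var_tot) if s_tot != 0 else 0.0
        p = 2 * (1 - stats.norm.cdf(abs(z)))
        return z, p

    rng = np.random.default_rng(1)
    t = np.arange(10 * 12)                       # ten years of monthly values
    conc = 5 + 0.01 * t + np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.5, t.size)
    print(seasonal_mann_kendall(conc))           # a small upward trend gives z > 0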

7.
ABSTRACT: The Environmental Display Manager, EDM, is a development system on an IBM 3090 mainframe at the U.S. EPA National Computer Center in Research Triangle Park, North Carolina. EDM provides mapping, display, analysis support, and information management capabilities to workstations located across the United States and connected to EPA through federal, state, academic, and private communications networks. Through interactive software, EDM can quickly support analyses, create maps and graphics, and generate reports that integrate millions of pieces of environmental data. The concept of EDM is to provide easy access to environmental information, to provide automated environmental analyses and reports, and then to provide data, graphics, images, text, and documents that can be used by numerous output devices, software packages, and computers. The mapping component works with an electronic version of the 54,000 7.5 minute quad sheets of the U.S. Geological Survey. The software also works with a hydrographic data base of the surface waters of the United States. With the maps, a user can look at the rivers in any state, can zoom in on a small pond, and can overlay and identify particular features such as industrial waste dischargers and factories. The hydrography allows routing for modeling programs, identification of upstream and downstream components, and linkage of environmental features associated with surface waters. Alternatively, users can query data based on latitude/longitude, city name, EPA permit number, state agency and station code, river name or number, and river cataloging unit. The maps can be overlaid with roads and environmental sites such as municipal and industrial dischargers, Superfund sites, public drinking water supplies, water quality monitoring stations, stream gages, and city locations. Retrievals from related systems can be performed for selected sites, creating graphics showing water quality trends, discharge monitoring reports, and permit discharge limits.

8.
ABSTRACT: The problem of estimating missing values in water quality data using linear interpolation and harmonic analysis is studied to see which one of these two methods yields better estimates for the missing values. The data used in this study consisted of midnight values of dissolved oxygen from the Ohio River collected over a period of one year at Stratton station. Various hypothetical cases of missing data are considered and the two methods of supplementing missing values are evaluated using statistical tests. The results indicate that when the percentage of missing data points exceeded ten percent of the total number in the original sample, harmonic analysis usually yielded better estimates for both the regularly and irregularly missing cases. For data that exhibit cyclic variation, examples of which are dissolved oxygen concentration and water temperature, harmonic analysis as a data generation technique appears to be superior to linear interpolation.
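A minimal sketch comparing harmonic (Fourier) regression with linear interpolation for filling missing values in a cyclic series; a simulated daily dissolved-oxygen record with an annual cycle stands in for the Ohio River data:

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(365.0)
    do_true = 9 + 3 * np.cos(2 * np.pi * (t - 30) / 365) + rng.normal(0, 0.3, t.size)

    missing = rng.choice(t.size, size=60, replace=False)        # ~16% missing
    observed = np.setdiff1d(np.arange(t.size), missing)

    # Harmonic regression: DO ~ a0 + a1*cos(wt) + b1*sin(wt), fit on observed days.
    w = 2 * np.pi / 365
    X = np.column_stack([np.ones(observed.size),
                         np.cos(w * t[observed]), np.sin(w * t[observed])])
    coef, *_ = np.linalg.lstsq(X, do_true[observed], rcond=None)
    harm_fill = coef[0] + coef[1] * np.cos(w * t[missing]) + coef[2] * np.sin(w * t[missing])

    # Linear interpolation between the neighbouring observed days.
    lin_fill = np.interp(t[missing], t[observed], do_true[observed])

    for name, est in [("harmonic", harm_fill), ("linear", lin_fill)]:
        rmse = np.sqrt(np.mean((est - do_true[missing]) ** 2))
        print(name, "RMSE:", round(rmse, 3))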

9.
ABSTRACT: Recent developments in water quality monitoring have generated interest in combining non-probability and probability data to improve water quality assessment. The Interagency Task Force on Water Quality Monitoring has taken the lead in exploring data combination possibilities. In this paper we take a developed statistical algorithm for combining the two data types and present an efficient process for implementing the desired data augmentation. In a case study simulated Environmental Protection Agency (EPA) Environmental Monitoring and Assessment Program (EMAP) probability data are combined with auxiliary monitoring station data. Auxiliary stations were identified on the STORET water quality database. The sampling frame is constructed using ARC/INFO and EPA's Reach File-3 (RF3) hydrography data. The procedures for locating auxiliary stations, constructing an EMAP-SWS sampling frame, simulating pollutant exposure, and combining EMAP and auxiliary stations were developed as a decision support system (DSS). In the case study with EMAP, the DSS was used to quantify the expected increases in estimate precision. The benefit of using auxiliary stations in EMAP estimates was measured as the decrease in standard error of the estimate.

10.
ABSTRACT: Environmental decision making involving trace-levels of contaminants can be complicated by censoring, the practice of reporting concentrations either as less than the limit of detection (LOD) or as not detected (ND) when a test result is less than the LOD. Censoring can result in data series that are difficult to meaningfully summarize, graph, and analyze through traditional statistical methods. In spite of the relatively large measurement errors associated with test results below the LOD, simple and meaningful analyses can be carried out that provide valuable information not available if data are censored. For example, an indication of increasing levels of contamination at the fringe of a plume can act as an early warning signal to trigger further study, an increased sampling frequency, or a higher level of remediation at the source. This paper involves the application of nonparametric trend analyses to uncensored trace-level groundwater monitoring data collected between March 1991 and August 1994 on dissolved arsenic and chromium for seven wells at an industrial site in New York.
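A minimal sketch of nonparametric trend measures (Kendall's tau and the Theil-Sen slope) applied to an uncensored trace-level series; the simulated monthly arsenic values are placeholders, not the New York site data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    months = np.arange(42)                        # roughly March 1991 to August 1994
    arsenic = 0.8 + 0.02 * months + rng.normal(0, 0.15, months.size)  # ug/L, noisy

    tau, p_value = stats.kendalltau(months, arsenic)
    slope, intercept, lo_slope, hi_slope = stats.theilslopes(arsenic, months, 0.95)
    print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")
    print(f"Theil-Sen slope = {slope:.3f} ug/L per month, "
          f"95% CI ({lo_slope:.3f}, {hi_slope:.3f})")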

11.
ABSTRACT: A statistical approach for making Total Maximum Daily Load (TMDL) impairment decisions is developed as an alternative to the simple tally of the number of measurements that happen to exceed the standard. The method ensures that no more than a small (e.g., 10 percent) percentage of water quality samples will exceed a regulatory standard with a high level of confidence (e.g., 95 percent). The method is based on the 100(1‐α) percent lower confidence limit on an upper percentile of the concentration distribution. Advantages of the method include: (1) it provides a direct test of the hypothesis that a prespecified percentage of the true concentration distribution exceeds a regulatory standard, (2) it is applicable to a wide variety of different statistical concentration distributions, (3) it directly incorporates the magnitude of the measured concentrations unlike traditional approaches, and (4) it has explicit statistical power characteristics (i.e., what is the probability of missing an environmental impact). Detailed study of the simple tally approach reveals that it achieves high statistical power at the expense of unacceptably high false positive rates (30 to 40 percent false positive results). By contrast, the statistical approach results in similar statistical power while achieving a nominal false positive rate of 5 percent.
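A minimal sketch of the confidence-limit-on-a-percentile idea under a normal model, using the noncentral t distribution; the concentrations, the standard, and the decision rule shown (declare impairment when the lower confidence limit on the 90th percentile exceeds the standard) are illustrative assumptions rather than the paper's exact formulation; log-transform first if concentrations are lognormal:

    import numpy as np
    from scipy import stats

    conc = np.array([3.1, 4.8, 2.9, 5.6, 4.2, 3.7, 6.1, 4.4, 3.3, 5.0])
    standard = 7.0
    p, conf = 0.90, 0.95                        # percentile of interest, confidence

    n = conc.size
    xbar, s = conc.mean(), conc.std(ddof=1)
    nc = stats.norm.ppf(p) * np.sqrt(n)         # noncentrality parameter
    # 95 percent lower confidence limit on the 90th percentile of the distribution.
    lcl = xbar + s / np.sqrt(n) * stats.nct.ppf(1 - conf, df=n - 1, nc=nc)
    print("LCL on 90th percentile:", round(lcl, 2))
    print("declare impaired:", lcl > standard)  # confident that >10% exceed the standard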

12.
ABSTRACT: One of the most significant changes in the field of hydrology in the past few years has been the increase in demand for basic data resulting from a new awareness on the part of planners, developers and managers of the essential nature of such data. For many years data collection has been an onerous, routine operation, following which the data were processed and stored - either in publications or file drawers - and the job considered completed. Two developments have changed that picture: the realization that we are drastically altering our environment, and the advent of the computer. The first forced us into a recognition of our need for accurate basic data and the second provided a new methodology for handling and using it. The change is evidenced in many ways and numerous activities are underway at both State and Federal level for all facets of the acquisition and handling of water data. The collection of basic data still involves hard routine work and a conscientious effort to maintain a high level of quality. Hopefully, recognition of the absolutely essential nature of an adequate data base will result in the continued enhancement of the basic data collector and the concomitant increase in support of his activities.

13.
ABSTRACT: A water quality investigation on Utah Lake was conducted during the same time period that the Heat Capacity Mapping Mission (HCMM) satellite was collecting thermal infrared and reflectivity data. Relationships were established and evaluated among HCMM data and lake water quality parameters. Although remotely sensed reflective data have been previously utilized, this study was unique in that thermal emitted data were also correlated to algae concentrations and other indicators. Standard statistical evaluations were made along with utilization of color graphics techniques to identify and plot relationships. The emitted thermal energy was found to have high positive correlations with net algal concentrations and with the predominant species, Aphanizomenon flos-aquae, a blue-green alga. No continuous correlation was found for a less abundant red pigment phytoplankton, Ceratium hirundinella. Similar trends, though for negative correlations, were shown for reflectivity data and algal concentrations throughout the spring and summer. Coincidence of areas of warmer emitted energy and darker reflected energy on color graphics displays clearly indicates lake areas of high algal concentrations. Night thermal data displayed a strong negative correlation with algal concentration, opposite to day thermal data. Color graphics of warmer day emitted energy and cooler night emitted energy further verify areas of high algal concentrations.

14.
ABSTRACT: Drought affects the quality of ground water in certain aquifers used by municipalities in Kansas. Water quality changes occur as a function of the amount of water available for recharge and hence to dilute more mineralized ground waters. Several measures of meteorological drought, including the Palmer Index and Eagleman Aridity Index, were correlated with water quality data to determine the degree of association. Several locations showed sharp declines in water quality as the drought progressed. These relationships can be used to predict possible variations in present and future well-water supplies in locations subject to drought-induced water quality deterioration.

15.
ABSTRACT: A series of reforms in the water industry in Australia has created a demand from the industry and regulators for objective methodologies to evaluate incremental changes in the customer service standards. In this paper, the use of choice modeling for estimating implicit prices associated with urban water supply attributes is explored. Results from multinomial logit (MNL) and random parameters logit (RPL) models show that increases in annual water bills and the frequency of future interruptions were the most important attributes. Implicit price confidence intervals based on the best models suggest that people are willing to pay positive amounts to achieve a water supply that is less frequently interrupted. The provision of alternative water supplies during an interruption and notification of the interruption were found to be unimportant to respondents. Choice modeling proved to be a useful technique and provided the industry and regulators with additional information for standard setting.
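A minimal sketch of how implicit prices are derived from choice-model coefficients (the ratio of an attribute coefficient to the bill coefficient), with a simulation-based confidence interval in the spirit of Krinsky-Robb; the coefficient estimates and covariance matrix are hypothetical placeholders, not the paper's results:

    import numpy as np

    # Hypothetical MNL estimates: [annual bill ($), interruptions per year].
    beta = np.array([-0.010, -0.25])
    cov = np.array([[1.0e-6, 0.0],
                    [0.0,    4.0e-3]])        # hypothetical coefficient covariance

    # Willingness to pay for one fewer interruption per year: beta_int / beta_bill.
    wtp_point = beta[1] / beta[0]
    print("WTP per avoided interruption:", round(wtp_point, 1))

    # Simulate the coefficient sampling distribution to bound the implicit price.
    rng = np.random.default_rng(4)
    draws = rng.multivariate_normal(beta, cov, size=10_000)
    wtp_draws = draws[:, 1] / draws[:, 0]
    print("95% interval:", np.percentile(wtp_draws, [2.5, 97.5]).round(1))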

16.
ABSTRACT: This study developed a QUAL2E-Reliability Analysis (QUAL2E-RA) model for the stochastic water quality analysis of the downstream reach of the main Han River in Korea. The proposed model is based on the QUAL2E model and incorporates the Advanced First-Order Second-Moment (AFOSM) and Mean-Value First-Order Second-Moment (MFOSM) methods. After the hydraulic characteristics from the standard step method are identified, the optimal reaction coefficients are then estimated using the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method. Considering variations in river discharges, pollutant loads from tributaries, and reaction coefficients, the violation probabilities of existing water quality standards at several locations in the river were computed from the AFOSM and MFOSM methods, and the results were compared with those from the Monte Carlo method. The statistics of the three uncertainty analysis methods show that the outputs from the AFOSM and MFOSM methods are similar to those from the Monte Carlo method. From a practical model selection perspective, the MFOSM method is more attractive in terms of its computational simplicity and execution time.
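A minimal sketch of the Mean-Value First-Order Second-Moment idea: linearize a water quality model at the input means, propagate the input variances through the gradient, and convert the result to a violation probability. A toy Streeter-Phelps deficit expression stands in for QUAL2E, and all parameter values are hypothetical:

    import numpy as np
    from scipy import stats

    def do_deficit_model(x):
        """Toy Streeter-Phelps DO deficit (mg/L) after a fixed travel time."""
        L0, kd, ka = x                          # BOD load, deoxygenation, reaeration
        t = 2.0                                 # days of travel
        return kd * L0 / (ka - kd) * (np.exp(-kd * t) - np.exp(-ka * t))

    mean = np.array([20.0, 0.30, 0.60])         # input means
    std = np.array([4.0, 0.05, 0.10])           # input standard deviations

    # Gradient of the model at the mean vector, by central finite differences.
    grad = np.zeros_like(mean)
    for i in range(mean.size):
        h = 1e-4 * mean[i]
        xp, xm = mean.copy(), mean.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (do_deficit_model(xp) - do_deficit_model(xm)) / (2 * h)

    mu_D = do_deficit_model(mean)
    var_D = np.sum((grad * std) ** 2)           # independent inputs assumed

    do_saturation, do_standard = 9.0, 5.0       # mg/L
    deficit_limit = do_saturation - do_standard
    p_violation = 1 - stats.norm.cdf(deficit_limit, loc=mu_D, scale=np.sqrt(var_D))
    print(f"MFOSM deficit mean {mu_D:.2f}, sd {np.sqrt(var_D):.2f}, "
          f"P(DO < standard) = {p_violation:.3f}")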

17.
ABSTRACT: The visualization of water quality data in lakes was achieved by integrating the U.S. Environmental Protection Agency's (EPA) STORET water quality database, lake shoreline polygons from EPA's Reach File (version 3), and the UNIMAP 2-D and 3-D interactive mapping and modeling software. Based on lake name (and state abbreviation), a lake shoreline polygon can be accessed from the Reach File. The coordinates of the polygon are portrayed by the U.S. Geological Survey (USGS) 1:100,000 scale Digital Line Graph (DLG) hydrography layer. This polygon is passed, in turn, to the STORET water quality file. Monitoring stations located within the polygon boundary are extracted along with the complete sampling survey. Specific parameters, such as total phosphorus, pH, ammonia, and optional time and depth restrictions can be selected to build a file of x, y, z1, z2, …, zn data which is imported to UNIMAP. Up to four parameters, including depth, can be selected at a time. Within UNIMAP, the data is gridded and then displayed as a 2-D color contour map, 3-D perspective contour map, or 2-D projected time or depth slices. This system operates on the EPA ES9000 mainframe computer located in Research Triangle Park (RTP), North Carolina. LAKEMAP is the culmination of an effort to bridge the gap between the vast array of environmental data collected by the EPA and the complex analytical and display software resident on the mainframe.
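A minimal modern analogue of the gridding-and-contouring step, using SciPy and Matplotlib in place of the mainframe UNIMAP workflow; the station coordinates and total phosphorus values are made up:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.interpolate import griddata

    rng = np.random.default_rng(5)
    x = rng.uniform(-93.4, -93.1, 25)           # station longitudes (hypothetical)
    y = rng.uniform(41.6, 41.8, 25)             # station latitudes
    tp = rng.uniform(0.02, 0.20, 25)            # total phosphorus, mg/L

    # Interpolate the scattered station values onto a regular grid.
    xi = np.linspace(x.min(), x.max(), 100)
    yi = np.linspace(y.min(), y.max(), 100)
    XI, YI = np.meshgrid(xi, yi)
    ZI = griddata((x, y), tp, (XI, YI), method="linear")

    # 2-D color contour map with the station locations overlaid.
    plt.contourf(XI, YI, ZI, levels=10)
    plt.colorbar(label="Total phosphorus (mg/L)")
    plt.scatter(x, y, c="k", s=10)
    plt.title("2-D contour of gridded station data")
    plt.savefig("lake_tp_contour.png")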

18.
ABSTRACT: Extensive investigations have been undertaken to determine the utility of Landsat data for detecting and analyzing hydrologic characteristics of an interior watershed of Iran that drains to Daryachehye-Namak (salt lake). This interior playa serves as the terminus for surface water discharging to it from the Karaj, Shur, Saveh, and Ghom Rivers and ground water from their underlying aquifers. These drainage systems encompass heavily populated and industrial sectors of west central Iran, including Tehran. The result of this investigation demonstrates the applicability of Landsat data for mapping and monitoring water regimen as an aid in interpreting hydrologic conditions throughout this arid region. Fluctuation of water area in a playa lake, occupying the lowest part of this closed basin, was monitored on repetitive Landsat coverage. As the result of field investigations combined with optical and digital analyses of the Landsat data, fluctuating water depths were determined in order to estimate the volume of water present in this lake during various seasons. A comparison between stream discharge rates and the estimated volume of standing water makes it possible to quantitatively evaluate the hydrologic regimen and to detect the significance of ground water discharge.

19.
ABSTRACT: Left-censoring of data sets complicates subsequent statistical analyses. Generally, substitution or deletion methods provide poor estimates of the mean and variance of censored samples. These substitution and deletion methods include the use of values above the detection limit (DL) only, or substitution of 0, DL/2 or the DL for the below DL values during the calculation of mean and variance. A variety of statistical methods provides better estimators for different types of distributions and censoring. Maximum likelihood and order statistics methods compare favorably to the substitution or deletion methods. Selected statistical methods applicable to left-censoring of environmental data sets are reviewed with the purpose of demonstrating the use of these statistical methods for coping with Type I (and Type II) left-censoring of normally and log-normally distributed environmental data sets. A PC program (UNCENSOR) is presented that implements these statistical methods. Problems associated with data sets with multiple DLs are discussed relative to censoring methods for life and fatigue tests as recently applied to water quality data sets.
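A minimal sketch of one order-statistics approach (regression on order statistics for a single detection limit) contrasted with DL/2 substitution; the simulated lognormal sample is illustrative, and this is not the UNCENSOR program itself:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    sample = rng.lognormal(mean=1.0, sigma=0.8, size=100)
    DL = 2.0
    detected = np.sort(sample[sample >= DL])
    n_cens = int((sample < DL).sum())
    n = sample.size

    # Blom plotting positions for all ranks; censored values occupy the lowest ranks.
    pp = (np.arange(1, n + 1) - 0.375) / (n + 0.25)
    z = stats.norm.ppf(pp)

    # Regress log(concentration) on normal score using the detected (upper) ranks only,
    # then impute the censored ranks from the fitted line.
    slope, intercept, *_ = stats.linregress(z[n_cens:], np.log(detected))
    imputed = np.exp(intercept + slope * z[:n_cens])

    combined = np.concatenate([imputed, detected])
    print("ROS mean:", round(combined.mean(), 2),
          " ROS sd:", round(combined.std(ddof=1), 2))
    print("DL/2 substitution mean:",
          round(np.concatenate([np.full(n_cens, DL / 2), detected]).mean(), 2))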

20.
Abstract: The concern about water quality in inland water bodies such as lakes and reservoirs has been increasing. Owing to the complexity associated with field collection of water quality samples and subsequent laboratory analyses, scientists and researchers have employed remote sensing techniques for water quality information retrieval. Due to the limitations of linear regression methods, many researchers have employed the artificial neural network (ANN) technique to decorrelate satellite data in order to assess water quality. In this paper, we propose a method that establishes the output sensitivity toward changes in the individual input reflectance channels while modeling water quality from remote sensing data collected by the Landsat Thematic Mapper (TM). From the sensitivity, a hypothesis about the importance of each band can be made and used as a guideline to select appropriate input variables (band combination) for ANN models based on the principle of parsimony for water quality retrieval. The approach is illustrated through a case study of Beaver Reservoir in Arkansas, USA. The results of the case study are highly promising and validate the input selection procedure outlined in this paper. The results indicate that this approach could significantly reduce the effort and computational time required to develop an ANN water quality model.
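A minimal sketch of the sensitivity-based input screening idea: train a small neural network on reflectance-like inputs and rank the input channels by the finite-difference sensitivity of the output around the mean input. scikit-learn's MLPRegressor stands in for the paper's ANN, and the data are simulated, not the Beaver Reservoir measurements:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(7)
    n, n_inputs = 300, 6                         # samples x reflectance channels
    X = rng.uniform(0.0, 0.3, size=(n, n_inputs))
    # Hypothetical chlorophyll response driven mainly by the 2nd and 4th inputs.
    y = 10 * X[:, 1] - 6 * X[:, 3] + rng.normal(0, 0.2, n)

    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
    model.fit(X, y)

    # Output sensitivity to each input, by central differences at the mean input.
    x0 = X.mean(axis=0)
    sensitivity = np.zeros(n_inputs)
    for i in range(n_inputs):
        h = 0.01
        xp, xm = x0.copy(), x0.copy()
        xp[i] += h
        xm[i] -= h
        sensitivity[i] = (model.predict(xp.reshape(1, -1))[0]
                          - model.predict(xm.reshape(1, -1))[0]) / (2 * h)

    order = np.argsort(-np.abs(sensitivity))
    print("inputs ranked by |sensitivity| (1-based):", order + 1)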
