Similar Literature (20 records)
1.
Abstract: The determination of sediment and nutrient loads is typically based on the collection and analysis of grab samples. The frequency and regularity of traditional sampling may not adequately represent constituent loading, particularly in systems with flashy hydrology. At two sites in the Little Bear River, Utah, continuous, high‐frequency turbidity was used with surrogate relationships to generate estimates of total phosphorus and total suspended solids concentrations, which were paired with discharge to estimate annual loads. The high‐frequency records were randomly subsampled to represent hourly, daily, weekly, and monthly sampling frequencies and to examine the effects of timing, and the resulting annual load estimates were compared to the reference loads. Higher‐frequency sampling resulted in load estimates that better approximated the reference loads. The degree of bias was greater at the more hydrologically responsive site in the upper watershed, which required a higher sampling frequency than the lower watershed site to achieve the same level of accuracy in estimating the reference load. The hour of day and day of week of sampling affected load estimation, depending on site and hydrologic conditions. The effects of sampling frequency on the determination of compliance with a water quality criterion were also examined. These techniques can be helpful in determining the sampling frequency necessary to meet the objectives of a water quality monitoring program.
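A minimal sketch of the subsampling experiment this abstract describes, on synthetic data (the discharge/concentration model, units, and all names are our assumptions, not the study's): a high‐frequency concentration record is paired with discharge to form a reference annual load, then period‐weighted subsamples at coarser intervals are compared against it.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 365 * 24                                   # one year of hourly observations
q = np.exp(rng.normal(2.0, 0.6, n))            # synthetic discharge, m^3/s
c = 0.05 * q**0.8 * rng.lognormal(0, 0.2, n)   # synthetic TP concentration, mg/L
dt = 3600                                      # seconds per hourly step

# mg/L * m^3/s = g/s, so summing c*q*dt gives grams; divide by 1000 for kg
reference_load = np.sum(c * q * dt) / 1000.0

def load_from_subsample(step_hours):
    """Annual load estimate when sampling every `step_hours`, holding each
    sampled concentration constant until the next sample (period weighting)."""
    c_held = np.repeat(c[::step_hours], step_hours)[:n]
    return np.sum(c_held * q * dt) / 1000.0

for step, label in [(1, "hourly"), (24, "daily"), (168, "weekly"), (720, "monthly")]:
    est = load_from_subsample(step)
    print(f"{label:8s} sampling: bias = {100 * (est / reference_load - 1):+5.1f}%")
```

Re-running the loop with different random start offsets would reproduce the abstract's timing experiment (hour of day, day of week).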

2.
Water quality regulation and litigation have elevated the awareness of, and need for, quantifying water quality and source contributions in watersheds across the USA. In the present study, the regression method, which is typically applied to large (perennial) rivers, was evaluated for its ability to estimate constituent loads (NO3-N, total N, PO4-P, total P, sediment) on three small (ephemeral) watersheds with different land uses in Texas. Specifically, regression methodology was applied with daily flow data collected with bubbler stage recorders in hydraulic structures and with water quality data collected with four low-frequency sampling strategies: random, rise and fall, peak, and single stage. Estimated loads were compared with measured loads determined in 2001-2004 with an autosampler and high-frequency sampling strategies. Although annual rainfall and runoff volumes were relatively consistent within watersheds during the study period, measured annual nutrient and sediment concentrations and loads varied considerably for the cultivated and mixed watersheds but not for the pasture watershed. Likewise, load estimates were much more accurate for the pasture watershed than for the cultivated and mixed land use watersheds because of more consistent land management and vegetation type in the pasture watershed, which produced stronger correlations between constituent loads and mean daily flow rates. Load estimates for PO4-P were better than for other constituents, possibly because PO4-P concentrations were less variable within storm events. Correlations between constituent concentrations and mean daily flow rate were poor and not significant for all watersheds, which differs from what is typically observed in large rivers. The regression method was quite variable in its ability to accurately estimate annual nutrient loads from the study watersheds; however, constituent load estimates were much more accurate for the combined 3-yr period. Thus, it is suggested that for small watersheds, regression-based annual load estimates should be used with caution, whereas long-term estimates can be much more accurate when multiple years of concentration data are available. The predictive ability of the regression method was similar for all of the low-frequency sampling strategies studied; therefore, single-stage or random strategies are recommended for low-frequency storm sampling on small watersheds because of their simplicity.
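As background on the regression (rating curve) method evaluated here, a minimal sketch assuming a log-log concentration-flow relationship with the log-retransformation bias correction attributed to Ferguson (function name, units, and synthetic data are ours; the study's exact implementation may differ):

```python
import numpy as np

def regression_load(q_sampled, c_sampled, q_daily):
    """Total load (kg) over len(q_daily) days from a log-log rating curve;
    flows in m^3/s, concentrations in mg/L."""
    b, a = np.polyfit(np.log(q_sampled), np.log(c_sampled), 1)   # slope, intercept
    resid = np.log(c_sampled) - (a + b * np.log(q_sampled))
    cf = np.exp(0.5 * resid.var(ddof=2))       # Ferguson-type bias correction
    c_pred = cf * np.exp(a) * q_daily**b       # predicted daily concentration, mg/L
    daily_load_g = c_pred * q_daily * 86400    # mg/L * m^3/s * s/day = g/day
    return daily_load_g.sum() / 1000.0

rng = np.random.default_rng(0)
q_day = np.exp(rng.normal(0.5, 0.7, 365))      # synthetic daily flows
c_obs = 0.1 * q_day[::30]**0.6 * rng.lognormal(0, 0.3, len(q_day[::30]))
print(f"annual load = {regression_load(q_day[::30], c_obs, q_day):.1f} kg")
```

The weak concentration-flow correlations the study reports would appear here as a large residual variance, inflating both the correction factor and the uncertainty of the estimated load.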

3.
ABSTRACT: Various temporal sampling strategies are used to monitor water quality in small streams. To determine how various strategies influence the estimated water quality, frequently collected water quality data from eight small streams (14 to 110 km²) in Wisconsin were systematically subsampled to simulate typically used strategies. These subsets of data were then used to estimate mean, median, and maximum concentrations, and, with continuous daily flows, were used to estimate annual loads (using the regression method) and volumetrically weighted mean concentrations. For each strategy, accuracy and precision in each summary statistic were evaluated by comparison with concentrations and loads of total phosphorus and suspended sediment estimated from all available data. The most effective sampling strategy depends on the statistic of interest and study duration. For mean and median concentrations, the most frequent fixed period sampling economically feasible is best. For maximum concentrations, any strategy with samples at or prior to peak flow is best. The best sampling strategy to estimate loads depends on the study duration. For one‐year studies, fixed period monthly sampling supplemented with storm chasing was best, even though loads were overestimated by 25 to 50 percent. For two‐ to three‐year load studies and estimating volumetrically weighted mean concentrations, fixed period semimonthly sampling was best.
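The volumetrically weighted mean concentration used as a summary statistic here is simply total constituent mass divided by total flow volume; a tiny sketch with hypothetical numbers:

```python
import numpy as np

def vw_mean_concentration(c, q):
    """Volumetrically weighted mean: sum(Ci*Qi) / sum(Qi).
    c: sample concentrations (mg/L); q: flow volumes paired with each sample."""
    c, q = np.asarray(c, float), np.asarray(q, float)
    return np.sum(c * q) / np.sum(q)

# a single high-flow storm sample dominates the weighted mean
print(vw_mean_concentration([0.12, 0.45, 0.08], [2.0, 15.0, 1.5]))
```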

4.
ABSTRACT: Growing interest in water quality has resulted in the development of monitoring networks and intensive sampling for various constituents. Common purposes are regulatory compliance, source and sink understanding, and trend observation. Water quality monitoring involves monitoring system design; sampling site instrumentation; and sampling, analysis, quality control, and assurance. Sampling is a process to gather information with the least cost and least error. Various water quality sampling schemes have been applied for different sampling objectives and time frames. In this study, a flow proportional composite sampling scheme is applied to variable flow remote canals where the flow rate is not known a priori. In this scheme, historical weekly flow data are analyzed to develop high flow and low flow sampling trigger volumes for auto‐samplers. The median flow is used to estimate the low flow sampling trigger volume, and the five percent exceedance probability flow is used for the high flow sampling trigger volume. A computer simulation of high resolution sampling is used to demonstrate the comparative bias in load estimation and operational cost among four sampling schemes. Weekly flow proportional composite auto‐sampling resulted in the least bias in load estimation with competitive operational cost compared to daily grab sampling, weekly grab sampling, and time proportional auto‐sampling.
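A sketch of how the two trigger volumes might be derived from a historical weekly flow record. The split of a weekly volume into per-sample pacing increments (`n_samples_per_week`) is our assumption for illustration; the paper defines the exact pacing rule.

```python
import numpy as np

def trigger_volumes(weekly_flow_volumes, n_samples_per_week=7):
    """Low- and high-flow pacing volumes from historical weekly flow volumes.
    The sampler aliquots each time the accumulated volume passes the trigger,
    so sampling density automatically tracks flow."""
    flows = np.asarray(weekly_flow_volumes, float)
    median_flow = np.percentile(flows, 50)   # typical (low-flow) week
    p95_flow = np.percentile(flows, 95)      # flow exceeded 5% of the time
    return median_flow / n_samples_per_week, p95_flow / n_samples_per_week

# ten years of synthetic weekly flow volumes
lo, hi = trigger_volumes(np.random.default_rng(1).gamma(2.0, 50.0, 520))
print(f"low-flow trigger: {lo:.1f}, high-flow trigger: {hi:.1f}")
```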

5.
Load estimates obtained using an approach based on statistical distributions with parameters expressed as a function of covariates such as streamflow (hereafter called the DC method) were compared to four load estimation methods: (1) flow‐weighted mean concentration; (2) integral regression; (3) segmented regression (the last two with Ferguson's correction factor); and (4) hydrograph separation methods. A total of 25 datasets (from 19 stations) of daily concentrations of total dissolved solids, nutrients, or suspended particulate matter were used. The selected stations represented a wide range of hydrological conditions. Annual flux errors were determined by randomly generating 50 monthly sample series from daily series. Annual and interannual biases and dispersions were evaluated and compared. The impact of sampling frequency was investigated through the generation of bimonthly and weekly surveys. Interannual uncertainty analysis showed that the performance of the DC method was comparable with those of the other methods, except for stations showing high hydrological variability. In this case, the DC method performed better, with annual biases lower than those characterizing the other methods. Results show that the DC method generated the smallest pollutant load errors when considering a monthly sampling frequency for rivers showing high variability in hydrological conditions and contaminant concentrations.

6.
ABSTRACT: The objective of this investigation was to determine the effect of sampling frequency and sampling type on estimates of monthly nutrient loads and flow‐weighted nutrient concentrations in a constructed wetland. Phosphorus and nitrogen loads and concentrations entering and leaving a subtropical wetland (the Everglades Nutrient Removal Project, ENRP) were calculated on the basis of three sampling frequencies. The first frequency included weekly composite samples (three daily samples composited for one week) and grab samples from August 1994 to July 1997, representing a baseline condition for comparison with results using reduced sampling frequencies. The second and third sampling frequencies included three and two composite samples per month, respectively, drawn from the weekly samples. Total phosphorus and nitrogen loads calculated using two and three samples per month were almost identical to results based on four samples per month (least‐squares regression coefficients ranged from 0.96 to 0.98). Results for monthly mean flow‐weighted nutrient concentrations, obtained using reduced sampling frequencies, also were strongly correlated with concentrations calculated using the baseline sampling frequency (r² ranged from 0.82 to 0.93). Grab samples did not always provide good estimates of loads or concentrations, particularly at the inflow when data were highly variable. From the results of this study, we recommend that biweekly composite sampling be used to monitor nutrient concentrations and loads discharged from the larger‐scale Everglades Stormwater Treatment Areas (STAs) now under construction. Because of the high costs associated with water sample collection and processing, studies to identify optimal sampling frequencies should be a key feature in the design of any comprehensive wetland‐monitoring program.

7.
ABSTRACT: The Food Quality Protection Act of 1996 requires that human exposure to pesticides through drinking water be considered when establishing pesticide tolerances in food. Several systematic and seasonally weighted systematic sampling strategies for estimating pesticide concentrations in surface water were evaluated through Monte Carlo simulation, using intensive datasets from four sites in northwestern Ohio. The number of samples for the strategies ranged from 4 to 120 per year. Sampling strategies with a minimal sampling frequency outside the growing season can be used for estimating time weighted mean and percentile concentrations of pesticides with little loss of accuracy and precision, compared to strategies with the same sampling frequency year round. Less frequent sampling strategies can be used at large sites. A sampling frequency of 10 times monthly during the pesticide runoff period at a 90 km² basin and four times monthly at a 16,400 km² basin provided estimates of the time weighted mean, 90th, 95th, and 99th percentile concentrations that fell within 50 percent of the true value virtually all of the time. By taking into account basin size and the periodic nature of pesticide runoff, costs of obtaining estimates of time weighted mean and percentile pesticide concentrations can be minimized.
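A minimal Monte Carlo sketch in the spirit of the evaluation described above (not the study's code): repeatedly draw a systematic subsample from an intensive daily pesticide record and check how often the subsample's mean and 95th percentile fall within 50 percent of the true values. Data and thresholds are synthetic.

```python
import numpy as np

rng = np.random.default_rng(7)
daily = rng.lognormal(-1.0, 1.2, 365)            # synthetic daily conc., ug/L
true_mean, true_p95 = daily.mean(), np.percentile(daily, 95)

hits_mean = hits_p95 = 0
trials = 1000
for _ in range(trials):
    start = rng.integers(0, 30)
    sub = daily[start::30]                        # ~monthly systematic sample
    hits_mean += abs(sub.mean() / true_mean - 1) <= 0.5
    hits_p95 += abs(np.percentile(sub, 95) / true_p95 - 1) <= 0.5

print(f"mean within 50%: {100 * hits_mean / trials:.0f}% of trials; "
      f"95th percentile within 50%: {100 * hits_p95 / trials:.0f}%")
```

Weighting the subsample toward the runoff season, as the study does, would be a matter of drawing more indices from those months.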

8.
Causes of variation between loads estimated using alternative calculation methods, and their repeatability, were investigated using 20 years of daily flow and monthly concentration samples for 77 rivers in New Zealand. Loads of dissolved and total nitrogen and phosphorus were calculated using the Ratio, L5, and L7 methods. Estimates of loads and their precision associated with short‐term records of 5, 10, and 15 years were simulated by subsampling. The representativeness of the short‐term loads was quantified as the standard deviation of the 20 realizations. The L7 method generally produced more realistic loads with the highest precision and representativeness. Differences between load estimates were shown to be associated with poor agreement between the data and the underlying model. The best method was shown to depend on the match between the model and the functional and distributional characteristics of the data, rather than on the contaminant. Short‐term load estimates poorly represented the long‐term load estimate, and deviations frequently exceeded the estimated imprecision. The results highlight that there is no single preferred load calculation method, the inadvisability of "unsupervised" load estimation, and the importance of inspecting concentration‐flow plots, unit load‐flow plots, and regression residuals. Regulatory authorities should be aware that the precision of loads estimated from monthly data is likely to be "optimistic" with respect to the actual repeatability of load estimates.

9.
Total suspended solids (TSS) and total phosphorus (TP) have been shown to be strongly correlated with turbidity in watersheds. High‐frequency in situ turbidity can provide estimates of these potential pollutants over a wide range of hydrologic conditions. Concentrations and loads were estimated in four western Lake Superior trout streams from 2005 to 2010 using regression models relating continuous turbidity data to grab sample measures of TSS and TP during differing flow regimes. TSS loads estimated using the turbidity surrogate were compared with those made using FLUX software, a standard assessment technique based on discharge and grab sampling for TSS. More traditional rating curve methodology was not suitable because of the high variability in the particulates vs. discharge relationship. Stream‐specific turbidity and TSS data were strongly correlated (r² = 0.5 to 0.8; p < 0.05), less so for TP (r² = 0.3 to 0.7; p < 0.05). Near‐continuous turbidity monitoring (every 15 min) provided a good method for estimating both TSS and TP concentrations, supplying information when manual sample collection was unlikely and allowing for detailed analyses of short‐term responses of flashy Lake Superior tributaries to highly variable weather and hydrologic conditions. The FLUX model typically resulted in load estimates greater than those determined using the turbidity surrogate, with 17/23 stream years having greater FLUX estimates for TSS and 18/23 for TP.

10.
ABSTRACT: Surface water quality data are routinely collected in river basins by state or federal agencies. The observed quality of river water generally reflects the overall quality of the ecosystem of the river basin. Advanced statistical methods are often needed to extract valuable information from the vast amount of data for developing management strategies. Among the measured water quality constituents, total phosphorus is most often the limiting nutrient in freshwater aquatic systems. Relatively low concentrations of phosphorus in surface waters may create eutrophication problems. Phosphorus is a non-conservative constituent. Its time series generally exhibits nonlinear behavior. Linear models are shown to be inadequate. This paper presents a nonlinear state-dependent model for the phosphorus data collected at DeSoto, Kansas. The nonlinear model gives significant reductions in error variance and forecasting error as compared to the best linear autoregressive model identified.

11.
ABSTRACT: A linear filter (Kalman filter) technique was used with a streamflow-concentration model to minimize surface water quality sampling frequencies when determining annual mean solute concentrations with a predetermined allowable error. The Kalman filter technique used the stream discharge interval as a replacement for the more commonly used time interval. Using filter computations, the measurement error variance was minimized within the sample size constraints. The Kalman filter application proposed here is applicable only under several conditions, including: monitoring is solely to estimate annual mean concentration; discharge measurement errors are negligible; the streamflow-concentration model is valid; and monthly samples reflect the total variance of the solute in question.
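As textbook background for the technique named here (not a reconstruction of the paper's discharge-indexed model), a scalar Kalman filter estimating a constant annual mean from noisy samples; the posterior variance is what a designer would compare against the predetermined allowable error to decide how many samples are enough.

```python
def kalman_mean(samples, meas_var, prior_mean=0.0, prior_var=1e6):
    """Sequentially update the estimate of a constant mean; returns (mean, var).
    With a constant state there is no predict step, only measurement updates."""
    x, p = prior_mean, prior_var
    for z in samples:
        k = p / (p + meas_var)   # Kalman gain
        x = x + k * (z - x)      # measurement update
        p = (1 - k) * p          # posterior variance shrinks with each sample
    return x, p

mean, var = kalman_mean([0.31, 0.28, 0.35, 0.30], meas_var=0.01)
print(f"estimate {mean:.3f} +/- {var**0.5:.3f}")
```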

12.
The ability to detect and to develop a precise and accurate estimate of the entrainment mortality fraction is an important step in projecting power plant impacts on future fish population levels. Recent work indicates that these mortalities may be considerably less than 100% for some fish species in the early life stages. Point estimates of the entrainment mortality fraction have been developed based on probabilistic arguments, but the precision of these estimates has not been studied beyond the simple statistical test of the null hypothesis that no entrainment mortality exists. The ability to detect entrainment mortality is explored as a function of the sample sizes (numbers of organisms collected) at the intake and discharge sampling stations of a power plant and of the proportion of organisms found alive in the intake samples (intake survival). Minimum detectable entrainment mortality, confidence interval width, and type II error (the probability of accepting the null hypothesis of no entrainment mortality when there is mortality) are considered. Increasing sample size and/or decreasing sampling mortality will decrease the minimum detectable entrainment mortality, confidence interval width, and type II error for a given level of type I error. The results of this study are considered in the context of designing useful monitoring programs for determining the entrainment mortality fraction. Preliminary estimates of intake survival and the entrainment mortality fraction can be used to obtain estimates of the sample size needed for a specified level of confidence interval width or type II error. Final estimates of the intake survival and the entrainment mortality fraction can be used to determine the minimum detectable entrainment mortality and the type II error.

13.
ABSTRACT: Existing ambient water quality monitoring programs have resulted in data which are often unsuitable for assessment of water quality trends. A primary concern in designing a stream quality monitoring network is the selection of a temporal sampling strategy. It is extremely important that data for trend assessment be collected uniformly in time: such a strategy yields greatly superior trend detection power compared to stratified sampling strategies. In general, it is desirable that sampling frequencies be at least monthly but no more frequent than biweekly; higher sampling frequencies usually yield little additional information. An upper limit on trend detectability exists, such that for both five‐ and ten‐year base periods it is often impossible to detect trends in time series where the ratio of the trend magnitude to the time series standard deviation is less than about 0.5. For the same record lengths, trends in records with trend to standard deviation ratios greater than about one can usually be detected with very high power when a uniform sampling strategy is followed.
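A small simulation in the spirit of the detectability limit quoted above: power to detect a linear trend by regression, as a function of the ratio of trend magnitude to time series standard deviation, for uniform monthly sampling over a five-year record. Purely illustrative; the test and thresholds here are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
n_months, trials = 60, 2000                     # ~5 years of monthly samples
t = np.arange(n_months) / n_months              # time normalized to the record length

for ratio in (0.25, 0.5, 1.0, 2.0):
    detected = 0
    for _ in range(trials):
        y = ratio * t + rng.normal(0, 1, n_months)   # total trend over record = ratio * sd
        coeffs = np.polyfit(t, y, 1)
        resid = y - np.polyval(coeffs, t)
        se = resid.std(ddof=2) / np.sqrt(((t - t.mean()) ** 2).sum())
        detected += abs(coeffs[0] / se) > 2.0        # crude |t| > 2 significance test
    print(f"trend/sd = {ratio:4.2f}: detection power ~ {100 * detected / trials:.0f}%")
```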

14.
ABSTRACT: The sampling of streams and estimation of total loads of nitrogen, phosphorus, and suspended sediment play an important role in efforts to control the eutrophication of Lake Tahoe. We used a Monte Carlo procedure to test the precision and bias of four methods of calculating total constituent loads for nitrate‐nitrogen, soluble reactive phosphorus, particulate phosphorus, total phosphorus, and suspended sediment in one major tributary of the lake. The methods tested were two forms of Beale's Ratio Estimator, the Period Weighted Sample, and the Rating Curve. Intensive sampling in 1985 (a dry year) and 1986 (a wet year) provided a basis for estimating loads by the "worked record" method for comparison with estimates based on resampling actual data at the lower intensity that characterizes the present monitoring program. The results show that: (1) the Period Weighted Sample method was superior to the other methods for all constituents for 1985; and (2) for total phosphorus, particulate phosphorus, and suspended sediment, the Rating Curve gave the best results in 1986. Modification of the present sampling program and load calculation methods may be necessary to improve the precision and reduce the bias of estimates of total phosphorus loads in basin streams.
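A minimal sketch of Beale's Ratio Estimator in a commonly cited form (mean sampled load scaled by the flow ratio, with a finite-sample bias correction); variable names and units are ours, and the study's two variants may differ in detail.

```python
import numpy as np

def beale_load(c_sampled, q_sampled, q_all_days, seconds=86400):
    """Total load (kg) over len(q_all_days) days; conc in mg/L, flows in m^3/s."""
    c = np.asarray(c_sampled, float)
    q = np.asarray(q_sampled, float)
    l = c * q * seconds / 1000.0             # daily loads on sampled days, kg/day
    n, lbar, qbar = len(l), l.mean(), q.mean()
    s_lq = np.cov(l, q, ddof=1)[0, 1]        # load-flow sample covariance
    s_qq = q.var(ddof=1)
    bias = (1 + s_lq / (n * lbar * qbar)) / (1 + s_qq / (n * qbar**2))
    mu_q = np.asarray(q_all_days, float).mean()
    return len(q_all_days) * mu_q * (lbar / qbar) * bias
```

The bias factor corrects the ratio lbar/qbar for the covariance between load and flow, which is what makes the estimator competitive when samples cluster on high flows.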

15.
Cost-efficient sample designs for collection of ground data and accurate mapping of variables are required to monitor natural resources and environmental and ecological systems. In this study, a sample design and mapping method was developed by integrating stratification, model updating, and cokriging with Landsat Thematic Mapper (TM) imagery. This method is based on the spatial autocorrelation of variables and the spatial cross-correlation among them. It can lead to sample designs with variable grid spacing, where sampling distances between plots vary depending on spatial variability of the variables from location to location. This has potential cost-efficiencies in terms of sample design and mapping. This method is also applicable for mapping in the case in which no ground data can be collected in some parts of a study area because of the high cost. The method was validated in a case study in which a ground and vegetation cover factor was sampled and mapped for monitoring soil erosion. The results showed that when the sample obtained with three strata using the developed method was used for sampling and mapping the cover factor, the sampling cost was greatly decreased, although the error of the map was slightly increased compared to that without stratification; that is, the sample cost-efficiency quantified by the product of cost and error was greatly increased. The increase of cost-efficiency was more obvious when the cover factor values of the plots within the no-significant-change stratum were updated by a model developed using the previous observations instead of remeasuring them in the field.

16.
The dual goals of the Organic Act of 1916 and the Wilderness Act of 1964 are to protect natural resources and provide quality visitor experiences. Park managers need metrics of trail conditions to protect park resources and the quality of visitor experiences. A few methods of sampling design for trails have been developed. Here, we describe a relatively new method, spatially balanced sampling, and compare it to systematic sampling. We evaluated the efficiency of sampling designs to measure recreation-related impacts in Rocky Mountain National Park. This study addressed two objectives: first, it compared estimates of trail conditions derived from systematic versus spatially balanced sampling; second, it examined the relationship between sampling precision and sampling efficiency. No statistically significant differences in trail condition were found between the 100-m interval and the spatially balanced datasets. The spatially balanced probability-based dataset was found to give a good estimate of trail conditions when analyses were conducted with fewer sample points. Moreover, spatially balanced probability-based sampling is flexible and allows additional sample points to be added to a sample.

17.
ABSTRACT: The selection of sampling frequencies in order to achieve reasonably small and uniform confidence interval widths about annual sample means or sample geometric means of water quality constituents is suggested as a rational approach to regulatory monitoring network design. Methods are presented for predicting confidence interval widths at specified sampling frequencies while considering both seasonal variation and serial correlation of the quality time series. Deterministic annual cycles are isolated and serial dependence structures of the autoregressive, moving average type are identified through time series analysis of historic water quality records. The methods are applied to records for five quality constituents from a nine-station network in Illinois. Confidence interval widths about annual geometric means are computed over a range of sampling frequencies appropriate in regulatory monitoring. Results are compared with those obtained when a less rigorous approach, ignoring seasonal variation and serial correlation, is used. For a monthly sampling frequency, the error created by ignoring both seasonal variation and serial correlation is approximately 8 percent. Finally, a simpler technique for evaluating serial correlation effects, based on the assumption of AR(1) type dependence, is examined. It is suggested that values of the parameter ρ1 in the AR(1) model should range from 0.75 to 0.90 for the constituents and region studied.
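A sketch of why serial correlation matters for confidence interval width: under an AR(1) model with lag-one correlation ρ between successive samples, the variance of the mean of n equally spaced samples is inflated, which widens the CI. The closed form below is standard AR(1) algebra, not the paper's exact procedure; n = 12 monthly samples is our illustrative choice.

```python
import numpy as np

def ci_halfwidth(sigma, n, rho, z=1.96):
    """Approximate CI half-width for the mean of n AR(1)-correlated samples."""
    k = np.arange(1, n)
    # exact AR(1) variance inflation factor: 1 + (2/n) * sum((n-k) * rho^k)
    inflation = 1 + 2.0 / n * np.sum((n - k) * rho**k)
    return z * sigma * np.sqrt(inflation / n)

for rho in (0.0, 0.75, 0.90):   # the range of rho1 suggested in the abstract
    print(f"rho1 = {rho:.2f}: half-width = {ci_halfwidth(1.0, 12, rho):.2f} sigma")
```

Ignoring the inflation (rho = 0) understates the CI width, which is the error the abstract quantifies at roughly 8 percent for monthly sampling.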

18.
Odor regulations typically specify the use of dynamic dilution olfactometry (DDO) as a method to quantify odor emissions, and Tedlar bags are the preferred holding container for grab samples. This study was conducted to determine if Tedlar bags affect the integrity of sampled air from animal operations. Air samples were collected simultaneously in both Tedlar bags and Tenax thermal desorption tubes. Sample sources originated from either a hydrocarbon-free air tank, a dynamic headspace chamber (DHC), or a swine-production facility, and were analyzed by gas chromatography-mass spectrometry-olfactometry (GC-MS-O). Several background contaminants were identified from Tedlar bags, including the odorous compounds N,N-dimethyl acetamide (DMAC), acetic acid, and phenol. Samples from the DHC demonstrated that recovery of malodor compounds was dependent on residence time in the Tedlar bag, with longer residence time leading to lower recovery. After 24 h of storage, recovery of C3-C6 volatile fatty acids (VFA) averaged 64%, 4-methylphenol and 4-ethylphenol averaged 10%, and indole and 3-methylindole were below the detection limits of GC-MS-O. The odor activity value (OAV) of grab samples collected in Tedlar bags was 33 to 65% lower following 24 h of storage. These results indicate that significant odorant bias occurs when using Tedlar bags for the sampling of odors from animal production facilities.

19.
A 30-year record of monthly precipitation for Northern New Jersey was analyzed for its statistical components. With a weak annual periodicity eliminated, the series was found to be random. The data for each month were fit with a gamma distribution using Thom's suggested best estimates of the distribution parameters. A one-thousand-year simulated monthly precipitation series was generated using random values from the twelve gamma distributions. The statistical properties of the simulated and sample time series agreed well. Numerous anomalous precipitation regimes were observed in the simulated data.
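A sketch of the month-by-month gamma-fit-and-simulate procedure described above, using Thom's (1958) approximate maximum-likelihood estimator for the shape parameter. The data here are synthetic; the station's record is not reproduced, and the simulated series length is shortened for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

def thom_gamma_fit(x):
    """Thom's estimator: returns (shape, scale) for a gamma fit to positive data."""
    x = np.asarray(x, float)
    a_stat = np.log(x.mean()) - np.log(x).mean()
    shape = (1 + np.sqrt(1 + 4 * a_stat / 3)) / (4 * a_stat)
    return shape, x.mean() / shape

# fit each calendar month separately from a 30-yr synthetic record,
# then simulate 1000 years of monthly precipitation
monthly_records = [rng.gamma(2.5, 40.0, 30) for _ in range(12)]
fits = [thom_gamma_fit(m) for m in monthly_records]
simulated = np.array([[rng.gamma(k, th) for k, th in fits] for _ in range(1000)])
print("simulated mean annual precipitation:", simulated.sum(axis=1).mean())
```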

20.
It is often necessary to find a simpler method in different climatic regions to calculate reference crop evapotranspiration (ETo), since the application of the FAO‐56 Penman‐Monteith method is often restricted by the unavailability of a comprehensive weather dataset. Seven ETo methods, namely the standard FAO‐56 Penman‐Monteith, the FAO‐24 Radiation, FAO‐24 Blaney‐Criddle, 1985 Hargreaves, Priestley‐Taylor, 1957 Makkink, and 1961 Turc, were applied to calculate monthly averages of daily ETo, total annual ETo, and daily ETo in an arid region at Aksu, China, in a semiarid region at Tongchuan, China, and in a humid region at Starkville, Mississippi, United States. Comparisons were made between the FAO‐56 method and the other six simpler alternative methods, using the index of agreement D, modeling efficiency (EF), and root mean square error (RMSE). For the monthly averages of daily ETo, the values of D, EF, and RMSE ranged from 0.82 to 0.98, 0.55 to 0.98, and 0.23 to 1.00 mm/day, respectively. For the total annual ETo, the values of D, EF, and RMSE ranged from 0.21 to 0.91, −43.08 to 0.82, and 24.80 to 234.08 mm/year, respectively. For the daily ETo, the values of D, EF, and RMSE ranged from 0.58 to 0.97, 0.57 to 0.97, and 0.30 to 1.06 mm/day, respectively. The results showed that the Priestley‐Taylor and 1985 Hargreaves methods worked best in the arid and semiarid regions, while the 1957 Makkink worked best in the humid region.
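The three goodness-of-fit statistics used above, in their standard forms (Willmott's index of agreement D, Nash-Sutcliffe modeling efficiency EF, and RMSE); the paper may differ in detail, so treat this as a sketch with hypothetical input values.

```python
import numpy as np

def fit_stats(pred, obs):
    """Willmott's D, Nash-Sutcliffe EF, and RMSE between predictions and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    err = pred - obs
    obar = obs.mean()
    d = 1 - np.sum(err**2) / np.sum((np.abs(pred - obar) + np.abs(obs - obar))**2)
    ef = 1 - np.sum(err**2) / np.sum((obs - obar)**2)   # Nash-Sutcliffe efficiency
    rmse = np.sqrt(np.mean(err**2))
    return d, ef, rmse

d, ef, rmse = fit_stats([3.1, 4.0, 5.2], [3.0, 4.4, 5.0])   # e.g., ETo in mm/day
print(f"D = {d:.2f}, EF = {ef:.2f}, RMSE = {rmse:.2f} mm/day")
```

Note that EF is unbounded below (the abstract's −43.08 for annual totals), while D is confined to [0, 1], which is why the two can rank methods differently.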
