Similar Articles
1.
The National Water Model (NWM) simulates the hydrologic cycle and produces streamflow forecasts for 2.7 million reaches in the National Hydrography Dataset for the continental United States (U.S.). NWM uses Muskingum–Cunge channel routing, which is based on the continuity equation. However, the momentum equation also needs to be considered to obtain more accurate estimates of streamflow and stage in rivers, especially for applications such as flood-inundation mapping. Here, we used a steady-state backwater version of the Simulation Program for River NeTworks (SPRNT) model. We evaluated SPRNT's and NWM's abilities to predict inundated area for the record flood of Hurricane Matthew in October 2016. The Neuse River experienced record-breaking floods that were well documented by the U.S. Geological Survey. Streamflow simulations from the NWM retrospective analysis were used as input for the SPRNT simulation, and the retrospective NWM discharge predictions were converted to stage. The stages (from both SPRNT and NWM) were then used to produce flood-inundation maps with the Height Above Nearest Drainage method, which uses local relative heights to estimate local draining potentials and provide a spatial representation of the inundated area. The inundated-area accuracies for NWM and SPRNT (based on comparison to a remotely sensed dataset) were 65.1% and 67.6%, respectively. These results show that using the steady-state SPRNT yields a modest improvement in inundation-forecast accuracy compared to NWM.
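The Height Above Nearest Drainage (HAND) classification described above reduces to a simple threshold: a cell is inundated when the forecast stage reaches its height above the nearest drainage cell. A minimal sketch, with invented illustrative values (not data from the study):

```python
# Hypothetical HAND-based inundation sketch: each cell stores its height (m)
# above the nearest drainage cell; a cell floods when the stage reaches it.
def inundation_map(hand, stage):
    """Return a boolean grid: True where the cell is inundated."""
    return [[h <= stage for h in row] for row in hand]

# Illustrative 3x3 HAND grid (m); values are invented for demonstration.
hand = [[0.0, 0.5, 2.0],
        [0.3, 1.2, 3.5],
        [0.8, 2.5, 4.0]]
flooded = inundation_map(hand, stage=1.0)
n_flooded = sum(cell for row in flooded for cell in row)
print(n_flooded)  # 4 cells sit at or below a 1.0 m stage
```

The comparison to a remotely sensed flood extent then amounts to cell-by-cell agreement between two such boolean grids.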

2.
River networks derived from Digital Elevation Model (DEM) data differ depending on the DEM resolution, accuracy, and the algorithms used for network extraction. As spatial scale increases, the differences diminish. This study explores methods that identify the scale at which networks obtained by different methods agree within some margin of error. The problem is relevant for comparing hydrologic models built around two such networks. An example is the need to compare streamflow predictions from the Hillslope Link Model (HLM) operated by the Iowa Flood Center (IFC) and the National Water Model (NWM) operated by the National Water Center of the National Oceanic and Atmospheric Administration. The HLM uses a landscape decomposition into hillslopes and channel links, while the NWM uses the NHDPlus dataset as its basic spatial support. While the HLM resolves the scale of the NHDPlus, the outlets of the latter do not necessarily correspond to the nodes of the HLM. The authors evaluated two methods to map the outlets of the NHDPlus to outlets on the IFC network; the methods compare the upstream areas of the channels and their spatial locations. Both methods displayed similar performance and identified matches for about 80% of the outlets with a tolerance of 10% error in upstream area. As the aggregation scale increases, the number of matches also increases: at the scale of 100 km2, 90% of the outlets have matches with a tolerance of 5%. The authors recommend this scale for comparing the HLM and NWM streamflow predictions.
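The area-based part of the matching described above can be sketched as a nearest-area search with a relative tolerance. The IDs, areas, and the greedy one-pass strategy below are invented for illustration; the study's actual methods also use spatial location:

```python
def match_outlets(nhd_areas, ifc_areas, tol=0.10):
    """Map each NHDPlus outlet to the IFC node with the closest upstream
    area, keeping the match only if the relative error is within tol."""
    matches = {}
    for nid, area in nhd_areas.items():
        best = min(ifc_areas, key=lambda j: abs(ifc_areas[j] - area))
        if abs(ifc_areas[best] - area) / area <= tol:
            matches[nid] = best
    return matches

# Invented upstream areas in km^2.
nhd = {"r1": 100.0, "r2": 250.0, "r3": 40.0}
ifc = {"n1": 105.0, "n2": 300.0, "n3": 41.0}
print(match_outlets(nhd, ifc))  # r2 has no IFC node within the 10% tolerance
```

Tightening `tol` from 10% to 5% at a larger aggregation scale mirrors the trade-off the authors report.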

3.
The National Water Model (NWM) will provide the next generation of operational streamflow forecasts across the United States (U.S.) using the WRF-Hydro hydrologic model. In this study, we propose a strategy to calibrate 10 parameters of WRF-Hydro that control runoff generation during floods, during snowmelt seasons, and through baseflow. We focus on the Oak Creek Basin (820 km2), an unregulated mountainous sub-watershed of the Salt and Verde River Basins in Arizona, which are the largest source of water supply for the Phoenix metropolitan area. We calibrate the model against discharge observations at the outlet in 2008–2011, and validate it at two stream gauging stations in 2012–2016. After bias correcting the precipitation forcings, we sequentially modify the model parameters controlling distinct runoff generation processes in the basin. We find that capturing the deep drainage to the aquifer is crucial to improving the simulation of all processes and that this flux is mainly controlled by the SLOPE parameter. Performance metrics indicate that snowmelt, baseflow, and floods due to winter storms are simulated fairly well, while flood peaks caused by summer thunderstorms are severely underestimated. We suggest the use of spatially variable soil depth to enhance the simulation of these processes. This work supports the ongoing calibration effort of the NWM by testing WRF-Hydro in a watershed with a large variety of runoff mechanisms that are representative of several basins in the southwestern U.S.

4.
Hydrologic modeling can be used to provide warnings before floods and to support operations during and after them. Recent technological advances have increased our ability to create hydrologic models over large areas. In the United States (U.S.), a new National Water Model (NWM) that generates hydrologic variables at a national scale was released in August 2016. This model represents a substantial step forward in our ability to predict hydrologic events in a consistent fashion across the entire U.S. Nevertheless, for these hydrologic results to be effectively communicated, they need to be put in context and presented in a way that is straightforward and facilitates management-related decisions. The large amounts of data produced by the NWM present one of the major challenges to fulfilling this goal. We created a cyberinfrastructure to store NWM results, “accessibility” web applications to retrieve NWM results, and a REST API to access NWM results programmatically. To demonstrate the utility of this cyberinfrastructure, we created additional web apps that illustrate how to use our REST API and communicate hydrologic forecasts with the aid of dynamic flood maps. This work offers a starting point for the development of a more comprehensive toolset to validate the NWM while also improving the ability to access and visualize NWM forecasts and to develop additional national-scale derived products such as flood maps.

5.
ABSTRACT: Several federal and state water resources agencies and NASA have recently completed an Applications Systems Verification and Transfer (ASVT) project on the operational applications of satellite snow cover observations. When satellite snow cover data were tested in both empirical seasonal runoff estimation and short term modeling approaches, a definite potential for reducing forecast error was evident. Three years of testing in California resulted in reduction of seasonal streamflow forecast error from 15 percent to 10 percent on three study basins, and modeling studies on the Boise River basin in Idaho indicated that satellite snow cover could be used to reduce short term forecast error by up to 9.6 percent (5 day forecast). Potential benefits from improved satellite snow cover based predictions across the 11 western states total 10 million dollars for hydropower and 28 million dollars for irrigation annually. The truly operational application of the new technology in the West, however, will only be possible when the turnaround time for all data is reduced to 72 hours, and the water management agencies can be assured of a continuing supply of operational snow cover data from space.

6.
This paper explores the performance of the analysis-and-assimilation configuration of the National Water Model (NWM) v1.0 in Iowa. The NWM assimilates streamflow observations from the United States Geological Survey (USGS), which improves performance but also limits the data available for model evaluation. In this study, Iowa Flood Center Bridge Sensors (IFCBS) data provided an independent, nonassimilated dataset for the evaluation analyses. The authors compared NWM outputs for the period between May 2016 and April 2017 with two datasets: USGS streamflow and velocity observations, and stage and streamflow data from the IFCBS. The distributions of Spearman rank correlation (rs), Nash–Sutcliffe efficiency (E), and Kling–Gupta efficiency (KGE) quantified model performance. We found the performance was linked to the spatial scale of the basins. Analysis at USGS gauges showed the strongest performance in large (>10,000 km2) basins (rs = 0.9, E = 0.9, KGE = 0.8), with some decrease at small (<1,000 km2) basins (rs = 0.6, E = −0.25, KGE = −0.2). Analysis with independent IFCBS observations was used to report performance at large basins (rs = 0.6, KGE = 0.1) and small basins (rs = 0.2, KGE = −0.4). Data assimilation improves simulations at downstream basins. We found differences between the modeled and observed flow velocity distributions. The authors recommend checking the connection of USGS gauges and NHDPlus reaches at selected locations where performance is weak.
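For reference, the two efficiency scores quoted above can be computed from paired simulated/observed series as follows. This is a generic sketch with invented data, not the study's evaluation code:

```python
import math

def nse(sim, obs):
    """Nash–Sutcliffe efficiency: 1 is perfect, 0 means no better than the mean."""
    mo = sum(obs) / len(obs)
    return 1 - (sum((s - o) ** 2 for s, o in zip(sim, obs)) /
                sum((o - mo) ** 2 for o in obs))

def kge(sim, obs):
    """Kling–Gupta efficiency from correlation, variability, and bias terms."""
    n = len(sim)
    ms, mo = sum(sim) / n, sum(obs) / n
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim) / n)
    so = math.sqrt(sum((o - mo) ** 2 for o in obs) / n)
    r = sum((s - ms) * (o - mo) for s, o in zip(sim, obs)) / (n * ss * so)
    return 1 - math.sqrt((r - 1) ** 2 + (ss / so - 1) ** 2 + (ms / mo - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0]   # invented observed flows
sim = [1.1, 1.9, 3.2, 3.8]   # invented simulated flows
print(round(nse(sim, obs), 3))  # 0.98
```

Both scores equal 1 for a perfect simulation; negative values (as at the small basins above) mean the simulation is worse than simply predicting the observed mean.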

7.
Stormwater infrastructure designers and operators rely heavily on the United States Environmental Protection Agency's Storm Water Management Model (SWMM) to simulate stormwater and wastewater infrastructure performance. Since its inception in the late 1970s, improvements and extensions have been tested and evaluated rigorously to verify the accuracy of the model. As a continuation of this progress, the main objective of this study was to quantify how accurately SWMM simulates the hydrologic activity of low impact development (LID) storm control measures. Model performance was evaluated by quantitatively comparing empirical data to model results using a multievent, multiobjective calibration method. The calibration methodology utilized PEST, a Parameter ESTimation tool, to determine unmeasured hydrologic parameters for SWMM's LID modules. The calibrated LID modules' Nash–Sutcliffe efficiencies averaged 0.81; the average percent bias (PBIAS) was −9%; the average ratio of root mean square error to the standard deviation of measured values was 0.485; the average index of agreement was 0.94; and the average volume error, simulated vs. observed, was +9%. SWMM accurately predicted the timing of peak flows, but usually underestimated their magnitudes by 10%. The average volume reduction, measured outflow volume divided by inflow volume, was 48%. We had more difficulty calibrating one study, an infiltration trench, which identified a significant limitation of the current version of the SWMM LID module: it cannot simulate lateral exfiltration of water out of the storage layers of a LID storm control measure. This limitation is especially severe for deep LIDs, such as infiltration trenches. Nevertheless, SWMM satisfactorily simulated the hydrologic performance of eight of the nine LID practices.

8.
Abstract: In this article, we describe a method for predicting floodplain locations and potential lateral channel migration across 82,900 km (491 km2 by bankfull area) of streams in the Columbia River basin. Predictions are based on channel confinement, channel slope, bankfull width, and bankfull depth derived from digital elevation and precipitation data. Half of the 367 km2 (47,900 km by length) of low-gradient channels (≤4% channel slope) were classified as floodplain channels with a high likelihood of lateral channel migration (182 km2, 50%). Classification agreement between modeled and field-measured floodplain confinement was 85% (κ = 0.46, p < 0.001), with the largest source of error being the misclassification of unconfined channels as confined (55% omission error). Classification agreement between predicted channel migration and lateral migration determined from aerial photographs was 76% (κ = 0.53, p < 0.001), with the largest source of error being the misclassification of laterally migrating channels as nonmigrating (35% omission error). On average, more salmon populations were associated with laterally migrating channels and floodplains than with confined or nonmigrating channels. These data are useful for many river basin planning applications, including identification of land use impacts to floodplain habitats and of locations with restoration potential for listed salmonids or other species of concern.
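The agreement statistics reported above (percent agreement and Cohen's κ) come from a 2×2 confusion matrix of modeled versus field-determined classes. A generic sketch with invented counts, not the study's data:

```python
def agreement_and_kappa(tp, fp, fn, tn):
    """Percent agreement and Cohen's kappa from a 2x2 confusion matrix.
    tp/tn: correctly classified; fp/fn: the two kinds of misclassification."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                      # observed agreement
    pe = ((tp + fp) * (tp + fn) +           # chance agreement from marginals
          (fn + tn) * (fp + tn)) / n ** 2
    return po, (po - pe) / (1 - pe)

# Invented counts for a modeled-vs-field classification.
po, kappa = agreement_and_kappa(tp=40, fp=5, fn=11, tn=44)
print(round(po, 2), round(kappa, 2))  # 0.84 0.68
```

κ discounts the agreement expected by chance, which is why an 85% raw agreement can correspond to a κ of only 0.46 when one class dominates.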

9.
Streamflow monitoring in the Colorado River Basin (CRB) is essential to ensure diverse needs are met, especially during periods of drought or low flow. Existing stream gage networks, however, provide a limited record of past and current streamflow. Modeled streamflow products with more complete spatial and temporal coverage (including the National Water Model [NWM]) have primarily focused on flooding, rather than sustained drought or low flow conditions. The objectives of this study are to (1) evaluate historical performance of the NWM streamflow estimates (particularly with respect to droughts and seasonal low flows) and (2) identify characteristics relevant to model inputs and suitability for future applications. Comparisons of retrospective flows from the NWM to observed flows from the United States Geological Survey stream gage network over 22 years in the CRB reveal a tendency to underestimate low flow frequency, locations with low flows, and the number of years with low flows. We found model performance to be more accurate for the Upper CRB and at sites with higher precipitation, snow percent, baseflow index, and elevation. Underestimation of low flows and variable model performance have important implications for future applications: inaccurate evaluations of historical low flows and droughts, and less reliable performance outside of specific watershed/stream conditions. This highlights characteristics on which to focus future model development efforts.

10.
Over the summer of 2015, the National Water Center hosted the National Flood Interoperability Experiment (NFIE) Summer Institute. The NFIE organizers introduced a national-scale distributed hydrologic modeling framework that can provide flow estimates at around 2.67 million reaches within the continental United States. The framework generates discharges by coupling a given Land Surface Model (LSM) with the Routing Application for Parallel Computation of Discharge (RAPID); these discharges are then accumulated through the National Hydrography Dataset Plus stream network. The framework can utilize a variety of LSMs to provide the runoff maps to the routing component. The results obtained from this framework suggested that there is still room to improve its performance, especially in peak timing and magnitude. The goal of our study was to investigate a single source of error in the framework's discharge estimates: the routing component. The authors substitute RAPID, which is based on the simplified linear Muskingum routing method, with the nonlinear routing component that the Iowa Flood Center has incorporated in its full hydrologic Hillslope-Link Model. Our results show improvement in model performance across scales due to incorporating the new routing methodology.
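For context, the linear Muskingum scheme that RAPID builds on routes a reach with two parameters: a travel time K and a weighting factor X. A minimal single-reach sketch with illustrative parameter values, not RAPID's implementation:

```python
def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    """Route an inflow hydrograph through one reach with the linear
    Muskingum method; K (travel time) and X (weighting) are illustrative."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom   # note c0 + c1 + c2 == 1
    outflow = [inflow[0]]                 # assume an initial steady state
    for t in range(1, len(inflow)):
        outflow.append(c0 * inflow[t] + c1 * inflow[t - 1] + c2 * outflow[-1])
    return outflow

hydrograph = [5.0, 20.0, 50.0, 30.0, 15.0, 8.0, 5.0]  # invented inflows
routed = muskingum_route(hydrograph)
print(max(routed) < max(hydrograph))  # True: routing attenuates the peak
```

Because the coefficients are fixed, the response is linear in the inflow; the nonlinear Hillslope-Link routing substituted in the study lets the effective travel time vary with flow.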

11.
Channel dimensions (width and depth) at varying flows influence a host of instream ecological processes, as well as habitat and biotic features; they are a major consideration in stream habitat restoration and instream flow assessments. Models of widths and depths are often used to assess climate change vulnerability, develop endangered species recovery plans, and model water quality. However, development and application of such models require specific skillsets and resources. To facilitate acquisition of such estimates, we created a dataset of modeled channel dimensions for perennial stream segments across the conterminous United States. We used random forest models to predict wetted width, thalweg depth, bankfull width, and bankfull depth from several thousand field measurements of the National Rivers and Streams Assessment. Observed channel widths varied from <5 to >2000 m and depths varied from <2 to >125 m. Metrics of watershed area, runoff, slope, land use, and more were used as model predictors. The models had high pseudo R2 values (0.70–0.91) and median absolute errors within ±6% to ±21% of the interquartile range of measured values across 10 stream orders. Predicted channel dimensions can be joined to 1.1 million stream segments of the 1:100 K resolution National Hydrography Dataset Plus (version 2.1). These predictions, combined with a rapidly growing body of nationally available data, will further enhance our ability to study and protect aquatic resources.

12.
In this paper, the viability of modeling the instantaneous thermal efficiency (ηith) of a solar still was determined using meteorological and operational data with an artificial neural network (ANN), multivariate regression (MVR), and stepwise regression (SWR). The study used meteorological and operational variables to examine their effects on solar still performance. In the ANN model, nine variables were used as input parameters: Julian day, ambient temperature, relative humidity, wind speed, solar radiation, feed water temperature, brine water temperature, total dissolved solids of feed water, and total dissolved solids of brine water. The ηith was represented by one node in the output layer. The same parameters were used in the MVR and SWR models. The advantages and disadvantages of each model were discussed to provide different points of view. The performance evaluation criteria indicated that the ANN model was better than the MVR and SWR models: its mean coefficient of determination was about 13% and 14% higher than those of the MVR and SWR models, respectively. In addition, the mean root mean square error values of 6.534% and 6.589% for the MVR and SWR models, respectively, were almost double the mean value for the ANN model. Although the MVR and SWR models provided similar results, those of the MVR were comparatively better. The relative errors of predicted ηith values for the ANN model were mostly within ±10%. Consequently, the ANN model is preferred, owing to its high precision in predicting ηith compared to the MVR and SWR models. This study should be extremely beneficial to those coping with the design of solar stills.

13.
This study assessed the performance of six solar radiation models, with the objective of determining the most accurate model for estimating global solar radiation on a horizontal surface in Nigeria. Twenty-two years of meteorological data collected from the Nigerian Meteorological Agency and the National Aeronautics and Space Administration for three regions, covering the full range of climatic zones in Nigeria, were utilized to calibrate and validate the selected models. The accuracy and applicability of the models were determined for three locations spread across Nigeria (Abuja, Benin City, and Sokoto) using seven statistical indices. The study found that the estimation results of the considered models are statistically significant at the 95% confidence level, but their accuracy varies from one location to another. The multivariable regression relationship deduced in terms of sunshine ratio, air temperature ratio, maximum air temperature, and cloudiness performs better than the other relationships. It has the lowest root mean square error and mean absolute bias error, not exceeding 1.0854 and 0.8160 MJ m−2 day−1, respectively, and a monthly relative percentage error within ±12% for the study areas.

14.
Deep learning (DL) models are increasingly used to make accurate hindcasts of management-relevant variables, but they are less commonly used in forecasting applications. Data assimilation (DA) can be used in forecasts to leverage real-time observations: the difference between model predictions and observations today is used to adjust the model to make better predictions tomorrow. In this use case, we developed a process-guided DL and DA approach to make 7-day probabilistic forecasts of daily maximum water temperature in the Delaware River Basin in support of water management decisions. Our modeling system produced forecasts of daily maximum water temperature with an average root mean squared error (RMSE) of 1.1 to 1.4°C for 1-day-ahead and 1.4 to 1.9°C for 7-day-ahead forecasts across all sites. The DA algorithm marginally improved forecast performance compared with forecasts produced using the process-guided DL model alone (0%–14% lower RMSE with the DA algorithm). Across all sites and lead times, 65%–82% of observations fell within the 90% forecast confidence intervals, which allowed managers to anticipate the probability of exceedance of ecologically relevant thresholds and aided decisions about releasing reservoir water downstream. The flexibility of DL models shows promise for forecasting other important environmental variables and aiding decision-making.
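The DA idea summarized above — use today's forecast error to adjust tomorrow's prediction — can be illustrated with a simple nudging update. The gain value and the numbers are invented; the study's actual algorithm is more sophisticated than this:

```python
def nudge(raw_forecast, last_pred, last_obs, gain=0.5):
    """Shift the next raw forecast by a fraction of the latest observed error.
    gain=0 ignores the observation; gain=1 applies the full correction."""
    return raw_forecast + gain * (last_obs - last_pred)

# Yesterday the model predicted 18.0 C but 19.0 C was observed,
# so today's raw forecast of 20.0 C is nudged upward by half the error.
print(nudge(20.0, last_pred=18.0, last_obs=19.0))  # 20.5
```

With a well-chosen gain, persistent biases get corrected quickly while observation noise is only partially absorbed.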

15.
Operational forecast models require robust, computationally efficient, and reliable algorithms. Accurate forecasts are needed within the limits of the uncertainties in channel geometry and roughness because the output from these algorithms leads to flood warnings and a variety of water management decisions. The current operational National Water Model uses the Muskingum–Cunge method, which does not account for key hydraulic conditions such as flow hysteresis and backwater effects, limiting its ability in situations where backwater is pronounced. Such situations most commonly occur in low-gradient rivers, near confluences and channel constrictions, and in coastal regions where the combined actions of tides, storm surges, and wind can cause adverse flow. They necessitate a more rigorous flow routing approach, such as a dynamic or diffusive wave approximation, to simulate flow hydraulics accurately. Avoiding dynamic wave routing because of its extreme computational cost, this work presents two diffusive wave approaches to simulate flow routing in a complex river network. The study compares two diffusive wave models that both use a finite difference solution solved with an implicit Crank–Nicolson (CN) scheme with second-order accuracy in both time and space. The first model applies the CN scheme over three spatial nodes and is referred to as Crank–Nicolson over Space (CNS). The second model applies the CN scheme over three temporal nodes and is referred to as Crank–Nicolson over Time (CNT). Both models can properly account for complex cross-section geometry and variable spacing of computational points along the channel length. The models were tested in watersheds representing a mixture of steep and flat topographies. Comparing model outputs against observations of discharges and water levels indicated that the models accurately predict the peak discharge, peak water level, and flooding duration.
Both models are accurate and computationally stable over a broad range of hydraulic regimes. The CNS model depends on the Courant criterion, making it less computationally efficient where short channel segments are present. The CNT model does not suffer from that constraint; it is thus highly computationally efficient and could be more useful for operational forecast models.
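To illustrate the numerical machinery, here is a Crank–Nicolson step for the simplest diffusion analogue of the diffusive wave equation, u_t = D u_xx, using a Thomas tridiagonal solve. This is a generic textbook scheme with invented parameters, not the CNS or CNT implementations from the study:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system; a/c are the sub/super-diagonals, b the diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def cn_step(u, D=1.0, dt=0.1, dx=1.0):
    """One Crank-Nicolson step for u_t = D u_xx with fixed (Dirichlet) ends."""
    r = D * dt / dx ** 2
    n = len(u)
    a, b, c = [-r / 2] * n, [1 + r] * n, [-r / 2] * n
    a[0] = c[0] = a[-1] = c[-1] = 0.0   # boundary rows: hold end values
    b[0] = b[-1] = 1.0
    d = ([u[0]] +
         [r / 2 * u[i - 1] + (1 - r) * u[i] + r / 2 * u[i + 1]
          for i in range(1, n - 1)] +
         [u[-1]])
    return thomas(a, b, c, d)
```

Being implicit, the step is unconditionally stable for this linear problem, which is the property that frees the CNT variant described above from the Courant restriction.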

16.
Williamson, Tanja N. and Charles G. Crawford, 2011. Estimation of Suspended-Sediment Concentration From Total Suspended Solids and Turbidity Data for Kentucky, 1978-1995. Journal of the American Water Resources Association (JAWRA) 47(4):739-749. DOI: 10.1111/j.1752-1688.2011.00538.x Abstract: Suspended sediment is a constituent of water quality that is monitored because of concerns about accelerated erosion, nonpoint contamination of water resources, and degradation of aquatic environments. In order to quantify the relationship among different sediment parameters for Kentucky streams, long-term records were obtained from the National Water Information System of the U.S. Geological Survey. Suspended-sediment concentration (SSC), the parameter traditionally measured and reported by the U.S. Geological Survey, was statistically compared to turbidity and total suspended solids (TSS), two parameters that are considered surrogate data. A linear regression of log-transformed observations was used to estimate SSC from TSS; 72% of TSS observations were less than coincident SSC observations, but the estimated SSC values were almost as likely to be overestimated as underestimated. The SSC-turbidity relationship also used log-transformed observations, but required a nonlinear, breakpoint regression that separated turbidity observations ≤6 nephelometric turbidity units. The slope for these low turbidity values was not significantly different from zero, indicating that low turbidity observations provide no real information about SSC; in the case of the Kentucky sediment record, this accounts for 30% of the turbidity observations.
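The log-transformed regression described above fits log(SSC) = b0 + b1·log(TSS) by ordinary least squares and back-transforms for prediction. A generic sketch on synthetic data; the fitted Kentucky coefficients are not reproduced here:

```python
import math

def fit_loglog(x, y):
    """Least-squares fit of log(y) = b0 + b1 * log(x); returns (b0, b1)."""
    lx = [math.log(v) for v in x]
    ly = [math.log(v) for v in y]
    n = len(x)
    mx, my = sum(lx) / n, sum(ly) / n
    b1 = (sum((u - mx) * (v - my) for u, v in zip(lx, ly)) /
          sum((u - mx) ** 2 for u in lx))
    return my - b1 * mx, b1

def predict(tss, b0, b1):
    """Back-transform to concentration units (ignores retransformation bias)."""
    return math.exp(b0 + b1 * math.log(tss))

# Synthetic data generated from SSC = 2 * TSS**0.9, so the fit recovers it exactly.
tss = [10.0, 20.0, 50.0, 100.0]
ssc = [2.0 * t ** 0.9 for t in tss]
b0, b1 = fit_loglog(tss, ssc)
print(round(b1, 3))  # 0.9
```

A breakpoint regression like the SSC-turbidity one above would fit two such segments, with the low-turbidity slope constrained near zero.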

17.
Cheng, Shin-jen, 2010. Inferring Hydrograph Components From Rainfall and Streamflow Records Using a Kriging Method-Based Linear Cascade Reservoir Model. Journal of the American Water Resources Association (JAWRA) 46(6):1171–1191. DOI: 10.1111/j.1752-1688.2010.00484.x Abstract: This study investigates the characteristics of hydrograph components in a Taiwan watershed to determine their shapes based on observations. Hydrographs were modeled by a conceptual model of three linear cascade reservoirs. Mean rainfall was calculated using the block Kriging method. The optimal parameters for 42 events from 1966–2008 were calibrated using an optimization algorithm. The generated runoffs compared well with those of a trusted model. Model efficacy was verified using seven averaged parameters with 25 other events. Hydrograph components were characterized based on the 42 calibration results. The following conclusions were obtained: (1) except for multipeak storms, the correlation between the base time of the surface runoff and soil antecedent moisture is a decreasing power relationship; (2) the correlation between the time lag of the surface flow and soil antecedent moisture for single-peak storms is an increasing power relationship; (3) for single-peak events, the times to peak of the hydrograph components show an increasing power correlation with the peak time of rainfall; (4) the peak flows of the hydrograph components are linearly proportional to that of the total runoff, with peak ratios of approximately 78% for surface runoff and 13% for subsurface runoff; and (5) the total discharges of the hydrograph components are in direct ratio to the observed total runoff, with surface runoff accounting for about 60% and subsurface runoff for 32%.
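The conceptual model named above — linear reservoirs in cascade, each releasing outflow proportional to its storage — can be sketched with an explicit time step. The storage constant k and all values are invented for illustration:

```python
def cascade_step(storages, inflow, k=3.0, dt=1.0):
    """Advance a cascade of linear reservoirs (Q = S / k) one time step;
    each reservoir's outflow feeds the next. Returns (new_storages, outflow)."""
    q_in = inflow
    new = []
    for s in storages:
        q_out = s / k
        new.append(s + dt * (q_in - q_out))
        q_in = q_out
    return new, q_in

# At steady state (S = k * Q in every reservoir) the cascade passes the
# inflow straight through and the storages do not change.
storages, outflow = cascade_step([6.0, 6.0, 6.0], inflow=2.0, k=3.0)
print(storages, outflow)  # [6.0, 6.0, 6.0] 2.0
```

Feeding a rainfall pulse through such a cascade produces the smooth, delayed hydrograph shapes whose base times and lags the study characterizes.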

18.
Parametric (propagation for normal error estimates) and nonparametric methods (bootstrap and enumeration of combinations) to assess the uncertainty in calculated rates of nitrogen loading were compared, based on the propagation of uncertainty observed in the variables used in the calculation. In addition, since such calculations are often based on literature surveys rather than random replicate measurements for the site in question, error propagation was also compared using the uncertainty of the sampled population (e.g., standard deviation) as well as the uncertainty of the mean (e.g., standard error of the mean). Calculations for the predicted nitrogen loading to a shallow estuary (Waquoit Bay, MA) were used as an example. The previously estimated mean loading from the watershed (5,400 ha) to Waquoit Bay (600 ha) was 23,000 kg N yr−1. The mode of a nonparametric estimate of the probability distribution differed dramatically, equaling only 70% of this mean. Repeated observations were available for only 8 of the 16 variables used in our calculation. We estimated uncertainty in model predictions by treating these as sample replicates. Parametric and nonparametric estimates of the standard error of the mean loading rate were 12–14%. However, since the available data include site-to-site variability, as is often the case, standard error may be an inappropriate measure of confidence. The standard deviations were around 38% of the loading rate. Further, 95% confidence intervals differed between the nonparametric and parametric methods, with those of the nonparametric method arranged asymmetrically around the predicted loading rate. The disparity in magnitude and symmetry of calculated confidence limits argue for careful consideration of the nature of the uncertainty of variables used in chained calculations. 
This analysis also suggests that a nonparametric method of calculating loading rates using most frequently observed values for variables used in loading calculations may be more appropriate than using mean values. These findings reinforce the importance of including assessment of uncertainty when evaluating nutrient loading rates in research and planning. Risk assessment, which may need to consider relative probability of extreme events in worst-case scenarios, will be in serious error using normal estimates, or even the nonparametric bootstrap. A method such as our enumeration of combinations produces a more reliable distribution of risk.
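The bootstrap comparison described above can be sketched as: resample the data with replacement many times, take the mean of each resample, and use the spread of those means as the standard error. Synthetic data and a fixed seed for reproducibility — not the Waquoit Bay variables:

```python
import math
import random
import statistics

def bootstrap_se(data, n_boot=2000, seed=42):
    """Nonparametric bootstrap standard error of the mean."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.choices(data, k=len(data)))
             for _ in range(n_boot)]
    return statistics.stdev(means)

data = [3.1, 2.7, 3.6, 2.9, 3.3, 3.0, 2.8, 3.4]   # invented replicates
parametric_se = statistics.stdev(data) / math.sqrt(len(data))
print(round(parametric_se, 3), round(bootstrap_se(data), 3))
```

For roughly normal data the two estimates agree, as in the 12%–14% figures above; the bootstrap's advantage is that it also yields the full, possibly asymmetric, distribution of the estimate.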

19.
The 2001 National Land Cover Database (NLCD) provides 30-m resolution estimates of percentage tree canopy and percentage impervious cover for the conterminous United States. Previous comparisons of NLCD tree canopy and impervious cover estimates with photo-interpreted cover estimates within selected counties and places revealed that the NLCD underestimates tree and impervious cover. Based on these previous results, a wall-to-wall comprehensive national analysis was conducted to determine if and how NLCD-derived estimates of tree and impervious cover vary from photo-interpreted values across the conterminous United States. Results of this analysis reveal that the NLCD significantly underestimates tree cover in 64 of the 65 zones used to create the NLCD cover maps, with a national average underestimation of 9.7% (standard error (SE) = 1.0%) and a maximum underestimation of 28.4% in mapping zone 3. Impervious cover was also underestimated in 44 zones, with an average underestimation of 1.4% (SE = 0.4%) and a maximum underestimation of 5.7% in mapping zone 56. Understanding the degree of underestimation by mapping zone can lead to better estimates of tree and impervious cover and a better understanding of the potential limitations associated with NLCD cover estimates.

20.
Abstract: A practical methodology is proposed to estimate the three-dimensional variability of soil moisture based on a stochastic transfer function model, which is an approximation of the Richards equation. Satellite, radar, and in situ observations are the major sources of information used to develop a model that represents the dynamic water content in the soil. The soil-moisture observations were collected from 17 stations located in Puerto Rico (PR), and a sequential quadratic programming algorithm was used to estimate the parameters of the transfer function (TF) at each station. Soil texture information, terrain elevation, vegetation index, surface temperature, and accumulated rainfall for every grid cell were input into a self-organized artificial neural network to identify similarities in terrain spatial variability and to determine the TF that best resembles the properties of a particular grid point. Soil moisture observed at 20 cm depth, soil texture, and cumulative rainfall were also used to train a feedforward artificial neural network to estimate soil moisture at 5, 10, 50, and 100 cm depth. A validation procedure was implemented to measure the horizontal and vertical estimation accuracy of soil moisture. Validation results for the spatial and temporal variation of volumetric water content (vwc) showed that the proposed algorithm estimated soil moisture with a root mean squared error (RMSE) of 2.31% vwc, and the vertical profile showed an RMSE of 2.50% vwc. The algorithm estimates soil moisture on an hourly basis at 1 km spatial resolution and up to 1 m depth, and was successfully applied under PR climate conditions.
