Similar Documents
20 similar documents retrieved.
1.
Analyzing soils for contaminants can be costly. Generally, discrete samples are gathered from within a study area, analyzed by a laboratory and the results are used in a site-specific statistical analysis. Because of the heterogeneities that exist in soil samples within study areas, a large amount of variability and skewness may be present in the sample population. This necessitates collecting a large number of samples to obtain reliable inference on the mean contaminant concentration and to understand the spatial patterns for future remediation. Composite, or incremental, sampling is a commonly applied method for gathering multiple discrete samples and physically combining them, such that each combination of discrete samples requires a single laboratory analysis, which reduces cost and can improve the estimates of the mean concentration. While incremental sampling can reduce cost and improve mean estimates, current implementations do not readily facilitate the characterization of spatial patterns or the detection of elevated constituent regions within study areas. The methods we present in this work provide efficient estimation and inference for the mean contaminant concentration over the entire spatial area and enable the identification of high contaminant regions within the area of interest. We develop sample design methodologies that explicitly define the characteristics of these designs (such as sample grid layout) and quantify the number of incremental samples that must be obtained under a design criterion to control false positive and false negative (Type I and II) decision errors. We present the sample design theory and specifications as well as results on simulated and real data.
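The sample-size side of such a design criterion can be illustrated with a hedged sketch. The snippet below is not the authors' methodology; it is the classical one-sided normal-approximation formula for choosing the number of increments so that a test of the mean against an action level controls Type I and II errors. All names and numbers (action_level, alt_mean, sigma) are purely illustrative assumptions.

```python
# Minimal sketch, not the paper's design criterion: normal-approximation
# sample size for a one-sided test of the mean against an action level.
from math import ceil

from scipy.stats import norm


def n_increments(action_level, alt_mean, sigma, alpha=0.05, beta=0.10):
    """Increments needed so the test has size alpha and power 1 - beta
    at the alternative mean alt_mean."""
    delta = abs(action_level - alt_mean)           # difference to detect
    z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)   # sum of standard normal quantiles
    return ceil((z * sigma / delta) ** 2)


# Illustrative numbers only: action level 400 mg/kg, alternative mean 320 mg/kg,
# standard deviation 150 mg/kg -> 31 increments.
print(n_increments(action_level=400, alt_mean=320, sigma=150))
```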

2.
The initial use of composite sampling involved the analysis of many negative samples with relatively high laboratory cost (Dorfman sampling). We propose a method of double compositing and compare its efficiency with Dorfman sampling. The variability of composite measurement samples is of environmental interest (hot spots). The precision of these estimates depends on the kurtosis of the distribution; leptokurtic distributions (γ₂ > 0) have increased precision as the number of field samples is increased. The opposite effect is obtained for platykurtic distributions. In the lognormal case, coverage probabilities are reasonable for σ < 0.5. The Poisson distribution can be associated with temporal compositing, of particular interest where radioactive measurements are taken. Sample size considerations indicate that the total sampling effort is directly proportional to the length of time sampled. If there is background radiation, then increasing levels of this radiation require larger sample sizes to detect the same difference in radiation.

3.
There is an increasing interest in the quality of soil, especially for small geographical areas. We present a method to estimate the percent of the area in a county or hydrological basin that is eroded. There are sample data (for several counties in eastern Iowa) from the National Resources Inventory and population data on land use, land capability class, rainfall and slope length and steepness. Using the Gibbs sampler we perform Bayesian predictive inference to obtain estimates for the non-sampled units. These estimates, together with the sample data, provide an estimate of the proportion of the total area that is eroded. We assess the quality of fit of our model using two cross-validation exercises and graphical methods.

4.
Addressing onsite sampling in recreation site choice models
Independent experts and politicians have criticized statistical analyses of recreation behavior, which rely upon onsite samples due to their potential for biased inference. The use of onsite sampling usually reflects data or budgetary constraints, but can lead to two primary forms of bias in site choice models. First, the strategy entails sampling site choices rather than sampling individuals—a form of bias called endogenous stratification. Under these conditions, sample choices may not reflect the site choices of the true population. Second, exogenous attributes of the individuals sampled onsite may differ from the attributes of individuals in the population—the most common form in recreation demand is avidity bias. We propose addressing these biases by combining two existing methods: Weighted Exogenous Stratification Maximum Likelihood estimation and propensity score estimation. We use the National Marine Fisheries Service's Marine Recreational Fishing Statistics Survey to illustrate methods of bias reduction, employing both simulated and empirical applications. We find that propensity score based weights can significantly reduce bias in estimation. Our results indicate that failure to account for these biases can overstate anglers' willingness to pay for improvements in fishing catch, but weighted models exhibit higher variance of parameter estimates and willingness to pay.
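As a rough illustration of the propensity-score half of that combination (not the authors' estimator), the sketch below reweights a simulated onsite sample toward a reference population sample. The variable names, the simulated covariates, and the plain logistic model are all assumptions.

```python
# Hedged sketch: inverse-odds propensity weights that pull an onsite sample
# toward a reference population sample; all data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_onsite = rng.normal(loc=1.0, size=(500, 2))   # onsite anglers: avidity-shifted covariates
X_pop = rng.normal(loc=0.0, size=(2000, 2))     # reference sample from the population

X = np.vstack([X_onsite, X_pop])
s = np.r_[np.ones(len(X_onsite)), np.zeros(len(X_pop))]   # 1 = observed onsite

# Propensity of appearing in the onsite sample given covariates.
e = LogisticRegression().fit(X, s).predict_proba(X_onsite)[:, 1]

weights = (1 - e) / e                       # inverse-odds weights toward the population
weights *= len(weights) / weights.sum()     # normalize to the onsite sample size

# 'weights' could then enter a weighted (WESML-style) site-choice likelihood.
print(weights[:5].round(2))
```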

5.
The relative abundance of organisms from different taxa provides information about ecosystem health and diversity. When the numbers of sampled organisms are modelled as Poisson counts, and the sample volumes are not uniform, variance for the proportion attributable to each taxon is difficult to compute. We present a method for computing approximate variances for this situation. The point estimates and their standard errors reduce to the standard multinomial maximum likelihood results when sample volumes are uniform. Further, given initial estimates of population densities for the taxa of interest, optimal sample volumes can be computed. The methods are illustrated for zooplankton counts from Andrus Lake, Michigan.
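A delta-method sketch of the kind of approximation described above is given below. The setup (each taxon's count observed over its own sampled volume) and the function name are assumptions rather than the paper's exact estimator.

```python
# Hedged sketch: proportions of taxa from Poisson counts over unequal volumes,
# with approximate standard errors from the delta method.
import numpy as np


def taxon_proportions(x, v):
    """x: counts per taxon, v: sampled volume for each count. Returns (p, se)."""
    x, v = np.asarray(x, float), np.asarray(v, float)
    lam = x / v                      # estimated density of each taxon
    Lam = lam.sum()
    p = lam / Lam                    # estimated proportion of each taxon
    var_lam = x / v**2               # Var(lam_i) = lam_i / v_i, estimated by x_i / v_i^2
    # gradient of p_i with respect to lam_j
    G = (np.eye(len(x)) * Lam - lam[:, None]) / Lam**2
    se = np.sqrt((G**2 * var_lam[None, :]).sum(axis=1))
    return p, se


p, se = taxon_proportions(x=[120, 45, 8], v=[2.0, 1.5, 1.0])
print(p.round(3), se.round(3))
```

With uniform volumes the point estimates reduce to the usual multinomial proportions x / sum(x), matching the reduction noted in the abstract.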

6.
A new spatially balanced sampling design for environmental surveys is introduced, called Halton iterative partitioning (HIP). The design draws sample locations that are well spread over the study area. Spatially balanced designs are known to be efficient when surveying natural resources because nearby locations tend to be similar. The HIP design uses structural properties of the Halton sequence to partition a resource into nested boxes. Sample locations are then drawn from specific boxes in the partition to ensure spatial diversity. The method is conceptually simple and computationally efficient, draws spatially balanced samples in two or more dimensions and uses standard design-based estimators. Furthermore, HIP samples have an implicit ordering that can be used to define spatially balanced over-samples. This feature is particularly useful when sampling natural resources because we can dynamically add spatially balanced units from the over-sample to the sample as non-target or inaccessible units are discovered. We use several populations to show that HIP sampling draws spatially balanced samples and gives precise estimates of population totals.
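The spreading property of the Halton sequence, on which HIP builds, can be illustrated with a minimal sketch. This omits the nested-box partitioning and randomization of the actual design and simply maps the base-2/base-3 Halton sequence onto a rectangular study area; the function names are illustrative.

```python
# Hedged sketch: well-spread 2-D points from the Halton sequence, not the full
# HIP design (no nested-box partitioning, no randomization).
def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r


def halton_points(n, xmin, xmax, ymin, ymax, skip=1):
    pts = []
    for i in range(skip, skip + n):   # bases 2 and 3 give the 2-D Halton sequence
        u, v = radical_inverse(i, 2), radical_inverse(i, 3)
        pts.append((xmin + u * (xmax - xmin), ymin + v * (ymax - ymin)))
    return pts


for x, y in halton_points(5, 0, 100, 0, 50):
    print(f"({x:.1f}, {y:.1f})")
```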

7.
Trace element concentrations in plant bioindicators are often determined to assess the quality of the environment. Instrumental methods used for trace element determination require digestion of samples. There are different methods of sample preparation for trace element analysis, and the selection of the best method should be suited to the purpose of a study. Our hypothesis is that the method of sample preparation is important for interpretation of the results. Here we compare the results of 36 element determinations performed by ICP-MS on ashed and on acid-digested (HNO3, H2O2) samples of two moss species (Hylocomium splendens and Pleurozium schreberi) collected in Alaska and in south-central Poland. We found that dry ashing of the moss samples prior to analysis resulted in considerably lower detection limits for all the elements examined. We also show that this sample preparation technique facilitated the determination of interregional and interspecies differences in the chemistry of trace elements. Compared to the Polish mosses, the Alaskan mosses displayed more positive correlations of the major rock-forming elements with ash content, reflecting those elements’ geogenic origin. Of the two moss species, P. schreberi from both Alaska and Poland also stood out for a larger number of positive element pair correlations. The cluster analysis suggests that the more uniform element distribution pattern of the Polish mosses primarily reflects regional air pollution sources. Our study has shown that the method of sample preparation is an important factor in the statistical interpretation of the results of trace element determinations.

8.
Quantifying a composite sample results in a loss of information on the values of the constituent individual samples. As a consequence of this information loss, it is impossible to identify individual samples having large values based on composite sample measurements alone. However, under certain circumstances, it is possible to identify individual samples having large values without exhaustively measuring all individual samples. In addition to composite sample measurements, a few additional measurements on carefully selected individual samples are sufficient to identify the individual samples having large values. In this paper, we present a statistical method to recover extremely large individual sample values using composite sample measurements. An application to site characterization is used to illustrate the method. The paper has been prepared with partial support from the United States Environmental Protection Agency (Number CR815273). The contents have not been subject to Agency review and therefore do not necessarily reflect the views or policies of the Agency, and no official endorsement should be inferred.

9.
Compositing of individual samples is a cost-effective method for estimating a population mean, but at the expense of losing information about the individual sample values. The largest of these sample values (hotspot) is sometimes of particular interest. Sweep-out methods attempt to identify the hotspot and its value by quantifying a (hopefully, small) subset of individual values as well as the usual quantification of the composites. Sweep-out design is concerned with the sequential selection of individual samples for quantification on the basis of all earlier quantifications (both composite and individual). The design-goal is for the number of individual quantifications to be small (ideally, minimal). Previous sweep-out designs have applied to traditional (i.e., disjoint) compositing. This paper describes a sweep-out design suitable for two-way compositing. That is, the individual samples are arranged in a rectangular array and a composite is formed from each row and also from each column. At each step, the design employs all available measurements (composite and individual) to form the best linear unbiased predictions for the currently unquantified cells. The cell corresponding to the largest predicted value is chosen next for individual measurement. The procedure terminates when the hotspot has been identified with certainty.
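A heavily simplified first step of such a scheme is sketched below: it uses an additive row-plus-column approximation instead of the paper's best linear unbiased predictions, purely to show how row and column composites point to a candidate hotspot cell. The simulated array and all names are assumptions.

```python
# Hedged first-step sketch (additive approximation, not the paper's BLUP):
# predict each cell from its row and column composites and pick the cell
# most likely to be the hotspot for individual measurement.
import numpy as np

rng = np.random.default_rng(1)
Y = rng.lognormal(mean=1.0, sigma=0.8, size=(4, 5))   # true (unknown) individual values

row_comp = Y.mean(axis=1)        # composite formed from each row
col_comp = Y.mean(axis=0)        # composite formed from each column
grand = row_comp.mean()          # overall mean implied by the composites

pred = row_comp[:, None] + col_comp[None, :] - grand  # additive cell predictions
i, j = np.unravel_index(np.argmax(pred), pred.shape)
print(f"measure cell ({i}, {j}) first; predicted {pred[i, j]:.2f}, true {Y[i, j]:.2f}")
```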

10.
Accelerated solvent extraction and its applications in environmental analysis
牟世芬, 刘克纳. 《环境化学》 (Environmental Chemistry), 1997, 16(4): 387-391
Accelerated solvent extraction is a novel sample pretreatment technique in which organic solvents are used at elevated temperature and pressure to extract target analytes from a matrix rapidly and efficiently. This article systematically describes the basic principles of the technique, the various factors that influence it, and its applications in environmental analysis. The method is suitable for the pretreatment of solid and semi-solid samples.

11.
Analysis of two-state multivariate phenotypic change in ecological studies
Collyer ML, Adams DC. Ecology, 2007, 88(3): 683-692
Analyses of two-state phenotypic change are common in ecological research. Some examples include phenotypic changes due to phenotypic plasticity between two environments, changes due to predator/non-predator character shifts, character displacement via competitive interactions, and patterns of sexual dimorphism. However, methods for analyzing phenotypic change for multivariate data have not been rigorously developed. Here we outline a method for testing vectors of phenotypic change in terms of two important attributes: the magnitude of change (vector length) and the direction of change described by trait covariation (angular difference between vectors). We describe a permutation procedure for testing these attributes, which allows non-targeted sources of variation to be held constant. We provide examples that illustrate the importance of considering vector attributes of phenotypic change in biological studies, and we demonstrate how greater inference can be made than by evaluating variance components with MANOVA alone. Finally, we consider how our method may be extended to more complex data.
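A simplified sketch of the two vector attributes is shown below on simulated data; the permutation shown is a naive label shuffle within each group, not the residual-randomization procedure the authors develop, and every name and number is an illustrative assumption.

```python
# Hedged sketch: length difference and angle between two phenotypic-change
# vectors, with a naive within-group permutation test on simulated traits.
import numpy as np


def change_vector(X, state):
    return X[state == 2].mean(axis=0) - X[state == 1].mean(axis=0)


def attributes(dA, dB):
    lenA, lenB = np.linalg.norm(dA), np.linalg.norm(dB)
    cos = np.dot(dA, dB) / (lenA * lenB)
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
    return abs(lenA - lenB), angle


rng = np.random.default_rng(2)
XA = rng.normal(size=(40, 3)); stateA = np.repeat([1, 2], 20); XA[stateA == 2] += 1.0
XB = rng.normal(size=(40, 3)); stateB = np.repeat([1, 2], 20); XB[stateB == 2] += 0.4

obs_len_diff, obs_angle = attributes(change_vector(XA, stateA), change_vector(XB, stateB))

perm = np.array([
    attributes(change_vector(XA, rng.permutation(stateA)),
               change_vector(XB, rng.permutation(stateB)))[0]
    for _ in range(999)
])
p = (np.sum(perm >= obs_len_diff) + 1) / (999 + 1)
print(f"length difference {obs_len_diff:.2f}, angle {obs_angle:.1f} deg, perm p = {p:.3f}")
```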

12.
Culturomic tools enable the exploration of trends in human–nature interactions, although they entail inherent biases and necessitate careful validation. Furthermore, people may engage with nature across different culturomic data sets differently. We evaluated people's digital interest and engagement with plant species based on Wikipedia and Google data and explored the conservation implications of these temporal interest patterns. As a case study, we explored the digital footprints of the most popular plant species in Israel. We analyzed 4 years of daily page views from Hebrew Wikipedia and 10 years of daily Google search volume in Israel. We modeled popularity of plant species in these 2 data sets based on a suite of plant attributes. We further explored the seasonal trends of people's interest in each species. We found differences in how people interacted digitally with plants in Wikipedia and Google. Overall, in Google, searches for species that have utility to humans were more common, whereas in Wikipedia, plants that serve as cultural emblems received more attention. Furthermore, in Google, popular species attracted more attention over time, opposite to the trend in Wikipedia. In Google, interest in species with short bloom duration exhibited more pronounced seasonal patterns, whereas in Wikipedia, seasonality of interest increased as bloom duration increased. Together, our results suggest that people's digital interactions with nature may be inherently different depending on the sources explored, which may affect use of this information for conservation. Although culturomics holds much promise, better understanding of its underpinnings is important when translating insights into conservation actions.

13.
We consider the situation where there are n sampling sites in an area, with an environmental variable measured at some or all of the sites at m sample times. We assume that there is interest in knowing whether the environmental variable displays systematic changes over time at the sites. A cumulative sum type of analysis with associated randomization tests has been proposed before for this situation, when there is negligible correlation between the observations at different times at one site, and no correlation between the results at different sites. A modification that allows for serial correlation at the individual sites but with no correlation between sites has also been proposed before. In the present paper we discuss how the method can be modified further to allow for spatial correlation between the sites, by applying it only to a reduced set of sites that are far enough apart to give effectively independent results. Simulation results indicate that this strategy is effective provided that the level of spatial correlation is not too high.
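For a single site, a cumulative-sum statistic with a randomization test might look like the hedged sketch below; the exact statistic used in the paper is not specified here, so this is an assumed form for illustration, with made-up data.

```python
# Hedged sketch: CUSUM of deviations from the site mean, with a randomization
# test that permutes the time order. The paper applies such tests only to a
# subset of sites far enough apart to be treated as independent.
import numpy as np


def cusum_stat(y):
    return np.max(np.abs(np.cumsum(y - y.mean())))


def cusum_randomization_test(y, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    obs = cusum_stat(y)
    perm = np.array([cusum_stat(rng.permutation(y)) for _ in range(n_perm)])
    return obs, (np.sum(perm >= obs) + 1) / (n_perm + 1)


y = np.array([5.1, 5.3, 5.0, 5.4, 5.9, 6.2, 6.1, 6.5, 6.4, 6.8])  # upward drift
obs, p = cusum_randomization_test(y)
print(f"max |CUSUM| = {obs:.2f}, randomization p = {p:.3f}")
```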

14.
The time and effort required for probability sampling for accuracy assessment of large-scale land cover maps often means that probability test samples are not collected. Yet, map usefulness is substantially reduced without reliable accuracy estimates. In this article, we introduce a method of estimating the accuracy of a classified map that does not utilize a test sample in the usual sense, but instead estimates the probability of correct classification for each map unit using only the classification rule and the map unit covariates. We argue that the method is an improvement over conventional estimators, though it does not eliminate the need for probability sampling. The method also provides a new and simple method of constructing accuracy maps. We illustrate some of the problems associated with accuracy assessment of broad-scale land cover maps, and our method, with a set of nine Landsat Thematic Mapper satellite image-based land cover maps from Montana and Wyoming, USA.
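One simple version of this idea (not necessarily the article's estimator) is to read each map unit's probability of correct classification off the classifier's own predicted class probabilities and average them over the map. The sketch below does this with synthetic covariates and an arbitrary random-forest rule, both of which are assumptions.

```python
# Hedged sketch: per-unit probabilities of correct classification taken from the
# classification rule itself, then averaged into a map-accuracy summary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X_train, y_train = make_classification(n_samples=500, n_features=6,
                                        n_informative=4, n_classes=3, random_state=0)
X_map, _ = make_classification(n_samples=2000, n_features=6,
                               n_informative=4, n_classes=3, random_state=1)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
proba = clf.predict_proba(X_map)      # covariates of every map unit, no test sample
per_unit_acc = proba.max(axis=1)      # P(correct) if the mapped class is the argmax
print(f"estimated map accuracy: {per_unit_acc.mean():.3f}")
```

The vector per_unit_acc, mapped back to the units' locations, is one way to draw the kind of accuracy map the abstract mentions.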

15.
Bayesian methods incorporate prior knowledge into a statistical analysis. This prior knowledge is usually restricted to assumptions regarding the form of probability distributions of the parameters of interest, leaving their values to be determined mainly through the data. Here we show how a Bayesian approach can be applied to the problem of drawing inference regarding species abundance distributions and comparing diversity indices between sites. The classic log series and the lognormal models of relative-abundance distribution are apparently quite different in form. The first is a sampling distribution while the other is a model of abundance of the underlying population. Bayesian methods help unite these two models in a common framework. Markov chain Monte Carlo simulation can be used to fit both distributions as small hierarchical models with shared common assumptions. Sampling error can be assumed to follow a Poisson distribution. Species not found in a sample, but suspected to be present in the region or community of interest, can be given zero abundance. This not only simplifies the process of model fitting, but also provides a convenient way of calculating confidence intervals for diversity indices. The method is especially useful when a comparison of species diversity between sites with different sample sizes is the key motivation behind the research. We illustrate the potential of the approach using data on fruit-feeding butterflies in southern Mexico. We conclude that, once all assumptions have been made transparent, a single data set may provide support for the belief that diversity is negatively affected by anthropogenic forest disturbance. Bayesian methods help to apply theory regarding the distribution of abundance in ecological communities to applied conservation.

16.
A statistical model is developed for estimating species richness and accumulation by formulating these community-level attributes as functions of model-based estimators of species occurrence while accounting for imperfect detection of individual species. The model requires a sampling protocol wherein repeated observations are made at a collection of sample locations selected to be representative of the community. This temporal replication provides the data needed to resolve the ambiguity between species absence and nondetection when species are unobserved at sample locations. Estimates of species richness and accumulation are computed for two communities, an avian community and a butterfly community. Our model-based estimates suggest that detection failures in many bird species were attributed to low rates of occurrence, as opposed to simply low rates of detection. We estimate that the avian community contains a substantial number of uncommon species and that species richness greatly exceeds the number of species actually observed in the sample. In fact, predictions of species accumulation suggest that even doubling the number of sample locations would not have revealed all of the species in the community. In contrast, our analysis of the butterfly community suggests that many species are relatively common and that the estimated richness of species in the community is nearly equal to the number of species actually detected in the sample. Our predictions of species accumulation suggest that the number of sample locations actually used in the butterfly survey could have been cut in half and the asymptotic richness of species still would have been attained. Our approach of developing occurrence-based summaries of communities while allowing for imperfect detection of species is broadly applicable and should prove useful in the design and analysis of surveys of biodiversity.

17.

Background

Perchlorate contamination of water and food poses potential health risks to humans due to the possible interference of perchlorate with iodide uptake into the thyroid gland. Perchlorate has been found in food and in drinking, surface, or swimming pool waters in many countries, including the United States, Canada, France, Germany, and Switzerland, with ion chromatography (IC) being the preferred analytical method. The standardization of a robust ion chromatographic method is therefore of high interest for public health and safety. This article summarizes the experiments and results obtained from analyzing untreated samples, considering the sample's electrical conductance as guidance for direct sample injection as described in EPA 314.0.

Results

The suitability of ion chromatography with suppressed conductivity detection was tested for water samples in order to check the influence of matrix effects on the perchlorate signal of untreated samples. A sample injection volume of 750 μL was applied to the selected 2 mm IC column. The IC determination of perchlorate at low µg/L levels is challenged by the presence of high loads of matrix ions (e.g., chloride, nitrate, carbonate, and sulfate at 100 mg/L and above). Perchlorate recovery is impaired with increasing matrix ion concentrations, and its chromatographic peak is asymmetric, particularly at low perchlorate concentrations. The identification of the individual maximum concentration of interfering anions like chloride, nitrate, and sulfate that influence perchlorate recovery helps to reduce the number of sample preparation steps or avoid an obligatory measurement of the electrical conductivity of the sample. Within the scope of this study, samples containing less than 125 mg/L of any of these anions did not need sample preparation.

Conclusion

The identification of the maximum concentration of interfering anions like chloride, nitrate, and sulfate influencing perchlorate recovery provides a simplified alternative to the EPA 314.0 method. This approach reduces unnecessary sample preparation steps while allowing a reliable prediction of possible interferences and maintaining result quality. This study was performed to support the development of a corresponding international standard, which is being established by the International Organization for Standardization (ISO). The results of the study are also intended to serve as guidance for interested laboratories to optimize the analytical workflow for trace perchlorate determination.

18.
Suppose fish are to be sampled from a stream. A fisheries biologist might ask one of the following three questions: ‘How many fish do I need to catch in order to see all of the species?’, ‘How many fish do I need to catch in order to see all species whose relative frequency is more than 5%?’, or ‘How many fish do I need to catch in order to see a member from each of the species A, B, and C?’. This paper offers a practical solution to such questions by setting a target sample size designed to achieve desired results with known probability. We present three sample size methods, one we call ‘exact’ and the others approximate. Each method is derived under assumed multinomial sampling, and requires (at least approximate) independence of draws and (usually) a large population. The minimum information needed to compute one of the approximate methods is the estimated relative frequency of the rarest species of interest. Total number of species is not needed. Choice of a sample size method depends largely on available computer resources. One approximation (called the ‘Monte Carlo approximation’) gets within ±6 units of exact sample size, but usually requires 20–30 minutes of computer time to compute. The second approximation (called the ‘ratio approximation’) can be computed manually and has relative error under 5% when all species are desired, but can be as much as 50% or more too high when exact sample size is small. Statistically, this problem is an application of the ‘sequential occupancy problem’. Three examples are given which illustrate the calculations so that a reader not interested in technical details can apply our results.
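In the spirit of the 'Monte Carlo approximation' (the details below are assumptions, not the authors' algorithm), one can search for the smallest multinomial sample size that reveals all species of interest with a target probability:

```python
# Hedged sketch: Monte Carlo search for the smallest sample size n under
# multinomial sampling such that all species of interest are seen with
# probability at least 'target'. Frequencies and search scheme are illustrative.
import numpy as np


def prob_see_all(n, freqs, interest, n_sim=5000, seed=0):
    rng = np.random.default_rng(seed)
    counts = rng.multinomial(n, freqs, size=n_sim)
    return np.mean((counts[:, interest] > 0).all(axis=1))


def sample_size(freqs, interest, target=0.95):
    n = 1
    while prob_see_all(n, freqs, interest) < target:
        n += max(1, n // 5)          # coarse upward search; refine if needed
    return n


# catch composition: species A 0.50, B 0.30, C 0.05, all other species 0.15;
# roughly 60 draws are needed to see A, B and C with probability 0.95
# (up to the coarse search and Monte Carlo noise).
print(sample_size(np.array([0.50, 0.30, 0.05, 0.15]), interest=[0, 1, 2]))
```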

19.
We present an experiment designed to investigate the presence and nature of ordering effects within repeat-response stated preference (SP) studies. Our experiment takes the form of a large sample, full-factorial, discrete choice SP exercise investigating preferences for tap water quality improvements. Our study simultaneously investigates a variety of different forms of position-dependent and precedent-dependent ordering effect in preferences for attributes and options and in response randomness. We also examine whether advanced disclosure of the choice tasks impacts on the probability of exhibiting ordering effects of those different types. We analyze our data both non-parametrically and parametrically and find robust evidence for ordering effects. We also find that the patterns of order effect in respondents' preferences are significantly changed, but not eradicated, by the advanced disclosure of choice tasks, a finding that offers insights into the choice behaviors underpinning order effects.

20.
A standardless X-ray fluorescence spectrometric method for determining heavy metals in soil
Pressed powder pellets were used for sample preparation, and a standardless X-ray fluorescence spectrometric method was used to determine 15 heavy-metal elements in soil: Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Ga, As, Cd, Sn, Sb, Pb, and Hg. Factors affecting measurement accuracy, such as the sample preparation method and the element measurement conditions, were investigated. The results show that the method requires neither digestion of the solid samples nor the preparation of standard reference pellets; it is rapid, simple, efficient, and non-destructive, and its detection limits, accuracy, and precision essentially meet the requirements for rapid screening of toxic and hazardous heavy metals in soil.
