Similar Literature
20 similar records retrieved.
1.
Researchers are increasingly turning to network theory to understand the social nature of animal populations. We present a computational framework that is the first step in a series of works that will allow us to develop a quantitative methodology of social network sampling to aid ecologists in their social network data collection. To develop our methodology, we need to be able to generate networks from which to sample. Ideally, we need to perform a systematic study of sampling protocols on different known network structures, as network structure might affect the robustness of any particular sampling methodology. Thus, we present a computational tool for generating network structures that have user-defined distributions for network properties and for key measures of interest to ecologists. The user defines the values of these measures and the tool will generate appropriate network randomizations with those properties. This tool will be used as a framework for developing a sampling methodology, although we do not present a full methodology here. We describe the method used by the tool, demonstrate its effectiveness, and discuss how the tool can now be utilized. We provide a proof-of-concept example (using the assortativity measure) of how such networks can be used, along with a simulated egocentric sampling regime, to test the level of equivalence of the sampled network to the actual network. This contribution is part of the special issue “Social Networks: new perspectives” (Guest Editors: J. Krause, D. Lusseau and R. James).  相似文献   
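A small sketch of the core idea described above, not the authors' tool: start from a random graph and accept degree-preserving rewiring moves only when they bring a user-chosen measure, here degree assortativity, closer to a target value. networkx is assumed, and the target, tolerance and graph size are arbitrary.

```python
# Sketch of generating a network with a user-specified value of a measure of interest
# (degree assortativity here), by accepting only rewires that approach the target.
# Illustration of the idea only, not the authors' tool; parameters are arbitrary.
import networkx as nx

def generate_with_assortativity(n=100, p=0.06, target=0.3, tol=0.02,
                                max_steps=20000, seed=1):
    G = nx.gnp_random_graph(n, p, seed=seed)
    current = nx.degree_assortativity_coefficient(G)
    for _ in range(max_steps):
        if abs(current - target) < tol:
            break
        H = G.copy()
        nx.double_edge_swap(H, nswap=1, max_tries=100)      # degree-preserving rewire
        candidate = nx.degree_assortativity_coefficient(H)
        if abs(candidate - target) < abs(current - target):  # greedy acceptance
            G, current = H, candidate
    return G, current

G, r = generate_with_assortativity()
print(f"degree assortativity: {r:.3f}")
```

A simulated egocentric sampling regime, as in the proof-of-concept example, could then be run on `G` and the sampled assortativity compared with `r`.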

2.
Ecological Modelling, 2005, 181(4): 493–508
Neural networks (NN) rely on the inner structure of available data sets rather than on an understanding of the modeled processes linking inputs and outputs. They have therefore been regarded as highly empirical models with limited ability to extrapolate to situations outside the range of the training and validation data sets. In this study, the generalization ability of neural networks in predicting rice tillering dynamics was tested and several techniques for improving that generalization ability were compared. We compared the performance of cross-validated neural networks with independently validated neural networks and found that neural networks were able to extrapolate and predict tillering dynamics provided the data lay within the range of inputs of the training set. An inadequate training set resulted in overfitting of the available data and in networks that did not generalize. The training set size required for a neural network to generalize and predict rice tillering dynamics was at least nine training patterns per network weight. When a large number of variables are included in the input vector but training data are scarce, we strongly recommend reducing the dimension of the input vector with principal component analysis (PCA), correspondence analysis (CA) or similar techniques, thereby decreasing the number of weights to be fitted and improving the generalization ability of the NN. If the amount of training data is still insufficient after the input dimension has been reduced, regularization techniques such as early stopping, jittering and, especially, embedding results estimated by a theoretical model into the training set should be used to improve generalization. The generalization of neural networks raises a wide spectrum of problems, and the proposed approaches are not confined to modelling rice tillering dynamics but can be applied to other agricultural and ecological systems.
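A minimal sketch, assuming scikit-learn and synthetic data, of two of the techniques discussed above: reducing the input dimension with PCA before training, and early stopping as a regularizer. The layer sizes and the patterns-per-weight check are illustrative, not the study's configuration.

```python
# Sketch: PCA to shrink the input vector (fewer weights), plus early stopping.
# Data, layer sizes and the 9-patterns-per-weight check are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                               # hypothetical predictors
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.1, size=300)   # hypothetical tillering response

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=5),                    # reduced input vector -> fewer weights
    MLPRegressor(hidden_layer_sizes=(8,),
                 early_stopping=True,       # hold out part of the training set internally
                 validation_fraction=0.2,
                 max_iter=5000, random_state=0),
)
model.fit(X, y)

# Rough rule of thumb from the abstract: about nine training patterns per weight.
n_weights = 5 * 8 + 8 + 8 * 1 + 1           # weights + biases of a 5-8-1 network
print("patterns per weight:", len(y) / n_weights)
```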

3.
Ecological Modelling, 2003, 159(2–3): 179–201
An artificial neural network (ANN), a data driven modelling approach, is proposed to predict the algal bloom dynamics of the coastal waters of Hong Kong. The commonly used back-propagation learning algorithm is employed for training the ANN. The modeling is based on (a) comprehensive biweekly water quality data at Tolo Harbour (1982–2000); and (b) 4-year set of weekly phytoplankton abundance data at Lamma Island (1996–2000). Algal biomass is represented as chlorophyll-a and cell concentration of Skeletonema at the two locations, respectively. Analysis of a large number of scenarios shows that the best agreement with observations is obtained by using merely the time-lagged algal dynamics as the network input. In contrast to previous findings with more complicated neural networks of algal blooms in freshwater systems, the present work suggests the algal concentration in the eutrophic sub-tropical coastal water is mainly dependent on the antecedent algal concentrations in the previous 1–2 weeks. This finding is also supported by an interpretation of the neural networks’ weights. Through a systematic analysis of network performance, it is shown that previous reports of predictability of algal dynamics by ANN are erroneous in that ‘future data’ have been used to drive the network prediction. In addition, a novel real time forecast of coastal algal blooms based on weekly data at Lamma is presented. Our study shows that an ANN model with a small number of input variables is able to capture trends of algal dynamics, but data with a minimum sampling interval of 1 week is necessary. However, the sufficiency of the weekly sampling for real time predictions using ANN models needs to be further evaluated against longer weekly data sets as they become available.  相似文献   
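An illustrative sketch, not the paper's model, of the input choice the abstract reports worked best: a feed-forward ANN driven only by time-lagged algal biomass from the previous one to two weeks. The chlorophyll-a series below is synthetic and scikit-learn is assumed.

```python
# Sketch of an autoregressive ANN for algal biomass with a 1-2 week memory.
# The weekly chlorophyll-a series is synthetic, not the Tolo Harbour or Lamma data.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
t = np.arange(300)
chl = 5 + 3 * np.sin(2 * np.pi * t / 52) + rng.normal(scale=0.5, size=t.size)  # weekly chl-a

lags = 2
X = np.column_stack([chl[i:len(chl) - lags + i] for i in range(lags)])  # [chl(t-2), chl(t-1)]
y = chl[lags:]                                                          # chl(t)

split = 250
ann = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
ann.fit(X[:split], y[:split])

# One-week-ahead forecasts on held-out data: only past observations are used,
# avoiding the 'future data' leakage the abstract warns about.
print("test R^2:", ann.score(X[split:], y[split:]))
```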

4.
Infectious processes in a social group are driven by a network of contacts that is generally structured by the organization arising from behavioral and spatial heterogeneities within the group. Although theoretical models of transmission dynamics have placed an overwhelming emphasis on the importance of understanding the network structure of a social group, empirical data on such contact structures are rare. In this paper, I analyze the network structure, and the correlated transmission dynamics, within a honeybee colony as determined by food-transfer interactions, together with the changes produced in the network by an experimental manipulation. The study demonstrates that widespread transmission in the colony is correlated with a lower clustering coefficient and higher robustness of the social network. I also show that the social network in the colony is shaped by the spatial distribution of the various age classes, and that the resulting organizational structure provides a degree of immunity to young individuals. The results of this study demonstrate how, using the honeybee colony as a model system, concepts from network theory can be combined with those from behavioral ecology to gain a better understanding of social transmission processes, especially those related to disease dynamics.

5.
The water supply network (WSN) is a critical element of civil infrastructure. Because of its operational complexity and large number of components, not all parts of the system can be assessed simply. Earthquakes are the most serious natural hazard for a WSN, and seismic risk assessment is essential to identify its vulnerability at different levels of damage and to ensure system safety. In this paper, using a WSN located in the airport area of Tianjin in northern China as a case study, a quantitative vulnerability assessment method was developed to assess the damage that water supply pipelines would suffer in an earthquake; the finite element software ABAQUS and fuzzy mathematical theory were adopted to construct the assessment method. ABAQUS was applied to simulate seismic damage to pipe segments and components of the WSN. Membership functions based on fuzzy theory were established to calculate the membership degrees of the components in the system, and fuzzy cluster analysis was used to distinguish the relative importance of pipe segments and components so that the vulnerability of the whole system could be considered. Finally, the vulnerability was quantified through these functions. The proposed methodology assesses the performance of a WSN from pipe vulnerabilities that are simulated by the finite element model and evaluated with the fuzzy method using damage data. In this study, a complete seismic vulnerability assessment procedure for a WSN was built, and these analyses are expected to provide the information needed for earthquake disaster mitigation planning.
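A minimal sketch of the fuzzy-membership step, not the paper's actual functions: triangular membership functions map a simulated damage measure for a pipe onto overlapping damage grades, and the grade with the highest membership is reported. The damage index, grade breakpoints and example value are all hypothetical.

```python
# Sketch of fuzzy membership for pipe damage grades; all numbers are hypothetical.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical damage grades defined over a normalized damage index in [0, 1].
grades = {
    "slight":   lambda x: tri(x, -0.01, 0.0, 0.35),
    "moderate": lambda x: tri(x, 0.15, 0.45, 0.75),
    "severe":   lambda x: tri(x, 0.55, 1.0, 1.01),
}

def classify(damage_index):
    memberships = {g: float(f(damage_index)) for g, f in grades.items()}
    return max(memberships, key=memberships.get), memberships

# e.g. a damage index of 0.5, as might be derived from an ABAQUS pipe-strain result
print(classify(0.5))
```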

6.
Mixed-species associations are a widespread phenomenon, comprising interacting heterospecific individuals that gain anti-predator, foraging or social benefits. Avian flocks have traditionally been classified as monolithic species units, with species-wide functional roles such as nuclear, active, passive or follower. It has also been suggested that flocks are mutualistic interactions in which the niches of participating species converge. However, the species-level perspective has limited previous studies, because both the interactions and the benefits occur at the level of the individual. Social network analysis provides a set of tools for quantitative assessment of individual participation. We used mark-resighting methods to build networks of nodes (colour-marked individuals) and edges (their interactions within flocks). We found that variation in flock participation across individuals within species, especially in the buff-rumped thornbill, encompassed virtually the entire range of variation across all individuals in the entire set of species. For example, female, but not male, buff-rumped thornbills had high network betweenness, indicating that they interact with multiple flocks, likely as part of a female-specific dispersal strategy. Finally, we provide new evidence that mixed-species flocking is mutualistic, by quantifying an active shift in individual foraging niches towards those of their individual associates, with implications for the trade-off between the costs and benefits that individuals derive from participating in mixed-species flocks. This study is, to our knowledge, the first instance of a heterospecific social network built on pairwise interactions.

7.
Effective management of reservoir water resources demands a good command of ecological processes in the waterbody. In this work the three-dimensional finite element hydrodynamic model RMA10 was coupled to a eutrophication model. The models were used together with a methodology for load estimation to foster the understanding of such processes in the largest reservoir in Western Europe, the Alqueva. Nutrient enrichment and eutrophication are water quality concerns in this man-made impoundment. A total phosphorus and nitrogen load quantification methodology was developed to estimate the inputs to the reservoir, using point and non-point source data. Field data (including water temperature, wind, water elevation, chlorophyll-a, nutrient concentration and dissolved oxygen) and estimated loads were used as forcing for the simulations. The analysis of the modeling results shows that the spatial and temporal distributions of water temperature, chlorophyll-a, dissolved oxygen and nutrients are consistent with measured in situ data. The modeling results allowed the identification of likely key factors impacting the water quality of the Alqueva reservoir. It is shown that the particular geomorphological and hydrological characteristics of the reservoir, together with local climate features, are responsible for the existence of distinct ecological regions within the reservoir.

8.
The widespread use of ecological network models (e.g., Ecopath, Econetwrk, and related energy budget models) has been laudable for several reasons, chief of which is providing an easy-to-use set of modeling tools that can present an ecosystem context for improved understanding and management of living marine resources (LMR). Yet the ease-of-use of these models has led to two challenges. First, the veritable explosion of the use and application of these network models has resulted in recognition that the content and use of such models has spanned a range of quality. Second, as these models and their application have become more widespread, they are increasingly being used in a LMR management context. Thus review panels and other evaluators of these models would benefit from a set of rigorous and standard criteria from which the basis for all network models and related applications for any given system (i.e., the initial, static energy budget) can be evaluated. To this end, as one suggestion for improving network models in general, here I propose a series of pre-balance (PREBAL) diagnostics. These PREBAL diagnostics can be done, now, in simple spreadsheets before any balancing or tuning is executed. Examples of these PREBAL diagnostics include biomasses, biomass ratios, vital rates, vital rate ratios, total production, and total removals (and slopes thereof) across the taxa and trophic levels in any given energy budget. I assert that there are some general ecological and fishery principles that can be used in conjunction with PREBAL diagnostics to identify issues of model structure and data quality before balancing and dynamic applications are executed. I humbly present this PREBAL information as a simple yet general approach that could be easily implemented, could be considered for further incorporation into these model packages, and as such would ultimately result in a straightforward way to evaluate (and perhaps identify areas for improving) initial conditions in food web modeling efforts.  相似文献   
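A sketch of what a PREBAL-style spreadsheet check might look like, assuming invented group names, values and an informal threshold rather than the paper's actual diagnostics: biomass should generally decline with trophic level, predator-to-prey biomass ratios should stay sensible, and vital rates should fall up the food web.

```python
# Sketch of PREBAL-style pre-balance screens (not the paper's code or thresholds):
# inspect biomass and vital-rate patterns across trophic levels before balancing.
import numpy as np

groups        = ["phytoplankton", "zooplankton", "forage fish", "piscivores", "seabirds"]
trophic_level = np.array([1.0, 2.1, 3.2, 4.1, 4.5])
biomass       = np.array([120.0, 35.0, 8.0, 1.5, 0.05])   # t km^-2, hypothetical
pb            = np.array([160.0, 25.0, 1.2, 0.4, 0.1])    # production/biomass, yr^-1

# 1) Biomass should decline with trophic level (roughly linear on a log scale).
slope, intercept = np.polyfit(trophic_level, np.log10(biomass), 1)
print(f"log10(biomass) vs trophic level slope: {slope:.2f}")

# 2) Simple ratio screens, e.g. predator biomass should not exceed prey biomass.
for prey, pred in zip(range(len(groups) - 1), range(1, len(groups))):
    ratio = biomass[pred] / biomass[prey]
    flag = "" if ratio < 1 else "  <-- check input data"
    print(f"{groups[pred]}/{groups[prey]} biomass ratio = {ratio:.3f}{flag}")

# 3) Vital rates should also decline up the food web.
print("P/B declines with trophic level:", bool(np.all(np.diff(pb) < 0)))
```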

9.
Food webs are usually aggregated into a manageable size for their interpretation and analysis. The aggregation of food web components in trophic or other guilds is often at the choice of the modeler as there is little guidance in the literature as to what biases might be introduced by aggregation decisions. We examined the impacts of the choice of the a priori model on the subsequent estimation of missing flows using the inverse method and on the indices derived from ecological network analysis of both inverse method-derived flows and on the actual values of flows, using the fully determined Sylt-Rømø Bight food web model. We used the inverse method, with the least squares minimization goal function, to estimate ‘missing’ values in the food web flows on 14 aggregation schemes varying in number of compartments and in methods of aggregation. The resultant flows were compared to known values; the performance of the inverse method improved with increasing number of compartments and with aggregation based on both habitat and feeding habits rather than diet similarity. Comparison of network analysis indices of inverse method-derived flows with that of actual flows and the original value for the unaggregated food web showed that the use of both the inverse method and the aggregation scheme affected indices derived from ecological network analysis. The inverse method tended to underestimate the size and complexity of food webs, while an aggregation scheme explained as much variability in some network indices as the difference between inverse-derived and actual flows. However, topological network indices tended to be most robust to both the method of determining flows and to the inverse method. These results suggest that a goal function other than minimization of flows should be used when applying the inverse method to food web models. Comparison of food web models should be done with extreme care when different methodologies are used to estimate unknown flows and to aggregate system components. However, we propose that indices such as relative ascendency and relative redundancy are most valuable for comparing ecosystem models constructed using different methodologies for determining missing flows or for aggregating system components.  相似文献   
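A schematic sketch of the inverse-method step discussed above: unknown food-web flows are estimated as the smallest non-negative flows that approximately satisfy compartment mass balance. The tiny three-compartment web, its respiration and export terms, and the damping weight are invented; the study itself used the Sylt-Rømø Bight model and a least-squares goal function within a dedicated inverse framework.

```python
# Sketch: estimate unknown flows by damped, bounded least squares on mass balance.
# The toy web and all numbers are invented for illustration.
import numpy as np
from scipy.optimize import lsq_linear

# Unknown flows: x = [P->C, P->D, C->D, D->C]  (arbitrary carbon units)
A = np.array([[1, 1,  0,  0],    # producer balance:  P->C + P->D        = GPP - resp_P
              [1, 0, -1,  1],    # consumer balance:  P->C - C->D + D->C = resp_C
              [0, 1,  1, -1]],   # detritus balance:  P->D + C->D - D->C = resp_D + export
             dtype=float)
b = np.array([70.0, 20.0, 50.0])

# Damped least squares approximates a "minimize flows subject to balance" goal function.
lam = 1e-3
A_aug = np.vstack([A, lam * np.eye(4)])
b_aug = np.concatenate([b, np.zeros(4)])
res = lsq_linear(A_aug, b_aug, bounds=(0, np.inf))

print("estimated flows:", np.round(res.x, 2))
print("mass-balance residuals:", np.round(A @ res.x - b, 4))
```

The resulting flow matrix could then be fed to ecological network analysis and its indices compared with those from the fully determined web, as the study does.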

10.
Habitat loss can trigger migration network collapse by isolating migratory bird breeding grounds from nonbreeding grounds. Theoretically, habitat loss can have vastly different impacts depending on the site's importance within the migratory corridor. However, migration-network connectivity and the impacts of site loss are not completely understood. We used GPS tracking data on 4 bird species in the Asian flyways to construct migration networks and proposed a framework for assessing network connectivity for migratory species. We used a node-removal process to identify stopover sites with the highest impact on connectivity. In general, migration networks with fewer stopover sites were more vulnerable to habitat loss. Node removal in order from the highest to lowest degree of habitat loss yielded an increase of network resistance similar to random removal. In contrast, resistance increased more rapidly when removing nodes in order from the highest to lowest betweenness value (quantified by the number of shortest paths passing through the specific node). We quantified the risk of migration network collapse and identified crucial sites by first selecting sites with large contributions to network connectivity and then identifying which of those sites were likely to be removed from the network (i.e., sites with habitat loss). Among these crucial sites, 42% were not designated as protected areas. Setting priorities for site protection should account for a site's position in the migration network, rather than only site-specific characteristics. Our framework for assessing migration-network connectivity enables site prioritization for conservation of migratory species.  相似文献   
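An illustrative sketch of the node-removal analysis on a synthetic migration network, assuming networkx: stopover sites are removed in decreasing betweenness order and a simple stand-in connectivity measure (the fraction of breeding-to-nonbreeding pairs still connected) is tracked; the paper's network resistance metric may be defined differently.

```python
# Sketch of betweenness-ordered node removal on a tiny synthetic migration network.
import networkx as nx

G = nx.DiGraph()
breeding, nonbreeding = ["B1", "B2"], ["W1", "W2"]
stopovers = ["S1", "S2", "S3", "S4"]
G.add_edges_from([("B1", "S1"), ("B1", "S2"), ("B2", "S2"), ("B2", "S3"),
                  ("S1", "S4"), ("S2", "S4"), ("S3", "S4"),
                  ("S4", "W1"), ("S4", "W2"), ("S3", "W2")])

def connectivity(graph):
    """Fraction of breeding -> nonbreeding pairs still connected by some path."""
    pairs = [(b, w) for b in breeding for w in nonbreeding]
    ok = sum(1 for b, w in pairs
             if b in graph and w in graph and nx.has_path(graph, b, w))
    return ok / len(pairs)

# Remove stopover sites from highest to lowest betweenness and watch connectivity fall.
bt = nx.betweenness_centrality(G)
for node in sorted(stopovers, key=lambda n: bt[n], reverse=True):
    G.remove_node(node)
    print(f"removed {node} (betweenness {bt[node]:.2f}) -> connectivity {connectivity(G):.2f}")
```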

11.
Urban metabolism research faces difficulties defining ecological trophic levels and analyzing relationships among the metabolic system's energy components. Here, we propose a new way to perform such research. By integrating throughflow analysis with ecological network utility analysis, we used network flows to analyze the metabolic system's network structure and the ecological relationships within the system. We developed an ecological network model for the system, and used four Chinese cities as examples of how this approach provides insights into the flows within the system at both high and low levels of detail. Using the weight distribution in the network flow matrix, we determined the structure of the urban energy metabolic system and the trophic levels; using the sign distribution in the network utility matrix, we determined the relationships between each pair of the system's compartments and their degrees of mutualism. The model uses compartments based on 17 sectors (energy exploitation; coal-fired power; heat supply; washed coal; coking; oil refinery; gas generation; coal products; agricultural; industrial; construction; communication, storage, and postal service; wholesale, retail, accommodation, and catering; household; other consuming; recovery; and energy stocks). Analyzing the structure and functioning of the urban energy metabolic system revealed ways to optimize its structure by adjusting the relationships among compartments, thereby demonstrating how ecological network analysis can be used in future urban system research.  相似文献   
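A compact sketch of network utility analysis as it is commonly formulated in the ecological-network literature, not necessarily this paper's exact equations: a direct utility matrix built from net inter-compartment flows scaled by throughflow, an integral utility matrix from a matrix inverse, and the sign pattern of paired entries read off as the ecological relationship. The three-sector flow matrix is invented and much smaller than the 17-sector model above; sign conventions vary between studies.

```python
# Sketch of utility analysis: D[i, j] = (f[i, j] - f[j, i]) / T[i], U = (I - D)^-1,
# and signs of (U[i, j], U[j, i]) classify the pairwise relationship. Flows invented.
import numpy as np

sectors = ["energy exploitation", "industry", "household"]
# f[i, j] = flow from sector j to sector i (hypothetical energy units)
f = np.array([[0.0,  5.0,  1.0],
              [60.0, 0.0,  2.0],
              [20.0, 10.0, 0.0]])

T = f.sum(axis=1) + np.array([80.0, 0.0, 0.0])   # throughflow incl. a boundary input
D = (f - f.T) / T[:, None]
U = np.linalg.inv(np.eye(3) - D)                  # integral (direct + indirect) utility

signs = np.sign(np.round(U, 10))
labels = {(1, 1): "mutualism", (1, -1): "exploitation",
          (-1, 1): "exploited", (-1, -1): "competition"}
for i in range(3):
    for j in range(i + 1, 3):
        rel = labels.get((signs[i, j], signs[j, i]), "neutral")
        print(f"{sectors[i]} <-> {sectors[j]}: {rel}")
```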

12.
Ecological Modelling, 2007, 208(1): 25–40
A shift away from the basic philosophy of nature developed by Francis Bacon, René Descartes, and Isaac Newton has been suggested but for the most part rejected within mainstream science. It implies the need to view nature as a deeply organic and connected system of relationships that is not necessarily or readily submissive to reductive thinking and analysis. Ecosystem design within the construct of a field called ecological engineering poses fundamental questions about the philosophy of nature on which our current scientific paradigm is predominantly based. In an effort to foster the development of rigorous, quantitative methods for gaining insight into complex ecosystem phenomena, we propose systems and engineering ecology, an integrated science comprising principles from environ theory, ascendency theory, exergy theory, emergy theory, ecological network analysis and ecological modelling, synthesized through the formal agency of systems science. We contend that ecological engineering will be limited in its robustness without the development of rigorous systems-based sciences that are quantitative and that incorporate the complex, emergent properties of ecosystems. We justify the proposed framework on the philosophical paradox of transferring aspects of traditional engineering design into ecological engineering and on the four causes of Aristotle.

13.
Forecasting extinction risk with nonstationary matrix models.
Matrix population growth models are standard tools for forecasting population change and for managing rare species, but they are less useful for predicting extinction risk in the face of changing environmental conditions. Deterministic models provide point estimates of lambda, the finite rate of increase, as well as measures of matrix sensitivity and elasticity. Stationary matrix models can be used to estimate extinction risk in a variable environment, but they assume that the matrix elements are randomly sampled from a stationary (i.e., non-changing) distribution. Here we outline a method for using nonstationary matrix models to construct realistic forecasts of population fluctuation in changing environments. Our method requires three pieces of data: (1) field estimates of transition matrix elements, (2) experimental data on the demographic responses of populations to altered environmental conditions, and (3) forecasting data on environmental drivers. These three pieces of data are combined to generate a series of sequential transition matrices that emulate a pattern of long-term change in environmental drivers. Realistic estimates of population persistence and extinction risk can be derived from stochastic permutations of such a model. We illustrate the steps of this analysis with data from two populations of Sarracenia purpurea growing in northern New England. Sarracenia purpurea is a perennial carnivorous plant that is potentially at risk of local extinction because of increased nitrogen deposition. Long-term monitoring records or models of environmental change can be used to generate time series of driver variables under different scenarios of changing environments. Both manipulative and natural experiments can be used to construct a linking function that describes how matrix parameters change as a function of the environmental driver. This synthetic modeling approach provides quantitative estimates of extinction probability that have an explicit mechanistic basis.  相似文献   
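A sketch of a nonstationary matrix projection in the spirit of the approach above: a linking function makes one vital rate depend on an environmental driver, a time series of the driver generates a sequence of transition matrices, and Monte Carlo projections yield a quasi-extinction probability. The stage structure, parameter values, linking function and threshold are invented, not the Sarracenia purpurea estimates.

```python
# Sketch of a nonstationary matrix projection with a driver-dependent vital rate.
# All life-cycle parameters, the linking function and the threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

def transition_matrix(n_dep):
    """Stage-structured matrix whose adult survival declines with N deposition."""
    adult_surv = 0.90 / (1.0 + 0.4 * n_dep)       # hypothetical linking function
    return np.array([[0.00, 0.00, 6.00],          # fecundity (seeds per adult)
                     [0.05, 0.30, 0.00],          # seed -> juvenile, juvenile stasis
                     [0.00, 0.45, adult_surv]])   # juvenile -> adult, adult survival

years = 50
driver = np.linspace(0.5, 3.0, years)             # hypothetical rising N-deposition series

def extinct(threshold=1.0, n0=(200.0, 50.0, 20.0)):
    n = np.array(n0)
    for t in range(years):
        A = transition_matrix(driver[t] * rng.lognormal(sigma=0.1))  # stochastic driver
        n = A @ n
        if n.sum() < threshold:
            return True
    return False

runs = 1000
p_ext = sum(extinct() for _ in range(runs)) / runs
print(f"quasi-extinction probability over {years} yr: {p_ext:.2f}")
```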

14.
In this paper, we report an application of neural networks to simulate daily nitrate-nitrogen and suspended sediment fluxes from a small 7.1 km² agricultural catchment (Melarchez), 70 km east of Paris, France. Nitrate-nitrogen and sediment losses are just two of the possible consequences of soil erosion and of biochemical applications associated with human activities such as intensive agriculture. Stacked multilayer perceptron models (MLPs) like the ones explored here are based on commonly available inputs and yet are reasonably accurate considering their simplicity and ease of implementation. Note that the simulation does not rely on water quality flux observations at previous time steps as model inputs, which would be appropriate, for example, for predicting the water chemistry at a drinking water plant a few time steps ahead. The water quality fluxes are instead mapped strictly to historical mean flux values and to hydro-climatic variables such as stream flow, rainfall and a soil moisture index (12 model input candidates in total), allowing the models to be used even when no flux observations are available. Self-organizing feature maps, based on the network structure established by Kohonen, were employed first to produce the training and testing data sets, with the intent of producing statistically close subsets so that any difference in model performance between validation and testing can be attributed to the model and not to the data subsets. The stacked MLPs reached different levels of performance when simulating the nitrate-nitrogen flux and the suspended sediment flux. In the first case, 2-input stacked MLP nitrate-nitrogen simulations, based on the same-day stream flow and on the 80-cm soil moisture index, achieved a performance of almost 90% according to the efficiency index. On the other hand, the performance of 3-input stacked MLPs (same-day stream flow, same-day historical flux, and same-day stream flow increment) reached a little more than 75% by the same criterion. The results presented here are already promising and should encourage water resources managers to implement simple models whenever appropriate.
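The "efficiency index" reported above is commonly the Nash-Sutcliffe efficiency; the sketch below, on synthetic data rather than the Melarchez series, shows that criterion alongside a simple 2-input MLP driven by same-day stream flow and a soil moisture index, in the spirit of the 2-input configuration described but without the stacking.

```python
# Sketch with synthetic data: a 2-input MLP for a daily nitrate flux, scored with
# the Nash-Sutcliffe efficiency. Not the paper's stacked architecture or its data.
import numpy as np
from sklearn.neural_network import MLPRegressor

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(7)
n = 730
flow = rng.gamma(2.0, 1.5, n)                       # synthetic daily stream flow
smi = np.clip(rng.normal(0.5, 0.15, n), 0, 1)       # synthetic 80-cm soil moisture index
flux = 2.0 * flow * smi + rng.normal(0, 0.3, n)     # synthetic nitrate-N flux

X = np.column_stack([flow, smi])
split = 550
mlp = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
mlp.fit(X[:split], flux[:split])

print("Nash-Sutcliffe efficiency (test):",
      round(nash_sutcliffe(flux[split:], mlp.predict(X[split:])), 3))
```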

15.
Efficient and reliable unexploded ordnance (UXO) site characterization is needed for decisions regarding future land use. Several types of data are available at UXO sites, and geophysical signal maps are one of the most valuable sources of information. Incorporating such information into site characterization requires a flexible and reliable methodology. Geostatistics allows one to account for exhaustive secondary information (i.e., information known at every location within the field) in many different ways. Kriging and logistic regression were combined to map the probability of occurrence of at least one geophysical anomaly of interest, such as UXO, from a limited number of indicator data. Logistic regression is used to derive the trend from a geophysical signal map, and kriged residuals are added to the trend to estimate the probability of the presence of UXO at unsampled locations (simple kriging with varying local means, or SKlm). Each location is flagged for further remedial action if the estimated probability is greater than a given threshold. The technique is illustrated using a hypothetical UXO site generated by a UXO simulator and a corresponding geophysical signal map. Indicator data are collected along two transects located within the site. Classification performance is then assessed by computing the proportions of correct classifications, false positives and false negatives, and the kappa statistic. Two common approaches, one of which does not take any secondary information into account (ordinary indicator kriging) and a variant of cokriging (collocated cokriging), were used for comparison. The results indicate that accounting for exhaustive secondary information improves the overall characterization of UXO sites if an appropriate methodology, SKlm in this case, is used.
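A schematic sketch of the SKlm idea on invented one-dimensional data: a logistic regression of the UXO indicator on the exhaustive geophysical signal supplies the locally varying mean, and simple kriging of the indicator residuals, with an assumed exponential covariance and hand-picked parameters, is added back to give the probability at unsampled locations.

```python
# Schematic SKlm (simple kriging with varying local means) on invented 1-D data.
# Covariance model and its parameters are assumed, not fitted to a variogram.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Exhaustive secondary information: geophysical signal known everywhere on a transect.
grid = np.linspace(0.0, 100.0, 201)
signal = np.sin(grid / 15.0) + 0.1 * rng.normal(size=grid.size)

# Sparse primary data: UXO indicator (0/1) observed at a few sampled locations.
x_obs = rng.choice(grid, size=25, replace=False)
sig_obs = np.interp(x_obs, grid, signal)
ind_obs = (rng.random(25) < 1 / (1 + np.exp(-3 * sig_obs))).astype(int)

# 1) Locally varying mean from logistic regression on the signal.
lr = LogisticRegression().fit(sig_obs.reshape(-1, 1), ind_obs)
trend_obs = lr.predict_proba(sig_obs.reshape(-1, 1))[:, 1]
trend_grid = lr.predict_proba(signal.reshape(-1, 1))[:, 1]

# 2) Simple kriging of the residuals with an assumed exponential covariance.
def cov(h, sill=0.15, rng_a=20.0):
    return sill * np.exp(-3.0 * np.abs(h) / rng_a)

resid = ind_obs - trend_obs
C = cov(x_obs[:, None] - x_obs[None, :]) + 1e-6 * np.eye(x_obs.size)
c0 = cov(grid[:, None] - x_obs[None, :])
weights = np.linalg.solve(C, c0.T)            # one weight vector per grid node
prob = np.clip(trend_grid + weights.T @ resid, 0.0, 1.0)

print("flagged for remediation (p > 0.5):", int((prob > 0.5).sum()), "of", grid.size, "nodes")
```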

16.
Longitudinal behavioral data generally contain a significant amount of structure. In this work, we identify the structure inherent in daily behavior with models that can accurately analyze, predict, and cluster multimodal data from individuals and communities within the social network of a population. We represent this behavioral structure by the principal components of the complete behavioral dataset, a set of characteristic vectors we have termed eigenbehaviors. In our model, an individual's behavior over a specific day can be approximated by a weighted sum of his or her primary eigenbehaviors. When these weights are calculated halfway through a day, they can be used to predict the day's remaining behaviors with 79% accuracy for our test subjects. Additionally, we demonstrate the potential for this dimensionality reduction technique to infer community affiliations within the subjects' social network by clustering individuals into a “behavior space” spanned by a set of their aggregate eigenbehaviors. These behavior spaces make it possible to determine the behavioral similarity between both individuals and groups, enabling 96% classification accuracy of community affiliations within the population-level social network. Furthermore, the distance between individuals in the behavior space can be used as an estimate for relational ties such as friendship, suggesting strong behavioral homophily amongst the subjects. This approach capitalizes on the large amount of rich data previously captured during the Reality Mining study from mobile phones continuously logging the location, proximate phones, and communication of 100 subjects at MIT over the course of 9 months. As wearable sensors continue to generate these types of rich, longitudinal datasets, dimensionality reduction techniques such as eigenbehaviors will play an increasingly important role in behavioral research. This contribution is part of the special issue “Social Networks: new perspectives” (Guest Editors: J. Krause, D. Lusseau and R. James). An erratum to this article is available.
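A minimal sketch of the eigenbehavior idea on synthetic data, not the Reality Mining dataset: each day is a vector of hourly behavior codes, the principal components of the day matrix are the eigenbehaviors, and weights fitted to the first half of a new day are used to reconstruct, and hence predict, its second half.

```python
# Sketch of eigenbehaviors on synthetic data: days are 24-hour vectors of behavior
# codes (0=home, 1=work, 2=elsewhere); top principal components = "eigenbehaviors";
# weights fitted from the first 12 hours predict the remaining 12 hours.
import numpy as np

rng = np.random.default_rng(5)

def synth_day(workday=True):
    day = np.zeros(24)
    if workday:
        day[9:17] = 1                              # at work
        day[17:19] = 2                             # errands on the way home
    else:
        day[11:15] = 2                             # out on the weekend
    flip = rng.random(24) < 0.05                   # small amount of behavioral noise
    day[flip] = rng.integers(0, 3, flip.sum())
    return day

days = np.array([synth_day(workday=(d % 7 < 5)) for d in range(120)])

# Eigenbehaviors: principal components of the mean-centered day matrix (via SVD).
mean_day = days.mean(axis=0)
_, _, Vt = np.linalg.svd(days - mean_day, full_matrices=False)
eig_behaviors = Vt[:3]                             # keep the top 3 components

# Predict the rest of a new day from its first half.
new_day = synth_day(workday=True)
obs = slice(0, 12)
w, *_ = np.linalg.lstsq(eig_behaviors[:, obs].T, new_day[obs] - mean_day[obs], rcond=None)
reconstruction = mean_day + w @ eig_behaviors
predicted = np.rint(reconstruction[12:]).clip(0, 2)
print("afternoon/evening accuracy:", (predicted == new_day[12:]).mean())
```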

17.
Kendall WL, Conn PB, Hines JE. Ecology, 2006, 87(1): 169–177
Matrix population models that allow an animal to occupy more than one state over time are important tools for population and evolutionary ecologists. Definition of state can vary, including location for metapopulation models and breeding state for life history models. For populations whose members can be marked and subsequently reencountered, multistate mark-recapture models are available to estimate the survival and transition probabilities needed to construct population models. Multistate models have proved extremely useful in this context, but they often require a substantial amount of data and restrict estimation of transition probabilities to those areas or states subjected to formal sampling effort. At the same time, for many species, there are considerable tag recovery data provided by the public that could be modeled in order to increase precision and to extend inference to a greater number of areas or states. Here we present a statistical model for combining multistate capture-recapture data (e.g., from a breeding ground study) with multistate tag recovery data (e.g., from wintering grounds). We use this method to analyze data from a study of Canada Geese (Branta canadensis) in the Atlantic Flyway of North America. Our analysis produced marginal improvement in precision, due to relatively few recoveries, but we demonstrate how precision could be further improved with increases in the probability that a retrieved tag is reported.  相似文献   

18.
We investigate how the viability and harvestability predicted by population models are affected by details of model construction. Based on this analysis we discuss some of the pitfalls associated with the use of classical statistical techniques for resolving the uncertainties associated with modeling population dynamics. The management of the Serengeti wildebeest (Connochaetes taurinus) is used as a case study. We fitted a collection of age-structured and unstructured models to a common set of available data and compared model predictions in terms of wildebeest viability and harvest. Models that depicted demographic processes in strikingly different ways fitted the data equally well. However, upon further analysis it became clear that models that fit the data equally well could nonetheless have very different management implications. In general, model structure had a much larger effect on viability analysis (e.g., time to collapse) than on optimal harvest analysis (e.g., harvest rate that maximizes harvest). Some modeling decisions, such as including age-dependent fertility rates, did not affect management predictions, but others had a strong effect (e.g., choice of model structure). Because several suitable models of comparable complexity fitted the data equally well, traditional model selection methods based on the parsimony principle were not practical for judging the value of alternative models. Our results stress the need to implement analytical frameworks for population management that explicitly consider the uncertainty about the behavior of natural systems.  相似文献   

19.
Despite several decades of operation and the increasing importance of water quality monitoring networks, the authorities still rely on experiential insight and subjective judgment when siting water quality monitoring stations. This study proposes an integrated technique that uses a genetic algorithm (GA) and a geographic information system (GIS) to design an effective water quality monitoring network in a large river system. To develop the design scheme, planning objectives were identified for water quality monitoring networks and corresponding fitness functions were defined as linear combinations of five selection criteria that are critical for developing a monitoring system. The criteria are the representativeness of the river system, compliance with water quality standards, supervision of water use, surveillance of pollution sources and examination of water quality changes. The fitness levels were obtained through a series of calculations of the fitness functions using GIS data. A sensitivity analysis was performed for the major parameters, such as the number of generations, the population size and the probabilities of crossover and mutation, in order to obtain a good fitness level and convergence to optimum solutions. The proposed methodology was applied to the design of water quality monitoring networks in the Nakdong River system in Korea. The results showed that only 35 of the 110 stations currently in operation coincide with those in the new network design, indicating that the effectiveness of the current monitoring network should be carefully re-examined. From this study, it was concluded that the proposed methodology could be a useful decision support tool for the optimized design of water quality monitoring networks.
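An illustrative GA sketch with invented criterion scores, weights and GA settings, not the Nakdong River data: candidate networks are binary station-selection vectors, fitness is a weighted linear combination of five criterion scores with a penalty for exceeding the station budget, and standard selection, crossover and mutation search for a good design.

```python
# Toy GA for monitoring-network design; all scores, weights and settings are invented.
import numpy as np

rng = np.random.default_rng(11)
n_sites, budget = 110, 35
scores = rng.random((n_sites, 5))                 # per-site scores for the 5 criteria
weights = np.array([0.3, 0.25, 0.15, 0.15, 0.15]) # relative importance of the criteria

def fitness(mask):
    value = (scores[mask.astype(bool)] * weights).sum()
    return value - 5.0 * max(0, mask.sum() - budget)   # penalize over-budget designs

def evolve(pop_size=60, generations=200, p_cx=0.8, p_mut=0.02):
    pop = (rng.random((pop_size, n_sites)) < budget / n_sites).astype(int)
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        prob = fit - fit.min() + 1e-9
        prob /= prob.sum()
        parents = pop[rng.choice(pop_size, size=pop_size, p=prob)]  # roulette selection
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):                         # one-point crossover
            if rng.random() < p_cx:
                cut = rng.integers(1, n_sites)
                children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                            parents[i, cut:].copy())
        mutate = rng.random(children.shape) < p_mut                 # bit-flip mutation
        children[mutate] = 1 - children[mutate]
        pop = children
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

best, best_fit = evolve()
print("stations selected:", int(best.sum()), " fitness:", round(float(best_fit), 3))
```

In the study itself the criterion scores come from GIS data rather than random numbers, which is the main piece this sketch leaves out.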

20.
In order to quantify hazardous substance use in production processes, a special methodology has been designed within the context of the EcoGrade integrated environmental assessment method developed by the Öko-Institut, Institute for Applied Ecology. This methodology uses monoethylene glycol (MEG) equivalents as an indicator value for hazardous substance use. MEG equivalents permit direct, noxious-substance-focussed comparison of processes and products (Bunke 2001). The assessment is based upon the standardized risk phrases assigned to the component substances. The MEG equivalent methodology is a refinement and application of the potency factor model (Wirkfaktorenmodell) of the German Technical Rule for Hazardous Substances (Technische Regel für Gefahrstoffe, TRGS) 440 (AGS 2001). The data required for the assessment procedure are available within companies (safety data sheets) or are readily accessible publicly (hazardous substance databanks). A further benefit is that inventory analysis of hazardous substances using the method presented here makes it possible to take hazardous substance use into account in a systematic manner within life-cycle assessment (LCA) studies. The methodology has been tested for the example of residential buildings. Note: The terms ‘hazardous substance’, ‘noxious substance’ and ‘hazardous constituent’ are used in this paper in the sense of substances that have one of the hazard attributes set out in Article 3 of the German Chemicals Act (Chemikaliengesetz).  相似文献   
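The abstract does not give the arithmetic of the MEG-equivalent calculation, so the sketch below only illustrates the general shape of such an indicator: each constituent's mass is weighted by a potency factor derived from its risk phrases and expressed relative to monoethylene glycol. Substance names, potency factors and masses are invented placeholders, not EcoGrade or TRGS 440 values.

```python
# Schematic only: the general shape of a hazardous-substance indicator expressed in
# MEG equivalents. All potency factors and masses are invented placeholders; they are
# NOT the EcoGrade / TRGS 440 values, which the abstract does not list.
potency = {                        # hypothetical potency factors from risk phrases
    "monoethylene glycol": 1.0,    # reference substance by definition
    "solvent A": 25.0,
    "biocide B": 400.0,
}

inventory_kg = {"monoethylene glycol": 12.0, "solvent A": 3.5, "biocide B": 0.2}

meg_equivalents = sum(mass * potency[s] for s, mass in inventory_kg.items())
print(f"hazardous substance use: {meg_equivalents:.1f} kg MEG equivalents")
```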

