Superb fairy-wrens are cooperatively breeding birds that combine stable, socially monogamous pair bonds and high levels of paternal care with extreme levels of extra-pair mating and intense sexual competition. Our aim was to determine which testosterone correlates would prevail in a life history that combines features conventionally associated with divergent hormone profiles. Unlike the situation in other species with monogamous pair bonds and high levels of paternal care, testosterone was elevated for an unusually long period of several months. During breeding there was a broad peak in testosterone followed by a gradual decline, a profile resembling that found in polygynous and promiscuous species. Three factors correlated with testosterone: development of the sexually selected nuptial plumage, social status, and extra-group mating opportunities. Testosterone started increasing months prior to breeding, when the males later preferred as extra-group sires develop their nuptial plumage. Although these males did not have higher testosterone levels during breeding, they sustained high testosterone for much longer, which might lend reliability to this sexual signal. Dominant males in groups had higher testosterone than pair-dwelling males and subordinate helpers. This was not due to differences in age, reproductive capability or mating opportunities, but was presumably associated with the assertion of dominance. In contrast to findings in other species, male testosterone level was not correlated with whether the resident female was fertile or had dependent nestlings. However, testosterone was strongly correlated with the total number of fertile females in the population, and hence with the opportunities for extra-group mating.
The role of algal concentration in the transfer of organic contaminants in a food chain has been studied using the ubiquitous model polycyclic aromatic hydrocarbon benzo[a]pyrene (BaP) as the contaminant, Isochrysis galbana as the phytoplankton food source, and the common mussel (Mytilus edulis) as the primary consumer. The effect of algal concentration on BaP uptake by M. edulis was determined by feeding M. edulis daily with I. galbana that had previously been kept in the presence of BaP for 24 h. Four combinations of algal and BaP concentrations were used, giving final exposure concentrations of 30,000 or 150,000 algal cells ml⁻¹ in combination with either 2 or 50 µg BaP l⁻¹. BaP concentrations were determined fluorometrically in rest tissues (excluding digestive glands) and in digestive gland microsomal fractions of M. edulis after 1, 7 and 15 days of exposure, and also in isolated algae. Potentially toxic effects of BaP on M. edulis were examined in terms of blood cell lysosomal membrane damage (neutral red dye retention assay) and induction of digestive gland microsomal mixed-function oxygenase (MFO) parameters [BaP hydroxylase (BPH) and NADPH-cytochrome c (P450) reductase activities]. BaP bioaccumulation in rest tissues (and to a lesser extent in digestive gland microsomes) of M. edulis increased with increasing BaP and algal exposure concentrations and over time, producing a maximal bioconcentration factor of 250,000 in rest tissues after 15 days of exposure to 150,000 algal cells ml⁻¹ and 50 µg BaP l⁻¹. The five-fold higher algal concentration increased BaP bioaccumulation by a factor of approximately 2 at 50 µg BaP l⁻¹ on day 15. Blood cell neutral red dye retention time decreased linearly with increasing log₁₀ tissue BaP body burden, indicating an increased biological impact on M. edulis with increasing BaP exposure, possibly due to a direct effect of BaP on blood cell lysosomal membrane integrity. An increase was seen in NADPH-cytochrome c reductase activity, and indicated in BPH activity, after 1 but not 7 or 15 days of exposure to BaP, indicating a transient response of the digestive gland microsomal MFO system to BaP exposure.
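As a worked illustration of the bioconcentration figures quoted above, the minimal sketch below computes BCF = C_tissue / C_water. The water exposure concentration (50 µg BaP l⁻¹) is taken from the abstract, but the tissue burdens are hypothetical placeholders chosen only to reproduce the reported order of magnitude and the roughly two-fold dietary effect.

```python
# Bioconcentration factor (BCF) sketch: BCF = C_tissue / C_water.
# Exposure concentration (50 µg/l) is from the abstract; the tissue
# concentrations below are hypothetical placeholders for illustration.

def bcf(c_tissue_ug_per_kg: float, c_water_ug_per_l: float) -> float:
    """Bioconcentration factor as tissue burden (µg/kg) over water concentration (µg/l)."""
    return c_tissue_ug_per_kg / c_water_ug_per_l

# Hypothetical day-15 rest-tissue burdens (µg/kg wet weight).
low_algae  = bcf(c_tissue_ug_per_kg=6.0e6,  c_water_ug_per_l=50.0)   # 30,000 cells/ml ration
high_algae = bcf(c_tissue_ug_per_kg=1.25e7, c_water_ug_per_l=50.0)   # 150,000 cells/ml ration

print(f"BCF at low algal ration:  {low_algae:,.0f}")
print(f"BCF at high algal ration: {high_algae:,.0f}")   # ~250,000, cf. the abstract
print(f"Ratio (dietary effect):   {high_algae / low_algae:.1f}x")
```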
Several studies have been carried out over the past 20 or so years to assess the level of visual air quality judged to be acceptable in urban settings. Groups of individuals were shown slides or computer-projected scenes under a variety of haze conditions and asked to judge whether each image represented acceptable visual air quality. The goal was to assess the level of haziness found to be acceptable for purposes of setting an urban visibility regulatory standard. More recently, similar studies were carried out in Beijing, China, and in the more pristine Grand Canyon National Park and Great Gulf Wilderness. The studies clearly showed that when preference ratings were compared to measures of atmospheric haze such as atmospheric extinction, visual range, or deciview (dv), no single indicator represented acceptable levels of visual air quality across the varied urban and more remote settings. For instance, in a Washington, D.C., setting, 50% of the observers rated the landscape feature as not having acceptable visual air quality at an extinction of 0.19 km⁻¹ (21 km visual range, 29 dv), while the 50% acceptability point for a Denver, Colorado, setting was 0.075 km⁻¹ (52 km visual range, 20 dv) and for the Grand Canyon it was 0.023 km⁻¹ (170 km visual range, 7 dv). Over the past three or four decades, many scene-specific visibility indices have been put forth as potential indicators of visibility levels as perceived by human observers. They include, but are not limited to, color and achromatic contrast of single landscape features, average and equivalent contrast of the entire image, edge detection algorithms such as the Sobel index, and just-noticeable difference or change indices. This paper explores various scene-specific visual air quality indices and examines their applicability for quantifying visibility preference levels and judgments of visual air quality.
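For reference, the three haze metrics quoted above are interconvertible. The sketch below uses the standard Koschmieder relation (visual range ≈ 3.912/b_ext) and the deciview definition (dv = 10 ln(b_ext / 0.01 km⁻¹)); small differences from the abstract's quoted values may reflect rounding or site-specific Rayleigh assumptions rather than the study's own calculations.

```python
import math

RAYLEIGH_KM = 0.01    # reference Rayleigh extinction used in the deciview scale, km^-1
KOSCHMIEDER = 3.912   # Koschmieder constant for a ~2% contrast detection threshold

def visual_range_km(b_ext_km: float) -> float:
    """Koschmieder visual range for a given extinction coefficient (km^-1)."""
    return KOSCHMIEDER / b_ext_km

def deciview(b_ext_km: float) -> float:
    """Haziness in deciviews: dv = 10 ln(b_ext / 0.01 km^-1)."""
    return 10.0 * math.log(b_ext_km / RAYLEIGH_KM)

# 50% acceptability points quoted in the abstract (extinction in km^-1).
for site, b_ext in [("Washington, D.C.", 0.19), ("Denver", 0.075), ("Grand Canyon", 0.023)]:
    print(f"{site:17s}  b_ext={b_ext:.3f} km^-1  "
          f"VR={visual_range_km(b_ext):5.0f} km  dv={deciview(b_ext):4.1f}")
```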
Implications: Visibility acceptability studies clearly show that visibility becomes more unacceptable as haze increases. However, there are large variations in the preference levels for different scenes when universal haze indicators, such as atmospheric extinction, are used. This variability is significantly reduced when the sky–landscape contrast of the more distant landscape features in the observed scene is used. Analysis suggests that about 50% of individuals would find the visibility unacceptable if at any time the more distant landscape features nearly disappear, that is, when they are at the visual range. This common metric could form the basis for setting an urban visibility standard.
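The statement that a distant landscape feature "nearly disappears" at the visual range follows from contrast attenuation: apparent sky–landscape contrast decays exponentially with optical depth and falls to roughly the 2% detection threshold at the Koschmieder visual range. A minimal sketch of that reasoning follows, using the Washington, D.C., extinction value from the abstract and a hypothetical inherent contrast and set of distances.

```python
import math

def apparent_contrast(inherent_contrast: float, b_ext_km: float, distance_km: float) -> float:
    """Apparent sky-landscape contrast at the observer: C_r = C_0 * exp(-b_ext * r)."""
    return inherent_contrast * math.exp(-b_ext_km * distance_km)

b_ext = 0.19   # 50% acceptability extinction for Washington, D.C. (km^-1), from the abstract
c0 = -1.0      # hypothetical inherent contrast of a dark ridge seen against the sky

for r_km in (5.0, 10.0, 20.6, 40.0):   # 20.6 km ~ Koschmieder visual range at this extinction
    c_r = apparent_contrast(c0, b_ext, r_km)
    status = "visible" if abs(c_r) >= 0.02 else "nearly invisible"   # ~2% contrast threshold
    print(f"r={r_km:5.1f} km  |C_r|={abs(c_r):.3f}  {status}")
```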
Several collocated semicontinuous instruments measuring sulfate (SO₄²⁻) and nitrate (NO₃⁻) in particulate matter with particle sizes ≤2.5 µm (PM2.5) were intercompared during two intensive field campaigns as part of the PM2.5 Technology Assessment and Characterization Study. The summer 2001 urban campaign in Queens, NY, and the summer 2002 rural campaign in upstate New York (Whiteface Mountain) hosted the operation of an Aerosol Mass Spectrometer, Ambient Particulate Sulfate and Nitrate Monitors, a Continuous Ambient Sulfate Monitor, and a Particle-Into-Liquid Sampler with Ion Chromatographs (PILS-IC). These instruments provided near real-time particulate SO₄²⁻ and NO₃⁻ mass concentration data, allowing the study of particulate SO₄²⁻/NO₃⁻ diurnal patterns and detection of short-term events. Typical particulate SO₄²⁻ concentrations were comparable at both sites (ranging from 0 to 20 µg/m³), while ambient urban particulate NO₃⁻ concentrations ranged from 0 to 11 µg/m³ and rural NO₃⁻ concentrations were typically less than 1 µg/m³. Results of the intercomparisons of the semicontinuous measurements are presented, as are results of the comparisons between the semicontinuous and time-integrated filter-based measurements. The comparisons at both sites, in most cases, indicated similar performance characteristics. In addition, charge balance calculations based on the major soluble ionic components of atmospheric aerosol from the PILS-IC and the filter measurements indicated slightly acidic aerosol at both locations.
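The charge balance mentioned above is a straightforward equivalents calculation: mass concentrations are converted to charge equivalents and cations are compared with anions, with a cation (ammonium) deficit relative to SO₄²⁻ and NO₃⁻ indicating an acidic aerosol. The sketch below shows the usual form of the calculation with hypothetical mass concentrations in place of the campaign data.

```python
# Aerosol ion balance sketch: convert mass concentrations (µg/m^3) to charge
# equivalents (neq/m^3) and compare cations with anions. The concentrations
# below are hypothetical placeholders, not campaign data.

MOLAR_MASS = {"SO4": 96.06, "NO3": 62.00, "NH4": 18.04}   # g/mol
CHARGE     = {"SO4": 2,     "NO3": 1,     "NH4": 1}       # elementary charges per ion

def nanoequivalents(species: str, ug_per_m3: float) -> float:
    """Charge equivalents (neq/m^3) from a mass concentration (µg/m^3)."""
    return ug_per_m3 / MOLAR_MASS[species] * CHARGE[species] * 1000.0

sample = {"SO4": 8.0, "NO3": 2.0, "NH4": 2.8}   # hypothetical µg/m^3

anions  = nanoequivalents("SO4", sample["SO4"]) + nanoequivalents("NO3", sample["NO3"])
cations = nanoequivalents("NH4", sample["NH4"])

print(f"anions  = {anions:6.1f} neq/m^3")
print(f"cations = {cations:6.1f} neq/m^3")
print("slightly acidic aerosol" if cations < anions else "fully neutralized aerosol")
```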
Background: Due to the bovine spongiform encephalopathy (BSE) crisis, specified risk material (SRM) as well as animal meat and bone meal (MBM) are banned from the food and feed chain because of possible contamination with pathogenic prions (PrPSc). Prions are now widely accepted to be responsible for transmissible spongiform encephalopathy (TSE) illnesses such as BSE and scrapie, and especially for the occurrence of the new variant of Creutzfeldt-Jakob disease (vCJD) in humans. At present, SRM and MBM are incinerated at high temperatures to avoid any hazard to humans, animals or the environment. The aim of this study was to evaluate a method using animal fat separated from Category 1 material, which includes SRM and the carcasses of TSE-infected animals or animals suspected of being infected with TSE, as a source for producing biodiesel by transesterification, analogous to the biodiesel process using vegetable oil.
Methods: For this purpose, animal fat was spiked with scrapie-infected hamster brain equivalents, as a surrogate for a TSE-infected animal, and the biodiesel manufacturing process was downscaled and performed under lab-scale conditions.
Results and Discussion: Western blot analysis showed clearly that almost every single step of the process leads to a significant reduction in the concentration of the pathogenic prion protein (PrPSc) in both the main product and the side products.
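Step-wise clearance of this kind is commonly summarized as a cumulative log₁₀ reduction factor across the process. The sketch below illustrates only the bookkeeping, using hypothetical per-step reduction factors and hypothetical step names, not the Western-blot results of this study.

```python
# Cumulative log10 reduction across sequential process steps.
# Step names and values are illustrative placeholders only.
step_log_reductions = {
    "fat rendering/clarification": 1.0,
    "transesterification":         2.0,
    "phase separation":            1.5,
    "distillation":                2.5,
}

total_log_reduction = sum(step_log_reductions.values())
residual_fraction = 10.0 ** (-total_log_reduction)

for step, lr in step_log_reductions.items():
    print(f"{step:28s} {lr:.1f} log10")
print(f"cumulative reduction: {total_log_reduction:.1f} log10 "
      f"(residual fraction ~{residual_fraction:.1e})")
```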
Conclusion: The data revealed that biodiesel production, even from material with a high concentration of pathogenic prions, can be considered safe.
Recommendations and Outlook: The results obtained indicate that biodiesel produced from prion-contaminated fat was safe under the tested process conditions. However, it must be pointed out that these results cannot be generalized: a different process using other conditions may lead to different results and would have to be evaluated independently. Clearly, the production of biodiesel from high-risk material represents a more economic use than the combustion of such material.
Freshwater and the services it provides are vital to both natural ecosystems and human needs; however, extreme climates and their influence on freshwater availability can make it challenging for municipal planners and engineers to manage these resources effectively. In Arctic Canada, financial and human capital limitations have left a legacy of freshwater systems that underserve current communities and may be inadequate in the near future under a warming climate, growing population, and increasing demand. We address this challenge to community water resource planning by applying several novel water supply forecasting methods to evaluate the Apex River as an alternative freshwater source for Iqaluit, Nunavut (Canada). Surveys of the water isotope composition of the Apex River and its tributaries indicated that rainfall is the main source of water replenishment. This information was used to calibrate a water resource assessment that considered climate forecasting scenarios and their influence on supply, as well as alternative scenarios for freshwater management to better adapt to a changing climate. We found that under current climate and demand conditions, the freshwater supply of Iqaluit would be in a perpetual state of drawdown by 2024. Analysis of current infrastructure proposals revealed significant deficiencies in the proposed supply extensions, whereby the Apex replenishment pipeline would provide only a 2-year extension to the current municipal supply. Our heuristic supply forecast methods allowed several alternative supply strategies to be rapidly evaluated, which will aid the community planning process by specifically quantifying the service life of the city's current and future primary water supply.
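One minimal form of the kind of supply forecast described above is an annual storage balance, iterated forward until the reservoir can no longer meet demand. The sketch below shows that logic; the volumes, capacity, growth rate, and inflow scenarios are hypothetical placeholders, not Iqaluit's actual figures or the study's results.

```python
# Simple annual water-balance forecast:
#   storage_{t+1} = min(storage_t + inflow_t, capacity) - demand_t
# All volumes are hypothetical placeholders (million m^3 per year), not Iqaluit data.

def first_shortfall_year(storage: float, capacity: float, inflow: float, demand: float,
                         demand_growth: float, start_year: int, horizon: int = 50):
    """Return the first year annual demand cannot be met, or None within the horizon."""
    for year in range(start_year, start_year + horizon):
        storage = min(storage + inflow, capacity) - demand
        if storage < 0:
            return year
        demand *= 1.0 + demand_growth   # demand grows with population
    return None

baseline  = first_shortfall_year(storage=2.0, capacity=2.5, inflow=1.8,
                                 demand=2.0, demand_growth=0.05, start_year=2020)
augmented = first_shortfall_year(storage=2.0, capacity=2.5, inflow=2.1,   # hypothetical extra replenishment
                                 demand=2.0, demand_growth=0.05, start_year=2020)

print(f"baseline supply shortfall year:            {baseline}")
print(f"shortfall year with added replenishment:   {augmented}")
```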