Objective: Since 2005, National Association for Stock Car Auto Racing, Incorporated (NASCAR) drivers have been required to use a head and neck restraint system (HNR) that complies with SFI Foundation, Inc. (SFI) 38.1. The primary purpose of the HNR is to control and limit injurious neck loads and head kinematics during frontal and frontal oblique impacts. The SFI 38.1 performance specification was implemented to establish a uniform test procedure and minimum standard for the evaluation of HNRs using dynamic sled testing. The purpose of this study was to evaluate the repeatability of the current SFI 38.1 test setup and explore the effects of a polyester seat belt restraint system.
Method: Eight sled tests were conducted using the SFI 38.1 sled test protocol with additional test setup constraints. Four 0° frontal tests and four 30° right frontal (RF) oblique tests were conducted. The first 3 tests of each principal direction of force (PDOF) used nylon SFI 16.1 seat belt restraint assemblies. The fourth test of each PDOF used polyester SFI 16.6 seat belt restraint assemblies. A secondary data set (Lab B Data) was also supplied by the HNR manufacturer for further comparisons. The International Organization for Standardization (ISO) 18571 objective comparison method was used to quantify the repeatability of the anthropomorphic test device (ATD) resultant head, chest, and pelvis acceleration and upper neck axial force and flexion-extension bending moment time histories across multiple tests.
Results: Two data sets generated using the SFI 38.1 test protocol exhibited large variations in mean ISO scores of ATD channels. The 8 tests conducted with additional setup constraints had significantly lower mean ISO score coefficients of variation (CVs). The Lab B tests conducted within the current specification but without the additional test setup constraints had larger mean ISO score standard deviation and CV for all comparisons. Specifically, tests with the additional setup constraints had average CVs of 3.3 and 2.9% for the 0° and 30° RF orientations, respectively. Lab B tests had average CVs of 22.9 and 24.5%, respectively. Polyester seat belt comparisons had CVs of 5.3 and 6.2% for the 0° and 30° RF orientations, respectively.
Conclusion: With the addition of common test setup constraints, which do not violate the specification, the SFI 38.1 test protocol produced a repeatable test process for determining performance capabilities of HNRs within a single sled lab. A limited study using polyester webbing seat belt assemblies versus the nylon material called for in SFI 38.1 indicates that the webbing material likely has less effect on ATD upper neck axial force and flexion-extension bending moment time histories than the test setup freedom currently available within the specification. The additional test setup constraints are discussed and were shown to improve ATD response repeatability for a given HNR.
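The repeatability metric used above, the coefficient of variation (CV) of mean ISO 18571 scores across repeated tests, can be sketched in a few lines. The function below is a minimal illustration; the sample score vectors are hypothetical and are not the study's data.

```python
# Minimal sketch of the CV calculation used to compare test repeatability.
# ISO 18571 scores range from 0 to 1 per channel; values here are illustrative.
import statistics

def coefficient_of_variation(scores):
    """CV as a percentage: 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(scores) / statistics.mean(scores)

# Hypothetical mean ISO scores for one ATD channel across three repeated tests.
constrained_setup = [0.86, 0.88, 0.85]    # tests with added setup constraints
unconstrained_setup = [0.62, 0.85, 0.71]  # tests with setup freedom

print(round(coefficient_of_variation(constrained_setup), 1))
print(round(coefficient_of_variation(unconstrained_setup), 1))
```

A lower CV indicates that the repeated time histories scored more consistently against the reference, which is the sense in which the constrained setup outperformed the Lab B tests.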
Objective: Systems that can warn the driver of a possible collision with a vulnerable road user (VRU) have significant safety benefits. However, incorrect warning times can have adverse effects on the driver. If the warning is too late, drivers might not be able to react; if the warning is too early, drivers can become annoyed and might turn off the system. Currently, there are no methods to determine the right timing for a warning to achieve high effectiveness and acceptance by the driver. This study aims to validate a driver model as the basis for selecting appropriate warning times. The timing of the forward collision warnings (FCWs) selected for the current study was based on the comfort boundary (CB) model developed during a previous project, which describes the moment a driver would brake. Drivers’ acceptance toward these warnings was analyzed. The present study was conducted as part of the European research project PROSPECT (“Proactive Safety for Pedestrians and Cyclists”).
Methods: Two warnings were selected: one inside the CB and one outside the CB. The scenario tested was a cyclist crossing scenario with a time to arrival (TTA) of 4 s (it takes the cyclist 4 s to reach the intersection). The timing of the warning inside the CB was at a time to collision (TTC) of 2.6 s (asymptotic value of the model at TTA = 4 s) and the warning outside the CB was at TTC = 1.7 s (below the lower 95% value at TTA = 4 s). Thirty-one participants took part in the test track study (between-subjects design where warning time was the independent variable). Participants were informed that they could brake any moment after the warning was issued. After the experiment, participants completed an acceptance survey.
Results: Participants reacted faster to the warning outside the CB compared to the warning inside the CB. This confirms that the CB model represents the criticality felt by the driver.
Participants also rated the warning inside the CB as more disturbing, and they had a higher acceptance of the system with the warning outside the CB. The above results confirm the possibility of developing well-accepted warnings based on driver models.
Conclusions: Similar to other studies’ results, drivers prefer warning times that match their driving behavior. It is important to consider that the study tested only one scenario. In addition, in this study, participants were aware of the appearance of the cyclist and the warning. A further investigation should be conducted to determine the acceptance of distracted drivers.
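The warning timings above follow from simple constant-speed kinematics. The sketch below shows where TTC thresholds of 2.6 s and 1.7 s would place a warning relative to the conflict point; the approach speed of 50 km/h is a hypothetical value for illustration, not a parameter from the study.

```python
# Illustrative constant-speed kinematics for a crossing-cyclist FCW scenario.
# The 2.6 s and 1.7 s thresholds come from the abstract; the speed is assumed.

def time_to_collision(gap_m, vehicle_speed_mps):
    """Seconds until the vehicle reaches the conflict point at constant speed."""
    return gap_m / vehicle_speed_mps

def warning_distance(ttc_threshold_s, vehicle_speed_mps):
    """Distance from the conflict point at which a warning at that TTC fires."""
    return ttc_threshold_s * vehicle_speed_mps

v = 50 / 3.6  # assumed approach speed: 50 km/h in m/s
# Warning inside the CB (TTC = 2.6 s) fires roughly 36 m out;
# warning outside the CB (TTC = 1.7 s) fires roughly 24 m out.
print(round(warning_distance(2.6, v), 1), round(warning_distance(1.7, v), 1))
```

The later (outside-CB) warning therefore reaches the driver closer to the conflict point, consistent with the faster reactions and higher acceptance reported above.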
Objective: Considerable evidence indicates that medical conditions prevalent among older individuals lead to impairments in visual, cognitive, or psychomotor functions needed to drive safely. The purpose of this study was to explore the factors determining driving difficulties as seen from the viewpoint of 30 older drivers with mild cognitive impairment (MCI) and 30 age-matched controls without cognitive impairment.
Methods: Perceptions of driving difficulties from both groups were examined using data from an extensive questionnaire. Samples of drivers diagnosed with MCI and age-matched controls were asked to report the frequency with which they experienced driving difficulties due to functional deficits and knowledge of new traffic rules and traffic signs.
Results: The analysis revealed that 2 factors underlie MCI perceptions of driving difficulties, representing (1) difficulties associated with late detection combined with slowed response to relevant targets in the peripheral field of view and (2) difficulties associated with divided attention between tasks requiring switching from automatic to conscious processing particularly of long duration. The analysis for healthy controls revealed 3 factors representing (1) difficulties in estimating speed and distance of approaching vehicles in complex (attention-dividing) high-information-load conditions; (2) difficulties in moving head, neck, and feet; and (3) difficulties in switching from automatic responses to needing to use cognitive processing in new or unexpected situations.
Conclusions: Though both group analyses show difficulties with switching from automatic to decision making, the difficulties are different. For the control group, the difficulty in switching involves switching in new or unexpected situations associated with high-information-load conditions, whereas this switching difficulty for the MCI group is associated with divided attention between easier tasks requiring switching. These findings underline the ability of older drivers (with MCI and without cognitive impairment) to indicate probable impairments in various driving skills. The patterns of difficulties perceived by the MCI group and the age-matched healthy control group are indicative of demanding driving situations that may merit special attention for road designers and road safety engineers. They may also be considered in the design of older drivers’ fitness to drive evaluations, training programs, and/or vehicle technologies that provide for older driver assistance.
Where they dominate coastlines, seagrass beds are thought to have a fundamental role in maintaining populations of exploited species. Thus, Mediterranean seagrass beds are afforded protection, yet no attempt has been made to determine the contribution of these areas to both commercial fisheries landings and recreational fisheries expenditure. There is evidence that seagrass extent continues to decline, but there is little understanding of the potential impacts of this decline. We used a trait- and evidence-based seagrass residency index to estimate the proportion of Mediterranean commercial fishery landings values and recreational fisheries total expenditure that can be attributed to seagrass during different life stages. The index was calculated as a weighted sum of the averages of the estimated residence time in seagrass (compared with other habitats) at each life stage of the fishery species found in seagrass. Seagrass‐associated species were estimated to contribute 30%–40% to the value of commercial fisheries landings and approximately 29% to recreational fisheries expenditure. These species predominantly rely on seagrass to survive juvenile stages. Seagrass beds had an estimated direct annual contribution during residency of €58–91 million (4% of commercial landing values) to commercial fisheries and €112 million (6% of recreational expenditure) to recreational fisheries, despite covering <2% of the area. These results suggest there is a clear cost of seagrass degradation associated with ineffective management of seagrass beds and that policy to manage both fisheries and seagrass beds should take into account the socioeconomic implications of seagrass loss to recreational and commercial fisheries.
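The residency index described above, a weighted sum of mean residence-time estimates per life stage, can be sketched as follows. The life-stage names, weights, and residence fractions below are hypothetical placeholders, not the study's data or weighting scheme.

```python
# Minimal sketch of a per-species seagrass residency index: a weighted sum of
# the mean estimated fraction of time spent in seagrass at each life stage.
# All stage names, weights, and fractions are illustrative assumptions.

def seagrass_residency_index(stage_residence, stage_weights):
    """stage_residence: {stage: [residence-fraction estimates]}
    stage_weights:   {stage: weight}, assumed here to sum to 1."""
    index = 0.0
    for stage, estimates in stage_residence.items():
        mean_residence = sum(estimates) / len(estimates)
        index += stage_weights[stage] * mean_residence
    return index

# A hypothetical species that relies on seagrass mainly as juvenile habitat.
residence = {"juvenile": [0.9, 0.8], "adult": [0.3, 0.4]}
weights = {"juvenile": 0.5, "adult": 0.5}
print(round(seagrass_residency_index(residence, weights), 3))  # → 0.6
```

Such an index, applied species by species, is what allows a share of landings value or expenditure to be attributed to seagrass habitat.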
Objective: Drivers’ use of lane departure warning and prevention systems is lower than use of other crash avoidance technologies and varies significantly by manufacturer. One factor that may affect use is how well a system prevents unintended departures. The current study evaluated the performance of systems that assist in preventing departures by providing steering or braking input in a 2016 Chevrolet Malibu, 2016 Ford Fusion, 2016 Honda Accord, and 2018 Volvo S90. These vehicles were selected because a prior observational study found that the percentage of privately owned vehicles that had lane departure prevention systems turned on varied among these 4 automakers.
Method: In each vehicle, a test driver induced 40 lane drifts on left and right curves by steering the vehicle straight into the curve so that vehicles departed in the opposite direction and 40 lane drifts on straightaways by slight steering input to direct the vehicle to left and right lane markers.
Results: Vehicles from automakers with higher observed lane departure prevention use rates (Volvo, Chevrolet) featured systems that provided steering input earlier and more often avoided crossing lane markers by more than 35 cm compared to vehicles from automakers with lower observed use rates (Ford, Honda).
Conclusion: The study identified functional characteristics (i.e., timing of steering input, prevention of departures more than 35 cm) of lane departure prevention systems that were strongly associated with observed activation of these systems in privately owned vehicles. Although this relationship does not imply causation, the findings support the hypothesis that functional characteristics of lane departure prevention systems affect their use. Designers may be able to use these results to maximize driver acceptance of future implementations of lane departure prevention.
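The 35-cm criterion above amounts to a simple per-trial classification of maximum lateral excursion past the lane marker. The threshold follows the abstract; the function names and trajectory values below are illustrative, not the study's instrumentation or data.

```python
# Classify an induced drift trial by its peak excursion beyond the lane marker.
# The 35 cm threshold is from the abstract; the sample trajectory is made up.

EXCURSION_LIMIT_M = 0.35  # 35 cm beyond the lane marker

def max_excursion(lateral_positions_m):
    """Peak distance past the marker; values <= 0 are inside the lane."""
    return max(lateral_positions_m)

def departure_contained(lateral_positions_m):
    """True if the system kept the excursion within 35 cm of the marker."""
    return max_excursion(lateral_positions_m) <= EXCURSION_LIMIT_M

# Hypothetical lateral positions (metres relative to the marker) during a drift.
trial = [-0.4, -0.1, 0.2, 0.3, 0.1]
print(departure_contained(trial))  # → True
```

Aggregating this classification over the 80 induced drifts per vehicle yields the kind of containment rate the study associated with higher real-world activation.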
Objective: The objective of this article was to develop a multi-agent traffic simulation methodology to estimate the potential road safety improvements of automated vehicle technologies.
Methods: We developed a computer program that merges road infrastructure data with a large number of vehicles, drivers, and pedestrians. Human errors are induced by modeling inattention, aimless driving, insufficient safety confirmation, misjudgment, and inadequate operation. The program was applied to simulate traffic in a prescribed area in Tsukuba city. First, a 100% manual driving scenario was set to simulate traffic for a total preset vehicle travel distance. The crashes from this simulation were compared with real-world crash data from the prescribed area from 2012 to 2017. Thereafter, 4 additional scenarios of increasing levels of automation penetration (including combinations of automated emergency braking [AEB], lane departure warning [LDW], and SAE Level 4 functions) were implemented to estimate their impact on safety.
Results: Under manual driving, the system simulated a total of 859 crashes, including single-car lane departure, car-to-car, and car-to-pedestrian crashes. These crashes tended to occur in locations similar to real-world crashes. The number of crashes predicted decreased to 156 cases with increasing level of automation. All of the technologies considered contributed to the decrease in crashes. Crash reductions attributable to AEB and LDW in the simulations were comparable to those reported in recent field studies. For the highest levels of automation, no assessment data were available and hence the results should be treated carefully. Further, in modeling automated functions, potentially negative aspects such as sensing failure or human overreliance were not incorporated.
Conclusions: We developed a multi-agent traffic simulation methodology to estimate the effect of different automated vehicle technologies on safety.
The crash locations resulting from simulations of manual driving within a limited area in Japan were preliminarily assessed by comparison with real-world crash data collected in the same area. Increasing penetration levels of AEB and LDW led to a large reduction in both the frequency and severity of rear-end crashes, followed by car-to-car head-on crashes and single-vehicle lane departure crashes. Preliminary estimations of the potential safety improvements that may be achieved with highly automated driving technologies were also obtained.
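The headline crash counts above imply a large relative reduction between the all-manual and highest-automation scenarios. A minimal sketch of that arithmetic, using only the 859 and 156 crash counts reported in the abstract:

```python
# Relative crash reduction between two simulation scenarios.
# Counts are from the abstract: 859 crashes at 100% manual driving,
# 156 at the highest automation penetration level.

def percent_reduction(baseline_crashes, scenario_crashes):
    """Percentage decrease from the baseline scenario."""
    return 100.0 * (baseline_crashes - scenario_crashes) / baseline_crashes

print(round(percent_reduction(859, 156), 1))  # → 81.8
```

As the abstract cautions, this headline figure folds in Level 4 functions for which no assessment data were available, so it should be read as an upper-bound style estimate rather than a validated prediction.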