Similar Documents (20 results)
1.
To understand the evolutionary characteristics and regularities of gas disasters, the R/S (rescaled range) analysis method from fractal theory was applied to four time series constructed from China's coal mine gas accidents for 2000-2013, both overall and by accident type, in order to study their fractal properties. The linear regression correlation coefficients R of the rescaled ranges fall between 0.9715 and 0.9983; at a significance level of α = 0.05, t-tests confirm that the regression equations are reliable. The Hurst exponents H of the four series fall between 0.5998 and 0.9944 and the fractal dimensions D between 1.0056 and 1.4002, so all four series show a pronounced Hurst phenomenon and a strong Joseph effect. The results indicate that the downward trend in coal mine gas accidents is strongly persistent, and that R/S analysis can effectively capture the complex nonlinear dynamics of the coal mine gas disaster subsystem.
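A minimal Python sketch of the rescaled range (R/S) calculation referred to above: the Hurst exponent H is estimated from the slope of log(R/S) against log(n), and D = 2 - H is taken as the fractal dimension. The annual accident counts below are illustrative placeholders, not the series used in the paper.

```python
import numpy as np

def hurst_rs(series, min_window=4):
    """Estimate the Hurst exponent H of a 1-D series by rescaled range (R/S) analysis.

    For each window size n the series is split into non-overlapping blocks; in each
    block the cumulative deviation from the block mean gives the range R, rescaled
    by the block standard deviation S. The slope of log(mean R/S) versus log(n)
    estimates H, and D = 2 - H is the corresponding fractal dimension.
    """
    x = np.asarray(series, dtype=float)
    N = len(x)
    log_n, log_rs = [], []
    for n in range(min_window, N // 2 + 1):
        rs_vals = []
        for start in range(0, N - n + 1, n):
            block = x[start:start + n]
            dev = np.cumsum(block - block.mean())
            r = dev.max() - dev.min()
            s = block.std(ddof=1)
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    H, _ = np.polyfit(log_n, log_rs, 1)
    return H, 2.0 - H  # Hurst exponent and fractal dimension

# Illustrative annual accident counts (placeholders, not the paper's data)
counts = [637, 580, 544, 513, 480, 414, 327, 272, 215, 182, 145, 119, 95, 80]
H, D = hurst_rs(counts)
print(f"H = {H:.4f}, D = {D:.4f}")
```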

2.
Research on an Enterprise Work Safety Information Management System Based on the C/S+B/S Hybrid Mode (cited by 1)
Work safety is an enduring concern for enterprises, and a sound work safety environment and order are an important guarantee of their sustainable development. Building on a comparative analysis of the B/S and C/S architectures, this paper proposes an enterprise work safety information management system based on computer, network, and database technologies, and analyses the key technologies and system integration needed to build it. A B/S+C/S hybrid-mode enterprise work safety information management system was developed; it offers an important reference for accident prevention and an important tool for enterprise safety management and accident emergency decision-making.

3.
To address the complex nonlinear dynamics of the coal mine gas explosion disaster subsystem, a time series was constructed from the number of coal mine gas explosion accidents in a southern Chinese province from 1958 to 2007, and a rescaled range (R/S) analysis of the series is proposed based on the scaling behaviour of Hurst time series. The results show a Hurst exponent of 0.5502 and a fractal dimension of 1.4498; the corresponding fractional Brownian motion trajectory exhibits a degree of persistence and non-Gaussianity. To diagnose the year in which the accident time series changed regime, a segmented R/S analysis was carried out and identified 1974 as the change year. Splitting the series at 1974 gives two sub-series with Hurst exponents of 0.5891 and 0.6975 and fractal dimensions of 1.4109 and 1.3025, consistent with the persistent trend of the full series. Although the gas explosion accident time series appears disordered, it conceals regularities that reflect the dynamic behaviour of the system, and R/S analysis can describe these dynamics.

4.
(Informative annex) Safety risk assessment methods. B.1 Risk matrix analysis (LS). In the risk matrix method (LS for short), R = L × S, where R is the risk value, combining the likelihood of an accident with the severity of its consequences; L is the likelihood that the accident occurs; and S is the severity of its consequences. The larger R is, the more hazardous the system and the greater the risk (Tables B.1-B.4).
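A minimal sketch of the LS risk matrix calculation described above (R = L × S). The 1-5 scoring scales and the qualitative risk bands are assumptions for illustration and do not reproduce Tables B.1-B.4.

```python
def risk_value(likelihood: int, severity: int) -> int:
    """Risk value R = L * S on illustrative 1-5 scales (assumed, not from Table B.1)."""
    if not (1 <= likelihood <= 5 and 1 <= severity <= 5):
        raise ValueError("L and S are assumed to be scored from 1 to 5")
    return likelihood * severity

def risk_band(r: int) -> str:
    """Map R to a qualitative band; the thresholds are assumptions for illustration."""
    if r >= 16:
        return "major risk"
    if r >= 9:
        return "significant risk"
    if r >= 4:
        return "general risk"
    return "low risk"

r = risk_value(likelihood=4, severity=3)
print(r, risk_band(r))  # 12 significant risk
```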

5.
Hydrogen fire and explosion accidents are among the most common accidents in chlor-alkali production. Based on a field investigation of the Jiaozuo Chemical & Power Group, fault tree analysis, a system safety analysis technique, was used to analyse hydrogen fire and explosion accidents both qualitatively and quantitatively. The fault tree diagram, its minimal cut sets, minimal path sets, and three importance measures are presented, and safety measures for preventing such accidents are proposed on the basis of the fault tree analysis.
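As a hedged illustration of the quantitative side of fault tree analysis mentioned above, the sketch below computes the top event probability from minimal cut sets with the rare-event approximation and a Fussell-Vesely-style importance measure. The cut sets and basic event probabilities are hypothetical placeholders, not the paper's hydrogen fire and explosion tree.

```python
# Hypothetical basic events and minimal cut sets (placeholders, not the paper's tree)
basic_event_prob = {"X1": 1e-3, "X2": 5e-4, "X3": 2e-3, "X4": 1e-4}
min_cut_sets = [{"X1", "X2"}, {"X3"}, {"X2", "X4"}]

def cut_set_prob(cut):
    """Probability of a minimal cut set, assuming independent basic events."""
    p = 1.0
    for e in cut:
        p *= basic_event_prob[e]
    return p

# Top event probability via the rare-event (first-order) approximation
p_top = sum(cut_set_prob(c) for c in min_cut_sets)

# Fussell-Vesely-style importance: fraction of the top event probability
# contributed by the cut sets that contain each basic event
fv_importance = {
    e: sum(cut_set_prob(c) for c in min_cut_sets if e in c) / p_top
    for e in basic_event_prob
}
print(f"P(top) = {p_top:.3e}")
print(fv_importance)
```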

6.
I. The current work safety situation and the state of occupational safety and health. The CPC Central Committee and the State Council have always attached great importance to work safety and in recent years have taken a series of major measures to strengthen it; the new central leadership and the new government have given work safety a very prominent position. Under the unified leadership of the State Council, supervision and administration of work safety have been strengthened nationwide, extraordinarily serious accidents in some regions and industries have been brought under effective control, and the work safety situation nationally and in our province has on the whole stabilised and improved. However, because society and most enterprises have long under-invested in work safety, and because of management gaps during the transition of the economic system, the total number of work safety accidents and the number of deaths from all types of accidents nationwide and in our province remain high, and the work safety situation remains severe. According to statistics, in 2003 there were 963,976 accidents of all types nationwide, killing 136,340 people, including 129 accidents that each killed 10 or more people, with 2,566 deaths; of these, our province recorded 42,041 accidents and 4,518 deaths, including 3 accidents each killing 10 or more people, with 36 deaths. In the first half of 2004 there were 426,283 accidents nationwide with 63,735 deaths, down 12.8% and 0.2% year on year respectively; however, the 77 accidents each killing 10 or more people, with 1,217 deaths, represented a slight year-on-year increase. These figures show that a fundamental improvement of the work safety situation nationwide and in our province is still a long way off.

7.
Preparations for the 2012 national "Work Safety Month" are in full swing, with the theme "Scientific Development, Safe Development". Nationwide safety culture campaigns date back to the 1980s and have now run for more than 20 years. Through years of building a safety culture, on the one hand, work safety accidents and fatalities in China have fallen sharply: annual accident deaths dropped from 127,000 in 2005 to under 80,000 in 2011, showing that safety culture is quietly taking effect. On the other hand, in 2011 there were still 347,728 accidents of all types with 75,572 deaths; the total number of accidents remains large, serious and extraordinarily serious accidents still occur, and the work safety situation remains severe. At the national work safety teleconference early this year, Vice Premier Zhang Dejiang identified the primary cause: the concept of safe development has not yet taken firm root, and development has not been made conditional on safety as its premise and foundation. This shows that safety culture building remains a long-term task.

8.
"群体效应"与企业人因事故防御   总被引:1,自引:0,他引:1  
From the perspective of human factors engineering, the relationships between human error and human-factor accidents, between enterprise accidents and human-factor accidents, and between human-factor accidents and the group effect are analysed, and the formation mechanism of the group effect and its role and importance in defending against human-factor accidents are studied. Using the S→O→R (stimulus-organism-response) model, a three-level "group effect" defence mechanism against human-factor accidents is established to strengthen the positive function of the group effect in enterprises and reduce the occurrence of enterprise disasters caused by human error. Finally, the measures required to build the "group effect" are identified, providing a basis for improving enterprise work safety and for formulating accident defence mechanisms.

9.
Safety education and training is an important foundational task in the field of work safety. Strengthening enterprise safety training not only helps employees master work safety knowledge, improve their safety skills, and raise their safety awareness, but is also of great significance for strengthening enterprise safety management, preventing accidents, and promoting a fundamental improvement of the national work safety situation.

10.
The direct economic loss from accidents is an important indicator of the work safety situation and plays an important role in assessing that situation accurately. Because systematic and complete baseline data are currently lacking in China's industrial, mining, and commercial sectors, this paper systematically discusses a sampling survey method for studying the overall direct economic losses from work safety accidents in China and, based on statistical analysis of the sampled data, proposes a method for extrapolating the national total of direct economic losses from work safety accidents, providing a reference for carrying out such sampling surveys in China.

11.
Because data are scarce, estimating the frequency of a rare event is a persistently challenging problem in probabilistic risk assessment (PRA). However, the use of precursor data has been shown to help in obtaining more accurate estimates. Moreover, the use of hyper-priors to represent prior parameters in the hierarchical Bayesian approach (HBA) generates more consistent results than the conventional Bayesian method. This study proposes a framework that uses a precursor-based HBA for rare event frequency estimation. The proposed method is demonstrated using the recent BP Deepwater Horizon accident in the Gulf of Mexico. The conventional Bayesian method is also applied to the same case study. The results show that the proposed approach is more effective in the following respects: (a) using the HBA in the proposed framework provides an opportunity to take full advantage of the sparse data available and add information from indirect but relevant data; (b) the HBA is more sensitive to changes in precursor data than the conventional Bayesian method; and (c) by using hyper-priors to represent prior parameters, the HBA is able to model the variability that can exist among different sources of data.
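A minimal NumPy/SciPy sketch of the gamma-Poisson form of the hierarchical Bayesian approach described above: source-specific rates λ_i ~ Gamma(α, β), exponential hyper-priors on (α, β) sampled by random-walk Metropolis, and the posterior rate for a sparse new source. All counts, exposures, and prior choices are illustrative assumptions, not the Deepwater Horizon data set used in the paper.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Hypothetical precursor data: event counts k and exposures T (e.g. facility-years)
# from indirect but relevant sources; NOT the paper's data set.
k = np.array([0, 1, 0, 2, 0, 1])
T = np.array([120., 300., 80., 450., 60., 200.])

def log_marginal(alpha, beta):
    """log p(k | alpha, beta) with the source rates lambda_i ~ Gamma(alpha, beta)
    integrated out analytically (a negative-binomial marginal per source)."""
    return np.sum(
        gammaln(k + alpha) - gammaln(alpha) - gammaln(k + 1)
        + alpha * np.log(beta / (beta + T))
        + k * np.log(T / (beta + T))
    )

def log_post(theta):
    """Log-posterior over theta = (log alpha, log beta) with Exp(1) hyper-priors."""
    la, lb = theta
    alpha, beta = np.exp(la), np.exp(lb)
    log_prior = (la - alpha) + (lb - beta)  # Exp(1) densities plus log-Jacobians
    return log_prior + log_marginal(alpha, beta)

# Random-walk Metropolis over the hyper-parameters
theta, lp = np.zeros(2), log_post(np.zeros(2))
samples = []
for it in range(20000):
    prop = theta + 0.3 * rng.standard_normal(2)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    if it >= 5000 and it % 10 == 0:
        samples.append(np.exp(theta))
samples = np.array(samples)

# Posterior rate for a sparse new source (here 0 events in 25 exposure units):
# lambda_new | alpha, beta, data ~ Gamma(alpha + k_new, beta + T_new)
k_new, T_new = 0, 25.0
rate = (samples[:, 0] + k_new) / (samples[:, 1] + T_new)
print(f"posterior mean rate: {rate.mean():.4g} events per exposure unit")
```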

12.
Objective: The amount of collected field data from naturalistic driving studies is quickly increasing. The data are used for, among others, developing automated driving technologies (such as crash avoidance systems), studying driver interaction with such technologies, and gaining insights into the variety of scenarios in real-world traffic. Because data collection is time consuming and requires high investments and resources, questions like “Do we have enough data?,” “How much more information can we gain when obtaining more data?,” and “How far are we from obtaining completeness?” are highly relevant. In fact, deducing safety claims based on collected data—for example, through testing scenarios based on collected data—requires knowledge about the degree of completeness of the data used. We propose a method for quantifying the completeness of the so-called activities in a data set. This enables us to partly answer the aforementioned questions.

Method: In this article, the (traffic) data are interpreted as a sequence of different so-called scenarios that can be grouped into a finite set of scenario classes. The building blocks of scenarios are the activities. For every activity, there exists a parameterization that encodes all information in the data of each recorded activity. For each type of activity, we estimate a probability density function (pdf) of the associated parameters. Our proposed method quantifies the degree of completeness of a data set using the estimated pdfs.

Results: To illustrate the proposed method, 2 different case studies are presented. First, a case study with an artificial data set, of which the underlying pdfs are known, is carried out to illustrate that the proposed method correctly quantifies the completeness of the activities. Next, a case study with real-world data is performed to quantify the degree of completeness of the acquired data for which the true pdfs are unknown.

Conclusion: The presented case studies illustrate that the proposed method is able to quantify the degree of completeness of a small set of field data and can be used to deduce whether sufficient data have been collected for the purpose of the field study. Future work will focus on applying the proposed method to larger data sets. The proposed method will be used to evaluate the level of completeness of the data collection on Singaporean roads, aimed at defining relevant test cases for the autonomous vehicle road approval procedure that is being developed in Singapore.
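The abstract does not give the completeness statistic itself, so the following is only a hedged illustration of the underlying idea: estimate the pdf of one activity parameter (here a hypothetical lane-change duration, via Gaussian KDE) from growing subsets of the data and report how much the estimate still changes as data are added; once additional data barely change the pdf, the data set can be regarded as close to complete for that activity. The data, the parameter, and the L1-distance metric are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def completeness_curve(samples, grid, steps=10):
    """Heuristic completeness indicator for one activity parameter: estimate the pdf
    (Gaussian KDE) from growing subsets of the data and report how much the estimate
    still changes when more data are added. A small change suggests the data set is
    close to complete for this activity; the metric is an illustrative choice."""
    n = len(samples)
    sizes = np.linspace(n // steps, n, steps, dtype=int)
    dx = grid[1] - grid[0]
    prev, changes = None, []
    for m in sizes:
        pdf = gaussian_kde(samples[:m])(grid)
        if prev is not None:
            changes.append(0.5 * np.sum(np.abs(pdf - prev)) * dx)  # L1 distance
        prev = pdf
    return sizes[1:], np.array(changes)

# Illustrative 'activity parameter', e.g. the duration of a lane change in seconds
durations = rng.normal(loc=5.0, scale=1.2, size=400)
grid = np.linspace(0.0, 12.0, 500)
for m, c in zip(*completeness_curve(durations, grid)):
    print(f"n = {m:4d}   change vs previous pdf estimate = {c:.4f}")
```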

13.
For accelerated life tests of highly reliable, long-life equipment with very small sample sizes, conventional assessment methods lack precision in evaluating reliable life. After analysing and comparing the one-sided tolerance coefficient method and the two-dimensional one-sided tolerance coefficient method, a small-sample method for assessing reliable life from accelerated tests is proposed under the assumption that equipment life follows a normal distribution. The method pools test data accumulated during earlier development with data from similar equipment, thereby enlarging the data set used to assess reliable life. Engineering applications show that, for a given number of test units, the method at least doubles the assessment accuracy of reliable life compared with the conventional large-sample method.
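A minimal sketch of the one-sided tolerance coefficient calculation on which the abstract builds, for normally distributed life data: the lower limit x̄ - k·s with k from the exact noncentral-t expression. The sample values, reliability, and confidence level are illustrative assumptions; the paper's pooling of historical and similar-equipment data is only indicated in a comment.

```python
import numpy as np
from scipy.stats import nct, norm

def lower_tolerance_limit(x, reliability=0.9, confidence=0.95):
    """One-sided lower tolerance limit x_bar - k*s for normally distributed life data:
    with the stated confidence, at least `reliability` of the population exceeds the
    limit. The tolerance factor k uses the exact noncentral-t expression."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = nct.ppf(confidence, df=n - 1, nc=norm.ppf(reliability) * np.sqrt(n)) / np.sqrt(n)
    return x.mean() - k * x.std(ddof=1), k

# Hypothetical small-sample life data (hours at the accelerated stress level),
# possibly pooled with earlier development and similar-equipment test data
life = [1520., 1610., 1480., 1575., 1555., 1630.]
limit, k = lower_tolerance_limit(life, reliability=0.9, confidence=0.95)
print(f"tolerance factor k = {k:.3f}, reliable life (R=0.9, C=0.95) >= {limit:.0f} h")
```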

14.
Accurate and effective anomaly detection and diagnosis of modern engineering systems through process monitoring ensures the reliability and safety of a product while maintaining the desired quality. In this paper, an innovative method based on Kullback-Leibler divergence for detecting incipient anomalies in highly correlated multivariate data is presented. We use a partial least squares (PLS) method as a modeling framework and a symmetrized Kullback-Leibler distance (KLD) as an anomaly indicator, used to quantify the dissimilarity between the current PLS-based residual distribution and a reference probability distribution obtained from fault-free data. Furthermore, this paper reports the development of two monitoring charts based on the KLD. The first approach is a KLD-Shewhart chart, where the Shewhart monitoring chart with a three-sigma rule is used to monitor the KLD of the response-variable residuals from the PLS model. The second approach integrates the KLD statistic into the exponentially weighted moving average monitoring chart. The performance of the PLS-based KLD anomaly-detection methods is illustrated and compared to that of conventional PLS-based anomaly detection methods. Using synthetic data and simulated distillation column data, we demonstrate the greater sensitivity and effectiveness of the developed method over the conventional PLS-based methods, especially when data are highly correlated and small anomalies are of interest. Results indicate that the proposed chart is a very promising KLD-based method because KLD-based charts are, in practice, designed to detect small shifts in process parameters.
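A hedged sketch of the core statistic described above: fit a PLS model on fault-free data, take the response residuals, and compute a symmetrized KLD between the current residual distribution and the fault-free reference under a Gaussian assumption. The synthetic data generator, the injected shift, and all parameter values are assumptions, not the paper's distillation column simulation.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
p = 6
W = rng.standard_normal((2, p))   # mixing matrix -> highly correlated inputs

def make_data(n, y_bias=0.0):
    """Synthetic, highly correlated process data; y_bias injects a small anomaly."""
    t = rng.standard_normal((n, 2))
    X = t @ W + 0.05 * rng.standard_normal((n, p))
    y = X[:, :3].sum(axis=1) + 0.05 * rng.standard_normal(n) + y_bias
    return X, y

def sym_kld_gauss(m1, s1, m2, s2):
    """Symmetrized Kullback-Leibler distance between two univariate Gaussians."""
    kl = lambda ma, sa, mb, sb: np.log(sb / sa) + (sa**2 + (ma - mb)**2) / (2 * sb**2) - 0.5
    return kl(m1, s1, m2, s2) + kl(m2, s2, m1, s1)

# Reference PLS model and residual distribution from fault-free data
X0, y0 = make_data(500)
pls = PLSRegression(n_components=2).fit(X0, y0)
res0 = y0 - pls.predict(X0).ravel()
m0, s0 = res0.mean(), res0.std(ddof=1)

def kld_statistic(X_new, y_new):
    """KLD between the current residual distribution and the fault-free reference."""
    r = y_new - pls.predict(X_new).ravel()
    return sym_kld_gauss(r.mean(), r.std(ddof=1), m0, s0)

X_ok, y_ok = make_data(100)                  # normal operation
X_an, y_an = make_data(100, y_bias=0.08)     # incipient anomaly (small shift)
print("KLD normal :", round(kld_statistic(X_ok, y_ok), 4))
print("KLD anomaly:", round(kld_statistic(X_an, y_an), 4))
# In the paper's spirit, the KLD statistic would then feed a Shewhart (three-sigma)
# or EWMA monitoring chart built from fault-free KLD values.
```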

15.
Research on and Application of Electromagnetic Radiation Prediction of Rock Burst Based on the Mean Generating Function Model (cited by 1)
Experimental studies of the electromagnetic radiation (EMR) signals emitted during coal and rock deformation and fracture show that the non-contact EMR method can dynamically forecast rock burst and other coal-rock dynamic disasters. Building on these experiments, a new dynamic time series prediction model, the mean generating function model, is proposed. Using the EMR signal time series recorded at the working face by an EMR monitor, a mean generating function equation is constructed through stepwise regression screening of the series; the equation is then used to forecast future EMR signals, and the forecasts are compared with field measurements to verify the model. Error analysis and field practice show that the maximum relative error between the model's predictions and the measured values is about 6.71%, and the correct rate of anomaly trends reaches 60%, demonstrating that combining the mean generating function model with the EMR prediction method can effectively forecast rock burst and improve prediction accuracy, providing a new approach and method for research on EMR-based rock burst prediction.
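A hedged sketch of the mean generating function construction described above: fold the series by each candidate period, average to obtain the mean generating function, extend it periodically over the forecast horizon, and regress the observations on a few of these extension series. The paper's stepwise regression is simplified here to correlation-based selection plus ordinary least squares, and the EMR intensity values are placeholders, not field measurements.

```python
import numpy as np

def mgf_extensions(x, horizon):
    """Mean generating function (MGF) extension series: for each candidate period l,
    fold the series into rows of length l, average the columns, and repeat the
    resulting length-l pattern over the observation window plus the forecast horizon."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ext = {}
    for l in range(1, N // 2 + 1):
        n_l = N // l
        mgf = x[:n_l * l].reshape(n_l, l).mean(axis=0)   # length-l mean generating function
        ext[l] = mgf[np.arange(N + horizon) % l]
    return ext

def mgf_forecast(x, horizon=3, n_predictors=3):
    """Forecast with an MGF regression model; stepwise regression is simplified to
    keeping the extension series most correlated with the observations, then OLS."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    ext = mgf_extensions(x, horizon)
    # rank candidate periods by |correlation| with the observed series (l = 1 is constant)
    scores = {l: abs(np.corrcoef(x, f[:N])[0, 1]) for l, f in ext.items() if l > 1}
    chosen = sorted(scores, key=scores.get, reverse=True)[:n_predictors]
    F = np.column_stack([ext[l] for l in chosen])
    A = np.column_stack([np.ones(N), F[:N]])             # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, x, rcond=None)
    A_new = np.column_stack([np.ones(horizon), F[N:]])
    return A_new @ coef

# Illustrative EMR intensity series (placeholder values, not monitored data)
emr = [42, 45, 50, 47, 52, 58, 55, 61, 66, 63, 69, 75, 72, 78, 84, 81, 88, 94, 91, 97]
print(np.round(mgf_forecast(emr, horizon=3), 1))
```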

16.
Alarm systems are critically important for safe and efficient operation of industrial plants, but many industrial alarm systems suffer from too many nuisance alarms. This paper proposes a method to classify normal and abnormal data segments and to evaluate performance indices for the most commonly used univariate alarm systems. The proposed method consists of three steps. First, piece-wise linear representations are exploited to separate historical data samples of an analog process variable configured with alarms into data segments with the same qualitative trends. Second, data segments are classified into normal, abnormal, and unclassified conditions via a mean hypothesis test; a required assumption is that data segments in normal and abnormal conditions have different mean values that are distinguishable by the alarm thresholds. Third, based on the normal and abnormal data, performance indices of univariate alarm systems are calculated, including two newly formulated ones, the false alarm duration ratio and the missed alarm duration ratio. The effectiveness of the proposed method is illustrated by numerical and industrial examples.
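A minimal sketch of the second and third steps described above (the piecewise-linear segmentation of the first step is assumed to have been done already): each segment is classified by a mean hypothesis test against a high-alarm threshold, and the false alarm and missed alarm duration ratios are then computed from the classified segments. The segments, threshold, and significance level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def classify_segment(seg, threshold, alpha=0.05):
    """Mean hypothesis test of one data segment against a high-alarm threshold.
    Returns 'normal', 'abnormal', or 'unclassified' (mean not distinguishable)."""
    t, p_two = stats.ttest_1samp(seg, popmean=threshold)
    if p_two / 2 < alpha:                 # one-sided test on the mean
        return "abnormal" if t > 0 else "normal"
    return "unclassified"

def duration_ratios(segments, threshold):
    """False-alarm and missed-alarm duration ratios for a high alarm: the share of
    'normal' time spent above the threshold (false alarms) and the share of
    'abnormal' time spent below it (missed alarms)."""
    fa_time = ma_time = normal_time = abnormal_time = 0
    for seg in segments:
        label = classify_segment(seg, threshold)
        if label == "normal":
            normal_time += len(seg)
            fa_time += np.sum(seg > threshold)
        elif label == "abnormal":
            abnormal_time += len(seg)
            ma_time += np.sum(seg <= threshold)
    far = fa_time / normal_time if normal_time else 0.0
    mar = ma_time / abnormal_time if abnormal_time else 0.0
    return far, mar

# Illustrative segments (in practice produced by the piecewise-linear segmentation step)
rng = np.random.default_rng(3)
segments = [rng.normal(50, 2, 200), rng.normal(62, 2, 80), rng.normal(49, 2, 150)]
print(duration_ratios(segments, threshold=58.0))
```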

17.
Problem: Potential conflicts between pedestrians and vehicles represent a challenge to pedestrian safety. Near-crashes are used as a surrogate metric for pedestrian safety evaluations when historical vehicle–pedestrian crash data are not available. One challenge of using near-crash data for pedestrian safety evaluation is the identification of near-crash events. Method: This paper introduces a novel method for pedestrian-vehicle near-crash identification that uses a roadside LiDAR sensor. The trajectory of each road user can be extracted from roadside LiDAR data via several data processing algorithms: background filtering, lane identification, object clustering, object classification, and object tracking. Three indicators, namely the post encroachment time (PET), the proportion of the stopping distance (PSD), and the crash potential index (CPI), are applied for conflict risk classification. Results: The performance of the developed method was evaluated with field-collected data at four sites in Reno, Nevada, United States. The results of the case studies demonstrate that pedestrian-vehicle near-crash events could be identified successfully via the proposed method. Practical applications: The proposed method is especially suitable for pedestrian-vehicle near-crash identification at individual sites. The extracted near-crash events can serve as supplementary material to naturalistic driving study (NDS) data for safety evaluation.
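The abstract names PET, PSD, and CPI as conflict indicators; the sketch below illustrates only a simplified PET computation from two timestamped trajectories, approximating the conflict point by the closest approach of the two paths. The trajectories, the conflict radius, and the 2 s screening threshold are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def pet_from_trajectories(traj_a, traj_b, conflict_radius=1.0):
    """Estimate the post encroachment time (PET) of two road users from their
    timestamped trajectories (arrays of rows [t, x, y]).

    The conflict point is approximated as the pair of positions (one from each
    trajectory) with minimum spatial separation; PET is the gap between the first
    user leaving a small disc around that point and the second user entering it.
    Returns None when the paths never come close enough to define a conflict."""
    A, B = np.asarray(traj_a, float), np.asarray(traj_b, float)
    d = np.linalg.norm(A[:, None, 1:] - B[None, :, 1:], axis=2)   # pairwise distances
    ia, ib = np.unravel_index(np.argmin(d), d.shape)
    conflict = 0.5 * (A[ia, 1:] + B[ib, 1:])
    in_a = np.linalg.norm(A[:, 1:] - conflict, axis=1) <= conflict_radius
    in_b = np.linalg.norm(B[:, 1:] - conflict, axis=1) <= conflict_radius
    if not in_a.any() or not in_b.any():
        return None
    leave_a, enter_b = A[in_a, 0].max(), B[in_b, 0].min()
    leave_b, enter_a = B[in_b, 0].max(), A[in_a, 0].min()
    # PET is measured from whichever user clears the conflict area first
    return enter_b - leave_a if leave_a <= enter_b else enter_a - leave_b

# Illustrative pedestrian (ped) and vehicle (veh) trajectories sampled at 10 Hz
t = np.arange(0, 4, 0.1)
ped = np.c_[t, np.full_like(t, 5.0), 1.2 * t]                   # walking north along x = 5
veh = np.c_[t, 12.0 * (t - 1.5) + 5.0, np.full_like(t, 3.0)]    # driving east along y = 3
pet = pet_from_trajectories(ped, veh)
if pet is not None and pet < 2.0:   # 2 s screening threshold (an assumption)
    print(f"PET = {pet:.2f} s -> pedestrian-vehicle near-crash")
```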

18.
To address the difficulty users face in retrieving professional and comprehensive knowledge about work safety hazards with existing search engines, a data mining method for work safety hazard data and a design method for an intelligent search engine dedicated to work safety hazards are proposed. The results show that, for a user's search, the system can directly return intelligent query results, including the hazard description, its source, the industry and location involved, applicable laws, regulations and standards, related accident information, and the relationships among these data. The examples and result charts given demonstrate the effectiveness of the data mining method, and the description of the system's components, working principles, and implementation demonstrates the effectiveness of the design method, providing a reference for the design and development of intelligent search engines for work safety hazards.

19.
The source data for QRAs are important to assure meaningful risk assessment results, particularly when the result is to be compared against quantitative risk acceptance criteria. The author's company is one of the largest global QRA providers, and we have concluded that the UK HSE Hydrocarbon Release Database (HCRD) provides the basis for the best leak frequency data, as it offers complete leak data collection in a systematic manner, against a known population of equipment and facilities in the UK sector of the North Sea for which there is an accurate parts count estimate. The LEAK program is described. It screens HCRD to remove leak events not associated with full operational inventories and flows (e.g. when isolated for maintenance) and further uses a distribution function that permits frequencies for any arbitrary hole size range to be determined (e.g. 25 mm leaks, 50 mm leaks, full-bore ruptures). An important factor is that leak frequency data are not stationary: offshore operators have improved their control of leak events, and the HCRD shows a declining leak rate over time. DNV often uses frequency modification techniques, termed MOR - Modification of Risk. This paper reviews 4 methods developed by the company internationally. These are the Manager Method, the API 581 method, a barrier-based method, and a proprietary management-system-based method. These all permit localization of UK North Sea data to apply to other facilities (onshore or offshore) and with different management systems and mechanical integrity programs. Overall, localized data using MOR is considered more accurate than direct use of UK North Sea data; however, validation is an issue. There are no direct comparisons of leak statistics over a sufficiently long period with static management systems and integrity programs. Thus MOR techniques remain judgment-based approaches, but they are transparent in methodology and assumptions. The barrier-based modification technique is the most directly verifiable of the four MOR methods presented.
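The abstract notes that LEAK uses a fitted distribution function to give leak frequencies for arbitrary hole-size ranges; the sketch below illustrates that idea only, with an assumed truncated power-law exceedance curve and an assumed total leak frequency, neither of which comes from HCRD or the LEAK program.

```python
import numpy as np

def leak_frequency(total_freq, d1_mm, d2_mm, d_min=1.0, d_max=150.0, m=0.7):
    """Frequency of leaks with hole diameter in [d1, d2), assuming the hole-size
    exceedance curve follows a truncated power law between d_min and d_max.
    The exponent and bounds are illustrative assumptions, not a fitted HCRD curve."""
    def exceed(d):
        d = np.clip(d, d_min, d_max)
        return (d ** -m - d_max ** -m) / (d_min ** -m - d_max ** -m)
    return total_freq * (exceed(d1_mm) - exceed(d2_mm))

# Example: an equipment item with an assumed total leak frequency of 1e-3 per year
for lo, hi in [(1, 25), (25, 50), (50, 150)]:
    print(f"{lo:>3}-{hi:<3} mm: {leak_frequency(1e-3, lo, hi):.2e} per year")
```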

20.
A method is presented for the reliability analysis of complex engineering systems using information from fault tree analysis together with uncertain or imprecise data. Fuzzy logic is a mathematical tool for modelling the inaccuracy and uncertainty of the real world and of human thinking; the method can address the subjective, qualitative, and quantitative uncertainties involved in risk analysis, and risk analysis, with all its inherent uncertainties, is a prime candidate for fuzzy logic application. Fuzzy logic combined with expert elicitation is employed to deal with the vagueness of the data and to generate basic event failure probabilities through qualitative data processing, without reliance on quantitative historical failure data. The proposed model is able to quantify the fault tree of an LPG refuelling facility in the absence or presence of data. This paper also illustrates the use of importance measures in sensitivity analysis. The results demonstrate that the approach is a suitable alternative to the conventional probabilistic reliability approach when quantitative historical failure data are unavailable. The research results can help professionals decide whether and where to take preventive or corrective actions and support informed decision-making in the risk management process.
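A hedged sketch of fuzzy fault tree quantification in the spirit of the abstract: expert-elicited basic event probabilities are represented as triangular fuzzy numbers, propagated through AND/OR gates with the usual approximate triangular arithmetic, and defuzzified by the centroid. The small tree and the numbers are hypothetical, not the paper's LPG refuelling facility model.

```python
import numpy as np

def fuzzy_and(*events):
    """AND gate: component-wise product of triangular fuzzy probabilities (low, mode, high)."""
    out = np.ones(3)
    for e in events:
        out *= np.asarray(e, float)
    return out

def fuzzy_or(*events):
    """OR gate: 1 - prod(1 - p), applied component-wise to the triangular components."""
    out = np.ones(3)
    for e in events:
        out *= 1.0 - np.asarray(e, float)
    return 1.0 - out

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number: (low + mode + high) / 3."""
    return float(np.mean(tfn))

# Hypothetical expert-elicited basic events (triangular fuzzy probabilities)
hose_rupture = (5e-4, 2e-3, 5e-3)
valve_leak   = (1e-3, 3e-3, 8e-3)
ignition     = (1e-2, 3e-2, 6e-2)

release = fuzzy_or(hose_rupture, valve_leak)   # release occurs via either leak path
top = fuzzy_and(release, ignition)             # fire/explosion needs release AND ignition
print("fuzzy top-event probability:", top)
print("defuzzified (centroid): %.2e" % defuzzify(top))
```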
