
Fault Diagnosis Based on RBF Neural Networks

Abstract: An RBF neural network is a radial basis function (Radial Basis Function) neural network. It is an efficient feed-forward neural network with best-approximation and global-optimum properties that other feed-forward networks lack, together with a simple structure and fast training. It is also a network model widely applicable to pattern recognition, nonlinear function approximation, and related fields. Using the Matlab Neural Network Toolbox, fault diagnosis of gearbox gears is simulated, and both an RBF neural network and a BP neural network are built to perform the diagnosis. Comparing the diagnostic results shows that the RBF network outperforms the BP network in both diagnostic accuracy and speed, indicating that the RBF network is accurate and reliable for gear fault diagnosis and has broad application prospects in mechanical fault diagnosis.

Keywords: neural network; fault diagnosis; Matlab Neural Network Toolbox; RBF network

Introduction

Because of the special operating conditions and environment of automobiles, and the frequent gear shifting during driving, the gearbox is prone to failure. Common failures include:

1. Abnormal noise. Causes: (1) excessive gear backlash; (2) worn, loose bearings; (3) wear and looseness between the sliding-key grooves of the shift gears and the sliding-key shaft; (4) oil leaking from bearings or the housing so that the lubricating oil is reduced, or the drain plug at the bottom of the gearbox falling out so that the oil leaks away entirely; (5) small metal parts getting into the gearbox; (6) lubricating oil of unsuitable viscosity or poor quality; (7) excessive clearance between gears and shafts causing looseness; (8) loose bolts between the transmission and the flywheel housing; (9) loose bolts between the engine and the flywheel housing.

2. Causes: (1) the engaging gear, the gear being engaged, and the coupling-sleeve teeth worn conical or chipped along the tooth length; (2) the shift-fork lock screw loose, the fork deformed, or the fork jaws worn; (3) worn shift-rail detent grooves, worn detent balls, and detent springs weakened or broken; (4) worn, loose bearings; (5) excessive gear backlash.

3. Causes: (1) insufficient or unsuitable lubricating oil, causing gear wear; (2) sand and dirt mixed into the gearbox, causing gear wear; (3) a deformed countershaft; (4) loose countershaft bearings, causing the meshing teeth to knock; (5) the needle bearing of the second-shaft constant-mesh gear shattered, or its locating circlip broken and even squeezed between two meshing gears.

Statistics show that gear failure accounts for 10% of all automobile gearbox faults. The main forms of gear failure are tooth-root cracks and broken teeth caused by bending fatigue. As automotive technology develops, fault diagnosis of the gearbox, and of its gears in particular, is therefore becoming increasingly important.

Gears are the main basic transmission components of the automotive industry; a typical automobile contains 18 to 30 geared parts, and gear quality directly affects a vehicle's noise, smoothness, and service life. Gear-cutting machine tools are complex machine-tool systems and key equipment for the automotive industry; the world's major automobile-manufacturing countries, such as the United States, Germany, and Japan, are also the major producers of gear-cutting machine tools.

According to statistics, more than 80% of China's automotive gears are machined on domestically made gear-cutting equipment, and the automotive industry consumes more than 60% of gear-cutting machine tools; it will remain the main consumer of machine tools.

Diagnosing and analyzing common faults makes it possible to detect them early and prevent further deterioration. After years of development, fault-diagnosis technology has entered an intelligent stage. Many methods are now used for diagnosing automobile transmission gear faults, such as wear-debris analysis, vibration monitoring, acoustic emission, fiber-optic sensing, and artificial neural networks.

An artificial neural network is a complex network system formed by a large number of simple processing units (neurons) that are extensively interconnected. In intelligent fault diagnosis, artificial neural networks are applied mainly in two ways: first, from the pattern-recognition perspective, perceptron-style network models or various associative-memory models are used to realize the nonlinear mapping from the symptom set to the fault set; second, from the expert-system perspective, neural-network-based fault-diagnosis expert systems are built. This paper explores the application of neural networks to fault diagnosis. For complex systems that are difficult to model mathematically in practice, a trained network can store knowledge about the process and learn directly from historical fault data, and its noise-filtering ability makes it suitable for on-line monitoring and diagnosis, with the capacity to distinguish fault causes and types. In recent years RBF neural networks have been widely applied to mechanical fault diagnosis, for example in automobile engines, compressors, hydraulic turbines, and internal-combustion engines, where they identify fault types and causes accurately and quickly and help detect and remove faults early. In practice many causes can trigger a fault, and different faults sometimes show similar symptoms. Given the nonlinear relationship between fault causes and symptoms, RBF-based diagnosis can identify fault types and causes accurately and quickly, which is important for improving safety and economy.

This paper presents a basic method for applying RBF networks to gearbox gear fault diagnosis: the Matlab Neural Network Toolbox is used to simulate the diagnosis, and an RBF neural network is built to perform it.

1 Principles of Neural Network Fault Diagnosis

A neural network is a mathematical model that processes information with a structure resembling the synaptic connections of the brain; in engineering and academia it is often simply called a neural network. It is a computational model consisting of a large number of interconnected nodes (neurons). Each node represents a particular output function, called an activation function. Each connection between two nodes carries a weight applied to the signal passing through it; these weights constitute the network's memory. The network's output depends on its connection topology, its weight values, and its activation functions. The network itself is usually an approximation of some natural algorithm or function, or an expression of a logical strategy.

Figure 1-1 Neuron model structure

As shown in the figure:

a1 to an are the components of the input vector;

w1 to wn are the synaptic weights of the neuron;

b is the bias;

f is the transfer function, usually nonlinear (hardlim() is assumed below);

t is the neuron output.

In mathematical form, t = f(WA' + b), where

W is the weight vector;

A is the input vector, and A' is its transpose;

b is the bias;

f is the transfer function.

Thus the function of a single neuron is to take the inner product of the input vector and the weight vector and pass it through a nonlinear transfer function to obtain a scalar result.

The role of a single neuron is to divide an n-dimensional vector space into two parts with a hyperplane (the decision boundary); given an input vector, the neuron determines on which side of the hyperplane it lies.

The equation of this hyperplane is Wp + b = 0, where W is the weight vector, b is the bias, and p is a vector on the hyperplane.
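The neuron model above can be sketched in a few lines. This is a minimal illustration in Python (the paper's own code is MATLAB) of t = f(WA' + b) with the hardlim step function as f; the function names are ours, not from any library.

```python
def hardlim(n):
    # hardlim step function: 1 if the net input is >= 0, else 0
    return 1 if n >= 0 else 0

def neuron(W, A, b):
    # inner product of weight vector W and input vector A, plus bias b,
    # passed through the nonlinear transfer function
    n = sum(w * a for w, a in zip(W, A)) + b
    return hardlim(n)

# The decision boundary is the hyperplane W.p + b = 0: inputs on one
# side output 1, inputs on the other side output 0.
print(neuron([1.0, -1.0], [2.0, 0.5], -1.0))  # 2 - 0.5 - 1 = 0.5 >= 0, so 1
print(neuron([1.0, -1.0], [0.0, 0.5], -1.0))  # 0 - 0.5 - 1 = -1.5 < 0, so 0
```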

An artificial neural network is a nonlinear, adaptive information-processing system composed of a large number of interconnected processing units. Proposed on the basis of modern neuroscience, it attempts to process information by simulating how the brain's neural networks process and memorize it. Artificial neural networks have four basic characteristics:

(1) Nonlinearity. Nonlinear relationships are ubiquitous in nature, and the brain's intelligence is itself a nonlinear phenomenon. An artificial neuron is in one of two states, activated or inhibited, which is mathematically a nonlinear behavior. Networks built from thresholded neurons perform better, with improved fault tolerance and storage capacity.

(2) Non-locality. A neural network usually consists of many widely connected neurons. A system's overall behavior depends not only on the characteristics of individual neurons but possibly mainly on the interactions and connections between units; the large number of connections simulates the brain's non-locality. Associative memory is a typical example.

(3) Non-stationarity. Artificial neural networks are adaptive, self-organizing, and capable of learning. Not only can the information they process vary, but the nonlinear dynamical system itself keeps changing while processing it; iterative processes are often used to describe the evolution of such systems.

(4) Non-convexity. Under certain conditions, the evolution of a system is governed by a particular state function, for example an energy function, whose extrema correspond to relatively stable states. Non-convexity means this function has multiple extrema, so the system has multiple relatively stable equilibria, leading to diverse evolutionary paths.

In an artificial neural network, the processing units can represent different objects such as features, letters, concepts, or meaningful abstract patterns. They fall into three types: input units, which receive signals and data from the outside world; output units, which deliver the system's results; and hidden units, which lie between the input and output units and cannot be observed from outside the system. The connection weights between neurons reflect the connection strengths between units, and the representation and processing of information are embodied in the connections among the processing units. An artificial neural network is a non-programmed, adaptive, brain-style form of information processing; its essence is to obtain a parallel, distributed information-processing capability through the network's transformations and dynamics, imitating the information processing of the human nervous system to varying degrees and at various levels. It is an interdisciplinary subject spanning neuroscience, cognitive science, artificial intelligence, and computer science.

Artificial neural networks are parallel distributed systems. They adopt mechanisms entirely different from traditional artificial intelligence and information processing, overcome the shortcomings of traditional logic-symbol-based AI in handling intuitive and unstructured information, and are adaptive, self-organizing, and capable of real-time learning.

2 RBF Neural Networks

Radial basis function (RBF) neural networks have been applied successfully in pattern recognition, largely thanks to the properties of radial basis functions themselves. Applying an RBF neural network to gearbox gear fault diagnosis yields satisfactory results: the experiments show that RBF networks offer fast computation, high recognition rates, and simple algorithms, and that the classifier's performance does not degrade noticeably when the number of training samples is reduced. Unlike some earlier papers, this work focuses on analyzing the radial basis function itself theoretically and compares the RBF network with the back-propagation (BP) network and the support vector machine to highlight its advantages, providing a reference for choosing a suitable learning machine in practice.

An RBF network is a three-layer feed-forward network consisting of an input layer, a hidden layer, and an output layer. The mapping from input to output is nonlinear, while the mapping from the hidden-layer space to the output space is linear. The former is a nonlinear optimization problem whose solution methods are relatively complex; the main learning schemes currently available are random selection of RBF centers (direct computation), unsupervised selection of centers (K-means clustering), supervised selection of centers (gradient descent), and orthogonal least squares (OLS). This paper mainly uses the first method. The RBF network structure is shown in the figure below.

Figure 2-2 RBF network structure (input vector x; hidden layer of N basis functions; output layer)

When the training sample $X_k$ is fed to the network, the actual output of the $j$-th output neuron is

$y_{kj}(X_k) = \sum_{i=1}^{N} w_{ij}\,\varphi(X_k, X_i)$  (Eq. 2-1)

The "basis function" is generally chosen as a Green's function; when the Green's function is Gaussian:

$\varphi(X_k, X_i) = G(X_k, X_i) = G(\lVert X_k - X_i \rVert) = \exp\!\left(-\frac{1}{2\sigma_i^2}\sum_{m=1}^{M}(x_{km} - x_{im})^2\right)$  (Eq. 2-2)

The RBF network training process determines from the experimental samples the weights $w$ between the hidden and output layers; the final weights are $w = \varphi^{-1} y$.

In RBF network training, determining the number of hidden-layer neurons is the key step; it is usually taken equal to the number of input vectors. However, when there are many input vectors, too many hidden units complicate the network structure and lengthen training. An improved method has therefore been proposed: training starts with 0 neurons, and the network adds neurons automatically by checking the output error. In each cycle, the input vector corresponding to the largest network error is taken as the weight vector $w$ of a new hidden neuron; the error of the new network is then checked, and the process repeats until the error requirement is met or the maximum number of hidden neurons is reached.
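The exact-design case described above, where every training input becomes a center and the output weights follow from $w = \varphi^{-1}y$, can be sketched compactly. This is a hedged Python illustration (the paper itself works in MATLAB) on a tiny 1-D example; the helper names and the toy data are ours.

```python
import math

def gaussian(xk, xi, sigma=1.0):
    # phi(Xk, Xi) = exp(-||Xk - Xi||^2 / (2*sigma^2)), as in Eq. 2-2
    d2 = sum((a - b) ** 2 for a, b in zip(xk, xi))
    return math.exp(-d2 / (2 * sigma ** 2))

def solve(A, y):
    # naive Gauss-Jordan elimination with partial pivoting; enough to
    # invert the small Phi matrix of this toy example
    n = len(A)
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

# exact design: every training input is a center, and w = Phi^-1 y
X = [(0.0,), (1.0,), (2.0,)]
y = [0.0, 1.0, 0.0]
Phi = [[gaussian(xk, xi) for xi in X] for xk in X]
w = solve(Phi, y)

def rbf_out(x):
    # y(x) = sum_i w_i * phi(x, X_i), as in Eq. 2-1 (single output)
    return sum(wi * gaussian(x, xi) for wi, xi in zip(w, X))

for xk in X:
    print(round(rbf_out(xk), 6))  # reproduces the training targets 0, 1, 0
```

Because the weights solve the interpolation system exactly, the network reproduces every training target; this is why the `newrb`-style exact design used later drives the training error essentially to zero.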

3 MATLAB Neural Network Toolbox

The Neural Network Toolbox is one of the toolboxes developed for the MATLAB environment. Built on neural-network theory, it uses the MATLAB scripting language to construct the activation functions of typical neural networks, turning the computation of a chosen network's output into calls to activation functions. It also provides a variety of learning algorithms and more than 170 related toolbox functions, allowing the design, analysis, and computation of neural-network applications to be carried out intuitively and conveniently. The MATLAB Neural Network Toolbox has further broadened the application space of neural networks.

The toolbox provides a rich set of functions for building networks. Networks of different types can be constructed as needed, and different transfer functions and algorithms can be applied to improve and optimize a design. Training samples are used to make the network learn, diagnosis samples are then used to validate the resulting network, and the toolbox's network design and training routines can be called as required.

4 Experimental Analysis

4.1 RBF neural network training process

Table 1 lists gear-meshing-frequency sample data from an automobile gearbox, all normalized: 9 samples in total, covering 3 fault modes, with 15 feature parameters per sample. An RBF network is built with the neural network toolbox provided by MATLAB and trained with the learning samples in Table 1. The network is therefore designed as follows: 15 input-layer neurons and 3 output-layer neurons; the number of hidden-layer neurons is not fixed and is adjusted through actual training. Since the gears exhibit 3 fault modes, the outputs are coded as: no fault (1,0,0); tooth-root crack (0,1,0); broken tooth (0,0,1).
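The output coding just described maps each fault mode to a one-hot vector. A minimal Python sketch (the English label strings are our translations of the paper's fault names):

```python
# the three gear states, in the order used by the output coding above
FAULTS = ["no fault", "tooth-root crack", "broken tooth"]

def encode(state):
    # one-hot target vector, e.g. "broken tooth" -> [0, 0, 1]
    return [1 if f == state else 0 for f in FAULTS]

print(encode("no fault"))          # [1, 0, 0]
print(encode("tooth-root crack"))  # [0, 1, 0]
print(encode("broken tooth"))      # [0, 0, 1]
```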

Table 1 Gearbox sample data (15 feature values per sample; gear state in parentheses)

No. 1 (no fault): 0.2286, 0.1292, 0.0720, 0.1592, 0.1335, 0.0733, 0.1159, 0.0940, 0.0522, 0.1345, 0.0090, 0.1260, 0.3619, 0.0690, 0.1828

No. 2 (no fault): 0.2090, 0.0947, 0.1393, 0.1387, 0.2558, 0.0900, 0.0771, 0.0882, 0.0393, 0.1430, 0.0126, 0.1670, 0.2450, 0.0508, 0.1328

No. 3 (no fault): 0.0442, 0.0880, 0.1147, 0.0563, 0.3347, 0.1150, 0.1453, 0.0429, 0.1818, 0.0378, 0.0092, 0.2251, 0.1516, 0.0858, 0.0670

No. 4 (tooth-root crack): 0.2603, 0.1715, 0.0702, 0.2711, 0.1491, 0.1330, 0.0968, 0.1911, 0.2545, 0.0871, 0.0060, 0.1793, 0.1002, 0.0789, 0.0909

No. 5 (tooth-root crack): 0.3690, 0.2222, 0.0562, 0.5157, 0.1872, 0.1614, 0.1425, 0.1506, 0.1310, 0.0500, 0.0078, 0.0348, 0.0451, 0.0707, 0.0880

No. 6 (tooth-root crack): 0.0359, 0.1149, 0.1230, 0.5460, 0.1977, 0.1248, 0.0624, 0.0832, 0.1640, 0.1002, 0.0059, 0.1503, 0.1837, 0.1295, 0.0700

No. 7 (broken tooth): 0.1759, 0.2347, 0.1829, 0.1811, 0.2922, 0.0655, 0.0774, 0.2273, 0.2056, 0.0925, 0.0078, 0.1852, 0.3501, 0.1680, 0.2668

No. 8 (broken tooth): 0.0724, 0.1909, 0.1340, 0.2409, 0.2842, 0.0450, 0.0824, 0.1064, 0.1909, 0.1586, 0.0116, 0.1698, 0.3644, 0.2718, 0.2494

No. 9 (broken tooth): 0.2634, 0.2258, 0.1165, 0.1154, 0.1074, 0.0657, 0.0610, 0.2623, 0.2588, 0.1155, 0.0050, 0.0978, 0.1511, 0.2273, 0.3220

The function newrb is used to create an exact neural network; when creating the RBF network, it chooses the number of hidden-layer nodes automatically so that the error is 0. The code is:

net=newrb(x,y);

where x is the input vectors and y the target vectors, both taken from Table 1. Since building the network is itself the training process, the resulting network is already trained. The next step is to verify the network's predictive performance. The code is:

ty= sim(net,tx)

where tx is the network's test samples.

Table 2 Gearbox test data (15 feature values per sample; gear state in parentheses)

No. 1 (no fault): 0.2101, 0.0950, 0.1298, 0.1359, 0.2601, 0.1001, 0.0753, 0.0890, 0.0389, 0.1451, 0.0128, 0.1590, 0.2452, 0.0512, 0.1319

No. 2 (tooth-root crack): 0.2593, 0.1800, 0.0711, 0.2801, 0.1501, 0.1298, 0.1001, 0.1891, 0.2531, 0.0875, 0.0058, 0.1803, 0.0992, 0.0802, 0.1002

No. 3 (broken tooth): 0.2599, 0.2235, 0.1201, 0.1171, 0.1102, 0.0683, 0.0621, 0.2597, 0.2602, 0.1167, 0.0048, 0.1002, 0.1521, 0.2281, 0.3205

4.2 Fault diagnosis inference

The three-layer RBF neural network performs fault diagnosis with a data-driven forward-inference strategy: starting from the initial state, it reasons forward until the goal state is reached. For the diagnosis, the 3 new data groups shown in Table 2 are taken as inputs to test the trained network. The test result is:

ty =

1.0054 -0.0431 -0.0127

0.0003 1.0249 0.0164

-0.0057 0.0182 0.9963

That is, when the first test sample (no fault) is input, the network outputs ty1 = (1.0054, 0.0003, -0.0057), so the diagnosis is no fault; when the second test sample (tooth-root crack) is input, the output is ty2 = (-0.0431, 1.0249, 0.0182), so the diagnosis is a tooth-root crack; likewise, when the third test sample (broken tooth) is input, the output is ty3 = (-0.0127, 0.0164, 0.9963), so the diagnosis is a broken tooth.
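The diagnosis rule above amounts to taking the largest of the three output components for each test sample. A small Python sketch using the outputs reported in the text (MATLAB's sim returns one column per sample, so the printed rows are output neurons and the columns are samples; the label strings are our translations):

```python
FAULTS = ["no fault", "tooth-root crack", "broken tooth"]

# network outputs as printed above: rows = output neurons, columns = samples
ty = [
    [ 1.0054, -0.0431, -0.0127],
    [ 0.0003,  1.0249,  0.0164],
    [-0.0057,  0.0182,  0.9963],
]

def diagnose(ty_matrix):
    # transpose so each entry is one sample's 3-component output,
    # then pick the fault whose output component is largest
    return [FAULTS[max(range(3), key=lambda i: col[i])]
            for col in zip(*ty_matrix)]

print(diagnose(ty))  # ['no fault', 'tooth-root crack', 'broken tooth']
```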

5 Conclusion

The simulation experiments show that the RBF neural network is a nonlinear approximation network with good performance that identifies fault types very accurately. During training, with the same input and output nodes and the same target sum of squared errors, the RBF network converges markedly faster than an optimized BP network; it reduces learning time and complexity and is less prone to local minima. The comparison shows that RBF-based fault diagnosis of gearbox gears is feasible, and that the RBF network diagnoses faster and more accurately than the BP network, making it better suited to fault diagnosis. This diagnosis method applies not only to gearbox gears but equally to diesel engines, large rotating machinery, and other equipment, and thus has broad application prospects.

Appendix: MATLAB source code

clc;

close all;

% 9 groups of sample data (rows), 15 features each:
a=[0.2286,0.1292,0.0720,0.1592,0.1335,0.0733,0.1159,0.0940,0.0522,0.1345,0.0090,0.1260,0.3619,0.0690,0.1828;
   0.2090,0.0947,0.1393,0.1387,0.2558,0.0900,0.0771,0.0882,0.0393,0.1430,0.0126,0.1670,0.2450,0.0508,0.1328;
   0.0442,0.0880,0.1147,0.0563,0.3347,0.1150,0.1453,0.0429,0.1818,0.0378,0.0092,0.2251,0.1516,0.0858,0.0670;
   0.2603,0.1715,0.0702,0.2711,0.1491,0.1330,0.0968,0.1911,0.2545,0.0871,0.0060,0.1793,0.1002,0.0789,0.0909;
   0.3690,0.2222,0.0562,0.5157,0.1872,0.1614,0.1425,0.1506,0.1310,0.0500,0.0078,0.0348,0.0451,0.0707,0.0880;
   0.0359,0.1149,0.1230,0.5460,0.1977,0.1248,0.0624,0.0832,0.1640,0.1002,0.0059,0.1503,0.1837,0.1295,0.0700;
   0.1759,0.2347,0.1829,0.1811,0.2922,0.0655,0.0774,0.2273,0.2056,0.0925,0.0078,0.1852,0.3501,0.1680,0.2668;
   0.0724,0.1909,0.1340,0.2409,0.2842,0.0450,0.0824,0.1064,0.1909,0.1586,0.0116,0.1698,0.3644,0.2718,0.2494;
   0.2634,0.2258,0.1165,0.1154,0.1074,0.0657,0.0610,0.2623,0.2588,0.1155,0.0050,0.0978,0.1511,0.2273,0.3220];

b=[1,0,0;1,0,0;1,0,0;0,1,0;0,1,0;0,1,0;0,0,1;0,0,1;0,0,1];

x=a' % 15-dimensional input vectors of the RBF network (one column per sample)
y=b' % 3-dimensional target vectors of the RBF network

net=newrb(x,y,0.001,0.9,15,1); % net = newrb(P,T,GOAL,SPREAD,MN,DF)
% P: input vectors; T: target vectors; GOAL: mean squared error goal (default 0);
% SPREAD: spread of the radial basis functions (default 1); MN: maximum number
% of neurons; DF: number of neurons added between displays.

A = sim(net,x);
E = y - A;
MSE=mse(E)

% 3 groups of test data
ta=[0.2101,0.0950,0.1298,0.1359,0.2601,0.1001,0.0753,0.0890,0.0389,0.1451,0.0128,0.1590,0.2452,0.0512,0.1319;
    0.2593,0.1800,0.0711,0.2801,0.1501,0.1298,0.1001,0.1891,0.2531,0.0875,0.0058,0.1803,0.0992,0.0802,0.1002;
    0.2599,0.2235,0.1201,0.1171,0.1102,0.0683,0.0621,0.2597,0.2602,0.1167,0.0048,0.1002,0.1521,0.2281,0.3205];

tx=ta'

ty=sim(net,tx) % simulation output

ty =

    1.0073   -0.0324   -0.0089

   -0.0047    0.9938    0.0047

   -0.0026    0.0386    1.0042

NEWRB, neurons = 0, SSE = 3.61745
NEWRB, neurons = 2, SSE = 2.75572
NEWRB, neurons = 3, SSE = 1.22387
NEWRB, neurons = 4, SSE = 0.537735
NEWRB, neurons = 5, SSE = 0.179913
NEWRB, neurons = 6, SSE = 0.0921845
NEWRB, neurons = 7, SSE = 0.0359042
NEWRB, neurons = 8, SSE = 3.15544e-029

MSE = 1.1687e-030

BP network program

clc;

clear;

close all;

% 9 groups of sample data (rows), 15 features each:
a=[0.2286,0.1292,0.0720,0.1592,0.1335,0.0733,0.1159,0.0940,0.0522,0.1345,0.0090,0.1260,0.3619,0.0690,0.1828;
   0.2090,0.0947,0.1393,0.1387,0.2558,0.0900,0.0771,0.0882,0.0393,0.1430,0.0126,0.1670,0.2450,0.0508,0.1328;
   0.0442,0.0880,0.1147,0.0563,0.3347,0.1150,0.1453,0.0429,0.1818,0.0378,0.0092,0.2251,0.1516,0.0858,0.0670;
   0.2603,0.1715,0.0702,0.2711,0.1491,0.1330,0.0968,0.1911,0.2545,0.0871,0.0060,0.1793,0.1002,0.0789,0.0909;
   0.3690,0.2222,0.0562,0.5157,0.1872,0.1614,0.1425,0.1506,0.1310,0.0500,0.0078,0.0348,0.0451,0.0707,0.0880;
   0.0359,0.1149,0.1230,0.5460,0.1977,0.1248,0.0624,0.0832,0.1640,0.1002,0.0059,0.1503,0.1837,0.1295,0.0700;
   0.1759,0.2347,0.1829,0.1811,0.2922,0.0655,0.0774,0.2273,0.2056,0.0925,0.0078,0.1852,0.3501,0.1680,0.2668;
   0.0724,0.1909,0.1340,0.2409,0.2842,0.0450,0.0824,0.1064,0.1909,0.1586,0.0116,0.1698,0.3644,0.2718,0.2494;
   0.2634,0.2258,0.1165,0.1154,0.1074,0.0657,0.0610,0.2623,0.2588,0.1155,0.0050,0.0978,0.1511,0.2273,0.3220];

b=[1,0,0;1,0,0;1,0,0;0,1,0;0,1,0;0,1,0;0,0,1;0,0,1;0,0,1];

p=a' % 15-dimensional input vectors of the BP network (one column per sample)
t=b' % 3-dimensional target vectors of the BP network

size(p)
size(t)

net_1=newff(minmax(p),[10,3],{'tansig','purelin'},'traingdm')

inputWeights=net_1.IW{1,1}
inputbias=net_1.b{1}
layerWeights=net_1.LW{2,1}
layerbias=net_1.b{2}

net_1.trainParam.show = 50;
net_1.trainParam.lr = 0.05;
net_1.trainParam.mc = 0.9;
net_1.trainParam.epochs = 10000;
net_1.trainParam.goal = 1e-3;

[net_1,tr]=train(net_1,p,t);

A = sim(net_1,p);
E = t - A;
MSE=mse(E)

% 3 groups of test data
ta=[0.2101,0.0950,0.1298,0.1359,0.2601,0.1001,0.0753,0.0890,0.0389,0.1451,0.0128,0.1590,0.2452,0.0512,0.1319;
    0.2593,0.1800,0.0711,0.2801,0.1501,0.1298,0.1001,0.1891,0.2531,0.0875,0.0058,0.1803,0.0992,0.0802,0.1002;
    0.2599,0.2235,0.1201,0.1171,0.1102,0.0683,0.0621,0.2597,0.2602,0.1167,0.0048,0.1002,0.1521,0.2281,0.3205];

tx=ta'

ty=sim(net_1,tx) % simulation output

ty =

    1.0257    0.0003    0.1151

   -0.0417    0.9263   -0.0324

   -0.0267   -0.0977    0.9535

MSE = 9.9944e-004
