Application of Bayesian neural network in electrical impedance tomography
Mining and analysis of adverse drug event (ADE) signals for liver failure in minors based on the US FAERS database. LI Bing 1*, LIANG Li 1, CHEN Yan 1, GUO Yuhang 1, LIU Xia 2, GUO Jinmin 1# (1. Department of Clinical Pharmacy, the 960th Hospital of the PLA Joint Logistics Support Force, Jinan 250031, China; 2. Department of Clinical Pharmacy, School of Pharmacy, Naval Medical University, Shanghai 200433, China). CLC number R969.3; R994.1. Document code A. Article ID 1001-0408(2023)17-2144-05. DOI 10.6039/j.issn.1001-0408.2023.17.17. ABSTRACT Objective: To mine the data on drugs causing liver failure in minors in the US FDA Adverse Event Reporting System (FAERS) database, so as to provide a reference for the rational clinical use of the drugs involved.
Methods: ADE reports of liver failure in minors (under 18 years of age) from Q1 2013 to Q3 2022 were retrieved from the FAERS database and analyzed. Patients were grouped by age as infants (≤1 year), toddlers (>1 to <6 years), children (6 to <12 years) and adolescents (12 to <18 years). ADE signals were screened with three disproportionality methods: the reporting odds ratio (ROR), the proportional reporting ratio (PRR) and the Bayesian confidence propagation neural network (BCPNN).
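The ROR screen mentioned above rests on simple 2×2 contingency arithmetic. As a minimal illustration (not the authors' code), the reporting odds ratio with its 95% confidence interval can be computed as follows; the counts a, b, c, d are hypothetical:

```python
import math

def ror_with_ci(a, b, c, d):
    """Reporting odds ratio (ROR) from a 2x2 contingency table.

    a: reports with the target drug AND the target ADE
    b: reports with the target drug, other ADEs
    c: reports with other drugs AND the target ADE
    d: reports with other drugs, other ADEs
    """
    ror = (a * d) / (b * c)
    # 95% CI on the log scale
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(ror) - 1.96 * se)
    hi = math.exp(math.log(ror) + 1.96 * se)
    return ror, lo, hi

# Hypothetical counts, not taken from the FAERS data described above
ror, lo, hi = ror_with_ci(31, 1500, 1020, 980000)
# A signal is typically flagged when the lower CI bound exceeds 1
print(round(ror, 2), round(lo, 2), round(hi, 2))
```

The PRR and BCPNN measures are built from the same four counts, differing only in the statistic computed from them.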
Results: A total of 1 051 ADE reports of liver failure in minors were collected, involving 60 drugs.
Adolescents accounted for the largest share of the liver failure reports (410 cases, 39.01%), followed by toddlers (297 cases, 28.26%). For 14 drugs, the package insert did not mention the risk of hepatobiliary injury or liver failure: levetiracetam (31 cases, 2.95%), metronidazole (18 cases, 1.71%), topiramate and methylprednisolone (16 cases each, 1.52% each), dexamethasone (12 cases, 1.14%), tisagenlecleucel (11 cases, 1.05%), ferrous sulfate, metformin and busulfan (10 cases each, 0.95% each), propofol (9 cases, 0.86%), onasemnogene abeparvovec (8 cases, 0.76%), diphenhydramine and omeprazole (5 cases each, 0.48% each) and sebelipase alfa (4 cases, 0.38%), totaling 165 cases, or 15.70% of all reports.
doi: 10.3969/j.issn.2095-4468.2022.02.201
Energy Efficiency Improvement Strategy for a Water Chiller Using an Artificial Neural Network Algorithm
ZHANG Menghua 1, ZHOU Zhenxin 1, LIU Nian 2, HAN Linzhi 1, CHEN Huanxin *1 (1. School of Energy and Power Engineering, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China; 2. Shanghai Dieteng Network Technology Co., Ltd., Shanghai 200000, China)
[Abstract] The delay introduced by a chiller's water storage tank makes feedback regulation of the system untimely and thereby increases building energy consumption. To address this problem, this paper proposes an artificial-neural-network model for predicting the chiller's return water temperature. The modeling procedure comprises four main steps: using expert knowledge to select variables, building the model, training it and testing it; the trained model is then used to predict the return water temperature at time t ahead. The results show that adding 3 expert variables to the original 11 related variables gives all four models high correlation coefficients for the predicted return water temperature. Moreover, compared with measurements taken 10, 15 or 20 min earlier, the parameters measured 5 min earlier predict the current return water temperature more accurately, with a mean square error of 0.106 9 and an R² of 0.992 3. This information is used for feedback regulation of the system, mitigating the extra energy consumption caused by the storage-tank delay.
[Keywords] Chiller; Neural network; Cooling water return temperature; Reservoir delay time; Cooling tower fan power
CLC number: TQ051.5; TK173. Document code: A.
*CHEN Huanxin (1964–), male, professor, Ph.D.
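The core of the scheme above is pairing the current return water temperature with variables measured a fixed interval earlier. A minimal sketch of that lagged-feature setup, assuming synthetic data and a plain least-squares baseline rather than the paper's ANN (all variable names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the measurements: one row per minute; columns might be
# chilled-water flow, cooling load, fan power, etc. (hypothetical variables).
n_minutes, n_vars = 600, 4
X = rng.normal(size=(n_minutes, n_vars))

# Return water temperature responding, with noise, to conditions 5 min earlier
true_w = np.array([0.8, -0.5, 0.3, 1.1])
lag = 5
t_return = np.full(n_minutes, np.nan)
t_return[lag:] = X[:-lag] @ true_w + 0.05 * rng.normal(size=n_minutes - lag)

# Pair each temperature with the measurements taken `lag` minutes before it
X_lagged = X[:-lag]
y = t_return[lag:]

# Ordinary least-squares baseline (the paper itself trains an ANN instead)
w, *_ = np.linalg.lstsq(X_lagged, y, rcond=None)
pred = X_lagged @ w
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 of the 5-min-lag baseline: {r2:.3f}")
```

Repeating the same construction with lag = 10, 15 or 20 and comparing R² values mirrors the paper's comparison of candidate delay times.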
APPLICATION OF BAYESIAN REGULARIZED BP NEURAL NETWORK MODEL FOR TREND ANALYSIS, ACIDITY AND CHEMICAL COMPOSITION OF PRECIPITATION IN NORTH CAROLINA

MIN XU 1, GUANGMING ZENG 1,2,*, XINYI XU 1, GUOHE HUANG 1,2, RU JIANG 1 and WEI SUN 2
(1 College of Environmental Science and Engineering, Hunan University, Changsha 410082, China; 2 Sino-Canadian Center of Energy and Environment Research, University of Regina, Regina, SK, S4S 0A2, Canada; * author for correspondence, e-mail: zgming@, ykxumin@, Tel.: 86-731-882-2754, Fax: 86-731-882-3701)

(Received 1 August 2005; accepted 12 December 2005)

Abstract. A Bayesian regularized back-propagation neural network (BRBPNN) was developed for trend analysis of the acidity and chemical composition of precipitation in North Carolina, using precipitation chemistry data from NADP. The study addressed two BRBPNN application problems. (i) The relationship between precipitation acidity (pH) and the other ions (NH4+, NO3-, SO4^2-, Ca2+, Mg2+, K+, Cl- and Na+) was modeled by BRBPNN, and the optimal network structure obtained was 8-15-1. The relative importance index, computed from the sum of squared weights between each input neuron and the hidden layer of the BRBPNN (8-15-1), indicated that the ions' contributions to the acidity decline in the order NH4+ > SO4^2- > NO3-. (ii) BRBPNN was also applied to the temporal variation of the monthly mean NH4+, SO4^2- and NO3- concentrations; the optimal architectures for the 1990–2003 data were 4-6-1, 4-6-1 and 4-4-1, respectively. All results from the optimal BRBPNNs showed that the relationship between the acidity and the other ions, and that between the NH4+, SO4^2- and NO3- concentrations and the precipitation amount and time variables, are clearly nonlinear: in contrast to multiple linear regression (MLR), BRBPNN predicted with smaller error and higher correlation coefficients. The results also showed that BRBPNN selects its regularization parameters automatically, which ensures excellent fitting and
robustness. Thus this study lays a foundation for the application of BRBPNN to the analysis of acid precipitation.

Keywords: Bayesian regularized back-propagation neural network (BRBPNN), precipitation, chemical composition, temporal trend, sum of squared weights

1. Introduction

Characterization of the chemical nature of precipitation is currently under considerable investigation, owing to increasing concern about anthropogenic atmospheric inputs of substances and their effects on land, surface waters, vegetation and materials. In particular, temporal trends and chemical composition have been the subject of extensive research in North America, Canada and Japan over the past 30 years (Zeng and Hopke, 1989; Khawaja and Husain, 1990; Lim et al., 1991; Sinya et al., 2002; Grimm and Lynch, 2005).

Water, Air, and Soil Pollution (2006) 172:167–184. DOI: 10.1007/s11270-005-9068-8. © Springer 2006

Linear regression (LR) methods such as multiple linear regression (MLR) have been widely used to model temporal trends and chemical composition in precipitation (Sinya et al., 2002; George, 2003; Aherne and Farrell, 2002; Christopher et al., 2005; Migliavacca et al., 2004; Yasushi et al., 2001). However, LR is an "ill-posed" problem in statistics and sometimes yields unstable models when trained with noisy data, besides requiring subjective decisions by the investigator about the likely functional (e.g. nonlinear) relationships among variables (Burden and Winkler, 1999; 2000).
On the other hand, there has recently been increasing interest in estimating the uncertainties and nonlinearities associated with impact prediction of atmospheric deposition (Page et al., 2004). Besides the precipitation amount and human activities, such as local and regional land cover and emission sources, the actual role each factor plays in determining the concentration at a given location is unknown and uncertain (Grimm and Lynch, 2005). It is therefore important that a model of temporal variation and precipitation chemistry be efficient, give unambiguous models and not depend on subjective decisions about the relationships among ionic concentrations.

In this study we propose a Bayesian regularized back-propagation neural network (BRBPNN) to overcome MLR's deficiencies and to investigate nonlinearity and uncertainty in acid precipitation. The network is trained by Bayesian regularization, a mathematical process that converts the regression into a well-behaved, "well-posed" problem. In contrast to MLR and traditional neural networks (NNs), BRBPNN performs better when the relationship between variables is nonlinear (Sovan et al., 1996; Archontoula et al., 2003) and generalizes better, because it selects its regularization parameters automatically to obtain the optimal network architecture from the posterior distribution and to avoid over-fitting (Burden and Winkler, 1999; 2000). Thus the main purpose of this paper is to apply the BRBPNN method to model the nonlinear relationship between the acidity and the chemical composition of precipitation, and to improve the accuracy of the monthly ionic concentration model used to provide precipitation estimates. Both are helpful for predicting precipitation variables and for interpreting the mechanisms of acid precipitation.

2. Theories and Methods

2.1. THEORY OF BAYESIAN REGULARIZED BP NEURAL NETWORK

Traditional NN modeling is based on back-propagation, created by generalizing the Widrow-Hoff learning rule to
multiple-layer networks and nonlinear differentiable transfer functions. Commonly, a BPNN comprises three types of neuron layers: an input layer, one or several hidden layers and an output layer of one or several neurons. In most cases only one hidden layer is used (Figure 1) to limit the computation time.

Figure 1. Structure of the neural network used: hidden layer a1 = tansig(IW1,1 p + b1); output layer a2 = purelin(LW2,1 a1 + b2). R = number of elements in the input vector; S = number of hidden neurons; p is a vector of R input elements. The net input to the transfer function tansig is n1, including the bias b1; the net input to the transfer function purelin is n2, including the bias b2. IW1,1 is the input weight matrix and LW2,1 is the layer weight matrix. a1 is the output of the hidden layer through the tansig transfer function, and y (a2) is the network output.

Although BPNNs with biases, a sigmoid layer and a linear output layer are capable of approximating any function with a finite number of discontinuities (The MathWorks,), we select the tansig and purelin transfer functions of MATLAB to improve efficiency (Burden and Winkler, 1999; 2000).

Bayesian methods are optimal methods for solving the learning problem of a neural network: they automatically select the regularization parameters and combine the high convergence rate of traditional BPNN with the prior information of Bayesian statistics (Burden and Winkler, 1999; 2000; Jouko and Aki, 2001; Sun et al., 2005). To improve the generalization ability of the network, the regularized training objective function F is defined as:

F = \alpha E_w + \beta E_D \quad (1)

where E_w is the sum of squared network weights, E_D is the sum of squared network errors, and \alpha and \beta are objective function parameters (regularization parameters). Setting correct values for the objective parameters is the main difficulty in implementing regularization, and their relative size dictates the emphasis for
training. Specifically, in this study the mean square error (MSE) is chosen as the measure of the network training approximation. Consider a neural network with a training data set D = {(p_1, t_1), (p_2, t_2), ..., (p_i, t_i), ..., (p_n, t_n)}, where p_i is an input to the network and t_i is the corresponding target output. As each input is applied to the network, the network output is compared with the target, and the error is calculated as the difference between the target output and the network output. We then minimize the average of the sum of these errors (namely, the MSE) through iterative network training:

\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} e(i)^2 = \frac{1}{n}\sum_{i=1}^{n} \bigl(t(i) - a(i)\bigr)^2 \quad (2)

where n is the size of the sample set, e(i) is the error and a(i) is the network output.

In the Bayesian framework the weights of the network are considered random variables, and the posterior distribution of the weights is updated according to Bayes' rule:

P(w \mid D, \alpha, \beta, M) = \frac{P(D \mid w, \beta, M)\, P(w \mid \alpha, M)}{P(D \mid \alpha, \beta, M)} \quad (3)

where M is the particular neural network model used and w is the vector of network weights. P(w | \alpha, M) is the prior density, which represents our knowledge of the weights before any data are collected, and P(D | w, \beta, M) is the likelihood function, i.e. the probability of the data occurring given the weights w.
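Equations (1) and (2) are straightforward to compute directly. A small numpy sketch, with made-up targets, outputs and weights:

```python
import numpy as np

def mse(t, a):
    """Mean square error of Equation (2): the mean of (t(i) - a(i))^2."""
    t, a = np.asarray(t, float), np.asarray(a, float)
    return np.mean((t - a) ** 2)

def objective(weights, t, a, alpha, beta):
    """Regularized objective of Equation (1): F = alpha*E_w + beta*E_D."""
    E_w = np.sum(np.asarray(weights, float) ** 2)                     # sum of squared weights
    E_D = np.sum((np.asarray(t, float) - np.asarray(a, float)) ** 2)  # sum of squared errors
    return alpha * E_w + beta * E_D

# Hypothetical targets, network outputs and weights
t = [5.1, 4.8, 5.6]
a = [5.0, 5.0, 5.5]
w = [0.2, -0.4, 1.0]
print(mse(t, a))                      # average squared error
print(objective(w, t, a, 0.1, 1.0))   # regularized objective
```

Note that F penalizes large weights as well as large errors, which is what discourages over-fitting.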
P(D | \alpha, \beta, M) is a normalization factor, which guarantees that the total probability is 1. Thus we have:

\text{Posterior} = \frac{\text{Likelihood} \times \text{Prior}}{\text{Evidence}} \quad (4)

Likelihood: a network with a specified architecture M and weights w can be viewed as making predictions about the target output as a function of the input data, in accordance with the probability distribution:

P(D \mid w, \beta, M) = \frac{\exp(-\beta E_D)}{Z_D(\beta)} \quad (5)

where Z_D(\beta) is the normalization factor:

Z_D(\beta) = (\pi/\beta)^{n/2} \quad (6)

Prior: a prior probability is assigned to alternative network connection strengths w, written in the form:

P(w \mid \alpha, M) = \frac{\exp(-\alpha E_w)}{Z_w(\alpha)} \quad (7)

where Z_w(\alpha) is the normalization factor:

Z_w(\alpha) = (\pi/\alpha)^{K/2} \quad (8)

Finally, the posterior probability of the network connections w is:

P(w \mid D, \alpha, \beta, M) = \frac{\exp(-(\alpha E_w + \beta E_D))}{Z_F(\alpha, \beta)} = \frac{\exp(-F(w))}{Z_F(\alpha, \beta)} \quad (9)

Setting the regularization parameters \alpha and \beta. The regularization parameters \alpha and \beta determine the complexity of the model M. We now apply Bayes' rule to optimize the objective function parameters \alpha and \beta:

P(\alpha, \beta \mid D, M) = \frac{P(D \mid \alpha, \beta, M)\, P(\alpha, \beta \mid M)}{P(D \mid M)} \quad (10)

If we assume a uniform prior density P(\alpha, \beta \mid M) for the regularization parameters \alpha and \beta, then maximizing the posterior is achieved by maximizing the likelihood function P(D \mid \alpha, \beta, M). Note also that the likelihood function P(D \mid \alpha, \beta, M) on the right side of Equation (10) is the normalization factor for Equation (3).
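Maximizing this evidence leads to closed-form re-estimates of the regularization parameters from the curvature at the weight minimum. A toy numpy illustration on a linear-Gaussian model, an assumption made here for tractability (the paper instead uses the Gauss-Newton Hessian inside Levenberg-Marquardt training):

```python
import numpy as np

rng = np.random.default_rng(1)
n, K = 200, 3
J = rng.normal(size=(n, K))                # design matrix (Jacobian of a linear model)
w_true = np.array([1.0, -2.0, 0.5])
t = J @ w_true + 0.1 * rng.normal(size=n)  # targets with noise of std 0.1

alpha, beta = 1.0, 1.0                     # initial regularization parameters
for _ in range(50):
    # Minimize F(w) = alpha*E_w + beta*E_D exactly (ridge solution for a linear model)
    A = beta * (J.T @ J) + alpha * np.eye(K)
    w = np.linalg.solve(A, beta * (J.T @ t))
    E_w = np.sum(w ** 2)
    E_D = np.sum((t - J @ w) ** 2)
    H = 2.0 * A                            # Hessian of F at the minimum
    # Effective number of parameters, then the evidence-maximizing updates
    gamma = K - 2.0 * alpha * np.trace(np.linalg.inv(H))
    alpha = gamma / (2.0 * E_w)
    beta = (n - gamma) / (2.0 * E_D)

# With this convention the noise variance estimate is 1/(2*beta)
print(gamma, 1.0 / (2.0 * beta))
```

Here gamma converges to nearly K = 3 (all three parameters are well determined by the data) and 1/(2*beta) recovers the noise variance of roughly 0.01, mirroring the fixed-point behaviour the BRBPNN relies on.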
According to Foresee and Hagan (1997), we have:

P(D \mid \alpha, \beta, M) = \frac{P(D \mid w, \beta, M)\, P(w \mid \alpha, M)}{P(w \mid D, \alpha, \beta, M)} = \frac{Z_F(\alpha, \beta)}{Z_w(\alpha)\, Z_D(\beta)} \quad (11)

In Equation (11) the only unknown part is Z_F(\alpha, \beta). Since the objective function has the shape of a quadratic in a small area surrounding the minimum point, we can expand F(w) around the minimum point of the posterior density w^{MP}, where the gradient is zero. Solving for the normalizing constant yields:

Z_F(\alpha, \beta) = (2\pi)^{K/2} \det{}^{-1/2}(H) \exp(-F(w^{MP})) \quad (12)

where H is the Hessian matrix of the objective function:

H = \beta \nabla^2 E_D + \alpha \nabla^2 E_w \quad (13)

Substituting Equation (12) into Equation (11), we can find the optimal values of \alpha and \beta at the minimum point by taking the derivative of the log of Equation (11) with respect to each parameter and setting it to zero, which gives:

\alpha^{MP} = \frac{\gamma}{2 E_w(w^{MP})} \quad \text{and} \quad \beta^{MP} = \frac{n - \gamma}{2 E_D(w^{MP})} \quad (14)

where \gamma = K - 2\alpha^{MP}\,\mathrm{tr}\bigl((H^{MP})^{-1}\bigr) is the number of effective parameters, n is the size of the sample set and K is the total number of parameters in the network. The number of effective parameters measures how many parameters in the network are effectively used in reducing the error function; it can range from zero to K. After training, the following checks are needed: (i) if \gamma is very close to K, the network may not be large enough to properly represent the true function; in this case we simply add more hidden neurons and retrain a larger network. If the larger network has the same final \gamma, then the smaller network was large enough. (ii) If the network is sufficiently large, then a second, larger network will achieve comparable values of \gamma.

The Bayesian optimization of the regularization parameters requires computing the Hessian matrix of the objective function F(w) at the minimum point w^{MP}. To overcome this problem, the Gauss-Newton approximation to the Hessian matrix has been proposed by Foresee and Hagan (1997). The steps required for Bayesian optimization of the regularization parameters are: (i) initialize \alpha, \beta and the weights; after the first training step, the objective
function parameters will recover from the initial setting; (ii) take one step of the Levenberg-Marquardt algorithm to minimize the objective function F(w); (iii) compute \gamma using the Gauss-Newton approximation to the Hessian matrix available in the Levenberg-Marquardt training algorithm; (iv) compute new estimates of the objective function parameters \alpha and \beta; and (v) iterate steps (ii) through (iv) until convergence.

2.2. WEIGHT CALCULATION OF THE NETWORK

Generally, one of the difficult research topics for a BRBPNN model is how to extract effective information from the trained network. To a certain extent, the network weights and biases reflect the complex nonlinear relationships between the input variables and the output variable. When the output layer involves only one neuron, the influence of the input variables on the output variable is presented directly in the influence of the input parameters on the network. Considering the connections along the paths from the input layer to the hidden layer, and from the hidden layer to the output layer, we can study how the input variables act on the hidden layer, which can be taken as the impact of the input variables on the output variable. According to Joseph et al. (2003), the relative importance of an individual input variable upon the output variable can be expressed as:

I = \frac{\sum_{j=1}^{S} \mathrm{ABS}(w_{ji})}{\sum_{i=1}^{Num} \sum_{j=1}^{S} \mathrm{ABS}(w_{ji})} \quad (15)

where w_{ji} is the connection weight from input neuron i to hidden neuron j, ABS is the absolute-value function, and Num and S are the numbers of input variables and hidden neurons, respectively.

2.3. MULTIPLE LINEAR REGRESSION

This study attempts to ascertain whether BRBPNNs are preferable to the MLR models widely used in the past for the temporal variation of acid precipitation (Buishand et al., 1988; Dana and Easter, 1987; MAP3S/RAINE, 1982). MLR employs the following regression model:

Y_i = a_0 + a\cos(2\pi i/12 - \phi) + b i + c P_i + e_i, \quad i = 1, 2, \ldots, 12N \quad (16)

where N represents the number of years in the time series. In this case, Y_i is the
natural logarithm of the monthly mean concentration (mg/L) in precipitation for the i-th month. The term a_0 represents the intercept. P_i represents the natural logarithm of the precipitation amount (ml) for the i-th month. The term b i, where i (month) goes from 1 to 12N, represents the monotonic trend in precipitation concentration over time. To facilitate the estimation of the coefficients a_0, a, b, c and \phi, following Buishand et al. (1988) and John et al. (2000) the reparameterized MLR model was established, and the final form of Equation (16) becomes:

Y_i = a_0 + \alpha\cos(2\pi i/12) + \beta\sin(2\pi i/12) + b i + c P_i + e_i, \quad i = 1, 2, \ldots, 12N \quad (17)

where \alpha = a\cos\phi and \beta = a\sin\phi. The regression coefficients a_0, \alpha, \beta, b and c in Equation (17) are estimated by ordinary least squares.

2.4. DATA SET SELECTION

The precipitation chemistry data used are from NADP (the National Atmospheric Deposition Program), a nationwide precipitation collection network founded in 1978. Monthly precipitation records of nine species (pH, NH4+, NO3-, SO4^2-, Ca2+, Mg2+, K+, Cl- and Na+) and the precipitation amount for 1990–2003 were collected at the Clinton Crops Research Station (NC35), North Carolina. Information on the data validation can be found at the NADP website.

The advantage of BRBPNNs is that they produce models that are robust and well matched to the data. At the end of training, a Bayesian regularized neural network has optimal generalization qualities, so there is no need for a test set (MacKay, 1992; 1995). Husmeier et al. (1999) has also shown, theoretically and by example, that in a Bayesian regularized neural network the training and test set performance do not differ significantly. Thus this study needs no test set, and only the training set problem remains.

i. Training set of the BRBPNN between precipitation acidity and the other ions. For the relationship between precipitation acidity and the other ions, the input neurons are the monthly concentrations of NH4+, NO3-, SO4^2-, Ca2+, Mg2+, K+, Cl- and Na+, and the precipitation acidity (pH) is
regarded as the output of the network.

ii. Training set of the BRBPNN for temporal trend analysis. Based on the weight calculations of the BRBPNN between precipitation acidity and the other ions, this study simulates the temporal trend of the three main ions using BRBPNN and MLR, respectively. In Equation (17) of MLR, a_0, \alpha, \beta, b and c are the estimated coefficients, and i, P_i, cos(2\pi i/12) and sin(2\pi i/12) are the independent variables. To achieve satisfactory fitting with the BRBPNN model, we similarly employ the four items i, P_i, cos(2\pi i/12) and sin(2\pi i/12) as the input neurons of the BRBPNN; the suitability of this choice is demonstrated in what follows.

2.5. SOFTWARE AND METHOD

MLR is carried out with the SPSS 11.0 software. The BRBPNN is implemented in the neural network toolbox of MATLAB 6.5, following the algorithm described in Section 2.1. Concretely, the BRBPNN algorithm uses the "trainbr" network training function of the MATLAB toolbox, which updates the weights and biases according to Levenberg-Marquardt optimization. The function minimizes both squared errors and weights, reports the number of network parameters effectively used by the network, and then determines the correct combination so as to produce a network that generalizes well. Training stops if the maximum number of epochs is reached, the performance has been minimized to a suitably small goal, or the performance gradient falls below a suitable target. Each of these targets and goals is left at the MATLAB default value unless set explicitly. To eliminate the guesswork required in determining the optimum network size, training should be carried out many times to ensure convergence.

3. Results and Discussion

3.1. CORRELATION COEFFICIENTS OF PRECIPITATION IONS

Table I shows the correlation coefficients for the ion components and the precipitation amount at NC35. It illustrates that the acidity of precipitation results from the integrative interactions of anions and cations and
mainly depends upon four species, i.e. SO4^2-, NO3-, Ca2+ and NH4+. In particular, pH is strongly correlated with SO4^2- and NO3-, with correlation coefficients of -0.708 and -0.629, respectively. In addition, all the ionic species have a negative correlation with the precipitation amount, in accordance with the rule that the higher the precipitation amount, the lower the ionic concentration (Li, 1999).

TABLE I. Correlation coefficients of precipitation ions

          Ca2+    Mg2+    K+      Na+     NH4+    NO3-    Cl-     SO4^2-   pH      Precip. amount
Ca2+      1.000   0.462   0.548   0.349   0.449   0.627   0.349   0.654   -0.342   -0.369
Mg2+              1.000   0.381   0.980   0.051   0.132   0.980   0.123    0.006   -0.303
K+                        1.000   0.320   0.248   0.226   0.327   0.316   -0.024   -0.237
Na+                               1.000  -0.031   0.021   0.992   0.021    0.074   -0.272
NH4+                                      1.000   0.733   0.011   0.610   -0.106   -0.140
NO3-                                              1.000   0.050   0.912   -0.629   -0.258
Cl-                                                       1.000   0.049    0.075   -0.265
SO4^2-                                                            1.000   -0.708   -0.245
pH                                                                         1.000    0.132
Precip. amount                                                                      1.000

3.2. RELATIONSHIP BETWEEN PH AND CHEMICAL COMPOSITION

3.2.1. BRBPNN Structure and Robustness
For the BRBPNN of the relationship between pH and the chemical composition, the number of input neurons equals the number of selected input variables, namely the eight ions NH4+, NO3-, SO4^2-, Ca2+, Mg2+, K+, Cl- and Na+; the output neuron is pH alone. Generally, the number of hidden neurons of a traditional BPNN is roughly estimated by investigating the effects of repeatedly trained networks, but a BRBPNN can automatically search the optimal network parameters in the posterior distribution (MacKay, 1992; Foresee and Hagan, 1997). Following the algorithm of Sections 2.1 and 2.5, the "trainbr" network training function is used to implement BRBPNNs with a tansig hidden layer and a purelin output layer. To find the optimal architecture, each BRBPNN is trained independently 20 times, to eliminate spurious effects caused by the random initial weights, and training is stopped when the number of repetitions reaches 3 000 epochs. Add the number
of hidden neurons (S) from 1 to 20 and retrain the BRBPNNs until the network performance (the number of effective parameters, MSE, E_w, E_D, etc.) remains approximately the same.

To determine the optimal BRBPNN structure, Figure 2 summarizes the results of training many different networks of the 8-S-1 architecture for the relationship between pH and the chemical constituents of precipitation; it shows how MSE and the number of effective parameters change with the number of hidden neurons (S). When S is less than 15, the number of effective parameters grows and MSE shrinks as S increases. When S is larger than 15, however, MSE and the number of effective parameters remain roughly constant for any network. This is the minimum number of hidden neurons required to properly represent the true function. As Figure 2 shows, the number of hidden neurons (S) can be increased up to 20, but MSE and the number of effective parameters remain roughly equal to those of the network with 15 hidden neurons, which suggests that the BRBPNN is robust. Therefore, using the BRBPNN technique, we can determine the optimal network size to be 8-15-1.

Figure 2. Changes of the optimal BRBPNNs with the number of hidden neurons.

Figure 3. Comparison of calculations between BRBPNN (8-15-1) and MLR.

3.2.2. Prediction Results Comparison
Figure 3 illustrates the output response of the BRBPNN (8-15-1), with quite a good fit. The calculations of the BRBPNN (8-15-1) clearly have a much higher correlation coefficient (R² = 0.968) and lie more concentrated near the isoline than those of MLR.
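The relative importance index of Equation (15), used below to rank the ions, amounts to column sums of absolute input-to-hidden weights. A sketch with a small hypothetical weight matrix (not the trained 8-15-1 network):

```python
import numpy as np

def relative_importance(w_ih):
    """Equation (15): the importance of each input variable as its share of the
    total absolute input-to-hidden weight. w_ih has shape (S hidden, Num inputs)."""
    col = np.abs(w_ih).sum(axis=0)    # sum over hidden neurons j for each input i
    return 100.0 * col / col.sum()    # as percentages

# Hypothetical 3-hidden x 4-input weight matrix for illustration
w = np.array([[ 0.5, -1.2, 0.1,  0.3],
              [-0.4,  2.0, 0.2, -0.1],
              [ 0.1, -0.8, 0.1,  0.2]])
out = relative_importance(w)
print(out.round(2))
```

The shares sum to 100%, so the index orders inputs by how heavily the hidden layer draws on them, which is how the ion ranking in Table II is obtained.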
In the previous studies relating the acidity to the other ions by MLR, most of the average regression R² values reached less than 0.769 (Yu et al., 1998; Baez et al., 1997; Li, 1999).

Additionally, Figures 2 and 3 show that any BRBPNN of the 8-S-1 architecture has better approximating qualities: even with S equal to 1, the MSE of the BRBPNN (8-1-1) is much smaller than, and superior to, that of MLR. We can thus conclude that there are strong nonlinear relationships between the acidity and the other ion concentrations which cannot be captured by MLR, and that it is quite reasonable to apply a neural network methodology to interpret the nonlinear mechanisms between the acidity and the input variables.

TABLE II. Sum of squared weights (SSW) and relative importance (I) from the input neurons to the hidden layer

          Ca2+     Mg2+     K+       Na+      NH4+      NO3-     Cl-      SO4^2-
SSW       2.9589   2.7575   1.7417   0.8805   10.4063   4.0828   1.3771   5.2050
I (%)     10.06    9.38     5.92     2.99     35.38     13.88    4.68     17.70

3.2.3. Weight Interpretation for the Acidity of Precipitation
To interpret the weights of the optimal BRBPNN (8-15-1), Equation (15) is used to evaluate the significance of each input variable; the calculations are given in Table II. Among the eight inputs of the BRBPNN (8-15-1), NH4+, SO4^2-, NO3-, Ca2+ and Mg2+ have the greater impacts upon the network, indicating that these five factors are the more significant for the acidity. Table II shows that NH4+ contributes by far the most (35.38%) to the acidity prediction, while SO4^2- and NO3- contribute 17.70% and 13.88%, respectively; Ca2+ and Mg2+ contribute 10.06% and 9.38%.

3.3. TEMPORAL TREND ANALYSIS

3.3.1. Determination of the BRBPNN Structure
Fitting results have generally been low in temporal trend estimation for precipitation. For example, the regression R² values of NH4+ and NO3- for the Chesapeake Bay Watershed in Grimm and Lynch (2005) are 0.3148 and 0.4940, and the R² values of SO4^2-, NH4+ and NO3- for Japan in Sinya et al. (2002) are 0.4205,
0.4323 and 0.4519, respectively. This study also applies BRBPNN to estimate the temporal trend of precipitation chemistry. According to the weight results, we select NH4+, SO4^2- and NO3- for temporal trend prediction using BRBPNN. The four items i, P_i, cos(2\pi i/12) and sin(2\pi i/12) in Equation (17) are taken as the input neurons of the BRBPNNs. In particular, two periods (1990–1996 and 1990–2003) of input variables for the NH4+ temporal trend are selected, to compare with the earlier MLR results of the NH4+ trend analysis for 1990–1996 (John et al., 2000).

As in Figure 2, with 20 training runs and a maximum of 3 000 epochs, Figure 4 summarizes the results of training many different networks of the 4-S-1 architecture to approximate the temporal variation of the three ions, showing MSE and the number of effective parameters as functions of the number of hidden neurons (S). MSE and the number of effective parameters converge and stabilize as S gradually increases. For the 1990–2003 data, letting the number of hidden neurons (S) increase up to 10, we find that the minimum numbers of hidden neurons required to properly represent the accurate function and achieve satisfactory results are at least 6, 6 and 4 for the trend analysis of NH4+, SO4^2- and NO3-, respectively. Thus the best BRBPNN structures for NH4+, SO4^2- and NO3- are 4-6-1, 4-6-1 and 4-4-1, respectively. For the NH4+ data of 1990–1996, the optimal network is the BRBPNN (4-10-1), which differs from the BRBPNN (4-6-1) of the 1990–2003 data and indicates that the optimal BRBPNN architecture changes when different data are input.

Figure 4. Changes of the optimal BRBPNNs with the number of hidden neurons for different ions. (a: the period 1990–2003; b: the period 1990–1996.)

3.3.2. Comparison between BRBPNN and MLR
Figures 5–8 summarize the comparisons of the trend analyses for the different ions using BRBPNN and MLR, respectively. In particular, for Figure 5, John et al.
(2000) examined the R² of NH4+ through the MLR Equation (17): it is just 0.530 for the 1990–1996 data at NC35. But if the BRBPNN method is used to train the same 1990–1996 data, R² reaches 0.760. This shows that it is indispensable to consider the nonlinear characteristics in the NH4+ trend analysis, which makes up for the insufficiency of MLR to some extent. Figures 6–8 demonstrate the pervasive feasibility and applicability of the BRBPNN model in the temporal trend analysis of NH4+, SO4^2- and NO3-, which reflects the nonlinear properties and is much more precise than MLR.

3.3.3. Temporal Trend Prediction
Using the above optimal BRBPNNs of the ion components, we can obtain optimal predictions of the ionic temporal trends. Figures 9–12 illustrate the typical seasonal cycle of the monthly NH4+, SO4^2- and NO3- concentrations at NC35, in agreement with the trend of John et al. (2000).

Figure 5. Comparison of NH4+ calculations between BRBPNN (4-10-1) and MLR in 1990–1996.

Figure 6. Comparison of NH4+ calculations between BRBPNN (4-6-1) and MLR in 1990–2003.

Figure 7. Comparison of SO4^2- calculations between BRBPNN (4-6-1) and MLR in 1990–2003.

Based on Figure 9, the estimated increase of the NH4+ concentration in precipitation for the 1990–1996 data corresponds to an annual increase of approximately 11.12%, slightly higher than the 9.5% obtained by the MLR of John et al. (2000).
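An annual percentage change of this kind follows from the monthly trend coefficient b of Equation (17): since Y_i is the natural log of the concentration, twelve months of trend multiply the concentration by exp(12b). A small check, where the value of b is back-derived for illustration rather than taken from the paper:

```python
import math

def annual_percent_change(b_monthly):
    """Convert a monthly trend slope b on ln(concentration), as in Eq. (17),
    to an annual percentage change: 100 * (exp(12*b) - 1)."""
    return 100.0 * (math.exp(12.0 * b_monthly) - 1.0)

# Hypothetical slope: b = ln(1.1112)/12 recovers an 11.12% annual increase
b = math.log(1.1112) / 12.0
print(round(annual_percent_change(b), 2))   # 11.12
```

The same conversion applied to a smaller slope reproduces figures like the 9.5% MLR estimate, so the two trend estimates differ only in the fitted b.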
Here we can confirm that the results of BRBPNN are more reasonable and objective, because BRBPNN takes the nonlinear characteristics into account. In contrast with

Figure 8. Comparison of NO3- calculations between BRBPNN (4-4-1) and MLR in 1990–2003.

Figure 9. Temporal trend in the natural log (log NH4+) of the NH4+ concentration in 1990–1996. (Dots (o) represent monitored values; the solid and dashed lines represent, respectively, the predicted values and the estimated trend given by the BRBPNN method.)

Figure 10. Temporal trend in the natural log (log NH4+) of the NH4+ concentration in 1990–2003. (Dots (o) represent monitored values; the solid and dashed lines represent, respectively, the predicted values and the estimated trend given by the BRBPNN method.)
Biology and Environment
1. Apoptosis of Neurons
2. The Structure and Function of Myosin
3. Yb Laser Radiation Characteristics
4. The Secretion Mechanism of Cholecystokinin
5. The Function of Calcium in Signal Transduction
6. The Evolution of Rubisco
7. Application of Mass Spectrometry in Biology
8. The Synthesis of PHB in Bacteria
9. Research on HIV-1
10. The Function of STATs (Signal Transducers and Activators of Transcription) in the Human Immune System
11. Reverse Osmosis in Water Treatment
12. Research on Water Eutrophication
13. Drinking Water Treatment and Production
14. Removal of Heavy Metals from Wastewater
15. Membrane Technology in Wastewater Treatment
16. Biodegradation of Waste Plastics
17. Methods for Determining the Biodegradability of Organic Compounds
18. Application of Titanium Dioxide Photocatalysis in Environmental Engineering
19. Reuse (Recycling) of Packaging Materials
20. The Removal of Nitrogen in Water Treatment
21. Biological Treatment of Wastewater
22. Catalytic Processes for Reducing Nitrogen Oxides (NOx) in Waste Gases
23. Atmospheric Environmental Quality Models
24. The Measurement of Volatile Organic Compounds
25. Application of the UASB (Upflow Anaerobic Sludge Blanket) Reactor in Wastewater Treatment
26. The Removal of Dyes from Textile Industry Wastewater
27. Multiplex PCR in Genetic Rearrangement
28. Environmental Pollution by Wastewater Containing Polycyclic Aromatic Hydrocarbons
29. Development of High-Performance Bioreactors
30. Purification of Biomolecules by HPLC (High-Performance Liquid Chromatography)
31. Analysis of Membrane Lipids in Cyanobacteria
32. The Function of b-Amyloid on
Neuron in Alzheimer's Disease33.小鼠胚胎干细胞的培养Cultivation of Embryonic Stem Cells in Mice34.基质金属蛋白酶的抑制Inhibition of Matrix Metalloproteinase (MMP)35.生物医用亲合吸附剂的研究进展Progress in biomedical affinity adsorbent36.面向环境的土壤磷素测定与表征方法研究进展Review on environmental oriented soil phosphorus testing procedure andinterpreting method37.海水养殖对沿岸生态环境影响的研究进展Review on effects of mariculture on coastal environment38.造纸清洁生产的研究进展Recent studies on cleaning production in paper industry39.深度氧化技术处理有机废水的研究进展Progress on treatment of organic wastewater by advances oxidation processes 40.折流式厌氧反应器(ABR)的研究进展Research advances in anaerobic baffled reactor (ABR)41.膜生物反应器中膜污染研究进展Study progress on the fouling of membrane in membrane bioreactor42.用于水和废水处理的混凝剂和絮凝剂的研究进展Progress on development and application of coagulants and flocculent in water and wastewater treatment43.二氢异香豆素类天然物的研究进展Development of studies on 3,4-dihydroisocoumarins in nature44.天然二萜酚类化合物研究进展Recent advances in the research on natural phenolic diterpenoids45.大气污染化学研究进展Progress in atmospheric chemistry of air pollution46.砷形态分析方法研究进展Development of methods for arsenic speciation47.复合污染的研究进展Advance in the study on compounded pollutions48.生物处理含氯代脂肪烃废水的研究进展Progress in research on the biological treatment of wastewater containingchlorinated aliphatics49.重金属生物吸附剂的应用研究现状Application conditions of heavy metal biosorbent50.两液相培养中有机溶剂对细胞毒性的研究进展Advances in studies on effects of toxicity of organic solvents on cells化学和化工1. 纳米材料的进展及其在塑料中的应用rogress and application of nano-materials in plastics2. 聚硅氯化铝(PASC)混凝剂的混凝特性The Coagulation Property of Polyaluminum Silicate Chlorate (PASC)3. 碳纳米管的制备与研究Preparations and studies of carbon nanotubes4. 纳米材料的制备及其发展动态Synthesis and development of nanosized materials5. 铁(III)核苷酸配位化合物与转铁蛋白的相互作用The interaction between ferric nucleotide coordination compounds and transferrin 6. 原位时间分辨拉曼光谱研究电化学氧化还原和吸附过程In-situ time resolved Raman spectroscopic studied on electrochemical oxidation-reduction and adsorption7. 
苯胺电化学聚合机理的研究Study on the mechanism of electrochemical polymerization of aniline8. 沸石新材料研究进展Evolution of novel zeolite materials9. 聚合物共混相容性研究进展Research progress in compatibility of polymer blends10.聚酰亚胺LB膜研究进展Recent advances in polymide langmuir-blodgett films11.聚胺酯液晶研究进展The advances in LC-polyurethanes12.热塑性IPN研究进展及相结构理论Advances in thermoplastic IPN and morphological studies13.酞菁类聚合物功能材料研究进展Progresses in functional materials of phthalocyanine polymers14.有机硒化学研究进展Study progress in organoselenium chemisty15.杯芳烃研究进展Research progress in calixarene chemistry16.木素生物降解的研究进展Research progresses on lignin biodegradation17.甲烷直接催化转化制取芳烃的研究进展Progress research on direct catalytic conversion of methane to aromatics 18.铝基复合材料连接研究进展Advance in joining aluminum metal matrix composites19.现代天然香料提取技术的研究进展New development of the extraction from natural fragrance and flavour20.电泳涂料的研究进展Progress of study on electrodeposition coatings21.防静电涂料研究进展Research progress in antistatic coatings22.壳聚糖开发与应用研究进展Progress in research on the application and production of chitosan23.塑料薄膜防雾化技术的研究进展Research progress of anti-fogging technologies for plastics films24.膜反应器在催化反应中的研究进展Progress in study of films reactors for catalytical reactions25.表面活性剂对结晶过程影响的研究进展The development of studies on the influence of surfactants on crystallization 26.液晶复合分离膜及其研究进展Advances in liquid crystal composite membrane for separation27.高倍吸水树脂研究进展Recent progress in super adsorbent resin28.聚合物光折变的研究进展Progress of the study on photorefractivity in polymers29.微生物聚酯的合成和应用研究进展Progress on the biosynthesis and application of microbial polyesters30.可降解塑料的研究进展Progress in study on degradable plastics31.金属氢研究进展Progress on metallic hydrogen research32.软磁性材料的最新进展Recent advances in hard and soft magnetic materials33.光敏聚酰亚胺的研究进展Development of studies on photosensitive polyimides34.高分子卟啉及其金属配合物的研究进展Advances in polymers of porphyrins and their complexes35.水性聚胺酯研究进展Recent development of waterborne polyurethanes36.C60的研究进展及其在含能材料方面的应用前景Application 
prospect of C60 in energetic materials37.滤膜溶解富集方法研究进展Progress in investigation of concentration by means of soluble-membrane filter 38.人工晶体研究进展及应用前景The research progress and application prospects of synthetic crystals39.钛硅催化材料的研究进展Development of titanium silicon catalytic materials40.环烯烃聚合物的合成和应用研究进展Progress of polymerization and copolymerization with ethylene of cyclooelfines 41.多孔炭的纳米结构及其解析Nanostructure and analysis of porous carbons42.羰化法合成a-芳基丙酸研究进展Progress in preparation of a-arylpropionic acids through catalytic carbonation 44.组织工程相关生物材料表面工程的研究进展Advances in research on surface engineering of biomaterials for tissueengineering45.表面波在表面活性剂流变学研究中的应用Surface rheological properties of surfactant studied by surface wave technique 46.水溶性高分子聚集行为荧光非辐射能量转移研究进展Development of Fluorescence Nonradiative Energy Transfer in the Research for Aggregation of Water-Soluble Polymers47.两相催化体系中长链烯烃氢甲酰化反应研究进展Advance in the Hydroformylation of Higher Olefin in Two-Phase Catalystic System 48.聚合物膜燃料电池用电催化剂研究进展Progress in the Study of Electrocatalyst for PEMFC49.纳米器件制备的新方法--微接触印刷术New nano-fabrication Method-Microcontact Printing50.智能型水凝胶结构及响应机理的研究进展Recent Development of the Research on the Structure Effects and ResponsiveMechanism of Intelligent Hydrogels51.甲醇蒸馏distillation of methanol电类1. Amplifiers 放大器2. Asynchronous transfer mode(A TM) 异步传输模式3. Aritificial reality 虚拟现实4. Bayesian classification 贝叶斯分类器5. Biped robot 两足机器人6. Cable modem 有线调制解调器7. CDMA mobile communication system 码分多址移动通信系统8. Chaotic neural network 混沌神经网络9. Code optimization 代码优化10. Communication switching 通信交换11. Computer aided design 计算机辅助设计12. Compiler optimisation techniques 编译优化技术13. Computer game design 计算机游戏设计14. Computer graphics 计算机图形学15. Computer network 计算机网络16. Computer simulation 计算机仿真17. Computer vision 计算机视觉18. Continuous speech recognition 连续语音识别19. Corner Detect Operator 边角检测算子20. Database application 数据库应用21. Design of operation system 操作系统设计22. Digital filter 数字滤波器23. Digital image processing 数字图像处理24. 
Digital integrated circuits 数字集成电路25. Digital satellite communication system 数字卫星通信系统26. Digital signal processing 数字信号处理27. Digital television technology 数字电视技术28. Discrete system simulator programming 离散系统仿真编程29. Distributed interactive learning environment 分布式交互性学习环境30. EDA 数字系统设计自动化31. Electrical vehicles 电动交通工具32. Electricity control system 电力控制系统33. Electromagenetic wave radiation 电磁波辐射34. Face recognition 人脸识别35. Family Automation 家庭自动化36. Fibre bragg gratings 光纤布拉格光栅37. FIR digital filters 有限冲击响应数字滤波器38. Firewall technology 防火墙技术39. Fuzzy control 模糊控制40. Genetic algorithm 遗传算法41. HDTV 高清晰度电视42. High capacity floppy disk 高密度软盘43. High quality speech communication 高质量语音通信44. Image compression 图像压缩45. Image processing and recognition 图像处理和识别46. Image registration 图像配准47. Information retrieval 信息检索48. Intelligent robot 智能机器人49. Intelligent transportation 智能交通50. Internet protocol 因特网协议51. ISDN 综合业务数字网52. Knowledge discovery and data mining 知识发现和数据挖掘53. LAN, MAN and W AN 局域网,城域网和广域网54. Large scale integrated circuits 大规模集成电路55. Laser diode 激光二极管56. Laser measurement 激光测量57. Liner programming 线性规划58. Liner system stability analysis 线性系统稳定性分析59. Local area network security 局域网安全60. Magnetic material and devices 磁介质与设备61. Mass storage systems 海量存储技术62. Microwave devices 微波器件63. Mobile communication systems 移动通信系统64. MOS circuits MOS电路65. Motion control of robot 机器人运动控制66. Multimedia network 多媒体网络67. Network computing and knowledge acquisition 网络计算和知识获取68. Network routing protocol test 网络路由协议测试69. Neural network 神经网络70. Non-linear control 非线性控制71. Optical communication 光通信72. Optical fiber amplifiers 光纤放大器73. Optical hologram storage 光全息存储74. Optical modification 光调制75. Optical sensors 光传感器76. Optical switches 光开关77. Optical waveguides 光波导78. Packet switching technology in networks 网络中的分组交换技术79. Parallel algorithms 并行算法80. Pattern recognition 模式识别81. Photoelectric devices 光电子器件82. Process identificaion 过程辨识83. Programmable DSP chips 可编程数字信号处理芯片84. 
Programmable logic device 可编程逻辑器件85. Radar antennas 雷达天线86. Radar theory and systems 雷达理论和系统87. RISC architecture 简单指令处理器结构88. Satellite broadcasting 卫星广播89. Self calibration of camera 摄像机自适应校准90. Semiconductor laser 半导体激光器91. Semiconductor quantum well superlattices 半导体量子阱超晶格92. Signal detection and analysis 信号检测和分析93. Signal processing 信号处理94. Software engineering 软件工程95. Solid lasers 固体激光器96. Sound synthesiser 声音合成器97. Speech processing 语音处理98. System architecture design 系统结构设计99. Telecommunication receiving equipment 通信接收设备100. Theory of remote sensing by radar 雷达遥感理论101. Time division multiple access 时分多路访问102. Unix operating system Unix操作系统103. Video encoding and decoding 视频编解码104. Video telecommunication system 视频通信系统105. Wavelength division multiplexing 波分复用106. Wavelet transform 小波变换机械、自动化、物理、力学1. 无电压力传感器Nonelectric Pressure Sensors2. 金属腐蚀Metal Corrosion3. 印刷电路板的设计与制造The Design and Manufactory of Printed Circuit Board4. 分布式操作系统Distributed Operating Systems5. 金属材料的微结构和纳米结构Micro and Nanostructures of metal materials6. 宇宙背景辐射Backgroud Cosmic Radiations7. 非线性规划中的库恩-塔克条件kuhn-Tucker condition in Non-liner Programming8. 气体激光器Gas Laser9. 能量的来源及转化Energy resources and conversion10. 微纳米摩擦学Micro/nano-tribology11. 噪声控制Noise Control12. 空间观测技术Astronomical observation techniques13. 原子钟Atomic Clocks14. 半导体的磁性研究Research for Magnetic of Semiconductors15. 光学图形处理Optical Image Processing16. 液体/气体激光器加工Liquid/Gas Laser Machining17. 太阳能应用Solar Energy Application18. 流动系统中的混沌现象Chaos in Flowing Systems19. 半导体材料及仪器Semiconductor material and devices20. 电场测量研究Electric Field Measurement21. 系统及控制理论Systems and Control Theory22. 机械参量的测试Mechanical variables Measurement23. 光纤Optical Fibres24. 机动目标跟踪Tracking of Maneuvering T argets25. 航天技术Aerospace T echnology26. 导弹跟踪控制系统Missile Tracking System27. 液晶显示器件Liquid Crystal Displays28. CMOS 门电路CMOS Gate Circuits29. 图象采样与处理Image Sampling and Processing30. 光逻辑器件Optical Logic Device31. 信号发生器Signal Generator32. 
蛋白质晶体测量Measurement of Protein Crystal Growth33. 有线电视Cables T elevision34. 震动与控制系统Vibration and Control System35. 高压输电系统的安全性研究Stability of High-voltage Power TransmissionSystem36. 电荷Electric Charge37. 电子显微镜及电子光学应用Electron Microscopes an Optics Applications38. 辐射的影响The Effect of Radiation39. 电化学传感器测试装置Electronchemical Sensors T esting Equipment40. 爱因斯坦-麦克斯韦场Einstein-Maxwell Fields41. 柔性角度传感器在生物力学中的应用Biomechanical Application of Flesible Angular Sensor42. 压电材料及应用装置Piezoelectric Materials and Devices43. 超导材料及其应用Superconducting Materials and The Applications of Them44. 光学干涉Optical Interferometry45. 表面测量Surface Measurement46. 等离子体中的电磁波Electromagnetic Waves in Plasma47. 半导体激光器Semiconductor lasers48. 数字人脸辨识Digital Face Recognition49. 光波导Optical Waveguides Theory50. 机械波检测技术Mechanical Waves T esting technology51. 激光调制技术Laser beam Modulation T echnology52. 只读内存Read-only Memory53. 光学显微镜Optical Microscopy54. 光纤位移测量传感器F-O displacement sensors55. 激光扫描Laser Scanners56. 量子论与量子场论Quantum Theory and Quantum Field Theory57. 流体机械Fluid Mechanics58. 地球引力Earth Gravity59. 自动控制系统Automatic Control System60. 静电线性加速器Electrostatic and Linear Accelerators61. 专家系统与网络接口Expert Systems and Network Interface62. 计算机辅助制造Computer aided Manufacture63. 全息存储Holography Storage64. 核能在中国的前景The Future of Nuclear Energy in China65. 机器人运动学和动力学分析The Kinematics an Dynamics of Robots66. 合成材料制品Composite Materials Preparations67. 光的吸收Light Absorption68. 自适应控制系统Self-adjusting Control Systems69. 通信与信息系统Communication and Information Systems70. 数字信号处理芯片Digital Signal Processing Chips71. 虚拟制造Virtual Manufacturing72. 雷达遥感Remote Sensing by Radar73. 晶格理论与点阵统计学Lattice Theory and Statistics74. 面向对象程序设计Object-Oriented Program Development75. 单片机应用及其外围设备The Applications of SCP and Outer Equipment76. 生物医学工程Biomedical Engineering77. 彩色电视设备Color T elevision78. 陶瓷-金属复合材料Ceramics-metallisation Composite Metallisation79. 电子信号的检测与处理Electronic Signal Detection and Processing80. X射线望远镜X- ray T elescope81. 
基于网络的分时控制系统Time-varying Control System Based on Network82. 收音机信号传输Radio Broadcasting83. 单壁炭纳米管合成Single-Walled Carbon Nanotube Synthesis84. 无损检测Nondestructive T esting85. 汽车工业Automobile Industry86. 半导体材料与身体健康Semiconductor Materials and Health Physics87. 热辐射Heat Radiation88. 网络拓扑学Network T opology89. 微波的应用The Application of Microwave90. 局域网的设计The Design of Local Area Networks91. 金属元素表面结构Surface Structure of Metallic Elements92. 多媒体系统网络集成Network Synthesis of Multimedia Systems93. 铁氧体微波吸收材料Ferrite Microwave Absorbing Materials94. 炭纤维增强塑料复合材料Carbon Fiber Reinforced Plastic Composite95. 超导材料Superconducting Materials96. 远程定位水质控制Remote and On-site System for Water Quality Control97. 太阳能电力系统Solar Energy Power System98. 卫星接收系统Satellite Broadcasting and Relay System99. 时空对称性与守恒定律Symmetry of Space-time and Conservation Laws 100.邮件系统的体系及应用Application and Schemas for Mailbox System。
Abstract

Lithium-ion batteries, with their unique advantages of cleanliness and stability, have been widely applied in many fields, and prediction and health management for lithium batteries has accordingly become a necessity.
As charge–discharge cycling proceeds, the performance of a lithium-ion battery degrades continuously and its remaining useful life (RUL) keeps shortening.
Predicting the RUL of a lithium battery has therefore become an important means of assessing its state of health.
Existing RUL prediction methods for lithium batteries fall into model-based methods and data-driven methods.
Model-based methods usually require a detailed understanding of the battery's performance degradation process in order to build a degradation model, and the resulting models are often complex and generalize poorly.
Data-driven methods instead predict RUL from historical data, without requiring deep knowledge of the specific degradation process.
Starting from data-driven prediction, this thesis studies RUL prediction for lithium batteries with deep learning at its core, based on historical data from the battery's operation. The main work is as follows. First, a neural network model based on the Long Short-Term Memory (LSTM) architecture is proposed to predict the battery's capacity sequence and thereby its RUL.
The LSTM fits the degradation trend of the battery capacity, extracts the degradation characteristics embodied in the capacity sequence, and forecasts future capacity changes to determine the cycle at which the RUL ends.
Experimental results demonstrate the effectiveness of the prediction model for capacity-sequence prediction and provide the model basis for the subsequent work.
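The LSTM structure underlying the capacity-prediction model can be illustrated, very roughly, by stepping a single NumPy LSTM cell over a toy capacity sequence. All weights, sizes, and capacity values below are invented for illustration; they are not taken from the thesis.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step: gates computed from input x and previous state (h, c).

    W, U, b hold the stacked parameters of the input, forget, output and
    candidate gates; all names here are illustrative, not from the thesis.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    z = W @ x + U @ h + b          # stacked pre-activations, shape (4*n,)
    n = h.shape[0]
    i = sigmoid(z[0*n:1*n])        # input gate
    f = sigmoid(z[1*n:2*n])        # forget gate
    o = sigmoid(z[2*n:3*n])        # output gate
    g = np.tanh(z[3*n:4*n])        # candidate cell state
    c_new = f * c + i * g          # update the cell memory
    h_new = o * np.tanh(c_new)     # new hidden state
    return h_new, c_new

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8                 # one capacity value in, 8 hidden units
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for capacity in [1.85, 1.84, 1.82, 1.81]:   # toy degradation sequence (Ah)
    h, c = lstm_step(np.array([capacity]), h, c, W, U, b)
print(h.shape)  # (8,)
```

In the thesis, a trained network of such cells (with an output layer on top of `h`) is rolled forward to forecast future capacities; the code above only shows the recurrence itself.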
Second, a probability-distribution model for RUL prediction is established by applying Dropout in the neural network structure.
Besides its usual role in a network as a means of avoiding over-fitting, Dropout is introduced as a way of approximating a Bayesian neural network and thereby describing model uncertainty.
An approximate probability-distribution model of the RUL prediction is then built with the Monte Carlo Dropout method.
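The Monte Carlo Dropout idea can be sketched as follows: keep dropout active at prediction time, run the network many times, and treat the spread of the outputs as model uncertainty. This is a minimal NumPy sketch with a toy two-layer network; the weights, dropout rate, and input features are invented, not the thesis's model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy two-layer regression network with fixed (pretend-trained) weights.
W1 = rng.normal(0, 0.5, (16, 4)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (1, 16)); b2 = np.zeros(1)

def forward(x, p_drop=0.2, mc_dropout=True):
    h = np.maximum(0.0, W1 @ x + b1)            # ReLU hidden layer
    if mc_dropout:                              # MC Dropout: keep dropout ON at test time
        mask = rng.random(h.shape) >= p_drop
        h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return (W2 @ h + b2)[0]

x = np.array([0.9, 0.8, 0.7, 0.6])             # e.g. recent capacity features (invented)
samples = np.array([forward(x) for _ in range(200)])
mean, std = samples.mean(), samples.std()       # predictive mean and uncertainty
print(f"prediction {mean:.3f} +/- {std:.3f}")
```

The sample standard deviation plays the role of the approximate posterior uncertainty that the thesis attaches to each RUL prediction.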
Finally, building on single-sequence capacity prediction, a multi-sequence prediction method based on similar sequences is proposed.
Because batteries of the same type degrade in highly similar ways, sequences similar to the one under test are added to the prediction model to assist prediction.
Experiments show that similarity-based prediction improves the stability of the prediction model and its ability to react to sudden capacity changes.
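The thesis measures capacity-sequence similarity with the dynamic time warping (DTW) algorithm (Section 4.3.1). A compact DTW distance, as a sketch of the kind of comparison involved, with invented toy sequences:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignment moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Toy degradation curves: s2 has the same shape as s1 but lags slightly;
# s3 degrades much faster.
s1 = [2.0, 1.9, 1.8, 1.7, 1.6]
s2 = [2.0, 2.0, 1.9, 1.8, 1.7]
s3 = [2.0, 1.5, 1.0, 0.8, 0.6]
print(dtw_distance(s1, s2) < dtw_distance(s1, s3))  # the similar pair is closer: True
```

Because DTW aligns sequences elastically in time, it ranks a lagged copy of the same degradation trend as closer than a sequence with a different trend, which is exactly what sequence selection for the multi-sequence model needs.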
Keywords: lithium-ion battery; remaining useful life; deep learning; model uncertainty

Contents

Abstract
Chapter 1  Introduction
  1.1  Background, purpose and significance of the research
  1.2  Research status at home and abroad
    1.2.1  Model-based RUL prediction methods
    1.2.2  Data-driven RUL prediction methods
    1.2.3  Analysis of the research status
  1.3  Main research contents and structure of the thesis
Chapter 2  LSTM-based capacity prediction for lithium batteries
  2.1  Introduction
  2.2  Data preprocessing
  2.3  Overview of deep learning
  2.4  Capacity prediction based on LSTM neural networks
  2.5  Experimental results and analysis of capacity-sequence prediction
  2.6  Summary
Chapter 3  Uncertainty modeling of lithium battery remaining useful life
  3.1  Introduction
  3.2  Bayesian neural networks
    3.2.1  Bayesian methods
    3.2.2  Introduction to Bayesian neural networks
    3.2.3  Suitability analysis of Bayesian neural networks
  3.3  Approximate Bayesian neural networks based on Dropout
    3.3.1  The Dropout method in neural networks
    3.3.2  Approximate Bayesian inference based on Dropout
  3.4  Dropout-based uncertainty modeling of RUL
  3.5  Experimental results and analysis
  3.6  Summary
Chapter 4  Similar-sequence-based RUL prediction for lithium batteries
  4.1  Introduction
  4.2  Analysis of sudden changes in capacity sequences
  4.3  Similarity analysis of lithium battery capacity sequences
    4.3.1  Description of the DTW algorithm
    4.3.2  Similarity comparison of lithium battery capacities
  4.4  RUL prediction based on similar sequences
  4.5  Experimental results and analysis
    4.5.1  Results and analysis of similar-capacity-sequence prediction
    4.5.2  Results and analysis of RUL prediction experiments
  4.6  Summary
Conclusion
References
Publications during the master's program
Statement of originality and usage authorization
Acknowledgements

Chapter 1  Introduction

1.1  Background, purpose and significance of the research

With the rapid development of industry, the demand for oil and other traditional energy sources continues to grow while non-renewable resources are increasingly depleted. The energy economy based on traditional fuels therefore faces ever more severe challenges, and the need for efficient, clean energy is pressing.
Bayesian Deep Learning

This post briefly introduces what Bayesian deep learning is, how a Bayesian deep network makes predictions, and how Bayesian deep learning differs from ordinary deep learning.
As for how Bayesian deep networks are trained, this post can only give a rough outline.
(I dare not mislead anyone.) Before introducing Bayesian deep learning, let us first review Bayes' formula.
Bayes' formula:

p(z|x) = p(x, z) / p(x) = p(x|z) p(z) / p(x),    (1)

where p(z|x) is called the posterior, p(x, z) the joint probability, p(x|z) the likelihood, p(z) the prior, and p(x) the evidence.
If we further introduce the law of total probability, p(x) = ∫ p(x|z) p(z) dz, formula (1) can be rewritten as

p(z|x) = p(x|z) p(z) / ∫ p(x|z) p(z) dz.    (2)

If z is a discrete variable, the integral sign ∫ in the denominator of (2) is simply replaced by a summation sign Σ.
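For a discrete z, formula (2) is just a normalized product of likelihood and prior. A tiny worked example, with numbers invented for illustration:

```python
# Two hypotheses z in {0, 1} with prior p(z) and likelihood p(x|z) for one observed x.
prior = {0: 0.7, 1: 0.3}
likelihood = {0: 0.2, 1: 0.9}    # p(x | z): x is much more likely under z = 1

# p(x) = sum_z p(x|z) p(z)  -- the discrete form of the evidence in (2)
evidence = sum(likelihood[z] * prior[z] for z in prior)
posterior = {z: likelihood[z] * prior[z] / evidence for z in prior}
print(posterior)   # z = 1 becomes more probable after seeing x
```

Even though z = 1 starts with the smaller prior, its larger likelihood makes it the more probable hypothesis a posteriori; the posterior always sums to one because of the normalization by the evidence.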
(A probability mass function is usually written with a capital P(·) and a probability density function with a lowercase p(·); for simplicity we do not distinguish them here and use the continuous case in the examples.)

What is Bayesian deep learning?

The simplest neural network structure, a single neuron, is shown in Fig. 1 (Figure 1: a neuron). In deep learning, the weights w_i (i = 1, ..., n) and the bias b are definite values, for example w_1 = 0.1, b = 0.2. Even when we update them by gradient descent, w_i = w_i − α ∂J/∂w_i, it remains true that each w_i and b is a single definite value.
So what is Bayesian deep learning? Turning w_i and b from definite values into distributions: that is Bayesian deep learning.
Bayesian deep learning holds that every weight and every bias should be a distribution, rather than a definite value.
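To make the contrast concrete: an ordinary network stores one number per weight, while a Bayesian network stores a distribution over each weight and predicts by averaging over samples from it. A toy single-neuron sketch, where the Gaussian means and standard deviations are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([1.0, 2.0])

# Ordinary deep learning: point estimates, one fixed prediction.
w_point, b_point = np.array([0.1, -0.3]), 0.2
y_point = np.tanh(w_point @ x + b_point)

# Bayesian deep learning: each weight/bias is a distribution, here a Gaussian.
w_mu, w_sigma = np.array([0.1, -0.3]), np.array([0.05, 0.05])
b_mu, b_sigma = 0.2, 0.05

ys = []
for _ in range(1000):                     # Monte Carlo average over the weight distribution
    w = rng.normal(w_mu, w_sigma)
    b = rng.normal(b_mu, b_sigma)
    ys.append(np.tanh(w @ x + b))
ys = np.array(ys)
print(y_point, ys.mean(), ys.std())       # mean close to the point estimate, plus a spread
```

The extra output, `ys.std()`, is what the point-estimate network cannot provide: a measure of how uncertain the prediction is under the weight distributions.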
An Intrusion Detection Method Based on Improved Naive Bayes

Sun Cheng, Xing Jianchun, Yang Qiliang, Han Deshuai (College of Defense Engineering, PLA University of Science and Technology, Nanjing 210007, Jiangsu, China)

Abstract: The network security situation of industrial control systems is becoming more and more serious, and the intrusion threats they face are increasingly complex. As the openness and complexity of networks continue to grow, these threats keep deepening, so efficient intrusion detection methods must be designed to resist them. The naive Bayes classification algorithm is an effective and concise classifier that can be applied well to intrusion detection in industrial control system networks. However, its attribute-independence assumption prevents it from representing relationships among attribute variables, which hurts its classification performance. To address this defect, and drawing on previous work, a naive Bayes classification algorithm with an improved compositive weighting coefficient (Compositive Weighted Naive Bayes Classification, CWNBC) is proposed. The algorithm considers both the influence of different attribute values on the classification result and the influence of the content of the attribute values, cleverly introducing a compositive weighting coefficient. Experiments comparing the algorithm with several others show that it achieves higher classification accuracy and is better suited to intrusion detection in relatively complex industrial control system networks.

Journal: Microcomputer & Its Applications, 2017, 36(1): 8-10, 14. Keywords: naive Bayes; weighting coefficient; attribute value.

The rapid development of network technology has brought great convenience to people's lives, but it has also brought considerable security threats.
As the openness and complexity of networks continue to increase, the security problems facing industrial control systems (ICS) have become increasingly prominent, and the intrusion threats against them keep growing.
Intrusion detection is an important component of ICS network security defense and an important means of protecting system security, and it has long drawn the attention of experts and scholars at home and abroad.
Intrusion detection technology is a security measure that aims to identify actions that can endanger the integrity, confidentiality, and availability of information resources [1].
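A plain naive Bayes classifier, the baseline the paper improves on, scores a class as its prior times a product of per-attribute likelihoods; the CWNBC idea then weights each attribute's contribution, which can be sketched as raising each likelihood to an attribute-specific exponent. A minimal sketch with invented toy data and weights (the paper's actual weighting coefficients are computed differently):

```python
from collections import Counter, defaultdict

def train_nb(X, y):
    """Estimate class counts and per-attribute value counts from categorical data."""
    classes = Counter(y)
    cond = defaultdict(Counter)              # (class, attribute index) -> value counts
    for row, c in zip(X, y):
        for j, v in enumerate(row):
            cond[(c, j)][v] += 1
    return classes, cond

def score(row, c, classes, cond, weights, total):
    p = classes[c] / total                   # prior P(c)
    for j, v in enumerate(row):
        counts = cond[(c, j)]
        # Laplace-smoothed likelihood P(x_j = v | c)
        lik = (counts[v] + 1) / (sum(counts.values()) + len(counts) + 1)
        p *= lik ** weights[j]               # weighted likelihood (CWNBC-style)
    return p

# Toy "connection records": (protocol, flag) -> normal / attack
X = [("tcp", "SF"), ("tcp", "SF"), ("udp", "SF"), ("tcp", "REJ"), ("tcp", "REJ")]
y = ["normal", "normal", "normal", "attack", "attack"]
classes, cond = train_nb(X, y)
weights = [0.5, 2.0]                         # invented: the flag attribute weighted higher

sample = ("tcp", "REJ")
best = max(classes, key=lambda c: score(sample, c, classes, cond, weights, len(y)))
print(best)  # attack
```

With all weights equal to 1 this reduces to ordinary naive Bayes; unequal weights let informative attributes dominate the product, which is the effect the compositive weighting coefficient is designed to achieve.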
Analysis of Adverse Events of Abemaciclib in Breast Cancer Patients Based on the US FAERS Database

Su Xiaohan, Zeng Jiao, Li Xue, Liu Lixin, Hou Lingmi, Li Jinsui. Chinese Journal of Pharmacovigilance, 2024, 21(5).

Objective: To analyze adverse drug events (ADEs) associated with abemaciclib in breast cancer patients reported to the US FDA Adverse Event Reporting System (FAERS), and to provide a reference for the rational use of abemaciclib in breast cancer patients.
Methods: ADE reports related to abemaciclib use in breast cancer patients were retrieved from FAERS for October 1, 2017 to September 30, 2023 and classified by system organ class (SOC) and preferred term (PT) according to the Medical Dictionary for Regulatory Activities (MedDRA). Suspected risk signals were mined and analyzed with the reporting odds ratio (ROR) method, the proportional reporting ratio (PRR) method, the Bayesian confidence propagation neural network (BCPNN) method, and the multi-item gamma Poisson shrinker (MGPS).
Results: After removing duplicates, 5,579 ADE reports involving abemaciclib were retrieved for the study period, yielding 46 valid signals involving 16 SOCs.
The three most frequent ADEs were diarrhea, drug ineffectiveness, and dehydration; the three strongest signals were loss of therapeutic response, dairy intolerance, and subcutaneous emphysema.
In addition, 19 suspicious signals not mentioned in the label, including renal tubular necrosis, kidney injury, sleep disorder (insomnia type), pneumomediastinum, erythema multiforme, disseminated intravascular coagulation, and transient ischemic attack, deserve attention.
Conclusion: When this drug is used, in addition to closely observing patients for the ADEs mentioned in the label, ADEs not mentioned in the label, such as renal tubular necrosis and kidney injury, should be monitored with particular attention to ensure medication safety.
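The disproportionality measures named above are computed from a 2×2 contingency table (target drug/event versus all other drugs/events). A sketch of ROR with a Wald 95% confidence interval and of PRR, using invented counts rather than FAERS data:

```python
import math

# 2x2 table: a = target drug & target ADE, b = target drug & other ADEs,
# c = other drugs & target ADE, d = other drugs & other ADEs (invented numbers).
a, b, c, d = 40, 960, 200, 98800

ror = (a * d) / (b * c)                          # reporting odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)            # standard error of ln(ROR)
ci_low = math.exp(math.log(ror) - 1.96 * se)
ci_high = math.exp(math.log(ror) + 1.96 * se)

prr = (a / (a + b)) / (c / (c + d))              # proportional reporting ratio

print(f"ROR = {ror:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), PRR = {prr:.2f}")
```

A common screening convention for these methods treats a drug–event pair as a signal when the number of reports is at least 3 and the lower bound of the 95% CI exceeds 1; BCPNN and MGPS apply analogous thresholds to their Bayesian shrinkage statistics.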
Application of Bayesian Neural Network in Electrical Impedance Tomography

Jouko Lampinen and Aki Vehtari, Laboratory of Computational Engineering, Helsinki University of Technology, P.O. Box 9400, FIN-02015 HUT, Finland
Kimmo Leinonen, Ahlstrom Pumps Corporation, P.O. Box 18, FIN-48601 Karhula, Finland

Abstract

In this contribution we present a method for solving the inverse problem in electrical impedance tomography with neural networks. The problem of reconstructing the conductivity distribution inside an object from potential measurements on the surface is known to be ill-posed, requiring efficient regularization techniques. We demonstrate that a statistical inverse solution, in which the mean of the inverse mapping is approximated with a neural network, gives promising results. We study the effect of input and output data representation by simulations and conclude that projection onto the principal axes is a feasible data transformation. We also demonstrate that Bayesian neural networks, which average over all network models weighted by each model's posterior probability, provide the best reconstruction results. With the presented approach, some target variables, such as the void fraction (the ratio of gas and liquid), may be estimated directly without the actual image reconstruction. We also demonstrate that the solutions are very robust against noise in the inputs.

Another, more accurate approach to image reconstruction in EIT is based on iterative inversion of the forward problem. A numerical minimization method, such as the Newton-Raphson algorithm, is used to search for a conductance distribution that minimizes the difference between the measured potentials and those obtained by computing the potentials with, e.g., the finite element method.
This approach leads to computationally more complex algorithms, but gives much better results and offers more flexibility for controlling the regularization of the inverse, by defining smoothing priors for the resulting image [8].

In this study we consider a simulated EIT problem of detecting gas bubbles in a circular water pipe. The bubble formations consist of one to ten random overlapping circular bubbles, drawn so that the void fraction (area of the bubbles / area of the pipe) is roughly Gamma distributed with a mean of 20%. In the following tests we used 500 samples in the training set and 500 in the test set. Fig. 1 shows a sample bubble and the resulting equipotential curves. The potential signals from which the image is to be recovered are shown in Fig. 2.

We present a statistical inverse approach for the EIT problem, based on approximating the inverse mapping with a Multi Layer Perceptron (MLP) neural network. Often the end goal of using process tomography is not the reconstructed image itself, but some index computed from the image, such as the void fraction or a mixing index indicating how well two substances have been mixed. We demonstrate that in such a situation it may be feasible to estimate the target variable directly, without the actual image reconstruction.

There are a few studies on using neural networks in the EIT problem. In [7] the reconstruction image was directly estimated by a neural network from the potential signals. The solution was demonstrated to be very robust against noise in the input signals. However, the resolution of the image in such an approach is in practice limited to some tens or hundreds of pixels, as networks with several hundreds of outputs are rather difficult to use and train, and often require

1 Introduction

In electrical impedance tomography (EIT) the aim is to recover the internal structure of an object from surface measurements.
A number of electrodes are attached to the surface of the object, current patterns are injected through the electrodes, and the resulting potentials are measured. The inverse problem in EIT, estimating the conductivity distribution from the surface potentials, is known to be severely ill-posed, so some regularization method must be used to obtain feasible results [8]. Typically, the inverse problem in EIT is solved by assuming the system is linear and computing a regularized inverse matrix. This produces a fast linear reconstruction algorithm. However, the linear assumption is valid only if the perturbation from the linearization point is small, i.e., there are no large areas where the conductance differs much from the background.

Figure 1: Example of the EIT problem. The simulated bubble is bounded by the circles. The current is injected from the electrode with the lightest color and the opposite electrode is grounded. The color and the contour curves show the resulting potential field.

Our results indicate that direct estimation of the target variables without the explicit reconstruction may be an appropriate solution, as the reconstruction may be a much more complex problem than the actual end goal.

2 Bayesian Neural Networks

Traditionally, neural networks have been trained by searching for a set of weights that minimizes the error between the target values and the network outputs. In Bayesian learning the objective is to find the predictive distribution for the output y given the input x and the training data D (see [2] and [4] for an introduction to Bayesian neural networks):

p(y|x, D) = ∫ p(y|x, w, β) p(D|w) p(w|α) p(α, β) / p(D) dw dα dβ,

where we compute the marginal distribution over all the parameters w and the hyperparameters α, which determine the prior distributions of the parameters, and β, which define the noise variance.
Intuitively, the marginalization is equal to taking the average prediction of all the models p(y|x, w, β) weighted by their goodness, which is the posterior probability of the model given the training data D.

Figure 2: Relative changes in potentials compared to the homogeneous background. The eight curves correspond to injections from eight different electrodes.

In practice we use Markov chain Monte Carlo techniques to approximate the integral by the mean of samples drawn from the posterior distribution of the models [4]. In the following experiments we have used the FBM software package (see footnote 1) that implements the methods described in [4]. The resulting model after learning is a collection of networks with different parameters w, such that the average of the outputs of the networks approximates the conditional expectation of the output given the input. In this work we have used 20 samples from the posterior distribution, so the network model is equal to having a committee of 20 networks.

non-standard regularization to smooth the results of neighboring pixels. In [1] a linear neural network was used to estimate the conductance in the triangles of the FEM mesh. As the network was linear, the actual advantage over linear pseudoinverse solutions was due to the iterative estimation of the inverse matrix with a gradient method, which proceeds slowly in the direction of the smallest eigenvectors of the inverse matrix, yielding a natural regularization for the inverse. In [6] a combination of principal component analysis (PCA) and a neural network was used for computing a scalar variable, the mixing index, from reconstructed tomographic images.

3 Data Representation

One of the key issues in the approach presented here is the transformation of both input and output data by principal component projection and application of the neural network in this lower-dimensional eigenspace.
This serves three purposes: first, it detaches the actual inverse problem from the data representation of the potential signals and image data, allowing the image resolution to be changed afterwards by changing the resolution of the eigenimages. Secondly, the reconstruction of the image as a superposition of the eigenimages makes the inverse more robust against noise, as shown by the experiments. Thirdly, the dimensionality of the reconstruction problem is much reduced, and it is matched to the actual complexity of the bubble distributions (determined by the eigenvalues of the correlation matrix). The reconstruction equations are then

u_p = V_u u,   g_p = F(u_p),   g = V_g^T g_p,   (1)

where u is the potential signal, V_u is the basis spanned by the largest eigenvectors of u (we used N_u = 20), u_p is the projection of u on V_u, g is the reconstructed image, V_g is the basis of eigenvectors of the autocorrelation matrix of the images, g_p is the projection of the image g on the basis V_g, and F(·) is the non-linear function giving the inverse (the Bayesian MLP).

1 <URL:/~radford/fbm.software.html>

Eigenimages could be computed from the training data [3], but a more principled approach is to construct a model for the autocorrelation of the conductivity distribution and compute the eigenvectors of the autocorrelation matrix. The autocorrelation model we used consisted of a position-dependent variance term S(x, y) and a position-independent autocorrelation term A(Δx, Δy):

R_{xy,x'y'} = A(x − x', y − y') S(x, y) S(x', y'),   (2)

Figure 3: Eigenimages from the autocorrelation model in (2) and (3).

where we used rather generic assumptions: the autocorrelation of pixels decays linearly as a function of the distance sqrt(Δx² + Δy²) and reaches zero correlation at distance 0.5 (half of the radius of the pipe). The variance was modeled as a sum of two Gaussians,

S(x, y) = Σ_{k=1}^{2} Z_k exp(−(x² + y²) / (2σ_k²)),   (3)

where the parameters σ_k and Z_k were determined by a maximum likelihood fit to the training data. The resulting basis is shown in Fig. 3.
Note that we can control the accuracy of the image representation in different locations of the image by changing the autocorrelation length. A shorter autocorrelation results in more eigenimages coding the location, and vice versa. Eigenimages computed from the autocorrelation model and those computed from the training data have been compared in [3].

Tables 1 and 2 and Fig. 4 show the effect of using the PCA projection as the input or output transformation. The figures are computed with an MLP early stopping committee using a different data partition for each member, as it is a faster method than the Bayesian MLP. Clearly the best performance is obtained with the PCA projection at both ends. Some examples of the reconstructed images are shown in Fig. 5.

4 Reconstruction Results

In this section we present the results of the Bayesian MLP method for the EIT problem. We used a one-hidden-layer MLP with 20 hidden units, a per-case normal noise variance model, vague priors, and MCMC-run specifications similar to those used in [4, 5]. We ran five long chains and discarded the first half of each chain. Finally, 20 networks from the posterior distribution of the network parameters were used. The PCA transformation was used for both the potential signals and the images.

Figure 5: Examples of the effect of data representation. The coding schemes are, from top: direct in - direct out, direct in - PCA out, PCA in - direct out, PCA in - PCA out. The color in the figures shows the probability of a bubble in each pixel. The data dimensions were: direct in: 256, PCA in: 20, direct out: 88, PCA out: 60.

Table 1: Effect of data coding. Mean relative absolute error in void fraction, mean(|v̂f − vf|/vf), %.

Image coding    | Voltage coding: Direct 256 | Voltage coding: PCA 20
Direct 10x10    | 40.3                       | 9.4
PCA 60 (41x41)  | 46.7                       | 8.7

Table 2: Effect of data coding. Reconstruction error, percentage of erroneously segmented pixels.

Image coding    | Voltage coding: Direct 256 | Voltage coding: PCA 20
Direct 10x10    | 17.5                       | 9.7
PCA 60 (41x41)  | 14.1                       | 6.7

Figure 4: Scatter plots of the void fraction estimates with the different data codings (direct in - direct out, PCA in - direct out, direct in - PCA out, PCA in - PCA out).

Figure 6: Example of image reconstructions with the MLP ESC (upper row) and the Bayesian MLP (lower row).

Table 3: Errors in reconstructing the bubble shape and estimating the void fraction from the reconstructed images. See text for explanation of the different models.

Method                  | Classification errors % | Relative error in void fraction %
MLP ESC                 | 6.7                     | 8.7
Bayesian MLP            | 5.9                     | 8.1
Bayesian MLP, direct VF |                         | 3.4

As the baseline result for MLPs we used an early stopping committee of 20 MLP networks (MLP ESC), with a different division of the data into training and stopping sets for each member. The networks were initialized to near-zero weights to guarantee that the mapping is smooth in the beginning. When used with caution, an MLP early stopping committee is a good baseline method for neural networks. Fig. 6 shows examples of the image reconstruction results. Table 3 shows the quality of the image reconstructions with the models, measured by the error in the void fraction and the percentage of erroneous pixels in the segmentation, over the test set.

An important goal in the studied process tomography application was to estimate the void fraction, which is the proportion of gas and liquid in the image. With the proposed approach such goal variables can be estimated directly without explicit reconstruction of the image.
The bottom row in Table 3 shows the relative absolute error in estimating the void fraction directly from the projections of the potential signals. Note that most of the eigenimages in Fig. 3 have zero mean, so that they only code the shape of the distribution and make no contribution to the void fraction of the reconstruction. Hence the void fraction is clearly a lower-dimensional subproblem of the whole reconstruction problem. Consequently, the void fraction can be estimated to higher accuracy directly from the measurements (see also [3]).

With Bayesian methods we can easily calculate confidence intervals for the outputs. Fig. 7 shows the scatter plot of the void fraction versus the estimate by the Bayesian neural network. The 10% and 90% quantiles are computed directly from the posterior distribution of the model output.

Figure 7: Scatter plot of the void fraction estimate with 10% and 90% quantiles.

5 Robustness to noise

A special virtue of the solution proposed here is its very high robustness to noise. A similar property of the neural network inverse was also reported in [7]. In the current approach the PCA projection of the potential signals and images contributes to the suppression of noise effects, as uncorrelated noise is also largely uncorrelated with the eigenvectors of the signals.

Figure 8: Effect of additive Gaussian noise on the estimation of the void fraction, directly and from the reconstructed image.

Figure 9: Effect of additive Gaussian noise on the reconstruction of the images.

Fig. 8 shows the effect of noise on the inputs to the direct estimation of the void fraction. The noise was additive Gaussian noise with standard deviation given as a percentage of the maximum amplitude of the potential signal.
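The noise injection just described can be sketched as follows; a minimal illustration assuming the stated noise model (Gaussian, standard deviation given as a percentage of the maximum signal amplitude). The helper name and the stand-in signals are hypothetical; the noise levels are those swept in Figs. 8 and 9.

```python
import numpy as np

def add_measurement_noise(signals, level_percent, rng):
    """Additive Gaussian noise with standard deviation given as a
    percentage of the maximum amplitude of the potential signals."""
    sigma = level_percent / 100.0 * np.max(np.abs(signals))
    return signals + rng.normal(0.0, sigma, size=signals.shape)

rng = np.random.default_rng(2)
signals = rng.normal(size=(10, 256))          # ten 256-dim potential signals
for level in [1, 2, 4, 8, 16, 32, 64, 128]:   # noise levels from Figs. 8-9
    noisy = add_measurement_noise(signals, level, rng)
    assert noisy.shape == signals.shape
```

Each noisy copy would then be passed through the PCA projection and the network to measure the degradation of the void fraction estimate and of the reconstructed image.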
Fig. 9 shows the effect of noise on the image reconstruction results. Note that the expected noise level in an industrial environment is about 2–5 %, which should have no significant effect on the inverse solutions obtained with the proposed techniques.

6 Conclusion

In this contribution we have presented a method for solving the ill-posed inverse problem in electrical impedance tomography with Bayesian neural networks. With the proposed system:
• the inverse can be computed in a feedforward manner, facilitating real-time monitoring of the process;
• the image resolution can be chosen independently of the inverse model;
• prior knowledge can be used to build the autocovariance model;
• the solution is demonstrated to be highly immune to noise;
• some target variables, such as the void fraction, can be estimated directly, without the actual image reconstruction;
• confidence intervals for the outputs can be calculated easily;
• the correct model complexity is controlled by the Bayesian methods.
Currently we are preparing tests of the method with real data in cooperation with the industrial partner of the project.

Acknowledgments

This study was partly funded by TEKES Grant 40888/97 (Project PROMISE, Applications of Probabilistic Modeling and Search).

References

[1] Adler, A. & Guardo, R. (1994). A neural network image reconstruction technique for electrical impedance tomography. IEEE Transactions on Medical Imaging, 13(4):594–600.
[2] Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press.
[3] Lampinen, J., Vehtari, A. & Leinonen, K. (1999). Using Bayesian neural network to solve the inverse problem in electrical impedance tomography. In Proceedings of the 11th Scandinavian Conference on Image Analysis SCIA'99. Kangerlussuaq, Greenland.
[4] Neal, R. M. (1996). Bayesian Learning for Neural Networks, vol. 118 of Lecture Notes in Statistics. Springer-Verlag.
[5] Neal, R. M. (1998). Assessing relevance determination methods using DELVE. In C. M. Bishop, ed., Neural Networks and Machine Learning, vol. 168 of NATO ASI Series F: Computer and Systems Sciences. Springer-Verlag.
[6] Ni, X., Simons, S. J. R., Williams, R. A. & Jia, X. (1997). Use of PCA and neural network to extract information from tomographic images for process control. In Proc. Frontiers in Industrial Process Tomography, vol. II, pp. 309–314. Engineering Foundation, New York, USA.
[7] Nooralahiyan, A. Y. & Hoyle, B. S. (1997). Three-component tomographic flow imaging using artificial neural network reconstruction. Chemical Engineering Science, 52(13):2139–2148. See also /staff/een6njb/Research/processtom.html.
[8] Vauhkonen, M., Kaipio, J., Somersalo, E. & Karjalainen, P. (1997). Electrical impedance tomography with basis constraints. Inverse Problems, 13(2):523–530.
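As a concrete illustration of how the confidence intervals mentioned in the conclusion are obtained: the 10% and 90% quantiles in Fig. 7 come from the spread of the outputs of the networks sampled from the parameter posterior. A minimal sketch, with random stand-in weights in place of the real MCMC samples and a scalar (void fraction) output:

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    """One-hidden-layer MLP with tanh hidden units (20 units, as in the paper)."""
    return np.tanh(x @ W1 + b1) @ W2 + b2

rng = np.random.default_rng(0)
# Stand-ins for the 20 networks drawn from the parameter posterior by MCMC.
samples = [(rng.normal(0, 0.3, (20, 20)), rng.normal(0, 0.1, 20),
            rng.normal(0, 0.3, (20, 1)), rng.normal(0, 0.1, 1))
           for _ in range(20)]

x = rng.normal(size=(100, 20))                          # 100 PCA-coded test signals
preds = np.stack([mlp(x, *s)[:, 0] for s in samples])   # (20, 100) posterior draws
estimate = preds.mean(axis=0)                           # posterior-predictive mean
lo, hi = np.quantile(preds, [0.10, 0.90], axis=0)       # quantiles as in Fig. 7
```

The same averaging over sampled networks gives the point estimate, so the intervals come essentially for free once the MCMC run is done.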