A Hybrid Genetic Algorithm for Pure Numerical Function Optimization
A Genetic-Ant Colony Fusion Algorithm for Function Optimization

Abstract: The genetic algorithm is a randomized search method that borrows the mechanisms of natural selection and natural genetics from the biological world, and it can operate directly on structured objects. However, when both convergence speed and solution quality are taken into account, a pure genetic algorithm does not necessarily show the superiority promised by its underlying principles. To address this problem, a new genetic-ant colony fusion algorithm is proposed: it uses the positive-feedback mechanism of the ant colony algorithm to raise the running speed and efficiency of the genetic algorithm, so that function optimization problems are solved better and faster.

Keywords: genetic algorithm; ant colony algorithm; algorithm fusion; function optimization

The genetic algorithm (GA) [1] is an optimization algorithm that simulates natural selection and genetic evolution; it was proposed by Professor Holland of the University of Michigan in the 1970s. Its main features are simplicity, generality, and strong robustness; it suits parallel and distributed processing and has a wide range of applications. The ant colony algorithm (ACA) [2] was proposed by the Italian scholar Dorigo in his doctoral dissertation in the early 1990s. It is a recently developed bio-inspired optimization algorithm that simulates the foraging behavior of ant colonies in the insect world. The algorithm adopts a positive-feedback, parallel, autocatalytic mechanism and offers strong robustness, an excellent distributed computation scheme, and easy combination with other methods. Its drawback is that pheromone is scarce at the start of a run, so solving is slow at first; the standard pheromone update behind this positive feedback is sketched below.
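For reference, the positive-feedback mechanism referred to here is the textbook ant-system pheromone update; the sketch below uses the conventional evaporation rate ρ, deposit constant Q, and tour-length form Q/L_k, which are standard conventions rather than values taken from this abstract:

```python
def update_pheromone(tau, ant_tours, tour_lengths, rho=0.5, Q=1.0):
    """Standard ant-system update: evaporate, then deposit pheromone on
    every edge each ant used, in proportion to tour quality (Q / L_k).
    tau is a dict mapping edge (i, j) -> pheromone level."""
    for edge in tau:                      # evaporation: tau *= (1 - rho)
        tau[edge] *= (1.0 - rho)
    for tour, length in zip(ant_tours, tour_lengths):
        for edge in tour:                 # deposit: the positive feedback
            tau[edge] += Q / length
    return tau
```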
Solving optimization problems takes a large share of genetic algorithm research; combinatorial optimization problems such as the TSP have long been a very active topic for genetic algorithms. Although the genetic algorithm is more robust than other traditional search methods, it makes no use of the feedback information produced during the computation, which often leads to pointless redundant iterations and steadily falling efficiency. Moreover, the genetic algorithm is better at global search, while its local search ability is weak. A genetic algorithm can reach roughly 90% of the optimum very quickly, but reaching the true optimum takes a long time. Comparative experiments also show that when both convergence speed and solution quality are considered, a pure genetic algorithm is not necessarily superior to other search methods. Therefore, besides further improving the basic theory and methods, it is necessary to combine the genetic algorithm with other methods such as neural networks, simulated annealing, or expert systems. Many research results show that such hybrid models can effectively improve the local search ability of the genetic algorithm and thus further improve its convergence speed and solution quality; the generic shape of such a hybrid is sketched below.
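A minimal skeleton of the hybrid framework described above, assuming `local_search` is any of the methods named (hill climbing, simulated annealing, etc.); all callables and names here are illustrative placeholders, not operators from a specific paper:

```python
def hybrid_ga(init_population, evaluate, select, crossover, mutate,
              local_search, max_gens):
    """Generic hybrid GA skeleton: each generation applies the usual
    genetic operators, then refines individuals by local search."""
    pop = init_population()
    for _ in range(max_gens):
        pop = select(pop, evaluate)                       # global: selection
        pop = crossover(pop)                              # global: recombination
        pop = mutate(pop)                                 # global: mutation
        pop = [local_search(x, evaluate) for x in pop]    # local refinement
        # elitism and termination tests would go here
    return min(pop, key=evaluate)                         # best (minimization)
```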
An Improved Hybrid Genetic Algorithm

[Abstract] To improve the search efficiency of the genetic algorithm, an improved genetic algorithm is presented. The algorithm proposes a new crossover operation and a particle-swarm-like mutation, which enlarge the search range. Tests on classic benchmark functions show that, compared with an ordinary adaptive genetic algorithm, the improved algorithm gives satisfactory results in best function value, average convergence generation, and convergence probability.

Keywords: adaptive genetic algorithm; fitness function; crossover operation; particle-swarm-like mutation
CLC number: TP3; Document code: A; Received: January 12, 2012

I. Introduction

The genetic algorithm (GA) was first proposed by Professor Holland of the University of Michigan in 1975 and was later improved and popularized by De Jong, Goldberg, and others; it is now widely applied to many kinds of problems. It is a global probabilistic optimization search method that simulates the process and mechanism of biological evolution in nature. Early genetic algorithms were prone to premature convergence and poor local convergence during evolution, and many improved algorithms have been proposed to overcome these problems. Addressing these shortcomings, this paper adopts real-number encoding and improves the crossover and mutation operations of the genetic algorithm, raising its global search ability; simulation experiments with the improved algorithm show that it has a high convergence probability and a small average number of convergence generations.

II. The Improved Genetic Algorithm

1. Improved crossover operation.
The genetic algorithm in this paper uses real-number encoding. The improved crossover first generates three uniformly distributed random numbers, a ∈ [0,1], b ∈ [-1,1], and c ∈ [-1,1]. Let x1 and x2 be the two parents to be crossed, each an m-dimensional individual, so x1 = (x11, x12, ..., x1m) and x2 = (x21, x22, ..., x2m). A displacement variable Δij is defined for each component; the bound terms in the extracted formula were lost, and from context Δij is presumably the distance from xij to the nearer bound of variable j:

    Δij = min{ (xij - x_j^min), (x_j^max - xij) },  i = 1, 2; j = 1, 2, ..., m

Two operations are then performed:

    x1' = x1 + b·Δ1,  x2' = x2 + c·Δ2                      (1)
    x1'' = a·x1' + (1-a)·x2',  x2'' = a·x2' + (1-a)·x1'    (2)

x1', x2', x1'', and x2'' are the offspring produced by the two operations; from these four offspring, the two with the highest fitness are kept for the next generation.

This operation effectively prevents "inbreeding" between two numerically similar individuals (if two numerically similar individuals underwent only operation (2), population diversity would drop rapidly). Because of how b and c are drawn, the generated x1' and x2' are two unrelated, mutually independent individuals, while the offspring determined by (2) still inherit useful traits from the parents. At the same time, the displacement adjustment of (1) lets the offspring produced by (2) cover a wider range than those of ordinary arithmetic crossover, enlarging the algorithm's search range and preventing the search from falling into one local region and becoming "premature". Finally, a competition mechanism picks the two best of the four offspring, which preserves diversity while speeding up convergence.
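A minimal sketch of this two-stage crossover, assuming box constraints `lower`/`upper` per dimension (the reconstructed meaning of Δ) and a caller-supplied fitness function; all names are illustrative:

```python
import numpy as np

def improved_crossover(x1, x2, lower, upper, fitness, rng=np.random.default_rng()):
    """Two-stage crossover: displacement step (1), arithmetic blend (2),
    then keep the 2 fittest of the 4 children (higher fitness is better)."""
    a = rng.uniform(0.0, 1.0)            # blend weight a in [0, 1]
    b = rng.uniform(-1.0, 1.0)           # displacement scale b in [-1, 1]
    c = rng.uniform(-1.0, 1.0)           # displacement scale c in [-1, 1]
    # Delta: per-dimension distance to the nearer bound (assumed reconstruction)
    d1 = np.minimum(x1 - lower, upper - x1)
    d2 = np.minimum(x2 - lower, upper - x2)
    y1 = x1 + b * d1                     # operation (1)
    y2 = x2 + c * d2
    z1 = a * y1 + (1 - a) * y2           # operation (2)
    z2 = a * y2 + (1 - a) * y1
    children = sorted((y1, y2, z1, z2), key=fitness, reverse=True)
    return children[0], children[1]
```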
Brief Paper, ACTA AUTOMATICA SINICA, Vol. 33, No. 1, January 2007

Hybrid Simplex-improved Genetic Algorithm for Global Numerical Optimization

REN Zi-Wu (1), SAN Ye (1), CHEN Jun-Feng (2)

Abstract: In this paper, a hybrid simplex-improved genetic algorithm (HSIGA), which combines the simplex method (SM) and the genetic algorithm (GA), is proposed to solve global numerical optimization problems. The hybrid algorithm adopts several improved genetic mechanisms, for example, non-linear ranking selection, competition and selection among several crossover offspring, adaptive change of mutation scaling, and stage evolution; the new population is produced through three approaches, i.e., an elitist strategy, a modified simplex strategy, and an improved genetic algorithm (IGA) strategy. Numerical experiments are included to demonstrate the effectiveness of the proposed algorithm.

Key words: Genetic algorithm, simplex method, competition and selection, mutation scaling

(Received June 17, 2005; in revised form April 26, 2006. Supported by National Natural Science Foundation of P. R. China (60474069). 1. Control & Simulation Centre, Harbin Institute of Technology, Harbin 150080, P. R. China; 2. College of Computer & Information Engineering, Hohai University, Changzhou 213022, P. R. China. DOI: 10.1360/aas-007-0091)

1 Introduction

The genetic algorithm (GA) is a stochastic, parallel search technique based on the mechanics of natural selection, genetics, and evolution, first developed by Holland in the 1970s [1]. In recent years GA has been widely applied in areas such as fuzzy systems and neural networks [2]. Although GA has become one of the popular methods for global optimization problems, its major weakness is that it may be trapped in local optima of the objective function when the problem dimension is high and local optima are numerous. This degradation in efficiency is especially apparent in applications where the parameters being optimized are highly correlated [3]. To overcome these flaws and improve GA's optimization efficiency, recent research has generally focused on two directions: improving the mechanics of the algorithm itself, such as modifying the genetic operators or using niche techniques [4], and combining GA with other optimization methods, such as BFGS methods [5] or simulated annealing (SA).

In this paper, a hybrid simplex-improved genetic algorithm (HSIGA) is proposed for global numerical optimization. The algorithm adopts several improved genetic mechanisms, such as non-linear ranking selection, competition and selection among several crossover offspring, adaptive change of mutation scaling, and an adaptive stage-evolution mechanism, to form an improved genetic algorithm (IGA). For further performance enhancement, the IGA is combined with the simplex method (SM), and the new population is generated through three approaches: an elitist strategy, a simplex strategy, and an IGA strategy. We investigate the effectiveness of the proposed algorithm by solving 10 high-dimensional test functions.

2 Hybrid simplex-improved genetic algorithm (HSIGA) for numerical optimization

The following minimization problem with fixed boundaries is considered:

    minimize f(x) = f(x_1, x_2, ..., x_n)
    subject to x_i^min <= x_i <= x_i^max,  i = 1, 2, ..., n        (1)

where x = (x_1, x_2, ..., x_n) is a variable vector in R^n, f(x) denotes the objective function, and x_i^min and x_i^max are the lower and upper bounds of x_i, so that the meaningful range of x_i is [x_i^min, x_i^max].

2.1 Improved genetic algorithm (IGA)
For a minimization problem like (1) we adopt a real-coded GA and first introduce the IGA.

1) Non-linear ranking selection operator

To select excellent chromosomes from the parent generation, non-linear ranking selection is adopted, which maps a chromosome's serial number in the ranked queue to an expected selection probability. With the population pop = {x_1, x_2, ..., x_i, ..., x_P} of P chromosomes, probability is distributed to the chromosomes from the best to the worst by a non-linear function, so the selection probability of chromosome x_i is

    p(x_i) = q' (1 - q)^(i-1),    q' = q / (1 - (1 - q)^P)        (2)

where q is the selection probability of the best chromosome and i is the serial number of the chromosome (the definition of q' was garbled in extraction; the standard normalization that makes the probabilities sum to one is shown here). After the selection probability of each chromosome is determined, roulette-wheel selection is used to pick the excellent chromosomes. Ranking selection needs neither the individuals' raw fitness nor any fitness scaling, which prevents premature convergence or stagnation to a certain extent.

2) Competition and selection among several crossover offspring

In natural evolution, parents often reproduce more than two offspring, and competition among offspring of the same parents always occurs. Inspired by this idea, competition and selection of the best among several offspring is employed in the crossover operator. Unlike the crossover operation of the simple genetic algorithm (SGA), four chromosomes are first created from the parents x_s = [x_s1, x_s2, ..., x_sn] and x_t = [x_t1, x_t2, ..., x_tn] (s != t) according to the following formulae [2]:

    y_1 = (x_s + x_t) / 2                                          (3)
    y_2 = x_max (1 - ω) + max(x_s, x_t) ω                          (4)
    y_3 = x_min (1 - ω) + min(x_s, x_t) ω                          (5)
    y_4 = ( (x_max + x_min)(1 - ω) + (x_s + x_t) ω ) / 2           (6)
    x_max = [x_1^max, x_2^max, ..., x_n^max]                       (7)
    x_min = [x_1^min, x_2^min, ..., x_n^min]                       (8)

where ω ∈ [0, 1] is a weight and max(x_s, x_t) is the vector whose elements are the element-wise maxima of x_s and x_t; for example, max([1 -2 3], [2 3 1]) = [2 3 3]. Similarly, min(x_s, x_t) denotes the element-wise minimum. Among these 4 chromosomes, the two with superior fitness are taken as the offspring of the crossover operation. It can be seen that the potential offspring can spread over the domain, and this crossover operator is superior to single arithmetic crossover or heuristic crossover.
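A compact sketch of this four-offspring competition crossover, Eqs. (3)-(6), assuming minimization over NumPy vectors; the function and variable names are illustrative:

```python
import numpy as np

def crossover_four_offspring(xs, xt, x_max, x_min, w, f):
    """Create the four candidates of Eqs. (3)-(6) from parents xs, xt,
    then keep the two with the best (lowest) objective value f."""
    y1 = (xs + xt) / 2.0                                  # Eq. (3)
    y2 = x_max * (1 - w) + np.maximum(xs, xt) * w         # Eq. (4)
    y3 = x_min * (1 - w) + np.minimum(xs, xt) * w         # Eq. (5)
    y4 = ((x_max + x_min) * (1 - w) + (xs + xt) * w) / 2  # Eq. (6)
    candidates = sorted((y1, y2, y3, y4), key=f)
    return candidates[0], candidates[1]
```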
3) Adaptive change of mutation scaling

Unlike uniform mutation or boundary mutation, a mutation operator with adaptive mutation scaling is employed. According to "the punctuated equilibrium theory" [6] in the evolution field, species evolution appears in many different directions at the early stage, while evolution tends to be conservative at the later stage. Therefore a larger mutation range is used during the early stage to keep the population diversified, while the mutation range is shrunk gradually during the later stage to focus on local search.

Suppose the mutation scaling is μ (0 ≤ μ ≤ 1) and the element x_k (x_k ∈ [x_k^min, x_k^max]) selected in the chromosome (x_1, x_2, ..., x_k, ..., x_n) is to be mutated with a certain mutation probability P_m. The original value x_k is then replaced by a mutated value x_k^new chosen with uniform probability from the range

    x_k^new ∈ [ max( x_k - μ (x_k^max - x_k^min)/2, x_k^min ),
                min( x_k + μ (x_k^max - x_k^min)/2, x_k^max ) ]    (9)

Based on the idea that the mutation scaling μ decreases gradually during the run, a monotonically decreasing function of μ is built:

    μ(τ) = 1 - r^((1 - τ/T)^b)                                     (10)

where T is the number of generations, τ is the current iteration, and the weight r ∈ [0, 1]. From formula (10) it can be seen that for a small weight r the mutation scaling μ is near one at the beginning of the optimization, and μ decreases to zero as the run progresses.
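A small sketch of Eqs. (9)-(10); the defaults follow the parameter values given later in the paper (r = 0.5, b = 2), and the helper names are illustrative:

```python
import numpy as np

def mutation_scaling(tau, T, r=0.5, b=2.0):
    # Eq. (10): decreases from 1 - r at tau = 0 down to 0 at tau = T
    return 1.0 - r ** ((1.0 - tau / T) ** b)

def mutate_gene(x_k, x_min, x_max, mu, rng=np.random.default_rng()):
    # Eq. (9): uniform draw from a mu-scaled window clipped to the bounds
    half = mu * (x_max - x_min) / 2.0
    return rng.uniform(max(x_k - half, x_min), min(x_k + half, x_max))
```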
4) Adaptive strategy of stage evolution

During evolution the diversity of the population descends; when diversity decreases to a certain level, the search is over [1]. Generally, larger crossover and mutation probabilities work well at an early stage, while at a later stage they had better be smaller, since the algorithm has entered the local search process. For the selection operator, it is a good idea to choose a smaller selection pressure at the beginning and a larger selection pressure later to promote local search. Based on this idea, a stage-evolution model is developed. The whole process is divided into 3 stages:

    First stage:  τ ∈ [0, T1],   T1 = αT
    Second stage: τ ∈ (T1, T2],  T2 = (1 - α)T
    Third stage:  τ ∈ (T2, T]

where T and τ are defined as above and parameter α is generally equal to 0.382. Three different values of the best individual's selection probability q, crossover probability Pc, and mutation probability Pm are then chosen for each stage.

2.2 Hybrid simplex-improved genetic algorithm (HSIGA)

To improve the local fine-tuning capability of GA and quicken the convergence rate, the IGA is combined with the simplex method (SM) to form a hybrid optimization algorithm [7]. The detailed process is as follows. All chromosomes in the current generation are first ranked from the best to the worst, and the new population of the next generation is generated through the following three approaches.

1) Elitist strategy: the first N top-ranking chromosomes (elites) are reproduced directly into the next generation, so that these elites cannot be destroyed by the three GA operations or other operations.

2) Modified simplex strategy: the S (S > N) top members of the population produce S - N new chromosomes through the modified simplex method, in which each new chromosome is generated by reflecting x_j over the centroid x_c of the remaining points as follows:

    x_j^new = x_c + α(x_c - x_j),  j = N+1, ..., S                 (11)

where the centroid x_c = (x_1 + x_2 + ... + x_N)/N and α is a random value.

3) Improved genetic algorithm (IGA) strategy: the remaining P - S children (where P is the population size) of the new generation are created by the IGA acting on the whole population.

Fig. 1 depicts the architecture of the HSIGA algorithm. The hybrid degree (S/P) is the percentage of the population to which the modified simplex operator is applied. When the hybrid degree (S/P) is zero, the hybrid algorithm becomes a real-coded IGA; when the hybrid degree (S/P) is equal to 100%, the algorithm turns into the modified simplex method. Generally S is around 20 percent of the size P.
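A sketch of the modified simplex strategy, Eq. (11); the paper only says α is a random value, so the uniform (0, 1) draw below is an assumption:

```python
import numpy as np

def modified_simplex_offspring(ranked_pop, N, S, rng=np.random.default_rng()):
    """Eq. (11): reflect members N..S-1 of the best-first ranked
    population over the centroid of the top-N elites."""
    x_c = ranked_pop[:N].mean(axis=0)          # centroid of the N elites
    alpha = rng.uniform(0.0, 1.0, size=S - N)  # random alpha (assumed range)
    return x_c + alpha[:, None] * (x_c - ranked_pop[N:S])
```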
[Fig. 1: Architecture of the hybrid simplex-IGA algorithm]
[Fig. 2: Procedure of the HSIGA algorithm (pseudo code)]

The new population in HSIGA is produced through these 3 strategies, with the following advantages.

1) In GA the coding of elites would be changed or destroyed by the genetic operations, and the produced offspring may be less fit than their parents. The elitist strategy is an effective approach to avoid damage to the elites, which is necessary to enhance the capacity of the algorithm.

2) Some novel genetic operations are used in the IGA, such as the crossover and mutation operators described above. The idea of these operations comes mainly from nature: the genetic mechanisms try to mimic the maturing phenomenon in nature, which makes individuals more suitable to the environment and enhances optimization performance.

3) GA is a global search procedure. It is less likely to be trapped in local optima, but its convergence rate slows down and its computational cost is high at the later stage. SM is a local search method whose merits are simplicity and computational efficiency, but it is easily entrapped in a local optimum. Hybridizing the two methods combines their respective merits, which speeds up the convergence rate and avoids entrapment in a local optimum. Moreover, HSIGA applies simplex reproduction to the top-ranking individuals and applies simplex search to many points, not a single one, in the vicinity of the optimum, which quickens the convergence rate greatly.

Based on the above, the pseudo code of the HSIGA algorithm is depicted in Fig. 2.

3 Numerical experiment and results

We execute the HSIGA on the 10 benchmark functions [8-12] in Table 1. f1-f6 are multimodal functions whose number of local minima increases with the problem dimension; f7-f10 are unimodal functions. In some recent studies, functions f1-f10 were tested with the FEP [8], OGA/Q [9], CEP [10], PSO [11], EO [11], and FES [12] optimization algorithms. Those numerical experiments show that OGA/Q [9] gives more robust and significantly better results than the other optimization algorithms, such as CEP [10], PSO [11], EO [11], and FES [12]. We therefore adopt the ten benchmark functions to test the proposed HSIGA and compare its performance only with that of FEP [8] and OGA/Q [9].

3.1 Control experiment

To identify the improvement due to the improved genetic operations and the combination with the modified simplex method, the following control experiments are designed and carried out. We execute the IGA and SGA on the test functions, where the IGA is basically similar to the HSIGA except for the number S of top members in the population: in IGA the number of top members S is equal to zero, while in HSIGA S is 20 percent of the size P. SGA neither applies any improved genetic mechanism nor combines with another optimization method; it adopts only traditional operations, i.e., fitness-proportionate selection, arithmetic crossover, and uniform mutation.

3.2 Parameter values

Before solving the test functions, some parameters must be assigned for each algorithm.

Table 1. List of the 10 test functions used in the experimental study, where n is the function dimension (n = 30):

    f1(x) = sum_{i=1}^{n} ( -x_i sin(sqrt(|x_i|)) )                                       [-500, 500]^n
    f2(x) = sum_{i=1}^{n} ( x_i^2 - 10 cos(2 pi x_i) + 10 )                               [-5.12, 5.12]^n
    f3(x) = -20 exp( -0.2 sqrt((1/n) sum x_i^2) ) - exp( (1/n) sum cos(2 pi x_i) ) + 20 + exp(1)    [-32, 32]^n
    f4(x) = (1/4000) sum x_i^2 - prod cos(x_i / sqrt(i)) + 1                              [-600, 600]^n
    f5(x) = (pi/n) { 10 sin^2(pi y_1) + sum_{i=1}^{n-1} (y_i - 1)^2 [1 + 10 sin^2(pi y_{i+1})] + (y_n - 1)^2 } + sum u(x_i, 10, 100, 4),
            with y_i = 1 + (x_i + 1)/4 and u(x, a, k, m) = k(x - a)^m for x > a; 0 for -a <= x <= a; k(-x - a)^m for x < -a    [-50, 50]^n
    f6(x) = 0.1 { sin^2(3 pi x_1) + sum_{i=1}^{n-1} (x_i - 1)^2 [1 + sin^2(3 pi x_{i+1})] + (x_n - 1)^2 [1 + sin^2(2 pi x_n)] } + sum u(x_i, 5, 100, 4)    [-50, 50]^n
    f7(x) = sum x_i^2                                                                     [-100, 100]^n
    f8(x) = sum |x_i| + prod |x_i|                                                        [-10, 10]^n
    f9(x) = sum_{i=1}^{n} ( sum_{j=1}^{i} x_j )^2                                         [-100, 100]^n
    f10(x) = max{ |x_i|, i = 1, 2, ..., n }                                               [-100, 100]^n

For the HSIGA and IGA algorithms:
• Population size: a moderate population size P = 60.
• Genetic probabilities: the whole evolution is divided into 3 stages; Table 2 shows the best chromosome's selection probability q, crossover probability Pc, and mutation probability Pm at each stage.
• Mutation parameters: the parameter r in the mutation-scaling function is 0.5 and parameter b is equal to 2.
• Stopping criterion: since each function has a different complexity, different stopping criteria are used; Table 3 lists the evolution generations T for each function. When the current iteration τ reaches T, execution stops.
In addition, the modified simplex strategy is adopted in the HSIGA: the hybrid degree (S/P) is 20% and the number of elites N is equal to four. In IGA the hybrid degree is zero.

For the SGA algorithm:
• Population size: 150, larger than that of the IGA and the HSIGA.
• Selection: fitness-proportionate selection.
• Crossover: arithmetic crossover with probability Pc = 0.80.
• Mutation: uniform mutation with probability Pm = 0.02.
• Stopping criterion: if the fitness of the best chromosome cannot be further reduced in 50 successive generations after 500 generations, execution stops.

3.3 Results and comparison

Since the SGA uses evolutionary parameters and a termination criterion different from those of the IGA and the HSIGA, for a fair comparison we first calculate the computational cost of each algorithm and compare the quality of their final solutions at the given computational cost. Each test function is run 50 independent times, and the following results are recorded: 1) the mean number of function evaluations; 2) the mean function value; and 3) the standard deviation of the function values. Table 4 shows each algorithm's results in the control experiment.
From the results in Table 4 it can be seen that:

• HSIGA finds the exact global optimum, 0, for functions f2, f4, and f7-f10. For the other functions, the mean function values of HSIGA are close to the optimal ones and the standard deviations are small. These results indicate that HSIGA can find optimal or close-to-optimal solutions and that its solution quality is quite stable.

• Compared with SGA, the proposed HSIGA requires fewer function evaluations, and hence has a smaller time complexity. Moreover, HSIGA gives significantly smaller, closer-to-optimal solutions than SGA, so its mean solution quality is much better; its smaller standard deviations of the function values also show that its solution quality is more stable.

• Compared with IGA, although the generations T of the two algorithms are equal, HSIGA requires fewer function evaluations, because part of the next generation in HSIGA is produced through the elitist and modified simplex strategies. HSIGA nevertheless gives smaller mean function values, and equal or smaller standard deviations of the function values, on the 10 functions; hence the solution quality of HSIGA is better than that of IGA and the algorithm is statistically stable.

Next, the performance of HSIGA is compared with that of the FEP [8] and OGA/Q [9] algorithms. Since the optimization results of these two algorithms on the 10 test functions are available in [8] and [9], the comparison is made accordingly and the results are listed in Table 5.

Table 2. Adaptive change of parameters (τ: current iteration; T: evolution generations):

    τ ∈ [0, T1]:   q = 0.08, Pc = 0.95, Pm = 0.08
    τ ∈ (T1, T2]:  q = 0.10, Pc = 0.80, Pm = 0.05
    τ ∈ (T2, T]:   q = 0.12, Pc = 0.65, Pm = 0.02

Table 3. Number of generations for HSIGA and IGA on each test function:

    f1: 500, f2: 30, f3: 50, f4: 40, f5: 500, f6: 500, f7: 400, f8: 500, f9: 400, f10: 500

Table 4. Optimization results and computational effort of HSIGA, IGA, and SGA. Each row gives the mean number of function evaluations (HSIGA / IGA / SGA), the mean function value with standard deviation in parentheses (HSIGA / IGA / SGA), and the global minimum f_min:

    f1:  68,412 / 78,052 / 89,877;   -12569.4740 (1.0604E-4) / -12569.4530 (1.3782E-2) / -6422.1504 (869.3862);  f_min = -12569.5
    f2:  4,130 / 4,698 / 259,965;    0 (0) / 7.0237E-14 (2.4246E-13) / 1.9255 (8.0727);  f_min = 0
    f3:  6,853 / 7,825 / 229,926;    8.8818E-16 (0) / 5.8950E-13 (2.1217E-12) / 6.2034E-2 (3.4579E-2);  f_min = 0
    f4:  5,482 / 6,275 / 146,463;    0 (0) / 2.2693E-15 (6.9245E-15) / 7.0871E-1 (3.8481E-1);  f_min = 0
    f5:  68,429 / 78,044 / 319,695;  3.2475E-6 (3.5963E-6) / 1.7855E-5 (9.0217E-6) / 3.5947E-4 (6.2180E-4);  f_min = 0
    f6:  68,417 / 78,044 / 334,896;  9.0181E-5 (1.0448E-5) / 5.0194E-4 (1.5354E-3) / 1.0965E-2 (2.2325E-2);  f_min = 0
    f7:  54,745 / 62,446 / 322,368;  0 (0) / 2.2817E-183 (0) / 9.6696E-2 (8.4127E-2);  f_min = 0
    f8:  68,434 / 78,002 / 211,680;  0 (0) / 8.9611E-115 (5.7284E-114) / 3.8576E-2 (2.0601E-2);  f_min = 0
    f9:  54,702 / 62,391 / 269,658;  0 (0) / 5.3675E-186 (0) / 219.8387 (118.2618);  f_min = 0
    f10: 68,361 / 78,004 / 126,852;  0 (0) / 6.3391E-116 (3.1639E-115) / 9.0751E-1 (2.6103E-1);  f_min = 0

Table 5. Comparison between HSIGA, OGA/Q [9], and FEP [8] (same layout, order HSIGA / OGA/Q / FEP):

    f1:  68,412 / 302,166 / 900,000;  -12569.4740 (1.0604E-4) / -12569.4537 (6.4470E-4) / -12554.5 (52.6);  f_min = -12569.5
    f2:  4,130 / 224,710 / 500,000;   0 (0) / 0 (0) / 4.6E-2 (1.2E-2);  f_min = 0
    f3:  6,853 / 112,421 / 150,000;   8.8818E-16 (0) / 4.4400E-16 (3.9890E-17) / 1.8E-2 (2.1E-3);  f_min = 0
    f4:  5,482 / 134,000 / 200,000;   0 (0) / 0 (0) / 1.6E-2 (2.2E-2);  f_min = 0
    f5:  68,429 / 134,556 / 150,000;  3.2475E-6 (3.5963E-6) / 6.0190E-6 (1.1590E-6) / 9.2E-6 (3.6E-6);  f_min = 0
    f6:  68,417 / 134,143 / 150,000;  9.0181E-5 (1.0448E-5) / 1.8690E-4 (2.6150E-5) / 1.6E-4 (7.3E-5);  f_min = 0
    f7:  54,745 / 112,559 / 150,000;  0 (0) / 0 (0) / 5.7E-4 (1.3E-4);  f_min = 0
    f8:  68,434 / 112,612 / 200,000;  0 (0) / 0 (0) / 8.1E-3 (7.7E-4);  f_min = 0
    f9:  54,702 / 112,576 / 500,000;  0 (0) / 0 (0) / 1.6E-2 (1.4E-2);  f_min = 0
    f10: 68,361 / 112,893 / 500,000;  0 (0) / 0 (0) / 0.3 (0.5);  f_min = 0

From Table 5 it can be seen that HSIGA exhibits more accurate results and superior performance over FEP, while requiring less computational effort than FEP on all the functions. For f1, f5, and f6, HSIGA gives smaller mean function values with fewer function evaluations than OGA/Q. For f2, f4, and f7-f10, the solutions of HSIGA are as good as those of OGA/Q, but the mean number of function evaluations of HSIGA is much lower. For function f3, HSIGA gives a mean function value of 8.8818E-16, already very close to the global minimum 0, using 6,853 function evaluations; OGA/Q gives a mean function value of 4.4400E-16, smaller than that of HSIGA, but uses 112,421 function evaluations, i.e., 16 times the computational cost of HSIGA. In other words, for f3 HSIGA finds close-to-optimal solutions, while OGA/Q finds closer-to-optimal solutions at a much higher computational cost. In addition, HSIGA gives smaller standard deviations of the function values than FEP, and equal or smaller standard deviations than OGA/Q (except for function f5); hence its solution quality is more stable. To summarize, the results show that HSIGA outperforms FEP and OGA/Q and is competent for numerical optimization problems.

4 Conclusion

In this paper, the HSIGA has been presented to solve global numerical optimization problems. The methodology involves a novel improvement of the genetic mechanism and a combination with the modified simplex method to enhance the genetic algorithm. The HSIGA has been applied to solve 10 benchmark problems with high dimensions.
Results obtained from 50 trials on each function show that the proposed HSIGA is able to find optimal or close-to-optimal solutions for all these test functions; moreover, the behavior of the algorithm is stable. Comparison of the HSIGA outcome with those of several other global optimization algorithms demonstrates that the HSIGA is more competitive than some recent algorithms on the problems studied.

References
1 Li Min-Qiang, Kou Ji-Song, Lin Dan, Li Shu-Quan. Genetic Algorithm: Basic Theory and Application. Beijing: Science Press, 2002 (in Chinese)
2 Leung F H F, Lam H K, Ling S H, Tam P K S. Tuning of the structure and parameters of a neural network using an improved genetic algorithm. IEEE Transactions on Neural Networks, 2003, 14(1): 79-88
3 Fogel D B. Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE, 2005
4 Sareni B, Krahenbuhl L. Fitness sharing and niching methods revisited. IEEE Transactions on Evolutionary Computation, 1998, 2(3): 97-106
5 Wang Deng-Gang, Liu Ying-Xi, Li Shou-Chen. Hybrid genetic algorithms of global optimum for optimization problems. Acta Mechanica Sinica, 2002, 34(3): 469-474 (in Chinese)
6 Zhang Yun. Rate of evolution and unification of evolution theories. Acta Scientiarum Naturalium Universitatis Pekinensis, 1997, 33(6): 794-803 (in Chinese)
7 Yen J, Liao J C, Lee B, Randolph D. A hybrid approach to modeling metabolic systems using a genetic algorithm and simplex method. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 1998, 28(2): 173-191
8 Yao X, Liu Y, Lin G M. Evolutionary programming made faster. IEEE Transactions on Evolutionary Computation, 1999, 3(2): 82-102
9 Leung Y W, Wang Y P. An orthogonal genetic algorithm with quantization for global numerical optimization. IEEE Transactions on Evolutionary Computation, 2001, 5(1): 41-53
10 Chellapilla K. Combining mutation operators in evolutionary programming. IEEE Transactions on Evolutionary Computation, 1998, 2(3): 91-96
11 Angeline P J. Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Proceedings of Evolutionary Programming VII, Lecture Notes in Computer Science 1447. Berlin: Springer-Verlag, 1998. 601-610
12 Yao X, Liu Y, Lin G M. Fast evolution strategies. In: Proceedings of Evolutionary Programming VI, Lecture Notes in Computer Science 1213. Berlin: Springer-Verlag, 1997. 149-161

REN Zi-Wu: Ph.D. candidate at the Control & Simulation Centre, Harbin Institute of Technology, China. His research interests include evolutionary algorithms and neural networks. Corresponding author of this paper. E-mail: zwren-jren@.
SAN Ye: Professor at the Control & Simulation Centre, Harbin Institute of Technology, China. His research interests include system simulation techniques and optimal control.
CHEN Jun-Feng: Assistant at the College of Computer & Information Engineering, Hohai University, China. Her research interests include genetic algorithms (GA) and neural network control.
Differential Algorithms and Genetic Algorithms: Overview and Explanation

1. Introduction

1.1 Overview

The differential algorithm and the genetic algorithm are both commonly used methods for optimization problems. The differential algorithm is a numerically driven optimization method: it searches and optimizes by comparing how the objective function differs under different parameter settings. The genetic algorithm, in contrast, is based on the ideas of biological evolution; it simulates processes found in nature, such as inheritance, mutation, and selection, to solve problems.

The differential algorithm mainly applies difference operations to obtain difference vectors and updates the parameters according to those difference vectors, continually improving the value of the objective function. Compared with other optimization algorithms, the differential algorithm is simple, easy to implement, and converges quickly. It is therefore widely used in function optimization, parameter estimation, signal processing, and other fields; a concrete sketch of its standard update follows.
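As a concrete reference point, here is a minimal sketch of the classic DE/rand/1/bin update, the standard formulation of differential evolution; the choice of this particular variant and the parameter names F and CR are textbook conventions assumed here rather than taken from this text:

```python
import numpy as np

def de_rand_1_bin(f, bounds, pop_size=50, F=0.5, CR=0.9, gens=200, seed=0):
    """Minimal DE/rand/1/bin: mutate with a scaled difference vector,
    apply binomial crossover, then keep the better of child and parent."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.choice([j for j in range(pop_size) if j != i],
                                 size=3, replace=False)
            v = pop[a] + F * (pop[b] - pop[c])        # difference vector
            v = np.clip(v, lo, hi)
            mask = rng.random(dim) < CR
            mask[rng.integers(dim)] = True            # at least one gene
            u = np.where(mask, v, pop[i])             # binomial crossover
            fu = f(u)
            if fu < fit[i]:                           # greedy selection
                pop[i], fit[i] = u, fu
    best = np.argmin(fit)
    return pop[best], fit[best]

# Example: minimize the sphere function in 10 dimensions
# x, fx = de_rand_1_bin(lambda x: np.sum(x**2), np.array([[-5.0, 5.0]] * 10))
```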
The genetic algorithm, by contrast, simulates the biological evolution process and optimizes the parameters through genetic operators and selection strategies. Its genetic operations include crossover, mutation, and selection; through these operations new solutions are produced and improved step by step. Compared with other optimization algorithms, the genetic algorithm offers strong parallelism and strong global search ability. It has therefore been widely applied in combinatorial optimization, machine learning, artificial intelligence, and other fields; a matching sketch of a basic GA loop follows.
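For symmetry, a minimal real-coded GA loop with the three operators named above; tournament selection, arithmetic crossover, and Gaussian mutation are common concrete choices assumed here, not prescriptions from this text:

```python
import numpy as np

def simple_ga(f, bounds, pop_size=50, pc=0.8, pm=0.05, gens=200, seed=0):
    """Minimal real-coded GA: binary tournament selection, arithmetic
    crossover, Gaussian mutation, and elitism of the single best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.array([f(x) for x in pop])
        elite = pop[np.argmin(fit)].copy()
        # selection: binary tournament
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] < fit[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # crossover: arithmetic blend of consecutive pairs
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < pc:
                a = rng.random()
                children[i] = a * parents[i] + (1 - a) * parents[i + 1]
                children[i + 1] = a * parents[i + 1] + (1 - a) * parents[i]
        # mutation: Gaussian noise on a pm fraction of genes, kept in the box
        mask = rng.random(children.shape) < pm
        noise = rng.normal(0.0, 0.1 * (hi - lo), children.shape)
        children = np.clip(children + mask * noise, lo, hi)
        children[0] = elite                    # elitism
        pop = children
    fit = np.array([f(x) for x in pop])
    return pop[np.argmin(fit)], fit.min()
```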
This article focuses on comparing and analyzing the principles, application areas, and advantages and disadvantages of the differential algorithm and the genetic algorithm. Through an overview and a closer study of the two algorithms, we hope to build a complete picture of their application scenarios and relative strengths in different fields, providing reference and guidance for solving practical problems. On the basis of summarizing the applications of the two algorithms, we will also look ahead to future development directions, in the hope of offering ideas and inspiration for algorithm research and practice.

1.2 Structure of this article

This article takes the differential algorithm and the genetic algorithm as its subject, discussing their principles and application areas and comparing their similarities, differences, advantages, and disadvantages. The article is divided into five main parts, each focusing on specific content.

First, in the introduction, we give an overview of the differential algorithm and the genetic algorithm, introducing their basic characteristics and application background. We then explain the structure and main content of this article in detail, so that readers can better understand and follow its line of thought.

Second, in the section on the differential algorithm, building on a detailed introduction to its principle, we explore the algorithm's basic concepts, working mechanism, and related mathematical models. We also list the wide applications of the differential algorithm in different fields and analyze its advantages and limitations.
A Hybrid Genetic Algorithm for Function Optimization

PENG Wei, LU Xi-cheng (Department of Computer, Changsha Institute of Technology, Changsha 410073)

Abstract: To overcome the problem of slow convergence before genetic algorithms (GAs) reach the global optima, it is an effective way to combine conventional local search algorithms with GAs. A new hybrid algorithm that incorporates the flexible polyhedron method into the orthogonal genetic algorithm (OGA) is presented in this paper. The experiments showed that it can achieve better performance by performing global search and local search alternately. The new algorithm can be applied to solve function optimization problems efficiently.

Key words: genetic algorithm, flexible polyhedron, orthogonal crossover, function optimization. CLC number: TP

Genetic algorithms search for the optimum in a directed way in a problem's solution domain by simulating the course of biological evolution, and they have many applications in combinatorial optimization, machine learning, adaptive control, multi-objective decision-making, and other areas. For some NP problems that traditional methods find hard, genetic algorithms often obtain better results. But for problems that traditional methods already solve well (such as general nonlinear optimization problems), they do not show a strong advantage. The reason is that genetic algorithms make little use of problem-specific knowledge (such as gradients, Hessians, or certain theorems). They mainly use a population search technique and complete the solving process by repeatedly recombining and randomly changing solutions and by evaluating and selecting candidate solutions. Before reaching the global optimum, they still converge slowly. Designing a genetic algorithm therefore often involves a trade-off between generality and effectiveness. Designing problem-specific genetic operators can solve a problem more effectively but lacks generality. Another approach is to combine the genetic algorithm with traditional optimization methods from the problem domain (such as hill climbing, simulated annealing, or Newton methods), which can raise efficiency while keeping a certain generality. The basic framework of this class of hybrid algorithms is shown in Fig. 1.

[Fig. 1: The basic framework of the hybrid genetic algorithm]

This paper considers a class of nonlinear function optimization problems:

    min f(x),  x ∈ D

where f(·) is a continuous function of n variables and D is a bounded subset of R^n. Reference [2] explores a way to combine a quasi-Newton method with a traditional GA for this problem. Since the quasi-Newton method requires first derivatives of the function, its generality is limited. This paper explores an algorithm that combines the flexible polyhedron method with a GA; it searches using function values only, so it applies more widely. The flexible polyhedron method, i.e., the Nelder-Mead simplex method, approaches the optimum quickly for general optimization problems and has strong local search ability, but it depends strongly on the composition of the initial solutions, and during execution it is hard for it to discover new regions that may contain the optimum. Combining it with a GA exploits its local search ability on the one hand, and on the other hand lets the GA continually "discover" new, more promising search regions and dynamically adjust the search directions of the flexible polyhedron method, making the algorithm more flexible and easier to parallelize. Experiments show that for the nonlinear optimization problems above, the hybrid method (called H-GA below) performs better than both the traditional GA and the flexible polyhedron method. Section 1 describes the H-GA algorithm, Section 2 gives experimental results and a performance comparison among several algorithms, and the last section concludes.

1 The H-GA Algorithm

1.1 Encoding

The essence of encoding is to establish a mapping between the solution space of the problem and the search space of the algorithm. Traditional GAs generally use a binary encoding that discretizes the real space [1]. This encoding has drawbacks: the code length affects the solution precision, the operations are time-consuming, and it is not intuitive. Direct real-number encoding was therefore proposed and shown to achieve better performance [3,4]. With real-number encoding, each individual is represented by an n-dimensional real vector, which is intuitive and easy to operate on, and non-traditional crossover operators can be designed for it. This paper adopts this encoding.

1.2 Crossover and selection operations

Crossover involves the parent pairing mechanism and the crossover operator. Pairing usually uses random matching; to maintain population diversity, it can also be selective. The pairing scheme affects how fast better schemata spread through the population, and to prevent premature convergence one usually does not want better schemata to spread too fast. We therefore adopt a nearest-neighbor pairing rule [5]: for the i-th individual in the population, if it was paired with individual (i-1) (mod N) in the previous iteration, it is paired with individual (i+1) (mod N) in this iteration, where N is the population size. This pairing not only prevents better schemata from spreading too fast, but also fits the fine-grained parallel model of genetic algorithms and easily achieves a high degree of parallelism.

The orthogonal genetic algorithm has shown its effectiveness on nonlinear optimization and other combinatorial optimization problems [5,6], and our algorithm adopts the orthogonal crossover operator. Crossover of two parents produces a group of individuals, and the best of the new individuals and the two parents enter the next generation (an elitist strategy). Because local rather than global selection is used, population diversity is maintained to a certain extent, and the communication bottleneck of a parallel implementation is removed. Let the two parents be P and Q. The orthogonal crossover operation for real encoding [5] mainly includes:

(1) Blend crossover: X1[i] = P[i]; X2[i] = Q[i]; X3[i] = r*P[i] + (1-r)*Q[i], i = 1, 2, ..., n, where r is a parameter with 0 < r < 1; here r = 0.5.
(2) Use X1, X2, and X3 to generate 9 new individuals according to the orthogonal table L9(3^4) and compute their fitness values.
(3) Compute the best level combination according to the orthogonal experiment method, generate the corresponding 10th individual, and compute its fitness value.
(4) From X1, X2, X3 and the newly generated individuals, select the best two to replace P and Q.

1.3 Mutation operation

With real-number encoding, mutation applies a random offset δ to each component X[i] of an individual X:

    X'[i] = X[i] + δ,  i = 1, 2, ..., n

Gaussian mutation operators are widely used in evolutionary programming and evolution strategies [7], taking the offset to be a normally (Gaussian) distributed random variable, i.e., δ = σ*N(0,1), where N(0,1) is a standard normal random variable. The algorithm lets σ decrease gradually as the generations increase, e.g., σ = MUT_C / generation, where MUT_C is a constant and generation is the iteration count. Reference [8] also combines GA's blend crossover operator with a Gaussian mutation operator in a similar way; since the orthogonal crossover operator already includes the blend crossover operation, the orthogonal genetic algorithm is superior to that algorithm.

1.4 Local search

The flexible polyhedron method constructs the polyhedron used during search from (n+1) vertices in n-dimensional Euclidean space; we take (n+1) adjacent individuals as the initial vertices (n < N-1). The flexible polyhedron method includes the following operations [9]:

(1) Among the (n+1) vertices, find the points X_h and X_l with the largest and smallest function values, then find the centroid X_c of the polygon formed by the n vertices after removing X_h.
(2) Obtain the reflection point X_r by reflection: X_r[k] = X_c[k] + a*(X_c[k] - X_h[k]), where X[k] is the k-th component of X and a is the reflection coefficient.
(3) If f(X_r) < f(X_l), perform expansion to obtain X_e: X_e[k] = X_c[k] + r*(X_r[k] - X_c[k]), where r > 1 is the expansion coefficient.
(4) If f(X_r) > f(X_i) for every vertex X_i of the polygon except X_h, perform contraction to obtain X_s: X_s[k] = X_c[k] + b*(X_h[k] - X_c[k]), where 0 < b < 1 is the contraction coefficient.
(5) If f(X_r) > f(X_h), move all points toward the minimum point: X_i[k] = X_l[k] + 0.5*(X_i[k] - X_l[k]), where X_i[k] is the k-th component of the i-th vertex.
(6) Replace X_h with the best of X_r, X_e, and X_s.
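A minimal sketch of one iteration of operations (1)-(6), assuming minimization and the coefficient names a, r, b from the text; the default values a = 1, r = 2, b = 0.5 are the usual Nelder-Mead choices, assumed rather than taken from this paper:

```python
import numpy as np

def polyhedron_iteration(V, f, a=1.0, r=2.0, b=0.5):
    """One flexible-polyhedron (Nelder-Mead) step on vertex array V."""
    vals = np.array([f(v) for v in V])
    h, l = np.argmax(vals), np.argmin(vals)      # worst (X_h) and best (X_l)
    x_c = (V.sum(axis=0) - V[h]) / (len(V) - 1)  # (1) centroid without X_h
    x_r = x_c + a * (x_c - V[h])                 # (2) reflection
    f_r = f(x_r)
    candidates = [(f_r, x_r)]
    if f_r < vals[l]:                            # (3) expansion
        x_e = x_c + r * (x_r - x_c)
        candidates.append((f(x_e), x_e))
    elif all(f_r > vals[i] for i in range(len(V)) if i != h):
        x_s = x_c + b * (V[h] - x_c)             # (4) contraction
        candidates.append((f(x_s), x_s))
    if f_r > vals[h]:                            # (5) shrink toward X_l
        V = V[l] + 0.5 * (V - V[l])
    else:                                        # (6) best candidate replaces X_h
        V[h] = min(candidates, key=lambda t: t[0])[1]
    return V
```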
1.5 Termination criteria

The stopping conditions include one, or a combination, of the following: (1) the algorithm converges to a fixed point, or the improvement obtained over several consecutive iterations is smaller than the required precision; (2) the algorithm reaches a prescribed maximum number of iterations, a maximum execution time, or a maximum number of function calls (maximum number of samples of the solution space). We use the maximum number of samples and the maximum number of iterations to control termination.

1.6 Algorithm description

The main steps of the H-GA algorithm are:
(1) (Initialization) Randomly generate a uniformly distributed initial population (containing N initial solutions).
(2) (Mating) Pair the individuals of the population according to the pairing rule and perform the orthogonal crossover of Section 1.2.
(3) (Mutation) Mutate each individual of the population with probability Pm.
(4) (Local search) Repeatedly perform local optimization with the flexible polyhedron method; the number of cycles is controlled by a parameter Lh.
(5) (Termination) If the termination condition holds, stop; otherwise go to step (2).

2 Experimental Results

2.1 Performance comparison parameters

The parameters measuring the performance of an algorithm include:
(1) The quality or accuracy of the final solution. It is measured by an error value, defined in [2] as a weighted measure of the deviation of the final solution from the true global optimum, where X_i is the i-th component of the algorithm's final solution, X*_i is the i-th component of the actual global optimum, and w_i is the weight of the i-th component, reflecting the size of that component's value range.
(2) The probability of obtaining the optimum. The number of runs in which the algorithm successfully finds the optimum serves as an estimate; the algorithm is considered to have found the optimum when the best solution in the population reaches a given precision.
(3) Computation time. With solution accuracy guaranteed, the less computation time and the fewer sample points, the better the algorithm. We evaluate by the number of function calls (samples) and the actual running time.

2.2 Performance comparison

We compare the performance of the orthogonal genetic algorithm (OGA) and the H-GA experimentally. The OGA uses the same crossover and mutation operations as the H-GA. Two functions with different characteristics are chosen:

(1) A multimodal function [formula missing in the source], with -5 <= X_i <= 5, i = 1, 2, ..., n. This function has a large number of local minima around the global minimum. The global minimum point is (0, 0, ..., 0), with minimum value -4n.

(2) The generalized Rosenbrock function:

    f(X) = sum_{i=1}^{n-1} [ 100*(X_{i+1} - X_i^2)^2 + (1 - X_i)^2 ],  -5 <= X_i <= 5, i = 1, 2, ..., n

The global minimum point is (1, 1, ..., 1), with minimum value 0. Reference [10] solves the n = 2 case with a traditional GA. In the valley of the Rosenbrock function surface, the steepest-descent direction at a point is almost perpendicular to the best direction toward the minimum, so traditional algorithms (such as steepest descent) find this problem hard. In our experiments, a traditional GA also had difficulty solving it efficiently in high dimensions, and the flexible polyhedron method failed to find satisfactory solutions in most trials.

For function (1), each algorithm was run 50 times with n = 50 and n = 100, recording the state of the algorithm at different numbers of samples in each run. The population sizes were set to 100 and 150 respectively. In the H-GA, for simplicity, the number of flexible-polyhedron cycles L per iteration was set to the population size; in applications the cycle count can be tuned to the function's characteristics for better performance. In every run, both algorithms approached the optimum fairly quickly. Table 1 gives the average error of the best solution in the population and the average execution time at different numbers of samples. Since the value ranges are the same, all components have the same weight in the error computation (w_i = 1). The results were obtained on a 586 PC.

Table 1. Function (1): samples (x10^5); average error of final solution (OGA / H-GA); average execution time in seconds (OGA / H-GA):

    1 (n=50):   0.028166 / 0.001727;   14.263402 / 18.444399
    2 (n=50):   0.002344 / 0.001290;   28.357402 / 35.957402
    2 (n=100):  0.120188 / 0.004120;   56.658203 / 94.781807
    4 (n=100):  0.010833 / 0.002789;   112.587607 / 168.967187

Both algorithms approach the optimum, but the H-GA reaches better solutions at the cost of slightly more computation time; overall the H-GA performs slightly better than the OGA.

For function (2), n was set to 10 and 30, with population sizes 30 and 90 respectively. The experiments show that within the prescribed number of samples the OGA almost never converges to the optimum (Table 2). In 50 runs, the number of runs in which the H-GA's final solution differed from the optimum by less than 10 in function value reached 43 and 44 respectively.

Table 2. Function (2): samples (x10^5); average error (OGA / H-GA); average time in seconds (OGA / H-GA); best-case function value (OGA / H-GA); worst-case function value (OGA / H-GA):

    1 (n=10):  2.29356 / 0.28749;  1.25940 / 1.62980;   0.25071 / 0.00000;  9.32917 / 3.98723
    2 (n=10):  1.97456 / 0.28322;  2.52220 / 3.20220;   0.15837 / 0.00000;  8.55058 / 3.98720
    3 (n=30):  5.12595 / 3.73800;  9.84720 / 16.4946;   19.3591 / 9.69812;  28.1657 / 17.2935
    6 (n=30):  5.056 / 2.464;      19.686 / 33.307;     18.788 / 0.941;     27.686 / 8.950

As the table shows, apart from computation time the OGA is almost unable to find the optimum, while the H-GA solves this class of function optimization problems much more effectively.

3 Conclusion

This paper gives a hybrid genetic algorithm for nonlinear global optimization that combines a traditional optimization method, the flexible polyhedron method, with the orthogonal crossover operator. It exploits the global search ability of the genetic algorithm while accelerating convergence through local search. Thanks to the nearest-neighbor pairing rule and the local selection mechanism, the algorithm parallelizes well. We also successfully incorporated the Gaussian mutation operator of evolution strategies into the algorithm. Experiments show that the proposed hybrid genetic algorithm can effectively handle function optimization problems that are hard for traditional genetic algorithms and conventional optimization methods. The roles played by the different parts of the hybrid algorithm differ across problems and across stages of execution; properly controlling when each part runs is a topic for further research.

Acknowledgements: The authors thank Dr. LI Jian-qi for many good suggestions.

Correspondence: PENG Wei, Department of Computer, Changsha Institute of Technology, Changsha 410073. E-mail: wpeng@

About the authors: PENG Wei, born in 1973, Ph.D. candidate; main research interests: intelligent computation and advanced network technology. LU Xi-cheng, born in 1946, professor and doctoral supervisor; main research interests: parallel and distributed processing and advanced network technology.
References
1 Goldberg D E. Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley, 1989
2 Renders J-M, Flasse S P. Hybrid methods using genetic algorithms for global optimization. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 1996, 26(2): 243-258
3 Wright A H. Genetic algorithms for real parameter optimization. In: Rawlins G ed. Foundations of Genetic Algorithms. San Francisco: Morgan Kaufmann, 1991. 205-218
4 Eshelman L J, Schaffer J D. Real-coded genetic algorithms and interval-schemata. In: Whitley L D ed. Foundations of Genetic Algorithms 2. San Francisco: Morgan Kaufmann, 1993
5 Zhang Qing-fu, Peng Wei, Wu Shao-yan et al. Genetic algorithm + orthogonal design method: a new global optimization algorithm. In: Li Nai-kui et al eds. Proceedings of the 4th Chinese Joint Conference on Artificial Intelligence. Beijing: Qinghua Press, 1996. 127-133 (in Chinese)
6 Wu Shao-yan, Zhang Qing-fu, Chen Huo-wang. A new evolutionary model based on family eugenics: the first results. In: Proceedings of the 1996 IEEE International Conference on Evolutionary Computation (ICEC'96). Nagoya, Japan, May 1996. 350-355
7 Fogel D B. An introduction to simulated evolutionary optimization. IEEE Transactions on Neural Networks, 1994, 5(1): 3-14
8 Yang J M, Kao C Y. A combined evolutionary algorithm for real parameters optimization. In: Proceedings of the 1996 IEEE International Conference on Evolutionary Computation (ICEC'96). Nagoya, Japan, May 1996. 732-737
9 Wang Yong-xian (ed.). Operational Research: Programming and Networks. Beijing: Qinghua Press, 1993 (in Chinese)
10 Liang Yan-chun, Zhou Chun-guang, Li Shou-fan. Genetic algorithm-based research on optimization of the Rosenbrock function. Journal of Software, 1997, 8(9): 701-708 (in Chinese)

Received 1997-12-08; revised version received 1998-09-10.
Application of an Improved Niche Hybrid Genetic Algorithm to Function Optimization

WANG Cong, KE Hu-qi, HU Yan-hai

Abstract: In order to improve the convergence of the classical niche genetic algorithm and strengthen its local optimizing ability, a new niche hybrid genetic algorithm is designed. By judging the algorithm's online performance indicator Xe(s), the simulated annealing algorithm is fused into the later stage of the run; Gray code encoding is chosen to suit the characteristics of the niche genetic algorithm, and an adaptive genetic crossover operator is designed. A Shubert multimodal function is used to validate the improved algorithm, and the results show that the convergence and evolutionary efficiency of the new algorithm are improved and its local optimizing ability is also strengthened.

Journal: Transducer and Microsystem Technologies (传感器与微系统), 2017, 36(5): 153-156 (4 pages)
Keywords: niche; hybrid genetic algorithm; simulated annealing algorithm; online performance indicator
Authors: WANG Cong (Faculty of Mechanical Engineering and Mechanics, Ningbo University, Ningbo 315211, Zhejiang); KE Hu-qi (Ningbo David Medical Device Co., Ltd., Ningbo 315712, Zhejiang); HU Yan-hai (Faculty of Mechanical Engineering and Mechanics, Ningbo University, Ningbo 315211, Zhejiang)
Language: Chinese. CLC number: TP18

The optimization algorithms in existence today each have their own advantages, but they also have shortcomings to a greater or lesser extent; a sketch of the simulated-annealing step such hybrids fuse into the later generations follows.
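A minimal sketch of the simulated-annealing acceptance step that such a hybrid fuses into the later generations: worse offspring are accepted with the Metropolis probability exp(-Δf/T) under a cooling schedule. The fusion point, parameter names, and cooling constants below are illustrative conventions; the paper itself triggers the switch via its online indicator Xe(s):

```python
import math
import random

def sa_accept(f_old, f_new, temperature, rng=random.Random(0)):
    """Metropolis criterion (minimization): always accept improvements;
    accept a worse solution with probability exp(-(f_new - f_old) / T)."""
    if f_new <= f_old:
        return True
    return rng.random() < math.exp(-(f_new - f_old) / temperature)

# Typical geometric cooling fused into the late GA generations (illustrative):
# T_k = T0 * alpha**k, with alpha commonly chosen in 0.9-0.99
```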