Firefly_Algorithms_for_Multimodal_Optimization
- Format: PDF
- Size: 777.75 KB
- Pages: 10
Abstract: The Firefly Algorithm (FA) evolved from the collective behaviour of fireflies in nature, which exchange information through bioluminescent flashes.
It was proposed by Professor Xin-She Yang of the University of Cambridge in 2009 and, as a novel bio-inspired swarm intelligence optimization algorithm, still offers considerable room for research.
In recent decades, as more and more bio-inspired swarm intelligence algorithms have been put forward, understanding and study of these algorithms have gradually deepened.
This thesis first introduces the theoretical concepts of swarm intelligence optimization algorithms, then, building on the bio-inspired principle of the firefly algorithm, gives a mathematical description of the algorithm and a definition of its procedure. Finally, MATLAB code implementing the algorithm is written and used to run simulation tests on three peak (multimodal) functions, and the results are recorded.
The same test functions are also simulated with a genetic algorithm, and its results are recorded as well.
The test results are then used to compare the accuracy with which the firefly algorithm and the genetic algorithm optimize the peak functions.
The comparison shows that the firefly algorithm finds the optima of the peak functions more accurately than the genetic algorithm.
This demonstrates the feasibility and effectiveness of the firefly algorithm for optimization in continuous spaces, and indicates that the algorithm has good application prospects.
Keywords: firefly algorithm, bio-inspired swarm intelligence optimization algorithm, optimization analysis, genetic algorithm

ABSTRACT

The Firefly Algorithm (FA) is a bio-inspired swarm intelligence optimization algorithm that evolved from the way fireflies in nature exchange information through bioluminescence. It was proposed by Professor Xin-She Yang of the University of Cambridge in 2009 and, as a novel algorithm of this kind, offers considerable room for research. In recent decades, as more bio-inspired swarm intelligence algorithms have been put forward, people's understanding and study of these algorithms have gradually deepened. This thesis first introduces the theoretical concepts of swarm intelligence optimization algorithms, then, through an understanding of the firefly algorithm's bio-inspired principle, gives a mathematical description of the algorithm and a definition of its procedure. Finally, MATLAB code implementing the algorithm is written and used to run simulation tests on three peak functions, and the results are recorded. The same test functions are also simulated with a genetic algorithm. The test results are then compared to assess the accuracy of the firefly algorithm and the genetic algorithm on the peak functions. The comparison shows that the firefly algorithm optimizes the peak functions more accurately than the genetic algorithm. This demonstrates the feasibility and effectiveness of the firefly algorithm for continuous-space optimization, and also shows that the firefly algorithm has good application prospects.

Keywords: firefly algorithm, bio-inspired swarm intelligence optimization algorithm, optimization analysis, genetic algorithm

Contents

Abstract
ABSTRACT
Contents
Chapter 1: Introduction
  1. Background and significance of the research
  2. Research status of swarm intelligence optimization algorithms
  3. Content and structure of this thesis
Chapter 2: Swarm Intelligence Optimization Theory
  1. Overview of swarm intelligence optimization algorithms
  2. Simulated annealing
  3. Genetic algorithms
  4. Ant colony optimization
  5. Particle swarm optimization
  6. Glowworm swarm optimization
  7. Artificial fish swarm algorithm
Chapter 3: The Firefly Algorithm
  1. Concept of the firefly algorithm
  2. Research status of the firefly algorithm at home and abroad
  3. Bio-inspired principle of the firefly algorithm
  4. Mathematical description and analysis of the firefly algorithm
  5. Flow of the firefly algorithm
  6. MATLAB code implementing the firefly algorithm
Chapter 4: Simulation Experiments and Analysis
  1. Introduction to the three test functions
  2. Simulation tests of FA and GA on F1(x)
  3. Simulation tests of FA and GA on F2(x)
  4. Simulation tests of FA and GA on F3(x)
  5. Analysis of test results
Conclusion
Acknowledgements
References

Chapter 1: Introduction

1. Background and significance of the research

In real life, many optimization problems require not only computing a function's extreme values but also obtaining its optimal values.
An Introduction to the Principles and Applications of the Firefly Algorithm

The firefly algorithm (Firefly Algorithm) is an optimization algorithm designed around the behaviour of fireflies in nature.
It simulates the light signals fireflies emit when seeking mates, searching for the optimal solution by interacting and adjusting brightness.
This article introduces the principles and applications of the firefly algorithm.
Algorithm principle

The firefly algorithm rests on a few key concepts:
1. Firefly individual: each firefly represents one candidate solution in the algorithm; its brightness represents that solution's fitness.
2. Brightness: a firefly's brightness determines its attractiveness in the search space; greater brightness means better fitness.
3. Attractiveness: a firefly's attractiveness is determined by its brightness and its distance to other fireflies; the closer the distance, the greater the attractiveness.
4. Movement: in every iteration, each firefly updates its position by computing attractiveness, moving toward fireflies with higher brightness.
The basic steps of the firefly algorithm are:
1. Initialize the population: randomly generate a number of fireflies and assign each an initial position and brightness.
2. Compute attractiveness: from each individual's brightness and its distance to the other individuals, compute its attractiveness.
3. Update positions: adjust each individual's position according to attractiveness, moving it toward individuals with higher brightness.
4. Update brightness: recompute each individual's brightness from its changed position.
5. Check convergence: repeat steps 3 and 4 until a termination condition is met.
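The attractiveness computation in step 2 can be sketched in a few lines. The following is a minimal Python illustration (not code from any of the sources), assuming the widely used form beta(r) = beta0 * exp(-gamma * r^2); the values of beta0 and gamma are arbitrary demonstration choices.

```python
import numpy as np

def attractiveness(positions, beta0=1.0, gamma=1.0):
    """Pairwise attractiveness for a set of firefly positions, using the
    common form beta(r) = beta0 * exp(-gamma * r^2); beta0 and gamma are
    illustrative values, not prescribed by the text above."""
    diff = positions[:, None, :] - positions[None, :, :]   # (n, n, d) offsets
    r2 = np.sum(diff ** 2, axis=-1)                        # squared distances
    return beta0 * np.exp(-gamma * r2)

pos = np.array([[0.0, 0.0],
                [0.0, 0.0],
                [3.0, 4.0]])
beta = attractiveness(pos)
# Coincident fireflies see the maximum attractiveness beta0; at distance
# r = 5 the exponential decay makes the attractiveness negligible.
print(beta[0, 1])          # 1.0
print(beta[0, 2] < 1e-6)   # True
```

A full implementation would use this matrix in step 3 to weight each firefly's pull toward brighter individuals.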
Application areas

The firefly algorithm is widely used across optimization problems, including the following areas:
1. Intelligent optimization: the firefly algorithm can be applied to all kinds of optimization problems, such as function optimization, combinatorial optimization, and constrained optimization.
Its parallelism and global search ability give it an advantage on complex problems.
2. Machine learning: it can be used for parameter-optimization problems in machine learning, such as weight optimization in neural-network training and feature selection.
It can adaptively adjust parameters within the search space to improve model performance.
3. Data mining: it can be used in data-mining tasks such as clustering analysis and association-rule mining.
It can uncover hidden patterns and regularities in data, providing valuable information to support decisions.
4. Image processing: it can be used in image-processing tasks such as image segmentation and image enhancement.
It can optimize the parameters of image-processing algorithms to improve image quality and processing results.
5. Task scheduling: it can be applied to scheduling problems such as job-shop scheduling and path planning.
The firefly algorithm with perturbation (Firefly Algorithm) is an optimization algorithm that simulates the collective behaviour of firefly swarms in nature and is used to solve optimization problems.
The algorithm was first proposed by Xin-She Yang in his 2010 research paper.
It performs optimization by simulating the light-signal communication between fireflies.
Its basic principles and steps are as follows:
Basic principles:
1. Attractiveness: a firefly's brightness is called its attractiveness and is tied to the fitness (i.e. the quality) of the solution at its position.
The better a solution's fitness, the greater its attractiveness.
2. Movement: a firefly adjusts its position according to its attractiveness and its distance to other fireflies, with the goal of moving toward solutions of higher fitness.
3. Perturbation: to strengthen the algorithm's global search ability, random perturbation is introduced, giving each firefly some probability of moving randomly through the search space.
Algorithm steps:
1. Initialize the population: randomly generate the initial fireflies and compute each firefly's fitness.
2. Iterate: in each generation, adjust the fireflies' positions according to attractiveness and distance; that is, each firefly moves based on the other fireflies' attractiveness and relative positions.
3. Perturb: with some probability, apply a random perturbation to each firefly in the population.
4. Evaluate fitness: compute the fitness of the new positions.
5. Update: update the fireflies' positions according to the change in fitness.
6. Terminate: decide whether to stop according to a preset criterion (number of iterations, fitness threshold, etc.).
Code example (simplified):
Below is a simplified code example of the firefly algorithm with perturbation, assuming the objective is to be minimized:
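A minimal Python sketch matching the steps above (an illustration, not an original listing; the sphere objective and all parameter values such as n, beta0, gamma, alpha, and the 0.95 step decay are assumptions chosen for demonstration):

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    """Assumed objective for demonstration: the sphere function (minimum 0)."""
    return float(np.sum(x ** 2))

def firefly_minimize(n=25, dim=2, iters=100, beta0=1.0, gamma=0.01,
                     alpha=0.3, lower=-5.0, upper=5.0):
    X = rng.uniform(lower, upper, (n, dim))          # 1. initialize population
    f = np.array([objective(x) for x in X])
    for _ in range(iters):                           # 2. iterate
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                      # j is brighter (minimization)
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    # attraction toward j plus the random perturbation (step 3)
                    X[i] = X[i] + beta * (X[j] - X[i]) \
                           + alpha * (rng.random(dim) - 0.5)
                    np.clip(X[i], lower, upper, out=X[i])
                    f[i] = objective(X[i])           # 4. evaluate new fitness
        alpha *= 0.95                                # shrink the random step
    best = int(np.argmin(f))                         # 5./6. best after the budget
    return X[best], f[best]

x_best, f_best = firefly_minimize()
print(x_best, f_best)   # typically a point near the origin, value near 0
```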
This is only a simplified example; in practice the parameters may need adjusting to the characteristics of the problem, and more complex problems may call for more sophisticated perturbation and movement strategies.
Research on the Firefly Algorithm for Multimodal Optimization Problems

The firefly algorithm (Firefly Algorithm) is an optimization algorithm developed from the interactive behaviour of fireflies in nature. It can solve a great many optimization problems and performs particularly well on multimodal ones.
This article studies the firefly algorithm as applied to multimodal optimization problems.
1. Multimodal optimization problems

A multimodal optimization problem is one in which, during the search for the optimum, the objective function has multiple peaks, and these peaks may lie relatively far apart.
In this situation, traditional optimization algorithms easily fall into a local optimum and, once there, cannot jump out, and so fail to find the global optimum.
How to solve multimodal optimization problems has therefore become a research hotspot.
2. Overview of the firefly algorithm

The firefly algorithm, proposed in 2008, is an optimization algorithm that imitates the mutual attraction and aggregation behaviour of fireflies.
The degree of aggregation is proportional to brightness: the brighter a firefly, the more strongly it attracts the other fireflies.
In the algorithm, brightness can be viewed as the objective-function value; each firefly continually adjusts its position according to its own brightness and changes in the brightness of the others, seeking to reach an optimal position.
3. How the firefly algorithm addresses multimodal optimization problems

For multimodal optimization problems, the firefly algorithm has the following advantages:
1. Strong global search ability: traditional optimization algorithms are easily constrained by multimodality and trapped in local optima,
whereas the firefly algorithm uses brightness as the factor that determines attraction, which strengthens its global search.
2. Strong adaptability: in a multimodal landscape, the firefly algorithm lets bright fireflies draw the whole swarm toward their positions, adapting the population to the current environment.
3. Simple parameter setting: compared with other optimization algorithms, the firefly algorithm's parameters are relatively simple to set.
When using the firefly algorithm to solve multimodal optimization problems, note the following:
1. Choice of initial positions: the quality of the initial positions directly affects the algorithm's performance.
Aim for a uniform spread across the different dimensions and regions so that the search is genuinely global.
2. Sensible parameter settings: the firefly algorithm's parameters, including brightness, attractiveness, and the random step size, must be set reasonably.
Good settings improve the search and reduce the incidence of local optima.
3. Stopping criterion: to avoid wasting computational resources and time on excessive iterations, a stopping criterion is needed.
Common choices are a maximum number of iterations or a rate of change of the objective value smaller than a preset threshold.
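The two common stopping criteria just mentioned can be combined in a small helper. A Python sketch (the default values for max_iters, tol and window are illustrative, not prescribed):

```python
def should_stop(iteration, history, max_iters=200, tol=1e-5, window=10):
    """Combined stopping test described above: stop when the iteration
    budget is spent, or when the best objective value has changed by less
    than tol over the last `window` iterations. max_iters, tol and window
    are illustrative defaults, not prescribed values."""
    if iteration >= max_iters:
        return True
    if len(history) > window and abs(history[-1] - history[-1 - window]) < tol:
        return True
    return False

print(should_stop(200, []))                 # True: budget exhausted
print(should_stop(50, [1.0] * 20))          # True: objective has stagnated
print(should_stop(50, [1.0, 0.5, 0.1]))     # False: still improving
```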
4. Summary

The mutual attraction and aggregation behaviour embodied in the firefly algorithm make it well suited to solving multimodal optimization problems.
Firefly Algorithms (GSO and FA)

1 Preface

Bio-inspired swarm intelligence optimization has been a hot research topic among scholars at home and abroad in recent years. Its main idea is to construct stochastic search methods by studying or imitating the social behaviour of animals that live in groups in nature.
The two most studied algorithms at present are ant colony optimization (ACO) and particle swarm optimization (PSO).
Research results show that bio-inspired swarm intelligence optimization algorithms have provided new ideas and methods for many application areas.
In 2005, the Indian scholars K. N. Krishnanand and D. Ghose proposed a new swarm intelligence optimization algorithm, Glowworm Swarm Optimization (GSO), at the IEEE Swarm Intelligence Symposium.
In 2009, the Cambridge scholar Xin-She Yang proposed the Firefly Algorithm (FA) based on the flashing behaviour of fireflies in nature.
Since these two firefly-type algorithms were proposed, scholars in many countries have studied, improved, and applied them.
After several years of development, the firefly algorithms show good application prospects for optimization in continuous spaces and for some production-scheduling problems.
GSO and FA are similar in some respects but differ in their concrete implementations.
This article introduces and analyses the two algorithms and their improved variants.
2 About GSO (glowworm swarm optimization)

In the GSO algorithm, every artificial glowworm is scattered through the solution space. (Personal note: this solution space is just the domain over which the objective function takes its values.) The glowworms carry luciferin, and each has its own range of vision, called the local-decision range.
(Personal note: this can be compared to the corresponding function concepts in the artificial fish swarm algorithm, AFSA.) A glowworm's brightness depends on the objective value at its position.
The brighter the glowworm, the better its position, i.e. the better its objective value.
(Personal note: this idea closely resembles AFSA, where more and more individuals gather where the objective value is large.) A glowworm looks for a set of neighbours within its decision range; within that set, brighter neighbours have higher attractiveness and pull the glowworm to move in their direction, so the direction of each move changes with the neighbour chosen.
In addition, the size of the decision range is affected by the number of neighbours: when neighbour density is low, the glowworm enlarges its decision radius to find more neighbours; when neighbour density is high, it shrinks its decision radius.
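The neighbour-dependent decision range described above can be written as a one-line update rule. A hedged Python sketch follows; the shape of the rule is the commonly cited GSO form, but the numeric parameter values are assumptions, not the ones from the original paper:

```python
def update_decision_radius(r_d, n_neighbors, r_s=3.0, beta=0.08, n_t=5):
    """One update of a glowworm's local-decision range, following the
    commonly cited GSO rule
        r_d <- min(r_s, max(0, r_d + beta * (n_t - n_neighbors))),
    where r_s is the fixed sensor range and n_t the desired neighbour
    count. The numeric values here are illustrative assumptions."""
    return min(r_s, max(0.0, r_d + beta * (n_t - n_neighbors)))

# Few neighbours -> the radius grows to find more; many -> it shrinks.
print(update_decision_radius(1.0, 1))    # grows (about 1.32)
print(update_decision_radius(1.0, 10))   # shrinks (about 0.6)
print(update_decision_radius(5.0, 0))    # capped at the sensor range r_s
```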
An Artificial Firefly Optimization Algorithm Based on a Pattern-Search Operator

The artificial firefly algorithm (Artificial Firefly Algorithm, AFA) is an intelligent optimization algorithm whose core idea is to solve optimization problems by simulating the social behaviour of fireflies.
Fireflies emit light signals to attract mates; if another firefly's light signal is brighter, a firefly will move in that direction.
In the algorithm, each firefly represents a candidate solution, and the light intensity it emits is proportional to the objective-function value.
The original AFA, however, performs poorly on some problems because it has only a single search operator and can change firefly positions only by adjusting light intensity.
To overcome this defect, researchers have proposed many improved versions of AFA, one of which is the pattern-search-based AFA (Pattern Search-based AFA, PSAFA).
The core idea of PSAFA is to introduce a new search operator, the pattern-search operator.
This operator can escape local-optimum traps and better maintain global exploration ability.
In PSAFA, a firefly's movement depends not only on light intensity but also on a set of randomly generated pattern sequences.
These sequences span the problem's solution space; each pattern sequence is used to explore a different search direction, increasing the diversity of the search.
Concretely, each firefly's state consists of its current position and a pattern sequence.
The firefly applies the solution corresponding to its pattern sequence to the objective function to compute a light intensity.
Then, by comparing its own light intensity with those of neighbouring fireflies, it adjusts its pattern sequence and movement direction.
If the light intensity at its current position is not the greatest, the firefly adjusts its pattern sequence and position to move toward a better location.
The strengths of PSAFA are better global search ability and robustness, and it performs well on complex optimization problems.
PSAFA is also extensible: new pattern sequences can be added to further improve the algorithm's performance.
Although PSAFA is more complex to implement than the original AFA, widely used algorithm libraries and optimization benchmark suites (such as the MATLAB toolboxes and the CEC benchmarks) make implementation easy.
Overall, PSAFA is a very promising optimization algorithm that merits further research and exploration.
A Firefly-Algorithm-Optimized Maximum-Entropy Image Segmentation Method
Wu Peng

Abstract: To improve segmentation quality, an image-segmentation method is proposed in which the firefly algorithm optimizes the maximum-entropy criterion.
The threshold-optimization objective function of the maximum-entropy method is obtained, the firefly algorithm is used to solve the objective function and find the image's optimal segmentation threshold, the image is segmented at that threshold, and the segmentation quality is tested in simulation experiments.
The results show that the method finds the optimal threshold quickly and accurately, improves segmentation accuracy and noise resistance, and can satisfy the real-time requirements of image segmentation well.
In order to improve the effect of image segmentation, this paper puts forward a novel image segmentation method based on the firefly algorithm and the maximum entropy method. The threshold optimization objective function of the maximum entropy method is obtained, and the firefly algorithm is then used to solve the objective function and find the optimal segmentation threshold of the image. The image is segmented according to the optimal threshold, and the performance is tested by simulation experiment. The results show that the proposed method can quickly and accurately find the optimal threshold, improving the accuracy of image segmentation and its anti-noise ability, so that it can better meet the real-time requirements of image segmentation.

Journal: Computer Engineering and Applications
Year (volume), issue: 2014 (000) 012
Pages: 5 (P115-119)
Keywords: firefly algorithm; maximum entropy method; threshold; image segmentation
Author: Wu Peng
Affiliation: Zibo Vocational Institute, Zibo, Shandong 255314
Language: Chinese
CLC classification: TP311

1 Introduction

Image segmentation divides an image into regions of interest according to chosen segmentation principles. It is the key first step of image processing, and the quality of its result directly affects how the image is understood and used, so image segmentation is a hot and important topic in computer image research [1].
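The maximum-entropy criterion that the firefly algorithm optimizes in this kind of method can be sketched as follows. This is an illustration of Kapur's entropy objective under assumed data, not the paper's code; in the paper's method the firefly algorithm, rather than the brute-force loop below, searches for the threshold.

```python
import numpy as np

def kapur_entropy(hist, t):
    """Kapur's maximum-entropy criterion for a threshold t on a grey-level
    histogram: the sum of the entropies of the below-threshold and
    above-threshold distributions. A sketch of the objective this class of
    method maximizes."""
    p = np.asarray(hist, dtype=float)
    p = p / p.sum()
    def part_entropy(part):
        w = part.sum()
        if w == 0:
            return 0.0
        q = part[part > 0] / w
        return float(-np.sum(q * np.log(q)))
    return part_entropy(p[:t]) + part_entropy(p[t:])

# Hypothetical bimodal histogram over 8 grey levels (dark mode at low
# levels, bright mode at high levels) -- illustrative data only.
hist = [40, 30, 5, 1, 1, 5, 30, 40]
best_t = max(range(1, 8), key=lambda t: kapur_entropy(hist, t))
print(best_t)
```

Replacing the exhaustive `max` over thresholds with a firefly search is what makes the approach attractive for multilevel thresholding, where the brute-force search space grows combinatorially.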
arXiv:1003.1466v1 [math.OC] 7 Mar 2010

Firefly Algorithms for Multimodal Optimization

Xin-She Yang
Department of Engineering, University of Cambridge,
Trumpington Street, Cambridge CB2 1PZ, UK

Abstract

Nature-inspired algorithms are among the most powerful algorithms for optimization. This paper intends to provide a detailed description of a new Firefly Algorithm (FA) for multimodal optimization applications. We will compare the proposed firefly algorithm with other metaheuristic algorithms such as particle swarm optimization (PSO). Simulations and results indicate that the proposed firefly algorithm is superior to existing metaheuristic algorithms. Finally we will discuss its applications and implications for further research.

Citation detail: X.-S. Yang, "Firefly algorithms for multimodal optimization", in: Stochastic Algorithms: Foundations and Applications, SAGA 2009, Lecture Notes in Computer Sciences, Vol. 5792, pp. 169-178 (2009).

1 Introduction

Biologically inspired algorithms are becoming powerful in modern numerical optimization [1, 2, 4, 6, 9, 10], especially for the NP-hard problems such as the travelling salesman problem. Among these biology-derived algorithms, the multi-agent metaheuristic algorithms such as particle swarm optimization form hot research topics in the state-of-the-art algorithm development in optimization and other applications [1, 2, 9].

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995 [5], based on swarm behaviour such as fish and bird schooling in nature, the so-called swarm intelligence. Though particle swarm optimization has many similarities with genetic algorithms, it is much simpler because it does not use mutation/crossover operators. Instead, it uses real-number randomness and global communication among the swarming particles. In this sense, it is also easier to implement as it uses mainly real numbers.

This paper aims to introduce the new Firefly Algorithm and to provide the comparison study of the FA with PSO and other relevant algorithms. We will first outline particle swarm optimization, then formulate the firefly algorithms and finally give the comparison about the performance of these algorithms. The FA optimization seems more promising than particle swarm optimization in the sense that FA can deal with multimodal functions more naturally and efficiently. In addition, particle swarm optimization is just a special class of the firefly algorithms, as we will demonstrate in this paper.

2 Particle Swarm Optimization

2.1 Standard PSO

The PSO algorithm searches the space of the objective functions by adjusting the trajectories of individual agents, called particles, as the piecewise paths formed by positional vectors in a quasi-stochastic manner [5, 6]. There are now as many as about 20 different variants of PSO. Here we only describe the simplest and yet popular standard PSO.

The particle movement has two major components: a stochastic component and a deterministic component. A particle is attracted toward the position of the current global best g* and its own best location x_i* in history, while at the same time it has a tendency to move randomly. When a particle finds a location that is better than any previously found locations, it updates that location as the new current best for particle i. There is a current global best for all n particles. The aim is to find the global best among all the current best solutions until the objective no longer improves or after a certain number of iterations.

For the particle movement, we use x_i* to denote the current best for particle i, and g* ≈ min or max{f(x_i)} (i = 1, 2, ..., n) to denote the current global best. Let x_i and v_i be the position vector and velocity for particle i, respectively.
The new velocity vector is determined by the following formula

v_i^{t+1} = v_i^t + α ε_1 ⊙ (g* − x_i^t) + β ε_2 ⊙ (x_i* − x_i^t),   (1)

where ε_1 and ε_2 are two random vectors, with each entry taking values between 0 and 1. The Hadamard product of two matrices, u ⊙ v, is defined as the entrywise product, that is [u ⊙ v]_{ij} = u_{ij} v_{ij}. The parameters α and β are the learning parameters or acceleration constants, which can typically be taken as, say, α ≈ β ≈ 2. The initial values of x_i^{t=0} can be taken as the bounds or limits a = min(x_j), b = max(x_j), and v_i^{t=0} = 0. The new position can then be updated by

x_i^{t+1} = x_i^t + v_i^{t+1}.   (2)

Although v_i can take any values, it is usually bounded in some range [0, v_max].

There are many variants which extend the standard PSO algorithm, and the most noticeable improvement is probably to use an inertia function θ(t) so that v_i^t is replaced by θ(t) v_i^t, where θ takes values between 0 and 1. In the simplest case, the inertia function can be taken as a constant, typically θ ≈ 0.5 ∼ 0.9. This is equivalent to introducing a virtual mass to stabilize the motion of the particles, and thus the algorithm is expected to converge more quickly.

3 Firefly Algorithm

3.1 Behaviour of Fireflies

The flashing light of fireflies is an amazing sight in the summer sky in the tropical and temperate regions. There are about two thousand firefly species, and most fireflies produce short and rhythmic flashes. The pattern of flashes is often unique for a particular species. The flashing light is produced by a process of bioluminescence, and the true functions of such signaling systems are still under debate. However, two fundamental functions of such flashes are to attract mating partners (communication) and to attract potential prey. In addition, flashing may also serve as a protective warning mechanism. The rhythmic flash, the rate of flashing and the amount of time form part of the signal system that brings both sexes together. Females respond to a male's unique pattern of flashing in the same species, while in some species such as Photuris, female fireflies can mimic the mating flashing pattern of other species so as to lure and eat the male fireflies who may mistake the flashes as a potential suitable mate.

We know that the light intensity at a particular distance r from the light source obeys the inverse square law. That is to say, the light intensity I decreases as the distance r increases in terms of I ∝ 1/r². Furthermore, the air absorbs light, which becomes weaker and weaker as the distance increases. These two combined factors make most fireflies visible only to a limited distance, usually several hundred meters at night, which is usually good enough for fireflies to communicate.

The flashing light can be formulated in such a way that it is associated with the objective function to be optimized, which makes it possible to formulate new optimization algorithms. In the rest of this paper, we will first outline the basic formulation of the Firefly Algorithm (FA) and then discuss the implementation as well as its analysis in detail.

3.2 Firefly Algorithm

Now we can idealize some of the flashing characteristics of fireflies so as to develop firefly-inspired algorithms. For simplicity in describing our new Firefly Algorithm (FA), we now use the following three idealized rules: 1) all fireflies are unisex, so that one firefly will be attracted to other fireflies regardless of their sex; 2) attractiveness is proportional to their brightness, thus for any two flashing fireflies, the less bright one will move towards the brighter one. The attractiveness is proportional to the brightness and they both decrease as their distance increases. If there is no brighter one than a particular firefly, it will move randomly; 3) the brightness of a firefly is affected or determined by the landscape of the objective function. For a maximization problem, the brightness can simply be proportional to the value of the objective function.
Other forms of brightness can be defined in a similar way to the fitness function in genetic algorithms.

Based on these three rules, the basic steps of the firefly algorithm (FA) can be summarized as the pseudo code shown in Fig. 1.

[Figure 1: Pseudo code of the firefly algorithm (FA).]

In a certain sense, there is some conceptual similarity between the firefly algorithms and the bacterial foraging algorithm (BFA) [3, 7]. In BFA, the attraction among bacteria is based partly on their fitness and partly on their distance, while in FA, the attractiveness is linked to their objective function and a monotonic decay of the attractiveness with distance. However, the agents in FA have adjustable visibility and are more versatile in attractiveness variations, which usually leads to higher mobility, and thus the search space is explored more efficiently.

3.3 Attractiveness

In the firefly algorithm, there are two important issues: the variation of light intensity and the formulation of the attractiveness. For simplicity, we can always assume that the attractiveness of a firefly is determined by its brightness, which in turn is associated with the encoded objective function.

In the simplest case, for maximum optimization problems, the brightness I of a firefly at a particular location x can be chosen as I(x) ∝ f(x). However, the attractiveness β is relative; it should be seen in the eyes of the beholder or judged by the other fireflies. Thus, it will vary with the distance r_ij between firefly i and firefly j. In addition, light intensity decreases with the distance from its source, and light is also absorbed in the media, so we should allow the attractiveness to vary with the degree of absorption. In the simplest form, the light intensity I(r) varies according to the inverse square law I(r) = I_s/r², where I_s is the intensity at the source. For a given medium with a fixed light absorption coefficient γ, the light intensity I varies with the distance r as I = I_0 e^{−γr}, where I_0 is the original light intensity. In order to avoid the singularity at r = 0 in the expression I_s/r², the combined effect of both the inverse square law and absorption can be approximated using the following Gaussian form

I(r) = I_0 e^{−γr²}.   (3)

Sometimes we may need a function which decreases monotonically at a slower rate. In this case, we can use the approximation

I(r) = I_0/(1 + γr²).   (4)

At shorter distances, the two forms above are essentially the same, since their series expansions about r = 0,

e^{−γr²} ≈ 1 − γr² + (1/2)γ²r⁴ + ...,   1/(1 + γr²) ≈ 1 − γr² + γ²r⁴ + ...,   (5)

are equivalent to each other up to order O(r²).

As a firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies, we can now define the attractiveness β of a firefly by

β(r) = β_0 e^{−γr²},   (6)

where β_0 is the attractiveness at r = 0. Since it is often faster to calculate 1/(1 + γr²) than an exponential function, this can, if necessary, be replaced by β(r) = β_0/(1 + γr²). Equation (6) defines a characteristic distance Γ = 1/√γ over which the attractiveness changes significantly, from β_0 to β_0 e^{−1}. Conversely, for a given length scale Γ in an optimization problem, the parameter γ can be used as a typical initial value, that is γ = 1/Γ^m.

3.4 Distance and Movement

The distance between any two fireflies i and j, at x_i and x_j respectively, is the Cartesian distance

r_ij = ||x_i − x_j||,

which in the 2-D case reads r_ij = sqrt((x_i − x_j)² + (y_i − y_j)²).

The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by

x_i = x_i + β_0 e^{−γ r_ij²} (x_j − x_i) + α (rand − 1/2),   (7)

where the second term is due to the attraction, while the third term is randomization, with α being the randomization parameter and rand a random number generator uniformly distributed in [0, 1]. For most cases in our implementation, we can take β_0 = 1 and α ∈ [0, 1]. Furthermore, the randomization term can easily be extended to a normal distribution N(0, 1) or other distributions. In addition, if the scales vary significantly in different dimensions, such as −10⁵ to 10⁵ in one dimension while, say, −0.001 to 0.01 along the other, it is a good idea to replace α by α S_k, where the scaling parameters S_k (k = 1, ..., d) in the d dimensions should be determined by the actual scales of the problem of interest.

The parameter γ now characterizes the variation of the attractiveness, and its value is crucially important in determining the speed of the convergence and how the FA algorithm behaves. In theory, γ ∈ [0, ∞), but in practice γ = O(1) is determined by the characteristic length Γ of the system to be optimized. Thus, in most applications, it typically varies from 0.01 to 100.

3.5 Scaling and Asymptotic Cases

It is worth pointing out that the distance r defined above is not limited to the Euclidean distance. We can define many other forms of distance r in the n-dimensional hyperspace, depending on the type of problem of our interest.
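The movement update above can be sketched directly. A hedged Python illustration (not the paper's Matlab code); β_0 = 1 and α = 0.2 follow the ranges suggested in the text, and the example positions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

def move_firefly(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """One application of the movement rule
    x_i <- x_i + beta0*exp(-gamma*r_ij^2)*(x_j - x_i) + alpha*(rand - 1/2),
    pulling firefly i toward the brighter firefly j. beta0 = 1 and
    alpha in [0, 1] follow the ranges suggested in the text."""
    r2 = np.sum((x_i - x_j) ** 2)
    beta = beta0 * np.exp(-gamma * r2)
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)

x_i = np.array([1.0, 1.0])   # dimmer firefly (illustrative position)
x_j = np.array([0.0, 0.0])   # brighter firefly
x_new = move_firefly(x_i, x_j)
# At this distance the attraction term outweighs the small random step,
# so the new position is strictly closer to the brighter firefly.
print(np.linalg.norm(x_new - x_j) < np.linalg.norm(x_i - x_j))  # True
```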
For example, for job scheduling problems, r can be defined as the time lag or time interval. For complicated networks such as the Internet and social networks, the distance r can be defined as the combination of the degree of local clustering and the average proximity of vertices. In fact, any measure that can effectively characterize the quantities of interest in the optimization problem can be used as the 'distance' r.

The typical scale Γ should be associated with the scale in the optimization problem of interest. If Γ is the typical scale for a given optimization problem, then for a very large number of fireflies n ≫ m, where m is the number of local optima, the initial locations of these n fireflies should be distributed relatively uniformly over the entire search space, in a similar manner as the initialization of quasi-Monte Carlo simulations. As the iterations proceed, the fireflies will converge into all the local optima (including the global ones) in a stochastic manner. By comparing the best solutions among all these optima, the global optima can easily be achieved.
At the moment, we are trying to formally prove that the firefly algorithm will approach global optima when n → ∞ and t ≫ 1. In reality, it converges very quickly, typically with less than 50 to 100 generations, and this will be demonstrated using various standard test functions later in this paper.

There are two important limiting cases, γ → 0 and γ → ∞. For γ → 0, the attractiveness is constant β = β_0 and Γ → ∞; this is equivalent to saying that the light intensity does not decrease in an idealized sky. Thus, a flashing firefly can be seen anywhere in the domain, and a single (usually global) optimum can easily be reached. This corresponds to a special case of particle swarm optimization (PSO) discussed earlier. Subsequently, the efficiency of this special case is the same as that of PSO.

On the other hand, the limiting case γ → ∞ leads to Γ → 0 and β(r) → δ(r) (the Dirac delta function), which means that the attractiveness is almost zero in the sight of other fireflies, or the fireflies are short-sighted. This is equivalent to the case where the fireflies fly randomly in a very foggy region. No other fireflies can be seen, and each firefly roams in a completely random way. Therefore, this corresponds to the completely random search method. As the firefly algorithm is usually somewhere between these two extremes, it is possible to adjust the parameters γ and α so that it can outperform both random search and PSO. In fact, FA can find the global optima as well as all the local optima simultaneously in a very effective manner. This advantage will be demonstrated in detail later in the implementation. A further advantage of FA is that different fireflies work almost independently; it is thus particularly suitable for parallel implementation. It is even better than genetic algorithms and PSO because fireflies aggregate more closely around each optimum (without jumping around as in the case of genetic algorithms). The interactions between different subregions are minimal in parallel implementation.

4 Multimodal Optimization with Multiple Optima

4.1 Validation

In order to demonstrate how the firefly algorithm works, we have implemented it in Matlab. We will use various test functions to validate the new algorithm. As an example, we now use the FA to find the global optimum of the Michalewicz function

f(x) = −Σ_{i=1}^{d} sin(x_i) [sin(i x_i²/π)]^{2m},   m = 10,

which, for d = 2, has a global minimum f* ≈ −1.801 at (2.20319, 1.57049).

[Figure 2: The Michalewicz function with a global minimum f* ≈ −1.801 at (2.20319, 1.57049).]

[Figure 3: The initial 40 fireflies (left) and their locations after 10 iterations (right).]

In addition, Yang described a multimodal function which looks like a standing-wave pattern [11],

f(x) = [e^{−Σ_{i=1}^{d} (x_i/a)^{2m}} − 2 e^{−Σ_{i=1}^{d} x_i²}] · Π_{i=1}^{d} cos² x_i,   m = 5,

which is multimodal with many local peaks and valleys. It has a unique global minimum f* = −1 at (0, 0, ..., 0) in the region −20 ≤ x_i ≤ 20, where i = 1, 2, ..., d and a = 15. The 2-D landscape of Yang's function is shown in Fig. 4.

4.2 Comparison of FA with PSO and GA

Various studies show that PSO algorithms can outperform genetic algorithms (GA) [4] and other conventional algorithms for solving many optimization problems. This is partially due to the fact that the broadcasting ability of the current best estimates gives better and quicker convergence towards the optimality. A general framework for evaluating the statistical performance of evolutionary algorithms has been discussed in detail by Shilane et al. [8]. Now we will compare the Firefly Algorithm with PSO and genetic algorithms for various standard test functions.

For the genetic algorithms, we have used the standard version with no elitism, with a mutation probability of p_m = 0.05 and a crossover probability of 0.95. For the particle swarm optimization, we have also used the standard version with the learning parameters α ≈ β ≈ 2 without the inertia correction [4, 5, 6]. We have used various population sizes from n = 15 to 200, and found that for most problems it is sufficient to use n = 15 to 50. Therefore, we have used a fixed population size of n = 40 in all our simulations for comparison.

After implementing these algorithms using Matlab, we have carried out extensive simulations, and each algorithm has been run at least 100 times so as to carry out meaningful statistical analysis. The algorithms stop when the variations of function values are less than a given tolerance ε ≤ 10⁻⁵. The results are summarized in Table 1, where the global optima are reached. The numbers are in the format: average number of evaluations (success rate), so 3752 ± 725 (99%) means that the average number (mean) of function evaluations is 3752, with a standard deviation of 725, and the success rate of finding the global optima for this algorithm is 99%. We can see that the FA is much more efficient in finding the global optima, with higher success rates.

[Figure 4: Yang's function in 2D with a global minimum f* = −1 at (0, 0), where a = 15.]

[Table 1: Comparison of algorithm performance. Columns: Functions/Algorithms, GA, PSO, FA; the individual entries did not survive text extraction.]

Each function evaluation is virtually instantaneous on a modern personal computer. For example, the computing time for 10,000 evaluations on a 3 GHz desktop is about 5 seconds. Even with graphics for displaying the locations of the particles and fireflies, it usually takes less than a few minutes. It is worth pointing out that more formal statistical hypothesis testing can be used to verify such significance.

5 Conclusions

In this paper, we have formulated a new firefly algorithm and analyzed its similarities and differences with particle swarm optimization. We then implemented and compared these algorithms. Our simulation results for finding the global optima of various test functions suggest that particle swarm often outperforms traditional algorithms such as genetic algorithms, while the new firefly algorithm is superior to both PSO and GA in terms of both efficiency and success rate. This implies that FA is potentially more powerful in solving NP-hard problems, which will be investigated further in future studies.

The basic firefly algorithm is very efficient, but we can see that the solutions are still changing as the optima are approached. It is possible to improve the solution quality by reducing the randomness gradually. A further improvement on the convergence of the algorithm is to vary the randomization parameter α so that it decreases gradually as the optima are approached. These could form important topics for further research. Furthermore, as a relatively straightforward extension, the Firefly Algorithm can be modified to solve multiobjective optimization problems. In addition, the application of firefly algorithms in combination with other algorithms may form an exciting area for further research.

References

[1] Bonabeau E., Dorigo M., Theraulaz G., Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press (1999).
[2] Deb K., Optimisation for Engineering Design, Prentice-Hall, New Delhi (1995).
[3] Gazi K. and Passino K. M., Stability analysis of social foraging swarms, IEEE Trans. Sys. Man. Cyber. Part B - Cybernetics, 34, 539-557 (2004).
[4] Goldberg D. E., Genetic Algorithms in Search, Optimisation and Machine Learning, Reading, Mass.: Addison Wesley (1989).
[5] Kennedy J. and Eberhart R. C., Particle swarm optimization, Proc. of IEEE International Conference on Neural Networks, Piscataway, NJ, pp. 1942-1948 (1995).
[6] Kennedy J., Eberhart R., Shi Y., Swarm Intelligence, Academic Press (2001).
[7] Passino K. M., Biomimicry of Bacterial Foraging for Distributed Optimization, University Press, Princeton, New Jersey (2001).
[8] Shilane D., Martikainen J., Dudoit S., Ovaska S. J., A general framework for statistical performance comparison of evolutionary computation algorithms, Information Sciences: an Int. Journal, 178, 2870-2879 (2008).
[9] Yang X. S., Nature-Inspired Metaheuristic Algorithms, Luniver Press (2008).
[10] Yang X. S., Biology-derived algorithms in engineering optimization (Chapter 32), in Handbook of Bioinspired Algorithms and Applications (eds Olarius & Zomaya), Chapman & Hall/CRC (2005).
[11] Yang X. S., Engineering Optimization: An Introduction with Metaheuristic Applications, Wiley (2010).