A composite particle swarm algorithm for global optimization of multimodal functions
The particleswarm function

The particleswarm function implements a swarm-intelligence-based optimization algorithm: it searches for an optimum by simulating the behavior of a particle swarm. It is a very effective optimizer and is widely used in engineering, science, economics, forecasting, and finance.

The particleswarm function can improve the efficiency of solving optimization problems. It is an adaptive algorithm that searches for the optimum by simulating particle swarm behavior, and its stochastic, population-based search helps it avoid stalling at a local optimum for lack of a correct search direction.

The particleswarm function can also tackle difficult optimization problems, such as multidimensional, discrete, non-convex, and nonlinear optimization, and it can locate good solutions quickly and accurately.

A notable advantage of particleswarm is its convergence speed, which greatly improves the efficiency of solving optimization problems.

Because it is adaptive, particleswarm automatically adjusts its parameters to the situation at hand and searches for the optimum effectively; this effectiveness has made it a very popular optimization algorithm.

The particleswarm workflow consists of three steps: first, initialization; second, iterative search; third, determination of the final result.

First comes initialization, that is, setting the initial conditions: the swarm size, the algorithm parameters, and the problem parameters. Then comes the iterative search: for the given optimization problem, each particle's position and velocity are adjusted repeatedly. Finally, the result is determined: the best solution found during the computation is returned as the optimum, as the sketch below illustrates.
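Assuming this passage refers to MATLAB's particleswarm (Global Optimization Toolbox), the sketch below walks through the three steps on a Rastrigin-type test function; the objective, bounds, and option values are illustrative assumptions, not recommended settings.

```matlab
% Step 1: initialization - problem definition, bounds, and swarm options.
fun = @(x) sum(x.^2 - 10*cos(2*pi*x) + 10, 2);   % Rastrigin-type objective
nvars = 2;
lb = -5.12*ones(1, nvars);                       % search bounds
ub =  5.12*ones(1, nvars);
opts = optimoptions('particleswarm', ...
    'SwarmSize', 50, ...
    'MaxIterations', 200);
% Step 2: the iterative search happens inside the solver call.
% Step 3: the best point found is returned as the final result.
[x, fval] = particleswarm(fun, nvars, lb, ub, opts);
fprintf('best x = %s, f(x) = %g\n', mat2str(x, 4), fval);
```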
The particleswarm function also has drawbacks. First, it cannot guarantee convergence to the global optimum: it adjusts its parameters only from the current state of the swarm rather than systematically exploring other possibly optimal regions.

Second, particleswarm depends on its parameter settings: values that are too large reduce the algorithm's efficiency, while values that are too small make it prone to getting stuck at a local optimum. Parameters therefore need to be set sensibly before use.

In summary, particleswarm is a swarm-intelligence-based optimization algorithm that searches for the optimum by simulating particle swarm behavior and thereby solves complex optimization problems; it is both effective and fast to converge.

Its main drawbacks are that convergence to the global optimum cannot be guaranteed and that badly chosen parameters make it easy to get trapped in a local optimum.
Improved particle swarm algorithms in MATLAB: overview and explanation

1. Introduction

1.1 Overview. Particle swarm optimization (PSO) is a swarm-intelligence-based optimization algorithm that looks for the optimal solution by simulating collective behaviors found in nature, such as bird flocking and fish schooling.

It was first proposed by Russell Eberhart and James Kennedy in 1995 and has since been widely applied.

The core idea of PSO is to regard candidate solutions of the problem as particles of a swarm and to improve them continually through simulated communication and cooperation among the particles, converging gradually while searching for the optimum.

Each particle adjusts and updates itself by remembering its own historical best solution and the global best solution of the whole swarm.

At each iteration, a particle updates its position using its own memory and the global information, until a preset stopping condition is reached.
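In the standard formulation, which is also the update rule used by the papers excerpted later in this document, particle $i$ adjusts dimension $d$ of its velocity and position at iteration $k$ as

$$v_{id}^{k+1} = w\,v_{id}^{k} + c_1 r_1 \big(p_{id} - x_{id}^{k}\big) + c_2 r_2 \big(p_{gd} - x_{id}^{k}\big), \qquad x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1},$$

where $w$ is the inertia weight, $c_1$ and $c_2$ are learning factors, $r_1$ and $r_2$ are uniform random numbers in $[0,1]$, $p_{id}$ is the particle's own best position, and $p_{gd}$ is the swarm's best position.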
PSO is simple, easy to implement, and fast to converge, and it is widely applied in function optimization, combinatorial optimization, machine learning, and other fields.

However, classical PSO also has clear limitations, such as a tendency to fall into local optima and sensitivity to parameter settings.

To overcome these limitations, researchers have proposed a variety of improvements, studied in depth from the underlying algorithmic ideas to the parameter settings.

This article introduces the implementation of improved particle swarm algorithms in the MATLAB environment.

It first presents the classical particle swarm algorithm in detail, including its principle, algorithm steps, strengths and weaknesses, and application areas.

It then surveys the various improvement methods for PSO, including improvement methods 1, 2, 3, and 4.

Finally, it demonstrates the performance and effect of the improved algorithms in practice through the MATLAB environment configuration and the experimental results with analysis.

The conclusion summarizes the main findings and the limitations of the study and looks ahead to future research directions.

In summary, this article gives a complete account of the principle, steps, implementation, and experimental results of improved particle swarm algorithms, as a detailed guide for readers who want to understand and study them.

1.2 Article structure. This article consists of the following parts: the first part is the introduction, which presents the background and purpose of the article and outlines the principle and the strengths and weaknesses of the improved particle swarm algorithms to be introduced.
Publishing an SCI article on particle swarm optimization: a reply. Particle swarm optimization (PSO) is a swarm-intelligence algorithm whose inspiration comes from the collective behavior of bird flocks and fish schools.

The algorithm optimizes a problem's solution by simulating the information exchange and cooperation between individuals in a flock or school.

This article is organized in the following parts: 1. the algorithm's principle; 2. application areas of PSO; 3. strengths and limitations of PSO; 4. improvements and future directions.

1. Algorithm principle. The principle of PSO rests on cooperation and information sharing among the individuals of a population.

While the algorithm runs, each individual, called a particle, searches the problem space for candidate solutions.

Each particle has a position vector and a velocity vector: the position vector represents its current solution, and the velocity vector represents the direction and speed of its movement through the search space.

Each particle also maintains two reference points: its personal best position (pbest), the best position the particle itself has found, and the global best position (gbest), the best position found by the whole swarm.

PSO proceeds as follows: 1. initialize the positions and velocities of the particle swarm; 2. for each particle, evaluate the fitness of its current position, compare it with the particle's personal best, and update pbest; 3. select the global best position gbest from all particles' personal bests; 4. update each particle's velocity and position vectors so that it moves toward its personal best and the global best; 5. repeat steps 2-4 until a preset stopping condition is reached, such as a maximum number of iterations or a required accuracy. A minimal implementation sketch of this loop follows.
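The MATLAB sketch below implements the five steps; the sphere objective, search range, and parameter values are illustrative assumptions (it relies on implicit expansion, so MATLAB R2016b or later is assumed).

```matlab
% Minimal PSO loop following the five steps above.
f = @(x) sum(x.^2, 2);                 % fitness function: one row per particle
n = 30; d = 10; iters = 200;           % swarm size, dimension, iterations
w = 0.729; c1 = 1.49; c2 = 1.49;       % common settings from the literature
x = -100 + 200*rand(n, d);             % step 1: random positions
v = zeros(n, d);                       %         and velocities
pbest = x; pval = f(x);                % personal bests
[gval, g] = min(pval); gbest = x(g,:); % global best
for t = 1:iters
    r1 = rand(n, d); r2 = rand(n, d);
    v = w*v + c1*r1.*(pbest - x) + c2*r2.*(gbest - x);  % step 4: velocity
    x = x + v;                                          % step 4: position
    fx = f(x);                                          % step 2: fitness
    improved = fx < pval;              % update personal bests where better
    pbest(improved, :) = x(improved, :);
    pval(improved) = fx(improved);
    [best, g] = min(pval);             % step 3: global best
    if best < gval, gval = best; gbest = pbest(g, :); end
end                                    % step 5: stop after iters iterations
fprintf('best value found: %g\n', gval);
```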
2. Application areas. PSO has a wide range of applications, mainly in the following areas: 1. Function optimization: PSO can solve function optimization problems by moving iteratively through the search space to find the global optimum or a near-optimal solution.

2. Machine learning: PSO can be used for parameter optimization, feature selection, model training, and similar tasks, for example tuning the parameters of a support vector machine (SVM) model.

3. Data clustering: PSO can partition samples into clusters with similar features by exploiting the similarity between sample points.

4. Image processing: PSO can be applied to image segmentation, image classification, and image feature extraction, improving the quality and accuracy of image processing.
Particle swarm optimization and model ensembling (the first of four sample essays, for the reader's reference). Particle swarm optimization (PSO) is an optimization algorithm based on the behavior of biological groups, simulating how flocks of birds or schools of fish search for an optimum.

The core idea of PSO is to search for the optimum by continually updating particle positions and velocities; it is simple, easy to implement, has good global convergence behavior, and has been widely used in practical engineering.

Model ensembling, in turn, combines several models and aggregates their predictions to improve predictive accuracy and stability.

Combining PSO with model ensembling exploits PSO's global optimization ability to further improve model performance.

In practical data analysis and modeling tasks, we often face the question of which model to choose for prediction or classification.

A single model may fail to capture the complex relationships in the data, leading to low predictive accuracy or weak generalization.

The idea of model ensembling is to aggregate the predictions of several models, preserving model diversity while improving overall accuracy and stability.

As a strong global optimizer, PSO can be used to tune model parameters and further improve predictive performance.

In model ensembling, PSO can search the model parameter space for the optimal solution, that is, the best combination of model parameters.

In PSO, a particle represents a solution vector, and each solution vector here encodes one combination of model parameters.

Each particle has its own position and velocity: the position encodes the parameter values, and the velocity encodes the update direction and step size in each parameter dimension.

Particles continually update themselves from their positions and velocities, searching for the optimum.

By iteratively updating all particles' positions and velocities, the whole swarm gradually converges toward the optimal solution.

For model ensembling, PSO can be combined with ensemble learning algorithms (such as bagging and boosting) to form a new ensemble framework.

In this framework, PSO is first used to search for the best parameter combination, yielding the parameters of each base model.

These parameters are then used to train several different base models, whose predictions are combined by weighting or voting to produce the final prediction.

Through PSO's global search ability, we can explore the parameter space more thoroughly and find better parameter combinations, thereby improving the performance of the model ensemble.
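As one concrete, hypothetical instance of this framework, the sketch below uses PSO to tune the weights that combine already-trained base models; the toy predictions, targets, and squared-error loss are invented for illustration, and MATLAB's particleswarm stands in for any PSO implementation.

```matlab
% Toy example: PSO tunes the weights combining two base models' predictions.
P = [0.9 0.2; 0.8 0.4; 0.3 0.7];   % base-model predictions (3 samples x 2 models)
y = [1; 1; 0];                     % true targets
% Normalized weighted-average ensemble, scored by mean squared error.
loss = @(w) mean((P*(w(:)./max(sum(w), eps)) - y).^2);
w = particleswarm(loss, 2, [0 0], [1 1]);   % search weights in [0,1]^2
fprintf('ensemble weights: %s\n', mat2str(w, 3));
```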
Particle swarm optimization algorithm for the berth allocation problem
Ching-Jung Ting, Kun-Chih Wu, Hao Chou
Department of Industrial Engineering and Management, Yuan Ze University, Chung-Li 32003, Taiwan, ROC

Keywords: Berth allocation problem; Particle swarm optimization; Container port; Logistics

Abstract: The berth allocation is one of the major container port optimization problems. From both the port operator's and the ocean carriers' perspective, the minimization of the time a ship spends at the berth may be considered as an objective with respect to port operations. This paper focuses on the discrete and dynamic berth allocation problem (BAP), which assigns ships to discrete berth positions and minimizes the total waiting times and handling times for all ships. We formulate a mixed integer programming (MIP) model for the BAP. Since the BAP is an NP-hard problem, exact solution approaches cannot solve instances of realistic size optimally within reasonable time. We propose a particle swarm optimization (PSO) approach to solve the BAP. The proposed PSO is tested with two sets of benchmark instances of different sizes from the literature. Experimental results show that the PSO algorithm is better than the other compared algorithms in terms of solution quality and computation time. © 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Since the introduction of the container in the 1950s, containerships have gradually come to play an important role in global freight transportation. World seaborne trade grew by an estimated 7%, taking the total of goods loaded to 8.4 billion tons. World container port throughput increased by an estimated 13.3% to 531 million 20-foot equivalent units (TEUs) in 2010 (UNCTAD, 2011). Thus, terminal operation is an important part of the international trade of ocean shipping. Operations in a container terminal can be broken down into three functional systems: seaside operations, yard operations, and landside operations (Theofanis, Boile, & Golias, 2009). The first issue of seaside operations planning is the berth assignment to a set of vessels that have to be served within the planning horizon. One of the important objectives shared by the port operators and the ocean carriers is for the ships to leave the port as soon as possible. Thus, container port authorities are forced to provide efficient and cost-effective services by utilizing the scarce berthing resources efficiently, due to the fierce competition between ports.

The berth allocation problem (BAP) is to allocate berths to a set of vessels scheduled to arrive at the port within the planning horizon in order to minimize their time spent at the port (the sum of their waiting and handling times). Bierwirth and Meisel (2010) classified the BAP according to the following spatial and temporal variations: (1) discrete versus continuous berthing space, (2) static versus dynamic vessel arrivals, (3) deterministic versus stochastic vessel handling time. The quay is divided into a set of berths, and each berth can be used by only one vessel at a time in the discrete case. In the continuous case, a vessel can occupy any arbitrary position along the quay as long as the safety restriction between vessels is considered. The static BAP assumes that all vessels have already arrived at the port for service, while in a dynamic BAP vessels can arrive at any time during the planning horizon with known future arrival information. The main focus of this research is the discrete berth allocation problem with dynamic vessel arrivals. The discrete BAP is NP-hard
(Cordeau, Laporte, Legato, & Moccia, 2005). Since exact solution approaches cannot handle large-scale realistic environments, heuristic algorithms have been proposed in the literature to solve the BAP. In this paper, we investigate the use of a particle swarm optimization (PSO) algorithm for the BAP. PSO is a population-based random search algorithm inspired by the social behavior of bird flocks and has been applied to solve many combinatorial optimization problems. We did not find any other work in the related literature using PSO to tackle the BAP. Thus, it is worthwhile to evaluate the PSO for this task. The proposed PSO was tested with two sets of benchmark instances and compared with promising methods found in the literature to verify its efficiency.

The remainder of the paper is organized as follows. In the next section we review the related literature. Section 3 presents our problem with a mixed integer programming model. The proposed particle swarm optimization algorithm to tackle the discrete and dynamic berth allocation problem is presented in Section 4. In Section 5 computational experiments are performed and the results are presented. Finally, conclusions are summarized in Section 6.

2. Literature review

The berth allocation problem has attracted considerable practical and academic attention in recent years due to the needs of the growing global supply chain. Different berth allocation models have been proposed in the literature. Steenken, Voß, and Stahlbock (2004), Vacca et al. (2008), Stahlbock and Voß (2008), and Bierwirth and Meisel (2010) provided detailed reviews. We refer interested readers to those papers and the references therein. In the following, we focus on the discrete and dynamic berth allocation problem. Other variants and possible extensions of the BAP will be briefly reviewed.

Thurman (1989) proposed an optimization model for ship berthing plans at the US Naval Station. Later, Brown, Lawphongpanich, and Thurman (1994) and Brown, Cormican, Lawphongpanich, and Widdis (1997) considered berth allocation models in naval ports and allowed two or more submarines to occupy a single berth position. Imai, Nagaiwa, and Chan (1997) formulated a static BAP as a nonlinear integer programming model to minimize the weighted sum of two conflicting objectives, berth performance and vessel dissatisfaction. Imai, Nishimura, and Papadimitriou (2001) introduced the dynamic BAP and solved the problem with a Lagrangian relaxation based heuristic. Nishimura, Imai, and Papadimitriou (2001) considered a dynamic BAP with a multi-water-depth configuration in a public berth system and berth-dependent vessel handling times. Later, Imai, Nishimura, and Papadimitriou (2003) considered a dynamic BAP in which different vessels have different service priorities. Genetic algorithms were developed to solve the problems in Imai et al. (2001) and Nishimura et al. (2001). Cordeau et al. (2005) addressed a dynamic BAP with time windows in both discrete and continuous cases based upon data from a terminal in the Port of Gioia Tauro (Italy). The problem was formulated as a multiple depot vehicle routing problem with time windows (MDVRPTW), and solved by a tabu search heuristic.
Monaco and Sammarra (2007) presented a compact formulation as a dynamic scheduling problem on unrelated parallel machines. The problem was solved by a Lagrangian relaxation heuristic. Imai, Nishimura, Hattori, and Papadimitriou (2007) considered a BAP in which up to two vessels can be served by the same berth simultaneously. They formulated the problem with an integer linear programming model and solved it by genetic algorithms. Imai, Zhang, Nishimura, and Papadimitriou (2007) analyzed a two-objective berth allocation problem which minimizes service time and delay time. They used the Lagrangian relaxation with subgradient optimization technique and a genetic algorithm to identify the non-inferior solutions in the bi-objective model. Cheong and Tan (2008) developed a multiple ant colony algorithm for Nishimura et al.'s (2001) model and evaluated their algorithm by simulation experiments.

Hansen, Oğuz, and Mladenovic (2008) presented a minimum cost BAP based on an extension of Imai et al.'s (2003) model, and developed a variable neighborhood search (VNS) heuristic for solving it. Imai, Nishimura, and Papadimitriou (2008) studied a variant of the dynamic BAP in which an external terminal is available when there is a lack of berth capacity at the operator's own terminal. Mauri, Oliveira, and Lorena (2008) proposed a hybrid approach called PTA/LP, which used the population training algorithm with a linear programming model using the column generation technique. Barros, Costa, Oliveira, and Lorena (2011) developed and analyzed a berth allocation model with tidal time windows, where ships can only be served during those time windows. Buhrkal, Zuglian, Ropke, Larsen, and Lusby (2011) studied several mathematical programming models of the dynamic BAP and formulated the problem as a generalized set partitioning problem (GSPP). They solved the problem with CPLEX and obtained the optimal solutions on those instances from Cordeau et al. (2005). To the best of our knowledge, their mathematical model provides the best results on benchmark instances from the literature.

de Oliveira, Mauri, and Lorena (2012b) presented an algorithm based on the clustering search method using the simulated annealing algorithm to generate solutions for the discrete BAP. Lalla-Ruiz, Melián-Batista, and Marcos Moreno-Vega (2012) developed a hybrid algorithm that combined tabu search with path relinking (T2S*+PR) to solve the BAP. They tested the instances from Cordeau et al. (2005) and newly generated data sets of their own. The results showed that the hybrid algorithm was competitive with the GSPP on small size instances. Xu, Li, and Leung (2012) considered the static and dynamic BAP in which berths are limited by water depth and tidal condition. The problem was formulated as a parallel machine scheduling problem and solved with a heuristic.

Another line of berth allocation research assumes that berths along a quayside can be shared by different vessels. Lim (1998) was the first to study the continuous berth allocation problem.
The continuous BAP was formulated as a restricted form of the two-dimensional packing problem and solved with constant handling times. Li, Cai, and Lee (1998) and Guan, Xiao, Cheung, and Li (2002) modeled berth allocation as machine scheduling problems with multiprocessor tasks, while Guan and Cheung (2004) developed efficient heuristics using a discrete berthing section model for batch arriving vessels. Tong, Lau, and Lim (1999) proposed an ant colony optimization (ACO) approach for the BAP addressed by Lim (1998). Park and Kim (2002) presented a mixed integer programming (MIP) model to minimize the penalty cost associated with service delays and placing a ship at a non-preferred location. A Lagrangian relaxation model with a subgradient optimization technique was proposed to solve the problem. Park and Kim (2003) integrated berth scheduling with quay crane assignment. The BAP was solved with an adaptation of the method from Park and Kim (2002) and the crane assignment was solved by dynamic programming. Kim and Moon (2003) formulated the continuous BAP as a MIP model and solved the problem with a simulated annealing algorithm. Imai, Sun, Nishimura, and Papadimitriou (2005) presented a heuristic for the continuous BAP. Moorthy and Teo (2006) proposed a framework addressing the berth template design problem at the terminal on a weekly basis and solved the problem with a sequence-pair-based SA algorithm. Wang and Lim (2007) proposed a stochastic beam search algorithm to solve the BAP in a multiple stage decision-making procedure; an improved beam search scheme and a stochastic node selection criterion were proposed.

Lee and Chen (2009) developed a candidate-based approach to handle the BAP by allowing vessel shifting and considering the clearance distance between vessels, which depends on the ship lengths and the order of berthed vessels. A three-stage neighborhood search based heuristic was proposed to solve the BAP. Tang, Li, and Liu (2009) proposed two mathematical models to minimize the total weighted service time. The authors developed an improved Lagrangian relaxation algorithm to solve the BAP at the raw material docks in an iron and steel complex. Cheong, Tan, Liu, and Lin (2010) considered a multiple objective BAP which includes makespan, waiting time, and degree of deviation from a predetermined priority schedule. They proposed a multi-objective evolutionary algorithm that incorporates Pareto optimality to solve the problem. Lee, Chen, and Cao (2010) developed two versions of the greedy randomized adaptive search procedure (GRASP) to solve the continuous BAP. The numerical results were compared with CPLEX and the stochastic beam search of Wang and Lim (2007).
Raa, Dullaert, and Van Schaeren (2011) presented a MIP model for the integrated BAP and quay crane assignment taking into account vessel priorities, preferred berthing locations and handling time. de Oliveira, Mauri, and Lorena (2012a) presented a clustering search (CS) method with a simulated annealing heuristic to solve the continuous BAP. The computational results on the I3 instances from Cordeau et al. (2005) were compared with the tabu search by Cordeau et al. (2005) and the memetic algorithm by Mauri, De Andrade, and Lorena (2011).

The berth allocation problem is an NP-hard problem. Due to the computational complexity, researchers have developed heuristics to solve the BAP in the literature. These heuristic methods include GA (Imai et al., 2001; Imai, Nishimura, Hattori, & Papadimitriou, 2007; Nishimura et al., 2001), SA (Kim & Moon, 2003; Moorthy & Teo, 2006), TS (Cordeau et al., 2005), ACO (Tong et al., 1999; Cheong & Tan, 2008), VNS (Hansen et al., 2008) and GRASP (Lee et al., 2010). In this paper, we propose a particle swarm optimization (PSO) algorithm to solve the problem. The reasons that we use PSO are as follows. PSO has been applied to many different combinatorial problems and has provided very efficient performance. It is simple and needs fewer control parameters compared to other metaheuristics, such as genetic algorithms and ant colony optimization algorithms. To our knowledge, no research has used PSO to solve the BAP. Details of the proposed PSO are presented in Section 4.

3. Mathematical model

This section describes the mixed integer programming model for the discrete and dynamic berth allocation problem. We treat the BAP as a vehicle routing problem with time windows, where berths correspond to vehicles, ships correspond to customers, and a mooring sequence at a particular berth corresponds to a vehicle route. Each vehicle must start and end at the depot. The depot is divided into two dummy nodes, o and d. Time windows can be imposed on every node. The time windows of a vehicle correspond to the availability time of the corresponding berth.

3.1. Assumptions

1. Each berth can handle one ship at a time.
2. Any ship can be handled at any berth with a given processing time depending on both the ship and the berth.
3. All ships arrive before or after the berth becomes available, with known arrival times.
4. Once a vessel is moored, it will remain in its location until all the required processing is done.
5. The initial status of the terminal space is ideally clean without any ship.

3.2. Notations

$a_i$: the arrival time of ship i
$b_i$: the end time of the time window of ship i
$d$: the destination of any route
$e_k$: the end time of berth k availability
$K$: set of the berths, K = {1, 2, ..., |K|}
$M$: a big number
$N$: set of ships that will arrive at the port, N = {1, 2, ..., |N|}
$o$: the origin of any route
$p_i^k$: the processing time of ship i at berth k
$s_k$: the start time of berth k availability

Decision variables:

$x_{ij}^k$ = 1 if ship i uses berth k immediately before ship j, 0 otherwise
$t_i^k$: the starting time of ship i at berth k

3.3. Model formulation

We formulate the BAP as a vehicle routing type problem, where nodes o and d represent the origin and destination of any route. The processing time depends on the respective berth location.
The model has two types of decision variables: the binary assignment variable $x_{ij}^k$ and the continuous variable $t_i^k$.

$$\text{Minimize} \quad \sum_{i\in N}\sum_{k\in K}\Big(t_i^k - a_i + p_i^k \sum_{j\in N\cup\{d\}} x_{ij}^k\Big) \tag{1}$$

subject to

$$\sum_{i\in N\cup\{o\}} x_{ih}^k = \sum_{i\in N\cup\{d\}} x_{hi}^k \quad \forall h\in N,\ k\in K \tag{2}$$
$$\sum_{k\in K}\sum_{j\in N\cup\{d\}} x_{ij}^k = 1 \quad \forall i\in N \tag{3}$$
$$\sum_{j\in N\cup\{d\}} x_{oj}^k \le 1 \quad \forall k\in K \tag{4}$$
$$t_i^k + p_i^k \le t_j^k + (1 - x_{ij}^k)\,M \quad \forall i,j\in N\cup\{o\}\cup\{d\},\ k\in K \tag{5}$$
$$s_k \le t_o^k \quad \forall k\in K \tag{6}$$
$$t_d^k \le e_k \quad \forall k\in K \tag{7}$$
$$a_i \le t_i^k \quad \forall i\in N,\ k\in K \tag{8}$$
$$t_i^k + p_i^k \sum_{j\in N\cup\{d\}} x_{ij}^k \le b_i \quad \forall i\in N,\ k\in K \tag{9}$$
$$x_{ij}^k \in \{0,1\} \quad \forall i,j\in N,\ k\in K \tag{10}$$
$$t_i^k \ge 0 \quad \forall i\in N,\ k\in K \tag{11}$$

The objective function (1) minimizes the total service time, which includes the waiting time and processing time of all ships. Constraint (2) ensures flow conservation for all the vessels. Constraint (3) states that each ship must be assigned to exactly one berth k. Since a berth can be left unused, constraint (4) states that berth k can start at most once. Constraint (5) guarantees the consistency of berthing times and the mooring sequence at each berth. The berth availability time is enforced by constraints (6) and (7). Constraints (8) and (9) state that each vessel must be served within its time window. Finally, constraints (10) and (11) define the respective domains of the decision variables.

4. Particle swarm optimization for the BAP

Particle swarm optimization (PSO), inspired by the social behavior of bird flocking or fish schooling, is a population-based stochastic search technique developed by Kennedy and Eberhart (1995). PSO has been applied to a wide range of combinatorial optimization problems, such as reactive power and voltage control (Fukuyama & Yoshida, 2001), permutation flowshop sequencing problems (Liao, Tseng, & Luarn, 2007), the order allocation problem (Ting, Tsai, & Yeh, 2007), machine scheduling (Low, Hsu, & Su, 2010; Tsai & Kao, 2011), production planning (Chen & Lin, 2009), the timetabling problem (Tassopoulos & Beligiannis, 2012), and vehicle routing problems (Ai & Kachitvichyanukul, 2009; MirHassani & Abolghasemi, 2011). Compared to the genetic algorithm, PSO is easy to implement and there are few control parameters to adjust.
its own past best path and those of its companions.The particle updates its velocity and position with following equations(12)and(13).V new id ¼WÂV oldidþc1Ârnd1ÂðP idÀX oldidÞþc2Ârnd2ÂðP gdÀX oldidÞð12ÞX new id ¼X oldidþV newidð13Þwhere V newid is the particle new velocity,V oldidis the current parti-cle velocity,P id is the best previous position of particle i indimension d,P gd is the best position in dimension d found by all particles till now.W is the inertia weight,c1and c2are learn-ing factors.A larger W can prevent particles being trapped in the local optimum,while a smaller W let particles to exploit the same search space area.Eberhart and Shi(2001)suggested c1=c2=2and W=0.5+(rand()/2).X newidis the new particle(solution)position and X oldidis the current particle position ind th dimension.rnd1and rnd2are random numbers between0and1which represent the stochastic element of the PSO.4.Termination.Stop the algorithm if the stopping criterion is met;otherwise go to2.In this paper,we set the stop criterion as the maximum number of iterations(G)is reached.Particles’velocities on each dimension are clamped to a maxi-mum velocity V max,a parameter specified by the user to determine the maximum change one particle can move during one iteration. If the updated velocity exceeds V max,then the velocity on that dimension is limited to V max.Eberhart and Shi(2001)suggested V max being set at about10-20%of the dynamic range of the variable on each dimension.In PSO,only gBest gives out the information to others.It is a one-way information sharing mechanism.The evolution only looks for the best pared with the genetic algorithm,all the par-ticles tend to converge to the best solution quickly even in the local version in most cases.There are two key steps when applying PSO to optimization problems:the representation of the solution and thefitness function.One of the advantages of PSO is that PSO can take real numbers as particles.The population size will affect the effectiveness of PSO and is problem-dependent.The number of particles most commonly used is in the range of20–40(Hu and Eberhart,2002).The dimension of particles is determined by the problem to be optimized.The range of particles is also determined by the problem to be optimized,the user can specify different ranges for different dimension of parti-cles.Learning factors,c1and c2,usually equal to2.However,other settings were also used in different papers.But usually c1equals to c2and ranges from[0,4].The stop condition is based on the max-imum number of iterations the PSO execute and the minimum er-ror requirement.4.1.Particle representation and initial solution generationThe solution representation is the key issue when designing the PSO algorithm.It could be a string of integers or real numbers.In order to construct a direct relationship between the domain of the berth allocation problem and the PSO practices,n numbers of dimensions are presented each for one of the n vessels.Our solu-tion representation is as follows:n real numbers from a uniform distribution in the interval of(0,m)represent a solution in an array of n cells,where n is number of ships and m is the number of berths.The interval of(0,m)is used as boundary constraint to par-ticle position to guarantee that the decision variables are in the feasible region.The integer part of a real number denotes the berth that the ship is assigned to and the fractional part represents the processing order of ships on each berth.The ships are separated into groups based on the berth to 
which they are assigned. Based on the real number encoding, the sequence of ships at each berth can be extracted from the ascending order of the fractional parts. A ship with a lower number will be scheduled before ships with higher numbers. Fig. 1 shows a particle representation for a berth allocation problem with six vessels and two berths. Each particle dimension is encoded as a real number in (0, 2). The integer part of each value represents the berth position the vessel is assigned to; thus, values with the same integer part represent vessels at the same berth. The fractional part determines the sequence of the vessels at the berth. For example, vessels {a, d, e} will be served at berth 1 while vessels {b, c, f} will be served at berth 2. By sorting the fractional values in ascending order, vessel d will be the first one moored at berth 1, followed by vessels e and a. Similarly, the sequence of vessels moored at berth 2 is b, f, and c.

To prevent the search from leaving the initialized range of the feasible solution space, we handle the boundary situation as follows. If the value in one cell is less than zero, we randomly generate a value in (0, 1) for the cell. If the value is larger than the number of berths, we randomly generate a value in (0, 1) and subtract it from the number of berths.

We randomly generate all particles except one solution that is based on a simple greedy heuristic, first-come-first-served (FCFS). FCFS is used in real-world operations from both the operator's and the carriers' perspective. We assume that all berths become available at the same time and sort the ships by their arrival times in ascending order. Ships are assigned one at a time by choosing the combination of berth and ship that will finish first. The clock is advanced when there are no ships available or all berths are busy. This continues until all ships have been assigned.
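The following MATLAB sketch decodes one such particle into berth service sequences; the numeric values reproduce the Fig. 1 example, and the decoding loop itself is an illustrative reconstruction rather than the authors' code.

```matlab
% Decode a real-valued particle: integer part = berth, fractional part = order.
x = [0.73 1.21 1.86 0.15 0.42 1.55];   % 6 vessels (a..f), 2 berths, x in (0,2)
berth = floor(x) + 1;                  % berth index for each vessel (1-based)
order = x - floor(x);                  % service-order key within each berth
for b = 1:max(berth)
    v = find(berth == b);              % vessels assigned to berth b
    [~, k] = sort(order(v));           % ascending fractional part = sequence
    fprintf('berth %d serves vessels: %s\n', b, mat2str(v(k)));
end
% Output matches Fig. 1: berth 1 serves d, e, a; berth 2 serves b, f, c.
```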
Fig.2shows these two procedures based on the solution obtained in Fig.1.Given a list of the ships handled by berth j,the swapping compares all the possible swapping pairs within the same berth and select the best improvement to exchange their values as shown in Fig.2(a).For the swapping of two ships in two berths, two ships in two different berths(one for each berth)are randomly selected.The positions of these two ships are exchanged.The swap results are evaluated and select the best improvement to exchange as shown in Fig.2(b).putational experimentsThe proposed PSO algorithm in previous section is tested in this section.Two sets of instances,I2and I3,from Cordeau et al.(2005) were tested.The PSO algorithm was coded in Microsoft Visual Stu-dio C++2008and run on a PC with an Intel Core2Duo CPU E8400 (3.00GHz)processor and1.9GB RAM,under the Windows7oper-ating system.The effectiveness of the proposed PSO algorithm was compared with other recent algorithms for the BAP,namely tabu search(T2S)(Cordeau et al.,2005),population training algorithm with linear programming(PTA/LP)(Mauri et al.,2008),and cluster-ing search(CS)approach(de Oliveira et al.,2012b)from the liter-ature.The results of these algorithms were obtained directly from their articles.To make the performance of our proposed PSO heuristic more robust,parameter-setting is necessary.We have performed a set of preliminary experiments in order tofind an appropriate param-eter setting that produces overall good results across most in-stances,even if they were not the optimal settings for all instances.The control parameters of our PSO algorithm are the number of particles A,number of iterations G,inertia weight W, and learning factors C1and C2,respectively.Based on the prelimin-ary experiments undertaken,we found that the best values of the parameter setting yield to better solution quality as follows:A=20, G=200,W=0.9,C1=C2=2.Each instance is run for30times and reported both best solution and average computational time.5.1.I2setThe I2data set including50instances were generated from the traffic and berth-allocation data at Gioia Tauro,Italy.Five instance sizes were considered:25ships with5,7,and10berths;35ships with7and10berths.A set of10instances was generated for each size.The earliest available time is the same for every berth.The ori-ginal data set takes into account time windows.Table1shows the results for the instances of25and35ships. We also compare the results to those obtained from the most re-cent literature.For each algorithm,we report their results and computational time.Among those results,GSPP by Buhrkal et al.(2011)solved the BAP with CPLEX and obtained the optimal solu-tions.Our PSO canfind the optimal solutions in all50instances and provide much better results than those by Cordeau et al.(2005).The computational times of our PSO in each instance are the average of30runs in seconds.The tabu search algorithm(T2S)pro-posed by Cordeau et al.(2005)used on average120s to solve each instance.The machine used by the GSPP was a PC with an Intel Xeon 2.66GHz,while T2S was run on a Sun workstation (900MHz).Our PSO performs better than that of T2S in terms of solution quality.We can observe that PSOfinds all the optimal solutions for all instances in I2set in short time(1.66s on average).5.2.I3setThe I3data set that includes30instances,each with60ships and13berths,is also randomly generated by Cordeau et al. 
5. Computational experiments

The proposed PSO algorithm is tested in this section. Two sets of instances, I2 and I3, from Cordeau et al. (2005) were tested. The PSO algorithm was coded in Microsoft Visual Studio C++ 2008 and run on a PC with an Intel Core 2 Duo CPU E8400 (3.00 GHz) processor and 1.9 GB RAM, under the Windows 7 operating system. The effectiveness of the proposed PSO algorithm was compared with other recent algorithms for the BAP from the literature, namely tabu search (T2S) (Cordeau et al., 2005), the population training algorithm with linear programming (PTA/LP) (Mauri et al., 2008), and the clustering search (CS) approach (de Oliveira et al., 2012b). The results of these algorithms were obtained directly from their articles.

To make the performance of our proposed PSO heuristic more robust, parameter setting is necessary. We performed a set of preliminary experiments in order to find an appropriate parameter setting that produces overall good results across most instances, even if the values are not optimal for every instance. The control parameters of our PSO algorithm are the number of particles A, the number of iterations G, the inertia weight W, and the learning factors C1 and C2. Based on the preliminary experiments, the parameter values yielding the best solution quality were: A = 20, G = 200, W = 0.9, C1 = C2 = 2. Each instance is run 30 times, and we report both the best solution and the average computational time.

5.1. I2 set

The I2 data set, comprising 50 instances, was generated from the traffic and berth-allocation data at Gioia Tauro, Italy. Five instance sizes were considered: 25 ships with 5, 7, and 10 berths; 35 ships with 7 and 10 berths. A set of 10 instances was generated for each size. The earliest available time is the same for every berth. The original data set takes time windows into account.

Table 1 shows the results for the instances with 25 and 35 ships, compared with results obtained from the most recent literature. For each algorithm we report the solution and the computational time. Among those results, the GSPP of Buhrkal et al. (2011) solved the BAP with CPLEX and obtained the optimal solutions. Our PSO finds the optimal solutions in all 50 instances and provides much better results than those of Cordeau et al. (2005). The computational times of our PSO for each instance are the average of 30 runs, in seconds. The tabu search algorithm (T2S) proposed by Cordeau et al. (2005) used on average 120 s to solve each instance. The machine used for the GSPP was a PC with an Intel Xeon 2.66 GHz, while T2S was run on a Sun workstation (900 MHz). Our PSO performs better than T2S in terms of solution quality. We can observe that PSO finds the optimal solutions for all instances of the I2 set in a short time (1.66 s on average).

Table 1. Comparison of results on the I2 set.

Instance    GSPP Opt.  GSPP Time  T2S Best  PSO Best  PSO Time
25x5_1      759        5.99       759       759       0.75
25x5_2      964        3.70       965       964       0.55
25x5_3      970        2.95       974       970       1.23
25x5_4      688        2.72       702       688       0.42
25x5_5      955        6.97       965       955       0.86
25x5_6      1129       3.10       1129      1129      0.52
25x5_7      835        2.31       835       835       0.52
25x5_8      627        1.92       629       627       0.50
25x5_9      752        4.76       755       752       0.69
25x5_10     1073       6.38       1077      1073      0.73
25x7_1      657        3.62       667       657       0.52
25x7_2      662        3.15       671       662       0.44
25x7_3      807        4.28       823       807       0.97
25x7_4      648        3.78       655       648       0.86
25x7_5      725        3.85       728       725       0.44
25x7_6      794        3.60       794       794       0.52
25x7_7      734        3.54       740       734       0.69
25x7_8      768        3.93       782       768       1.05
25x7_9      749        3.73       759       749       0.89
25x7_10     825        3.82       830       825       0.55
25x10_1     713        5.83       717       713       0.70
25x10_2     727        6.99       736       727       0.75
25x10_3     761        6.12       764       761       0.56
25x10_4     810        5.38       819       810       0.52
25x10_5     840        6.77       855       840       0.45
25x10_6     689        5.57       694       689       0.44
25x10_7     666        5.83       673       666       0.58
25x10_8     855        5.87       860       855       0.53
25x10_9     711        5.38       726       711       0.45
25x10_10    801        5.96       812       801       0.47
35x7_1      1000       12.57      1019      1000      5.02
35x7_2      1192       15.93      1196      1192      4.91
35x7_3      1201       7.16       1230      1201      4.94
35x7_4      1139       13.59      1150      1139      3.45
35x7_5      1164       11.50      1179      1164      3.36
35x7_6      1686       29.16      1703      1686      3.28
35x7_7      1176       12.89      1181      1176      4.17
35x7_8      1318       17.52      1330      1318      2.39
35x7_9      1245       8.41       1245      1245      3.50
35x7_10     1109       14.39      1130      1109      3.89
35x10_1     1124       19.98      1128      1124      1.58
35x10_2     1189       11.37      1197      1189      4.13
35x10_3     938        8.97       953       938       3.36
35x10_4     1226       10.28      1239      1226      2.84
35x10_5     1349       22.31      1372      1349      1.53
35x10_6     1188       10.92      1221      1188      2.44
35x10_7     1051       9.74       1052      1051      2.17
35x10_8     1194       9.39       1219      1194      1.28
35x10_9     1311       29.45      1315      1311      2.81
35x10_10    1189       14.28      1198      1189      2.83
Average     953.66     8.55       963.04    953.66    1.66

5.2. I3 set

The I3 data set, which includes 30 instances, each with 60 ships and 13 berths, was also randomly generated by Cordeau et al. (2005) based on data from the port of Gioia Tauro.
A Modified Particle Swarm Optimizer Algorithm
Yang Guangyou
(School of Mechanical Engineering, Hubei University of Technology, Wuhan, 430068, China)

Abstract: This paper presents a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm is introduced. The particles' diversity is improved by periodically monitoring the aggregation degree of the particle swarm. In the later stage of the PSO run, a Gaussian mutation strategy is applied to the best particle's position, which enhances the particles' capacity to jump out of local minima. Several typical benchmark functions with different dimensions have been used for testing. The simulation results show that the proposed method effectively improves the convergence precision and speed of the PSO algorithm.

Keywords: Particle Swarm, Aggregation Degree, Mutation, Optimization.

1 Introduction

The particle swarm optimization (PSO) is a new community intelligence optimization method first proposed by Kennedy and Eberhart in 1995 [1,2]. Much like other global optimization algorithms, the PSO algorithm tends to suffer from premature convergence. In order to overcome the problem of premature convergence, many solutions and improvements have been suggested, including: changing the descent direction of the best particle [3]; renewing the state of the whole swarm or of some particles according to certain criteria [4]; introducing the breeding and subpopulation concepts of GA [5,6,7]; introducing a diversity measure to control the diversity of the swarm [8]; adopting new position and velocity update equations; and cooperative particle swarm optimizers [9,10], etc. In this paper, we present a new model which deals with this issue, yet it is conceptually simple as well as very easy to implement. In the algorithm, the aggregation degree of the particle swarm is monitored periodically, and in the later stage of the PSO run a Gaussian mutation strategy is applied to the best particle's position, which enhances the particles' capacity to jump out of local minima.

2 Standard PSO Algorithm

PSO simulates the behaviors of bird flocking and uses them to solve optimization problems. PSO is initialized with a group of random particles (solutions, $x_i$) and then searches for optima by updating generations. In every generation, each particle is updated by following two "best" values. The first one is the best solution (fitness) it has achieved so far; this value is called pbest ($p_i$). The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this value is a global best and is called gbest ($p_g$). The velocity and position of each particle are updated according to its best encountered position and the best position encountered by any particle, using the following equations:

$$v_{id} = w \cdot v_{id} + c_1 \cdot \mathrm{rand}() \cdot (p_{id} - x_{id}) + c_2 \cdot \mathrm{rand}() \cdot (p_{gd} - x_{id}) \tag{1}$$
$$x_{id} = x_{id} + v_{id} \tag{2}$$

where $v_{id}$ is the particle velocity in dimension d, $x_{id}$ is the current particle position (solution) in dimension d, and w is the inertia weight. $p_i$ and $p_g$ are defined as stated before; rand() is a random function in the range [0,1]; $c_1$ and $c_2$ are learning factors, usually $c_1 = c_2 = 2$.
If the velocity is higher than a certain limit, called $V_{max}$, this limit is used as the new velocity for the particle in that dimension, thus keeping the particles within the search space.

3 The Modified PSO Algorithm

Studies indicate that an excessively concentrated particle swarm easily runs into local minima due to the loss of population diversity. If the aggregation degree of the particle swarm can be controlled effectively, the ability to find the global minimum is improved.

3.1 Aggregation Degree of the Particle Swarm

The aggregation degree of the particle swarm is used to describe the dispersion of the swarm, namely its diversity. It is expressed as a distance between particles. In this paper, we use the absolute difference of each coordinate to measure the distance between particles, and define the largest such value as the aggregation degree of the particle swarm. If m is the size of the swarm, N is the dimensionality of the problem, and $x_{id}$ and $x_{jd}$ are the d-th coordinates of the i-th and j-th particles, the aggregation degree of the swarm is calculated as:

$$d(t) = \max\big\{\,|x_{id} - x_{jd}| \;:\; i, j = 1, 2, \dots, m,\ i \ne j;\ d = 1, 2, \dots, N\,\big\} \tag{3}$$

The distance between particles could also be measured with the Euclidean mean distance (see reference [6]), but its computational load is relatively large.

3.2 Strategy of Mutation

The mutation operator of the algorithm consists of two parts. First, the aggregation degree of the particle swarm is monitored periodically (for example, with period 50); if the aggregation degree is less than a given value (d(t) < e), then all particles' positions and velocities are reinitialized, while the pbest and the gbest are preserved. Second, when the PSO algorithm fails to reach the global optimum, the gbest is mutated as follows:

$$gbest_{new} = gbest \cdot (1 + K \cdot V) \tag{4}$$

where V is a random number with a Gaussian distribution. The mutation thus produces small disturbances with high probability, performing local search, while occasionally producing larger disturbances that allow jumping out of local optima. The initial value of K is set to 1.0, and K = EK at intervals of f generations, where E is a random number in the range [0.01, 0.9].
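A minimal MATLAB sketch of this mutation follows; the gbest values, generation counter, and monitoring interval are placeholders, and the Gaussian noise is drawn per dimension here rather than as a single scalar.

```matlab
% Gaussian mutation of the best position, Eq. (4): gbest_new = gbest*(1 + K*V).
gbest = [1.2 -0.4 3.0];        % current best position (placeholder values)
K = 1.0;                       % initial disturbance scale
f = 50; gen = 100;             % monitoring interval and current generation
gbest_new = gbest .* (1 + K .* randn(size(gbest)));  % V ~ N(0,1), per dimension
if mod(gen, f) == 0
    K = K * (0.01 + 0.89*rand);  % shrink K by a random factor in [0.01, 0.9]
end
```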
3.3 MPSO Algorithm

The modified PSO (MPSO) adds periodic monitoring of the aggregation degree of the particle swarm to the standard PSO. Furthermore, the mutation operation on the gbest is performed when the swarm has not reached the global optimum. The steps of MPSO are as follows:

Step 1. Set the current iteration generation Iter = 1. Initialize a population of m particles; set each current position as the pbest position, and the gbest as the best particle position of the initial swarm.
Step 2. Evaluate the fitness of each particle.
Step 3. Compare the evaluated fitness value of each particle with its pbest. If the current value is better than pbest, set the current position as the pbest position. Furthermore, if the current value is better than gbest, reset gbest to the current index in the particle array.
Step 4. Change the velocity and position of each particle according to equations (1) and (2), respectively.
Step 5. If (Iter % Ie == 0) { calculate the aggregation degree d(t) according to equation (3); if d(t) is less than the given threshold value e, reinitialize the velocities and positions of the particles; }
Step 6. Iter = Iter + 1. If a stopping criterion is met, end the algorithm; otherwise execute the mutation operation on the gbest according to equation (4) and turn to Step 2.

4 Results and Discussion

4.1 Benchmark Functions

The comparison functions adopted here are benchmark functions used by many researchers, in which x is a real-valued vector of dimension n and $x_i$ is its i-th element.

The function f1 is the generalized Rastrigin function:
$$f_1(x) = \sum_{i=1}^{n}\big(x_i^2 - 10\cos(2\pi x_i) + 10\big), \quad x_i \in [-5.12, 5.12]$$

The function f2 is the generalized Griewank function:
$$f_2(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos\Big(\frac{x_i}{\sqrt{i}}\Big) + 1, \quad x_i \in [-600, 600]$$

The function f3 is the Rosenbrock function:
$$f_3(x) = \sum_{i=1}^{n-1}\big(100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\big), \quad x_i \in [-30, 30]$$

The function f4 is the Ackley function:
$$f_4(x) = -20\exp\Big(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\Big) - \exp\Big(\tfrac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\Big) + 20 + e, \quad x_i \in [-30, 30]$$

4.2 Results and Analysis

For the purpose of comparison, we ran 50 trials for every instance. The size of the particle swarm is 30; the goal value for all functions is 1.0e-10; the maximum number of iterations is 3000. Table 1 lists the Vmax values for the functions. Tables 2 to 4 list the results of both the standard PSO and the modified PSO on the four benchmark functions with 10, 20 and 30 dimensions. PSO denotes the standard PSO with a linearly decreasing w from 0.7 to 0.4, or fixed at 0.4. MPSO denotes the modified PSO, in which the inertia weight w is 0.375 and the interval Ie is set to 50. Avg/Std stands for the average value and standard deviation of the fitness over the 50 trials; fevals is the average number of function calls; Ras is the ratio of the number of runs reaching the goal to the number of runs. When the fitness value is less than 1.0e-10, it is reported as zero. Comparing the results, it is easy to see that MPSO obtains better results than PSO in all cases. Figs. 1 to 4 show typical convergence results of PSO and MPSO over 3000 generations for the four benchmark functions with 10 and 30 dimensions, respectively. In each case MPSO performs better than PSO.

Table 1. The Vmax values of the benchmark functions
Function  Vmax
f1        10
f2        600
f3        100
f4        30

Table 2. The results of both PSO and MPSO with 10 dimensions
      PSO                               MPSO
Fun.  Avg/Std         fevals    Ras     Avg/Std        fevals    Ras
f1    2.965/1.356     90009.00  1/50    0/0            17745.00  50/50
f2    0.091/0.038     90009.00  0/50    0.023/0.083    32814.60  44/50
f3    13.375/25.071   90030.00  0/50    7.102/0.333    90030.00  0/50
f4    0/0             66064.80  50/50   0/0            35025.60  50/50

Table 3. The results of both PSO and MPSO with 20 dimensions
      PSO                               MPSO
Fun.  Avg/Std         fevals    Ras     Avg/Std        fevals    Ras
f1    15.342/4.9735   90030.00  0/50    0/0            20259.00  50/50
f2    0.029/0.022     88887.60  4/50    0.029/0.124    24218.40  46/50
f3    81.864/179.313  88887.60  4/50    17.433/0.235   90030.00  0/50
f4    0/0             88155.00  48/50   0/0            52959.00  49/50

Table 4. The results of both PSO and MPSO with 30 dimensions
      PSO                               MPSO
Fun.  Avg/Std          fevals    Ras    Avg/Std        fevals    Ras
f1    40.020/7.860     90030.00  0/50   0/0            20380.20  50/50
f2    0.012/0.015      89724.00  9/50   0/0            17225.40  50/50
f3    130.629/280.349  90030.00  0/50   27.962/0.464   90030.00  0/50
f4    0.019/0.132      90030.00  0/50   0/4.80E-10     58921.80  46/50

Fig. 1. Performance comparison on Rastrigin (log10 fitness vs. generations, popu. = 30): (a) dimension = 10, (b) dimension = 30.
Fig. 2. Performance comparison on Griewank: (a) dimension = 10, (b) dimension = 30.
Fig. 3. Performance comparison on Rosenbrock: (a) dimension = 10, (b) dimension = 30.
Fig. 4. Performance comparison on Ackley: (a) dimension = 10, (b) dimension = 30.

5 Conclusion

In this paper, we presented a modified PSO. Two new features are added to PSO: the aggregation degree and the Gaussian mutation strategy. The simulation results show that the proposed method effectively improves the convergence precision and speed of the PSO algorithm. The method shows better performance than the standard PSO in all test cases. Further study of applications is the subject of future work.

Acknowledgments

This work is supported by the Hubei Key Lab of Manufacturing Quality Engineering (LMQ2005A04).

References

[1] J. Kennedy and R. C. Eberhart, Particle Swarm Optimization[C], Proc. IEEE Int'l Conf. on Neural Networks, vol. IV, 1942-1948, IEEE Service Center, 1995
[2] R. Eberhart, J. Kennedy. A new optimizer using particle swarm theory. Proc. 6th Int. Symposium on Micro Machine and Human Science, 1995: 39-43
[3] Thiemo Krink, Jakob S. Vesterstrøm, Jacques Riget, Particle Swarm Optimization with Spatial Particle Extension[C] In: Proceedings of the 2002 Congress on Evolutionary Computation, vol. 2, 2002: 1474-1479
[4] Xiaofeng Xie, Wenjun Zhang, Zhilian Yang, Dissipative particle swarm optimization[C] In: Proceedings of the 2002 Congress on Evolutionary Computation, CEC'02, vol. 2, 2002: 1456-1461
[5] Angeline P J, Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences[C] In: Evolutionary Programming VII, 1998: 601-610
[6] Løvbjerg M, Rasmussen T K, Krink T, Hybrid particle swarm optimization with breeding and subpopulations[C] In: Proc of the Third Genetic and Evolutionary Computation Conference, San Francisco, USA, 2001
[7] Angeline Peter J, Using selection to improve particle swarm optimization[C] In: Proceedings of the IEEE Conference on Evolutionary Computation, ICEC, 1998: 84-89
[8] Riget, J. and Vesterstrøm, J. S. A diversity guided particle swarm optimizer - the ARPSO. [R] Aarhus: University of Aarhus, EVALife, 2002
[9] Yu, H. J., Zhang, L. P., and Hu, S. X. Adaptive particle swarm optimization algorithm based on feedback mechanism, [J] Journal of Zhejiang University (Engineering), 2005, Vol. 39, No. 9: 1286-1291
[10] Van den Bergh F, Engelbrecht A P, Training product unit networks using cooperative particle swarm optimizers[C] In: Proc of the Third Genetic and Evolutionary Computation Conference, San Francisco, USA, 2001: 126-131

Author Biography

Yang Guangyou, Ph.D., Professor. Research interests are in the area of intelligent computing and control.
CMA-ES and particle swarm optimization: a reply. What are the CMA-ES algorithm and the particle swarm algorithm, how are they applied to optimization problems and how do they differ, and what are their respective strengths and weaknesses?

CMA-ES (Covariance Matrix Adaptation Evolution Strategy) and particle swarm optimization (PSO) are two commonly used optimization algorithms, widely applied to difficult numerical optimization problems.

This article introduces CMA-ES and PSO step by step and compares and analyzes them from several angles.

First, let us look at the CMA-ES algorithm.

CMA-ES is a global optimization algorithm based on evolution strategies, first proposed by Hansen and colleagues in 1996.

It optimizes the objective function by adaptively adjusting the parameters of the evolution strategy, such as the population size and the mutation distribution.

The key concept in CMA-ES is the covariance matrix, which acts as a learned model of the search landscape, playing a role analogous to gradient information in the optimization space.

By continually adapting the covariance matrix, CMA-ES can search the space efficiently for the global optimum.

The basic steps of CMA-ES are as follows (a simplified sketch follows the list): 1. Initialization: set the initial solution (the distribution mean) and the covariance matrix for the optimization problem.

2. Generate new solutions: sample a new set of candidate solutions from the current solution and covariance matrix.

3. Fitness evaluation: compute the fitness of each candidate solution and identify the best ones.

4. Covariance update: update the covariance matrix based on the fitness evaluation results.

5. Termination check: test whether the termination condition is met; if not, return to step 2.
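As a rough illustration of these five steps, the MATLAB sketch below keeps only the sample-select-update cycle, with a weighted mean update and a rank-mu-style covariance estimate; it omits CMA-ES's evolution paths and step-size adaptation, and the sphere objective and all constants are illustrative assumptions.

```matlab
% Heavily simplified CMA-ES-style loop: sample, select, update mean/covariance.
f = @(x) sum(x.^2);                       % objective (sphere, for illustration)
n = 10; lambda = 20; mu = 10;             % dimension, offspring, parents
m = 3*ones(n,1); sigma = 0.5; C = eye(n); % step 1: mean, step size, covariance
w = log(mu+0.5) - log((1:mu)');           % decreasing selection weights
w = w/sum(w);
for gen = 1:300
    A = chol(C, 'lower');                 % step 2: sample N(m, sigma^2 C)
    X = m + sigma*(A*randn(n, lambda));
    fvals = arrayfun(@(k) f(X(:,k)), 1:lambda);   % step 3: evaluate
    [~, idx] = sort(fvals);               % ascending: best candidates first
    Xsel = X(:, idx(1:mu));               % select the mu best
    Y = (Xsel - m)/sigma;                 % selected steps (relative to old mean)
    m = Xsel*w;                           % weighted mean update
    C = 0.8*C + 0.2*(Y*diag(w)*Y');       % step 4: simplified covariance update
end                                       % step 5: fixed generation budget here
fprintf('f at adapted mean: %g\n', f(m));
```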
Next, we introduce particle swarm optimization.

PSO is a heuristic optimization algorithm that mimics the foraging behavior of bird flocks, first proposed by Eberhart and Kennedy in 1995.

PSO searches for the optimum by simulating particles' migration and cognitive behavior in the search space.

Each particle maintains its own position and velocity and updates its position according to its own experience and the experience of the swarm.

The basic steps of PSO are as follows: 1. Initialization: set the particles' initial positions and velocities.

2. Position update: update each particle's position from its current position and velocity.

3. Fitness evaluation: compute each particle's fitness and identify the best solution.

4. Velocity update: update each particle's velocity from its current position, velocity, and the best solutions found so far.
An Improved Particle Swarm Optimization Algorithm
Lin Lu, Qi Luo, Jun-yong Liu, Chuan Long
School of Electrical Information, Sichuan University, Chengdu, Sichuan, China, 610065
lvlin@

Abstract

A hierarchical structure poly-particle swarm optimization (HSPPSO) approach using the hierarchical structure concept of control theory is presented. In the bottom layer, parallel optimization is performed on poly-particle swarms, which enlarges the particle search domain. In the top layer, each particle swarm of the bottom layer is treated as a particle of a single particle swarm: the best position found by each bottom-layer particle swarm is regarded as the best position of a single particle of the top-layer particle swarm. The result of the optimization on the top-layer particle swarm is fed back to the bottom layer. If some particles tend toward a local extremum during the particle swarm optimization (PSO) run, the particle velocity is updated and re-initialized. Tests of the proposed method on four typical functions show that HSPPSO performs better than PSO in both convergence rate and accuracy.

1. Introduction

The particle swarm optimization (PSO) algorithm, first proposed by Kennedy and Eberhart in 1995, is a new intelligent optimization algorithm that simulates the migration and aggregation of a bird flock seeking food [1]. Like evolutionary algorithms, the PSO algorithm adopts a strategy based on a particle swarm and parallel global random search. PSO determines its search path according to the velocity and current position of each particle, without more complicated evolutionary operations. PSO performs better than earlier intelligent algorithms in calculation speed and memory occupation, has fewer parameters, and is easier to implement.

At present, the PSO algorithm is attracting more and more attention from researchers and has been widely used in fields such as function optimization, combinatorial optimization, neural network training, robot path planning, pattern recognition, and fuzzy system control [2]. The study of PSO has also spread into the electricity, communications, and economics fields. Like other random search algorithms, the PSO algorithm also suffers a certain degree of premature convergence. In order to improve optimization efficiency, many scholars have proposed improvements to the basic PSO algorithm, such as modifying PSO with an inertia weight [3], modifying PSO with a contraction factor [4], and combining it with other intelligent algorithms [5]. These modified algorithms bring further improvements in calculation efficiency and convergence rate.

Proper coordination between global search and local search is critical for an algorithm to converge to the global optimal solution. The basic PSO algorithm is conceptually simple and its parameters are easy to control, but it treats the entire optimization as a whole without a detailed division of labor and searches with the same intensity throughout, which to a certain extent leads to premature convergence. A hierarchical structure poly-particle swarm optimization approach utilizing the hierarchy concept of control theory is presented, in which parallel optimization employs poly-particle swarms in the bottom layer; this is equivalent to increasing the particle number and enlarges the particle search domain.
To avoid the algorithm getting stuck in a local optimum and becoming premature, a disturbance strategy is introduced, in which the particle velocity is updated and re-initialized when the flying velocity falls below a minimum restriction during iteration. The best position found by each poly-particle swarm in the bottom layer is regarded as the best position of a single particle in the top layer. The top layer performs PSO optimization and feeds the global optimal solution back to the bottom layer. Independent search by the poly-particle swarms in the bottom layer ensures that the optimization is carried out over a wider area, while in the top layer the particle swarm's tracking of the current global optimal solution ensures the convergence of the algorithm. Several benchmark functions have been used to test the algorithm in this paper, and the results show that the new algorithm performs well in solution quality and convergence characteristics, and can avoid the premature phenomenon effectively.

2. Basic PSO algorithm

PSO is a swarm-intelligence-based evolutionary computation technique. An individual in the swarm is a volume-less particle in a multidimensional search space. The position in the search space represents a potential solution of the optimization problem, and the flying velocity determines the direction and step of the search. Each particle flies in the search space at a velocity which is dynamically adjusted according to its own flying experience and its companions' flying experience, i.e., it constantly adjusts its direction and velocity by tracing the best position found so far by the particle itself and by the whole swarm, which forms the positive feedback of swarm optimization. The particle swarm tracks these two best current positions, moves to better regions gradually, and finally arrives at the best position of the whole search space.

Supposing a D-dimensional objective search space, the PSO algorithm randomly initializes a swarm of m particles; the position $X_i$ (a potential solution of the optimization problem) of the i-th particle can be represented as $\{x_{i1}, x_{i2}, \dots, x_{iD}\}$. Substituting it into the objective function yields an adaptive (fitness) value, which is used to evaluate the solution. Accordingly, the flying velocity is represented as $\{v_{i1}, v_{i2}, \dots, v_{iD}\}$. The individual extremum $P_i = \{p_{i1}, p_{i2}, \dots, p_{iD}\}$ represents the best previous position of the i-th particle, and the global extremum $P_g = \{p_{g1}, p_{g2}, \dots, p_{gD}\}$ represents the best previous position of the swarm. The velocity and position are updated at each iteration according to the formulas below:

$$\begin{cases} v_{id}^{k+1} = w\,v_{id}^{k} + c_1 r_1 (p_{id} - x_{id}^{k}) + c_2 r_2 (p_{gd} - x_{id}^{k}) \\ \text{if } v_{id}^{k+1} > v_{max},\ v_{id}^{k+1} = v_{max};\quad \text{if } v_{id}^{k+1} < v_{min},\ v_{id}^{k+1} = v_{min} \\ x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1} \end{cases} \tag{1}$$

In the formula, k represents the iteration number; w is the inertia weight; $c_1$ and $c_2$ are learning factors; $r_1$ and $r_2$ are two random numbers in the range [0,1]. The end condition of the iteration is that the greatest iteration number is reached or the best previous position satisfies the minimum adaptive value.

The first part of formula (1) is the previous velocity of the particle, reflecting the particle's memory. The second part is the cognitive action of the particle, reflecting the particle's own thinking. The third part is the social action of the particle, reflecting information sharing and mutual cooperation between particles.
3. Hierarchical structure poly-particle swarm optimization

During the search of the PSO algorithm, particles always track the current global optimum or their own optimum, which makes it easy to get stuck in a local minimum [6]. Aiming at this problem of traditional PSO, this paper proposes a hierarchical structure poly-particle swarm optimization approach, as follows.

(1) Based on the hierarchical control concept of control theory, a two-layer poly-particle swarm optimization is introduced. There are L swarms in the bottom layer, and $p_{ig}$ represents the global optimum of the i-th swarm. There is one swarm in the top layer, and $p_g$ represents the global optimum of the top layer. Figure 1 shows the scheme of HSPPSO (the top-layer swarm above the L bottom-layer swarms).

Supposing there are L swarms in the bottom layer with m particles each, the parallel computation of L swarms is equivalent to increasing the number of particles to L*m, which expands the search space, while the parallel computation time of the poly-particle swarms does not increase with the number of particles. Besides the individual and global extremum of each particle swarm, the poly-particle swarm global extremum is also considered in adjusting the velocity and position of particles in the L swarms. A correction term $c_3 r_3 (p_{g}^k - x_{ij}^k)$ is added to the algorithm, and the iteration formulas are as follows:

$$\begin{cases} v_{ij}^{k+1} = w\,v_{ij}^{k} + c_1 r_1 (p_{ij} - x_{ij}^{k}) + c_2 r_2 (p_{ig} - x_{ij}^{k}) + c_3 r_3 (p_{g} - x_{ij}^{k}) \\ \text{if } v_{ij}^{k+1} > v_{max},\ v_{ij}^{k+1} = v_{max};\quad \text{if } v_{ij}^{k+1} < v_{min},\ v_{ij}^{k+1} = v_{min} \\ x_{ij}^{k+1} = x_{ij}^{k} + v_{ij}^{k+1} \end{cases} \tag{2}$$

In the formula, $c_3$ is a learning factor; $r_3$ is a random number in the range [0,1]; i indexes the swarms, i = 1, ..., L; j indexes the particles, j = 1, ..., m; $x_{ij}$ represents the position of the j-th particle in the i-th swarm; $p_{ij}$ represents the individual extremum of the j-th particle in the i-th swarm. The fourth part of formula (2) represents the influence of global experience on the particle, reflecting information sharing and mutual cooperation between the particles and the global extremum.

The top layer commences a secondary optimization after the poly-particle swarm optimization in the bottom layer, taking each of the L swarms as a particle and the swarm optimum $p_{ig}$ as the individual optimum of the corresponding particle. The iteration formulas for the particle velocity update are:

$$\begin{cases} v_{i}^{k+1} = w\,v_{i}^{k} + c_1 r_1 (p_{ig} - x_{i}^{k}) + c_2 r_2 (p_{g} - x_{i}^{k}) \\ \text{if } v_{i}^{k+1} > v_{max},\ v_{i}^{k+1} = v_{max};\quad \text{if } v_{i}^{k+1} < v_{min},\ v_{i}^{k+1} = v_{min} \\ x_{i}^{k+1} = x_{i}^{k} + v_{i}^{k+1} \end{cases} \tag{3}$$

The top-layer PSO algorithm adjusts particle velocities according to the global optimum of each swarm in the bottom layer. Independent search by the L poly-particle swarms in the bottom layer ensures that the optimization is carried out over a wider area. In the top layer, the particle swarm's tracking of the current global optimal solution ensures the convergence of the algorithm, so that attention is paid both to the precision and to the efficiency of the optimization process.

(2) Introduce a disturbance strategy. Optimization is guided by the cooperation and competition between particles in the PSO algorithm. Once a particle finds a position which is currently optimal, other particles quickly move to that spot and gather around the point. The swarm then loses diversity, and there is no further mutual exchange and influence between particles. If the particles have no variability, the whole swarm stagnates at that point. If the point is a local optimum, the particle swarm cannot search other areas and gets trapped in the local optimum, the so-called premature phenomenon.
(2) Introduce a disturbance strategy. Optimization in the PSO algorithm is guided by cooperation and competition between particles. Once a particle finds a position that is currently optimal, the other particles quickly move to that spot and gather around it. The swarm then loses diversity, and there is no longer any mutual effect or influence between particles. If the particles have no variability, the whole swarm stagnates at that point; if the point is a local optimum, the particle swarm cannot search other areas and gets trapped in the local optimum, the so-called premature phenomenon. To avoid prematurity, the particle velocity is updated and re-initialized, without considering the former strategy, whenever the velocity of a particle on the bottom layer is less than the boundary value and the particle position can no longer be updated through its velocity.

4. Algorithm flow

(1) Initialization. Set the number of swarms L, the particle swarm scale m, and the algorithm parameters: inertia weight, learning factors, velocity boundary value, and the largest iteration number.

(2) Each swarm on the bottom layer randomly generates m original solutions, which are also taken as the particles' current optimum solutions p_ij. The adaptive values of all particles are computed, and the optimum adaptive value over all particles is taken as the swarm's current optimum solution p_ig, which is transferred to the top layer.

(3) The top layer accepts the L values p_ig from the bottom layer as the original values of its particles, which are also taken as the particles' own current optimum solutions p_ig. The optimum adaptive value over all particles is taken as the swarm's current optimum solution p_g, which is transferred to the bottom layer.

(4) The bottom layer accepts p_g from the top layer and updates the velocity and position of each particle according to formula (2); if a velocity is less than the boundary value, that particle velocity is updated and re-initialized.

(5) The bottom layer computes the adaptive value of each particle and compares it with the current individual extremum; the better value is taken as the particle's current optimum solution p_ij. The minimum of the p_ij is compared with the swarm's global extremum, and the better value is taken as the swarm's current global extremum p_ig, which is transferred to the top layer.

(6) The top layer accepts the p_ig from the bottom layer. The optimum adaptive value among the p_ig is compared with the current global extremum p_g, and the better value is taken as the current global extremum p_g. The velocity and position of each particle are updated according to formula (3).

(7) The top layer computes the adaptive value of each particle and compares it with the current individual extremum; the better value is taken as the particle's current optimum solution p_ig. The minimum of the p_ig is compared with the global extremum p_g, and the better value is taken as the swarm's current global extremum p_g, which is transferred to the bottom layer.

(8) Evaluate the end condition of the iteration: if it is satisfied, output the result p_g; otherwise return to step (4).

5. Example

To study the algorithm's performance, tests are done on four common benchmark functions: Spherical, Rosenbrock, Griewank, and Rastrigin. The optimal adaptive value of all four functions is zero. Spherical and Rosenbrock are unimodal functions, while Griewank and Rastrigin are multimodal functions with a great number of local minima.

f1, Spherical function: \( f_1(x) = \sum_{i=1}^{n} x_i^2,\ -100 \le x_i \le 100 \)

f2, Rosenbrock function: \( f_2(x) = \sum_{i=1}^{n-1} \left[ 100 \left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 \right],\ -100 \le x_i \le 100 \)

f3, Griewank function: \( f_3(x) = \frac{1}{4000} \sum_{i=1}^{n} (x_i - 100)^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i - 100}{\sqrt{i}}\right) + 1,\ -100 \le x_i \le 100 \)

f4, Rastrigin function: \( f_4(x) = \sum_{i=1}^{n} \left( x_i^2 - 10 \cos(2\pi x_i) + 10 \right),\ -100 \le x_i \le 100 \)

The dimension of the four benchmark functions is set to 10, the maximum iteration number to 500, the swarm scale m to 40, the number of swarms to 5, the inertia weight to 0.7298, and c1, c2, and c3 to 1.4962.
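Under these settings, the test setup can be sketched in MATLAB as follows; the function definitions are our reconstruction of the benchmarks above, not the authors' code.

```matlab
% The four benchmarks in n = 10 dimensions (x is a row vector), with the
% experiment parameters stated above (definitions are our reconstruction).
n = 10;
f1 = @(x) sum(x.^2);                                      % Spherical
f2 = @(x) sum(100*(x(2:end) - x(1:end-1).^2).^2 ...
            + (x(1:end-1) - 1).^2);                       % Rosenbrock
f3 = @(x) sum((x - 100).^2)/4000 ...
        - prod(cos((x - 100)./sqrt(1:numel(x)))) + 1;     % Griewank (shifted)
f4 = @(x) sum(x.^2 - 10*cos(2*pi*x) + 10);                % Rastrigin
kmax = 500;                    % maximum iteration number
m = 40;                        % swarm scale
L = 5;                         % number of bottom-layer swarms
w = 0.7298;                    % inertia weight
c1 = 1.4962; c2 = 1.4962; c3 = 1.4962;                    % learning factors
```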
Each test is run 10 times with random initialization. Table 1 presents the results of the four benchmark test functions with PSO and HSPPSO.

Table 1. Comparison of simulation results

Function | Algorithm | Minimum    | Maximum   | Average
f1       | PSO       | 3.0083e-9  | 8.9688e-5 | 3.5601e-8
f1       | HSPPSO    | 3.5860e-12 | 1.9550e-7 | 4.2709e-11
f2       | PSO       | 5.7441     | 8.8759    | 7.65997
f2       | HSPPSO    | 4.2580     | 7.8538    | 5.5342
f3       | PSO       | 0          | 24.9412   | 7.36575
f3       | HSPPSO    | 0          | 2.3861    | 0.23861
f4       | PSO       | 4.9750     | 13.9392   | 10.15267
f4       | HSPPSO    | 0.9950     | 7.9597    | 4.4806

Table 1 shows that HSPPSO outperforms PSO in solution quality and obtains better results in both the local-search and global-search tests. Although the difference between HSPPSO and PSO on the unimodal functions is not obvious, HSPPSO clearly outperforms PSO on the multimodal functions, which indicates that HSPPSO has better prospects for avoiding premature convergence and finding the global optimum.

Figures 2 to 9 show the adaptive-value evolution curves of the 10 random runs for the four benchmark functions with HSPPSO and PSO (each figure plots adaptive value against iteration number). They indicate that HSPPSO outperforms PSO in both initial convergence speed and iteration time; in some tests PSO even fails to find the optimum value.

Figure 2. HSPPSO on test function f1. Figure 3. PSO on test function f1. Figure 4. HSPPSO on test function f2. Figure 5. PSO on test function f2. Figure 6. HSPPSO on test function f3. Figure 7. PSO on test function f3. Figure 8. HSPPSO on test function f4. Figure 9. PSO on test function f4.

6. Conclusion

Compared with the basic PSO, the HSPPSO proposed in this paper obtains better adaptive values and a faster convergence rate under the same number of iteration steps. HSPPSO provides a new idea for large-scale system optimization problems.

References
[1] J. Kennedy, R. Eberhart. Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks, 1995, Vol. 4, pp. 1942-1948.
[2] F. van den Bergh, A. P. Engelbrecht. Training product unit networks using cooperative particle swarm optimization. In: Proceedings of the IEEE International Joint Conference on Neural Networks, Washington DC, USA, 2001, pp. 126-131.
[3] Y. Shi, R. C. Eberhart. A modified particle swarm optimizer. In: Proceedings of the IEEE World Congress on Computational Intelligence, 1998, pp. 69-73.
[4] R. C. Eberhart, Y. Shi. Comparing inertia weights and constriction factors in particle swarm optimization. In: Proceedings of the IEEE Conference on Evolutionary Computation (ICEC), 2001, pp. 84-88.
[5] M. Løvbjerg, T. K. Rasmussen, T. Krink. Hybrid particle swarm optimiser with breeding and subpopulations. In: Proceedings of the Third Genetic and Evolutionary Computation Conference (GECCO). Piscataway, NJ: IEEE Press, 2001.
[6] Wang Ling. Intelligent Optimization Algorithms with Applications. Beijing: Tsinghua University Press, 2001.
The particleswarm function

The particleswarm function is an optimization solution built on optimization-algorithm techniques. It is a bio-inspired algorithm, developed from mathematical models of collective behavior in nature, such as ant colonies and fish schools. Although particleswarm is based on optimization-algorithm techniques, it can be applied in many different areas. The particleswarm function is an effective tool for optimizing a wide range of technical problems. It can be used to solve complex parallel optimization problems, such as multivariate non-convex optimization in which several entities vary at the same time, mixed optimization, migration problems, and multi-objective optimization. The principle of the particleswarm function is that it uses a group of mutually independent particles as search agents, which search the dimensional space to find the optimal solution. The particles move according to certain rules, and more particles are allocated around the better solutions found so far in order to improve optimization efficiency.
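As a minimal usage sketch (assuming MATLAB's Global Optimization Toolbox; the objective, bounds, and option values here are illustrative choices, not prescribed by the text):

```matlab
% Minimize a 10-dimensional Spherical function with MATLAB's particleswarm.
fun = @(x) sum(x.^2);             % objective to minimize
nvars = 10;                       % number of decision variables
lb = -100*ones(1, nvars);         % lower bounds
ub =  100*ones(1, nvars);         % upper bounds
opts = optimoptions('particleswarm', 'SwarmSize', 40, 'MaxIterations', 500);
[xbest, fbest] = particleswarm(fun, nvars, lb, ub, opts);
```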
In addition, the particleswarm function can be used in control system design. It uses a parallel optimization algorithm based on heuristic factors to steer machine-learning and control systems so that the best control parameters can be selected, as in the sketch below.
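One hedged sketch of such parameter selection (the plant model, cost function, and gain range are all invented for illustration) is to let the swarm minimize a tracking-error cost over a candidate controller gain:

```matlab
% stepCost.m -- hypothetical cost: squared tracking error of a proportional
% controller with gain Kp on a simple discrete first-order plant.
function J = stepCost(Kp)
    x = 0; J = 0;
    for t = 1:100
        u = Kp*(1 - x);        % P control toward setpoint 1
        x = 0.9*x + 0.1*u;     % first-order plant step
        J = J + (1 - x)^2;     % accumulate squared tracking error
    end
end
```

A call such as `Kp = particleswarm(@stepCost, 1, 0, 10)` would then search the assumed gain range [0, 10] for the lowest-cost gain.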
The particleswarm function can also be used in machine learning, especially the training of neural networks, where it can greatly reduce training time. Moreover, it can be used in bioinformatics, for example in genomics applications, where it can help researchers analyze large amounts of genomic information and discover relationships between genes. Some particleswarm variants can be used to detect mutation sites in large genomic data sets, thereby revealing the role of gene mutations in the onset of disease. In summary, the particleswarm function is an effective method for solving complex optimization problems, and it can be applied in many scenarios such as control system design, machine learning, and bioinformatics. It not only achieves considerable results in many respects but also offers good scalability, so the particleswarm function is receiving more and more attention in today's optimization problem solving.
Principles of particle swarm optimization

Particle swarm optimization (PSO) is a stochastic search algorithm based on swarm intelligence, proposed by Kennedy and Eberhart in 1995.
The basic idea of the particle swarm optimization algorithm is to regard every possible solution in the search space as a particle. Each particle has a position and a velocity. Guided by information about its own best position and the global best position, together with random factors, every particle in the swarm continually updates its position and velocity and thereby moves ever closer to the optimal solution.
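As a one-dimensional worked example of this update (all numbers invented for illustration): take inertia weight \(w = 0.7298\), learning factors \(c_1 = c_2 = 1.4962\), current position \(x = 2\), velocity \(v = 0.5\), personal best \(p = 1\), global best \(g = 0\), and random draws \(r_1 = 0.4\), \(r_2 = 0.6\). The new velocity is \(v' = 0.7298 \times 0.5 + 1.4962 \times 0.4 \times (1 - 2) + 1.4962 \times 0.6 \times (0 - 2) \approx 0.365 - 0.598 - 1.795 \approx -2.029\), so the new position is \(x' = 2 + (-2.029) \approx -0.029\): the particle is pulled from \(x = 2\) toward the two best positions near the origin.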
The advantages of the particle swarm optimization algorithm are that it is simple and fast and can solve multidimensional, nonlinear, and non-convex optimization problems; its drawbacks include a tendency to fall into local optima and slow convergence.