A modified particle swarm optimizer based on cloud model
4.4 Swarm Intelligence Evolutionary Computation
In nature, social insects such as ants and bees, and animal groups such as bird flocks and fish schools, exhibit an interesting phenomenon: the intelligence of any single individual is limited, yet by relying on the capability of the group they display an intelligence beyond that of the individual. This shows that agents with simple intelligence can exhibit complex intelligent behavior through cooperation. By artificially simulating this kind of collective behavior, new methods for solving combinatorial optimization problems and other practical problems have emerged one after another. The study of these methods is referred to as swarm intelligence research. In 1999 Bonabeau, Dorigo and others defined swarm intelligence as any algorithm or distributed problem-solving device inspired by the collective behavior of social insect colonies and other animal societies.
Many practical problems can be transformed into optimization problems. Classified by computational complexity, these problems fall into P problems and NP problems, and at present there is still no good general method for solving NP-complete problems. Methods inspired by ant colonies, bird flocks and fish schools, namely Ant Colony Optimization (ACO), Particle Swarm Optimization (PSO) and Fish Swarm Optimization (FSO), provide entirely new ways of attacking such problems; they are three typical swarm intelligence algorithms in the field of computational intelligence. Studying swarm intelligence therefore has important theoretical and practical significance. This section introduces these three typical swarm intelligence algorithms and their improved variants.
4.4.1 Particle Swarm Optimization
1. Basic principle and algorithm
The particle swarm optimization (PSO) algorithm was first proposed by Eberhart and Kennedy in 1995; its basic concepts stem from research on artificial life and on the foraging behavior of bird flocks. Imagine the following scene: a flock of birds searches randomly for food in an area that contains only a single piece of food; none of the birds knows where the food is, but every bird knows how far it currently is from the food. What, then, is the optimal strategy for finding the food? The simplest and most effective one is to search the area around the bird that is currently closest to the food. The PSO algorithm takes its inspiration from this kind of collective behavior and applies it to solving optimization problems. In PSO, every potential solution of the optimization problem can be imagined as a point in a d-dimensional search space, which we call a "particle".
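To make this representation concrete (the following illustration is my own and is not part of the original text; the field names are arbitrary), a particle can be stored as a position vector, a velocity vector and the best position it has found so far:

import numpy as np

class Particle:
    """A candidate solution: a point in a d-dimensional search space."""
    def __init__(self, dim, lower, upper, rng):
        # random position inside the search range and a small random velocity
        self.position = rng.uniform(lower, upper, dim)
        self.velocity = 0.1 * rng.uniform(-(upper - lower), upper - lower, dim)
        self.best_position = self.position.copy()   # personal best (pbest)
        self.best_fitness = float("inf")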
Name: Bao Yuanyuan   Student ID: 2013704093   Major: Computer Technology
Granular Computing, 2008 (GrC 2008), IEEE International Conference, 2008: 486-490

An Improved Particle Swarm Optimization Algorithm
Lv Lin, Luo Qi, Liu Junyong, Tian Lifeng
School of Electrical Engineering and Information, Sichuan University, Chengdu, Sichuan 610065, China  lvlin@

Abstract: Using the hierarchical structure concept of control theory, this paper proposes a particle swarm optimization algorithm based on hierarchically structured multiple sub-swarms (HSPPSO). At the bottom layer, several particle swarms are computed in parallel in order to enlarge the particles' search area. At the top layer, each sub-swarm is regarded as a single particle: the best value of a sub-swarm is taken as the personal best of the corresponding top-layer particle, particle swarm optimization is carried out at the top layer, and the optimization result is fed back to the bottom layer. If, during the run of the particle swarm optimization (PSO) algorithm, some particles tend toward a local extremum, their velocities are updated and they are reinitialized. Test results on four typical benchmark functions show that the proposed HSPPSO algorithm outperforms PSO in both convergence speed and accuracy.
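The abstract only outlines the two-layer idea. As a rough, purely illustrative sketch (my own, not code from the paper; run_sub_swarm and the dictionary layout are hypothetical), one cycle of such a hierarchy could look like this: the sub-swarms search at the bottom layer, their best positions form a small top-layer swarm, and the refined top-layer results are fed back down.

import numpy as np

def hierarchical_step(sub_swarms, run_sub_swarm, fitness):
    """One bottom-layer/top-layer cycle in the spirit of HSPPSO (illustrative).

    sub_swarms   : list of swarm states, each a dict with at least a 'gbest' vector
    run_sub_swarm: routine that advances one swarm by a few PSO iterations
    fitness      : objective function to be minimized
    """
    # Bottom layer: advance every sub-swarm (done in parallel in the paper).
    for swarm in sub_swarms:
        run_sub_swarm(swarm, fitness)

    # Top layer: each sub-swarm's best position acts as one particle.
    top = {"positions": np.array([s["gbest"] for s in sub_swarms])}
    top["gbest"] = min(top["positions"], key=fitness)
    run_sub_swarm(top, fitness)

    # Feed the top-layer result back to the bottom layer.
    for swarm, refined in zip(sub_swarms, top["positions"]):
        if fitness(refined) < fitness(swarm["gbest"]):
            swarm["gbest"] = refined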
1. Introduction
Particle swarm optimization (PSO) is a relatively new intelligent optimization algorithm that simulates the migration and flocking behavior of a bird flock during foraging; it was first proposed by Kennedy and Eberhart in 1995 [1]. Like evolutionary algorithms, PSO adopts a population-based, multi-point, parallel global random search strategy, but it needs no complicated evolutionary operators: the search path is determined by each particle's velocity and current position. Compared with earlier intelligent algorithms, PSO has considerable advantages in computation speed and memory consumption, has few tuning parameters, and is simple and easy to implement. PSO has therefore received more and more attention from researchers and has been widely applied to function optimization, combinatorial optimization, neural network training, robot path planning, pattern recognition, fuzzy system control and other fields [2]; research on the algorithm has also spread into areas such as electric power, communications and economics. However, like other random search algorithms, PSO still suffers to some degree from premature convergence. To improve optimization efficiency, many researchers have studied and improved the basic particle swarm algorithm, for example the particle swarm algorithm with inertia weight [3], the particle swarm algorithm with a constriction factor [4], and hybrid algorithms combined with other intelligent algorithms [5].
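For reference (a summary sketch of my own, not text from the paper), the two variants cited above change only the velocity update rule; typical forms, assuming the usual symbols and commonly used parameter values, are:

import numpy as np

def velocity_inertia(v, x, pbest, gbest, w, c1=2.0, c2=2.0, rng=np.random):
    """Velocity update with inertia weight w [3]."""
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def velocity_constriction(v, x, pbest, gbest, c1=2.05, c2=2.05, rng=np.random):
    """Velocity update with a constriction factor chi [4]; requires c1 + c2 > 4."""
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))
    r1, r2 = rng.random(v.shape), rng.random(v.shape)
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))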
Particle Swarm Optimization: Developments, Applications and Resources
Russell C. Eberhart, Purdue School of Engineering and Technology, 799 West Michigan Street, Indianapolis, IN 46202, USA, Eberhart@
Yuhui Shi, EDS Embedded Systems Group, 1401 E. Hoffer Street, Kokomo, IN 46982, USA, Yuhui.Shi@

Abstract - This paper focuses on the engineering and computer science aspects of developments, applications and resources related to particle swarm optimization. Developments in the particle swarm algorithm from its origin in 1995 to the present are reviewed, including brief discussions of constriction factors, inertia weights, and tracking dynamic systems. Applications are reviewed, both those already developed and promising future application areas. Finally, resources related to particle swarm optimization are listed, including books, websites and software. A particle swarm optimization bibliography is given at the end of the paper.
I. Introduction
Particle swarm optimization (PSO) is an evolutionary computation technique developed by Kennedy and Eberhart in 1995 (Kennedy and Eberhart 1995; Eberhart and Kennedy 1995; Eberhart, Simpson and Dobbins 1996). At the time of writing, therefore, PSO has existed for just over five years, and it is currently being researched and used in more than a dozen countries. It is now an appropriate time to step back and look at where particle swarm research stands, how it got there, and where it may be headed. This paper reviews the developments, applications and resources related to the PSO algorithm since its origin in 1995. The perspective is that of engineering and computer science; the paper is not meant to be a comprehensive survey of other areas such as the social sciences.
The next part reviews the main developments of the particle swarm algorithm since 1995: the original algorithm is presented first, followed by brief discussions of constriction factors, inertia weights, and tracking dynamic systems. Applications are then reviewed, both those already developed and promising future application areas; applications already developed include human tremor analysis, power system load stabilization, and product mix optimization.
Adaptive PID Control Based on the Particle Swarm Algorithm
Li Hongquan, Wang Jing
(National Engineering Research Center for Advanced Rolling, University of Science and Technology Beijing, Beijing 100083, China)

[Abstract] The particle swarm algorithm is a parallel iterative approximation algorithm with few parameters that is easy to implement. This paper applies the particle swarm algorithm to adaptive PID control, changing the PID parameters dynamically in real time. Experiments show that the adaptive PID control system based on the particle swarm algorithm has good response speed, stability and robustness.
[Keywords] particle swarm algorithm; proportional-integral-derivative; adaptive control

0 Introduction
Proportional-integral-derivative (PID) control is the most commonly used control in industrial production processes; it has the longest history, the widest application and the strongest adaptability. In industrial process control the PID algorithm accounts for a large share of all controllers in use, and even today, when advanced algorithms are widely applied, PID control remains the dominant control algorithm. The performance of a PID controller, however, depends on the choice of the parameters Kp, Ki and Kd, and in many cases these parameters are chosen from experience. In recent years various PID controllers that tune their parameters with artificial intelligence techniques have appeared, including expert, fuzzy, neural network and genetic algorithm based PID control [1-2].
The particle swarm optimizer (PSO) is a parallel iterative approximation algorithm proposed in 1995 by the American social psychologist James Kennedy and the electrical engineer Russell Eberhart, modeled on the foraging process of bird flocks in nature [3-4]. The particle swarm algorithm has few parameters, is simple and is easy to implement, and it has already been applied to function optimization, neural network training, system identification, multi-objective optimization, the solution of nonlinear equations and many other areas. This paper applies PSO to PID parameter tuning, forming an adaptive PID controller tuned by the particle swarm algorithm.
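The paper gives no code, but the idea can be sketched as follows (my own illustration under simplifying assumptions: a hypothetical first-order plant and an integral-of-absolute-error cost). Each particle encodes a parameter triple [Kp, Ki, Kd], and the fitness is a closed-loop error measure obtained by simulation:

import numpy as np

def pid_cost(params, dt=0.01, steps=500):
    """Fitness of one particle: IAE of the unit-step response of a simple
    first-order plant dy/dt = (-y + u) / tau under PID control (illustrative)."""
    kp, ki, kd = params
    tau, y, integral, prev_err, cost = 1.0, 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                      # setpoint = 1
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y += dt * (-y + u) / tau           # one Euler step of the plant
        prev_err = err
        cost += abs(err) * dt              # integral of absolute error
    return cost

# A PSO routine then minimizes pid_cost over the three-dimensional space [Kp, Ki, Kd].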
1 The Particle Swarm Algorithm
The particle swarm algorithm models the process of a flock of birds searching for food. Suppose a flock of birds is randomly distributed over an area containing only one piece of food; during the search the individuals continuously exchange information and pass on experience. No individual knows the exact position of the food, but each one can keep adjusting its search velocity by tracking the individual that currently occupies the best position, where the best position is judged by the distance to the food; once some individual finds the food, the search of the whole flock is complete. In the particle swarm algorithm, a particle corresponds to a bird, and the optimization process of the particles corresponds to the flock's search for food.
Introduction to the Particle Swarm Optimization Algorithm
v[] is the particle's velocity and present[] is the particle's current position; pbest[] and gbest[] are defined as before; rand() is a random number in (0, 1); c1 and c2 are learning factors, usually c1 = c2 = 2. The pseudocode of the procedure is as follows:

    For each particle
        Initialize particle
    End

    Do
        For each particle
            Calculate fitness value
            If the fitness value is better than the best fitness value (pBest) in history
                Set current value as the new pBest
        End
        Choose the particle with the best fitness value of all the particles as the gBest
        For each particle
            Calculate particle velocity according to equation (a)
            Update particle position according to equation (b)
        End
    While maximum iterations or minimum error criteria is not attained

In each dimension the particle velocity is limited by a maximum velocity Vmax; if the updated velocity in some dimension exceeds the user-defined Vmax, the velocity in that dimension is clamped to Vmax.
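The pseudocode above can be turned directly into a program. Below is a minimal runnable sketch in Python (my own implementation of the standard algorithm described here, not code taken from any of the cited papers); equations (a) and (b) correspond to the velocity and position updates:

import numpy as np

def pso(fitness, dim, lower, upper, n_particles=30, max_iter=1000,
        w=0.7, c1=2.0, c2=2.0, vmax=None, seed=0):
    """Standard global-best PSO minimizing `fitness` over [lower, upper]^dim."""
    rng = np.random.default_rng(seed)
    if vmax is None:
        vmax = 0.2 * (upper - lower)
    x = rng.uniform(lower, upper, (n_particles, dim))        # positions
    v = rng.uniform(-vmax, vmax, (n_particles, dim))         # velocities
    pbest = x.copy()                                          # personal bests
    pbest_val = np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                    # global best
    g_val = pbest_val.min()

    for _ in range(max_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # equation (a)
        v = np.clip(v, -vmax, vmax)                            # Vmax clamp
        x = x + v                                              # equation (b)
        vals = np.array([fitness(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if pbest_val.min() < g_val:
            g, g_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()
    return g, g_val

# Example: minimize the sphere function in 10 dimensions.
best, best_val = pso(lambda p: float(np.sum(p * p)), dim=10, lower=-100, upper=100)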
Comparison of genetic algorithms and PSO
Artificial neural networks and PSO
A simple example illustrates the process of training a neural network with PSO. The example uses the IRIS data set, a benchmark classification problem.
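The example itself is not reproduced in this document. As a rough sketch of how such training works (my own illustration; the scikit-learn data loader and the 4-5-3 network shape are assumptions, not part of the source), the weights of a small network are flattened into one particle position and the fitness is the classification error on IRIS:

import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = (X - X.mean(axis=0)) / X.std(axis=0)                      # normalize features
n_in, n_hidden, n_out = 4, 5, 3
dim = n_in * n_hidden + n_hidden + n_hidden * n_out + n_out   # number of weights

def nn_error(weights):
    """Fitness of one particle: misclassification rate of a 4-5-3 network."""
    i = 0
    w1 = weights[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = weights[i:i + n_hidden]; i += n_hidden
    w2 = weights[i:i + n_hidden * n_out].reshape(n_hidden, n_out); i += n_hidden * n_out
    b2 = weights[i:i + n_out]
    hidden = np.tanh(X @ w1 + b1)
    pred = np.argmax(hidden @ w2 + b2, axis=1)
    return float(np.mean(pred != y))

# The pso() routine sketched above can then minimize nn_error over `dim` weights.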
A Modified Particle Swarm Optimizer Algorithm
Yang Guangyou
(School of Mechanical Engineering, Hubei University of Technology, Wuhan 430068, China)

Abstract: This paper presents a modified particle swarm optimizer algorithm (MPSO). The aggregation degree of the particle swarm is introduced, and the particles' diversity is improved by periodically monitoring this aggregation degree. In the later stage of the PSO run, a Gaussian mutation is applied to the best particle's position, which enhances the particles' capacity to jump out of local minima. Several typical benchmark functions with different dimensions have been used for testing. The simulation results show that the proposed method effectively improves the convergence precision and speed of the PSO algorithm.

Keywords: Particle Swarm, Aggregation Degree, Mutation, Optimization.

1 Introduction
The particle swarm optimization (PSO) is a swarm intelligence optimization method first proposed by Kennedy and Eberhart in 1995 [1,2]. Like other global optimization algorithms, the PSO algorithm tends to suffer from premature convergence. In order to overcome this problem, many solutions and improvements have been suggested, including: changing the descent direction of the best particle [3]; renewing the state of the whole swarm or of some particles according to certain criteria [4]; introducing the breeding and subpopulation concepts of GA [5,6,7]; introducing a diversity measure to control the diversity of the swarm [8]; adopting new position and velocity update equations; and cooperative particle swarm optimizers [9,10]. This paper presents a new model that deals with this issue, yet is conceptually simple and very easy to implement. In the algorithm, the aggregation degree of the particle swarm is monitored periodically and, in the later stage of the run, a Gaussian mutation is applied to the best particle's position, which enhances the particles' capacity to jump out of local minima.

2 Standard PSO Algorithm
PSO simulates the behavior of bird flocking and uses it to solve optimization problems. PSO is initialized with a group of random particles (solutions, x_i) and then searches for optima by updating generations. In every generation, each particle is updated by following two "best" values. The first is the best solution (fitness) it has achieved so far; this value is called pbest (p_i). The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this is a global best and is called gbest (p_g). The velocity and position of each particle are updated according to its own best encountered position and the best position encountered by any particle, using the following equations:

    v_{id} = w \cdot v_{id} + c_1 \cdot rand() \cdot (p_{id} - x_{id}) + c_2 \cdot rand() \cdot (p_{gd} - x_{id})    (1)
    x_{id} = x_{id} + v_{id}    (2)

where v_{id} is the particle velocity in dimension d, x_{id} is the current particle position (solution) in dimension d, and w is the inertia weight. p_i and p_g are defined as stated before, rand() is a random function in the range [0, 1], and c_1 and c_2 are learning factors, usually c_1 = c_2 = 2.
If the velocity exceeds a certain limit, called Vmax, this limit is used as the new velocity for the particle in that dimension, thus keeping the particles within the search space.

3 The Modified PSO Algorithm
Studies indicate that an excessively concentrated particle swarm easily runs into local minima because the population loses diversity. If the aggregation degree of the particle swarm can be controlled effectively, the ability to find the global minimum is improved.

3.1 Aggregation Degree of the Particle Swarm
The aggregation degree of the particle swarm is used to describe the discrete degree of the swarm, namely its diversity, and is represented as a distance between particles. In this paper, the absolute difference of each dimensional coordinate is used as the distance, and its largest value is defined as the aggregation degree of the particle swarm. If m is the size of the swarm, N is the dimensionality of the problem, and x_{id} and x_{jd} are the d-th coordinates of the i-th and j-th particles, the aggregation degree of the swarm is calculated as

    d(t) = \max \{ |x_{id} - x_{jd}| : i, j = 1, 2, \dots, m;\ i \neq j;\ d = 1, 2, \dots, N \}    (3)

The distance between particles can also be measured by the Euclidean mean distance (see reference [6]), but its computational cost is relatively high.

3.2 Strategy of Mutation
The mutation operator of the algorithm has two parts. First, the aggregation degree of the particle swarm is monitored periodically (for example every 50 generations); if it is less than a given threshold (d(t) < e), the positions and velocities of all particles are reinitialized, while pbest and gbest are preserved. Second, when the PSO algorithm cannot find the global optimum, the following mutation is applied to gbest:

    gbest_{new} = gbest \cdot (1 + K \cdot V)    (4)

where V is a random number with a Gaussian distribution. This produces small perturbations with high probability, for local search, while occasionally producing larger perturbations that can jump out of local optima. The initial value of K is set to 1.0 and, every f generations, K is updated as K = E \cdot K, where E is a random number in the range [0.01, 0.9].
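A minimal sketch of the two operations just described (my own reading of equations (3) and (4), not code from the paper; the threshold e, the period and the use of a standard normal variate are illustrative choices):

import numpy as np

def aggregation_degree(x):
    """Equation (3): largest per-coordinate absolute difference between any
    two particles. x has shape (m, N): m particles, N dimensions."""
    return float(np.max(x.max(axis=0) - x.min(axis=0)))

def mutate_gbest(gbest, K, rng):
    """Equation (4): Gaussian mutation of the best position, gbest*(1 + K*V)."""
    V = rng.standard_normal(gbest.shape)
    return gbest * (1.0 + K * V)

def monitor_and_reinit(x, v, lower, upper, vmax, e, rng):
    """Periodic diversity check: if the swarm is too concentrated, reinitialize
    positions and velocities while pbest and gbest are kept elsewhere."""
    if aggregation_degree(x) < e:
        m, n = x.shape
        x = rng.uniform(lower, upper, (m, n))
        v = rng.uniform(-vmax, vmax, (m, n))
    return x, v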
3.3 MPSO Algorithm
The modified PSO (MPSO) adds periodic monitoring of the aggregation degree of the particle swarm to the standard PSO. Furthermore, the mutation operation on gbest is performed when the algorithm has not reached the global optimum. MPSO is implemented as follows:

Step 1. Set the current iteration Iter = 1. Initialize a population of m particles; set each current position as the pbest position; gbest is the best particle position of the initial swarm.
Step 2. Evaluate the fitness of each particle.
Step 3. Compare the fitness of each particle with its pbest. If the current value is better than pbest, set the current position as the pbest position. Furthermore, if the current value is better than gbest, reset gbest to the current index in the particle array.
Step 4. Change the velocity and position of each particle according to equations (1) and (2), respectively.
Step 5. If (Iter % Ie == 0), calculate the aggregation degree d(t) according to equation (3); if d(t) is less than the given threshold e, reinitialize the velocities and positions of the particles.
Step 6. Iter = Iter + 1. If a stop criterion is met, end the algorithm; otherwise execute the mutation operation on gbest according to equation (4) and go to Step 2.

4 Results and Discussion
4.1 Benchmark Functions
The comparison functions adopted here are benchmark functions used by many researchers, in which x is a real vector of dimension n and x_i is its i-th element.

The function f1 is the generalized Rastrigin function:
    f_1(x) = \sum_{i=1}^{n} (x_i^2 - 10\cos(2\pi x_i) + 10),    x_i \in [-5.12, 5.12]
The function f2 is the generalized Griewank function:
    f_2(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos(x_i/\sqrt{i}) + 1,    x_i \in [-600, 600]
The function f3 is the Rosenbrock function:
    f_3(x) = \sum_{i=1}^{n-1} (100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2),    x_i \in [-30, 30]
The function f4 is the Ackley function:
    f_4(x) = -20\exp(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}) - \exp(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)) + 20 + e,    x_i \in [-30, 30]

4.2 Results and Analysis
For the purpose of comparison, 50 trial runs were made for every instance. The size of the particle swarm is 30, the goal value for every function is 1.0e-10, and the maximum number of iterations is 3000. Table 1 lists the Vmax values for the functions. Tables 2 to 4 list the results of both the standard PSO and the modified PSO for the four benchmark functions with 10, 20 and 30 dimensions. PSO denotes the standard PSO with a w decreasing linearly from 0.7 to 0.4 or fixed at 0.4; MPSO denotes the modified PSO with inertia weight w = 0.375 and monitoring interval Ie = 50. Avg/Std stands for the average value and standard deviation of the fitness over 50 trial runs, fevals is the average number of function evaluations, and Ras is the ratio of the number of runs reaching the goal to the number of runs. A fitness value below 1.0e-10 is reported as zero. Comparing the results, it is easy to see that MPSO gives better results than PSO in all cases.

Table 1. The Vmax values of the benchmark functions
    Function  Vmax      Function  Vmax
    f1        10        f3        100
    f2        600       f4        30

Table 2. The results of both PSO and MPSO with 10 dimensions
             ----- StdPSO -----                    ----- MPSO -----
    Fun.  Avg/Std         fevals     Ras     Avg/Std        fevals     Ras
    f1    2.965/1.356     90009.00   1/50    0/0            17745.00   50/50
    f2    0.091/0.038     90009.00   0/50    0.023/0.083    32814.60   44/50
    f3    13.375/25.071   90030.00   0/50    7.102/0.333    90030.00   0/50
    f4    0/0             66064.80   50/50   0/0            35025.60   50/50

Table 3. The results of both PSO and MPSO with 20 dimensions
             ----- StdPSO -----                    ----- MPSO -----
    Fun.  Avg/Std          fevals     Ras     Avg/Std        fevals     Ras
    f1    15.342/4.9735    90030.00   0/50    0/0            20259.00   50/50
    f2    0.029/0.022      88887.60   4/50    0.029/0.124    24218.40   46/50
    f3    81.864/179.313   88887.60   4/50    17.433/0.235   90030.00   0/50
    f4    0/0              88155.00   48/50   0/0            52959.00   49/50

Table 4. The results of both PSO and MPSO with 30 dimensions
             ----- StdPSO -----                    ----- MPSO -----
    Fun.  Avg/Std           fevals     Ras     Avg/Std        fevals     Ras
    f1    40.020/7.860      90030.00   0/50    0/0            20380.20   50/50
    f2    0.012/0.015       89724.00   9/50    0/0            17225.40   50/50
    f3    130.629/280.349   90030.00   0/50    27.962/0.464   90030.00   0/50
    f4    0.019/0.132       90030.00   0/50    0/4.80E-10     58921.80   46/50

Fig. 1 to Fig. 4 show typical convergence results (log10 of fitness versus generations) of PSO and MPSO during 3000 generations for the four benchmark functions with 10 and 30 dimensions; in each case MPSO performs better than PSO.

Fig. 1. Performance comparison on Rastrigin ((a) dimension = 10, (b) dimension = 30)
Fig. 2. Performance comparison on Griewank ((a) dimension = 10, (b) dimension = 30)
Fig. 3. Performance comparison on Rosenbrock ((a) dimension = 10, (b) dimension = 30)
Fig. 4. Performance comparison on Ackley ((a) dimension = 10, (b) dimension = 30)

5 Conclusion
In this paper, a modified PSO is presented. Two new features are added to PSO: the aggregation degree and a Gaussian mutation strategy. The simulation results show that the proposed method effectively improves the convergence precision and speed of the PSO algorithm, performing better than the standard PSO in all test cases. Further study of applications is the subject of future work.

Acknowledgments
This work is supported by the Hubei Key Lab of Manufacturing Quality Engineering (LMQ2005A04).

References
[1] J. Kennedy, R.C. Eberhart. Particle swarm optimization. In: Proc. IEEE Int'l Conf. on Neural Networks, vol. IV, IEEE Service Center, 1995: 1942-1948.
[2] R. Eberhart, J. Kennedy. A new optimizer using particle swarm theory. In: Proc. 6th Int. Symposium on Micro Machine and Human Science, 1995: 39-43.
[3] T. Krink, J.S. Vesterstrom, J. Riget. Particle swarm optimization with spatial particle extension. In: Proceedings of the 2002 Congress on Evolutionary Computation, vol. 2, 2002: 1474-1479.
[4] X. Xie, W. Zhang, Z. Yang. Dissipative particle swarm optimization. In: Proceedings of the 2002 Congress on Evolutionary Computation (CEC'02), vol. 2, 2002: 1456-1461.
[5] P.J. Angeline. Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In: Evolutionary Programming VII, 1998: 601-610.
[6] M. Lovbjerg, T.K. Rasmussen, T. Krink. Hybrid particle swarm optimization with breeding and subpopulations. In: Proc. of the Third Genetic and Evolutionary Computation Conference, San Francisco, USA, 2001.
[7] P.J. Angeline. Using selection to improve particle swarm optimization. In: Proceedings of the IEEE Conference on Evolutionary Computation (ICEC), 1998: 84-89.
[8] J. Riget, J.S. Vesterstroem. A diversity guided particle swarm optimizer - the ARPSO. Technical report, EVALife, University of Aarhus, 2002.
[9] H.J. Yu, L.P. Zhang, S.X. Hu. Adaptive particle swarm optimization algorithm based on feedback mechanism. Journal of Zhejiang University (Engineering Science), 2005, 39(9): 1286-1291.
[10] F. van den Bergh, A.P. Engelbrecht. Training product unit networks using cooperative particle swarm optimizers. In: Proc. of the Third Genetic and Evolutionary Computation Conference, San Francisco, USA, 2001: 126-131.

Author Biography
Yang Guangyou, Ph.D., Professor. Research interests are in the area of intelligent computing and control.
A Modified Particle Swarm Optimizer Based on Cloud Model
Jianping Wen and Binggang Cao
Research Institute of Electric Vehicle and System Control, Xi'an Jiaotong University, Xi'an, China  wenegle@
Proceedings of the 2008 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, July 2-5, 2008, Xi'an, China

Abstract - In this paper, cloud model theory is introduced into the particle swarm optimization algorithm to improve its global search ability and convergence speed. Several modifications are presented. First, the cloud model is adopted to initialize the positions and velocities of the entire population in the initialization range. Second, the inertia weight is dynamically and nonlinearly decreased as the search progresses, using a data set obtained from the cloud model. Third, the two random variables in the velocity rule are assigned with the cloud model. Fourth, the inertia weight and the two random variables are correlated by the cloud model. The modified particle swarm optimization is tested on a set of benchmark functions, and the results are compared with those of the standard particle swarm optimization. Experimental results indicate that the modified particle swarm optimization outperforms the standard particle swarm optimization in global search ability and converges more quickly.

Index Terms - Particle swarm optimization, cloud model, initialization, global optimization.

I. Introduction
The particle swarm optimization (PSO), first proposed by Kennedy and Eberhart [1], [2] in 1995, is a stochastic population-based evolutionary computation optimization technique. It was originally inspired by the social and cognitive behavior found in bird flocking. The algorithm is initialized with a population of particles randomly distributed in the search space, and each particle is assigned a randomized velocity. During a search, each particle tends to fly towards better search areas with a velocity that is dynamically adjusted according to its own experience and the experience of its companions, i.e., the other members of the population. The velocity of each particle is updated using the best position it has visited so far and the overall best position visited by its companions; the position of each particle is then updated using its updated velocity at every iteration.

Since its introduction, PSO has attracted much attention from researchers around the world, and many works have aimed to improve the performance of the original PSO algorithm. Among these improvements, Shi and Eberhart [3] introduced a new parameter, the inertia weight, to overcome the lack of an actual control mechanism for the velocity. Clerc [4] introduced another parameter, the constriction factor, which may help to ensure convergence. Eberhart and Shi [5] compared the performance of PSO using the inertia weight with that of PSO using the constriction factor; the results indicated that including the constriction factor increases the rate of convergence. The most widely used improvement is the introduction of the inertia weight [6], which is employed to control the impact of the previous history of velocities on the current one. On the basis of the PSO algorithm with inertia weight, Shi and Eberhart [7] used a linearly varying inertia weight over the generations to balance global exploration and local exploitation, which improved the performance of PSO significantly. For better performance, Shi and Eberhart [8] designed a fuzzy system to adjust the inertia weight of the PSO algorithm.
With that fuzzy system, the inertia weight was changed nonlinearly and dynamically to achieve a better balance between global and local search abilities over the generations.

In this paper, we present and analyze the modified PSO (mPSO) algorithm. To reflect the nature of the flocking behavior of birds or the sociological behavior of a group of people, and to provide a balance between global exploration and local exploitation, the cloud model is adopted to initialize the positions and velocities of the entire population in the initialization range. The inertia weight is nonlinearly and dynamically decreased as the search progresses by using the cloud model. In addition, the uniform distribution function in the velocity rule is replaced with the cloud model in order to maintain the diversity of the population. Finally, a numerical simulation is given, which verifies that the mPSO algorithm is significantly superior to the standard PSO (sPSO) algorithm.

II. The Standard PSO Algorithm
Suppose that the search space is D-dimensional and that a swarm consists of N particles moving around in it. The i-th particle has a current position X_i = (x_{i1}, x_{i2}, ..., x_{iD}), a current velocity V_i = (v_{i1}, v_{i2}, ..., v_{iD}), and a personal best position achieved so far, pbest_i, represented by P_i = (p_{i1}, p_{i2}, ..., p_{iD}). The best position achieved so far by the whole swarm, gbest, is represented by G = (g_1, g_2, ..., g_D). At each step, the velocity of the i-th particle and its new position are updated according to the following two equations:

    v_{id}(t+1) = w\,v_{id}(t) + c_1 r_{1i}(t)(p_{id}(t) - x_{id}(t)) + c_2 r_{2i}(t)(g_d(t) - x_{id}(t))    (1)
    x_{id}(t+1) = x_{id}(t) + v_{id}(t)    (2)

where w is called the inertia weight, and c_1 and c_2 are constants called the cognitive learning coefficient and the social learning coefficient, respectively. r_{1i}(t) and r_{2i}(t) are two separately generated, uniformly distributed random numbers in the range [0, 1], and their values differ at every iteration. The position of each particle is updated at every iteration by adding the velocity vector to the position vector, as given in (2).

III. Proposed New Development
A. Cloud Model
The cloud model [9] is defined as follows. Suppose that U is the quantitative universe of discourse and C is a qualitative concept in U. If the quantitative value x is one of the random realizations of C in U, and the certainty degree of x with respect to C is a random number with a stable tendency, then the distribution of x in the universe of discourse U is called a cloud model, and x is a cloud drop. A cloud is made up of a great number of cloud drops, which together represent a realization of a qualitative concept; any single cloud drop is a mapping from the qualitative concept into the universe of discourse.

The normal cloud is most useful for linguistic terms of vague concepts, because the normal distribution is supported by results in every branch of both the social and the natural sciences [10]. A normal cloud is defined by three digital characteristics: the expected value (Ex), the entropy (En) and the hyper-entropy (He). The expectation Ex is the point that best represents the qualitative concept in the universe of discourse; it is the most typical sample of the concept. The entropy En measures the coverage of the concept: the larger En is, the more macroscopic the concept.
The hyper-entropy He is the uncertainty degree of the entropy, and can also be considered the entropy of En; it reflects the randomness of the samples of a qualitative concept [11]. To realize the transformation between quantitative values and qualitative concepts, two kinds of cloud generators are introduced: the positive (forward) cloud generator and the backward cloud generator. Given the three digital characteristics Ex, En and He, the positive cloud generator can produce as many cloud drops as required [12].

B. The Modified PSO Algorithm
The modified PSO algorithm is based on the sPSO algorithm; the detailed modifications are given below.

In most research on the sPSO algorithm, a population of particles is generated with uniformly random positions, and random velocities are then assigned to each particle during initialization [13]. The general location of potential solutions in a search space may not be known in advance. In this paper, the cloud model is adopted to initialize the position and velocity vectors of the particles throughout the initialization range, which better reflects the flocking behavior of birds or of a school of fish.

The inertia weight is used to control the momentum of the particle by weighing the contribution of the previous velocity, basically controlling how much memory of the previous flight direction influences the new velocity and balancing the global and local search abilities. A large inertia weight facilitates global exploration, while a small one tends to facilitate local exploration. For finding the optimum efficiently, it is crucial that the inertia weight is changed nonlinearly and dynamically so as to better balance global and local search abilities during the search. In this paper, to provide that balance efficiently, we present a new method that uses the normal cloud to nonlinearly and dynamically adjust the inertia weight over the course of the run.

The parameters r_{1i}(t) and r_{2i}(t) are used to maintain the diversity of the population. Considerably high diversity is necessary during the early part of the search, to allow use of the full range of the search space; during the latter part of the search, when the algorithm is converging to the optimal solution, fine-tuning of the solutions is important for finding the global optimum efficiently. We therefore replace the uniform distribution function in the velocity rule with the normal cloud to maintain the diversity of the population over the generations. The velocity equation becomes

    v_{id}(t+1) = w'_i\,v_{id}(t) + c_1 C_{1i}(t)(p_{id}(t) - x_{id}(t)) + c_2 C_{2i}(t)(g_d(t) - x_{id}(t))    (3)

where the values of w'_i over the generations are the cloud-drop values arranged in descending order, and C_{1i}(t) and C_{2i}(t) are cloud-drop values that are not arranged in descending order. In the mPSO algorithm, each particle of the swarm shares information globally and benefits from the discoveries and previous experiences of all other colleagues during the search process.
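Before the step-by-step procedure, a minimal sketch of the forward normal cloud generator and of the velocity rule of equation (3) may help (my own reading of the description above; the paper gives no code, and the generator below follows the commonly published forward normal cloud algorithm rather than a formulation stated in this paper):

import numpy as np

def normal_cloud_drops(Ex, En, He, n, rng):
    """Forward normal cloud generator: n drops with digital characteristics
    (Ex, En, He). For each drop, En' ~ N(En, He^2) and x ~ N(Ex, En'^2)."""
    En_prime = rng.normal(En, He, n)
    return rng.normal(Ex, np.abs(En_prime))

def cloud_velocity_update(v, x, pbest, gbest, w_drop, c1, c2, drops, rng):
    """Equation (3): the inertia weight and the two random factors are taken
    from cloud drops instead of from the uniform distribution."""
    d = v.shape[0]
    C1 = rng.choice(drops, d)      # cloud-drop factors, not sorted
    C2 = rng.choice(drops, d)
    return w_drop * v + c1 * C1 * (pbest - x) + c2 * C2 * (gbest - x)

# Example: one data set of drops for a whole run (Ex, En, He as in Table III).
rng = np.random.default_rng(0)
drops = normal_cloud_drops(Ex=1.0, En=0.02, He=0.003, n=2000, rng=rng)
w_schedule = np.sort(drops)[::-1]  # inertia weights used in descending order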
The modified PSO method can be described as follows:
Step 1. Generate a data set whose size equals the number of algorithm iterations by using the cloud model.
Step 2. Initialize each particle's position and velocity vector using a portion of those data and the initialization range.
Step 3. Evaluate the fitness value of each particle.
Step 4. Set pbest_i of each particle to its initial position and determine gbest.
Step 5. Update each particle's velocity and position using (3) and (2), respectively.
Step 6. Evaluate the fitness value of each particle. If the current fitness of a particle exceeds its previous best fitness, revise the location of pbest_i. After all particles have been updated, determine and revise gbest if necessary.
Step 7. Check the stop criterion: whether the iteration count has reached the given number or the best fitness has reached the given value. If the criterion is satisfied, stop the program; otherwise go to Step 5.

IV. Simulation Results
In order to test the capability of the mPSO algorithm, a set of 10 benchmark functions [14] is selected, and the mPSO algorithm is compared with the sPSO algorithm. The benchmark functions are as follows:

Spherical:       f_{Sph}(x) = \sum_{i=1}^{n} x_i^2
Ackley:          f_{Ack}(x) = -20\exp(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}) - \exp(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)) + 20 + e
Griewank:        f_{Gri}(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n}\cos(x_i/\sqrt{i}) + 1
Rosenbrock:      f_{Ros}(x) = \sum_{i=1}^{n-1} (100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2)
Rastrigin:       f_{Ras}(x) = \sum_{i=1}^{n} (x_i^2 - 10\cos(2\pi x_i) + 10)
Bohachevsky 1:   f_{Boh}(x) = x_1^2 + 2x_2^2 - 0.3\cos(3\pi x_1) - 0.4\cos(4\pi x_2) + 0.7
Easom:           f_{Eas}(x) = -\cos(x_1)\cos(x_2)\exp(-((x_1 - \pi)^2 + (x_2 - \pi)^2)) + 1.0
Colville:        f_{Col}(x) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2 + 90(x_4 - x_3^2)^2 + (1 - x_3)^2 + 10.1((x_2 - 1)^2 + (x_4 - 1)^2) + 19.8(x_2 - 1)(x_4 - 1)
Hyperellipsoid:  f_{Hyp}(x) = \sum_{i=1}^{n} i\,x_i^2
Schwefel:        f_{Sch}(x) = \sum_{i=1}^{n} x_i\sin(\sqrt{|x_i|}) + 418.9829\,n

Some parameters of the algorithm are set as follows: the number of particles in the population is 30; the maximum number of iterations is 2000 in each run; each optimization experiment is run 50 times; a fully connected topology is used in all cases. During the optimization process, the particles are limited to move in the region defined by [-x_max, x_max] and the velocity is restricted to [-x_max, x_max]. The number of dimensions, the search space and the optimum are listed in Table I. The swarm initialization range and the goal of each function are given in Table II. The parameters c_1, c_2, Ex, En and He for the mPSO algorithm are listed in Table III.

Table I. Benchmark configuration for the simulations
    Function   Dimension   Optimal f   [-x_max, x_max]
    f_Sph      30          0           [-100, 100]
    f_Ack      30          0           [-100, 100]
    f_Gri      30          0           [-600, 600]
    f_Ros      30          0           [-100, 100]
    f_Ras      30          0           [-100, 100]
    f_Boh      2           0           [-100, 100]
    f_Eas      2           0           [-100, 100]
    f_Col      4           0           [-100, 100]
    f_Hyp      30          0           [-100, 100]
    f_Sch      30          0           [-500, 500]

Table II. Initialization range and goal of the functions
    Function   Initialization range   Goal of f
    f_Sph      [50, 100]              10^-3
    f_Ack      [50, 100]              10^-3
    f_Gri      [300, 600]             10^-2
    f_Ros      [50, 100]              10^-3
    f_Ras      [50, 100]              10^-3
    f_Boh      [50, 100]              10^-3
    f_Eas      [50, 100]              10^-2
    f_Col      [50, 100]              10^-3
    f_Hyp      [50, 100]              10^-3
    f_Sch      [250, 500]             10^-3

Table III. Parameters of the mPSO algorithm
    c_1    c_2    Ex     En     He
    1.6    1.6    1.0    0.02   0.003

For the sPSO algorithm, the parameters are set to c_1 = c_2 = 1.8. The inertia weight is scaled linearly from 1.0 to 0.4 over the maximum of 2000 iterations. A population of particles is uniformly distributed in the initialization range, and velocities uniformly distributed in the initialization range are assigned to each particle. The PSO structure implemented is also fully connected. The experimental results are reported in Tables IV and V.
To evaluate the performance of the proposed mPSO algorithm, the average gbest and the standard deviation of the function values found in 50 runs of the mPSO algorithm and the sPSO algorithm on each test function are listed in Table IV. As the table shows, the mPSO algorithm outperforms the sPSO algorithm on all the benchmark problems on which experiments have been conducted so far: it achieves smaller fitness values and smaller standard deviations, and the standard deviation indicates the robustness of the mPSO algorithm. Table V gives the average, minimum and maximum numbers of iterations that the mPSO algorithm and the sPSO algorithm needed to reach the goal in 50 runs. From the simulations, it can be seen that the mPSO algorithm converges faster and finds a better best solution. It can be concluded that the introduction of the normal cloud model not only accelerates convergence but also improves the solution quality.

Table IV. The comparison results of the algorithms after 2000 iterations
    Function   Algorithm   Average        Std.
    f_Sph      mPSO        4.6382E-018    3.2467E-018
               sPSO        1.5952E-007    6.2310E-007
    f_Ack      mPSO        5.5796E-005    2.9805E-005
               sPSO        19.5760        3.6978
    f_Gri      mPSO        3.7748E-016    2.6423E-016
               sPSO        0.0155         0.0185
    f_Ros      mPSO        0              0
               sPSO        597.7322       1.3618E+003
    f_Ras      mPSO        3.2373E-011    3.0176E-011
               sPSO        88.2172        28.7097
    f_Boh      mPSO        0              0
               sPSO        0              0
    f_Eas      mPSO        0.0200         0.1400
               sPSO        0.5600         0.4964
    f_Col      mPSO        0              0
               sPSO        7.3418E-004    7.7795E-004
    f_Hyp      mPSO        5.9360E-020    6.2510E-020
               sPSO        4.3589E-007    2.7298E-006
    f_Sch      mPSO        3.8183E-004    2.0143E-004
               sPSO        -6.0905E+007   4.2633E+008

Table V. The comparison results of the number of iterations needed to reach the goal
    Function   Algorithm   Average        Minimum   Maximum
    f_Sph      mPSO        354.54         139       439
               sPSO        1.0782E+003    1031      1148
    f_Ack      mPSO        644.20         525       759
               sPSO        -              -         -
    f_Gri      mPSO        600.72         176       730
               sPSO        1.2970E+003    1252      1415
    f_Ros      mPSO        638.12         152       889
               sPSO        -              -         -
    f_Ras      mPSO        644.18         398       775
               sPSO        -              -         -
    f_Boh      mPSO        453.12         368       515
               sPSO        556.88         447       623
    f_Eas      mPSO        50.06          19        847
               sPSO        -              -         -
    f_Col      mPSO        363.66         91        612
               sPSO        1.3690E+003    857       1784
    f_Hyp      mPSO        510.30         116       706
               sPSO        1.4511E+003    1396      1533
    f_Sch      mPSO        385.70         213       635
               sPSO        -              -         -

V. Conclusion
In this paper, on the basis of the standard particle swarm optimization algorithm together with the cloud model, a new methodology with promising new features is presented. The initialization of the population and the velocity update rule of the particle swarm optimization algorithm are modified. The mPSO possesses fast search speed, efficiency and good global search behavior when tackling complex optimization problems, and it is very simple to implement. Experimental results indicate that the mPSO significantly improves the search performance on the benchmark functions.
References
[1] J. Kennedy, R.C. Eberhart, "Particle swarm optimization," Proc. IEEE Int'l Conf. on Neural Networks, Perth, Australia, IEEE Service Center, Piscataway, NJ, pp. 1942-1948, Nov. 1995.
[2] R.C. Eberhart, J. Kennedy, "A new optimizer using particle swarm theory," Proc. Sixth Int'l Symposium on Micro Machine and Human Science, Nagoya, Japan, IEEE Service Center, Piscataway, NJ, pp. 39-43, 1995.
[3] Y. Shi, R.C. Eberhart, "A modified particle swarm optimizer," Proc. IEEE Int'l Conf. on Evolutionary Computation, Anchorage, Alaska, pp. 69-73, May 1998.
[4] M. Clerc, "The swarm and the queen: towards the deterministic and adaptive particle swarm optimization," Proc. Congress on Evolutionary Computation, Washington DC, USA, IEEE Service Center, Piscataway, NJ, pp. 1951-1957, 1999.
[5] R.C. Eberhart, Y. Shi, "Comparing inertia weights and constriction factors in particle swarm optimization," Proc. Congress on Evolutionary Computation, San Diego, USA, IEEE Service Center, Piscataway, NJ, pp. 84-89, 2000.
[6] M. Senthil Arumugam, M.V.C. Rao, "On the improved performances of the particle swarm optimization algorithms with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal control of a class of hybrid systems," Applied Soft Computing, vol. 8, no. 1, pp. 324-336, Jan. 2008.
[7] Y. Shi, R.C. Eberhart, "Empirical study of particle swarm optimization," Proc. IEEE Int'l Congress on Evolutionary Computation, vol. 3, pp. 101-106, 1999.
[8] Y.H. Shi, R.C. Eberhart, "Fuzzy adaptive particle swarm optimization," Proc. Congress on Evolutionary Computation, Seoul, Korea, pp. 101-106, 2001.
[9] D.Y. Li, "Uncertainty reasoning based on cloud models in controllers," Computers and Mathematics with Applications, pp. 399-421, 1996.
[10] D.Y. Li, H.J. Meng, X.M. Shi, "Membership clouds and membership cloud generators," Journal of Computer Research and Development, vol. 32, no. 6, pp. 15-20, 1995.
[11] L.B. Zhang, F.C. Sun, Z.Q. Sun, "Cloud model-based controller design for flexible-link manipulators," Proc. IEEE Conf. on Robotics, Automation and Mechatronics, pp. 1-5, Dec. 2006.
[12] C.H. Xie, Y. Li, J. Shen, J.J. Cai, J.L. L, "A constructing algorithm of concept lattice with attribute generalization based on cloud models," Proc. IEEE 5th Int'l Conf. on Computer and Information Technology, IEEE Service Center, Piscataway, NJ, pp. 110-114, Sept. 2005.
[13] R. Brits, A.P. Engelbrecht, F. van den Bergh, "Locating multiple optima using particle swarm optimization," Applied Mathematics and Computation, no. 189, pp. 1859-1883, 2007.
[14] F. van den Bergh, A.P. Engelbrecht, "A study of particle swarm optimization particle trajectories," Information Sciences, no. 176, pp. 937-971, 2006.