Discrete Optimization

Scheduling flow shops using differential evolution algorithm

Godfrey Onwubolu *, Donald Davendra
Department of Engineering, The University of the South Pacific, P.O. Box 1168, Suva, Fiji

Received 17 January 2002; accepted 5 August 2004
Available online 21 November 2004

European Journal of Operational Research 171 (2006) 674–692
doi:10.1016/j.ejor.2004.08.043

* Corresponding author. Tel.: +679 212034; fax: +679 302567. E-mail address: onwubolu_g@usp.ac.fj (G. Onwubolu).

Abstract

This paper describes a novel optimization method based on a differential evolution (exploration) algorithm and its applications to solving non-linear programming problems containing integer and discrete variables. The techniques for handling discrete variables are described, as well as the techniques needed to handle boundary constraints. In particular, the application of the differential evolution algorithm to minimization of makespan, flowtime and tardiness in a flow shop manufacturing system is given in order to illustrate the capabilities and the practical use of the method. Experiments were carried out to compare results from the differential evolution algorithm and the genetic algorithm, which has a reputation for being very powerful. The results obtained have proven satisfactory in solution quality when compared with the genetic algorithm. The novel method requires few control variables, is relatively easy to implement and use, effective, and efficient, which makes it an attractive and widely applicable approach for solving practical engineering problems. Future directions in terms of research and applications are given.
© 2004 Elsevier B.V. All rights reserved.

Keywords: Scheduling; Flow shops; Differential evolution algorithm; Optimization

1. Introduction

In general, when discussing non-linear programming, the variables of the objective function are usually assumed to be continuous. However, in practical real-life engineering applications it is common for the problem variables under consideration to be discrete or integer values. Real-life, practical engineering optimization problems are commonly integer or discrete because the available values are limited to a set of commercially available standard sizes. For example, the number of automated guided vehicles, the number of unit loads, and the number of storage units in a warehouse operation are integer variables, while the size of a pallet, the size of a billet for a machining operation, etc., are often limited to a set of commercially available standard sizes. Another class of interesting optimization problem is finding the best order or sequence in which jobs have to be machined. None of these engineering problems has a continuous objective function; rather, each has either an integer or a discrete objective function. In this paper we deal with the scheduling of jobs in a flow shop manufacturing system.

The flow shop scheduling problem is a production planning problem in which n jobs have to be processed in the same sequence on m machines. The assumptions are that there are no machine breakdowns and that all jobs are pre-emptive. This is commonly the case in many manufacturing systems where jobs are transferred from machine to machine by some kind of automated material handling system. For large problem instances, typical of practical manufacturing settings, most researchers have focused on developing heuristic procedures that yield near-optimal solutions within a reasonable computation time.
Most of these heuristic procedures focus on the development of permutation schedules and use makespan as a performance measure. Some of the well-known scheduling heuristics reported in the literature include Palmer (1965), Campbell et al. (1970), Gupta (1971), Dannenbring (1977), Hundal and Rajagopal (1988) and Ho and Chang (1991). Cheng and Gupta (1989) and Baker and Scudder (1990) presented comprehensive surveys of research work done in flow shop scheduling.

In recent years, a growing body of literature suggests the use of heuristic search procedures for combinatorial optimization problems. Several search procedures that have been identified as having great potential to address practical optimization problems include simulated annealing (Kirkpatrick et al., 1983), genetic algorithms (Goldberg, 1989), tabu search (Glover, 1989, 1990), and ant colony optimization (Dorigo, 1992). Consequently, over the past few years, several researchers have demonstrated the applicability of these methods to combinatorial optimization problems such as flow shop scheduling (see, for example, Widmer and Hertz, 1989; Ogbu and Smith, 1990; Taillard, 1990; Chen et al., 1995; Onwubolu, 2000). More recently, a novel optimization method based on the differential evolution (exploration) algorithm (Storn and Price, 1995) has been developed, which originally focused on solving non-linear programming problems containing continuous variables. Since Storn and Price (1995) invented the differential evolution (exploration) algorithm, the challenge has been to employ the algorithm in areas other than those the inventors originally focused on. Although application of DE to combinatorial optimization problems encountered in engineering is scarce, researchers have used DE to design complex digital filters (Storn, 1999), and to design mechanical elements such as gear trains, pressure vessels and springs (Lampinen and Zelinka, 1999).

This paper presents a new approach based on the differential evolution algorithm for solving the problem of scheduling n jobs on m machines when all jobs are available for processing and the objective is to minimize the makespan. Other objective functions considered in the present work include mean flowtime and total tardiness.

2. Problem formulation

A flow shop is one in which all jobs must visit machines or work centers in the same sequence. Processing of a job must be completed on the current machine before processing of the job is started on the succeeding machine. This means that initially all jobs are available and that each machine is restricted to processing only one job at any particular time. Since the first machine in the facility arrangement is the first to be visited by each job, the other machines are idle and other jobs are queued. Although queuing of jobs is prohibited in just-in-time (JIT) manufacturing environments, flow shop manufacturing continues to find applications in electronics manufacturing and space shuttle processing, and has attracted much research work (Onwubolu, 2002). The flow shop can be formulated generally as the sequencing of n jobs on m machines under the precedence condition, with typical objective functions being minimizing average flowtime, minimizing the time required to complete all jobs (the makespan), minimizing maximum tardiness, and minimizing the number of tardy jobs.

If the number of jobs is relatively small, then the problem can be solved without any generic optimizing algorithm: every possibility can be checked to obtain results and then sequentially compared to capture the optimum value. But, more often, the number of jobs to be processed is large, which leads to a search space of order n!. Consequently, some kind of algorithm is essential in this type of problem to avoid combinatorial explosion.

The standard three-field notation (Lawler et al., 1995) for representing a scheduling problem is a | b | F(C), where a describes the machine environment, b describes the deviations from standard scheduling assumptions, and F(C) describes the objective C being optimized. In the work reported in this paper, we are solving the n/m/F || F(C_max) problem. Other problems solved include F(C) = F(Σ C_i) and F(C) = F(Σ T_j). Here a = n/m/F describes the multiple-machines flow shop problem, b = null, and F(C) = F(C_max, Σ C_i, and Σ T_j) for makespan, mean flowtime, and total tardiness, respectively. Stating these problem descriptions more elaborately, the minimization of completion time (makespan) for a flow shop schedule is equivalent to minimizing the objective function I:

I = Σ_{j=1}^{n} C_{m,j},   (1)

s.t. C_{i,j} = max(C_{i-1,j}, C_{i,j-1}) + P_{i,j},   (2)

where C_{m,j} is the completion time of job j; C_{1,1} = k (any given value); C_{1,j} = Σ_{k=1}^{j} P_{1,k}; C_{i,1} = Σ_{k=1}^{i} P_{k,1}; i is the machine number; j is the job in sequence; and P_{i,j} is the processing time of job j on machine i. For a given sequence, the mean flowtime is MFT = (1/n) Σ_{i=1}^{m} Σ_{j=1}^{n} c_{ij}, while the condition for tardiness is c_{m,j} > d_j. The constraint of Eq. (2) applies to these two problem descriptions.

3. Differential evolution

The differential evolution (exploration) [DE] algorithm introduced by Storn and Price (1995) is a novel parallel direct search method which utilizes NP parameter vectors as a population for each generation G. DE can be categorized into the class of floating-point encoded evolutionary optimization algorithms. Currently, there are several variants of DE. The particular variant used throughout this investigation is the DE/rand/1/bin scheme. This scheme is discussed here; more detailed descriptions are provided in Storn and Price (1995). Since the DE algorithm was originally designed to work with continuous variables, the optimization of continuous problems is discussed first. Handling discrete variables is explained later.

Generally, the function to be optimized, I, is of the form I(X): R^D → R. The optimization target is to minimize the value of this objective function I(X),

min(I(X)),   (3)

by optimizing the values of its parameters X = {x_1, x_2, ..., x_D}, X ∈ R^D
, where X denotes a vector composed of D objective function parameters. Usually, the parameters of the objective function are also subject to lower and upper boundary constraints, x^(L) and x^(U), respectively:

x_j^(L) ≤ x_j ≤ x_j^(U)   ∀j ∈ [1, D].   (4)

3.1. Initialization

As with all evolutionary optimization algorithms, DE works with a population of solutions, not with a single solution for the optimization problem. Population P of generation G contains NP solution vectors called individuals of the population, and each vector represents a potential solution for the optimization problem:

P^(G) = X_i^(G) = x_{j,i}^(G),  i = 1, ..., NP;  j = 1, ..., D;  G = 1, ..., G_max.   (5)

In order to establish a starting point for optimum seeking, the population must be initialized. Often there is no more knowledge available about the location of a global optimum than the boundaries of the problem variables. In this case, a natural way to initialize the population P^(0) (the initial population) is to seed it with random values within the given boundary constraints:

P^(0) = x_{j,i}^(0) = x_j^(L) + rand_j[0,1] × (x_j^(U) − x_j^(L))   ∀i ∈ [1, NP], ∀j ∈ [1, D],   (6)

where rand_j[0,1] represents a uniformly distributed random value in the range from zero to one.

3.2. Mutation

The self-referential population recombination scheme of DE differs from those of other evolutionary algorithms. From the first generation onward, the population of the subsequent generation, P^(G+1), is obtained on the basis of the current population P^(G). First a temporary or trial population of candidate vectors for the subsequent generation, P'^(G+1) = V^(G+1) = v_{j,i}^(G+1), is generated as follows:

v_{j,i}^(G+1) = x_{j,r3}^(G) + F × (x_{j,r1}^(G) − x_{j,r2}^(G)), if rand_j[0,1] < CR ∨ j = k;  x_{j,i}^(G), otherwise,   (7)

where i ∈ [1, NP]; j ∈ [1, D]; r1, r2, r3 ∈ [1, NP] are randomly selected, except that r1 ≠ r2 ≠ r3 ≠ i; k = (int(rand_i[0,1] × D) + 1); and CR ∈ [0, 1], F ∈ (0, 1].

Three randomly chosen indexes, r1, r2, and r3, refer to three randomly chosen vectors of the population. They are mutually different from each other and also different from the running index i. New random values for r1, r2, and r3 are assigned for each value of index i (for each vector). A new value for the random number rand[0,1] is assigned for each value of index j (for each vector parameter).

3.3. Crossover

The index k refers to a randomly chosen vector parameter and is used to ensure that at least one vector parameter of each individual trial vector V^(G+1) differs from its counterpart in the previous generation X^(G). A new random integer value is assigned to k for each value of the index i (prior to the construction of each trial vector).

F and CR are DE control parameters. Both values, as well as the third control parameter, NP (the population size), remain constant during the search process. F is a real-valued factor in the range [0.0, 1.0] that controls the amplification of differential variations. CR is a real-valued crossover factor in the range [0.0, 1.0] that controls the probability that a trial vector parameter will be selected from the randomly chosen, mutated vector, v_{j,i}^(G+1), instead of from the current vector, x_{j,i}^(G). Generally, both F and CR affect the convergence rate and robustness of the search process. Their optimal values depend both on objective function characteristics and on the population size, NP. Usually, suitable values for F, CR and NP can be found by experimentation after a few tests using different values. Practical advice on how to select the control parameters NP, F and CR can be found in Storn and Price (1995, 1997).

3.4. Selection

The selection scheme of DE also differs from that of other evolutionary algorithms. On the basis of the current population P^(G) and the temporary population P'^(G+1), the population of the next generation P^(G+1) is created as follows:

X_i^(G+1) = V_i^(G+1), if I(V_i^(G+1)) ≤ I(X_i^(G));  X_i^(G), otherwise.   (8)

Thus, each individual of the temporary or trial population is compared with its counterpart in the current population. The one with the lower value of the cost function I(X) to be minimized will propagate to the population of the next generation. As a result, all the individuals of the next generation are as good as or better than their counterparts in the current generation. The interesting point concerning the DE selection scheme is that a trial vector is compared only to one individual vector, not to all the individual vectors in the current population.

3.5. Boundary constraints

It is important to notice that the recombination operation of DE is able to extend the search outside of the initialized range of the search space (Eqs. (6) and (7)). It is also worthwhile to notice that this is sometimes a beneficial property in problems with no boundary constraints, because it is possible to find an optimum that is located outside of the initialized range. However, in boundary-constrained problems it is essential to ensure that parameter values lie inside their allowed ranges after recombination. A simple way to guarantee this is to replace parameter values that violate boundary constraints with random values generated within the feasible range:

u_{j,i}^(G+1) = x_j^(L) + rand_j[0,1] × (x_j^(U) − x_j^(L)), if u_{j,i}^(G+1) < x_j^(L) ∨ u_{j,i}^(G+1) > x_j^(U);  u_{j,i}^(G+1), otherwise,   (9)

where i ∈ [1, NP]; j ∈ [1, D]. This is the method used in this work. Another simple but less efficient method is to reproduce the boundary-constraint-violating values according to Eq. (7) as many times as is necessary to satisfy the boundary constraints. Yet another simple method, which allows bounds to be approached asymptotically while minimizing the amount of disruption that results from resetting out-of-bounds values (Price, 1999), is

u_{j,i}^(G+1) = (x_{j,i}^(G) + x_j^(L))/2, if u_{j,i}^(G+1) < x_j^(L);  (x_{j,i}^(G) + x_j^(U))/2, if u_{j,i}^(G+1) > x_j^(U);  u_{j,i}^(G+1), otherwise.   (10)

3.6. Conventional technique for integer and discrete optimization by
DE

Several approaches have been used to deal with discrete variable optimization. Most of them round off the variable to the nearest available value before evaluating each trial vector. To keep the population robust, successful trial vectors must enter the population with all of the precision with which they were generated (Storn and Price, 1997).

In its canonical form, the differential evolution algorithm is only capable of handling continuous variables. Extending it for the optimization of integer variables, however, is rather easy. Lampinen and Zelinka (1999) discuss how to modify DE for mixed variable optimization. They suggest that only a couple of simple modifications are required. First, integer values should be used to evaluate the objective function, even though DE itself may still work internally with continuous floating-point values. Thus,

I(y_i),  i ∈ [1, D],   (11)

where y_i = x_i for continuous variables and y_i = INT(x_i) for integer variables, x_i ∈ X. INT() is a function for converting a real value to an integer value by truncation. Truncation is performed here only for the purpose of cost-function value evaluation; truncated values are not assigned elsewhere. Thus, DE works with a population of continuous variables regardless of the corresponding object variable type. This is essential for maintaining the diversity of the population and the robustness of the algorithm.
Second, in the case of integer variables, instead of Eq. (6) the population should be initialized as follows:

P^(0) = x_{j,i}^(0) = x_j^(L) + rand_j[0,1] × (x_j^(U) − x_j^(L) + 1)   ∀i ∈ [1, NP], ∀j ∈ [1, D].   (12)

Additionally, instead of Eq. (9), boundary constraint handling for integer variables should be performed as follows:

u_{j,i}^(G+1) = x_j^(L) + rand_j[0,1] × (x_j^(U) − x_j^(L) + 1), if INT(u_{j,i}^(G+1)) < x_j^(L) ∨ INT(u_{j,i}^(G+1)) > x_j^(U);  u_{j,i}^(G+1), otherwise,   (13)

where i ∈ [1, NP]; j ∈ [1, D].

They also discuss how discrete values can be handled in a straightforward manner. Suppose that the subset of discrete variables, X^(d), contains l elements that can be assigned to variable x:

X^(d) = x_i^(d),  i ∈ [1, l],   (14)

where x_i^(d) < x_{i+1}^(d).

Instead of the discrete value x_i itself, we may assign its index, i, to x. Now the discrete variable can be handled as an integer variable that is boundary constrained to the range 1, ..., l. To evaluate the objective function, the discrete value, x_i, is used instead of its index i. In other words, instead of optimizing the value of the discrete variable directly, we optimize the value of its index i. Only during evaluation is the indicated discrete value used. Once the discrete problem has been converted into an integer one, the previously described methods for handling integer variables can be applied (Eqs. (11)–(13)).

3.7. Forward transformation and backward transformation technique

The problem formulation was already discussed in Section 2. Solving the flow shop scheduling problem, and indeed most combinatorial optimization problems, requires discrete variables and an ordered sequence rather than relative position indexing. To achieve this, we developed two strategies known as the forward and backward transformation techniques, respectively. In this paper, we present a forward transformation method for transforming integer variables into continuous variables for the internal representation of vector values, since in its canonical form the DE algorithm is only capable of handling continuous variables.
We also present a backward transformation method for transforming a population of continuous variables obtained after mutation back into integer variables for evaluating the objective function (Onwubolu, 2001). Both forward and backward transformations are utilized in implementing the DE algorithm used in the present study for the flow shop scheduling problem. Fig. 1 shows how to deal with this inherent representational problem in DE. Level 0 deals with integer numbers (which are used in discrete problems). At this level, initialization and final solutions are catered for. In the problem domain areas of scheduling, TSP, etc., we are not only interested in computing the objective function cost; we are also interested in the proper order of jobs or cities, respectively. Level 1 of Fig. 1 deals with floating-point numbers, which are suited to DE. At this level, the DE operators (mutation, crossover, and selection) take place. Transforming the integers at level 0 into floating-point numbers at level 1 for DE's operators requires some specific kind of coding. This type of coding is widely used in mathematics and computing science. For the basics of transforming an integer number into its real-number equivalent, interested readers may refer to Michalewicz (1994), and to Onwubolu and Kumalo (2001) for its application to optimizing machining operations using genetic algorithms.

3.7.1. Forward transformation (from integer to real number)

In integer variable optimization, a set of integer numbers is normally generated randomly as an initial solution. Let this set of integer numbers be represented as

z'_i ∈ z'.   (15)

Let the real (floating-point) equivalent of this integer number be z_i. The length of the real number depends on the required precision, which in our case we have chosen as two places after the decimal point. The domain of the variable z_i has length equal to 5; the precision requirement implies that the range be [0...4]. Although 0 is considered, since it is not a feasible solution the range [0.1, 1, 2, 3, 4] is chosen, which gives a range of 5. We assign each feasible solution two decimal places, and this gives us 5 × 100 = 500. Accordingly, the equivalent continuous variable for z'_i is bracketed by

100 = 10^2 < 5 × 10^2 ≤ 10^3 = 1000.   (16)

The mapping from an integer number to a real number z_i for the given range is now straightforward, given as

z_i = −1 + z'_i × 5/(10^3 − 1).   (17)

Eq. (17) results in most conversion values being negative; this does not create any accuracy problem. After some studies by Onwubolu (2001), the scaling factor f = 100 was found to be adequate for converting virtually all integer numbers into their equivalent positive real numbers. Applying this scaling factor f = 100 gives

z_i = −1 + z'_i × f × 5/(10^3 − 1) = −1 + z'_i × 500/(10^3 − 1).   (18)

Eq. (18) is used to transform any integer variable into an equivalent continuous variable, which is then used for the DE internal representation of the population of vectors. Without this transformation it is not possible to make useful moves towards the global optimum in the solution space using the mutation mechanism of DE, which works better on continuous variables. For example, in a five-job scheduling problem, suppose the sequence is given as {2, 4, 3, 1, 5}. This sequence is not directly used in the DE internal representation. Rather, applying Eq. (18), the sequence is transformed into a continuous form. The floating-point equivalent of the first entry of the given sequence, z'_i = 2, is z_i = −1 + 2 × 500/(10^3 − 1) = 0.001001. The other values are similarly obtained, and the sequence is therefore represented internally in the DE scheme as {0.001001, 1.002, 0.501502, −0.499499, 1.5025}.

3.7.2. Backward transformation (from real number to integer)

Integer variables are used to evaluate the objective function. The DE self-referential population mutation scheme
is quite unique. After the mutation of each vector, the trial vector is evaluated for its objective function in order to decide whether or not to retain it. This means that the objective function values of the current vectors in the population also need to be evaluated. These vector variables are continuous (from the forward transformation scheme) and have to be transformed into their integer equivalents. The backward transformation technique is used for converting floating-point numbers to their integer equivalents. The scheme is given as follows:

z'_i = (1 + z_i) × (10^3 − 1)/500.   (19)

In this present form the backward transformation function is not able to properly discriminate between variables. To ensure that each number is discrete and unique, some modifications are required, as follows:

a = int(z'_i + 0.5),   (20)

b = a − z'_i,   (21)

z*_i = a − 1, if b > 0.5;  a, if b < 0.5.   (22)

Eq. (22) gives z*_i, which is the transformed value used for computing the objective function. It should be mentioned that the conversion scheme of Eq. (19), which transforms real numbers after DE operations into integer numbers, is not sufficient to avoid duplication; hence the steps highlighted in Eqs. (20)–(22) are important. In our studies, these modifications ensure that after the mutation, crossover and selection operations, the floating-point numbers converted into their integer equivalents in the set of jobs for a new scheduling solution, or the set of cities for a new TSP solution, etc., are not duplicated. As an example, consider a trial vector, z_i = {−0.33, 0.67, −0.17, 1.5, 0.84}, obtained after mutation. The integer values corresponding to the trial vector values are obtained using Eq. (22) as follows:

z'_1 = (1 − 0.33) × (10^3 − 1)/500 = 1.33866,
z'_2 = (1 + 0.67) × (10^3 − 1)/500 = 3.3367,
z'_3 = (1 − 0.17) × (10^3 − 1)/500 = 1.65834,
z'_4 = (1 + 1.50) × (10^3 − 1)/500 = 4.9950,
z'_5 = (1 + 0.84) × (10^3 − 1)/500 = 3.67632,

a_1 = int(1.33866 + 0.5) = 2;  b_1 = 2 − 1.33866 = 0.66134 > 0.5;  z*_1 = 2 − 1 = 1,
a_2 = int(3.3367 + 0.5) = 4;  b_2 = 4 − 3.3367 = 0.6633 > 0.5;  z*_2 = 4 − 1 = 3,
a_3 = int(1.65834 + 0.5) = 2;  b_3 = 2 − 1.65834 = 0.34166 < 0.5;  z*_3 = 2,
a_4 = int(4.995 + 0.5) = 5;  b_4 = 5 − 4.995 = 0.005 < 0.5;  z*_4 = 5,
a_5 = int(3.67632 + 0.5) = 4;  b_5 = 4 − 3.67632 = 0.32368 < 0.5;  z*_5 = 4.

This can be represented schematically as shown in Fig. 2. The set of integer values is given as z*_i = {1, 3, 2, 5, 4}. This set is used to obtain the objective function values.

As in GA, after the mutation, crossover, and boundary-checking operations, the trial vector obtained from the backward transformation is checked repeatedly until a feasible solution is found. Hence it is not necessary to worry about the ordered sequence, which is crucially important in the type of combinatorial optimization problems we are concerned with. Feasible solutions constitute about 10–15% of the total trial vectors.

3.8. DE strategies

Price and Storn (2001) have suggested ten different working strategies of DE and some guidelines for applying these strategies to any given problem. Different strategies can be adopted in the DE algorithm depending upon the type of problem to which it is applied. Table 1 shows the ten different working strategies proposed by Price and Storn (2001). The general convention used in Table 1 is DE/x/y/z, where DE stands for differential evolution algorithm, x represents a string denoting the vector to be perturbed, y is the number of difference vectors considered for perturbation of x, and z is the type of crossover being used (exp: exponential; bin: binomial).
Thus, the working algorithm outlined by Storn and Price (1997) is the seventh strategy of DE, that is, DE/rand/1/bin. Hence the perturbation can be either in the best vector of the previous generation or in any randomly chosen vector. Similarly, for perturbation, either single or two vector differences can be used. For perturbation with a single vector difference, out of the three distinct randomly chosen vectors, the weighted vector differential of any two vectors is added to the third one. Similarly, for perturbation with two vector
Differential Evolution (DE)

Differential Evolution (DE) was proposed by Storn et al. in 1995. Like other evolutionary algorithms, DE simulates biological evolution: through repeated iteration, the individuals best adapted to their environment are retained.
Compared with classical evolutionary algorithms, however, DE retains the population-based global search strategy while using real-number encoding, a simple difference-based mutation operation, and a one-to-one competitive survival strategy, which reduces the complexity of the genetic operations.
At the same time, DE's distinctive memory allows it to dynamically track the current state of the search and adjust its strategy accordingly. It has strong global convergence ability and robustness, needs no problem-specific feature information, and is suitable for solving complex optimization problems that conventional mathematical programming methods cannot handle.
DE has by now been applied in many fields, such as artificial neural networks, chemical engineering, electric power, mechanical design, robotics, signal processing, bioinformatics, economics, modern agriculture, food safety, environmental protection, and operations research.
The DE algorithm is mainly used for solving global optimization problems. Its main working steps are essentially the same as those of other evolutionary algorithms, comprising three operations: mutation, crossover, and selection.
The basic idea of the algorithm is to start from some randomly generated initial population and use the difference vector of two individuals randomly selected from the population as the source of random variation for a third individual: the weighted difference vector is added to the third individual according to certain rules to produce a mutant individual. This operation is called mutation.
Then the mutant individual has its parameters mixed with those of a predetermined target individual to generate a trial individual; this process is called crossover.
If the fitness value of the trial individual is better than that of the target individual, the trial individual replaces the target individual in the next generation; otherwise the target individual is kept. This operation is called selection.
In each generation of the evolutionary process, every individual vector serves as the target individual exactly once. Through repeated iteration the algorithm retains good individuals, eliminates inferior ones, and guides the search toward the global optimum.
C implementation of the algorithm:

/********************************************************/
/* DE/rand/1/bin -- differential evolution (basic form) */
/********************************************************/

#include <stdlib.h>
#include <stdio.h>
#include <time.h>
#include <float.h>

/* Function declarations */
double func(double *);
int usage(char *);

/* Random number generator defined by URAND should return
   double-precision floating-point values uniformly distributed
   over the interval [0.0, 1.0) */
#define URAND ((double)rand()/((double)RAND_MAX + 1.0))

/* Definition for random number generator initialization */
#define INITRAND srand(time(0))

/* Usage for the program */
int usage(char *str)
{
   fprintf(stderr, "Usage: %s [-h] [-u] [-s] [-N NP (20*D)] ", str);
   fprintf(stderr, "[-G Gmax (1000)]\n");
   fprintf(stderr, "\t[-C crossover constant, CR (0.9)]\n");
   fprintf(stderr, "\t[-F mutation scaling factor, F (0.9)]\n");
   fprintf(stderr, "\t[-o <outputfile>]\n\n");
   fprintf(stderr, "\t-s does not initialize random number generator\n");
   exit(-1);
}

int main(int argc, char **argv)
{
   register int i, j, k, r1, r2, r3, jrand, numofFE = 0;
   extern int D;
   extern double Xl[], Xu[];
   int NP = 20 * D, Gmax = 1000, c, index = -1, s = 1;
   double **popul, **next, **ptr, *iptr, *U, CR = 0.9, F = 0.9,
          min_value = DBL_MAX, totaltime = 0.0;
   char *ofile = NULL;
   FILE *fid;
   clock_t starttime, endtime;

   /* Parse command line arguments given by user */
   for (i = 1; i < argc; i++)
   {
      if (argv[i][0] != '-')
         usage(argv[0]);

      c = argv[i][1];
      switch (c)
      {
         case 'N':
            if (++i >= argc) usage(argv[0]);
            NP = atoi(argv[i]);
            break;
         case 'G':
            if (++i >= argc) usage(argv[0]);
            Gmax = atoi(argv[i]);
            break;
         case 'C':
            if (++i >= argc) usage(argv[0]);
            CR = atof(argv[i]);
            break;
         case 'F':
            if (++i >= argc) usage(argv[0]);
            F = atof(argv[i]);
            break;
         case 'o':
            if (++i >= argc) usage(argv[0]);
            ofile = argv[i];
            break;
         case 's':          /* Flag for using the same seeds  */
            s = 0;          /* across different runs          */
            break;
         case 'h':
         case 'u':
         default:
            usage(argv[0]);
      }
   }

   if (s) INITRAND;

   /* Printing out information about the optimization process */
   printf("Program parameters: ");
   printf("NP = %d, Gmax = %d, CR = %.2f, F = %.2f\n", NP, Gmax, CR, F);
   printf("Dimension of the problem: %d\n", D);

   /* Starting timer */
   starttime = clock();

   /* Allocating memory for the current and next populations,
      initializing the current population with uniformly distributed
      random values, and evaluating the objective function.
      NP: population size, Gmax: number of generations,
      CR: crossover probability, F: scaling factor of the
      difference vector */

   /* current population */
   popul = (double **)malloc(NP * sizeof(double *));
   if (popul == NULL) perror("malloc");

   /* next-generation population */
   next = (double **)malloc(NP * sizeof(double *));
   if (next == NULL) perror("malloc");

   /* current population popul[NP][D+1] */
   for (i = 0; i < NP; i++)
   {
      /* allocate space for the individual's variables */
      popul[i] = (double *)malloc((D + 1) * sizeof(double));
      if (popul[i] == NULL) perror("malloc");

      /* initialize the variables */
      for (j = 0; j < D; j++)
         popul[i][j] = Xl[j] + (Xu[j] - Xl[j]) * URAND;

      /* the last element stores the individual's fitness value */
      popul[i][D] = func(popul[i]);
      numofFE++;    /* count objective function evaluations */

      /* allocate space for the next-generation individual */
      next[i] = (double *)malloc((D + 1) * sizeof(double));
      if (next[i] == NULL) perror("malloc");
   }

   /* Allocating memory for a trial vector U */
   U = (double *)malloc((D + 1) * sizeof(double));
   if (U == NULL) perror("malloc");

   /* The main loop of the algorithm */
   for (k = 0; k < Gmax; k++)
   {
      for (i = 0; i < NP; i++)   /* Going through the whole population */
      {
         /* 1. Select random indices r1, r2, and r3 of individuals
            of the population such that i != r1 != r2 != r3 */
         do {
            r1 = (int)(NP * URAND);
         } while (r1 == i);
         do {
            r2 = (int)(NP * URAND);
         } while (r2 == i || r2 == r1);
         do {
            r3 = (int)(NP * URAND);
         } while (r3 == i || r3 == r1 || r3 == r2);

         jrand = (int)(D * URAND);

         /* 2. Mutation and binomial crossover */
         for (j = 0; j < D; j++)
         {
            if (URAND < CR || j == jrand)
               /* trial component comes from the mutant vector */
               U[j] = popul[r3][j] + F * (popul[r1][j] - popul[r2][j]);
            else
               /* trial component comes from individual i */
               U[j] = popul[i][j];
         }

         /* 3. Evaluate the fitness of the newly generated vector */
         U[D] = func(U);
         numofFE++;

         /* Greedy selection: compare the trial vector U with the old
            individual and keep the better one in the next population.
            Note the alternating use of the two buffers. */
         if (U[D] <= popul[i][D])   /* trial vector wins */
         {
            /* pointer swap: next[i] takes over U's storage */
            iptr = U;
            U = next[i];
            next[i] = iptr;
         }
         else                       /* old individual wins: direct copy */
         {
            for (j = 0; j <= D; j++)
               next[i][j] = popul[i][j];
         }
      } /* End of going through the whole population */

      /* Pointers of the old and new populations are swapped */
      ptr = popul;
      popul = next;
      next = ptr;
   } /* End of the main loop */

   /* Stopping timer */
   endtime = clock();
   totaltime = (double)(endtime - starttime);

   /* If the user has defined an output file, the whole final
      population is saved to that file */
   if (ofile != NULL)
   {
      if ((fid = fopen(ofile, "a")) == NULL)
      {
         fprintf(stderr, "Error in opening file %s\n\n", ofile);
         usage(argv[0]);
      }
      for (i = 0; i < NP; i++)
      {
         for (j = 0; j <= D; j++)
            fprintf(fid, "%.15e ", popul[i][j]);
         fprintf(fid, "\n");
      }
      fclose(fid);
   }

   /* Finding the best individual */
   for (i = 0; i < NP; i++)
   {
      if (popul[i][D] < min_value)
      {
         min_value = popul[i][D];
         index = i;
      }
   }

   /* Printing out information about the optimization process */
   printf("Execution time: %.3f s\n", totaltime / (double)CLOCKS_PER_SEC);
   printf("Number of objective function evaluations: %d\n", numofFE);
   printf("Solution:\nValues of variables: ");
   for (i = 0; i < D; i++)
      printf("%.15f ", popul[index][i]);
   printf("\nObjective function value: ");
   printf("%.15f\n", popul[index][D]);

   /* Freeing dynamically allocated memory */
   for (i = 0; i < NP; i++)
   {
      free(popul[i]);
      free(next[i]);
   }
   free(popul);
   free(next);
   free(U);

   return 0;
}
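The DE listing above expects the optimization problem to be supplied at link time through the external symbols D, Xl, Xu, and func. A minimal, hypothetical problem file (a 2-D sphere function; the dimension and bounds here are illustrative, not from the original article) might look like:

```c
/* problem.c -- hypothetical problem definition linked against the
   DE driver above; D, Xl, Xu and func are the externs it expects. */
int D = 2;                       /* problem dimension          */
double Xl[] = { -5.12, -5.12 };  /* lower bound per variable   */
double Xu[] = {  5.12,  5.12 };  /* upper bound per variable   */

/* Objective: sphere function, global minimum 0 at the origin. */
double func(double *x) {
    double s = 0.0;
    for (int i = 0; i < D; i++)
        s += x[i] * x[i];
    return s;
}
```

Assuming the listing is saved as de.c, compiling both files together (e.g. `cc de.c problem.c -o de`) and running `./de` would then minimize the sphere function with the default parameters.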
Quasi-Oppositional Differential Evolution

Shahryar Rahnamayan, Hamid R. Tizhoosh, Magdy M. A. Salama
Faculty of Engineering, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
Pattern Analysis and Machine Intelligence (PAMI) Research Group
Medical Instrument Analysis and Machine Intelligence (MIAMI) Research Group
shahryar@pami.uwaterloo.ca, tizhoosh@uwaterloo.ca, m.salama@ece.uwaterloo.ca

Abstract—In this paper, an enhanced version of the Opposition-Based Differential Evolution (ODE) is proposed. ODE utilizes opposite numbers in the population initialization and generation jumping to accelerate Differential Evolution (DE). Instead of opposite numbers, in this work, quasi-opposite points are used, so we call the new extension Quasi-Oppositional DE (QODE). The proposed mathematical proof shows that in a black-box optimization problem quasi-opposite points have a higher chance than opposite points to be closer to the solution. A test suite with 15 benchmark functions has been employed to compare the performance of DE, ODE, and QODE experimentally. Results confirm that QODE performs better than ODE and DE overall. Details of the proposed approach and the conducted experiments are provided.

I. INTRODUCTION

Differential Evolution (DE) was proposed by Price and Storn in 1995 [16]. It is an effective, robust, and simple global optimization algorithm [8] which has only a few control parameters. According to frequently reported comprehensive studies [8], [22], DE outperforms many other optimization methods in terms of convergence speed and robustness over common benchmark functions and real-world problems. Generally speaking, all population-based optimization algorithms, DE being no exception, suffer from long computation times because of their evolutionary nature. This crucial drawback sometimes limits their application to off-line problems with little or no real-time constraints.

The concept of opposition-based learning (OBL) was introduced by Tizhoosh [18] and has thus far been applied to accelerate reinforcement
learning [15], [19], [20], backpropagation learning [21], and differential evolution [10]-[12], [14]. The main idea behind OBL is the simultaneous consideration of an estimate and its corresponding opposite estimate (i.e., guess and opposite guess) in order to achieve a better approximation of the current candidate solution. Opposition-based differential evolution (ODE) [9], [10], [14] uses opposite numbers during population initialization and also for generating new populations during the evolutionary process.

In this paper, OBL is utilized to accelerate ODE further: instead of opposite numbers, quasi-opposite points are used. For this reason, we call the new method Quasi-Oppositional DE (QODE); it employs exactly the same schemes as ODE for population initialization and generation jumping. Purely random sampling or selection of solutions from a given population has the chance of visiting or even revisiting unproductive regions of the search space. A mathematical proof has been provided to show that, in general, opposite numbers are more likely to be closer to the optimal solution than purely random ones [13]. In this paper, we prove that quasi-opposite points have a higher chance than opposite points to be closer to the solution. Our experimental results confirm that QODE outperforms DE and ODE.

The organization of this paper is as follows: Differential Evolution, the parent algorithm, is briefly reviewed in Section II. In Section III, the concept of opposition-based learning is explained. The proposed approach is presented in Section IV. Experimental verifications are given in Section V. Finally, the work is concluded in Section VI.

II. DIFFERENTIAL EVOLUTION

Differential Evolution (DE) is a population-based and directed search method [6], [7]. Like other evolutionary algorithms, it starts with an initial population vector, which is randomly generated when no preliminary knowledge about the solution space is available. Let us assume that X_{i,G} (i = 1, 2, ..., N_p) are solution vectors in
generation G (N_p = population size). Successive populations are generated by adding the weighted difference of two randomly selected vectors to a third randomly selected vector.

For classical DE (DE/rand/1/bin), the mutation, crossover, and selection operators are straightforwardly defined as follows:

Mutation - For each vector X_{i,G} in generation G a mutant vector V_{i,G} is defined by

  V_{i,G} = X_{a,G} + F(X_{b,G} - X_{c,G}),   (1)

where i ∈ {1, 2, ..., N_p} and a, b, and c are mutually different random integer indices selected from {1, 2, ..., N_p}. Further, i, a, b, and c are different, so N_p ≥ 4 is required. F ∈ [0, 2] is a real constant which determines the amplification of the added differential variation (X_{b,G} - X_{c,G}). Larger values for F result in higher diversity in the generated population and lower values cause faster convergence.

Crossover - DE utilizes the crossover operation to generate new solutions by shuffling competing vectors and also to increase the diversity of the population. For the classical version of DE (DE/rand/1/bin), the binary crossover (denoted by 'bin' in the notation) is utilized. It defines the following trial vector:

  U_{i,G} = (U_{1i,G}, U_{2i,G}, ..., U_{Di,G}),   (2)

where j = 1, 2, ..., D (D = problem dimension) and

  U_{ji,G} = { V_{ji,G}  if rand_j(0,1) ≤ C_r or j = k,
             { X_{ji,G}  otherwise.   (3)

C_r ∈ (0, 1) is the predefined crossover rate constant, and rand_j(0,1) is the j-th evaluation of a uniform random number generator. k ∈ {1, 2, ..., D} is a random parameter index, chosen once for each i to make sure that at least one parameter is always selected from the mutated vector V_{ji,G}. The most popular values for C_r are in the range (0.4, 1) [3].

Selection - The operation that decides which vector (U_{i,G} or X_{i,G}) should be a member of the next (new) generation G+1. For a maximization problem, the vector with the higher fitness value is chosen. There are other variants based on different mutation and crossover strategies [16].

1-4244-1340-0/07/$25.00 ©2007 IEEE

III. OPPOSITION-BASED LEARNING

Generally speaking, evolutionary optimization methods start with
some initial solutions (initial population) and try to improve them toward some optimal solution(s). The process of searching terminates when some predefined criteria are satisfied. In the absence of a priori information about the solution, we usually start with some random guesses. The computation time, among other things, is related to the distance of these initial guesses from the optimal solution. We can improve our chance of starting with a closer (fitter) solution by simultaneously checking the opposite guesses. By doing this, the fitter one (guess or opposite guess) can be chosen as an initial solution. In fact, according to probability theory, the likelihood that a guess is further from the solution than its opposite guess is 50%. So, starting with the fitter of the two, guess or opposite guess, has the potential to accelerate convergence. The same approach can be applied not only to initial solutions but also continuously to each solution in the current population.

Before concentrating on the quasi-oppositional version of DE, we need to define the concept of opposite numbers [18]:

Definition (Opposite Number) - Let x ∈ [a, b] be a real number. The opposite number x̆ is defined by

  x̆ = a + b - x.   (4)

Similarly, this definition can be extended to higher dimensions as follows [18]:

Definition (Opposite Point) - Let P(x_1, x_2, ..., x_n) be a point in n-dimensional space, where x_1, x_2, ..., x_n ∈ R and x_i ∈ [a_i, b_i] for all i ∈ {1, 2, ..., n}.
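As a sketch (these helper functions are illustrative, not code from the paper), the one-dimensional opposite number of Eq. (4) and the quasi-opposite sampling between the midpoint M = (a + b)/2 and the opposite point can be written as follows, with the uniform draw u ∈ [0, 1) passed in explicitly by the caller:

```c
/* Opposite number of x in [a, b], per Eq. (4). */
double opposite(double x, double a, double b) {
    return a + b - x;
}

/* Quasi-opposite point: a uniform sample on the interval between
   the midpoint M = (a + b) / 2 and the opposite point.  The draw
   u in [0, 1) is supplied by the caller, e.g.
   rand() / (RAND_MAX + 1.0).  Illustrative helper, not from the
   paper's code. */
double quasi_opposite(double x, double a, double b, double u) {
    double xo = a + b - x;      /* opposite point */
    double m  = 0.5 * (a + b);  /* midpoint       */
    return (x < m) ? m + (xo - m) * u
                   : xo + (m - xo) * u;
}
```

For example, with x = 2 in [0, 10], the opposite is 8 and the quasi-opposite point is drawn uniformly from [5, 8].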
The opposite point P̆(x̆_1, x̆_2, ..., x̆_n) is completely defined by its components

  x̆_i = a_i + b_i - x_i.   (5)

As mentioned before, opposition-based differential evolution (ODE) employs opposite points in population initialization and generation jumping to accelerate classical DE. In this paper, in order to enhance ODE, quasi-opposite points are utilized instead of opposite points. Figure 1 and Figure 2 show the interval and the region used to generate these points in one-dimensional and two-dimensional spaces, respectively.

Fig. 1. Illustration of x, its opposite x̆, and the interval [M, x̆] (shown by a dotted arrow) in which the quasi-opposite point x̆_q is generated.

Fig. 2. For a two-dimensional space, the point P, its opposite P̆, and the region (illustrated by shading) in which the quasi-opposite point P̆_q is generated.

Mathematically, we can prove for a black-box optimization problem (meaning the solution can appear anywhere over the search space) that the quasi-opposite point x̆_q has a higher chance than the opposite point x̆ to be closer to the solution. The proof is as follows:

Theorem - Given a guess x, its opposite x̆ and quasi-opposite x̆_q, and given the distance from the solution d(.) and probability function Pr(.), we have

  Pr[d(x̆_q) < d(x̆)] > 1/2.   (6)

Proof - Assume the solution is in one of these intervals: [a, x], [x, M], [M, x̆], [x̆, b]

2007 IEEE Congress on Evolutionary Computation (CEC 2007)

([a, x] ∪ [x, M] ∪ [M, x̆] ∪ [x̆, b] = [a, b]). We investigate all cases:

• [a, x], [x̆, b] - According to the definition of the opposite point, the intervals [a, x] and [x̆, b] have the same length, so the probability that the solution is in [a, x] equals the probability that it is in [x̆, b] ((x - a)/(b - a) = (b - x̆)/(b - a)). Now, if the solution is in [a, x], it is definitely closer to x̆_q, and likewise, if it is in [x̆, b], it is closer to x̆. So far, then, x̆_q and
x̆ have an equal chance to be closer to the solution.

• [M, x̆] - For this case, according to probability theory, x̆_q and x̆ have an equal chance to be closer to the solution.

• [x, M] - For this case, obviously, x̆_q is closer to the solution than x̆.

Overall, we can conclude that x̆_q has a higher chance than x̆ to be closer to the solution, because in the first two cases they had an equal chance and in the last case ([x, M]) x̆_q has a higher chance to be closer to the solution.

This proof is for a one-dimensional space, but the conclusion is the same for higher dimensions:

  Pr[d(P̆_q) < d(P̆)] > 1/2,   (7)

because, by the definition of the Euclidean distance between two points Y(y_1, y_2, ..., y_D) and Z(z_1, z_2, ..., z_D) in a D-dimensional space,

  d(Y, Z) = sqrt( Σ_{i=1}^{D} (y_i - z_i)² ),   (8)

if in each dimension x̆_q has a higher chance than x̆ to be closer to the solution, then the point P̆_q, in a D-dimensional space, will have a higher chance than P̆ to be closer to the solution.

Now let us define an optimization process which uses a quasi-oppositional scheme.

Quasi-Oppositional Optimization - Let P(x_1, x_2, ..., x_D) be a point in a D-dimensional space (i.e., a candidate solution) and P̆_q(x̆_{q1}, x̆_{q2}, ..., x̆_{qD}) be a quasi-opposite point (see Figure 2). Assume f(·) is a fitness function which is used to measure the candidate's fitness. Now, if f(P̆_q) ≥ f(P), then the point P can be replaced with P̆_q; otherwise we continue with P. Hence, we always continue with the fitter one.

IV. PROPOSED ALGORITHM

Similar to all population-based optimization algorithms, two main steps are distinguishable for DE, namely population initialization and producing new generations by evolutionary operations such as selection, crossover, and mutation. Similar to ODE, we enhance these two steps using the quasi-oppositional scheme. Classical DE is chosen as the parent algorithm and the proposed scheme is embedded in it to accelerate convergence. Corresponding pseudo-code for the proposed approach (QODE) is given in Table I. Newly added or extended code segments are explained in the following subsections.

A. Quasi-Oppositional
Population Initialization

According to our review of the optimization literature, random number generation, in the absence of a priori knowledge, is the only choice to create an initial population. By utilizing quasi-oppositional learning we can obtain fitter starting candidate solutions even when there is no a priori knowledge about the solution(s). Steps 1-12 of Table I present the implementation of quasi-oppositional initialization for QODE. The procedure is:

1) Initialize the population P0 (N_p individuals) randomly;
2) Calculate the quasi-opposite population QOP0 (steps 5-10 of Table I);
3) Select the N_p fittest individuals from {P0 ∪ QOP0} as the initial population.

B. Quasi-Oppositional Generation Jumping

By applying a similar approach to the current population, the evolutionary process can be forced to jump to a new solution candidate, which ideally is fitter than the current one. Based on a jumping rate J_r (i.e., a jumping probability), after generating new populations by selection, crossover, and mutation, the quasi-opposite population is calculated and the N_p fittest individuals are selected from the union of the current population and the quasi-opposite population. As a difference to quasi-oppositional initialization, note that in order to calculate the quasi-opposite population for generation jumping, the opposite of each variable and the middle point are calculated dynamically. That is, the minimum and maximum values of each variable in the current population ([MIN_j^p, MAX_j^p]) are used to calculate middle-to-opposite points instead of the variables' predefined interval boundaries ([a_j, b_j]). If we stayed within the variables' static boundaries, we would jump outside of the already shrunken search space and the knowledge of the current reduced space (converged population) would be lost. Hence, we calculate new points using the variables' current interval in the population ([MIN_j^p, MAX_j^p]), which is, as the search progresses, increasingly smaller than the corresponding
initial range [a_j, b_j]. Steps 33-46 of Table I show the implementation of quasi-oppositional generation jumping for QODE.

TABLE I
Pseudo-code for Quasi-Oppositional Differential Evolution (QODE). P0: initial population; OP0: opposite of initial population; N_p: population size; P: current population; OP: opposite of current population; V: noise vector; U: trial vector; D: problem dimension; [a_j, b_j]: range of the j-th variable; BFV: best fitness value so far; VTR: value to reach; NFC: number of function calls; MAX_NFC: maximum number of function calls; F: mutation constant; rand(0,1): uniformly generated random number; C_r: crossover rate; f(·): objective function; P': population of the next generation; J_r: jumping rate; MIN_j^p / MAX_j^p: minimum / maximum value of the j-th variable in the current population; M_{i,j}: middle point. Steps 1-12 and 33-46 are the implementations of quasi-oppositional population initialization and generation jumping, respectively.

/* Quasi-Oppositional Population Initialization */
1.  Generate uniformly distributed random population P0;
2.  for (i = 0; i < N_p; i++)
3.    for (j = 0; j < D; j++)
4.    {
5.      OP0_{i,j} = a_j + b_j - P0_{i,j};
6.      M_{i,j} = (a_j + b_j) / 2;
7.      if (P0_{i,j} < M_{i,j})
8.        QOP0_{i,j} = M_{i,j} + (OP0_{i,j} - M_{i,j}) × rand(0,1);
9.      else
10.       QOP0_{i,j} = OP0_{i,j} + (M_{i,j} - OP0_{i,j}) × rand(0,1);
11.   }
12. Select the N_p fittest individuals from the set {P0, QOP0} as initial population P0;
/* End of Quasi-Oppositional Population Initialization */
13. while (BFV > VTR and NFC < MAX_NFC)
14. {
15.   for (i = 0; i < N_p; i++)
16.   {
17.     Select three parents P_{i1}, P_{i2}, and P_{i3} randomly from the current population, where i ≠ i1 ≠ i2 ≠ i3;
18.     V_i = P_{i1} + F × (P_{i2} - P_{i3});
19.     for (j = 0; j < D; j++)
20.     {
21.       if (rand(0,1) < C_r or j = k)
22.         U_{i,j} = V_{i,j};
23.       else
24.         U_{i,j} = P_{i,j};
25.     }
26.     Evaluate U_i;
27.     if (f(U_i) ≤ f(P_i))
28.       P'_i = U_i;
29.     else
30.       P'_i = P_i;
31.   }
32.   P = P';
      /* Quasi-Oppositional Generation Jumping */
33.   if (rand(0,1) < J_r)
34.   {
35.     for (i = 0; i < N_p; i++)
36.       for (j = 0; j < D; j++)
37.       {
38.         OP_{i,j} = MIN_j^p + MAX_j^p - P_{i,j};
39.         M_{i,j} = (MIN_j^p + MAX_j^p) / 2;
40.         if (P_{i,j} < M_{i,j})
41.           QOP_{i,j} = M_{i,j} + (OP_{i,j} - M_{i,j}) × rand(0,1);
42.         else
43.           QOP_{i,j} = OP_{i,j} + (M_{i,j} - OP_{i,j}) × rand(0,1);
44.       }
45.     Select the N_p fittest individuals from the set {P, QOP} as current population P;
46.   }
      /* End of Quasi-Oppositional Generation Jumping */
47. }

V. EXPERIMENTAL VERIFICATION

In this section we describe the benchmark functions, comparison strategies, and algorithm settings, and present the results.

A. Benchmark Functions

A set of 15 benchmark functions (7 unimodal and 8 multimodal) has been used for performance verification of the proposed approach. Furthermore, test functions with two different dimensions (D and 2D) have been employed in the conducted experiments. In this way, classical differential evolution (DE), opposition-based DE (ODE), and quasi-oppositional DE (QODE) are compared on 30 minimization problems. The definitions of the benchmark functions and their global optimum(s) are listed in Appendix A. Thirteen of the 15 functions have an optimum in the center of the search space; to make the search space asymmetric, it is shifted by a/2 for these functions as follows:

  If O.P.B.: -a ≤ x_i ≤ a and f_min = f(0, ..., 0) = 0,
  then S.P.B.: -a + a/2 ≤ x_i ≤ a + a/2,

where O.P.B. and S.P.B. stand for original parameter bounds and shifted parameter bounds, respectively.

B. Comparison Strategies and Metrics

In this study, three metrics, namely number of function calls (NFC), success rate (SR), and success performance (SP) [17], have been utilized to compare the algorithms. We compare convergence speed by measuring the number of function calls, which is the most commonly used metric in the literature [10]-[12], [14], [17]. A smaller NFC means higher convergence speed. The termination criterion is to find a value smaller than the value-to-reach (VTR) before reaching the maximum number of function calls MAX_NFC. In order to minimize the effect of the stochastic nature of the
algorithms on the metric, the reported number of function calls for each function is the average over 50 trials.

The number of times the algorithm succeeds in reaching the VTR for each test function is measured as the success rate SR:

  SR = (number of times reached VTR) / (total number of trials).   (9)

The average success rate (SR_ave) over n test functions is calculated as follows:

  SR_ave = (1/n) Σ_{i=1}^{n} SR_i.   (10)

Both NFC and SR are important measures in an optimization process, so these two individual objectives should be considered simultaneously when comparing competitors. In order to combine the two metrics, a new measure, called success performance (SP), has been introduced as follows [17]:

  SP = mean(NFC for successful runs) / SR.   (11)

By this definition, the two following algorithms have equal performances (SP = 100):
Algorithm A: mean(NFC for successful runs) = 50 and SR = 0.5;
Algorithm B: mean(NFC for successful runs) = 100 and SR = 1.
SP is our main measure to judge which algorithm performs better than the others.

C. Setting Control Parameters

Parameter settings for all conducted experiments are as follows:
• Population size, N_p = 100 [2], [4], [23]
• Differential amplification factor, F = 0.5 [1], [2], [5], [16], [22]
• Crossover probability constant, C_r = 0.9 [1], [2], [5], [16], [22]
• Jumping rate constant for ODE, J_r(ODE) = 0.3 [10]-[12], [14]
• Jumping rate constant for QODE, J_r(QODE) = 0.05
• Maximum number of function calls, MAX_NFC = 10^6
• Value to reach, VTR = 10^-8 [17]

The jumping rate for QODE is set to a smaller value (J_r(QODE) = (1/6) J_r(ODE)) because our trials showed that higher jumping rates can reduce the diversity of the population very fast and cause premature convergence. This was predictable for QODE because, instead of the opposite point, a random point between the middle point and the opposite point is generated, so the variable's search interval is prone to shrink very fast. A complementary study is required to determine an optimal value or interval for QODE's jumping rate.

D. Results

Results of applying DE, ODE, and QODE to solve 30 test problems (15 test problems with two different dimensions) are given in Table II. The best NFC and success performance for each case are highlighted in boldface. As seen, QODE outperforms DE and ODE on 22 functions, ODE on 6 functions, and DE on just one function. DE performs marginally better than ODE and QODE in terms of average success rate (0.90, 0.88, and 0.86, respectively). ODE surpasses DE on 26 functions.

As mentioned before, the success performance is a measure which considers the number of function
problems (15test problems with two different dimensions)are given in Table II.The best NFC and the success performance for each case are highlighted in boldface.As seen,QODE outperforms DE and ODE on 22functions,ODE on 6functions,and DE just on one function.DE performs marginally better than ODE and QODE in terms of average success rate (0.90,0.88,and 0.86,respectively).ODE surpasses DE on 26functions.As we mentioned before,the success performance is a measure which considers the number of function2007IEEE Co ngr e ss o n Evo luti o nar y Co mputati o n (CEC 2007)2233calls and the success rate simultaneously and so it can be utilized for a reasonable comparison of optimization algorithms.VI.C ONCLUSIONIn this paper,the quasi-oppositional DE(QODE),an enhanced version of the opposition-based differential evolution(ODE),is proposed.Both algorithms(ODE and QODE)use the same schemes for population initial-ization and generation jumping.But,QODE uses quasi-opposite points instead of opposite points.The presented mathematical proof confirms that this point has a higher chance than opposite point to be closer to the solution. Experimental results,conducted on30test problems, clearly show that QODE outperforms ODE and DE. Number of function calls,success rate,and success performance are three metrics which were employed to compare DE,ODE,and QODE in this study. 
According to our studies on opposition-based learning, this field presents promising potential but still requires many deep theoretical and empirical investigations. A control-parameter study (the jumping rate in particular), adaptive setting of the jumping rate, and investigation of QODE on a more comprehensive test set are our directions for future study.

REFERENCES

[1] M. Ali and A. Törn. Population set-based global optimization algorithms: Some modifications and numerical studies. Journal of Computers and Operations Research, 31(10):1703-1725, 2004.
[2] J. Brest, S. Greiner, B. Boškovič, M. Mernik, and V. Žumer. Self-adapting control parameters in differential evolution: A comparative study on numerical benchmark problems. IEEE Transactions on Evolutionary Computation, 10(6):646-657, 2006.
[3] S. Das, A. Konar, and U. Chakraborty. Improved differential evolution algorithms for handling noisy optimization problems. In Proceedings of the IEEE Congress on Evolutionary Computation, pages 1691-1698, Napier University, Edinburgh, UK, September 2005.
[4] C. Y. Lee and X. Yao. Evolutionary programming using mutations based on the Lévy probability distribution. IEEE Transactions on Evolutionary Computation, 8(1):1-13, 2004.
[5] J. Liu and J. Lampinen. A fuzzy adaptive differential evolution algorithm. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 9(6):448-462, 2005.
[6] G. C. Onwubolu and B. Babu. New Optimization Techniques in Engineering. Springer, Berlin, New York, 2004.
[7] K. Price. An Introduction to Differential Evolution. McGraw-Hill, London, UK, 1999. ISBN: 007-709506-5.
[8] K. Price, R. Storn, and J. Lampinen. Differential Evolution: A Practical Approach to Global Optimization. Springer-Verlag, Berlin/Heidelberg, 1st edition, 2005. ISBN: 3540209506.
[9] S. Rahnamayan. Opposition-Based Differential Evolution. PhD thesis, Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada, April 2007.
[10] S. Rahnamayan, H. Tizhoosh, and
M. Salama. Opposition-based differential evolution algorithms. In Proceedings of the 2006 IEEE World Congress on Computational Intelligence (CEC-2006), pages 2010-2017, Vancouver, BC, Canada, July 2006.
[11] S. Rahnamayan, H. Tizhoosh, and M. Salama. Opposition-based differential evolution for optimization of noisy problems. In Proceedings of the 2006 IEEE World Congress on Computational Intelligence (CEC-2006), pages 1865-1872, Vancouver, BC, Canada, July 2006.
[12] S. Rahnamayan, H. Tizhoosh, and M. Salama. Opposition-based differential evolution with variable jumping rate. In IEEE Symposium on Foundations of Computational Intelligence, Honolulu, Hawaii, USA, April 2007.
[13] S. Rahnamayan, H. Tizhoosh, and M. Salama. Opposition versus randomness in soft computing techniques. Submitted to the Elsevier journal Applied Soft Computing, August 2006.
[14] S. Rahnamayan, H. Tizhoosh, and M. Salama. Opposition-based differential evolution. Accepted at IEEE Transactions on Evolutionary Computation, December 2006.
[15] M. Shokri, H. R. Tizhoosh, and M. Kamel. Opposition-based Q(λ) algorithm. In Proceedings of the 2006 IEEE World Congress on Computational Intelligence (IJCNN-2006), pages 649-653, Vancouver, BC, Canada, July 2006.
[16] R. Storn and K. Price. Differential evolution: A simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11:341-359, 1997.
[17] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y. P. Chen, A. Auger, and S. Tiwari. Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization. Technical Report 2005005, Kanpur Genetic Algorithms Laboratory, IIT Kanpur, and Nanyang Technological University, Singapore, May 2005.
[18] H. Tizhoosh. Opposition-based learning: A new scheme for machine intelligence. In Proceedings of the International Conference on Computational Intelligence for Modelling, Control and Automation (CIMCA-2005), pages 695-701, Vienna, Austria, 2005.
[19] H. Tizhoosh. Reinforcement learning based on actions and opposite actions. In Proceedings of the
International Conference on Artificial Intelligence and Machine Learning(AIML-2005), Cairo,Egypt,2005.[20]H.Tizhoosh.Opposition-based reinforcement learning.Journalof Ad v anced Computational Intelligence and Intelligent Infor-matics,10(3),2006.[21]M.Ventresca and H.Tizhoosh.Improving the convergence ofbackpropagation by opposite transfer functions.In Proceedings of the2006I EEE World Congress on Computational Intelligence (IJCNN-2006),pages9527–9534,Vancouver,BC,Canada,July 2006.[22]J.Vesterstroem and R.Thomsen.A comparative study of differ-ential evolution,particle swarm optimization,and evolutionary algorithms on numerical benchmark problems.In Proceedings of the Congress on Ev olutionar y Computation(C E C-2004),I EEE Publications,volume2,pages1980–1987,San Diego,California, USA,July2004.[23]X.Yao,Y.Liu,and G.Lin.Evolutionary programming madefaster.Journal of I EEE Transactions on Ev olutionar y Computa-tion,3(2):82,1999.A PPENDIX A.L IST OF BENCHMARK FUNCTIONS O.P.B.and S.P.B.stand for the original parameter bounds and the shifted parameter bounds,respecti v el y. 
All the conducted experiments are based on S.P.B.

• 1st De Jong
f_1(X) = Σ_{i=1}^{n} x_i²,
O.P.B. −5.12 ≤ x_i ≤ 5.12, S.P.B. −2.56 ≤ x_i ≤ 7.68, min(f_1) = f_1(0, ..., 0) = 0.

• Axis Parallel Hyper-Ellipsoid
f_2(X) = Σ_{i=1}^{n} i·x_i²,
O.P.B. −5.12 ≤ x_i ≤ 5.12, S.P.B. −2.56 ≤ x_i ≤ 7.68, min(f_2) = f_2(0, ..., 0) = 0.

• Schwefel's Problem 1.2
f_3(X) = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)²,
O.P.B. −65 ≤ x_i ≤ 65, S.P.B. −32.5 ≤ x_i ≤ 97.5, min(f_3) = f_3(0, ..., 0) = 0.

• Rastrigin's Function
f_4(X) = 10n + Σ_{i=1}^{n} (x_i² − 10 cos(2π x_i)),
O.P.B. −5.12 ≤ x_i ≤ 5.12, S.P.B. −2.56 ≤ x_i ≤ 7.68, min(f_4) = f_4(0, ..., 0) = 0.

• Griewangk's Function
f_5(X) = Σ_{i=1}^{n} x_i²/4000 − Π_{i=1}^{n} cos(x_i/√i) + 1,
O.P.B. −600 ≤ x_i ≤ 600, S.P.B. −300 ≤ x_i ≤ 900, min(f_5) = f_5(0, ..., 0) = 0.

• Sum of Different Power
f_6(X) = Σ_{i=1}^{n} |x_i|^(i+1),
O.P.B. −1 ≤ x_i ≤ 1, S.P.B. −0.5 ≤ x_i ≤ 1.5, min(f_6) = f_6(0, ..., 0) = 0.

• Ackley's Problem
f_7(X) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i²)) − exp((1/n) Σ_{i=1}^{n} cos(2π x_i)) + 20 + e,
O.P.B. −32 ≤ x_i ≤ 32, S.P.B. −16 ≤ x_i ≤ 48, min(f_7) = f_7(0, ..., 0) = 0.

TABLE II. Comparison of DE, ODE, and QODE. D: dimension, NFC: number of function calls (average over 50 trials), SR: success rate, SP: success performance. The last row of the table presents the average success rates. The best NFC and the success performance for each case are highlighted in boldface. DE, ODE, and QODE are unable to solve f10 (D = 60). Average success rates: DE 0.90, ODE 0.88, QODE 0.86. [The per-function numeric entries of this table are not recoverable from this extraction.]

2007 IEEE Congress on Evolutionary Computation (CEC 2007)
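For concreteness, the seven benchmark functions above can be sketched in Python. This is a sketch from the standard definitions; the Python function names are mine, and each function attains its minimum of 0 at the origin:

```python
import math

def de_jong(x):
    """f1: sphere function."""
    return sum(xi**2 for xi in x)

def hyper_ellipsoid(x):
    """f2: axis-parallel hyper-ellipsoid, sum of i * x_i^2 (i starts at 1)."""
    return sum((i + 1) * xi**2 for i, xi in enumerate(x))

def schwefel_1_2(x):
    """f3: Schwefel's problem 1.2, squared prefix sums."""
    total, prefix = 0.0, 0.0
    for xi in x:
        prefix += xi          # running sum_{j<=i} x_j
        total += prefix**2
    return total

def rastrigin(x):
    """f4: Rastrigin's function."""
    return 10 * len(x) + sum(xi**2 - 10 * math.cos(2 * math.pi * xi) for xi in x)

def griewangk(x):
    """f5: Griewangk's function (sum term minus product term plus 1)."""
    s = sum(xi**2 for xi in x) / 4000.0
    p = 1.0
    for i, xi in enumerate(x, start=1):
        p *= math.cos(xi / math.sqrt(i))
    return s - p + 1.0

def sum_of_powers(x):
    """f6: sum of different powers, |x_i|^(i+1)."""
    return sum(abs(xi)**(i + 1) for i, xi in enumerate(x, start=1))

def ackley(x):
    """f7: Ackley's problem."""
    n = len(x)
    return (-20 * math.exp(-0.2 * math.sqrt(sum(xi**2 for xi in x) / n))
            - math.exp(sum(math.cos(2 * math.pi * xi) for xi in x) / n)
            + 20 + math.e)
```

The shifted parameter bounds (S.P.B.) only move the search box relative to the optimum; the function definitions themselves are unchanged.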
DEMO: Differential Evolution for Multiobjective Optimization

Tea Robič and Bogdan Filipič
Department of Intelligent Systems, Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia
tea.robic@ijs.si
bogdan.filipic@ijs.si

C.A. Coello Coello et al. (Eds.): EMO 2005, LNCS 3410, pp. 520–533, 2005. © Springer-Verlag Berlin Heidelberg 2005

Abstract. Differential Evolution (DE) is a simple but powerful evolutionary optimization algorithm with many successful applications. In this paper we propose Differential Evolution for Multiobjective Optimization (DEMO) – a new approach to multiobjective optimization based on DE. DEMO combines the advantages of DE with the mechanisms of Pareto-based ranking and crowding distance sorting, used by state-of-the-art evolutionary algorithms for multiobjective optimization. DEMO is implemented in three variants that achieve competitive results on five ZDT test problems.

1 Introduction

Many real-world optimization problems involve optimization of several (conflicting) criteria. Since multiobjective optimization searches for an optimal vector, not just a single value, one solution often cannot be said to be better than another, and there exists not a single optimal solution but a set of optimal solutions, called the Pareto front. Consequently, there are two goals in multiobjective optimization: (i) to discover solutions as close to the Pareto front as possible, and (ii) to find solutions as diverse as possible in the obtained nondominated front. Satisfying these two goals is a challenging task for any algorithm for multiobjective optimization.

In recent years, many algorithms for multiobjective optimization have been introduced. Most originate in the field of Evolutionary Algorithms (EAs) – the so-called Multiobjective Optimization EAs (MOEAs). Among these, NSGA-II by Deb et al. [1] and SPEA2 by Zitzler et al. [2] are the most popular. MOEAs take the strong points of EAs and apply them to Multiobjective Optimization Problems (MOPs). A particular EA that has been used for multiobjective optimization is Differential Evolution (DE). DE is a simple yet powerful evolutionary algorithm by Price and Storn [3] that has been successfully used in solving single-objective optimization problems [4]. Hence, several researchers have tried to extend it to handle MOPs.

Abbass [5, 6] was the first to apply DE to MOPs in the so-called Pareto Differential Evolution (PDE) algorithm. This approach employs DE to create new individuals and then keeps only the nondominated ones as the basis for the next generation. PDE was compared to SPEA [7] (the predecessor of SPEA2) on two test problems and found to outperform it.

Madavan [8] achieved good results with the Pareto Differential Evolution Approach (PDEA¹). Like PDE, PDEA applies DE to create new individuals. It then combines both populations and calculates the nondominated rank (with Pareto-based ranking assignment) and diversity rank (with the crowding distance metric) for each individual. Two variants of PDEA were investigated. The first compares each child with its parent: the child replaces the parent if it has a higher nondominated rank, or if it has the same nondominated rank and a higher diversity rank; otherwise the child is discarded. This variant was found inefficient – the diversity was good but the convergence slow. The other variant simply takes the best individuals according to the nondominated rank and diversity rank (as in NSGA-II). The latter variant has proved to be very efficient and was applied to several MOPs, where it produced favorable results.

Xue [9] introduced Multiobjective Differential Evolution (MODE). This algorithm also uses the Pareto-based ranking assignment and the crowding distance metric, but in a different manner than PDEA. In MODE the fitness of an individual is first calculated using Pareto-based ranking and then reduced with respect to the individual's crowding distance value. This single fitness value is then used to select the best individuals for the new
population. MODE was tested on five benchmark problems where it produced better results than SPEA.

In this paper, we propose a new way of extending DE to be suitable for solving MOPs. We call it DEMO (Differential Evolution for Multiobjective Optimization). Although similar to the existing algorithms (especially PDEA), our implementation differs from the others and represents a novel approach to multiobjective optimization. DEMO is implemented in three variants (DEMO/parent, DEMO/closest/dec and DEMO/closest/obj). Because of diverse recommendations for the crossover probability, three different values for this parameter are investigated. From the simulation results on five test problems we find that DEMO efficiently achieves the two goals of multiobjective optimization, i.e. convergence to the true Pareto front and uniform spread of individuals along the front. Moreover, DEMO achieves very good results on the test problem ZDT4, which poses many difficulties to state-of-the-art algorithms for multiobjective optimization.

The rest of the paper is organized as follows. In Section 2 we describe the DE scheme that was used as a base for DEMO. Thereafter, in Section 3, we present DEMO in its three variants. Section 4 outlines the applied test problems and performance measures, and states the results. Further comparison and discussion of the results are provided in Section 5. The paper concludes with Section 6.

¹ This acronym was not used by Madavan. We introduce it to make a clear distinction between his approach and other implementations of DE for multiobjective optimization.

Differential Evolution
1. Evaluate the initial population P of random individuals.
2. While stopping criterion not met, do:
   2.1. For each individual P_i (i = 1, ..., popSize) from P repeat:
        (a) Create candidate C from parent P_i.
        (b) Evaluate the candidate.
        (c) If the candidate is better than the parent, the candidate replaces the parent. Otherwise, the candidate is discarded.
   2.2. Randomly enumerate the individuals in P.

Fig. 1. Outline of DE's main procedure

Candidate creation
Input: Parent P_i
1. Randomly select three individuals P_i1, P_i2, P_i3 from P, where i, i1, i2 and i3 are pairwise different.
2. Calculate candidate C as C = P_i1 + F · (P_i2 − P_i3), where F is a scaling factor.
3. Modify the candidate by binary crossover with the parent using crossover probability crossProb.
Output: Candidate C

Fig. 2. Outline of the candidate creation in scheme DE/rand/1/bin

2 Differential Evolution

DE is a simple evolutionary algorithm that creates new candidate solutions by combining the parent individual and several other individuals of the same population. A candidate replaces the parent only if it has better fitness. This is a rather greedy selection scheme that often outperforms traditional EAs.

The DE algorithm in pseudo-code is shown in Fig. 1. Many variants of the creation of a candidate are possible. We use the DE scheme DE/rand/1/bin described in Fig. 2 (more details on this and other DE schemes can be found in [10]). Sometimes, the newly created candidate falls out of bounds of the variable space. In such cases, many approaches to constraint handling are possible. We address this problem by simply replacing the candidate value violating the boundary constraints with the closest boundary value. In this way, the candidate becomes feasible with as few alterations to it as possible. Moreover, this approach does not require the construction of a new candidate.

3 Differential Evolution for Multiobjective Optimization

When applying DE to MOPs, we face many difficulties. Besides preserving a uniformly spread front of nondominated solutions, which is a challenging task for any MOEA, we have to deal with another question, that is, when to replace the parent with the candidate solution. In single-objective optimization, the decision

Differential Evolution for Multiobjective Optimization
1. Evaluate the initial population P of random individuals.
2. While stopping
criterion not met, do:
   2.1. For each individual P_i (i = 1, ..., popSize) from P repeat:
        (a) Create candidate C from parent P_i.
        (b) Evaluate the candidate.
        (c) If the candidate dominates the parent, the candidate replaces the parent. If the parent dominates the candidate, the candidate is discarded. Otherwise, the candidate is added to the population.
   2.2. If the population has more than popSize individuals, truncate it.
   2.3. Randomly enumerate the individuals in P.

Fig. 3. Outline of DEMO/parent

is easy: the candidate replaces the parent only when the candidate is better than the parent. In MOPs, on the other hand, the decision is not so straightforward. We could use the concept of dominance (the candidate replaces the parent only if it dominates it), but this would make the greedy selection scheme of DE even greedier. Therefore, DEMO applies the following principle (see Fig. 3). The candidate replaces the parent if it dominates it. If the parent dominates the candidate, the candidate is discarded. Otherwise (when the candidate and parent are nondominated with regard to each other), the candidate is added to the population. This step is repeated until popSize candidates are created. After that, we get a population of size between popSize and 2·popSize. If the population has grown, we have to truncate it to prepare it for the next step of the algorithm.

The truncation consists of sorting the individuals with nondominated sorting and then evaluating the individuals of the same front with the crowding distance metric. The truncation procedure keeps in the population only the best popSize individuals with regard to these two metrics. The described truncation is derived from NSGA-II and is also used in PDEA's second variant.

DEMO incorporates two crucial mechanisms. The immediate replacement of the parent individual with the candidate that dominates it is the core of DEMO.
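The selection rule of DEMO/parent described above can be sketched as follows. This is a minimal illustration assuming all objectives are minimized; the helper names are mine, not the paper's:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def demo_selection(population, parent_idx, candidate):
    """Step 2.1(c) of DEMO/parent for one candidate.

    `population` is a list of objective vectors. The candidate replaces the
    parent if it dominates it, is discarded if the parent dominates it, and
    is appended to the population otherwise. Returns the updated population.
    """
    parent = population[parent_idx]
    if dominates(candidate, parent):
        population[parent_idx] = candidate   # candidate replaces parent
    elif dominates(parent, candidate):
        pass                                 # candidate is discarded
    else:
        population.append(candidate)         # mutually nondominated: add it
    return population
```

Because nondominated candidates are appended rather than discarded, the population grows to at most 2·popSize within one generation, which is why the truncation step below is needed.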
The newly created candidates that enter the population (either by replacement or by addition) instantly take part in the creation of the following candidates. This emphasizes elitism within reproduction, which helps achieve the first goal of multiobjective optimization – convergence to the true Pareto front. The second mechanism is the use of nondominated sorting and the crowding distance metric in truncation of the extended population. Besides preserving elitism, this mechanism stimulates the uniform spread of solutions. This is needed to achieve the second goal – finding nondominated solutions as diverse as possible. DEMO's selection scheme thus efficiently pursues both goals of multiobjective optimization.

The described DEMO procedure (outlined in Fig. 3) is the most elementary of the three variants presented in this paper. It is called DEMO/parent. The other two variants were inspired by the concept of Crowding DE, recently introduced by Thomsen in [11].

Thomsen applied crowding-based DE in the optimization of multimodal functions. When optimizing functions with many optima, we would sometimes like not only to find one optimal point, but also to discover and maintain multiple optima in a single algorithm run. For this purpose, Crowding DE can be used.
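The truncation step described above (nondominated sorting, then crowding-distance sorting within the last accepted front, as in NSGA-II) can be sketched as follows. This is an illustrative sketch assuming minimization; the helper names are mine, and the naive O(n²)-per-front dominance check is chosen for clarity, not efficiency:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(objs):
    """Partition indices of `objs` into successive nondominated fronts."""
    fronts, remaining = [], set(range(len(objs)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts

def crowding_distance(objs, front):
    """NSGA-II-style crowding distance for each index in `front`."""
    dist = {i: 0.0 for i in front}
    n_obj = len(objs[front[0]])
    for k in range(n_obj):
        order = sorted(front, key=lambda i: objs[i][k])
        dist[order[0]] = dist[order[-1]] = float('inf')   # boundary solutions
        span = (objs[order[-1]][k] - objs[order[0]][k]) or 1.0
        for lo, mid, hi in zip(order, order[1:], order[2:]):
            dist[mid] += (objs[hi][k] - objs[lo][k]) / span
    return dist

def truncate(objs, pop_size):
    """Keep the best pop_size indices by (front rank, crowding distance)."""
    keep = []
    for front in nondominated_sort(objs):
        if len(keep) + len(front) <= pop_size:
            keep.extend(front)
        else:
            d = crowding_distance(objs, front)
            keep.extend(sorted(front, key=lambda i: -d[i])[:pop_size - len(keep)])
            break
    return keep
```

Whole fronts are accepted until one no longer fits; that front is then thinned by crowding distance, preferring boundary and sparsely surrounded solutions.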
Crowding DE is basically conventional DE with one important difference: usually, the candidate is compared to its parent; in Crowding DE, the candidate is compared to the most similar individual in the population. The applied similarity measure is the Euclidean distance between two solutions. Crowding DE was tested on numerous benchmark problems and its performance was impressive.

Because the goal of maintaining multiple optima is similar to the second goal of multiobjective optimization (maintaining diverse solutions in the front), we implemented the idea of Crowding DE in two additional variants of the DEMO algorithm. The second variant, DEMO/closest/dec, works in the same way as DEMO/parent, with the exception that the candidate solution is compared to the most similar individual in decision space. If it dominates it, the candidate replaces this individual; otherwise it is treated in the same way as in DEMO/parent. The applied similarity measure is the Euclidean distance between the two solutions in decision space. In the third variant, DEMO/closest/obj, the candidate is compared to the most similar individual in objective space.

DEMO/closest/dec and DEMO/closest/obj need more time for one step of the procedure than DEMO/parent. This is because at every step they have to search for the most similar individual in the decision and objective space, respectively. Although the additional computational complexity is notable when operating on high-dimensional spaces, it is negligible in real-world problems where the solution evaluation is the most time-consuming task.

4 Evaluation and Results

4.1 Test Problems

We analyze the performance of the three variants of DEMO on five ZDT test problems (introduced in [12]) that were frequently used as benchmark problems in the literature [1, 8, 9]. These problems are described in detail in Tables 1 and 2.

4.2 Performance Measures

Different performance measures for evaluating the efficiency of MOEAs have been suggested in the literature. Because we wanted to compare DEMO to other
MOEAs (especially the ones that use DE) on their published results, we use three metrics that have been used in these studies.

For all three metrics, we need to know the true Pareto front for a problem. Since we are dealing with artificial test problems, the true Pareto front is not hard to obtain. In our experiments we use 500 uniformly spaced Pareto-optimal solutions as the approximation of the true Pareto front².

The first metric is the Convergence metric Υ. It measures the distance between the obtained nondominated front Q and the set P* of Pareto-optimal solutions:

Υ = (Σ_{i=1}^{|Q|} d_i) / |Q|,

where d_i is the Euclidean distance (in the objective space) between the solution i ∈ Q and the nearest member of P*.

Instead of the convergence metric, some researchers use a very similar metric, called Generational Distance GD. This metric measures the distance between the obtained nondominated front Q and the set P* of Pareto-optimal solutions as

GD = √(Σ_{i=1}^{|Q|} d_i²) / |Q|,

where d_i is again the Euclidean distance (in the objective space) between the solution i ∈ Q and the nearest member of P*.

The third metric is the Diversity metric Δ. This metric measures the extent of spread achieved among the nondominated solutions:

Δ = (d_f + d_l + Σ_{i=1}^{|Q|−1} |d_i − d̄|) / (d_f + d_l + (|Q|−1) d̄),

where d_i is the Euclidean distance (in the objective space) between consecutive solutions in the obtained nondominated front Q, and d̄ is the average of these distances. The parameters d_f and d_l represent the Euclidean distances between the extreme solutions of the Pareto front P* and the boundary solutions of the obtained front Q.

Table 1. Description of the test problems ZDT1, ZDT2 and ZDT3

ZDT1
Decision space: x ∈ [0,1]^30
Objective functions: f_1(x) = x_1; f_2(x) = g(x)[1 − √(x_1/g(x))]; g(x) = 1 + (9/(n−1)) Σ_{i=2}^{n} x_i
Optimal solutions: 0 ≤ x*_1 ≤ 1 and x*_i = 0 for i = 2, ..., 30
Characteristics: convex Pareto front

ZDT2
Decision space: x ∈ [0,1]^30
Objective functions: f_1(x) = x_1; f_2(x) = g(x)[1 − (x_1/g(x))²]; g(x) = 1 + (9/(n−1)) Σ_{i=2}^{n} x_i
Optimal solutions: 0 ≤ x*_1 ≤ 1 and x*_i = 0 for i = 2, ..., 30
Characteristics: nonconvex Pareto front

ZDT3
Decision space: x ∈ [0,1]^30
Objective functions: f_1(x) = x_1; f_2(x) = g(x)[1 − √(x_1/g(x)) − (x_1/g(x)) sin(10π x_1)]; g(x) = 1 + (9/(n−1)) Σ_{i=2}^{n} x_i
Optimal solutions: 0 ≤ x*_1 ≤ 1 and x*_i = 0 for i = 2, ..., 30
Characteristics: discontinuous Pareto front

Table 2. Description of the test problems ZDT4 and ZDT6

ZDT4
Decision space: x ∈ [0,1] × [−5,5]^9
Objective functions: f_1(x) = x_1; f_2(x) = g(x)[1 − √(x_1/g(x))]; g(x) = 1 + 10(n−1) + Σ_{i=2}^{n} (x_i² − 10 cos(4π x_i))
Optimal solutions: 0 ≤ x*_1 ≤ 1 and x*_i = 0 for i = 2, ..., 10
Characteristics: many local Pareto fronts

ZDT6
Decision space: x ∈ [0,1]^10
Objective functions: f_1(x) = 1 − exp(−4x_1) sin⁶(6π x_1); f_2(x) = g(x)[1 − (f_1(x)/g(x))²]; g(x) = 1 + 9[(Σ_{i=2}^{n} x_i)/(n−1)]^0.25
Optimal solutions: 0 ≤ x*_1 ≤ 1 and x*_i = 0 for i = 2, ..., 10
Characteristics: low density of solutions near the Pareto front

² These solutions are uniformly spread in the decision space. They were made available online by Simon Huband at .au/research/wfg/datafiles.html.

4.3 Experiments

In addition to investigating the performance of the three DEMO variants, we were also interested in observing the effect of the crossover probability (crossProb in Fig. 2) on the efficiency of DEMO. Therefore, we made the following experiments: for every DEMO variant, we ran the respective DEMO algorithm with crossover probabilities set to 30%, 60% and 90%. We repeated all tests 10 times with different initial populations. The scaling factor F was set to 0.5 and was not tuned. To match the settings of the algorithms used for comparison, the population size was set to 100 and the algorithm was run for 250 generations.

4.4 Results

Tables 3 and 4 present the mean and variance of the values of the convergence and diversity metric, averaged over 10 runs. We provide the results for all three DEMO variants. Results of other algorithms are taken from the literature (see [1] for the results and parameter settings of both versions of NSGA-II, SPEA and PAES, [8] for PDEA, and [9] for MODE³).

Table 3. Statistics of the results on test problems ZDT1, ZDT2 and ZDT3

ZDT1
Algorithm | Convergence metric | Diversity metric
NSGA-II (real-coded) | 0.033482 ± 0.004750 | 0.390307 ± 0.001876
NSGA-II (binary-coded) | 0.000894 ± 0.000000 | 0.463292 ± 0.041622
SPEA | 0.001799 ± 0.000001 | 0.784525 ± 0.004440
PAES | 0.082085 ± 0.008679 | 1.229794 ± 0.004839
PDEA | N/A | 0.298567 ± 0.000742
MODE | 0.005800 ± 0.000000 | N/A
DEMO/parent | 0.001083 ± 0.000113 | 0.325237 ± 0.030249
DEMO/closest/dec | 0.001113 ± 0.000134 | 0.319230 ± 0.031350
DEMO/closest/obj | 0.001132 ± 0.000136 | 0.306770 ± 0.025465

ZDT2
Algorithm | Convergence metric | Diversity metric
NSGA-II (real-coded) | 0.072391 ± 0.031689 | 0.430776 ± 0.004721
NSGA-II (binary-coded) | 0.000824 ± 0.000000 | 0.435112 ± 0.024607
SPEA | 0.001339 ± 0.000000 | 0.755148 ± 0.004521
PAES | 0.126276 ± 0.036877 | 1.165942 ± 0.007682
PDEA | N/A | 0.317958 ± 0.001389
MODE | 0.005500 ± 0.000000 | N/A
DEMO/parent | 0.000755 ± 0.000045 | 0.329151 ± 0.032408
DEMO/closest/dec | 0.000820 ± 0.000042 | 0.335178 ± 0.016985
DEMO/closest/obj | 0.000780 ± 0.000035 | 0.326821 ± 0.021083

ZDT3
Algorithm | Convergence metric | Diversity metric
NSGA-II (real-coded) | 0.114500 ± 0.007940 | 0.738540 ± 0.019706
NSGA-II (binary-coded) | 0.043411 ± 0.000042 | 0.575606 ± 0.005078
SPEA | 0.047517 ± 0.000047 | 0.672938 ± 0.003587
PAES | 0.023872 ± 0.000010 | 0.789920 ± 0.001653
PDEA | N/A | 0.623812 ± 0.000225
MODE | 0.021560 ± 0.000000 | N/A
DEMO/parent | 0.001178 ± 0.000059 | 0.309436 ± 0.018603
DEMO/closest/dec | 0.001197 ± 0.000091 | 0.324934 ± 0.029648
DEMO/closest/obj | 0.001236 ± 0.000091 | 0.328873 ± 0.019142

Table 4. Statistics of the results on test problems ZDT4 and ZDT6

ZDT4
Algorithm | Convergence metric | Diversity metric
NSGA-II (real-coded) | 0.513053 ± 0.118460 | 0.702612 ± 0.064648
NSGA-II (binary-coded) | 3.227636 ± 7.307630 | 0.479475 ± 0.009841
SPEA | 7.340299 ± 6.572516 | 0.798463 ± 0.014616
PAES | 0.854816 ± 0.527238 | 0.870458 ± 0.101399
PDEA | N/A | 0.840852 ± 0.035741
MODE | 0.638950 ± 0.500200 | N/A
DEMO/parent | 0.001037 ± 0.000134 | 0.359905 ± 0.037672
DEMO/closest/dec | 0.001016 ± 0.000091 | 0.359600 ± 0.026977
DEMO/closest/obj | 0.041012 ± 0.063920 | 0.407225 ± 0.094851

ZDT6
Algorithm | Convergence metric | Diversity metric
NSGA-II (real-coded) | 0.296564 ± 0.013135 | 0.668025 ± 0.009923
NSGA-II (binary-coded) | 7.806798 ± 0.001667 | 0.644477 ± 0.035042
SPEA | 0.221138 ± 0.000449 | 0.849389 ± 0.002713
PAES | 0.085469 ± 0.006664 | 1.153052 ± 0.003916
PDEA | N/A | 0.473074 ± 0.021721
MODE | 0.026230 ± 0.000861 | N/A
DEMO/parent | 0.000629 ± 0.000044 | 0.442308 ± 0.039255
DEMO/closest/dec | 0.000630 ± 0.000021 | 0.461174 ± 0.035289
DEMO/closest/obj | 0.000642 ± 0.000029 | 0.458641 ± 0.031362

³ The results for MODE are the average of 30 instead of 10 runs. In [9] no diversity metric was calculated.

Results for PDEA in [8] were evaluated with generational distance instead of the convergence metric. Because PDEA is the approach that is the most similar to DEMO, we present the additional comparison of their results in Tables 5 and 6. Once more, we present the mean and variance of the values of generational distance, averaged over 10 runs.

Table 5. Generational distance achieved by PDEA and DEMO on the problems ZDT1, ZDT2 and ZDT3

Algorithm | ZDT1 | ZDT2 | ZDT3
PDEA | 0.000615 ± 0.000000 | 0.000652 ± 0.000000 | 0.000563 ± 0.000000
DEMO/parent | 0.000230 ± 0.000048 | 0.000091 ± 0.000004 | 0.000156 ± 0.000007
DEMO/closest/dec | 0.000242 ± 0.000028 | 0.000097 ± 0.000004 | 0.000162 ± 0.000013
DEMO/closest/obj | 0.000243 ± 0.000050 | 0.000092 ± 0.000004 | 0.000169 ± 0.000017

Table 6. Generational distance achieved by PDEA and DEMO on the problems ZDT4 and ZDT6

Algorithm | ZDT4 | ZDT6
PDEA | 0.618258 ± 0.826881 | 0.023886 ± 0.003294
DEMO/parent | 0.000202 ± 0.000053 | 0.000074 ± 0.000004
DEMO/closest/dec | 0.000179 ± 0.000048 | 0.000075 ± 0.000002
DEMO/closest/obj | 0.004262 ± 0.006545 | 0.000076 ± 0.000003

Fig. 4. Nondominated solutions of the final population obtained by DEMO on the five ZDT test problems (see Table 7 for more details on these fronts). The presented fronts are the outcome of a single run of DEMO/parent. [Plots not recoverable from this extraction.]

In Tables 3, 4, 5 and 6, only the results of DEMO with crossover probability 30% are shown. The results obtained with crossover probabilities set to 60% and 90% for all three variants were always worse than the ones given in the tables and are not presented for the sake of clarity⁴.

Figure 4 shows the nondominated fronts obtained by a single run of DEMO/parent with crossover probability 30%. Table 7 summarizes the values of the convergence and diversity metrics for the nondominated fronts from Fig. 4.

Table 7. Metric values for the nondominated fronts shown in Fig. 4

Problem | Convergence metric | Diversity metric
ZDT1 | 0.001220 | 0.297066
ZDT2 | 0.000772 | 0.281994
ZDT3 | 0.001220 | 0.274098
ZDT4 | 0.001294 | 0.318805
ZDT6 | 0.000648 | 0.385088

⁴ The interested reader may find all nondominated fronts obtained by the three versions of DEMO with the three different crossover probabilities on the internet site http://dis.ijs.si/tea/demo.htm.

5 Discussion

As mentioned in the previous section, DEMO with crossover probability of 30% achieved the best results. This is in contradiction with the recommendations by the authors of DE [10] and confirms Madavan's findings in [8]. However, we have to take these results with caution. Crossover probability is highly related to the dimensionality of the decision space. In our study, only high-dimensional functions were used. When operating on low-dimensional decision spaces, higher values for the crossover probability should be used to preserve the diversity in the population.

The challenge for MOEAs in the first three test problems (ZDT1, ZDT2 and ZDT3) lies in the high dimensionality of these problems. Many MOEAs have achieved very good results on these problems in both goals of multiobjective optimization (convergence to the true Pareto front and uniform spread of solutions along the front). The results for the problems ZDT1 and ZDT2 (Tables 3 and 5) show that DEMO achieves good results, which are comparable to the
results of the algorithms NSGA-II and PDEA.On ZDT3,the three DEMO variants outperform all other algorithms used in comparison.On thefirst three test problems we cannot see a meaningful difference in performance of the three DEMO variants.ZDT4is a hard optimization problem with many(219)local Pareto fronts that tend to mislead the optimization algorithm.In Tables4and6we can see that all algorithms(with the exception of DEMO)have difficulties in converging to the true Pareto front.Here,we can see for thefirst time that there is a notable difference between the DEMO variants.The third variant,DEMO/closest/obj, performs poorly compared to thefirst two variants,although still better than other algorithms.While thefirst two variants of DEMO converge to the true Pareto optimal front in all of the10runs,the third variant remains blocked in a local Pareto front3times out of10.This might be caused by DEMO/closest/obj putting too much effort infinding well spaced solutions and thus falling behind in the goal of convergence to the true Pareto optimal front.In this problem, there is also a big difference in results produced by the DEMO variants with different crossover probabilities.When using crossover probability60%or90%, no variant of DEMO ever converged to the true Pareto optimal front.With the test problem ZDT6,there are two major difficulties.Thefirst one is thin density of solutions towards the Pareto front and the second one non-uniform spread of solutions along the front.On this problem,all three DEMO variants outperform all other algorithms(see Tables4and6).The results of all DEMO variants are also very similar and almost no difference is noted when using different crossover probabilities.Here,we note that the diversity metric value is worse than on all other problems.This is because of the non-uniform spread of solutions that causes difficulties although the convergence is good.6ConclusionDEMO is a new DE implementation dealing with multiple objectives.The big-gest 
difference between DEMO and other MOEAs that also use DE for reproduction is that in DEMO the newly created good candidates immediately take part in the creation of the subsequent candidates. This enables fast convergence to the true Pareto front, while the use of nondominated sorting and the crowding distance metric in the truncation of the extended population promotes the uniform spread of solutions.

In this paper, three variants of DEMO were introduced. The detailed analysis of the results brings us to the following conclusions. The three DEMO variants are as effective as the algorithms NSGA-II and PDEA on the problems ZDT1 and ZDT2. On the problems ZDT3, ZDT4 and ZDT6, DEMO achieves better results than any other algorithm used for comparison. As for the DEMO variants, we have not found any variant to be significantly better than another. Crowding DE thus showed not to bring the expected advantage over standard DE. Because DEMO/closest/dec and DEMO/closest/obj are computationally more expensive than DEMO/parent and do not bring any important advantage, we recommend that the variant DEMO/parent be used in future experimentation.

In this study, we also investigated the influence of three different settings of the crossover probability. We found that DEMO in all three variants worked best when the crossover probability was low (30%). These findings, of course, cannot be generalized, because our test problems were high-dimensional (10- or 30-dimensional). In low-dimensional problems, higher values of crossover probability should be used to preserve the diversity in the population. Seeing how crossover probability affects DEMO's performance, we are now interested whether the other parameter used in candidate creation (the scaling factor F) also influences DEMO's performance. This investigation is left for further work. In the near future, we also plan to evaluate DEMO on additional test problems.

Acknowledgment

The work presented in the paper was supported by the Slovenian Ministry of
Education, Science and Sport (Research Programme P2-0209 Artificial Intelligence and Intelligent Systems). The authors wish to thank the anonymous reviewers for their comments and Simon Huband for making the Pareto-optimal solutions for the ZDT problems available online.

References

1. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6 (2002) 182–197
2. Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the strength Pareto evolutionary algorithm. Technical Report 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Gloriastrasse 35, CH-8092 Zurich, Switzerland (2001)
3. Price, K.V., Storn, R.: Differential evolution – a simple evolution strategy for fast optimization. Dr. Dobb's Journal 22 (1997) 18–24
4. Lampinen, J.: A bibliography of differential evolution algorithm. http://www2.lut.fi/~jlampine/debiblio.htm
5. Abbass, H.A., Sarker, R., Newton, C.: PDE: A Pareto-frontier differential evolution approach for multi-objective optimization problems. In: Proceedings of the Congress on Evolutionary Computation 2001 (CEC'2001). Volume 2, Piscataway, New Jersey, IEEE Service Center (2001) 971–978
6. Abbass, H.A.: The self-adaptive Pareto differential evolution algorithm. In: Congress on Evolutionary Computation (CEC'2002). Volume 1, Piscataway, New Jersey, IEEE Service Center (2002) 831–836
7. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation 3 (1999) 257–271
8. Madavan, N.K.: Multiobjective optimization using a Pareto differential evolution approach. In: Congress on Evolutionary Computation (CEC'2002). Volume 2, Piscataway, New Jersey, IEEE Service Center (2002) 1145–1150
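The DEMO/parent scheme summarized in the conclusion above (a candidate immediately replaces a parent it dominates, is added to the population when the two are incomparable, and the extended population is truncated back to size with nondominated sorting plus crowding distance) can be sketched in a few dozen lines. This is a minimal illustrative sketch, not the authors' code: the function names, parameter defaults and the test problem in the usage below are ours, and the crowding computation follows the standard NSGA-II-style metric rather than any paper-specific variant.

```python
import random

def dominates(fa, fb):
    """Pareto dominance for minimization: no worse everywhere, better somewhere."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def nondominated_sort(pop, fvals):
    """Split population indices into fronts F1, F2, ... by repeated nondominance."""
    remaining = list(range(len(pop)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(fvals[j], fvals[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

def crowding(front, fvals):
    """NSGA-II-style crowding distance of each index in one front."""
    dist = {i: 0.0 for i in front}
    m = len(fvals[front[0]])
    for k in range(m):
        srt = sorted(front, key=lambda i: fvals[i][k])
        lo, hi = fvals[srt[0]][k], fvals[srt[-1]][k]
        dist[srt[0]] = dist[srt[-1]] = float('inf')   # boundary points kept
        span = (hi - lo) or 1.0
        for a, b, c in zip(srt, srt[1:], srt[2:]):
            dist[b] += (fvals[c][k] - fvals[a][k]) / span
    return dist

def demo_parent(f, bounds, np_=40, gens=50, F=0.5, cr=0.3, rng=random):
    """DEMO/parent sketch: dominating candidates replace parents at once,
    incomparable ones extend the population; truncation restores size np_."""
    n = len(bounds)
    clip = lambda x: [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fv = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(len(pop)):   # new members join within the same generation
            r1, r2, r3 = rng.sample([j for j in range(len(pop)) if j != i], 3)
            mut = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k]) for k in range(n)]
            jr = rng.randrange(n)
            cand = clip([mut[k] if (rng.random() <= cr or k == jr) else pop[i][k]
                         for k in range(n)])
            fc = f(cand)
            if dominates(fc, fv[i]):            # candidate wins: immediate replacement
                pop[i], fv[i] = cand, fc
            elif not dominates(fv[i], fc):      # incomparable: keep both
                pop.append(cand); fv.append(fc)
        # truncate extended population: nondominated sorting + crowding distance
        keep = []
        for front in nondominated_sort(pop, fv):
            if len(keep) + len(front) <= np_:
                keep.extend(front)
            else:
                cd = crowding(front, fv)
                keep.extend(sorted(front, key=lambda i: -cd[i])[:np_ - len(keep)])
                break
        pop = [pop[i] for i in keep]
        fv = [fv[i] for i in keep]
    return pop, fv
```

As a usage example, running `demo_parent(lambda x: (x[0]**2, (x[0] - 2)**2), [(-4.0, 4.0)])` on the (hypothetical, Schaffer-style) bi-objective problem drives the population toward the Pareto set x ∈ [0, 2], with the infinite crowding distance of boundary points preserving the extremes of the front.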
Adaptive differential evolution algorithm for multiobjective optimization problems

Weiyi Qian*, Ajun Li

Department of Mathematics, Bohai University, Jinzhou, Liaoning 121000, PR China

Abstract

In this paper, a new adaptive differential evolution algorithm (ADEA) is proposed for multiobjective optimization problems. In ADEA, a variable parameter F, based on the number of the current Pareto fronts and the diversity of the current solutions, is used in the mutation operator to adjust the search step size in every generation, and the selection operator combines the advantages of DE with the mechanisms of Pareto-based ranking and crowding distance sorting. ADEA is implemented on five classical multiobjective problems; the results illustrate that ADEA efficiently achieves the two goals of multiobjective optimization: finding solutions that converge to the true Pareto front and that spread uniformly along the front.

© 2008 Elsevier Inc. All rights reserved.

Keywords: Multiobjective optimization problems; Differential evolution algorithm; Adaptive; Select operator

1. Introduction

Multiobjective optimization problems (MOPs), which consist of several competing and incommensurable objective functions, are frequently encountered in real-world settings such as scientific and engineering applications. Consequently, there are two goals in multiobjective optimization: (i) to discover solutions as close to the Pareto front as possible, and (ii) to find solutions as diverse as possible in the obtained nondominated front. In recent years, many optimization techniques have been proposed in the literature to solve MOPs. Some of the most attractive algorithms are evolutionary algorithms (EAs) such as NSGAII [1], MODE [2] and PAES [3]. In contrast to traditional gradient-based techniques, EAs use a set of potential solutions to explore the feasible region, so several solutions of a multiobjective problem can be obtained in a single run. These properties enable EAs to converge fast to the true Pareto front (the concept will be explained in the
following section). In MOPs, a large number of optimal solutions exist, and each corresponds to a different trade-off among the objective functions.

Applied Mathematics and Computation 201 (2008) 431–440
doi:10.1016/j.amc.2007.12.052
* Corresponding author. E-mail address: qianweiyi@ (W. Qian).

Differential evolution (DE) is designed for minimizing functions of real variables. It is extremely robust in locating the global minimum. DE is a simple yet powerful evolutionary optimization algorithm that has been successfully used in solving single-objective problems by Price and Storn [4]. After that, it was used to handle MOPs. Abbass was the first to apply DE to MOPs in the so-called Pareto differential evolution (PDE) algorithm [5,6]. Madavan achieved good results with the Pareto differential evolution approach (PDEA) [7]. Xue introduced multiobjective differential evolution [2]. Tea Robič proposed differential evolution for multiobjective optimization (DEMO) [8].

In this paper, we propose a new way of extending DE to make it suitable for solving MOPs. A novel approach, called the adaptive differential evolution algorithm (ADEA), which incorporates a new selection operator and an adaptive parameter, is introduced to search for the global solutions. In each generation, the selection operator emphasizes elitism to promote the search towards the Pareto front, and the adaptive parameter F is used to adjust the step size to the needs of the algorithm. From the simulation results on five test problems, we find that ADEA converges to the true Pareto front faster, and with better diversity, than most other optimization algorithms for multiobjective problems.

The rest of the paper is organized as follows. Section 2 provides the concepts of the Pareto front and Pareto ranking, and the DE scheme which is used as background for ADEA. In Section 3, the method called ADEA is discussed in
detail. Section 4 outlines the applied test problems and performance measures; experimental results showing the effectiveness of our approach, together with a further comparison and discussion of the results, are also provided in this section. Section 5 concludes the paper.

2. Background

2.1. Pareto-ranking

Two or more objectives in a multiobjective problem usually conflict and compete, so they cannot be minimized concurrently. One solution often cannot be said to be better or worse than another according to their function values. There exists not a single optimal solution, but a set of optimal solutions, called the Pareto front.

Starting from a set of initial solutions, multiobjective evolutionary algorithms use iteratively improving optimization techniques to find the optimal solutions. In every iteration, EAs favor population-based Pareto dominance as a measure of fitness. That makes for exploring the potentially promising areas of the search space and obtaining a better final approximation of the Pareto-optimal front. The dominance relations are defined as follows. For any two decision vectors a and b,

  a ≺ b (a dominates b)          iff f(a) ≤ f(b) and f(a) ≠ f(b),
  a ⪯ b (a weakly dominates b)   iff f(a) ≤ f(b),
  a ∼ b (a is incomparable to b) iff f(a) ≰ f(b) and f(b) ≰ f(a),

where a relation between the objective vectors holds if it holds componentwise, i.e. for every j ∈ {1, ..., n} between f_j(a) and f_j(b). A decision vector v is called a Pareto-optimal solution when it is not dominated by any other decision vector ṽ ∈ X in all objectives. The set of all Pareto-optimal solutions is called the Pareto-optimal front. We rely on Pareto ranking – a concept that has been prevalent since Goldberg's early work [10] and features in a host of techniques to find the Pareto-optimal front. The computation of Pareto ranking was very complex before Deb et al. [1] proposed a fast ranking approach, which requires at most O(MN²) computations in NSGAII. This approach can quickly sort a population of size N according to the level of non-domination.

2.2. Differential evolution

DE is an efficient evolutionary optimization algorithm. It has been
successfully applied to a plethora of applications. DE maintains a population set of solution vectors which are successively updated by addition, subtraction and component swapping, until the population converges, hopefully to the optimum. An initial set S consisting of N points with corresponding function values is generated randomly in X. Mutation and crossover are used to generate new points from S. A selection process is then used to drive these points to the vicinity of the global minimizer. That is, the final minimizers are obtained by iterating on the initial population.

Now we describe the process of DE in detail. N points are sampled in X as the initial set S = {x_1, x_2, ..., x_N}. Take N ≫ n, n being the dimension of the function f(x). In each iteration, N competitions between target points and trial points, which are generated through the mutation and crossover operators, are held to determine the members of S for the next generation. In the mutation phase, DE randomly selects three distinct points x_{p(1)}, x_{p(2)}, x_{p(3)} from the current set S. The weighted difference of two of the points is then added to the third point, which can be mathematically described as:

  x̂_i = x_{p(1)} + F (x_{p(2)} − x_{p(3)}),                                   (1)

where F ≤ 1 is a scaling factor. Then a trial point y_i, as a competitor to the target point x_i, is formed from its parents x_i and x̂_i using the following crossover rule:

  y_i^j = x̂_i^j   if R_j ≤ C_R or j = I_i,
  y_i^j = x_i^j   if R_j > C_R and j ≠ I_i,                                   (2)

where I_i is a randomly chosen integer in the set I = {1, 2, ..., n}; the superscript j represents the j-th component of the respective vectors; and R_j ∈ (0, 1) is drawn randomly for each j. The entity C_R is a constant. Notice that for C_R = 1 the trial vector y_i is equal to the mutated vector, that is, only the mutation operation is used in reproduction. The goal of the crossover operation is to obtain the trial vector y_i. The selection mechanism decides which point (x_i or y_i) is accepted. All the solutions in the population have the same chance of being selected as parents, without dependence on
their fitness value. The child produced after the mutation and crossover operations is evaluated. Then the performance of the child vector and its parent is compared and the better one is selected. If the parent is still better, it is retained in the population.

The differential evolution algorithm (DE):

Step 1. Determine the initial set S = {x_1, x_2, ..., x_N}, where the points x_i, i = 1, ..., N are sampled randomly in X. Evaluate f(x) at each x_i ∈ S. Take N ≫ n, n being the dimension of f(x). Set the generation counter k = 0.
Step 2. While the stopping criterion is not met, do
  Step 2.1. Generate points to replace points in S. For each x_i ∈ S, determine y_i by the following two reproduction operations.
    Mutation: Randomly select three points from S except x_i, and find the second parent x̂_i by the mutation rule (1).
    Crossover: Calculate the trial vector y_i corresponding to x_i from x_i and x̂_i using the crossover rule (2).
  Step 2.2. Replace points in S. Evaluate the trial vector y_i; if y_i is better than its parent x_i, y_i replaces x_i, otherwise y_i is discarded. Set k = k + 1. Go to Step 2.

3. Adaptive differential evolution algorithm

When applying DE to MOPs, we face many difficulties. How to replace the parent with trial solutions and how to preserve a uniform spread of nondominated solutions are the two main challenges for the DE algorithm. The selection operator of original DE is simple, but it is not fit for MOPs. Selection is easy in single-objective optimization, but it is not so straightforward in MOPs, since a trial solution and a parent solution may be incomparable. According to the original DE, if the trial solution does not dominate the parent, the trial is discarded. This method leads to missing optimal solutions that have already been found. Meanwhile, the sensitivity of the parameter F has not been studied; authors just chose its value randomly, mainly between 0.5 and 0.9. The influence of F on the new points was ignored.

To overcome the above-mentioned drawbacks, we propose an adaptive parameter F and a new selection method. We do
not compare the trial solution with its parent when a trial point is generated, but keep it in the trial set. When all the members of the population have generated trial solutions, we obtain a combined population of size 2·popsize, consisting of the trial set and the parent set. This population is then truncated to size popsize for the next step of the algorithm. The truncation consists of sorting the individuals with nondominated sorting and evaluating the individuals of the same front with the crowding distance metric. The truncation procedure continues until only the best popsize individuals remain. The truncation is similar to that in NSGAII, but ADEA introduces a different crowding distance metric, outlined in Fig. 1: we use the diagonal of the largest cuboid enclosing the point x_i without including any other point in the population as the density estimate. Fig. 1 outlines the crowding distance computation procedure for the solutions in a nondominated set I.

In DE, the generation of new points depends mainly on the mutation operator. In mutation, the parameter F plays an important role: it influences the speed of convergence and the diversity of the solutions, because it decides the search step size. Usually, optimization algorithms favor global search at the early stage, for exploring the feasible domain and finding the globally optimal solutions, and local search at the later stage, for accelerating convergence. Based on this strategy, we define the parameter F as follows:

  F = max( (Σ_{j=1}^{k} Σ_{i=1}^{m_j} |d_{ij} − d̄_j| + d_f) / (|Q| · d̄ + d_f),  1 − 2|P|/|Q|,  l_min ),

where d_{ij} is the crowding distance of the i-th solution in the j-th Pareto level; d̄_j is the average crowding distance of the solutions in the j-th Pareto level; d̄ is the average crowding distance of the solutions in the current iteration; |P| is the number of nondominated solutions; |Q| is the size of the population set; the parameter d_f represents the Euclidean distance between two
boundary solutions in Q; and l_min is a lower bound for F.

ADEA is implemented as follows. Initially, a random parent population S is created. In the first iteration, the truncation procedure is introduced only for calculating F, and all the solutions in S are retained. In the other iterations, the truncation procedure is used to select the parent set (for the next generation) and to calculate F. F is decided by the number of the current Pareto fronts and the diversity of the current solutions, so it can adjust its size to the needs of the algorithm. The generation of new points is the same as in DE (according to the mutation rule and the crossover rule). Then a combined population S = S ∪ Q (including the new points) is formed. The population S will be of size 2N. Since all previous and current population members are included in S, elitism is ensured. The ADEA procedure is shown in Fig. 2.

4. Experimental results

4.1. Performance measures

To validate our approach, we used the methodology normally adopted in the evolutionary multiobjective optimization literature. Because we wanted to compare ADEA to other MOEAs on their published results, we use three metrics that have been used in these studies. They represent both quantitative and qualitative comparisons with MOEAs that are representative of the state-of-the-art: the Nondominated Sorting Genetic Algorithm II (NSGAII), the Strength Pareto Evolutionary Algorithm (SPEA), the Pareto Archived Evolution Strategy (PAES), the Pareto-frontier Differential Evolution Approach (PDEA), and the Multiobjective Differential Evolution algorithm (MODE).

Fig. 1. Crowding distance metric in ADEA.

(1) Convergence metric Υ. This metric is defined as:

  Υ = (Σ_{i=1}^{n} d_i) / n,

where n is the number of nondominated vectors found by the algorithm being analyzed and d_i is the Euclidean distance (measured in the objective space) between the i-th member of the obtained nondominated front Q and the nearest member of the true Pareto front P. It measures the distance between
the Q and P.

(2) Generational distance (GD). This metric is similar to Υ. It is defined as:

  GD = √(Σ_{i=1}^{n} d_i²) / n,

where d_i is the Euclidean distance (measured in the objective space) between the i-th member of the obtained nondominated front Q and the nearest member of the true Pareto front P. It should be clear that a value of GD = 0 indicates that all the elements generated are on the true Pareto front of the problem.

(3) Diversity metric Δ. This metric measures the extent of spread achieved among the nondominated solutions. It is defined as:

  Δ = (d_f + d_l + Σ_{i=1}^{n−1} |d_i − d̄|) / (d_f + d_l + (n − 1) d̄),

where d_i is the Euclidean distance (measured in the objective space) between consecutive solutions in the obtained nondominated front Q and d̄ is the average of these distances. The parameters d_f and d_l represent the Euclidean distances between the extreme solutions of the true Pareto front P and the boundary solutions of the obtained front Q.

4.2. Test problems and results

The test problems for evaluating the performance of our method were chosen based on significant past studies in multiobjective evolutionary algorithms. We chose five problems from benchmark domains commonly used in past multiobjective GA research (in the literature [1,7,2,10]). For every test problem, the number of population points is NP = 100 and the number of iterations is NG = 200.

Fig. 2. Outline of ADEA.

Test problem 1 (ZDT1): convex Pareto front:

  f1(x) = x1,
  f2(x) = g(x) (1 − √(x1/g(x))),
  g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i,

with n = 30 and x_i ∈ [0, 1].

Test problem 2 (ZDT2): nonconvex Pareto front:

  f1(x) = x1,
  f2(x) = g(x) (1 − (x1/g(x))²),
  g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i,

with n = 30 and x_i ∈ [0, 1].

Test problem 3 (ZDT3): discontinuous Pareto front:

  f1(x) = x1,
  f2(x) = g(x) (1 − √(x1/g(x)) − (x1/g(x)) sin(10π x1)),
  g(x) = 1 + (9/(n − 1)) Σ_{i=2}^{n} x_i,

with n = 30 and x_i ∈ [0, 1].

Test problem 4 (ZDT4): many local
Pareto fronts:

  f1(x) = x1,
  f2(x) = g(x) (1 − √(x1/g(x))),
  g(x) = 1 + 10(n − 1) + Σ_{i=2}^{n} (x_i² − 10 cos(4π x_i)),

with n = 10, x1 ∈ [0, 1] and x_i ∈ [−5, 5] for i = 2, ..., 10.

Test problem 5 (ZDT6): low density of solutions near the Pareto front:

  f1(x) = 1 − exp(−4x1) sin⁶(6π x1),
  f2(x) = g(x) (1 − (f1(x)/g(x))²),
  g(x) = 1 + 9 ((Σ_{i=2}^{n} x_i)/(n − 1))^0.25,

with n = 10 and x_i ∈ [0, 1].

The results of ADEA are shown in Figs. 3–7, where the behavior of ADEA is compared to NSGAII on a number of test problems. To match the settings of the algorithms used for comparison, the population size was set to 100 and the algorithm was run for 200 generations. As can be seen, ADEA is able to generate a uniform distribution of solutions, and its results are better than those of NSGAII in both quality and quantity on ZDT1, 2, 3, 4 and 6.

4.3. Comparison and discussion

Results on the five test functions, in relation to the convergence metric Υ and the diversity metric Δ, are presented in Tables 1–5; the mean and variance of the values are averaged over 10 runs. The results for ADEA and five other algorithms, taken from the literature (see [9] for the results and parameter settings of both versions of NSGAII, SPEA and PAES, [7] for PDEA, and [2] for MODE), are provided.

Results for PDEA in [7] were evaluated with generational distance instead of the convergence metric. As can be seen, ADEA is able to converge better on all problems except ZDT1 and ZDT2, where NSGA-II (binary-coded) and SPEA found better convergence. The challenge for MOEAs in the first three test problems (ZDT1, ZDT2 and ZDT3) lies in the high dimensionality of these problems. Many MOEAs have achieved very good results on these problems in both goals of multiobjective optimization (convergence to the true Pareto front and uniform spread of solutions along the front). The results for the problems ZDT1 and ZDT2 (see Tables 1 and 2) show that ADEA achieves good results, which are comparable to the results of the algorithms
NSGA-II (real-coded), PAES and MODE. On ZDT3 (see Table 3), ADEA outperforms all other algorithms used in the comparison.

Table 1. Statistics of results on test problem ZDT1

Algorithm               Convergence metric     Diversity metric
NSGA-II (real-coded)    0.033482 ± 0.004750    0.390307 ± 0.001876
NSGA-II (binary-coded)  0.000894 ± 0.000000    0.463292 ± 0.041622
SPEA                    0.001799 ± 0.000001    0.784525 ± 0.004440
PAES                    0.082085 ± 0.008679    1.229794 ± 0.000742
PDEA                    N/A                    0.298567 ± 0.000742
MODE                    0.005800 ± 0.000000    N/A
SDE                     0.002741 ± 0.000385    0.382890 ± 0.001435

Table 2. Statistics of results on test problem ZDT2

Algorithm               Convergence metric     Diversity metric
NSGA-II (real-coded)    0.072391 ± 0.031689    0.430776 ± 0.004721
NSGA-II (binary-coded)  0.000824 ± 0.000000    0.435112 ± 0.024607
SPEA                    0.001339 ± 0.000000    0.755184 ± 0.004521
PAES                    0.126276 ± 0.036877    1.165942 ± 0.007682
PDEA                    N/A                    0.317958 ± 0.001389
MODE                    0.005500 ± 0.000000    N/A
SDE                     0.002203 ± 0.000297    0.345780 ± 0.003900

Table 3. Statistics of results on test problem ZDT3

Algorithm               Convergence metric     Diversity metric
NSGA-II (real-coded)    0.114500 ± 0.004940    0.738540 ± 0.019706
NSGA-II (binary-coded)  0.043411 ± 0.000042    0.575606 ± 0.005078
SPEA                    0.047517 ± 0.000047    0.672938 ± 0.003587
PAES                    0.023872 ± 0.000010    0.789920 ± 0.001653
PDEA                    N/A                    0.623812 ± 0.000225
MODE                    0.021560 ± 0.000000    N/A
SDE                     0.002741 ± 0.000120    0.525770 ± 0.043030

Table 4. Statistics of results on test problem ZDT4

Algorithm               Convergence metric     Diversity metric
NSGA-II (real-coded)    0.513053 ± 0.118460    0.702612 ± 0.064648
NSGA-II (binary-coded)  3.227636 ± 7.307630    0.479475 ± 0.009841
SPEA                    7.340299 ± 6.572516    0.798463 ± 0.014616
PAES                    0.854816 ± 0.527238    0.870458 ± 0.101399
PDEA                    N/A                    0.840852 ± 0.035741
MODE                    0.638950 ± 0.500200    N/A
SDE                     0.100100 ± 0.446200    0.436300 ± 0.110000

Table 5. Statistics of results on test problem ZDT6

Algorithm               Convergence metric     Diversity metric
NSGA-II (real-coded)    0.296564 ± 0.013135    0.668025 ± 0.009923
NSGA-II (binary-coded)  7.806798 ± 0.001667    0.644477 ± 0.035042
SPEA                    0.221138 ± 0.000449    0.849389 ± 0.002713
PAES                    0.085469 ± 0.006664    1.153052 ± 0.003916
PDEA                    N/A                    0.473074 ± 0.021721
MODE                    0.026230 ± 0.000861    N/A
SDE                     0.000624 ± 0.000060    0.361100 ± 0.036100

ZDT4 is a hard optimization problem with many (21⁹) local Pareto fronts that tend to mislead the optimization algorithm. In Table 4 we can see that all the other algorithms have difficulties in converging to the true Pareto front, but ADEA gets a very good result.

With the test problem ZDT6, there are two major difficulties. The first one is the thin density of solutions towards the Pareto front and the second one the non-uniform spread of solutions along the front. On this problem, ADEA outperforms all other algorithms. In the first three and the last problems the crossover rate is 0.7; for ZDT4 it is 0.3. For the diversity metric, ADEA outperforms all the other proposed algorithms.

5. Conclusion

ADEA is a new DE implementation dealing with multiple objectives. The biggest difference between ADEA and other MOEAs is that ADEA introduces a newly defined self-adaptive parameter and a new selection operator. We tested the approach on five benchmark problems and found that it is competitive with most other approaches. We also experimented with different crossover rates on these problems to find their best solutions; the solutions were found to be sensitive to the crossover rate on problem ZDT4. In the near future, we also plan to evaluate ADEA on additional test problems.

References

[1] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation 6 (2002) 182–197.
[2] F. Xue, A.C. Sanderson, R.J. Graves, Pareto-based multi-objective differential evolution, in: Proceedings of the 2003 Congress on Evolutionary Computation (CEC'2003), vol. 2, IEEE Press, Canberra, Australia, 2003, pp. 862–869.
[3] J. Knowles, D. Corne, The Pareto archived evolution strategy: a new baseline algorithm for multiobjective optimization, in: Proceedings of the 1999 Congress on Evolutionary Computation, IEEE Service
Center, Piscataway, NJ, 1999, pp. 98–105.
[4] K.V. Price, R. Storn, Differential evolution – a simple evolution strategy for fast optimization, Dr. Dobb's Journal 22 (1997) 18–24.
[5] H.A. Abbass, R. Sarker, C. Newton, PDE: A Pareto-frontier differential evolution approach for multi-objective optimization problems, in: Proceedings of the Congress on Evolutionary Computation 2001 (CEC'2001), vol. 2, IEEE Service Center, Piscataway, NJ, 2001, pp. 971–978.
[6] H.A. Abbass, The self-adaptive Pareto differential evolution algorithm, in: Proceedings of the Congress on Evolutionary Computation (CEC'2002), vol. 1, IEEE Service Center, Piscataway, NJ, 2002, pp. 831–836.
[7] N.K. Madavan, Multiobjective optimization using a Pareto differential evolution approach, in: Proceedings of the Congress on Evolutionary Computation (CEC'2002), vol. 2, IEEE Service Center, Piscataway, NJ, 2002, pp. 1145–1150.
[8] T. Robič, B. Filipič, DEMO: Differential evolution for multiobjective optimization, in: C.A. Coello Coello et al. (Eds.), EMO 2005, LNCS 3410, 2005, pp. 520–533.
[9] J. Lampinen, A bibliography of differential evolution algorithm. <http://www2.lut.fi/~jlampine/debiblio.htm>.
[10] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA, 1989.
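The three performance measures defined in Section 4.1 (convergence metric Υ, generational distance GD, and diversity metric Δ) are straightforward to implement. The sketch below is an illustration, not code from either paper: the function names are ours, fronts are assumed to be lists of objective-vector tuples, and the diversity metric assumes a bi-objective front that can be ordered along its first objective by plain sorting.

```python
import math

def _dist(a, b):
    """Euclidean distance between two objective vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def convergence_metric(front, true_front):
    """Upsilon: mean distance from each obtained point to its nearest true-front point."""
    return sum(min(_dist(q, p) for p in true_front) for q in front) / len(front)

def generational_distance(front, true_front):
    """GD = sqrt(sum of squared nearest distances) / n."""
    sq = sum(min(_dist(q, p) for p in true_front) ** 2 for q in front)
    return math.sqrt(sq) / len(front)

def diversity_metric(front, extreme_lo, extreme_hi):
    """Delta = (d_f + d_l + sum|d_i - dbar|) / (d_f + d_l + (n-1)*dbar).
    extreme_lo/extreme_hi are the extreme solutions of the true Pareto front."""
    front = sorted(front)                      # order along the first objective
    d = [_dist(front[i], front[i + 1]) for i in range(len(front) - 1)]
    dbar = sum(d) / len(d)
    d_f = _dist(extreme_lo, front[0])          # gaps to the true-front extremes
    d_l = _dist(extreme_hi, front[-1])
    num = d_f + d_l + sum(abs(di - dbar) for di in d)
    den = d_f + d_l + (len(front) - 1) * dbar
    return num / den
```

As a sanity check, feeding the metrics an obtained front identical to a uniformly spaced true front gives Υ = 0, GD = 0, and Δ ≈ 0, matching the interpretation in the text that zero indicates perfect convergence and perfectly uniform spread.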