Large Language Models in Biomedicine
Biomedicine has developed rapidly in recent years and spans many areas, including drug discovery, gene therapy, and cell therapy. In these areas, large language models (LLMs) are increasingly being applied to accelerate research and improve the quality of its results. An LLM is a neural network composed of many units that can make predictions and inferences from large amounts of data. In biomedicine, LLMs can be used to analyze large volumes of genomic data and to support research such as drug development and gene editing, improving predictive accuracy and research efficiency.
Specifically, LLMs can be applied in the following areas:
1. Predicting drug interactions: LLMs can analyze drug structures and genomic data to predict how a drug will interact with other drugs, informing new treatment regimens.
2. Personalized treatment: LLMs can help physicians tailor treatment plans to a patient's genetic profile, improving efficacy and reducing adverse reactions.
3. Disease diagnosis: LLMs can analyze clinical records, genetic data, and other information to diagnose diseases quickly and accurately.
4. Gene editing: LLMs can help researchers design and optimize gene-editing experiments, improving editing efficiency and accuracy and enabling safer, more effective gene therapies and drugs.
In summary, LLMs have become an important analysis and prediction tool in biomedicine; their application will accelerate the field's development and deliver better health care to the public.
Enhancing Causal Inference in Large Language Models
Large language models (LLMs) are a powerful natural language processing technology: they can understand and generate natural language text and have a wide range of applications. However, although LLMs generate fluent, natural text, they remain limited in causal inference. By strengthening the causal inference abilities of LLMs, we can better understand and explain the behavior of AI systems, improving their trustworthiness and reliability.
First, we can strengthen an LLM's causal inference by supplying it with additional context. Contextual information, such as time, place, background, and sentiment, gives the model a fuller picture of an event and helps it grasp the causal relationships between events. In this way, the LLM can better predict future outcomes and explain the basis for its predictions.
Second, we can strengthen causal inference by introducing interpretable modeling techniques, such as decision trees, rule induction, and Bayesian networks. These techniques help us understand the model's decision process and thus predict its outputs more accurately. They can also help us identify causal pathways and so understand causal relationships more deeply.
Finally, we can strengthen an LLM's causal inference by combining it with knowledge from other disciplines. For example, incorporating knowledge from economics, psychology, and sociology can help the model understand and explain causal relationships. In this way, the LLM can weigh a broader range of factors and predict and explain causal relationships more accurately.
In terms of applications, LLMs with stronger causal inference can provide more accurate and reliable decision support in many domains. In medicine, they can help physicians devise more effective treatment plans; in finance, they can help investors make sounder decisions; in policy-making, they can give policymakers more comprehensive and accurate advice.
In short, by strengthening the causal inference abilities of large language models, we can better understand and explain the behavior of AI systems and so improve their trustworthiness and reliability. This will help drive the broad adoption and development of AI and bring society greater convenience and value. At the same time, we must attend to the associated ethical and social issues to ensure that AI develops in line with human values and interests.
Multi-Objective Optimization and Evolutionary Algorithms
Multi-objective optimization (MOO) refers to optimization problems in which several objective functions must be optimized simultaneously. In practice, the objectives often constrain and conflict with one another, so a method is needed to balance them and obtain a set of optimal trade-off solutions; this is the subject of MOO.
Evolutionary algorithms (EAs) are a class of optimization algorithms inspired by biological evolution; their basic idea is to search for optimal solutions by simulating the evolutionary process. Genetic algorithms, a major branch of EAs, were introduced by the American researcher John Holland in 1975, and after continuous development and refinement EAs have become an important class of optimization methods.
In practice, MOO and EAs are often combined, yielding an optimization approach known as the multi-objective evolutionary algorithm (MOEA). An MOEA simulates biological evolution, generating new solutions through operations such as selection, crossover, and mutation, and evaluates each solution against the multiple objectives. Because it can balance several objectives at once and return a set of trade-off solutions, an MOEA provides an effective way to solve practical problems.
The development of MOEAs can be traced back to the 1980s, beginning with Schaffer's Vector Evaluated Genetic Algorithm (VEGA). A landmark algorithm is the Nondominated Sorting Genetic Algorithm (NSGA) of Srinivas and Deb, which builds on Goldberg's idea of Pareto-based fitness assignment; its successor NSGA-II maintains population diversity through nondominated sorting and crowding distance, thereby obtaining a well-spread set of trade-off solutions. Many further algorithms followed, such as SPEA and MOEA/D.
In short, MOO and EAs are two independent research areas, but their combination produced MOEAs as a new research direction. MOEAs have been widely applied in many fields, such as engineering design, decision analysis, and financial investment.
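The core machinery described above, Pareto dominance plus nondominated sorting of a population into fronts, can be sketched in Python. This is a minimal illustration, not any particular library's implementation; all names are illustrative.

```python
from typing import List

def dominates(a: List[float], b: List[float]) -> bool:
    """True if objective vector a dominates b (assuming minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort(points: List[List[float]]) -> List[List[int]]:
    """Partition solution indices into fronts: front 0 is the nondominated set,
    front 1 is nondominated once front 0 is removed, and so on."""
    n = len(points)
    dominated_by = [set() for _ in range(n)]  # j in dominated_by[i] <=> i dominates j
    dom_count = [0] * n                       # how many solutions dominate i
    for i in range(n):
        for j in range(n):
            if i != j and dominates(points[i], points[j]):
                dominated_by[i].add(j)
                dom_count[j] += 1
    fronts = []
    current = [i for i in range(n) if dom_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts
```

For example, on the objective vectors (1,4), (2,2), (4,1), (3,3), (5,5), the first three are mutually nondominated and form front 0, while (3,3) and (5,5) fall into later fronts.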
Comparison of Multiobjective Evolutionary Algorithms: Empirical Results

Eckart Zitzler, Department of Electrical Engineering, Swiss Federal Institute of Technology, 8092 Zurich, Switzerland, zitzler@tik.ee.ethz.ch
Kalyanmoy Deb, Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur, PIN 208016, India, deb@iitk.ac.in
Lothar Thiele, Department of Electrical Engineering, Swiss Federal Institute of Technology, 8092 Zurich, Switzerland, thiele@tik.ee.ethz.ch

Abstract
In this paper, we provide a systematic comparison of various evolutionary approaches to multiobjective optimization using six carefully chosen test functions. Each test function involves a particular feature that is known to cause difficulty in the evolutionary optimization process, mainly in converging to the Pareto-optimal front (e.g., multimodality and deception). By investigating these different problem features separately, it is possible to predict the kind of problems to which a certain technique is or is not well suited. However, in contrast to what was suspected beforehand, the experimental results indicate a hierarchy of the algorithms under consideration. Furthermore, the emerging effects are evidence that the suggested test functions provide sufficient complexity to compare multiobjective optimizers. Finally, elitism is shown to be an important factor for improving evolutionary multiobjective search.

Keywords
Evolutionary algorithms, multiobjective optimization, Pareto optimality, test functions, elitism.

(c) 2000 by the Massachusetts Institute of Technology. Evolutionary Computation 8(2): 173-195.

1 Motivation

Evolutionary algorithms (EAs) have become established as the method at hand for exploring the Pareto-optimal front in multiobjective optimization problems that are too complex to be solved by exact methods, such as linear programming and gradient search. This is not only because there are few alternatives for searching intractably large spaces for multiple Pareto-optimal solutions. Due to their inherent parallelism and their capability to exploit similarities of solutions by recombination, they are able to approximate the Pareto-optimal front in a single optimization run. The numerous applications and the rapidly growing interest in the area of multiobjective EAs take this fact into account.

After the first pioneering studies on evolutionary multiobjective optimization appeared in the mid-eighties (Schaffer, 1984, 1985; Fourman, 1985), several different EA implementations were proposed in the years 1991-1994 (Kursawe, 1991; Hajela and Lin, 1992; Fonseca and Fleming, 1993; Horn et al., 1994; Srinivas and Deb, 1994). Later, these approaches (and variations of them) were successfully applied to various multiobjective optimization problems (Ishibuchi and Murata, 1996; Cunha et al., 1997; Valenzuela-Rendón and Uresti-Charre, 1997; Fonseca and Fleming, 1998; Parks and Miller, 1998). In recent years, some researchers have investigated particular topics of evolutionary multiobjective search, such as convergence to the Pareto-optimal front (Van Veldhuizen and Lamont, 1998a; Rudolph, 1998), niching (Obayashi et al., 1998), and elitism (Parks and Miller, 1998; Obayashi et al., 1998), while others have concentrated on developing new evolutionary techniques (Laumanns et al., 1998; Zitzler and Thiele, 1999). For a thorough discussion of evolutionary algorithms for multiobjective optimization, the interested reader is referred to Fonseca and Fleming (1995), Horn (1997), Van Veldhuizen and Lamont (1998b), and Coello (1999).

In spite of this variety, there is a lack of studies that compare the performance and different aspects of these approaches. Consequently, the question arises: which implementations are suited to which sort of problem, and what are the specific advantages and drawbacks of different techniques? First steps in this direction have been made in both theory and practice. On the theoretical side, Fonseca and Fleming (1995) discussed the influence of different fitness assignment strategies on the selection process. On the practical side, Zitzler and Thiele (1998, 1999) used an NP-hard 0/1 knapsack problem to compare several multiobjective EAs.

In this paper, we provide a systematic comparison of six multiobjective EAs, including a random search strategy as well as a single-objective EA using objective aggregation. The basis of this empirical study is formed by a set of well-defined, domain-independent test functions that allow the investigation of independent problem features. We thereby draw upon results presented in Deb (1999), where problem features that may make convergence of EAs to the Pareto-optimal front difficult are identified and, furthermore, methods of constructing appropriate test functions are suggested. The functions considered here cover the range of convexity, nonconvexity, discrete Pareto fronts, multimodality, deception, and biased search spaces. Hence, we are able to systematically compare the approaches based on different kinds of difficulty and to determine more exactly where certain techniques are advantageous or have trouble. In this context, we also examine further factors such as population size and elitism.

The paper is structured as follows: Section 2 introduces key concepts of multiobjective optimization and defines the terminology used in this paper mathematically. We then give a brief overview of the multiobjective EAs under consideration with special emphasis on the differences between them. The test functions, their construction, and their choice are the subject of Section 4, which is followed by a discussion about performance metrics to assess the quality of trade-off fronts. Afterwards, we present the experimental results in Section 6 and investigate further aspects like elitism (Section 7) and population size (Section 8) separately. A discussion of the results as well as future perspectives are given in Section 9.

2 Definitions

Optimization problems involving multiple, conflicting objectives are often approached by aggregating the objectives into a scalar function and solving the resulting single-objective optimization problem. In contrast, in this study, we are concerned with finding a set of optimal trade-offs, the so-called Pareto-optimal set. In the following, we formalize this well-known concept and also define the difference between local and global Pareto-optimal sets.

A multiobjective search space is partially ordered in the sense that two arbitrary solutions are related to each other in two possible ways: either one dominates the other or neither dominates.

DEFINITION 1: Let us consider, without loss of generality, a multiobjective minimization problem with $n$ decision variables (parameters) and $k$ objectives:

Minimize $y = f(x) = (f_1(x), \ldots, f_k(x))$
where $x = (x_1, \ldots, x_n) \in X$, $y = (y_1, \ldots, y_k) \in Y$, (1)

and where $x$ is called the decision vector, $X$ the parameter space, $y$ the objective vector, and $Y$ the objective space. A decision vector $a \in X$ is said to dominate a decision vector $b \in X$ (also written as $a \succ b$) if and only if

$\forall i \in \{1, \ldots, k\}: f_i(a) \le f_i(b) \;\wedge\; \exists j \in \{1, \ldots, k\}: f_j(a) < f_j(b)$. (2)

Additionally, in this study, we say $a$ covers $b$ ($a \succeq b$) if and only if $a \succ b$ or $f(a) = f(b)$.

Based on the above relation, we can define nondominated and Pareto-optimal solutions:

DEFINITION 2: Let $a \in X$ be an arbitrary decision vector.
1. The decision vector $a$ is said to be nondominated regarding a set $X' \subseteq X$ if and only if there is no vector in $X'$ which dominates $a$; formally,

$\nexists a' \in X': a' \succ a$. (3)

If it is clear within the context which set $X'$ is meant, we simply leave it out.
2. The decision vector $a$ is Pareto-optimal if and only if $a$ is nondominated regarding $X$.

Pareto-optimal decision vectors cannot be improved in any objective without causing a degradation in at least one other objective; they represent, in our terminology, globally optimal solutions. However, analogous to single-objective optimization problems, there may also be local optima which constitute a nondominated set within a certain neighborhood. This corresponds to the concepts of global and local Pareto-optimal sets introduced by Deb (1999):

DEFINITION 3: Consider a set of decision vectors $A' \subseteq X$.
1. The set $A'$ is denoted as a local Pareto-optimal set if and only if

$\forall a' \in A': \nexists a \in X: a \succ a' \,\wedge\, \|a - a'\| < \varepsilon \,\wedge\, \|f(a) - f(a')\| < \delta$, (4)

where $\|\cdot\|$ is a corresponding distance metric and $\varepsilon > 0$, $\delta > 0$. (A slightly modified definition of local Pareto optimality is given here.)
2. The set $A'$ is called a global Pareto-optimal set if and only if

$\forall a' \in A': \nexists a \in X: a \succ a'$. (5)

Note that a global Pareto-optimal set does not necessarily contain all Pareto-optimal solutions. If we refer to the entirety of the Pareto-optimal solutions, we simply write "Pareto-optimal set"; the corresponding set of objective vectors is denoted as the "Pareto-optimal front".

3 Evolutionary Multiobjective Optimization

Two major problems must be addressed when an evolutionary algorithm is applied to multiobjective optimization:
1. How to accomplish fitness assignment and selection, respectively, in order to guide the search towards the Pareto-optimal set.
2. How to maintain a diverse population in order to prevent premature convergence and achieve a well distributed trade-off front.

Often, different approaches are classified with regard to the first issue, where one can distinguish between criterion selection, aggregation selection, and Pareto selection (Horn, 1997). Methods performing criterion selection switch between the objectives during the selection phase. Each time an individual is chosen for reproduction, potentially a different objective will decide which member of the population will be copied into the mating pool. Aggregation selection is based on the traditional approaches to multiobjective optimization where the multiple objectives are combined into a parameterized single objective function.
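The dominance and coverage relations of Definition 1, and the nondominated-set test of Definition 2, translate almost directly into code. The following is a hedged Python sketch operating on objective vectors (function names are illustrative):

```python
def dominates(fa, fb):
    # fa dominates fb: no worse in every objective, strictly better in at least one
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))

def covers(fa, fb):
    # fa covers fb: fa dominates fb, or the two objective vectors are equal
    return dominates(fa, fb) or list(fa) == list(fb)

def nondominated(vectors):
    # members of `vectors` not dominated by any other member (Definition 2)
    return [v for v in vectors
            if not any(dominates(w, v) for w in vectors if w is not v)]
```

Note that, as in the text, dominance is a strict partial order: two vectors such as (1,3) and (3,1) are mutually nondominated, so both survive the `nondominated` filter.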
The parameters of the resulting function are systematically varied during the same run in order to find a set of Pareto-optimal solutions. Finally, Pareto selection makes direct use of the dominance relation from Definition 1; Goldberg (1989) was the first to suggest a Pareto-based fitness assignment strategy.

In this study, six of the most salient multiobjective EAs are considered, where for each of the above categories, at least one representative was chosen. Nevertheless, there are many other methods that may be considered for the comparison (cf. Van Veldhuizen and Lamont (1998b) and Coello (1999) for an overview of different evolutionary techniques).

Among the class of criterion selection approaches, the Vector Evaluated Genetic Algorithm (VEGA) (Schaffer, 1984, 1985) has been chosen. Although some serious drawbacks are known (Schaffer, 1985; Fonseca and Fleming, 1995; Horn, 1997), this algorithm has been a strong point of reference up to now. Therefore, it has been included in this investigation.

The EA proposed by Hajela and Lin (1992) is based on aggregation selection in combination with fitness sharing (Goldberg and Richardson, 1987), where an individual is assessed by summing up the weighted objective values. As weighted-sum aggregation appears still to be widespread due to its simplicity, Hajela and Lin's technique has been selected to represent this class of multiobjective EAs.

Pareto-based techniques seem to be most popular in the field of evolutionary multiobjective optimization (Van Veldhuizen and Lamont, 1998b). In particular, the algorithm presented by Fonseca and Fleming (1993), the Niched Pareto Genetic Algorithm (NPGA) (Horn and Nafpliotis, 1993; Horn et al., 1994), and the Nondominated Sorting Genetic Algorithm (NSGA) (Srinivas and Deb, 1994) appear to have achieved the most attention in the EA literature and have been used in various studies. Thus, they are also considered here. Furthermore, a recent elitist Pareto-based strategy, the Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele, 1999), which outperformed four other multiobjective EAs on an extended 0/1 knapsack problem, is included in the comparison.

4 Test Functions for Multiobjective Optimizers

Deb (1999) has identified several features that may cause difficulties for multiobjective EAs in 1) converging to the Pareto-optimal front and 2) maintaining diversity within the population. Concerning the first issue, multimodality, deception, and isolated optima are well-known problem areas in single-objective evolutionary optimization. The second issue is important in order to achieve a well distributed nondominated front. However, certain characteristics of the Pareto-optimal front may prevent an EA from finding diverse Pareto-optimal solutions: convexity or nonconvexity, discreteness, and nonuniformity. For each of the six problem features mentioned, a corresponding test function is constructed following the guidelines in Deb (1999). We thereby restrict ourselves to only two objectives in order to investigate the simplest case first. In our opinion, two objectives are sufficient to reflect essential aspects of multiobjective optimization. Moreover, we do not consider maximization or mixed minimization/maximization problems.

Each of the test functions defined below is structured in the same manner and consists itself of three functions $f_1$, $g$, and $h$ (Deb, 1999, 216):

Minimize $\mathcal{T}(x) = (f_1(x_1), f_2(x))$
subject to $f_2(x) = g(x_2, \ldots, x_m) \cdot h(f_1(x_1), g(x_2, \ldots, x_m))$
where $x = (x_1, \ldots, x_m)$. (6)

The function $f_1$ is a function of the first decision variable only, $g$ is a function of the remaining $m - 1$ variables, and the parameters of $h$ are the function values of $f_1$ and $g$. The test functions differ in these three functions as well as in the number of variables $m$ and in the values the variables may take.

DEFINITION 4: We introduce six test functions $t_1, \ldots, t_6$ that follow the scheme given in Equation 6:

The test function $t_1$ has a convex Pareto-optimal front:

$f_1(x_1) = x_1$,
$g(x_2, \ldots, x_m) = 1 + 9 \cdot \sum_{i=2}^{m} x_i / (m - 1)$,
$h(f_1, g) = 1 - \sqrt{f_1 / g}$, (7)

where $m = 30$ and $x_i \in [0, 1]$. The Pareto-optimal front is formed with $g(x) = 1$.

The test function $t_2$ is the nonconvex counterpart to $t_1$:

$f_1(x_1) = x_1$,
$g(x_2, \ldots, x_m) = 1 + 9 \cdot \sum_{i=2}^{m} x_i / (m - 1)$,
$h(f_1, g) = 1 - (f_1 / g)^2$, (8)

where $m = 30$ and $x_i \in [0, 1]$. The Pareto-optimal front is formed with $g(x) = 1$.

The test function $t_3$ represents the discreteness feature; its Pareto-optimal front consists of several noncontiguous convex parts:

$f_1(x_1) = x_1$,
$g(x_2, \ldots, x_m) = 1 + 9 \cdot \sum_{i=2}^{m} x_i / (m - 1)$,
$h(f_1, g) = 1 - \sqrt{f_1 / g} - (f_1 / g) \sin(10 \pi f_1)$, (9)

where $m = 30$ and $x_i \in [0, 1]$. The Pareto-optimal front is formed with $g(x) = 1$. The introduction of the sine function in $h$ causes discontinuity in the Pareto-optimal front. However, there is no discontinuity in the parameter space.

The test function $t_4$ contains many local Pareto-optimal fronts and, therefore, tests for the EA's ability to deal with multimodality:

$f_1(x_1) = x_1$,
$g(x_2, \ldots, x_m) = 1 + 10(m - 1) + \sum_{i=2}^{m} \left( x_i^2 - 10 \cos(4 \pi x_i) \right)$,
$h(f_1, g) = 1 - \sqrt{f_1 / g}$, (10)

where $m = 10$, $x_1 \in [0, 1]$, and $x_2, \ldots, x_m \in [-5, 5]$. The global Pareto-optimal front is formed with $g(x) = 1$, the best local Pareto-optimal front with $g(x) = 1.25$. Note that not all local Pareto-optimal sets are distinguishable in the objective space.

The test function $t_5$ describes a deceptive problem and distinguishes itself from the other test functions in that each $x_i$ represents a binary string:

$f_1(x_1) = 1 + u(x_1)$,
$g(x_2, \ldots, x_m) = \sum_{i=2}^{m} v(u(x_i))$,
$h(f_1, g) = 1 / f_1$, (11)

where $u(x_i)$ gives the number of ones in the bit vector $x_i$ (unitation),

$v(u(x_i)) = 2 + u(x_i)$ if $u(x_i) < 5$ and $v(u(x_i)) = 1$ if $u(x_i) = 5$,

and $m = 11$, $x_1 \in \{0, 1\}^{30}$, and $x_2, \ldots, x_m \in \{0, 1\}^5$. The true Pareto-optimal front is formed with $g(x) = 10$, while the best deceptive Pareto-optimal front is represented by the solutions for which $g(x) = 20$. The global Pareto-optimal front as well as the local ones are convex.

The test function $t_6$ includes two difficulties caused by the nonuniformity of the search space: first, the Pareto-optimal solutions are nonuniformly distributed along the global Pareto front (the front is biased for solutions for which $f_1(x_1)$ is near one); second, the density of the solutions is lowest near the Pareto-optimal front and highest away from the front:

$f_1(x_1) = 1 - \exp(-4 x_1) \sin^6(6 \pi x_1)$,
$g(x_2, \ldots, x_m) = 1 + 9 \cdot \left( \sum_{i=2}^{m} x_i / (m - 1) \right)^{0.25}$,
$h(f_1, g) = 1 - (f_1 / g)^2$, (12)

where $m = 10$ and $x_i \in [0, 1]$. The Pareto-optimal front is formed with $g(x) = 1$ and is nonconvex.

We will discuss each function in more detail in Section 6, where the corresponding Pareto-optimal fronts are visualized as well (Figures 1-6).

5 Metrics of Performance

Comparing different optimization techniques experimentally always involves the notion of performance. In the case of multiobjective optimization, the definition of quality is substantially more complex than for single-objective optimization problems, because the optimization goal itself consists of multiple objectives:

1. The distance of the resulting nondominated set to the Pareto-optimal front should be minimized.
2. A good (in most cases uniform) distribution of the solutions found is desirable. The assessment of this criterion might be based on a certain distance metric.
3. The extent of the obtained nondominated front should be maximized, i.e., for each objective, a wide range of values should be covered by the nondominated solutions.

In the literature, some attempts can be found to formalize the above definition (or parts of it) by means of quantitative metrics. Performance assessment by means of weighted-sum aggregation was introduced by Esbensen and Kuh (1996). Thereby, a set of decision vectors is evaluated regarding a given linear combination by determining the minimum weighted-sum of all corresponding objective vectors of the set. Based on this concept, a sample of linear combinations is chosen at random (with respect to a certain probability distribution), and the minimum weighted-sums for all linear combinations are summed up and averaged. The resulting value is taken as a measure of quality. A drawback of this metric is that only the "worst" solution determines the quality value per linear combination.
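The $f_1$/$g$/$h$ construction scheme used by the test functions above can be made concrete with a short Python sketch of the convex case. The formulas used here ($f_1 = x_1$, $g = 1 + 9 \sum_{i \ge 2} x_i / (m-1)$, $h = 1 - \sqrt{f_1/g}$, $m = 30$) are the standard form of this function as the author understands it; treat them as an assumption rather than a verbatim transcription.

```python
import math

M = 30  # number of decision variables; each x_i is assumed to lie in [0, 1]

def t1(x):
    """Two-objective convex test function built from f1, g, and h."""
    assert len(x) == M and all(0.0 <= xi <= 1.0 for xi in x)
    f1 = x[0]                             # f1 depends on the first variable only
    g = 1.0 + 9.0 * sum(x[1:]) / (M - 1)  # g depends on the remaining variables
    h = 1.0 - math.sqrt(f1 / g)           # h combines f1 and g
    return f1, g * h                      # (f1, f2) with f2 = g * h
```

On the Pareto-optimal front the trailing variables are zero, so $g = 1$ and $f_2 = 1 - \sqrt{f_1}$; for instance, $x_1 = 0.25$ with all other variables at zero yields the front point $(0.25, 0.5)$.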
Although several weight combinations are used, nonconvex regions of the trade-off surface contribute to the quality more than convex parts and may, as a consequence, dominate the performance assessment. Finally, the distribution, as well as the extent of the nondominated front, is not considered.

Another interesting means of performance assessment was proposed by Fonseca and Fleming (1996). Given a set of nondominated solutions, a boundary function divides the objective space into two regions: the objective vectors for which the corresponding solutions are not covered by the set and the objective vectors for which the associated solutions are covered by the set. They call this particular function, which can also be seen as the locus of the family of tightest goal vectors known to be attainable, the attainment surface. Taking multiple optimization runs into account, a method is described to compute a median attainment surface by using auxiliary straight lines and sampling their intersections with the attainment surfaces obtained. As a result, the samples represented by the median attainment surface can be relatively assessed by means of statistical tests and, therefore, allow comparison of the performance of two or more multiobjective optimizers. A drawback of this approach is that it remains unclear how the quality difference can be expressed, i.e., how much better one algorithm is than another. However, Fonseca and Fleming describe ways of meaningful statistical interpretation in contrast to the other studies considered here, and furthermore, their methodology seems to be well suited to visualization of the outcomes of several runs.

In the context of investigations on convergence to the Pareto-optimal front, some authors (Rudolph, 1998; Van Veldhuizen and Lamont, 1998a) have considered the distance of a given set to the Pareto-optimal set in the same way as the function $M_1$ defined below. The distribution was not taken into account, because the focus was not on this matter. However, in comparative studies, distance alone is not sufficient for performance evaluation, since extremely differently distributed fronts may have the same distance to the Pareto-optimal front.

Two complementary metrics of performance were presented in Zitzler and Thiele (1998, 1999). On one hand, the size of the dominated area in the objective space is taken under consideration; on the other hand, a pair of nondominated sets is compared by calculating the fraction of each set that is covered by the other set. The area combines all three criteria (distance, distribution, and extent) into one, and therefore, sets differing in more than one criterion may not be distinguished. (Recently, an alternative metric has been proposed in Zitzler (1999) in order to overcome this problem.) The second metric is in some way similar to the comparison methodology proposed in Fonseca and Fleming (1996). It can be used to show that the outcomes of an algorithm dominate the outcomes of another algorithm, although it does not tell how much better it is. We give its definition here, because it is used in the remainder of this paper.

DEFINITION 5: Let $A', A'' \subseteq X$ be two sets of decision vectors. The function $C$ maps the ordered pair $(A', A'')$ to the interval $[0, 1]$:

$C(A', A'') = |\{ a'' \in A''; \; \exists\, a' \in A': a' \succeq a'' \}| \,/\, |A''|$. (13)

The value $C(A', A'') = 1$ means that all solutions in $A''$ are dominated by or equal to solutions in $A'$. The opposite, $C(A', A'') = 0$, represents the situation when none of the solutions in $A''$ are covered by the set $A'$. Note that both $C(A', A'')$ and $C(A'', A')$ have to be considered, since $C(A', A'')$ is not necessarily equal to $1 - C(A'', A')$.

In summary, it may be said that performance metrics are hard to define and it probably will not be possible to define a single metric that allows for all criteria in a meaningful way. Along with that problem, the statistical interpretation associated with a performance comparison is rather difficult and still needs to be answered, since multiple significance tests are involved, and thus, tools from analysis of variance may be required.

In this study, we have chosen a visual presentation of the results together with the application of the metric from Definition 5. The reason for this is that we would like to investigate 1) whether the test functions can adequately test specific aspects of each multiobjective algorithm and 2) whether any visual hierarchy of the chosen algorithms exists. However, for a deeper investigation of some of the algorithms (which is the subject of future work), we suggest the following metrics that allow assessment of each of the criteria listed at the beginning of this section separately.

DEFINITION 6: Given a set of pairwise nondominating decision vectors $A' \subseteq X$, a neighborhood parameter $\sigma > 0$ (to be chosen appropriately), and a distance metric $\|\cdot\|$. We introduce three functions to assess the quality of $A'$ regarding the parameter space:

1. The function $M_1$ gives the average distance to the Pareto-optimal set $\bar{X} \subseteq X$:

$M_1(A') = \frac{1}{|A'|} \sum_{a' \in A'} \min \{ \|a' - \bar{x}\|; \; \bar{x} \in \bar{X} \}$. (14)

2. The function $M_2$ takes the distribution in combination with the number of nondominated solutions found into account:

$M_2(A') = \frac{1}{|A'| - 1} \sum_{a' \in A'} |\{ b' \in A'; \; \|a' - b'\| > \sigma \}|$. (15)

3. The function $M_3$ considers the extent of the front described by $A'$:

$M_3(A') = \sqrt{ \sum_{i=1}^{n} \max \{ \|a'_i - b'_i\|; \; a', b' \in A' \} }$. (16)

Analogously, we define three metrics $M_1^*$, $M_2^*$, and $M_3^*$ on the objective space. Let $Y'$ and $\bar{Y}$ be the sets of objective vectors that correspond to $A'$ and $\bar{X}$, respectively, and $\sigma^* > 0$ and $\|\cdot\|^*$ as before:

$M_1^*(Y') = \frac{1}{|Y'|} \sum_{y' \in Y'} \min \{ \|y' - \bar{y}\|^*; \; \bar{y} \in \bar{Y} \}$, (17)

$M_2^*(Y') = \frac{1}{|Y'| - 1} \sum_{y' \in Y'} |\{ z' \in Y'; \; \|y' - z'\|^* > \sigma^* \}|$, (18)

$M_3^*(Y') = \sqrt{ \sum_{i=1}^{k} \max \{ \|y'_i - z'_i\|^*; \; y', z' \in Y' \} }$. (19)

While $M_1$ and $M_1^*$ are intuitive, $M_2$ and $M_3$ (respectively $M_2^*$ and $M_3^*$) need further explanation. The distribution metrics give a value within the interval $[0, |A'|]$ ($[0, |Y'|]$) that reflects the number of $\sigma$-niches ($\sigma^*$-niches) in $A'$ ($Y'$). Obviously, the higher the value, the better the distribution for an appropriate neighborhood parameter (e.g., $M_2^*(Y') = |Y'|$ means that for each objective vector there is no other objective vector within $\sigma^*$-distance to it). The functions $M_3$ and $M_3^*$ use the maximum extent in each dimension to estimate the range to which the front spreads out. In the case of two objectives, this equals the distance of the two outer solutions.

6 Comparison of Different Evolutionary Approaches

6.1 Methodology

We compare eight algorithms on the six proposed test functions:

1. A random search algorithm (RAND).
2. Fonseca and Fleming's multiobjective EA (FFGA).
3. The Niched Pareto Genetic Algorithm (NPGA).
4. Hajela and Lin's weighted-sum based approach (HLGA).
5. The Vector Evaluated Genetic Algorithm (VEGA).
6. The Nondominated Sorting Genetic Algorithm (NSGA).
7. A single-objective evolutionary algorithm using weighted-sum aggregation (SOEA).
8. The Strength Pareto Evolutionary Algorithm (SPEA).

The multiobjective EAs, as well as RAND, were executed 30 times on each test problem, where the population was monitored for nondominated solutions, and the resulting nondominated set was taken as the outcome of one optimization run. Here, RAND serves as an additional point of reference and randomly generates a certain number of individuals per generation according to the rate of crossover and mutation (but neither crossover and mutation nor selection are performed). Hence, the number of fitness evaluations was the same as for the EAs. In contrast, 100 simulation runs were considered in the case of SOEA, each run optimizing towards another randomly chosen linear combination of the objectives. The nondominated solutions among all solutions generated in the 100 runs form the trade-off front achieved by SOEA on a particular test function.

Independent of the algorithm and the test function, each simulation run was carried out using the following parameters:

Number of generations: 250
Population size: 100
Crossover rate: 0.8
Mutation rate: 0.01
Niching parameter sigma_share: 0.48862
Domination pressure t_dom: 10

The niching parameter was calculated using the guidelines given in Deb and Goldberg (1989) assuming the formation of ten independent niches. Since HLGA uses genotypic fitness sharing, a different value of sigma_share was chosen for this particular case.
Concerning,the recommended value for dom of the population size wastaken(Horn and Nafpliotis,1993).Furthermore,for reasons of fairness,ran with a population size of where the external nondominated set was restricted to.Regarding the implementations of the algorithms,one chromosome was used to en-code the parameters of the corresponding test problem.Each parameter is represented by bits;the parameters only comprise bits for the deceptive function. Moreover,all approaches except were realized using binary tournament selection with replacement in order to avoid effects caused by different selection schemes.Further-more,sincefitness sharing may produce chaotic behavior in combination with tournament selection,a slightly modified method is incorporated here,named continuously updated shar-ing(Oei et al.,1991).As requires a generational selection mechanism,stochastic universal sampling was used in the implementation.6.2Simulation ResultsIn Figures1–6,the nondominated fronts achieved by the different algorithms are visualized. 
Per algorithm and test function,the outcomes of thefirstfive runs were unified,and then the dominated solutions were removed from the union set;the remaining points are plotted in thefigures.Also shown are the Pareto-optimal fronts(lower curves),as well as additional reference curves(upper curves).The latter curves allow a more precise evaluation of the obtained trade-off fronts and were calculated by adding max minto the values of the Pareto-optimal points.The space between Pareto-optimal and 182Evolutionary Computation Volume8,Number2f101234f2RANDFFGA NPGA HLGA VEGA NSGA SOEA SPEAFigure 1:T est function(convex).f101234f2RANDFFGA NPGA HLGA VEGA NSGA SOEA SPEAFigure 2:T est function(nonconvex).183f11234f2RANDFFGA NPGA HLGA VEGA NSGA SOEA SPEA Figure 3:T est function(discrete).f1010203040f2RANDFFGA NPGA HLGA VEGA NSGA SOEA SPEAFigure 4:T est function(multimodal).184f10246f2RANDFFGA NPGAHLGA VEGA NSGASOEA SPEA Figure 5:T est function (deceptive).f12468f2RANDFFGA NPGA HLGA VEGA NSGASOEA SPEA Figure 6:T est function(nonuniform).185reference fronts represents about of the corresponding objective space.However,the curve resulting from the deceptive function is not appropriate for our purposes,since it lies above the fronts produced by the random search algorithm.Instead,we consider all solutions with,i.e.,for which the parameters are set to the deceptive attractors (for).In addition to the graphical presentation,the different algorithms were assessed in pairs using the metric from Definition5.For an ordered algorithm pair,there is a sample of values according to the runs performed.Each value is computed on the basis of the nondominated sets achieved by and with the same initial population. Here,box plots are used to visualize the distribution of these samples(Figure7).A box plot consists of a box summarizing of the data.The upper and lower ends of the box are the upper and lower quartiles,while a thick line within the box encodes the median. 
Dashed appendages summarize the spread and shape of the distribution. Furthermore, the shortcut REFS in Figure 7 stands for "reference set" and represents, for each test function, a set of equidistant points that are uniformly distributed on the corresponding reference curve. Generally, the simulation results prove that all multiobjective EAs do better than the random search algorithm. However, the box plots reveal that HLGA, NPGA, and FFGA do not always cover the randomly created trade-off front completely. Furthermore, it can be observed that NSGA clearly outperforms the other nonelitist multiobjective EAs regarding both distance to the Pareto-optimal front and distribution of the nondominated solutions. This confirms the results presented in Zitzler and Thiele (1998). Furthermore, it is remarkable that VEGA performs well compared to NPGA and FFGA, although some serious drawbacks of this approach are known (Fonseca and Fleming, 1995). The reason for this might be that we consider the off-line performance here, in contrast to other studies that examine the on-line performance (Horn and Nafpliotis, 1993; Srinivas and Deb, 1994). On-line performance means that only the nondominated solutions in the final population are considered as the outcome, while off-line performance takes into account the solutions nondominated among all solutions generated during the entire optimization run. Finally, the best performance is provided by SPEA, which makes explicit use of the concept of elitism.
Apart from the deceptive function, it even outperforms SOEA, in spite of substantially lower computational effort and although SOEA uses an elitist strategy as well. This observation leads to the question of whether elitism would increase the performance of the other multiobjective EAs. We will investigate this matter in the next section. Considering the different problem features separately, convexity seems to cause the least amount of difficulty for the multiobjective EAs. All algorithms evolved reasonably distributed fronts, although there was a difference in the distance to the Pareto-optimal set. On the nonconvex test function, however, HLGA, VEGA, and SOEA have difficulties finding intermediate solutions, as linear combinations of the objectives tend to prefer solutions strong in at least one objective (Fonseca and Fleming, 1995). Pareto-based algorithms have advantages here, but only NSGA and SPEA evolved a sufficient number of nondominated solutions. In the case of the discrete function, SPEA and NSGA are superior to the remaining algorithms: the fronts achieved by the former cover a clearly larger fraction of the reference set on average than those achieved by the latter. Among the considered test functions, the multimodal and the deceptive functions seem to be the hardest problems, since none of the algorithms was able to evolve a global Pareto-optimal set. The results on the multimodal problem indicate

(Note that outside values are not plotted in Figure 7 in order to prevent overloading of the presentation.)
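The pairwise assessment above relies on "the metric from Definition 5", whose formal statement is lost in this excerpt; in Zitzler and Thiele's work it is the coverage measure C(A, B), the fraction of front B weakly dominated by some member of front A. A minimal sketch, assuming two-objective minimization and fronts given as lists of objective tuples (names here are illustrative, not from the source):

```python
def dominates(a, b):
    """True if objective vector a dominates b (minimization):
    no worse in every objective, strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def coverage(A, B):
    """Coverage C(A, B): fraction of points in front B that are weakly
    dominated by (equal to, or dominated by) at least one point in A."""
    covered = sum(1 for b in B if any(a == b or dominates(a, b) for a in A))
    return covered / len(B)
```

Because C(A, B) and C(B, A) are not symmetric, both orderings must be inspected, which is why the study compares the algorithms in ordered pairs and shows one box plot per ordered pair.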
DEMO: Differential Evolution for Multiobjective Optimization

Tea Robič and Bogdan Filipič
Department of Intelligent Systems, Jožef Stefan Institute, Jamova 39, SI-1000 Ljubljana, Slovenia
tea.robic@ijs.si, bogdan.filipic@ijs.si

Abstract. Differential Evolution (DE) is a simple but powerful evolutionary optimization algorithm with many successful applications. In this paper we propose Differential Evolution for Multiobjective Optimization (DEMO) – a new approach to multiobjective optimization based on DE. DEMO combines the advantages of DE with the mechanisms of Pareto-based ranking and crowding distance sorting, used by state-of-the-art evolutionary algorithms for multiobjective optimization. DEMO is implemented in three variants that achieve competitive results on five ZDT test problems.

1 Introduction

Many real-world optimization problems involve optimization of several (conflicting) criteria. Since multiobjective optimization searches for an optimal vector, not just a single value, one solution often cannot be said to be better than another, and there exists not only a single optimal solution, but a set of optimal solutions, called the Pareto front. Consequently, there are two goals in multiobjective optimization: (i) to discover solutions as close to the Pareto front as possible, and (ii) to find solutions as diverse as possible in the obtained nondominated front. Satisfying these two goals is a challenging task for any algorithm for multiobjective optimization.

In recent years, many algorithms for multiobjective optimization have been introduced. Most originate in the field of Evolutionary Algorithms (EAs) – the so-called Multiobjective Optimization EAs (MOEAs). Among these, NSGA-II by Deb et al. [1] and SPEA2 by Zitzler et al. [2] are the most popular. MOEAs take the strong points of EAs and apply them to Multiobjective Optimization Problems (MOPs). A particular EA that has been used for multiobjective optimization is Differential Evolution (DE). DE is a simple yet powerful evolutionary algorithm by Price and
Storn [3] that has been successfully used in solving single-objective optimization problems [4]. Hence, several researchers have tried to extend it to handle MOPs.

Abbass [5, 6] was the first to apply DE to MOPs in the so-called Pareto Differential Evolution (PDE) algorithm. This approach employs DE to create new individuals and then keeps only the nondominated ones as the basis for the next generation. PDE was compared to SPEA [7] (the predecessor of SPEA2) on two test problems and found to outperform it.

C. A. Coello Coello et al. (Eds.): EMO 2005, LNCS 3410, pp. 520-533, 2005. © Springer-Verlag Berlin Heidelberg 2005

Madavan [8] achieved good results with the Pareto Differential Evolution Approach (PDEA¹). Like PDE, PDEA applies DE to create new individuals. It then combines both populations and calculates the nondominated rank (with Pareto-based ranking assignment) and diversity rank (with the crowding distance metric) for each individual. Two variants of PDEA were investigated. The first compares each child with its parent. The child replaced the parent if it had a higher nondominated rank, or if it had the same nondominated rank and a higher diversity rank. Otherwise the child is discarded. This variant was found inefficient – the diversity was good but the convergence slow. The other variant simply takes the best individuals according to the nondominated rank and diversity rank (as in NSGA-II). The latter variant has proved to be very efficient and was applied to several MOPs, where it produced favorable results.

Xue [9] introduced Multiobjective Differential Evolution (MODE). This algorithm also uses the Pareto-based ranking assignment and the crowding distance metric, but in a different manner than PDEA. In MODE, the fitness of an individual is first calculated using Pareto-based ranking and then reduced with respect to the individual's crowding distance value. This single fitness value is then used to select the best individuals for the new
population. MODE was tested on five benchmark problems where it produced better results than SPEA.

In this paper, we propose a new way of extending DE to be suitable for solving MOPs. We call it DEMO (Differential Evolution for Multiobjective Optimization). Although similar to the existing algorithms (especially PDEA), our implementation differs from others and represents a novel approach to multiobjective optimization. DEMO is implemented in three variants (DEMO/parent, DEMO/closest/dec and DEMO/closest/obj). Because of diverse recommendations for the crossover probability, three different values for this parameter are investigated. From the simulation results on five test problems we find that DEMO efficiently achieves the two goals of multiobjective optimization, i.e. the convergence to the true Pareto front and uniform spread of individuals along the front. Moreover, DEMO achieves very good results on the test problem ZDT4, which poses many difficulties to state-of-the-art algorithms for multiobjective optimization.

The rest of the paper is organized as follows. In Section 2 we describe the DE scheme that was used as a base for DEMO. Thereafter, in Section 3, we present DEMO in its three variants. Section 4 outlines the applied test problems and performance measures, and states the results. Further comparison and discussion of the results are provided in Section 5. The paper concludes with Section 6.

¹ This acronym was not used by Madavan. We introduce it to make a clear distinction between his approach and other implementations of DE for multiobjective optimization.

Differential Evolution
1. Evaluate the initial population P of random individuals.
2. While stopping criterion not met, do:
   2.1. For each individual P_i (i = 1, ..., popSize) from P repeat:
        (a) Create candidate C from parent P_i.
        (b) Evaluate the candidate.
        (c) If the candidate is better than the parent, the candidate replaces the parent. Otherwise, the candidate is discarded.
   2.2. Randomly enumerate the individuals in P.
Fig. 1. Outline of DE's main procedure

Candidate creation
Input: Parent P_i
1. Randomly select three individuals P_i1, P_i2, P_i3 from P, where i, i1, i2 and i3 are pairwise different.
2. Calculate candidate C as C = P_i1 + F·(P_i2 - P_i3), where F is a scaling factor.
3. Modify the candidate by binary crossover with the parent using crossover probability crossProb.
Output: Candidate C

Fig. 2. Outline of the candidate creation in scheme DE/rand/1/bin

2 Differential Evolution

DE is a simple evolutionary algorithm that creates new candidate solutions by combining the parent individual and several other individuals of the same population. A candidate replaces the parent only if it has better fitness. This is a rather greedy selection scheme that often outperforms traditional EAs.

The DE algorithm in pseudo-code is shown in Fig. 1. Many variants of creation of a candidate are possible. We use the DE scheme DE/rand/1/bin described in Fig. 2 (more details on this and other DE schemes can be found in [10]). Sometimes, the newly created candidate falls out of the bounds of the variable space. In such cases, many approaches to constraint handling are possible. We address this problem by simply replacing the candidate value violating the boundary constraints with the closest boundary value. In this way, the candidate becomes feasible by making as few alterations to it as possible. Moreover, this approach does not require the construction of a new candidate.

3 Differential Evolution for Multiobjective Optimization

When applying DE to MOPs, we face many difficulties. Besides preserving a uniformly spread front of nondominated solutions, which is a challenging task for any MOEA, we have to deal with another question, that is, when to replace the parent with the candidate solution. In single-objective optimization, the decision

Differential Evolution for Multiobjective Optimization
1. Evaluate the initial population P of random individuals.
2. While stopping
criterion not met, do:
   2.1. For each individual P_i (i = 1, ..., popSize) from P repeat:
        (a) Create candidate C from parent P_i.
        (b) Evaluate the candidate.
        (c) If the candidate dominates the parent, the candidate replaces the parent. If the parent dominates the candidate, the candidate is discarded. Otherwise, the candidate is added in the population.
   2.2. If the population has more than popSize individuals, truncate it.
   2.3. Randomly enumerate the individuals in P.

Fig. 3. Outline of DEMO/parent

is easy: the candidate replaces the parent only when the candidate is better than the parent. In MOPs, on the other hand, the decision is not so straightforward. We could use the concept of dominance (the candidate replaces the parent only if it dominates it), but this would make the greedy selection scheme of DE even greedier. Therefore, DEMO applies the following principle (see Fig. 3). The candidate replaces the parent if it dominates it. If the parent dominates the candidate, the candidate is discarded. Otherwise (when the candidate and parent are nondominated with regard to each other), the candidate is added to the population. This step is repeated until popSize candidates are created. After that, we get a population of a size between popSize and 2·popSize. If the population has enlarged, we have to truncate it to prepare it for the next step of the algorithm.

The truncation consists of sorting the individuals with nondominated sorting and then evaluating the individuals of the same front with the crowding distance metric. The truncation procedure keeps in the population only the best popSize individuals (with regard to these two metrics). The described truncation is derived from NSGA-II and is also used in PDEA's second variant.

DEMO incorporates two crucial mechanisms. The immediate replacement of the parent individual with the candidate that dominates it is the core of DEMO.
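The replacement rule of Fig. 3 can be sketched as follows. This is a toy sketch, assuming minimization and a population of (decision vector, objective vector) pairs; `create_candidate` stands for a DE scheme such as DE/rand/1/bin, and `truncate` stands for the NSGA-II-style truncation (nondominated sorting plus crowding distance), both passed in rather than implemented here:

```python
import random

def dominates(a, b):
    """True if objective vector a dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def demo_parent_step(pop, evaluate, create_candidate, pop_size, truncate):
    """One generation of DEMO/parent (cf. Fig. 3).

    pop: list of (decision_vector, objective_vector) pairs, mutated in place
    and returned. The three-way dominance decision is the core of DEMO.
    """
    for i in range(pop_size):
        parent_x, parent_f = pop[i]
        cand_x = create_candidate(pop, i)
        cand_f = evaluate(cand_x)
        if dominates(cand_f, parent_f):
            pop[i] = (cand_x, cand_f)       # candidate replaces parent
        elif dominates(parent_f, cand_f):
            pass                            # candidate is discarded
        else:
            pop.append((cand_x, cand_f))    # mutually nondominated: keep both
    if len(pop) > pop_size:
        pop = truncate(pop, pop_size)       # back to popSize individuals
    random.shuffle(pop)                     # randomly enumerate the individuals
    return pop
```

Note that candidates appended during step 2.1 are already visible to later candidate creations within the same generation, which is the elitism-within-reproduction effect discussed below.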
The newly created candidates that enter the population (either by replacement or by addition) instantly take part in the creation of the following candidates. This emphasizes elitism within reproduction, which helps achieving the first goal of multiobjective optimization – convergence to the true Pareto front. The second mechanism is the use of nondominated sorting and the crowding distance metric in the truncation of the extended population. Besides preserving elitism, this mechanism stimulates the uniform spread of solutions. This is needed to achieve the second goal – finding as diverse nondominated solutions as possible. DEMO's selection scheme thus efficiently pursues both goals of multiobjective optimization.

The described DEMO procedure (outlined in Fig. 3) is the most elementary of the three variants presented in this paper. It is called DEMO/parent. The other two variants were inspired by the concept of Crowding DE, recently introduced by Thomsen in [11]. Thomsen applied crowding-based DE in the optimization of multimodal functions. When optimizing functions with many optima, we would sometimes like not only to find one optimal point, but also to discover and maintain multiple optima in a single algorithm run. For this purpose, Crowding DE can be used.
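Crowding DE, and with it the DEMO/closest variants described next, hinges on a nearest-neighbour lookup: the candidate is compared not to its parent but to the most similar individual, where similarity is Euclidean distance in decision (or objective) space. A minimal sketch (the function name is illustrative):

```python
def nearest_index(pop, x):
    """Index of the individual in pop most similar to x, measured by
    Euclidean distance (squared distance suffices for the argmin)."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(range(len(pop)), key=lambda i: dist2(pop[i], x))
```

In DEMO/closest/dec the lookup runs over decision vectors, in DEMO/closest/obj over objective vectors; as the paper notes below, this linear scan per candidate is the extra cost of these variants.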
Crowding DE is basically conventional DE with one important difference. Usually, the candidate is compared to its parent. In Crowding DE, the candidate is compared to the most similar individual in the population. The applied similarity measure is the Euclidean distance between two solutions. Crowding DE was tested on numerous benchmark problems and its performance was impressive.

Because the goal of maintaining multiple optima is similar to the second goal of multiobjective optimization (maintaining diverse solutions in the front), we implemented the idea of Crowding DE in two additional variants of the DEMO algorithm. The second variant, DEMO/closest/dec, works in the same way as DEMO/parent, with the exception that the candidate solution is compared to the most similar individual in decision space. If it dominates it, the candidate replaces this individual; otherwise it is treated in the same way as in DEMO/parent. The applied similarity measure is the Euclidean distance between the two solutions in decision space. In the third variant, DEMO/closest/obj, the candidate is compared to the most similar individual in objective space.

DEMO/closest/dec and DEMO/closest/obj need more time for one step of the procedure than DEMO/parent. This is because at every step they have to search for the most similar individual in the decision and objective space, respectively. Although the additional computational complexity is notable when operating on high-dimensional spaces, it is negligible in real-world problems where the solution evaluation is the most time-consuming task.

4 Evaluation and Results

4.1 Test Problems

We analyze the performance of the three variants of DEMO on five ZDT test problems (introduced in [12]) that were frequently used as benchmark problems in the literature [1, 8, 9]. These problems are described in detail in Tables 1 and 2.

4.2 Performance Measures

Different performance measures for evaluating the efficiency of MOEAs have been suggested in the literature. Because we wanted to compare DEMO to other
MOEAs (especially the ones that use DE) on their published results, we use three metrics that have been used in these studies. For all three metrics, we need to know the true Pareto front for a problem. Since we are dealing with artificial test problems, the true Pareto front is not hard to obtain. In our experiments we use 500 uniformly spaced Pareto-optimal solutions as the approximation of the true Pareto front².

Table 1. Description of the test problems ZDT1, ZDT2 and ZDT3

ZDT1
Decision space: $x \in [0,1]^{30}$
Objective functions: $f_1(x) = x_1$, $f_2(x) = g(x)\left[1 - \sqrt{x_1/g(x)}\right]$, $g(x) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} x_i$
Optimal solutions: $0 \le x_1^* \le 1$ and $x_i^* = 0$ for $i = 2,\dots,30$
Characteristics: convex Pareto front

ZDT2
Decision space: $x \in [0,1]^{30}$
Objective functions: $f_1(x) = x_1$, $f_2(x) = g(x)\left[1 - (x_1/g(x))^2\right]$, $g(x) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} x_i$
Optimal solutions: $0 \le x_1^* \le 1$ and $x_i^* = 0$ for $i = 2,\dots,30$
Characteristics: nonconvex Pareto front

ZDT3
Decision space: $x \in [0,1]^{30}$
Objective functions: $f_1(x) = x_1$, $f_2(x) = g(x)\left[1 - \sqrt{x_1/g(x)} - \frac{x_1}{g(x)}\sin(10\pi x_1)\right]$, $g(x) = 1 + \frac{9}{n-1}\sum_{i=2}^{n} x_i$
Optimal solutions: $0 \le x_1^* \le 1$ and $x_i^* = 0$ for $i = 2,\dots,30$
Characteristics: discontinuous Pareto front

Table 2. Description of the test problems ZDT4 and ZDT6

ZDT4
Decision space: $x \in [0,1] \times [-5,5]^9$
Objective functions: $f_1(x) = x_1$, $f_2(x) = g(x)\left[1 - \sqrt{x_1/g(x)}\right]$, $g(x) = 1 + 10(n-1) + \sum_{i=2}^{n}\left[x_i^2 - 10\cos(4\pi x_i)\right]$
Optimal solutions: $0 \le x_1^* \le 1$ and $x_i^* = 0$ for $i = 2,\dots,10$
Characteristics: many local Pareto fronts

ZDT6
Decision space: $x \in [0,1]^{10}$
Objective functions: $f_1(x) = 1 - \exp(-4x_1)\sin^6(6\pi x_1)$, $f_2(x) = g(x)\left[1 - (f_1(x)/g(x))^2\right]$, $g(x) = 1 + 9\left[\frac{\sum_{i=2}^{n} x_i}{n-1}\right]^{0.25}$
Optimal solutions: $0 \le x_1^* \le 1$ and $x_i^* = 0$ for $i = 2,\dots,10$
Characteristics: low density of solutions near the Pareto front

The first metric is the convergence metric $\Upsilon$. It measures the distance between the obtained nondominated front $Q$ and the set $P^*$ of Pareto-optimal solutions:

$\Upsilon = \frac{\sum_{i=1}^{|Q|} d_i}{|Q|}$,

where $d_i$ is the Euclidean distance (in the objective space) between the solution $i \in Q$ and the nearest member of $P^*$.

Instead of the convergence metric, some researchers use a very similar metric, called generational distance GD. This metric measures the distance between the obtained nondominated front $Q$ and the set $P^*$ of Pareto-optimal solutions as

$GD = \frac{\sqrt{\sum_{i=1}^{|Q|} d_i^2}}{|Q|}$,

where $d_i$ is again the Euclidean distance (in the objective space) between the solution $i \in Q$ and the nearest member of $P^*$.

The third metric is the diversity metric $\Delta$. This metric measures the extent of spread achieved among the nondominated solutions:

$\Delta = \frac{d_f + d_l + \sum_{i=1}^{|Q|-1} |d_i - \bar{d}|}{d_f + d_l + (|Q|-1)\,\bar{d}}$,

where $d_i$ is the Euclidean distance (in the objective space) between consecutive solutions in the obtained nondominated front $Q$, and $\bar{d}$ is the average of these distances. The parameters $d_f$ and $d_l$ represent the Euclidean distances between the extreme solutions of the Pareto front $P^*$ and the boundary solutions of the obtained front $Q$.

² These solutions are uniformly spread in the decision space. They were made available online by Simon Huband at .au/research/wfg/datafiles.html.

4.3 Experiments

In addition to investigating the performance of the three DEMO variants, we were also interested in observing the effect of the crossover probability (crossProb in Fig. 2) on the efficiency of DEMO. Therefore, we made the following experiments: for every DEMO variant, we ran the respective DEMO algorithm with crossover probabilities set to 30%, 60% and 90%. We repeated all tests 10 times with different initial populations. The scaling factor F was set to 0.5 and was not tuned. To match the settings of the algorithms used for comparison, the population size was set to 100 and the algorithm was run for 250 generations.

4.4 Results

Tables 3 and 4 present the mean and variance of the values of the convergence and diversity metrics, averaged over 10 runs. We provide the results for all three DEMO variants. Results of other algorithms are taken from the literature (see [1] for the results and parameter settings of both versions of NSGA-II, SPEA and PAES, [8] for PDEA, and [9] for MODE³). Results for PDEA in [8] were evaluated with generational distance instead of the convergence metric. Because PDEA is the approach that is the most similar to DEMO, we present the additional comparison of their results in Tables 5 and 6. Once more, we present the mean and variance of the values of generational distance, averaged over 10 runs.

³ The results for MODE are the average of 30 instead of 10 runs. In [9] no diversity metric was calculated.

Table 3. Statistics of the results on test problems ZDT1, ZDT2 and ZDT3

ZDT1
Algorithm                Convergence metric      Diversity metric
NSGA-II (real-coded)     0.033482 ± 0.004750     0.390307 ± 0.001876
NSGA-II (binary-coded)   0.000894 ± 0.000000     0.463292 ± 0.041622
SPEA                     0.001799 ± 0.000001     0.784525 ± 0.004440
PAES                     0.082085 ± 0.008679     1.229794 ± 0.004839
PDEA                     N/A                     0.298567 ± 0.000742
MODE                     0.005800 ± 0.000000     N/A
DEMO/parent              0.001083 ± 0.000113     0.325237 ± 0.030249
DEMO/closest/dec         0.001113 ± 0.000134     0.319230 ± 0.031350
DEMO/closest/obj         0.001132 ± 0.000136     0.306770 ± 0.025465

ZDT2
Algorithm                Convergence metric      Diversity metric
NSGA-II (real-coded)     0.072391 ± 0.031689     0.430776 ± 0.004721
NSGA-II (binary-coded)   0.000824 ± 0.000000     0.435112 ± 0.024607
SPEA                     0.001339 ± 0.000000     0.755148 ± 0.004521
PAES                     0.126276 ± 0.036877     1.165942 ± 0.007682
PDEA                     N/A                     0.317958 ± 0.001389
MODE                     0.005500 ± 0.000000     N/A
DEMO/parent              0.000755 ± 0.000045     0.329151 ± 0.032408
DEMO/closest/dec         0.000820 ± 0.000042     0.335178 ± 0.016985
DEMO/closest/obj         0.000780 ± 0.000035     0.326821 ± 0.021083

ZDT3
Algorithm                Convergence metric      Diversity metric
NSGA-II (real-coded)     0.114500 ± 0.007940     0.738540 ± 0.019706
NSGA-II (binary-coded)   0.043411 ± 0.000042     0.575606 ± 0.005078
SPEA                     0.047517 ± 0.000047     0.672938 ± 0.003587
PAES                     0.023872 ± 0.000010     0.789920 ± 0.001653
PDEA                     N/A                     0.623812 ± 0.000225
MODE                     0.021560 ± 0.000000     N/A
DEMO/parent              0.001178 ± 0.000059     0.309436 ± 0.018603
DEMO/closest/dec         0.001197 ± 0.000091     0.324934 ± 0.029648
DEMO/closest/obj         0.001236 ± 0.000091     0.328873 ± 0.019142

Table 4. Statistics of the results on test problems ZDT4 and ZDT6

ZDT4
Algorithm                Convergence metric      Diversity metric
NSGA-II (real-coded)     0.513053 ± 0.118460     0.702612 ± 0.064648
NSGA-II (binary-coded)   3.227636 ± 7.307630     0.479475 ± 0.009841
SPEA                     7.340299 ± 6.572516     0.798463 ± 0.014616
PAES                     0.854816 ± 0.527238     0.870458 ± 0.101399
PDEA                     N/A                     0.840852 ± 0.035741
MODE                     0.638950 ± 0.500200     N/A
DEMO/parent              0.001037 ± 0.000134     0.359905 ± 0.037672
DEMO/closest/dec         0.001016 ± 0.000091     0.359600 ± 0.026977
DEMO/closest/obj         0.041012 ± 0.063920     0.407225 ± 0.094851

ZDT6
Algorithm                Convergence metric      Diversity metric
NSGA-II (real-coded)     0.296564 ± 0.013135     0.668025 ± 0.009923
NSGA-II (binary-coded)   7.806798 ± 0.001667     0.644477 ± 0.035042
SPEA                     0.221138 ± 0.000449     0.849389 ± 0.002713
PAES                     0.085469 ± 0.006664     1.153052 ± 0.003916
PDEA                     N/A                     0.473074 ± 0.021721
MODE                     0.026230 ± 0.000861     N/A
DEMO/parent              0.000629 ± 0.000044     0.442308 ± 0.039255
DEMO/closest/dec         0.000630 ± 0.000021     0.461174 ± 0.035289
DEMO/closest/obj         0.000642 ± 0.000029     0.458641 ± 0.031362

Table 5. Generational distance achieved by PDEA and DEMO on the problems ZDT1, ZDT2 and ZDT3

Algorithm          ZDT1                  ZDT2                  ZDT3
PDEA               0.000615 ± 0.000000   0.000652 ± 0.000000   0.000563 ± 0.000000
DEMO/parent        0.000230 ± 0.000048   0.000091 ± 0.000004   0.000156 ± 0.000007
DEMO/closest/dec   0.000242 ± 0.000028   0.000097 ± 0.000004   0.000162 ± 0.000013
DEMO/closest/obj   0.000243 ± 0.000050   0.000092 ± 0.000004   0.000169 ± 0.000017

Table 6. Generational distance achieved by PDEA and DEMO on the problems ZDT4 and ZDT6

Algorithm          ZDT4                  ZDT6
PDEA               0.618258 ± 0.826881   0.023886 ± 0.003294
DEMO/parent        0.000202 ± 0.000053   0.000074 ± 0.000004
DEMO/closest/dec   0.000179 ± 0.000048   0.000075 ± 0.000002
DEMO/closest/obj   0.004262 ± 0.006545   0.000076 ± 0.000003

[Figure 4: Nondominated solutions of the final population obtained by DEMO on the five ZDT test problems (one f2-versus-f1 panel each for ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6); see Table 7 for more details on these fronts. The presented fronts are the outcome of a single run of DEMO/parent.]

Table 7. Metric values for the nondominated fronts shown in Fig. 4

Problem   Convergence metric   Diversity metric
ZDT1      0.001220             0.297066
ZDT2      0.000772             0.281994
ZDT3      0.001220             0.274098
ZDT4      0.001294             0.318805
ZDT6      0.000648             0.385088

In Tables 3, 4, 5 and 6, only the results of DEMO with crossover probability 30% are shown. The results obtained with crossover probabilities set to 60% and 90% for all three variants were always worse than the ones given in the tables and are not presented for the sake of clarity⁴. Figure 4 shows the nondominated fronts obtained by a single run of DEMO/parent with crossover probability 30%. Table 7 summarizes the values of the convergence and diversity metrics for the nondominated fronts from Fig. 4.

⁴ The interested reader may find all nondominated fronts obtained by the three versions of DEMO with the three different crossover probabilities on the internet site http://dis.ijs.si/tea/demo.htm.

5 Discussion

As mentioned in the previous section, DEMO with crossover probability of 30% achieved the best results. This is in contradiction with the recommendations by the authors of DE [10] and confirms Madavan's findings in [8]. However, we have to take these results with caution. Crossover probability is highly related to the dimensionality of the decision space. In our study, only high-dimensional functions were used. When operating on low-dimensional decision spaces, higher values for crossover probabilities should be used to preserve the diversity in the population.

The challenge for MOEAs in the first three test problems (ZDT1, ZDT2 and ZDT3) lies in the high dimensionality of these problems. Many MOEAs have achieved very good results on these problems in both goals of multiobjective optimization (convergence to the true Pareto front and uniform spread of solutions along the front). The results for the problems ZDT1 and ZDT2 (Tables 3 and 5) show that DEMO achieves good results, which are comparable to the
results of the algorithms NSGA-II and PDEA. On ZDT3, the three DEMO variants outperform all other algorithms used in the comparison. On the first three test problems we cannot see a meaningful difference in the performance of the three DEMO variants.

ZDT4 is a hard optimization problem with many (21^9) local Pareto fronts that tend to mislead the optimization algorithm. In Tables 4 and 6 we can see that all algorithms (with the exception of DEMO) have difficulties in converging to the true Pareto front. Here, we can see for the first time that there is a notable difference between the DEMO variants. The third variant, DEMO/closest/obj, performs poorly compared to the first two variants, although still better than the other algorithms. While the first two variants of DEMO converge to the true Pareto-optimal front in all of the 10 runs, the third variant remains blocked in a local Pareto front 3 times out of 10. This might be caused by DEMO/closest/obj putting too much effort into finding well-spaced solutions and thus falling behind in the goal of convergence to the true Pareto-optimal front. In this problem, there is also a big difference in the results produced by the DEMO variants with different crossover probabilities. When using crossover probability 60% or 90%, no variant of DEMO ever converged to the true Pareto-optimal front.

With the test problem ZDT6, there are two major difficulties. The first one is the thin density of solutions towards the Pareto front, and the second one the non-uniform spread of solutions along the front. On this problem, all three DEMO variants outperform all other algorithms (see Tables 4 and 6). The results of all DEMO variants are also very similar, and almost no difference is noted when using different crossover probabilities. Here, we note that the diversity metric value is worse than on all other problems. This is because of the non-uniform spread of solutions, which causes difficulties although the convergence is good.

6 Conclusion

DEMO is a new DE implementation dealing with multiple objectives. The biggest difference between DEMO and other MOEAs that also use DE for reproduction is that in DEMO the newly created good candidates immediately take part in the creation of the subsequent candidates. This enables fast convergence to the true Pareto front, while the use of nondominated sorting and the crowding distance metric in the truncation of the extended population promotes the uniform spread of solutions.

In this paper, three variants of DEMO were introduced. The detailed analysis of the results brings us to the following conclusions. The three DEMO variants are as effective as the algorithms NSGA-II and PDEA on the problems ZDT1 and ZDT2. On the problems ZDT3, ZDT4 and ZDT6, DEMO achieves better results than any other algorithm used for comparison. As for the DEMO variants, we have not found any variant to be significantly better than another. Crowding DE thus showed not to bring the expected advantage over standard DE. Because DEMO/closest/dec and DEMO/closest/obj are computationally more expensive than DEMO/parent and do not bring any important advantage, we recommend the variant DEMO/parent be used in future experimentation.

In this study, we also investigated the influence of three different settings of the crossover probability. We found that DEMO in all three variants worked best when the crossover probability was low (30%). These findings, of course, cannot be generalized, because our test problems were high-dimensional (10- or 30-dimensional). In low-dimensional problems, higher values of crossover probability should be used to preserve the diversity in the population. Seeing how crossover probability affects DEMO's performance, we are now interested whether the other parameter used in candidate creation (the scaling factor F) also influences DEMO's performance. This investigation is left for further work. In the near future, we also plan to evaluate DEMO on additional test problems.

Acknowledgment

The work presented in the paper was supported by the Slovenian Ministry of
Education, Science and Sport (Research Programme P2-0209 Artificial Intelligence and Intelligent Systems). The authors wish to thank the anonymous reviewers for their comments and Simon Huband for making the Pareto-optimal solutions for the ZDT problems available online.

References

1. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6 (2002) 182-197
2. Zitzler, E., Laumanns, M., Thiele, L.: SPEA2: Improving the strength Pareto evolutionary algorithm. Technical Report 103, Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH) Zurich, Gloriastrasse 35, CH-8092 Zurich, Switzerland (2001)
3. Price, K. V., Storn, R.: Differential evolution - a simple evolution strategy for fast optimization. Dr. Dobb's Journal 22 (1997) 18-24
4. Lampinen, J.: A bibliography of differential evolution algorithm. http://www2.lut.fi/~jlampine/debiblio.htm
5. Abbass, H. A., Sarker, R., Newton, C.: PDE: A Pareto-frontier differential evolution approach for multi-objective optimization problems. In: Proceedings of the Congress on Evolutionary Computation 2001 (CEC'2001). Volume 2, Piscataway, New Jersey, IEEE Service Center (2001) 971-978
6. Abbass, H. A.: The self-adaptive Pareto differential evolution algorithm. In: Congress on Evolutionary Computation (CEC'2002). Volume 1, Piscataway, New Jersey, IEEE Service Center (2002) 831-836
7. Zitzler, E., Thiele, L.: Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation 3 (1999) 257-271
8. Madavan, N. K.: Multiobjective optimization using a Pareto differential evolution approach. In: Congress on Evolutionary Computation (CEC'2002). Volume 2, Piscataway, New Jersey, IEEE Service Center (2002) 1145-1150
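For reference, the convergence metric $\Upsilon$ and the generational distance GD defined in Section 4.2 above can be sketched as follows. This is a toy sketch, assuming the obtained front Q and the Pareto-optimal sample P* are lists of objective tuples:

```python
import math

def euclid(a, b):
    """Euclidean distance between two objective vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def convergence_metric(Q, P_star):
    """Upsilon: mean distance from each solution in front Q to the
    nearest member of the Pareto-optimal sample P*."""
    d = [min(euclid(q, p) for p in P_star) for q in Q]
    return sum(d) / len(Q)

def generational_distance(Q, P_star):
    """GD: root of the sum of squared nearest-member distances,
    divided by the front size."""
    d = [min(euclid(q, p) for p in P_star) for q in Q]
    return math.sqrt(sum(x * x for x in d)) / len(Q)
```

Both metrics are zero exactly when every point of Q lies on the sampled Pareto front; GD penalizes a few large outliers more mildly than a straight mean would, which is why the two can rank fronts differently.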
Advanced Legal English (chief editor: Li Jianbo), Unit 8, Section B: Content of an EIA - Translation

Part One (paragraphs 1-2): What an environmental impact assessment is, and what its purpose and role are.

1 A full discussion of the content of an environmental impact assessment (EIA) must rest on an understanding of its purpose.

The Espoo Convention describes an EIA as "a procedure for evaluating the likely impact of a proposed activity on the environment."

The purpose of such an EIA is to provide national decision-makers with information about the likely transboundary environmental impacts when they decide whether to authorize an activity and what controls to impose on it.

An EIA is the foundation of any management system designed to identify environmental risks, integrate environmental concerns into development activities, and promote sustainable development.

2 It is a tool whose aim is informed decision-making, but it does not itself determine whether an activity should proceed or how it should be managed.

Those decisions are made by the relevant public authorities, which balance the information provided by the EIA against other factors regarded as decisive, including economic development.

Seen from this perspective, it is clear that a "satisfactory" EIA need not show that an activity carries no risk of transboundary harm.

It is sufficient that the EIA provides the necessary information about the activity's likely transboundary impacts and follows an appropriate process.

Part Two (paragraphs 3-7): The views and interpretations of the International Court of Justice and the International Law Commission concerning the content of an EIA.

3 What exactly an EIA must contain is a legal question to be answered by lawyers, not a technical question for technicians.

In its judgment in the Pulp Mills case, the International Court of Justice stated that it is for each State to determine, in its domestic legislation or in the process of authorizing the activity, the specific content of the EIA required in each case, having regard to the nature and magnitude of the activity, its likely adverse impact on the environment, and the due diligence required in carrying out such an assessment procedure.

4 This judgment conveys two important points.

First, an EIA need not be prescribed by law in advance; it may also be required in the process of authorizing or approving an activity.

What matters is that there are concrete means of ensuring that the EIA is carried out.

Second, although the specific content of each EIA is determined by the State, an EIA must be conducted (before a major activity is launched), and it must be commensurate with the nature, magnitude, and likely adverse environmental impact of the activity.
A New Multiobjective Evolutionary Algorithm: OMOEA

SanYou Zeng
Dept. of Computer Science, China University of GeoSciences, 430074 Wuhan, Hubei, P.R. China
Dept. of Computer Science, Zhuzhou Institute of Technology, 412008 Zhuzhou, Hunan, P.R. China
sanyou-zeng@

Yuping Chen
Dept. of Computer Science, China University of GeoSciences, 430074 Wuhan, Hubei, P.R. China
kang@

LiXin Ding
State Key Laboratory of Software Engineering, Wuhan University, 430072 Wuhan, Hubei, P.R. China
yujh@

LiShan Kang
Dept. of Computer Science, China University of GeoSciences, 430074 Wuhan, Hubei, P.R. China
State Key Laboratory of Software Engineering, Wuhan University, 430072 Wuhan, Hubei, P.R. China
kang@

Abstract- A new algorithm is proposed in this paper to solve constrained multi-objective problems. The constraints of the MOPs are taken into account in determining Pareto dominance. As a result, the feasibility of solutions is not an issue. At the same time, the algorithm takes advantage both of the orthogonal design method, to search evenly, and of the statistical optimal method, to speed up the computation. The output of the technique is a large set of solutions with high precision and even distribution. Notably, for an engineering problem, WATER, it finds the Pareto-optimal set, which was previously unknown.

Keywords: evolutionary algorithms, orthogonal design, multi-objective optimization, Pareto-optimal set

1 Introduction

Evolutionary algorithms can both handle large search spaces and generate multiple alternative trade-offs in a single optimization run for multiobjective optimization problems. Representative evolutionary techniques include the vector evaluated genetic algorithm (VEGA) [3, 4], Hajela and Lin's genetic algorithm (HLGA) [5], the Pareto-based ranking procedure (FFGA) [6], the niched Pareto genetic algorithm (NPGA) [7, 8], the Nondominated Sorting Genetic Algorithm (NSGA) [9], and a new effective approach to multi-objective optimization, the strength Pareto evolutionary algorithm (SPEA) [10]. Recently, some of the proposed methods have made further progress, for instance NSGA-II [12] and SPEA2 [13], rMOGAxs [14] has been analyzed, and some new MOEAs have come about, such as the Generalised Regression GA (GRGA) [11].

In this paper, an orthogonal multi-objective evolutionary algorithm (OMOEA) is designed for multi-objective optimization problems (MOPs). The constraints of MOPs are first incorporated into Pareto dominance, modifying the Pareto dominance (a strict partial order relation) so that feasibility is not an issue. Then the orthogonal design and the statistical optimal method are generalized to suit MOPs with discrete variables. Applying the generalized design method, a niche evolution procedure is constructed, which is the kernel of OMOEA. OMOEA not only applies the orthogonal array to produce the initial niche-population, but also uses a statistical optimal method to generate offspring in the crossover of the niche evolution procedure. The evolutionary process of the new technique is that an original niche evolves first and splits into a group of sub-niches; then every sub-niche iterates the above operations. Due to the uniformity of the search, which is without blindness and randomness, and to the optimality of the statistics, the niche evolution procedure can quickly yield an evenly distributed niche-population that is very close to the non-dominated set for the niche. At the same time, because the splitting frequency of niches increases exponentially, the new algorithm can yield a large set of solutions. Therefore, within a few generations, OMOEA will quickly yield a large, constraint-satisfying, close-to-Pareto-optimal set with even distribution and high precision.

In the remainder of the paper, we briefly give the definition of the multiobjective optimization problem in Section 2.
Thereafter, in Section 3, we describe constrained Pareto dominance as an improvement of traditional Pareto dominance; in OMOEA, the search for the Pareto-optimal set (non-dominated set) is based on constrained Pareto dominance, not traditional Pareto dominance. In Section 4, we present a generalized orthogonal design method for MOPs, whose purpose is to locate as small a region as possible in which the non-dominated set lies. Section 5 presents niche evolving and splitting, which constitute the main components of OMOEA, and Section 6 describes the proposed OMOEA. The next section presents numerical experiments and discussion. Finally, we outline the conclusions of this paper.

2 Problem Definition

Definition 1 (Multi-objective Optimization Problem (MOP)) A general MOP includes a set of N parameters (decision variables), a set of K objective functions, and a set of L constraints. Objective functions and constraints are functions of the decision variables. The optimization goal is to

    minimize y = f(x) = (f1(x), f2(x), ..., fK(x))
    subject to e(x) = (e1(x), e2(x), ..., eL(x)) <= 0
    where x = (x1, x2, ..., xN) in X
    X = {(x1, x2, ..., xN) | li <= xi <= ui}
    l = (l1, l2, ..., lN), u = (u1, u2, ..., uN)
    y = (y1, y2, ..., yK) in Y                                    (1)

where x is the decision vector, y is the objective vector, X denotes the decision space, l and u are the lower and upper bounds of the decision space, and Y is called the objective space.

3 Pareto Dominance with Constraints

The Pareto dominance (a strict partial order relation) is based on the decision space X without consideration of the constraints, while the Pareto-optimal set M(Xf, <) is based on the feasible set. Thus it is inconvenient to obtain the Pareto-optimal set from traditional Pareto dominance, because we must care whether an individual is feasible when evaluating its fitness. The Pareto dominance will therefore be modified.
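Before the formal definitions, the modification can be sketched in code: the total constraint violation is compared first, and objective vectors are compared by Pareto dominance only on ties. This is a minimal Python sketch, not from the paper; the function names are illustrative, and the violation measure follows the constraint objective defined below with all weights taken as 1 by default.

```python
def constraint_objective(e, beta=None):
    """f0(x) = sum_j beta_j * max(e_j(x), 0): the aggregate constraint violation.
    e is the vector of constraint values e_j(x); feasible points have f0 = 0."""
    beta = beta or [1.0] * len(e)
    return sum(b * max(ej, 0.0) for b, ej in zip(beta, e))

def dominates(fa, fb):
    """Standard Pareto dominance for minimization: fa <= fb in every
    component, with strict inequality in at least one."""
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))

def constrained_dominates(fa, ea, fb, eb):
    """a precedes b iff f0(a) < f0(b), or f0(a) == f0(b) and f(a)
    Pareto-dominates f(b); the constraint objective has highest priority."""
    f0a, f0b = constraint_objective(ea), constraint_objective(eb)
    if f0a < f0b:
        return True
    return f0a == f0b and dominates(fa, fb)
```

With this relation, a solution that violates the constraints less always wins, so the search can range over the whole decision space without a separate feasibility test.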
First, the concept of a constraint objective is introduced.

Definition 2 Let

    f0(x) = sum_{j=1..L} beta_j <e_j(x)>+ ,
    where <e_j(x)>+ = max{e_j(x), 0} and beta_j > 0.               (2)

f0(x) is called the constraint objective.

Then a new relation "<" is defined in consideration of both objectives and constraints, as follows.

Definition 3 (Constrained Pareto dominance) For any two decision vectors a and b,

    a < b iff { f0(a) < f0(b), or f0(a) = f0(b) and f(a) < f(b) },
    where a, b in X.                                               (3)

In Definition 3, the constraint objective has been added to the objectives, but with the highest priority. Using the constrained Pareto dominance "<", OMOEA searches for the Pareto-optimal set M(X, <) in the decision space X without worrying about the feasibility of solutions.

Remark: There are some similar constraint-handling ideas (cf. Fonseca 1998 [17] and Deb 2000 [12]).

4 Generalization of the Orthogonal Design Method

In a discrete single-objective optimization problem with N factors (variables), each having Q levels, the search space consists of Q^N combinations of levels. When N and Q are large, it may not be possible to do all Q^N experiments to obtain optimal solutions. Therefore, it is desirable to sample a small but representative set of combinations for experimentation, and to estimate the optimum from the sample. The orthogonal design was developed for this purpose [15]. The selected combinations are scattered uniformly over the space of all possible combinations Q^N. In this paper one kind of orthogonal array is employed; we denote it [a_{i,j}]_{MxP}, where M = Q^2, P = Q + 1, the j-th factor in the i-th combination has level a_{i,j}, and a_{i,j} in {1, 2, ..., Q}. We denote the j-th column of the orthogonal array [a_{i,j}]_{MxP} by a_j. The details of constructing the orthogonal array [a_{i,j}]_{MxP} are as follows [16].

Algorithm 1 Construction of the orthogonal array [a_{i,j}]_{MxP}, where M = Q^2, P = Q + 1
    for (i = 1; i <= Q^2; i++) a_{i,1} = floor((i-1)/Q) mod Q;
    for (i = 1; i <= Q^2; i++) a_{i,2} = (i-1) mod Q;
    for (t = 1; t <= Q-1; t++) a_{2+t} = (a_1 * t + a_2) mod Q;
    Increment a_{i,j} by one for 1 <= i <= M and 1 <= j <= P.

For a problem with N factors, we choose the first N columns of the orthogonal array [a_{i,j}]_{MxP} and denote the result [a_{i,j}]_{MxN}. Denote the corresponding yields of the M combinations by [y_i]_{Mx1}, where the i-th combination (experiment) has yield y_i. From the yields of the selected combinations, we can obtain an optimal solution by statistical methods. That is, we calculate the mean value of the yield for each factor at each level, and evaluate the effect of each factor at each level by this mean value. We choose the best combination, composed of the best levels of the N factors, as the optimal solution. Denote the mean values by [Delta_{k,j}]_{QxN}, where the objective has mean value Delta_{k,j} at the k-th level of the j-th factor, and

    Delta_{k,j} = (Q/M) * sum_{i : a_{i,j} = k} y_i               (4)

where the j-th factor has level a_{i,j} in the i-th combination (experiment), the objective has value y_i in the i-th combination, and the sum runs over all i satisfying a_{i,j} = k. In the j-th column of [Delta_{k,j}]_{QxN}, i.e. {Delta_{k,j} | k = 1, 2, ..., Q}, suppose the index k of the minimum element in the column is k_j(0); for a minimization problem, the optimal solution is then estimated as (k_1(0), k_2(0), ..., k_N(0)).

The orthogonal design method was designed for single-objective optimization problems (SOPs). Here it is generalized to MOPs to estimate the non-dominated set. The main difference between SOPs and MOPs is that SOPs have a single objective while MOPs have more than one. In this section, the objective vector is denoted by y = (y1, y2, ..., yK) and the decision vector (the factors) by (x1, x2, ..., xN), where the factor xj is quantized into a level set Lj = {x_{1,j}, x_{2,j}, ..., x_{Q,j}}, j = 1, 2, ..., N. D denotes the set of all possible combinations of levels. Statistically, if Q is large enough, the discrete non-dominated set with respect to D will be very close to the continuous non-dominated set with respect to X. So when Q is large enough, we may regard the discrete set as a satisfactory approximation of the continuous one. When the orthogonal design method is applied, there is a best level for each factor in SOPs.
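Algorithm 1 and the factor-level statistics of Eq. (4) translate directly into code. The sketch below is illustrative (Python, 0-based indices internally, levels returned in 1..Q as in the paper); it builds the L(Q^2, Q+1) array and the table of mean yields Delta.

```python
def orthogonal_array(Q):
    """Algorithm 1: build the orthogonal array with M = Q^2 rows and
    P = Q + 1 columns; Q must be prime so that every pair of columns is
    orthogonal. Levels are returned in {1, ..., Q}."""
    M, P = Q * Q, Q + 1
    a = [[0] * P for _ in range(M)]
    for i in range(M):                      # i is the 0-based row index
        a[i][0] = (i // Q) % Q              # a_{i,1} = floor((i-1)/Q) mod Q
        a[i][1] = i % Q                     # a_{i,2} = (i-1) mod Q
        for t in range(1, Q):               # column 2+t = (col_1 * t + col_2) mod Q
            a[i][1 + t] = (a[i][0] * t + a[i][1]) % Q
    return [[v + 1 for v in row] for row in a]  # shift levels from 0..Q-1 to 1..Q

def level_means(array, yields, Q):
    """Eq. (4): Delta_{k,j} = (Q/M) * sum of y_i over rows i with a_{i,j} = k,
    i.e. the mean yield of factor j at level k (each level occurs M/Q times
    in every column)."""
    M, N = len(array), len(array[0])
    delta = [[0.0] * N for _ in range(Q)]
    for i, row in enumerate(array):
        for j, level in enumerate(row):
            delta[level - 1][j] += yields[i]
    return [[Q * s / M for s in row] for row in delta]
```

For Q = 3 this reproduces the classical L9(3^4) array: every pair of columns contains each ordered pair of levels exactly once, which is the uniformity the paper relies on.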
However, in MOPs this cannot be concluded, since there is more than one objective. But there may be a non-dominated set relative to the level set of each factor.

To obtain the non-dominated set relative to the level set of each factor, denote the values of the objectives by [y_{i,k}]_{MxK}, where the k-th objective in the i-th combination has value y_{i,k}; and denote the mean values of the K objectives at the Q levels of the N factors by [Delta_{q,j,k}]_{QxNxK}, where the k-th objective at the q-th level of the j-th factor has mean value Delta_{q,j,k}. Denote

    Delta_{q,j} = (Delta_{q,j,1}, Delta_{q,j,2}, ..., Delta_{q,j,K})    (5)

Now we define a dominance relation "<_j" on the level set of the j-th factor, j = 1, 2, ..., N.

Definition 4 For any two levels x_{u,j}, x_{v,j} in Lj of the j-th factor, j = 1, 2, ..., N,

    x_{u,j} <_j x_{v,j} iff Delta_{u,j} < Delta_{v,j},
    where Delta_{u,j} and Delta_{v,j} are defined in (5).           (6)

It is easy to verify that "<_j" is a strict partial order relation on Lj. M(Lj, <_j) denotes the non-dominated set with respect to Lj. Statistically, we have the following conclusion: the non-dominated set of the Cartesian product of the N non-dominated sets is relatively evenly distributed over the non-dominated set relative to D, and the size of the Cartesian product is probably much smaller than that of D.

5 Niche Evolving and Splitting

The concept of a niche here is the following.

Definition 5 A niche is a hyper-rectangle X_n in the decision space X of the MOP, that is, a hyper-rectangle X_n contained in X.

Since OMOEA is mainly the iteration of niche evolving and splitting, we devote this section (Section 5) to them. Niche evolution consists of producing an initial niche-population (cf. Subsection 5.1) evenly scattered over the niche by using an orthogonal array; executing the crossover operator (cf. Subsection 5.2) with the initial niche-population as the parents, where the generalized orthogonal design method and the statistical optimal method are used; including the offspring in the niche-population; and, from the niche-population, searching for the non-dominated set, which is the output of the niche evolution procedure.

5.1 Production of the Initial Niche-population

An orthogonal array [a_{i,j}]_{MxN} actually corresponds to a population of size M, so [a_{i,j}]_{MxN} determines a population; we therefore let [a_{i,j}]_{MxN} represent a population. The production of a uniformly scattered niche-population over a niche is as follows.

Algorithm 2 Construction of the initial niche-population
1. Execute Algorithm 1 to construct the orthogonal array [a_{i,j}]_{Q^2 x (Q+1)}.
2. Delete the last Q + 1 - N columns of [a_{i,j}]_{Q^2 x (Q+1)} to get [a_{i,j}]_{Q^2 x N}.
3. Get the initial niche-population P_n(0): P_n(0) <- [a_{i,j}]_{Q^2 x N}

5.2 Crossover Operator

In most evolutionary algorithms, the parents are chosen stochastically from the population. Here, in the evolution of the niche, all individuals in the population are picked as parents for the crossover operator. The details are as follows.

Algorithm 3 Crossover operator
1. Input the initial niche-population P_n(0) as parents.
2. Using P_n(0) (i.e. [a_{i,j}]_{Q^2 x N}), construct the three-dimensional array [Delta_{q,j,k}]_{QxNxK}.
3. For each factor j, j = 1, 2, ..., N, construct the non-dominated set M(Lj, <_j).
4. Construct the Cartesian product M(L1, <_1) x M(L2, <_2) x ... x M(LN, <_N).
5. Get the offspring S: S <- M(L1, <_1) x M(L2, <_2) x ... x M(LN, <_N)

5.3 Procedure of Niche Evolution

Denote P_n(0)' = P_n(0) union S. Statistically, if Q is large enough, the new technique searches the non-dominated set relative to P_n(0)' and regards it as representative of the non-dominated set relative to X_n. We now describe the algorithm for evolving a niche X_n.

Algorithm 4 Evolution of a niche
1. Initialize the parameter Q.
2. Execute Algorithm 2 to produce the initial niche-population P_n(0).
3. Execute the crossover operator (Algorithm 3) to breed the offspring S.
4. Unite the initial niche-population and the offspring: P_n(0)' = P_n(0) union S.
5. Construct the non-dominated set M(P_n(0)', <) from P_n(0)' as the output niche-population P_n(1). This is representative of the non-dominated set relative to the niche X_n.

5.4 Procedure for Splitting a Niche

If the precision of the output niche-population P_n(1) is not satisfactory, the niche X_n splits into sub-niches, where each individual in P_n(1) has a corresponding sub-niche, and the range length of each variable (factor) in each sub-niche is the difference between two successive levels of the factor. The procedure for splitting the niche X_n from P_n(1) is as follows.

Algorithm 5 Niche splitting
1. For each s in P_n(1), construct a new niche
   X_n(s) = {x | |x_j - s_j| <= delta_j / 2, j = 1, 2, ..., N},
   where x = (x1, x2, ..., xN), s = (s1, s2, ..., sN), and delta_j is the difference between two successive levels.
2. Get a group of sub-niches Psi_n: Psi_n <- {X_n(s) | s in P_n(1)}

6 The Orthogonal Multiobjective Evolutionary Algorithm

The evolutionary process of OMOEA is that an original niche (the decision space X) evolves first (cf. Subsection 5.3) and splits into a group of sub-niches (cf. Subsection 5.4); then every sub-niche iterates the above operations until the stopping criterion is satisfied. Let P denote the global population and Psi the set of all sub-niches. After each evolution generation, P and Psi are updated. The details of OMOEA are as follows.

Algorithm 6 Orthogonal multi-objective evolutionary algorithm
1. Input the decision space X as the initial niche.
2. Execute Algorithm 4 to let the niche evolve into P_n(1).
3. For the output P_n(1) of the initial niche X, execute Algorithm 5 to split the niche into a group Psi_n of niches.
4. Initialize P and Psi: P <- P_n(1); Psi <- Psi_n.
5. gen = 1
6. If the current P has not reached the required precision, and the number of solutions in P is not more than a critical value, go to step 7; else go to step 11.
7. For each niche X_n(s) in Psi, execute Algorithm 4 to let the niche evolve and yield P_n(s)(1).
8. For the output P_n(s)(1) of each niche X_n(s), execute Algorithm 5 to split the niche into a group of niches Psi_n.
9. Psi <- union of the Psi_n; P <- union of the P_n(s)(1)
10. gen = gen + 1; go to step 6.
11. Output P as the satisfying close-to-Pareto-optimal set of the MOP.

7 Numerical Experiments and Discussion

7.1 Test functions

Five MOPs are taken to test OMOEA. The first four are the benchmark problems ZDT1, ZDT2, ZDT3, and ZDT4 [19], slightly modified. The final one is W [18], an engineering problem with constraints whose Pareto-optimal set is unknown. OMOEA yields very high-precision solutions when the level points lie exactly on Pareto-optimal
points; in that case, the performance of OMOEA would not be well tested. Therefore, we modify the first four problems by randomly changing both bounds of the variables a little, and denote the modified problems by ZDT'1, ZDT'2, ZDT'3, and ZDT'4.

7.2 Results and discussion

7.2.1 Generation

Since the crossover operator is very efficient in niche evolution, few evolution generations are needed for the algorithm to yield excellent results. OMOEA has to locate the global Pareto-optimal solutions roughly, without being trapped in local Pareto optima, in the first generation. In the remaining generations, it enhances the precision of the solutions. Figure 1 shows this fact. Two generations were taken: in the first generation, the numbers of levels were Q1 = 41 for ZDT'1, ZDT'2, and ZDT'3 and Q1 = 263 for ZDT'4, to locate the global Pareto optima roughly; in the second, Q2 = 29 for all problems, to enhance precision.

Figure 1: Result comparisons on the first four problems between the first two generations. (a) For ZDT'1. (b) For ZDT'2. (c) For ZDT'3. (d) For ZDT'4.

Figure 1 also clearly illustrates that the populations of generation 1 reach high precision for the problems ZDT'1, ZDT'2, and ZDT'3, while for ZDT'4 the precision is lower and the second generation largely improves it. Generally, we recommend evolving two generations: in the first, a larger Q is taken so as to roughly locate the global Pareto-optimal solutions; in the second, a smaller Q ensures improved solution precision while avoiding heavy computation. For the problem W, OMOEA uses only one generation, with Q1 = 89, since the number of solutions in the close-to-Pareto-optimal set we obtain is very large; if we continued to improve the close-to-Pareto-optimal set, the computation would be virtually endless. Therefore, once the number of solutions in the close-to-Pareto-optimal set of a MOP exceeds a given maximum, the algorithm returns the current close-to-Pareto-optimal set as output. Figure 2 shows the resulting Pareto-optimal set of W and its projections on the planes x1-x2, x1-x3, and x2-x3.

Figure 2: The graph of the close-to-Pareto-optimal set of W and its projections on the three planes. (a) The projection on the plane x1-x2. (b) The projection on x1-x3. (c) The projection on x2-x3. (d) The graph of the close-to-Pareto-optimal set.

7.2.2 Quantization

By quantization, a continuous niche yields a discrete one. Statistically, a large number of quantization levels Q provides a large sample for good statistics and helps avoid entrapment in local Pareto optima; if Q is large enough, the non-dominated set of the discrete niche will be very close to that of the continuous one. So when Q is large enough, we regard the discrete non-dominated set as a satisfactory approximation of the continuous non-dominated set. At the same time, experimental results show that a problem with a large number of local Pareto-optimal fronts needs a large Q, while for one with fewer local Pareto optima a smaller Q is enough, to locate the global Pareto-optimal solutions in the first generation. For example, for ZDT'1, ZDT'2, and ZDT'3, OMOEA can locate the global Pareto-optimal solutions with Q = 31; for ZDT'4, however, our technique is likely to be trapped in local Pareto optima until Q >= 211. Figure 3 shows the resulting solutions of ZDT'4 with different Q in the first generation, where in the second generation Q = 29.

Figure 3: The resulting solutions of ZDT'4 with different Q1 in the first generation, with Q2 = 29 in the second generation. (a) Q1 = 47: results trapped in a local optimum. (b) Q1 = 199: results trapped in a local optimum. (c) Q1 = 223: results approximate the global optimum. (d) Q1 = 233: results approximate the global optimum. (e) Q1 = 251: results approximate the global optimum. (f) Q1 = 263: results approximate the global optimum.

Theoretically, how large Q must be so that OMOEA is not trapped in local Pareto optima is not very clear. For ZDT'4, the Pareto-optimal front is formed with g(x) = 1, where x2 = 0, ..., x10 = 0, and the best local Pareto-optimal front with g(x) = 1.25. We know g(x) is approximately 1.25 at x2 = +/-0.008, ..., x10 = +/-0.008; in this case, the difference between two successive levels should be less than 0.016, so Q > 650 would ensure that OMOEA is not trapped in local Pareto optima. However, among 100 runs performed on ZDT'4 with Q = 263 in the first generation, none was trapped in a local Pareto optimum. Therefore, for a problem with many local Pareto optima, a large Q, though not unnecessarily large, should be taken to locate the global Pareto optima in the first generation. On the other hand, the number of quantization levels Q should be prime, since a prime Q is a prerequisite for the orthogonality of the array constructed by Algorithm 1, and therefore for guaranteeing uniform search. Theoretically, whether an orthogonal array exists for an arbitrary integer Q > 0 is as yet unknown, so a prime Q is recommended.

7.2.3 Comparison

We programmed the kernel of SPEA ourselves, which is likely poorer than Zitzler's implementation, and find that OMOEA runs much faster than SPEA on the chosen test problems except W. In some ways we may conclude that OMOEA converges to the Pareto-optimal set faster than SPEA for the chosen problems, except that OMOEA expends much time obtaining the huge set of non-dominated solutions for W. We chose the results of SPEA in solving the test functions ZDT1-ZDT4 as a comparison with those of OMOEA. In our algorithm, the parameters were Q1 = 41, Q2 = 29 for ZDT'1, ZDT'2, and ZDT'3, and Q1 = 263, Q2 = 29 for ZDT'4. The results are shown in Figure 4 and assessed by both the coverage metric [10] and the metric of [1] (pp. 314-316) in Tables 1 and 2. Both the figures and the tables show that not only the quality but also the distribution of the optimal solutions from our algorithm compares favorably with SPEA. In particular, since ZDT4 has 21^9 local Pareto-optimal fronts, previously known algorithms, including VEGA, HLGA, FFGA, NPGA, NSGA, rMOGAxs, and SPEA, are all trapped in local Pareto-optimal solutions.
As to OMOEA, when the number of quantization levels is smaller than 200, our algorithm is usually also trapped in local Pareto-optimal solutions, unless the level points happen to lie on the global Pareto-optimal points. Once the number of quantization levels is larger than 200, OMOEA escapes the local Pareto-optimal solutions and approximates the global Pareto-optimal solutions (cf. Figure 3). Regarding function W, the convergence of OMOEA to close-to-Pareto-optimal solutions is shown in Figure 2. The bottom-right graph is the three-dimensional graph of the close-to-Pareto-optimal set, and the other three graphs in Figure 2 are its projections on the planes x1-x2, x1-x3, and x2-x3. We find that the third coordinate x3 of our close-to-Pareto-optimal solutions is 0.01. The top-left graph shows that the projection on the x1-x2 plane is a rectangle, 0.01 <= x1 <= 0.45, 0.01 <= x2 <= 0.1, with the bottom-left corner cut off by a curve ACB. After checking the resulting values of the constraint functions at our close-to-Pareto-optimal solutions, we find that only the first constraint boundary g1 = 1 has nearby close-to-Pareto-optimal solutions. We infer that the curve ACB is an approximation of a portion of the first constraint boundary g1 = 1; in fact, the curve ACB is very close to it. So we may say that the Pareto-optimal set of W is the region enclosed by

    x1 = 0.01, x1 = 0.45, x2 = 0.01, x2 = 0.10, x3 = 0.01,
    0.00139/(x1 x2) + 4.94 * 0.01 - 0.08 = 1                       (7)

In general, the dimension of the Pareto-optimal set grows with the number of objectives. It seems difficult to represent a high-dimensional Pareto-optimal set by finitely many optimal solutions, and it is this difficulty that causes OMOEA's exponential computation for high-dimensional Pareto-optimal sets. A wise way to overcome the difficulty, to an extent, might be to model the structure of the Pareto-optimal set.

7.2.4 Limitations

Since the technique is absolutely new, some limitations exist. If the model is additive and quadratic, the optimal statistics in the crossover operator are very valid; if not, the statistics may introduce error, which may cause a few non-dominated levels to be eliminated from the non-dominated level set (cf. Definition 4) and some dominated levels near a non-dominated level to be interlarded into the non-dominated level set. Although it is both impossible and unnecessary to pick up all non-dominated solutions, a lost non-dominated solution may slightly damage the diversity of the solutions. A way to avoid losing non-dominated levels is to modify Definition 4 with consideration of noise [20]. A false non-dominated solution will increase the computation required to eliminate it. At the same time, although the Cartesian product of the non-dominated level sets is excellent, and its size is probably much smaller than that of the corresponding discrete niche, it will be large when the Pareto-optimal set is high-dimensional, which, together with the false non-dominated solutions, results in exponential computational complexity. A possible way to overcome this limitation is to execute orthogonal search on the Cartesian product and reduce the exponential computation to polynomial in future work. With these limitations overcome, the "linked" problems would be easy to solve.

Figure 4: Result comparison between SPEA and OMOEA on the test functions ZDT'1, ZDT'2, ZDT'3, ZDT'4. (a) For ZDT'1. (b) For ZDT'2. (c) For ZDT'3. (d) For ZDT'4.

8 Conclusion

The constraints of the MOPs are taken into account in determining Pareto dominance; as a result, the feasibility of solutions is not an issue. The orthogonal design and the statistical optimal method are generalized to MOPs, and the generalized design method is applied to construct a new framework for multi-objective evolutionary algorithms (MOEA).
Due to the uniformity of the search and the optimality of the statistics, OMOEA yields a large set of solutions with high precision and even distribution. Notably, for the engineering problem WATER, it finds the Pareto-optimal set, which was previously unknown.

Acknowledgment. This work was supported by the National Natural Science Foundation of China (Nos. 60204001, 70071042, 60073043, 60133010), a project supported by the Scientific Research Fund of Hunan Provincial Education Department (No. 02C640), and the Youth Chenguang Project of Science and Technology of Wuhan City (No. 20025001002). We thank Bob McKay for assistance with the form of expression of this paper.

Bibliography

[1] Deb, K. (2001). Multi-objective Optimization Using Evolutionary Algorithms. Chichester, UK: Wiley.
[2] Horn, J. (1997). F1.9 Multicriteria decision making. In T. Baeck, D. B. Fogel, and Z. Michalewicz (Eds.), Handbook of Evolutionary Computation. Bristol, UK: Institute of Physics Publishing.
[3] Schaffer, J. D. (1984). Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. Ph.D. thesis, Vanderbilt University. Unpublished.
[4] Schaffer, J. D. (1985). Multiple objective optimization with vector evaluated genetic algorithms. In J. J. Grefenstette (Ed.), Proceedings of an International Conference on Genetic Algorithms and Their Applications, Pittsburgh, PA, pp. 93-100. Sponsored by Texas Instruments and the U.S. Navy Center for Applied Research in Artificial Intelligence (NCARAI).
[5] Hajela, P. and Lin, C.-Y. (1992). Genetic search strategies in multicriterion optimal design. Structural Optimization 4, 99-107.
[6] Fonseca, C. M. and Fleming, P. J. (1993). Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. In S. Forrest (Ed.), Proceedings of the Fifth International Conference on Genetic Algorithms, San Mateo, California, pp. 416-423. Morgan Kaufmann.
[7] Horn, J. and Nafpliotis, N. (1993). Multiobjective optimization using the niched Pareto genetic algorithm. IlliGAL Report 93005, Illinois Genetic Algorithms Laboratory, University of Illinois, Urbana-Champaign.
[8] Horn, J., Nafpliotis, N. and Goldberg, D. E. (1994). A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE World Congress on Computational Intelligence, Volume 1, Piscataway, NJ, pp. 82-87. IEEE.
[9] Srinivas, N. and Deb, K. (1994). Multiobjective optimization using non-dominated sorting in genetic algorithms. Evolutionary Computation 2(3), 221-248.
[10] Zitzler, E. (1999). Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. Ph.D. thesis, Swiss Federal Institute of Technology (ETH) Zurich, Switzerland. TIK-Schriftenreihe Nr. 30, Diss ETH No. 13398, Shaker Verlag, Aachen, Germany.
[11] Tiwari, A. and Roy, R. (2002). Generalised Regression GA for handling inseparable function interaction: Algorithm and applications. In Proceedings of the Seventh International Conference on Parallel Problem Solving from Nature (PPSN VII), Granada, Spain.
[12] Deb, K., Pratap, A., Agarwal, S. and Meyarivan, T. (2002). A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182-197.
[13] Zitzler, E., Laumanns, M. and Thiele, L. (2001). SPEA2: Improving the Strength Pareto Evolutionary Algorithm. TIK-Report 103, ETH Zurich, Gloriastrasse 35, CH-8092 Zurich, Switzerland.
[14] Purshouse, R. C. and Fleming, P. J. (2001). The Multi-objective Genetic Algorithm Applied to Benchmark Problems - An Analysis. Research Report No. 796, Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 3JD, UK.
[15] Montgomery, D. C. (1991). Design and Analysis of Experiments, 3rd ed. New York: Wiley.
[16] Ma, X. (1981). Mathematical Theory of Orthogonal Design. Beijing, China: People's Education Publisher. (In Chinese.)
[17] Fonseca, C. M. and Fleming, P. J. (1998). Multiobjective optimization and multiple constraint handling with evolutionary algorithms - Part I: A unified formulation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 28(1), 26-37.
[18] Ray, T., Tai, K. and Seow, K. C. (2001). An evolutionary algorithm for multiobjective optimization. Engineering Optimization 33(3), 399-424.
[19] Zitzler, E., Deb, K. and Thiele, L. (2000). Comparison of multiobjective evolutionary algorithms: Empirical results. Evolutionary Computation 8(2), 173-195.
[20] Rudolph, G. (2001). A partial order approach to noisy fitness functions. In Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 318-325, Seoul, Korea.
Intelligent Computation in Manufacturing Engineering - 4

THE APPLICATION OF MULTIOBJECTIVE EVOLUTIONARY ALGORITHMS TO THE GENERATION OF OPTIMIZED TOOL PATHS FOR MULTI-AXIS DIE AND MOULD MAKING

K. Weinert, J. Mehnen, M. Stautner
Department for Machining Technology
Baroper Str. 301, 44227 Dortmund, Germany

Abstract
The generation of tool paths for the milling of dies and moulds requires immense knowledge about the process behavior. This high-dimensional geometrical optimization problem requires technical knowledge and user experience. In this paper, first approaches using evolutionary algorithms for automating tool path generation and optimizing given tool paths are shown. The proposed method combines evolutionary algorithms and a newly developed, efficient multi-axis milling simulation. The evolutionary algorithm uses multiple criteria for the optimization of a tool path. The approach uses multiobjective algorithms because minimal or no movement of a machine axis can lead to high cutting forces or collisions.

Keywords: Simulation, Cutting Process, Multiobjective Optimization

1 PROBLEM DESCRIPTION
In multi-axis milling of free-formed surfaces for die and mould making, one of the main tasks is a near-optimal definition of the tool path. This definition is placed under various constraints that need to be fulfilled simultaneously. One aspect is the avoidance of collisions between the non-cutting parts of the cutting tool and the workpiece. A second aspect, which is limited to multi-axis milling, is the task of finding a tool orientation which fulfills all constraints resulting from the machine used. Different milling machines impose different constraints upon the movement of their additional axes, which are needed for more than three-axis movement. This can be seen as another collision condition, between the tool and the possible movement area of the machine, in addition to the collision between the tool and the free-form surface.
The third constraint is not limited to multi-axis milling; it is valid for all milling processes. In milling processes, the cutting force on the cutting edge is an important factor for stable and safe process control. The programming of tool paths must lead to a process which is fast and safe, in order to be cost efficient. Resulting from these constraints, multi-axis tool path strategies are often designed as an addition to the existing three-axis strategies. These paths are planned as a three-axis movement with an additional two-axis movement which changes the orientation of the tool relative to the workpiece. This additional rotational movement has to be free of collisions. Therefore, extensive user experience is needed to design these tool paths. There are some software tools on the market that deliver some kind of collision detection, but they are often restricted to three-axis milling, or only to collision detection against the final geometry and not against the actual in-process geometry of the free-form surface. So there is a demand for solutions providing functions to develop tool paths which are free of collisions and to use the possibilities of multi-axis tool path programming to obtain a fast and safe process course. This article introduces a first prototype of a tool which enables the user to convert given three-axis tool paths into collision-free multi-axis tool paths.

2 GENERAL IDEA
The main concept consists of two steps. In the first step, the user designs a conversion solution for a given three-axis tool path and a given workpiece. In the second step, a software tool optimizes the user's parameters, considering all constraints and optimization facilities. After this second step, the resulting tool path leads to an efficient multi-axis milling process that is free of collisions and fulfills all machine restrictions.
The general idea for converting three-axis milling paths into multi-axis paths is to keep the position of the tool tip and to add a variation of the tool orientation. The resulting surface remains the same if only ball end mills are used; only the quality of the surface will improve, resulting from the more advantageous engagement conditions. Several studies [1] have shown that this simple tool tilt strategy leads to useful and efficient multi-axis paths with a minimal amount of manual user input.

2.1 Manual Step

The user determines a tilt point in three-dimensional space. The multi-axis path is generated by tilting the tool so that the main axis of the tool, the shank and the holder lead through an axis determined by the tilt point and the point on the tool path (see fig. 1).

Figure 1: Scheme of the tilt-point strategy (tilt point, tool with holder, shank and edge, and the affected area).

The user has to define an area in which the tool paths have to be converted. By defining these areas, it is possible to define more than one tilt point on one tool path. This allows the conversion of tool paths for more complex workpieces. Tilt points can be defined as positive or negative. If a tilt point is defined positive, the line between tilt point and tool tip is drawn through the holder and shank of the tool. If it is defined negative, this line is drawn so that the holder points away from the tilt point. This strategy allows the manipulation of the tool orientations in a very easy way. The tilt-point strategy has several benefits: it is easy to develop, and the resulting tool paths lead to smooth movements of the machine axes, because these moves can often be executed by tilting in one machine axis and rotating the second machine axis afterwards. One disadvantage is the indirect change of the tool paths by changing a point in three-dimensional space.
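The positive/negative tilt-point rule described above can be sketched as a small orientation function. This is only an illustrative sketch; the function name, the list-based vectors and the unit-vector convention are not from the paper.

```python
import math

def tool_axis(path_point, tilt_point, positive=True):
    """Unit vector of the tool main axis for one tool-path point.

    The axis is the line through the tool tip (the point on the
    three-axis path) and the tilt point.  For a 'positive' tilt point
    the holder/shank side points towards the tilt point; for a
    'negative' one it points away from it.
    """
    # direction from the tool tip towards the tilt point
    d = [t - p for t, p in zip(tilt_point, path_point)]
    if not positive:                 # negative tilt point: holder points away
        d = [-c for c in d]
    n = math.sqrt(sum(c * c for c in d))
    if n == 0.0:
        raise ValueError("tilt point coincides with the path point")
    return [c / n for c in d]
```

For a tilt point straight above the tool tip, the tool stays vertical; moving the tilt point sideways tilts the whole path towards it, which is why a single point can steer the orientation of many path points at once.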
It is easy to find a useful position for this point, but it is difficult to determine whether this point is optimal, leading to a minimal, collision-free movement of the tool axis.

3 EVOLUTIONARY STEP

The optimization of this tilt point is accomplished in the second step of the strategy by an evolutionary algorithm. The evolutionary strategy benefits from the fact that only a few (three) parameters need to be evolved for the tilt point. For further implementations of this technique, the optimization of more than one tilt point is planned. Additionally, the optimization of further parameters, such as the form of the area in which the orientation is changed by one tilt point, should be tested. The difficulty in this evolutionary step is not the number of parameters; the main problem is a fast computation of the fitness function. Therefore, a new system for the simulation of five-axis milling has been developed. This simulation provides a tool to analyze the real process quality of the evolved individuals.

3.2 Techniques

The input to the process chain is a three-axis milling path with additional user parameters. These parameters are an orientation of the tilting axis and an area in which the path has to be tilted. Thereafter, in the evolution step the optimal additional parameters are searched; in this case, these consist only of the position of the tilt point. The output of this process is an optimized tilting position and the correspondingly efficient multi-axis tool path. A schematic view of this process chain is shown in fig. 2.

4 DISCRETE MILLING SIMULATION

There are several simulations available on the market which allow a simulation of milling processes. Most of them [2] [3] are not capable of simulating multi-axis milling processes. There are simulations which can perform this task, but these techniques have disadvantages which prevent their utilization for the fitness determination [4] [5].
For usage in these evolutionary algorithms, a new simulation of milling processes based on these dexel [6] techniques has been developed. This new simulation allows a freely scalable level of simulation resolution to reduce simulation time, and it provides fast collision testing and extra interface functions to derive a fitness value for the simulated tool path. These functions deliver values for the multiple criteria which contribute to the final fitness of the individual. The individuals implemented here consist of a single tilt point or its corresponding tool path. The simulation determines these values in three different sub-simulations.

4.3 Collision Detection

The first sub-simulation is a collision detection simulation. Collisions between parts of the tool and the actual workpiece geometry can be detected. The parts of the tool are the holder and the shank of the cutter. The workpiece geometry is the actual geometry of the workpiece at the process time when the collision is tested. The simulation is able to determine a value which indicates how much of the two volumes is in collision. This enables the optimization algorithm to distinguish between different collisions. The other type of collision is a value that determines whether the movement of the tool collides with the space bound by non-reachable axis values. If such a collision occurs, it is impossible for the machine to reach the desired tool position; since the desired position is not reachable, the desired shape cannot be produced with this tool path. This collision value is more difficult to use for an optimization because the value indicates only a collision, not a ratio between collision and non-collision.

4.4 Machine Simulation

The second sub-simulation is a simulation of the machine capabilities. Every real milling machine which is capable of five-axis milling has different acceleration capabilities relating to the rotational axes.
The simulation calculates a value which indicates how well a specified machine can follow the given tool path, or how harmonic the real machine movement is compared to the desired movement. The simulation returns a harmony value which represents how easily the machine may follow the suggested movement. This value has a direct influence on the real process milling times and the resulting surface quality. First investigations have shown that the ability of the A axis of the machine kinematics to follow the given positions is a good indicator for this value. Therefore, a simulation of the acceleration abilities of this axis delivers the difference between the desired axis position and the real axis position. The second rotational axis, the C axis, has a lesser influence on the resulting surface, because in many machine kinematics the C axis has much higher dynamics than the A axis. Thus, this optimization focuses on the two rotational axes; the ability of the machine to follow the three translational axes is not taken into account for this value.

4.5 Engagement Condition

The third sub-simulation models the actual engagement condition of the cutting edge on the workpiece. For this, the simulation calculates the actual chip volume for every simulated cut on the workpiece. This chip volume allows determining different properties of this engagement condition.

Figure 2: Proposed two-step process chain (step 1: user input of three-axis paths and user parameters; step 2: evolution with mutation, recombination/crossover, selection and fitness determination in the milling simulation; output: optimized multi-axis paths).

Possible properties are the actual cutting force on the cutter or the type of engagement, which can be tangential feed or the amount of drill cutting. This computed cutting force is not an appropriate criterion for an evolutionary optimization.
Because the actual cutting force is based on a model which only takes the chip volume into account, it cannot be altered by changing the orientation of a ball nose mill. For the evolution, this part of the simulation instead calculates the centricity of the cut: a value that indicates how much of the material is cut by the center of the cutter during the cutting operation. This value changes if the cut is done at a different angle, without changing the surface of the workpiece or the actual cutting volume. Decreasing this value leads to an engagement of the outer areas of the cutting edges and thereby to a better surface quality. These different values, which can be retrieved from the simulation, can be used for an optimization of the tilt point positions which considers the characteristics of the resulting tool path. This optimization of a three-dimensional vector is done by a simple evolutionary algorithm.

5 THE EVOLUTIONARY ALGORITHM

As shown by Castelino et al. [7], evolutionary algorithms used in process planning for NC machining are primarily specialized in the optimization of general tool movement. These solutions refer to algorithms developed for the traveling salesman problem or the lawnmower problem. Due to their large non-linear search spaces, both problems are often used for evolutionary algorithms; Alander [8] presents an overview of this subject. In this algorithm, the problem that needs to be solved is a simple three-dimensional search. Therefore, a rather simple (µ, λ)-ES can be used. This leads to good first results without optimized evolutionary parameters, and the general usability of this idea can be shown. This is a consequence of the simple three-dimensional evolution problems which have to be solved. A tournament selection operator defines two groups of individuals that compete with each other. Every survivor of a tournament replaces the inferior individual.
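The tournament replacement just described might be sketched as follows. This is an illustrative sketch only: `is_better` stands in for the lexicographic comparison defined later in section 5.7, and the random pairwise pairing scheme is an assumption, not the paper's implementation.

```python
import random

def tournament_replace(population, is_better):
    """One tournament round: pair individuals at random; each winner
    replaces the loser of its pairing (the survivor of a tournament is
    the individual with the better fitness).
    """
    shuffled = population[:]
    random.shuffle(shuffled)
    survivors = []
    # walk the shuffled population pairwise
    for a, b in zip(shuffled[::2], shuffled[1::2]):
        winner = a if is_better(a, b) else b
        survivors.extend([winner, winner])   # winner replaces the loser
    if len(shuffled) % 2:                    # an unpaired individual survives
        survivors.append(shuffled[-1])
    return survivors
```

With scalar fitnesses and `is_better = lambda a, b: a > b`, the best individual always survives its pairing and the worst never does, regardless of the random pairing.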
A survivor of a tournament is the individual that has the higher fitness value compared to its competitor. As shown in the results section, simple problems can be solved in fewer than 20 generations with values of µ = 50 and λ = 25. This behavior is important for the usefulness of the introduced system, because the determination of the fitness of a single individual can be very time consuming: milling processes have real running times of up to 20 hours, and state-of-the-art milling simulations can be only 10 times faster than the real process. Thus, determining the fitness of a single individual phenotype can be very time consuming.

5.6 Fitness Determination

The fitness of a single individual is determined by the properties evaluated by the simulation of the resulting tool path. Not all criteria which can be evaluated can be used for an evolutionary optimization; some values cannot be influenced by a change in the orientation of the cutter. Accordingly, there are seven criteria which contribute to the fitness evaluation:

1. Collision with the space bound by non-reachable machine positions (see sect. 4.3)
2. The size of the volume in collision between tool and the actual workpiece geometry, summed up over the whole simulation process (see sect. 4.3)
3. The average engagement centricity of the cutting edge over the whole process (see sect. 4.5)
4. The average harmony of the A axis movement (see sect. 4.4)
5. The minimum harmony of the A axis movement (see sect. 4.4)
6. The average harmony of the C axis movement (see sect. 4.4)
7. The minimum harmony of the C axis movement (see sect. 4.4)

The first two criteria obviously have a different priority than the five other criteria. A desired tool path which cannot be processed because positions are unreachable does not need any further optimization of the remaining criteria; thus, the other values cannot be evaluated in case of such a collision. They will be set to a fictive low value.
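Under these rules, the assembly of the seven-criteria fitness vector could look roughly like this. The dictionary keys, the normalization (higher is better for every entry) and the concrete fictive low value are illustrative assumptions; only the structure follows the text.

```python
FICTIVE_LOW = 0.0   # assumed "fictive low value" for criteria that cannot be evaluated

def fitness_vector(sim):
    """Build the seven-criteria fitness vector of one individual.

    `sim` is a dict of simulation outputs.  If the path requires
    unreachable machine positions (criterion 1), the remaining
    criteria are meaningless and are set to a fictive low value, so
    such individuals always rank last in the lexicographic ordering.
    """
    if sim["unreachable"]:
        return [0.0] + [FICTIVE_LOW] * 6
    return [1.0,                        # 1: no unreachable positions
            sim["collision_volume"],    # 2: colliding volume, scored
            sim["centricity"],          # 3: average engagement centricity, scored
            sim["avg_harmony_a"],       # 4: average A axis harmony
            sim["min_harmony_a"],       # 5: minimum A axis harmony
            sim["avg_harmony_c"],       # 6: average C axis harmony
            sim["min_harmony_c"]]       # 7: minimum C axis harmony
```

Ordering these vectors lexicographically then automatically enforces the stated priority: reachability first, collision volume second, and the engagement and harmony criteria afterwards.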
In every other case, the values 2 to 7 will be evaluated simultaneously.

5.7 Order and Classification

As can be seen in section 5.6, the fitness determination and the resulting values suggest a ranking of the objectives: a collision-free path has a higher priority than a harmonic movement. Ehrgott [9] calls an optimization lexicographic when the objective vectors are compared in criterion space:

    lex min f(x) = (f_1(x), f_2(x), f_3(x), f_4(x), ...),  x in X        (1)

A vector f(x) is called lexicographically smaller than or equal to f(y) if either f_i(x) = f_i(y) for all 1 <= i <= n, or there exists an index l (1 <= l <= n) such that f_i(x) = f_i(y) for 1 <= i <= l-1 and f_l(x) < f_l(y). The values retrieved from the simulation form a vector of seven dimensions. A lexicographic ordering of these vectors, which specify the fitness of a single individual, orders the individuals according to the given priority of the attributes. In this case, reducing the collisions has a higher priority than decreasing the engagement centricity. This ordering allows implementing a simple selection method.

5.8 Selection Method

The algorithm uses a truncation or (µ, λ)-selection [10]. In this selection scheme, λ individuals are chosen from the µ individuals if their lexicographic ordering is higher than that of the remaining individuals. The selection scheme for the best individuals and their lexicographic classification (see section 5.7) is shown in pseudo code in fig. 3, where I1 and I2 are the current individuals with their respective arrays of fitness values.

    I1 < I2 {
        for (c = 0; c < [NumberOfFitnessValues]; c++) {
            if (I1.f[c] > I2.f[c]) return true;
            if (I1.f[c] < I2.f[c]) return false;
            // values equal: continue with the next criterion
        }
        return false;
    }

Figure 3: Pseudo code for the lexicographic comparison of two individuals.

The criteria ranking depends on the order of the fitness values in the array. Therefore, a tool path is first tested for the absence of collisions. Secondly, a position with the most favorable engagement condition is searched.
Only after that, the most harmonic movement of the machine axes is considered.

5.9 Recombination, Crossover and Mutation

The recombination is carried out via a two-individual crossover. Two individuals are randomly chosen from the selected ones, ignoring their fitnesses. The crossover function randomly chooses values from both individuals. This type of crossover reduces the fitness pressure; therefore, the space of possible solutions can be searched by a more divergent population. The mutation is carried out after the recombination on all µ individuals of the new generation. The function for the calculation of the new parameter values is shown in equation (2), where f_i is the standardized fitness value of a single criterion:

    x_new = x_old + m_1 r_1 (1 - f_1) + m_2 r_2 (1 - f_2) + m_3 r_3 (1 - f_3) + ...        (2)

The amount of mutation on a value x is based on n random values r_i, one chosen for each fitness value in the attribute vector. Each value is combined with a factor m_i. This factor controls how much a single deviation of a criterion can influence the search behavior of the complete algorithm. Due to the lexicographic ordering, it is helpful to choose the m_i values with m_i >= m_{i+1} (1 <= i < n).

6 FIRST RESULTS

First tests were carried out on a simple test workpiece, which allows a fast and plain design of the evolutionary algorithm. The test workpiece was designed with the CAD/CAM software CATIA to ensure the functionality of the complete process chain without using real workpieces in the first tests. The usage of this test case allows verifying the design of the EA and helps to reduce unwanted time losses caused by errors in the implementation of the algorithms. After the usefulness of the algorithm and the newly developed milling simulation had been shown, the pre-parameterized algorithm was used for two real workpieces.
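The fitness-scaled mutation of equation (2) in section 5.9 can be sketched directly. This is a minimal sketch under stated assumptions: the random values r_i are taken as symmetric in [-1, 1], and the concrete strength factors m_i are illustrative.

```python
import random

def mutate(x, fitness, strengths):
    """Mutate one parameter value according to equation (2).

    x: the old parameter value.
    fitness: standardized fitness values f_i in [0, 1] of the individual.
    strengths: the factors m_i, chosen non-increasing (m_i >= m_{i+1})
    to match the lexicographic ranking of the criteria.

    A criterion that is poorly fulfilled (f_i small) contributes a
    large random perturbation; a well-fulfilled one contributes little.
    """
    assert len(fitness) == len(strengths)
    new = x
    for f_i, m_i in zip(fitness, strengths):
        r_i = random.uniform(-1.0, 1.0)   # assumed symmetric random value
        new += m_i * r_i * (1.0 - f_i)
    return new
```

A fully fit individual (all f_i = 1) is left unchanged, while an individual that still collides is perturbed strongly, which matches the intended search behavior.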
6.1 Tool-Shank-Holder

The simulated tool is a ball end mill with an edge length varying from 1 mm up to 4 mm and a diameter of 1 mm to 8 mm, depending on the workpiece. This part of the tool cannot lead to collisions. All tool paths used in the test runs originate from three-axis strategies. To ensure an increased surface quality after the conversion and evolution, a shorter and thicker tool shank is used; this tool cannot be used on the original three-axis paths. The colliding parts of the tool are the holder and the shank, which has a cone as its basic form. In our test workpiece, only collisions between the workpiece and the shank are to be expected because of the length of the shank; the distance between the point on the path and the holder is too big for the holder to come into collision.

6.2 Column Workpiece

The artificial test workpiece is a column that has a steeply rising edge in the middle of the workpiece (see fig. 4). This workpiece needs to be milled with a long shank in a three-axis strategy to enable the processing of the small-diameter (1 mm) fillet at the base of the column. The milling of these paths is not possible with a three-axis strategy and the chosen shank-holder combination. The collision is most likely at the inner paths, where the edge of the column is approached. The initial search space for the positioning of the tilt points ranges from -40 mm to +40 mm along all axes of the global object space. Only 10 initial individuals, spread stochastically over this 80 x 80 x 80 mm search space, are used. To obtain an overview of the behavior of the algorithm, this test run was repeated 20 times. Fig. 5 shows a plot of the average fitnesses.

Figure 5: Fitness progression (maximum, average and minimum normalized fitness over the generations) of the column test workpiece after 20 tests.

The fitness diagrammed above denotes the first criterion which is evaluated.
This criterion is the simulated amount of collisions between the workpiece and the tool. The values of the other criteria are not shown, to focus on the general usability of the algorithm. As can be seen, a collision-free tilt point is found no later than the 25th generation in most of the test runs. The development of the curve for the average minimal fitness is more interesting: for 25 generations the average minimal fitness value increases steadily; after the 25th generation the minimal fitness value starts oscillating due to the influence of the other criteria, which take control once the first criterion, the absence of collisions, is fulfilled. The engagement criterion pulls the tilt point to one side of the collision-free area to optimize the engagement. This leads to individuals which, after a minimal mutation, may not be free of collisions. A closer look at this behavior is given in fig. 6.

Figure 4: Column test workpiece with the tool positioned at the base of the column.

Figure 6: Fitness progression of three criteria (collisions, engagement, harmony) shown on the column test workpiece in a single run.

This figure shows the fitness progression of the current best individual in a single run for three different criteria. As can be seen, the collision criterion reaches a maximum at about generation 25, as in figure 5. At this point, the engagement value gains more importance, which leads to the described behavior. The third criterion, the harmony of the machine movement, does not increase over the shown run; due to the parameterization, an optimization of the engagement criterion is preferred.

6.3 Drop Forging Workpiece

The second workpiece is a part for the drop forging of a chain element.
A conversion of the given three-axis tool path would allow using a shorter and sturdier shank for the process.

Figure 7: The drop forging workpiece.

Cavities are difficult to mill if they are of a certain depth and have small diameters, because this requires tools with long, thin tool shanks. The use of long shanks causes undesirable effects on the surface quality and increases the risk of tool breakage; to overcome these effects, the process speeds need to be reduced. This workpiece also needs to be milled with a long shank to enable the processing of the small-diameter (1 mm) fillet at the base of the column. Because of the similar dimensions of the workpiece, all parameters remain the same. This allows obtaining an overview of the behavior of the algorithm on different parts without the use of extra knowledge. Both test workpieces, the column and the drop forging part, have different constraints: the collision spaces of the workpieces differ, and therefore the collision avoidance places varying demands on the evolution progression. The simulation can only deliver a value of how many collisions have occurred. Consequently, the fitness landscapes are different, and a parameterization that is suitable for both cases should be suitable for other, more practical workpieces as well.

To increase the speed of the fitness determination, only an excerpt of the finishing step at the bottom of the cavity was used for the fitness determination (see fig. 7). This is a valid approach if the selected paths are representative of the entire tool path. Fig. 8 shows the average progression of the collision criterion after 20 test runs for the drop forging part. As can be seen, a collision-free configuration is found much faster than in section 6.2. This is typical behavior for solutions of cavity problems; one reason is that the calculation of collision-free paths is much easier if there are no parts of the workpiece between the tool paths.

Figure 8: Fitness progression (maximum, average and minimum normalized fitness) of the drop forging workpiece after 20 tests.

The optimization of the other values of the criteria vector is not successful with this workpiece. Fig. 9 shows that both criteria have much better values at the beginning of the evolution. During the collision search the value for engagement decreases, but after generation five the algorithm tries to increase this value while keeping the paths free of collisions. This may result from the smaller area in which the algorithm can search for collision-free paths with good engagement conditions; the best engagement conditions obviously lead to collisions. Again, the harmony criterion is not optimized very well, due to the preceding engagement optimization.

Figure 9: Fitness progression of three criteria (collisions, engagement, harmony) of the drop forging workpiece in a single run.

7 PRACTICAL WORKPIECE: CAR CRANKSHAFT

The pre-parameterized algorithm is now used for a real, practical workpiece: a mould for a car crankshaft. The chosen workpiece and the three-axis milling paths were not specially designed for five-axis milling or for this evolutionary path design; the original paths were designed for a three-axis milling process. The collision situation is very difficult for this workpiece due to its deep cavities. To exclude a three-axis processing, a shank based on a cone is used. Surface quality is expected to be better if the process is carried out with the shorter and sturdier conical shank. The present three-axis tool paths lead to a collision if they are used with this tool, though.

Figure 10: Car crankshaft mould with one area marked at the cavity selected for the tool path conversion.

For test processing, a finishing step was chosen. As with the preceding workpiece, an excerpt of the tool path is used to decrease the time for the fitness calculation. As in the test runs, 10 individuals per population are used. The initial search space for these tests is increased to 200 mm x 200 mm x 200 mm according to the larger dimensions of the workpiece (160 mm x 160 mm x 40 mm). All other parameters were kept constant.

Fig. 11 shows a fitness plot for the test run over 75 generations. Because of the deep cavity and the difficult collision situation, a collision-free configuration is found later than in the preceding tests, around generation 50. As a consequence, the optimization of the two other criteria has only a few generations left to develop a solution. However, the general behavior is comparable to the first tests: good engagement conditions are easy to find while collisions are still permitted; with the decrease of collisions, the space for optimal engagement conditions shrinks. After the first optimization, only a few small changes in the engagement conditions are possible. The optimization works well on all tested types of workpiece and tool path combinations, and the used fitness calculation is suitable for the specific order of the criteria.

Figure 11: Average fitnesses (collisions, engagement, harmony) from 10 evolution runs with 50 individuals per generation.

As can be seen in all three optimization tests, a strong optimization of the movement harmony after the engagement conditions is very difficult. This may be a drawback of the lexicographic optimization. Further research will include hybrid techniques of lexicographic and Pareto optimization.

8 CONCLUSIONS AND OUTLOOK

This paper presents a new system for the optimization of tool paths and engagement conditions for multi-axis milling as well as for the generation of efficient multi-axis tool paths. First studies show that the proposed multiobjective evolutionary algorithm is suitable to deliver an efficient evolution in combination with an efficient simulation system. The simple parameterization of the algorithm allows the application of the system to different problem cases without further user knowledge about the evolutionary parameters. Although the introduced software system is an academic prototype, this technique may be used as a basis for a commercial solution for multi-axis path generation in CAM applications. Further implementations are planned and will be based on other generation strategies, though they will require a more complex genome. More advanced multi-axis strategies may be developed with this system without the use of three-axis tool paths as a basis for the evolution.

9 ACKNOWLEDGEMENTS

This research was supported by the Deutsche Forschungsgemeinschaft as part of the Collaborative Research Center "Computational Intelligence" (SFB 531).

10 REFERENCES

[1] Damm, P.; Friedhoff, J.; Stautner, M.: Maschinenoptimiertes 5-Achsen-Simultanfräsen nimmt Einzug in die Welt des Werkzeugbaus. Schmiede Journal, March 2004, pp. 22-26
[2] Müller, H.; Surmann, T.; Stautner, M.; Albersmann, F.; Weinert, K.: Online Sculpting and Visualization of Multi-Dexel Volumes. In: SM'03, Eighth ACM Symposium on Solid Modeling and Applications, Elber, G.; Shapiro, V. (eds.), 2003, pp. 258-261
[3] Glaeser, G.; Gröller, E.: Efficient Volume-Generation During the Simulation of NC-Milling. Technical Report TR-186-2-97-10, April 1997, Institute of Computer Graphics and Algorithms, Vienna University of Technology
[4] Hook, T. van: Real Time Shaded NC Milling Display. Computer Graphics 20 (1986) 4, pp. 15-20
[5] Weinert, K.; Müller, H.; Friedhoff, J.: Efficient Discrete Simulation of 3-Axis Milling for Sculptured Surfaces. Production Engineering 3 (1998) 2, pp. 83-88
[6] Weinert, K.; Stautner, M.: An Efficient Discrete Simulation for Five-Axis Milling of Sculptured Surfaces. Production Engineering 9 (2002) 1, pp. 47-51
[7] Castelino, K.; D'Souza, R.; Wright, P. K.: Tool-path Optimization for Minimizing Airtime during Machining. Mechanical Engineering Dept., University of California at Berkeley, /~kenneth/research/pubs/airtime_minimization_joms.pdf
[8] Alander, J. T.: An Indexed Bibliography of Genetic Algorithms and the Travelling Salesman Problem. Internal Report 94-1-TSP, University of Vaasa, Department of Information Technology and Production Economics, Finland, 1994
[9] Ehrgott, M.: Multicriteria Optimization. Lecture Notes in Economics and Mathematical Systems, No. 491, Springer-Verlag, Berlin Heidelberg, 2000
[10] Banzhaf, W.; Nordin, P.; Keller, R. E.; Francone, F. D.: Genetic Programming: An Introduction: On the Automatic Evolution of Computer Programs and Its Applications. Morgan Kaufmann Publishers, San Francisco, Cal., 1998