Multi-objective optimization using evolutionary algorithms
NSGA-II: A Fast Elitist Multiobjective Genetic Algorithm
Aravind Seshadri

1. Multi-Objective Optimization Using NSGA-II

NSGA ([5]) is a popular non-domination based genetic algorithm for multi-objective optimization. It is a very effective algorithm but has been generally criticized for its computational complexity, its lack of elitism, and the need to choose an optimal value for the sharing parameter σ_share. A modified version, NSGA-II ([3]), was developed, which has a better sorting algorithm, incorporates elitism, and requires no sharing parameter to be chosen a priori. NSGA-II is discussed in detail in this article.

2. General Description of NSGA-II

The population is initialized as usual. Once the population is initialized, it is sorted into fronts based on non-domination. The first front is the completely non-dominated set in the current population; the second front is dominated only by individuals in the first front; and so on. Each individual in each front is assigned a rank (fitness) value based on the front to which it belongs: individuals in the first front are given a fitness value of 1, individuals in the second front a value of 2, and so on.

In addition to the fitness value, a new parameter called crowding distance is calculated for each individual. The crowding distance is a measure of how close an individual is to its neighbors; a large average crowding distance results in better diversity in the population. Parents are selected from the population by binary tournament selection based on rank and crowding distance: an individual is selected if its rank is lower than the other's, or, at equal rank, if its crowding distance is greater. The selected population generates offspring through crossover and mutation operators, which are discussed in detail in a later section.

The combined population of current parents and offspring is sorted again based on non-domination, and only the best N individuals are selected, where N is the population size. The
selection is based on rank, with crowding distance deciding among individuals on the last front that fits. (Footnote 1: crowding distance is compared only when both individuals have the same rank.)

3. Detailed Description of NSGA-II

3.1. Population Initialization. The population is initialized based on the problem range and constraints, if any.

3.2. Non-Dominated Sort. The initialized population is sorted based on non-domination. The fast sort algorithm [3] is described below.

- For each individual p in the main population P:
  - Initialize S_p = ∅. This set will contain all the individuals dominated by p.
  - Initialize n_p = 0. This will be the number of individuals that dominate p.
  - For each individual q in P:
    - if p dominates q, add q to the set S_p, i.e. S_p = S_p ∪ {q};
    - else if q dominates p, increment the domination counter for p, i.e. n_p = n_p + 1.
  - If n_p = 0, i.e. no individual dominates p, then p belongs to the first front: set the rank of p to one, i.e. p_rank = 1, and update the first front by adding p, i.e. F_1 = F_1 ∪ {p}.
- This is carried out for all individuals in the main population P.
- Initialize the front counter to one: i = 1.
- While the i-th front is nonempty, i.e. F_i ≠ ∅:
  - Q = ∅, the set for storing the individuals of the (i+1)-th front.
  - For each individual p in front F_i:
    - For each individual q in S_p (the set of individuals dominated by p):
      - n_q = n_q − 1, i.e. decrement the domination count for q;
      - if n_q = 0, then none of the individuals in the subsequent fronts dominate q; set q_rank = i + 1 and update Q with q, i.e. Q = Q ∪ {q}.
  - Increment the front counter by one.
  - The set Q is now the next front, hence F_i = Q.

This algorithm is better than the original NSGA ([5]) since it utilizes the information about the set of individuals that an individual dominates (S_p) and the number of individuals that dominate it (n_p).

3.3. Crowding Distance. Once the non-dominated sort is complete, the crowding distance is assigned. Since the individuals are selected based on rank and crowding
distance, all the individuals in the population are assigned a crowding distance value. (Footnote 2: an individual is said to dominate another if its objective values are no worse than the other's in every objective and better in at least one.) Crowding distance is assigned front-wise; comparing the crowding distance of two individuals in different fronts is meaningless. The crowding distance is calculated as below.

For each front F_i with n individuals:
- Initialize the distance to zero for all individuals, i.e. F_i(d_j) = 0, where j corresponds to the j-th individual in front F_i.
- For each objective function m:
  - Sort the individuals in front F_i by objective m, i.e. I = sort(F_i, m).
  - Assign infinite distance to the boundary individuals, i.e. I(d_1) = ∞ and I(d_n) = ∞.
  - For k = 2 to (n − 1):

    I(d_k) = I(d_k) + (I(k+1).m − I(k−1).m) / (f_m^max − f_m^min)

    where I(k).m is the value of the m-th objective function of the k-th individual in I, and f_m^max and f_m^min are the maximum and minimum values of the m-th objective.

The basic idea behind the crowding distance is to estimate, for each individual in a front, how densely its neighbors are packed around it in the m-dimensional objective space. The boundary individuals are always selected since they are assigned infinite distance.

3.4. Selection. Once the individuals are sorted based on non-domination and crowding distances are assigned, selection is carried out using a crowded-comparison operator (≺_n). The comparison is based on:
(1) the non-domination rank p_rank, i.e. individuals in front F_i have rank p_rank = i;
(2) the crowding distance F_i(d_j).

Then p ≺_n q if
- p_rank < q_rank,
- or, if p and q belong to the same front F_i, then F_i(d_p) > F_i(d_q), i.e. the crowding distance should be greater.

Individuals are selected by binary tournament selection with the crowded-comparison operator.

3.5. Genetic Operators. Real-coded GAs use the Simulated Binary Crossover (SBX) [2], [1] operator for crossover and polynomial mutation [2], [4].

3.5.1. Simulated Binary Crossover. Simulated binary crossover
simulates the binary crossover observed in nature and is given as below:

c_1,k = (1/2)[(1 − β_k) p_1,k + (1 + β_k) p_2,k]
c_2,k = (1/2)[(1 + β_k) p_1,k + (1 − β_k) p_2,k]

where c_i,k is the k-th component of the i-th child, p_i,k is the k-th component of the i-th selected parent, and β_k (≥ 0) is a sample from a random number generator having the density

p(β) = (1/2)(η_c + 1) β^η_c,             if 0 ≤ β ≤ 1,
p(β) = (1/2)(η_c + 1) (1 / β^(η_c + 2)),  if β > 1.

This distribution can be obtained from a uniformly sampled random number u in (0, 1). Here η_c is the distribution index for crossover; it determines how widely the children spread from their parents. That is,

β(u) = (2u)^(1/(η_c + 1)),               if u ≤ 1/2,
β(u) = [1 / (2(1 − u))]^(1/(η_c + 1)),   if u > 1/2.

3.5.2. Polynomial Mutation.

c_k = p_k + (p_k^u − p_k^l) δ_k

where c_k is the child and p_k is the parent, with p_k^u the upper bound and p_k^l the lower bound on the parent component (the decision-space upper and lower bound for that particular component), and δ_k a small variation calculated from a polynomial distribution using

δ_k = (2 r_k)^(1/(η_m + 1)) − 1,          if r_k < 0.5,
δ_k = 1 − [2(1 − r_k)]^(1/(η_m + 1)),     if r_k ≥ 0.5,

where r_k is a uniformly sampled random number in (0, 1) and η_m is the mutation distribution index.

3.6. Recombination and Selection. The offspring population is combined with the current generation population, and selection is performed to choose the individuals of the next generation. Since all the previous and current best individuals are included in the combined population, elitism is ensured. The combined population is sorted based on non-domination, and the new generation is filled front by front. If adding all the individuals in front F_j would make the population exceed N, then individuals in F_j are selected in descending order of crowding distance until the population size is N. The process then repeats to generate the subsequent generations.

4. Using the Function

Pretty much everything is explained while you execute the code, but the main arguments needed to get the function running are the population size and the number of generations.
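As an illustration, the SBX and polynomial-mutation formulas of Section 3.5 can be sketched in Python. This is a minimal sketch, not the author's implementation; the function names and the per-component mutation probability `p_mut` are assumptions made for the example.

```python
import random

def sbx_crossover(p1, p2, eta_c=20):
    """Simulated Binary Crossover (SBX) on two real-valued parent vectors.

    For each component, draw u in (0, 1), map it to the spread factor
    beta, and blend the parents exactly as in Section 3.5.1.
    """
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        if u <= 0.5:
            beta = (2 * u) ** (1 / (eta_c + 1))
        else:
            beta = (1 / (2 * (1 - u))) ** (1 / (eta_c + 1))
        c1.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
        c2.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
    return c1, c2

def polynomial_mutation(child, lower, upper, eta_m=20, p_mut=0.1):
    """Polynomial mutation: each component mutates with probability p_mut
    and is clipped to its [lower, upper] bounds."""
    out = []
    for x, lo, hi in zip(child, lower, upper):
        if random.random() < p_mut:
            r = random.random()
            if r < 0.5:
                delta = (2 * r) ** (1 / (eta_m + 1)) - 1
            else:
                delta = 1 - (2 * (1 - r)) ** (1 / (eta_m + 1))
            x = min(max(x + (hi - lo) * delta, lo), hi)
        out.append(x)
    return out
```

Whichever beta is drawn, c1 + c2 equals p1 + p2 componentwise, so SBX preserves the parents' mean; a larger eta_c keeps the children closer to their parents.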
Once these arguments are entered, the user is prompted for the number of objective functions and the number of decision variables. The ranges for the decision variables also have to be entered. Once the preliminary data are obtained, the user is prompted to modify the objective function. Have fun and feel free to modify the code to suit your needs!

References

[1] Hans-Georg Beyer and Kalyanmoy Deb. On Self-Adaptive Features in Real-Parameter Evolutionary Algorithms. IEEE Transactions on Evolutionary Computation, 5(3):250–270, June 2001.
[2] Kalyanmoy Deb and R. B. Agarwal. Simulated Binary Crossover for Continuous Search Space. Complex Systems, 9:115–148, April 1995.
[3] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan. A Fast Elitist Multi-objective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, April 2002.
[4] M. M. Raghuwanshi and O. G. Kakde. Survey on multiobjective evolutionary and real coded genetic algorithms. In Proceedings of the 8th Asia Pacific Symposium on Intelligent and Evolutionary Systems, pages 150–161, 2004.
[5] N. Srinivas and Kalyanmoy Deb. Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evolutionary Computation, 2(3):221–248, 1994.
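As a companion to Sections 3.2 and 3.3 above, the fast non-dominated sort and the crowding-distance assignment can be sketched in Python. This is an illustrative sketch for minimization problems, not the author's code; the function names are assumptions.

```python
def fast_non_dominated_sort(objectives):
    """Return the non-domination fronts (lists of indices) for a list of
    objective tuples, all objectives to be minimized (Section 3.2)."""
    n = len(objectives)
    S = [[] for _ in range(n)]   # S[p]: individuals dominated by p
    n_dom = [0] * n              # n_dom[p]: count of individuals dominating p
    fronts = [[]]

    def dominates(a, b):
        # no worse in every objective, strictly better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    for p in range(n):
        for q in range(n):
            if dominates(objectives[p], objectives[q]):
                S[p].append(q)
            elif dominates(objectives[q], objectives[p]):
                n_dom[p] += 1
        if n_dom[p] == 0:
            fronts[0].append(p)

    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n_dom[q] -= 1
                if n_dom[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    fronts.pop()  # the last front is always empty
    return fronts

def crowding_distance(front_objs):
    """Crowding distances for the individuals of one front (Section 3.3)."""
    n = len(front_objs)
    if n == 0:
        return []
    dist = [0.0] * n
    for obj in range(len(front_objs[0])):
        order = sorted(range(n), key=lambda i: front_objs[i][obj])
        f_min = front_objs[order[0]][obj]
        f_max = front_objs[order[-1]][obj]
        # boundary individuals get infinite distance and are always kept
        dist[order[0]] = dist[order[-1]] = float('inf')
        if f_max == f_min:
            continue
        for k in range(1, n - 1):
            dist[order[k]] += (front_objs[order[k + 1]][obj] -
                               front_objs[order[k - 1]][obj]) / (f_max - f_min)
    return dist
```

For example, among the points (1,1), (2,2), (1,2), (2,1), (3,3), the point (1,1) dominates every other point and therefore forms the first front on its own.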
qsconsult, www.qsconsult.be
Willy Vandenbrande

Shainin: A Concept for Problem Solving
Lecture at the Shainin conference, Amelior, 11 December 2009

Dorian Shainin (1914–2000)
• Aeronautical engineer (MIT, 1936)
• Design engineer for United Aircraft Corporation
• Mentored by his friend Joseph M. Juran
• Reliability consultant for Grumman Aerospace (Lunar Excursion Module)
• Reliability consultant for Pratt & Whitney (RL-10 rocket engine)
• Developed over 20 statistical engineering techniques for problem solving and reliability
• Started Shainin Consultants in 1984; his son Peter is the current CEO.

Dorian Shainin and ASQ
• 15th ASQ Honorary Member (1996)
• First person to win all four major ASQ medals
• In 2004 ASQ created the Dorian Shainin Medal, for outstanding use of unique or creative applications of statistical techniques in the solving of problems related to the quality of a product or service.

Dorian Shainin
• Not very well known outside the USA (compared to Deming and Juran)
• 1991: publication of the first edition of "World Class Quality" by Keki Bhote
• 2000: second edition (Keki and Adi Bhote)
• The books brought attention to the Shainin methods, but are very biased.

Problem Solving
• The focus is on variation reduction.
(Figure: distributions before and after improvement, narrowing within LSL and USL; LSL = Lower Specification Limit, USL = Upper Specification Limit.)
• But also on shifting the distribution toward the target.
(Figure: distributions before and after improvement, recentered relative to the LSL.)

Basic Shainin Assumption
• The Pareto principle of the vital few and trivial many: only a few input variables are responsible for a large part of the output behavior.
  – Red X™
  – Pink X™
  – Pale Pink X™
• Problem solving becomes the hunt for the Red X™.

Shainin Tools
• Recipe-like methods, with the statistics in the background
• Comparing extremes allows easier detection of causes:
  – BOB: Best of the Best
  – WOW: Worst of the Worst
• Non-parametrics with ranking tests instead of calculations with hypothesis tests
• Graphical methods
• Working with small sample sizes
• The truth is in the parts, not in the drawing: let the parts talk!

Preliminary Activities
• Define the critical output variable(s) to be improved (called the problem
Green Y®).
• Determine the quality of the measurement system used to evaluate the Green Y®:
  – A bad measurement system can in itself be responsible for excessive variation.
  – Improvements can only be seen if they can be measured.

Overview of Shainin Tools
• Clue generating (20–1000 variables): Multi-Vari Chart, Components Search, Paired Comparisons, Product/Process Search
• Formal DOE tools (5–20 variables): Variables Search; (4 or fewer variables): Full Factorials
• Validation: B vs C
• Optimization (no interactions): Scatter Plots; (interactions): RSM methods
• Assurance and ongoing control: Precontrol, Positrol, Process Certification

General Comments
• Gradually narrowing down the search
• Clear logic: analyzing, improving, controlling
• Not all tools are "Shainin" tools
• "What's in a name?"
  – Positrol versus Control Plan
  – Process Certification versus Process Audit

Tool Details
• Overview of the methods
• More info on B vs C™ and Scatter Plots in the workshops
• Some more detail on:
  – Multi-Vari Chart
  – Paired Comparison™ and Product/Process Search
  – Pre-Control

Clue Generating / Multi-Vari Chart
• Objective: understand the pattern of variation, define areas where not to look for problems, allow a more specific brainstorm.
• Application: problem type is excess variation; wide applicability.
• Principles: divide total variation into categories; search for causes of variation in the biggest category first.
• Sample size: samples taken in production on the current process; could be a big measurement investment.
• Comments: a very useful tool, best applied before brainstorming causes of excess variation.

Multi-Vari Chart
• Breakdown of variation into three families:
  – Positional (within piece, between cavities, …)
  – Cyclical (consecutive units, batch-to-batch, lot-to-lot)
  – Temporal (hour-to-hour, shift-to-shift, …)
• If one family of variation contains a large part of the total variation, we can concentrate on investigating variables related to this family of variation.

Clue Generating / Components Search™
• Comments: the disassembly/reassembly requirement limits application.
• Sample size: 2 units = 1 BOB and 1 WOW
• Principles: select a BOB and a WOW unit; exchange components and observe the behavior. Components that change the behavior are Red X components.
• Application: problem type is an assembly that does not perform to spec. Limitation: disassembly/reassembly must be possible without product change.
• Objective: find the component(s) of an assembly that is (are) responsible for the bad behavior.

Clue Generating / Paired Comparison™
• Comments: a practical application of "let the parts talk".
• Sample size: 5 to 6 pairs of 1 BOB and 1 WOW.
• Principles: select pairs of BOB and WOW units; look for differences; consistent differences are to be investigated further.
• Application: problem type is occasional problems in the production flow.
• Objective: find directions for further investigation.

Paired Comparisons™: Method
• Step 1: take 1 good and 1 bad unit,
  – as close together as possible in time,
  – aiming for BOB and WOW units.
• Step 2: note the differences between these units (visual, dimensional, mechanical, chemical, …). Let the parts talk!
• Step 3: take a second pair of good and bad units and repeat step 2.
• Step 4: repeat this process with a third, fourth, fifth, … pair until a pattern of differences becomes apparent.
• Step 5: don't take inconsistent differences into account.
Generally, after the fifth or sixth pair, the consistent differences that cause the variation become clear.

Clue Generating / Product/Process Search
• Comments: the Tukey test is an alternative to the t-test; a widely applicable method; problem: availability of data (process parameters).
• Sample size: 8 BOB and 8 WOW units or batches.
• Principles: select sets of BOB and WOW units, batches, …; add product data and process parameters and rank; apply the Tukey test to determine the important parameters.
• Application: problem type: various types of problems.
• Objective: preselection of variables out of a large group of potential variables.

Product/Process Search: Example
• Transmission assemblies rejected for noise.
• Components Search shows the idler shaft as the responsible component.
• One of the parameters of the idler shaft is "out of round".
• 8 good and 8 bad units are selected and measured for out-of-round.

Out of round, good units (mm): 0.007, 0.011, 0.019, 0.017, 0.022, 0.014, 0.018, 0.015
Out of round, bad units (mm): 0.017, 0.021, 0.023, 0.024, 0.023, 0.016, 0.018, 0.019

Tukey Test Procedure
• Rank the individual units by the parameter and mark each as Good or Bad.
• Count the number of "all good" (or "all bad") values from one side, and vice versa from the other side.
• Sum both counts.
• Determine the confidence level to evaluate significance.

Tukey test confidence levels (total end count → confidence):
6 → 90%, 7 → 95%, 10 → 99%, 13 → 99.9%

Tukey Test: Example
Ranking all 16 values, the lowest four (0.007, 0.011, 0.014, 0.015) are all good units, giving a top end count of 4; the highest three (0.023, 0.023, 0.024) are all bad units, giving a bottom end count of 3; the values in between form the overlap region.
• Total end count = 4 + 3 = 7.
• So there is 95% confidence that out-of-round of the idler shaft is important in explaining the difference in noise levels.

Formal DOE Tools / Variables Search
• Comments: an alternative to fractional factorials at two levels; the method is comparable to Components Search.
• Sample size: the number of tests is determined by the number of variables and the quality of the ordering.
• Principles: list the variables in order of criticality (process knowledge) and indicate the good/bad level for each; swap factor settings and observe
the behavior. Factors that change the behavior (and interactions) are the Red X™, Pink X™.
• Application: problem type: various types of problems; after clue generating, more than 4 potential variables are left.
• Objective: determine the Red X™, Pink X™, including quantification of their effects.

Formal DOE Tools / Full Factorials
• Comments: a well-established method.
• Sample size: the number of tests is determined by the number of variables k (2^k test combinations).
• Principles: classical DOE with full factorials at two levels; main effects and interactions are calculated.
• Application: problem type: various types of problems; after clue generating, 4 or fewer variables are left.
• Objective: determine the Red X™, Pink X™, including quantification of their effects.

Formal DOE Tools / B(etter) vs C(urrent)™
• Comments: a quick validation that works well with big improvements.
• Sample size: 3 B and 3 C tests (each test can involve several units, a test of variation reduction); all 3 B's must be better than all 3 C's.
• Principles: create the new process using the optimum settings and compare the optimum with the current process.
• Application: problem type: various types of problems.
• Objective: validation of the Red X™, Pink X™.

Optimization / Scatter Plots
• Comments: a graphical method that could easily be transformed into a statistical method.
• Sample size: 30 tests for each critical variable.
• Principles: do tests around the optimum and use graphical regression to set the tolerance.
• Application: problem type: variation reduction and optimizing the signal.
• Objective: fine-tune the best level and a realistic tolerance for the Red X™, Pink X™ if no interactions are present.

Optimization / Response Surface Methods
• Comments: method developed by George Box.
• Sample size: depends on the variables and the surface.
• Principles: Evolutionary Operation (EVOP) to scan the response surface in the direction of steepest ascent.
• Application: problem type: variation reduction and optimizing the signal.
• Objective: fine-tune the best level and a realistic tolerance for the Red X™, Pink X™ if interactions are present.

(EVOP example figure.)

Control / Positrol
• Comments: can be compared
with a Control Plan.
• Sample size: the checking frequency is given in the "When" column.
• Principles: a table of What, How, Who, Where, and When control has to be exercised.
• Application: problem type: all types.
• Objective: assuring that the optimum settings are kept.

Control / Process Certification
• Comments: a mix of 5S, Poka-Yoke, instructions, ISO 9000, audits, …
• Sample size: checking frequency to be determined.
• Principles: make an overview of things that could influence the process and install inspections, audits, …
• Application: problem type: all types.
• Objective: eliminating peripheral causes of poor quality.

Control / Pre-Control
• Comments: an alternative to classical SPC; a traffic-light system; a very practical method.
• Sample size: checking frequency to be determined.
• Principles: divide the total tolerance into colored zones and use the prescribed sampling and rules to control the process.
• Application: problem type: controlling the variation and the setting of the process.
• Objective: continuous checking of the quality of the process output.

Pre-Control: Chart Construction
The tolerance between LSL and USL is divided around the target: the middle half of the tolerance (1/2 TOL) is the green zone, the two outer quarters (1/4 TOL each, still inside the specification limits) are the yellow zones, and anything outside the specification limits is red.

Pre-Control: Use of the Chart
1. Start of the process: five consecutive units in green are needed as validation of the set-up.
2. If this is not possible: improve the process.
3. In production: sample 2 consecutive units.
4. Frequency: the time interval between two stoppages (see the action rules) divided by 6.

Pre-Control: Action Rules
• 2 units in the green zone → continue
• 1 unit in green and 1 unit in a yellow zone → continue
• 2 units in the same yellow zone → correct
• 2 units in different yellow zones → stop and act
• 1 unit in the red zone → stop and act
After an intervention: 5 consecutive units in the green zone.

(Pre-control chart example over time, with start and correction points.)

qsconsult, www.qsconsult.be
Willy Vandenbrande, Master TQM, ASQ Fellow, Six Sigma Black Belt
Montpellier 34B, 8310 Brugge, België (Belgium)
Tel +32 (0)479 36 03 75
E-mail: willy@qsconsult.be, Website: www.qsconsult.be
QS Consult
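As an illustration, two of the procedures above, the Tukey end-count test and the pre-control rules, can be sketched in Python. This is a minimal sketch under simple assumptions: ties are broken by sort order rather than counted as halves, and the helper names are invented for the example.

```python
def tukey_end_count(good, bad):
    """Total end count for the Tukey end-count test.

    Rank all values together; count the run of one group's values at the
    low end plus the run of the other group's values at the high end.
    """
    ranked = sorted([(v, 'G') for v in good] + [(v, 'B') for v in bad])
    labels = [lab for _, lab in ranked]
    low, high = labels[0], labels[-1]
    if low == high:
        return 0  # both extremes from the same group: no usable end count
    top = 0
    while labels[top] == low:
        top += 1
    bottom = 0
    while labels[-1 - bottom] == high:
        bottom += 1
    return top + bottom

def precontrol_zone(x, lsl, usl):
    """Pre-control zone of a measurement: the middle half of the tolerance
    is green, the outer quarters inside the limits are yellow, outside is red."""
    target = (lsl + usl) / 2
    quarter = (usl - lsl) / 4
    if lsl + quarter <= x <= usl - quarter:
        return 'green'
    if lsl <= x <= usl:
        return 'yellow-low' if x < target else 'yellow-high'
    return 'red'

def precontrol_action(z1, z2):
    """Pre-control action rule for a sample of two consecutive units."""
    if 'red' in (z1, z2):
        return 'stop and act'
    if z1.startswith('yellow') and z2.startswith('yellow'):
        return 'correct' if z1 == z2 else 'stop and act'
    return 'continue'
```

On the idler-shaft data from the Product/Process Search example, this gives a total end count of 4 + 3 = 7, corresponding to 95% confidence in the table of confidence levels.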
Samsung Ultrasound System for Veterinary Care

Sophisticated 2D Image Processing

Enhance hidden structures in shadowed regions
ShadowHDR™ selectively applies high-frequency and low-frequency ultrasound to identify shadow areas where attenuation occurs.

Reduce noise to improve 2D image quality
The noise reduction filter enhances the edge contrast and creates sharp 2D images for optimal diagnostic performance. In addition, ClearVision provides application-specific optimization and advanced temporal resolution in live scan mode.

Provide uniform imaging performance over the whole image area
S-Harmonic™ mitigates signal noise, enhances contrast, and provides uniform image performance over the whole image area, from near to far.

Clean up blurry areas in the image
HQ-Vision™ provides clearer images by mitigating the characteristic slight blur of ultrasound images relative to actual vision.

(Image pairs, off/on: Abdomen without/with S-Harmonic™ (a); Kidney without/with ClearVision (b).)

A Comprehensive Selection for Veterinary Care
Samsung ultrasound systems for veterinary care adopt Samsung's pioneering imaging engine, Crystal Architecture™, and boast advanced features, easy-to-use operation, and a dedicated design that improves the routine diagnostic experience.

Detailed Expression of Blood Flow Dynamics

Examine peripheral vessels with directional Power Doppler
S-Flow™, a directional Power Doppler imaging technology, can help to detect even peripheral blood vessels. It enables accurate diagnosis when blood flow examination is especially difficult.

Enriched Diagnostic Features

Display and quantify tissue stiffness with a non-invasive method
S-Shearwave Imaging™ allows for non-invasive assessment of stiff tissues in various applications.
The color-coded elastogram, quantitative measurements, display options, and user-selectable ROI functions are useful for accurate diagnosis of various regions.

Show blood flow in vessels in a 3D-like display
LumiFlow™ is a function that visualizes blood flow in a three-dimensional-like display to help users understand the structure of blood flow and small vessels intuitively.

Visualize slow flow in microvascular structures
MV-Flow™ offers advanced color imaging for visualizing the slow flow of microvascularized structures. High frame rates and advanced filtering enable MV-Flow™ to provide a detailed view of blood flow in relation to surrounding tissue or pathology with enhanced spatial resolution.

(Images: Abdomen with MV-Flow™ (a); Kidney, S-Flow™ with LumiFlow™ (a); Kidney, MV-Flow™ with LumiFlow™ (a).)

Compare previous and current exams in a side-by-side display
EzCompare™ automatically matches the image settings, annotations, and bodymarkers from the prior study.

Optimize the image with one touch of a button
QuickScan™ technology provides intuitive optimization of both grayscale and Doppler parameters.

Build predefined protocols for a streamlined process
EzExam+™ assigns protocols for examinations that are regularly performed in the hospital in order to reduce the number of steps that you have to go through.

Improved Workflow Efficiency and Ergonomic Design
We believe that a truly great system offers customer-centric working conditions. The streamlined workflow supports your daily procedures by reducing keystrokes and by combining multiple actions into one.
Users have the option of customizing their diagnostic settings based on personalized protocols, resulting in a more simplified exam process and faster workflow.

Select transducer and preset combinations in one click
QuickPreset allows the user to select the most common transducer and preset combinations in one click.

Customize frequently used functions on the touchscreen
A customizable touchscreen allows the user to move frequently used functions to the first page.

Save image data directly to USB memory
The QuickSave function allows image data to be saved directly to USB memory during the exam.

Real-time image sharing, discussion, and remote control of the ultrasound system*
SonoSync™ is a real-time ultrasound image sharing solution that allows voice communication and remote controllability for effective collaboration between physicians and sonographers at different locations. (* Availability may vary by product and country.)

Use the system when AC power is temporarily unavailable
BatteryAssist™ provides battery power to the system, enabling users to perform scans when AC power is temporarily unavailable.

Tilt the touchscreen to accommodate user preference
Samsung's tilting touchscreen can be adjusted to accommodate the user's viewing preferences in any scanning environment.

(Sample images: Liver (a), Kidney (d), Bowel (b), Bladder (b), Small Intestine (a), Kidney (a), Cardiac (c), Cardiac Color Mode (b), Cardiac (a), Cardiac M Mode (c), Spleen (a), Liver (b).)

Transducers: CA4-10M, LA2-14A, PA1-5A, PA3-8B, PA4-12B

RS85 Prestige
The Real Revolution
RS85 Prestige has been revolutionized with novel diagnostic features across each application, based on its preeminent imaging performance.
The advanced intelligent technologies help you confirm with confidence in challenging cases, while the easy-to-use system supports your effort in routine scanning.

Sophisticated 2D Image Processing: ShadowHDR™, HQ-Vision™, PureVision™, S-Harmonic™, ClearVision, MultiVision
Detailed Expression of Blood Flow Dynamics: MV-Flow™ 1, S-Flow™, LumiFlow™ 1, CEUS+ 1
Enriched Diagnostic Features & Interventional Solutions: ElastoScan™ 1, 2, S-Shearwave™ 1, S-Shearwave Imaging™ 1, Strain+ 1, Panoramic+, NeedleMate™
Enhanced Productivity and Facilitated Workflow: EzExam+™, EzPrep™, QuickScan™, QuickPreset, Touch Customization, SonoSync™ 3
Ergodynamics for Your Comfort: 6-Way Control Panel, Central Lock, Maneuverable Wheels, Gel Warmer, 23.8-inch LCD Monitor, 14-inch Tilting Touch Screen

Transducers: CA4-10M, LA2-14A, PA1-5A, PA3-8B

V8
Step up Confidence
The V8 ultrasound system combines exquisite imaging quality powered by Crystal Architecture™ with efficient, streamlined examination enabled by Intelligent Assist tools and a re-engineered workflow to fulfill the needs of today's busy clinical environment.

Redefined Imaging Technologies: powered by Crystal Architecture™ (CrystalBeam™, CrystalLive™)
Sophisticated 2D Image Processing & Detailed Color Expression: ShadowHDR™, HQ-Vision™ 1, ClearVision, S-Flow™, MV-Flow™ 1, LumiFlow™ 1
Enriched Diagnostic Features: ElastoScan™ 1, 2, S-Shearwave™ 1, S-Shearwave Imaging™ 1, Strain+ 1, Panoramic+, NeedleMate™
Re-engineered Workflow and Enhanced Customization: TouchEdit, QuickPreset, Expanded View, EzCompare™, EzExam+ 1, SonoSync™ 3
Comfort Design: 14-inch Tilting Touchscreen, 23.8-inch LCD Monitor, Contextual Button, QuickSave, BatteryAssist™ 1, Cooling System, Adjustable Control Panel, Transducer Cable Hook, Gel Warmer

HS60
Focus on Your Needs
Extend diagnostic boundaries with the versatile HS60 ultrasound system. It enables users to diagnose various anatomies with accuracy using dedicated and advanced features.

Sophisticated 2D Image Processing: HQ-Vision™, S-Harmonic™,
ClearVision, MultiVision
Detailed Expression of Blood Flow Dynamics: MV-Flow™ 1, S-Flow™, LumiFlow™ 1
Enriched Diagnostic Features & Interventional Solutions: ElastoScan™ 1, 2, Strain+ 1, Panoramic+, NeedleMate™
Enhanced Productivity and Facilitated Workflow: EzExam+™, EzCompare™, QuickScan™, QuickPreset, BatteryAssist™
Ergodynamics for Your Comfort: Adjustable Control Panel, Transducer Cable Hangers, Gel Warmer 1, SSD Drive, BatteryAssist™ 1, 21.5-inch LCD Monitor, 10.1-inch Touch Screen

Transducers: CA4-10M, LA3-16AD, PA3-8B, PA4-12B

HS50
Simple yet Powerful
The Samsung Ultrasound HS50 is the practical choice for superior imaging and enhanced workflow. Experience Samsung's advanced imaging technologies, including S-Harmonic™, and other solutions.

Sophisticated 2D Image Processing: HQ-Vision™, S-Harmonic™, ClearVision, MultiVision
Detailed Expression of Blood Flow Dynamics: S-Flow™, LumiFlow™ 1
Enriched Diagnostic Features & Interventional Solutions: ElastoScan™ 1, 2, Strain+ 1, Panoramic+ 1, NeedleMate™
Enhanced Productivity and Facilitated Workflow: EzExam+™, EzCompare™, QuickScan™, QuickPreset, BatteryAssist™
Ergodynamics for Your Comfort: Adjustable Control Panel, Transducer Cable Hangers, Gel Warmer 1, SSD Drive, BatteryAssist™ 1, 21.5-inch LCD Monitor, 10.1-inch Touch Screen

Samsung's advanced yet budget-friendly tools, previously exclusive to our premium platforms, can enhance exam capabilities in various applications for efficient and effective care.

Sophisticated 2D Image Processing: HQ-Vision™, S-Harmonic™, ClearVision, MultiVision
Detailed Expression of Blood Flow Dynamics: S-Flow™
Enriched Diagnostic Features & Interventional Solutions: ElastoScan™ 1, 2, Strain+ 1, Panoramic+ 1, NeedleMate™
Enhanced Productivity and Facilitated Workflow: EzExam+™, EzCompare™, QuickScan™, QuickPreset, MeasureNavigation
Ergodynamics for Your Comfort: Adjustable Control Panel, Gas Lift, Side Storage 1, Rear Tray 1, Gel Warmer 1, SSD Drive, BatteryAssist™ 1, Print Cover 1, 21.5-inch LCD Monitor, 10.1-inch Touch
Screen

Transducers: CA4-10M

HS30
Value for Basic
HS30 delivers a clear view, and its basic tools are equipped to provide effective care and support the necessary examinations with versatile features.

Sophisticated 2D Image Processing: S-Harmonic™, ClearVision, MultiVision
Detailed Expression of Blood Flow Dynamics: S-Flow™
Enriched Diagnostic Features & Interventional Solutions: ElastoScan™ 1, 2, Strain+ 1, Panoramic+ 1, NeedleMate™
Enhanced Productivity and Facilitated Workflow: EzExam+™, EzCompare™, QuickScan™
Ergodynamics for Your Comfort: Side Storage 1, Rear Tray 1, Transducer Cable Hangers, Gel Warmer 1, Keyboard & Keyskin 1, 21.5-inch LCD Monitor

Transducer: SP3-8

HM70 EVO
Mobile Excellence
The HM70 EVO ultrasound system is a high-performance hand-carried ultrasound system, evolved to support a diverse range of applications and patients. The system offers streamlined workflow, durability, and high-resolution imaging that can be used in a variety of clinical situations.

Sophisticated 2D Image Processing: HQ-Vision™, S-Harmonic™, ClearVision, MultiVision
Detailed Expression of Blood Flow Dynamics: S-Flow™
Enriched Diagnostic Features & Interventional Solutions: ElastoScan™ 1, 2, Strain+ 1, Panoramic+, NeedleMate™
Enhanced Productivity and Facilitated Workflow: EzExam+™, EzCompare™, QuickScan™, QuickSave
Ergodynamics for Your Comfort: Premium Cart (Gas Lift, On-Cart Power Outlets), Extended Transducer Ports, Extended Battery, Keyskin, Side Storage, Front Handle, Carrier Package, 15-inch LCD Monitor

About Samsung Medison Co., Ltd.
Samsung Medison, an affiliate of Samsung Electronics, is a global medical company founded in 1985. With a mission to bring health and well-being to people's lives, the company manufactures diagnostic ultrasound systems around the world across various medical fields.
Samsung Medison commercialized Live 3D technology in 2001 and, since becoming part of Samsung Electronics in 2011, has been integrating IT, image processing, semiconductor, and communication technologies into ultrasound devices for efficient and confident diagnosis.

* The products, features, options, and transducers may not be commercially available in some countries.
* Sales and shipments are effective only after approval by regulatory affairs. Please contact your local sales representative for further details.
* This product is a medical device; please read the user manual carefully before use.
* Prestige is not a product name but a marketing term.
1. Optional feature which may require additional purchase.
2. The strain value for ElastoScan+™ is not applicable in Canada and the United States.
3. SonoSync™ is an image sharing solution.
a. Image acquired by the RS85 Prestige ultrasound system.
b. Image acquired by the V8 ultrasound system.
c. Image acquired by the HS60 ultrasound system.
d. Image acquired by the HS40 ultrasound system.

© 2021 Samsung Medison. All Rights Reserved.
Samsung Medison reserves the right to modify the design, packaging, specifications, and features shown herein, without prior notice or obligation.

SAMSUNG MEDISON CO., LTD.
2018年第37卷第1期 CHEMICAL INDUSTRY AND ENGINEERING PROGRESS·343·化 工 进展变负荷工况下NO x 排放量预测控制唐振浩,张海洋,曹生现(东北电力大学自动化工程学院,吉林 吉林 132012)摘要:NO x 是火电厂排放的主要污染物之一,降低NO x 的排放是火电厂面临的主要问题。
针对火电厂变负荷工况下的NO x 排放量最小化问题,本文提出了一种基于最小二乘支持向量机(LSSVM )的非线性模型预测控制算法。
根据电站锅炉实际历史数据建立锅炉负荷预测模型和NO x 排放预测模型,并以交叉验证的方法优化模型参数,从而获得高精度模型。
在此基础上以NO x 的排放量最小为优化目标,考虑锅炉负荷约束,构建锅炉燃烧优化模型。
采用差分进化算法求解优化模型得到控制参数的最优设定值。
为了验证本文提出算法的有效性,采用实际生产数据进行实验。
实验结果表明本方法能够在变负荷工况下有效降低NO x 排放量,在不增加电厂改造成本上,为电厂提供了有效的控制手段,具有一定应用前景。
关键词:煤燃烧;优化;氮氧化物;差分算法;最小二乘支持向量机;模型预测控制中图分类号:TK224 文献标志码:A 文章编号:1000–6613(2018)01–0343–07 DOI :10.16085/j.issn.1000-6613.2017-0716Model predictive control of NO x emission under variable load conditionTANG Zhenhao ,ZHANG Haiyang ,CAO Shengxian(School of Automation Engineering ,Northeast Electric Power University ,Jilin 132012,Jilin ,China )Abstract: NO x is one of the main pollutants for coal-fired power plant emissions. The main problemfor the plants today is reducing NO x emission. A nonlinear model predictive control method based on least square support vector machine (LSSVM )is proposed in this paper to solve the boiler NO x emission minimization problem considering varying load in coal-fired power plants. The boiler load model and NO x emissions model are constructed based on practical data. And then, the model parameters can be optimized by cross validation to obtain accuracy models. Based on these models, the boiler combustion optimization model is constructed. The optimization model aiming at minimizing the NO x emission considers the boiler load as a constraint. This optimization model is solved to obtain the optimal control variable settings by different evolution (DE )algorithm. To testify the effectiveness of the proposed approach, the experiments based on real operational data are designed. The experiments results illustrate that the proposed method could reduce NO x emissions effectively under varying load. It provides an effective means at no additional cost and has a certain application prospect.Key words :coal combustion ;optimization ;nitrogen oxide ;differential evolution algorithm ;least squares support vector ;model-predictive control为了解决我国面临的严峻的环境污染问题,由中华人民共和国环境保护部发布的《火电厂大气污染物排放标准》中要求自2012年1月1日起除个别地区外,火电厂NO x 的排放量不得高于100mg/m 3。
doi:10.3969/j.issn.1003-3114.2024.02.018引用格式:王浩博,吴伟,周福辉,等.智能反射面增强的多无人机辅助语义通信资源优化[J].无线电通信技术,2024,50(2): 366-372.[WANG Haobo,WU Wei,ZHOU Fuhui,et al.Optimization of Resource Allocation for Intelligent Reflecting Surface-enhanced Multi-UAV Assisted Semantic Communication[J].Radio Communications Technology,2024,50(2):366-372.]智能反射面增强的多无人机辅助语义通信资源优化王浩博1,吴㊀伟1,2∗,周福辉2,胡㊀冰3,田㊀峰1(1.南京邮电大学通信与信息工程学院,江苏南京210003;2.南京航空航天大学电子信息工程学院,江苏南京211106;3.南京邮电大学现代邮政学院,江苏南京210003)摘㊀要:无人机(Unmanned Aerial Vehicle,UAV)为无线通信系统提供了具有高成本效益的解决方案㊂进一步地,提出了一种新颖的智能反射面(Intelligent Reflecting Surface,IRS)增强多UAV语义通信系统㊂该系统包括配备IRS的UAV㊁移动边缘计算(Mobile Edge Computing,MEC)服务器和具有数据收集与局部语义特征提取功能的UAV㊂通过IRS 优化信号反射显著改善了UAV与MEC服务器的通信质量㊂所构建的问题涉及多UAV轨迹㊁IRS反射系数和语义符号数量联合优化,以最大限度地减少传输延迟㊂为解决该非凸优化问题,本文引入了深度强化学习(Deep Reinforce Learn-ing,DRL)算法,包括对偶双深度Q网络(Dueling Double Deep Q Network,D3QN)用于解决离散动作空间问题,如UAV轨迹优化和语义符号数量优化;深度确定性策略梯度(Deep Deterministic Policy Gradient,DDPG)用于解决连续动作空间问题,如IRS反射系数优化,以实现高效决策㊂仿真结果表明,与各个基准方案相比,提出的智能优化方案性能均有所提升,特别是在发射功率较小的情况下,且对于功率的变化,所提出的智能优化方案展示了良好的稳定性㊂关键词:无人机网络;智能反射面;语义通信;资源分配中图分类号:TN925㊀㊀㊀文献标志码:A㊀㊀㊀开放科学(资源服务)标识码(OSID):文章编号:1003-3114(2024)02-0366-07Optimization of Resource Allocation for Intelligent ReflectingSurface-enhanced Multi-UAV Assisted Semantic CommunicationWANG Haobo1,WU Wei1,2∗,ZHOU Fuhui2,HU Bing3,TIAN Feng1(1.School of Communications and Information Engineering,Nanjing University of Posts and Telecommunications,Nanjing210003,China;2.College of Electronic and Information Engineering,Nanjing University of Aeronautics and Astronautics,Nanjing211106,China;3.School of Modern Posts,Nanjing University of Posts and Telecommunications,Nanjing210003,China)Abstract:Unmanned Aerial Vehicles(UAV)present a cost-effective solution for wireless communication systems.This article introduces a novel Intelligent Reflecting Surface(IRS)to augment the semantic communication system among multiple UAVs.The system encompasses UAV equipped with IRS,Mobile Edge Computing(MEC)servers,and UAV featuring data 
collection and local semantic feature extraction functions.Optimizing signal reflection through IRS significantly enhances communication quality between drones and MEC servers.The formulated problem entails joint optimization of multiple drone trajectories,IRS reflection coefficients,and the number of semantic symbols to minimize transmission delays.To address this non-convex optimization problem,this paper introduces a Deep收稿日期:2023-12-31基金项目:国家重点研发计划(2020YFB1807602);国家自然科学基金(62271267);广东省促进经济发展专项资金(粤自然资合[2023]24号);国家自然科学基金(青年项目)(62302237)Foundation Item:National K&D Program of China(2020YFB1807602);National Natural Science Foundation of China(62271267);Key Program of Marine Economy Development Special Foundation of Department of Natural Resources of Guangdong Province(GDNRC[2023]24);National Natural Sci-ence Foundation of China(Young Scientists Fund)(62302237)ReinforcementLearning(DRL)algorithm.Specifically,theDuelingDoubleDeepQNetwork(D3QN)isemployedtoaddressdiscreteactionspaceproblemssuchasdronetrajectoryandsemanticsymbolquantityoptimization.Additionally,DeepDeterministicPolicyGra dient(DDPG)algorithmisutilizedtosolvecontinuousactionspaceproblems,suchasIRSreflectioncoefficientoptimization,enablingefficientdecision making.Simulationresultsdemonstratethattheproposedintelligentoptimizationschemeoutperformsvariousbenchmarkschemes,particularlyinscenarioswithlowtransmissionpower.Furthermore,theintelligentoptimizationschemeproposedinthispaperexhibitsrobuststabilityinresponsetopowerchanges.Keywords:UAVnetwork;IRS;semanticcommunication;resourceallocation0 引言当前技术飞速发展的背景下,无人机(UnmannedAerialVehicle,UAV)已经成为无线通信系统中一种重要的技术[1]。
Comparison of Multiobjective Evolutionary Algorithms: Empirical Results

Eckart Zitzler
Department of Electrical Engineering, Swiss Federal Institute of Technology, 8092 Zurich, Switzerland
zitzler@tik.ee.ethz.ch

Kalyanmoy Deb
Department of Mechanical Engineering, Indian Institute of Technology Kanpur, Kanpur, PIN 208016, India
deb@iitk.ac.in

Lothar Thiele
Department of Electrical Engineering, Swiss Federal Institute of Technology, 8092 Zurich, Switzerland
thiele@tik.ee.ethz.ch

Abstract
In this paper, we provide a systematic comparison of various evolutionary approaches to multiobjective optimization using six carefully chosen test functions. Each test function involves a particular feature that is known to cause difficulty in the evolutionary optimization process, mainly in converging to the Pareto-optimal front (e.g., multimodality and deception). By investigating these different problem features separately, it is possible to predict the kind of problems to which a certain technique is or is not well suited. However, in contrast to what was suspected beforehand, the experimental results indicate a hierarchy of the algorithms under consideration. Furthermore, the emerging effects are evidence that the suggested test functions provide sufficient complexity to compare multiobjective optimizers. Finally, elitism is shown to be an important factor for improving evolutionary multiobjective search.

Keywords
Evolutionary algorithms, multiobjective optimization, Pareto optimality, test functions, elitism.

1 Motivation

Evolutionary algorithms (EAs) have become established as the method at hand for exploring the Pareto-optimal front in multiobjective optimization problems that are too complex to be solved by exact methods, such as linear programming and gradient search. This is not only because there are few alternatives for searching intractably large spaces for multiple Pareto-optimal solutions. Due to their inherent parallelism and their capability to exploit similarities of solutions by recombination, they are able to
approximate the Pareto-optimal front in a single optimization run. The numerous applications and the rapidly growing interest in the area of multiobjective EAs take this fact into account.

© 2000 by the Massachusetts Institute of Technology. Evolutionary Computation 8(2): 173-195.

After the first pioneering studies on evolutionary multiobjective optimization appeared in the mid-eighties (Schaffer, 1984, 1985; Fourman, 1985), several different EA implementations were proposed in the years 1991-1994 (Kursawe, 1991; Hajela and Lin, 1992; Fonseca and Fleming, 1993; Horn et al., 1994; Srinivas and Deb, 1994). Later, these approaches (and variations of them) were successfully applied to various multiobjective optimization problems (Ishibuchi and Murata, 1996; Cunha et al., 1997; Valenzuela-Rendón and Uresti-Charre, 1997; Fonseca and Fleming, 1998; Parks and Miller, 1998). In recent years, some researchers have investigated particular topics of evolutionary multiobjective search, such as convergence to the Pareto-optimal front (Van Veldhuizen and Lamont, 1998a; Rudolph, 1998), niching (Obayashi et al., 1998), and elitism (Parks and Miller, 1998; Obayashi et al., 1998), while others have concentrated on developing new evolutionary techniques (Laumanns et al., 1998; Zitzler and Thiele, 1999). For a thorough discussion of evolutionary algorithms for multiobjective optimization, the interested reader is referred to Fonseca and Fleming (1995), Horn (1997), Van Veldhuizen and Lamont (1998b), and Coello (1999).

In spite of this variety, there is a lack of studies that compare the performance and different aspects of these approaches. Consequently, the question arises: which implementations are suited to which sort of problem, and what are the specific advantages and drawbacks of different techniques? First steps in this direction have been made in both theory and practice. On the theoretical side, Fonseca and Fleming (1995) discussed the influence of different fitness assignment strategies on the selection process. On the
practical side, Zitzler and Thiele (1998, 1999) used an NP-hard 0/1 knapsack problem to compare several multiobjective EAs.

In this paper, we provide a systematic comparison of six multiobjective EAs, including a random search strategy as well as a single-objective EA using objective aggregation. The basis of this empirical study is formed by a set of well-defined, domain-independent test functions that allow the investigation of independent problem features. We thereby draw upon results presented in Deb (1999), where problem features that may make convergence of EAs to the Pareto-optimal front difficult are identified and, furthermore, methods of constructing appropriate test functions are suggested. The functions considered here cover the range of convexity, nonconvexity, discrete Pareto fronts, multimodality, deception, and biased search spaces. Hence, we are able to systematically compare the approaches based on different kinds of difficulty and to determine more exactly where certain techniques are advantageous or have trouble. In this context, we also examine further factors such as population size and elitism.

The paper is structured as follows: Section 2 introduces key concepts of multiobjective optimization and defines the terminology used in this paper mathematically. We then give a brief overview of the multiobjective EAs under consideration with special emphasis on the differences between them. The test functions, their construction, and their choice are the subject of Section 4, which is followed by a discussion about performance metrics to assess the quality of trade-off fronts. Afterwards, we present the experimental results in Section 6 and investigate further aspects like elitism (Section 7) and population size (Section 8) separately. A discussion of the results as well as future perspectives are given in Section 9.

2 Definitions

Optimization problems involving multiple, conflicting objectives are often approached by aggregating the objectives into a scalar function and solving the resulting
single-objective optimization problem. In contrast, in this study, we are concerned with finding a set of optimal trade-offs, the so-called Pareto-optimal set. In the following, we formalize this well-known concept and also define the difference between local and global Pareto-optimal sets.

A multiobjective search space is partially ordered in the sense that two arbitrary solutions are related to each other in two possible ways: either one dominates the other or neither dominates.

DEFINITION 1: Let us consider, without loss of generality, a multiobjective minimization problem with $m$ decision variables (parameters) and $n$ objectives:

Minimize $y = f(x) = (f_1(x), \ldots, f_n(x))$, where $x = (x_1, \ldots, x_m) \in X$ and $y = (y_1, \ldots, y_n) \in Y$,   (1)

and where $x$ is called the decision vector, $X$ the parameter space, $y$ the objective vector, and $Y$ the objective space. A decision vector $a \in X$ is said to dominate a decision vector $b \in X$ (also written as $a \succ b$) if and only if

$\forall i \in \{1, \ldots, n\}: f_i(a) \le f_i(b) \ \wedge\ \exists j \in \{1, \ldots, n\}: f_j(a) < f_j(b).$   (2)

Additionally, in this study, we say $a$ covers $b$ ($a \succeq b$) if and only if $a \succ b$ or $f(a) = f(b)$.

Based on the above relation, we can define nondominated and Pareto-optimal solutions:

DEFINITION 2: Let $a \in X$ be an arbitrary decision vector.

1. The decision vector $a$ is said to be nondominated regarding a set $A \subseteq X$ if and only if there is no vector in $A$ which dominates $a$; formally

$\nexists\, a' \in A : a' \succ a.$   (3)

If it is clear within the context which set $A$ is meant, we simply leave it out.

2. The decision vector $a$ is Pareto-optimal if and only if $a$ is nondominated regarding $X$.

Pareto-optimal decision vectors cannot be improved in any objective without causing a degradation in at least one other objective; they represent, in our terminology, globally optimal solutions. However, analogous to single-objective optimization problems, there may also be local optima which constitute a nondominated set within a certain neighborhood. This corresponds to the concepts of global and local Pareto-optimal sets introduced by Deb (1999):

DEFINITION 3: Consider a set of decision vectors $A' \subseteq X$.

1. The set $A'$ is denoted as a local Pareto-optimal set if and only if

$\forall a' \in A': \nexists\, a \in X : a \succ a' \ \wedge\ \|a - a'\| < \epsilon \ \wedge\ \|f(a) - f(a')\| < \delta,$   (4)

where $\|\cdot\|$ is a corresponding distance metric and $\epsilon > 0$, $\delta > 0$. A slightly modified
definition of local Pareto optimality is given here.

2. The set $A'$ is called a global Pareto-optimal set if and only if

$\forall a' \in A': \nexists\, a \in X : a \succ a'.$   (5)

Note that a global Pareto-optimal set does not necessarily contain all Pareto-optimal solutions. If we refer to the entirety of the Pareto-optimal solutions, we simply write "Pareto-optimal set"; the corresponding set of objective vectors is denoted as the "Pareto-optimal front".

3 Evolutionary Multiobjective Optimization

Two major problems must be addressed when an evolutionary algorithm is applied to multiobjective optimization:

1. How to accomplish fitness assignment and selection, respectively, in order to guide the search towards the Pareto-optimal set.

2. How to maintain a diverse population in order to prevent premature convergence and achieve a well distributed trade-off front.

Often, different approaches are classified with regard to the first issue, where one can distinguish between criterion selection, aggregation selection, and Pareto selection (Horn, 1997). Methods performing criterion selection switch between the objectives during the selection phase. Each time an individual is chosen for reproduction, potentially a different objective will decide which member of the population will be copied into the mating pool. Aggregation selection is based on the traditional approaches to multiobjective optimization where the multiple objectives are combined into a parameterized single objective function.
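To make the dominance relation concrete, the following sketch (hypothetical helper names; assumes minimization as in Definition 1) tests dominance between objective vectors and filters a set down to its nondominated members per Definition 2:

```python
def dominates(a, b):
    """True iff objective vector a dominates b (Definition 1, minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(vectors):
    """Return the members of `vectors` not dominated by any other member
    (Definition 2, with the set A taken as `vectors` itself)."""
    return [v for v in vectors if not any(dominates(w, v) for w in vectors)]

front = nondominated([(1, 4), (2, 2), (3, 3), (4, 1)])
# (3, 3) is dominated by (2, 2); the remaining three are mutually nondominated
```

Note that `dominates(a, a)` is false, so identical vectors never eliminate each other; this matches the strict relation of Equation 2.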
The parameters of the resulting function are systematically varied during the same run in order to find a set of Pareto-optimal solutions. Finally, Pareto selection makes direct use of the dominance relation from Definition 1; Goldberg (1989) was the first to suggest a Pareto-based fitness assignment strategy.

In this study, six of the most salient multiobjective EAs are considered, where for each of the above categories, at least one representative was chosen. Nevertheless, there are many other methods that may be considered for the comparison (cf. Van Veldhuizen and Lamont (1998b) and Coello (1999) for an overview of different evolutionary techniques):

Among the class of criterion selection approaches, the Vector Evaluated Genetic Algorithm (VEGA) (Schaffer, 1984, 1985) has been chosen. Although some serious drawbacks are known (Schaffer, 1985; Fonseca and Fleming, 1995; Horn, 1997), this algorithm has been a strong point of reference up to now. Therefore, it has been included in this investigation.

The EA proposed by Hajela and Lin (1992) is based on aggregation selection in combination with fitness sharing (Goldberg and Richardson, 1987), where an individual is assessed by summing up the weighted objective values. As weighted-sum aggregation appears still to be widespread due to its simplicity, Hajela and Lin's technique has been selected to represent this class of multiobjective EAs.

Pareto-based techniques seem to be most popular in the field of evolutionary multiobjective optimization (Van Veldhuizen and Lamont, 1998b). In particular, the algorithm presented by Fonseca and Fleming (1993), the Niched Pareto Genetic Algorithm (NPGA) (Horn and Nafpliotis, 1993; Horn et al., 1994), and the Nondominated Sorting Genetic Algorithm (NSGA) (Srinivas and Deb, 1994) appear to have achieved the most attention in the EA literature and have been used in various studies. Thus, they are also considered here. Furthermore, a recent elitist Pareto-based
strategy, the Strength Pareto Evolutionary Algorithm (SPEA) (Zitzler and Thiele, 1999), which outperformed four other multiobjective EAs on an extended 0/1 knapsack problem, is included in the comparison.

4 Test Functions for Multiobjective Optimizers

Deb (1999) has identified several features that may cause difficulties for multiobjective EAs in 1) converging to the Pareto-optimal front and 2) maintaining diversity within the population. Concerning the first issue, multimodality, deception, and isolated optima are well-known problem areas in single-objective evolutionary optimization. The second issue is important in order to achieve a well distributed nondominated front. However, certain characteristics of the Pareto-optimal front may prevent an EA from finding diverse Pareto-optimal solutions: convexity or nonconvexity, discreteness, and nonuniformity. For each of the six problem features mentioned, a corresponding test function is constructed following the guidelines in Deb (1999). We thereby restrict ourselves to only two objectives in order to investigate the simplest case first. In our opinion, two objectives are sufficient to reflect essential aspects of multiobjective optimization. Moreover, we do not consider maximization or mixed minimization/maximization problems.

Each of the test functions defined below is structured in the same manner and consists itself of three functions $f_1$, $g$, $h$ (Deb, 1999, 216):

Minimize $\mathcal{T}(x) = (f_1(x_1),\, f_2(x))$
subject to $f_2(x) = g(x_2, \ldots, x_m) \cdot h(f_1(x_1),\, g(x_2, \ldots, x_m))$,   (6)
where $x = (x_1, \ldots, x_m)$.

The function $f_1$ is a function of the first decision variable only, $g$ is a function of the remaining $m-1$ variables, and the parameters of $h$ are the function values of $f_1$ and $g$. The test functions differ in these three functions as well as in the number of variables $m$ and in the values the variables may take.

DEFINITION 4: We introduce six test functions $\mathcal{T}_1, \ldots, \mathcal{T}_6$ that follow the scheme given in Equation 6:

The test function $\mathcal{T}_1$ has a convex Pareto-optimal front:

$f_1(x_1) = x_1, \quad g(x_2, \ldots, x_m) = 1 + 9 \cdot \sum_{i=2}^{m} x_i / (m-1), \quad h(f_1, g) = 1 - \sqrt{f_1 / g},$   (7)

where $m = 30$ and $x_i \in [0, 1]$. The Pareto-optimal front is formed with $g(x) = 1$.

The test function $\mathcal{T}_2$ is the nonconvex counterpart to $\mathcal{T}_1$:

$f_1(x_1) = x_1, \quad g(x_2, \ldots, x_m) = 1 + 9 \cdot \sum_{i=2}^{m} x_i / (m-1), \quad h(f_1, g) = 1 - (f_1 / g)^2,$   (8)

where $m = 30$ and $x_i \in [0, 1]$. The Pareto-optimal front is formed with $g(x) = 1$.

The test function $\mathcal{T}_3$ represents the discreteness feature; its Pareto-optimal front consists of several noncontiguous convex parts:

$f_1(x_1) = x_1, \quad g(x_2, \ldots, x_m) = 1 + 9 \cdot \sum_{i=2}^{m} x_i / (m-1), \quad h(f_1, g) = 1 - \sqrt{f_1 / g} - (f_1 / g) \sin(10 \pi f_1),$   (9)

where $m = 30$ and $x_i \in [0, 1]$. The Pareto-optimal front is formed with $g(x) = 1$. The introduction of the sine function in $h$ causes discontinuity in the Pareto-optimal front. However, there is no discontinuity in the parameter space.

The test function $\mathcal{T}_4$ contains $21^9$ local Pareto-optimal fronts and, therefore, tests for the EA's ability to deal with multimodality:

$f_1(x_1) = x_1, \quad g(x_2, \ldots, x_m) = 1 + 10(m-1) + \sum_{i=2}^{m} \left( x_i^2 - 10 \cos(4 \pi x_i) \right), \quad h(f_1, g) = 1 - \sqrt{f_1 / g},$   (10)

where $m = 10$, $x_1 \in [0, 1]$, and $x_2, \ldots, x_m \in [-5, 5]$. The global Pareto-optimal front is formed with $g(x) = 1$, the best local Pareto-optimal front with $g(x) = 1.25$. Note that not all local Pareto-optimal sets are distinguishable in the objective space.

The test function $\mathcal{T}_5$ describes a deceptive problem and distinguishes itself from the other test functions in that $x_i$ represents a binary string:

$f_1(x_1) = 1 + u(x_1), \quad g(x_2, \ldots, x_m) = \sum_{i=2}^{m} v(u(x_i)), \quad h(f_1, g) = 1 / f_1,$   (11)

where $u(x_i)$ gives the number of ones in the bit vector $x_i$ (unitation), and

$v(u(x_i)) = 2 + u(x_i)$ if $u(x_i) < 5$, and $v(u(x_i)) = 1$ if $u(x_i) = 5$,

with $m = 11$, $x_1 \in \{0,1\}^{30}$, and $x_2, \ldots, x_m \in \{0,1\}^5$. The true Pareto-optimal front is formed with $g(x) = 10$, while the best deceptive Pareto-optimal front is represented by the solutions for which $g(x) = 11$. The global Pareto-optimal front as well as the local ones are convex.

The test function $\mathcal{T}_6$ includes two difficulties caused by the nonuniformity of the search space: first, the Pareto-optimal solutions are nonuniformly distributed along the global Pareto front (the front is biased for solutions for which $f_1(x_1)$ is near one); second, the density of the solutions is lowest near the Pareto-optimal front and highest away from the front:

$f_1(x_1) = 1 - \exp(-4 x_1) \sin^6(6 \pi x_1), \quad g(x_2, \ldots, x_m) = 1 + 9 \cdot \left( \left( \sum_{i=2}^{m} x_i \right) / (m-1) \right)^{0.25}, \quad h(f_1, g) = 1 - (f_1 / g)^2,$   (12)

where $m = 10$ and $x_i \in [0, 1]$. The Pareto-optimal front is formed with $g(x) = 1$ and is nonconvex.

We will discuss each function in more detail in Section 6, where the corresponding Pareto-optimal fronts are visualized as well (Figures 1-6).

5 Metrics of Performance

Comparing different optimization techniques experimentally always involves the notion of performance. In the case of multiobjective optimization, the definition of quality is substantially
more complex than for single-objective optimization problems, because the optimization goal itself consists of multiple objectives:

- The distance of the resulting nondominated set to the Pareto-optimal front should be minimized.
- A good (in most cases uniform) distribution of the solutions found is desirable. The assessment of this criterion might be based on a certain distance metric.
- The extent of the obtained nondominated front should be maximized, i.e., for each objective, a wide range of values should be covered by the nondominated solutions.

In the literature, some attempts can be found to formalize the above definition (or parts of it) by means of quantitative metrics. Performance assessment by means of weighted-sum aggregation was introduced by Esbensen and Kuh (1996). Thereby, a set $A$ of decision vectors is evaluated regarding a given linear combination by determining the minimum weighted-sum of all corresponding objective vectors of $A$. Based on this concept, a sample of linear combinations is chosen at random (with respect to a certain probability distribution), and the minimum weighted-sums for all linear combinations are summed up and averaged. The resulting value is taken as a measure of quality. A drawback of this metric is that only the "worst" solution determines the quality value per linear combination.
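The Esbensen and Kuh style assessment just described can be sketched as follows; a minimal sketch with hypothetical names, assuming minimization and uniformly random, normalized weight vectors (the original work leaves the sampling distribution open):

```python
import random

def weighted_sum_quality(front, n_samples=1000, n_objectives=2, seed=0):
    """Average, over randomly sampled weight vectors, of the minimum
    weighted sum achieved by any objective vector in `front` (lower is better)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        w = [rng.random() for _ in range(n_objectives)]
        s = sum(w)                       # normalize so the weights sum to one
        w = [wi / s for wi in w]
        # only the best (minimum) weighted sum per weight vector counts
        total += min(sum(wi * fi for wi, fi in zip(w, v)) for v in front)
    return total / n_samples
```

The drawback noted above is visible in the inner `min`: for each weight vector, a single solution determines the contribution, so the distribution and extent of the front are ignored.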
Although several weight combinations are used, nonconvex regions of the trade-off surface contribute to the quality more than convex parts and may, as a consequence, dominate the performance assessment. Finally, the distribution, as well as the extent of the nondominated front, is not considered.

Another interesting means of performance assessment was proposed by Fonseca and Fleming (1996). Given a set $A$ of nondominated solutions, a boundary function divides the objective space into two regions: the objective vectors for which the corresponding solutions are not covered by $A$ and the objective vectors for which the associated solutions are covered by $A$. They call this particular function, which can also be seen as the locus of the family of tightest goal vectors known to be attainable, the attainment surface. Taking multiple optimization runs into account, a method is described to compute a median attainment surface by using auxiliary straight lines and sampling their intersections with the attainment surfaces obtained. As a result, the samples represented by the median attainment surface can be relatively assessed by means of statistical tests and, therefore, allow comparison of the performance of two or more multiobjective optimizers. A drawback of this approach is that it remains unclear how the quality difference can be expressed, i.e., how much better one algorithm is than another. However, Fonseca and Fleming describe ways of meaningful statistical interpretation in contrast to the other studies considered here, and furthermore, their methodology seems to be well suited to visualization of the outcomes of several runs.

In the context of investigations on convergence to the Pareto-optimal front, some authors (Rudolph, 1998; Van Veldhuizen and Lamont, 1998a) have considered the distance of a given set to the Pareto-optimal set in the same way as the function $\mathcal{M}_1$ defined below. The distribution was not taken into account, because the focus was not on this
matter. However, in comparative studies, distance alone is not sufficient for performance evaluation, since extremely differently distributed fronts may have the same distance to the Pareto-optimal front.

Two complementary metrics of performance were presented in Zitzler and Thiele (1998, 1999). On one hand, the size of the dominated area in the objective space is taken under consideration; on the other hand, a pair of nondominated sets is compared by calculating the fraction of each set that is covered by the other set. The area combines all three criteria (distance, distribution, and extent) into one, and therefore, sets differing in more than one criterion may not be distinguished. The second metric is in some way similar to the comparison methodology proposed in Fonseca and Fleming (1996). It can be used to show that the outcomes of an algorithm dominate the outcomes of another algorithm, although it does not tell how much better it is. We give its definition here, because it is used in the remainder of this paper.

DEFINITION 5: Let $A, B \subseteq X$ be two sets of decision vectors. The function $\mathcal{C}$ maps the ordered pair $(A, B)$ to the interval $[0, 1]$:

$\mathcal{C}(A, B) = \dfrac{\left| \{\, b \in B \mid \exists\, a \in A : a \succeq b \,\} \right|}{|B|}.$   (13)

The value $\mathcal{C}(A, B) = 1$ means that all solutions in $B$ are dominated by or equal to solutions in $A$. The opposite, $\mathcal{C}(A, B) = 0$, represents the situation when none of the solutions in $B$ are covered by the set $A$. Note that both $\mathcal{C}(A, B)$ and $\mathcal{C}(B, A)$ have to be considered, since $\mathcal{C}(A, B)$ is not necessarily equal to $1 - \mathcal{C}(B, A)$.

In summary, it may be said that performance metrics are hard to define and it probably will not be possible to define a single metric that allows for all criteria in a meaningful way. Along with that problem, the statistical interpretation associated with a performance comparison is rather difficult and still needs to be answered, since multiple significance tests are involved, and thus, tools from analysis of variance may be required.

In this study, we have chosen a visual presentation of the results together with the application of the metric from Definition 5. The reason for this is that
we would like to investigate 1) whether the test functions can adequately test specific aspects of each multiobjective algorithm and 2) whether any visual hierarchy of the chosen algorithms exists. However, for a deeper investigation of some of the algorithms (which is the subject of future work), we suggest the following metrics that allow assessment of each of the criteria listed at the beginning of this section separately.

DEFINITION 6: Given a set $A' \subseteq X$ of pairwise nondominating decision vectors, a neighborhood parameter $\sigma > 0$ (to be chosen appropriately), and a distance metric $\|\cdot\|$. We introduce three functions to assess the quality of $A'$ regarding the parameter space:

1. The function $\mathcal{M}_1$ gives the average distance to the Pareto-optimal set $\bar{X} \subseteq X$:

$\mathcal{M}_1(A') = \dfrac{1}{|A'|} \sum_{a' \in A'} \min \{ \|a' - \bar{x}\| \mid \bar{x} \in \bar{X} \}.$   (14)

(Recently, an alternative metric has been proposed in Zitzler (1999) in order to overcome this problem.)

2. The function $\mathcal{M}_2$ takes the distribution in combination with the number of nondominated solutions found into account:

$\mathcal{M}_2(A') = \dfrac{1}{|A'| - 1} \sum_{a' \in A'} \left| \{ b' \in A' \mid \|a' - b'\| > \sigma \} \right|.$   (15)

3. The function $\mathcal{M}_3$ considers the extent of the front described by $A'$:

$\mathcal{M}_3(A') = \sqrt{\sum_{i=1}^{m} \max \{ \|a'_i - b'_i\| \mid a', b' \in A' \}}.$   (16)

Analogously, we define three metrics $\mathcal{M}_1^*$, $\mathcal{M}_2^*$, and $\mathcal{M}_3^*$ on the objective space. Let $A'^*$ and $\bar{Y}^*$ be the sets of objective vectors that correspond to $A'$ and $\bar{X}$, respectively, and $\sigma^* > 0$ and $\|\cdot\|$ as before:

$\mathcal{M}_1^*(A'^*) = \dfrac{1}{|A'^*|} \sum_{a^* \in A'^*} \min \{ \|a^* - \bar{y}\| \mid \bar{y} \in \bar{Y}^* \},$   (17)

$\mathcal{M}_2^*(A'^*) = \dfrac{1}{|A'^*| - 1} \sum_{a^* \in A'^*} \left| \{ b^* \in A'^* \mid \|a^* - b^*\| > \sigma^* \} \right|,$   (18)

$\mathcal{M}_3^*(A'^*) = \sqrt{\sum_{i=1}^{n} \max \{ \|a^*_i - b^*_i\| \mid a^*, b^* \in A'^* \}}.$   (19)

While $\mathcal{M}_1$ and $\mathcal{M}_1^*$ are intuitive, $\mathcal{M}_2$ and $\mathcal{M}_3$ (respectively $\mathcal{M}_2^*$ and $\mathcal{M}_3^*$) need further explanation. The distribution metrics give a value within the interval $[0, |A'|]$ ($[0, |A'^*|]$) that reflects the number of $\sigma$-niches ($\sigma^*$-niches) in $A'$ ($A'^*$). Obviously, the higher the value, the better the distribution for an appropriate neighborhood parameter (e.g., $\mathcal{M}_2^*(A'^*) = |A'^*|$ means that for each objective vector there is no other objective vector within $\sigma^*$-distance to it). The functions $\mathcal{M}_3$ and $\mathcal{M}_3^*$ use the maximum extent in each dimension to estimate the range to which the front spreads out. In the case of two objectives, this equals the distance of the two outer solutions.

6 Comparison of Different Evolutionary Approaches

6.1 Methodology

We compare eight algorithms on the six proposed test functions:

1. A random search algorithm.
2. Fonseca and
Fleming's multiobjective EA (FFGA).
3. The Niched Pareto Genetic Algorithm (NPGA).
4. Hajela and Lin's weighted-sum based approach (HLGA).
5. The Vector Evaluated Genetic Algorithm (VEGA).
6. The Nondominated Sorting Genetic Algorithm (NSGA).
7. A single-objective evolutionary algorithm using weighted-sum aggregation (SOEA).
8. The Strength Pareto Evolutionary Algorithm (SPEA).

The multiobjective EAs, as well as the random search algorithm (RAND), were executed 30 times on each test problem, where the population was monitored for nondominated solutions, and the resulting nondominated set was taken as the outcome of one optimization run. Here, RAND serves as an additional point of reference and randomly generates a certain number of individuals per generation according to the rate of crossover and mutation (but neither crossover and mutation nor selection are performed). Hence, the number of fitness evaluations was the same as for the EAs. In contrast, 100 simulation runs were considered in the case of SOEA, each run optimizing towards another randomly chosen linear combination of the objectives. The nondominated solutions among all solutions generated in the 100 runs form the trade-off front achieved by SOEA on a particular test function.

Independent of the algorithm and the test function, each simulation run was carried out using the following parameters:

Number of generations: 250
Population size: 100
Crossover rate: 0.8
Mutation rate: 0.01
Niching parameter σ_share: 0.48862
Domination pressure t_dom: 10

The niching parameter was calculated using the guidelines given in Deb and Goldberg (1989) assuming the formation of ten independent niches. Since HLGA uses genotypic fitness sharing, a different value of σ_share was chosen for this particular case.
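Fitness sharing, which several of the compared EAs rely on, degrades an individual's fitness by its niche count. A minimal sketch of the standard sharing scheme (Goldberg and Richardson, 1987), with hypothetical names and Euclidean distance assumed; the exact distance (genotypic vs. phenotypic) is what σ_share is tuned to:

```python
import math

def sharing(d, sigma_share, alpha=1.0):
    """Triangular sharing function: 1 at distance 0, falling to 0 at sigma_share."""
    return 1.0 - (d / sigma_share) ** alpha if d < sigma_share else 0.0

def shared_fitness(population, raw_fitness, sigma_share):
    """Divide each raw fitness by its niche count (sum of sharing values
    over the whole population, including the individual itself)."""
    out = []
    for i, x in enumerate(population):
        niche = sum(sharing(math.dist(x, y), sigma_share) for y in population)
        out.append(raw_fitness[i] / niche)  # niche >= 1, since sharing(0) = 1
    return out
```

Two individuals closer than σ_share split their fitness; isolated individuals keep theirs, which is what spreads the population across niches.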
Concerning NPGA, the recommended value for t_dom of 10% of the population size was taken (Horn and Nafpliotis, 1993). Furthermore, for reasons of fairness, SPEA ran with a population size of 80 where the external nondominated set was restricted to 20.

Regarding the implementations of the algorithms, one chromosome was used to encode the parameters of the corresponding test problem. Each parameter is represented by 30 bits; the parameters x_2, ..., x_m only comprise 5 bits for the deceptive function T_5. Moreover, all approaches except VEGA were realized using binary tournament selection with replacement in order to avoid effects caused by different selection schemes. Furthermore, since fitness sharing may produce chaotic behavior in combination with tournament selection, a slightly modified method is incorporated here, named continuously updated sharing (Oei et al., 1991). As VEGA requires a generational selection mechanism, stochastic universal sampling was used in the implementation.

6.2 Simulation Results

In Figures 1-6, the nondominated fronts achieved by the different algorithms are visualized.
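As a concrete companion to the plots, the real-valued test functions of Definition 4 can be sketched directly from Equations 7-12 (a minimal sketch; the function names are ours, and the binary deceptive function T_5 is omitted since it operates on bit strings):

```python
import math

def t1(x):  # convex front; m = 30, x_i in [0, 1]
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
    return f1, g * (1 - math.sqrt(f1 / g))

def t2(x):  # nonconvex counterpart of t1
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
    return f1, g * (1 - (f1 / g) ** 2)

def t3(x):  # discontinuous front
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
    return f1, g * (1 - math.sqrt(f1 / g) - (f1 / g) * math.sin(10 * math.pi * f1))

def t4(x):  # multimodal; m = 10, x_1 in [0, 1], the rest in [-5, 5]
    f1 = x[0]
    g = 1 + 10 * (len(x) - 1) + sum(xi ** 2 - 10 * math.cos(4 * math.pi * xi) for xi in x[1:])
    return f1, g * (1 - math.sqrt(f1 / g))

def t6(x):  # biased, nonuniform; m = 10, x_i in [0, 1]
    f1 = 1 - math.exp(-4 * x[0]) * math.sin(6 * math.pi * x[0]) ** 6
    g = 1 + 9 * (sum(x[1:]) / (len(x) - 1)) ** 0.25
    return f1, g * (1 - (f1 / g) ** 2)

# On the Pareto-optimal front of t1, g = 1 and f2 = 1 - sqrt(f1):
f1, f2 = t1([0.25] + [0.0] * 29)
# f2 == 1 - sqrt(0.25) == 0.5
```

Setting x_2, ..., x_m to zero gives g = 1 in each case (for t4, the Rastrigin-style sum vanishes at zero), which places the point on the corresponding Pareto-optimal front.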
Per algorithm and test function, the outcomes of the first five runs were unified, and then the dominated solutions were removed from the union set; the remaining points are plotted in the figures. Also shown are the Pareto-optimal fronts (lower curves), as well as additional reference curves (upper curves). The latter curves allow a more precise evaluation of the obtained trade-off fronts and were calculated by adding the range max f_2 - min f_2 of the Pareto-optimal points to their f_2 values. The space between Pareto-optimal and reference fronts represents a comparable fraction of the corresponding objective space. However, the curve resulting from the deceptive function T_5 is not appropriate for our purposes, since it lies above the fronts produced by the random search algorithm. Instead, we consider all solutions for which the parameters x_2, ..., x_m are set to the deceptive attractors (u(x_i) = 0 for i = 2, ..., m).

Figure 1: Test function T_1 (convex); fronts achieved by RAND, FFGA, NPGA, HLGA, VEGA, NSGA, SOEA, and SPEA.
Figure 2: Test function T_2 (nonconvex).
Figure 3: Test function T_3 (discrete).
Figure 4: Test function T_4 (multimodal).
Figure 5: Test function T_5 (deceptive).
Figure 6: Test function T_6 (nonuniform).

In addition to the graphical presentation, the different algorithms were assessed in pairs using the metric from Definition 5. For each ordered algorithm pair, there is a sample of 30 values according to the 30 runs performed. Each value is computed on the basis of the nondominated sets achieved by the two algorithms with the same initial population. Here, box plots are used to visualize the distribution of these samples (Figure 7). A box plot consists of a box summarizing 50% of the data. The upper and lower ends of the box are the upper and lower quartiles, while a thick line within the box encodes the median.
Dashed appendages summarize the spread and shape of the distribution. Furthermore, the shortcut in Figure 7 stands for "reference set" and represents, for each test function, a set of equidistant points that are uniformly distributed on the corresponding reference curve. Generally, the simulation results show that all multiobjective EAs do better than the random search algorithm. However, the box plots reveal that , , and do not always cover the randomly created trade-off front completely. Furthermore, it can be observed that clearly outperforms the other nonelitist multiobjective EAs regarding both distance to the Pareto-optimal front and distribution of the nondominated solutions. This confirms the results presented in Zitzler and Thiele (1998). Furthermore, it is remarkable that performs well compared to and , although some serious drawbacks of this approach are known (Fonseca and Fleming, 1995). The reason for this might be that we consider the off-line performance here, in contrast to other studies that examine the on-line performance (Horn and Nafpliotis, 1993; Srinivas and Deb, 1994). On-line performance means that only the nondominated solutions in the final population are considered as the outcome, while off-line performance takes into account the solutions nondominated among all solutions generated during the entire optimization run. Finally, the best performance is provided by , which makes explicit use of the concept of elitism.
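The pairwise assessment used here measures how much of one nondominated set is covered by another. A minimal sketch of such a coverage-style metric (assuming minimization; the function names are illustrative, not the paper's code):

```python
def weakly_dominates(a, b):
    """True if objective vector a is no worse than b in every objective
    (minimization)."""
    return all(x <= y for x, y in zip(a, b))

def coverage(set_a, set_b):
    """Fraction of points in set_b that are weakly dominated by some
    point in set_a -- a C(A, B)-style pairwise comparison value."""
    if not set_b:
        return 0.0
    covered = sum(any(weakly_dominates(a, b) for a in set_a) for b in set_b)
    return covered / len(set_b)
```

Note the metric is not symmetric: coverage(A, B) and coverage(B, A) must both be computed, which is why the box plots show values for ordered algorithm pairs.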
Apart from , it even outperforms in spite of substantially lower computational effort, and although uses an elitist strategy as well. This observation leads to the question of whether elitism would increase the performance of the other multiobjective EAs. We will investigate this matter in the next section. Considering the different problem features separately, convexity seems to cause the least amount of difficulty for the multiobjective EAs. All algorithms evolved reasonably distributed fronts, although there was a difference in the distance to the Pareto-optimal set. On the nonconvex test function, however, , , and have difficulties finding intermediate solutions, as linear combinations of the objectives tend to prefer solutions strong in at least one objective (Fonseca and Fleming, 1995, 4). Pareto-based algorithms have advantages here, but only and evolved a sufficient number of nondominated solutions. In the case of (discreteness), and are superior to both and . While the fronts achieved by the former cover about of the reference set on average, the latter come up with coverage. Among the considered test functions, and seem to be the hardest problems, since none of the algorithms was able to evolve a global Pareto-optimal set. The results on the multimodal problem indicate
(Note that outside values are not plotted in Figure 7 in order to prevent overloading of the presentation.)
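The difficulty that linear objective combinations have on nonconvex fronts can be reproduced with a tiny numeric sketch (my own construction, not from the paper): on a concave sampled front, every weighted-sum minimizer is an extreme point, so intermediate trade-offs are never selected.

```python
# Nonconvex (concave) minimization front sampled as f2 = 1 - f1**2, f1 in [0, 1].
front = [(x, 1 - x**2) for x in [i / 10 for i in range(11)]]

def weighted_sum_argmin(points, w):
    """Index of the point minimizing the linear scalarization w*f1 + (1-w)*f2."""
    return min(range(len(points)),
               key=lambda i: w * points[i][0] + (1 - w) * points[i][1])

# Sweep the weight over [0, 1]: only the two extremes of the front ever win,
# which is exactly the preference for solutions strong in one objective.
winners = {weighted_sum_argmin(front, w / 20) for w in range(21)}
```

Pareto-based selection does not share this limitation, since dominance comparisons do not collapse the objectives into a single line.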
Journal of Chongqing Technology and Business University (Natural Science Edition), Vol. 41, No. 2, April 2024

Multi-objective Particle Swarm Optimization with Elite Competition and Comprehensive Control

CHEN Fei 1, LIU Yanmin 2, LIU Jun 3, ZHANG Xianzi 3
1. School of Mathematics and Statistics, Guizhou University, Guiyang 550025, China
2. School of Mathematics, Zunyi Normal University, Zunyi, Guizhou 563006, China
3. School of Data Science and Information Engineering, Guizhou Minzu University, Guiyang 550025, China

CLC number: TP18; Document code: A; doi: 10.16055/j.issn.1672-058X.2024.0002.010; Article ID: 1672-058X(2024)02-0074-12
Received: 2022-12-27; revised: 2023-03-02
Funding: Guizhou Provincial Key Laboratory of Evolutionary Artificial Intelligence ([2022]059); Guizhou Provincial Key Talent Program for the Digital Economy (2022001)
About the authors: CHEN Fei (1999-), female, from Zunyi, Guizhou; master's student working on optimization theory and intelligent algorithms. Corresponding author: LIU Yanmin (1978-), male, from Linyi, Shandong; professor and doctoral supervisor working on optimization theory and intelligent algorithms. Email: yanmin7813@.
Citation: CHEN Fei, LIU Yanmin, LIU Jun, et al. Multi-objective particle swarm optimization with elite competition and comprehensive control [J]. Journal of Chongqing Technology and Business University (Natural Science Edition), 2024, 41(2): 74-85.

Abstract: Objective: Although multi-objective particle swarm optimization is easy to implement and has a fast convergence speed, it still needs to be further improved in the aspect of balancing convergence and
diversity. Methods: To solve the above problem, a multi-objective particle swarm optimization with elite competition and comprehensive control (ECMOPSO) was proposed. On one hand, the algorithm selects the elite particle set with global detriment, and pairwise competitions are then introduced into the multi-objective particle swarm optimization. The winner particles are selected through elite competition and fused with the global leaders to form more comprehensive social information, so as to enhance the information interaction between particles in the population, better guide the flight of particles in the population, and improve the global exploration ability of the algorithm. On the other hand, global detriment and shifted-based density estimation are combined to maintain the external archive, so as to improve the quality of non-dominated solutions in the external archive and balance the convergence and diversity of the algorithm. Results: The ECMOPSO algorithm, four multi-objective particle swarm optimization algorithms, and four multi-objective evolutionary algorithms were simulated on the ZDT and UF series benchmark problems, and the Wilcoxon rank-sum test and the Friedman rank test were used to compare the overall performance of the ECMOPSO algorithm with the selected comparison algorithms. The experimental results showed that, compared with the other comparison algorithms, the convergence ability, solution distribution, and stability of the ECMOPSO algorithm all improved to some extent. Conclusion: The ECMOPSO algorithm can balance convergence and diversity well, improves overall performance, and can effectively solve most multi-objective optimization problems.

Keywords: multi-objective particle swarm optimization; elite competition; comprehensive control; global detriment; shifted-based density estimation

1 Introduction

Because the objectives of multi-objective optimization problems (Multi-objective Optimization
Problems, MOPs) [1] conflict with one another, improving one objective may degrade the others, so what is usually obtained for an MOP is a set of compromise solutions, called Pareto-optimal solutions [1]. Particle swarm optimization (PSO) [2] converges quickly, has few parameters, and is easy to implement, and it performs well on single-objective optimization problems; researchers have since extended it to MOPs with good results. Coello et al. [3] proposed multi-objective particle swarm optimization (MOPSO) in 2002. The algorithm uses an external archive to store all non-dominated solutions found during the search, providing good candidates for the global best solution (gbest), which together with the personal best solution (pbest) guides the particles of the population and thus accelerates convergence. During optimization, however, MOPSO has weak global search ability, which easily leads to premature convergence into local optima and to poor diversity. Many researchers have therefore proposed improved MOPSO algorithms for these problems [4-8]. Han et al. [4] maintained the external archive with a multi-objective gradient method to improve convergence speed and local exploitation during evolution, but the method shows no clear advantage on some multimodal problems. Li et al. [5] proposed a dominant-difference method for judging the dominance relation between solutions and designed a density estimator based on the Lp norm, giving the algorithm good convergence and diversity at low complexity, but it still falls short on problems with complex Pareto fronts (Pareto Front, PF). Wu et al. [6] proposed a new evolutionary-state-estimation mechanism that adaptively switches the leader-selection strategy to balance exploitation and exploration during evolution, but its performance is unstable on some MOPs. Han et al. [7] proposed an adaptive algorithm based on a hybrid framework of solution-distribution entropy and population-spacing information, improving convergence speed and accuracy at some extra computational cost. Zhu et al. [8] used a decomposition method to convert an MOP into a set of subproblems and evolved the individuals in the external archive with an immune-based evolutionary strategy, which helps speed up convergence but does not balance convergence and diversity. In summary, these algorithms improve convergence and diversity to some degree, but effectively balancing the two still needs improvement. To further address this problem, this paper proposes a multi-objective particle swarm optimization with elite competition and comprehensive control (ECMOPSO). First, to improve the search ability of the population, an elite particle set is selected using global detriment. Second, MOPSO is combined with pairwise competition: a winner particle is selected through elite competition and fused with gbest to better guide the particles of the population. Finally, a comprehensive balance control strategy combining global detriment and shifted-based density estimation maintains the external archive, effectively balancing the convergence and diversity of the algorithm.

2 Related Work

2.1 Multi-objective optimization problems

A general multi-objective optimization problem [9] can be stated as:

min F(x) = (f1(x), f2(x), ..., fM(x))
s.t. gi(x) >= 0, i = 1, 2, ..., q
     hj(x) = 0, j = 1, 2, ..., r

where x = (x1, x2, ..., xD) is a D-dimensional decision vector in the decision space X, a subset of R^D; F(x) is the M-dimensional objective vector; fi(x) is the i-th objective function; and gi(x) >= 0 (i = 1, 2, ..., q) and hj(x) = 0 (j = 1, 2, ..., r) are the inequality and equality constraints, respectively.

2.2 Particle swarm optimization

In the PSO algorithm [2], each particle updates its velocity by tracking two leaders, pbest and gbest. pbest records the best position each particle has found in its own search history, while gbest records the best position found so far by the whole swarm. Let the velocity vector of the i-th particle be V_i = (v_{i,1}, v_{i,2}, ..., v_{i,D}) and its position vector X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}), where i = 1, 2, ..., N, N is the swarm size, and D is the dimension of the decision space. At iteration t+1, particle i updates its velocity and position by Eqs. (1) and (2):

V_i(t+1) = w*V_i(t) + c1*r1*(pbest_i(t) - X_i(t)) + c2*r2*(gbest_i(t) - X_i(t))   (1)
X_i(t+1) = X_i(t) + V_i(t+1)   (2)

where t is the iteration counter, w is the inertia weight, c1 and c2 are acceleration coefficients controlling the influence of pbest and gbest on the particle velocity, and r1 and r2 are two random numbers generated uniformly in [0, 1].

2.3 Competitive swarm optimizer

The competitive swarm optimizer (CSO), proposed by Cheng et al. [10], is inspired by PSO but has low computational cost, and it differs from traditional PSO in essence. CSO no longer uses pbest and gbest
to guide the particles' flight; instead, it updates particle velocities and positions through a pairwise competitive learning mechanism. Experimental results show that CSO balances convergence and diversity better than PSO. At iteration t+1, after the k-th competition, the loser particle updates its velocity and position as follows:

V_{l,k}(t+1) = r1*V_{l,k}(t) + r2*(X_{w,k}(t) - X_{l,k}(t)) + phi*r3*(Xbar_k(t) - X_{l,k}(t))
X_{l,k}(t+1) = X_{l,k}(t) + V_{l,k}(t+1)

where t is the iteration counter; k = 1, 2, ..., N/2, with N the swarm size; r1, r2, and r3 are three random numbers generated uniformly in [0, 1]; V_{l,k}(t) and X_{l,k}(t) are the velocity and position of the loser of the k-th competition at iteration t; X_{w,k}(t) and Xbar_k(t) are, respectively, the position of the winner of the k-th competition and the mean position of all particles in the swarm at iteration t; and phi is a parameter controlling the influence of Xbar_k(t).

3 Design of the MOPSO Algorithm

To better balance the convergence and diversity of the algorithm, so that particles can escape local optima and converge to the global optimum in higher-dimensional search spaces, this paper proposes the improved ECMOPSO algorithm. Compared with the original MOPSO, ECMOPSO updates the particles with an elite-competition social fusion strategy and maintains the external archive with a comprehensive balance control strategy, effectively balancing convergence and diversity.

3.1 Social fusion with elite competition

In most metaheuristic algorithms, effectively balancing convergence and diversity is a crucial issue. In standard MOPSO, every particle in the population learns from the two leaders pbest and gbest, and only a single global leader is chosen for the "social learning" component. In this situation, particles cannot learn more useful experience from diverse exemplars, which reduces the diversity of the population. For this reason, ECMOPSO incorporates the pairwise competition of CSO into the particle update to achieve a balance between convergence and diversity.

First, an elite particle set E of predefined size r is created to collect non-dominated solutions with good overall quality. Since the elite set supplies the candidates from which pairwise competition selects leaders to guide the search of the population, the selected elite particles should themselves keep a good balance between convergence and diversity. This paper therefore combines non-dominated-sorting-based and indicator-based ranking to select the elite particles. Global detriment (GD) [11] accumulates the absolute differences between solutions across the objectives and thus measures how pronounced those differences are, so it is chosen as the selection principle for the elites in E; the top-r particles are finally taken as the elite particle set. GD is defined as follows:

R_GD(NS_i) = (1/(N*M)) * sum_{j=1..N} sum_{m=1..M} max(f_m(NS_i) - f_m(NS_j), 0) / (f_{m,max} - f_{m,min})

where NS_i is the i-th non-dominated solution; N is the number of non-dominated solutions; M is the number of objectives; f_m(.) is the fitness function of the m-th objective; and f_{m,max} and f_{m,min} are the maximum and minimum values on the m-th objective.

Next, two elite particles are randomly selected from the elite set E to compete; the winner particle e is fused with gbest to guide the particles' flight in the population, which helps to better achieve the balance between convergence and diversity. Let V_i = (v_{i,1}, v_{i,2}, ..., v_{i,D}) be the velocity vector of the i-th particle, X_i = (x_{i,1}, x_{i,2}, ..., x_{i,D}) its position vector, and e_i = (e_{i,1}, e_{i,2}, ..., e_{i,D}) the position vector of the winner particle, where i = 1, 2, ..., N, N is the number of particles in the swarm, and D is the dimension of the decision space. At iteration t+1, particle i updates its velocity and position by Eqs. (3) and (2):

V_i(t+1) = w*V_i(t) + c1*r1*(pbest_i(t) - X_i(t)) + c2*r2*((gbest_i(t) + e_i(t))/2 - X_i(t))   (3)

where t is the iteration counter, w is the inertia weight, pbest and gbest are the personal best and global best solutions, c1 and c2 are acceleration coefficients controlling the influence of pbest and gbest on the particle velocity, and r1 and r2 are two random numbers generated uniformly in [0, 1].

3.2 Comprehensive balance control

In most multi-objective particle swarm algorithms, as the number of iterations grows, the external archive must be maintained effectively in order to retain the well-performing non-dominated solutions found during the search and to finally obtain a well-distributed Pareto front. The MOPSO [3] algorithm deletes non-dominated solutions from the archive with an adaptive grid strategy, randomly removing particles from the densest cell; this is likely to delete some relatively well-performing non-dominated solutions and lowers the convergence of the algorithm. This paper therefore proposes a comprehensive balance control strategy combining GD and shifted-based density estimation (Shifted-based Density
Estimation, SDE) [12], whose value is denoted R_SDE, to estimate the density of solutions in the archive and delete solutions in dense cells of the adaptive grid, ensuring that the non-dominated solutions in the external archive keep a good distribution. SDE covers both the convergence information and the distribution information of an individual. When measuring the similarity between a non-dominated solution NS_i in the external archive and another non-dominated solution NS_j, the position of NS_j in objective space is first shifted according to the convergence comparison between NS_i and NS_j, following Eq. (4):

f'_m(NS_j) = f_m(NS_j), if f_m(NS_i) < f_m(NS_j);  f_m(NS_i), if f_m(NS_i) >= f_m(NS_j)   (4)

where i, j = 1, 2, ..., n with i != j, n is the current number of non-dominated solutions in the external archive; m = 1, 2, ..., M, with M the number of objectives; f_m(NS_i) and f_m(NS_j) are the m-th objective values of NS_i and NS_j; and f'_m(NS_j) is the objective value of NS_j after the shift.

After the shift, the similarity between NS_i and NS_j is measured by the Euclidean distance in Eq. (5), where d_{i,j} is the distance between NS_i and the shifted NS_j; the density D(NS_i) of NS_i is then computed by Eq. (6):

d_{i,j} = sqrt( sum_{m=1..M} (f'_m(NS_j) - f_m(NS_i))^2 )   (5)
D(NS_i) = min(d_{i,1}, d_{i,2}, ..., d_{i,n})   (6)

The comprehensive balance control (CBC) indicator for maintaining the external archive, denoted R_CBC, is described next. As shown in Eq. (7), R_CBC is the ratio of the density computed by R_SDE to R_GD; it evaluates the non-dominated solutions in the external archive more comprehensively, and a larger R_CBC value indicates better convergence and diversity of a solution:

R_CBC(NS_i) = D(NS_i) / R_GD(NS_i)   (7)

To verify the performance of the R_CBC indicator when maintaining the external archive, Figure 1 shows a bi-objective example of deleting the densest particle using the indicators R_SDE, R_GD, and R_CBC, respectively. Suppose the densest grid cell contains six non-dominated solutions: N1 (0.0547, 0.7131), N2 (0.0550, 0.7111), N3 (0.0551, 0.7110), N4 (0.0553, 0.7102), N5 (0.0556, 0.7081), and N6 (0.0559, 0.7058). Table 1 lists the ranking values computed by R_SDE, R_GD, and R_CBC, with the value of the solution to be deleted shown in bold. N2 and N3 obtain the same R_SDE value, and one of them is deleted at random; evaluating with R_GD deletes the extreme point N6, which leads to an uneven distribution of the non-dominated solutions; R_CBC combines the advantages of the two indicators and deletes the relatively crowded solution N3. Figure 1 shows intuitively that both R_SDE and R_CBC yield fairly uniform distributions, but R_CBC deletes the particle with poor overall quality more precisely and balances the convergence and diversity of the algorithm more effectively.

Figure 1: Illustrations of deleting particles in the external archive by R_SDE, R_GD and R_CBC, respectively.

Table 1: The ranking values obtained by R_SDE, R_GD and R_CBC

Indicator   N1       N2       N3       N4       N5       N6
R_SDE       0.0003   0.0001   0.0001   0.0003   0.0003   0.0023
R_GD        0.2203   0.1270   0.1363   0.1506   0.1860   0.2639
R_CBC       0.0014   0.0008   0.0007   0.0020   0.0016   0.0087

3.3 Main procedure of the algorithm

The concrete steps of the algorithm are as follows:

Step 1: Set the relevant parameters of ECMOPSO and initialize the particle positions.
Step 2: Apply the mutation operation to the particles and compute their fitness values.
Step 3: Select the elite particle set E by combining non-dominated sorting and indicator-based ranking.
Step 4: Build the external archive and store the non-dominated solutions. If the archive has not reached its maximum size, keep adding non-dominated solutions; if it has, maintain the archive with comprehensive balance control.
Step 5: Select pbest and gbest, and choose the winner particle e by pairwise competition.
Step 6: Update the velocities and positions of the particles by Eqs. (3) and (2).
Step 7: If the current iteration count has not reached the maximum, return to Step 2; otherwise stop and output the final solution set.

Figure 2: Flow chart of the ECMOPSO algorithm.

4 Experiments and Analysis

4.1 Parameter settings

To evaluate the performance of ECMOPSO more objectively, 15 test problems from two different benchmark suites, ZDT [13] and UF [14], were used: five bi-objective problems from the ZDT suite, and seven bi-objective and three tri-objective problems from the UF suite. These test problems have diverse characteristics and complex Pareto-front features, such as convexity/concavity, multimodality, and irregular Pareto-front shapes, and can therefore verify the reliability and efficiency of the algorithm well. For the bi-objective problems, the number of decision variables is set to 30 for ZDT1-ZDT3 and UF1-UF7 and to 10 for ZDT4 and ZDT6; for the tri-objective problems UF8-UF10, it is set to 30. In addition, eight algorithms were selected for comparison: four advanced multi-objective particle swarm optimizers, SMPSO [15], dMOPSO [16], MPSOD [17], and NMPSO [18], and four competitive multi-objective evolutionary algorithms, NSGAIII [19], MOEADD [20], SPEAR [21], and DGEA [22]. To ensure fairness, all comparison algorithms use the parameter values from their original references; the main parameter settings of each algorithm are listed in Table 2.

Table 2: Parameter settings of ECMOPSO and several comparison algorithms

Algorithm   Parameter settings
SMPSO       w in [0.1, 0.5], c1, c2 in [1.5, 2.5], pm = 1/n, eta_m = 20
dMOPSO      Ta = 2, theta = 5
MPSOD       w in [0.1, 0.5], c1, c2, c3 in [1.5, 2.5], pc = 0.9, F = 0.5, CR = 0.5, pm = 1/n, eta_m = 20, eta_c = 20
NMPSO       w in [0.1, 0.5], c1, c2, c3 in [1.5, 2.5], pm = 1/n, eta_m = 20
NSGAIII     pm = 1/n, pc = 1.0, eta_m = 20, eta_c = 20
MOEADD      pm = 1/n, pc = 1.0, eta_m = 20, eta_c = 30, T = 20, delta = 0.9
SPEAR       pm = 1/n, pc = 1.0, eta_m = 20, eta_c = 20
DGEA        R = 10
ECMOPSO     w = 0.4, c1 = c2 = 2, r = 30

All algorithms use a population size of 200, an external archive of size 200, and a maximum of 10000 function evaluations. The parameter r, the size of the elite particle set, is analyzed in Section 4.5. Each algorithm was run independently 30 times on every test problem, and all experimental data were produced with the PlatEMO platform [23] and MATLAB R2021b on a system with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz and Windows 7.

4.2 Performance indicators

To evaluate the performance of the algorithms, the inverted generational distance (IGD [24], denoted R_IGD) and the hypervolume (HV [25], denoted R_HV) are used as performance indicators.

(1) R_IGD is a comprehensive indicator measuring the distance between the Pareto-optimal set obtained by an algorithm and the true PF, and thus checks both convergence and diversity; a smaller R_IGD value indicates better convergence and diversity. It is computed as

R_IGD(P, S) = (1/|P|) * sum_{x in P} dist(x, S)

where S is the Pareto-optimal set obtained by the algorithm, P is a set of solutions uniformly distributed on the Pareto front, and dist(x, S) is the minimum Euclidean distance between a solution x in P and the set S.

(2) R_HV is also a comprehensive performance indicator, measuring the volume of the objective-space region enclosed by the obtained Pareto-optimal set and a reference point; it estimates the convergence and diversity of the obtained solution set, and a larger R_HV value indicates better convergence and diversity. With Z = (Z1, Z2, ..., Zm) a reference point in objective space dominated by all Pareto-optimal solutions, R_HV is computed as

R_HV(S) = delta( union_{x in S} [f1(x), z1] x ... x [fm(x), zm] )

where S is the Pareto-optimal set obtained by the algorithm and delta denotes the Lebesgue measure.

4.3 Experimental results and analysis

Tables 3 and 4 give the means (Mean) and standard deviations (Std) of the R_IGD and R_HV values of ECMOPSO and the other eight comparison algorithms on the 15 test problems, with the best R_IGD and R_HV value for each problem shown in bold. In addition, the Wilcoxon rank-sum test at significance level alpha = 0.05 is used to mark significant differences between the results: the symbols "+", "-", and "~" indicate that another algorithm performs significantly better than, significantly worse than, or statistically similar to ECMOPSO, respectively.
值Table 3㊀R IGD values of ECMOPSO and eight comparison algorithms on fifteen test problems测试问题R IGD SMPSOdMOPSOMPSODNMPSONSGAIIIMOEADD SPEARDGEA ECMOPSOZDT1Mean StdWilcoxon 9.3128e -2(8.76e -2)- 5.7021e -2(1.72e -2)- 1.0539e -1(4.48e -2)- 2.9320e -2(1.09e -2)- 1.0075e -1(1.58e -2)- 1.2071e -1(1.84e -2)- 1.7041e -1(2.27e -2)- 1.1171e +0(2.91e -1)- 2.8807e -3(1.31e -4)ZDT2Mean Std Wilcoxon 5.3523e -2(1.07e -1)- 4.3899e -2(1.26e -2)- 1.7007e -1(1.02e -1)-1.9366e -2(6.61e -3)- 2.0184e -1(3.66e -2)- 1.5338e -1(2.65e -2)- 3.7517e -1(1.38e -1)-8.7982e -1(3.45e -1)- 3.4428e -3(1.24e -3)ZDT3Mean Std Wilcoxon 1.8286e -1(9.84e -2)- 3.5035e -2(6.31e -3)- 1.8725e -1(4.18e -2)-8.7429e -2(2.79e -2)-8.7967e -2(1.49e -2)- 1.8247e -1(1.50e -2)- 1.5654e -1(2.01e -2)-9.6677e -1(2.18e -1)- 3.4618e -3(3.00e -4)ZDT4Mean Std Wilcoxon 1.0525e +1(5.31e +0)- 5.4853e +0(6.11e +0)- 3.6093e +1(6.75e +0)- 1.7181e +1(1.02e +1)- 2.5028e +0(9.78e -1)-9.4142e -1(3.49e -1)ʈ2.0250e +0(6.56e -1)- 6.4764e +0(3.85e +0)- 1.4778e +0(1.53e +0)ZDT6Mean Std Wilcoxon 1.9172e -3(5.28e -5)+5.9543e -3(7.44e -3)- 2.7563e -2(2.87e -2)- 2.3135e -3(2.03e -4)+1.4465e +0(2.84e -1)- 5.8299e -1(1.72e -1)- 1.0509e +0(2.05e -1)- 2.2844e -2(1.09e -1)- 2.3842e -3(8.60e -5)UF1Mean Std Wilcoxon 3.8144e -1(9.05e -2)- 6.5685e -1(9.66e -2)- 2.8711e -1(5.02e -2)- 1.3308e -1(2.61e -2)ʈ1.3504e -1(3.81e -2)ʈ1.5518e -1(3.36e -2)- 1.3884e -1(2.47e -2)- 6.0941e -1(1.43e -1)- 1.2450e -1(8.69e -3)UF2Mean Std Wilcoxon 1.0184e -1(8.97e -3)-9.5765e -2(6.88e -3)- 1.1295e -1(1.01e -2)-8.3027e -2(7.26e -3)-8.1612e -2(5.33e -3)-7.5348e -2(6.19e -3)ʈ7.5129e -2(1.06e -2)ʈ1.8017e -1(2.18e -2)-7.4285e -2(4.41e -3)UF3Mean Std Wilcoxon 4.4586e -1(5.40e -2)- 3.3009e -1(6.15e -3)- 5.0047e -1(1.80e -2)- 3.5774e -1(6.03e -2)- 4.7958e -1(1.08e -2)- 4.6766e -1(1.32e -2)- 4.3291e -1(1.47e -2)- 5.6713e -1(3.36e -2)- 2.7446e -1(1.74e -2)UF4Mean Std Wilcoxon 1.1129e -1(8.05e -3)+1.3722e -1(4.87e -3)ʈ9.9054e -2(4.98e -3)+6.2896e -2(6.16e -3)+9.5857e -2(2.50e 
-3)+9.0998e -2(4.09e -3)+8.4957e -2(2.04e -3)+1.2029e -1(1.22e -2)+1.3613e -1(1.50e -2)97重庆工商大学学报(自然科学版)第41卷续表(表3)测试问题R IGD SMPSOdMOPSO MPSOD NMPSO NSGAIII MOEADD SPEAR DGEA ECMOPSOUF5Mean StdWilcoxon 2.8298e +0(5.59e -1)- 3.1949e +0(2.75e -1)- 2.7562e +0(2.66e -1)- 1.6868e +0(4.03e -1)- 1.6151e +0(3.91e -1)ʈ1.4347e +0(2.43e -1)ʈ1.1071e +0(1.88e -1)+3.0618e +0(7.08e -1)- 1.4249e +0(3.08e -1)UF6Mean Std Wilcoxon 1.2509e +0(4.17e -1)- 2.3449e +0(5.44e -1)-1.3904e +0(3.30e -1)-6.9338e -1(1.22e -1)-6.9635e -1(1.32e -1)-8.1561e -1(1.21e -1)-6.6148e -1(9.41e -2)-2.4887e +0(5.78e -1)-5.7726e -1(5.24e -2)UF7Mean StdWilcoxon 3.5088e -1(1.33e -1)-3.8396e -1(7.93e -2)-2.5133e -1(6.38e -2)-1.8663e -1(1.74e -1)-2.0383e -1(6.16e -2)-1.9100e -1(7.47e -2)-1.7285e -1(4.94e -2)-7.0149e -1(9.26e -2)-9.4107e -2(1.11e -2)UF8Mean StdWilcoxon 3.8732e -1(5.15e -2)- 3.4149e -1(3.37e -2)- 5.4728e -1(3.01e -2)- 4.7758e -1(1.01e -1)- 3.4595e -1(3.79e -2)- 3.3185e -1(2.17e -2)-3.2293e -1(1.95e -2)-7.2797e -1(1.26e -1)-2.7654e -1(3.88e -2)UF9Mean StdWilcoxon 5.5251e -1(3.74e -2)- 5.7965e -1(3.88e -2)- 6.6012e -1(3.81e -2)-4.6048e -1(5.27e -2)-5.1531e -1(4.58e -2)-5.2906e -1(6.04e -2)-4.6378e -1(4.87e -2)-7.5157e -1(1.39e -1)-1.4735e -1(1.99e -2)UF10Mean StdWilcoxon2.6494e +0(4.07e -1)-1.3204e +0(2.12e +0)ʈ4.2119e +0(4.16e -1)-1.4357e +0(3.60e -1)ʈ2.3930e +0(4.77e -1)-3.4430e +0(5.31e -1)-1.7848e +0(3.09e -1)-4.7389e +0(7.65e -1)-1.2703e +0(6.00e -1)+/-/ʈ2/13/00/13/21/14/02/11/21/12/21/11/32/12/11/14/0Best /All1/150/150/151/150/151/151/150/1511/15表4㊀ECMOPSO 与8个对比算法在15个测试问题上的R HV 值Table 4㊀R HV values of ECMOPSO and eight comparison algorithms on fifteen test problems测试问题R HV SMPSOdMOPSOMPSODNMPSO NSGAIII MOEADDSPEAR DGEA ECMOPSOZDT1Mean StdWilcoxon 6.0517e -1(1.05e -1)- 6.5272e -1(1.95e -2)- 5.7200e -1(5.85e -2)- 6.8969e -1(1.28e -2)- 5.8938e -1(1.96e -2)- 5.5955e -1(2.35e -2)- 4.9868e -1(2.42e -2)- 1.6149e -2(3.71e -2)-7.2136e -1(1.44e -4)ZDT2Mean Std Wilcoxon 3.9326e -1(9.98e -2)- 3.7831e 
-1(1.96e -2)- 2.5026e -1(1.02e -1)- 4.3397e -1(9.77e -3)- 2.0890e -1(2.99e -2)- 2.4709e -1(2.56e -2)-8.9477e -2(6.11e -2)- 2.9573e -2(9.92e -2)- 4.4532e -1(3.14e -3)ZDT3Mean Std Wilcoxon 5.2053e -1(6.83e -2)- 5.9777e -1(1.79e -2)ʈ4.6767e -1(4.14e -2)- 5.7228e -1(9.92e -3)- 5.4336e -1(1.03e -2)- 4.7287e -1(1.73e -2)- 5.0525e -1(3.00e -2)- 3.9698e -2(5.82e -2)- 6.0056e -1(7.13e -4)ZDT4Mean Std Wilcoxon 0.0000e +0(0.00e +0)- 3.9475e -2(9.21e -2)-0.0000e +0(0.00e +0)-0.0000e +0(0.00e +0)- 4.3538e -4(2.38e -3)- 3.3123e -2(4.47e -2)-0.0000e +0(0.00e +0)-0.0000e +0(0.00e +0)- 6.9196e -2(5.93e -2)ZDT6Mean Std Wilcoxon 3.9003e -1(5.86e -5)+3.8599e -1(7.41e -3)ʈ3.6732e -1(1.97e -2)-3.8975e -1(1.76e -4)+0.0000e +0(0.00e +0)-1.9708e -2(2.13e -2)-1.4687e -3(8.04e -3)-3.7990e -1(5.05e -2)-3.8942e -1(1.62e -4)8第2期陈飞,等:精英竞争和综合控制的多目标粒子群算法续表(表4)测试问题R HV SMPSOdMOPSOMPSODNMPSONSGAIIIMOEADDSPEARDGEAECMOPSOUF1Mean StdWilcoxon 2.6491e -1(7.17e -2)- 5.5084e -2(5.08e -2)- 3.3817e -1(5.07e -2)- 5.1537e -1(3.63e -2)ʈ5.1768e -1(4.46e -2)ʈ4.9279e -1(4.05e -2)- 5.1402e -1(3.05e -2)ʈ1.0999e -1(8.33e -2)- 5.2738e -1(1.50e -2)UF2Mean StdWilcoxon 6.0671e -1(8.62e -3)- 6.1545e -1(6.11e -3)- 5.7878e -1(1.04e -2)- 6.1923e -1(8.22e -3)- 6.1559e -1(6.98e -3)- 6.2093e -1(7.56e -3)- 6.2477e -1(9.70e -3)- 5.0246e -1(2.80e -2)- 6.3444e -1(4.21e -3)UF3Mean StdWilcoxon 2.1706e -1(5.15e -2)- 3.0963e -1(9.53e -3)- 1.7346e -1(1.59e -2)- 2.8302e -1(5.67e -2)- 1.8452e -1(1.12e -2)- 1.9992e -1(1.15e -2)- 2.2317e -1(8.52e -3)- 1.2176e -1(1.64e -2)- 3.7404e -1(1.88e -2)UF4Mean StdWilcoxon 2.9288e -1(9.25e -3)+2.5327e -1(4.58e -3)ʈ3.0800e -1(6.00e -3)+3.6040e -1(8.40e -3)+3.1268e -1(2.99e -3)+3.1886e -1(4.21e -3)+3.2527e -1(2.71e -3)+2.8110e -1(1.43e -2)+2.5715e -1(1.61e -2)UF5Mean StdWilcoxon 0.0000e +0(0.00e +0)ʈ0.0000e +0(0.00e +0)ʈ0.0000e +0(0.00e +0)ʈ0.0000e +0(0.00e +0)ʈ0.0000e +0(0.00e +0)ʈ0.0000e +0(0.00e +0)ʈ1.3453e -5(7.37e -5)ʈ0.0000e +0(0.00e +0)ʈ0.0000e +0(0.00e +0)UF6Mean StdWilcoxon 1.1432e -3(6.26e 
-3)-0.0000e +0(0.00e +0)-0.0000e +0(0.00e +0)- 1.8370e -2(2.67e -2)ʈ1.2244e -2(1.73e -2)- 2.8881e -3(6.71e -3)- 1.6659e -2(2.03e -2)ʈ0.0000e +0(0.00e +0)- 1.9962e -2(1.09e -2)UF7Mean Std Wilcoxon 2.0783e -1(1.03e -1)- 1.7261e -1(5.62e -2)- 2.6128e -1(6.09e -2)- 3.8775e -1(1.16e -1)ʈ3.1792e -1(6.35e -2)- 3.3427e -1(6.65e -2)- 3.5657e -1(4.50e -2)- 2.1735e -2(2.23e -2)- 4.4089e -1(1.70e -2)UF8Mean Std Wilcoxon 1.7390e -1(4.51e -2)- 2.6282e -1(2.32e -2)- 5.4523e -2(1.71e -2)- 2.8801e -1(5.17e -2)- 2.3078e -1(3.81e -2)- 1.5653e -1(3.31e -2)- 1.8219e -1(3.40e -2)- 2.4817e -2(2.82e -2)- 3.4706e -1(1.05e -2)UF9Mean Std Wilcoxon 2.1269e -1(3.11e -2)- 2.3294e -1(1.76e -2)- 1.1167e -1(2.52e -2)- 3.2189e -1(5.49e -2)- 2.3892e -1(4.10e -2)- 2.0844e -1(4.72e -2)- 2.7506e -1(4.50e -2)-8.2413e -2(5.59e -2)- 6.0995e -1(1.83e -2)UF10Mean Std Wilcoxon 0.0000e +0(0.00e +0)-9.1138e -2(2.49e -2)+0.0000e +0(0.00e +0)-0.0000e +0(0.00e +0)-0.0000e +0(0.00e +0)-0.0000e +0(0.00e +0)-0.0000e +0(0.00e +0)-0.0000e +0(0.00e +0)-3.0791e -2(5.35e -2)+/-/ʈ2/12/11/10/41/13/12/9/41/12/21/13/11/11/31/13/1Best /All1/151/150/151/150/150/151/150/1511/15㊀㊀从表3和表4可以看出:本文提出的ECMOPSO 算法综合性能明显优于对比的8个算法㊂从Wilcoxon 秩和检验结果来看:ECMOPSO 算法与对比算法SMPSO㊁dMOPSO㊁MPSOD㊁NMPSO㊁NSGAIII㊁MOEADD㊁SPEAR18重庆工商大学学报(自然科学版)第41卷和DGEA在15次比较中,分别有13㊁13㊁14㊁11㊁12㊁11㊁12㊁14次的表现显著优于这些算法,有0㊁2㊁0㊁2㊁2㊁3㊁1㊁0次得到了相似的结果㊂此外,从表3中的最佳值可以看出:ECMOPSO在15个测试问题上获得了11个最佳R IGD值,而算法SMPSO㊁dMOPSO㊁MPSOD㊁NMPSO㊁NSGAIII㊁MOEADD㊁SPEAR和DGEA获得的最佳R IGD值个数分别为1㊁0㊁0㊁1㊁0㊁1㊁1㊁0㊂表4中最佳R HV值所得结果与R IGD相似,ECMOPSO算法在15个测试问题上也获得了11个最佳值㊂所有统计结果都证明ECMOPSO算法和所选的算法比较仍然具有很强的竞争力㊂为进一步比较ECMOPSO算法与所选对比算法的整体性能,还采用Friedman秩检验计算所有算法的平均排名㊂从表5可以看出:无论是R IGD值还是R HV值,提出的ECMOPSO算法最终排名都是第一,这说明在和其他8个算法比较时,ECMOPSO算法在这些测试问题上的整体性能较为显著㊂表5㊀ECMOPSO算法和所有对比算法R IGD值和R HV值的Friedman秩检验Table5㊀Friedman rank test of R IGD values and R HV values ofECMOPSO and all comparison algorithms算㊀法R IGD R HV Friedman test Rank Friedman test RankSMPSO 5.737 5.436 dMOPSO 5.406 4.774 MPSOD7.008 6.708 NMPSO 3.402 3.102 NSGAIII 4.935 5.035 
MOEADD     4.674   5.437
SPEAR      3.873   4.673
DGEA       8.279   7.909
ECMOPSO    1.731   1.971

From the R_IGD and R_HV results, the Wilcoxon rank-sum test, and the Friedman rank test, it can be concluded that the proposed ECMOPSO algorithm has better overall performance than the comparison algorithms and obtains better convergence and diversity when solving MOPs.

4.4 Graphical comparison and analysis

To compare the convergence and distribution of the algorithms and to observe whether they truly converge to the approximate Pareto front, Figures 3 and 4 show the distributions of the Pareto-optimal sets obtained by ECMOPSO and the other eight comparison algorithms on the test problems ZDT3 and UF9, respectively. The figures show that ECMOPSO is more competitive than the other algorithms in both convergence and distribution, which indicates that the elite-competition and comprehensive-balance-control strategies can balance convergence and diversity better.

[Figure 3 panels: (a) SMPSO, (b) dMOPSO, (c) MPSOD, (d) NMPSO, (e) NSGAIII, (f) MOEADD, (g) SPEAR, (h) DGEA, (i) ECMOPSO.]
Figure 3: Approximate PF of the nine algorithms on the ZDT3 test problem.
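The R_IGD values compared throughout this section can be reproduced in a few lines. The following is a hedged sketch of the indicator (Euclidean distance, minimization), not the PlatEMO implementation used in the experiments:

```python
import math

def igd(reference_front, obtained_set):
    """Inverted generational distance: the average, over the reference
    Pareto points P, of the distance to the nearest obtained solution.
    Lower is better (covers both convergence and spread)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(p, s) for s in obtained_set)
               for p in reference_front) / len(reference_front)
```

Because the average runs over the reference set, an obtained set that clusters in one region is penalized even if it lies exactly on the front, which is why the indicator is used to assess diversity as well as convergence.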
算法将两两竞争引入多目标粒子群算法中,采用全局损害选择精英粒子集,然后通过精英竞争选择优胜者粒子,将其与全局领导者融合来引导种群中的粒子飞行,提升了算法的全局探索能力;并且结合全局损害和基于位移密度估计对外部存档进行维护,从而提高外部存档中非劣解的质量,平衡算法的收敛性和多样性;最后采用Wilcoxon 秩和检验和Friedman 秩检验来比较ECMOPSO算法与所选对比算法的整体性能,将ECMOPSO 算法与8个对比算法在ZDT 和UF 测试问题上进行仿真实验㊂实验结果表明:ECMOPSO 算法具有更好的综合性能,可以更好地平衡收敛性和多样性,能有效地求解大多数多目标优化问题㊂参考文献 References1 ㊀LUO Q WU G JI B et al.Hybrid multi-objective optimizationapproach with Pareto local search for collaborative truck-dronerouting problems considering flexible time windows J .IEEETransactions on Intelligent Transportation Systems 2022 23 813011 13025.2 ㊀KENNEDY J EBERHART R C.Particle swarm optimizationC //Proceedings of the International Conference on NeuralNetworks.Piscataway 1995 1942 1948.3 ㊀COELLO C A C LECHUGA M S.MOPSO A proposal formultiple objectiveparticleswarmoptimization C //Proceedingsofthe2002CongressonEvolutionaryComputation.CEC 02IEEE 2002 1051 1056.4 ㊀HAN H LU WZHANG L et al.Adaptive gradient48第2期陈飞,等:精英竞争和综合控制的多目标粒子群算法multiobjective particle swarm optimization J .IEEE Transactions on Cybernetics 2018 48 11 3067 3079.5 ㊀LI L CHANG L GU T et al.On the norm of dominant difference for many-objective particle swarm optimization J . IEEE Transactions on Cybernetics 2021 51 4 2055 2067.6 ㊀WU B HU W HU J et al.Adaptive multiobjective particle swarm optimization based on evolutionary state estimation J . 
IEEE Transactions on Cybernetics 2021 517 3738 3751.7 ㊀HAN H LU W QIAO J.An adaptive multi-objective particle swarm optimization based on multiple adaptive methods J .IEEE Transactions on Cybernetics 2017 47 9 2754 2767.8 ㊀ZHU Q LIN Q CHEN W et al.An external archive-guided multi-objective particle swarm optimization algorithm J .IEEE Transactions on Cybernetics 2017 47 9 2794 2808.9 ㊀王世华李娜娜刘衍民.基于双决策和快速分层的多目标粒子群算法J .重庆工商大学学报自然科学版2022 39 1 62 70.WANG Shi-hua LI Na-na LIU Yan-min.Multi-objective particle swarm optimization based on double decision and fast stratification J .Journal of Chongqing Technology and Business University Natural Science Edition 2022 391 62 70.10 CHENG R JIN Y.A competitive swarm optimizer for large scale optimization J .IEEE Transactions on Cybernetics 2015 45 2 191 204.11 HUANG W ZHANG W.Multi-objective optimization based on an adaptive competitive swarm optimizer J .Information Sciences 2022 583 1 266 287.12 LI M YANG S LIU X.Shift-based density estimation for Pareto-based algorithms in many-objective optimization J . IEEE Transactions on Evolutionary Computation 2013 18 3 348 365.13 ZITZLER E DEB K THIELE parison of multi-objective evolutionary algorithms Empirical results J . Evolutionary Computation 2000 8 2 173 195.14 ZHANG Q ZHOU A ZHAO S et al.Multi-objective optimization test instances for the CEC2009special session and competition C//University of Essex Colchester UK and Nanyang Technological University Singapore Special Session on Performance Assessment of Multi-objective Optimization Algorithms Technical Report 2008 264 1 30.15 NEBRO A J DURILLO J J GARCIA-NIETO J et al. 
SMPSO A new PSO-based metaheuristic for multi-objectiveoptimization C//Proceedings of the IEEE Symposium on Computational Intelligence in Multi-criteria Decision-making MCDM .IEEE 2009 66 73.16 MARTINE Z S Z COELLO C A.A multi-objective particle swarm optimizer based on decomposition C//Proceedings of the13th Annual Conference on Genetic and Evolutionary Computation.2011 69 76.17 DAI C WANG Y YE M.A new multi-objective particle swarm optimization algorithm based on decomposition J . Information Sciences 2015 325 1 541 557.18 LIN Q LIU S ZHU Q et al.Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems J .IEEE Transactions on Evolutionary Computation 2018 22 1 32 46.19 DEB K JAIN H.An evolutionary many-objective optimization algorithm using reference-point-based non-dominated sorting approach part i Solving problems with box constraints J . IEEE Transactions on Evolutionary Computation 2014 184 577 601.20 LI K DEB K ZHANG Q et al.An evolutionary multi-objective optimization algorithm based on dominance and decomposition J .IEEE Transactions on Evolutionary Computation 2015 19 5 694 716.21 JIANG S YANG S.A strength Pareto evolutionary algorithm based on reference direction for multi-objective and many-objective optimization J .IEEE Transactions on Evolutionary Computation 2017 21 3 329 346.22 HE C CHENG R YAZDANI D.Adaptive offspring generation for evolutionary large-scale multi-objective optimization J .IEEE Transactions on Systems Man and Cybernetics Systems 2022 52 2 786 798.23 TIAN Y CHENG R ZHANG X Y et al.PlatEMO A MATLAB platform for evolutionary multi-objective optimization educational forum J .IEEE Computational Intelligence Magazine 2017 12 4 73 87.24 COELLO C C A CORTES N C.Solving multi-objective optimization problems using an artificial immune system J . 
Genetic Programming and Evolvable Machines 2005 62 163 190.25 ZITZLER E THIELE L.Multi-objective evolutionary algorithmsA comparative case study and the strength Pareto approach J . IEEE Transactions on Evolutionary Computation 1999 34 257 271.责任编辑:李翠薇58。
DISCLAIMER

SAMSUNG ELECTRONICS RESERVES THE RIGHT TO CHANGE PRODUCTS, INFORMATION AND SPECIFICATIONS WITHOUT NOTICE.

Products and specifications discussed herein are for reference purposes only. All information discussed herein may change without notice and is provided on an "AS IS" basis, without warranties of any kind. This document and all information discussed herein remain the sole and exclusive property of Samsung Electronics. No license of any patent, copyright, mask work, trademark or any other intellectual property right is granted by one party to the other party under this document, by implication, estoppel or otherwise. Samsung products are not intended for use in life support, critical care, medical, safety equipment, or similar applications where product failure could result in loss of life or personal or physical harm, or any military or defense application, or any governmental procurement to which special terms or provisions may apply. For updates or additional information about Samsung products, contact your nearest Samsung office. All brand names, trademarks and registered trademarks belong to their respective owners.

COPYRIGHT © 2023
This material is copyrighted by Samsung Electronics. Any unauthorized reproduction, use or disclosure of this material, or any part thereof, is strictly prohibited and is a violation under copyright law.

TRADEMARKS & SERVICE MARKS
The Samsung Logo is the trademark of Samsung Electronics. Adobe is a trademark and Adobe Acrobat is a registered trademark of Adobe Systems Incorporated. All other company and product names may be trademarks of the respective companies with which they are associated.

For more information, please visit /magician

Revision History

1. Introduction

New Samsung Magician 7
Experience the new user-friendly GUI of Samsung Magician. Try our new features and enhanced functions for a better user experience.
Samsung Magician provides an integrated, convenient solution for SSDs with advanced capabilities. Samsung Magician software is developed and distributed exclusively for users of Samsung Solid State Drives (SSDs).

New Features
New Samsung Magician features a number of improvements over the previous versions. New features include:
• LED Setting – added to control LED color and mode using LED Setting.
• Data Migration – added to clone your data to a Samsung SSD using Data Migration.

2. Requirements and Support

System Requirements
Supported Features by Model
There are some limitations that exist depending on the type of storage and model.
RAPID Mode Requirements
Driver Support
1) SATA
2) NVMe

1) Samsung Magician does not require an internet connection to run. However, an internet connection is required to get updates for the latest firmware, feature modules or application, and to authenticate the SSDs.
2) If you delete some files of New Samsung Magician without an internet connection, some features like certification or configuration may not work properly and cause limitations in the use of New Samsung Magician.
3) The SSD should not be disconnected from the system while FW Update, Benchmarking, Secure Erase, Over Provisioning, Data Security, PSID Revert, Diagnostic Scan, Performance Optimization or RAPID features are in progress. Doing so could result in data corruption.
4) All parallel operations should be terminated before executing the Diagnostic Scan, Performance Optimization or Benchmarking features.
5) Data corruption may result if the user terminates the Magician application abnormally while Benchmarking, FW Update, Secure Erase, Diagnostic Scan, Over Provisioning, Data Security, PSID Revert or RAPID features are in progress.
6) There is always a risk of data loss when updating SSD firmware.
It is imperative that the user backup any important data before performing a firmware update.7)If there are some system issues for Magician to perform functions, System Compatibility inInformation tab will provide guide to fix the issues.8)If Samsung Magician is under a proxy network environment, it may not provide the full functionalitysuch as firmware update.9)In order for Samsung Magician to function properly, the time of the PC needs to be correctly set.3.General LimitationsOverall1)Magician does not work with SSDs connected via the SCSI controller interface.2)Only MBR and GPT partition types are supported. Magician may not work with other partition types.3)Magician shows only volumes mounted with letter.4)Magician will not work on SSDs that are locked with a user password.5)The user may need to manually refresh for Magician to accurately reflect all connected/removeddisks.6)RAID on mode in SATA configuration is not supported by Samsung Magician and USB bootablesolution.7)If you are using any custom storage driver, then Magician may not work properly. Please always usethe latest storage driver or Microsoft driver.8)In Windows 7, the Samsung NVMe Driver is required for Magician to fully support Samsung’s NVMedevice.9)In Windows 7, the GUI of the Samsung Magician application may not seem normal intermittently.10)In the case of the function where the progress time is displayed – Performance benchmark,Diagnostic scan, and Performance optimization, changing the system time during the functionexecution may cause the elapsed time to not appear normally.11)Magician is signed using SHA-2 to provide a safer service. 
Windows updates or patches may berequired to use Magician in Windows 7.12)Depending on the resolution or ratio setting of your display, bottom and right side of SamsungMagician may be out of screen.13)For the performance benchmark record that was performed in the previous version of Magician 7.0,some values may be marked as Unknown.14)Magician might have compatibility issues with a certain IRST Driver.15)When RAPID is activated, Restore point warning message is only available in English.16)If Magician gets connected to or disconnected from the PSSD while performing some feature,Magician may not work properly.17)If you try changing the name of security enabled PSSD T1 / T3, the name won’t be changed withoutadditional guidance when the wrong password is entered.18)For Windows 8.1, update is required in order of KB2919442, KB2919355, and KB2999226.19)For Windows 8.1 with update, KB2999226 update is required.20)Magician icon may appear unchanged until the icon is updated by the system.21)It may take up to a few seconds for to installer to start.22)It may take time to move from one screen to the next screen in installer.23)Connecting through remote connection will dismiss forcefully magician notification window inwindows 11.24)Upon abnormal termination of data migration, it may take time to restore the migration function.Performance Benchmark1)Benchmarking may not work with some removable storage devices.2)Performance Benchmark may get timed-out on ASMedia controllers if the driver does not handlemulti thread operations (IOs)Performance Optimization1)Performance Optimization supports only the NTFS file system.2) Magician does not support TRIM operation for Standard Performance Optimization onWindows 8 and above, as they support native TRIM.Diagnostic Scan1)Short scan supports only the NTFS file system.2)If the device is locked, both short scan and full scan are not supported.3)When performing Short scan, secure sufficient space of 5GB or more.4)Samsung NVMe driver v3.3 
is required to use SMART Self-test in 970 EVO Plus.NVMe Driver is not required from later released models. (980, 980 PRO Series, 990 Series, etc) PSID Revert1)SSD supporting PSID Revert is 860 EVO 860 EVO M.2, 860 EVO MSATA, 860 PRO, 860 QVO, 970 EVOPlus, 870 QVO, 870 EVO, 980, 980 PRO, 980 PRO with Heatsink, PSSD T7, PSSD T7 Touch, PSSD T7 Shield.2)The PSID Revert function can release the encrypted drive using the PSID of the label. Afterperforming PSID Revert, all data on the drive is deleted.Secure Erase & Linux Bootable Solution1)While making a bootable solution for Secure Erase, please make sure the Device Manager window isclosed.2)In some of the PCs, Bootable Solution may not work properly as expected because of compatibilityissue.3)The Bootable solution is not compatible with pure SCSI or SATA NVIDIA/LSI/AMD chipset drivers.4)AHCI or ATA mode must be enabled in the BIOS during PC boot up.5)The Bootable solution may hang if the SSD is removed on PCs that do not support the hot plug feature(e.g. ICH5/6 chipsets).6)The Bootable solution will not work with devices attached via SATA 6Gbps (SATA III) operating inIDE mode.7)Secure Erase may not work on systems where SECURITY FREEZE LOCK is issued by the BIOS.Encrypted Drive1)Class 0, TCG Opal and Encrypted Drive cannot be enabled simultaneously. 
Only one mode can beenabled at a time and all other modes must be disabled.2)Security mode (Class 0, TCG/Opal or Encrypted Drive) must be disabled (unlocked) before removingand installing onto another PC.Over Provisioning1)Over Provisioning only supports NTFS and raw (Unformatted) partitions.2)Over Provisioning does not support dynamic disks or disks that require ‘Chkdsk’ operation.3)Magician cannot guarantee that Over Provisioning scans disk’s partition layout properly, if partitioninformation had been changed during scanning.4)Over Provisioning may fail, even though enough free space is available, if your system suffers fromcluster misalignment.5)If user cannot span or shrink volume size through disk management of OS administration tool, it ispossible not to work dynamic over-provisioning properly.6)Windows ‘Disk Partition Service’ and ‘Virtual disk Service’ should not be disabled in order to performOver Provisioning.7)Over Provisioning can only be performed on the last accessible partition (NTFS or raw)8)If a device with more than 4TB applied to the MBR partition is used, the function may not operatenormally.LED Setting1)Depending on the status of the SSD, LED setting may not be possible.Security Setting1)Finding my password is unavailable on Samsung Magician.2)Up to four fingerprints can be registered on Samsung MagicianData Migration1)Data Migration supports the Windows operating systems listed in the System Requirement only.2)Data Migration supports the Samsung SSDs listed in the System Requirement only. OEM storagedevices provided through a computer manufacturer or supplied through another channel are not supported.3)Data Migration can only clone a Source Drive on which an operating system has been installed. Itcannot clone a drive without an operating system installed on it.4)When the Source Drive has two or more volumes (e.g. 
volumes to which drive letters, such as C:, D:, orE:, are assigned), Data Migration can clone the C: volume on which an operating system is installed and two more volumes. The System Reserved Partition, which is created automatically duringWindows installation, is cloned automatically.5)The OEM Partition, which is created by the computer manufacturer when shipped from the factory, isnot cloned. However, it will be automatically cloned if the computer manufacturer is Samsung and SRS (Samsung Recovery Solution) 5, SRS 6, or SRS 7 has been installed. (Versions lower than SRS 5 are not supported.)6)After cloning the Source Drive to the Target Drive, their data sizes may differ by a few gigabytes. Thisis normal. During cloning, Data Migration does not copy virtual memory (page files, hibernation files, etc.) automatically created and managed by the operating system.7)Data Migration cannot clone encrypted drives. In order to clone an encrypted drive, you must removeits password first.8)If the motherboard chipset drivers are not up to date when cloning, Data Migration may not functionproperly.9)If you have multiple operating systems installed on your computer (e.g. Windows 7 installed on the C:volume and Windows 8 installed on the D: volume), then the cloned drive may not function properly in some cases.10)If the Source Drive is damaged (e.g. it has bad sectors), then the cloned drive may not functionproperly.11)Before attempting to clone a drive using Data Migration, it is recommended that you close all openprograms and allocate sufficient memory first.12)If you have instant recovery software installed on your computer, then Data Migration may notfunction properly.13)If the Source Drive has been converted to a dynamic disk, then Data Migration may not functionproperly.14)If a portable device (e.g. 
an external USB device) is connected to the Target Drive for cloning,then Data Migration may not function properly because of the USB adapter.15)If the OS version installed in the original drive does not support the GPT partition and when it isduplicated in a drive exceeding 2TB, the MBR partition type will be applied to the duplicated drive. As MBR does not support large drives, the space exceeding 2TB will remain unallocated.16)In order to use Data Migration in Samsung Portable SSD, Security Mode should be disabled.17)Portable SSDs support Data Migration only on systems with Windows 8 or higher version.Firmware Update1)PC will be shut down automatically after firmware update (Magician counts down 20 seconds beforeshutdown).2)Firmware Update may fail on Samsung brand SSDs connected to AMD Controller. Please retry usingdefault SATA AHCI controller (Microsoft drivers).Settings1)When booting the PC, users can decide whether to auto run Samsung Magician. If auto run is turnedoff, updates cannot be received in real time.2)The device scan proceeds immediately after the language is changed.3)Scaling size varies depending on the resolution.4.RAPID mode Limitations1)RAPID mode accelerates only one SSD even though user has several Samsung SSDs (870 QVO, 870EVO, 860 QVO, 860 EVO, 860 PRO, 850 PRO, 850 EVO, 850, 750 EVO, 840 EVO, and 840 PROregardless of form factor).2)If there are two identical SSDs connected, RAPID mode may accelerate the incorrect SSD.3)RAID Mode sets is not supported as an accelerated drive.4)After uninstalling RAPID mode, if the system is restored to a prior state in which RAPID mode wasinstalled, RAPID mode will be started in a disabled state.5)NVIDIA Storage controller is not supported.6)During RAPID mode Enable/Disable operation: Do not disconnect the target SSD, Do not kill theapplication.7)If fast startup is enabled on windows 8, 8.1 and 10 machines, RAPID mode enable/disable requiressystem restart. 
Shutdown followed by turning-on the power will not activate RAPID modeenable/disable. By default fast startup is enabled.8)Flush command of operating system and/or application may cause variation in performance whenRAPID mode is enabled.9)Sometimes on AMD PC with AMD and ASMedia storage controllers it was found that the IOs takes alonger time to complete. In such cases if Rapid was enabled, it may get automatically disabled due to such IO errors. It may display "Rapid is in inactive state". User has to reboot the PC to enable the Rapid back.10)If multiple iterations of Read and Write are performed, RAPID mode may become inactive due tosystem internal errors on some of the AMD / ASMedia Controller or Driver.11)RAPID mode can't be guaranteed on the target SSD with non-NTFS file system.12)I f user deletes some files on RAPID folder, RAPID may not be uninstalled properly.13)If the msiexec.exe is either unstable or corrupted, RAPID mode enable fails with the error message“The Windows Installer service failed to start. Start the Windows Installer service manually, upgrade the Windows Installer service, and check if the last updated or installed program in Windows was successful. If the problem persists, contact the A/S center.”The issue can be fixed by:- Unregister and reregister Windows Installer service / MSI service.♦On the Start menu, click Run.♦In the Open box, type “msiexec /unreg”. And then press ENTER.♦On the Start menu, click Run♦In the Open box, type “msiexec /regserver”. And then press ENTER.♦Try enabling RAPID mode again.If RAPID mode does not enable, follow the steps below.- Updating Windows Installer♦If your Windows Installer is not the latest version, corrupted or msiexec is missing, please install the latest version of Windows Installer, then tryenabling RAPID mode again.- If none of the above procedures work, we recommend reinstalling Windows.Operational Check of RAPID modeRAPID mode starts its operation 45 seconds after OS booting. 
Please make sure the increased size of non-paged pool using “task manager → performance → memory tab” to ensure it is fully operational.* Before RAPID mode enabling* After RAPID mode enablingMar 2023/magician Design and contents of this manual are subject to change without notice.©2023 Samsung Electronics, Co., Ltd. All rights reserved.。
Reliability Engineering and System Safety 91 (2006) 992–1007

Multi-objective optimization using genetic algorithms: A tutorial

Abdullah Konak (a), David W. Coit (b), Alice E. Smith (c)
(a) Information Sciences and Technology, Penn State Berks, USA
(b) Department of Industrial and Systems Engineering, Rutgers University
(c) Department of Industrial and Systems Engineering, Auburn University

Available online 9 January 2006

Abstract

Multi-objective formulations are realistic models for many complex engineering optimization problems. In many real-life problems, objectives under consideration conflict with each other, and optimizing a particular solution with respect to a single objective can result in unacceptable results with respect to the other objectives. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. In this paper, an overview and tutorial is presented describing genetic algorithms (GA) developed specifically for problems with multiple objectives. They differ primarily from traditional GA by using specialized fitness functions and introducing methods to promote solution diversity.
© 2005 Elsevier Ltd. All rights reserved.

1. Introduction

The objective of this paper is to present an overview and tutorial of multiple-objective optimization methods using genetic algorithms (GA). For multiple-objective problems, the objectives are generally conflicting, preventing simultaneous optimization of each objective. Many, or even most, real engineering problems actually do have multiple objectives, i.e., minimize cost, maximize performance, maximize reliability, etc. These are difficult but realistic problems. GA are a popular meta-heuristic that is particularly well-suited for this class of problems. Traditional GA are customized to accommodate multi-objective problems by using specialized fitness functions and introducing methods to promote solution diversity. There are two general
approaches to multiple-objective optimization. One is to combine the individual objective functions into a single composite function or move all but one objective to the constraint set. In the former case, determination of a single objective is possible with methods such as utility theory, the weighted sum method, etc., but the problem lies in the proper selection of the weights or utility functions to characterize the decision-maker's preferences. In practice, it can be very difficult to precisely and accurately select these weights, even for someone familiar with the problem. Compounding this drawback is that scaling amongst objectives is needed, and small perturbations in the weights can sometimes lead to quite different solutions. In the latter case, the problem is that to move objectives to the constraint set, a constraining value must be established for each of these former objectives. This can be rather arbitrary. In both cases, an optimization method would return a single solution rather than a set of solutions that can be examined for trade-offs. For this reason, decision-makers often prefer a set of good solutions considering the multiple objectives.

The second general approach is to determine an entire Pareto optimal solution set or a representative subset. A Pareto optimal set is a set of solutions that are nondominated with respect to each other. While moving from one Pareto solution to another, there is always a certain amount of sacrifice in one objective(s) to achieve a certain amount of gain in the other(s). Pareto optimal solution sets are often preferred to single solutions because they can be practical when considering real-life problems, since the final solution of the decision-maker is always a trade-off. Pareto optimal sets can be of varied sizes, but the size of the Pareto set usually increases with the increase in
the number of objectives.

2. Multi-objective optimization formulation

Consider a decision-maker who wishes to optimize K objectives such that the objectives are non-commensurable and the decision-maker has no clear preference of the objectives relative to each other. Without loss of generality, all objectives are of the minimization type; a minimization-type objective can be converted to a maximization type by multiplying it by negative one. A minimization multi-objective decision problem with K objectives is defined as follows: Given an n-dimensional decision variable vector x = {x1, ..., xn} in the solution space X, find a vector x* that minimizes a given set of K objective functions z(x*) = {z1(x*), ..., zK(x*)}. The solution space X is generally restricted by a series of constraints, such as gj(x*) = bj for j = 1, ..., m, and bounds on the decision variables.

In many real-life problems, objectives under consideration conflict with each other. Hence, optimizing x with respect to a single objective often results in unacceptable results with respect to the other objectives. Therefore, a perfect multi-objective solution that simultaneously optimizes each objective function is almost impossible. A reasonable solution to a multi-objective problem is to investigate a set of solutions, each of which satisfies the objectives at an acceptable level without being dominated by any other solution. If all objective functions are for minimization, a feasible solution x is said to dominate another feasible solution y (x ≺ y) if and only if zi(x) ≤ zi(y) for i = 1, ..., K and zj(x) < zj(y) for at least one objective function j. A solution is said to be Pareto optimal if it is not dominated by any other solution in the solution space. A Pareto optimal solution cannot be improved with respect to any objective without worsening at least one other objective. The set of all feasible non-dominated solutions in X is referred to as the Pareto optimal set, and for a given Pareto optimal set, the corresponding objective function values in the
objective space are called the Pareto front. For many problems, the number of Pareto optimal solutions is enormous (perhaps infinite).

The ultimate goal of a multi-objective optimization algorithm is to identify solutions in the Pareto optimal set. However, identifying the entire Pareto optimal set, for many multi-objective problems, is practically impossible due to its size. In addition, for many problems, especially for combinatorial optimization problems, proof of solution optimality is computationally infeasible. Therefore, a practical approach to multi-objective optimization is to investigate a set of solutions (the best-known Pareto set) that represent the Pareto optimal set as well as possible. With these concerns in mind, a multi-objective optimization approach should achieve the following three conflicting goals [1]:

1. The best-known Pareto front should be as close as possible to the true Pareto front. Ideally, the best-known Pareto set should be a subset of the Pareto optimal set.
2. Solutions in the best-known Pareto set should be uniformly distributed and diverse over the Pareto front in order to provide the decision-maker a true picture of trade-offs.
3. The best-known Pareto front should capture the whole spectrum of the Pareto front. This requires investigating solutions at the extreme ends of the objective function space.

For a given computational time limit, the first goal is best served by focusing (intensifying) the search on a particular region of the Pareto front. On the contrary, the second goal demands that the search effort be uniformly distributed over the Pareto front. The third goal aims at extending the Pareto front at both ends, exploring new extreme solutions. This paper presents common approaches used in multi-objective GA to attain these three conflicting goals while solving a multi-objective optimization problem.

3. Genetic algorithms

The concept of GA was developed by Holland and his colleagues in the 1960s and 1970s [2]. GA are inspired by the evolutionist theory
explaining the origin of species. In nature, weak and unfit species within their environment are faced with extinction by natural selection. The strong ones have greater opportunity to pass their genes to future generations via reproduction. In the long run, species carrying the correct combination in their genes become dominant in their population. Sometimes, during the slow process of evolution, random changes may occur in genes. If these changes provide additional advantages in the challenge for survival, new species evolve from the old ones. Unsuccessful changes are eliminated by natural selection.

In GA terminology, a solution vector x ∈ X is called an individual or a chromosome. Chromosomes are made of discrete units called genes. Each gene controls one or more features of the chromosome. In the original implementation of GA by Holland, genes are assumed to be binary digits. In later implementations, more varied gene types have been introduced. Normally, a chromosome corresponds to a unique solution x in the solution space. This requires a mapping mechanism between the solution space and the chromosomes. This mapping is called an encoding.
In fact, GA work on the encoding of a problem, not on the problem itself. GA operate with a collection of chromosomes, called a population. The population is normally randomly initialized. As the search evolves, the population includes fitter and fitter solutions, and eventually it converges, meaning that it is dominated by a single solution. Holland also presented a proof of convergence (the schema theorem) to the global optimum where chromosomes are binary vectors.

GA use two operators to generate new solutions from existing ones: crossover and mutation. The crossover operator is the most important operator of GA. In crossover, generally two chromosomes, called parents, are combined together to form new chromosomes, called offspring. The parents are selected among existing chromosomes in the population with preference towards fitness so that offspring is expected to inherit good genes which make the parents fitter. By iteratively applying the crossover operator, genes of good chromosomes are expected to appear more frequently in the population, eventually leading to convergence to an overall good solution.

The mutation operator introduces random changes into characteristics of chromosomes. Mutation is generally applied at the gene level. In typical GA implementations, the mutation rate (probability of changing the properties of a gene) is very small and depends on the length of the chromosome. Therefore, the new chromosome produced by mutation will not be very different from the original one. Mutation plays a critical role in GA. As discussed earlier, crossover leads the population to converge by making the chromosomes in the population alike. Mutation reintroduces genetic diversity back into the population and assists the search in escaping from local optima.
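For binary chromosomes, the two operators described above can be sketched as follows (an illustrative sketch; the paper does not prescribe a particular implementation, and the cut point and mutation rate are assumptions):

```python
# One-point crossover and bit-flip mutation for binary chromosomes.
# The single random cut point and the per-gene rate are illustrative choices.
import random

def one_point_crossover(parent1, parent2):
    """Combine two parents into two offspring by swapping tails at a random cut."""
    cut = random.randint(1, len(parent1) - 1)
    child1 = parent1[:cut] + parent2[cut:]
    child2 = parent2[:cut] + parent1[cut:]
    return child1, child2

def mutate(chromosome, rate=0.01):
    """Flip each gene independently with a small probability `rate`."""
    return [1 - g if random.random() < rate else g for g in chromosome]

p1, p2 = [0] * 8, [1] * 8
c1, c2 = one_point_crossover(p1, p2)
print(mutate(c1), mutate(c2))
```

Note how crossover only recombines existing genes (the total gene material of the two parents is conserved), while mutation is the sole source of new gene values.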
Reproduction involves selection of chromosomes for the next generation. In the most general case, the fitness of an individual determines the probability of its survival for the next generation. There are different selection procedures in GA depending on how the fitness values are used. Proportional selection, ranking, and tournament selection are the most popular selection procedures. The procedure of a generic GA [3] is given as follows:

Step 1: Set t = 1. Randomly generate N solutions to form the first population, P1. Evaluate the fitness of solutions in P1.
Step 2: Crossover: Generate an offspring population Qt as follows:
  2.1. Choose two solutions x and y from Pt based on the fitness values.
  2.2. Using a crossover operator, generate offspring and add them to Qt.
Step 3: Mutation: Mutate each solution x ∈ Qt with a predefined mutation rate.
Step 4: Fitness assignment: Evaluate and assign a fitness value to each solution x ∈ Qt based on its objective function value and infeasibility.
Step 5: Selection: Select N solutions from Qt based on their fitness and copy them to Pt+1.
Step 6: If the stopping criterion is satisfied, terminate the search and return the current population; else, set t = t + 1 and go to Step 2.

4. Multi-objective GA

Being a population-based approach, GA are well suited to solve multi-objective optimization problems. A generic single-objective GA can be modified to find a set of multiple non-dominated solutions in a single run. The ability of GA to simultaneously search different regions of a solution space makes it possible to find a diverse set of solutions for difficult problems with non-convex, discontinuous, and multi-modal solution spaces. The crossover operator of GA may exploit structures of good solutions with respect to different objectives to create new non-dominated solutions in unexplored parts of the Pareto front. In addition, most multi-objective GA do not require the user to prioritize, scale, or weigh objectives. Therefore, GA have been the most popular heuristic approach to multi-objective design
and optimization problems. Jones et al. [4] reported that 90% of the approaches to multi-objective optimization aimed to approximate the true Pareto front for the underlying problem. A majority of these used a meta-heuristic technique, and 70% of all meta-heuristic approaches were based on evolutionary approaches.

The first multi-objective GA, called the vector evaluated GA (or VEGA), was proposed by Schaffer [5]. Afterwards, several multi-objective evolutionary algorithms were developed, including Multi-objective Genetic Algorithm (MOGA) [6], Niched Pareto Genetic Algorithm (NPGA) [7], Weight-based Genetic Algorithm (WBGA) [8], Random Weighted Genetic Algorithm (RWGA) [9], Nondominated Sorting Genetic Algorithm (NSGA) [10], Strength Pareto Evolutionary Algorithm (SPEA) [11], improved SPEA (SPEA2) [12], Pareto-Archived Evolution Strategy (PAES) [13], Pareto Envelope-based Selection Algorithm (PESA) [14], Region-based Selection in Evolutionary Multiobjective Optimization (PESA-II) [15], Fast Nondominated Sorting Genetic Algorithm (NSGA-II) [16], Multi-objective Evolutionary Algorithm (MEA) [17], Micro-GA [18], Rank-Density Based Genetic Algorithm (RDGA) [19], and Dynamic Multi-objective Evolutionary Algorithm (DMOEA) [20]. Note that although there are many variations of multi-objective GA in the literature, these cited GA are well-known and credible algorithms that have been used in many applications, and their performances were tested in several comparative studies.
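The dominance relation defined in Section 2, on which all of these algorithms build, can be sketched directly (an illustrative sketch, assuming every objective is minimized; the brute-force filter is for clarity, not efficiency):

```python
# Pareto dominance check and extraction of the non-dominated set,
# assuming all objectives are of the minimization type (as in Section 2).

def dominates(zx, zy):
    """True if objective vector zx dominates zy (x ≺ y): zx is no worse in
    every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(zx, zy))
            and any(a < b for a, b in zip(zx, zy)))

def nondominated(points):
    """Return the points not dominated by any other point: the best-known
    Pareto set for this collection of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

objectives = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(nondominated(objectives))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

This O(n^2) filter is the conceptual core; the algorithms surveyed here differ mainly in how they turn dominance into fitness values and how they keep the resulting set diverse.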
Several survey papers [1,11,21–27] have been published on evolutionary multi-objective optimization. Coello lists more than 2000 references on his website [28]. Generally, multi-objective GA differ based on their fitness assignment procedure, elitism, or diversification approaches. In Table 1, highlights of the well-known multi-objective GA with their advantages and disadvantages are given. Most survey papers on multi-objective evolutionary approaches introduce and compare different algorithms. This paper takes a different course and focuses on important issues while designing a multi-objective GA and describes common techniques used in multi-objective GA to attain the three goals in multi-objective optimization. This approach is also taken in the survey paper by Zitzler et al. [1]. However, the discussion in this paper is aimed at introducing the components of multi-objective GA to researchers and practitioners without a background on the multi-objective GA. It is also important to note that although several of the state-of-the-art algorithms exist as cited above, many researchers that applied multi-objective GA to their problems have preferred to design their own customized algorithms by adapting strategies from various multi-objective GA. This observation is another motivation for introducing the components of multi-objective GA rather than focusing on several algorithms. However, the pseudo-code for some of the well-known multi-objective GA are also provided in order to demonstrate how these procedures are incorporated within a multi-objective GA.

Table 1. A list of well-known multi-objective GA
(each entry: fitness assignment; diversity mechanism; elitism; external population; advantages; disadvantages)

VEGA [5]: each subpopulation is evaluated with respect to a different objective; no diversity mechanism; no elitism; no external population. Advantages: first MOGA, straightforward implementation. Disadvantages: tends to converge to the extreme of each objective.

MOGA [6]: Pareto ranking; fitness sharing by niching; no elitism; no external population. Advantages: simple extension of single-objective GA. Disadvantages: usually slow convergence; problems related to niche size parameter.

WBGA [8]: weighted average of normalized objectives; niching; no elitism; no external population. Advantages: simple extension of single-objective GA. Disadvantages: difficulties in non-convex objective function space; predefined weights.

NPGA [7]: no fitness assignment, tournament selection; niche count as tie-breaker in tournament selection; no elitism; no external population. Advantages: very simple selection process with tournament selection. Disadvantages: problems related to niche size parameter; extra parameter for tournament selection.

RWGA [9]: weighted average of normalized objectives; randomly assigned weights; elitism; external population. Advantages: efficient and easy to implement. Disadvantages: difficulties in non-convex objective function space.

PESA [14]: no fitness assignment; cell-based density; pure elitist; external population. Advantages: easy to implement, computationally efficient. Disadvantages: performance depends on cell sizes; prior information needed about objective space.

PAES [29]: Pareto dominance is used to replace a parent if offspring dominates; cell-based density as tie-breaker between offspring and parent; elitism; external population. Advantages: random mutation hill-climbing strategy, easy to implement, computationally efficient. Disadvantages: not a population-based approach; performance depends on cell sizes.

NSGA [10]: ranking based on non-domination sorting; fitness sharing by niching; no elitism; no external population. Advantages: fast convergence. Disadvantages: problems related to niche size parameter.

NSGA-II [30]: ranking based on non-domination sorting; crowding distance; elitism; no external population. Advantages: single parameter (N), well tested, efficient. Disadvantages: crowding distance works in objective space only.

SPEA [11]: ranking based on the external archive of non-dominated solutions; clustering to truncate external population; elitism; external population. Advantages: well tested, no parameter for clustering. Disadvantages: complex clustering algorithm.

SPEA-2 [12]: strength of dominators; density based on the k-th nearest neighbor; elitism; external population. Advantages: improved SPEA, makes sure extreme points are preserved. Disadvantages: computationally expensive fitness and density calculation.

RDGA [19]: the problem reduced to a bi-objective problem with solution rank and density as objectives; forbidden-region cell-based density; elitism; external population. Advantages: dynamic cell update, robust with respect to the number of objectives. Disadvantages: more difficult to implement than others.

DMOEA [20]: cell-based ranking; adaptive cell-based density; elitism (implicitly); no external population. Advantages: includes efficient techniques to update cell densities, adaptive approaches to set GA parameters. Disadvantages: more difficult to implement than others.

5. Design issues and components of multi-objective GA

5.1. Fitness functions

5.1.1. Weighted sum approaches

The classical approach to solve a multi-objective optimization problem is to assign a weight wi to each normalized objective function zi'(x) so that the problem is converted to a single-objective problem with a scalar objective function as follows:

min z = w1·z1'(x) + w2·z2'(x) + ... + wk·zk'(x),  (1)

where zi'(x) is the normalized objective function zi(x) and Σ wi = 1. This approach is called the a priori approach since the user is expected to provide the weights. Solving a problem with the objective function (1) for a given weight vector w = {w1, w2, ..., wk} yields a single solution, and if multiple solutions are desired, the problem must be solved multiple times with different weight combinations. The main difficulty with this approach is selecting a weight vector for each run. To automate this process, Hajela and Lin [8] proposed the WBGA for multi-objective optimization (WBGA-MO). In the WBGA-MO, each solution xi in the population uses a different weight vector wi = {w1, w2, ..., wk} in the calculation of the summed objective function (1). The weight vector wi is embedded within the chromosome of solution xi. Therefore, multiple solutions can be simultaneously searched in a single run. In addition, weight vectors can be adjusted to promote diversity of the population.

Other researchers [9,31] have proposed a MOGA based on a weighted sum of multiple objective functions where a normalized weight vector wi is randomly generated for each solution xi during the selection phase at each generation. This approach aims to stipulate multiple search directions in a single run without
using additional parameters. The general procedure of the RWGA using random weights is as follows [31]:

Procedure RWGA:
E = external archive storing the non-dominated solutions found during the search so far; n_E = number of elitist solutions immigrating from E to P in each generation.

Step 1: Generate a random population.
Step 2: Assign a fitness value to each solution x ∈ P_t by performing the following steps:
  Step 2.1: Generate a random number u_k in [0, 1] for each objective k, k = 1, ..., K.
  Step 2.2: Calculate the random weight of each objective k as w_k = u_k / Σ_{i=1}^{K} u_i.
  Step 2.3: Calculate the fitness of the solution as f(x) = Σ_{k=1}^{K} w_k z_k(x).
Step 3: Calculate the selection probability of each solution x ∈ P_t as

  p(x) = (f(x) − f_min) / Σ_{y ∈ P_t} (f(y) − f_min),

where f_min = min{f(x) | x ∈ P_t}.
Step 4: Select parents using the selection probabilities calculated in Step 3. Apply crossover on the selected parent pairs to create N offspring. Mutate the offspring with a predefined mutation rate. Copy all offspring to P_{t+1}. Update E if necessary.
Step 5: Randomly remove n_E solutions from P_{t+1} and add the same number of solutions from E to P_{t+1}.
Step 6: If the stopping condition is not satisfied, set t = t + 1 and go to Step 2. Otherwise, return E.

The main advantage of the weighted-sum approach is its straightforward implementation. Since a single objective is used in fitness assignment, a single-objective GA can be used with minimal modifications. In addition, this approach is computationally efficient. Its main disadvantage is that not all Pareto-optimal solutions can be investigated when the true Pareto front is non-convex; therefore, multi-objective GAs based on the weighted-sum approach have difficulty finding solutions uniformly distributed over a non-convex trade-off surface [1].

5.1.2. Altering objective functions

As mentioned earlier, VEGA [5] is the first GA used to approximate the Pareto-optimal set by a set of non-dominated solutions. In VEGA, population P_t
is randomly divided into K equal-sized sub-populations P_1, P_2, ..., P_K. Then, each solution in subpopulation P_i is assigned a fitness value based on objective function z_i. Solutions are selected from these subpopulations using proportional selection for crossover and mutation. Crossover and mutation are performed on the new population in the same way as for a single-objective GA.

Procedure VEGA:
N_S = subpopulation size (N_S = N/K).

Step 1: Start with a random initial population P_0. Set t = 0.
Step 2: If the stopping criterion is satisfied, return P_t.
Step 3: Randomly sort population P_t.
Step 4: For each objective k, k = 1, ..., K, perform the following steps:
  Step 4.1: For i = 1 + (k − 1)N_S, ..., kN_S, assign fitness value f(x_i) = z_k(x_i) to the i-th solution in the sorted population.
  Step 4.2: Based on the fitness values assigned in Step 4.1, select N_S solutions between the (1 + (k − 1)N_S)-th and (kN_S)-th solutions of the sorted population to create subpopulation P_k.
Step 5: Combine all subpopulations P_1, ..., P_K and apply crossover and mutation on the combined population to create P_{t+1} of size N. Set t = t + 1, go to Step 2.

A similar approach to VEGA is to use only a single objective function, randomly determined each time in the selection phase [32]. The main advantage of the alternating-objectives approach is that it is easy to implement and computationally as efficient as a single-objective GA. In fact, this approach is a straightforward extension of a single-objective GA to multi-objective problems. The major drawback of objective switching is that the population tends to converge to solutions that are superior in one objective but poor in the others.

5.1.3. Pareto-ranking approaches

Pareto-ranking approaches explicitly utilize the concept of Pareto dominance in evaluating fitness or assigning selection probabilities to solutions. The population is ranked according to a dominance rule, and then each solution is
assigned a fitness value based on its rank in the population, not its actual objective function value. Note that herein all objectives are assumed to be minimized; therefore, a lower rank corresponds to a better solution in the following discussion. The first Pareto-ranking technique was proposed by Goldberg [3] as follows:

Step 1: Set i = 1 and TP = P.
Step 2: Identify the non-dominated solutions in TP and assign them to set F_i.
Step 3: Set TP = TP \ F_i. If TP = ∅, go to Step 4; else set i = i + 1 and go to Step 2.
Step 4: For every solution x ∈ P at generation t, assign rank r_1(x, t) = i if x ∈ F_i.

In the procedure above, F_1, F_2, ... are called non-dominated fronts, and F_1 is the Pareto front of population P. NSGA [10] also classifies the population into non-dominated fronts, using an algorithm similar to the one given above. A dummy fitness value is then assigned to each front using a fitness sharing function, such that the worst fitness value assigned to F_i is better than the best fitness value assigned to F_{i+1}. For NSGA-II [16], a more efficient algorithm, the fast non-dominated-sort algorithm, was developed to form the non-dominated fronts. Fonseca and Fleming [6] used a slightly different rank assignment than ranking based on non-dominated fronts:

r_2(x, t) = 1 + nq(x, t),    (2)

where nq(x, t) is the number of solutions dominating solution x at generation t. This ranking method penalizes solutions located in regions of the objective function space that are dominated (covered) by densely populated sections of the Pareto front. For example, in Fig. 1b solution i is dominated by solutions c, d and e; therefore, it is assigned a rank of 4, although it is in the same front as solutions f, g and h, which are dominated by only a single solution. SPEA [11] uses a ranking procedure to assign better fitness values to non-dominated solutions in underrepresented regions of the objective space. In SPEA, an external list E of a fixed size stores the non-dominated solutions that have been investigated thus far during the search. For each solution y
∈ E, a strength value is defined as

s(y, t) = np(y, t) / (N_P + 1),

where np(y, t) is the number of solutions that y dominates in P. The rank r(y, t) of a solution y ∈ E is assigned as r_3(y, t) = s(y, t), and the rank of a solution x ∈ P is calculated as

r_3(x, t) = 1 + Σ_{y ∈ E, y ≻ x} s(y, t).

Fig. 1c illustrates an example of the SPEA ranking method. In the former two methods, all non-dominated solutions are assigned a rank of 1. This method, however, favors solution a (in the figure) over the other non-dominated solutions, since it covers the least number of solutions in the objective function space. Therefore, a wide, uniformly distributed set of non-dominated solutions is encouraged.

The accumulated ranking density strategy [19] also aims to penalize redundancy in the population due to overrepresentation. This ranking method is given as

r_4(x, t) = 1 + Σ_{y ∈ P, y ≻ x} r(y, t).

To calculate the rank of a solution x, the ranks of the solutions dominating it must be calculated first. Fig. 1d shows an example of this ranking method (based on r_2). Using ranking method r_4, solutions i, l and n are ranked higher than their counterparts in the same non-dominated front, since the portion of the trade-off surface covering them is crowded by the three nearby solutions c, d and e.
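The SPEA strength assignment described above can be sketched directly. The archive and population points below are toy data (both objectives minimized), and dominance is checked pairwise; this is a minimal illustration, not the full SPEA algorithm.

```python
# Sketch of SPEA's strength-based ranking: each archive member y gets
# strength s(y) = np(y)/(N_P + 1), where np(y) counts the population
# members y dominates; each population member x is ranked 1 plus the
# summed strength of the archive members dominating it.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return all(p <= q for p, q in zip(a, b)) and any(p < q for p, q in zip(a, b))

def spea_ranks(archive, population):
    n_p = len(population)
    strength = [sum(dominates(y, x) for x in population) / (n_p + 1)
                for y in archive]
    pop_rank = [1.0 + sum(s for y, s in zip(archive, strength) if dominates(y, x))
                for x in population]
    return strength, pop_rank

E = [(1.0, 3.0), (3.0, 1.0)]                 # external archive (toy data)
P = [(2.0, 4.0), (4.0, 2.0), (5.0, 5.0)]     # current population (toy data)
s, r = spea_ranks(E, P)
```

Solutions dominated by many strong archive members receive larger (worse) ranks, which is exactly the bias toward underrepresented regions discussed above.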
Although some of the ranking approaches described in this section can be used directly to assign fitness values to individual solutions, they are usually combined with various fitness sharing techniques to achieve the second goal of multi-objective optimization: finding a diverse and uniform Pareto front.

5.2. Diversity: fitness assignment, fitness sharing, and niching

Maintaining a diverse population is an important consideration in multi-objective GAs in order to obtain solutions uniformly distributed over the Pareto front. Without preventive measures, the population tends to form relatively few clusters in multi-objective GAs. This phenomenon is called genetic drift, and several approaches have been devised to prevent it, as follows.

5.2.1. Fitness sharing

Fitness sharing encourages search in unexplored sections of a Pareto front by artificially reducing the fitness of solutions in densely populated areas. To achieve this goal, densely populated areas are identified and a penalty
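The sharing mechanism just described can be sketched as follows. The triangular sharing function sh(d) = 1 − d/σ_share (for d < σ_share, else 0) is the standard choice; the points, raw fitness values, and σ_share below are toy assumptions.

```python
# Sketch of fitness sharing: the raw fitness of each solution is divided
# by its niche count, so solutions in densely populated areas are
# penalized and search is pushed toward unexplored regions.

def sharing(d, sigma_share):
    """Triangular sharing function: 1 - d/sigma inside the niche, 0 outside."""
    return 1.0 - d / sigma_share if d < sigma_share else 0.0

def shared_fitness(points, raw_fitness, sigma_share):
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    shared = []
    for i, a in enumerate(points):
        # Niche count: sum of sharing values over the whole population
        # (including the solution itself, which contributes sh(0) = 1).
        niche_count = sum(sharing(dist(a, b), sigma_share) for b in points)
        shared.append(raw_fitness[i] / niche_count)
    return shared

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]   # two crowded points, one isolated
fit = [10.0, 10.0, 10.0]
sf = shared_fitness(pts, fit, sigma_share=1.0)
```

The isolated point keeps its full fitness, while the two clustered points share theirs, which is the intended diversity-preserving effect.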
Computer Integrated Manufacturing Systems, Vol. 15, No. 8, Aug. 2009. Article ID: 1006-5911(2009)08-1592-07. Received 08 July 2008; accepted 01 Sep. 2008.
Foundation items: Project supported by the National High-Tech R&D Program for CIMS, China (No. 2007AA04Z190, 2008AA042301), and the National Natural Science Foundation, China (No. 50835008, 50875237).
Author biography: WEI Wei (1982-), male, from Shenyang, Liaoning; Ph.D. candidate at the State Key Laboratory of CAD&CG, Zhejiang University; research interests include product configuration optimization, product information modeling, multi-objective optimization, and advanced manufacturing technology. E-mail: boyweiwei@ ; + corresponding author, e-mail: fyxtv@ .
Research on multi-objective optimization methods for the flexible job-shop scheduling problem

WEI Wei (1), TAN Jian-rong (1), FENG Yi-xiong (+1), ZHANG Rui (2)
(1. State Key Laboratory of Fluid Power Transmission and Control, Zhejiang University, Hangzhou 310027, China; 2. Brilliance Jinbei Automobile Co., Ltd., Shenyang 110044, China)

Abstract: For the multi-objective flexible job-shop scheduling problem in which each job has different objectives, a multi-objective optimization model was constructed with processing cost, processing quality, and makespan as the objective functions. Since the traditional weighted-coefficient genetic algorithm cannot solve the multi-objective flexible job-shop scheduling problem well, an improved Strength Pareto Evolutionary Algorithm (SPEA) was proposed to perform multi-objective optimization of the flexible job-shop scheduling problem, thereby obtaining the Pareto-optimal compromise solutions.
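The weighted-coefficient (weighted-sum) scalarization that the abstract contrasts with Pareto-based optimization can be sketched in a few lines. The candidate schedules and their (cost, quality loss, makespan) values below are toy assumptions, not data from the paper.

```python
# Minimal sketch of weighted-coefficient scalarization of a scheduling
# problem: fixed weights collapse three minimized objectives into one
# score, so each weight choice selects a single compromise schedule.

schedules = {
    "s1": (100.0, 0.20, 48.0),   # (cost, quality loss, makespan) - toy data
    "s2": (120.0, 0.10, 40.0),
    "s3": (150.0, 0.05, 36.0),
}

def weighted_score(objs, weights):
    """Scalarize a vector of minimized objectives with fixed weights."""
    return sum(w * z for w, z in zip(weights, objs))

def best_schedule(weights):
    return min(schedules, key=lambda s: weighted_score(schedules[s], weights))

# Different weight choices select different single solutions; a Pareto
# approach such as SPEA would instead return the whole non-dominated set.
cost_first = best_schedule((1.0, 0.0, 0.0))
time_first = best_schedule((0.0, 0.0, 1.0))
```

This illustrates the drawback the abstract points at: one run of a weighted-sum method commits to one trade-off, and the weights must be guessed in advance.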
Intelligent optimization algorithms, paper readings and reproductions (1): MOPSO

I. A close reading of the MOPSO paper

First, the link to the original paper:

Abstract. Particle Swarm Optimization (PSO) is a population-based, iterative heuristic optimization algorithm that imitates the foraging behavior of flying bird flocks. The original PSO algorithm was designed specifically for single-objective optimization problems; the main contribution of this paper is to extend PSO to the domain of multi-objective optimization. The two main mechanisms are: 1. introducing the concept of Pareto dominance to determine the flight direction of each particle; 2. maintaining a global archive of non-dominated solution vectors that is used to correct the flight directions of the other particles.

Introduction. With the rapid growth of heuristic optimization techniques, an important research direction is the design of more efficient algorithms that guarantee good convergence while preserving the diversity of solutions; PSO is one such technique. In this paper, the authors propose a Multi-Objective Particle Swarm Optimization (MOPSO) algorithm that enables PSO to solve multi-objective optimization problems. The algorithm is population-based and additionally maintains a repository, together with geographically-based methods, to preserve the diversity of solutions. The authors ran numerical experiments with MOPSO on several standard benchmark problems and compared it with the Pareto Archived Evolution Strategy (PAES) and the Non-dominated Sorting Genetic Algorithm II (NSGA-II).

Particle swarm optimization. Omitted.
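Although the source omits the PSO section, the canonical single-objective update it refers to can be sketched for completeness. The inertia-weight form v = w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x), x = x + v is standard; the values of w, c1 and c2 below are common defaults, assumed here rather than taken from the paper.

```python
import random

# Sketch of the canonical PSO velocity/position update (inertia-weight
# form). pbest is the particle's personal best, gbest the global best;
# r1, r2 are fresh uniform random numbers per dimension.

def pso_step(x, v, pbest, gbest, rng, w=0.7, c1=1.5, c2=1.5):
    new_v = [w * v_i
             + c1 * rng.random() * (p_i - x_i)
             + c2 * rng.random() * (g_i - x_i)
             for x_i, v_i, p_i, g_i in zip(x, v, pbest, gbest)]
    new_x = [x_i + v_i for x_i, v_i in zip(x, new_v)]
    return new_x, new_v

rng = random.Random(42)
x, v = [0.0, 0.0], [0.0, 0.0]
x, v = pso_step(x, v, pbest=[1.0, 1.0], gbest=[2.0, 2.0], rng=rng)
```

MOPSO's extension, described next, keeps this update but replaces the single gbest with a leader drawn from the non-dominated repository.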
Description of the method. The similarity between PSO and evolutionary algorithms makes the introduction of Pareto-ranking schemes the most direct way to extend PSO to a multi-objective algorithm. The record of each individual's personal best can be used to store the non-dominated solutions generated in past iterations (similar to the notion of elitism in multi-objective evolutionary algorithms). The use of a global attraction mechanism, combined with the preservation of historical non-dominated solutions, makes convergence toward globally non-dominated solutions more likely. The authors' basic idea is therefore to maintain a global repository in which every particle deposits its flight experience after each iteration.
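The repository-maintenance rule described here can be sketched as a Pareto-dominance filter: a new point enters the archive only if no archived point dominates it, and it evicts any archived points it dominates. The objective vectors below are toy data (both objectives minimized); the full MOPSO also prunes the archive with its geographically-based grid, which is omitted here.

```python
# Sketch of the MOPSO global repository update: keep only mutually
# non-dominated flight experiences across iterations.

def dominates(a, b):
    """a dominates b: no worse in every objective, better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_repository(repository, candidate):
    if any(dominates(r, candidate) for r in repository):
        return repository                       # candidate rejected
    kept = [r for r in repository if not dominates(candidate, r)]
    kept.append(candidate)                      # candidate admitted
    return kept

rep = []
for point in [(3.0, 3.0), (1.0, 4.0), (2.0, 2.0), (5.0, 5.0)]:
    rep = update_repository(rep, point)
```

After the four updates the repository holds exactly the non-dominated points seen so far, which is what the particles' flight-direction correction draws from.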
Chapter 2
Multi-objective Optimization

Abstract In this chapter we introduce multi-objective optimization and recall some of the most relevant research articles that have appeared in the international literature on these topics. The state of the art presented here does not claim to be exhaustive; it aims to guide the reader through the main problems and the approaches to solve them.

2.1 Multi-objective Management

The choice of a route at a planning level can be made taking into account time and length, but also parking or maintenance facilities. As far as advisory or, more generally, automated procedures to support this choice are concerned, the available tools are basically based on the "shortest-path problem". Indeed, the problem of finding the single-objective shortest path from an origin to a destination in a network is one of the most classical optimization problems in transportation and logistics, and it has deserved a great deal of attention from researchers worldwide. However, the need to face real applications renders the hypothesis of a single objective function to be optimized subject to a set of constraints no longer suitable, and the introduction of a multi-objective optimization framework allows one to manage more information. Indeed, if for instance we consider the problem of routing hazardous materials in a road network (see, e.g., Erkut et al., 2007), defining a single-objective-function problem will involve, separately, the distance, the risk for the population, and the transportation costs. If we regard the problem from different points of view, i.e., in terms of social needs for safe transshipment, of economic issues, or of pollution reduction, it is clear that a model that simultaneously considers two or more such objectives could produce solutions with a higher level of equity. In the following, we will discuss multi-objective optimization and related solution techniques.

2.2 Multi-objective Optimization and Pareto-optimal Solutions

A
basic single-objective optimization problem can be formulated as follows:

min f(x)
x ∈ S,

where f is a scalar function and S is the (implicit) set of constraints, which can be defined as

S = {x ∈ R^m : h(x) = 0, g(x) ≥ 0}.

Multi-objective optimization can be described in mathematical terms as follows:

min [f_1(x), f_2(x), ..., f_n(x)]
x ∈ S,

where n > 1 and S is the set of constraints defined above. The space to which the objective vector belongs is called the objective space, and the image of the feasible set under f is called the attained set. Such a set will be denoted in the following by

C = {y ∈ R^n : y = f(x), x ∈ S}.

The scalar concept of "optimality" does not apply directly in the multi-objective setting. Here the notion of Pareto optimality has to be introduced. Essentially, a vector x* ∈ S is said to be Pareto optimal for a multi-objective problem if all other vectors x ∈ S have a higher value for at least one of the objective functions f_i, with i = 1, ..., n, or have the same value for all the objective functions. Formally speaking, we have the following definitions:

• A point x* is said to be a weak Pareto optimum or a weak efficient solution for the multi-objective problem if and only if there is no x ∈ S such that f_i(x) < f_i(x*) for all i ∈ {1, ..., n}.

• A point x* is said to be a strict Pareto optimum or a strict efficient solution for the multi-objective problem if and only if there is no x ∈ S such that f_i(x) ≤ f_i(x*) for all i ∈ {1, ..., n}, with at least one strict inequality.

We can also speak of locally Pareto-optimal points, for which the definition is the same as above, except that we restrict attention to a feasible neighborhood of x*. In other words, if B(x*, ε) is a ball of radius ε > 0 around point x*, we require that for some ε > 0 there is no x ∈ S ∩ B(x*, ε) such that f_i(x) ≤ f_i(x*) for all i ∈ {1, ..., n}, with at least one strict inequality. The image of the efficient set, i.e., the image of all the efficient solutions, is called the Pareto front or Pareto curve or surface. The shape of
the Pareto surface indicates the nature of the trade-off between the different objective functions. An example of a Pareto curve is reported in Fig. 2.1, where all the points between (f_2(x̂), f_1(x̂)) and (f_2(x̃), f_1(x̃)) define the Pareto front. These points are called non-inferior or non-dominated points.

[Fig. 2.1 Example of a Pareto curve]

An example of weak and strict Pareto optima is shown in Fig. 2.2: points p_1 and p_5 are weak Pareto optima; points p_2, p_3 and p_4 are strict Pareto optima.

[Fig. 2.2 Example of weak and strict Pareto optima]

2.3 Techniques to Solve Multi-objective Optimization Problems

Pareto curves cannot be computed efficiently in many cases. Even if it is theoretically possible to find all these points exactly, they are often of exponential size; a straightforward reduction from the knapsack problem shows that they are NP-hard to compute. Thus, approximation methods for them are frequently used. However, approximation does not represent a secondary choice for the decision maker. Indeed, there are many real-life problems for which it is quite hard for the decision maker to have all the information needed to formulate them correctly and/or completely; the decision maker tends to learn more as soon as some preliminary solutions are available.
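The weak and strict Pareto definitions above can be checked directly on a finite set of candidate points. The points below are toy data (both objectives minimized), chosen so that some points are weakly but not strictly efficient, as with p_1 and p_5 in Fig. 2.2.

```python
# Sketch of the weak/strict Pareto definitions on a finite set:
# a point is a weak optimum if no other point is strictly better in
# EVERY objective; it is a strict optimum if no other point dominates it
# (no worse everywhere, strictly better somewhere).

def strictly_better_everywhere(a, b):
    return all(x < y for x, y in zip(a, b))

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def weak_pareto(points):
    return [p for p in points
            if not any(strictly_better_everywhere(q, p) for q in points)]

def strict_pareto(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (1, 3), (2, 2), (3, 1), (5, 1), (4, 4)]
weak = weak_pareto(pts)
strict = strict_pareto(pts)
```

Every strict optimum is also weak, while (1, 5) and (5, 1) are only weakly efficient: each is tied in one objective with a strict optimum that beats it in the other.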
Therefore, in such situations, having some approximated solutions can help, on the one hand, to see whether an exact method is really required and, on the other hand, to exploit such solutions to improve the problem formulation (Ruzika and Wiecek, 2005).

Approximating methods can have different goals: representing the solution set when the latter is numerically available (for convex multi-objective problems); approximating the solution set when some but not all of the Pareto curve is numerically available (see non-linear multi-objective problems); and approximating the solution set when the whole efficient set is not numerically available (for discrete multi-objective problems).

A comprehensive survey of the methods presented in the literature in the last 33 years, from 1975, is that of Ruzika and Wiecek (2005). The survey analyzes separately the case of two objective functions and the case with a number of objective functions strictly greater than two. More than 50 references on the topic are reported. Another interesting survey of these techniques, related to multiple-objective integer programming, can be found in the book of Ehrgott (2005) and the paper of Ehrgott (2006), where different scalarization techniques are discussed. We will give details of the latter survey later in this chapter, when we move to integer linear programming formulations. Also, T'Kindt and Billaut (2005), in their book on "Multicriteria scheduling", dedicated a part of their manuscript (Chap. 3) to multi-objective optimization approaches.

In the following, we start revising, along the same lines as Ehrgott (2006), these scalarization techniques for general continuous multi-objective optimization problems.

2.3.1 The Scalarization Technique

A multi-objective problem is often solved by combining its multiple objectives into one single-objective scalar function. This approach is generally known as the weighted-sum or scalarization method. In more detail, the weighted-sum
method minimizes a positively weighted convex sum of the objectives, that is,

min Σ_{i=1}^{n} γ_i · f_i(x)
Σ_{i=1}^{n} γ_i = 1
γ_i > 0, i = 1, ..., n
x ∈ S,

which represents a new optimization problem with a unique objective function. We denote the above minimization problem by P_s(γ).

It can be proved that the minimizer of the single-objective problem P_s(γ) is an efficient solution for the original multi-objective problem, i.e., its image belongs to the Pareto curve. In particular, we can say that if the weight vector γ is strictly greater than zero (as reported in P_s(γ)), then the minimizer is a strict Pareto optimum, while in the case of at least one γ_i = 0, i.e.,

min Σ_{i=1}^{n} γ_i · f_i(x)
Σ_{i=1}^{n} γ_i = 1
γ_i ≥ 0, i = 1, ..., n
x ∈ S,

it is a weak Pareto optimum. Let us denote the latter problem by P_w(γ).

There is no a priori correspondence between a weight vector and a solution vector; it is up to the decision maker to choose appropriate weights, noting that the weighting coefficients do not necessarily correspond directly to the relative importance of the objective functions. Furthermore, as noted before, besides the fact that the decision maker cannot be aware of which weights are the most appropriate to retrieve a satisfactory solution, he/she does not know in general how to change the weights to consistently change the solution. This also means that it is not easy to develop heuristic algorithms that, starting from certain weights, are able to define weight vectors iteratively so as to reach a certain portion of the Pareto curve.

Since setting a weight vector leads to only one point on the Pareto curve, performing several optimizations with different weight values can produce a considerable computational burden; the decision maker therefore needs to choose which different weight combinations have to be considered to reproduce a representative part of the Pareto front. Besides this possibly huge computation time, the scalarization method has two technical shortcomings, as explained in the following.

• The relationship
between the objective function weights and the Pareto curve is such that a uniform spread of weight parameters, in general, does not produce a uniform spread of points on the Pareto curve. What can be observed is that all the points are grouped in certain parts of the Pareto front, while some (possibly significant) portions of the trade-off curve are not produced.

• Non-convex parts of the Pareto set cannot be reached by minimizing convex combinations of the objective functions. An example can be made by showing a geometrical interpretation of the weighted-sum method in two dimensions, i.e., when n = 2. In the two-dimensional space the objective function is a line

y = γ_1 · f_1(x) + γ_2 · f_2(x),

where

f_2(x) = −(γ_1/γ_2) · f_1(x) + y/γ_2.

The minimization of γ · f(x) in the weighted-sum approach can be interpreted as the attempt to find the y value for which, starting from the origin, the line with slope −γ_1/γ_2 is tangent to the region C. Obviously, changing the weight parameters leads to possibly different touching points of the line with the feasible region. If the Pareto curve is convex, then there is room to calculate such points for different γ vectors (see Fig. 2.3).

[Fig. 2.3 Geometrical representation of the weighted-sum approach in the convex Pareto curve case]

On the contrary, when the curve is non-convex, there is a set of points that cannot be reached for any combination of the γ weight vector (see Fig. 2.4).

[Fig. 2.4 Geometrical representation of the weighted-sum approach in the non-convex Pareto curve case]

The following result by Geoffrion (1968) states a necessary and sufficient condition in the case of convexity:

If the solution set S is convex and the n objectives f_i are convex on S, x* is a strict Pareto optimum if and only if there exists γ ∈ R^n such that x* is an optimal solution of problem P_s(γ).

Similarly:

If the solution set S is convex and the n objectives f_i are convex on S, x* is a weak Pareto
optimum if and only if there exists γ ∈ R^n such that x* is an optimal solution of problem P_w(γ).

If the convexity hypothesis does not hold, then only the necessary condition remains valid, i.e., the optimal solutions of P_s(γ) and P_w(γ) are strict and weak Pareto optima, respectively.

2.3.2 ε-constraints Method

Besides the scalarization approach, another solution technique for multi-objective optimization is the ε-constraints method proposed by Chankong and Haimes in 1983. Here, the decision maker chooses one objective out of n to be minimized; the remaining objectives are constrained to be less than or equal to given target values. In mathematical terms, if we let f_2(x) be the objective function chosen to be minimized, we have the following problem P(ε_2):

min f_2(x)
f_i(x) ≤ ε_i, ∀i ∈ {1, ..., n} \ {2}
x ∈ S.

We note that this formulation of the ε-constraints method can be derived from a more general result by Miettinen, who in 1994 proved that:

If an objective j and a vector ε = (ε_1, ..., ε_{j−1}, ε_{j+1}, ..., ε_n) ∈ R^{n−1} exist such that x* is an optimal solution to the following problem P(ε):

min f_j(x)
f_i(x) ≤ ε_i, ∀i ∈ {1, ..., n} \ {j}
x ∈ S,

then x* is a weak Pareto optimum.

In turn, the Miettinen theorem derives from a more general theorem by Yu (1974) stating that:

x* is a strict Pareto optimum if and only if, for each objective j, with j = 1, ..., n, there exists a vector ε = (ε_1, ..., ε_{j−1}, ε_{j+1}, ..., ε_n) ∈ R^{n−1} such that f(x*) is the unique objective vector corresponding to the optimal solution to problem P(ε).

Note that the Miettinen theorem is an easily implementable version of the result by Yu (1974). Indeed, one of the difficulties of the result by Yu stems from the uniqueness constraint. The weaker result by Miettinen allows one to use a necessary condition to calculate weak Pareto optima independently of the uniqueness of the optimal solutions. However, if the set S and the objectives are convex, this result becomes a necessary and sufficient condition for weak Pareto optima. When, as in
problem P(ε_2), the objective is fixed, on the one hand we have a more simplified version, and therefore a version that can be more easily implemented in automated decision-support systems; on the other hand, we cannot say that, in the presence of S convex and f_i convex for all i = 1, ..., n, the whole set of weak Pareto optima can be calculated by varying the ε vector.

One advantage of the ε-constraints method is that it is able to achieve efficient points on a non-convex Pareto curve. For instance, assume we have two objective functions, where objective function f_1(x) is chosen to be minimized, i.e., the problem is

min f_1(x)
f_2(x) ≤ ε_2
x ∈ S;

we can be in the situation depicted in Fig. 2.5 where, when f_2(x) = ε_2, f_1(x) is an efficient point of the non-convex Pareto curve.

[Fig. 2.5 Geometrical representation of the ε-constraints approach in the non-convex Pareto curve case]

Therefore, as proposed in Steuer (1986), the decision maker can vary the upper bounds ε_i to obtain weak Pareto optima. Clearly, this is also a drawback of the method, i.e., the decision maker has to choose appropriate upper bounds for the constraints, i.e., the ε_i values. Moreover, the method is not particularly efficient if the number of objective functions is greater than two. For these reasons, Ehrgott and Ruzika in 2005 proposed two modifications to improve this method, with particular attention to the computational difficulties that the method generates.

2.3.3 Goal Programming

Goal programming dates back to Charnes et al. (1955) and Charnes and Cooper (1961). It does not pose the question of maximizing multiple objectives, but rather attempts to find specific goal values for these objectives. An example can be given by the following program:

f_1(x) ≥ v_1
f_2(x) = v_2
f_3(x) ≤ v_3
x ∈ S.

Clearly, we have to distinguish two cases, i.e., whether the intersection between the image set C and the utopian set, i.e., the image of the admissible solutions for the objectives, is
empty or not. In the former case, the problem transforms into one in which we have to find a solution whose value is as close as possible to the utopian set. To do this, additional variables and constraints are introduced. In particular, for each constraint of the type

f_1(x) ≥ v_1

we introduce a slack variable s_1^− such that the above constraint becomes

f_1(x) + s_1^− ≥ v_1.

For each constraint of the type

f_2(x) = v_2

we introduce two variables, a surplus variable s_2^+ and a slack variable s_2^−, such that the above constraint becomes

f_2(x) + s_2^− − s_2^+ = v_2.

For each constraint of the type

f_3(x) ≤ v_3

we introduce a surplus variable s_3^+ such that the above constraint becomes

f_3(x) − s_3^+ ≤ v_3.

Let us denote by s the vector of the additional variables. A solution (x, s) to the above problem is called a strict Pareto-slack optimum if and only if there is no solution (x', s'), with x' ∈ S, such that s'_i ≤ s_i with at least one strict inequality. There are different ways of optimizing the slack/surplus variables. An example is given by Archimedean goal programming, where the problem becomes that of minimizing a linear combination of the surplus and slack variables, each weighted by a positive coefficient α, as follows:

min α_{s_1^−} s_1^− + α_{s_2^+} s_2^+ + α_{s_2^−} s_2^− + α_{s_3^+} s_3^+
f_1(x) + s_1^− ≥ v_1
f_2(x) + s_2^− − s_2^+ = v_2
f_3(x) − s_3^+ ≤ v_3
s_1^− ≥ 0
s_2^+ ≥ 0
s_2^− ≥ 0
s_3^+ ≥ 0
x ∈ S.

For the above problem, the Geoffrion theorem says that its resolution offers strict or weak Pareto-slack optima. Besides Archimedean goal programming, other approaches are lexicographical goal programming, interactive goal programming, reference goal programming and multi-criteria goal programming (see, e.g., T'Kindt and Billaut, 2005).

2.3.4 Multi-level Programming

Multi-level programming is another approach to multi-objective optimization; it aims to find one optimal point on the entire Pareto surface. Multi-level programming orders the n objectives according to a hierarchy. Firstly, the minimizers of the first objective function are found; secondly, the minimizers of the second most
important objective are searched for, and so forth, until all the objective functions have been optimized on successively smaller sets.

Multi-level programming is a useful approach if the hierarchical order among the objectives is meaningful and the user is not interested in the continuous trade-off among the functions. One drawback is that optimization problems solved near the end of the hierarchy can be heavily constrained and could become infeasible, meaning that the less important objective functions tend to have no influence on the overall optimal solution.

Bi-level programming (see, e.g., Bialas and Karwan, 1984) is the scenario in which n = 2; it has received considerable attention, also for the numerous applications in which it is involved. An example is given by hazmat transportation, where it has mainly been used to model the network design problem considering the points of view of the government and of the carriers: see, e.g., the papers of Kara and Verter (2004) and of Erkut and Gzara (2008) for two applications (see also Chap. 4 of this book).

In a bi-level mathematical program one is concerned with two optimization problems, where the feasible region of the first problem, called the upper-level (or leader) problem, is determined by the knowledge of the other optimization problem, called the lower-level (or follower) problem. Problems that can naturally be modelled by means of bi-level programming are those for which the variables of the first problem are constrained to be the optimal solution of the lower-level problem.

In general, bi-level optimization is used to cope with problems with two decision makers in which the optimal decision of one of them (the leader) is constrained by the decision of the second decision maker (the follower). The second-level decision maker optimizes his/her objective function over a feasible region that is defined by the first-level decision maker. The latter, with this setting, is in charge of defining all
the possible reactions of the second-level decision maker, and selects those values for the variables controlled by the follower that produce the best outcome for his/her objective function. A bi-level program can be formulated as follows:

min f(x_1, x_2)
x_1 ∈ X_1
x_2 ∈ argmin { g(x_1, x_2) : x_2 ∈ X_2 }.

The analyst should pay particular attention when using bi-level optimization (or multi-level optimization in general) to studying the uniqueness of the solutions of the follower problem. Assume, for instance, that one has to calculate an optimal solution x_1* to the leader model, and let x_2* be an optimal solution of the follower problem associated with x_1*. If x_2* is not unique, i.e., |argmin g(x_1*, x_2)| > 1, we can have a situation in which the follower is free, without violating the leader's constraints, to adopt another optimal solution different from x_2*, i.e., x̂_2 ∈ argmin g(x_1*, x_2) with x̂_2 ≠ x_2*, possibly inducing a value f(x_1*, x̂_2) > f(x_1*, x_2*) on the leader and forcing the latter to carry out a sensitivity analysis on the values attained by his objective function over all the optimal solutions in argmin g(x_1*, x_2).

Bi-level programs are very closely related to the van Stackelberg equilibrium problem (van Stackelberg, 1952) and to mathematical programs with equilibrium constraints (see, e.g., Luo et al., 1996). The most studied instances of bi-level programming problems have for a long time been the linear bi-level programs, and therefore this subclass is the subject of several dedicated surveys, such as that by Wen and Hsu (1991). Over the years, more complex bi-level programs were studied, and even those including discrete variables received some attention; see, e.g., Vicente et al. (1996).
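The leader-follower structure formulated above can be illustrated with a brute-force sketch over small discrete sets: for each leader choice x_1, the follower's rational reaction is computed first, and the leader evaluates f only on those reactions. The objective functions and the grids X_1, X_2 below are toy assumptions.

```python
# Brute-force sketch of a bi-level (leader-follower) program:
# x2 is constrained to minimize the follower objective g(x1, x2),
# and the leader minimizes f over the induced pairs (x1, x2).

def follower_best(x1, X2, g):
    # Note: min returns the first minimizer; ties here correspond to the
    # non-uniqueness issue discussed in the text.
    return min(X2, key=lambda x2: g(x1, x2))

def solve_bilevel(X1, X2, f, g):
    best = None
    for x1 in X1:
        x2 = follower_best(x1, X2, g)       # follower's rational reaction
        value = f(x1, x2)
        if best is None or value < best[0]:
            best = (value, x1, x2)
    return best

X1 = [0.0, 0.5, 1.0, 1.5, 2.0]
X2 = [0.0, 0.5, 1.0, 1.5, 2.0]
f = lambda x1, x2: (x1 - 1.0) ** 2 + (x2 - 1.0) ** 2   # leader objective
g = lambda x1, x2: (x2 - x1) ** 2                      # follower objective
value, x1_star, x2_star = solve_bilevel(X1, X2, f, g)
```

Here the follower always tracks the leader (x_2 = x_1), so the leader's best choice is x_1 = 1. Real bi-level programs replace this enumeration with, e.g., KKT-based single-level reformulations or branch-and-bound, as discussed next.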
Hence, more general surveys appeared, such as those by Vicente and Calamai (1994) and Falk and Liu (1995) on non-linear bi-level programming. The combinatorial nature of bi-level programming has been reviewed in Marcotte and Savard (2005).

Bi-level programs are hard to solve. In particular, linear bi-level programming has been proved to be strongly NP-hard (see Hansen et al., 1992); Vicente et al. (1996) strengthened this result by showing that finding a certificate of local optimality is also strongly NP-hard.

Existing methods for bi-level programs can be distinguished into two classes. On the one hand, we have convergent algorithms for general bi-level programs with theoretical properties guaranteeing suitable stationary conditions; see, e.g., the implicit function approach by Outrata et al. (1998), the quadratic one-level reformulation by Scholtes and Stohr (1999), and the smoothing approaches by Fukushima and Pang (1999) and Dussault et al. (2004). With respect to optimization problems with complementarity constraints, which represent a special way of solving bi-level programs, we can mention the papers of Kocvara and Outrata (2004), Bouza and Still (2007), and Lin and Fukushima
problem obtained by replacing the second-level optimization problem by its optimality conditions. Exploiting the complementarity structure of this single-level reformulation, Bard and Moore (1990) and Hansen et al. (1992) have proposed branch-and-bound algorithms that appear to be among the most efficient. Typically, branch-and-bound is used when the lower-level problem is convex and regular, since the latter can then be replaced by its Karush–Kuhn–Tucker (KKT) conditions, yielding a single-level reformulation. When one deals with linear bi-level programs, the complementarity conditions are intrinsically combinatorial, and in such cases branch-and-bound is the best approach to solve the problem (see, e.g., Colson et al., 2005). Cutting-plane approaches are not frequently used to solve bi-level linear programs. The cutting-plane methods found in the literature are essentially based on Tuy's concavity cuts (Tuy, 1964). White and Anandalingam (1993) use these cuts in a penalty-function approach for solving bi-level linear programs. Marcotte et al. (1993) propose a cutting-plane algorithm for solving bi-level linear programs with a guarantee of finite termination. Recently, Audet et al. (2007), exploiting the equivalence of the latter problem with a mixed-integer linear programming one, proposed a new branch-and-bound algorithm embedding Gomory cuts for bi-level linear programming.

2.4 Multi-objective Optimization Integer Problems

In the previous section, we gave general results for continuous multi-objective problems. In this section, we focus our attention on what happens if the optimization problem being solved has integrality constraints on the variables. In particular, all the techniques presented can be applied in these situations as well, with some limitations on the capability of these methods to construct the Pareto front entirely.
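These limitations can already be observed on a toy instance. The sketch below (Python; the outcome set Y is made up for illustration) enumerates the Pareto front of a small bi-objective integer problem given directly by its outcomes in objective space, then shows that sweeping the weights of a weighted-sum scalarization recovers only part of the front, while an ε-constraint scalarization recovers the remaining point as well:

```python
# Toy bi-objective integer problem given directly by its outcome set Y
# (made-up data): each point is a vector (f1, f2), both to be minimized.
Y = [(1, 4), (3, 3), (4, 1), (4, 4), (5, 2)]

def dominates(a, b):
    """a dominates b: a is no worse in every objective and differs from b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

pareto = [y for y in Y if not any(dominates(z, y) for z in Y)]

# Weighted-sum scalarization: sweep the weight and collect every minimizer.
supported = set()
for k in range(101):
    w = k / 100.0
    best = min(w * y1 + (1 - w) * y2 for y1, y2 in Y)
    supported |= {y for y in Y if abs(w * y[0] + (1 - w) * y[1] - best) < 1e-9}

# Epsilon-constraint scalarization: minimize f1 subject to f2 <= eps.
eps_found = set()
for eps in range(1, 6):
    feasible = [y for y in Y if y[1] <= eps]
    if feasible:
        m = min(y[0] for y in feasible)
        eps_found |= {y for y in feasible if y[0] == m}

print("Pareto front:", sorted(pareto))           # includes (3, 3)
print("weighted-sum finds:", sorted(supported))  # only supported points
print("eps-constraint finds:", sorted(eps_found))
```

Here (3, 3) is Pareto optimal but lies inside the convex hull of the other outcomes, so no choice of weights makes it a weighted-sum minimizer; the ε-constraint scalarization, in contrast, finds it for ε = 3.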
Indeed, these methods are, in general, very hard to solve in real applications, or are unable to find all efficient solutions. When integrality constraints arise, one of the main limits of these techniques is the inability to obtain some of the Pareto optima; we therefore distinguish between supported and unsupported Pareto optima.

[Fig. 2.6: Supported and unsupported Pareto optima, sketched in the (f_1(x), f_2(x)) objective space]

Fig. 2.6 gives an example of these situations: points p6 and p7 are unsupported Pareto optima, while p1 and p5 are supported weak Pareto optima, and p2, p3, and p4 are supported strict Pareto optima.

Given a multi-objective optimization integer problem (MOIP), its scalarization into a single-objective problem with additional variables and/or parameters, used to find a subset of efficient solutions to the original MOIP, has the same computational complexity issues as a continuous scalarized problem.

In his 2006 paper "A discussion of scalarization techniques for multiple objective integer programming", Ehrgott, besides the scalarization techniques also presented in the previous section (e.g., the weighted-sum method and the ε-constraint method), which satisfy the linear requirement imposed by the MOIP formulation (where variables are integers, but constraints and objectives are linear), presented further methods such as Lagrangian relaxation and the elastic-constraints method. From the author's analysis, it emerges that attempting to solve the scalarized problem by means of Lagrangian relaxation does not lead to results that go beyond the performance of the weighted-sum technique. It is also shown that the general linear scalarization formulation is NP-hard. The author then presents the elastic-constraints method, a new scalarization technique able to overcome the drawback of the previously mentioned techniques related to finding all efficient solutions, combining the advantages of the weighted-sum and ε-constraint methods. Furthermore, it is shown that a proper application of this
method can also give reasonable computing times in practical applications; indeed, the author applies the elastic-constraints method to an airline-crew scheduling problem, whose size ranges from 500 to 2000 constraints, and the results show the effectiveness of the proposed technique.

2.4.1 Multi-objective Shortest Paths

Given a directed graph G = (V, A), an origin s ∈ V and a destination t ∈ V, the shortest-path problem (SPP) aims to find the minimum-distance path in G from s to t. This problem has been studied for more than 50 years, and several polynomial algorithms have been produced (see, for instance, Cormen et al., 2001).

From the freight distribution point of view, the term shortest may have quite different meanings, from faster, to quickest, to safest, and so on, depending on what the labels of the arc set A represent to the decision maker. For this reason, in some cases we will find it simpler to define several labels for each arc, so as to represent the different arc features (e.g., length, travel time, estimated risk).

The problem of finding multi-objective shortest paths (MOSPP) is known to be NP-hard (see, e.g., Serafini, 1986), and the algorithms proposed in the literature face the difficulty of managing the large number of non-dominated paths, which results in considerable computational time even on small instances. Note that the number of non-dominated paths may increase exponentially with the number of nodes in the graph (Hansen, 1979). In the multi-objective scenario, each arc (i, j) in the graph has a vector of costs c_{ij} ∈ R^n, with components c_{ij} = (c_{ij}^1, ..., c_{ij}^n), where n is the number of criteria.
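A standard way to cope with this multiplicity of non-dominated paths is a label-setting scheme that keeps, at every node, only mutually non-dominated cost vectors. The sketch below (Python; the graph and its bi-criteria arc costs are made up for illustration, e.g. a (length, estimated risk) pair per arc) is a minimal version of this idea:

```python
import heapq

# Minimal label-setting sketch for the bi-objective shortest-path problem.
# The graph and its arc-cost vectors are made-up illustrative data.
graph = {
    's': [('a', (1, 5)), ('b', (4, 1))],
    'a': [('t', (1, 5)), ('b', (1, 1))],
    'b': [('t', (4, 1))],
    't': [],
}

def dominates(a, b):
    """a dominates b: a is no worse in every criterion and differs from b."""
    return all(x <= y for x, y in zip(a, b)) and a != b

def mosp(graph, source, target):
    """Return the non-dominated cost vectors of source->target paths."""
    labels = {v: [] for v in graph}    # non-dominated labels kept per node
    heap = [((0, 0), source)]          # labels popped in lexicographic order
    while heap:
        cost, v = heapq.heappop(heap)
        if cost in labels[v] or any(dominates(l, cost) for l in labels[v]):
            continue                   # label dominated or duplicate: prune it
        labels[v] = [l for l in labels[v] if not dominates(cost, l)] + [cost]
        for w, arc_cost in graph[v]:   # extend the label along outgoing arcs
            new = tuple(c + d for c, d in zip(cost, arc_cost))
            heapq.heappush(heap, (new, w))
    return sorted(labels[target])

print(mosp(graph, 's', 't'))
```

On this instance the three s-t paths have cost vectors (2, 10), (6, 7), and (8, 2), all mutually non-dominated, which illustrates how quickly the label lists, and hence the running time, can grow with the graph.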