Choosing among several parallel implementations of the backpropagation algorithm
- Format: PDF
- Size: 88.58 KB
- Pages: 7
Exploring the Possibilities and Paradoxes

Exploring the possibilities and paradoxes of life is a complex and multifaceted endeavor that requires a deep understanding of the human experience. It involves delving into the depths of our existence, contemplating the various paths we can take, and grappling with the contradictions and enigmas that confront us along the way. From the grand mysteries of the universe to the intricacies of our own minds, it is an endlessly fascinating and challenging pursuit.

One of the most compelling aspects of this exploration is the opportunity to expand our horizons and challenge our preconceived notions. When we open ourselves to the myriad possibilities that exist, we break free from the constraints of our limited perspective and embrace the vastness of the unknown. This can be both exhilarating and terrifying, as we confront the uncertainty of the future and the endless array of choices that lie before us. It is through this process of exploration, however, that we grow and evolve, pushing the boundaries of what we thought possible and discovering new and unexpected truths about ourselves and the world around us.

At the same time, exploring the possibilities and paradoxes of life forces us to confront the inherent contradictions and complexities that define our existence. Life is filled with paradoxes: moments of joy and sorrow, love and loss, success and failure, that can be difficult to reconcile. How can we make sense of a world so full of contradictions and uncertainties? It is by grappling with these paradoxes that we develop a deeper understanding of ourselves and the world around us. By embracing the inherent contradictions of life, we can learn to appreciate the beauty and complexity of our existence, finding meaning in the midst of chaos and confusion.

Moreover, exploring the possibilities and paradoxes of life allows us to tap into our innate sense of curiosity and wonder. As humans, we are naturally drawn to the unknown, compelled to seek out new experiences and knowledge that challenge and inspire us. This curiosity pushes us to ask difficult questions and to seek out the answers that elude us. Through this exploration we expand our understanding of the world and our place within it, opening ourselves to new ideas and perspectives that can enrich our lives in profound ways.

However, while exploring the possibilities and paradoxes of life can be incredibly rewarding, it can also be daunting and overwhelming. The sheer magnitude of the unknown and the complexities of our existence can leave us feeling lost and adrift, struggling to make sense of a world that often seems incomprehensible. In these moments of uncertainty and doubt we must rely on our resilience and determination to continue forward, embracing the challenges that come with the journey. It is through these struggles that we develop the strength and courage to confront the unknown, finding meaning and purpose in the midst of chaos and confusion.

In conclusion, exploring the possibilities and paradoxes of life is a deeply enriching and challenging endeavor that allows us to expand our horizons, confront the contradictions of our existence, tap into our innate curiosity and wonder, and develop the resilience to face the unknown. It is through this process of exploration that we grow and evolve, pushing the boundaries of what we thought possible and discovering new and unexpected truths about ourselves and the world around us. While the journey may be fraught with uncertainty and doubt, it is through these struggles that we find meaning and purpose, ultimately enriching our lives in profound ways.
Organizational Behavior: Managerial Communication

Communication is the essence of management. Konosuke Matsushita famously said: "Business management was communication in the past, it is communication now, and it will still be communication in the future." The real work of a manager is communication. Whatever the era, business management can never do without communication.
A working knowledge of communication requires a basic understanding of some fundamental concepts.

Direction of communication
Communication can flow vertically or laterally. The vertical dimension can be further divided into the downward and upward directions.

Formal organizational communication
There are 3 basic types of formal organizational communication: downward, upward and lateral.

Downward communication
Downward communication is, simply put, communication from a superior to subordinates. Its three most important forms are issuing instructions, hearing reports, and discussing problems. Whether such communication is carried out fully and effectively directly affects the operating efficiency of the organization.
— 张华, Monday, December 13, 2010, noted at 安德电器

Downward organizational communication flows from any point on an organization chart downward to another point on the organization chart. That is, communication flows from one level downward to a lower level on the organization chart. For example, communication between a manager and his subordinates is one kind of downward communication. In downward communication, information flows from a higher organizational level to a lower one, usually for the purposes of control, instruction, motivation and evaluation.
As a high school student, I've always been surrounded by books, and the act of placing a book on a table is something I do countless times a day. It's a simple action, yet it carries a profound significance in my academic journey and personal growth.

My day usually starts with the sun's first rays peeking through the curtains, and I find myself reaching for the book I left on my bedside table the night before. It's a ritual that sets the tone for the day. The book, a companion in my quest for knowledge, is carefully placed on the table in the breakfast nook. The sunlight dances on its cover, casting a warm glow that feels like a gentle nudge to start the day right.

At school, the sight of books on tables is ubiquitous. Each classroom is a sea of desks, each adorned with a book or two. The library, a sanctuary of sorts, is filled with long tables where students bury their noses in the pages of novels, textbooks, and reference materials. The act of placing a book on a table here is a declaration of intent: a silent promise to delve into the world within its pages.

During my study sessions, I find comfort in the familiarity of the gesture. A textbook on the table signifies a commitment to understanding a new concept or solving a complex problem. A novel, placed there with anticipation, promises an escape into a different world, a chance to live vicariously through the characters and their adventures.

The placement of a book on a table is not just a physical act; it's a metaphor for the process of learning. It represents the beginning of an exploration, a journey that starts with a single step but can lead to a vast expanse of knowledge. Each book is a vessel of wisdom, waiting to be unlocked by the curious mind.

In the quiet of my room, after a long day, the act of placing a book on my desk is a signal to my brain that it's time to unwind. It's a transition from the hustle of the day to the calm of the night. The book becomes a bridge between the world outside and the world within, a portal to dreams and ideas that can only be reached through the power of imagination.

The table, a simple piece of furniture, becomes a stage where the drama of learning unfolds. It's a canvas where the ink of knowledge is spilled, where the lines of thought are drawn, and where the boundaries of understanding are pushed. The book, once placed, becomes part of this tableau, contributing to the narrative of learning and discovery.

In a world where digital screens are increasingly prevalent, the tactile experience of placing a physical book on a table is a reminder of the enduring charm of print. The weight of the book in my hands, the texture of the paper, the smell of the ink: all these sensory details enhance the experience of reading and make the learning process more immersive.

Moreover, the act of placing a book on a table is a testament to the value of perseverance. It's a reminder that every piece of knowledge, every insight, every discovery, begins with this simple action. It's a humble beginning that can lead to monumental achievements.

In conclusion, placing a book on a table is more than just a mundane task; it's a ritual that signifies the beginning of a learning journey. It's a symbol of curiosity, a beacon of knowledge, and a stepping stone to a world of possibilities. As I continue to place books on tables, I am reminded of the power of this simple act and the profound impact it has on my life as a student and an individual.
New Changes

The revised GRE, launching in the fall of 2007, will change in the following respects. In the sentence-completion section, the number of questions grows from 7 to 14, and the answer-option format changes as well. In the old test, the options were divided into one-blank and two-blank formats, but either way the task was a single choice among five options. In the sample questions for the revised test, the 14 sentence-completion items break down as follows: 5 one-blank questions where 2 of 6 options must be chosen, 4 one-blank questions with 1 of 5 options, 4 two-blank questions, and 1 three-blank question. The new answer formats thus test knowledge more objectively. For the two-blank and three-blank questions, each blank now has its own 1-of-3 choice: each blank is answered independently, and the number of options per blank drops from 5 to 3. As a result, the probability of selecting the correct answer at random falls from 1/5 to 1/9 for two-blank questions, and is 1/27 for the new three-blank questions.
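These figures follow from each blank being an independent 1-of-3 choice; a quick arithmetic check (the helper name is ours):

```python
from fractions import Fraction

def guess_probability(blanks: int, options_per_blank: int) -> Fraction:
    """Chance of getting every blank right by random guessing, when each
    blank is an independent 1-of-k choice."""
    return Fraction(1, options_per_blank) ** blanks

old_two_blank = Fraction(1, 5)             # old format: one 1-of-5 choice
new_two_blank = guess_probability(2, 3)    # 1/9
new_three_blank = guess_probability(3, 3)  # 1/27

# The sample-test breakdown above: 5 + 4 + 4 + 1 = 14 questions in total.
total_questions = 5 + 4 + 4 + 1
```

Guessing a two-blank question thus becomes nearly twice as hard as before (1/9 versus 1/5), and a three-blank question harder still.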
The difficulty therefore rises: one can no longer pin down the unique correct answer to a two-blank question with a single selection, as before. Even so, the solving logic for two-blank and three-blank questions is basically the same as for the old two-blank questions. The only genuinely new question type is the 2-of-6 one-blank question, which deserves closer analysis here. Since the 2-of-6 format tests synonym pairs, if only one synonym pair appears among the six options, it must be the correct answer. If there are two synonym pairs, a further judgment based on the logical relation within the sentence is needed, and that relation is simply the synonymous or antonymous repetition already tested by the old question types.
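The elimination logic just described can be sketched as a tiny routine. The synonym relation is supplied as data (a hypothetical oracle), since real questions require vocabulary knowledge; all words and names below are illustrative:

```python
from itertools import combinations

def candidate_pairs(options, are_synonyms):
    """Return every pair of options that are synonyms of each other."""
    return [p for p in combinations(options, 2) if are_synonyms(*p)]

# Hypothetical six options in which exactly one synonym pair exists.
synonyms = {frozenset({"placid", "tranquil"})}
is_syn = lambda a, b: frozenset({a, b}) in synonyms
options = ["placid", "hostile", "tranquil", "opaque", "verbose", "frail"]

pairs = candidate_pairs(options, is_syn)
# With a single synonym pair, it must be the answer; with two pairs,
# the sentence's own logic (synonymous vs. antonymous repetition) decides.
```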
Below we analyze specific sample questions (taken from the official ETS website).
Educational Testing Service (ETS) announced in February that the new GRE, launching worldwide in September this year, will be held simultaneously in China, while the current GRE will be discontinued on July 31. Registration for the new GRE opens on July 1, and the test date is September 16. As the biggest reform in 60 years, the new GRE will put more emphasis on testing candidates' aptitude for research. The test items will change from the current randomized selection to identical questions administered to all candidates at the same time. The delivery mode changes from the current PBT or CBT to iBT, with a total test time of about 4 hours. The verbal reasoning part has 2 sections of 40 minutes each; the quantitative part has 2 sections of 40 minutes each; the writing part still has the argument and issue tasks, at 30 minutes each.
GRE Reading: Difficult Sentences Explained

1. Such philosophical concerns as the mind-body problem or, more generally, the nature of human knowledge, they believe, are basic human questions whose tentative philosophical solutions have served as the necessary foundations on which all other intellectual speculation has rested.
他们相信,诸如灵与肉的问题,或更普遍而言,人类知识的性质等此类哲学关注,均是一些基本的人类问题,其探索性的哲学答案已构成一个必要的基础,其它所有的智力思辩均赖以建立其上。

2. The idea of an autonomous discipline called philosophy, distinct from and sitting in judgment on such pursuits as theology and science, turns out, on close examination, to be of quite recent origin.
细作审视,我们便不难发现,一种被称为哲学的独立学科,既有别于神学和科学诸般学科,并高高在上地对其予以评判,这一思想的渊源却是甚为近期的事。

3. But the recent discovery of detailed similarities in the skeletal structure of the flippers in all three groups undermines the attempt to explain away superficial resemblance as due to convergent evolution (the independent development of similarities between unrelated groups in response to similar environmental pressures).
但是,最近在所有这三类动物身上,对鳍肢骨骼结构所存在的细致共同点的发现,足以削弱这样一种企图,即把表面上的近似作为趋同进化所致的结果来将其解释掉。
MM Research Preprints, 352–374
MMRC, AMSS, Academia Sinica
No. 23, December 2004

Evolutionary Programming Based on Non-Uniform Mutation 1)

Xinchao Zhao and Xiao-Shan Gao
Key Laboratory of Mathematics Mechanization
Institute of Systems Science, AMSS, Academia Sinica
(xczhao,xgao)@

Zechun Hu
Department of Mathematics, Nanjing University

1) Partially supported by a National Key Basic Research Project of China and by a USA NSF grant CCR-0201253.

Abstract. A new evolutionary programming algorithm (NEP) using the non-uniform mutation operator instead of Gaussian or Cauchy mutation operators is proposed. NEP has the merits of the "long jumps" of the Cauchy mutation operator at the early stage of the algorithm and the "fine-tunings" of the Gaussian mutation operator at the later stage. Comparisons with the recently proposed sequential and parallel evolutionary algorithms are made through comprehensive experiments. NEP significantly outperforms the adaptive LEP for most of the benchmarks. NEP outperforms some parallel GAs and performs comparably to others in terms of solution quality and algorithmic robustness. We give a detailed theoretical analysis of NEP. The probability convergence is proved. The expected step size of the non-uniform mutation is calculated. Based on this, the key property of NEP, "long jumps" at the early stage and "fine-tunings" at the later stage, is proved strictly.

Index Terms: Evolutionary programming, genetic algorithm, non-uniform mutation, global optimization, probability convergence, theoretical analysis.

1. Introduction

Inspired by biological evolution and natural selection, intelligent computation algorithms have been proposed to provide powerful tools for solving many difficult problems. Genetic algorithms (GAs) [2,3], evolutionary strategies (ESs) [4], and evolutionary programming (EP) [5,21] are especially noticeable among them. In GAs, the crossover operator plays the major role and mutation is always seen as an assistant operator. In ESs and EP, however, mutation has been considered the main operator. GAs usually adopt a high crossover probability and a low mutation probability, while ESs and EPs apply mutation to every individual. In binary GAs, one-, two-, multi-point, or uniform crossover and uniform mutation [1,3] are often used. Some new mutation operators have been proposed recently, such as the frame-shift and translocation operators [22], the transposition operator [18], etc. For real-coded GAs, the non-uniform mutation operator [1] was introduced. Besides the Gaussian mutation [5,21], self-adaptation mutations [11,12], self-adaptation rules from ESs [13], Cauchy [14] and Lévy-based [15] mutations have also been incorporated into evolutionary programming. These new operators greatly enhance the performance of the algorithms.

In this paper, a new evolutionary programming algorithm (abbr. NEP) using the non-uniform mutation instead of Gaussian or Cauchy mutations is proposed. This work is inspired by the following observations. First, Yao et al. [14,15] argued that a "higher probability of making longer jumps" is a key reason that FEP and LEP perform better than CEP. However, "longer jumps" are detrimental if the current point is already very close to the global optimum. Second, the non-uniform mutation operator introduced in [1] has the feature of searching the space uniformly at the early stage and very locally at the later stage. In other words, the non-uniform mutation combines the merits of a "higher probability of making far long jumps" at the early stage and a "much better local fine-tuning ability" at the later stage. In [1], the non-uniform mutation operator is used in GAs by Michalewicz. As we mentioned before, the mutation operator is generally seen as an assistant operator in GAs.
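The behavior of this operator is easy to see in code. Below is a minimal sketch of the non-uniform mutation of Eqs. (1)-(2) (given in Section 2.1); the function and variable names are our own:

```python
import random

def delta(t: int, y: float, T: int, b: float = 5.0) -> float:
    """Eq. (2): a step in [0, y] that shrinks toward 0 as t approaches T."""
    r = random.random()
    return y * (1.0 - r ** ((1.0 - t / T) ** b))

def nonuniform_mutate(x, k, t, T, lb, ub, b=5.0):
    """Eq. (1): mutate component k of individual x at generation t,
    staying inside the box [lb, ub]."""
    y = list(x)
    if random.random() < 0.5:      # xi = 0: move toward the upper bound
        y[k] = x[k] + delta(t, ub - x[k], T, b)
    else:                          # xi = 1: move toward the lower bound
        y[k] = x[k] - delta(t, x[k] - lb, T, b)
    return y
```

With t small the step can span almost the whole range, while for t near T it stays very close to x[k]: exactly the uniform-early, local-late behavior described above.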
In NEP, by contrast, the mutation operator is treated as the major operator. At the initial stage of NEP, the jumping steps are so long that they almost cover the whole search space (Section IV-A). The greedy idea, and the idea of mutating a single component of the individual vector rather than modifying all components, are incorporated into NEP in order to avoid possible random jumps and to ensure the algorithm "stays" at the promising area just found by the search engine.

Comparisons with the adaptive LEP demonstrate that NEP greatly outperforms the adaptive LEP for most of the separable and nonseparable, unimodal and multimodal benchmarks. Comparisons with five parallel genetic algorithms on high-dimensional benchmarks are also made. NEP performs much better than R-DC and R-DS, comparably to ECO-GA, and slightly worse than CHC-GA and GD-RCGA. A detailed introduction to these parallel GAs can be found in [6].

Convergence is an important issue in the theoretical study of EAs, and many nice results [7,8,9,10] have been obtained. Based on Markov process theory, we prove that NEP with greedy selection converges with probability one. A theoretical analysis of how NEP works is also given, based on the theory of stochastic processes. First, the expected step size of the non-uniform mutation is calculated. Its derivative with respect to the generation variable t is less than zero, which implies a monotonically decreasing exploring region as the algorithm progresses. Second, we obtain a quantitative description of the step size through analyzing the step size equation. The theoretical analysis also strongly supports the fact that NEP is not sensitive to the search space size [31].

According to Michalewicz [1], the parameter b in Eq. (2) of the non-uniform mutation determines the non-uniformity. Through theoretical analysis, we show that when b becomes larger, the step size of the mutation decreases faster. So an adaptive NEP is proposed based on different values of b: we apply several mutations with different values of b to a parent and choose the best offspring for the next generation.

The rest of the paper is organized as follows. In Section II, we present the NEP algorithm and prove its convergence. Comprehensive experiments are done to compare NEP with the recently proposed adaptive LEP [15] and five parallel genetic algorithms [6] in Section III. In Section IV, further theoretical analysis of the executing process of NEP is given. An adaptive NEP algorithm is proposed in Section V. In Section VI, conclusions are reached.

2. Non-uniform Evolutionary Programming

In this section, we introduce an evolutionary programming algorithm based on the non-uniform mutation operator and prove its probability convergence.

2.1. Non-uniform Mutation

Michalewicz [1] proposed a dynamical non-uniform mutation operator to reduce the disadvantage of random mutation in the real-coded GA. This operator is defined as follows. For each individual X_i^t in the population of the t-th generation, create an offspring X_i^{t+1} through non-uniform mutation as follows: if X_i^t = {x_1, x_2, ..., x_m} is a chromosome (t is the generation number) and the element x_k is selected for this mutation, the result is the vector X_i^{t+1} = {x_1, ..., x'_k, ..., x_m}, where

  x'_k = x_k + Δ(t, UB − x_k),  if a random ξ is 0,
  x'_k = x_k − Δ(t, x_k − LB),  if a random ξ is 1,        (1)

and LB and UB are the lower and upper bounds of the variable x_k. The function Δ(t, y) returns a value in the range [0, y] such that Δ(t, y) approaches zero as t increases. This property causes the operator to search the space uniformly initially (when t is small) and very locally at later stages. This strategy increases the probability of generating a new number close to its successor over a random choice. We use the following function:

  Δ(t, y) = y · (1 − r^{(1 − t/T)^b}),        (2)

where r is a uniform random number from [0,1], T is the maximal generation number, and b is a system parameter determining the degree of dependency on the iteration number.

2.2. Simple Crossover in Evolutionary Programming

Similar to the binary
genetic algorithm, we adopt the simple crossover. Let m be the dimension of a given problem; the components of chromosomes X, Y are all float numbers. Choose a pair of individuals:

  X = (x_1, ..., x_pos1, ..., x_pos2, ..., x_m)
  Y = (y_1, ..., y_pos1, ..., y_pos2, ..., y_m)

and randomly generate a decimal r. If r < pc (the crossover probability), apply simple two-point crossover to them as follows. Generate two random integers pos1, pos2 in the interval [1, m]. The components of the two individuals between positions pos1 and pos2 are exchanged, and the new individuals are:

  X′ = (x_1, ..., y_pos1, ..., y_pos2, ..., x_m)
  Y′ = (y_1, ..., x_pos1, ..., x_pos2, ..., y_m)

2.3. Greedy Selection in NEP

We first give the definition of the neighborhood of an individual.

Definition [31]: Given a vector X = (x_1, ..., x_i, ..., x_m) (m is the dimension of vectors), we call X′ a neighbor of X if and only if one of its components is changed and the other components remain unchanged. The neighborhood N of a vector X consists of all its neighbors. That is,

  N = {X′ | X′ is a neighbor of X}.        (3)

Different from the traditional local search, which performs greedy local search until a local optimum is obtained, we mutate every component only once, with a certain probability, for each component of every real-coded individual (vector). The current individual is replaced by the mutated one only when the new one is not worse than the current individual. This strategy can overcome the plateaus-of-constant-fitness problem indicated by Jansen and Wegener [17]. The greedy selection procedure for a current individual X = (x_1, ..., x_i, ..., x_m) is as follows.

  For i from 1 to the functional dimension m:
    Mutate the i-th component x_i of X to obtain a new vector X′ using Eqs. (1, 2);
    if the fitness of X′ is not worse than that of X, then
      X′ becomes the current individual;
  End for.

2.4. Non-Uniform Evolutionary Programming

For a function f(X), NEP will find an X* such that

  ∀X, f(X*) ≤ f(X).        (4)

The NEP algorithm adopts real encoding, two-point crossover and non-uniform mutation. The procedure of NEP is given as follows.

Procedure of NEP
1) Generate the initial population consisting of n individuals, each of which, X_i^0, has m independent components, X_i^0 = (x_1, x_2, ..., x_m).
2) Evaluate each individual based on the objective function, f(X_i^0).
3) Apply two-point crossover to the current population.
4) For each parental individual X_i^t = (x_1^t, ..., x_m^t):
  4.1) for each component x_j^t, construct x′_j^t using Eqs. (1, 2) (the new individual is denoted X_i^t′);
    4.1.1) if f(X_i^t′) ≤ f(X_i^t), then X_i^t′ replaces X_i^t as the current individual.
5) Conduct pairwise comparison over the offspring population. For each comparison, q individuals are chosen uniformly at random from the offspring population. The fittest individual is put into the next generation.
6) Repeat steps 3-5 until the stopping criteria are satisfied.
End of Procedure

The crossover is first applied to the population with probability 0.4, followed by the non-uniform mutation operator with mutation probability 0.6. The sum of the mutation and crossover probabilities is kept at 1. Thus NEP generates, in the sense of probability, an equal number of offspring compared with other EPs. The parameter b in NEP remains unchanged during the evolution.

2.5. Analysis of the Convergence of NEP

In this section, the convergence of NEP is proved based on stochastic process theory. Since NEP mutates only a single component in one mutation operation, we need only consider the one-dimensional case, i.e., n = 1. In this case there is no crossover, and we need only consider the non-uniform mutation operation. We divide the objective functions into two classes in the following analysis.

2.5.1. Unimodal Functions

We assume that f(x) has a unique minimal value at x*. In Fig. (1), let x_0 be one initial solution, x′_0 another initial solution lying between x* and x_0, x̄_0 a number satisfying f(x̄_0) = f(x_0) and x̄_0 ≠ x_0, and ε an arbitrarily small positive number. Without loss of generality, we assume that the variable x lies on the right side of x* and x̄ on the left
side.

[Fig. 1. Analysis on the unimodal function]

Based on Eqs. (1, 2) and the NEP algorithm, we have

  x_1 = x_0,                  if ξ = 0, or if ξ = 1 and x_0 − Δ(1, x_0 − a) ≤ x̄_0;
  x_1 = x_0 − Δ(1, x_0 − a),  if ξ = 1 and x_0 − Δ(1, x_0 − a) > x̄_0;

and analogously

  x′_1 = x′_0,                   if ξ = 0, or if ξ = 1 and x′_0 − Δ(1, x′_0 − a) ≤ x̄′_0;
  x′_1 = x′_0 − Δ(1, x′_0 − a),  if ξ = 1 and x′_0 − Δ(1, x′_0 − a) > x̄′_0.

Lemma 1: Let p_1 = P{x_1 ∉ (x*−ε, x*+ε)} and p′_1 = P{x′_1 ∉ (x*−ε, x*+ε)}. If x* < x′_0 < x_0 then p′_1 < p_1. Similarly, if x_0 < x′_0 < x*, then p′_1 < p_1 also holds.

Proof: We have

  p_1 = 1 − P{x_1 ∈ (x*−ε, x*+ε)}
      = 1 − P{ξ = 1, x*−ε < x_0 − Δ(1, x_0 − a) < x*+ε}
      = 1 − (1/2) P{x*−ε < x_0 − Δ(1, x_0 − a) < x*+ε}.

Similarly,

  p′_1 = 1 − (1/2) P{x*−ε < x′_0 − Δ(1, x′_0 − a) < x*+ε}.

Let

  q  = P{x*−ε < x_0 − Δ(1, x_0 − a) < x*+ε},
  q′ = P{x*−ε < x′_0 − Δ(1, x′_0 − a) < x*+ε}.

Thus it is enough to show that

  q < q′.        (5)

By Eq. (2) we have

  q = P{x*−ε < x_0 − (x_0 − a)(1 − r^{(1−1/T)^b}) < x*+ε}
    = P{((x*−a−ε)/(x_0−a))^{1/m} < r < ((x*−a+ε)/(x_0−a))^{1/m}}
    = ((x*−a+ε)/(x_0−a))^{1/m} − ((x*−a−ε)/(x_0−a))^{1/m},  where m = (1 − 1/T)^b.        (6)

Similarly, we have

  q′ = ((x*−a+ε)/(x′_0−a))^{1/m} − ((x*−a−ε)/(x′_0−a))^{1/m}.        (7)

Subtracting q from q′ using Eqs. (6, 7), we may derive Eq. (5), and thus the correctness of the lemma.

Since x_0 is a given initial solution (individual), we can assume that x_0 > x*. Let

  p_1^+ := P{x_0 is the initial solution and x_1 ∉ (x*−ε, x*+ε)},
  p_1^− := P{x̄_0 is the initial solution and x_1 ∉ (x*−ε, x*+ε)}.

Then p_1 := p_1^+ (or p_1^−). For n ≥ 2, we define

  p_n^+ := P{x_{n−1} > x*, x_n ∉ (x*−ε, x*+ε)},
  p_n^− := P{x_{n−1} < x*, x_n ∉ (x*−ε, x*+ε)},
  p_n := p_n^+ + p_n^−.

Theorem 2: For any ε > 0, we have lim_{n→∞} p_n = 0.

Proof: By the description of NEP, it is easy to see that the stochastic process {x_i, i = 0, 1, 2, ...} is a Markov process. By the property of conditional expectation, the Markov property and Lemma 1, we obtain

  p_2 = P{x_2 ∉ (x*−ε, x*+ε)}
      = E[I{x_2 ∉ (x*−ε, x*+ε)}]
      = E(E[I{x_2 ∉ (x*−ε, x*+ε)} | x_1])
      ≤ max{p_1^+, p_1^−} · p_1,

and likewise

  p_3 = P{x_3 ∉ (x*−ε, x*+ε)} ≤ (max{p_1^+, p_1^−})^2 · p_1.

By induction we have

  p_n = P{x_n ∉ (x*−ε, x*+ε)} ≤ (max{p_1^+, p_1^−})^{n−1} · p_1.

Obviously 0 < p_1^+, p_1^− < 1, so lim_{n→∞} p_n = 0. The proof is complete.

By the greedy selection of NEP, it is easy to see that

  p_n = P{x_i ∉ (x*−ε, x*+ε), i = 1, 2, ..., n}.

So Theorem 2 implies that for any ε > 0 we have

  lim_{n→∞} (1 − P{x_i ∉ (x*−ε, x*+ε), i = 1, ..., n}) = 1,

i.e.,

  lim_{n→∞} P{∃ i = 1, ..., n, s.t. x_i ∈ (x*−ε, x*+ε)} = 1.        (8)

Eq. (8) indicates that for any ε > 0, the sequence {x_i}_{i=1}^∞ enters the domain (x*−ε, x*+ε) almost surely, and so {x_i}_{i=1}^∞ converges to x* almost surely.

2.5.2. Multimodal Functions

We assume that g(x) is a multimodal function with a minimal value at x*. Without loss of generality, we assume that g has only one global optimum. Let x_0, x′_0 be initial solutions (individuals).

[Fig. 2. Analysis on the multimodal function]

Without loss of generality, suppose x_0, x′_0 are two points on the left side of point "c" as in Fig. (2). Denote their offspring as x_1, x′_1 respectively. We now consider how to choose the interval points "c" and "d", which satisfy the following conditions. First, g(x) is unimodal on the subinterval [c, d]. Second, we assume g(c) = g(d). Third, there is no other local optimal region below the line through g(c) and g(d) on the function g(x). We have the following lemma.

Lemma 3: Let

  p_1 = P{x_1(ω) ∉ (c, d), x_0 is the initial point},
  p′_1 = P{x′_1(ω) ∉ (c, d), x′_0 is the initial point}.

Then if x_0 < x′_0, we have p′_1 < p_1.

Proof: We have

  p_1 = 1 − P{x_1 ∈ (c, d), x_0 is the initial point}
      = 1 − P{ξ = 0, c < x_0 + Δ(1, b − x_0) < d}
      = 1 − (1/2) P{c < x_0 + Δ(1, b − x_0) < d}.

Similarly,

  p′_1 = 1 − (1/2) P{c < x′_0 + Δ(1, b − x′_0) < d}.

The rest of the proof is similar to that of Lemma 1.

Remark: This means that if the initial point is closer to (c, d), the probability that its offspring enters (c, d) is larger.

Next, we establish a result similar to Theorem 2. Let x_0 be a given initial individual as indicated in Fig. (2). Define

  A_0 := {u ∈ [a, b] : g(u) = g(x_0)},
  x_0^− := min{u ∈ A_0 : u < c},
  x_0^+ := max{u ∈ A_0 : u > d}.

For example, x_0^− = x_0^1 and x_0^+ = x_0^4 in Fig. (2). For n ≥ 1, we define

  A_n(ω) := {u ∈ [a, b] : g(u) = g(x_n(ω))},
  x_n^− := min{u ∈ A_n(ω) : u < c},
  x_n^+ := max{u ∈ A_n(ω) : u > d}.

Let

  p_1^− := P{x_1(ω) ∉ (c, d), x_0^− is the initial point},
  p_1^+ := P{x_1(ω) ∉ (c, d), x_0^+ is the initial point}.

For n ≥ 1 (all the following equations should be understood under the condition that x_0 is the starting point), we define

  p_n = P{x_n(ω) ∉ (c, d)}.

Theorem 4: We have lim_{n→∞} p_n = 0.

Proof: By the procedure of NEP, we know that the stochastic process {x_i, i = 0, 1, ...} is a Markov process. By the property of conditional expectation, the Markov property and Lemma 3, we obtain

  p_2 = P{x_2(ω) ∉ (c, d)} = E[I{x_2(ω) ∉ (c, d)}]
      = E(E[I{x_2(ω) ∉ (c, d)} | x_1(ω)])
      ≤ p_1 · max{p_1^+, p_1^−},

and similarly

  p_3 = P{x_3(ω) ∉ (c, d)} ≤ p_1 · (max{p_1^+, p_1^−})^2.

By induction we have

  p_n := P{x_n(ω) ∉ (c, d)} ≤ p_1 · (max{p_1^+, p_1^−})^{n−1}.

By the procedure of NEP, we know that 0 ≤ p_1^+, p_1^− < 1 (if x_0 ∈ (c, d), then obviously p_1^+ = p_1^− = 0). Thus lim_{n→∞} p_n = 0. The proof is complete.

By the greedy selection of NEP, it is easy to see that p_n = P{x_i(ω) ∉ (c, d), i = 1, 2, ..., n}.
And by Theorem 4, we know that

  lim_{n→∞} (1 − P{x_i(ω) ∉ (c, d), i = 1, 2, ..., n}) = 1,

i.e.,

  P{∃ i = 1, 2, ..., n, such that x_i(ω) ∈ (c, d)} = 1.

Obviously, the function g(x), x ∈ [c, d], is a unimodal function. So from Theorem 2 we know that

  P{ω : x_i(ω) converges to x*} = 1.        (9)

Remark: Similarly, it is easy to reach the same conclusions for high-dimensional problems, because the NEP algorithm mutates only one component of a vector (individual) per mutation operation.

3. Experiments and Analysis

In our algorithm, we set the simple crossover probability pc = 0.4 and the non-uniform mutation probability pm = 0.6 in all cases. Following Michalewicz [1], the parameter b in Eq. (2) is set to 5. The population size and the maximal evolutionary generation numbers vary across the compared algorithms. To make the comparison fair in terms of computing time, the population size of NEP is reduced proportionally according to the problem dimension, since each individual in NEP generates m offspring (m being the dimension of the function). For example, for a four-dimensional function, if the population size of CEP is 100, then the population size of NEP will be 25. It is worth pointing out that NEP actually uses less computing time, because operations such as selection take less time in a small population.
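Putting the pieces together, the following is a compact sketch of one NEP run under the parameter choices just described (pc = 0.4, b = 5). The structure follows the Section 2.4 procedure, but all helper names are ours, and step 5 (the q-opponent tournament) is simplified away, so this is an illustrative sketch rather than the authors' implementation:

```python
import random

def delta(t, y, T, b=5.0):
    # Eq. (2): a step in [0, y] that shrinks toward 0 as t approaches T.
    return y * (1.0 - random.random() ** ((1.0 - t / T) ** b))

def nep_minimize(f, dim, lb, ub, pop_size, T, pc=0.4):
    """Minimize f over [lb, ub]^dim, loosely following the Section 2.4 procedure."""
    pop = [[random.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    for t in range(T):
        # Step 3: simple two-point crossover, applied to pairs with probability pc.
        for a in range(0, pop_size - 1, 2):
            if random.random() < pc and dim > 1:
                i, j = sorted(random.sample(range(dim), 2))
                x, y = pop[a], pop[a + 1]
                x[i:j + 1], y[i:j + 1] = y[i:j + 1], x[i:j + 1]
        # Step 4: mutate each component once; keep the result only if not worse
        # (the greedy selection of Section 2.3).
        for x in pop:
            for k in range(dim):
                cand = list(x)
                if random.random() < 0.5:
                    cand[k] = x[k] + delta(t, ub - x[k], T)
                else:
                    cand[k] = x[k] - delta(t, x[k] - lb, T)
                if f(cand) <= f(x):
                    x[:] = cand
        # Step 5, the q-opponent pairwise tournament, is omitted here: the greedy
        # replacement above already keeps individuals from deteriorating.
    return min(pop, key=f)

sphere = lambda v: sum(c * c for c in v)  # benchmark f1, in a small dimension
```

On the sphere function this exhibits the behavior analyzed above: near-global jumps early in the run and fine-grained local steps near generation T.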
The programs are implemented with the C programming language.

3.1. Comparison with the Adaptive LEP [15]

Yao et al. [14] proposed an evolutionary programming algorithm (FEP) based on the Cauchy probability distribution. Lee and Yao [15] proposed an evolutionary programming algorithm (LEP) based on the Lévy probability distribution. The Cauchy probability distribution is a special case of the Lévy probability distribution, and the adaptive LEP performs at least as well as the nonadaptive LEP with a fixed α [15]. So we just compare our algorithm with the adaptive LEP.

3.1.1. Benchmark Functions and Parameter Settings

We use the same test functions as the adaptive LEP, which can be found in Table 1 or [15]. The large number of benchmarks is necessary, as Wolpert and Macready [19] have shown that under certain assumptions no single search algorithm is best on average for all problems. If the number of benchmarks were too small, it would be very difficult to draw a generalized conclusion, with the potential risk that the algorithm is biased toward the chosen problems. Among them, f1, f2, f3 are high-dimensional and unimodal functions; f4, ..., f9 are multimodal functions in which the number of local minima increases exponentially with the problem dimension, the most difficult class of problems for many optimization algorithms [14]; f10, ..., f14 are low-dimensional functions with only a few local minima. Some properties of several benchmarks are listed below [6].

• f3 is a continuous and unimodal function, with the optimum located in a steep parabolic valley with a flat bottom and nonlinear interactions between the variables, i.e., it is nonseparable [20]. These features force the search direction to change continually to reach the optimum. Experiments show that it is even more difficult than the multimodal benchmarks.

• The difficulty of f2 lies in the fact that searching along the coordinate axes only gives a poor rate of convergence, since its gradient is not oriented along the axes. It presents difficulties similar to those of f3, but its valley is narrower.

• f7 is difficult to optimize because it is nonseparable, and the search algorithm has to climb a hill to reach the next valley [20].

The maximal evolutionary generation numbers are set to be the same as for the adaptive LEP (1500 for functions f1, ..., f9; 30 for f10, f11; and 100 for f12, ..., f14). Since the population size of the adaptive LEP is 100 and each individual generates four offspring at every generation, the effective population size of the adaptive LEP is 400. Consequently, we set the population size of NEP to be an integer less than 400 divided by the problem dimension (the dimension is 30 for the high-dimensional functions): 13 for functions f1, ..., f9, 200 for f10, f11 and 100 for f12, f13, f14. A computational precision of 50 digits after the decimal point is used in NEP, so a result of 0 means that it is less than 10^−50 in NEP, and vice versa in this subsection. We did not find the computational precision requirement of [15].

3.1.2. Performance Comparison and Analysis

Comparisons between NEP and the adaptive LEP are given in Figures 3-5 and Table 2, which includes the average result over 50 independent runs. Figures 3-5 show the evolutionary behaviors of the best and average fitness versus the evolutionary generations. Due to the small population size (13) for the high-dimensional functions, the plots of the average and best fitness nearly completely overlap each other.

Table 1. Benchmark functions used by LEP & NEP

High-dimensional unimodal functions (n = 30):
  f1 = Σ_{i=1}^{n} x_i^2,  [−100, 100]^n,  f_min = 0
  f2 = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2,  [−100, 100]^n,  f_min = 0
  f3 = Σ_{i=1}^{n−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2],  [−30, 30]^n,  f_min = 0

High-dimensional multimodal functions with many local minima (n = 30):
  f4 = −Σ_{i=1}^{n} x_i sin(√|x_i|),  [−500, 500]^n,  f_min = −12569.48
  f5 = Σ_{i=1}^{n} [x_i^2 − 10 cos(2πx_i) + 10],  [−5.12, 5.12]^n,  f_min = 0
  f6 = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2)) − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e,  [−32, 32]^n,  f_min = 0
  f7 = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1,  [−600, 600]^n,  f_min = 0
  f8 = (π/n) {10 sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),  y_i = 1 + (x_i + 1)/4,  [−50, 50]^n,  f_min = 0
  f9 = 0.1 {10 sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2 [1 + 10 sin^2(3πx_{i+1})] + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4),  [−50, 50]^n,  f_min = 0

Low-dimensional functions with only a few local minima:
  f10 = (4 − 2.1x_1^2 + x_1^4/3) x_1^2 + x_1 x_2 + (−4 + 4x_2^2) x_2^2,  n = 2,  [−5, 5]^n,  f_min = −1.0316
  f11 = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)],  n = 2,  [−2, 2]^n,  f_min = 3
  f12 = −Σ_{j=1}^{5} [Σ_{i=1}^{4} (x_i − a_{ij})^2 + c_j]^{−1},  n = 4,  [0, 10]^n,  f_min = −10.1532
  f13 = −Σ_{j=1}^{7} [Σ_{i=1}^{4} (x_i − a_{ij})^2 + c_j]^{−1},  n = 4,  [0, 10]^n,  f_min = −10.40282
  f14 = −Σ_{j=1}^{10} [Σ_{i=1}^{4} (x_i − a_{ij})^2 + c_j]^{−1},  n = 4,  [0, 10]^n,  f_min = −10.53628

[Fig. 3. Best and average fitness of unimodal functions vs evolutionary generation]

Figure 4 shows strong exploring abilities for multimodal functions with high dimensions and very large search spaces: NEP finds the potential area quickly in the initial stage of the algorithm, and the results approach 0.1-0.001 after 200 generations. In the middle stage of the algorithm, a smooth behavior is observed while it patiently locates the position of the global optimum. A fast convergence speed once again appears in the later stage of the algorithm, which again illustrates the fine-tuning ability of the non-uniform mutation operator. For function f7, at about the last ten generations, the algorithm reached the minimal value. These results show that NEP has the merits of both long-jumping (initial stage) and fine-tuning (later stage) abilities.

For unimodal functions, NEP shows behaviors similar to the multimodal functions, as observed from Fig. 3, excluding function f3. Although f3 is the most difficult one, NEP still outperforms the adaptive LEP, as Table 2 shows. Table 2 shows that NEP is outperformed by the adaptive LEP only on the low-dimensional multimodal functions with a few local minima. For the high-dimensional functions, both unimodal and multimodal, NEP is better in terms of convergence ability and robustness. What is more, this encouraging result is
achieved without introducing any extra parameters and no extra computation cost comparing with CEP.Function f3is nonseparable[20]whose result is the worst one comparing with the results364X.C.Zhao,X.S.Gao and Z.C.HuGriewank Function)f6(Ackley’sFig.4.Best and averagefitness of multimodal functions vs evolutionary generationof other functions for NEP.Even though,it still outperforms the adaptive LEP in terms of the solution quality and the robustness.It is noteworthy that NEP nearlyfinds the global optima for all the high-dimension multimodal functions except for function f7whose standard deviation just reaches3digits after the decimal point.The experimental results further confirm the difficulty of function f7.The standard deviation of function f5is equal to zero,that is,all the50runs reach the optimum.For functions with only a few local minima,the performance of NEP is a little worse than the adaptive LEP.As Schnier and Yao[27]analyzed the last benchmarks are rather deceptive.But we have no need to worry about it based on two reasons.Firstly and most importantly,all the realistic problems from engineering and society are generally very complicated.Secondly,there are many other methods to deal with such problems,such as steady-state GA[1]and multiple representations EA with a modified configuration[27] which perform very well for these functions.parisons with Parallel Genetic AlgorithmsIn this section,we will make further comparison between NEP andfive parallel genetic algorithms which are GD-BLX r[6],ECO-GA model[23],CHC algorithm[24],deterministic crowding(DC)[25]and disruptive selection(DS)[26].All the results of the parallel genetic algorithms are obtained from[6]and detailed introductions about these algorithms can also be found in[6].All these parallel genetic algorithms are based on the BLX-αcrossover and。
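As a concrete check of the population-size rule used in the experimental setup (NEP's population is the largest integer not exceeding 400 divided by the problem dimension, so that its per-generation effort matches the adaptive LEP's 100 parents producing four offspring each), here is a minimal sketch; the function name is illustrative, not from the paper:

```python
# Sketch of the population-size rule from the setup: the adaptive LEP uses
# 100 parents, each producing 4 offspring per generation, i.e. an effective
# size of 400; NEP therefore uses floor(400 / dimension) individuals.
def nep_population_size(dimension, lep_effective_size=400):
    return lep_effective_size // dimension

# Dimensions from Table 1: 30 for f1-f9, 2 for f10-f11, 4 for f12-f14.
print(nep_population_size(30))  # 13  (f1-f9)
print(nep_population_size(2))   # 200 (f10, f11)
print(nep_population_size(4))   # 100 (f12-f14)
```

The printed values match the settings quoted in the text (13, 200 and 100).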
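Several of the Table 1 benchmarks can be implemented directly from their formulas. The sketch below (not the authors' code) covers f1, f6 and f7 and can be used to sanity-check a search algorithm against the known optima:

```python
import math

# Direct implementations of three Table 1 benchmarks.
def sphere(x):                       # f1: unimodal, f_min = 0 at the origin
    return sum(xi * xi for xi in x)

def ackley(x):                       # f6: multimodal, f_min = 0 at the origin
    n = len(x)
    s1 = sum(xi * xi for xi in x) / n
    s2 = sum(math.cos(2.0 * math.pi * xi) for xi in x) / n
    return -20.0 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20.0 + math.e

def griewank(x):                     # f7: multimodal, f_min = 0 at the origin
    s = sum(xi * xi for xi in x) / 4000.0
    p = math.prod(math.cos(xi / math.sqrt(i + 1)) for i, xi in enumerate(x))
    return s - p + 1.0

# Each function evaluates to 0 at its global optimum, the origin.
origin = [0.0] * 30
print(sphere(origin), griewank(origin), round(ackley(origin), 9))
```

Nonseparability is visible here: Griewank's product term couples every coordinate, which is what makes f7 hard for coordinate-wise search.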
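The long-jump-then-fine-tune behavior attributed above to the non-uniform mutation operator can be illustrated with classical Michalewicz-style non-uniform mutation. The exact operator and shape parameter b used by NEP are not given in this excerpt, so the following is an assumed, generic form rather than the paper's definition:

```python
import random

def nonuniform_mutate(x, t, T, lower, upper, b=5.0, rng=random):
    """Michalewicz-style non-uniform mutation (assumed form, not necessarily
    NEP's exact operator). Step sizes shrink as generation t approaches the
    maximal generation T: long jumps early, fine tuning late."""
    def delta(y):
        # delta(t, y) = y * (1 - r^((1 - t/T)^b)), with r uniform in [0, 1)
        return y * (1.0 - rng.random() ** ((1.0 - t / T) ** b))

    child = []
    for xk in x:
        if rng.random() < 0.5:
            child.append(xk + delta(upper - xk))  # step toward the upper bound
        else:
            child.append(xk - delta(xk - lower))  # step toward the lower bound
    return child
```

At t = 0 the expected step is a sizable fraction of the distance to the bound (the long jumps of the initial stage), while at t = T the step is exactly zero, matching the three-stage convergence behavior described for Figure 4.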