Optimal and suboptimal singleton arc consistency algorithms
- Format: pdf
- Size: 151.28 KB
- Pages: 6
Design of two-channel perfect-reconstruction filter banks based on a genetic algorithm
Xu Huanan; Liu Zhe
[Journal] Microelectronics & Computer
[Year (Volume), Issue] 2008, 25(3)
[Abstract] The perfect-reconstruction conditions for two-channel filter banks are introduced. Using the Euclidean factorization algorithm, the design of a two-channel filter bank is reduced to a single-variable nonlinear optimization problem: finding, for a low-pass filter with given characteristics, its best Euclidean complementary filter. The use of a genetic algorithm to solve this highly nonlinear optimization problem is then investigated. Finally, design examples show that applying the genetic algorithm to filter-bank design is feasible. (An illustrative sketch of this idea follows this record.)
[Pages] 5 (pp. 144-148)
[Keywords] two-channel filter bank; genetic algorithm; perfect reconstruction; factorization
[Authors] Xu Huanan; Liu Zhe
[Affiliation] School of Science, Northwestern Polytechnical University
[Language] Chinese
[CLC classification] TP31
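The abstract above reduces the filter-bank design to a one-variable nonlinear optimization solved by a genetic algorithm. As a rough illustration of that idea only, the sketch below runs a simple real-coded GA on a placeholder objective; the objective `reconstruction_error`, its search interval, and all GA parameters are assumptions, not taken from the paper.

```python
import numpy as np

def reconstruction_error(t):
    # Hypothetical stand-in for the paper's objective: the perfect-reconstruction
    # error of the filter bank as a function of the single design variable t.
    return (np.sin(3 * t) + 0.5 * t) ** 2 + 0.1 * np.cos(10 * t)

def genetic_algorithm(f, lo, hi, pop_size=40, generations=200,
                      mutation_scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, pop_size)            # initial population of candidate t values
    for _ in range(generations):
        fitness = np.array([f(t) for t in pop])
        order = np.argsort(fitness)                # minimization: lower error is fitter
        parents = pop[order[: pop_size // 2]]      # truncation selection
        # arithmetic crossover between randomly paired parents
        idx = rng.integers(0, len(parents), (pop_size, 2))
        alpha = rng.random(pop_size)
        children = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]
        # Gaussian mutation, clipped back to the search interval
        children += rng.normal(0.0, mutation_scale * (hi - lo), pop_size)
        pop = np.clip(children, lo, hi)
    best = pop[np.argmin([f(t) for t in pop])]
    return best, f(best)

t_best, err = genetic_algorithm(reconstruction_error, -2.0, 2.0)
print(f"best t = {t_best:.4f}, error = {err:.6f}")
```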
a r X i v :0808.0806v 2 [c o n d -m a t .s u p r -c o n ] 7 A u g 2008Monotonic d-wave Superconducting Gap in Optimally-Doped Bi 2Sr 1.6La 0.4CuO 6Superconductor by Laser-Based Angle-Resolved Photoemission SpectroscopyJianqiao Meng 1,Wentao Zhang 1,Guodong Liu 1,Lin Zhao 1,Haiyun Liu 1,Xiaowen Jia 1,Wei Lu 1,Xiaoli Dong 1,Guiling Wang 2,Hongbo Zhang 2,Yong Zhou 2,Yong Zhu 3,Xiaoyang Wang 3,Zhongxian Zhao 1,Zuyan Xu 2,Chuangtian Chen 3,X.J.Zhou 1,∗1National Laboratory for Superconductivity,Beijing National Laboratory for Condensed Matter Physics,Institute of Physics,Chinese Academy of Sciences,Beijing 100190,China2Key Laboratory for Optics,Beijing National Laboratory for Condensed Matter Physics,Institute of Physics,Chinese Academy of Sciences,Beijing 100190,China3Technical Institute of Physics and Chemistry,Chinese Academy of Sciences,Beijing 100190,China(Dated:April 23,2008)The momentum and temperature dependence of the superconducting gap and pseudogap in optimally-doped Bi 2Sr 1.6La 0.4CuO 6superconductor is investigated by super-high resolution laser-based angle-resolved photoemission spectroscopy.The measured energy gap in the superconducting state exhibits a standard d -wave form.Pseudogap opens above T c over a large portion of the Fermi surface with a “Fermi arc”formed near the nodal region.In the region outside of the “Fermi arc”,the pseudogap has the similar magnitude and momentum dependence as the gap in the supercon-ducting state which changes little with temperature and shows no abrupt change across T c .These observations indicate that the pseudogap and superconducting gap are closely related and favor the picture that the pseudogap is a precursor to the superconducting gap.PACS numbers:74.25.Jb,71.18.+y,74.72.Dn,79.60.-iThe high temperature cuprate superconductors are characterized by their unusual superconducting state,manifested by the anisotropic superconducting gap with predominantly d -wave symmetry[1],as well as the anomalous normal state,exemplified by the existence of a pseudogap above the superconducting transition tem-perature (T c )[2].The origin of the pseudogap and its relation with the superconducting gap are critical is-sues in understanding the mechanism of superconduc-tivity and exotic normal state properties[3,4].It has been a long-standing debate on whether the pseudogap is intimately related to the superconducting gap like a precursor of pairing[5,6,7,8]or it originates from other competing orders that has no direct bearing on superconductivity[9,10,11,12].Angle-resolved photoemission spectroscopy (ARPES),as a powerful tool to directly measure the magni-tude of the energy gap,has provided key insights on the superconducting gap and pseudogap in cuprate superconductors[13].Recently,great effort has been fo-cused on investigating their relationship but the results are split in supporting two different pictures[8,11,14,15,16,17].In one class of ARPES experiments,dis-tinct doping and temperature dependence of the en-ergy gap between the nodal and antinodal regions are reported[11,15]which are used to support “two gap”picture where the pseudogap and the superconducting gap are loosely related or independent.Additional sup-port comes from the unusual gap form measured in the superconducting state[14,16].Its strong deviation from the standard d -wave form is interpreted as composing of “two components”:a “true”d-wave superconducting gapand the remanent pseudogap that is already present in the normal state[14,16].In another class of experiments that supports 
“one-gap”picture where the pseudogap is a precursor of the superconducting gap,the gap in the superconducting state is found to be consistent with a standard d -wave form[8,17].Slight deviation in the un-derdoped regime is interpreted as due to high-harmonic pairing terms[18].In light of the controversy surrounding the relationship between the pseudogap and superconducting gap and its importance in understanding high-T c superconductivity,we report in this paper detailed momentum and temper-ature dependence of the superconducting gap and pseu-dogap in Bi 2Sr 1.6La 0.4CuO 6(La-Bi2201)superconductor by super-high resolution laser-based ARPES measure-ments.In the superconducting state we have identified an anisotropic energy gap that is consistent with a stan-dard d -wave form.This is significantly different from the previous results on a similar superconductor[14].In the normal state,we have observed pseudogap opening with a small “Fermi arc”formed near the nodal region.Outside of the ”Fermi arc”,the pseudogap in the normal state has the similar magnitude and momentum dependence as the gap in the superconducting state:detailed tem-perature dependence shows that the pseudogap evolves smoothly into the superconducting gap with no abrupt change across T c .These results point to an intimate re-lationship between the pseudogap and the superconduct-ing gap which is in favor of the “one-gap”picture that pseudogap is a precursor to the superconducting gap.The ARPES measurements are carried out on our newly-developed Vacuum Ultraviolet(VUV)laser-based2E - EF (eV)E - EF (eV)1.00.50G (0,0)(p ,0)1510152025k xFIG.1:Fermi surface of the optimally-doped La-Bi2201(T c =32K)and corresponding photoemission spectra (EDCs)on the Fermi surface at various temperatures.(a).Spectral weight as a function of two-dimensional momentum (k x ,k y )integrated over [-5meV,5meV]energy window with respect to the Fermi level E F .The measured Fermi momenta are marked by red empty circles and labeled by numbers;(b).Original EDCs along the Fermi surface measured at 15K.The symmetrized EDCs along the Fermi surface are shown in (c)for 15K,(d)for 25K and (e and f)for 40K.The numbers on panels (b-f)corresponds to the Fermi momentum numbers in (a).angle-resolved photoemission system with advantages of super-high energy resolution,high momentum resolution,high photon flux and enhanced bulk sensitivity[19].The photon energy is 6.994eV with a bandwidth of 0.26meV and the energy resolution of the electron energy analyzer (Scienta R4000)was set at 0.5meV,giving rise to an overall energy resolution of 0.56meV.The angular res-olution is ∼0.3◦,corresponding to a momentum resolu-tion ∼0.004˚A −1at the photon energy of 6.994eV.The optimally doped Bi 2Sr 2−x La x CuO 6(La-Bi2201)(x=0.4,T c ∼32K,transition width ∼2K)single crystals were grown by the traveling solvent floating zone method[20].One advantage of choosing La-Bi2201system lies in its relatively low superconducting transition temperature that is desirable in investigating the normal state behav-ior with suppressed thermal broadening of photoemission spectra.The samples are cleaved in situ in vacuum with a base pressure better than 4×10−11Torr.Fig.1(a)shows the Fermi surface mapping of the op-timally doped La-Bi2201(T c =32K)measured at 15K.The low photon energy and high photon flux have made it possible to take dense sampling of the measurements in the momentum space.The photoemission spectra (En-ergy Distribution Curves,EDCs)along the Fermi surface are plotted in Fig.1(b).The EDCs 
near the nodal re-gion show sharp peaks that are similar to those observed in Bi2212[21].When the momentum moves away from the nodal region to the (0,π)antinodal region,the EDC peaks get weaker,but peak feature remains along the en-tire Fermi surface even for the one close to the antinodal region.The EDC peak position also shifts away from the Fermi level when the momentum moves from the nodal to the antinodal region,indicating a gap opening in the superconducting state.Note that the EDCs near the antinodal region do not show any feature near 40meV that was reported in a previous measurement[14].In order to extract the energy gap,we have sym-metrized the original EDCs with respect to the Fermi level,as shown in Fig.1c for the 15K measurements,and Fig.1d and Fig.1(e-f)for 25K and 40K,respec-tively.The symmetrization procedure not only provides an intuitive way in visualizing the energy gap,but also removes the effect of Fermi cutoffin photoemission spec-tra and provides a quantitative way in extracting the gap size[22].The symmetrized EDCs have been fitted using the general phenomenological form[22];the fitted curves are overlaid in Fig.1(c-f)and the extracted gap size is plotted in Fig.2.As shown in Fig.2,the gap in the superconducting state exhibits a clear anisotropic behavior that is consis-tent with a standard d -wave form ∆=∆0cos(2Φ)(or in a more strict sense,∆=∆0|cos (k x a )−cos (k y a )|/2form as shown in the inset of Fig.2)with a maximum energy gap ∆0=15.5meV.It is also interesting to note that the gap is nearly identical for the 15K and 25K measurements for such a T c =32K superconductor.These results are significantly different from a recent measurement where the gap in the superconducting state deviates strongly from the standard d -wave form with an antinodal gap at 40meV[14].An earlier measurement[23]gave an antin-3G a p S i z e (m e V )Angle F (degrees)FIG.2:Energy gap along the Fermi surface measured at 15K (solid circles),25K (empty circles)and 40K (empty squares)on the optimally-doped La-Bi2201(T c =32K).The solid red line is fitted from the measured data at 15K which gives ∆=15.5cos(2Φ).The Φangle is defined as shown in the bottom-right inset.The upper-right inset shows the gap size as a function of |cos (k x a )−cos (k y a )|/2at 15K and 25K.The pink line represents a fitted line with ∆=15.5|cos (k x a )−cos (k y a )|/2.odal gap at 10∼12meV which is close to our present mea-surement,but it also reported strong deviation from the standard d -wave form.While the non-d -wave energy gap can be interpreted ascomposed of two components in the previous measurement[14],our present results clearly in-dicate that the gap in the superconducting state is dom-inated by a d -wave component.In the normal state above T c =32K,the Fermi sur-face measured at 40K is still gapped over a large portion except for the section near the nodal region that shows a zero gap,as seen from the symmetrized EDCs (Fig.1e-f for 40K)and the extracted pseudo-gap (40K data in Fig.2).This is consistent with the “Fermi arc”picture observed in other high temperature superconductors[6,11,24].Note that the pseudogap out-side of the “Fermi arc”region shows similar magnitude and momentum dependence as the gap in the supercon-ducting state (Fig.2).Fig.3shows detailed temperature dependence of EDCs and the associated energy gap for two representa-tive momenta on the Fermi surface.Strong temperature dependence of the EDCs is observed for the Fermi mo-mentum A (Fig.3a).At high temperatures like 100K or above,the EDCs 
show a broad hump structure near -0.2eV with no observable peak near the Fermi level.Upon cooling,the high-energy -0.2eV broad hump shows little change with temperature,while a new structure emerges near the Fermi level and develops into a sharp “quasipar-ticle”peak in the superconducting state,giving rise to a peak-dip-hump structure in EDCs.This temperatureE - EF (eV)E - EF (eV)FIG.3:(a,b).Temperature dependence of representa-tive EDCs at two Fermi momenta on the Fermi surface in optimally-doped La-Bi2201.The location of the Fermi mo-menta is indicated in the inset.Detailed temperature depen-dence of the symmetrized EDCs for the Fermi momentum A are shown in (c)and for the Fermi momentum B in (d).The dashed lines in (c)and (d)serve as a guide to the eye.evolution and peak-dip-hump structure are reminiscent to that observed in other high temperature superconduc-tors like Bi2212[25].When moving towards the antin-odal region,as for the Fermi momentum B (Fig.3b),the EDCs qualitatively show similar behavior although the temperature effect gets much weaker.One can still see a weak peak developed at low temperatures,e.g.,13K,near the Fermi level.To examine the evolution of the energy gap with tem-perature,Fig.3c and 3d show symmetrized EDCs mea-sured at different temperatures for the Fermi momenta A and B,respectively.The gap size extracted by fit-ting the symmetrized EDCs with the general formula[22]are plotted in Fig. 4.For the Fermi momentum A,as seen from Fig.3c,signature of gap opening in the su-perconducting state persists above T c =32K,remaining obvious at 50K,getting less clear at 75K,and appear to disappear around 100K and above as evidenced by the appearance of a broad peak.The gap size below 50K (Fig.4)shows little change with temperature and no abrupt change is observed across T c .The data at 75K is hard to fit to get a reliable gap size,thus not included in Fig. 
4.When the momentum moves closer to the antinodal region,as for the Fermi momentum B,simi-lar behaviors are observed,i.e.,below 50K,the gap size is nearly a constant without an abrupt change near T c .But in this case,different from the Fermi momentum A,4G a p S i z e (m e V )Temperature(K)FIG.4:Temperature dependence of the energy gap for two Fermi momenta A (empty squares)and B (empty circles)as indicated in insets of Fig.3(a)and (b),and also indicated in the up-right inset,for optimally-doped La-Bi2201.The dashed line indicates T c =32K.there is no broad peak recovered above 100K,probably indicating a higher pseudogap temperature.This is qual-itatively consistent with the transport[26]and NMR[27]measurements on the same material that give a pseudo-gap temperature between 100∼150K.From precise gap measurement,there are clear signa-tures that can distinct between “one-gap”and “two-gap”scenarios[4].In the “two-gap”picture where the pseudo-gap and superconducting gap are assumed independent,because the superconducting gap opens below T c in addi-tion to the pseudogap that already opens in the normal state and persists into the superconducting state,one would expect to observe two effects:(1).Deviation of the energy gap from a standard d -wave form in the super-conducting state with a possible break in the measured gap form[14];(2).Outside of the “Fermi arc”region,one should expect to see an increase in gap size in the superconducting state.Our observations of standard d -wave form in the superconducting state (Fig.2),similar magnitude and momentum dependence of the pseudogap and the gap in the superconducting state outside of the “Fermi arc”region (Fig.2),smooth evolution of the gap size across T c and no indication of gap size increase upon entering the superconducting state (Fig.4),are not com-patible with the expectations of the “two-gap”picture.They favor the “one-gap”picture where the pseudogap and superconducting gap are closely related and the pseu-dogap transforms into the superconducting gap across T c .Note that,although the region outside of the “Fermi arc”shows little change of the gap size with temperature (Fig.4),the EDCs exhibit strong temperature depen-dence with a “quasiparticle”peak developed in the su-perconducting state(Fig.3a and 3b)that can be related with the establishment of phase coherence[8,25].This suggests that the pseudogap region on the Fermi surface can sense the occurrence of superconductivity through acquiring phase coherence.In conclusion,from our precise measurements on the detailed momentum and temperature dependence of the energy gap in optimally doped La-Bi2201,we provide clear evidence to show that the pseudogap and super-conducting gap are intimately related.Our observations are in favor of the “one-gap”picture that the pseudogap is a precursor to the superconducting gap and supercon-ductivity is realized by establishing a phase coherence.We acknowledge helpful discussions with T.Xi-ang.This work is supported by the NSFC(10525417and 10734120),the MOST of China (973project No:2006CB601002,2006CB921302),and CAS (Projects IT-SNEM and 100-Talent).∗Corresponding author:XJZhou@[1]See,e.g.,C.C.Tsuei and J.R.Kirtley,Rev.Mod.Phys.72,969(2000).[2]T.Timusk and B.Statt,Rep.Prog.Phys.62,61(1999).[3]V.J.Emery and S.A.Kivelson,Nature (London)374,434(1995);X.G.Wen and P.A.Lee,Phys.Rev.Lett.76,503(1996);C.M.Varma,Phys.Rev.Lett.83,3538(1999);S.Chakravarty et al.,Phys.Rev.B 63,094503(2001);P.W.Anderson,Phys.Rev.Lett.96,017001(2006).[4]lis,Science 
314,1888(2006).[5]Ch.Renner et al.,Phys.Rev.Lett.80,149(1998).[6]M.R.Norman et al.,Nature (London)392,157(1998).[7]Y.Y.Wang et al.,Phys.Rev.B 73,024510(2006).[8]A.Kanigel et al.,Phys.Rev.Lett.99,157001(2007).[9]G.Deytscher,Nature (London)397,410(1999).[10]M.Le.Tacon et al.,Nature Phys.2,537(2006).[11]K.Tanaka et al.,Scinece 314,1910(2006).[12]M.C.Boyer et al.,Nature Phys.3,802(2007).[13]A.Damascelli et al.,Rev.Mod.Phys.75,473(2003);J.C.Campuzano et al.,in The Physics of Superconductors,Vol.2,edited by K.H.Bennemann and J.B.Ketterson,(Springer,2004).[14]T.Kondo et al.,Phys.Rev.Lett.98,267004(2007).[15]W.S.Lee et al.,Nature (London)450,81(2007).[16]K.Terashima et al.,Phys.Rev.Lett.99,017003(2007).[17]M.Shi et al.,arXiv:cond-mat/0708.2333.[18]J.Mesot et al.,Phys.Rev.Lett.83,840(1999).[19]G.D Liu et al.,Rev.Sci.Instruments 79,023105(2008).[20]J.Q.Meng et al.,unpublished work.[21]W.T.Zhang et al.,arXiv:cond-mat/0801.2824.[22]M.R.Norman et al.,Phys.Rev.B 57,R11093(1998).[23]J.M.Harris et al.,Phys.Rev.Lett.79,143(1997).[24]A.Kanigel et al.,Nature Phys.2447(2006).[25]A.V.Fedorov et al.,Phys.Rev.Lett.82,2179(1999);D.L.Feng et al.,Science 289,277(2000);H.Ding et al.,Phys.Rev.Lett.87,227001(2001).[26]Y.Ando et al.,Phys.Rev.Lett.93,267001(2004).[27]G.-Q.Zheng et al.,Phys.Rev.Lett.94,047006(2005).。
Optimal and Efficient Path Planning for Partially-Known EnvironmentsAnthony StentzThe Robotics Institute; Carnegie Mellon University; Pittsburgh, PA 15213AbstractThe task of planning trajectories for a mobile robot has received considerable attention in the research literature. Most of the work assumes the robot has a complete and accurate model of its environment before it begins to move; less attention has been paid to the problem of partially known environments. This situation occurs for an exploratory robot or one that must move to a goal location without the benefit of a floorplan or terrain map. Existing approaches plan an initial path based on known information and then modify the plan locally or replan the entire path as the robot discovers obstacles with its sensors, sacrificing optimality or computational efficiency respectively. This paper introduces a new algorithm, D*, capable of planning paths in unknown, partially known, and changing environments in an efficient, optimal, and complete manner.1.0IntroductionThe research literature has addressed extensively the motion planning problem for one or more robots moving through a field of obstacles to a goal. Most of this work assumes that the environment is completely known before the robot begins its traverse (see Latombe [4] for a good survey). The optimal algorithms in this literature search a state space (e.g., visibility graph, grid cells) using the dis-tance transform [2] or heuristics [8] to find the lowest cost path from the robot’s start state to the goal state. Cost can be defined to be distance travelled, energy expended, time exposed to danger, etc.Unfortunately, the robot may have partial or no information about the environment before it begins its traverse but is equipped with a sensor that is capable of measuring the environment as it moves. One approach to path planning in this scenario is to generate a “global”path using the known information and then attempt to “locally” circumvent obstacles on the route detected by the sensors [1]. If the route is completely obstructed, a new global path is planned. Lumelsky [7] initially assumes the environment to be devoid of obstacles and moves the robot directly toward the goal. If an obstacle obstructs the path, the robot moves around the perimeter until the point on the obstacle nearest the goal is found. The robot then proceeds to move directly toward the goal again. Pirzadeh [9] adopts a strategy whereby the robot wanders about the environment until it discovers the goal. The robot repeatedly moves to the adjacent location with lowest cost and increments the cost of a location each time it visits it to penalize later traverses of the same space. Korf [3] uses initial map information to estimate the cost to the goal for each state and efficiently updates it with backtracking costs as the robot moves through the environment.While these approaches are complete, they are also suboptimal in the sense that they do not generate the lowest cost path given the sensor information as it is acquired and assuming all known, a priori information is correct. It is possible to generate optimal behavior by computing an optimal path from the known map information, moving the robot along the path until either it reaches the goal or its sensors detect a discrepancy between the map and the environment, updating the map, and then replanning a new optimal path from the robot’s current location to the goal. 
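The paragraph above describes the brute-force strategy: plan on the known map, move, and replan whenever the sensor contradicts the map. The sketch below is only a toy illustration of that loop on a 4-connected grid, with Dijkstra standing in for any optimal planner; the grid, the one-cell sensing range, and all helper names are our assumptions, not part of the paper.

```python
import heapq

def dijkstra(known, start, goal):
    # Optimal path on the currently known map (1 = obstacle, 0 = free).
    rows, cols = len(known), len(known[0])
    dist, prev, pq = {start: 0}, {}, [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            break
        if d > dist.get(u, float("inf")):
            continue
        r, c = u
        for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= v[0] < rows and 0 <= v[1] < cols and known[v[0]][v[1]] == 0:
                nd = d + 1
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
    if goal not in prev and goal != start:
        return None                              # no path under current knowledge
    path, u = [goal], goal
    while u != start:
        u = prev[u]
        path.append(u)
    return path[::-1]

def sense_and_update(world, known, pos):
    # Reveal the true occupancy of the 4 neighbours; return True if the map changed.
    changed = False
    r, c = pos
    for v in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
        if 0 <= v[0] < len(world) and 0 <= v[1] < len(world[0]):
            if known[v[0]][v[1]] != world[v[0]][v[1]]:
                known[v[0]][v[1]] = world[v[0]][v[1]]
                changed = True
    return changed

def replan_to_goal(world, known, start, goal):
    pos, path = start, dijkstra(known, start, goal)
    while path and pos != goal:
        if sense_and_update(world, known, pos):
            path = dijkstra(known, pos, goal)    # map changed: replan from scratch
            continue
        pos = path[path.index(pos) + 1]          # otherwise take one step along the plan
    return pos == goal

world = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 0]]
known = [[0] * 4 for _ in range(4)]              # start with an optimistic (empty) map
print(replan_to_goal(world, known, (0, 0), (3, 3)))
```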
Although this brute-force, replanning approach is optimal, it can be grossly inefficient, particularly in expansive environments where the goal is far away and little map information exists. Zelinsky [15] increases efficiency by using a quad-tree [13] to represent free and obstacle space, thus reducing the number of states to search in the planning space. For natural terrain, however, the map can encode robot traversability at each location ranging over a continuum, thus rendering quad-trees inappropriate or suboptimal.This paper presents a new algorithm for generating optimal paths for a robot operating with a sensor and a map of the environment. The map can be complete, empty, or contain partial information about the environment. ForIn Proceedings IEEE International Conference on Robotics and Automation, May 1994.regions of the environment that are unknown, the map may contain approximate information, stochastic models for occupancy, or even a heuristic estimates. The algorithm is functionally equivalent to the brute-force,optimal replanner, but it is far more efficient.The algorithm is formulated in terms of an optimal find-path problem within a directed graph, where the arcs are labelled with cost values that can range over a continuum. The robot’s sensor is able to measure arc costs in the vicinity of the robot, and the known and estimated arc values comprise the map. Thus, the algorithm can be used for any planning representation, including visibility graphs [5] and grid cell structures. The paper describes the algorithm, illustrates its operation, presents informal proofs of its soundness, optimality, and completeness, and then concludes with an empirical comparison of the algorithm to the optimal replanner.2.0The D* AlgorithmThe name of the algorithm, D*, was chosen because it resembles A* [8], except that it is dynamic in the sense that arc cost parameters can change during the problem-solving process. Provided that robot motion is properly coupled to the algorithm, D* generates optimal trajecto-ries. This section begins with the definitions and notation used in the algorithm, presents the D* algorithm, and closes with an illustration of its operation.2.1DefinitionsThe objective of a path planner is to move the robot from some location in the world to a goal location, such that it avoids all obstacles and minimizes a positive cost metric (e.g., length of the traverse). The problem space can be formulated as a set of states denoting robot loca-tions connected by directional arcs , each of which has an associated cost. The robot starts at a particular state and moves across arcs (incurring the cost of traversal) to other states until it reaches the goal state, denoted by . Every state except has a backpointer to a next state denoted by . D* uses backpointers to represent paths to the goal. The cost of traversing an arc from state to state is a positive number given by the arc cost function . If does not have an arc to , then is undefined. Two states and are neighbors in the space if or is defined.Like A*, D* maintains an list of states. The list is used to propagate information about changes to the arc cost function and to calculate path costs to states in the space. Every state has an associated tag ,such that if has never been on the list, if is currently on the list, andG X G Y b X ()Y =Y X c X Y ,()Y X c X Y ,()X Y c X Y ,()c Y X ,()OPEN OPEN X t X ()t X ()NEW =X OPEN t X ()OPEN =XOPEN if is no longer on the list. 
For each state , D* maintains an estimate of the sum of the arc costs from to given by the path cost function . Given the proper conditions, this estimate is equivalent to the optimal (minimal) cost from state to , given by the implicit function . For each state on the list (i.e.,), the key function,, is defined to be equal to the minimum of before modification and all values assumed by since was placed on the list. The key function classifies a state on the list into one of two types:a state if , and a state if . D* uses states on the listto propagate information about path cost increases (e.g.,due to an increased arc cost) and states to propagate information about path cost reductions (e.g.,due to a reduced arc cost or new path to the goal). The propagation takes place through the repeated removal of states from the list. Each time a state is removed from the list, it is expanded to pass cost changes to its neighbors. These neighbors are in turn placed on the list to continue the process.States on the list are sorted by their key function value. The parameter is defined to be for all such that . The parameter represents an important threshold in D*: path costs less than or equal to are optimal, and those greater than may not be optimal. The parameter is defined to be equal to prior to most recent removal of a state from the list. If no states have been removed,is undefined.An ordering of states denoted by is defined to be a sequence if for all such that and for all such that . Thus, a sequence defines a path of backpointers from to . A sequence is defined to be monotonic if( a n d) o r ( and ) for all such that . D* constructs and maintains a monotonic sequence , representing decreasing current or lower-bounded path costs, for each state that is or was on the list. Given a sequence of states , state is an ancestor of state if and a descendant of if .For all two-state functions involving the goal state, the following shorthand notation is used:.Likewise, for sequences the notation is used.The notation is used to refer to a function independent of its domain.t X ()CLOSED =X OPEN X X G h G X ,()X G o G X ,()X OPEN t X ()OPEN =k G X ,()h G X ,()h G X ,()X OPEN X OPEN RAISE k G X ,()h G X ,()<LOWER k G X ,()h G X ,()=RAISE OPEN LOWER OPEN OPEN OPEN k min min k X ()()X t X ()OPEN =k min k min k min k old k min OPEN k old X 1X N {,}b X i 1+()X i =i 1i ≤N <X i X j ≠i j (,)1i ≤j <N ≤X N X 1X 1X N {,}t X i ()CLOSED =h G X i ,()h G X i 1+,()<t X i ()OPEN =k G X i ,()h G X i 1+,()<i 1i ≤N <G X {,}X OPEN X 1X N {,}X i X j 1i j N ≤<≤X j 1j i N ≤<≤f X ()f G X ,()≡X {}G X {,}≡f °()2.2Algorithm DescriptionThe D* algorithm consists primarily of two functions: and .is used to compute optimal path costs to the goal, and is used to change the arc cost function and enter affected states on the list. Initially, is set to for all states,is set to zero, and is placed on the list. The first function,, is repeatedly called until the robot’s state,, is removed from the list (i.e.,) or a value of -1 is returned, at which point either the sequence has been computed or does not exist respectively. The robot then proceeds to follow the backpointers in the sequence until it eitherreaches the goal or discovers an error in the arc cost func-tion (e.g., due to a detected obstacle). The second function,, is immediately called to cor-rect and place affected states on the list. Let be the robot’s state at which it discovers an error in .By calling until it returns ,the cost changes are propagated to state such that . 
At this point, a possibly new sequence has been constructed, and the robot continues to follow the backpointers in the sequence toward the goal.T h e a l g o r i t h m s f o r a n d are presented below. The embedded routines are , which returns the state on the list with minimum value ( if the list is empty);, which returns for the list (-1 if the list is empty);, which deletes state from the list and sets ; and , w h i c h c o m p u t e s i f , if , and i f , s e t s and , and places or re-positions state on the list sorted by .In function at lines L1 through L3,the state with the lowest value is removed from the list. If is a state (i.e.,), its path cost is optimal since is equal to the old . At lines L8 through L13, each neighbor of is examined to see if its path cost can be lowered. Additionally,neighbor states that are receive an initial path cost value, and cost changes are propagated to each neighbor that has a backpointer to , regardless of whether the new cost is greater than or less than the old. Since these states are descendants of , any change to the path cost of affects their path costs as well. The backpointer of is redirected (if needed) so that the monotonic sequence is constructed. All neighbors that receive a new pathPROCESS STATE –MODIFY COST –PROCESS STATE –MODIFY COST –c °()OPEN t °()NEW h G ()G OPEN PROCESS STATE –X OPEN t X ()CLOSED =X {}X {}c °()MODIFY COST –c °()OPEN Y c °()PROCESS STATE –k min h Y ()≥Y h Y ()o Y ()=Y {}PROCESS STATE –MODIFY COST –MIN STATE –OPEN k °()NULL GET KMIN –k min OPEN DELETE X ()X OPEN t X ()CLOSED =INSERT X h new ,()k X ()h new =t X ()NEW =k X ()min k X ()h new ,()=t X ()OPEN =k X ()min h X ()h new ,()=t X ()CLOSED =h X ()h new =t X ()OPEN =X OPEN k °()PROCESS STATE –X k °()OPEN X LOWER k X ()h X ()=h X ()k min Y X NEW Y X X X Y Y {}cost are placed on the list, so that they will propagate the cost changes to their neighbors.If is a state, its path cost may not be optimal.Before propagates cost changes to its neighbors, its optimal neighbors are examined at lines L4 through L7 to see if can be reduced. At lines L15 through L18, cost changes are propagated to states and immediate descendants in the same way as for states. If is able to lower the path cost of a state that is not an immediate descendant (lines L20 and L21), is placed back on the list for future expansion. It is shown in the next section that this action is required to avoid creating a closed loop in the backpointers. If the path cost of is able to be reduced by a suboptimal neighbor (lines L23 through L25), the neighbor is placed back on the list. 
Thus, the update is “postponed” until the neighbor has an optimal path cost.Function: PROCESS-STATE ()L1L2if then return L3;L4if then L5for each neighbor of :L6if and then L7;L8if then L9for each neighbor of :L10if or L11( and ) or L12( and ) then L13;L14else L15for each neighbor of :L16if or L17( and ) then L18;L19else L20if and then L21L22else L23if and and L24 and then L25L26return OPEN X RAISE X h X ()NEW LOWER X X OPEN X OPEN X MIN STATE ()–=X NULL =1–k old GET KMIN –()=DELETE X ()k old h X ()<Y X h Y ()k old ≤h X ()h Y ()c Y X ,()+>b X ()Y =h X ()h Y ()c Y X ,()+=k old h X ()=Y X t Y ()NEW =b Y ()X =h Y ()h X ()c X Y ,()+≠b Y ()X ≠h Y ()h X ()c X Y ,()+>b Y ()X =INSERT Y h X ()c X Y ,()+,()Y X t Y ()NEW =b Y ()X =h Y ()h X ()c X Y ,()+≠b Y ()X =INSERT Y h X ()c X Y ,()+,()b Y ()X ≠h Y ()h X ()c X Y ,()+>INSERT X h X (),()b Y ()X ≠h X ()h Y ()c Y X ,()+>t Y ()CLOSED =h Y ()k old >INSERT Y h Y (),()GET KMIN ()–In function , the arc cost function is updated with the changed value. Since the path cost for state will change, is placed on the list. When is expanded via , it computes a new and places on the list.Additional state expansions propagate the cost to the descendants of .Function: MODIFY-COST (X, Y, cval)L1L2if then L3return 2.3Illustration of OperationThe role of and states is central to the operation of the algorithm. The states (i.e.,) propagate cost increases, and the states (i.e.,) propagate cost reductions. When the cost of traversing an arc is increased, an affected neighbor state is placed on the list, and the cost increase is propagated via states through all state sequences containing the arc. As the states come in contact with neighboring states of lower cost, these states are placed on the list, and they sub-sequently decrease the cost of previously raised states wherever possible. If the cost of traversing an arc isdecreased, the reduction is propagated via states through all state sequences containing the arc, as well as neighboring states whose cost can also be lowered.Figure 1: Backpointers Based on Initial PropagationFigure 1 through Figure 3 illustrate the operation of the algorithm for a “potential well” path planning problem.MODIFY COST –Y X OPEN X PROCESS STATE –h Y ()h X ()c X Y ,()+=Y OPEN Y c X Y ,()cval=t X ()CLOSED =INSERT X h X (),()GET KMIN ()–RAISE LOWER RAISE k X ()h X ()<LOWER k X ()h X ()=OPEN RAISE RAISE LOWER OPEN LOWER GThe planning space consists of a 50 x 50 grid of cells .Each cell represents a state and is connected to its eight neighbors via bidirectional arcs. The arc cost values are small for the cells and prohibitively large for the cells.1 The robot is point-sized and is equipped with a contact sensor. Figure 1 shows the results of an optimal path calculation from the goal to all states in the planning space. The two grey obstacles are stored in the map, but the black obstacle is not. The arrows depict the backpointer function; thus, an optimal path to the goal for any state can be obtained by tracing the arrows from the state to the goal. Note that the arrows deflect around the grey, known obstacles but pass through the black,unknown obstacle.Figure 2: LOWER States Sweep into WellThe robot starts at the center of the left wall and follows the backpointers toward the goal. 
When it reaches the unknown obstacle, it detects a discrepancy between the map and world, updates the map, colors the cell light grey, and enters the obstacle cell on the list.Backpointers are redirected to pull the robot up along the unknown obstacle and then back down. Figure 2illustrates the information propagation after the robot has discovered the well is sealed. The robot’s path is shown in black and the states on the list in grey.states move out of the well transmitting path cost increases. These states activate states around the1. The arc cost value of OBSTACLE must be chosen to be greater than the longest possible sequence of EMPTY cells so that a simple threshold can be used on the path cost to determine if the optimal path to the goal must pass through an obstacle.EMPTY OBSTACLE GLOWER statesRAISE statesLOWER statesOPEN OPEN RAISE LOWER“lip” of the well which sweep around the upper and lower obstacles and redirect the backpointers out of the well.Figure 3: Final Backpointer ConfigurationThis process is complete when the states reach the robot’s cell, at which point the robot moves around the lower obstacle to the goal (Figure 3). Note that after the traverse, the backpointers are only partially updated.Backpointers within the well point outward, but those in the left half of the planning space still point into the well.All states have a path to the goal, but optimal paths are computed to a limited number of states. This effect illustrates the efficiency of D*. The backpointer updates needed to guarantee an optimal path for the robot are limited to the vicinity of the obstacle.Figure 4 illustrates path planning in fractally generated terrain. The environment is 450 x 450 cells. Grey regions are fives times more difficult to traverse than white regions, and the black regions are untraversible. The black curve shows the robot’s path from the lower left corner to the upper right given a complete, a priori map of the environment. This path is referred to as omniscient optimal . Figure 5 shows path planning in the same terrain with an optimistic map (all white). The robot is equipped with a circular field of view with a 20-cell radius. The map is updated with sensor information as the robot moves and the discrepancies are entered on the list for processing by D*. Due to the lack of a priori map information, the robot drives below the large obstruction in the center and wanders into a deadend before backtracking around the last obstacle to the goal. The resultant path is roughly twice the cost of omniscientGLOWER OPEN optimal. This path is optimal, however, given the information the robot had when it acquired it.Figure 4: Path Planning with a Complete MapFigure 5: Path Planning with an Optimistic MapFigure 6 illustrates the same problem using coarse map information, created by averaging the arc costs in each square region. This map information is accurate enough to steer the robot to the correct side of the central obstruction, and the resultant path is only 6% greater incost than omniscient optimal.Figure 6: Path Planning with a Coarse-Resolution Map3.0Soundness, Optimality, andCompletenessAfter all states have been initialized to and has been entered onto the list, the function is repeatedly invoked to construct state sequences. The function is invoked to make changes to and to seed these changes on the list. 
D* exhibits the following properties:Property 1: If , then the sequence is constructed and is monotonic.Property 2: When the value returned by e q u a l s o r e x c e e d s , t h e n .Property 3: If a path from to exists, and the search space contains a finite number of states, will be c o n s t r u c t e d a f t e r a fi n i t e n u m b e r o f c a l l s t o . I f a p a t h d o e s n o t e x i s t , will return -1 with .Property 1 is a soundness property: once a state has been visited, a finite sequence of backpointers to the goal has been constructed. Property 2 is an optimality property.It defines the conditions under which the chain of backpointers to the goal is optimal. Property 3 is a completeness property: if a path from to exists, it will be constructed. If no path exists, it will be reported ina finite amount of time. All three properties holdX t X ()NEW =G OPEN PROCESS STATE –MODIFY COST –c °()OPEN t X ()NEW ≠X {}k min PROCESS STATE –h X ()h X ()o X ()=X G X {}PROCESS STATE –PROCESS STATE –t X ()NEW =X G regardless of the pattern of access for functions and .For brevity, the proofs for the above three properties are informal. See Stentz [14] for the detailed, formal p r o o f s. C o n s i d e r P r o p e r t y 1 fi r s t. W h e n e v e r visits a state, it assigns to point to an existing state sequence and sets to preserve monotonicity. Monotonic sequences are subsequently manipulated by modifying the functions ,,, and . When a state is placed on the list (i.e.,), is set to preserve monotonicity for states with backpointers to .Likewise, when a state is removed from the list, the values of its neighbors are increased if needed to preserve monotonicity. The backpointer of a state ,, can only be reassigned to if and if the sequence contains no states. Since contains no states, the value of every state in the sequence must be less than . Thus, cannot be an ancestor of , and a closed loop in the backpointers cannot be created.Therefore, once a state has been visited, the sequence has been constructed. Subsequent modifications ensure that a sequence still exists.Consider Property 2. Each time a state is inserted on or removed from the list, D* modifies values so that for each pair of states such that is and is . Thus, when is chosen for expansion (i.e.,), the neighbors of cannot reduce below , nor can the neighbors, since their values must be greater than . States placed on the list during the expansion of must have values greater than ; thus, increases or remains the same with each invocation of . If states with values less than or equal to are optimal, then states with values between (inclusively) and are optimal, since no states on the list can reduce their path costs. Thus, states with values less than or equal to are optimal. By induction,constructs optimal sequences to all reachable states. If the a r c c o s t i s m o d i fi e d , t h e f u n c t i o n places on the list, after which is less than or equal to . Since no state with can be affected by the modified arc cost, the property still holds.Consider Property 3. Each time a state is expanded via , it places its neighbors on the list. Thus, if the sequence exists, it will be constructed unless a state in the sequence,, is never selected for expansion. 
But once a state has been placedMODIFY COST –PROCESS STATE –PROCESS STATE –NEW b °()h °()t °()h °()k °()b °()X OPEN t X ()OPEN =k X ()h X ()X X h °()X b X ()Y h Y ()h X ()<Y {}RAISE Y {}RAISE h °()h Y ()X Y X X {}X {}OPEN h °()k X ()h Y ()c Y X ,()+≤X Y ,()X OPEN Y CLOSED X k min k X ()=CLOSED X h X ()k min OPEN h °()k min OPEN X k °()k X ()k min PROCESS STATE –h °()k old h °()k old k min OPEN h °()k min PROCESS STATE –c X Y ,()MODIFY COST –X OPEN k min h X ()Y h Y ()h X ()≤PROCESS STATE –NEW OPEN X {}Yon the list, its value cannot be increased.Thus, due to the monotonicity of , the state will eventually be selected for expansion.4.0Experimental ResultsD* was compared to the optimal replanner to verify its optimality and to determine its performance improve-ment. The optimal replanner initially plans a single path from the goal to the start state. The robot proceeds to fol-low the path until its sensor detects an error in the map.The robot updates the map, plans a new path from the goal to its current location, and repeats until the goal isreached. An optimistic heuristic function is used to focus the search, such that equals the “straight-line”cost of the path from to the robot’s location assuming all cells in the path are . The replanner repeatedly expands states on the list with the minimum value. Since is a lower bound on the actual cost from to the robot for all , the replanner is optimal [8].The two algorithms were compared on planning problems of varying size. Each environment was square,consisting of a start state in the center of the left wall and a goal state in center of the right wall. Each environment consisted of a mix of map obstacles (i.e., available to robot before traverse) and unknown obstacles measurable b y t h e r o b o t ’s s e n s o r. T h e s e n s o r u s e d w a s omnidirectional with a 10-cell radial field of view. Figure 7 shows an environment model with 100,000 states. The map obstacles are shown in grey and the unknown obstacles in black.Table 1 shows the results of the comparison for environments of size 1000 through 1,000,000 cells. The runtimes in CPU time for a Sun Microsystems SPARC-10processor are listed along with the speed-up factor of D*over the optimal replanner. For both algorithms, the reported runtime is the total CPU time for all replanning needed to move the robot from the start state to the goal state, after the initial path has been planned. For each environment size, the two algorithms were compared on five randomly-generated environments, and the runtimes were averaged. The speed-up factors for each environment size were computed by averaging the speed-up factors for the five trials.The runtime for each algorithm is highly dependent on the complexity of the environment, including the number,size, and placement of the obstacles, and the ratio of map to unknown obstacles. The results indicate that as the environment increases in size, the performance of D*over the optimal replanner increases rapidly. The intuitionOPEN k °()k min Y gˆX ()gˆX ()X EMPTY OPEN gˆX ()h X ()+g ˆX ()X X for this result is that D* replans locally when it detects an unknown obstacle, but the optimal replanner generates a new global trajectory. 
As the environment increases in size, the local trajectories remain constant in complexity,but the global trajectories increase in complexity.Figure 7: Typical Environment for Algorithm ComparisonTable 1: Comparison of D* to Optimal Replanner5.0Conclusions5.1SummaryThis paper presents D*, a provably optimal and effi-cient path planning algorithm for sensor-equipped robots.The algorithm can handle the full spectrum of a priori map information, ranging from complete and accurate map information to the absence of map information. D* is a very general algorithm and can be applied to problems in artificial intelligence other than robot motion planning. In its most general form, D* can handle any path cost opti-mization problem where the cost parameters change dur-ing the search for the solution. D* is most efficient when these changes are detected near the current starting point in the search space, which is the case with a robot equipped with an on-board sensor.Algorithm 1,00010,000100,0001,000,000Replanner 427 msec 14.45 sec 10.86 min 50.82 min D*261 msec 1.69 sec 10.93 sec 16.83 sec Speed-Up1.6710.1456.30229.30See Stentz [14] for an extensive description of related applications for D*, including planning with robot shape,field of view considerations, dead-reckoning error,changing environments, occupancy maps, potential fields,natural terrain, multiple goals, and multiple robots.5.2Future WorkFor unknown or partially-known terrains, recentresearch has addressed the exploration and map building problems [6][9][10][11][15] in addition to the path finding problem. Using a strategy of raising costs for previously visited states, D* can be extended to support exploration tasks.Quad trees have limited use in environments with cost values ranging over a continuum, unless the environment includes large regions with constant traversability costs.Future work will incorporate the quad tree representation for these environments as well as those with binary cost values (e.g., and ) in order to reduce memory requirements [15].Work is underway to integrate D* with an off-road obstacle avoidance system [12] on an outdoor mobile robot. To date, the combined system has demonstrated the ability to find the goal after driving several hundred meters in a cluttered environment with no initial map.AcknowledgmentsThis research was sponsored by ARPA, under contracts “Perception for Outdoor Navigation” (contract number DACA76-89-C-0014, monitored by the US Army TEC)and “Unmanned Ground Vehicle System” (contract num-ber DAAE07-90-C-R059, monitored by TACOM).The author thanks Alonzo Kelly and Paul Keller for graphics software and ideas for generating fractal terrain.References[1] Goto, Y ., Stentz, A., “Mobile Robot Navigation: The CMU System,” IEEE Expert, V ol. 2, No. 4, Winter, 1987.[2] Jarvis, R. A., “Collision-Free Trajectory Planning Using the Distance Transforms,” Mechanical Engineering Trans. of the Institution of Engineers, Australia, V ol. ME10, No. 3, Septem-ber, 1985.[3] Korf, R. E., “Real-Time Heuristic Search: First Results,”Proc. Sixth National Conference on Artificial Intelligence, July,1987.[4] Latombe, J.-C., “Robot Motion Planning”, Kluwer Aca-demic Publishers, 1991.[5] Lozano-Perez, T., “Spatial Planning: A Configuration Space Approach”, IEEE Transactions on Computers, V ol. C-32, No. 2,February, 1983.OBSTACLE EMPTY [6] Lumelsky, V . J., Mukhopadhyay, S., Sun, K., “Dynamic Path Planning in Sensor-Based Terrain Acquisition”, IEEE Transac-tions on Robotics and Automation, V ol. 6, No. 
4, August, 1990.[7] Lumelsky, V . J., Stepanov, A. A., “Dynamic Path Planning for a Mobile Automaton with Limited Information on the Envi-ronment”, IEEE Transactions on Automatic Control, V ol. AC-31, No. 11, November, 1986.[8] Nilsson, N. J., “Principles of Artificial Intelligence”, Tioga Publishing Company, 1980.[9] Pirzadeh, A., Snyder, W., “A Unified Solution to Coverage and Search in Explored and Unexplored Terrains Using Indirect Control”, Proc. of the IEEE International Conference on Robot-ics and Automation, May, 1990.[10] Rao, N. S. V ., “An Algorithmic Framework for Navigation in Unknown Terrains”, IEEE Computer, June, 1989.[11] Rao, N.S.V ., Stoltzfus, N., Iyengar, S. S., “A ‘Retraction’Method for Learned Navigation in Unknown Terrains for a Cir-cular Robot,” IEEE Transactions on Robotics and Automation,V ol. 7, No. 5, October, 1991.[12] Rosenblatt, J. K., Langer, D., Hebert, M., “An Integrated System for Autonomous Off-Road Navigation,” Proc. of the IEEE International Conference on Robotics and Automation,May, 1994.[13] Samet, H., “An Overview of Quadtrees, Octrees andRelated Hierarchical Data Structures,” in NATO ASI Series, V ol.F40, Theoretical Foundations of Computer Graphics, Berlin:Springer-Verlag, 1988.[14] Stentz, A., “Optimal and Efficient Path Planning for Unknown and Dynamic Environments,” Carnegie MellonRobotics Institute Technical Report CMU-RI-TR-93-20, August,1993.[15] Zelinsky, A., “A Mobile Robot Exploration Algorithm”,IEEE Transactions on Robotics and Automation, V ol. 8, No. 6,December, 1992.。
Anisotropic piecewise Gaussian filtering for infrared images (in English)
Gao Yang; Li Yanjun; Zhang Ke
[Journal] Acta Photonica Sinica
[Year (Volume), Issue] 2007, 36(6)
[Abstract] An anisotropic piecewise Gaussian filter is proposed. Besides accounting for the scale and orientation characteristics of the image, the method exploits the separability of edge regions and applies piecewise filtering to further address edge preservation. The base scale of the image is determined by a noise-variance estimation model proposed in the paper, and the orientation angle is obtained from the Hermite transform; by then fixing the axis ratio and an adaptive scale of the Gaussian model, filtering of the selected region is turned into filtering with a piecewise Gaussian model, which strengthens the reliability of the computation. Simulation results show that the anisotropic piecewise Gaussian filter outperforms other commonly used filtering algorithms in the combined performance of noise removal and edge preservation. (A sketch of an oriented Gaussian kernel follows this record.)
[Pages] 5 (pp. 1167-1171)
[Keywords] anisotropic piecewise Gaussian filtering; noise suppression; edge preservation; noise-variance estimation
[Authors] Gao Yang; Li Yanjun; Zhang Ke
[Affiliation] School of Astronautics, Northwestern Polytechnical University
[Language] Chinese
[CLC classification] TP391
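The abstract describes filtering with a Gaussian whose scale, orientation, and axis ratio are adapted to the image. The sketch below only shows how such an oriented anisotropic Gaussian kernel can be built and applied; the choice of the parameters and the piecewise, edge-aware part of the paper's method are not reproduced, and the values used here are arbitrary.

```python
import numpy as np
from scipy.ndimage import convolve

def anisotropic_gaussian_kernel(sigma, ratio, theta, radius=None):
    # Oriented 2-D Gaussian: sigma along the main axis, sigma*ratio across it,
    # rotated by theta (radians). The kernel radius defaults to 3*sigma.
    if radius is None:
        radius = int(np.ceil(3 * sigma))
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # coordinates in the rotated frame
    yr = -x * np.sin(theta) + y * np.cos(theta)
    k = np.exp(-0.5 * ((xr / sigma) ** 2 + (yr / (sigma * ratio)) ** 2))
    return k / k.sum()

# Hypothetical usage on a noisy image, with parameters chosen by hand here
# (the paper estimates them from the noise variance and the Hermite transform).
rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, (64, 64))
kernel = anisotropic_gaussian_kernel(sigma=2.0, ratio=0.3, theta=np.pi / 4)
smoothed = convolve(image, kernel, mode="nearest")
```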
Giving materials freedom
Emma
[Journal] Design
[Year (Volume), Issue] 2013(000)007
[Pages] 8 (pp. 90-97)
[Author] Emma
[Affiliation] not given
[Language] Chinese
Design of multifunctional diffractive phase plates for beam shaping (in English)
Feng Di; Yan Yingbai; Tan Qiaofeng; Liu Haitao
[Journal] Acta Photonica Sinica
[Year (Volume), Issue] 2003, 32(8)
[Abstract] A hybrid random-search and simulated-annealing method is proposed for designing multifunctional diffractive phase plates. By using several merit functions, this method can design a diffractive phase plate that simultaneously produces the required intensity distributions on different observation planes. A multifunctional diffractive phase plate designed for Gaussian beam shaping can produce, on the focal plane, an intensity distribution with high energy efficiency and small sidelobes, while obtaining an intensity distribution with good top uniformity on a defocused plane. The method increases the design flexibility of diffractive elements, offers a new way to make a single element produce the required intensity distributions at different positions along the optical path, and has broad application potential in laser weapons, laser machining, and laser surgery. (A sketch of such a hybrid optimization follows this record.)
[Pages] 4 (pp. 997-1000)
[Keywords] diffractive phase plate; hybrid algorithm; merit function; beam shaping
[Authors] Feng Di; Yan Yingbai; Tan Qiaofeng; Liu Haitao
[Affiliation] Department of Precision Instruments, Tsinghua University
[Language] Chinese
[CLC classification] O437.4; O438
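As a very rough illustration of a random-search/simulated-annealing hybrid driven by several merit functions, the sketch below perturbs a discrete phase plate so that the FFT-propagated field approaches target intensity patterns on two observation planes. The propagation model (Fraunhofer far field via FFT, a quadratic phase standing in for defocus), the two merit functions, and every parameter are our assumptions, not the paper's design procedure.

```python
import numpy as np

N = 64
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X**2 + Y**2) / 0.2)                        # incident Gaussian beam amplitude

target_focus = (np.abs(X) < 0.15) & (np.abs(Y) < 0.15)     # desired flat-top at focus
target_defocus = (X**2 + Y**2) < 0.25**2                   # desired spot on the defocused plane
defocus_phase = np.exp(1j * 8.0 * (X**2 + Y**2))           # quadratic phase modelling defocus

def intensity(field):
    far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    inten = np.abs(far) ** 2
    return inten / inten.sum()

def merit(phase):
    # Weighted sum of two merit functions, one per observation plane.
    u = beam * np.exp(1j * phase)
    err1 = np.sum((intensity(u) - target_focus / target_focus.sum()) ** 2)
    err2 = np.sum((intensity(u * defocus_phase) - target_defocus / target_defocus.sum()) ** 2)
    return err1 + err2

phase = rng.uniform(0, 2 * np.pi, (N, N))
cost, T = merit(phase), 1.0
for step in range(2000):                                   # random search with SA acceptance
    trial = phase.copy()
    i, j = rng.integers(0, N, 2)
    trial[i, j] = rng.uniform(0, 2 * np.pi)                # perturb one randomly chosen pixel
    c = merit(trial)
    if c < cost or rng.random() < np.exp((cost - c) / T):  # accept worse moves with prob. exp(-Δ/T)
        phase, cost = trial, c
    T *= 0.998                                             # geometric cooling schedule
print("final combined merit:", cost)
```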
Chemical Industry and Engineering Progress, 2018, Vol. 37, No. 1, p. 343
Model predictive control of NOx emission under variable load conditions
Tang Zhenhao, Zhang Haiyang, Cao Shengxian
(School of Automation Engineering, Northeast Electric Power University, Jilin 132012, Jilin, China)
Abstract: NOx is one of the main pollutants emitted by coal-fired power plants, and reducing NOx emission is a major problem these plants face. To minimize NOx emission under variable-load operating conditions, this paper proposes a nonlinear model predictive control algorithm based on the least-squares support vector machine (LSSVM). A boiler-load prediction model and a NOx-emission prediction model are built from actual historical data of a utility boiler, and their parameters are tuned by cross-validation to obtain high-accuracy models. On this basis, a boiler combustion optimization model is constructed, taking minimum NOx emission as the objective and the boiler load as a constraint. The optimization model is solved with a differential evolution algorithm to obtain optimal set-points of the control parameters. To verify the effectiveness of the proposed algorithm, experiments were carried out on real operating data. The results show that the method can effectively reduce NOx emission under variable-load conditions and, at no extra retrofit cost to the plant, provides an effective means of control with clear application prospects. (A sketch of the DE-based set-point optimization follows this record.)
Keywords: coal combustion; optimization; nitrogen oxides; differential evolution algorithm; least-squares support vector machine; model predictive control
CLC number: TK224   Document code: A   Article ID: 1000-6613(2018)01-0343-07   DOI: 10.16085/j.issn.1000-6613.2017-0716

The authors' original English abstract follows.
Model predictive control of NOx emission under variable load condition
TANG Zhenhao, ZHANG Haiyang, CAO Shengxian (School of Automation Engineering, Northeast Electric Power University, Jilin 132012, Jilin, China)
Abstract: NOx is one of the main pollutants of coal-fired power plant emissions, and reducing NOx emission is a main problem for these plants today. A nonlinear model predictive control method based on the least-squares support vector machine (LSSVM) is proposed in this paper to solve the boiler NOx-emission minimization problem under varying load in coal-fired power plants. The boiler-load model and the NOx-emission model are constructed from practical data, and the model parameters are optimized by cross-validation to obtain accurate models. Based on these models, a boiler combustion optimization model is constructed, aiming at minimizing NOx emission and treating the boiler load as a constraint. This optimization model is solved by a differential evolution (DE) algorithm to obtain the optimal control-variable settings. To verify the effectiveness of the proposed approach, experiments based on real operational data were designed. The results illustrate that the proposed method can reduce NOx emissions effectively under varying load. It provides an effective means at no additional cost and has a certain application prospect.
Key words: coal combustion; optimization; nitrogen oxide; differential evolution algorithm; least squares support vector; model predictive control

To address the severe environmental pollution China faces, the Emission Standard of Air Pollutants for Thermal Power Plants issued by the Ministry of Environmental Protection of the People's Republic of China requires that, from January 1, 2012, NOx emissions from coal-fired power plants must not exceed 100 mg/m3, except in a few designated regions.
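To illustrate the optimization step described in the abstract (minimize predicted NOx subject to a load constraint, solved by differential evolution), here is a schematic sketch. The two predictor functions stand in for the trained LSSVM models, and the variable bounds, load demand, and penalty weight are invented for the example; only the overall structure, DE searching control set-points against model predictions, follows the abstract.

```python
import numpy as np
from scipy.optimize import differential_evolution

def predict_nox(u):
    # Hypothetical stand-in for the trained LSSVM NOx-emission model.
    return 300.0 + 40.0 * np.sin(u[0]) + 25.0 * (u[1] - 0.6) ** 2 - 30.0 * u[2]

def predict_load(u):
    # Hypothetical stand-in for the trained LSSVM boiler-load model.
    return 200.0 + 120.0 * u[0] + 80.0 * u[1] + 50.0 * u[2]

load_demand = 420.0   # load demanded by the current operating condition (assumed, MW)
penalty = 50.0        # weight of the load-tracking constraint (assumed)

def objective(u):
    # Minimize predicted NOx while keeping the predicted load close to the demand.
    return predict_nox(u) + penalty * abs(predict_load(u) - load_demand)

bounds = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]   # normalized control set-points (assumed)
result = differential_evolution(objective, bounds, seed=0, maxiter=200, tol=1e-6)
print("optimal set-points:", result.x, "objective:", result.fun)
```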
Energy transfer between two nonparallel waveguides (in English)
Jia Yubin; Hao Yilong
[Journal] Acta Photonica Sinica
[Year (Volume), Issue] 2005, 34(6)
[Abstract] A concise method for analyzing the coupling between nonparallel waveguides, the extended coupling-coefficient method, is proposed. Using this method, a new set of coupled equations for two nonparallel waveguides is derived, from which a complete set of analytical solutions for nonparallel-waveguide coupling is obtained. Based on these analytical solutions, the transmission distance, the spacing of the input/output port waveguides, and the angle between the waveguides can be optimized. (Generic coupled-mode equations are sketched after this record for orientation.)
[Pages] 5 (pp. 852-856)
[Keywords] parallel; energy transfer; input/output ports; coupling coefficient; coupled equations; waveguide coupling; transmission distance; analytical solutions
[Authors] Jia Yubin; Hao Yilong
[Affiliation] Institute of Microelectronics, Peking University
[Language] Chinese
[CLC classification] TN253; G633.63
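The abstract above derives coupled equations for two nonparallel guides. For orientation only (this is textbook coupled-mode theory, not the paper's derivation), the mode amplitudes $A_1(z)$ and $A_2(z)$ of two weakly coupled guides obey

\[
\frac{dA_1}{dz} = -\,i\,\kappa(z)\,A_2\,e^{+i\Delta\beta z}, \qquad
\frac{dA_2}{dz} = -\,i\,\kappa(z)\,A_1\,e^{-i\Delta\beta z}, \qquad
\Delta\beta = \beta_1 - \beta_2,
\]

where, for guides meeting at a small angle $\theta$, the coupling coefficient depends on the local gap and hence on $z$, for example $\kappa(z) \approx \kappa_0\, e^{-\gamma\,[s(z)-s_0]}$ with $s(z) \approx s_0 + z\tan\theta$; the decay rate $\gamma$, like the other symbols here, is an assumed generic parameter.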
A spectral infinite element method for treating the sound-field boundary in a two-dimensional shallow-sea waveguide
Cao Weihao; Cheng Guangli; Liu Bao
[Journal] Journal of Naval University of Engineering
[Year (Volume), Issue] 2024, 36(1)
[Abstract] To overcome the low accuracy of the traditional infinite element method on truncated sound-field boundaries, a spectral infinite element method based on Gauss-Radau (GR) interpolation is proposed to treat, with high accuracy, the influence of the far-field boundary on the sound-field computation. First, mapping functions are used to construct nodal transformations from the natural coordinate system to the Cartesian coordinate system, yielding the mapping Jacobian matrix between the two systems. Then, shape functions based on GR interpolation are used to represent the sound pressure at the element nodes; combined with the mapping Jacobian, a variational treatment of the two-dimensional acoustic wave equation yields the integral expressions of the infinite elements, which are used to model sound propagation to infinity in a realistic shallow-sea waveguide. Comparison with an image-method analytical solution and with the traditional infinite element method shows that the results of the proposed method agree closely with the analytical solution, with a relative error of about 1%, which verifies its effectiveness and accuracy. (The underlying weak form is sketched after this record.)
[Pages] 7 (pp. 49-55)
[Authors] Cao Weihao; Cheng Guangli; Liu Bao
[Affiliation] College of Electronic Engineering, Naval University of Engineering
[Language] Chinese
[CLC classification] TP242
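For orientation only, the weak (variational) form from which such finite/infinite element treatments of the two-dimensional Helmholtz equation $\nabla^2 p + k^2 p = -f$ start can be written as

\[
\int_{\Omega} \left( \nabla p \cdot \nabla q \;-\; k^2\, p\, q \right) d\Omega
\;-\; \int_{\Gamma_\infty} \frac{\partial p}{\partial n}\, q \, d\Gamma
\;=\; \int_{\Omega} f\, q \, d\Omega
\qquad \text{for all test functions } q,
\]

where $\Gamma_\infty$ is the truncation boundary; the infinite elements handle the boundary term with mapped shape functions that carry the outgoing-wave behaviour toward infinity. The specific Gauss-Radau interpolation and coordinate mapping of the paper are not reproduced here.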
Optimal and Suboptimal Singleton Arc Consistency AlgorithmsChristian BessiereLIRMM(CNRS/University of Montpellier) 161rue Ada,Montpellier,Francebessiere@lirmm.frRomuald Debruyne´Ecole des Mines de Nantes/LINA 4rue Alfred Kastler,Nantes,France romuald.debruyne@emn.frAbstractSingleton arc consistency(SAC)enhances thepruning capability of arc consistency by ensuringthat the network cannot become arc inconsistent af-ter the assignment of a value to a variable.Algo-rithms have already been proposed to enforce SAC,but they are far from optimal time complexity.Wegive a lower bound to the time complexity of en-forcing SAC,and we propose an algorithm thatachieves this complexity,thus being optimal.How-ever,it can be costly in space on large problems.We then propose another SAC algorithm that tradestime optimality for a better space complexity.Nev-ertheless,this last algorithm has a better worst-casetime complexity than previously published SAC al-gorithms.An experimental study shows the goodperformance of the new algorithms.1IntroductionEnsuring that a given local consistency does not lead to a failure when we enforce it after having assigned a variable to a value is a common idea in constraint reasoning.It has been applied(sometimes under the name’shaving’)in con-straint problems with numerical domains by limiting the as-signments to bounds in the domains and ensuring that bounds consistency does not fail[Lhomme,1993].In SAT,it has been used as a way to compute more accurate heuristics for DPLL[Freeman,1995;Li and Ambulagan,1997].Finally, in constraint satisfaction problems(CSPs),it has been intro-duced under the name Singleton Arc Consistency(SAC)in [Debruyne and Bessi`e re,1997b]and further studied,both theoretically and experimentally in[Prosser et al.,2000; Debruyne and Bessi`e re,2001].Some nice properties give to SAC a real advantage over the other local consistencies enhancing the ubiquitous arc consis-tency.Its definition is much simpler than restricted path con-sistency[Berlandier,1995],max-restricted-path consistency [Debruyne and Bessi`e re,1997a],or other exotic local con-sistencies.Its operational semantics can be understood by a non-expert of thefield.Enforcing it only removes values in domains,and thus does not change the structure of the prob-lem,as opposed to path consistency[Montanari,1974],k-consistency[Freuder,1978],etc.Finally,implementing it can be done simply on top of any AC algorithm.Algorithms enforcing SAC were already proposed in the past(SAC1[Debruyne and Bessi`e re,1997b],and SAC2 [Bart´a k and Erben,2004])but they are far from optimal timecomplexity.Moreover,singleton arc consistency lacks an analysis of its complexity.In this paper,we study the com-plexity of enforcing SAC(in Section3)and propose an algo-rithm,SAC-Opt,enforcing SAC with optimal worst-case time complexity(Section4).However,optimal time complexity is reached at the cost of a high space complexity that prevents the use of this algorithm on large problems.In Section5,we propose SAC-SDS,a SAC algorithm with better worst-case space complexity but no longer optimal in time.Neverthe-less,its time complexity remains better than the SAC algo-rithms proposed in the past.The experiments presented in Section6show the good performance of both SAC-Opt and SAC-SDS compared to previous SAC algorithms.2PreliminariesA constraint network P consists of afinite set of n variables X={i,j,...},a set of domains D={D(i),D(j),...}, where the domain D(i)is thefinite set of at most d val-ues that variable i can take,and a set 
of e constraints C = {c_1, ..., c_e}. Each constraint c_k is defined by the ordered set var(c_k) of the variables it involves, and the set sol(c_k) of combinations of values satisfying it. A solution to a constraint network is an assignment of a value from its domain to each variable such that every constraint in the network is satisfied. When var(c) = (i, j), we use c_ij to denote sol(c) and we say that c is binary.

A value a in the domain of a variable i (denoted by (i, a)) is consistent with a constraint c_k iff there exists a support for (i, a) on c_k, i.e., an instantiation of the other variables in var(c_k) to values in their domains such that together with (i, a) they satisfy c_k. Otherwise, (i, a) is said arc inconsistent. A constraint network P = (X, D, C) is arc consistent iff D does not contain any arc inconsistent value. AC(P) denotes the network where all arc inconsistent values have been removed from P. If AC(P) has an empty domain, we say that P is arc inconsistent.

Definition 1 (Singleton arc consistency) A constraint network P = (X, D, C) is singleton arc consistent iff ∀i ∈ X, ∀a ∈ D(i), the network P|i=a = (X, D|i=a, C) obtained by replacing D(i) by the singleton {a} is not arc inconsistent. If P|i=a is arc inconsistent, we say that (i, a) is SAC inconsistent.
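To make the definition concrete, here is a minimal brute-force sketch in the spirit of SAC1: test each value (i, a) by enforcing AC on P|i=a, and rescan everything whenever a value is deleted. It is only an illustration of the definition, not of the algorithms studied below; the representation (domains as a dict of sets, binary constraints as a dict mapping each ordered pair (i, j) to a compatibility predicate, with both orientations present) and the plain AC-3 propagation are assumptions of this sketch.

    from copy import deepcopy

    def ac3(domains, constraints):
        # Enforce arc consistency in place; return False on a domain wipe-out.
        # `constraints` maps each ordered pair (i, j) to a predicate check(a, b);
        # both orientations (i, j) and (j, i) are expected to be present.
        queue = list(constraints)
        while queue:
            i, j = queue.pop()
            check = constraints[(i, j)]
            kept = {a for a in domains[i] if any(check(a, b) for b in domains[j])}
            if kept != domains[i]:
                domains[i] = kept
                if not kept:
                    return False
                # every arc pointing to i must be revised again
                queue.extend((k, l) for (k, l) in constraints if l == i)
        return True

    def naive_sac(domains, constraints):
        # Brute-force SAC in the spirit of SAC1: any deletion triggers a full rescan.
        if not ac3(domains, constraints):
            return False
        changed = True
        while changed:
            changed = False
            for i in list(domains):
                for a in list(domains[i]):
                    test = deepcopy(domains)
                    test[i] = {a}                     # the subproblem P|i=a
                    if not ac3(test, constraints):    # (i, a) is SAC inconsistent
                        domains[i].discard(a)
                        changed = True
                        if not domains[i]:
                            return False
        return True

Each deletion restarts the whole outer scan, which is what makes this scheme cost O(en²d⁴) with an optimal AC subroutine (and more with the AC-3 used here); the rest of the paper is about removing exactly that redundancy.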
3 Complexity of SAC

Both SAC1 [Debruyne and Bessière, 1997b] and SAC2 [Barták and Erben, 2004] have an O(en²d⁴) time complexity on binary constraints. But this is not optimal.

Theorem 1 The best time complexity we can expect from an algorithm enforcing SAC on binary constraints is in O(end³).

Proof. According to [McGregor, 1979], we know that checking whether a value (i, a) is such that AC does not fail in P|i=a is equivalent to verifying whether each domain D(j) contains at least one value b such that ((i, a), (j, b)) is "path consistent on (i, a)". (A pair of values ((i, a), (j, b)) is path consistent on (i, a) iff it is not deleted when enforcing path consistency only on pairs involving (i, a).) For each variable j, we have to find a value b ∈ D(j) such that ((i, a), (j, b)) is path consistent on (i, a). In the worst case, all the values b in D(j) have to be considered. So, the path consistency of each pair ((i, a), (j, b)) may need to be checked against every third variable k by looking for a value (k, c) compatible with both (i, a) and (j, b). However, it is useless to consider variables k not linked to j, since in this partial form of path consistency only pairs involving (i, a) can be removed. By using an optimal-time path consistency technique (storing supports), we are guaranteed to consider at most once each value (k, c) when seeking a compatible value on k for a pair ((i, a), (j, b)). The worst-case complexity of proving that path consistency on (i, a) does not fail is thus Σ_{b∈D(j)} Σ_{c_kj∈C} Σ_{c∈D(k)} 1 = ed². Furthermore, enforcing path consistency on (i, a) is completely independent from enforcing path consistency on another value (j, b). Indeed, a pair ((i, a), (j, b)) can be path consistent on (i, a) while becoming path inconsistent after the deletion of a pair ((k, c), (j, b)), thus being path inconsistent on (j, b). Therefore, the worst-case time complexity of any SAC algorithm is at least in O(Σ_{(i,a)∈D} ed²) = O(end³). □

4 Optimal Time Algorithm for SAC

SAC1 has no data structure storing which values may become SAC inconsistent when a given value is removed. Hence, it must check again the SAC consistency of all remaining values. SAC2 has data structures that use the fact that if AC does not lead to a wipe-out in P|i=a, then the SAC consistency of (i, a) holds as long as all the values in AC(P|i=a) are in the domain. After the removal of a value (j, b), SAC2 checks again the SAC consistency of all the values (i, a) such that (j, b) was in AC(P|i=a). This leads to a better average time complexity than SAC1, but the data structures of SAC2 are not sufficient to reach optimality, since SAC2 may waste time re-enforcing AC in P|i=a several times from scratch.

Algorithm 1, called SAC-Opt, enforces SAC in O(end³), the lowest time complexity which can be expected (see Theorem 1). The idea behind such an optimal algorithm is that we do not want to do and redo (potentially nd times) arc consistency from scratch in each subproblem P|i=a.

Algorithm 1: SAC-Opt

function SAC-Opt(inout P: Problem): Boolean;
   /* init phase */
1  if AC(P) then PendingList ← ∅ else return false;
2  foreach (i, a) ∈ D do
3     P_ia ← P   /* copy only domains and data structures */
4     D_ia(i) ← {a};
5     if ¬propagAC(P_ia, D(i)\{a}) then
6        D ← D\{(i, a)};
7        if propagAC(P, {(i, a)}) then
8           foreach P_jb such that (i, a) ∈ D_jb do
9              Q_jb ← Q_jb ∪ {(i, a)};
10             PendingList ← PendingList ∪ {(j, b)};
11       else return false;
   /* propag phase */
12 while PendingList ≠ ∅ do
13    pop (i, a) from PendingList;
14    if propagAC(P_ia, Q_ia) then Q_ia ← ∅;
15    else
16       D ← D\{(i, a)};
17       if D(i) = ∅ then return false;
18       foreach (j, b) ∈ D such that (i, a) ∈ D_jb do
19          D_jb(i) ← D_jb(i)\{a};  Q_jb ← Q_jb ∪ {(i, a)};
20          PendingList ← PendingList ∪ {(j, b)};
21 return true;

In the initialisation phase (lines 1–11), arc consistency is first enforced on the master problem P (line 1). Then, for each value (i, a), a subproblem P_ia is created (lines 3–4) and arc consistency is propagated in it (line 5). If this propagation fails, (i, a) is SAC inconsistent: it is removed from D (line 6) and the removal is propagated in P (line 7); every subproblem P_jb containing (i, a) is then recorded, together with its propagation list Q_jb, for future propagation (lines 9–10).

Once the initialisation phase has finished, we know that PendingList contains all values (i, a) for which some SAC-inconsistent value removals (contained in Q_ia) have not yet been propagated in P_ia. The loop in line 12 propagates these removals (line 14). When propagation fails, this means that (i, a) itself is SAC inconsistent. It is removed from the domain D of the master problem in line 16, and each subproblem P_jb containing (i, a) is updated together with its propagation list Q_jb for future propagation (lines 18–20). When PendingList is empty, all removals have been propagated in the subproblems. So, all the values in D are SAC.
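As an illustration of this two-phase structure, here is a rough Python sketch: one private copy of the domains per value, a per-value set of variables to re-revise, and a pending list of subproblems with unpropagated removals. The names (propag_ac, sub, q, pending, remove_value) are invented for this sketch, and the AC-3-style propagation used here is a simplification: the O(end³) bound additionally requires an optimal, incremental AC algorithm such as AC-6 or AC2001, as discussed in the proof below. This is not the authors' implementation, only a sketch under those assumptions.

    from copy import deepcopy

    def propag_ac(domains, constraints, touched):
        # AC-3-style re-propagation after the domains of `touched` variables shrank.
        # Returns (still consistent?, set of values (i, a) deleted by this call).
        deleted, queue = set(), list(touched)
        while queue:
            j = queue.pop()
            for (i, k), check in constraints.items():
                if k != j:
                    continue
                kept = {a for a in domains[i] if any(check(a, b) for b in domains[j])}
                if kept != domains[i]:
                    deleted |= {(i, a) for a in domains[i] - kept}
                    domains[i] = kept
                    if not kept:
                        return False, deleted
                    queue.append(i)
        return True, deleted

    def sac_opt_sketch(domains, constraints):
        ok, _ = propag_ac(domains, constraints, list(domains))
        if not ok:
            return False
        sub, q, pending = {}, {}, []     # sub-domains, per-value queues, PendingList

        def remove_value(i, a):
            # (i, a) is SAC inconsistent: delete it from the master problem,
            # re-establish AC there, and wake up every subproblem touched by a deletion.
            domains[i].discard(a)
            if not domains[i]:
                return False
            ok, deleted = propag_ac(domains, constraints, [i])
            if not ok:
                return False
            deleted.add((i, a))
            for (j, b), s in sub.items():
                touched_vars = {k for (k, c) in deleted if c in s[k]}
                if touched_vars:
                    for (k, c) in deleted:
                        s[k].discard(c)
                    q[(j, b)] |= touched_vars
                    pending.append((j, b))
            return True

        # init phase: build each subproblem P|i=a once and filter it
        for i in list(domains):
            for a in list(domains[i]):
                if a not in domains[i]:
                    continue
                s = deepcopy(domains)
                s[i] = {a}
                ok, _ = propag_ac(s, constraints, [i])
                if ok:
                    sub[(i, a)], q[(i, a)] = s, set()
                elif not remove_value(i, a):
                    return False

        # propag phase: re-propagate only where something actually changed
        while pending:
            (i, a) = pending.pop()
            if (i, a) not in sub:
                continue
            ok, _ = propag_ac(sub[(i, a)], constraints, q[(i, a)])
            q[(i, a)] = set()
            if not ok:
                del sub[(i, a)]
                if not remove_value(i, a):
                    return False
        return True

The important property mirrored here is that a subproblem is never rebuilt: each re-propagation restarts from the sub-domain left by the previous call, driven only by the variables recorded in its queue.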
Theorem 2 SAC-Opt is a correct SAC algorithm with O(end³) optimal time complexity and O(end²) space complexity on binary constraints.

Proof. Soundness. A value a is removed from D(i) either in line 6 or in line 16 because P_ia was found arc inconsistent. If P_ia was found arc inconsistent in the initialisation phase (line 5), SAC inconsistency of (i, a) follows immediately. Let us proceed by induction for the remaining case. Suppose all values already removed in line 16 were SAC inconsistent. If P_ia is found arc inconsistent in line 14, this means that P_ia cannot be made arc consistent without containing some SAC-inconsistent values. So, (i, a) is SAC inconsistent and SAC-Opt is sound.

Completeness. Thanks to the way PendingList and Q_ia are updated in lines 8–10 and 18–20, we know that at the end of the algorithm all P_ia are arc consistent. Thus, for any value (i, a) remaining in P at the end of SAC-Opt, P|i=a is not arc inconsistent and (i, a) is therefore SAC consistent.

Complexity. The complexity analysis is given for networks of binary constraints since the optimality proof was given for networks of binary constraints only. (But SAC-Opt works on constraints of any arity.) Since any AC algorithm can be used to implement our SAC algorithm, the space and time complexities will obviously depend on this choice. Let us first discuss the case where an optimal-time algorithm such as AC-6 [Bessière, 1994] or AC2001 [Bessière and Régin, 2001; Zhang and Yap, 2001] is used. Line 3 tells us that the space complexity will be nd times the complexity of the AC algorithm, namely O(end²). Regarding time complexity, the first loop copies the data structures and propagates arc consistency on each subproblem (line 5), two tasks which are respectively in nd·ed and nd·ed². In the while loop (line 12), each subproblem can be called nd times for arc consistency, and there are nd subproblems. Now, AC-6 and AC2001 are both incremental, which means that the complexity of nd restrictions on the same problem is ed² and not nd·ed². Thus the total cost of arc consistency propagation on the nd subproblems is nd·ed². We have to add the cost of updating the lists in lines 8 and 18. In the worst case, each value is removed one by one, and thus nd values are put in nd Q lists, leading to n²d² updates of PendingList. If n < e, the total time complexity is O(end³), which is optimal. □
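Collecting the terms of this proof in one place gives a compact view of where the bound comes from; this is only a summary of the counts above, under the same assumptions (binary constraints, an optimal incremental AC algorithm, and n < e):

    \[
    \underbrace{nd \cdot ed}_{\text{copies, line 3}}
    \;+\; \underbrace{nd \cdot ed^2}_{\text{initial AC, line 5}}
    \;+\; \underbrace{nd \cdot ed^2}_{\text{incremental AC, lines 12--14}}
    \;+\; \underbrace{n^2 d^2}_{\text{list updates, lines 8 and 18}}
    \;=\; O(end^3).
    \]

The two nd·ed² propagation terms dominate; the list updates are absorbed because n²d² ≤ end² when n < e.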
Remarks. We chose to present the Q_ia lists as lists of values to be propagated because the AC algorithm used is not specified here, and value removal is the most accurate information we can have. When using a fine-grained AC algorithm such as AC-6, the lists will be used as they are. If we use a coarse-grained algorithm such as AC-3 [Mackworth, 1977] or AC2001, only the variables from which the domain has changed are needed. The modification is direct. We should bear in mind that if AC-3 is used, we decrease the space complexity to O(n²d²), but the time complexity increases to O(end⁴) since AC-3 is not optimal.

5 Losing Time Optimality to Save Space

SAC-Opt cannot be used on large constraint networks because of its O(end²) space complexity. Moreover, it seems difficult to reach optimal time complexity with smaller space requirements. A SAC algorithm has to maintain AC on nd subproblems P|i=a, and to guarantee O(ed²) total time on these subproblems we need to use an optimal-time AC algorithm. Since there does not exist any optimal AC algorithm requiring less than O(ed) space, we obtain nd·ed space for the whole SAC process.

We propose to relax time optimality to reach a satisfactory trade-off between space and time. To avoid discussing in too general terms, we instantiate this idea on an AC2001-based SAC algorithm for binary constraints, but the same idea could be implemented with other optimal AC algorithms such as AC-6, and with constraints of any arity. The algorithm SAC-SDS (Sharing Data Structures) tries to use as much as possible the incrementality of AC to avoid redundant work, without duplicating on each subproblem P|i=a the data structures required by optimal AC algorithms. This algorithm requires less space than SAC-Opt but is not optimal in time.

As in SAC-Opt, for each value (i, a) SAC-SDS stores the local domain D_ia of the subproblem P|i=a and its propagation list Q_ia. Note that in our AC2001-like presentation, Q_ia is a list of variables j for which the domain D_ia(j) has changed (in SAC-Opt it is a list of values (j, b) removed from D_ia). As in SAC-Opt, thanks to these local domains D_ia, we know which values (i, a) may no longer be SAC after the removal of a value (j, b): those for which (j, b) was in D_ia. These local domains are also used to avoid starting each new AC propagation phase from scratch, since D_ia is exactly the domain from which we should start when enforcing AC again on P|i=a after a value removal. By comparison, SAC1 and SAC2 restart from scratch for each new propagation. But the main idea in SAC-SDS is that, as opposed to SAC-Opt, it does not duplicate to all subproblems P_ia the data structures of the optimal AC algorithm used. The data structure is maintained only in the master problem P. In the case of AC2001, the Last structure, which maintains in Last(i, a, j) the smallest value in D(j) compatible with (i, a) on c_ij, is built and updated only in P. Nevertheless, it is used by all subproblems P_ia to avoid repeating constraint checks already done in P.

Algorithm 2: SAC-SDS (AC2001-based)

function SAC-SDS-2001(inout P: problem): Boolean;
1  if AC2001(P) then PendingList ← ∅ else return false;
2  foreach (i, a) ∈ D do
3     D_ia ← nil;  Q_ia ← {i};
4     PendingList ← PendingList ∪ {(i, a)};
5  while PendingList ≠ ∅ do
6     pop (i, a) from PendingList;
7     if a ∈ D(i) then
8        if D_ia = nil then D_ia ← (D\D(i)) ∪ {(i, a)};
9        if propagSubAC(D_ia, Q_ia) then Q_ia ← ∅;
10       else
11          D(i) ← D(i)\{a};  Deleted ← {(i, a)};
12          if propagMainAC(D, {i}, Deleted) then
13             updateSubProblems(Deleted)
14          else return false;
15 return true;

function propagMainAC / propagSubAC(inout D: domain; in Q: set; inout Deleted: set): Boolean;
16 while Q ≠ ∅ do
17    pop j from Q;
18    foreach i ∈ X such that ∃c_ij ∈ C do
19       foreach a ∈ D(i) such that Last(i, a, j) ∉ D(j) do
20          if ∃b ∈ D(j), b > Last(i, a, j) ∧ c_ij(a, b) then
21             Last(i, a, j) ← b
22          else
23             D(i) ← D(i)\{a};  Q ← Q ∪ {i};
24             Deleted ← Deleted ∪ {(i, a)};
25       if D(i) = ∅ then return false;
26 return true;
   /* lines marked ··· are in propagMainAC but not in propagSubAC */

procedure updateSubProblems(in Deleted: set);
27 foreach (j, b) ∈ D such that D_jb ∩ Deleted ≠ ∅ do
28    Q_jb ← Q_jb ∪ {i ∈ X | D_jb(i) ∩ Deleted ≠ ∅};
29    D_jb ← D_jb\Deleted;
30    PendingList ← PendingList ∪ {(j, b)};

¹ […] it in a former loop as in SAC-Opt.

SAC-SDS (Algorithm 2) works as follows. After some initialisations (lines 1–4), SAC-SDS repeatedly pops a value from PendingList and propagates AC in P_ia (lines 6–9). Note that 'D_ia = nil' means that this is the first enforcement of AC in P_ia, so D_ia must be initialised (line 8).¹ If P_ia is arc inconsistent, (i, a) is SAC inconsistent: it is removed from D (line 11), arc consistency is re-established in the master problem (line 12), and the resulting removals are transferred to the subproblems (line 13). The shared Last structure makes the AC propagation in the subproblems faster (lines 19–20), since we know that there is no support for (i, a) on c_ij lower than Last(i, a, j) in P, and so in any subproblem as well. But this data structure is shared by all subproblems. Thus it must not be modified by propagSubAC. Otherwise, Last(i, a, j) would not be guaranteed to be the smallest support in the other subproblems and in P. When achieving AC in P, propagMainAC does update the Last data structure. The only difference with AC2001 is line 24, where it needs to store in Deleted the values removed from D during the propagation. Those removals performed in P can directly be transferred to the subproblems without the effort of proving their inconsistency in each subproblem. This is done via function updateSubProblems, which removes the values in Deleted from all the subproblems. It also updates the local propagation lists Q_ia and PendingList for future propagation in the subproblems.
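A small Python sketch of the sharing idea follows, under the same illustrative assumptions as before (dict-of-set domains, both orientations of each binary constraint present, integer values so that supports can be ordered); the function names and the single `last` table are inventions of this sketch, not the authors' code. propag_main_ac updates `last` and records deletions, whereas propag_sub_ac only reads `last` to skip values whose recorded support is still present.

    def propag_main_ac(domains, constraints, last, queue, deleted):
        # AC2001-like propagation on the master problem: `queue` is a set of
        # variables whose domain shrank; `last[(i, a, j)]` is the smallest support
        # found so far for (i, a) on c_ij; every removed value goes into `deleted`.
        while queue:
            j = queue.pop()
            for (i, k), check in constraints.items():
                if k != j:
                    continue
                for a in list(domains[i]):
                    recorded = last.get((i, a, j))
                    if recorded in domains[j]:
                        continue                      # recorded support still valid
                    support = next((b for b in sorted(domains[j])
                                    if (recorded is None or b > recorded) and check(a, b)), None)
                    if support is not None:
                        last[(i, a, j)] = support     # only the master writes `last`
                    else:
                        domains[i].discard(a)
                        deleted.add((i, a))
                        queue.add(i)
                if not domains[i]:
                    return False
        return True

    def propag_sub_ac(sub_domains, constraints, last, queue):
        # The same revision loop on a subproblem D_ia, but `last` is read-only:
        # it is shared with the master problem and with every other subproblem.
        while queue:
            j = queue.pop()
            for (i, k), check in constraints.items():
                if k != j:
                    continue
                for a in list(sub_domains[i]):
                    recorded = last.get((i, a, j))
                    if recorded in sub_domains[j]:
                        continue
                    # any remaining support is above the recorded one, so start there
                    if not any(check(a, b) for b in sub_domains[j]
                               if recorded is None or b > recorded):
                        sub_domains[i].discard(a)
                        queue.add(i)
                if not sub_domains[i]:
                    return False
        return True

Because propag_sub_ac never writes to `last`, the table remains a valid lower bound on supports for the master problem and for every other subproblem, which is the invariant the soundness argument of Theorem 3 relies on.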
Theorem 3 SAC-SDS is a correct SAC algorithm with O(end⁴) time complexity and O(n²d²) space complexity on binary constraints.

Proof. Soundness. Note first that the structure Last is updated only when achieving AC in P, so that any support of (i, a) in D(j) is greater than or equal to Last(i, a, j). The domains of the subproblems being subdomains of D, any support of a value (i, a) on c_ij in a subproblem is also greater than or equal to Last(i, a, j). This explains why propagSubAC can benefit from the structure Last without losing any support (lines 19–20). Once this observation is made, soundness comes from the same reason as in SAC-Opt.

Completeness comes from the fact that any deletion is propagated. After initialisation (lines 2–4), PendingList = D, and so the main loop of SAC-SDS processes every subproblem P|i=a at least once. Each time a value (i, a) is found SAC inconsistent in P, because P|i=a is arc inconsistent (line 9) or because the deletion of some SAC-inconsistent value makes it arc inconsistent in P (line 23 of propagMainAC called in line 12), (i, a) is removed from the subproblems (line 29), and PendingList and the local propagation lists are updated for future propagation (lines 28 and 30). At the end of the main loop, PendingList is empty, so all the removals have been propagated and, for any value (i, a) ∈ D, D_ia is a non-empty arc consistent subdomain of P|i=a.

Complexity. The data structure Last requires O(ed) space. Each of the nd domains D_ia can contain nd values, and there are at most n variables in each of the nd local propagation lists Q_ia. Since e < n², the space complexity of SAC-SDS-2001 is in O(n²d²). So, considering space requirements, SAC-SDS-2001 is similar to SAC2. Regarding time complexity, SAC-SDS-2001 first duplicates the domains and propagates arc consistency on each subproblem (lines 8 and 9), two tasks which are respectively in nd·nd and nd·ed². Each value removal is propagated to all P|i=a problems via an update of PendingList, Q_ia and D_ia (lines 27–30). This requires nd·nd operations. Each subproblem can in the worst case be called nd times for arc consistency, and there are nd subproblems. The domains of each subproblem are stored, so that the AC propagation is launched with the domains in the state in which they were at the end of the previous AC propagation in the subproblem. Thus, in spite of the several AC propagations on a subproblem, a value will be removed at most once and, thanks to the incrementality of arc consistency, the propagation of these nd value removals is in O(ed³). (Note that we cannot reach the optimal ed² complexity for arc consistency on these subproblems since we do not duplicate the data structures necessary for AC optimality.) Therefore, the total cost of arc consistency propagations is nd·ed³. The total time complexity is O(end⁴). □
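The same kind of accounting as for Theorem 2, summarising the counts in the proof above (binary constraints, AC2001 as the shared AC machinery):

    \[
    \underbrace{nd \cdot nd}_{\text{subdomain copies, line 8}}
    \;+\; \underbrace{nd \cdot ed^2}_{\text{first AC pass, line 9}}
    \;+\; \underbrace{nd \cdot nd}_{\text{list and domain updates, lines 27--30}}
    \;+\; \underbrace{nd \cdot ed^3}_{\text{re-propagations in the subproblems}}
    \;=\; O(end^4).
    \]

The nd·ed³ term is what separates SAC-SDS from SAC-Opt: without a private support structure per subproblem, the nd re-propagations of a subproblem cost ed³ instead of ed².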
As with SAC2, SAC-SDS performs a better propagation than SAC1, since after the removal of a value (i, a) from D, SAC-SDS checks the arc consistency of the subproblems P|j=b only if they have (i, a) in their domains (and not all the subproblems, as SAC1 does). But this is not sufficient to obtain a better worst-case time complexity than SAC1: SAC2 indeed has the same O(en²d⁴) time complexity as SAC1. SAC-SDS improves this complexity because it stores the current domain of each subproblem, and so does not propagate in the subproblems each time from scratch. Furthermore, we can expect a better average time complexity, since the shared structure Last reduces the number of constraint checks required, even if it does not permit us to obtain optimal worst-case time complexity. This potential gain can only be assessed by an experimental comparison of the different SAC versions. Finally, in SAC1 and SAC2 each AC enforcement in a subproblem must be done on a new copy of D built at runtime (potentially nd·nd times), while such duplication is performed only once for each value in SAC-SDS (by creating the subdomains D_ia).

6 Experimental Results

We compared the performance of the SAC algorithms on random uniform constraint networks of binary constraints generated by [Frost et al., 1996], which produces instances according to […]

[Figure 1: cpu time with n=100, d=20, and density=.05.]

6.2 Experiments on dense constraint networks

Fig. 2 presents performance on constraint networks having 100 variables, 20 values in each domain, and a complete graph of binary constraints.

SAC1 and SAC2 show very close performance. When all values are SAC consistent (tightness lower than .37), the additional data structure of SAC2 is useless since there is no propagation. However, the cost of building this data structure is not important compared to the overall time, and the time required by SAC2 is almost the same as for SAC1. Around the peak of complexity, SAC2-X (with X ∈ {4, 6, 2001}) requires a little less time than SAC1-X. SAC2 has to repeatedly recheck the arc consistency of fewer subproblems than SAC1, but the cost of testing a subproblem remains the same. On very tight constraints, SAC1 requires less time than SAC2, since the inconsistency of the problem is found with almost no propagation and building the data structure of SAC2 is useless.
Conversely to what was expected in [Barták and Erben, 2004], using AC-4 in SAC1 (or in SAC2) instead of AC-6 or AC2001 is not worthwhile. The intuition was that, since the data structure of AC-4 does not have to be updated, the cost of its creation would be low compared to the profit we could expect. However, SAC1-4 and SAC2-4 are far more costly than the versions based on AC-6 or AC2001.

The best results are obtained with SAC-Opt-2001 and SAC-SDS-2001, which are between 2.6 and 17 times faster than the others at the peak. These two algorithms have a better propagation between subproblems than SAC1, but they also avoid some redundant work and so reduce the work performed on each subproblem.

7 Summary and Conclusion

We have presented SAC-Opt, the first optimal-time SAC algorithm. However, the high space complexity of this algorithm prevents its use on large constraint networks. Hence, we have proposed SAC-SDS, a SAC algorithm that is not optimal in time but that requires less space than SAC-Opt. Experiments show the good performance of these new SAC algorithms compared to previous versions available in the literature. This opens again the issue of using SAC as an alternative to AC for pruning values in a constraint network, or at least in 'promising' parts of the network. SAC could also be used to compute variable selection heuristics, as has been done with success in SAT [Li and Anbulagan, 1997].

References

[Barták and Erben, 2003] R. Barták and R. Erben. Singleton arc consistency revised. In ITI Series 2003-153, Prague, 2003.

[Barták and Erben, 2004] R. Barták and R. Erben. A new algorithm for singleton arc consistency. In Proceedings FLAIRS'04, Miami Beach FL, 2004. AAAI Press.

[Berlandier, 1995] P. Berlandier. Improving domain filtering using restricted path consistency. In Proceedings IEEE-CAIA'95, 1995.

[Bessière and Régin, 2001] C. Bessière and J.C. Régin. Refining the basic constraint propagation algorithm. In Proceedings IJCAI'01, pages 309–315, Seattle WA, 2001.

[Bessière, 1994] C. Bessière. Arc-consistency and arc-consistency again. Artificial Intelligence, 65:179–190, 1994.

[Debruyne and Bessière, 1997a] R. Debruyne and C. Bessière. From restricted path consistency to max-restricted path consistency. In Proceedings CP'97, pages 312–326, Linz, Austria, 1997.

[Debruyne and Bessière, 1997b] R. Debruyne and C. Bessière. Some practicable filtering techniques for the constraint satisfaction problem. In Proceedings IJCAI'97, pages 412–417, Nagoya, Japan, 1997.

[Debruyne and Bessière, 2001] R. Debruyne and C. Bessière. Domain filtering consistencies. Journal of Artificial Intelligence Research, 14:205–230, 2001.

[Freeman, 1995] J.W. Freeman. Improvements to propositional satisfiability search algorithms. PhD thesis, University of Pennsylvania, Philadelphia PA, 1995.

[Freuder, 1978] E.C. Freuder. Synthesizing constraint expressions. Communications of the ACM, 21(11):958–966, Nov 1978.

[Frost et al., 1996] D. Frost, C. Bessière, R. Dechter, and J.C. Régin. Random uniform CSP generators. URL: http://www.lirmm.fr/~bessiere/generator.html, 1996.

[Lhomme, 1993] O. Lhomme. Consistency techniques for numeric CSPs. In Proceedings IJCAI'93, pages 232–238, Chambéry, France, 1993.

[Li and Anbulagan, 1997] C.M. Li and Anbulagan. Heuristics based on unit propagation for satisfiability problems. In Proceedings IJCAI'97, pages 366–371, Nagoya, Japan, 1997.

[Mackworth, 1977] A.K. Mackworth. Consistency in networks of relations. Artificial Intelligence, 8:99–118, 1977.
[McGregor, 1979] J.J. McGregor. Relational consistency algorithms and their application in finding subgraph and graph isomorphisms. Information Sciences, 19:229–250, 1979.

[Montanari, 1974] U. Montanari. Networks of constraints: Fundamental properties and applications to picture processing. Information Sciences, 7:95–132, 1974.

[Prosser et al., 2000] P. Prosser, K. Stergiou, and T. Walsh. Singleton consistencies. In Proceedings CP'00, pages 353–368, Singapore, 2000.

[Prosser, 1996] P. Prosser. An empirical study of phase transition in binary constraint satisfaction problems. Artificial Intelligence, 81:81–109, 1996.

[Zhang and Yap, 2001] Y. Zhang and R.H.C. Yap. Making AC-3 an optimal algorithm. In Proceedings IJCAI'01, pages 316–321, Seattle WA, 2001.