Topology Optimization Design of Continuum Structures

1. Overview

With the continuous progress of technology and the growing demands of engineering, the topology optimization design of continuum structures has become a research hotspot in modern engineering. Topology optimization aims to optimize structural performance by changing the internal layout and connectivity of a structure, thereby improving the load-bearing capacity and efficiency of engineering structures. This article studies the topology optimization design of continuum structures in depth, exploring its basic principles, methods, applications, and future development trends. It first introduces the basic concepts and principles of topology optimization of continuum structures, including the definition of topology optimization, objective functions, and constraint conditions.
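The "objective function plus constraint conditions" structure mentioned above can be made concrete with the classic minimum-compliance (SIMP) formulation. This is a standard textbook form given for orientation; it is not quoted from this article:

```latex
\min_{\boldsymbol{\rho}} \; c(\boldsymbol{\rho}) = \mathbf{U}^{T}\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U}
\quad \text{s.t.} \quad
\mathbf{K}(\boldsymbol{\rho})\,\mathbf{U} = \mathbf{F}, \qquad
\frac{\sum_{e} \rho_e v_e}{V_0} \le f, \qquad
0 < \rho_{\min} \le \rho_e \le 1 ,
```

where $\rho_e$ are the element material densities (the design variables), $c$ is the structural compliance (the objective), $\mathbf{K}$, $\mathbf{U}$, and $\mathbf{F}$ are the stiffness matrix, displacement, and load vectors, and $f$ is the allowed volume fraction (the constraint).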
Structural Topology Optimization Based on ANSYS
LIN Danyi; LI Fang

Abstract: To address the practical application of topology optimization technology, it was applied to the optimal design of a bicycle frame and a multi-arch bridge. After analyzing various topology optimization methods, a mathematical model was established that takes the element material density as the design variable, the minimization of structural compliance as the objective function, and the volume reduction percentage as the constraint function. The topology optimization design module of the commercial finite element software ANSYS was then used to carry out the topology optimization design of the bicycle frame and the multi-arch bridge. The resulting topologies are clear and closely resemble real bicycle frames and multi-arch bridges. The results indicate that this structural topology optimization method is correct and effective and has promising engineering applications.

Journal: Journal of Mechanical & Electrical Engineering
Year (volume), issue: 2012 (029) 008; Pages: 5 (P898-901, 915)
Keywords: topology optimization; ANSYS; bicycle frame; multi-arch bridge
Authors: LIN Danyi; LI Fang (School of Mechanical Engineering, Zhejiang University of Technology, Hangzhou, Zhejiang 310014, China)
Language: Chinese; CLC number: TH112; U484

0. Introduction

According to the type of design variables and the difficulty of the problem to be solved, continuum structural optimization can be divided into three levels: size optimization, shape optimization, and topology optimization, corresponding to three different product design stages, namely detailed design, basic design, and conceptual design.
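The "density as design variable, compliance as objective, volume as constraint" model described in the abstract above is typically driven by an optimality-criteria density update. The sketch below is illustrative only: the sensitivities are made up and there is no finite element solve, so it is not the authors' ANSYS workflow, just the shape of the inner update step.

```python
import numpy as np

def oc_update(rho, dc, dv, volfrac, move=0.2, rho_min=1e-3):
    """One optimality-criteria update of element densities.

    rho: current element densities; dc: compliance sensitivities (<= 0);
    dv: element volume sensitivities; volfrac: allowed volume fraction.
    Bisection finds the Lagrange multiplier of the volume constraint.
    """
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        # Classic OC step: scale each density by sqrt(-dc / (lmid * dv)),
        # then clip to the move limits and the physical bounds.
        rho_new = np.clip(rho * np.sqrt(-dc / (lmid * dv)),
                          np.maximum(rho - move, rho_min),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > volfrac:   # too much material -> raise multiplier
            l1 = lmid
        else:
            l2 = lmid
    return rho_new

# Toy example with made-up sensitivities (no FE solve here):
rho = np.full(8, 0.5)
dc = -np.linspace(2.0, 0.25, 8)   # "stiffer" elements get larger |dc|
dv = np.ones(8)
rho = oc_update(rho, dc, dv, volfrac=0.5)
print(rho.round(3), rho.mean())
```

The update pushes material toward the elements with the largest sensitivity magnitude while the bisection keeps the average density at the prescribed volume fraction.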
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS
Int. J. Commun. Syst. 2013; 26:1054-1073. Published online 27 January 2012 in Wiley Online Library. DOI: 10.1002/dac.1399

A unified enhanced particle swarm optimization-based virtual network embedding algorithm

Zhongbao Zhang(1), Xiang Cheng(1), Sen Su(1,*), Yiwen Wang(1), Kai Shuang(1) and Yan Luo(2)
(1) State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, 10 Xi Tu Cheng Road, Beijing, China
(2) Electrical and Computer Engineering, University of Massachusetts Lowell, One University Ave, Lowell, MA 01854, USA

SUMMARY

Virtual network (VN) embedding is a major challenge in network virtualization. In this paper, we aim to increase the acceptance ratio of VNs and the revenue of infrastructure providers by optimizing VN embedding costs. We first establish two models for VN embedding: an integer linear programming model for a substrate network that does not support path splitting and a mixed integer programming model when path splitting is supported. Then we propose a unified enhanced particle swarm optimization-based VN embedding algorithm, called VNE-UEPSO, to solve these two models irrespective of the support for path splitting.
In VNE-UEPSO, the parameters and operations of the particles are redefined according to the VN embedding context. To reduce the time complexity of the link mapping stage, we use a shortest path algorithm for link mapping when path splitting is unsupported and propose a greedy k-shortest paths algorithm for the other case. Furthermore, a large to large and small to small preferred node mapping strategy is proposed to achieve better convergence and load balance of the substrate network. The simulation results show that our algorithm significantly outperforms previous approaches in terms of the VN acceptance ratio and long-term average revenue. Copyright (c) 2012 John Wiley & Sons, Ltd.

Received 24 June 2011; Revised 12 October 2011; Accepted 27 November 2011

KEY WORDS: network virtualization; virtual network embedding; integer linear programming; mixed integer programming; metaheuristic; particle swarm optimization

1. INTRODUCTION

The Internet has only been improved incrementally since its inception. In the past, fundamental changes in network architectures have faced strong resistance from realistic experimentation and deployment [1-4]. In recent years, network virtualization has emerged to serve as the foundation of the future Internet, allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared network substrate and providing adequate flexibility for network innovations.

In the network virtualization environment, infrastructure providers (InPs) and service providers (SPs) play two decoupled roles: InPs manage the physical infrastructure, whereas SPs create VNs and offer end-to-end services [1, 3, 5]. Embedding VN requests of the SPs, with both node and link constraints, into the substrate network (also known as VN embedding) is non-deterministic polynomial-time hard (NP-hard). Even if all the virtual nodes are mapped, it is still NP-hard to embed the virtual links into substrate paths without violating the bandwidth constraints [6]. Thus,

*Correspondence to: Sen Su, State Key Laboratory of Networking and
Switching Technology, Beijing University of Posts and Telecommunications, 10 Xi Tu Cheng Road, Beijing, China. E-mail: susen@

to reduce the hardness of the VN embedding problem and enable efficient heuristics, early studies restrict the VN embedding problem space as follows:

- Assume that the VN requests are known in advance (i.e., an offline version) [7-9].
- Ignore one or more types of resource constraints of the VN request (e.g., CPU, bandwidth, or location) [6-11].
- Perform no admission control when the resources of the substrate network are insufficient [7, 8, 10].
- Focus only on the backbone-star topology [8].

Considering all these aforementioned issues, when path splitting is supported by the substrate network, the authors in [12] formulate a mixed integer programming (MIP) model for the VN embedding problem and propose several MIP-based online VN embedding algorithms to coordinate the node and link mapping stages. However, when path splitting is not supported by the substrate network, their MIP formulation is no longer appropriate; consequently, the corresponding algorithm they proposed suffers from poor performance. Besides, the linear programming relaxation and rounding techniques adopted by their algorithms can result in time-consuming and infeasible VN embedding solutions. Even if a feasible solution can be obtained, it may still be far from optimal [13].

To address this issue, we first note that different VN embedding solutions can result in different total resource costs to the substrate network, and reducing this cost may increase the possibility of accepting more future VN requests. We therefore present an integer linear programming (ILP) formulation and a MIP formulation of the VN embedding problem that minimize this cost when path splitting is unsupported and supported by the substrate network, respectively.

Solving ILP and MIP is a well-known NP-hard problem [13]. Traditional exact algorithms such as branch and bound and cutting plane are guaranteed to find an optimal solution. These algorithms, however, incur exponential running time, so only instances of moderate size can be solved in practice [14]. We therefore turn our attention to finding a feasible solution that is near optimal. Metaheuristics have been shown useful in practice, including genetic algorithms [15], simulated annealing [16], evolutionary programming [17], and particle swarm optimization (PSO) [18]. These are iterative search techniques inspired by biological and physical phenomena, which have been successfully applied to a wide range of optimization problems. In particular, PSO is a population-based stochastic global optimizer that can generate better solutions in less computing time with stable convergence [19] than other population-based methods. It is also easy to implement, with a small number of adjustable parameters. We are motivated to leverage the benefits of PSO to conceive an efficient online VN embedding algorithm. More specifically, if we consider the position of each particle in PSO as a possible VN embedding solution, then each particle can adjust its position toward a better one according to the individual and global optimal information, and finally an approximate optimal solution of VN embedding can be obtained through the evolution of the particles. However, before we can employ PSO to solve our problem, three imperative challenges must be conquered: (i) standard PSO deals only with continuous optimization, so it is not directly applicable to the optimal VN embedding problem, which is a discrete optimization problem; (ii) because the formulations of the VN embedding problem differ, our PSO-based algorithm must be designed to handle the problem irrespective of whether path splitting is supported by the substrate network; moreover, previous work [6] uses the k-shortest path (KSP) algorithm for virtual link mapping when path splitting is unsupported by the substrate and the multicommodity flow (MCF) algorithm otherwise, but adopting the same link mapping algorithms would make our algorithm very time consuming because of the iteration processes of PSO; (iii) the randomness of PSO may result in slow convergence on our problem, and it may also fragment the substrate network resources and hinder the substrate network from accepting larger VN requests.

Toward these ends, we present a unified enhanced PSO-based VN embedding algorithm, referred to as VNE-UEPSO, to solve the optimal VN embedding problem. In VNE-UEPSO, we conquer the aforementioned challenges as follows:

- We redefine the parameters and operations of the particles (such as position, velocity, and the updating operations) according to our problem. Moreover, because the virtual link mapping algorithm is coupled with the features supported by the substrate network, we encode only the virtual node mapping solution in the position vector and leave the virtual link mapping solution to be determined in a position feasibility check procedure, which can then adopt the proper virtual link mapping algorithm according to the features supported by the substrate network.
- To reduce the time complexity of the virtual link mapping stage while maintaining efficiency, we apply the shortest path algorithm for link mapping when path splitting is unsupported and propose a novel link mapping algorithm called greedy k-shortest paths (GKSP) for the other case.
- Furthermore, we propose a large to large and small to small (L2S2) preferred local selection strategy for the position initialization and update of the particles, to achieve better convergence and load balance of the substrate network.

The simulation results show that the proposed algorithm significantly outperforms existing approaches in terms of the long-term average revenue and VN request acceptance ratio while decreasing the substrate resource cost, irrespective
of whether path splitting is supported by the substrate network or not.

The rest of this paper is organized as follows. In Section 2, we present the network model, the definition of the VN embedding problem, and its common objectives. The ILP and MIP models of the VN embedding problem are presented in Section 3. In Section 4, we describe the details of the VNE-UEPSO algorithm. Our VN embedding algorithm is evaluated in Section 5. An overview of the related work is given in Section 6. Section 7 concludes this paper.

2. NETWORK MODEL AND PROBLEM DESCRIPTION

In this section, we first model the substrate network of InPs and the VN of SPs, then give the VN embedding problem description, followed by the definition of objectives.

2.1. Network model

We denote the topology of the substrate network by a weighted undirected graph $G_s = (N_s, L_s, A_s^n, A_s^l)$, where $N_s$ is the set of substrate nodes and $L_s$ is the set of substrate links. The notations $A_s^n$ and $A_s^l$ denote the attributes of the substrate nodes and links, respectively. The attributes of a node could be processing capacity, storage, and location; the attributes of a link could be bandwidth and delay. In this paper, we consider the available CPU capacity and a location constraint (e.g., particular geographic regions) as the node attributes and the available bandwidth as the link attribute. All loop-free paths of the substrate network are denoted by $P_s$.

Similarly, the topology of a VN is also denoted by a weighted undirected graph $G_v = (N_v, L_v, R_v^n, R_v^l)$, where $N_v$ is the set of virtual nodes, $L_v$ is the set of virtual links, and $R_v^n$ and $R_v^l$ denote the CPU requirements and location constraints on the virtual nodes and the bandwidth requirements on the virtual links, respectively. Each VN request is denoted by the quad $VNR^{(i)} = (G_v, t_a, t_d, W)$, where $t_a$ and $t_d$ denote the arrival time of the VN request and the duration of the VN's stay in the substrate network, respectively. When the $i$th VN request arrives, the substrate network should allocate resources to the VN that satisfy the constraints of the virtual nodes and links. If there are not enough substrate resources, the VN request should be rejected or postponed. The allocated substrate resources are released when the VN departs. Similar to the work in [12], $W$ is a non-negative value expressing how far a virtual node $n_v \in N_v$ can be placed from the location specified by $Loc(n_v)$.

Figure 1(b) presents a substrate network, where the numbers in rectangles are the available CPU resources at the nodes and the numbers over the links represent available bandwidths. Figures 1(a) and 1(c) present two VN requests with node and link constraints.

2.2. Virtual network embedding problem description

The VN embedding problem is defined by a mapping $M: G_v(N_v, L_v) \to G_s(N'_s, P'_s)$ from $G_v$ to a subset of $G_s$, where $N'_s \subseteq N_s$ and $P'_s \subseteq P_s$. The mapping can be decomposed into two steps:

- Node mapping places the virtual nodes on distinct substrate nodes that satisfy the node resource constraints. As shown in Figures 1(a) and 1(b), the node mapping solution of VN request 1 is {a -> B, b -> C, c -> F, d -> E}.
- Link mapping assigns the virtual links to loop-free paths on the substrate that satisfy the link resource requirements. The link mapping solution in Figures 1(a) and 1(b) is {(a, b) -> (B, C), (a, c) -> (B, F), (b, d) -> (C, E), (c, d) -> (F, E)}.

After the node and link mapping stages of VN request 1, the residual capacities of the substrate nodes and links are as shown in Figure 1(d). Figures 1(c) and 1(d) show another VN embedding solution, for VN request 2. Note that virtual nodes of different VN requests can be mapped onto the same substrate node, but virtual nodes in the same VN request cannot share a substrate node.

2.3. Objectives

Long-term average revenue. From the InPs' point of view, an efficient and effective online VN embedding algorithm maximizes the revenue of InPs and accepts more VN requests in the long run. Similar to the previous work
in [6, 7, 12], we first give the revenue of accepting a VN request at time $t$:

$$R(G_v, t) = \sum_{n_v \in N_v} CPU(n_v) + \sum_{l_v \in L_v} BW(l_v), \tag{1}$$

where $CPU(n_v)$ and $BW(l_v)$ are the CPU and bandwidth requirements of the virtual node $n_v$ and virtual link $l_v$, respectively. Then, as in the previous work [6], the long-term average revenue is given by:

$$\lim_{T \to \infty} \frac{\sum_{t=0}^{T} R(G_v, t)}{T}. \tag{2}$$

VN request acceptance ratio. This is defined by:

$$\lim_{T \to \infty} \frac{\sum_{t=0}^{T} VNR_s}{\sum_{t=0}^{T} VNR}, \tag{3}$$

where $VNR_s$ denotes the VN requests successfully accepted by the substrate network.

Long-term revenue to cost (R/C) ratio. We first define the cost of accepting a VN request at time $t$ as the sum of the total substrate resources allocated to that VN:

$$C(G_v, t) = \sum_{n_v \in N_v} CPU(n_v) + \sum_{l_v \in L_v} \sum_{l_s \in L_s} BW(f_{l_s}^{l_v}, l_v), \tag{4}$$

where $f_{l_s}^{l_v} \in \{0, 1\}$, with $f_{l_s}^{l_v} = 1$ if substrate link $l_s$ allocated bandwidth to virtual link $l_v$ and $f_{l_s}^{l_v} = 0$ otherwise, and $BW(f_{l_s}^{l_v}, l_v)$ is the amount of bandwidth that $l_s$ allocated to $l_v$. We use a modified version of Equation (4) as the objective function of our ILP and MIP models, presented in the next section. We then introduce the long-term R/C ratio to quantify the efficiency of substrate resource use:

$$\lim_{T \to \infty} \frac{\sum_{t=0}^{T} R(G_v, t)}{\sum_{t=0}^{T} C(G_v, t)}. \tag{5}$$

In this paper, we consider the long-term average revenue as the main objective of the online VN embedding algorithm, in addition to the VN request acceptance ratio and the long-term R/C ratio. If the long-term average revenues of two VN embedding solutions are nearly the same, a higher VN request acceptance ratio and long-term R/C ratio are preferred.

3. ILP AND MIP FORMULATIONS FOR OPTIMAL VN EMBEDDING

In this section, we first give the motivation behind our ILP and MIP formulations for the VN embedding problem and then provide the details of the formulation.

3.1. Motivation

For one VN request, different VN embedding solutions may have different
substrate resource costs. Let us reconsider the VN embedding examples of Section 2 (Figure 1). Assuming that the substrate nodes B and F can satisfy all the requirements of the virtual nodes b and c in VN request 2, we can construct another VN embedding solution for VN request 2 whose node mapping is {a -> A, b -> B, c -> F} and whose link mapping is {(a, b) -> (A, B), (a, c) -> (A, F)}. Obviously, this solution consumes less substrate network resource than the solution shown in Figure 1(d) and increases the possibility of accepting more future VN requests. This observation motivates us to establish an optimal model of the VN embedding problem that minimizes this cost.

3.2. Notation

We first summarize the notation used throughout this paper in Table I.

Table I. Notation.

| Notation | Description |
|---|---|
| $i, j$ | Substrate nodes |
| $u, v$ | Virtual nodes |
| $x_i^u$ | Binary variable: $x_i^u = 1$ if virtual node $u$ is mapped to substrate node $i$, and 0 otherwise |
| $f_{ij}^{uv}$ | Variable: 1 if virtual link $l_{uv}$ is routed on physical link $l_{ij}$, and 0 otherwise |
| $CPU(u)$ | The CPU value of node $u$ |
| $BW(l_{uv})$ | The bandwidth value of link $l_{uv}$ |

3.3. Resource cost modeling

For a VN request, because the CPU cost of different VN embedding solutions is constant, we only consider the bandwidth resource cost in Formula (6):

$$\sum_{(u,v) \in L_v} \sum_{(i,j) \in L_s} f_{ij}^{uv}\, BW(l_{uv}). \tag{6}$$

3.4. Capacity constraints modeling

There are two kinds of capacity constraints: node constraints and link constraints. For the node constraints, the CPU capacity of the substrate node $i$ must satisfy the CPU request of the virtual node $u$, and its location must be within the range specified by $W$, which indicates how far the virtual node can be placed from the location specified by $Loc(u)$. The distance function $Dis$ denotes the Euclidean distance between two nodes: for example, if node $n_1$ is located at $(x_1, y_1)$ and node $n_2$ at $(x_2, y_2)$, then $Dis(Loc(n_1), Loc(n_2)) = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$. For the link constraints, the substrate link $(i, j)$ must meet the bandwidth requirement of the virtual link $(u, v)$. Constraints (7) and (8) specify the node constraints and link constraints, respectively:

$$\forall u \in N_v,\ \forall i \in N_s:\quad x_i^u\, CPU(u) \le CPU(i),\qquad x_i^u\, Dis(Loc(i), Loc(u)) \le W; \tag{7}$$

$$\forall (i,j) \in L_s,\ \forall (u,v) \in L_v:\quad f_{ij}^{uv}\, BW(l_{uv}) \le BW(l_{ij}). \tag{8}$$

3.5. Connectivity constraints modeling

Constraint (9) is the flow conservation constraint for routing one unit of traffic from the substrate node hosting $u$ to the substrate node hosting $v$. It requires that equal amounts of flow due to virtual link $(u, v)$ enter and leave each substrate node that corresponds to neither the source $u$ nor the destination $v$; furthermore, the node hosting $u$ has an exogenous input of one unit of traffic that must find its way to the substrate node hosting $v$:

$$\forall i \in N_s,\ \forall (u,v) \in L_v:\quad \sum_{(i,j) \in L_s} f_{ij}^{uv} - \sum_{(j,i) \in L_s} f_{ji}^{uv} = \begin{cases} 1 & \text{if } x_i^u = 1,\\ -1 & \text{if } x_i^v = 1,\\ 0 & \text{otherwise.} \end{cases} \tag{9}$$

3.6. Variable constraints modeling

Constraint (10) ensures that each virtual node is mapped to exactly one substrate node and that each substrate node hosts at most one virtual node of the request; constraint (11) gives the domains of the variables $f_{ij}^{uv}$ and $x_i^u$. If path splitting is not supported, $f_{ij}^{uv}$ is a binary variable in $\{0, 1\}$ (the model is an ILP); otherwise it is a continuous variable in $[0, 1]$ (the model is a MIP):

$$\forall i \in N_s:\ \sum_{u \in N_v} x_i^u \le 1;\qquad \forall u \in N_v:\ \sum_{i \in N_s} x_i^u = 1; \tag{10}$$

$$\forall i \in N_s,\ \forall u \in N_v:\ x_i^u \in \{0, 1\};\qquad \forall (i,j) \in L_s,\ \forall (u,v) \in L_v:\ f_{ij}^{uv} \in \begin{cases} [0, 1] & \text{if path splitting},\\ \{0, 1\} & \text{otherwise.} \end{cases} \tag{11}$$

3.7. Problem formulation

The goal of the VN embedding problem in this paper is to minimize the resource cost of embedding each VN request; thus we have the following optimization problem:

$$\min \sum_{(u,v) \in L_v} \sum_{(i,j) \in L_s} f_{ij}^{uv}\, BW(l_{uv}) \tag{12}$$

subject to Equations (7)-(11), where the model is an ILP if path splitting is not supported and a MIP otherwise. In the next section, we propose our VNE-UEPSO algorithm to solve this optimal VN embedding problem.

4. PROPOSED VNE-UEPSO ALGORITHM

In this section, we first give a
brief introduction of PSO in Section 4.1. When employing PSO to solve our optimal VN embedding problem, there remain the challenges pointed out in Section 1; Sections 4.2-4.4 describe these challenges and how each is addressed. In Section 4.5, we present the algorithmic details of VNE-UEPSO.

4.1. Basic concepts of PSO

Particle swarm optimization is a population-based optimization method, first introduced by Eberhart and Kennedy in 1995, inspired by the flocking behavior of many species, such as birds or schools of fish, hunting for food. It is a random search algorithm that simulates a natural evolutionary process and performs well on some difficult optimization problems. In PSO, a swarm of particles represents potential solutions flying through the problem space by following the current optimum particles. Each particle $i$ is associated with two vectors, the position vector $X_i = [x_i^1, x_i^2, \ldots, x_i^D]$ and the velocity vector $V_i = [v_i^1, v_i^2, \ldots, v_i^D]$, where $D$ denotes the dimension of the solution space. The position and velocity of each particle are initialized randomly within the corresponding ranges. During the evolutionary process, the velocity and position of particle $i$ on dimension $d$ are updated as follows:

$$v_i^d = w\, v_i^d + c_1 r_1^d\, (pBest_i^d - x_i^d) + c_2 r_2^d\, (gBest^d - x_i^d), \tag{13}$$

$$x_i^d = x_i^d + v_i^d, \tag{14}$$

where $w$ is the inertia weight, $c_1$ is the cognition weight, $c_2$ is the social weight, and $r_1^d$ and $r_2^d$ are two random values uniformly distributed in $[0, 1]$ for the $d$th dimension. $pBest_i$ is the position with the best fitness found so far by the $i$th particle, and $gBest$ is the best position found by the swarm.

4.2. Discrete PSO for VN embedding

Because the basic PSO can only handle continuous optimization problems, the parameters and operations of the particles must be redefined to make PSO suitable for the optimal VN embedding problem, considering its discrete
characteristic. Although there are variants of PSO for discrete optimization problems, such as [20] and [21], they are problem specific and cannot be used directly to solve the optimal VN embedding problem. Therefore, we propose a discrete version of PSO for our problem. Label the virtual nodes and substrate nodes, respectively, and redefine the position and velocity parameters of the discrete PSO as follows:

- Position ($X$): the position vector $X_i = [x_i^1, x_i^2, \ldots, x_i^D]$ of a particle denotes a possible VN embedding solution, where $x_i^d$ is the order number of the substrate node selected from the candidate node list of the $d$th virtual node, and $D$ denotes the total number of virtual nodes in the VN request. Note that the position vector represents only the node mapping solution; whether the link mapping can be satisfied is unknown. In other words, the feasibility of the particle's position still needs to be checked, so we introduce a feasibility check procedure for the position, presented in the next subsection.
- Velocity ($V$): the velocity vector $V_i = [v_i^1, v_i^2, \ldots, v_i^D]$ guides the current VN embedding solution toward an even better solution, where $v_i^d$ is a binary value. If $v_i^d = 0$, the corresponding virtual node mapping decision in the current solution should be adjusted by reselecting another substrate node from its candidate node list; otherwise, the current choice is kept.

The subtraction, addition, and multiplication operations of the particles are redefined as follows:

- Subtraction ($\ominus$): $X_i \ominus X_j$ indicates the differences between the two VN embedding solutions $X_i$ and $X_j$. The value in a dimension is 1 if $X_i$ and $X_j$ have the same value in that dimension; otherwise it is 0. For example, $(1, 2, 3, 4, 5) \ominus (1, 5, 3, 4, 6) = (1, 0, 1, 1, 0)$.
- Addition ($\oplus$): $P_i V_i \oplus P_j V_j$ keeps $V_i$ with probability $P_i$ and $V_j$ with probability $P_j$ in each dimension, where $P_i + P_j = 1$. For example, $0.1(1, 0, 0, 1, 1) \oplus 0.9(1, 0, 1, 0, 1) = (1, 0, *, *, 1)$, where $*$ denotes a value uncertain to be 0 or 1; in this example, the first $*$ equals 0 with probability 0.1 and 1 with probability 0.9.
- Multiplication ($\otimes$): $x_i^d \otimes v_i^d$ indicates the position update process of the particles. The result is a new position corresponding to a new VN embedding solution. The operating rule is: if the value of $V_i$ in dimension $d$ equals 1, the value of $X_i$ in that dimension is kept; otherwise, the value of $X_i$ in that dimension is adjusted by reselecting another substrate node from its candidate list. Taking $(1, 2, 4, 3, 8) \otimes (1, 0, 1, 0, 1)$ as an example, the second and fourth virtual node embedding decisions should be adjusted.

On the basis of these redefinitions, the velocity and position of particle $i$ on dimension $d$ are determined by the following update equations:

$$v_i^d = P_1 v_i^d \oplus P_2 (pBest_i^d \ominus x_i^d) \oplus P_3 (gBest^d \ominus x_i^d), \tag{15}$$

$$x_i^d = x_i^d \otimes v_i^d, \tag{16}$$

where $P_1$ is the inertia weight, and $P_2$ and $P_3$ can be seen as the cognition and social weights, respectively. Typically, $P_1$, $P_2$, and $P_3$ are set to constant values satisfying $P_1 \le P_2 \le P_3$ and $P_1 + P_2 + P_3 = 1$.

4.3. Feasibility check procedure for the position of a particle

Because the position vector represents only the node mapping solution, it is just a possible VN embedding solution; whether the capacity and connectivity constraints of Equations (8) and (9) between the mapped virtual nodes can be satisfied still needs to be checked. Here, we introduce a procedure to check the feasibility of the current particle's position, corresponding to the link mapping stage of the VN embedding process. If the position is feasible, we obtain its link mapping solution and calculate its fitness value by Formula (6); otherwise, its fitness value
is set to infinity.

Checking the feasibility of the position of a particle is equivalent to finding a link mapping solution for the current VN request. Previous work [6] uses KSP for link mapping when path splitting is unsupported by the substrate; otherwise, the MCF algorithm is used. However, if we adopted the same link mapping algorithms, our algorithm could become very time consuming as a result of the iteration processes of PSO. Thus, we devise alternative link mapping algorithms for the two cases. (i) Without the path splitting feature, instead of searching the KSPs for increasing values of $k$, we find a single path that has enough bandwidth to map the corresponding virtual link: we first remove the links whose bandwidth cannot satisfy the virtual link's bandwidth constraint and then use the shortest path to find a link solution, as shown in Algorithm 1. (ii) When path splitting is supported by the substrate network, we propose GKSP for the link mapping stage: we find a corresponding shortest path and make use of it (updating the substrate network bandwidth resource), irrespective of whether it can fully satisfy the virtual link; this procedure repeats until enough bandwidth has been found, as shown in Algorithm 2.
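The two link-mapping procedures just described can be sketched as follows. This is our reading of the prose, not the authors' code: hop-count BFS stands in for the unspecified shortest path routine, and the dictionary-of-links graph encoding is an assumption.

```python
from collections import deque

def bfs_path(residual, src, dst):
    """Hop-count shortest path over undirected links with positive free bandwidth."""
    adj = {}
    for (i, j), bw in residual.items():
        if bw > 0:
            adj.setdefault(i, []).append(j)
            adj.setdefault(j, []).append(i)
    parent, q = {src: None}, deque([src])
    while q:
        node = q.popleft()
        if node == dst:                      # walk parents back to the source
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                q.append(nxt)
    return None

def map_link_no_splitting(links, src, dst, demand):
    """Algorithm 1 sketch: drop links below the demand, then one shortest path."""
    filtered = {e: bw for e, bw in links.items() if bw >= demand}
    return bfs_path(filtered, src, dst)

def map_link_gksp(links, src, dst, demand):
    """Algorithm 2 (GKSP) sketch: repeatedly take a shortest path, consume its
    bottleneck bandwidth, and stop once the whole demand is covered."""
    residual = {tuple(sorted(e)): bw for e, bw in links.items()}
    splits, remaining = [], demand
    while remaining > 0:
        path = bfs_path(residual, src, dst)
        if path is None:
            return None                      # substrate ran out of capacity
        edges = [tuple(sorted(e)) for e in zip(path, path[1:])]
        flow = min(min(residual[e] for e in edges), remaining)
        for e in edges:
            residual[e] -= flow
        splits.append((path, flow))
        remaining -= flow
    return splits

# Substrate of Figure 1's flavor: available bandwidth per link.
links = {("B", "C"): 10, ("B", "F"): 4, ("C", "E"): 8, ("F", "E"): 6}
print(map_link_no_splitting(links, "B", "E", 5))   # ['B', 'C', 'E']
print(map_link_gksp(links, "B", "E", 12))
```

In the no-splitting case a demand of 5 filters out the 4-unit link (B, F) and routes over B-C-E; with splitting, a demand of 12 is covered by 8 units on B-C-E plus 4 units on B-F-E.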
Our GKSP link mapping algorithm can be solved in $O(M + N \log N + k)$ time [22] in a substrate network with $N$ nodes and $M$ links, while the time complexity of the MCF algorithm is approximately $O(M^2 + kN)$ [23].

4.4. L2S2 preferred local selection strategy

In the basic PSO, it is common to generate or update the position parameter of the particles randomly within the corresponding ranges with equal probability during the evolutionary process. However, in the context of the VN embedding problem, overusing the bottleneck resources of the substrate network may leave the substrate resources unbalanced and fragmented and hinder the substrate network from accepting larger VN requests. Besides, because the possible VN embedding solutions are encoded by the position parameter without considering the link mapping stage, the connectivity constraints may be violated in the link mapping stage, slowing the convergence of our algorithm. We therefore develop an L2S2 preferred local selection strategy for the position initialization and update processes of the particles, to both achieve quick convergence and balance the substrate network load.

Embedding a virtual node on a substrate node with more bandwidth resource increases the possibility of satisfying the virtual node's connectivity constraints in the link mapping stage, because the more bandwidth a network node has, the higher its degree is likely to be. When the node mapping stage is taken into consideration, a node with more CPU resource is also preferred.
Therefore, similar to the previous work [6], we introduce a network node resource measure that reflects both the CPU resource and the bandwidth resource of a node at the same time:

$$NR(u) = CPU(u) \sum_{l \in L(u)} BW(l), \tag{17}$$

where, on the substrate network, $L(u)$ is the set of all adjacent links of $u$, $CPU(u)$ is the remaining CPU resource of $u$, and $BW(l)$ is the unoccupied bandwidth of link $l$; for a virtual node $u$, $CPU(u)$ and $BW(l)$ are the capacity requirements of the node. The main principle of the L2S2 preferred local selection strategy is that a virtual node with larger resource requirements has a higher probability of being mapped to a substrate node with larger available resources. The benefits of this strategy are twofold: it helps satisfy the resource requirements of the current VN request, accelerating the convergence of our algorithm, and it balances the substrate network load in the long run. For a VN request containing $n$ virtual nodes, the L2S2 preferred local selection strategy for position initialization (and position update) is presented in Algorithm 3.

4.5. VNE-UEPSO algorithm description

The VNE-UEPSO algorithm, shown in Algorithm 4, takes the substrate network and a VN embedding request as input and uses Formula (6) as the fitness function $f(X)$; its output is an approximate optimal VN embedding solution. The flowchart of the VNE-UEPSO algorithm is also presented in Figure 2.
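The redefined particle operations of Section 4.2 can be sketched as follows. This is a minimal illustration, not the authors' code: the list encodings and the toy candidate table are our assumptions.

```python
import random

def subtract(xi, xj):
    """X_i (-) X_j: 1 in each dimension where the two embeddings agree, else 0."""
    return [1 if a == b else 0 for a, b in zip(xi, xj)]

def add(p_i, vi, vj, rng=random):
    """P_i V_i (+) P_j V_j with P_j = 1 - P_i: per dimension, keep V_i's bit
    with probability P_i, otherwise V_j's bit."""
    return [a if rng.random() < p_i else b for a, b in zip(vi, vj)]

def multiply(xi, vi, candidates, rng=random):
    """X_i (x) V_i: keep a dimension where v = 1; where v = 0, reselect a
    different substrate node from that virtual node's candidate list."""
    new_x = []
    for d, (x, v) in enumerate(zip(xi, vi)):
        if v == 1:
            new_x.append(x)
        else:
            others = [c for c in candidates[d] if c != x]
            new_x.append(rng.choice(others) if others else x)
    return new_x

# Subtraction example from the paper:
print(subtract([1, 2, 3, 4, 5], [1, 5, 3, 4, 6]))  # [1, 0, 1, 1, 0]
```

The velocity update of Equation (15) is then a nested application of `add` over the inertia term and the two `subtract` results, and Equation (16) is one call to `multiply`; an L2S2-style variant would bias the reselection in `multiply` toward candidates with larger $NR$ instead of choosing uniformly.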
Electric Machines and Control, Vol. 27, No. 11, Nov. 2023

Hybrid modulation strategy for neutral point clamped three-level inverter based on modulation wave decomposition

WANG Jinping, JI Yaocong, ZHANG Qingyan, LIU Shengyu, JIANG Weidong
(School of Electrical Engineering and Automation, Hefei University of Technology, Hefei 230009, China)

DOI: 10.15938/j.emc.2023.11.008; CLC number: TM464; Document code: A; Article ID: 1007-449X(2023)11-0066-13
Received: 2022-10-28. Supported by the National Natural Science Foundation of China (52077050). Corresponding author: WANG Jinping.
About the authors: WANG Jinping (1984-), male, Ph.D., professor, doctoral supervisor; research interest: switching converter topologies and their control. JI Yaocong (1998-), ZHANG Qingyan (1995-), and LIU Shengyu (1997-), male, M.S.; research interest: power electronics and electric drives. JIANG Weidong (1976-), male, Ph.D., professor, doctoral supervisor; research interests: power quality control, grid-connected converter control, and motor control.

Abstract: For the neutral point clamped three-level inverter (NPC TLI), traditional carrier-based pulse width modulation (CBPWM) causes triple-frequency fluctuations of the neutral point voltage (NPV) at high modulation index and low power factor, while virtual space vector pulse width modulation (VSVPWM) achieves unconditional NPV balance but suffers from high switching loss. To address these issues, a hybrid modulation strategy for the NPC TLI is proposed, based on coordinating multiple constraint objectives through modulation wave decomposition. It is realized by comparing two modulation waves with a single carrier. Two types of constraints, on the switching count and on unconditionally balanced neutral point voltage,
were analyzed.In order to reduce the switching loss under the condition of ensuring NPV balance,the proposed hybrid modulation strategy with the above two con-straints was obtained.On this basis,active NPV control methods under these two constraints were dis-cussed.In addition,the performance indicators of the proposed strategy and other strategies were com-pared.The hybrid modulation strategy can achieve NPV balance under full modulation index and full power factor,and has reduced switching losses compared to VSVPWM.Experimental results verify the feasibility and superiority of the proposed strategy.Keywords:neutral point clamped three-level inverter;modulation wave decomposition;active control of NPV;switching loss;modulation strategy0㊀引㊀言中点钳位型三电平逆变器(neutral point clamped three-level inverter,NPC TLI)是在高功率场合下应用最为广泛的多电平逆变器之一,其具有总谐波失真低㊁开关器件电压应力低和转换效率高等优点㊂在光伏并网㊁交流电机调速和电能质量综合治理等领域,NPC TLI都发挥着重要的作用[1-5]㊂中点电压(neutral point voltage,NPV)平衡是研究NPC TLI的关键问题[6]㊂在实际应用中,由于上下电容不一致或电容器充放电速率不对称,NPV会产生一定的波动,包括直流分量和交流分量两部分㊂这会导致功率器件的电压应力升高,滤波电感产生低频电流谐波,以及母线电容器的使用寿命降低等一系列问题,系统的可靠性将大大降低[7]㊂在直流侧采用大电容能抑制中点波动,但系统体积会增大㊂要维持NPV平衡,可在直流侧电容并联独立的直流电压源,这无疑增加了成本[8]㊂为了提高功率密度和降低成本,基于软件的解决方案更具有优势㊂多年来,人们对调制策略进行了广泛的研究㊂现今主要通过各种脉冲宽度调制(pulse width modu-lation,PWM)策略维持NPV平衡,主要可以划分为载波脉宽调制(carrier based PWM,CBPWM)和空间矢量脉宽调制(space vector PWM,SVPWM),且二者具有等效性[9-11]㊂由于实现简便,CBPWM策略在工业应用中比SVPWM策略更受欢迎[12]㊂CBPWM 策略有单调制波双载波和双调制波单载波两种方式,传统的单调制波双载波方式所生成的开关序列仅能同时产生0㊁1或1㊁2两个电平,而双调制波单载波方式可生成的开关序列能同时产生0㊁1㊁2三个电平㊂通过对双调制波的多种约束能实现不同的调制策略㊂传统的CBPWM不能在全功率因数和全调制度范围内实现NPV的平衡,在高调制度低功率因数的情况下,NPV会产生低频波动㊂文献[13]提出虚拟空间矢量PWM(virtual space vector PWM,VS-VPWM)的方法,该方法用冗余小矢量控制NPV的偏移,能无条件实现NPV平衡,但会增加功率管的开关次数,且算法比较复杂㊂由于上㊁下电容不相等,死区时间等非理想因素的影响,各种调制策略下的NPV还可能会缓慢变化㊂因此,有必要加以NPV 主动控制㊂在文献[14-18]中,NPV主动控制是通过给三相同时注入合适的零序电压(zero sequencevoltage,ZSV)来实现,这可使流入或者流出中点的电流减小甚至达到0,从而平衡NPV㊂NPC TLI还有一个重要的研究问题是开关损耗,低的开关损耗可以使NPC TLI运行效率提升,同时还能降低冷却系统的成本[19]㊂开关次数和导通电流是开关损耗最主要的2个影响因素㊂VSVPWM 
由于在一个周期内有4次开关动作,会大大增加开关损耗㊂相对VSVPWM,传统的CBPWM在一个周期内有3次开关动作,故开关损耗更小㊂在文献[20]和文献[21]中,通过三相各电平占空比的计算获取双调制波,该方法未知变量多,计算过程繁琐㊂本文对调制波分解后的双调制波进行多种约束,更简便地获取双调制波,从而实现不同的调制策略㊂基于简单的调制波分解算法,提出一种混合调制策略,保证NPV在全调制度㊁全功率因数范围内平衡的同时,不过分增大开关损耗㊂1㊀NPC TLINPC TLI的拓扑如图1所示,每一相由4个开关器件和2个钳位二极管组成㊂C1和C2为上㊁下电容㊂直流母线电压u DC为上下电容电压之和㊂电容电压均衡时,u C1=u C2=u DC/2㊂选择电容中点作为参考点㊂导通的开关器件和输出电平之间的关系见表1㊂表1㊀导通的开关管与输出电平之间的关系Table1㊀Relationship between the gated on switch and the output level导通的开关管输出电平S1㊁S22S2㊁S31S3㊁S4076第11期王金平等:基于调制波分解的中点钳位型三电平逆变器的混合调制策略图1㊀NPC TLI 拓扑Fig.1㊀Topology of NPC TLI㊀㊀在稳态时,NPC TLI 输出的三相电压u x (x =a,b,c)和电流i x 可以表示为:㊀u a =(u DC /2)m cos(ωt );u b =(u DC /2)m cos(ωt -2π/3);u c =(u DC /2)m cos(ωt -4π/3)㊂üþýïïï(1)㊀i a =I m cos(ωt -ϕ);i b =I m cos(ωt -2π/3-ϕ);i c =I m cos(ωt -4π/3-ϕ)㊂üþýïïï(2)其中:ωt 为相电压的相位角,m ɪ[0,1.1547]为调制度;I m 为相电流的峰值;ϕ为功率因数角㊂NPC TLI 一般采用空间矢量调制或载波调制㊂已有很多文献表明,这两种调制方法是等效的㊂CBPWM 通过调制波与载波比较生成相应的PWM 序列㊂与其它PWM 调制策略相比,CBPWM 更容易实现㊂CBPWM 一般使用三角载波,现在较常用的是用单调制波和双载波比较生成相应的PWM 序列,如图2所示㊂图中:S out,x 是某一相输出的PWM 序列;carrier1是上载波;carrier2是下载波㊂调制函数如下:S out,x=2,u x >carrier1;1,carrier1>u x >carrier2;0,u x <carrier2㊂ìîíïïï(3)可以看出,在一个开关周期内,这种比较方式可确保每相输出电平0㊁1或1㊁2,使得每相发生2次开关动作㊂图2㊀双载波单调制波方式下生成的PWM 序列Fig.2㊀PWM sequence generated by dual carrier andsingle modulation wave2㊀调制波分解算法为了便于分析和简化计算,对NPC TLI 输出的三相电压u a ㊁u b ㊁u c 排序:u max =max(u a ,u b ,u c );u mid =mid(u a ,u b ,u c );u min =min(u a ,u b ,u c )㊂üþýïïï(4)式中:u max ㊁u mid ㊁u min 分别为最大电压㊁中间电压和最小电压;i max ㊁i mid ㊁i min 分别为它们对应的相电流㊂将x 相的调制波u x 分解为双调制波u ᶄx 和u ᵡx ,即u x +u DC /2+u ZSV =u ᶄx +u ᵡx ,u ᶄx ɤu ᵡx ,u ᶄx ㊁u ᵡx ɪ[0,u DC /2]㊂(5)式中u ZSV 为向三相同时注入的ZSV㊂x 相分解后的双调制波u ᶄx 和u ᵡx 分别与同一载波比较,得到2个独立的PWM 序列S ᶄx 和S ᵡx ,将S ᶄx 和S ᵡx 相加,生成最终所需要的PWM 序列S x ,如图3所示㊂这种方式所产生的PWM 序列会同时出现0㊁1㊁2三个电平,这将导致逆变器的开关次数增多㊂因此必须对u ᶄx 和u ᵡx 加以约束㊂从图中可以看出,x 相输出序列中各个电平的占空比为:d x ,2=2u ᶄx u DC ;d x ,1=2u ᵡx u DC -2u ᶄx u DC ;d x ,0=1-2u ᵡxu DC㊂(6)图3㊀调制波分解Fig.3㊀Modulation wave decomposition根据文献[22],将一个调制波分解为2个调制波后,线电压可计算为:u max -u mid =(d max,2u DC +d max,1u DC2+d max,00)-(d mid,2u DC +d mid,1u 
DC 2+d mid,00)=(u ᶄmax +u ᵡmax )-(u ᶄmid +u ᵡmid );u mid -u min=(d mid,2u DC +d mid,1u DC2+d mid,00)-(d min,2u DC +d min,1uDC 2+d min,00)=(u ᶄmid +u ᵡmid )-(u ᶄmin +u ᵡmin )㊂üþýïïïïïïïïïïï(7)86电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀可知,输出序列没有改变线电压关系㊂事实上,式(5)仅给出了含有参数u ZSV的3个方程,不足以确定每相的双调制波㊂每相的双调制波的求解还需要其他的附加方程㊂众所周知,NPC TLI要尽量维持电容电压平衡㊂其基本原则是,若在一个开关周期起始时刻电容电压均衡,在该开关周期的结束时刻电容电压仍应维持平衡㊂三相电流在一个开关周期内注入中点的电流之和为i0,即i0=i max d max,1+i mid d mid,1+i min d min,1㊂(8)假设一个开关周期内三相电流近似不变,NPV 不变的条件是在一个开关周期内注入中点的电流之和为0,即i0=0㊂此时,代入式(6)给出的占空比,式(8)可以改写为i max uᵡmax+i mid uᵡmid+i min uᵡmin=i max uᶄmax+i mid uᶄmid+ i min uᶄmin㊂(9)上式表明,维持电容电压平衡的条件等价为调制波uᶄx和uᵡx进行功率交换时瞬时功率和相等㊂2.1㊀开关次数的约束如前所述,双调制波单载波的方式所产生的PWM序列可以同时出现0㊁1㊁2三个电平,这将导致逆变器的开关次数增多㊂为了减少开关次数,必须对uᶄx和uᵡx进行约束㊂若uᶄx=0,所产生的PWM序列可以仅出现0㊁1两个电平;若uᵡx=u DC/2,所产生的PWM序列可以仅出现1㊁2两个电平㊂不妨做如下约束:uᶄx=0,u x<0;uᵡx=u DC/2,u x>0㊂}(10)情形1:u max>0㊁u mid>0㊁u min<0㊂在这种情况下,双调制波uᶄx和uᵡx的约束如下:uᵡmax=u DC/2; uᵡmid=u DC/2; uᶄmin=0㊂üþýïïïï(11)将式(11)代入式(5)中可知:uᶄmax=u max+u ZSV;uᶄmid=u mid+u ZSV;uᵡmin=u min+u DC2+u ZSV㊂üþýïïïï(12)将式(11)和式(12)代入式(9)中,可以解得ZSV为u ZSV=i max u max+i mid u mid-i min u min2i min㊂(13)综合式(11)㊁式(12)和式(13),可知分解后的6个调制波为:uᶄmax=u max+i max u max+i mid u mid-i min u min2i min,uᵡmax=u DC2;uᶄmid=u mid+i max u max+i mid u mid-i min u min2i min,uᵡmid=u DC2;uᶄmin=0,uᵡmin=u min+u DC2+i max u max+i mid u mid-i min u min2i min㊂üþýïïïïïïïïïïïïïïï(14)情形2:u max>0㊁u mid<0㊁u min<0㊂在这种情况下,双调制波uᶄx和uᵡx的约束如下:uᵡmax=u DC/2;uᶄmid=0;uᶄmin=0㊂(15)类比情形1,可算得此情形下ZSV为u ZSV=i mid u mid+i min u min-i max u max2i max㊂(16)同理,分解后的6个调制波为:uᶄmax=u max+i mid u mid+i min u min-i max u max2i max,uᵡmax=u DC2;uᶄmid=0,uᵡmid=u mid+u DC2+i mid u mid+i min u min-i max u max2i max; uᶄmin=0,uᵡmin=u min+u DC2+i mid u mid+i min u min-i max u max2i max㊂üþýïïïïïïïïïïïïïï(17)值得注意的是,尽管基于开关次数的约束可以确定出每相的双调制波,但是必须考虑到uᶄx和uᵡxɪ[0,u 
DC/2]这一约束条件㊂当考虑到这一约束条件时,将可能无法完全按照式(13)和式(16)注入ZSV,这将导致NPV出现低频波动㊂在基于开关次数的约束下,从式(14)和式(17)得到的双调制波与电流的瞬时值相关,即与功率因数有关㊂在mɪ[0,1.1547],ωtɪ[0,2π]范围内,对于不同的功率因数角,图4展示了在开关次数的96第11期王金平等:基于调制波分解的中点钳位型三电平逆变器的混合调制策略约束下,NPV 可以在一个开关周期内平衡的区域,其中包括情形1与情形2的区域,空白区域为NPV 不能在一个开关周期内实现平衡的区域,即式(14)和式(17)所得的双调制波无法满足u ᶄx 和u ᵡx ɪ[0,u DC /2]的约束条件㊂图4㊀不同功率因数下的可平衡区域Fig.4㊀Balanced regions under different power factors在基于开关次数的约束下,情形1与情形2以π/3为周期交替切换㊂当功率因数角ϕ=0时,除了调制度非常接近1.1547的很小区域外,其余区域都使NPV 在一个开关周期内平衡㊂随着功率因数的降低,情形1与情形2的适用区域逐渐变小㊂当功率因数角ϕ=π/2时,情形1与情形2的适用区域仅在调制度为0.58以下㊂2.2㊀NPV 无条件平衡约束为了达到NPV 的全功率因数范围平衡,应放宽对双调制波的约束㊂仅约束u max 和u min :u ᵡmax =u DC /2;uᶄmin=0㊂}(18)则式(9)可以改写为i max (u DC2-u ᶄmax )+i mid (u ᵡmid -u ᶄmid )+i min u ᵡmin =0㊂(19)考虑到i max +i mid +i min =0,则当下述条件成立时,即可使式(19)被满足u DC2-u ᶄmax =u ᵡmid -u ᶄmid =u ᵡmin ㊂(20)结合式(5)和式(20),求解出三相的双调制波为:u ᶄmax =12(u max -u min ),u ᵡmax =12u DC ;u ᶄmid=12(u mid -u min ),u ᵡmid =12u DC -12(u max -u mid );u ᶄmin=0,u ᵡmin =12u DC -12(u max -u min )㊂üþýïïïïïïïïïï(21)从式(21)可以看出,只要u max -u min ɤu DC ,u ᶄx 和u ᵡx ɪ[0,u DC /2]这一约束条件总能被满足㊂不同于式(14)和式(17),式(21)所给出的双调制波与电流无关㊂因此,这种方法不受功率因数的限制㊂图5给出了NPV 无条件平衡约束的实现方法㊂每相双调制波与同一载波比较后生成2个PWM 子序列,将这2个PWM 子序列相加得到每一相最终的PWM 序列㊂可以看出,在NPV 无条件平衡约束下,最大相的PWM 序列由电平1㊁2组成,中间相的PWM 序列由电平0㊁1㊁2组成,最小相的PWM 序列由电平0㊁1组成㊂相比开关次数的约束,NPV 无条件平衡约束在一个周期内的开关次数增加了一次,增大了开关损耗㊂图5㊀调制波分解的开关序列Fig.5㊀PWM sequence of modulation wave decomposition2.3㊀两种约束的混合调制策略在基于开关次数的约束下,并非所有功率因数下都可以实现NPV 在一个开关周期内平衡,而在NPV 无条件平衡约束下,又会增大开关次数,从而07电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀增加开关损耗㊂因此,可以采用2种约束的混合调制策略,保证NPV 在一个开关周期内平衡的前提下尽可能减小开关损耗㊂在图4中,与开关次数的约束适用区域互补,空白区域即为NPV 无条件平衡约束的适用区域㊂图6给出了混合调制策略的流程图㊂可以看出,混合调制策略中最为关键的一步就是判定开关次数的约束下能否使求解出的双调制波满足u ᶄx 和u ᵡx ɪ[0,u DC /2]㊂在混合调制策略中,应尽量减少NPV 无条件平衡约束的使用次数㊂仅在基于开关次数的约束无法使用时,才采用NPV 无条件平衡约束,这样可以保证实现NPV 在一个开关周期平衡的同时不过分增大开关损耗㊂图6㊀混合调制策略的流程图Fig.6㊀Flowchart of hybrid modulation strategy图7给出了基于开关次数的约束在混合调制策略中的占比情况㊂可以看出,低调制度下(m <0.58)可以全部采用基于开关次数的约束㊂而调制度较高(m >0.58)时,基于开关次数的约束在混合调制策略中的占比会随着功率因数的变化而剧烈变化㊂当功率因数大于0.866且m 
<0.928,就可以使开关次数的约束占比达到100%㊂当功率因数降低到0时,混合调制策略中,开关次数的约束占比将迅速降为0㊂图7㊀开关次数的约束在混合调制策略中的占比情况Fig.7㊀Proportion of constraint based on switchingtimes in hybrid modulation strategy3㊀NPV 主动控制方法在基于开关次数的约束下,利用式(14)和式(17)计算双调制波时,需要用到下一开关周期的三相电流,而此电流还是未知的,如果用上一开关周期的三相电流替代,则不可避免的带来误差,导致NPV 逐渐偏移㊂在NPV 无条件平衡约束下,双调制波的计算虽然与三相电流无关㊂但一些非理想因素,例如电容容值的微小偏差㊁初始状态下的电容电压偏差㊁死区的插入等,都可能导致NPV 逐渐偏移㊂如果对NPV 偏移不加以主动控制,可能使电容电压或功率器件的电压应力超过其允许值,导致装备保护或发生较为严重的故障㊂因此有必要探讨这两种约束下的NPV 主动控制方法㊂若检测到上下电容电压偏差Δu NP =u C2-u C1㊂要改变NPV,必须向中点注入或抽取电流㊂NPV 主动控制的关键就在于对中点电流的控制㊂为了使NPV 重新回复至平衡状态,一个载波周期内需要向中点抽取的平均电流为i 0=Δu NP (C 1+C 2)/T S ㊂(22)式中T S 为开关周期㊂代入式(6)给出的占空比,式(8)可以改写为i 0=2u DC[i max (u ᵡmax -u ᶄmax )+i mid (u ᵡmid -u ᶄmid )+i min (u ᵡmin -u ᶄmin )]㊂(23)3.1㊀基于开关次数约束下的NPV 主动控制对于基于开关次数约束而言,三相的双调制波每相都有一个调制波为0或u DC /2,另一个调制波是随输出电压而改变㊂通过对后者注入ZSV 实现NPV 主动控制㊂在情形1时,由于u ᵡmax =u DC /2㊁u ᵡmid =u DC /2㊁u ᶄmin =0,故只能对u ᶄmax ㊁u ᶄmid ㊁u ᵡmin 注入ZSV,注入ZSV 后,6个调制波电压为:u ᶄmax =u max +u ZSV ,u ᵡmax =u DC2;u ᶄmid =u mid +u ZSV ,u ᵡmid =u DC2;u ᶄmin =0,u ᵡmin =12u DC+u min +u ZSV ㊂üþýïïïïïï(24)将式(24)代入式(23)可得u ZSV =2i max u max +2i mid u mid -2i min u min +i 0u DC4i min =u ZSV1+i 0u DC4i min㊂(25)17第11期王金平等:基于调制波分解的中点钳位型三电平逆变器的混合调制策略值得注意的是,ZSV 由两部分组成,第一部分是为了在基于开关次数约束下实现NPV 在一个开关周期平衡所注入的ZSV,等同于式(13)计算的ZSV;第二部分是为了消除NPV 偏移而注入的ZSV㊂类似情形1,在基于开关次数约束下的情形2,实现一个开关周期内NPV 平衡且消除偏移的ZSV 为u ZSV =2i mid u mid +2i min u min -2i max u max -i 0u DC4i max㊂(26)3.2㊀NPV 无条件平衡约束下的NPV 主动控制对于NPV 无条件平衡约束而言,注入ZSV 的方法同样适用㊂不同的是,该约束下u ᵡmax =u DC /2㊁u ᶄmin =0,而u ᶄmax ㊁u ᶄmid ㊁u ᵡmid 和u ᵡmin 都可以改变㊂这意味着除了u ᶄmax 和u ᵡmin ,u ᶄmid 和u ᵡmid 可任选一个注入ZSV㊂此外,还能采用向电压中间相注入差模电压的方式㊂这便有了3种情形,以下一一对其分析㊂情形1:对u ᶄmax ㊁u ᵡmid ㊁u ᵡmin 注入ZSV㊂注入ZSV 后,三相的6个调制波电压为:u ᶄmax =12(u max -u min )+u ZSV ,u ᵡmax =12u DC ;u ᶄmid=12(u mid -u min ),u ᵡmid =12u DC -12(u max -u mid )+u ZSV ;u ᶄmin=0,u ᵡmin =12u DC -12(u max -u min )+u ZSV ㊂üþýïïïïïïïïïï(27)将式(27)代入式(23)可得u ZSV=-i 0u DC 4i max㊂(28)情形2:对u ᶄmax ㊁u ᶄmid ㊁u ᵡmin 注入ZSV㊂注入ZSV 后,三相的6个调制波电压为:u ᶄmax =12(u max -u min )+u ZSV ,u ᵡmax =12u DC ;u ᶄmid=12(u mid -u 
min )+u ZSV ,u ᵡmid =12u DC -12(u max -u mid );u ᶄmin=0,u ᵡmin =12u DC -12(u max -u min )+u ZSV ㊂üþýïïïïïïïïïï(29)将式(29)代入式(23)可得u ZSV =i 0u DC4i min㊂(30)情形3:对u ᶄmid 和u ᵡmid 注入差模电压㊂注入差模电压后,三相的6个调制波电压为:u ᶄmax =12(u max -u min ),u ᵡmax =12u DC ;u ᶄmid=12(u mid -u min )-u dmv ,u ᵡmid =12u DC -12(u max -u mid )+u dmv ;u ᶄmin=0,u ᵡmin =12u DC -12(u max -u min )㊂üþýïïïïïïïïïï(31)式中u dmv 为差模电压㊂将式(31)代入式(23)可得u dmv =i 0u DC4i mid㊂(32)值得注意的是,NPV 无条件平衡约束下所得的ZSV 或差模电压只包含为了消除NPV 偏移的部分㊂以上3种情形的调节能力有所不同,在实际应用中可以选择调节能力最强的一种[20]㊂4㊀性能分析本节主要从开关损耗方面评价本文提出的混合调制策略,并与现有的方法对比㊂不同PWM 策略下的导通损耗大致相等,但开关损耗相差很大㊂因此,在分析不同调制策略的损耗时,开关损耗占主导地位,不考虑传导损耗㊂为了比较全调制度和全功率因数范围内的开关损耗,在不同调制度下,调节负载使相电流幅值保持不变㊂这样,开关损耗仅与调制方案㊁负载功率因数和调制算法的开关次数有关㊂由文献[23]知,在一个基波周期内,下式用于评估开关损耗,即P SL =ðn =1, ,N[i a (n )k a (n )+i b (n )k b (n )+i c (n )k c (n )]㊂(22)在全调制度和全功率因数范围内,分别计算混合调制策略㊁CBPWM 和VSVPWM 在一个基波周期内的总开关损耗,分别命名为P Hyb ㊁P CB 和P VSV ㊂值得注意的是,在比较不同调制算法的开关损耗时,它们之间的比率比具体值更重要㊂图8分别给出了P VSV /P CB ㊁P Hyb /P CB 和P Hyb /P VSV 的值㊂从图8(a)可以看出,VSVPWM 的开关损耗在全调制度和全功率因数范围内始终高于CBP-WM㊂当ϕ=π/2和3π/2时,VSVPWM 的开关损耗27电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀可以达到CBPWM 的1.5倍左右㊂原因是u mid 对应的相电流最大,且该相有两次开关动作,开关损耗大大增加㊂当ϕ=0和π时,VSVPWM 的开关损耗仍达到CBPWM 的1.1倍左右㊂较高的开关损耗是VSVPWM 的主要缺点㊂图8(b)给出了P Hyb /P CB 在全调制度和全功率因数范围内的值㊂可以看出,当调制较低时(m <0.58),P Hyb /P CB 始终为1,即开关损耗与CBPWM 相同㊂当调制较高时(m >0.58),开关损耗会随着功率因数的降低而增加㊂这是因为随着功率因数的降低,NPV 无条件平衡约束比例逐渐增加,因此开关损耗增加㊂当功率因数为0时,即ϕ=π/2和3π/2,P Hyb /P CB 的值最高,可达1.5㊂这是因为几乎整个开关周期内都采用NPV 无条件平衡约束㊂图8㊀不同调制策略下的开关损耗Fig.8㊀Switching losses under different modulationstrategies图8(c)给出了全调制度和全功率因数范围内的P Hyb /P VSV 值㊂可以看出,P Hyb /P VSV 的值在全范围内都小于l㊂说明混合调制策略与VSVPWM 相比,不仅可以平衡全范围内的NPV,而且可以降低开关损耗㊂5㊀实验结果为了验证本文提出的混合调制算法,搭建了NPC TLI 的实验平台,其中交流源通过不控整流桥为逆变器提供直流侧电压,如图9所示㊂实验系统的主要参数见表2㊂本文实验部分分析了CBPWM㊁VSVPWM 和混合调制策略㊂实验包括稳态实验和动态实验2个部分,并分析了实验结果㊂图9㊀NPC TLI 的实验平台Fig.9㊀Experimental platform of NPC TLI表2㊀实验参数Table 2㊀Experimental parameters㊀㊀参数数值直流侧电压/V 200直流侧上下电容/μF 1000负载1/Ω2e jπ/12负载2/Ω2e j5π/12负载3/Ω6e jπ/12负载4/Ω6e j5π/12负载5/Ω12e jπ/12开关频率/kHz 8基波频率/Hz505.1㊀稳态实验图10~图12给出了CBPWM㊁VSVPWM 
和本文37第11期王金平等:基于调制波分解的中点钳位型三电平逆变器的混合调制策略提出的混合调制策略在不同调制度和负载下的稳态实验结果,包括直流侧上下电容电压㊁双调制波㊁相电压相电流和线电压㊂在此次实验中,使用表2中的负载1~负载4来保持相电流的幅值不变㊂图10㊀CBPWM 的稳态实验Fig.10㊀Steady state experiment ofCBPWM图11㊀VSVPWM 的稳态实验Fig.11㊀Steady state experiment of VSVPWM图10给出了CBPWM 下的稳态实验结果㊂当m =0.3与ϕ=π/12,ϕ=5π/12时,NPV 几乎没有波动㊂表明了低调制度,在CBPWM 下,NPV 可以被47电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀很好的控制,而无关乎功率因数㊂当m=0.9与ϕ=π/12时,NP电压波动很小;当m=0.9和ϕ=5π/ 12时,NPV有明显的三倍频波动,幅值接近直流侧电压的5%㊂CBPWM的双调制波如图10中的uᶄa和uᵡa所示,在低功率因数的时候会发现,双调制波的部分区间会达到允许的限定值,这是由于相电流滞后于相电压接近90ʎ,导致所需注入的零序电压的幅值过大造成的㊂在高调制度时,这一现象尤为明显,这意味着对NPV的控制能力变差㊂图11给出了在VSVPWM的实验结果,可以看出NPV在全调制度㊁全功率因数下都得到很好的控制,几乎没有波动㊂由于VSVPWM其本身的NPV 平衡能力就很好,在加入NPV主动控制后,对双调制波的改变很小㊂观察VSVPWM的双调制波uᶄa和uᵡa,在不同调制度和功率因数下双调制波的波形几乎相同㊂图12给出了在混合调制策略的实验结果㊂可以看出NPV在全调制度㊁全功率因数下都维持了平衡㊂混合调制策略下的双调制波如图12中的uᶄa和uᵡa所示,当m=0.3与ϕ=π/12和ϕ=5π/12,混合调制策略的双调制波较为接近CBPWM相应的双调制波;当m=0.9㊁ϕ=π/12时,CBPWM与VSVPWM 的双调制波在混合调制策略的双调制波体现;当m=0.9和ϕ=5π/12时,混合调制策略的双调制波较为接近VSVPWM的双调制波㊂对于开关损耗,在实验中使用Tektronix TPS2024示波器与专业功率分析模块测量㊂图13给出了不同调制策略下的开关损耗测量值㊂图10~图12的4种工况分别表示为(a)~(d)㊂不难发现,CBPWM和VSVPWM的开关损耗分别为最低和最高,混合调制策略的开关损耗总是介于二者之间㊂实验研究中各种调制策略在不同情况下的开关损耗之比与图8中的仿真分析基本吻合㊂结果证明,相比于VSVPWM,混合调制策略有降低开关损耗的效果㊂另外,表3还给出了不同调制策略下相电流i a 的THD㊂可以看出,CBPWM在中点电压存在三倍频波动时输出电流质量最差,在中点电压可以平衡时输出电流质量最好㊂VSVPWM和本文所提的混合调制策略能完全平衡中点电压,故其谐波含量相对较低㊂图12㊀混合调制策略的稳态实验Fig.12㊀Steady state experiments for hybridcontrol method57第11期王金平等:基于调制波分解的中点钳位型三电平逆变器的混合调制策略图13㊀不同调制策略下的开关损耗Fig.13㊀Switching losses under different PWM strategies 表3㊀不同调制策略下i a的THDTable3㊀THD of i a under different PWM strategies调制度和功率因数CBPWM/%VSVPWM/%混合调制策略/% m=0.3,ϕ=π/12 3.98 4.12 4.03m=0.3,ϕ=5π/12 4.49 4.23 4.36m=0.9,ϕ=π/12 4.01 4.25 4.12m=0.9,ϕ=5π/12 5.13 4.35 4.405.2㊀动态实验图14~图16给出了动态实验结果,包括NPV 恢复㊁负载突变和调制度突变㊂NPV恢复实验中使NPV在初始时存在20V的偏移,使用不同调制方法使得NPV恢复到平衡状态㊂图14给出了采用混合调制策略时的NPV恢复过程㊂可以看出,混合调制策略能使NPV从不平衡状态迅速恢复到平衡状态㊂图14㊀混合调制策略下的NPV恢复过程Fig.14㊀NPV recovery process under mixedmodulation图15㊀当m=0.9,ϕ=5π/12时,CBPWM和VSVPWM 的NPV恢复过程Fig.15㊀NPV recovery process under CBPWM and VS-VPWM while 
m=0.9,ϕ=5π/1267电㊀机㊀与㊀控㊀制㊀学㊀报㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀㊀第27卷㊀图16㊀混合调制策略下的动态过程Fig.16㊀Dynamic process under hybrid control method 图15给出了在m=0.9和ϕ=5π/12时,CBP-WM和VSVPWM的NPV恢复过程㊂可以看出,虽然CBPWM最终消除了在NPV上的直流偏移,但存在着较大的交流纹波㊂VSVPWM具有良好的NPV 恢复能力,NPV最终实现平衡㊂图16给出了在混合调制策略下调制度突然变化及负载突然变化的实验结果㊂负载突变实验使用负载3与负载5㊂容易看出,在动态过程中,NPV均被很好的控制㊂这说明混合调制策略在动态过程中对NPV具有很强的控制能力㊂6㊀结㊀论本文基于一种简单的调制波分解算法,对双调制波进行多种约束,以达到不同的调制目标㊂为了在全调制度㊁全功率因数范围内实现NPV平衡,本文提出了一种混合调制策略,实现了在NPV平衡的前提下开关损耗有所降低㊂与单纯采用某一约束相比,混合调制策略在NPV平衡方面和开关损耗方面具有一定的优势㊂理论分析和实验结果表明本文所提出的混合调制策略对NPV具有VSVPWM同样地控制效果,但开关损耗有所降低㊂参考文献:[1]㊀葛兴来,张晓华,岳岩.低载波比下三电平NPC逆变器同步SVPWM算法[J].电机与控制学报,2018,22(9):24.GE Xinglai,ZHANG Xiaohua,YUE parative study on synchronized space vector PWM for three level neutral point clamped VSI under low carrier ratio[J].Electrical Machines and Control,2018,22(9):24.[2]㊀KOURO S,MALINOWSKI M,GOPAKUMAR K,et al.Recentadvances and industrial applications of multilevel converters[J].IEEE Transaction on Industrial Electronics,2010,57(8):2553.[3]㊀胡存刚,胡军,马大俊,等.三电平光伏并网逆变器SHEPWM优化控制方法[J].电机与控制学报,2016,20(7):74.HU Cungang,HU Jun,MA Dajun,et al.Optimized control meth-od for three-level photovoltaic grid-connected inverter using SHEP-WM[J].Electrical Machines and Control,2016,20(7):74.[4]㊀JAMWAL P S,SINGH S,JAIN S.Three-level inverters for induc-tion motor driven electric vehicles[C]//20203rd International Conference on Energy,Power and Environment,March5-7, 2021,Meghalaya,India.2021:1.㊀[5]㊀姜卫东,王培侠,王金平,等.全范围内中点电位平衡的三电平变换器调制策略[J].电力系统自动化,2018,42(21):145.JIANG Weidong,WANG Peixia,WANG Jinping,et al.Modula-tion strategy for three-level converter with neutral point potential balance in full range[J].Automation of Electric Power Systems, 2018,42(21):145.[6]㊀向超群,陈春阳,韩丁,等.中点电位不平衡度反馈的三电平虚77第11期王金平等:基于调制波分解的中点钳位型三电平逆变器的混合调制策略。
arXiv:astro-ph/0104162v2 28 May 2001

Early-universe constraints on Dark Energy

Rachel Bean♯, Steen H. Hansen♭ and Alessandro Melchiorri♭
♯ Theoretical Physics, The Blackett Laboratory, Imperial College, Prince Consort Road, London, U.K.
♭ NAPL, University of Oxford, Keble Road, OX1 3RH, Oxford, UK

In the past years 'quintessence' models have been considered which can produce the accelerated expansion in the universe suggested by recent astronomical observations. One of the key differences between quintessence and a cosmological constant is that the energy density in quintessence, Ωφ, could be a significant fraction of the overall energy even in the early universe, while the cosmological constant will be dynamically relevant only at late times. We use standard Big Bang Nucleosynthesis and the observed abundances of primordial nuclides to put constraints on Ωφ at temperatures near T ∼ 1 MeV. We point out that current experimental data does not support the presence of such a field, providing the strong constraint Ωφ(MeV) < 0.045 at 2σ C.L. and strengthening previous results. We also consider the effect a scaling field has on CMB anisotropies using the recent data from Boomerang and DASI, providing the CMB constraint Ωφ ≤ 0.39 at 2σ during the radiation dominated epoch.

Introduction. Recent astronomical observations [1] suggest that the energy density of the universe is dominated by a dark energy component with negative pressure which causes the expansion rate of the universe to accelerate. One of the main goals for cosmology, and for fundamental physics, is ascertaining the nature of the dark energy [2].

In the past years scaling fields have been considered which can produce an accelerated expansion in the present epoch. The scaling field is known as "quintessence" and a vast category of "tracker" quintessence models have been created (see for example [3,4] and references therein), in which the field approaches an attractor solution at early times, with its energy density scaling as a fraction of the dominant component. The desired late time accelerated expansion behaviour is then set up independently of initial conditions, with the quintessential field dominating the energy content.

Let us remind the reader of the two key differences between the general quintessential model and a cosmological constant: firstly, for quintessence, the equation-of-state parameter wφ = p/ρ varies in time, usually approaching a present value w0 < −1/3, whilst for the cosmological constant it remains fixed at wΛ = −1. Secondly, during the attractor regime the energy density in quintessence Ωφ is, in general, a significant fraction of the dominant component whilst ΩΛ is only comparable to it at late times.

Future supernovae luminosity distance data, as might be obtained by the proposed SNAP satellite, will probably have the potential to discriminate between different dark energy theories [5]. These datasets will only be able to probe the late time behaviour of the dark energy component at redshift z < 2, however. Furthermore, since the luminosity distance depends on w through a multiple integral relation, it will be difficult to infer a precise measurement of w(z) from these datasets alone [6].

In this Letter we take a different approach to the problem, focusing our attention on the early time behaviour of the quintessence field, when the tracking regime is maintained in a wide class of models, and Ωφ is a significant (≥ 0.01, say) fraction of the overall density.

In particular, we will use standard big bang nucleosynthesis and the observed abundances of primordial nuclides to put constraints on the amplitude of Ωφ at temperatures near T ∼ 1 MeV. The inclusion of a scaling field increases the expansion rate of the universe, and changes the ratio of neutrons to protons at freeze-out and hence the predicted abundances of light elements.

The presence of this field in the radiation dominated regime also has important effects on the shape of the spectrum of the cosmic microwave background anisotropies. We use the recent anisotropy power spectrum data obtained by the Boomerang [7] and DASI [8] experiments to obtain further, independent constraints on Ωφ during the radiation dominated epoch.

There are a wide variety of quintessential models; we limit our analysis to the most general ones, with attractor solutions established well before nucleosynthesis. More specifically, we study a tracker model based on the exponential potential V = V0 e^{−λφ} [9]. If the dominant component scales as ρn = ρ0 (a0/a)^n, the field evolves along an attractor on which its energy density tracks the dominant component, with Ωφ = n/λ² (valid for λ² > n). However, the pure exponential potential, since it simply mimics the scaling of the dominant matter in the attractor regime, cannot produce an accelerated expanding universe in the matter dominated epoch.

Therefore, we focus our attention on a recently proposed model by Albrecht and Skordis (referred to hereafter as the AS model) [10], motivated by physics in the low-energy limit of M-theory, which includes a factor in front of the exponential, so that it takes the form V = V0 [(φ0 − φ)² + A] e^{−λφ}. The prefactor introduces a small minimum in the potential. When the field gets trapped in this minimum its kinetic energy disappears, triggering a period of accelerated expansion, which never ends if Aλ² < 1 [11].

FIG. 1. Top panel: Time behaviour of the fractional energy density Ωφ for the Albrecht and Skordis model together with the constraints presented in the paper. The parameters of the models are (assuming h = 0.65 and Ωφ = 0.65) λ = 3, φ0 = 87.089, A = 0.01 and λ = 8, φ0 = 25.797, A = 0.01. Bottom panel: Time behaviour for the overall equation of state parameter w_tot for the two models. Luminosity distance data will not be useful in differentiating the two models.

Constraints from BBN. In the last few years important experimental progress has been made in the measurement of light element primordial abundances. For the ⁴He mass fraction, Y_He, two marginally compatible measurements have been obtained from regression against zero metallicity in blue compact galaxies. A low value Y_He = 0.234 ± 0.003 [12] and a high one Y_He = 0.244 ± 0.002 [13] give realistic bounds. We use the high value in our analysis; if one instead considered the low value, the bounds obtained would be even stronger.

Observations in different quasar absorption line systems give a relative abundance for deuterium, critical in fixing the baryon fraction, of D/H = (3.0 ± 0.4)·10⁻⁵ [14]. Recently a new measurement of deuterium in the damped Lyman-α system was presented [15], leading to the weighted abundance D/H = (2.2 ± 0.2)·10⁻⁵. We use the value from [14] in our analysis; the use of [15] leads to an even stronger bound.

In the standard BBN scenario, the primordial abundances are a function of the baryon density η ∼ Ωb h² only. To constrain the energy density of a primordial field at T ∼ MeV, we modified the standard BBN code [16], including the quintessence energy component Ωφ. We then performed a likelihood analysis in the parameter space (Ωb h², Ωφ^BBN) using the observed abundances Y_He and D/H.

FIG. 2. 1, 2 and 3σ likelihood contours in the (Ωb h², Ωφ(1 MeV)) parameter space derived from ⁴He and D abundances.

In Fig. 2 we plot the 1, 2 and 3σ likelihood contours in the (Ωb h², Ωφ^BBN) plane. Our main result is that the experimental data for ⁴He and D does not favour the presence of a dark energy component, providing the strong constraint Ωφ(MeV) < 0.045 at 2σ (corresponding to λ > 9 for the exponential potential scenario), strengthening significantly the previous limit of [17], Ωφ(MeV) < 0.2. The reason for the difference is due to the improvement in the measurements of the observed abundances, especially for the deuterium, which now corresponds to approximately ΔN_eff < 0.2−0.3 additional effective neutrinos (see, e.g. [18]), whereas Ref. [17] used the conservative value ΔN_eff < 1.5.

One could worry about the effect of any underestimated systematic errors, and we therefore multiplied the error-bars of the observed abundances by a factor of 2. Even taking this into account, there is still a strong constraint Ωφ(MeV) < 0.09 (λ > 6.5) at 2σ.

Constraints from CMB. The effects of a scaling field on the angular power spectrum of the CMB anisotropies are several [10]. Firstly, if the energy density in the scaling quintessence is significant during the radiation epoch, this would change the equality redshift and modify the structure of the peaks in the CMB spectrum (see e.g. [19]).

Secondly, since the inclusion of a scaling field changes the overall content in matter and energy, the angular diameter distance of the acoustic horizon size at recombination will change. This would result in a shift of the peak positions on the angular spectrum. It is important to note that this effect does not qualitatively add any new features additional to those produced by the presence of a cosmological constant [20].

Third, the time-varying Newtonian potential after decoupling will produce anisotropies at large angular scales through the Integrated Sachs-Wolfe (ISW) effect. Again, this effect will be difficult to disentangle from the same effect generated by a cosmological constant, especially in view of the effect of cosmic variance and/or gravity waves on the large scale anisotropies.

Finally, the perturbations in the scaling field about the homogeneous solution will also slightly affect the baryon-photon fluid, modifying the structure of the spectral peaks. However, this effect is generally negligible.

FIG. 3. CMB anisotropy power spectra for the Albrecht-Skordis models with λ = 3, φ0 = 87.22 and A = 0.009 (long dash), λ = 8, φ0 = 32.329 and A = 0.01 (short dash) (both with Ωφ = 0.65 and h = 0.65), and a cosmological constant with ΩΛ = 0.65, Nν = 3.04 (full line) and Nν = 7.8 (dash-dot).

From these considerations, supported also by recent CMB analysis [21,22], we can conclude that the CMB anisotropies alone cannot give significant constraints on wφ at late times. If, however, Ωφ is significant during the radiation dominated epoch it would leave a characteristic imprint on the CMB spectrum. The CMB anisotropies can then provide a useful cross check on the bounds obtained from BBN.

To obtain an upper bound on Ωφ at last scattering, we perform a likelihood analysis on the recent Boomerang [7] and DASI [8] data. The anisotropy power spectrum from BOOMERanG and DASI was estimated in 19 bins between ℓ = 75 and ℓ = 1025 and in 9 bins, from ℓ = 100 to ℓ = 864, respectively. Our database of models is sampled as in [23]; we include the effect of the beam uncertainties for the BOOMERanG data and we use the publicly available covariance matrix and window functions for the DASI experiment. There are naturally degeneracies between Ωm and ΩΛ which are broken by the inclusion of SN1a data [1].

FIG. 4. Matter power spectra for 3 models in fig. 3. The predictions support those in the CMB spectra: the quintessence model in agreement with BBN, λ = 8 (short dash), mimics the ΩΛ = 0.65 spectrum with Nν = 3.0 (full line). The model with λ = 3 (long dash) is in clear disagreement with observations.

By finding the remaining "nuisance" parameters which maximise the likelihood, we obtain Ωφ < 0.39 at 2σ level during the radiation dominated epoch. Therefore, while there is no evidence from the CMB anisotropies for a presence of a scaling field in the radiation dominated regime, the bounds obtained are actually larger than those from BBN.

In Fig. 3 we plot the CMB power spectra for 4 alternative scenarios. The CMB spectrum for the model which satisfies the BBN constraints is practically indistinguishable from the spectrum obtained with a cosmological constant. Nonetheless, if the dark energy component during radiation is significant, the change in the redshift of equality leaves a characteristic imprint in the CMB spectrum, breaking the geometrical degeneracy. This is also found when considering non-minimally coupled scalar fields [24], even when the scalar is a small fraction of the energy density at last scattering. In the minimally coupled models considered here, this is equivalent to an increase in the neutrino effective number, i.e., altering the number of relativistic degrees of freedom at last scattering.

In Fig. 4 we have plotted the corresponding matter power spectra together with the decorrelated data points of Ref. [25]. As one can see, the model with λ = 3 is in disagreement with the data, producing less power than the model with λ = 8, with this last one still mimicking a cosmological constant. The lower power can still be explained by the increase in the radiation energy component, which shifts the equality to later times and the position of the turn-around in the matter spectrum towards larger scales. A bias factor could in principle solve the discrepancy between the λ = 3 model and the data; however, the matter fluctuations over a sphere of size 8h⁻¹ Mpc are σ8 ∼ 0.5, to be compared with the observed value σ8 = 0.56 Ωm^{−0.47} ∼ 0.9 [26]. Weak lensing observations [27] may open up further opportunities to constrain quintessence models even more tightly through the matter power spectrum.

Conclusions. We have examined BBN abundances and CMB anisotropies in a cosmological scenario with a scaling field. We have quantitatively discussed how large values of the fractional density in the scaling field Ωφ at T ∼ 1 MeV can be in agreement with the observed values of ⁴He and D, assuming standard Big Bang Nucleosynthesis. The 2σ limit Ωφ(1 MeV) < 0.05 severely constrains a wide class of quintessential scenarios, like those based on an exponential potential. For example, for the pure exponential potential the total energy today is restricted to Ωφ = 3/λ² ≲ 0.04.

[1] P.M. Garnavich et al., Ap. J. Letters 493, L53-57 (1998); S. Perlmutter et al., Ap. J. 483, 565 (1997); S. Perlmutter et al. (The Supernova Cosmology Project), Nature 391, 51 (1998); A.G. Riess et al., Ap. J. 116, 1009 (1998); B.P. Schmidt, Ap. J. 507, 46-63 (1998).
[2] N. Bahcall, J.P. Ostriker, S. Perlmutter and P.J. Steinhardt, Science 284, 1481 (1999) [astro-ph/9906463]; A.H. Jaffe et al., Phys. Rev. Lett. 86 (2001) [astro-ph/0007333].
[3] I. Zlatev, L. Wang & P. Steinhardt, Phys. Rev. Lett. 82, 896-899 (1999).
[4] P. Brax, J. Martin & A. Riazuelo, Phys. Rev. D 62, 103505 (2000).
[5] J. Weller, A. Albrecht, Phys. Rev. Lett. 86, 1939 (2001) [astro-ph/0008314]; T. Chiba, T. Nakamura, Phys. Rev. D 62 (2000) 121301 [astro-ph/0008175]; D. Huterer and M.S. Turner [astro-ph/0012510]; M. Tegmark [astro-ph/0101354]; V. Barger, D. Marfatia, Phys. Lett. B 498 (2001) 67-73 [astro-ph/0009256].
[6] I. Maor, R. Brustein and P.J. Steinhardt, Phys. Rev. Lett. 86, 6 (2001) [astro-ph/0007297].
[7] C.B. Netterfield et al., astro-ph/0104460.
[8] C. Pryke et al., astro-ph/0104489.
[9] J.J. Halliwell, Phys. Lett. B 185, 341 (1987); J. Barrow, Phys. Lett. B 187, 12 (1987); B. Ratra, P. Peebles, Phys. Rev. D 37, 3406 (1988); C. Wetterich, Astron. & Astrophys. 301, 321 (1995).
[10] A. Albrecht, C. Skordis, Phys. Rev. Lett. 84, 2076 (2000) [astro-ph/9908085]; C. Skordis, A. Albrecht, Phys. Rev. D, submitted [astro-ph/0012195].
[11] J. Barrow, R. Bean, J. Magueijo (2000) [astro-ph/0004321].
[12] K.A. Olive and G. Steigman, Astrophys. J. Suppl. Ser. 97, 49 (1995).
[13] Y.I. Izotov and T.X. Thuan, ApJ 500, 188 (1998).
[14] S. Burles and D. Tytler, ApJ 499, 689 (1998).
[15] M. Pettini and D.V. Bowen [astro-ph/0104474].
[16] L. Kawano, Fermilab-Pub-92/04-A (1992).
[17] P.G. Ferreira and M. Joyce, Phys. Rev. Lett. 79 (1997) 4740-4743 [astro-ph/9707286]; P.G. Ferreira and M. Joyce, Phys. Rev. D 58 (1998) 023503 [astro-ph/9711102].
[18] S. Burles, K.M. Nollett, J.N. Truran and M.S. Turner, Phys. Rev. Lett. 82, 4176 (1999) [astro-ph/9901157]; S. Esposito, G. Mangano, G. Miele and O. Pisanti, JHEP 0009 (2000) 038 [astro-ph/0005571].
[19] W. Hu, D. Scott, N. Sugiyama and M. White, Phys. Rev. D 52 (1995) 5498 [astro-ph/9505043].
[20] G. Efstathiou & J.R. Bond [astro-ph/9807103].
[21] A. Balbi, C. Baccigalupi, S. Matarrese, F. Perrotta and N. Vittorio, Astrophys. J. 547 (2001) L89 [astro-ph/0009432].
[22] J.R. Bond et al. [The MaxiBoom Collaboration], astro-ph/0011379.
[23] S. Esposito, G. Mangano, A. Melchiorri, G. Miele and O. Pisanti, Phys. Rev. D 63 (2001) 043004 [astro-ph/0007419].
[24] R. Bean [astro-ph/0104464].
[25] A.J.S. Hamilton & M. Tegmark [astro-ph/0008392].
[26] Viana P., Liddle A.R., 1999, MNRAS, 303, 535.
[27] Van Waerbeke L. et al. [astro-ph/0101511].
[28] A.D. Dolgov, S.H. Hansen, S. Pastor and D.V. Semikoz, Nucl. Phys. B 548 (1999) 385 [hep-ph/9809598].
[29] P. Di Bari and R. Foot, Phys. Rev. D 63 (2001) 043008 [hep-ph/0008258].
[30] S. Dodelson, M. Kaplinghat and E. Stewart, Phys. Rev. Lett. 85 (2000) 5276 [astro-ph/0002360].
[31] M. Zaldarriaga & U. Seljak, ApJ 469, 437 (1996).
arXiv:hep-th/9701112v2 15 Jul 1999

Universität Freiburg
THEP 96/17
hep-th/9701112

Algebraic Constraint Quantization and the Pseudo-Rigid Body∗

Michael Trunk
Universität Freiburg, Fakultät für Physik
Hermann-Herder-Str. 3, D-79104 Freiburg, Germany
e-mail: trunk@physik.uni-freiburg.de

Abstract

The pseudo-rigid body represents an example of a constrained system with a non-unimodular gauge group. This system is used as a testing ground for the application of an algebraic constraint quantization scheme which focusses on observable quantities, translating the vanishing of the constraints into representation conditions on the algebra of observables. The constraint which is responsible for the non-unimodularity of the gauge group is shown not to contribute to the observable content of the constraints, i.e., not to impose any restrictions on the construction of the quantum theory of the system. The application of the algebraic constraint quantization scheme yields a unique quantization of the physical degrees of freedom, which are shown to form a realization of the so-called CM(N)-model of collective motions.

I. Introduction

The pseudo-rigid body [1] represents an example of a first class constrained system with a complicated, non-Abelian and non-unimodular gauge group. In the present paper this system will be used as a testing ground for the application of an algebraic concept for the implementation of classical phase space constraints into the quantum theory, formulated heuristically in Ref. [2]. The aim of this algebraic concept is to circumvent the technical and conceptual problems which beset the currently used methods for the quantization of constrained systems, where one has to impose requirements upon the "quantization" of unphysical quantities, like the constraints or gauge conditions, which are subsequently used to project the physical states out of an extended Hilbert space. In contrast, as the connection between the classical and quantum descriptions of a physical system is closest on the algebraic level, the central idea of the algebraic concept consists in translating the "vanishing" of the constraints into conditions which are imposed upon observable quantities. This is achieved by treating the intrinsically defined observable content of the constraints (see below) as supplying representation conditions for the identification of the physical representation of the algebra of observables.

The example of the pseudo-rigid body has been chosen because Duval, Elhadad, Gotay, Śniatycki, and Tuynman [3,4] have shown that, when quantizing a first class constrained system with a non-unimodular gauge group H using the Dirac quantization procedure, the usual invariance condition for projecting the physical states out of an extended Hilbert space

    Ĵ_ξ Ψ_phys = 0    (1)

must be replaced by a condition of quasi-invariance

    Ĵ_ξ Ψ_phys = −(i/2) tr(ad_ξ) Ψ_phys.    (2)

The plan of the paper is as follows. In Sec. II the algebraic concept for the implementation of classical phase space constraints into the quantum theory is formulated. In Sec. III the pseudo-rigid body is introduced in an arbitrary number N of space dimensions. The identification of the algebra of observables leads to the CM(N)-model of collective motions. In Sec. IV the discussion is specialized to the dimensions N = 2 and N = 3, the observable content of the constraints is determined, and a short description of the free dynamics is given. Finally, the quantization of the system according to the algebraic concept, carried out in Sec. V, yields a unique identification of the physical representation of the algebra of observables.

II. Algebraic constraint quantization

A. Heuristic considerations

To begin with, quantization is understood as the construction of the quantum algebra of observables, starting from the classical algebra of observables, and the identification of that irreducible ∗-representation of it which provides the description of the physical system in question. The classical and quantum algebras of observables are, respectively, (graded, involutive) Poisson and commutator algebras which are generated polynomially by a set of fundamental observables. The physical representation of the quantum algebra of observables, as a commutator algebra, is distinguished by additional algebraic structures, like characteristic relations (with respect to the associative product) between its elements, the values of its Casimir elements, or extremal properties. Likewise, the physical realization of the classical algebra of observables, as a Poisson algebra, is distinguished by algebraic structures which correspond to those of the quantum algebra of observables.
Strictly speaking,the set of additional algebraic structures,required to determine its physical realization,should be considered as part of the definition of the algebra of observables;it should be chosen minimally,such that there is just one faithful representation of the so defined algebra of observables.Now the question arises,how the construction of the quantum theory can be affected by the presence of constraints,and how the condition of the“vanishing”of the constraints,i.e.the gauge invariance of the system,can be implemented into the quantum theory[5].Thefirst place,where the constraints could gain influence on the construction of the quantum theory is the characterization of the algebra of observables by the commutators of its elements.This influence can be excluded by requiring that the set of fundamental observables,which generates the classical algebra of observables, consists of proper observables,i.e.that the fundamental observables are gauge in variant and that their linear span does not contain any generators of pure gaugetransformations.For,in that case,assuming for the moment that the reduced phase space does exist,each element of the set of fundamental observables induces a non-vanishing function on the reduced phase space and the abstract Poisson algebras,generated polynomially by these two sets of functions,are isomorphic. 
Consequently,the two realizations of this algebra can only differ as regards the set of algebraic structures which are needed to characterize the algebra of observables beyond the commutation relations,and the differences must disappear upon the vanishing of the constraints.For this to be the case,there must exist dependencies between those elements of the algebra of observables,in terms of which these algebraic structures are formulated,and certain gauge invariant combinations of the constraints(e.g.functional dependencies,which,upon the vanishing of the constraints,induce relations between the elements of the algebra of observables). So,the only possibility for the constraints to work their way into the quantum theory is the existence of such dependencies,which represent the remaining gauge redundancy that has not been eliminated by the choice of the set of fundamental observables.In the present work it will be assumed that the representations of the quantum algebra of observables can be characterized by the values of its Casimir invariants alone[7](otherwise further algebraic structures must be treated in essentially the same way as the identities for the Casimirs are treated below).This is usually the case in physically relevant systems,especially if the fundamental observables form a Lie algebra.On this assumption the constraints can only have an impact on the construction of the quantum theory if they impose restrictions on the values of the Casimirs of the algebra of observables.That is,there must exist functional dependencies,in the classical theory,which allow to identify certain gauge invariant combinations of the constraints with Casimir elements of the algebra of observables,and the condition of the vanishing of the constraints induces identities which have to be fulfilled by the Casimir elements of the algebra of observables.These identities allow to translate the gauge invariance of the classical system into conditions on observable quantities,which 
will be referred to as the observable content of the constraints[8].By imposing correspondence requirements,the operator versions of the said gauge invariant combinations of the constraints can be identified with central elements of the quantum algebra of observables,which permit to formulate the observable content of the constraints,and thus to implement the constraints, on the level of quantum theory.B.Formulation of the conceptIn the following the most important steps for the realization of the algebraic con-cept for the implementation of classical phase space constraints into the quantum theory will be enumerated.This enumeration should not be misunderstood as aquantization program that can be applied algorithmically.Rather,it is meant as a statement of the principles which,in one form or the other,should apply to the quantization of an arbitrary constrained system(with the restriction stated above),but which has to be adapted in a case by case analysis to the concrete situation.In any case the starting point is a gauge invariant Hamiltonian system with phase space P and a set offirst–class constraints C i.The following notation will be employed:F=C∞(P)is the set of all smooth functions on P;C={f∈F|∃g i∈F:f= i g i C i}is the set of all weakly vanishing functions on P(under suitable regularity assumptions on the constraints C i,cf.[9]);P={f∈F|{f,C i}=0∀i}is the strong Poisson commutant of the constraints; G=C∩P is the set of gauge invariant combinations of the constraints;the elements of G will be referred to as the generalized Casimir elements of the constraints.•Observables:Choose a set˜O⊂P\G of fundamental observables,such that L(˜O)∩G={0}(where L(˜O)is the linear span of the elements of˜O).˜O must generate P weakly,i.e.the(closure,with respect to a suitable norm, of the)polynomial algebra over˜O must coincide with P up to equivalence (P∋f≈g∈P⇐⇒f−g∈G),at least locally,cf.[10].If there is a ∗–involution on P,˜O has to be closed with respect to it.Equipped with 
the Poisson bracket,L(˜O)should possibly form a Lie algebra (which,in that case,will also be denoted by˜O).Otherwise the Poisson brack-ets of the elements of˜O must be polynomial in the fundamental observables.The classical algebra of observables O is the Poisson algebra generated poly-nomially by˜O(or,more generally,its completion with respect to a suitable norm).The generators of the symmetry algebra S of the system should be contained in˜O[11],and the Hamiltonian must be a simple polynomial function of the elements of˜O.•The Observable Content of the Constraints:Determine the observable content of the constraints,i.e.the functional dependencies between the Casimir ele-ments of O and the constraints,and the conditions which are imposed upon the Casimir elements of O by the vanishing of the constraints.The set of generalized Casimir elements of the constraints,respectively of the corresponding Casimir elements of O,which enter into the functional dependencies,will be denoted by OC.•Identities:In addition,one has to determine the identities for the Casimir elements of O which do not involve the constraints.•Quantum Algebra of Observables:Starting from the classical Poisson alge-bra O one has to construct the commutator algebra QO,which represents the analogue of O on the level of quantum theory.The quantum algebra of observables QO is generated polynomially by a set Q˜O of fundamental observables.The elements of Q˜O are in one-to-one correspondence with the elements of˜O.The algebraic structure of QO is defined by the commutators between its elements,which can be obtained derivatively from the commu-tators between the fundamental observables.The latter have to be inferred from the Poisson brackets between the classical fundamental observables by imposing correspondence and consistency requirements.Thus,the observable linear(i.e.,Lie)or linearizable symmetries of the sys-tem should be preserved upon quantization,i.e.this part of the symmetry algebra of the quantum 
system should be isomorphic to that of the classical system(where the Poisson bracket has to be replaced by(−i/¯h)times the commutator).For the commutators of arbitrary elements of Q˜O this strict correspondence of commutators and Poisson brackets cannot be required a priori.Rather,(−i/¯h)times the commutators can differ from the Poisson brackets by quantum corrections which are compatible with the correspon-dence principle.The correction terms must be formed from elements of QO which possess a well-defined non-vanishing classical limit,multiplied by ex-plicit positive integer powers of¯h.Together with these explicit powers of ¯h they must carry the correct physical dimensions.The covariant trans-formation properties(with respect to the linear or linearizable part of the symmetry algebra)of the fundamental observables as well as of their com-mutators should be preserved.If˜O carries a gradation or∗–involution,these structures must also be implemented into Q˜O,and the commutator struc-ture must be compatible with them.Of course,if˜O is a Lie algebra,this should also be the case for Q˜O.Then,QO is the enveloping algebra of the Lie algebra Q˜O.This deformation process may not result in the occurrence of additional observables on the level of quantum theory which do not possess a classical analogue.•Correspondence of Observables:The expressions for specific quantum observ-ables,which correspond to given classical observables,and their commutation relations have to be determined along the same lines.•The Observable Content of the Constraints:The crucial step is the identifi-cation of those Casimir elements of QO which correspond to the elements of OC(the principles for their identification are the same as above),and of the conditions which express the observable content of the constraints on the level of quantum theory.•Identities:In the same way one has to determine the form of those identities for the Casimir elements of QO which correspond to the classical identities 
for the Casimir elements of O which do not involve the constraints.•Identification of the Physical Representation of QO:Having established the algebraic structure of QO,the physical representation is that irreducible ∗–representation of QO,in which the conditions,which express the observ-able content of the constraints,and the identities for the remaining Casimir elements of QO are satisfied.Note that we do not have to introduce an extended Hilbert space(where the term“extended Hilbert space”refers to any Hilbert space containing unphysical states).Of course,the use of an extended Hilbert space may facilitate the construc-tion of representations of QO.But then the Hilbert space will not be irreducible with respect to QO,and the selection of the physical subspace,i.e.of the physical representation of QO,can be carried out with the help of conditions which are imposed on observable quantities.III.The pseudo-rigid bodyIn Ref.[1]the pseudo-rigid body(PRB)is defined kinematically by specifying its configuration space.Consider a distribution of mass points in R N,or a continuous mass distribution,such that the volume(of the convex hull)is non-zero.Let this object undergo collective linear deformations,i.e.let each mass point be subject to the same linear transformation.Then,starting from an initial configuration, each other configuration can be obtained by specifying the change in the position of the center of mass,i.e.an element of R N,and in the orientation,shape,and size of the body,i.e.an element of GL0(N,R),the identity component of GL(N,R). 
That is,the configuration space of the PRB is the groupG=GL0(N,R)×denotes the semi-direct product).Now suppose we are unable to detect different orientations,sizes,and positions of the body,i.e.the physical degrees of freedom are the different shapes of the body.Then,the redundant degrees of freedom,namely dilations,rotations,and translations,are described by the action of the gauge groupH= R+×SO(N)A.Symplectic structureThe configuration space Q=G being an open subset of the space M(N,R)×R N(M(N,R)is the space of real(N×N)–matrices),we can introduce global coordinates on Q,namely the matrix elements x ij,1≤i,j≤N,of the element g=(x ij)∈GL0(N,R),and the Cartesian coordinates x i of the vector x∈R N.The phase space P=T∗G of the system can be identified with the product G×L G∗(L G∗is the dual of the Lie algebra L G of G)by the trivializationT∗:G×L G∗−→T∗G,((g,x),(α,β))−→µ(α,β)(g,x) ((α,β)∈L G∗≃M(N,R)×R N).In the above coordinates the one-formµ(α,β)∈T∗G is given explicitly byµ(α,β)(g,x)=αij dx ji+βi dx i(3)(repeated indices are summed over).Denoting the coordinates on L G∗by(p ij,p i),the Liouville form can be written asθ=p ij dx ji+p i dx i.(4) The symplectic form isω=−dθ.B.Infinitesimal generatorsFor the purpose of later reference we will supply the expressions for the infinitesimal generators of the action of the group G on its cotangent bundle T∗G by left and right translations.LetΦL:G×G−→G,((h,y),(g,x))−→(hg,y+h x)be the left action of the group G on itself by left translations,and letΦL∗: G×T∗G−→T∗G denote the canonical lift of this action to P=T∗G.Then, in the above coordinates,the infinitesimal generator for the action of the one-parameter subgroup generated by the element(A,a)∈L G≃M(N,R)×R N on P is given byJ L(x,p)=a ij x jk p ki+a i p i+p i a ij x j.(5)(A,a)Similarly,letΦR:G×G−→G,((h,y),(g,x))−→(gh−1,x−gh−1y)be the left action of G on itself by right translations,and letΦR∗denote the canonical lift to P.In this case the infinitesimal generator isJ 
R(A,a)(x,p)=−x ij a jk p ki−p i x ij a j.(6)The Lie algebras which are formed by the elements J L/R(A,a),(A,a)∈L G,are iso-morphic to L G{J L/R(A,a),J L/R(B,b)}=J L/R[(A,a),(B,b)]=J L/R([A,B],A b−B a).(7) As left and right translations commute,the corresponding generators satisfy{J L(A,a),J R(B,b)}=0.(8) C.ConstraintsLet D=E N(E N is the(N×N)–unit matrix),let{K ij=−K ji}be the standard basis of so(N),and{e i}the standard basis of R N.Then,the elements(D,0), (K ij,0),and(0,e i)constitute a basis of L H,the Lie algebra of the gauge group H,and the infinitesimal generatorsD:=J L(D,0),K ij:=J L(Kij,0),P i:=J L(0,ei)(9)span the constraint algebra C0≃L H.D.Fundamental observablesOne class of observables,which can readily be obtained,is given by the genera-tors of right translations J R(A,a).As the generators J R(D,0)=−J L(D,0)and J R(0,e i)=− j x ji J L(0,e j)are contained in the intersection P∩C of the strong Poisson com-mutant of the constraints with the set of weakly vanishing functions,we have to restrict ourselves to the sl(N,R)subalgebra of L G,so that only the observablesJ R (A,0),A∈sl(N,R),can be taken to form part of˜O.The action of SL(N,R)has to be supplemented by the“translations along the fibers”generated by an appropriate class of functions on the configuration space Q=G(cf.[10]).These functions can be chosen asX ij=X ji=λ(det g)−2/N(g t g)ij=λ(det g)−2/N x ki x kj,λ>0(10) (g t is the transpose of g∈GL0(N,R),the physical significance of the parameter λwill be determined in the next section,cf.the discussion below eq.(34)).The functions X ij are obviously invariant under translations(R N)and under the action of SO(N)from the left.As they are homogeneous of degree zero in x ij,they are also invariant under dilations.Being the elements of a symmetric matrix,the functions X ij generate an action of the Abelian group S(N)of real symmetric(N×N)–matrices on P=T∗G. 
Choosing the matricesS ij=S ji=1as a basis of S(N),the action on P of an element S=s ij S ij∈S(N)(s ij=s ji)is given byµ−→µ−s ij d X ij(11) (µ∈P),in coordinatesx kl−→x kl,p kl−→p kl−s ij∂X ij×S(N).Denoting the elements of sl(N,R)×S(N),with the group multiplication law(g,S)(h,T)=(gh,S+gT g t).This group is also denoted by CM(N),the group of collective motions in N di-mensions,and plays a prominent rˆo le in the description of collective modes of multi-particle systems,e.g.in nuclear physics(cf.[12,13]).The action of SL(N,R)×S(N)as the Lie algebra˜O of fundamental observables.In the next section further justification will be given to this choice.IV.The cases N=2and N=3Having established the kinematical properties of the PRB and the Lie structure of the algebra of fundamental observables,we now have to determine the Casimirs of O and the observable content of the constraints.This will be done explicitly for the space dimensions N=2and N=3.For the sake of clarity and simplicity,the case N=2will be treated in detail,the largely analogous discussion of the case N=3can then be kept short.A.N=2In two space dimensions the configuration space is the group G=GL0(2,R)×R2.The constraint algebra C0is spanned by the functionsD=x ij p ji+x i p i,K=(x1i p i2−x2i p i1)+(x1p2−x2p1),P i=p i(14)(i,j∈{1,2}).Choosing the matricesL1=12 0110,L3=1×S(2)is generated by the functionsL1=−12(x i1p2i+x i2p1i)(17) L3=−1dx ki x kj,d=det g=det(x ij).(19) The Lie algebra cm(2)is isomorphic to the Lie algebra iso(2,1)=so(2,1)2(X11−X22),X3:=1The function¯K,which represents the observable content of the constraints,is not a Casimir of C0but a generalized Casimir element of the constraints.This property is not an effect of the non-unimodularity of the gauge group H,but simply of its semi-direct product structure(which,in turn,causes also the non-unimodularity).It should be noted that the constraint algebra C0,which is a semi-direct prod-uct,can be replaced by an equivalent constraint 
algebra¯C0=(R+×so(2))×R2, which is a direct product and is generated by¯K,¯D=D−x i P i,and¯P i=P i(for N=2,¯C0is even Abelian).¯K is a Casimir of¯C0,that is it fulfills the definition of an element of OC as it was given in Ref.[2].It should also be noted that the dilation constraint D does not contribute to the observable content of the constraints,i.e.it cannot have any impact on the selection of the physical representation of the algebra of observables.Therefore it would be quite unphysical to make the quantization of the system depend on requirements which are imposed on the quantization of this quantity which,from the point of view of the constrained system,is unobservable.It is one of the merits of Ref.[1] to have shown that a naive application of the Dirac quantization scheme,which requires the quantization of this quantity,can lead to substantially wrong results: it is the requirement that the operator corresponding to D upon quantization be formally self-adjoint with respect to an inner product on an extended Hilbert space containing unphysical states,which necessitates the introduction of the correction term(2).Of course,from the discussion of this single example nothing can be said about the general case of a non-unimodular gauge group.B.DynamicsIn this paragraph it will be shown how the Hamiltonian for the free dynamics of the constrained system can be expressed as a function of the cm(2)observables, and how these observables can be given a physical interpretation.Let the body be composed of n≥3individual mass points with the same mass m,not all of them lying on the same line.Let the positions of the mass points, relative to the center of mass,at the time t=0be x0a,a=1,2,...,n.Then the positions at time t>0,implementing the condition that all mass points be subject to the same linear transformation,can be written asx a(t)=g(t)x0a,g(t)∈GL0(2,R),g(0)=E2.(25) Define the mass–quadrupole tensor q(t)byq ij(t)=na=1mx ai(t)x aj(t)=(g(t)q(0)g t(t))ij(26)and 
choose the basis in the center of mass frame such that q0:=q(0)becomes diagonal:q0=diag(q1,q2).The kinetic energy of the unconstrained motion of thebody,relative to the center of mass,can be expressed by q0and the time derivative ˙g(t)of g(t)T=1∂x ij +a i∂2ωD D+ωµIµ(28) the kinetic energy becomes a function of the generalized“angular”velocitiesωD andωµ:T=T(ωD,ωµ).The generalized“angular”momenta conjugate toωD andωµJ D=∂T∂ωµ(29)are the infinitesimal generators of the group action by left translations,i.e.in the above coordinates on the groupJ D=12q1+(J D−J1)2+(J2−J3)2 2θJµJµ,θ:=q1q2whereκD andκK are still arbitrary,and J2=JµJµis the quadratic Casimir of the sl(2,R)subalgebra of gl(2,R).Finally,observing that we have the identity J2=L2,where L2=LµLµis the quadratic Casimir of the sl(2,R)≃so(2,1) subalgebra of the cm(2)≃iso(2,1)algebra,the Hamiltonian H can be expressed as a function of observables1H=det q0,can be identified with the space of all mass–quadrupole tensors of determinant q1q2.Moreover,as the quantity√×R3can be taken to consist of the elements(D,0),(K i,0)and(0,e i),i=1,2,3,where D=E3and{K i|(K i)jk=−εijk}is the standard basis of so(3).The corresponding generators of the constraint algebra C0areD=x ij p ji+x i p i,K i=εijk(−x kl p lj+x j p k),P i=p i.(35)ObservablesFor the sl(3,R)subalgebra of the Lie algebra cm(3)=sl(3,R)3δij E3,3i=1T ii=0.(36)The basis for S(3)can be chosen as above:S ij=S ji=1(det g)2/3x ki x kj,(37) their commutation relations read{J ij,J kl}=δjk J il−δil J kj(38){J ij,X kl}=δjk X il+δjl X ik−22V k¯V k(41) whereV k=εkij¯X li J lj,¯V k=εkij X il J jl,¯X ij=εiklεjmn X km X ln=¯X ji.(42) A third invariant is the signature Sig(X)of the matrix X=(X ij),which is defined as twice the number of positive eigenvalues minus the rank of X.Identities and the observable content of the constraintsThere are two identities for the Casimirs of O,which do not involve the constraintsI3=det X≡λ3,Sig(X)=3(43) (i.e.X is positive definite),and one functional 
identity which determines the ob-servable content of the constraintsI3Λ=λ3Λ≡I5,Λ=3i=1¯K2i(44)where¯Ki=K i−εijk x j P k.(45) Again,Λis not a Casimir of C0but a generalized Casimir element of the constraints. Going over to the equivalent constraint algebra¯C0=(R+×SO(3))×R3generated by¯D=D−x i P i,¯K i and¯P i=P i,Λbecomes a Casimir of¯C0.Note that,under the additional conditions expressed by the identities(43), the Casimir I5is always a non-negative function.This can be seen as follows. In every Hamiltonian action(cf.[14])of the Lie algebra cm(3)on a symplectic manifold M the generators of this action are uniquely defined–not only up to a constant–because thefirst and second cohomology groups of cm(3)vanish[15]. Therefore,the matrix X of the generators X ij of the S(3)subalgebra assumes the value X=λE3(i.e.there is a point m∈M,such that X(m)=λE3),given that the generators fulfill the identities(43).For X=λE3we have¯X=2λ2E3,and,at the point m,I5can be written asI5(m)=λ33i=1N2i(m),N i=εijk J jk.(46)But I5is constant on the orbit through m,and so I5is non-negative everywhere because M is foliated by the orbits of the cm(3)action.In the case at hand I5assumes the value zero,the minimum value which is compatible with the identities(43),on the constraint surface.That is,the effect, on the observable sector of the system,of the vanishing of the constraints consists in restricting the observableλ3Λ=I5to its minimum.Consequently,the observable content of the constraints,i.e.,the condition which is imposed on the Casimir I5 of the algebra of observables by the vanishing of the constraints via the identity (44),can be given two equivalent formulations,a numerical and an algebraic one. 
The numerical formulation states that the Casimirλ3Λ=I5assumes the value zero,whereas the algebraic formulation states that it has to assume the minimum possible value compatible with the identities(43).DynamicsIn the same way as in the two-dimensional case the Hamiltonian for the free constrained dynamics can be expressed as a function of the cm(3)generators. For q0=qE3(which is not a restriction because it can always be achieved by an SL(3,R)–transformation,i.e.by a change of basis)it is proportional to the quadratic Casimir of the sl(3,R)subalgebraH=1。
The vast majority of error messages generated by GROMACS are descriptive, informing the user where the exact error lies. Some errors that arise are noted below, along with more details on what the issue is and how to solve it.

1. General
   1.1. Cannot allocate memory
2. pdb2gmx
   2.1. Residue 'XXX' not found in residue topology database
   2.2. Long bonds and/or missing atoms
   2.3. Chain identifier 'X' was used in two non-sequential blocks
   2.4. WARNING: atom X is missing in residue XXX Y in the pdb file
   2.5. Atom X in residue YYY not found in rtp entry
3. grompp
   3.1. Found a second defaults directive file
   3.2. Invalid order for directive defaults
   3.3. System has non-zero total charge
   3.4. Incorrect number of parameters
   3.5. Number of coordinates in coordinate file does not match topology
   3.6. Fatal error: No such moleculetype XXX
   3.7. T-Coupling group XXX has fewer than 10% of the atoms
   3.8. The cut-off length is longer than half the shortest box vector or longer than the smallest box diagonal element. Increase the box size or decrease rlist
   3.9. Unknown left-hand XXXX in parameter file
   3.10. Atom index (1) in bonds out of bounds
4. mdrun
   4.1. Stepsize too small, or no change in energy. Converged to machine precision, but not to the requested precision
   4.2. LINCS/SETTLE/SHAKE warnings
   4.3. 1-4 interaction not within cut-off
   4.4. Simulation running but no output
   4.5. Can not do Conjugate Gradients with constraints
   4.6. Pressure scaling more than 1%
   4.7. Range Checking error
   4.8. X particles communicated to PME node Y are more than a cell length out of the domain decomposition cell of their charge group
   4.9.
There is no domain decomposition for n nodes that is compatible with the given box and a minimum cell size of x nm

General

Cannot allocate memory

The executed script has attempted to assign memory to be used in the calculation, but is unable to do so due to insufficient memory.

Possible solutions are:
• install more memory in the computer.
• use a computer with more memory.
• reduce the number of atoms selected for analysis.
• reduce the length of the trajectory file being processed.
• in some cases, confusion between Ångström and nm may lead users to generate a water box that is 10^3 times larger than what they think it is (e.g. with genbox).

The user should bear in mind that the cost in time and/or memory for various activities will scale with the number of atoms/groups/residues N or the simulation length T as order N, N log N, or N^2 (or maybe worse!), and the same for T, depending on the type of activity. If it takes a long time, think about what you are doing and the underlying algorithm (see the manual, man page, or use the -h flag for the utility), and see if there is something sensible you can do that has better scaling properties.

pdb2gmx

Residue 'XXX' not found in residue topology database

This means that the force field you have selected while running pdb2gmx does not have an entry in the residue database for XXX. The residue database entry is necessary both for stand-alone molecules (e.g. formaldehyde) and for peptides (standard or non-standard). This entry defines the atom types, connectivity, and bonded and non-bonded interaction types for the residue, and is necessary to use pdb2gmx to build a .top file. A residue database entry may be missing simply because the database does not contain the residue at all, or because the name is different.

For new users, this error appears because they are running pdb2gmx blindly on a PDB file they have without consideration of the contents of the file.
A force field is not magical; it can only deal with molecules or residues (building blocks) that are provided in the residue database or included otherwise.

If you want to use pdb2gmx to automatically generate your topology, you have to ensure that the appropriate .rtp entry is present within the desired force field and has the same name as the building block you are trying to use. If you call your molecule "HIS", then pdb2gmx will not magically build a random molecule; it will try to build histidine, based on the [ HIS ] entry in the .rtp file, so it will look for the exact atomic entries for histidine, no more, no less.

If you want a topology for an arbitrary molecule, you cannot use pdb2gmx (unless you build the .rtp entry yourself). You will have to build it by hand, or use another program (such as x2top or one of the scripts contributed by users) to build the .top file.

If there is no entry for this residue in the database, then the options for obtaining the force field parameters are:
• see if a different name is being used for the residue in the residue database and rename as appropriate,
• parameterize the residue/molecule yourself,
• find a topology file for the molecule, convert it to an .itp file and include it in your .top file,
• use another force field which has parameters available for it,
• search the primary literature for publications with parameters for the residue that are consistent with the force field being used.

Long bonds and/or missing atoms

There are probably atoms missing earlier in the .pdb file, which makes pdb2gmx go crazy. Check the screen output of pdb2gmx, as it will tell you which one is missing. Then add the atoms to your .pdb file (energy minimization will put them in the right place), or fix the side chain with e.g.
the WhatIF program.

Chain identifier 'X' was used in two non-sequential blocks

This means that within the coordinate file fed to pdb2gmx, the X chain has been split, possibly by the incorrect insertion of one molecule within another. The solution is simple: move the inserted molecule to a location within the file so that it is not splitting another molecule.

This message may also mean that the same chain identifier has been used for two separate chains. In that case, rename the second chain to a unique identifier.

WARNING: atom X is missing in residue XXX Y in the pdb file

Related to the long bonds/missing atoms error above, this error is usually quite obvious in its meaning. That is, pdb2gmx expects certain atoms within the given residue, based on the entries in the force field .rtp file. There are several cases to which this error applies:

• Missing hydrogen atoms; the error message may be suggesting that an entry in the .hdb file is missing. More likely, the nomenclature of your hydrogen atoms simply does not match what is expected by the .rtp entry. In this case, use -ignh to allow pdb2gmx to add the correct hydrogens for you, or re-name the problematic atoms.
• A terminal residue (usually the N-terminus) is missing H atoms; this usually suggests that the proper -ter option has not been supplied or chosen properly. In the case of the AMBER force fields, nomenclature is typically the problem. N-terminal and C-terminal residues must be prefixed by N and C, respectively. For example, an N-terminal alanine should not be listed in the .pdb file as ALA, but rather NALA, as specified in the ffamber instructions.
• Atoms are simply missing in the structure file provided to pdb2gmx; look for REMARK 465 and REMARK 470 entries in the .pdb file. These atoms will have to be modeled in using external software. There is no GROMACS tool to re-construct incomplete models.

Contrary to what the error message says, the use of the option -missing is almost always inappropriate.
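For the last case above, the REMARK records can be inspected without any GROMACS tool at all: missing residues and missing atoms are plain-text records in the PDB file. The fragment below fabricates an illustrative model.pdb (the file name and its contents are examples, not a real structure) and then lists the flagged entries:

```shell
# Create a tiny illustrative PDB fragment (normally you would already have model.pdb)
cat > model.pdb <<'EOF'
REMARK 465 MISSING RESIDUES
REMARK 465   M RES C SSSEQI
REMARK 465     GLY A     1
REMARK 470 MISSING ATOM
REMARK 470     LYS A  27    CG   CD   CE   NZ
ATOM      1  N   ALA A   2      11.104   6.134  -6.504  1.00  0.00
EOF

# List residues (465) and atoms (470) the depositors flagged as unresolved
grep -E '^REMARK (465|470)' model.pdb
```

If this search turns up entries for your structure, model the missing pieces in with external software before running pdb2gmx.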
The -missing option should only be used to generate specialized topologies for amino acid-like molecules to take advantage of .rtp entries. If you find yourself using -missing in order to generate a topology for a protein or nucleic acid, don't; the topology produced is likely physically unrealistic.

Atom X in residue YYY not found in rtp entry

If you are attempting to assemble a topology using pdb2gmx, the atom names are expected to match those found in the .rtp file that defines the building block(s) in your structure. If you get this error, simply re-name the atoms in your coordinate file appropriately.

grompp

Found a second defaults directive file

This is caused by the [ defaults ] directive appearing more than once in the topology or force field files for the system; it can only appear once. A typical cause of this is a second defaults directive set in an included topology file, .itp, that has been sourced from somewhere else. For specifications on how the topology files work, see GROMACS Manual, Section 5.6.

[ defaults ]
; nbfunc comb-rule gen-pairs fudgeLJ fudgeQQ
1 1 no 1.0 1.0

One solution is to simply comment out (or delete) the lines in the file where it is included for the second time, i.e.,

;[ defaults ]
;; nbfunc comb-rule gen-pairs fudgeLJ fudgeQQ
;1 1 no 1.0 1.0

A better approach to finding a solution is to re-think what you are doing. The [ defaults ] directive should only appear at the top of your .top file where you choose the force field. If you are trying to mix two force fields, then you are asking for trouble. If a molecule .itp file tries to choose a force field, then whoever produced it is asking for trouble.

Invalid order for directive defaults

This is a result of defaults being set in the topology or force field files in an inappropriate location; the [ defaults ] section can only appear once and must be the first directive in the topology.
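Renaming mismatched atom or residue names, as suggested above, can be scripted. The sketch below is a minimal illustration assuming the standard PDB fixed-column layout (atom name in columns 13-16, residue name in columns 18-20); the name mappings are hypothetical examples, not force-field-specific advice.

```python
# Minimal sketch: rename atom / residue names in a PDB file so they match
# the force field's .rtp entries. The mappings below are purely
# illustrative -- substitute the names your force field actually expects.
RENAME_ATOMS = {"1HB": "HB1", "2HB": "HB2"}   # hypothetical examples
RENAME_RESIDUES = {"HIS": "HID"}               # hypothetical example

def fix_pdb_names(lines):
    fixed = []
    for line in lines:
        if line.startswith(("ATOM", "HETATM")):
            # PDB is column-based: atom name in cols 13-16, residue in 18-20
            atom = RENAME_ATOMS.get(line[12:16].strip(), line[12:16].strip())
            res = RENAME_RESIDUES.get(line[17:20].strip(), line[17:20].strip())
            line = line[:12] + atom.ljust(4) + line[16:17] + res.ljust(3) + line[20:]
        fixed.append(line)
    return fixed
```

Always diff the result against the original file before feeding it to pdb2gmx, since column damage in a hand-edited PDB causes exactly the kind of errors described here.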
The [ defaults ] directive is typically present in the force field file (ffgmx.itp, ffoplsaa.itp, etc.), and is added to the topology when you #include this file in the system topology.

The "invalid order" error can also pertain to any of the other topology directives. In these cases, the error is usually a result of simply re-organizing the topology inappropriately (hence "out of order" or "out of sequence", as the error implies). A common problem is placing position restraint files for multiple molecules out of order. Recall that a position restraint file can only belong to the [ moleculetype ] block that contains it. For example:

WRONG:
#include "topol_A.itp"
#include "topol_B.itp"
#include "ligand.itp"
#ifdef POSRES
#include "posre_A.itp"
#include "posre_B.itp"
#include "ligand_posre.itp"
#endif

RIGHT:
#include "topol_A.itp"
#ifdef POSRES
#include "posre_A.itp"
#endif
#include "topol_B.itp"
#ifdef POSRES
#include "posre_B.itp"
#endif
#include "ligand.itp"
#ifdef POSRES
#include "ligand_posre.itp"
#endif

System has non-zero total charge

Notifies you that counter-ions may be required for the system to neutralize the charge, or that there may be problems with the topology.

If the charge is a non-integer, this indicates that there is a problem with the topology. If pdb2gmx has been used, look at the right-hand comment column of the atom listing, which lists the cumulative charge. This should be an integer after every residue (and/or charge group where applicable). This will assist in finding the residue where things start departing from integer values. Also check the capping groups that have been used.

If the charge is already close to an integer, the difference is caused by rounding errors and is not a major problem.

Note for PME users: it is possible to use a uniform neutralizing background charge in PME to compensate for a system with a net background charge. There is probably nothing wrong with this in principle, because the uniform charge will not perturb the dynamics.
Nevertheless, it is standard practice to actually add counter-ions to make the system net neutral.

Incorrect number of parameters

Look at the topology file for the system. You have not given enough parameters for one of the bonded definitions. Sometimes this also occurs if you have mangled the Include File Mechanism or the topology file format (see: GROMACS Manual, Chapter 5) when you edited the file.

Number of coordinates in coordinate file does not match topology

This is pointing out that, based on the information provided in the topology file, .top, the total number of atoms or particles within the system does not match exactly with what is provided within the coordinate file, often a .gro or a .pdb.

The most common reason for this is simply that the user has failed to update the topology file after solvating or adding additional molecules to the system, or has made a typographical error in the number of one of the molecules within the system. Ensure that the end of the topology file being used contains something like the following, matching exactly with what is within the coordinate file being used, in terms of both numbers and order of the molecules:

[ molecules ]
; Compound #mol
Protein 1
SOL 10189
NA+ 10

In a case when grompp can't find any atoms in the topology file at all (number of coordinates in coordinate file (conf.gro, 648) does not match topology (topol.top, 0)), and that error is preceded by warnings like:

calling /lib/cpp...
sh: /lib/cpp: No such file or directory
cpp exit code: 32512
Tried to execute: '/lib/cpp -I/usr/local/gromacs-...
The '/lib/cpp' command is defined in the .mdp file

then your system's C preprocessor, cpp, is not being found or run correctly. One reason might also be that the cpp variable is not properly set in the .mdp file.
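The atom bookkeeping behind the "number of coordinates does not match topology" error can be checked with a short script. This is an illustrative sketch, not a GROMACS tool: the per-molecule atom counts are assumed values that, in practice, you would take from the corresponding [ moleculetype ] sections in your .itp files.

```python
# Compare the atom total implied by the [ molecules ] section of a .top
# file against the atom count declared on line 2 of a .gro file.
# Atoms per molecule type are supplied by hand here (an assumption --
# read them from your own .itp files in practice).
ATOMS_PER_MOL = {"Protein": 3938, "SOL": 3, "NA+": 1}  # illustrative numbers

def topology_atom_count(top_text):
    total, in_molecules = 0, False
    for raw in top_text.splitlines():
        line = raw.split(";")[0].strip()          # strip comments
        if not line:
            continue
        if line.startswith("["):
            in_molecules = line.replace(" ", "").lower() == "[molecules]"
            continue
        if in_molecules:
            name, count = line.rsplit(None, 1)     # e.g. "SOL 10189"
            total += ATOMS_PER_MOL[name] * int(count)
    return total

def gro_atom_count(gro_text):
    return int(gro_text.splitlines()[1])           # .gro line 2 is the atom count
```

If the two totals differ, the usual fix is to update the counts in [ molecules ] after solvation or ion addition, as described above.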
As of GROMACS version 4.0, grompp contains its own preprocessor, so this error should not occur.

This error can also occur when the .mdp file has been edited under Windows, and your cpp is intolerant of the mismatch between Windows and Unix end-of-line characters. If it is possible that you have done this, try running your .mdp file through the standard Linux dos2unix utility.

Fatal error: No such moleculetype XXX

Each type of molecule in the [ molecules ] section of your .top file must have a corresponding [ moleculetype ] section defined previously, either in the .top file or an included .itp file. See GROMACS Manual, Section 5.6.1 for the syntax description. Your .top file doesn't have such a definition for the indicated molecule. Check the contents of the relevant files, and pay attention to the status of #ifdef and/or #include statements.

T-Coupling group XXX has fewer than 10% of the atoms

It is possible to specify separate thermostats (temperature coupling groups) for each and every molecule type within a simulation. This is a particularly bad practice employed by many users new to molecular dynamics simulations; you can introduce errors and artifacts that are hard to predict. In most cases it is best to have all molecules within a single group, using System. If separate coupling groups are required, ensure that they are of sufficient size and combine molecule types that appear together within the simulation. For example, for a protein in water with counter-ions, one would likely want to use Protein and Non-Protein.

The cut-off length is longer than half the shortest box vector or longer than the smallest box diagonal element. Increase the box size or decrease rlist

This error is generated in the cases noted within the message. The dimensions of the box are such that an atom would interact with itself (when using periodic boundary conditions), thus violating the minimum image convention.
Such an event is totally unrealistic and will introduce serious artefacts. The solution is again what is noted within the message: either increase the size of the simulation box so that it is at an absolute minimum twice the cut-off length in all three dimensions (take care here if you are using pressure coupling, as the box dimensions will change over time, and if they decrease even slightly you will again be violating the minimum image convention), or decrease the cut-off length (depending on the force field utilised, this may not be an option).

Unknown left-hand XXXX in parameter file

grompp has found an unknown term in the .mdp file fed to it. You should check the spelling of XXXX and look for typographical errors. Be aware that quite a few run parameters changed between GROMACS 3.x and GROMACS 4.x, and the output from grompp will sometimes offer helpful commentary about these situations.

Atom index (1) in bonds out of bounds

This kind of error looks like

Fatal error:
[ file spc.itp, line 32 ]
Atom index (1) in bonds out of bounds (1-0).
This probably means that you have inserted topology
section "settles" in a part belonging to a different
molecule than you intended to. In that case move the
"settles" section to the right molecule.

This error is fairly self-explanatory. You should look at your .top file and check that all of the [ molecules ] sections contain all of the data pertaining to that molecule, and no other data. That is, you cannot #include another molecule type (.itp file) before the previous [ moleculetype ] has ended. Consult the examples in Chapter 5 of the manual for information on the required ordering of the different [ sections ]. Pay attention to the contents of any files you have included with #include directives.

This error can also arise if you are using a water model that is not enabled for use with your chosen force field by default. For example, if you are attempting to use the SPC water model with an AMBER force field, you will see this error.
The reason is that, in spc.itp, there is no #ifdef statement defining atom types for any of the AMBER force fields. You can either add this section yourself, or use a different water model.

mdrun

Stepsize too small, or no change in energy. Converged to machine precision, but not to the requested precision

This is not an error as such. It is simply informing you that during the energy minimization process the limit of what can be minimized with the current parameters was reached. It does not mean that the system has not been minimized fully, although in some situations that may be the case. If the system has a significant amount of water present, then a potential energy (Epot) of the order of -10^5 to -10^6 is typically a reasonable value for almost all purposes, e.g. starting a molecular dynamics simulation from the resulting structure. Only for special purposes, such as normal-mode-analysis type calculations, may it be necessary to minimize further. Further minimization may be achieved by using a different energy minimization method or by making use of double-precision-enabled GROMACS.

LINCS/SETTLE/SHAKE warnings

Sometimes, when running dynamics, mdrun may suddenly stop (perhaps after writing several .pdb files) after a series of warnings about the constraint algorithms (e.g. LINCS, SETTLE or SHAKE) are written to the log file. These algorithms are often used to constrain bond lengths and/or angles. When a system is blowing up (i.e. exploding due to diverging forces), the constraints are usually the first thing to fail. This doesn't necessarily mean you need to troubleshoot the constraint algorithm; usually it is a sign of something more fundamentally wrong (physically unrealistic) with your system. Perhaps you didn't minimize well enough, have a bad starting structure or steric clashes, are using too large a timestep, or are doing particle insertion in free energy calculations without using soft core.
This can also be caused by a single water molecule that is isolated from the other water molecules somewhere within the system.

1-4 interaction not within cut-off

Some of your atoms have moved such that two atoms separated by three bonds are now separated by more than the cut-off distance. This is BAD. Most importantly, do not increase your cut-off! This error actually indicates that the atoms have very large velocities, which usually means that (part of) your molecule(s) is (are) blowing up. If you are using LINCS for constraints, you probably also already got a number of LINCS warnings. When using SHAKE, this will give rise to a SHAKE error, which halts your simulation before the "1-4 not within cutoff" error can appear.

There can be a number of reasons for the large velocities in your system. If it happens at the beginning of the simulation, your system might not be equilibrated well enough (e.g. it contains some bad contacts). Try a(nother) round of energy minimization to fix this. Otherwise you might have a very high temperature, and/or a timestep that is too large. Experiment with these parameters until the error stops occurring. If this doesn't help, check the validity of the parameters in your topology!

Simulation running but no output

Not an error as such, but mdrun appears to be chewing up CPU time while nothing is being written to the output files. There are a number of reasons why this may occur:

• Your simulation might simply be (very) slow, and since output is buffered, it can take quite some time for output to appear in the respective files. If you are trying to fix some problems and you want to get output as fast as possible, you can set the environment variable LOG_BUFS to 0 by using setenv LOG_BUFS 0; this disables output buffering. Use unsetenv LOG_BUFS to turn buffering on again.
• Something might be going wrong in your simulation, causing e.g. not-a-numbers (NaN) to be generated (these are the result of e.g. division by zero).
Subsequent calculations with NaNs will generate floating point exceptions, which slow everything down by orders of magnitude. On an SGI system this will usually result in a large percentage of CPU time being devoted to 'system' (check it with osview, or for a multi-processor machine with top and osview).
• You might have all nst* parameters (see your .mdp file) set to 0; this will suppress most output.
• Your disk might be full. Eventually this will lead to mdrun crashing, but since output is buffered, it might take a while for mdrun to realize it can't write.
• You are running an executable compiled with MPI support (e.g. LAM) and did not start the LAM daemon (lamboot). See the LAM documentation.

Can not do Conjugate Gradients with constraints

This means you can't do energy minimization with the conjugate gradient algorithm if your topology has constraints defined - see here.

Pressure scaling more than 1%

This error tends to be generated when the simulation box begins to oscillate (due to large pressures and/or small coupling constants); the system starts to resonate and then crashes. This can mean that the system wasn't equilibrated sufficiently before pressure coupling was turned on, so better / more equilibration may fix the issue. It is recommended to observe the system trajectory prior to and during the crash; this may indicate whether a particular part of the system / structure is the problem.

In some cases, if the system has been equilibrated sufficiently, this error can mean that the pressure coupling constant, tau_p, is too small (particularly when using the Berendsen weak coupling method). Increasing that value will slow down the response to pressure changes and may stop the resonance from occurring. This error can also appear when using a timestep that is too large, e.g. 5 fs, in the absence of constraints and/or virtual sites.

Range Checking error

This usually means your simulation is blowing up.
Probably you need to do better energy minimization and/or equilibration and/or topology design.

X particles communicated to PME node Y are more than a cell length out of the domain decomposition cell of their charge group

This is another way that mdrun tells you your system is blowing up. In GROMACS version 4.0, domain decomposition was introduced to divide the system into regions containing nearby atoms (for more details, see the manual or the GROMACS 4 paper). If you have particles that are flying across the system, you will get this fatal error. The message indicates that some piece of your system is tearing apart (hence out of the "cell of their charge group"). Refer to the Blowing Up page for advice on how to fix this issue.

There is no domain decomposition for n nodes that is compatible with the given box and a minimum cell size of x nm

This means you tried to run a parallel calculation, and when mdrun tried to partition your simulation cell into chunks for each processor, it couldn't. The minimum cell size is controlled by the size of the largest charge group or bonded interaction, the largest of rvdw, rlist and rcoulomb, some other effects of bond constraints, and a safety margin. Thus it is not possible to run a small simulation with large numbers of processors. So, if grompp warned you about a large charge group, pay attention and reconsider its size. mdrun prints a breakdown of how it computed this minimum size in the .log file, so you can perhaps find the cause there. If you didn't think you were running a parallel calculation, be aware that from version 4.5, GROMACS uses thread-based parallelism by default. To prevent this, you can either give mdrun the "-nt 1" command line option, or build GROMACS so that it will not use threads. Otherwise, you might be using an MPI-enabled GROMACS and not be aware of the fact.
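Several of the box-related errors above, in particular the cut-off versus box-vector check, reduce to simple geometry: with periodic boundaries, the minimum image convention requires every cut-off (rlist, rvdw, rcoulomb) to be at most half the shortest box dimension. A minimal sketch for a rectangular box, with box vectors in nm (triclinic boxes additionally constrain the diagonal elements):

```python
# Geometry behind "The cut-off length is longer than half the shortest
# box vector": sanity-check cut-offs against a rectangular box.
def max_allowed_cutoff(box_nm):
    """Largest cut-off (nm) compatible with the minimum image convention."""
    return min(box_nm) / 2.0

def check_cutoffs(box_nm, rlist, rvdw, rcoulomb):
    """True if all cut-offs respect the minimum image convention."""
    limit = max_allowed_cutoff(box_nm)
    return all(r <= limit for r in (rlist, rvdw, rcoulomb))
```

Remember that under pressure coupling the box shrinks and grows during the run, so a box that passes this check at the start may still violate the convention later; leave some margin.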
Basic Principles and Components of Network Planning Technology

Network planning technology is an essential part of the telecommunications industry, and it plays a crucial role in designing, implementing, and managing communication networks. The basic principles of network planning technology involve the systematic analysis of network requirements, the development of technical specifications, and the optimization of network performance. It encompasses various elements such as network topology, capacity planning, traffic engineering, and cost analysis.

First and foremost, network planning technology requires a comprehensive understanding of the requirements and objectives of the network. This involves gathering information about the type of services to be provided, the geographical coverage required, and the expected traffic patterns.
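As an illustrative example of the capacity-planning element mentioned above (our own example, not taken from the text), the classical Erlang B formula estimates the probability that a call is blocked on a link with a given number of circuits and a given offered traffic load, and can be used to dimension the link:

```python
# Erlang B blocking probability, computed with the standard recursive
# form to avoid large factorials. `erlangs` is the offered traffic load,
# `servers` the number of circuits on the link.
def erlang_b(erlangs, servers):
    b = 1.0
    for m in range(1, servers + 1):
        b = (erlangs * b) / (m + erlangs * b)
    return b

# Dimensioning: smallest number of circuits that keeps blocking below a target
def circuits_needed(erlangs, target_blocking):
    n = 1
    while erlang_b(erlangs, n) > target_blocking:
        n += 1
    return n
```

For example, a link offered 2 Erlangs of traffic needs 7 circuits to keep blocking below 1%, which matches the standard Erlang B tables.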
Signal Integrity and PCB layout considerations for DDR2-800 Mb/s and DDR3 Memories

Fidus Systems Inc.
900 Morrison Drive, Ottawa, Ontario, K2H 8K7, Canada
Chris Brennan, Cristian Tudor, Eric Schroeter, Heike Wunschmann, and Syed Bokhari
Session # 8.13

Abstract

The paper addresses the challenge of meeting Signal Integrity (SI) and Power Integrity (PI) requirements of Printed Circuit Boards (PCBs) containing Double Data Rate 2 (DDR2) memories. The emphasis is on low layer count PCBs, typically 4-6 layers, using conventional technology. Some design guidelines are provided.

1. Introduction

DDR2 usage is common today, with a push towards higher speeds such as 800 Mbps [1] and, more recently, 1066 Mbps. DDR3 [2] targets a data rate of 1600 Mbps. From a PCB implementation standpoint, a primary requirement is delay matching, which is dictated by the timing requirement. This brings in a number of related factors that affect waveform integrity and delay. These factors are interdependent, but where a distinction can be made, they can be termed PCB layer stackup and impedance, interconnect topologies, delay matching, cross talk, PI, and timing. Cadence ALLEGRO™ SI-230 and Ansoft's HFSS™ are used in all computations.

Table 1: Comparison of DDR2 and DDR3 requirements

Signals common to both technologies and a general comparison of DDR2 and DDR3 are shown in Table 1. It must be noted that "matching" includes cases where the clock net may be made longer (termed DELTA in ALLEGRO SigXP). We have assumed a configuration comprising a controller and two SDRAMs in most illustrations that follow.

2. PCB Layer stackup and impedance

In a layer-constrained implementation, a 4-layer PCB (Figure 1) is a minimum, with all routing on the TOP and BOTTOM layers. One of the internal layers will be a solid ground plane (GND). The other internal plane layer is dedicated to VDD. Vtt and Vref can be derived from VDD. Use of a 6-layer PCB makes the implementation of certain topologies easier.
PI is also enhanced due to the reduced spacing between power and GND planes. The interconnect characteristic impedance for a DDR2 implementation can be a constant: a single-ended trace characteristic impedance of 50 Ohms can be used for all single-ended signals, and a differential impedance of 100 Ohms can be used for all differential signals, namely CLOCK and DQS. Further, the termination resistor pulled up to VTT can be kept at 50 Ohms, and ODT settings can be kept at 50 Ohms.

In the case of DDR3, however, single-ended trace impedances of 40 and 60 Ohms used selectively on loaded sections of ADDR/CMD/CNTRL nets have been found to be advantageous. Further, the value of the termination resistor pulled up to Vtt needs to be optimized in combination with the trace impedance through SI simulations; typically it is in the range 30-70 Ohms. The differential trace impedance can remain at 100 Ohms.

Figure 1: Four and six layer PCB stackup

3. Interconnect Topologies

In both DDR2 and DDR3, the DQ, DM and DQS signals are point-to-point and do not need any topological consideration. An exception is the case of multi-rank Dual In-Line Memory Modules (DIMMs). Waveform integrity is also easily addressed by a proper choice of drive strengths and On-Die Termination (ODT). The ADDR/CMD/CNTRL signals, and sometimes the clock signal, will involve a multipoint connection where a suitable topology is needed. Possible choices are indicated in Figure 2 for cases involving two SDRAMs. The Fly-By topology is a special case of a daisy chain with a very short or no stub.

For DDR3, any of these topologies will work, provided that the trace lengths are minimized. The Fly-By topology shows the best waveform integrity in terms of an increased noise margin. This can be difficult to implement on a 4-layer PCB, and the need for a 6-layer PCB arises. The daisy chain topology is easier to implement on a 4-layer PCB.
The tree topology, on the other hand, requires the length of the branch AB to be very close to that of AC (Figure 2). Enforcing this requirement results in the need to increase the length of the branches, which affects waveform integrity. Therefore, for DDR3 implementation, the daisy chain topology with minimized stubs proves to be best suited for 4-layer PCBs.

For DDR2-800 Mbps, any of these topologies are applicable, with the distinctions between them being less dramatic. Again, the daisy chain proves to be superior in terms of both implementation and SI.

Where more than two SDRAMs are present, the topology can often be dictated by constraints on device placement. Figure 3 shows some examples where a topology could be chosen to suit a particular component placement. Of these, only A and D are well suited for 4-layer PCB implementation. Again, for DDR2-800 Mbps operation, all topologies yield adequate waveform integrity. For a DDR3 implementation, in particular at 1600 Mbps, only D appears to be feasible.

Figure 2: ADDR/CMD/CNTRL topologies with 2 SDRAMs (tree topology and Fly-By topology, with Vtt termination resistors Rt)

Figure 3: ADDR/CMD/CNTRL topologies with four SDRAMs (variants A through D)

4. Delay matching

Implementing matched delay is usually carried out by bending a trace into a trombone shape. Routing blockage may require layer jumping. Unfortunately, while physical interconnect lengths can be made identical in layout, electrically the two configurations shown in Figure 4 will not be the same. The case of trombone delay has been well understood, and the case of a via is obvious. The delay of a trombone trace is smaller than the delay of a straight trace of the same center-line length. In the case of a via, the delay is more than that of a straight microstrip trace of length equal to the via length. The problem can be resolved in two different ways. In the first approach, these values can be pre-computed precisely and taken into account while delay matching.
This would become a tedious exercise, which could perhaps be eased with user-defined constraints in ALLEGRO 16.0. In the second approach, one would use means to reduce the disparity to an acceptable level.

Figure 4: Illustration of trombone traces and vias

Figure 5: Circuit for estimation of the trombone effect and resulting waveforms

Consider the case of a trombone trace. It is known that the disparity can be reduced by increasing the length of L3 (Figure 4). Details can be found in reference [3]. A simulation topology can be set up in SigXP to represent the parallel arms of a trombone trace as coupled lines. A sweep simulation is carried out with L3 (S in Figure 5) as a variable, and the largest reasonable value that reduces the delay difference with respect to a reference trace is selected. For microstrip traces, L3 > 7 times the distance of the trace to ground is needed.

Delay values are affected in a trombone trace due to coupling between parallel trace segments. Another way to reduce coupling without increasing the spacing is to use a saw-tooth profile. The saw-tooth profile shows better performance compared to a trombone, although it eventually ends up requiring more space. In either case, it is possible to estimate the effect on delay precisely by using a modified equation for the computation of the effective trace length [3]. This would need to be implemented as a user-defined constraint in ALLEGRO.

Consider the case of a through-hole via on the 6-layer stackup of Figure 2. Ground vias placed close to the signal vias play an important role in the delay. For the illustration, the microstrip traces on the TOP and BOTTOM layers are 150 mils long and 4 mils wide. The via barrel diameter = 8 mils, the pad diameter is 18 mils, and the anti-pad diameter is 26 mils. Three different cases are considered.
In the first case, the interconnect with via does not have any ground vias in its immediate neighborhood. Return paths are provided at the edges of the PCB 250 mils away from the signal via. In the second case, a reference straight microstrip trace of length = 362 mils is considered. The third case is the same as case 1 with four ground vias in the neighborhood of the signal via. Computed s-parameters with 60 Ohm normalization are shown in Figure 6. It can be seen that the use of 4 ground vias surrounding the signal via makes its behavior more like a uniform impedance transmission line and improves the s21 characteristic. In the absence of a return path in the immediate neighborhood, the via impedance increases. For the present purpose, it is important to know the resulting impact on the delay.A test circuit is set up similar to Figure 5. The driver is a linear source of 60 Ohms output impedance and outputs a trapezoidal signal of rise time = fall time = 100 ps and amplitude = 1V. It is connected to each of the 3 interconnects shown in Figure 6 and the far end is terminated in a 60 Ohm load. The excitation is a periodic signal with a frequency of 800 MHz. The time difference between the driver waveform at V = 0.5 V and the waveform at the receiver gives the switched delay.Results are illustrated in Figure 7 where only the rising edge is shown. It can be seen that the delay with four neighboring ground vias differs from that of the straight trace by 3 ps. On the other hand, the difference is 8 ps for the interconnect with no ground vias in the immediate neighborhood.It is therefore clear that increasing the ground via density near signal vias will help. However, in the case of 4 layer PCBs, this will not be possible as the signal traces adjacent to the Power plane will be referenced to a Power plane. Consequently, the signal return path would depend on decoupling. 
Therefore, it is very important that the decoupling design on 4-layer PCBs addresses return paths in addition to meeting power integrity requirements.

The clock net is differential in both DDR2 and DDR3. In DDR2, DQS can be either single-ended or differential, although it is usually implemented as differential at higher data rates. The switched delay of a differential trace is less than that of a single-ended trace of identical length. Where timing computations indicate the need, the clock and DQS traces may need to be made longer than the corresponding ADDR/CMD/CNTRL and DATA nets. This ensures that the clock and DQS transitions are centered on the associated ADDR/CMD/CNTRL nets and DQ nets.

Since the DQ and DM nets run at the maximum speed, it is desirable that all of these nets in any byte lane be routed identically, preferably without vias. Differential nets are less sensitive to discontinuities, so where layer jumping is needed, the DQS and CLOCK nets should be considered first.

Figure 6: s-parameters of interconnects with vias (60 Ohm normalization)

Figure 7: Driver and receiver waveforms for the 3 cases of Figure 6 (plot colors correspond)

5. Crosstalk

Cross talk contributes to delay uncertainty and is significant for microstrip traces. It is generally reduced by increasing the spacing between adjacent traces for long parallel runs. This has the drawback of increasing the total trace length, and therefore a reasonable value must be chosen; typically the spacing should be greater than twice the trace distance to ground. Again, ground vias play an important role. Near- and far-end coupling levels are illustrated in Figure 8. Use of multiple ground vias reduces coupling levels by 7 dB. To derive the interconnect budget, a simulation of a victim trace with two aggressors, one on each side, is adequate. Using a periodic excitation on all nets will yield the cross-talk-induced jitter.
Using a pseudo-random excitation on all nets will show the effect of both cross talk and data dependencies. Time domain results are not shown here, but this is easily done by setting up a 5-coupled-line circuit in SigXP with the spacing between traces set up for sweeping. Reasonable spacing values are chosen that keep the jitter in the waveform, due to both cross talk and pattern dependence, at an acceptable level.

Figure 8: s-parameters of coupled traces (60 Ohm normalization)

6. Power Integrity

Power Integrity here refers to meeting the power supply tolerance requirement under a maximum switching condition. Failure to address this requirement properly leads to a number of problems, such as increased clock jitter, increased data-dependent jitter, and increased cross talk, all of which eventually reduce timing margins.

The theory for decoupling is very well understood and usually starts with the definition of a "target impedance" as [4]

    Z_target = (Voltage tolerance) / (Transient current)    (1)

An important requirement here is knowledge of the transient current under the worst-case switching condition. A second important requirement is the frequency range: the range of frequencies over which the decoupling network must ensure that its impedance is equal to or below the required target impedance. On a printed circuit board, the capacitance created by the Power-Ground sandwich and the decoupling capacitors needs to handle a minimum frequency of ~100 kHz up to a maximum frequency of ~100-200 MHz. Frequencies below 100 kHz are easily addressed by the bulk capacitance of the voltage regulator module. Frequencies above 200 MHz should be addressed by the on-die and, in some cases, on-package decoupling capacitance. Due to the finite inductance of the package, there is no need to provide decoupling on the PCB to handle frequencies greater than 200 MHz.
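The target impedance of equation (1) is a one-line calculation. The sketch below applies it; the rail voltage, tolerance, and transient current used in the example are placeholder values for illustration, not figures from the paper:

```python
# Target impedance from equation (1):
#   Z_target = voltage tolerance / transient current
# where the voltage tolerance is the supply voltage times its fractional
# tolerance. The numbers below are placeholders -- substitute your rail's
# datasheet figures.
def target_impedance(v_supply, tolerance, transient_current):
    """Target impedance in ohms for a power distribution network."""
    return (v_supply * tolerance) / transient_current

# Example: a 1.8 V rail with 5% tolerance and a 2 A transient step
z = target_impedance(1.8, 0.05, 2.0)   # 0.045 ohm, i.e. 45 milliohm
```

The decoupling network (plane capacitance plus discrete capacitors) is then chosen so that its impedance stays at or below this value across the 100 kHz to 200 MHz range discussed above.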
The actual computation of power integrity can be very complex, involving IC package details, simultaneously switched signals and the PCB power distribution network. For PCB design, the target-impedance approach to decoupling design is simpler and provides a practical solution with very little computational effort.

The three power rails of concern are VDD, VTT and Vref. The tolerance requirement on the VDD rail is ~5%, and the transient current is determined as the difference between Idd7 and Idd2 as specified by JEDEC [1,4]. This requirement is met by using plane layers for power distribution and a modest number of decoupling capacitors. It is preferable to use decoupling capacitors of 10 different values distributed in the range of 10 nF to 10 uF. Further, the capacitor pad mounting structure should be designed for reduced mounted inductance.

The Vref rail has a tighter tolerance, but it draws very little current. Its target impedance is easily met using narrow traces and one or two decoupling capacitors. It is important, however, that the capacitors be located very close to the device pins.

The VTT rail proves challenging because it not only has a tighter tolerance but also draws a transient current close to that of the VDD rail. The transient current is easily calculated as described in reference [5]. Again, the target impedance requirement can be met using an increased number of decoupling capacitors.

On a 4-layer PCB, the planes are too far apart and the advantage of inter-plane capacitance is consequently lost. The number of decoupling capacitors needs to be increased, and higher-frequency capacitors with values less than 10 nF may be needed. These computations are easily done using the ALLEGRO SI Power Integrity option.

7. Timing

Timing computation is carried out as described in reference [6]. A table needs to be set up for the following eight cases:

1. Write Setup analysis, DQ vs. DQS
2. Write Hold analysis, DQ vs. DQS
3. Read Setup analysis, DQ vs. DQS
4. Read Hold analysis, DQ vs. DQS
5. Write Setup analysis, DQS vs. CLK
6. Write Hold analysis, DQS vs. CLK
7. Write Setup analysis, ADDR/CMD/CNTRL vs. CLK
8. Write Hold analysis, ADDR/CMD/CNTRL vs. CLK

An example is shown for the case of Write Setup analysis in Table 2. Actual numbers have been omitted, as they are not yet precisely known for DDR3. These numbers are obtained from the data sheets of controller and memory manufacturers; the numbers in the interconnect section are determined by SI simulations. All eight cases need to be analyzed for DDR2. For DDR3, cases 5 and 6 are not needed due to its write-leveling feature. In the PCB implementation, length-match tolerances must ensure that the total margin is positive.

Controller:
  a) DQ vs. DQS skew at transmitter output [ps] - from controller design data
  b) Data/strobe PLL jitter [ps] - used if not included in transmitter skew
  Total Controller = a + b [ps]

SDRAM (or DIMM):
  Setup requirement, tDSb @ Vih/Vil level [ps] - from the SDRAM datasheet; this number is to be adjusted based on the DQ and DQS slew rates [V/ns]
  Total SDRAM setup requirement = tDSb + slew-rate adjustment [ps]

Interconnect:
  a) Data crosstalk [ps] - 2 aggressors (one on each side of the victim); victim repetitive, aggressors PRBS; measured per the JEDEC specification from SI simulation results
  b) DQS crosstalk [ps] - same conditions as (a)
  c) Length-matching tolerance [ps] - extracted from SI simulation results; longest data net, worst-case PVT corner
  d) Characteristic impedance mismatch [ps] - can be omitted if the DQ and corresponding DQS signals are routed on the same layer
  Total Interconnect = a + b + c + d (includes slew-rate adjustment) [ps]

Min. Total Setup Budget = 0.24 * tck [ps] - from the SDRAM datasheet (includes clock duty-cycle variation)

Setup margin = Min. Total Setup Budget - (Total Controller + Total SDRAM + Total Interconnect) [ps] - must be positive

Table 2: Illustration of DDR3 Write Setup timing analysis summary for DQ vs. DQS

8. PCB Layout

Implementation on a PCB involves a number of tradeoffs to meet SI requirements. Often the question is how far one needs to go. PCB layout is facilitated by the following approach:

1. Set up the topology and constraints in ALLEGRO Constraint Manager.

2. Design the controller BGA breakout. A controller pin arrangement with ADDR/CMD/CNTRL pins in the middle and DQ/DQS/DM byte lanes on either side is best suited. Within these groups, individual pins may need to be swapped to ensure routing with minimum cross-over.

3. Attempt routing with reduced stub length and the minimum trace spacing obtained from crosstalk simulation. Often most stubs can be eliminated, but not for all pins. One may try two traces between the BGA pads of the memory devices; this requires narrow PCB traces, which can increase manufacturing cost, and it will still not be possible for all signals unless micro-via and via-in-pad technology is used. Complete the routing with coarse length-matching tolerances.

4. Place Vref decoupling capacitors close to the Vref pins. VTT decoupling can be placed at the far end of the last SDRAM, where it will not interfere with routing. VDD decoupling can be placed close to devices where possible without blocking routing channels, with the smaller-valued capacitors closer to the devices. With a proper decoupling design it is not necessary to cram all capacitors close to the devices. All decoupling capacitors should use a fan-out footprint designed for reduced inductance, typically two short wide traces perpendicular to the capacitor length. This can be automated with a user-defined capacitor footprint attached to all the decoupling capacitors in the schematic.

5.
Implement fine length matching and insert multiple ground vias where signal traces jump layers. It is best to use the delay matching option in ALLEGRO, and one must include the z-axis delay. Typically, the P and N nets of differential pairs should be matched with a tolerance of +/- 2 ps, and the tolerance for all other matched nets can be +/- 10 ps or more, based on the timing margin computation.

9. DIMM

The considerations described above apply to PCBs containing one or more DIMMs. The only exception is that the decoupling requirement for the memories can be relaxed, as it is already accounted for on the DIMM PCB. SI analysis of registered DIMMs is also much simpler, since the DIMM is treated as a single load. While the routing topology for ADDR/CMD/CNTRL nets is usually a daisy chain with reduced stubs, tree topologies can also be used for registered DIMMs. Analysis of unbuffered DIMMs can become tedious, as the timing requirement at all the SDRAMs must be analyzed. DIMM routing on 4-layer PCBs is relatively simple compared to the case of discrete SDRAMs.

10. Examples

The approach described above has been used in the implementation of a DDR2 PCB, a DDR3 PCB and a DDR3-DIMM PCB. The controller is from MOSAID [7] and is designed to provide both DDR2 and DDR3 functionality. IBIS models have been used for the SI simulations. Models for the memories are from Micron Technology, Inc. [8]. The IBIS models for the DDR3 SDRAMs were available at 1333 Mbps speed; these were used at 1600 Mbps. For the unbuffered DDR3 DIMM (MT_DDR3_0542cc), EBD models from Micron Technology were used. All waveforms are for the typical case and are computed at the SDRAM die. The 6-layer PCB stackup of Figure 2 is used, with routing on the TOP and BOTTOM layers only. The memory consists of 2 SDRAMs routed as a daisy chain. In the case of the DIMM, a single unbuffered DIMM is used. Snapshots of TOP/BOTTOM layer routing and signal integrity waveforms are shown in Figures
9-11.

Figure 9: Illustration of TOP and BOTTOM layers of a DDR3 PCB with computed waveforms at the farthest SDRAM. The waveform on the left is an ADDRESS net compared to the CLOCK net; the waveform on the right is a DATA net compared to a DQS net. Clock frequency = 800 MHz; data rate = 1600 Mbps.

Figure 10: Illustration of TOP and BOTTOM layers of a DDR2 PCB with computed waveforms at the farthest SDRAM. The waveform on the left is an ADDRESS net compared to the CLOCK net; the waveform on the right is a DATA net compared to a DQS net. Clock frequency = 400 MHz; data rate = 800 Mbps.

Figure 11: Illustration of TOP and BOTTOM layers of a DDR3-DIMM PCB with computed waveforms at the 8th (last) SDRAM on the DIMM. The waveform on the left is an ADDRESS net compared to the CLOCK net; the waveform on the right is a DATA net compared to a DQS net.

Lastly, Figure 12 shows a comparison of computed and measured DATA eye patterns of an 800 Mbps DDR2. In all cases the waveform integrity can be seen to be excellent.

Figure 12: Computed (red) and measured (blue) waveforms of a data net of an 800 Mbps DDR2 PCB.

11. Conclusion

In this paper, all aspects related to SI and PI of DDR2 and DDR3 implementation have been described. Use of Constraint Manager in ALLEGRO makes implementation easy. While a four-layer PCB implementation of 800 Mbps DDR2 and DDR3 appears feasible, DDR3-1600 Mbps will prove challenging. The picture will become clearer as the memory devices become available and one has a good handle on the timing numbers.

References

[1] DDR2 SDRAM Specification, JEDEC JESD79-2B, January 2005.
[2] DDR3 SDRAM Standard, JEDEC JESD79-3, June 2007.
[3] Syed Bokhari, "Delay Matching on Printed Circuit Boards", Proceedings of CDNLIVE 2006, San Jose.
[4] Larry D. Smith and Jeffrey Lee, "Power Distribution System for JEDEC DDR2 Memory DIMM", Proc. IEEE EPEP Conference, Princeton, N.J., pp. 121-124, October 2003.
[5] Hardware and Layout Design Considerations for DDR2 SDRAM Memory Interfaces, Freescale Semiconductor Application Note AN2910, Rev. 2, 03/2007.
[6] DDR2 Design Guide for 2-DIMM Systems, Micron Technology Inc. Technical Note TN-47-01, 2003.
[7] /corporate/products-services/ip/SDRAM_Controller_whitepaper_Oct_2006.pdf
[8] /products/dram/ddr2/partlist.aspx?speed=DDR2-800
[9] /products/dram/ddr3/partlist.aspx?speed=DDR3-1066
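As a footnote to Sections 7 and 8: the Table 2 bookkeeping and the length-matching tolerances reduce to simple arithmetic. The sketch below is hedged; every picosecond number is a placeholder rather than a real DDR3 datasheet value, and the 170 ps/in propagation delay is an assumed typical inner-layer figure, not one given in the text.

```python
def setup_margin_ps(budget_ps, controller_ps, sdram_ps, interconnect_ps):
    """Setup margin = Min. Total Setup Budget - (Controller + SDRAM + Interconnect)."""
    return budget_ps - (controller_ps + sdram_ps + interconnect_ps)

def length_tolerance_mils(tol_ps, delay_ps_per_inch=170.0):
    """Convert a delay-matching tolerance (ps) into a length tolerance (mils)."""
    return tol_ps / delay_ps_per_inch * 1000.0

tck_ps = 1250.0                      # 1600 Mbps DDR3 -> 800 MHz clock -> 1250 ps
budget = 0.24 * tck_ps               # Min. Total Setup Budget = 0.24 * tck (Table 2)
margin = setup_margin_ps(budget, controller_ps=100.0, sdram_ps=125.0,
                         interconnect_ps=50.0)   # placeholder skew totals

# The +/-10 ps matching tolerance of Section 8, expressed as trace length
tol_mils = length_tolerance_mils(10.0)
```

With these placeholder numbers the margin closes at 25 ps, and +/-10 ps corresponds to roughly +/-59 mils of trace; the point is only the shape of the computation, not the numbers.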
Multiple-choice questions

- All of the following roles are stakeholders except:
- All of the following are important system attributes except:
- Which of the following are elements of a SA:
- Which of the following is not a precondition for architectural review?
- ATAM outputs include:

Short-answer questions

1. What is the Architecture Business Cycle (ABC)?
The ABC is a cycle of influences, from the environment to the architecture and back to the environment.

2. List 5 architecture patterns/styles.
Data flow, call-and-return (procedure call), event-driven, shared-information (repository), and layered styles.

3. How is an architectural pattern/style determined?
By a set of component types, a set of connector types/interaction mechanisms, a topological layout of these components, and a set of constraints on topology and behavior.

4. Bass et al. classify all architecture structures into 3 main categories; what are they?
- Module-based structures: the elements are software modules (the implementation units). Includes decomposition, uses, layered, class.
- Component-and-connector structures: the elements are run-time components. Includes process communication, concurrency (parallelism), shared data production and consumption, and client-server communication.
- Allocation structures: per the list in question 5, these include deployment, implementation, and work assignment.

5. List 5 architecture structures according to Bass et al.
For example: decomposition, uses, layered, class, client-server; also process, concurrency, shared data, deployment, implementation, and work assignment.

6. What is a quality attribute scenario?
A means to characterize system quality attributes and a unified way to express quality requirements. It consists of 6 parts: stimulus source, stimulus, environment, artifact affected, system response, and measurement of response.

7. Discuss the benefits of architectural reviews.
Five different types of benefit result from holding architectural reviews: financial benefits; it forces preparation for the review; early detection of problems; validation of requirements; and improved architectures.

8. When can architectural reviews begin?
Timing of the review: an early "architecture discovery review" is done after requirements are set but before the architecture is firm. It is used to understand the implications of the requirements on the architecture, to check requirements feasibility, and to prioritize architectural goals. A full architectural review is done when architectural documentation is available and is used to evaluate the qualities of the proposed architecture.

9. What is an unplanned architectural review? Why should the organization have it?
An unplanned evaluation is unexpected and is usually the result of a project in serious trouble taking extreme measures to try to salvage previous effort. It usually occurs when the project is already struggling, often devolves into finger-pointing, and can be painful for the project team.

10. What is brainstorming?
Brainstorming lets participants open their minds so that ideas collide and spark a creative storm; it can be divided into direct brainstorming and challenge (critical) brainstorming.
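The six-part quality attribute scenario from question 6 above maps naturally onto a small record type. A minimal sketch; the example values are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class QualityAttributeScenario:
    """The six parts of a quality attribute scenario, per the notes above."""
    stimulus_source: str
    stimulus: str
    environment: str
    artifact: str
    response: str
    response_measure: str

s = QualityAttributeScenario(
    stimulus_source="end user",
    stimulus="initiates a transaction",
    environment="normal operation",
    artifact="the system",
    response="transaction is processed",
    response_measure="average latency below two seconds",
)
```

Writing scenarios in this fixed shape is what makes them a "unified way to express quality requirements": every requirement names the same six slots.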
www.3ds.com | © Dassault Systèmes

Lesson 3: ATOM Workflow and Options (2.5 hours)

Lesson content:
- Abaqus Model
- Optimization Tasks
- Design Responses
- Objective Functions
- Constraints
- Geometric Restrictions
- Stop Conditions
- Postprocessing
- Workshop 2a: Topology Optimization of a Cantilever Beam With Stamping Geometric Restrictions
- Workshop 2b: Topology Optimization of a Cantilever Beam With Demold Control Using the Central Plane Technique
- Workshop 2c: Topology Optimization of a Cantilever Beam With Symmetry Geometric Restrictions

Abaqus Model

The Abaqus model must be ready prior to the setup of the optimization. Although not necessary, it is helpful to create sets that can be used later to define the optimization regions. Shown on the right: a set was created to define the region (cell) where the stamping geometric restriction will be applied.

Optimization Tasks (1/6)

An optimization task identifies the type of optimization and the design domain for the optimization. The task serves to configure the optimization algorithm to be used. Create an optimization task from the Model Tree or the Optimization toolbox as shown, choosing the type of optimization task accordingly. Each task also contains the design responses, objective functions, constraints, geometric restrictions and stop conditions. In this lecture we discuss the setup of the task for topology optimization.

Optimization Tasks (2/6)

For a topology optimization task, the optimization region is selected next. The elements in the optimization region constitute the design domain. The whole model is selected by default, but often the optimization region will only be a subset of the model. For example, on the right we have removed the deformable shaft from the display so that only the gear is selected as the optimization region.

Optimization Tasks (3/6)

Having chosen the optimization type and region, it is now possible to configure the optimization. The Basic tab of the optimization task editor allows the user to choose whether the load and boundary regions are to be kept frozen. Frozen areas are discussed later in the context of geometric restrictions.

Optimization Tasks (4/6)

The Density tab allows the user to change the density update strategy and configure other related parameters. These settings are only available for the sensitivity-based method. Tip: these parameters rarely need to be changed; if necessary, use a more conservative strategy for a more stable optimization.

Optimization Tasks (5/6)

The Advanced tab allows the user to switch to the condition-based approach if desired. The condition-based approach is usually preferred for stiffness optimization, although the sensitivity-based approach is also able to optimize on stiffness. For the condition-based approach, the user can configure the speed of the update scheme and the volume deleted in the first cycle. The advanced option "Delete soft elements in region" is recommended when solving problems where soft elements may distort excessively and cause convergence difficulty.

Optimization Tasks (6/6)

For sensitivity-based optimization the user may choose between the SIMP and the RAMP material interpolation techniques. RAMP is preferred for problems that are more dynamic in nature because the interpolation scheme is always concave. Criteria for convergence can be set here.
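As an aside on the two interpolation schemes just named: in the topology optimization literature SIMP is usually written E(rho) = rho**p * E0 and RAMP as E(rho) = rho / (1 + q*(1 - rho)) * E0. The sketch below uses these textbook forms; the exact expressions and defaults inside ATOM are not stated in the slides, so treat p = 3 and q = 8 as assumptions.

```python
def simp(rho, p=3.0):
    """SIMP: relative stiffness = rho**p (assumed penalty p = 3)."""
    return rho ** p

def ramp(rho, q=8.0):
    """RAMP: relative stiffness = rho / (1 + q*(1 - rho)) (assumed q = 8)."""
    return rho / (1.0 + q * (1.0 - rho))
```

Both schemes give zero stiffness at rho = 0 and full stiffness at rho = 1, but RAMP's slope at rho = 0 is 1/(1+q) rather than 0, which is one reason a concave scheme behaves better for dynamic problems.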
Default criteria are usually sufficient. Note: the default penalty factor has been chosen carefully. Values less than 3 shouldn't be used, and values greater than 3 significantly increase the chance of getting trapped in a local minimum.

Design Responses (1/3)

Design responses are output variables that can be used to describe objective functions and constraints. All available design responses for sensitivity-based optimization are shown on the right. Condition-based optimization can only have strain energy as the objective and volume as the constraint. Design responses can be a summation of values in the region or the maximum/minimum of that region, and they can also be summed across steps/load cases.

Design Responses (2/3)

A design response can be a combination of previously defined design responses. For example, on the right we have constructed design response D-Response-3 as a weighted combination of D-Response-1 and D-Response-2. Sensitivity-based optimization supports the following operators: weighted combination, difference, and absolute difference.

Design Responses (3/3)

Condition-based optimization supports many more operators for creating combined terms.

Objective Functions (1/2)

Objective functions can be created from any previously defined design responses, whether single-term or combined-term. Furthermore, the objective function is always a weighted sum of the specified design responses. Reference values are constants subtracted from the design response; they are meaningless for a condition-based topology optimization.

Objective Functions (2/2)

Three objective target formulations are supported in topology optimization:
- MIN: minimizes the weighted sum of the specified design responses
- MAX: maximizes the sum of the specified design responses
- MIN_MAX (minimize the maximum load case): minimizes the maximum of the two (or more) design responses specified in the objective function editor

Constraints (1/2)

Constraints are an integral part of a topology optimization. An unconstrained topology optimization is not allowed; an error is issued for such cases. In a condition-based topology optimization, only volume constraints are allowed and they are enforced as equality constraints.

Constraints (2/2)

In sensitivity-based optimizations, many more constraints are allowed. Filter by constraint while creating the design response to see which output variables can be chosen as constraints (shown below). Combined terms are allowed as constraints (shown bottom right). Constraints are always inequalities in sensitivity-based optimization.

Geometric Restrictions (1/7)

Geometric restrictions are additional constraints which are enforced independently of the optimization. They can be used to enforce symmetries or minimum member sizes that are desired in the final design. Demold control is perhaps the most important geometric restriction: it enables the user to place constraints such that the final design can be manufactured by casting.

Geometric Restrictions (2/7)

Frozen area. Frozen area constraints ensure that no material is removed from the regions designated as frozen (the relative density there is always 1). These constraints are particularly important in regions where loads and boundary conditions are specified, since we do not want these regions to become voids. In the gear example, the gear teeth and the inner circumference were kept frozen, preventing loss of contact with the shaft or loss of the load path.

Geometric Restrictions (3/7)

Member size. Topology optimization can sometimes lead to thin or thick members that are problematic to manufacture. Member size restrictions provide filters to control the size of the members; users input a filter diameter. Note: the maximum thickness restriction (and therefore the envelope restriction) is available only in sensitivity-based optimization, and the exact member size specified by the filter diameter is not guaranteed.

Geometric Restrictions (4/7)

Demold control. If the topology obtained from the optimization is to be produced by casting, the formation of cavities and undercuts needs to be prevented by using demold control.
- Demold region: the region where the demold control restriction is active.
- Collision check region: the region where it is checked whether the removal of an element results in a hole or an undercut. This region is the same as the demold region by default and should always contain at least the demold region.
- Pull direction: the direction in which the two halves of the mold would be pulled (as shown, bottom right).
- Center plane: the central plane of the mold (as shown, bottom right); it can be specified or calculated automatically.

Geometric Restrictions (5/7)

Demold control (cont'd). The stamping option enforces the condition that if one element is removed from the structure, all others in the +/- pull direction are also removed. In the gear example, a stamping constraint was used to ensure that only through holes are formed. Forging is a special case of casting.
The forging die needs to be pulled in only one direction. The forging option creates a fictitious central plane internally on the back plane (shown below) so that pulling takes place in only one direction.

Geometric Restrictions (6/7)

Symmetry. Topology optimization of symmetrically loaded components usually leads to a symmetric design. In case we want a symmetric design but the loading isn't symmetric, it is necessary to enforce symmetry: plane symmetry, rotational symmetry, cyclic symmetry, or point symmetry.

Geometric Restrictions (7/7)

It is possible to overconstrain the optimization, so care must be taken when specifying combinations of geometric restrictions. Examples:
- Planar symmetry can be combined with a pull direction if the pull direction is perpendicular or parallel to the symmetry plane.
- Rotational symmetry can be combined with the definition of a pull direction if the pull direction is parallel to the axis of rotation.
- Two reflection symmetries can be combined if the planes are perpendicular.

In general, begin the optimization study without geometric restrictions.
Add them into the model one by one.

Stop Conditions

The optimization may be stopped before convergence is achieved if the stop conditions are met. Stop conditions can be constructed on displacements and stresses. Note: stop conditions are only supported in shape optimization.

Postprocessing (1/10)

The relative densities of the elements in the optimization region are available in the field output variable MAT_PROP_NORMALIZED.

Postprocessing (2/10)

In order to access the field output showing the relative densities of elements, switch to the step named ATOM OPTIMIZATION. From the main menu bar, select Results → Step/Frame, select ATOM OPTIMIZATION as the step to visualize, and plot contours of MAT_PROP_NORMALIZED. Note: only the undeformed shape will be plotted. If the deformed shape is desired, switch back to Step-1_Optimization (or as named in your model).

Postprocessing (3/10)

Isosurfaces. The soft elements can be visualized as voids using the Opt_surface cut in the View Cut Manager. Relative densities of the elements are centroidal quantities that are extrapolated and averaged at the nodes in order to obtain field output. An isosurface is created that separates the soft elements from the hard elements.

Postprocessing (4/10)

What went wrong here? Can we tell by looking at stress or displacement plots? (Compare the extractions at iso value = 0.9 and iso value = 0.3.)

Postprocessing (5/10)

Note: always plot MAT_PROP_NORMALIZED as field output and ensure that the isosurface is not cutting through fully dense elements.

Postprocessing (6/10)

Below, isosurfaces are generated on element output (MAT_PROP_NORMALIZED) that is averaged at nodes with the averaging threshold at 100% (iso value = 0.9 and iso value = 0.3).

Postprocessing (7/10)

Extraction. Extraction is the process of obtaining a surface mesh (STL format or its equivalent in an Abaqus input file) from a topology optimization result. Once the isosurface is identified, new interior edges and surfaces are identified; nodes are created on interior faces and a triangular mesh is created on the portion of the model to be retained.

Smoothing. The isosurface provides first-order smoothing of a topology optimization result. During extraction the nodes on the interior surfaces are moved to achieve additional smoothing of the isosurface.

Postprocessing (8/10)

Extraction (cont'd). Reduction is the process of reducing the number of triangles in the STL representation. This is useful when converting a large STL file to a SAT file, which can be imported and meshed in Abaqus for further analysis. Note: you will need to use other DS tools such as SOLIDWORKS or CATIA for this conversion.

Postprocessing (9/10)

Optimization report. Ensure that the optimization constraints have been satisfied within tolerance. Optimization_report.csv is created in the working directory. An excerpt (the iteration number of one mid-run row was lost in extraction):

ITERATION     OBJECTIVE-1  OBJ_FUNC_DRESP:COMPLIANCE  OBJ_FUNC_TERM:COMPLIANCE  OPT-CONSTRAINT-1:EQ:VOL
Norm-Values:  0.6456477    0.6456477                  0.6456477                 0.8000001
0             0.6456477    0.6456477                  0.6456477                 1
1             0.6497207    0.6497207                  0.6497207                 0.948712
2             0.6501995    0.6501995                  0.6501995                 0.9437472
3             0.6512569    0.6512569                  0.6512569                 0.9382778
4             0.6520502    0.6520502                  0.6520502                 0.9331822
...
..            0.6916615    0.6916615                  0.6916615                 0.8315618
23            0.6954725    0.6954725                  0.6954725                 0.8268944
24            0.7028578    0.7028578                  0.7028578                 0.8217635
25            0.8512989    0.8512989                  0.8512989                 0.8169149
26            0.7232164    0.7232164                  0.7232164                 0.8110763
27            0.7404507    0.7404507                  0.7404507                 0.8057563
28            0.7356095    0.7356095                  0.7356095                 0.8024307

Postprocessing (10/10)

History output. Optimization_report.csv should not be accessed while the optimization is running; use the history output variables in Abaqus/CAE to monitor constraints and objectives.

Workshop 2a: Topology Optimization of a Cantilever Beam With Stamping Geometric Restrictions

In this workshop you will become familiar with setting up, submitting and postprocessing a topology optimization problem with a stamping geometric restriction.

Workshop 2b: Topology Optimization of a Cantilever Beam With Demold Control Using the Central Plane Technique (30 minutes)

In this workshop you will further explore demold control geometric restrictions, specifically with the central plane technique, which ensures that the final design proposal is moldable.

Workshop 2c: Topology Optimization of a Cantilever Beam With Symmetry Geometric Restrictions

In this workshop you will explore the various symmetry restrictions available in the topology optimization module and use symmetry restrictions to create specific patterns in the design area, as required for ease of manufacturing a particular component.
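A quick way to apply the "check constraints within tolerance" advice above is to parse the report programmatically. The sketch below assumes a simple comma-separated layout with the column names shown in the excerpt; a real Optimization_report.csv may be formatted differently, and the tolerance of 0.01 is an arbitrary illustrative choice.

```python
import csv
import io

# Hypothetical excerpt shaped like the report above (not a real file dump)
report = """ITERATION,OBJECTIVE-1,OPT-CONSTRAINT-1:EQ:VOL
0,0.6456477,1.0
1,0.6497207,0.948712
28,0.7356095,0.8024307
"""

rows = list(csv.DictReader(io.StringIO(report)))
final = rows[-1]

# Check that the equality volume constraint closed on its 0.8 target
ok = abs(float(final["OPT-CONSTRAINT-1:EQ:VOL"]) - 0.8) < 0.01
```

In a real workflow this check would run after the job finishes, never against the file while the optimization is still writing it, per the note above.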
1. Software theory includes: computational models and computability theory, foundations of algorithm theory, algorithm design and analysis, foundations of programming language theory, programming language design and compilation techniques, mathematical logic, and data abstraction together with implementation techniques for basic data types.

Software engineering, by contrast, is full of methodology: how to analyze, how to design, how to program, how to test, how to maintain, and so on. The quality of software models and software systems therefore depends to a large extent on the experience and skill of the developers themselves. Because the software development process lacks a theoretical characterization, and the many methodologies have not been distilled into theory, the discipline can only be called "engineering".

2. What all "architectures" have in common:
- a set of basic constituent elements: components;
- the connection relationships between these elements: connectors;
- the topological structure formed once these elements are connected: physical distribution;
- the conditions imposed on these elements or their connections: constraints;
- quality: performance.

3. Software architecture (SA):
- provides a high-level abstraction of structure, behavior and properties;
- considers, from a higher level, the components that make up the system, the connections between components, and the topology formed by component interactions;
- requires that these elements satisfy certain restrictions, follow certain design rules, and be able to evolve in a given environment;
- reflects the design decisions that have an important influence on system development, facilitates communication among the various stakeholders, and reflects multiple concerns, so that a system developed from it can fulfil the specified functional and performance requirements.

4. Architecture = components + connectors + topology + constraints + quality.

5. Goal: improve software quality.
- Functional properties: the ability to fulfil the user's functional requirements.
- Non-functional properties: the ability to realize those functional requirements reasonably and efficiently, at the performance level the user demands.

6. Software architecture is concerned with how to divide a complex software system into modules, how to standardize the composition and performance of those modules, and how to organize them into a complete system.

Its main objective is to establish a consistent set of system views, expressed in the structural forms needed by end users and software designers, supporting communication and understanding between users and designers. This splits into two aspects:
- outward goal: establish system requirements that satisfy the end users;
- inward goal: establish a composition of system components that satisfies the system designers and eases implementation, maintenance and extension.
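The slogan in point 4 (architecture = components + connectors + topology + constraints + quality) can be made concrete as a small record type. The field names follow the notes; everything else is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Architecture:
    """Point 4 above: components + connectors + topology + constraints + quality."""
    components: list   # basic constituent elements
    connectors: list   # connection relationships between elements
    topology: dict     # layout formed once elements are connected
    constraints: list  # conditions imposed on elements or connections
    quality: dict      # performance and other quality attributes

arch = Architecture(
    components=["client", "server"],
    connectors=["http"],
    topology={"client": ["server"]},
    constraints=["clients never talk to each other directly"],
    quality={"latency_ms": 200},
)
```

The value of the slogan is that each of the five slots is a separate design concern that can be reviewed and evolved on its own.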
a r X i v :h e p -p h /9310246v 1 8 O c t 1993RAL-93-071October 1993Determining the gluon distributions in the proton and photon from two-jet production at HERA J.R.Forshaw and R.G.Roberts Rutherford Appleton Laboratory,Chilton,Didcot OX110QX,England.Abstract Two-jet production from the direct photon contribution at HERA is a sensitive mea-sure of the small-x gluon in the proton.We propose measurements of ratios of the jet cross-sections which will clearly distinguish between gluons with or without singular be-haviour at small x .Furthermore,we show that analogous ratio measurements for the resolved photon contribution provide a sensitive way of determining the gluon distribu-tion in the photon.IntroductionOne of the aims of HERA is to study the structure of the proton and in particular to learn about the distribution of gluons at small x(i.e.x<∼10−2).Here the gluon is expectedto be dominant and we are in an unexplored region of lepton-hadron scattering.The latest measurements of the proton structure function F p2(x,Q2)from H1[1]and ZEUS[2]show the distinct rise for x<∼10−2which has encouraged speculation that this may be a signal of so-called‘Lipatov’behaviour[3].On the other hand it is possible to generate such a rise using conventional leading log Q2evolution from a‘valencelike’gluon distribution at Q20=0.3GeV2 [4].Measuring F p2however is only an indirect probe of the gluon and this is the main reason why the gluon is the least constrained of all the parton distributions to date.There have been several methods proposed which aim to extract the small-x gluon content of the proton at HERA rather more directly[5,6,7].Here,we present an analysis of two-jet photoproduction which utilises the separation of events into so-called‘direct’and‘resolved’components.Through the construction of appropri-ate cross section ratios we show that HERA should provide a sensitive and direct measurement of the small-x gluon in the proton in addition to revealing important information 
regarding the gluon in the photon.Thefirst step we take is analogous to the procedure previously considered for hadronic two-jet production[8,9]where the configuration of‘same-side’jets allows very small x values of the initial partons to be examined.At HERA we propose isolating the contribution to jet photoproduction from‘direct’photon interactions(seefig.1(a)).We focus on the region where both jets have equal rapidities and are travelling in the electron direction(which we define to be that of positive rapidity).Such large and positive rapidities mean that we are sensitive to the small-x gluon within the proton.The second step is to consider contributions like those offig.1(b),where the photon is resolved into its partonic components.We now study the region of negative jet rapidities where,conversely to thefirst step,the cross section is sensitive to the parton distributions in the photon for values of xγaround0.2−0.5.With the proton gluon distribution already constrained from step one,the resolved photon contribution to two-jet production provides a sensitive measure of the parton distributions in the photon.The possibility of using the two-jet data from HERA to provide information on the parton content of the photon,especially on thegluon distribution has been emphasised by several groups [10,11,12].We wish to emphasise that measurement of ratios of the two-jet cross sections is a particularly clean and sensitive way of discriminating between different parametrisations of the gluon distribution which tends to minimise the experimental and theoretical uncertainties involved.Step 1−the gluon in the protonTo begin with,consider the direct photon contribution to two-jet production,e.g.see fig.1(a).The two-jet cross section may be written to leading order (LO)asd 3σdir d ˆt(1)where f γ/e (z )is the probability that the incoming electron will emit an effectively real photon with a fraction z of its momentum and f j/p (x,µ)is the momentum distribution of parton j 
inside the proton evaluated at scale µ.The laboratory rapidities (y 1and y 2)of the final state partons (emitted with transverse momentum p T )can be expressed in terms of the momentum fractions z and x :z =p T s E e (e y 1+e y 2)x =p Ts E p (e −y 1+e −y 2).(2)The energies of the incoming electron and proton are E e and E p respectively and the centre-of-mass energy is√E e E p .In this analysis we shall study only the configuration where the two jets have equal rapidities,i.e.y 1=y 2=y .At HERA energies,choosing y ∼1and p T =5GeV,we find x ≃.003.There are two subprocesses in the direct photon contribution:(i)γg →q ¯q and (ii)γq (¯q )→g q (¯q )and the LO cross section of eqn.(1)takes the simple formd 3σdir4p 4T zf γ/e (z ) ( i e 2i )xg (x,µ)+(10/3)F p 2(x,µ) .(3)Since xg (x,µ)is much larger than x ¯q (x,µ)at small x then (i)dominates the cross section for x <∼10−2.Thus evaluating ratios of the above cross section at different values of y effectively measures the ratio of the gluon distribution at different values of x .We suggest measuring the ratio σdir (y )/σdir (y =−0.5)(where we have integrated over all p T ≥5GeV in eqn.(3)toconstructσdir(y)).Fig.2shows the expectations for this ratio from various proposed gluons: MRSD0′,MRSD−′[13]and GRV[4].The D−′curve is generated from a singular gluon which, atµ2=4GeV2and small x,behaves as xg(x,µ)∼x−1/2which is in contrast to the D0′curve which corresponds to xg(x,µ)→constant as x→0atµ2=4GeV2.The GRV prediction leads to a singular gluon distribution forµ2>∼1GeV2due to the long evolution time whichresults from starting evolution at low scales.The curves offig.2were computed with z in the range0.2≤z≤0.7.These are typical HERA cuts and are intended to reduce background uncertainties due to beam gas interactions(z>0.2)and DIS interactions(z<0.7),they are also responsible for the cusps around y=0.1.The scaleµwas taken to be p T/2,as specified by the prescription of Ellis,Kunszt and Soper [14]who have shown that the scaleµfor 
which the leading order calculation reproduces the less scale-dependent O(α_s³) result is given by:

\[ \mu \approx \frac{p_T}{2} \cosh\left(\frac{y_1 - y_2}{2}\right) \;\longrightarrow\; \frac{p_T}{2} \quad \text{when } y_1 = y_2. \tag{4} \]

The choice μ = p_T/2 when y_1 = y_2 is also consistent with the results of Bödecker for two-jet direct photoproduction [15]. Varying the value of μ, e.g. to p_T, does not seriously alter the curves in fig. 2. As the figure demonstrates, this is a sensitive measure, since it effectively compares the magnitude of the gluon at small x to that around x ∼ 0.1. The ratio differs by as much as 40% between the MRSD0′ and MRSD−′ predictions, and experiment should easily be able to distinguish between them. Note that we choose to evaluate the cross sections at y_1 = y_2 since this maximises the sensitivity to the particular choice of gluon distribution; otherwise contributions from larger x values tend to smear out the small-x contribution. Of course, the predictions we have for the ratio in fig. 2 do assume that one can indeed separate the direct and resolved components of the photon on an event-by-event basis. Recent preliminary results from HERA demonstrate that making a suitable cut on x_γ (see fig. 1(b)), e.g. x_γ ≥ 0.75, can provide the required clean separation [16]. Furthermore, we have checked, in LO, that the direct cross section is almost unaltered after including the resolved component with x_γ ≥ 0.75.

Step 2 − the gluon in the photon

We next turn to the resolved contribution, e.g. fig. 1(b), where the two-jet cross section, to leading order, is given by

\[ \frac{d^3\sigma^{res}}{dy_1\, dy_2\, dp_T^2} = \sum_{i,j} z f_{\gamma/e}(z)\, x_\gamma f_{i/\gamma}(x_\gamma,\mu)\, x f_{j/p}(x,\mu)\, \frac{d\hat{\sigma}_{ij}}{d\hat{t}}, \tag{5} \]

and the expression for z in eqn. (2) now gives the product z x_γ. In the previous section we saw that choosing y_1, y_2 large and positive led to small x for the proton; now we see that choosing y_1, y_2 large and negative leads to small z x_γ. If we compute the average values of x_γ we find that typically we are exploring the range 0.18 < x_γ < 0.5 for −2 < y_1 = y_2 < 0. Instead of having only two subprocesses, as in step one, we now have the full set of leading order graphs which contribute to jet cross sections in hadron-hadron scattering. We will assume
that, from the analysis in step one, the small-x behaviour of the partons in the proton is now sufficiently pinned down that an analysis of the two-jet cross section can yield direct information on the partons in the photon. Again we take the scale μ = p_T/2. Fig. 3 shows the ratio σ_res(y)/σ_res(y = 0) for various choices of the photon parton densities; it is indeed a sensitive measure of the gluon density in the photon.

Conclusions

We have proposed that a measurement of the two-jet cross section ratio, σ_dir(y)/σ_dir(y = −0.5), for y_1 = y_2 = y, from photoproduction at HERA will provide a sensitive test of the gluon density in the proton. The relatively large values of y which are accessible enable the gluon content of the proton to be probed in the interesting small-x region. Since we expect many of the experimental uncertainties involved in computing the ratio to cancel (and that any rapidity dependence of the detector acceptance can be corrected for), we expect that the construction of such a cross section ratio will be experimentally straightforward.

Already, events due to hard two-jet production at HERA have been seen clearly for p_T up to 20 GeV [16]; even a separation of these events into 'enriched direct γ' and 'enriched resolved γ' samples has been performed. These observations are extremely encouraging for the method we suggest since, in step one, the small-x gluon in the proton is determined from the 'clean' direct photon component. Having pinned down the gluon in the proton, one can then proceed to extract information on the gluon density in the photon. This is step two of our recipe, in which only the resolved component of the photon contributes and where a measurement of the ratio, σ_res(y)/σ_res(y = 0), can go a long way in discriminating between the various proposed gluon distributions of the photon. We believe that using ratios of the cross sections in this way reduces both experimental and theoretical uncertainties and allows a relatively simple and logical procedure to provide a sensitive determination of the gluon densities in both the proton and photon.

Acknowledgements

It is a pleasure to thank
Jonathan Butterworth for many helpful discussions and suggestions.

References

[1] H1 Collaboration, I. Abt et al., DESY report DESY 93-117, Aug. 1993.
[2] ZEUS Collaboration, M. Klein et al., DESY report DESY 93-110, Aug. 1993.
[3] E.A. Kuraev, L.N. Lipatov and V.S. Fadin, Sov. Phys. JETP 45 (1977) 199; Ya.Ya. Balitsky and L.N. Lipatov, Sov. J. Nucl. Phys. 28 (1978) 822.
[4] M. Glück, E. Reya and A. Vogt, Phys. Lett. B306 (1993) 391.
[5] A.M. Cooper-Sarkar et al., Zeit. Phys. C39 (1988) 281.
[6] A.D. Martin, C.K. Ng and W.J. Stirling, Phys. Lett. B191 (1987) 200.
[7] J. Blümlein and M. Klein, DESY report DESY 92-038 (1992); K. Prytz, Phys. Lett. B311 (1993) 286.
[8] A.D. Martin, W.J. Stirling and R.G. Roberts, RAL preprint RAL-93-047 (1993).
[9] CDF Collaboration: contribution to the EPS Conference on High Energy Physics, Marseilles, July 1993, preprint FERMILAB-Conf-93/203-E.
[10] L.E. Gordon and J.K. Storrow, Manchester University preprint M/C.TH.92/06 (1992).
[11] M. Drees and R. Godbole, Phys. Rev. Lett. 61 (1988) 682; Phys. Rev. D39 (1989) 169.
[12] H. Baer, J. Ohnemus and J.F. Owens, Zeit. Phys. C42 (1988) 657.
[13] A.D. Martin, W.J. Stirling and R.G. Roberts, Phys. Lett. B306 (1993) 145.
[14] S.D. Ellis, Z. Kunszt and D.E. Soper, Phys. Rev. Lett. 69 (1992) 1496.
[15] D. Bödecker, Zeit. Phys. C59 (1993) 501.
[16] R. Klanner and A. de Roeck, HERA results presented at the EPS Conference on High Energy Physics, Marseilles, 1993.
[17] M. Glück, E. Reya and A. Vogt, Zeit. Phys. C53 (1992) 127.
[18] H. Abramowicz, K. Charchula and A. Levy, Phys. Lett. B269 (1991) 458.

Figure Captions

Fig. 1 (a) A direct photon contribution to the hard two-jet cross section. (b) A resolved photon contribution to the hard two-jet cross section.

Fig. 2 The ratio of d²σ_dir/dy_1dy_2 at y_1 = y_2 = y to its value at y_1 = y_2 = −0.5, for three choices of proton gluon density, i.e. MRSD0′, MRSD−′ [13] and GRV [4].

Fig. 3 The ratio of d²σ_res/dy_1dy_2 at y_1 = y_2 = y to its value at y_1 = y_2 = 0, using (a) MRSD−′ and (b) MRSD0′ for the proton parton densities. The labelled curves correspond to various choices of the photon parton densities, LAC1, LAC3 [18], GS2 [10], GRV [17] and DG [11]. Also shown are the corresponding predictions when the gluon density in the photon is switched off.
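The kinematics and scale choices used above are simple enough to check numerically. The sketch below is illustrative only: the beam energies (E_e = 26.7 GeV, E_p = 820 GeV) and the power-law and flat gluon shapes are assumptions standing in for the HERA values and for the MRSD−′/MRSD0′ fits, not numbers taken from this paper.

```python
import math

E_E, E_P = 26.7, 820.0  # assumed HERA beam energies in GeV (not quoted in the text)

def momentum_fractions(y1, y2, pt):
    """Photon (z) and proton-parton (x) momentum fractions of eqn. (2)
    for jets at rapidities y1, y2 (electron direction = positive rapidity)."""
    z = pt * (math.exp(y1) + math.exp(y2)) / (2.0 * E_E)
    x = pt * (math.exp(-y1) + math.exp(-y2)) / (2.0 * E_P)
    return z, x

def eks_scale(pt, y1, y2):
    """Scale of eqn. (4): mu ~ (pT/2) cosh((y1 - y2)/2), reducing to pT/2
    for the same-side configuration y1 = y2."""
    return 0.5 * pt * math.cosh(0.5 * (y1 - y2))

def gluon_ratio(xg, x_probe, x_ref):
    """Toy version of the cross-section ratio: when gamma-g fusion dominates,
    sigma_dir(y) / sigma_dir(y_ref) tracks xg(x_probe) / xg(x_ref)."""
    return xg(x_probe) / xg(x_ref)

# Same-side jets at y = 1 with pT = 5 GeV probe x of order 10^-3 ...
z, x = momentum_fractions(1.0, 1.0, 5.0)
# ... at the scale mu = pT/2 = 2.5 GeV.
mu = eks_scale(5.0, 1.0, 1.0)
print(f"z = {z:.3f}, x = {x:.4f}, mu = {mu} GeV")

# Caricatures of a singular (D-'-like) and a flat (D0'-like) small-x gluon:
singular = lambda xx: xx ** -0.5  # xg ~ x^(-1/2)
flat = lambda xx: 1.0             # xg -> constant as x -> 0
print(gluon_ratio(singular, x, 0.1))  # strongly enhanced at small x
print(gluon_ratio(flat, x, 0.1))      # flat gluon gives no enhancement
```

With these assumed beam energies the same-side point y = 1, p_T = 5 GeV indeed lands at x of a few times 10^{−3}, and the two toy gluons give cross-section ratios differing by a large factor, which is the discriminating power the ratio measurement exploits.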