Unitarity Constraints on Anomalous Top Quark Couplings to Weak Gauge Bosons
- Format: PDF
- Size: 375.36 KB
- Pages: 18
arXiv:hep-ph/9607413v2  5 Nov 1996

AMES-HET-96-04
October 1996 (revised)

Unitarity Constraints on Anomalous Top Quark Couplings to Weak Gauge Bosons

M. Hosch, K. Whisnant, and Bing-Lin Young
Department of Physics and Astronomy, Iowa State University, Ames, IA 50011, USA

Abstract

If there is new physics associated with the top quark, it could show up as anomalous couplings of the top quark to weak gauge bosons, such as ttZ and tbW vector and axial-vector couplings. We use the processes t t̄ → W+W−, and t t̄ → …

The combined CDF [1] and D0 [2] measurements give a top mass of m_t = 175 ± 9 GeV. The large size of the top quark mass, near the scale of electroweak symmetry breaking, suggests that the interactions of the top quark may provide clues to the physics of electroweak symmetry breaking and possibly evidence for physics beyond the standard model. If the new physics occurs above the electroweak symmetry breaking scale, its effects can be expressed as non-standard terms in an effective Lagrangian describing the physics at or below the new physics scale. Such non-standard interactions, in the form of anomalous vector and axial-vector couplings of the top quark to the W and Z bosons, will affect Z decay widths. The recent measurement of R_b [3], the ratio of the Z → b b̄ and Z → hadrons decay widths, therefore places limits on such couplings.

We parametrize the anomalous contributions by four couplings (Eq. 1): κ^NC_L and κ^NC_R, anomalous left- and right-handed couplings at the ttZ vertex, and κ^CC_L and κ^CC_R, anomalous left- and right-handed couplings at the tbW vertex, each defined relative to the corresponding standard model vertex so that all four κ's vanish in the standard model. In this paper we first determine the regions of these couplings allowed by the precision electroweak data, including the latest measurement of R_b = Γ(b b̄)/Γ(hadrons) [3], and include terms quadratic in the κ's ignored in earlier analyses. We find that these seemingly small quadratic terms have a significant effect on the allowed regions in some cases. We then combine these results with the constraints from tree-level unitarity of the processes t t̄ → W+_L W−_L, t t̄ → Z0_L Z0_L, and t t̄ → Z0_L H, where the L subscript refers to the longitudinal component. We find that the unitarity constraints can place additional limits on the anomalous couplings when the scale of new physics is as low as 2 TeV. Furthermore, since the data is not consistent with the Standard Model at the 90% CL, the unitarity constraints place an upper limit on the scale of new physics represented by the κ's in Eq. 1.

New physics at LEP and SLC can be parametrized in terms of the four parameters S, T, U [11] and δb_b̄ [12], where δb_b̄ ≡ δΓ(Z → b b̄)/Γ(Z → b b̄) and can be expressed in terms of R_b. In terms of the anomalous couplings the leading contributions are (Eqs. 2-5)

S = (1/3π) [2κ^NC_R − κ^NC_L − 3(κ^NC_L)² − 3(κ^NC_R)²] log(µ²/M_Z²),  (2)

while T (Eq. 3) and U (Eq. 4) are likewise proportional to (m_t²/M_Z²) log(µ²/M_Z²) times combinations of the couplings (with a 1/(8π s²_Z) factor appearing in T), and

δb_b̄ ∝ [1/(1 − s²_Z)] (m_t²/M_Z²) { (4s⁴_Z − 18s²_Z + 9) [2κ^CC_L + (κ^CC_L)² − 4(κ^NC_R − κ^NC_L)²] + 4(4s⁴_Z + 2s²_Z − 3) κ^NC_L + 2(4s⁴_Z − 28s²_Z + 15) κ^NC_R − (28s⁴_Z + 2s²_Z − 9)(κ^CC_R)² } log(µ²/M_Z²),  (5)

where µ is the renormalization scale and s²_Z = sin²θ_W(M_Z). We keep the terms quadratic in the κ's in our analysis, as they will affect the result even when the κ's are not large. As in Ref. [6], we choose the scale µ = 2m_t, which assumes that the new physics is related to the top quark mass, and take m_t = 175 GeV and s²_Z = 0.2311. We have also investigated the case µ = m_t and will comment on it later.

Recent data from LEP and SLC imply the following constraints due to new physics contributions [3,10]:

S = −0.28 ± 0.19 (+0.17, −0.08),  (6)
T = −0.20 ± 0.26 (+0.17, −0.12),  (7)
U = −0.31 ± 0.54,  (8)

together with the corresponding constraint on δb_b̄ (Eq. 9), where the value for δb_b̄ is the latest world average [3].

The CLEO measurement of b → sγ [13] also puts a constraint on κ^CC_R [14]. We have updated this limit using the most recent experimental data on m_t and b → sγ to get

−0.03 < κ^CC_R < 0.00.  (10)

We note that κ^CC_R is constrained to be very small and enters into Eqs. 2-5 only quadratically. As will be demonstrated later, the unitarity constraints also depend on κ^CC_R only quadratically, so that we will be able to consistently ignore its effects.

The constraints on the κ's due to the precision electroweak data are most easily seen by looking at the allowed regions when two of the parameters are varied simultaneously.
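As a concrete illustration of how such two-parameter regions are obtained, the short scan below grids the (κ^NC_L, κ^NC_R) plane and keeps the points whose predicted shifts fall inside the quoted windows. It is a minimal sketch, not the paper's code: only the S function follows the leading-log form of Eq. (2), the T and U functions are left as stubs to be replaced by the full expressions of Eqs. (3) and (4), and the grid ranges and the quadrature/1.64σ treatment of the quoted errors are assumptions made purely for illustration.

```python
# Rough sketch: map a two-parameter allowed region by scanning a grid in
# (kappa^NC_L, kappa^NC_R) and testing each point against the quoted windows.
import math

MT, MZ = 175.0, 91.19              # GeV
MU = 2.0 * MT                      # renormalization scale, mu = 2 m_t
LOG = math.log(MU**2 / MZ**2)

def S_obs(kNL, kNR):
    """New-physics contribution to S in the leading-log form of Eq. (2)."""
    return (2.0*kNR - kNL - 3.0*kNL**2 - 3.0*kNR**2) * LOG / (3.0 * math.pi)

# Stubs standing in for Eqs. (3)-(4); fill in the full expressions to
# reproduce the regions of Fig. 1.
T_obs = lambda kNL, kNR: 0.0
U_obs = lambda kNL, kNR: 0.0

def window(central, err1, err2, cl=1.64):
    """Approximate 90% CL interval: the two quoted errors combined in quadrature (assumed)."""
    half = cl * math.hypot(err1, err2)
    return central - half, central + half

WINDOWS = {                        # central values and errors of Eqs. (6)-(8)
    "S": window(-0.28, 0.19, 0.13),   # +0.17/-0.08 symmetrized to 0.13
    "T": window(-0.20, 0.26, 0.15),   # +0.17/-0.12 symmetrized to 0.15
    "U": window(-0.31, 0.54, 0.0),
}

def allowed(kNL, kNR):
    values = {"S": S_obs(kNL, kNR), "T": T_obs(kNL, kNR), "U": U_obs(kNL, kNR)}
    return all(lo <= values[name] <= hi for name, (lo, hi) in WINDOWS.items())

if __name__ == "__main__":
    pts = [(l / 100.0, r / 100.0)
           for l in range(-100, 101) for r in range(-100, 101)
           if allowed(l / 100.0, r / 100.0)]
    print(f"{len(pts)} grid points pass all windows")
    if pts:
        print("kappa^NC_L in [%.2f, %.2f]" % (min(p[0] for p in pts), max(p[0] for p in pts)))
        print("kappa^NC_R in [%.2f, %.2f]" % (min(p[1] for p in pts), max(p[1] for p in pts)))
```

Replacing the stubs with the full expressions of Eqs. (3)-(5) and adding the δb_b̄ window of Eq. (9) yields the kind of region shown in Figs. 1 and 2.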
Figures 1a and 1b show the 90% CL bounds on κ^NC_L versus κ^NC_R and on κ^CC_L versus κ^NC_L, respectively, where the other parameters are set to zero in each case and the renormalization scale is set at µ = 2m_t. The region allowed by all the data is denoted by bold lines. As is evident from the figures, the most important constraints on the allowed regions come from the limits on T and δb_b̄.

To obtain the unitarity constraints we consider the reactions t t̄ → W+_L W−_L, t t̄ → Z_L Z_L, and t t̄ → Z_L H, which are affected by the anomalous couplings of Eq. 1. For each reaction we consider all helicity combinations of the t and t̄. The reactions with transverse vector bosons may be ignored, since their rates are suppressed in comparison with the processes involving longitudinal vector bosons. We are most concerned with amplitudes that grow with increasing center-of-mass energy √s; amplitudes that do not grow with √s are dropped. The four processes that remain are sufficient to constrain the anomalous couplings in our model.

For the process t t̄ → Z_L Z_L the leading amplitudes satisfy T_{++}(t t̄ → Z_L Z_L) = −T_{−−}(t t̄ → Z_L Z_L) and grow as G_F m_t √s times a combination of the anomalous couplings. For t t̄ → W+_L W−_L the diagrams which contribute are the t-channel exchange of a virtual b quark, and the s-channel exchange of the Z boson, Higgs boson, and photon. After retaining only the leading terms, proportional to s and m_t√s, one finds T_{++}(t t̄ → W+_L W−_L) = −T_{−−}(t t̄ → W+_L W−_L) ∝ G_F m_t √s, while the opposite-helicity amplitudes T_{+−}(t t̄ → W+_L W−_L) and T_{−+}(t t̄ → W+_L W−_L) grow linearly with s.

The helicity amplitudes are projected onto partial waves a^J_{m,m′} in the standard way [15], with m = 0 for the same-helicity t t̄ states and for the W+_L W−_L and Z_L Z_L states, and m = ±1 for the opposite-helicity states t+t̄− and t−t̄+. Because several channels are coupled, the J = 0 and J = 1 partial-wave amplitudes form matrices (Eqs. 16 and 19) whose nonzero entries are proportional to G_F times quantities T_3, T_4, and T_5 built from the anomalous couplings; again we have retained only the terms which grow with s. The characteristic equations for the roots of Eqs. 16 and 19 are easily found. The strongest constraint in each case comes from the largest eigenvalue: a^0_max for J = 0 (Eq. 23) and, for J = 1, a^1_max ∝ G_F s (T_3² + T_4² + T_5²)^{1/2} (Eq. 24), where partial-wave unitarity requires a^J_max < 1. Although the importance of the higher partial waves is generally reduced by an overall factor of 2J + 1, some of the J = 1 amplitudes grow linearly with s while the J = 0 amplitudes grow only as m_t√s, so the J = 1 condition provides the strongest constraint at large √s.
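The way these eigenvalue conditions turn into the energy scales discussed below can be illustrated with a short numerical sketch: for a J = 1 amplitude growing linearly with s, solve a^1_max = 1 for √s. The normalization factor, and the choice of setting T_3 = T_4 = T_5 equal to a common coupling κ, are schematic placeholders rather than the explicit combinations of Eqs. 16-24.

```python
# Sketch: the sqrt(s) at which a linearly-growing J=1 partial wave saturates
# unitarity.  NORM and the T_i inputs are assumed placeholders, not the
# paper's explicit expressions.
import math

GF = 1.16637e-5                                  # Fermi constant, GeV^-2
NORM = math.sqrt(6.0) / (16.0 * math.pi)         # assumed overall normalization

def a1_max(s, T3, T4, T5):
    """Largest J = 1 eigenvalue at center-of-mass energy squared s (GeV^2)."""
    return NORM * GF * s * math.sqrt(T3**2 + T4**2 + T5**2)

def sqrt_s_saturation(T3, T4, T5):
    """sqrt(s) in GeV at which a1_max = 1, i.e. where unitarity is saturated."""
    coeff = NORM * GF * math.sqrt(T3**2 + T4**2 + T5**2)
    return None if coeff == 0.0 else math.sqrt(1.0 / coeff)

if __name__ == "__main__":
    # placeholder: take the T_i simply proportional to a common anomalous coupling
    for kappa in (0.05, 0.1, 0.3):
        root_s = sqrt_s_saturation(kappa, kappa, kappa)
        print(f"kappa = {kappa:4.2f}  ->  unitarity saturated near sqrt(s) = {root_s/1000.0:.1f} TeV")
```

With couplings of order 0.1 this schematic estimate lands in the few-TeV range, the same order as the scales quoted in Table I; conversely, demanding that saturation occur above a chosen √s bounds the size of the couplings, which is how the allowed ellipses at fixed √s arise.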
Figure 3 shows the resulting unitarity limits in the same two planes as Fig. 1, for several values of √s at which unitarity is saturated, with the other parameters set to zero in each case. The regions allowed by precision electroweak data are taken from the corresponding cases in Fig. 1. If we assume that partial-wave unitarity is obeyed up to the energy scale of new physics, then the unitarity bounds in Fig. 3 for a given value of √s apply whenever the new physics scale is at or above that √s. The scale at which the unitarity constraints begin to encroach on the region allowed by the LEP and SLC data varies according to the parameter set used; the lowest energy scales for which the unitarity constraints place additional limits on the new physics parameters are of order 2 TeV. Moreover, the LEP and SLC data are not in complete agreement with the Standard Model at the 90% CL (driven largely by the R_b measurement) and so prefer nonzero values for the κ parameters. There is then a maximum value of √s (√s_max) for which both the unitarity and precision electroweak constraints are satisfied. The quantity √s_max for various parameter sets is given in the second column of Table I when the 90% CL LEP and SLC data are used.

When we tighten the LEP and SLC constraints by requiring 68% CL agreement with the data, only κ^NC_R versus κ^CC_L has a region of values consistent with the data when only two parameters are allowed to vary. Figure 4 shows this allowed region and the unitarity constraints for various values of √s. The largest value of √s for which both unitarity and the electroweak constraints are satisfied in this case is

√s_max = 3.7 TeV,  (25)

attained within the 68% CL allowed region of the (κ^NC_R, κ^CC_L) plane.

We have also examined the effect on our results of changing the renormalization scale µ. Since each of the electroweak observables in Eqs. 2-5 is proportional to log(µ²/M_Z²), reducing µ to m_t shifts the allowed regions in Figs. 1 and 2 to larger values of the couplings, which in turn leads to smaller values of √s_max (see Table I). On the other hand, if µ > 2m_t is chosen, then the allowed regions shrink in size; however, such larger values of the renormalization scale µ are not physically reasonable. Therefore, the energy scales listed in Table I and Eq. 25 for µ = 2m_t represent conservative estimates. Choosing a renormalization scale as low as m_t typically reduces these values by 30%.

In summary, our analysis shows that unitarity constraints can impose limits on the anomalous weak gauge couplings of the top quark beyond those given by precision electroweak data if the new physics responsible for these couplings appears at a scale Λ as low as 2 TeV, as indicated by the values of √s listed in Table I.

ACKNOWLEDGEMENTS

We thank Xinmin Zhang for many helpful discussions. This work was supported in part by the U.S. Department of Energy under Contract DE-FG02-94ER40817. M. Hosch was also supported under a GAANN fellowship.

REFERENCES

[1] CDF Collaboration, F. Abe et al., Phys. Rev. Lett. 74, 2626 (1995).
[2] D0 Collaboration, S. Abachi et al., Phys. Rev. Lett. 74, 2632 (1995).
[3] K. Mönig, talk presented at ICHEP-96, Warsaw, Poland, September, 1996.
[4] R. Peccei and X. Zhang, Nucl. Phys. B337, 269 (1990); R. Peccei, S. Peris, and X. Zhang, Nucl. Phys. B349, 305 (1991); B.-L. Young and X. Zhang, Phys. Rev. D51, 6584 (1995).
[5] G. J. Gounaris, M. Kuroda, and F. M. Renard, preprint PM/96-22 and THES-TP 96/06, hep-ph/9606435, June 1996; G. J. Gounaris, F. M. Renard, and C. Verzegnassi, Phys. Rev. D52, 451 (1995), and references therein.
[6] S. Dawson and G. Valencia, Phys. Rev. D53, 1721 (1996); E. Malkawi and C. P. Yuan, Phys. Rev. D50, 4462 (1994).
[7] T. D. Lee and C. N. Yang, Phys. Rev. Lett. 4, 307 (1960); B. Lee, C. Quigg, and H. Thacker, Phys. Rev. D16, 1519 (1977); T. Appelquist and M. S. Chanowitz, Phys. Rev. Lett. 59, 2405 (1987).
[8] K. Whisnant, B.-L. Young, and X. Zhang, Phys. Rev. D52, 3115 (1995).
[9] G. J. Gounaris, D. T. Papadamou, and F. M. Renard, preprint PM/96-28 and THES-TP 96/09, hep-ph/9609437, September, 1996.
[10] P. Langacker, talk presented at SUSY-95, Palaiseau, France, May, 1995, U. of Penn report UPR-0683T.
[11] D. C. Kennedy and B. W. Lynn, Nucl. Phys. B322, 1 (1989); M. Peskin and T. Takeuchi, Phys. Rev. Lett. 65, 964 (1990); Phys. Rev. D46, 381 (1992); G. Altarelli, R. Barbieri, and F. Caravaglios, Nucl. Phys. B405, 3 (1993).
[12] G. Altarelli, R. Barbieri, and F. Caravaglios, Nucl. Phys. B363, 326 (1991).
[13] M. Alam et al., CLEO Collaboration, Phys. Rev. Lett. 74, 2885 (1995).
[14] K. Fujikawa and A. Yamada, Phys. Rev. D49, 5890 (1994).
[15] M. S. Chanowitz, M. A. Furman, and I. Hinchliffe, Nucl. Phys. B153, 402 (1979).

TABLES

TABLE I. Values of √s_max (the highest energy scale for which both the unitarity and electroweak constraints are satisfied) at 90% CL for various parameter sets when µ = 2m_t. The corresponding values for µ = m_t are given in parentheses.

    parameter set        √s           √s_max (TeV)
    κ^NC_L, κ^NC_R       19 (13)      3.3 (2.1)
    κ^NC_R, κ^CC_L       19 (13)      1.6 (1.3)

FIGURE CAPTIONS

FIG. 1. Limits from precision LEP and SLC data on (a) κ^NC_R vs. κ^NC_L for κ^CC_L = κ^CC_R = 0, and (b) κ^CC_L vs. κ^NC_L for κ^NC_R = κ^CC_R = 0, using the 90% CL limits on S, T and U from Ref. [10] and on δb_b̄ from Ref. [3]; the sides allowed by the individual constraints are indicated by the arrows. In Fig. 1(a) the entire region shown is allowed by U. In each case the region allowed by all of the electroweak data lies inside the bold lines.

FIG. 2. Allowed region from precision LEP and SLC data of κ^NC_L vs. κ^NC_R for several values of κ^CC_L with κ^CC_R = 0, using the 90% CL limits from Refs. [3] and [10].

FIG. 3. Unitarity limits on (a) κ^NC_R vs. κ^NC_L for κ^CC_L = κ^CC_R = 0, and (b) κ^CC_L vs. κ^NC_L for κ^NC_R = κ^CC_R = 0, shown for several values of √s. The regions allowed by unitarity lie inside the ellipses for each energy scale. The 68% CL region allowed by LEP and SLC data is also shown.