Fluent calculation error summary

1. Fluent cannot display images
When you run Fluent, import a case and check the grid, the following error may appear when displaying the grid:
Error message from graphics function Update_Display: Unable to Set OpenGL Rendering Context
Error: FLUENT received a fatal signal (SEGMENTATION VIOLATION)
Error Object: ()
Solution: right-click the Fluent shortcut and change the target from
x:\fluent.inc\ntbin\ntx86\fluent.exe
to
x:\fluent.inc\ntbin\ntx86\fluent.exe 2d -driver msw
For the three-dimensional solver, create a shortcut whose target is
x:\fluent.inc\ntbin\ntx86\fluent.exe 3d -driver msw
and start it directly. If this is not the cause, Fluent may also be conflicting with other software, such as MATLAB, which can likewise prevent it from displaying images.

Q1: GAMBIT cannot run after installation; the error message is "unable to find Exceed X Server".
A: GAMBIT requires Exceed to be installed. To run Gambit, open a command prompt, type gambit and press Enter. Fluent itself can be started directly from Start - Programs - Fluent Inc.

Q2: Fluent cannot run after installation; the error message is "unable to find/open license.dat".
A: Both FLUENT and GAMBIT need the corresponding license.dat file copied into the FLUENT.INC/license directory.

Q3: When running Gambit, the prompt reports that it cannot find the gambit file.
A: The default setup settings are recommended for FLUENT and GAMBIT. After installing GAMBIT, set the environment variable via Start - Programs - Fluent Inc - Set Environment. The machine must be restarted after setting the environment variable; otherwise you will still be prompted about it.

Q4: What should be taken care of when using Fluent and Gambit?
A: After installing FLUENT and GAMBIT it is best to set a user default (working) path. The recommended method is to create a directory on a non-system partition, such as d:\users.
A) Windows 2000 users: in Control Panel - Users and Passwords - Advanced, edit the Fluent user's profile and change the local path to d:\users; log the user off and on again, open a command prompt and check whether the user path has changed.
B) Windows XP users: send the command prompt to the desktop as a shortcut, right-click the shortcut, enter D:\users in the "Start in" field, and check the setting again.

Q5: Gambit fails with the error message "IDENTIFIER default_Server".
A: Gambit's default file is still open (locked). Go to the user's default directory and delete files such as default_id.*.

Q6: Gambit fails to run; the Gambit window flashes and closes with no error message, only Exceed starts, and a directory named gambit.xxxx is randomly created in the directory where Gambit is located.
A: The wrong Gambit program is being run: the Fluent installation contains two directories with a gambit executable. The correct one is gambit.exe in fluent.inc/ntbin/ntx86, not gambit.exe under the gambit folder.

Q7: After installing FLUENT 6.1, this problem occurs at runtime: Error: sopenoutputfile: unable to open file for output. Error Object: "c:\temp\kill-fluent1684".
A: Create a temp directory under C:\; this resolves both errors.
Q8: What can be done about iterative divergence in a Fluent computation?
A: At the start of an iterative calculation in FLUENT it is best to use a smaller Courant number, otherwise the iteration easily diverges. To change it, open Solve - Controls - Solution and modify the Courant Number. The default value is 1; with little experience it is safer to begin with a small value, such as 0.01, and then increase it gradually (experienced users can judge for themselves). Alternatively, adjust the limits on the solution variables under Solve - Controls - Limits, according to what you are calculating.

Q9: A Fortran program reports a stack overflow. What should be done?
A: Fortran compilers generally place allocatable arrays on the heap and automatic arrays on the stack. The default stack size is typically 1048576 bytes (1 MB). When an array exceeds it, Visual Fortran reports a stack overflow, while compilers on UNIX platforms such as f77 usually produce a core dump. The remedy is to change the default stack setting: in Visual Fortran this can be done from the command line with the link or editbin tools.

Question 1: Why can't Gambit be started?
A: There are several possible reasons:
1. An Exceed problem. If running Gambit prints "Using X_DEVICE...", the Exceed installation is fine; if not, reinstall Exceed. A custom installation that selects only the X server component (and nothing else) works best.
2. A license problem. Open a command prompt, set the environment variables and run Gambit; if a License Error is displayed, copy the license file again into the license directory under the installation directory.
3. A .lok file problem. Gambit creates default.dbs when it starts; if a default.lok file is present, Gambit cannot start, so delete that file. The .lok file marks the project as locked; see the Gambit help for a detailed description.
4. It may still be a license problem: if copying the license file again does not help, try adjusting the system time.

Question 2: How can convergence be improved?
1. Make sure the mesh is fine enough.
2. Your boundary conditions may be too severe; try more conventional boundary values first and, after the calculation has converged, gradually increase the boundary values until the required conditions are reached.
3. Adjust the under-relaxation factors appropriately and choose the solution scheme that best fits the model you are using.

Question 3: How are the pressures at the pressure-inlet and pressure-outlet boundaries set in Fluent?
A: First, two concepts must be clear:
Total pressure = static pressure + dynamic pressure (for incompressible flow)
Absolute pressure = gauge pressure + reference pressure (operating pressure)
The pressures set at Fluent's pressure boundaries are gauge pressures: pressure-inlet takes the total pressure and pressure-outlet takes the static pressure (note: this does not include the hydrostatic head).

Question 4: What are static pressure, dynamic pressure and total pressure?
A: Static pressure, dynamic pressure and total pressure are concepts from fluid mechanics (total pressure, strictly speaking, from aerodynamics).
1. Static pressure is the pressure that would be measured while moving along with the fluid at its own velocity; it is due to molecular motion.
2. Dynamic pressure equals 0.5 * density * velocity^2; the definition comes from the kinetic energy of the flow.
3. Total pressure expresses an energy balance. It is a function of static pressure and Mach number, and is the pressure measured by a gauge when the fluid is brought to rest.
4. Fluent also uses a reference pressure (operating pressure). The pressure appears in the Navier-Stokes equations only through its derivatives, so a reference value must be supplied before the pressure level can be determined, just as the first-order differential equation dy/dx = 1 has the solution y = x + constant and only a given constant fixes a definite solution. The operating pressure in Fluent plays the role of that constant: Fluent solves for the gauge pressure, and the gauge pressure plus this reference pressure gives the absolute pressure.
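For quick reference, the relations used in Questions 3 and 4 can be collected in one place (a minimal summary; the dynamic-pressure term in the first relation assumes incompressible flow):

\[
p_{\text{total}} = p_{\text{static}} + \tfrac{1}{2}\,\rho\,v^{2},
\qquad
p_{\text{absolute}} = p_{\text{gauge}} + p_{\text{operating}},
\]

where \(\rho\) is the fluid density and \(v\) the local flow speed. Fluent works with gauge values at its pressure boundaries, so the operating pressure must be added to recover absolute pressures.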
Question 5: Why does the post-processing display keep flashing or look abnormal?
A: Display problems in post-processing are generally a graphics card problem:
1. the graphics card is too old;
2. the driver is wrong, or DirectX, OpenGL and the related engines are not installed;
3. the graphics card driver may have been damaged by a virus.

Question 6: How should the viscosity of a gas mixture be calculated?
A: Weight the component viscosities by volume fraction and molecular weight: mu_mix = sum_i(y_i * M_i * mu_i) / sum_i(y_i * M_i), where y_i is the volume fraction of component i in the mixture, M_i its molecular weight and mu_i its dynamic viscosity.

Question 7: What is the PDF model?
A: The PDF model does not solve transport equations for the individual species; instead it solves a transport equation for the mixture-fraction distribution, and the concentration of each species is obtained from the mixture fraction. The PDF model is particularly suitable for simulating turbulent diffusion flames and similar reacting flows. In this model a probability density function (PDF) is used to account for the effects of turbulence. The model does not require the user to define the reaction mechanism explicitly; the chemistry is handled by the flame-sheet ("mixed is burned") approach or by a chemical-equilibrium calculation, which gives it advantages over the finite-rate model.

Question 8: What kind of unit is SCCM?
A: SCCM stands for standard cubic centimetres per minute; it is used in vacuum technology for converting flow rates and leak rates. As a conversion, 1 Pa·m³/s ≈ 592 SCCM (i.e. 1 Pa·L/s ≈ 0.592 SCCM).

Question 9: What is the Schmidt number?
A: It relates momentum transport to mass transport: the ratio of the kinematic viscosity to the diffusion coefficient, Sc = ν/D.

Question 10: What is the Prandtl number?
A: The ratio of the kinematic viscosity to the thermal diffusivity, Pr = ν/α; it indicates the relative ease of momentum and heat transport.

Question 11: What is the Lewis number?
A: The ratio of the thermal diffusivity to the diffusion coefficient, Le = α/D.

Question 12: How do I import Fluent mesh files into CFX?
A: First import the Gambit mesh into ICEM CFD and from there into CFX; ICEM CFD has a mesh-import function and interfaces with Fluent.

Question 13: What about the conflict between Fluent and MATLAB?
A: In Control Panel -> Administrative Tools -> Services, stop the MATLAB Server service.

Question 14: What can be done when the Fluent software has expired?
A: Search all directories for files whose timestamp is newer than the current system time, and set the time of any such file back. Fluent looks for the newest timestamp among the files on the computer; if that time is later than the system time, Fluent concludes that the system clock has been set back, and merely changing the system time is useless.

Problem 15: Phoenics installation FAQ
1. "Tcl error" - active.exe has not been installed (check the PHOENICS installer);
2. "visual FORTRAN run-time error" - FORTRAN is not installed;
3. "code expired" - the system time has not been set correctly.
Question 16: Fluent common answer 1
Q: In Fluent, opening Display - Grid only pops up a whitish window and the program hangs. Fluent displays:
Error: Floating point error: divide by zero
Error Object: ()
Error: FLUENT received a fatal signal (SEGMENTATION VIOLATION)
Error Object: ()
A: 1. The graphics window may not have finished drawing when it was closed, and the problem only shows up later. Save the case and data, exit Fluent, read the case and data in again, and the display should work normally. 2. The graphics card's OpenGL support is poor; update the video card driver.

Question 17: Fluent common answer 2
Q: After startup, the following is displayed:
Error: sopenoutputfile: unable to open file for output
Error Object: "c:\temp\kill-fluent692"
A: The crack is incomplete; it does not affect use.

Question 18: Fluent common answer 3
Q: When the grid is imported into Fluent: Building... grid, domain: error: null pointer
A: The pointer to the computational domain is invalid; the mesh usually has to be regenerated.

Question 19: Fluent common answer 4
Q: The following message appears at run time:
error: fluent received fatal signal (ACCESS_VIOLATION)
1. note exact events leading to an error.
2. save the case/data under a new name.
3. exit program and restart to continue.
4. report the error to your distributor.
error object: ()
A: Fluent prints this for any serious error or divergence; the message itself does not indicate the specific problem. The model needs some adjustment.

Question 20: Fluent common answer 5
Q: The following messages appear:
welcome to fluent 6.1.22
copyright 2003 fluent inc.
all rights reserved
dump: cannot open file "fl_s117.dmp"
dump: error: unable to open file: ()
encountered error in critical code section: hit return to exit.
A: Cleaning the registry or removing "junk" files has probably deleted some files that Fluent needs. Reinstalling Fluent over the existing installation restores them.

Question 21: Fluent common answer 6
Q: When using Display after the calculation, the following appears:
Error message from graphics function compute_extent: the device for '/driver/opengl/win+w0/inner/scale' doesn't seem to be alive
Error message from graphics function set_camera_by_volume: xmin equal to or greater than xmax
A: Remove and reinstall the graphics card driver, and check whether DirectX and the other graphics components are installed.

Question 22: Fluent common answer 7
Q: During the iterations the window shows: turbulent viscosity limited to viscosity ratio of ...
A: This warns that the turbulent viscosity ratio has exceeded the given upper limit. You can increase the Max Turbulent Viscosity Ratio under Solve - Controls - Limits, by up to about two orders of magnitude.

Question 23: Fluent common answer 8
Q: Running Fluent produces:
the system clock has been set back
feature: fluent
license path: c:\fluent.inc\license\license.dat
FLEXlm error: -88,309
For further information, refer to the FLEXlm user manual.
A: The license check has failed (license expired or the clock check was tripped); set the system time back little by little and it will work again.

Question 24: Fluent common answer 9
Q: Fluent 6.1.22 was installed under Linux ES3; on startup the following appears:
copyright 2003 fluent inc. all rights reserved
Loading "/app/fluent/fluent.inc/fluent6.1.22/lib/flprim.dmp.117-32" ... done.
script file: ... fluent3267 in /root/
A: This is normal. The "script file ... fluent3267 in /root/" message refers to a temporary file generated by Fluent; it is used to kill Fluent and its related processes so that a dead process does not become impossible to kill. As long as Fluent exits normally, the file is deleted automatically.

Question 25: When programming with VC or VF, no more than 256 MB of memory can be allocated; some compiler versions only give a warning and the program still links and runs, but how is this solved for the versions where it does not?
A: Go to Project -> Settings -> Link -> Output and change the stack allocation values under Reserve and Commit to the maximum amount of memory you want. Note that the value is counted in bytes, so to allow 500 MB you need to enter 524288000 (500*1024*1024).

Problem 26: Gambit was installed, but at run time it complains that base80.dll is missing.
A: You ran the wrong gambit.exe. Open the directory fluent.inc/ntbin/ntx86 and run the gambit executable located there.

Question 27: What are the authoritative journals on CFD in the world?
A: Journal of Fluid Mechanics, and the AIAA Journal (American Institute of Aeronautics and Astronautics).
Question 28: What are "convection" and "diffusion"?
A: Imagine a drop of ink placed in a sink of water. If the water is still, the coloured region spreads out evenly in all directions; this is diffusion, which is caused by molecular motion. If the water flows, the colour not only spreads but is also carried downstream; this is convection, the bulk transport by fluid motion arising from a non-uniform flow field. Compared with diffusion, convection is strongly directional.

Problem 29: Gambit common error 1
Q: After installation, running gambit.exe displays:
WARNING<17> - H:\hb\fluent\gambit\ntbin\ntx86\GAMBIT.1264 at 480, in @(#) July 21 2003 16:11:54 FDIWHAT sysfile.c: FILE EXISTS
Warning: locale not supported by Xlib, locale set to C
Using X_DEVICE_DRIVER with standard visual.
A: This is normal. Do not close this window while using Gambit.

Question 30: What is PIV?
A: Particle image velocimetry.

Problem 31: How do you set a baffle of zero thickness in Fluent?
A: For 3D, draw a face and split the body with it in the connected manner; for 2D, draw a line and split the surface with it in the connected manner.

Question 32: What are the Favre-averaged N-S equations?
A: The mean equations obtained with the Favre (density-weighted) average rather than the ordinary time average; this is the form generally referred to in books on turbulence.

Question 33: What is a uniform grid?
A: A mesh with uniform spacing.

Question 34: What is the difference between a conservative equation and a non-conservative equation?
A: Conservative and non-conservative equations are also called conservation-form and non-conservation-form equations. The difference lies in the convection term. In the conservation form the convection term is written in divergence form, div(ρUφ), where U is the velocity vector and φ is the general variable (u, v, w in the momentum equations). In the non-conservation form the convection term is not written as a divergence but as ρU·grad(φ). One form can be derived from the other by means of the continuity equation (written out below). For an infinitesimal volume the two are equivalent, but the control volumes actually used in a computation are of finite size, so the two forms have different properties. This shows up most prominently in shock calculations: with the non-conservation form, the position of a shock wave cannot be computed correctly and the solution oscillates. In general, the conservation-form governing equations are recommended, because they remain conservative for a control volume of any size.
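The two forms of the convection term discussed in Question 34 can be written out explicitly; the statement below is a standard identity, with φ the general transported variable, ρ the density and u the velocity vector:

\[
\frac{\partial(\rho\varphi)}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}\,\varphi)
\;=\;
\rho\,\frac{\partial\varphi}{\partial t} + \rho\,\mathbf{u}\cdot\nabla\varphi ,
\qquad\text{provided}\qquad
\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) = 0 .
\]

The left-hand side is the conservation (divergence) form and the right-hand side the non-conservation form; the two coincide only where the continuity equation is satisfied exactly, which is why they behave differently on finite control volumes, for example across shocks.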
Question 35: How is data made dimensionless in Tecplot?
A: Use Data - Alter and enter a formula.

Question 36: How do I import pre files into Gambit?
A: In pre, save the file in .stp format and then import it into Gambit.

Question 37: What is the difference between a real entity and a virtual entity in Gambit?
A: Real and virtual entities in Gambit have no effect on the results once the mesh is generated and the calculation is run; the main differences are the following:
1. Real entities can take part in Boolean operations, virtual entities cannot; on the other hand, virtual entities support operations such as merge and split.
2. Real-geometry modelling is available in many CAD packages, whereas virtual geometry is one of Gambit's major features; with virtual entities, the flexibility of modelling and mesh generation in Gambit increases considerably.
3. During mesh generation, several relatively flat surfaces can be merged into one, which saves steps when meshing; conversely, a surface with large curvature may yield a poor-quality mesh, and it can then be divided into several smaller surfaces with split in order to improve the mesh quality.
A Study of Through-Thickness Texture Gradients in Rolled Sheets
O. ENGLER, M.-Y. HUH, and C.N. TOMÉ

A method to simulate shear effects and through-thickness texture gradients in rolled sheet materials is introduced. The strain history during a rolling pass is idealized by superimposing a sine-shaped evolution of the ε̇13 shear component on a plane-strain state. These generic strain histories are enforced in a visco-plastic self-consistent (VPSC) polycrystal deformation model to simulate texture evolution as a function of through-thickness position. The VPSC scheme is deemed superior to a full-constraints (FC) or relaxed-constraints (RC) approach, because it allows one to fully prescribe diagonal and shear strain-rate components while still accounting for grain-shape effects. The idealized strain states are validated by comparison with deformation histories obtained through finite-element method (FEM) calculations. The through-thickness texture gradients are accounted for by introducing a relative variation of the sine-shaped ε̇13 shear with respect to the plane-strain component. The simulation results are validated, in turn, by comparison with typical examples of through-thickness texture gradients observed experimentally in rolled plates and in sheets of fcc and bcc materials.

I. INTRODUCTION
strongly affect the plastic properties of the rolled products (e.g., References 5 through 7). Severe through-thickness I N the literature, there are many attempts to model the texture gradients have been reported in cases where the ratio evolution of the crystallographic texture accompanying roll- of the contact length between the roll and sample (l_C) to the ing deformation with the help of Taylor-type deformation sheet thickness (t) is smaller than 1 (with l_C ≈ √(R Δt), where models; extensive reviews have been given, e.g., by Hirsch R is the roll radius and Δt is the thickness reduction per and Lücke [1] and Aernoudt et al. [2] In such models, the pass).[8–13] However, also, in cases characterized by l_C/t > individual crystallites are assumed to deform by slip on a 1, shear textures have been observed, which have been attrib- number of crystallographic slip systems, so as to accommo- uted to the friction between the rolls and sheet.[14–17] Such date the prescribed macroscopic strain rate (ε̇_ij).[3,4] The term conditions, viz., large rolling draughts combined with high ε̇_ij is the symmetric part of the velocity gradient ė_ij ≡ u̇_i,j. The friction, are typically encountered in practical materials pro- antisymmetric part, denoted ω̇_ij, represents the plastic spin. cessing during breakdown rolling and multistand hot roll- In most simulations of rolling textures, the deformation ing operations. in every grain is either approximated by a plane-strain state As a flat plate or sheet enters the rolling gap, the rolls (ε̇_11 = −ε̇_33), and all other components (ε̇_22, ε̇_12, ε̇_13, and induce a compressive stress in the normal direction (ND). ε̇_23) are imposed to be zero (full constraints (FC)), or else, The sheet is thinned along the ND and is free to expand in the grain-stress components σ_13 and σ_23 are made zero based the rolling direction (RD), whereas lateral expansion in the on grain shape and stress continuity considerations (relaxed transverse direction (TD) is effectively restrained by fric- constraints (RC)).* However, both approximations represent tion.[18] For the center layer of the workpiece, the net effect *Throughout this article, the rolling direction (RD), transverse direction is a condition of plane strain, whereas for the outer layers, (TD), and normal direction (ND) of a sheet are
identified as directions 1,strong deviations from the plane-strain state prevail.As illus-2,and 3,respectively.trated in Figure 1,the geometrical changes within the rolling a strong simplification,in that away from the center plane,gap lead to a nonzero component e ˙31ϭu˙3,1of the displace-shear deformation takes place as a consequence of the rolling ment-rate gradient.It is clear that the value of e ˙31changesboundary conditions.Factors like roll-gap geometry,includ-magnitude as well as sign during a rolling pass.[12]Another,ing the geometrical changes during a rolling pass,friction even more severe,deviation from the plane-strain condition between the roll and the sheet contact surface,and tempera-may result from the friction between the roll surface and ture gradients upon hot rolling can cause severe deviations the sheet surface,which can be understood in terms of the from the plane-strain condition.Furthermore,these parame-friction-hill effect.[18]At the beginning of the deformation ters depend on the distance from the surface of the rolled within the plastic zone,i.e.,left from the neutral point,the sheet,giving rise to nonhomogeneous strain states and,as velocity of the material (V 0)is lower than that of the rolls a consequence,to different rolling textures at different (V R ),and the friction between the metal and the rolls tends through-thickness layers of the sheet,which,in turn,mayto draw metal into the roll gap (Figure 1).To the right from the neutral point,the metal velocity (V e )is higher than the roll velocity,so that the friction direction is reversed.The O.ENGLER and C.N.TOME´are with the Materials Science and Technol-resulting friction hill imposes a component e ˙31ϭu ˙3,1,which ogy Division (MST-8),Los Alamos National Laboratory,Los Alamos,NM is positive at the entry of the rolling mill,zero at the neutral 87545U.S.A..M.-Y .HUH,Professor,is with the Division of Materials point,and negative at the exit of the rolling mill (Figure 1).Science and Engineering,Korea University,Seoul 136-701,Korea.Manuscript submitted October 26,1999.In the present article,we introduce a novel method toFE mesh,and texture,hardening,and anisotropy are updatedas deformation proceeds.[21–24]However,for the case ofrolling,the overall constraints are such that local deformationis not likely to be sensitive to details in the local hardeningand anisotropy evolution(constitutive law).In addition,FEM computations have the major disadvantage of beingextremely time consuming,especially if they are coupled toa polycrystal constitutive law.As an alternative,the evolution of strain can be deducedusing certain simplifying assumptions.The resulting strainstates are eventually fed into a deformation model so as toderive the texture variations as a function of the strain stateand,thus,the through-thickness texture gradient.Lee andDuggan[12]described the strain field in the rolling gap by Fig.1—Schematic representation showing the formation of the shear com-means of a highly simplified analytical model.This modelponents e13and e31in a roll gap.considers both the shear component e31,introduced by thegeometrical changes during a rolling pass,and the compo-nent e13,caused by the friction between the rolls and sheetsurface.Based on these analytical expressions,Huh etal.[17,19]used a Taylor approach to estimate shear textures simulate shear textures and through-thickness texture gradi-by an idealized strain history during a rolling pass.Fedosseev ents in rolled sheet materials.Based on the ideas put forwardet 
al.derived the strain distribution for various rolling by Huh et al.,[19]the strain history during a rolling pass isidealized by a plane strain superimposed on a simple sine-sequences based on the concepts of fluid dynamics(method shaped profile of the e˙13and e˙31shear-rate components.The of superposition of harmonic currents)to model through-resulting strain history is rationalized by comparing it with thickness texture gradients.[25,26]results obtained by the finite-element method(FEM).To In all these approaches,the strain distribution derived with simulate the texture evolution and,in particular,the through-the various assumptions was eventually fed into a Taylor thickness texture gradients,the strain history at different FC deformation model to simulate the resulting textures. layers is input in a visco-plastic self consistent(VPSC)However,it is nowadays acknowledged that the Taylor FC deformation model.The simulation results obtained for fcc model is not well suited for simulating the texture changes and bcc structures are compared with typical examples of accompanying rolling deformation.With increasing reduc-through-thickness texture gradients observed experimentally tion,the grain thickness decreases and the prescription ofin rolled plates and sheets of various aluminum alloys and the˙13and˙23shear components in the Taylor FC modelsteels.becomes increasingly less meaningful;for infinitely flat It has erroneously been stated in the literature that the grains,only the remaining components are defined.This led textures observed at the sheet surface are caused by an to the development of the so-called RC models,[27,28]which accumulated shear induced by the friction between the roll since have proven to be superior in simulating the evolution and sheet surface.However,the required strong shear wouldof rolling textures,particularly at high strains(e.g.,Refer-lead to an unrealistic shape change of the sheets.Further-ences1and2).In the present application,however,the˙13 more,this view cannot account for the shear textures in theshear has to be prescribed such as to account for the shear surface of reversibly rolled sheets.In this article,we willsuperimposed on the plane-strain state.This means that the show that,even for a final strain distribution where the˙13shear cannot be relaxed,preventing the application of accumulated shear strain is completely reversed,pronouncedan RC model.through-thickness variations in texture may take place inAlternatively,deformation textures can be modeled by the sheet.means of a VPSC scheme.[29,30]Each orientation or“grain”of a polycrystalline aggregate is regarded as an inclusion that II.SIMULATION OF SHEAR TEXTURES AND is embedded in and interacts with a homogeneous equivalent THROUGH-THICKNESS TEXTURE medium(HEM)with the average properties of the aggregate.GRADIENTS The properties of the matrix(the HEM)are not knowna priori,however,but are adjusted“self-consistently”to The conventional way to determine the local strain evolu-coincide with the average of all inclusions forming the aggre-tion in inhomogeneously deforming specimens is to employgate.In contrast to the Taylor-type models,in the VPSC the FEM.In what concerns rolling operations,the strainmodel each grain deforms differently,depending on its rela-states in the sheet layers follow from enforcing an appro-tive stiffness with respect to the HEM.In addition,a relax-priate set of boundary conditions.The information aboutation of the shear components takes place in individual grains the resulting 
strain history can then be input in polycrystalas a consequence of evolving grain shape,while all compo-plasticity codes to model the corresponding crystallographicnents of the overall strain rate may still be prescribed.Thus, texture and,hence,to simulate texture gradients.[15,20]Thisa fully prescribed strain-rate tensor can be used,which favors method can be further refined by a coupling of the FE codethis approach for the simulation of through-thickness tex-with a polycrystal constitutive law.Within this approach,apolycrystal texture is associated with each element of the ture gradients.Table ler Indices and Euler Angles of the Orientations Characteristic of fcc and bcc Plane Strain and Shear Textures Miller Indices Euler Angles{hkl}͗uvw͘(“Designation”)1⌽2Remarks {112}͗111͘(“C”)90deg35deg45deg fcc plane strain/bcc shear {123}͗634͘(“S”)59deg34deg65deg fcc plane strain{011}͗211͘(“B”)35deg45deg0deg/90deg fcc plane strain/bcc shear {011}͗100͘(“Goss”)0deg45deg0deg/90deg(fcc plane strain)/bcc shear {111}͗112͘30deg/90deg54.7deg45deg bcc plane strain/fcc shear {111}͗110͘0deg/60deg54.7deg45deg bcc plane strain/fcc shear {112}͗110͘0deg35deg45deg bcc plane strain/fcc shear {001}͗110͘(“rotated cube”)0deg0deg45deg/45deg0deg0deg/90deg(bcc plane strain)/fcc shear III.THROUGH-THICKNESS TEXTURE symmetry is not generally justified in the case of pronounced GRADIENTS IN ROLLED SHEETSshear deformation.A.Analysis and Representation of CrystallographicTextures B.Examples of Through-Thickness Texture Gradients infcc MetalsThe experimental results of texture gradients reproducedin this section were determined by conventional X-ray mac-Gradients of strain and strain rate manifest themselves inthe pronounced through-thickness texture gradients that have rotexture analysis(e.g.,Reference31).Pole figures weremeasured from the sheets in back-reflection using a standard been described for various inhomogeneously deformed fcc X-ray texture goniometer.In order to analyze through-thick-metals and alloys(e.g.,References9through11,15,33, ness texture gradients,various layers of the sheets have to and34).The example shown here to illustrate the texture be prepared sequentially by careful grinding,polishing,orgradients in fcc materials pertains to a laboratory cold-rolled etching,or by an appropriate combination thereof.In the sample of a direct chill–cast commercial-purity aluminum,AA1145.A specimen was machined from13mm hot gage following text,the layer within the sheet is indicated by theparameter s,with sϭϩ1and sϭϪ1denoting the upper to6mm,so as to achieve an initially uniform through-and lower surface of the sheet,respectively,such that sϭthickness structure.Then,the specimen was cold rolled 0identifies the center layer.Note that,with an average reversibly to a0.6mm final thickness,corresponding to penetration depth or X-rays of the order of100m,X-raya total thickness reduction of90pct.In order to enforce diffraction is well suited for analysis of texture gradients in inhomogeneous deformation,the rolling was performed dry, sheets with a thickness in excess of,for example,1mm.i.e.,without using a lubricant,which resulted in pronounced Comparison of X-ray results with data obtained by means of through-thickness texture gradients.[35]the more tedious and time-consuming local-texture analysisThe texture in the center layer of the hot band mainly done by electron back-scattering diffraction showed very comprised orientations that are typical of plane-strain defor-good agreement between both 
techniques(e.g.,Referencesmation of fcc metals and alloys.In such textures,most 13and16).orientations are assembled along the so-calledfiber,which After correction of the pole-figure data for backgroundruns through the Euler-angle space from the C orientation irradiation and defocusing error,complete orientation distri-{112}͗111͘through the S orientation{123}͗634͘to the B bution functions(ODFs),f(g),were computed according toorientation{011}͗211͘(Figure2(a)and Table I).Close to the method of series expansion with spherical harmonic the surface,in contrast,typical shear textures were found functions.[32]In texture analysis,crystal orientations are(Figure2(b)).The maximum texture intensity was obtained commonly denoted by the Miller indices{hkl}͗uvw͘,where in the45deg ND-rotated cube orientation{001}͗110͘.Fur-the first set of Miller indices indicates the direction of thethermore,a pronounced scatter of{001}͗110͘toward crystal that is parallel to the ND and the second refers to{112}͗110͘and minor intensities of{111}͗uvw͘orientationswere also observed.the crystal direction parallel to the RD.For quantitativetexture analysis,the orientations(g)are expressed as a set Figure3shows another example of an fcc shear texture, of three Euler angles:1,⌽,and2.Table I lists the Millerobtained in a hot-rolled specimen of the aluminum alloy Al-indices and Euler angles of the most commonly observed 5.8pct Cu-0.4pct Zr.This material,with a composition orientations of rolled fcc and bcc metals.The textures areequivalent to the superplastic alloy SUPRAL100but with represented by plotting isodensity lines in sections of con-a coarser grain size,of the order of20m,was laboratory stant1or2through the three-dimensional Euler space.hot rolled at310ЊC to a thickness of6mm.The texture All experimental pole figures discussed in this article,irre-obtained for the layer sϭ0.67may be regarded as a transi-spective of the sheet layer analyzed,revealed orthotropiction between the two cases shown in Figure2.The typical sample symmetry within experimental accuracy.Therefore,plane-strain-texturefiber is still present,although weak, the ODF representation was confined to the familiar sub-but it shows strong scatter toward the rotated cube orientation space of Euler space,with0degՅ(1,⌽,2)Յ90deg.It{001}͗110͘.This is obvious from the smear of the individual is noted,however,that the assumption of orthotropic sampletexture maxima in the various2sections“upward,”i.e.,(b )(a )Fig.2—Hot band texture of commercial purity aluminum serving as an example of the through-thickness texture variations in fcc materials:(a )center layer,s ϭ0.0;and (b )surface layer,s ϭ1.0(by courtesy of S.Benum).toward ⌽ϭ0deg.The rotated cube orientation again shows with ␣and ␥fiber orientations prevail,the surfaces show shear textures consisting of the Goss orientation {011}͗100͘scatter toward the {112}͗110͘shear component,whereas {111}͗uvw ͘orientations were not observed.Similar textures and orientations close to {112}͗111͘.[8,17,40]In contrast,in hot-rolled low-carbon steels,usually weak,rather uniform,have been observed in a variety of nonrecrystallizing high-strength Al alloys.textures were observed,which can be attributed to the ran-domizing effect of the ␥-␣phase transformation during sub-It should be noted that,in addition to the friction and geometry effects,the evolution of through-thickness texture sequent cooling.[41]Lowering the finishing temperature has the effect that more deformation takes place in the ferrite gradients also depends on the 
material investigated.[10,11,36]While most Al alloys develop pronounced through-thickness range,which strengthens the hot-band texture and,in particu-lar,enhances through-thickness texture gradients.[42]texture gradients,other fcc materials,in particular,materials with a low stacking-fault energy like brass,silver,and aus-The typical texture gradients of bcc sheet materials are illustrated in Figure 4for an interstitial-free (IF)steel sheet.tenitic steels,tend to show more-uniform textures throughout the sheet thickness.In high-purity copper,even under very A specimen of a Ti-alloyed IF steel with 0.0026pct C was warm rolled at 650ЊC (i.e.,in the ferritic range)from 5to inhomogeneous deformation conditions,no through-thick-ness texture gradients,but rather,strongly enhanced shear 3mm in one pass.[43]Similarly as described for the commer-cial-purity aluminum (Figure 2),the rolling was performed band formation,were observed.[37]This material dependence is not the subject of this article,however,and will not,without lubricant in order to enforce nonuniform deforma-tion throughout the sheet thickness.therefore,be discussed here any further.The center layer of the specimen (s ϭ0.0)shows a texture that is characteristic of a plane-strain deformation state (Fig-C.Examples of Through-Thickness Texture Gradients in ure 4(a)):under plane-strain conditions,bcc metals and bcc Metalsalloys tend to form fiber textures where most orientations are assembled along two characteristic fibers (marked in Very pronounced through-thickness texture gradients form in alloyed steel grades that undergo none or only partial Figure 4(a)),as follows.(1)The (mostly incomplete)␣fiber comprises orientations with a common ͗110͘direction phase transformation during hot rolling.[14,38,39]Whereas in the center layers of such inhomogeneously rolled sheets,parallel to the RD,i.e.,the orientations {hkl }͗110͘,including the orientations {001}͗110͘,{112}͗110͘,and {111}͗110͘.typically the well-known plane-strain deformation texturesFig.3—Hot band texture of Al-5.8pct Cu-0.4pct Zr(regular grain sized SUPRAL100)at the layer sϭ0.7showing a characteristic mixture of plane strain and shear texture components.(a)Complete ODF in2sections,(b)representation in the2ϭ45deg section that contains the most important rolling and shear texture components,and(c)schematic representation of the orientations in the2ϭ45deg section.(2)The␥fiber comprises orientations with{111}parallel{112}͗111͘appear.The maximum intensity of these shear-texture components is obtained at sϭ0.8,whereas in the to the ND,i.e.,the orientations{111}͗uvw͘,including{111}͗110͘and{111}͗112͘.immediate surface texture,the sharpness decreasesslightly.[43]Raabe and Lu¨cke[39]reported very similar results The surface texture of the specimen mainly consisted ofa strong{011}͗100͘Goss orientation plus minor intensities in the hot bands of Cr-containing ferritic stainless steels.Microstructural investigations of the microband arrange-close to{011}͗211͘and{112}͗111͘(Figure4(b)).FromFigure4and Table I,it may be seen that most of the relevant ment through the thickness of a rolled steel sheet with pro-nounced through-thickness texture gradients have shown bcc orientations and fibers can be found in the2ϭ45degsection of the Euler space.Therefore,the representation of that,in the center layers,where plane-strain conditions pre-vail,the microbands formed at angles of approximately35 ODFs can be highly condensed by focusing merely on thissection rather than showing the entire ODF.Figure5shows 
deg to the RD,which is indicative of the arrangement ofthe activated slip planes.[17]Close to the surface,where well-the2ϭ45deg sections of the textures in the IF steel asobtained at the various layers.Again,at the center layer,the defined shear textures were observed,the microbands were typical plane-strain texture with the characteristic␣and␥arranged approximately parallel to the RD.This highly fibers prevails.With increasing the parameter s,the rolling unusual arrangement of slip planes can only be achieved ifthe local-stress tensor is strongly rotated by the superposition texture degrades;at sϭ0.4,a minimum in texture sharpnessis observed.This minimum is due to the transition from the of friction on the plane-stress state imposed by the rolls.Note that fcc plane-strain and bcc shear textures(and, plane strain to the shear texture that dominates closer to thesheet surface.Accordingly,with further increasing s,the vice versa,the bcc plane-strain and fcc shear textures)showsome characteristic similarities(Figures2through5andshear components{011}͗100͘(Goss),{011}͗211͘,and(b )(a )Fig.4—Hot band texture of an IF steel serving as an example of the through-thickness texture variations in bcc materials:(a )center layer,s ϭ0.0;and (b )surface layer,s ϭ1.0(by courtesy of B.Beckers).Fig.5—(a )through (h )2ϭ45deg sections of the ODFs determined at various through-thickness layers s of the IF hot band steel shown in Fig.4(by courtesy of B.Beckers).the sine-shaped profiles of e ˙13and e ˙31and the resultingcomponents ˙ij ,of the symmetric,and ˙ij ,of the antisymmet-ric,part of the velocity gradient (Eq.[2])are characterized by a single parameter,viz.,the value of its first maximum (step 4in Figure 6).For example,the curve of e ˙13in Figure6will be denoted e ˙max13ϭ3and the one of e ˙31as e ˙max 31ϭϪ1.In the case of plane-strain deformation,e ˙31and e ˙13are expected to be so small that the resulting shear component ˙13becomes negligible.Conversely,for pronounced shear deformation to take place,i.e.,a large ˙13component,either the friction-induced shear component e ˙13or the geometry-induced shear component e ˙31must be large.However,because of the sine-shaped shear profiles,after a complete rolling pass,the integrated values e 13and e 31are zero,inde-pendent of the respective amounts of e ˙13and e ˙31.As a conse-quence,a volume element would experience zero net shear Fig.6—Idealized evolution of the geometry-induced component e ˙31,the (13)as well as zero net rotation (13),such that the shape friction-induced component e ˙13,and the resulting shear ˙13ϭ˙31during a rolling pass.This sine-shaped strain rate history is subdivided into 13of an originally orthogonal element would still be orthogonal steps and used as a generic input for the present rolling texture simulations.after deformation.Thus,the deformation of the overall roll-ing bite is plane strain,although the instantaneous local values may strongly deviate from the plane-strain condition.We will show here that,although the accumulated shear Table I).This resemblance is addressed in detail else-adds up to zero,the final texture depends strongly on the where.[44,45]deformation history.IV .MODELING OF THROUGH-THICKNESS TEXTURE GRADIENTS IN ROLLED SHEETS B.Texture SimulationsA.Determination of the Strain DistributionTo simulate the rolling textures,the strain history was simulated by means of a VPSC deformation model briefly As already mentioned in the introduction,the strain state described in Section II.The calculations were performed during 
practical rolling operations may exhibit severe devia-with a fully prescribed velocity gradient e ˙ij of the form given tions from the idealized plane-strain condition that is defined by Eq.[1]and adopting various values of the shear strains as e ˙ij ϭ0for i j .Considering rolling as a two-dimensional e ˙13and e˙31.problem,i.e.,e ˙22ϭe ˙12ϭe ˙21ϭe ˙23ϭe ˙32ϭ0and e ˙33ϭϪe ˙11,these deviations from the plane-strain state manifest themselves as nonzero contributions of the geometry-e ˙ij ϭ1e ˙13000e ˙310Ϫ1[3]induced shear component e ˙31and the friction-induced shear component e ˙13(Figure 1).Thus,the displacement gradient tensor e ˙ij becomesTo simulate the texture evolution,an aggregate composed of 500initially random orientations was deformed in 78steps (6rolling passes of 13steps each).In each step,a e ˙ij ϭe ˙110e ˙13000e ˙310Ϫe ˙11[1]different displacement-rate tensor is imposed,and the incre-mental deformation is controlled by enforcing ⌬e 11ϭ0.0175up to a total accumulated strain of e 11ϭ1.365,which The components ˙13,˙31,˙13,and ˙31of the symmetric and approximately corresponds to a 75pct thickness reduction.antisymmetric parts of the velocity gradient e ˙ij are Simulations of fcc textures were performed with the usual twelve {111}͗110͘slip systems,i.e.,four {111}slip planes ˙13ϭ˙31ϭ12(e ˙13ϩe ˙31)and ˙13ϭϪ˙31ϭ12(e ˙13Ϫe ˙31)each containing three ͗110͘slip directions.In the simulations of bcc textures,two slip-system families,{110}͗111͘and [2]{112}͗111͘,were considered (with equal critical resolved shear stress (crss )),which is commonly assumed to give a It has already been pointed out that the friction-inducedshear component e ˙13is positive at the entry of the rolling reasonable description of the slip characteristics in many bcc structures (e.g.,Reference 2).Since,for this highly mill,zero at the neutral plane,and negative at the exit of the rolling mill;for the geometry-induced shear component constrained forming problem,local deformation is mostly controlled by the boundary conditions,all simulations were e ˙31,the opposite behavior is anticipated (Figure 1).Accord-ingly,in the present model,the evolution of e ˙31and e ˙13and performed without hardening.As outlined previously,the shear rates e ˙13and e ˙31were of the resulting shear rates (˙13ϭ˙31)and rotations (˙13ϭϪ˙31)is assumed to follow a simple sine-shaped profile varied according to a sine-shaped profile,so as to account for the changes in strain evolution from the entry to the during a rolling pass,as indicated in Figure 6.The numerical values of e ˙ij ,˙ij ,and ˙ij are referred to the principal strain exit of the rolling mill during one rolling pass.For the deformation texture simulations,the sine profiles were sub-rate e ˙11ϭ˙11,which is assumed to remain constant during a rolling pass and to be the same for all layers.Hereinafter,divided into 13steps (Figure 6),which are regarded as。
Do Clouds Compute? A Framework for Estimating the Value of Cloud Computing

Markus Klems, Jens Nimis, Stefan Tai
FZI Forschungszentrum Informatik Karlsruhe, Germany
{klems, nimis, tai}@fzi.de

1 Introduction

On-demand provisioning of scalable and reliable compute services, along with a cost model that charges consumers based on actual service usage, has been an objective in distributed computing research and industry for a while. Cloud Computing promises to deliver on this objective: building on compute and storage virtualization technologies, consumers are able to rent infrastructure "in the Cloud" as needed, deploy applications and store data, and access them via Web protocols on a pay-per-use basis.

In addition to the technological challenges of Cloud Computing there is a need for an appropriate, competitive pricing model for infrastructure-as-a-service. The acceptance of Cloud Computing depends on the ability to implement a model for value co-creation. In this paper, we discuss the need for valuation of Cloud Computing, identify key components, and structure these components in a framework. The framework assists decision makers in estimating Cloud Computing costs and in comparing these costs to conventional IT solutions.

2 Objective

The main purpose of our paper is to present a basic framework for estimating the value and determining the benefits of Cloud Computing as an alternative to conventional IT infrastructure, such as privately owned and managed IT hardware. Our effort is motivated by the rise of Cloud Computing providers and the question when it is profitable for a business to use hardware resources "in the Cloud". More and more companies already embrace Cloud Computing services as part of their IT infrastructure [1]. However, there is no guide to tell when outsourcing into the Cloud is the way to go and in which cases it does not make sense to do so. With our work we want to give an overview of the economic and technical aspects that a valuation approach to Cloud Computing must take into consideration.

Valuation is an economic discipline concerned with estimating the value of projects and enterprises [2]. Corporate management relies on valuation methods in order to make reasonable investment decisions. Although the basic methods are rather simple, like Discounted Cash Flow (DCF) analysis, the difficulties lie in their appropriate application to real-world cases.

Within the scope of our paper we are not going to cover specific valuation methods. Instead, we present a generic framework that serves for cost comparison analysis between hardware resources "in the Cloud" and a reference model, such as purchasing and installing IT hardware. The result of such a comparison shows the value of Cloud Computing associated with a specific project, measured in terms of opportunity costs. In later work the framework must be fleshed out with metrics, such as project free cash flows, EBITDA, or other suitable economic indicators. Existing cost models, such as Gartner's TCO, seem promising candidates for the design of a reference model [3].
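Although the paper does not prescribe a particular valuation method, the DCF analysis mentioned above is one way such a cost comparison could later be quantified. The sketch below is purely illustrative: the cash-flow figures, the discount rate and the three-year horizon are hypothetical assumptions, not values from the paper.

```python
# Illustrative sketch: compare two cost streams in present-value terms.
# All figures (costs, discount rate, horizon) are hypothetical assumptions.

def present_value(cash_flows, discount_rate):
    """Discount a series of yearly cash flows (year 0 first) to today."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows))

# Yearly costs modeled as negative cash flows, year 0 = today.
cloud_costs = [-20_000, -24_000, -26_000]      # pay-per-use, grows with demand
reference_costs = [-60_000, -10_000, -10_000]  # up-front hardware, then operations

rate = 0.10  # hypothetical discount rate

pv_cloud = present_value(cloud_costs, rate)
pv_reference = present_value(reference_costs, rate)

# A positive difference means the Cloud alternative is cheaper in present-value
# terms; that difference is its value expressed as avoided opportunity costs.
print(f"PV of cloud costs:     {pv_cloud:,.0f}")
print(f"PV of reference costs: {pv_reference:,.0f}")
print(f"Value of the Cloud alternative: {pv_cloud - pv_reference:,.0f}")
```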
3 Approach

A systematic, dedicated approach to Cloud Computing valuation is urgently needed. Previous work from related fields, like Grid Computing, does not consider all aspects relevant to Cloud Computing and can thus not be directly applied. Previous approaches tend to mix business objectives with technological requirements. Moreover, the role of demand behavior and the consequences it poses on IT requirements needs to be evaluated in a new light. Most important, it is only possible to value the benefit from Cloud Computing if compared to alternative solutions. We believe that a structured framework will be helpful to clarify which general business scenarios Cloud Computing addresses.

Figure 1 illustrates our framework for estimating the value of Cloud Computing. In the following, we describe in more detail the valuation steps suggested with the framework.

3.1 Business Scenario

Cloud Computing offers three basic types of services over the Internet: virtualized hardware resources in the form of storage capacity and processing power, plus data transfer volume. Since Cloud Computing is based on the idea of Internet-centric computing, access to remotely located storage and processors must be accompanied by sufficient data transfer capacity.

The business scenario must specify the business domain (internal processes, B2B, B2C, or other), key business objectives (cost efficiency, no SLA violations, short time to market, etc.), the demand behavior (seasonal, temporary spikes, etc.) and the technical requirements that follow from business objectives and demand behavior (scalability, high availability, reliability, ubiquitous access, security, short deployment cycles, etc.).

3.1.1 Business Domain

IT resources are not ends in themselves but serve specific business objectives. Organizations can benefit from Grid Computing and Cloud Computing in different domains: internal business processes, collaboration with business partners, and customer-facing services (compare to [14]).

3.1.2 Business Objectives

On a high level, the typical business benefits mentioned in the context of Cloud Computing are high responsiveness to varying, unpredictable demand behavior and shorter time to market. The IBM High Performance on Demand Solutions group has identified Cloud Computing as an infrastructure for fostering company-internal innovation processes [4]. The U.S. Defense Information Systems Agency explores Cloud Computing with a focus on rapid deployment processes, and as a provisionable and scalable standard environment [5].

3.1.3 Demand Behavior

Services and applications in the Web can be divided into two disjoint categories: services that deal with somewhat predictable demand behavior and those that must handle unexpected demand volumes. Services from the first category must be built on top of a scalable infrastructure in order to adapt to changing demand volumes. The second category is even more challenging, since increases and decreases in demand cannot be forecasted at all and sometimes occur within minutes or even seconds.

Traditionally, the IT operations department of an organization must master the difficulties involved in scaling corporate infrastructure up or down. In practice it is impossible to constantly fully utilize available server capacities, which is why there is always a tradeoff between resource over-utilization, resulting in glaring usability effects and possible SLA violations, and under-utilization, leading to negative financial performance [6]. The IT department dimensions the infrastructure according to expected demand volumes and in a way such that enough room for business growth is left. Moreover, emergency situations, like server outages and demand spikes, must be addressed and dealt with. Associated with under- and over-utilization is the notion of opportunity costs. The opportunity costs of under-utilization are measured in units of wasted compute resources, such as idle running servers.
The opportunity costs of over-utilization are the costs of losing customers or being sued as a consequence of a temporary server outage.

Expected Demand: Seasonal Demand. An online retail store is a typical service that suffers from seasonal demand spikes. During Christmas the retail store usually faces much higher demand volumes than over the rest of the year. The IT infrastructure must be dimensioned such that it can handle even the highest demand peaks in December.

Expected Demand: Temporary Effect. Some services and applications are short-lived and targeted at single or seldom events, such as Websites for the Olympic Games 2008 in Beijing. As with seasonal demand spikes, the increase and decrease of demand volume is somewhat predictable. However, the service only exists for a comparably short period of time, during which it experiences heavy traffic loads. After the event, the demand decreases to a constant low level and the service is eventually shut down.

Expected Demand: Batch Processing. The third category of expected demand scenarios are batch processing jobs. In this case the demand volume is usually known beforehand and does not need to be estimated.

Unexpected Demand: Temporary Effect. This scenario is similar to the "expected temporary effect", except for one major difference: the demand behavior cannot be predicted at all, or only a short time in advance. A typical example for this scenario is a Web start-up company that becomes popular over night because it was featured on a news network. Many people simultaneously rush to the Website of the start-up company, causing significant traffic load and eventually bringing down the servers. Named after two famous news-sharing Websites, this phenomenon is known as the "Slashdot effect" or "Digg effect".

3.1.4 Technical Requirements

Business objectives are put into practice with IT support and thus translate into specific IT requirements. For example, unpredictable demand behavior translates into the need for scalability and high availability even in the face of significant traffic spikes; time to market is directly correlated with deployment times.

3.2 Costs of Cloud Computing

After having modeled a business scenario and the estimated demand volumes, it is now time to calculate the costs of a Cloud Computing setting that can fulfill the scenario's requirements, such as scalability and high availability.

A central point besides the scenario properties mentioned in section 3.1.3 is the question: how much storage capacity and processing power is needed in order to cope with demand, and how much data transfer will be used? The numbers might either be fixed and already known beforehand, or unknown and in need of estimation.

In a next step a Utility Computing model needs to define compute units and thus provide a metric to convert and compare computing resources between the Cloud and alternative infrastructure services. Usually the Cloud Computing provider defines the Utility Computing model, associated with a pricing scheme, such as Amazon EC2 Compute Units (ECU). The vendor-specific model can be converted into a more generic Utility Computing unit, such as FLOPS, I/O operations, and the like. This might be necessary when comparing Cloud Computing offers of different vendors. Since Cloud Computing providers charge money for their services based on the Utility Computing model, these pricing schemes can be used in order to determine the direct costs of the Cloud Computing scenario. Indirect costs comprise soft factors, such as learning to use tools and gaining experience with Cloud Computing technology.
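To make this step concrete, the sketch below converts an estimated monthly resource usage into a direct-cost figure under a simple utility pricing scheme. The unit prices, resource categories and usage figures are hypothetical placeholders for illustration, not actual provider prices.

```python
# Illustrative direct-cost estimate for a Cloud Computing scenario.
# Unit prices and usage figures are hypothetical placeholders.

pricing = {                       # price per metered unit
    "instance_hours": 0.10,       # USD per virtual-machine hour
    "storage_gb_months": 0.15,    # USD per GB stored per month
    "transfer_gb": 0.17,          # USD per GB transferred
}

monthly_usage = {                 # estimated from the business scenario
    "instance_hours": 3 * 24 * 30,   # three instances running the whole month
    "storage_gb_months": 500,
    "transfer_gb": 800,
}

direct_cost = sum(pricing[unit] * amount for unit, amount in monthly_usage.items())

# Indirect costs (e.g. the effort of learning tools and gaining experience)
# would be estimated separately and added on top of the metered cost.
print(f"Estimated direct cloud cost per month: ${direct_cost:,.2f}")
```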
3.3 Costs of the Reference IT Infrastructure Service

The valuation of Cloud Computing services must take into account its costs as well as the cash flows resulting from the underlying business model. Within the context of our valuation approach we focus on a cost comparison between infrastructure in the Cloud and a reference infrastructure service. Reaching or failing to reach business objectives has an impact on cash flows and can therefore be measured in terms of monetary opportunity costs.

The reference IT infrastructure service might be conventional IT infrastructure (SME or big business), a hosted service, a Grid Computing service, or something else. This reference model can be arbitrarily complex and detailed, as long as it computes the estimated resource usage in a similar manner as in the Cloud Computing scenario of section 3.2. The resource usage will not in all cases be the same as in the Cloud Computing scenario. Some tasks might, for example, be computed locally, thus saving data transfer. Other differences could result from a totally different approach that must be taken in order to fulfill the business objectives defined in the business scenario.

In the case of privately owned IT infrastructure, cost models such as Gartner's TCO [3] provide a good tool for calculations [8]. The cost model should comprise direct costs, such as Capital Expenditures for the facility, energy and cooling infrastructure, cables, servers, and so on. Moreover, there are Operational Expenditures which must be taken into account, such as energy, network fees and IT employees. Indirect costs comprise costs from failing to meet business objectives, e.g. time to market, customer satisfaction or Quality-of-Service-related Service Level Agreements. There is no easy way to measure these, and it will vary from case to case. More sophisticated TCO models must be developed to mitigate this shortcoming. One approach might be to compare the cash flow streams that result from failing to deliver certain business objectives, such as short time to market. If the introduction of a service offering is delayed due to slow deployment processes, the resulting deficit can be calculated as a discounted cash flow.

When all direct and indirect costs have been taken into account, the total costs of the reference IT infrastructure service can be calculated by summing up. Finally, the costs of the Cloud Computing scenario and the reference model scenario can be compared.
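Summing up both sides and reading the difference as the value of the Cloud alternative could then look like the following sketch. Again, every figure is a hypothetical placeholder; the point is only the structure of the comparison (direct plus indirect costs on each side, compared over the same period).

```python
# Illustrative total-cost comparison between a Cloud scenario and a
# reference infrastructure for one year. All numbers are hypothetical.

cloud_scenario = {
    "direct": 427.0 * 12,     # metered usage cost per year (12 x the monthly sketch above)
    "indirect": 3_000.0,      # e.g. learning and tooling effort
}

reference_scenario = {
    "capex_share": 25_000.0,  # servers, cabling, cooling (annualized)
    "opex": 12_000.0,         # energy, network fees, share of IT staff
    "indirect": 8_000.0,      # e.g. delayed time to market, SLA penalties
}

total_cloud = sum(cloud_scenario.values())
total_reference = sum(reference_scenario.values())

# A positive value means the Cloud scenario meets the same business
# objectives at lower cost; the difference is its opportunity-cost value.
print(f"Total cloud costs:     ${total_cloud:,.0f}")
print(f"Total reference costs: ${total_reference:,.0f}")
print(f"Value of the Cloud alternative: ${total_reference - total_cloud:,.0f}")
```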
4. Evaluation and Discussion
Early adopters of Cloud Computing technologies are IT engineers who work on Web-scale projects, such as the New York Times TimesMachine [9]. Start-ups with high scalability requirements turn to Cloud Computing providers, such as Amazon EC2, in order to roll out Web-scale services with comparatively low entry costs [7]. These and other examples show that scalability, low market barriers and rapid deployment are among the most important drivers of Cloud Computing.

4.1 New York Times TimesMachine
In autumn 2007 New York Times senior software engineer Derek Gottfrid worked on a project named TimesMachine. The service was to provide access to any New York Times issue since 1851, adding up to a bulk of 11 million articles which had to be served in the form of PDF files. Previously Gottfrid and his colleagues had implemented a solution that generated the PDF files dynamically from already scanned TIFF images of the New York Times articles. This approach worked well, but with traffic volumes about to increase significantly it would be better to serve pre-generated static PDF files.

Faced with the challenge of converting 4 Terabytes of source data into PDF, Derek Gottfrid decided to make use of Amazon's Web Services Elastic Compute Cloud (EC2) and Simple Storage Service (S3). He uploaded the source data to S3 and started a Hadoop cluster of customized EC2 Amazon Machine Images (AMIs). With 100 EC2 AMIs running in parallel he could complete the task of reading the source data from S3, converting it to PDF and storing it back to S3 within 36 hours.

How does this use case fit in our framework? Gottfrid's approach was motivated by the simplicity with which the one-time task could be accomplished if performed "in the Cloud". No up-front costs were involved, except for insignificant expenditures when experimenting to see whether the endeavor was feasible at all. Due to the simplicity of the approach and the low costs involved, his superiors agreed without imposing bureaucratic obstacles. Another key driver was to cut short deployment times and thereby time to market. The alternative to Amazon EC2 and S3 would have been to ask for permission to purchase commodity hardware, install it and finally run the tasks - a process that very likely would have taken several weeks or even months. After process execution, the extra hardware would have had to be sold or used in another context.

This use case is a good example of a one-time batch-processing job that can be performed in a Grid Computing or Cloud Computing environment. From the backend engineer's point of view it is favorable to be able to get started without much configuration overhead, as only the task result is relevant. The data storage and processing volume is known beforehand and no measures have to be taken to guarantee scalability, availability, or the like.

In a comparative study, researchers from the CERN-based EGEE project argue that Clouds differ from Grids in that they serve different usage patterns: while Grids were mostly used for short-term job executions, Clouds usually supported long-lived services [10]. We agree that usage patterns are an important differentiator between Clouds and Grids; however, the TimesMachine use case shows that this is not a question of service lifetime. Clouds are well-suited to serve short-lived usage scenarios, such as batch processes or situational Mash-up services.
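A back-of-envelope estimate, assuming only the figures stated in this use case (100 EC2 instances running for 36 hours) and the July 2008 prices quoted in the paper's footnote, suggests why the up-front cost question barely arose. The transfer figure in the second half of the sketch is an invented assumption for illustration; S3 request fees and other charges of the time are ignored.

```python
# Back-of-envelope estimate of the TimesMachine batch job's compute cost,
# using the figures given in the text (100 EC2 instances for 36 hours) and
# the July 2008 prices quoted in the paper's footnote ($0.10 per instance-hour,
# $0.17 per GB of outgoing traffic). Other charges are ignored.

instances = 100
hours = 36
price_per_instance_hour = 0.10     # USD, small instance (paper footnote)
price_per_gb_out = 0.17            # USD, first 10 TB per month (paper footnote)

compute_cost = instances * hours * price_per_instance_hour
print(f"Compute: ${compute_cost:,.2f}")          # 100 * 36 * 0.10 = $360.00

# If, say, 1.5 TB of generated PDFs were later served out of S3 in a month
# (an assumption, not a figure from the use case), the transfer cost would be:
transfer_cost = 1_500 * price_per_gb_out
print(f"Transfer (assumed 1.5 TB out): ${transfer_cost:,.2f}")
```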
4.2 Major League Baseball
MLB Advanced Media is the company that develops and maintains the Major League Baseball Web sites. During the 2007 season, director of operations Ryan Nelson received the request to implement a chat product as an additional service to the Web site [11]. He was told that the chat had to go online as soon as possible. However, the company's data center in Manhattan did not leave much free storage capacity and processing power. Since there was no time to order and install new machines, Nelson decided to call the Cloud Computing provider Joyent. He arranged for 10 virtual machines in a development cluster and another 20 machines for production mode. Nelson's team developed and tested the chat for about 2 months and then launched the new product. When the playoffs and World Series started, more resources were needed. Another 15 virtual machines and additional RAM solved the problem.

Ryan Nelson points out two major advantages of this approach. First, the company gains flexibility to try out new products quickly and turn them off if they are not a success. In this context, the ability to scale down proves to be equally important as scaling up. Furthermore, Nelson's team can better respond to seasonal demand spikes, which are typical for Web sites about sports events.

5. Related Work
Various economic aspects of outsourcing storage capacity and processing power have been covered by previous work in distributed computing and Grid Computing [12], [13], [14], [15]. However, the methods and business models introduced for Grid Computing do not consider all economic drivers which we identified as relevant for Cloud Computing, such as pushing for short time to market in the context of organizational inertia, or low entry barriers for start-up companies.

With a rule-of-thumb calculation, Jim Gray points to the opportunity costs of distributed computing in the Internet as opposed to local computations, i.e. in LAN clusters [12]. In his scenario, $1 USD equals 1 GB sent over WAN or alternatively eight hours of CPU processing time. Gray reasons that, except for highly processing-intensive applications, outsourcing computing tasks into a distributed environment does not pay off because network traffic fees outnumber the savings in processing power. Calculating the tradeoff between basic computing services can be useful to get a general idea of the economies involved. This method can easily be applied to the pricing schemes of Cloud Computing providers. For $1 USD the Web Service Amazon EC2 offers around 6 GB data transfer or 10 hours CPU processing.¹ However, this sort of calculation only makes sense if placed in a broader context. Whether or not computing services can be performed locally depends on the underlying business objective. It might for example be necessary to process data in a distributed environment in order to enable online collaboration.

George Thanos et al. evaluate the adoption of Grid Computing technology for business purposes in a more comprehensive way [14]. The authors shed light on general business objectives and economic issues associated with Grid Computing, such as economies of scale and scope, network externalities, market barriers, etc. In particular, the explanations regarding the economic rationale behind complementing privately owned IT infrastructure with utility computing services point out important aspects that are also valid for our valuation model. Cloud Computing is heavily based on the notion of Utility Computing, where large-scale data centers play the role of a utility that delivers computing services on a pay-per-use basis. The business scenarios described by Thanos et al. only partially apply to those we can observe in Cloud Computing. Important benefits associated with Cloud Computing, such as shorter time to market and responsiveness to highly varying demand, are not covered. These business objectives bring technological challenges that Cloud Computing explicitly addresses, such as scalability and high availability in the face of unpredictable short-term demand peaks.
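One way to read Gray's rule of thumb is as a break-even ratio: how many CPU hours of work each gigabyte moved over the WAN must "carry" before outsourcing pays off. The sketch below applies that reading to the two price points quoted above; the break-even interpretation is ours, and the figures are only the rounded ones given in the text.

```python
# Sketch of Jim Gray's rule-of-thumb tradeoff: outsourcing a task pays off only
# if it is sufficiently compute-intensive relative to the data that must be moved.
# Figures are the ones quoted in the text: Gray (2003): $1 ~ 1 GB WAN ~ 8 h CPU;
# Amazon EC2 (2008 pricing, per the text): $1 ~ about 6 GB ~ 10 h CPU.

def breakeven_cpu_hours_per_gb(gb_per_dollar, cpu_hours_per_dollar):
    """CPU hours of work one GB of transferred data must 'carry' to break even."""
    return cpu_hours_per_dollar / gb_per_dollar

gray_2003 = breakeven_cpu_hours_per_gb(gb_per_dollar=1, cpu_hours_per_dollar=8)
ec2_2008 = breakeven_cpu_hours_per_gb(gb_per_dollar=6, cpu_hours_per_dollar=10)

print(f"Gray (2003): a job should need > {gray_2003:.1f} CPU hours per GB moved")
print(f"EC2 (2008):  a job should need > {ec2_2008:.2f} CPU hours per GB moved")
```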
6. Conclusion and Future Work
Cloud Computing is an emerging trend of provisioning scalable and reliable services over the Internet as computing utilities. Early adopters of Cloud Computing services, such as start-up companies engaged in Web-scale projects, intuitively embrace the opportunity to rely on massively scalable IT infrastructure from providers like Amazon. However, there is no systematic, dedicated approach to measure the benefit from Cloud Computing that could serve as a guide for decision makers to tell when outsourcing IT resources into the Cloud makes sense. We have addressed this problem and developed a valuation framework that serves as a starting point for future work. Our framework provides a step-by-step guide to determine the benefits from Cloud Computing, from describing a business scenario to comparing Cloud Computing services with a reference IT solution. We identify key components: business domain, objectives, demand behavior and technical requirements. Based on business objectives and technical requirements, the costs of a Cloud Computing service, as well as the costs of a reference IT solution, can be calculated and compared. Well-known use cases of Cloud Computing adopters serve as a means to discuss and evaluate the validity of our framework.

In future work, we will identify and analyze concrete valuation methods that can be applied within the context of our framework. Furthermore, it is necessary to evaluate cost models that might serve as a template for estimating direct and indirect costs, a key challenge that we have only mentioned.

¹ According to the Amazon Web Services pricing in July 2008, one GB of outgoing traffic costs $0.17 for the first 10 TB per month. Running a small AMI instance with the compute capacity of a 1.0-1.2 GHz 2007 Xeon or Opteron processor for one hour costs $0.10 USD.

References
[1] Amazon Web Services: Customer Case Studies, /Success-Stories-AWS-home-page/b/ref=sc_fe_l_1?ie=UTF8&node=182241011&no=3440661
[2] Titman, S., Martin, J.: Valuation. The Art & Science of Corporate Investment Decisions, Addison-Wesley (2007)
[3] Gartner TCO, /TCO/index.htm
[4] Chiu, W.: From Cloud Computing to the New Enterprise Data Center, IBM High Performance On Demand Solutions (2008)
[5] Pentagon's IT Unit Seeks to Adopt Cloud Computing, New York Times, /idg/IDG_852573C400693880002574890080F9EF.html?ref=technology
[6] Schlossnagle, T.: Scalable Internet Architectures, Sams Publishing (2006)
[7] PowerSet Use Case, /b?ie=UTF8&node=331766011&me=A36L942TSJ2AJA
[8] Koomey, J.: A Simple Model for Determining True Total Cost of Ownership for Data Centers, Uptime Institute (2007)
[9] New York Times TimesMachine use case, /2007/11/01/self-service-prorated-super-computing-fun/
[10] Begin, M.: An EGEE Comparative Study: Grids and Clouds - Evolution or Revolution?, CERN Enabling Grids for E-Science (2008)
[11] Major League Baseball use case, /news/2007/121007-your-take-mlb.html
[12] Gray, J.: Distributed Computing Economics. Microsoft Research Technical Report MSR-TR-2003-24, Microsoft Research (2003)
[13] Buyya, R., Stockinger, H., Giddy, J., Abramson, D.: Economic Models for Management of Resources in Grid Computing, ITCom (2001)
[14] Thanos, G., Courcoubetis, C., Stamoulis, G.: Adopting the Grid for Business Purposes: The Main Objectives and the Associated Economic Issues, Grid Economics and Business Models: 4th International Workshop, GECON (2007)
[15] Hwang, J., Park, J.: Decision Factors of Enterprises for Adopting Grid Computing, Grid Economics and Business Models: 4th International Workshop, GECON (2007)
Handbook of Applied Geochemistry Element Abundance Data
Compiled by Chi Qinghua and Yan Mingcai. Geological Publishing House, Beijing.

Abstract: This handbook compiles the chemical compositions and element abundances of igneous rocks, sedimentary rocks, metamorphic rocks, soils, stream sediments, floodplain sediments, shallow-sea sediments and the continental crust, as proposed by different researchers in China and abroad. It also lists the certified values of the main Chinese geochemical reference materials commonly used in exploration geochemistry and environmental geochemistry. The contents are the basic geochemical data on the various important geological media that every geochemist needs to know.

The book is intended for researchers working in geochemistry, petrology, exploration geochemistry, eco-environmental and agricultural geochemistry, geological sample analysis and testing, mineral exploration and basic geology, and may also be used by researchers in other fields of the earth sciences.

Cataloguing in Publication (CIP) data: Handbook of Applied Geochemistry Element Abundance Data / compiled by Chi Qinghua and Yan Mingcai. Beijing: Geological Publishing House, December 2007. ISBN 978-7-116-05536-0. Subject: geochemical abundance - chemical elements - data - handbooks; classification P595-62. CIP record of the National Library of China no. (2007) 185917. Responsible editors: Wang Yongfeng, Chen Junzhong; proofreader: Li Mei. Published by the Geological Publishing House, 31 Xueyuan Road, Haidian District, Beijing 100083; tel. (010) 82324508 (mail order); fax (010) 82310759; e-mail: zbs@. Printed by the Beijing Dida Colour Printing Factory. Format 889 mm × 1194 mm, 1/16; 10.25 printed sheets; 260,000 characters; print run 1-3,000 copies. First edition, first printing, Beijing, December 2007. Price: 28.00 yuan. (Suggestions and comments may be addressed to the publisher; copies with printing or binding defects will be exchanged by the publisher.)

On the Handbook of Applied Geochemistry Element Abundance Data (in lieu of a preface)
Geochemical element abundance data are statistical data on the contents of many elements, in various media and at various scales, within the five spheres of the Earth's crust. They are important source material for applied geochemical research aimed at solving resource and environmental problems. Compiling these data in one place saves researchers a great deal of the labour and time otherwise spent searching the literature. This small volume was compiled with exactly that idea in mind.
ExtremeTech3D Pipeline TutorialJune13,2001By: Dave SalvatorIntroductionFrom the movie special effects that captivate us,to medical imaging to games and beyond,the impact that 3D graphics have made is nothing short of revolutionary.This technology,which took consumer PCs by storm about five years ago,has its roots in academia,and in the military.In fact,in some sense,the3D graphics we enjoy on our PCs today is a sort of"Peace Dividend,"since many professionals now working in the3D chip,film and game development industries cut their teeth designing military simulators.Entertainment aside,3D graphics in Computer Assisted Design(CAD)has also brought industrial design a quantum leap forward.Manufacturers can not only design and"build"their products without using a single piece of material,but can also leverage that3D model by interfacing with manufacturing machinery using Computer Assisted Manufacturing(CAM).But for most of us,3D graphics has made its biggest impact in the entertainment industries,both in film and in games,and its here that most of us became acquainted with the technology and jargon surrounding3D graphics.It is useful to note that not all rendering systems have the same goals in mind.Offline rendering systems, such as those used in CAD applications,stress accuracy over frame rate.Such models might be used to manufacture airplane parts,for instance.Real-time renderers,like game engines and simulators,tend to emphasize constant frame rate to keep animations smooth and fluid,and are willing to sacrifice both geometric and texture detail in order to do this.Some renderers are a kind of hybrid,like those used in the Toy Story movies.Artists and programmers using Pixar's Renderman technology create visually stunning scenes,but each frame of animation might take hours to render on a server"farm"--a group of computers to which the rendering work is distributed. 
These offline-rendered frames are then sequenced together at24frames per second(fps),the standard film rate,to produce the final cut of the film.With the rapid growth of consumer3D chips'rendering power,the line between"consumer3D"and "workstation3D"has blurred considerably.However,real-time game engines do make trade-offs to maintain frame rate,and some even have the ability to"throttle"features on and off if the frame rate dips below a certain level.In contrast,workstation3D can require advanced features not needed in today's3D games.The field of3D graphics is expansive and complex.Our goal is to present a series of fairly technical yet approachable articles on3D graphics technology.We'll start with the way3D graphics are created using a multi-step process called a3D pipeline.We'll walk down the3D pipeline,from the first triangle in a scene to the last pixel drawn.We will present some of the math involved in rendering a3D scene,though the treatment of3D algorithms will be introductory.(We list a good number of references at the end of this story if you want to dive deeper).In the Pipe,Five By FiveBecause of the sequential nature of3D graphics rendering,and because there are so many calculations to be done and volumes of data to be handled,the entire process is broken down into component steps, sometimes called stages.These stages are serialized into the aforementioned3D graphics pipeline.The huge amount of work involved in creating a scene has led3D rendering system designers(both hardware and software)to look for all possible ways to avoid doing unnecessary work.One designerquipped,"3D graphics is the art of cheating without getting caught."Translated,this means that one of the art-forms in3D graphics is to elegantly reduce visual detail in a scene so as to gain better performance,but do it in such a way that the viewer doesn't notice the loss of quality.Processor and memory bandwidth are precious commodities,so anything designers can do to conserve them benefits performance greatly.One quick example of this is culling,which tells the renderer,"If the view camera(the viewer's eye)can't see it, don't bother processing it and only worry about what the view camera can see."With the number of steps involved and their complexity,the ordering of these stages of the pipeline can vary between implementations.While we'll soon inspect the operations within these stages in much more detail,broadly described,the general ordering of a3D pipeline breaks down into four sections:Application/ Scene,Geometry,Triangle Setup,and Rasterization/Rendering.While the following outline of these sections may look daunting,by the time you're done reading this story,you'll be among an elite few who really understand how3D graphics works,and we think you'll want to get even deeper!3D Pipeline-High-Level Overview1.Application/SceneScene/Geometry database traversalMovement of objects,and aiming and movement of view cameraAnimated movement of object modelsDescription of the contents of the3D worldObject Visibility Check including possible Occlusion CullingSelect Level of Detail(LOD)2.GeometryTransforms(rotation,translation,scaling)Transform from Model Space to World Space(Direct3D)Transform from World Space to View SpaceView ProjectionTrivial Accept/Reject CullingBack-Face Culling(can also be done later in Screen Space)LightingPerspective Divide-Transform to Clip SpaceClippingTransform to Screen Space3.Triangle SetupBack-face Culling(or can be done in view space before lighting)Slope/Delta CalculationsScan-Line 
Conversion4.Rendering/RasterizationShadingTexturingFogAlpha Translucency TestsDepth BufferingAntialiasing(optional)DisplayWhere the Work Gets DoneWhile numerous high-level aspects of the3D world are managed by the application software at the application stage of the pipeline(which some argue isn't technically part of the3D pipeline),the last three major stages of the pipeline are often managed by an Application Programming Interface(API),such as SGI's OpenGL,Microsoft's Direct3D,or Pixar's Renderman.And graphics drivers and hardware are called by the APIs to performing many of the graphics operations in hardware.Graphics APIs actually abstract the application from the hardware,and vice versa,providing the application with true device independence.Thus,such APIs are often called Hardware Abstraction Layers(HALs).The goal of this design is pretty straightforward--application makers can write their program once to an API,and it will(should)run on any hardware whose drivers support that API.Conversely,hardware makers write their drivers up to the API,and in this way applications written to this API will(should)run on their hardware(See Figure1).The parenthetical"should's"are added because there are sometimes compatibility issues as a result of incorrect usages of the API(called violations),that might cause application dependence on particular hardware features,or incorrect implementation of API features in a hardware driver,resulting in incorrect or unexpected results.Figure1-3D API stackSpace InvadersBefore we dive into pipeline details,we need to first understand the high level view of how3D objects and3D worlds are defined and how objects are defined,placed,located,and manipulated in larger three-dimensional spaces,or even within their own boundaries.In a3D rendering system,multiple Cartesian coordinate systems(x-(left/right),y-(up/down)and z-axis(near/far))are used at different stages of the pipeline.While used for different though related purposes,each coordinate system provides a precise mathematical method to locate and represent objects in the space.And not surprisingly,each of these coordinate systems is referred to as a"space."Model of X-Y-Z Cartesian Coordinate systemObjects in the 3D scene and the scene itself are sequentially converted,or transformed,through five spaceswhen proceeding through the 3D pipeline.A brief overview of these spaces follows:Model Space:where each model is in its own coordinate system,whose origin is some point on the model,such as the right foot of a soccer player model.Also,the model will typically have a control point or"handle".To move the model,the 3D renderer only has to move the control point,because model spacecoordinates of the object remain constant relative to its control point.Additionally,by using that same"handle",the object can be rotated.World Space:where models are placed in the actual 3D world,in a unified world coordinate system.It turnsout that many 3D programs skip past world space and instead go directly to clip or view space.TheOpenGL API doesn't really have a world space.View Space (also called Camera Space):in this space,the view camera is positioned by the application(through the graphics API)at some point in the 3D world coordinate system,if it is being used.The worldspace coordinate system is then transformed (using matrix math that we'll explore later),such that thecamera (your eye point)is now at the origin of the coordinate system,looking straight down the z-axis intothe scene.If world space is bypassed,then the scene is 
transformed directly into view space,with thecamera similarly placed at the origin and looking straight down the z-axis.Whether z values are increasingor decreasing as you move forward away from the camera into the scene is up to the programmer,but fornow assume that z values are increasing as you look into the scene down the z-axis.Note that culling,back-face culling,and lighting operations can be done in view space.The view volume is actually created by a projection,which as the name suggests,"projects the scene"infront of the camera.In this sense,it's a kind of role reversal in that the camera now becomes a projector,and the scene's view volume is defined in relation to the camera.Think of the camera as a kind ofholographic projector,but instead of projecting a 3D image into air,it instead projects the 3D scene "into"your monitor.The shape of this view volume is either rectangular (called a parallel projection),or pyramidal(called a perspective projection),and this latter volume is called a view frustum (also commonly calledfrustrum,though frustum is the more currentdesignation).click on image for full viewThe view volume defines what the camera will see,but just as importantly,itdefines what the camera won't see,and in so doing,many objects models andparts of the world can be discarded,sparing both 3D chip cycles and memorybandwidth.The frustum actually looks like an pyramid with its top cut off.The top of the inverted pyramid projection isclosest to the camera's viewpoint and radiates outward.The top of the frustum is called the near (or front)clipping plane and the back is called the far (or back)clipping plane.The entire rendered 3D scene must fitbetween the near and far clipping planes,and also be bounded by the sides and top of the frustum.Iftriangles of the model (or parts of the world space)falls outside the frustum,they won't be processed.Similarly,if a triangle is partly inside and partly outside the frustrum the external portion will be clipped off atthe frustum boundary,and thus the term clipping.Though the view space frustum has clipping planes,clipping is actually performed when the frustum is transformed to clip space.Deeper Into SpaceClip Space:Similar to View Space,but the frustum is now "squished"into a unit cube,with the x and ycoordinates normalized to a range between –1and 1,and z is between 0and 1,which simplifies clippingcalculations.The"perspective divide"performs the normalization feat,by dividing all x,y,and z vertex coordinates by a special"w"value,which is a scaling factor that we'll soon discuss in more detail.The perspective divide makes nearer objects larger,and farther objects smaller as you would expect when viewing a scene in reality.Screen Space:where the3D image is converted into x and y2D screen coordinates for2D display.Note that z and w coordinates are still retained by the graphics systems for depth/Z-buffering(see Z-buffering section below)and back-face culling before the final render.Note that the conversion of the scene to pixels, called rasterization,has not yet occurred.Because so many of the conversions involved in transforming throughthese different spaces essentially are changing the frame of reference,it's easy to get confused.Part of what makes the3D pipeline confusingis that there isn't one"definitive"way to perform all of theseoperations,since researchers and programmers have discovereddifferent tricks and optimizations that work for them,and because thereare often multiple viable ways to solve a given3D/mathematicalproblem.But,in 
general,the space conversion process follows theorder we just described.To get an idea about how these different spaces interact,consider this example:Take several pieces of Lego,and snap them together to make some object.Think of the individual pieces of Lego as the object's edges,with vertices existing where the Legos interconnect(while Lego construction does not form triangles,the most popular primitive in3D modeling,but rather quadrilaterals,our example will still work).Placing the object in front of you,the origin of the model space coordinates could be the lower left near corner of the object,and all other model coordinates would be measured from there.The origin can actually be any part of the model,but the lower left near corner is often used.As you move this object around a room(the3D world space or view space,depending on the3D system),the Lego pieces'positions relative to one another remain constant(model space),although their coordinates change in relation to the room(world or view spaces).3D Pipeline Data FlowIn some sense,3D chips have become physical incarnations of the pipeline,where data flows"downstream" from stage to stage.It is useful to note that most operations in the application/scene stage and the early geometry stage of the pipeline are done per vertex,whereas culling and clipping is done per triangle,and rendering operations are done per putations in various stages of the pipeline can be overlapped, for improved performance.For example,because vertices and pixels are mutually independent of one another in both Direct3D and OpenGL,one triangle can be in the geometry stage while another is in the Rasterization stage.Furthermore,computations on two or more vertices in the Geometry stage and two or more pixels(from the same triangle)in the Rasterzation phase can be performed at the same time.Another advantage of pipelining is that because no data is passed from one vertex to another in the geometry stage or from one pixel to another in the rendering stage,chipmakers have been able to implement multiple pixel pipes and gain considerable performance boosts using parallel processing of these independent entities.It's also useful to note that the use of pipelining for real-time rendering,though it has many advantages,is not without downsides.For instance,once a triangle is sent down the pipeline,the programmer has pretty much waved goodbye to it.To get status or color/alpha information about that vertex once it's in the pipe is very expensive in terms of performance,and can cause pipeline stalls,a definite no-no.Pipeline Stages--The Deep Dive1.Application/Scene3D Pipeline-High-Level Overview1.Application/SceneScene/Geometry database traversalMovement of objects,and aiming and movement of view cameraAnimated movement of object modelsDescription of the contents of the 3D worldObject Visibility Check including possible Occlusion CullingSelect Level of Detail (LOD)2.Geometry3.Triangle Setup4.Rendering /Rasterization The 3D application itself could be considered the start of the 3D pipeline,though it's not truly part of thegraphics subsystem,but it begins the image generation process that results in the final scene or frame ofanimation.The application also positions the view camera,which is essentially your "eye"into the 3D world.Objects,both inanimate and animated are first represented in the application using geometric primitives,orbasic building blocks.Triangles are the most commonly used primitives.They are simple to utilize becausethree vertices always describe a 
plane,whereas polygons with four or more vertices may not reside in thesame plane.More sophisticated systems support what are called higher-order surfaces that are differenttypes of curved primitives,which we'll cover shortly.3D worlds and the objects in them are created in programs like 3D Studio Max,Maya,AutoDesk 3D Studio,Lightwave,and Softimage to name a few.These programs not only allow 3D artists to build models,but alsoto animate them.Models are first built using high triangle counts and can then be shaded and textured.Next,depending on the constraints of the rendering engine--off-line or real-time--artists can reduce thetriangle counts of these high-detail models to fit within a given performance budget.Objects are moved from frame to frame by the application,be it an offline renderer or a game engine.Theapplication traverses the geometry database to gather necessary object information (the geometry databaseincludes all the geometric primitives of the objects),and moves all objects that are going to change in thenext frame of animation.Appreciate that in a game engine for instance,the renderer doesn't have theplayground all to itself.The game engine must also tend to AI (artificial intelligence)processing,collisiondetection and physics,audio,and networking (if the game is being played in multiplayer mode over anetwork).All models have a default "pose",and in the case of models of humans,the default pose is called itsDaVinci pose,because this pose resembles DaVinci's famous Vitruvian Man.Once the application hasspecified the model's new "pose,"this model is now ready for the next processingstep.click on image for full viewThere's an operation that some applications do at this point,called "occlusionculling",a visibility test that determines whether an object is partially orcompletely occluded (covered)by some object in front of it.If it is,the occludedobject,or the part of it that is occluded is discarded.The cost savings in termsof calculations that would otherwise need to be performed in the pipeline can beconsiderable,particularly in a scene with high depth complexity,meaning thatobjects toward the back of the scene have several "layers"of objects in front ofthem,occluding them from the view camera.If these occluded objects can be discarded early,they won't have to be carriedany further into the pipeline,which saves unnecessary lighting,shading andtexturing calculations.For example,if you're in a game where it's you versusGodzilla,and the big guy is lurking behind a building you're walking toward,youcan't see him (sneaky devil).The game engine doesn't have to worry aboutdrawing the Godzilla model,since the building's model is in front of him,andthis can spare the hardware from having to render Godzilla in that frame of animation.A more important step is a simple visibility check on each object.This can be accomplished by determining ifthe object is in the view frustum (completely or partially).Some engines also try to determine whether anobject in the view frustum is completely occluded by another object.This is typically done using simpleconcepts like portals or visibility sets,especially for indoor worlds.These are two similar techniques that getimplemented in 3D game engines as a way to not have to draw parts of the 3D world that the camera won'tbe able to see.[Eberly,p.413]The original Quake used what were called potentially visible sets(PVS)that divided the world into smaller pieces.Essentially,if the game player was in a particular piece of the world, other areas would not 
be visible,and the game engine wouldn't have to process data for those parts of the world.Another workload-reduction trick that's a favorite among programmers is the use of bounding boxes.Say for instance you've got a10,000-triangle model of a killer rabbit,and rather than test each of the rabbit model's triangles,a programmer can encase the model in a bounding box,consisting of12triangles(two for each side of the six-sided box).They can then test culling conditions(based on the bounding box vertices instead of the rabbit's vertices)to see if the killer rabbit will be visible in the scene.Even before you might further reduce the number of vertices by designating those in the killer rabbit model that are shared(vertices of adjacent triangles can be shared,a concept we'll explore in more detail later),you've already reduced your total vertex count from30,000(killer rabbit)to36(bounding box)for this test.If the test indicates the bounding box is not visible in the scene,the killer rabbit model can be trivially rejected,you've just saved yourself a bunch of work.You Down With LOD?Another method for avoiding excessive work is what's called object Level of Detail,referred to as LOD.This technique is lossy,though given how it's typically used,the loss of model detail is often imperceptible. Object models are built using several discrete LOD levels.A good example is a jet fighter with a maximum LOD model using10,000triangles,and additional lower resolution LOD levels consisting of5,000,2,500, 1000and500triangles.The jet's distance to the view camera will dictate which LOD level gets used.If it's very near,the highest resolution LOD gets used,but if it's just barely visible and far from the view camera, the lowest resolution LOD model would be used,and for locations between the two,the other LOD levels would be used.LOD selection is always done by the application before it passes the object onto the pipeline for further processing.To determine which LOD to use,the application maps a simplified version of the object(often just the center point)to view space to determine the distance to the object.This operation occurs independently of the pipeline.The LOD must be known in order to determine which set of triangles(different LOD levels)to send to the pipeline..Geometric Parlor TricksGenerally speaking,a higher triangle count will produce a more realistic looking rmation about these triangles--their location in3D space,color,etc.--is stored in the descriptions of the vertices of each triangle.The aggregation of these vertices in the3D world is referred to as a scene database,which is the very same animal as the geometry database mentioned above.Curved areas of models,like tires on a car, require many triangles to approximate a smooth curve.The adverse effect of aggressively curtailing the number of vertices/triangles in a circle,for example,via an LOD reduction would be a"bumpy"circle,where you could see the vertices of each component triangle.If many more triangles represented the circle,it would look far smoother at its edge.Optimizations can be made to reduce the actual number of vertices sent down the pipeline without compromising the quality of the model,because connected triangles share vertices.Programmers can use connected triangle patterns called triangle strips and fans to reduce vertex count.For example:In the case of a strip of triangles,the simplest example would be a rectangle described by two right triangles,with a shared hypotenuse.Normally,two such triangles drawn separately would 
yield six vertices. But,with the two right triangles being connected,they form a simple triangle strip that can be described using four vertices,reducing the average number of vertices per triangle to two,rather than the original three.While this may not seem like much of reduction,the advantage grows as triangle(and resulting vertex)counts scale,and the average number of unique vertices per triangle moves toward one.[RTR,p. 234]Here's the formula for calculating the average number of vertices,given m triangles:1+2/mSo,in a strip with100triangles,the average number of vertices per triangle would be1.02,or about102 vertices total,which is a considerable savings compared to processing the300vertices of the individual triangles.In this example,we hit the maximum cost savings obtainable from the use of strips for m number of triangles,which is m+2vertices[RTR,p.239].These savings can really add up when you consider that it takes32bytes of data to describe the attributes(such as position,color,alpha,etc.)of a single vertex in Direct3D.Of course,the entire scene won't consist of strips and fans,but developers do look to use them where they can because of the associated cost savings.In the case of fans,a programmer might describe a semicircle using20triangles in a pie-slice arrangement. Normally this would consist of60vertices,but by describing this as a fan,the vertex count is reduced to22. The first triangle would consist of three vertices,but each additional triangle would need only one additional vertex,and the center of the fan has a single vertex shared by all triangles.Again the maximum savings possible using strips/fans is achieved.Another important advantage of strips and fans,is that they are a"non-lossy"type of data reduction, meaning no information or image quality is thrown away in order to get the data reduction and resulting speedup.Additionally,triangles presented to the hardware in strip or fan order improve vertex cache efficiency,which can boost geometry processing performance.Another tool available to programmers is the indexed triangle list,which can represent a large number of triangles,m,with m/2vertices,about twice the reduction of using strips or fans.This representational method is preferred by most hardware architectures.Curved Surfaces AheadRather than use numerous triangles to express a curved surface,3D artists and programmers have another tool at their disposal:higher-order surfaces.These are curved primitives that have more complex mathematical descriptions,but in some cases,this added complexity is still cheaper than describing an object with a multitude of triangles.These primitives have some pretty odd sounding names:parametric polynomials(called SPLINEs),non-uniform rational b-splines(NURBs),Beziers,parametric bicubic surfaces and n-patches.Because3D hardware best understands triangles,these curved surfaces defined at the application level are tessellated,or converted to triangles by the API runtime,the graphics card driver or the hardware for further handling through the3D pipeline.Improved performance is possible if the hardware tessellates the surface after it has been sent from the CPU to the3D card for transform and lighting(T&L) processing,placing less of a load on the AGP port,a potential bottleneck.2.Geometry3D Pipeline-High-Level Overview1.Application/Scene2.GeometryTransforms(rotation,translation,scaling)Transform from Model Space to World Space(Direct3D)Transform from World Space to View SpaceView ProjectionTrivial Accept/Reject Culling(or can be 
done later in Screen Space)
Back-Face Culling (can also be done later in Screen Space)
Lighting
Perspective Divide - Transform to Clip Space
Clipping
Transform to Screen Space
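The transform chain described in this section (model space to world space to view space, a perspective projection into clip space, the perspective divide, and the final mapping to screen coordinates) can be sketched in a few lines of code. The snippet below is an illustrative toy example in Python/NumPy, not the exact convention of Direct3D or OpenGL (which differ in matrix layout, handedness and depth range); all positions, the field of view and the resolution are made-up values.

```python
# Illustrative sketch (not Direct3D's or OpenGL's exact conventions) of the
# transform chain described above: model -> world -> view -> clip space via a
# perspective projection, followed by the perspective divide and screen mapping.
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def perspective(fov_y_deg, aspect, near, far):
    """Simple projection for a camera looking down +z, mapping depth to [0, 1]."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2.0)
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = far / (far - near)
    m[2, 3] = -near * far / (far - near)
    m[3, 2] = 1.0          # copies view-space z into w for the perspective divide
    return m

# A single model-space vertex (x, y, z, w=1), e.g. one corner of the Lego object.
v_model = np.array([1.0, 2.0, 0.0, 1.0])

model_to_world = translation(5.0, 0.0, 20.0)   # place the object in the world
world_to_view  = translation(0.0, 0.0, -10.0)  # camera at world (0, 0, 10), looking down +z
proj           = perspective(60.0, 16 / 9, near=0.1, far=100.0)

v_clip = proj @ world_to_view @ model_to_world @ v_model
v_ndc  = v_clip[:3] / v_clip[3]                # perspective divide by w

width, height = 1280, 720                      # map x, y from [-1, 1] to pixels
screen = ((v_ndc[0] + 1) * 0.5 * width, (1 - (v_ndc[1] + 1) * 0.5) * height)
print("clip:", v_clip, "\nndc:", v_ndc, "\nscreen:", screen)
```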
Preprint of:Timo A.Nieminen,Vincent L.Y.Loke,Alexander B.Stilgoe,Gregor Kn¨o ner,Agata M.Bra´n czyk,Norman R.Heckenberg and Halina Rubinsztein-Dunlop “Optical tweezers computational toolbox”Journal of Optics A9,S196-S203(2007)Optical tweezers computational toolboxTimo A Nieminen,Vincent L Y Loke,Alexander B Stilgoe,Gregor Kn¨o ner,Agata M Bra´n czyk,Norman R Heckenbergand Halina Rubinsztein-DunlopCentre for Biophotonics and Laser Science,School of Physical Sciences,TheUniversity of Queensland,Brisbane QLD4072,AustraliaAbstract.We describe a toolbox,implemented in Matlab,for the computationalmodelling of optical tweezers.The toolbox is designed for the calculation of opticalforces and torques,and can be used for both spherical and nonspherical particles,inboth Gaussian and other beams.The toolbox might also be useful for light scatteringusing either Lorenz–Mie theory or the T-matrix method.1.IntroductionComputational modelling provides an important bridge between theory and experiment—apart from the simplest cases,computational methods must be used to obtain quantitative results from theory for comparison with experimental results. This is very much the case for optical trapping,where the size range of typical particles trapped and manipulated in optical tweezers occupies the gap between the geometric optics and Rayleigh scattering regimes,necessitating the application of electromagnetic theory.Although,in principle,the simplest cases—the trapping and manipulation of homogeneous and isotropic microspheres—has an analytical solution—generalised Lorenz–Mie theory—significant computational effort is still required to obtain quantitative results.Unfortunately,the mathematical complexity of Lorenz–Mie theory presents a significant barrier to entry for the novice,and is likely to be a major contributor to the lagging of rigorous computational modelling of optical tweezers compared to experiment.If we further consider the calculation of optical forces and torques on non-spherical particles—for example,if we wish to consider optical torques on and rotational alignment of non-spherical microparticles,the mathematical difficulty is considerably greater.one the efficient methods for calculating optical forcesand torques on non-spherical particles in optical traps is closly allied to Lorenz–Mie theory—the T-matrix method(Waterman1971,Mishchenko1991,Nieminen et al.2003a).However,while the Mie scattering coefficients have a relatively simple analytical form,albeit involving special functions,the T-matrix requires considerable numerical effort for its calculation.It is not surprising that the comprehensive bibliographic database on computational light scattering using the T-matrix method by Mishchenko et al.(2004)lists only four papers applying the method to optical tweezers(Bayoudh et al.2003,Bishop et al.2003,Nieminen,Rubinsztein-Dunlop, Heckenberg&Bishop2001,Nieminen,Rubinsztein-Dunlop&Heckenberg2001).Since the compilation of this bibliography,other papers have appeared in which this is done (Nieminen et al.2004,Simpson&Hanna2007,Singer et al.2006),but they are few in number.Since the potential benefits of precise and accurate computational modelling of optical trapping is clear,both for spherical and non-spherical particles,we believe that the release of a freely-available computational toolbox will be valuable to the optical trapping community.We describe such a toolbox,implemented in Matlab.We outline the theory underlying the computational methods,the mathematics and the algorithms,the toolbox 
itself,typical usage,and present some example results.The toolbox can be obtained at .au/people/nieminen/software.html at the time of publication.Since such software projects tend to evolve over time,and we certainly intend that this one will do so,potential users are advised to check the accompanying documentation.Along these lines,we describe our plans for future development. Of course,we welcome input,feedback,and contributions from the optical trapping community.2.FundamentalsThe optical forces and torques that allow trapping and manipulation of microparticles in beams of light result from the transfer of momentum and angular momentum from the electromagneticfield to the particle—the particle alters the momentum or angular momentumflux of the beam through scattering.Thus,the problem of calculating optical forces and torques is essentially a problem of computational light scattering.In some ways,it is a simple problem:the incidentfield is monochromatic,there is usually only a single trapped particle,which isfinite in extent,and speeds are so much smaller than the speed of light that we can for most purposes neglect Doppler shifts and assume we have a steady-state monochromatic single-scattering problem.Although typical particles inconveniently are of sizes lying within the gap between the regimes of applicability of small-particle approximations(Rayleigh scattering) and large-particle approximations(geometric optics),the particles of choice are often homogeneous isotropic spheres,for which an analytical solution to the scattering problem is available—the Lorenz–Mie solution(Lorenz1890,Mie1908).While theapplication of Lorenz–Mie theory requires significant computational effort,the methods are well-known.The greatest difficulty encountered results from the incident beam being a tightly focussed beam.The was developed for scattering of plane waves,and its extension to non-plane illumination is usually called generalised Lorenz–Mie theory(GLMT)(Gouesbet&Grehan1982)which has seen significant use for modelling the optical trapping of homogeneous isotropic spheres(Ren et al.1996, Wohland et al.1996,Maia Neto&Nussenzweig2000,Mazolli et al.2003,Lock2004a, Lock2004b,Kn¨o ner et al.2006,Neves et al.2006).The same name is sometimes used for the extension of Lorenz–Mie theory to non-spherical,but still separable geometries such as spheroids(Han&Wu2001,Han et al.2003).The source of the difficulty lies in the usual paraxial representations of laser beams being solutions of the scalar paraxial wave equation rather than solutions of the vector Helmholtz equation.Our method of choice is to use a least-squaresfit to produce a Helmholtz beam with a far-field matching that expected from the incident beam being focussed by the objective(Nieminen et al.2003b).At this point,we can write the incidentfield in terms of a discrete basis set of functionsψ(inc)n,where n is mode index labelling the functions,each of which is a solution of the Helmholtz equation,U inc=∞n a nψ(inc)n,(1)where a n are the expansion coefficients for the incident wave.In practice,the sum must be truncated at somefinite n max,which places restrictions on the convergence behaviour of useful basis sets.A similar expansion is possible for the scattered wave,and we can writeU scat=∞k p kψ(scat)k,(2)where p k are the expansion coefficients for the scattered wave.As long as the response of the scatterer—the trapped particle in this case—is linear, the relation between the incident and scatteredfields must be linear,and can be written as a simple 
matrix equationp k=∞n T kn a n(3)or,in more concise notation,P=TA(4) where T kn are the elements of the T-matrix.This is the foundation of both GLMT and the T-matrix method.In GLMT,the T-matrix T is diagonal,whereas for non-spherical particles,it is not.When the scatterer isfinite and compact,the most useful set of basis functions is vector spherical wavefunctions(VSWFs)(Waterman1971,Mishchenko1991,Nieminenet al.2003a,Nieminen et al.2003b).Since the VSWFs are a discrete basis,this method lends itself well to representation of thefields on a digital computer,especially since their convergence is well-behaved and known(Brock2001).The T-matrix depends only on the properties of the particle—its composition, size,shape,and orientation—and the wavelength,and is otherwise independent of the incidentfield.This means that for any particular particle,the T-matrix only needs to be calculated once,and can then be used for repeated calculations of optical force and torque.This is the key point that makes this a highly attractive method for modelling optical trapping and micromanipulation,since we are typically interested in the optical force and torque as a function of position within the trap,even if we are merely trying to find the equilibrium position and orientation within the trap.Thus,calculations must be performed for varying incident illumination,which can be done very easily with the T-matrix method.This provides a significant advantage over many other methods of calculating scattering where the entire calculation needs to be repeated.This is perhaps the the reason that while optical forces and torques have been successfully modelled using methods such as thefinite-difference time-domain method(FDTD),the finite element method(FEM),or other methods(White2000b,White2000a,Hoekstra et al.2001,Collett et al.2003,Gauthier2005,Chaumet et al.2005,Sun et al.2006,Wong &Ratner2006),the practical application of such work has been limited.Since,as noted above,the optical forces and torques result from differences between the incoming and outgoingfluxes of electromagnetic momentum and angular momentum,calculation of thesefluxes is required.This can be done by integration of the Maxwell stress tensor,and its moment for the torque,a surface surrounding the particle.Fortunately,in the T-matrix method,the bulk of this integral can be performed analytically,exploiting the orthogonality properties of the VSWFs.In this way,the calculation can be reduced to sums of products of the expansion coefficients of thefields.At this point,two controversies in macroscopic classical electromagnetic theory intrude.Thefirst of these is the Abraham–Minkowski controversy,concerning the momentum of an electromagnetic wave in a material medium(Minkowski1908, Abraham1909,Abraham1910,Jackson1999,Pfeifer et al.2006).This controversy is resolved for practical purposes by the realisation that what is physically observable is not the force due to change in the electromagnetic momentum,but the force due to the total momentum.The controversy is essentially one of semantics—what portion of the total momentum is to be labelled“electromagnetic”,and what portion is to be labelled “material”(Pfeifer et al.2006).Abraham’s approach can be summarised as calling P/nc the electromagnetic momentumflux,where P is the power,n the refractive index,and c the speed of light in free The quantum equivalent is the momentum of a photon in a material medium¯h k/n2=¯h0on the other hand,gives nP/c as the electromagneticflux,or¯h k=n¯h k0photon.The discrepancy is 
resolved by realising that the wave of induced polarisation in the dielectric carries energy and momentum, equal to the difference between the Abraham and Minkowski pictures. It is simplest to use the Minkowski momentum flux nP/c, since this is equal to the total momentum flux.

The second controversy is the angular momentum density of circularly polarised electromagnetic waves (Humblet 1943, Khrapko 2001, Zambrini & Barnett 2005, Stewart 2005, Pfeifer et al. 2006). On the one hand, we can begin with the assumption that the angular momentum density is the moment of the momentum density, r × (E × H)/c, which results in a circularly polarised plane wave carrying zero angular momentum in the direction of propagation. On the other hand, we can begin with the Lagrangian for an electromagnetic radiation field, and obtain the canonical stress tensor and an angular momentum tensor that can be divided into spin and orbital components (Jauch & Rohrlich 1976). For a circularly polarised plane wave, the component of the angular momentum flux in the direction of propagation would be I/ω, where I is the irradiance and ω the angular frequency, in disagreement with the first result. The division of the angular momentum density resulting from this procedure is not gauge-invariant, and it is common to transform the integral of the angular momentum density into a gauge-invariant form, yielding the integral of r × (E × H)/c. Jauch & Rohrlich (1976) carefully point out that this transformation requires the dropping of surface terms at infinity. The reverse of this procedure, obtaining the spin and orbital terms starting from r × (E × H)/c, involving the same surface terms, had already been shown by Humblet (1943). The controversy thus consists of which of the two possible integrands to call the angular momentum density. However, it is not the angular momentum density as such that we are interested in, but the total angular momentum flux through a spherical surface surrounding the particle. For the electromagnetic fields used in optical tweezers, this integrated flux is the same for both choices of angular momentum density. Crichton & Marston (2000) also show that for monochromatic radiation, the division into spin and orbital angular momenta is gauge-invariant, and observable, with it being possible to obtain the spin from measurement of the Stokes parameters. The total angular momentum flux is the same as that resulting from assuming a density of r × (E × H)/c. Since the torque due to spin is of practical interest (Nieminen, Heckenberg & Rubinsztein-Dunlop 2001, Bishop et al. 2003, Bishop et al. 2004), it is worthwhile to calculate this separately from the total torque.

3. Incident field
The natural choice of coordinate system for optical tweezers is spherical coordinates centered on the trapped particle. Thus, the incoming and outgoing fields can be expanded in terms of incoming and outgoing vector spherical wavefunctions (VSWFs):

E_{\mathrm{in}} = \sum_{n=1}^{\infty} \sum_{m=-n}^{n} a_{nm} \mathbf{M}^{(2)}_{nm}(k\mathbf{r}) + b_{nm} \mathbf{N}^{(2)}_{nm}(k\mathbf{r}),   (5)

E_{\mathrm{out}} = \sum_{n=1}^{\infty} \sum_{m=-n}^{n} p_{nm} \mathbf{M}^{(1)}_{nm}(k\mathbf{r}) + q_{nm} \mathbf{N}^{(1)}_{nm}(k\mathbf{r}),   (6)

where the VSWFs are

\mathbf{M}^{(1,2)}_{nm}(k\mathbf{r}) = N_n h^{(1,2)}_n(kr) \mathbf{C}_{nm}(\theta,\phi),   (7)

\mathbf{N}^{(1,2)}_{nm}(k\mathbf{r}) = \frac{h^{(1,2)}_n(kr)}{kr N_n} \mathbf{P}_{nm}(\theta,\phi) + N_n \left( h^{(1,2)}_{n-1}(kr) - \frac{n h^{(1,2)}_n(kr)}{kr} \right) \mathbf{B}_{nm}(\theta,\phi),

where h^{(1,2)}_n(kr) are spherical Hankel functions of the first and second kind, N_n = [n(n+1)]^{-1/2} are normalization constants, and \mathbf{B}_{nm}(\theta,\phi) = r\nabla Y^m_n(\theta,\phi), \mathbf{C}_{nm}(\theta,\phi) = \nabla \times (\mathbf{r}\, Y^m_n(\theta,\phi)), and \mathbf{P}_{nm}(\theta,\phi) = \hat{\mathbf{r}}\, Y^m_n(\theta,\phi) are the vector spherical harmonics (Waterman 1971, Mishchenko 1991, Nieminen et al. 2003a, Nieminen et al. 2003b), and Y^m_n(\theta,\phi) are normalized scalar spherical harmonics. The usual polar spherical coordinates are used, where θ is the co-latitude measured from the +z axis, and φ is the azimuth, measured from the +x axis towards the +y axis.

\mathbf{M}^{(1)}_{nm} and \mathbf{N}^{(1)}_{nm} are outward-propagating TE and TM multipole fields, while \mathbf{M}^{(2)}_{nm} and \mathbf{N}^{(2)}_{nm} are the corresponding inward-propagating multipole fields. Since these wavefunctions are purely incoming or purely outgoing, each has a singularity at the origin. Since fields that are free of singularities are of interest, it is useful to define the singularity-free regular vector spherical wavefunctions:

\mathrm{Rg}\mathbf{M}_{nm}(k\mathbf{r}) = \frac{1}{2}\left[\mathbf{M}^{(1)}_{nm}(k\mathbf{r}) + \mathbf{M}^{(2)}_{nm}(k\mathbf{r})\right],   (8)

\mathrm{Rg}\mathbf{N}_{nm}(k\mathbf{r}) = \frac{1}{2}\left[\mathbf{N}^{(1)}_{nm}(k\mathbf{r}) + \mathbf{N}^{(2)}_{nm}(k\mathbf{r})\right].   (9)

Although it is usual to expand the incident field in terms of the regular VSWFs, and the scattered field in terms of outgoing VSWFs, this results in both the incident and scattered waves carrying momentum and angular momentum away from the system. Since we are primarily interested in the transport of momentum and angular momentum by the fields (and energy, too, if the particle is absorbing), we separate the total field into purely incoming and outgoing portions in order to calculate these. The user of the code can choose whether the incident–scattered or incoming–outgoing representation is used otherwise.

We use a point-matching scheme to find the expansion coefficients a_{nm} and b_{nm} describing the incident beam (Nieminen et al. 2003b), providing stable and robust numerical performance and convergence. Finally, one needs to be able to calculate the force and torque for the same particle in the same trapping beam, but at different positions or orientations. The transformations of the VSWFs under rotation of the coordinate system or translation of the origin of the coordinate system are known (Brock 2001, Videen 2000, Gumerov & Duraiswami 2003, Choi et al. 1999). It is sufficient to find the VSWF expansion of the incident beam for a single origin and orientation, and then use translations and rotations to find the new VSWF expansions about other points (Nieminen et al. 2003b, Doicu & Wriedt 1997). Since the transformation matrices for rotation and translations along the z-axis are sparse, while the transformation matrices for arbitrary translations are full, the most efficient way to carry out an arbitrary translation is by a combination of rotation and axial translation. The transformation matrices for both rotations and axial translations can be efficiently computed using recursive methods (Videen 2000, Gumerov & Duraiswami 2003, Choi et al. 1999).
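The toolbox computes these special functions with its own Matlab routines, listed in section 3.1 below. Purely as an illustration of the quantities entering equations (5)-(7), here is a small Python/SciPy sketch (not part of the toolbox) that evaluates the spherical Hankel functions h_n^(1,2)(kr) and the normalization constants N_n; the helper names are ours.

```python
# Illustrative Python/SciPy sketch (not the toolbox's Matlab routines): evaluate
# the spherical Hankel functions h_n^(1,2)(kr) and the normalisation constants
# N_n = [n(n+1)]^(-1/2) that appear in the vector spherical wavefunctions above.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def spherical_hankel(n, x, kind=1):
    """h_n^(1)(x) = j_n(x) + i*y_n(x); h_n^(2)(x) = j_n(x) - i*y_n(x)."""
    sign = 1.0 if kind == 1 else -1.0
    return spherical_jn(n, x) + sign * 1j * spherical_yn(n, x)

def N_normalisation(n):
    return 1.0 / np.sqrt(n * (n + 1))

if __name__ == "__main__":
    kr = 5.0
    for n in range(1, 4):
        h1 = spherical_hankel(n, kr, kind=1)
        h2 = spherical_hankel(n, kr, kind=2)
        print(f"n={n}: N_n={N_normalisation(n):.4f}, "
              f"h1={h1:.4f}, h2={h2:.4f}")
```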
3.1. Implementation
Firstly, it is necessary to provide routines to calculate the special functions involved. These include:
(i) sbesselj.m, sbesselh.m, sbesselh1.m, and sbesselh2.m for the calculation of spherical Bessel and Hankel functions. These make use of the Matlab functions for cylindrical Bessel functions.
(ii) spharm.m for scalar spherical harmonics and their angular partial derivatives.
(iii) vsh.m for vector spherical harmonics.
(iv) vswf.m for vector spherical wavefunctions.

Secondly, routines must be provided to find the expansion coefficients, or beam shape coefficients, a_{nm} and b_{nm} for the trapping beam. These are:
(i) bsc_pointmatch_farfield.m and bsc_pointmatch_focalplane.m, described in (Nieminen et al. 2003b), which can calculate the expansion coefficients for Gaussian beams, Laguerre–Gauss modes, and bi-Gaussian beams. Since these routines are much faster for rotationally symmetric beams, such as Laguerre–Gauss beams, a routine, lgmodes.m, that can provide the Laguerre–Gauss decomposition of an arbitrary paraxial beam is also provided.
(ii) bsc_plane.m, for the expansion coefficients of a plane wave. This is not especially useful for optical trapping, but makes the toolbox more versatile, improving its usability for more general light scattering calculations.

Thirdly, the transformation matrices for the expansion coefficients under rotations and translations must be calculated. Routines include:
(i) wigner_rotation_matrix.m, implementing the algorithm given by Choi et al. (1999).
(ii) translate_z.m, implementing the algorithm given by Videen (2000).

4. T-matrix
For spherical particles, the usual Mie coefficients can be rapidly calculated. For non-spherical particles, a more intensive numerical effort is required. We use a least-squares overdetermined point-matching method (Nieminen et al. 2003a). For axisymmetric particles, the method is relatively fast. However, as is common for many methods of calculating the T-matrix, particles cannot have extreme aspect ratios, and must be simple in shape. Typical particle shapes that we have used are spheroids and cylinders, and aspect ratios of up to 4 give good results. Although general non-axisymmetric particles can take a long time to calculate the T-matrix for, it is possible to make use of symmetries such as mirror symmetry and discrete rotational symmetry to greatly speed up the calculation (Kahnert 2005, Nieminen et al. 2006). We include a symmetry-optimised T-matrix routine for cubes. Expanding the range of particles for which we can calculate the T-matrix is one of our current active research efforts, and we plan to include routines for anisotropic and inhomogeneous particles, and particles with highly complex geometries.

Once the T-matrix is calculated, the scattered field coefficients are simply found by a matrix–vector multiplication of the T-matrix and a vector of the incident field coefficients.

4.1. Implementation
Our T-matrix routines include:
(i) tmatrix_mie.m, calculating the Mie coefficients for homogeneous isotropic spheres.
(ii) tmatrix_pm.m, our general point-matching T-matrix routine.
(iii) tmatrix_pm_cube.m, the symmetry-optimised cube code.

5. Optical force and torque
As noted earlier, the integrals of the momentum and angular momentum fluxes reduce to sums of products of the expansion coefficients. It is sufficient to give the formulae for the z-components of the fields, as given, for example, by Crichton & Marston (2000). We use the same formulae to calculate the x and y components of the optical force and torque, using 90° rotations of the coordinate system (Choi et al. 1999). It is also possible to directly calculate the x and y components using similar, but more complicated, formulae (Farsund & Felderhof 1996). The axial trapping efficiency Q is

Q = \frac{2}{P} \sum_{n=1}^{\infty} \sum_{m=-n}^{n} \left[ \frac{m}{n(n+1)} \mathrm{Re}\left(a^{*}_{nm} b_{nm} - p^{*}_{nm} q_{nm}\right) - \frac{1}{n+1} \left( \frac{n(n+2)(n-m+1)(n+m+1)}{(2n+1)(2n+3)} \right)^{1/2} \mathrm{Re}\left(a_{nm} a^{*}_{n+1,m} + b_{nm} b^{*}_{n+1,m} - p_{nm} p^{*}_{n+1,m} - q_{nm} q^{*}_{n+1,m}\right) \right]   (10)

in units of n\hbar k per photon, where n is the refractive index of the medium in which the trapped particles are suspended. This can be converted to SI units by multiplying by nP/c, where P is the beam power and c is the speed of light in free space. The torque efficiency, or normalized torque, about the z-axis acting on a scatterer is

\tau_z = \sum_{n=1}^{\infty} \sum_{m=-n}^{n} m \left(|a_{nm}|^2 + |b_{nm}|^2 - |p_{nm}|^2 - |q_{nm}|^2\right)/P   (11)

in units of \hbar per photon, where

P = \sum_{n=1}^{\infty} \sum_{m=-n}^{n} |a_{nm}|^2 + |b_{nm}|^2   (12)

is proportional to the incident power (omitting a unit conversion factor which will depend on whether SI, Gaussian, or other units are used). This torque includes contributions from both spin and orbital components, which can both be calculated by similar formulae (Crichton & Marston 2000). Again, one can convert these values to SI units by multiplying by P/ω, where ω is the optical frequency.

5.1. Implementation
One routine, forcetorque.m, is provided for the calculation of the force, torque and spin transfer. The orbital angular momentum transfer is the difference between the torque and the spin transfer. The incoming and outgoing power (the difference being the absorbed power) can be readily calculated directly from the expansion coefficients, as can be seen from (12).
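For orientation only, the following Python sketch spells out equations (11) and (12): the z-axis torque efficiency and the power normalization as sums over the expansion coefficients. It is not the toolbox's forcetorque.m, and the coefficient storage (dictionaries keyed by (n, m)) and the toy values are assumptions made for the sketch.

```python
# Illustrative numpy-free sketch of equations (11) and (12) above -- not the
# toolbox's forcetorque.m. Coefficients are assumed to be stored in dicts
# keyed by (n, m); the real toolbox packs them differently.

def torque_efficiency_z(a, b, p, q):
    """tau_z = sum_{n,m} m (|a_nm|^2 + |b_nm|^2 - |p_nm|^2 - |q_nm|^2) / P."""
    P = sum(abs(a[k])**2 + abs(b[k])**2 for k in a)            # equation (12)
    tau = sum(m * (abs(a[(n, m)])**2 + abs(b[(n, m)])**2
                   - abs(p[(n, m)])**2 - abs(q[(n, m)])**2)
              for (n, m) in a)                                  # equation (11)
    return tau / P

if __name__ == "__main__":
    # Toy two-mode example with made-up coefficients (purely illustrative).
    a = {(1, 1): 1.0 + 0.2j, (1, -1): 0.3 + 0.0j}   # incoming TE coefficients
    b = {(1, 1): 0.5 - 0.1j, (1, -1): 0.1 + 0.1j}   # incoming TM coefficients
    p = {(1, 1): 0.8 + 0.1j, (1, -1): 0.25 + 0.05j} # outgoing TE coefficients
    q = {(1, 1): 0.4 - 0.2j, (1, -1): 0.05 + 0.0j}  # outgoing TM coefficients
    print("tau_z per photon (hbar units):", torque_efficiency_z(a, b, p, q))
```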
The torque efficiency, or normalized torque, about the z-axis acting on a scatterer is

$$\tau_z = \sum_{n=1}^{\infty}\sum_{m=-n}^{n} m\left(|a_{nm}|^2+|b_{nm}|^2-|p_{nm}|^2-|q_{nm}|^2\right)/P \qquad (11)$$

in units of $\hbar$ per photon, where

$$P = \sum_{n=1}^{\infty}\sum_{m=-n}^{n} \left(|a_{nm}|^2+|b_{nm}|^2\right) \qquad (12)$$

is proportional to the incident power (omitting a unit conversion factor which will depend on whether SI, Gaussian, or other units are used). This torque includes contributions from both spin and orbital components, which can both be calculated by similar formulae (Crichton & Marston 2000). Again, one can convert these values to SI units by multiplying by P/ω, where ω is the optical frequency.

5.1. Implementation

One routine, forcetorque.m, is provided for the calculation of the force, torque and spin transfer. The orbital angular momentum transfer is the difference between the torque and the spin transfer. The incoming and outgoing power (the difference being the absorbed power) can be readily calculated directly from the expansion coefficients, as can be seen from (12).

6. Miscellaneous routines

A number of other routines that do not fall into the above categories are included. These include:

(i) Examples of use.
(ii) Routines for conversion of coordinates and vectors from Cartesian to spherical and spherical to Cartesian.
(iii) Routines to automate common tasks, such as finding the equilibrium position of a trapped particle, spring constants, and force maps (see the illustrative sketch below).
(iv) Functions required by other routines.

7. Typical use of the toolbox

Typically, for a given trap and particle, a T-matrix routine (usually tmatrix_mie.m) will be run once. Next, the expansion coefficients for the beam are found. Depending on the interests of the user, a function automating some common task, such as finding the equilibrium position within the trap, might be used, or the user might directly use the rotation and translation routines to enable calculation of the force or torque at desired positions within the trap. The speed of calculation depends on the size of the beam, the size of the particle, and the distance of the particle from the focal point of the beam. Even for a wide beam and a large distance, the force and torque at a particular position can typically be calculated in much less than one second.
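The automation tasks mentioned above (equilibrium position, spring constants) reduce to root finding and differentiation of the force–displacement curve. Purely as an illustration, and not as toolbox code, here is a Python sketch; the force function used below is a hypothetical stand-in for a toolbox force calculation at an axial displacement z.

```python
# Illustrative sketch only: locating the trap equilibrium and estimating the
# axial spring constant from a force-displacement curve. `axial_force` is a
# hypothetical stand-in for an actual force calculation.
import numpy as np
from scipy.optimize import brentq

def axial_force(z):
    # Stand-in restoring force curve, in trap-efficiency units.
    return -0.1 * z * np.exp(-z**2)

def find_equilibrium(f, z_min=-2.0, z_max=2.0):
    """Find the axial equilibrium position, where the force changes sign."""
    return brentq(f, z_min, z_max)

def spring_constant(f, z0, dz=1e-4):
    """Estimate the axial spring constant as -dF/dz at the equilibrium."""
    return -(f(z0 + dz) - f(z0 - dz)) / (2 * dz)

z_eq = find_equilibrium(axial_force)
k = spring_constant(axial_force, z_eq)
print(f"equilibrium at z = {z_eq:.3f}, spring constant k = {k:.3f}")
```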
Figure 1. Gaussian trap (example_gaussian.m). Force on a sphere in a Gaussian beam trap. The half-angle of convergence of the 1/e² edge of the beam is 50°, corresponding to a numerical aperture of 1.02. The particle has a relative refractive index equal to n = 1.59 in water, and has a radius of 2.5λ, corresponding to a diameter of 4.0 µm if trapped at 1064 nm in water. (a) shows the axial trapping efficiency as a function of axial displacement and (b) shows the transverse trapping efficiency as a function of transverse displacement from the equilibrium point.

Figure 2. Laguerre–Gauss trap (example_lg.m). Force on a sphere in a Laguerre–Gauss beam trap. The half-angle of convergence of the 1/e² outer edge of the beam is 50°, as in figure 1. The sphere is identical to that in figure 1. (a) shows the axial trapping efficiency as a function of axial displacement and (b) shows the transverse trapping efficiency as a function of transverse displacement from the equilibrium point. Compared with the Gaussian beam trap, the radial force begins to drop off at smaller radial displacements, due to the far side of the ring-shaped beam no longer interacting with the particle.

Figure 3. Trapping landscape (example_landscape.m). The maximum axial restoring force for displacement in the direction of beam propagation is shown, in terms of the trapping efficiency as a function of relative refractive index and microsphere diameter. The trapping beam is at 1064 nm and is focussed by an NA = 1.2 objective. This type of calculation is quite slow, as the trapping force as a function of axial displacement must be found for a grid of combinations of relative refractive index and sphere diameter. At the left-hand side, we can see that the trapping force rapidly becomes very small as the particle size becomes small: the gradient force is proportional to the volume of the particle for small particles. In the upper portion, we can see that whether or not the particle can be trapped strongly depends on the size: for particular sizes, reflection is minimised, and even high index particles can be trapped.

More complex tasks are possible, such as finding the optical force as a function of some property of the particle, which can, for example, be used to determine the refractive index of a microsphere (Knöner et al. 2006). Figures 1 to 4 demonstrate some of the capabilities of the toolbox. Figure 1 shows a simple application: the determination of the force as a function of axial displacement from the equilibrium position in a Gaussian beam trap. Figure 2 shows a similar result, but for a particle trapped in a Laguerre–Gauss LG03 beam. Figure 3 shows a more complex application, with repeated calculations (each similar to the one shown in figure 1(a)) being used to determine the effect of the combination of relative refractive index and particle size on trapping. Finally, figure 4 shows the trapping of a non-spherical particle, a cube. Agreement with precision experimental measurements suggests that errors of less than 1% are expected.

Figure 4. Optical trapping of a cube (example_cube.m). A sequence showing the optical trapping of a cube. The cube has faces of 2λ/n_medium across, has a refractive index of n = 1.59, and is trapped in water. Since the force and torque depend on the orientation as well as position, a simple way to find the equilibrium position and orientation is to "release" the cube and calculate the change in position and orientation for appropriate time steps. The cube can be assumed to always be moving at terminal velocity and terminal angular velocity (Nieminen, Rubinsztein-Dunlop, Heckenberg & Bishop 2001). The cube begins face-up, centred on the focal plane of the beam, and to one side. The cube is pulled into the trap and assumes a corner-up orientation. The symmetry optimisations allow the calculation of the T-matrix in 20 minutes; otherwise, 30 hours would be required. Once the T-matrix is found, successive calculations of the force and torque require far less time, on the order of a second or so.

8. Future development

We are actively engaged in work to extend the range of particles for which we can model trapping. This currently includes birefringent particles and particles of arbitrary geometry. Routines to calculate the T-matrices for such particles will be included in the main toolbox when available. Other areas in which we aim to further improve the toolbox are robust handling of incorrect or suspect input, more automation of tasks, and GUI tools. We also expect feedback from the optical trapping and micromanipulation community to help us add useful routines and features.

References

Abraham M 1909 Rendiconti Circolo Matematico di Palermo 28, 1–28.
Abraham M 1910 Rendiconti Circolo Matematico di Palermo 30, 33–46.
Bayoudh S, Nieminen T A, Heckenberg N R & Rubinsztein-Dunlop H 2003 Journal of Modern Optics 50(10), 1581–1590.
Bishop A I, Nieminen T A, Heckenberg N R & Rubinsztein-Dunlop H 2003 Physical Review A 68, 033802.
Bishop A I, Nieminen T A, Heckenberg N R & Rubinsztein-Dunlop H 2004 Physical Review Letters 92(19), 198104.
Brock B C 2001 Using vector spherical harmonics to compute antenna mutual impedance from measured or computed fields. Sandia report SAND2000-2217-Revised, Sandia National Laboratories, Albuquerque, New Mexico, USA.
Chaumet P C, Rahmani A, Sentenac A & Bryant G W 2005 Physical Review E 72, 046708.
Choi C H, Ivanic J, Gordon M S & Ruedenberg K 1999 Journal of Chemical Physics 111, 8825–8831.
Collett W L, Ventrice C A & Mahajan S M 2003 Applied Physics Letters 82(16), 2730–2732.
Crichton J H & Marston P L 2000 Electronic Journal of Differential Equations Conf. 04, 37–50.
Doicu A & Wriedt T 1997 Applied Optics 36, 2971–2978.
Farsund Ø & Felderhof B U 1996 Physica A 227, 108–130.
Gauthier R C 2005 Optics Express 13(10), 3707–3718.
Gouesbet G & Grehan G 1982 Journal of Optics (Paris) 13(2), 97–103.
Gumerov N A & Duraiswami R 2003 SIAM Journal on Scientific Computing 25(4), 1344–1381.
Han Y, Gréhan G & Gouesbet G 2003 Applied Optics 42(33), 6621–6629.
Han Y & Wu Z 2001 Applied Optics 40, 2501–2509.
Hoekstra A G, Frijlink M, Waters L B F M & Sloot P M A 2001 Journal of the Optical Society of America A 18, 1944–1953.
Humblet J 1943 Physica 10(7), 585–603.
Jackson J D 1999 Classical Electrodynamics, 3rd edn, John Wiley, New York.
Jauch J M & Rohrlich F 1976 The Theory of Photons and Electrons, 2nd edn, Springer, New York.
Kahnert M 2005 Journal of the Optical Society of America A 22(6), 1187–1199.
Khrapko R I 2001 American Journal of Physics 69(4), 405.
Knöner G, Parkin S, Nieminen T A, Heckenberg N R & Rubinsztein-Dunlop H 2006 Measurement of refractive index of single microparticles.
Lock J A 2004a Applied Optics 43(12), 2532–2544.
Lock J A 2004b Applied Optics 43(12), 2545–2554.
Lorenz L 1890 Videnskabernes Selskabs Skrifter 6, 2–62.
Maia Neto P A & Nussenzveig H M 2000 Europhysics Letters 50, 702–708.
Mazolli A, Maia Neto P A & Nussenzveig H M 2003 Proc. R. Soc. Lond. A 459, 3021–3041.
arXiv:hep-lat/9709066v1 18 Sep 1997
INLO-PUB-7/97

The QCD vacuum∗

Pierre van Baal
Instituut-Lorentz for Theoretical Physics, University of Leiden, PO Box 9506, NL-2300 RA Leiden, The Netherlands

We review issues involved in understanding the vacuum, long-distance and low-energy structure of non-Abelian gauge theories and QCD. The emphasis will be on the role played by instantons.

1. INTRODUCTION

The term "QCD vacuum" is frequently abused. Only in the case of the Hamiltonian formulation is it clear what we mean by the vacuum: it is the wave functional associated with the lowest energy state. Observables create excitations on top of this vacuum. Knowing the vacuum is knowing all: we should know better.

Strictly speaking the vacuum is empty. Nevertheless its wave functional can be highly non-trivial, deviating considerably from that of a non-interacting Fock space, based on a quadratic theory. Even in the latter case the result of probing the vacuum by boundaries is non-trivial, as we know from Casimir. The probe is essential: one needs to disturb the vacuum to study its properties. Somewhat perversely the vacuum may be seen as a relativistic aether. It promises to magically resolve our problems, from confinement to the cosmological constant. For the latter, supersymmetry is often called for to remove the otherwise required fine-tuning. It merely hides the relativistic aether, even giving it further structure. Remarkably it seems to have enough structure to give a non-trivial example of the dual superconductor at work [1].

Most will indeed put their bet on the dual superconductor picture for the QCD vacuum [2], and this has motivated the hunt for magnetic monopoles using lattice techniques, long before supersymmetric duality stole the show [1]. The definitions rely on choosing an abelian projection [3] and the evidence is based on the notion of abelian dominance [4], establishing the dual Meissner effect [5], or the construction of

2. VACUUM DEMOCRACY

The model we wish to describe here starts from the physics in a small volume, where asymptotic freedom guarantees that perturbative results are valid. The assumption is made that, at least for low-energy observables, integrating out the high-energy degrees of freedom is well-defined perturbatively and all the non-perturbative dynamics is due to a few low-lying modes. This is most easily defined in a Hamiltonian setting, since we are interested in situations where the non-perturbative effects are no longer described by semiclassical methods.

2.1. Complete gauge fixing

Due to the action of the gauge group on the vector fields, a finite dimensional slice through the physical configuration space (gauge inequivalent fields) is bounded. One way to demonstrate this is by using the complete Coulomb gauge fixing, achieved by minimising the L2 norm of the gauge field along the gauge orbit. At small energies, fields are sufficiently smooth for this to be well defined and it can be shown that the space under consideration has a boundary, defined by points where the norm is degenerate. These are by definition gauge equivalent, such that the wave functionals are equal, possibly up to a phase factor in case the gauge transformation is homotopically non-trivial. The space thus obtained is called a fundamental domain. For a review see ref. [10].

2.2. Non-perturbative dynamics

Given a particular compact three dimensional manifold M on which the gauge theory is defined, scaling with a factor L allows one to go to larger volumes. It is most convenient to formulate the Hamiltonian in scale
invariantfieldsˆA=LA.Di-viding energies by L recovers the L dependence in the classical case,but the need of an ultraviolet cutoffand the resulting scale anomaly introduces a running coupling constant g(L),which in the low-energy effective theory is the only remnant of the breaking of scale invariance.When the volume is very small,the effective coupling is very small and the wave functional is highly localised,staying away from the bound-aries of the fundamental domain.We may com-pare with quantum mechanics on the circle,seen as an interval with identifications at its bound-ary.At which points we choose these boundaries is just a matter of(technical)convenience.The fact that the circle has non-trivial homotopy,al-lows one to introduce aθparameter(playing the role of a Bloch momentum).Expressed inˆA,the shape of the fundamental domain and the nature of the boundary condi-tions,is independent of L.Due to the rise of the running coupling constant with increasing L the wave functional spreads out over the fundamental domain and will start to feel the boundary iden-tifications.This is the origin of non-perturbative dynamics in the low-energy sector of the theory. Quite remarkably,in all known examples(for the torus and sphere geometries),the sphalerons lie exactly at the boundary of the fundamental domain,with the sphaleron mapped into the anti-sphaleron by a homotopically non-trivial gauge transformation.The sphaleron is the saddle point at the top of the barrier reached along the tun-nelling path associated with the largest instanton, its size limited by thefinite volume.For increasing volumes the wave functionalfirst starts to feel the boundary identifications at these sphalerons,“biting its own tail”.When the en-ergy of the state under consideration becomes of the order of the energy of this sphaleron,one can no longer use the semiclassical approximation to describe the transition over the barrier and it is only at this moment that the shift in energy be-comes appreciable and causes sizeable deviations from the perturbative result.This is in particular true for the groundstate energy.Excited states feel these boundary identifications at somewhat smaller volumes,but nodes in their wave func-tional near the sphaleron can reduce or postpone the influence of boundary identifications.This has been observed clearly for SU(2)on a sphere[11].The scalar and tensor glueball mass is reduced considerably due to the boundary identi-fications,whereas the oddball remains unaffected (seefig.1).These non-perturbative effects re-move an unphysical near-degeneracy in perturba-tion theory(with the pseudoscalar even slightly lower than the scalar glueball mass).The dom-inating configurations involved are associated to3instantonfields,in a situation where semiclassical techniques are inappropriate for computing the magnitude of the effect.When boundary identi-fications matter,the path integral receives large contributions from configurations that have non-zero topological charge,and in whose background the fermions have a chiral zero mode,its conse-quences to be discussed later.fFigure1.The low-lying glueball spectrum on a sphere of radius R as a function of f=g2(R)/2π2 atθ=0.Approximately,f=0.28corresponds to a circumference of1.3fm.From ref.[11].At some point technical control is lost,since so far only the appropriate boundary conditions near the sphalerons can be implemented.As soon as the wave functional starts to become apprecia-ble near the rest of the boundary too,this is no longer sufficient.This method 
has in particular been very suc-cessful to determine the low-lying spectrum on the torus in intermediate volumes,where for SU(2)agreement with the lattice Monte Carlo re-sults has been achieved within the2%statistical errors[10,12].In this case the non-perturbative sector of the theory was dominated by the en-ergy of electricflux(torelon mass),which van-ishes to all orders in perturbation theory.The leading semiclassical result is exp(−S0/g(L)),due to tunnelling through a quantum induced barrier of height E s=3.21/L and action S0=12.5.Al-ready beyond0.1fm this approximation breaks down.Onefinds,accidentally in these small vol-umes,the energy to be nearly linear in L.The effective Hamiltonian in the zero-momen-tum gaugefields,derived by L¨u scher[13],and later augmented by boundary identifications to include the non-perturbative effects[12],breaks down at the point where boundary identifica-tions in the non-zero momentum directions as-sociated with instantons become relevant.The sphaleron has an energy72.605/(Lg2(L))and was constructed numerically[14].Its effect be-comes noticeable beyond volumes of approxi-mately(0.75fm)3.For SU(3)this was verified directly in a lattice Monte Carlo calculation of thefinite volume topological susceptibility[15]. The results for the sphere have shown that also these effects can in principle be included reliably, but the lack of an analytic instanton solution on T3×I R has prevented us from doing so in practise.2.3.Domain formationThe shape of the fundamental domain depends on the geometry.Assuming that g(L)keeps on growing with increasing L,causing the wave func-tional to feel more and more of the boundary,one would naturally predict that the infinite volume limit depends on the geometry.This is clearly unacceptable,but can be avoided if the ground state obtained by adiabatically increasing L is not stable.Thus we conjecture that the vacuum is unstable against domain formation.This is the minimal scenario to make sure that at large vol-umes,the spectrum is independent of its geome-try.Domains would naturally explain why a non-perturbative physical length scale is generated in QCD,beyond which the coupling constant will stop running.However,we have no guess for the order parameter,let alone an effective theory de-scribing excitations at distances beyond these do-mains.Postulating their existence,nevertheless a number of interesting conclusions can be drawn. The best geometry to study domain formation is that of a box since it is space-filling.We can exactlyfill a larger box by smaller ones.This is not true for most other geometries.In small to intermediate volumes the vacuum energy density is a decreasing function[12]of L,but in analogy to the double well problem one may expect that at stronger coupling the vacuum energy density rises again with a minimum at some value L0, assumed to be0.75fm.For L sufficiently larger than L0it thus becomes energetically favourable to split the volume in domains of size L30.Since the ratio of the string tension to the4scalar glueball mass squared shows no structure around(0.75fm)3,we may assume that both have reached their large volume value within a domain. 
The nature of theirfinite size corrections is suffi-ciently different to expect these not to cancel ac-cidentally.The colour electric string arises from the fact thatflux that enters the box has to leave it in the opposite direction.Flux conservation with these building blocks automatically leads to a string picture,with a string tension as com-puted within a single domain and a transverse size of the string equal to the average size of a domain,0.75fm.The tensor glueball in an in-termediate volume is heavily split between the doublet(E+)and triplet(T+2)representations of the cubic group,with resp.0.9and1.7times the scalar glueball mass.This implies that the tensor glueball is at least as large as the average size of a domain.Rotational invariance in a domain-like vacuum comes about by averaging over all orien-tations of the domains.This is expected to lead to a mass which is the multiplicity weighted average of the doublet and triplet,yielding a mass of1.4 times the scalar glueball mass.Domain formation in this picture is driven by the largefield dynam-ics associated with sphalerons.Which state gets affected most depends in an intricate way on the behaviour of the wave functionals(cmp.fig.1). In the four dimensional euclidean context,O(4) invariance makes us assume that domain forma-tion extends in all four directions.As is implied by averaging over orientations,domains will not neatly stack.There will be dislocations which most naturally are gauge dislocations.A point-like gauge dislocation in four dimensions is an in-stanton,lines give rise to monopoles and surfaces to vortices.In the latter two cases most natu-rally of the Z N type.We estimate the density of these objects to be one per average domain size. We thus predict an instanton density of3.2fm−4, with an average size of1/3fm.For monopoles we predict a density of2.4fm−3.If an effective colour scalarfield will play the role of a Higgsfield,abelian projected monopoles will appear.It can be shown[16]that a monopole (or rather dyon)loop,with its U(1)phase rotat-ing Q times along the loop(generating an elec-tricfield),gives rise to a topological charge Q.In abelian projection it has been found that an instanton always contains a dyon loop[17].We thus argue this result to be more general,leading to further ties between monopoles and instantons.2.4.Regularisation andθIt is useful to point out that the non-trivial ho-motopy of the physical configuration space,like non-contractable loops associated to the instan-tons(π1(A/G)=π3(G)=Z Z),is typically de-stroyed by the regularisation of the theory.This is best illustrated by the example of quantum me-chanics on the circle.Suppose we replace it by an annulus.As long as the annulus does notfill the hole,or we force the wave function to vanish in the middle,theta is a well-defined parameter associated to a multivalued wave function.We may imagine the behaviour for small instantons in gauge theories to be similar to that at the cen-ter in the above model.Indeed,the gauge in-variant geodesic length of the tunnelling path for instantons on M×I R,given byℓ= ∞−∞dt4) 2)), with the instanton size defined byρ≡(1+b2)−15ter by mixing the electricfield withθtimes themagneticfield,E→E−θB/(2π)2.In these ap-proaches theta is simply a parameter added to the theory.Whether or not one will retrieve the ex-pected periodic behaviour in the continuum limit becomes a dynamical question.It should be pointed out that in particular forSU(N)gauge theories in a box(in sectors with non-trivial magneticflux)there is room 
to arguefor a2πN,as opposed to a2π,periodicity fortheθdependence.However,the spectrum is pe-riodic with a period2π,and the apparent dis-crepancy is resolved by observing that there is a non-trivial spectralflow[19].This may lead tophase transitions at some value(s)ofθ,relatedto the oblique confinement mechanism[3].Sim-ilarly for supersymmetric gauge theories this in-terpretation,supported by the recent discoveryof domain walls between different vacua[20],re-moves the need for semiclassical objects with acharge1/N.Such solutions do exist for the torus, but the fractional charge is related to magneticflux and the interpretation is necessarily as statedabove!The“wrong”periodicity in theta has long been used to argue against the relevance of in-stantons,but in the more recent literature this isnow phrased more cautiously[21,22].3.INSTANTONSInstantons are euclidean solutions responsiblefor the axial anomaly,breaking the U A(1)sub-group of the U(N f)×U(N f)chiral symmetry for N fflavours of massless fermions[23],as dictatedby the anomaly, f∂µ¯Ψfγµγ5Ψf(x)=2N f q(x). The breaking of U A(1)manifests itself in thesemiclassical computations through the presenceof fermion zero modes,with their number and chiralityfixed by the topological charge,through the Atiyah-Singer index theorem[24].Integra-tion over the fermion zero modes leads to the so-called’t Hooft vertex or effective interaction[23]. The integration over the scale parameter of the instanton ensemble is infrared dominated and a non-perturbative computation is desirable.In addition it is believed that the instantons are responsible for chiral symmetry breaking,where a chiral condensate is formed,which breaks the axial gauge group U A(N f)completely.This spon-taneous breaking is dynamical and it is less well established that instantons are fully responsible.It is the basis of the instanton liquid model as developed by Shuryak over the years.For a com-prehensive recent review see ref.[9].The details of the instanton ensemble play an important role. Only a liquid-like phase,as opposed to the di-lute or crystalline phases,will give rise to a chiral condensate.The model also makes a prediction for the average size and the topological suscep-tibility.In particular the latter quantity should be well-defined beyond a semiclassical approxima-tion.For large sizes the instanton distribution is exponentially cut-offand instantons do not give rise to an area law for the Wilson loop.Whenlarge instantons are more weakly suppressed the situation may differ[25],but a semiclassical anal-ysis in this case should not be trusted. Remarkably the topological susceptibility in pure gauge theories can be related to theη′mass through the so-called Witten-Veneziano relation,f2π(m2η+m2η′−2m2K)/2N f= d4x<T(q(x)q(0))>R ≡χt,leading to the predictionχt∼(180MeV)4. 
This is based on the fact that the U A(1)symme-try is restored in the planar limit[26,27],withχt of order1/N2.From the requirement that in the presence of massless quarksχt(and the thetadependence)disappears,the pure gauge suscep-tibility can be related to the quark-loop contri-butions in the pseudoscalar channel.Pole dom-inance requires the lightest pseudoscalar meson to have a mass squared of order1/N.Relating the residue to the pion decay constant gives the desired result[27].The index R indicates the ne-cessity of equal-time regularisation[26].A deriva-tion on the lattice using Wilson and staggered fermions was obtained in ref.[28],making use of Ward-Takahashi identities.Finally,also the coarse grained partition function of the instan-ton liquid model[9]allows one to directly deter-mine the’t Hooft effective Lagrangian[23],from which the Witten-Veneziano formula can be read off[29].This formula is almost treated as the holy grail of instanton physics.It is important to realise that some approximations are involved, although it is gratifying there are three indepen-dent ways to obtain it[26,27,28,29].63.1.Field theoretic methodA direct computation ofχt= d4x<q(x)q(0)> on the lattice requires a choice of discretisation for the charge density.A particularly simple one is[30]q L(x)=− T r(Uµν(x)˜Uµν(x))/16π2, where Uµν(x)=17 cutoffbreaks the scale and rotational invariance,we would expect that the action is no longer con-stant on the continuum moduli space.Indeed,fora smooth instanton the Wilson action behaves asS W(ˆρ≡ρ/a)=8π2(1−ˆρ−2/5+O(ˆρ−4))and causesthe instanton to shrink,until it becomes of thesize of the cutoffand falls through the lattice.Cooling willfirst remove high-frequency modesand one is left with a slow motion along the mod-uli space,giving rise to a plateau in the coolinghistory,used to identify the topological charge.One will miss instantons smaller than somefixedvalueˆρc.Assuming asymptotic freedom,one eas-ily shows that the error vanishes in the contin-uum limit.Note that by construction,the coolingmethod never will associate charge to a disloca-tion with an action smaller than96π2/11N,theentropic bound,which would spoil scaling[36].For extracting the size distribution,cooling andunder-relaxed(or slow)cooling[37]is problematicas the size clearly will depend on where alongthe plateau one analyses the data[38].The sizedistribution can be made to scale properly onlyat the expense of carefully adjusts the number ofcooling steps[39]when going to differentβ.One can avoid loosing instantons under coolingby modifying the action such that the scaling vio-lations change sign[35],for example by adding a2×2plaquette to the Wilson action.This so-calledover-improved action has the property that in-stantons grow under cooling,until stopped by thefinite volume.Consequently it would still muti-late the size distribution.This can be avoided byimproving the action so as to minimise the scalingviolations[40].A particularly efficient choice isthe so-calledfive-loop improved(5Li)lattice ac-tion:S5Li= m,n c m,n x,µνTr(1−P mµ,nν(x)),where P mµ,nν(x)is the m×n plaquette andc1,1=65720,c1,2=−890andc3,3=18bining results forβ=2.4(averaged over the differ-ent lattice types and boundary conditions after20 cooling sweeps)and forβ=2.6(after50cooling sweeps).The solid curve is afit to the formula P(ρ)∝ρ7/3exp(−(ρ/w)p),with w=0.47(9)fm and p=3(1),which atsmall sizes coincides withthe semiclassical result[23].The peak of this dis-tribution occurs atρ=0.43(5)fm.Under pro-longed cooling,up to300sweeps,I-A 
instantonannihilations and in particularfinite size effectsin the charge one sector do affect the distribu-tion somewhat,but not the average size,whichtherefore seems to be quite a robust result.Figure3.SU(2)instanton size distribution forβ=2.4(squares)and2.6(crosses)in a volume1.44fm across at lattice spacings a=0.12and0.06fm.The dotted and dashed lines representthe cutoffat aˆρc for both lattices.From ref.[40].It would be advantageous if one could come upwith a definition for the size that is related to aphysical quantity,since now the notion is basedon the semiclassical picture.This is neverthe-less appropriate for the comparison with the in-stanton liquid.The relatively large value of theaverage size as compared to that of1/3fm pre-dicted by the instanton liquid[9]is a point ofworry,typically leading to stronger interactionsthat may lead to a crystal(without chiral sym-metry breaking),rather than a liquid.Neverthe-less,in ref.[40]it has been tested that the pseu-doparticles are homogeneously distributed with adensity of2−3fm−4and occupying nearly halfthe volume.This is the case when only close I-Apairs have annihilated and therefore depends onthe amount of improved cooling.It does,how-ever,show that the pseudoparticles are relativelydense(more so than assumed in the instanton liq-uid[9]).The value of3.2fm−4for the density,de-rived earlier in the context of the domain pictureis quite realistic in the light of these results.3.3.SmoothingAnother method to study instantons on thelattice is based on the classicalfixed point ac-tions[41],defined through the saddle point equa-tion S F P(V)=min{U}(S F P(U)+κT(U,V)).It isobtained as the weak coupling limit of the block-ing transformation with a positive definite ker-nelκT(U,V),which maps a lattice{U}to{V},coarser by a factor two.Reconstructing thefinefrom the coarse lattice is called inverse blocking.It can be shown to map a solution of the latticeequations of motion to a solution on thefiner lat-tice with the same action.Iterating the inverseblocking,the lattice can be made arbitrarilyfine,thereby proving the absence of scaling violationsto any power in the lattice spacing[41].Thisclassically perfect action still looses instantons be-low a critical size,which is typically smaller thana lattice spacing.For solutions this most likelyhappens at the point where the continuum inter-polation of the latticefield is ambiguous,causingthe integer geometric charge[42]to jump.Forrough configurations that are not solutions,in-verse blocking typically reduces the action by afactor32and makes it more smooth.Thefixedpoint topological charge is defined as the limit-ing charge after repeated inverse blockings.Thisguarantees no charge will be associated to disloca-tions(of any action below the instanton action).The classicalfixed point action,although op-timised to be short range,still has an infinitenumber of terms andfinding a suitable trunca-tion is a practical problem.From examples ofparametrisations,the success of reducing scalingviolations in quantities like the heavy quark po-tential(tested by restoring rotational invariance)is evident,for recent reviews see ref.[45].In prac-tise only a limited number of inverse blockings isfeasible and thefixed point topological charge hasto rely on a rapid convergence.The closer one isable to construct thefixed point action the bet-9ter this convergence is expected to be.For two di-mensional non-linear sigma models sufficient con-trol was achieved to demonstrate that more than one inverse blockingdid not appreciably 
change the topological charge [43].In four dimensional gauge theories,both find-ing a manageable parametrisation and doing re-peated inverse blockings is a major effort.It goes without saying that if no good approximation to the fixed point action is used,one cannot rely on its powerful theoretical properties.Studies of in-stantons for SU (2)gauge theories were performed in ref.[44].A 48term approximation to the fixed point action was used to verify the theoretical properties.The geometric charge was measured after one inverse blocking and it was shown that for Q =1the instanton action was to within a few percent from the continuum action (slightly above it due to finite size effects),whereas for Q =0the action was always lower.Subsequently a simplified eight parameter form was used on which the instanton action was somewhat poorly reproduced,but such that the Q =1boundary stayed above the entropic bound for the action.A value of χt =(235(10)MeV)4was quoted on a 84lattice with physical volumes of up to 1.6fm,taking full advantage of the fact that fixed point actions can be simulated at rather large lat-tice spacings.However,<Q 2>measured on the coarse lattice was up to a factor 4larger than on the fine lattice (for the two dimensional study much closer agreement was seen [43]).Further inverse blocking to check stability of the charge measurement was not performed.The same eight parameter action was used in ref.[46],but they did their simulations on the fine lattice and performed an operation called smoothing:first blocking and then inverse block-ing.They changed the proportionality factor κin the blocking kernel,requiring that the saddle point condition is satisfied for the blocked lat-tice.Due to the change of κthe properties of the fixed point action that inspired these authors can unfortunately no longer be called upon as a justification.This smoothing satisfies the prop-erties of cooling (the action always decreases and stays fixed for a solution)and should probably by judged as such.(See for further comments be-low.)They consider their study exploratory and concentrate on finite temperature near and be-yond the deconfinement transition.In ref.[47]the number of terms to parametrise the fixed point action was extended to four powers of resp.the plaquette,a six-link and an eight-link Wilson loop.The latter was required to improve on the properties for the classical solutions.They achieved ˆρc =0.94,still considerably smaller than for S 5Li ,and reproduced the continuum instan-ton action to a few percent for ρ>ρc .To increase the quality of the fit a constant was added,which should vanish in the continuum limit (as it drops out of the saddle point equation).Possible rami-fications of this at finite coupling are not yet suf-ficiently understood.After one inverse blocking insufficient smooth-ing is achieved to extract the pseudoparticle po-sitions and sizes and further inverse blocking was considered computationally too expensive.Like in ref.[46],they also introduced a smoothing cy-cle,but now by blocking the fine lattice back to the coarse one.Such a cycle would not change the action when the blocking is indeed to the same coarse lattice.However,there are 24different coarse sublattices associated to a fine one and in ref.[47]the smoothing cycle involved blocking to the coarse lattice shifted along the diagonal over one lattice spacing on the fine lattice.Unlike in ref.[46],the smoothing cycle will be repeated.Figure 4.Example of the smoothing after 1and 9cycles,shown on the fine 
lattice. From ref. [47].

Although the fixed point nature of the action guarantees it is close to a perfect classical action, it needs to be demonstrated that it preserves the topological charge at sufficiently large scales. For cooling this is argued from the local nature of the updates, not affecting the long distance behaviour. Improved cooling is in this sense less
ISPRS Journal of Photogrammetry and Remote Sensing 65 (2010) 558–569

Close range photogrammetry for industrial applications
Thomas Luhmann
Institute for Applied Photogrammetry and Geoinformatics, Jade University of Applied Sciences Oldenburg, D-26121 Oldenburg, Germany

Article history: Received 27 January 2010; received in revised form 16 June 2010; accepted 17 June 2010; available online 15 July 2010.
Keywords: Close range; Metrology; Sensors; Accuracy

Abstract: This article summarizes recent developments and applications of digital photogrammetry in industrial measurement. Industrial photogrammetry covers a wide field of different practical challenges in terms of specified accuracy, measurement speed, automation, process integration, cost-performance ratio, sensor integration and analysis. On-line and off-line systems are available, offering general purpose systems on the one hand and specific turnkey systems for individual measurement tasks on the other. Verification of accuracy and traceability to standard units with respect to national and international standards is inevitable in industrial practice. System solutions can be divided into the measurement of discrete points, deformations and motions, 6DOF parameters, 3D contours and 3D surfaces. Recent and future developments concentrate on higher dynamic applications, integration of systems into production chains, multi-sensor solutions and still higher accuracy and lower costs.

© 2010 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS). Published by Elsevier B.V. All rights reserved.

1. Introduction

Close range photogrammetry in industry became technically and economically successful in the mid 1980s, with a first breakthrough in automated and highly accurate 3D measurements (Fraser and Brown, 1986). Based on analogue large format reseau cameras, convergent multi-image configurations, digital comparators and digital image processing of the scanned imagery, close range photogrammetry offered the potential of measurement precision to 1:500,000 with respect to the largest object dimension. Especially for large volume objects (e.g. >10 m diameter) with a high number of object points, photogrammetry could exceed the performance of theodolite systems and thus became a standard method for complex 3D measurement tasks.

The availability of video and digital cameras in combination with direct access to the digital image data generated new concepts for close-range applications. So-called off-line photogrammetry systems utilizing high-resolution digital SLR cameras with (usually) wide angle lenses, retro-reflective object targets and sub-pixel image point measurement operators afforded object measurement within minutes by robust bundle adjustment, including self-calibration (e.g. Beyer, 1992; Brown and Dold, 1995).
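The sub-pixel image point measurement mentioned above reduces, in its simplest form, to an intensity-weighted centroid of the bright target blob. The following Python sketch is illustrative only and is not taken from any of the commercial systems named in this article; the threshold value and the synthetic test spot are assumptions made here for the example.

```python
# Illustrative sketch: sub-pixel centre of a bright (e.g. retro-reflective)
# target by intensity-weighted centroid over a thresholded image window.
import numpy as np

def target_centroid(window, threshold):
    """Return the (x, y) sub-pixel centre of gravity of pixels above threshold."""
    w = np.where(window > threshold, window - threshold, 0.0)
    total = w.sum()
    if total == 0:
        raise ValueError("no pixels above threshold")
    ys, xs = np.mgrid[0:window.shape[0], 0:window.shape[1]]
    return (xs * w).sum() / total, (ys * w).sum() / total

# Example: a synthetic 9x9 Gaussian spot centred near (x, y) = (4.3, 3.7)
ys, xs = np.mgrid[0:9, 0:9]
spot = 255.0 * np.exp(-((xs - 4.3)**2 + (ys - 3.7)**2) / 2.0)
print(target_centroid(spot, threshold=10.0))
```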
Recent systems,for example from the companies GSI,AICON and GOM,are practically fully automated,and can therefore be oper-ated by non-specialist personnel.Data acquisition and processing can be performed at different locations,at different times and by different people(Fig.1).E-mail addresses:luhmann@jade-hs.de,luhmann@fh-oow.de.URL:http://www.jade-hs.de/iapg/.Digital off-line systems can be regarded as fully accepted3D measurement tools that are applied in a large variety of industrial application areas,including•automotive manufacturing,for car body deformation measure-ment,control of supplier parts,adjustment of tooling and rigs, establishment of control point networks,crash testing,etc.;•the aerospace industry,for measurement and adjustment of mounting rigs,antenna measurement,part-to-part alignment, etc.;•wind energy systems for deformation measurements and production control;and•engineering and construction,for measurement of water dams, tanks,plant facilities,etc.Off-line photogrammetry systems offer the highest precision and accuracy levels.The precision of image point measurement can be as high as1/50of a pixel,yielding typical measurement precision (RMS1-sigma)on the object in the range of1:100,000to1:200,000, the former corresponding to0.1mm for an object of10m size (Fraser et al.,2005;Parian et al.,2006).The absolute accuracy of length measurements is generally2–3times less(e.g.about 0.05mm for a2m object)than the precision of object point coordinates,which expresses the relative accuracy of3D shape reconstruction(Rieke-Zapp et al.,2009).On-line photogrammetry systems provide measurements in a closed data chain,hence in real-time and with a direct link to external processes(Fig.1).Typically,an on-line system consists of two or more calibrated and oriented cameras that observe a specific volume.Appropriate object targeting affords fully automated feature extraction in image space.3D information about points,contours or surfaces is directly generated in order to control0924-2716/$–see front matter©2010International Society for Photogrammetry and Remote Sensing,Inc.(ISPRS).Published by Elsevier B.V.All rights reserved. 
doi:10.1016/j.isprsjprs.2010.06.003T.Luhmann /ISPRS Journal of Photogrammetry and Remote Sensing 65(2010)558–569559Fig.1.Operational stages for off-line and on-line systems.a connected process,such as on a production line or for the positioning of an object with respect to an external reference frame.Example applications of on-line systems include•tactile probing,where a hand-held probing device withcalibrated local reference points is tracked in 3D space in order to provide the coordinates of the probing tip;•robot calibration,where a local reference body representing the robot tool centre point is observed by one or more cameras in order to determine the robot trajectory in space;•tube measurement,where a multiple camera set-up is used to measure points and lines of arbitrarily shaped pipes or tubes in order to control a tube bending machine;and•sensor navigation,where a 2D or 3D measurement device (e.g.a laser profile sensor)is tracked in 6DOF by a stereo camera system.The accuracy of on-line systems is usually less than that of off-line systems due to the limited number of images,restrictions resulting in less than optimal camera calibration and orientation,and the manual operation of probes.Typical accuracy figures lie in the order of 0.2–0.5mm over a range of 2m (e.g.Broers and Jansing,2007).The successful use of photogrammetry in industry requires a number of technical components that form an efficient and eco-nomic system.The following list summarises these components and related technical issues:•imaging sensor:resolution (number of pixels),available lenses,acquisition and data transfer speed,camera stability,synchronisation,data compression,etc;•targeting and illumination:representation of interesting object features,target shape and size,wave length of light sources,restrictions to object access,illumination power and measure-ment volume;•imaging configuration:number of camera stations,desired measurement accuracy,network design,redundancy,robust-ness,self-calibration ability,datum definition and object con-trol,self-control of orientation and calibration;•image processing:automation of target recognition and identi-fication,sub-pixel measurement of target centre,multi-image matching approaches,feature tracking,and handling of outliers and scene artefacts;•3D reconstruction:methods for determination of 3D coordi-nates (e.g.spatial intersection,bundle adjustment)and error statistics;•data interfaces:integration into CAD/CAM environments,machine and data interfaces,user interaction and displays,etc;and•verification of accuracy:reference bodies,reference data,standards and guidelines,and acceptance tests.The above listed topics illustrate that appropriate design,setup and operation of close-range industrial photogrammetry systems forms a complex task.The feasibility of a solution is not only a question of technical issues but also a function of required cost-performance ratio,system support,documentation,quality assurance and interdisciplinary skills.As a consequence,the world-wide number of system suppliers in this field is limited to probably less than 10professional companies.However,the market for optical 3D measurements is significantly growing and offers promising prospects for the future.In the following,the basic camera concepts,system designs and measurement tasks for industrial photogrammetry are presented.Due to the large variety of applications and system configurations,this paper can provide only an overview of recent technology and applications,rather than a 
comprehensive coverage.Further descriptions of technical data concerning commercial systems are provided at the listed websites of system suppliers and measurement service companies.2.Sensor technologyImaging sensor technology is the key feature of an industrial photogrammetry system.The selection of the appropriate sensor device is driven by requirements in accuracy,resolution,acquisi-tion speed and frame rate,synchronisation,amount of data,spec-tral information,field of view,image scale,digital interfaces and cost.In general,it is desirable to use cameras with the highest res-olution,imaging speed and accuracy in order to provide maximum efficiency and productivity with respect to system costs and return on investment.Nowadays the range of available cameras and imaging sensors is huge.Based on CCD and CMOS technology,sensors are available with very high resolutions (>60Mpixel),very high frame rates (>2000Hz),pixel sizes varying between about 1.4and 15µm,and different sensor formats.An updated overview is given by Luhmann (in press ),following on from earlier summaries by Luhmann and Robson (2008)and Luhmann et al.(2006).2.1.SLR camerasHigh-resolution digital SLR cameras are now available with sensors between 10and 60Mpixel and image formats between approximately 20×14mm and 54×45mm.Such cameras are designed for (semi-)professional photographic work with a range of exchangeable lenses,high-capacity storage devices and powerful batteries.Their mechanical stability is usually poor in terms of high-accuracy photogrammetric requirements,and camera calibration is therefore an important step in the complete process chain (Shortis et al.,1998).Depending upon absolute accuracy requirements,these cameras can be regarded as partially metric,with changing interior orientation,even from image to image (see Section 3.2).SLR cameras are mainly used for off-line applications,i.e.the measurement of static objects.Suitable cameras in classical small format (35mm SLR)are offered by companies such as Nikon,Canon and Sony,whereas medium-format cameras are available from Rollei,Hasselblad or Alpa,these being usually equipped with CCD sensor backs by PhaseOne or Leaf.Two sample cameras are shown in Fig.2.2.2.Digital video and high speed camerasDynamic processes can be observed by digital cameras with higher frame rates,e.g.video cameras or high-speed cameras.560T.Luhmann /ISPRS Journal of Photogrammetry and Remote Sensing 65(2010)558–569(a)Nikon D3x (6048×4032pixels).(b)Hasselblad H4D-60(8956×6708pixels).Fig.2.Examples of digital SLRcameras.Fig.3.High-speed camera PCO dimax.Controlled through a fast computer interface (e.g.CameraLink or Giga Ethernet),sensors with more than 1500×1000pixels and frame rates of 2000Hz are commercially available,conferring the opportunity to carry out dynamic photogrammetry of high speed events.Video cameras with typically 1.3Mpixel sensors and frame rates of 10–30Hz are used in a variety of applications of photogrammetric on-line systems,examples being tube inspection (Bösemann,2005),stereo navigation and robot guidance.Digital high speed cameras are usually equipped with CMOS sensors that enable fast data access,programmable field of view,extremely short exposure times and high dynamic range.Typical high-speed cameras provide images of about 1500×1000pixels at 1000Hz,though there are already on the market newly developed cameras with similar spatial resolutions,but with frame rates of more than 2000Hz (Fig.3).A special high-speed camera for photogrammetric measure-ments has been 
developed by the AICON company.TraceCam F (Wiora et al.,2004),shown in Fig.4,is based on a 1.3Mpixel CMOS sensor and incorporates a camera-integrated FPGA processor able to locate and measure up to 10,000circular white targets in real-time with a frame rate of up to 500Hz at full sensor resolution.The camera,which stores only image coordinates and not single image frames,is designed for single camera 6DOF measurements such as recording the spatial position and rotation of spinning wheels with respect to a car body.Synchronisation of two or more high-speed cameras remains a challenging task at high image acquisition rates,for example in the dynamic measurement of deformation in car crash testing.As an alternative to the use of multiple cameras,however,the technology of stereo beam splitting can be employed.Through the use ofanFig.4.High-speed camera with FPGA-based target detection(AICON).Fig.5.High-speed camera with stereo beam splitter.optical beam splitter (Fig.5),it is possible to acquire synchronised stereo images with only one camera (Luhmann,2005),though the available image format is then only half of the original sensor size (Fig.6).2.3.Photogrammetric camerasThere are very few cameras specifically designed for close-range photogrammetric applications.The classical ‘metric camera’approach with stable interior orientation requires high additional effort in terms of optical and mechanical sensor design.The main advantage of these cameras is their assured stability and the consequent reduced need for periodic or on-the-job calibration,for example in applications where high accuracy is demanded without the technical possibility for simultaneous camera calibration.T.Luhmann /ISPRS Journal of Photogrammetry and Remote Sensing 65(2010)558–569561Fig.6.Examples of stereo beam splitting image sequence.However,the term ‘metric camera’should only be used in conjunction with the desired accuracy level of the camera.Consequently,even metric cameras have to be calibrated on the job if the desired accuracy exceeds the metric tolerance of the camera.Fig.7(a)shows the INCA 3photogrammetric camera from GSI,which is designed for high-accuracy industrial metrology.The fixed lens,ring flash and integrated processor enable 3D measurements in off-line (1camera)or on-line (2cameras)mode.The measurement of targets is performed inside the camera processor.Fig.7(b)shows a special digital video camera AXIOS 3D SingleCam that is designed for 6DOF navigation tasks.The mechanical stability of the lens and sensor assembly is extremely high,with the camera being shock resistant up to 50g without measurable change of the camera calibration parameters.Multi-sensor systems comprising two or more cameras (Fig.8)enable 3D measurements without any additional effort for calibration and orientation.Depending upon the mechanical stability of the cameras and their relative orientation,such systems can provide measurement accuracy of about 0.05mm over a range of up to 2m.Accuracy in absolute length measurement is closer to 0.1mm and the overall accuracy is generally stated at around 1:20,000.These multi-camera systems are often used for navigation tasks,such as the positioning and tracking of crash dummies in car safety testing,and for medical applications.2.4.Additional sensorsPhotogrammetric sensors can be combined with additional measuring systems,as seen in fringe projection systems,laser tracking or scanning systems,tacheometers or 3D cameras.Most important are hybrid systems where the advantages of two sensor types are 
combined in order to form a new type of measuring system with extended functionality or performance.Examples are laser trackers equipped with a photogrammetric camera for6DOF(a)Metric camera GSI INCA 3(3500×2350pixels).(b)Metric video camera AXIOS 3D SingleCam (776×582pixels).Fig.7.Examples of photogrammetriccameras.(a)Metric stereo camera AXIOS 3D CamBarB2(1392×1040pixels).(b)Four-camera head AICON DPS (1300×1000pixels).Fig.8.Examples of photogrammetric multi-camera systems.562T.Luhmann/ISPRS Journal of Photogrammetry and Remote Sensing65(2010)558–569(a)Laser tracker with camera and6DOF probe(Leica).(b)Surface sensor tracked by multi-camera system(Steinbichler,NDI).Fig.9.Examples of hybrid systems.measurements at the object surface,offered for example by Leica (Fig.9(a)),and the optical3D navigation of a surface sensor through tracking with a stereo or multiple camera system,as offered by Steinbichler(Fig.9(b)).3.Camera calibration3.1.Physical and mathematical modelsCamera calibration is an essential part of photogrammetric systems in industrial application since measurement accuracy is usually directly related to sensor quality and correct modelling of the interior orientation.The standard models for camera calibration include the3D position of the perspective centre in image space(principal distance and principal point),parameters for radial and decentring distortion,and possibly corrections for affinity and shear within the sensor system.These parameters are calculated through a self-calibrating bundle adjustment in a multi-station convergent network of images.Recent overviews are given in Remondino and Fraser(2006)and Luhmann et al.(2006).Calibration becomes a more difficult task in the following cases:•camera geometry is unstable during image acquisition(e.g.due to gravity effects);•the number of acquired images is less than a minimum number required for self-calibration(e.g.for stereo on-line systems);•the geometric configuration of images does not allow bundle adjustment with self-calibration(e.g.due to weak intersection angles or lack of orthogonal camera rotations about the optical axis);and•the object does not provide enough information(e.g.points, distances)for calibration.Photogrammetric metrology systems usually consist of integrated bundle adjustment software which is,in some cases,not accessible by normal users.Examples are systems like GSI VSTARS or AICON3D Studio that work like black boxes in a fully automated and robust manner.In addition,bundle adjustment programs are available as stand-alone off-line packages enabling the full control of parameter selection and analysis,such as Australis (Photometrix)or Ax.Ori(AXIOS3D).3.2.Calibration of off-line systemsCamera calibration for off-line systems is based on a multi-image setup that is recorded either for a test field(test field calibration)or for the measured object itself(on-the-job calibration).In both cases a minimum number of tie points must be provided and these can be natural or signalised points.Optionally, given control points or distance constraints can be introduced.The datum of the object coordinate system can either be defined by three or more control points(with the risk of introducing shape constraints in the photogrammetric orientation),by minimum definition(e.g.3-2-1method)or by free-net adjustment.If the mechanical instability of the camera is worse than the required accuracy level,the camera can be calibrated image-wise, i.e.each image of the bundle configuration obtains individual calibration 
parameters(Maas,1999a;Tecklenburg et al.,2001). Usually the distortion values are kept constant for all images in the network,while the position of perspective centre is adjusted for each image.This method leads to significant accuracy enhancements as long as the imaging configuration consists of enough well distributed images(usually more than30including images rolled around the optical axis).The precision of camera calibration can be measured by the precision of image and object points,or by standard deviations of camera parameters.A reliable and strict accuracy assessment is only possible if checks can be made against independent control points or standardised calibrated distances(see Section5).Typical internal precision measures for digital SLR camera self-calibration can reach1:100,000and beyond(RMS1-sigma accuracy rgest object dimension),whereas accuracy values determined through length measurement are generally closer to the1:50,000level (Rieke-Zapp et al.,2009).3.3.Calibration of on-line systemsCamera calibration for on-line systems with fixed or limited camera positions can only be solved by multi-image bundle adjustment if a local test or reference object can be moved through the measurement volume.In this case an appropriate multi-image configuration can be recorded which is usable for self-calibration.T.Luhmann /ISPRS Journal of Photogrammetry and Remote Sensing 65(2010)558–569563Fig.10.Principle of a single camera system with tactile probing.In cases where an additional calibration body is not provided cameras have to be calibrated in advance.The successful use of the system then requires stable cameras with respect to the desired accuracy e of an extended space resection approach with additional parameters for interior orientation may allow for a camera-wise calibration based on given 3D control points.However,the potential of self-calibration bundle adjustment cannot be matched by single image space resection.Calibration of on-line systems with fixed camera set-up can be performed either by observing a calibration body which is located at different positions in object space,or by measuring an additional calibrated scale-bar that is moved in front of the cameras (Amdal ,1992;Maas ,1999b ).The latter method is easy to handle and provides scale information throughout the measurement volume,hence it is advantageous for accuracy assessments based on distance measurements.4.Measurements 4.1.Single point probingThe measurement of single object points is a common task when the photogrammetric system is used as an optical coordinate measurement machine (CMM).In this case a tactile probe is employed to measure a point on an object surface while the position and orientation (6DOF)is determined by a photogrammetric system.The probe consists of local control points and a probing tip with given 3D coordinates in the same system.It can be operated manually or by a moving system,e.g.a robot.The measurement of the 6DOF values of the probe can be performed by single cameras,by stereo cameras or by multiple camera ually the accuracy of point measurement increases with the number of cameras that observe the probe simultaneously.Fig.10describes the principle of single camera probing (Amdal ,1992;Luhmann ,2009).The position of the probe is calculated by inverse space resection with respect to a minimum of four control points.Fig.11shows a commercial version using a probe with a linear arrangement of points,hence only 5degrees of freedom can be determined by space resection.The typical accuracy of 
Fig. 11. Single camera system with tactile probing (Metronor).

A hand-held probe with an integrated camera is offered by AICON. The camera is oriented by space resection using mobile object point panels that consist of a number of coded and uncoded control points. The 3D point accuracy is about 0.1 mm in a measuring volume that is defined by the size and number of panels.

If the principle is extended to two or more cameras, the task of 6DOF measurement is solved by spatial intersection of the probe's control points. Usually the accuracy of multi-camera systems is higher than that of single-camera systems since the typical distance between the cameras is larger than the equivalent base of control points on the probe. Commonly used systems consist of 2–4 digital video cameras (example in Fig. 8) and active or passive targets, providing measurement frequencies between 10 and 100 Hz. Systems based on linear CCD sensors, as exemplified in Fig. 9(b), work with active LED targets at frequencies of up to 1000 Hz. The typical accuracy of multi-camera probing systems by companies such as Metronor, AICON, GSI and GOM lies in the order of 0.1 mm over distances of 2–5 m, depending upon the system configuration (see Fig. 12).

Single and multi-camera systems can also be used for navigation of additional sensors (Fig. 9(b)). In this case the uncertainty of spatial orientation by the observing system is directly introduced into the measurement errors of the additional sensors.
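As an illustration of the spatial intersection used by multi-camera probing systems, the sketch below triangulates a single target observed by two oriented cameras with a linear (DLT-style) least-squares solution. The projection matrices and pixel measurements are invented for illustration; an operational system would use the calibrated interior and exterior orientations of its cameras and usually more than two rays per target.

# Minimal sketch (hypothetical values): spatial intersection of one target
# observed by two oriented cameras, using a linear (DLT) least-squares solution.
import numpy as np

def intersect_two_rays(P1, P2, x1, x2):
    """Triangulate a 3D point from two projection matrices and image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)        # homogeneous least-squares solution
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical oriented cameras: identical interior orientation, the second
# camera shifted 400 mm along X (projection matrices P = K [R | t]).
K = np.array([[2400.0, 0.0, 960.0],
              [0.0, 2400.0, 600.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-400.0], [0.0], [0.0]])])

# Measured image coordinates of the same target in both cameras [pixels];
# chosen to be consistent with a point at roughly (200, 50, 2000) mm.
x1 = np.array([1200.0, 660.0])
x2 = np.array([720.0, 660.0])

print("Intersected point [mm]:", intersect_two_rays(P1, P2, x1, x2))

The geometry of the example also shows why a longer camera base and larger convergence angles improve the precision of the intersected point: the rays cut at a wider angle, so image measurement noise maps to a smaller depth uncertainty.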
4.2. Multiple point measurement

The measurement of a large number of object points is probably the application to which photogrammetry is best suited. The standard case for multi-point measurement is the off-line approach using a single high-resolution digital camera (Section 2.1), targeted object points and self-calibrating bundle adjustment. The spatial net design of images and object points can be optimised according to precision, reliability and accuracy of the measurement (Fraser, 1996).

Fig. 13 shows a partial set of images, taken with a digital SLR camera, from a multi-station bundle network used for 3D deformation analysis of a car door. Since each object point is covered by an average of 10–12 images, a high redundancy and consequently a high reliability of coordinate determination, camera orientation and calibration is provided. The configuration of imaging rays illustrates that they intersect with large convergence angles, yielding a homogeneous accuracy in X, Y and Z. In this example the precision in object space (RMS 1-sigma) is around 0.025 mm.

Fig. 12. Tactile on-line metrology systems. (a) Tactile probing with two-camera system (Metronor). (b) Probe with integrated camera (AICON).

Fig. 13. Multi-image set-up for car door measurement. (a) Multi-image configuration. (b) 3D net configuration of rays.

4.3. Surface measurement

The measurement of free-form surfaces is of increasing interest in industrial manufacturing due to the need for 3D digitisation and quality control in rapid prototyping and reverse engineering processes. Among the variety of technical solutions, the following optical methods are usually applied for surface reconstruction:
• fringe projection systems with one camera
• fringe projection systems with two or more cameras
• photogrammetry with grid projection or grid measurement
• photogrammetry with artificial or natural textures
• laser profiling and scanning with photogrammetric orientation
• hybrid solutions with combinations of the above-mentioned methods.

4.3.1. Fringe projection

Fringe projection systems are applied in a large variety of industrial measurement tasks such as design, prototyping, copying of objects, roughness measurement and quality control. The typical measurement volume for a single-shot fringe projection system lies in the range of 100 × 100 × 30 mm up to 1000 × 1000 × 300 mm. The measurement volume is generally restricted by physical limitations such as the reflectivity of the surface material and the illumination power of the fringe projector.

The principle of fringe projection systems is usually based on phase-shift methods (e.g. Zumbrunn, 1987) where multiple sets of shifted fringes are projected and observed by a camera under a certain triangulation angle. Since phase-shift methods are only unique in a range of ±π they are combined with absolute Gray code measurements or with the projection of multiple sets of fringes with different wavelengths. The lateral resolution of fringe projection lies between λ/20 and λ/100, where λ is the wavelength of the fringe pattern (Brenner et al., 1999). In practice fringe projection can achieve an accuracy of about 0.05–0.1 mm in a measuring volume of up to 1 × 1 × 0.3 m.

Systems based on one camera (Fig. 14(a)) require a calibrated and oriented projector that can be regarded as an inverse camera. If two cameras are used, the projector serves only as a projection device while the 3D coordinates are derived by spatial intersection from the two camera images.
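As a concrete illustration of the phase-shift principle described in this subsection, the sketch below evaluates the wrapped phase from four synthetic fringe images shifted by 0, π/2, π and 3π/2. The image size, ambient intensity and modulation values are arbitrary, and the subsequent steps (Gray-code or multi-wavelength unwrapping and the conversion of phase to surface height via the calibrated triangulation geometry) are omitted.

# Minimal sketch (synthetic data): four-step phase-shift evaluation for
# fringe projection. Four images of the same scene are captured with the
# projected fringe pattern shifted by 0, pi/2, pi and 3*pi/2; the wrapped
# phase per pixel follows from the classic arctangent formula.
import numpy as np

def wrapped_phase(I0, I1, I2, I3):
    """Wrapped phase in (-pi, pi] from four phase-shifted intensity images."""
    return np.arctan2(I3 - I1, I0 - I2)

# Synthetic test data: a flat surface with a linear "true" phase ramp.
h, w = 480, 640
true_phase = np.linspace(0, 40 * np.pi, w)[None, :] * np.ones((h, 1))
ambient, modulation = 0.5, 0.4
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
images = [ambient + modulation * np.cos(true_phase + s) for s in shifts]

phi = wrapped_phase(*images)   # wrapped to (-pi, pi]
# In practice the +/-pi ambiguity is resolved by Gray-code sequences or by
# projecting fringes of different wavelengths before the unwrapped phase is
# converted to object coordinates via the triangulation geometry.
print("Wrapped phase range:", phi.min(), phi.max())

The arctangent form follows directly from the four intensity equations I_k = A + B·cos(φ + δ_k): the differences cancel the ambient term A and the modulation B, which is why the method is largely insensitive to uneven illumination and surface reflectivity.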