LOD-sprite technique for accelerated terrain rendering
- Format: PDF
- Size: 302.96 KB
- Pages: 8
Overview of LOD technology: Scene generation in virtual reality has stringent real-time requirements, and level-of-detail (LOD) is an effective technique for accelerating image generation. This overview covers the main research topics of LOD, algorithms for generating LOD models, and how LOD models are selected during virtual scene generation. Finally, future research directions for LOD are discussed.

1. Introduction
Virtual reality (VR) is a human-computer interaction technology that realistically simulates human vision, hearing, touch, and motion in a natural environment. It integrates many branches of information technology, including computer graphics, multimedia, artificial intelligence, human-computer interfaces, digital image processing, networking, sensor technology, and highly parallel real-time computing. Its defining characteristics are immersion, interactivity, and imagination. Its key technologies include environment modeling, spatial audio synthesis and stereoscopic display, haptic feedback, interaction techniques, and system integration.

What matters most in virtual reality is that the user can perceive the dynamic behavior of the scene under freely changing interactive control; that is, the VR system must generate the corresponding images immediately as the user moves. Two metrics measure the degree to which a user feels immersed in a virtual environment. The first is dynamic fidelity: natural motion requires generating and displaying 30 frames per second, and no fewer than 10, otherwise the imagery appears severely discontinuous and jumpy. The second is interaction latency: the delay between a user's action and the system's graphical response should not exceed 0.1 seconds, and must never exceed 1/4 second. Both metrics depend on how quickly the system can generate images, so image generation speed is clearly a major bottleneck for virtual reality.

Image generation speed depends mainly on the hardware and software architecture for graphics processing, in particular the processing power of hardware accelerators and the acceleration techniques used during image generation. Although today's graphics workstations have improved greatly thanks to rapidly advancing CPUs and dedicated graphics processors, a considerable gap remains between their performance and the demands of VR. Given that VR places almost no limit on scene complexity, reducing the complexity of the rendered imagery in software, while still generating high-quality images in real time, has become the primary goal of VR image generation. In 1976, Clark [1] introduced the concept of level-of-detail (LOD) models, arguing that when an object covers only a small area of the screen it can be described by a coarser model, and proposed a geometric hierarchy for visible-surface determination so that complex scenes can be rendered quickly.
LOD-Sprite Technique for Accelerated Terrain Rendering
Baoquan Chen (SUNY at Stony Brook), J. Edward Swan II (Naval Research Lab), Eddy Kuo (Naval Research Lab), Arie Kaufman (SUNY at Stony Brook)

Abstract
We present a new rendering technique, termed LOD-sprite rendering, which uses a combination of a level-of-detail (LOD) representation of the scene together with reusing image sprites (previously rendered images). Our primary application is accelerating terrain rendering. The LOD-sprite technique renders an initial frame using a high-resolution model of the scene geometry. It renders subsequent frames with a much lower-resolution model of the scene geometry and texture-maps each polygon with the image sprite from the initial high-resolution frame. As it renders these subsequent frames the technique measures the error associated with the divergence of the view position from the position where the initial frame was rendered. Once this error exceeds a user-defined threshold, the technique re-renders the scene from the high-resolution model. We have efficiently implemented the LOD-sprite technique with texture-mapping graphics hardware. Although to date we have only applied LOD-sprite to terrain rendering, it could easily be extended to other applications. We feel LOD-sprite holds particular promise for real-time rendering systems.

Keywords: Image-Based Modeling and Rendering, Texture Mapping, Acceleration Techniques, Multi-Resolution, Level of Detail, Terrain Rendering, Virtual Reality, Virtual Environments.

1 INTRODUCTION
As scene geometry becomes complex (into the millions of polygons), even the most advanced rendering hardware cannot provide interactive rates. Current satellite imaging technology provides terrain datasets which are well beyond this level of complexity. This presents two problems for real-time systems: 1) the provided frame rate may be insufficient, and 2) the system latency may be too high. Much of real-time computer graphics has been dedicated to finding ways to trade off image quality for frame rate and/or system latency. Many recent efforts fall into two general categories:

Level-of-detail (LOD): These techniques model the objects in the scene at different levels of detail. They select a particular LOD for each object based on various considerations such as the rendering cost and perceptual contribution to the final image.

Image-based modeling and rendering (IBMR): These techniques model (some of the) objects in the scene as image sprites. These sprites only require 2D transformations for most rendering operations, which, depending on the object, can result in substantial time savings. However, the 2D transformations eventually result in distortions which require the underlying objects to be re-rendered from their full 3D geometry. IBMR techniques typically organize the scene into separate non-occluding layers, where each layer consists of an object or a small group of related objects. They render each layer separately, and then alpha-channel composite them.

Figure 1: Traditional hybrid LOD and IBMR techniques render each object either as a sprite or at a certain level of detail.

Figure 2: The LOD-sprite technique renders each object as both a sprite and as a geometric object at a certain level of detail.

Some hybrid techniques use both multiple LODs and IBMR methods [16, 27, 22]. A general pipeline of these techniques is shown in Figure 1. Each 3D object is first subjected to a culling operation. Then, depending upon user-supplied quality parameters, the system either renders the object at a particular LOD, or it reuses a cached sprite of the object.

This paper presents the LOD-sprite rendering technique. As shown in Figure 2, the technique is similar to previous hybrid techniques in that it utilizes view frustum culling and a user-supplied quality metric. Objects are also modeled as both LOD models and sprites. However, the LOD-sprite technique differs in that the 2D sprite is coupled with the LOD representation; the renderer utilizes both the LOD and the sprite as the inputs to create the output image. The LOD-sprite technique first renders a frame from high-resolution 3D scene geometry, and then caches this frame as an image sprite. It renders subsequent frames by texture-mapping the cached image sprite onto a lower-resolution representation of the scene geometry. This continues until an image quality metric requires again rendering the scene from the high-resolution geometry.

We have developed the LOD-sprite technique as part of the rendering engine for a real-time, three-dimensional battlefield visualization system [9]. For this application the terrain database consumes the vast majority of the rendering resources, and therefore in this paper our focus is on terrain rendering. However, LOD-sprite is a general-purpose rendering technique and could certainly be applied to many different types of scene geometry.

The primary advantage of LOD-sprite over previous techniques is that when the sprite is transformed, if the 2D transformation is within the context of an underlying 3D structure (even if only composed of a few polygons), a much larger transformation can occur before image distortions require re-rendering the sprite from the full 3D scene geometry. Thus, the LOD-sprite technique can reuse image sprites for a larger number of frames than previous techniques. In addition, because the sprite preserves object details, a lower LOD model can be used for the same image quality. These properties allow interactive frame rates for larger scene databases.

The next section of this paper places LOD-sprite in the context of previous work. Section 3 describes the LOD-sprite technique itself. Section 4 presents the results of our implementation of LOD-sprite.
2 RELATED WORK
The previous work which is most closely related to LOD-sprite can be classified into image-based modeling and rendering techniques and level-of-detail techniques. We first revisit and classify previous IBMR techniques while also considering LOD techniques, and then focus on some hybrid techniques.

2.1 Image-Based Modeling and Rendering
Previous work in image-based modeling and rendering falls primarily into three categories:

(1) The scene is modeled by 2D image sprites; no 3D geometry is used. Many previous techniques model the 3D scene by registering a number of static images [2, 18, 19, 26]. These techniques are particularly well-suited for applications where photographs are easy to take but modeling the scene would be difficult (outdoor settings, for example). Novel views of the scene are created by 2D transforming and interpolating between images [3, 18]. By adding depth [17] or even layered depth [23] to the sprites, more realistic navigation, which includes limited parallax, is possible. Another category samples the full plenoptic function, resulting in 3D, 4D or even 5D image sprites [13, 10], which allow the most unrestricted navigation of this class of techniques. However, all of these techniques lack the full 3D structure of the scene, and so restrict navigation to at least some degree.

(2) The scene is modeled using either 3D geometry or 2D image sprites. Another set of previous techniques model each object with either 3D geometry or a 2D image sprite, based on object contribution to the final image and/or viewing direction [5, 16, 20, 21, 22, 27]. The LOD-sprite technique differs from these techniques in that it integrates both 3D geometry and 2D image sprites to model and render objects.

(3) The scene is modeled using a combination of 3D geometry and 2D image sprites. There are a group of techniques which add very simple 3D geometry to a single 2D image [6, 7, 12, 24], which guides the subsequent image warping. Debevec et al. [7] construct a 3D model from reference images, while Sillion et al. [24] and Darsa et al. [6] use a textured depth mesh which is constructed and simplified from depth information. In general, using a depth mesh with projective texture mapping gives better image quality than using depth image warping [17], because the mesh stretches to cover regions where no pixel information is available, and thus no holes appear. The main advantage of adding 3D scene geometry to the image is that it allows the warping to approximate parallax, and therefore increases the range of novel views which are possible before image distortion becomes too severe.

Our LOD-sprite is most closely related to the techniques of Cohen et al. [4] and Soucy et al. [25]. Both create a texture map from a 3D object represented at a high geometric resolution, and then subsequently represent the object at a much lower geometric resolution, but apply the previously created texture map to the geometry. However, the LOD-sprite technique generates texture maps (image sprites) from images rendered at run-time, while these techniques generate the texture map from the object itself.

2.2 Level-of-Detail
There is a large body of previous work in level-of-detail (LOD) techniques, which is not reviewed here. The general LOD-sprite technique requires that geometric objects be represented at various levels of detail, but it does not require any particular LOD representation or technique (although a specific implementation of LOD-sprite will need to access the underlying LOD data structures). This paper does not cover how to create LOD representations of a terrain; there exist numerous multiresolution representations for height fields. Lindstrom et al. [14] and Hoppe [11] represent the most recent view-dependent terrain LOD methods, and Luebke and Erikson [15] can also be adapted for terrain datasets. In this paper we adopt the technique of Lindstrom et al. [14]. This algorithm organizes the terrain mesh into a hierarchical quadtree structure. To decide which quadrant level to use, the algorithm computes a screen space error for each vertex, and compares it to a pre-defined error threshold. This error measures the pixel difference between the full-resolution and lower-resolution representations of the quadrant.
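To make the quadrant-selection idea concrete, here is a minimal sketch of screen-space-error-driven refinement of a terrain quadtree. It is only an illustration of the approach attributed to Lindstrom et al. [14] above, not their data structures or code; QuadNode, worldError, and projScale are assumed names.

```cpp
// Sketch: refine a terrain quadtree node only when its screen-space
// vertex error exceeds a pixel threshold. Illustrative, not [14]'s algorithm.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

struct QuadNode {
    Vec3 center;                 // center vertex of the quadrant
    float worldError;            // vertical deviation from the full-resolution mesh
    QuadNode* children[4] = {};  // null for leaf quadrants
};

// Approximate screen-space error in pixels: world-space error scaled by a
// projection factor and divided by the distance to the viewpoint.
float screenSpaceError(const QuadNode& n, const Vec3& eye,
                       float projScale /* e.g. viewportHeight / (2*tan(fovY/2)) */) {
    float dx = n.center.x - eye.x, dy = n.center.y - eye.y, dz = n.center.z - eye.z;
    float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
    return n.worldError * projScale / std::max(dist, 1e-3f);
}

// Collect the quadrants to render: descend only while the error exceeds the threshold.
void selectLOD(QuadNode* n, const Vec3& eye, float projScale,
               float pixelThreshold, std::vector<QuadNode*>& out) {
    if (!n) return;
    bool refine = n->children[0] && screenSpaceError(*n, eye, projScale) > pixelThreshold;
    if (!refine) { out.push_back(n); return; }
    for (QuadNode* c : n->children) selectLOD(c, eye, projScale, pixelThreshold, out);
}
```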
2.3 Accelerated Virtual Environment Navigation
As stated above, many LOD and IBMR techniques have been applied to the problem of accelerating virtual environment navigation. Of these, LOD-sprite is most closely related to the techniques of Maciel and Shirley [16], Shade et al. [22], Schaufler and Stuerzlinger [21], and Aliaga [1]. All of these papers present similar hybrid LOD/IBMR techniques. They create a hierarchy of image sprites based on a space partition of the scene geometry. In subsequent frames, for each node the techniques either texture map the node sprite onto a polygon, or re-render the node's 3D geometry if an error metric is above a threshold. Each reused image sprite means an entire subtree of 3D geometry need not be rendered, which yields substantial speedup for navigating large virtual environments. The main limitation of these techniques is that creating a balanced space partition is not a quick operation, and it must be updated if objects move. Also, to avoid gaps between neighboring partitions, they either maintain a fairly large amount of overlap between partitions [22], or they morph geometries to guarantee a smooth transition between geometry and sprite [1]; both operations add storage and computational complexity. LOD-sprite differs from these techniques in that they interpolate the image sprite on a single 2D polygon, while LOD-sprite interpolates the image sprite on a coarse representation of the 3D scene geometry.

3 THE LOD-SPRITE TECHNIQUE

3.1 Algorithm
The general idea of the LOD-sprite technique is to cache the rendered view of a high-resolution representation of the dataset. We refer to this image as a sprite, and the frame where the sprite is created as a keyframe. LOD-sprite renders subsequent frames, referred to as novel views, at a lower resolution, but applies the sprite as a texture map. LOD-sprite measures the error caused by the divergence of the viewpoint from the keyframe as each novel view is rendered. When this error exceeds a threshold, LOD-sprite renders a new keyframe.

Pseudocode for the LOD-sprite algorithm is given in Figure 3. Lines 1 and 5 generate a sprite image from high-resolution scene geometry. This is necessary whenever the viewer jumps to a new viewpoint position (line 1), and when LOD-sprite generates a new keyframe (line 5).

1   render sprite image from high-resolution
    scene geometry at viewpoint vp
2   for each novel viewpoint vp
3       let error = ErrorMetric(vp, sprite)
4       if error > threshold then
5           render sprite image from high-resolution
            scene geometry at viewpoint vp
6       let polys = set of low-resolution scene
        geometry polygons
7       for each poly
8           if WasVisible(poly, sprite) then
9               render poly, map with sprite
10          else
11              render poly, map with original texture map

Figure 3: Pseudocode for the LOD-Sprite algorithm.
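The following is a minimal C++ sketch of the loop in Figure 3. The Scene, Viewpoint, Sprite, and helper functions (ErrorMetric, WasVisible, and the render* routines) are hypothetical stand-ins for the paper's data structures; glCopyTexImage2D is the one OpenGL call the text names for caching the frame buffer as the sprite texture.

```cpp
// Sketch of the LOD-sprite frame loop (Figure 3); helpers are assumed, not the paper's code.
#include <GL/gl.h>
#include <vector>

struct Viewpoint { /* position and orientation */ };
struct Sprite    { GLuint tex; Viewpoint keyframeView; };
struct Polygon   { /* low-resolution terrain polygon */ };
struct Scene     { /* high- and low-resolution terrain representations */ };

// Assumed helpers (declarations only):
void  renderHighResolution(const Scene&, const Viewpoint&);
float ErrorMetric(const Viewpoint& vp, const Sprite& sprite);   // Section 3.2
bool  WasVisible(const Polygon& poly, const Sprite& sprite);    // Section 3.3
void  renderWithSpriteTexture(const Polygon&, const Sprite&);   // line 9
void  renderWithOriginalTexture(const Polygon&);                // line 11
const std::vector<Polygon>& lowResPolygons(const Scene&);

// Lines 1 and 5: render a keyframe and cache the frame buffer as the sprite texture.
void renderKeyframe(const Scene& scene, const Viewpoint& vp, Sprite& sprite,
                    int width, int height) {
    renderHighResolution(scene, vp);
    glBindTexture(GL_TEXTURE_2D, sprite.tex);
    // Copy the frame buffer into texture memory, as described in Section 3.1.
    glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 0, 0, width, height, 0);
    sprite.keyframeView = vp;
}

// Lines 2-11: render one novel view, re-rendering the keyframe only when needed.
void renderFrame(const Scene& scene, const Viewpoint& vp, Sprite& sprite,
                 float threshold, int width, int height) {
    if (ErrorMetric(vp, sprite) > threshold)               // lines 3-4
        renderKeyframe(scene, vp, sprite, width, height);  // line 5
    for (const Polygon& poly : lowResPolygons(scene)) {    // lines 6-7
        if (WasVisible(poly, sprite))                      // line 8
            renderWithSpriteTexture(poly, sprite);         // line 9
        else
            renderWithOriginalTexture(poly);               // line 11
    }
}
```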
At line 2 the algorithm processes each novel viewpoint. Lines 3 and 4 measure the error associated with how far the current viewpoint diverges from the viewpoint at the time when the sprite was rendered; the procedure ErrorMetric is described in Section 3.2. At line 6 the algorithm prepares to render the frame at the current viewpoint by gathering a set of polygons from a low-resolution version of the scene geometry. Line 7 considers each polygon. Line 8 determines, for each low-resolution polygon, whether the polygon was visible when the sprite image was taken. This routine, WasVisible (described in Section 3.3), determines whether the polygon is texture mapped with the sprite texture (line 9) or the original texture map (line 11).

The sprite data structure holds both the sprite texture map and the keyframe viewing parameters; LOD-sprite uses both to map polygons with the sprite texture in line 9. Creating a new sprite (lines 1 and 5) requires copying the frame buffer into texture memory, which is efficiently implemented with the OpenGL glCopyTexImage2D function.

Texture mapping a keyframe could be achieved using projective texture mapping: a light placed at the keyframe camera position projects the sprite image onto the scene geometry. However, our implementation of LOD-sprite does not use projective texture mapping, because the current OpenGL implementation does not test for polygon visibility. Occluded polygons in the keyframe are mapped with wrong textures when they become visible. Therefore, our implementation detects polygon visibility on its own (line 8), and applies a different texture map depending on each polygon's visibility (lines 9 and 11).

3.2 Error Metric
LOD-sprite decides when to render a new keyframe based on an error metric which is similar to that described by Shade et al. [22]. Figure 4 gives the technique, which is drawn in 2D for clarity. Consider rendering the full-resolution dataset from viewpoint position V1. In this case the line segments AB and BC are rendered (in 3D these are polygons). From this view, the ray passing through vertex B intersects the edge AC at point B′. After rendering the full-resolution dataset, the image from V1 is stored as a texture map. Now consider rendering the scene from the novel viewpoint V2, using the low-resolution representation of the dataset. In this case the line segment AC is rendered, and texture mapped with the sprite rendered from V1. Note that this projects the vertex B to the position B′ on AC. From V1 this projection makes no visible difference. However, from V2, vertex B is shifted by the angle θ from its true location. This angle can be converted to a pixel distance on the image plane of view V2, which is our measure of the error of rendering point B from view V2:

    e(B) = θ / φ ≤ ε    (1)

where φ is the view angle of a single pixel (e.g., the field-of-view over the screen resolution), and ε is a user-specified error threshold. As long as Equation (1) is true, we render using the sprite from the most recent keyframe (e.g., line 5 in Figure 3 is skipped). Once Equation (1) becomes false, it is again necessary to render from the full-resolution dataset (e.g., line 5 in Figure 3 is executed).

Figure 4: Calculating the error metric.

Theoretically, we should evaluate Equation (1) for all points in the high-resolution dataset for each novel view. Clearly this is impractical. Instead, our implementation calculates the error for the central vertex of each low-resolution quadtree quadrant. The resolution of each quadrant is determined by the number of levels we traverse down into the quadtree that is created by our LOD algorithm [14]. We calculate the central vertex by averaging the four corner vertices of the quadrant. To calculate θ, we have to know the point B′. We calculate B′ by intersecting the vector from V1 through B with the plane spanned by the estimated central vertex and two original vertices of the quadrant. Once we know B′, we calculate θ from the dot product of the vectors B − V2 and B′ − V2. We next calculate the average sum of squares of the error over all evaluated quadrants and compare the result with ε:

    √( (1/n) Σ_i e(B_i)² ) ≤ ε    (2)

where n is the number of low-resolution quadrants. When this test fails, line 5 in Figure 3 is executed.
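A small sketch of evaluating this error test follows, using the symbol names from the reconstruction above (B: true central vertex of a quadrant, Bp: its textured position B′ on the low-resolution surface, V2: the novel viewpoint, phi: the per-pixel view angle, epsilon: the threshold). This is an illustrative reading of Equations (1) and (2), not the authors' implementation.

```cpp
// Sketch of the per-quadrant error (Eq. 1) and the aggregate RMS test (Eq. 2).
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float len(Vec3 a)         { return std::sqrt(dot(a, a)); }

// Equation (1): angle between the true vertex and its textured position, seen
// from V2, converted to pixels by dividing by the view angle of one pixel phi.
float vertexErrorPixels(Vec3 B, Vec3 Bp, Vec3 V2, float phi /* fov / resolution */) {
    Vec3 a = sub(B, V2), b = sub(Bp, V2);
    float c = dot(a, b) / (len(a) * len(b));
    float theta = std::acos(std::fmax(-1.0f, std::fmin(1.0f, c)));
    return theta / phi;
}

// Equation (2): root of the average squared per-quadrant error compared against
// epsilon; true means the current sprite can still be reused for this view.
bool spriteStillValid(const std::vector<float>& quadrantErrors, float epsilon) {
    float sumSq = 0.0f;
    for (float e : quadrantErrors) sumSq += e * e;
    float rms = std::sqrt(sumSq / quadrantErrors.size());
    return rms <= epsilon;
}
```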
3.3 Visibility Changes
As the viewpoint changes, polygons which were originally occluded or culled by the view frustum may become visible. Figure 5 illustrates this problem. Let the two objects represent mountains. The light shaded region of the back mountain indicates occluded polygons in the keyframe, while the heavy shaded regions in both mountains show polygons culled by the view frustum. If these regions become visible in a novel view, there will be no sprite texture to map on them. Our solution is to map them with the same texture map we use to generate the keyframe.

We classify the visibility of each polygon with a single pass over all vertices of the low-resolution geometry. This loop is part of the process of generating a new keyframe. For novel views, the visibility of each polygon to the sprite is already flagged. This visibility …
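A sketch of how the per-polygon visibility flag consulted by WasVisible might be computed in the keyframe pass is shown below. This version tests only containment in the keyframe view frustum; the occlusion part of the test described above would additionally require a depth comparison, which is omitted. All types and the transform helper are assumptions, not the paper's code.

```cpp
// Sketch: flag each low-resolution polygon as visible in the keyframe if all of
// its vertices project inside the keyframe view frustum (frustum test only).
#include <vector>

struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
struct Mat4 { float m[16]; };

// Assumed helper: transform a world-space point by the keyframe's combined
// view-projection matrix into clip space.
Vec4 transform(const Mat4& viewProj, const Vec3& p);

struct LowResPolygon {
    std::vector<Vec3> vertices;
    bool visibleInKeyframe = false;  // flag consulted by WasVisible for novel views
};

// Single pass over the low-resolution geometry, run when a new keyframe is made.
void flagKeyframeVisibility(std::vector<LowResPolygon>& polys, const Mat4& keyViewProj) {
    for (LowResPolygon& poly : polys) {
        bool inside = true;
        for (const Vec3& v : poly.vertices) {
            Vec4 c = transform(keyViewProj, v);
            if (c.w <= 0.0f ||
                c.x < -c.w || c.x > c.w ||
                c.y < -c.w || c.y > c.w ||
                c.z < -c.w || c.z > c.w) { inside = false; break; }
        }
        poly.visibleInKeyframe = inside;  // conservative: partial coverage counts as not visible
    }
}

bool WasVisible(const LowResPolygon& poly) { return poly.visibleInKeyframe; }
```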
Figure 8: The camera path for Figures 9–14.

The high-resolution run requires about 2 orders of magnitude more triangles. The semi-log plot shows that both triangle counts have a similar variation as the animation progresses.

Figure 9: The number of triangles as a function of the frame number on a semi-log plot. (600 frames; path from Figure 8.)

Figure 10 shows how the LOD-sprite error (Section 3.2) changes as each frame is rendered. The error always starts from zero for a keyframe. As more novel views are interpolated from the keyframe, the error increases. When the error exceeds the threshold, we calculate another keyframe from the high-resolution scene geometry, which again drops the error to zero.

Figure 10: The error in pixels as a function of the frame number for the LOD-sprite run. (600 frames; path from Figure 8.)

Figure 11 shows the amount of time required to render each frame. The high-resolution time runs along the top of the graph, at an average of 526 milliseconds per frame. The low-resolution time runs along the bottom, at an average of 22 milliseconds per frame. The rendering time for the LOD-sprite frames follows the low-resolution times, except when a new keyframe is rendered. For this animation the system generated 16 keyframes, at an average time of 680 milliseconds per keyframe. The great majority of the LOD-sprite frames are shown near the bottom of the graph; these took an average of 36 milliseconds to render. The overall average for LOD-sprite was 53 milliseconds per frame.

Figure 12 shows the fraction of the total number of rendered frames which are keyframes. This is plotted against the total number of frames rendered for the path shown in Figure 8. As expected, as more frames are rendered for a fixed path, the distance moved between each frame decreases, and so there is more coherence between successive frames. This figure shows how our system takes advantage of this increasing coherence by rendering a smaller fraction of keyframes. This figure also illustrates a useful property of the LOD-sprite technique for real-time systems: as the frame update rate increases, the LOD-sprite technique becomes even more efficient in terms of reusing keyframes.

Figure 13 also shows the fraction of the total number of rendered frames which are keyframes, but this time plots the fraction against the error threshold in pixels. As expected, a larger error threshold means fewer keyframes need to be rendered. However, the shape of this curve indicates a decreasing performance benefit as the error threshold exceeds about … pixel. For a given dataset and a path which is representative of the types of maneuvers the user is expected to make, this type of analysis can help determine the best error threshold versus performance tradeoff.

The LOD-sprite technique results in a substantial speedup over rendering a full-resolution dataset. Rendering 600 frames of the full-resolution dataset along the path in Figure 8 takes 316 seconds. Rendering the same 600 frames with the LOD-sprite technique, using an error threshold of … pixel, takes 32 seconds, a speedup of 9.9. Figure 14 shows how the speedup varies as a function of the error threshold.

5 CONCLUSIONS AND FUTURE WORK
This paper has described the LOD-sprite rendering technique, and our application of the technique to accelerating terrain rendering. The technique is a combination of two rich directions in accelerated rendering for virtual environments: multiple level-of-detail (LOD) techniques, and image-based modeling and rendering (IBMR) techniques. It is a general-purpose rendering technique that could accelerate rendering for any application. It could be built upon any LOD decomposition technique. It improves the image quality of LOD techniques by preserving surface complexity, and it improves the efficiency of IBMR techniques by increasing the range of novel views that are possible. The LOD-sprite technique is particularly well-suited for real-time system architectures that decompose the scene into coherent layers.
function of error threshold.600frames.(Path from Figure 8.)stead.We are also considering implementing a cache of keyframes,which would accelerate the common virtual environment naviga-tion behavior of moving back and forth within a particular viewing region.Issues include how many previous keyframes to cache,and choosing a cache replacement policy.The continuous LOD algorithm [14]in our implementation is well-suited for our application of real-time terrain rendering.How-ever,the low-resolution mesh generated by this technique does not preserve silhouette edges,which as demonstrated in Figures 6and 7,forces us to use the original texture map along the silhouette.Another problem with many continuous-LOD techniques (includ-ing [14])is the artifact caused by sudden resolution changes,which results in a continuous popping effect during real-time flythroughs.The solution to this artifact is geomorphing ,where the geometry is slowly changed over several frames.To address both of these is-sues we are currently integrating the LOD technique of Luebke and Erikson [15],which preserves silhouette edges and provides a nice framework for evaluating geomorphing techniques.Finally,an important limiting factor for the performance of the LOD-sprite technique,as well as other image-based modeling and rendering techniques(e.g.,[22]),is that OpenGL requires texture maps to have dimensions which are powers of2.Thus,many texels in our texture maps are actually unused.The LOD-sprite technique could be more efficiently implemented with graphics hardware that did not impose this constraint. ACKNOWLEDGMENTSWe acknowledge the valuable contributions of Bala Krishna Nakshatrala for bugfixes and various improvements to the code, for re-generating the animations,and for help in preparing the graphs.This work was supported by Office of Naval Research grants N000149710402and N0001499WR20011,and the National Science Foundation grant MIP-9527694.We acknowledge Larry Rosenblum for advice and direction during this project. 
References
[1] D. G. Aliaga. Visualization of complex models using dynamic texture-based simplification. Proceedings IEEE Visualization '96, pages 101–106, Oct. 1996.
[2] S. E. Chen. QuickTime VR: an image-based approach to virtual environment navigation. Computer Graphics (Proc. SIGGRAPH '95), pages 29–38, 1995.
[3] S. E. Chen and L. Williams. View interpolation for image synthesis. Computer Graphics (Proc. SIGGRAPH '93), pages 279–288, 1993.
[4] J. Cohen, M. Olano, and D. Manocha. Appearance-preserving simplification. Computer Graphics (Proc. SIGGRAPH '98), pages 115–122, July 1998.
[5] D. Cohen-Or, E. Rich, U. Lerner, and V. Shenkar. A real-time photo-realistic visual flythrough. IEEE Transactions on Visualization and Computer Graphics, 2(3):255–264, Sept. 1996.
[6] L. Darsa, B. C. Silva, and A. Varshney. Navigating static environments using image-space simplification and morphing. Symposium on Interactive 3D Graphics, pages 25–34, Apr. 1997.
[7] P. E. Debevec, C. J. Taylor, and J. Malik. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. Computer Graphics (Proc. SIGGRAPH '96), pages 11–20, Aug. 1996.
[8] P. E. Debevec, Y. Yu, and G. Borshukov. Efficient view-dependent image-based rendering with projective texture-mapping. Rendering Techniques '98, pages 105–116, 1998.
[9] J. Durbin, J. E. Swan II, B. Colbert, J. Crowe, R. King, T. King, C. Scannell, Z. Wartell, and T. Welsh. Battlefield visualization on the responsive workbench. Proceedings IEEE Visualization '98, pages 463–466, Oct. 1998.
[10] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen. The lumigraph. Computer Graphics (Proc. SIGGRAPH '96), pages 43–54, 1996.
[11] H. Hoppe. Smooth view-dependent level-of-detail control and its application to terrain rendering. Proceedings IEEE Visualization '98, pages 35–42, 1998.
[12] Y. Horry, K.-i. Anjyo, and K. Arai. Tour into the picture: Using a spidery mesh interface to make animation from a single image. Computer Graphics (Proc. SIGGRAPH '97), pages 225–232, Aug. 1997.
[13] M. Levoy and P. Hanrahan. Light field rendering. Computer Graphics (Proc. SIGGRAPH '96), pages 31–42, 1996.
[14] P. Lindstrom, D. Koller, W. Ribarsky, L. F. Hodges, N. Faust, and G. Turner. Real-time, continuous level of detail rendering of height fields. Computer Graphics (Proc. SIGGRAPH '96), pages 109–118, Aug. 1996.
[15] D. Luebke and C. Erikson. View-dependent simplification of arbitrary polygonal environments. Computer Graphics (Proc. SIGGRAPH '97), pages 199–208, Aug. 1997.
[16] P. W. C. Maciel and P. Shirley. Visual navigation of large environments using textured clusters. Symposium on Interactive 3D Graphics, pages 95–102, Apr. 1995.
[17] W. R. Mark, L. McMillan, and G. Bishop. Post-rendering 3D warping. Symposium on Interactive 3D Graphics, pages 7–16, Apr. 1997.
[18] L. McMillan and G. Bishop. Plenoptic modeling: An image-based rendering system. Computer Graphics (Proc. SIGGRAPH '95), pages 39–46, 1995.
[19] P. Rademacher and G. Bishop. Multiple-center-of-projection images. Computer Graphics (Proc. SIGGRAPH '98), pages 199–206, July 1998.
[20] M. Regan and R. Post. Priority rendering with a virtual reality address recalculation pipeline. Computer Graphics (Proc. SIGGRAPH '94), pages 155–162, July 1994.
[21] G. Schaufler and W. Stuerzlinger. A three dimensional image cache for virtual reality. Computer Graphics Forum (Proc. of Eurographics '96), 15(3):227–235, Aug. 1996.
[22] J. Shade, D. Lischinski, D. Salesin, T. DeRose, and J. Snyder. Hierarchical image caching for accelerated walkthroughs of complex environments. Computer Graphics (Proc. SIGGRAPH '96), pages 75–82, Aug. 1996.
[23] J. W. Shade, S. J. Gortler, L. He, and R. Szeliski. Layered depth images. Computer Graphics (Proc. SIGGRAPH '98), pages 231–242, July 1998.
[24] F. Sillion, G. Drettakis, and B. Bodelet. Efficient impostor manipulation for real-time visualization of urban scenery. Computer Graphics Forum (Proc. of Eurographics '97), 16(3):207–218, Sept. 1997.
[25] M. Soucy, G. Godin, and M. Rioux. A texture-mapping approach for the compression of colored 3D triangulations. The Visual Computer, 12(10):503–514, 1996.
[26] R. Szeliski. Video mosaics for virtual environments. IEEE Computer Graphics and Applications, 16(2):22–30, Mar. 1996.
[27] J. Torborg and J. Kajiya. Talisman: Commodity real-time 3D graphics for the PC. Computer Graphics (Proc. SIGGRAPH '96), pages 353–364, Aug. 1996.

Figure 6: Comparing the LOD-sprite technique to a traditional LOD technique, first view (see also color plates). (a) Low resolution with 1,503 triangles. (b) High resolution with 387,937 triangles. (c) LOD-sprite with 1,503 triangles. (d) Difference: low (a) − high (b). (e) Difference: high (b) − LOD-sprite (c).

Figure 7: Comparing the LOD-sprite technique to a traditional LOD technique, second view (see also color plates). (a) Low resolution with 2,007 triangles. (b) High resolution with 156,884 triangles. (c) LOD-sprite with 2,007 triangles. (d) Difference: low (a) − high (b). (e) Difference: high (b) − LOD-sprite (c).
两种手眼标定方法手眼标定是指机器人视觉引导的过程中,将相机的视野与机器人的运动空间进行关联,以实现准确的操作。
手眼标定方法的选择对于机器人系统的准确性和稳定性具有重要影响。
目前,常见的手眼标定方法包括基于外观的特征匹配方法和基于内参标定的方法,下面将详细介绍这两种方法。
1.基于外观的特征匹配方法:基于外观的特征匹配方法是指通过相机采集场景中的图像,利用图像中的特征点或线段等视觉特征与机器人末端执行器上的特征点或标志物进行匹配,然后计算相机的外部参数与机器人的末端执行器之间的关系。
常用的特征匹配算法包括SIFT(尺度不变特征变换)、SURF(加速稳健特征)、ORB(Oriented FAST and Rotated BRIEF)等。
这些算法通过对图像的特征点进行检测、描述和匹配,能够实现比较精确的手眼标定。
该方法的优点在于对标定板等外部标定物的依赖性较小,适用范围广泛,不受特定目标限制;但缺点是对图像质量和环境光照变化较为敏感,对计算资源的需求较高。
2.基于内参标定的方法:基于内参标定的方法是指在相机采集图像的通过相机固有的内部参数进行标定,如相机的焦距、主点、畸变等,并利用机器人运动学模型推导出相机与机器人末端执行器之间的位姿关系。
这种方法需要事先对相机内参进行精准标定,并需要较为准确的机器人运动学模型和外部传感器的辅助。
优点在于对图像质量和环境光照变化的要求较低,较为稳健;缺点在于需要事先进行相机内参标定,对运动学模型的精准度要求较高。
基于外观的特征匹配方法适用范围广泛,但对图像质量和环境光照的要求较高;基于内参标定的方法相对稳健,但需要事先对相机内参进行精准标定。
在实际应用中,可以根据具体需求和实际情况选择适合的手眼标定方法,以实现机器人系统的准确操作。
LOD-Sprite Technique for Accelerated Terrain RenderingBaoquan Chen SUNY at Stony Brook J.Edward Swan IINaval Research LabEddy KuoNaval Research LabArie KaufmanSUNY at Stony BrookAbstractWe present a new rendering technique,termed LOD-sprite render-ing,which uses a combination of a level-of-detail(LOD)represen-tation of the scene together with reusing image sprites(previously rendered images).Our primary application is accelerating terrain rendering.The LOD-sprite technique renders an initial frame using a high-resolution model of the scene geometry.It renders subse-quent frames with a much lower-resolution model of the scene ge-ometry and texture-maps each polygon with the image sprite from the initial high-resolution frame.As it renders these subsequent frames the technique measures the error associated with the diver-gence of the view position from the position where the initial frame was rendered.Once this error exceeds a user-defined threshold, the technique re-renders the scene from the high-resolution model. We have efficiently implemented the LOD-sprite technique with texture-mapping graphics hardware.Although to date we have only applied LOD-sprite to terrain rendering,it could easily be extended to other applications.We feel LOD-sprite holds particular promise for real-time rendering systems.Keywords:Image-Based Modeling and Rendering,Texture Map-ping,Acceleration Techniques,Multi-Resolution,Level of Detail, Terrain Rendering,Virtual Reality,Virtual Environments.1INTRODUCTIONAs scene geometry becomes complex(into the millions of poly-gons),even the most advanced rendering hardware cannot provide interactive rates.Current satellite imaging technology provides ter-rain datasets which are well beyond this level of complexity.This presents two problems for real-time systems:1)the provided frame rate may be insufficient,and2)the system latency may be too high. Much of real-time computer graphics has been dedicated tofinding ways to trade off image quality for frame rate and/or system latency. Many recent efforts fall into two general categories:Level-of-detail(LOD):These techniques model the objects in the scene at different levels of detail.They select a particular LOD for each object based on various considerations such as the rendering cost and perceptual contribution to thefinal image.Image-based modeling and rendering(IBMR):These tech-niques model(some of the)objects in the scene as image sprites. These sprites only require2D transformations for most rendering operations,which,depending on the object,can result in substantial time savings.However,the2D transformations eventually result in distortions which require the underlying objects to be re-rendered from their full3D geometry.IBMR techniques typically organize 1Department of Computer Science,State University of New York at Stony Brook,Stony Brook,NY11794-4400,USA.Email: baoquan ari@2Virtual Reality Laboratory,Naval Research Laboratory Code5580, 4555Overlook Ave SW,Washington,DC,20375-5320,USA.Email: swan@,ekuo@Figure1:Traditional hybrid LOD and IBMR techniques render each objecteither as a sprite or at a certain level of detail. Figure2:The LOD-sprite technique renders each object as both a sprite and as a geometric object at a certain level of detail.the scene into separate non-occluding layers,where each layer con-sists of an object or a small group of related objects.They render each layer separately,and then alpha-channel composite them. 
Some hybrid techniques use both multiple LODs and IBMR meth-ods[16,27,22].A general pipeline of these techniques is shown in Figure1.Each3D object isfirst subjected to a culling operation. Then,depending upon user-supplied quality parameters,the system either renders the object at a particular LOD,or it reuses a cached sprite of the object.This paper presents the LOD-sprite rendering technique.As shown in Figure2,the technique is similar to previous hybrid tech-niques in that it utilizes view frustum culling and a user-supplied quality metric.Objects are also modeled as both LOD models and sprites.However,the LOD-sprite technique differs in that the2D sprite is coupled with the LOD representation;the renderer utilizes both the LOD and the sprite as the inputs to create the output im-age.The LOD-sprite techniquefirst renders a frame from high-resolution3D scene geometry,and then caches this frame as an image sprite.It renders subsequent frames by texture-mapping the cached image sprite onto a lower-resolution representation of the scene geometry.This continues until an image quality metric re-quires again rendering the scene from the high-resolution geometry.We have developed the LOD-sprite technique as part of the ren-dering engine for a real-time,three-dimensional battlefield visual-ization system[9].For this application the terrain database con-sumes the vast majority of the rendering resources,and therefore in this paper our focus is on terrain rendering.However,LOD-sprite is a general-purpose rendering technique and could certainly be ap-plied to many different types of scene geometry.The primary advantage of LOD-sprite over previous techniques is that when the sprite is transformed,if the2D transformation is within the context of an underlying3D structure(even if only com-posed of a few polygons),a much larger transformation can occur before image distortions require re-rendering the sprite from the full 3D scene geometry.Thus,the LOD-sprite technique can reuse im-age sprites for a larger number of frames than previous techniques. In addition,because the sprite preserves object details,a lower LOD model can be used for the same image quality.These properties al-low interactive frame rates for larger scene databases.The next section of this paper places LOD-sprite in the context of previous work.Section3describes the LOD-sprite technique itself. Section4presents the results of our implementation of LOD-sprite. 2RELATED WORKThe previous work which is most closely related to LOD-sprite can be classified into image-based modeling and rendering techniques and level-of-detail techniques.Wefirst revisit and classify previous IBMR techniques while also considering LOD techniques,and then focus on some hybrid techniques.2.1Image-Based Modeling and Rendering Previous work in image-based modeling and rendering falls primar-ily into three categories:(1)The scene is modeled by2D image sprites;no3D ge-ometry is used.Many previous techniques model the3D scene by registering a number of static images[2,18,19,26].These techniques are particularly well-suited for applications where pho-tographs are easy to take but modeling the scene would be difficult (outdoor settings,for example).Novel views of the scene are cre-ated by2D transforming and interpolating between images[3,18]. By adding depth[17]or even layered depth[23]to the sprites,more realistic navigation,which includes limited parallax,is possible. 
Another category samples the full plenoptic function,resulting in 3D,4D or even5D image sprites[13,10],which allow the most unrestricted navigation of this class of techniques.However,all of these techniques lack the full3D structure of the scene,and so restrict navigation to at least some degree.(2)The scene is modeled using either3D geometry or 2D image sprites.Another set of previous techniques model each object with either3D geometry or a2D image sprite,based on object contribution to thefinal image and/or viewing direc-tion[5,16,20,21,22,27].The LOD-sprite technique differs from these techniques in that it integrates both3D geometry and2D im-age sprites to model and render objects.(3)The scene is modeled using a combination of3D ge-ometry and2D image sprites.There are a group of tech-niques which add very simple3D geometry to a single2D image [6,7,12,24],which guides the subsequent image warping.De-bevec et al.[7]construct a3D model from reference images,while Sillion et al.[24]and Darsa et al.[6]use a textured depth mesh which is constructed and simplified from depth information.In general,using a depth mesh with projective texture mapping gives better image quality than using depth image warping[17],because the mesh stretches to cover regions where no pixel information is available,and thus no holes appear.The main advantage of adding 3D scene geometry to the image is that it allows the warping to ap-proximate parallax,and therefore increases the range of novel views which are possible before image distortion becomes too severe.Our LOD-sprite is most closely related to the techniques of Co-hen et al.[4]and Soucy et al.[25].Both create a texture map from a3D object represented at a high geometric resolution,and then subsequently represent the object at a much lower geometric reso-lution,but apply the previously created texture map to the geometry.However,the LOD-sprite technique generates texture maps(image sprites)from images rendered at run-time,while these techniques generate the texture map from the object itself.2.2Level-of-DetailThere is a large body of previous work in level-of-detail(LOD) techniques,which is not reviewed here.The general LOD-sprite technique requires that geometric objects be represented at various levels of detail,but it does not require any particular LOD represen-tation or technique(although a specific implementation of LOD-sprite will need to access the underlying LOD data structures).This paper does not cover how to create LOD representations of a terrain—there exist numerous multiresolution representations for heightfields.Lindstrom et al.[14]and Hoppe[11]represent the most recent view-dependent terrain LOD methods,and Luebke and Erikson[15]can also be adapted for terrain datasets.In this paper we adopt the technique of Lindstrom et al.[14].This algorithm organizes the terrain mesh into a hierarchical quadtree structure.To decide which quadrant level to use,the algorithm computes a screen space error for each vertex,and compares it to a pre-defined error threshold.This error measures the pixel difference between the full-resolution and lower-resolution representations of the quadrant. 
2.3Accelerated Virtual Environment Navigation As stated above,many LOD and IBMR techniques have been ap-plied to the problem of accelerating virtual environment naviga-tion.Of these,LOD-sprite is most closely related to the techniques of Maciel and Shirley[16],Shade et al.[22],Schaufler and Stuer-zlinger[21],and Aliaga[1].All of these papers present similar hybrid LOD/IBMR techniques.They create a hierarchy of image sprites based on a space partition of the scene geometry.In sub-sequent frames,for each node the techniques either texture map the node sprite onto a polygon,or re-render the node’s3D geom-etry if an error metric is above a threshold.Each reused image sprite means an entire subtree of3D geometry need not be ren-dered,which yields substantial speedup for navigating large virtual environments.The main limitation of these techniques is that creat-ing a balanced space partition is not a quick operation,and it must be updated if objects move.Also,to avoid gaps between neighbor-ing partitions,they either maintain a fairly large amount of overlap between partitions[22],or they morph geometries to guarantee a smooth transition between geometry and sprite[1];both operations add storage and computational complexity.LOD-sprite differs from these techniques in that they interpolate the image sprite on a single 2D polygon,while LOD-sprite interpolates the image sprite on a coarse representation of the3D scene geometry.3THE LOD-SPRITE TECHNIQUE3.1AlgorithmThe general idea of the LOD-sprite technique is to cache the ren-dered view of a high-resolution representation of the dataset.We refer to this image as a sprite,and the frame where the sprite is created as a keyframe.LOD-sprite renders subsequent frames,re-ferred to as novel views,at a lower resolution,but applies the sprite as a texture map.LOD-sprite measures the error caused by the di-vergence of the viewpoint from the keyframe as each novel view is rendered.When this error exceeds a threshold,LOD-sprite renders a new keyframe.Pseudocode for the LOD-sprite algorithm is given in Figure3. Lines1and5generate a sprite image from high-resolution scene geometry.This is necessary whenever the viewer jumps to a new viewpoint position(line1),and when LOD-sprite generates a new1render sprite image from high-resolutionscene geometry at viewpoint vp2for each novel viewpoint vp3let error=ErrorMetric(vp,sprite)4if error threshold then5render sprite image from high-resolutionscene geometry at viewpoint vp6let polys=set of low-resolution scenegeometry polygons7for each poly8if WasVisible(poly,sprite)then9render poly,map with sprite10else11render poly,map with original texture mapFigure3:Pseudocode for the LOD-Sprite algorithm. 
keyframe(line5).At line2the algorithm processes each novel viewpoint.Lines3and4measure the error associated with how far the current viewpoint diverges from viewpoint at the time when the sprite was rendered;the procedure ErrorMetric is described in Section3.2.At line6the algorithm prepares to render the frame at the current viewpoint by gathering a set of polygons from a low-resolution version of the scene geometry.Line7considers each polygon.Line8determines,for each low-resolution poly-gon,whether the polygon was visible when the sprite image was taken.This routine,WasVisible(described in Section3.3),deter-mines whether the polygon is texture mapped with the sprite texture (line9)or the original texture map(line11).The sprite data structure holds both the sprite texture map and the keyframe viewing parameters;LOD-sprite uses both to map poly-gons with the sprite texture in line9.Creating a new sprite(lines 1and5)requires copying the frame buffer into texture memory, which is efficiently implemented with the OpenGL glCopyTexIm-age2D function.Texture mapping a keyframe could be achieved using projective texture mapping:a light placed at the keyframe camera position projects the sprite image onto the scene geometry.However,our implementation of LOD-sprite does not use projective texture map-ping,because the current OpenGL implementation does not test for polygon visibility.Occluded polygons in the keyframe are mapped with wrong textures when they become visible.Therefore,our im-plementation detects polygon visibility on its own(line8),and ap-plies a different texture map depending on each polygons’visibility (lines9and11).3.2Error MetricLOD-sprite decides when to render a new keyframe based on an error metric which is similar to that described by Shade et al.[22]. Figure4gives the technique,which is drawn in2D for clarity.Con-sider rendering the full-resolution dataset from viewpoint position .In this case the line segments and are rendered(in 3D these are polygons).From this view,the ray passing through vertex intersects the edge at point.After rendering the full-resolution dataset,the image from is stored as a texture map. Now consider rendering the scene from the novel viewpoint,us-ing the low-resolution representation of the dataset.In this case the line segment is rendered,and texture mapped with the sprite rendered from.Note that this projects the vertex to the po-sition on.From this projection makes no visible differ-ence.However,from,vertex is shifted by the angle from its true location.This angle can be converted to a pixel distance on the image plane of view,which is our measure of the error of rendering point from view:(1) where is the view angle of a single pixel(e.g.,thefield-of-view over the screen resolution),and is a user-specified error threshold. 
As long as Equation1is true,we render using the sprite from the most recent keyframe(e.g.,line5in Figure3is skipped).Once Equation1becomes false,it is again necessary to render from the full-resolution dataset(e.g.,line5in Figure3is executed).vFigure4:Calculating the error metric.Theoretically,we should evaluate Equation1for all points in the high-resolution dataset for each novel view.Clearly this is impracti-cal.Instead,our implementation calculates for the central vertex of each low-resolution quadtree quadrant.The resolution of each quadrant is determined by the number of levels we traverse down into the quadtree that is created by our LOD algorithm[14].We calculate the central vertex by averaging the four corner vertices of the quadrant.To calculate,we have to know the point.We cal-culate by interesting the vector with the plane spanned by the estimated central vertex and two original vertices of the quad-rant.Once we know,we calculate from the dot product of the vectors and.We next calculate the average sum of squares of the error for all evaluated quadrants and compare this with:(2)where is the number of low-resolution quadrants.When this test fails,line5in Figure3is executed.3.3Visibility ChangesAs the viewpoint changes,polygons which were originally oc-cluded or culled by the view frustum may become visible.Figure5 illustrates this problem.Let the two objects represent mountains. The light shaded region of the back mountain indicates occluded polygons in the keyframe,while the heavy shaded regions in both mountains show polygons culled by the view frustum.If these re-gions become visible in a novel view,there will be no sprite texture to map on them.Our solution is to map them with the same texture map we use to generate the keyframe.We classify the visibility of each polygon with a single pass over all vertices of the low-resolution geometry.This loop is part of the process of generating a new keyframe.For novel views,the visibil-ity of each polygon to the sprite is alreadyflagged.This visibilityKeyframe viewportFigure8:The camera path for Figures9–14. 
resolution run requires about2orders of magnitude more triangles.The semi-log plot shows that both triangle counts have a similarvariation as the animation progresses.Figure10shows how the LOD-sprite error(Section3.2)changesas each frame is rendered.The error always starts from zero for akeyframe.As more novel views are interpolated from the keyframe,the error increases.When the error exceeds pixels,we calculateanother keyframe from the high-resolution scene geometry,whichagain drops the error to zero.Figure11shows the amount of time required to render each frame.The high-resolution time runs along the top of the graph,at an average of526milliseconds per frame.The low-resolutiontime runs along the bottom,at an average of22milliseconds perframe.The rendering time for the LOD-sprite frames follows thelow-resolution times,except when a new keyframe is rendered.Forthis animation the system generated16keyframes,at an averagetime of680milliseconds per keyframe.The great majority of theLOD-sprite frames are shown near the bottom of the graph;thesetook an average of36milliseconds to render.The overall averagefor LOD-sprite was53milliseconds per frame.Figure12shows the fraction of the total number of renderedframes which are keyframes.This is plotted against the total num-ber of frames rendered for the path shown in Figure8.As expected,as more frames are rendered for afixed path,the distance movedbetween each frame decreases,and so there is more coherence be-tween successive frames.Thisfigure shows how our system takesadvantage of this increasing coherence by rendering a smaller frac-tion of keyframes.Thisfigure also illustrates a useful property ofthe LOD-sprite technique for real-time systems:as the frame up-date rate increases,the LOD-sprite technique becomes even moreefficient in terms of reusing keyframes.Figure13also shows the fraction of the total number of renderedframes which are keyframes,but this time plots the fraction against the error threshold in pixels.As expected,a larger error thresholdmeans fewer keyframes need to be rendered.However,the shapeof this curve indicates a decreasing performance benefit as the er-ror threshold exceeds about pixel.For a given dataset and a path which is representative of the types of maneuvers the user isexpected to make,this type of analysis can help determine the besterror threshold versus performance tradeoff.The LOD-sprite technique results in a substantial speedup overrendering a full-resolution dataset.Rendering600frames of thefull-resolution dataset along the path in Figure8takes316seconds.Rendering the same600frames with the LOD-sprite technique,us-ing an error threshold of pixel,takes32seconds—a speedup050100150200250300350400450500550600Frame Number1000100001000001000000Number of TrianglesHigh−ResolutionLOD−spriteFigure9:The number of triangles as a function of the frame number on a semi-log plot.(600frames;path from Figure8.)Frame Number0.20.40.60.811.21.4Error (pixels)Figure10:The error in pixels as a function of the frame num-ber for the LOD-sprite run.(600frames;path from Figure8.)of9.9.Figure14shows how the speedup varies as a function of the error threshold.5CONCLUSIONS AND FUTURE WORKThis paper has described the LOD-sprite rendering technique,and our application of the technique to accelerating terrain rendering. 
5 CONCLUSIONS AND FUTURE WORK

This paper has described the LOD-sprite rendering technique and our application of the technique to accelerating terrain rendering. The technique is a combination of two rich directions in accelerated rendering for virtual environments: multiple level-of-detail (LOD) techniques, and image-based modeling and rendering (IBMR) techniques. It is a general-purpose rendering technique that could accelerate rendering for any application, and it could be built upon any LOD decomposition technique. It improves the image quality of LOD techniques by preserving surface complexity, and it improves the efficiency of IBMR techniques by increasing the range of novel views that are possible. The LOD-sprite technique is particularly well-suited for real-time system architectures that decompose the scene into coherent layers.

Figure 11: The rendering time in milliseconds as a function of frame number. (600 frames; path from Figure 8.)

Figure 12: The fraction of keyframes as a function of the total number of frames rendered. (Path from Figure 8.)

Our primary applied thrust with this work is to augment the rendering engine of a real-time, three-dimensional battlefield visualization system [9]. Because this system operates in real time, our most important item of future work is to address the variable latency caused by rendering the keyframes. One optimization is to use a dual-thread implementation, where one thread renders the keyframe while another renders each LOD-sprite frame. Another optimization is to render the keyframe in advance by predicting where the viewpoint will be when it is next time to render a keyframe. We can predict this by extrapolating from the past several viewpoint locations; thus, we can begin rendering a new keyframe immediately after the previous keyframe has been rendered. If the system makes a bad prediction (perhaps the user makes a sudden, high-speed maneuver), two solutions are possible: 1) we could use the previous keyframe as the sprite for additional frames of LOD-sprite rendering, with the penalty that succeeding frames will have errors beyond the normal threshold; or 2) if the predicted viewpoint is closer to the current viewpoint than the current viewpoint is to the previous keyframe, we can use the predicted viewpoint as the keyframe instead.

Figure 13: The fraction of keyframes as a function of error threshold. (600 frames; path from Figure 8.)

Figure 14: Speedup as a function of error threshold. (600 frames; path from Figure 8.)

We are also considering implementing a cache of keyframes, which would accelerate the common virtual-environment navigation behavior of moving back and forth within a particular viewing region. Issues include how many previous keyframes to cache and which cache replacement policy to choose.

The continuous LOD algorithm [14] in our implementation is well-suited to our application of real-time terrain rendering. However, the low-resolution mesh generated by this technique does not preserve silhouette edges, which, as demonstrated in Figures 6 and 7, forces us to use the original texture map along the silhouette. Another problem with many continuous-LOD techniques (including [14]) is the artifact caused by sudden resolution changes, which results in a continuous popping effect during real-time flythroughs. The solution to this artifact is geomorphing, where the geometry is slowly changed over several frames.
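The geomorphing idea can be sketched in a few lines. This is a generic illustration, not the scheme of [14] or [15]: each vertex is linearly blended from its coarse position to its refined position over a fixed number of frames, so the resolution change is spread out instead of popping. The vertex arrays and the 10-frame default are illustrative assumptions.

    import numpy as np

    def geomorph(coarse_verts, fine_verts, frames_since_switch, morph_frames=10):
        """Blend vertex positions from the coarse LOD toward the refined LOD.

        coarse_verts and fine_verts are Nx3 arrays holding matching vertices of
        the two resolutions; frames_since_switch counts frames since the
        resolution change.  Returns the positions to render this frame.
        """
        t = min(frames_since_switch / float(morph_frames), 1.0)  # 0 = coarse, 1 = fine
        return (1.0 - t) * np.asarray(coarse_verts, dtype=float) \
             + t * np.asarray(fine_verts, dtype=float)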
To address both of these issues we are currently integrating the LOD technique of Luebke and Erikson [15], which preserves silhouette edges and provides a nice framework for evaluating geomorphing techniques.

Finally, an important limiting factor for the performance of the LOD-sprite technique, as well as other image-based modeling and rendering techniques (e.g., [22]), is that OpenGL requires texture maps to have dimensions which are powers of 2. Thus, many texels in our texture maps are actually unused. The LOD-sprite technique could be implemented more efficiently with graphics hardware that did not impose this constraint.

ACKNOWLEDGMENTS

We acknowledge the valuable contributions of Bala Krishna Nakshatrala for bug fixes and various improvements to the code, for re-generating the animations, and for help in preparing the graphs. This work was supported by Office of Naval Research grants N000149710402 and N0001499WR20011, and the National Science Foundation grant MIP-9527694. We acknowledge Larry Rosenblum for advice and direction during this project.

References

[1] D. G. Aliaga. Visualization of complex models using dynamic texture-based simplification. Proceedings IEEE Visualization '96, pages 101–106, Oct. 1996.

[2] S. E. Chen. QuickTime VR - an image-based approach to virtual environment navigation. Computer Graphics (Proc. SIGGRAPH '95), pages 29–38, 1995.

[3] S. E. Chen and L. Williams. View interpolation for image synthesis. Computer Graphics (Proc. SIGGRAPH '93), pages 279–288, 1993.

[4] J. Cohen, M. Olano, and D. Manocha. Appearance-preserving simplification. Computer Graphics (Proc. SIGGRAPH '98), pages 115–122, July 1998.

[5] D. Cohen-Or, E. Rich, U. Lerner, and V. Shenkar. A real-time photo-realistic visual flythrough. IEEE Transactions on Visualization and Computer Graphics, 2(3):255–264, Sept. 1996.

[6] L. Darsa, B. C. Silva, and A. Varshney. Navigating static environments using image-space simplification and morphing. Symposium on Interactive 3D Graphics, pages 25–34, Apr. 1997.

[7] P. E. Debevec, C. J. Taylor, and J. Malik. Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach. Computer Graphics (Proc. SIGGRAPH '96), pages 11–20, Aug. 1996.

[8] P. E. Debevec, Y. Yu, and G. Borshukov. Efficient view-dependent image-based rendering with projective texture-mapping. Rendering Techniques '98, pages 105–116, 1998.

[9] J. Durbin, J. E. Swan II, B. Colbert, J. Crowe, R. King, T. King, C. Scannell, Z. Wartell, and T. Welsh. Battlefield visualization on the responsive workbench. Proceedings IEEE Visualization '98, pages 463–466, Oct. 1998.

[10] S. J. Gortler, R. Grzeszczuk, R. Szeliski, and M. F. Cohen. The lumigraph. Computer Graphics (Proc. SIGGRAPH '96), pages 43–54, 1996.

[11] H. Hoppe. Smooth view-dependent level-of-detail control and its application to terrain rendering. Proceedings IEEE Visualization '98, pages 35–42, 1998.

[12] Y. Horry, K.-i. Anjyo, and K. Arai. Tour into the picture: Using a spidery mesh interface to make animation from a single image. Computer Graphics (Proc. SIGGRAPH '97), pages 225–232, Aug. 1997.

[13] M. Levoy and P. Hanrahan. Light field rendering. Computer Graphics (Proc. SIGGRAPH '96), pages 31–42, 1996.
[14] P. Lindstrom, D. Koller, W. Ribarsky, L. F. Hodges, N. Faust, and G. Turner. Real-time, continuous level of detail rendering of height fields. Computer Graphics (Proc. SIGGRAPH '96), pages 109–118, Aug. 1996.

[15] D. Luebke and C. Erikson. View-dependent simplification of arbitrary polygonal environments. Computer Graphics (Proc. SIGGRAPH '97), pages 199–208, Aug. 1997.

[16] P. W. C. Maciel and P. Shirley. Visual navigation of large environments using textured clusters. Symposium on Interactive 3D Graphics, pages 95–102, Apr. 1995.

[17] W. R. Mark, L. McMillan, and G. Bishop. Post-rendering 3D warping. Symposium on Interactive 3D Graphics, pages 7–16, Apr. 1997.

[18] L. McMillan and G. Bishop. Plenoptic modeling: An image-based rendering system. Computer Graphics (Proc. SIGGRAPH '95), pages 39–46, 1995.

[19] P. Rademacher and G. Bishop. Multiple-center-of-projection images. Computer Graphics (Proc. SIGGRAPH '98), pages 199–206, July 1998.

[20] M. Regan and R. Post. Priority rendering with a virtual reality address recalculation pipeline. Computer Graphics (Proc. SIGGRAPH '94), pages 155–162, July 1994.

[21] G. Schaufler and W. Stuerzlinger. A three dimensional image cache for virtual reality. Computer Graphics Forum (Proc. of Eurographics '96), 15(3):227–235, Aug. 1996.

[22] J. Shade, D. Lischinski, D. Salesin, T. DeRose, and J. Snyder. Hierarchical image caching for accelerated walkthroughs of complex environments. Computer Graphics (Proc. SIGGRAPH '96), pages 75–82, Aug. 1996.

[23] J. W. Shade, S. J. Gortler, L. He, and R. Szeliski. Layered depth images. Computer Graphics (Proc. SIGGRAPH '98), pages 231–242, July 1998.

[24] F. Sillion, G. Drettakis, and B. Bodelet. Efficient impostor manipulation for real-time visualization of urban scenery. Computer Graphics Forum (Proc. of Eurographics '97), 16(3):207–218, Sept. 1997.

[25] M. Soucy, G. Godin, and M. Rioux. A texture-mapping approach for the compression of colored 3D triangulations. The Visual Computer, 12(10):503–514, 1996.

[26] R. Szeliski. Video mosaics for virtual environments. IEEE Computer Graphics and Applications, 16(2):22–30, Mar. 1996.

[27] J. Torborg and J. Kajiya. Talisman: Commodity real-time 3D graphics for the PC. Computer Graphics (Proc. SIGGRAPH '96), pages 353–364, Aug. 1996.