Retargeting Vector Animation for Small Displays

Vidya Setlur (Northwestern University and Nokia Research Center), Xuejin Chen (USTC and Microsoft Research Asia), Yingqing Xu (Microsoft Research Asia), Bruce Gooch (Northwestern University)

Figure 1: Preserving the spatial detail in important objects from a source animation to a smaller sized animation.

Abstract

We present a method that preserves the recognizability of key object interactions in a vector animation. The method allows an artist to author an animation once, and then output it to any display device. We specifically target mobile devices with small screen sizes. In order to adapt an animation, the author specifies an importance value for objects in the animation. The algorithm then identifies and categorizes the vector graphics objects that comprise the animation, leveraging the implicit relationship between the Extensible Markup Language (XML) and Scalable Vector Graphics (SVG). Based on importance, the animation can then be automatically retargeted for any display using artistically motivated resizing and grouping algorithms that budget size and spatial detail for each object.

CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation—Display Algorithms

Keywords: perception, animation, small displays, WWW applications, information visualization, non-photorealistic rendering, vector graphics, XML

1 Introduction

Advances in mobile devices and wireless telecommunication provide users with ubiquitous access to online information and services. However, user access and interaction are still quite restricted with regard to the display of imagery such as animations, diagrams, maps, and charts. There exists a growing need for the effective adaptation of imagery for small size displays. This work presents an algorithm for retargeting vector-based animations while maintaining the recognizability of object interaction.

Effective display consists of understanding the communication goal of some form of imagery and then fitting that imagery to the display device in a manner that aids this goal. In most animation, the story is communicated to the viewer via the interaction of a few key objects. Remaining objects in the scene provide a context for this interaction, and are referred to as contextual objects. In order to achieve the communication goal of an animation on a mobile device, key object interactions must be displayed at both sufficient size and spatial detail for easy recognition. The contextual objects in the animation are less important. The premise of our method is that when the key objects are known, their features can be exaggerated in order to render their interaction more obvious.

Objects in vector graphics images and animation are typically uniformly scaled regardless of their importance. Therefore, we introduce a perceptually motivated algorithm that exploits the semantics of vector graphics data to guide the retargeting process. In addition, the algorithm redistributes spatial detail among the objects in the scaled animation based on importance.

1.1 Contributions

This work provides tools for artists to intuitively author and manipulate machine-readable forms of animation. We also demonstrate that these tools are useful for encoding multiple levels of detail in a single image to enhance the utility of mobile devices as information displays. The intellectual contribution of this work is the idea of importance tags that can be leveraged to make imagery dynamically adapt to a user's needs.

Copyright is held by the HIT Lab NZ, University of Canterbury. MUM 2005, Christchurch, New Zealand. ISBN 0-473-10658-2
2 Related Work

Prior work in human perception and computer graphics has established that it often becomes necessary to sacrifice detail in order to meet the computational demands of complex scenes [O'Sullivan and Dingliana 2001; Reitsma and Pollard 2003]. Level of detail (LOD) techniques for real-time rendering and other perception-based computer graphics problems have become necessary for meeting real-time demands for most scenes of significant complexity and for adaptively modulating levels of detail in different parts of a simulation process. Visual artifacts that occur in areas with a lower amount of detail may go unnoticed by an average viewer if these areas are perceptually less important for a given visual task. For example, the work of Chenney and Forsyth involved culling non-visible parts of the scene [Chenney and Forsyth 1997]. Carlson and Hodgins explored techniques for reducing the computational cost of simulating groups of creatures by using less accurate simulations of individuals when they are less important to the viewer or to the action in the virtual world [Carlson and Hodgins 1997]. Similarly, there has been work on reducing time complexity in geometrical models based on lower importance [Reddy 1997; Funkhouser and Séquin 1993]. These approaches allow speed-accuracy trade-offs to be optimized by exploiting a viewer's inability to distinguish simplifications in less important parts of an image or animation. All these techniques demonstrate the idea that adaptive detail modulation can be more effective than uniformly reducing the complexity of the entire scene. Our work is inspired by this idea. While previous work deals with speed and time constraints for reducing complexity, we address the problem of adaptation based on a size constraint. Instead of uniformly scaling vector graphics animation, we apply adaptive detail modulation to emphasize important objects.

Existing work on intelligent adaptation of images and video for smaller displays focuses on maintaining the recognizability of more important objects in the visual scene. Suh et al. proposed a technique for automatic image thumbnail cropping based on a visual attention model to detect interesting areas in the image [Suh et al. 2003]. This method, however, crops only the most important region and does not retain the entire context of the visual scenario from the original image in the smaller sized image. The absence of contextual information may not convey the entire visual story to the viewer. This may not be an issue for images containing a single subject, where the surrounding context is less influential in understanding the content of the image. On the other hand, for images containing multiple objects, or for images where the entire visual context is necessary for performing visual tasks, thumbnail cropping may not be suitable. Chen et al. introduced an image adaptation technique that delivers the most important region to mobile devices [Chen et al. 2003]. The user can scroll between different pages of an image to view different important regions. Work by Wang et al. uses a sampling-based dynamic attention model to obtain and maintain the user's attention on video streams [Wang et al. 2004]. The amount of visual data presented to the user is adjusted by uniformly zooming in and out of the visual scene based on user interest. Related to this work, Fan et al. introduced an approach that allows users to explicitly zoom into video frames while browsing on small displays [Fan et al. 2003; Liu et al. 2003]. Computational attention models tend to perform poorly on animation, because they depend strongly on image luminance and contrast, and often do not identify features important for a visual task [Ferwerda 2003]. Instead, our animation retargeting method allows users to assign object importance at the authoring level. Further, our system displays all content on the screen at once, and allows differential zooming to emphasize and de-emphasize information.

Rist and Brandmeier have explored automated adaptation mechanisms for transforming images to serve mobile devices using downsampling and color reduction [Rist and Brandmeier 2002]. Martin has developed a system for adaptive delivery of 3D models in heterogeneous networked environments, enabling access by clients with diverse graphics capabilities [Martin 2000]. Marriott et al. address the client-side adaptation of documents to various viewing conditions, such as varying screen sizes, style preferences, and different device capabilities, by including one-way constraints in SVG [Marriott et al. 2002]. These constraints mainly manipulate document layout specifications by declaratively specifying the desired layout of the web document. There has also been research on map generalization [Agrawala and Stolte 2001; Neuffer et al. 2004; Visvalingam 1999]. Cartographic generalization is concerned with deriving small-scale, less detailed maps from larger-scale maps. While that work is mainly concerned with automatic generalization techniques for static map images, we introduce a general framework that works for both static and dynamic imagery.
Figure 2: Flowchart of the animation retargeting process. The goal of the algorithm is to increase the recognizability of the key objects, while simplifying contextual objects in order to maintain the net spatial detail of the retargeted animation. Here, the boat is exaggerated, while the tree is simplified.

3 The Retargeting Process

This work presents an algorithm for retargeting vector-based animations while maintaining the recognizability of object interactions. Our retargeting algorithm takes a target size and a vector animation or image as input. The XML format of the vector graphics structure is parsed to identify objects and their assigned importance. The importance parameter is an SVG tag set by the animation author, and is constrained to be ∈ [0, 1]. The animation is then resized using traditional graphics methods, resulting in uniform scaling of all objects regardless of importance. We use the term spatial detail to measure the feature density of objects. For example, a white sphere has less spatial detail than a soccer ball with the same dimensions. The overall spatial detail in the scene is redistributed by exaggerating more important key objects and simplifying less important contextual objects. The amount of exaggeration or simplification is based on an object's importance. Figure 2 illustrates the outline of the process. Section 6 provides a more detailed description of the algorithm.
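To make the flow in Figure 2 concrete, the Python sketch below mirrors the two stages described above: a uniform resize, followed by an importance-driven adjustment of per-object scale. The SceneObject class and the linear budgeting rule are our own illustrative assumptions, not the paper's method; the actual redistribution of size and spatial detail is developed in Sections 5 and 6.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    importance: float   # in [0, 1], set by the animation author
    scale: float = 1.0

def retarget(objects, source_size, target_size):
    """Sketch of the two-stage process: uniform scaling of all objects,
    then exaggeration/simplification driven by importance."""
    base = target_size / source_size          # traditional, importance-agnostic resize
    mean_imp = sum(o.importance for o in objects) / len(objects)
    for o in objects:
        # Exaggerate objects above mean importance and shrink those below it,
        # so the net budget stays roughly constant. This linear rule is
        # purely illustrative.
        o.scale = base * (1.0 + (o.importance - mean_imp))
    return objects

boat, tree = SceneObject("boat", 0.9), SceneObject("tree", 0.2)
print(retarget([boat, tree], source_size=640, target_size=160))
```

Running this on the boat/tree example of Figure 2 yields a boat scaled above the uniform factor and a tree scaled below it, which is the qualitative behavior the algorithm aims for.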
Our retargeting algorithm takes a target size and a vector animation or image as input. The XML format of the vector graphics structure is parsed to identify objects and their assigned importance. The importance parameter is an SVG tag set by the animation author, and is constrained to be ∈ [0, 1]. The animation is then resized using traditional graphics methods, resulting in uniform scaling of all objects regardless of importance. We use the term spatial detail to measure the feature density of objects. For example, a white sphere has less spatial detail than a soccer ball with the same dimensions. The overall spatial detail in the scene is redistributed by exaggerating more important key objects and simplifying less important contextual objects. The amount of exaggeration or simplification is based on an object's importance. Figure 2 illustrates the outline of the process, and Section 6 provides a more detailed description of the algorithm.

Figure 3: Illustration of the one-to-one correspondence between vector graphics and its underlying XML structure.

Figure 4: The relationship between spatial detail and the features present in an object. Here, the spatial detail of an object decreases as the number of its features is reduced. (Left) Spatial detail = 0.333754. (Center) Spatial detail = 0.258025. (Right) Spatial detail = 0.231496.

4 The Vector Graphics Format

Our system extends the Scalable Vector Graphics (SVG) format. SVG was developed as an open standard grammar for vector graphics. It is written in XML, and can easily be extended using XML tags. We use SVG structural tags to define the building blocks of our vector graphics data format. These tags include the <svg> element, which is the top-level description of the SVG document; a group element <g>, which is a container element that groups semantically related Bezier strokes into an object; the <path> element for rendering strokes as Bezier curves; and several kinds of <animate> elements that specify the motion of objects.

4.1 Directed Acyclic Tree Representation

The SVG format conceptually consists of visual components that are modeled as nodes and links. Elements are rendered in the order in which they appear in the SVG document. Each object in the data format can be thought of as a canvas on which paint is applied. If objects are grouped together with a <g> tag, they are first rendered as a separate group canvas, then composited onto the main canvas using the filters or alpha masks associated with the group. In other words, the SVG document can be viewed as a directed acyclic tree structure proceeding from the most abstract, coarsest shapes of the objects to the most refined details rendered on top of these abstract shapes. This property of SVG allows us to do a depth-first traversal of the nodes of the tree and manipulate the detail of any object by altering that object's structural definitions. We observe that this framework is similar to several perceptually guided model and mesh simplification techniques [Floriani et al. 1997; Williams et al. 2003; Bolin and Meyer 1998]. SVG also tags objects throughout an animation sequence, alleviating the issue of video segmentation: the motion of objects can be tracked through all frames of an animation by using <animate> tags. Figure 3 shows a fragment of the data format used for two objects in an animation.

4.2 Assigning importance tags to SVG objects

In order to redistribute spatial detail among objects, we sort them based on importance. We provide the infrastructure for artists to assign importance values to objects in an animation at the authoring level. Annotating every object in a scene with importance tags might become cumbersome as the number of objects in the scene increases. However, SVG has a number of open-source GUI authoring tools [Ink n.d.; Sod n.d.], and the importance annotation functionality is incorporated as a plug-in to the GUI. This allows users to mouse-click and annotate importance values more easily. Importance values are tagged per scene by adding them as attributes to the objects, and are propagated through the SVG data structure. The only constraint is that the importance value is ∈ [0, 1], with 0 indicating most simplified and 1 indicating least simplified. The process is analogous to using the RGB boxes in Adobe Photoshop [Ado n.d.] to set a color, and then using sliders to fine-tune the importance values. Once the importance tags are defined, the rest of the algorithm is completely automatic. The importance tags are hence defined at the authoring level and do not change with display size.
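To make this concrete, the following is a minimal sketch (ours, not the authors' implementation) of parsing an importance-annotated SVG fragment and walking its group/path tree depth-first. The `importance` attribute name and the sample geometry are illustrative assumptions; the paper does not give the exact tag syntax.

```python
# A minimal sketch (not the authors' code) of parsing an SVG document and
# doing a depth-first traversal of its group/path tree. The "importance"
# attribute name is an assumption; the paper does not specify the syntax.
import xml.etree.ElementTree as ET

SVG_NS = "{http://www.w3.org/2000/svg}"

sample = """<svg xmlns="http://www.w3.org/2000/svg">
  <g id="boat" importance="0.8">
    <path d="M 10 10 C 20 20, 40 20, 50 10"/>
  </g>
  <g id="tree" importance="0.1">
    <path d="M 5 40 C 10 30, 20 30, 25 40"/>
  </g>
</svg>"""

def walk(node, depth=0):
    """Depth-first traversal from coarse group nodes down to refined paths."""
    tag = node.tag.replace(SVG_NS, "")
    if tag == "g":
        # Importance defaults to fully simplified (0) if the author set none.
        imp = float(node.get("importance", "0"))
        print("  " * depth + f"group {node.get('id')}: importance={imp}")
    elif tag == "path":
        print("  " * depth + f"path with {len(node.get('d', '').split())} tokens")
    for child in node:
        walk(child, depth + 1)

walk(ET.fromstring(sample))
```

Because detail is nested from coarse shapes down to refined strokes, the depth-first order visits exactly the nodes that the later generalization rules prune first.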
Figure 5: Illustrating artistic rules for distributing spatial detail: exaggeration, elimination, typification, and outline simplification.

5 Computing Spatial Detail

In order to perform differential zooming of objects in the scene, it is necessary to compute the spatial detail of each object and to be able to redistribute this quantity based on importance. Spatial detail indicates how rapidly luminance changes in the neighborhood of a given pixel. Figure 4 demonstrates that the features of the buildings' windows become simplified as the spatial detail decreases. The computational measure of this property is well studied, particularly for texture analysis and retrieval applications. We experimented with all the texture features described in [Amadasun and King 1989], including variance, but spatial detail, or the 'busyness' textural property, worked best for our purpose. The Neighborhood Gray-Tone Difference Matrix (NGTDM) is a perceptual description of spatial detail for an image in terms of changes in intensity and dynamic range per unit area. The NGTDM is a matrix in which the i-th entry is the summation of the differences between the luminance values of all pixels with gray tone i and the average luminance of their surrounding neighborhoods. We use the YUV color space to compute the gray value of each pixel, which is equal to (0.257 × R) + (0.504 × G) + (0.098 × B) + 16.

Let f(k, l) be the luminance of the pixel at (k, l). We then find the average luminance over a neighborhood centered at, but excluding, (k, l):

$$\bar{A}_i = \bar{A}(k,l) = \frac{1}{W-1}\left[\sum_{m=-d}^{d}\sum_{n=-d}^{d} f(k+m,\, l+n)\right], \quad (m,n)\neq(0,0)$$

where d specifies the neighborhood size and W = (2d + 1)². Then the i-th entry of the NGTDM is defined as

$$s(i) = \begin{cases} \sum \left|\, i - \bar{A}_i \,\right| & \text{over all pixels with gray tone } i, \text{ if } N_i \neq 0 \\ 0 & \text{otherwise} \end{cases}$$

where N_i is the set of all pixels having gray tone i (except in the peripheral regions of width d). We then use the NGTDM to obtain the following computational measure of spatial detail, after [Amadasun and King 1989]:

$$\text{Spatial detail} = \frac{\sum_{i=0}^{G_h} p_i\, s(i)}{\sum_{i=0}^{G_h}\sum_{j=0}^{G_h} \left|\, i\,p_i - j\,p_j \,\right|}, \quad p_i \neq 0,\ p_j \neq 0$$

where G_h is the highest gray-tone value present in the image. The numerator is a measure of the spatial rate of change in intensity, while the denominator is a summation of the magnitudes of differences between luminance values, each weighted by its probability of occurrence. For an N × N image, p_i is the probability of occurrence of gray-tone value i, given by p_i = N_i / n², where n = N − 2d and N_i is the number of pixels having gray tone i (except in the peripheral regions of width d). Spatial detail is computed for a given target display size. Also, if an object changes size or color during the course of the animation, its spatial detail is recomputed for the changed object.
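As a concrete reading of the formulas above, here is a minimal NumPy sketch of the NGTDM-based spatial detail ('busyness') measure, following Amadasun and King [1989]. It assumes a square image of integer gray tones; the function and variable names are ours, not the authors' implementation.

```python
# Sketch of the NGTDM "busyness" measure after Amadasun and King [1989],
# used here as the spatial-detail metric. Assumes a square image of integer
# gray tones (e.g. already converted via Y = 0.257R + 0.504G + 0.098B + 16);
# all names are ours, not the authors' implementation.
import numpy as np

def spatial_detail(img, d=1):
    G = int(img.max()) + 1                # gray tones 0..G_h
    n = img.shape[0] - 2 * d              # interior width (square image assumed)
    W = (2 * d + 1) ** 2
    s = np.zeros(G)                       # NGTDM entries s(i)
    N = np.zeros(G)                       # N_i: interior pixels with tone i
    for k in range(d, img.shape[0] - d):
        for l in range(d, img.shape[1] - d):
            nb = img[k - d:k + d + 1, l - d:l + d + 1]
            # Mean over the neighborhood, excluding the center pixel (k, l).
            A = (nb.sum() - img[k, l]) / (W - 1)
            i = int(img[k, l])
            s[i] += abs(i - A)
            N[i] += 1
    p = N / (n * n)                       # p_i: probability of gray tone i
    num = np.sum(p * s)                   # spatial rate of change in intensity
    den = 0.0
    for i in range(G):                    # magnitude of luminance differences
        for j in range(G):
            if p[i] > 0 and p[j] > 0:
                den += abs(i * p[i] - j * p[j])
    return num / den if den > 0 else 0.0
```

The loop form mirrors the definition directly; a production implementation would vectorize the neighborhood averaging with array slicing.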
6 Spatial Detail Distribution

The goal of the retargeting process is to preserve the recognizability of the interactions between key objects after the animation is resized. While vector graphics animations are resolution independent, key object interactions may not be recognizable at all sizes, due to artifacts introduced by uniform scaling. In order to automate the process of retargeting animations, we draw inspiration from a collection of perceptually based artistic techniques. These techniques facilitate differential resizing instead of uniform scaling. Artistic techniques often involve de-emphasizing context objects and increasing the detail in key objects [Kowalski et al. 2001; Johnston and Thomas 1995; Markosian et al. 2000; Lansdown and Schofield 1995; Winkenbach and Salesin 1994; Meier 1996]. Similarly, generalization is a process used by cartographers [Agrawala and Stolte 2001; Board 1978; MacEachren 1995] to reduce the scale and complexity of imagery while maintaining detail in important elements. The following rules are automatically applied to the object nodes in the SVG representation of the animation, based on the importance value of each object. The rules can be classified by whether they emphasize or de-emphasize objects.

The redistribution of spatial detail in the retargeted image is a simple budget allocation method based on the importance values of individual objects. The most important object is budgeted the largest amount of the total spatial detail available for the image, while the least important object is budgeted the least. The importance value of an object is constrained by definition to be ∈ [0, 1], and the importance values of all objects are then normalized. An object cannot be made more detailed than the original or more simplified than its basic outline. Additional constraints that may affect the redistribution of spatial detail in the scene are derived from display configurations and the bounds of human visual acuity. These constraints may be dictated by the physical limitations of display devices, such as the size and resolution of display monitors, the minimum size and width of objects that can be displayed, or the minimum spacing between objects that avoids symbol collision or overlap.

Figure 6: Objects are enhanced or generalized based on the spatial budget distribution: (a) original animation; (b) scaled animation; (c) object enhancement; (d) object generalization. Here, the boat is enlarged and the tree detail is simplified to satisfy the spatial budget constraint. However, even though the spatial budget constraint requires the lake to be exaggerated, its bounding area is as large as the image and it remains unchanged.

The following spatial detail redistribution algorithm computes a spatial detail constraint for every object, to emphasize particular objects and to clarify the scene by removing visual clutter (a code sketch follows the list):

1. Resize the original vector graphics image or animation to the desired target size. All objects are uniformly scaled.
2. Look up the Importance Value of each object.
3. Normalize the spatial detail by dividing the Original Spatial Detail of each object by its corresponding Bounding Area. We call this the object's Unit Spatial Detail.
4. Add the Unit Spatial Detail values of all objects to obtain the Total Unit Spatial Detail.
5. Compute the Weighted Unit Spatial Detail for each object, which is the object's Importance Value × Total Unit Spatial Detail.
6. Compute the Spatial Detail Constraint allocated to each object, which is the Weighted Unit Spatial Detail × Bounding Area of the object.
7. If (Original Spatial Detail of object < Spatial Detail Constraint of object), then apply Key Object Enhancement until Original Spatial Detail of object ≥ Spatial Detail Constraint of object. However, when the retarget size is very small, there may not be enough space to exaggerate the size of the object. In such cases, the size of the object remains the same as in the uniformly scaled image.
8. Else if (Original Spatial Detail of object > Spatial Detail Constraint of object), then apply Context Object Generalization until Original Spatial Detail of object ≤ Spatial Detail Constraint of object.
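The sketch below restates steps 2-8 as code (our paraphrase, not the authors' implementation). It assumes each object's original spatial detail and bounding area have already been computed, and uses normalized importance values as in the worked example of Figure 7; `enhance` and `generalize` are stubs standing in for Sections 6.1 and 6.2.

```python
# Sketch of the spatial detail redistribution steps above (our paraphrase,
# not the authors' code). Object values are illustrative, not the paper's.
def enhance(obj):
    # Section 6.1; a full implementation would also clamp exaggeration to
    # the display bounds (step 7's small-target caveat).
    print(f"enhance {obj['name']} toward {obj['constraint']:.3f}")

def generalize(obj):
    # Section 6.2: elimination, typification, outline simplification.
    print(f"generalize {obj['name']} toward {obj['constraint']:.3f}")

def redistribute(objects):
    total_importance = sum(o["importance"] for o in objects)
    # Step 3: unit spatial detail = original spatial detail / bounding area.
    for o in objects:
        o["unit_sd"] = o["spatial_detail"] / o["bounding_area"]
    # Step 4: total unit spatial detail of the scene.
    total_unit_sd = sum(o["unit_sd"] for o in objects)
    for o in objects:
        # Step 5: weight by the normalized importance value.
        weighted = (o["importance"] / total_importance) * total_unit_sd
        # Step 6: per-object spatial detail constraint.
        o["constraint"] = weighted * o["bounding_area"]
        # Steps 7-8: enhance key objects, generalize context objects.
        if o["spatial_detail"] < o["constraint"]:
            enhance(o)
        elif o["spatial_detail"] > o["constraint"]:
            generalize(o)

redistribute([
    {"name": "lake", "importance": 0.1, "spatial_detail": 0.20, "bounding_area": 1.0},
    {"name": "boat", "importance": 0.8, "spatial_detail": 1.02, "bounding_area": 0.1},
    {"name": "tree", "importance": 0.7, "spatial_detail": 0.77, "bounding_area": 0.2},
])
```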
6.1 Key Object Enhancement

Key object enhancement consists of both size and line exaggeration rules. These rules are applied to increase the spatial detail and visibility of an object after the vector animation or image is uniformly scaled down. Our system increases the object's size to satisfy the spatial detail constraint. If the object is just a line stroke, such as routes in informational images, our system instead applies line exaggeration by increasing the line weight. Figure 5 shows both line and size exaggeration.

6.2 Context Object Generalization

Generalization is a process of making entity classes less specific by suppressing characteristics that describe the class. These rules are applied when the spatial detail of an object needs to be reduced after uniform scaling of the vector animation or image. Starting from the leaf nodes of the SVG tree, regions in objects are eliminated based on the spatial detail constraints.

1. Elimination: This process selectively removes regions inside objects that are too small to be presented in the retargeted image. Beginning from the leaf nodes of the SVG tree, which represent the smallest lines and regions in an object, primitives are iteratively eliminated until the spatial detail constraint for the object is satisfied at the new target size. Figure 5 shows elimination applied to the veins of a leaf.

2. Typification: Typification is the reduction of feature density and level of detail while maintaining the representative distribution pattern of the original feature group. It is a form of elimination constrained to apply to multiple similar objects. Our system applies typification based on object similarity. Computing object similarity is a difficult pattern recognition problem; we use the heuristic of tree isomorphism within the SVG data format to compute a measure of spatial similarity. Each region of the object is represented as a node in the tree, and nested regions form leaves of the node. A tree with a single node (the root) is isomorphic only to another single-node tree that has approximately the same associated properties. Two trees with roots A and B, neither of which is a single-node tree, are isomorphic if and only if the associated properties at the roots are identical and there is a one-to-one correspondence between the subtrees of A and of B. This method works well on objects that are semantically grouped and in the same orientation. Figure 5 shows typification removing apples from a tree.

3. Outline Simplification: Often the control points of the Bezier curves representing ink lines at object boundaries become too close together, resulting in a noisy outline. Outline simplification reduces the number of control points to relax the Bezier curve. We use a vertex reduction technique, which is a simple and fast O(n) algorithm (a sketch follows this list). In vertex reduction, successive vertices that are clustered too closely are reduced to a single vertex. In our system, control points with minimum separation are simplified iteratively until the spatial detail constraint is reached. In Figure 5 the silhouettes of the mountains are simplified using the vertex reduction rule. Anti-aliasing could also be applied in conjunction with outline simplification to minimize the occurrence of scaling effects in the outlines of objects.
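The following is a minimal sketch of the vertex reduction step (our rendering of the standard technique, not the authors' code): successive control points closer than a tolerance collapse into one. In the full system, the tolerance would be raised iteratively until the object's spatial detail constraint is met.

```python
# Minimal sketch of O(n) vertex reduction for outline simplification (our
# illustration): a point is kept only if it lies at least `tol` away from
# the most recently kept point. The tolerance parameter is ours.
import math

def vertex_reduce(points, tol):
    kept = [points[0]]
    for p in points[1:]:
        if math.dist(p, kept[-1]) >= tol:
            kept.append(p)
    return kept

outline = [(0, 0), (0.2, 0.1), (1, 0), (1.05, 0.02), (2, 1)]
print(vertex_reduce(outline, tol=0.5))   # -> [(0, 0), (1, 0), (2, 1)]
```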
While retargeting animations containing textual objects, certain measures could be taken for greater legibility: using a thinner font, and readjusting text to prevent overlap during object enhancement.

Applying the spatial distribution algorithm to Figure 6, we can compute the following values for each object. The input importance values ∈ [0, 1] for Object 1, Object 2, and Object 3 are 0.1, 0.8, and 0.7, respectively (normalizing by their sum, 1.6, gives 0.0625, 0.5, and 0.4375).

                                        Object 1   Object 2   Object 3
  Normalized Importance                 0.0625     0.5        0.4375
  Original Spatial Detail               0.2        1.02       0.77
  Unit Spatial Detail (e-005)           0.86       16.12      4.86
  Weighted Unit Spatial Detail (e-005)  2.61       20.87      18.27
  Spatial Detail Constraint             0.62       1.32       0.6
  New Spatial Detail                    0.2        1.32       0.6

Figure 7: Intermediate values calculated during the spatial detail budgeting process. The goal is to make the new spatial detail of each object as close as possible to its Spatial Detail Constraint. For Object 1, however, the spatial detail cannot be increased, as its area is equal to that of the retargeted area.

Notice that the first object is constrained by the animation's bounding size and cannot be exaggerated further, so its spatial detail (0.2) remains unchanged, as shown in Figure 6a. The spatial detail constraint (1.32) of the second object is satisfied by applying exaggeration; the increase in size is shown in Figure 6c. The spatial detail of the third object is reduced to the budgeted spatial detail (0.6) by applying typification; Figure 6d shows that typification removes apples from the tree.

7 Informational Images

The retargeting framework for animation may be extended to informational images as well (Figures 9c, 9d, and 9e). Informational images are an abstraction, or generalization, of physical reality, and their effectiveness as a communication medium is strongly influenced by the nature of the spatial data, the form and structure of the representation, the intended purpose, the experience of the viewer, and the context and time in which the images are viewed [Buttenfield and McMaster 1991].

The retargeting process needs to exploit the artist's intentions for each entity in the information image, to create a representation consistent with the knowledge conveyed by the original image. Determining the knowledge to be conveyed to the viewer often involves a high-level semantic understanding of the context of the visual task. Converting such a high-level semantic ontology of information into a computational form is often a non-trivial problem. Informational image systems such as MapQuest [Map n.d.] and Google Maps [Goo n.d.] work around this problem by applying differential zooming based on where the user clicks on the map. Although our system has a similar goal to these systems, the difference in our approach is that differential levels of detail are applied to each object in the scene based on importance tags in the underlying XML structure of the graphics data. Our work may be extended to location-based services by using Global Positioning Systems (GPS) to guide the annotation of objects with importance tags. Here, the contribution of this work is the set of methods for increasing and reducing complexity in the image, resulting in differential zooming.

8 Results and Discussion

Our results demonstrate that the animation retargeting method performs reasonably well on vector graphics and images: semantically important objects are rendered with greater clarity, while unimportant objects maintain the context of the animation or the information conveyed to the user.
We ran the algorithm on an Intel(R) Xeon(TM) 3.06 GHz processor with 2 GB RAM. The memory requirement for running the algorithm is 29 + 8 MB. The run-time performance is as follows:

  Windmill example (Figure 1): 1817 ms
  Boat example (Figure 6): 2459 ms
  House example (Figure 8): 1927 ms
  Frog example (Figure 10a): 1394 ms
  Eiffel tower example (Figure 10b): 1942 ms
  Map 1 example (Figure 10c): 2255 ms
  Map 2 example (Figure 10d): 1426 ms
  Map 3 example (Figure 10e): 752 ms

While the importance values are an effective way of designating key objects in an animation clip, these parameters often need to be tuned by the animation author for a given display. Figure 8 shows the variation in retargeted results depending on which object is more important. In addition, unless the artist specifically groups objects with implicit visual relationships, these relationships may be destroyed by the retargeting process.

In the case of animations involving temporally consistent objects, we apply object transformations to the entire scene rather than on a frame-by-frame basis. This is because SVG provides the advantage of declarative animation rather than frame-based animation. However, for objects temporally varying in size and/or color, spatial detail needs to be calculated at each new instance of change in object state. The process can become more complex, particularly when key objects become context objects and vice versa. The author may then have to annotate importance tags for every new state of the vector object.

The exaggeration and generalization rules that we use may have a non-linear effect on the computed value of spatial detail. This can result in a noisy version of an animation at small target sizes; the effect becomes more evident as the target animation or image size becomes very small. The semantic grouping of objects also affects the performance of the algorithm. For example, in Figure 6 the island is grouped with the water. Since this object cannot be further exaggerated, both the island and the water remain the same size as in the uniformly scaled animation. However, ungrouping the water