Image-based Tree Modeling
Harbin University of Science and Technology, Master of Engineering Thesis

...the corresponding block that has the largest correlation coefficient.

[Figure 3-10: The images participating in the mosaic (panels a and b).]

4) Take the center of the feature block and the center of its corresponding block as a feature point and its corresponding feature point in the adjacent image, i.e., a feature-point pair. After four such pairs have been extracted, solving the equations above yields each parameter of the matrix M. The matrix M is then applied to every pixel of the image, which completes the planar image registration.
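The four-pair solve described above can be sketched as a direct linear system for the eight unknowns of a planar projective matrix. The thesis's exact parameterization of M and of Equation (3-17) is not reproduced here, so the normalization h33 = 1 and the function name below are assumptions for illustration:

```python
import numpy as np

def homography_from_4_pairs(src, dst):
    """Solve the 8 unknowns of a 3x3 projective matrix M (h33 fixed to 1).

    src, dst: four (x, y) point pairs with dst ~ M @ src in homogeneous
    coordinates. Each pair contributes two rows of the standard system:
        x' = (h11*x + h12*y + h13) / (h31*x + h32*y + 1)
        y' = (h21*x + h22*y + h23) / (h31*x + h32*y + 1)
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)
```

Four correspondences in general position (no three collinear) make the 8x8 system invertible, which is why exactly four feature-point pairs suffice.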
Because the points produced by the projective transformation of Equation (3-17) generally do not fall on integer coordinates, the warped image must be resampled by interpolation; bilinear interpolation is used here to obtain the transformed image.
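The bilinear interpolation step can be sketched as follows; the function name and the single-channel grayscale layout are assumptions for illustration:

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample a grayscale image at a non-integer location (x, y).

    Blends the four surrounding pixels with weights given by the
    fractional parts of the coordinates; this is the resampling used
    when a projective or cylindrical warp lands between pixel centers.
    """
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0, x0] + fx * img[y0, x1]
    bot = (1 - fx) * img[y1, x0] + fx * img[y1, x1]
    return (1 - fy) * top + fy * bot
```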
3.3.2 Registration of cylindrically projected images

Provided that all camera motion takes place in the X-Z plane and that the center of each image is the intersection of the camera's optical axis with the image plane, the images can be projected onto a cylinder, and the relationship between two adjacent cylindrical images in the sequence depends only on a translation. Image registration then amounts to recovering, from the image content, the horizontal and vertical translation parameters t = (t_x, t_y, 1) between the two cylindrical images.
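Projecting an image onto a cylinder is commonly done with the forward mapping below; the focal length f and principal point (cx, cy) are parameters the text does not spell out at this point, so their handling here is an assumption for illustration:

```python
import numpy as np

def to_cylinder(x, y, f, cx, cy):
    """Map an image point (x, y) to cylindrical coordinates.

    f is the focal length in pixels and (cx, cy) the principal point
    (assumed to be the image center, as in the text). Standard forward
    cylindrical projection:
        theta = atan((x - cx) / f)
        h     = (y - cy) / sqrt((x - cx)^2 + f^2)
    scaled back to pixel units by f.
    """
    dx, dy = x - cx, y - cy
    theta = np.arctan2(dx, f)
    h = dy / np.sqrt(dx * dx + f * f)
    return f * theta + cx, f * h + cy
```

After this warp, adjacent views of a camera rotating about its vertical axis differ (ideally) by a pure translation, which is what makes the translation-only registration above possible.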
We proceed according to the following steps:

1) Determine the overlap region of the two images. Based on the shooting conditions, the right third of the left image and the left third of the right image are normally chosen, as shown in Figure 3-11.

[Figure 3-11: Two images with an overlapping region.]

2) Within the overlap region of the left image, use the feature-region selection method proposed in this thesis to pick the feature block with the largest window value, and take the coordinates of its center as a feature point p(x, y).

3) Using the maximum-correlation criterion, search the right image for the equally sized block that has the largest correlation coefficient with the feature block, and take the coordinates of its center as the matching point p'(x', y').

4) The translation between the feature point and its matching point gives the registration parameters of the two adjacent cylindrical images: t = (t_x, t_y, 1) = (x − x', y − y', 1). Translating the right image accordingly and projecting it onto the plane of the left image completes the registration of the two adjacent cylindrical images.
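Steps 2)-4) can be sketched with a brute-force normalized cross-correlation search; the search radius, window handling, and function names are illustrative assumptions rather than the thesis's exact implementation:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two equal-size blocks."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_block(right, block, search=10, cx=0, cy=0):
    """Exhaustively search `right` around top-left guess (cx, cy) for the
    window with the largest correlation coefficient against `block`.

    Returns the best window's top-left corner and its NCC score; the
    translation t then follows from the two matched positions.
    """
    h, w = block.shape
    best, best_xy = -2.0, (cx, cy)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if 0 <= y and y + h <= right.shape[0] and 0 <= x and x + w <= right.shape[1]:
                s = ncc(right[y:y + h, x:x + w], block)
                if s > best:
                    best, best_xy = s, (x, y)
    return best_xy, best
```

In practice the search is restricted to the overlap region described in step 1), which keeps this exhaustive scan cheap.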
3.4 Smooth stitching of the panoramic image

The previous section proposed the feature-based image registration algorithm of this thesis and discussed it in detail.
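One common way to realize the smooth stitching referred to above is to cross-fade the two registered images across their overlap. This linear feathering sketch is a generic technique, not necessarily the scheme the thesis goes on to propose:

```python
import numpy as np

def feather_blend(left, right, overlap):
    """Stitch two horizontally registered images with a linear cross-fade.

    The last `overlap` columns of `left` and the first `overlap` columns
    of `right` are assumed to cover the same scene content; each blended
    column weights the left image by a factor that falls linearly from
    1 to 0 across the overlap, hiding the seam.
    """
    w = np.linspace(1.0, 0.0, overlap)          # weight of the left image
    blended = left[:, -overlap:] * w + right[:, :overlap] * (1.0 - w)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])
```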
Modeling is the core technology of virtual reality and one of its major difficulties; at present there are three main ways to realize it. Virtual reality simulates things from the real world in a virtual digital space, which requires those things to be represented in that space; this need gave rise to modeling technology in virtual reality. How closely the virtual reproduction resembles reality depends directly on the modeling technology. Research on modeling is therefore of great significance and has attracted the attention of researchers both in China and abroad.

Information in digital space takes mainly one-, two-, and three-dimensional forms. One-dimensional information mainly means text, captured through existing hardware and software such as keyboards and input methods. Two-dimensional information mainly means planar images, acquired and processed with hardware and software such as cameras, scanners, and Photoshop. For virtual reality, the three-dimensional modeling of objects is the central concern, and it remains a difficult technology today. Classified by how they are used, existing modeling techniques fall mainly into geometric modeling, scanning devices, and image-based methods.
Modeling based on geometric shape construction. In geometry-based modeling, professionals use specialized software tools (such as AutoCAD, 3ds Max, and Maya), applying knowledge of computer graphics and fine art to build a three-dimensional model of an object, somewhat as a painter paints a picture. This approach has three main representations: wireframe models, surface models, and solid models.

1. A wireframe model has only the notion of "lines", representing an object by a set of vertices and edges. For computer-aided design (CAD) applications such as building and part design, which care more about structural information than display quality, wireframe models are widely used for their simplicity and convenience; AutoCAD is a good modeling tool of this kind. However, this method can hardly represent an object's appearance, which limits its range of application.

2. A surface model adds the notion of "faces" to the wireframe model. For most applications the user only "looks" at the model: the visible surfaces of an object are what the user cares about, while the invisible interior is not. A surface model therefore approximates the real object's surface with parametric patches and can represent the object's appearance very well. Thanks to its excellent visual quality, this representation is widely used in film, games, and related industries, and it is the one we encounter most in daily life; tools such as 3ds Max and Maya perform especially well here.

3. A solid model further adds the notion of "volume": besides constructing the object's surface, it also describes the object's interior, forming a volumetric model of the object. This modeling method is applied in professional fields such as medical imaging and scientific data visualization.
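The parametric surface patches mentioned for surface models can be illustrated with a bicubic Bézier patch, one common patch type (chosen here purely for illustration; the text does not prescribe a specific patch):

```python
from math import comb

import numpy as np

def bernstein3(i, t):
    """Cubic Bernstein basis polynomial B_{i,3}(t)."""
    return comb(3, i) * t ** i * (1 - t) ** (3 - i)

def bezier_patch_point(P, u, v):
    """Evaluate a bicubic Bezier patch at parameters (u, v) in [0, 1]^2.

    P is a 4x4 grid of 3D control points; the surface point is the
    Bernstein-weighted combination of the control grid, so the patch
    interpolates the four corner control points and smoothly
    approximates the interior ones.
    """
    point = np.zeros(3)
    for i in range(4):
        for j in range(4):
            point = point + bernstein3(i, u) * bernstein3(j, v) * np.asarray(P[i][j], float)
    return point
```

A mesh of such patches, each controlled by a small grid of points, is what lets surface modelers approximate a visible object surface with few parameters.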
A Divide-and-Conquer Simplification Method for 3D Tree Leaf Models that Preserves Visual Perception

1. Introduction
- Introduce the application background and significance of 3D tree leaf models
- Analyze the limitations of existing 3D tree leaf models in computational complexity and accuracy
- Explain the research value and advantages of the proposed divide-and-conquer simplification method
2. Related work
- Survey existing research on 3D tree leaf models
- Analyze the strengths and weaknesses of existing methods and their shortcomings in handling complex tree geometry
- Introduce the basic principles and scope of divide-and-conquer algorithms
3. The divide-and-conquer simplification method
- A simplification pipeline for 3D tree leaf models based on divide and conquer
- Partition the tree leaves using a hierarchical data structure
- Propose a simplification strategy based on hierarchical constraints and pruning
- Implement the system framework and concrete methods of the simplification algorithm
4. Experiments and evaluation
- Describe the test data sets and test environment
- Compare the accuracy and computational complexity of the divide-and-conquer method against existing methods
- Analyze the experimental results to show that the method is an efficient simplification algorithm with controllable accuracy
5. Conclusions and outlook
- Summarize the proposed method and point out its research significance and application prospects for 3D tree leaf models
- Discuss future research directions and room for improvement
- Conclude the paper

1. Introduction

In recent years, with the rapid development and wide application of computer vision and computer graphics, the accuracy and efficiency of three-dimensional models have drawn increasing attention from researchers.
Among them, 3D tree leaf modeling is an important research area, applied mainly in ecological research, environmental simulation, and animation production. Traditional methods build 3D models by collecting large amounts of field data, which not only takes considerable time and labor but also suffers from low accuracy and difficult processing. To address these problems, a number of simplification methods centered on divide-and-conquer strategies have appeared; by processing 3D tree leaves hierarchically, they reduce computation and storage so as to improve algorithmic efficiency while preserving accuracy. This paper proposes a divide-and-conquer simplification method for 3D tree leaf models that effectively reduces the computational complexity of complex tree models and improves the efficiency of model drawing and rendering. The paper first introduces the application background and significance of 3D tree leaf models in ecology and other fields, then analyzes the limitations of existing methods in handling complex tree geometry, and finally elaborates the research value and practical significance of the proposed simplification method.
Computer Graphics Seminar
A Survey on Image-based Rendering (IBR) Techniques
Shi Jiaoying, State Key Laboratory of CAD & CG, College of Computer Science, Zhejiang University, May 2005

Contents:
1. Definition of image-based rendering (IBR)
2. Demonstrations of typical IBR techniques: Panorama Mosaics; Tour-Into-Picture; Light Field; Feature-based Morphing
3. Theoretical foundation of IBR: the plenoptic function
4. Developments of IBR: Image Matting; Digital Photomontage; High-Dynamic-Range Image Display; Plenoptic Photography
5. Extensions of the definition of image-based rendering

1. Definition of IBR

Problems of triangle-based graphics (transform, rasterization and lighting): it always starts from scratch, and it requires millions of sub-pixel triangles. Computer-vision methods can instead recover models from images, leading to image-based rendering and modeling (alongside volume rendering).

Definition by Sing Bing Kang: image-based rendering techniques rely on interpolation using the original set of input images, or on pixel reprojection from source images onto the target image, in order to produce a novel virtual view.

Definition by Cha Zhang and Tsuhan Chen of CMU: given a continuous plenoptic function that describes a scene, image-based rendering is a process of two stages, sampling and rendering. In the sampling stage, samples are taken from the plenoptic function for representation and storage; in the rendering stage, the continuous plenoptic function is reconstructed from the captured samples.

2. Demonstrations of typical IBR techniques

Panorama Mosaics; Tour-Into-Picture; Light Field Video; Feature-based Morphing.

3. Theoretical foundation of IBR: the plenoptic function

3.1 Introduction. There are two ways of describing the world:
- A source description: the world can be described by geometric models, texture maps, reflection models, and lighting and shading models.
- An appearance description: the world can be described by the dense array of light rays filling the space, which can be observed by placing eyes or cameras in the space. These light rays can be represented through the plenoptic function.
The traditional model-based rendering approach adopts the source description; the image-based rendering approach adopts the plenoptic function to describe the world.

IBR is an old story. As pointed out by Adelson and Bergen (1991): "The world is made of three-dimensional objects, but these objects do not communicate their properties directly to an observer. ... The plenoptic function serves as the sole communication link between the physical objects and their corresponding retinal images. It is the intermediary between the world and the eye."

The full plenoptic function is seven-dimensional: f(θ, φ, λ, X, Y, Z, t), recording radiance in every direction (θ, φ), at every wavelength λ, from every viewpoint (X, Y, Z), at every time t.

3.2 How to handle the plenoptic function? There are two stages (sampling, and reconstruction of the sampled signal) and two directions of simplification: restrain the viewing space of the viewers (viewpoint, perception), or introduce some source descriptions (geometry, depth) into IBR.

Restraining the viewing space:
- Assumption 1 (wavelength): constant wavelength, i.e., RGB; almost all practical IBR representations make this assumption.
- Assumption 2 (air): air is transparent, so radiance along a light ray through empty space remains constant.
- Assumption 3 (time): static scene, so images captured at different times and positions can be used together to render novel views; a dynamic scene makes the representation too large.
- Assumption 4 (viewpoint): the viewer is constrained to lie on a surface. This is acceptable because human eyes are usually at a certain height level, and are less sensitive to vertical parallax and lighting changes.
- Assumption 5 (viewpoint): the viewer moves along a certain path, which removes two dimensions from the full plenoptic function.
- Assumption 6 (viewpoint): the viewer has a fixed position, which reduces the dimension of the plenoptic function by three; no 3D effects can be perceived, giving something similar to regular images and videos.

Various representations:
- 6D, the surface plenoptic function (SPF): under Assumption 2, radiance along a ray through empty space remains constant, leaving position on the surface (2D), light-ray direction (2D), time (1D), and wavelength (1D). The surface light field can be considered a dimension-reduced version of the SPF (D. N. Wood, D. I. Azuma, K. Aldinger, B. Curless, T. Duchamp, D. H. Salesin and W. Stuetzle, "Surface light fields for 3D photography", Computer Graphics (SIGGRAPH '00), July 2000).
- 5D, plenoptic modeling and light-field video: a 5D function with 3D for the camera position and 2D for the cylindrical image (L. McMillan and G. Bishop, "Plenoptic modeling: an image-based rendering system", Computer Graphics (SIGGRAPH '95), August 1995). To render a novel view from the 5D representation, the close-by cylindrically projected images are warped to the viewing position based on their epipolar relationship and some visibility tests.
- 4D, light field / Lumigraph: under Assumptions 1, 2 and 3, the ray space is trickily parameterized by two planes, giving f(u, v, s, t).
- 3D, concentric mosaics: under Assumptions 1, 2, 3 and 4, cameras are mounted on a rotating arm over a tripod; the center camera yields a panorama, while off-centered cameras capture motion parallax.
- 2D, image mosaicing: composes a single mosaic from multiple input images. In most cases the light rays recorded in the mosaic share the same center of projection (COP), giving a panoramic mosaic, or panorama. In the more general scenario, the cameras of the input images can move in free form and the resultant mosaic has multiple COPs: a manifold mosaic. QuickTime VR uses environment maps (cylindrical, cubic, or spherical): at a fixed point it samples all ray directions, and users can look in both horizontal and vertical directions. An example is the Mars Pathfinder panorama.

IBR with various source descriptions:
- Correspondence between images, e.g., View Morphing (Steve Seitz et al., SIGGRAPH 96).
- Dense depth maps over a viewing region: sprites with depth, layered depth images (LDI), environment maps.
- Layered depth images handle disocclusion by storing invisible geometry in depth images.
- Texture maps plus scene geometry: image-based modeling and view-dependent texture mapping.
- Reflection models plus scene geometry: image-based relighting.
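The two-plane (u, v, s, t) parameterization behind the 4D light field can be sketched as follows; the unit plane spacing, integer sample grid, and nearest-neighbor lookup are simplifying assumptions for illustration:

```python
import numpy as np

def ray_to_uvst(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with the uv plane (z = z_uv) and the st plane (z = z_st).

    Returns the ray's two-plane coordinates (u, v, s, t); any ray not
    parallel to the planes is identified by these four numbers.
    """
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    tu = (z_uv - o[2]) / d[2]
    ts = (z_st - o[2]) / d[2]
    u, v = (o + tu * d)[:2]
    s, t = (o + ts * d)[:2]
    return u, v, s, t

def sample_light_field(L, u, v, s, t):
    """Nearest-neighbor lookup into a discrete light field L[u, v, s, t],
    assumed to be sampled on the integer grid of both planes."""
    idx = tuple(int(round(c)) for c in (u, v, s, t))
    return L[idx]
```

Rendering a novel view then reduces to shooting one such ray per output pixel and looking up (or interpolating) the stored radiance samples.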
Image-based Tree Modeling

Ping Tan, Gang Zeng*, Jingdong Wang, Sing Bing Kang¹, Long Quan
The Hong Kong University of Science and Technology; ¹Microsoft Research, Redmond, WA, USA

Figure 1: Image-based modeling of a tree. From left to right: a source image (out of 18 images), reconstructed branch structure rendered at the same viewpoint, tree model rendered at the same viewpoint, and tree model rendered at a different viewpoint.

Abstract

In this paper, we propose an approach for generating 3D models of natural-looking trees from images that has the additional benefit of requiring little user intervention. While our approach is primarily image-based, we do not model each leaf directly from images due to the large leaf count, small image footprint, and widespread occlusions. Instead, we populate the tree with leaf replicas from segmented source images to reconstruct the overall tree shape. In addition, we use the shape patterns of visible branches to predict those of obscured branches. We demonstrate our approach on a variety of trees.

CR Categories: I.3.5 [Computer Graphics]: Computational geometry and object modeling—Modeling packages; I.4.5 [Image Processing and Computer Vision]: Reconstruction.

Keywords: Tree modeling, plant modeling, image-based modeling, photography.

1 Introduction

Trees are hard to model in a realistic way because of their inherent geometric complexity. While progress has been made over the years in modeling trees, ease of model generation, model editability, and realism are difficult to achieve simultaneously. A tool with all these features could easily be part of a cost-effective solution to building realistic-looking environments for movie post-production, architectural designs, games, and web applications.

Our system uses images to model the tree. We chose an image-based approach because we believe such an approach has the best potential for producing a realistic tree model. The capture process is simple, as it involves only a hand-held camera. We use a structure from motion technique to recover
the camera motion and 3D point cloud of the tree. Rather than applying specific rules for branch generation, we use the local shapes of branches that are observed to interpolate those of obscured branches. Finally, the leaves are generated by segmenting the source images and computing their depths using the pre-computed 3D points or the recovered branches. One such result can be seen in Figure 1.

(*Gang Zeng is currently with ETH Zürich.)

Note that in this paper, we differentiate between plants and trees: we consider "plants" as terrestrial flora with large discernible leaves (relative to the plant size), and "trees" as large terrestrial flora with small leaves (relative to the tree size). The spectrum of plants and trees with varying leaf sizes is shown in Figure 2.

Figure 2: Spectrum of plants and trees based on relative leaf size. On the left end of the spectrum, the size of the leaves relative to the plant is large. This is ideal for using the modeling system of Quan et al. [2006]. Our system, on the other hand, targets trees with small relative leaf sizes (compared to the entire tree). (Figure labels: plants with large discernible leaves vs. trees with small undiscernible leaves; [Quan et al. 06] vs. our technique; "increasingly more manual-intensive"; "increasingly less similar to inputs".)

2 Related work

The techniques for modeling plants and trees can be roughly classified as being rule-based or image-based. Rule-based techniques make use of a small set of generative rules or a grammar to create branches and leaves. Prusinkiewicz et al. [1994], for example, developed a series of approaches based on the idea of the generative L-system, while Weber and Penn [1995] used a series of geometric rules to produce realistic-looking trees. De Reffye et al. [1988] also used a collection of rules of plant growth. Such techniques provide some realism and editability, but they require expertise for effective use. They work on the (usually) reasonable assumption that the branch shape and leaf arrangement follow a predictable pattern. On the other hand, they require considerable effort to replicate unexpected local structural modifications such as stunted growth due to disease or responses to external agents.

Figure 3: Overview of our tree modeling system. (Pipeline: source images; structure from motion; image segmentation; reconstruction of visible branches; reconstruction of occluded branches; textured 3D model.)

Image-based approaches use images of the tree for modeling. They range from the use of a single image and (limited) shape priors [Han and Zhu 2003] to multiple images [Sakaguchi 1998; Shlyakhter et al. 2001; Reche-Martinez et al. 2004; Quan et al. 2006]. A popular approach is to use the visual hull to aid the modeling process [Sakaguchi 1998; Shlyakhter et al. 2001; Reche-Martinez et al. 2004]. While Shlyakhter et al. [2001] refine the medial axis of the volume to a simple L-system fit for branch generation, Sakaguchi et al. [1998] use simple branching rules in voxel space for the same purpose. However, the models generated by these approaches are only approximate and have limited realism. Reche et al. [2004], on the other hand, compute a volumetric representation with variable opacity. While realism is achieved, their models cannot be edited or animated easily.

The approaches closest to ours are those of Quan et al. [2006] and Gossett et al. [2006]. Quan et al. [2006] showed that it is possible to model plants well by explicitly modeling leaves from images and providing a simple user interface to generate branches. However, generating a tree with a large number of small overlapping leaves would be impractical. In our approach, the branches are generated automatically from the images, with pattern-based interpolation in occluded areas. This substantially reduces the amount of manual input required. In addition, Quan et al. usually require a 360° capture, which may not always be possible for outdoor trees. We require only a partial coverage of the tree and a small number of overlapping images (10-20 images for most examples in this paper). Figure 2 illustrates the types of plants and trees
that are appropriate for Quan et al.'s [2006] technique and ours. Both these techniques are complementary. Gossett et al. [2006] used a laser scanner to acquire the range data for modeling a tree. Part of our work, the generation of initial visible branches, is inspired by theirs. The major difference is that they use a 3D point cloud for modeling; no registered source images are used. It is not easy to generate complete tree models from just 3D points because of the difficulties in determining what is missing and in filling in the missing information. Our experience has led us to believe that adapting models to images is a more intuitive means for realistic modeling. The image-based approach is also more flexible for modeling a wide variety of trees at different scales.

3 Overview of the system

Our tree modeling system consists of three main parts: image capture and 3D point recovery, branch recovery, and leaf population, as illustrated in Figure 3. It is designed to reduce the amount of user interaction required by using as much data from images as possible. The recovery of the visible branches is mostly automatic, with the user given the option of refining their shapes. The subsequent recovery of the occluded branches and leaves is automatic, with only a few parameters to be set by the user.

As was done by researchers in the past, we capitalize on the structural regularity of trees, more specifically the self-similarity of structural patterns of branches and arrangement of leaves. However, rather than extracting rule parameters (which is very difficult to do in general), we use the extracted local arrangement of visible branches as building blocks to generate the occluded ones. This is done using the recovered 3D points as hard constraints and the matte silhouettes of trees in the source images as soft constraints.

To populate the tree with leaves, the user first provides the expected average image footprint of leaves. The system then segments each source image based on color. The 3D position of each leaf segment is determined either by its closest 3D point or by its closest branch segment. The orientation of each leaf is approximated from the shape of the region relative to the leaf model or the best-fit plane of leaf points in its vicinity.

4 Image capture and 3D point recovery

We use a hand-held camera to capture the appearance of the tree of interest from a number of different overlapping views. In all but one of our experiments, only 10 to 20 images were taken for each tree, with coverage between 120° and 200° around the tree. The exception is the potted flower tree shown in Figure 7, where 32 images covering 360° were taken.

Prior to any user-assisted geometry reconstruction, we extract point correspondences and run structure from motion on them to recover camera parameters and a 3D point cloud. We also assume the matte for the tree has been extracted in each image, so that we know the extracted 3D point cloud is that of the tree and not the background. In our implementation, matting is achieved with automatic color-based segmentation and some user guidance.

Standard computer vision techniques have been developed to estimate the point correspondences across the images and the camera parameters. We used the approach described in [Lhuillier and Quan 2005] to compute the camera poses and a semi-dense cloud of reliable 3D points in space. Depending on the spatial distribution of the camera and the geometric complexity of the tree, there may be significant areas that are missing or sparse due to occlusion. One example of structure from motion is shown in Figure 3.

5 Branch recovery

Once the camera poses and 3D point cloud have been extracted, we next reconstruct the tree branches, starting with the visible ones. The local structures of the visible branches are subsequently used to reconstruct those that are occluded, in a manner similar to non-parametric texture synthesis in 2D [Efros and Leung 1999] (and later 3D [Breckon and Fisher 2005]), using the 3D points as constraints. To enable
the branch recovery stage to be robust, we make three assumptions. First, we assume that the cloud of 3D points has been partitioned into points belonging to the branches and leaves (using color and position). Second, the tree trunk and its branches are assumed to be unoccluded. Finally, we expect the structures of the visible branches to be highly representative of those that are occluded (modulo some scaled rigid transform).

5.1 Reconstruction of visible branches

The cloud of 3D points associated with the branches is used to guide the reconstruction of visible branches. Note that these 3D points can be in the form of multiple point clusters due to occlusion of branches. We call each cluster a branch cluster; each branch cluster has a primary branch, with the rest being secondary branches.

The visible branches are reconstructed using a data-driven, bottom-up approach with a reasonable amount of user interaction. The reconstruction starts with graph construction, with each sub-graph representing a branch cluster. The user clicks on a 3D point of the primary branch to initiate the process. Once the reconstruction is done, the user iteratively selects another branch cluster to be reconstructed in very much the same way, until all the visible branches are accounted for. The very first branch cluster handled consists of the tree trunk (primary branch) and its branches (secondary branches).

There are two parts to the process of reconstructing visible branches: graph construction to find the branch clusters, followed by sub-graph refinement to extract structure from each branch cluster.

Graph construction. Given the 3D points and 3D-2D projection information, we build a graph G by taking each 3D point as a node and connecting it to its neighboring points with edges. The neighboring points are all those points whose distance to a given point is smaller than a threshold set by the user. The weight associated with each edge between a pair of points is a combined distance d(p, q) = (1 − α) d_3D + α d_2D, with α = 0.5 by default. The 3D distance d_3D is the 3D Euclidean distance between p and q, normalized by its variance. For each image I_i that p and q project to, let l_i be the resulting line segment in the image joining their projections P_i(p) and P_i(q). Also, let n_i be the number of pixels in l_i and {x_ij | j = 1, ..., n_i} be the set of 2D points in l_i. We define a 2D distance function d_2D = Σ_i (1/n_i) Σ_j |∇I_i(x_ij)|, normalized by its variance, with ∇I(x) being the gradient in image I at 2D location x. The 2D distance function accounts for the normalized intensity variation along the projected line segments over all observed views. If the branch in the source images has been identified and pre-segmented (e.g., using some semi-automatic segmentation technique), this function is set to infinity if any line segment is projected outside the branch area.

Each connected component, or sub-graph, is considered as a branch cluster. We now describe how each branch cluster is processed to produce geometry, which consists of the skeleton and its thickness distribution.

Conversion of sub-graph into branches. We start with the branch cluster that contains the lowest 3D point (the "root" point), which we assume to be part of the primary branch. (For the first cluster, the primary branch is the tree trunk.) The shortest paths from the root point to all other points are computed by a standard shortest-path algorithm. The edges of the sub-graph are kept if they are part of the shortest paths and discarded otherwise. This step results in 3D points linked along the surface of branches. Next, to extract the skeleton, the lengths of the shortest paths are divided into segments of a pre-specified length. The centroid of the points in each segment is computed and is referred to as a skeleton node. The radius of this node (or the radius of the corresponding branch) is the standard deviation of the points in the same bin. This procedure is similar to those described in [Gossett et al. 2006] and [Brostow et al. 2004].

User interface for branch refinement. Our system allows the user to refine the branches through simple operations that include adding or removing skeleton nodes, inserting a node between two adjacent nodes, and adjusting the radius of a node (which controls the local branch thickness). In addition, the user can also connect different branch clusters by clicking on one skeleton node of one cluster and a root point of another cluster. The connection is used to guide the creation of occluded branches that link these two clusters (see Section 5.2). Another feature of our system is that all these operations can be specified at a view corresponding to any one of the source images; this allows user interaction to occur with the appropriate source image superimposed for reference.

A result of branch structure recovery is shown for the bare tree example in Figure 4. This tree was captured with 19 images covering about 120°. The model was automatically generated from only one branch cluster.

Figure 4: Bare tree example. From left to right: one of the source images, superimposed branch-only tree model, and branch-only tree model rendered at a different viewpoint.

5.2 Reconstruction of occluded branches

The recovery of visible branches serves two purposes: portions of the tree model are reconstructed, and the reconstructed parts are used to replicate the occluded branches. We make the important assumption that the tree branch structure is locally self-similar. In our current implementation, any subset, i.e., a subtree, of the recovered visible branches is a candidate replication block. This is illustrated in Figure 5 for the final branch results shown in Figures 1 and 7.

The next step is to recover the occluded branches given the visible branches and the library of replication blocks. We treat this problem as texture synthesis, with the visible branches providing the texture sample and boundary conditions. There is a major difference with conventional texture synthesis: the scaling of a replication block is
spatially dependent.This is necessary to ensure that the generated branches are geometrically plausible with the visible branches.In a typical tree with dense foliage,most of branches in the up-per crown are occluded.To create plausible branches in this area,the system starts from the existing branches and “grows”to occupy part of the upper crown using our synthesis approach.The cut-off boundaries are specified by the tree silhouettes from the source images.The growth of the new branches can also be influenced(a)(b)(c)(d)Figure 5:Branch reconstruction for two different trees.The left column shows the skeletons associated with visible branches while the right are representative replication blocks.(a,b)are for the fig tree in Figure 1,and (c,d)are for the potted flower tree in Figure 7.(Only the main branch of the flower tree is clearly visible.Three replication blocks were chosen from another tree and used for branch reconstruction.)by the reconstructed 3D points on the tree surface as branch end-points.Depending on the availability of reconstructed 3D points,the “growth”of occluded branches can be unconstrained or con-strained.Unconstrained growth.In areas where 3D points are unavail-able,the system randomly picks an endpoint or a node of a branch structure and attaches the endpoint or node to a random replica-tion block.Although the branch selection is mostly random,pri-ority is given to thicker branches or those closer to the tree trunk.In growing the branches,two parameters associated with the repli-cation block are computed on the fly:a random rotation and a scale .The replication block is first rotated about its primary branch by the chosen random angle.Then,it is globally scaled right before it is attached to the tree such that the length of its scaled primary branch matches that of the end branch being replaced.Once scaled,the primary branch of the replication block replaces the end-branch.This growth is capped by the silhouettes of the source images to 
ensure that the reconstructed overall shape is as close as possible to that of the real tree.

Constrained growth. The reconstructed 3D points, by virtue of being visible, are considered to be very close to the branch endpoints. By branch endpoints, we mean the exposed endpoints of the last generation of branches. These points are thus used to constrain the extents of the branch structure. By comparison, in the work of [Quan et al. 2006], the 3D points are primarily used to extract the shapes of leaves.

This constrained growth of branches (resulting in Tree) is computed by minimizing

    Σ_i D(p_i, Tree)

over all the 3D points {p_i | i = 1, ..., n_3D}, with n_3D being the number of 3D points. D(p, Tree) is the smallest distance between a given point p and the branch endpoints of Tree. Unfortunately, the space of all possible subtrees with a fixed number of generations to be added to a given tree is exponential. Instead, we solve this optimization in a greedy manner. For each node of the current tree, we define an influence cone with its axis along the current branch and an angular extent of 90° side to side. For that node, only the 3D points that fall within its influence cone are considered. This restricts the number of points and the set of subtrees considered in the optimization. Our problem reduces to minimizing

    Σ_{p_i ∈ Cone} D(p_i, Subtree)

for each subtree, with Cone being the set of points within the influence cone associated with Subtree. If Cone is empty, the branches for this node are created using the unconstrained growth procedure described earlier. Subtrees are computed in order of the size of Cone, generation by generation. The number of generations of branches to be added at a time can be controlled. In our implementation, for speed considerations, we add one generation at a time and set a maximum of 7 generations.

Once the skeleton and thickness distribution have been computed, the branch structure can be converted into a 3D mesh, as shown
in Figures 1, 4, and 7. The user has the option to perform the same basic editing functions as described in Section 5.1.

6 Populating the tree with leaves

The previous section described how the extracted 3D point cloud is used to reconstruct the branches. Given the branches, one could always just add the leaves directly on the branches using simple guidelines, such as making the leaves point away from branches. While this approach would have the advantage of not requiring the use of the source images, the result may deviate significantly from the look of the real tree we wish to model. Instead, we chose to analyze the source images by segmenting and clustering, and to use the results of the analysis to guide the leaf population process.

6.1 Image segmentation and clustering

Since the leaves appear relatively repetitive, one could conceivably use texture analysis for image segmentation. Unfortunately, the spatially-varying amounts of foreshortening and mutual occlusion of leaves significantly complicate the use of texture analysis. However, we do not require very accurate leaf-by-leaf segmentation to produce models of realistic-looking trees.

We assume that the color of a leaf is homogeneous and that there are intensity edges between adjacent leaves. We first apply the mean shift filter [Comaniciu and Meer 2002] to produce homogeneous regions, with each region tagged with a mean-shift color. These regions undergo a split or merge operation to produce new regions within a prescribed range of sizes. These new regions are then clustered based on the mean-shift colors. Each cluster is a set of new regions with similar color and size that are distributed in space, as can be seen in Figure 6(c,d). These three steps are detailed below.

Figure 6: Segmentation and clustering. (a) Matted leaves from a source image, (b) regions created after the mean shift filtering, (c) the first 30 clusters (color-coded by cluster), and (d) 17 textured clusters (textures from source images).

Mean shift filtering. The mean shift
filter is performed on color and space jointly. We map the RGB color space into LUV space, which is more perceptually meaningful. We define our multivariate kernel as the product of two radially symmetric kernels:

    K_{h_s, h_r}(x) = (C / (h_s^2 h_r^2)) k_E(||x^s / h_s||^2) k_E(||x^r / h_r||^2),

where x^s is the spatial vector (2D coordinates), x^r is the color vector in LUV, and C is the normalization constant. k_E(x) is the profile of the Epanechnikov kernel: k_E(x) = 1 - x if 0 ≤ x ≤ 1, and 0 for x > 1. The bandwidth parameters h_s and h_r are interactively set by the user. In our experiments, h_s ranged from 6 to 8 and h_r from 3 to 7. The segmentation results were reasonable as long as the values used were within these ranges.

Region split or merge. After applying mean-shift filtering, we build a graph on the image grid with each pixel as a node; edges are established between 8-neighboring nodes if their (mean-shift) color difference is below a threshold (1 in our implementation). Prior to the split or merge operation, the user specifies the range of valid leaf sizes. Connected regions that are too small are merged with neighboring ones until a valid size is reached. On the other hand, connected regions that are too large are split into smaller valid ones.
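For concreteness, the joint spatial-range kernel described above can be sketched in code. This is a minimal illustration written for this description, not the paper's implementation; the function names are our own, and the normalization constant C is deliberately omitted since mean-shift weight ratios are unaffected by it.

```python
import numpy as np

def epanechnikov_profile(x):
    """Profile k_E(x) = 1 - x for 0 <= x <= 1, and 0 for x > 1."""
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0.0) & (x <= 1.0), 1.0 - x, 0.0)

def joint_kernel_weight(dx_spatial, dx_color, h_s=7.0, h_r=5.0):
    """Weight K_{h_s,h_r} of a neighboring pixel, up to the constant C.

    dx_spatial: 2D pixel offset x^s; dx_color: 3D LUV offset x^r.
    Default bandwidths sit in the middle of the ranges reported in
    the text (h_s in [6, 8], h_r in [3, 7]).
    """
    s = float(np.sum((np.asarray(dx_spatial, dtype=float) / h_s) ** 2))
    r = float(np.sum((np.asarray(dx_color, dtype=float) / h_r) ** 2))
    # Product of the two radially symmetric kernels.
    return float(epanechnikov_profile(s) * epanechnikov_profile(r))
```

A pixel at zero offset receives weight 1; the weight decays as the offsets grow and vanishes once either the spatial or the color offset exceeds its bandwidth, which is what confines filtering to a local spatial and color neighborhood.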
Splitting is done by seeding and region growing; the seeds can be either placed automatically with an even distribution or set interactively. This split or merge operation produces a set of new regions.

Color-based clustering. Each new region is considered a candidate leaf. We use the K-means clustering method to obtain about 20 to 30 clusters. We keep only about 10 clusters associated with the brightest colors, as they are much more likely to represent visible leaves. Each new region in the kept clusters is fitted to an ellipse through singular value decomposition (SVD).

User interaction. The user can click to add a seed for splitting and creating a new region, or click on a specific cluster to accept or reject it. With the exception of the fig tree shown in Figure 1, the leaves were fully automatically segmented. (For the fig tree, the user manually specified a few boundary regions and clusters.)

6.2 Adding leaves to branches

There are two types of leaves that are added to the tree model: leaves that are created from the source images using the results of segmentation and clustering (Section 6.1), and leaves that are synthesized to fill in areas that either are occluded or lack coverage from the source viewpoints.

Creating leaves from segmentation. Once we have produced the clusters, we proceed to compute their 3D locations. Recall that each region in a cluster represents a leaf. We also have a user-specified generic leaf model for each tree example (usually an ellipse, but a more complicated model is possible). For each region in each source image, we first find the closest pre-computed 3D point (Section 4) or branch (Section 5) along the line of sight of the region's centroid. The 3D location of the leaf is then snapped to the closest pre-computed 3D point or the nearest 3D position on the branch. Using branches to create leaves is necessary to make up for the possible lack of pre-computed 3D points (say, due to using a small number of source images). The orientation of the generic leaf model is initially set to be parallel to the source image
plane. In the case where more than three pre-computed 3D points project onto a region, SVD is applied to all these points to compute the leaf's orientation. Otherwise, its orientation is set such that its projection is closest to the region shape.

This approach to leaf generation is simple and fast, and is applied to each of the source images. However, since we do not compute explicit correspondences between regions in different views, it typically results in multiple leaves for a given corresponding leaf region. (Correspondence is not computed because our automatic segmentation technique does not guarantee consistent segmentation across the source images.) We simply use a distance threshold (half the width of a leaf) to remove redundant leaves.

Synthesizing missing leaves. Because of occlusion and lack of coverage by the source images, the tree model that has been reconstructed thus far may be missing a significant number of leaves. To overcome this limitation, we synthesize leaves on the branch structure to produce a more evenly distributed leaf density. The leaf density on a branch is computed as the ratio of the number of leaves on the branch to the length of the branch. We synthesize leaves on the branches with the lowest leaf densities (bottom third), using the branches with the highest leaf densities (top third) as exemplars.

7 Results

In this section, we show reconstruction results for a variety of trees.
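To make the density rule above concrete, here is a small sketch of the bottom-third/top-third selection. The Branch structure, the one-to-one pairing of sparse branches with dense exemplars, and the rescaling of exemplar leaf positions are our illustrative assumptions; the paper's actual placement logic is richer.

```python
from dataclasses import dataclass, field

@dataclass
class Branch:
    length: float                               # branch length in world units
    leaves: list = field(default_factory=list)  # leaf positions along the branch

    @property
    def leaf_density(self) -> float:
        # Leaf density = number of leaves / branch length (as in the text).
        return len(self.leaves) / self.length

def split_by_density(branches):
    """Return (sparse, dense): the bottom and top thirds by leaf density."""
    ranked = sorted(branches, key=lambda b: b.leaf_density)
    third = max(1, len(ranked) // 3)
    return ranked[:third], ranked[-third:]

def synthesize_leaves(branches):
    """Copy leaf layouts from dense exemplar branches onto sparse ones.

    Hypothetical simplification: exemplar leaf positions are rescaled
    by the ratio of branch lengths before being copied over.
    """
    sparse, dense = split_by_density(branches)
    for target, exemplar in zip(sparse, dense):
        scale = target.length / exemplar.length
        target.leaves.extend(pos * scale for pos in exemplar.leaves)
    return branches
```

After one pass, the sparsest branches inherit the leaf layout of the densest ones, evening out the overall leaf distribution.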
The recovered models have leaves numbering from about 3,000 to 140,000. We used Maya™ for rendering; note that we did not model complex phenomena such as inter-reflection and subsurface scattering of leaves. In our experiments, image acquisition using an off-the-shelf digital camera took about 10 minutes. The computation time depends on the complexity of the tree. Automatic visible branch reconstruction took 1-3 minutes, while manual editing took about 5 minutes. Invisible branches were reconstructed in about 5 minutes, while leaf segmentation took about 1 minute per image. The final stage of leaf population took 3-5 minutes.

The fig tree shown in Figure 1 was captured using 18 images covering about 180°. It is a typical but challenging example, as there are substantial missing points in the crown. Nevertheless, its shape has been recovered reasonably well, with a plausible-looking branch structure. The process was automatic, with the exceptions of the manual addition of a branch and a few adjustments to the thickness of branches.

The potted flower tree shown in Figure 7 is an interesting example: the leaf size relative to the entire tree is moderate, not small as in the other examples. Here, 32 source images were taken along a complete 360° path around the tree. Its leaves were discernible enough that our automatic leaf generation technique produced only moderately realistic leaves, since larger leaves require more accurate segmentation. The other challenge is the very dense foliage, so dense that only the trunk is clearly visible. In this case, the user supplied only the three simple replication blocks shown in Figure 5(d); our system then automatically produced a very plausible-looking model. About 60% of the reconstructed leaves relied on the recovered branches for placement. Based on the leaf/tree size ratio, this example falls in the middle of the plant/tree spectrum shown in Figure 2.

Figure 8 shows a medium-sized tree, which was captured with 16 images covering about 135°. The branches took 10 minutes to modify, and
the leaf segmentation was fully automatic. The rightmost image in Figure 8 shows a view not covered by the source images; here, synthesized leaves are shown as well.

The tree in Figure 9 is large with relatively tiny leaves. It was captured with 16 images covering about 120°. We spent five minutes editing the branches after automatic reconstruction to clean up the appearance of the tree. Since the branches are extracted by connecting nearby points, branches that are close to each other may be merged. The rightmost visible branch in Figure 9 is an example of a merged branch.

8 Concluding remarks

We have described a system for constructing realistic-looking tree models from images. Our system was designed with the goal of minimizing user interaction in mind. To this end, we devised automatic techniques for recovering visible branches, generating plausible-looking branches that were originally occluded, and populating the tree with leaves.

There are certainly ways of improving our system. For one, the replication block need not necessarily be restricted to being part of the visible branches of the same tree. It is possible to generate a much larger and tree-specific database of replication blocks. The observed set of replication blocks can be used to fetch the appropriate database for branch generation, thus providing a richer set of branch shapes.