Animated BDP Agents in Virtual Environments
Animation of Virtual Human Bodies Using Motion Capture Devices

Luciana Porcher Nedel† and Daniel Thalmann‡

†UFRGS - Universidade Federal do Rio Grande do Sul
Computer Science Department
Caixa Postal 15.064 - 91.501-970 - Porto Alegre - RS - Brazil

‡EPFL - Swiss Federal Institute of Technology
LIG - Computer Graphics Lab
CH 1015 - Lausanne - VD - Switzerland

nedel@inf.ufrgs.br, thalmann@lig.di.epfl.ch

Resumo

O presente trabalho objetiva a integração de um conjunto de características inerentes ao corpo humano, em uma estrutura de representação de seres humanos virtuais desenvolvida para uso em aplicações multimídia. Acredita-se que movimentos mais realísticos podem ser obtidos se forem aplicados em modelos virtuais os mesmos princípios usados na natureza. Por este motivo propõe-se a “humanização” da estrutura humanóide disponível, através da inclusão de modificações na aparência física do esqueleto e do uso de músculos deformáveis dispostos em suas posições originais, no que refere à anatomia. Considerando a animação, foram utilizados captores de movimento magnéticos na obtenção das posições dos membros, ativando assim as linhas de ação dos músculos e permitindo consequentemente sua deformação. O uso de dispositivos Flock of Birds™ permite tanto o controle do movimento de “avatares”, como a geração de movimentos humanos complexos.

Abstract

The goal of this work is the integration of a set of "real" human body characteristics into an existing body structure conceived for use in multimedia applications. We believe that we can obtain more realistic movements in human simulation if we apply the same principles used in nature. For this reason, we propose the "humanization" of the humanoid structure, including modifications to the body's physical appearance and the inclusion of deformable muscles arranged at their original anatomical positions and animated passively through the activation of their action lines. Concerning the animation, tracking devices are used to capture the limb movements, activating the muscle action lines and allowing deformations. The use of Flock of Birds™ trackers provides easy animation of avatars as well as the generation of complex human movements.

1. Introduction

For years, modeling and animation of the human being have been important research goals in Computer Graphics. Simple skeleton-based models are too limited for human simulation because a human body is, in fact, a collection of complex rigid and non-rigid components that are very difficult to model. Considering the complexity of the human structure and the fact that our eyes are especially sensitive to familiar things (e.g., our own image), people have become very exacting about human simulation. Consequently, researchers began to use human anatomy know-how to produce human models with more realistic behavior.

Computer Graphics currently shows a strong trend toward real-time applications, and virtual human actors are needed in many of them. Examples include the simulation of virtual environments, where people can live and work in a virtual place for visualization, analysis, training or simply for the experience. Efficient tele-conferencing using virtual representations of participants can reduce transmission bandwidth requirements. Thalmann et al. [Thalmann 95] illustrate a teleconferencing application where multiple participants from distant sites can share, move and act within the same 3-D environment. In education, there are distance mentoring, interactive assistance and personalized instruction.
In the military domain, research has been done on battlefield simulation with individual participants, team training and peace-keeping operations. In the game industry, people are using real-time characters with actions and personality for fun and profit.

A new challenge in the application of human simulation is the interaction between agents and avatars, inserting real-time humans into virtual worlds through virtual reality. An agent is a virtual human figure created and controlled by computer programs, while an avatar is a virtual human controlled by a live participant.

At first sight, research on a realistic human model seems incompatible with real-time goals, but that is exactly what we are working on. The first component of our human model is the anatomically-based skeleton, composed of bones attached to each other by joints, as presented in Section 2. Muscles are the second component (see Section 3) and are designed on two levels: the muscle action and the muscle shape deformation. As this paper focuses especially on animation, we present in Section 4 some aspects of the motion generation process. Section 5 describes the use of motion capture devices to allow the easy animation of avatars as well as the generation of complex human movements. Finally, the results are shown and some considerations are made.

2. Skeleton Design

The human body can be briefly defined as a composition of skeleton, muscles, fat and skin. The skeleton is formed by bones (about 206 in total) attached to each other by joints, and constitutes the foundation of all human surface form. The base of the skeleton is the spinal column, to which the two pairs of limbs (arms and legs) and the head are connected.

We also use the term skeleton in computer animation to designate a stick figure representing the positions and orientations of the joints that make up the articulated figure. To represent our stick-figure human model, we have used the basic skeleton structure proposed by Boulic et al. [Boulic 91]. The skeleton hierarchy is composed of articulated line segments whose movements are described at the joint level.

Regarding implementation, a BODY data structure maintains some generic information, including a topological tree structure for a vertebrate body with predefined mobility. The hierarchical topology orients the propagation of motion from the root to the terminal nodes. Each joint of a body is represented by one or more nodes of the tree, depending on the joint's Degrees of Freedom (DOFs): joints have up to three DOFs, and each DOF corresponds to one node of the tree. Each node position is defined locally in relation to its parent node in the hierarchy. Our simplified model of the human skeleton contains 31 joints with 62 DOFs. We represent neither the joints of the head nor facial animation.
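The BODY structure itself is not listed in the paper; the following is a minimal sketch, in Python and with invented names, of how such a hierarchy of one-DOF nodes with locally defined transforms could be represented, with motion propagating from the root to the terminal nodes.

```python
# Illustrative sketch (not the original BODY library): a skeleton as a tree of
# DOF nodes. Each node carries one rotation axis; a joint with three DOFs is a
# chain of three nodes. Global transforms are composed from the root downward.
import numpy as np

def axis_angle_matrix(axis, angle):
    """3x3 rotation matrix for a rotation of `angle` radians about `axis`."""
    x, y, z = axis / np.linalg.norm(axis)
    c, s = np.cos(angle), np.sin(angle)
    C = 1.0 - c
    return np.array([[c + x*x*C,   x*y*C - z*s, x*z*C + y*s],
                     [y*x*C + z*s, c + y*y*C,   y*z*C - x*s],
                     [z*x*C - y*s, z*y*C + x*s, c + z*z*C]])

class DofNode:
    def __init__(self, name, axis, parent=None):
        self.name = name
        self.axis = np.asarray(axis, dtype=float)
        self.angle = 0.0                 # current joint value for this DOF
        self.parent = parent
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def global_rotation(self):
        """Compose rotations from the root down to this node."""
        local = axis_angle_matrix(self.axis, self.angle)
        if self.parent is None:
            return local
        return self.parent.global_rotation() @ local

# Example: a 1-DOF elbow under a 3-DOF shoulder (flexion, abduction, twist).
root = DofNode("pelvis_root", [0, 0, 1])
shoulder_flex = DofNode("r_shoulder_flexion", [1, 0, 0], parent=root)
shoulder_abd = DofNode("r_shoulder_abduction", [0, 0, 1], parent=shoulder_flex)
shoulder_twist = DofNode("r_shoulder_twist", [0, 1, 0], parent=shoulder_abd)
elbow_flex = DofNode("r_elbow_flexion", [1, 0, 0], parent=shoulder_twist)

elbow_flex.angle = np.radians(45.0)      # motion applied at a joint...
print(elbow_flex.global_rotation())      # ...propagates through the hierarchy
```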
As mentioned before, the main goal of this work is to produce a human model based on anatomic principles. At the skeleton level we suggest the creation of a simplified body with the joint positions defined anatomically. To accomplish this, we propose the design of a new human template defining new positions and orientations for the joints. But finding the location of joint centers from the knowledge of external markers is a very difficult task: joints like the shoulder or the hip, for example, are hidden within the flesh and complex bone structure. We therefore propose to define the template by analyzing a three-dimensional real skeleton. From a realistic reconstructed human skeleton, we have carefully determined the best position for each joint between the bones, based on anatomic concepts and motion observation. In Figure 1 we can see the skeleton with bony landmarks highlighted, representing the positions of the joints.

Figure 1. (a) Frontal and (b) lateral views of the skeleton with the joint positions highlighted.

Bones are linked to the joint reference systems and modeled by triangle meshes defined locally in relation to their parent joint. Our system is modeled with 73 bones, not counting the hands and feet, which are represented by complete sets of bones. Bones do not change shape during animation, but can change position in relation to each other.

Our main goal in representing bones is to allow the definition of muscle attachments (skeletal muscles are, in general, fixed on the bones by their extremity points). But esthetic purposes were also considered. In fact, in some parts of the body the skeleton contributes directly to the external appearance and should be considered during the skin generation process; for example, the lower part of the legs, the elbow and, for a slim person, some ribs. Furthermore, parts of bones that appear not to create surface form in some postures do so in others.

Another reason pushed us to include the bone shapes in our model: the visualization of the skeleton during animation allows the avoidance of forbidden postures, a tedious and abstract task when based only on a stick figure. The definition of the limit angles for each joint also became more precise when using bone representations between articulations.

3. Muscles Simulation

3.1. Action Lines

In our model, a muscle is represented on two levels: the action line and the muscle shape. To simulate the muscle forces, muscles can be represented by one or more action lines, basically defined by an origin and an insertion point. These points represent the links between the muscles and the bones, sometimes also corresponding to the extremities of the muscle shape. However, depending on the shape, position and complexity of the muscle, simple action lines are not enough to represent the force produced over the skeleton. For this reason, we have decided to represent the actions of the muscles not simply by a straight line, but by polylines. To accomplish this we have developed the concept of control points, whose objective is to guide the line, avoiding intersection with the bones. An example of this kind of action line is shown in Figure 2. The Biceps Brachii is represented by two action lines (one for each of the two muscle heads), but one of them needs a control point: if we try to represent this action line by a straight line, the bone is intersected.

Figure 2. The Biceps Brachii muscle and corresponding action lines
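As a purely illustrative sketch (the class and the coordinate values below are invented, not taken from the authors' system), an action line can be stored as a polyline running from origin, through optional control points, to insertion, with its length given by the sum of its segments:

```python
# Illustrative sketch: an action line as a polyline. The control points keep
# the line from cutting through bones; all coordinates here are made up.
from dataclasses import dataclass, field
from math import dist

@dataclass
class ActionLine:
    origin: tuple            # attachment point on the first bone
    insertion: tuple         # attachment point on the second bone
    control_points: list = field(default_factory=list)

    def points(self):
        return [self.origin, *self.control_points, self.insertion]

    def length(self):
        pts = self.points()
        return sum(dist(a, b) for a, b in zip(pts, pts[1:]))

# Biceps-like example: one head needs a control point to avoid the bone.
biceps_head = ActionLine(origin=(0.0, 0.0, 0.0),
                         insertion=(0.0, -0.30, 0.02),
                         control_points=[(0.02, -0.05, 0.03)])
print(round(biceps_head.length(), 3))
```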
Other examples of intersection between bones and action lines can be perceived only while animating the skeleton. This is exactly the case of the Triceps Brachii, for example, which is represented by three action lines (one for each muscle head) attaching the upper arm to the upper part of the lower arm. Figure 3 shows the definition of the action line that represents the Triceps middle head. In Figure 3.a, we can see the skeleton of the arm with the defined action line when the muscle is in its rest position. Figure 3.b shows the arm contraction and the resulting action line, if it were defined by a simple straight line. In this example we can see that if the action line is not correct, the muscle can contract during an extension action. Finally, Figure 3.c shows the correct simulation of the Triceps action line using some control points.

Moreover, Figure 3 also shows an example of the muscle shape simulation and its reaction to the arm motion. In these examples we can verify that the muscle belly is not attached to the bones by the origin and insertion of the action lines. It has its own origin and insertion points, simulating in this way the presence of tendons. In order to simulate the tendons we assume that the action line is, in fact, a thread linking two bones and crossing some pulleys, here represented by the control points. Tendons stretch only about 8% of their original length but, for the sake of simplicity, we define the tendons by giving distances between the muscle boundaries and the action line extremities. During the animation, these distances are kept constant.

Figure 3. The Triceps Brachii middle head action representation: (a) bones of the arm and Triceps action line with the muscle in its rest position; (b) action line and muscle reaction during the arm contraction with a straight action line; (c) correct simulation of the action line by using additional control points.

3.2. Shape Design

To simulate muscle deformations we use a set of mass points linked by springs and organized so as to ensure a correspondence between the action line and the muscle shape. We consider, in all cases, that each muscle belly completely envelops an action line or a part of an action line. Each point that composes the surface of a muscle is arranged between two neighbors in the horizontal and two others in the vertical direction, taking the action line of the muscle as the up reference. In this sense, we can also say that the extremities of the muscle shape have the same position as the action line extremities or, if we are using tendons, that they are placed on the action line.

Taking into account the limitations imposed by our deformation model, which was designed specifically to simulate fusiform muscles, we developed a resampling method whose goal is the generation of a simple and regular muscle surface. From a muscle form composed of triangles, this method is able to generate another muscle shape designed to achieve our objectives.

To resample a given dense irregular triangle mesh into a regular grid (Figure 4 shows an example of a resampled Triceps), we need to define the number of slices perpendicular to the action line on the new muscle shape, as well as the number of points in each slice. Except for the extremities, every slice has the same number of points. The algorithm goes through the whole action line and, at each position of a new muscle slice, constructs an imaginary circle. Lines are drawn in a star-shaped manner with their origin at the intersection between the action line and the slice. For each line we compute the outermost intersection point with the initial muscle. Each resulting intersection point becomes a point on the new muscle surface.

Figure 4. Model of the internal part of the Triceps (with 2160 points) and the resampled models with, respectively, 197, 182, 82 and 32 points.
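A rough sketch of the resampling loop is given below. The helper `outermost_radius` stands in for the ray/mesh intersection with the original irregular mesh (not implemented here, and an assumption of this sketch); the real method also collapses the two extremities to single points, which this simplification omits.

```python
# Illustrative sketch of the resampling idea (not the paper's code): points of
# the new, regular muscle surface are generated slice by slice along the action
# line, in star-shaped directions around it.
import math

def resample_muscle(action_line_z, n_slices, pts_per_slice, outermost_radius):
    """Return a list of slices; each slice is a list of (x, y, z) points.
    For simplicity the action line is treated as straight along the z axis."""
    z0, z1 = action_line_z[0], action_line_z[-1]
    surface = []
    for i in range(n_slices):
        z = z0 + (z1 - z0) * i / (n_slices - 1)
        slice_pts = []
        for j in range(pts_per_slice):
            angle = 2.0 * math.pi * j / pts_per_slice      # star-shaped directions
            r = outermost_radius(z, angle)                  # hit on the original mesh
            slice_pts.append((r * math.cos(angle), r * math.sin(angle), z))
        surface.append(slice_pts)
    return surface

# Toy stand-in for the mesh intersection: a fusiform (spindle) radius profile.
fusiform = lambda z, angle: 0.05 * math.sin(math.pi * z)
grid = resample_muscle([0.0, 1.0], n_slices=10, pts_per_slice=8,
                       outermost_radius=fusiform)
print(len(grid), len(grid[0]))   # 10 slices of 8 points each
```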
3.3. Deformation Method

The physical model presented here is based on the application of forces over all the mass points that compose the mesh, generating new positions for them (in correspondence with the geometric structure presented before, each point of the mesh corresponds to a particle in the physical model). Adding all the applied forces, we obtain a resultant force for each particle on the deformable mesh. For the sake of simplicity, we have considered three different forces: elasticity, curvature and constraint forces. The resultant force on each particle i can then be calculated as

f_i^result = f_elasticity(x_i, x_i^0, x_i^1, x_i^2, x_i^3) + f_curvature(x_i, x_i^0, x_i^1, x_i^2, x_i^3) + f_constraints,

where x_i is the vector position of particle i and x_i^0, x_i^1, x_i^2, x_i^3 are the positions of the particles that compose its neighborhood.

To simulate the elastic behaviour of the muscle mesh, we have used linear springs. The force that determines the degree of bending of a muscle is called the curvature force. As with the elasticity force, this force is also calculated for each mass point over the surface as a function of its four neighbours. The physical simulation of these effects is designed around a new kind of linear spring, which we call an angular spring. The difference between the two kinds of springs used to simulate elasticity is the way they are attached. Angular springs are also used as a technique to control the muscle volume.

The geometric constraints are developed to improve the response to conditions not formally explicit in our internal force model. The implementation of most of the constraint forces is done using inverse dynamics. We would like to emphasise that the methodology used to satisfy constraints allows the inclusion of new ones without the need to modify the physical model.

In order to reduce calculations, we have decided to consider only the representation of muscle surfaces. In fact, we believe we can simulate muscle deformation without directly considering volume characteristics. The surface is composed of a set of particles with mass density m. Its behaviour is determined by its interaction with the other particles that define the muscle surface. In correspondence with the geometric structure presented before, each point of the mesh corresponds to a particle in the physical model.

The main parameters that the user needs to set to perform deformations are two elasticity coefficients, two curvature coefficients, the muscle mass and a damping factor used during the motion simulation. More details about our muscle model can be found in [Nedel 98].

3.4. Motion Simulation

The movement simulation is done by applying systems of motion equations to each particle of the model. These systems rely on second-order differential equations derived from a definition of elasticity as the resistance to extension of a material, and of viscosity as the resistance to a change in extension of a material [Holton 95].

To create animations simulating the dynamics of elastic models, we have used Lagrange's equations of motion as presented by Terzopoulos et al. [Terzopoulos 87]. The ordinary differential equations of motion are integrated through time using a fourth-order Runge-Kutta method [Press 92]. We have chosen this method because it is more stable than Euler's method while retaining an acceptable execution time.
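The paper does not list the integrator itself; as a hedged illustration of the scheme it names, here is a minimal classic fourth-order Runge-Kutta step applied to a single damped linear spring (all coefficients below are arbitrary example values, not the paper's parameters):

```python
# Minimal sketch (assumed, not the authors' code) of a fourth-order Runge-Kutta
# step for one particle with a damped spring force, the kind of scheme the paper
# adopts instead of Euler integration for its better stability.
def spring_accel(x, v, rest_len, k, damping, mass):
    """1-D particle attached by a linear spring to the origin."""
    return (-k * (x - rest_len) - damping * v) / mass

def rk4_step(x, v, dt, accel):
    """Advance position x and velocity v by one time step dt."""
    k1x, k1v = v,               accel(x, v)
    k2x, k2v = v + 0.5*dt*k1v,  accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v)
    k3x, k3v = v + 0.5*dt*k2v,  accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v)
    k4x, k4v = v + dt*k3v,      accel(x + dt*k3x, v + dt*k3v)
    x_next = x + dt/6.0 * (k1x + 2*k2x + 2*k3x + k4x)
    v_next = v + dt/6.0 * (k1v + 2*k2v + 2*k3v + k4v)
    return x_next, v_next

# Example: one particle pulled back toward its rest length.
x, v = 1.2, 0.0
accel = lambda x, v: spring_accel(x, v, rest_len=1.0, k=50.0, damping=0.5, mass=0.01)
for _ in range(100):
    x, v = rk4_step(x, v, dt=0.001, accel=accel)
print(round(x, 4))
```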
4. Motion Motor

Currently, there are three ways to perform human motion. The first consists in picking a joint and interactively changing its value, while the second is to read and execute the motion from an animation file. In the third, the motion is captured in real time using tracking devices (as stressed in the next section). In all three cases, the system detects the joint motion and enables the deformation of the muscles, the motion of the bones and the update of the action lines.

Each object in our system is represented globally and locally. For simplification purposes during the motion process, we consider a single global reference system for the whole human figure and several local reference systems, one for each Degree of Freedom of the body. Every object that composes a human body is attached to a joint reference and represented in its local reference system. Then, when this joint moves, the object moves at the same time, automatically. In the case of the physically-based muscles, motion is guaranteed by updating the action lines.

The action line is the lowest level in our muscle simulation system. We first simulate the action line motion and then the muscle deformation. In this way, the muscle compression or extension depends directly on the action line. However, the final deformation also depends on other physical parameters. In fact, action lines are attached to bones that move according to the joints. When a joint angle changes, bones move and, consequently, muscles attached to these bones move in the same way.

The action line motion is produced in a module that, after each movement of the skeleton, detects which action lines are concerned and updates their end and control point positions, calculating the new tendon locations over the action line. To accomplish this, we maintain a list with pointers to all the action line segments and a flag for each one indicating whether it is in motion or not. If the action line changes during a joint motion, the flag is set.

To enable the muscle deformation, the system maintains a callback function called 20 times per second. This function verifies whether any action line or muscle is in motion. If one of the two conditions is satisfied, the muscle deformation procedure starts. A muscle becomes "static" again only when the motion of all its particles can be considered negligible (very small).
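The bookkeeping described above could look roughly like the following sketch; the structure and names are assumptions for illustration, not the original module, and the deformation step itself is omitted.

```python
# Illustrative sketch of the update logic: a flag per action line marks which
# lines moved, and a periodic callback (the paper calls it 20 times per second)
# triggers deformation only for muscles whose line or particles are still moving.
import time

class ActionLineState:
    def __init__(self, name):
        self.name = name
        self.in_motion = False        # set when a joint motion changes this line

class MuscleState:
    def __init__(self, name, action_line):
        self.name = name
        self.action_line = action_line
        self.max_particle_speed = 0.0

    def needs_deformation(self, threshold=1e-4):
        return self.action_line.in_motion or self.max_particle_speed > threshold

def deformation_callback(muscles):
    for m in muscles:
        if m.needs_deformation():
            # one step of the mass-spring deformation would run here (omitted)
            m.action_line.in_motion = False

# 20 Hz polling loop (simplified; the real system hooks this into its main loop).
muscles = [MuscleState("biceps", ActionLineState("biceps_line"))]
for _ in range(3):
    deformation_callback(muscles)
    time.sleep(1.0 / 20.0)
```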
5. Tracking Devices

Traditional rotoscopy in animation consists of recording the motion with a specific device for each frame and using this information to generate the image by computer. For example, a human walking motion may be recorded and then applied to a computer-generated 3-D character. This off-line approach provides very good motion, because it comes directly from reality. However, it does not bring any new concept to animation methodology, and for any new motion it is necessary to record reality again.

A real-time rotoscopy method consists of recording input data from a VR device in real time while, at the same time, applying the same data to a graphics object on the screen. Motion capture involves measuring the position and orientation of an object in physical space, then recording that information in a computer-usable form. For real-time motion capture devices, this data can be used to provide real-time feedback regarding the character and quality of the captured data. Flock of Birds™ is an example of a real-time motion capture input device [Dyer 95].

5.1 Technical Description

Magnetic motion capture systems use sensors to accurately measure the magnetic field created by a source. Such systems are real-time, in that they can provide from 15 to 120 samples per second (depending on the model and number of sensors) of 6D data (position and orientation) with minimal transport delay.

The system we use has a set of sixteen cabled magnetic sensors and one extended-range emitter (Flock of Birds™ from Ascension Technology) with integrated filters (FIR notch filter and adaptive IIR lowpass filter). The data transfer between the host computer and the sensor units is performed through an Annex Communications Server over the Ethernet, so applications can run on any workstation of the network. The emitter sends a magnetic field into the surrounding environment. At the same time, the sensors receive the magnetic signal and send their measurements back to the electronic units, which determine the positions and orientations of the sensors.

In addition, we can have hand motion tracking with the Virtual Technologies CyberGlove™, which is directly connected to the host computer via one serial port connection. The operating range for sensor measurements is 8 feet around the emitter. The static position accuracy is 0.1" RMS, with 0.5° angular accuracy, averaged over the translation range, with a position resolution of 0.03" at 12" and an angular resolution of 0.1° RMS at 12" [Ascension 94]. This accuracy depends on the distance to the base emitter; the active area should be less than ten square meters. There can be no metal in the active area, since it can interfere with the motion capture.

5.2 Motion Capture System

Our motion capture system, also called the anatomical converter [Molet 96], was developed to transcribe the performer's motion into the joint space of his/her associated virtual representation. The motion of the different segments is tracked using the magnetic sensors presented before.

The sensor management has been implemented as a two-level library module. The first, low-level library integrates a sensor data structure into the general skeleton hierarchy presented in Section 2. The second-level library is dedicated to human motion capture; it provides higher-level commands and the calibration procedure. Each sensor is assigned to a particular joint once and for all, so a performer only has to wear each sensor on the segment distal to its associated joint. Finally, the calibration stage automatically adjusts the location of the corresponding virtual sensor in the virtual model's hierarchy.

The anatomical converter has three important stages: skeleton calibration, sensor calibration and real-time conversion.

Skeleton calibration. The virtual skeleton calibration is done once per performer. First, we scale an average human model to the total height of the performer. Second, we eventually adjust some segments that statistically show a low correlation with the total height [Kroemer 90]. For that purpose we simply make hand measurements based on anatomical landmarks. The error of these measurements is estimated to be within two centimeters per segment. In applications where flexibility is more important than accuracy, such as demonstrators dedicated to a wide public, only the first stage is completed automatically. The total height can be estimated from the head sensor measurement in the initial standing posture.
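The first calibration stage amounts to a uniform scaling of the template's segment lengths by the performer's height; a toy sketch, with made-up segment names and lengths, is:

```python
# Illustrative sketch (assumed, not the authors' calibration code) of the first
# skeleton-calibration stage: scale an average template's segment lengths by the
# ratio of the performer's total height to the template height.
AVERAGE_TEMPLATE = {"height": 1.75, "upper_arm": 0.28, "forearm": 0.26, "thigh": 0.43}

def scale_template(template, performer_height):
    ratio = performer_height / template["height"]
    scaled = {name: length * ratio for name, length in template.items()}
    # Segments poorly correlated with total height would then be adjusted from
    # hand measurements (the second stage), which is omitted here.
    return scaled

print(scale_template(AVERAGE_TEMPLATE, performer_height=1.82))
```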
Sensor calibration. First, we need to establish the relation between the sensor attached to a performer segment (Figure 5.a) and the corresponding proximal joint of the virtual model (Figure 5.b). This step has to be performed every time the sensor attachments are modified.

Figure 5. Sensor attachment positions

The calibration assumes that the virtual skeleton reasonably matches the performer's skeleton, as mentioned before. Sensors are attached to the performer's body using the following rules: one sensor drives one joint and thus has to be fixed on the associated distal segment; segment regions where muscles and fat abound should be avoided when attaching the sensors.

The sensor calibration step computes the calibration matrix between each sensor and its associated proximal joint for the standard calibration posture (Figure 5.b). The calibration matrix is, in fact, the transformation matrix from the sensor frame to the joint final frame. It is composed by multiplying three other transformation matrices: from the sensor frame to the emitter reference frame (returned by the hardware), from the emitter reference frame to the world frame, and from the world frame to the joint final frame (which coincides with the joint initial frame in the calibration posture). During the motion, the calibration matrix is considered rigid even if small displacements due to muscles and fat occur.

In our system, only one sensor requires a position calibration: the spine sensor used to track the global position of the participant. All the other sensors need only an orientation calibration.

Real-time conversion. In this stage the joint angles and the global position are computed. In order to compute the joint angles associated with one sensor, we compute the joint rotation matrix R_j. In our hierarchy this matrix is decomposed into angles using an Euler angle sequence method. The matrix is evaluated using the following formula:

R_j = R_jw · R_we · R_es · R_sj,

with the following rotation matrices (row matrices):

R_j: unknown matrix of the joint, to be converted into anatomical angles;
R_jw: from the joint initial frame to the world frame;
R_we: from the world frame to the emitter reference frame;
R_es: from the emitter reference frame to the sensor frame (inverse of the transformation matrix returned by the hardware);
R_sj: rotation part of the calibration matrix computed at the initialization stage.

Our method is mostly driven by orientation rather than position; this means that a segment size difference between the performer and the virtual model is less likely to alter the virtual model's posture. The possible drawback is that the end-effectors may reach spatial positions different from those of the performer's effectors.
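As an illustration of the conversion formula (not the authors' code, and glossing over the row-matrix convention), the composition and one possible Euler decomposition could be written as follows; the toy frames and the 30-degree sensor reading are made-up values.

```python
# Sketch of R_j = R_jw · R_we · R_es · R_sj with 3x3 rotation matrices. The
# hardware-returned sensor-to-emitter rotation is inverted by transposition,
# since rotation matrices are orthogonal.
import numpy as np

def joint_rotation(R_jw, R_we, R_se, R_sj):
    """R_se is the sensor-to-emitter rotation returned by the hardware;
    R_es is its inverse (transpose)."""
    R_es = R_se.T
    return R_jw @ R_we @ R_es @ R_sj

def euler_zyx_from_matrix(R):
    """Decompose a rotation matrix into one possible Euler sequence (Z-Y-X)."""
    yaw = np.arctan2(R[1, 0], R[0, 0])
    pitch = np.arcsin(-np.clip(R[2, 0], -1.0, 1.0))
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

# Toy example: identity frames everywhere except a 30-degree sensor reading.
c, s = np.cos(np.radians(30)), np.sin(np.radians(30))
R_se = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
I = np.eye(3)
R_j = joint_rotation(I, I, R_se, I)
print(np.degrees(euler_zyx_from_matrix(R_j)))
```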
6. Results

Figure 6 shows the current state of our work in designing real muscles for the representation of a complete human body. Our current model is composed of 31 joints with 62 degrees of freedom, 73 bones, 33 muscles (represented by 105 fusiform muscle parts) and 186 action lines.

Figure 6. Muscled body

Figure 7 shows an example of elbow flexion performed by a body composed of bones and some muscles, specifically the Biceps Brachii and the Triceps Brachii.

Figure 7. Elbow flexion with muscle deformation

Figure 8 shows a motion capture example, comparing a converted posture with the performed one. This example was generated using 11 sensors.

Figure 8. Motion capture example

7. Conclusion

To implement a human representation model, we have presented an approach comprising the skeleton and the muscles. From a 3-D skeleton, and in accordance with our studies on anatomy and some practical experiments, we have redefined all the joint positions and orientations. Concerning the visual aspect, we have added bones to the model. Our main contribution on this topic is the design of a new human template.

Considering the muscles, we have presented a new technique to represent action lines, where tendons are defined over the lines. Similar techniques have been proposed in Biomechanics research but are not common in Computer Graphics applications. Even the best-known methods developed to simulate anatomically-based humans [Scheepers 97, Wilhelms 97] did not present this kind of solution, using simpler techniques to attach muscles to bones. To allow the reuse of sets of action lines over any human skeleton, all the muscle action lines are saved in a normalized form. The normalization is made as a function of the limb lengths.

We have chosen a physically-based approach to simulate muscles in interactive applications, because we believe we can obtain more fluid and realistic movements if we apply the same principles used in nature.
Animated Pedagogical Agent in a LearningEnvironmentMaria Augusta S. N. Nunes, Luciane Fraga, Leandro L. Dihl,Lisiane Oliveira, Cristiane R. Woszezenki, Deise J. Francisco,Glaucio J.C. Machado, Carmen R. D. Nogueira, Maria da GlóriaNotargiacomo{guta, fraga, dihl, loli, cristianerw, dfrancis, gcmachado, crdn,gloria}@urisan.tche.brGPEAD – Distance Learning GroupEngineering and Computer Science DepartmentURI University - Av. Universidade das Missões, 464.CP. 203 - Santo Ângelo - RS – Brazil - 98802-470Fone/Fax: 00-55-55-33137900http://www.urisan.tche.br/~gutaAbstract: Artificial Intelligence is an important area in the field of Computing applied to Education in terms of technological implementation. This paper describes IVTE, Intelligent Virtual Teaching Environment, implemented by Multi-Agents technology including pedagogical features, represented by Guilly. Guilly is a Animated Pedagogical Agent that acts based on a student model and teaching strategies.Keyword: Computer Science, Education, Learning Environment, Cognition, Games.IntroductionComputer Science has suffered modification in the last decades. In the same way, Artificial Intelligence in Education (AIED) has followed the same tendency.Taking this aspect into consideration, AIED [Te99] changes its individual context to collaborative and social context. It is bringing a strong tendency in the collaborative process among software pieces. When somebody works with Intelligent Educational Software, these pieces are Tutor and Student. Students make the collaboration between Tutor/Student, Student/Student and Virtual Tutor/Student.An Educational Software is considered efficient, according to John Self [Se99], when it models the student cognitive capacities, and thus, gives personalized instructions adapting them. Educational Software in this context is considered an Intelligent Tutoring System (ITS) . Besides, an ITS should incorporate a specific Student Model Base and Teaching Strategies tied to that Base.According to Paiva [PSH94], Student Model in a system can be considered like an explicit representation of some particular student feature, which allows Educational Software adaptable/personable teaching, to transform it into an ITS. The knowledge exchanges among students and system depends on what is inside the student model and teaching strategies. An ITS will be more efficient on condition that its student model and teaching strategies are more complete according to the Educational software needed.According to Hietala [HN98], an ITS provides effective teaching when it models students correctly and also when it selects adequate teaching strategies. Added to this, an ITS should be included in a social context, working as an autonomous agent [Nu98] in cooperation with students. Regarding Dimitrova’s work [DSB00], the system presents a student model to prove its adaptable features, adequate feedback and well-directed instruction knowledge, and then, the system is built to group model discarding the student model [Nu98].With reference to MAS in ITS, Hietala [HN98] writes that the agents are represented by Teaching Agents or Co-learners called Tutors, which are divided into Reactive and Cognitive Agents. The main features of these agents are communication, capacity, autonomy and initiative.Furthermore, in MAS, nowadays, a new type of agent called Animated Pedagogical Agent arises, when it acts in ITS, it supplies new features. 
The Animated Pedagogical Agent is considered the latest implementation paradigm used in an Educational Environment like ITS.At the same time, considering Boulay [BL00], there is a considerable problem in the Educational Software. This is happening because it is necessary to know how humans learn and pass this information to the ITS generated. To create an ITS it is necessary to build a Tutor, that is, a figure or a character that monitors the student action during system interaction, helping and stimulating the building knowledge process.The main aim of the IVTE project is to make possible to programmers the ability to built an educational software which is, actually, related to student deficiency and that contributes to explore its potentialitiesto make the cognitive process better. The IVTE is modeled and implemented like a pedagogical software to provide these features. Regarding Demazeau's works [De00] there is a tendency in using MAS and Virtual Reality (VR) together. As a result, the IVTE Software - Intelligent Virtual Teaching Environment [Nu01, Nu00] was adapted to these technology, particularly, ITS, MAS, VR, Animated Pedagogical Agents.An IVTE project is justified by new teaching, learning technologies that will be provided to ITS improving the efficiency level of teaching processes.This paper is organized as follows: section 2 describes features of Animated Pedagogical Agent; section 3 describes the IVTE environment, secondly, in section 4, the Animated Pedagogical Agent, called Guilly, and its behavior in an IVTE environment are presented. Additionally, in section 4, student model base and teaching strategies are presented. The following section, is about conclusion and future works and finally the references are presented at the last section. Animated Pedagogical AgentsThe Animated Pedagogical Agent model is recent in the Computing area. It has arisen from the need of new technologies to implement Educational Softwares. Considering MAS, the Animated Pedagogical Agents were created to bring more dynamism to Educational Software mainly in terms of pedagogical features. They are considered Alive Autonomous Characters cohabiting in teaching environments creating a rich interface face-to-face with the student. A Animated Pedagogical Agent interacts with a student through messages and gives interactive and dynamic feedback.The Animated Pedagogical Agents are used in Educational Softwares to increase their pedagogical competence. In general, being capable of negotiating theirs and student decisions instead of just commanding them [Nu98]. They have autonomy to decide what is relevant to teach for each student based on a Student Model.According to Johnson [JR00] an ITS Tutor, represented by Animated Pedagogical Agents, supplies feedback to student actions. It gives non-verbal feedback by using facial language, body language, and gestures. It also gives verbal feedback by using messages, and both, messages and facial body language; its purpose is motivate the student. The ability to use non-verbal feedback allows the Animated Pedagogical Agent to supply high feedback level for students. For instance, facial language feedback is subtler than verbal feedback.The insertion of a Animated Pedagogical Agent in an Educational Environment is very important, mainly due to its feedback between environment and student during software interaction. Furthermore, it allows fluent communication, keeping track of students performance and, thus, monitoring students. 
It creates a pleasant environment, easier to exchange messages, to talk, to have fun and to simulate cognitive reasoning. Consequently, the Animated Pedagogical Agent makes possible a quality jump in terms of pedagogical features in the Educational Software.An Animated Pedagogical Agent into Educational Software is extremely important because it is responsible for giving student feedback during the interaction. Also, it makes the communication between student and environment more efficient, monitors student performance and guides the user/student. Finally, it transforms the dialog/communication more pleasant, funny and stimulant. These features allow the increase of pedagogical quality in the Educational Softwares.According to Person [Pe01], innovation is necessary nowadays in terms of its modulate and implementation as it is done by using MAS architecture including Animated Pedagogical Agents in ITS.IVTE – Intelligent Virtual Teaching EnvironmentThe IVTE is a n educational software developed for children aged from 8 to 10. The IVTE software makes children aware of urban garbage correct selection.The environment is simulated by a small village where the child makes his/her way home after a hard day at school choosing one of the possible suggested itineraries. This microworld shows characteristics that are similar to reality. The context into which the microworld is inserted represents actions that occur in the daily routine of Brazilian and worldwide children.Inside the IVTE, just like in real life, the child may follow different directions (itineraries, heuristics) from school to his/her home. All these ways give the children a chance to interact and to keep contact with the kinds of garbage from daily life and, most crucially, these ways give the child a chance to select them correctly. The environment operates on a non-immersive Virtual Reality where the student has the clear sensation of being into a real environment. However, the student is only entitled to a partial view of the environment as the technology pertaining Virtual Reality allows the student to see just what is quite close to him. The student action into the environment is adjusted by the existing elements in the scenery. The scenery, shown in figure 1, is made of several directions between the school and the student residence. Along these ways, we can find trees, bushes, buildings faces, different kinds of garbage spread on the ground and peddlers.Figure 1 – IVTE Environment: Guilly Animated Pedagogical Agent, Meter and Localization MapThe IVTE game time may be configured by the teacher, therefore 15 minutes is default time.Student action happens in two different environments. They consist of outside and inside environments. Outside environment consists of many ways where students can walk from school to their home. In these ways, students come across different types of garbage found in the environment or produced by themselves during interaction. The interaction is encouraged by the student when he buys sweets, drinks, etc from the vendors in the environment.In addition, inside the home environment there is the same interaction with garbage as said before. At t h e student home, there are different trash bins to correct garbage selection.When students are navigating in the IVTE environment, they need to identify themselves and their colleagues. For this reason, there is a student localization map. 
This map, showed in figure 1, is interesting to make the direction sense easier not only for the students but for teachers when monitoring the students.Students are monitored by a Animated Pedagogical Agent called Guilly. Guilly, as showed in figure 1, gives a feedback to students through clues. These clues are given through messages, body and facial language. This work will present Guilly’s model based on interdisciplinary research.The IVTE environment is ludic, for this reason, student performance is measured by a “Meter” as shown in figure 1. The Meter is represented by a tree, growing up or keeping at the same stage, depending on student behavior in the IVTE environment.Afterwards, another IVTE important feature is the Zoom t o ol. Zoom is a hidden tool in the IVTE environment, appearing when the Animated Pedagogical Agent calls it or when the student deliberately calls it. It is used to show the environmental impacts of the student actions. In other words, the student must think about his/her actions in the environment. This procedure helps the student in the knowledge construction process.Animated Pedagogical Agent Model in the IVTEAn Educational Software should have teaching instructions focused on students needs, taking care of different student categories or model teaching processes. It should be adapted to Student Models to satisfy specific difficulties of each student category.The Animated Pedagogical Agent was inserted at the IVTE to promote adapted teaching through teaching strategies based on a Student Model Base. The main aim of inserting a Animated Pedagogical Agent in the IVTE is to reach a high pedagogical level, as it works as a Tutor at the teaching-learning process. According to Oliveira [Ol00], the Animated Pedagogical Agent of IVTE software is a cognitive agent, taking into consideration its autonomy, memory of past actions knowing the environment and other society agents, making plans for the future, being pro-active. Cognitive agents are based on knowledge, it means, they show intelligent behavior in many situations, and they have implicit and explicit knowledge representation.In the IVTE software, the Animated Pedagogical Agent is represented by a “worm”, called Guilly, whose name was chosen during the field research.The Field ResearchIn the IVTE software, the Animated Pedagogical Agent Guilly should select correct teaching strategies according to a specific Student Model. The Student Model Base was created in a field research. This field research involves children aged from 8 to 10 in three schools. The Schools were selected based on social aspects. The field research was carried out to investigate an ecological character, performance and teaching rhythms associated with different student activities. As a result, a searchand a collection of information which are used in the creation of a Student Model Base and Teaching Strategies will be accomplished.Student activities happen at the University campus (similar to the IVTE Environment where there are identified trash bins) and students are consuming food, like sweets and drinks. They walk around the municipal garbage dumping area and in the sector of garbage selection where workers separate garbage for recycling; they produce an ecological book, where they write a story about garbage; watch a film and a lecture about the garbage issue, collection selection and recycling; answer a questionnaire helped by their parents.Each child (user) receives a bonus based on his performance in each activity. 
This bonus is represented by a part of a tree (leaf, stem, branch, fruit). These parts together will produce a whole tree. Each child who gets the whole tree wins a prize. This way, the effect in the teaching process represented by the Meter in the IVTE Software is stimulated and proved successful. This field research was important to identify difficulties faced by children in the garbage selection process, making possible the creation of more adequate strategies based on stimulus to help students reach their learning needs.

The Student Model

The Student Model is used in the field research described above. The Student Model Base stores information such as the percentage of marks and how many times the students have used specific garbage for selection. The Student Model Base is also based on student performance and game time. The Student Model is dynamic in the software; in other words, the student plays in the IVTE Software and is modeled through his actions. The agents in the IVTE Software select the correct Student Model from the Student Model Base. When the software initializes the Student Model, the optimal Student Model is activated. While the student acts in the IVTE Software, he is monitored. This monitoring verifies whether it is necessary to change the Student Model. The Student Model changes according to student behavior. Student behavior is measured by error percentage, game time, the number of times the student selects the same garbage and the number of times the student proceeds correctly with some garbage. According to this Student Model, the IVTE Software interaction must be adjusted. Based on this data, the student is classified into four specific grades of Student Model, taking into consideration the field research and a 15-minute play time:

1. Optimal: from 100% to 75% correct trash bin selection, or just one selection error in the same type of garbage;
2. Medium: from 75% to 50% correct trash bin selection, or two selection errors in the same type of garbage;
3. Regular: from 50% to 25% correct trash bin selection, or three selection errors in the same type of garbage;
4. Weak: from 25% to 0% correct trash bin selection, or four selection errors in the same type of garbage.

The grades described are represented and manipulated internally by the IVTE Software, and the user/student does not know about them. Therefore, it is extremely necessary to select the correct Teaching Strategy to make teaching in the IVTE more efficient.

Teaching Strategies

The fundamental principle to guarantee pedagogical quality in a Teaching Environment is the use of different Teaching Strategies. A Teaching Strategy is the way through which the Tutor helps the student in his knowledge construction. In the IVTE Software, Teaching Strategies are applied by an Animated Pedagogical Agent according to the selected Student Model. The IVTE Animated Pedagogical Agent selects the Teaching Strategy based on the student's cognitive stage, knowledge level and learning rhythm. Concerning Oliveira [Ol00], the Teaching Strategies used by the Animated Pedagogical Agent in the IVTE are divided into six categories: face and body language; clues about garbage; alert messages; questions; zoom; and alerts about Meter performance, as shown in figure 2.

Figure 2 – Some used strategies

It is important to remember that when the student is acting correctly, Guilly will just use face and body language to preserve the student's cognitive stage. (A sketch of how this classification and strategy selection might look in code is given below.)
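The paper gives no implementation of this classification; the sketch below is purely illustrative (the StudentStats fields, the thresholds as coded and the strategy table are paraphrases of the text, not taken from the IVTE software) and shows one way the four grades and the strategy categories could be wired together.

```python
# Illustrative sketch only: names and thresholds paraphrase the grades
# described in the text; they are not taken from the IVTE implementation.
from dataclasses import dataclass

@dataclass
class StudentStats:
    correct_selections: int      # garbage items placed in the right trash bin
    total_selections: int        # all garbage items the student handled
    max_repeated_errors: int     # most errors made on a single garbage type

def classify_student_model(stats: StudentStats) -> str:
    """Map observed behaviour to one of the four Student Model grades."""
    ratio = stats.correct_selections / max(stats.total_selections, 1)
    if ratio >= 0.75 or stats.max_repeated_errors <= 1:
        return "optimal"
    if ratio >= 0.50 or stats.max_repeated_errors == 2:
        return "medium"
    if ratio >= 0.25 or stats.max_repeated_errors == 3:
        return "regular"
    return "weak"

# A possible mapping from Student Model grade to the strategy categories
# listed in the text (face/body language, clues, alerts, questions, zoom, Meter).
STRATEGIES = {
    "optimal": ["face and body language"],
    "medium":  ["face and body language", "clues about garbage"],
    "regular": ["face and body language", "clues about garbage", "alert message"],
    "weak":    ["face and body language", "questions", "zoom", "Meter alert"],
}

def select_strategies(stats: StudentStats) -> list[str]:
    return STRATEGIES[classify_student_model(stats)]
```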
Otherwise, if he has a bad performance, Teaching Strategies must be reinforced by using face and body language linked to other categories described before. Therefore, a well-prepared student can think and then makes his mind about his knowledge contradiction; thus, he can try again changing his behavior/performance in the IVTE environment.Partial Conclusion and Future WorksThe IVTE Software is an implemented tool to help students in their own knowledge construction in an efficient way. In this context, a Animated Pedagogical Agent is important when used in the ITS, because it supports many features that are suitable to student knowledge construction. These features are presented as personalized learning by using Teaching Strategies tied to a Dynamic Student Model. Teaching Strategies and Student Model are dynamic in the IVTE, it means, Student Model changes through student interaction in the IVTE environment and Teaching Strategies are applied according to Real Student Model. These features give more dynamism to Educational Software.Regarding future works a research about 3D Animation Techniques will be carried out to be applied to Animated Pedagogical Agent, exploiting and making much better the interface and communication level into the Teaching Environment. References[Te99] Tedesco, P. C. A. R. Mediating Meta-Cognitive Conflicts in a Collaborative problem-Solving Situation. IN: Young Researchers’ Track, AIED 99. Proceeding 1999, p 43-44.[Se99] Self, John. The Defining Characteristics of Intelligent Tutoring Sytems Research: ITS care, precisely. IN: International Journal of Artificial Intelligence in Education, 1999, 10, 350-364.[PSH94] Paiva, a.; Self, j.; Hartley, R. On the Dynamics of Learner Models IN: ECAI’94 – European Conference on Artificial Intelligence. Proceedings of ECAI’94. John Wiley & Sons, Ltda.[HN98] Hietala, P.; Niemirepo, T. The Competence of Learning Companion Agents. IN: Journal of Artificial Intelligence in Education, 1998, 9, 178-192.[Nu98] Nunes, Maria Augusta S. N. Modelagem de um Agente Cognitivo em um Ambiente de Simulação Utilizando uma Arquitetura Híbrida em Sistema Multiagente. Porto Alegre: CPGCC-UFRGS, 1998. Dissertação de Mestrado.[DSB00] Dimitrova, V.; Self, J., Brna, P. Involving the Learner in Diagnosis – Potencials and Problems. IN: Web Information Technologies: research, Education and commerce.Proceeding. Montepellier, France, 2000.[JR00] Johnson, W.L.; Rickel, J. W. Animated Animated Pedagogical Agents: Face-to-Face Interaction in interactive Learning Environment. IN: International Journal of Atificial Inelligence in Education, 2000.[BL00] Boulay, B. du; Luckin, R. How to Make your System Teach Well: Learning about teaching from teaches and learners. IN: International Journal of Atificial Intelligence in Education, 2000, 11, 1020-1029.[De00] Demazeau, Yves. Next Agent World. IN: ASAI’2000 Argentine Symposium on Artificial Intelligence. Proceedings...SADIO: Tandil,Argentina,2000. P 11-13.[Nu01] Nunes, M. A. S. N., Dihl, L. L., Oliveira, L. C. de, Woszezenki, C. R., Fraga, L., Nogueira, C. R. D., Francisco, D. J., Machado, G. J. C., Notargiacomo, M. G. C.Reactive Agents in the Ivte Software Using Java 3D. In: Imsa´2001- Internet and Multimedia Systems and Applications, 2001, Proceedings ...Honolulu - Hawaii.IMSA. 2001.[Nu00] Nunes, M. A. S. N.; Dihl, L. L.. Virtual Reality in a Virtual Microworld in Distance Learning. IN: AST’2000 – Argentine Symposium on Computing Technology.Proceedings of AST’2000, 2000, Tandil, Argentina: SADIO, 2000. 
v. p. 95-104.
[Pe01] Person, N. K.; Graesser, A. C.; Kreuz, R. J.; Pomeroy, V. Simulating Human Tutor Dialog Moves in AutoTutor. IN: International Journal of Artificial Intelligence in Education, 2001, 12.
[Ol00] Oliveira, L. C. Uma Proposta de Modelagem de Aluno para o AVEI – Ambiente Virtual de Ensino Inteligente: Trabalho de Conclusão. Santo Ângelo: URI, 2000.
animatediff-cli-prompt-travel usage notes

animatediff-cli-prompt-travel is an interactive command-line tool for generating animated difference maps of travel scenes. With it, users can easily set up images of two travel scenes and produce an animated difference map between them, so that viewers can clearly see how the two scenes differ.

Using animatediff-cli-prompt-travel is straightforward: the user only needs to follow the prompts step by step. First, the user provides images of the two travel scenes and launches the tool from the command line. Next, the tool guides the user through the animation parameters, such as animation speed and transition effects. Finally, the tool generates the animated difference map and saves it to the specified location for the user to view and use; a hypothetical configuration for such a run is sketched below.
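The document does not show a configuration. In the public animatediff-cli-prompt-travel project, the two scenes are normally described by text prompts that change over the course of the animation ("prompt travel") rather than by two input images, and a run is driven by a JSON configuration file. The sketch below is only an assumption of what such a configuration might look like: the key names follow the project's published sample configs as far as recalled here and should be verified against the repository before use.

```python
# Hypothetical sketch of a prompt-travel configuration, written as a Python
# dict and saved as JSON. Key names are assumptions based on the project's
# sample configs; verify against the repository's config/prompts examples.
import json

config = {
    "name": "travel_demo",
    "prompt_map": {
        # frame number -> prompt that becomes active from that frame onward
        "0":  "a quiet mountain village at sunrise, travel photo",
        "32": "the same village at night, lanterns, travel photo",
    },
    "n_prompt": ["lowres, bad anatomy"],   # negative prompt
}

with open("prompt_travel_demo.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=2, ensure_ascii=False)

# The animation itself would then be generated from the command line,
# e.g. `animatediff generate -c prompt_travel_demo.json`; the exact flag
# names are an assumption, so consult `animatediff generate --help`.
```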
With animatediff-cli-prompt-travel, users can quickly create engaging travel animation effects without complex technical knowledge or specialized software. The tool offers a rich set of animation options to meet different needs and creative ideas. The generated animated difference maps can be used for web design, social media sharing and similar purposes to attract a wider audience.

In short, animatediff-cli-prompt-travel is a powerful yet easy-to-use tool that helps users quickly generate animated difference maps of travel scenes and bring their ideas to life. If you need animated difference maps of travel scenes, give animatediff-cli-prompt-travel a try to make your work more vivid and appealing.
Bring Odors into Virtual Worlds
By Tanya Basu (塔尼娅·巴苏); translated by Han Cong (韩聪); source: 《英语世界》, 2024, Issue 5

Soon, you may be able to smell the metaverse. Scientists have come up with a new way to introduce odors into virtual reality via small, wireless interfaces. Creating smells in virtual reality is a vexing problem that has prevented consumer VR devices from offering a full sensory experience in most settings. "People can touch in VR," says Xinge Yu, a professor at the department of biomedical engineering at the City University of Hong Kong and the lead author of the new paper, published in Nature Communications. "And of course, you can see and hear in VR. But how about smell and taste?"
COMPUTER ANIMATION AND VIRTUAL WORLDSComp.Anim.Virtual Worlds 2004;15:95–108(DOI:10.1002/cav.8)******************************************************************************************************Fast and learnable behavioral andcognitive modeling for virtual character animationBy Jonathan Dinerstein*,Parris K.Egbert,Hugo de Garis and Nelson Dinerstein************************************************************************************Behavioral and cognitive modeling for virtual characters is a promising field.It significantly reduces the workload on the animator,allowing characters to act autonomously in abelievable fashion.It also makes interactivity between humans and virtual characters more practical than ever before.In this paper we present a novel technique where an artificial neural network is used to approximate a cognitive model.This allows us to execute the model much more quickly,making cognitively empowered characters more practical for interactive applications.Through this approach,we can animate several thousand intelligent characters in real time on a PC.We also present a novel technique for how a virtual character,instead of using an explicit model supplied by the user,can automatically learn an unknown behavioral/cognitive model by itself through reinforcement learning.The ability to learn without an explicit model appears promising for helping behavioral and cognitive modeling become more broadly accepted and used in the computer graphics community,as it can further reduce the workload on the animator.Further,it provides solutions for problems that cannot easily be modeled explicitly.Copyright #2004John Wiley &Sons,Ltd.Received:May 2003;Revised:September 2003KEY WORDS :computer animation;synthetic characters;behavioral modeling;cognitivemodeling;machine learning;reinforcement learningIntroductionVirtual characters are an important part of computer graphics.These characters have taken forms such as synthetic humans,animals,mythological creatures,and non-organic objects that exhibit lifelike properties (walking lamps,etc).Their uses include entertainment,training,and simulation.As computing and rendering power continue to increase,virtual characters will only become more commonplace and important.One of the fundamental challenges involved in using virtual characters is animating them.It can often be difficult and time consuming to explicitly define all aspects of the behavior and animation of a complex virtual character.Further,the desired behavior may be impossible to define ahead of time if the character’s virtual world changes in unexpected or diverse ways.For these reasons,it is desirable to make virtual char-acters as autonomous and intelligent as possible while still maintaining animator control over their high-level goals.This can be accomplished with a behavioral model :an executable model defining how the character should react to stimuli from its environment.Alternatively,we can use a cognitive model :an executable model of the character’s thought process.A behavioral model is reactive (i.e.,seeks to fulfill immediate goals),whereas a cognitive model seeks to accomplish long-term goals through planning :a search for what actions should be performed in what order to reach a goal state.Thus a cognitive model is generally considered more powerful than a behavioral one,but can require significantly more processing power.As can be seen,behavioral and cognitive modeling have unique strengths and weak-nesses,and each has proven to be very useful for virtual character 
animation.However,despite the success of these techniques in certain domains,some important arguments have been brought against current behavioral and cognitive mod-eling systems for autonomous characters in computer graphics.******************************************************************************************************Copyright #2004John Wiley &Sons,Ltd.*Correspondence to:Jonathan Dinerstein,Brigham Young University,3366TMCB,Provo,UT 84602,USA.E-mail:jondinerstein@First,cognitive models are traditionally very slow to execute,as a tree search must be performed to formulate a plan.This speed bottleneck requires the character to make suboptimal decisions and limits the number of virtual characters that can be used simultaneously in real time.Also,since a search of all candidate actions throughout time is performed,it is necessary to use only a small set of candidate actions(which is not practical for all problems,especially those with continuous action spaces).Note that behavioral models are currently more popular than cognitive models,partially because they are usually significantly faster to execute.Second,for some problems,it can be very difficult and time consuming to construct explicit behavioral or cog-nitive models(this is known as the curse of modeling in the artificial intelligencefield).For example,it is not uncommon for behavioral/cognitive models to require weeks to design and program.Therefore,it would be extremely beneficial to have virtual characters be able to automatically learn behavioral and cognitive models if possible,alleviating the animator of this task.In this paper,we present two novel techniques.In the first technique,an artificial neural network is used to approximate a cognitive model.This allows us to exe-cute our cognitive model much more quickly,making intelligent characters more practical for interactive ap-plications.Through this approach,we can animate several thousand intelligent characters in real time on a PC.Further,this approach allows us to use optimal plans rather than suboptimal plans.The second technique we introduce allows a virtual character to automatically learn an unknown behavioral or cognitive model through reinforcement learning.The ability to learn without an explicit model appears pro-mising for helping behavioral and cognitive modeling become more broadly used in the computer graphics community,as this can further reduce the workload on the animator.Further,it provides solutions for problems that cannot easily be modeled explicitly.In summary,this paper presents the following origi-nal contributions:*a novel technique for fast execution of a cognitivemodel using neural network approximation;*a novel technique for a virtual character to auto-matically learn an approximate behavioral or cogni-tive model by itself(we call this offline character learning).We present each of these techniques in turn.We begin by surveying related work.We then give a brief introduction to cognitive modeling(as it is less well known than behavioral modeling)and neural networks.Next we present our technique for using neural networks to ra-pidly approximate cognitive models.We then give a brief introduction to reinforcement learning,and then present our technique for offline character learning.Next we present our experience with several experimental appli-cations and the lessons learned.Finally,we conclude with a summary and possible directions for future work.Related W orkPrevious computer graphics research in the area of autonomous virtual characters 
includes automatic gen-eration of motion primitives.1–7This is useful for redu-cing the work required by animators.More recently, Faloutsos et al.8present a technique for learning the preconditions from which a given specialist controller can succeed at its task,thus allowing them to be combined into a general-purpose motor system for physically based animated characters.Note that these approaches to motor learning focus on learning how to move to minimize a cost function(such as the energy used). Therefore,these techniques do not embody the virtual characters with any decision-making abilities.However, these techniques can be used in a complementary way with behavioral/cognitive modeling in a multilevel animation system.In other words,a behavioral/cogni-tive model makes a high-level decision for the character (e.g.,‘walk left’),which is then carried out by a lower-level animation system(e.g.,skeletal animation).A great deal of research has also been performed in control of animated autonomous characters.9–12These techniques have produced impressive results,but are limited in two aspects.First,they have no ability to learn,and therefore are limited to explicit prespecified behavior.Secondly,they only perform behavioral con-trol,not cognitive control(where behavioral means re-active decision making and cognitive means reasoning and planning to accomplish long-term tasks).Online behavioral learning has only begun to be explored in computer graphics.13–15A notable example is Blumberg et al.,16where a virtual dog can be interactively taught by the user to exhibit desired behavior.This technique is based on reinforcement learning and has been shown to work extremely well.However,it has no support for long-term reasoning to accomplish complex tasks.Also, since these learning techniques are all designed to be used online,they are(for the sake of interactive speed) limited in terms of how much they can learn.To endow virtual characters with long-term reason-ing,cognitive modeling for computer graphicswasJ.DINERSTEIN ET AL.************************************************************************************************************************************************************************************************************ Copyright#2004John Wiley&Sons,Ltd.96Comp.Anim.Virtual Worlds2004;15:95–108recently introduced.17Cognitive modeling can provide a virtual character with enough intelligence to automa-tically perform long-term,complex tasks in a believable manner.The techniques we present in this paper build on the successes of traditional behavioral and cognitive modeling with the goal of alleviating two important weaknesses:performance of cognitive models,and time-consuming construction of explicit behavioral and cognitive models.We will first present our technique for speeding up cognitive model execution through approx-imation.We will briefly review cognitive modeling and neural networks,and then present our new technique.Introduction to Cognitive ModelingCognitive modeling 17–20is closely related to behavioral modeling,but is less well known,so we now provide a brief introduction.A cognitive model defines what a character knows,how that knowledge is acquired,and how it can be used to plan actions.The traditional approach to cognitive modeling is a symbolic approach.It uses a type of first-order logic known as ‘the situation calculus’,wherein the virtual world is seen as a se-quence of situations,each of which is a ‘snapshot’of the state of the world.The most important component of a cognitive 
model is planning.Planning is the task of formulating a se-quence of actions that are expected to achieve a goal.Planning is performed through a tree search of all candidate actions throughout time (see Figure 1).How-ever,it is usually cost prohibitive to plan all the way to the goal state.Therefore,any given plan is usually only a partial path to the goal state,with new partial plans formulated later on.The animator has high-level control over the virtual character since she can supply it with a goal state.Note that to achieve real-time performance it is necessary to have the goal hard-coded into the cognitive model.This is because it is necessary to implement custom heuristics to speed up the tree search for planning (for further details see Funge et al.17).Therefore,either an animator and programmer must collaborate,or the programmer must also be the animator.This traditional symbolic approach to cognitive model-ing has many important strengths.It is explicit,has formal semantics,and is both human readable and executable.It also has a firm mathematical foundation and is well established in Al theory.However,it also has somesignificant weaknesses with respect to application in computer graphics animation.Since planning is per-formed through a tree search,and the branching factor is the number of actions to consider,the set of candidate actions must be kept very small if real-time performance is to be achieved.Also,to keep real-time performance,we are limited to short (suboptimal)plans.Another performance problem that is unique to computer graphics is the fact that the user may want to have many intelligent virtual characters interacting in real time.In most situa-tions,on a commodity PC,this is impossible to achieve with the traditional symbolic approach to planning.An-other limitation is that it is not possible to have a virtual character automatically learn a cognitive model by itself (which could further reduce the workload on the anima-tor,and provide solutions to very difficult problems).Introduction to Artif|cialNeural NetworksNote that there are many machine learning techniques,many of which could be used to approximate an explicit cognitive model.However,we have chosen to use neural networks because they are both compact and computationally efficient.In this section we briefly review a common type of artificial neural network.22A more thorough introduction can be found in Grzeszczuk et al.5There are many libraries and applications publicly available*(free and commercial)for constructing and executing artificial neuralnets.Figure 1.Planning is performed with a tree search of all candidate actions throughout time.To perform planning in real time without dedicated hardware,it is usually necessary to greatly limit the number of candidate actions and to onlyformulate short (suboptimal)plans.*For example,SNNS (rmatik.uni-tuebingen.de/pub/SNNS)and Xerion (/pub/xerion).FAST AND LEARNABLE BEHAVIORAL AND COGNITIVE MODELING************************************************************************************************************************************************************************************************************Copyright #2004John Wiley &Sons,Ltd.97Comp.Anim.Virtual Worlds 2004;15:95–108A neuron can be modeled as a mathematical operator that maps R p !R .Consider Figure 2(a).Neuron j re-ceives p input signals (denoted s i ).These signals are scaled by associated connection weights w ij .The neuron sums its input signalsz j ¼w 0j þX p i ¼1s i w ij ¼u Áw jwhere u ¼½1;s 1;s 2;...;s 
p is the input vector and w j ¼½w 0j ;w 1j ;...;w pj is the connection weight vector.The neuron outputs a signal s j ¼g ðz j Þ,where g is an activation function:s j ¼g ðz j Þ¼1=ð1þe Àz j ÞA feedforward artificial neural network (see Figure 2b),also known simply as a neural net,is a set of interconnected neurons organized in yer l receives inputs only from the neurons of layer l À1.The first layer of neurons is the input layer and the last layer is the output layer .The intermediate layers are called hidden layers .Note that the input layer has no functionality,as its neurons are simply ‘containers’for the network inputs.A neural network ‘learns’by adjusting its connection weights such that it can perform a desired computa-tional task.This involves considering input–output ex-amples of the desired functionality (or target function ).The standard approach to training a neural net is the backpropagation training algorithm.23Note that it has been proven that neural networks are universal function approximators (see Hornik et al.24).An alternative approach that we considered was to use the continuous k-nearest neighbor algorithm.21Unlike neural nets,k -nearest neighbor provides a local approx-imation of the target function,and can be used auto-matically without the user carefully selecting inputs.Also,k -nearest neighbor is guaranteed to correctly re-produce the examples that it has been provided (whereas no such guarantee exists with neural nets).However,k -nearest neighbor requires the explicit sto-rage of many examples of the target function.Becauseof this storage issue,we opted to use a neural net approach.Fast Animation Using Neural Network Approximation ofCognitive ModelsThe novel technique we now present is analogous to how a human becomes an expert at a task.As an example,let’s consider typing on a computer keyboard.When a person first learns how to type,she must search the keyboard with her eyes to find every key she wishes to press.However,after enough experience,she learns (i.e.,memorizes)where the keys are.Thereafter,she can type more quickly,only having to recall where the keys are.There is a strong parallel between this example and all other tasks humans perform.After enough experi-ence we no longer have to implicitly ‘plan’or ‘search’for our actions;we simply recall what to do.In our technique,we use a neural net to learn (i.e.,memorize)the decisions made through planning by a cognitive model to achieve a goal.Thereafter,we can quickly recall these decisions by executing the trained neural net.Training is done offline and then the trained network is used online.Thus,we can achieve intelligent virtual characters in real time using very few CPU cycles.We now present our technique in detail,first discuss-ing the structure of our technique,followed by how to train the neural network,and then finally how to use the trained network in practice.StructureA cognitive model with a goal defines a policy .A policy specifies what action to perform for a given state.A policy is formulated asa ¼ ðiÞFigure 2.(a)Mathematical model of a neuron j.(b)A three-layer feedforward neural network of p inputs and q outputs.J.DINERSTEIN ET AL.************************************************************************************************************************************************************************************************************Copyright #2004John Wiley &Sons,Ltd.98Comp.Anim.Virtual Worlds 2004;15:95–108where i is the current state and a is the action to perform.This is a 
non-context-sensitive formulation,which cov-ers most cognitive models.However,if desired,context information can also be supplied as input (e.g.,the last n actions can be input).We train our feed-forward neural net to approximate a specific policy .We denote theneural net approximation of the policy ^(see Figure 3a).Note that the current state (network input)and action (output)will likely be vector-valued for non-trivial virtual worlds and characters.Further,a logical selec-tion and organization of the input and output compo-nents can help make the target function as smooth as possible (and therefore easier to approximate).Selecting network inputs will be discussed in more detail later.Also note that the input should be normalized and the output denormalized for use.Specifically,the normal-ized input components should have zero means and unit variances,and the normalized output components should have 0.5means and be in the range [0.1,0.9].This ensures that all inputs contribute equivalently,and that the output is in a range the neural net’s activation function can produce.An important question is how many hidden layers (and how many neurons in each of those hidden layers)we need to use in a neural net to achieve a good approximation of a policy.This is important because we want a reasonable approximation,but we also want the neural net to be as fast to execute as possible (i.e.,there is a speed/quality trade-off).We have found that,at minimum,it is best to use one hidden layer with the same number of neurons as there are inputs.If a higher-quality approximation is desired,then it is useful to use two hidden layers,the first with 2p þ1neurons (where p is the number of inputs),and the second with 2q þ1neurons (where q is the number of outputs).We have found that any more layers and/or neurons than this usually provides little benefit.Note that the state and action spaces can be contin-uous or discrete,as all processing in a neural network is real-valued.If discrete outputs are desired,the real-valued outputs of the network should simply be quan-tized to predefined discrete values.Even though cognitive models (i.e.,policies)produce good animations in most cases,there are some cases in which they can appear too predictable.This is due to the fact that cognitive models are fundamentally determi-nistic (mapping states to actions).We now introduce an alternative form of our technique that addresses this problem.First note that,in some cases,it may be interesting to not always perform the same action for a given state (even if that action is most desirable).Occa-sional slight randomness in the decision making of an intelligent virtual character,performed in the right manner,can dramatically improve the aesthetic quality of an animation when predictability cannot be tolerated.However,it is not enough to simply choose actions at random,as this makes the virtual character appear very unintelligent.Instead,we do this in a much more believable fashion with a modification of the structure of our technique (see Figure 3b).We formulate it as a priority function :priority ¼P ði ;a ÞThe priority function represents the value of performing any given action a from the current state i under a policy .The priority can simply be an ordering of the best action to the worst,or can represent actual value in-formation (i.e.,how much an action helps the character reach a goal state).Using a priority function allows us to query for the best action at any given state,but also lets us choose an alternative action if 
desired (with knowledge of that action's cost). For example, by using the known priorities of all candidate actions from the current state, we can select an action probabilistically. Thus our virtual character is able to make intelligent, but non-deterministic, decisions for all situations. However, note that while this non-deterministic technique is useful, we focus on standard policies in this paper. This is because they are simpler, faster, and correspond to the standard approach to cognitive modeling (i.e., always using the best possible action in a given state).

Figure 3. (a) Neural net approximation of a policy. The network input is the current state, the output is the action to perform. T and T⁻¹ normalize the input and denormalize the output, respectively. (b) Neural net approximation of a priority function.

Training the Neural Network

We train the neural net using the backpropagation algorithm with examples of the cognitive model's decisions (i.e., policy). A naive approach is to randomly select many examples of the entire state space. However, this is wasteful because we are usually only interested in a small portion of the state space. This is because, as a character makes intelligent decisions, it will find itself traversing into only a subset of all possible states. As an example, consider a sheepdog that is herding a flock of sheep. It is illogical for the dog to become afraid of the sheep and run away. It is equally illogical for the sheep to herd the dog. Therefore, such states should never be experienced in practice. We have found that by ignoring uninteresting states the neural net's training can focus on more important states, resulting in a higher-quality approximation. However, for the sake of robustness, it may be desirable to also use a few randomly selected states that we never expect to encounter (to ensure that the neural net has at least seen a coarse sampling of the entire state space).

To focus on the subset of the state space of interest, we generate examples by running many animations with the cognitive model. At each iteration of an animation, we have a current state and the action decided upon, which are stored for later use as training examples. We have found that using a large number of examples is best to achieve a well-generalized trained network. Specifically, we prefer to use between 5000 and 20,000 examples. Note that this is far more than is normally used when training neural nets, but we found that the use of so many examples helps to ensure that all interesting states are visited at least once (or at least a very similar state is visited). Finally, note that if a small time step is used between actions, it may be desirable to keep only an even subsampling of the examples generated through animation. This is because, with a small time step, it is likely that little state change will occur with each step and therefore temporally adjacent examples may be virtually identical.
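As a concrete illustration of this example-gathering step (the paper gives no code; every name below is a placeholder), the following sketch runs the explicit cognitive model over many short animations, records (state, action) pairs, and keeps an even subsample before the pairs are handed to backpropagation.

```python
# Illustrative sketch of the example-gathering stage: class and function names
# are placeholders invented for this sketch, not taken from the paper.
def collect_examples(cognitive_model, make_initial_state, step,
                     n_episodes=200, episode_length=100, keep_every=4):
    """Run the explicit (slow) cognitive model and record its decisions."""
    examples = []
    for _ in range(n_episodes):
        state = make_initial_state()
        for _ in range(episode_length):
            action = cognitive_model.plan(state)   # tree-search planning step
            examples.append((state, action))
            state = step(state, action)            # advance the virtual world
    # With a small time step, temporally adjacent examples are nearly
    # identical, so keep only an even subsample, as suggested in the text.
    return examples[::keep_every]

def add_random_states(examples, cognitive_model, random_state, n_random=100):
    """Optionally add a coarse sampling of states we never expect to reach."""
    for _ in range(n_random):
        s = random_state()
        examples.append((s, cognitive_model.plan(s)))
    return examples
```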
We used a backpropagation learning rate of approximately 0.1 and momentum of approximately 0.4 in all our experiments. Training a neural net took about 15 minutes on average using a 1.7 GHz PC. In all of our experiments, an appropriate selection of inputs to the neural net resulted in a good approximation of a cognitive model.

Choosing Salient Variables and Features

Training a neural network is not a conceptually difficult task. All that is required is to supply the backpropagation algorithm with examples of the desired behavior we want the network to exhibit. However, there is one well-known challenge that we need to discuss: selecting network inputs. This is critical as too many inputs can make a neural net computationally infeasible. Also, a poor choice of inputs can be incomplete or may define a mapping that is too rough for a neural net to approximate well. General tips for input selection can be found in Haykin,22 so we only briefly mention key points and focus our current discussion on lessons we have learned specific to approximation of cognitive models. The inputs should be salient variables (no constants), which have a strong impact in determining the answer of the function. Further, if possible, features should be used. Features are transformations or combinations of state variables. This is useful not only for reducing the total number of inputs but also for making the input–output mapping smoother. Through experience, we have discovered some useful features that we now present.

When approximating cognitive models, many of the potential inputs represent raw 3D geometry information (position, orientation, etc). We have found that it is very important to make all inputs rotation and translation invariant if possible. Specifically, we have found it very useful to transform all inputs so that they are relative to the local coordinate system of the virtual character. That is, rather than considering the origin to be at some fixed point in space, transform the world such that the origin is with respect to the virtual character. This not only makes it unnecessary to input the character's current position and orientation, but also makes the mapping smoother. We have also found it useful, in some cases, to separate critical information into distinct inputs. For example, if a cognitive model relies on knowing the direction and distance to an object in its virtual world, this information could be presented as a scaled vector (dx, dy, dz). However, we have found that in many cases it is better to present this information as a normalized vector with distance (x, y, z, d), as the decision-making may be dramatically different depending on the distance. In other words, if a piece of information is very important to the decision-making of a cognitive model, the mapping will likely be more smooth if that information is presented as a separate input to the neural net. Thus we need to balance the desire to keep the number of inputs low with clearly presenting all salient information. Finally, note that choosing good inputs sometimes requires experimentation to see what choice produces the best trained network, as input selection can be a difficult task. However, recall that if storage is not a concern k-nearest neighbor can be used instead of a neural network and (as described in Mitchell21) can automatically discover those inputs that are necessary to approximate the target function. Several practical examples of selecting good inputs for neural networks to approximate cognitive models are given in the results section of this paper.
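As an illustration of the character-relative features recommended above (again, the paper provides no code; the function below is purely illustrative and uses a 2D world for brevity), an object's world-space position can be re-expressed in the character's local frame and then split into a unit direction plus a distance.

```python
# Illustrative sketch: express an object's world-space position relative to the
# character's local frame, then split it into a unit direction and a distance,
# as suggested in the text. A 2D world is used here purely to keep it short.
import math

def character_relative_features(char_pos, char_heading, obj_pos):
    """Return (dir_x, dir_y, distance) of obj_pos in the character's frame.

    char_pos     : (x, y) world position of the character
    char_heading : heading angle in radians (0 = facing +x in world space)
    obj_pos      : (x, y) world position of the object of interest
    """
    # Translate so the character is at the origin.
    dx = obj_pos[0] - char_pos[0]
    dy = obj_pos[1] - char_pos[1]
    # Rotate by -heading so the character always faces the same local axis.
    cos_h, sin_h = math.cos(-char_heading), math.sin(-char_heading)
    lx = dx * cos_h - dy * sin_h
    ly = dx * sin_h + dy * cos_h
    # Separate direction and distance into distinct inputs.
    dist = math.hypot(lx, ly)
    if dist > 0.0:
        lx, ly = lx / dist, ly / dist
    return lx, ly, dist

# Example: an object 3 units in front of a character facing +y.
print(character_relative_features((0.0, 0.0), math.pi / 2, (0.0, 3.0)))
# -> approximately (1.0, 0.0, 3.0) in the character's local frame
```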
Animated BDP Agents in Virtual Environments
A. Nijholt, A. Egges, R. op den Akker & J. Zwiers
Parlevink Research Group, University of Twente
PO Box 217, 7500 AE Enschede, the Netherlands

Abstract
We introduce a Beliefs, Desires and Plans (BDP) agent that acts in a virtual environment using multi-modal interaction with the user. The environment is our virtual theatre environment. In this environment different agents have been introduced. In order to obtain a more uniform framework for agent interaction and a more uniform agent architecture we introduced a BDP agent for interactions between visitors and domain agents that inhabit this environment. We demonstrate how such an agent can play the role of a librarian in a virtual library.

1. Introduction
In our virtual theatre environment [6] we experiment with virtual reality, agent technology and multi-modal interaction and, in particular, we pay attention to (virtual) humans interacting with a visualized environment and the role this visualization plays. Several agents have been introduced. The approach has been bottom-up. We started with an information and transaction agent (Karin) that could be accessed, using NL, about performances in a theatre. Karin is a 3D embodied agent that shows simple facial expressions and - using lip synchronization and TTS synthesis - mouths her answers. We experimented with several agents that help the visitor to navigate in this virtual environment and to find the information that suits his interests. Our present agent (cf. [5]) can be addressed in natural language. It knows about the environment, the locations and objects, and the routes that can be walked to go from one location to another. The visitor walks around in the virtual environment, but he can also look at a 2D map and see where exactly he is. Positioning the mouse on objects allows the visitor to ask a question like "What is this?", or say "Bring me there." In the latter case the navigation agent gives control to an agent that guides the user to the desired position. A framework that allows the introduction of different agents is described in [2].

2. Dialogues with BDP Agents
As mentioned, our approach to introducing agents in our virtual environment has been bottom-up. Only after having some agents that had to interact with each other and with the visitor did we come up with a framework that allowed communication between agents and the introduction of new agents. The next step in our research is the introduction of a more uniform agent architecture which should allow us to introduce agents with different knowledge, intelligence and behaviour. Behaviour in our environment includes verbal and nonverbal communication with the visitor, animation of face and body parts (since some of our agents are embodied) and performing 'physical' actions in the virtual environment. For that purpose we introduced our version of BDP (Beliefs, Desires, Plans) agents, agents that can be used for multi-modal interaction in virtual environments. Beliefs and desires are represented with quasi-logical forms [1]. Conditional plans (CPs) allow actions according to some conditions. For specifying a BDP agent an agent specification language has been developed. A BDP interpreter applies conditional plans given a current state of the agent. BDP agents can interact by sending and receiving messages using a communication platform [2]. In order to construct dialogue systems using our BDP agent technology we extended the QLF formalism with communicative acts. Agents are plan-based, but they can communicate with other agents in order to perform actions or make dialogue moves.
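The paper specifies its agents in a dedicated agent specification language and implements the system in Java. Purely as an illustration of the interpreter cycle described above (apply the conditional plans whose conditions hold against the agent's current beliefs, and let plan actions include sending messages to other agents), a minimal sketch follows; it is written in Python for brevity, all names are invented for the example, and beliefs are plain sets of facts rather than quasi-logical forms.

```python
# Minimal, illustrative sketch of a BDP-style interpreter loop: beliefs and
# desires are kept as simple sets of ground facts instead of the paper's
# quasi-logical forms, and plan bodies are plain Python callables.
class ConditionalPlan:
    def __init__(self, name, condition, actions):
        self.name = name
        self.condition = condition        # callable: beliefs -> bool
        self.actions = actions            # list of callables: agent -> None

class BDPAgent:
    def __init__(self, beliefs, desires, plans):
        self.beliefs = set(beliefs)       # what the agent currently holds true
        self.desires = set(desires)       # goals it tries to satisfy
        self.plans = plans                # library of conditional plans
        self.outbox = []                  # messages to other agents

    def send(self, recipient, message):
        self.outbox.append((recipient, message))

    def step(self):
        """One interpreter cycle: run every plan whose condition holds."""
        for plan in self.plans:
            if plan.condition(self.beliefs):
                for action in plan.actions:
                    action(self)

# Example: a librarian agent that asks a virtual-environment agent to
# fetch a book whenever that request is believed and the book is not at hand.
fetch_plan = ConditionalPlan(
    "fetch-book",
    condition=lambda beliefs: "book_requested" in beliefs
                              and "book_in_hand" not in beliefs,
    actions=[lambda agent: agent.send("environment", "get_book")],
)

librarian = BDPAgent(beliefs={"book_requested"},
                     desires={"serve_visitor"},
                     plans=[fetch_plan])
librarian.step()
print(librarian.outbox)   # [('environment', 'get_book')]
```

In the real system, the conditions and actions would be expressed over QLF representations and the messages would be communicative acts exchanged over the communication platform described above.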
3. A Library Agent in a Virtual Library
For demonstration purposes we have applied the BDP agent framework in a virtual library world. The world contains books and boxes. The agent knows CPs for placing books in boxes and removing them from boxes. The librarian receives QLF input from the visitor through a parser, a linguistic analyser and a reference resolver, and it selects plans and goals according to this input. Changes in the virtual environment are accomplished by sending messages to (and receiving them from) virtual environment agents that can make changes to the environment. The system allows multimodal input. For instance, the visitor may position the mouse-pointer on a book and ask "Who is the author of this book?", requiring the library agent to apply unification on information that comes from different sources. Currently we are working on an embodiment of this library agent so that we can really see an agent retrieving and storing books. Java has been chosen as the implementation language for the system.

References
[1] H. Alshawi (ed.). The Core Language Engine. MIT Press, 1992.
[2] B. van Dijk et al. Navigation assistance in virtual worlds. Proc. 2001 Informing Science Conference, Krakow, Poland, June 2001, to appear.
[3] A. Egges. Conversational agents in a virtual environment with multi-modal functionality. Internal Report, University of Twente, June 2001.
[4] A. Egges et al. Dialogs with BDP Agents in Virtual Environments. Knowledge and Reasoning in Practical Dialogue Systems. Seattle, August 2001, to appear.
[5] J. van Luin, A. Nijholt & R. op den Akker. NL Navigation Support in VR. ICAV3D, V. Giagourta & M.G. Strintzis (eds.), 2001, 263-266.
[6] A. Nijholt & J. Hulstijn. Multimodal Interactions with Agents in Virtual Worlds. Future Directions for Intelligent Systems and Information Science. N. Kasabov (ed.), Physica-Verlag: Studies in Fuzziness and Soft Computing, 2000, 148-173.