Translated Foreign Literature on 3D Animation Design
Contents
1 Introduction
2 Current Practice
3 Related Work
4 Drawing the Yacht
5 Features and Data Structures
6 Constructing Curves
   Free Curves
   Constrained Curves
Translated Material: Original Text

Fundamentals of Human Animation
(From Peter Ratner. 3D Human Modeling and Animation [M]. America: Wiley, 2003: 243-249)

If you are reading this part, then you have most likely finished building your human character, created textures for it, set up its skeleton, made morph targets for facial expressions, and arranged lights around the model. You have then arrived at perhaps the most exciting part of 3-D design, which is animating a character. Up to now the work has been somewhat creative, sometimes tedious, and often difficult.

It is very gratifying when all your previous efforts start to pay off as you enliven your character. When animating, there is a creative flow that increases gradually over time. You are now at the phase where you become both the actor and the director of a movie or play.

Although animation appears to be a more spontaneous act, it is nevertheless just as challenging, if not more so, than all the previous steps that led up to it. Your animations will look pitiful if you do not understand some basic fundamentals and principles. The following pointers are meant to give you some direction. Feel free to experiment with them. Bend and break the rules whenever you think it will improve the animation.

SOME ANIMATION POINTERS

1. Try isolating parts. Sometimes this is referred to as animating in stages. Rather than trying to move every part of a body at the same time, concentrate on specific areas. Only one section of the body is moved for the duration of the animation. Then, returning to the beginning of the timeline, another section is animated. By successively returning to the beginning and animating a different part each time, the entire process is less confusing.

2. Put in some lag time. Different parts of the body should not start and stop at the same time. When an arm swings, the lower arm should follow a few frames after that. The hand swings after the lower arm.
It is like a chain reaction that works its way through the entire length of the limb.

3. Nothing ever comes to a total stop. In life, only machines appear to come to a dead stop. Muscles, tendons, force, and gravity all affect the movement of a human. You can prove this to yourself. Try punching the air with a full extension. Notice that your fist has a bounce at the end. If a part comes to a stop such as a motion hold, keyframe it once and then again after three to eight or more keyframes. Your motion graph will then have a curve between the two identical keyframes. This will make the part appear to bounce rather than come to a dead stop.

4. Add facial expressions and finger movements. Your digital human should exhibit signs of life by blinking and breathing. A blink will normally occur every 60 seconds. A typical blink might be as follows:

Frame 60: Both eyes are open.
Frame 61: The right eye closes halfway.
Frame 62: The right eye closes all the way and the left eye closes halfway.
Frame 63: The right eye opens halfway and the left eye closes all the way.
Frame 64: The right eye opens all the way and the left eye opens halfway.
Frame 65: The left eye opens all the way.

Closing the eyes at slightly different times makes the blink less mechanical. Changing facial expressions could be just using eye movements to indicate thoughts running through your model's head. The hands will appear stiff if you do not add finger movements. Too many students are too lazy to take the time to add facial and hand movements. If you make the extra effort for these details you will find that your animations become much more interesting.

5. What is not seen by the camera is unimportant. If an arm goes through a leg but is not seen in the camera view, then do not bother to fix it. If you want a hand to appear close to the body and the camera view makes it seem to be close even though it is not, then why move it any closer? This also applies to sets.
There is no need to build an entire house if all the action takes place in the living room. Consider painting backdrops rather than modeling every part of a scene.

6. Use a minimum amount of keyframes. Too many keyframes can make the character appear to move in spastic motions. Sharp, cartoonlike movements are created with closely spaced keyframes. Floaty or soft, languid motions are the result of widely spaced keyframes. An animation will often be a mixture of both. Try to look for ways that will abbreviate the motions. You can retain the essential elements of an animation while reducing the amount of keyframes necessary to create a gesture.

7. Anchor a part of the body. Unless your character is in the air, it should have some part of itself locked to the ground. This could be a foot, a hand, or both. Whichever portion is on the ground should be held in the same spot for a number of frames. This prevents unwanted sliding motions. When the model shifts its weight, the foot that touches down becomes locked in place. This is especially true with walking motions.

There are a number of ways to lock parts of a model to the ground. One method is to use inverse kinematics. The goal object, which could be a null, automatically locks a foot or hand to the bottom surface. Another method is to manually keyframe the part that needs to be motionless in the same spot. The character or its limbs will have to be moved and rotated so that the foot or hand stays in the same place. If you are using forward kinematics, then this could mean keyframing practically every frame until it is time to unlock that foot or hand.

8. A character should exhibit weight. One of the most challenging tasks in 3-D animation is to have a digital actor appear to have weight and mass. You can use several techniques to achieve this.
Squash and stretch, or weight and recoil, one of the 12 principles of animation discussed in Chapter 12, is an excellent way to give your character weight. By adding a little bounce to your human, he or she will appear to respond to the force of gravity. For example, if your character jumps up and lands, lift the body up a little after it makes contact. For a heavy character, you can do this several times and have it decrease over time. This will make it seem as if the force of the contact causes the body to vibrate a little.

Secondary actions, another one of the 12 principles of animation discussed in Chapter 12, are an important way to show the effects of gravity and mass. Using the previous example of a jumping character, when he or she lands, the belly could bounce up and down, the arms could have some spring to them, the head could tilt forward, and so on.

Moving or vibrating the object that comes in contact with the traveling entity is another method for showing the force of mass and gravity. A floor could vibrate, or a chair that a person sits in could respond to the weight by the seat going down and recovering back up a little. Sometimes an animator will shake the camera to indicate the effects of a force.

It is important to take into consideration the size and weight of a character. Heavy objects such as an elephant will spend more time on the ground, while a light character like a rabbit will spend more time in the air. The hopping rabbit hardly shows the effects of gravity and mass.

9. Take the time to act out the action. So often, it is too easy to just sit at the computer and try to solve all the problems of animating a human. Put some life into the performance by getting up and acting out the motions. This will make the character's actions more unique and also solve many timing and positioning problems. The best animators are also excellent actors. A mirror is an indispensable tool for the animator. Videotaping yourself can also be a great help.

10. Decide whether to use IK, FK, or a blend of both. Forward kinematics and inverse kinematics have their advantages and disadvantages. FK allows full control over the motions of different body parts. A bone can be rotated and moved to the exact degree and location one desires. The disadvantage to using FK is that when your person has to interact within an environment, simple movements become difficult. Anchoring a foot to the ground so it does not move is challenging because whenever you move the body, the feet slide. A hand resting on a desk has the same problem.

IK moves the skeleton with goal objects such as a null. Using IK, the task of anchoring feet and hands becomes very simple. The disadvantage to IK is that a great amount of control is packed together into the goal objects. Certain poses become very difficult to achieve.

If the upper body does not require any interaction with its environment, then consider a blend of both IK and FK. IK can be set up for the lower half of the body to anchor the feet to the ground, while FK on the upper body allows greater freedom and precision of movements. Every situation involves a different approach. Use your judgment to decide which setup fits the animation most reliably.

11. Add dialogue. It has been said that more than 90% of student animations that are submitted to companies lack dialogue. The few that incorporate speech in their animations make their work highly noticeable. If the animation and dialogue are well done, then those few have a greater advantage than their competition. Companies understand that it takes extra effort and skill to create animation with dialogue.

When you plan your story, think about creating interaction between characters not only on a physical level but through dialogue as well. There are several techniques, discussed in this chapter, that can be used to make dialogue manageable.

12. Use the graph editor to clean up your animations.
The graph editor is a useful tool that all 3-D animators should become familiar with. It is basically a representation of all the objects, lights, and cameras in your scene. It keeps track of all their activities and properties.

A good use of the graph editor is to clean up morph targets after animating facial expressions. If the default incoming curve in your graph editor is set to arcs rather than straight lines, you will most likely find that sometimes splines in the graph editor will curve below a value of zero. This can yield some unpredictable results. The facial morph targets begin to take on negative values that lead to undesirable facial expressions. Whenever you see a curve bend below a value of zero, select the first keyframe point to the right of the arc and set its curve to linear. A more detailed discussion of the graph editor will be found in a later part of this chapter.

ANIMATING IN STAGES

All the various components that can be moved on a human model often become confusing if you try to change them at the same time. The performance quickly deteriorates into a mechanical routine if you try to alter all these parts at the same keyframes. Remember, you are trying to create human qualities, not robotic ones.

Isolating areas to be moved means that you can look for the parts of the body that have motion over time and concentrate on just a few of those. For example, the first thing you can move is the body and legs. When you are done moving them around over the entire timeline, then try rotating the spine. You might do this by moving individual spine bones or using an inverse kinematics chain. Now that you have the body moving around and bending, concentrate on the arms. If you are not using an IK chain to move the arms, hands, and fingers, then rotate the bones for the upper and lower arm. Do not forget the wrist. Finger movements can be animated as one of the last parts.
Facial expressions can also be animated last.

Example movies showing the same character animated in stages can be viewed on the CD-ROM as CD11-1 AnimationStagesMovies. Some sample images from the animations can also be seen in Figure 11-1. The first movie shows movement only in the body and legs. During the second stage, the spine and head were animated. The third time, the arms were moved. Finally, in the fourth and final stage, facial expressions and finger movements were added. Animating in successive passes should simplify the process. Some final stages would be used to clean up or edit the animation.

Sometimes the animation switches from one part of the body leading to another. For example, somewhere during the middle of an animation the upper body begins to lead the lower one. In a case like this, you would then switch from animating the lower body first to moving the upper part before the lower one.

The order in which one animates can be a matter of personal choice. Some people may prefer to do facial animation first, or perhaps they like to move the arms before anything else. Following is a summary of how someone might animate a human.

1. First pass: Move the body and legs.
2. Second pass: Move or rotate the spinal bones, neck, and head.
3. Third pass: Move or rotate the arms and hands.
4. Fourth pass: Animate the fingers.
5. Fifth pass: Animate the eyes blinking.
6. Sixth pass: Animate eye movements.
7. Seventh pass: Animate the mouth, eyebrows, nose, jaw, and cheeks (you can break these up into separate passes).

Most movement starts at the hips. Athletes often begin with a windup action in the pelvic area that works its way outward to the extreme parts of the body. This whiplike activity can even be observed in just about any mundane act. It is interesting to note that people who study martial arts learn that most of their power comes from the lower torso.

Students are often too lazy to make finger movements a part of their animation.
There are several methods that can make the process less time consuming. One way is to create morph targets of the finger positions and then use shape shifting to move the various digits. Each finger is positioned in an open and fistlike closed posture. For example, the sections of the index finger are closed, while the others are left in an open, relaxed position for one morph target. The next morph target would have only the ring finger closed while keeping the others open. During the animation, sliders are then used to open and close the fingers and/or thumbs.

Another method to create finger movements is to animate them in both closed and open positions and then save the motion files for each digit. Anytime you animate the same character, you can load the motions into your new scene file. It then becomes a simple process of selecting either the closed or the open position for each finger and thumb and keyframing them wherever you desire.

DIALOGUE

Knowing how to make your humans talk is a crucial part of character animation. Once you add dialogue, you should notice a livelier performance and a greater personality in your character. At first, dialogue may seem too great a challenge to attempt. Actually, if you follow some simple rules, you will find that adding speech to your animations is not as daunting a task as one would think. The following suggestions should help.

DIALOGUE ESSENTIALS

1. Look in the mirror. Before animating, use a mirror or a reflective surface such as that on a CD to follow lip movements and facial expressions.

2. The eyes, mouth, and brows change the most. The parts of the face that contain the greatest amount of muscle groups are the eyes, brows, and mouth. Therefore, these are the areas that change the most when creating expressions.

3. The head constantly moves during dialogue. Animate random head movements, no matter how small, during the entire animation. Involuntary motions of the head make a point without having to state it outright.
For example, nodding and shaking the head communicate, respectively, positive and negative responses. Leaning the head forward can show anger, while a downward movement communicates sadness. Move the head to accentuate and emphasize certain statements. Listen to the words that are stressed and add extra head movements to them.

4. Communicate emotions. There are six recognizable universal emotions: sadness, anger, joy, fear, disgust, and surprise. Other, more ambiguous states are pain, sleepiness, passion, physical exertion, shyness, embarrassment, worry, disdain, sternness, skepticism, laughter, yelling, vanity, impatience, and awe.

5. Use phonemes and visemes. Phonemes are the individual sounds we hear in speech. Rather than trying to spell out a word, recreate the word as a phoneme. For example, the word computer is phonetically spelled "cumpewtrr." Visemes are the mouth shapes and tongue positions employed during speech. It helps tremendously to draw a chart that recreates speech as phonemes combined with mouth shapes (visemes) above or below a timeline with the frames marked and the sound and volume indicated.

6. Never animate behind the dialogue. It is better to make the mouth shapes one or two frames before the dialogue.

7. Don't overstate. Realistic facial movements are fairly limited. The mouth does not open that much when talking.

8. Blinking is always a part of facial animation. It occurs about every two seconds. Different emotional states affect the rate of blinking. Nervousness increases the rate of blinking, while anger decreases it.

9. Move the eyes. To make the character appear to be alive, be sure to add eye motions. About 80% of a viewer's time is spent watching the eyes and mouth, while about 20% is focused on the hands and body.

10. Breathing should be a part of facial animation. Opening the mouth and moving the head back slightly will show an intake of air, while flaring the nostrils and having the head nod forward a little can show exhalation.
Breathing movements should be very subtle and hardly noticeable.
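The staggered blink laid out in pointer 4 can be sketched as keyframe data plus a simple interpolator. The frame numbers and the half-closed steps follow the text; the function, the dict layout, and the 1.0 = open / 0.0 = closed convention are illustrative assumptions, not from any particular package.

```python
# Staggered blink keyframes from pointer 4: each eye closes and reopens
# one frame apart, which makes the blink look less mechanical.
# Values are eyelid "openness": 1.0 = fully open, 0.0 = fully closed.
BLINK = {
    60: (1.0, 1.0),  # both eyes open
    61: (0.5, 1.0),  # right eye closes halfway
    62: (0.0, 0.5),  # right closed, left halfway closed
    63: (0.5, 0.0),  # right reopening, left closed
    64: (1.0, 0.5),  # right open, left reopening
    65: (1.0, 1.0),  # both eyes open again
}

def eyelids_at(frame, keys=BLINK):
    """Linearly interpolate (right, left) eyelid openness between keyframes."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    for f0, f1 in zip(frames, frames[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            r0, l0 = keys[f0]
            r1, l1 = keys[f1]
            return (r0 + t * (r1 - r0), l0 + t * (l1 - l0))

print(eyelids_at(62))
```

Querying between keyframes returns the blended openness; for example, `eyelids_at(63.5)` gives `(0.75, 0.25)`, the right eye three-quarters open while the left is still mostly closed.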
Translated Foreign Literature for a Thesis on Animation Design in Television Advertising

Copywriting for Visual Media

Before [...], [...] and film advertising were the primary means of advertising. Even today, local ads can still be seen in some movie theaters before the start of the program. The practice of selling time between programming for commercial messages has become a standard in the visual media, [...] a format for delivering short visual commercial messages very [...].

⑵ Types of Ads and PSAs

There are various types of ads and public service announcements (PSAs), including product ads, service ads, and institutional ads. Product ads promote a specific product, while service ads promote a specific service. Institutional ads, on the other hand, promote an entire company or industry. PSAs, by contrast, are noncommercial messages that aim to educate and inform the public on important issues such as health, safety, and social [...].

⑶ The Power of Visual Advertising

[...] The use of colors [...]
Animation

Animation is the rapid display of a sequence of images of 2-D or 3-D artwork or model positions to create an illusion of movement. The effect is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in several ways. The most common method of presenting animation is as a motion picture or video program, although there are other methods.

Early examples

An Egyptian burial chamber mural, approximately 4000 years old, shows wrestlers in action. Even though this may appear similar to a series of animation drawings, there was no way of viewing the images in motion. It does, however, indicate the artist's intention of depicting motion.

Early examples of attempts to capture the phenomenon of motion in drawing can be found in Paleolithic cave paintings, where animals are depicted with multiple legs in superimposed positions, clearly attempting to convey the perception of motion.
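The mechanism described above, a sequence of images displayed rapidly enough that persistence of vision fuses them into motion, can be sketched as a timing function that maps playback time to the image to display. The 24 fps rate is the traditional film standard; the function name and frame counts are illustrative assumptions.

```python
def frame_at(t_seconds, fps=24, n_frames=96, loop=True):
    """Index of the drawing to display at time t when a sequence of
    n_frames images is played back at fps frames per second."""
    idx = int(t_seconds * fps)
    return idx % n_frames if loop else min(idx, n_frames - 1)

# Half a second into playback at the traditional film rate of 24 fps:
print(frame_at(0.5))    # 12
# A 96-frame loop wraps around after 4 seconds:
print(frame_at(4.25))   # 6
```

Holding the last image instead of looping (`loop=False`) models a one-shot sequence such as a short commercial bumper.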
English Literature Translation for a Graduation Thesis on Animation

Appendix A: Translated Material, Original Text

The Needs of the Development of Chinese Animation

Why should China develop cultural industries such as animation and games? Who is the model for the development of China's animation and game industry? The survey reports on Japan and the U.S. below show how much benefit animation, games, and other cultural industries bring to each country. It is not hard to see that once society progresses to a certain stage, the development of cultural industries is inevitable.

Japan's animation industry can be described as a model, and it is therefore the reference point for China's animation industry and the target it hopes to catch up with. However, a number of widely cited statistics on the Japanese animation industry are confusing, and some figures quoted five or six years ago now seem absurd.

In many articles from 2006, one finds the claim that, while the output value of the global animation industry was between $200 billion and $500 billion, the annual output value of Japan's animation industry reached 230 trillion yen, making it "Japan's second pillar industry." According to figures released in 2010, Japan's gross domestic product (GDP) at current prices was 479.1791 trillion yen that year, and since Japan's economy has grown little in recent years, the Japanese animation industry's share of GDP at the time would have exceeded 50%!

The most popular figure is that the Japanese animation industry accounts for over 10% of GDP. By this estimate, its output should be about 48 trillion yen, or $800 billion.
That is roughly the entire global output of the animation industry and its derivatives combined; where would that leave the United States, which supposedly tops the list?

According to the Japan digital content association's "Digital Content White Paper 2004," the animation industry, as an important part of Japan's cultural and creative industries, had an output value of 12.8 trillion yen in 2004, accounting for 2.5% of Japan's gross domestic product: 4.4 trillion yen of imaging products, 1.7 trillion yen of music products, 5.6 trillion yen of books and periodicals, and 1.1 trillion yen of games, more than the 10 trillion yen output of agriculture, forestry, and fisheries. Aggregated with communications, information services, printing, advertising, appliances, and other sectors, the scale reaches 59 trillion yen. Only with the scope of the animation industry generalized in this way can the widely quoted 10% share of GDP be reached.

This integrated figure seems relatively reasonable. The data released in the "Digital Content White Paper 2004" have some reference value: Japan's animation industry's share of GDP should be between 2% and 5%. This also takes a lot of pressure off China's domestic animation industry. Although China's total GDP has already surpassed Japan's, the runner-up position in the global animation industry is still beyond China's reach, so the so-called catch-up effort will still be necessary.

The U.S. cultural industries are said to account for about 20% of GDP, and the following set of figures appears most frequently in various articles: in 2006 U.S. GDP was $13.22 trillion, of which the cultural industries accounted for $2.64 trillion, and American cultural products occupy a 40% share of the international market.
The United States controls 75 percent of the global production of television programs; the American animation industry's output accounts for almost 30% of the global market, reaching $31 billion; films produced in the United States account for 6.7 percent of the world total but occupy 50% of the world's screening time. In addition, the total size of the U.S. sports industry is about $300 billion, or 2.3% of GDP, of which the NBA alone accounts for $10 billion. However, we can see that this so-called American cultural industry output includes sports and related industries, so its scope is greater than China's domestic classification of cultural industries.

The earliest web article on the share of the cultural industry in the United States dates back to "How Much Room Is There for Cultural Entrepreneurship," published in the Economic Daily on October 27, 2000, and collected in the Chinese Culture Industry Academic Yearbook (1979-2002 volume). It claims that, according to statistics, the U.S. cultural industries account for 18-25 percent of total GDP; that among the 400 richest American companies, 72 are cultural enterprises; and that U.S. audio and video exports have surpassed the aerospace industry to rank first in trade. Since then, the concept of "cultural industries" appeared officially in "2001-2002: China Cultural Industry Development Report," released in 2002 by the Research Office of the CPC Central Committee, the first official document to reference these data. Today the source of the Economic Daily figures can no longer be traced, yet ten years on the data are still widely cited in various articles and government documents with only slight variation, such as rising to 1/3 or dropping to 12%; the figure of "72 cultural enterprises" has never changed in the past ten years.
The data, now at least 11 years old, are problematic. They depend on the definition of cultural industries, the classification system, the statistical methods, and what counts as a cultural enterprise. Zhang Xiaoming, deputy director of the Culture Research Center of the Chinese Academy of Social Sciences, said in an interview: "To a large extent, today's American culture industry is run by multinational companies, most of them based in the United States. This seems to be a kind of paradox: the American culture industry relies on multinational companies to profit all over the world, yet the ultimate holding companies lie in the hands of merchants of other countries. Although the United States is still the biggest beneficiary, its GDP statistics still count these cross-border cultural enterprises." It is reported that among the most powerful Hollywood movie studios, Columbia TriStar is a subsidiary of Japan's Sony Corporation, and the parent company of Fox is Australia's News Corporation. In the popular music sector in particular, apart from WEA, the companies earning the most in the U.S. market are Sony of Japan, PolyGram of the Netherlands, BMG of Germany, and Thorn EMI of the United Kingdom.

In recent years China has stepped up the development of cultural industries such as animation and games. At the seventh international animation festival, statistics showed that the turnover of Chinese animation had surpassed Japan's to rank first in the world. We need more high-quality domestic animation to support its way to the world.

[1] Written by Marilyn Hugh; translated by Andrea Jane.
三维建模外文资料翻译3000字外文资料翻译—原文部分Fundamentals of Human Animation(From Peter Ratner.3D Human Modeling andAnimation[M].America:Wiley,2003:243~249)If you are reading this part, then you have most likely finished building your human character, created textures for it, set up its skeleton, made morph targets for facial expressions, and arranged lights around the model. You have then arrived at perhaps the most exciting part of 3-D design, which is animating a character. Up to now the work has been somewhat creative, sometimes tedious, and often difficult.It is very gratifying when all your previous efforts start to pay off as you enliven your character. When animating, there is a creative flow that increases gradually over time. You are now at the phase where you become both the actor and the director of a movie or play.Although animation appears to be a more spontaneous act, it is nevertheless just as challenging, if not more so, than all the previous steps that led up to it. Your animations will look pitiful if you do not understand some basic fundamentals and principles. The following pointers are meant to give you some direction. Feel free to experiment with them. Bend and break the rules whenever you think it will improve the animation.SOME ANIMATION POINTERS1. Try isolating parts. Sometimes this is referred to as animating in stages. Rather than trying to move every part of a body at the same time, concentrate on specific areas. Only one section of the body is moved for the duration of the animation. Then returning to the beginning of the timeline, another section is animated. By successively returning to the beginning and animating a different part each time, the entire process is less confusing.2. Put in some lag time. Different parts of the body should not start and stop at the same time. When an arm swings, the lower arm should follow a few frames after that. The hand swings after the lower arm. 
It is like a chain reaction that works its way through the entire length of the limb.3. Nothing ever comes to a total stop. In life, only machines appear to come to a dead stop. Muscles, tendons, force, and gravity all affect the movement of a human. You can prove this to yourself. Try punching the air with a full extension. Notice that your fist has a bounce at the end. If a part comes to a stop such as a motion hold, keyframe it once and then again after three to eight or more keyframes. Your motion graph will then have a curve between the two identical keyframes. This will make the part appear to bounce rather than come to a dead stop.4. Add facial expressions and finger movements. Your digital human should exhibit signs of life by blinking and breathing. A blink will normally occur every 60 seconds. A typical blink might be as follows:Frame 60: Both eyes are open.Frame 61: The right eye closes halfway.Frame 62: The right eye closes all the way and the left eye closes halfway.Frame 63: The right eye opens halfway and the left eye closes all the way.Frame 64: The right eye opens all the way and left eye opens halfway.Frame 65: The left eye opens all the way.Closing the eyes at slightly different times makes the blink less mechanical.Changing facial expressions could be just using eye movements to indicate thoughts running through your model's head. The hands will appear stiff if you do not add finger movements. Too many students are too lazy to take the time to add facial and hand movements. If you make the extra effort for these details you will find that your animations become much more interesting.5. What is not seen by the camera is unimportant. If an arm goes through a leg but is not seen in the camera view, then do not bother to fix it. If you want a hand to appear close to the body and the camera view makes it seem to be close even though it is not, then why move it any closer? This also applies to sets. 
There is no need to build an entire house if all the action takes place in the living room. Consider painting backdrops rather than modeling every part of a scene.6. Use a minimum amount of keyframes. Too many keyframes can make the character appear to move in spastic motions. Sharp, cartoonlike movements are created with closely spaced keyframes. Floaty or soft, languid motions are the result of widely spaced keyframes. An animation will often be a mixture of both. Try to look for ways that will abbreviate the motions. You can retain the essential elements of an animation while reducing the amount of keyframes necessary to create a gesture.7.Anchor a part of the body. Unless your character is in the air, it should have some part of itself locked to the ground. This could be a foot, a hand, or both. Whichever portionis on the ground should be held in the same spot for a number of frames. This prevents unwanted sliding motions. When the model shifts its weight, the foot that touches down becomes locked in place. This is especially true with walking motions.There are a number of ways to lock parts of a model to the ground. One method is to use inverse kinematics. The goal object, which could be a null, automatically locks a foot or hand to the bottom surface. Another method is to manually keyframe the part that needs to be motionless in the same spot. The character or its limbs will have to be moved and rotated, so that foot or hand stays in the same place. If you are using forward kinematics, then this could mean keyframing practically every frame until it is time to unlock that foot or hand.8.A character should exhibit weight. One of the most challenging tasks in 3-D animation is to have a digital actor appear to have weight and mass. You can use several techniques to achieve this. 
Squash and stretch, or weight and recoil, one of the 12 principles of animation discussed in Chapter 12, is an excellent way to give your character weight. By adding a little bounce to your human, he or she will appear to respond to the force of gravity. For example, if your character jumps up and lands, lift the body up a little after it makes contact. For a heavy character, you can do this several times and have it decrease over time. This will make it seem as if the force of the contact causes the body to vibrate a little.

Secondary actions, another of the 12 principles of animation discussed in Chapter 12, are an important way to show the effects of gravity and mass. Using the previous example of a jumping character, when he or she lands, the belly could bounce up and down, the arms could have some spring to them, the head could tilt forward, and so on.

Moving or vibrating the object that comes in contact with the traveling entity is another method for showing the force of mass and gravity. A floor could vibrate, or a chair that a person sits in could respond to the weight by the seat going down and recovering back up a little. Sometimes an animator will shake the camera to indicate the effects of a force.

It is important to take into consideration the size and weight of a character. Heavy objects such as an elephant will spend more time on the ground, while a light character like a rabbit will spend more time in the air. The hopping rabbit hardly shows the effects of gravity and mass.

9. Take the time to act out the action. Too often, it is easy to just sit at the computer and try to solve all the problems of animating a human. Put some life into the performance by getting up and acting out the motions. This will make the character's actions more unique and also solve many timing and positioning problems. The best animators are also excellent actors. A mirror is an indispensable tool for the animator. Videotaping yourself can also be a great help.

10.
Decide whether to use IK, FK, or a blend of both. Forward kinematics and inverse kinematics have their advantages and disadvantages. FK allows full control over the motions of different body parts. A bone can be rotated and moved to the exact degree and location one desires. The disadvantage to using FK is that when your person has to interact with an environment, simple movements become difficult. Anchoring a foot to the ground so it does not move is challenging, because whenever you move the body, the feet slide. A hand resting on a desk has the same problem.

IK moves the skeleton with goal objects such as a null. Using IK, the task of anchoring feet and hands becomes very simple. The disadvantage to IK is that a great amount of control is packed together into the goal objects. Certain poses become very difficult to achieve.

If the upper body does not require any interaction with its environment, then consider a blend of both IK and FK. IK can be set up for the lower half of the body to anchor the feet to the ground, while FK on the upper body allows greater freedom and precision of movement. Every situation involves a different approach. Use your judgment to decide which setup fits the animation most reliably.

11. Add dialogue. It has been said that more than 90% of student animations that are submitted to companies lack dialogue. The few that incorporate speech in their animations make their work highly noticeable. If the animation and dialogue are well done, then those few have a greater advantage over their competition. Companies understand that it takes extra effort and skill to create animation with dialogue.

When you plan your story, think about creating interaction between characters not only on a physical level but through dialogue as well. There are several techniques, discussed in this chapter, that can be used to make dialogue manageable.

12. Use the graph editor to clean up your animations.
The graph editor is a useful tool that all 3-D animators should become familiar with. It is basically a representation of all the objects, lights, and cameras in your scene. It keeps track of all their activities and properties.

A good use of the graph editor is to clean up morph targets after animating facial expressions. If the default incoming curve in your graph editor is set to arcs rather than straight lines, you will most likely find that splines in the graph editor sometimes curve below a value of zero. This can yield unpredictable results: the facial morph targets begin to take on negative values that lead to undesirable facial expressions. Whenever you see a curve bend below a value of zero, select the first keyframe point to the right of the arc and set its curve to linear. A more detailed discussion of the graph editor can be found in a later part of this chapter.

ANIMATING IN STAGES

All the various components that can be moved on a human model often become confusing if you try to change them at the same time. The performance quickly deteriorates into a mechanical routine if you try to alter all these parts at the same keyframes. Remember, you are trying to create human qualities, not robotic ones. Isolating areas to be moved means that you can look for the parts of the body that have motion over time and concentrate on just a few of those. For example, the first thing you can move is the body and legs. When you are done moving them around over the entire timeline, then try rotating the spine. You might do this by moving individual spine bones or using an inverse kinematics chain. Now that you have the body moving around and bending, concentrate on the arms. If you are not using an IK chain to move the arms, hands, and fingers, then rotate the bones for the upper and lower arm. Do not forget the wrist. Finger movements can be animated as one of the last parts.
Facial expressions can also be animated last.

Example movies showing the same character animated in stages can be viewed on the CD-ROM as CD11-1 AnimationStagesMovies. Some sample images from the animations can also be seen in Figure 11-1. The first movie shows movement only in the body and legs. During the second stage, the spine and head were animated. The third time, the arms were moved. Finally, in the fourth and final stage, facial expressions and finger movements were added. Animating in successive passes should simplify the process. Some final stages would be used to clean up or edit the animation.

Sometimes the animation switches from one part of the body leading to another. For example, somewhere during the middle of an animation the upper body begins to lead the lower one. In a case like this, you would then switch from animating the lower body first to moving the upper part before the lower one.

The order in which one animates can be a matter of personal choice. Some people may prefer to do facial animation first, or perhaps they like to move the arms before anything else. Following is a summary of how someone might animate a human.

1. First pass: Move the body and legs.
2. Second pass: Move or rotate the spinal bones, neck, and head.
3. Third pass: Move or rotate the arms and hands.
4. Fourth pass: Animate the fingers.
5. Fifth pass: Animate the eyes blinking.
6. Sixth pass: Animate eye movements.
7. Seventh pass: Animate the mouth, eyebrows, nose, jaw, and cheeks (you can break these up into separate passes).

Most movement starts at the hips. Athletes often begin with a windup action in the pelvic area that works its way outward to the extreme parts of the body. This whiplike activity can be observed in just about any mundane act. It is interesting to note that people who study martial arts learn that most of their power comes from the lower torso.

Students are often too lazy to make finger movements a part of their animation.
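One labor-saving setup of this kind, per-finger open and closed morph targets driven by 0-to-1 sliders, can be sketched as follows. This is a hedged illustration: the finger names and joint angles are invented placeholders, not values from any particular package.

```python
# Sketch: blend each finger between an "open" and a "closed" morph target.
# Angles are degrees per knuckle, invented for illustration.
OPEN = {"index": [0, 0, 0], "middle": [0, 0, 0], "ring": [0, 0, 0], "pinky": [0, 0, 0]}
CLOSED = {"index": [80, 95, 60], "middle": [80, 95, 60],
          "ring": [80, 95, 60], "pinky": [80, 95, 60]}

def pose_fingers(sliders):
    """Blend each finger by its 0..1 slider weight (0 = open, 1 = closed)."""
    pose = {}
    for finger, w in sliders.items():
        pose[finger] = [o + w * (c - o)
                        for o, c in zip(OPEN[finger], CLOSED[finger])]
    return pose

# Point at something: index open, the rest fully curled.
pointing = pose_fingers({"index": 0.0, "middle": 1.0, "ring": 1.0, "pinky": 1.0})
```

Keyframing the slider weights, rather than every knuckle, is what makes the morph-target approach fast.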
There are several methods that can make the process less time consuming.

One way is to create morph targets of the finger positions and then use shape shifting to move the various digits. Each finger is positioned in an open and a fistlike closed posture. For example, the sections of the index finger are closed, while the others are left in an open, relaxed position for one morph target. The next morph target would have only the ring finger closed while keeping the others open. During the animation, sliders are then used to open and close the fingers and/or thumbs.

Another method to create finger movements is to animate them in both closed and open positions and then save the motion files for each digit. Anytime you animate the same character, you can load the motions into your new scene file. It then becomes a simple process of selecting either the closed or the open position for each finger and thumb and keyframing them wherever you desire.

DIALOGUE

Knowing how to make your humans talk is a crucial part of character animation. Once you add dialogue, you should notice a livelier performance and a greater personality in your character. At first, dialogue may seem too great a challenge to attempt. Actually, if you follow some simple rules, you will find that adding speech to your animations is not as daunting a task as one would think. The following suggestions should help.

DIALOGUE ESSENTIALS

1. Look in the mirror. Before animating, use a mirror or a reflective surface such as that on a CD to follow lip movements and facial expressions.

2. The eyes, mouth, and brows change the most. The parts of the face that contain the greatest amount of muscle groups are the eyes, brows, and mouth. Therefore, these are the areas that change the most when creating expressions.

3. The head constantly moves during dialogue. Animate random head movements, no matter how small, during the entire animation. Involuntary motions of the head make a point without having to state it outright.
For example, nodding and shaking the head communicate, respectively, positive and negative responses. Leaning the head forward can show anger, while a downward movement communicates sadness. Move the head to accentuate and emphasize certain statements. Listen to the words that are stressed and add extra head movements to them.

4. Communicate emotions. There are six recognizable universal emotions: sadness, anger, joy, fear, disgust, and surprise. Other, more ambiguous states are pain, sleepiness, passion, physical exertion, shyness, embarrassment, worry, disdain, sternness, skepticism, laughter, yelling, vanity, impatience, and awe.

5. Use phonemes and visemes. Phonemes are the individual sounds we hear in speech. Rather than trying to spell out a word, recreate the word as a phoneme. For example, the word computer is phonetically spelled "cumpewtrr." Visemes are the mouth shapes and tongue positions employed during speech. It helps tremendously to draw a chart that recreates speech as phonemes combined with mouth shapes (visemes) above or below a timeline, with the frames marked and the sound and volume indicated.

6. Never animate behind the dialogue. It is better to make the mouth shapes one or two frames before the dialogue.

7. Don't overstate. Realistic facial movements are fairly limited. The mouth does not open that much when talking.

8. Blinking is always a part of facial animation. It occurs about every two seconds. Different emotional states affect the rate of blinking. Nervousness increases the rate of blinking, while anger decreases it.

9. Move the eyes. To make the character appear to be alive, be sure to add eye motions. About 80% of the time is spent watching the eyes and mouth, while about 20% is focused on the hands and body.

10. Breathing should be a part of facial animation. Opening the mouth and moving the head back slightly will show an intake of air, while flaring the nostrils and having the head nod forward a little can show exhalation.
Breathing movements should be very subtle and hardly noticeable...

Foreign Literature Translation: Translated Section (外文资料翻译—译文部分)

Fundamentals of Human Animation (from Peter Ratner. 3D Human Modeling and Animation [M]. America: Wiley, 2003: 243-249)

If you are reading this part, then you have most likely finished building your human character, created textures for it, set up its skeleton, made morph targets for facial expressions, and arranged lights around the model.
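The graph editor problem described earlier, where a spline curve between zero-valued morph-target keys dips below zero and produces a negative facial value, can be demonstrated numerically. This is a hedged sketch with invented key values; real packages use their own tangent types, but the undershoot mechanism is the same, and a linear segment between equal keys cannot undershoot.

```python
# Sketch: a Catmull-Rom style spline through a hold at zero undershoots;
# a linear segment between the same keys stays flat. Key values are invented.

def hermite(p1, m1, p2, m2, t):
    """Cubic Hermite interpolation between p1 and p2 with tangents m1, m2."""
    h00 = 2*t**3 - 3*t**2 + 1
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p1 + h10*m1 + h01*p2 + h11*m2

# Keys: value 0.8, then a hold at 0.0 across two keys, then 0.6.
m1 = (0.0 - 0.8) / 2   # Catmull-Rom tangent: still "falling" into the hold
m2 = (0.6 - 0.0) / 2   # already "rising" out of it
curve = [hermite(0.0, m1, 0.0, m2, i / 20) for i in range(21)]
linear = [0.0 for _ in range(21)]  # linear between equal keys stays at zero
```

Setting the keyframe to linear, as the pointer recommends, is exactly the `linear` case above.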
Foreign Literature on 3D Modeling IP Character Design (3D建模IP形象设计外文文献)

Title: Designing IP Characters with 3D Modeling

Abstract: This article explores the process of creating IP characters through 3D modeling. It delves into the importance of designing unique and visually appealing characters that resonate with the target audience. The article emphasizes the need to approach the design from a human perspective, infusing emotions and storytelling elements to bring the characters to life. The goal is to create a natural and fluid narrative that captivates readers and evokes a genuine human experience.

Introduction: In the realm of IP (Intellectual Property), character design plays a vital role in capturing the attention and imagination of the audience. With the advent of 3D modeling technology, designers now have the ability to bring these characters to life in a realistic and immersive manner. However, it is crucial to approach the design process with a human perspective, ensuring that the characters resonate with the viewers on an emotional level.

Creating Unique and Memorable Characters: When designing IP characters, it is essential to prioritize uniqueness and memorability. The characters should stand out from the crowd and have distinctive traits that make them easily recognizable. This can be achieved through careful consideration of their physical appearance, personality, and backstory. By infusing the characters with depth and complexity, they become more relatable to the audience and foster a stronger emotional connection.

The Importance of 3D Modeling: 3D modeling provides designers with a powerful tool to bring their characters to life. It allows for the creation of realistic and detailed models that can be viewed from any angle. Through this process, designers can meticulously craft the characters' features, expressions, and movements, making them more believable and captivating.
Additionally, 3D modeling enables seamless integration of the characters into various media formats, such as animations, games, and merchandise.

Infusing Emotion and Storytelling: To enhance the authenticity of IP characters, it is crucial to infuse them with emotions and storytelling elements. Characters that evoke emotions such as joy, sadness, or excitement are more likely to resonate with the audience. By creating compelling narratives that showcase the characters' growth, relationships, and challenges, designers can create a captivating world that draws readers in and keeps them engaged.

The Role of Human Perspective: Throughout the design process, it is essential to maintain a human perspective. This involves considering the characters' interactions, expressions, and movements from a real-life standpoint. By observing how humans naturally behave and react, designers can create characters that feel genuine and relatable. This human touch adds a layer of authenticity to the characters, making them more compelling and memorable.

Conclusion: Designing IP characters through 3D modeling is a multifaceted process that requires careful attention to detail, creativity, and empathy. By prioritizing uniqueness, infusing emotion, and maintaining a human perspective, designers can create characters that leave a lasting impression on the audience. It is through these characters that stories come to life, forging a strong connection between the IP and its consumers.
Foreign Literature: Unity3D Animation (Unity3D Animation外文文献)

Unity3D Animation

Unity's Animation features include retargetable animations, full control of animation weights at runtime, event calling from within the animation playback, sophisticated state machine hierarchies and transitions, blend shapes for facial animations, and more. Read this section to find out how to import and work with imported animation, and how to animate objects, colours, and any other parameters within Unity itself.

Animation System Overview

Unity has a rich and sophisticated animation system (sometimes referred to as 'Mecanim'). It provides:

- Easy workflow and setup of animations for all elements of Unity, including objects, characters, and properties.
- Support for imported animation clips and animation created within Unity.
- Humanoid animation retargeting: the ability to apply animations from one character model onto another.
- Simplified workflow for aligning animation clips.
- Convenient preview of animation clips, transitions, and interactions between them. This allows animators to work more independently of programmers, and to prototype and preview their animations before gameplay code is hooked in.
- Management of complex interactions between animations with a visual programming tool.
- Animating different body parts with different logic.
- Layering and masking features.

Animation workflow

Unity's animation system is based on the concept of Animation Clips, which contain information about how certain objects should change their position, rotation, or other properties over time. Each clip can be thought of as a single linear recording. Animation clips from external sources are created by artists or animators with third-party tools such as Max or Maya, or come from motion capture studios or other sources.

Animation Clips are then organised into a structured flowchart-like system called an Animator Controller.
The Animator Controller acts as a "State Machine" which keeps track of which clip should currently be playing, and when the animations should change or blend together.

A very simple Animator Controller might only contain one or two clips, for example to control a powerup spinning and bouncing, or to animate a door opening and closing at the correct time. A more advanced Animator Controller might contain dozens of humanoid animations for all the main character's actions, and might blend between multiple clips at the same time to provide a fluid motion as the player moves around the scene.

Unity's Animation system also has numerous special features for handling humanoid characters, which give you the ability to retarget humanoid animation from any source (e.g. motion capture, the Asset Store, or some other third-party animation library) to your own character model, as well as adjusting muscle definitions. These special features are enabled by Unity's Avatar system, where humanoid characters are mapped to a common internal format.

Each of these pieces (the Animation Clips, the Animator Controller, and the Avatar) is brought together on a GameObject via the Animator Component. This component has a reference to an Animator Controller, and (if required) the Avatar for this model. The Animator Controller, in turn, contains the references to the Animation Clips it uses.

The above diagram shows the following:

Animation clips are imported from an external source or created within Unity. In this example, they are imported motion-captured humanoid animations.

The animation clips are placed and arranged in an Animator Controller. This shows a view of an Animator Controller in the Animator window. The States (which may represent animations or nested sub-state machines) appear as nodes connected by lines.
This Animator Controller exists as an asset in the Project window.

The rigged character model (in this case, the astronaut "Astrella") has a specific configuration of bones which are mapped to Unity's common Avatar format. This mapping is stored as an Avatar asset as part of the imported character model, and also appears in the Project window as shown.

When animating the character model, it has an Animator component attached. In the Inspector view shown above, you can see the Animator component, which has both the Animator Controller and the Avatar assigned. The animator uses these together to animate the model. The Avatar reference is only necessary when animating a humanoid character. For other types of animation, only an Animator Controller is required.

Unity's animation system (known as "Mecanim") comes with a lot of concepts and terminology. If at any point you need to find out what something means, go to our Animation Glossary.

Legacy animation system

While Mecanim is recommended for use in most situations, Unity has retained its legacy animation system, which existed before Unity 4. You may need to use it when working with older content created before Unity 4. For information on the Legacy animation system, see this section. Unity intends to phase out the Legacy animation system over time for all cases by merging the workflows into Mecanim.

Animation Clips

Animation Clips are one of the core elements of Unity's animation system. Unity supports importing animation from external sources, and offers the ability to create animation clips from scratch within the editor using the Animation window.

Animation from External Sources

Animation clips imported from external sources could include:

- Humanoid animations captured at a motion capture studio
- Animations created from scratch by an artist in an external 3D application (such as 3DS Max or Maya)
- Animation sets from third-party libraries (e.g. from Unity's Asset Store)
- Multiple clips cut and sliced from a single imported timeline
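The last item, cutting several clips out of one imported timeline by frame range, can be sketched in a few lines. This is an illustrative sketch, not Unity's importer; the take names and frame numbers are invented.

```python
# Sketch: slice named clips out of a single imported timeline by frame range.
def slice_takes(timeline, takes):
    """timeline: per-frame samples; takes: {name: (first, last)}, inclusive.
    Returns {name: frames}, discarding everything outside the listed ranges."""
    return {name: timeline[first:last + 1] for name, (first, last) in takes.items()}

mocap = list(range(300))  # stand-in for 300 captured frames on one timeline
clips = slice_takes(mocap, {
    "jump_short": (10, 45),
    "jump_long":  (120, 180),
})
```

Unity's import settings do the equivalent when you define clip ranges on an imported FBX timeline.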
Animation Created and Edited Within Unity

Unity's Animation Window also allows you to create and edit animation clips. These clips can animate:

- The position, rotation and scale of GameObjects
- Component properties such as material colour, the intensity of a light, the volume of a sound
- Properties within your own scripts, including float, int, Vector and boolean variables
- The timing of calling functions within your own scripts

Animation from External Sources

Overview of Imported Animation

Animation from external sources is imported into Unity in the same way as regular 3D files. These files, whether they're generic FBX files or native formats from 3D software such as Maya, Cinema 4D, or 3D Studio Max, can contain animation data in the form of a linear recording of the movements of objects within the file.

In some situations the object to be animated (e.g. a character) and the animations to go with it can be present in the same file. In other cases, the animations may exist in a separate file from the model to be animated.

It may be that animations are specific to a particular model and cannot be re-used on other models. For example, a giant octopus end-boss in your game might have a unique arrangement of limbs and bones, and its own set of animations.

In other situations, it may be that you have a library of animations which are to be used on various different models in your scene. For example, a number of different humanoid characters might all use the same walk and run animations. In these situations, it's common to have a simple placeholder model in your animation files for the purposes of previewing them.
Alternatively, it is possible to use animation files even if they have no geometry at all, just the animation data.

When importing multiple animations, the animations can each exist as separate files within your project folder, or you can extract multiple animation clips from a single FBX file if exported as takes from MotionBuilder or with a plugin/script for Maya, Max or other 3D packages. You might want to do this if your file contains multiple separate animations arranged on a single timeline. For example, a long motion-captured timeline might contain the animation for a few different jump motions, and you may want to cut out certain sections of this to use as individual clips and discard the rest. Unity provides animation cutting tools to achieve this when you import all animations in one timeline, by allowing you to select the frame range for each clip.

Importing Animation Files

Before any animation can be used in Unity, it must first be imported into your project. Unity can import native Maya (.mb or .ma), 3D Studio Max (.max) and Cinema 4D (.c4d) files, and also generic FBX files which can be exported from most animation packages (see this page for further details on exporting). To import an animation, simply drag the file to the Assets folder of your project. When you select the file in the Project View you can edit the Import Settings in the inspector.

Working with humanoid animations

The Mecanim Animation System is particularly well suited for working with animations for humanoid skeletons. Since humanoid skeletons are used extensively in games, Unity provides a specialized workflow and an extended tool set for humanoid animations.

Because of the similarity in bone structure, it is possible to map animations from one humanoid skeleton to another, allowing retargeting and inverse kinematics. With rare exceptions, humanoid models can be expected to have the same basic structure, representing the major articulate parts of the body, head and limbs.
The Mecanim system makes good use of this idea to simplify the rigging and control of animations. A fundamental step in creating an animation is to set up a mapping between the simplified humanoid bone structure understood by Mecanim and the actual bones present in the skeleton; in Mecanim terminology, this mapping is called an Avatar. The pages in this section explain how to create an Avatar for your model.

Creating the Avatar

After a model file (FBX, COLLADA, etc.) is imported, you can specify what kind of rig it is in the Rig tab of the Model Importer options.

Humanoid animations

For a Humanoid rig, select Humanoid and click Apply. Mecanim will attempt to match up your existing bone structure to the Avatar bone structure. In many cases, it can do this automatically by analysing the connections between bones in the rig.

If the match has succeeded, you will see a check mark next to the Configure menu. Also, in the case of a successful match, an Avatar sub-asset is added to the model asset, which you will be able to see in the project view hierarchy.

Avatar added as a sub-asset

Selecting the Avatar sub-asset will bring up the inspector. You can then configure the Avatar.

The inspector for an Avatar asset

If Mecanim was unable to create the Avatar, you will see a cross next to the Configure button, and no Avatar sub-asset will be added. When this happens, you need to configure the Avatar manually.

Non-humanoid animations

Two options for non-humanoid animation are provided: Generic and Legacy. Generic animations are imported using the Mecanim system but don't take advantage of the extra features available for humanoid animations. Legacy animations use the animation system that was provided by Unity before Mecanim. There are some cases where it is still useful to work with legacy animations (most notably with legacy projects that you don't want to update fully), but they are seldom needed for new projects.
See this section of the manual for further details on legacy animations.

Configuring the Avatar

Since the Avatar is such an important aspect of the Mecanim system, it is important that it is configured properly for your model. So, whether the automatic Avatar creation fails or succeeds, you need to go into the Configure Avatar mode to ensure your Avatar is valid and properly set up. It is important that your character's bone structure matches Mecanim's predefined bone structure and that the model is in T-pose.

If the automatic Avatar creation fails, you will see a cross next to the Configure button. If it succeeds, you will see a check/tick mark. Here, success simply means all of the required bones have been matched, but for better results you might want to match the optional bones as well and get the model into a proper T-pose.

When you go to the Configure … menu, the editor will ask you to save your scene. The reason for this is that in Configure mode, the Scene View is used to display bone, muscle and animation information for the selected model alone, without displaying the rest of the scene. Once you have saved the scene, you will see a new Avatar Configuration inspector, with a bone mapping.

The inspector shows which of the bones are required and which are optional; the optional ones can have their movements interpolated automatically. For Mecanim to produce a valid match, your skeleton needs to have at least the required bones in place.
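The required-versus-optional rule can be sketched as a simple validation pass. This is a hedged sketch: the bone sets below are a simplified stand-in for Mecanim's actual required and optional bone lists, not the real ones.

```python
# Sketch: an Avatar mapping is valid only if every required bone is mapped;
# optional bones may be absent (their motion can be interpolated).
REQUIRED = {"Hips", "Spine", "Head",
            "LeftUpperArm", "LeftLowerArm", "LeftHand",
            "RightUpperArm", "RightLowerArm", "RightHand",
            "LeftUpperLeg", "LeftLowerLeg", "LeftFoot",
            "RightUpperLeg", "RightLowerLeg", "RightFoot"}
OPTIONAL = {"Neck", "Chest", "LeftShoulder", "RightShoulder", "LeftToes", "RightToes"}

def validate_avatar(mapped_bones):
    """Return (is_valid, missing_required) for a set of mapped bone names."""
    missing = REQUIRED - set(mapped_bones)
    return (not missing, sorted(missing))

ok, missing = validate_avatar(REQUIRED | {"Neck"})  # valid: all required, one optional
```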
In order to improve your chances of finding a match to the Avatar, name your bones in a way that reflects the body parts they represent (names like "LeftArm" and "RightForearm" are suitable here).

If the model does NOT yield a valid match, you can manually follow a similar process to the one used internally by Mecanim:

- Sample Bind-pose (try to get the model closer to the pose in which it was modelled, a sensible initial pose)
- Automap (create a bone-mapping from an initial pose)
- Enforce T-pose (force the model closer to T-pose, which is the default pose used by Mecanim animations)

If the auto-mapping (Mapping > Automap) fails completely or partially, you can assign bones by dragging them either from the Scene or from the Hierarchy. If Mecanim thinks a bone fits, it will show up as green in the Avatar Inspector; otherwise it shows up in red.

Finally, if the bone assignment is correct but the character is not in the correct pose, you will see the message "Character not in T-Pose". You can try to fix that with Enforce T-Pose, or rotate the remaining bones into T-pose.

Avatar Body Masks

Sometimes it is useful to restrict an animation to specific body parts. For example, a walking animation might involve the character swaying his arms, but if he picks up a gun, he should hold it in front of him. You can use an Avatar Body Mask to specify which parts of a character an animation should be restricted to; see this page for further details.

Unity3D 动画系统 (Unity3D Animation System, translated section): Unity's animation features include retargetable animations, full control of animation weights at runtime, event calling from within animation playback, complex state machine structures and transitions, blend shapes for facial animations, and so on.
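The Animator Controller's transition logic described in this section can be sketched as a toy state machine. This is an illustrative sketch only: the class, state names, and the `speed` parameter are invented for the example and are not the Unity API.

```python
# Sketch: pick the current clip from parameter-driven transitions,
# the way an Animator Controller's state machine does conceptually.
class AnimatorController:
    def __init__(self, initial, transitions):
        self.state = initial
        # transitions: {from_state: [(condition, to_state), ...]}
        self.transitions = transitions

    def update(self, params):
        """Follow the first transition out of the current state whose condition holds."""
        for condition, target in self.transitions.get(self.state, []):
            if condition(params):
                self.state = target
                break
        return self.state  # the clip that should currently be playing

controller = AnimatorController("Idle", {
    "Idle": [(lambda p: p["speed"] > 0.1, "Walk")],
    "Walk": [(lambda p: p["speed"] > 3.0, "Run"),
             (lambda p: p["speed"] <= 0.1, "Idle")],
    "Run":  [(lambda p: p["speed"] <= 3.0, "Walk")],
})
```

The real controller also blends clips across a transition rather than switching instantly; this sketch keeps only the state-selection logic.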
English Literature: Original and Translation. Student: Zhao Fan; Student ID: 1021010639; School: School of Software; Major: Software Engineering; Advisors: Wu Min, Gu Chenxin. June 2014.

English Original: The Use of Skin

What is skinning? In character setup, the final preparation made before animation is skinning: using the skinning tools to bind the character model to the skeletal system that has been set up for it. After this procedure, the finished character model can be rendered and made into animation. The pose the bones hold during the skinning process is called the Bind Pose. After skinning, moving the bones deforms the skin. Sometimes the deformation is inappropriate, which requires appropriate changes to the bones or the skin; you can then use the relevant command to restore the bones to their binding position and disconnect the association between bone and skin. In Maya, you can disconnect or reconnect the bones and skin at any time. There are direct skinning methods (smooth skinning and rigid skinning) and indirect ones (wrap or lattice deformers used together with smooth or rigid skinning).

In recent years, more and more 3D animation software has appeared, and competition in the market is fierce; software companies constantly develop and update their products to make them more usable, and among them Maya is the mainstream 3D animation software. To create a character with bone, flesh, and spirit is the dream of every CG digital artist. Whether a digital character has charm is a test of the animator's understanding of life. To give a digital character bone and flesh, the producer must have a full grasp of the body and its motor functions. In addition, whether a character has realism depends largely on the design and production of its skin, so skilled technical and creative mastery of the software's skinning tools is essential.
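The smooth ("flexible") skinning just described can be sketched as linear blend skinning: each vertex follows a weighted sum of the positions its influencing joints would carry it to, with the weights summing to one. This is a 2D sketch with invented joints and weights; real packages do the same thing with full bind-pose matrices in 3D.

```python
# Sketch: linear blend skinning in 2D. Joints, weights, and angles are invented.
import math

def rotate(point, origin, angle):
    """Rotate a 2D point about an origin (a stand-in for one joint's transform)."""
    x, y = point[0] - origin[0], point[1] - origin[1]
    c, s = math.cos(angle), math.sin(angle)
    return (origin[0] + c*x - s*y, origin[1] + s*x + c*y)

def skin_vertex(vertex, joints, weights, angles):
    """Blend the positions the vertex would take under each joint alone."""
    assert abs(sum(weights) - 1.0) < 1e-9  # skin weights must be normalized
    x = y = 0.0
    for joint, w, a in zip(joints, weights, angles):
        px, py = rotate(vertex, joint, a)
        x += w * px
        y += w * py
    return (x, y)

elbow, shoulder = (1.0, 0.0), (0.0, 0.0)
# A vertex near the elbow crease, influenced 50/50 by both joints:
v = skin_vertex((1.0, 0.1), [shoulder, elbow], [0.5, 0.5], [0.0, math.pi / 2])
```

Painting skin weights, as described later for the jellyfish and clownfish, amounts to editing the `weights` per vertex so each joint's influence falls off smoothly.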
Skinning is one of the final preparation steps before animation; after this procedure, you can create the movements you have designed. If the skinning work is not done well, the animation that follows becomes troublesome, so skinning is very important.

Because of its accuracy and authenticity, 3D animation is developing rapidly. Nowadays 3D animation is used everywhere: in architecture, planning, landscape design, product demonstrations, simulation, film, advertising, character animation, virtual reality, and other fields, which fully reflects its current importance. If 3D animation is compared to puppet animation in real life, then the puppet is the equivalent of the Maya model, the puppeteer is the equivalent of the Maya animator, and the steel joints in the puppet's body are the skeletal system. The bones will not appear in the final render; their role is only that of a scaffold that simulates the major joints of a real skeleton so they can be moved, rotated, and so on. Once the bones are set up, we bind the model to the skeleton. This step is like mounting the various external parts onto a robot. Then, through various settings, keyframes are added to the bones, which in turn drive the corresponding joints of the bound model. Thus, in the final animation, you can see a once stiff, stationary model filled with vitality. Viewed from the rigging side, the whole process may be no less tedious than keyframe animation, but rigging is the core and soul of the whole 3D animation.

Rigging plays a vital role in 3D animation. Good rigging makes animation production easier, faster, and more convenient, allowing designers to adjust the characters' actions.
Every step of binding the skeleton affects the final animation. Binding is the premise of animation: it makes the animator's work convenient, and a good binding makes the animation more fluid and the characters more expressive. Besides body rigging there is also facial binding, which lets characters speak and show different facial expressions. Everything in binding is done for the sake of animation, and a good binding setup is based on the overall style and workflow of the production. Rigging is an indispensable part of 3D animation.

The 3D animation pipeline runs: modelling, texturing, binding, animation, rendering, effects, compositing. Every stage is linked. The model and materials determine the style of the animation; binding and animation determine its fluency; rendering and effects determine its colors and final result.

Three-dimensional animation, also known as 3D animation, is an emerging technology. It gives a realistic sense of depth, down to subtle details such as animal fur, and this effect has been widely applied in film and television production, education, and medicine. Movie crashes, deformations, and fantasy scenes that cannot exist in real life are all 3D animation. Designers first create a virtual scene in 3D software, build models to scale, set the models' motion trajectories and the virtual camera's movement as required, and set the other animation parameters; finally they assign specific materials to the models, place the lights, and render the output to generate the finished images. DreamWorks' "Shrek" and Pixar's "Finding Nemo" achieved a visual impact in this way that 2D animation cannot match. The animated film "Finding Nemo" makes extensive use of Maya's scene technology.
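The pipeline step in which keyframes on the bones drive the bound model comes down to interpolating joint values between keyed frames. A minimal sketch (the linear interpolation here is a stand-in for the spline tangents real packages use; the frame numbers and values are illustrative):

```python
def sample_channel(keys, frame):
    """Linearly interpolate an animation channel at `frame`.
    `keys` is a sorted list of (frame, value) keyframes."""
    if frame <= keys[0][0]:
        return keys[0][1]          # hold the first key before the range
    if frame >= keys[-1][0]:
        return keys[-1][1]         # hold the last key after the range
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)

# An elbow rotation (degrees) keyed at frames 0, 12, and 24.
elbow_keys = [(0, 0.0), (12, 90.0), (24, 45.0)]
print(sample_channel(elbow_keys, 6))   # halfway to the first key: 45.0
```

Evaluating every bone's channels at each frame, then deforming the skin from the resulting pose, is what turns a static bound model into motion.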
Producing the animation for 77,000 jellyfish was one of the most formidable challenges for both the technical staff and the artists. These pink, translucent jellyfish demanded, above all, patience and skill; it is fair to say that with the jellyfish, animated sea creatures took a big step forward, and the skinning technology served them very well. The film's use of character skinning techniques is excellent: every character is vivid, every expression and action is smooth, and the underwater world is beautiful. Creating in Maya first requires a full understanding of the software: creative freedom comes first in the imagination, but the technology has its limits. Smooth skinning was used to bind many of the characters for editing; the weights controlling the skeletal model then had to be adjusted point by point with the weight-painting tools, so that every detail of the clownfish is soft and realistic. Around a joint, the weights should be smoothed so that the joint is isolated from other influences and its movement does not drag neighboring areas along. Rigid skinning was used less; rigid lattices bound to objects must be created in position to help the bones drive joint motion. "Finding Nemo" contains a great deal of facial animation, and good facial skinning is needed to produce facial expressions. Facial animation technology is becoming ever more capable, and it depends on the skinning being done well early on so that expressions are not compromised later. How the film uses Maya's digital technology, and how to play to its strengths in styling and in the industrial pipeline, is something creative personnel need to explore: the 3D portions produced in Maya, the 2D hand-drawn portions, and the compositing stages can all be examined, from the angles of technical production and artistic pursuit, across the whole production cycle.
In the animated film "Finding Nemo", Maya's smooth skinning is used extensively; the clownfish faces carry a great deal of smooth binding, making them more human, and the application shows both the advantages and the limits of the technology. Realistic 3D imaging gives the animation spatial depth and density, bringing out the sense of space of the mysterious underwater world to the fullest. The lifelike action inevitably brings with it realistically dense footage, indoors and out, and exploring this is the film's main goal as a piece of Maya 3D animation.

Chinese Translation: The Use of Skinning
What is skinning? In character setup, the final step in the preparation for animation is skinning.
Foreign Literature Translation for 3D Animation Production (English original with Chinese translation)

Spin: A 3D Interface for Cooperative Work

Abstract: In this paper we present Spin, a three-dimensional user interface for synchronous cooperative work, designed for multi-user synchronous real-time applications such as meetings and learning situations. Spin is based on a new metaphor of the virtual workspace. We have designed an interface for an office environment which recreates the three-dimensional elements needed during a meeting and increases the user's scope of interaction. To accomplish these objectives, animation and real-time three-dimensional interaction are used to enhance the feeling of collaboration within the three-dimensional workspace. Spin is designed to keep a maximum amount of information visible. The workspace is created using artificial geometry, as opposed to true three-dimensional geometry, and spatial distortion, a technique that allows all documents and information to be displayed simultaneously while centering the user's focus of attention. Users interact with each other via their respective clones, three-dimensional representations displayed in each user's interface that are animated by user actions on shared documents. An appropriate object-manipulation system (direct manipulation, 3D devices, and specific interaction metaphors) is used to point at and manipulate 3D documents.

Keywords: Synchronous CSCW; CVE; Avatar; Clone; Three-dimensional interface; 3D interaction

Introduction
Technological progress has given us access to fields that previously existed only in our imaginations. Progress in computers and communication networks has benefited computer-supported cooperative work (CSCW), an area where many technical and human obstacles must be overcome before it can be considered a valid tool.
We need to bear in mind the difficulties inherent in cooperative work and in the user's ability to perceive a third dimension.

The Shortcomings of Two-Dimensional Interfaces
Current WIMP (windows, icons, mouse, pointer) office interfaces have considerable ergonomic limitations [1].
(a) Two-dimensional space does not display large amounts of data adequately. When it comes to displaying massive amounts of data, 2D displays have shortcomings such as window overlap and the need for iconic representation of information [2]. Moreover, the simultaneous display of too many windows (the key symptom of "windowitis") can be stressful for users [3].
(b) WIMP applications are indistinguishable from one another, leading to confusion. Window display systems, be they X11 or Windows, do not distinguish between applications; consequently, information is displayed in identical windows regardless of the user's task.
(c) 2D applications cannot provide realistic representation. Until recently, network technology only allowed for asynchronous sessions (electronic mail, for example), and because the hardware in use was not powerful enough, interfaces could only use 2D representations of the workspace. Metaphors in this type of environment do not resemble real space; consequently, it is difficult for the user to move around within a simulated 3D space.
(d) 2D applications provide poor graphical user representations. As windows are indistinguishable and there is no graphical relation between windows, it is difficult to create a visual link between users, or between a user and an object, when the user's behavior is being displayed [4].
(e) 2D applications are not sufficiently immersive. Because 2D graphical interaction is not intuitive (proprioception is not exploited), users have difficulty getting and remaining involved in the task at hand.

Interfaces: New Scope
Spin is a new interface concept based on real-time computer animation.
Widespread use of 3D graphics cards in personal computers has made real-time animation possible on low-cost machines. The introduction of a new dimension (depth) changes the user's role within the interface; the use of animation is seamless and therefore lightens the user's cognitive load. With appropriate input devices, the user now has new ways of navigating in, interacting with, and organizing his workspace. Since 1995, IBM has been working on RealPlaces [5], a 3D interface project developed to study the convergence between business applications and virtual reality. The user environment in RealPlaces is divided into two separate spaces (Fig. 1):
- a 'world view', a 3D model which stores and organizes documents through easy object interaction;
- a 'work plane', a 2D view of objects with detailed interaction (what is used in most 2D interfaces).
RealPlaces allows for 3D organization of a large number of objects. The user can navigate through them and work on a document, which can be viewed and edited in a 2D application displayed in the foreground of the 'world'. It solves the problem of 2D documents in a 3D world, although there is still some overlapping of objects. RealPlaces does solve some of the problems common to 2D interfaces, but it is not seamless: while it introduces two different dimensions to show documents, the user still has difficulty establishing links between these two dimensions when multi-user activity is being displayed. In our interface, we try to correct the shortcomings of 2D interfaces as IBM did in RealPlaces, and we go a step further: we put forward a solution for the problems raised by multi-user cooperation. Spin integrates users into a virtual working place in a manner that imitates reality, making cooperation through 3D animation possible. Complex tasks and related data can be represented seamlessly, allowing for a more immersive experience.
In the first part of this paper we discuss the various concepts inherent in simultaneous distant cooperative work (synchronous CSCW) and in representation and interaction within a 3D interface. In the second part, we describe our own interface model and how the concepts behind it were developed. We conclude with a description of the various current and impending developments directly related to the prototype and to its assessment.

Concepts
When designing a 3D interface, several fields need to be taken into consideration. We have already mentioned real-time computer animation and computer-supported cooperative work, which are the backbone of our project. Certain fields of the human sciences have also directly contributed to the development of Spin: ergonomics [6], psychology [7] and sociology [8] have broadened our knowledge of the way the user behaves within the interface, both as an individual and as a member of a group.

Synchronous Cooperative Work
The interface must support synchronous cooperative work. By this we mean that it must support applications where users have to communicate in order to make decisions, exchange views or find solutions, as would be the case in teleconferencing or learning situations. The sense of co-presence is crucial: the user needs to feel immediately that he is with other people. Experiments such as Hydra Units [9] and MAJIC [10] have allowed us to isolate some of the aspects that are essential to multimedia interactive meetings.
- Eye contact: a participant should be able to see that he is being looked at, and should be able to look at someone else.
- Gaze awareness: the user must be able to establish a participant's visual focus of attention.
- Facial expressions: these provide information about the participants' reactions, their acquiescence, their annoyance and so on.
- Gestures: these play an important role in pointing, and in 3D interfaces that use a determined set of gestures as commands; they are also a means of expressing emotion.

Group Activity
Speech is far from being the sole means of expression during verbal interaction [11]. Gestures (voluntary or involuntary) and facial expressions contribute as much information as speech. Moreover, collaborative work entails the need to identify other people's points of view as well as their actions [12,13]. This requires defining the metaphors that will enable users involved in collaborative work to understand what other users are doing and to interact with them. Researchers [14] have defined various communication criteria for representing a user in a virtual environment. In DIVE (Distributed Interactive Virtual Environment, see Fig. 2), Benford and Fahlén lay down rules for each characteristic and apply them to their own system [15]. They point out the advantages of using a clone (a realistic synthetic 3D representation of a human) to represent the user. With a clone, eye contact (it is possible to guide the eye movements of a clone) as well as gestures and facial expressions can be controlled; this is more difficult to accomplish with video images. In addition to having a clone, every user must have a telepointer, which is used to designate objects that can be seen on other users' displays.

Task-Oriented Interaction
Users attending a meeting must be able to work on one or several shared documents; it is therefore preferable to place these documents in a central position in the user's field of vision, as this increases her feeling of participation in a collaborative task. This concept, which consists of positioning documents so as to focus user attention, was developed in the Xerox Rooms project [16]; the underlying principle is to prevent windows from overlapping or becoming too numerous.
This is done by classifying windows according to specific tasks and placing them in virtual offices so that a single window is displayed at any given time. The user needs an instance of the interface adapted to his role and to the way he apprehends things. In a cooperative work context, the user is physically represented in the interface and has a position relative to the other members of the group.

The Conference Table Metaphor

Navigation
Visually displaying the separation of tasks seems logical; an open and continuous space is not suitable. The concept of 'room', in both the visual and the semantic sense, is frequently encountered in the literature. It is defined as a closed space that has been assigned a single task. A 3D representation of this 'room' is ideal because the user finds himself in a familiar situation, and the resulting interfaces are friendlier and more intuitive.

Perception and Support of Shared Awareness
Some tasks entail focusing attention on a specific issue (when editing a text document), while others call for a more global view of the activity (during a discussion you need an overview of documents and actors). Over a given period, our attention shifts back and forth between these two types of activity [17]. CSCW requires each user to know what is being done, what is being changed, where, and by whom. Consequently, the interface has to be able to support shared awareness. Ideally, the user would be able to see everything going on in the room at all times (an 'everything visible' situation). Nonetheless, there are limits to the amount of information that can be displayed simultaneously on a screen. Improvements can be made by drawing on and adopting certain aspects of human perception.
Namely, the field of vision has a central zone where images are extremely clear, and a peripheral zone where objects are not well defined but where movement and other changes can still be perceived.

Interactive Computer Animation
Interactive computer animation allows for two things: first, the amount of information displayed can be increased; second, a small portion of this information can be made fully legible [18,19]. The remainder of the information continues to be displayed but is less legible (the user has only a rough view of its contents). The use of specific 3D algorithms and interactive animation to display each object enables the user to analyse the data visually, quickly and correctly. The interface needs to be seamless: we want to avoid abstract breaks in the continuity of the scene, which would increase the user's cognitive load.

We define navigation as changes in the user's point of view. In traditional virtual reality applications, navigation also includes movement in the 3D world. Interaction, on the other hand, refers to how the user acts in the scene: the user manipulates objects without changing his overall point of view of the scene. Navigation and interaction are intrinsically linked; in order to interact with the interface, the user has to be able to move within it. Unfortunately, the existence of a third dimension creates new problems with positioning and user orientation, and these need to be dealt with in order to avoid disorienting the user [20].

Our Model
In this section, we describe our interface model by expounding the aforementioned concepts, by defining its spatial organization, and finally by explaining how the user works and collaborates with others through the interface.

Spatial Organization
The Workspace
While certain aspects of our model are related to virtual reality, we have decided that since our model is aimed at an office environment, the use of cumbersome helmets or gloves is not desirable.
Our model's working environment is non-immersive. Immersive virtual reality environments frequently lack precision and hinder perception: what humans need to perceive in order to believe in virtual worlds is out of reach of present simulation systems [26]. We try to eliminate many of the gestures linked to natural constraints (turning the pages of a book, for example) which are not necessary during a meeting. Our workspace has been designed to resolve navigation problems by reducing the number of superfluous gestures that slow the user down. In a real-life situation, for example, people sitting around a table could not easily read the same document at the same time. To create a simple and convenient workspace, situations are analysed and information that is not indispensable is discarded [27]. We often use interactive computer animation, but we do not abruptly suppress objects and create new icons; consequently, the user no longer has to strive to establish a mental link between two different representations of the same object. Because visual recognition decreases cognitive load, objects are seamlessly animated. We use animation to illustrate all changes in the working environment (the arrival of a new participant, for instance), and the telepointer is always animated. There are two basic kinds of object in our workspace: actors and artefacts. The actors are representations of the remote users or of artificial assistants. The artefacts are the applications and the interaction tools.

The Conference Table
The metaphor used by the interface is the conference table. It corresponds to a single activity (our task-oriented interface solves shortcoming (b) of the 2D interface; see Introduction). This activity is divided, spatially and semantically, into two parts. The first is a simulated panoramic view on which actors and shared applications are displayed.
Second, within this view there is a workspace located near the center of the simulated panoramic screen, where the user can easily manipulate a specific document. The actors and the shared applications (2D and 3D) are placed side by side around the table (Fig. 4) and, in the interest of comfort, there is one document or actor per 'wall'. As many applications as desired may be placed in a semi-circle so that all of them remain visible. The user can adjust the screen so that the focus of her attention is in the center; this type of motion resembles head-turning. The workspace is seamless and intuitive, and it simulates a real meeting where several people are seated around a table (Fig. 4: objects placed around our virtual table). Participants joining the meeting and additional applications are on an equal footing with those already present. Our metaphor solves shortcoming (c) of the 2D interface (see Introduction).

Distortion
If the number of objects around the table increases, they become too thin to be useful. To resolve this problem we have defined a focus-of-attention zone located in the center of the screen. Documents on either side of this zone are distorted (Fig. 5). Distortion is symmetrical in relation to the coordinate frame x = 0. Each object is uniformly scaled with the following formula:

x' = 1 - (1 - x)^a, 0 <= x <= 1

where a is the deformation factor. When a = 1 the scene is not distorted. When a > 1, points are drawn closer to the edge; this results in centrally positioned objects being stretched out, while those in the periphery are squeezed towards the edge. This distortion is similar to a fish-eye with only one dimension [28]. By placing the main document in the centre of the screen while continuing to display all the other documents, our model simulates the human field of vision (with a central zone and a peripheral zone).
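The one-dimensional fish-eye mapping above can be sketched directly (the sample positions below are illustrative):

```python
def distort(x, a):
    """Spin's one-dimensional fish-eye: x' = 1 - (1 - x)**a for x in [0, 1].
    a = 1 leaves positions unchanged; a > 1 pushes points toward the edge
    (x = 1), stretching the central zone and squeezing the periphery."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must lie in [0, 1]")
    return 1.0 - (1.0 - x) ** a

# With a = 2, a point 30% of the way out lands 51% of the way out,
# while the endpoints 0 and 1 stay fixed:
print(round(distort(0.3, 2), 6))       # 0.51
print(distort(0.0, 2), distort(1.0, 2))
```

Because the endpoints are fixed and the mapping is monotonic, every document stays on screen; only its screen width changes with its distance from the focus of attention.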
By reducing the space taken up by less important objects, an 'everything perceivable' situation is obtained: although objects on the periphery are neither legible nor clear, they are visible, and all the information is available on the screen. The number of actors and documents that can be placed around the table depends, for the most part, on screen resolution. Our project is designed for small meetings, with, for example, four people (three clones) and a few documents (say, three). Under these conditions, if participants are using 17-inch, 800-pixel screens, all six objects are visible and the system works.

Everything Visible
With this type of distortion, the important applications remain entirely legible while all the others are still part of the environment. When the simulated panoramic screen is reoriented, what disappears on one side immediately reappears on the other. This allows the user to keep all applications visible in the interface. In CSCW it is crucial that each and every actor and artefact taking part in a task is displayed on the screen (this solves shortcoming (a) of the 2D interface; see Introduction).

A Focus-of-Attention Area
When the workspace is distorted in this fashion, the user intuitively places the application she is working on in the center, in the focus-of-attention area. Clone head movements correspond to changes in the participants' focus of attention. Each participant therefore sees the other participants' clones and is able to perceive their head movements. This gives users the impression of establishing eye contact and reinforces gaze awareness without the use of special devices.
When a participant places a private document (one that is visible only on her own interface) in her focus in order to read or modify it, her clone appears to be looking at the conference table. In front of the simulated panoramic screen is the workspace where the user can place (and enlarge) the 2D or 3D applications she is working on, and edit or manipulate them. Navigation is therefore limited to rotating the screen and zooming in on the applications in the focus-of-attention zone.

Conclusion
In the future, research needs to be oriented towards clone animation and the amount of information clones can convey about participant activity, the aim being to increase user collaboration and strengthen the feeling of shared presence. New tools need to be introduced that enable participants to adopt another participant's point of view or to work on another participant's document. Tools should allow for direct interaction with documents and users. We will continue to develop visual metaphors that provide more information about shared documents: who is manipulating what, who has the right to use which documents, and so on. In order to make Spin more flexible, it should integrate standards such as VRML 97, MPEG-4 and CORBA. Finally, Spin needs to be extended so that it can be used with bigger groups, and more specifically in learning situations.

Chinese Translation: Spin: A 3D Interface for Cooperative Work
Abstract: This paper presents Spin, a three-dimensional user interface for synchronous cooperative work, designed for multi-user synchronous real-time applications such as meetings and learning situations.
Literature Information
Title: Aesthetics and design in three dimensional animation process
Author: Gokce Kececi Sekeroglu
Source: Procedia - Social and Behavioral Sciences, 2012, 51(6): 812-817
Word count: 2,872 English words (15,380 characters); 4,908 Chinese characters

Original Text: Aesthetics and design in three dimensional animation process

Abstract: Since the end of the 20th century, animation techniques have been widely used in productions, advertisements, movies, commercials, credits, visual effects, and so on, and have become an indispensable part of cinema and television. The fast growth of technology and its impact on the whole production industry has enabled computer-generated animation techniques to become varied and widespread. Computer animation not only saves labour and money; it also gives the producer the option of working in either two dimensions (2D) or three dimensions (3D), depending on the given time frame, scenario and content. In the 21st-century cinema and television industry, computer animation has become more important than ever. Imaginary characters or objects, as well as people, events and places that are difficult, costly or even impossible to shoot, can now be produced and animated through computer modelling techniques. Nowadays, several sectors benefit from these specialised techniques. Increased demand and widening areas of application have put the questions of aesthetics and design into perspective, hence introducing a new point of view to the application process.
Coming out of necessity, 3D computer animation has added a new dimension to the field of art and design, and it has raised the question of artistic and aesthetic value in such designs.
Keywords: three dimension, animation, aesthetics, graphics, design, film

1. Introduction
Centuries ago, ancient people not only expressed themselves by painting still images on cave walls, but also attempted to convey the motion of moments and events in their paintings, which later helped establish the natural course of events in history. This concern contributed greatly to the history of animation and cinema.
The first examples of animation, dating back approximately four centuries, represent milestones in the history of cinema. Eadweard J. Muybridge took several photographs with multiple cameras (Figure 1), assembled the individual images into a motion picture, and invented a movie projector called the Zoopraxiscope; with the projection he held in 1887 he came to be regarded as the inventor of an early movie projector. The French brothers Louis and Auguste Lumière, in turn, are often credited with inventing the first motion picture and creating cinematography (1895).
Figure 1. Eadweard J. Muybridge's first animated picture
J. Stuart Blackton clearly recognised that the animated film could be a viable aesthetic and economic vehicle outside the context of orthodox live-action cinema. In particular, his movie The Haunted Hotel (1907) included impressive supernatural sequences, and convinced audiences and financiers alike that the animated film had unlimited potential (Wells, 1998:14).
The "Praxinoscope", invented by Frenchman Charles-Émile Reynaud, is one of the motion-picture tools that was developed and improved over time, and the invention is considered the beginning of the history of animated film in the modern sense of the word.
At the beginning of the 20th century, animated films produced with hand-drawn animation techniques proved very popular, and world history was marked by some of the most recognisable cartoon characters produced through these animations, such as Little Nemo (1911), Gertie the Dinosaur (1914), The Sinking of the Lusitania (1918), Little Red Riding Hood (1922), The Four Musicians of Bremen (1922), Mickey Mouse (1928) and Snow White and the Seven Dwarfs (1937).
The Nazi regime in Germany led to several important animated film productions. When Goebbels could no longer import Disney movies, he commissioned all animation studios to develop theatrical cartoons. Hans Fischerkoesen thereupon began to produce animated films, and by the end of the war he had produced over a thousand cartoons (Moritz, 2003:320).
In due course, animated films became increasingly popular, resulting in new and sizable sectors, and advances in technology made this expansion possible. From then on, computer-generated productions, which thrived in the 1980s, snowballed into an indispensable part of modern-day television and cinema. The American animated movie Aladdin grossed over 495 million dollars worldwide and represented the success of the American animation industry, which then led to an expansion into animated movies targeting adults (Aydın, 2010:110).
Japan is possibly just as assertive in animated film as America. Following the success of the first Japanese animation (anime), The White Snake Enchantress, 1958 (Figure 2), which won awards at the Venice, Mexico and Berlin film festivals, Japanese anime became ever more popular, leading to continuous international success. For example, the movie Spirited Away won an Oscar for Best Animated Feature Film and took the top prize at that year's Berlin film festival.
Following their ever-increasing success in anime production, Japan became one of the most sought-after hubs of the animation industry for European and American companies interested in collaboration.
Figure 2. The White Snake Enchantress, 1958

2. Three Dimensional Animation
The development of animation techniques, a process that can be traced back to the 18th century, brought with it a thematic variety of animation genres. Today, animation techniques based on cartoons, puppets, stop-motion, shadow, cut-out and time-lapse can be applied both manually and with digital technology. Furthermore, the use of 3D computer graphics in the 1976 film "Futureworld" opened the way for this technology to be in high demand in a variety of industries. 3D animation occupies a central role today in cinema, TV, education and video games alike, and its creative possibilities, in both realistic and surreal terms, seem to know no limits. This new medium, which with its magical powers makes the impossible possible and defies the laws of physics (Gökçearslan, 2008:1), opens a door of unlimited imagination for designers and artists. "In particular in the movies of the 80s, computer-aided animated effects turned out to be life-savers, and the feature film Terminator 2 (1991), in which 3D animation technology was used for the first time, received praise from both audience and film critics" (Kaba, 1992:19). Toy Story (Walt Disney Pictures, 1995), a film that became very popular among audiences of all ages thanks to its script, characters, settings and animation technique, was the first fully 3D animated feature film in history, and was followed by two sequels.
Helped by support from its homeland and by its form-oriented realistic style, Disney's characters have been among the top animated characters. In order to achieve realistic production, Disney even kept animals such as horses, deer and rabbits in its studios while the artists studied their form, movements and behaviour.
As for human characters, famous movie stars of the period were hired as reference points for human form and behaviour (Gökçearslan, 2009:80).
Another American movie, "Shrek" (2001), a DreamWorks Pictures full-length 3D animated film based on William Steig's book Shrek (1990), attracted millions of people. The movie is a great example of a clever and aesthetically pleasing combination of powerful imagination and realistic design. Also, by means of certain dialogues and jokes, the theme of "value judgement" is simplified in a way that even children can understand. These are two undeniable factors thought to have contributed to the movie's worldwide success.
Most successful 3D animated movies are of American make. The importance of budget, historical and political factors, as well as the contextual and stylistic factors that bring simplicity and clarity to the movies, is incontrovertible.
"The era of the post-photographic film has arrived, and it is clear that for the animator, the computer is essentially 'another pencil'. Arguably, this has already reached its zenith in PIXAR's Monsters Inc. Consequently, it remains important to note that while Europe has retained a tradition of auteurist film making, also echoed elsewhere in Russia, China, and Japan, the United States has often immersed its animation within a Special Effects tradition, and as an adjunct to live action cinema." (Wells, 2002:2).

3. Aesthetics and Design in Three Dimensional Animations
Low-budget and high-budget 3D animated movies go through the same process regardless. This process is necessary in order to put the film's several elements together properly. The first step is to write a short text called a synopsis, which outlines the movie's plot, content and theme. Following approval of the synopsis, the creative team moves on to storyboarding, where illustrations or images are displayed in sequence for the purpose of visualising the movie (Figure 3).
The storyboarding process reflects the 3D animator's perspective and the elements that are to be conveyed to the audience. The animation artists give life to a scenario and add a touch of their personality to the characters and environment. "Gone With The Wind" is the first non-animation movie to use the storyboarding technique, which was initially used in Walt Disney Studios during the production of animated movies; since the 1940s it has been an indispensable part of the film industry.

Figure 3: Toy Story, storyboarding, Pixar

Storyboard artists are a staple of the film industry, and they are the ones who either make or break the design and aesthetics of the movie. While their main responsibility is to frame the movie scenes with aesthetics and design quality in mind, they are also responsible for incorporating lights, shadows and colours in a way that enhances the realistic features of the movie.

The next step following storyboarding is "timing", which is particularly important in determining the length of scenes, taking the script into consideration. In order to achieve a realistic and plausible product, meticulous mathematical calculations are required.

The next important step is to create the characters and environment in 3D software and finalise the production in accordance with the storyboard. While characters and objects are modelled in 3D software such as 3ds Max, Cinema 4D, Houdini, Maya or LightWave, the background design is created with digital art programs such as Photoshop, Illustrator or Artage, depending on the type or content of the movie (Figure 4). Three dimensional modelling is the digital version of sculpting. In time, with ever-changing technology, the plastic arts have improved and diversified, leading to a new form of digital art which also provides aesthetic integrity in terms of technique and content.
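The "timing" calculations mentioned earlier come down to simple frame arithmetic: at a given frame rate, a scene's length in seconds maps directly to a frame count. A minimal sketch of such a timing chart (24 fps is the traditional film rate; the function names and shot labels are illustrative, not taken from any production tool):

```python
# Convert scene timings into frame counts for a timing chart.
# 24 fps is the traditional film rate; TV and games often use 25, 30 or 60.

FPS = 24

def seconds_to_frames(seconds: float, fps: int = FPS) -> int:
    """Length of a shot in frames, rounded to the nearest whole frame."""
    return round(seconds * fps)

def timing_chart(shots: dict, fps: int = FPS) -> dict:
    """Map each named shot's duration in seconds to a frame count."""
    return {name: seconds_to_frames(sec, fps) for name, sec in shots.items()}

shots = {"opening pan": 3.0, "dialogue": 5.5, "reaction cut": 0.75}
chart = timing_chart(shots)
print(chart)                 # {'opening pan': 72, 'dialogue': 132, 'reaction cut': 18}
print(sum(chart.values()))   # 222
```

This is the kind of meticulous calculation the text refers to: the animator plans every shot in frames before a single scene is animated.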
Like manually produced artwork, 3D creations are produced by highly skilled artists with extensive knowledge of anatomy, patterns, colours, textures, lights and composition. Such artists and designers are able to make use of their imagination and creativity while taking care of both the technical and aesthetic aspects of creating an animated movie.

Figure 4: Examples of 3D modelling (left) and background (right).

In a movie, the colour, light and shadow elements affect the modelled character, setting and background to a very large extent. Three dimensional computer graphics software provides a realistic virtual studio and an endless source of light combinations. Hence, the message and feeling are conveyed through an artistically sensitive and aesthetically pleasing atmosphere, created with a certain combination of light and colours. Spot, omni, area and direct lights are a few examples of the types of options that can be used on their own or in combination. For example, in 3D animations the "direct light" source can be used outdoors as an alternative to the sun, whereas the "area light", which uses parallel beams, can help smooth out surfaces by spreading the light around, which makes it ideal for indoor settings. Blue Sky Studios' 3D movie "Ice Age" (Figure 5), produced in 2001, achieved a unique and impressive technology-driven realistic technique with clever use of lights and colours, becoming one of the first exceedingly successful 3D animations of the period.

Figure 5: “Ice Age”, Blue Sky Studios, 2001

Following the modelling and finishing touches of the other visual elements, each scene is animated one by one. “Actions assigned to each and every visual element within the scene have to have a meaningful connection with the story, in terms of form and content. In fact, the very fundamental principle of computer animations is that each action within the scene serves a certain purpose, and the design within the frame creates visual pleasure.”
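The light types named above (spot, omni, area, direct) are a shared vocabulary across 3D packages. A minimal, package-agnostic sketch of how such a light rig might be described in data (the class and field names here are illustrative assumptions, not any particular software's API):

```python
from dataclasses import dataclass
from enum import Enum

class LightType(Enum):
    SPOT = "spot"      # cone of light, like a stage spotlight
    OMNI = "omni"      # point light radiating in all directions
    AREA = "area"      # surface emitter with soft spread; suited to indoor scenes
    DIRECT = "direct"  # parallel rays, standing in for the sun outdoors

@dataclass
class Light:
    name: str
    kind: LightType
    intensity: float                     # arbitrary units
    color: tuple = (1.0, 1.0, 1.0)       # RGB components in 0..1

def build_outdoor_rig() -> list:
    """An outdoor setup as described in the text: a 'direct' light as the sun,
    plus a dim, bluish omni light as ambient fill."""
    return [
        Light("sun", LightType.DIRECT, intensity=1.0),
        Light("fill", LightType.OMNI, intensity=0.2, color=(0.6, 0.7, 1.0)),
    ]

rig = build_outdoor_rig()
print([(light.name, light.kind.value) for light in rig])
# [('sun', 'direct'), ('fill', 'omni')]
```

In a real package these records would map onto the software's own light objects; the point is that a rig is a small, composable set of typed lights rather than a single global illumination setting.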
The underscore is also expected to complement the visuals and be in harmony with the scene. It is an accepted fact that a good visual presented along with suitable music affects the audience, in both an emotional and a logical sense, far more than it otherwise would. For that reason, underscores are just as important as other audio elements, such as voiceovers and effects, when it comes to complementing the visuals. Sound is an indispensable part of life and nature; therefore it can be considered a fundamental means of storytelling. Clever and appropriate use of sound is very effective in maintaining the audience's attention and interest.

In order to produce a meaningful final product in the editing phase, a careful process of storyboarding and timing has to be carried out. Skilfully executed editing can add rhythm and aesthetics to scenes. The integrity of time, setting, audio and atmosphere within a movie is also profoundly important in terms of conveying the semantic rhythm. Meticulously timed fade-out, fade-in, radiance or smoke effects allow the audience to follow the story more attentively and comfortably, and also establish consistency in the aesthetics of the movie itself.

4. Conclusion

No matter how different the technological circumstances are today from ancient times, when humans painted images on cave surfaces, human beings have always been fascinated with visual communication. Since then, they have been striving to share their experiences, achievements, wishes and dreams with other people, societies or masses. For the same purpose, people have been painting, acting, writing plays, or producing movies. The incessant desire to convey a message through visual communication brought about the invention of the cinema, and since the 18th century it has become an essential means of presenting ideas, thoughts or feelings to the masses.
3D animations, which were mainly used in advertisements, commercials, education and entertainment-related productions in the 2000s, brought about many blockbuster 3D movies.

When recorded with a camera, the three dimensional aspect of reality is lost and turned into two dimensions. In 3D animations, the aim is to emulate reality and present the audience an experience as close to real life as possible. “The human eye is much more advanced than a video camera. An infinite sense of depth and the ability to focus on several objects at the same time are only a few of the many differences between a camera and the human eye. Computer-produced visuals give the same results as the camera. Like painting and photography, they aim to interpret the three dimensional world in a two dimensional form.” As a result, 3D animations have become just as important as real footage, and thanks to their ability to produce scenes that are very difficult, even impossible, to emulate, they have actually become a better option.

Big companies such as Walt Disney, Pixar, and Tree Star have been making 3D animations which appeal to both children and adults worldwide. Successful productions combine appropriate ideas and decent content with expert artists and designers who have technical backgrounds. For that reason, in order to establish good-quality visual communication and maintain the audience's attention, art and design must go hand in hand. Sometimes, being true to all the fundamental design principles may not be enough to achieve an aesthetically pleasing scene; warmth and sincerity, which are typical attributes of human beings, must also be incorporated into the movie. The modelling team, which functions as a sculptor and creates authentic materials like a painter, teams up with creative storyboard artists and texture and background artists to achieve an artistically valuable work.
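The loss of the third dimension described above can be made concrete with the textbook pinhole projection, in which depth is divided out. This is the standard camera model, not a formula from the article; the function name and focal length are illustrative:

```python
def project(point, focal_length=1.0):
    """Pinhole projection of a 3D point (x, y, z) onto the image plane.

    Depth z is divided out, which is exactly the information a 2D image
    loses: points at different depths can land on the same image location.
    """
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera (z > 0)")
    return (focal_length * x / z, focal_length * y / z)

# Two points at different depths project to the same 2D location:
print(project((1.0, 2.0, 2.0)))   # (0.5, 1.0)
print(project((2.0, 4.0, 4.0)))   # (0.5, 1.0)
```

The collapse of many 3D points onto one 2D point is why the camera image, like painting and photography, is only an interpretation of the three dimensional world.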
In order to achieve plausibility and an aesthetically valuable creation, it is important that the colour, light, shadow and textures used during the process are true to real life. Camera angles, the speed and direction of movement, the sequence of the scenes and their harmony with the underscore are essential in determining the schematic and aesthetic quality of a movie.

In conclusion, art does not teach. Rather, art presents the full and concrete reality of the end target. What art does is present things "as they should be or could have been", which helps people attain such things in real life. However, this is just a secondary benefit of art. The main benefit of art is that it provides people with a taste of what "things would be like if they were the way they were supposed to be" in real life. Such an experience is essential to human life. Surely, people cannot watch a movie with its schematic or aesthetic quality in mind. However, as the movie progresses, a visual language settles into the spectator's subconscious, creating a sense of pleasure. Walter Benjamin claims that a spectator analysing a picture is able to abandon himself to his associations; however, this is not the case for people watching a movie at the cinema. Rather, the cinema audience can only build associations after they have watched the movie, and therefore the process of perception is delayed (Benjamin, 1993: 66).

Chinese translation: Aesthetics and Design in the Three-Dimensional Animation Process

Abstract: Since the end of the 20th century, animation technology has been widely applied in production, advertising, film, commerce, television programs, visual effects and more, and has become an indispensable part of the film and television industry.