Sample English References (approx. 3,000 words) with Chinese Translations
Appendix A — Chapter 3: Image Enhancement in the Spatial Domain

The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application. The word specific is important, because it establishes at the outset that the techniques discussed in this chapter are very much problem oriented. Thus, for example, a method that is quite useful for enhancing X-ray images may not necessarily be the best approach for enhancing pictures of Mars transmitted by a space probe. Regardless of the method used, however, image enhancement is one of the most interesting and visually appealing areas of image processing.

Image enhancement approaches fall into two broad categories: spatial domain methods and frequency domain methods. The term spatial domain refers to the image plane itself, and approaches in this category are based on direct manipulation of pixels in an image. Frequency domain techniques are based on modifying the Fourier transform of an image. Spatial methods are covered in this chapter, and frequency domain enhancement is discussed in Chapter 4. Enhancement techniques based on various combinations of methods from these two categories are not unusual. We note also that many of the fundamental techniques introduced in this chapter in the context of enhancement are used in subsequent chapters for a variety of other image processing applications.

There is no general theory of image enhancement. When an image is processed for visual interpretation, the viewer is the ultimate judge of how well a particular method works. Visual evaluation of image quality is a highly subjective process, thus making the definition of a "good image" an elusive standard by which to compare algorithm performance. When the problem is one of processing images for machine perception, the evaluation task is somewhat easier. For example, in dealing with a character recognition application, and leaving aside other issues such as computational requirements, the best image processing method would be the one yielding the best machine recognition results. However, even in situations when a clear-cut criterion of performance can be imposed on the problem, a certain amount of trial and error usually is required before a particular image enhancement approach is selected.

3.1 Background

As indicated previously, the term spatial domain refers to the aggregate of pixels composing an image. Spatial domain methods are procedures that operate directly on these pixels. Spatial domain processes will be denoted by the expression

    g(x, y) = T[f(x, y)]    (3.1-1)

where f(x, y) is the input image, g(x, y) is the processed image, and T is an operator on f, defined over some neighborhood of (x, y). In addition, T can operate on a set of input images, such as performing the pixel-by-pixel sum of K images for noise reduction, as discussed in Section 3.4.2.

The principal approach in defining a neighborhood about a point (x, y) is to use a square or rectangular subimage area centered at (x, y). The center of the subimage is moved from pixel to pixel starting, say, at the top left corner. The operator T is applied at each location (x, y) to yield the output, g, at that location. The process utilizes only the pixels in the area of the image spanned by the neighborhood. Although other neighborhood shapes, such as approximations to a circle, sometimes are used, square and rectangular arrays are by far the most predominant because of their ease of implementation.

The simplest form of T is when the neighborhood is of size 1×1 (that is, a single pixel).
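As an added, hedged sketch (not part of the original text), the general neighborhood operation of Eq. (3.1-1) can be written in a few lines of Python/NumPy. The image f and the 3×3 averaging mask are hypothetical stand-ins; the mask idea itself is developed further below.

```python
import numpy as np

def apply_mask(f, mask):
    """Apply a small mask (e.g., 3x3) to image f, as in g(x, y) = T[f(x, y)].

    The mask is centered at each pixel; the border is handled by edge
    replication so the output has the same size as the input.
    """
    m, n = mask.shape
    pad_y, pad_x = m // 2, n // 2
    padded = np.pad(f.astype(float), ((pad_y, pad_y), (pad_x, pad_x)), mode="edge")
    g = np.zeros(f.shape, dtype=float)
    for y in range(f.shape[0]):
        for x in range(f.shape[1]):
            neighborhood = padded[y:y + m, x:x + n]
            g[y, x] = np.sum(neighborhood * mask)
    return np.clip(g, 0, 255).astype(np.uint8)

# Example: a 3x3 averaging mask (a simple smoothing operator T)
if __name__ == "__main__":
    f = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
    averaging_mask = np.ones((3, 3)) / 9.0
    g = apply_mask(f, averaging_mask)
```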
In the 1×1 case, g depends only on the value of f at (x, y), and T becomes a gray-level (also called an intensity or mapping) transformation function of the form

    s = T(r)    (3.1-2)

where, for simplicity in notation, r and s are variables denoting, respectively, the gray level of f(x, y) and g(x, y) at any point (x, y). Some fairly simple, yet powerful, processing approaches can be formulated with gray-level transformations. Because enhancement at any point in an image depends only on the gray level at that point, techniques in this category often are referred to as point processing.

Larger neighborhoods allow considerably more flexibility. The general approach is to use a function of the values of f in a predefined neighborhood of (x, y) to determine the value of g at (x, y). One of the principal approaches in this formulation is based on the use of so-called masks (also referred to as filters, kernels, templates, or windows). Basically, a mask is a small (say, 3×3) 2-D array, in which the values of the mask coefficients determine the nature of the process. Enhancement techniques based on this type of approach often are referred to as mask processing or filtering. These concepts are discussed in Section 3.5.

3.2 Some Basic Gray Level Transformations

We begin the study of image enhancement techniques by discussing gray-level transformation functions. These are among the simplest of all image enhancement techniques. The values of pixels, before and after processing, will be denoted by r and s, respectively. As indicated in the previous section, these values are related by an expression of the form s = T(r), where T is a transformation that maps a pixel value r into a pixel value s. Since we are dealing with digital quantities, values of the transformation function typically are stored in a one-dimensional array and the mappings from r to s are implemented via table lookups. For an 8-bit environment, a lookup table containing the values of T will have 256 entries.

As an introduction to gray-level transformations, we consider three basic types of functions used frequently for image enhancement: linear (negative and identity transformations), logarithmic (log and inverse-log transformations), and power-law (nth power and nth root transformations). The identity function is the trivial case in which output intensities are identical to input intensities; it is included only for completeness.

3.2.1 Image Negatives

The negative of an image with gray levels in the range [0, L-1] is obtained by using the negative transformation, which is given by the expression

    s = L - 1 - r    (3.2-1)

Reversing the intensity levels of an image in this manner produces the equivalent of a photographic negative. This type of processing is particularly suited for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.

3.2.2 Log Transformations

The general form of the log transformation is

    s = c log(1 + r)    (3.2-2)

where c is a constant, and it is assumed that r ≥ 0. This transformation maps a narrow range of low gray-level values in the input image into a wider range of output levels. The opposite is true of higher values of input levels. We would use a transformation of this type to expand the values of dark pixels in an image while compressing the higher-level values. The opposite is true of the inverse log transformation. Any curve having the general shape of the log function would accomplish this spreading/compressing of gray levels in an image.
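As a hedged illustration added here (not from the original text), the point transformations of Eqs. (3.2-1) and (3.2-2) can be sketched as follows; the array f, the scaling of c, and the lookup-table usage are assumptions made for the example.

```python
import numpy as np

def negative(f, L=256):
    """Image negative, s = L - 1 - r (Eq. 3.2-1)."""
    return (L - 1) - f

def log_transform(f, c=None):
    """Log transformation, s = c * log(1 + r) (Eq. 3.2-2).

    If c is not given, it is chosen so the output spans [0, 255].
    """
    f = f.astype(float)
    if c is None:
        c = 255.0 / np.log(1.0 + f.max())
    return (c * np.log(1.0 + f)).astype(np.uint8)

# Point processing is usually implemented as a 256-entry lookup table:
r = np.arange(256, dtype=np.uint8)
negative_lut = negative(r)  # table realizing Eq. (3.2-1) for an 8-bit image
```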
In fact, the power-law transformations discussed in the next section are much more versatile for this purpose than the log transformation. However, the log function has the important characteristic that it compresses the dynamic range of images with large variations in pixel values, a classic example being the Fourier spectrum. It is not unusual to encounter spectrum values that range from 0 to 10^6 or higher. While processing numbers such as these presents no problems for a computer, image display systems generally will not be able to reproduce faithfully such a wide range of intensity values. The net effect is that a significant degree of detail will be lost in the display of a typical Fourier spectrum.

3.2.3 Power-Law Transformations

Power-law transformations have the basic form

    s = c·r^γ    (3.2-3)

where c and γ are positive constants. Sometimes Eq. (3.2-3) is written with an added offset term to account for a measurable output when the input is zero. However, offsets typically are an issue of display calibration and as a result they are normally ignored in Eq. (3.2-3). Plots of s versus r for various values of γ are shown in Fig. 3.6. As in the case of the log transformation, power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values, with the opposite being true for higher values of input levels. Unlike the log function, however, we notice here a family of possible transformation curves obtained simply by varying γ. As expected, we see in Fig. 3.6 that curves generated with values of γ > 1 have exactly the opposite effect as those generated with values of γ < 1. Finally, we note that Eq. (3.2-3) reduces to the identity transformation when c = γ = 1.

A variety of devices used for image capture, printing, and display respond according to a power law, and by convention the exponent in the power-law equation is referred to as gamma [hence our use of this symbol in Eq. (3.2-3)]. The process used to correct this power-law response phenomenon is called gamma correction. Gamma correction is important if displaying an image accurately on a computer screen is of concern. Images that are not corrected properly can look either bleached out, or, what is more likely, too dark. Trying to reproduce colors accurately also requires some knowledge of gamma correction because varying the value of gamma changes not only the brightness, but also the ratios of red to green to blue. Gamma correction has become increasingly important in the past few years, as the use of digital images for commercial purposes over the Internet has increased. It is not unusual that images created for a popular Web site will be viewed by millions of people, the majority of whom will have different monitors and/or monitor settings. Some computer systems even have partial gamma correction built in. Also, current image standards do not contain the value of gamma with which an image was created, thus complicating the issue further. Given these constraints, a reasonable approach when storing images in a Web site is to preprocess the images with a gamma that represents an "average" of the types of monitors and computer systems that one expects in the open market at any given point in time.

3.2.4 Piecewise-Linear Transformation Functions

A complementary approach to the methods discussed in the previous three sections is to use piecewise linear functions. The principal advantage of piecewise linear functions over the types of functions we have discussed thus far is that the form of piecewise functions can be arbitrarily complex.
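Before continuing with piecewise-linear functions, here is a minimal, hedged sketch of the gamma correction just described (an addition, not from the original text); the normalization to [0, 1] and the example display gamma of 2.2 are assumptions.

```python
import numpy as np

def gamma_correct(f, gamma, c=1.0, L=256):
    """Power-law (gamma) transformation, s = c * r**gamma (Eq. 3.2-3).

    Gray levels are normalized to [0, 1] before applying the power law and
    rescaled to [0, L-1] afterwards, a common practical convention.
    """
    r = f.astype(float) / (L - 1)
    s = c * np.power(r, gamma)
    return np.clip(s * (L - 1), 0, L - 1).astype(np.uint8)

# gamma < 1 brightens dark regions; gamma > 1 darkens the image.
# A display with an assumed gamma of 2.2 could be pre-compensated with 1/2.2:
# corrected = gamma_correct(image, 1.0 / 2.2)
```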
In fact, as we will see shortly, a practical implementation of some important transformations can be formulated only as piecewise functions. The principal disadvantage of piecewise functions is that their specification requires considerably more user input.

Contrast stretching. One of the simplest piecewise linear functions is a contrast-stretching transformation. Low-contrast images can result from poor illumination, lack of dynamic range in the imaging sensor, or even the wrong setting of a lens aperture during image acquisition. The idea behind contrast stretching is to increase the dynamic range of the gray levels in the image being processed.

Gray-level slicing. Highlighting a specific range of gray levels in an image often is desired. Applications include enhancing features such as masses of water in satellite imagery and enhancing flaws in X-ray images. There are several ways of doing level slicing, but most of them are variations of two basic themes. One approach is to display a high value for all gray levels in the range of interest and a low value for all other gray levels.

Bit-plane slicing. Instead of highlighting gray-level ranges, highlighting the contribution made to total image appearance by specific bits might be desired. Suppose that each pixel in an image is represented by 8 bits. Imagine that the image is composed of eight 1-bit planes, ranging from bit-plane 0 for the least significant bit to bit-plane 7 for the most significant bit. In terms of 8-bit bytes, plane 0 contains all the lowest order bits in the bytes comprising the pixels in the image and plane 7 contains all the high-order bits.

3.3 Histogram Processing

The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function h(r_k) = n_k, where r_k is the kth gray level and n_k is the number of pixels in the image having gray level r_k. It is common practice to normalize a histogram by dividing each of its values by the total number of pixels in the image, denoted by n. Thus, a normalized histogram is given by p(r_k) = n_k / n, for k = 0, 1, ..., L-1. Loosely speaking, p(r_k) gives an estimate of the probability of occurrence of gray level r_k. Note that the sum of all components of a normalized histogram is equal to 1.

Histograms are the basis for numerous spatial domain processing techniques. Histogram manipulation can be used effectively for image enhancement, as shown in this section. In addition to providing useful image statistics, we shall see in subsequent chapters that the information inherent in histograms also is quite useful in other image processing applications, such as image compression and segmentation. Histograms are simple to calculate in software and also lend themselves to economic hardware implementations, thus making them a popular tool for real-time image processing.

Appendix B — Chapter 3: Image Enhancement in the Spatial Domain (beginning of the Chinese translation): The principal objective of enhancement is to process an image so that the result is more suitable than the original image for a specific application.
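As a small, hedged sketch added here (not part of the source text), the normalized histogram p(r_k) = n_k / n defined above can be computed as follows; the image array is assumed to hold 8-bit gray levels.

```python
import numpy as np

def normalized_histogram(f, L=256):
    """Normalized histogram p(r_k) = n_k / n for an image with L gray levels.

    Returns an array of length L whose entries sum to 1.
    """
    n_k = np.bincount(f.ravel(), minlength=L).astype(float)  # counts per gray level
    return n_k / f.size

# Example with a hypothetical 8-bit image:
# p = normalized_histogram(image)
# assert abs(p.sum() - 1.0) < 1e-9
```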
Foreign Literature Translation — Original Text

Fundamentals of Human Animation
(From Peter Ratner. 3D Human Modeling and Animation [M]. America: Wiley, 2003: 243-249)

If you are reading this part, then you have most likely finished building your human character, created textures for it, set up its skeleton, made morph targets for facial expressions, and arranged lights around the model. You have then arrived at perhaps the most exciting part of 3-D design, which is animating a character. Up to now the work has been somewhat creative, sometimes tedious, and often difficult. It is very gratifying when all your previous efforts start to pay off as you enliven your character. When animating, there is a creative flow that increases gradually over time. You are now at the phase where you become both the actor and the director of a movie or play.

Although animation appears to be a more spontaneous act, it is nevertheless just as challenging, if not more so, than all the previous steps that led up to it. Your animations will look pitiful if you do not understand some basic fundamentals and principles. The following pointers are meant to give you some direction. Feel free to experiment with them. Bend and break the rules whenever you think it will improve the animation.

SOME ANIMATION POINTERS

1. Try isolating parts. Sometimes this is referred to as animating in stages. Rather than trying to move every part of a body at the same time, concentrate on specific areas. Only one section of the body is moved for the duration of the animation. Then, returning to the beginning of the timeline, another section is animated. By successively returning to the beginning and animating a different part each time, the entire process is less confusing.

2. Put in some lag time. Different parts of the body should not start and stop at the same time. When an arm swings, the lower arm should follow a few frames after that. The hand swings after the lower arm. It is like a chain reaction that works its way through the entire length of the limb.

3. Nothing ever comes to a total stop. In life, only machines appear to come to a dead stop. Muscles, tendons, force, and gravity all affect the movement of a human. You can prove this to yourself. Try punching the air with a full extension. Notice that your fist has a bounce at the end. If a part comes to a stop, such as a motion hold, keyframe it once and then again after three to eight or more keyframes. Your motion graph will then have a curve between the two identical keyframes. This will make the part appear to bounce rather than come to a dead stop.

4. Add facial expressions and finger movements. Your digital human should exhibit signs of life by blinking and breathing. A blink will normally occur about every 60 frames (roughly every two seconds). A typical blink might be as follows:

Frame 60: Both eyes are open.
Frame 61: The right eye closes halfway.
Frame 62: The right eye closes all the way and the left eye closes halfway.
Frame 63: The right eye opens halfway and the left eye closes all the way.
Frame 64: The right eye opens all the way and the left eye opens halfway.
Frame 65: The left eye opens all the way.

Closing the eyes at slightly different times makes the blink less mechanical. Changing facial expressions could be just using eye movements to indicate thoughts running through your model's head. The hands will appear stiff if you do not add finger movements. Too many students are too lazy to take the time to add facial and hand movements. If you make the extra effort for these details you will find that your animations become much more interesting.
5. What is not seen by the camera is unimportant. If an arm goes through a leg but is not seen in the camera view, then do not bother to fix it. If you want a hand to appear close to the body and the camera view makes it seem to be close even though it is not, then why move it any closer? This also applies to sets. There is no need to build an entire house if all the action takes place in the living room. Consider painting backdrops rather than modeling every part of a scene.

6. Use a minimum amount of keyframes. Too many keyframes can make the character appear to move in spastic motions. Sharp, cartoonlike movements are created with closely spaced keyframes. Floaty or soft, languid motions are the result of widely spaced keyframes. An animation will often be a mixture of both. Try to look for ways that will abbreviate the motions. You can retain the essential elements of an animation while reducing the amount of keyframes necessary to create a gesture.

7. Anchor a part of the body. Unless your character is in the air, it should have some part of itself locked to the ground. This could be a foot, a hand, or both. Whichever portion is on the ground should be held in the same spot for a number of frames. This prevents unwanted sliding motions. When the model shifts its weight, the foot that touches down becomes locked in place. This is especially true with walking motions.

There are a number of ways to lock parts of a model to the ground. One method is to use inverse kinematics. The goal object, which could be a null, automatically locks a foot or hand to the bottom surface. Another method is to manually keyframe the part that needs to be motionless in the same spot. The character or its limbs will have to be moved and rotated so that the foot or hand stays in the same place. If you are using forward kinematics, then this could mean keyframing practically every frame until it is time to unlock that foot or hand.

8. A character should exhibit weight. One of the most challenging tasks in 3-D animation is to have a digital actor appear to have weight and mass. You can use several techniques to achieve this. Squash and stretch, or weight and recoil, one of the 12 principles of animation discussed in Chapter 12, is an excellent way to give your character weight. By adding a little bounce to your human, he or she will appear to respond to the force of gravity. For example, if your character jumps up and lands, lift the body up a little after it makes contact. For a heavy character, you can do this several times and have it decrease over time. This will make it seem as if the force of the contact causes the body to vibrate a little.

Secondary actions, another one of the 12 principles of animation discussed in Chapter 12, are an important way to show the effects of gravity and mass. Using the previous example of a jumping character, when he or she lands, the belly could bounce up and down, the arms could have some spring to them, the head could tilt forward, and so on.

Moving or vibrating the object that comes in contact with the traveling entity is another method for showing the force of mass and gravity. A floor could vibrate, or a chair that a person sits in could respond to the weight by the seat going down and recovering back up a little. Sometimes an animator will shake the camera to indicate the effects of a force.

It is important to take into consideration the size and weight of a character.
Heavy objects such as an elephant will spend more time on the ground, while a light character like a rabbit will spend more time in the air. The hopping rabbit hardly shows the effects of gravity and mass.

9. Take the time to act out the action. So often, it is too easy to just sit at the computer and try to solve all the problems of animating a human. Put some life into the performance by getting up and acting out the motions. This will make the character's actions more unique and also solve many timing and positioning problems. The best animators are also excellent actors. A mirror is an indispensable tool for the animator. Videotaping yourself can also be a great help.

10. Decide whether to use IK, FK, or a blend of both. Forward kinematics and inverse kinematics have their advantages and disadvantages. FK allows full control over the motions of different body parts. A bone can be rotated and moved to the exact degree and location one desires. The disadvantage to using FK is that when your person has to interact within an environment, simple movements become difficult. Anchoring a foot to the ground so it does not move is challenging because whenever you move the body, the feet slide. A hand resting on a desk has the same problem.

IK moves the skeleton with goal objects such as a null. Using IK, the task of anchoring feet and hands becomes very simple. The disadvantage to IK is that a great amount of control is packed together into the goal objects. Certain poses become very difficult to achieve.

If the upper body does not require any interaction with its environment, then consider a blend of both IK and FK. IK can be set up for the lower half of the body to anchor the feet to the ground, while FK on the upper body allows greater freedom and precision of movements. Every situation involves a different approach. Use your judgment to decide which setup fits the animation most reliably.

11. Add dialogue. It has been said that more than 90% of student animations that are submitted to companies lack dialogue. The few that incorporate speech in their animations make their work highly noticeable. If the animation and dialogue are well done, then those few have a greater advantage than their competition. Companies understand that it takes extra effort and skill to create animation with dialogue. When you plan your story, think about creating interaction between characters not only on a physical level but through dialogue as well. There are several techniques, discussed in this chapter, that can be used to make dialogue manageable.

12. Use the graph editor to clean up your animations. The graph editor is a useful tool that all 3-D animators should become familiar with. It is basically a representation of all the objects, lights, and cameras in your scene. It keeps track of all their activities and properties.

A good use of the graph editor is to clean up morph targets after animating facial expressions. If the default incoming curve in your graph editor is set to arcs rather than straight lines, you will most likely find that sometimes splines in the graph editor will curve below a value of zero. This can yield some unpredictable results. The facial morph targets begin to take on negative values that lead to undesirable facial expressions. Whenever you see a curve bend below a value of zero, select the first keyframe point to the right of the arc and set its curve to linear.
A more detailed discussion of the graph editor will be found in a later part of this chapter.

ANIMATING IN STAGES

All the various components that can be moved on a human model often become confusing if you try to change them at the same time. The performance quickly deteriorates into a mechanical routine if you try to alter all these parts at the same keyframes. Remember, you are trying to create human qualities, not robotic ones.

Isolating areas to be moved means that you can look for the parts of the body that have motion over time and concentrate on just a few of those. For example, the first thing you can move is the body and legs. When you are done moving them around over the entire timeline, then try rotating the spine. You might do this by moving individual spine bones or using an inverse kinematics chain. Now that you have the body moving around and bending, concentrate on the arms. If you are not using an IK chain to move the arms, hands, and fingers, then rotate the bones for the upper and lower arm. Do not forget the wrist. Finger movements can be animated as one of the last parts. Facial expressions can also be animated last.

Example movies showing the same character animated in stages can be viewed on the CD-ROM as CD11-1 AnimationStagesMovies. Some sample images from the animations can also be seen in Figure 11-1. The first movie shows movement only in the body and legs. During the second stage, the spine and head were animated. The third time, the arms were moved. Finally, in the fourth and final stage, facial expressions and finger movements were added. Animating in successive passes should simplify the process. Some final stages would be used to clean up or edit the animation.

Sometimes the animation switches from one part of the body leading to another. For example, somewhere during the middle of an animation the upper body begins to lead the lower one. In a case like this, you would then switch from animating the lower body first to moving the upper part before the lower one.

The order in which one animates can be a matter of personal choice. Some people may prefer to do facial animation first, or perhaps they like to move the arms before anything else. Following is a summary of how someone might animate a human.

1. First pass: Move the body and legs.
2. Second pass: Move or rotate the spinal bones, neck, and head.
3. Third pass: Move or rotate the arms and hands.
4. Fourth pass: Animate the fingers.
5. Fifth pass: Animate the eyes blinking.
6. Sixth pass: Animate eye movements.
7. Seventh pass: Animate the mouth, eyebrows, nose, jaw, and cheeks (you can break these up into separate passes).

Most movement starts at the hips. Athletes often begin with a windup action in the pelvic area that works its way outward to the extreme parts of the body. This whiplike activity can even be observed in just about any mundane act. It is interesting to note that people who study martial arts learn that most of their power comes from the lower torso.

Students are often too lazy to make finger movements a part of their animation. There are several methods that can make the process less time consuming. One way is to create morph targets of the finger positions and then use shape shifting to move the various digits. Each finger is positioned in an open and fistlike closed posture. For example, the sections of the index finger are closed, while the others are left in an open, relaxed position for one morph target. The next morph target would have only the ring finger closed while keeping the others open.
During the animation, sliders are then used to open and close the fingers and/or thumbs. Another method to create finger movements is to animate them in both closed and open positions and then save the motion files for each digit. Anytime you animate the same character, you can load the motions into your new scene file. It then becomes a simple process of selecting either the closed or the open position for each finger and thumb and keyframing them wherever you desire.

DIALOGUE

Knowing how to make your humans talk is a crucial part of character animation. Once you add dialogue, you should notice a livelier performance and a greater personality in your character. At first, dialogue may seem too great a challenge to attempt. Actually, if you follow some simple rules, you will find that adding speech to your animations is not as daunting a task as one would think. The following suggestions should help.

DIALOGUE ESSENTIALS

1. Look in the mirror. Before animating, use a mirror or a reflective surface such as that on a CD to follow lip movements and facial expressions.

2. The eyes, mouth, and brows change the most. The parts of the face that contain the greatest amount of muscle groups are the eyes, brows, and mouth. Therefore, these are the areas that change the most when creating expressions.

3. The head constantly moves during dialogue. Animate random head movements, no matter how small, during the entire animation. Involuntary motions of the head make a point without having to state it outright. For example, nodding and shaking the head communicate, respectively, positive and negative responses. Leaning the head forward can show anger, while a downward movement communicates sadness. Move the head to accentuate and emphasize certain statements. Listen to the words that are stressed and add extra head movements to them.

4. Communicate emotions. There are six recognizable universal emotions: sadness, anger, joy, fear, disgust, and surprise. Other, more ambiguous states are pain, sleepiness, passion, physical exertion, shyness, embarrassment, worry, disdain, sternness, skepticism, laughter, yelling, vanity, impatience, and awe.

5. Use phonemes and visemes. Phonemes are the individual sounds we hear in speech. Rather than trying to spell out a word, recreate the word as a phoneme. For example, the word computer is phonetically spelled "cumpewtrr." Visemes are the mouth shapes and tongue positions employed during speech. It helps tremendously to draw a chart that recreates speech as phonemes combined with mouth shapes (visemes) above or below a timeline with the frames marked and the sound and volume indicated.

6. Never animate behind the dialogue. It is better to make the mouth shapes one or two frames before the dialogue.

7. Don't overstate. Realistic facial movements are fairly limited. The mouth does not open that much when talking.

8. Blinking is always a part of facial animation. It occurs about every two seconds. Different emotional states affect the rate of blinking. Nervousness increases the rate of blinking, while anger decreases it.

9. Move the eyes. To make the character appear to be alive, be sure to add eye motions. About 80% of the time is spent watching the eyes and mouth, while about 20% is focused on the hands and body.

10. Breathing should be a part of facial animation. Opening the mouth and moving the head back slightly will show an intake of air, while flaring the nostrils and having the head nod forward a little can show exhalation.
Breathing movements should be very subtle and hardly noticeable.

Foreign Literature Translation — Translated Text

Fundamentals of Human Animation
(From Peter Ratner. 3D Human Modeling and Animation [M]. America: Wiley, 2003: 243-249)

(Beginning of the Chinese translation:) If you are reading this part, it means you have most likely already built your human character, created textures for it, set up its skeleton, made morph targets for facial expressions, and arranged lights around the model.
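The blink timing described in pointer 4 above can also be expressed as a small, package-agnostic sketch (an illustration added here, not from the source text); the open/closed values, the 60-frame interval, and the 30 fps assumption are hypothetical, and actual keyframing would be done in whatever 3-D package is in use.

```python
def blink_keyframes(start_frame):
    """Keyframe values for one blink, following the timing described above.

    Returns (frame, right_eye, left_eye) tuples, where 1.0 = fully open and
    0.0 = fully closed. The left eye lags the right eye by one frame so the
    blink does not look mechanical.
    """
    right = [1.0, 0.5, 0.0, 0.5, 1.0, 1.0]
    left = [1.0, 1.0, 0.5, 0.0, 0.5, 1.0]
    return [(start_frame + i, r, l) for i, (r, l) in enumerate(zip(right, left))]

# One blink roughly every 60 frames (about every two seconds at 30 fps):
schedule = [kf for s in range(60, 600, 60) for kf in blink_keyframes(s)]
```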
Selected Reference Lists for English-Language Papers (3 samples)

Sample 1: References in English and other foreign languages are listed first, followed by Chinese references; apply the standards below.
Journal articles (期刊论文)
Bolinger, D. 1965. The atomization of word meaning [J]. Language 41 (4): 555-573.
朱永生,2006,名词化、动词化与语法隐喻[J],《外语教学与研究》(2):83-90。
Papers in edited volumes (论文集论文)
Bybee, J. 1994. The grammaticization of zero: Asymmetries in tense and aspect systems [A]. In W. Pagliuca (ed.). Perspectives on Grammaticalization [C]. Amsterdam: John Benjamins. 235-254.
文秋芳,2003a,英语学习者动机、观念、策略的变化规律与特点 [A]。
载文秋芳、王立非(编),《英语学习策略实证研究》[C]。
西安:陕西师范大学出版社。
255-259。
Online sources (网上文献)
Jiang, Yan. 2000. The Tao of verbal communication: An elementary textbook on pragmatics and discourse analysis [OL]. (accessed 30/04/2006).
王岳川,2004,当代传媒中的网络文化与电视批评[OL],(2005年11月18日读取)。
Monographs (专著)
Bloomfield, L. 1933. Language [M]. New York: Holt.
吕叔湘、朱德熙,1952,《语法修辞讲话》[M]。
北京:中国青年出版社。
Translated works (译著)
Nedjalkov, V. P. (ed.). 1983/1988. Typology of Resultative Constructions, trans. Bernard Comrie [C]. Amsterdam: John Benjamins.
赵元任,1968/1980,《中国话的文法》(A Grammar of Spoken Chinese)[M],丁邦新译。
June 2015. English Reference Material Translation. Title: Development of Multi-Layer Knitted Fabric on a Flat Knitting Machine. ****: ***. College: School of Light Industry and Textiles. Department: Fashion Design and Engineering. Major: Fashion Design and Engineering. Class: Fashion Design and Engineering 11-1. Advisor: 邱莉 (Lecturer). School code: 10128. Student No.: ************

Development of Multi-Layer Fabric on a Flat Knitting Machine

Abstract

The loop transfer technique was used to develop a splittable multi-layer knit fabric on a computerized multi-gauge flat knitting machine. The fabric consists of three layers: inner (single jersey), middle (1×1 purl), and outer (single jersey). By varying the loop length, multi-layer knit fabric samples were produced, namely CCC-1, CCC-2 and CCC-3. The above multi-layer fabrics were knitted using 24s Ne cotton with combined yarn feeds of 3, 4, and 4 respectively. The influence of loop length on wpc, cpc and tightness factor was studied using linear regression. The water vapor and air permeability properties of the produced multi-layer knit fabrics were studied using ANOVA. Changing the raw material in the three individual layers could be useful for the production of fabric for functional, technical, and industrial applications.

Keywords: multi-layer fabric, loop length, loop transfer, permeability properties

1. INTRODUCTION

Flat knitting technology is capable of manufacturing engineered fabrics in two-dimensional, three-dimensional, bi-layer, and multi-layer knit forms. Its features include individual needle selection, the presence of holding-down sinkers, racking, transfer, and adapted feeding devices combined with CAD. Layered fabrics are more suitable for functional and technical applications than single-layer fabrics. These fabrics may be non-splittable (branching knit structure, plated fabric, spacer fabric) or splittable (bi-layer, multi-layer). A functional knitted structure of two different fabric layers based on different textile components (hydrophobic and hydrophilic textile materials) is used to produce leisure wear, sportswear and protective clothing with improved comfort. The separation layer is polypropylene in contact with the skin, and the absorption layer will be the outside layer when cotton is used for the knit fabric. Garments made of plant-branch-structured knit fabrics transport sweat from the skin to the outer layer of the fabric very fast and make the wearer more comfortable. Qing Chen et al. reported that branching knitted fabrics made from different combinations of polyester/cotton yarns with varying linear density and various knitted structures produced on a rib machine improved water transport properties. The moisture comfort characteristics of plated knitted fabric were reported to be good; the authors' findings came from a three-layer plated fabric in which cotton (40s), lycra (20 D) and superfine polypropylene (55 dtex/72 f) were used as the raw materials in the face, middle and ground layers respectively. Multilayer fabrics also have wide applications in wearable electronics. Novel multi-functional fiber structures include a three-layered knit fabric embedded with electronics for health monitoring: the knit fabric consists of 5% spandex and 95% polypropylene in the 1st layer; the 2nd layer is composed of metal fibers to conduct electric current to the embedded electronics and PCM; and the 3rd layer is composed of highly hydrophilic cotton. In flat knitting, two-surface (U-, V-, M-, X- and Y-shaped) and three-surface-layer (U face-to-face, U zigzag and X-shaped) spacer fabrics were developed from hybrid glass filament and polypropylene for lightweight composite applications. H. Cebulla et al. produced three-dimensional preforms such as open cuboids and spherical shells using the needle parking method.
The focus was in the area of individual needle selection in the machine for the production of near net-shape preforms. Multi-layered sandwich knit fabrics with a rectangular core structure (connecting layer: rib), triangular core structure (connecting layer: interlock), honeycomb core structure (connecting layer: jersey combined with rib), triple face structure 1 (connecting layers not alternated), and triple face structure 2 (connecting layers alternated) were developed on a flat knitting machine for technical applications. In this direction, the flat knitting machine was selected to produce splittable multi-layer knit fabric using varying loop length and loop transfer techniques. The influence of loop length on wpc, cpc and tightness factor was studied for the three individual layers in the fabric. The important breathability properties of the fabric, such as water vapor permeability and air permeability, were studied. The production technique used for this fabric has wide applications, such as in functional wear, technical textiles, and wearable textiles.

2. MATERIALS AND METHODS

For the production of multi-layer knit fabrics such as CCC-1, CCC-2 and CCC-3, cotton yarn with a linear density of 24s Ne was fed in the knit feeder. For layered fabric development, a computerized multi-gauge flat knitting machine was used with combined yarn feeds of 3, 4 and 4 respectively, as shown in Table I. Q. M. Wang and H. Hu [9] selected yarn feeds in the range of 4-10 for the production of glass fiber yarn composite reinforcement on a flat knitting machine. The intermediate between integral and fully fashioned garments was produced using the "half gauging or needle parking" technique. Only alternate needles on each bed of the flat knitting machine were used for stitch formation. The remaining needles did not participate in stitch formation in the same course, but the loops formed were kept in the needle head until employed for stitch formation again, thus freeing needles to be used as temporary parking places for loop transfer. For the production of layered fabric and fully fashioned garments, the loop transfer stitch is an essential part of the panel. Running-on bars were used for transferring loops either by hand or automatically from one needle to another, depending on the machine. The principle of loop transfer is shown in Figure 1.

FIGURE 1. Principle of loop transfer. (a) The delivering needle is raised by a cam in the carriage. The loop is stretched over the transfer spring. (b) The receiving needle is raised slightly from its needle bed. The receiving needle enters the transfer spring of the delivering needle and penetrates the loop that will be transferred. (c) The delivering needle retreats, leaving the loop on the receiving needle. The transfer spring opens to permit the receiving needle to move back from its closure. Finally, loop transfer is completed.

TABLE I. Machine & fabric parameters.

2.1 Fabric Development

Using STOLL M1.PLUS 5.1.034 software, the needle selection pattern was simulated, as shown in Figure 2. In Figure 3, feeders 1, 2 and 3 are used for the formation of the three-layer fabric (inner: single jersey, middle: 1×1 purl, outer: single jersey) respectively. With knit stitches, the outer and inner layer fabrics are formed by selecting the alternate working needles in each bed. The middle layer fabric is formed by the free needles in each bed with the help of loop transfer and knit stitches.

FIGURE 2. Selection of machine & pattern parameters.
FIGURE 3. Needle diagram for the multi-layer knit fabric.

2.2 Testing

The produced multi-layer knit fabric was given a relaxation process and the following tests were carried out. The knitted fabric properties are given in Table II, and the cross-sectional view of the fabrics is shown in Figure 4.

FIGURE 4. Cross-sectional view of the multi-layer knit fabric.

2.3 Stitch Density

The course and wale densities of the samples in the outer, middle and inner layers were calculated individually in the direction of the length and width of the fabric. The average density per square centimeter was used for the discussion.

2.4 Loop Length

In the outer, middle and inner layers of the various multi-layer fabric combinations, 20 loops in a course were unraveled and the length of yarn in cm (LT) was measured. From the LT value the stitch length/loop length was calculated using

    Stitch length/loop length in cm (L) = LT / 20    (1)

The average loop length (cm) was taken and reported in Table II.

2.5 Tightness Factor (K)

The tightness of the knits was characterized by the tightness factor (K). K is the ratio of the area covered by the yarns in one loop to the area occupied by the loop. It is also an indication of the relative looseness or tightness of the knitted structure. For the determination of TF the following formula was used:

    Tightness factor (K) = √T / l    (2)

where T = yarn linear density in tex and l = loop length of the fabric in cm. The TF of the three layers (outer, middle, and inner) was calculated separately and is given in Table II.

TABLE II. Multi-layer knitted fabric parameters.

3. RESULTS AND DISCUSSION

The water vapor permeability of the multi-layer knit fabrics was analyzed and is shown in Figure 8. It can be observed that a linear trend is followed between water vapor permeability and loop length. With increases in loop length, there is less resistance per unit area, so the permeability of the fabric also increases. ANOVA data show that increases in loop length yield a significant difference in the water vapor permeability of the multi-layer fabrics [F (2, 15) > F crit]. Regression analysis was done between CCC-1 and CCC-2 and between CCC-2 and CCC-3 to study the influence of the number of yarn feeds. R² values show 0.755 for both comparisons. The water vapor permeability of the fabric is highly influenced by the loop length and less by the number of yarn feeds.

The air permeability of the multi-layer knit fabrics was analyzed and is shown in Figure 8. It can be observed that the air permeability of the CCC-1, CCC-2, and CCC-3 fabrics is linear with loop length.

FIGURE 8. Water vapor permeability & air permeability of the fabric.

As loop length in the fabric increased, air permeability also increased. The ANOVA single-factor analysis also shows a significant difference at the 5% significance level between the air permeability characteristics of multi-layer fabrics produced from various loop lengths [F (2, 15) > F crit], as shown in Table IV. To study the influence of the combined yarn feed, regression analysis was done between CCC-1 and CCC-2 and between CCC-2 and CCC-3. It shows R² = 0.757. So the air permeability of the fabric may not depend on the number of yarns fed, but is more influenced by the loop length.

4. CONCLUSIONS

On a flat knitting machine using a loop transfer technique, multi-layer fabrics were developed with varying loop length. With respect to loop length, the loop density and tightness factor were analyzed. Based on this analysis the following conclusions were made:
TABLE III. Permeability characteristics of multi-layer knit fabrics.

TABLE IV. ANOVA single-factor data analysis.

For multi-layer fabric produced with various basic structures (single jersey and 1×1 purl), the change of loop length between the layers shows no significant difference.

The wpc and cpc had an inverse relationship with the loop length in the CCC-combination multi-layer fabrics.

The combined yarn feed is an important factor affecting the tightness factor and loop lengths of the individual layers in knitted fabrics.

The water vapor and air permeability properties of the multi-layer knit fabrics were highly influenced by the change in loop length, followed by the combined yarn feed.

Development of Multi-Layer Fabric on a Flat Knitting Machine (beginning of the Chinese translation) — Abstract: The loop transfer technique was used to develop a multi-layer knitted fabric on a computerized multi-gauge flat knitting machine.
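As an added, hedged illustration of Eqs. (1) and (2) above (not part of the source paper), the loop length and tightness factor can be computed as follows; the unraveled yarn length and the yarn count values used in the example are hypothetical.

```python
import math

def loop_length_cm(unraveled_length_cm, loops=20):
    """Stitch/loop length L = LT / 20 (Eq. 1), with LT measured over 20 loops."""
    return unraveled_length_cm / loops

def tightness_factor(yarn_tex, loop_len_cm):
    """Tightness factor K = sqrt(T) / l (Eq. 2), with T in tex and l in cm."""
    return math.sqrt(yarn_tex) / loop_len_cm

# Hypothetical example: 24s Ne cotton is roughly 590.5 / 24 ≈ 24.6 tex.
tex = 590.5 / 24
l = loop_length_cm(12.4)       # e.g., 12.4 cm of yarn unraveled from 20 loops
K = tightness_factor(tex, l)   # ≈ 8.0 for these made-up numbers
```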
Sample Literature Reviews for English Studies

Sample 1 (English essay):

The field of English language teaching (ELT) has witnessed significant advancements over the past few decades, with a plethora of research studies contributing to the development of various teaching methodologies and strategies. This literature review aims to systematically organize and summarize the current state of research in ELT, focusing on the evolution of teaching methods, the application of second language acquisition (SLA) theories, and the integration of cross-cultural communication in classroom settings.

One of the most notable trends in ELT research is the shift from traditional teacher-centered approaches to more student-centered methodologies. The communicative language teaching (CLT) approach, for instance, has gained widespread acceptance due to its emphasis on real-life communication and interactive learning. Studies have shown that CLT not only enhances students' language proficiency but also fosters their communicative competence, enabling them to use English effectively in various social contexts. However, the implementation of CLT is not without challenges, as it requires teachers to adopt new roles and adapt their teaching practices to meet the diverse needs of learners.

Another significant area of research in ELT is the application of SLA theories to inform teaching practices. The input hypothesis, proposed by Stephen Krashen, has been particularly influential in shaping language teaching methodologies. According to this hypothesis, learners acquire language most effectively when they are exposed to comprehensible input that is slightly above their current level of proficiency. This has led to the development of various techniques, such as the use of authentic materials and task-based learning, which aim to provide learners with meaningful and engaging input. Despite its popularity, the input hypothesis has also been subject to criticism, with some researchers arguing that it underestimates the role of output and interaction in language learning.

Cross-cultural communication has also emerged as a critical component of ELT, as English continues to serve as a global lingua franca. Research in this area has highlighted the importance of developing learners' intercultural competence, which involves not only linguistic skills but also an understanding of cultural norms and values. Studies have shown that incorporating cross-cultural elements into the curriculum can enhance students' ability to communicate effectively with people from different cultural backgrounds. However, achieving this goal requires careful consideration of cultural sensitivity and the avoidance of stereotypes, which can hinder rather than promote intercultural understanding.

In terms of research methods, there has been a growing emphasis on the use of mixed-methods approaches in ELT studies. Combining qualitative and quantitative data allows researchers to gain a more comprehensive understanding of the complexities involved in language teaching and learning. For example, classroom observations and interviews can provide valuable insights into teachers' and students' experiences, while statistical analyses can help identify patterns and trends in language acquisition. This holistic approach to research has the potential to yield more robust and nuanced findings, which can inform both theory and practice.

Looking ahead, future research in ELT is likely to continue exploring the interplay between technology and language learning.
The rapid advancement of digital tools and online platforms has opened up new possibilities for teaching and learning English, such as the use of virtual reality (VR) and artificial intelligence (AI) in language education. These technologies have the potential to create immersive and personalized learning experiences, but they also raise important questions about accessibility, equity, and the role of the teacher in a technology-driven classroom. Additionally, there is a need for further research on the impact of globalization on English language teaching, particularly in terms of how local contexts and cultures influence the adoption and adaptation of global ELT practices.

In conclusion, the field of ELT is characterized by a dynamic and evolving body of research that reflects the changing needs and realities of language learners and educators. By systematically reviewing the current literature, this paper has highlighted key trends, challenges, and future directions in the field. As English continues to play a pivotal role in global communication, it is imperative that researchers and practitioners work together to develop innovative and effective approaches to language teaching that are responsive to the diverse and complex needs of learners in the 21st century.

Chinese translation (opening): Over the past few decades, the field of English language teaching (ELT) has made remarkable progress, with a large body of research contributing to the development of various teaching methods and strategies.
Reference English Translation: Introduction to Ancient Villages

Ancient Villages in Southern Anhui — Xidi and Hongcun

Natural Features (自然概况)

Xidi village is 8 km east of the town seat of Yi County, Huangshan City, Anhui Province. According to "Noble Clans in Xin'an", the place was surrounded by Mt Luo in front, Mt Yangjian at the back, Mt Shishi in the north, and Mt Tianma in the south, flanked by two rivers flowing to the west instead of the east, hence named "Xidi", which means "passing to the west" in Chinese.

西递村位于安徽省黄山市黟县城东8公里处,根据《新安名族志》记:"只地罗峰高其前,阳尖障其后,石狮盘其北,天马霭其南。
中有二水环绕,不之东而之西,故名西递。
”即取水势东往西流之意。
The village, embraced in the mountains, is really scenic and abundant in resources. Inhabitants here are accustomed to a life of leisure and self-reliance with little disturbance from outside, and therefore keep much of their ancient customs and enjoy the fame as "lucky folks of a Utopian land".

西递村群山环绕,景色优美,物产丰富。
Cloud Computing Foreign Literature Translation Reference (the document contains the English original and the corresponding Chinese translation)

Original text:

Technical Issues of Forensic Investigations in Cloud Computing Environments

Dominik Birk
Ruhr-University Bochum, Horst Goertz Institute for IT Security, Bochum, Germany

Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.

I. INTRODUCTION

Although the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, a company's main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: the CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. (We would like to thank the reviewers for the helpful comments and Dennis Heinson (Center for Advanced Security Research Darmstadt - CASED) for the profound discussions regarding the legal aspects of cloud forensics.) For auditors, this situation does not change: questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensitive information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed, albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40], stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional lives, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: we discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments, including the cross-disciplinary aspects. We conclude in section V.

II. RELATED WORK

Various works have been published in the field of cloud security and privacy [9], [35], [30], focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems, and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity, mainly in cloud storage offers, have been proposed, yet lacking practical implementations [24], [37], [23]. Traditional computer forensics already has well-researched methods for various fields of application [4], [5], [6], [11], [13]. Also, the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20], including the notion of virtual introspection [25].
In addition, the NIST already addressed Web Service Forensics [22], which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. had already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments; unfortunately, no specific issues and solutions of cloud forensics were proposed there, which will be done within this work.

III. TECHNICAL BACKGROUND

A. Traditional Digital Forensics

The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:

1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.

2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.

3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results and is complete and clear to understand.

Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.
Cloud ComputingAccording to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure asa Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additive customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. For the efficiency of software development process this service model can be propellent. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model leading to further issues concerning security, privacy and the gathering of suitable evidences. Furthermore, two main deployment models, private and public cloud have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer about the application itself, the data pushed into the applications and also about the underlying technical infrastructure.C. Fault ModelBe it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:1) Maliciously Intended FaultsInternal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and state a constant threat to customers and CSP. In this model, also a malicious CSP is included albeit he isassumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threads and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk. 
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.2) Unintentional FaultsInconsistencies in technical systems or processes in the cloud do not have implicitly to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the costumer(i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.IV. TECHNICAL ISSUESDigital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators which explore these de-allocated disk space on harddisks. In case the data is in motion, data is transferred from one entity to another e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest or in motion but in execution. On the executing system, process information, machine instruction and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environmentsas well as suggest several solutions to these problems.A. Sources and Nature of EvidenceConcerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between thedifferent cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.1) Virtual Cloud Instance: The VM within the cloud, where i.e. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both, the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM. 
Therefore, virtual instances can be still running during analysis which leads to the case of live investigations [41] or can be turned off leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.2) Network Layer: Traditional network forensics is knownas the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However in practice, ordinary CSP currently do not provide any log data from the network components used by the customer’s instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log datain general which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could beextracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].B. Investigations in XaaS EnvironmentsTraditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They have no longer the option of seizing physical data storage. 
Data and processes of the customer are dispensed over an undisclosed amount of virtual instances, applications and network elements. Hence, it is in question whether preliminary findings of the computer forensic community in the field of digital forensics apparently have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly, will be taken into consideration. We also suggest potential solutions to the mentioned problems.1) SaaS Environments: Especially in the SaaS model, the customer does notobtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited userspecific application configuration settings can be controlled contributing to the evidences which can be extracted fromthe client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which are eventually provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidences.a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it states a huge problem specifically for SaaS-based applications: Current global acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: If sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP e.g. due to storage reasons. The customer has no ability to proof otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10]. Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information suchas access, error and event logs that could improve their situation in case of aninvestigation. 
Furthermore, due to the limited ability of receiving forensic information from the server and proofing integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR) in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer obtains theoretically the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However potential adversaries, which can compromise the application during runtime, should not be able to alter these log files afterwards. Suggested Solution:Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of an PaaS application by adversaries could be monitored by push-only mechanisms for log data presupposing that the needed information to detect such an attack are logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems havebeen compromised is crucial not only for recovering from an incident. Also forensic investigations gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances do provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused throughthe ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before itis transferred to third-party hosts mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although, IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP. 
He controls the hypervisor which is e.g. responsible for enforcing hardware boundaries and routing hardware requests among different VM. Hence, besides the security responsibilities of the hypervisor, he exerts tremendous control over how customer’s VM communicate with the hardware and theoretically can intervene executed processes on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore leading to the leakage of the secret key. Although this risk can be disregarded in most of the cases, the impact on the security of high security environments is tremendous.a) Snapshot Analysis: Traditional forensics expect target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of the snapshot technology which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V.A snapshot, also referred to as the forensic image of a VM, providesa powerful tool with which a virtual instance can be clonedby one click including also the running system’s mem ory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis(live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances become more common providing evidence data that。
Introduction:
The field of sustainable development has gained significant attention in recent years, as the global community seeks to balance economic growth with environmental conservation and social equity. This summary aims to provide a comprehensive overview of selected references that discuss various aspects of sustainable development, including its definitions, challenges, and potential solutions.
1. Definition and Conceptual Framework:
Several references provide a foundational understanding of sustainable development. According to the Brundtland Commission (1987), sustainable development is "development that meets the needs of the present without compromising the ability of future generations to meet their own needs." This definition highlights the intergenerational equity aspect of sustainable development. Renzetti and Volpe (2019) further explain that sustainable development encompasses three pillars: economic, social, and environmental.
2. Challenges and Barriers:
The pursuit of sustainable development faces numerous challenges and barriers. In their article, Gereffi et al. (2019) discuss the role of global value chains in perpetuating unsustainable practices. They argue that the fragmented nature of global production networks often leads to environmental degradation and social inequality. Similarly, Kaczan and Swain (2018) identify the lack of political will and insufficient policy frameworks as major barriers to achieving sustainable development goals.
3. Policy and Governance:
Policy and governance play a crucial role in facilitating sustainable development. Chatterji and Ranganathan (2017) explore the effectiveness of various policy instruments, such as regulations, incentives, and voluntary agreements, in promoting sustainable practices. They conclude that a combination of these instruments is necessary to achieve meaningful change. Furthermore, Patten and Young (2018) discuss the importance of stakeholder engagement and participatory governance in shaping sustainable development policies.
4. Economic and Social Dimensions:
The economic and social dimensions of sustainable development are closely intertwined. In their study, Safra and Swilling (2019) examine the role of the green economy in achieving sustainable development goals. They argue that a shift towards a green economy can lead to job creation, poverty reduction, and environmental conservation. Additionally, Inderbitzin and Gertler (2018) explore the social impacts of sustainable development, emphasizing the importance of inclusivity and social justice in achieving sustainable outcomes.
5. Environmental Aspects:
The environmental pillar of sustainable development is vital for ensuring the well-being of future generations. In their article, Groombridge et al. (2019) discuss the state of biodiversity and the importance of preserving ecosystems. They highlight the urgent need for conservation efforts to address the loss of biodiversity. Furthermore, Angelsen et al. (2018) explore the role of land-use change in contributing to climate change and the need for sustainable land management practices.
6. Potential Solutions and Innovations:
Several references discuss potential solutions and innovations that can contribute to sustainable development. In their book, Chopra and Garg (2019) propose a framework for sustainable urban development, emphasizing the importance of infrastructure, transportation, and green spaces.
Additionally, Chirwa and Ranganathan (2018) discuss the role of technology in promoting sustainable development, highlighting the potential of renewable energy and digitalization.
Conclusion:
This summary provides an overview of selected references on sustainable development, covering its definitions, challenges, and potential solutions. The diverse range of topics discussed in these references underscores the complexity of achieving sustainable development. It is essential for policymakers, researchers, and stakeholders to collaborate and adopt a multidisciplinary approach to address the challenges and realize the vision of a sustainable future.
英语学科方面的文献综述范文3000字
A Literature Review on English Language Teaching
Introduction
English language teaching has always been a significant area of focus in education, and a plethora of research and literature has been conducted on this subject. This literature review aims to provide an overview of key research findings and trends in English language teaching, examining various aspects such as methodologies, approaches, technologies, and challenges faced by teachers and learners in the field.
Methodologies and Approaches in English Language Teaching
One of the most widely debated topics in English language teaching is the choice of methodology and approach used in the classroom. Traditional methods such as grammar-translation and audio-lingual approaches have given way to more communicative and task-based approaches that emphasize the use of language in authentic contexts. Research has shown that communicative language teaching (CLT) has been particularly effective in promoting fluency and communicative competence among learners.
In recent years, there has been a growing interest in content and language integrated learning (CLIL), which integrates language learning with subject content. CLIL has been found to be beneficial for learners as it enhances their language proficiency while also deepening their understanding of academic subjects.
Furthermore, the use of technology in English language teaching has gained popularity, with digital tools and resources being used to enhance learning outcomes. Computer-assisted language learning (CALL) platforms, mobile applications, and online resources have been instrumental in providing learners with opportunities for autonomous and personalized learning.
Challenges and Issues in English Language Teaching
Despite the advancements in methodologies and technologies, English language teaching still faces several challenges and issues. One of the major challenges is the issue of language proficiency among teachers. Research has shown that many teachers lack the necessary language skills to effectively teach English, which can hinder the learning progress of students.
Another challenge is the lack of resources and support for English language teaching in schools, particularly in low-income and rural areas. Many schools struggle to provide adequate materials and training for teachers, which can limit the quality of English instruction.
Additionally, the assessment of English language proficiency remains a contentious issue, with standardized tests often being criticized for their lack of validity and reliability. Alternative assessment methods, such as portfolios and performance-based assessments, are being explored as more authentic ways to measure language proficiency.
Conclusion
Overall, English language teaching continues to evolve with new methodologies, approaches, and technologies being developed to enhance learning outcomes. While there are challenges and issues that need to be addressed, the field of English language teaching remains dynamic and research-driven. By staying abreast of the latest research findings and trends, educators can continue to improve their teaching practices and provide students with the skills they need to succeed in an increasingly globalized world.
云计算外文翻译参考文献(文档含中英文对照即英文原文和中文翻译)
原文:
Technical Issues of Forensic Investigations in Cloud Computing Environments
Dominik Birk
Ruhr-University Bochum
Horst Goertz Institute for IT Security
Bochum, Germany
Abstract—Cloud Computing is arguably one of the most discussed information technologies today. It presents many promising technological and economical opportunities. However, many customers remain reluctant to move their business IT infrastructure completely to the cloud. One of their main concerns is Cloud Security and the threat of the unknown. Cloud Service Providers (CSP) encourage this perception by not letting their customers see what is behind their virtual curtain. A seldom discussed, but in this regard highly relevant, open issue is the ability to perform digital investigations. This continues to fuel insecurity on the sides of both providers and customers. Cloud Forensics constitutes a new and disruptive challenge for investigators. Due to the decentralized nature of data processing in the cloud, traditional approaches to evidence collection and recovery are no longer practical. This paper focuses on the technical aspects of digital forensics in distributed cloud environments. We contribute by assessing whether it is possible for the customer of cloud computing services to perform a traditional digital investigation from a technical point of view. Furthermore, we discuss possible solutions and possible new methodologies helping customers to perform such investigations.
I. INTRODUCTION
Although the cloud might appear attractive to small as well as to large companies, it does not come along without its own unique problems. Outsourcing sensitive corporate data into the cloud raises concerns regarding the privacy and security of data. Security policies, companies' main pillar concerning security, cannot be easily deployed into distributed, virtualized cloud environments. This situation is further complicated by the unknown physical location of the company's assets. Normally, if a security incident occurs, the corporate security team wants to be able to perform their own investigation without dependency on third parties. In the cloud, this is not possible anymore: The CSP obtains all the power over the environment and thus controls the sources of evidence. In the best case, a trusted third party acts as a trustee and guarantees for the trustworthiness of the CSP. Furthermore, the implementation of the technical architecture and circumstances within cloud computing environments bias the way an investigation may be processed. In detail, evidence data has to be interpreted by an investigator in a proper manner, which is hardly possible due to the lack of circumstantial information. (We would like to thank the reviewers for the helpful comments and Dennis Heinson, Center for Advanced Security Research Darmstadt - CASED, for the profound discussions regarding the legal aspects of cloud forensics.) For auditors, this situation does not change: Questions about who accessed specific data and information cannot be answered by the customers if no corresponding logs are available. With the increasing demand for using the power of the cloud for processing also sensitive information and data, enterprises face the issue of Data and Process Provenance in the cloud [10]. Digital provenance, meaning meta-data that describes the ancestry or history of a digital object, is a crucial feature for forensic investigations.
In combination with a suitable authentication scheme, it provides information about who created and who modified what kind of data in the cloud. These are crucial aspects for digital investigations in distributed environments such as the cloud. Unfortunately, the aspects of forensic investigations in distributed environments have so far been mostly neglected by the research community. Current discussion centers mostly around security, privacy and data protection issues [35], [9], [12]. The impact of forensic investigations on cloud environments was little noticed albeit mentioned by the authors of [1] in 2009: "[...] to our knowledge, no research has been published on how cloud computing environments affect digital artifacts, and on acquisition logistics and legal issues related to cloud computing environments." This statement is also confirmed by other authors [34], [36], [40] stressing that further research on incident handling, evidence tracking and accountability in cloud environments has to be done. At the same time, massive investments are being made in cloud technology. Combined with the fact that information technology increasingly transcends people's private and professional life, thus mirroring more and more of people's actions, it becomes apparent that evidence gathered from cloud environments will be of high significance to litigation or criminal proceedings in the future. Within this work, we focus on the notion of cloud forensics by addressing the technical issues of forensics in all three major cloud service models and consider cross-disciplinary aspects. Moreover, we address the usability of various sources of evidence for investigative purposes and propose potential solutions to the issues from a practical standpoint. This work should be considered as a surveying discussion of an almost unexplored research area. The paper is organized as follows: We discuss the related work and the fundamental technical background information of digital forensics, cloud computing and the fault model in sections II and III. In section IV, we focus on the technical issues of cloud forensics and discuss the potential sources and nature of digital evidence as well as investigations in XaaS environments including the cross-disciplinary aspects. We conclude in section V.
II. RELATED WORK
Various works have been published in the field of cloud security and privacy [9], [35], [30] focusing on aspects of protecting data in multi-tenant, virtualized environments. Desired security characteristics for current cloud infrastructures mainly revolve around isolation of multi-tenant platforms [12], security of hypervisors in order to protect virtualized guest systems and secure network infrastructures [32]. Albeit digital provenance, describing the ancestry of digital objects, still remains a challenging issue for cloud environments, several works have already been published in this field [8], [10] contributing to the issues of cloud forensics. Within this context, cryptographic proofs for verifying data integrity mainly in cloud storage offers have been proposed, yet lacking practical implementations [24], [37], [23]. Traditional computer forensics has already well researched methods for various fields of application [4], [5], [6], [11], [13]. Also the aspects of forensics in virtual systems have been addressed by several works [2], [3], [20] including the notion of virtual introspection [25].
In addition, the NIST already addressed Web Service Forensics [22] which has a huge impact on investigation processes in cloud computing environments. In contrast, the aspects of forensic investigations in cloud environments have mostly been neglected by both the industry and the research community. One of the first papers focusing on this topic was published by Wolthusen [40] after Bebee et al. already introduced problems within cloud environments [1]. Wolthusen stressed that there is an inherent strong need for interdisciplinary work linking the requirements and concepts of evidence arising from the legal field to what can be feasibly reconstructed and inferred algorithmically or in an exploratory manner. In 2010, Grobauer et al. [36] published a paper discussing the issues of incident response in cloud environments - unfortunately, no specific issues and solutions of cloud forensics were proposed; this will be done within this work.
III. TECHNICAL BACKGROUND
A. Traditional Digital Forensics
The notion of Digital Forensics is widely known as the practice of identifying, extracting and considering evidence from digital media. Unfortunately, digital evidence is both fragile and volatile and therefore requires the attention of special personnel and methods in order to ensure that evidence data can be properly isolated and evaluated. Normally, the process of a digital investigation can be separated into three different steps, each having its own specific purpose:
1) In the Securing Phase, the major intention is the preservation of evidence for analysis. The data has to be collected in a manner that maximizes its integrity. This is normally done by a bitwise copy of the original media. As can be imagined, this represents a huge problem in the field of cloud computing where you never know exactly where your data is and additionally do not have access to any physical hardware. However, the snapshot technology, discussed in section IV-B3, provides a powerful tool to freeze system states and thus makes digital investigations, at least in IaaS scenarios, theoretically possible.
2) We refer to the Analyzing Phase as the stage in which the data is sifted and combined. It is in this phase that the data from multiple systems or sources is pulled together to create as complete a picture and event reconstruction as possible. Especially in distributed system infrastructures, this means that bits and pieces of data are pulled together for deciphering the real story of what happened and for providing a deeper look into the data.
3) Finally, at the end of the examination and analysis of the data, the results of the previous phases will be reprocessed in the Presentation Phase. The report, created in this phase, is a compilation of all the documentation and evidence from the analysis stage. The main intention of such a report is that it contains all results and is complete and clear to understand.
Apparently, the success of these three steps strongly depends on the first stage. If it is not possible to secure the complete set of evidence data, no exhaustive analysis will be possible. However, in real world scenarios often only a subset of the evidence data can be secured by the investigator. In addition, an important definition in the general context of forensics is the notion of a Chain of Custody. This chain clarifies how and where evidence is stored and who takes possession of it. Especially for cases which are brought to court it is crucial that the chain of custody is preserved.
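The bitwise copy and integrity preservation described for the Securing Phase can be illustrated with a minimal sketch (an illustration added here, not part of the original paper). Paths are placeholders; a real acquisition would additionally use a write blocker and re-read the finished copy for verification.

```python
import hashlib
import json
import time

CHUNK = 1024 * 1024  # read in 1 MiB blocks so large images fit in constant memory

def acquire_image(source_path, target_path):
    """Create a bitwise copy of source_path and return a small acquisition record
    (hash plus timestamp) that can be attached to the chain of custody."""
    digest = hashlib.sha256()
    with open(source_path, "rb") as src, open(target_path, "wb") as dst:
        while True:
            block = src.read(CHUNK)
            if not block:
                break
            dst.write(block)
            digest.update(block)
    return {
        "source": source_path,
        "copy": target_path,
        "sha256": digest.hexdigest(),
        "acquired_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }

if __name__ == "__main__":
    # hypothetical paths: a block device and an evidence file
    record = acquire_image("/dev/sdb", "evidence/disk.img")
    print(json.dumps(record, indent=2))
```

The same record could later list everyone who takes possession of the copy, which is exactly what the Chain of Custody mentioned above requires.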
B. Cloud Computing
According to the NIST [16], cloud computing is a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications and services) that can be rapidly provisioned and released with minimal CSP interaction. The new raw definition of cloud computing brought several new characteristics such as multi-tenancy, elasticity, pay-as-you-go and reliability. Within this work, the following three models are used: In the Infrastructure as a Service (IaaS) model, the customer is using the virtual machine provided by the CSP for installing his own system on it. The system can be used like any other physical computer with a few limitations. However, the additional customer power over the system comes along with additional security obligations. Platform as a Service (PaaS) offerings provide the capability to deploy application packages created using the virtual development environment supported by the CSP. This service model can propel the efficiency of the software development process. In the Software as a Service (SaaS) model, the customer makes use of a service run by the CSP on a cloud infrastructure. In most of the cases this service can be accessed through an API for a thin client interface such as a web browser. Closed-source public SaaS offers such as Amazon S3 and GoogleMail can only be used in the public deployment model, leading to further issues concerning security, privacy and the gathering of suitable evidence. Furthermore, two main deployment models, private and public cloud, have to be distinguished. Common public clouds are made available to the general public. The corresponding infrastructure is owned by one organization acting as a CSP and offering services to its customers. In contrast, the private cloud is exclusively operated for an organization but may not provide the scalability and agility of public offers. The additional notions of community and hybrid cloud are not exclusively covered within this work. However, independently from the specific model used, the movement of applications and data to the cloud comes along with limited control for the customer over the application itself, the data pushed into the applications and also over the underlying technical infrastructure.
C. Fault Model
Be it an account for a SaaS application, a development environment (PaaS) or a virtual image of an IaaS environment, systems in the cloud can be affected by inconsistencies. Hence, for both customer and CSP it is crucial to have the ability to assign faults to the causing party, even in the presence of Byzantine behavior [33]. Generally, inconsistencies can be caused by the following two reasons:
1) Maliciously Intended Faults
Internal or external adversaries with specific malicious intentions can cause faults on cloud instances or applications. Economic rivals as well as former employees can be the reason for these faults and state a constant threat to customers and CSP. In this model, also a malicious CSP is included albeit he is assumed to be rare in real world scenarios. Additionally, from the technical point of view, the movement of computing power to a virtualized, multi-tenant environment can pose further threats and risks to the systems. One reason for this is that if a single system or service in the cloud is compromised, all other guest systems and even the host system are at risk.
Hence, besides the need for further security measures, precautions for potential forensic investigations have to be taken into consideration.
2) Unintentional Faults
Inconsistencies in technical systems or processes in the cloud do not implicitly have to be caused by malicious intent. Internal communication errors or human failures can lead to issues in the services offered to the customer (i.e. loss or modification of data). Although these failures are not caused intentionally, both the CSP and the customer have a strong intention to discover the reasons and deploy corresponding fixes.
IV. TECHNICAL ISSUES
Digital investigations are about control of forensic evidence data. From the technical standpoint, this data can be available in three different states: at rest, in motion or in execution. Data at rest is represented by allocated disk space. Whether the data is stored in a database or in a specific file format, it allocates disk space. Furthermore, if a file is deleted, the disk space is de-allocated for the operating system but the data is still accessible since the disk space has not been re-allocated and overwritten. This fact is often exploited by investigators who explore this de-allocated disk space on hard disks. In case the data is in motion, data is transferred from one entity to another, e.g. a typical file transfer over a network can be seen as a data in motion scenario. Several encapsulated protocols contain the data, each leaving specific traces on systems and network devices which can in return be used by investigators. Data can be loaded into memory and executed as a process. In this case, the data is neither at rest nor in motion but in execution. On the executing system, process information, machine instructions and allocated/de-allocated data can be analyzed by creating a snapshot of the current system state. In the following sections, we point out the potential sources for evidential data in cloud environments and discuss the technical issues of digital investigations in XaaS environments as well as suggest several solutions to these problems.
A. Sources and Nature of Evidence
Concerning the technical aspects of forensic investigations, the amount of potential evidence available to the investigator strongly diverges between the different cloud service and deployment models. The virtual machine (VM), hosting in most of the cases the server application, provides several pieces of information that could be used by investigators. On the network level, network components can provide information about possible communication channels between different parties involved. The browser on the client, acting often as the user agent for communicating with the cloud, also contains a lot of information that could be used as evidence in a forensic investigation. Independently from the used model, the following three components could act as sources for potential evidential data.
1) Virtual Cloud Instance: The VM within the cloud, where e.g. data is stored or processes are handled, contains potential evidence [2], [3]. In most of the cases, it is the place where an incident happened and hence provides a good starting point for a forensic investigation. The VM instance can be accessed by both the CSP and the customer who is running the instance. Furthermore, virtual introspection techniques [25] provide access to the runtime state of the VM via the hypervisor, and snapshot technology supplies a powerful technique for the customer to freeze specific states of the VM.
Therefore, virtual instances can still be running during analysis, which leads to the case of live investigations [41], or can be turned off, leading to static image analysis. In SaaS and PaaS scenarios, the ability to access the virtual instance for gathering evidential information is highly limited or simply not possible.
2) Network Layer: Traditional network forensics is known as the analysis of network traffic logs for tracing events that have occurred in the past. Since the different ISO/OSI network layers provide several pieces of information on protocols and communication between instances within as well as with instances outside the cloud [4], [5], [6], network forensics is theoretically also feasible in cloud environments. However, in practice, ordinary CSP currently do not provide any log data from the network components used by the customer's instances or applications. For instance, in case of a malware infection of an IaaS VM, it will be difficult for the investigator to get any form of routing information and network log data in general, which is crucial for further investigative steps. This situation gets even more complicated in case of PaaS or SaaS. So again, the situation of gathering forensic evidence is strongly affected by the support the investigator receives from the customer and the CSP.
3) Client System: On the system layer of the client, it completely depends on the used model (IaaS, PaaS, SaaS) if and where potential evidence could be extracted. In most of the scenarios, the user agent (e.g. the web browser) on the client system is the only application that communicates with the service in the cloud. This especially holds for SaaS applications which are used and controlled by the web browser. But also in IaaS scenarios, the administration interface is often controlled via the browser. Hence, in an exhaustive forensic investigation, the evidence data gathered from the browser environment [7] should not be omitted.
a) Browser Forensics: Generally, the circumstances leading to an investigation have to be differentiated: In ordinary scenarios, the main goal of an investigation of the web browser is to determine if a user has been victim of a crime. In complex SaaS scenarios with high client-server interaction, this constitutes a difficult task. Additionally, customers strongly make use of third-party extensions [17] which can be abused for malicious purposes. Hence, the investigator might want to look for malicious extensions, searches performed, websites visited, files downloaded, information entered in forms or stored in local HTML5 stores, web-based email contents and persistent browser cookies for gathering potential evidence data. Within this context, it is inevitable to investigate the appearance of malicious JavaScript [18] leading to e.g. unintended AJAX requests and hence modified usage of administration interfaces. Generally, the web browser contains a lot of electronic evidence data that could be used to give an answer to both of the above questions - even if the private mode is switched on [19].
B. Investigations in XaaS Environments
Traditional digital forensic methodologies permit investigators to seize equipment and perform detailed analysis on the media and data recovered [11]. In a distributed infrastructure organization like the cloud computing environment, investigators are confronted with an entirely different situation. They no longer have the option of seizing physical data storage.
Data and processes of the customer are dispersed over an undisclosed number of virtual instances, applications and network elements. Hence, it is in question whether the preliminary findings of the computer forensic community in the field of digital forensics have to be revised and adapted to the new environment. Within this section, specific issues of investigations in SaaS, PaaS and IaaS environments will be discussed. In addition, cross-disciplinary issues which affect several environments uniformly will be taken into consideration. We also suggest potential solutions to the mentioned problems.
1) SaaS Environments: Especially in the SaaS model, the customer does not obtain any control of the underlying operating infrastructure such as network, servers, operating systems or the application that is used. This means that no deeper view into the system and its underlying infrastructure is provided to the customer. Only limited user-specific application configuration settings can be controlled, contributing to the evidence which can be extracted from the client (see section IV-A3). In a lot of cases this urges the investigator to rely on high-level logs which might be provided by the CSP. Given the case that the CSP does not run any logging application, the customer has no opportunity to create any useful evidence through the installation of any toolkit or logging tool. These circumstances do not allow a valid forensic investigation and lead to the assumption that customers of SaaS offers do not have any chance to analyze potential incidents.
a) Data Provenance: The notion of Digital Provenance is known as meta-data that describes the ancestry or history of digital objects. Secure provenance that records ownership and process history of data objects is vital to the success of data forensics in cloud environments, yet it is still a challenging issue today [8]. Albeit data provenance is of high significance also for IaaS and PaaS, it poses a huge problem specifically for SaaS-based applications: Currently, globally acting public SaaS CSP offer Single Sign-On (SSO) access control to the set of their services. Unfortunately, in case of an account compromise, most of the CSP do not offer any possibility for the customer to figure out which data and information has been accessed by the adversary. For the victim, this situation can have tremendous impact: If sensitive data has been compromised, it is unclear which data has been leaked and which has not been accessed by the adversary. Additionally, data could be modified or deleted by an external adversary or even by the CSP, e.g. due to storage reasons. The customer has no ability to prove otherwise. Secure provenance mechanisms for distributed environments can improve this situation but have not been practically implemented by CSP [10].
Suggested Solution: In private SaaS scenarios this situation is improved by the fact that the customer and the CSP are probably under the same authority. Hence, logging and provenance mechanisms could be implemented which contribute to potential investigations. Additionally, the exact location of the servers and the data is known at any time. Public SaaS CSP should offer additional interfaces for the purpose of compliance, forensics, operations and security matters to their customers. Through an API, the customers should have the ability to receive specific information such as access, error and event logs that could improve their situation in case of an investigation.
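No such forensics interface is standardized today; purely as an illustration (not part of the original paper), a customer-side collector against a hypothetical log API could look like the sketch below. The endpoint, query parameters and token handling are assumptions.

```python
import hashlib
import json
import urllib.request

# Hypothetical endpoint and credential; no standardized forensics API exists today.
API = "https://api.example-csp.test/v1/forensics/logs"
TOKEN = "customer-api-token"

def fetch_logs(kind, since):
    """Fetch one class of logs (access, error or event) for a time range and record
    a digest of the raw response so the batch can later be shown to be unmodified."""
    url = f"{API}?type={kind}&since={since}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {TOKEN}"})
    with urllib.request.urlopen(req) as resp:
        raw = resp.read()
    return {
        "kind": kind,
        "since": since,
        "sha256": hashlib.sha256(raw).hexdigest(),
        "entries": json.loads(raw),
    }

if __name__ == "__main__":
    batch = fetch_logs("access", "2011-01-01T00:00:00Z")
    print(batch["sha256"], len(batch["entries"]), "entries")
```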
Furthermore, due to the limited ability of receiving forensic information from the server and proving the integrity of stored data in SaaS scenarios, the client has to contribute to this process. This could be achieved by implementing Proofs of Retrievability (POR), in which a verifier (client) is enabled to determine that a prover (server) possesses a file or data object and that it can be retrieved unmodified [24]. Provable Data Possession (PDP) techniques [37] could be used to verify that an untrusted server possesses the original data without the need for the client to retrieve it. Although these cryptographic proofs have not been implemented by any CSP, the authors of [23] introduced a new data integrity verification mechanism for SaaS scenarios which could also be used for forensic purposes.
2) PaaS Environments: One of the main advantages of the PaaS model is that the developed software application is under the control of the customer and, except for some CSP, the source code of the application does not have to leave the local development environment. Given these circumstances, the customer obtains theoretically the power to dictate how the application interacts with other dependencies such as databases, storage entities etc. CSP normally claim this transfer is encrypted but this statement can hardly be verified by the customer. Since the customer has the ability to interact with the platform over a prepared API, system states and specific application logs can be extracted. However, potential adversaries, who can compromise the application during runtime, should not be able to alter these log files afterwards.
Suggested Solution: Depending on the runtime environment, logging mechanisms could be implemented which automatically sign and encrypt the log information before its transfer to a central logging server under the control of the customer. Additional signing and encrypting could prevent potential eavesdroppers from being able to view and alter log data information on the way to the logging server. Runtime compromise of a PaaS application by adversaries could be monitored by push-only mechanisms for log data, presupposing that the information needed to detect such an attack is logged. Increasingly, CSP offering PaaS solutions give developers the ability to collect and store a variety of diagnostics data in a highly configurable way with the help of runtime feature sets [38].
3) IaaS Environments: As expected, even virtual instances in the cloud get compromised by adversaries. Hence, the ability to determine how defenses in the virtual environment failed and to what extent the affected systems have been compromised is crucial not only for recovering from an incident; forensic investigations also gain leverage from such information and contribute to resilience against future attacks on the systems. From the forensic point of view, IaaS instances do provide much more evidence data usable for potential forensics than PaaS and SaaS models do. This fact is caused by the ability of the customer to install and set up the image for forensic purposes before an incident occurs. Hence, as proposed for PaaS environments, log data and other forensic evidence information could be signed and encrypted before it is transferred to third-party hosts, mitigating the chance that a maliciously motivated shutdown process destroys the volatile data. Although IaaS environments provide plenty of potential evidence, it has to be emphasized that the customer VM is in the end still under the control of the CSP.
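The sign-before-transfer idea suggested above for PaaS and IaaS log data can be sketched as follows; this is an added illustration rather than the paper's design. The key name and record format are assumptions, transport encryption to the central logging server is omitted, and protecting against an adversary who captures the key on the instance would additionally require the push-only forwarding or key handling mentioned above.

```python
import hashlib
import hmac
import json
import time

# Assumed to be provisioned out of band and ideally kept off the instance's disk.
LOG_SIGNING_KEY = b"per-application-secret"

def signed_record(message, key=LOG_SIGNING_KEY):
    """Wrap a log message with a timestamp and an HMAC so that the central logging
    server can detect records that were altered on the way."""
    body = {"ts": time.time(), "msg": message}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return json.dumps({"payload": body, "hmac": tag})

def verify_record(record, key=LOG_SIGNING_KEY):
    """Recompute the HMAC on the receiving side and compare in constant time."""
    parsed = json.loads(record)
    payload = json.dumps(parsed["payload"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, parsed["hmac"])

if __name__ == "__main__":
    rec = signed_record("admin login from 198.51.100.7")
    print(rec)
    print("intact:", verify_record(rec))
```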
The CSP controls the hypervisor which is, e.g., responsible for enforcing hardware boundaries and routing hardware requests among different VMs. Hence, besides the security responsibilities of the hypervisor, the CSP exerts tremendous control over how customers' VMs communicate with the hardware and theoretically can intervene in processes executed on the hosted virtual instance through virtual introspection [25]. This could also affect encryption or signing processes executed on the VM and therefore lead to the leakage of the secret key. Although this risk can be disregarded in most of the cases, the impact on the security of high security environments is tremendous.
a) Snapshot Analysis: Traditional forensics expects target machines to be powered down to collect an image (dead virtual instance). This situation completely changed with the advent of the snapshot technology which is supported by all popular hypervisors such as Xen, VMware ESX and Hyper-V. A snapshot, also referred to as the forensic image of a VM, provides a powerful tool with which a virtual instance can be cloned by one click, including also the running system's memory. Due to the invention of the snapshot technology, systems hosting crucial business processes do not have to be powered down for forensic investigation purposes. The investigator simply creates and loads a snapshot of the target VM for analysis (live virtual instance). This behavior is especially important for scenarios in which a downtime of a system is not feasible or practical due to existing SLA. However, the information whether the machine is running or has been properly powered down is crucial [3] for the investigation. Live investigations of running virtual instances become more common, providing evidence data that ……
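As a rough illustration of the snapshot-based acquisition described above (again an addition, not part of the original paper): assuming a local QEMU/KVM host reachable through the libvirt Python bindings, a forensic snapshot of a customer VM could be triggered roughly as below. The connection URI and domain name are placeholders, and whether guest memory is captured depends on the hypervisor, storage format and snapshot type.

```python
import time

import libvirt  # assumes the libvirt Python bindings are installed

def freeze_vm_state(domain_name, uri="qemu:///system"):
    """Create a named snapshot of a guest and report whether it was running,
    since live vs. dead acquisition matters for the later analysis."""
    conn = libvirt.open(uri)
    try:
        dom = conn.lookupByName(domain_name)
        live = bool(dom.isActive())
        snap_name = f"forensic-{int(time.time())}"
        xml = f"<domainsnapshot><name>{snap_name}</name></domainsnapshot>"
        dom.snapshotCreateXML(xml, 0)
        state = "running (live acquisition)" if live else "shut off (dead acquisition)"
        print(f"snapshot {snap_name} created; domain was {state}")
        return snap_name
    finally:
        conn.close()

if __name__ == "__main__":
    freeze_vm_state("customer-vm-01")  # hypothetical domain name
```

Hashing any exported snapshot files and recording who handled them would then follow the same chain-of-custody pattern as the acquisition sketch earlier in this excerpt.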
3000字英文参考文献及其翻译
【注意:选用的英文一定要与自己的论文题目相关。
如果文章太长，可以节选(用省略号省略一些段落)。如果字数不够，
可以选2至3篇,但要逐一注明详细出处。英文集中在一起放前面,
对应的中文翻译放后面。中文翻译也要将出处翻译,除非是网页。
对文献的翻译一定要认真!对英文文献及其翻译的排版也要和论文正
文一样!
特别注意:英文文献应该放在你的参考文献中。】
TOY RECALLS——IS CHINA THE PROBLEM?
Hari Bapuji, Paul W. Beamish
China exports about 20 billion toys per year and they are the second
most commonly imported item by U.S. and Canada. It is estimated that
about 10,000 factories in China manufacture toys for export. Considering
this mutual dependence, it is important that the problems resulting in
recalls are addressed carefully.
Although the largest portion of recalls by Mattel involved design
flaws, the CEO of Mattel blamed the Chinese manufacturers by saying
that the problem resulted ‘in this case (because) one of our
manufacturers did not follow the rules’. Several analysts too blamed the
Chinese manufacturers. By placing blame where it did not belong, there
is a danger of losing the opportunity to learn from the errors that have
occurred. The first step to learn from errors is to know why and where the
error occurred. Further, the most critical step in preventing the recurrence
of errors is to find out what and who can prevent it.
……
From:http://www.cei.gov.cn/loadpage.aspx?Page=ShowDoc&Category
Alias=zonghe/ggmflm_zh&BlockAlias=sjhwsd&filename=/doc/sjhwsd/2
00709281954.xml, Sep. 2007
玩具召回——是中国的问题吗?
哈里·巴普基 保罗·比密什
中国每年大约出口200亿件玩具，玩具是美国和加拿大第二常见的进口商品。
据估计,中国大约有10000家玩具制造工厂从事玩具出口。考虑到这种相互依存,
认真处理导致召回的问题是很重要的。
尽管绝大部分美泰的玩具召回涉及设计缺陷,但是,美泰公司的首席执行官
指责中国制造商说，导致这种问题的原因“是因为我们的一家制造商没有遵守规
则”。一些分析师也指责中国制造商。把责任归咎于不存在错误的地方是危险的
行为，这将使得我们丧失从已发生的错误中吸取教训的机会。从错误中学习的第一
步，是知道错误为什么发生以及在哪里发生。此外，防止错误再次发生最关键
的一步是要找出什么以及谁可以阻止它。
……
来源:
http://www.cei.gov.cn/loadpage.aspx?Page=ShowDoc&CategoryAlias=zonghe/ggmflm_zh
&BlockAlias=sjhwsd&filename=/doc/sjhwsd/200709281954.xml,2007年9月