
A Study on Gestural Interaction with a 3D Audio Display

Georgios Marentakis and Stephen A. Brewster

Glasgow Interactive Systems Group

Department of Computing Science

University of Glasgow

Glasgow, G12 8QQ, UK

{georgios,stephen}@dcs.gla.ac.uk

Abstract

The study reported here investigates the design and evaluation of a gesture-controlled, spatially-arranged auditory user interface for a mobile computer. Such an interface may provide a solution to the problem of limited screen space in handheld devices and lead to an effective interface for mobile/eyes-free computing. To better understand how we might design such an interface, our study compared three potential interaction techniques: head nodding, pointing with a finger, and pointing on a touch tablet to select an item in exocentric 3D audio space. The effects of sound direction and interaction technique on the browsing and selection process were analyzed. An estimate of the size of the minimum selection area that would allow efficient 3D sound selection is provided for each interaction technique. Browsing using the touch tablet was found to be more accurate than the other two techniques, but participants found it significantly harder to use.

1. Introduction

Designing a user interface for a handheld device to be used on the move is a challenging task. The lack of screen space for information display, in combination with the disturbances incurred by walking, makes most of the techniques used in desktop user interface design problematic. Anyone who has tried to read a piece of text on a handheld computer while sitting in a taxi, or to target a menu item while walking, can verify that this is a difficult task.

We are taking an alternative approach to interface design for mobile devices by creating multimodal interfaces based on sound and gestures. Multimodal interfaces allow the user to use multiple senses to interact with a mobile computer. An objective of our work is to use the human senses so that they complement each other. No sense can replace all of the others, and each can outperform the rest for certain tasks. For example, listening to text is much more efficient than reading it when walking, but performing corrections and editing the result is done more efficiently using the visual sense.

The study reported here examines the potential of designing an interface based on the auditory sense for information display and the use of gestures for control. Moreover, three-dimensional (3D) sound is used as it enables better separation between multiple sound sources and increases the information content of an audio display. It also allows the spatial nature of the audio space to be used, which we hope will be as beneficial as the spatial display of information in a Graphical User Interface (GUI).

The spatial aspects of our auditory sense have been little explored in human-computer interaction. The ability of the auditory system to separate and apply focus to a sound source in the presence of others (commonly known as the 'Cocktail Party' effect [1]) is very helpful for interface design. It implies that simultaneous streams of information can be presented, with users choosing to focus on what is most important (just as occurs visually in GUIs). This phenomenon is greatly enhanced if the sources are spatially separated, which suggests the use of three-dimensional sound in auditory user interface design. Other interesting properties of audition include omni-directionality and persistence.

Gestures have the potential to be effective input techniques when used on the move because they do not require visual attention (as do most current mobile input techniques such as pens or soft keyboards). Our kinaesthetic system allows us to know the position of our limbs and body even though we cannot see them. This means that for a mobile application the user would not need to look at his/her hands to provide input; visual attention could remain elsewhere, for example on navigating the environment.


Fig. 1. Example of a gesture controlled 3D auditory user interface. A range of different audio sources are presented around a listener and they can be selected using a gesture.

As can be seen in Figure 1, we are planning to build a 3D audio system where the user will be able to monitor a number of tasks simultaneously, discriminating between foreground and background ones and interacting with them using gestures. The user will hear a range of different sounds but will be able to tune in to the one that is most important, selecting items and interacting with them using gestures. The sound locations in this study are not truly three-dimensional. We place sounds on a plane around the user's head at the height of the ears, to avoid problems related to elevation perception. This results in a 2.5D planar soundscape.

2. Previous Work on Auditory and Gestural Interfaces for Mobile Devices

Applications of audio in user interface design have been examined by many researchers. Gaver [8] introduced the notion of Auditory Icons in user interface design. Auditory Icons are based on the notion of everyday listening and have been used in systems such as the SonicFinder and the ARKola system [15]. Blattner et al. [2] proposed designing audio displays based on structured musical listening, resulting in the notion of Earcons, which Brewster [5] examined and showed to be usable.

The notion of an audio window system was introduced by Cohen and Ludwig [7]. Cohen also introduced the concept of using 3D sound to increase the auditory display space and proposed simple gestural interaction with 3D sound for input [6]. According to Cohen, sounds are positioned in the space around the user and a mapping between the sounds and the elements of the interface is performed. Users can subsequently interact with the sounds by pointing at, pitching, catching and throwing them. By using these interaction techniques users can organize the system so that it suits their needs. Another idea developed by Cohen is an audio pointer, used as an aid in the cluttered audio space to assist localization and help the user disambiguate the current position in relation to the positions of the sounds in the display. Cohen also introduced the concept of 'filtears': sounds change slightly, as a result of filtering, when in different states such as selected, caught, etc. This cue is designed to help the user understand the state of the display elements while interacting with them.

Another attempt to construct a system based around spatialised audio was Nomadic Radio by Sawhney and Schmandt [14]. It is targeted primarily at messaging. It uses speech recognition and synthesis to allow the user to communicate with and receive feedback from the system, and 3D audio to enhance simultaneous listening and conferencing. Another interesting aspect of this application is that it works with loudspeakers mounted on the user's shoulders and a directional microphone on the chest, so the user can listen to his/her real audio environment while interacting with the system. The system also uses a space-to-time metaphor to position different messages around the user depending on their time of arrival. It works with a limited set of commands that can be recognized by the speech recognizer.

Brewster et al. [4] tested a three-dimensional gesture-controlled audio display on the move. They used an auditory pie menu centred on the head of the user and compared fixed-to-the-world versus fixed-to-the-user sound presentation. They found that fixed-to-the-user sound presentation performs better in terms of the time required to perform tasks as well as the walking speed users could maintain. In another study, by Pirhonen et al. [12], gestural control of an MP3 audio player was found to be faster and less demanding than the usual stylus-based interaction when on the move.

Goose et al. [9] presented a system using 3D audio, earcons and text-to-speech for browsing the WWW. Finally, Savidis et al. [13] used a non-visual 3D audio environment to allow blind users to interact with standard GUIs. Different menu items were mapped to different locations around the user's head.

The ideas in the literature shape a framework for working with sound in a gesture-controlled 3D audio display. Speech control has been used to control a 3D audio display; however, it is known to require a silent environment to operate, requires users to remember the command repertoire, and can be indiscreet. Gesture control seems a more feasible solution for systems to be used on the move and in a social context. Cohen, as well as other researchers, has proposed designs for 3D audio interface development. However, with the exception of [4], no formal evaluation of these ideas was done. We believe that, given the ambiguity that can occur in such interfaces, further empirical research is necessary to allow us to design 3D audio interfaces in a formal way.

3. Three-Dimensional Audio Issues and Definitions

Designing a user interface based on 3D sound and controlling it by gestures poses a number of questions that must be answered before successful interfaces can be created. When asking people to locate a sound, there is a certain amount of ambiguity in their answers. This ambiguity, called Localization Blur [3], has been measured for listeners hearing sounds from different locations in space and has been shown to be bounded (for a full review see [3]). As found in Blauert [3], localization blur under well-controlled conditions can range from ±3.6° in the frontal direction, through ±10° in the left/right directions, to ±5.5° to the back of a listener. Localization blur also depends on the position of the sound source and its spectral content. Virtual sound positioning using headphones is realized using HRTF filtering [3]. Head Related Transfer Functions (HRTFs) capture the frequency response of the path between a sound source and the listener's tympanic membrane. These functions are usually estimated experimentally using a dummy head and torso. By filtering a sound signal with these functions, directional characteristics can be applied to it. However, problems related to non-individualized HRTFs (using a set of filters not created from your own ears), HRTF interpolation and reproduction reliability affect the quality of the result, so that performance is commonly poorer than for real-world listening.

In the light of these facts, it is interesting to try to define what we mean by asking a person to interact with a spatially positioned sound source, utilizing cues such as the source's direction. It is necessary to associate a certain area of the display with each of its elements. This mapping is not obvious, as it is in graphical displays, since a person cannot judge exactly where the sound source is located or what its dimensions are. For example, consider a setup where non-overlapping sounds are presented around the user in the horizontal plane. In this case, we could map an angle interval to each display element. Any interaction that occurs in this interval could be mapped to a specific display element positioned at the centre of the interval. By estimating this interval, a design principle is obtained that can be used to partition the audio space. The estimation of such quantities can be problematic, though, due to the unfamiliarity of many users with the sound localization task as well as with virtual 3D sound environments. Both with real sound sources and with virtual ones, untrained subjects respond with great variation to questions about the direction of a sound source.
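The partitioning idea above can be sketched in code: divide the horizontal plane into per-element angle intervals and map a pointing direction to the element whose interval contains it. The function name and the fixed one-sided interval are illustrative assumptions, not the paper's implementation.

```python
def nearest_element(pointing_deg, element_dirs_deg, half_interval_deg):
    """Map a pointing direction to a display element, or None.

    element_dirs_deg: centre directions of the sound sources (0 deg =
    straight ahead, increasing clockwise). half_interval_deg: the
    one-sided selection interval allowed around each source.
    """
    for centre in element_dirs_deg:
        # Smallest signed angular difference, wrapped into (-180, 180].
        diff = (pointing_deg - centre + 180.0) % 360.0 - 180.0
        if abs(diff) <= half_interval_deg:
            return centre
    return None
```

With eight sources every 45° and a 4° one-sided interval, a pointing direction of 92° would select the source at 90°, while 100° would fall outside every interval and select nothing.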

Localization accuracy can also be improved by using feedback. Feedback could assist the whole localization procedure by guiding the user towards the source and by reassuring the user that he/she is on the target area, thus making the selection process more effective. It could help overcome the poorer localization that occurs with virtual 3D sound, allowing it to be used effectively in a user interface.

Two design alternatives are egocentric versus exocentric positioning of the sound sources. Egocentric, or fixed-to-the-listener, sources can be localized faster but less accurately, due to the absence of active listening. By active listening we refer to the process of disambiguating sound direction by small head movements. Active listening enhances localization accuracy but results in computationally intensive updating of the sound source positions (which may be a problem on a lower-powered mobile device) as well as an increase in the time required for a person to localize a sound stimulus. This is because active listening involves moving and converging towards the target sound using the information provided by the updated sound scene. There is thus a trade-off between localization accuracy and the time required to make a selection when deciding between fixed-to-the-listener and fixed-to-the-world sound sources. We chose the better accuracy of exocentric, or fixed-to-the-world, sources to overcome the limitations of the non-individualized HRTFs we used.

A key issue in 3D audio design is the number of sources that can be presented simultaneously. It has been shown that human performance degrades as the number of audio display elements increases [11] when sounds stem from the same point in the display. Spatial separation, however, forms a basic dimension in auditory stream segregation and thus can possibly increase the number of sources users can deal with. The study we present here uses just one sound source, as we wanted to gain an idea of selection angles in the simplest case before we move on to more sophisticated sound designs later in our research.

To handle the ambiguity in the aforementioned tasks, we decided to use adaptive psychophysical methods. Adaptive methods are characterized by the fact that a stimulus is adjusted depending on the course of an experiment. They result in measures of performance on psychophysical tasks as a function of stimulus strength or other characteristics. The result constitutes what is called a psychometric function [10]. The psychometric function provides fundamental data for psychophysics, with the abscissa being the stimulus magnitude and the ordinate measuring the subjective response. One commonly used psychophysical method is the Up-Down method. Up-Down procedures work by setting the stimulus to a certain level at the beginning of an experiment and then decreasing or increasing the stimulus based on the observation of a specific pattern in the subject's response. The event that occurs when the direction of stimulus change is reversed is called a reversal. Up-Down methods that decrease the stimulus after a valid answer and increase the stimulus after an invalid answer converge to the 50% point of the associated psychometric function. A point of this function that corresponds to 50% implies that at this stimulus level, 50% of the answers would be expected to be 'valid'. By altering the rule of stimulus change, different points of the psychometric function can be estimated. However, full sampling of the function is often impossible due to the large number of experimental trials required.
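A simple Up-Down track can be sketched as follows. This is a minimal illustration, not the authors' code; it implements the rule used later in the study, where the level decreases after two consecutive valid answers and increases after one invalid answer, so the track converges on a fixed upper point of the psychometric function (quoted as 67% in the text).

```python
def two_down_one_up(respond, start, step, n_reversals=6):
    """Run a two-down one-up staircase and return the reversal levels.

    respond(level) -> True for a 'valid' answer at this stimulus level.
    The run stops after n_reversals reversals; averaging the reversal
    levels gives a crude estimate of the convergence point.
    """
    level, correct_in_row, last_dir = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_in_row += 1
            if correct_in_row < 2:
                continue                       # wait for a second valid answer
            correct_in_row, direction = 0, -1  # two valid answers: harder
        else:
            correct_in_row, direction = 0, +1  # one invalid answer: easier
        if last_dir and direction != last_dir:
            reversals.append(level)            # direction change = a reversal
        last_dir = direction
        level = max(0.0, level + direction * step)
    return reversals
```

With a deterministic observer that answers validly whenever the level is at least 5, a track started at 10 with step 2 oscillates between 4 and 6, so the reversal levels bracket the observer's threshold.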

4. Experiment

An experiment was designed to answer some fundamental questions about the design of audio and gestural interfaces, in particular: what is the minimum display area needed for the effective selection of a sound source, and which selection technique is the most accurate? We estimated the angle interval that would result in 67% of a user's selections being on target. To do this we used an adaptive psychophysical method, more specifically a two-down one-up method (for a review of adaptive psychophysical methods see [10]). We investigated three different browsing and selection gestures that could be used to find items in a soundscape and select them. We used head/hand tracking to update the soundscape in real time to improve localization accuracy.

The three browsing gestures were: browsing with the head, browsing with the hand, and browsing using a touch tablet. These gestures differ with respect to how common they are in everyday life. The first is the normal way humans perform active listening, with the position of the sound being updated as the user's head moves, so it should be very easy to perform. The second is more like holding a microphone and moving it around a space to listen for sounds. The location of the sounds in the display is updated based on the direction of the right index finger; direction is inferred from a 2D vector defined by the position of the head and the position of the user's index finger. The third gesture can be thought of as an extreme, in the sense that it cannot be mapped to a real-world case. The user moves a stylus around the circumference of a circle on a tablet (the centre of the tablet marks the centre of the audio space) and the position of the sound source is determined by the stylus direction with respect to the centre of the tablet. In early pilot testing this type of sound positioning proved to be confusing if a user started a selection from the lower hemisphere. This was because sounds moved as if the participant was looking backwards, although the participant was actually looking forwards. For this reason, we decided to reverse left and right when the user began browsing in the lower hemisphere. By doing this, the optimal path to the next sound could always be found by moving on the circle towards the direction in which the sound cue was perceived to be stronger.
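The tablet mapping can be sketched as follows, assuming the tablet reports stylus coordinates relative to its centre; the left/right mirroring rule is our reading of the paper, not its source code.

```python
import math

def stylus_direction(x, y, started_lower):
    """Sound-scene direction from a stylus position on the tablet (sketch).

    (x, y) is the stylus position relative to the tablet centre, with +y
    towards the 'front' of the audio space. If browsing started in the
    lower hemisphere, left and right are mirrored so that moving the
    stylus towards the stronger cue still converges on the target.
    """
    # atan2(x, y) gives a compass-style angle: 0 deg = front, clockwise.
    angle = math.degrees(math.atan2(x, y)) % 360.0
    if started_lower:
        angle = (360.0 - angle) % 360.0  # mirror left/right
    return angle
```

For instance, a stylus directly to the right of the centre maps to 90° normally, but to 270° when the mirroring applies.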

The selection gestures were: nodding with the head, moving the index finger as if clicking a non-existent mouse button, and clicking a button available on the side of the stylus to indicate selection. In this experiment, three combinations of the above were examined: browsing with the head and selecting by nodding, browsing with the hand and selecting by gesturing with the index finger, and browsing with the pen on the tablet and selecting by clicking.

4.1 Sound Design and Apparatus

The aim of the experiment was to look at how the minimum angle interval that allows efficient selection of an audio source varies with respect to the direction of the sound event and the interaction technique used. We used a single target sound placed in one of eight locations around the user's head (every 45°, starting from 0° in front of the user's nose) at a distance of two meters. This stimulus was a 0.9-second broadband electronic synthesizer sound, repeated every 1.2 seconds.

We used very simple audio feedback to indicate that the user was within the target region and could select the sound source. This was a short percussive sound that was played repeatedly while the user was 'on target' (i.e. within the current selection region) to assist each user in localizing the sound. It was played from the direction of the target sound. Sounds were played via headphones and spatially positioned in real time using the HRTF filtering implementation from Microsoft's DirectX 9 API. Sound positions were updated every 50 msec.

Fig. 2. A participant making a selection in the hand pointing condition.

To perform gesture recognition and finger tracking we used a Polhemus Fastrak to get position and orientation data, with two sensors (see Figure 2). One sensor was mounted on top of the headphones to determine head orientation and allow us to recognize the nod gestures. A second sensor was mounted on top of the index finger to determine the orientation of the hand relative to the head and to recognize the clicking gesture in the hand condition. A Wacom tablet was used for the tablet condition. We detected nodding and clicking by calculating velocity from the position data.
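A velocity-threshold detector of the kind described could be sketched as below; the threshold and sampling interval are illustrative assumptions, since the paper only states that nodding and clicking were determined by calculating velocity from position data.

```python
def detect_gesture(positions, dt, v_threshold):
    """Flag a nod/click when sensor speed exceeds a threshold (sketch).

    positions: successive (x, y, z) tuples from the tracker, sampled
    every dt seconds. Returns True as soon as the speed between two
    consecutive samples exceeds v_threshold.
    """
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        # Euclidean displacement over one sample interval, divided by dt.
        speed = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5 / dt
        if speed > v_threshold:
            return True
    return False
```

A real detector would also need to debounce (so one nod does not register as several selections) and to distinguish the nod axis from ordinary head turns, which this sketch ignores.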

4.2 Experimental Design and Procedure

The experiment used a two-factor within-subjects design, with each participant using each of the three interaction techniques in a counterbalanced order. There were two independent variables: sound location (eight levels) and interaction technique (three levels). The dependent variables were deviation angle from target and effective selection angle. Participants were also asked to rate the three interaction techniques used for browsing and selecting on a scale from one to ten with respect to how comfortable and how easy to use they found them. Our hypothesis was that the effective selection angle would be affected by interaction technique, with no effect of location, because participants always faced the targets when selecting them.

Twelve participants took part: five females and seven males with ages ranging from 19 to 30.

The participant's task was to browse the soundscape until the sound was in front and then select the target sound using the interaction techniques described. The target sound repeated until the participant performed a selection. Upon selection, the stimulus was presented in a different location, chosen randomly from the set of available positions. The whole process was repeated until the up-down methods for all positions converged. According to the up-down rule the effective selection angle was varied between trials; it was reduced after two on-target selections and increased after one off-target selection. The step was initially 2° but was halved to 1° after the third reversal occurred. It should be noted that participants were unaware of this process; they were instructed to perform selections based only on audio feedback and localization cues.

The experiment lasted approximately one hour. Participants stood wearing the headphones and tracker. They could turn around and move/point as they wished and were given a rest after each condition. The experiment could not be conducted in a fully mobile way with users walking (as in previous studies such as [4]) due to the tracking technology needed for gesture recognition; participants had to stay within range of the Polhemus receiver. The results may therefore be different if the techniques were used in a fully mobile setting, but they will indicate if any of them are usable and should be taken further. Participants were trained for a short period before being tested in each condition to ensure they were familiar with the interaction techniques. They performed eight selections before embarking on the experiment. Prior to testing, participants' localisation skills were checked to rule out hearing problems and to familiarise them with the sound signal they would hear. During this 3D sound training, participants were asked to indicate verbally the direction from which they perceived the sound source to be coming. The experimenter subsequently corrected them if they were wrong and tried to direct their attention to the relevant cues.

5. Results

The up-down method was expected to converge on the point of the associated psychometric function where 67% of the selections would be on target. To estimate this point we averaged the angle intervals as these were updated by the up-down rule. Averaging included only the angle intervals that occurred after the second reversal.
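The estimate described here can be sketched as a small helper; the names are ours, and the track data would come from the up-down procedure described earlier.

```python
def estimate_selection_angle(track_levels, reversal_indices):
    """Estimate the effective selection angle from a finished track (sketch).

    track_levels: the angle interval after every trial.
    reversal_indices: trial indices at which the up-down direction
    reversed. Following the text, only levels from the second reversal
    onwards enter the average.
    """
    if len(reversal_indices) < 2:
        raise ValueError("track did not converge: fewer than two reversals")
    tail = track_levels[reversal_indices[1]:]
    return sum(tail) / len(tail)
```

Discarding everything before the second reversal drops the initial descent of the staircase, so the average reflects only the region where the track is oscillating around its convergence point.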

A 3x8 two-factor ANOVA was performed to examine whether sound location and interaction technique affected the effective selection angle. Sound location was not found to have a significant main effect (F(2.314, 77) = 2.241, p = 0.121). However, there was a significant main effect for interaction technique (F(2, 22) = 10.777, p = 0.001). There was no interaction between location and technique. Pair-wise comparisons using Bonferroni confidence interval adjustments showed that the tablet condition was significantly more accurate than the other two techniques, but no significant differences were found between the hand and head. Figure 3 shows the mean effective angle intervals for the three interaction techniques with respect to the direction of the sound. These results define the one-sided interval around a source. To give an example of how these data could be applied: if an exocentric 3D audio user interface (enabled with active listening), using audio feedback and controlled by a stylus on a touch tablet, were developed, the designer should allow at least 4° on each side of a sound positioned at 90° relative to the front of the user for the user to be able to select it.

Fig. 4. Effective selection angle for each sound direction.

The deviations of the users' selections from target were also analyzed. Ninety measurements for all different directions were analyzed. A 3x8 two-factor ANOVA showed a significant main effect for interaction technique (F(2, 192) = 7.463, p = 0.001). Direction also had a significant main effect (F(7, 672) = 7.987, p = 0.001). There was a significant interaction between technique and direction (F(14, 1344) = 7.996, p = 0.001). Pair-wise comparisons using Bonferroni confidence interval adjustments showed that the tablet condition was significantly better than the others, but there was no significant difference between head and hand. With respect to the direction of the sound event, direction 225° was significantly different from directions 0°, 45°, 90°, 135°, 270° and 315°, and direction 180° was different from 45°, 270° and 315°.

Fig. 6. Mean deviation from target versus sound direction.

As mentioned, each participant was asked to rate each of the interaction methods in terms of how easy and how comfortable he/she found them to be, on a scale from 1 to 10. Figure 5 shows the means of the results for ease of use. An analysis of variance showed interaction method to be a significant factor (F(36) = 7.386, p = 0.002). Bonferroni t-tests verified the tablet to be significantly harder to use, but showed no statistical difference between hand and head.

Fig. 13. Mean ease of use ratings for each interaction technique. Fig. 14. Mean comfort ratings for each interaction technique.

A similar analysis of how comfortable the use of the three devices was showed no significant difference between devices. Figure 7 shows comfort means for the three interaction methods. It should be noted that participants performed a large number of selections using the three interaction methods, to allow the up-down methods to converge in the three different conditions and the eight different sound positions. In that sense, when observing the graphs, absolute values should be interpreted carefully. However, the ratings of the three devices relative to each other can be used to infer how they are ordered with respect to ease of use and comfort.

6. Discussion

The results of the study showed that interaction with a 3D audio source can be done effectively in the presence of localization feedback. They also showed that novel methods of browsing can be as effective (and even more effective) in locating sounds than 'natural' ones in the presence of feedback. Users were able to perform active listening using the tablet and the hand without any particular difficulty. It was also surprising that they could perform the active listening operation more accurately when using the tablet than when using their heads. This can be explained in terms of the resolution that the three different mechanisms provide: a stylus-controlled touch tablet allows a much smaller minimum possible displacement than the head or the hand of a person. By constructing histograms of the deviation data we verified that the results of the up-down procedure would indeed allow 67% of the selections to be on target. It should be mentioned, however, that more reversals would have produced more accurate results. This was not possible since we tried to maintain a within-subjects design and keep the experiment duration in the order of one hour to avoid effects caused by fatigue. Effective selection angles are likely to reduce with practice and improved feedback design.

When considering the three interaction methods, one would not expect the direction of sound to be a significant factor in the results of this study. This is due to the active listening operation; that is, users selected a sound when it was in front of them. This was verified in the effective angle case, where location was not found to be a significant factor. However, in the deviation analysis, certain angles were significantly different from others, mostly in the direction of 225°. The reason for this difference lies in the mechanics of the browsing and selection modalities. A closer look at the graphs reveals that the technique that caused this difference was browsing by hand. As was observed during testing, some right-handed participants found it difficult to point to that location if they had not turned their bodies first (they had to reach around their body, causing them to stretch and reducing the accuracy of their selections). A significant number of participants indeed tried to point without turning their bodies, which influenced the accuracy of the browsing and selection processes.

Analyzing how the ease of use ratings are ordered, we see that users found browsing the sound space equally easy using either the head or the hand. The touch tablet, however, although more accurate, was not rated highly. This can be associated with the unnaturalness of the browsing process. In the other two cases, participants either used a natural process for browsing the space, such as moving their heads, or simulated one by moving their hand in synchrony with their head.

When considering the effective angles, we can observe that if accuracy were the only factor to be taken into account, an audio user interface could be constructed having all eight sound locations, and possibly more. Our next study will investigate the presentation of multiple sounds and the design of a more sophisticated soundscape, such as would be needed for a real application of a wearable device based around 3D sound and gestures. If studies show that listeners cannot use sounds from eight locations, then we can increase the selection angles for our sound sources, which will further increase selection accuracy.

7. Conclusions

In this paper a study on gestural interaction with a sound source in the presence of feedback was presented. Three different gestures for browsing and selecting in a 3D soundscape were examined and their effectiveness in terms of accuracy was assessed. Browsing and selecting using a touch tablet proved to be more accurate than using a hand or a head gesture. However, browsing and selecting using the hand or the head were found to be easier and more comfortable by the users. Effective selection angles that would allow efficient selection were estimated for each interaction technique and for eight sound locations around the user, using an adaptive psychophysical method. The results show that these interaction techniques were effective and could be used in a future mobile device to provide a flexible, eyes-free way to interact with a system.

Acknowledgements

This study was supported by the EPSRC funded Audioclouds project grant number GR/R98105.

References

1. Arons, B., A Review of the Cocktail Party Effect. Journal of the American Voice I/O Society, 1992. 12: p. 35-50.

2. Blattner, M. M., Sumikawa, D. A., and Greenberg, R. M., Earcons and Icons: Their Structure and Common Design Principles. Human-Computer Interaction, 1989. 4(1): p. 11-44.

3. Blauert, J., Spatial Hearing: The Psychophysics of Human Sound Localization. 1999: The MIT Press.

4. Brewster, S., Lumsden, J., Bell, M., Hall, M., and Tasker, S. Multimodal 'Eyes-Free' Interaction Techniques for Wearable Devices. In ACM CHI, 2003. Fort Lauderdale, FL: ACM Press, Addison-Wesley. p. 463-480.

5. Brewster, S. A., The design of sonically-enhanced widgets. Interacting with Computers, 1998. 11(2): p. 211-235.

6. Cohen, M., Throwing, pitching and catching sound: audio windowing models and modes. International Journal of Man-Machine Studies, 1993. 39: p. 269-304.

7. Cohen, M. and Ludwig, L., Multidimensional Audio Window Management. International Journal of Man-Machine Studies, 1991. 34: p. 319-336.

8. Gaver, W. W., The SonicFinder: An Interface that uses Auditory Icons. Human-Computer Interaction, 1989. 4: p. 67-94.

9. Goose, S. and Moller, C. A 3D Audio Only Interactive Web Browser: Using Spatialization to Convey Hypermedia Document Structure. In 7th ACM International Conference on Multimedia, 1999. Orlando, FL: ACM Press. p. 363-371.

10. Leek, M. R., Adaptive procedures in psychophysical research. Perception & Psychophysics, 2001. 63(8): p. 1279-1292.

11. McGookin, D. K. and Brewster, S. A. An Investigation into the Identification of Concurrently Presented Earcons. In ICAD 2003. Boston, MA. p. 42-46.

12. Pirhonen, A., Brewster, S., and Holguin, C. Gestural and audio metaphors as a means of control for mobile devices. In ACM CHI, 2002. Minneapolis, MN: ACM Press. p. 291-298.

13. Savidis, A., Stephanidis, C., Korte, A., Rispien, K., and Fellbaum, C. A generic direct-manipulation 3D auditory environment for hierarchical navigation in non-visual interaction. In ACM ASSETS '96, 1996. Vancouver, Canada: ACM Press. p. 117-123.

14. Sawhney, N. and Schmandt, C., Nomadic Radio: Speech and Audio Interaction for Contextual Messaging in Nomadic Environments. ACM Transactions on Computer-Human Interaction, 2000. 7(3): p. 353-383.

15. Gaver, W. W., Smith, R. B., and O'Shea, T. Effective Sounds in Complex Systems: The ARKOLA Simulation. In ACM CHI, 1991. New Orleans, LA: ACM Press. p. 85-90.
