Helsinki University of Technology Laboratory of Acoustics and Audio Signal Processing Espoo 2005 Report 74

STUDIES ON AUDITORY PROCESSING OF SPATIAL SOUND AND SPEECH BY NEUROMAGNETIC MEASUREMENTS

AND COMPUTATIONAL MODELING

Kalle Palomäki


Dissertation for the degree of Doctor of Science in Technology to be presented with due permission for public examination and debate in Auditorium S4, Department of Electrical and Communications Engineering, Helsinki University of Technology, Espoo, Finland, on the 17th of June 2005, at 12 o'clock noon.

Helsinki University of Technology

Department of Electrical and Communications Engineering

Laboratory of Acoustics and Audio Signal Processing

Teknillinen korkeakoulu

Helsinki University of Technology

Laboratory of Acoustics and Audio Signal Processing

P.O. Box 3000

FIN-02015 HUT

Tel. +358 9 4511

Fax +358 9 460 224

E-mail lea.soderman@hut.fi

ISBN 951-22-7716-6

ISSN 1456-6303

Otamedia Oy

Espoo, Finland 2005

ABSTRACT OF DOCTORAL DISSERTATION

Article dissertation (summary + original articles)

Title: Studies on auditory processing of spatial sound and speech by neuromagnetic measurements and computational modeling

Date of defence: 17.6.2005

This thesis addresses the auditory processing of spatial sound and speech. The work consists of two research branches: one, magnetoencephalographic (MEG) brain measurements on spatial localization and speech perception, and two, the construction of computational auditory scene analysis models, which exploit spatial cues and other cues that are robust in reverberant environments.

In the MEG research branch, we have addressed the processing of spatial stimuli in the auditory cortex through studies concentrating on the following issues: the processing of sound source location with realistic spatial stimuli, the spatial processing of speech vs. non-speech stimuli, and the processing of a range of spatial location cues in the auditory cortex. Our main findings are as follows. Both auditory cortices respond more vigorously to contralaterally presented sound, and the responses exhibit systematic tuning to the sound source direction. Responses and response dynamics are generally larger in the right hemisphere, which indicates right-hemispheric specialization in spatial processing. These observations hold over the range of speech and non-speech stimuli. The responses to speech sounds are decreased markedly if the natural periodic speech excitation is changed to a random noise sequence. Moreover, the activation strength of the right auditory cortex seems to reflect the processing of spatial cues, so that the dynamical differences are larger and the angular organization is more orderly for realistic spatial stimuli than for impoverished spatial stimuli.

In the auditory modeling branch, we constructed models for the recognition of speech in the presence of interference. Firstly, we constructed a system that uses binaural cues to segregate target speech from spatially separated interference, and showed that the system outperforms a conventional approach at low signal-to-noise ratios. Secondly, we constructed a single-channel system that is robust in room reverberation, using strong speech modulations as robust cues, and showed that it outperforms a baseline approach in the most reverberant test conditions, even though the baseline approach was specifically optimized for the recognition of speech in reverberation.

In summary, this thesis addresses the auditory processing of spatial sound and speech in both brain measurements and auditory modeling. The studies aim to clarify the cortical processes of sound localization, and to construct computational auditory models for sound segregation exploiting spatial cues and other cues that are robust in reverberant environments.

Keywords: spatial localization, auditory cortex, MEG, N1m, binaural models, CASA, missing data speech recognition

ISBN 951-22-7716-6

Helsinki University of Technology, Laboratory of Acoustics and Audio Signal Processing

Report 74 / HUT, Laboratory of Acoustics and Audio Signal Processing

This thesis was done in collaboration with the following units:

Laboratory of Acoustics and Audio Signal Processing, Helsinki University of Technology, Finland

Apperception & Cortical Dynamics (ACD), Department of Psychology, University of Helsinki, Finland

Speech and Hearing, Department of Computer Science, University of Sheffield, United Kingdom

BioMag Laboratory, Engineering Centre, Helsinki University Central Hospital, Helsinki, Finland

Acknowledgements

This thesis was undertaken at the Laboratory of Acoustics and Audio Signal Processing at the Helsinki University of Technology, in collaboration with the Department of Computer Science at the University of Sheffield, the Apperception and Cortical Dynamics unit of the Department of Psychology at the University of Helsinki, and the BioMag Laboratory of the Helsinki University Central Hospital. I wish to thank Prof. Paavo Alku, Doc. Hannu Tiitinen, Dr. Guy Brown, and Doc. Patrick May for their supervision as well as their research efforts in the studies of this thesis. I am grateful to Ville Mäkinen, Dr. Jon Barker, and Prof. DeLiang Wang for their participation in the studies in this thesis. Many thanks to Prof. Martin Cooke and Prof. Kimmo Alho for their efforts in the pre-examination of the thesis. I am especially grateful to Dr. Aki Härmä, Prof. Matti Karjalainen, and Prof. Unto Laine, as they arranged my first research project. In fact, Dr. Aki Härmä even pointed out the SPHEAR research position, which led me to Sheffield and SpandH. I wish to thank Prof. Phil Green for providing an excellent opportunity to work in the SpandH group during the years 2000-2002. I wish to thank Dr. Ville Pulkki and Juha Merimaa for their help related to spatial hearing, especially at the beginning of my thesis, and Matti Airas for his help with acoustic measurements. I had the opportunity to work and interact with people of several research units, namely the Laboratory of Acoustics and Audio Signal Processing, SpandH, and ACD. Although many more people deserve to be thanked, I want to specifically mention my office mate Dr. Hanna Järveläinen for her support during the last steps of this thesis as well as for her help in issues relating to statistics. I wish to thank our secretary Lea Söderman for her support and aid, which extend far beyond secretarial duties; to mention just one example, she arranged my housing on my return to Helsinki from Sheffield.

For their lifelong support and ever-encouraging attitude to my studies, I wish to thank my dear mother Salme and father Jaakko, who unfortunately is no longer here with us. Finally, I wish to thank my dear wife Johanna for her love, care, and continuing support.

The thesis was funded by projects of the Academy of Finland (proj. no. 1168030, 1277811) and the TMR SPHEAR European research training network, and partially supported by grants from Tekniikan edistämissäätiö, Jenny ja Antti Wihurin rahasto, Nokia-säätiö, Emil Aaltosen säätiö, and Kaupallisten ja teknillisten tieteiden edistämissäätiö.

Kalle Palomäki, Espoo, Finland

Table of Contents

Acknowledgements
Table of contents
List of articles
List of abbreviations
1 Introduction
2 Psychoacoustic background of spatial hearing and sound segregation
2.1 Spatial localization cues
2.2 Precedence effect
2.3 Spatial hearing in speech segregation
2.4 Modulation frequencies in speech segregation
3 Processing auditory space and speech in the brain
3.1 Spatial auditory processing in animal models
3.2 Spatial auditory processing in humans
3.3 Processing of speech and speech presented spatially
4 Applied research methods in brain measurements and auditory modeling
4.1 Technologies of 3D audio in brain imaging
4.2 Measuring auditory cortical responses using magnetoencephalography
4.2.1 Magnetoencephalography (MEG)
4.2.2 Event-related potential (ERP) and magnetic field (ERF)
4.3 Auditory modeling
4.3.1 Peripheral models and frequency selectivity
4.3.2 Loudness models
4.3.3 Models of binaural localization
4.3.4 Computational auditory scene analysis and missing data ASR
4.3.5 Modulation filtering in speech analysis and recognition
4.3.6 Binaural CASA processors
5 Summary of the publications and author's contribution
5.1 Author's contributions
5.2 Summary of publications
5.3 General discussion and future directions
6 Conclusions
7 Errata
8 Appendix: Additional remarks
8.1 Acoustic tubephones
8.2 Discussion about subject consistency
9 References

List of articles

P1

Palomäki K., Alku P., Mäkinen V., May P. and Tiitinen H. (2000) Sound localization in the human brain: neuromagnetic observations, NeuroReport 11(7), 1535-1538.

P2

Palomäki K. J., Tiitinen H., Mäkinen V., May P. and Alku P. (2002) Cortical processing of speech sounds and their analogues in a spatial auditory environment, Cogn. Brain Res. 14(2), 294-299.

P3

Alku P., Sivonen P., Palomäki K. J. and Tiitinen H. (2001) The periodic structure of vowel sounds is reflected in human electromagnetic brain responses, Neurosci. Lett. 298(1), 25-28.

P4

Palomäki K. J., Tiitinen H., Mäkinen V., May P. and Alku P. (2005) Spatial processing in human auditory cortex: the effects of 3D, ITD and ILD stimulation techniques. Accepted for publication in Cogn. Brain Res.

P5

Palomäki K. J., Brown G. J. and Wang D. L. (2004) A binaural processor for missing data speech recognition in the presence of noise and small-room reverberation, Speech Comm. 43(4), 361-378.

P6

Palomäki K. J., Brown G. J. and Barker J. (2004) Techniques for handling convolutional distortion with "missing data" automatic speech recognition, Speech Comm. 43(1-2), 123-142.

List of abbreviations

ANOVA Analysis of variance
ASA Auditory scene analysis
AVCN Anteroventral cochlear nucleus
BAR Binaural recordings
BF Best frequency
B&K Bruel & Kjaer
BM Basilar membrane
CASA Computational auditory scene analysis
DNLL Dorsal nucleus of lateral lemniscus
EEG Electroencephalography
ECD Equivalent current dipole
ERF Event-related magnetic field
ERP Event-related potential
FEF Frontal eye field
fMRI Functional magnetic resonance imaging
HRTF Head-related transfer function
IC Inferior colliculus
ICX External nucleus of inferior colliculus
IHC Inner hair cell
ILD Interaural level difference
ISO International standards organization
ITD Interaural time difference
LSO Lateral superior olive
Md Magnetic difference
MCE Minimum current estimation
MEG Magnetoencephalography
MGB Medial geniculate body
MMN Mismatch negativity
MR Magnetic resonance
MSO Medial superior olive
MTF Modulation transfer function
N1/N1m Negative ERP/ERF deflection at about 100 ms latency
PET Positron emission tomography
PSP Postsynaptic potential
SC Superior colliculus
SEM Standard error of mean
SNR Signal-to-noise ratio
SOC Superior olivary complex
SQUID Superconducting quantum interference device
SPL Sound pressure level
SRT Speech reception threshold
T-F Time-frequency

1 Introduction

Making sense of the surrounding space is essential for almost any species. Space perception by the visual system, although very accurate, is limited to the frontal region of space. Auditory localization, despite being less accurate than visual localization, covers all directions simultaneously, even when the object is not visible. It allows one's attention to be directed rapidly to important events, such as mates, foes, or food, in any direction. Auditory localization is equally important for a chasing predator and escaping prey, or even for a modern-day human walking in busy traffic. In humans, however, the importance of speech communication gives rise to more complex scenarios requiring spatial orientation. Spatial hearing has a dual role in speech communication. Firstly, sound localization helps the listener to direct his or her attention towards an interesting speaker. Secondly, it is known that spatial separation between the target speaker and interfering sources improves the intelligibility of the target.

The theme of this thesis relates to how the human auditory system processes space and spatially presented speech signals, and how space perception is utilized in the segregation of target speech. These themes are treated in two research branches. In the first branch, we conduct brain measurements on sound localization using magnetoencephalography (MEG). Our aim is to clarify the brain processes involved in sound localization, and in the perception and localization of speech sounds. We investigate the cortical processing of sound localization across a range of spatial directions, for speech and non-speech stimuli, and study the effects of spatial localization cues. In the second branch, we construct computational models of the auditory system in order to simulate sound localization and to segregate target speech from a noisy background. Firstly, we present a binaural approach that exploits the spatial separation between target speech and the interferer in source segregation in the presence of mild room reverberation. Secondly, we design a monaural approach that applies modulation filtering to cope with more severely reverberated speech material.

This thesis consists of three parts: one, an introductory part (Sect. 1-4) that reviews the literature relevant to both research branches and to the publications of the thesis; two, a part that describes the author's contribution to the work (Sect. 5.1) and summarizes the publications (Sect. 5.2); and three, copies of the publications of the thesis. Sect. 2 covers the basic psychophysics of sound localization and of multiple, spatially separated sound sources. Sect. 3 reviews the literature on the auditory brain processing of sound localization and, to some extent, on the processing of speech. Sect. 4 collects the methods applied in this study, covering binaural technologies, MEG, and auditory models.

2 Psychoacoustic background of spatial hearing and sound segregation

Although less accurate than visual space perception, auditory localization is still surprisingly accurate. Experiments with wideband sounds show that azimuthal localization accuracy is best in the front (around one degree), and is two and ten times less accurate behind and at the sides of the subject, respectively (Sect. 2.1 in Blauert, 1997). Localization accuracy in elevation is around 10 degrees in the median plane (Sect. 2.5.1 in Blauert, 1997). The relevant psychoacoustic background of spatial hearing and sound segregation is covered in this chapter as follows: sound localization is explained in terms of the localization cues present in the signals reaching the ears in Sect. 2.1; sound localization in rooms and the precedence effect are addressed in Sect. 2.2; sound and speech segregation are addressed in Sect. 2.3; and the perception of speech in rooms and the importance of speech modulation frequencies for speech intelligibility are addressed in Sect. 2.4. The psychoacoustics of spatial hearing is thoroughly covered in the following reviews: Yost & Gourevitch (1987), Moore (1989), Grantham (1995), Gilkey & Anderson (1997) and Blauert (1997).

2.1 Spatial localization cues

Azimuthal localization relies primarily on binaural cues, the interaural time and level differences (ITD and ILD, respectively), which are extracted by comparing the signals reaching the two ears (Sect. 2.4 in Blauert, 1997). In addition, monaural spectral cues introduced by pinna, head, and body filtering (e.g. Musicant & Butler, 1985; Wightman & Kistler, 1992; Sect. 2.3.1 in Blauert, 1997), as well as head movements (Thurlow et al., 1967; Sect. 2.5.1 in Blauert, 1997), provide cues especially for the localization of elevation. Non-acoustical cues affecting sound localization are mediated by vision (e.g. Shelton & Searle, 1980) and source familiarity (Coleman, 1962).

[Figure 1. Left panel: the median plane of the subject. Right panel: a cone of confusion to the left of the subject. Any sound source placed in a cone of confusion region produces an equal ITD between the signals received at each ear. For sound sources in the median plane, both ITD and ILD are zero.]

Because of the distance between the ears, sound waves arrive earlier at the ear closer to the sound source; this is the origin of the ITD cue. ITD is dominant in the low frequency range, below 1500 Hz. A physical explanation is that in the low frequency range the head dimensions are small in proportion to the acoustic wavelength, and therefore it is possible to phase lock to the signal. It is noteworthy that phase locking in the auditory periphery is limited to about 3 kHz in mammals, as observed in animal models (Johnson, 1980), which also sets limits on accurate ITD estimation.
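To give a sense of the magnitudes involved, the classic Woodworth rigid-sphere approximation relates source azimuth to ITD. The sketch below is illustrative only and not part of the thesis; the head radius of 8.75 cm and speed of sound of 343 m/s are assumed example values.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Frequency-independent ITD (seconds) for a rigid spherical head.

    Woodworth approximation: the interaural path difference is the
    straight-line part plus the arc around the head.
    azimuth_deg: 0 = straight ahead, 90 = directly to one side.
    """
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

for az in (0, 30, 60, 90):
    print(f"azimuth {az:2d} deg -> ITD {woodworth_itd(az) * 1e6:5.0f} us")
# azimuth 90 deg gives roughly 660 us, close to the commonly cited
# maximum ITD for an average human head.
```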

Interaural level difference (ILD) is the dominant cue in the high frequency range, above 1500 Hz, where head shadowing strongly attenuates the sound field at the ear opposite the sound source. Localization experiments with narrowband signals have demonstrated that ambiguities arise in the frequency range 1500-2000 Hz, within which neither ITD nor ILD is very effective (Stevens & Newman, 1936; Sect. 6.2 in Moore, 1989). In this region, the wavelength of sound is already so short that several cycles of the waveform fit within the distance between the ears, which makes it difficult to estimate ITD unambiguously. Within the same 1500-2000 Hz region, the head shadow is not yet effective enough to produce prominent ILD cues. Fortunately, these problems with localizing narrowband sounds do not apply to most natural sounds, such as speech, as they contain energy spread across the audible frequency range.

There are situations in which sounds from different locations cannot be discriminated by interaural differences. Consider localization in the median plane (Figure 1, left panel; Sect. 2.3 in Blauert, 1997). If reasonable symmetry of the head is assumed, an elevation shift in the median plane has very little or no effect on ITD and ILD. Near-constant ITDs are also observed within cone of confusion regions (Figure 1), whereas ILD shows some variation across frequency, mostly because the pinna and head are asymmetric between the front and back directions. The accuracy of elevation discrimination in those regions is diminished compared to azimuthal localization; in the median plane, the accuracy is around 10 degrees (Sect. 2.3.1 in Blauert, 1997). Localization in these regions exploits the ability of the hearing system to extract location cues from the direction-dependent filtering effects of the pinnae, head, and body. In fact, in the median plane, where ITD and ILD are near zero, different directions can only be discriminated from spectral differences. In addition, in the cone of confusion regions excluding the median plane, ILDs vary across frequency, which might partially explain localization (Wightman & Kistler, 1997). For the spectral cues in particular, the shape of the pinna is important (e.g. Shaw, 1997; Sect. 2.2.2 in Blauert, 1997). It modifies sound spectra mostly above 5 kHz; the direction-dependent placement of the resulting spectral notches and peaks is believed to explain the localization of elevation (e.g. Musicant & Butler, 1985; Sect. 2.3.1 in Blauert, 1997). In normal listening, human subjects use head movements to resolve direction in the cone of confusion regions (Thurlow et al., 1967; Sect. 2.3.1 & 2.5.1 in Blauert, 1997). Rotation of the head shifts the spatial image of the sound source in relation to the listener, which can be used as a cue for resolving elevation.

It has been found that ITD is the dominant cue for localizing complex sounds in the azimuth (Wightman & Kistler, 1992; Wightman & Kistler, 1997). When an ITD cue conflicted with the other localization cues in wideband stimuli, listeners judged the azimuth almost solely on the basis of ITD. This is beneficial for the localization of real-world sounds, given that they often have more energy at low frequencies. For example, in voiced speech the most prominent energy region lies within the frequency range of 100 Hz to 4000 Hz.

Among the localization cues, ITD is only weakly dependent on frequency, being slightly larger towards low frequencies. This frequency dependency, however, does not seem to account for spatial localization (Kistler & Wightman, 1992; Wightman & Kistler, 1997). In contrast, ILD is highly dependent on frequency and might even help in resolving elevation in the cone of confusion regions (Wightman & Kistler, 1997). It has also been shown that ITD in the higher frequency range can be extracted from the envelope (e.g. Henning, 1974). However, ILD dominates at high frequencies when sound with conflicting ILD and ITD is presented to subjects, as demonstrated using high-pass filtered random noise with cut-offs ranging from 2.5 to 5 kHz (Wightman & Kistler, 1992).

In summary, sound localization in the azimuth is based most importantly on the ITD and ILD cues, of which ITD is the more prominent for both wideband and low frequency sounds. Direction-dependent high frequency variation due to pinna, head, and body filtering is used particularly for resolving elevation in the cone of confusion regions. Additional important cues for resolving elevation arise from shifts of the spatial image due to head movements.
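The binaural comparison process summarized above is commonly modeled with interaural cross-correlation, the basis of the localization models reviewed in Sect. 4.3.3. The following is a minimal sketch, not the thesis implementation: it estimates a single broadband ITD as the lag maximizing the cross-correlation, whereas actual models operate per frequency channel; the 800 µs lag range is an assumed physiologically plausible limit.

```python
import numpy as np

def estimate_itd(left, right, fs, max_itd=800e-6):
    """Estimate ITD as the lag maximizing interaural cross-correlation.

    A positive result means the left-ear signal leads (source on the left).
    Lags are restricted to the physically plausible +/- max_itd range.
    """
    max_lag = int(round(max_itd * fs))
    core = slice(max_lag, len(left) - max_lag)   # region valid at every lag
    lags = np.arange(-max_lag, max_lag + 1)
    xcorr = [np.dot(left[core], right[core.start + k:core.stop + k]) for k in lags]
    return lags[int(np.argmax(xcorr))] / fs

# Quick check: delay a noise burst by 10 samples to simulate a leading left ear.
fs = 48000
noise = np.random.randn(fs // 10)
left, right = noise[10:], noise[:-10]            # right is a delayed copy
print(estimate_itd(left, right, fs))             # ~10/fs, i.e. about 208 us
```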

2.2 Precedence effect

Practically all normal listening environments (rooms, outdoor spaces, etc.) contain sound-reflecting materials. Therefore, sound reaches our ears not only directly from active sources but also via multiple reflections from the surrounding surfaces. When sound localization is considered, it appears that listeners can identify the correct direction even in the presence of reflections arriving from all around the subject: listeners seem to localize sound based on the first arriving wavefront (Sect. 3.1.2 in Blauert, 1997). The phenomenon that allows accurate localization in the direction of the direct sound is called the precedence effect (e.g. Wallach et al., 1949; Zurek, 1987; Sect. 3.1.2 in Blauert, 1997; Litovsky et al., 1999). Localization is based on the first transient if the delay of the incoming reflections is within a critical range, which is typical of reflections in rooms.

The precedence effect has often been investigated using a method based on single echoes: the direct sound and its echo are represented by temporally leading and lagging signals, respectively, played from loudspeakers placed in different directions in an anechoic space. It appears that the hearing system localizes such sound in three different phases (Sect. 3.1 in Blauert, 1997). In the first phase, known as summing localization, when the lead-lag time difference is below 1 ms, the listener hears only one fused event between the lead and lag sounds, whose perceived direction depends on the relative loudness and the time difference between the lead and lag (Pulkki, 2001). In the second phase, after 1 ms, sound events are still fused, but now the sound source appears in the direction of the first arriving wavefront. Thus, the lead sound has localization dominance (Litovsky et al., 1999) over the lag sound; here, the precedence effect plays an active role. Finally, after a critical delay called the echo threshold (Sect. 3.1.2, page 225 in Blauert, 1997), the lead-lag pair is no longer perceived as one event, but is heard as being split into two events localized in the directions of the lead and lag sounds. The echo threshold depends on stimulus duration: for single clicks it is about 5 ms, but for sounds of more complex character, such as speech or music, it can be as long as 50 ms (Litovsky et al., 1999; Table 3.2, page 231 in Blauert, 1997).
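The lead-lag paradigm described above is simple to reproduce. The sketch below builds a stereo click pair of the kind used in such experiments; it is a schematic headphone version (lead in one channel, lag in the other) rather than the loudspeaker setup of the actual studies, and the 4 ms delay is an arbitrary example within the localization-dominance regime for clicks.

```python
import numpy as np

def lead_lag_clicks(fs=44100, delay_ms=4.0, lag_gain=1.0, dur_s=0.05):
    """Stereo click pair: lead click in the left channel, a simulated
    reflection (lag) in the right channel after delay_ms.

    Rough perceptual regimes for click pairs:
      delay < ~1 ms        -> summing localization (one fused image)
      ~1 ms ... ~5 ms      -> fused, localized at the lead (precedence)
      beyond echo threshold -> two separate events
    """
    n = int(dur_s * fs)
    stim = np.zeros((n, 2))
    d = int(round(delay_ms * 1e-3 * fs))
    stim[0, 0] = 1.0          # lead click
    stim[d, 1] = lag_gain     # lagging click (the "echo")
    return stim

stim = lead_lag_clicks(delay_ms=4.0)  # within localization dominance for clicks
```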

The fact that lead-lag sounds within the echo threshold interval are fused into a single event does not mean that the lag sound is undetectable. Lead-lag sounds and lead-only sounds can be distinguished based on sound quality, timbre, and spatial extent (Sect. 3.1.2 in Blauert, 1997; Litovsky et al., 1999). In fact, the auditory system can extract information about the surrounding space, other than the direction of the sound source, from the reflections: reflections contribute to the perception of distance and spaciousness (Sect. 3.3 in Blauert, 1997).

The precedence effect is strongest for identical lead-lag pairs, and works to some extent even if the lag sound is not an exact replica of the lead (Litovsky et al., 1999). It is noteworthy that, in rooms, reflections are seldom identical copies of the direct sound: wall reflections introduce some spectral variation to the original signal, because the reflection coefficients of wall materials vary across frequency. Blauert & Divenyi (1988) showed that inhibition in the precedence effect does not work effectively for lead and lag sounds that have energy in different spectral bands. However, in the same study, the authors demonstrated that inhibition is similar for correlated and uncorrelated (independent noise process) broadband noise sounds.

It has also been found that the precedence effect exhibits so-called buildup and breakdown phases. Consider a case where the directions of the lead and lag sounds and the time interval between them are kept constant, and this lead-lag pair is repeated periodically a number of times within a trial. Even if the lead-lag interval is chosen so that the lead and lag are heard as separate events originating from their own directions at the beginning of the trial, by the end of the trial they can be fused into a single event originating from the lead's direction. Thus, during repetition of the same lead-lag sound pair, the precedence effect is adaptively built up (e.g. Thurlow & Parks, 1961; Freyman et al., 1991). However, if the lead-lag configuration is altered abruptly by changing the positions of the sounds, the previously fused lead-lag event breaks down into two events originating from their own directions (Clifton, 1987). These buildup and breakdown phases demonstrate a complex adaptation effect related to the precedence effect, and further demonstrate that the precedence effect does not originate from a hardwired neural structure (Sect. 5.4 in Blauert, 1997; Litovsky et al., 1999).

In summary, according to the precedence effect, listeners are able to localize sound in the direction of the direct sound component, despite multiple reflections reaching the subject from all around in a normal listening environment. For lead-lag sound pairs (direct sound followed by a single echo), the precedence effect operates from 1 ms up to several tens of milliseconds, depending on the type of signal. The precedence effect works most effectively if the lead and lag sounds have similar spectral content. Inhibition in the precedence effect strengthens if the same lead-lag pair is repeated in successive trials, and eventually breaks down if the configuration is altered (buildup and breakdown of the precedence effect). The precedence effect is of particular relevance to this thesis, as paper P5 introduces a new model of the precedence effect in order to improve localization in moderately reverberant spaces.

2.3 Spatial hearing in speech segregation

A common example of a complex listening scenario is the so-called cocktail party situation (Cherry, 1953; for a review see Yost, 1997), in which the listener is faced with a complex acoustic mixture of sounds. In such a situation, a human listener is still capable of orienting his or her attention to an interesting sound event, and is often able to segregate the target from the mixture.

Spatial hearing plays a dual role in the cocktail party effect. Firstly, it mediates the shift of attention to the target direction in space. Secondly, spatial separation between the target and interferer(s) helps to segregate the target sound from the acoustic mixture. When speech sources competing with other speech or sound sources are investigated, it has been found that spatial separation between the sources markedly improves the intelligibility of the target speech (e.g. Cherry, 1953; Spieth et al., 1954; Hawley et al., 1999; Hawley et al., 2004). The benefit of spatial separation for target intelligibility consists of two components: a monaural "better ear advantage", originating from the better signal-to-noise ratio at the ear closer to the target source, and a true binaural component, which causes binaural unmasking of the target (e.g. Hawley et al., 2004). Binaural unmasking improves speech intelligibility further, beyond the better ear advantage alone.

Originally, Cherry (1953) suggested that spatial hearing constitutes the main mechanism for solving the cocktail party problem. Since then, however, it has become evident that mechanisms other than spatial hearing play a more prominent part in source segregation (e.g. Bregman, 1990; Yost, 1997). Bregman (1990) explains sound segregation in terms of auditory scene analysis (ASA). Here, it is illustrative to think of the auditory signal as a chain of events, which can be shown in a two-dimensional time-frequency plot visualizing the auditory scene. According to this view, the auditory system divides sound events into segments, which are further grouped into meaningful events by higher level processes. Auditory grouping is mediated by cues that indicate a common source origin: harmonics sharing a common fundamental frequency (f0), common onset, common offset, temporal continuity, common temporal modulations, common spatial location, proximity in time or frequency, etc. (see Cooke & Ellis, 2001 for a recent review). However, the role of spatial hearing in auditory grouping has remained rather controversial. It is agreed that spatial separation between target speech and a distracter improves the intelligibility of the target, and that spatial unmasking of the target occurs when the spatial separation between the target and distracter is increased. Whether this is related to grouping or simply to masking effects, however, has recently been a topic of lively debate (Culling & Summerfield, 1995; Darwin & Hukin, 1997; Drennan et al., 2003; Edmonds, 2004).

Culling & Summerfield (1995) studied the role of ITD and ILD in the across frequency grouping of concurrent sounds. They used artificial "whispered" vowel stimuli, where each vowel was represented by two narrowband noise bursts adjusted to the first and second formant frequencies of the vowel. Two vowels, a target and a distracter, were presented laterally separated using either ITD or ILD. They found that lateral separation of the target from the distracter by ILD improved the identification of the target, but separation by ITD did not. Thus, the authors concluded that common ITD did not mediate across frequency grouping.

Darwin & Hukin (1997) complemented the observations of Culling & Summerfield (1995). In their experiment, a harmonic component was extracted from a vowel and presented with an ITD at the ear opposite to the vowel. Although laterally separated from the vowel, the harmonic was still grouped back to the vowel during simultaneous presentation. However, when the extracted harmonic was temporally pre-cued at the same ITD, perceptual segregation of the harmonic from the vowel complex occurred. Hence, ITD may contribute to across time grouping. More recently, however, Drennan et al. (2003) used the same "whispered vowel" stimuli of the original Culling & Summerfield (1995) study and demonstrated that, with sufficient training, subjects were able to use ITD in across frequency grouping, and that the benefit of spatial separation in source segregation was even more remarkable when the competing sources were presented in more natural free-field conditions.

Edmonds (2004) extended the studies conducted with isolated vowels (e.g. Culling & Summerfield, 1995; Darwin & Hukin, 1997; Drennan et al., 2003) with an intelligibility test of real continuous speech via a speech reception threshold (SRT) measurement. Figure 2 depicts one of his experimental setups. Speech and interferer were first split into high and low frequency bands. The target and interferer were then presented in three different ITD configurations: one, both on the right side ("baseline"); two, interferer on the left, target on the right ("consistent"); and three, the low and high frequency parts of the interferer placed on the left and right sides, respectively, and the low and high frequency parts of the target placed on the right and left sides, respectively ("swapped"). Comparing the "baseline" to the "consistent" and "swapped" conditions, he demonstrated that ITD separation of target and interferer improves speech intelligibility (due to the binaural masking level difference; see Moore, 1989 for a review). Moreover, he found that intelligibility did not differ between the "consistent" and "swapped" cases, which suggests that common ITD does not mediate across frequency grouping: the target was equally intelligible even though its frequency bands were divided between opposite ears (and vice versa for the interferer). Only the (constant) amount of separation between target and interferer mattered. However, a recent replication of Edmonds's (2004) experiment by Brown & Palomäki (submitted) demonstrates that intelligibility in the "consistent" case is slightly superior to that in the "swapped" case. This advantage may be related to a small benefit in across frequency grouping mediated by the common ITD. The studies of Edmonds (2004) and Brown & Palomäki (submitted) are relevant to many computational cocktail party processors: if human performance is the reference, those processors should be able to reproduce similar behavior. In Section 4.3.6 we shall discuss why this is often not the case.
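To make the "swapped" manipulation concrete, the sketch below applies opposing ITDs to the low and high bands of a signal. It is a schematic reconstruction, not Edmonds's actual procedure: the 1 kHz band split and the 500 µs ITD are assumed example values.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def apply_itd(x, fs, itd_s):
    """Return (left, right); positive itd_s delays the right ear,
    i.e. places the source on the listener's left."""
    d = int(round(abs(itd_s) * fs))
    delayed = np.concatenate([np.zeros(d), x[:len(x) - d]])
    return (x, delayed) if itd_s >= 0 else (delayed, x)

def swapped_condition(target, fs, f_cut=1000.0, itd=500e-6):
    """'Swapped'-style presentation: the target's low band is lateralized
    to one side and its high band to the other."""
    sos_lo = butter(4, f_cut, 'lowpass', fs=fs, output='sos')
    sos_hi = butter(4, f_cut, 'highpass', fs=fs, output='sos')
    lo, hi = sosfilt(sos_lo, target), sosfilt(sos_hi, target)
    lo_l, lo_r = apply_itd(lo, fs, +itd)   # low band: left-leading
    hi_l, hi_r = apply_itd(hi, fs, -itd)   # high band: right-leading
    return lo_l + hi_l, lo_r + hi_r
```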

An interesting question is why ITD is not used in across frequency processing in the human auditory system. One explanation might be that the efficient use of binaural cues in the presence of reverberation is difficult. Although spectro-temporal regions containing reliable binaural cues can be detected efficiently for localization-related processing even in the presence of reverberation (Faller & Merimaa, 2004), they may be too sparse to be used efficiently in sound segregation. It is also known that the binaural advantage in speech intelligibility observed in anechoic conditions is reduced in the presence of reverberation (Plomp, 1976). Another explanation might lie in retaining the intelligibility of spatially overlapping sources. If across frequency grouping relied heavily on binaural cues, it is possible that they would override some important monaural cues (common f0, common onset, etc.). This in turn might lead to misjudgments about the source origin of spectro-temporal regions in cases where the spatial locations of sources overlap.

In summary, it has been noted that spatial separation of the target and interfering signals improves the segregation of the target out of a complex acoustic mixture. How this process is carried out in the human brain is not altogether clear. The current leading opinion is that ITDs (the main cue for localization) do not mediate grouping across frequency, but might be useful in grouping across time; other cues, such as a common fundamental frequency, might be more powerful in across frequency grouping. However, as conflicting evidence exists (Drennan et al., 2003), the issue still requires further clarifying studies. The reason why binaural cues are not used in across frequency grouping possibly originates from their compromised effectiveness in the presence of reverberation. Also, in order to avoid ambiguities in the case of spatially overlapping sources, it may be beneficial to emphasize monaural cues over binaural ones in across frequency grouping.

2.4 Modulation frequencies in speech segregation

In addition to affecting the location cues in auditory signals, room reverberation also tends to smooth the spectro-temporal structure of speech by filling the gaps between strong speech regions (see Figure 3, left panel). Human listeners appear to have a remarkable tolerance to reverberation, especially in one-talker situations. Nabelek and Robinson (1982) showed that speech recognition accuracy is 99.7% in the anechoic case and degrades to 97.0%, 92.5%, and 88.7% for reverberation times of 0.4 s, 0.8 s, and 1.2 s, respectively. The effect of reverberation becomes more disruptive in the presence of multiple competing voices (Culling et al., 2003).

Tolerance of slowly varying interference, such as stationary noise or reverberation, has been explained by the capability of human hearing to focus on temporal modulation characteristics unique to speech, which are at their strongest at modulation frequencies roughly between 1 and 16 Hz (Houtgast & Steeneken, 1985; Drullman et al., 1994a, 1994b). The most important modulation frequency range, 3-4 Hz, reflects the syllable rate of speech originating from the articulatory movements (Houtgast & Steeneken, 1985), which in turn convey the linguistic message in speech. Modulations faster than those related to articulation include those related to vocal fold vibration (e.g. the fundamental frequency). The fundamental frequency itself does not carry articulatory information, but is rather a carrier signal, which can be used for varying intonation or expressing emotion. Modulations slower than those of articulation often originate from the environment: a transmission line, reverberation, or an active noise source, such as traffic on busy roads. Similarly, slow modulations of the speech spectrum are important for speech intelligibility, as they carry information about formants, which are crucial in determining phoneme identities. Again, slow modulations such as spectral tilts do not originate from articulation but are affected by, for example, transmission line characteristics.
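A modulation filter that retains this speech-dominant range is straightforward to sketch. The single-band version below is illustrative only: the systems of this thesis (cf. Sect. 4.3.5 and P6) operate on the envelopes of a multi-channel auditory filterbank, and the 1-16 Hz band edges simply follow the range cited above.

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def modulation_bandpass(x, fs, lo_hz=1.0, hi_hz=16.0):
    """Band-pass the amplitude envelope of x to the modulation range where
    speech energy dominates (roughly 1-16 Hz, strongest near 3-4 Hz)."""
    envelope = np.abs(hilbert(x))                      # amplitude envelope
    sos = butter(2, [lo_hz, hi_hz], 'bandpass', fs=fs, output='sos')
    return sosfiltfilt(sos, envelope)                  # zero-phase filtering
```

Applied to each channel of a filterbank representation, such filtering emphasizes the syllable-rate fluctuations of speech while attenuating both the very slow modulations contributed by reverberation and the faster fluctuations unrelated to articulation.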
