
Learning Strategies for Open-Domain Natural Language Question Answering

Eugene Grois, David C. Wilkins, Dan Roth
Beckman Institute, University of Illinois
405 N. Mathews Avenue, Urbana, IL 61801

Abstract

We present an approach to automatically learning strategies for natural language question answering from examples composed of textual sources, questions, and answers. Our approach formulates QA as a problem of first-order inference over a suitably expressive, learned representation. This framework draws on prior work in learning action and problem-solving strategies, as well as on relational learning methods. We describe the design of a system implementing this model in the framework of natural language question answering for story comprehension. Finally, we compare our approach to three prior systems, and present experimental results demonstrating the efficacy of our model.

1 Introduction

This paper presents an approach to automatically learning strategies for natural language question answering from examples composed of textual sources, questions, and answers. Our approach is focused on one specific type of text-based question answering known as story comprehension. Most TREC-style QA systems are designed to extract an answer from a document contained in a fairly large general collection [Voorhees, 2003]. They tend to follow a generic architecture, such as the one suggested by [Hirschman and Gaizauskas, 2001], that includes components for document pre-processing and analysis, candidate passage selection, answer extraction, and response generation. Story comprehension requires a similar approach, but involves answering questions from a single narrative document. An important challenge in text-based question answering in general is posed by the syntactic and semantic variability of question and answer forms, which makes it difficult to establish a match between the question and answer candidate. This problem is particularly acute in the case of story comprehension due to the rarity of information restatement in the single document.

Several recent systems have specifically addressed the task of story comprehension. The Deep Read reading comprehension system [Hirschman et al., 1999] uses a statistical bag-of-words approach, matching the question with the lexically most similar sentence in the story. Quarc [Riloff and Thelen, 2000] utilizes manually generated rules that select a sentence deemed to contain the answer based on a combination of syntactic similarity and semantic correspondence (i.e., semantic categories of nouns). The Brown University statistical language processing class project systems [Charniak et al., 2000] combine the use of manually generated rules with statistical techniques such as bag-of-words and bag-of-verb matching, as well as deeper semantic analysis of nouns. As a rule, these three systems are effective at identifying the sentence containing the correct answer as long as the answer is explicit and contained entirely in that sentence. They find it difficult, however, to deal with semantic alternations of even moderate complexity. They also do not address situations where answers are split across multiple sentences, or those requiring complex inference.

Our framework, called QABLe (Question-Answering Behavior Learner), draws on prior work in learning action and problem-solving strategies [Tadepalli and Natarajan, 1996; Khardon, 1999]. We represent textual sources as sets of features in a sparse domain, and treat the QA task as behavior in a stochastic, partially observable world. QA strategies are learned as sequences of transformation rules capable of deriving certain types of answers from particular text-question combinations. The transformation rules are generated by instantiating primitive domain operators in specific feature contexts. A process of reinforcement learning [Kaelbling et al., 1996] is used to select and promote effective transformation rules. We rely on recent work in attribute-efficient relational learning [Khardon et al., 1999; Cumby and Roth, 2000; Even-Zohar and Roth, 2000] to acquire natural representations of the underlying domain features. These representations are learned in the course of interacting with the domain, and encode the features at the levels of abstraction that are found to be conducive to successful behavior. This selection effect is achieved by a fusion of abstraction space generalization [Sacerdoti, 1974; Knoblock, 1992] and reinforcement learning elements.

The rest of this paper is organized as follows. Section 2 presents the details of the QABLe framework. In section 3 we describe preliminary experimental results which indicate promise for our approach. In section 4 we summarize and draw conclusions.


2 QABLe – Learning to Answer Questions

2.1 Overview

Figure 1 shows a diagram of the QABLe framework. The bottom-most layer is the natural language textual domain. It represents raw textual sources, questions, and answers. The intermediate layer consists of processing modules that translate between the raw textual domain and the top-most layer, an abstract representation used to reason and learn.

This framework is used both for learning to answer questions and for the actual QA task. While learning, the system is provided with a set of training instances, each consisting of a textual narrative, a question, and a corresponding answer. During the performance phase, only the narrative and question are given.

At the lexical level, an answer to a question is generated by applying a series of transformation rules to the text of the narrative. These transformation rules augment the original text with one or more additional sentences, such that one of these explicitly contains the answer, and matches the form of the question.

On the abstract level, this is essentially a process of searching for a path through problem space that transforms the world state, as described by the textual source and question, into a world state containing an appropriate answer. This process is made efficient by learning answer-generation strategies. These strategies store procedural knowledge regarding the way in which answers are derived from text, and suggest appropriate transformation rules at each step in the answer-generation process. Strategies (and the procedural knowledge stored therein) are acquired by explaining (or deducing) correct answers from training examples. The framework’s ability to answer questions is tested only with respect to the kinds of documents it has seen during training, the kinds of questions it has practiced answering, and its interface to the world (domain sensors and operators).

In the next two sections we discuss lexical pre-processing, and the representation of features and relations over them in the QABLe framework. In section 2.4 we look at the structure of transformation rules and describe how they are instantiated. In section 2.5, we build on this information and describe details of how strategies are learned and utilized to generate answers. In section 2.6 we explain how candidate answers are matched to the question, and extracted.

2.2 Lexical Pre-Processing

Several levels of syntactic and semantic processing are required in order to generate structures that facilitate higher-order analysis. We currently use MontyTagger 1.2, an off-the-shelf POS tagger based on [Brill, 1995], for POS tagging. At the next tier, we utilize a Named Entity (NE) tagger for proper nouns, a semantic category classifier for nouns and noun phrases, and a co-reference resolver (limited to pronominal anaphora). Our taxonomy of semantic categories is derived from the list of unique beginners for WordNet nouns [Fellbaum, 1998]. We also have a parallel stage that identifies phrase types. Table 1 gives a list of phrase types currently in use, together with the categories of questions each phrase type can answer. In the near future, we plan to utilize a link parser to boost phrase-type tagging accuracy. For questions, we have a classifier that identifies the semantic category of information requested by the question. Currently, this taxonomy is identical to that of the semantic categories. However, in the future, it may be expanded to accommodate a wider range of queries. A separate module reformulates questions into statement form for later matching with answer-containing phrases.
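To make the pipeline concrete, the sketch below imitates the tagging, semantic-category, and question-reformulation stages in miniature. It is a toy stand-in under stated assumptions, not the QABLe implementation: MontyTagger, the WordNet-derived taxonomy, and the real reformulator are replaced by hand-coded tables that only handle the examples used later in this paper.

```python
# Toy sketch of the pre-processing tier (assumption: hand-coded tables
# stand in for MontyTagger 1.2 and the WordNet-derived category taxonomy).
POS = {"the": "DT", "drive": "NN", "to": "TO", "chicago": "NNP",
       "is": "VBZ", "how": "WRB", "far": "RB", "john": "NNP"}
SEM_CAT = {"chicago": "LOCATION", "john": "PERSON", "drive": "ACT"}

def preprocess(sentence):
    """Tag each token with a POS tag and, where known, a semantic category."""
    out = []
    for tok in sentence.rstrip(".?").split():
        key = tok.lower()
        out.append((tok, POS.get(key, "NN"), SEM_CAT.get(key)))
    return out

def reformulate(question):
    """Rewrite a wh-question into statement form with an empty answer slot.
    Only the 'How far is X?' pattern is handled in this sketch."""
    prefix = "how far is "
    if question.lower().startswith(prefix):
        rest = question[len(prefix):].rstrip("?")
        return rest[0].upper() + rest[1:] + " is ______."
    raise NotImplementedError("toy reformulator: pattern not covered")

print(preprocess("The drive to Chicago is far."))
print(reformulate("How far is the drive to Chicago?"))
# -> The drive to Chicago is ______.
```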

2.3 Representing the Question-Answering Domain

In this section we explain how features are extracted from the raw textual input and from the tags generated by the pre-processing modules.

A sentence is represented as a sequence of words ⟨w_1, w_2, …, w_n⟩, where word(w_i, word) binds a particular word to its position in the sentence. The k-th sentence in a passage is given a unique designation s_k. Several simple functions capture the syntax of the sentence. The sentence Main (e.g., the main verb) is the controlling element of the sentence, and is recognized by main(w_m, s_k). Parts of speech are recognized by the function pos, as in pos(w_i, NN) and pos(w_i, VBD). The relative syntactic ordering of words is captured by the predicate before(w_i, w_j). It can be applied recursively, as before(w_i, before(w_j, w_k)), to generate the entire sentence starting with an arbitrary word, usually the sentence Main. The integrity of the sentence is checked by the function inSentence(w_i, s_k) ≡ main(w_m, s_k) ∧ (before(w_i, w_m) ∨ before(w_m, w_i)) for each word w_i in the sentence. A consecutive sequence of words is a phrase entity, or simply entity. It is given the designation e_x and declared by a binding function, such as entity(e_x, NE) for a named entity, and entity(e_x, NP) for a syntactic group of type noun phrase. Each phrase entity is identified by its head, as head(w_h, e_x), and we say that the phrase head controls the entity. A phrase entity is defined as head(w_h, e_x) ∧ inPhrase(w_i, e_x) ∧ … ∧ inPhrase(w_j, e_x).

We also wish to represent higher-order relations such as functional roles and semantic categories. Functional dependency between pairs of words is encoded as, for example, subj(w_i, w_j) and aux(w_j, w_k). Functional groups are represented just like phrase entities. Each is assigned a designation r_x, declared, for example, as func_role(r_x, SUBJ), and defined in terms of its head and members (which may be individual words or composite entities). Semantic categories are similarly defined over the set of words and syntactic phrase entities – for example, sem_cat(c_x, PERSON) ∧ head(w_h, c_x) ∧ pos(w_h, NNP) ∧ word(w_h, "John").

Semantically, sentences are treated as events defined by their verbs. A multi-sentential passage is represented by tying the member sentences together with relations over their verbs. We declare two such relations – seq and cause. The seq relation between two sentences, seq(s_i, s_j) ≡ prior(main(s_i), main(s_j)), is defined as the sequential ordering in time of the corresponding events. The cause relation cause(s_i, s_j) ≡ cdep(main(s_i), main(s_j)) is defined such that the second event is causally dependent on the first.
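As a concrete illustration, the sketch below encodes a short sentence as a set of ground literals over these predicates. The predicate vocabulary follows this section; the sentence, the bindings, and the tuple encoding are our own illustrative choices.

```python
# Encoding "John drove to Chicago" (designated s1) as ground literals over
# the predicates of section 2.3. The tuple encoding is an assumption.
words = {"w1": "John", "w2": "drove", "w3": "to", "w4": "Chicago"}

facts = {("word", w, tok) for w, tok in words.items()}
facts |= {("pos", "w1", "NNP"), ("pos", "w2", "VBD"),
          ("pos", "w3", "TO"), ("pos", "w4", "NNP"),
          ("main", "w2", "s1")}                 # the sentence Main (main verb)
# before(w_i, w_j) for every ordered pair (transitively closed here).
ids = sorted(words)
facts |= {("before", ids[i], ids[j])
          for i in range(len(ids)) for j in range(i + 1, len(ids))}
# Phrase entity, functional role, and semantic category literals.
facts |= {("entity", "e1", "NE"), ("head", "w4", "e1"),
          ("func_role", "r1", "SUBJ"), ("head", "w1", "r1"),
          ("sem_cat", "c1", "PERSON"), ("head", "w1", "c1")}

def in_sentence(w, s):
    """inSentence(w_i, s_k): w_i is ordered relative to the Main of s_k."""
    mains = [m for (p, m, sk) in facts if p == "main" and sk == s]
    return any(w == m or ("before", w, m) in facts or ("before", m, w) in facts
               for m in mains)

print(in_sentence("w4", "s1"))  # True
```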

| Phrase Type | Comments |
|---|---|
| SUBJ | "Who" and nominal "What" questions |
| VERB | event "What" questions |
| DIR-OBJ | "Who" and nominal "What" questions |
| INDIR-OBJ | "Who" and nominal "What" questions |
| ELAB-SUBJ | descriptive "What" questions (e.g., what kind) |
| ELAB-VERB-TIME | "When" questions |
| ELAB-VERB-PLACE | "Where" questions |
| ELAB-VERB-MANNER | "How" questions |
| ELAB-VERB-CAUSE | "Why" questions |
| ELAB-VERB-INTENTION | "Why" as well as "What for" questions |
| ELAB-VERB-OTHER | smooth handling of undefined verb phrase types |
| ELAB-DIR-OBJ | descriptive "What" questions (e.g., what kind) |
| ELAB-INDIR-OBJ | descriptive "What" questions (e.g., what kind) |
| VERB-COMPL | WHERE/WHEN/HOW questions concerning state or status |

Table 1. Phrase types used by the QABLe framework.

2.4 Primitive Operators and Transformation Rules

The system, in general, starts out with no procedural knowledge of the domain (i.e., no transformation rules). However, it is equipped with 9 primitive operators that define basic actions in the domain. Primitive operators are existentially quantified. They have no activation condition, but only an existence condition – the minimal binding condition for the operator to be applicable in a given state.

A primitive operator has the form E_C → α_A, where E_C is the existence condition and α_A is an action implemented in the domain. An example primitive operator is

primitive-op-1: ∃w_x, w_y → add-word-after(w_y, w_x)

Other primitive operators delete words or manipulate entire phrases. Note that primitive operators act directly on the syntax of the domain. In particular, they manipulate words and phrases. A primitive operator bound to a state in the domain constitutes a transformation rule. The procedure for instantiating transformation rules using primitive operators is given in Figure 2. The result of this procedure is a universally quantified rule having the form C → A ∧ R_G. A may represent either the name of an action in the world or an internal predicate. C represents the necessary condition for rule activation, in the form of a conjunction over the relevant attributes of the world state. R_G represents the expected effect of the action. For example, x_1 ∧ ¬x_2 → turn_on_x2 ∧ g_2 indicates that when x_1 is on and x_2 is off, this operator is expected to turn x_2 on.
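To make the rule machinery concrete, here is a minimal sketch of a primitive operator and of a transformation rule instantiated from it, folding in the three rank components listed just after this sketch. The encodings, names, and rank-combination function are our own assumptions; the paper's instantiation procedure (Figure 2) is not reproduced here.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PrimitiveOp:
    """A primitive operator: an existence condition plus a domain action."""
    name: str
    arity: int                      # existence condition: words it must bind
    action: Callable

def add_word_after(words, w_y, w_x):
    """add-word-after(w_y, w_x): insert token w_y right after token w_x."""
    i = words.index(w_x)
    return words[:i + 1] + [w_y] + words[i + 1:]

primitive_op_1 = PrimitiveOp("add-word-after", 2, add_word_after)

@dataclass
class TransformationRule:
    """A primitive operator bound to a state: C -> A ∧ R_G."""
    op: PrimitiveOp
    condition: frozenset            # C: conjunction over state attributes
    bindings: tuple                 # parameters bound in the current state
    expected_effect: frozenset      # R_G: expected effect of the action
    priority: float = 0.0           # inductively acquired performance measure
    experience: int = 0             # frequency-of-use measure
    binding_conf: float = 1.0       # confidence in current parameter bindings

    def rank(self):
        # One plausible fusion of the three rank components; the paper does
        # not specify the combination function (assumption).
        return self.priority * self.binding_conf * (1 + self.experience)

state = ["The", "drive", "is"]
rule = TransformationRule(
    op=primitive_op_1,
    condition=frozenset({("word", "is")}),
    bindings=("far", "is"),
    expected_effect=frozenset({("word", "far")}),
    priority=0.5,
)
print(rule.op.action(state, *rule.bindings))  # ['The', 'drive', 'is', 'far']
```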

An instantiated rule is assigned a rank composed of:

- a priority rating
- a level of experience with the rule
- confidence in the current parameter bindings

The first component, the priority rating, is an inductively acquired measure of the rule's performance on previous instances. The second component modulates the priority rating with respect to a frequency-of-use measure. The third component captures any uncertainty inherent in the underlying features serving as parameters to the rule.

Each time a new rule is added to the rule base, an attempt is made to combine it with similar existing rules to produce more general rules having a wider relevance and applicability. Given a rule A_1: c_a ∧ c_b → g_Rx ∧ g_Ry covering a set of example instances E_1, and another rule A_2: c_b ∧ c_c → g_Ry ∧ g_Rz covering a set of examples E_2, we add a more general rule A_3: c_b → g_Ry to the strategy. The new rule A_3 is consistent with E_1 and E_2. In addition, it will bind to any state where the literal c_b is active. Therefore the hypothesis represented by the triggering condition is likely an overgeneralization of the target concept. This means that rule A_3 may bind in some states erroneously. However, since all rules that can bind in a state compete to fire in that state, if there is a better rule, then A_3 will be preempted and will not fire.
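In code, this generalization step amounts to intersecting the two rules' conditions and effects, as in the following sketch of the A_1/A_2 example (the set encoding is an assumption):

```python
def generalize(cond1, eff1, cond2, eff2):
    """Combine two rules into a more general one by intersecting their
    triggering conditions and expected effects (may overgeneralize)."""
    return cond1 & cond2, eff1 & eff2

# A1: c_a ∧ c_b -> g_Rx ∧ g_Ry      A2: c_b ∧ c_c -> g_Ry ∧ g_Rz
A1 = ({"c_a", "c_b"}, {"g_Rx", "g_Ry"})
A2 = ({"c_b", "c_c"}, {"g_Ry", "g_Rz"})
A3 = generalize(*A1, *A2)
print(A3)  # ({'c_b'}, {'g_Ry'}): fires wherever c_b is active
```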

2.5 Generating Answers

Returning to Figure 1, we note that at the abstract level the process of answer generation begins with the extraction of features active in the current state. These features represent low-level textual attributes and the relations over them described in section 2.3.

Immediately upon reading the current state, the system checks to see if this is a goal state. A goal state is a state whose corresponding textual domain representation contains an explicit answer in the right form to match the question. In the abstract representation, we say that in this state all of the goal constraints are satisfied.

If the current state is indeed a goal state, no further inference is required. The inference process terminates, and the actual answer is identified by the matching technique described in section 2.6 and extracted.

If the current state is not a goal state and more processing time is available, QABLe passes the state to the Inference Engine (IE). This module stores strategies in the form of decision lists of rules. For a given state, each strategy may recommend at most one rule to execute; for each strategy this is the first rule in its decision list to fire. The IE selects the rule among these with the highest relative rank, and recommends it as the next transformation rule to be applied to the current state. If a valid rule exists, it is executed in the domain. This modifies the concrete textual layer. At this point, the pre-processing and feature extraction stages are invoked, a new current state is produced, and the inference cycle begins anew.

If a valid rule cannot be recommended by the IE, QABLe passes the current state to the Search Engine (SE). The SE uses the current state and its set of primitive operators to instantiate a new rule, as described in section 2.4. This rule is then executed in the domain, and another iteration of the process begins. If no more primitive operators remain to be applied to the current state, the SE cannot instantiate a new rule. At this point, search for the goal state cannot proceed, processing terminates, and QABLe returns failure.

When the system is in the training phase and the SE instantiates a new rule, that rule is generalized against the existing rule base. This procedure attempts to create more general rules that can be applied to unseen example instances.
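The control flow just described can be summarized in a short sketch. The decision-list encoding and function signatures below are assumptions; they mirror the IE/SE division of labor, not the actual implementation.

```python
def answer_loop(state, strategies, search_engine, is_goal, max_steps=50):
    """QABLe-style control loop (a sketch, not the reference implementation).

    strategies: list of decision lists; each decision list is an ordered
    list of (binds, rank, apply_rule) triples.
    search_engine: fallback that instantiates a new rule or returns None.
    """
    for _ in range(max_steps):
        if is_goal(state):
            return state                    # goal: answer is now explicit
        # Each strategy recommends at most one rule: the first that binds.
        candidates = []
        for decision_list in strategies:
            for binds, rank, apply_rule in decision_list:
                if binds(state):
                    candidates.append((rank, apply_rule))
                    break
        if candidates:
            _, apply_rule = max(candidates, key=lambda c: c[0])
            state = apply_rule(state)       # modifies the textual layer
        else:
            new_rule = search_engine(state) # SE instantiates a new rule
            if new_rule is None:
                return None                 # search cannot proceed: failure
            state = new_rule(state)
    return None
```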

Once the inference/search process terminates (successfully or not), a reinforcement learning algorithm is applied to the entire rule search-inference tree. Specifically, rules on the solution path receive positive reward, and rules that fired but are not on the solution path receive negative reinforcement.
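Credit assignment over the fired rules admits an equally small sketch, reusing the TransformationRule fields from the section 2.4 example. The update form and reward magnitudes are assumptions; the paper specifies only the sign of the reinforcement.

```python
def reinforce(fired_rules, solution_path, lr=0.1):
    """Update each fired rule's priority rating after an episode.
    fired_rules / solution_path: collections of TransformationRule objects."""
    on_path = {id(r) for r in solution_path}
    for r in fired_rules:
        reward = 1.0 if id(r) in on_path else -1.0  # +/- reinforcement
        r.priority += lr * reward
        r.experience += 1
```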

2.6 Candidate Answer Matching and Extraction

As discussed in the previous section, when a goal state is generated in the abstract representation, this corresponds to a textual domain representation that contains an explicit answer in the right form to match the question. Such a candidate answer may be present in the original text, or may be generated by the inference/search process. In either case, the answer-containing sentence must be found, and the actual answer extracted. This is accomplished by the Answer Matching and Extraction procedure.

The first step in this procedure is to reformulate the question into statement form. This results in a sentence containing an empty slot for the information being queried. For example, “How far is the drive to Chicago?” becomes “The drive to Chicago is ______.” Recall further that QABLe’s pre-processing stage analyzes text with respect to various syntactic and semantic types. In addition to supporting abstract feature generation, these tags can be used to analyze text on a lexical level. Thus, the question above is marked up as [ELAB-VERB (WRB How) (RB far)] [VERB (VBZ is)] [SUBJ (DT the) (NN drive)] [VERB-COMPL (TO to) (NNP Chicago)]. Once reformulated into statement form, this becomes [ELAB-VERB ______ ] [VERB (VBZ is)] [SUBJ (DT the) (NN drive)] [VERB-COMPL (TO to) (NNP Chicago)]. The goal now is to find a sentence whose syntactic and semantic analysis matches that of the reformulated question as closely as possible. Thus, for example, the text may contain the sentence “The drive to Chicago is 2 hours”, with the corresponding analysis [SUBJ (DT the) (NN drive)] [VERB-COMPL (TO to) (NNP Chicago)] [VERB (VBZ is)] [VERB-COMPL (CD 2) (NNS hours)]. Notice that all of the elements of this candidate answer match the corresponding elements of the question, with the exception of the semantic category of the ELAB-VERB phrase. This is likely not the answer we are seeking. The text may contain a second sentence, “The drive to Chicago is 130 miles”, which analyzes as [SUBJ (DT the) (NN drive)] [VERB-COMPL (TO to) (NNP Chicago)] [VERB (VBZ is)] [VERB-COMPL (CD 130) (NNS miles)]. In this case, all of the elements match their counterparts in the reformulated question. Thus, the second sentence can be matched as the correct answer with high confidence.
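A simplified version of this scoring can be sketched as phrase-level overlap, with the answer slot matched on the semantic category requested by the question. The tuple encoding and the DISTANCE/TIME category labels are illustrative assumptions, not the paper's actual scoring function.

```python
def match_score(question_phrases, candidate_phrases):
    """Count question phrases mirrored in the candidate; the answer slot
    ("______") matches a phrase whose semantic category fits the query."""
    score = 0
    for qtype, qwords, qcat in question_phrases:
        for ctype, cwords, ccat in candidate_phrases:
            if qwords == "______":
                if ccat == qcat:            # slot: match on semantic category
                    score += 1
                    break
            elif qtype == ctype and qwords == cwords:
                score += 1
                break
    return score

# "How far is the drive to Chicago?" -> "The drive to Chicago is ______."
# The question classifier marks the slot's expected category (DISTANCE is
# a hypothetical label for this example).
question = [("ELAB-VERB", "______", "DISTANCE"), ("VERB", "is", None),
            ("SUBJ", "the drive", None), ("VERB-COMPL", "to Chicago", None)]
cand_1 = [("SUBJ", "the drive", None), ("VERB-COMPL", "to Chicago", None),
          ("VERB", "is", None), ("VERB-COMPL", "2 hours", "TIME")]
cand_2 = [("SUBJ", "the drive", None), ("VERB-COMPL", "to Chicago", None),
          ("VERB", "is", None), ("VERB-COMPL", "130 miles", "DISTANCE")]
print(match_score(question, cand_1))  # 3
print(match_score(question, cand_2))  # 4 -> better match
```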

3 Experimental Evaluation

3.1 Experimental Setup

We evaluate our approach to open-domain natural language question answering on the Remedia corpus. This is a collection of 115 children’s stories provided by Remedia Publications for reading comprehension. The comprehension of each story is tested by answering five who, what, when, where, and why questions.

The Remedia corpus was initially used to evaluate the Deep Read reading comprehension system [Hirschman et al., 1999], and was later also used by other systems, including Quarc [Riloff and Thelen, 2000] and the Brown University statistical language processing class project [Charniak et al., 2000].

The corpus includes two answer keys. The first answer key contains annotations indicating the story sentence that is lexically closest to the answer found in the published answer key (AutSent). The second answer key contains sentences that a human judged to best answer each question (HumSent). Examination of the two keys shows the latter to be more reliable. We trained and tested using the HumSent answers. We also compare our results to the HumSent results of prior systems. In the Remedia corpus, approximately 10% of the questions lack an answer. Following prior work, only questions with annotated answers were considered.

We divided the Remedia corpus into a set of 55 tests used for development, and 60 tests used to evaluate our model, employing the same partition scheme as the prior work mentioned above. With five questions supplied with each test, this breakdown provided 275 example instances for training, and 300 example instances for testing. However, due to our model’s heavy reliance on learning, many more training examples were necessary. We widened the training set by adding story-question-answer sets obtained from several online sources. With the extended corpus, QABLe was trained on 262 stories with 3-5 questions each, corresponding to 1000 example instances.

3.2 Discussion of Results

Table 2 compares the performance of different versions of QABLe with the results reported by the three systems described above.

| System | who | what | when | where | why | Overall |
|---|---|---|---|---|---|---|
| Deep Read | 48% | 38% | 37% | 39% | 21% | 36% |
| Quarc | 41% | 28% | 55% | 47% | 28% | 40% |
| Brown | 57% | 32% | 32% | 50% | 22% | 41% |
| QABLe-N/L | 48% | 35% | 52% | 43% | 28% | 41% |
| QABLe-L | 56% | 41% | 56% | 45% | 35% | 47% |
| QABLe-L+ | 59% | 43% | 56% | 46% | 36% | 48% |

Table 2. Comparison of QA accuracy by question type.

| System | # rules learned | # rules on solution path | average # rules per correct answer |
|---|---|---|---|
| QABLe-L | 3,463 | 426 | 3.02 |
| QABLe-L+ | 16,681 | 411 | 2.85 |

Table 3. Analysis of transformation rule learning and use.

We wish to discern the particular contribution of transformation rule learning in the QABLe model, as well as the value of expanding the training set. Thus, the QABLe-N/L results indicate the accuracy of answers returned by the QA matching and extraction algorithm described in section 2.6 alone. This algorithm is similar to prior answer extraction techniques, and provides a baseline for our experiments. The QABLe-L results include answers returned by the full QABLe framework, including the utilization of learned transformation rules, but trained only on the limited training portion of the Remedia corpus. The QABLe-L+ results are for the version trained on the expanded training set.

As expected, the accuracy of QABLe-N/L is comparable to that of the earlier systems. The Remedia-only training version, QABLe-L, shows an improvement over both the QABLe-N/L baseline and most of the prior system results. This is due to its expanded ability to deal with semantic alternations in the narrative by finding and learning transformation rules that reformulate the alternations into a lexical form matching that of the question.

The results of QABLe-L+, trained on the expanded training set, are for the most part noticeably better than those of QABLe-L. This is because training on more example instances leads to wider domain coverage through the acquisition of more transformation rules. Table 3 gives a breakdown of rule learning and use for the two learning versions of QABLe. The first column is the total number of rules learned by each system version. The second column is the number of rules that ended up being successfully used in generating an answer. The third column gives the average number of rules each system needed to generate an answer (in cases where a correct answer was produced). Note that QABLe-L+ used fewer rules on average to generate more correct answers than QABLe-L. This is because QABLe-L+ had more opportunities to refine the policy controlling rule firing through reinforcement and generalization.

Note that the learning versions of QABLe do significantly better than QABLe-N/L and all the prior systems on why-type questions. This is because many of these questions require an inference step, or the combination of information spanning multiple sentences. QABLe-L and QABLe-L+ are able to successfully learn transformation rules to deal with a subset of these cases.

4 Conclusion

In this paper we present an approach to automatically learning strategies for natural language question answering from examples composed of textual sources, questions, and corresponding answers. The strategies thus acquired are composed of ranked lists of transformation rules that, when applied to an initial state consisting of an unseen text and question, can derive the required answer. We compare our approach to three prior systems, and present experimental results demonstrating the efficacy of our model. In particular, we show that our approach is effective given a sufficiently large training corpus and reasonably accurate pre-processing of raw input data.

References

[Brill, 1995] E. Brill. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565, 1995.

[Charniak et al., 2000] E. Charniak, Y. Altun, R. de Salvo Braz, B. Garrett, M. Kosmala, T. Moscovich, L. Pang, C. Pyo, Y. Sun, W. Wy, Z. Yang, S. Zeller, and L. Zorn. Reading comprehension programs in a statistical-language-processing class. ANLP/NAACL-00, 2000.

[Cumby and Roth, 2000] C. Cumby and D. Roth. Relational representations that facilitate learning. KR-00, pp. 425-434, 2000.

[Even-Zohar and Roth, 2000] Y. Even-Zohar and D. Roth. A classification approach to word prediction. NAACL-00, pp. 124-131, 2000.

[Fellbaum, 1998] C. Fellbaum (ed.). WordNet: An Electronic Lexical Database. The MIT Press, 1998.

[Hirschman and Gaizauskas, 2001] L. Hirschman and R. Gaizauskas. Natural language question answering: The view from here. Natural Language Engineering, 7(4):275-300, 2001.

[Hirschman et al., 1999] L. Hirschman, M. Light, and J. Burger. Deep Read: A reading comprehension system. ACL-99, 1999.

[Kaelbling et al., 1996] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.

[Khardon et al., 1999] R. Khardon, D. Roth, and L. G. Valiant. Relational learning for NLP using linear threshold elements. IJCAI-99, 1999.

[Khardon, 1999] R. Khardon. Learning to take action. Machine Learning, 35(1), 1999.

[Knoblock, 1992] C. Knoblock. Automatic generation of abstractions for planning. Artificial Intelligence, 68(2):243-302, 1992.

[Riloff and Thelen, 2000] E. Riloff and M. Thelen. A rule-based question answering system for reading comprehension tests. ANLP/NAACL-2000, 2000.

[Sacerdoti, 1974] E. D. Sacerdoti. Planning in a hierarchy of abstraction spaces. Artificial Intelligence, 5(2):115-135, 1974.

[Tadepalli and Natarajan, 1996] P. Tadepalli and B. Natarajan. A formal framework for speedup learning from problems and solutions. Journal of Artificial Intelligence Research, 4:445-475, 1996.

[Voorhees, 2003] E. M. Voorhees. Overview of the TREC 2003 question answering track. TREC-12, 2003.
