
Improvements in Phrase-Based Statistical Machine Translation

Richard Zens and Hermann Ney

Chair of Computer Science VI

RWTH Aachen University

{zens,ney}@cs.rwth-aachen.de

Abstract

In statistical machine translation, the currently best performing systems are based in some way on phrases or word groups. We describe the baseline phrase-based translation system and various refinements. We describe a highly efficient monotone search algorithm with a complexity linear in the input sentence length. We present translation results for three tasks: Verbmobil, Xerox and the Canadian Hansards. For the Xerox task, it takes less than 7 seconds to translate the whole test set consisting of more than 10K words. The translation results for the Xerox and Canadian Hansards task are very promising. The system even outperforms the alignment template system.

1 Introduction

In statistical machine translation, we are given a source language ('French') sentence $f_1^J = f_1 \ldots f_j \ldots f_J$, which is to be translated into a target language ('English') sentence $e_1^I = e_1 \ldots e_i \ldots e_I$. Among all possible target language sentences, we will choose the sentence with the highest probability:

$$\hat{e}_1^I = \operatorname*{argmax}_{e_1^I} \; \Pr(e_1^I \mid f_1^J) \qquad (1)$$

$$= \operatorname*{argmax}_{e_1^I} \; \Pr(e_1^I) \cdot \Pr(f_1^J \mid e_1^I) \qquad (2)$$

The decomposition into two knowledge sources in Equation 2 is known as the source-channel approach to statistical machine translation (Brown et al., 1990). It allows an independent modeling of the target language model $\Pr(e_1^I)$ and the translation model $\Pr(f_1^J \mid e_1^I)$. The target language model describes the well-formedness of the target language sentence.

2 Phrase-Based Translation

2.1 Motivation

One major disadvantage of single-word based approaches is that contextual information is not taken into account. The lexicon probabilities are based only on single words. For many words, the translation depends heavily on the surrounding words. In the single-word based translation approach, this disambiguation is addressed by the language model only, which is often not capable of doing this.

One way to incorporate the context into the translation model is to learn translations for whole phrases instead of single words. Here, a phrase is simply a sequence of words. So, the basic idea of phrase-based translation is to segment the given source sentence into phrases, then translate each phrase and finally compose the target sentence from these phrase translations.

2.2 Phrase Extraction

The system somehow has to learn which phrases are translations of each other. Therefore, we use the following approach: first, we train statistical alignment models using GIZA++ and compute the Viterbi word alignment of the training corpus. This is done for both translation directions. We take the union of both alignments to obtain a symmetrized word alignment matrix. This alignment matrix is the starting point for the phrase extraction. The following criterion defines the set of bilingual phrases $\mathcal{BP}(f_1^J, e_1^I, A)$ of the sentence pair $(f_1^J; e_1^I)$ and the alignment matrix $A$ that is used in the translation system:

$$\mathcal{BP}(f_1^J, e_1^I, A) = \Big\{ \big(f_j^{j+m},\, e_i^{i+n}\big) : \forall (i', j') \in A : \; j \le j' \le j+m \;\Leftrightarrow\; i \le i' \le i+n \Big\}$$

This criterion is identical to the alignment template criterion described in (Och et al., 1999). It means that two phrases are considered to be translations of each other, if the words are aligned only within the phrase pair and not to words outside. The phrases have to be contiguous.
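As an illustration, the following minimal sketch (not the paper's implementation; the function name, the toy sentence pair and the alignment are hypothetical) enumerates all contiguous phrase pairs that satisfy this criterion for one sentence pair:

```python
def extract_phrases(src, tgt, alignment, max_len=4):
    """Return all phrase pairs consistent with the alignment:
    words inside the pair may only be aligned to words inside the pair."""
    phrases = []
    J, I = len(src), len(tgt)
    for j1 in range(J):
        for j2 in range(j1, min(j1 + max_len, J)):
            for i1 in range(I):
                for i2 in range(i1, min(i1 + max_len, I)):
                    # every alignment point touching the pair must lie inside it
                    consistent = all((j1 <= j <= j2) == (i1 <= i <= i2)
                                     for (j, i) in alignment
                                     if j1 <= j <= j2 or i1 <= i <= i2)
                    # require at least one alignment point inside the pair
                    nonempty = any(j1 <= j <= j2 and i1 <= i <= i2
                                   for (j, i) in alignment)
                    if consistent and nonempty:
                        phrases.append((tuple(src[j1:j2 + 1]),
                                        tuple(tgt[i1:i2 + 1])))
    return phrases

# Toy example: "das Haus" / "the house", aligned 0-0 and 1-1.
print(extract_phrases(["das", "Haus"], ["the", "house"], {(0, 0), (1, 1)}))
```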

2.3 Translation Model

To use phrases in the translation model, we introduce the hidden variable $S$. This is a segmentation of the sentence pair $(f_1^J; e_1^I)$ into $K$ phrases $(\tilde{f}_1^K; \tilde{e}_1^K)$. We use a one-to-one phrase alignment, i.e. one source phrase is translated by exactly one target phrase. Thus, we obtain:

$$\Pr(f_1^J \mid e_1^I) = \sum_{S} \Pr(f_1^J, S \mid e_1^I) \qquad (3)$$

$$= \sum_{S} \Pr(S \mid e_1^I) \cdot \Pr(f_1^J \mid S, e_1^I) \qquad (4)$$

$$\approx \max_{S} \; \Pr(S \mid e_1^I) \cdot \Pr(f_1^J \mid S, e_1^I) \qquad (5)$$

In the preceding step, we used the maximum approximation for the sum over all segmentations. Next, we allow only translations that are monotone at the phrase level. So, the phrase $\tilde{f}_1$ is produced by $\tilde{e}_1$, the phrase $\tilde{f}_2$ is produced by $\tilde{e}_2$, and so on. Within the phrases, the reordering is learned during training. Therefore, there is no constraint on the reordering within the phrases.

$$\Pr(f_1^J \mid S, e_1^I) = \prod_{k=1}^{K} \Pr(\tilde{f}_k \mid \tilde{f}_1^{k-1}, \tilde{e}_1^K) \qquad (6)$$

$$= \prod_{k=1}^{K} p(\tilde{f}_k \mid \tilde{e}_k) \qquad (7)$$

Here, we have assumed a zero-order model at the phrase level. Finally, we have to estimate the phrase translation probabilities $p(\tilde{f} \mid \tilde{e})$. This is done via relative frequencies:

$$p(\tilde{f} \mid \tilde{e}) = \frac{N(\tilde{f}, \tilde{e})}{\sum_{\tilde{f}'} N(\tilde{f}', \tilde{e})}$$
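A minimal sketch of this estimation step, assuming the extracted phrase pairs are available as a list of (source phrase, target phrase) tuples; all names are illustrative:

```python
from collections import Counter

def phrase_probs(phrase_pairs):
    """Estimate p(f~ | e~) via relative frequencies over extracted pairs."""
    joint = Counter(phrase_pairs)                       # N(f~, e~)
    marginal = Counter(e for _, e in phrase_pairs)      # sum_f' N(f'~, e~)
    return {(f, e): joint[(f, e)] / marginal[e] for (f, e) in joint}
```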

3 Refinements

In this section, we will describe refinements of the phrase-based translation model. First, we will describe two heuristics: word penalty and phrase penalty. Second, we will describe a single-word based lexicon model. This will be used to smooth the phrase translation probabilities.

3.1 Simple Heuristics

In addition to the baseline model, we use two simple heuristics, namely word penalty and phrase penalty:

$$h_{\mathrm{wp}}(e_1^I, S, f_1^J) = I \qquad (12)$$

$$h_{\mathrm{pp}}(e_1^I, S, f_1^J) = K \qquad (13)$$

The word penalty feature is simply the target sentence length $I$. In combination with the scaling factor, this results in a constant cost per produced target language word. With this feature, we are able to adjust the sentence length. If we set a negative scaling factor, longer sentences are more penalized than shorter ones, and the system will favor shorter translations. Alternatively, by using a positive scaling factor, the system will favor longer translations.

Similar to the word penalty, the phrase penalty feature results in a constant cost per produced phrase. The phrase penalty is used to adjust the average length of the phrases.

A negative weight, meaning real costs per phrase, results in a preference for longer phrases. A positive weight, meaning a bonus per phrase, results in a preference for shorter phrases.
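Both heuristics are trivial to compute. A sketch of how they would enter a weighted model score (the scaling-factor values below are made up for illustration):

```python
def penalty_score(target_words, phrases, lambda_wp=-0.5, lambda_pp=0.3):
    """Word and phrase penalty contributions to a weighted model score.

    h_wp is the target sentence length I, h_pp the number of phrases K;
    the signs of the scaling factors decide whether longer output and
    longer phrases are rewarded or penalized."""
    h_wp = len(target_words)   # feature value I
    h_pp = len(phrases)        # feature value K
    return lambda_wp * h_wp + lambda_pp * h_pp

# Example: six target words produced by three phrases.
score = penalty_score(["we", "will", "send", "you", "the", "manual"],
                      [("we", "will"), ("send", "you"), ("the", "manual")])
```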

3.2 Word-based Lexicon

We are using relative frequencies to estimate the phrase translation probabilities. Most of the longer phrases are seen only once in the training corpus. Therefore, pure relative frequencies overestimate the probability of those phrases. To overcome this problem, we use a word-based lexicon model to smooth the phrase translation probabilities. For a source word $f$ and a target phrase $\tilde{e} = e_i^j$, we use the following approximation:

$$p(f \mid e_i^j) \approx 1 - \prod_{k=i}^{j} \big(1 - p(f \mid e_k)\big)$$

This models a disjunctive interaction, also called the noisy-OR gate (Pearl, 1988). The idea is that there are multiple independent causes that can generate an event. It can be easily integrated into the search algorithm. The corresponding feature function is:

$$h_{\mathrm{lex}}(e_1^I, S, f_1^J) = \log \prod_{k=1}^{K} \; \prod_{j=j_{k-1}+1}^{j_k} p(f_j \mid \tilde{e}_k)$$

Here, $j_k$ and $i_k$ denote the final position of phrase number $k$ in the source and the target sentence, respectively, and we define $j_0 := 0$ and $i_0 := 0$.
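A sketch of the noisy-OR smoothing, assuming the single-word probabilities are available in a dictionary keyed by (source word, target word); all names are illustrative:

```python
import math

def noisy_or(p_word, f, e_phrase):
    """Noisy-OR gate: p(f | e_phrase) = 1 - prod_k (1 - p(f | e_k)).
    Each word of the target phrase is an independent potential cause of f."""
    miss = 1.0
    for e in e_phrase:
        miss *= 1.0 - p_word.get((f, e), 0.0)
    return 1.0 - miss

def phrase_lex_logprob(p_word, f_phrase, e_phrase):
    """Smoothed lexical log-probability of a phrase pair: the product over
    the source words of their noisy-OR probabilities (floored for safety)."""
    return sum(math.log(max(noisy_or(p_word, f, e_phrase), 1e-10))
               for f in f_phrase)
```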

To estimate the single-word based translation probabilities $p(f \mid e)$, we use smoothed relative frequencies. The smoothing method we apply is absolute discounting with interpolation:

$$p(f \mid e) = \frac{\max\{N(f, e) - d,\; 0\}}{\sum_{f'} N(f', e)} + \alpha(e) \cdot \beta(f) \qquad (14)$$

Here, $d$ is the discounting parameter, $\alpha(e)$ is chosen such that the distribution is properly normalized, and $\beta(f)$ is a backing-off distribution; a uniform and a unigram choice for $\beta$ are compared in Section 7.3.
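A sketch of this smoothing, here backing off to the unigram distribution (one of the two variants compared in Section 7.3); the discount value and all names are illustrative:

```python
from collections import Counter

def discounted_lexicon(word_pairs, d=0.3):
    """Absolute discounting with interpolation (cf. Ney et al., 1997):
    subtract a discount d < 1 from every seen count N(f, e) and
    redistribute the freed mass via a backing-off distribution beta,
    here the unigram distribution over source words."""
    joint = Counter(word_pairs)                  # N(f, e)
    marg = Counter(e for _, e in word_pairs)     # N(e) = sum_f N(f, e)
    f_marg = Counter(f for f, _ in word_pairs)
    total = sum(f_marg.values())
    beta = {f: n / total for f, n in f_marg.items()}
    # number of distinct f seen with each e, used to normalize alpha
    n_plus = Counter(e for _, e in joint)

    def p(f, e):
        if marg[e] == 0:                         # unseen e: back off fully
            return beta.get(f, 0.0)
        alpha = d * n_plus[e] / marg[e]          # freed probability mass
        return max(joint[(f, e)] - d, 0.0) / marg[e] + alpha * beta.get(f, 0.0)

    return p
```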

4 Monotone Search

We will describe the search algorithm for a bigram language model. This is only for notational convenience. The generalization to a higher order language model is straightforward. For the maximization problem in (11), we define the quantity $Q(j, e)$ as the maximum probability of a phrase sequence that ends with the word $e$ and covers positions $1$ to $j$ of the source sentence. $Q(J+1, \$)$ is the probability of the optimum translation. The symbol $\$$ is the sentence boundary marker. We obtain the following dynamic programming recursion:

$$Q(0, \$) = 1$$

$$Q(j, e) = \max_{e',\, \tilde{e},\; j-M \le j' < j} \Big\{ Q(j', e') \cdot p(f_{j'+1}^{j} \mid \tilde{e}) \cdot p(\tilde{e} \mid e') \Big\}$$

$$Q(J+1, \$) = \max_{e'} \Big\{ Q(J, e') \cdot p(\$ \mid e') \Big\}$$

where the maximization runs over target phrases $\tilde{e}$ that end in the word $e$.

Here, $M$ denotes the maximum phrase length in the source language. During the search, we store backpointers to the maximizing arguments. After performing the search, we can generate the optimum translation. The resulting algorithm has a worst-case complexity that is linear in the source sentence length $J$; the constant factors depend on $M$, on the vocabulary size of the target language, and on the maximum number of phrase translation candidates for a source language phrase. Using efficient data structures and taking into account that not all possible target language phrases can occur in translating a specific source language sentence, we can perform a very efficient search.
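The recursion translates almost directly into code. The following is a minimal sketch, assuming an in-memory phrase table and a bigram language model callable; it omits the sentence-boundary term and the additional feature functions, and all names are illustrative:

```python
def monotone_search(src, phrase_table, lm_logprob, max_phrase_len=4):
    """Monotone phrase-based search by dynamic programming.

    Q[j][e] holds the best log-probability of covering source positions
    1..j with a phrase sequence whose last target word is e, plus
    backpointers for recovering the translation.
    phrase_table: dict mapping a source phrase (tuple of words) to a
                  list of (target_phrase, translation_logprob);
    lm_logprob(word, prev_word) -> bigram LM log-probability."""
    J = len(src)
    BOS = "<s>"
    Q = [dict() for _ in range(J + 1)]
    Q[0][BOS] = (0.0, None, None, ())        # (score, j', e_prev, phrase)
    for j in range(1, J + 1):
        for jp in range(max(0, j - max_phrase_len), j):
            f_phrase = tuple(src[jp:j])
            for e_phrase, tm_logp in phrase_table.get(f_phrase, []):
                for e_prev, entry in Q[jp].items():
                    lm, prev = 0.0, e_prev
                    for w in e_phrase:       # score the phrase with the LM
                        lm += lm_logprob(w, prev)
                        prev = w
                    cand = entry[0] + tm_logp + lm
                    last = e_phrase[-1]
                    if last not in Q[j] or cand > Q[j][last][0]:
                        Q[j][last] = (cand, jp, e_prev, e_phrase)
    if not Q[J]:
        return None                          # no full segmentation found
    # trace back the backpointers from the best final state
    e, j, phrases = max(Q[J], key=lambda w: Q[J][w][0]), J, []
    while j > 0:
        _, jp, e_prev, e_phrase = Q[j][e]
        phrases.append(e_phrase)
        j, e = jp, e_prev
    return [w for ph in reversed(phrases) for w in ph]
```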

This monotone algorithm is especially useful for language pairs that have a similar word order, e.g. Spanish-English or French-English.

5 Corpus Statistics

In the following sections, we will present results on three tasks: Verbmobil, Xerox and Canadian Hansards. Therefore, we will show the corpus statistics for each of these tasks in this section. The training corpus (Train) of each task is used to train a word alignment and then extract the bilingual phrases and the word-based lexicon. The remaining free parameters, e.g. the model scaling factors, are optimized on the development corpus (Dev). The resulting system is then evaluated on the test corpus (Test).

Verbmobil Task. The first task we will present results on is the German–English Verbmobil task (Wahlster, 2000). The domain of this corpus is appointment scheduling, travel planning, and hotel reservation. It consists of transcriptions of spontaneous speech. Table 1 shows the corpus statistics of this task.

Xerox task. Additionally, we carried out experiments on the Spanish–English Xerox task. The corpus consists of technical manuals. This is a rather limited domain task. Table 2 shows the training, development and test corpus statistics.

Canadian Hansards task. Further experiments were carried out on the French–English Canadian Hansards task. This corpus contains the proceedings of the Canadian parliament. Table 3 shows the corpus statistics.

Table 1: Statistics of training and test corpus for the Verbmobil task (PP = perplexity). Only the English column is legible in this copy:

                     English
Train  Words         549,921
       Vocabulary      7,939
Dev    Sentences         276
       Words           3,159
       Trigram PP       28.1
Test   Sentences         251
       Words           2,628
       Trigram PP       30.5

Table 2: Statistics of training and test corpus for the Xerox task (PP = perplexity). Only the English column is legible in this copy:

                     English
Train  Words         665,399
       Vocabulary     11,050
Dev    Words          14,278
Test   Words           8,370

Table 3: Statistics of training and test corpus for the Canadian Hansards task (PP = perplexity). Only the English column is legible in this copy:

                     English
Train  Words             22M
       Vocabulary    100,269
Dev    Sentences         500
       Words           9,043
       Trigram PP       57.7
Test   Sentences       5,432
       Words          97,646
       Trigram PP       56.7

6 Degree of Monotonicity

Note that all these monotonicity considerations are done at the phrase level. Within the phrases, arbitrary reorderings are allowed. The only restriction is that they occur in the training corpus.

Table 4 shows the percentage of the training corpus that can be generated with monotone and nonmonotone phrase-based search. The number of sentence pairs that can be produced with the nonmonotone search gives an estimate of the upper bound for the sentence error rate of the phrase-based system that is trained on the given data. The same considerations hold for the monotone search. The maximum source phrase length for the Verbmobil task and the Xerox task is 12, whereas for the Canadian Hansards task we use a maximum of 4, because of the large corpus size. This explains the rather low coverage on the Canadian Hansards task for both the nonmonotone and the monotone search.

For the Xerox task, the nonmonotone search can produce 75.1% of the sentence pairs, whereas the monotone search can produce 65.3%. The ratio of the two numbers measures how much the system deteriorates by using the monotone search and will be called the degree of monotonicity. For the Xerox task, the degree of monotonicity is 87.0%. This means the monotone search can produce 87.0% of the sentence pairs that can be produced with the nonmonotone search. We see that for the Spanish-English Xerox task and for the French-English Canadian Hansards task, the degree of monotonicity is rather high. For the German-English Verbmobil task it is significantly lower. This may be caused by the rather free word order in German and the long range reorderings that are necessary to translate the verb group.
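In symbols, using the Xerox figures from Table 4 (the nonmonotone coverage is the value implied by the other two numbers):

$$\text{deg. of mon.} = \frac{\text{monotone coverage}}{\text{nonmonotone coverage}} = \frac{65.3}{75.1} \approx 87.0\%$$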

It should be pointed out that in practice the monotone search will perform better than what the preceding estimates indicate. The reason is that we assumed a perfect nonmonotone search, which is difficult to achieve in practice. This is not only a hard search problem, but also a complicated modeling problem. We will see in the next section that the monotone search will perform very well on both the Xerox task and the Canadian Hansards task.

Table 4: Degree of monotonicity in the training corpora for all three tasks (numbers in percent). Only fragments of the table are legible in this copy; for the Xerox task: monotone 65.3 and degree of monotonicity 87.0; the legible nonmonotone entries are 76.3 and 59.7.

7 Translation Results

Single-Word Based Systems (SWB). First, there is a monotone search variant (Mon) that translates each word of the source sentence from left to right. The second variant allows reordering according to the so-called IBM constraints (Berger et al., 1996). Thus, up to three words may be skipped and translated later. This system will be denoted by IBM. The third variant implements special German-English reordering constraints. These constraints are represented by a finite state automaton and optimized to handle the reorderings of the German verb group. The abbreviation for this variant is GE. It is only used for the German-English Verbmobil task. This is just an extremely brief description of these systems. For details, see (Tillmann and Ney, 2003).

Phrase-Based System (PB). For the phrase-based system, we use the following feature functions: a trigram language model, the phrase translation model and the word-based lexicon model. The latter two feature functions are used for both translation directions. Additionally, we use the word and phrase penalty feature functions. The model scaling factors are optimized on the development corpus with respect to mWER, similar to (Och, 2003). We use the Downhill Simplex algorithm from (Press et al., 2002). We do not perform the optimization on n-best lists; instead, we retranslate the whole development corpus for each iteration of the optimization algorithm. This is feasible because this system is extremely fast. It takes only a few seconds to translate the whole development corpus for the Verbmobil task and the Xerox task; for details, see Section 8. In the experiments, the Downhill Simplex algorithm converged after about 200 iterations. This method has the advantage that it is not limited to the model scaling factors, as is the method described in (Och, 2003). It is also possible to optimize any other parameter, e.g. the discounting parameter for the lexicon smoothing.
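For illustration, the tuning loop can be sketched with an off-the-shelf Nelder-Mead (Downhill Simplex) implementation; the two callables stand in for the actual decoder and the mWER scorer:

```python
import numpy as np
from scipy.optimize import minimize

def tune_scaling_factors(initial_lambdas, translate_dev, mwer):
    """Downhill-Simplex tuning as sketched in the text: every objective
    evaluation retranslates the whole development corpus with the
    candidate scaling factors and scores the output with mWER.

    translate_dev(lambdas) -> list of hypothesis translations
    mwer(hypotheses)       -> error rate to be minimized"""
    def objective(lambdas):
        return mwer(translate_dev(lambdas))
    result = minimize(objective, np.asarray(initial_lambdas, dtype=float),
                      method="Nelder-Mead",
                      options={"maxiter": 200, "xatol": 1e-3})
    return result.x
```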

Alignment Template System (AT). The alignment template system (Och et al., 1999) is similar to the system described in this work. One difference is that the alignment templates are not defined at the word level but at a word class level. In addition to the word-based trigram model, the alignment template system uses a class-based five-gram language model. The search algorithm of the alignment templates allows arbitrary reorderings of the templates. It penalizes reorderings with costs that are linear in the jump width. To make the results as comparable as possible, the alignment template system and the phrase-based system start from the same word alignment. The alignment template system uses discriminative training of the model scaling factors as described in (Och and Ney, 2002).

7.3 Verbmobil Task

We start with the Verbmobil results. We studied smoothing the lexicon probabilities as described in Section 3.2. The results are summarized in Table 5. We see that the uniform smoothing method improves translation quality. There is only a minor improvement, but it is consistent among all evaluation criteria. It is statistically significant at the 94% level. The unigram method hurts performance; it degrades the mWER. In the following, all phrase-based systems use the uniform smoothing method.

Table 5: Effect of lexicon smoothing on the translation performance for the German-English Verbmobil task.

system        mPER [%]   NIST
unsmoothed        21.1    7.96
uniform           20.7    7.99
unigram           22.3    7.79

The translation results of the different systems are shown in Table 6. Obviously, the monotone phrase-based system outperforms the monotone single-word based system. The result of the phrase-based system is comparable to the nonmonotone single-word based search with the IBM constraints. With respect to the mPER, the PB system clearly outperforms all single-word based systems. If we compare the monotone phrase-based system with the nonmonotone alignment template system, we see that the mPERs are similar. Thus the lexical choice of words is of the same quality. Regarding the other evaluation criteria, which take the word order into account, the nonmonotone search of the alignment templates has a clear advantage. This was already indicated by the low degree of monotonicity on this task. The rather free word order in German and the long range dependencies of the verb group make reorderings necessary.

Table 6: Translation performance for the German-English Verbmobil task (251 sentences).

system   variant   mPER [%]   NIST
SWB      Mon           29.3    7.07
         IBM           25.0    7.84
         GE            25.3    7.83
PB       Mon           (entries garbled in this copy: 37.0, 47.0)
AT                     20.6    8.57

7.4 Xerox Task

The translation results for the Xerox task are shown in Table 7. Here, we see that both phrase-based systems clearly outperform the single-word based systems. The PB system performs best on this task. Compared to the AT system, the BLEU score improves by 4.1% absolute. The improvement of the PB system with respect to the AT system is statistically significant at the 99% level.

Table 7: Translation performance for the Spanish-English Xerox task (1125 sentences).

system    PER [%]   NIST
SWB IBM      27.6    8.00
PB           (entries garbled in this copy: 26.5, 67.9)
AT           20.1    8.76

7.5 Canadian Hansards Task

The translation results for the Canadian Hansards task are shown in Table 8. As on the Xerox task, the phrase-based systems perform better than the single-word based systems. The monotone phrase-based system yields even better results than the alignment template system. This improvement is consistent among all evaluation criteria and it is statistically significant at the 99% level.

Table 8: Translation performance for the French-English Canadian Hansards task (5432 sentences).

system   variant   PER [%]   NIST
SWB      Mon          53.0    5.96
         IBM          51.3    6.21
PB       Mon          (entries garbled in this copy: 57.8, 27.8)
AT                    49.1    6.71

8 Efficiency

In this section, we analyze the translation speed of the phrase-based translation system. All experiments were carried out on an AMD Athlon with 2.2 GHz. Note that the systems were not optimized for speed. We used the best performing systems to measure the translation times. The translation speed of the monotone phrase-based system for all three tasks is shown in Table 9. For the Xerox task, the translation process takes less than 7 seconds for the whole 10K-word test set. For the Verbmobil task, the system is even slightly faster. It takes about 1.6 seconds to translate the whole test set. For the Canadian Hansards task, the translation process is much slower, but the average time per sentence is still less than 1 second. We think that this slowdown can be attributed to the large training corpus. The system loads phrase pairs into memory only if the source phrase occurs in the test corpus. Therefore, the large test corpus size for this task also affects the translation speed.

In Fig. 1, we see the average translation time per sentence as a function of the sentence length. The translation times were measured for the translation of the 5432 test sentences of the Canadian Hansards task. We see a clear linear dependency. Even for sentences of thirty words, the translation takes only about 1.5 seconds.

Table 9: Translation speed for all tasks on an AMD Athlon 2.2 GHz. Only fragments of the table are legible in this copy: avg. sentence length 13.5 and 1448 words/second (Xerox column); seconds-per-sentence entries 0.006 and 0.794.

9 Conclusions

We described a highly efficient monotone search algorithm. The worst-case complexity of this algorithm is linear in the sentence length. This leads to an impressive translation speed of more than 1000 words per second for the Verbmobil task and for the Xerox task. Even for the Canadian Hansards task, the translation of sentences of length 30 takes only about 1.5 seconds.

The described search is monotone at the phrase level. Within the phrases, there are no constraints on the reorderings. Therefore, this method is best suited for language pairs that have a similar order at the level of the phrases learned by the system. Thus, the translation process should require only local reorderings. As the experiments have shown, Spanish-English and French-English are examples of such language pairs. For these pairs, the monotone search was found to be sufficient. The phrase-based approach clearly outperformed the single-word based systems. It showed even better performance than the alignment template system.

The experiments on the German-English Verbmobil task outlined the limitations of the monotone search. As the low degree of monotonicity indicated, reordering plays an important role on this task. The rather free word order in German as well as the verb group seem to be difficult to translate. Nevertheless, when ignoring the word order and looking at the mPER only, the monotone search is competitive with the best performing system.

For further improvements, we will investigate the usefulness of additional models, e.g. modeling the segmentation probability. Also, slightly relaxing the monotonicity constraint in a way that still allows an efficient search is of high interest. In the spirit of the IBM reordering constraints of the single-word based models, we could allow a phrase to be skipped and translated later.

Acknowledgment

This work has been partially funded by the EU project TransType2, IST-2001-32091.

References

A. L. Berger, P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, J. R. Gillett, A. S. Kehler, and R. L. Mercer. 1996. Language translation apparatus and method of using context-based translation models. United States patent, patent number 5510981, April.

P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79–85, June.

G. Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proc. ARPA Workshop on Human Language Technology.

P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of the Human Language Technology Conf. (HLT-NAACL), pages 127–133, Edmonton, Canada, May/June.

D. Marcu and W. Wong. 2002. A phrase-based, joint probability model for statistical machine translation. In Proc. Conf. on Empirical Methods for Natural Language Processing, pages 133–139, Philadelphia, PA, July.

H. Ney, S. Martin, and F. Wessel. 1997. Statistical language modeling using leaving-one-out. In S. Young and G. Bloothooft, editors, Corpus-Based Methods in Language and Speech Processing, pages 174–207. Kluwer.

F. J. Och and H. Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 295–302, Philadelphia, PA, July.

F. J. Och, C. Tillmann, and H. Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20–28, University of Maryland, College Park, MD, June.

F. J. Och. 2003. Minimum error rate training in statistical machine translation. In Proc. of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), pages 160–167, Sapporo, Japan, July.

K. A. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2001. Bleu: a method for automatic evaluation of machine translation. Technical Report RC22176 (W0109-022), IBM Research Division, Thomas J. Watson Research Center, September.

J. Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, Inc., San Mateo, CA. Revised second printing.

W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. 2002. Numerical Recipes in C++. Cambridge University Press, Cambridge, UK.

C. Tillmann and H. Ney. 2003. Word reordering and a dynamic programming beam search algorithm for statistical machine translation. Computational Linguistics, 29(1):97–133, March.

J. Tomás and F. Casacuberta. 2003. Combining phrase-based and template-based aligned models in statistical translation. In Proc. of the First Iberian Conf. on Pattern Recognition and Image Analysis, pages 1020–1031, Mallorca, Spain, June.

W. Wahlster, editor. 2000. Verbmobil: Foundations of Speech-to-Speech Translation. Springer Verlag, Berlin, Germany, July.

R. Zens, F. J. Och, and H. Ney. 2002. Phrase-based statistical machine translation. In 25th German Conference on Artificial Intelligence (KI 2002), pages 18–32, Aachen, Germany, September. Springer Verlag.
