
Online Large-Margin Training of Dependency Parsers

Ryan McDonald  Koby Crammer  Fernando Pereira
Department of Computer and Information Science

University of Pennsylvania

Philadelphia, PA

{ryantm,crammer,pereira}@cis.upenn.edu

Abstract

We present an effective training algorithm for linearly-scored dependency parsers that implements online large-margin multi-class training (Crammer and Singer, 2003; Crammer et al., 2003) on top of efficient parsing techniques for dependency trees (Eisner, 1996). The trained parsers achieve a competitive dependency accuracy for both English and Czech with no language-specific enhancements.

1 Introduction

Research on training parsers from annotated data has for the most part focused on models and training algorithms for phrase structure parsing. The best phrase-structure parsing models represent generatively the joint probability P(x, y) of sentence x having the structure y (Collins, 1999; Charniak, 2000). Generative parsing models are very convenient because training consists of computing probability estimates from counts of parsing events in the training set. However, generative models make complicated and poorly justified independence assumptions and estimations, so we might expect better performance from discriminatively trained models, as has been shown for other tasks like document classification (Joachims, 2002) and shallow parsing (Sha and Pereira, 2003). Ratnaparkhi's conditional maximum entropy model (Ratnaparkhi, 1999), trained to maximize conditional likelihood P(y|x) of the training data, performed nearly as well as generative models of the same vintage even though it scores parsing decisions in isolation and thus may suffer from the label bias problem (Lafferty et al., 2001).

Discriminatively trained parsers that score entire trees for a given sentence have only recently been investigated (Riezler et al., 2002; Clark and Curran, 2004; Collins and Roark, 2004; Taskar et al., 2004). The most likely reason for this is that discriminative training requires repeatedly reparsing the training corpus with the current model to determine the parameter updates that will improve the training criterion. The reparsing cost is already quite high for simple context-free models with O(n^3) parsing complexity, but it becomes prohibitive for lexicalized grammars with O(n^5) parsing complexity.
Dependency trees are an alternative syntactic representation with a long history (Hudson, 1984). Dependency trees capture important aspects of functional relationships between words and have been shown to be useful in many applications including relation extraction (Culotta and Sorensen, 2004), paraphrase acquisition (Shinyama et al., 2002) and machine translation (Ding and Palmer, 2005). Yet, they can be parsed in O(n^3) time (Eisner, 1996). Therefore, dependency parsing is a potential "sweet spot" that deserves investigation. We focus here on projective dependency trees in which a word is the parent of all of its arguments, and dependencies are non-crossing with respect to word order (see Figure 1). However, there are cases where crossing dependencies may occur, as is the case for Czech (Hajič, 1998). Edges in a dependency tree may be typed (for instance to indicate grammatical function). Though we focus on the simpler non-typed case, all algorithms are easily extendible to typed structures.

Figure 1: An example dependency tree.

The following work on dependency parsing is most relevant to our research. Eisner (1996) gave a generative model with a cubic parsing algorithm based on an edge factorization of trees. Yamada and Matsumoto (2003) trained support vector machines (SVMs) to make parsing decisions in a shift-reduce dependency parser. As in Ratnaparkhi's parser, the classifiers are trained on individual decisions rather than on the overall quality of the parse. Nivre and Scholz (2004) developed a history-based learning model. Their parser uses a hybrid bottom-up/top-down linear-time heuristic parser and the ability to label edges with semantic types. The accuracy of their parser is lower than that of Yamada and Matsumoto (2003).

We present a new approach to training dependency parsers, based on the online large-margin learning algorithms of Crammer and Singer (2003) and Crammer et al. (2003). Unlike the SVM parser of Yamada and Matsumoto (2003) and Ratnaparkhi's parser, our parsers are trained to maximize the accuracy of the overall tree.

Our approach is related to those of Collins and Roark (2004) and Taskar et al. (2004) for phrase structure parsing. Collins and Roark (2004) presented a linear parsing model trained with an averaged perceptron algorithm. However, to use parse features with sufficient history, their parsing algorithm must prune heuristically most of the possible parses. Taskar et al. (2004) formulate the parsing problem in the large-margin structured classification setting (Taskar et al., 2003), but are limited to parsing sentences of 15 words or less due to computation time. Though these approaches represent good first steps towards discriminatively-trained parsers, they have not yet been able to display the benefits of discriminative training that have been seen in named-entity extraction and shallow parsing.

Besides simplicity, our method is efficient and accurate, as we demonstrate experimentally on English and Czech treebank data.

2 System Description

2.1 Definitions and Background

In what follows, the generic sentence is denoted by x (possibly subscripted); the i-th word of x is denoted by x_i. The generic dependency tree is denoted by y. If y is a dependency tree for sentence x, we write (i, j) ∈ y to indicate that there is a directed edge from word x_i to word x_j in the tree, that is, x_i is the parent of x_j. T = {(x_t, y_t)}_{t=1}^T denotes the training data.

We follow the edge-based factorization method of Eisner (1996) and define the score of a dependency tree as the sum of the scores of all edges in the tree,

s(x, y) = Σ_{(i,j)∈y} s(i, j) = Σ_{(i,j)∈y} w · f(i, j)

where f(i, j) is a high-dimensional binary feature representation of the edge from x_i to x_j. For example, in the dependency tree of Figure 1, the following feature would have a value of 1:

f(i, j) = 1 if x_i = 'hit' and x_j = 'ball', 0 otherwise.
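The edge-factored scoring scheme above is easy to sketch in code. This is a minimal illustration, not the paper's implementation: the feature templates, the example sentence, and the weight values below are our own assumptions, and the sparse weight vector is represented as a plain dict.

```python
# Sketch of edge-factored tree scoring with sparse binary features.
# Feature names, the example sentence, and the weights are illustrative
# assumptions, not taken from the paper.

def f(x, i, j):
    """Binary feature representation of the edge x[i] -> x[j], as the
    set of firing feature names (every firing feature has value 1)."""
    return {
        f"p-word={x[i]},c-word={x[j]}",
        f"p-word={x[i]}",
        f"c-word={x[j]}",
    }

def score_edge(w, x, i, j):
    # w . f(i, j): with binary features the dot product is just the
    # sum of the weights of the firing features.
    return sum(w.get(feat, 0.0) for feat in f(x, i, j))

def score_tree(w, x, y):
    # s(x, y) = sum of s(i, j) over edges (i, j) in y
    return sum(score_edge(w, x, i, j) for (i, j) in y)

x = ["*root*", "John", "hit", "the", "ball"]
y = [(0, 2), (2, 1), (2, 4), (4, 3)]          # parent -> child edges
w = {"p-word=hit,c-word=ball": 1.5, "c-word=the": 0.5}
print(score_tree(w, x, y))  # 2.0
```

Only the edge (2, 4) fires the weighted bigram feature and only (4, 3) fires the `c-word=the` feature, so the tree score is 1.5 + 0.5.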

In general, any real-valued feature may be used, but we use binary features for simplicity. The feature weights in the weight vector w are the parameters that will be learned during training. Our training algorithms are iterative. We denote by w(i) the weight vector after the i-th training iteration.

Finally, we define dt(x) as the set of possible dependency trees for the input sentence x and best_k(x; w) as the set of k dependency trees in dt(x) that are given the highest scores by weight vector w, with ties resolved by an arbitrary but fixed rule.

Three basic questions must be answered for models of this form: how to find the dependency tree y with highest score for sentence x; how to learn an appropriate weight vector w from the training data; and finally, what feature representation f(i, j) should be used. The following sections address each of these questions.

2.2 Parsing Algorithm

Given a feature representation for edges and a weight vector w, we seek the dependency tree or trees that maximize the score function, s(x, y). The primary difficulty is that for a given sentence of length n there are exponentially many possible dependency trees. Using a slightly modified version of a lexicalized CKY chart parsing algorithm, it is possible to generate and represent these sentences in a forest that is O(n^5) in size and takes O(n^5) time to create.

Figure 2: O(n^3) algorithm of Eisner (1996); needs to keep 3 indices at any given stage. [Diagram of chart items omitted.]

Eisner (1996) made the observation that if the head of each chart item is on the left or right periphery, then it is possible to parse in O(n^3). The idea is to parse the left and right dependents of a word independently and combine them at a later stage. This removes the need for the additional head indices of the O(n^5) algorithm and requires only two additional binary variables that specify the direction of the item (either gathering left dependents or gathering right dependents) and whether an item is complete (available to gather more dependents). Figure 2 shows the algorithm schematically. As with normal CKY parsing, larger elements are created bottom-up from pairs of smaller elements.
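The bottom-up scheme just described can be sketched directly. The following is a minimal rendering under our own encoding (a dense score matrix, span tables indexed by direction and completeness); it recovers only the best tree score, omits back-pointers for tree recovery, and is not the paper's implementation.

```python
# Sketch of Eisner's O(n^3) algorithm for the best projective tree
# under edge factorization.  Items are spans [s, t] with the head on
# the left (->) or right (<-) periphery, split into "incomplete"
# (the edge was just added) and "complete" (ready to be absorbed).

def eisner(score):
    """score[h][m] = s(h, m) for the edge from head h to modifier m;
    word 0 is an artificial root.  Returns the best tree score."""
    n = len(score)
    NEG = float("-inf")
    # [s][t][d]: d = 1 head at left end (->), d = 0 head at right end (<-)
    comp = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]
    incomp = [[[NEG, NEG] for _ in range(n)] for _ in range(n)]
    for s in range(n):
        comp[s][s][0] = comp[s][s][1] = 0.0
    for k in range(1, n):                      # span length, bottom-up
        for s in range(n - k):
            t = s + k
            # incomplete: join two complete halves and add one edge
            best = max(comp[s][r][1] + comp[r + 1][t][0] for r in range(s, t))
            incomp[s][t][1] = best + score[s][t]   # edge s -> t
            incomp[s][t][0] = best + score[t][s]   # edge t -> s
            # complete: an incomplete item absorbs a complete one
            comp[s][t][1] = max(incomp[s][r][1] + comp[r][t][1]
                                for r in range(s + 1, t + 1))
            comp[s][t][0] = max(comp[s][r][0] + incomp[r][t][0]
                                for r in range(s, t))
    return comp[0][n - 1][1]

# With root 0, the best tree is 0 -> 1 and 1 -> 2, score 1 + 5 = 6.
print(eisner([[0, 1, 1], [0, 0, 5], [0, 0, 0]]))  # 6.0
```

The two nested span loops and the inner split-point maxima give the cubic runtime; only two binary flags per span replace the head index of the O(n^5) forest.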

Eisner showed that his algorithm is sufficient for both searching the space of dependency parses and, with slight modification, finding the highest scoring tree y for a given sentence x under the edge factorization assumption. Eisner and Satta (1999) give a cubic algorithm for lexicalized phrase structures. However, it only works for a limited class of languages in which tree spines are regular. Furthermore, there is a large grammar constant, which is typically in the thousands for treebank parsers.

2.3 Online Learning

Figure 3 gives pseudo-code for the generic online learning setting. A single training instance is considered on each iteration, and parameters updated by applying an algorithm-specific update rule to the instance under consideration. The algorithm in Figure 3 returns an averaged weight vector: an auxiliary weight vector v is maintained that accumulates the values of w after each iteration, and the returned weight vector is the average of all the weight vectors throughout training. Averaging has been shown to help reduce overfitting (Collins, 2002).

Training data: T = {(x_t, y_t)}_{t=1}^T

1. w(0) = 0; v = 0; i = 0
2. for n : 1..N
3.   for t : 1..T
4.     w(i+1) = update w(i) according to instance (x_t, y_t)
5.     v = v + w(i+1)
6.     i = i + 1
7. w = v / (N × T)

Figure 3: Generic online learning algorithm.
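The generic loop of Figure 3 is a few lines of Python. This is a hedged sketch: the sparse-dict weight representation is our own, and the perceptron update shown is an illustrative stand-in for the algorithm-specific rule of line 4 (the paper plugs in MIRA instead).

```python
# A minimal rendering of Figure 3's generic online learner with
# weight averaging.  The update rule is pluggable.

def train_online(data, update, N):
    """data: list of (x, y); update(w, x, y) -> new sparse weight dict;
    N: number of passes.  Returns the averaged weights w = v / (N * T)."""
    w, v = {}, {}
    for _ in range(N):                     # line 2: for n : 1..N
        for x, y in data:                  # line 3: for t : 1..T
            w = update(w, x, y)            # line 4: algorithm-specific
            for feat, val in w.items():
                v[feat] = v.get(feat, 0.0) + val   # line 5: v = v + w(i+1)
    T = len(data)
    return {feat: val / (N * T) for feat, val in v.items()}

def perceptron_update(w, x, y):
    """Illustrative update: x is a sparse feature dict, y in {-1, +1}."""
    if y * sum(w.get(feat, 0.0) * val for feat, val in x.items()) <= 0:
        w = dict(w)
        for feat, val in x.items():
            w[feat] = w.get(feat, 0.0) + y * val
    return w

data = [({"a": 1.0}, 1), ({"b": 1.0}, -1)]
w_avg = train_online(data, perceptron_update, N=3)
```

On this toy data the averaged weights separate the two instances; swapping `perceptron_update` for a MIRA-style update changes only line 4's rule.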

2.3.1 MIRA

Crammer and Singer (2001) developed a natural method for large-margin multi-class classification, which was later extended by Taskar et al. (2003) to structured classification:

min ||w||
s.t. s(x, y) - s(x, y') ≥ L(y, y')
  ∀(x, y) ∈ T, y' ∈ dt(x)

where L(y, y') is a real-valued loss for the tree y' relative to the correct tree y. We define the loss of a dependency tree as the number of words that have the incorrect parent. Thus, the largest loss a dependency tree can have is the length of the sentence.

Informally, this update looks to create a margin between the correct dependency tree and each incorrect dependency tree at least as large as the loss of the incorrect tree. The more errors a tree has, the farther away its score will be from the score of the correct tree. In order to avoid a blow-up in the norm of the weight vector we minimize it subject to constraints that enforce the desired margin between the correct and incorrect trees.
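The loss just defined (the number of words with the incorrect parent) takes only a few lines; the {child: parent} dict encoding of a tree below is our own assumption, not the paper's.

```python
# Hamming-style tree loss L(y, y'): the number of words whose predicted
# parent differs from the gold parent.  Trees are {child: parent} dicts
# (an assumed encoding).

def tree_loss(gold, pred):
    return sum(1 for child, parent in gold.items()
               if pred.get(child) != parent)

gold = {1: 2, 2: 0, 3: 4, 4: 2}
pred = {1: 2, 2: 0, 3: 2, 4: 2}   # word 3 attached to the wrong parent
print(tree_loss(gold, pred))  # 1
```

The maximum value of this loss is the sentence length, attained when every word gets the wrong parent.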

The Margin Infused Relaxed Algorithm (MIRA) (Crammer and Singer, 2003; Crammer et al., 2003) employs this optimization directly within the online framework. On each update, MIRA attempts to keep the norm of the change to the parameter vector as small as possible, subject to correctly classifying the instance under consideration with a margin at least as large as the loss of the incorrect classifications. This can be formalized by substituting the following update into line 4 of the generic online algorithm,

min ||w(i+1) - w(i)||
s.t. s(x_t, y_t) - s(x_t, y') ≥ L(y_t, y')
  ∀y' ∈ dt(x_t)    (1)

This is a standard quadratic programming problem that can be easily solved using Hildreth's algorithm (Censor and Zenios, 1997). Crammer and Singer (2003) and Crammer et al. (2003) provide an analysis of both the online generalization error and convergence properties of MIRA. In equation (1), s(x, y) is calculated with respect to the weight vector after optimization, w(i+1).

To apply MIRA to dependency parsing, we can simply see parsing as a multi-class classification problem in which each dependency tree is one of many possible classes for a sentence. However, that interpretation fails computationally because a general sentence has exponentially many possible dependency trees and thus exponentially many margin constraints.

To circumvent this problem we make the assumption that the constraints that matter for large margin optimization are those involving the incorrect trees y' with the highest scores s(x, y'). The resulting optimization made by MIRA (see Figure 3, line 4) would then be:

min ||w(i+1) - w(i)||
s.t. s(x_t, y_t) - s(x_t, y') ≥ L(y_t, y')
  ∀y' ∈ best_k(x_t; w(i))

reducing the number of constraints to the constant k. We tested various values of k on a development data set and found that small values of k are sufficient to achieve close to best performance, justifying our assumption. In fact, as k grew we began to observe a slight degradation of performance, indicating some overfitting to the training data. All the experiments presented here use k = 5. The Eisner (1996) algorithm can be modified to find the k-best trees while only adding an additional O(k log k) factor to the runtime (Huang and Chiang, 2005).
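The resulting k-constraint quadratic program can be solved with a Hildreth-style cyclic coordinate ascent on the dual, as the paper notes. The following is a hedged sketch under our own representation: feature vectors are sparse dicts, `kbest` pairs each candidate's features with its loss, and the helper names are assumptions, not from the paper.

```python
# Sketch of the MIRA update over the k-best constraint set, solved by
# cyclic (Hildreth-style) coordinate ascent on the dual variables.

def dot(u, v):
    return sum(val * v.get(key, 0.0) for key, val in u.items())

def mira_update(w, gold_feats, kbest, iters=100):
    """w: weight dict; gold_feats: f(x_t, y_t); kbest: list of
    (f(x_t, y'), L(y_t, y')) for the k highest-scoring trees.
    Returns the updated weight vector w(i+1)."""
    # constraint r:  delta_w . a_r >= b_r, where
    # a_r = f(y_t) - f(y'_r) and b_r = L(y_t, y'_r) - w . a_r
    a = [{key: gold_feats.get(key, 0.0) - fy.get(key, 0.0)
          for key in set(gold_feats) | set(fy)} for fy, _ in kbest]
    b = [loss - dot(w, a_r) for a_r, (_, loss) in zip(a, kbest)]
    alpha = [0.0] * len(a)      # dual variables, one per constraint
    delta = {}                  # delta_w = sum_r alpha_r * a_r
    for _ in range(iters):
        for r, a_r in enumerate(a):
            norm = dot(a_r, a_r)
            if norm == 0.0:
                continue
            # project the r-th dual variable onto [0, inf)
            step = max(0.0, alpha[r] + (b[r] - dot(delta, a_r)) / norm) - alpha[r]
            alpha[r] += step
            for key, val in a_r.items():
                delta[key] = delta.get(key, 0.0) + step * val
    return {key: w.get(key, 0.0) + delta.get(key, 0.0)
            for key in set(w) | set(delta)}

# Single-constraint example: margin of 1 is enforced with minimal ||delta_w||.
w_new = mira_update({}, {"a": 1.0}, [({"b": 1.0}, 1.0)])
print(w_new)  # {'a': 0.5, 'b': -0.5} up to dict ordering
```

With one constraint this recovers the closed-form passive-aggressive step; with k constraints the cyclic updates converge to the QP solution because each pass is a projection onto one constraint's feasible half-space.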

A more common approach is to factor the structure of the output space to yield a polynomial set of local constraints (Taskar et al., 2003; Taskar et al., 2004). One such factorization for dependency trees is

min ||w(i+1) - w(i)||
s.t. s(l, j) - s(k, j) ≥ 1
  ∀(l, j) ∈ y_t, (k, j) ∉ y_t

It is trivial to show that if these O(n^2) constraints are satisfied, then so are those in (1). We implemented this model, but found that the required training time was much larger than the k-best formulation and typically did not improve performance. Furthermore, the k-best formulation is more flexible with respect to the loss function since it does not assume the loss function can be factored into a sum of terms for each dependency.

2.4 Feature Set

Finally, we need a suitable feature representation f(i, j) for each dependency. The basic features in our model are outlined in Table 1a and b. All features are conjoined with the direction of attachment as well as the distance between the two words being attached. These features represent a system of back-off from very specific features over words and part-of-speech tags to less sparse features over just part-of-speech tags. These features are added for both the entire words as well as the 5-gram prefix if the word is longer than 5 characters.

Using just features over the parent-child node pairs in the tree was not enough for high accuracy, because all attachment decisions were made outside of the context in which the words occurred. To solve this problem, we added two other types of features, which can be seen in Table 1c. Features of the first type look at words that occur between a child and its parent. These features take the form of a POS trigram: the POS of the parent, of the child, and of a word in between, for all words linearly between the parent and the child. This feature was particularly helpful for nouns identifying their parent, since

Table 1: Features used by the system (p = parent, c = child, b = word in between).

a) Basic Unigram Features:
  p-word
  c-word, c-pos
  c-pos

b) Basic Bigram Features:
  p-pos, c-word, c-pos
  p-word, p-pos, c-pos
  p-word, c-word

c) In Between POS Features:
  p-pos, b-pos, c-pos
Surrounding Word POS Features:
  p-pos, p-pos+1, c-pos-1, c-pos
  p-pos, p-pos+1, c-pos, c-pos+1
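The templates in Table 1 can be sketched as a feature-extraction function. This is an illustrative simplification under our own naming conventions: the template inventory below is a subset, the direction/distance encoding and the string format are assumptions, not the paper's exact implementation.

```python
# Sketch of edge feature extraction: unigram/bigram word-and-POS
# templates, 5-gram prefix back-off, and in-between POS trigrams,
# all conjoined with attachment direction and distance.

def edge_features(words, tags, i, j):
    """Feature strings for the edge from parent words[i] to child words[j]."""
    d = ("R" if i < j else "L") + ":" + str(abs(i - j))  # direction, distance
    base = [
        f"p-word={words[i]}",
        f"c-word={words[j]},c-pos={tags[j]}",
        f"c-pos={tags[j]}",
        f"p-pos={tags[i]},c-word={words[j]},c-pos={tags[j]}",
        f"p-word={words[i]},p-pos={tags[i]},c-pos={tags[j]}",
        f"p-word={words[i]},c-word={words[j]}",
    ]
    # 5-gram prefix back-off for words longer than 5 characters
    for word, role in ((words[i], "p"), (words[j], "c")):
        if len(word) > 5:
            base.append(f"{role}-prefix={word[:5]}")
    # POS trigram for every word strictly between parent and child
    lo, hi = min(i, j), max(i, j)
    for b in range(lo + 1, hi):
        base.append(f"p-pos={tags[i]},b-pos={tags[b]},c-pos={tags[j]}")
    # conjoin every feature with direction and distance
    return [feat + "/" + d for feat in base]

words = ["*root*", "John", "hit", "the", "ball"]
tags = ["ROOT", "N", "V", "D", "N"]
feats = edge_features(words, tags, 2, 4)   # edge hit -> ball
```

For the edge from "hit" to "ball", one in-between trigram fires (over "the"), and every feature carries the `/R:2` direction-and-distance suffix.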

Czech
                 Accuracy  Root  Complete
Y&M2003             -       -      -
N&S2004             -       -      -
Avg. Perceptron    82.9    88.0   30.3
MIRA               83.3    88.6   31.3

Table 2: Dependency parsing results for English and Czech. Accuracy is the number of words that correctly identified their parent in the tree. Root is the number of trees in which the root word was correctly identified. For Czech this is F-measure since a sentence may have multiple roots. Complete is the number of sentences for which the entire dependency tree was correct.

not exploit phrase structure. We ensured that the gold-standard dependencies of all systems compared were identical.

Table 2 shows that the model described here performs as well or better than previous comparable systems, including that of Yamada and Matsumoto (2003). Their method has the potential advantage that SVM batch training takes into account all of the constraints from all training instances in the optimization, whereas online training only considers constraints from one instance at a time. However, they are fundamentally limited by their approximate search algorithm. In contrast, our system searches the entire space of dependency trees and most likely benefits greatly from this. This difference is amplified when looking at the percentage of trees that correctly identify the root word. The models that search the entire space will not suffer from bad approximations made early in the search and thus are more likely to identify the correct root, whereas the approximate algorithms are prone to error propagation, which culminates with attachment decisions at the top of the tree. When comparing the two online learning models, it can be seen that MIRA outperforms the averaged perceptron method. This difference is statistically significant, p < 0.005 (McNemar test on head selection accuracy).
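The McNemar test mentioned above can be sketched as follows. The encoding of per-word head decisions as parallel lists is our own assumption, and this exact-binomial form is one standard variant of the test (an asymptotic chi-square form is also common); it is not necessarily the exact procedure used in the paper.

```python
# Sketch of an exact McNemar test on per-word head-selection decisions,
# comparing two parsers' predicted heads against the gold heads.
from math import comb

def mcnemar_exact_p(gold, pred_a, pred_b):
    """Two-sided exact McNemar p-value over discordant decisions."""
    # b: A right, B wrong; c: A wrong, B right (only discordant pairs matter)
    b = sum(1 for g, a, p in zip(gold, pred_a, pred_b) if a == g and p != g)
    c = sum(1 for g, a, p in zip(gold, pred_a, pred_b) if a != g and p == g)
    n, k = b + c, min(b, c)
    if n == 0:
        return 1.0
    # two-sided binomial tail under the null hypothesis p = 1/2
    tail = sum(comb(n, i) for i in range(0, k + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# toy check: parser A correct on all 10 heads, parser B wrong on all 10
p = mcnemar_exact_p([1] * 10, [1] * 10, [0] * 10)
print(p)  # 0.001953125
```

Only the words on which the two parsers disagree with each other contribute, which is why the test is appropriate for paired per-word accuracy comparisons.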

In our Czech experiments, we used the dependency trees annotated in the Prague Treebank, and the predefined training, development and evaluation sections of this data. The number of sentences in this data set is nearly twice that of the English treebank, leading to a very large number of features (13,450,672). But again, each instance uses just a handful of these features. For POS tags we used the automatically generated tags in the data set. Though we made no language-specific model changes, we did need to make some data-specific changes. In particular, we used the method of Collins et al. (1999) to simplify part-of-speech tags since the rich tags used by Czech would have led to a large but rarely seen set of POS features.

The model based on MIRA also performs well on Czech, again slightly outperforming averaged perceptron. Unfortunately, we do not know of any other parsing systems tested on the same data set. The Czech parser of Collins et al. (1999) was run on a different data set and most other dependency parsers are evaluated using English. Learning a model from the Czech training data is somewhat problematic since it contains some crossing dependencies which cannot be parsed by the Eisner algorithm. One trick is to rearrange the words in the training set so that all trees are nested. This at least allows the training algorithm to obtain reasonably low error on the training set. We found that this did improve performance slightly, to 83.6% accuracy.

3.1 Lexicalized Phrase Structure Parsers

It is well known that dependency trees extracted from lexicalized phrase structure parsers (Collins, 1999; Charniak, 2000) typically are more accurate than those produced by pure dependency parsers (Yamada and Matsumoto, 2003). We compared our system to the Bikel re-implementation of the Collins parser (Bikel, 2004; Collins, 1999) trained with the same head rules as our system. There are two ways to extract dependencies from lexicalized phrase structure. The first is to use the automatically generated dependencies that are explicit in the lexicalization of the trees; we call this system Collins-auto. The second is to take just the phrase structure output of the parser and run the automatic head rules over it to extract the dependencies; we call this system Collins-head.

              Accuracy  Root  Complete  Complexity  Time
Collins-auto    91.4    95.1    42.6     O(n^5)     98m 21s
MIRA-Normal     92.2    95.8    42.9     O(n^5)     105m 08s

k           1      2      5      10     20
Accuracy  90.73  90.82  90.88  90.92  90.91

we are looking at model extensions to allow non-projective dependencies, which occur in languages such as Czech, German and Dutch.

Acknowledgments: We thank Jan Hajič for answering queries on the Prague treebank, and Joakim Nivre for providing the Yamada and Matsumoto (2003) head rules for English that allowed for a direct comparison with our systems. This work was supported by NSF ITR grants 0205456, 0205448, and 0428193.

References

D. M. Bikel. 2004. Intricacies of Collins parsing model. Computational Linguistics.

Y. Censor and S. A. Zenios. 1997. Parallel Optimization: Theory, Algorithms, and Applications. Oxford University Press.

E. Charniak. 2000. A maximum-entropy-inspired parser. In Proc. NAACL.

S. Clark and J. R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In Proc. ACL.

M. Collins and B. Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. ACL.

M. Collins, J. Hajič, L. Ramshaw, and C. Tillmann. 1999. A statistical parser for Czech. In Proc. ACL.

M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.

M. Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proc. EMNLP.

K. Crammer and Y. Singer. 2001. On the algorithmic implementation of multiclass kernel-based vector machines. JMLR.

K. Crammer and Y. Singer. 2003. Ultraconservative online algorithms for multiclass problems. JMLR.

K. Crammer, O. Dekel, S. Shalev-Shwartz, and Y. Singer. 2003. Online passive-aggressive algorithms. In Proc. NIPS.

A. Culotta and J. Sorensen. 2004. Dependency tree kernels for relation extraction. In Proc. ACL.

Y. Ding and M. Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. In Proc. ACL.

J. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and head-automaton grammars. In Proc. ACL.

J. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proc. COLING.

J. Hajič. 1998. Building a syntactically annotated corpus: The Prague dependency treebank. Issues of Valency and Meaning.

L. Huang and D. Chiang. 2005. Better k-best parsing. Technical Report MS-CIS-05-08, University of Pennsylvania.

R. Hudson. 1984. Word Grammar. Blackwell.

T. Joachims. 2002. Learning to Classify Text using Support Vector Machines. Kluwer.

J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML.

M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn treebank. Computational Linguistics.

J. Nivre and M. Scholz. 2004. Deterministic dependency parsing of English text. In Proc. COLING.

A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proc. EMNLP.

A. Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. Machine Learning.

S. Riezler, T. King, R. Kaplan, R. Crouch, J. Maxwell, and M. Johnson. 2002. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques. In Proc. ACL.

F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proc. HLT-NAACL.

Y. Shinyama, S. Sekine, K. Sudo, and R. Grishman. 2002. Automatic paraphrase acquisition from news articles. In Proc. HLT.

B. Taskar, C. Guestrin, and D. Koller. 2003. Max-margin Markov networks. In Proc. NIPS.

B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. 2004. Max-margin parsing. In Proc. EMNLP.

英语作文网上购物 Online shopping

Online Shopping Alone with the advance of society more and more people tend to online shopping instead of going outside to shop.There is a certainty that online shopping will be gradually a common lifestyle as time rolls by.Surely ,just as every coin has two sides,online shopping comes with strengths and weaknesses. On one side,when customers are longing to get one thing which is hard to be discovered in reality but easy to be searched on the Internet only with a click,they may choose online shopping without hesitating.On the other side,for retailers,online shopping make their products to be known by some people lived in the remote places.In a word,to a large extent,online shopping has become a vital convenience in our daily life. However,we cannot ignore the weakness of online shopping.As you see,nowadays a lot of people are tempted by the fascinating appearance of products,which bring about damned unsatisfactory.Additionally,no one could insure a completely secure process of transport.And usually,conflict between retailers and customers occurs when products are damaged .What’s more,it’s highly risky to trade on the Internet especially when one’s computer is invaded by virus. From my point of view,online shopping should be accepted appropriately by people.For one thing,it is the product of social development,and as more and more people purchase in this way,those

网上购物Online Shopping Experience Sharing

Online Shopping Experience Sharing By Koay & Ling Want to go out shopping but afraid of traffic jam on the road? You can always go for a second choice, which is to sit at home and wait for delivery Consumers can buy the products or services they like from any parts of the world. I would like to share one of my online shopping experiences. One of my experiences that I would love to share is online purchasing of air ticket from Air https://www.doczj.com/doc/7d12803977.html,. Previously, when I was still studying at University Malaysia Sabah, I was very new to the environment and without experience; I would go to the counter at the airport in order to purchase the air ticket. One glory morning, I saw an advertisement in newspaper showing that Air Asia launched a new promotion. An air ticket was only RM9.99!! I was so excited and rushed to the airport to get the promotion ticket. But the Air Asia staff told me the promotion was only valid for online purchasing and if I buy from counter in airport, the price will be RM59.99. So I hurried bac k and got the “lucky” ticket. So by doing online shopping, we can get a lot of benefits such as convenience, time saving, and more variety of choices and so on. And the important thing is the price for online purchasing is cheaper than traditional shopping. Thus, we should do more online shopping! 【写作内容】 1.以约30个词概括短文的内容要点; 2.然后以约120个词就“网上购物”的主题发表看法,并包括如下要点: (1)“网上购物”的现状; (2)“网上购物”的利弊; (3) 就“网上购物”发表你自己的观点。 【写作要求】 (1)可以使用实例或其它论述方法支持你的论点,也可以参照阅读材料的内容,但不得直接引用原文中的句子; (2)标题自定。 【参考范文】 Online Shopping The passage tells us one of the experiences of Internet shopping by the author, who benefited from online shopping and got the much cheaper ticket compared to traditional shopping. With the Internet entering our daily life, online shopping is commonly used throughout the world nowadays. There are many advantages in online shopping. Internet shopping offers a number of benefits for the shoppers. The most important advantage is convenience. 
You can shop when you like as the online shop are open 24 hours a day and you do not have to queue with other shoppers. Secondly, it is easy to find what you are looking for online. However, every coin has two sides, so does Internet shopping. The main disadvantage of Internet shopping is that you cannot actually see the products you are buying or check their quality. Some people are also worried about paying for goods using credit cards. As far as I am concerned, Internet shopping has advantages over disadvantages. It is very convenient, time saving and user friendly and online shopping is much cheaper and more secured than before.

Online shopping 四人英语对话

A:Hi,girls . What are you doing this afternoon? B:I am going to buy some short-sleeved T- shirts in ZhongJie with C. It's getting hotter and hotter. C:Year,I also plan to buy a pair of sandals . Will you join us? D:I am afraid I can not go with you.I have class this afernoon.By the way,Why don’t you shop on the Inernet? A:I also want to ask this question.You know I almost buy everything on the Internet. I think It is very convenient and the price is even lower. B:You are about half right.I think it will be very difficult when you want to return goods.I once had that experience. It is an unpleasant experience to me. C:You got a point.And I only buy daily supplies online.I think it hardly to judge if the clothes or trousers fit me.Returning goods is a troublesome thing. D:I have not experience like that. Most online stores have pretty returning goods service,moreover,you can pay for it after you get the goods in some stores,like Fanke,dangdang. A:Yes.And besides that way, you can also buy goods from well-known and trustworthy companies, it’s prrety safe.These companies often have good returning service. B:Sounds great.I just want to buy a pair of earphones .I will have a try in this way this time.I hope it won’t let me down. Can you give me a hand,A? A:Of course. C:But I still have a question.How can you buy the right clothes from the Internet. D:You can first try on the clothes in physical store.And then buy it online.By this way not only can you buy the right one,but also save money. C:Wonderful.I will try it when I back. Thanks for your advice.

作文OnlineShopping

On li ne Shopp ing (范文一) With the development of the Internet and the popularization (普及)of computer , shopping on li ne has become a com mon place (普通的)in our life. Chin ese people are buying thi ngs on li ne and the shipping companies(船务公司)are busy to no end. Online shopping is very convenient. All you have to do is sit at home and wait for your package to come in stead of hovering (徘徊)outside in the cold wind. We can buy almost everyth ing without going out. However, enjoyment of retail shopping lost. Many people enjoy shopping with others and it is ofte n a good way to make social connections. Whe n shopp ing on li ne, the enjo yme nt is lost. Besides, you may lose your private information during online shopping. Although having access to a very large number of products is highly desirable, consumers have limited cognitive resources and may simply be un able to process the pote ntially vast amounts of in formatio n about these alter natives It seems that the on li ne bus in ess contains many commercial opport un ities . Many people want to seek their fortune on it. However, I think this kind of bus in ess now is close to saturati on. There are thousands of shops online, it ' asproblem how to make your shop outstanding and attractive among the “ shop sea". (范文二) Nowadays with the ever rapid development and increasing popularity of the information tech no logy, shopp ing on the internet has bee n a fashi on especially among the youn gsters. Online shopping has made our daily life more convenient and comfortable. For example, shopp ing on the internet can save stude nts a great deal of time on the way betwee n home and store, so they would be able to concentrate more time and energy on their academic work. The internet has shorte n the dista nee betwee n manu facturers and con sumers and thus we can eve n buy goods in other coun tries .On the other han d, lack of the face to face deal makes on li ne shopp ing less reliable and trustworthy. 
What ' s more the delivery will in crease the risk of items ' damage. In my opinion, shopping on the internet is a irreversible trend. More and more people will be accustomed to it. It will be much more popular in the near future. And at the same time we should take some measures to make it perfect. (范文三) Several decades ago, it should be a marvelous won der to purchase our favorite gifts only by click ing the mouse and the n just wait ing for the door knock by a smili ng expressive delivery courier with the exact package you ordered. While, today, it is no long a rare case. Combined with the fast food, the digital com muni cati on ,on li ne shopp ing has bee n a com mon part of our life Admittedly, on line shopp ing offers magical convenien ce. For example, it saves time and offers numerous choices since floods of information can be supplied on Internet. While, every coin has tow sides and online shopping is no different. Have you still remembered the annoying time when you found the commodities you buy on Internet was not the slightest微博) as what you had

英语教案 shopping online

UNIT8 一对话练习重点句子 1Why not tell me earlier? 为什么不早点儿告诉我? 2Sorry to tell you so late! 很抱歉这么晚告诉你。 3We will be having a party at 8 o’clock tomorrow evening. 明晚八点我们将会举办晚会。 4We’ll be waiting for you! 我们等你来哦! 二短文翻译 The Chinese Will Be Enjoying Shopping Online More and more Chinese people are going online to buy clothing and other goods. China will overtake the United States and become the biggest market of eBay someday. EBay is now the world’s largest online auction site.It is based in the United States, and has 94.9 million registered users globally. “Ten to fifteen years from now, I think China has potential and we want to do everything we can to keep our No.1 position.” EBay has 220 employees in Shanghai and it is planning to expand its staff in China over the next few years. Words Expanding online 在线的,联网的 overtake赶上,追上 market 市场 auction拍卖 Chief Executive Officer=CEO potential潜力 employee雇员,员工 expand扩张,增加 三将来进行时 将来进行时(The future continuous tense)表示要在将来某一时间开始,并继续下去的动作。一般用延续性动词表示。常用来表示礼貌的询问、请求或期待等。 shall/will be doing +确切的将来时间 例如:We will be having a party at 8 o’clock tomorrow evening. 我们明天晚上八点钟有个聚会。 I shall be working at the office this time next week. 下星期这个时候我在上班。

English Essay on Online Shopping: Online Shopping

Online Shopping

Nowadays, can we find a person who has not experienced online shopping? Definitely not. Online shopping has come into fashion in most cities due to the rapid development of internet technology.

Online shopping is welcomed by most people for various reasons. From the perspective of consumers, it saves time for people who do not have much spare time: with just a click of the mouse, they can get whatever they want while staying at home. For retailers, it cuts costs for those who do not have much circulating capital, since, compared with the traditional trade mode, they do not have to rent a shop or spend money on employees.

However, there are still some defects in online shopping. First, the lack of face-to-face dealing makes online shopping less reliable and trustworthy. Second, people lose the fun of bargaining.

It is undeniable that shopping on the internet has become an irresistible trend in modern society. It is of great urgency that we regulate the relevant laws in accordance with the rapid growth of online shopping. Only in this way can we enjoy the pleasure and convenience of online shopping without the concern of being cheated.

Essay: Online Shopping (eight sample essays plus one template)

Online Shopping — compiled by Studio 203, Building 3, NUAA

(Sample Essay 1)
With the development of the Internet and the popularization of the computer, shopping online has become commonplace in our life. Chinese people are buying things online, and the shipping companies are busy to no end.

Online shopping is very convenient. All you have to do is sit at home and wait for your package to arrive, instead of hovering outside in the cold wind. We can buy almost everything without going out.

However, the enjoyment of retail shopping is lost. Many people enjoy shopping with others, and it is often a good way to make social connections; when shopping online, that enjoyment is gone. Besides, you may lose your private information during online shopping. Although having access to a very large number of products is highly desirable, consumers have limited cognitive resources and may simply be unable to process the potentially vast amount of information about these alternatives.

It seems that online business contains many commercial opportunities, and many people want to seek their fortune in it. However, I think this kind of business is now close to saturation. There are thousands of shops online, and it is a problem how to make your shop outstanding and attractive in the "sea of shops".

(Sample Essay 2)
Nowadays, with the ever more rapid development and increasing popularity of information technology, shopping on the internet has become a fashion, especially among youngsters.

Online shopping has made our daily life more convenient and comfortable. For example, shopping on the internet can save students a great deal of time on the way between home and store, so they are able to concentrate more time and energy on their academic work. The internet has shortened the distance between manufacturers and consumers, and thus we can even buy goods in other countries. On the other hand, the lack of face-to-face dealing makes online shopping less reliable and trustworthy. What's more, delivery increases the risk of damage to items.

In my opinion, shopping on the internet is an irreversible trend. More and more people will become accustomed to it, and it will be much more popular in the near future. At the same time, we should take some measures to make it perfect.

(Sample Essay 3)
Several decades ago, it would have been a marvelous wonder to purchase our favorite gifts merely by clicking the mouse and then just waiting for the knock on the door by a smiling, expressive delivery courier with the exact package you ordered. Today, however, this is no longer a rare case. Along with fast food and digital communication, online shopping has become a common part of our life.

Admittedly, online shopping offers magical convenience. For example, it saves time and offers numerous choices, since floods of information can be supplied on the Internet. Yet every coin has two sides, and online shopping is no different. Do you still remember the annoying time when you found that the commodities you bought on the Internet were not in the slightest like what you had expected?

English Essay: Online Shopping

With the development of the Internet, online shopping is gaining growing popularity among young people. Online shopping has now become a kind of fashion.

Online shopping benefits us a lot. People can buy almost all daily products online without going out: as long as you look at the screen and click the mouse, the things you buy will be delivered to your home within a few days. What's more, the price of an online commodity is usually lower than the market price.

However, there are some risks in online shopping. Some online sellers show you the real thing but sell you a fake, obtaining money by deception. Their bad behavior will inevitably have an unfavorable influence on online shopping.

As far as I'm concerned, we should buy what we need from trusted sellers, and ask our friends to buy from people they trust. In addition, if we are not sure who the seller is, we should be careful with his words and not believe his sweet talk.


Online Shopping: Five English Essays

Online Shopping

1.
With the development of the Internet and the popularization of the computer, shopping online has become normal and even necessary in our life. As many of us may have witnessed, more and more individuals shop on the Internet, and it has become a fashion especially among youngsters.

Undoubtedly, online shopping has made our everyday life more convenient and comfortable. Firstly, the range of commodities can dazzle you: online products run from small items such as garments and books to bulk merchandise, for instance, computers. Therefore, individuals can buy almost everything on the Internet. Beyond that, with online tools that facilitate product comparison, consumers can compare prices and features to make a better decision and get the goods at a lower price.

However, its potential dangers to consumers cannot be ignored. First, the Internet is visual: we can only see the goods rather than touch them, which may give us false impressions and lead to wrong decisions. Moreover, if consumers do not pay attention to the credibility of online stores, they might get cheated. Consumers may also lose their private information during online shopping.

In my opinion, shopping on the internet is an irreversible tide, and more and more consumers will become accustomed to it; it will be much more popular in the near future. So it is urgent to regulate the relevant laws in step with the rapid growth of online shopping. Only in this way can we enjoy the pleasure and convenience of online shopping without the concern of being cheated.

2.
With the development of technology, online shopping has become popular in people's daily life. As many of us may have witnessed, more and more people are buying products through the Internet. To facilitate online purchases, stores provide pictures of their products together with other illustrations on their websites. Besides, online products range from small items such as clothes and books to large ones like computers.

Undoubtedly, online shopping has brought us many conveniences. Yet its potential dangers to the consumer cannot be ignored. One possible risk is that consumers might not get the exact product they want, because the information provided might not be sufficient or reliable. For a simple example, a consumer who wants to buy a pair of shoes might end up getting
