How Reliable are Annotations via Crowdsourcing?

A Study about Inter-annotator Agreement for Multi-label Image Annotation

Stefanie Nowak

Fraunhofer IDMT

Ehrenbergstr. 31

98693 Ilmenau, Germany
stefanie.nowak@idmt.fraunhofer.de

Stefan Rüger

Knowledge Media Institute

The Open University

Walton Hall, Milton Keynes, MK7 6AA, UK
s.rueger@open.ac.uk

ABSTRACT

The creation of golden standard datasets is a costly business. Optimally, more than one judgment per document is obtained to ensure a high annotation quality. In this context, we explore how much annotations from experts differ from each other, how different sets of annotations influence the ranking of systems and whether these annotations can be obtained with a crowdsourcing approach. The study is applied to the annotation of images with multiple concepts. A subset of the images employed in the latest ImageCLEF Photo Annotation competition was manually annotated by expert annotators and by non-experts with Mechanical Turk. The inter-annotator agreement is computed at an image-based and a concept-based level using majority vote, accuracy and kappa statistics. Further, the Kendall τ and Kolmogorov-Smirnov correlation tests are used to compare the ranking of systems regarding different ground-truths and different evaluation measures in a benchmark scenario. Results show that while the agreement between experts and non-experts varies depending on the measure used, its influence on the ranked lists of the systems is rather small. To sum up, the majority vote applied to generate one annotation set out of several opinions is able to filter noisy judgments of non-experts to some extent. The resulting annotation set is of comparable quality to the annotations of experts.

Categories and Subject Descriptors

D.2.8 [Software Engineering]: Metrics—complexity measures, performance measures; H.3.4 [Information Storage and Retrieval]: Systems and Software—Performance evaluation (efficiency and effectiveness)

General Terms

Experimentation, Human Factors, Measurement, Performance

Keywords

Inter-annotator Agreement, Crowdsourcing

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.

MIR'10, March 29–31, 2010, Philadelphia, Pennsylvania, USA.
Copyright 2010 ACM 978-1-60558-815-5/10/03 ...$10.00.

1. INTRODUCTION

In information retrieval and machine learning, golden standard databases play a crucial role: they make it possible to compare the effectiveness and quality of systems. Depending on the application area, creating large, semantically annotated corpora from scratch is a time- and cost-consuming activity. Usually experts review the data and perform manual annotations. Often different annotators judge the same data and the inter-annotator agreement is computed among their judgments to ensure quality. Ambiguity of data and task has a direct effect on the agreement factor.

The goal of this work is twofold. First, we investigate how much several sets of expert annotations differ from each other, in order to see whether repeated annotation is necessary and whether it influences performance ranking in a benchmark scenario. Second, we explore whether non-expert annotations are reliable enough to provide ground-truth annotations for a benchmarking campaign. To this end, four experiments on inter-annotator agreement are conducted, applied to the annotation of an image corpus with multiple labels. The dataset used is a subset of the MIR Flickr 25,000 image dataset [12]. 18,000 Flickr photos of this dataset, annotated with 53 concepts, were utilized in the latest ImageCLEF 2009 Photo Annotation Task [19], in which 19 research teams submitted 74 run configurations. Due to time and cost restrictions, most images of this task were annotated by only one expert annotator. We conduct the experiments on a small subset of 99 images. For our experiments, 11 different experts annotated the complete set, so that each image was annotated 11 times. Further, the set was distributed over Amazon Mechanical Turk (MTurk) to non-expert annotators all over the world, who labelled it nine times. The inter-annotator agreement as well as the system ranking for the 74 submissions is calculated by considering each annotation set as a single ground-truth.

The remainder of the paper is organized as follows. Sec. 2 describes related work on obtaining inter-annotator agreement and crowdsourcing approaches for distributed data annotation. Sec. 3 explains the setup of the experiments by illustrating the dataset and the annotation acquisition process. Sec. 4 details the methodology of the experiments and introduces the relevant background. Finally, Sec. 5 presents and discusses the results of the four experiments and we conclude in Sec. 6.

2. RELATED WORK

Over the years a fair amount of work on how to prepare golden standard databases for information retrieval evaluation has been published. One important point in assessing ground-truth for databases is to consider the agreement among annotators. The inter-annotator agreement describes the degree of consensus and homogeneity in judgments among annotators. Kilgarriff [15] proposes guidelines on how to produce a golden standard dataset for benchmarking campaigns for word-sense disambiguation. He concludes that the annotators and the vocabulary used during annotation assessment have to be chosen with care, while the resources should be used effectively. Kilgarriff states that it requires more than one person to assign word senses, and that one should calculate the inter-annotator agreement and determine whether it is high enough. He identifies three reasons that can lead to ambiguous annotations and suggests ways to solve them. Basically, the reasons lie in the ambiguity of the data, a poor definition of the annotation scheme, or mistakes of annotators due to lack of motivation or knowledge.

To assess the subjectivity in ground-truthing in multimedia information retrieval evaluation, several works have analysed inter-annotator agreement. Voorhees [28] analyses the influence of changes in relevance judgments on the evaluation of retrieval results utilizing the Kendall τ correlation coefficient. Volkmer et al. [27] present an approach that integrates multiple judgments in the classification system and compare them to the kappa statistics. Brants [2] proposes a study about inter-annotator agreement for part-of-speech and structural information annotation in a corpus of German newspapers. He uses the accuracy and F-score between the annotated corpora of two annotators to assess their agreement. A few studies have investigated inter-annotator agreement for word sense disambiguation [26, 5]. These studies often utilize kappa statistics for calculating agreement between judges. Recently, different works were presented that outsource multimedia annotation tasks to crowdsourcing approaches. According to Howe [10],

    crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call.

Often the work is distributed over web-based platforms. Utilizing crowdsourcing approaches for assessing ground-truth corpora is mainly motivated by the reduction of costs and time. The annotation task is divided into small parts and distributed to a large community. Sorokin et al. [25] were among the first to outsource image segmentation and labelling tasks to MTurk. The ImageNet database [7] was constructed by utilizing workers at MTurk who validated whether images depict the concept of a certain WordNet node. Some studies have explored the annotation quality obtained with crowdsourcing approaches. Alonso and Mizzaro [1] examine how well relevance judgments for the TREC topic about the space program can be fulfilled by workers at MTurk. The relevance of a document had to be judged regarding this topic and the authors compared the results of the non-experts to the relevance assessment of TREC. They found that the annotations of non-experts and TREC assessors are of comparable quality. Hsueh et al. [11] compare the annotation quality of sentiment in political blog snippets from a crowdsourcing approach and expert annotators. They define three criteria, the noise level, the sentiment ambiguity, and the lexical uncertainty, that can be used to identify high quality annotations. Snow et al. [24] investigate the annotation quality of non-expert annotators in five natural language tasks. They found that a small number of non-expert annotations per item yields performance equal to that of an expert annotator and propose to model the bias and reliability of individual workers for an automatic noise correction algorithm. Kazai and Milic-Frayling [13] examine measures to assess the quality of collected relevance assessments. They point to several issues like topic and content familiarity, dwell time, agreement or comments of workers that can be used to derive a trust weight for judgments. Other work deals with how to verify crowdsourced annotations [4], how to deal with several noisy labellers [23, 8] and how to balance pricing for crowdsourcing [9].

Following the work of [25, 7], we obtained annotations for images utilizing MTurk. In our experiments, these annotations are acquired on an image-based level for a multi-label scenario and compared to expert annotations. Extending the work that was performed on inter-annotator agreement [1, 2], we do not just analyse the inter-rater agreement, but study the effect of multiple annotation sets on the ranking of systems in a benchmark scenario.

3. EXPERIMENTAL SETUP

In this section, we describe the setup of our experiments. First, the dataset used for the experiments on annotator agreement is briefly explained. Next, the process of obtaining expert annotations is illustrated by outlining the design of our annotation tool and the task the experts had to perform. Then, the acquisition of ground-truth from MTurk is detailed. Finally, the workflow of posting tasks at Amazon MTurk, designing the annotation template, and obtaining and filtering the results is highlighted.

3.1 Dataset

The experiments are conducted on a subset of 99 images from the MIR Flickr Image Dataset [12]. The MIR Flickr Image Dataset consists of 25,000 Flickr images. It was utilized for a multi-label image annotation task at the latest ImageCLEF 2009 [19] competition. Altogether, 18,000 of the images were annotated with 53 visual concepts by expert annotators of the Fraunhofer IDMT research staff. 5,000 images with annotations were provided as training set and the performance of the annotation systems was evaluated on 13,000 images. 19 research teams submitted a total of 74 run configurations. The 99 images utilized in our experiments on inter-annotator agreement and its influence on system ranking are part of the testset of the Photo Annotation Task. Consequently, the results of the 74 system configurations in automated annotation of these images can serve as basis for investigating the influence on ranking.

3.2 Collecting Data of Expert Annotators

The set of 99 images was annotated by 11 expert annotators from the Fraunhofer IDMT research staff with 53 concepts. We provided the expert annotators with a definition of each concept including example photos (see [18] for a detailed description of the concepts). The 53 concepts to be annotated per image were ordered into several categories. In principle, there were two different kinds of concepts, optional concepts and mutually exclusive concepts. For example, the category Place contains three mutually exclusive concepts, namely Indoor, Outdoor and No Visual Place. In contrast, several optional concepts belong to the category Landscape Elements. The task of the annotators was to choose exactly one concept for categories with mutually exclusive concepts and to select all applicable concepts for optional concepts. All photos were annotated at an image-based level. The annotator tagged the whole image with all applicable concepts and then continued with the next image.

Figure 1: Annotation tool that was used for the acquisition of expert annotations.

Fig. 1 shows the annotation tool that was delivered to the annotators. The categories are ordered into the three tabs Holistic Scenes, Representation and Pictured Objects. All optional concepts are represented as check boxes and the mutually exclusive concepts are modelled as radio button groups. The tool verifies that exactly one concept was selected for each category containing mutually exclusive concepts before storing the annotations and presenting the next image.

3.3 Collecting Data of Non-expert Annotators

The same set of images that was used for the expert annotators was distributed over the online marketplace Amazon Mechanical Turk (www.mturk.com) and annotated by non-experts in the form of mini-jobs. At MTurk these mini-jobs are called HITs (Human Intelligence Tasks). They represent a small piece of work with an allocated price and completion time. The workers at MTurk, called turkers, can choose the HITs they would like to perform and submit the results to MTurk. The requester of the work collects all results from MTurk after they are completed. The workflow of a requester can be described as follows: 1) design a HIT template, 2) distribute the work and fetch results, and 3) approve or reject work from turkers. For the design of the HITs, MTurk offers support by providing a web interface, command line tools and developer APIs. The requester can define how many assignments per HIT are needed, how much time is allotted to each HIT and how much to pay per HIT. MTurk offers several ways of assuring quality. Optionally, the turkers can be asked to pass a qualification test before working on HITs, multiple workers can be assigned the same HIT and requesters can reject work in case the HITs were not finished correctly. The HIT approval rate each turker achieves by completing HITs can be used as a threshold for authorisation to work.
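
The three-step requester workflow above (create HITs with a fixed reward, assignment count and time limit, then fetch and approve or reject submitted assignments) can be illustrated with a short script. The paper predates today's AWS SDKs, so the sketch below uses the current boto3 MTurk client purely as an illustration of the same steps; the question XML file, the validity-check helper and the parameter values are placeholders mirroring the settings reported in Sec. 3.3.2, not part of the original study.

```python
import boto3

# Connect to the MTurk requester API (a sandbox endpoint would be used for testing).
mturk = boto3.client("mturk", region_name="us-east-1")

# 1) Create one HIT per image: 5 cents reward, several assignments per HIT,
#    as in the first annotation batch described in Sec. 3.3.2.
question_xml = open("image_annotation_survey.xml").read()  # hypothetical survey definition
hit = mturk.create_hit(
    Title="Annotate one photo with 53 visual concepts",
    Description="Select all applicable concepts; each radio-button group needs exactly one choice.",
    Reward="0.05",
    MaxAssignments=5,                 # five annotation sets per image in batch one
    AssignmentDurationInSeconds=600,
    LifetimeInSeconds=7 * 24 * 3600,
    Question=question_xml,
)
hit_id = hit["HIT"]["HITId"]

# 2) Fetch submitted work and 3) approve or reject it after a syntactic check
#    (see the rejection criterion and the helper sketch in Sec. 3.3.2).
for assignment in mturk.list_assignments_for_hit(
        HITId=hit_id, AssignmentStatuses=["Submitted"])["Assignments"]:
    if is_syntactically_valid(assignment["Answer"]):   # hypothetical validity check
        mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
    else:
        mturk.reject_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="At least one radio-button group was left unanswered.",
        )
```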

3.3.1 Design of HIT Template

The design of the HITs at MTurk for the image annotation task is similar to the annotation tool that was provided to the expert annotators (see Sec. 3.2). Each HIT consists of the annotation of one image with all applicable 53 concepts. It is arranged as a question survey and structured into three sections. The sections Scene Description and Representation each contain four questions, the section Pictured Objects consists of three questions. In front of each section the image to be annotated is presented. The repetition of the image ensures that the turker can see it while answering the questions without scrolling to the top of the document. Fig. 2 illustrates the questions for the section Representation.

Figure 2: Section Representation of the survey.

The turkers see a screen with instructions and the task to fulfil when they start working. As a consequence, the guidelines should be very short and easy to understand. In the annotation experiment the following annotation guidelines were posted to the turkers. These annotation guidelines are far shorter than the guidelines for the expert annotators and do not contain example images.

- Selected concepts should be representative for the content or representation of the whole image.
- Radio button concepts exclude each other. Please annotate with exactly one radio button concept per question.
- Check box concepts represent optional concepts. Please choose all applicable concepts for an image.
- Please make sure that the information is visually depicted in the images (no meta-knowledge)!

3.3.2 Characteristics of Results

The experiment at MTurk was conducted in two phases. In the first phase (also considered as test or validation phase), each of the 99 HITs was assigned five times, which resulted in 495 annotation sets (five annotation sets per image). One HIT was rewarded with 5 cents. The second phase consists of the same 99 HITs, which were annotated four times by the turkers. So altogether 891 annotation sets were obtained. The work of all turkers who did not follow the annotation rules was rejected. As reviewing the annotation correctness is difficult and subjective, the rejection process was conducted on a syntactical level. Basically, all images in which at least one radio button group was not annotated were rejected, as this clearly violates the annotation guidelines. Overall, 94 annotation sets were rejected, belonging to 54 different images (see Fig. 3(a)). At most, five HITs were rejected for one image and four HITs for two other images. Looking at the images in Fig. 3(a), no obvious reason can be found why so many turkers did not annotate all categories in these images. Fig. 3(b) illustrates the number of images that were annotated per turker. The turkers are represented on the x-axis, ordered from the turker with the most completed HITs to the turker with the fewest completed HITs for both batches.
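
The syntactic rejection rule described above (a HIT is rejected if any radio-button group, i.e. any mutually exclusive category, does not have exactly one selection) is easy to automate. The following is a minimal sketch assuming the answers of one HIT have already been parsed into a dictionary that maps each radio-button group to the set of selected options; the option names in the example, apart from those given in the paper, are placeholders.

```python
# Radio-button groups (mutually exclusive categories) used in the survey, cf. Sec. 5.3.2.
RADIO_GROUPS = ["Blurring", "Season", "Time of Day", "Place", "Illumination", "Persons"]

def is_syntactically_valid(radio_answers):
    """Return True if every radio-button group has exactly one option selected.

    radio_answers maps each group name to the set of radio options the turker
    selected in that group (optional check-box concepts are not relevant here).
    """
    return all(len(radio_answers.get(group, set())) == 1 for group in RADIO_GROUPS)

# Example: rejected, because the group "Season" was left unanswered.
answers = {"Place": {"Outdoor"}, "Time of Day": {"Day"}, "Season": set(),
           "Blurring": {"No Blur"}, "Illumination": {"Day Light"}, "Persons": {"No Persons"}}
print(is_syntactically_valid(answers))  # False
```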

Statistics of the First Batch.

The first batch, consisting of 495 HITs, was completed in about 6 hours and 24 minutes. In total 58 turkers worked on the annotations and spent on average 156.4 seconds per HIT. 58 annotation sets were rejected, which corresponds to 11.72% of the batch. The time spent to annotate these images was on average 144.4 seconds, which does not substantially differ from the overall average working time. In all rejected images at least one category with mutually exclusive concepts was not annotated at all.

The first round also served as validation phase. Results were analysed to check whether there was a misconception in the task. In the survey, the category Time of Day is the only category that consists of mutually exclusive and optional concepts at the same time. The turker should choose one answer out of the radio buttons Day, Night and No visual time and optionally could select the concepts Sunny and Sunset or Sunrise. For this category, it was apparently not clear to everybody that one radio button concept had to be selected. As a result the description for this category was rendered more precisely for the second round. The rest of the survey remained unchanged.

Statistics of the Second Batch.

The second batch was published four days after the first. Its 396 HITs were completed in 2 hours and 32 minutes by 38 workers. 9.09% of the HITs had to be rejected, which is equal to 36 HITs. On average, 137.5 seconds were needed for the annotation of one image. The rejected images were annotated in 117.8 seconds on average. Six workers worked on both batches. MTurk ensures that several assignments per HIT are not finished by the same turker. However, as the second batch was published as a new task some days later, it was possible that turkers from the first round also worked on the second. All in all, there were 13 images that were annotated twice by the same person.

Figure 3: At the top, the number of rejections per image is plotted for all rejected images. At the bottom, the number of images annotated by each turker is illustrated.

Feedback.

Each HIT was provided with a comment field for feedback. The comments received can be classified into 1) comments about the work, 2) comments about the content of the photo, 3) comments about the quality of the photo, 4) comments about feelings concerning a photo and 5) other comments. Table 1 lists an excerpt of the comments. Notably, no negative comment was posted and the turkers seemed to enjoy their task.

Table 1: An excerpt of the comments posted by the turkers, grouped into comments about the content of the photo, about the work, about feelings concerning the photo, about the quality of the photo, and other comments. Examples include: "Cupcakes", "Dolls aren't persons right", "Cute And nice", "Color effect can be better", "ha haa", "Sure", "Looks Like a dream land", "really nice to work on this.", "Just Beautiful", "Interesting good representation for Logo or ...", "For what purpose is this useful?", "this is very different and easy.", "Thats Creative", "Answer for Picture Objects 2 and 3 are not fitting correctly.", "I really like this one has nice composition."

4. EVALUATION DESIGN

In this section, the methodology of the inter-annotator agreement experiments is outlined and the utilized measures are introduced. Four experiments are conducted to assess the influence of expert annotators on the system ranking and to determine whether the annotation quality of non-expert annotators is good enough to be utilized in benchmarking campaigns:

1. Analysis of the agreement among experts

There are different possibilities to assess the inter-rater agreement among annotators in case each annotator annotated a whole set of images. One way is to calculate the accuracy between two sets of annotations. Another way is to compare the average annotation agreement on the basis of the majority vote for each concept or for each image.

In the annotation of the images two principal types of concepts were used, optional and mutually exclusive ones. The question is how to assess when an annotator performs a decision:

- Is a decision performed only by explicitly selecting a concept?
- Or does the annotator perform a judgment through both the selection and deselection of concepts?

In case of the optional concepts, it is not assured that a deselected concept was chosen to be deselected or just forgotten during the annotation process. In case of the mutually exclusive concepts, the selection of one concept automatically leads to a deselection of the other ones in the same group. In this case, both the selection and deselection of a group of concepts is performed intentionally. The majority analysis of the agreement on images and concepts takes these two paradigms into consideration and compares their results.

2. Influence of different sets of expert annotations on ranking the performance of systems

In the second experiment, the influence of the annotator sets on the performance of systems is determined. The goal is to examine how much different ground-truths affect the ranks of systems in a benchmark scenario.

Each set of expert annotations is regarded as ground-truth for the evaluation of annotation systems. The 74 run configurations of the ImageCLEF 2009 Photo Annotation task were trimmed to contain only the annotations for the 99 images. For each run, results against each ground-truth were computed with the evaluation measures Ontology Score (OS), Equal Error Rate (EER) and Area Under Curve (AUC), which were utilized in the official ImageCLEF campaign [19] (see Sec. 4.2). In a second step, the resulting ranked lists per annotator ground-truth are compared to each other with the Kendall τ correlation coefficient [14] and the Kolmogorov-Smirnov statistics [16].

3. Analysis of the agreement between experts and non-experts

The third experiment analyses the agreement between expert and non-expert annotators. Its goal is to assess whether the agreement among non-experts is comparable to the inter-rater agreement of experts. In general, the annotations obtained from MTurk are organized as HITs. A HIT should cover a small piece of work that is paid with a small reward. As a consequence, the major difference between the expert annotation sets and the ones from MTurk is that at MTurk each set of 99 images is annotated by several persons. This allows us to compare the agreement on the labels at a concept- and image-based level, but not to compare the correlation among annotators over the whole set. The analysis of non-expert agreement considers only approved annotation sets from MTurk and uses the annotation sets from both rounds combined.

In this experiment, the annotation agreement for each image is calculated in terms of example-based accuracy and compared between both groups of annotators. Further, the expert annotation set determined with the majority vote is compared to the combined annotation set of the non-experts. As in the first agreement experiment, the accuracy serves as the evaluation measure. To evaluate the inter-annotator agreement on a concept basis, the kappa statistics are utilized (see Sec. 4.4). They take all votes from annotators into account and derive an agreement value which excludes the agreement by chance. This experiment is performed for both groups of annotators and the results for the mutually exclusive categories and the optional concepts are compared.

4. Influence of averaged expert annotations compared to averaged non-expert annotations on system ranking

Finally, the influence of the non-expert annotations on the system ranking is investigated. The combined annotations determined by the majority vote of the non-experts are used as basis for evaluation. The correlation in ranking between this ground-truth and the combined ground-truth of the expert annotators is computed for all three evaluation measures OS, EER and AUC. The Kendall τ correlation coefficient and the Kolmogorov-Smirnov statistics are calculated between both rankings.

In the following, the background needed to understand the experiments is briefly explained. First, the inter-annotator agreement computation based on accuracy is described. Second, the evaluation measures utilized in the ranking experiment are introduced. Next, the calculation of rank correlation is outlined. Finally, the kappa statistics are explained.

4.1 Accuracy for Agreement Assessment

Following [2], the accuracy between two sets of annotated images U and V is defined as

\[ \mathrm{accuracy}(U,V) = \frac{\#\,\text{identically tagged labels}}{\#\,\text{labels in the corpus}}, \qquad (1) \]

where labels refer to the annotated instances of a concept over the whole set.

The accuracy between two sets of annotations per image can be calculated according to Eq. 2. This way of calculating the accuracy does not presume the existence of two persons that annotated the whole set of images, but evaluates the accuracy of annotations on an image-based level:

\[ \mathrm{accuracy}_{ex}(X) = \frac{1}{N} \sum_{i=1}^{N} \frac{\#\,\text{identically tagged labels in } X_i}{\#\,\text{labels in } X_i}. \qquad (2) \]
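
Read as code, Eq. 1 compares two complete annotation sets label by label, while Eq. 2 averages per-image agreement and therefore also works when each image was annotated by different people. A minimal sketch is given below, assuming annotations are stored as binary vectors over the 53 concepts; the data structures and names are illustrative, not taken from the paper, and for Eq. 2 one plausible reading (averaging pairwise label agreement among the annotators of each image) is used.

```python
import numpy as np

def accuracy(U, V):
    """Eq. 1: fraction of identically tagged labels between two annotation sets.

    U, V: arrays of shape (n_images, n_concepts) with 0/1 entries.
    """
    U, V = np.asarray(U), np.asarray(V)
    return float(np.mean(U == V))

def accuracy_ex(annotations_per_image):
    """Eq. 2: example-based accuracy, averaged over images.

    annotations_per_image: list with one (n_annotators_i, n_concepts) 0/1 array
    per image; per image, every pair of annotators is compared label by label.
    """
    per_image = []
    for A in annotations_per_image:
        A = np.asarray(A)
        n = A.shape[0]
        pair_acc = [np.mean(A[i] == A[j]) for i in range(n) for j in range(i + 1, n)]
        per_image.append(np.mean(pair_acc))
    return float(np.mean(per_image))
```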

      A2    A3    A4    A5    A6    A7    A8    A9    A10   A11   Merged
A1    .900  .877  .905  .892  .914  .912  .890  .894  .900  .916  .929
A2          .885  .913  .905  .916  .915  .903  .911  .909  .925  .939
A3                .900  .873  .902  .884  .878  .886  .892  .904  .918
A4                      .897  .926  .918  .899  .914  .915  .932  .947
A5                            .900  .917  .902  .901  .900  .911  .928
A6                                  .925  .902  .918  .925  .932  .952
A7                                        .900  .916  .918  .929  .945
A8                                              .887  .892  .909  .920
A9                                                    .918  .918  .941
A10                                                         .919  .941
A11                                                               .958

Table 2: The confusion matrix depicts the accuracy among annotators averaged over a set of 99 images with 53 annotations per image. The column Merged contains the majority votes of all annotators.

4.2 Image Annotation Evaluation Measures

The performance of systems in the ImageCLEF Photo Annotation task was assessed with the three evaluation measures OS, EER and AUC. The OS was proposed in [20]. It assesses the annotation quality on an image basis. The OS considers partial matches between system output and ground-truth and calculates misclassification costs for each missing or wrongly annotated concept per image. The score is based on structure information (distance between concepts in the hierarchy), relationships from the ontology and the agreement between annotators for a concept. The calculation of misclassification costs favours systems that annotate an image with concepts close to the correct ones over systems that annotate concepts that are far away in the hierarchy from the correct concepts. In contrast, the measures EER and AUC assess the annotation performance of the system per concept. They are calculated from Receiver Operating Characteristic (ROC) curves. The EER is defined as the point where the false acceptance rate of a system is equal to the false rejection rate. The AUC value is calculated by summing up the area under the ROC curve. As the EER and the AUC are concept-based measures, they calculate a score per concept which is later averaged. In case a concept was not annotated at least once in the ground-truth of the annotator, it is not considered in the final evaluation score for EER or AUC.
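
For the two concept-based measures, a per-concept score is computed from the ROC curve of the system's confidence scores against the ground-truth of one annotator and then averaged over the concepts that occur in that ground-truth. A minimal sketch of EER and AUC for a single concept, using scikit-learn's ROC utilities (the paper does not prescribe a particular implementation):

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def eer_and_auc(y_true, y_score):
    """EER and AUC for one concept.

    y_true:  0/1 ground-truth of one annotator for this concept over all test images.
    y_score: the system's confidence scores for the same images.
    """
    fpr, tpr, _ = roc_curve(y_true, y_score)
    fnr = 1.0 - tpr
    # EER: operating point where false acceptance (fpr) equals false rejection (fnr).
    idx = np.argmin(np.abs(fpr - fnr))
    eer = (fpr[idx] + fnr[idx]) / 2.0
    return eer, roc_auc_score(y_true, y_score)

# Concepts never annotated in the chosen ground-truth are skipped before averaging,
# as described above.
```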

4.3 Rank Correlation

The correlation between ranked lists can be assessed by Kendall's τ [14] or the Kolmogorov-Smirnov D [16]. Both use a non-parametric statistic to measure the degree of correspondence between two rankings and to assess the significance of this correspondence. The Kendall test computes the distance between two rankings as the minimum number of pairwise adjacent swaps needed to turn one ranking into the other. The distance is normalised by the number of items being ranked such that two identical rankings produce a correlation of +1, the correlation between a ranking and its perfect inverse is -1 and the expected correlation of two rankings chosen randomly is 0. The Kendall statistics assume as null hypothesis that the rankings are discordant and reject the null hypothesis when τ is greater than the 1−α quantile, with α as significance level. In contrast, the Kolmogorov-Smirnov D [16] states as null hypothesis that the two rankings are concordant. It is sensitive to the extent of disorder in the rankings. Both tests are utilized in our experiment to see how much the different ground-truths affect the ranking of systems. The correlation is regarded as a kind of agreement between the annotators on the annotations, as a high correlation denotes an equal ranking in both lists, which points to a close annotation behaviour.
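
As an illustration of how such a comparison can be computed, the sketch below correlates two rankings of the same runs obtained under two different ground-truths; scipy's kendalltau implements the τ statistic together with a significance test. The ranking values are made up for the example.

```python
from scipy.stats import kendalltau

# Hypothetical example: positions of five runs when ranked under two ground-truths.
ranks_groundtruth_a = [1, 2, 3, 4, 5]
ranks_groundtruth_b = [1, 3, 2, 4, 5]   # runs 2 and 3 swap places

tau, p_value = kendalltau(ranks_groundtruth_a, ranks_groundtruth_b)
print(f"Kendall tau = {tau:.2f}, p = {p_value:.3f}")  # tau = 0.80 for one adjacent swap
```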

4.4 Kappa Statistics

Kappa statistics can be utilized to analyse the reliability of the agreement among annotators. Kappa is a statistical measure that was originally proposed by Cohen [6] to compare the agreement between two annotators when they classify assignments into mutually exclusive categories. It calculates the degree of agreement while excluding the probability of consistency that is expected by chance. The coefficient ranges between 0, when the agreement is not better than chance, and 1, when there is perfect agreement. In case of systematic disagreement it can also become negative. As a rule of thumb, a kappa value above 0.6 represents an adequate annotator agreement while a value above 0.8 is considered as almost perfect [17]. The kappa statistic used in the following analysis is called free-marginal kappa (see [3, 21]) and can be utilized when the annotators are not forced to assign a certain number of documents to each concept. It is suitable for any number of annotators.
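
A minimal sketch of the free-marginal multirater kappa, following the Brennan/Prediger and Randolph formulation [3, 21]: the observed agreement is computed Fleiss-style over all items, and the chance agreement is fixed at 1/q for q categories because the raters are not forced to match any marginal distribution. The matrix layout and the example data are assumptions made for illustration.

```python
import numpy as np

def free_marginal_kappa(counts):
    """Free-marginal multirater kappa (Randolph's kappa_free).

    counts: array of shape (n_items, n_categories); counts[i, j] is the number of
    raters that assigned item i to category j. Every item must have the same
    number of raters (equal row sums).
    """
    counts = np.asarray(counts, dtype=float)
    n_items, n_categories = counts.shape
    n_raters = counts[0].sum()
    # Observed agreement: for each item, the fraction of agreeing rater pairs.
    p_obs = ((counts * (counts - 1)).sum(axis=1) / (n_raters * (n_raters - 1))).mean()
    # Chance agreement with free marginals is simply 1/q.
    p_chance = 1.0 / n_categories
    return (p_obs - p_chance) / (1.0 - p_chance)

# Example: 3 items, 4 raters, binary concept (selected / not selected).
print(free_marginal_kappa([[4, 0], [3, 1], [0, 4]]))  # ~0.667
```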

5. RESULTS AND DISCUSSION

This section details the results of the four experiments. The first experiment analyses the agreement between different sets of expert annotations. The second one investigates the influence of the annotation sets on the performance ranking. The third one compares the inter-rater agreement between experts and non-experts and the last one considers the influence of non-expert annotations on the ranking.

5.1 Agreement Analysis among Experts

For each annotator the accuracy in comparison to the annotations of all other annotators is calculated. Additionally, the majority vote of all annotators is utilized as a 12th ground-truth, further denoted as merged annotations. Table 2 presents the results in a confusion matrix. In general, the accuracy between the annotations is very high. The overall accuracy is 0.912, with a minimum of 0.873 between annotators 3 and 5 and a maximum of 0.958 between annotator 11 and the merged annotations.

Figure 4: The upper figure depicts the agreement among annotators for each concept determined over the majority vote. The lower diagram shows the inter-annotator agreement for each image.

Fig. 4 presents at the top the agreement for each concept among the eleven annotators. The majority vote determines whether a concept has to be annotated for a specific image. The percentage of the annotators that chose this concept is depicted in the figure, averaged over all images. Note that by calculating agreements based on the majority vote, the agreement on a concept cannot be worse than 50%. The upper line represents the agreement on a concept averaged over the set of images in case the selection and deselection of concepts is regarded as intentional. This means that if no annotator chose concept C to be annotated in an image X, the agreement is regarded as 100%. The lower line represents the case when only selected concepts are taken into account. All images in which a concept C was not annotated by at least one annotator are not considered in the averaging process. In case only a small number of annotators select a concept, the majority vote determines the agreement and it is considered in the averaging process. This means that if, e.g., nine out of 11 annotators decided not to select concept C in an image X, the agreement on this concept would be about 82%. For the concept Snow the lower line represents an agreement of 0%: no annotator annotated that concept in any of the 99 images. At the bottom of Fig. 4 the agreement among annotators is illustrated averaged for each image. Again, the average per image was calculated based on selected concepts and based on all concepts. The upper line represents the average agreement among annotators for each image when taking into account the selected and deselected concepts. The lower line illustrates the agreement per image when just considering the selected concepts. All in all, the agreement among annotators in case of averaging based on the majority vote shows a mean agreement of 79.6% and 93.3% per concept and 81.2% and 93.3% per image for selected and all concepts, respectively.
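
The two averaging paradigms described above (counting deselections as intentional decisions versus considering only images in which at least one annotator selected the concept) can be made concrete with a short sketch. This is an illustrative reading of the procedure with made-up vote data; per image, the agreement is taken as the fraction of annotators who side with the majority.

```python
import numpy as np

def concept_agreement(votes, selected_only=False):
    """Average majority-vote agreement for one concept.

    votes: 0/1 array of shape (n_images, n_annotators); 1 = annotator selected the concept.
    selected_only: if True, average only over images where at least one annotator
    selected the concept (the lower curve in Fig. 4); otherwise over all images.
    """
    votes = np.asarray(votes)
    if selected_only:
        votes = votes[votes.sum(axis=1) > 0]
        if len(votes) == 0:
            return 0.0   # e.g. the concept Snow, never selected in the 99 images
    n_annotators = votes.shape[1]
    majority = (votes.sum(axis=1) > n_annotators / 2).astype(int)
    # Fraction of annotators agreeing with the majority label, averaged over images.
    return float(np.mean(np.mean(votes == majority[:, None], axis=1)))

# Made-up votes of 11 annotators on 3 images for one concept:
votes = np.array([[0] * 11, [1] * 9 + [0] * 2, [0] * 9 + [1] * 2])
print(concept_agreement(votes))                      # counts deselections too
print(concept_agreement(votes, selected_only=True))  # only images with at least one selection
```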

5.2 System Ranking with Expert Annotations

In the ranking experiment, the effect of the annotator selection on the evaluation of classification accuracy is investigated. Results are presented in Table 3, which displays the Kendall τ correlation coefficients and the decisions of the Kolmogorov-Smirnov test. The upper triangle depicts the correlations between the ranked result lists by taking into account the different ground-truths of the annotators for the evaluation with the OS measure. On average, there is a correlation of 0.916 between all result lists. The list computed with the merged ground-truth has an overall correlation of 0.927 with the other lists. Except in three cases, the Kolmogorov-Smirnov test supported the decision of concordance in the rankings. Overall, annotator 11 has the highest correlation with all other annotators with 0.939 and annotator 10 has the lowest average correlation with 0.860.

The lower triangle of Table 3 contains the correlation of ranks in the result lists for the evaluation measure EER. All pairs of ranked lists have on average a correlation of 0.883. The ground-truths of the annotators correlate on average with the merged ground-truth at 0.906. For the rankings with EER, the Kolmogorov-Smirnov statistics assigned discordance in rankings six times. The annotator with the lowest average correlation in its annotations is annotator 1 with 0.85 and the annotations with the highest average correlation are the ones from annotator 6 with 0.901 (when not considering the merged annotations, which have the highest correlation overall).

The results for the evaluation measure AUC are similar to the ones of the OS. The correlation between all runs is on average 0.936. The correlation between the ground-truth of each annotator and the merged ground-truth is on average 0.947. The lowest average correlation with all other annotations is from annotator 10 with 0.916. The highest average correlation was achieved by annotator 11 with 0.947. In all cases the Kolmogorov-Smirnov statistics supported the Kendall τ test results for concordance. Summarizing, the results of the two tests showed a high correlation of the ranked lists calculated against the ground-truths of the different expert annotators. Only in a few cases were the test results contradictory. Depending on the evaluation measure with which the ranked lists were computed, the average correlation varies from 0.916 (OS) and 0.883 (EER) to 0.936 (AUC). One can conclude from these results that the annotators have a high level of agreement and that the choice of annotator does not affect the ranking of the teams substantially. For the measures OS and AUC the same two annotators, annotator 11 and annotator 10, show the highest and the lowest average correlation with the other annotations, respectively.

5.3 Agreement Analysis between Experts and Non-experts

In the following, the results of the agreement analysis of non-expert annotations are presented and compared to the inter-annotator agreement of the experts.

5.3.1 Accuracy

The accuracy was computed for each image X among all annotators of that image. The averaged accuracy for each image annotated by the expert annotators is 0.81. The average accuracy among all turkers for each image is 0.79. A merged ground-truth file was composed from the HITs by computing the majority vote for each concept in each image. The accuracy between the merged ground-truth file of MTurk and the merged ground-truth file of the expert annotators is 0.92.

        A1    A2    A3    A4    A5    A6    A7    A8    A9    A10   A11   Merged
A1            .938  .938  .914  .938  .884  .959  .952  .890  .816  .927  .890
A2      .842        .964  .960  .914  .939  .951  .898  .934  .869  .960  .947
A3      .804  .890        .969  .901  .942  .945  .897  .937  .872  .976  .948
A4      .874  .898  .872        .892  .967  .939  .878  .959  .892  .976  .973
A5      .872  .892  .864  .892        .859  .944  .950  .869  .792  .898  .866
A6      .845  .927  .904  .908  .905        .910  .846  .955  .918  .951  .978
A7      .863  .880  .872  .893  .895  .906        .928  .910  .841  .948  .917
A8      .881  .884  .859  .896  .888  .889  .876        .850  .777  .888  .851
A9      .819  .868  .882  .888  .875  .880  .849  .861        .892  .946  .958
A10     .847  .867  .861  .919  .874  .881  .869  .851  .893        .872  .914
A11     .838  .915  .894  .890  .883  .919  .898  .861  .882  .891        .954
Merged  .865  .925  .889  .919  .914  .946  .898  .905  .903  .890  .912

Table 3: This table presents the Kendall τ correlation for the evaluation with the OS measure and varying ground-truth in the upper triangle. The lower triangle shows the correlation coefficient for the evaluation with the EER score. The cells coloured in gray represent the combinations for which the Kolmogorov-Smirnov test decided on discordance for the rankings.

In both cases, the accuracy between experts and non-experts is very high. Recalling Table 2, the results between the expert annotators and the merged expert annotations are on average 0.94. In terms of the overall accuracy of the ground-truth files, the annotations from MTurk show nearly as good results as the expert annotations.

5.3.2 Kappa Statistics

The kappa statistics are calculated in three different configurations using [22]. The first configuration (denoted as non-experts) takes all votes of the turkers into consideration. The second, called experts, uses all annotations from the experts, and the third (combined) computes the kappa statistics between the averaged expert annotation set and the averaged non-expert annotation set. In the following, the results are presented for the categories with mutually exclusive concepts and for the optional concepts.

Kappa Statistics on Mutually Exclusive Concepts.

Figure 5: The diagram depicts the kappa values for the mutually exclusive categories.

The images were annotated with six categories that contain mutually exclusive concepts. Fig. 5 presents the results of the kappa analysis for the categories Blurring, Season, Time of Day, Place, Illumination and Persons. Time of Day and Place contain three concepts, Blurring, Illumination and Persons four concepts, and the category Season is assigned five concepts. The results show that the kappa value is higher for the expert annotators in each category. In all categories the expert annotators achieved a kappa value higher than 0.6. For the categories Illumination and Persons an agreement higher than 0.8 could be obtained. Considering the non-expert annotations, the kappa value is above the threshold for only half of the categories. The kappa value for the categories Season and Illumination is indeed quite low. A possible reason for the category Season lies in the fact that the images should only be annotated with concepts that are visible in the image. In most cases the season is not directly visible in an image and the expert annotators were trained to assign the concept No visual season in this case. However, the turkers may have guessed, which leads to a lower agreement. The kappa statistics for the combined configuration show that the majority of the experts has a good agreement with the majority of the non-experts. Except for the category Blurring, all agreements are higher than 0.8.

Kappa Statistics on Optional Concepts.

Figure 6: The diagram depicts the kappa values for the optional concepts.

In the dataset, 31 optional concepts were annotated. For each optional concept the kappa statistics are computed separately in a binary scenario for the three described configurations. Fig. 6 presents the kappa statistics for the optional concepts. On average, the non-expert annotators agree with a value of 0.68, the experts with a value of 0.83, and the combined kappa value is 0.84. For a few concepts (aesthetic, Still Life, Quality, Macro, ...) the non-expert agreement is very low. However, the combined agreement for these concepts is quite high, and also slightly better on average than the agreement among experts. The results indicate that the majority vote is able to filter the noise from the non-expert annotations for most concepts and raise the averaged annotations to the level of the expert annotations. For a few concepts like Plants, Sky and Landscape the agreement between the combined annotation sets is low. These concepts are depicted quite frequently in the images, so apparently there is no major agreement about how to annotate these concepts. For other concepts the agreement in the combined configuration is very high. Taking into account the contents of the images, the reasons for this are twofold. On the one hand, there are some concepts that simply are not often depicted in the 99 images (e.g. the concept Snow is not depicted at all). So it is correct that all annotators agree that the concept is not visible. But the results for these images cannot be considered as a general annotation agreement for this concept, as this may change for images that do depict these concepts. On the other hand, there is the problem of which annotation paradigm to apply, as illustrated in Sec. 4. If only the annotations for images are considered in which at least one annotator selected the concept, the agreement would decrease.

In contrast to the agreement results in terms of accuracy (Sec. 5.3.1), the inter-annotator agreement evaluated with kappa statistics shows major differences between experts and turkers. While the experts achieved a satisfactory agreement on concepts for most categories and optional concepts, the results of the turkers are not comparably good. They only cross the 0.6 threshold for half of the categories. In case of the optional concepts, 22 of 31 concepts have a non-expert agreement higher than 0.6. For the other concepts, there exist major differences in agreement.

5.4 Ranking with Non-expert Annotations

This experiment explores how the different ground-truths of expert annotators and turkers affect the ranking of systems in a benchmark scenario. For this experiment the combined annotation sets obtained over the majority vote are utilized as ground-truth for the system evaluation. The Kendall τ test assigns a high correlation in ranking between the combined ground-truth of the turkers and the combined ground-truth of the experts. Evaluated with the OS, the correlation is 0.81, evaluated with EER the correlation is 0.92, and utilizing the AUC measure, the correlation coefficient is 0.93. This corresponds approximately to the correlation coefficients the single expert annotators had in comparison to the combined expert list, as illustrated in Sec. 5.2. In that experiment the correlation coefficient for the OS is 0.93, for the EER 0.91 and for the AUC 0.95 on average. Consequently, the ranking of the systems is affected most in case of the OS. For the EER the non-expert annotations show an even higher correlation with the merged list than the single expert annotations. These results are supported by the Kolmogorov-Smirnov test. This test decides for concordance in case of EER and AUC, but for discordance in case of OS. Consequently, these results confirm that the majority vote seems to filter some noise out of the annotations of the non-experts. However, as shown in Sec. 5.3.2, for some concepts the agreement is still low. It is surprising that the concept-based measures EER and AUC show such a high average correlation in the ranking of systems, even though there are a few concepts for which the annotations differ substantially between expert and non-expert annotators. These results pose the question whether the evaluation measures used for the evaluation of image annotation are sensitive enough.

6. CONCLUSION AND FUTURE WORK

Summarizing, this paper illustrates different experiments on inter-annotator agreement in assessing ground-truth for multi-labelled images. The annotations of 11 expert annotators were evaluated on a concept- and an image-based level and utilized as ground-truths in a ranking correlation experiment. All expert annotators show a high consistency in annotation with more than 90% agreement in most cases, depending on the measure utilized. The kappa statistics show an agreement of 0.76 on average for the exclusive categories and a kappa of 0.83 for the optional concepts. A further experiment analyses the influence of judgments of different annotators on the ranking of annotation systems. This indirect agreement measure exploits the fact that systems are ranked equally for the same ground-truth, so a low ranking correlation points to a low agreement in annotation. The results show that a high correlation in ranking is assigned among the expert annotators. All in all, the ranking of systems in a benchmark scenario is in most cases not substantially influenced by evaluating against different expert annotations. Depending on the evaluation measure used to perform the ranking, in a few combinations a discordance between rankings could be detected. This leads to the conclusion that repeated expert annotation of the whole dataset is not necessary, as long as the annotation rules are clearly defined. However, we suggest that the inter-rater agreement be validated on a small set to ensure quality, as the expert annotation agreement also varies depending on the concepts.

The same experiment was conducted with non-expert annotators at Amazon Mechanical Turk. Altogether nine annotation sets were gathered from turkers for each image. The inter-annotator agreement was not judged consistently by the different approaches. The accuracy shows a high agreement of 0.92, which is very close to the agreement among expert annotators. However, the kappa statistics report an average of 0.62 for the exclusive categories and an average of 0.68 for the optional concepts for the non-experts. The value for the exclusive categories is close to the lower threshold of what is regarded as acceptable annotator agreement. When comparing the averaged ground-truth file of the non-experts with the one from the experts, the inter-annotator agreement in terms of kappa rises to 0.84 on average. The majority vote used for generating this ground-truth file seems to filter some noise out of the annotations of the non-experts. Finally, the ranking experiment shows a high correlation of the combined ground-truth of the non-expert annotators with the one of the expert annotators. These results indicate that the differences in annotations found by the kappa statistics, even with the combined ground-truth, do not substantially influence the ranking of different systems in a benchmark scenario. This poses new questions concerning the sensitivity of image annotation evaluation measures, especially in the case of concept-based evaluation measures.

Concluding, data annotation utilizing a crowdsourcing approach is very fast and cheap and therefore offers a prospective opportunity for large-scale data annotation. The results obtained in these experiments are quite promising when repeated labelling is performed and support the results of other studies on distributed data annotation [25, 1, 24]. However, as the experiments were conducted on a small database, future work has to explore whether the results remain stable on a larger set. This work does not answer the question of how many annotation sets of non-experts are necessary to obtain results comparable to expert annotators. Additionally, further analysis needs to be performed to answer the question why the concept-based evaluation measures for ranking systems in a benchmark scenario do not reflect the differences in annotation quality to a great extent, as the kappa statistics or the OS do. A deeper analysis of which evaluation measures are more sensitive to varying annotation quality will be part of future work.

7. ACKNOWLEDGMENTS

This work has been partly supported by grant 01MQ07017 of the German research program THESEUS, which is funded by the Ministry of Economics. The work was partly performed at the Knowledge Media Institute at the Open University thanks to a German Academic Exchange Service (DAAD) scholarship. Thanks for all the annotation effort conducted at Fraunhofer IDMT and for the work of the turkers at Mechanical Turk.

8. REFERENCES

[1] O. Alonso and S. Mizzaro. Can we get rid of TREC assessors? Using Mechanical Turk for relevance assessment. In SIGIR 2009 Workshop on the Future of IR Evaluation.

[2] T. Brants. Inter-annotator agreement for a German newspaper corpus. In Proc. of the 2nd Intern. Conf. on Language Resources and Evaluation, 2000.

[3] R. Brennan and D. Prediger. Coefficient kappa: Some uses, misuses, and alternatives. Educational and Psychological Measurement, 41(3):687, 1981.

[4] K. Chen, C. Wu, Y. Chang, and C. Lei. A crowdsourceable QoE evaluation framework for multimedia content. In Proceedings of the 17th ACM International Conference on Multimedia, 2009.

[5] T. Chklovski and R. Mihalcea. Exploiting agreement and disagreement of human annotators for word sense disambiguation. In Proceedings of RANLP 2003, 2003.

[6] J. Cohen et al. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46, 1960.

[7] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. ImageNet: a large-scale hierarchical image database. In Proc. CVPR, pages 710–719, 2009.

[8] P. Donmez, J. Carbonell, and J. Schneider. Efficiently learning the accuracy of labeling sources for selective sampling. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 259–268, New York, USA, 2009.

[9] D. Feng and S. Zajac. Acquiring High Quality Non-Expert Knowledge from On-demand Workforce. ACL-IJCNLP 2009.

[10] J. Howe. Crowdsourcing - A Definition. http://crowdsourcing.typepad.com/cs/2006/06/, last accessed 24.11.2009, 2006.

[11] P. Hsueh, P. Melville, and V. Sindhwani. Data quality from crowdsourcing: a study of annotation selection criteria. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing, 2009.

[12] M. J. Huiskes and M. S. Lew. The MIR Flickr Retrieval Evaluation. In Proc. of ACM Intern. Conf. on Multimedia Information Retrieval, 2008.

[13] G. Kazai and N. Milic-Frayling. On the Evaluation of the Quality of Relevance Assessments Collected through Crowdsourcing. In SIGIR 2009 Workshop on the Future of IR Evaluation.

[14] M. Kendall. A New Measure of Rank Correlation. Biometrika, 30:81–89, 1938.

[15] A. Kilgarriff. Gold standard datasets for evaluating word sense disambiguation programs. Computer Speech and Language, 12(4):453, 1998.

[16] A. Kolmogoroff. Sulla determinazione empirica di una legge di distribuzione. Giorn. Ist. Ital. Attuari, 4(1):83–91, 1933.

[17] J. Landis and G. Koch. The measurement of observer agreement for categorical data. Biometrics, 33(1):159–174, 1977.

[18] S. Nowak and P. Dunker. A Consumer Photo Tagging Ontology: Concepts and Annotations. In THESEUS-ImageCLEF Pre-Workshop 2009, Corfu, Greece, 2009.

[19] S. Nowak and P. Dunker. Overview of the CLEF 2009 Large-Scale Visual Concept Detection and Annotation Task. CLEF working notes 2009, Corfu, Greece, 2009.

[20] S. Nowak and H. Lukashevich. Multilabel Classification Evaluation using Ontology Information. In Proc. of IRMLeS Workshop, ESWC, Greece, 2009.

[21] J. Randolph. Free-Marginal Multirater Kappa (multirater κ_free): An Alternative to Fleiss' Fixed-Marginal Multirater Kappa. In Joensuu Learning and Instruction Symposium, 2005.

[22] J. Randolph. Online Kappa Calculator. http://justusrandolph.net/kappa, 2008.

[23] V. Sheng, F. Provost, and P. Ipeirotis. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proc. of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2008.

[24] R. Snow, B. O'Connor, D. Jurafsky, and A. Ng. Cheap and fast—but is it good? Evaluating non-expert annotations for natural language tasks. In Proc. of the Conference on Empirical Methods in Natural Language Processing, 2008.

[25] A. Sorokin and D. Forsyth. Utility data annotation with Amazon Mechanical Turk. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2008.

[26] J. Véronis. A study of polysemy judgements and inter-annotator agreement. In Programme and advanced papers of the Senseval workshop, 1998.

[27] T. Volkmer, J. Thom, and S. Tahaghoghi. Modeling human judgment of digital imagery for multimedia retrieval. IEEE Trans. on Multimedia, 9(5):967–974, 2007.

[28] E. Voorhees. Variations in relevance judgments and the measurement of retrieval effectiveness. Information Processing and Management, 36(5):697–716, 2000.


英语中几个特殊数词的用法小结正式稿

英语中几个特殊数词的用法小结 石广银 湖南怀化广播电视大学湖南怀化 418000 【摘要】英语的数词除基数词外,还有dozen, score, hundred, thousand, million, billion, trillion七个。它们除了表达基本的数目和顺序外,还有一些特殊用法,涉及到用单数还是用复数,与不与介词连用,与不与定冠词the连用等,本文对它们的用法作七点分析总结。 【关键词】特殊数词、用法、总结 英语的数词由基数词one至 nineteen, 以及twenty至ninety组成,共二十七个。除了这二十七个基数词外,还有七个表示整数的特殊数词,它们分别是dozen(十二),score(二十), hundred(百), thousand(千), million(百万), billion(【美国、法国】十亿 ,【英国、德国】万亿), trillion(【英】百万兆,一百万的三次幂,【美】万亿,兆,一千的四次幂)。称这七个数词为特殊数词,是因为它们除了表达基本的数目和顺序外,还有一些特殊用法,涉及到用单数还是用复数,与不与介词连用,与不与定冠词the连用等,本文对它们的用法作七点分析总结。这七个词中,billion和trillion用法与其它五个词相同,但很少用,故本文不为它们举例。 1.当它们表示具体、确切的数量、修饰名词时,不用复数(但名词要用复数,且可以承前文省略),也不和定冠词the及介词of连用,基本句型为:数词+特殊数词单数(+描绘性形容词)+名词。如: He had come half a dozen times to call upon his sister.他来看望他姐姐已有六七次了。 Four score and seven years ago our fathers brought forth on this continent a new nation conceived in Liberty and dedicated to the proposition that all men are created equal.(亚伯拉罕·林肯,葛底斯堡演讲)八十七年前,我们的先辈在这个大陆上创立了一个新的国家,它孕育于自由之中,奉行一切人生来平等的原则。

英语中各种词性的意义及用法

英语中各种词性的意义及用法 1、名词,Nouns (n.) 表示人或事物的名称的词,按意义分类 ①专有名词 表示具体的人,事物,地点,团体或机构的专有名称(第一个字母要大写)。例:China(中国)、Asia(亚洲)、the People’s Republic Of China(中华人民共和国)。 专有名词如果是含有名词短语,则必须使用定冠词the。如:the Great Wall(长城)。 姓氏名如果采用复数形式,则表示该姓氏一家人(复数含义)。如:the Greens(格林一家人)。 ②普通名词 表示某些人,某类事物,某种物质或抽象概念的名称。如: box, pen,tree,apple按是否可数分类 名词又可分为可数名词(Countable Nouns)和不可数名词(Uncountable Nouns) ①不可数名词 不可数名词是指不能以数目来计算,不可以分成个体的概念、状态、品质、感情或表 示物质材料的东西;它一般没有复数形式,只有单数形式,它的前面不能用不定冠词 a / an。抽象名词,物质名词和专有名词一般是不可数名词。 ②可数名词 可数名词是指能以数目来计算,可以分成个体的人或东西,因此它有复数形式。 2、代词,Pronouns (pron.)代替名词、数词、形容词的词,大多数代词具有名词和形 容词的功能。英语中的代词,按其意义、特征及在句中的作用分为:人称代词、物主代 词、指示代词、反身代词、相互代词、疑问代词、关系代词、连接代词和不定代词九种。 如:We, this, them, myself 3、动词,Verb (v.) 表示动作或状态的词,如:Jump,sing,visit,它又分为及物动 词和不及物动词两大类: ①及物动词:字典里词后标有transitive verb(vt.)的就是及物动词。及物动词后 一般必须跟有动作的对象(即宾语)。必须加宾语意思才完整的动词,就是及物动词。 ②不及物动词:不及物动词是不需要受词的动词。字典里词后标有intransitive verb (vi.)的就是不及物动词。不及物动词后不能直接跟有动作的对象(即宾语)。若要跟宾 语,必须先在其后添加上某个介词,如to,of ,at等后方可跟上宾语。 ③系动词:联系动词或称不完全不及物动词,虽有意义,但不完全,需要加名词、形 容词、代名词等作主词补语以补足其意义。常和形容词连用构成系表结构的连系动词有: look (看起来,看上去), feel (感觉), taste (尝起来), smell (闻起来),get (变得),turn(变),become(成为,变得),sound (听起来)等。 例如: The weather gets warmer and the trees turn green in spring. 春天天气 变暖和了,树变绿了。 The flowers smell sweet. 这些花闻起来很香。 The soup taste nice. 这汤尝上去很好吃。

动词以及动词短语的用法(动词后加to do 还是doing)

一动词加-ing 的情况 consider, suggest/advise,look forward to, excuse/pardon admit,delay/put off,fancy avoid,miss,keep/keep on,practise deny,finish,enjoy/appreciate forbid,imagine,risk can't help,mind,allow/permit,escape 考虑建议盼原谅, 承认推迟没得想. 避免错过继续练, 否认完成就欣赏. 禁止想象才冒险, 不禁介意准逃亡. 如:建议:advise,suggest,冒险:risk,献身:devote oneself to 二动词后加doing 和加to do sth. 意思不一样的情况 ①remember doing指记住过去做过的事,remember to do指记住将来要做的事,表示“不要忘记”。 ②forget doing表示忘记过去做过的事,forget to do表示“没有想起做某事”。 ③mean doing表示“意味着做某事”,mean to do表示“打算做某事”。 ④regret doing表示对已做过的事感到后悔,regret to do表示对将要做的事表示遗憾。 ⑤stop doing表示“停止做某事”,stop to do是停止做正在做的事以便去做另外一件事,这里的to do不是stop的宾语而是stop的目的状语。 ⑥try doing表示“尝试做某事”,try to do表示“设法、试图做某事”。 ⑦go on doing表示继续做同一件事,go on to do表示做完一件事后,接下去做另外一件事。 三动词后加to do sth. afford负担得起agree同意appear似乎,显得arrange安排 ask问attempt企图beg请求begin开始 choose选择claim要求decide决定demand要求 desire愿望determine决定expect期望fail不能 forget忘记happen碰巧hate憎恨,厌恶hesitate犹豫 hope希望intend想要learn学习long渴望 love爱manage设法mean意欲,打算need需要 neglect忽视offer提供omit忽略,漏other扰乱;烦恼

pretend三种易混淆不定式的用法

pretend三种易混淆不定式的用法 今天给大家带来了pretend三种易混淆不定式的用法,我们一起来学习吧,下面就和大家分享,来欣赏一下吧。 pretend三种易混淆不定式的用法 1. pretend to do sth .这个短语的意思是假装(将要)去做什么事情,适用于将来时态动作将来假装要去做但不一定去做的状态。 举例: If youpretend to know what you dont know,youll only make afool of yourself.不懂装懂就会闹笑话。(suri的回答) Child pretend to be mother and father in kindergarten.孩子在幼稚园里面假扮父亲和母亲(表将来)(JasoOon的回答) 2. pretend to have done sth .这个短语的意思是假装已经做过了某事,强调事情的一个完成的状态,侧重于假装的事情已经做好了。 举例:I pretend tohave seen nothing,but I cant.我假装自己没有看到任何东西,但是我做不到(侧重于一个完成时态,已经试图去假装没有看到)

she pretended to have finished the homeworkwhen she went out and played.当她出门玩的时候她假装自己已经完成了家庭作业。(假装做作业这个动作已经在出门玩之前做完了)(JasoOon的回答)以及怀陌的回答:When the teacher came in,he pretended to havefinished the homework.当老师进来的时候他假装自己已经完成家庭作业了,两者有异曲同工之妙。 3. pretendtobe doing sth 这个短语的意思是假装正在做某事,强调动作的一个进行时态。 举例:They pretend to be reading books when the teacher sneakingly stands at the back door.当老师偷偷地站在后门的时候他们假装正在读书(读书与老师站在后门都是过去进行时 态)(JasoOon的回答) Asmanypeople do,youoftenpretend to be doingwork when actuallyyou arejust wasting time online.像很多人一样,你经常假装正在工作,其实是在上网。 群主补充:昨天和今天已提交作业的同学,做得都很好,全部授予小红花。希望你们再接再厉,不要松懈哟。所以下周一出题者为所有已提交作业的同学或者你们选出的代表。

英语中的常见六大疑问词的用法

英语中的常见六大疑问词的用法 who whose where when what how 这六个词的常见用法(指的是一般情况下的用法,除特殊外) 1.回答中有“物”,就用what 来提问; 2.回答中有“地方,地点”,就用where来提问 3.回答中有“方式,方法”,就用how来提问 4.回答中有“人”,就用who来提问 5.回答中有“人的所有格”,就用whose来提问 6.回答中有“时间”,就用when来提问 以上这六种里最简单的为第四个。就刚才所说六点现在举例说明如下: 1. (What) are you going to take? 2. (Where) are you from? Sandwiches,milk and cakes. I am from New York. 3. (How) did you get there? 4. (Who)is going to help me? I got there by train . Mike. 5. (Whose) bag is this? 6. (When) are you going to watch TV? Mike's bag is this. At eight o'clock. 英语疑问词用法总结 句子是英语学习的核心。从句子使用的目的来分,它可分为四类1、陈述句(肯定句和否定句)2、疑问句(一般疑问句、特殊疑问句和选择疑问句)3、祈使句(肯定句和否定句)4、感叹句。四大句子类型的相互转换,对于学生来讲是个难点,为此,可通过说顺口留的形式来帮助学生解决这一难题。如:将陈述句变成一般疑问句,可以变成这样的顺口留:疑问疑问调个头,把be(系动词“is are am”)放在最前头。又如:将陈述句的肯定句变成否定句,我们就可以这样说:否定,否定加“not”,加在何处,加在系动词
