

Self-Describing Schemes for Interoperable MPEG-7

Multimedia Content Descriptions

Seungyup Paek, Ana B. Benitez, and Shih-Fu Chang1

Image & Advanced TV Lab, Department of Electrical Engineering

Columbia University, 1312 S.W. Mudd, Mail code 4712 Box F-4

New York, NY 10027, USA

ABSTRACT

In this paper, we present the self-describing schemes for interoperable image/video content descriptions, which are being developed as part of our proposal to the MPEG-7 standard. MPEG-7 aims to standardize content descriptions for multimedia data. The objective of this standard is to facilitate content-focused applications like multimedia searching, filtering, browsing, and summarization. To ensure maximum interoperability and flexibility, our descriptions are defined using the eXtensible Markup Language (XML), developed by the World Wide Web Consortium. We demonstrate the feasibility and efficiency of our self-describing schemes in our MPEG-7 testbed. First, we show how our scheme can accommodate image and video descriptions that are generated by a wide variety of systems. Then, we present two systems being developed that are enabled and enhanced by the proposed approach for multimedia content descriptions. The first system is an intelligent search engine with an associated expressive query interface. The second system is a new version of MetaSEEk, a metasearch system for mediation among multiple search engines for audio-visual information.

Keywords: MPEG-7, self-describing scheme, interoperability, audio-visual content description, visual information system, metasearch, XML.

1. INTRODUCTION

It is increasingly easy to access digital multimedia information. Correspondingly, it has become increasingly important to develop systems that process, filter, search, and organize this information, so that useful knowledge can be derived from the exploding mass of information that is becoming accessible. To enable exciting new systems for processing, searching, filtering, and organizing multimedia information, it has become clear that an interoperable method of describing multimedia content is necessary. This is the objective of the emerging MPEG-7 standardization effort.

In this paper, we first give a brief overview of the objectives of the MPEG-7 standard. MPEG-7 aims at the standardization of content descriptions of multimedia data. The objectives of this standard are to facilitate content-focused applications like multimedia searching, filtering, browsing, and summarization.

Then, we present self-describing schemes for interoperable image/video content descriptions, which are being developed as part of our proposal to MPEG-7. To ensure maximum interoperability and flexibility, our descriptions use the eXtensible Markup Language (XML), developed by the World Wide Web Consortium. Under the proposed self-describing schemes, an image is represented as a set of relevant objects that are organized in one or more object hierarchies. Similarly, a video is viewed as a set of relevant events that can be combined hierarchically in one or more event hierarchies. Both objects and events are described by feature descriptors that can link to external extraction and similarity code.

Finally, we demonstrate the feasibility and efficiency of our self-describing schemes in our MPEG-7 testbed. In our testbed, we will show how our scheme can accommodate image and video descriptions that are generated by a wide variety of systems. In addition, we introduce two systems being developed, which are enabled and enhanced by our approach for multimedia content descriptions. The first system is an intelligent search engine with an associated expressive query interface. The second system is a metasearch system for mediation among multiple search engines for audio-visual information.

1. Email: {syp, ana, sfchang}@ee.columbia.edu; WWW: http://www.ee.columbia.edu/~{syp, ana, sfchang}/

2. MPEG-7 STANDARD AND SCENARIOS

2.1. MPEG-7 standard

The MPEG-7 standard [14] has the objective of specifying a standard set of descriptors to describe various types of multimedia information. MPEG-7 will also standardize ways to define other descriptors as well as Description Schemes (DSs) for the structure of descriptors and their relationships. This description (i.e. the combination of descriptors and description schemes) will be associated with the content itself to allow fast and efficient searching for material of a user's interest. MPEG-7 will also standardize a language to specify description schemes, i.e. a Description Definition Language (DDL), and the schemes for encoding the descriptions of multimedia content.

2.2. MPEG-7 scenarios

MPEG-7 will improve existing applications and enable completely new ones. We will review three of the application scenarios that stand to be most significantly affected [3]: distributed processing, exchange, and personalized viewing of multimedia content.

• Distributed processing

MPEG-7 will provide the ability to interchange descriptions of audio-visual material independently of any platform, any vendor, and any application, which will enable the distributed processing of multimedia content. This standard for interoperable content descriptions will mean that data from a variety of sources can be plugged into a variety of distributed applications such as multimedia processors, editors, retrieval systems, filtering agents, etc. Some of these applications may be provided by third parties, generating a sub-industry of providers of multimedia tools that can work with the standard descriptions of the multimedia data.

The vision of the near future is one in which a user can access various content providers' web sites to download content and associated indexing data, obtained by some low-level or high-level processing. The user can then proceed to access several tool providers' web sites to download tools (e.g. Java applets) to manipulate the heterogeneous data descriptions in particular ways, according to the user's personal interests. An example of such a multimedia tool is a video editor. An MPEG-7-compliant video editor will be able to manipulate and process video content from a variety of sources if the description associated with each video is MPEG-7 compliant. Each video may come with varying degrees of description detail such as camera motion, scene cuts, annotations, and object segmentations.

• Content exchange

A second scenario that will greatly benefit from an interoperable content-description standard is the exchange of multimedia content among heterogeneous audio-visual databases. MPEG-7 will provide the means to express, exchange, translate, and reuse existing descriptions of audio-visual material.

Currently, TV broadcasters, radio broadcasters, and other content providers manage and store an enormous amount of audio-visual material. This material is described manually using textual information and proprietary databases. Describing audio-visual material is an expensive and time-consuming task, so it is desirable to minimize the re-indexing of data that has been processed before.

Consider a media company that purchases videos from a TV broadcaster. The TV broadcaster has already described and indexed the content in its proprietary description scheme. Without an interoperable content description, the purchasing company will have to invest manpower to manually translate the broadcaster's description into its own proprietary scheme. Interchange of multimedia content descriptions would be possible if all content providers embraced the same scheme and system. As this is unlikely to happen, MPEG-7 proposes to adopt a single industry-wide interoperable interchange format that is system and vendor independent.

• Customized views

Finally, multimedia players and viewers compliant with the multimedia description standard will provide users with innovative capabilities such as multiple views of the data configured by the user. The user could change the display's configuration without requiring the data to be downloaded again in a different format from the content broadcaster.

The ability to capture and transmit semantic and structural annotations of the audio-visual data, made possible by MPEG-7, greatly expands the range of possibilities for client-side manipulation of the data for display purposes. For example, a browsing system can allow users to quickly browse through videos if it receives information about their corresponding semantic structure. For instance, when modeling a tennis match video, the viewer can choose to view only the third game of the second set, all the overhead smashes made by one player, etc.

These examples only hint at the possible uses that creative multimedia-application designers will find for richly structured data delivered in a standardized way based on MPEG-7.

3. SELF-DESCRIBING SCHEMES

In this section, we present description schemes for interoperable image/video content descriptions. The proposed description schemes are self-describing in the sense that they combine the data and the structure of the data in the same format. The advantages of this type of description are flexibility, easy validation, and efficient exchange.

3.1. eXtensible Markup Language (XML)

SGML (Standard Generalized Markup Language, ISO 8879) is a standard language for defining and using document formats. SGML allows documents to be self-describing, i.e. they describe their own grammar by specifying the tag set used in the document and the structural relationships that those tags represent. SGML makes it possible to define your own formats for your own documents, to handle large and complex documents, and to manage large information repositories. However, full SGML contains many optional features that are not needed for Web applications and has proven too complex for current vendors of Web browsers.

The World Wide Web Consortium (W3C) has created an SGML Working Group to build a set of specifications to make it easy and straightforward to use the beneficial features of SGML on the Web [21]. The goal of the W3C SGML activity is to enable the delivery of self-describing data structures of arbitrary depth and complexity to applications that require such structures. The first phase of this effort is the specification of a simplified subset of SGML specially designed for Web applications. This subset, called XML (eXtensible Markup Language), retains the key SGML advantages in a language that is designed to be vastly easier to learn, use, and implement than full SGML.

Before describing the image and video DSs, we present some of the core features of XML. Let's start with a simple XML element:

<image>Hello MPEG-7 world!</image>

<image> is the start tag and </image> is the end tag; "Hello MPEG-7 world!" is the content of the element. What does the image tag mean? In short, it means anything you want it to mean. XML predefines no tags at all. Rather than relying on a few hundred predefined tags, XML lets you create the tags you need to describe your data. Users define what is allowed in each document by providing rules, collectively known as the Document Type Definition (DTD). The DTD states the element types with their characteristics, the notations, and the entities allowed in the document. Apart from the DTD, XML documents must follow some basic well-formedness rules. These are the minimum criteria for XML parsers and processors.
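As an illustration, a DTD that licenses the image element above could be sketched as follows (this is a minimal hypothetical fragment for exposition, not part of the MPEG-7 proposal):

```xml
<!-- Hypothetical DTD fragment: declares an image element that
     contains character data and carries an optional id attribute. -->
<!ELEMENT image (#PCDATA)>
<!ATTLIST image id ID #IMPLIED>
```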

Text in XML documents consists of characters. A document's text is divided into character data and markup. To a first approximation, markup describes a document's logical structure while character data is the basic content of the document. Generally, anything inside a pair of <> angle brackets is markup and anything that is not inside these brackets is character data. Start tags and empty tags may optionally contain attributes. An attribute is a name-value pair separated by an equal sign. Work is in progress to include binary data in XML tags. Currently, XML allows defining binary entities pointing to binary data (e.g. images). They require an associated notation describing the type of resource (e.g. GIF and JPG).
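For instance, a notation and a binary entity pointing to a GIF image could be declared as follows (the entity name and file name are illustrative):

```xml
<!-- Hypothetical declarations: a notation for the GIF format and an
     unparsed (binary) entity that points to an external image file. -->
<!NOTATION gif SYSTEM "image/gif">
<!ENTITY photo SYSTEM "photo.gif" NDATA gif>
```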

3.2. Image description scheme

In this section, we present the proposed description scheme for images. To clarify the explanation, we will use the example shown in Figure 1. Using this example, we will walk through the image DS expressed in XML. Along the way, we will explain the use of various XML elements that are defined for the proposed image DS. The complete set of rules for the tags in the image and video description schemes is defined in our document type definitions [15]. Another advantage of using XML as the DDL is that it provides the capability to import external description schemes' DTDs and incorporate them in one description by using namespaces. We will see an example later in this section.

The basic description element of our image description scheme is the object element (<object>). An object element represents a region of the image for which some features are available. There are two different types of objects: physical and logical objects. Physical objects usually correspond to continuous regions of the image with some descriptors in common (semantics, features, etc.), in other words, real objects in the image. Logical objects are groupings of objects based on some high-level semantic relationships (e.g. faces). The object element comprises the concepts of group of objects, objects, and regions in the visual literature. The set of all objects identified in an image is included within the object set element (<object_set>).

For the image example of Figure 1.a, we have chosen to describe the objects listed below. Each object element has a unique identifier within an image description. The identifier is expressed as an attribute of the object element (id). Another attribute of the object element (type) distinguishes between physical and logical objects. We have left the content of each object element empty to show clearly the overall structure of the image description. Later in the section, we will describe the features that can be included within the object element.
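Such an object set might be sketched as follows (the identifiers, type values, and number of objects are placeholders, since the objects depend on the content of Figure 1.a):

```xml
<!-- Illustrative object set: empty object elements exposing only
     the id and type attributes described above. -->
<object_set>
  <object id="O0" type="PHYSICAL"></object>
  <object id="O1" type="PHYSICAL"></object>
  <object id="O2" type="LOGICAL"></object>
</object_set>
```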

Figure 1: a) Image example. b) High-level description of the image by the proposed image description scheme.

The image description scheme is comprised of object elements that are combined hierarchically in one or more object hierarchy elements (<object_hierarchy>). The hierarchy is a way to organize the object elements in the object set element. Each object hierarchy consists of a tree of object node elements (<object_node>). Each object node points to an object. The objects in an image can be organized by their location in the image or by their semantic relationships. These two ways to group objects generate two types of hierarchies: physical and logical hierarchies. A physical hierarchy describes the physical location of the objects in the image. On the other hand, a logical hierarchy organizes the objects based on a higher-level understanding of their semantics, similar to semantic clustering.

Continuing with the image example in Figure 1.a, two possible hierarchies are shown in Figure 1.b. These hierarchies are expressed in XML below. The type of hierarchy is included in the object hierarchy element as an attribute (type). The object node element has an associated unique identifier in the form of an attribute (id). The object node element references an object element by using the latter's unique identifier. The reference to the object element is included as an attribute (object_ref). An object element can include links back to nodes in the object hierarchy as an attribute too (object_node_ref).
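A sketch of two such hierarchies, using the attributes just described, could read as follows (the node identifiers, references, and grouping are illustrative, since Figure 1.b is not reproduced here):

```xml
<!-- Illustrative physical hierarchy: object nodes reference
     objects in the object set through object_ref. -->
<object_hierarchy type="PHYSICAL">
  <object_node id="ON0" object_ref="O0">
    <object_node id="ON1" object_ref="O1"/>
    <object_node id="ON2" object_ref="O2"/>
  </object_node>
</object_hierarchy>

<!-- Illustrative logical hierarchy: the same objects grouped by
     semantics instead of spatial layout. -->
<object_hierarchy type="LOGICAL">
  <object_node id="ON3" object_ref="O0">
    <object_node id="ON4" object_ref="O2"/>
  </object_node>
</object_hierarchy>
```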

An object set element and one or more object hierarchy elements form the image element (<image>). The image element symbolizes the image or picture being described.

In our image description scheme, the object element contains the feature elements; they include location, color, texture, shape, size, motion, time, and annotation elements, among others. Time and motion descriptors only make sense when the object belongs to a video sequence. The location element contains pointers to the locations of the image. Note that annotations can be textual, visual, or audio. These features can be extracted or assigned automatically or manually. For those features extracted automatically, the feature descriptors can include links to external extraction and similarity matching code. An example is included below. This example also shows how external DSs can be imported and combined with ours.

<object id="O1" type="LOGICAL" object_node_ref="ON1">
  <annotation>
    <text_annotation>Face</text_annotation>
  </annotation>
  <face_ds:face_description xmlns:face_ds="face_ds.dtd"/>
</object>

In summary, both the object set and object hierarchy elements are part of the image element (<image>). The objects in the object set are combined hierarchically in one or more object hierarchy elements. For efficient traversal of the image description, links are provided to traverse from objects in the object set to corresponding object nodes in the object hierarchy and vice versa. The objects include various feature descriptors that can link to external extraction and similarity matching code.

3.3. Video description scheme

In this section, we present the proposed description scheme (DS) for videos. To clarify the explanation, we will use the example shown in Figure 2. Using this example, we will walk through the video DS expressed in XML. Along the way, we will explain the use of various XML elements that are defined for the proposed MPEG-7 video DS. The structure of the image description scheme and the video description scheme are very similar.

The basic description element of our video description scheme is the event element (<event>). An event represents one or more shots of the video for which some features are available. We distinguish three different types of events: a shot, a continuous group of shots, and a discontinuous group of shots. Discontinuous groups of shots will usually be associated together based on common features (e.g. background color) or high-level semantic relationships (e.g. actor on screen). The event element comprises the concepts of story, scene, and shot in the visual literature. The set of all events identified in a video is included within the event set element (<event_set>).

For the video example of Figure 2.b, we have chosen to describe the events listed below. Each event element has a unique identifier within a video description. The identifier is expressed as an attribute of the event element (id). Another attribute of the event element (type) distinguishes between the three different types of events. We have left each event element empty to show clearly the overall structure of the video description. Later in the section, we will describe the features that can be included within the event element.
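Such an event set might be sketched as follows (the identifiers and the type attribute values are placeholders, since the events depend on the content of Figure 2):

```xml
<!-- Illustrative event set: empty event elements exposing only the
     id attribute and a type attribute; the type values shown are
     placeholders for the three event types described above. -->
<event_set>
  <event id="E0" type="SHOT"></event>
  <event id="E1" type="CONTINUOUS_GROUP"></event>
  <event id="E2" type="DISCONTINUOUS_GROUP"></event>
</event_set>
```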

Figure 2: a) Video example. b) High-level description of the video by the proposed video description scheme.

The video description scheme is comprised of event elements that are combined hierarchically in one or more event hierarchy elements (<event_hierarchy>). The hierarchy is a way to organize the event elements in the event set element. Each event hierarchy consists of a tree of event node elements (<event_node>). Each event node points to an event. The events in a video can be organized by their location in the video or by their semantic relationships. These two ways to group events generate two types of hierarchies: physical and logical hierarchies. A physical hierarchy describes the time composition of the events in the video. On the other hand, a logical hierarchy organizes the events based on a higher-level understanding of their semantics, similar to semantic clustering.

Continuing with the video example in Figure 2.a, one possible hierarchy is shown in Figure 2.b. The corresponding XML is below. The type of hierarchy is included in the event hierarchy element as an attribute (type). The event node element has an associated unique identifier as an attribute (id). The event node element references an event element by using the latter's unique identifier. The reference to the event element is included as an attribute (event_ref). An event element can include links back to nodes in the event hierarchy to jump between events and event nodes in both directions (event_node_ref).
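A sketch of such a hierarchy, using the attributes just described, could read as follows (the node identifiers, references, and nesting are illustrative, since Figure 2.b is not reproduced here):

```xml
<!-- Illustrative physical (time-composition) hierarchy of events:
     event nodes reference events in the event set through event_ref. -->
<event_hierarchy type="PHYSICAL">
  <event_node id="EN0" event_ref="E1">
    <event_node id="EN1" event_ref="E0"/>
    <event_node id="EN2" event_ref="E2"/>
  </event_node>
</event_hierarchy>
```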


An event set element and one or more event hierarchy elements form the video element (<video>). The video element symbolizes the video sequence being described.

In our video description scheme, the event element contains the feature elements; they include location, shot transition (i.e. various within-shot or across-shot special effects), camera motion, time, key frame, annotation, and object set elements, among others. The object element is defined in the image description scheme; it represents the relevant objects in the event. As in the image DS, these features can be extracted or assigned automatically or manually. For those features extracted automatically, the feature descriptors can include links to extraction and similarity matching code. For example,
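an event carrying such feature elements might be sketched as follows (the element names inside the event, the attribute names, and the external code reference are illustrative assumptions, not part of the original proposal text):

```xml
<!-- Hypothetical event: a shot with a key frame, a camera-motion
     feature linking to external extraction code, and an annotation. -->
<event id="E0" type="SHOT">
  <key_frame frame="120"/>
  <camera_motion extraction_code="pan_zoom_estimator.class">
    pan_left
  </camera_motion>
  <annotation>tennis serve</annotation>
</event>
```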

In summary, both the event set and event hierarchy elements are part of the video element (<video>). The events in the event set are combined hierarchically in one or more event hierarchy elements. For efficient traversal of the video description, links are provided to traverse from events in the event set to corresponding event nodes in the event hierarchy and vice versa. The events include various feature descriptors that can link to external extraction and similarity matching code.

4. MPEG-7 TESTBED

The proposed self-describing schemes are intuitive, flexible, and efficient. We demonstrate the feasibility of our self-describing schemes in our MPEG-7 testbed. In our testbed, we are using the self-describing schemes for descriptions of images and videos that are generated by a wide variety of systems we have developed. We are developing two systems that are enabled and enhanced by our approach for multimedia content descriptions. The first system is an intelligent search engine with an associated expressive query interface. The second system is a new version of MetaSEEk, a metasearch system for mediation among multiple search engines for audio-visual information.

4.1. Description generator

In our MPEG-7 testbed, we are using various image/video processing, analysis, and annotation systems to generate a rich variety of descriptions for a collection of image/video items, as shown in Figure 3. The descriptions that we generate for visual content include low-level visual features of automatically segmented regions, user-defined semantic objects, high-level scene properties, classifications, and associated textual information. We are also including descriptions that are generated by our collaborators. As described in section 3.1., we are using XML as the DDL for the descriptions. The descriptions have the structure of the image/video description schemes (DSs) described in sections 3.2. and 3.3. The DS and DDL are designed to accommodate descriptions generated by a wide variety of heterogeneous systems.

Once all the descriptions for an image/video item are generated, they are loaded into a database, which the search engine accesses. We now describe the systems used to generate the descriptions.

• VideoQ: Region-based indexing and searching system.

This system extracts visual features such as color, texture, motion, shape, and size for automatically segmented regions of a video sequence [5]. The system first decomposes a video into separate shots. This is performed by scene change detection [13]. Scene changes may be either abrupt or transitional (e.g. dissolve, fade in/out, and wipe). For each shot, the system estimates the global motion (i.e. the motion of the dominant background) and the camera motion. Then, it segments, detects, and tracks regions across the frames in the shot, computing different visual features for each region. For each shot, the description generated by this system is a set of regions with visual and motion features, and the camera motion. Some keywords, assigned manually, are also available for each shot.

• AMOS: Video object segmentation system.

Currently, fully automatic segmentation of semantic objects is only successful in constrained visual domains. The AMOS system [23] takes a powerful approach in which automatic segmentation is integrated with user input to track semantic objects in video sequences. For general video sources, the system allows users to define an approximate object boundary by using a tracing interface. Given the approximate object boundary, the system automatically refines the boundary and tracks the movement of the object in subsequent frames of the video. The system is robust enough to handle many real-world situations that are hard to model in existing approaches, including complex objects, fast and intermittent motion, complicated backgrounds, multiple moving objects, and partial occlusion. The description generated by this system is a set of semantic objects with the associated regions and features, which can be manually annotated with text.

• MPEG-domain face detection system.

This system efficiently and automatically detects faces directly in the MPEG compressed domain [20]. The human face is an important subject in video. It is ubiquitous in news, documentaries, movies, etc., providing key information to the viewer for the understanding of the video content. This system provides a set of regions with face labels.

• WebClip: Hierarchical video browsing system.

This system parses compressed MPEG video streams to extract shot boundaries, moving objects, object features, and camera motion [12]. It also generates a hierarchical shot-based browsing interface for intuitive visualization and editing of videos.

Figure 3: Architecture for combining descriptions generated by heterogeneous systems.

• Visual Apprentice: Model-based image classification system.

Many automatic image classification systems are based on a pre-defined set of classes in which class-specific algorithms are used to perform classification. The Visual Apprentice [10] allows users to define their own classes and provide examples that are used to automatically learn visual models. The visual models are based on automatically segmented regions, their associated visual features, and their spatial relationships. For example, the user may build a visual model of a portrait in which one person wearing a blue suit is seated on a brown sofa, and a second person is standing to the right of the seated person. The system uses a combination of lazy learning, decision trees, and evolution programs during classification. The description generated by this system is a set of text annotations, i.e. the user-defined classes, for each image.

• In Lumine: Scene classification system.

The In Lumine system [16] is a method for high-level semantic classification of images and video shots based on low-level visual features. The core of the system consists of various machine learning techniques such as rule induction, clustering, and nearest neighbor classification. The system is being used to classify images and video scenes into high-level semantic scene classes such as {nature landscape}, {city/suburb}, {indoor}, and {outdoor}. The system focuses on machine learning techniques because we have found that a fixed set of rules that might work well with one corpus may not work well with another corpus, even for the same set of semantic scene classes. Since the core of the system is based on machine learning techniques, the system can be adapted to achieve high performance for different corpora by training the system with examples from each corpus. The description generated by this system is a set of text annotations to indicate the scene class for each image or each keyframe associated with the shots of a video sequence.

Currently, we are exploring the integration of visual features and textual features for scene classification. Images from on-line news sources (e.g. Clarinet) have associated rich textual information (e.g. captions or articles). This textual information can be included in an expanded text-based DS in MPEG-7 as well.

4.2. Intelligent search engine

The image and video descriptions generated in our MPEG-7 testbed can be broadly categorized into three classes of descriptions [11], as follows:

• Semantic descriptions

These descriptions give the position and characteristics of visible objects in images and videos. The face detection system gives the position and characteristics of face objects in images and videos. The In Lumine scene classification system views an entire image or video key frame as an object and associates semantic scene labels such as indoor, city, etc. The Visual Apprentice system gives the position and characteristics of user-defined objects in images and videos. Images from on-line sources include rich textual information as well. Finally, the AMOS system allows users to efficiently outline and track semantic objects in video sequences.

• Feature descriptions

These descriptions are generated by segmenting images and video frames into regions, either automatically or manually, and extracting characteristics such as color, texture, size, shape, and motion for each region. The VideoQ system generates descriptions of this type.

• Media-based descriptions

These descriptions are based on the medium the data is expressed in. What occurred in the translation from scene to video? What is the sampling rate of the digital file? Is it an analogue source? Where are the shot cuts? What is the camera's focal length? When applicable, we will rely on the program edit data about the image/video that we get from the content provider (e.g. stock video companies).

The descriptions that are used by current state-of-the-art image/video search engines can be viewed primarily as falling into the classes of visual feature descriptions [6][8] and semantic descriptions [9][17][18][22]. The visual feature-based approach has been to obtain and utilize discriminants (features) that are useful in conducting similarity queries for visual information. Recent efforts have focused on a few specific visual dimensions such as color, texture, shape, motion, and spatial information. The QBIC system [7] enables querying whole images and manually extracted image regions by color, texture, and shape. The Virage system [1] provides for querying global images by features such as color, composition, texture, and structure. The SaFE system [19] allows users to query images by regions and their spatial and feature attributes. The system enables the user to find the images that contain arrangements of regions similar to those diagrammed in a sketch. The VideoQ system [5] allows users to search for videos based on a set of visual features and spatio-temporal relationships of regions.

Semantic-level video indexing primarily uses textual information (e.g. speech and transcripts) to index the segmented video units (e.g. shots or stories). Some approaches use high-level summaries (e.g. scene graphs or tables of contents) to provide efficient visualization of video content. These high-level descriptions can be implemented using the proposed self-describing description scheme for video.

As part of the MPEG-7 testbed, we are developing new expressive query interfaces and intelligent search engines that allow users to express a rich set of queries for image/video information by combining query elements that are in different description classes, not only in a single description class.

As an example, consider the VideoQ query interface and search engine. The query is formulated exclusively in terms of elements in the visual feature description class. In VideoQ, a sketch-based query interface allows users to assign motion, color, shape, and texture attributes to multiple regions. If the user is looking for a person that is walking in a particular direction in an outdoor scene, the user may try to draw a region with the color of a human face and assign a motion trajectory to the region. The user may also try to draw a blue sky to indicate that it is an outdoor scene.

However, in the new expressive query interface that we are building for the MPEG-7 testbed, users will be able to draw a region with an associated motion trajectory, indicate that the region is a human face, and also restrict the search to outdoor scenes. This will be possible because we will process all images and videos in our testbed with the face detection system and the scene classification system to generate semantic descriptions, along with the VideoQ system to extract feature descriptions. In other words, the search engine will have access to a database of descriptions of the images and videos that contains not only feature descriptions, but also semantic and media-based descriptions.

Another example is the use of the video event hierarchy to build a video browsing interface based on a table of contents for videos [17]. We show an example XML file for this interface.
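A table-of-contents description of this kind might look as follows; the tag names are illustrative only, not the exact ones defined in [15]:

```xml
<!-- Hypothetical event hierarchy for a news video, usable as a
     table of contents for browsing. -->
<video>
  <event id="e0" type="program">
    <event id="e1" type="story">
      <annotation>Opening headlines</annotation>
      <time start="00:00:00" end="00:01:30"/>
    </event>
    <event id="e2" type="story">
      <annotation>Weather report</annotation>
      <time start="00:01:30" end="00:03:00"/>
    </event>
  </event>
</video>
```

A browsing interface can render the nested events directly as an expandable outline, with each leaf event linking to its time interval in the video.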

4.3. Metasearch engine

Metasearch engines act as gateways linking users automatically and transparently to multiple search engines. Most of the current metasearch engines work with text. Our metasearch engine, MetaSEEk [2], explores the issues involved in querying large, distributed, on-line visual information systems [4]. In this section, we will describe the impact that an interoperable content description for multimedia data such as MPEG-7 can have on metasearch engines.

MetaSEEk is designed to intelligently select and interface with multiple on-line image search engines by ranking their performance for different classes of user queries. The overall architecture of MetaSEEk is shown in Figure 4. The three main components of the system are quite standard for metasearch engines; they include the query dispatcher, the query translator, and the display interface. The procedure for each search is as follows:

• Upon receiving a query, the dispatcher selects the target search engines to be queried by consulting the performance database at the MetaSEEk site. This database contains performance scores of past query successes and failures for each supported search engine. The query dispatcher only selects search engines that provide capabilities compatible with the user's query (e.g. color, keywords).

• The query translators then translate the user query to suitable scripts conforming to the interfaces of the selected search engines.

• Finally, the display component uses the performance scores to merge the results from each search engine, and displays them to the user.

MetaSEEk evaluates the quality of the results returned by each search engine based on the user's feedback. This information is used to update the performance database. The operation of MetaSEEk is severely constrained by the interface limitations of current search engines: among others, they support only query by example and by sketch, and return results as a flat list of images (in some cases with similarity scores).
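The dispatch-and-merge loop above can be sketched as follows; the engine names, capability sets, and performance scores are hypothetical stand-ins for MetaSEEk's actual performance database:

```python
# Sketch of the three-stage metasearch loop described above; engine
# names, capabilities, and performance scores are hypothetical.

PERFORMANCE_DB = {"EngineA": 0.8, "EngineB": 0.5}
CAPABILITIES = {"EngineA": {"color", "keywords"}, "EngineB": {"keywords"}}

def dispatch(query_features):
    """Select engines whose capabilities cover the query, best-scoring first."""
    eligible = [e for e, caps in CAPABILITIES.items() if query_features <= caps]
    return sorted(eligible, key=lambda e: PERFORMANCE_DB[e], reverse=True)

def merge(results_per_engine):
    """Weight each engine's similarity scores by its performance score."""
    merged = []
    for engine, results in results_per_engine.items():
        weight = PERFORMANCE_DB[engine]
        merged += [(img, weight * score) for img, score in results]
    return sorted(merged, key=lambda pair: pair[1], reverse=True)

print(dispatch({"color"}))  # only EngineA advertises a color capability
```

User feedback then feeds back into the performance scores, closing the loop between display and dispatch.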

Figure 4: Overall architecture of MetaSEEk.

As discussed in the previous section, we envision a major transformation in multimedia search engines thanks to the development of the MPEG-7 standard. Future systems will accept not only queries by example and by sketch, but also queries by MPEG-7 multimedia descriptions. Users will be able to submit MPEG-7 descriptions of multimedia content as the query input to search engines, without the need to submit the multimedia content itself. In this case, search engines will work on a best-effort basis to provide appropriate results: search engines unfamiliar with some descriptors in the query description may simply ignore those descriptors; others may try to translate them to local descriptors. Furthermore, queries will return a list of matched images as well as their MPEG-7 descriptions (partial or complete). Each search engine will also make available the description scheme of its content and possibly some proprietary code.

We envision the connection between the individual search engines and the metasearch engine to be a path for MPEG-7 streams, which will enhance the performance of metasearch engines. In particular, the ability of the proposed description schemes to dynamically download programs for feature extraction and similarity matching, by using linking or code embedding, will open the door to improved metasearching capabilities. Metasearch engines will use the description schemes of each target search engine to learn about the content and capabilities of each search engine. This knowledge also enables meaningful queries to the repository, proper decisions about which search engines to query, efficient ways of merging results from different repositories, and intelligent display of the search results from heterogeneous sources.

Consider a metasearch engine that receives a query by MPEG-7 multimedia description (queries by example and by sketch can be easily expressed using the MPEG-7 standard).

• First, the dispatcher will match the query description to the description schemes of each search engine to ensure the satisfaction of the user preferences in the query (feature selection, annotations, etc.). It will then select the target search engines to be queried by consulting the performance database. If the user wants to search by color and one search engine does not support any color descriptors, it will not be useful to query that particular search engine. The current prototype of MetaSEEk already considers the specific features supported by each search engine; however, they are fixed parameters and have to be maintained manually. By contrast, an MPEG-7 compliant metasearch engine could adapt automatically to additions of new features in the target search engines by consulting their description schemes regularly.

• Then, the query translators will adapt the query description to descriptions conforming to each selected search engine. This translation will also be based on the description schemes available from each search engine. This task may require executing extraction code for standard descriptors, or downloading extraction code from specific search engines to transform descriptors. For example, if the user specifies the color feature of an object using a color histogram of 166 bins, the query translator will translate it to the specialized color descriptors used by each search engine (e.g. dominant color, or a color histogram of 128 bins). If the direct transformation of features is not possible, the metasearch engine may download the extraction code from the target search engine to extract the compatible features from the query object.
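The bin-translation step can be illustrated with a small sketch. Note that 166 bins do not divide evenly into 128, so a real translator may need the engine's own extraction code; the sketch below covers only the simple case where the coarse binning is an integer grouping of the fine bins, and uses toy bin counts:

```python
import numpy as np

def coarsen_histogram(hist, factor):
    """Translate a fine-binned histogram to a coarser one by summing
    groups of adjacent bins (bin count must be divisible by factor)."""
    hist = np.asarray(hist, dtype=float)
    return hist.reshape(-1, factor).sum(axis=1)

fine = np.array([0.1, 0.1, 0.05, 0.15, 0.2, 0.1, 0.2, 0.1])  # 8 bins
print(coarsen_histogram(fine, 2))  # 4 coarser bins: [0.2 0.2 0.3 0.3]
```

When bin boundaries do not line up at all (or the target descriptor is of a different kind, e.g. dominant color), re-extraction from the query object with downloaded code is the fallback.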

• Before displaying the results to the user, the query interface will merge the results from each search engine by translating the descriptions of the result images into a homogeneous form. Again, code for standard descriptors, or specialized code, may need to be executed. User preferences and/or past performance information may determine how the results are merged and displayed to the user.
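Merging heterogeneous result lists first requires putting each engine's scores on a common scale. A minimal sketch, with hypothetical engine results:

```python
# Sketch of homogenizing result scores before merging; the engine
# result lists and raw score ranges here are hypothetical.

def normalize_scores(results):
    """Min-max normalize one engine's similarity scores to [0, 1] so
    that results from heterogeneous engines become comparable."""
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [(img, 1.0) for img, _ in results]
    return [(img, (s - lo) / (hi - lo)) for img, s in results]

engine_a = [("a.jpg", 10.0), ("b.jpg", 20.0)]  # raw engine-specific scores
print(normalize_scores(engine_a))  # [('a.jpg', 0.0), ('b.jpg', 1.0)]
```

After normalization, the lists can be interleaved and re-ranked, optionally weighting each engine by its past performance.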

We are currently developing the necessary components to begin the evaluation and testing of the metasearch engine testbed based on the self-describing schemes proposed in this paper.

5. CONCLUSIONS

The increasing availability of digital multimedia information requires an interoperable multimedia content description for its efficient processing, filtering, and searching, among other tasks. This is the objective of the MPEG-7 standard. In this paper, we have presented self-describing schemes for image and video content that we plan to propose to MPEG-7. To ensure maximum interoperability and flexibility, our descriptions use the eXtensible Markup Language (XML), developed by the World Wide Web Consortium. Under the proposed self-describing schemes, an image is represented as a set of relevant objects that are organized in one or more object hierarchies. Similarly, a video is viewed as a set of relevant events that can be combined hierarchically in one or more event hierarchies. Both objects and events are described by feature descriptors that can link to external extraction and similarity code.

The proposed self-describing schemes are intuitive, flexible, and efficient. We demonstrate the feasibility of our self-describing schemes in our MPEG-7 testbed. In the testbed, we are using the self-describing schemes for descriptions of images and videos that are generated by a wide variety of image/video indexing systems. We are developing two systems that are enabled and enhanced by our approach for multimedia content descriptions. The first system is an intelligent search engine with an associated expressive query interface. The second system is a new version of MetaSEEk, a metasearch system for mediation among multiple search engines for audio-visual information.

Currently, we are also undertaking interoperability experiments with several research groups using the proposed XML-based DS and our MPEG-7 testbed.

6. REFERENCES

[1] J.R. Bach, C. Fuller, A. Gupta, A. Hampapur, B. Horowitz, R. Humphrey, R.C. Jain, and C. Shu, "Virage image search engine: an open framework for image management", Symposium on Electronic Imaging: Science and Technology - Storage & Retrieval for Image and Video Databases IV, IS&T/SPIE '96, San Jose, CA, Feb. 1996.

[2] A.B. Benitez, M. Beigi, and S.-F. Chang, "Using Relevance Feedback in Content-Based Image Metasearch", IEEE Internet Computing, Vol. 2, No. 4, pp. 59-69, Jul./Aug. 1998.

[3] J. Bosak, "XML, Java, and the future of the Web".

[4] S.-F. Chang, J.R. Smith, M. Beigi, and A.B. Benitez, "Visual Information Retrieval from Large Distributed On-line Repositories", Communications of the ACM, Vol. 40, No. 12, pp. 63-71, Dec. 1997.

[5] S.-F. Chang, W. Chen, H.J. Meng, H. Sundaram, and D. Zhong, "A Fully Automated Content-Based Video Search Engine Supporting Spatio-Temporal Queries", IEEE Transactions on Circuits and Systems for Video Technology, Vol. 8, No. 5, pp. 602-615, Sep. 1998.

[6] S.-F. Chang, J.R. Smith, H.J. Meng, H. Wang, and D. Zhong, "Finding Images/Videos in Large Archives", CNRI Digital Library Magazine, Feb. 1997.

[7] M. Flickner, H. Sawhney, W. Niblack, J. Ashley, Q. Huang, B. Dom, M. Gorkani, J. Hafner, D. Lee, D. Petkovic, D. Steele, and P. Yanker, "Query by Image and Video Content: The QBIC System", IEEE Computer Magazine, Vol. 28, No. 9, pp. 23-32, Sep. 1995.

[8] A. Gupta and R. Jain, "Visual Information Retrieval", Communications of the ACM, Vol. 40, No. 5, pp. 70-79, May 1997.

[9] A.G. Hauptmann and M. Smith, "Text, Speech and Vision for Video Segmentation: The Informedia Project", AAAI Fall Symposium, Computational Models for Integrating Language and Vision, Boston, MA, Nov. 1995.

[10] A. Jaimes and S.-F. Chang, "Model-based Classification of Visual Information for Content-Based Retrieval", Symposium on Electronic Imaging: Multimedia Processing and Applications - Storage and Retrieval for Image and Video Databases VII, IS&T/SPIE '99, San Jose, CA, Jan. 1999.

[11] A. Lindsay, "Descriptor and Description Scheme Classes", ISO/IEC JTC1/SC29/WG11 MPEG98/M4015 MPEG document, Atlantic City, NJ, Oct. 1998.

[12] J. Meng and S.-F. Chang, "CVEPS: A Compressed Video Editing and Parsing System", ACM Multimedia Conference, Boston, MA, Nov. 1996.

[13] J. Meng, Y. Juan, and S.-F. Chang, "Scene Change Detection in a MPEG Compressed Video Sequence", Symposium on Electronic Imaging: Science & Technology, IS&T/SPIE '95, Vol. 2417, San Jose, CA, Feb. 1995.

[14] MPEG-7's Call for Proposals for Multimedia Content Description Interface, web site http://drogo.cselt.it/mpeg/public/mpeg-7_cfp.htm.

[15] S. Paek, A.B. Benitez, and S.-F. Chang, "Document Type Definitions for the Self-Describing Image/Video Schemes", ADVENT Project Technical Report #1998-02, Columbia University, Nov. 1998.

[16] S. Paek and S.-F. Chang, "In Lumine: A Scene Classification System for Images and Videos", ADVENT Project Technical Report #1998-03, Columbia University, Nov. 1998.

[17] Y. Rui, T.S. Huang, and S. Mehrotra, "Constructing Table of Contents for Videos", ACM J. of Multimedia Systems, 1998.

[18] B. Shahraray and D.C. Gibbon, "Automatic Generation of Pictorial Transcript of Video Programs", SPIE, Vol. 2417, pp. 512-518, 1995.

[19] J.R. Smith and S.-F. Chang, "Integrated Spatial and Feature Image Query", ACM Multimedia Systems Journal, 1997.

[20] H. Wang and S.-F. Chang, "A Highly Efficient System for Automatic Face Region Detection in MPEG Video Sequences", IEEE Trans. on Circuits and Systems for Video Technology, special issue on Multimedia Systems and Technologies, Vol. 7, No. 4, pp. 615-628, Aug. 1997.

[21] World Wide Web Consortium's (W3C) XML web site http://www.w3.org/XML.

[22] M.M. Yeung and B.L. Yeo, "Video Content Characterization and Compaction for Digital Library Applications", Symposium on Electronic Imaging: Science & Technology - Storage and Retrieval for Still Image and Video Databases V, IS&T/SPIE '97, San Jose, CA, Vol. SPIE 3022, pp. 45-58, Feb. 1997.

[23] D. Zhong and S.-F. Chang, "AMOS: An Active System for MPEG-4 Video Object Segmentation", IEEE International Conference on Image Processing, Chicago, IL, Oct. 1998.
