
The VLDB Journal (2004) 13:86–103 / Digital Object Identifier (DOI) 10.1007/s00778-003-0114-0

Rule-based spatiotemporal query processing for video databases

Mehmet Emin Dönderler1, Özgür Ulusoy2, Uğur Güdükbay2

1 Department of Computer Science and Engineering, Arizona State University, Tempe, AZ 85287-5406, USA
2 Department of Computer Engineering, Bilkent University, Bilkent, 06800 Ankara, Turkey

Edited by A. Buchmann. Received: October 11, 2001 / Accepted: October 3, 2003

Published online: December 12, 2003 – © Springer-Verlag 2003

Abstract. In our earlier work, we proposed an architecture for a Web-based video database management system (VDBMS) providing integrated support for spatiotemporal and semantic queries. In this paper, we focus on the task of spatiotemporal query processing and also propose an SQL-like video query language that has the capability to handle a broad range of spatiotemporal queries. The language is rule-based in that it allows users to express spatial conditions in terms of Prolog-type predicates. Spatiotemporal query processing is carried out in three main stages: query recognition, query decomposition, and query execution.

Keywords: Spatiotemporal query processing – Content-based retrieval – Inference rules – Video databases – Multimedia databases

1 Introduction

Interest in multimedia databases, especially video databases, is growing rapidly. Research that started out tackling the issue of content-based image retrieval by low-level features (color, shape, and texture) and keywords [4,6,12,35] has progressed over time to video databases dealing with spatiotemporal and semantic features of video data [5,16,20,27,29,41]. There has also been some work on picture retrieval systems to enhance their query capabilities using the spatial relationships between objects in images [6,7].

First attempts at supporting content-based video retrieval were initiated by applying the techniques devised for image retrieval to video databases, since video can basically be regarded as a consecutive sequence of images ordered in time [12,39]. Some prototype systems were designed and implemented, such as VideoQ, KMED, QBIC, and OVID [5,7,12,31]. Furthermore, querying video objects by motion properties has also been studied extensively [13,22,24,30,38]. Some examples of the use of semantic properties of video data for querying video collections can be found in [1,16,18]. Nonetheless, to the best of our knowledge, no proposal has been made thus far for a generic, application-independent video database management system (VDBMS) that aims to support spatiotemporal, semantic, and low-level queries on video data in an integrated manner.

* This work is supported by the Scientific and Technical Research Council of Turkey (TÜBİTAK) under Project Code 199E025. This work was done while the first author was at Bilkent University. Corresponding author: Ö. Ulusoy, e-mail: oulusoy@cs.bilkent.edu.tr

In our earlier work, we proposed a novel architecture for a VDBMS that provides integrated support for both spatiotemporal and semantic queries on video data [9]. A spatiotemporal query may contain any combination of directional, topological, third-dimension (3D) relation, external-predicate, object-appearance, trajectory-projection, and similarity-based object-trajectory conditions. The system responds to spatiotemporal queries using its knowledge base, which consists of a fact base and a comprehensive set of rules implemented in Prolog, while semantic queries are handled by an object-relational database. The query processor interacts with both the knowledge base and the object-relational database to respond to user queries that contain a combination of spatiotemporal and semantic queries. Intermediate query results returned from these two system components are integrated seamlessly by the query processor and sent to Web clients. The architecture is extensible in that it can be augmented easily to provide integrated support for low-level video queries in addition to spatiotemporal and semantic queries on video data.

The focus and contributions of this paper are on spatiotemporal video query processing; therefore, issues related to semantic and low-level video queries are not discussed. Our rule-based spatiotemporal video query processing strategy is explained in detail. Moreover, an SQL-like textual query language is proposed for spatiotemporal queries on video data. The language can be used to query the knowledge base of the system, proposed in [9], for object trajectories, spatiotemporal relations between video objects, external predicates, and object-appearance relations. It is very easy to use even for novice users. In fact, it is easier to use compared with other proposed query languages for video databases such as CVQL, MOQL, and VideoSQL [19,25,31]. Furthermore, it offers great expressiveness for creating complex spatiotemporal queries thanks to its rule-based structure.

Similarity-based object-trajectory and trajectory-projection query conditions are processed separately from spatiotemporal, object-appearance, and external-predicate query conditions. The latter type of conditions are grouped together to form the maximal subqueries. Given a query, a maximal subquery is defined as the longest sequence of conditions that can be processed by Prolog without changing the semantics of the original query. Grouping the spatial conditions in a query into maximal subqueries minimizes the number of subqueries to be processed by our inference engine Prolog, thereby reducing the interval processing time and improving the overall performance of the system for spatiotemporal query processing. Our approach can be seen as reducing spatiotemporal video retrieval to metadata queries on a rule-based fact base; nonetheless, interval and similarity-based trajectory processing is carried out outside of the Prolog engine. Spatiotemporal query processing is carried out in three main stages: query recognition, query decomposition, and query execution.

In [9], we also proposed a novel video segmentation technique, specifically for spatiotemporal modeling of video data, that is based on the spatiotemporal relations between salient video objects. In our approach, video clips are segmented into shots whenever the current set of relations between video objects changes, thereby helping us to determine parts of the video where the spatial relationships do not change at all. Spatiotemporal relations are represented as Prolog facts partially stored in the knowledge base, and those relations that are not stored explicitly can be derived by our inference engine Prolog using the rules in the knowledge base. The system has a comprehensive set of rules that reduces the storage space needed for the spatiotemporal relations considerably while keeping the query response time at interactive rates, as shown by our performance tests using both synthetic and real video data [9]. Our rule-based spatiotemporal query processing strategy and query language take advantage of this segmentation technique to provide precise (fine-grained) answers to spatiotemporal video queries. Consequently, the smallest unit of retrieval is not a scene (a single camera shot) but a single frame in our VDBMS, which we call BilVideo.

To the best of our knowledge, all VDBMSs proposed in the literature associate the spatiotemporal relations between video objects, as well as object trajectories, with scenes defined as single camera shots. Hence these systems are unable to return arbitrary segments of video clips in response to user queries that consist of spatiotemporal conditions. Nonetheless, users may not be interested in seeing an entire scene as a result of a query if the query conditions are satisfied only in some parts of the scene. Moreover, since object trajectories are conventionally defined within scenes, and thereby do not span the entire video as one entity, trajectory matching is restricted to the subtrajectories of objects that fall into scenes in the entire video. We believe that such a restriction limits the flexibility and power of a VDBMS for spatiotemporal query processing: users should be able to retrieve arbitrary video segments if there is a match between a given query trajectory and a part of an object trajectory, where the object trajectory spans the entire video. To the best of our knowledge, only BilVideo provides this support, thanks to its unique video segmentation technique based on the spatiotemporal relations between video objects.

The rest of the paper is organized as follows. Section 2 presents a discussion of some of the VDBMSs and query languages proposed in the literature and their comparison to BilVideo and its query language. BilVideo's overall architecture and our rule-based approach to representing spatiotemporal relations between salient video objects are briefly described in Sect. 3. Section 4 presents the proposed SQL-like textual query language and demonstrates the capabilities of the language with some query examples from three different application areas: soccer event analysis, bird migration tracking, and movie retrieval systems. Section 5 provides a detailed discussion of the proposed rule-based spatiotemporal query processing strategy with some example queries. The results of our preliminary performance and scalability tests conducted on the knowledge base of BilVideo, which are presented in detail in [9], are summarized in Sect. 6. We draw our conclusions and state possible future research areas in Sect. 7. Finally, the grammar of the proposed query language is given in Appendix A.

2 Related work

In this section, we compare BilVideo and its query language with some other systems and query languages proposed in the literature. One point worth noting at the outset is that the BilVideo query language is, to the best of our knowledge, unique in its support for retrieving any segment of a video clip where the given query conditions are satisfied, regardless of how video data are semantically partitioned. None of the systems discussed here can return a subinterval of a scene as part of a query result because video features are associated with scenes defined to be the smallest semantic units of video data. In our approach, object trajectories, object-appearance relations, and spatiotemporal relations between video objects are represented as Prolog facts in a knowledge base, and they are not explicitly related to semantic units of videos. Thus the BilVideo query language can return precise answers for spatiotemporal queries in terms of frame intervals. Moreover, our assessment of the directional relations between two video objects is also novel in that two overlapping objects may have directional relations defined for them with respect to one another, provided that the center points of the objects' minimum bounding rectangles (MBRs) are different. This is because Allen's temporal interval algebra [2] is not used as a basis for the directional-relation definition in our approach: to determine which directional relation holds between two objects, the center points of the objects' MBRs are used [9]. Furthermore, the BilVideo query language provides three aggregate functions, average, sum, and count, which may be very attractive for some applications, such as sports statistical analysis systems, for collecting statistical data on spatiotemporal events. Finally, the BilVideo query language provides full support for spatiotemporal querying of video data.
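To make the MBR-center criterion concrete, the following is a minimal C++ sketch, assuming an angular partitioning into eight 45-degree sectors and a y-axis that grows northward; the struct and function names are our own illustration, not code from BilVideo:

#include <cmath>
#include <string>

struct MBR { double xmin, ymin, xmax, ymax; };

// Decide which fundamental directional relation holds between objects a and
// b from the center points of their MBRs; returns "" when the centers
// coincide, in which case no directional relation is defined.
std::string directionalRelation(const MBR& a, const MBR& b) {
    const double PI = 3.14159265358979323846;
    double ax = (a.xmin + a.xmax) / 2, ay = (a.ymin + a.ymax) / 2;
    double bx = (b.xmin + b.xmax) / 2, by = (b.ymin + b.ymax) / 2;
    if (ax == bx && ay == by) return "";  // centers coincide: no relation
    // Angle from b's center to a's center, bucketed into eight 45-degree
    // sectors centered on the compass directions.
    static const char* names[8] = {"east", "northeast", "north", "northwest",
                                   "west", "southwest", "south", "southeast"};
    double angle = std::atan2(ay - by, ax - bx);               // in (-pi, pi]
    int sector = (int)std::floor((angle + PI / 8) / (PI / 4));
    return names[((sector % 8) + 8) % 8];  // e.g., "north" means "a north b"
}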

VideoSQL. VideoSQL is an SQL-like query language developed for OVID to retrieve video objects [31]. Before examining the conditions of a query for each video object, target video objects are evaluated according to the interval inclusion inheritance mechanism. A VideoSQL query consists of the basic select, from, and where clauses. Conditions may contain attribute/value pairs and comparison operators. Video numbers may also be used in specifying conditions. In addition, VideoSQL has the ability to merge the video objects retrieved by multiple queries. Nevertheless, the language does not contain any expression to specify spatial and temporal conditions on video objects. Thus VideoSQL does not support spatiotemporal queries, which is a major weakness of the language.

MOQL and MTQL. In [26], multimedia extensions to the Object Query Language (OQL) and TIGUKAT Query Language (TQL) are proposed. The extended languages are called Multimedia Object Query Language (MOQL) and Multimedia TIGUKAT Query Language (MTQL), respectively. The extensions made are spatial, temporal, and presentation features for multimedia data. MOQL has been used in the STARS system [23] as well as in an object-oriented SGML/HyTime-compliant multimedia database system [32], both developed at the University of Alberta.

MOQL and MTQL support content-based spatial and temporal queries as well as query presentation. Both languages include support for 3D-relation queries, as we call them, by front, back, and their combinations with other directional relations, such as front left, front right, etc. The BilVideo query language has a different set of third-dimension (3D) relations, though. The 3D relations supported by the BilVideo query language are infrontof, behind, strictlyinfrontof, strictlybehind, touchfrombehind, touchedfrombehind, and samelevel. Definitions of these 3D relations are given in Sect. 4.2.2. The moving-object model integrated in MOQL and MTQL [22] is also different from our model. The BilVideo query language does not support similarity-based retrieval on spatial conditions as MOQL and MTQL do. Nonetheless, it does allow users to specify separate weights for the directional and displacement components of the trajectory conditions in queries, which both languages lack.

AVIS. In [28], a unified framework for characterizing multimedia information systems is proposed. Some user queries may not be answered efficiently using the proposed data structures; therefore, for each media instance, some feature constraints are stored as a logic program. Nonetheless, temporal aspects and relations are not taken into account in the model. Moreover, complex queries involving aggregate operations, as well as uncertainty in queries, require further work. In addition, although the framework incorporates some feature constraints as facts to extend its query range, it does not provide a complete deductive system as we do.

The authors extend their work by defining feature–subfeature relationships in [27]. When a query cannot be answered, it is relaxed by substituting a subfeature for a feature. This relaxation technique provides some support for reasoning with uncertainty.

In [1], a prototype video information system, called Advanced Video Information System (AVIS), is introduced. The authors propose a special kind of segment tree, namely, the frame segment tree, and a set of arrays to represent objects, events, activities, and their associations. The proposed data model is based on the generic multimedia model described in [28]. Consequently, temporal queries on events are not addressed in AVIS.

In [15], an SQL-like video query language based on the data model developed by Adalı et al. [1] is proposed. Thus the language does not provide any support for temporal queries on events, nor does it have any language construct for spatiotemporal querying of video clips, since it was designed for semantic queries on video data. In the BilVideo query model, temporal operators, such as before, during, etc., would also be used to specify order in time between events, just as they are used for spatiotemporal queries.

VideoSTAR. VideoSTAR proposes a generic data model that makes it possible to share and reuse video data [14]. Thematic indexes and structural components might implicitly be related to one another since frame sequences may overlap and be reused. Therefore, considerable processing is needed to determine the relations explicitly, making the system complex. Moreover, the model does not support spatiotemporal relations between video objects.

CVQL. A content-based logic video query language, CVQL, is proposed in [20]. Users retrieve video data by specifying some spatial and temporal relationships for salient objects. An elimination-based preprocessing step for filtering unqualified videos and a behavior-based approach for video function evaluation are also introduced. For video evaluation, an index structure called M-index is proposed. Using this index structure, frame sequences satisfying a query predicate can be retrieved efficiently. Nevertheless, topological relations between salient objects are not supported since an object is represented by a point in two-dimensional (2D) space. Consequently, the language does not allow users to specify topological and similarity-based object-trajectory queries.

3 BilVideo VDBMS

This section is intended only to provide a very brief overview of the BilVideo system architecture. Further information and details can be found in our earlier paper [9].

3.1 Overall system architecture

Figure 1 illustrates the system architecture of BilVideo. At the heart of the system lies the query processor, which is responsible for processing and responding to user queries in a multiuser environment. The query processor communicates with a knowledge base and an object-relational database. The knowledge base stores fact-based metadata used for spatiotemporal queries, whereas semantic and histogram-based (color, shape, and texture) metadata are stored in the feature database maintained by the object-relational database. Raw video data and video data features are stored separately. Semantic metadata stored in the feature database are generated and updated by a video-annotator tool, and the fact base is populated by a fact-extractor tool, both developed as Java applications [3,8]. The fact-extractor tool also extracts the color and shape histograms of the objects of interest in video keyframes, to be stored in the feature database [37].

Fig. 1. BilVideo system architecture

BilVideo can currently handle only spatiotemporal queries on video data, which are the focus of this paper; however, we are in the process of extending it to provide integrated support for semantic and low-level (color, shape, and texture) queries as well.

3.2 Knowledge-base structure

In the knowledge base, each fact has a single frame number, that of a keyframe.1 This representation scheme allows our inference engine Prolog to process spatiotemporal queries faster and more easily compared to using frame intervals for the facts. This is because the frame-interval processing that forms the final query results is carried out efficiently by some optimized code, written in C++, outside the Prolog environment. Therefore, the rules used for querying video data, which we call query rules, have frame-number variables associated with them. A second set of rules, which we call extraction rules, was also created to work with frame intervals so as to extract spatiotemporal relations from video data. Extracted spatiotemporal relations are then converted to be stored as facts with the frame numbers of the keyframes in the knowledge base, and these facts are used by the query rules for query processing in the system.

The rules in the knowledge base significantly reduce the number of facts that need to be stored for spatiotemporal querying of video data. Our storage space savings were about 40% for some real video data we experimented on. Moreover, the system's response time for different types of spatiotemporal queries posed on the same data was at interactive rates. We provide a brief summary of our performance tests conducted on the knowledge base of BilVideo in Sect. 6. Details on the knowledge-base structure of BilVideo, our fact-extraction (video segmentation) algorithm, the types of rules/facts used, their definitions, and a detailed discussion of our performance tests involving spatial relations can be found in [9].

1 This does not include appear and object-trajectory facts, which have frame intervals as a component instead of frame numbers because of storage space, ease of processing, and processing cost considerations.

4 BilVideo query language

Retrieval of video data by its spatiotemporal content is a very important and challenging task. Query languages designed for relational, object, and object-relational databases do not provide sufficient support for spatiotemporal video retrieval; consequently, either a new language should be designed and implemented or an existing language should be extended with the required functionality.

In this section, we present a new video query language that is similar to SQL in structure. The language can be used for spatiotemporal queries that contain any combination of directional, topological, 3D-relation, external-predicate, object-appearance, trajectory-projection, and similarity-based object-trajectory conditions.

4.1 Features of the language

The BilVideo query language has four basic statements for retrieving information:

select video from all [where condition];
select video from videolist where condition;
select segment from range where condition;
select variable from range where condition.

The target of a query is specified in the select clause. A query may return videos (video), segments of videos (segment), or values of variables (variable), with or without segments of videos. Regardless of the target type specified, video identifiers are always returned as part of the query answer. The aggregate functions (sum, average, and count), which operate on segments, may also be used in the select clause. Variables might be used for object identifiers and trajectories. Moreover, if the target of a query is videos (video), users may also specify the maximum number of videos to be returned as a result of a query. If the keyword random is used, the video fact files to process are selected randomly in the system, thereby returning a random set of videos as a result. The range of a query is specified in the from clause, which may be either the entire video collection or a list of specific videos. The query conditions are given in the where clause. In the BilVideo query language, the condition is defined recursively, and consequently it may contain any combination of spatiotemporal query conditions.

Supported Operators: The BilVideo query language supports a set of logical and temporal operators to be used in the query conditions. The logical operators are and, or, and not, while the temporal operators are before, meets, overlaps, starts, during, finishes, and their inverse operators.

The language also has a trajectory-projection operator, project, which can be used to extract subtrajectories of video objects on a given spatial condition. The condition is local to project, and it is optional. If it is not given, entire object trajectories rather than subtrajectories of objects are returned.

The language has two operators, "=" and "!=", to be used for assignment and comparison. The left argument of these operators should be a variable, whereas the right argument may be either a variable or a constant (atom). The "!=" operator is used for inequality comparison, while the "=" operator may take on different semantics depending on its arguments. If one of the arguments of the "=" operator is an unbound variable, it is treated as the assignment operator. Otherwise, it is considered the equality-comparison operator. These semantics were adopted from the Prolog language.

Operators that perform interval processing are called interval operators. Hence all temporal operators are interval operators. Logical operators are also considered interval operators when their arguments contain intervals.

In the BilVideo query language, the precedence values of the logical, assignment, and comparison operators follow their usual order. Logical operators assume the same precedence values when they are considered as interval operators as well. Temporal operators are given a higher priority than logical operators when determining the arguments of operators, and they are left-associative, as are logical operators.

The BilVideo query language also provides a keyword, repeat, that can be used in conjunction with a temporal operator, such as before, meets, etc., or a trajectory condition. Video data may be queried by conditions that repeat in time using repeat with an optional repetition number given. If a repetition number is not given with repeat, then the repetition is considered indefinite, thereby causing the processor to search for the largest intervals in a video where the given conditions are satisfied at least once over time. The keyword tgap may be used with the temporal operators and with a trajectory condition. However, it has rather different semantics for each type. For temporal operators, tgap is only meaningful when repeat is used because it specifies the maximum time gap allowed between the pairs of intervals to be processed for repeat. Therefore, the language requires that tgap be used along with repeat for temporal operators. For a trajectory condition, it may be used to specify the maximum time gap allowed between consecutive object movements, as well as between pairs of intervals to be processed for repeat if repeat is also given in the condition.

Aggregate Functions: The BilVideo query language has three aggregate functions, average, sum, and count, which take a set of intervals (segments) as input. Average and sum return a time value in minutes, while count returns an integer for each video clip satisfying the given conditions. Average is used to compute the average of the time durations of all intervals found for a video clip, whereas sum and count are used to calculate, respectively, the total time duration of and the total number of all such intervals. These aggregate functions might be very useful for collecting statistical data in some applications, such as sports event analysis systems, motion tracking systems, etc.
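As an illustration (not BilVideo's actual code), the following minimal C++ sketch shows how such aggregates could be computed from the result segments of one video; the Segment type, the function names, and the fps parameter are our assumptions:

#include <cstddef>
#include <utility>
#include <vector>

using Segment = std::pair<long, long>;  // [startFrame, endFrame] of one interval

// sum: total duration of all segments, converted from frames to minutes.
double sumMinutes(const std::vector<Segment>& segs, double fps) {
    double frames = 0;
    for (const auto& s : segs) frames += s.second - s.first + 1;
    return frames / fps / 60.0;
}

// count: total number of segments found for the video.
std::size_t countSegments(const std::vector<Segment>& segs) {
    return segs.size();
}

// average: mean segment duration in minutes (0 if no segment was found).
double averageMinutes(const std::vector<Segment>& segs, double fps) {
    return segs.empty() ? 0.0 : sumMinutes(segs, fps) / segs.size();
}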

External Predicates: The BilVideo query language is generic and designed to be used for any application that requires spatiotemporal query processing capabilities. It has a condition type external defined for application-dependent predicates, which we call external predicates. This condition type is generic; consequently, a query may contain any application-dependent predicate in the where clause of the language, with a name different from any predefined predicate and language construct, and with at least one argument that is either a variable or a constant (atom). External predicates are processed just like spatial predicates as part of the maximal subqueries. If an external predicate is to be used for querying video data, facts and/or rules related to the predicate should be added to the knowledge base beforehand.

In our design, each video segment returned as an answer to a user query has an associated importance value ranging between 0 and 1, where 1 denotes an exact match. The results are ordered with respect to these importance values in descending order. Maximal subqueries return segments with importance value 1 because they are exact-match queries, whereas the importance values for the segments returned by similarity-based object-trajectory queries are the similarity values computed. The interval operators not and or return the complements and the union of their input intervals, respectively. The interval operator or returns intervals without changing their importance values, while the importance value for the intervals returned by not is 1. The remaining interval operators take the average of the importance values of their input interval pairs for the computed intervals. Users may also specify a time period in a query to view only the parts of videos returned as an answer. The grammar of the BilVideo query language is given in Appendix A.

4.2 Basic query types

This section presents the basic query types that the BilVideo query language supports. These types of queries can be combined to construct complex spatiotemporal queries without any restriction, which makes the language very flexible and powerful in terms of expressiveness. In this section, we provide some examples of the object and similarity-based object-trajectory queries; examples of the other types, used in combination, are introduced later in Sects. 4.3 and 5.5.

4.2.1 Object queries

This type of query may be used to retrieve the salient objects of each video queried that satisfies the conditions, along with the segments, if desired, where the objects appear. Some example queries of this type are given below:

Query 1: "Find all video segments from the database in which object o1 appears."

select segment
from all
where appear(o1).

In this query, the appear predicate returns the frame intervals (segments) of each video in the database where object o1 appears. The segments returned are grouped by video, and each group is sorted on the linear timeline based on the starting frames, where smaller segments appear before larger ones if the starting frames of the intervals are the same.

Query 2: "Find the objects that appear together with object o1 in a given video clip, and also return such segments." (The video identifier of the given video clip is assumed to be 1.)

select segment, X
from 1
where appear(o1, X) and X != o1.

4.2.2 Spatial queries

This type of query may be used to query videos by the spatial properties of objects defined with respect to each other. Supported spatial properties of objects can be grouped into three main categories: directional relations that describe order in 2D space, topological relations that describe neighborhood and incidence in 2D space, and 3D relations that describe object positions on the z-axis of 3D space.

There are eight distinct topological relations: disjoint, touch, inside, contains, overlap, covers, coveredby, and equal. The fundamental directional relations are north, south, east, west, northeast, northwest, southeast, and southwest. Furthermore, our 3D relations consist of infrontof, strictlyinfrontof, touchfrombehind, samelevel, behind, strictlybehind, and touchedfrombehind.

Definitions of the topological and 3D relations are based on Allen's temporal interval algebra [2]. Table 1 presents the semantics of our 3D relations. We, however, do not provide in this paper the semantics for the topological relations since they are given in a number of papers in the literature (e.g., [11] and [33]). We also include the relations left, right, below, and above in the group of directional relations, and these relations are defined in terms of the fundamental directional relations. However, the directional components of object trajectories can only contain the fundamental directional relations in query specifications. Our definitions for the directional relations are given in [9].

4.2.3 Similarity-based object-trajectory queries

In our data model, for each moving object in a video clip, a trajectory fact is stored in the fact base. A trajectory fact is modeled as tr(ν, Φ, ψ, κ), where

ν: object identifier,
Φ (list of directions): [Φ1, Φ2, ..., Φn], where Φi ∈ F (1 ≤ i ≤ n) and F is the set of fundamental directional relations,
ψ (list of displacements): [ψ1, ψ2, ..., ψn], where ψi ∈ Z+ (1 ≤ i ≤ n),
κ (list of intervals): [[s1, e1], ..., [sn, en]], where si, ei ∈ N and si ≤ ei (1 ≤ i ≤ n).

A trajectory query is modeled as

tr(α, λ) [sthreshold σ [dirweight β | dspweight η]] [tgap γ]

or

tr(α, θ) [sthreshold σ] [tgap γ],

where

α: object identifier,
λ: trajectory list ([θ, χ]),
θ: list of directions,
χ: list of displacements,
sthreshold (similarity threshold): 0 < σ < 1,
dirweight (directional weight): 0 ≤ β ≤ 1 and 1 − β = η,
dspweight (displacement weight): 0 ≤ η ≤ 1 and 1 − η = β,
tgap (time threshold): γ ∈ N, the maximum gap allowed between consecutive object movements.

In a trajectory query, variables may be used for α and λ, and the number of directions is equal to the number of displacements in λ, just as in trajectory facts, because each element of a list is associated with the element of the other list that has the same index value. The list of directions specifies a path an object follows, while the displacement list associates each direction in this path with a displacement value. However, it is optional to specify a displacement list in a query, in which case no weights are used in matching trajectories. Such queries are useful when displacements are not important to the user.

In a trajectory query, similarity and time threshold values are also optional. If a similarity threshold is not given, the query is considered an exact-match query. A query without a tgap value implies continuous motion without any stop between consecutive object movements. The time threshold value specified in a query is interpreted in seconds. A trajectory query may have either a directional or a displacement weight supplied, because the other is computed from the one given. Moreover, for a weight to be specified, a similarity threshold value must also be provided. If a similarity value is supplied and no weight is given, then the weights of the directional and displacement components are considered equal by default. The key idea in measuring the similarity between a pair of trajectories is to find the distance between the two, and this is achieved by computing the distances between the directional and displacement components of the trajectories when both lists are available. If a displacement list is not specified in a query, then trajectory similarity is measured by comparing the directional lists. Furthermore, when a weight value is 0, its corresponding list is not taken into account in computing the similarity between trajectories.

Directional Similarity:

Definition 4.1. A directional region is an area between neighboring directional segments in the directional coordinate system depicted in Fig. 2.

Definition 4.2. Let da and db be two directions in the directional coordinate system. The distance between da and db, denoted as D(da, db), is defined to be the minimum number of directional regions between da and db.

Definition 4.3. The directional normalization factor, ω, is defined to be the number of directional regions between two opposite directions in the directional coordinate system (ω = 4).

Let A and B be two directional lists, each having n elements, such that A = [A1, A2, ..., An] and B = [B1, B2, ..., Bn]. The directional similarity between A and B is specified as follows:

Δ(A, B) = 1 − (1/ω) · sqrt( (1/n) · Σ_{i=1}^{n} D(Ai, Bi)² ).   (1)
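As a concrete illustration of Eq. 1, here is a minimal C++ sketch (our own, with assumed names, not BilVideo's code) that places the eight fundamental directions on the compass of Fig. 2 and computes the directional similarity of two equal-length direction lists; it reproduces the values 0.875 and 0.427 obtained in the worked example later in this section:

#include <cmath>
#include <cstddef>
#include <cstdlib>
#include <vector>

// The eight fundamental directions, listed counterclockwise so that
// neighboring enum values are one directional region apart.
enum Dir { EAST, NORTHEAST, NORTH, NORTHWEST, WEST, SOUTHWEST, SOUTH, SOUTHEAST };

// D(da, db) of Definition 4.2: minimum number of directional regions
// between two directions; at most 4 (= omega) for opposite directions.
int regionDistance(Dir a, Dir b) {
    int d = std::abs(static_cast<int>(a) - static_cast<int>(b));
    return d <= 4 ? d : 8 - d;  // go around the compass the shorter way
}

// Eq. 1: directional similarity of two direction lists of equal length n.
double directionalSimilarity(const std::vector<Dir>& A, const std::vector<Dir>& B) {
    const double omega = 4.0;   // directional normalization factor (Definition 4.3)
    double sumSq = 0.0;
    for (std::size_t i = 0; i < A.size(); ++i) {
        int d = regionDistance(A[i], B[i]);
        sumSq += d * d;
    }
    return 1.0 - std::sqrt(sumSq / A.size()) / omega;
}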

Displacement Similarity:

Definition 4.4. The displacement normalization factor of a displacement list A is defined to be the maximum displacement value in the list, and it is denoted by Aμ.

Table 1. Definitions of our 3D relations on the z-axis of 3D space (in terms of Allen's interval relations applied to the objects' extents on the z-axis)

A infrontof B (inverse: B behind A): A before B, A meets B, or A overlaps B.
A strictlyinfrontof B (inverse: B strictlybehind A): A before B or A meets B.
A samelevel B (inverse: B samelevel A): A starts B, A finishes B, A during B, or A equal B.
A touchfrombehind B (inverse: B touchedfrombehind A): B meets A.

Fig. 2. Directional coordinate system

Let A and B be two displacement lists, each having n elements, such that A = [A1, A2, ..., An] and B = [B1, B2, ..., Bn]. Furthermore, let Dnr(Ai, Bi) denote the normalized distance between Ai and Bi for 1 ≤ i ≤ n. Then the displacement similarity between A and B is specified as follows:

Δ(A, B) = 1 − sqrt( (1/n) · Σ_{i=1}^{n} Dnr(Ai, Bi)² ), where Dnr(Ai, Bi) = (Bμ·Ai − Aμ·Bi) / (Aμ·Bμ).   (2)

Trajectory Matching:

Similarity-based object-trajectory queries are processed by the trajectory processor, which takes such queries as input and returns a set of intervals, each associated with an importance value (similarity value), along with some other data needed by the query processor for forming the final set of answers to user queries, such as variable bindings (values) if variables are used. Here we formally discuss how similarity-based object-trajectory queries with no variables are processed by the trajectory processor. In doing so, it is assumed without loss of generality that trajectory queries contain both the directional and displacement lists. Moreover, we restrict our discussion to cases where the time gaps between consecutive object movements in trajectory facts are equal to or below the time threshold given in a query. These assumptions are made simply for the sake of simplicity because our main goal here is to explain the theory that provides a novel framework for our similarity-based object-trajectory matching mechanism rather than to present our query processing algorithm in detail.

Let Q and T be, respectively, a similarity-based object-trajectory query and a trajectory fact for an object stored in the fact base for a video clip, such that Q = tr(α, λ) sthreshold σ dirweight β and T = (ν, Φ, ψ, κ), where λ = [θ, χ]. Let us assume that there is no variable used in Q or that all variables are bound, α = ν, |Φ| = n, and |θ| = m. Let us also assume that there is no gap between any consecutive pair of intervals in κ, i.e., κe_i = κs_(i+1) (1 ≤ i < n).

Case 1 (n = m): The similarity between the two trajectories Qt = (θ, χ) and Tt = (Φ, ψ) is computed as follows:

Δ(Qt, Tt) = β·Δ(θ, Φ) + η·Δ(χ, ψ), where β = 1 − η.   (3)

In this case, the trajectory processor returns only one interval, ξ = [κs_1, κe_n], iff Δ(Qt, Tt) ≥ σ. Otherwise (Δ(Qt, Tt) < σ), the answer set is empty because there is no similarity between Qt and Tt with respect to the given threshold σ.

Case 2 (n > m): In this case, the trajectory processor returns a set of intervals φ such that

φ = { [si, ei] | 1 ≤ i ≤ n − m + 1, si = κs_i, ei = κe_(i+m−1), and Δ(Qt, Tt[i, i+m−1]) ≥ σ },   (4)

where

Tt[i, i+m−1] = ([Φi, ..., Φ_(i+m−1)], [ψi, ..., ψ_(i+m−1)]).   (5)

If no match is found for any Tt[i, i+m−1], 1 ≤ i ≤ n − m + 1, then the answer set is empty.

Case 3 (n < m): In this case, the trajectory processor returns only one interval, ξ = [κs_1, κe_n], iff

Δ(Qt[i, i+n−1], Tt) ≥ (m/n)·σ for some i, 1 ≤ i ≤ m − n + 1,

where

Qt[i, i+n−1] = ([θi, ..., θ_(i+n−1)], [χi, ..., χ_(i+n−1)]).

The importance value (similarity value) associated with and returned for ξ is

Ω = (n/m) · max{ Δ(Qt[i, i+n−1], Tt) | 1 ≤ i ≤ m − n + 1 }.

If no match is found, the answer set is empty because there is no similarity between Qt and Tt with respect to the given threshold σ.
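To make the Case 2 matching concrete, here is a minimal C++ sketch (our illustration, with assumed types and names; the similarity callback stands for Eq. 3) of the sliding-window loop behind Eq. 4:

#include <cstddef>
#include <functional>
#include <vector>

struct Interval { long start, end; };

// Case 2 (n > m): slide the m-element query over the n-element fact
// trajectory; keep the frame interval of every window whose similarity
// (Eq. 3) reaches the threshold sigma, as required by Eq. 4.
std::vector<Interval> matchLongerFact(
        std::size_t n, std::size_t m,                 // fact and query lengths
        const std::vector<Interval>& kappa,           // fact interval list, size n
        double sigma,                                 // similarity threshold
        const std::function<double(std::size_t)>& similarity) {
    std::vector<Interval> phi;                        // result set of Eq. 4
    for (std::size_t i = 0; i + m <= n; ++i)          // windows [i, i+m-1]
        if (similarity(i) >= sigma)
            phi.push_back({kappa[i].start, kappa[i + m - 1].end});
    return phi;
}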

Following is an example similarity-based object-trajectory query specification in the BilVideo query language. In this example query, we are interested in retrieving the segments of a video whose identifier is specified as 1, where object o1 follows a path similar to the query trajectory, with no time gap value given (continuous movement). For the sake of simplicity, let us assume that the trajectory of object o1 stored in the knowledge base for the video queried is

tr(o1, [east, north, east, north, south], [10, 20, 10, 30, 15], [[1,100], [100,150], [150,200], [200,250], [250,300]]).

select segment
from 1
where tr(o1, [[east, north, east, northwest], [10, 20, 15, 25]]) sthreshold 0.6 dirweight 0.7.

Hence, for this query example, α = ν = o1, |Φ| = n = 5, |θ| = m = 4, σ = 0.6, and β = 0.7 (η = 1 − β = 0.3). Moreover, T = (ν, Φ, ψ, κ) and Q = tr(α, λ) sthreshold σ dirweight β, where

Φ = [east, north, east, north, south],
ψ = [10, 20, 10, 30, 15],
κ = [[1,100], [100,150], [150,200], [200,250], [250,300]],
λ = [θ, χ],
θ = [east, north, east, northwest],
χ = [10, 20, 15, 25].

Since n > m, this query falls into Case 2. Thus, from Eq. 5,

Tt[1,4] = ([east, north, east, north], [10, 20, 10, 30]) and
Tt[2,5] = ([north, east, north, south], [20, 10, 30, 15]).

According to Eq. 4, Δ(Qt, Tt[1,4]) and Δ(Qt, Tt[2,5]) are computed using the formula given in Eq. 3. Therefore,

Δ(Qt, Tt[1,4]) = 0.7·Δ(θ, Φ[1,4]) + 0.3·Δ(χ, ψ[1,4]),
Δ(Qt, Tt[2,5]) = 0.7·Δ(θ, Φ[2,5]) + 0.3·Δ(χ, ψ[2,5]),

where

Φ[1,4] = [east, north, east, north],
Φ[2,5] = [north, east, north, south],
ψ[1,4] = [10, 20, 10, 30],
ψ[2,5] = [20, 10, 30, 15].

Δ(θ, Φ[1,4]) and Δ(θ, Φ[2,5]) are computed using Eq. 1, while Δ(χ, ψ[1,4]) and Δ(χ, ψ[2,5]) are computed using Eq. 2. After the computations, Δ(θ, Φ[1,4]) = 0.875, Δ(θ, Φ[2,5]) = 0.427, Δ(χ, ψ[1,4]) = 0.949, and Δ(χ, ψ[2,5]) = 0.156. Therefore, Δ(Qt, Tt[1,4]) = 0.897 and Δ(Qt, Tt[2,5]) = 0.346.

Since Δ(Qt, Tt[1,4]) > 0.6 but Δ(Qt, Tt[2,5]) < 0.6, the only interval [s, e] returned as a result of this query is [κs_1, κe_4], where κs_1 = 1 and κe_4 = 250. Hence, φ = {[1, 250]}.

Projection Operator:

The BilVideo query language provides a trajectory-projection operator, project(α[, β]), to extract subtrajectories from the trajectory facts, where α is an object identifier for which a variable might be used and β is an optional condition. If a condition is not given, then the operator returns the entire trajectory that an object follows in a video clip. Otherwise, the subtrajectories of an object where the given condition is satisfied are returned. Hence the output of project is a set of trajectories {λ | λ = [θ, χ]}, where θ and χ are the directional and displacement components of λ, respectively. The condition, if given, is local to project, and it is of the type specified in Appendix A.
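A minimal C++ sketch of the idea behind project follows; it is our illustration under the assumption that a subtrajectory is a maximal run of consecutive movements whose frame intervals fall inside some interval where the condition holds (all types and names are hypothetical):

#include <vector>

struct Interval { long start, end; };
struct Movement { int direction; long displacement; Interval frames; };

// Split a stored trajectory into maximal runs of consecutive movements
// whose frame intervals lie inside some condition interval.
std::vector<std::vector<Movement>> projectTrajectory(
        const std::vector<Movement>& trajectory,
        const std::vector<Interval>& conditionHolds) {
    std::vector<std::vector<Movement>> subtrajectories;
    std::vector<Movement> current;
    for (const Movement& m : trajectory) {
        bool inside = false;
        for (const Interval& c : conditionHolds)
            if (c.start <= m.frames.start && m.frames.end <= c.end) {
                inside = true;
                break;
            }
        if (inside) {
            current.push_back(m);          // extend the current subtrajectory
        } else if (!current.empty()) {     // condition broken: close the run
            subtrajectories.push_back(current);
            current.clear();
        }
    }
    if (!current.empty()) subtrajectories.push_back(current);
    return subtrajectories;
}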

4.2.4 Temporal queries

This type of query is used to specify the order of occurrence of conditions in time. Conditions may be of any type, but temporal operators process their arguments only if they contain intervals. The BilVideo query language implements all temporal relations defined by Allen's temporal interval algebra as temporal operators, except for equal: our interval operator and yields the same functionality as that of equal because its definition, given in Sect. 5.4, is the same as that of equal for interval processing. Supported temporal operators, which are used as interval operators in the BilVideo query language, are before, meets, overlaps, starts, during, finishes, and their inverse operators. A user query may contain repeating temporal conditions specified by repeat with an optional repetition number given. If tgap is not provided with repeat, then its default value for the temporal operators (equivalent to one frame when converted) is assumed. Definitions of the temporal relations can be found in [2].

4.2.5 Aggregate queries

This type of query may be used to retrieve statistical data about objects and events in video data. The BilVideo query language supports three aggregate functions, average, sum, and count, as explained in Sect. 4.1.

4.3 Example applications

To demonstrate the capabilities of the BilVideo query language, three application areas, soccer event analysis, bird migration tracking, and movie retrieval systems, have been selected. However, it should be noted that the BilVideo system architecture and the BilVideo query language provide a generic framework to be used for any application that requires spatiotemporal query processing capabilities.

4.3.1 Soccer event analysis system

A soccer event analysis system may be used to collect statistical data on events that occur during a soccer game, such as finding the number of goals, offsides, and passes, the average ball control time for players, etc., as well as to retrieve video segments where such events take place. The BilVideo query language can be used to answer such queries, provided that some necessary facts, such as the players and goalkeepers of the teams, as well as some predicates, such as player to find the players of a certain team, are added to the knowledge base. This section provides some query examples based on an imaginary soccer game fragment between England's two teams Liverpool and Manchester United. The video identifier of this fragment is assumed to be 1.

Query 1: "Find the number of direct shots to the goalkeeper of Liverpool by each player of Manchester United in a given video clip and return such video segments."

This query can be specified in the BilVideo query language as follows:

select count(segment), segment, X
from 1
where goalkeeper(X, liverpool) and player(Y, manchester)
      and touch(Y, ball)
      meets not(touch(Z, ball))
      meets touch(X, ball).

In this query, the external predicates are goalkeeper and player. For each player of Manchester United found in the specified video clip, the number of direct shots to the goalkeeper of Liverpool by the player, along with the player's name and the video segments found, is returned, provided that such segments exist. In the BilVideo system architecture, semantic metadata are stored in an object-relational database. Hence video identifiers can be retrieved from this database by querying it with some descriptional data.

Query 2: "Find the average ball control (play) time for each player of Manchester United in a given video clip."

This query can be specified in the BilVideo query language as follows:

select average(segment), X
from 1
where player(X, manchester)
      and touch(X, ball).

In answering this query, it is assumed that when a player touches the ball, it is in his control. The ball control time for a player is then computed with respect to the time intervals during which he is in touch with the ball. Hence the average ball control time for a player is simply the sum of all time intervals where the player is in touch with the ball divided by the number of these time intervals. This value is computed by the aggregate function average.

Query 3: "Find the number of goals Liverpool scored against Manchester United in a given video clip."

This query can be specified in the BilVideo query language as follows:

select count(segment)
from 1
where samelevel(ball, net) and
      overlap(ball, net).

In this query, the 3D relation samelevel ensures that an event that is not a goal, because the ball does not go into the net but rather passes somewhere near it, is not considered a goal. The ball may overlap with the net in 2D space while it is behind or in front of the net on the z-axis of 3D space. Hence, by using the 3D relation samelevel, such false events are discarded.

4.3.2 Bird migration tracking system

A bird migration tracking system is used to determine the migration paths of birds over a set of regions at certain times. In [30], an animal movement querying system is discussed, and we have chosen a specific application of such a system to show how the BilVideo query language might be used to answer spatiotemporal, especially object-trajectory, queries on the migration paths of birds.

Query 1: "Find the migration paths of bird o1 over region r1 in a given video clip."

This query can be specified in the BilVideo query language as follows:

select X
from 2
where X = project(o1, inside(o1, r1)).

In this query, X is a variable used for the trajectory of bird o1 over region r1. The video identifier of the video clip where the migration of bird o1 is recorded is assumed to be 2. This query returns the paths bird o1 follows while it is inside region r1.

Query 2: "How long does bird o1 appear inside region r1 in a given video clip?"

This query can be specified in the BilVideo query language as follows:

select sum(segment)
from 2
where inside(o1, r1).

The result of this query is a time value, computed by the aggregate function sum by adding up the time intervals during which bird o1 is inside region r1.

Query 3: "Find the video segments where bird o1 enters region r1 from the west and leaves from the north in a given video clip."

This query can be specified in the BilVideo query language as follows:

select segment
from 2
where (touch(o1, r1) and west(o1, r1)) meets
      overlap(o1, r1) meets
      coveredby(o1, r1) meets
      inside(o1, r1) meets
      coveredby(o1, r1) meets
      overlap(o1, r1) meets
      (touch(o1, r1) and north(o1, r1));

Query 4: "Find the names of birds following a path similar to that of bird o1 over region r1 with a similarity threshold value of 0.9 in a given video clip and return such segments."

This query can be specified in the BilVideo query language as follows:

select segment, X
from 2
where Y = project(o1, inside(o1, r1)) and
      inside(X, r1) and X != o1 and
      tr(X, Y) sthreshold 0.9.

Here, X and Y are variables representing the bird names and the subtrajectories of bird o1 over region r1, respectively. The projected subtrajectories of bird o1, where the given condition is to be inside region r1, are used to find similar subtrajectories of other birds over the same region.

4.3.3 Movie retrieval system

A movie retrieval system contains movies and series from different categories, such as cartoon, comedy, drama, fiction, horror, etc. Such a system may be used to retrieve videos or segments from a collection of movies with some spatiotemporal, semantic, and low-level conditions given. In this section, a specific episode of Smurfs (a cartoon series), titled Bigmouth's Friend, is used for the two spatiotemporal query examples given. The video identifier of this episode is assumed to be 3.

Query 1: "Find the segments from Bigmouth's Friend where Bigmouth is below RobotSmurf, while RobotSmurf starts moving westward and then eastward, repeating this as many times as it happens in the video clip."

select segment
from 3
where below(bigmouth, robotsmurf) and
      (tr(bigmouth, [west, east])) repeat.

Query 2: "Find the segments from Bigmouth's Friend where RobotSmurf and Bigmouth are disjoint, and RobotSmurf is to the right of Bigmouth, while there is no other object of interest that appears."

select segment
from 3
where disjoint(RobotSmurf, Bigmouth) and
      right(RobotSmurf, Bigmouth) and
      appear_alone(RobotSmurf, Bigmouth).

In this query, appear_alone is an external predicate defined in the knowledge base as follows:

appear_alone(X, Y, F) :-
    keyframes(L1),
    member(F, L1),                          % F is a keyframe of the video
    findall(W, p_appear(W, F), L2),         % all objects appearing in frame F
    length(L2, 2),                          % exactly two objects appear in F
    forall(member(Z, L2), (Z = X ; Z = Y)). % and those objects are X and Y

5 Spatiotemporal query processing

This section explains our rule-based spatiotemporal query processing strategy in detail. Query processing is carried out in three phases, namely, query recognition, query decomposition, and query execution. These phases are depicted in Fig. 3, and they are explained in Sects. 5.1 through 5.3. Interval processing is performed in the query execution phase, and it is discussed in Sect. 5.4 through some case studies.

In the BilVideo query model, the conditions are evaluated in a single timeline. For each internal node in the query tree, the child nodes are evaluated first, and the results obtained from the child nodes are propagated to the parent node for interval processing, going up the query tree until the final query results are obtained.

5.1 Query recognition

The lexical analyzer and parser for the BilVideo query language were implemented using Linux-compatible versions of Flex and Bison [10,34], which are the GNU versions of the original Lex & Yacc [17,21] compiler-compiler generator tools. The lexical analyzer partitions a query into tokens, which are passed to the parser with possible values for further processing. The parser assigns structure to the resulting pieces and creates a parse tree to be used as a starting point for query processing. This phase is called the query recognition phase.

5.2 Query decomposition

The parse tree generated in the query recognition phase is traversed in a second phase, which we call the query decomposition phase, to construct a query tree. The query tree is constructed by decomposing a query into three basic types of subqueries: plain Prolog subqueries (maximal subqueries) that can be sent directly to the inference engine Prolog, trajectory-projection subqueries that are handled by the trajectory projector, and similarity-based object-trajectory subqueries that are processed by the trajectory processor. Temporal queries are handled by the interval-operator functions such as before, during, etc. Arguments of the interval operators are handled separately because they must be processed before the interval operators are applied. Since a user may give any combination of conditions in any order while specifying a query, a query is decomposed in such a way that a minimum number of subqueries is formed. This is achieved by grouping the Prolog-type predicates into maximal subqueries without changing the semantic meaning of the original query.

Fig. 3. Query processing phases

Fig. 4. Query execution

5.3 Query execution

The input to the query execution phase is a query tree. In this phase, the query tree is traversed in postorder, executing each subquery separately and performing interval processing in the internal nodes so as to obtain the final set of results. Since it would be inefficient and very difficult, if not impossible, to fully handle spatiotemporal queries by Prolog alone, the query execution phase is mainly carried out by some efficient C++ code. Thus Prolog is utilized only to obtain intermediate answers to user queries from the fact base. The intermediate query results returned by Prolog are further processed, and the final answers to user queries are formed after the interval processing. Figure 4 illustrates the query execution phase. The BilVideo query language is designed to return variable values, when requested explicitly, as part of the query result as well. Therefore, the language not only supports video/segment queries but also variable-value retrieval for the parts of videos satisfying given query conditions, utilizing a knowledge base. Variables may be used for object identifiers and trajectories.

One of the main challenges in query execution is to handle user queries in which the scope of a variable extends over several subqueries after the query is decomposed. This is a challenging task because subqueries are processed separately, accumulating and processing the intermediate results along the way to form the final set of answers. Hence the values assigned to the variables of a subquery are retrieved and used for the same variables of other subqueries within the scope of these variables. Therefore, it is necessary to keep track of the scope of each variable in a query. This scope information is stored in a hash table generated for the variables. Dealing with variables makes query processing much harder, but it also empowers the query capabilities of the system and yields much richer semantics for user queries.

5.4 Interval processing

In the BilVideo query model, intervals are categorized into two types: nonatomic and atomic intervals. If a condition holds for every frame of a part of a video clip, then the interval representing an answer for this condition is considered to be a nonatomic interval. Nonatomicity implies that the condition holds for every frame within the interval in question. Hence the condition holds for any subinterval of a nonatomic interval as well. This implication is not correct for atomic intervals, though. The reason is that the condition associated with an atomic interval does not hold for all its subintervals. Consequently, an atomic interval cannot be broken into its subintervals for query processing. On the other hand, subintervals of an atomic interval are populated for query processing, provided that the conditions are satisfied in their range. In other words, the query processor generates all possible atomic intervals for which the given conditions are satisfied. This interval population is necessary since atomic intervals cannot be broken down into subintervals, and all such intervals where the conditions hold should be generated for query processing. The intervals returned by the plain Prolog subqueries (maximal subqueries) that contain directional, topological, object-appearance, 3D-relation, and external-predicate conditions are nonatomic, whereas those obtained by applying the temporal operators to the interval sets, as well as those returned by the similarity-based object-trajectory subqueries, are atomic intervals. Since the logical operators and, or, and not are considered interval operators when their arguments contain intervals to process, they also work on intervals. The operators and and or may return atomic and/or nonatomic intervals depending on the types of their input intervals. The operator and takes the intersection of its input intervals, while the operator or performs a union operation on its input intervals. The unary operator not returns the complement of its input interval set with respect to the video clip being queried, and the intervals in the result set are of the nonatomic type, regardless of the types of the input intervals. The semantics of the interval intersection and union operations are given in Tables 2 and 3, respectively.

The rationale behind classifying video frame intervals into two categories, atomic and nonatomic, may be best described with the following query example: "Return the video segments in the database where object A is to the west of object B and object A follows a trajectory similar to the one specified in the query with respect to the given similarity threshold." Let us assume that the intervals [10,200] and [10,50] are returned as part of the answer set of a video for the trajectory and spatial (directional) conditions of this query, respectively.

Table 2. Interval intersection (AND)

I1 (Atomic) and I2 (Atomic): the result is I1 iff I2 ⊆ I1 (I1s ≤ I2s ∧ I1e ≥ I2e), I2 iff I1 ⊆ I2 (I2s ≤ I1s ∧ I2e ≥ I1e), and ∅ otherwise. Result interval type: Atomic.

I1 (Atomic) and I2 (Nonatomic): the result is I1 iff I1 ⊆ I2, and ∅ otherwise. Result interval type: Atomic.

I1 (Nonatomic) and I2 (Atomic): the result is I2 iff I2 ⊆ I1, and ∅ otherwise. Result interval type: Atomic.

I1 (Nonatomic) and I2 (Nonatomic): the result is [Is, Ie] iff I1 overlaps I2, where Is = max(I1s, I2s) and Ie = min(I1e, I2e), and ∅ otherwise. Result interval type: Nonatomic.

Table 3. Interval union (OR)

Input interval 1   Input interval 2   Result set                                        Result interval type
I1 (Atomic)        I2 (Atomic)        {I1, I2}                                          Atomic
I1 (Atomic)        I2 (Nonatomic)     {I1, I2}                                          Atomic and Nonatomic
I1 (Nonatomic)     I2 (Atomic)        {I1, I2}                                          Nonatomic and Atomic
I1 (Nonatomic)     I2 (Nonatomic)     [I1s, I2e] if I2s = I1e + 1;                      Nonatomic
                                      [I2s, I1e] if I1s = I2e + 1;
                                      [Is, Ie] if I1 overlaps I2, where
                                      Is = I1s iff I1s ≤ I2s, otherwise Is = I2s, and
                                      Ie = I1e iff I1e ≥ I2e, otherwise Ie = I2e;
                                      otherwise, {I1, I2}

Here, the first interval is of the atomic type because the trajectory of object A is valid only over the interval [10, 200], and therefore a trajectory-similarity computation is not performed for any of its subintervals. However, the second interval is nonatomic, since the given directional condition is satisfied for each frame in this interval. When these two intervals are processed by the and operator to form the final result, no interval is returned as an answer because there is no interval where both conditions are satisfied together. If intervals were not classified and every interval could be broken into subintervals, then the final result set would include the interval [10, 50]. However, the two conditions obviously cannot hold together in this interval, because the trajectory of object A spans the interval [10, 200]. As another case, let us suppose that the intervals [10, 200] and [10, 50] are returned as part of the answer set for the spatial (directional) and trajectory conditions of this query, respectively, and that intervals could not be broken into subintervals. Then the result set would be empty for these two intervals. This is not correct, since there is an interval, [10, 50], where both conditions hold. These two cases clearly show that intervals must be classified into the two groups, atomic and nonatomic, for query processing. A discussion of another example query with a temporal predicate follows to make these concepts clearer.

Let us suppose that a user wants to find the parts of a video clip satisfying the following query:

Query: (A before B) and west(x, y), where A and B are Prolog subqueries and x and y are atoms (constants).

The interval operator before returns a set of atomic intervals where first A is true and B is false, and then A is false and B is true in time. If A and B are true in the intervals [4, 10] and [20, 30], respectively, and if these two intervals are both nonatomic, then the result set will consist of [10, 20], [10, 21], [9, 20], [10, 22], [9, 21], ..., [4, 30]. Now, let us discuss two different scenarios.

Case 1: west(x, y) holds for [9, 25]. This interval is nonatomic because west(x, y) returns nonatomic intervals. If the operator before returned only the atomic interval [4, 30] as the answer for “A before B”, then the answer set for the entire query would be empty. However, the user is interested in finding the parts of a video clip where “(A before B) and west(x, y)” is true. The intervals [10, 20], [10, 21], ..., [4, 29] also satisfy “A before B”; however, they would not be included in the answer set for before. This is wrong! All these intervals must be part of the answer set for before as well. If they are included, then the answer to the entire query will be [9, 25], because [9, 25] (atomic) and [9, 25] (nonatomic) => [9, 25] (atomic). Nonetheless, note that intervals such as [10, 19] and [11, 25] are not included in the answer set of “A before B”, since they do not satisfy the condition “A before B”.

Case 2: west(x, y) holds for [11, 25]. Let us suppose that before returned nonatomic intervals rather than atomic intervals and that the answer for “A before B” was [4, 30]. Then the answer to the entire query would be [11, 25], for [4, 30] (nonatomic) and [11, 25] (nonatomic) => [11, 25] (nonatomic). Nevertheless, this is wrong, because “A before B” is not satisfied within this interval. Hence before should return atomic intervals so that such incorrect results are not produced.

These two cases clearly show that the temporal operators should return atomic intervals and that the results should include the subintervals of each largest interval that satisfy the given conditions, rather than consisting only of the set of largest intervals. They also demonstrate why the classification of intervals as atomic and nonatomic is necessary for interval processing.
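The population of atomic intervals for before can be sketched as follows, under the assumption (as in the example above) that A and B each hold on a single nonatomic interval. The enumeration rule, that an interval qualifies iff it starts at a frame where A holds and ends at a frame where B holds, is our reading of the example; it is not BilVideo's actual algorithm.

def before_intervals(a, b):
    """Populate all atomic intervals for 'A before B', assuming A holds on
    the nonatomic interval a = (as_, ae) and B on b = (bs, be), with ae < bs.
    Illustrative enumeration only."""
    (as_, ae), (bs, be) = a, b
    if ae >= bs:
        return []
    # Every qualifying interval starts within A's interval and ends within B's.
    return [(s, e) for s in range(as_, ae + 1) for e in range(bs, be + 1)]

ivs = before_intervals((4, 10), (20, 30))
print((10, 20) in ivs, (4, 30) in ivs)   # True True
print((10, 19) in ivs, (11, 25) in ivs)  # False False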

5.5 Query examples

In this section, three example spatiotemporal queries are given to demonstrate how the query processor decomposes a query into subqueries. Intermediate results obtained from these subqueries are integrated step by step to form the final answer set.

Query 1: select segment, X, Y
         from all
         where west(X, Y) and west(Y, o1)
               and west(o1, o2)
               and tr(o2, [[west, east], [24, 40]])
                   sthreshold 0.4 dspweight 0.3 and
               disjoint(X, Y) before
               touch(X, Y) and
               disjoint(Y, o1);

This example query is decomposed into the following subqueries:

Subquery 1: tr(o2, [[west, east], [24, 40]])
            sthreshold 0.4 dspweight 0.3
Subquery 2: disjoint(X, Y)
Subquery 3: touch(X, Y)

Fig. 5. Query tree constructed for query 1

Subquery 4: west(X, Y) and west(Y, o1)
            and west(o1, o2)
            and disjoint(Y, o1)

The directional conditions west(X, Y), west(Y, o1), and west(o1, o2) can be grouped together with the topological condition disjoint(Y, o1) using the and operator without changing the semantics of the original query, as shown in the example decomposition. It should be noted that if the topological condition disjoint(Y, o1) were connected in the query with the operator or or with a temporal operator, such a grouping would not be possible. In this example, subqueries 2 through 4 are the maximal subqueries. Subqueries 2 and 3 are linked to each other by the temporal operator before. The rest of the internal nodes in the query tree contain the operator and. Figure 5 depicts the query tree constructed for this example query.

Query 2: select segment, Y
         from all
         where west(X, Y) and west(Y, o1) and
               tr(o2, [[west, east], [24, 40]])
                   sthreshold 0.4 dirweight 0.4 and
               disjoint(Y, o1);

Query 2 is decomposed into the following subqueries:

Subquery 1: tr(o2, [[west, east], [24, 40]])
            sthreshold 0.4 dirweight 0.4
Subquery 2: west(X, Y) and west(Y, o1)
            and disjoint(Y, o1)

To answer query 1, the query processor computes each subquery while traversing the query tree in postorder, performing interval processing at each internal node and taking into account the scope of each variable encountered. Here, the scope of the object variables X and Y covers subqueries 2, 3, and 4. Hence, for each value pair of the variables X and Y, a set of intervals is computed in subquery 2. Another reason for computing a set of intervals for each value pair is that the values obtained for the variables X and Y are also returned in pairs, along with the video segments satisfying the query conditions, as part of the query results. Hence, even if the scope of these variables were only subquery 2, the same type of interval processing and care would be required. Nonetheless, if an object variable is bound by only one subquery and its values are not to be returned as part of the query result, as is the case for the object variable X in query 2, then it is possible to combine consecutive intervals where the variable takes different values, as long as the remaining conditions are satisfied for the same value sequences of the remaining variables. Query 3 better explains this concept of interval processing and variable value computation:

Query 3: “Return the video segments in the database where object o1 is first disjoint from object o2 and then touches it, repeating this event three times, while it is inside another object.”

select segment
from all
where inside(o1, X)
      and (disjoint(o1, o2) meets
           touch(o1, o2)) repeat 3;

In this query, we do not care which object o1 is inside; we are only interested in the video segments where object o1 is first disjoint from object o2 and then touches it, repeating this event three times, while it is inside another object. Thus, the consecutive intervals computed for the different objects that contain object o1 may be combined, provided that the given conditions are satisfied.
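A postorder evaluation of the query tree, as described above for query 1, can be sketched as follows. The Node structure and the toy combiner are hypothetical; a full implementation would apply the interval semantics of Tables 2 and 3 and carry the variable bindings at each node.

# Minimal sketch of postorder evaluation over a query tree whose leaves are
# maximal subqueries and whose internal nodes are interval operators
# (hypothetical structure; names are ours, not BilVideo's).

class Node:
    def __init__(self, op=None, children=(), evaluate=None):
        self.op = op                # 'and', 'or', 'before', ... (internal node)
        self.children = children
        self.evaluate = evaluate    # leaf: callable returning a set of intervals

def postorder_eval(node, apply_op):
    if node.evaluate is not None:            # leaf: run the maximal subquery
        return node.evaluate()
    results = [postorder_eval(c, apply_op) for c in node.children]
    return apply_op(node.op, results)        # interval processing at the node

def apply_op(op, results):
    # Toy combiner over flat interval sets; ignores atomicity for brevity.
    a, b = results
    return a & b if op == "and" else a | b

tree = Node(op="and", children=(
    Node(evaluate=lambda: {(10, 50)}),
    Node(evaluate=lambda: {(10, 50), (60, 90)}),
))
print(postorder_eval(tree, apply_op))        # {(10, 50)}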

6 Discussion on performance

The running times of our algorithms for processing spatiotemporal queries depend on many parameters that are very hard to formulate precisely. This is mostly due to the possible existence of variables in user queries. As explained in Sect. 5.3, allowing variables in a user query makes query processing much harder; nonetheless, it also empowers the query capabilities of the system and results in much richer semantics for user queries. In BilVideo, when a variable is unified (bound to some values previously computed within its scope), these values are transferred and used for a condition (containing that variable) that comes next within the variable's scope. The query processor uses these values instead of finding all the values of the variable that satisfy the condition regardless of the previous condition(s) and then eliminating those that cannot be in the result set because they do not satisfy the previous condition(s) in the variable's scope. This speeds up query processing with unified variables, even though there is an overhead for transferring the previously computed values, because the query domains of the variables for the next condition are narrowed down (restricted to the previously computed values of the unified variables). Since a condition may contain any number of variables, and some of these variables might have been unified previously while executing the query, the query processor has to take into account a set of variable-value lists for that condition. For this reason, it is very hard to formalize the running-time behavior of our spatiotemporal query processing algorithms: it depends on many parameters, such as the number of variables used, their scopes within the entire query, the query domains of the variables for each condition, and the overhead involved in transferring the variable-value lists, in addition to the database size. Therefore, we instead provide a brief summary of our preliminary performance results, which are presented in detail in [9].

Table 4. Specifications of real video data

Video        #of frames   #of objects   Max. #of objects in a frame
jornal.mpg   5254         21            4
smurfs.avi   4185         13            6

These performance results show that the system is scalable for spatiotemporal queries in terms of the number of salient objects per frame and the total number of frames in a video clip. The results also demonstrate the space savings achieved by our rule-based approach. For the time efficiency tests, queries were given to the knowledge base as Prolog predicates. For the scalability and space-savings tests, program-generated synthetic video data were used. These tests constitute the first part of our overall tests. In the second part, the performance of the knowledge base was tested on some real video fragments, with space and time efficiency as the criteria, to show its applicability in real-life applications. The real video data were extracted from jornal.mpg (from the MPEG-7 Test Data Set CD-14, Port. news) and a Smurfs cartoon episode named Bigmouth's Friend. Table 4 presents some information about these video fragments.

For the space efficiency tests with the program-generated synthetic data, the number of objects per frame was selected as 8, 15, and 25, while the total number of frames was fixed at 100. To show the system's scalability in terms of the number of objects per frame, the total number of frames was chosen to be 100, and the number of objects per frame was varied from 4 to 25. For the scalability test with respect to the total number of frames, the number of objects was fixed at 8, while the total number of frames was varied from 100 to 1000.

In the tests conducted with the program-generated video data, there was a 19.59% space savings for the sample data of 8 objects and 1000 frames. The space savings was 31.47% for the sample video of 15 objects and 1000 frames, and 40.42% for 25 objects and 1000 frames. With the real data, our rule-based approach achieved a space savings of 37.5% for the first video fragment, jornal.mpg; the space savings for the other fragment, smurfs.avi, was 40%.

The space savings obtained from the program-generated video data is relatively low compared with that obtained from the real video fragments. We believe that this behavior is due to the random simulation of object motion in our synthetic test data: while creating the synthetic video data, the motion pattern of the objects was simulated by randomly changing the objects' MBR coordinates, choosing only one object to move at each frame. In real video, objects generally move more slowly, causing the set of spatial relations to change over a longer span of frames. It was also observed during the tests with the synthetic video data that the space savings does not change when the number of frames is increased while the number of objects of interest per frame is fixed. The test results obtained for the synthetic data are in agreement with those obtained for the real video; the differences seen in the results stem from the fact that the synthetic data were produced by a program and thereby could not perfectly simulate a real-life scenario.

The time efficiency tests performed on the program-generated synthetic data show that the system is scalable in terms of the number of objects and the number of frames when either of these numbers is increased while the other is fixed. Moreover, the results of the time efficiency tests on the real video data show that the knowledge base of the system has a reasonable response time. Therefore, we can claim that the knowledge base of BilVideo is fast enough to answer spatiotemporal queries.

7 Conclusions and future work

We proposed an SQL-like textual query language for spatiotemporal queries on video data and demonstrated the capabilities of the language through example queries drawn from different application areas. Our novel rule-based spatiotemporal query processing strategy has also been explained with query examples.

The BilVideo query language is designed to be used by any application that needs spatiotemporal query processing capabilities. It is extensible in that any application-dependent predicate may be used in user queries, provided that its name differs from those of the predefined predicates and constructs of the language and that it takes at least one argument. For this, it suffices to add the necessary facts and/or rules to the knowledge base a priori. Hence the language provides query support for application-dependent data through external predicates.

The BilVideo query language currently supports a broad range of spatiotemporal queries. However, the BilVideo system architecture is designed to handle semantic (keyword, event/activity, and category-based) and low-level (color, shape, and texture) video queries as well. We completed our work on semantic video modeling and reported our results in [3]. As for the low-level queries, our fact-extractor tool also extracts color and shape histograms of the salient objects in video keyframes [37], and it is currently being extended to extract texture information from the video keyframes as well. We are currently working on integrating support for semantic and low-level video queries into BilVideo by extending its query processor and query language without affecting the way spatiotemporal query conditions are specified in the query language and processed by the query processor. Furthermore, we have also completed our initial work on the optimization of spatiotemporal video queries [40]. In an ideal environment, the BilVideo query language will establish the basis for a Web-based visual query interface and serve as an embedded language for users. Hence we developed a Web-based visual query interface for visually specifying spatiotemporal video queries over the Internet [36]. We are currently enhancing this interface to support the specification of semantic and low-level video queries. We will integrate the Web-based visual query interface into BilVideo and make it available on the Internet when we complete our work on semantic and low-level video queries.

References

1. Adalı S, Candan KS, Chen S, Erol K, Subrahmanian VS (1996) Advanced video information systems: data structures and query processing. ACM Multimedia Sys 4:172–186
2. Allen JF (1983) Maintaining knowledge about temporal intervals. Commun ACM 26(11):832–843
3. Arslan U, Dönderler ME, Saykol E, Ulusoy Ö, Güdükbay U (2002) A semi-automatic semantic annotation tool for video databases. In: Proceedings of the workshop on multimedia semantics (SOFSEM 2002), Milovy, Czech Republic, 24–29 November 2002, pp 1–10. Available at: www.cs.bilkent.edu.tr/~ediz/bilmdg/papers/sofsem02.pdf
4. Chang NS, Fu KS (1980) Query by pictorial example. IEEE Trans Softw Eng SE-6(6):519–524
5. Chang S, Chen W, Meng HJ, Sundaram H, Zhong D (1997) VideoQ: an automated content-based video search system using visual cues. In: Proceedings of ACM Multimedia, Seattle, 9–13 November 1997, pp 313–324
6. Chang SK, Shi QY, Yan CW (1987) Iconic indexing by 2-D strings. IEEE Trans Patt Anal Mach Intell 9:413–428
7. Chu W, Cardenas AF, Taira RK (1995) A knowledge-based multimedia medical distributed database system: KMED. Inf Sys 20(2):75–96
8. Dönderler ME, Saykol E, Ulusoy Ö, Güdükbay U (2003) BilVideo: a video database management system. IEEE Multimedia 10(1):66–70
9. Dönderler ME, Ulusoy Ö, Güdükbay U (2002) A rule-based video database system architecture. Inf Sci 143(1–4):13–45
10. Donnelly C, Stallman R (1995) Bison: the yacc-compatible parser generator. Online manual
11. Egenhofer M, Franzosa R (1991) Point-set topological spatial relations. Int J Geograph Inf Sys 5(2):161–174
12. Flickner M, Sawhney H, Niblack W, Ashley J, Huang Q, Dom B, Gorkani M, Hafner J, Lee D, Petkovic D, Steele D, Yanker P (1995) Query by image and video content: the QBIC system. IEEE Comput 28:23–32
13. Güting RH, Böhlen MH, Erwig M, Jensen CS, Lorentzos NA, Schneider M, Vazirgiannis M (2000) A foundation for representing and querying moving objects. ACM Trans Database Sys 25(1):1–42
14. Hjelsvold R, Midtstraum R (1994) Modelling and querying video data. In: Proceedings of the 20th international conference on very large databases, Santiago, Chile, 12–15 September 1994, pp 686–694
15. Hwang E, Subrahmanian VS (1996) Querying video libraries. J Vis Commun Image Represent 7(1):44–60
16. Jiang H, Montesi D, Elmagarmid AK (1997) VideoText database systems. In: Proceedings of IEEE Multimedia Computing and Systems, Ottawa, Canada, 3–6 June 1997, pp 344–351
17. Johnson SC (1975) Yacc: yet another compiler compiler. Computing Science Technical Report 32, Bell Laboratories, Murray Hill, NJ
18. Koh J, Lee C, Chen ALP (1999) Semantic video model for content-based retrieval. In: Proceedings of IEEE Multimedia Computing and Systems, Florence, Italy, 7–11 June 1999, 1:472–478
19. Kuo TCT, Chen ALP (1996) A content-based query language for video databases. In: Proceedings of IEEE Multimedia Computing and Systems, Hiroshima, Japan, 17–23 June 1996, pp 209–214
20. Kuo TCT, Chen ALP (2000) Content-based query processing for video databases. IEEE Trans Multimedia 2(1):1–13
21. Lesk ME (1975) Lex: a lexical analyzer generator. Computing Science Technical Report 39, Bell Laboratories, Murray Hill, NJ
22. Li JZ (1998) Modeling and querying multimedia data. Technical Report TR-98-05, Department of Computing Science, University of Alberta, Alberta, Canada
23. Li JZ, Özsu MT (1997) Stars: a spatial attributes retrieval system for images and videos. In: Proceedings of the 4th international conference on multimedia modeling, Singapore, 18–19 November 1997, pp 69–84
24. Li JZ, Özsu MT, Szafron D (1997) Modeling of moving objects in a video database. In: Proceedings of IEEE Multimedia Computing and Systems, Ottawa, Canada, 3–6 June 1997, pp 336–343
25. Li JZ, Özsu MT, Szafron D, Oria V (1997) MOQL: a multimedia object query language. In: Proceedings of the 3rd international workshop on multimedia information systems, Como, Italy, 25–27 September 1997, pp 19–28
26. Li JZ, Özsu MT, Szafron D, Oria V (1997) Multimedia extensions to database query languages. Technical Report TR-97-01, Department of Computing Science, University of Alberta, Alberta, Canada
27. Marcus S, Subrahmanian VS (1996a) Foundations of multimedia information systems. J ACM 43(3):474–523
28. Marcus S, Subrahmanian VS (1996b) Towards a theory of multimedia database systems. In: Subrahmanian VS, Jajodia S (eds) Multimedia database systems: issues and research directions. Springer, Berlin Heidelberg New York, pp 1–35
29. Mehrotra S, Chakrabarti K, Ortega M, Rui Y, Huang TS (1997) Multimedia analysis and retrieval system (MARS project). In: Proceedings of the 3rd international workshop on information retrieval systems, Como, Italy, 25–27 September 1997, pp 39–45
30. Nabil M, Ngu AH, Shepherd J (2001) Modeling and retrieval of moving objects. Multimedia Tools Appl 13:35–71
31. Oomoto E, Tanaka K (1993) OVID: design and implementation of a video object database system. IEEE Trans Knowl Data Eng 5:629–643
32. Özsu MT, Iglinski P, Szafron D, El-Medani S, Junghanns M (1997) An object-oriented SGML/HyTime compliant multimedia database management system. In: Proceedings of ACM Multimedia, Seattle, 9–13 November 1997, pp 233–240
33. Papadias D, Theodoridis Y, Sellis T, Egenhofer M (1995) Topological relations in the world of minimum bounding rectangles: a study with R-trees. In: Proceedings of the ACM SIGMOD international conference on management of data, San Jose, 22–25 May 1995, pp 92–103
34. Paxson V (1995) Flex: a fast scanner generator. Online manual
35. Petrakis EGM, Orphanoudakis SC (1993) Methodology for the representation, indexing and retrieval of images by content. Image Vision Comput 11(8):504–521
36. Saykol E (2001) Web-based user interface for query specification in a video database system. Master's thesis, Department of Computer Engineering, Bilkent University, Ankara, Turkey
37. Saykol E, Güdükbay U, Ulusoy Ö (2002) A histogram-based approach for object-based query-by-shape-and-color in multimedia databases. Technical Report BU-CE-0201, Department of Computer Engineering, Bilkent University, Ankara, Turkey
38. Sistla AP, Wolfson O, Chamberlain S, Dao S (1997) Modeling and querying moving objects. In: Proceedings of IEEE Data Engineering, Birmingham, UK, 7–11 April 1997, pp 422–432
39. Smoliar SW, Zhang H (1994) Content-based video indexing and retrieval. IEEE Multimedia Mag 1(2):62–72
40. Ünel G, Dönderler ME, Ulusoy Ö, Güdükbay U (2004) An efficient query optimization strategy for spatio-temporal queries in video databases. J Sys Softw (in press)
41. Zhuang Y, Rui Y, Huang TS, Mehrotra S (1998) Applying semantic association to support content-based video retrieval. In: Proceedings of the IEEE Very Low Bitrate Video Coding workshop (VLBV98), Urbana, IL, 8–9 October 1998, pp 45–48

A Grammar specification of the query language

<…> := select <…> from all [where <…>] ';'
     | select <…> from <…> where <…> ';'
     | select segment [',' <…>] from <…> where <…> ';'
     | select <…> from <…> where <…> ';'
     | select '(' segment ')' [',' segment] [',' <…>] from <…> where <…> ';'
<…> := <…> | random '(' <…> ')'
<…> := average | sum | count
<…> := all | <…>
<…> := <…> [',' <…>]
<…> := '(' <…> ')'
     | not '(' condition ')'
     | <…> and <…>
     | <…> or <…>
     | <…> | <…>
     | <…> | <…>
<…> := <…> | <…> | <…> | <…> | <…>
<…> := <…> ( <…> | <…> ) | <…> '=' <…>
<…> := <…> | <…> '(' <…> [<…>] ')'
<…> := <…> | <…> '(' <…> ')'
<…> := appear '(' <…> ')'
<…> := <…> '(' <…> ',' <…> ')'
<…> := <…> '(' <…> ',' <…> ')'
<…> := <…> '(' <…> ',' <…> ')'
<…> := <…> '(' <…> ')'
<…> := project '(' <…> [',' <…>] ')'
<…> := tr '(' <…> ',' ( <…> ')' [<…>] | <…> ')' [<…>] ) [<…>]
<…> := <…> | '[' <…> ',' <…> ']'
<…> := '[' <…> ']'
<…> := '[' <…> ']'
<…> := '[' <…> ']'
<…> := <…> [dirweight <…> | dspweight <…>]
<…> := sthreshold <…>
<…> := tgap <…>
<…> := <…> [',' <…>]
<…> := <…> [',' <…>]
<…> := <…> repeat [<…>]
<…> := '(' <…> ')'
     | not '(' <…> ')'
     | <…> and <…>
     | <…> or <…>
     | <…> | <…>
     | <…> | <…>
     | <…>
     | <…>
<…> := left | right | above | below | <…>
<…> := west | east | north | south | northeast | southeast | northwest | southwest
<…> := equal | contains | inside | cover | coveredby | disjoint | overlap | touch
<…> := infrontof | behind | sinfrontof | sbehind | tfbehind | tdfbehind | samelevel
<…> := before | meets | overlaps | starts | during | finishes | ibefore | imeets | ioverlaps | istarts | iduring | ifinishes
<…> := <…> | <…>
<…> := <…> [',' <…>]
<…> := <…> [',' <…>]
<…> := (1-9)(0-9)*
<…> := (1-9)(0-9)*
<…> := (A-Z)(A-Za-z0-9)*
<…> := (a-z)(A-Za-z0-9)*
<…>4 := (a-z)(A-Za-z0-9)*
<…> := '=' | '!='
<…> := 0 '.' (0*(1-9)0*)+
<…> := 0 ['.' (0-9)*] | 1
<…> := 0 ['.' [0-9]*] | 1
<…> := (1-9)(0-9)*

4 The lexer recognizes such a character sequence as an external-predicate name iff it is different from every predefined predicate and construct in the language.
