
Efficient Action Spotting Based on a Spacetime Oriented Structure Representation

Konstantinos G. Derpanis, Mikhail Sizintsev, Kevin Cannons and Richard P. Wildes

Department of Computer Science and Engineering

York University

Toronto, Ontario, Canada

Abstract

This paper addresses action spotting, the spatiotemporal detection and localization of human actions in video. A novel compact local descriptor of video dynamics in the context of action spotting is introduced based on visual spacetime oriented energy measurements. This descriptor is efficiently computed directly from raw image intensity data and thereby forgoes the problems typically associated with flow-based features. An important aspect of the descriptor is that it allows for the comparison of the underlying dynamics of two spacetime video segments irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. An associated similarity measure is introduced that admits efficient exhaustive search for an action template across candidate video sequences. Empirical evaluation of the approach on a set of challenging natural videos suggests its efficacy.

1. Introduction

This paper addresses the problem of detecting and localizing spacetime patterns, as represented by a single query video, in a reference video database. Specifically, patterns of current concern are those induced by human actions. This problem is referred to as action spotting. The term "action" refers to a simple dynamic pattern executed by a person over a short duration of time (e.g., walking and hand waving). Potential applications of the presented approach include video indexing and browsing, surveillance, visually-guided interfaces and tracking initialization.

A key challenge in action spotting arises from the fact that the same underlying pattern dynamics can yield very different image intensities due to spatial appearance differences, as with changes in clothing and live action versus animated cartoon content. Another challenge arises in natural imaging conditions where scene clutter requires the ability to distinguish relevant pattern information from distractions. Clutter can be of two types: Background clutter arises when actions are depicted in front of complicated, possibly dynamic, backdrops; foreground clutter arises when actions are depicted with distractions superimposed, as with dynamic lighting, pseudo-transparency (e.g., walking behind a chain-link fence), temporal aliasing and weather effects (e.g., rain and snow). It is proposed that the choice of representation is key to meeting these challenges: A representation that is invariant to purely spatial pattern allows actions to be recognized independent of actor appearance; a representation that supports fine delineations of spacetime structure makes it possible to tease action information from clutter.

Figure 1. Overview of the approach: a bank of spacetime oriented filters decomposes the input videos (template and search database) into a distributed representation according to 3D, (x, y, t), spatiotemporal orientation; in a sliding window manner, the distribution of oriented energies of the template is compared to the search distribution at corresponding positions to yield a similarity volume; finally, significant local maxima in the similarity volume are identified.

For present purposes, local spatiotemporal orientation is of fundamental descriptive power, as it captures the first-order correlation structure of the data irrespective of its origin (i.e., irrespective of the underlying visual phenomena), even while distinguishing a wide range of image dynamics (e.g., single motion, multiple superimposed motions, temporal flicker). Correspondingly, visual spacetime will be represented according to its local 3D, (x, y, t), orientation structure: Each point of spacetime will be associated with a distribution of measurements indicating the relative presence of a particular set of spatiotemporal orientations. Comparisons in searching are made between these distributions. Figure 1 provides an overview of the approach.

A wealth of work has considered the analysis of human actions from visual data [26]. A brief survey of representative approaches follows.

Tracking-based methods begin by tracking body parts and/or joints and classify actions based on features extracted from the motion trajectories (e.g., [28, 19, 1]). General impediments to fully automated operation include tracker initialization and robustness. Consequently, much of this work has been realized with some degree of human intervention.

Other methods have classified actions based on features extracted from 3D spacetime body shapes as represented by contours or silhouettes, with the motivation that such representations are robust to spatial appearance details [3, 11, 15]. This class of approach relies on figure-ground segmentation across spacetime, with the drawback that robust segmentation remains elusive in uncontrolled settings. Further, silhouettes do not provide information on the human body limbs when they are in front of the body (i.e., inside the silhouette) and thus yield ambiguous information.

Recently, spacetime interest points have emerged as a popular means for action classification [22, 8, 17, 16]. Interest points typically are taken as spacetime loci that exhibit variation along all spatiotemporal dimensions and provide a sparse set of descriptors to guide action recognition. Sparsity is appealing as it yields significant reduction in computational effort; however, interest point detectors often fire erratically on shadows and highlights [14], and along object occluding boundaries, which casts doubt on their applicability to cluttered natural imagery. Additionally, for actions substantially comprised of smooth motion, important information is ignored in favour of a small number of possibly insufficient interest points.

Most closely related to the approach proposed in the present paper are others that have considered dense templates of image-based measurements to represent actions (e.g., optical flow, spatiotemporal gradients and other filter responses selective to both spatial and temporal orientation), typically matched to a video of interest in a sliding window formulation. Chief advantages of this framework include avoidance of problematic localization, tracking and segmentation preprocessing of the input video; however, such approaches can be computationally intensive. Further limitations are tied to the particulars of the image measurement used to define the template.

Optical flow-based methods (e.g., [9, 14, 20]) suffer as dense flow estimates are unreliable where their local single flow assumption does not hold (e.g., along occluding boundaries and in the presence of foreground clutter). Work using spatiotemporal gradients has encapsulated the measurements in the gradient structure tensor [23, 15]. This tack yields a compact way to characterize visual spacetime locally, with template video matches via dimensionality comparisons; however, the compactness also limits its descriptive power: Areas containing two or more orientations in a region are not readily discriminated, as their dimensionality will be the same; further, the presence of foreground clutter in a video of interest will contaminate dimensionality measurements to yield match failures. Finally, methods based on filter responses selective for both spatial and temporal orientation (e.g., [5, 13, 18]) suffer from their inability to generalize across differences in spatial appearance (e.g., different clothing) of the same action.

The spacetime features used in the present work derive from filter techniques that capture dynamic aspects of visual spacetime with robustness to purely spatial appearance, as developed previously (e.g., in application to motion estimation [24] and video segmentation [7]). The present work appears to be the first to apply such filtering to action analysis. Other notable applications of similar spacetime oriented energy filters include pattern categorization [27], tracking [4] and spacetime stereo [25].

In the light of previous work, the major contributions of the present paper are as follows. (i) A novel compact local oriented energy feature set is developed for action spotting. This representation supports fine delineations of visual spacetime structure to capture the rich underlying dynamics of an action from a single query video. (ii) An associated computationally efficient similarity measure and search method are proposed that leverage the structure of the representation. The approach does not require preprocessing in the form of person localization, tracking, motion estimation, figure-ground segmentation or learning. (iii) The approach can accommodate variable appearance of the same action, rapid dynamics, multiple actions in the field-of-view and cluttered backgrounds, and is resilient to the addition of distracting foreground clutter. While others have dealt with background clutter, it appears that the present work is the first to address directly the foreground clutter challenge. (iv) The approach is demonstrated on a set of challenging natural videos.

2. Technical approach

In visual spacetime the local 3D, (x, y, t), orientation structure of a pattern captures significant, meaningful aspects of its dynamics. For action spotting, single motion at a point, e.g., motion of an isolated body part, is captured as orientation along a particular spacetime direction. Significantly, more complicated scenarios still give rise to well defined spacetime orientation distributions: Occlusions and multiple motions (e.g., as limbs cross or foreground clutter intrudes) correspond to multiple orientations; high velocity and temporal flicker (e.g., as encountered with rapid actions) correspond to orientations that become orthogonal to the temporal axis. Further, appropriate definition of local spacetime oriented energy measurements can yield invariance to purely spatial pattern characteristics and support action spotting as an actor changes spatial appearance. Based on these observations, the developed action spotting approach makes use of such measurements as local features that are combined into spacetime templates to maintain relative geometric spacetime positions.

2.1. Features: Spacetime orientation

The desired spacetime orientation decomposition is realized using broadly tuned 3D Gaussian third derivative filters, $G_{3_{\hat\theta}}(\mathbf{x})$, with the unit vector $\hat\theta$ capturing the 3D direction of the filter symmetry axis and $\mathbf{x} = (x, y, t)$ spacetime position. The responses of the image data to this filter are pointwise rectified (squared) and integrated (summed) over a spacetime neighbourhood, $\Omega$, to yield the following locally aggregated pointwise energy measurement

$$E_{\hat\theta}(\mathbf{x}) = \sum_{\mathbf{x} \in \Omega} \left( G_{3_{\hat\theta}} * I \right)^2, \qquad (1)$$

where $I \equiv I(\mathbf{x})$ denotes the input imagery and $*$ convolution. Notice that while the employed Gaussian derivative filter is phase-sensitive, summation over the support region ameliorates this sensitivity to yield a measurement of signal energy at orientation $\hat\theta$. More specifically, this follows from Rayleigh's theorem [12] that specifies the phase-independent signal energy in the frequency passband of the Gaussian derivative:

$$E_{\hat\theta}(\mathbf{x}) \propto \sum_{\omega_x, \omega_y, \omega_t} \left| \mathcal{F}\{ G_{3_{\hat\theta}} * I \}(\omega_x, \omega_y, \omega_t) \right|^2, \qquad (2)$$

where $(\omega_x, \omega_y)$ denote the spatial frequency, $\omega_t$ the temporal frequency and $\mathcal{F}$ the Fourier transform¹.
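As an illustration of the rectify-and-aggregate step in Eq. (1), the following NumPy sketch assumes the response of the video to a single oriented $G_3$ filter has already been computed (e.g., with the separable steerable basis of [6]); the box-shaped aggregation support and its size are illustrative choices, not the paper's exact settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def oriented_energy(g3_response, omega=(5, 5, 5)):
    """Eq. (1): pointwise rectify (square) the response of the video to one
    oriented Gaussian third-derivative filter and sum it over a local
    spacetime neighbourhood Omega.

    g3_response : 3D array over (x, y, t), assumed precomputed.
    omega : size of the aggregation support (an illustrative choice).
    """
    rectified = g3_response ** 2
    # uniform_filter gives the local mean; scaling by the support volume
    # turns the mean into the local sum over Omega.
    return uniform_filter(rectified, size=omega) * float(np.prod(omega))
```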

Each oriented energy measurement, (1), is confounded with spatial orientation. Consequently, in cases where the spatial structure varies widely about an otherwise coherent dynamic region (e.g., single motion of a surface with varying spatial texture), the responses of the ensemble of oriented energies will reflect this behaviour and thereby are appearance dependent; whereas, a description of pure pattern dynamics is sought. Note that while in tracking applications it is vital to preserve both the spatial appearance and dynamic properties of a region of interest, in action spotting one wants to be invariant to appearance, while being sensitive to dynamic properties. This is necessary so as to detect different people wearing a variety of clothing as they perform the same action. To remove this difficulty, the spatial orientation component is discounted by "marginalization", as follows.

In general, a pattern exhibiting a single spacetime orientation (e.g., image velocity) manifests itself as a plane through the origin in the frequency domain [12]. Correspondingly, summation across a set of x-y-t-oriented energy measurements consistent with a single frequency domain plane through the origin is indicative of energy along the associated spacetime orientation, independent of purely spatial orientation. Since Gaussian derivative filters of order N = 3 are used in the oriented filtering, (1), it is appropriate to consider N+1 = 4 equally spaced directions along each frequency domain plane of interest, as N+1 directions are needed to span orientation in a plane with Gaussian derivative filters of order N [10]. Let each plane be parameterized by its unit normal, $\hat{n}$; a set of N+1 equally spaced directions within the plane are given as

$$\hat\theta_i = \cos\!\left(\frac{2\pi i}{N+1}\right) \hat\theta_a(\hat{n}) + \sin\!\left(\frac{2\pi i}{N+1}\right) \hat\theta_b(\hat{n}), \qquad (3)$$

with $\hat\theta_a(\hat{n}) = \hat{n} \times \hat{e}_x / \| \hat{n} \times \hat{e}_x \|$, $\hat\theta_b(\hat{n}) = \hat{n} \times \hat\theta_a(\hat{n})$, $\hat{e}_x$ the unit vector along the $\omega_x$-axis² and $0 \le i \le N$.

Now, energy along a frequency domain plane with normal $\hat{n}$ and spatial orientation discounted through marginalization is given by summing across the set of measurements, $E_{\hat\theta_i}$, as

$$\tilde{E}_{\hat{n}}(\mathbf{x}) = \sum_{i=0}^{N} E_{\hat\theta_i}(\mathbf{x}), \qquad (4)$$

with $\hat\theta_i$ one of the N+1 = 4 specified directions, (3), and each $E_{\hat\theta_i}$ calculated via the oriented energy filtering, (1). In the present implementation, six different spacetime orientations are made explicit, corresponding to static (no motion/orientation orthogonal to the image plane), leftward, rightward, upward, downward motion (one pixel/frame movement), and flicker/infinite motion (orientation orthogonal to the temporal axis); although, due to the relatively broad tuning of the filters employed, responses arise to a range of orientations about the peak tunings.

¹ Strictly, Rayleigh's theorem is stated with infinite frequency domain support on summation.
² Depending on the spacetime orientation sought, $\hat{e}_x$ can be replaced with another axis to avoid an undefined vector.
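The in-plane directions of Eq. (3) and the marginalization of Eq. (4) can be sketched as follows; the fallback-axis handling mirrors footnote 2, and oriented_energy_fn is a hypothetical callable standing in for the filtering of Eq. (1).

```python
import numpy as np

def in_plane_directions(n_hat, N=3):
    """Eq. (3): N + 1 equally spaced unit directions lying in the frequency
    domain plane with unit normal n_hat (Gaussian derivatives of order N)."""
    n_hat = np.asarray(n_hat, dtype=float)
    e = np.array([1.0, 0.0, 0.0])                 # unit vector along the omega_x axis
    if np.linalg.norm(np.cross(n_hat, e)) < 1e-8:
        e = np.array([0.0, 1.0, 0.0])             # footnote 2: switch axes if undefined
    theta_a = np.cross(n_hat, e)
    theta_a /= np.linalg.norm(theta_a)
    theta_b = np.cross(n_hat, theta_a)
    angles = 2.0 * np.pi * np.arange(N + 1) / (N + 1)
    return [np.cos(a) * theta_a + np.sin(a) * theta_b for a in angles]

def marginalized_energy(oriented_energy_fn, n_hat, N=3):
    """Eq. (4): sum the oriented energies over the in-plane directions to
    discount purely spatial orientation. oriented_energy_fn(theta) is a
    hypothetical callable returning the E_theta volume of Eq. (1)."""
    return sum(oriented_energy_fn(theta) for theta in in_plane_directions(n_hat, N))
```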

Finally, the marginalized energy measurements, (4), are confounded by the local contrast of the signal and as a result increase monotonically with contrast. This makes it impossible to determine whether a high response for a particular spacetime orientation is indicative of its presence or is indeed a low match that yields a high response due to significant contrast in the signal. To arrive at a purer measure of spacetime orientation, the energy measures are normalized by the sum of consort planar energy responses at each point,

$$\hat{E}_{\hat{n}_i}(\mathbf{x}) = \tilde{E}_{\hat{n}_i}(\mathbf{x}) \Big/ \left( \sum_{j=1}^{M} \tilde{E}_{\hat{n}_j}(\mathbf{x}) + \epsilon \right), \qquad (5)$$

where $M$ denotes the number of spacetime orientations considered and $\epsilon$ is a constant introduced as a noise floor and to avoid instabilities at points where the overall energy is small. As applied to the six oriented, appearance marginalized energy measurements, (4), Eq. (5) produces a corresponding set of six normalized, marginalized oriented energy measurements. To this set a seventh measurement is added that explicitly captures lack of structure via normalized $\epsilon$,

$$\hat{E}_{\epsilon}(\mathbf{x}) = \epsilon \Big/ \left( \sum_{j=1}^{M} \tilde{E}_{\hat{n}_j}(\mathbf{x}) + \epsilon \right), \qquad (6)$$

to yield a seven dimensional feature vector at each point in the image data. (Note that for loci where oriented structure is less apparent, the summation in (6) will tend to 0; hence, $\hat{E}_{\epsilon}$ approaches 1 and thereby indicates relative lack of structure.)

Conceptually, (1)-(6) can be thought of as taking an image sequence and carving its (local) power spectrum into a set of planes, with each plane corresponding to a particular spacetime orientation, to provide a relative indication of the presence of structure along each plane, or lack thereof in the case of a uniform intensity region as captured by the normalized $\epsilon$, (6). This orientation decomposition of input imagery is defined pointwise in spacetime. For present purposes, it is used to define spatiotemporally dense 3D, (x, y, t), action templates from an example video (with each point in the template associated with a 7D orientation feature vector) to be matched to correspondingly represented videos where actions are to be spotted.
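A minimal sketch of the divisive normalization of Eqs. (5)-(6), assuming the six marginalized energy volumes of Eq. (4) have been stacked along a leading axis; this channel layout is an assumption of the sketch, not fixed by the paper.

```python
import numpy as np

def normalize_energies(E, epsilon=500.0):
    """Eqs. (5)-(6): divisive normalization of the M marginalized energy
    volumes plus the epsilon channel that signals lack of structure.

    E : array of shape (M, X, Y, T) holding the energies of Eq. (4) for the
        M spacetime orientations (six in the present implementation).
    Returns an array of shape (M + 1, X, Y, T).
    """
    denom = E.sum(axis=0) + epsilon
    E_hat = E / denom                       # Eq. (5): one channel per orientation
    E_eps = epsilon / denom                 # Eq. (6): the "no structure" channel
    return np.concatenate([E_hat, E_eps[np.newaxis]], axis=0)
```

At each spacetime point the M + 1 channels sum to one, so every point carries the distribution (histogram) that is compared during matching.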

The constructed representation enjoys a number of attributes that are worth emphasizing. (i) Owing to the bandpass nature of the Gaussian derivative filters, (1), the representation is invariant to additive photometric bias in the input signal. (ii) Owing to the divisive normalization, (5), the representation is invariant to multiplicative photometric bias. (iii) Owing to the marginalization, (4), the representation is invariant to changes in appearance manifest as spatial orientation variation. Overall, these three invariances result in a robust pattern description that is invariant to changes that do not correspond to dynamic variation (e.g., different clothing), even while making explicit local orientation structure that arises with temporal variation (single motion, multiple motion, temporal flicker, etc.). (iv) Owing to the oriented energies being defined over a spatiotemporal support region, (1), the representation can deal with input data that are not exactly spatiotemporally aligned. (v) Owing to the distributed nature of the representation, foreground clutter can be accommodated: Both the desirable action pattern structure and the undesirable clutter structure can be captured jointly so that the desirable components remain available for matching even in the presence of clutter. (vi) The representation is efficiently realized via linear (separable convolution, pointwise addition) and pointwise non-linear (squaring, division) operations [6].

2.2. Spacetime template matching

To detect actions (as defined by a small template video) in a larger search video, the search video is scanned over all spacetime positions by sliding a 3D template over every spacetime position. At each position, the similarity between the oriented energy distributions (histograms) at the corresponding positions of the template and search volumes is computed.

To obtain a global match measure, $M(\mathbf{x})$, between the template and search videos at each image position, $\mathbf{x}$, of the search volume, the individual histogram similarity measurements are summed across the template:

$$M(\mathbf{x}) = \sum_{\mathbf{u}} m[\mathbf{S}(\mathbf{u}), \mathbf{T}(\mathbf{u} - \mathbf{x})], \qquad (7)$$

where $\mathbf{u} = (u, v, w)$ ranges over the spacetime support of the template volume and $m[\mathbf{S}(\mathbf{u}), \mathbf{T}(\mathbf{u} - \mathbf{x})]$ is the similarity between local distributions of the template, $\mathbf{T}$, and the search, $\mathbf{S}$, volumes. The global similarity measure peaks represent potential match locations.

There are several histogram similarity measures that could be used [21]. Here, the Bhattacharyya coefficient [2] is used, as it takes into account the summed unity structure of distributions (unlike $L_p$-based match measures) and yields an efficient implementation (see Section 2.2.2). The Bhattacharyya coefficient for two histograms $P$ and $Q$, each with $B$ bins, is defined as

$$m(P, Q) = \sum_{b=1}^{B} \sqrt{P_b Q_b}, \qquad (8)$$

with $b$ the bin index. This measure is bounded below by zero and above by one, with zero indicating a complete mismatch, intermediate values indicating greater similarity and one complete agreement. Significantly, the bounded nature of the Bhattacharyya coefficient makes it robust to small outliers (e.g., as might arise during occlusion in the present application).
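For two B-bin histograms, Eq. (8) reduces to a one-line computation; a minimal sketch:

```python
import numpy as np

def bhattacharyya(P, Q):
    """Eq. (8): Bhattacharyya coefficient of two B-bin histograms.
    Both histograms are assumed non-negative and to sum to one; the
    result lies in [0, 1], with 1 indicating complete agreement."""
    return float(np.sum(np.sqrt(np.asarray(P) * np.asarray(Q))))
```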

The final step consists of identifying peaks in the similarity volume, $M$, where peaks correspond to volumetric regions in the search volume that match closely with the template dynamics. The local maxima in the volume are identified via non-maxima suppression. In the experiments, the volumetric region of the template centered at the peak is used for suppression.
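A sketch of this greedy peak extraction (cf. Algorithm 1, steps 5-7), assuming a 3D similarity volume and a suppression window the size of the template; it is one simple realization of the non-maxima suppression described above, not necessarily the exact procedure used in the experiments.

```python
import numpy as np

def find_peaks(M, template_shape, tau):
    """Repeatedly take the global maximum of the similarity volume M and
    zero out a template-sized region centred on it, until the remaining
    maximum falls below the similarity threshold tau (assumed > 0)."""
    M = M.copy()
    half = [s // 2 for s in template_shape]
    peaks = []
    while True:
        idx = np.unravel_index(np.argmax(M), M.shape)
        if M[idx] < tau:
            break
        peaks.append(idx)
        lo = [max(i - h, 0) for i, h in zip(idx, half)]
        hi = [min(i + h + 1, s) for i, h, s in zip(idx, half, M.shape)]
        M[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = 0.0   # non-maxima suppression
    return peaks
```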

2.2.1 Weighting template contributions

Depending on the task, it may be desirable to weight the contribution of various regions in the template differently. For example, one may want to emphasize certain spatial regions and/or frames in the template. This can be accommodated with the following modification to the global match measure:

$$M(\mathbf{x}) = \sum_{\mathbf{u}} w(\mathbf{u})\, m[\mathbf{S}(\mathbf{u}), \mathbf{T}(\mathbf{u} - \mathbf{x})], \qquad (9)$$

where $w$ denotes the weighting function. In some scenarios, it may also be desired to emphasize the contribution of certain dynamics in the template over others. For example, one may want to emphasize the dynamic over the unstructured and static information. This can be done by setting the weight in the match measure, (9), to $w = 1 - (\hat{E}_{\epsilon} + \hat{E}_{\mathrm{static}})$, with $\hat{E}_{\mathrm{static}}$ the oriented energy measure, (5), corresponding to static, i.e., non-moving/zero-velocity, structure and $\hat{E}_{\epsilon}$ capturing local lack of structure, (6). An advantage of the developed representation is that it makes these types of semantically meaningful dynamics directly accessible.
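A sketch of constructing this weight from the template's feature channels; the channel indices are assumptions of the sketch, not fixed by the paper.

```python
def dynamics_weight(T_feat, static_channel, eps_channel):
    """Weight of Eq. (9) that emphasizes dynamic structure:
    w = 1 - (E_eps + E_static), computed over the template support.

    T_feat : (M + 1, X, Y, T) normalized template energies (Sec. 2.1);
    static_channel, eps_channel : indices of the static-orientation and
    epsilon (no structure) channels (layout assumed by this sketch).
    """
    return 1.0 - (T_feat[eps_channel] + T_feat[static_channel])
```

In the sliding-window form of Eq. (9), this volume multiplies the per-position histogram similarities over the template support.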

2.2.2 Efficient matching

For efficient search, one could resort to: (i) spatiotemporal coarse-to-fine search using spacetime pyramids [23], (ii) evaluation of the template on a coarser sampling of positions in the search volume, (iii) evaluation of a subset of distributions in the template and (iv) early termination of match computation. A drawback of these optimizations is that the target may be missed entirely. In this section, it is shown that exhaustive computation of the search measure, (7), can be realized in a computationally efficient manner.

Inserting the Bhattacharyya coefficient, (8), into the global match measure, (7), and reorganizing by swapping the spacetime and bin summation orders reveals that the expression is equivalent to the sum of cross-correlations between the individual bin volumes:

$$M(\mathbf{x}) = \sum_{b} \sum_{\mathbf{u}} \sqrt{S_b(\mathbf{u})} \sqrt{T_b(\mathbf{u} - \mathbf{x})} = \sum_{b} \sqrt{S_b} \star \sqrt{T_b}, \qquad (10)$$

with $\star$ denoting cross-correlation, $b$ indexing histogram bins and $\mathbf{u} = (u, v, w)$ ranging over template support.

Consequently, the correlation surface can be computed efficiently in the frequency domain using the Convolution Theorem of the Fourier transform [12], where the expensive correlation operations in spacetime are exchanged for relatively inexpensive pointwise multiplications in the frequency domain:

$$M(\mathbf{x}) = \mathcal{F}^{-1}\left\{ \sum_{b} \mathcal{F}\{\sqrt{S_b}\}\, \mathcal{F}\{\sqrt{\bar{T}_b}\} \right\}, \qquad (11)$$

with $\mathcal{F}\{\cdot\}$ and $\mathcal{F}^{-1}\{\cdot\}$ denoting the Fourier transform and its inverse, resp., and $\bar{T}_b$ the reflected template. In implementation, the Fourier transforms are realized efficiently by the fast Fourier transform (FFT).
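One way to realize Eqs. (10)-(11) in code is FFT-based convolution with a reflected template, which is equivalent to the cross-correlation of the square-rooted bin volumes; the sketch below uses SciPy's fftconvolve and assumes the per-bin volumes are stacked along a leading axis.

```python
import numpy as np
from scipy.signal import fftconvolve

def match_volume(S_feat, T_feat):
    """Eqs. (10)-(11): the global Bhattacharyya match computed as a sum,
    over histogram bins, of cross-correlations between the square-rooted
    bin volumes of the search and template representations.

    S_feat, T_feat : arrays of shape (B, X, Y, T) holding, at every
        spacetime point, the B-bin oriented energy distribution.
    """
    M = None
    for b in range(S_feat.shape[0]):
        s_sqrt = np.sqrt(S_feat[b])
        # Reflecting the template turns FFT convolution into cross-correlation.
        t_rev = np.sqrt(T_feat[b])[::-1, ::-1, ::-1]
        corr = fftconvolve(s_sqrt, t_rev, mode='valid')
        M = corr if M is None else M + corr
    return M
```

With mode='valid', the returned volume covers only placements where the template lies fully inside the search volume, which matches the sliding-window formulation.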

2.2.3 Computational complexity analysis

Let $W_{\{T,S\}}$, $H_{\{T,S\}}$, $D_{\{T,S\}}$ be the width, height and temporal duration, respectively, of the template, $T$, and the search video, $S$, and $B$ denote the number of spacetime orientation histogram bins. The complexity of the correlation-based scheme in the spacetime domain, (10), is $O\!\left(B \prod_{i \in \{T,S\}} W_i H_i D_i\right)$. In the case of the frequency domain-based correlation, (11), the 3D FFT can be realized efficiently by a set of 1D FFTs due to the separability of the kernel [12]. The computational complexity of the frequency domain-based correlation is $O[B\, W_S H_S D_S (\log_2 D_S + \log_2 W_S + \log_2 H_S)]$.

In practice, the overall runtime to compute the entire match volume between a 50×25×20 template and a 144×180×200 search video with six spacetime orientations and $\epsilon$ is 26 minutes when computed strictly in the spacetime domain, (10), and 20 seconds (i.e., 10 frames/sec) when computed using the frequency-based scheme, (11), with the computation of the representation (Sec. 2.1) taking up 16 seconds of the total time. These timings are based on unoptimized Matlab code executing on a 2.3 GHz processor. In comparison, using the same sized input and a Pentium 3.0 GHz processor, [23] report that their approach takes 30 minutes for exhaustive search.

Depending on the target application, additional savings of the proposed approach can be achieved by precomputing the search target representation off-line. Also, since the representation construction and matching are highly parallelizable, real to near-real-time performance is anticipated through the use of widely available hardware and instruction sets, e.g., multicore CPUs, GPUs and SIMD instruction sets.

To recapitulate, the proposed approach is given in algorithmic terms in Algorithm 1.

Algorithm 1: Action spotting.

Input: T: Query video, S: Search video, τ: Similarity threshold
Output: M: Similarity volume, d: Set of bounding volumes of detected actions

Step 1: Compute spacetime oriented energy representation (Sec. 2.1)
1. Initialize 3D G3 steerable basis.
2. Compute normalized spacetime oriented energies for T and S, Eq. (1)-(6).
Step 2: Spacetime template matching (Sec. 2.2, 2.2.1 and 2.2.2)
3. (Optional) Weight template representation.
4. Compute M between T and S (Eq. 11).
Step 3: Find similarity peaks (Sec. 2.2)
5. Find global maximum in M.
6. Suppress values around the maximum (set to zero).
7. Repeat 5. and 6. until remaining match scores are below τ.
8. Centre bounding boxes, d, with size of template at identified maxima.
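Under the assumption that the helpers sketched earlier are available (a representation routine for Eqs. (1)-(6), match_volume for Eq. (11) and find_peaks for steps 5-7), Algorithm 1 can be composed as in the following hypothetical driver.

```python
def spot_actions(T_video, S_video, tau, compute_representation, template_shape):
    """Hypothetical composition of Algorithm 1 from the sketches above.

    compute_representation(video) is assumed to return the (B, X, Y, T)
    stack of normalized oriented energies of Section 2.1 (Eqs. 1-6);
    match_volume and find_peaks are the functions sketched earlier.
    """
    T_feat = compute_representation(T_video)    # Step 1 for the template
    S_feat = compute_representation(S_video)    # Step 1 for the search video
    M = match_volume(S_feat, T_feat)            # Step 2: Eq. (11)
    peaks = find_peaks(M, template_shape, tau)  # Step 3: steps 5-7
    # Step 8: bounding volumes of the template's size centred at the peaks.
    return M, peaks
```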

Figure 2. Aerobics routine (panels: Jump Left/Right, Spin Left/Right, Squat). The instructor performs four cycles of a dance routine composed of directional jumping, spinning and squatting; each row corresponds to a cycle.

3. Empirical evaluation

The performance of the proposed action spotting algorithm has been evaluated on an illustrative set of test sequences. Matches are represented as a series of (spatial) bounding boxes that spatiotemporally outline the action. Unless otherwise stated, the templates are weighted by $1 - (\hat{E}_{\epsilon} + \hat{E}_{\mathrm{static}})$, see (9), to emphasize the dynamics of the action pattern over the background portion of the template. The constant $\epsilon$ is empirically set to 500 for all experiments. All video results and additional examples are available at: www.cse.yorku.ca/vision/research/action-spotting.

Figure 2 shows results of the proposed approach on an aerobics routine consisting of 806 frames and a resolution of 361×241. The instructor performs four cycles of the routine composed of directional jumping, spinning and squatting. The first cycle consists of jumping left, spinning right and squatting while the instructor is moving to her left, then a mirrored version while moving to her right, then again moving to her left and concludes moving to her right. Challenging aspects of this example include fast (realistic) human movements and the instructor introducing nuances into each performance of the same action; thus, no two examples of the same action are exactly alike. The templates are all based on the first cycle of the routine. The jumping right and spinning left templates are defined by mirrored versions of the jumping left and spinning right, resp. Each template is matched with the search target, yielding five similarity volumes in total. Matches in each volume are combined to yield the final detections. All actions are correctly spotted, with no false positives.

Figure 3. Multiple actions in the outdoors. (top, middle) Sample frames for two query templates depicting walking left and two-handed wave actions. A third template for a walking right action is derived from mirroring the walking left template. (bottom) Several example detection results from an outdoor scene containing the query actions.

Figure 3 shows results of the proposed approach on an outdoor scene consisting of three actions, namely, walking left, walking right and two-handed wave. In addition, there are several instances of distracting background clutter, most notably the water flowing from the fountain. The video consists of 672 frames with a resolution of 321×185. The walk right template is a mirrored version of the walk left template. Similar to the previous example, the matches for each template are combined to yield the final detections. All actions are spotted correctly, save one walking left action that differs significantly in spatial scale by a factor of 2.5 from the template; there are no false positives.

Figure 4 shows results of the proposed approach on two outdoor scenes containing distinct forms of foreground clutter, which superimpose significant unmodeled patterning over the depicted actions and thereby test robustness to irrelevant structure in matching.

Figure 4. Foreground clutter. (top) Sample frames for a one-handed wave query template. Templates for other actions in this figure (two-handed wave, walking left) are shown in Fig. 3. (middle) Sample walking left and one-handed wave detection results with foreground clutter in the form of local lighting variation caused by overhead dappled sunlight. (bottom) Sample two-handed wave detection results as the action is performed beside and behind a chain-linked fence. Foreground clutter takes the form of superimposed static structure when the action is behind the fence.

The first example contains foreground clutter in the form of dappled sunlight with the query actions of walking left and one-handed wave. The second example contains foreground clutter in the form of superimposed static structure (i.e., pseudo-transparency) caused by the chain-linked fence and the query action of two-handed wave. The first and second examples contain 365 and 699 frames, resp., with the same spatial resolution of 321×185 pixels. All actions are spotted correctly; there are no false positives.

For the purpose of quantitatively evaluating the proposed approach, action spotting performance was tested on the publicly available CMU action data set [15]. The data set is comprised of five action categories, namely "pick-up", "jumping jacks", "push elevator button", "one-handed wave" and "two-handed wave". The total data set consists of 20 minutes of video containing 109 actions of interest with 14 to 34 testing instances per category performed by three to six subjects. The videos are 160×120 pixels in resolution. In contrast to the widely used KTH [22] and Weizmann [11] data sets, which contain relatively uniform intensity backgrounds, the CMU data set was captured with a handheld camera in crowded environments with moving people and cars in the background. There are large variations in the performance of the target actions, including their distance with respect to the camera.

Results are compared with ground truth labels included with the CMU data set. The labels define the spacetime positions and extents of each action. For each action, a Precision-Recall (P-R) curve is generated by varying the similarity threshold between 0.6 and 0.97: Precision = TP/(TP + FP) and Recall = TP/nP, where TP is the number of true positives, FP is the number of false positives and nP is the total number of positives in the data set. In evaluation, the same testing protocol as in [15] is used. A detected action is considered a true positive if it has a (spacetime) volumetric overlap greater than 50% with a ground truth label. The same action templates from [15] are used, including the provided spacetime binary masks to emphasize the action over the background.
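A sketch of tracing the P-R curve by sweeping the similarity threshold, assuming each detection has already been labelled as a true or false positive via the 50% volumetric overlap test against the ground truth; the handling of an empty detection set is a convention of the sketch.

```python
def pr_curve(scores, is_true_positive, num_ground_truth, thresholds):
    """Trace a precision-recall curve by sweeping the similarity threshold
    (0.6 to 0.97 in the experiments). scores and is_true_positive are
    parallel sequences over all detections returned by the spotter."""
    curve = []
    for tau in thresholds:
        kept = [tp for s, tp in zip(scores, is_true_positive) if s >= tau]
        tp = sum(kept)
        fp = len(kept) - tp
        precision = tp / float(tp + fp) if kept else 1.0  # convention for no detections
        recall = tp / float(num_ground_truth)
        curve.append((precision, recall))
    return curve
```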

Figure 5 shows P-R curves for each action with representative action spotting results. The blue curves correspond to the proposed approach, while the red and green curves are results from two baseline approaches (as reported in [15]), namely parts-based shape plus flow [15], and holistic flow [23], resp. In general, the proposed approach achieves significantly superior performance over both baselines, except in the case of two-handed wave. Two-handed wave is primarily confused with one-handed wave, resulting in a higher false positive rate and thus lower precision; nevertheless, the proposed approach still outperforms holistic flow over most of the P-R plot for this action.

4. Discussion and summary

The main contribution of this paper is the representation of visual spacetime via spatiotemporal orientation distributions for the purpose of action spotting. It has been shown that this tack can accommodate variable appearance of the same action, rapid dynamics, multiple actions in the field-of-view and is robust to scene clutter (both foreground and background), while being amenable to efficient computation.

Figure 5. Precision-Recall curves for the CMU action data set. (top) Precision-Recall plots; blue curves correspond to the proposed approach, and red and green to the baselines proposed in [15] (shape + flow parts-based) and [23] (holistic flow), resp., as reported in [15]. (bottom) Corresponding example action spotting results recovered by the proposed approach.

A current limitation of the proposed approach is the use of a single monolithic template for any given action. This choice limits the ability to generalize across the range of observed variability as an action is depicted in natural conditions. Large geometric spatial (e.g., scale) and temporal (e.g., action execution speed) deformations are not currently handled but may be addressed partly through the use of a multi-scale (spacetime) framework. Additional sources of natural variability include deviations due to local differences in action execution (e.g., performance nuances) and anthropomorphic attributes (e.g., height and shape). Response to these variations motivates future investigation of deformable action templates (e.g., parts-based, cf. [15]), which will allow for flexibility in matching template (sub)components. Also, false detections may be reduced through the integration of complementary cues, e.g., shape. Along these lines, it is interesting to note that the presented empirical results attest that the approach already is robust to a range of deformations between the template and search target (e.g., modest changes in spatial scale, rotation and execution speed as well as individual performance nuances). Such robustness owes to the relatively broad tuning of the oriented energy filters, which discount minor differences in spacetime orientations between template and target.

In summary, this paper has presented an efficient approach to action spotting using a single query template; there is no need for extensive training. The approach is founded on a distributed characterization of visual spacetime in terms of 3D, (x, y, t), spatiotemporal orientation that captures underlying pattern dynamics. Empirical evaluation on a broad set of image sequences, including a quantitative comparison with two baseline approaches on a challenging public data set, demonstrates the potential of the proposed approach.

References

[1] S. Ali, A. Basharat, and M. Shah. Chaotic invariants for human action recognition. In ICCV, 2007.
[2] A. Bhattacharyya. On a measure of divergence between two statistical populations defined by their probability distribution. Bull. Cal. Math. Soc., 35:99-110, 1943.
[3] A. Bobick and J. Davis. The recognition of human movement using temporal templates. PAMI, 23(3):257-267, 2001.
[4] K. Cannons and R. Wildes. Spatiotemporal oriented energy features for visual tracking. In ACCV, pages I:532-543, 2007.
[5] O. Chomat, J. Martin, and J. Crowley. A probabilistic sensor for the perception and the recognition of activities. In ECCV, pages I:487-503, 2000.
[6] K. Derpanis and J. Gryn. Three-dimensional nth derivative of Gaussian separable steerable filters. In ICIP, pages III:553-556, 2005.
[7] K. Derpanis and R. Wildes. Early spatiotemporal grouping with a distributed oriented energy representation. In CVPR, 2009.
[8] P. Dollar, V. Rabaud, G. Cottrell, and S. Belongie. Behavior recognition via sparse spatio-temporal features. In PETS, pages 65-72, 2005.
[9] A. Efros, A. Berg, G. Mori, and J. Malik. Recognizing action at a distance. In ICCV, pages 726-733, 2003.
[10] W. Freeman and E. Adelson. The design and use of steerable filters. PAMI, 13(9):891-906, 1991.
[11] L. Gorelick, M. Blank, E. Shechtman, M. Irani, and R. Basri. Actions as space-time shapes. PAMI, 29(12):2247-2253, 2007.
[12] B. Jähne. Digital Image Processing. Springer, 2005.
[13] H. Jhuang, T. Serre, L. Wolf, and T. Poggio. A biologically inspired system for action recognition. In ICCV, 2007.
[14] Y. Ke, R. Sukthankar, and M. Hebert. Efficient visual event detection using volumetric features. In ICCV, pages I:166-173, 2005.
[15] Y. Ke, R. Sukthankar, and M. Hebert. Event detection in crowded videos. In ICCV, 2007.
[16] J. Liu, J. Luo, and M. Shah. Recognizing realistic actions from videos 'in the wild'. In CVPR, 2009.
[17] J. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. IJCV, 79(3):299-318, 2008.
[18] H. Ning, T. Han, D. Walther, M. Liu, and T. Huang. Hierarchical space-time model enabling efficient search for human actions. T-CirSys, 2009.
[19] D. Ramanan and D. Forsyth. Automatic annotation of everyday movements. In NIPS, 2003.
[20] M. Rodriguez, J. Ahmed, and M. Shah. Action MACH: a spatio-temporal maximum average correlation height filter for action recognition. In CVPR, 2008.
[21] Y. Rubner, J. Puzicha, C. Tomasi, and J. Buhmann. Empirical evaluation of dissimilarity measures for color and texture. CVIU, 84(1):25-43, 2001.
[22] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions. In ICPR, pages III:32-36, 2004.
[23] E. Shechtman and M. Irani. Space-time behavior-based correlation - OR - How to tell if two underlying motion fields are similar without computing them? PAMI, 29(11):2045-2056, 2007.
[24] E. Simoncelli. Distributed Analysis and Representation of Visual Motion. PhD thesis, MIT, 1993.
[25] M. Sizintsev and R. Wildes. Spatiotemporal stereo via spatiotemporal quadric element (stequel) matching. In CVPR, 2009.
[26] P. Turaga, R. Chellappa, V. Subrahmanian, and O. Udrea. Machine recognition of human activities: A survey. T-CirSys, 18(11):1473-1488, 2008.
[27] R. Wildes and J. Bergen. Qualitative spatiotemporal analysis using an oriented energy representation. In ECCV, pages II:768-784, 2000.
[28] Y. Yacoob and M. Black. Parameterized modeling and recognition of activities. CVIU, 73(2):232-247, 1999.
