Derandomization, witnesses for Boolean matrix multiplication and construction of perfect hash functions*

Noga Alon†    Moni Naor‡

To appear in Algorithmica, final version

Abstract

Small sample spaces with almost independent random variables are applied to design efficient sequential deterministic algorithms for two problems. The first algorithm, motivated by the attempt to design efficient algorithms for the All Pairs Shortest Path problem using fast matrix multiplication, solves the problem of computing witnesses for the Boolean product of two matrices. That is, if A and B are two n by n matrices, and C = AB is their Boolean product, the algorithm finds for every entry C_ij = 1 a witness: an index k so that A_ik = B_kj = 1. Its running time exceeds that of computing the product of two n by n matrices with small integer entries by a polylogarithmic factor. The second algorithm is a nearly linear time deterministic procedure for constructing a perfect hash function for a given n-subset of {1,...,m}.

*Part of this paper was presented at the IEEE 33rd Symp. on Foundations of Computer Science.

†Incumbent of the Florence and Ted Baumritter Combinatorics and Computer Science Chair, Department of Mathematics, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv, Israel. Research supported in part by a USA-Israeli BSF grant and by the Fund for Basic Research administered by the Israel Academy of Sciences. e-mail: noga@math.tau.ac.il.

‡Incumbent of the Morris and Rose Goldman Career Development Chair, Department of Applied Math and Computer Science, Weizmann Institute, Rehovot 76100, Israel. Supported by an Alon Fellowship and by a grant from the Israel Science Foundation administered by the Israeli Academy of Sciences. e-mail: naor@wisdom.weizmann.ac.il. Some of this work was done while the author was with the IBM Almaden Research Center.

1 Introduction

In this paper we show how to make two very efficient probabilistic algorithms deterministic at a relatively small degradation in their performance. The random choices made by a probabilistic algorithm naturally define a probability space where each choice corresponds to a random variable. In order to remove randomness from an algorithm one should come up with a way of finding deterministically a successful assignment to these choices.

One approach for achieving it, known as the method of conditional probabilities, is to search the probability space for a good choice by bisecting the probability space at every iteration, fixing an additional choice each time. The value of the next bit is chosen according to some estimator function that should approximate the probability of success given the choices fixed so far. The number of steps is therefore proportional to the number of choices made by the probabilistic algorithm, and each step involves evaluating an estimator, which usually takes time proportional to the input size. Thus the cost of derandomization, though polynomial, can considerably increase the complexity of the algorithm. This approach is taken in [29], [25] (cf. also [6]).

A different approach for finding a good point is to show that the random choices made need not be fully independent, i.e. even if only some limited form of independence is obeyed, the algorithm is still successful. A smaller probability space where the random choices obey this limited independence is constructed. If this space is exhaustively searched, then a good point is found. The complexity is increased by a factor proportional to the size of the space. The size of the space is usually some polynomial in the input size. Thus again this approach suffers from a considerable increase in time. This approach is taken in [17], [1], [15] using probability spaces that are k-wise independent, and in [5], [23] using small bias probability spaces and almost k-wise independence (see the definition below in subsection 1.3).

Our goal in this work is to use these methods without incurring a significant penalty in the run time. We exploit the fact that very small (polylogarithmic) probability spaces exist if one is willing to live with very limited independence. This form of independence is usually too limited to be applicable directly for replacing the random choices in a probabilistic algorithm. Our tactic will be to divide the random choices into a small (logarithmic or polylogarithmic) number of sets of random variables with complete independence between the sets; within each set, however, we will require only very limited independence. The search algorithm finds a good assignment by fixing the sets one by one. At every iteration all the points of a small probability space corresponding to the current set of random variables are examined, and the one that maximizes some estimator function of the probability of success is chosen. Since we have only a few sets of random variables and since each probability space is small, the total work is increased by only a polylogarithmic factor.
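To make the paradigm concrete, here is a schematic sketch (our own, not from the paper) of the search loop it describes; `groups`, `sample_space` and `estimator` are placeholders for the problem-specific ingredients introduced in Sections 2 and 3:

```python
def derandomize(groups, sample_space, estimator):
    """groups: number of independent groups of random choices.
    sample_space: candidate assignments for one group (polylog size).
    estimator: maps a partial assignment to a score (lower = better).
    """
    fixed = []                       # choices fixed so far, group by group
    for _ in range(groups):
        # exhaustively search the small sample space for this group
        best = min(sample_space, key=lambda s: estimator(fixed + [s]))
        fixed.append(best)           # fix the group and move on
    return fixed
```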

We note that the two approaches described above have been combined in a different way previously in [18, 7, 22]. The random variables used there were k-wise independent, resulting in a probability space of size O(n^k). This probability space was then searched using an estimator function in O(k log n) steps. This method does not seem applicable for the problems considered here, since we could not come up with appropriate estimator functions that were efficiently computable.

In the following two subsections we describe the two algorithmic problems for which applying the above mentioned method yields efficient deterministic algorithms: the computation of Boolean matrix multiplication with witnesses and the (deterministic) construction of perfect hash functions. Although the two problems are not related, the algorithms we suggest for both are similar, and are based on the same approach outlined above. In subsection 1.3 we review the probability spaces that have the independence properties used for both applications.

1.1 Witnesses for matrix multiplication

Consider a Boolean matrix multiplication: C = AB, C_ij = ⋁_{k=1}^{n} (A_ik ∧ B_kj). The n³-time method that evaluates these expressions gives, for every i, j for which C_ij = 1, all the k's for which A_ik = B_kj = 1. The subcubic methods, on the other hand (see, e.g., [8]), consider A and B as matrices of integers and do not provide any of these k's. We call a k such that A_ik = B_kj = 1 a witness (for the fact that C_ij = 1). We want to compute, in addition to the matrix C, a matrix of witnesses. When there is more than one witness for a given i and j we are satisfied with one such witness.
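For concreteness, the object we are after can be computed directly in cubic time; the following sketch (our own, with matrices represented as lists of lists and 0-based indices) serves only as a reference specification of the problem:

```python
def witnesses_naive(A, B):
    """Cubic-time reference: for each (i, j) with C_ij = 1 return some
    witness k with A[i][k] = B[k][j] = 1, else None."""
    n = len(A)
    W = [[None] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if A[i][k] == 1 and B[k][j] == 1:
                    W[i][j] = k      # any single witness suffices
                    break
    return W
```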

We use O(n^ω) to denote the running time of some subcubic algorithm for Boolean matrix multiplication. Our algorithm for this problem can be derived from any such algorithm, yielding a corresponding time bound as a function of ω. The best asymptotic bound known at present is the one with the exponent ω < 2.376 and is due to Coppersmith and Winograd [8].

For two functions f(n) and g(n) we let g(n) = Õ(f(n)) denote the statement that g(n) is O(f(n)·(log n)^{O(1)}).

Several researchers (see, e.g., [28], [4]) observed that there is a simple randomized algorithm that computes witnesses in Õ(n^ω) time. In Section 2 we describe a deterministic algorithm for computing the witnesses in Õ(n^ω) time. It is essentially a derandomization of a modified version of the simple randomized algorithm using the approach outlined in the Introduction, i.e. the combination of small sample spaces and the method of conditional probabilities. A different, more complicated algorithm for this problem, whose running time is slightly inferior, i.e. not Õ(n^ω) (but still O(n^{ω+o(1)})), has been found by Galil and Margalit [20], [14].

The main motivation for studying the computation of witnesses for Boolean matrix multiplication is the observation of Galil and Margalit that this problem is crucial for the design of efficient algorithms, based on fast matrix multiplication, for the All-Pairs-Shortest-Path problem for graphs with small integer weights. Efficient algorithms for computing the distances in this way were initiated in [3] and improved (for some special cases) in [13], [28]. The attempt to extend this method for computing the shortest paths as well leads naturally to the above problem, which has already found other (related) applications as well. See [4, 14, 20, 28] for more details.

1.2 Perfect hash functions

For a set S ⊆ {1,...,m} a perfect hash function is a mapping h: {1,...,m} → {1,...,n} which is 1-1 on S. H is an (m,n,k)-family of perfect hash functions if for every S ⊆ {1,...,m} of size k there is an h ∈ H that is perfect for S. We will be interested mainly in the case k = n.


The requirements of a perfect hash function are:

• Succinct representation - the mapping h can be described by a relatively small number of bits.

• Efficient evaluation - given a value x ∈ {1,...,m} and the description of h, there should be an efficient method of computing h(x).

• Efficient construction - given S there should be an efficient way of finding an h ∈ H that is perfect for S.

Perfect hash functions have been investigated extensively (see e.g. [9, 10, 11, 12, 16, 19, 21, 26, 27, 30]). It is known (and not too difficult to show, see [11], [16], [24]) that the minimum possible number of bits required to represent such a mapping is Θ(n + log log m) for all m ≥ 2n.

Fredman, Komlós and Szemerédi [12] developed a method for constructing perfect hash functions. Given a set S, their method can supply a mapping with the required properties in almost linear expected randomized running time. Deterministically, however, they only describe a variant of their algorithm that works in worst-case running time O(n³ log m). In Section 3 we describe a construction of perfect hash functions and a deterministic algorithm that, for a given S, finds a mapping with the above properties in time O(n log m log⁴ n). Note that the size of the input is Θ(n log m) and hence this algorithm is optimal, up to a polylogarithmic factor. In case m is polynomial in n, the representation of the mapping h constructed requires O(n) bits. Given x, computing h(x) takes O(1) operations. In the general case, the mapping h requires O(n + log n log log m) bits. The time to evaluate h(x) depends on the computational model (i.e. what operations can be performed in one step): it is either O(log m/log n) in a weak model or O(1) in a strong one (see more about the model in Section 3).

1.3 Small bias probability spaces

Let Ω be a probability space with n random variables x_1, x_2, ..., x_n. We say that Ω is a c-wise ε-bias probability space if for any nonempty subset S of x_1, x_2, ..., x_n of size at most c we have

|Prob[⊕_{i∈S} x_i = 0] − Prob[⊕_{i∈S} x_i = 1]| < ε,

where ⊕ denotes sum modulo 2. The property of a c-wise ε-bias probability space that we use is that for any subset S of x_1, x_2, ..., x_n of size i ≤ c the probability that the random variables of S attain a certain configuration deviates from 1/2^i by at most ε. Therefore c-wise ε-bias probability spaces are described as almost c-wise independent.
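As an illustration of the definition, the following brute-force checker (our own sketch, feasible only for toy spaces) computes the largest bias over all nonempty index sets of size at most c; the space is c-wise ε-bias exactly when the returned value is below ε:

```python
from itertools import combinations

def max_bias(space, c):
    """space: list of equally likely {0,1}-vectors, one coordinate
    per random variable. Returns the largest
    |Pr[XOR = 0] - Pr[XOR = 1]| over index sets of size <= c."""
    n = len(space[0])
    worst = 0.0
    for size in range(1, c + 1):
        for S in combinations(range(n), size):
            # number of sample points whose XOR over S equals 1
            xor_one = sum(sum(pt[i] for i in S) % 2 for pt in space)
            bias = abs(len(space) - 2 * xor_one) / len(space)
            worst = max(worst, bias)
    return worst
```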

The known constructions of these probability spaces are of size polynomial in c, 1/ε and log n (often described by saying that the number of random bits required to sample from them is O(log 1/ε + log c + log log n)). Therefore if 1/ε is logarithmic in n and c is at most logarithmic in n, the size of the probability space is still polylogarithmic in n. To be more precise, the construction of [23], as optimized in [2], yields a probability space of size O(c log n/ε³), and the ones in [5] yield probability spaces of size O(c² log² n/ε²).


2 Boolean matrix multiplication with witnesses

All the matrices in this section are n by n matrices, unless otherwise specified. If M is such a matrix, we let M_ij denote the entry in its i-th row and j-th column. Let A and B be two matrices with {0,1} entries, and let C be their product over the integers. Our objective is to find witnesses for all the positive entries of C, i.e., for each entry C_ij > 0 of C we wish to find a k such that A_ik = B_kj = 1. This is clearly equivalent to the problem of finding witnesses in the Boolean case. As observed by several researchers there is a simple randomized algorithm that solves this problem in expected running time Õ(n^ω). Here we consider deterministic algorithms for the problem. Our algorithm, described in the next two subsections, is a derandomized version of a modification of the simple randomized solution. Its running time is Õ(n^ω). The analysis of the algorithm is presented in subsection 2.3, where it is also shown how to replace the probabilistic steps with deterministic ones.

2.1 Outline and intuition of the algorithm

The starting observation is that finding witnesses for entries which are 1 is easy: if E and F are two matrices with {0,1} entries and G = EF is the result of their multiplication over the integers, then one multiplication of matrices with entries of size at most n suffices for finding witnesses for all the entries of G which are precisely 1. Indeed, simply replace every 1-entry in the k-th row of F by k (for all 1 ≤ k ≤ n) to get a matrix F′ and compute G′ = EF′. Now observe that if G_ij = 1 and G′_ij = k, then k is a witness for G_ij.
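A minimal sketch of this observation (our own, using 0-based indices, so the 1-entries of row k are replaced by k + 1 to keep them nonzero; a naive cubic product stands in for fast matrix multiplication):

```python
def witnesses_where_one(E, F):
    """Find witnesses for all entries of G = EF (integer product)
    that equal exactly 1."""
    n = len(E)
    mult = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(n))
                          for j in range(n)] for i in range(n)]
    G = mult(E, F)
    # F': every 1-entry in row k becomes k + 1
    Fp = [[(k + 1) * F[k][j] for j in range(n)] for k in range(n)]
    Gp = mult(E, Fp)
    # where G_ij = 1, the unique contributing k satisfies G'_ij = k + 1
    return {(i, j): Gp[i][j] - 1
            for i in range(n) for j in range(n) if G[i][j] == 1}
```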

The idea (of both the randomized and deterministic algorithms) is to dilute F gradually, thus making the entries of G go down to 0, however not before passing through 1. Therefore if G_ij = ℓ and every entry in F is zeroed with probability roughly 1 − 1/ℓ (i.e. kept with probability 1/ℓ), then G_ij becomes 1 with probability bounded away from zero. A way of achieving this simultaneously for all entries is to work in phases, where in each phase every '1' entry of F is zeroed with probability 1/2. At each phase we also find witnesses for all entries of G that became 1. If G_ij = ℓ then for the (i,j) entry of EF after log ℓ such phases there is a constant probability of turning G_ij into a 1 and thus enabling the discovery of a witness. By repeating this process log n times, there is a high probability of discovering all the witnesses.
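One possible rendering of this randomized scheme, reusing witnesses_where_one from the previous sketch (our own illustration with naive products, so it demonstrates correctness only, not the Õ(n^ω) running time; the trial count is an assumption standing in for "O(log n) repetitions"):

```python
import random

def witnesses_randomized(A, B, trials=None):
    """Repeatedly dilute F = B by halving phases and collect witnesses
    whenever an entry of the product passes through the value 1."""
    n = len(A)
    phases = max(1, n.bit_length())      # about log n halving phases
    trials = trials or 4 * phases        # repetitions for high probability
    W = {}
    for _ in range(trials):
        F = [row[:] for row in B]
        for _ in range(phases):
            W.update((ij, k) for ij, k in witnesses_where_one(A, F).items()
                     if ij not in W)
            # keep each surviving entry with probability 1/2, zero otherwise
            F = [[b if random.random() < 0.5 else 0 for b in row]
                 for row in F]
        W.update((ij, k) for ij, k in witnesses_where_one(A, F).items()
                 if ij not in W)
    return W
```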

The choices we must make in the execution of the algorithm are which entries of F to zero at which phase. In the randomized algorithm all choices are independent, and thus the size of the probability space is exponential in O(n log² n). In order to remove the randomness from the witness finding algorithm we follow the paradigm outlined in the Introduction. Random choices corresponding to different phases remain independent; however, the choices made within a phase will be highly dependent. We must also find a good estimate of progress. The key point is that for our estimate of progress it is sufficient that the choices within a phase be made according to a c-wise ε-bias sample space for c and 1/ε that are logarithmic in n. Our notion of progress is defined by two "contradicting" conditions: first, the total sum of entries in G must go down significantly at every round (by at least a constant fraction). This implies that in O(log n) rounds G vanishes. The second condition is that we do not lose too many entries of G, where by lose we mean that they go from a large value to 0 without passing through 1.


The second condition turns out to be too strong. We relax it by specifying some bound c (not coincidentally, the same c as above) such that we would like every entry to pass through the range {1,...,c} before vanishing. We show how to find a witness in this case as well. The fraction of entries that disobey the second condition should be small enough to assure that at least a constant fraction of the entries do not skip the desired range. The set of good assignments to the choices of a phase is obtained by an exhaustive search among all the points of the sample space for an assignment that progresses nicely. This is repeated for all phases and the whole process is repeated O(log n) times until all entries have a witness.

2.2 Detailed description of the algorithm

Define c = ⌈log log n + 9⌉ and α = 8/2^c. For two matrices E and F with {0,1} entries define G = E ∧ F by G_ij = E_ij ∧ F_ij.

Besides A, B and C = AB our algorithm employs two sequences of {0,1} matrices: R_1, R_2, ..., R_{t+1} and D_1, D_2, ..., D_{t+1}, where t = ⌈1 + 3 log_{4/3} n⌉. For matrices R and R′ we say that R′ is a dilution of R if for every 1 ≤ j,k ≤ n we have R_{j,k} ≥ R′_{j,k}. The sequence R_1, R_2, ..., R_{t+1} is monotonically decreasing, i.e. for every 1 ≤ i ≤ t, R_{i+1} is a dilution of R_i. We now describe the algorithm; the way to perform steps 4b and 4c will be described later. The definition of a good dilution is given below.

• While not all witnesses are known:

1. Let L be the set of all positive entries of C for which there are no known witnesses.

2. Let R_1 be the all-1 matrix.

3. Let D_1 ← A·(B ∧ R_1).

4. For i = 1 to t = ⌈1 + 3 log_{4/3} n⌉, perform the following:

(a) Let L′ be the set of all non-zero entries of D_i in L which are at most c.

(b) Find witnesses for all entries in L′.

(c) R_{i+1} ← a good dilution of R_i (see the definition of "good" below).

(d) D_{i+1} ← A·(B ∧ R_{i+1}) (the matrix multiplication is over the integers).

A matrix R_{i+1} is good with respect to R_i (in step 4c above) if the following two conditions hold:

a) The total sum of the entries of D_{i+1} = A·(B ∧ R_{i+1}) is at most 3/4 of the total sum of the entries of D_i = A·(B ∧ R_i). (Observe that this guarantees that after 1 + 3 log_{4/3} n iterations of good R_i's all the entries of D will vanish.)

b) The fraction of entries in L that are 0 in D_{i+1}, among those larger than c in D_i, is at most α.
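For illustration, checking whether a candidate dilution is good amounts to one integer matrix product plus O(n²) bookkeeping; a naive sketch of the two conditions (our own, with cubic products for clarity):

```python
def is_good(A, B, R_next, R_cur, L, c, alpha):
    """Check conditions (a) and (b) for a candidate dilution R_next
    of R_cur; L is a collection of index pairs (i, j)."""
    n = len(A)
    prod = lambda R: [[sum(A[i][k] * (B[k][j] & R[k][j]) for k in range(n))
                       for j in range(n)] for i in range(n)]
    D_cur, D_next = prod(R_cur), prod(R_next)
    # (a) total mass drops by a factor of at least 3/4
    if sum(map(sum, D_next)) > 0.75 * sum(map(sum, D_cur)):
        return False
    # (b) few large entries of L are lost (jump from > c straight to 0)
    large = [(i, j) for (i, j) in L if D_cur[i][j] > c]
    lost = sum(1 for (i, j) in large if D_next[i][j] == 0)
    return not large or lost <= alpha * len(large)
```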


2.3 Analysis of the algorithm

We first analyse one iteration of a randomized version of the algorithm (Lemma 1), then analyse it when the sample space has a small bias (Lemma 2) and finally we show that this suffices for achieving a deterministic algorithm.

Lemma 1 For any 1 ≤ i ≤ t, suppose that R_{i+1} ← R_i ∧ S in step 4c where S is a random {0,1} matrix; then R_{i+1} is good with probability at least 1/6.

The lemma follows from the following three claims:

Claim 1 The probability that the sum of entries of D_{i+1} is at most 3/4 the sum of entries of D_i is at least 1/3.

To see this, observe that the expected sum of entries of D_{i+1} is 1/2 the sum of entries of D_i, since for every 1 ≤ j,k,l ≤ n such that A_{jk} = (B ∧ R_i)_{kl} = 1 the probability that (B ∧ R_{i+1})_{kl} = 1 is exactly 1/2. The claim then follows from Markov's Inequality. □

Claim 2 The probability that a fixed entry of D_i which is at least c drops down to 0 in D_{i+1} is at most 1/2^c.

This is obvious. Observe that the claim holds even if we only assume that every c entries of S are independent. □

Claim 3 The probability that more than a fraction α of the entries in L that had a value at least c in D_i drop to 0 in D_{i+1} is at most (1/2^c)·(1/α) = 1/8.

This follows from Claim 2 by Markov's Inequality. □

Since Claims 1 and 3 describe the event we are interested in and 1/3 − 1/8 > 1/6, the lemma follows. □

Define ε = 1/2^{c+1}. The crucial point is to observe that the proof of the above lemma still holds, with almost no change, if the matrix S is not totally random but its entries are chosen from a c-wise ε-dependent distribution in the sense of [23], [5]. Recall that if m random variables whose range is {0,1} are c-wise ε-dependent then every subset of j ≤ c of them attains each of the possible 2^j configurations of 0 and 1 with probability that deviates from 1/2^j by at most ε.

Lemma 2 If R_{i+1} ← R_i ∧ S in step 4c where the entries of S are chosen as n² random variables that are c-wise ε-dependent, then R_{i+1} is good with probability at least 1/12 − 2ε.

We note that in fact it is sufficient to choose only one column from a c-wise ε-dependent sample space and copy it n times. However, this changes the size of the sample space by a constant factor only. The proof of Lemma 2 is by the following modified three claims, whose proofs are analogous to those of the corresponding previous ones.

Claim 4 The probability that the sum of entries of D_{i+1} is at most 3/4 the sum of entries of D_i is at least 1/3 − 2ε. □


Claim 5 The probability that a fixed entry of D_i which is at least c drops down to 0 in D_{i+1} is at most 1/2^c + ε. □

Claim 6 The probability that more than a fraction α of the entries in L that had a value at least c in D_i drop to 0 in D_{i+1} is at most (1/2^c + ε)·(1/α) < (2/2^c)·(1/α) = 1/4. □

Since Claims 4 and 6 describe the event we are interested in and 1/3 − 2ε − 1/4 = 1/12 − 2ε, the lemma follows. □

As shown in [23] and in [5] there are explicit probability spaces with n² random variables which are c-wise ε-dependent, whose size is

(log(n)·c·(1/ε))^{2+o(1)},

which is less than, e.g., O((log n)⁵). Moreover, these spaces can be easily constructed in time negligible with respect to the total running time of our algorithm.

Suppose that in step 4c all the matrices S defined by such a probability space are searched, until a good one is found. Checking whether a matrix is good requires only a matrix multiplication plus O(n²) operations. Therefore the inner loop (starting at step 4) takes polynomial in log n times the matrix multiplication time.

Executing Step 4b: It is important to note that during the performance of step 4c, while considering all possible matrices S provided by our distribution, we can accomplish step 4b (of the next iteration) as well. To see this we need

Claim 7 If R_{i+1} ← R_i ∧ S in step 4c where the entries of S are chosen as n² random variables that are c-wise ε-dependent, then each entry in L′ has a positive probability of being precisely 1 in D_{i+1}.

This follows since if S is chosen uniformly at random, then the probability that an entry in L′ is precisely 1 in D_{i+1} is at least c/2^c, and this event depends on at most c variables. □

To apply the claim, recall the observation at the beginning of subsection 2.1, that we can find witnesses for entries that are at most 1. If we replace each matrix multiplication in the search for a good S by two matrix multiplications as described in that observation, we complete steps 4b and 4c together.

Analysis of the outer loop: In every iteration of the inner loop (step 4) at most an α fraction of the entries of L are "thrown" (i.e. their witness will not be found in this iteration of the outer loop). Therefore at least a 1 − (1 + 3 log_{4/3} n)α fraction of the entries of D in L will not be thrown during the completion of these iterations. For those entries, which are at least 1/2 of the entries in L, a witness is found. Therefore, only O(log n) iterations of the outer loop are required, implying the desired Õ(n^ω) total running time.

We have thus proved the following:

Theorem 1 The witnesses for the Boolean multiplication of two n by n matrices can be found in deterministic Õ(n^ω) time.


3 Efficient deterministic construction of perfect hash functions

In this section we describe an efficient method of constructing perfect hash functions. Recall that for a set S ⊆ {1,...,m} a perfect hash function is a mapping of {1,...,m} onto {1,...,n} which is 1-1 on S.

We are given a set of n elements out of {1,...,m} and the goal is to build a perfect hash function from {1,...,m} to a range of size O(n) with the properties listed in Section 1.2. (Given a function that maps to a range of size O(n), another function which maps to a range of size n can be constructed using the technique in [12] or in [10].)

Outline of the FKS scheme: Our scheme has the same structure as the one of Fredman, Komlós and Szemerédi [12], which we now review. The FKS scheme consists of two levels. The first level function, denoted by h, maps the elements of {1,...,m} into a range of size O(n); all the elements that were sent to the same location i are further hashed using a second level hash function h_i. The second level hash function h_i should be 1-1 on the subset that was hashed to location i by h. For every i in the range of h we allocate as much space as the range of h_i, which we denote by r_i. The perfect hash function is now defined as follows: if x ∈ {1,...,m} is mapped to i by h, then the scheme maps x to h_i(x) + Σ_{1≤j<i} r_j.

Let s_i(h) = |{x | x ∈ S and h(x) = i}|, i.e. s_i = s_i(h) denotes the number of elements mapped to i. The property we require h to satisfy is that Σ_{i=1}^{n} (s_i(h) choose 2) should be O(n). The size of the range of h_i will be O((s_i choose 2)). The functions suggested by [12] for both levels were of the form (k·x mod p) mod r where p is an appropriate prime, r is n for the first level and s_i² for the second level.
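A sketch of evaluating such a two-level scheme (names and representation are our own assumptions; tables[i] holds the parameters of h_i and offsets[i] holds the partial sum r_1 + ... + r_{i-1} that places bucket i's region):

```python
def fks_eval(x, h, tables, offsets):
    """Evaluate a two-level FKS-style scheme: h maps x to a bucket i,
    tables[i] = (k_i, p_i, r_i) encodes h_i(x) = (k_i*x % p_i) % r_i."""
    i = h(x)
    k, p, r = tables[i]
    return offsets[i] + (k * x % p) % r
```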

Overview of the new scheme: The scheme consists of more than one level of hashing. The first level is a hash function which is used to partition the elements according to their hash value, where S_i is the set of all elements with hash value i. Then, each S_i is going to be mapped to a separate final region, say R_i, where the size of R_i depends on |S_i|. This first hash function is of the form h(x) = Ax where A is a log(n) × log(m) matrix over GF[2] and x is treated as a vector of length log m over GF[2]. The mapping of S_i into R_i consists of either one more level of hashing or two more levels, depending on how large S_i is. If S_i is sufficiently small, then there is only one more hash function that maps S_i into R_i, and it is of the same form as in [12]. If S_i is large enough, then there are two more levels of hashing to map S_i into R_i, where the bottom level is of the same form as in [12] but the upper level (which we will refer to as the intermediate) is of the form h(x) = Ax where A is a matrix over GF[2] of appropriate dimensions.

Our main difficulty is coming up with the first level hash function h. Given the proper h we can allocate relatively long time for finding the second and third level functions: even if constructing a good (i.e. 1-1) h_i takes time proportional to O(s_i²), then the total amount of work in the second step would still be linear. In fact, finding a perfect hash function of the form (k·x mod p) mod r requires time proportional to s_i³ log m. For the sake of completeness we outline in Section 3.2 how to achieve this.


Finding the top level hash function h and the intermediate level h_i's is done using the approach outlined in the Introduction to the paper. Instead of choosing h from a collection at random we select it by considering it as a concatenation of one-bit functions and fixing each one in turn. We must show that the one-bit functions can be chosen from a very small collection (defined by a small bias sample space) and that there is a good estimator of progress (which will be the number of pairs that must be separated).

Model: Since we are interested in fast on-line evaluation of the constructed perfect hash function we must specify the computational power of the evaluator. The weakest model we consider only assumes that the evaluator can access in one operation the memory that stores the description of the perfect hash function and retrieve a word of width at most log n. It needs only very simple arithmetic, basically addition. Note that even in this weak model we can perform a lot of computation in O(1) time using pre-stored tables of size O(n), as we can see in the following example.

Consider the function f_r(x) = r·x where r and x are in {0,1}^{log m} and f_r computes their inner product over GF[2]. Partition r into k parts r_1, r_2, ..., r_k, each of length log(n/log m), so that k = O(log m/log n). For each r_j arrange a table of size n/log m such that for 0 ≤ y < n/log m, entry y of the table holds the inner product of r_j and y over GF[2]. Computing f_r(x) then requires k operations: access the tables at entries x_1, x_2, ..., x_k and XOR the results. If m is polynomial in n then this is O(1) operations and in general it takes O(log m/log n) time. This example is important to us, since the top level hash function is of the form h(x) = Ax where the multiplication is over GF[2].
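A sketch of this table-lookup evaluation (our own illustration), with integers standing in for bit vectors and part_bits playing the role of the part length; r_parts must list the parts of r from the least significant end, matching how f_r splits x:

```python
def build_tables(r_parts, part_bits):
    """Precompute, for each part r_j of r, a table whose entry y is
    the GF(2) inner product of r_j and y (the parity of r_j & y)."""
    return [[bin(rj & y).count('1') % 2 for y in range(1 << part_bits)]
            for rj in r_parts]

def f_r(x, tables, part_bits):
    """Compute the inner product r.x over GF(2) with one table access
    per part of x, XORing the partial parities."""
    mask, acc = (1 << part_bits) - 1, 0
    for tbl in tables:
        acc ^= tbl[x & mask]   # parity contributed by this part
        x >>= part_bits
    return acc
```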

A stronger model is to assume that any operation on words of size O(log m) takes constant time. Thus we count only accesses to the memory (which may be interesting sometimes).

3.1 First level hash function

To find the first level hash function we solve the following problem for all t in the range {1,...,log n}: given a set S ⊆ {1,...,m} of size n, construct a function h: {1,...,m} → {1,...,2^t} such that if s_i(h) = |{x | x ∈ S and h(x) = i}| then Σ_{i=1}^{2^t} (s_i(h) choose 2) ≤ q = e²·(n choose 2)/2^t. Note that q is only a constant factor larger than the expected value of Σ_i (s_i choose 2) in case h is chosen at random from all functions {1,...,m} → {1,...,2^t}.

We find h in a step by step manner, forming it as a concatenation of one-bit functions f_1, f_2, ..., f_t where f_j: {1,...,m} → {0,1}. After we have decided on f_1, f_2, ..., f_j we determine f_{j+1} using an estimator function which we try to reduce. The estimator which we use is

P_j(f_1, f_2, ..., f_j) = Σ_{i∈{0,1}^j} (s_i(f_1, f_2, ..., f_j) choose 2)

where s_i(f_1, f_2, ..., f_j) = |S_i(f_1, ..., f_j)| and S_i(f_1, ..., f_j) = {x | x ∈ S and f_1 f_2 ... f_j(x) = i} is the set of all elements that were mapped by the first j functions to i. The motivation for choosing this estimator is the observation that P_j(f_1, ..., f_j)/2^{t−j} is the conditional expectation of the number of pairs mapped to the same point by h, given the chosen f_1, ..., f_j and assuming the rest of the bit functions will be chosen randomly.


P_0 is (n choose 2) and if the (j+1)-st function is random, then

E[P_{j+1}(f_1, f_2, ..., f_{j+1}) | f_1, f_2, ..., f_j] = (1/2)·P_j(f_1, f_2, ..., f_j),

since the expected contribution from each pair of elements that have not been separated so far is 1/2.

What should we do in order to reduce the randomness and yet get that the expectation of P_{j+1}(f_1, f_2, ..., f_{j+1}) is at most half of P_j(f_1, f_2, ..., f_j)? It is enough to have that for any two distinct x, y ∈ {1,...,m} the probability that f_{j+1}(x) = f_{j+1}(y) is at most 1/2. This is true if f_j is selected by choosing a random vector r ∈ {0,1}^{log m} and then setting f_j(x) to be the inner product modulo 2 of x and r (x is treated as a vector in {0,1}^{log m}). Searching among all such vectors for a "good" one requires m tests, far more than we are willing to spend. Instead, we use a small collection of vectors in {0,1}^{log m} which (almost) preserves this property.

Suppose that f_j is chosen from some collection F. If the probability that f_j(x) and f_j(y) are equal is at most 1/2 + ε then E[P_{j+1}(f_1, f_2, ..., f_{j+1})] is at most (1/2 + ε)·P_j(f_1, f_2, ..., f_j), and we know that there must be some function f ∈ F such that if we set f_{j+1} to be f, then we get that P_{j+1}(f_1, f_2, ..., f_{j+1}) ≤ (1/2 + ε)·P_j(f_1, f_2, ..., f_j). If we let F be the set of points of a sample space with any pairwise ε-dependent distribution on m variables, then a randomly chosen f from F satisfies the above requirement. (An equivalent description of the properties of F is to say that we let F be the set of functions corresponding to computing inner products with the columns of the generating matrix of a linear error correcting code over GF[2] of dimension log m, length |F| and distance at least (1/2 − ε)|F|. This is true since the requirement here is only almost pairwise independence.)

As mentioned in subsection 1.3, there are explicit collections F as above of size |F| ≤ O(log(m)/ε³) (using the construction of [2]), and somewhat simpler constructions of size O(log²(m)/ε²) (given in [5]). Searching all of F is therefore possible in polylogarithmic time. By maintaining the sets S_i(f_1, f_2, ..., f_j) we can check in O(n) time (when m is polynomial in n) whether a given f ∈ F is good (i.e., achieves P_{j+1}(f_1, f_2, ..., f_{j+1}) that does not exceed (1/2 + ε)·P_j(f_1, f_2, ..., f_j)): we simply go over these sets and examine into what size subsets they are split by introducing f as f_{j+1}.

The procedure is therefore:

• Set ε = 1/t and find a collection F of m random variables that are pairwise ε-bias with |F| ≤ O(log(m)/ε³).

• For j = 1 to t:

1. For all f ∈ F compute P(f) = P_j(f_1, f_2, ..., f_{j−1}, f).

2. Choose f_j as the f with the smallest P(f).

• Set h = f_1, f_2, ..., f_t.

We know that the choice at step 2 implies that

P_j(f_1, f_2, ..., f_j) ≤ (1/2 + ε)·P_{j−1}(f_1, f_2, ..., f_{j−1}).
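A compact sketch of this greedy construction (our own; for clarity the estimator recomputes the buckets from scratch for every candidate, whereas the text maintains the sets S_i(f_1,...,f_j) incrementally to get the stated running time):

```python
def build_h(S, F, t):
    """Greedy bit-by-bit construction of the first level function.
    F is a small collection of one-bit functions f: {1,...,m} -> {0,1}
    (the points of a pairwise eps-bias space); the estimator P counts
    pairs that are still together. Returns the chosen bit functions."""
    def P(bits):           # sum over buckets of (size choose 2)
        buckets = {}
        for x in S:
            key = tuple(f(x) for f in bits)
            buckets[key] = buckets.get(key, 0) + 1
        return sum(s * (s - 1) // 2 for s in buckets.values())
    chosen = []
    for _ in range(t):
        best = min(F, key=lambda f: P(chosen + [f]))   # exhaustive search
        chosen.append(best)
    return chosen          # h(x) = (f_1(x), ..., f_t(x))
```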


Therefore

P(h) = P(f_1, f_2, ..., f_t) ≤ (1/2 + ε)^t · P_0 = (1/2 + ε)^t · (n choose 2) = (1 + 2ε)^t · 2^{−t} · (n choose 2) = (1 + 2/t)^t · 2^{−t} · (n choose 2) ≤ e²·(n choose 2)/2^t.

The total amount of work is O(t·|F|·n). (The time for constructing the sample space F is negligible compared with the time to find h.) Since in our case we can choose t = log n and |F| = O(t³ log m) = O(log m log³ n), this gives a total running time of O(n log m log⁴ n). For these parameters we have that Σ_{i=1}^{n} s_i²(h) ≤ O(n).

It is worth noting that by precomputing and storing linear sized tables (as illustrated in the model description) we can make the computation of h constant time under the strictest definition (i.e. the weaker model), as long as m is polynomial in n. For general m the time is O(log m/log n).

3.2 Resolving collisions of S_i(h)

We now turn to the question of resolving the collisions of S_i(h). Suppose that we attempt to resolve the collisions of S_i(h) using a second level à la Fredman, Komlós and Szemerédi [12], as briefly explained below. First observe that for any set S of k elements in {1,...,m} there is a prime p ≤ k² log m so that for any two distinct x, y ∈ S, x (mod p) ≠ y (mod p). Indeed, this follows from the fact that every prime that does not satisfy the above property divides the product Π_{x,y∈S, x<y}(x − y) < m^{k²}, and such a product has fewer than k² log m prime divisors. Hence a suitable prime p_i for S_i(h) can be found by trying the candidates below s_i²(h) log m one by one, checking each in time proportional to s_i(h). Therefore the total time is at most s_i²(h)·log m·s_i(h) = s_i³(h) log m. Given p_i, we need to find k_i ≤ p_i such that the function h_i(x) = (k_i·x mod p_i) mod s_i²(h) is perfect on S_i(h) (we are assured of its existence, since k·(x − y) mod p_i is uniformly distributed if k is chosen at random from {0,...,p_i − 1} and x ≠ y). Again, this does not take more time than p_i·s_i(h) ≤ s_i³(h) log m. Therefore the total amount of work is

Σ_i s_i³(h) log(m) ≤ log(m)·(Σ_i s_i²(h))^{1.5}    (1)

which is at most O(n^{1.5} log m) since Σ_i s_i²(h) ≤ O(n).
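The search just described can be rendered directly as follows (our own sketch; the bound s²·log m is approximated by s²·bit_length(m), and the existence of p and k within these ranges is what the argument above guarantees):

```python
def find_second_level(Si, m):
    """Search for FKS-style parameters: a prime p on which the elements
    of Si are distinct, then k with x -> (k*x % p) % r 1-1 on Si,
    where r = |Si|**2."""
    def is_prime(q):
        return q > 1 and all(q % d for d in range(2, int(q ** 0.5) + 1))
    s = len(Si)
    bound = max(3, s * s * max(1, m.bit_length()))   # ~ s^2 log m
    p = next(q for q in range(2, bound + 1)
             if is_prime(q) and len({x % q for x in Si}) == s)
    r = s * s
    k = next(k for k in range(1, p)
             if len({(k * x % p) % r for x in Si}) == s)
    return p, k, r
```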

As for the length of the representation, for every i we need O(log s_i + log log m) bits, making it O(n log n + n log log m) bits altogether. However Schmidt and Siegel [27] have found a way to amortize the cost. They choose the functions so that representing them requires O(n + log log m) bits only. We briefly describe this method: for each S_i(h) at least half the primes p ≤ s_i²(h) log m are good (i.e. 1-1 on S_i(h)) and given p_i at least half the k_i are good. Therefore, many S_i(h) can have the same p and k. We construct a collection of functions of the form (kx mod p) mod s² in the following way: we partition the S_i(h)'s into (at most) (1/2) log n sets such that the j-th set has all the i for which 2^j ≤ s_i(h) ≤ 2^{j+1} − 1. For each 1 ≤ j ≤ (1/2) log n find a p and a k that are good for 1/4 of the S_i's of the j-th set, then one that is good for 1/4 of the remaining members and so on. The size of the collection is O(log² n) and for every S_i(h) at least one function in the collection is perfect for S_i(h). Furthermore, because of the way the collection was constructed, we can encode for every i which hash function should be used using only O(n) bits by giving the more popular functions shorter codes using, say, Huffman coding. The additional time this coding requires is at most O(Σ_i s_i²(h) log m), which is O(n log m).

The resulting total time, O(n^{1.5} log m), is larger than we are aiming for. However, notice that the small s_i(h)'s do not contribute much to the excess. We set (somewhat arbitrarily) a threshold at log n and call those i's such that s_i(h) ≤ log n small and the remaining i's large. For any small i we choose h_i as described in the preceding paragraph. The total amount of work this takes is

O(Σ_{s_i ≤ log n} s_i³ log m) ≤ O(Σ s_i²)·log m·log n = O(n log n log m).

The large i's are those that contribute the most to (1). For them we employ a two level scheme, but note that since s_i² is O(n) our problem is easier than the original problem. Instead of mapping S_i to a range of size O(s_i), we can map it to an exclusive range of size s_i² without violating the condition that the total range size be O(n). Thus we have a relaxed version of the original problem, since the range can be much larger than the set for which we are resolving the collisions.

Note also that there are at most O(n/log² n) large i's, since

O(n) ≥ Σ s_i² ≥ Σ_{large i} s_i² ≥ Σ_{large i} log² n = (# of large i)·log² n.

This means that we have some flexibility in the size of the representation as well (which is significant, since there is a log log n lower bound on the size of the description of h_i).

We now describe in detail how to deal with the large i's. First, h_i is chosen by a method similar to that by which h was chosen, i.e., h_i is composed in a step by step manner as the concatenation of the one-bit functions g_1, g_2, ...

The estimator we use is, as in the construction of h,

P^i_k(g_1, g_2, ..., g_k) = Σ_{j∈{0,1}^k} (s_{ij}(g_1, g_2, ..., g_k) choose 2)

where s_{ij}(g_1, g_2, ..., g_k) = |S_{ij}(g_1, ..., g_k)| and S_{ij}(g_1, ..., g_k) = {x | x ∈ S_i(h) and g_1 g_2 ... g_k(x) = j}. The procedure is now:


• Set δ = (2^{1.25/2} − 1)/2 and t_i = 2 log s_i(h). (The choice of δ guarantees that (1 + 2δ)^{2 log s_i} will be at most s_i^{1.25}.) Find a sample space G of m random variables that are pairwise δ-bias with |G| ≤ O(log(m)/δ³) (here we should use the construction in [2]).

• For k = 1 to t_i:

1. For all g ∈ G compute P^i(g) = P^i_k(g^i_1, g^i_2, ..., g^i_{k−1}, g).

2. Choose g^i_k as the g with the smallest P^i(g).

• Set h_i = g^i_1, g^i_2, ..., g^i_{t_i}.

As before, we are assured that at the end of the procedure

Σ_j (s_{ij}(h_i) choose 2) = P^i(h_i) = P^i(g^i_1, g^i_2, ..., g^i_{t_i}) ≤ (1/2 + δ)^{t_i}·P^i_0 = (1/2 + δ)^{t_i}·(s_i(h) choose 2) = (1 + 2δ)^{t_i}·2^{−t_i}·(s_i(h) choose 2) = (1 + 2δ)^{2 log s_i(h)}·2^{−2 log s_i(h)}·(s_i(h) choose 2) ≤ s_i^{1.25}/2.

The amount of work spent constructing h_i is O(s_i log m log n) and therefore the amount of work spent constructing all the h_i's is O(n log n log m).

The third level hash functions h_{ij} are chosen by the method described above for the functions h_i corresponding to small s_i. The total amount of work is

O(Σ_j s_{ij}³ log m) ≤ O(log m·(Σ_j s_{ij}²)^{1.5}) ≤ O((s_i^{1.25})^{1.5} log m) ≤ O(s_i²(h) log m).

Since Σ_i s_i²(h) ≤ O(n) we conclude that the total amount of work for the computation of all the third level functions on the large s_i(h) is at most O(n log m).

Note that since δ is fixed the size of G is O(log m). The description of h_i is simply a subset of the members of G. Therefore, as with the first level hash function, we have that by precomputing and storing linear sized tables (the same tables for all the h_i's!) we can make the computation of h_i constant time under the strictest definition, as long as m is polynomial in n.

The length of the description of the function (including all the levels) is the sum of:

1. The number of bits needed to describe h, which is log n·log m (it can be compressed to log n·log |F|).


2. The number of bits to describe the second and third level hash functions: for the small i's representing h_i requires O(log log m + log s_i(h)) bits. However, using the amortization, all the h_i's can be represented by O(n + log log m) bits. For the large i's, for which s_i(h) > log n, first recall that there can be at most n/log² n of them. Also, the description of each h_i is simply that of a subset of G, which can be described by O(log m) bits. It follows that for the second level these h_i's require at most O((n/log² n)·log m) bits. For a fixed i, all the third level functions h_{ij} can easily be represented by O(s_i²·(log s_i + log log m)) bits. However this can be reduced to O(s_i² + log log m) via amortization. Since Σ_i s_i² is O(n), all the third level hash functions together require at most O((n/log² n)·log log m + n) bits.

3. Some bookkeeping information, e.g. the partial sums Σ_{1≤j<i} r_j, which can be stored using O(n) bits (see e.g. [10]).

4. Tables to allow fast computation of h and the intermediate h_i's - O(n) bits.

For m = n^{O(1)} the total number of bits is therefore linear in n and we obtain the following:

Theorem 2 For any polynomial q(n) there is a deterministic algorithm that constructs a perfect hash function for a given subset S ⊆ {1,...,m} of cardinality n, where m = q(n). The perfect hash function can be found in time Õ(n), its description requires O(n) bits of representation and the value of the hash function on any x ∈ {1,...,m} can be computed in constant time.

If m is superpolynomial in n then we can still get something: we first find a 1-1 mapping of S into {1,...,n³} using the same method used to construct the first level hash function, but with a fixed constant (as we do for the intermediate level). This takes time O(n log m log n). We proceed from there on as if the range is n³, getting an additional Õ(n) factor. The only problem in computing the hash value of a given i ∈ {1,...,m} quickly is the time to compute the initial hash function. Depending on the exact computational model (i.e. what we assume can be done in O(1) time) it may take O(log m/log n) time or O(1) to compute it.

Theorem 3 For any m there is a deterministic algorithm that constructs a perfect hash function for a given subset S ⊆ {1,...,m} of cardinality n. The perfect hash function can be found in time Õ(n log m), its description requires O(n + log n·log log m) bits and the value of the hash function on any x ∈ {1,...,m} can be computed in O(log m/log n) time (or constant time if we assume the stronger model).

Remark: We do not know whether the scheme described above can be made implicit, i.e. not requiring any additional memory to represent the function (in the sense of [9]). The methods described in [10] and [9] for making the FKS scheme implicit are applicable here as well. The problem however seems to be in finding a deterministic way of doing the encoding in nearly linear time. Consider for instance the following problem: given n/4 values v_1, v_2, ..., v_{n/4} in {0,...,n−1}, find n/4 disjoint pairs (x_1,y_1), (x_2,y_2), ..., (x_{n/4},y_{n/4}) such that v_i = x_i − y_i mod n. The deterministic algorithm for this problem seems to take O(n²) time.


Remark: One can construct a similar alternative algorithm, which uses only two levels, by using a 3-wise ε-dependent distribution in the first level to conclude that by the end of this level the inequality Σ_i (s_i choose 3) = O(n) holds. We omit the details.

Acknowledgments

We thank the anonymous referees and Mike Luby for their many useful remarks and suggestions.

References

[1] N. Alon, L. Babai and A. Itai, A fast and simple randomized parallel algorithm for the maximal independent set problem, Journal of Algorithms 7 (1986), pp. 567–583.

[2] N. Alon, J. Bruck, J. Naor, M. Naor and R. Roth, Construction of asymptotically good, low-rate error-correcting codes through pseudo-random graphs, IEEE Transactions on Information Theory 38 (1992), 509–516.

[3] N. Alon, Z. Galil and O. Margalit, On the exponent of the All Pairs Shortest Path problem, Proc. 32nd IEEE Annual Symposium on Foundations of Computer Science (1991), 569–575. Also: J. Computer and System Sciences, to appear.

[4] N. Alon, Z. Galil, O. Margalit and M. Naor, Witnesses for Boolean matrix multiplication and for shortest paths, Proc. 33rd IEEE Annual Symposium on Foundations of Computer Science (1992), 417–426.

[5] N. Alon, O. Goldreich, J. Hastad and R. Peralta, Simple constructions of almost k-wise independent random variables, Proc. 31st IEEE Symposium on Foundations of Computer Science (1990), 544–553. Also: Random Structures and Algorithms 3 (1992), 289–304.

[6] N. Alon and J. Spencer, The Probabilistic Method, John Wiley and Sons Inc., New York, 1991.

[7] B. Berger and J. Rompel, Simulating (log^c n)-wise independence in NC, J. of the ACM 38 (1991), pp. 1026–1046.

[8] D. Coppersmith and S. Winograd, Matrix multiplication via arithmetic progressions, Journal of Symbolic Computation 9 (1990), 251–280.

[9] A. Fiat and M. Naor, Implicit O(1) probe search, SIAM J. Comp. 22 (1993), pp. 1–10.

[10] A. Fiat, M. Naor, J. P. Schmidt and A. Siegel, Non-oblivious hashing, J. of the ACM 31 (1992), pp. 764–782.

[11] M. L. Fredman and J. Komlós, On the size of separating systems and families of perfect hash functions, SIAM Journal on Algebraic and Discrete Methods 5(1) (1984), 61–68.

[12] M. L. Fredman, J. Komlós and E. Szemerédi, Storing a sparse table with O(1) worst case access time, J. of the ACM 31 (1984), 538–544.

[13] Z. Galil and O. Margalit, A faster algorithm for the All Pairs Shortest Path problem for undirected graphs, Manuscript, August 1991.

[14] Z. Galil and O. Margalit, Witnesses for Boolean matrix multiplication and for transitive closure, J. of Complexity 9 (1993), 201–221.

[15] R. M. Karp and A. Wigderson, A fast parallel algorithm for the maximal independent set problem, J. of the ACM 32 (1985), 762–773.

[16] J. Körner, Fredman-Komlós bounds and information theory, SIAM J. Alg. Disc. Meth. 7 (1986), 560–570.

[17] M. Luby, A simple parallel algorithm for the maximal independent set problem, SIAM J. Comp. 15 (1986), pp. 1036–1053.

[18] M. Luby, Removing randomness in parallel computation without a processor penalty, J. Computer and System Sciences 47 (1993), pp. 250–286.

[19] H. Mairson, The effect of table expansion on the program complexity of perfect hash functions, BIT 32 (1992), 430–440.

[20] O. Margalit, PhD Dissertation, Tel-Aviv University, 1993.

[21] K. Mehlhorn, Data Structures and Algorithms 1: Sorting and Searching, Springer-Verlag, Berlin, 1984.

[22] R. Motwani, J. Naor and M. Naor, The probabilistic method yields deterministic parallel algorithms, Proceedings of the 30th IEEE Symposium on Foundations of Computer Science (1989), pp. 8–13.

[23] J. Naor and M. Naor, Small-bias probability spaces: efficient constructions and applications, SIAM J. on Computing 22 (1993), pp. 838–856.

[24] A. Nilli, Perfect hashing and probability, Combinatorics, Probability and Computing, to appear.

[25] P. Raghavan, Probabilistic construction of deterministic algorithms: approximating packing integer programs, J. Computer and System Sciences 37 (1988), pp. 130–143.

[26] T. J. Sager, A polynomial time generator for minimal perfect hash functions, CACM 28, 1985.

[27] J. Schmidt and A. Siegel, The spatial complexity of oblivious k-probe hash functions, SIAM J. Comp. 19 (1990), pp. 775–786.

[28] R. Seidel, On the All-Pairs-Shortest-Path problem, Proc. 24th Annual ACM Symposium on Theory of Computing (1992), 745–749.

[29] J. Spencer, Ten Lectures on the Probabilistic Method, SIAM, Philadelphia, 1987.

[30] R. E. Tarjan and A. C. Yao, Storing a sparse table, Communications of the ACM 22 (1979), pp. 606–611.
