Effective Fine-Grain Synchronization For Automatically Parallelized Programs Using Optimistic Synchronization Primitives

MARTIN C. RINARD

Massachusetts Institute of Technology

1. INTRODUCTION

Parallelizing compilers have traditionally exploited a specific, restricted form of concurrency: the concurrency available in loops that access dense matrices using affine access functions [Bacon et al. 1994]. The generated parallel programs use a correspondingly restricted kind of synchronization: coarse-grain barrier synchronization at the end of each parallel loop.

As shared-memory multiprocessors become the dominant commodity source of computation, parallelizing compilers must support a wider class of computations. It will be especially important to support irregular, object-based computations, including computations that access pointer-based data structures such as lists, trees and graphs. We have implemented a parallelizing compiler for this class of computations [Rinard and Diniz 1997]. Our experience using this compiler shows that the automatically generated parallel computations exhibit a very different kind of concurrency. Instead of parallel loops with coarse-grain barrier synchronization, the computations are structured as a set of lightweight threads that synchronize using fine-grained atomic operations. The implementation of these atomic operations has a critical impact on the performance and resource consumption of the generated parallel code.

1.1 Mutual Exclusion Locks

The standard implementation mechanism for atomic operations uses mutual exclusion locks. In this approach, each piece of data has an associated lock. To update a piece of data, an operation first acquires the corresponding lock. It performs the update, then releases the lock. Any other operation that attempts to acquire the lock blocks until the first operation releases the lock.
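In modern C++ this discipline can be sketched with std::mutex; the Counter type and its add operation below are illustrative, not the paper's generated code:

```cpp
#include <mutex>
#include <thread>
#include <vector>

// Illustrative sketch of the locking discipline described above: the data
// (sum) and its associated lock live together in one object, and every
// update acquires the lock, performs the update, then releases the lock.
struct Counter {
    std::mutex mutex;  // lock associated with this piece of data
    int sum = 0;

    void add(int p) {
        mutex.lock();    // blocks if another operation holds the lock
        sum = sum + p;   // the atomic update
        mutex.unlock();  // let blocked operations proceed
    }
};

// Four threads each perform 1000 updates; the lock ensures none are lost.
int run_counter_demo() {
    Counter c;
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; t++)
        workers.emplace_back([&c] { for (int i = 0; i < 1000; i++) c.add(1); });
    for (auto& w : workers) w.join();
    return c.sum;
}
```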

Despite their popularity, mutual exclusion locks are far from an optimal synchronization mechanism. One drawback is the memory required to hold the state of the locks. Locks increase the amount of memory that the program consumes, and can degrade the performance of the memory hierarchy by occupying space in the caches. The overhead of executing the acquire and release constructs may also reduce the performance.

Locking the computation at a coarse granularity (by mapping multiple pieces of data to the same lock) addresses these problems. It reduces the impact on the memory system and may allow the program to amortize the lock overhead over many updates to different pieces of data. Unfortunately, a coarse lock granularity may also introduce false exclusion. False exclusion occurs when multiple operations that access different pieces of data attempt to acquire the same lock. The operations execute serially even though they are independent and could, in principle, execute in parallel. False exclusion can significantly degrade the performance by reducing the amount of available parallelism.

1.2 Optimistic Synchronization

Optimistic synchronization is an attractive alternative to locks. Atomic operations that use optimistic synchronization use a load linked primitive to retrieve the initial value in an updated memory location. They compute the new value, then use a store conditional primitive to attempt to write the new value back into the memory location. If no other operation wrote the location between the load linked and the store conditional, the store conditional succeeds. Otherwise, the store conditional fails, the new value is not written into the location, and the operation typically retries the computation. Operations that use optimistic synchronization never block—they avoid atomicity violations by nullifying and retrying computations. Optimistic synchronization primitives therefore avoid problems (such as poor responsiveness, lock convoys, priority inversions, deadlock, and the need to reclaim locks held by failed processes) that are associated with the use of locks in multiprogrammed systems [Herlihy 1993]. But they also have properties that make them especially appropriate for implementing the fine-grain atomic operations characteristic of irregular parallel computations. Specifically, the use of optimistic synchronization imposes no memory overhead and eliminates the use of heavyweight blocking synchronization primitives.
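On hardware without ll/sc the same retry structure is usually expressed with compare-and-swap; the sketch below uses C++ std::atomic, with compare_exchange_weak standing in for the store conditional (it, too, may fail and force a retry):

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Optimistically synchronized increment: read the current value, compute
// the new value, and attempt to publish it. If another thread wrote the
// location in between, the compare-exchange fails and we retry; the
// operation never blocks.
void optimistic_add(std::atomic<int>& sum, int p) {
    int old = sum.load();                            // like load linked
    while (!sum.compare_exchange_weak(old, old + p)) {
        // old now holds the value another thread stored; retry with it
    }
}

int run_optimistic_demo() {
    std::atomic<int> sum{0};
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; t++)
        workers.emplace_back([&sum] {
            for (int i = 0; i < 1000; i++) optimistic_add(sum, 1);
        });
    for (auto& w : workers) w.join();
    return sum.load();
}
```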

This paper describes our experience using optimistic synchronization to implement atomic operations in the context of a parallelizing compiler for object-based programs. The compiler accepts an unannotated serial program written in a subset of C++ and automatically generates a parallel program that performs the same computation [Rinard and Diniz 1997]. The compiler is designed to parallelize irregular computations, including computations that manipulate pointer-based data structures such as lists, trees and graphs. Because it uses commutativity analysis [Rinard and Diniz 1997] as its primary analysis technique, the compiler views the computation as consisting of a sequence of operations on objects. If all of the operations in a given computation commute (i.e. generate the same result regardless of the order in which they execute), the compiler can automatically generate parallel code. For the parallel computation to execute correctly, each operation must execute atomically. We have implemented two versions of the compiler—one version generates code that uses mutual exclusion locks to make operations execute atomically; the other generates code that uses optimistic synchronization. A comparison characterizes the impact of using optimistic synchronization instead of mutual exclusion locks. Our results show that using optimistic synchronization instead of locks can significantly improve the performance and reduce the amount of memory required to execute the computation.

This paper provides the following contributions:

—Optimistic Synchronization: It identifies optimistic synchronization as an effective synchronization mechanism for atomic operations in the context of a parallelizing compiler for object-based programs.

—Analysis Algorithms: It presents novel analysis and transformation algorithms that enable a compiler to automatically generate optimistically synchronized parallel code.

—Experimental Results: It presents a complete set of experimental results that characterize the overall impact of using optimistic synchronization instead of mutual exclusion locks. These results show that optimistic synchronization can substantially reduce the amount of memory that the program consumes. Furthermore, the optimistically synchronized versions never perform significantly worse than the best lock synchronized versions, and perform significantly better than versions that lock the computation at an inappropriate granularity.

To our knowledge, our compiler is the first compiler to automatically generate optimistically synchronized code.

The remainder of the paper is structured as follows. Section 2 presents the optimistic synchronization primitives that the compiler uses to implement fine-grain atomic operations. Section 3 presents an example that illustrates the use of optimistic synchronization. Section 4 discusses how the differences between optimistic and lock synchronization affect the computation. In Section 5 we present an overview of the technique, commutativity analysis, that our prototype compiler uses to parallelize the applications. In Section 6 we present the synchronization selection algorithm, which enables the compiler to generate optimistically synchronized code. Section 7 presents the experimental results. Section 8 discusses related work. We conclude in Section 9.

2. OPTIMISTIC SYNCHRONIZATION PRIMITIVES

Only recently have modern RISC processors provided efficient implementations of the hardware primitives required for optimistic synchronization. Examples include the load linked/store conditional instructions in the MIPS RISC [Heinrich 1993], PowerPC [Motorola, Incorporated 1993], and DEC Alpha architectures [Digital Equipment Corporation 1992], and the compare-and-swap instructions in the SPARC architecture [Weaver and Germond 1994]. In this section we focus on hardware primitives available on MIPS processors such as the MIPS R4400 and R10000. The two key instructions are the load linked (ll) and store conditional (sc) instructions [Jensen et al. 1987], which can be used together to atomically update a memory location as follows.

    retry: ll    $2, 0($4)     # load value
           addiu $3, $2, 1     # increment value
           sc    $3, 0($4)     # attempt to store new value
           beq   $3, 0, retry  # retry if failure

    Fig. 1. Atomic Increment Using ll and sc

The program first uses a load linked instruction to load the original value from the memory location into a register. It computes the new value into another register, then uses a store conditional instruction to attempt to store the new value back into the memory location. If the memory location was not written between the load linked and store conditional, the store conditional succeeds and writes the new value into the memory location. If the memory location was written between the load linked and the store conditional, the store conditional fails and the new value is not written into the memory location. A flag indicating the success or failure of the store conditional is written into the register in the store conditional instruction that held the new value. The computation typically retries the atomic update until it succeeds. Figure 1 presents an assembly language sequence that uses the ll and sc instructions to atomically increment an integer variable. Register 4 ($4) contains a pointer to the variable to increment.

The standard implementation mechanism for load linked and store conditional uses a reservation address register that holds the address from the last load linked instruction and a reservation bit that is invalidated when the address is written [Michael and Scott 1995]. As implemented in the MIPS processor family, ll and sc directly support atomic operations only on 32 bit and 64 bit data items. Although it is possible to use the instructions to synthesize atomic operations on larger data items, the transformations may impose substantial data structure modifications [Herlihy 1993]. Because these modifications may degrade the performance, the current compiler uses optimistic synchronization only for updates to 32 or 64 bit data items. Transactional memory [Herlihy and Moss 1993] and Oklahoma update [Stone et al. 1993] would support optimistically synchronized atomic operations on larger objects, but no hardware implementation of these mechanisms currently exists, and it is unclear how efficient any such mechanism would be.
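The same word-size restriction is visible in standard C++: std::atomic reports at compile time whether objects of a given size can be updated directly with the hardware primitives or require a lock-based fallback. The struct below is an illustrative stand-in for a larger data item, and the stated outcomes hold on common 64-bit targets:

```cpp
#include <atomic>

struct Big { long a, b, c, d; };  // 32 bytes: larger than any ll/sc word

// True when the hardware can update the type directly (32/64-bit words on
// most machines); false when the implementation must fall back to a lock,
// mirroring the compiler's restriction described above.
constexpr bool word_is_lock_free = std::atomic<long>::is_always_lock_free;
constexpr bool big_is_lock_free  = std::atomic<Big>::is_always_lock_free;
```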

3. AN EXAMPLE

We next provide an example that illustrates how the compiler can use optimistic synchronization to implement atomic operations. The program in Figure 2 implements a graph traversal. The visit operation traverses a single node. It first adds the parameter p into the running sum stored in the sum instance variable, then recursively invokes the operations required to complete the traversal. The way to parallelize this computation is to execute the two recursive invocations in parallel. Our compiler is able to use commutativity analysis to statically detect this source of concurrency [Rinard and Diniz 1997]. But because the data structure may be a graph, the parallel traversal may visit the same node multiple times. The generated code must therefore contain synchronization constructs that make each operation execute atomically with respect to all other operations that access the same object.

    class graph {
    private:
      int value, sum;
      graph *left; graph *right;
    public:
      void visit(int);
    };

    void graph::visit(int p) {
      sum = sum + p;
      if (left != NULL) left->visit(value);
      if (right != NULL) right->visit(value);
    }

    Fig. 2. Serial Graph Traversal

Figure 3 presents the code that the compiler generates when it uses mutual exclusion locks to make operations execute atomically. The compiler augments each graph object with a mutual exclusion lock mutex. The automatically generated visit operation invokes the parallel_visit operation, then invokes the wait construct, which blocks until all parallel tasks created by the current task or its descendant tasks finish. In the optimistically synchronized version, shown in Figure 4, the parallel_visit operation uses a load linked instruction to fetch the value of sum. It computes the new value, then uses a store conditional instruction to attempt to store the new value back into sum. A loop retries the update until it succeeds.
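A hypothetical modern-C++ analogue of the generated traversal can be sketched as follows; std::atomic's fetch_add stands in for the ll/sc retry loop and std::thread for the spawn/wait constructs. The names mirror the figures, but the code is a sketch, not compiler output:

```cpp
#include <atomic>
#include <thread>

struct Graph {
    int value = 0;
    std::atomic<int> sum{0};   // updated optimistically, no mutex needed
    Graph* left = nullptr;
    Graph* right = nullptr;

    void parallel_visit(int p) {
        sum.fetch_add(p);      // the atomic update; retries happen inside
        std::thread l, r;      // spawn the two traversals in parallel
        if (left)  l = std::thread(&Graph::parallel_visit, left, value);
        if (right) r = std::thread(&Graph::parallel_visit, right, value);
        if (l.joinable()) l.join();  // plays the role of wait()
        if (r.joinable()) r.join();
    }
};
```

Because the structure may be a graph rather than a tree, a node reachable along two paths is visited twice, and the atomic update keeps both contributions.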

4. ISSUES

We next discuss how the differences between optimistic and lock synchronization affect the computation.

4.1 Memory System Effects

To generate code that uses mutual exclusion locks, the compiler must augment the data structures with locks. Our experimental results indicate that using locks can significantly increase the amount of memory required to run the program. The memory overhead associated with using locks is therefore an important potential problem. In many cases, it is desirable to run as large a problem as possible, and the amount of available memory is the key factor that limits the runnable problem size.

    class graph {
    private:
      lock mutex;
      int value, sum;
      graph *left; graph *right;
    public:
      void visit(int);
      void parallel_visit(int);
    };

    void graph::visit(int p) {
      this->parallel_visit(p);
      wait();
    }

    void graph::parallel_visit(int p) {
      mutex.acquire();
      sum = sum + p;
      mutex.release();
      if (left != NULL) spawn(left->parallel_visit(value));
      if (right != NULL) spawn(right->parallel_visit(value));
    }

    Fig. 3. Parallel Graph Traversal Using Lock Synchronization

    class graph {
    private:
      int value, sum;
      graph *left; graph *right;
    public:
      void visit(int);
      void parallel_visit(int);
    };

    void graph::visit(int p) {
      this->parallel_visit(p);
      wait();
    }

    void graph::parallel_visit(int p) {
      register int new_sum = ll(sum);
      new_sum = new_sum + p;
      while (!sc(new_sum, sum)) {
        new_sum = ll(sum);
        new_sum = new_sum + p;
      }
      if (left != NULL) spawn(left->parallel_visit(value));
      if (right != NULL) spawn(right->parallel_visit(value));
    }

    Fig. 4. Parallel Graph Traversal Using Optimistic Synchronization

Locks can also have a negative impact on the performance of the memory hierarchy. They occupy space in the data caches, which reduces the amount of useful application data that the caches can hold. They also increase the amount of space between useful application data, which may reduce the spatial locality. If the lock and its associated data are stored on different cache lines, a locked update may generate extra cache misses—operations may incur not only the cache misses required to perform the update, but also an extra cache miss to acquire the lock.

The use of locks may also affect the performance of the instruction cache. The additional instructions required to address, acquire, and release the lock increase the size of the generated code, effectively reducing the amount of application code that the instruction cache can hold. For optimistic synchronization, the conditional branches that retry failed updates are the only additional instructions.

Lock synchronization may also affect the performance of the memory consistency protocol. To execute the program correctly, the machine must globally order the execution of the locking constructs with respect to the memory accesses performed as part of the atomic operation. Modern machines that implement relaxed memory consistency models typically enforce this order by waiting for all outstanding writes to complete globally before releasing a lock [Adve and Gharachorloo 1996; Gharachorloo 1996]. But unlike lock synchronization, optimistic synchronization imposes no additional order between accesses to different memory locations. Although interactions between the caches, optimistic synchronization and out of order execution complicate the implementation on modern processors, in principle the machine can use the same relaxed ordering constraints for optimistically synchronized updates as it does for unsynchronized updates. The fundamental performance differences between optimistically synchronized and unsynchronized updates come primarily from the need to acquire exclusive access to the memory location before a store conditional can succeed and control dependences from the branch that retries the update if it fails.

A significant advantage of optimistic synchronization is that it imposes no memory, data cache or memory consistency overheads—the serial and parallel programs have identical data memory layouts, identical data memory access patterns and identical ordering constraints between accesses to different memory locations.

4.2 Synchronization Granularity

As described in Section 2, existing optimistic synchronization primitives support atomic operations only on individual data items. The advantage of synchronizing at this fine granularity is that it minimizes the possibility of false exclusion and maximizes the exposed concurrency. But it may not always be desirable or even possible to synchronize at this granularity. If an operation updates many data items, it may be more efficient to acquire a lock, perform all of the updates without additional synchronization, then release the lock. Furthermore, atomic operations that perform multiple interdependent updates to multiple data items cannot synchronize at the granularity of individual items—all of the updates must execute atomically together as a group. In this case, the compiler must generate code that uses lock synchronization.
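A minimal illustration of the interdependent case (the Stats type and its fields are hypothetical): the two fields must change together, so the operation takes a lock and performs both updates inside one critical section. Per-word optimistic updates could expose a new total paired with an old count:

```cpp
#include <mutex>

struct Stats {
    std::mutex mutex;
    long total = 0;   // interdependent with count: readers compute
    long count = 0;   // total / count, so the pair must change atomically

    void record(long x) {
        std::lock_guard<std::mutex> guard(mutex);
        total += x;   // both updates execute atomically as a group
        count += 1;
    }

    double average() {
        std::lock_guard<std::mutex> guard(mutex);
        return count == 0 ? 0.0 : static_cast<double>(total) / count;
    }
};
```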

4.3 Impact on Applications

The extent to which an application can use optimistic synchronization depends on the characteristics of its atomic operations. The parallel phases of many applications compute a set of contributions that are atomically accumulated into shared objects using simple binary operators such as max, min, addition, or multiplication. The basic atomic operations in these applications update a single word of memory, and all of these atomic operations can use optimistic synchronization instead of mutual exclusion locks. All of our benchmark applications use this kind of atomic operation exclusively, as do many of the applications in parallel benchmark suites such as the SPLASH and SPLASH2 benchmarks [Singh et al. 1992; Woo et al. 1995].
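An accumulation with max, one of the operators listed above, can be sketched as a single-word optimistic update using the standard compare-and-swap retry idiom:

```cpp
#include <atomic>

// Atomically accumulate a contribution into shared using max. The update
// touches a single word, so no lock is needed: if another thread changes
// shared between the load and the compare-exchange, we re-check and retry.
void atomic_max(std::atomic<int>& shared, int contribution) {
    int old = shared.load();
    while (old < contribution &&
           !shared.compare_exchange_weak(old, contribution)) {
        // old was refreshed by the failed compare-exchange; loop re-checks
    }
}
```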

Problematic cases tend to arise when applications use atomic operations to construct or update linked data structures in parallel, as opposed to simply traversing the data structures in parallel. The version of Barnes-Hut in the SPLASH2 benchmark suite, for example, builds a space-subdivision tree in parallel. The atomic operations used to build or update linked data structures tend to update multiple words of data atomically, which makes them unsuitable for the simple atomic synchronization primitives that are currently available on RISC microprocessors. Experience with operating systems kernels, however, suggests that more advanced primitives such as double compare and swap may make it possible to recode these computations to use optimistic synchronization instead of mutual exclusion locks [Greenwald and Cheriton 1996; Massalin and Pu 1989].

A final question is whether it may be more efficient to synchronize at the coarser granularity of objects instead of using optimistic synchronization. For our set of benchmark applications, synchronizing at the granularity of objects never significantly improves the performance. We expect, however, that applications with coarser granularity updates than those in our set of benchmark applications may execute more efficiently with lock synchronization than with optimistic synchronization. Given the relatively efficient current implementations of optimistic synchronization primitives, we would not expect to see dramatic performance differences from this effect.

5. COMMUTATIVITY ANALYSIS

We next provide an overview of commutativity analysis, the technique that our compiler uses to automatically parallelize our set of applications [Rinard and Diniz 1997]. Commutativity analysis is designed to parallelize pure object-based programs. Such programs structure the computation as a set of operations on objects. Each object implements its state using a set of instance variables. An instance variable can be a nested object, a pointer to an object, a primitive data item such as an int or a double, or an array of any of the preceding types. Each operation has a receiver object and several parameters. When an operation executes, it can read and write the instance variables of the receiver object, access the parameters, or invoke other operations. Well structured object-based programs conform to this model of computation; pure object-based languages such as Smalltalk enforce it explicitly.

The commutativity analysis algorithm analyzes the program at the granularity of operations on objects to determine if the operations commute, i.e., if they generate the same result regardless of the order in which the operations execute. If all operations in a given computation commute, the compiler can automatically generate parallel code.

To test that two operations A and B commute, the compiler must consider two execution orders: the execution order A;B in which A executes first, then B executes, and the execution order B;A in which B executes first, then A executes. The two operations commute if they meet the following commutativity testing conditions:

—Instance Variables: The new value of each instance variable of the receiver objects of A and B under the execution order A;B must be the same as the new value under the execution order B;A.

—Invoked Operations: The multiset of operations directly invoked by either A or B under the execution order A;B must be the same as the multiset of operations directly invoked by either A or B under the execution order B;A.

Both commutativity testing conditions are trivially satisfied if the two operations have different receiver objects or if neither operation writes an instance variable that the other accesses—in both of these cases the operations are independent. If the operations may not be independent, the compiler reasons about the values computed in the two execution orders.
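For the visit operation of Section 3, the instance-variable condition can be checked by hand: the update sum = sum + p yields the same final value in either order. A tiny illustrative check:

```cpp
// Apply two visit updates (adding first then second) to the same initial
// sum and return the final instance-variable value.
int apply_updates(int sum, int first, int second) {
    sum = sum + first;   // operation A's update expression
    sum = sum + second;  // operation B's update expression
    return sum;
}

// The operations commute if both execution orders produce the same value.
bool visit_updates_commute(int sum, int pa, int pb) {
    return apply_updates(sum, pa, pb) == apply_updates(sum, pb, pa);
}
```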

The compiler uses symbolic execution [King 1976; Kemmerer and Eckmann 1985] to extract expressions that denote the new values of instance variables and the multiset of invoked operations. Symbolic execution simply executes the methods, computing with expressions instead of values. It maintains a set of bindings that map variables to expressions that denote their values and updates the bindings as it executes the methods. The compiler uses the extracted expressions from the symbolic execution to apply the commutativity testing conditions presented above. If the compiler cannot determine that corresponding expressions are equivalent, it must conservatively assume that the two operations do not commute.
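A toy sketch of the binding mechanism (string-based, and far simpler than a real symbolic executor): executing sum = sum + p rewrites sum's binding, so the expressions extracted under the two execution orders can be compared. Note that the two results below differ textually; the compiler must still prove them equivalent, here via the associativity and commutativity of +.

```cpp
#include <map>
#include <string>

using Bindings = std::map<std::string, std::string>;

// Symbolically execute the assignment "var = var + operand": replace the
// binding of var with an expression built from its current binding.
void exec_add(Bindings& b, const std::string& var, const std::string& operand) {
    b[var] = "(" + b[var] + "+" + operand + ")";
}

// Expression denoting the new value of sum after two visit updates
// execute in the given order.
std::string new_value_of_sum(const std::string& first, const std::string& second) {
    Bindings b{{"sum", "sum"}};  // sum initially denotes its old value
    exec_add(b, "sum", first);
    exec_add(b, "sum", second);
    return b["sum"];
}
```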

5.1 Automatic Parallelization and Parallel Phases

The compiler uses commutativity analysis as outlined above to parallelize the program. It recursively traverses the call graph of the program visiting operation invocation sites as follows. At each site, it first computes the set of operations executed by the computation rooted at that site. It then uses commutativity analysis to determine if all pairs of operations in this set commute. If so, the compiler generates code that executes the operations in parallel. We refer to such parallelized computations as parallel phases. The parallel phase therefore corresponds to the entire computation rooted at the invocation site.

As soon as the traversal detects a parallel phase, it does not recursively visit the operation invoked at the invocation site. It instead skips to the next invocation site after the parallel phase. If the compiler was unable to parallelize the computation rooted at the call site, the traversal recursively visits the invocation sites in the invoked operation.

5.2 Synchronization in the Generated Code

Commutativity analysis assumes that the operations in the parallel phases execute atomically. When the compiler generates the parallel code, it must therefore choose a synchronization mechanism and insert the corresponding synchronization constructs into operations in parallel phases that access potentially updated objects. These constructs ensure that the operation executes atomically with respect to all other operations in the parallel phase that access the same object. A standard mechanism is to augment each potentially updated object with a mutual exclusion lock. The generated code for each operation would first acquire the object's lock, access the object, then release the lock. This paper presents the results of a compiler that, when possible, uses optimistic synchronization instead of the standard lock synchronization.


6. THE SYNCHRONIZATION SELECTION ALGORITHM

Synchronization selection takes place after the commutativity analysis algorithm has successfully parallelized a phase of the computation as outlined above in Section 5. The synchronization selection algorithm chooses a synchronization mechanism for all variables updated by operations in the phase. The algorithm uses optimistic synchronization whenever possible. The synchronization mechanism for a given variable can differ across parallel phases—i.e., one phase may use lock synchronization, while another phase may use optimistic synchronization for the same variable.

Even though we implemented the synchronization selection algorithm in the context of our parallelizing compiler, the algorithm is independent of the specific means used to extract the concurrency, and can be used both for automatically and manually parallelized computations.

6.1 Model of Computation

We next outline the basic requirements for the application of the synchronization selection algorithm. The algorithm is designed to work on pure object-based programs with parallel phases and atomic operations. For each atomic operation, the algorithm must be able to determine the operations that may execute in parallel with the atomic operation and the instance variables that the operations update. To use optimistic synchronization, it must also be able to generate expressions that denote the new values of updated instance variables.

In our compiler, the commutativity analysis algorithm extracts some of the information that the synchronization selection algorithm uses. Specifically, it produces the set of operations that each parallel phase may invoke and the set of instance variables that these operations may update [Rinard and Diniz 1997]. For each operation, it also produces a set of update expressions that represent how the operation updates instance variables and a multiset of invocation expressions that represent the multiset of operations that the operation may invoke. There is one update expression for each instance variable that the operation modifies and one invocation expression for each operation invocation site. Except where noted, the update and invocation expressions contain only instance variables and parameters—the algorithm uses symbolic execution to eliminate local variables from the update and invocation expressions [King 1976; Kemmerer and Eckmann 1985; Rinard and Diniz 1997].

6.2 Update Expressions

An update expression of the form v = exp represents an update to a scalar instance variable v; the symbolic expression exp denotes the new value of v. An update expression of the form v[exp1] = exp2 represents an update to the array instance variable v. An update expression of the form for (i = exp1; i < exp2; i += exp3) update represents a loop that repeatedly performs the update; in this case, i is the induction variable of the loop. Update expressions are available for all operations that the parallel phase may invoke.

6.3 Invocation Expressions

An invocation expression of the form exp0->op(exp1, ..., expn) represents an invocation of the operation op. The symbolic expression exp0 denotes the receiver object of the operation and the symbolic expressions exp1, ..., expn denote the parameters. An invocation expression of the form for (i = exp1; i < exp2; i += exp3) invocation represents a loop that repeatedly invokes the operation; in this case, i is the induction variable of the loop.

6.4 Synchronization Selection Requirements

To execute correctly, all accesses to a potentially updated variable must use the same synchronization mechanism. The synchronization selection algorithm therefore classifies each potentially updated variable as either an optimistically synchronized variable (a variable whose updates can use optimistic synchronization or no synchronization) or a lock synchronized variable (a variable whose accesses must use lock synchronization).

The commutativity analysis algorithm assumes that each operation executes atomically with respect to other operations that access the same object. The analysis takes place at the granularity of instance variables and multisets of invoked operations—two operations commute if the instance variables and multisets of invoked operations are the same in both execution orders [Rinard and Diniz 1997]. But optimistically synchronized updates execute atomically only at the granularity of individual updates, not at the coarser granularity of complete operations (each operation may perform multiple updates). If the synchronization selection algorithm chooses to optimistically synchronize a set of updates, it must ensure that the generated parallel program always produces the same result as the corresponding program in which all operations execute atomically.

The atomicity requirements may force the synchronization selection algorithm to use lock synchronization. Consider, for example, a computation that contains an operation that updates multiple variables and an update in a different operation that reads all of the variables. To satisfy the atomicity requirements, the updates in the first operation must execute atomically as a group with respect to the update that reads the variables. Because optimistically synchronized updates are atomic only at the granularity of individual updates, the synchronization selection algorithm cannot optimistically synchronize the updates in the first operation. All of the updated variables must be classified as lock synchronized variables.

The current algorithm applies two constraints to enforce the atomicity requirements.

—First, each update to an optimistically synchronized variable may access only the updated variable and variables that are not updated during the parallel phase.

—Second, all accesses to an optimistically synchronized variable may occur only in updates to that variable—no other update reads the variable and no operation invocation site depends on the variable.

It is possible to relax these constraints. For example, if an operation accessed only one updated variable, the compiler could allow its operation invocation sites to depend on the variable even if the variable were optimistically synchronized. To ensure that updates to the variable would execute atomically with respect to the accesses in the operation, the generated code would read the value of the variable into a local variable. It would then access the value from that local variable for the remainder of the operation.

It would also be possible to integrate the commutativity testing and synchronization selection more closely. In such a scenario, the synchronization selection algorithm would propose a set of optimistically synchronized variables, and the commutativity testing would take place at the finer granularity of individual updates to those variables.

The practical impact of these generalizations would depend on the characteristics of the applications. In general, we expect the current algorithm to recognize most of the opportunities to use optimistic synchronization that occur in practice. But experience with a broader range of applications would shed additional light on the situation.

6.5 The Algorithm

We next present some notation that we will use when we present the synchronization selection algorithm. The program defines a set of classes and a set of operations. Given an operation op, the function receiverClass(op) returns the class of the receiver objects of op. The program also defines a set of instance variables. The function instanceVariables(cl) returns the instance variables of the class cl. No two classes share an instance variable—i.e., instanceVariables(cl1) ∩ instanceVariables(cl2) = ∅ if cl1 ≠ cl2.

Figure5presents the algorithm.It takes as parameters a set of invoked operations,a set

of updated variables,a function updates,which returns the set of update expressions that represent the updates that the operation performs,and a function invocations, which returns the multiset of invocation expressions that represent the multiset of opera-tions that the operation invokes.There is also an auxiliary function called variables; variables returns the set of variables in the symbolic expression,variables returns the set of free variables in the update expression,and variables returns the set of free variables in the invocation expression.The free variables of an update or invocation expression include all variables in the expression except the induction vari-ables in expressions that represent loops.In particular,the free variables in an update expression include the updated variable.The algorithm produces a set of variables whose updates may be optimistically synchronized,a set of classes that must be augmented with locks,and a set of operations that must use lock synchronization.

The algorithm determines the kind of synchronization it must use by processing all of the updates and invocations. For each update, the choices are to use lock synchronization, optimistic synchronization, or no synchronization at all. For simplicity, the algorithm classifies each variable as either requiring lock synchronization or able to use optimistic synchronization. Some updates to variables classified as able to use optimistic synchronization may require no synchronization at all. When the compiler generates the code for such updates, it simply omits the synchronization. The decision to generate or omit synchronization is based on the form of the update as discussed below.

synchronizationSelection(invokedOperations, updatedVariables, updates, invocations)
  lockedClasses = ∅;
  lockedOperations = ∅;
  optimisticVariables = updatedVariables;
  for all op ∈ invokedOperations
    for all u ∈ updates(op)
      if requiresLockSynchronization(u, updatedVariables)
        optimisticVariables = optimisticVariables − variables(u);
        lockedClasses = lockedClasses ∪ {receiverClass(op)};
    for all i ∈ invocations(op)
      if variables(i) ∩ updatedVariables ≠ ∅
        optimisticVariables = optimisticVariables − variables(i);
        lockedClasses = lockedClasses ∪ {receiverClass(op)};
  lockedVariables = updatedVariables − optimisticVariables;
  for all op ∈ invokedOperations
    for all u ∈ updates(op)
      if variables(u) ∩ lockedVariables ≠ ∅
        lockedOperations = lockedOperations ∪ {op};
    for all i ∈ invocations(op)
      if variables(i) ∩ lockedVariables ≠ ∅
        lockedOperations = lockedOperations ∪ {op};
  return ⟨optimisticVariables, lockedClasses, lockedOperations⟩;

requiresLockSynchronization(u, updatedVariables)
  if u is of the form v = exp and variables(exp) ∩ updatedVariables = ∅
    return false;
  if u is of the form v[exp1] = exp2 and
      (variables(exp1) ∪ variables(exp2)) ∩ updatedVariables = ∅
    return false;
  if u is of the form v = v ⊕ exp and variables(exp) ∩ updatedVariables = ∅
    return false;
  if u is of the form v[exp1] = v[exp1] ⊕ exp2 and
      (variables(exp1) ∪ variables(exp2)) ∩ updatedVariables = ∅
    return false;
  if u is of the form if (exp) u1 and variables(exp) ∩ updatedVariables = ∅
    return requiresLockSynchronization(u1, updatedVariables);
  if u is of the form for (i = exp1; i < exp2; i += exp3) u1
      and (variables(exp1) ∪ variables(exp2) ∪ variables(exp3)) ∩ updatedVariables = ∅
    return requiresLockSynchronization(u1, updatedVariables);
  return true;

Fig. 5. Synchronization Selection Algorithm


Three factors contribute to the synchronization choice for a given variable: the requirement of each update to the variable, the way that other updates use the variable, and the way that invocations use the variable. Each update imposes the following requirement on the updated variable:

—No Synchronization: If an update computes a new value that does not depend on any updated variable, then writes the value into a variable, it is a candidate for using no synchronization at all. The sole responsibility of the synchronization selection algorithm is to ensure that the operations execute atomically. Because store instructions execute atomically, there may be no need to explicitly synchronize an update that simply writes a value into an instance variable if the value does not depend on a variable that some other operation may update.

—Optimistic Synchronization: In principle, if an update reads an instance variable, computes a new value that does not depend on a different variable that another operation might update, then writes the new value back into the variable, the update should be a candidate for optimistic synchronization. In the MIPS R4400, however, it is illegal to perform a memory access between the ll and sc instructions. The compiler could still use optimistic synchronization whenever it is possible to compute the new value without accessing memory after the initial read of the instance variable. To simplify the implementation of the compiler, however, the algorithm uses optimistic synchronization only when the new value can be expressed in the form v ⊕ exp or exp ⊕ v, where v denotes the original value of the variable, ⊕ is an arbitrary binary operator, and exp does not contain an updated variable. In this case, the generated code can compute the value of exp into a register, then use an instruction sequence similar to that in Figure 1 to atomically perform the computation and update the variable.

—Lock Synchronization: The algorithm classifies all other updates as requiring lock synchronization.

In a given parallel phase, all accesses to an updated variable must use the same synchronization mechanism. If even one update that accesses the variable (either by reading the variable or writing the variable) requires lock synchronization, then all updates that access the variable must use lock synchronization.

The algorithm starts by assuming that all variables can use either optimistic synchronization or no synchronization at all. It then scans the updates to find updates that require lock synchronization. Any variable involved in such an update is removed from the set of optimistically synchronized variables. The compiler will augment the class of the receiver object of the variable with a lock and insert constructs that acquire and release the lock into all operations that access the variable. The invocation expressions capture the remainder of the operation's accesses to updated variables. If an invocation expression accesses an updated variable, the algorithm deletes the variable from the set of optimistically synchronized variables. All accesses to the variable will use lock synchronization. The algorithm computes the set of operations that must use lock synchronization by scanning the operations to find all operations that access a lock synchronized variable.

6.6 Multiple Updates In Loops

There is a technical detail associated with loops that update the same variable or array element more than once. If the updates are optimistically synchronized, they are atomic only at the granularity of the individual loop iterations, not at the granularity of the entire loop. With lock synchronization, they would be atomic at the granularity of the entire loop. Optimistically synchronizing variables that may be updated multiple times in a loop makes the granularity of the enforced atomicity finer. Because the commutativity analysis for such variables takes place at the granularity of individual loop iterations instead of at the granularity of the entire loop, this change in the granularity does not affect the correctness of the optimistically synchronized parallel program.

6.7 Code Generation

If an operation must use lock synchronization, the generated code acquires the lock in the receiver object before it accesses any potentially updated instance variables; it does not release the lock until it has completed all of its accesses to potentially updated variables. The commutativity analysis algorithm ensures that all invocations of operations that may access potentially updated variables occur after the invoking operation has completed its last access to a potentially updated variable [Rinard and Diniz 1997]. This separability property ensures that the generated code never attempts to acquire more than one lock, which in turn ensures that it never deadlocks.

It is possible for an operation to synchronize some of its updates using locks and other updates using optimistic synchronization. The nonblocking nature of optimistic synchronization ensures that the combination of the two different synchronization mechanisms never causes deadlock. Finally, note that the code generation algorithm augments a class with a lock only if one of the phases uses lock synchronization when it updates an object of that class. So if the compiler is able to use optimistic synchronization for all updates in parallel phases to objects of a given class, the object layout is the same in the parallel and serial versions and there is no memory overhead for locks in objects of that class.

7. EXPERIMENTAL RESULTS

We next present experimental results that characterize the performance and memory impact of using optimistic synchronization. We present results for three automatically parallelized applications: Barnes-Hut [Barnes and Hut 1986], a hierarchical N-body solver, String [Harris et al. 1990], which builds a velocity model of the geology between two oil wells, and Water [Singh et al. 1992], which simulates water molecules in the liquid state. Each application performs a complete computation of interest to the scientific computing community. Barnes-Hut consists of approximately 1500 lines of serial C++ code, String consists of approximately 2050 lines of serial C++ code, and Water consists of approximately 1850 lines of serial C++ code.

7.1 The Compilation System

We implemented a prototype parallelizing compiler that uses commutativity analysis as its basic analysis technique. Compiler flags determine whether it generates code that uses mutual exclusion locks or (when possible) optimistic synchronization.

The compiler is structured as a source-to-source translator that takes a serial program written in a subset of C++ and generates an explicitly parallel C++ program that performs the same computation. We use Sage++ [Bodin et al. 1994] as a front end. The analysis and code generation phases consist of approximately 21,000 lines of C++ code. This count includes no code from the Sage++ system. The generated parallel code contains calls to a run-time library that provides the basic concurrency management and synchronization functionality. The library consists of approximately 6000 lines of C code.


The current version of the compiler imposes several restrictions on the dialect of C++ that it can analyze. The major restrictions include the following:

—The program does not use operator or method overloading.

—The program uses neither multiple inheritance nor templates.

—The program contains no typedef, union, struct, or enum types.

—Global variables cannot be primitive data types; they must be class types.

—The program has no static members.

—The program contains no casts between base types such as int, float, and double that are used to represent numbers. The program may contain casts between pointer types; the compiler assumes that the casts do not cause the program to violate its type declarations.

—The program contains no default arguments or methods with variable numbers of arguments.

—No operation accesses an instance variable of a nested object of the receiver or an instance variable declared in a class from which the receiver's class inherits.

—The program has no virtual methods or function pointers.

—The program does not use exceptions.

—The program does not use pointers to members.

In addition to these restrictions, the compiler assumes that the program has been type checked and does not violate its type declarations. The goal of these restrictions is to simplify the implementation of the compiler while providing enough expressive power to allow the programmer to develop clean object-based programs. Most of the restrictions are designed simply to reduce the number of cases that the compiler must handle, and impose no conceptual limitation on the expressive power of the language. But the restrictions against virtual methods, function pointers, exceptions, and pointers to members do significantly limit the expressive power of the language. Supporting these constructs in our compiler would require the presence of additional analysis algorithms. The current call graph construction algorithm, for example, would have to be extended to determine which methods could be invoked at virtual function call sites and call sites that use function pointers. The symbolic execution algorithm would have to be extended to support the additional control flow associated with exceptions and to operate conservatively in the presence of pointers to members. While none of these extensions would fundamentally affect the commutativity analysis or synchronization selection algorithms discussed in this paper, they would require a significant engineering effort to integrate into the compiler.

7.2 Methodology

We collected experimental results for the applications running on an SGI Challenge XL multiprocessor with 24 100 MHz R4400 processors running IRIX version 6.2. We compiled the generated parallel programs using version 7.1 of the MipsPro compiler from Silicon Graphics. We implemented two versions of the lock primitives: a spin lock [Heinrich 1993], and a Mellor-Crummey Scott (MCS) lock [Mellor-Crummey and Scott 1991]. The spin lock acquire is implemented using a compiler intrinsic that uses ll and sc to atomically test and set a value that indicates whether the lock is free or not. The release simply clears the value. Whenever an attempt to acquire a lock fails, the processor immediately reexecutes the instruction sequence that attempts to acquire the lock: there is no backoff. Spin locks perform well if there is little lock contention. If there is substantial lock contention, spin locks typically perform poorly because they generate a large amount of invalidation traffic [Mellor-Crummey and Scott 1991].

MCS locks are designed to perform well across a range of locking conditions. Processors waiting to acquire a lock are arranged in a linked list; each processor spins on a separate memory location. MCS locks therefore avoid significant invalidation traffic even for heavily contended locks. The drawback is that the MCS lock implementation executes a compare-and-swap operation both when the lock is acquired and when the lock is released. As Table I in Section 7.3 illustrates, it therefore takes significantly longer to acquire and release an available MCS lock than it does to acquire and release an available spin lock.

We used the compiler to obtain the following versions of each application:

—Optimistic: When possible, the compiler generates code that uses optimistic synchronization. For our three applications, the atomic operations use optimistic synchronization exclusively—the generated code contains no mutual exclusion locks. The serial and Optimistic versions therefore have identical memory layouts.

—Item Lock: If a primitive data item (such as an int, float, or double) in an object may be updated in a parallel phase, there is a spin lock associated with that item. If an operation in a parallel phase updates an item, it acquires the corresponding lock, performs the update, then releases the lock.

—Object Lock: If an object may be updated in a parallel phase, there is a spin lock associated with that object. When an operation in a parallel phase updates an object, it acquires the object's lock, performs the update, then releases the lock. Each nested object has the same lock as its enclosing object.

—Coarse Lock: Like the Object Lock version, there is a spin lock associated with each object and each operation that updates an object holds that object's lock. But the compiler analyzes the program to detect sequences of operations that acquire and release the same lock. It then transforms the sequence so that it acquires the lock once, executes the operations without synchronization, then releases the lock [Diniz and Rinard 1998; 1997].

—Item MCS Lock: The same as Item Lock listed above, except that the generated code uses MCS locks instead of spin locks.

—Object MCS Lock: The same as Object Lock listed above, except that the generated code uses MCS locks instead of spin locks.

—Coarse MCS Lock: The same as Coarse Lock listed above, except that the generated code uses MCS locks instead of spin locks.

The Optimistic versions synchronize at the granularity of individual data items. The Item Lock versions also synchronize at this granularity, but use locks instead of optimistic synchronization. Because the Object Lock versions synchronize at the coarser granularity of objects, they allocate fewer locks and execute fewer lock constructs than the Item Lock versions. The tradeoff, of course, is that the Object Lock versions may suffer from false exclusion.

The Coarse Lock versions allocate the same number of locks as the Object Lock versions, but execute fewer lock constructs. The tradeoff is that the Coarse Lock versions may suffer from false contention. False contention occurs when a processor attempts to acquire a lock, but the processor that currently holds the lock is executing code that was not originally part of any atomic operation. The first processor must wait until the lock is released, even though the computations are independent and should, in principle, be able to execute concurrently. False contention can significantly degrade the performance by reducing the amount of available parallelism.

The compiler is structured as a source-to-source translator from serial C++ to parallel C++ containing synchronization primitives and calls to procedures in our run-time system. Starting with an unannotated, serial C++ program, the compiler generates the Object Lock and Coarse Lock versions automatically with no programmer intervention. Although it would be possible to generate the Item Lock versions automatically, we generated these versions by hand starting from the Object Lock version. Ideally, the generated code for the Optimistic versions would use compiler intrinsics to implement the atomic updates. But the 7.1 version of the MipsPro compiler does not support such compiler intrinsics for floating point numbers, and all of our applications need to perform atomic updates on floating point numbers. The Optimistic versions therefore contain calls to assembly language routines that implement the atomic updates using optimistic synchronization primitives.

7.3 Cost Of Basic Operations

Table I presents the execution times for a single update implemented using different synchronization mechanisms. Each update reads an array element, adds a constant to the element, then stores the new value back into the array. We present times for cached updates, in which all of the accessed data are present in the first level processor cache, and for uncached updates, in which all of the data are present in the first level processor cache of another processor. The times vary significantly for the cached versions, with the optimistically synchronized update executing faster than the lock synchronized updates. The execution times of the uncached versions are dominated by the cache miss time and are roughly comparable for the different synchronization mechanisms.

Version                  One Cached Update    One Uncached Update
No Synchronization       0.20                 3.48
Lock Synchronization     0.56                 3.93

Table I. Measured Execution Times for One Update

7.4 Barnes-Hut

We start our discussion of Barnes-Hut by analyzing the memory overhead of using locks. Table II presents the memory usage for each version—the amount of memory used to store objects during the execution of the program. The applications store all of their data except local variables in objects. The memory usage therefore indicates the total heap data usage of the program. The lock overhead is the percentage of memory used to hold locks.

Version             Memory Usage (Mbytes)    Lock Overhead
Optimistic          5.37                     -
Object Lock         7.08                     24%
Item MCS Lock       5.51                     2.5%
Coarse MCS Lock     6.30                     15%

Version             Number of Lock Acquisitions    Number of Optimistically Synchronized Updates
Optimistic          -                              108,641,972
Object Lock         212,992                        -
Item MCS Lock       54,271,834                     -
Coarse Lock


Version             1        4        8        12       16       20       24
Optimistic          136.34   -        -        -        -        -        -
Object Lock         169.20   46.84    25.56    18.58    14.79    12.87    11.62
Item MCS Lock       135.09   38.88    20.90    15.83    12.38    11.10    9.63
Coarse MCS Lock     173.55   47.70    25.69    18.76    14.92    12.97    11.67

如何把打印稿变成电子稿

如何把打印稿变成电子稿(太牛啦!!你打一天的字都比不上她2分钟!!人手一份,留着以后用哈!) 教你如何将打印稿变成电子稿最近,我的一个刚刚走上工作岗位上的朋友老是向我报怨,说老板真的是不把我们这些新来工作的人不当人看啊,什么粗活都是让我们做,这不,昨天又拿了10几页的文件拿来,叫他打成电子稿,他说都快变成打字工具了,我听之后既为他感到同情,同时教给他一个简单的方法,可以轻松将打印稿变成电子稿,我想以后对大家也有用吧,拿出来给大家分享一下。 首先你得先把这些打印稿或文件通过扫描仪扫到电脑上去,一般单位都有扫描仪,如果没有也没关系,用数码相机拍也行,拍成图片放到WORD里面去,不过在些之前,你还得装一下WORD自带的组件,03和07的都行。点开始-程序-控制面板-添加/删除程序,找到Office-修改找到Microsoft Office Document Imaging 这个组件,Microsoft Office Document Imaging Writer 点在本机上运行,安装就可以了。 首先将扫描仪安装好,接下来从开始菜单启动“Microsoft Office/ Microsoft Off ice 工具/Microsoft Office Document Scanning”即可开始扫描。 提示:Office 2003默认安装中并没有这个组件,如果你第一次使用这个功能可能会要求你插入Office2003的光盘进行安装。由于是文字扫描通常我们选择“黑白模式”,点击扫描,开始调用扫描仪自带的驱动进行扫描。这里也要设置为“黑白模式”,建议分辨率为300dpi。扫描完毕后回将图片自动调入Office 2003种另外一个组件“Microsoft Office Document Imaging”中。 点击工具栏中的“使用OCR识别文字”按键,就开始对刚才扫描的文件进行识别了。按下“将文本发送到Word”按键即可将识别出来的文字转换到Word中去了。如果你要获取部分文字,只需要用鼠标框选所需文字,然后点击鼠标右键选择“将文本发送到Word”就将选中区域的文字发送到Word中了。 此软件还有一小技巧:通过改变选项里的OCR语言,可以更准确的提取文字。例如图片里为全英文,把OCR语言改为“英语”可以确保其准确率,而如果是“默认”则最终出现的可能是乱码~ 还有: 应该说,PDF文档的规范性使得浏览者在阅读上方便了许多,但倘若要从里面提取些资料,实在是麻烦的可以。回忆起当初做毕业设计时规定的英文翻译,痛苦的要命,竟然傻到用Print Screen截取画面到画图板,再回粘到word中,够白了:(最近连做几份商务标书,从Honeywell本部获取的业绩资料全部是英文版的PDF,为了不再被折磨,花费了一个晚上的时间研究PDF和Word文件的转换,找到下面2种方法,出于无产阶级所谓的同甘共苦之心,共享下:)

(完整版)《高级日语》课程教学大纲资料

《高级日语》课程教学大纲 课程编号:D20662B0 课程属性:必修 学时:146学时学分:7 先修课程:中级日语、日本文学史、日本概况、日语实用语法 考核方式:考试 使用教材: 陆静华主编,《日语综合教程》(第五册),上海外语教育出版社,2006年 陈小芬主编,《日语综合教程》(第六册),上海外语教育出版社,2006年 季林根主编,《日语综合教程》(第七册),上海外语教育出版社,2011年 主要参考书: 陆静华、陈小芬主编,《日语综合教程第五、六册课文翻译与练习答案》,上海外语教育出版社,2007年 皮细庚主编,《日语综合教程》(第八册),上海外语教育出版社,2008年 季林根、皮细庚主编,《日语综合教程第七、八册课文翻译与练习答案》,上海外语教育出版社,2009年 周炎辉主编,《日语语法词法·句法》,湖南大学出版社,2000年 一、课程的性质和任务 高级日语是一门综合技能的课程,它涉及日语的听、说、写、读、译等多种实践能力的培养,是本科日语专业的专业必修课。该课程旨在培养学生更进一步熟练掌握日语知识;提高学生实际应用语言的能力;丰富学生的日本社会文化知识,培养文化理解能力,熟练地掌握日语语法体系,使学生达到日本国际交流基金和日本国际交流协会设立的“国际日本语能力考试”的2级水平。 本课程是“中级日语”课程的继续,教学重点从一、二年级的讲解词汇、语法、句型逐渐过渡到分析文章、理解语言背后的社会心理文化,从而掌握地道的日语。通过灵活多样的教学方法,充分调动学生的主观能动性,使学生积极参与课程教学。教师不仅要分析文章、

句子结构,还要介绍大量的语言文化背景知识,避免“中国式日语”的出现,为学生在今后的工作中使用日语打下坚实的基础。 二、教学目标与要求 高级日语教学的目标是:培养学生具有较强的阅读能力和较高的听、说、写、读、译的能力,使他们能用日语交流信息。培养学生良好学习作风,正确掌握良好的语言学习方法。培养学生逻辑思维能力,丰富社会文化知识,以适应社会发展和建设的需要。 本课程的具体教学要求: 1.读一般性日语文章,能理解作品的主要内涵和意境;能较好地归纳、概括其主要内容;能独立分析文章的思想观点、文章结构、语言技巧及问题修饰。 2.对于相当日语能力测试一级水平的文章,借助工具书、参考注释能读懂大意。 3.能用日语写出格式标准、语言基本正确、内容明了的书信、调查报告等各种文体的文章;能写内容充实,具有一定广度和深度的说明文、议论文和论文。 4.能翻译用现代日语撰写的各种文章、书籍。 5.能用日语较正确地表达自己的思想、感情,能与日本人自由交谈。 三、学时分配 本课程的学时共分配在三个学期完成,即大学3年级上、下两学期,大学4年级上学期。根据不同学年学生的具体日语水平,各学期的具体学时也不同。三年级上学期72学时,三年级下学期54学时,四年级上学期20学时。

谈谈自己与父母沟通的技巧

谈谈自己与父母沟通的技巧 我们今天很多人跟父母、父母跟孩子很难交流,原因何在呢?各位能谈谈自己与父母沟通的技巧有哪些?下面小编整理了自与父母沟通的技巧,供你阅读参考。 自与父母沟通的技巧:沟通技巧分析 我想起来,有个老母牛的故事,它不会说话啊,它用什么方法呢?踢!你只要把自己的真心发出来,你真爱他,你真关心他,不用说话自然就会交流。打个比方,比如说你的老父亲喜欢吃橘子,你看在眼里不说话,你没问他,他也不会告诉你,你去给我买什么,一般父母不会这样,那么你看在眼里,橘子买来了,第一框新鲜的橘子摆在爸爸床边、身边,也不要说,什么都不要讲,很快你就会发现,父母和子女之间的感情就像冰块一样,自然就融化了,现在问题是什么呢?我们大家总在要求别人,你怎么不给我买一筐橘子啊,错了!应该问我为什么没给他去买一筐橘子? 我们总在责备别人。总是责备别人,你永远不能沟通,会越来越远,所以提这样问题或者有类似疑问的,你想一想往往都是我们在怨别人;不能和孩子沟通的,往往都在怨他。孩子不能跟

父母沟通的,往往是他心里与父母都有怨恨和责怪。怎么办?把这些拿掉。干嘛?方向朝自己,责怪自己。 我们对父母确实差太远了。很多人说我对他已经很好了,那不是明白人说的话,真正的明白人他不说这话,他总是说我自己做的还不够,那么一个人常常觉得自己做的还不够,他那种爱所表现出来的能融化一切,不需要言语哪。 我跟各位讲,我跟我的爸爸在很多方面的交流并不多,但是我一直在观察他,我看我爸爸需要什么?他爱看小人书,我把我能买到的小人书都给他买到这个屋里。爸爸快七十岁了,每天晚上睡觉之前,把他爱看的这些小人书摆在床头,让他在那看几页,我们的交流不需要语言哪。他在翻小人书的时候,你说他幸福不幸福?他没有跟我提出来,让我给他买这个买那个,没有!我给他买的,那你说我怎么就知道呢?我总觉得我亏欠父亲的,我做得不好,常怀着这种心,就像一个口渴的人总在找水,你总能找到吧。 所以说今天的我们,夫妇很难交流、上下级很难交流、大家都是互相难交流,原因何在呢?都在责怪对方,不是责怪自己。真责怪自己,哎呀,你就天天替他服务啊!不需要呀,还需要什么交流方式?现在外边那还有什么来告诉你,你心里出毛病了,他是心理医生。那都是骗人的,他也不是故意要骗你们,他自己不知道传统文化。人只要觉得自己亏欠于别人,你总想为别人做

教你如何把打稿电子变成印稿

教你如何将打印稿变成电子稿最近,我的一个刚刚走上工作岗位上的朋友老是向我报怨,说老板真的是不把我们这些新来工作的人不当人看啊,什么粗活都是让我们做,这不,昨天又拿了10几页的文件拿来,叫他打成电子稿,他说都快变成打字工具了,我听之后既为他感到同情,同时教给他一个简单的方法,可以轻松将打印稿变成电子稿,我想以后对大家也有用吧,拿出来给大家分享一下。 首先你得先把这些打印稿或文件通过扫描仪扫到电脑上去,一般单位都有扫描仪,如果没有也没关系,用数码相机拍也行,拍成图片放到WORD里面去,不过在些之前,你还得装一下WORD自带的组件,03和07的都行。点开始-程序-控制面板-添加/删除程序,找到Office-修改找到Microsoft Office Document Imaging 这个组件,Microsoft Office Document Imaging Writer 点在本机上运行,安装就可以了。 首先将扫描仪安装好,接下来从开始菜单启动“Microsoft Office/ Microsoft Office 工具/Microsoft Office Document Scanning”即可开始扫描。 提示:Office 2003默认安装中并没有这个组件,如果你第一次使用这个功能可能会要求你插入Office2003的光盘进行安装。由于是文字扫描通常我们选择“黑白模式”,点击扫描,开始调用扫描仪自带的驱动进行扫描。这里也要设置为“黑白模式”,建议分辨率为300dpi。

扫描完毕后回将图片自动调入Office 2003种另外一个组件“Microsoft Office Document Imaging”中。 点击工具栏中的“使用OCR识别文字”按键,就开始对刚才扫描的文件进行识别了。按下“将文本发送到Word”按键即可将识别出来的 文字转换到 Word中去了。如果你要获取部分文字,只需要用鼠标框选所需文字,然后点击鼠标右键选择“将文本发送到Word”就将选 中区域的文字发送到Word中了。 此软件还有一小技巧:通过改变选项里的OCR语言,可以更准确的提取文字。例如图片里为全英文,把OCR语言改为“英语”可以确保其准确率,而如果是“默认”则最终出现的可能是乱码~ 还有: 应该说,PDF文档的规范性使得浏览者在阅读上方便了许多,但倘若要从里面提取些资料,实在是麻烦的可以。回忆起当初做毕业设计时规定的英文翻译,痛苦的要命,竟然傻到用Print Screen 截取画面到画图板,再回粘到word中,

(完整word版)日语专业教学大纲高年级阶段

高等院校日语专业高年级阶段教学大纲 一、总纲 (一)大纲宗旨 本大纲旨在规定日语专业本科三、四年级的教学目标、教学内容、教学原则、教学评价以及有关教学的几个问题,为制定高年级阶段的教学计划、教材编写、测试评估提供依据。 (二)适用范围 本大纲适用于全国高等院校日语专业本科高年级(三、四年级)。大纲只对专业课提出要求。 (三)指导思想 根据教育部对文科院校提出的“厚基础、宽口径、高质量”的要求,对高年级的课程设置和教学内容提出基本要求,各院校可根据具体情况掌握执行。其中主干课、必修课的设置应争取统一。教学质量应符合大纲要求,即继续锤炼语言基本功,提高日语实践能力,充实文化知识,进一步扩大知识面。为完成这一任务,应在教材、教学方法、教学质量评估等方面进行深入的研究和稳步的改革。要改革教学方法,激发学生的学习热情,培养学生的学术兴趣和创新意识,锻炼学生独立分析问题、解决问题的能力。 学生毕业时应具有扎实的日语基本功和较强的日语实践能力;还要具备日语语言学、日本文学、日本社会文化(包括地理、历史、政治、经济、风俗、宗教等)方面的基本知识。毕业生走出校门后,应能很快地适应除专业性较强的领域以外的各种口译、笔译及与日本研究相关的科研与教学工作。 (四)教学安排与教学内容 高年级教学分两个学年四个学期,即第五学期至第八学期。每个学期平均周数为18周。其中第八学期即毕业前的半学年中,各院校安排毕业论文的撰写工作。 本大纲是日语专业基础阶段教学大纲的延伸,与基础阶段教学大纲相衔接。在进一步练好听、说、读、写、译几方面基本功的同时,应注意扩大视野,拓宽知识面。高年级专业课程内容主要包括:(一)日语综合技能;(二)日语语言学:(三)日本文学;(四)日本社会文化。 二、课程 由于各院校的特点不同,开设课程的门类不同,课程名称及开设的时间、周学时数也不同。根据日语专业的基本特点,课程可分为主干课、必修课和选修课几种。对此,本大纲不进行具体划定,而是从内容上列出四个方面必须开设的课程,每个方面的具体课型及教学时数可

与家长的沟通和交流技巧

与家长的沟通与交流技巧讨论 一、认清自己的定位与身份,也就就是“我就是谁”。 (一)首先我就是一名老师,我需要传授学生知识;并制定学习计划与学生的知识巩固与复习; (二)我就是一名心理咨询师,我必须时刻关注学生的学习以及生活,最重要的就是孩子的心里状态; (三)我就是一名教育工作者,服务于孩子与家长,需要有服务、负责的心态; (四)我就是博学大家庭的一员,就是博学团队的重要组成部分。因此,请心系博学,为博学、为自己尽心尽力! 二、当您面对面与家长交流时需要把控的重点: (一)聊天时面露微笑,注意态度与语气;认真聆听家长的建议并适时做好笔录;要学会先听后讲,收集好信息之后组织好语言做更好的答复。 (二)认可孩子,相信孩子,以对孩子的鼓励与认可为主要内容;根据家长的不同或者情况的不同,来做自我状态的调整。 (三)不能给家长反馈过多孩子的负面信息,这会致使家长压力加大,甚至有绝望心里;这有悖于教育本义,更会让家长对博学失望。(我们要做的就是反映客观事实的同时给出相应措施,让家长对未来有所展望) (四)交流、谈话反映客观事实,不做主观评价;更不能借此宣泄私人情感。

(五)当家长对于孩子的学习有所失望,或对于现状有不满情绪时,切忌立刻道歉(道歉等于承认我们没有履职);应稳定心态,问清详情,以做调整;或者给家长灌输心灵鸡汤,动之以情,适时推出孩子成长的心灵教育。 (六)关于孩子成长的心灵教育:在与家长聊天时可以时不时提起,我们后面有专门的大课程来给孩子灌输心灵鸡汤;并且有专门的心理师来对孩子进行心理辅导,促进孩子成长,孩子成长了,懂事了,愿意学了,自然学习也会有收获。(博学给家长与孩子带来的不仅就是学习上的帮助,更促进孩子成长,这一点基本家长都会折服,而且我们也就是这么做的) (七)不能过分承诺家长,对于自己做得到的才可以承诺,否则适得其反。 (八)与家长交流时,需要为家长方做考虑,不能一昧给家长提要求,或者不了解家长的想法就大胆设想或主观臆断,这样会影响博学品牌。 (九)尊重家长,让家长尊重您;充分表现自己的教育专业性,并展现博学的教育理念。 (十)整个谈话过程都就是需要在您的节奏中,需要提前有规划及主题。 (以上十条为常见的注意点,有更多细节需要大家在实际中提出来作为讨论内容) 三、当您电话或者微信与家长交流时需要把控的重点:

如何把图片转换成电子版

在工作中,我常常在想,要是能把纸上有用的文字快速输入到电脑中,不用打字录入便可以大大提高工作效率该有多好呀!随着科技的发展,这个问题在不断的解决,例如,现在市场上的扫描仪就带有OCR软件,可以把扫描的文字转换到电脑中进行编辑。但是,对于我们平常人来说,大多数人都是即不想多花钱购买不常用的设备, 又不想费力气打字录入,那我就给大家提供一个我刚刚发现的方法吧!现在数码相机很普遍,也很常用,我们就从这里下手吧。 工具准备: 硬件:电脑一台数码相机 软件: word2003(其它的版本我没有实验) doPDF (百度可以搜索下载,是一款免费的PDF制作软件) AJViewer软件(在百度可以搜索下载,是一款免费的阅读器) 步骤: 1、在电脑中安装 doPDF和AJViewer 2、用数码相机把需要的文字拍下来(相机和照像水平就不多谈了。照片效果越好,可以大大缩小转换文字的误差率)

例如: 3、在word中插入你用数码相机照的书上的文字(打开word——插入菜单——图片——来自文件——选择照片——插入) 4、在word中选择文件菜单——打印——在打印机选项中选择doPDF——确定——点击“浏览”选项——选择文件保存的位置和填写文件名称——保存——确定 5、按照上面的步骤,电脑会自动打开AJViewer软件,若没有自动打开该软件,可以自己打开AJViewer软件,然后在AJViewer中打开刚刚转换的PDF文件。 6、选择AJViewer中的,然后在需要的文字部分拖动鼠标画出虚线。

7、点击发送到word按钮,就可以转换成word文件了。可以编辑了。第6、7步骤图片如下:

最新整理武汉大学考研参考书目学习资料

外国语言文学学院·回到页首↑专业名称研究方向参考书目 英语语言文学01 英语语言学 02 英美文学 03 翻译学 601 基础英语: 张汉熙等主编:《高级英语》(修 订本1-2册)外语教学与研究 出版社 张培基、俞云根等编:《英汉翻 译教程》上海外语教育出版社 401 语言、文学与翻译: 胡壮麟编:《语言学教程》(修 订本)北京大学出版社 章振邦:《新编英语语法教程》 (修订本) 上海外语教育出版社;张伯香编: 《英国文学教程》(修订本上下 册)武汉大学出版社 吴定柏:《美国文学大纲》上海 外语教育出版社 常耀信:《美国文学简史》(修 订本)南开大学出版社 郭著章、李庆生编:《英汉互译 实用教程》武汉大学出版社 查看 俄语语言文学01 俄语语言学 02 俄罗斯文学 03 俄语翻译学 04 语言文化学 602 实践俄语: 《大学俄语》东方1-7册北外 和普院合编外语教学与研究出 版社1995年 402 俄语语言学与文学: 王超尘等:《现代俄语理论教程》 上下册上海外语教育出版社 任光宣:《俄罗斯文学史》2003 年俄文版北京大学出版社 查看 法语语言文学01 法语语言学 02 法国文学 03 翻译理论与实践 04 法语文学理论与批评 603 专业法语: Nouveau sans frontiers CEL Internatial 3-4册 束景哲:《法语课本》第5册, 上海外语教育出版社 403 法国文学(中世纪至20世 纪): Les Grands auteurs francais Textes et Litterature Bordas1996 陈振尧:《法国文学史》外语教 学与研究出版社 查看

幼师与家长沟通技巧

幼师与家长沟通技巧 良好的仪表,打开交往的大门。 我们经常说 :教师要为人师表,这不仅指内在的品行,还要有得体的外表。教师的仪表包括: 衣着、打扮、谈吐、仪态等,仪态中面部表情尤为重要。当教师面对幼儿家长时,首先要 微笑,微笑是理解、是阳光、是人与人之间感情沟通的桥梁。每天放学时,老师将孩子送 到园门口,家长来接幼儿,整个接送过程虽然短短十分钟,但一个学期加起来,却是与家 长接触最多的时间段,老师的状态显得尤为重要。必须抛开一切烦恼,露出最真诚的微笑,向家长一一点头示意。 教师在家长面前还要注意衣冠整齐、身体站姿,是否抬头挺胸、显得神采奕奕,因为 我们此时此刻的形象代表着幼儿园的形象,所以一定要注意自己的肢体语言。教师是一个 公众的形象,良好的仪表既是尊重家长,更是尊重自己。 策略性的交谈,取得家长的信任。 教师与家长交谈时,态度要真诚 作为幼儿教师要理解家长对孩子的关爱之情。在父母眼中,自己的孩子总是最棒的。 所以当个别孩子有一些小错误,教师要与家长交谈时,一定要先说优点再说缺点。还要注 意场合,比如接孩子时,交谈要简短,因为许多家长都在等着接孩子,如果你跟某一位 家长聊得时间太长,其他家长难免会产生不耐烦的情绪,这时候的交谈一般以一两句为宜,内容应以表扬为主。 多通道的个别交流,心与心靠得更近。 家园活动的形式很多:家长会、家长公开半日活动、亲子远足、亲子小制作、家园宣 传栏等,这些都是集体性的活动,多通道的个别交流更为重要。 新生入园前家访,教师与家长面对面地真诚交流,形成教育合力,促进幼儿的发展。 只有坚持进行细致的沟通与交流,才能让家长感觉到老师的贴心服务,才能真正做好家长 工作。 教师要以一颗真诚、善良、理解的心,怀着对孩子的殷切关爱之情与家长交谈,使家 长心悦诚服,以此换来家长对教师、对幼儿园 的信赖。 首先,老师要准备好与家长沟通的问题。

复印资料变成电子版

办公室——教你如何把打印稿变成电子稿 教你如何将打印稿变成电子稿最近,我的一个刚刚走上工作岗位上的朋友老是向我报怨,说老板真的是不把我们这些新来工作的人不当人看啊,什么粗活都是让我们做,这不,昨天又拿了10几页的文件拿来,叫他打成电子稿,他说都快变成打字工具了,我听之后既为他感到同情,同时教给他一个简单的方法,可以轻松将打印稿变成电子稿,我想以后对大家也有用吧,拿出来给大家分享一下。 首先你得先把这些打印稿或文件通过扫描仪扫到电脑上去,一般单位都有扫描仪,如果没有也没关系,用数码相机拍也行,拍成图片放到WORD里面去,不过在些之前,你还得装一下WORD自带的组件,03和07的都行。点开始-程序-控制面板-添加/删除程序,找到Office-修改找到Microsoft Office Document Imaging 这个组件,Microsoft Office Document Imaging Writer 点在本机上运行,安装就可以了。 首先将扫描仪安装好,接下来从开始菜单启动“Microsoft Office/ Microsoft Office 工具/Microsoft Office Docume nt Scanning”即可开始扫描。 提示:Office 2003默认安装中并没有这个组件,如果你第一次使用这个功能可能会要求你插入Office2003的光盘进行安装。由于是文字扫描通常我们选择“黑白模式”,点击扫描,开始调用扫描仪自带的驱动进行扫描。这里也要设置为“黑白模式”,建议分辨率为300dpi。扫描完毕后回将图片自动调入Office 2003种另外一个组件“Microsoft Office Document Imaging”中。 点击工具栏中的“使用OCR识别文字”按键,就开始对刚才扫描的文件进行识别了。按下“将文本发送到Word”按键即可将识别出来的文字转换到 Word中去了。如果你要获取部分文字,只需要用鼠标框选所需文字,然后点击鼠标右键选择“将文本发送到Word”就将选中区域的文字发送到Word中了。 此软件还有一小技巧:通过改变选项里的OCR语言,可以更准确的提取文字。例如图片里为全英文,把OCR语言改为“英语”可以确保其准确率,而如果是“默认”则最终出现的可能是乱码~ 还有: 应该说,PDF文档的规范性使得浏览者在阅读上方便了许多,但倘若要从里面提取些资料,实在是麻烦的可以。回忆起当初做毕业设计时规定的英文翻译,痛苦的要命,竟然傻到用Print Screen截取画面到画图板,再回粘到word中,够白了:(最近连做几份商务标书,从Honeywell本部获取的业绩资料全部是英文版的PDF,为了不再被折磨,花费了一个晚上的时间研究PDF和Word文件的转换,找到下面2种方法,出于无产阶级所谓的同甘共苦之心,共享下:)

广外日语自考应试经验谈

广外日语自考基础科段/本科应试经验谈 日语基础科段 00605基础日语一(95分):2010.10月 题型:可以说是和日语等级考的体型基本一样,语法,固定搭配,助词等之类,均为选择题。只不过是阅读题里多了2小题日译中,或中译日的题目罢了 难度:2-3级之间 我的学习方法:没看书,直接考 00606日语基础2(82分):2011,4月 题型:均为选择题,题型基本和等级考试类似,全卷为140道选择题。第一大题的10分的选择题是指定教材里的句子,题型为单词的平假名选择,60分题型包括句法,副词,助词,敬语,形式名词,助,助动词,常用短语。剩余30分是阅读题,一共分成4篇阅读。 难度:2级 我的学习方法:没看指定教材,直接考试 00843日语阅读一(95分):2010.10月 题型:2篇中长篇阅读,并且有一篇阅读出自于指定教材{阅读一}里,均为选择题,并且指定教材里出的那个阅读里有2小题日译中,或中译日 难度:2-3级之间 我的学习方法:没看书,直接考 00844日语阅读二(85分):2010.10月 题型:3篇中长篇阅读,没有一篇是出自于指定教材,也没有日译中,或中译日的体型,均为选择题。 难度:相当于2级里边的短篇阅读难度 我的学习方法:没看书,直接考

00608日本国概括(60分):2010.10月 题型:题型均为日语的选择题,至于题型,可参考《日本概况复习指导与强化训练》,和考试题型一样,可以在百度文库里找到。建议背熟指定教材里的年代表和地图 难度: 我的学习方法:没看指定教材,背《日本概况复习指导与强化训练》 00607日语语法(89分): 题型:考核点和难度基本和基础日语2相同。全卷为100道选择题。 难度:2级 我的学习方法:没看指定教材,直接考试 11600日语综合技能(83分): 题型:一:2篇阅读,阅读题的各个小题大部分为语法题二:中日互译各10题三:400-600字的作文 难度:2-3级 我的学习方法:没看指定教材,直接考试 03706思想道德与`````(76分): 题型:参考统考试卷 难度: 我的学习方法:没看指定教材,背熟《一考通》里的选择题,在网上找100题左右的论述题进行背诵 03707毛邓三(68分): 题型:参考统考试卷 难度: 我的学习方法:没看指定教材,背熟《一考通》里的选择题,在网上找100题左右的论述题进行背诵
