
Journal of Machine Learning Research 7 (2006) 2369-2397    Submitted 2/05; Revised 10/06; Published 11/06

Learning Parts-Based Representations of Data

David A. Ross
Richard S. Zemel
Department of Computer Science
University of Toronto
6 King's College Road
Toronto, Ontario M5S 3H5, CANADA

Editor: Pietro Perona

Abstract

Many perceptual models and theories hinge on treating objects as a collection of constituent parts. When applying these approaches to data, a fundamental problem arises: how can we determine what are the parts? We attack this problem using learning, proposing a form of generative latent factor model, in which each data dimension is allowed to select a different factor or part as its explanation. This approach permits a range of variations that posit different models for the appearance of a part. Here we provide the details for two such models: a discrete and a continuous one. Further, we show that this latent factor model can be extended hierarchically to account for correlations between the appearances of different parts. This permits modeling of data consisting of multiple categories, and learning these categories simultaneously with the parts when they are unobserved. Experiments demonstrate the ability to learn parts-based representations, and categories, of facial images and user-preference data.

Keywords: parts, unsupervised learning, latent factor models, collaborative filtering, hierarchical learning

1. Introduction

Many collections of data exhibit a common underlying structure: they consist of a number of parts or factors, each with a range of possible states. When data are represented as vectors, parts manifest themselves as subsets of the data dimensions that take on values in a coordinated fashion. In the domain of digital images, these parts may correspond to the intuitive notion of the component parts of objects, such as the arms, legs, torso, and head of the human body. Prominent theories of computational vision, such as Biederman's Recognition-by-Components (Biederman, 1987), advocate the suitability of a parts-based approach for recognition in both humans and machines. Recognizing an object by first recognizing its constituent parts, then validating their geometric configuration, has several advantages:

1. Highly articulate objects, such as the human body, are able to appear in a wide range of configurations. It would be difficult to learn a holistic model capturing all of these variants.

2. Objects which are partially occluded can be identified as long as some of their parts are visible.

3. The appearances of certain parts may vary less under a change in pose than the appearance of the whole object. This can result in detectors which, for example, are more robust to rotations of the target object.

4. New examples from an object class may be recognized as simply a novel combination of familiar parts. For example, a parts-based face detection system could generalize to detect faces with both beards and sunglasses, having been trained only on faces containing one, but not both, of these features.

The principal difficulty in creating such systems is determining which parts should be used, and identifying examples of these parts in the training data.

In the part-based detectors created by Mohan et al. (2001) and Heisele et al. (2000), parts were chosen by the experimenters based on intuition, and the component-detectors—support vector machines—were trained on image subwindows containing only the part in question. Obtaining these subwindows required that they be manually extracted from hundreds or thousands of training images. In contrast, the parts-based detector created by Weber et al. (2000) proposed a way to automate this process. During training of the geometric model, parts were selected from an initial set of candidates to include only those which lead to the highest detection performance. The resulting detector relied on a very small number of parts (e.g., 3) corresponding to very small local features. Unlike the SVMs, which were trained on a range of appearances of the part, each of these part-detectors could identify only a single fixed appearance.

Parts-based representations of data can also be learned in an entirely unsupervised fashion. These parts can be used for subsequent supervised learning, but the models constructed can also be valuable on their own. A parts-based model provides an efficient, distributed representation, and can aid in the discovery of causal structure within data. For example, a model consisting of K parts, each with J possible states, can represent J^K different objects. Inferring the state of each part can be done efficiently, as each part depends only on a fraction of the data dimensions.

These computational considerations make parts-based models particularly suitable for modeling high-dimensional data such as user preferences in collaborative filtering problems. In this setting, each data vector contains ratings given by a human subject to a number of items, such as books or movies. Typically there are thousands of unique items, but for each user we can only observe ratings for a small fraction of them. The goal in collaborative filtering is to make accurate predictions of the unobserved ratings. Parts can be formed from groups of related items, and the states of a part correspond to different attitudes towards the items. Unsupervised learning of a parts-based model allows us to learn the relationships between items, which allows for efficient online and active learning.

Here we propose a probabilistic generative approach to learning parts-based representations of high-dimensional data. Our key assumption is that the dimensions of the data can be separated into several disjoint subsets, or factors, which take on values independently of each other. Each factor has a range of possible states or appearances, and we investigate two ways of modeling this variability. First we address the case in which each factor has a small number of discrete states, and model it using a vector quantizer. In some situations, however, continually-varying descriptions of parts are more suitable. Thus, in our second approach we model part-appearances using factor analysis. Given a set of training examples, our approach learns the association of data dimensions with factors, as well as the states of each factor. Inference and learning are carried out efficiently via variational algorithms. The general approach, as well as details for the models, are given in Section 2. Experiments showing parts-based representations learned for real data sets follow in Section 3.

Although we initially assume that parts take on states independently, clearly in real-world situations there are dependencies. For example, consider the case of modeling images of human faces. A model could be constructed representing the various parts of the face (eyes, nose, mouth, etc.), and the various appearances of each part. If one part were to select an appearance with high pixel intensities, due to lighting conditions or skin tone, then it seems likely that the other parts should appear similarly bright. Realizing this, in Section 4 we propose a method of learning these dependencies between part selections hierarchically, by introducing an additional higher-level cause, or 'class' variable, on which state selections for the factors are conditioned. This allows us to model different categories of data using the same vocabulary of parts, and to induce the categories when they are not available in advance. We conclude with a comparison to related methods, and a final discussion, in Sections 5 and 6.

2. An Approach to Learning Parts-Based Models

We approach the problem of learning parts by posing it as a stochastic generative model. We assume that there are K factors, each a probabilistic model for the range of appearances of one of the parts. To generate an observed data vector of D dimensions, x ∈ ℝ^D, we stochastically select one state for each factor, and one factor for each data dimension, x_d. The first selection allows each part to independently choose its appearance, while the second dictates how the parts combine to produce the observed data vector.

This approach differs from a traditional mixture model in that each dimension of the data is generated by a different linear combination of the factors. Rather than clustering the data vectors based on which mixture component gives the highest probability, we are grouping the data dimensions based on which part provides the best explanation.

The selections of factors for each dimension are represented as binary latent variables, R = {r_{dk}}, for d = 1...D, k = 1...K. The variable r_{dk} = 1 if and only if factor k has been selected for data dimension d. These variables can be described equivalently as multinomials, r_d ∈ 1...K, and are drawn according to their respective prior distributions, P(r_{dk}) = a_{dk}. The choice of state for each factor is also a latent variable, which we will represent by s_k. Using θ_k to represent the parameters of the k-th factor, we arrive at the following complete likelihood function:

    P(x, R, S | θ) = ∏_{d,k} a_{dk}^{r_{dk}} ∏_{k} P(s_k) ∏_{d,k} P(x_d | θ_k, s_k)^{r_{dk}}.    (1)

This probability model is depicted graphically in Figure 1.

Figure 1: Graphical representation of the parts-based learning model. We let r_{d=1} represent all the variables r_{d=1,k}, which together select a factor for x_1. Similarly, s_{k=1} selects a state for factor 1. The plates depict repetitions across the D input dimensions and the K factors. To extend this model to multiple data cases, we would include an additional plate over r, x, and s.

As described thus far, the approach is independent of the particular choice of model used for each of the factors. We now provide details for two particular choices: a discrete model of factors, vector quantization; and a continuous model, factor analysis.

2.1 Multiple Cause Vector Quantization

In Multiple Cause Vector Quantization, first proposed in Ross and Zemel (2003), we assume that each part has a small number of appearances, and model them using a vector quantizer (VQ) of J possible states. To generate an observed data example, we stochastically select one state for each VQ, and, as described above, one VQ for each dimension. Given these selections, a single state from a single VQ determines the value of each data dimension x_d.

As before, we represent the selections using binary latent variables, S = {s_{kj}}, for k = 1...K, j = 1...J, where s_{kj} = 1 if and only if state j is selected for VQ k. Again we introduce prior selection probabilities P(s_{kj}) = b_{kj}, with ∑_j b_{kj} = 1.

Assuming each VQ state specifies the mean as well as the standard deviation of a Gaussian distribution, and the noise in the data dimensions is conditionally independent, we have (where θ_k = {μ_{dkj}, σ_{dkj}}, and N is the Gaussian pdf)

    P(x | R, S, θ) = ∏_{d,k,j} N(x_d; μ_{dkj}, σ²_{dkj})^{r_{dk} s_{kj}}.

The resulting model can be thought of as a Gaussian mixture model over J × K possible states for each data dimension x_d. The single state kj is selected if s_{kj} r_{dk} = 1. Note that this selection has two components. The selection in the j component is made jointly for the different data dimensions, and in the k component it is made independently for each dimension.
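To make the generative process concrete, the following sketch samples a single data vector from an MCVQ model. It is an illustrative rendering in numpy under our own variable names (a, b, mu, sigma mirror a_{dk}, b_{kj}, μ_{dkj}, σ_{dkj}), not the authors' released code.

```python
import numpy as np

def sample_mcvq(a, b, mu, sigma, rng=None):
    """Draw one data vector x from an MCVQ model.
    a     : (D, K) priors a_dk over factor selection, rows sum to 1
    b     : (K, J) priors b_kj over state selection, rows sum to 1
    mu    : (D, K, J) Gaussian means mu_dkj
    sigma : (D, K, J) Gaussian standard deviations sigma_dkj
    """
    rng = rng or np.random.default_rng()
    D, K = a.shape
    J = b.shape[1]
    s = np.array([rng.choice(J, p=b[k]) for k in range(K)])  # one state per VQ
    r = np.array([rng.choice(K, p=a[d]) for d in range(D)])  # one VQ per dimension
    d_idx = np.arange(D)
    # x_d is Gaussian with the parameters of the selected (VQ, state) pair.
    x = rng.normal(mu[d_idx, r, s[r]], sigma[d_idx, r, s[r]])
    return x, r, s
```

The state selection s is shared by every dimension that picks VQ k, while the factor selection r is made per dimension, mirroring the two components of the selection described above.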

2.1.1 Learning and Inference

The joint distribution over the observed vector x and the latent variables is

    P(x, R, S | θ) = P(R | θ) P(S | θ) P(x | R, S, θ)
                   = ∏_{d,k} a_{dk}^{r_{dk}} ∏_{k,j} b_{kj}^{s_{kj}} ∏_{d,k,j} N(x_d; θ_k)^{r_{dk} s_{kj}}.

Given an input x, the posterior distribution over the latent variables, P(R, S | x, θ), cannot tractably be computed, since all the latent variables become dependent.

We apply a variational EM algorithm to learn the parameters θ, and infer latent variables given observations. For a given observation, we could approximate the posterior distribution using a factored distribution, where g and m are variational parameters related to r and s respectively:

    Q_0(R, S | x, θ) = ∏_{d,k} g_{dk}^{r_{dk}} ∏_{k,j} m_{kj}^{s_{kj}}.    (2)

The model is learned by optimizing the following objective function (Neal and Hinton, 1998), also known as the variational free energy:

    F(Q_0, θ) = E_{Q_0}[log P(x, R, S | θ) − log Q_0(R, S | x, θ)]
              = E_{Q_0}[−∑_{d,k} r_{dk} log(g_{dk}/a_{dk}) − ∑_{k,j} s_{kj} log(m_{kj}/b_{kj}) + ∑_{d,k,j} r_{dk} s_{kj} log N(x_d; θ)]
              = −∑_{d,k} g_{dk} log(g_{dk}/a_{dk}) − ∑_{k,j} m_{kj} log(m_{kj}/b_{kj}) − ∑_{d,k,j} g_{dk} m_{kj} ε_{dkj},

where ε_{dkj} = log σ_{dkj} + (x_d − μ_{dkj})² / (2σ²_{dkj}). The objective function F is a lower bound on the log likelihood of generating the observations, given the particular model parameters. The variational EM algorithm improves this bound by iteratively maximizing F with respect to Q_0 (E-step) and to θ (M-step).

Extending this to handle multiple observations—the columns of X = [x^1 ... x^C]—we must now consider approximating the posterior P(R, S | X, θ), where R = {r^c_{dk}} and S = {s^c_{kj}} are the latent selections for all training cases c. Our aim is to learn models which have a posterior distribution over factor selections for each data dimension that is consistent across all data (that is to say, regardless of the data case, each x_d will typically be associated with the same part or parts). Thus, in the variational posterior we constrain the parameters {g_{dk}} to be the same across all observations x^c, c = 1...C. In this general case, the variational approximation to the posterior becomes (cf. Equation (2))

    Q(R, S | X, θ) = ∏_{c,d,k} g_{dk}^{r^c_{dk}} ∏_{c,k,j} (m^c_{kj})^{s^c_{kj}}.    (3)

It is important to point out that this choice of variational approximation is somewhat unorthodox; nonetheless, it is perfectly valid and has produced good results in practice. A comparison to more conventional alternatives appears in Appendix A.

Under this formulation, only the {m^c_{kj}} parameters are updated during the E step for each observation x^c:

    m^c_{kj} = b_{kj} exp(−∑_d g_{dk} ε^c_{dkj}) / ∑_{ρ=1}^{J} b_{kρ} exp(−∑_d g_{dk} ε^c_{dkρ}).

The M step updates the parameters, μ and σ, which relate each latent state kj to each input dimension d, the parameters of Q related to factor selection {g_{dk}}, and the priors {a_{dk}} and {b_{kj}}:

    g_{dk} = a_{dk} exp(−(1/C) ∑_{c,j} m^c_{kj} ε^c_{dkj}) / ∑_{ρ=1}^{K} a_{dρ} exp(−(1/C) ∑_{c,j} m^c_{ρj} ε^c_{dρj}),    (4)

    μ_{dkj} = ∑_c m^c_{kj} x^c_d / ∑_c m^c_{kj},        σ²_{dkj} = ∑_c m^c_{kj} (x^c_d − μ_{dkj})² / ∑_c m^c_{kj},

    a_{dk} = g_{dk},        b_{kj} = (1/C) ∑_c m^c_{kj}.

As can be seen from the update equations, an iteration of EM learning for MCVQ has computational complexity linear in each of C, D, J, and K.
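The update equations above translate directly into array operations. The following is a minimal numpy sketch of one variational EM iteration for MCVQ; the variable names and the explicit ε tensor are our own choices, and the log-domain and convergence safeguards of a production implementation are omitted.

```python
import numpy as np

def mcvq_em_step(X, a, b, mu, sigma, g):
    """One variational EM iteration for MCVQ.
    X: (C, D) data; a: (D, K); b: (K, J); mu, sigma: (D, K, J); g: (D, K)."""
    C, D = X.shape
    K, J = b.shape
    # eps[c, d, k, j] = log sigma_dkj + (x^c_d - mu_dkj)^2 / (2 sigma_dkj^2)
    diff = X[:, :, None, None] - mu[None]                      # (C, D, K, J)
    eps = np.log(sigma)[None] + diff ** 2 / (2 * sigma[None] ** 2)

    # E step: m^c_kj proportional to b_kj exp(-sum_d g_dk eps^c_dkj), normalized over j.
    log_m = np.log(b)[None] - np.einsum('dk,cdkj->ckj', g, eps)
    m = np.exp(log_m - log_m.max(axis=2, keepdims=True))
    m /= m.sum(axis=2, keepdims=True)                          # (C, K, J)

    # M step: g_dk proportional to a_dk exp(-(1/C) sum_{c,j} m^c_kj eps^c_dkj), normalized over k.
    log_g = np.log(a) - np.einsum('ckj,cdkj->dk', m, eps) / C
    g = np.exp(log_g - log_g.max(axis=1, keepdims=True))
    g /= g.sum(axis=1, keepdims=True)

    w = m.sum(axis=0)                                          # (K, J) state responsibilities
    mu = np.einsum('ckj,cd->dkj', m, X) / w[None]
    var = np.einsum('ckj,cdkj->dkj', m, (X[:, :, None, None] - mu[None]) ** 2) / w[None]
    sigma = np.sqrt(var)

    a_new, b_new = g, w / C
    return a_new, b_new, mu, sigma, g, m
```

Because m is summed over cases and ε over dimensions, the cost of each iteration is linear in C, D, J, and K, matching the complexity statement above.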

A variational approximation is just one of a number of possible approaches to performing the intractable inference (E) step in MCVQ. One alternative, known as Monte Carlo EM, is to approximate the posterior with a set of samples {R^n, S^n}_{n=1}^{N} drawn from the true posterior P(R, S | X, θ) via Gibbs sampling. The M step now becomes a maximization of the approximate likelihood (1/N) ∑_n P(X, R^n, S^n | θ) with respect to the model parameters θ. Note that the intractable marginalization over {r^c_{dk}, s^c_{kj}} has been replaced with the less-costly sum over N samples. Details of this approach for a related model can be found in Ghahramani (1995), and a general description in Andrieu et al. (2003).
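For concreteness, here is a sketch of one Gibbs sweep that a Monte Carlo E-step of this kind could use for a single data case. The conditionals follow from the complete likelihood in Equation (1); the code itself is our illustrative assumption, not the procedure of Ghahramani (1995).

```python
import numpy as np

def gibbs_sweep(x, a, b, mu, sigma, r, s, rng=None):
    """One Gibbs sweep over the MCVQ latent selections for a single case x.
    x: (D,); a: (D, K); b: (K, J); mu, sigma: (D, K, J);
    r: (D,) current VQ per dimension; s: (K,) current state per VQ."""
    rng = rng or np.random.default_rng()
    D, K = a.shape
    J = b.shape[1]
    # log N(x_d; mu_dkj, sigma_dkj) up to an additive constant, for all (d, k, j).
    log_norm = -np.log(sigma) - (x[:, None, None] - mu) ** 2 / (2 * sigma ** 2)

    # Resample r_d | s, x:  p(r_d = k) proportional to a_dk N(x_d; mu_{d,k,s_k}, sigma_{d,k,s_k})
    for d in range(D):
        logp = np.log(a[d]) + log_norm[d, np.arange(K), s]
        p = np.exp(logp - logp.max()); p /= p.sum()
        r[d] = rng.choice(K, p=p)

    # Resample s_k | r, x:  p(s_k = j) proportional to b_kj * prod_{d: r_d = k} N(x_d; mu_dkj, sigma_dkj)
    for k in range(K):
        mask = (r == k)
        logp = np.log(b[k]) + log_norm[mask, k, :].sum(axis=0)
        p = np.exp(logp - logp.max()); p /= p.sum()
        s[k] = rng.choice(J, p=p)
    return r, s
```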

2.2 Multiple Cause Factor Analysis

In MCVQ, each factor is modeled as a set of basis vectors, one of which is chosen at a time when generating data. A more general approach is to allow data to be generated by arbitrary linear combinations of the basis vectors in each factor. This extension (with the appropriate choice of prior distribution) amounts to modeling the range of appearances for each part with a factor analyzer.

A factor analysis (FA) model (e.g., Ghahramani and Hinton, 1996) proposes that the data vectors come from a low-dimensional linear subspace, represented by the basis vectors of the factor loading matrix, Λ ∈ ℝ^{D×J}, and an offset ρ ∈ ℝ^D from the origin. A data vector is produced by taking a linear combination s ∈ ℝ^J of the columns of Λ. The linear combination is treated as an unobserved latent variable, with a standard Gaussian prior P(s) = N(s; 0, I). To this is added zero-mean Gaussian noise, independent along each dimension. The probability model is

    P(x, s | θ) = N(x; Λs + ρ, Ψ) N(s; 0, I)    (5)

where Ψ is a D × D diagonal covariance matrix.

As with MCVQ, we assume that the data contains K parts and model each using a factor analyzer θ_k = (Λ^k, ρ^k, Ψ^k). A data vector is again generated by stochastically selecting one state s_k per factor k, and choosing one factor per data dimension. Under this model, factor analyzer k proposes that the value of x_d has a Gaussian distribution centered at Λ^k_d s_k + ρ_{dk}, with variance Ψ^k_d (where Λ^k_d indicates the d-th row of factor loading matrix k, and Ψ^k_d the d-th entry on the diagonal of Ψ^k). Thus the likelihood is

    P(x | R, S, θ) = ∏_{d,k} N(x_d; Λ^k_d s_k + ρ_{dk}, Ψ^k_d)^{r_{dk}}.

2.2.1 Learning and Inference

Again this model produces an intractable posterior over latent variables S and R, so we resort to a variational approximation:

    Q(R, S | X, θ) = ∏_{c,d,k} g_{dk}^{r^c_{dk}} ∏_{c,k} N(s^c_k; m^c_k, Φ^c_k).

This differs from the approximation used for MCVQ, Equation (3), in that here we assume the state variables s_k have Gaussian posteriors. Thus, in the E step we must now estimate the first and second moments of the posterior over s_k. As before, we also tie the {g_{dk}} parameters to be the same across all data cases. Setting up the objective function and differentiating gives us the following updates for the E-step:¹

    m^c_k = Φ^c_k Λ^{kT} (Ψ^k)^{−1} ((x^c − ρ^k) .∗ g_k),

    Φ^c_k = (Λ^{kT} (g_k/Ψ^k) Λ^k + I)^{−1},        ⟨s^c_k s^{cT}_k⟩ = Φ^c_k + m^c_k m^{cT}_k,

where g_k/Ψ^k is a diagonal matrix with entries g_{dk}/Ψ^k_d. Note that the expression for Φ^c_k is independent of the index over training cases, c; thus we need only have one Φ_k = Φ^c_k, ∀c.

1. The second uncentered moment ⟨s^c_k s^{cT}_k⟩ need not be computed explicitly, since it can be expressed as a combination of the first and second centered moments, m^c_k and Φ^c_k. It is, however, a useful subexpression for computing the M-step updates and likelihood bound.

The M-step involves updating the prior and variational posterior factor-selection parameters, {a_{dk}} and {g_{dk}}, as well as the parameters (Λ^k, ρ^k, Ψ^k) of each factor analyzer. At each iteration the prior, a_{dk} = g_{dk}, is set to the posterior at the previous iteration. The remaining updates are

    g_{dk} ∝ (a_{dk} / |Ψ^k_d|^{1/2}) exp(−(1/(2CΨ^k_d)) ∑_c [(x^c_d − ρ_{dk})² + Λ^k_d ⟨s^c_k s^{cT}_k⟩ Λ^{kT}_d − 2(x^c_d − ρ_{dk}) Λ^k_d m^c_k]),

    Λ^k = (X − ρ^k 1^T) M^T_k (∑_c ⟨s^c_k s^{cT}_k⟩)^{−1},

    ρ^k = (1/C) ∑_c (x^c − Λ^k m^c_k),

    Ψ^k_d = (1/C) ∑_c [(x^c_d − ρ_{dk})² + Λ^k_d ⟨s^c_k s^{cT}_k⟩ Λ^{kT}_d − 2(x^c_d − ρ_{dk}) Λ^k_d m^c_k],

where M_k is a J × C matrix in which the c-th column is m^c_k.
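As a concrete illustration of the E-step above, the following numpy sketch computes the posterior state means m^c_k and the shared covariance Φ_k for one factor analyzer; the names are ours and the M-step is omitted.

```python
import numpy as np

def mcfa_e_step(X, Lam, rho, Psi, g):
    """Variational E-step for one factor analyzer k in MCFA.
    X: (C, D) data; Lam: (D, J) loading matrix Lambda^k; rho: (D,) offset rho^k;
    Psi: (D,) diagonal noise variances Psi^k; g: (D,) selection posteriors g_dk.
    Returns posterior means M (C, J) and the shared covariance Phi (J, J)."""
    J = Lam.shape[1]
    # Phi_k = (Lambda^T diag(g/Psi) Lambda + I)^{-1}, identical for every data case.
    Phi = np.linalg.inv(Lam.T @ (Lam * (g / Psi)[:, None]) + np.eye(J))
    # m^c_k = Phi_k Lambda^T Psi^{-1} ((x^c - rho) .* g), computed for all cases at once.
    M = ((X - rho) * g / Psi) @ Lam @ Phi.T
    # <s^c_k s^{cT}_k> for case c would be Phi + M[c][:, None] * M[c][None, :].
    return M, Phi
```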

2.3 Related Algorithms

Here we present the details of two related algorithms, principal components analysis (PCA) and non-negative matrix factorization (NMF). A more detailed comparison of these algorithms with MCVQ and MCFA appears below, in Section 5.

The goal of PCA is to learn a factorization of the data matrix X into the product of a coefficient matrix S and an orthogonal basis Λ so that X ≈ ΛS. Typically Λ has fewer columns than rows, so S can be thought of as a reduced-dimensionality approximation of X, and Λ as the key features of the data. Using a squared-error cost function, the optimal solution is to let Λ be the top eigenvectors of the sample covariance matrix, (1/(C−1)) XX^T, and let S = Λ^T X.
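A brief numpy sketch of the PCA factorization just described (assuming, as is conventional, that X has already been mean-centred); this is illustrative rather than a particular library routine.

```python
import numpy as np

def pca_factorization(X, J):
    """PCA as described above: X ~= Lam @ S, with Lam the top-J eigenvectors of
    the sample covariance and S = Lam^T X.  X is D x C (one case per column),
    assumed already mean-centred."""
    D, C = X.shape
    cov = X @ X.T / (C - 1)                   # sample covariance, D x D
    _, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    Lam = eigvecs[:, ::-1][:, :J]             # top-J eigenvectors as columns
    S = Lam.T @ X                             # reduced-dimensionality coefficients
    return Lam, S
```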

PCA can also be posed as a probabilistic generative model, closely related to factor analysis (Roweis, 1997; Tipping and Bishop, 1999). In fact, probabilistic PCA proposes the same generative model, Equation (5), except that the noise covariance, Ψ, is restricted to be a scalar times the identity matrix: Ψ = σ²I.

Non-negative matrix factorization (Lee and Seung, 1999) also learns a factorization X ≈ WH of the data matrix, but includes the additional restriction that X, W, and H must be entirely non-negative. By allowing only additive combinations of a non-negative basis, Lee & Seung propose to obtain basis vectors that correspond to the intuitive parts of the data. Instead of squared error, NMF seeks to minimize the divergence

    D(X ‖ WH) = ∑_{d,c} [(WH)_{dc} − x^c_d log(WH)_{dc}],

which is the negative log-probability of the data (up to terms constant in W and H), assuming a Poisson density function with mean WH. A local minimum of the divergence is obtained by iterating the following multiplicative updates:

    w_{dj} ← w_{dj} ∑_c (x^c_d / (WH)_{dc}) h_{jc},        w_{dj} ← w_{dj} / ∑_d w_{dj},        h_{jc} ← h_{jc} ∑_d w_{dj} (x^c_d / (WH)_{dc}).

Despite the probabilistic interpretation, X ~ Poisson(WH), NMF is not a proper probabilistic generative model, since it does not specify a prior distribution over the latent variable H. Thus NMF does not specify how new data could be generated from a learned basis.
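The three multiplicative updates above can be written compactly in numpy; the small eps guard and the fixed iteration count are our own additions for numerical safety, not part of Lee and Seung's description.

```python
import numpy as np

def nmf_updates(X, W, H, n_iters=100, eps=1e-9):
    """Multiplicative updates for the divergence objective described above.
    X: (D, C) non-negative data; W: (D, J); H: (J, C). Updates W and H in place."""
    for _ in range(n_iters):
        WH = W @ H + eps
        W *= (X / WH) @ H.T                 # w_dj <- w_dj * sum_c (x_dc / (WH)_dc) h_jc
        W /= W.sum(axis=0, keepdims=True)   # w_dj <- w_dj / sum_d w_dj
        WH = W @ H + eps
        H *= W.T @ (X / WH)                 # h_jc <- h_jc * sum_d w_dj (x_dc / (WH)_dc)
    return W, H
```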

Like the above methods, MCVQ and MCFA can be viewed as matrix factorization methods, where the left-hand matrix is formed from the {g_{dk}} and the collection of VQ/FA basis vectors, while the right-hand matrix is comprised of the latent variables {m^c_{kj}}. This view highlights the key contrast between the assumptions about the data embodied in these earlier models, as opposed to the proposed model. Here the data is viewed as a concatenation of components or parts, corresponding to particular subsets of data dimensions, each of which is modeled as a convex combination of appearances.

3. Experiments

In this section we examine the ability of MCVQ and MCFA to learn parts-based representations of data from two different problem domains.

We begin by modeling sets of digital images, in this case images of human faces. The parts learned are fixed subsets of the data dimensions (pixels), corresponding to fixed regions of the images, that closely resemble intuitive notions of the parts of faces. The ability to learn parts is robust to partial occlusions in the training images.

The application of MCVQ and MCFA to image data assumes that the images are normalized, that is, that the head is in a similar pose in each image, and aligned with respect to position and scale. This constraint is standard for learning methods that attempt to learn visual features beyond low level edges and corners, though, if desired, the model could be extended to perform automatic alignment of images (Jojic and Caspi, 2004). While normalization may require a preprocessing step for image applications, in many other types of applications the input representation is more stable. For instance, in collaborative filtering each data vector consists of a single user's ratings for a fixed set of items; each data dimension always corresponds to the same item.

Thus, we also explore the application of MCVQ and MCFA to the problem of predicting ratings of unseen movies, given observed ratings for a large set of users. Here parts correspond to subsets of the movies which have correlated ratings.

Code implementing MCVQ and MCFA in MATLAB, as used for the following experiments, can be obtained at https://www.doczj.com/doc/a18779032.html,/~dross/mcvq/.

3.1 Face Images

The face data set consisted of 2429 images from the CBCL Face database #1 (MIT-CBCL, 2000). Each image contained a single frontal or near-frontal face, depicted in 19×19 pixel grayscale. The images were histogram equalized, and pixel values rescaled to lie in [−2, 2]. Sample training images are shown in Figure 2.

Using these images as input, we trained MCVQ and MCFA models, each containing K = 6 factors. The MCVQ model with J = 10 states converged in 120 iterations of Monte Carlo EM, while the MCFA model with J = 4 basis vectors converged in 15 variational EM iterations. In practice, the Monte Carlo approach to inference leads to better local maxima of the objective function, and better parts-based models of the data. For both MCVQ and MCFA, prior probabilities were initialized to uniform, and states/basis vectors to randomly selected training images. The learned parts-based decompositions are depicted in Figures 3 and 4.

Figure 2: Sample training images from the CBCL Face database #1.

Figure 3: The parts-based model of faces learned by MCVQ. On the left are plots showing the posterior probability with which each pixel selects the indicated VQ. On the right are the means, for each state of each VQ, masked by the aforementioned selection probabilities (μ_{kj} .∗ g_k).

On the left of each figure is a plot of posterior probabilities {g_{dk}} that each pixel selects the indicated factor as its explanation. White indicates high probability of selection, and black low. As can be seen, each g_k can be thought of as a mask indicating which pixels 'belong' to factor k.

In the noise-free case, each image generated by one of these models is a sum of the contributions from the various factors, where each contribution is 'masked' by the probability of pixel selection. For example, in Figure 3 the probability of selecting VQ #4 is non-zero only around the nose, thus the contribution of this VQ to the remaining areas of the face in any generated image is negligible.

On the right of each figure is a plot of the 10 states/4 basis vectors for each factor. Each image has been masked (via element-wise multiplication) with the corresponding g_k. For MCVQ this is (μ_{dkj} .∗ g_{dk}), and for MCFA this is (Λ^k_{dj} .∗ g_{dk}). In the case of MCVQ, each state is an alternative appearance for the corresponding part. For example, VQ #4 gives 10 alternative noses to select from when generating a face. These range from thin to wide, with shadows of the left or right, and with light or dark upper-lips (possibly corresponding to moustaches). In MCFA, on the other hand, the basis vectors are not discrete alternatives. Rather these vectors are combined via arbitrary linear combinations to generate an appearance of a part.

Figure 4: The parts-based model of faces learned by MCFA.

To test the fidelity of the learned representations, the trained models were used to probabilistically classify held-out test images as either face or non-face. The test data consisted of the 472 face images and 472 randomly-selected non-face images from the CBCL database test set. To classify test images, we evaluated their probability under the model, labeling an image as face if its probability exceeded a threshold, and non-face if it did not. For each model the threshold was chosen to maximize classification performance. Evaluating the probability of a data vector under MCVQ and MCFA can be difficult, since it requires marginalizing over all possible values of the latent variables. Thus, in practice, we use a Monte Carlo approximation, obtained by averaging over a sample of possible selections. The results of this experiment are shown in Table 1. In addition to MCVQ and MCFA, we performed the same experiment with four other probabilistic models: probabilistic principal components analysis (PPCA), mixture of Gaussians, a single Gaussian distribution, and cooperative vector quantization (Hinton and Zemel, 1994) (which is described further in Section 5). It is important to note that in this experiment an increase in model size (using more VQs, states, or basis vectors) does not necessarily improve the ability to discriminate faces from non-faces. For example, PPCA achieves its highest accuracy when using only three basis vectors, and a mixture of Gaussians with 60 states outperforms one with 84 states. As can be seen, the highest performance is achieved by an MCVQ model with 6 VQs and 14 states per VQ.
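One simple way to realize the Monte Carlo approximation mentioned above is to draw the selections from the learned priors and average the resulting conditional likelihoods; the sketch below does this for MCVQ. Sampling from the priors is our assumption about the averaging distribution, and the sample count is arbitrary.

```python
import numpy as np

def mcvq_log_prob(x, a, b, mu, sigma, n_samples=500, rng=None):
    """Monte Carlo estimate of log p(x) under an MCVQ model, by averaging
    p(x | R, S) over selections R, S sampled from the priors a and b."""
    rng = rng or np.random.default_rng()
    D, K = a.shape
    J = b.shape[1]
    log_px = np.zeros(n_samples)
    for n in range(n_samples):
        s = np.array([rng.choice(J, p=b[k]) for k in range(K)])   # one state per VQ
        r = np.array([rng.choice(K, p=a[d]) for d in range(D)])   # one VQ per dimension
        d_idx = np.arange(D)
        m, sd = mu[d_idx, r, s[r]], sigma[d_idx, r, s[r]]
        log_px[n] = np.sum(-np.log(sd) - 0.5 * np.log(2 * np.pi)
                           - (x - m) ** 2 / (2 * sd ** 2))
    # log-mean-exp of the per-sample conditional likelihoods
    return np.logaddexp.reduce(log_px) - np.log(n_samples)
```

Classification then amounts to comparing this estimate against the chosen threshold.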

Another way of validating image models is to use them to generate new examples and see how closely they resemble images from the observed data—in this case how much the generated images resemble actual faces. Examples of images generated from MCVQ and MCFA are shown in Figure 5.

Model                                                      Accuracy
Mixture of Gaussians (60 states)                           0.8072
MCVQ (6 VQs, 10 states each)                               0.8030
Probabilistic PCA (3 components)                           0.7903
Gaussian distribution (diagonal covariance)                0.7871
Mixture of Gaussians (84 states)                           0.7680
MCFA (6 FAs, 3 basis vectors each)                         0.7415
MCFA (6 FAs, 4 basis vectors each)                         0.7267
Cooperative Vector Quantization (6 VQs, 10 states each)    0.6208

Table 1: Results of classifying test images as face or non-face, by computing their probabilities under trained generative models of faces.

Figure 5: Synthetic images generated using MCVQ (left) and MCFA (right).

The parts-based models learned by MCVQ and MCFA differ from those learned by NMF and PCA, as depicted in Figure 6, in two important ways. First, the basis is sparse—each factor contributes to only a limited region of the image, whereas in NMF and PCA basis vectors include more global effects. Secondly, MCVQ and MCFA learn a grouping of vectors into related parts, by explicitly modeling the sparsity via the g_{dk} distributions.

A further point of comparison is that, in images generated by PPCA and NMF, a significant proportion of the pixels in the generated images lie outside the range of values appearing in the training data (7% for PPCA and 4.5% for NMF), requiring that the generated values be thresholded. On the other hand, MCVQ will never generate pixel values outside the observed range. This is a simple result, since each generated image is a convex combination of basis vectors, and each basis vector is a convex combination of data vectors. Although no such guarantee exists for MCFA, in practice pixels it generates are all within-range.

Figure 6: Bases learned by non-negative matrix factorization (left) and probabilistic principal components analysis (right), when trained on the face images.

3.1.1 Partial Occlusion

Parts-based models can also be reliably learned from image data in which the object is partially occluded. As an illustration, we trained an MCVQ model on a data set containing partially-occluded faces. The occlusions were generated by replacing pixels in randomly-selected contiguous regions of each training image with noise. The noise, which covered 1/4 to 1/2 of each image, had the same first- and second-order statistics as the unoccluded pixels. The resulting training set contained 4858 images, half of which were partially occluded.

Examples of the training images, and the resulting model, can be seen in Figure 7. In the model each VQ has learned at least one state containing a blurry region, which corresponds to an occluded view of the respective part (e.g., for VQ 1, the second state from the right). Thus this model is able to generate and reconstruct (recognize) partially-occluded faces.

3.2 Collaborative Filtering

Here we test MCVQ on a collaborative filtering task, using the EachMovie data set, where the input vectors are ratings by viewers of movies, and a given element always corresponds to the same movie. The original data set contains ratings, on a scale from 1 to 6, of a set of 1649 movies, by 74,424 viewers. In order to reduce the sparseness of the data set, since many viewers rated only a few movies, we only included viewers who rated at least 20 movies, and removed unrated movies. The remaining data set, containing 1623 movies and 36,656 viewers, was still very sparse (95.7%).

Figure 7: The model learned by MCVQ on partially-occluded faces. On the left are six representative images from the training data. In the middle are plots of the posterior part selection probability (the g_{dk}'s) for each VQ. On the right are the (unmasked) means for each state of each VQ. Note that each VQ has learned at least one state to represent partial occlusion of the corresponding part.

We evaluated the performance of MCVQ using the framework proposed by Marlin (2004) for comparison of collaborative filtering algorithms. From each viewer in the EachMovie set, a single randomly-chosen rating was held out. The performance of the model at predicting these held-out ratings was evaluated using 3-fold cross validation. In each fold, the model was fit to a training set of 2/3 of the viewers, with the remaining 1/3 viewers retained as a testing set.² This model was used to predict the held-out ratings of the training viewers, a task referred to as weak generalization, as well as the held-out ratings of the testing viewers, called strong generalization. Each prediction was obtained by first inferring the latent states of the factors m_{kj}, then averaging the mean of each state by its probability of selection: ∑_{k,j} g_{dk} m_{kj} μ_{kjd}.

2. This differs slightly from Marlin's approach. He used three randomly sampled training and test sets of 30000 and 5000 viewers respectively. However, his test sets were not disjoint, introducing a potential bias in the error estimate. We, instead, partitioned the data into thirds, using 2/3 of the viewers for training and 1/3 for testing in each of the three folds. The resulting estimates of error should be less biased, but remain directly comparable to Marlin's results.
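The prediction rule quoted above, ∑_{k,j} g_{dk} m_{kj} μ_{kjd}, reduces to a single tensor contraction; a minimal numpy sketch (with our own variable names) follows.

```python
import numpy as np

def predict_ratings(g, m, mu):
    """Predicted rating for every movie d for one viewer:
    p_d = sum_{k,j} g_dk m_kj mu_dkj.
    g: (D, K) movie-to-factor posteriors; m: (K, J) inferred state posteriors
    for this viewer; mu: (D, K, J) per-state rating means."""
    return np.einsum('dk,kj,dkj->d', g, m, mu)
```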

Prediction performance was calculated using normalized mean absolute error (NMAE). Given a set of predictions {p_i}, and the corresponding ratings {r_i}, the NMAE is

    (1/(zn)) ∑_{i=1}^{n} |r_i − p_i|,

where z is a normalizing factor equal to the expectation of the mean absolute error, assuming uniformly distributed p_i's and r_i's (Marlin, 2004).³ For the EachMovie data set this value is z = 35/18 ≈ 1.9444.
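For reference, the NMAE computation is a one-liner; the sketch below uses the EachMovie normalizer z = 35/18 quoted above as its default.

```python
import numpy as np

def nmae(predictions, ratings, z=35.0 / 18.0):
    """Normalized mean absolute error. The default z is the EachMovie value
    quoted above; see footnote 3 for the general integer-scale expression."""
    predictions, ratings = np.asarray(predictions), np.asarray(ratings)
    return np.mean(np.abs(ratings - predictions)) / z
```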

We trained several models on this data set, including MCVQ, MCFA, factor analysis, mixture of Gaussians, as well as a baseline algorithm which, for each movie, simply predicts the mean of the observed ratings for that movie. MCVQ and MCFA were both trained using variational EM, converging in 15 iterations. We selected this approach due to its efficiency, as running each of these algorithms on the large data sets used in these experiments via the Monte Carlo EM approach would require considerable computation time. Note that in the graphical model (Fig. 1), all the observation dimensions are leaves, so a data variable whose value is not specified in a particular observation vector will not play a role in inference or learning. This makes inference and learning with sparse data rapid and efficient in MCVQ and MCFA.

3. For data sets in which the ratings range over the integers, from a minimum of m to a maximum of M, there is a simple expression for z. Let N = M − m be the largest possible absolute error. Then z = N(N+2) / (3(N+1)).

Model                           Weak Generalization    Strong Generalization
MCVQ (K=5, J=12)                0.4896                 0.4942
Gaussian mixture (J=60)         0.4945                 0.4895
Factor Analysis (J=60)          0.5314                 0.5318
MCFA (K=4, J=6)                 0.5502                 0.5503
Predict mean observed rating    0.5621                 0.5621

Table 2: Collaborative filtering performance. For each algorithm we show the prediction error, in terms of NMAE (see text), on held out ratings from the training data (weak generalization) and the test data (strong generalization). K and J give the number of factors and basis vectors per factor respectively.

The results of rating prediction are summarized in Table 2. For each model, we have included the best performance, under the restriction that JK ≤ 60. For comparison, the current state-of-the-art performance on this experiment is 0.4422 weak and 0.4557 strong generalization, by Marlin's User Rating Profile model (Marlin, 2004).

In addition to its utility for rating prediction, the parts-based model learned by MCVQ can also provide key insights regarding the relationships between the movies. These relationships are largely captured by the learned {g_{dk}} parameters. For a movie d, the probability distribution g_d = [g_{d1} ... g_{dK}] indicates the affinity of the movie for each of the K factors. To study these relationships we retrained MCVQ on the EachMovie set, using all available data, again with K = 5 and J = 12.

When trained on the full data set, the average entropy of the g_d distributions was only 0.0475 bits. In other words, for 98% of the movies, at least one factor was consistently selected with a posterior probability exceeding 0.9. The movies in the training set were distributed fairly evenly amongst the available factors. Specifically, using a probability of g_{dk} > 0.9 to indicate strong association, VQs 1 to 5 were assigned 14%, 33%, 9%, 20% and 22% respectively of the movies. Thus MCVQ learned a partitioning of the movies into approximately disjoint subsets.

In the parts-based model, movies with related ratings were associated with the same factor. For example, all seven movies from the Amityville Horror series were assigned to VQ #1. Similarly the three original Star Wars movies, as well as seven of the eight Star Trek movies, were all associated with VQ #5. By examining VQ #5 in more detail we can see that each state of the VQ corresponds to a different attitude towards the associated movies, or 'ratings profile'. In Figure 8 we compare the ratings given to the Star Wars and Star Trek movies by various states of VQ #5. For each state we have plotted the difference between the predicted rating and the mean rating, μ_{dkj} − ∑_{j′} μ_{dkj′}/J, for each of the 10 movies. This score will be large for a particular movie if the state, or profile, gives the movie a much higher rating than it ordinarily receives.

The four states depicted in Figure 8 show a range of possible ratings profiles. State 2 shows a strong preference for all the movies, with each receiving a rating 1-2.5 points higher than usual. In contrast, state 9 shows an equally strong dislike for the movies. State 10 indicates a viewer who is fond of Star Wars, but not Star Trek. Finally, state 11 shows an ambivalent attitude: slight preference for some films, and slight dislike for others.

Star Wars (W1)                          Star Trek III: The Search for Spock (T3)
The Empire Strikes Back (W2)            Star Trek IV: The Voyage Home (T4)
Return of the Jedi (W3)                 Star Trek V: The Final Frontier (T5)
Star Trek: The Motion Picture (T1)      Star Trek VI: The Undiscovered Country (T6)
Star Trek II: The Wrath of Khan (T2)    Star Trek: First Contact (T7)

Figure 8: Ratings profiles learned by MCVQ on the EachMovie data, for Star Wars and Star Trek movies. Each state represents a different attitude towards the movies. Positive scores correspond to above-average ratings for the movie, and negative to below-average.

3.2.1 Active Learning

When collaborative filtering is posed in an online or active setting, the set of available data is continually growing, as viewers are queried and new ratings are observed. When a viewer provides a new rating for an item, the system's beliefs about his preferences must be updated. In generative models, such as we have described, this entails updating the distribution over latent state variables for the viewer. This update is often costly, and can pose a problem when the system must be used interactively. Furthermore, queries of the sort "What is the rating of x_d?" must be generated online, based on the viewer's previous responses, to maximize the value of the information obtained. This value could simply be the system's certainty as to the viewer's preferences, or, more interestingly, its ability to recommend movies that he might enjoy.

The parts-based models we describe significantly simplify both of these computations. By learning a partitioning of the items, a new rating will only affect beliefs about unobserved ratings associated with the same factor. Specifically, only one m_{kj} will be updated for each new rating, and the update will depend only on a fraction of the observations.

Also, since we explicitly model the relationships between items, we can determine how the possible responses to a putative query would affect predictions. Thus, we can formulate a query designed to identify movies with high predicted ratings. More details on a successful application of MCVQ to active collaborative filtering, using these ideas, can be found in Boutilier et al. (2003).

4. Hierarchical Learning

Thus far it has been assumed that the states of each factor are selected independently. Although this assumption has proven useful, it is unrealistic to suppose that, for example, the appearance of the eyes and mouth in a face are entirely uncorrelated. Rather than simply being a violation of our modeling assumptions, these correlations can be viewed as an additional source of information from which we can learn higher-order structure present in the data.

Here we relax the assumption of independence by including an additional higher-level latent variable upon which state selections are conditioned. This variable has two possible interpretations. First, if the higher-level cause is unobserved, it can be learned, inducing a categorization of the data vectors based on their state selections. Secondly, if the variable is observed, it can be treated as side information available during learning. In this case the prior over state selections will be adapted to account for the side information.

We now develop both approaches, and demonstrate their use on a set of images containing different facial expressions.

4.1 Extending the Generative Model

Suppose that each data vector comes from one of N different classes. We assume that for a given data vector the selections of the states for each factor (the s_k's) depend on the class, but that the selections of a factor per data dimension (the r_d's) do not. Specifically, for training case x, we introduce a new multinomial variable y which selects exactly one of the N classes. y can be thought of as an indicator vector y ∈ {0,1}^N, where y_n = 1 if and only if class n has been selected. Using a prior P(y_n = 1) = β_n over y, the complete likelihood takes the following form (cf. Equation (1)):

    P(x, R, S, y | θ) = P(x | R, S, θ) P(R) P(S | y) P(y)
                      = ∏_{d,k} P(x_d | θ_k, s_k)^{r_{dk}} ∏_{d,k} a_{dk}^{r_{dk}} ∏_{n,k,j} b_{nkj}^{s_{kj} y_n} ∏_n β_n^{y_n}.    (6)

The graphical model representation is given in Figure 9.

Note that for each class n there is a different prior distribution over the state selections. We represent these distributions with {b_{nkj}}, ∑_j b_{nkj} = 1, where b_{nkj} is the probability of selecting state j from factor k given class n. Since the model contains N priors over S, one to be selected for each data vector, we can think of this model as incorporating an additional vector quantization or clustering, this time over distributions for S.

Figure 9: Graphical model with additional class variable, y. Note that y can be either observed or unobserved.

If the class variable y corresponds to an observed label for each data vector, then the new model can be thought of as a standard MCVQ or MCFA model, but with the requirement that we learn a different prior over S for each class. On the other hand, if y is unobserved, then the new model learns a mixture over the space of possible priors for S, rather than the single maximum likelihood point estimate learned in standard MCVQ and MCFA.

Below we derive the EM updates for the hierarchical MCVQ model. The derivation of hierarchical MCFA is similar.

4.2 Unsupervised Case

In the unsupervised case, y^c, for each training case c = 1...C, is an unobserved variable, like R^c and S^c. As in standard MCVQ, the posterior P(R, S, y | x, θ) over latent variables cannot tractably be computed. Instead, we use the following factorized variational approximation:

    Q(R, S, Y) = ∏_{c,d,k} g_{dk}^{r^c_{dk}} ∏_{c,n,k,j} (m^c_{nkj})^{s^c_{kj} y^c_n} ∏_{c,n} (z^c_n)^{y^c_n}.

Note that, as in standard MCVQ, we restrict the posterior selections of VQ made for each training case to be identical, that is, {g_{dk}} is independent of c. Using the variational posterior and the likelihood, Equation (6), we obtain the following lower bound on the log-likelihood:

    F = E_Q[log P(X, R, S, Y | θ) − log Q(R, S, Y | X, θ)]
      = E_Q[∑_c log P(x^c | R^c, S^c, y^c, θ) + ∑_c log P(R^c, S^c, y^c) − ∑_c log Q(R^c, S^c, y^c)]
      = −∑_{c,d,k} g_{dk} log(g_{dk}/a_{dk}) − ∑_{c,n,k,j} m^c_{nkj} z^c_n log(m^c_{nkj}/b_{nkj}) − ∑_{c,n} z^c_n log(z^c_n/β_n) − ∑_{c,d,k,j} g_{dk} ∑_n m^c_{nkj} z^c_n ε^c_{dkj} − (CD/2) log(2π).

By differentiating F with respect to each of the parameters and latent variables, and solving for their respective maxima, we obtain the EM updates used for learning the model. The E-step updates for m^c_{nkj} and z^c_n are

    m^c_{nkj} ∝ b_{nkj} exp(−∑_d g_{dk} ε^c_{dkj}),

    z^c_n ∝ β_n exp(∑_{k,j} m^c_{nkj} log(b_{nkj}/m^c_{nkj}) − ∑_{d,k,j} g_{dk} m^c_{nkj} ε^c_{dkj}).

The M-step updates for a_{dk}, g_{dk}, μ_{dkj}, and σ_{dkj} are unchanged from standard MCVQ, except for the substitution of ∑_n (m^c_{nkj} z^c_n) in place of m^c_{kj}, wherever it appears. The updates for β_n and b_{nkj} are

    β_n = (1/C) ∑_c z^c_n,        b_{nkj} = ∑_c m^c_{nkj} z^c_n / ∑_c z^c_n.

A useful interpretation of the latent variable y is that, for a given data vector, it indicates the assignment of that datum to one of N clusters. Specifically, z^c_n can be thought of as the posterior probability that example c belongs to cluster n.
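A numpy sketch of the hierarchical E-step above for a single data case, computing m^c_{nkj} and z^c_n; the variable names and the precomputed ε tensor are our own, and a small constant guards the logarithm.

```python
import numpy as np

def hierarchical_e_step(eps_c, b, beta, g):
    """E-step for one data case in hierarchical MCVQ.
    eps_c: (D, K, J) values eps^c_dkj; b: (N, K, J) class-conditional state
    priors b_nkj; beta: (N,) class prior; g: (D, K) factor-selection posteriors.
    Returns m: (N, K, J) with m^c_nkj, and z: (N,) with z^c_n."""
    # m^c_nkj proportional to b_nkj exp(-sum_d g_dk eps^c_dkj), normalized over j for each (n, k).
    log_w = -np.einsum('dk,dkj->kj', g, eps_c)                   # (K, J)
    log_m = np.log(b) + log_w[None]
    m = np.exp(log_m - log_m.max(axis=2, keepdims=True))
    m /= m.sum(axis=2, keepdims=True)
    # z^c_n proportional to beta_n exp( sum_{k,j} m_nkj log(b_nkj / m_nkj)
    #                                   - sum_{d,k,j} g_dk m_nkj eps^c_dkj )
    log_z = (np.log(beta)
             + np.sum(m * (np.log(b) - np.log(m + 1e-300)), axis=(1, 2))
             + np.einsum('nkj,kj->n', m, log_w))
    z = np.exp(log_z - log_z.max())
    z /= z.sum()
    return m, z
```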

4.3 Supervised Case

In the supervised case, we are given a set of labeled training data (x^c, y^c) for c = 1...C. Note that this case is essentially the same as the unsupervised case—we can obtain the supervised updates by constraining z^c = y^c, ∀c. Since the class is known, we may now drop the subscript n from m^c_{nkj}.

As stated earlier, when the y's are given, we learn a different prior over state selections for each class n:

    b_{nkj} = (1/(Cβ_n)) ∑_c m^c_{kj} y^c_n,

where the prior probability of observing class n, β_n, can be calculated from the training labels:

    β_n = (1/C) ∑_c y^c_n.

The posterior probabilities of state selection for each example c, m^c_{kj}, depend only on the prior corresponding to c's class:

    m^c_{kj} ∝ b_{y^c kj} exp(−∑_d g_{dk} ε^c_{dkj}).

4.4 Experiments

In this section we present experimental results obtained by training the above models on images taken from the AR Face Database (Martinez and Benavente, 1998). This data consists of images of frontal faces of 126 subjects under a number of different conditions. The five conditions used for these experiments were: 1) anger, 2) neutral, 3) scream, 4) smile, and 5) sunglasses.

The data set in its raw form contains faces which, although roughly centered, appear at different locations, angles, and scales. Since the learning algorithms do not attempt to compensate for these differences, we manually aligned each face such that the eyes always appeared in the same location. Next we cropped the images tightly around the face, and subsampled to reduce the size to 29×22 pixels. Finally, we converted the image data to grayscale, with pixel values ranging from −1 to 1. Examples of the preprocessed images can be seen in Figure 10.

The above preprocessing steps are standard when applying unsupervised learning methods, such as NMF, PCA, ICA, etc., to image data.

Figure 10: Examples of preprocessed images from the AR Face Database. These images, from left to right, correspond to the conditions: anger, neutral, scream, smile, and sunglasses.

4.4.1 Supervised Case: Learning Class-Conditional Priors

We first trained a supervised model on the entire data set, with a goal of learning a different prior over state selections for each of the five classes of image (anger, neutral, scream, smile, and sunglasses).

The model consisted of 5 VQs, 10 states each. The image regions that each VQ learned to explain are shown in Figure 11. For each VQ k we have plotted, as a grayscale image, the prior probability of each pixel selecting k (i.e., a_{dk}, d = pixel index). Two regions which we expected to be class discriminative, the mouth and the eyes, were captured by VQs 3 and 4 respectively.

Figure 11: Image regions explained by each VQ. The prior probability of a VQ being selected for each pixel is plotted as a gray value between 0 = black and 1 = white.

The priors over state selection for these VQs varied widely depending on the class of the image being considered. As an example, in Figure 12 we have plotted the prior probability, given the class, of selecting each of the states from VQ 3. In the figure we can see that the prior probabilities closely matched our intuition as to which mouth shapes corresponded to which facial expressions. For example, the first state (top left) appears to depict a smiling mouth. Accordingly, the smile class assigned it the highest prior probability, 0.34, while the other classes each assigned it 0.05 or less. Also, given the scream class, we see that the highest prior probabilities were assigned to the third and fourth states—both widely screaming mouths. States 6 through 9 (second row) range from depicting a neutral to an angry mouth.

Note that for sunglasses, which does not presuppose a mouth shape, the prior showed a preference for the more neutral mouths. The explanation for this is simply that most subjects in the data adopted a neutral expression when wearing sunglasses.

Figure 12: Class-conditional priors over mouth selections. Each image displayed is the mean of a state learned for VQ 3, masked (multiplied) by the prior probability of VQ 3 being selected for that pixel. The bar charts indicate, for each state, the prior probability of it being selected given each of the five class labels. From left to right these are anger (A), neutral (N), scream (S), smile (M), and sunglasses (G). On each chart, the y-axis extends up to a probability of 0.4.


星巴克swot分析

6月21日 85度C VS. 星巴克SWOT分析 星巴克SWOT分析 优势 1.人才流失率低 2.品牌知名度高 3.熟客券的发行 4.产品多样化 5.直营贩售 6.结合周边产品 7.策略联盟 劣势 1.店內座位不足 2.分店分布不均 机会 1.生活水准提高 2.隐藏极大商机 3.第三空间的概念 4.建立电子商务 威胁 1.WTO开放后,陆续有国际品牌进驻 2.传统面包复合式、连锁咖啡馆的经营 星巴克五力分析 1、供应商:休闲风气盛,厂商可将咖啡豆直接批给在家煮咖啡的消费者 2、购买者:消费者意识高涨、资讯透明化(比价方便) 3、同业內部竞争:产品严重抄袭、分店附近必有其他竞争者 4、潜在竞争者:设立咖啡店连锁店无进入障碍、品质渐佳的铝箔包装咖啡 5、替代品:中国茶点、台湾小吃、窜红甚快的日本东洋风...等 85度c市場swot分析: 【Strength优势】 具合作同业优势 产品精致 以价格进行市场区分,平价超值 服务、科技、产品、行销创新,机动性强 加盟管理人性化 【Weakness弱势】 通路品质控制不易 品牌偏好度不足 通路不广 财务能力不健全 85度c的历史资料,在他们的网页上的活动信息左邊的新聞訊息內皆有詳細資料,您可以直接上網站去查閱,皆詳述的非常清楚。

顧客滿意度形成品牌重於產品的行銷模式。你可以在上他們家網站找找看!【Opportunity机会】 勇于變革变革与创新的经营理念 同业策略联盟的发展、弹性空间大 【Threat威胁】 同业竞争对手(怡客、维多伦)门市面对面竞争 加盟店水准不一,品牌形象建立不易 直,间接竞争者不断崛起(壹咖啡、City Café...) 85度跟星巴克是不太相同的零售,星巴客应该比较接近丹堤的咖啡厅,策略形成的部份,可以从 1.产品线的宽度跟特色 2.市场区域与选择 3.垂直整合 4.规模经济 5.地区 6.竞争优势 這6點來做星巴客跟85的区分 可以化一个表,來比较他們有什么不同 內外部的話,內部就从相同产业來分析(有什麼优势跟劣势) 外部的話,一樣是相同产业但卖的东西跟服务不太同,来与其中一个产业做比较(例如星巴客) S(优势):點心精致平价,咖啡便宜,店面设计观感很好...等等 W(劣势):对于消费能力较低的地区点心价格仍然较高,服务人员素质不齐,點心种类变化較少 O(机会):对于点心&咖啡市场仍然只有少数的品牌独占(如:星XX,壹XX...等),它可以透过连锁店的开幕达成高市占率 T(威协):1.台湾人的模仿功力一流,所以必须保持自己的特色做好市场定位 2.消費者的口味变化快速,所以可以借助学者"麥XX"的做法保有主要的點心款式外加上几样周期变化的點心 五力分析 客戶讲价能力(the bargaining power of customers)、 供应商讲价能力(the bargaining power of suppliers)、 新进入者的竞争(the threat of new entrants)、 替代品的威协(the threat of substitute products)、 现有厂商的竞争(The intensity of competitive rivalry)

人教版高中语文必修必背课文

必修1 沁园春·长沙(全文)毛泽东 独立寒秋, 湘江北去, 橘子洲头。 看万山红遍, 层林尽染, 漫江碧透, 百舸争流。 鹰击长空, 鱼翔浅底, 万类霜天竞自由。 怅寥廓, 问苍茫大地, 谁主沉浮。 携来百侣曾游, 忆往昔峥嵘岁月稠。 恰同学少年, 风华正茂, 书生意气, 挥斥方遒。 指点江山, 激扬文字, 粪土当年万户侯。 曾记否, 到中流击水, 浪遏飞舟。 雨巷(全文)戴望舒 撑着油纸伞,独自 彷徨在悠长、悠长 又寂寥的雨巷, 我希望逢着 一个丁香一样地 结着愁怨的姑娘。 她是有 丁香一样的颜色, 丁香一样的芬芳, 丁香一样的忧愁, 在雨中哀怨, 哀怨又彷徨;

她彷徨在这寂寥的雨巷, 撑着油纸伞 像我一样, 像我一样地 默默彳亍着 冷漠、凄清,又惆怅。 她默默地走近, 走近,又投出 太息一般的眼光 她飘过 像梦一般地, 像梦一般地凄婉迷茫。 像梦中飘过 一枝丁香地, 我身旁飘过这个女郎; 她默默地远了,远了, 到了颓圮的篱墙, 走尽这雨巷。 在雨的哀曲里, 消了她的颜色, 散了她的芬芳, 消散了,甚至她的 太息般的眼光 丁香般的惆怅。 撑着油纸伞,独自 彷徨在悠长、悠长 又寂寥的雨巷, 我希望飘过 一个丁香一样地 结着愁怨的姑娘。 再别康桥(全文)徐志摩 轻轻的我走了,正如我轻轻的来;我轻轻的招手,作别西天的云彩。 那河畔的金柳,是夕阳中的新娘;波光里的艳影,在我的心头荡漾。 软泥上的青荇,油油的在水底招摇;

在康河的柔波里,我甘心做一条水草! 那榆荫下的一潭,不是清泉, 是天上虹揉碎在浮藻间,沉淀着彩虹似的梦。 寻梦?撑一支长篙,向青草更青处漫溯, 满载一船星辉,在星辉斑斓里放歌。 但我不能放歌,悄悄是别离的笙箫; 夏虫也为我沉默,沉默是今晚的康桥。 悄悄的我走了,正如我悄悄的来; 我挥一挥衣袖,不带走一片云彩。 记念刘和珍君(二、四节)鲁迅 二 真的猛士,敢于直面惨淡的人生,敢于正视淋漓的鲜血。这是怎样的哀痛者和幸福者?然而造化又常常为庸人设计,以时间的流驶,来洗涤旧迹,仅使留下淡红的血色和微漠的悲哀。在这淡红的血色和微漠的悲哀中,又给人暂得偷生,维持着这似人非人的世界。我不知道这样的世界何时是一个尽头! 我们还在这样的世上活着;我也早觉得有写一点东西的必要了。离三月十八日也已有两星期,忘却的救主快要降临了罢,我正有写一点东西的必要了。 四 我在十八日早晨,才知道上午有群众向执政府请愿的事;下午便得到噩耗,说卫队居然开枪,死伤至数百人,而刘和珍君即在遇害者之列。但我对于这些传说,竟至于颇为怀疑。我向来是不惮以最坏的恶意,来推测中国人的,然而我还不料,也不信竟会下劣凶残到这地步。况且始终微笑着的和蔼的刘和珍君,更何至于无端在府门前喋血呢? 然而即日证明是事实了,作证的便是她自己的尸骸。还有一具,是杨德群君的。而且又证明着这不但是杀害,简直是虐杀,因为身体上还有棍棒的伤痕。 但段政府就有令,说她们是“暴徒”! 但接着就有流言,说她们是受人利用的。 惨象,已使我目不忍视了;流言,尤使我耳不忍闻。我还有什么话可说呢?我懂得衰亡民族之所以默无声息的缘由了。沉默啊,沉默啊!不在沉默中爆发,就在沉默中灭亡。
