
Learning Structured Prediction Models: A Large Margin Approach

Ben Taskar, Electrical Engineering and Computer Science, UC Berkeley, Berkeley, CA 94720

Vassil Chatalbashev and Daphne Koller, Computer Science, Stanford University, Stanford, CA 94305

Carlos Guestrin, Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213

Abstract

We consider large margin estimation in a broad range of prediction models where inference involves solving combinatorial optimization problems, for example, weighted graph-cuts or matchings. Our goal is to learn parameters such that inference using the model reproduces correct answers on the training data. Our method relies on the expressive power of convex optimization problems to compactly capture inference or solution optimality in structured prediction models. Directly embedding this structure within the learning formulation produces concise convex problems for efficient estimation of very complex and diverse models. We describe experimental results on a matching task, disulfide connectivity prediction, showing significant improvements over state-of-the-art methods.

1. Introduction

Structured prediction problems arise in many tasks where multiple interrelated decisions must be weighed against each other to arrive at a globally satisfactory and consistent solution. In natural language processing, we often need to construct a global, coherent analysis of a sentence, such as its corresponding part-of-speech sequence, parse tree, or translation into another language. In computational biology, we analyze genetic sequences to predict the 3D structure of proteins, find global alignments of related DNA strings, and recognize functional portions of a genome. In computer vision, we segment complex objects in cluttered scenes, reconstruct 3D shapes from stereo and video, and track motion of articulated bodies.

Appearing in Proceedings of the 22nd International Conference on Machine Learning, Bonn, Germany, 2005. Copyright 2005 by the author(s)/owner(s).

Many prediction tasks are modeled by combinatorial optimization problems; for example, alignment of 2D shapes using weighted bipartite point matching (Belongie et al., 2002), disulfide connectivity prediction using weighted non-bipartite matchings (Baldi et al., 2004), clustering using spanning trees and graph cuts (Duda et al., 2000), and other combinatorial and graph structures. We define a structured model very broadly, as a scoring scheme over a set of combinatorial structures and a method of finding the highest scoring structure. The score of a model is a function of the weights of vertices, edges, or other parts of a structure; these weights are often represented as parametric functions of a set of input features. The focus of this paper is the task of learning this parametric scoring function. Our training data consists of instances labeled by a desired combinatorial structure (matching, cut, tree, etc.) and a set of input features to be used to parameterize the scoring function. Informally, the goal is to find parameters of the scoring function such that the highest scoring structures are as close as possible to the desired structures on the training instances.

Following the lines of the recent work on maximum margin estimation for probabilistic models (Collins, 2002; Altun et al., 2003; Taskar et al., 2003), we present a discriminative estimation framework for structured models based on the large margin principle underlying support vector machines. The large-margin criterion provides an alternative to probabilistic, likelihood-based estimation methods by concentrating directly on the robustness of the decision boundary of a model. Our framework defines a suite of efficient learning algorithms that rely on the expressive power of convex optimization problems to compactly capture inference or solution optimality in structured

models. We present extensive experiments on disulfide connectivity in protein structure prediction, showing superior performance to state-of-the-art methods.

2. Structured models

As a particularly simple and relevant example, consider modeling the task of assigning reviewers to papers as a maximum weight bipartite matching problem, where the weights represent the "expertise" of each reviewer for each paper. More specifically, suppose we would like to have R reviewers per paper, and that each reviewer be assigned at most P papers. For each paper and reviewer, we have a score s_jk indicating the qualification level of reviewer j for evaluating paper k. Our objective is to find an assignment of reviewers to papers that maximizes the total weight. We represent a matching using a set of binary variables y_jk that are set to 1 if reviewer j is assigned to paper k, and 0 otherwise. The score of an assignment is the sum of edge scores: s(y) = Σ_jk s_jk y_jk. We define Y to be the set of bipartite matchings for a given number of papers and reviewers and given R and P. The maximum weight bipartite matching problem, arg max_{y ∈ Y} s(y), can be solved using a combinatorial algorithm or the following linear program:

    max Σ_jk s_jk y_jk    (1)
    s.t. Σ_j y_jk = R,  Σ_k y_jk ≤ P,  0 ≤ y_jk ≤ 1.

This LP is guaranteed to have integral (0/1) solutions (as long as P and R are integers) for any scoring function s(y) (Schrijver, 2003).
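As an illustrative sketch (not part of the original paper), the LP (1) can be solved for a toy instance with SciPy's off-the-shelf LP solver; the sizes, R, P, and scores below are invented, and the vertex solution returned is integral, as claimed.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance: 3 reviewers, 2 papers, R=1 reviewer per paper,
# each reviewer assigned at most P=1 paper. Scores are hypothetical.
J, K, R, P = 3, 2, 1, 1
s = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.5, 0.4]])   # s[j, k] = expertise of reviewer j for paper k

c = -s.ravel()               # linprog minimizes, so negate the scores

# Equality constraints: sum_j y_jk = R for every paper k
A_eq = np.zeros((K, J * K))
for k in range(K):
    A_eq[k, k::K] = 1.0      # entries y_jk with fixed k (row-major layout)
b_eq = np.full(K, R)

# Inequality constraints: sum_k y_jk <= P for every reviewer j
A_ub = np.zeros((J, J * K))
for j in range(J):
    A_ub[j, j * K:(j + 1) * K] = 1.0
b_ub = np.full(J, P)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
y = res.x.reshape(J, K).round(3)
print(y)   # integral: reviewer 0 -> paper 0, reviewer 1 -> paper 1
```

The constraint matrix is totally unimodular, so the solver's vertex solution is a 0/1 matching with total score 0.9 + 0.8 = 1.7.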

The quality of the assignment found depends critically on the choice of weights that define the objective. A simple scheme could measure the "expertise" as the percent of word overlap between the reviewer's home page and the paper's abstract. However, we would want to weight certain words more heavily (words that are relevant to the subject and infrequent). Constructing and tuning the weights for a problem by hand is a difficult and time-consuming process.

Let webpage(j) denote the bag of words occurring in the home page of reviewer j and abstract(k) denote the bag of words occurring in the abstract of paper k. Then let x_jk denote the intersection webpage(j) ∩ abstract(k). We can let the score s_jk be simply s_jk = Σ_d w_d 1I(word d ∈ x_jk), a weighted combination of overlapping words (where 1I(·) is the indicator function). Define f_d(x_jk) = 1I(word d ∈ x_jk) and f_d(x, y) = Σ_jk y_jk 1I(word d ∈ x_jk), the number of times word d occurs in both the web page of a reviewer and the abstract of the paper matched to that reviewer in y. We can then represent the objective s(y) as a weighted combination of features, w·f(x, y), where w is the vector of parameters and f(x, y) is the vector of features. We develop a large margin framework for learning the parameters of such a model from training data; in our example, paper-reviewer assignments from previous years.
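A minimal sketch of this feature construction, with an invented vocabulary, weights, webpages and abstracts; it checks that the matching score s(y) equals the weighted feature count w·f(x, y).

```python
# Hypothetical word-overlap features for the reviewer-paper example.
# vocab, w, webpage, and abstract are made-up illustrative data.
vocab = ["margin", "kernel", "parsing", "matching"]
w = {"margin": 2.0, "kernel": 1.0, "parsing": 1.5, "matching": 0.5}

webpage = {0: {"margin", "kernel"}, 1: {"parsing"}}
abstract = {0: {"margin", "matching"}, 1: {"parsing", "kernel"}}

def score(j, k):
    """s_jk = sum_d w_d * 1I(word d in webpage(j) & abstract(k))."""
    x_jk = webpage[j] & abstract[k]
    return sum(w[d] for d in x_jk)

# f_d(x, y) counts matched pairs whose word overlap contains word d,
# so the matching score decomposes as s(y) = w . f(x, y).
y = {(0, 0), (1, 1)}   # reviewer j matched to paper k
f = {d: sum(d in (webpage[j] & abstract[k]) for j, k in y) for d in vocab}
total = sum(w[d] * f[d] for d in vocab)
print(total)           # equals score(0, 0) + score(1, 1)
```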

In general, we consider prediction problems in which the input x ∈ X is an arbitrary structured object and the output is a vector of values y = (y_1, ..., y_{L_x}), for example, a matching or a cut in a graph. We assume that the length L_x and the structure of y depend deterministically on the input x. In our bipartite matching example, the output space is defined by the number of papers and reviewers as well as R and P. Denote the output space for a given input x as Y(x) and the entire output space as Y = ∪_{x ∈ X} Y(x). We assume that we can define the output space for a structured example x using a set of constraint functions g_d(x, y) : X × Y → ℝ such that Y(x) = {y : g(x, y) ≤ 0}.

The class of structured prediction models H we consider is the linear family:

    h_w(x) = arg max_{y : g(x,y) ≤ 0} w·f(x, y),    (2)

where f(x, y) is a vector of functions f : X × Y → ℝ^n. This formulation is very general; clearly, for many f, g pairs, finding the optimal y is intractable. We focus our attention on models where this optimization problem can be solved in polynomial time. Such problems include: probabilistic models such as certain types of Markov networks and context-free grammars; combinatorial optimization problems such as min-cut and matching; and convex optimization problems such as linear, quadratic and semi-definite programming. In intractable cases, such as certain types of matchings and graph cuts, we can use an approximate polynomial-time optimization procedure that provides upper/lower bounds on the solution.

3. Max-margin estimation

Our input consists of a set of training instances S = {(x^(i), y^(i))}_{i=1}^m, where each instance consists of a structured object x^(i) (such as a graph) and a target solution y^(i) (such as a matching). We develop methods for finding parameters w such that:

    arg max_{y ∈ Y^(i)} w·f(x^(i), y) ≈ y^(i),  ∀i,

where Y^(i) = {y : g(x^(i), y) ≤ 0}. Note that the solution space Y^(i) depends on the structured object x^(i); for example, the space of possible matchings depends on the precise set of nodes and edges in the graph, as well as on the parameters R and P.

We describe two general approaches to solving this problem, and apply them to bipartite and non-bipartite matchings. Both of these approaches define a convex optimization problem for finding such parameters w. This formulation provides an effective and exact polynomial-time algorithm for many variants of this problem. Moreover, the reduction of this task to a standard convex optimization problem allows the use of highly optimized off-the-shelf software.

Our framework extends the max-margin formulation for Markov networks (Taskar et al., 2003; Taskar et al., 2004a) and context-free grammars (Taskar et al., 2004b), and is similar to other formulations (Altun et al., 2003; Tsochantaridis et al., 2004).

As in univariate prediction, we measure the error of prediction using a loss function ℓ(y^(i), y). In structured problems, where we are jointly predicting multiple variables, the loss is often not just the simple 0-1 loss. For structured prediction, a natural loss function is a kind of Hamming distance between y^(i) and h(x^(i)): the number of variables predicted incorrectly. We can express the requirement that the true structure y^(i) is the optimal solution with respect to w for each instance i as:
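For matchings encoded as 0/1 edge indicators, this Hamming-style loss is simply the number of differing edge variables; a short sketch with made-up matchings (both valid for R = 1, P = 1):

```python
import numpy as np

# Hamming loss between a target matching y_target and a candidate y_cand,
# both encoded as 0/1 edge-indicator matrices (illustrative data only).
y_target = np.array([[1, 0],
                     [0, 1],
                     [0, 0]])
y_cand   = np.array([[0, 1],
                     [1, 0],
                     [0, 0]])

loss = int((y_target != y_cand).sum())   # edge variables that differ
print(loss)
```

Here the two matchings disagree on four edge variables, so the loss is 4.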

    min ½||w||²    (3)
    s.t. w·f_i(y^(i)) ≥ w·f_i(y) + ℓ_i(y),  ∀i, ∀y ∈ Y^(i),

where ℓ_i(y) = ℓ(y^(i), y) and f_i(y) = f(x^(i), y).

We can interpret (1/||w||) w·[f_i(y^(i)) − f_i(y)] as the margin of y^(i) over another y ∈ Y^(i). The constraints enforce w·f_i(y^(i)) − w·f_i(y) ≥ ℓ_i(y), so minimizing ||w|| maximizes the smallest such margin, scaled by the loss ℓ_i(y). We assume for simplicity that the features are rich enough to satisfy the constraints, which is analogous to the separable-case formulation in SVMs. We can add slack variables to deal with the non-separable case; we omit details for lack of space.

The problem with Eq. (3) is that it has Σ_i |Y^(i)| linear constraints, which is generally exponential in L_i, the number of variables in y^(i). We present two equivalent formulations that avoid this exponential blow-up for several important structured models.

4. Min-max formulation

We can rewrite Eq. (3) by substituting a single max constraint for each i instead of |Y^(i)| linear ones:

    min ½||w||²    (4)
    s.t. w·f_i(y^(i)) ≥ max_{y ∈ Y^(i)} [w·f_i(y) + ℓ_i(y)],  ∀i.

The above formulation is a convex quadratic program in w, since max_{y ∈ Y^(i)} [w·f_i(y) + ℓ_i(y)] is convex in w (a maximum of affine functions is a convex function). The key to solving Eq. (4) efficiently is the loss-augmented inference max_{y ∈ Y^(i)} [w·f_i(y) + ℓ_i(y)]. This optimization problem has precisely the same form as the prediction problem whose parameters we are trying to learn, max_{y ∈ Y^(i)} w·f_i(y), but with an additional term corresponding to the loss function. Even if max_{y ∈ Y^(i)} w·f_i(y) can be solved in polynomial time using convex optimization, tractability of the loss-augmented inference also depends on the form of the loss term ℓ_i(y). In this paper, we assume that the loss function decomposes over the variables in y^(i). A natural example of such a loss function is the Hamming distance, which simply counts the number of variables in which a candidate solution y differs from the target output y^(i).

Assume that we can reformulate the loss-augmented inference as a convex optimization problem in terms of a set of variables μ_i, with an objective f̃_i(w, μ_i) concave in μ_i and subject to convex constraints g̃_i(μ_i):

    max_{y ∈ Y^(i)} [w·f_i(y) + ℓ_i(y)] = max_{μ_i : g̃_i(μ_i) ≤ 0} f̃_i(w, μ_i).    (5)

We call such a formulation concise if the number of variables μ_i and constraints g̃_i(μ_i) is polynomial in L_i, the number of variables in y^(i). Note that max_{μ_i : g̃_i(μ_i) ≤ 0} f̃_i(w, μ_i) must be convex in w, as Eq. (4) is. Likewise, we can assume that it is feasible and bounded if Eq. (4) is.

For example, the Hamming loss for bipartite matchings counts the number of edges that differ between the matchings y and y^(i):

    ℓ^H_i(y) = Σ_jk 1I(y_jk ≠ y^(i)_jk) = R N^(i)_p − Σ_jk y_jk y^(i)_jk,

where the last equality follows from the fact that any valid matching for training example i has R reviewers for each of its N^(i)_p papers. Thus, the loss-augmented matching problem can then be written as an LP in μ_i similar to Eq. (1) (without the constant term R N^(i)_p):

    max Σ_jk μ_i,jk [w·f(x^(i)_jk) − y^(i)_jk]
    s.t. Σ_j μ_i,jk = R,  Σ_k μ_i,jk ≤ P,  0 ≤ μ_i,jk ≤ 1.

In terms of Eq. (5), f̃_i and g̃_i are affine in μ_i: f̃_i(w, μ_i) = R N^(i)_p + Σ_jk μ_i,jk [w·f(x^(i)_jk) − y^(i)_jk], and g̃_i(μ_i) ≤ 0 corresponds to {Σ_j μ_i,jk = R, Σ_k μ_i,jk ≤ P, 0 ≤ μ_i,jk ≤ 1}.
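The loss-augmented LP differs from Eq. (1) only in its objective. A sketch on a toy reviewer-matching instance (invented scores and target; SciPy assumed available) finds the "most violating" matching, which here differs from the target matching:

```python
import numpy as np
from scipy.optimize import linprog

# Loss-augmented inference for a toy matching LP (hypothetical data):
# maximize sum_jk mu_jk * (w.f(x_jk) - y_target_jk) over matchings mu.
J, K, R, P = 3, 2, 1, 1
scores = np.array([[0.9, 0.1],
                   [0.2, 0.8],
                   [0.5, 0.45]])        # stand-ins for w . f(x_jk)
y_target = np.array([[1, 0],
                     [0, 1],
                     [0, 0]], float)

c = -(scores - y_target).ravel()        # augmented objective; linprog minimizes

A_eq = np.zeros((K, J * K))             # each paper gets exactly R reviewers
for k in range(K):
    A_eq[k, k::K] = 1.0
A_ub = np.zeros((J, J * K))             # each reviewer gets at most P papers
for j in range(J):
    A_ub[j, j * K:(j + 1) * K] = 1.0

res = linprog(c, A_ub=A_ub, b_ub=np.full(J, P),
              A_eq=A_eq, b_eq=np.full(K, R), bounds=(0, 1))
mu = res.x.reshape(J, K).round(3)
print(mu)   # the loss-augmented maximizer: the most violating matching
```

The target edges are penalized by 1 and non-target edges rewarded by their loss, so the maximizer trades off model score against disagreement with y^(i).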

Generally, when we can express max_{y ∈ Y^(i)} w·f(x^(i), y) as an LP, we have a similar LP for the loss-augmented inference:

    d_i + max (F_i w + c_i)ᵀ μ_i  s.t. A_i μ_i ≤ b_i, μ_i ≥ 0,    (6)

for appropriately defined d_i, F_i, c_i, A_i, b_i, which depend only on x^(i), y^(i), f(x, y) and g(x, y). Note that w appears only in the objective.

In cases (such as these) where we can formulate the loss-augmented inference as a convex optimization problem as in Eq. (5), and this formulation is concise, we can use Lagrangian duality (see Boyd and Vandenberghe (2004) for an excellent review) to define a joint, concise convex problem for estimating the parameters w. The Lagrangian associated with Eq. (5) is

    L_{i,w}(μ_i, λ_i) = f̃_i(w, μ_i) − λ_iᵀ g̃_i(μ_i),    (7)

where λ_i ≥ 0 is a vector of Lagrange multipliers, one for each constraint function in g̃_i(μ_i). Since we assume that f̃_i(w, μ_i) is concave in μ_i and bounded on the non-empty set {μ_i : g̃_i(μ_i) ≤ 0}, we have strong duality:

    max_{μ_i : g̃_i(μ_i) ≤ 0} f̃_i(w, μ_i) = min_{λ_i ≥ 0} max_{μ_i} L_{i,w}(μ_i, λ_i).

For many forms of f̃ and g̃, we can write the Lagrangian dual min_{λ_i ≥ 0} max_{μ_i} L_{i,w}(μ_i, λ_i) explicitly as:

    min h_i(w, λ_i)  s.t. q_i(w, λ_i) ≤ 0,    (8)

where h_i(w, λ_i) and q_i(w, λ_i) are convex in both w and λ_i. (We folded λ_i ≥ 0 into q_i(w, λ_i) for brevity.) Since the original problem had polynomial size, the dual is polynomial size as well. For example, the dual of the LP in Eq. (6) is

    d_i + min b_iᵀ λ_i  s.t. A_iᵀ λ_i ≥ F_i w + c_i, λ_i ≥ 0,    (9)

where h_i(w, λ_i) = d_i + b_iᵀ λ_i and q_i(w, λ_i) ≤ 0 is {F_i w + c_i − A_iᵀ λ_i ≤ 0, −λ_i ≤ 0}.

Plugging Eq. (8) into Eq. (4), we get

    min ½||w||²    (10)
    s.t. w·f_i(y^(i)) ≥ min_{q_i(w,λ_i) ≤ 0} h_i(w, λ_i),  ∀i.

We can now combine the minimization over λ with the minimization over w. The reason is that if the right-hand side is not at its minimum, the constraint is tighter than necessary, leading to a suboptimal solution w; optimizing jointly over λ as well produces an optimal w.

    min ½||w||²    (11)
    s.t. w·f_i(y^(i)) ≥ h_i(w, λ_i),  ∀i;  q_i(w, λ_i) ≤ 0,  ∀i.

Hence we have a joint and concise convex optimization program for estimating w. The exact form of this program depends strongly on f̃ and g̃. For our LP-based example, we have a QP with linear constraints:

    min ½||w||²    (12)
    s.t. w·f_i(y^(i)) ≥ d_i + b_iᵀ λ_i,  ∀i;  A_iᵀ λ_i ≥ F_i w + c_i,  ∀i;  λ_i ≥ 0,  ∀i.

This formulation generalizes the idea used by Taskar et al. (2004a) to provide a polynomial-time estimation procedure for a certain family of Markov networks; it can also be used to derive the max-margin formulations of Taskar et al. (2003) and Taskar et al. (2004b).

5. Certificate formulation

In the previous section, we assumed a concise convex formulation of the loss-augmented max in Eq. (4). There are several important combinatorial problems which allow a polynomial-time solution yet do not have a concise convex optimization formulation. For example, the maximum weight spanning tree and perfect non-bipartite matching problems can be expressed as linear programs with exponentially many constraints, but no polynomial formulation as a convex optimization problem is known (Schrijver, 2003). Both of these problems, however, can be solved in polynomial time using combinatorial algorithms. In some cases, though, we can find a concise certificate of optimality that guarantees that y^(i) = arg max_y [w·f_i(y) + ℓ_i(y)] without expressing loss-augmented inference as a concise convex program. Intuitively, verifying that a given assignment is optimal can be easier than actually finding one.

As a simple example, consider the maximum weight spanning tree problem. A basic property of a spanning tree is that cutting any edge (j, k) in the tree creates two disconnected sets of nodes (V_j[jk], V_k[jk]), where j ∈ V_j[jk] and k ∈ V_k[jk]. A spanning tree is optimal with respect to a set of edge weights if and only if, for every edge (j, k) in the tree connecting V_j[jk] and V_k[jk], the weight of (j, k) is larger than (or equal to) the weight of any other edge (j′, k′) in the graph with j′ ∈ V_j[jk], k′ ∈ V_k[jk] (Schrijver, 2003). These conditions can be expressed using linear inequality constraints on the weights.

More generally, suppose that we can find a concise convex formulation of these conditions via a polynomial (in L_i) set of functions q_i(w, ν_i), jointly convex in w and auxiliary variables ν_i, such that for fixed w:

    ∃ν_i : q_i(w, ν_i) ≤ 0  ⟺  ∀y ∈ Y^(i) : w·f_i(y^(i)) ≥ w·f_i(y) + ℓ_i(y).

Then min ½||w||² such that q_i(w, ν_i) ≤ 0 for all i is a joint convex program in w and ν that computes the max-margin parameters.

Expressing spanning tree optimality does not require additional variables ν_i, but in other problems, such as perfect matching optimality (see below), such auxiliary variables are needed.

Consider the problem of finding a matching in a non-bipartite graph; we begin by considering perfect matchings, where each node has exactly one neighbor, and then provide a reduction for the non-perfect case (each node has at most one neighbor). Let M be a perfect matching for a complete undirected graph G = (V, E). In an alternating cycle/path in G with respect to M, the edges alternate between those that belong to M and those that do not. An alternating cycle is augmenting with respect to M if the score of the edges in the matching M is smaller than the score of the edges not in the matching M.

Theorem 5.1 (Edmonds, 1965) A perfect matching M is a maximum weight perfect matching if and only if there are no augmenting alternating cycles.

The number of alternating cycles is exponential in the number of vertices, so simply enumerating all of them is infeasible. Instead, we can rule out such cycles by considering shortest paths.

We begin by negating the score of those edges not in M. In the discussion below we assume that each edge score s_jk has been modified this way. We also refer to the score s_jk as the length of the edge jk. An alternating cycle is augmenting if and only if its length is negative. A condition ruling out negative-length alternating cycles can be stated succinctly using a type of distance function. Pick an arbitrary root node r. Let d^e_j, with j ∈ V, e ∈ {0, 1}, denote the length of the shortest alternating path from r to j, where e = 1 if the last edge of the path is in M, and e = 0 otherwise. These shortest distances are well-defined if and only if there are no negative alternating cycles. The following constraints capture this distance function.

    s_jk ≥ d^0_k − d^1_j,  s_jk ≥ d^0_j − d^1_k,  ∀jk ∉ M;    (13)
    s_jk ≥ d^1_k − d^0_j,  s_jk ≥ d^1_j − d^0_k,  ∀jk ∈ M.

Theorem 5.2 There exists a distance function {d^e_j} satisfying the constraints in Eq. (13) if and only if no augmenting alternating cycles exist.

The proof is presented in Taskar (2004), Chap. 10.

In our learning formulation, assuming Hamming loss, we have the loss-augmented edge weights

    s^(i)_jk = (2y^(i)_jk − 1)(w·f(x_jk) + 1 − 2y^(i)_jk),

where the (2y^(i)_jk − 1) term in front negates the score of edges not in the matching. Let d_i be a vector of the distance variables d^e_j, let F_i and G_i be matrices of coefficients, and let q_i be a vector such that F_i w + G_i d_i ≥ q_i represents the constraints in Eq. (13) for example i. Then the following joint convex program in w and d (with d_i playing the role of ν_i) computes the max-margin parameters:

    min ½||w||²  s.t. F_i w + G_i d_i ≥ q_i,  ∀i.    (14)

If our features are not rich enough to predict the training data perfectly, we can introduce a slack variable vector to allow violations of the constraints.

The case of non-perfect matchings can be handled by a reduction to perfect matchings as follows (Schrijver, 2003). We create a new graph by making a copy of the nodes and the edges and adding edges between each node and the corresponding node in the copy. We extend the matching by replicating its edges in the copy and, for each unmatched node, introducing an edge to its copy. We define f(x_jk) ≡ 0 for edges between the original and the copy. Perfect matchings in this graph, projected onto the original graph, correspond to non-perfect matchings in the original graph.
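The reduction can be sketched as follows. The 4-node graph and its weights are invented, and an exhaustive search stands in for the polynomial-time perfect-matching solver (e.g., the blossom algorithm) that would be used in practice:

```python
# Graph-copy reduction for non-perfect matchings, on a toy 4-node graph.
n = 4
weight = {(0, 1): 4.0, (1, 2): 3.0, (2, 3): 4.0, (0, 3): 1.0, (0, 2): 0.5}

def w(u, v):
    return weight.get((min(u, v), max(u, v)))

def w2(u, v):
    """Edge weights of the doubled graph on 2n nodes."""
    u, v = min(u, v), max(u, v)
    if v < n:                            # both endpoints in the original
        return w(u, v)
    if u >= n:                           # both endpoints in the copy
        return w(u - n, v - n)
    return 0.0 if v == u + n else None   # zero-weight node-to-its-copy edge

def best_perfect_matching(nodes, wf):
    """Exhaustive maximum-weight perfect matching (toy sizes only)."""
    best = [float('-inf'), None]
    def rec(rest, acc, total):
        if not rest:
            if total > best[0]:
                best[0], best[1] = total, set(acc)
            return
        u = rest[0]                      # match the smallest remaining node
        for v in rest[1:]:
            c = wf(u, v)
            if c is not None:
                rec([x for x in rest[1:] if x != v], acc + [(u, v)], total + c)
    rec(list(nodes), [], 0.0)
    return best

val, M = best_perfect_matching(range(2 * n), w2)
projected = {e for e in M if max(e) < n}   # project onto the original graph
print(val, projected)
```

The optimal perfect matching of the doubled graph matches each unmatched node to its copy, so the projection {(0, 1), (2, 3)} is a maximum-weight (non-perfect) matching of the original graph.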

6. Duals and kernels

Both the min-max formulation and the certificate formulation produce primal convex optimization problems. Rather than solving them directly, we consider their dual versions. In particular, as in SVMs, we can use the kernel trick in the dual to efficiently learn in high-dimensional feature spaces.

Consider, for example, the primal problem in Eq. (14). In the dual, each training example i has L_i(L_i − 1) variables, two for each edge (α^(i)_jk and α^(i)_kj), since we have two constraints per edge in the primal. Let α^(i) be the vector of dual variables for example i. The dual quadratic problem is:

    min ½||Σ_i F_iᵀ α^(i)||² − Σ_i q_iᵀ α^(i)    (15)
    s.t. G_iᵀ α^(i) = 0,  α^(i) ≥ 0,  ∀i.

The only occurrence of the feature vectors is in the expansion of the squared-norm term in the objective, ||Σ_i F_iᵀ α^(i)||², where they enter exclusively through inner products of the form f(x^(i)_jk)·f(x^(t)_mn). Therefore, we can apply the kernel trick and let f(x^(i)_jk)·f(x^(t)_mn) = K(x^(i)_jk, x^(t)_mn). At prediction time, we can also use the kernel trick to compute w·f(x_mn), since the optimal w is itself a weighted combination of the training feature vectors.

Figure 1. PDB protein 1ANS: amino acid sequence, 3D structure, and graph of potential disulfide bonds. Actual disulfide connectivity is shown in yellow in the 3D model and the graph of potential bonds.

7. Experiments

We apply our framework to the task of disulfide connectivity prediction. Proteins containing cysteine residues form intra-chain covalent bonds known as disulfide bridges. Such bonds are a very important feature of protein structure, as they enhance conformational stability by reducing the number of configurational states and decreasing the entropic cost of folding a protein into its native state (Baldi et al., 2004). Knowledge of the exact disulfide bonding pattern in a protein provides information about protein structure and possibly its function and evolution. Furthermore, as the disulfide connectivity pattern imposes structural constraints, it can be used to reduce the search space in both protein folding prediction and protein 3D structure prediction. Thus, the development of efficient, scalable and accurate methods for the prediction of disulfide bonds has important practical applications.

Recently, there has been increasing interest in applying computational techniques to the task of predicting disulfide connectivity (Fariselli & Casadio, 2001; Baldi et al., 2004). Following the lines of Fariselli and Casadio (2001), we predict the connectivity pattern by finding the maximum weighted matching in a graph in which each vertex represents a cysteine residue and each edge represents the "attraction strength" between the cysteines it connects. For example, the 1ANS protein in Fig. 1 has six cysteines and three disulfide bonds. We parameterize the attraction scoring function via a linear combination of features, which include the protein sequence around the two residues, evolutionary information in the form of multiple alignment profiles, secondary structure or solvent accessibility information, etc. We then learn the weights of the different features using a set of solved protein structures in which the disulfide connectivity patterns are known.

Datasets. We used two datasets containing sequences with experimentally verified bonding patterns: SP39 (used in Baldi et al. (2004); Vullo and Frasconi (2004); Fariselli and Casadio (2001)) and DIPRO2.^1

We report results for two experimental settings: when the bonding state is known (that is, for each cysteine, we know whether it bonds with some other cysteine) and when it is unknown. The known-state setting corresponds to perfect non-bipartite matchings, while the unknown-state setting corresponds to imperfect matchings. For the case when bonding state is known, in order to compare to previous work, we focus on proteins containing between 2 and 5 bonds, which covers the entire SP39 dataset and over 90% of the DIPRO2 dataset.

In order to avoid biases during testing, we adopt the same dataset-splitting procedure as the one used in previous work (Fariselli & Casadio, 2001; Vullo & Frasconi, 2004; Baldi et al., 2004).^2

Models. The experimental results we report use the dual formulation of Sec. 6 and an RBF kernel K(x_jk, x_st) = exp(−||x_jk − x_st||² / (2σ²)), with σ ∈ [1, 2]. We used commercial QP software (CPLEX) to train our models. Training took around 70 minutes for 450 examples.
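The RBF kernel here is the standard one; a short sketch (with random stand-in feature vectors, not real cysteine-pair features) evaluates it:

```python
import numpy as np

# RBF kernel used in the dual: K(x, z) = exp(-||x - z||^2 / (2 sigma^2)).
# sigma is within the paper's range [1, 2]; the vectors are synthetic.
def rbf(x, z, sigma=1.5):
    d = x - z
    return np.exp(-(d @ d) / (2.0 * sigma ** 2))

rng = np.random.default_rng(1)
x_jk, x_st = rng.normal(size=20), rng.normal(size=20)

print(rbf(x_jk, x_jk))   # 1.0: any point has unit similarity with itself
print(rbf(x_jk, x_st))   # in (0, 1], shrinking with squared distance
```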

The set of features x_jk for a candidate cysteine pair jk is based on local windows of size n centered around each cysteine (we use n = 9). The simplest approach is to encode, for each window, the actual sequence as a 20 × n binary vector, in which the entries denote whether or not a particular amino acid occurs at the particular position. However, a common strategy is to compensate for the sparseness of these features by using multiple sequence alignment profile information. Multiple alignments were computed by running PSI-BLAST with default settings to align the sequence with all sequences in the NR database (Altschul et al., 1997). Thus, the input features for the first model, PROFILE, consist of the proportions of occurrence of each of the 20 amino acids in the alignments at each position in the local windows.

The second model, PROFILE-SS, augments the PROFILE model with secondary structure and solvent-accessibility information. The DSSP program produces 8 types of secondary structure, so we augment each local window of size n with an additional 8 × n binary vector, as well as a length-n binary vector representing the solvent accessibility at each position. Because DSSP utilizes true 3D structure information in assigning secondary structure, the model cannot be used for unknown proteins. Rather, its performance is useful as an upper bound on the potential prediction improvement from features derived via accurate secondary structure prediction algorithms.

Known Bonded State. When bonding state is known, we evaluate our algorithm using two metrics: accuracy measures the fraction of entire matchings predicted correctly, and precision measures the fraction of correctly predicted individual bonds.

^1 The DIPRO2 dataset was made publicly available by Baldi et al. (2004). It consists of 1018 non-redundant proteins from PDB (Berman et al., 2000) as of May 2004 which contain intra-chain disulfide bonds. The sequences are annotated with secondary structure and solvent accessibility information from DSSP (Kabsch & Sander, 1983).

^2 We split SP39 into 4 different subsets, with the constraint that proteins with sequence similarity of more than 30% belong to different subsets. We ran all-against-all rigorous Smith-Waterman local pairwise alignment (using BLOSUM65, with gap penalty/extension 12/4) and considered pairs with fewer than 30 aligned residues as dissimilar.

Table 1. Numbers indicate Precision/Accuracy for the known bonded state setting in (a) and (b), and Precision/Recall/Accuracy for the unknown bonded state setting in (c). K denotes the true number of bonds. (a) PROFILE vs. SVM and the DAG-RNN model (Baldi et al., 2004) on SP39. (b) PROFILE vs. PROFILE-SS on the DIPRO2 dataset. (c) PROFILE vs. DAG-RNN on SP39 (unknown state).

(a)  K | SVM       | PROFILE   | DAG-RNN
     2 | 0.63/0.63 | 0.77/0.77 | 0.74/0.74
     3 | 0.51/0.38 | 0.62/0.52 | 0.61/0.51
     4 | 0.34/0.12 | 0.51/0.36 | 0.44/0.27
     5 | 0.31/0.07 | 0.43/0.13 | 0.41/0.11

(b)  K | PROFILE   | PROFILE-SS
     2 | 0.76/0.76 | 0.78/0.78
     3 | 0.67/0.59 | 0.74/0.65
     4 | 0.59/0.42 | 0.64/0.49
     5 | 0.39/0.13 | 0.46/0.22

(c)  K | PROFILE        | DAG-RNN
     2 | 0.57/0.59/0.44 | 0.49/0.59/0.40
     3 | 0.48/0.52/0.28 | 0.45/0.50/0.32
     4 | 0.39/0.40/0.14 | 0.37/0.36/0.15
     5 | 0.31/0.33/0.07 | 0.31/0.28/0.03

We apply the model to the SP39 dataset using 4-fold cross-validation, which replicates the experimental setup of Baldi et al. (2004) and Vullo and Frasconi (2004). First, we evaluate the advantage that our learning formulation provides over a baseline approach in which we use the features of the PROFILE model but ignore the constraints in the learning phase, simply learning w with an SVM by labeling bonded pairs as positive examples and non-bonded pairs as negative examples. We then use these SVM-learned weights to score the edges. The results are summarized in Table 1(a) in the column SVM. The PROFILE model uses our certificate-based formulation, directly incorporating the matching constraint in the learning phase. It achieves significant gains over the SVM for all bond numbers, illustrating the importance of explicitly modelling structure during parameter estimation. We also compare our model to the DAG-RNN model of Baldi et al. (2004), the current top-performing system, which uses recursive neural networks with the same set of input features. Our performance is better for all bond numbers.

As a final experiment, we examine the role that secondary structure and solvent-accessibility information plays in the PROFILE-SS model. Table 1(b) shows that the gains are significant, especially for sequences with 3 and 4 bonds. This highlights the importance of developing even richer features, perhaps through more complex kernels.

Unknown Bonded State. We also evaluated our learning formulation for the case when bonding state is unknown, using the graph-copy reduction described in Sec. 5. Here, we use an additional evaluation metric: recall, which measures correctly predicted bonds as a fraction of the total number of bonds. We apply the model to the SP39 dataset for proteins of 2-5 bonds, without assuming knowledge of bonded state. Since some sequences contain over 50 cysteines, for computational efficiency during training we only take up to 14 cysteines per protein, by including the ones that participate in bonds and randomly selecting additional cysteines from the protein (if available). During testing, we used all the cysteines.

The results are summarized in Table 1(c), with a comparison to the DAG-RNN model, which is currently the only other model to tackle this more challenging setting. Our results compare favorably, with better precision and recall for all bond numbers, though our connectivity pattern accuracy is slightly lower for 3 and 4 bonds.

8. Discussion and Conclusion

We present a framework for learning a wide range of structured prediction models where the set of outcomes is a class of combinatorial structures such as matchings and graph-cuts. We present two formulations of structured max-margin estimation that define a concise convex optimization problem. The first formulation, min-max, relies on the ability to express inference in a model as a concise convex optimization problem. The second one, certificate, only requires expressing optimality of a desired solution according to a model. We illustrate how to apply these formulations to the problem of learning to match, with bipartite and non-bipartite matchings. These formulations can also be applied to min-cuts, max-flows, trees, colorings and many other combinatorial problems.
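To make the learning-to-match setup concrete, here is a minimal structured-perceptron sketch (a simpler relative of the max-margin estimation above, following Collins (2002); the function names and the toy instance are ours). It uses exhaustive inference over all perfect matchings of a tiny bipartite graph, where the paper would instead use combinatorial or convex inference:

```python
from itertools import permutations

def score(w, feats, perm):
    # Model score of a matching: sum over matched pairs (i, perm[i])
    # of the weighted edge features w . f(i, j).
    return sum(wk * fk
               for i, j in enumerate(perm)
               for wk, fk in zip(w, feats[i][j]))

def predict(w, feats, n):
    # Exhaustive inference over all n! perfect matchings of K_{n,n};
    # only viable for tiny n.
    return max(permutations(range(n)), key=lambda p: score(w, feats, p))

def perceptron_step(w, feats, y_true):
    # One structured-perceptron update: move w toward the features of
    # the target matching and away from the predicted one.
    y_hat = predict(w, feats, len(y_true))
    if y_hat != y_true:
        for i, (jt, jp) in enumerate(zip(y_true, y_hat)):
            for k in range(len(w)):
                w[k] += feats[i][jt][k] - feats[i][jp][k]
    return w

# Toy 2x2 instance with 2-dimensional edge features (made up for
# illustration); the target matching pairs 0-1 and 1-0.
feats = [[[1, 0], [0, 1]],
         [[0, 1], [1, 0]]]
w = perceptron_step([0.0, 0.0], feats, (1, 0))
# After one update the model reproduces the target matching.
```

The max-margin formulations in the paper replace this mistake-driven update with a single convex program, but the interface is the same: a parameterized edge-score model plus a matching-inference subroutine.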

Beyond the closely related large-margin methods for probabilistic models, our max-margin formulation has ties to a body of work called inverse combinatorial and convex optimization (for a recent survey, see Heuberger (2004)). An inverse optimization problem is defined by an instance of an optimization problem max_{y ∈ Y} w·f(y), a set of nominal weights w0, and a target solution y_t. The goal is to find the weights w closest to the nominal w0 in some p-norm that make the target solution optimal:

min ||w − w0||_p   s.t.   w·f(y_t) ≥ w·f(y), ∀y ∈ Y.

The solution w depends critically on the choice of nominal weights; for example, if w0 = 0, then w = 0 is trivially the optimal solution. In our approach, we have a very different goal: to learn a parameterized objective function that depends on the input x and will generalize well in prediction on new instances. Our approach attempts to find weights that obtain a particular target solution for each training instance. While a unique target solution is a reasonable assumption in some domains (e.g., disulfide bonding), in others there may be several "equally good" target solutions. It would be interesting to extend our approach to accommodate multiple target solutions.
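For a concrete contrast, the inverse problem above becomes a linear program when p = ∞ and Y is small enough to enumerate. The toy instance below is our own (it assumes NumPy and SciPy are available, and is not taken from the paper): a target structure with features [1, 0], two competitors, and nominal weights w0 under which the target is not optimal.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: feature vectors of the target structure y_t
# and of the other candidates in Y, plus nominal weights w0.
f_target = np.array([1.0, 0.0])
f_others = [np.array([0.0, 1.0]), np.array([0.5, 0.5])]
w0 = np.array([0.0, 1.0])  # y_t is NOT optimal under w0

d = len(w0)
# LP variables x = [w_1, ..., w_d, t]; minimize t = ||w - w0||_inf.
c = np.zeros(d + 1)
c[-1] = 1.0
A, b = [], []
for i in range(d):              # |w_i - w0_i| <= t, linearized
    row = np.zeros(d + 1); row[i], row[-1] = 1.0, -1.0
    A.append(row); b.append(w0[i])
    row = np.zeros(d + 1); row[i], row[-1] = -1.0, -1.0
    A.append(row); b.append(-w0[i])
for f_y in f_others:            # w . f(y_t) >= w . f(y) for all y
    row = np.zeros(d + 1); row[:d] = f_y - f_target
    A.append(row); b.append(0.0)
res = linprog(c, A_ub=np.vstack(A), b_ub=np.array(b),
              bounds=[(None, None)] * d + [(0, None)])
w = res.x[:d]  # here w = [0.5, 0.5], at distance 0.5 from w0
```

With w0 = 0 the same LP returns w = 0, illustrating the degeneracy noted above; the max-margin formulations avoid it by maximizing the margin of the target over alternatives rather than staying close to nominal weights.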

The estimation problem is tractable and exact whenever the prediction problem can be formulated as a concise convex optimization problem or a polynomial-time combinatorial algorithm with concise convex optimality conditions. For intractable models (e.g., tripartite matching, quadratic assignment, max-cut, etc.), we can use our framework to learn approximate parameters by exploiting approximations that only provide upper/lower bounds on the optimal structure score or provide a certificate of optimality in a large neighborhood around the desired structure.

Using off-the-shelf convex optimization code for our learning formulation is convenient and flexible, but it is likely that problem-specific methods that use combinatorial subroutines will outperform generic solvers. Design of such algorithms is an open problem.

We have addressed estimation of models with discrete output spaces. Similarly, we can consider a whole range of problems where the prediction variables are continuous. Such problems are natural generalizations of regression, involving correlated, inter-constrained real-valued outputs. Examples include learning models of metabolic flux in cells, which obeys stoichiometric constraints, and learning game payoff matrices from observed equilibrium strategies.

Our learning framework addresses a large class of prediction models with rich and interesting structure. We hope that continued research in this vein will help tackle ever more sophisticated prediction problems.

Acknowledgements. This work was supported by NSF grant DBI-0345474 and by DARPA's EPCA program under subcontract to SRI.

References

Altschul, S., Madden, T., Schaffer, A., Zhang, A., Miller, W., & Lipman, D. (1997). Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res., 25, 3389-3402.

Altun, Y., Tsochantaridis, I., & Hofmann, T. (2003). Hidden Markov support vector machines. Proc. ICML.

Baldi, P., Cheng, J., & Vullo, A. (2004). Large-scale prediction of disulphide bond connectivity. Proc. NIPS.

Belongie, S., Malik, J., & Puzicha, J. (2002). Shape matching and object recognition using shape contexts. IEEE Trans. Pattern Anal. Mach. Intell., 24.

Berman, H., Westbrook, J., Feng, Z., Gilliland, G., Bhat, T., Weissig, H., Shindyalov, I., & Bourne, P. (2000). The protein data bank.

Boyd, S., & Vandenberghe, L. (2004). Convex optimization. Cambridge University Press.

Collins, M. (2002). Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. Proc. EMNLP.

Duda, R. O., Hart, P. E., & Stork, D. G. (2000). Pattern classification. New York: Wiley Interscience. 2nd edition.

Edmonds, J. (1965). Maximum matching and a polyhedron with 0-1 vertices. Journal of Research of the National Bureau of Standards, 69B, 125-130.

Fariselli, P., & Casadio, R. (2001). Prediction of disulfide connectivity in proteins. Bioinformatics, 17, 957-964.

Heuberger, C. (2004). Inverse combinatorial optimization: A survey on problems, methods, and results. Journal of Combinatorial Optimization, 8.

Kabsch, W., & Sander, C. (1983). Dictionary of protein secondary structure: Pattern recognition of hydrogen-bonded and geometrical features.

Schrijver, A. (2003). Combinatorial optimization: Polyhedra and efficiency. Springer.

Taskar, B. (2004). Learning structured prediction models: A large margin approach. Doctoral dissertation, Stanford University.

Taskar, B., Chatalbashev, V., & Koller, D. (2004a). Learning associative Markov networks. Proc. ICML.

Taskar, B., Guestrin, C., & Koller, D. (2003). Max margin Markov networks. Proc. NIPS.

Taskar, B., Klein, D., Collins, M., Koller, D., & Manning, C. (2004b). Max margin parsing. Proc. EMNLP.

Tsochantaridis, I., Hofmann, T., Joachims, T., & Altun, Y. (2004). Support vector machine learning for interdependent and structured output spaces. Proc. ICML.

Vullo, A., & Frasconi, P. (2004). Disulfide connectivity prediction using recursive neural networks and evolutionary information. Bioinformatics, 20, 653-659.
