
Spectral Methods for Automatic Multiscale Data Clustering


Arik Azran

Gatsby Computational Neuroscience Unit, University College London
London WC1N 3AR, UK
arik@gatsby.ucl.ac.uk

Zoubin Ghahramani*
Department of Engineering, University of Cambridge
Cambridge CB2 1PZ, UK
zoubin@eng.cam.ac.uk

*Also at the Machine Learning Department, Carnegie Mellon University, Pittsburgh, USA

Abstract

Spectral clustering is a simple yet powerful method for finding structure in data using spectral properties of an associated pairwise similarity matrix. This paper provides new insights into how the method works and uses these to derive new algorithms which, given the data alone, automatically learn different plausible data partitionings. The main theoretical contribution is a generalization of a key result in the field, the multicut lemma [7]. We use this generalization to derive two algorithms. The first uses the eigenvalues of a given affinity matrix to infer the number of clusters in data, and the second combines learning the affinity matrix with inferring the number of clusters. A hierarchical implementation of the algorithms is also derived. The algorithms are theoretically motivated and demonstrated on nontrivial data sets.

1. Introduction

Clustering is a fundamental unsupervised learning problem, where one needs to find a partitioning of a given set of items S = {s_n}_{n=1}^N into K groups {S_k}_{k=1}^K such that ∪_{k=1}^K S_k = S and S_k ∩ S_l = ∅ if k ≠ l. Imagine you are given the task of designing an algorithm to cluster S, without any additional information such as the number of clusters K, bounds on the number of points in the k'th cluster |S_k|, the location of the clusters, etc. Often there is more than one plausible way to partition the data. Thus we would like an algorithm which suggests different 'good' partitionings and associates each of them with a numerical measure that indicates how 'good' they are. This paper provides such an algorithm, using novel ideas in spectral clustering.

Spectral clustering is a technique for data partitioning based on similarities between pairs of data points. The spectrum of an affinity matrix, a matrix of pairwise similarities between points, is used to cluster the data points into groups. The ease of implementation, combined with the ability to cluster complex data sets, makes the method appealing to researchers in various fields, including bioinformatics [10], speech recognition [4] and software clustering [1]. In computer vision it is used to perform image segmentation [6, 8, 9]. As a nonparametric approach to unsupervised clustering, it often beats parametric clustering models in machine learning (e.g. mixture models).

Many studies have been done on spectral clustering in relation to random walks [8], graph cuts and normalized cuts [6], and matrix perturbation theory [2], continuously improving our understanding of the technique. Alongside the development of algorithms [6, 8, 2, 5, 9], significant theoretical progress has also been achieved. In [7] it was shown that if certain conditions hold then spectral clustering minimizes the multiway normalized cut, a generalization of the two-way normalized cut criterion [6]. Since this result is the basis for some of the work presented here, it will be discussed further in the sequel.

Although much has been discovered, various key issues remain open questions, e.g. (i) How can one choose a good function to measure the pairwise similarity when all that is available is S? (ii) How can the number of clusters K be learned from S? (iii) Why do spectral methods use the leading eigenvectors? and (iv) When is spectral clustering expected to work? These are the questions addressed in this paper, leading to the following contributions: (1) Analyzing the effect of taking multiple steps of the random walk, a direct generalization of the multicut lemma [7], which we briefly discuss in Section 2, is derived in Section 3. (2) A new algorithm that finds different plausible values for K is derived from theoretical considerations (Section 4). (3) This algorithm is, in turn, generalized in Section 5 to learn the parameter of the similarity function, and (4) a hierarchical implementation which is both efficient and intuitive is described in Section 6. This is all done while keeping the additional complexity and computational burden minimal.


2. Overview of spectral clustering

This section gives a short review of spectral clustering and the main results from the literature used in our work. Consider the case where members of S are points in the data space R^t. The indices of the points in the k'th group are denoted by I_k and the partition by I = {I_k}_{k=1}^K.

Given a metric d(x,y) defined over R^t, the similarity between different points in S can be measured by any monotonically decreasing parameterized function w_mn(σ) = w(d(s_m, s_n); σ), with σ being the length scale of w. The smaller it is, the smaller is the 'neighborhood' of a point. For multidimensional data, different length scales can be used along different dimensions. While the results in this paper can be applied to any metric space (X, d(x,y)) with any function w, for brevity we only discuss (R^t, ‖x − y‖_2) with the popular Gaussian function

w_mn(σ) = exp(−‖s_m − s_n‖^2 / σ^2).   (1)

The pairwise similarities are conveniently summarized in the Gram matrix, also known as the kernel matrix.

DEFINITION 1 (Parameterized Gram Matrix) Given a set S = {s_n}_{n=1}^N and a parameterized function w(σ): R^t × R^t → R_+, the parameterized N×N matrix W(σ) with elements [W(σ)]_mn = w_mn(σ) is referred to as the parameterized Gram matrix of w(σ) w.r.t. S. For simplicity of notation we sometimes drop the explicit dependence on σ, but the reader should keep this dependence in mind.

Different algorithms use W differently to derive an affinity matrix P. In this paper we adopt the random walk view [8] for the definition of P (see [3] for its relation to other definitions). Denote the volume of the n'th node D_nn = Σ_{i=1}^N w_ni and the diagonal matrix D = diag(D_11, ..., D_NN); then the affinity matrix is given by

P = D^{-1} W.   (2)

Notice that each row of P sums to 1, thus P_mn can be interpreted as the probability for a random walk that begins at s_m to end up at s_n after a single step. More formally, if we let x_j be the location of the walk at time j, then

P_mn = P(x_{j+1} = s_n | x_j = s_m).   (3)
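To make (1)-(3) concrete, here is a minimal NumPy sketch (not from the paper; the function names and the toy data are illustrative) that builds the parameterized Gram matrix W(σ) and the row-stochastic affinity matrix P = D^{-1}W.

```python
import numpy as np

def gram_matrix(S, sigma):
    """Parameterized Gram matrix W(sigma) of (1): W_mn = exp(-||s_m - s_n||^2 / sigma^2)."""
    sq_dists = np.sum((S[:, None, :] - S[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / sigma ** 2)

def transition_matrix(W):
    """Affinity matrix of (2): P = D^{-1} W, where D_nn = sum_i W_ni."""
    D = W.sum(axis=1)
    return W / D[:, None]

# Each row of P sums to 1, so P[m, n] is the one-step probability of (3).
S = np.random.randn(100, 2)                  # toy data set: N points in R^t
P = transition_matrix(gram_matrix(S, sigma=0.5))
assert np.allclose(P.sum(axis=1), 1.0)
```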

2.1. The baseline method

Baseline algorithms assume the Gram matrix W and the number of clusters K are given with the data. First the affinity matrix P is computed, and then its eigensystem is used to cluster the data, as described in Algorithm 1.

DEFINITION 2 (Eigensystem of a matrix) Let λ_n and v_n be the n'th eigenvalue and eigenvector of a matrix P, i.e. P v_n = λ_n v_n. Without loss of generality, let λ_n ≥ λ_{n+1} and ‖v_n‖ = 1. Then, {λ_n, v_n}_{n=1}^N is the eigensystem of P.

Input: Data set S, number of clusters K, Gram matrix W.
Output: A partitioning {S_k}_{k=1}^K.
Algorithm:
1. Compute P according to (2).
2. Find the eigensystem of P and define V = (v_2, ..., v_K).
3. Consider the rows of V to be points in R^{K-1} and cluster them using the K-means clustering algorithm.
4. Define I_k to be the index set of all the rows of V belonging to the k'th cluster.
5. Define the partitioning {S_k}_{k=1}^K, where S_k = {x_n}_{n∈I_k}.
Algorithm 1: The baseline spectral clustering algorithm.

Some properties of P which are important for our discussion are summarized in the following lemma.

LEMMA 1 Assume W is full rank and P is given by (2). Then,
1. P is full rank,
2. λ_1 = 1 and v_1 = [1, 1, ..., 1]^T / √N,
3. λ_n is real and |λ_n| ≤ 1 for all n = 2, 3, ..., N.

The proof is trivial; D is diagonal, thus P is defined by normalizing the rows of W and (1) follows, (2) can be verified by direct calculation, and the first part of (3) is easily proved using the symmetry of W and the second part by using the equality λ_n v_n(m) = Σ_i P_mi v_n(i) for all n, m, and for each n choosing m = argmax_j |v_n(j)|. Property 2 explains why only eigenvectors 2 to K are used to define V; since the first eigenvector is all ones, it contains no grouping information.
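The following is an illustrative Python implementation of Algorithm 1; it is a sketch under the assumption that scikit-learn's KMeans is an acceptable stand-in for step 3, not the authors' code.

```python
import numpy as np
from sklearn.cluster import KMeans

def baseline_spectral_clustering(W, K, random_state=0):
    """Algorithm 1: cluster N points given the Gram matrix W and the number of clusters K."""
    # Step 1: P = D^{-1} W  (equation (2)).
    P = W / W.sum(axis=1, keepdims=True)
    # Step 2: eigensystem of P, eigenvalues sorted in decreasing order.
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    V = eigvecs[:, order[1:K]].real              # columns v_2, ..., v_K
    # Steps 3-5: K-means on the rows of V; the labels define the partition.
    labels = KMeans(n_clusters=K, n_init=10, random_state=random_state).fit_predict(V)
    return labels
```

Because W is symmetric, Lemma 1 guarantees real eigenvalues; taking the real part simply discards numerical noise from the nonsymmetric eigensolver.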

2.2. Why should it work?

Spectral clustering was analyzed using tools from matrix perturbation theory in [2]. There, it was shown that if the affinity matrix is close to block diagonal with K blocks, then the K leading eigenvectors efficiently reflect this structure and the method is guaranteed to perform well. However, empirical studies show that the method is successful even in cases where the matrix is far from block diagonal. A different approach is pursued in [8, 7], where the notion of piecewise constant eigenvectors was introduced.

DEFINITION 3 (Piecewise Constant Eigenvectors (PCE)) Let v be an eigenvector of P and I = {I_k}_{k=1}^K be a partition of 1, 2, ..., N into K disjoint sets. Then, v is said to be a piecewise constant eigenvector of P with respect to I if v(i) = v(j) for all i, j ∈ I_k and k ∈ 1, 2, ..., K.

It was shown in [7] that if P's K leading eigenvectors are PCE with respect to I, then spectral clustering minimizes the multiway normalized cut (MNCut)

MNCut(I) = K − Σ_{k=1}^K Cut(I_k, I_k) / Cut(I_k, I),   (4)

where Cut(I_k, I_l) = Σ_{m∈I_k, n∈I_l} W_mn. An intuitive understanding of the MNCut can be gained by expressing it [7] as MNCut(I) = Σ_{k=1}^K (1 − P(I_k → I_k | I_k)), the sum of transition probabilities between different clusters in a single step. Thus, the MNCut is minimized by the partitioning for which a random walk is most probable to stay in the cluster in which it is located.
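As a concrete reading of (4), the following hypothetical helper evaluates the MNCut of a candidate partition directly from W; the `labels` argument (an integer cluster assignment) is an illustrative convention, not notation from the paper.

```python
import numpy as np

def multiway_ncut(W, labels):
    """MNCut(I) = K - sum_k Cut(I_k, I_k) / Cut(I_k, I), with Cut(A, B) = sum_{m in A, n in B} W_mn."""
    clusters = np.unique(labels)
    mncut = len(clusters)
    for k in clusters:
        in_k = labels == k
        cut_kk = W[np.ix_(in_k, in_k)].sum()     # Cut(I_k, I_k): similarity mass staying inside cluster k
        cut_kI = W[in_k, :].sum()                # Cut(I_k, I): total similarity mass (volume) of cluster k
        mncut -= cut_kk / cut_kI
    return mncut
```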

LEMMA 2 (The Multicut Lemma [7]) Assume W_{N×N} is symmetric with nonnegative elements, and define P according to (2). Assume further that P's K leading eigenvectors are PCE with respect to a partition I* and their eigenvalues are not zero. Then, I* minimizes the MNCut.

Notice that for PCE, K-means clustering in the feature space (Algorithm 1, step 3) is ideal, since it is asked to find clusters whose members are all identical.

3. Structure exploration with random walks

We now examine what happens to the results of Section 2 if we let the random walk take many steps instead of only one. The results turn out to be very informative, and they lead to the derivation of a new clustering algorithm that, given S and W, automatically learns different plausible values for the number of clusters K and scores the resulting partitionings.

Consider using the M ’th order transition matrix P M ,whose elements are

P M

mn

=P (x M =s n |x 0=s m ),(5)

as the af?nity matrix.P M

mn

gives the total probability that a random walk x j ,beginning at s m ,will end up in s n after M steps,considering all possible paths between the nodes.P M

mn

is expected to be high if there is a good path between s m ,s n and low otherwise,hopefully leading to a block di-agonal matrix which is ideal for clustering data [2].How-ever,often in practice we observe a different behavior of P M .If points i,j are in the same cluster,then often there are values of M for which P M i and P M j ,the i ’th and j ’th rows of P M ,becomes very similar.The intuition here is that if points i,j are similar then after suf?cient number of steps we can expect that a particle that begins a random walk in each of them will have the same distribution for its loca-tion after M steps.Another observation is that by varying the number of steps M we explicitly explore similarities at

different scales in the data,and as M increases we expect

to ?nd coarser structure.

The following lemma shows how the results of spectral clustering with P are related to those with P^M as the affinity matrix.

LEMMA 3 Assume W, K are given and let the partitioning I be the result of Algorithm 1. Then, substituting P in step 1 of the algorithm with P^M, for any odd positive integer M, yields the same partitioning I.

It is well known from the theory of Markov chains that P^M is given by multiplying P with itself M times, so that if P = V Λ V^{-1} then P^M = V Λ^M V^{-1}, where V is the matrix whose n'th column is v_n. Thus, if {λ_n, v_n} is the eigensystem of P, then {λ_n^M, v_n} is the eigensystem of P^M. Next, if M is odd then the ordering of the eigenvalues is left unchanged and the same eigenvectors are picked to cluster the data, hence the lemma is proved. This equivalence between P and P^M reveals two important key ideas: (1) spectral clustering implicitly searches for good paths between points, and (2) the eigenvalues can be used to indicate the scale and quality of a partitioning, by separating between the eigenvalues that survive M steps and those that do not. P^M can be analyzed into a sum of N matrices

P^M = Σ_{n=1}^N λ_n^M (v_n v_n^T D) / (v_n^T D v_n),   (6)

each of which depends only on P's eigensystem. This is accomplished by exploiting the fact that v_n^T D v_m = δ_nm, which is due to P being defined by a normalized symmetric matrix (2). To appreciate the implications of this analysis, we first introduce the following two definitions.

DEFINITION 4 (Principal Matrix Component) We refer to the matrix T_n = (v_n v_n^T D) / (v_n^T D v_n) as the n'th principal matrix component (PMC) of P^M, and to λ_n^M as its weight.

DEFINITION 5 (Idempotent-Orthogonal Basis of a Matrix) Assume A is a square full-rank matrix of size N, and let δ_nm = 1 if n = m and 0 otherwise. The set of square matrices of size N, {U_n}_{n=1}^N, is said to be an idempotent-orthogonal matrix basis of A if U_n U_m = δ_nm U_n and there is a set of N numbers {μ_n}_{n=1}^N such that A = Σ_{n=1}^N μ_n U_n.

An idempotent matrix satisfies U_n U_n = U_n, and the orthogonality is due to the condition U_n U_m = 0 for n ≠ m. Although it seems redundant, it is worth mentioning here that the sum of the ranks of the U_n with nonzero weights μ_n must equal the rank of A, and if all μ_n are nonzero, then the rank of every matrix U_n is one.

We can now give an intuitive interpretation of P's PMCs and eigenvalues, analogous to PCA. In PCA the eigenvectors point in an orthogonal set of directions with maximum variance, and the eigenvalues indicate the variance in these directions. Here, the PMCs are an analysis of P into an idempotent-orthogonal basis {T_n}_{n=1}^N, with weights λ_n^M. Notice that each T_n is independent of M, but the absolute value of its weight is monotonically decreasing with respect to M (recall that |λ_n| ≤ 1). This motivates interpreting the eigenvalues as indicators of the structure in the data revealed by T_n. If λ_n is very close to 1, such that λ_n^M is also close to 1, then T_n 'survives' the random walk and is related to stable groups in the data, whereas the λ_n^M that tend towards zero are related to unstable groups. So, the PMCs can be interpreted as the projection of P (or P^M, since they have the same eigenvectors) onto its principal matrix components, revealing structure at multiple scales, with λ_n^M indicating how stable each component is with respect to M. Instead of variance, we talk here about robustness to the number of steps M in the random walk. The bigger |λ_n^M| is, the more robust T_n is to a random walk of M steps, and the better the structure revealed by it. Next, recall that T_n = (v_n v_n^T D) / (v_n^T D v_n), so that given D, v_n is unique and there is a one-to-one correspondence between T_n and v_n. Thus, the higher the value of λ_n^M, the larger the scale of the structure revealed by v_n.

To help gain some more intuition, one can examine the case M → ∞. Using Lemma 1 we get P^∞ = 1 · (v_1 v_1^T D) / (v_1^T D v_1) = (1 / Σ_{n=1}^N D_nn) · 1 [D_11, D_22, ..., D_NN], where 1 is the all-ones column vector; this is the matrix whose rows are all equal to the stationary distribution of the Markov chain with transition matrix P (its first left eigenvector). This result recovers a well-known property, that for an infinite number of steps P(x_∞ = s_n | x_0 = s_m) = D_nn / Σ_{j=1}^N D_jj, meaning that for an infinite number of steps the random walk forgets where it began. If the data has well-separated clusters, we can expect similar behavior for finite M too. In this case, a particle that begins its walk in one of the clusters is expected to stay there for a long time, until the distribution within the cluster resembles the stationary distribution over it, as if other clusters did not exist. This can be formalized by the following theorem.
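A small numerical illustration of this limit (a toy sketch, not an experiment from the paper): the rows of a high odd power of P approach the stationary distribution π_n = D_nn / Σ_j D_jj.

```python
import numpy as np

W = np.exp(-np.random.rand(20, 20)); W = (W + W.T) / 2   # toy symmetric Gram matrix
d = W.sum(axis=1)
P = W / d[:, None]                                       # P = D^{-1} W
pi = d / d.sum()                                         # stationary distribution
P_inf = np.linalg.matrix_power(P, 1001)                  # large odd number of steps
assert np.allclose(P_inf, np.tile(pi, (len(pi), 1)), atol=1e-6)
```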

THEOREM 1 Let {S_k}_{k=1}^K be a partition of S, and s_m ∈ S_k for some k = 1, 2, ..., K. Then, if there is an odd number of steps M such that P(x_M ∈ S_l | x_0 = s_m) is equal for all s_m ∈ S_k and all l ≠ k, then spectral clustering with P as the affinity matrix minimizes the MNCut.

Using Lemma 4 from [7], it can be shown that if the condition of Theorem 1 holds, then P^M has PCE. In this case, using the argument in the proof of Lemma 3 shows that P also has PCE, and then Lemma 2 completes the proof. This result gives an intuitive explanation of Lemma 2 and is closely related to the idea of PCE. Instead of considering a random walk of a single step, it allows us to consider a random walk of any number of steps, while still maintaining the powerful guarantees of Lemma 2. It also suggests a clear and intuitive explanation of why spectral clustering is so successful even when the data sets are extremely complex. Since we consider a random walk with any number of steps, we implicitly explore structure in the data at multiple scales, and if there is a good path between points then spectral clustering will group them together.

Figure 1. Example of Principal Matrix Components. The data set, its transition matrix P, and its leading PMCs (6) and eigenvalues are shown. Notice how T_1 is the matrix whose rows are all equal to the stationary distribution of P. The PMCs form an idempotent-orthogonal basis for P (Definition 5). The leading eigenvalues of P (bottom row, middle) and their M'th power (bottom row, right) are shown (notice the different scale on the y-axis) for M = 8076, as automatically selected by Algorithm 2. Notice how only two eigenvalues 'survive' the random walk, such that P^8076 ≈ 1·T_1 + 1·T_2 (bottom, left), and how the PMCs are related to hierarchical partitioning of the data.

To see the relation with PCE, assume that P has K PCE with respect to the partition and that there exists a value of M for which λ_n^M ≈ 1 for n = 2, ..., K and λ_n^M ≈ 0 for n = K+1, ..., N. Then, we can approximate P^M by its projection onto the K leading PMCs as P^M ≈ Σ_{n=1}^K λ_n^M T_n ≈ Σ_{n=1}^K 1 · (v_n v_n^T D) / (v_n^T D v_n). If each v_n is PCE, it is easy to verify that the condition of Theorem 1 is satisfied, and spectral clustering minimizes the MNCut.

4. Algorithm

We saw that by replacing λ with λ^M and interpreting it as an indication of the convergence of the random walk, the eigenvectors contain cluster information at multiple scales. So, given the transition matrix P and the number of clusters K, one can measure the quality of the partitioning by searching for the number of steps M such that the eigengap λ_K^M − λ_{K+1}^M is maximized. However, a closer look suggests a more powerful usage of this approach. Since different M are associated with different scales of data scattering, we can combine searching for good partitionings with inferring the number of clusters K, all from the eigenvalues of P. This is done by searching over M and looking for local maxima of the maximal eigengap. The number of clusters is inferred from the location of the maximal eigengap, and this maximal value can be used as a quality measure for the partitioning, as formally defined by

Δ(M) = max_k (λ_k^M − λ_{k+1}^M),   K(M) = argmax_k (λ_k^M − λ_{k+1}^M).   (7)

So, for every M we have an estimate K(M) for the number of clusters, and Δ(M) is interpreted as an indication of the plausibility of the partitioning. Algorithm 2 combines (7) with the following two key ideas. First, good partitionings should have a high value of Δ(M), ideally 1. Second, Δ(M) is not monotonic, and as M increases it is expected to have local maxima that are associated with partitionings at increasing scales.

Input: Data S, Gram matrix W
Output: L partitionings of S with associated stability and plausibility measures {{S_k^l}_{k=1}^{K_l}, α_l, β_l}_{l=1}^L
Algorithm:
1. Compute, in the following order:
   - P according to (2)
   - {λ_n, v_n}_{n=1}^N, the eigensystem of P
   - Δ(M) and K(M) according to (7) for increasing values of M, beginning at M_0 = 1, until K(M) = 1 (set this number of steps to M_max)
2. Find all the local maxima of Δ(M), and order them {Δ(M_l)}_{l=1}^L.
3. For every l = 1, 2, ..., L:
   - Call Algorithm 1 with K(M_l) as the number of clusters to obtain {S_k^l}_{k=1}^{K_l}.
   - Set α_l = (M_l − M_{l−1}) / M_max, β_l = Δ(M_l).
Algorithm 2: An algorithm to automatically learn different partitionings of S.
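A rough sketch of the eigengap scan behind (7) and Algorithm 2; the helper name and the geometric grid over odd values of M are illustrative choices rather than the exact schedule of step 1.

```python
import numpy as np

def eigengap_profile(W, M_grid, k_max=None):
    """For each M in M_grid return Delta(M) and K(M) as defined in (7)."""
    d = W.sum(axis=1)
    # Eigenvalues of P = D^{-1} W, obtained from the symmetric matrix D^{-1/2} W D^{-1/2}.
    lam = np.sort(np.linalg.eigvalsh(W / np.sqrt(np.outer(d, d))))[::-1]
    lam = lam[: (k_max or len(lam))]
    delta, K = [], []
    for M in M_grid:
        gaps = lam[:-1] ** M - lam[1:] ** M          # lambda_k^M - lambda_{k+1}^M
        K.append(int(np.argmax(gaps)) + 1)           # K(M): position of the largest gap
        delta.append(float(gaps.max()))
    return np.array(delta), np.array(K)

# Scan over odd M (Lemma 3); local maxima of Delta(M) mark plausible partitionings,
# and the corresponding K(M) gives the number of clusters at that scale.
M_grid = np.unique(2 * np.round(np.logspace(0, 5, 200)).astype(int) + 1)
# delta, K = eigengap_profile(W, M_grid)
```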

The algorithm is demonstrated in Figure 2 with a data set consisting of three interlocked rings, two of which have a bridge between them. By scanning over M we find that Δ(M) is locally maximized (bottom graph) at 3 points, M_{1,2,3} = 40, 312, 6309 respectively. Each of these locations is associated with a different number of clusters, K(M_{1,2,3}) = 9, 3, 2 respectively. Each of the resulting partitionings is shown in the top plots. Notice how small values of M (few steps of the random walk) are associated with a high number of clusters (each with a small number of points). We also define stability and plausibility measures (Algorithm 2, step 3) which can be used to select between partitionings. The stability measures for the partitionings in Figure 2 are α_{1,2,3} ≈ 10^{1.7}/10^{4.5}, 10^{3.2}/10^{4.5}, 10^{4.3}/10^{4.5} respectively, and the plausibility measure associated with each of them is β_{1,2,3} = 0.43, 0.8, 0.85. We deliberately chose an example which returns three different partitionings, to demonstrate how α and β can be used to help us decide between them.

Figure 2. A demonstration of random walks. S consists of three interlocked rings, two of which share a bridge. M is plotted on a base-10 logarithmic scale.

Figure 3 shows the performance of the algorithm applied to a data set which consists of 300 noisy and randomly rotated digits with 3 labels (100 images per label). Each image is given by a 26×26 matrix and is represented as a point in R^676. Notice how P's spectrum is practically flat and noninformative, while its power series indicates that there are 3 eigenvalues that survive the random-walk exploration. Due to the random rotation of the digits, where the rotation is uniformly distributed over [0°, 360°), this data set is formed of rings in R^676. Applying K-means to this data set severely mixes the labels, indicating that the rings are interlocked. In contrast, applying Algorithm 2 to this set results in a perfect separation of the digits. Thus, this data set is similar in nature to the one in Figure 2, and the fact that only one peak of Δ(M) emerges teaches us that there are no good bridges between clusters.

Figure 3. Digits data set. Algorithm 2 applied to a data set consisting of 300 noisy and randomly rotated digits with 3 labels. This data set is similar in nature to the one in Figure 2 (see text for details). The algorithm automatically finds the number of clusters and a perfect partitioning of the data.

5. Learning the affinity matrix

Recall (Definition 1) that the Gram matrix is parameterized by σ, the length scale of the affinity function. Choosing a good value for this parameter is crucial for the algorithm to be successful. How to choose the parameters of the affinity matrix is still an open question and an active research area, with a few possible solution strategies in the literature. For example, in [2] it was assumed that K is given and σ was chosen by minimizing the distortion of the K-means algorithm in the feature space (Algorithm 1, step 3). In [5] the affinity matrix was learned by forcing it to perform well on clustering a given labeled sequence. A training sequence is also used in [9], where the affinity matrix is learned by optimizing a regularized objective function.

Input: Data set S
Output: L partitionings of S with associated stability and plausibility measures {{S_k^l}_{k=1}^{K_l}, α_l, β_l}_{l=1}^L
Algorithm:
0. Initialize Σ according to (8).
1. For i = 1:|S|, set σ_i = Σ(i) and compute Δ(σ_i, M) and K(σ_i, M) according to (9).
2. Choose σ* = argmax_{σ_i, M} Δ(σ_i, M) to be the parameter that best reveals structure in S.
3. Call Algorithm 2 with W(σ*) as the Gram matrix.
Algorithm 3: An algorithm to find different pairs of (σ, K).

A major drawback of the above methods is that they require K to be given with the data, and they [5, 9] involve solving complex optimization problems to learn the affinity matrix. In Section 4 we saw how, given W, different values for K can easily be inferred with almost no additional complexity, besides computing the powers of the leading eigenvalues of P for a range of possible values of M. However, this procedure assumes the affinity matrix, parameterized by σ, is known. To learn this parameter, notice that its value eventually determines the transition probabilities between pairs of points, and by increasing it we implicitly increase the probability of a random walk to move between points that are further apart. It is worth stressing here the fundamental difference between scanning over the number of steps M and over the length-scale parameter σ. Increasing the value of σ results in a transition matrix P(σ) that spreads the probability of a single step to move between points that are further and further apart. By scanning over M, we simply discover plausible numbers of clusters in the data set, based on the single-step transition probabilities given by P(σ). It can be summarized as follows: by setting the value of σ we explicitly define the structure in the data, and by scanning over M we reveal this structure.

The interesting interval for σ roughly stretches from the smallest to the largest distance between pairs of data points. Σ is defined to satisfy this condition:

Σ = [σ_min : (σ_max − σ_min)/(|S| − 1) : σ_max],
σ_min = min_{m,n} ‖x_m − x_n‖,   σ_max = max_{m,n} ‖x_m − x_n‖,   (8)

where [a : b : c] denotes the sequence a, a+b, a+2b, ..., c. The boundaries of Σ are defined such that for σ_min most of the mass of the transition probability, for each point in S, is distributed over the very nearest few data points, whereas for σ_max it is distributed more evenly over a large number of points. Using |S| and not N simplifies the notation in Section 6. Notice that there are now |S| candidate transition matrices which can be used for the actual partitioning of the data. To take this into consideration we can generalize the functions (7) to be functions of both M and σ:

Δ(M, σ) = max_k (λ_k^M(σ) − λ_{k+1}^M(σ)),   K(M, σ) = argmax_k (λ_k^M(σ) − λ_{k+1}^M(σ)).   (9)

The two-dimensional functions (9) allow the derivation of a family of algorithms that learn σ and K simultaneously. The general idea is that different values of σ define relations between points at different scales and result in discovering different numbers of clusters in the data. The parameter σ defines the local neighborhood structure, while scanning over M reveals the global structure. We therefore optimize over both σ and M and look for the combination that yields the maximal value of Δ(M, σ). Algorithm 3 gives a formal description of how this can be implemented.
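A hedged sketch of Algorithm 3: it grid-searches σ over (8) and M over a fixed range, keeping the (σ, K) pair with the largest eigengap of (9). The function name, the grids, and the guard against zero distances are assumptions made for illustration.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def learn_sigma_and_K(S, n_sigma=None, M_grid=None):
    """Algorithm 3 (sketch): search sigma over the grid (8) and M, maximizing Delta(M, sigma) of (9)."""
    dists = pdist(S)                                     # all pairwise Euclidean distances
    n_sigma = n_sigma or len(S)
    Sigma = np.linspace(max(dists.min(), 1e-12), dists.max(), n_sigma)   # equation (8)
    if M_grid is None:
        M_grid = np.unique(2 * np.round(np.logspace(0, 5, 100)).astype(int) + 1)   # odd M
    sq = squareform(dists) ** 2
    best = (-np.inf, None, None)                         # (Delta, sigma*, K)
    for sigma in Sigma:
        W = np.exp(-sq / sigma ** 2)                     # Gram matrix (1)
        d = W.sum(axis=1)
        lam = np.sort(np.linalg.eigvalsh(W / np.sqrt(np.outer(d, d))))[::-1]
        for M in M_grid:
            gaps = lam[:-1] ** M - lam[1:] ** M          # eigengaps of (9)
            if gaps.max() > best[0]:
                best = (float(gaps.max()), float(sigma), int(np.argmax(gaps)) + 1)
    return best    # W(sigma*) and K of the best pair are then handed to Algorithms 1/2
```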

6. Hierarchical clustering

Algorithm 3 is inherently inefficient, as it uses in step 1 the entire data set S for every length scale σ, completely ignoring the fact that different length scales result in different types of partitioning. In general, the larger σ is, the larger each cluster is expected to be. Algorithm 4 is essentially a hierarchical implementation of Algorithm 3, beginning by separating at a large scale, and then recursively partitioning each of the resulting clusters. Specifically, it first searches for a good initial partitioning of S, beginning with Σ(|S|) and going backwards. Once a local maximum of Δ(σ, M) is found, S is partitioned, and each cluster in the partitioning is then treated as a new data set to which Algorithm 4 is applied recursively (step 4). The recursive nature of Algorithm 4 is implemented by growing a tree, adding more and more levels until the clusters reach a desired size or the performance improvement crosses a threshold. Notice that Algorithm 4 differs from Algorithms 2 and 3 in producing only one partitioning of the data. Nevertheless, the tree structure provides additional information about the scale of the relation between different clusters.

Input: Data set S.
Output: A directed tree T; each leaf contains all the points in the associated cluster.
Algorithm:
0. Initialize Σ according to (8), and set T to be an empty tree.
1. Starting with the largest value Σ(|S|) and going backwards, compute Δ(σ_i, M) until a local maximum emerges at some pair (σ*, M).
2. Compute K(σ*, M) according to (9).
3. Call Algorithm 1 with W = W(σ*) and K = K(σ*, M) to obtain {S_k}_{k=1}^K.
4. For every k = 1:K call Algorithm 4 (recursively) with S_k and define the output as the k'th child of T.
Algorithm 4: Hierarchical implementation of Algorithm 3.

In addition to the many conceptual advantages of hierarchical data clustering, such as simplicity of representation or the efficiency of the 'divide and conquer' principle, the computational burden is also eased significantly. Notice that in each recursion the data is clustered, so that the size of the set S used to call the algorithm recursively decreases exponentially. This translates into easier computational tasks, both by using smaller matrices P and a smaller number of elements in Σ. Another implementation issue is the spectral decomposition of P required for step 2. Since the maximal eigengap is all we are looking for, it is only necessary to compute a small number of the largest eigenvalues, much smaller than N, the total number of eigenvalues. Recall that P is usually a sparse matrix, thus the Lanczos algorithm can be used to achieve an efficient implementation and reduce execution time (for an implementation in MATLAB see also the command eigs).
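For the implementation note above, here is a minimal SciPy sketch of the Lanczos-based computation of only the few largest eigenvalues (SciPy's eigsh playing the role of MATLAB's eigs), assuming a sparse symmetric W.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def top_eigenvalues(W, k=10):
    """Largest k eigenvalues of P = D^{-1} W, via the symmetric matrix D^{-1/2} W D^{-1/2}."""
    d = np.asarray(W.sum(axis=1)).ravel()
    Dinv_sqrt = sp.diags(1.0 / np.sqrt(d))
    A = Dinv_sqrt @ W @ Dinv_sqrt                # same spectrum as P, but symmetric and sparse
    lam = eigsh(A, k=k, which='LA', return_eigenvectors=False)   # Lanczos iteration
    return np.sort(lam)[::-1]
```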

Figure 4. Numerical demonstration of Algorithm 4. The data S was generated from a mixture of 7 nonuniformly scattered Gaussians. Every new partitioning in the tree is achieved using the value of σ which maximizes Δ(σ, M) for the associated data set. Clusterings at higher levels of the tree are obtained using larger values of σ, and each split is associated with a different value of σ.

Figure 5. Numerical demonstration of Algorithm 4. Data S consists of 9 words arranged on 3 lines. {S_i}_{i=1}^3 are the initial clusters found by Algorithm 4, using the top values of Σ. For each i = 1, 2, 3, Algorithm 4 was called recursively with S_i, and the sets S_{i,j} were clustered as the children of S_i. The input to the algorithm is the set of all points, and the output is a tree with 2 levels and 9 leaves.
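Putting the pieces together, a recursive sketch of Algorithm 4 might look as follows; it reuses the illustrative `learn_sigma_and_K` and `baseline_spectral_clustering` helpers sketched earlier and a simplified minimum-size stopping rule, none of which are prescribed by the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def hierarchical_cluster(S, idx=None, min_size=10):
    """Algorithm 4 (sketch): recursively split S and return a nested list of point indices."""
    idx = np.arange(len(S)) if idx is None else idx
    if len(idx) <= min_size:
        return idx.tolist()                              # leaf: one cluster
    # Steps 1-2 (simplified): pick (sigma*, K) by the eigengap criterion of (9),
    # here via the grid-search sketch learn_sigma_and_K from Section 5.
    _, sigma_star, K = learn_sigma_and_K(S[idx])
    if K is None or K <= 1:
        return idx.tolist()
    # Step 3: split with Algorithm 1 on W(sigma*); step 4: recurse on every child.
    sq = squareform(pdist(S[idx])) ** 2
    labels = baseline_spectral_clustering(np.exp(-sq / sigma_star ** 2), K)
    return [hierarchical_cluster(S, idx[labels == k], min_size) for k in range(K)]
```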

7. Discussion

This paper sheds new light on spectral clustering. Specifically, Lemma 3 and Theorem 1 generalize the results of [7] by considering P^M instead of P, leading to a new theoretical framework for understanding spectral clustering. It was shown how P's eigenvalues can be interpreted as an indication of the cluster structure of the data, clarifying two aspects of spectral methods: first, it explains why we choose the eigenvectors associated with the largest eigenvalues, and second, it gives a direct relation between the eigenvalues and the number of clusters in the data. These observations first motivated Algorithm 2, where, given the data set S and the Gram matrix W, several plausible values for the number of clusters K are inferred. In Algorithm 3 only the data S is given, and the search for the affinity matrix is combined with the search for the number of clusters K. Finally, to have an efficient implementation of this algorithm, we defined a hierarchical algorithm that iteratively partitions the data so that the computational burden at each stage decreases exponentially. The algorithm for determining σ can be extended to finding any other single parameter of a parameterized Gram matrix.

Our results also give a general framework for matrix decomposition which may be useful for other data analysis and machine learning algorithms (e.g. PCA, Markov chains). In relation to image segmentation algorithms, we already discussed how our results help in understanding when (there are no good bridges between segments) and why (the random walk does not easily move between segments) the method is expected to be successful. In addition, Theorem 1 can also be used to improve other existing algorithms for image segmentation. For example, Theorem 1 in [9] shows how the eigengap λ_K − λ_{K+1} can be used to define a quality measure, but because this number often tends to be very small, using it directly leads to numerical instability of their algorithm and the authors resort to a less efficient objective function. Using our method, this eigengap can easily be replaced by λ_K^M − λ_{K+1}^M for any M for which the numerical problems are minimized. On the other hand, and unlike [9], our method assigns all dimensions the same length-scale parameter. Thus, although it allows a very simple implementation, it has the weakness of not being able to perform feature selection. One direction for future work is extending our algorithms to allow different σ along different dimensions.

References

[1] A. Shokoufandeh, S. Mancoridis, and M. Maycock. Applying Spectral Methods to Software Clustering. Proceedings of the Ninth Working Conference on Reverse Engineering (WCRE'02).
[2] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. Advances in Neural Information Processing Systems (NIPS), 14, 2001.
[3] D. Verma and M. Meila. A comparison of spectral clustering methods. UW CSE Technical Report, 2001.
[4] F. R. Bach and M. I. Jordan. Blind one-microphone speech separation: A spectral learning approach. Advances in Neural Information Processing Systems (NIPS), 16, 2004.
[5] F. R. Bach and M. I. Jordan. Learning Spectral Clustering. Advances in Neural Information Processing Systems (NIPS), 16, 2004.
[6] J. Shi and J. Malik. Normalized Cuts and Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22.
[7] M. Meila. The multicut lemma. UW Statistics Technical Report 417, 2001.
[8] M. Meila and J. Shi. A Random Walks View of Spectral Segmentation. Tenth International Workshop on Artificial Intelligence and Statistics (AISTATS), 2001.
[9] M. Meila, S. Shortreed, and L. Xu. Regularized spectral learning. UW Statistics Technical Report 465, 2005.
[10] W. Pentney and M. Meila. Spectral Clustering of Biological Sequence Data. To appear in AAAI 2005.
