Probabilistic Search for Object Segmentation and Recognition
Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data

John Lafferty (LAFFERTY@), Andrew McCallum (MCCALLUM@), Fernando Pereira (FPEREIRA@)
WhizBang! Labs-Research, 4616 Henry Street, Pittsburgh, PA 15213 USA
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 USA
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104 USA

Abstract

We present conditional random fields, a framework for building probabilistic models to segment and label sequence data. Conditional random fields offer several advantages over hidden Markov models and stochastic grammars for such tasks, including the ability to relax strong independence assumptions made in those models. Conditional random fields also avoid a fundamental limitation of maximum entropy Markov models (MEMMs) and other discriminative Markov models based on directed graphical models, which can be biased towards states with few successor states. We present iterative parameter estimation algorithms for conditional random fields and compare the performance of the resulting models to HMMs and MEMMs on synthetic and natural-language data.

1. Introduction

The need to segment and label sequences arises in many different problems in several scientific fields. Hidden Markov models (HMMs) and stochastic grammars are well understood and widely used probabilistic models for such problems. In computational biology, HMMs and stochastic grammars have been successfully used to align biological sequences, find sequences homologous to a known evolutionary family, and analyze RNA secondary structure (Durbin et al., 1998). In computational linguistics and computer science, HMMs and stochastic grammars have been applied to a wide variety of problems in text and speech processing, including topic segmentation, part-of-speech (POS) tagging, information extraction, and syntactic disambiguation (Manning & Schütze, 1999).

HMMs and stochastic grammars are generative models, assigning a joint probability to paired observation and label sequences; the parameters are typically trained to maximize the joint likelihood of training examples. To define a joint probability over observation and label sequences, a generative model needs to enumerate all possible observation sequences, typically requiring a representation in which observations are task-appropriate atomic entities, such as words or nucleotides. In particular, it is not practical to represent multiple interacting features or long-range dependencies of the observations, since the inference problem for such models is intractable.

This difficulty is one of the main motivations for looking at conditional models as an alternative. A conditional model specifies the probabilities of possible label sequences given an observation sequence. Therefore, it does not expend modeling effort on the observations, which at test time are fixed anyway. Furthermore, the conditional probability of the label sequence can depend on arbitrary, non-independent features of the observation sequence without forcing the model to account for the distribution of those dependencies. The chosen features may represent attributes at different levels of granularity of the same observations (for example, words and characters in English text), or aggregate properties of the observation sequence (for instance, text layout). The probability of a transition between labels may depend not only on the current observation, but also on past and future observations, if available. In contrast, generative models must make very strict independence assumptions on the observations, for instance conditional independence given the labels, to achieve tractability.
Maximum entropy Markov models (MEMMs) are conditional probabilistic sequence models that attain all of the above advantages (McCallum et al., 2000). In MEMMs, each source state [Footnote 1] has an exponential model that takes the observation features as input, and outputs a distribution over possible next states. These exponential models are trained by an appropriate iterative scaling method in the maximum entropy framework. Previously published experimental results show MEMMs increasing recall and doubling precision relative to HMMs in a FAQ segmentation task.

[Footnote 1: Output labels are associated with states; it is possible for several states to have the same label, but for simplicity in the rest of this paper we assume a one-to-one correspondence.]

MEMMs and other non-generative finite-state models based on next-state classifiers, such as discriminative Markov models (Bottou, 1991), share a weakness we call here the label bias problem: the transitions leaving a given state compete only against each other, rather than against all other transitions in the model. In probabilistic terms, transition scores are the conditional probabilities of possible next states given the current state and the observation sequence. This per-state normalization of transition scores implies a "conservation of score mass" (Bottou, 1991) whereby all the mass that arrives at a state must be distributed among the possible successor states. An observation can affect which destination states get the mass, but not how much total mass to pass on. This causes a bias toward states with fewer outgoing transitions. In the extreme case, a state with a single outgoing transition effectively ignores the observation. In those cases, unlike in HMMs, Viterbi decoding cannot downgrade a branch based on observations after the branch point, and models with state-transition structures that have sparsely connected chains of states are not properly handled. The Markovian assumptions in MEMMs and similar state-conditional models insulate decisions at one state from future decisions in a way that does not match the actual dependencies between consecutive states.

This paper introduces conditional random fields (CRFs), a sequence modeling framework that has all the advantages of MEMMs but also solves the label bias problem in a principled way. The critical difference between CRFs and MEMMs is that a MEMM uses per-state exponential models for the conditional probabilities of next states given the current state, while a CRF has a single exponential model for the joint probability of the entire sequence of labels given the observation sequence. Therefore, the weights of different features at different states can be traded off against each other.

We can also think of a CRF as a finite state model with unnormalized transition probabilities. However, unlike some other weighted finite-state approaches (LeCun et al., 1998), CRFs assign a well-defined probability distribution over possible labelings, trained by maximum likelihood or MAP estimation. Furthermore, the loss function is convex [Footnote 2], guaranteeing convergence to the global optimum. CRFs also generalize easily to analogues of stochastic context-free grammars that would be useful in such problems as RNA secondary structure prediction and natural language processing.

[Footnote 2: In the case of fully observable states, as we are discussing here; if several states have the same label, the usual local maxima of Baum-Welch arise.]

[Figure 1. Label bias example, after (Bottou, 1991). For conciseness, we place observation-label pairs on transitions rather than states; the symbol '_' represents the null output label.]
We present the model, describe two training procedures and sketch a proof of convergence. We also give experimental results on synthetic data showing that CRFs solve the classical version of the label bias problem, and, more significantly, that CRFs perform better than HMMs and MEMMs when the true data distribution has higher-order dependencies than the model, as is often the case in practice. Finally, we confirm these results as well as the claimed advantages of conditional models by evaluating HMMs, MEMMs and CRFs with identical state structure on a part-of-speech tagging task.

2. The Label Bias Problem

Classical probabilistic automata (Paz, 1971), discriminative Markov models (Bottou, 1991), maximum entropy taggers (Ratnaparkhi, 1996), and MEMMs, as well as non-probabilistic sequence tagging and segmentation models with independently trained next-state classifiers (Punyakanok & Roth, 2001) are all potential victims of the label bias problem.

For example, Figure 1 represents a simple finite-state model designed to distinguish between the two words "rib" and "rob". Suppose that the observation sequence is "rib". In the first time step, "r" matches both transitions from the start state, so the probability mass gets distributed roughly equally among those two transitions. Next we observe "i". Both states 1 and 4 have only one outgoing transition. State 1 has seen this observation often in training, state 4 has almost never seen this observation; but like state 1, state 4 has no choice but to pass all its mass to its single outgoing transition, since it is not generating the observation, only conditioning on it. Thus, states with a single outgoing transition effectively ignore their observations. More generally, states with low-entropy next state distributions will take little notice of observations. Returning to the example, the top path and the bottom path will be about equally likely, independently of the observation sequence. If one of the two words is slightly more common in the training set, the transitions out of the start state will slightly prefer its corresponding transition, and that word's state sequence will always win. This behavior is demonstrated experimentally in Section 5.

Léon Bottou (1991) discussed two solutions for the label bias problem. One is to change the state-transition structure of the model. In the above example we could collapse states 1 and 4, and delay the branching until we get a discriminating observation. This operation is a special case of determinization (Mohri, 1997), but determinization of weighted finite-state machines is not always possible, and even when possible, it may lead to combinatorial explosion. The other solution mentioned is to start with a fully-connected model and let the training procedure figure out a good structure. But that would preclude the use of prior structural knowledge that has proven so valuable in information extraction tasks (Freitag & McCallum, 2000).
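The effect is easy to reproduce numerically. Below is a minimal Python sketch of the per-state-normalized scores of Figure 1; the transition tables are hypothetical stand-ins chosen only to mimic per-state normalization, under which every state's outgoing scores sum to one regardless of the observation.

```python
# Hypothetical per-state transition tables for the rib/rob automaton of Figure 1.
# Each inner dict is P(next | state, obs); per-state normalization forces every
# row to sum to 1, no matter how unlikely the observation is for that branch.
TRANS = {
    (0, "r"): {1: 0.5, 4: 0.5},   # 'r' matches both branches equally
    (1, "i"): {2: 1.0},           # state 1 expects 'i' and has one successor
    (1, "o"): {2: 1.0},           # ...and must still pass all mass on 'o'
    (4, "i"): {5: 1.0},           # state 4 expects 'o' but cannot discount 'i'
    (4, "o"): {5: 1.0},
    (2, "b"): {3: 1.0},
    (5, "b"): {6: 1.0},
}

def path_probability(path, obs):
    """Product of per-state transition probabilities along a state path."""
    p, state = 1.0, path[0]
    for next_state, symbol in zip(path[1:], obs):
        p *= TRANS[(state, symbol)].get(next_state, 0.0)
        state = next_state
    return p

# On observation "rib", the rib path and the rob path score identically,
# because states 1 and 4 ignore the discriminating observation 'i'.
print(path_probability([0, 1, 2, 3], "rib"))  # 0.5
print(path_probability([0, 4, 5, 6], "rib"))  # 0.5
```

Whichever branch is slightly favored out of the start state wins for every observation sequence, which is exactly the failure mode described above.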
Proper solutions require models that account for whole state sequences at once by letting some transitions "vote" more strongly than others depending on the corresponding observations. This implies that score mass will not be conserved, but instead individual transitions can "amplify" or "dampen" the mass they receive. In the above example, the transitions from the start state would have a very weak effect on path score, while the transitions from states 1 and 4 would have much stronger effects, amplifying or damping depending on the actual observation, and a proportionally higher contribution to the selection of the Viterbi path. [Footnote 3]

[Footnote 3: Weighted determinization and minimization techniques shift transition weights while preserving overall path weight (Mohri, 2000); their connection to this discussion deserves further study.]

In the related work section we discuss other heuristic model classes that account for state sequences globally rather than locally. To the best of our knowledge, CRFs are the only model class that does this in a purely probabilistic setting, with guaranteed global maximum likelihood convergence.

3. Conditional Random Fields

In what follows, X is a random variable over data sequences to be labeled, and Y is a random variable over corresponding label sequences. All components Y_i of Y are assumed to range over a finite label alphabet. For example, X might range over natural language sentences and Y range over part-of-speech taggings of those sentences, with the label alphabet being the set of possible part-of-speech tags. The random variables X and Y are jointly distributed, but in a discriminative framework we construct a conditional model p(Y|X) from paired observation and label sequences, and do not explicitly model the marginal p(X).

Definition. Let G = (V, E) be a graph such that Y = (Y_v), v in V, so that Y is indexed by the vertices of G. Then (X, Y) is a conditional random field in case, when conditioned on X, the random variables Y_v obey the Markov property with respect to the graph: p(Y_v | X, Y_w, w != v) = p(Y_v | X, Y_w, w ~ v), where w ~ v means that w and v are neighbors in G.

Thus, a CRF is a random field globally conditioned on the observation X. Throughout the paper we tacitly assume that the graph G is fixed. In the simplest and most important example for modeling sequences, G is a simple chain or line. X may also have a natural graph structure; yet in general it is not necessary to assume that X and Y have the same graphical structure, or even that X has any graphical structure at all. However, in this paper we will be most concerned with sequences X = (X_1, ..., X_n) and Y = (Y_1, ..., Y_n).

If the graph G = (V, E) of Y is a tree (of which a chain is the simplest example), its cliques are the edges and vertices. Therefore, by the fundamental theorem of random fields (Hammersley & Clifford, 1971), the joint distribution over the label sequence Y given X has the form

p_\theta(y|x) \propto \exp\Big( \sum_{e \in E, k} \lambda_k f_k(e, y|_e, x) + \sum_{v \in V, k} \mu_k g_k(v, y|_v, x) \Big),   (1)

where x is a data sequence, y a label sequence, and y|_S is the set of components of y associated with the vertices in subgraph S. We assume that the features f_k and g_k are given and fixed. For example, a Boolean vertex feature g_k might be true if the word X_i is upper case and the tag Y_i is "proper noun."

The parameter estimation problem is to determine the parameters \theta = (\lambda_1, \lambda_2, ...; \mu_1, \mu_2, ...) from training data with empirical distribution \tilde{p}(x, y). In Section 4 we describe an iterative scaling algorithm that maximizes the log-likelihood objective function

O(\theta) = \sum_{x,y} \tilde{p}(x, y) \log p_\theta(y|x).

As a particular case, we can construct an HMM-like CRF by defining one feature for each state pair (y', y), and one feature for each state-observation pair (y, x):

f_{y',y}(\langle u,v \rangle, y|_{\langle u,v \rangle}, x) = \delta(y_u, y')\,\delta(y_v, y),
g_{y,x}(v, y|_v, x) = \delta(y_v, y)\,\delta(x_v, x).
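To make the weighted feature sum inside equation (1) concrete, here is a minimal sketch with two illustrative features: a Boolean vertex feature of the kind just described and a simple edge feature. The feature functions, weights, and example sentence are our own stand-ins, not taken from the paper.

```python
import math

def f_edge(y_prev, y_cur, x, i):
    """Edge feature: tag DT immediately followed by tag NN."""
    return 1.0 if (y_prev, y_cur) == ("DT", "NN") else 0.0

def g_vertex(y_cur, x, i):
    """Vertex feature: word is capitalized and tag is a proper noun."""
    return 1.0 if x[i][0].isupper() and y_cur == "NNP" else 0.0

LAMBDA, MU = 0.8, 1.5   # hypothetical weights lambda_k, mu_k

def unnormalized_score(y, x):
    """exp(sum of weighted edge and vertex features) over the chain."""
    total = 0.0
    for i in range(len(x)):
        if i > 0:
            total += LAMBDA * f_edge(y[i - 1], y[i], x, i)
        total += MU * g_vertex(y[i], x, i)
    return math.exp(total)

x = ["The", "Fed", "raises", "rates"]
print(unnormalized_score(["DT", "NNP", "VBZ", "NNS"], x))  # exp(1.5)
```

Dividing this quantity by its sum over all label sequences yields the conditional probability of equation (1); the matrix identities below give an efficient way to compute that sum.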
The corresponding parameters \lambda_{y',y} and \mu_{y,x} play a similar role to the (logarithms of the) usual HMM parameters p(y'|y) and p(x|y). Boltzmann chain models (Saul & Jordan, 1996; MacKay, 1996) have a similar form but use a single normalization constant to yield a joint distribution, whereas CRFs use the observation-dependent normalization Z(x) for conditional distributions.

Although it encompasses HMM-like models, the class of conditional random fields is much more expressive, because it allows arbitrary dependencies on the observation sequence. In addition, the features do not need to specify completely a state or observation, so one might expect that the model can be estimated from less training data. Another attractive property is the convexity of the loss function; indeed, CRFs share all of the convexity properties of general maximum entropy models.

[Figure 2. Graphical structures of simple HMMs (left), MEMMs (center), and the chain-structured case of CRFs (right) for sequences. An open circle indicates that the variable is not generated by the model.]

For the remainder of the paper we assume that the dependencies of Y, conditioned on X, form a chain. To simplify some expressions, we add special start and stop states Y_0 = start and Y_{n+1} = stop. Thus, we will be using the graphical structure shown in Figure 2.

For a chain structure, the conditional probability of a label sequence can be expressed concisely in matrix form, which will be useful in describing the parameter estimation and inference algorithms in Section 4. Suppose that p_\theta(Y|X) is a CRF given by (1). For each position i in the observation sequence x, we define a square matrix random variable M_i(x) = [M_i(y', y | x)], with one row and column per label plus the start and stop states, by

M_i(y', y | x) = \exp\Big( \sum_k \lambda_k f_k(e_i, y|_{e_i} = (y', y), x) + \sum_k \mu_k g_k(v_i, y|_{v_i} = y, x) \Big),

where e_i is the edge with labels (Y_{i-1}, Y_i) and v_i is the vertex with label Y_i. In contrast to generative models, conditional models like CRFs do not need to enumerate over all possible observation sequences, and therefore these matrices can be computed directly as needed from a given training or test observation sequence x and the parameter vector \theta. Then the normalization (partition function) Z(x) is the (start, stop) entry of the product of these matrices:

Z(x) = \big( M_1(x) M_2(x) \cdots M_{n+1}(x) \big)_{start, stop}.

Using this notation, the conditional probability of a label sequence y is written as

p_\theta(y|x) = \frac{\prod_{i=1}^{n+1} M_i(y_{i-1}, y_i | x)}{Z(x)},

where y_0 = start and y_{n+1} = stop.
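The matrix identities above can be checked directly. The sketch below uses random stand-in matrices for M_i(x) (with transitions into start and out of stop zeroed, so that only genuine label paths contribute to the matrix product) and verifies that the conditional probabilities of all label sequences sum to one.

```python
import numpy as np
from itertools import product

# Two labels (0, 1) plus the special start and stop states of Section 3.
n_labels, n = 2, 3
START, STOP = n_labels, n_labels + 1
dim = n_labels + 2

rng = np.random.default_rng(0)
# M[i][y', y] = exp(score of moving from y' to y at position i+1); random
# positive stand-ins here for the feature-based scores of equation (1).
M = [np.exp(rng.normal(size=(dim, dim))) for _ in range(n + 1)]
for Mi in M:
    Mi[:, START] = 0.0   # nothing may enter the start state
    Mi[STOP, :] = 0.0    # nothing may leave the stop state

# Z(x) is the (start, stop) entry of the matrix product.
Z = np.linalg.multi_dot(M)[START, STOP]

def prob(y):
    """p(y|x): product of matrix entries along the padded path, over Z."""
    path = (START,) + tuple(y) + (STOP,)
    score = np.prod([M[i][path[i], path[i + 1]] for i in range(n + 1)])
    return score / Z

# The probabilities of all 2^3 label sequences sum to one.
print(sum(prob(y) for y in product(range(n_labels), repeat=n)))  # ~1.0
```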
4. Parameter Estimation for CRFs

We now describe two iterative scaling algorithms to find the parameter vector \theta that maximizes the log-likelihood of the training data. Both algorithms are based on the improved iterative scaling (IIS) algorithm of Della Pietra et al. (1997); the proof technique based on auxiliary functions can be extended to show convergence of the algorithms for CRFs.

Iterative scaling algorithms update the weights as \lambda_k \leftarrow \lambda_k + \delta\lambda_k and \mu_k \leftarrow \mu_k + \delta\mu_k for appropriately chosen \delta\lambda_k and \delta\mu_k. In particular, the IIS update \delta\lambda_k for an edge feature f_k is the solution of

\tilde{E}[f_k] \;\overset{def}{=}\; \sum_{x,y} \tilde{p}(x, y) \sum_{i=1}^{n+1} f_k(e_i, y|_{e_i}, x)
\;=\; \sum_{x,y} \tilde{p}(x)\, p_\theta(y|x) \sum_{i=1}^{n+1} f_k(e_i, y|_{e_i}, x)\, e^{\delta\lambda_k T(x,y)},

where T(x, y) is the total feature count

T(x, y) \overset{def}{=} \sum_{i,k} f_k(e_i, y|_{e_i}, x) + \sum_{i,k} g_k(v_i, y|_{v_i}, x).

The equations for the vertex feature updates \delta\mu_k have similar form. However, efficiently computing the exponential sums on the right-hand sides of these equations is problematic, because T(x, y) is a global property of (x, y), and dynamic programming will sum over sequences with potentially varying T. To deal with this, the first algorithm, Algorithm S, uses a "slack feature." The second, Algorithm T, keeps track of partial totals.

For Algorithm S, we define the slack feature by

s(x, y) \overset{def}{=} S - \sum_i \sum_k f_k(e_i, y|_{e_i}, x) - \sum_i \sum_k g_k(v_i, y|_{v_i}, x),

where S is a constant chosen so that s(x, y) \geq 0 for all y and all observation vectors x in the training set, thus making T(x, y) = S. The slack feature is "global," that is, it does not correspond to any particular edge or vertex.

For each index i we now define the forward vectors \alpha_i(x) with base case

\alpha_0(y | x) = 1 if y = start, 0 otherwise,

and recurrence \alpha_i(x) = \alpha_{i-1}(x) M_i(x). Similarly, the backward vectors \beta_i(x) are defined by

\beta_{n+1}(y | x) = 1 if y = stop, 0 otherwise,

and \beta_i(x) = M_{i+1}(x)\, \beta_{i+1}(x). With these definitions, the update equations are

\delta\lambda_k = \frac{1}{S} \log \frac{\tilde{E}[f_k]}{E[f_k]}, \qquad \delta\mu_k = \frac{1}{S} \log \frac{\tilde{E}[g_k]}{E[g_k]},

where the model expectations are computed from the forward and backward vectors, for example

E[f_k] = \sum_x \tilde{p}(x) \sum_{i=1}^{n+1} \sum_{y',y} f_k(e_i, (y', y), x)\, \frac{\alpha_{i-1}(y'|x)\, M_i(y', y|x)\, \beta_i(y|x)}{Z(x)}.

The factors involving the forward and backward vectors in the above equations have the same meaning as for standard hidden Markov models. For example,

p_\theta(Y_i = y \mid x) = \frac{\alpha_i(y|x)\, \beta_i(y|x)}{Z(x)}

is the marginal probability of label Y_i = y given that the observation sequence is x. This algorithm is closely related to the algorithm of Darroch and Ratcliff (1972), and to MART algorithms used in image reconstruction.

The constant S in Algorithm S can be quite large, since in practice it is proportional to the length of the longest training observation sequence. As a result, the algorithm may converge slowly, taking very small steps toward the maximum in each iteration. If the length of the observations and the number of active features varies greatly, a faster-converging algorithm can be obtained by keeping track of feature totals for each observation sequence separately.

Let T(x) \overset{def}{=} \max_y T(x, y). Algorithm T accumulates feature expectations into counters indexed by T(x). More specifically, we use the forward-backward recurrences just introduced to compute the expectations a_{k,t} of feature f_k and b_{k,t} of feature g_k given that T(x) = t. Then our parameter updates are \delta\lambda_k = \log \beta_k and \delta\mu_k = \log \gamma_k, where \beta_k and \gamma_k are the unique positive roots of the following polynomial equations

\sum_{t=0}^{T_{max}} a_{k,t}\, \beta_k^t = \tilde{E}[f_k], \qquad \sum_{t=0}^{T_{max}} b_{k,t}\, \gamma_k^t = \tilde{E}[g_k],   (2)

which can be easily computed by Newton's method.

A single iteration of Algorithm S and Algorithm T has roughly the same time and space complexity as the well known Baum-Welch algorithm for HMMs. To prove convergence of our algorithms, we can derive an auxiliary function to bound the change in likelihood from below; this method is developed in detail by Della Pietra et al. (1997). The full proof is somewhat detailed; however, here we give an idea of how to derive the auxiliary function. To simplify notation, we assume only edge features f_k with parameters \lambda_k.

Given two parameter settings \theta = (\lambda_1, \lambda_2, ...) and \theta' = (\lambda_1 + \delta\lambda_1, \lambda_2 + \delta\lambda_2, ...), we bound from below the change in the objective function by an auxiliary function A(\theta', \theta):

O(\theta') - O(\theta) = \sum_{x,y} \tilde{p}(x,y) \sum_k \delta\lambda_k f_k(x, y) - \sum_x \tilde{p}(x) \log \frac{Z_{\theta'}(x)}{Z_\theta(x)} \;\geq\; A(\theta', \theta),

where f_k(x, y) abbreviates the total count \sum_i f_k(e_i, y|_{e_i}, x), and the inequalities used to obtain A follow from the convexity of -\log and of \exp. Differentiating A with respect to \delta\lambda_k and setting the result to zero yields equation (2).
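A minimal sketch of the forward-backward computation underlying both algorithms, together with the closed-form Algorithm S update, is shown below. The matrices M are assumed to be in the chain form of Section 3 (for instance, those built in the previous sketch); the expectations passed to the update function would be accumulated from these marginals over the training set.

```python
import numpy as np

def marginals(M, start, stop):
    """Forward-backward over the chain matrices M_1..M_{n+1} of Section 3.

    Returns an array whose [i, y] entry is p(Y_i = y | x) =
    alpha_i(y|x) * beta_i(y|x) / Z(x).  Pairwise marginals, needed for
    edge-feature expectations E[f_k], come from
    alpha_{i-1}(y'|x) * M_i(y',y|x) * beta_i(y|x) / Z(x).
    """
    n1, dim = len(M), M[0].shape[0]
    alpha = np.zeros((n1 + 1, dim)); alpha[0, start] = 1.0
    beta = np.zeros((n1 + 1, dim));  beta[n1, stop] = 1.0
    for i in range(1, n1 + 1):        # alpha_i = alpha_{i-1} M_i
        alpha[i] = alpha[i - 1] @ M[i - 1]
    for i in range(n1 - 1, -1, -1):   # beta_i = M_{i+1} beta_{i+1}
        beta[i] = M[i] @ beta[i + 1]
    Z = alpha[n1, stop]               # equals the matrix-product Z(x)
    return alpha * beta / Z

def algorithm_s_update(e_empirical, e_model, S):
    """Algorithm S step: delta = (1/S) * log(E~[f] / E[f])."""
    return np.log(e_empirical / e_model) / S
```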
5. Experiments

We first discuss two sets of experiments with synthetic data that highlight the differences between CRFs and MEMMs. The first experiments are a direct verification of the label bias problem discussed in Section 2. In the second set of experiments, we generate synthetic data using randomly chosen hidden Markov models, each of which is a mixture of a first-order and a second-order model. Competing models are then trained and compared on test data. As the data becomes more second-order, the test error rates of the trained models increase. This experiment corresponds to the common modeling practice of approximating complex local and long-range dependencies, as occur in natural data, by small-order Markov models. Our results clearly indicate that even when the models are parameterized in exactly the same way, CRFs are more robust to inaccurate modeling assumptions than MEMMs or HMMs, and resolve the label bias problem, which affects the performance of MEMMs. To avoid confusion of different effects, the MEMMs and CRFs in these experiments do not use overlapping features of the observations. Finally, in a set of POS tagging experiments, we confirm the advantage of CRFs over MEMMs. We also show that the addition of overlapping features to CRFs and MEMMs allows them to perform much better than HMMs, as already shown for MEMMs by McCallum et al. (2000).

[Figure 3. Plots of error rates for HMMs, CRFs, and MEMMs on randomly generated synthetic data sets, as described in Section 5.2. As the data becomes "more second order," the error rates of the test models increase. As shown in the left plot, the CRF typically significantly outperforms the MEMM. The center plot shows that the HMM outperforms the MEMM. In the right plot, each open square represents a data set with alpha < 1/2, and a solid circle indicates a data set with alpha >= 1/2. The plot shows that when the data is mostly second order (alpha >= 1/2), the discriminatively trained CRF typically outperforms the HMM. These experiments are not designed to demonstrate the advantages of the additional representational power of CRFs and MEMMs relative to HMMs.]

5.1 Modeling label bias

We generate data from a simple HMM which encodes a noisy version of the finite-state network in Figure 1. Each state emits its designated symbol with probability 29/32 and any of the other symbols with probability 1/32. We train both an MEMM and a CRF with the same topologies on the data generated by the HMM. The observation features are simply the identity of the observation symbols. In a typical run using 2,000 training and 500 test samples, trained to convergence of the iterative scaling algorithm, the CRF error is 4.6% while the MEMM error is 42%, showing that the MEMM fails to discriminate between the two branches.

5.2 Modeling mixed-order sources

For these results, we use five labels, a-e, and 26 observation values, A-Z; however, the results were qualitatively the same over a range of sizes for the label and observation alphabets. We generate data from a mixed-order HMM with state transition probabilities given by

p_\alpha(y_i \mid y_{i-1}, y_{i-2}) = \alpha\, p_2(y_i \mid y_{i-1}, y_{i-2}) + (1 - \alpha)\, p_1(y_i \mid y_{i-1})

and, similarly, emission probabilities given by

p_\alpha(x_i \mid y_i, x_{i-1}) = \alpha\, p_2(x_i \mid y_i, x_{i-1}) + (1 - \alpha)\, p_1(x_i \mid y_i).

Thus, for \alpha = 0 we have a standard first-order HMM. In order to limit the size of the Bayes error rate for the resulting models, the conditional probability tables p_2 are constrained to be sparse. In particular, p_2(y \mid y', y'') can have at most two nonzero entries for each y', y'', and p_2(x \mid y, x') can have at most three nonzero entries for each y, x'.
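A sketch of a generator for this mixed-order source is below. The conditional probability tables here are dense random stand-ins, whereas the experiments additionally impose the sparsity constraints just described; the initialization of the state and observation history is also our own simplification.

```python
import numpy as np

rng = np.random.default_rng(1)
Y, X, alpha, length = 5, 26, 0.6, 25   # labels a-e, observations A-Z

def random_cpt(shape):
    """Random conditional probability table, normalized over the last axis."""
    t = rng.random(shape)
    return t / t.sum(axis=-1, keepdims=True)

p1, p2 = random_cpt((Y, Y)), random_cpt((Y, Y, Y))   # transitions
e1, e2 = random_cpt((Y, X)), random_cpt((Y, X, X))   # emissions

def sample_sequence():
    """Sample one (labels, observations) pair from the mixed-order HMM."""
    y_pp = y_p = rng.integers(Y)       # crude initialization of the history
    x_p = rng.integers(X)
    ys, xs = [], []
    for _ in range(length):
        py = alpha * p2[y_pp, y_p] + (1 - alpha) * p1[y_p]
        y = rng.choice(Y, p=py)
        px = alpha * e2[y, x_p] + (1 - alpha) * e1[y]
        x = rng.choice(X, p=px)
        ys.append(y); xs.append(x)
        y_pp, y_p, x_p = y_p, y, x
    return ys, xs

train = [sample_sequence() for _ in range(1000)]   # 1,000 sequences of length 25
```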
For each randomly generated model, a sample of 1,000 sequences of length 25 is generated for training and testing.

On each randomly generated training set, a CRF is trained using Algorithm S. (Note that since the length of the sequences and the number of active features are constant, Algorithms S and T are identical.) The algorithm is fairly slow to converge, typically taking approximately 500 iterations for the model to stabilize. On the 500 MHz Pentium PC used in our experiments, each iteration takes approximately 0.2 seconds. On the same data an MEMM is trained using iterative scaling, which does not require forward-backward calculations, and is thus more efficient. The MEMM training converges more quickly, stabilizing after approximately 100 iterations. For each model, the Viterbi algorithm is used to label a test set; the experimental results do not significantly change when using forward-backward decoding to minimize the per-symbol error rate.

The results of several runs are presented in Figure 3. Each plot compares two classes of models, with each point indicating the error rate for a single test set. As \alpha increases, the error rates generally increase, as the first-order models fail to fit the second-order data. The figure compares the three classes of trained models pairwise; results for an alternative parameterization of the same models are qualitatively the same. As shown in the first graph, the CRF generally outperforms the MEMM, often by a wide margin of 10%-20% relative error. (The points for very small error rate, with \alpha close to zero, where the MEMM does better than the CRF, are suspected to be the result of an insufficient number of training iterations for the CRF.)

[Figure 4. Per-word error rates for POS tagging on the Penn treebank, using first-order models trained on 50% of the 1.1 million word corpus. The oov rate is 5.45%.]

Model                        Error    OOV error
HMM                          5.69%    45.99%
MEMM                         6.37%    54.61%
CRF                          5.55%    48.05%
MEMM (spelling features)     4.81%    26.99%
CRF (spelling features)      4.27%    23.76%

5.3 POS tagging experiments

To confirm our synthetic data results, we also compared HMMs, MEMMs and CRFs on Penn treebank POS tagging, where each word in a given input sentence must be labeled with one of 45 syntactic tags.

We carried out two sets of experiments with this natural language data. First, we trained first-order HMM, MEMM, and CRF models as in the synthetic data experiments, introducing parameters \mu_{y,x} for each tag-word pair and \lambda_{y',y} for each tag-tag pair in the training set. The results are consistent with what is observed on synthetic data: the HMM outperforms the MEMM, as a consequence of the label bias problem, while the CRF outperforms the HMM. The error rates for training runs using a 50%-50% train-test split are shown in Figure 4; the results are qualitatively similar for other splits of the data. The error rates on out-of-vocabulary (oov) words, which are not observed in the training set, are reported separately.

In the second set of experiments, we take advantage of the power of conditional models by adding a small set of orthographic features: whether a spelling begins with a number or upper case letter, whether it contains a hyphen, and whether it ends in one of a small set of common suffixes (such as -ing, -ion, -ed, -ly, -ies, -ity, -s). Here we find, as expected, that both the MEMM and the CRF benefit significantly from the use of these features, with the overall error rate reduced by around 25%, and the out-of-vocabulary error rate reduced by around 50%.
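A sketch of such an orthographic feature extractor is shown below; the exact suffix inventory is our assumption, not a list taken verbatim from the experiments.

```python
# Assumed suffix list; the paper names only a small set of common suffixes.
SUFFIXES = ("ing", "tion", "ion", "ed", "ly", "ies", "ity", "s")

def spelling_features(word):
    """Boolean observation features usable by either a MEMM or a CRF."""
    feats = {
        "starts_with_digit": word[0].isdigit(),
        "starts_with_upper": word[0].isupper(),
        "contains_hyphen": "-" in word,
    }
    for suf in SUFFIXES:
        feats["ends_with_" + suf] = word.endswith(suf)
    return feats

print(spelling_features("tax-deferred"))
```

Because these features overlap (a word can fire several at once), they are exactly the kind of non-independent observation evidence that conditional models can use but generative HMMs cannot easily accommodate.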
One usually starts training from the all-zero parameter vector, corresponding to the uniform distribution. However, for these datasets, CRF training with that initialization is much slower than MEMM training. Fortunately, we can use the optimal MEMM parameter vector as a starting point for training the corresponding CRF. In Figure 4, the MEMM was trained to convergence in around 100 iterations. Its parameters were then used to initialize the training of the CRF, which converged in 1,000 iterations. In contrast, training of the same CRF from the uniform distribution had not converged even after 2,000 iterations.

6. Further Aspects of CRFs

Many further aspects of CRFs are attractive for applications and deserve further study. In this section we briefly mention just two.

Conditional random fields can be trained using the exponential loss objective function used by the AdaBoost algorithm (Freund & Schapire, 1997). Typically, boosting is applied to classification problems with a small, fixed number of classes; applications of boosting to sequence labeling have treated each label as a separate classification problem (Abney et al., 1999). However, it is possible to apply the parallel update algorithm of Collins et al. (2000) to optimize the per-sequence exponential loss. This requires a forward-backward algorithm to compute certain feature expectations efficiently, along the lines of Algorithm T, except that each feature requires a separate set of forward and backward accumulators.

Another attractive aspect of CRFs is that one can implement efficient feature selection and feature induction algorithms for them. That is, rather than specifying in advance which features to use, we could start from feature-generating rules and evaluate the benefit of generated features automatically on data. In particular, the feature induction algorithms presented in Della Pietra et al. (1997) can be adapted to fit the dynamic programming techniques of conditional random fields.

7. Related Work and Conclusions

As far as we know, the present work is the first to combine the benefits of conditional models with the global normalization of random field models. Other applications of exponential models in sequence modeling have either attempted to build generative models (Rosenfeld, 1997), which involve a hard normalization problem, or adopted local conditional models (Berger et al., 1996; Ratnaparkhi, 1996; McCallum et al., 2000) that may suffer from label bias.

Non-probabilistic local decision models have also been widely used in segmentation and tagging (Brill, 1995; Roth, 1998; Abney et al., 1999). Because of the computational complexity of global training, these models are only trained to minimize the error of individual label decisions, assuming that neighboring labels are correctly assigned. Label bias would be expected to be a problem here too.

An alternative approach to discriminative modeling of sequence labeling is to use a permissive generative model, which can only model local dependencies, to produce a list of candidates, and then use a more global discriminative model to rerank those candidates. This approach is standard in large-vocabulary speech recognition (Schwartz & Austin, 1993), and has also been proposed for parsing (Collins, 2000). However, these methods fail when the correct output is pruned away in the first pass.
What Is Full-Text Search?

Full-text search is a text-retrieval method that matches a query against all of the text in a collection of documents. It retrieves arbitrary content from entire books and articles stored in a database: chapters, sections, paragraphs, sentences, and individual words can all be located as needed, much as if every word in a book carried its own tag, and the results can be used for various kinds of statistics and analysis. For example, it can quickly answer the question: "How many times does the name Lin Daiyu appear in Dream of the Red Chamber?"
Related topics:
- Stemming
- Token parsers (1-gram, 2-gram, ..., n-gram)
- Word segmentation
- Inverted index (see the sketch after this list)

Retrieval models and search strategies:
- Boolean model
- Probabilistic model
- Vector space model
- Latent semantic model

Evaluating retrieval quality (see the sketch after this list):
- Recall = relevant items retrieved / total relevant items (%)
- Precision = relevant items retrieved / total items retrieved (%)

Open-source full-text search systems: Apache Solr, BaseX, Clusterpoint Server (freeware licence for a single server), DataparkSearch, Ferret, ht://Dig, Hyper Estraier, KinoSearch, Lemur/Indri, Lucene, mnoGoSearch, Sphinx, Swish-e, Xapian, ElasticSearch.

Further issues: the concept of query optimization; Chinese-specific issues such as word segmentation, syntactic parsing, classical texts, and mixed multilingual text; optimization techniques such as stopword removal, part-of-speech tagging, authority files, knowledge systems and ontologies, and hyperlink analysis (PageRank); historical and future trends, including free-sentence queries and word segmentation based on natural language processing.
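To make two of the central ideas above concrete, here is a minimal sketch of an inverted index with Boolean AND retrieval, followed by the precision and recall computations defined above. The two toy documents and the relevance judgment are invented for illustration.

```python
from collections import defaultdict

# Two toy documents; a real system would tokenize, stem, and drop stopwords.
docs = {
    1: "full text search matches every word in the text",
    2: "an inverted index maps each word to its documents",
}

# The inverted index: each term maps to the set of documents containing it.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Boolean AND over the postings of the query terms."""
    postings = [index[t] for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

retrieved = search("word text")   # only doc 1 contains both terms
relevant = {1, 2}                 # invented ground-truth relevance judgment

recall = len(retrieved & relevant) / len(relevant)
precision = len(retrieved & relevant) / len(retrieved) if retrieved else 0.0
print(retrieved, precision, recall)   # {1} 1.0 0.5
```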
100 must-read NLP papers (compiled and available for direct download)

A self-compiled collection of the papers, with an updated link (extraction code: x7tn).

This is a list of 100 important natural language processing (NLP) papers that serious students and researchers working in the field should probably know about and read. This list is compiled by . I welcome any feedback on this list. This list is originally based on the answers for a Quora question I posted years ago: What are the most important research papers which all NLP students should definitely read? I thank all the people who contributed to the original post. This list is far from complete or objective, and is evolving, as important papers are being published year after year.
Jian Wang and Ze-Nian Li
{jwangc, li}@cs.sfu.ca
School of Computing Science, Simon Fraser University, Burnaby, BC V5A 1S6, Canada

ABSTRACT

This paper proposes a novel algorithm to solve the problem of segmenting foreground moving objects from the background scene. The major cue used for object segmentation is the motion information, which is initially extracted from MPEG motion vectors. Since the MPEG motion vectors are generated for simple video compression without any consideration of visual objects, they may not correspond to the true motion of the macroblocks. We propose a Kernel-based Multiple Cue (KMC) algorithm to deal with the above inconsistency of MPEG motion vectors and use multiple cues to segment moving objects. KMC detects and calibrates camera movements, and then finds the kernels of moving objects. The segmentation starts from these kernels, which are textured regions with credible motion vectors. Besides motion information, it also makes use of color and texture to help achieve a better segmentation. Moreover, KMC can keep track of the segmented objects over multiple frames, which is useful for object-based coding. Experimental results show that KMC combines temporal and spatial information in a graceful way, which enables it to segment and track moving objects under different camera motions. Future work includes object segmentation in the compressed domain, motion estimation from raw video, etc.

Keywords: Motion vector, Locale, Kernel, Object segmentation, Multiple cues, Tracking

1. INTRODUCTION

The segmentation of foreground moving objects from the background scene in digital video has seen a high degree of interest in recent years. Many object segmentation algorithms have been proposed in the literature. QBIC and VideoQ use color as the major cue for segmentation. ASSET-2 detects corners in each frame to establish the feature correspondence between consecutive frames, estimates motion information, and segments moving objects based on this motion information. Polana and Nelson use the Fourier transform to detect and recognize objects with repetitive motion patterns, such as walking people.
Malassiotis and Strintzis use the difference map between consecutive frames and apply an active contour model (snake) algorithm to segment moving objects. These algorithms can be grouped into color and texture based, or motion based. MPEG-4 is currently examining two temporal algorithms, and one spatial algorithm that is based on a watershed algorithm, for object segmentation. Li proposes feature localization instead of segmentation and introduces a new concept, the locale, which is a set of tiles (or pixels) that can capture a certain feature. Locales are different from segmented regions in three ways; Figure 1 shows two locales in an image. First, the tiles forming a locale may not be connected. Second, locales can be overlapping, because a tile may belong to multiple locales. Moreover, the union of all locales in an image does not have to be the entire image.

This paper proposes a Kernel-based Multiple Cue (KMC) algorithm, which uses MPEG motion vectors as the major cue to solve the problem of object segmentation. The KMC algorithm can deal with the inconsistency of the MPEG motion vectors and use multiple cues to refine the segmentation result. Moreover, a spatial segmentation algorithm is applied to extract the object shape and an object tracking algorithm is proposed to keep track of the segmented objects over multiple frames.

2. OBJECT SEGMENTATION AND TRACKING

2.1. Definitions

In this paper, a kernel is defined as a group of neighboring macroblocks with credible motion vectors. To ensure that, a kernel must meet three criteria: motion consistency, low residue, and a texture constraint. Kernel detection is intended to find regions with credible motion vectors; subsequent object segmentation starts from these regions so that there is a better chance of a good result.

[Figure 1. Localization vs. segmentation.]
[Figure 2. Motion locale.]

This paper introduces the concept of a motion locale, which is similar to a locale. Motion locales can be non-connected, incomplete, and overlapping, and they contain spatiotemporal information about a segmented moving object such as object size, centroid position, color distribution, texture, and a link to its previous occurrence. Figure 2 shows the procedure of keeping track of a motion locale over multiple frames. Each rectangle represents a video frame and the irregular polygon stands for a moving object. The dashed line describes the motion trajectory of the centroid of the moving object. The information about a segmented object is saved in a motion locale, which represents a spatiotemporal entity. The multiple occurrences of the same object are reflected by the links between locales. The trajectory of a moving object can be seen clearly from the motion locale.

2.2. Basic Assumptions

The KMC algorithm makes several assumptions: near-orthographic camera projection, rigid objects, and global affine motion. The paper assumes that the distance from the scene to the camera is large compared to the variation of depth in the scene; therefore, depth estimation can be neglected. Objects are assumed to be rigid so that the macroblocks composing an object can be clustered and segmented from the background based on motion information. The global motion caused by the camera is assumed to be affine, and a 2D affine motion model is used to describe the camera motion.

2.3. Flow Chart

Figure 3 shows the overall structure of the KMC algorithm. At first, different camera motions, such as still (no motion), pan/tilt, and zoom, are detected and calibrated.
After that, a kernel detection and merge procedure is applied to deal with the inconsistency of the MPEG motion vectors and to find the regions with reliable motion vectors; subsequent object segmentation starts from the detected kernels. The kernel detection and merge procedure segments a video frame into the background, the object kernels, and the undefined region. However, sometimes the motion cue itself is not enough to recover the whole object region: the detected object kernels may miss some object macroblocks or include some background macroblocks. To deal with that, a region growing and refinement process is applied to add the object macroblocks and remove the background ones. This process uses color, texture, and other cues, considering the similarity between neighboring macroblocks to reassign a macroblock to the background or to an object kernel. Moreover, a spatial segmentation algorithm is performed to extract the object shape, and an object tracking algorithm is designed to detect the segmented objects and their motion trajectories over multiple frames.

[Figure 3. Overall structure of KMC.]

2.4. Camera Motion Detection

A six-parameter 2D affine motion model is used to detect background motion in this paper. The background macroblocks are assumed to conform to an affine motion model with six parameters. The three pairs of parameters can be resolved by using the centroid coordinates and motion vectors of at least three macroblocks. The estimation of the affine motion model is to solve the equations

x' = a_1 x + a_2 y + a_3,   (1)
y' = a_4 x + a_5 y + a_6,   (2)

where (x, y) is the centroid of a macroblock, (x', y') is the centroid displaced by the macroblock's motion vector, and a_1 to a_6 are the affine parameters to be estimated.

2.4.1. Camera Pan/Tilt

When the camera is panning/tilting, there exists a major motion group moving towards a certain direction that corresponds to the camera motion. Therefore, pan/tilt can be detected by checking whether there is a motion group that occupies a large part of the scene. In this paper, the threshold is set to 0.6, which means a pan/tilt is detected if the largest motion group occupies more than 60 percent of the entire scene. However, the method can make mistakes if the moving objects occupy a large part of the scene and move with similar motions. To avoid that, the motion of the outermost macroblocks is checked to see whether the boundary of the scene is moving. If the boundary of the scene is also moving, then the camera is moving; otherwise, the camera is still.

[Figure 4. Symmetrical pairs of macroblocks under zoom.]

2.4.2. Camera Zoom

Camera zoom is modeled using the six-parameter affine motion model. When the camera is zooming, the motion pattern shows some symmetry around the center, so zoom is detected by checking the number of symmetrical pairs of macroblocks. Figure 4 shows some macroblocks under camera zoom: the shaded macroblocks are located symmetrically around the center with opposite motion vectors, linked by dashed lines. When the camera is zooming, the motion vectors display some degree of symmetry around the center of the frame. When the camera is zooming in, all pixels move away from the center; when the camera is zooming out, all pixels move towards the center.

Let (x_c, y_c) denote the center of the frame. For each macroblock A in the frame, there exists a symmetrical macroblock B. The centers of A and B are represented by (x_A, y_A) and (x_B, y_B), respectively, and the motion vectors of A and B are v_A and v_B.
The following relationships hold for A and B:

x_A - x_c = -(x_B - x_c),   y_A - y_c = -(y_B - y_c),   v_A = -v_B.   (3)

The affine motion parameters are estimated using the centroid coordinates and motion vectors of three macroblocks that show the symmetrical motion pattern illustrated in Figure 4. Suppose the centroids of the macroblocks are (x_1, y_1), (x_2, y_2), and (x_3, y_3), and their positions displaced by the corresponding motion vectors are (x'_1, y'_1), (x'_2, y'_2), and (x'_3, y'_3). The estimation of the affine motion model is to solve the six equations

x'_i = a_1 x_i + a_2 y_i + a_3,   i = 1, 2, 3,   (4)-(6)
y'_i = a_4 x_i + a_5 y_i + a_6,   i = 1, 2, 3,   (7)-(9)

where a_1 to a_6 are the affine parameters to be estimated. To improve the result, all the data that conform to the zoom pattern are used to estimate the affine parameters. For the video clips used in the experiments, at least six pairs of macroblocks with the above symmetrical motion pattern are needed to estimate the affine motion parameters, and the average of the estimated parameters is used as the affine parameters for the camera motion.
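A sketch of this estimation step: given macroblock centroids and their MPEG motion vectors, the six affine parameters can be recovered by least squares, which generalizes the three-point systems (4)-(9) and averages over all conforming data. The zoom data below are synthetic; the recovered a_1 = a_5 = 1.1 corresponds to a zoom-in of factor 1.1, comparable in form to the rows of Table 2.

```python
import numpy as np

def estimate_affine(points, displaced):
    """Solve x' = a1*x + a2*y + a3 and y' = a4*x + a5*y + a6 for a1..a6."""
    A = np.column_stack([points, np.ones(len(points))])
    (a1, a2, a3), *_ = np.linalg.lstsq(A, displaced[:, 0], rcond=None)
    (a4, a5, a6), *_ = np.linalg.lstsq(A, displaced[:, 1], rcond=None)
    return np.array([a1, a2, a3, a4, a5, a6])

# Synthetic zoom-in about an assumed frame center (176, 120): every block's
# motion vector points away from the center, the symmetric pattern of 2.4.2.
centers = np.array([[100., 60.], [252., 180.], [60., 180.], [292., 60.]])
frame_center = np.array([176., 120.])
motion = 0.1 * (centers - frame_center)        # MPEG motion vectors (u, v)

print(np.round(estimate_affine(centers, centers + motion), 3))
# -> [ 1.1  0.  -17.6  0.   1.1 -12. ]
```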
2.5. Kernel Detection and Merge

Kernel detection is intended to find the regions with credible motion vectors. Neighboring macroblocks with similar motion and small motion estimation error are merged to form a kernel. A kernel can be of any size and any shape. Once a kernel is formed, it is checked for texture; if the kernel has no texture, it is considered unreliable and removed from the kernel list.

2.5.1. Motion Consistency

The macroblocks forming a kernel must conform to a consistent motion pattern. The macroblocks are merged into multiple groups based on their motion similarity: neighboring macroblocks are compared, and if their motions are similar, they are put into one motion group. The similarity is measured as

D(M, N) = \lVert v(M) - v(N) \rVert,   (10)

where v(M) denotes the motion vector of a macroblock M, and v(N) denotes the motion vector of a neighboring macroblock N of M. If D(M, N) is smaller than a certain threshold (set to 3 in this paper), the two macroblocks are put into one group as a candidate kernel.

2.5.2. Low Estimation Error

The motion estimation error of the whole kernel should be smaller than a certain threshold, which is set to 0.85 in this paper. This is to make sure that the motion estimation is accurate enough. Motion estimation is applied to each macroblock of a candidate kernel. The motion estimation error for a macroblock M with motion vector v is computed as

err(M, v) = 1 - N_c / N_t,   (11)

where N_c is the number of correctly estimated pixels and N_t is the total number of pixels in the macroblock, which is 256 in this paper. The motion estimation error of an entire kernel K is

Err(K) = \frac{1}{|K|} \sum_{M \in K} err(M, v_M),   (12)

where M ranges over the macroblocks that belong to kernel K, v_M is the motion vector of M, and err(M, v_M) is the fraction of wrong pixels in the motion-compensated macroblock with motion vector v_M.

2.5.3. Texture Constraint

The texture constraint is applied to each candidate kernel to ensure that a kernel contains some texture. Each kernel is checked for its percentage of edge points; if the percentage is smaller than a threshold, the kernel is removed. The edge points are detected using a Sobel edge detector. The texture constraint is based on the observation that the motion vectors of flat regions are easily affected by noise and are not reliable, while motion vectors of textured regions are more reliable and more likely to correspond to the real motion.

2.5.4. Merge Macroblocks into Kernels

At the beginning of the algorithm, each macroblock is assumed to be a candidate kernel. Then neighboring macroblocks are merged into larger kernels as described below. For a macroblock M that neighbors a candidate kernel K, if its motion estimation error is lower than a certain threshold, then its motion difference with the kernel is checked. The difference between their motions is

D(M, K) = \lVert v(M) - v(M_K) \rVert,   (13)

where M_K is a macroblock of kernel K that neighbors M. If the difference is smaller than a certain threshold, then the macroblock is added to the kernel, and the information of the kernel, such as its size and motion, is updated as follows. The size of a kernel K is measured in the number of macroblocks forming K. The kernel motion is the average motion of all the macroblocks of K:

v_K = \frac{1}{|K|} \sum_{i \in K} v_i,   (14)

where i ranges over the macroblocks that belong to kernel K and v_i denotes the motion vector of macroblock i.

After kernel detection, a video frame is segmented into kernels and undefined regions. The undefined regions fail to be clustered into kernels because they have an inconsistent motion pattern, a high motion estimation error, or no internal texture.

2.5.5. Background Kernel Detection

If the camera is not moving, the background kernel is the one containing still macroblocks. Otherwise, the background kernel is the largest kernel conforming to the camera motion.

2.5.6. Kernel Merge with the Background Region

The criterion for kernel merge is based on motion. If the motion difference between a detected kernel and the background is smaller than a threshold, the kernel is merged with the background region according to the following rules.

1. If the camera is still and a kernel has no motion or almost no motion, add this kernel to the background.
2. If the camera is moving, detect the background kernel first. If the difference between the kernel's motion and the background motion is smaller than a motion similarity threshold, add this kernel to the background kernel.

After kernel detection and merge, a frame is usually segmented into three parts: the background, the object kernels, and the undefined region. The background corresponds to the region that moves consistently with the camera motion. The object kernels correspond to the moving objects in the scene. And the undefined region contains macroblocks that cannot be put into either the background kernel or the object kernels based on the motion cue; these will be further processed by a multiple cue algorithm.
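A sketch of the kernel-growing rule, combining the motion-similarity tests (10)/(13), the estimation-error test (11), and the running-average motion update (14). The thresholds follow the values quoted above; the block coordinates and motion vectors are invented.

```python
import numpy as np

MOTION_THRESHOLD = 3.0   # threshold for equations (10) and (13)

class Kernel:
    def __init__(self, block, mv):
        self.blocks = [block]
        self.motion = np.asarray(mv, dtype=float)

    def try_add(self, block, mv, est_error, error_threshold=0.85):
        """Add a neighboring macroblock if it passes both tests."""
        mv = np.asarray(mv, dtype=float)
        if est_error > error_threshold:          # eq. (11): unreliable block
            return False
        if np.linalg.norm(mv - self.motion) >= MOTION_THRESHOLD:  # eq. (13)
            return False
        self.blocks.append(block)
        # eq. (14): kernel motion is the mean of its members' motion vectors
        n = len(self.blocks)
        self.motion = ((n - 1) * self.motion + mv) / n
        return True

k = Kernel(block=(3, 4), mv=(5.0, -1.0))
print(k.try_add((3, 5), (4.0, -1.5), est_error=0.10))  # True: similar motion
print(k.try_add((3, 6), (-2.0, 6.0), est_error=0.10))  # False: inconsistent
print(k.motion)                                        # updated average
```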
2.6. Multiple Cue Processing

The kernel detection and merge process using motion information may not be enough to recover the whole object region. The detected object regions may miss some object macroblocks or include some background macroblocks. Therefore, a post-processing step is needed to add the object macroblocks and remove the background ones. This region growing and refinement process is based on color, texture, and other cues. The algorithm considers the similarity between macroblocks based on these cues to reassign a macroblock to the background or to an object kernel. The goal is to segment a frame into regions consisting of macroblocks similar in motion or color, or consistent in texture. The similarity between two macroblocks A and B is

Sim(A, B) = \sum_j \delta(c^A_j, c^B_j),   (15)

where c^A_j and c^B_j are the chromaticity indices of the j-th pixel in blocks A and B, and \delta(a, b) returns 1 if a equals b, and 0 otherwise.

2.7. Object Tracking

A motion locale list is used to link the multiple occurrences of an object and obtain its motion trajectory. Each time an object is segmented, it is considered a motion locale. When a motion locale is added to the motion locale list, the list is searched for a matching one. The matching is based on color distribution, size, and motion continuity. The search is intended to find the occurrence of the object in a previous frame and to link its occurrences over multiple frames. The confidence value of a motion locale is increased by one if a good match is found. If no good match can be found for a motion locale, it is added as a new one to the motion locale list with its confidence value set to zero. The motion locale list serves two purposes: object tracking and outlier removal. Motion locales with no link to other motion locales are most probably outliers and are removed from the list.

Figure 5 shows how a motion locale is added to the motion locale list. The information for each segmented object, such as frame number, locale number, bounding rectangle, centroid, color histogram, size, and matching locale, is stored in a spatiotemporal motion locale list. The list records the links between motion locales and the confidence value of each motion locale. The motion trajectory of a motion locale can be computed by tracing it in the list. The confidence value associated with a motion locale indicates the degree of its spatiotemporal consistency: the higher the confidence value, the more consistent it is spatiotemporally.

[Figure 5. Motion locale and locale list. Features of a segmented object locale (bounding rectangle, color distribution, size, motion, centroid) are extracted and the existing locale list is searched for a match. If a match is found, the confidence value is increased and the new locale is linked with the matching locale; otherwise the confidence value is set to 0 and the locale is added to the list as new.]
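A sketch of the locale-matching step follows. The distance combines color histogram, size, and motion continuity as described above, but the particular weighting, normalizations, and threshold are our assumptions.

```python
import numpy as np

def locale_distance(locale, candidate):
    """Smaller is better: histogram + size + motion-continuity terms."""
    hist_d = np.abs(locale["hist"] - candidate["hist"]).sum() / 2.0
    size_d = abs(locale["size"] - candidate["size"]) / max(locale["size"], 1)
    predicted = locale["centroid"] + locale["motion"]   # motion continuity
    pos_d = np.linalg.norm(predicted - candidate["centroid"]) / 100.0
    return hist_d + size_d + pos_d

def add_to_locale_list(locale_list, candidate, threshold=0.5):
    """Link a newly segmented object to its best match, or add it as new."""
    best = min(locale_list, key=lambda l: locale_distance(l, candidate),
               default=None)
    if best is not None and locale_distance(best, candidate) < threshold:
        candidate["confidence"] = best["confidence"] + 1   # good match found
        candidate["prev"] = best                           # link occurrences
    else:
        candidate["confidence"] = 0                        # brand-new locale
    locale_list.append(candidate)

loc = {"hist": np.array([0.5, 0.5]), "size": 200, "confidence": 2,
       "centroid": np.array([50., 40.]), "motion": np.array([3., 0.])}
cand = {"hist": np.array([0.48, 0.52]), "size": 210,
        "centroid": np.array([54., 40.]), "motion": np.array([3., 0.])}
lst = [loc]
add_to_locale_list(lst, cand)
print(cand["confidence"])   # 3: matched and linked to its previous occurrence
```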
3. EXPERIMENTAL RESULTS

3.1. Source Video Clips

The Bike clip has different camera motions, such as still, pan/tilt, and zoom, which makes it suitable for testing the ability of the algorithm to detect the camera motion correctly. It also has a moving object that exists in multiple frames and is used to produce the experimental results on object segmentation and tracking. The Tennis clip has many frames with camera zooming motion. It is chosen to show the experimental results of camera motion detection and object segmentation under different camera motions, especially under zooming.

3.2. Time and Speed

KMC uses MPEG motion vectors as the source of motion information and does not estimate optical flow. Therefore, it is very fast and runs in real time.

3.3. Threshold Selection

The KMC algorithm uses several thresholds for kernel detection and merge and for multiple cue refinement. The thresholds are chosen empirically and remain the same for all testing videos. The threshold for motion estimation error is set to 0.85 and the threshold for multiple cue refinement is set to 0.8.

3.4. Results

Table 1 shows the camera pan detection results for the Bike video. The first column is the frame number and the second column shows the angle of the camera pan, in the range of 0 to 360 degrees. For example, at frame 3 the angle of the camera pan is 304 degrees, which means the camera is panning towards the southeast. The last column shows the magnitude of the pan in pixels; the first row of the table thus shows the camera panning towards the southeast by a distance of 5 pixels. The angle and distance of the camera pan are extracted from the largest motion group in the scene, which corresponds to the camera motion.

Table 1. Camera pan parameters of the Bike video.

Frame   Angle (deg)   Magnitude (pixels)
3       304           5
9       304           7
15      317           6
27      51            5
33      38            7

Figure 6 is an example of camera zooming from another video clip. From the motion vector map, it is clear that the camera is zooming in and the motion vectors are moving outward from the center of the frame. The right part of the frame does not conform to this motion pattern because of the object motion.

[Figure 6. Camera zooming detection.]

Table 2 shows the camera zooming detection results for the Tennis video. This video has many frames with camera zooming, which is modeled by an affine motion model. The table shows the frame number and the computed affine motion parameters.
The first column is the frame number in which a camera zooming happens. Columns 2 to 7 correspond to the six affine motion parameters a_1 to a_6.

Table 2. Camera zooming pattern modeled by an affine motion model.

Frame   a1      a2       a3        a4       a5      a6
27      1.056   -0.016   -7.226    0.000    1.052   -5.977
33      1.093   -0.003   -14.840   -0.010   1.117   -11.794
39      1.144   -0.000   -24.046   0.001    1.138   -16.567
45      1.140   -0.005   -22.842   -0.001   1.147   -17.272
51      1.142   0.003    -24.111   0.000    1.146   -17.333
57      1.144   -0.002   -23.905   0.000    1.151   -17.710
63      1.145   -0.013   -22.629   -0.006   1.146   -16.054
69      1.124   -0.003   -20.465   -0.003   1.130   -14.860
75      1.130   -0.009   -20.702   0.005    1.120   -15.009
81      1.125   -0.013   -19.707   -0.006   1.144   -15.582
87      0.914   0.683    -61.718   -0.012   1.130   -13.016

Figure 7 is an example of object segmentation on frame 9 of the Bike video. The result shows four rectangular areas. The upper right area shows the current frame image, and the upper left one visualizes the motion vectors of this frame, with each small rectangle standing for a macroblock of 16 x 16 pixels. The bottom right area shows the final object segmentation result and the bottom left area shows the result of kernel detection and merge. The shaded regions denote the detected kernels.
However,they are still recovered.This shows that the multiple cue approach is helpful in recovering an object that does not move consistently.This means that only part of an object moves consistently with a certain motion model can be detected by using motion information and the other parts of the object may be detected by using other cues,such as color,texture,etc.Figure9is the illustration of object tracking result in the bike video.It shows the result of object segmentation and tracking over multiple frames.The object is segmented and a bounding rectangle is formed to cover the whole object.The list shows the spatiotemporal segmentation result,including the object segmentation results when the object is moving and when the object is still in the scene.When the object is moving,the motion and other cues are used to segment the object from the background,Figure8.Object Segmentation Result on Frame9of Tennis Video.Figure9.Object Tracking over Multiple Frames Result on Bike Video.and when it is still,a projection and spatial segmentation algorithm is applied to segment the object.Table3shows the performance of KMC algorithm.Thefirst column is the frame number.The second column is the number of pixels in the manually segmented object,which is used to measure the performance of the segmentation results using KMC.The third column shows the number of object pixels of the segmented object.The fourth column shows the percentage of correctly segmented pixels,which is the result of dividing column3by column2.Thefifth column is the number of background pixels of the segmented object and the last column shows the percentage of background pixels.These two percentages show how good the segmentation is.Since the two percentages are obtained by dividing object pixels and background pixels segmented by KMC with manually segmented object pixels,the two percentages may add up to more than .From the table,it can be seen that the correct ratios are all above,while the wrong ratios are all below.4.CONCLUSION AND FUTURE WORKThis paper shows that the KMC algorithm can perform motion analysis directly on MPEG motion vectors and segment moving objects from the background scene under different camera motions,such as pan,tilt,and zoom.It combines multiple cues,suchTable3.Performance of spatial segmentation.Frame NumOfObjectPels(manual)correct rate wrong rate32691229885.4%40415.0%62784232683.5%34412.4%92810251289.4%49617.6%122933261089.0%52217.8%152889256188.7%47216.3%183348268580.2%46914.0%213378261088.6%60818.0%as motion,color,texture and other cues,in a graceful way to achieve good segmentation results.The algorithm makes use of spatiotemporal information to keep track of segmented moving objects.The proposed KMC algorithm can be improved in several ways.The camera motion detection procedure can use a Singular Value Decomposition(SVD)algorithm to estimate the affine camera motion parameters.The KMC algorithm can not detect moving objects when the kernel detection procedure fails to produce good result.An Expectation Maximization(EM)statistical method can be used to make the kernel detection process more robust.It is also an interesting topic to study how to apply the segmentation results of KMC to object based video coding,such as MPEG-4.Moreover,even though KMC makes use of some compressed information such as MPEG motion vectors,it is still far from working directly on the compressed domain. 
4. CONCLUSION AND FUTURE WORK

This paper shows that the KMC algorithm can perform motion analysis directly on MPEG motion vectors and segment moving objects from the background scene under different camera motions, such as pan, tilt, and zoom. It combines multiple cues, such as motion, color, and texture, in a graceful way to achieve good segmentation results. The algorithm makes use of spatiotemporal information to keep track of segmented moving objects.

The proposed KMC algorithm can be improved in several ways. The camera motion detection procedure can use a Singular Value Decomposition (SVD) algorithm to estimate the affine camera motion parameters. The KMC algorithm cannot detect moving objects when the kernel detection procedure fails to produce good results; an Expectation Maximization (EM) statistical method can be used to make the kernel detection process more robust. It is also an interesting topic to study how to apply the segmentation results of KMC to object-based video coding, such as MPEG-4. Moreover, even though KMC makes use of some compressed-domain information such as MPEG motion vectors, it is still far from working directly in the compressed domain. Considering the compression ratio of MPEG videos, it is desirable to segment moving objects directly in the compressed domain so that both the processing speed and the efficiency can be improved. Finally, it is reasonable to anticipate that KMC will produce good results if the motion vectors of macroblocks correspond to their real motion. Therefore, optimization of MPEG motion vectors is desirable, with the purpose of obtaining better motion vectors.

ACKNOWLEDGMENTS

This work was supported in part by the Canadian National Science and Engineering Research Council under grant OGP-36727, and by a grant for the Telelearning Research Project in the Canadian Network of Centres of Excellence.

REFERENCES

1. M. Flickner, et al., "Query by image and video content: the QBIC system," IEEE Computer 28(9), pp. 23-32, 1995.
2. S. F. Chang, et al., "VideoQ: an automated content based video search system using visual cues," in Proc. ACM Multimedia 97, pp. 313-324, 1997.
3. S. Smith, "ASSET-2: Real-time motion segmentation and object tracking," Proc. ICCV, pp. 237-250, 1995.
4. R. Polana and R. Nelson, "Detection and recognition of periodic, non-rigid motion," International Journal of Computer Vision 23(3), pp. 261-282, 1997.
5. S. Malassiotis and M. G. Strintzis, "Object-based coding of stereo image sequences," IEEE Trans. on Circuits and Systems for Video Technology 7(6), pp. 891-902, 1997.
6. ISO/IEC JTC1/SC29/WG11 N2202, Coding of Moving Pictures and Audio, March 1998.
7. R. Mech and M. Wollborn, "A noise robust method for segmentation of moving objects in video sequences considering a moving camera," Signal Processing 66(2), pp. 203-217, 1998.
8. A. Neri, S. Colonnese, and G. Russo, "Automatic moving objects and background segmentation by means of higher order statistics," SPIE 3024, pp. 8-14, 1997.
9. L. Vincent and P. Soille, "Watersheds in digital spaces: an efficient algorithm based on immersion simulations," IEEE Transactions on PAMI 13(6), pp. 583-598, 1991.
10. Z. Li, O. Zaiane, and Z. Tauber, "Illumination invariance and object model in content-based image and video retrieval," Journal of Visual Communication and Image Representation 10(3), pp. 219-244, 1999.
11. M. Lee, et al., "A layered video object coding system using sprite and affine motion model," IEEE Trans. on Circuits and Systems for Video Technology 7(1), pp. 130-144, 1997.
12. W. H. Press, et al., Numerical Recipes in C, Cambridge University Press, 1992.
ICML 2014

ICML 2015
1. An embarrassingly simple approach to zero-shot learning
2. Learning Transferable Features with Deep Adaptation Networks
3. A Theoretical Analysis of Metric Hypothesis Transfer Learning
4. Gradient-based hyperparameter optimization through reversible learning

ICML 2016
1. One-Shot Generalization in Deep Generative Models
2. Meta-Learning with Memory-Augmented Neural Networks
3. Meta-gradient boosted decision tree model for weight and target learning
4. Asymmetric Multi-task Learning based on Task Relatedness and Confidence

ICML 2017
1. DARLA: Improving Zero-Shot Transfer in Reinforcement Learning
2. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
3. Meta Networks
4. Learning to learn without gradient descent by gradient descent

ICML 2018
1. MSplit LBI: Realizing Feature Selection and Dense Estimation Simultaneously in Few-shot and Zero-shot Learning
2. Understanding and Simplifying One-Shot Architecture Search
3. One-Shot Segmentation in Clutter
4. Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory
5. Bilevel Programming for Hyperparameter Optimization and Meta-Learning
6. Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace
7. Been There, Done That: Meta-Learning with Episodic Recall
8. Learning to Explore via Meta-Policy Gradient
9. Transfer Learning via Learning to Transfer
10. Rapid adaptation with conditionally shifted neurons

NIPS 2014
1. Zero-shot recognition with unreliable attributes

NIPS 2015

NIPS 2016
1. Learning feed-forward one-shot learners
2. Matching Networks for One Shot Learning
3. Learning from Small Sample Sets by Combining Unsupervised Meta-Training with CNNs

NIPS 2017
1. One-Shot Imitation Learning
2. Few-Shot Learning Through an Information Retrieval Lens
3. Prototypical Networks for Few-shot Learning
4. Few-Shot Adversarial Domain Adaptation
5. A Meta-Learning Perspective on Cold-Start Recommendations for Items
6. Neural Program Meta-Induction

NIPS 2018
1. Bayesian Model-Agnostic Meta-Learning
2. The Importance of Sampling in Meta-Reinforcement Learning
3. MetaAnchor: Learning to Detect Objects with Customized Anchors
4. MetaGAN: An Adversarial Approach to Few-Shot Learning
5. Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior
6. Meta-Gradient Reinforcement Learning
7. Meta-Reinforcement Learning of Structured Exploration Strategies
8. Meta-Learning MCMC Proposals
9. Probabilistic Model-Agnostic Meta-Learning
10. MetaReg: Towards Domain Generalization using Meta-Regularization
11. Zero-Shot Transfer with Deictic Object-Oriented Representation in Reinforcement Learning
12. Uncertainty-Aware Few-Shot Learning with Probabilistic Model-Agnostic Meta-Learning
13. Multitask Reinforcement Learning for Zero-shot Generalization with Subtask Dependencies
14. Stacked Semantics-Guided Attention Model for Fine-Grained Zero-Shot Learning
15. Delta-encoder: an effective sample synthesis method for few-shot object recognition
16. One-Shot Unsupervised Cross Domain Translation
17. Generalized Zero-Shot Learning with Deep Calibration Network
18. Domain-Invariant Projection Learning for Zero-Shot Recognition
19. Low-shot Learning via Covariance-Preserving Adversarial Augmentation Network
20. Improved few-shot learning with task conditioning and metric scaling
21. Adapted Deep Embeddings: A Synthesis of Methods for k-Shot Inductive Transfer Learning
22. Learning to Play with Intrinsically-Motivated Self-Aware Agents
23. Learning to Teach with Dynamic Loss Functions
24. Memory Replay GANs: learning to generate images from new categories without forgetting

ICCV 2015
1. One Shot Learning via Compositions of Meaningful Patches
2. Unsupervised Domain Adaptation for Zero-Shot Learning
3. Active Transfer Learning With Zero-Shot Priors: Reusing Past Datasets for Future Tasks
4. Zero-Shot Learning via Semantic Similarity Embedding
5. Semi-Supervised Zero-Shot Classification With Label Representation Learning
6. Predicting Deep Zero-Shot Convolutional Neural Networks Using Textual Descriptions
7. Learning to Transfer: Transferring Latent Task Structures and Its Application to Person-Specific Facial Action Unit Detection

ICCV 2017
1. Supplementary Meta-Learning: Towards a Dynamic Model for Deep Neural Networks
2. Attributes2Classname: A Discriminative Model for Attribute-Based Unsupervised Zero-Shot Learning
3. Low-Shot Visual Recognition by Shrinking and Hallucinating Features
4. Predicting Visual Exemplars of Unseen Classes for Zero-Shot Learning
5. Learning Discriminative Latent Attributes for Zero-Shot Classification
6. Spatial-Aware Object Embeddings for Zero-Shot Localization and Classification of Actions

CVPR 2014
1. COSTA: Co-Occurrence Statistics for Zero-Shot Classification
2. Zero-shot Event Detection using Multi-modal Fusion of Weakly Supervised Concepts
3. Learning to Learn, from Transfer Learning to Domain Adaptation: A Unifying Perspective

CVPR 2015
1. Zero-Shot Object Recognition by Semantic Manifold Distance

CVPR 2016
1. Multi-Cue Zero-Shot Learning With Strong Supervision
2. Latent Embeddings for Zero-Shot Classification
3. One-Shot Learning of Scene Locations via Feature Trajectory Transfer
4. Less Is More: Zero-Shot Learning From Online Textual Documents With Noise Suppression
5. Synthesized Classifiers for Zero-Shot Learning
6. Recovering the Missing Link: Predicting Class-Attribute Associations for Unsupervised Zero-Shot Learning
7. Fast Zero-Shot Image Tagging
8. Zero-Shot Learning via Joint Latent Similarity Embedding
9. Learning to Read Chest X-Rays: Recurrent Neural Cascade Model for Automated Image Annotation
10. Learning to Co-Generate Object Proposals With a Deep Structured Network
11. Learning to Select Pre-Trained Deep Representations With Bayesian Evidence Framework
12. DeepStereo: Learning to Predict New Views From the World’s Imagery

CVPR 2017
1. One-Shot Video Object Segmentation
2. FastMask: Segment Multi-Scale Object Candidates in One Shot
3. Few-Shot Object Recognition From Machine-Labeled Web Images
4. From Zero-Shot Learning to Conventional Supervised Classification: Unseen Visual Data Synthesis
5. Learning a Deep Embedding Model for Zero-Shot Learning
6. Low-Rank Embedded Ensemble Semantic Dictionary for Zero-Shot Learning
7. Multi-Attention Network for One Shot Learning
8. Zero-Shot Action Recognition With Error-Correcting Output Codes
9. One-Shot Metric Learning for Person Re-Identification
10. Semantic Autoencoder for Zero-Shot Learning
11. Zero-Shot Recognition Using Dual Visual-Semantic Mapping Paths
12. Matrix Tri-Factorization With Manifold Regularizations for Zero-Shot Learning
13. One-Shot Hyperspectral Imaging Using Faced Reflectors
14. Gaze Embeddings for Zero-Shot Image Classification
15. Zero-Shot Learning - the Good, the Bad and the Ugly
16. Link the Head to the “Beak”: Zero Shot Learning From Noisy Text Description at Part Precision
17. Semantically Consistent Regularization for Zero-Shot Recognition
18. Zero-Shot Classification With Discriminative Semantic Representation Learning
19. Learning to Detect Salient Objects With Image-Level Supervision
20. Quad-Networks: Unsupervised Learning to Rank for Interest Point Detection

CVPR 2018
1. A Generative Adversarial Approach for Zero-Shot Learning From Noisy Texts
2. Transductive Unbiased Embedding for Zero-Shot Learning
3. Zero-Shot Visual Recognition Using Semantics-Preserving Adversarial Embedding Networks
4. Learning to Compare: Relation Network for Few-Shot Learning
5. One-Shot Action Localization by Learning Sequence Matching Network
6. Multi-Label Zero-Shot Learning With Structured Knowledge Graphs
7. “Zero-Shot” Super-Resolution Using Deep Internal Learning
8. Low-Shot Learning With Large-Scale Diffusion
9. CLEAR: Cumulative LEARning for One-Shot One-Class Image Recognition
10. Zero-Shot Sketch-Image Hashing
11. Structured Set Matching Networks for One-Shot Part Labeling
12. Memory Matching Networks for One-Shot Image Recognition
13. Generalized Zero-Shot Learning via Synthesized Examples
14. Dynamic Few-Shot Visual Learning Without Forgetting
15. Exploit the Unknown Gradually: One-Shot Video-Based Person Re-Identification by Stepwise Learning
16. Feature Generating Networks for Zero-Shot Learning
17. Low-Shot Learning With Imprinted Weights
18. Zero-Shot Recognition via Semantic Embeddings and Knowledge Graphs
19. Webly Supervised Learning Meets Zero-Shot Learning: A Hybrid Approach for Fine-Grained Classification
20. Few-Shot Image Recognition by Predicting Parameters From Activations
21. Low-Shot Learning From Imaginary Data
22. Discriminative Learning of Latent Features for Zero-Shot Recognition
23. Multi-Content GAN for Few-Shot Font Style Transfer
24. Preserving Semantic Relations for Zero-Shot Learning
25. Zero-Shot Kernel Learning
26. Neural Style Transfer via Meta Networks
27. Learning to Estimate 3D Human Pose and Shape From a Single Color Image
28. Learning to Segment Every Thing
29. Leveraging Unlabeled Data for Crowd Counting by Learning to Rank
铁路线 railway line; railroad line
铁路网 railway network; railroad network
铁道科学 railway science
铁路技术 railway technology
铁路等级 railway classification
国有铁路 national railway; state railway
地方铁路 local railway; regional railway
私有铁路 private railway
合资铁路 joint investment railway; jointly owned railway
标准轨铁路 standard-gage railway
窄轨铁路 narrow-gage railway
米轨铁路 meter-gage railway
宽轨铁路 broad-gage railway
单线铁路 single track railway
双线铁路 double track railway
多线铁路 multiple track railway
重载铁路 heavy haul railway
高速铁路 high speed railway
电气化铁路; 电力铁路 electrified railway; electric railway
干线铁路 main line railway; trunk railway
市郊铁路 suburban railway
地下铁道; 地铁 subway; metro; underground railway
工业企业铁路 industry railway
矿山铁路 mine railway
轻轨铁路 light railway; light rail
高架铁路 elevated railway
单轨铁路; 独轨铁路 monorail; monorail railway
磁浮铁路 magnetic levitation railway; maglev
森林铁路 forest railway
山区铁路 mountain railway
既有铁路 existing railway
新建铁路 newly-built railway
改建铁路 reconstructed railway
运营铁路 railway in operation; operating railway
专用铁路 special purpose railway
干线 trunk line; main line
支线 branch line
铁路专用线 railway special line
货运专线 railway line for freight traffic; freight special line; freight traffic only line
客运专线 railway line for passenger traffic; passenger special line; passenger traffic only line
客货运混合铁路 railway line for mixed passenger and freight traffic
铁路运营长度; 运营里程 operation length of railway; operating distance; revenue length
列车运行图 train diagram
区间 section
区段 district
轨距 rail gage; rail gauge
轮重 wheel load
轴重 axle load
最大轴重 maximum allowable axle load
限制轴重 axle load limited
限界 clearance; gauge
限界图 clearance diagram
铁路建筑限界 railway construction clearance; structure clearance for railway; railway structure gauge
基本建筑限界 fundamental construction clearance; fundamental structure gauge
桥梁建筑限界 bridge construction clearance; bridge structure gauge
隧道建筑限界 tunnel construction clearance; tunnel structure gauge
铁路机车车辆限界 rolling stock clearance for railway; vehicle gauge
机车车辆上部限界 clearance limit for upper part of rolling stock
机车车辆下部限界 clearance limit for lower part of rolling stock
装载限界 loading clearance limit; loading gauge
阔大货物限界 clearance limit for freight with exceptional dimension; clearance limit for oversize commodities
接触网限界 clearance limit for overhead contact wire
列车与线路相互作用 track-train interaction
轮轨关系 wheel-rail relation; wheel-rail interaction
粘着系数 adhesion coefficient
车轮滑行 wheel sliding; wheel skid
车轮空转 wheel slipping
牵引种类 kinds of traction; category of traction
牵引方式 mode of traction
牵引定数 tonnage rating; tonnage of traction
装载系数 loading coefficient
速度 speed
持续速度 continuous speed
限制速度 limited speed; speed restriction
均衡速度 balancing speed
构造速度 construction speed; design speed
最高速度 maximum speed
临界速度 critical speed
重载列车 heavy haul train
高速列车 high speed train
超长超重列车 exceptionally long and heavy train
列车正面冲突 train collision
列车尾追 train tail collision
列车尾部防护 train rear end protection
伸缩运动 fore and aft motion
蛇行运动 hunting; nosing
列车拉伸 train running out
列车分离 train separation
列车颠覆 train overturning
列车动力学 train dynamics
列车空气动力学 train aerodynamics
机车车辆振动 vibration of rolling stock
纵向振动 longitudinal vibration
横向振动 lateral vibration
垂向振动 vertical vibration
摆滚振动 rock-roll vibration
浮沉振动 bouncing vibration
侧滚振动 rolling vibration
侧摆振动 swaying vibration
点头振动 pitching; nodding
摇头振动 yawing; hunting
机车车辆共振 resonance of rolling stock
机车车辆冲击 impact of rolling stock
纵向冲击 longitudinal impact
横向冲击 lateral impact
垂向冲击 vertical impact
铁路法 railway law
铁道法规 railway act
铁路条例 railway code
铁路技术管理规程 regulations of railway technical operation
铁路用地 right-of-way
铁路勘测 railway reconnaissance
调查测绘 survey and drawing of investigation; investigation survey; investigation surveying and sketching
地形调查 topographic survey
地貌调查 topographic feature survey; geomorphologic survey
地质调查 geologic survey
不良地质 unfavorable geology
特殊地质 special geology
工程地质条件 engineering geologic requirement; engineering geologic condition
气象资料 meteorological data
设计高程; 设计标高 design elevation
河流比降 slope of river; comparable horizon of river
历史洪水位 historic flood level
最高水位 highest water level; HWL
分水岭 watershed; dividing ridge
洪水频率 flood frequency
设计流量 design discharge
设计水位 design water level
施工水位 construction level; construction water level; working water level
定测 location survey; alignment; final location survey
导线测量 traversing; traverse survey
光电导线 photoelectric traverse
地形测量 topographical survey
横断面测量 cross leveling; cross-section survey; cross-section leveling
线路测量 route survey; profile survey; longitudinal survey
既有线测量; 旧线测量 survey of existing railway
线路复测 repetition survey of existing railway; resurvey of existing railway
测量精度 survey precision; precision of survey
均方差; 中误差 mean square error
最大误差; 极限误差 maximum error; limiting error
中线测量 center line survey
中线桩 center line stake
加桩 additional stake; plus stake
外移桩 shift out stake; stake outward; offset stake
水准点高程测量 benchmark leveling
中桩高程测量; 中平 center stake leveling
曲线控制点 curve control point
放线 setting-out of route; lay out of route
交点 intersection point
副交点 auxiliary intersection point
转向角 deflection angle
分转向角 auxiliary deflection angle
坐标方位角 plane-coordinate azimuth
象限角 quadrantal angle
经纬距 plane rectangular coordinate
断链 broken chain
投影断链 projection of broken chain
断高 broken height
铁路航空摄影测量; 铁路航测 railway aerial photogrammetry
铁路航空勘测 railway aerial surveying
航带设计 flight strip design; design of flight strip
铁路工程地质遥感 remote sensing of railway engineering geology
测段 segment of survey
航测选线 aerial surveying alignment
航测外控点 field control point of aerophotogrammetry
全球定位系统 global positioning system; GPS
像片索引图 index of photography
三角测量 trigonometric survey; triangulation
精密导线测量 precise traverse survey; accurate traverse survey
三角高程测量 trigonometric leveling
隧道洞外控制测量 outside tunnel control survey
隧道洞内控制测量 in-tunnel control survey; through survey
桥轴线测量 survey of bridge axis
铁路选线 railway location; approximate railway location; location of railway route selection
平原地区选线 location in plain region; plain location
越岭选线 location of mountain line; location of line in mountain region; location over mountain
山区河谷选线 mountain and valley region location; location of line in mountain and valley region
丘陵地段选线 hilly land location; location of line on hilly land
工程地质选线 engineering geologic location of line
线间距 distance between centers of tracks; midway between tracks
车站分布 distribution of stations
方案比选 scheme comparison; route alternative
投资回收期 repayment period of capital cost
纸上定线 paper location of line
缓坡地段 section of easy grade; section of gentle slope
紧坡地段 section of sufficient grade
非紧坡地段 section of insufficient grade
导向线 leading line; alignment guiding line
拔起高度; 克服高度 height of lifting; lifting height; ascent of elevation
横断面选线 cross-section method of railway location; location with cross-section method; cross-section method for location of line
展线 extension of line; development of line; line development
展线系数 coefficient of extension line; coefficient of development line
套线 overlapping line
线路平面图 track plan; line plan
线路纵断面图 track profile; line profile
站坪长度 length of station site
站坪坡度 grade of station site
控制区间 control section; controlling section
最小曲线半径 minimum radius of curve
圆曲线 circular curve
单曲线 simple curve
缓和曲线 transition curve; easement curve; spiral transition curve
缓和曲线半径变更率 rate of easement curvature; rate of transition curve
复曲线 compound curve
同向曲线 curves of same sense; adjacent curves in one direction
反向曲线 reverse curve; curve of opposite sense
夹直线 intermediate straight line; tangent between curves
坡度 grade; gradient; slope
人字坡 double spur grade
限制坡度 ruling grade; limiting grade
加力牵引坡度 pusher grade; assisting grade
最大坡度 maximum grade
临界坡度 critical grade
长大坡度 long steep grade; long heavy grade
均衡坡度 balanced grade
有害地段 harmful district
无害地段 harmless district
变坡点 point of gradient change; break in grade
坡段 grade section
坡段长度 length of grade section
坡度差 algebraic difference between adjacent gradients
竖曲线 vertical curve
分坡平段 level stretch between opposite sign gradients
缓和坡度 slight grade; flat grade; easy grade
起动缓坡 flat gradient for starting
加速缓坡 easy gradient for acceleration; accelerating grade
坡度折减 compensation of gradient; gradient compensation; grade compensation
曲线折减 compensation of curve; curve compensation
隧道坡度折减 compensation of gradient in tunnel; compensation grade in tunnel
绕行地段 detouring section; round section
换侧; 换边 change side of double line
容许应力设计法 allowable stress design method
破损阶段设计法 plastic stage design method
极限状态设计法 limit state design method
概率极限状态设计法; 可靠度设计法 probabilistic limit state design method
地震系数法 seismic coefficient method
路基 subgrade; road bed; formation subgrade
岩石路基 rock subgrade
渗水土路基 permeable soil subgrade; pervious embankment
非渗水土路基 non-permeable soil subgrade; impervious embankment
特殊土路基 subgrade of special soil
软土地区路基 subgrade in soft soil zone; subgrade in soft clay region
泥沼地区路基 subgrade in bog zone; subgrade in morass region; subgrade in swampland
膨胀土地区路基; 裂土地区路基 subgrade in swelling soil zone; subgrade in expansive soil region
盐渍土地区路基 subgrade in salty soil zone; subgrade in saline soil region
多年冻土路基 subgrade in permafrost soil zone
特殊条件下的路基 subgrade under special condition
河滩路堤 embankment on plain river beach
滨河路堤 embankment on river bank
水库路基 subgrade in reservoir; embankment crossing reservoir
崩塌地段路基 subgrade in rock fall district; subgrade in collapse zone
岩堆地段路基 subgrade in rock deposit zone; subgrade in talus zone; subgrade in scree zone
滑坡地段路基 subgrade in slide
岩溶地段路基; 喀斯特地段路基 subgrade in karst zone
洞穴地段路基 subgrade in cavity zone; subgrade in cavern zone
风沙地段路基 subgrade in windy and sandy zone; subgrade in desert
雪害地段路基 subgrade in snow damage zone; subgrade in snow disaster zone
路基横断面 subgrade cross-section
路基面 subgrade surface; formation
路基面宽度 width of the subgrade surface; formation width
路拱 road crown; subgrade crown
路肩 road shoulder; subgrade shoulder
路肩高程 formation level; shoulder level
路堤 embankment; fill
路堑 cut; road cutting
半堤半堑 part-cut and part-fill section; cut and fill section
基床 subgrade bed; formation
基床表层 surface layer of subgrade bed; formation top layer; surface layer of subgrade
基床底层 bottom layer of subgrade; formation base layer; bottom layer of subgrade bed
线路工程 railway line engineering
铁路勘测; 铁道勘测 railway reconnaissance
铁路选线; 铁道选线 railway route selection; railway location; approximate railway location; location of railway route selection
经济勘察 economic investigation
地质调查 geological survey; geologic survey
方案比较 project comparison
铁路勘探; 铁道勘探 railway exploration
铁路测量; 铁道测量 railway survey
线路测量 route survey; profile survey; longitudinal survey
曲线测量 curve survey
既有线测量 survey of existing line; survey of existing railway
铁路线路图; 铁道线路图 railway map
定线 location of line
铁路设计; 铁道设计 railway design
线路设计 track design
平面设计 plane design
纵断面设计 longitudinal section design
横断面设计 transverse section design
曲线设计 curve design
车站设计 station design
站场设计 design of stations and yards
标准设计 standard design
线路等级 line grade
铁路限界; 铁道限界 railway clearance
建筑限界 construction clearance
设计规范 code of design
设计标准 design standard
铁路线路; 铁道线路 railway line
铁路曲线; 铁道曲线 railway curve
缓和曲线 transition curve; easement curve; spiral transition curve
曲线半径 curve radius
曲线超高 super elevation on curve; cant; elevation of curve
欠超高 inadequate super elevation; deficient super elevation
未被平衡加速度 unbalanced acceleration
竖曲线 vertical curve
坡道 gradient
坡度 grade; gradient; slope
限制坡度 ruling grade; limiting grade
坡度折减 grade compensation; compensation of gradient; gradient compensation
避难线 refuge siding; catch siding
双线插入段 double track interpolation
铁路连接线; 铁道连接线 railway connecting line
隧道内线路 line in tunnel
桥头引线 bridge approach
站线 siding
站线长度 length of station line
铁路标志; 铁道标志 railway sign
警冲标 fouling point indicator; fouling post
线路中心线 track center line; central line of track
线间距离; 线间距 distance between centers of lines; distance between centers of tracks; midway between tracks
线路试验; 线路实验 track test; road test; running test
路基 subgrade
铁路路基; 路基; 铁道路基 railway subgrade; subgrade
软土路基 soft soil subgrade
沙漠路基 desert subgrade
冻土路基 permafrost subgrade
膨胀土路基 subgrade in swelling soil zone
路基基床 subgrade bed
路堤 embankment; fill
高路堤 high-fill embankment
路堑 cut; road cutting
深路堑 deep cutting
路肩 road shoulder; subgrade shoulder
边坡 side slope
护坡 revetment; slope protection; pitching
路基滑动 subgrade slip
滑坡防治 landslide preventive treatment
路基排水 subgrade drainage
塑料排水板 plastic drainage board
路基加固 subgrade strengthening
土工织物 geotextile
挡土墙; 挡墙 retaining wall
锚定板 anchor plate
抗滑桩 antiskid pile; anti-slide pile; counter-sliding pile
加筋土 reinforced earth
植被 vegetation
路基防护 subgrade protection
防护工程 protective engineering
防护林 protection forest
防沙 sand proof
固沙 stabilization of sand
输导砂 sediment transport
路侧建筑物 roadside structure
排水沟 weeper drain; weep drain; drainage ditch; drain ditch
挡风墙 wind-shield wall; wind-break wall
隔声墙 sound-proof wall
防雪栅 snow fence; snow guard
轨道理论 track theory
轨道设计 track design
轨道参数 track parameter
轨距 track gauge; rail gage; rail gauge
轨距加宽 track gauge widening
轨底坡 cant of rail; rail cant
轨道力学 track dynamics; track mechanics
轨道强度 track strength
轨道阻力 track resistance
道床阻力 trackbed resistance; ballast resistance
扣固力 fastening force
钢轨应力 rail stress
轨道承载力 track bearing capacity
轨道稳定性 track stability; stability of track
轨道试验; 轨道实验 track test
轮轨关系 wheel-rail interaction; wheel-rail relation
轮轨动力学 wheel-rail dynamics
轮轨作用力 wheel-rail force
轮轨接触 wheel-rail contact
接触应力 contact stress
轨道结构 track structure
轨道 track
铁路轨道; 轨道; 铁道轨道 railway track; track
线路上部建筑; 轨道 track superstructure; track
重型轨道 heavy track
宽轨枕轨道 broad sleeper track
板式轨道 slab track; slab-track
整体道床轨道 monolithic roadbed track
防振轨道 anti-vibration track
无缝线路 jointless line; continuously welded rail track; jointless track
桥上轨道结构 bridge deck track structure
隧道内轨道结构 track structure in tunnel
钢轨 steel rail
重型钢轨 heavy rail
长钢轨 long rail
焊接长钢轨 long welded rail
缩短轨 shortened rail; standard shortened rail; fabricated short rail used on curves; standard curtailed rail
热处理钢轨 heat-treated rail
合金钢轨 alloy steel rail
旧钢轨 used rail
钢轨钢 rail steel
钢轨螺栓孔 rail bolt hole
轨缝 rail gap; joint gap
钢轨焊缝 rail welding seam
钢轨接头 rail joint
异型接头 compromise joint
伸缩接头 expansion joint
焊接接头 welding joint; welded joint
绝缘接头 track section insulator; insulated joint
胶接接头 adhesive joint
鱼尾板 fish plate
鱼尾螺栓 fish bolt
扣件 fastening; rail fastening
弹性扣件 elastic fastening; elastic rail fastening
弹条式扣件 elastic rod type fastening
分开式扣件 separated fastening; separated rail fastening; indirect holding fastening
扣板式扣件 clip fastening
刚性扣件 rigid fastening
垫圈 ring
道钉 spike; track spike; rail spike; dog spike
螺纹道钉 threading spike; screw spike
垫板 tie-plate; tie plate
橡胶垫板 rubber tie-plate; rubber tie plate
防爬器 anticreeper; anti-creeper; rail anchor
轨距拉杆 gauge tie rod
轨下基础 underrail foundation; sub-rail foundation; sub-rail track bed
轨枕 sleeper; tie; cross tie
木枕 wooden sleeper; wooden tie
混凝土枕; 砼枕 concrete sleeper; concrete tie
轨枕板; 宽轨枕 sleeper slab; broad sleeper
钢枕 steel sleeper; steel tie
合成轨枕 composite sleeper
岔枕 points sleeper; switch tie; turnout tie
桥枕 bridge tie; bridge sleeper
轨排 track panel; track skeleton
枕下胶垫 rubber pad under sleeper
道床 track bed; ballast bed
碎石道床 ballast bed
整体道床 monolithic track bed; solid bed; integrated ballast bed; monolithic concrete bed
沥青道床 bituminous roadbed; asphalt roadbed; asphalt cemented ballast bed
石棉道床 asbestos ballast
道碴 ballast
道床覆盖 ballast cover
钢轨涂油器 rail lubricator
融雪器 snow melter
复轨器 replacer; re-railer; rerailing device
脱轨器 derailer
车挡 bumper; bumper post
道岔 switch; point; turnout; switches and crossings
单开道岔 single point; simple turnout; lateral turnout
双开道岔 double turnout
三开道岔 three-throw turnout; symmetrical three throw turnout; three-way turnout
交分道岔 slip switch
高速道岔 high-speed switch
切线型道岔 tangent turnout
道岔设计 turnout design
过岔速度 switch crossing speed
辙叉 frog; crossing
整铸辙叉 integrated cast frog
锰钢辙叉 manganese steel frog crossing
焊接辙叉 welding frog crossing
组合辙叉 combined frog; assembled frog
可动心轨辙叉 movable point frog; movable-point frog
可动翼轨辙叉 movable wing rails; movable-wing frog
核子密度湿度测定 determination of nuclear density-moisture
路基承载板测定 determination of bearing slab of subgrade
预留沉落量 reserve settlement; settlement allowance
反压护道 berm with superloading; berm for back pressure; counter swelling berm
石灰砂桩 lime sand pile
挡土墙 retaining wall
重力式挡土墙 gravity retaining wall
衡重式挡土墙 balance weight retaining wall; gravity retaining wall with relieving platform; balanced type retaining wall
中-活载 CR-live loading; China railway standard loading
桥梁标准活载 standard live load for bridge
换算均布活载 equivalent uniform live load
设计荷载 design load
主力 principal load
恒载 dead load
土压力 earth load
静水压力 hydrostatic pressure
浮力 buoyancy
列车活载 live load of train
列车离心力 centrifugal force of train
列车冲击力; 冲击荷载 impact force of train
冲击系数 coefficient of impact
人行道荷载 sidewalk loading
附加力 subsidiary load; secondary load
列车制动力 braking force of train
列车牵引力 tractive force of train
桥台 abutment
重力式桥台 gravity abutment
埋置式桥台 buried abutment
锚定板式桥台 anchor slab abutment
U形桥台 U-shaped abutment
耳墙式桥台 abutment with cantilevered retaining wall
桥墩 pier
脚手架 scaffold
山岭隧道 mountain tunnel
越岭隧道 over mountain line tunnel
水下隧道; 水底隧道 subaqueous tunnel; underwater tunnel
地铁隧道 subway tunnel; underground railway tunnel
浅埋隧道 shallow tunnel; shallow-depth tunnel; shallow burying tunnel
深埋隧道 deep tunnel; deep-depth tunnel; deep burying tunnel
单线隧道 single track tunnel
双线隧道 double track tunnel
多线隧道 multiple track tunnel
车站隧道 station tunnel
地铁车站 subway station; metro station
特长隧道 super long tunnel
长隧道 long tunnel
中长隧道 medium tunnel
短隧道 short tunnel
隧道群 tunnel group
地铁工程 subway engineering; metro engineering
洞口 tunnel adit; tunnel opening
隧道进口 tunnel entrance
隧道出口 tunnel exit
明挖法 open-cut method
拆迁 removing
测量放样 staking out in survey
工程招标 calling for tenders of project; calling for tendering of project
工程投标 bidding for project
工程报价 project quoted price
工程发包 contracting out of project
工程承包 contracting of project
工程监理 supervision of construction; supervision of project
过渡工程 transition project
控制工程 dominant project
关键工程 key project
辅助设施 auxiliary facilities
通车期限 time limit for opening to traffic
施工总工期 total time of construction; total construction time
施工组织方案 construction scheme
开工报告 report on starting of construction work; commencement report of construction work; construction starting report
竣工报告 completion report of construction work
施工计划管理 planned management of construction
网络计划技术 network planning technique
施工工艺流程 construction technology process; construction process
工法制度 construction method system
施工产值 construction output value
施工利润 construction profit
施工机械利用率 utilization ratio of construction machinery
推土机 bulldozer
铲运机 scooper; scraper; carrying scraper
装载机 loader
平地机 grader
压路机 roller
千斤顶 jack
打桩机 pile driver
工务段 track division; track district; track maintenance division
轨道; 线路上部建筑 track
轨道类型 classification of track; track standard
轨道结构 track structure
有碴轨道 ballasted track
无碴轨道 ballastless track
线路 track; permanent way
有缝线路 jointed track
无缝线路 continuously welded rail track; jointless track
长轨线路 long welded rail track
轨节 rail link
长轨条 long rail string
轨道几何形位 track geometry
轨距加宽 gauge widening
螺旋曲线 spiral curve; clothoid curve
三次抛物线曲线 cubic parabola curve
曲线超高 superelevation; cant; elevation of curve
欠超高 deficient superelevation
过超高 surplus superelevation; excess elevation
反超高 reverse superelevation; counter superelevation; negative superelevation
未被平衡离心加速度 unbalanced centrifugal acceleration
曲线正矢 curve versine
轨道力学 track mechanics
轨道动力学 track dynamics
轨道强度计算 track strength analysis
轨道几何状态恶化 track deterioration
轨道失效 track failure
轨道框架刚度 rigidity of track panel
道床系数 ballast coefficient; ballast modulus
钢轨基础模量 rail supporting modulus; track modulus
钢轨支点弹性模量 modulus of elasticity of rail support
轨道应力 track stresses
轨道稳定性 stability of track
接头阻力 joint resistance
道床阻力 ballast resistance
扣件扣压力 toe load of fastening
轮/轨接触应力 rail/wheel contact stress
钢轨 rail
轨头 rail head
轨腰 rail web
轨底 rail base; rail bottom
淬火轨 head hardened rail; quenched rail
合金轨 alloy steel rail
合金淬火轨 head hardened alloy steel rail; quenched alloy steel rail
耐磨轨 wear resistant rail
耐腐蚀轨 corrosion resistant rail
标准长度钢轨 standard length rail
短轨 short rail
缩短轨 standard shortened rail; fabricated short rail used on curves; standard curtailed rail
异形轨 compromise rail
护轨 guard rail; check rail
轨底坡 rail cant
钢轨工作边 gage line
构造轨缝 structural joint gap; maximum joint gap structurally obtainable
钢轨接头 rail joint
相对接头 opposite joint; square joint
相错式接头; 相互式接头 alternate joint; staggered joint; broken joint
垫接接头; 承接接头 supported joint
悬接接头 suspended joint
绝缘接头 insulated joint
胶结绝缘接头 glued insulated joint
焊接接头 welded joint
冻结接头 frozen joint
异形接头 compromise joint
钢轨伸缩调节器; 温度调节器 expansion rail joint; rail expansion device; switch expansion joint
接头联结零件 rail joint accessories; rail joint fastenings
接头夹板; 鱼尾板 joint bar; splice bar; fish plate
平型双头夹板 flat joint bar
轨头底面接触夹板; 楔型接头夹板 head contact flat joint bar
轨腹上圆弧接触夹板; 铰型接头夹板 head free flat joint bar
扣件; 中间联结零件 rail fastening
分开式扣件 separated rail fastening; indirect holding fastening
不分开式扣件 nonseparated rail fastening; direct holding fastening
半分开式扣件 semi-separated rail fastening; mixed holding fastening
弹性扣件 elastic rail fastening
道钉; 狗头钉; 钩头钉 track spike; rail spike; dog spike
弹簧道钉 elastic rail spike
螺纹道钉 screw spike
螺栓螺纹钉 bolt-screw spike
硫磺锚固 sulphur cement mortar anchor; sulphur cement mortar anchorage
垫板 tie plate
橡胶垫板; 弹性垫板 rubber tie plate
衬垫 pad
弹簧垫圈 spring washer
防爬器 anti-creeper; rail anchor
轨撑 rail brace
轨距杆 gage tie bar; gage rod; gage tie
轨下基础 sub-rail foundation; sub-rail track bed
轨枕 tie; cross tie; sleeper
宽混凝土轨枕; 轨枕板 broad concrete tie
纵向轨枕 longitudinal tie
板式轨道 slab-track
木枕; 枕木 wooden tie
油枕 treated wooden tie
素枕 untreated wooden tie
钢枕 steel tie
岔枕 switch tie; turnout tie
短枕 short tie; block tie
道床 ballast bed
整体道床 solid bed; integrated ballast bed; monolithic concrete bed
沥青道床 asphalt cemented ballast bed
道碴层; 道碴床 ballast layer
道碴 ballast
道碴级配 ballast grading
洛杉矶磨损实验 Los Angeles abrasion test
底碴 subballast
道床厚度 thickness of ballast bed; depth of ballast
道床宽度 width of ballast bed
道床碴肩 shoulder of ballast bed
轮轨游间 clearance between wheel flange and gage line
曲线内接 inscribed to curves
自由内接 free inscribing
强制内接 compulsory inscribing
楔形内接 wedging inscribing
静力内接 static inscribing
动力内接 dynamic inscribing
轨道不平顺 track irregularity
静态不平顺 static track irregularity; irregularity without load
动态不平顺 dynamic track irregularity
轨道几何尺寸容许公差 track geometry tolerances
轨道变形 track deformation; track disorder; track distortion
轨道残余变形 track residual deformation; track permanent deformation
弹性挤开 gage elastically widened; elastic squeeze-out
轨道水平; 左右水平 track cross level
轨道方向 track alignment
轨道前后高低; 前后高低 longitudinal level of rail; track profile
三角坑; 扭曲 twist; warp
轨道明坑 visible pit of track; visible low spot of track; track depression
翻浆冒泥 mud-pumping
钢轨低接头 depressed joint; battered joint of rail
接头瞎缝 closed joint; tight joint
大轨缝 excessive joint gap; wide joint gap
线路爬行 track creeping
轨头波纹磨损 corrugation of rail head; rail corrugation
轨头短波浪磨损 short wave undulation of rail head
钢轨伤损 rail defects and failures
钢轨擦伤 engine burn; wheel burn
钢轨锈蚀 rail corrosion
核伤 nucleus flaw; oval flaw
轨头掉块 spalling of rail head
轨头压溃 crushing of rail head
钢轨裂纹 rail cracks
轨头发裂 head checks; hair crack of rail head
轨底崩裂 burst of rail base; burst of rail bottom; broken rail base
钢轨折断 brittle fractures of rail; sudden rupture of rail
钢轨落锤试验 drop test of rail
螺孔裂纹 bolt hole crack
线路大修 major repair of track; overhaul of track; track renewal
线路中修 intermediate repair of track
线路维修 maintenance of track
全面起道捣固 out-of-face surfacing
起道 raising of track; track lifting
落道 under cutting of track; lowering of track
垫砂起道 measured shovel packing
改道 gage correction; gaging of track
拨道 track lining
整正水平 adjusting of cross level
整正曲线 curve adjusting; curve lining
绳正法整正曲线 string lining of curve
绳正法整正曲线计算器 string lining computer; string-line calculator
调整轨缝; 均匀轨缝 adjusting of rail gaps; evenly distributing joint gaps
整正轨缝 dispersal of rail gaps; adjusting joint gaps up to standard
焊修钢轨; 堆焊钢轨 resurfacing of rail
打磨钢轨 rail grinding
轨头整形 rail head reprofiling
捣固道床 ballast tamping
夯实道床 ballast ramming; ballast consolidating
清筛道床 ballast cleaning
整理道床 ballast trimming
锁定轨温 fastening-down temperature of rail
零应力轨温 stress free rail temperature
中和轨温 neutral temperature
放散温度力 destressing; stress liberation
温度力 temperature stress
温度力峰 temperature stress peak
轨道鼓出临界温度 critical temperature of track buckling
伸缩区; 呼吸区 breathing zone
缓冲区 buffer zone; transition zone
固定区 nonbreathing zone; fixed zone; deformation-free zone
调节轨 buffer rail
气压焊 oxyacetylene pressure welding
电阻焊 flash butt welding
周转轨 inventory stock
备用轨 stock rails per kilometer of track; emergency rail stored along the way
再用轨 second hand rail; relaying rail
顺坡 run-off elevation
立体交叉 grade separation
道口 grade crossing; level crossing
道岔 turnout; switches and crossings
单开道岔 simple turnout; lateral turnout
单式对称道岔; 双开道岔 symmetrical double curve turnout; equilateral turnout
单式不对称道岔; 不对称双开道岔 unsymmetrical double curve turnout; unequilateral turnout
单式同侧道岔 unsymmetrical double curve turnout in the same direction
三开道岔 symmetrical three throw turnout; three-way turnout
不对称三开道岔 unsymmetrical three-way turnout; unsymmetrical three throw turnout
左开道岔 left hand turnout
右开道岔 right hand turnout
交叉 crossing
菱形交叉 diamond crossing
直角交叉 rectangular crossing; square crossing
交分道岔 slip switch
单式交分道岔 single slip switches
复式交分道岔 double slip switches
渡线 crossover
交叉渡线 scissors crossing; double crossover
平行渡线 parallel crossover
道岔主线 main line of turnout; main track of turnout; turnout main
道岔侧线 branch line of turnout; branch track of turnout; turnout branch
导曲线 lead curve
导曲线半径 radius of lead curve
导曲线支距 offset of lead curve
道岔中心 center of turnout
辙叉 frog; crossing
辙叉角 frog angle
辙叉号数 frog number
道岔号数 turnout number
道岔始端 beginning of turnout
道岔终端 end of turnout
尖轨 switch rail; tongue rail; blade
直线尖轨 straight switch
曲线尖轨 curved switch
尖轨理论尖端 theoretical point of switch rail
尖轨尖端 actual point of switch rail
尖轨跟端 heel of switch rail
曲线出岔道岔 turnout from curved track
辙叉心轨尖端 actual point of frog
辙叉趾端 toe end of frog; frog toe
辙叉跟端 heel end of frog; frog heel
道岔基线 reference line of turnout
道岔全长 total length of turnout
道岔理论长度 theoretical length of turnout
道岔实际长度 actual length of turnout
道岔理论导程 theoretical lead of turnout
道岔前部理论长度 front part theoretical length of turnout
道岔后部理论长度 rear part theoretical length of turnout
道岔后部实际长度 rear part actual length of turnout
尖轨长度 length of switch rail
辙叉趾长 toe length of frog
辙叉跟长 heel length of frog
辙叉趾宽 toe spread of frog
辙叉跟宽 heel spread of frog
尖轨动程 throw of switch
护轨与心轨的查照间隔 check gage
护背距离 guard rail face gage; back gage
辙叉有害空间 gap in the frog; open throat; unguarded flange-way
转辙角 switch angle
辙叉咽喉 throat of frog
轮缘槽 flange-way; flange clearance
转辙器 switch
可弯式尖轨转辙器 flexible switch
间隔铁式尖轨转辙器 loose heel switch
高锰钢整铸辙叉 solid manganese steel frog; cast manganese steel frog
钢轨组合辙叉 bolted rigid frog; assembled frog
可动心轨辙叉 movable-point frog
可动翼轨辙叉 movable-wing frog
钝角辙叉 obtuse frog
锐角辙叉 end frog; acute frog
曲线辙叉 curved frog
基本轨 stock rail
密贴尖轨 closed switch rail; close contact between switch point and stock rail
淬火尖轨 surface-hardened switch rail; quenched switch rail
特种断面尖轨 special heavy section switch rail; tongue rail made of special section rail; full-web section switch rail
翼轨 wing rail
心轨 point rail; nose rail
道岔护轨 turnout guard rail
尖轨护轨 switch point guard rail
尖轨保护器 switch protector
An English Explanation of the Binary Search Method (二分法)

The binary search algorithm is a fundamental concept in computer science and mathematics. It is a powerful and efficient technique for locating a specific item within a sorted collection of data. In this article, we delve into the details of the binary search algorithm, exploring its mechanics, applications, and complexity.

At its core, the binary search algorithm operates by repeatedly dividing the search interval in half. It begins by comparing the target value with the middle element of the array. If the target value matches the middle element, the search is complete, and the index of the element is returned. If the target value is less than the middle element, the search continues in the lower half of the array. Conversely, if the target value is greater than the middle element, the search proceeds in the upper half. This process is repeated until the target value is found or the search interval is empty.

One key requirement for binary search to work is that the data collection must be sorted in ascending order. This prerequisite lets the algorithm exploit the ordering of the array, significantly reducing the search space with each iteration. By systematically eliminating half of the remaining elements at each step, binary search achieves a logarithmic time complexity of O(log n), where n is the number of elements in the array. This efficiency makes binary search particularly suitable for large datasets where linear search would be impractical.

The binary search algorithm finds applications in various domains, including but not limited to:

1. Searching: Binary search is commonly used to quickly locate elements in sorted arrays or lists. Its efficiency makes it indispensable for tasks such as searching phone directories, dictionaries, or databases.
2. Sorting: While binary search itself is not a sorting algorithm, it can be combined with sorting algorithms such as merge sort or quicksort to efficiently search for elements in the resulting sorted arrays.
3. Computer science education: Binary search serves as a foundational example, giving students a first exposure to algorithm design and analysis.
4. Game development: Binary search is used in game development for tasks such as collision detection, pathfinding, and AI decision-making.

Despite its efficiency and versatility, binary search has certain limitations. One significant constraint is that the data collection must be sorted beforehand, which can incur additional preprocessing overhead. In addition, binary search is not well suited to dynamic datasets that change frequently, since maintaining the sorted order becomes non-trivial.

In conclusion, the binary search algorithm is a powerful tool for efficiently locating elements within sorted arrays. Its logarithmic time complexity and wide applicability make it a fundamental concept in computer science and mathematics. By halving the search space at each iteration, binary search demonstrates the elegance and efficiency of good algorithm design. Whether used for searching, sorting support, or other computational tasks, it remains a cornerstone of algorithmic problem-solving.
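To make the mechanics above concrete, here is a minimal iterative implementation in Python. It is an illustrative sketch rather than any particular library's API; the function name and the return convention (-1 for "not found") are choices made for this example.

def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent.

    Requires sorted_items to be sorted in ascending order; runs in
    O(log n) time because each iteration halves the search interval.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2            # middle of the current interval
        if sorted_items[mid] == target:
            return mid                     # target found
        elif sorted_items[mid] < target:
            low = mid + 1                  # continue in the upper half
        else:
            high = mid - 1                 # continue in the lower half
    return -1                              # interval empty: target absent

# Usage: binary_search([2, 3, 5, 7, 11, 13], 7) returns 3, while
# binary_search([2, 3, 5, 7, 11, 13], 4) returns -1.

In Python specifically, the standard library's bisect module implements the same halving strategy for locating insertion points in sorted lists.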