Non-Existence of Local Integrals of Motion in the Multi-Deformed Ising Model
A proof of the Lovász local lemma. The Lovász local lemma is a probabilistic method used to prove existence results.
It was proved in 1975 by the Hungarian mathematician László Lovász, jointly with Paul Erdős.
This striking theorem was originally formulated for combinatorial problems, but it has since grown into a powerful tool for proving existence in other fields as well.
In this note we walk through the lemma and its proof step by step.
First, let us define the basic notions the lemma uses.
Suppose we have n events A_1, A_2, ..., A_n in a probability space Ω, where each event A_i depends on at most d of the other events (formally, A_i is mutually independent of all events outside its dependency neighborhood).
Whether each event occurs or not is random.
The events are thought of as "bad" events, and our goal is to show that, with positive probability, none of them occurs, i.e. that there is an outcome avoiding all of them simultaneously.
The local lemma states: if each A_i depends on at most d other events and Pr(A_i) ≤ p for all i, with 4pd ≤ 1 (a commonly used sharper form requires e·p·(d+1) ≤ 1), then Pr(Ā_1 ∩ ... ∩ Ā_n) > 0.
This reveals a kind of freedom in the events: even though each one interacts with its neighbors, the dependencies are local enough that a globally good outcome still exists.
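A minimal numeric check of this condition (Python; the function name is ours, and both the classical 4pd form above and the sharper e·p·(d+1) form are included):

```python
import math

def lll_holds(p, d, form="classical"):
    """Symmetric Lovasz local lemma condition: each event has probability
    at most p and depends on at most d other events."""
    if form == "classical":
        return 4.0 * p * d <= 1.0              # original 1975-style bound
    return math.e * p * (d + 1.0) <= 1.0       # sharper modern bound

# Unlike the union bound (which needs n*p < 1), neither condition involves
# the total number of events n:
print(lll_holds(0.01, 25))             # True: some outcome avoids all events
print(lll_holds(0.01, 25, "modern"))   # True under e*p*(d+1) <= 1 as well
print(lll_holds(0.05, 10))             # False: the lemma is silent here
```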
Proving the theorem requires a careful argument; the modern proof is in fact algorithmic.
Next we describe a coloring construction, a standard setting in which the lemma is applied and which makes the "bad event" bookkeeping concrete.
First, assign each A_i a color c_i drawn from {1, 2, ..., n}.
Then define the dependency graph G on {A_1, A_2, ..., A_n}, joining A_i to A_j whenever A_i depends on A_j.
The colors are assigned at random, each A_i equally likely to receive any color.
Now define a predicate Bad(A_i) that flags whether A_i is in a "bad" state.
We call A_i bad if some A_j adjacent to it in G received the same color.
In short, A_i is bad exactly when it has a same-colored neighbor, and the local lemma is used to show that with positive probability no bad event occurs.
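This random-coloring setup is closest in spirit to the constructive (Moser-Tardos) version of the lemma: sample all variables at random, and while some bad event holds, resample only the variables that event depends on. A minimal sketch for 2-coloring a hypergraph (Python; the toy instance and all names are ours, not from the text):

```python
import random

def moser_tardos_2coloring(n, edges, seed=0):
    """Constructive local lemma (Moser-Tardos): color vertices at random;
    while some 'bad' event holds (an edge whose vertices are all one color),
    resample just the variables that event depends on."""
    rng = random.Random(seed)
    color = [rng.randrange(2) for _ in range(n)]
    def bad(edge):
        return len({color[v] for v in edge}) == 1
    while True:
        violated = [e for e in edges if bad(e)]
        if not violated:
            return color          # no bad event occurs
        for v in rng.choice(violated):
            color[v] = rng.randrange(2)

# Toy 3-uniform instance; each edge shares vertices with at most 2 others.
edges = [(0, 1, 2), (2, 3, 4), (4, 5, 6), (6, 7, 0)]
print(moser_tardos_2coloring(8, edges))
```

When the local lemma condition is met, the expected number of resampling rounds is small; this easy toy instance succeeds almost immediately regardless.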
Handbook of Elemental Abundance Data for Applied Geochemistry, compiled by Chi Qinghua and Yan Mingcai (Geological Publishing House, Beijing, 2007; ISBN 978-7-116-05536-0).

Synopsis: the book compiles the chemical compositions and element abundances proposed by researchers in China and abroad for igneous rocks, sedimentary rocks, metamorphic rocks, soils, stream sediments, floodplain sediments, shallow-sea sediments and the continental crust, and lists the certified values of the main Chinese geochemical reference materials commonly used in exploration geochemistry and environmental geochemistry. Together these are the basic geochemical data on the important geological media that every geochemist needs to know. The book is intended for researchers in geochemistry, petrology, exploration geochemistry, ecological, environmental and agricultural geochemistry, geological sample analysis and testing, mineral exploration and basic geology, and may also be used by researchers in other branches of the earth sciences.

On the Handbook of Elemental Abundance Data for Applied Geochemistry (in lieu of a preface): geochemical element abundance data are statistics of the contents of many elements, in various media and at various scales, within the several spheres of the Earth. They are important reference material when applied geochemistry is brought to bear on resource and environmental problems. Gathering these data in one place spares researchers a great deal of labor and time searching the literature, and this small handbook was compiled with exactly that idea in mind.
A Discriminatively Trained, Multiscale, Deformable Part Model

Pedro Felzenszwalb (University of Chicago), David McAllester (Toyota Technological Institute at Chicago), Deva Ramanan (UC Irvine)

Abstract

This paper describes a discriminatively trained, multiscale, deformable part model for object detection. Our system achieves a two-fold improvement in average precision over the best performance in the 2006 PASCAL person detection challenge. It also outperforms the best results in the 2007 challenge in ten out of twenty categories. The system relies heavily on deformable parts. While deformable part models have become quite popular, their value had not been demonstrated on difficult benchmarks such as the PASCAL challenge. Our system also relies heavily on new methods for discriminative training. We combine a margin-sensitive approach for data mining hard negative examples with a formalism we call latent SVM. A latent SVM, like a hidden CRF, leads to a non-convex training problem. However, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive examples. We believe that our training methods will eventually make possible the effective use of more latent information such as hierarchical (grammar) models and models involving latent three dimensional pose.

1. Introduction

We consider the problem of detecting and localizing objects of a generic category, such as people or cars, in static images. We have developed a new multiscale deformable part model for solving this problem. The models are trained using a discriminative procedure that only requires bounding box labels for the positive examples. Using these models we implemented a detection system that is both highly efficient and accurate, processing an image in about 2 seconds and achieving recognition rates that are significantly better than previous systems.

Our system achieves a two-fold improvement in average precision over the winning system [5] in the 2006 PASCAL person detection challenge. The system also outperforms the best results in the 2007 challenge in ten out of twenty object categories. Figure 1 shows an example detection obtained with our person model.

(This material is based upon work supported by the National Science Foundation under Grants No. 0534820 and 0535174.)

Figure 1. Example detection obtained with the person model. The model is defined by a coarse template, several higher resolution part templates and a spatial model for the location of each part.

The notion that objects can be modeled by parts in a deformable configuration provides an elegant framework for representing object categories [1-3, 6, 10, 12, 13, 15, 16, 22]. While these models are appealing from a conceptual point of view, it has been difficult to establish their value in practice. On difficult datasets, deformable models are often outperformed by "conceptually weaker" models such as rigid templates [5] or bag-of-features [23]. One of our main goals is to address this performance gap.

Our models include both a coarse global template covering an entire object and higher resolution part templates. The templates represent histogram of gradient features [5]. As in [14, 19, 21], we train models discriminatively. However, our system is semi-supervised, trained with a max-margin framework, and does not rely on feature detection.
We also describe a simple and effective strategy for learning parts from weakly-labeled data. In contrast to computationally demanding approaches such as [4], we can learn a model in 3 hours on a single CPU.

Another contribution of our work is a new methodology for discriminative training. We generalize SVMs for handling latent variables such as part positions, and introduce a new method for data mining "hard negative" examples during training. We believe that handling partially labeled data is a significant issue in machine learning for computer vision. For example, the PASCAL dataset only specifies a bounding box for each positive example of an object. We treat the position of each object part as a latent variable. We also treat the exact location of the object as a latent variable, requiring only that our classifier select a window that has large overlap with the labeled bounding box.

A latent SVM, like a hidden CRF [19], leads to a non-convex training problem. However, unlike a hidden CRF, a latent SVM is semi-convex and the training problem becomes convex once latent information is specified for the positive training examples. This leads to a general coordinate descent algorithm for latent SVMs.

System Overview. Our system uses a scanning window approach. A model for an object consists of a global "root" filter and several part models. Each part model specifies a spatial model and a part filter. The spatial model defines a set of allowed placements for a part relative to a detection window, and a deformation cost for each placement.

The score of a detection window is the score of the root filter on the window plus the sum over parts, of the maximum over placements of that part, of the part filter score on the resulting subwindow minus the deformation cost. This is similar to classical part-based models [10, 13]. Both root and part filters are scored by computing the dot product between a set of weights and histogram of gradient (HOG) features within a window. The root filter is equivalent to a Dalal-Triggs model [5]. The features for the part filters are computed at twice the spatial resolution of the root filter. Our model is defined at a fixed scale, and we detect objects by searching over an image pyramid.

In training we are given a set of images annotated with bounding boxes around each instance of an object. We reduce the detection problem to a binary classification problem. Each example x is scored by a function of the form f_β(x) = max_z β·Φ(x, z). Here β is a vector of model parameters and z are latent values (e.g. the part placements). To learn a model we define a generalization of SVMs that we call latent variable SVM (LSVM). An important property of LSVMs is that the training problem becomes convex if we fix the latent values for positive examples. This can be used in a coordinate descent algorithm.

In practice we iteratively apply classical SVM training to triples (⟨x_1, z_1, y_1⟩, ..., ⟨x_n, z_n, y_n⟩) where z_i is selected to be the best scoring latent label for x_i under the model learned in the previous iteration. An initial root filter is generated from the bounding boxes in the PASCAL dataset. The parts are initialized from this root filter.
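The window score just described can be sketched directly. A minimal illustration (Python with NumPy; all names and the toy data are ours, precomputed filter responses are assumed, and the deformation term follows the sign convention of Eq. (1) in Section 2.3, where b_i is typically negative so it acts as a cost):

```python
import numpy as np

def window_score(root_score, part_maps, a, b):
    """Detection-window score: root-filter response plus, for each part,
    the best placement score, i.e. part-filter response plus the quadratic
    deformation term a_i.(dx,dy) + b_i.(dx^2,dy^2)."""
    total = float(root_score)
    for resp, (ax, ay), (bx, by) in zip(part_maps, a, b):
        h, w = resp.shape
        dy, dx = np.mgrid[0:h, 0:w]
        dx, dy = dx - w // 2, dy - h // 2        # displacement from anchor
        deform = ax * dx + ay * dy + bx * dx**2 + by * dy**2
        total += float((resp + deform).max())
    return total

# Toy usage: one part with a 5x5 grid of responses around its anchor.
resp = np.zeros((5, 5)); resp[2, 3] = 1.0
print(window_score(0.5, [resp], a=[(0.0, 0.0)], b=[(-1.0, -1.0)]))
```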
2. Model

The underlying building blocks for our models are the Histogram of Oriented Gradient (HOG) features from [5]. We represent HOG features at two different scales. Coarse features are captured by a rigid template covering an entire detection window. Finer scale features are captured by part templates that can be moved with respect to the detection window. The spatial model for the part locations is equivalent to a star graph or 1-fan [3] where the coarse template serves as a reference position.

Figure 2. The HOG feature pyramid (image pyramid) and an object hypothesis defined in terms of a placement of the root filter (near the top of the pyramid) and the part filters (near the bottom of the pyramid).

2.1. HOG Representation

We follow the construction in [5] to define a dense representation of an image at a particular resolution. The image is first divided into 8x8 non-overlapping pixel regions, or cells. For each cell we accumulate a 1D histogram of gradient orientations over pixels in that cell. These histograms capture local shape properties but are also somewhat invariant to small deformations.

The gradient at each pixel is discretized into one of nine orientation bins, and each pixel "votes" for the orientation of its gradient, with a strength that depends on the gradient magnitude. For color images, we compute the gradient of each color channel and pick the channel with highest gradient magnitude at each pixel. Finally, the histogram of each cell is normalized with respect to the gradient energy in a neighborhood around it. We look at the four 2×2 blocks of cells that contain a particular cell and normalize the histogram of the given cell with respect to the total energy in each of these blocks. This leads to a vector of length 9×4 representing the local gradient information inside a cell.

We define a HOG feature pyramid by computing HOG features of each level of a standard image pyramid (see Figure 2). Features at the top of this pyramid capture coarse gradients histogrammed over fairly large areas of the input image while features at the bottom of the pyramid capture finer gradients histogrammed over small areas.

2.2. Filters

Filters are rectangular templates specifying weights for subwindows of a HOG pyramid. A w by h filter F is a vector with w×h×9×4 weights. The score of a filter is defined by taking the dot product of the weight vector and the features in a w×h subwindow of a HOG pyramid.

The system in [5] uses a single filter to define an object model. That system detects objects from a particular class by scoring every w×h subwindow of a HOG pyramid and thresholding the scores.

Let H be a HOG pyramid and p = (x, y, l) be a cell in the l-th level of the pyramid. Let φ(H, p, w, h) denote the vector obtained by concatenating the HOG features in the w×h subwindow of H with top-left corner at p. The score of F on this detection window is F·φ(H, p, w, h). Below we use φ(H, p) to denote φ(H, p, w, h) when the dimensions are clear from context.

2.3. Deformable Parts

Here we consider models defined by a coarse root filter that covers the entire object and higher resolution part filters covering smaller parts of the object. Figure 2 illustrates a placement of such a model in a HOG pyramid. The root filter location defines the detection window (the pixels inside the cells covered by the filter). The part filters are placed several levels down in the pyramid, so the HOG cells at that level have half the size of cells in the root filter level.

We have found that using higher resolution features for defining part filters is essential for obtaining high recognition performance. With this approach the part filters represent finer resolution edges that are localized to greater accuracy when compared to the edges represented in the root filter. For example, consider building a model for a face. The root filter could capture coarse resolution edges such as the face boundary while the part filters could capture details such as eyes, nose and mouth.
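Stepping back to Section 2.1 for a moment, the per-cell histogram is easy to sketch. A minimal version (Python with NumPy; names are ours, and the vote interpolation used in practice by [5] is omitted):

```python
import numpy as np

def cell_histogram(gx, gy, bins=9):
    """9-bin histogram of unsigned gradient orientations for one 8x8 cell:
    each pixel votes for its orientation bin, weighted by magnitude."""
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned, in [0, pi)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hist = np.zeros(bins)
    np.add.at(hist, idx.ravel(), mag.ravel())
    return hist

rng = np.random.default_rng(0)
gx, gy = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
print(cell_histogram(gx, gy))
```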
The model for an object with n parts is formally defined by a root filter F_0 and a set of part models (P_1, ..., P_n) where P_i = (F_i, v_i, s_i, a_i, b_i). Here F_i is a filter for the i-th part, v_i is a two-dimensional vector specifying the center for a box of possible positions for part i relative to the root position, s_i gives the size of this box, while a_i and b_i are two-dimensional vectors specifying coefficients of a quadratic function measuring a score for each possible placement of the i-th part. Figure 1 illustrates a person model.

A placement of a model in a HOG pyramid is given by z = (p_0, ..., p_n), where p_i = (x_i, y_i, l_i) is the location of the root filter when i = 0 and the location of the i-th part when i > 0. We assume the level of each part is such that a HOG cell at that level has half the size of a HOG cell at the root level. The score of a placement is given by the scores of each filter (the data term) plus a score of the placement of each part relative to the root (the spatial term),

Σ_{i=0}^{n} F_i·φ(H, p_i) + Σ_{i=1}^{n} [ a_i·(x̃_i, ỹ_i) + b_i·(x̃_i², ỹ_i²) ],   (1)

where (x̃_i, ỹ_i) = ((x_i, y_i) − 2(x, y) + v_i)/s_i gives the location of the i-th part relative to the root location. Both x̃_i and ỹ_i should be between −1 and 1.

There is a large (exponential) number of placements for a model in a HOG pyramid. We use dynamic programming and distance transform techniques [9, 10] to compute the best location for the parts of a model as a function of the root location. This takes O(nk) time, where n is the number of parts in the model and k is the number of cells in the HOG pyramid. To detect objects in an image we score root locations according to the best possible placement of the parts and threshold this score.

The score of a placement z can be expressed in terms of the dot product, β·ψ(H, z), between a vector of model parameters β and a vector ψ(H, z):

β = (F_0, ..., F_n, a_1, b_1, ..., a_n, b_n),
ψ(H, z) = (φ(H, p_0), φ(H, p_1), ..., φ(H, p_n), x̃_1, ỹ_1, x̃_1², ỹ_1², ..., x̃_n, ỹ_n, x̃_n², ỹ_n²).

We use this representation for learning the model parameters as it makes a connection between our deformable models and linear classifiers.

One interesting aspect of the spatial models defined here is that we allow for the coefficients (a_i, b_i) to be negative. This is more general than the quadratic "spring" cost that has been used in previous work.
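The O(nk) claim rests on the generalized distance transform of [9]: with quadratic deformation costs, the best part placement along one dimension is the lower envelope of parabolas, computable in linear time (the 2D case runs the 1D pass along rows, then columns). A sketch of the 1D transform (Python; it computes min_q ((p−q)² + f(q)), so maximizing response minus cost means negating the responses):

```python
import numpy as np

def dt1d(f):
    """D[p] = min_q ((p - q)**2 + f[q]) in O(n), via the lower envelope
    of parabolas, following Felzenszwalb and Huttenlocher [9]."""
    n = len(f)
    d = np.empty(n)
    v = np.zeros(n, dtype=int)       # parabola roots in the envelope
    z = np.empty(n + 1)              # boundaries between envelope segments
    k = 0
    z[0], z[1] = -np.inf, np.inf
    for q in range(1, n):
        s = ((f[q] + q*q) - (f[v[k]] + v[k]*v[k])) / (2*q - 2*v[k])
        while s <= z[k]:             # new parabola hides the last one
            k -= 1
            s = ((f[q] + q*q) - (f[v[k]] + v[k]*v[k])) / (2*q - 2*v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, np.inf
    k = 0
    for q in range(n):               # read off the envelope
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k])**2 + f[v[k]]
    return d

print(dt1d(np.array([3.0, 0.0, 4.0, 5.0])))   # -> [1. 0. 1. 4.]
```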
3. Learning

The PASCAL training data consists of a large set of images with bounding boxes around each instance of an object. We reduce the problem of learning a deformable part model with this data to a binary classification problem. Let D = (⟨x_1, y_1⟩, ..., ⟨x_n, y_n⟩) be a set of labeled examples where y_i ∈ {−1, 1} and x_i specifies a HOG pyramid, H(x_i), together with a range, Z(x_i), of valid placements for the root and part filters. We construct a positive example from each bounding box in the training set. For these examples we define Z(x_i) so the root filter must be placed to overlap the bounding box by at least 50%. Negative examples come from images that do not contain the target object. Each placement of the root filter in such an image yields a negative training example.

Note that for the positive examples we treat both the part locations and the exact location of the root filter as latent variables. We have found that allowing uncertainty in the root location during training significantly improves the performance of the system (see Section 4).

3.1. Latent SVMs

A latent SVM is defined as follows. We assume that each example x is scored by a function of the form

f_β(x) = max_{z∈Z(x)} β·Φ(x, z),   (2)

where β is a vector of model parameters and z is a set of latent values. For our deformable models we define Φ(x, z) = ψ(H(x), z) so that β·Φ(x, z) is the score of placing the model according to z. In analogy to classical SVMs we would like to train β from labeled examples D = (⟨x_1, y_1⟩, ..., ⟨x_n, y_n⟩) by optimizing the following objective function,

β*(D) = argmin_β λ||β||² + Σ_{i=1}^{n} max(0, 1 − y_i f_β(x_i)).   (3)

By restricting the latent domains Z(x_i) to a single choice, f_β becomes linear in β, and we obtain linear SVMs as a special case of latent SVMs. Latent SVMs are instances of the general class of energy-based models [18].

3.2. Semi-Convexity

Note that f_β(x) as defined in (2) is a maximum of functions each of which is linear in β. Hence f_β(x) is convex in β. This implies that the hinge loss max(0, 1 − y_i f_β(x_i)) is convex in β when y_i = −1. That is, the loss function is convex in β for negative examples. We call this property of the loss function semi-convexity.

Consider an LSVM where the latent domains Z(x_i) for the positive examples are restricted to a single choice. The loss due to each positive example is now convex. Combined with the semi-convexity property, (3) becomes convex in β.

If the labels for the positive examples are not fixed we can compute a local optimum of (3) using a coordinate descent algorithm:

1. Holding β fixed, optimize the latent values for the positive examples z_i = argmax_{z∈Z(x_i)} β·Φ(x, z).
2. Holding {z_i} fixed for positive examples, optimize β by solving the convex problem defined above.

It can be shown that both steps always improve or maintain the value of the objective function in (3). If both steps maintain the value we have a strong local optimum of (3), in the sense that Step 1 searches over an exponentially large space of latent labels for positive examples while Step 2 simultaneously searches over weight vectors and an exponentially large space of latent labels for negative examples.

3.3. Data Mining Hard Negatives

In object detection the vast majority of training examples are negative. This makes it infeasible to consider all negative examples at a time. Instead, it is common to construct training data consisting of the positive instances and "hard negative" instances, where the hard negatives are data mined from the very large set of possible negative examples.

Here we describe a general method for data mining examples for SVMs and latent SVMs. The method iteratively solves subproblems using only hard instances. The innovation of our approach is a theoretical guarantee that it leads to the exact solution of the training problem defined using the complete training set. Our results require the use of a margin-sensitive definition of hard examples.

The results described here apply both to classical SVMs and to the problem defined by Step 2 of the coordinate descent algorithm for latent SVMs. We omit the proofs of the theorems due to lack of space. These results are related to working set methods [17].

We define the hard instances of D relative to β as

M(β, D) = { ⟨x, y⟩ ∈ D | y f_β(x) ≤ 1 }.   (4)

That is, M(β, D) are training examples that are incorrectly classified or near the margin of the classifier defined by β.
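For a linear scorer, the hard-example set of Eq. (4) is one line of code. A minimal sketch with synthetic data (Python with NumPy; all names and data are ours):

```python
import numpy as np

def hard_instances(w, X, y):
    """M(w, D) from Eq. (4): examples misclassified or within the margin,
    i.e. y_i * (w . x_i) <= 1 for a linear scorer."""
    margins = y * (X @ w)
    keep = margins <= 1.0
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = np.sign(X[:, 0] + 0.1 * rng.normal(size=200))
w = np.zeros(5)
Xh, yh = hard_instances(w, X, y)   # with w = 0, every example is "hard"
print(len(yh))                      # 200
```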
We can show that β*(D) only depends on hard instances.

Theorem 1. Let C be a subset of the examples in D. If M(β*(D), D) ⊆ C then β*(C) = β*(D).

This implies that in principle we could train a model using a small set of examples. However, this set is defined in terms of the optimal model β*(D). Given a fixed β we can use M(β, D) to approximate M(β*(D), D). This suggests an iterative algorithm where we repeatedly compute a model from the hard instances defined by the model from the last iteration. This is further justified by the following fixed-point theorem.

Theorem 2. If β*(M(β, D)) = β then β = β*(D).

Let C be an initial "cache" of examples. In practice we can take the positive examples together with random negative examples. Consider the following iterative algorithm:

1. Let β := β*(C).
2. Shrink C by letting C := M(β, C).
3. Grow C by adding examples from M(β, D) up to a memory limit L.

Theorem 3. If |C| < L after each iteration of Step 2, the algorithm will converge to β = β*(D) in finite time.

3.4. Implementation details

Many of the ideas discussed here are only approximately implemented in our current system. In practice, when training a latent SVM we iteratively apply classical SVM training to triples ⟨x_1, z_1, y_1⟩, ..., ⟨x_n, z_n, y_n⟩ where z_i is selected to be the best scoring latent label for x_i under the model trained in the previous iteration. Each of these triples leads to an example ⟨Φ(x_i, z_i), y_i⟩ for training a linear classifier. This allows us to use a highly optimized SVM package (SVMLight [17]). On a single CPU, the entire training process takes 3 to 4 hours per object class in the PASCAL datasets, including initialization of the parts.

Root Filter Initialization: For each category, we automatically select the dimensions of the root filter by looking at statistics of the bounding boxes in the training data.¹ We train an initial root filter F_0 using an SVM with no latent variables. The positive examples are constructed from the unoccluded training examples (as labeled in the PASCAL data). These examples are anisotropically scaled to the size and aspect ratio of the filter. We use random subwindows from negative images to generate negative examples.

Root Filter Update: Given the initial root filter trained as above, for each bounding box in the training set we find the best-scoring placement for the filter that significantly overlaps with the bounding box. We do this using the original, un-scaled images. We retrain F_0 with the new positive set and the original random negative set, iterating twice.

Part Initialization: We employ a simple heuristic to initialize six parts from the root filter trained above. First, we select an area a such that 6a equals 80% of the area of the root filter. We greedily select the rectangular region of area a from the root filter that has the most positive energy. We zero out the weights in this region and repeat until six parts are selected. The part filters are initialized from the root filter values in the subwindow selected for the part, but filled in to handle the higher spatial resolution of the part. The initial deformation costs measure the squared norm of a displacement with a_i = (0, 0) and b_i = −(1, 1).

Model Update: To update a model we construct new training data triples. For each positive bounding box in the training data, we apply the existing detector at all positions and scales with at least a 50% overlap with the given bounding box. Among these we select the highest scoring placement as the positive example corresponding to this training bounding box (Figure 3). Negative examples are selected by finding high scoring detections in images not containing the target object. We add negative examples to a cache until we encounter file size limits. A new model is trained by running SVMLight on the positive and negative examples, each labeled with part placements. We update the model 10 times using the cache scheme described above. In each iteration we keep the hard instances from the previous cache and add as many new hard instances as possible within the memory limit. Toward the final iterations, we are able to include all hard instances, M(β, D), in the cache.

¹We picked a simple heuristic by cross-validating over 5 object classes. We set the model aspect to be the most common (mode) aspect in the data. We set the model size to be the largest size not larger than 80% of the data.
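Steps 1-3 can be sketched end to end with a tiny stand-in for SVMLight (Python with NumPy; the Pegasos-style subgradient solver, all constants, and the duplicate-handling shortcut in the grow step are ours):

```python
import numpy as np

def svm_train(X, y, lam=0.01, epochs=50, seed=0):
    """Tiny Pegasos-style linear SVM, standing in for SVMLight."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1.0:
                w = (1.0 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1.0 - eta * lam) * w
    return w

def mine(X, y, L=200, rounds=5):
    """Steps 1-3: train on cache C, shrink C to M(w, C), grow from M(w, D)."""
    rng = np.random.default_rng(1)
    idx = rng.choice(len(y), size=min(L, len(y)), replace=False)
    Xc, yc = X[idx], y[idx]
    w = np.zeros(X.shape[1])
    for _ in range(rounds):
        w = svm_train(Xc, yc)                 # 1. w := w*(C)
        keep = yc * (Xc @ w) <= 1.0           # 2. shrink: C := M(w, C)
        Xc, yc = Xc[keep], yc[keep]
        cand = y * (X @ w) <= 1.0             # 3. grow from M(w, D) ...
        room = max(0, L - len(yc))            # ... up to the memory limit L
        Xc = np.vstack([Xc, X[cand][:room]])
        yc = np.concatenate([yc, y[cand][:room]])
    return w
```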
Figure 3. The image on the left shows the optimization of the latent variables for a positive example. The dotted box is the bounding box label provided in the PASCAL training set. The large solid box shows the placement of the detection window while the smaller solid boxes show the placements of the parts. The image on the right shows a hard-negative example.

4. Results

We evaluated our system using the PASCAL VOC 2006 and 2007 comp3 challenge datasets and protocol. We refer to [7, 8] for details, but emphasize that both challenges are widely acknowledged as difficult testbeds for object detection. Each dataset contains several thousand images of real-world scenes. The datasets specify ground-truth bounding boxes for several object classes, and a detection is considered correct when it overlaps more than 50% with a ground-truth bounding box. One scores a system by the average precision (AP) of its precision-recall curve across a test set.

Recent work in pedestrian detection has tended to report detection rates versus false positives per window, measured with cropped positive examples and negative images without objects of interest. These scores are tied to the resolution of the scanning window search and ignore effects of non-maximum suppression, making it difficult to compare different systems. We believe the PASCAL scoring method gives a more reliable measure of performance.

The 2007 challenge has 20 object categories. We entered a preliminary version of our system in the official competition, and obtained the best score in 6 categories. Our current system obtains the highest score in 10 categories, and the second highest score in 6 categories. Table 1 summarizes the results.

Our system performs well on rigid objects such as cars and sofas as well as highly deformable objects such as persons and horses. We also note that our system is successful when given a large or small amount of training data. There are roughly 4700 positive training examples in the person category but only 250 in the sofa category. Figure 4 shows some of the models we learned. Figure 5 shows some example detections.

               aero bike bird boat bottle bus  car  cat  chair cow  table dog  horse mbike person plant sheep sofa train tv
Our rank        3    1    2    1    1     2    2    4    1     1    1     4    2     2     1      1     2     1    4     1
Our score      .180 .411 .092 .098 .249  .349 .396 .110 .155  .165 .110  .062 .301  .337  .267   .140  .141  .156 .206  .336
INRIA Normal   .092 .246 .012 .002 .068  .197 .265 .018 .097  .039 .017  .016 .225  .153  .121   .093  .002  .102 .157  .242
INRIA Plus     .136 .287 .041 .025 .077  .279 .294 .132 .106  .127 .067  .071 .335  .249  .092   .072  .011  .092 .242  .275
MPI Center     .060 .110 .028 .031 .000  .164 .172 .208 .002  .044 .049  .141 .198  .170  .091   .004  .091  .034 .237  .051
MPI ESSOL      .152 .157 .098 .016 .001  .186 .120 .240 .007  .061 .098  .162 .034  .208  .117   .002  .046  .147 .110  .054
TKK            .186 .078 .043 .072 .002  .116 .184 .050 .028  .100 .086  .126 .186  .135  .061   .019  .036  .058 .067  .090

Darmstadt, IRISA and Oxford entered only subsets of the classes; their scores, in class order but with the class assignment not recoverable here, were: Darmstadt .301; IRISA .281 .318 .026 .097 .119 .289 .227 .221 .175 .253; Oxford .262 .409 .393 .432 .375 .334.
Table 1. PASCAL VOC 2007 results. Average precision scores of our system and other systems that entered the competition [7]. Empty boxes indicate that a method was not tested in the corresponding class. Our current system ranks first in 10 out of 20 classes. A preliminary version of our system ranked first in 6 classes in the official competition.

Figure 4. Some models learned from the PASCAL VOC 2007 dataset (bottle, car, bicycle, sofa). We show the total energy in each orientation of the HOG cells in the root and part filters, with the part filters placed at the center of the allowable displacements. We also show the spatial model for each part, where bright values represent "cheap" placements, and dark values represent "expensive" placements.

We evaluated different components of our system on the longer-established 2006 person dataset. The top AP score in the PASCAL competition was .16, obtained using a rigid template model of HOG features [5]. The best previous result of .19 adds a segmentation-based verification step [20]. Figure 6 summarizes the performance of several models we trained. Our root-only model is equivalent to the model from [5] and it scores slightly higher at .18. Performance jumps to .24 when the model is trained with a LSVM that selects a latent position and scale for each positive example. This suggests LSVMs are useful even for rigid templates because they allow for self-adjustment of the detection window in the training examples. Adding deformable parts increases performance to .34 AP, a factor of two above the best previous score. Finally, we trained a model with parts but no root filter and obtained .29 AP. This illustrates the advantage of using a multiscale representation.

We also investigated the effect of the spatial model and allowable deformations on the 2006 person dataset. Recall that s_i is the allowable displacement of a part, measured in HOG cells. We trained a rigid model with high-resolution parts by setting s_i to 0. This model outperforms the root-only system by .27 to .24. If we increase the amount of allowable displacements without using a deformation cost, we start to approach a bag-of-features. Performance peaks at s_i = 1, suggesting it is useful to constrain the part displacements. The optimal strategy allows for larger displacements while using an explicit deformation cost.

Figure 5. Some results from the PASCAL 2007 dataset. Each row shows detections using a model for a specific class (Person, Bottle, Car, Sofa, Bicycle, Horse). The first three columns show correct detections while the last column shows false positives. Our system is able to detect objects over a wide range of scales (such as the cars) and poses (such as the horses). The system can also detect partially occluded objects such as a person behind a bush. Note how the false detections are often quite reasonable, for example detecting a bus with the car model, a bicycle sign with the bicycle model, or a dog with the horse model. In general the part filters represent meaningful object parts that are well localized in each detection, such as the head in the person model.

Figure 6. Evaluation of our system on the PASCAL VOC 2006 person dataset. Root uses only a root filter and no latent placement of the detection windows on positive examples. Root+Latent uses a root filter with latent placement of the detection windows.
Parts+Latent is a part-based system with latent detection windows but no root filter. Root+Parts+Latent includes both root and part filters, and latent placement of the detection windows.

The following table shows AP as a function of freely allowable deformation in the first three columns. The last column gives the performance when using a quadratic deformation cost and an allowable displacement of 2 HOG cells.

s_i    0     1     2     3     2 + quadratic cost
AP    .27   .33   .31   .31   .34

5. Discussion

We introduced a general framework for training SVMs with latent structure. We used it to build a recognition system based on multiscale, deformable models. Experimental results on difficult benchmark data suggest our system is the current state-of-the-art in object detection.

LSVMs allow for exploration of additional latent structure for recognition. One can consider deeper part hierarchies (parts with parts), mixture models (frontal vs. side cars), and three-dimensional pose. We would like to train and detect multiple classes together using a shared vocabulary of parts (perhaps visual words). We also plan to use A* search [11] to efficiently search over latent parameters during detection.

References

[1] Y. Amit and A. Trouve. POP: Patchwork of parts models for object recognition. IJCV, 75(2):267-282, November 2007.
[2] M. Burl, M. Weber, and P. Perona. A probabilistic approach to object recognition using local photometry and global geometry. In ECCV, pages II:628-641, 1998.
[3] D. Crandall, P. Felzenszwalb, and D. Huttenlocher. Spatial priors for part-based recognition using statistical models. In CVPR, pages 10-17, 2005.
[4] D. Crandall and D. Huttenlocher. Weakly supervised learning of part-based spatial models for visual object recognition. In ECCV, pages I:16-29, 2006.
[5] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, pages I:886-893, 2005.
[6] B. Epshtein and S. Ullman. Semantic hierarchies for recognizing objects and parts. In CVPR, 2007.
[7] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2007 (VOC2007) Results.
[8] M. Everingham, A. Zisserman, C. K. I. Williams, and L. Van Gool. The PASCAL Visual Object Classes Challenge 2006 (VOC2006) Results.
[9] P. Felzenszwalb and D. Huttenlocher. Distance transforms of sampled functions. Cornell Computing and Information Science Technical Report TR2004-1963, September 2004.
[10] P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. IJCV, 61(1), 2005.
[11] P. Felzenszwalb and D. McAllester. The generalized A* architecture. JAIR, 29:153-190, 2007.
[12] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In CVPR, 2003.
[13] M. Fischler and R. Elschlager. The representation and matching of pictorial structures. IEEE Transactions on Computers, 22(1):67-92, January 1973.
[14] A. Holub and P. Perona. A discriminative framework for modelling object classes. In CVPR, pages I:664-671, 2005.
[15] S. Ioffe and D. Forsyth. Probabilistic methods for finding people. IJCV, 43(1):45-68, June 2001.
[16] Y. Jin and S. Geman. Context and hierarchy in a probabilistic image model. In CVPR, pages II:2145-2152, 2006.
[17] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press, 1999.
[18] Y. LeCun, S. Chopra, R. Hadsell, R. Marc'Aurelio, and F. Huang. A tutorial on energy-based learning. In G. Bakir, T. Hofman, B. Schölkopf, A. Smola, and B. Taskar, editors, Predicting Structured Data. MIT Press, 2006.
[19] A. Quattoni, S. Wang, L. Morency, M. Collins, and T. Darrell. Hidden conditional random fields. PAMI, 29(10):1848-1852, October 2007.
[20] D. Ramanan. Using segmentation to verify object hypotheses. In CVPR, pages 1-8, 2007.
[21] D. Ramanan and C. Sminchisescu. Training deformable models for localization. In CVPR, pages I:206-213, 2006.
[22] H. Schneiderman and T. Kanade. Object detection using the statistics of parts. IJCV, 56(3):151-177, February 2004.
[23] J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: A comprehensive study. IJCV, 73(2):213-238, June 2007.
What "non-significant Kruskal-Wallis, ns p-value" means. This phrasing describes the outcome of a Kruskal-Wallis test. Point by point:

1. Kruskal-Wallis test: a non-parametric statistical method for testing whether several independent samples differ in their underlying distributions. It does not require the data to follow any particular distributional shape and can be applied to ordinal (rank) data.

2. Non-significant: after running the Kruskal-Wallis test, insufficient evidence was found to reject the null hypothesis. The null hypothesis is that the samples' underlying distributions are the same, i.e. there is no difference between groups. "Non-significant" therefore means we cannot conclude that the samples differ.

3. p-value: the measure used to judge statistical significance. It is the probability, computed under the assumption that the null hypothesis is true, of observing the current result or one more extreme. Conventionally, when p is at or below the significance level (commonly 0.05 or 0.01), the null hypothesis is rejected and a significant difference is declared.

4. ns: shorthand for "not significant", a compact way of reporting the p-value outcome.

In summary, "non-significant Kruskal-Wallis, ns p-value" means the test found no significant difference among the samples, so the null hypothesis is not rejected, and the p-value is reported as "ns". This may indicate that the underlying distributions are similar, or that the observed differences are attributable to chance. Note that this is only a description of the statistical result; its interpretation should take the study context and the characteristics of the data into account.
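As a concrete illustration of how such a result is produced and reported (Python with SciPy; the sample data below are invented and deliberately chosen to be nearly identical):

```python
from scipy.stats import kruskal

# Three independent samples whose distributions barely differ:
g1 = [3.1, 2.8, 3.4, 3.0, 2.9]
g2 = [3.0, 3.2, 2.7, 3.3, 3.1]
g3 = [2.9, 3.1, 3.0, 3.2, 2.8]

H, p = kruskal(g1, g2, g3)
print(f"H = {H:.3f}, p = {p:.3f}")
alpha = 0.05
print("ns" if p > alpha else "significant")   # here: ns, H0 not rejected
```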
Ordinary differential equation

In mathematics, an ordinary differential equation (or ODE) is a relation that contains functions of only one independent variable, and one or more of their derivatives with respect to that variable.

A simple example is Newton's second law of motion, which leads to the differential equation

m d²x(t)/dt² = F(x(t)),

for the motion of a particle of constant mass m. In general, the force F depends upon the position x(t) of the particle at time t, and thus the unknown function x(t) appears on both sides of the differential equation, as is indicated in the notation F(x(t)).

Ordinary differential equations are distinguished from partial differential equations, which involve partial derivatives of functions of several variables.

Ordinary differential equations arise in many different contexts including geometry, mechanics, astronomy and population modelling. Many famous mathematicians have studied differential equations and contributed to the field, including Newton, Leibniz, the Bernoulli family, Riccati, Clairaut, d'Alembert and Euler.

Much study has been devoted to the solution of ordinary differential equations. In the case where the equation is linear, it can be solved by analytical methods. Unfortunately, most of the interesting differential equations are non-linear and, with a few exceptions, cannot be solved exactly. Approximate solutions are arrived at using computer approximations (see numerical ordinary differential equations).

[Figure: The trajectory of a projectile launched from a cannon follows a curve determined by an ordinary differential equation that is derived from Newton's second law.]

Existence and uniqueness of solutions

There are several theorems that establish existence and uniqueness of solutions to initial value problems involving ODEs, both locally and globally. See the Picard-Lindelöf theorem for a brief discussion of this issue.

Definitions

Ordinary differential equation. Let y be an unknown function of x, with y^(n) its n-th derivative, and let F be a given function F: ℝ^(n+1) → ℝ. Then an equation of the form

y^(n) = F(x, y, y', ..., y^(n−1))

is called an ordinary differential equation (ODE) of order n. If y is an unknown vector-valued function y: ℝ → ℝ^m, it is called a system of ordinary differential equations of dimension m (in this case, F: ℝ^(mn+1) → ℝ^m).

More generally, an implicit ordinary differential equation of order n has the form

F(x, y, y', y'', ..., y^(n)) = 0,

where F: ℝ^(n+2) → ℝ depends on y^(n). To distinguish the above case from this one, an equation of the form y^(n) = F(x, y, y', ..., y^(n−1)) is called an explicit differential equation.

A differential equation not depending explicitly on x is called autonomous.

A differential equation is said to be linear if F can be written as a linear combination of the derivatives of y together with a constant term, all possibly depending on x:

y^(n) = Σ_{i=0}^{n−1} a_i(x) y^(i) + r(x),

with a_i(x) and r(x) continuous functions in x. The function r(x) is called the source term; if r(x) = 0 then the linear differential equation is called homogeneous, otherwise it is called non-homogeneous or inhomogeneous.

Solutions. Given a differential equation F(x, y, y', ..., y^(n)) = 0, a function u: I ⊂ ℝ → ℝ is called a solution or integral curve for F if u is n-times differentiable on I and

F(x, u, u', ..., u^(n)) = 0 for all x in I.

Given two solutions u: J ⊂ ℝ → ℝ and v: I ⊂ ℝ → ℝ, u is called an extension of v if I ⊂ J and u coincides with v on I. A solution which has no extension is called a global solution.

A general solution of an n-th order equation is a solution containing n arbitrary variables, corresponding to n constants of integration. A particular solution is derived from the general solution by setting the constants to particular values, often chosen to fulfill set initial conditions or boundary conditions.
A singular solution is a solution that cannot be derived from the general solution.

Reduction to a first order system

Any differential equation of order n can be written as a system of n first-order differential equations. Given an explicit ordinary differential equation of order n (and dimension 1),

y^(n) = F(x, y, y', ..., y^(n−1)),

define a new family of unknown functions y_i := y^(i−1) for i from 1 to n. The original differential equation can be rewritten as the system of differential equations with order 1 and dimension n given by

y_1' = y_2,  y_2' = y_3,  ...,  y_{n−1}' = y_n,  y_n' = F(x, y_1, ..., y_n),

which can be written concisely in vector notation as y' = F(x, y) with y = (y_1, ..., y_n). (A short numerical sketch of this reduction is given at the end of this article.)

Linear ordinary differential equations

A well understood particular class of differential equations is linear differential equations. We can always reduce an explicit linear differential equation of any order to a system of differential equations of order 1, which we can write concisely using matrix and vector notation as

y'(x) = A(x) y(x) + b(x),

with a matrix A(x) of coefficient functions and a vector source term b(x).

Homogeneous equations

The set of solutions for a system of homogeneous linear differential equations of order 1 and dimension n, y'(x) = A(x) y(x), forms an n-dimensional vector space. Given a basis y_1(x), ..., y_n(x) for this vector space, which is called a fundamental system, every solution can be written as y(x) = Σ_i c_i y_i(x). The n × n matrix Y(x) = (y_1(x), ..., y_n(x)) is called a fundamental matrix. In general there is no method to explicitly construct a fundamental system, but if one solution is known, d'Alembert reduction can be used to reduce the dimension of the differential equation by one.

Nonhomogeneous equations

The set of solutions for a system of inhomogeneous linear differential equations of order 1 and dimension n, y'(x) = A(x) y(x) + b(x), can be constructed by finding the fundamental system of the corresponding homogeneous equation and one particular solution to the inhomogeneous equation. Every solution to the nonhomogeneous equation can then be written as y(x) = y_p(x) + Σ_i c_i y_i(x). A particular solution to the nonhomogeneous equation can be found by the method of undetermined coefficients or the method of variation of parameters.

Concerning second order linear ordinary differential equations, it is well known that if y_1 is a nonzero solution of the homogeneous equation y'' + P(x) y' + Q(x) y = 0, then a second, independent solution is given by d'Alembert reduction,

y_2 = y_1 ∫ (e^(−∫P dx) / y_1²) dx,

and a particular solution of y'' + P(x) y' + Q(x) y = R(x) is then given by variation of parameters,

y_p = −y_1 ∫ (y_2 R / W) dx + y_2 ∫ (y_1 R / W) dx,

where W = y_1 y_2' − y_1' y_2 is the Wronskian of y_1 and y_2. [1]

Fundamental systems for homogeneous equations with constant coefficients

If a system of homogeneous linear differential equations has constant coefficients, y'(x) = A y(x), then we can explicitly construct a fundamental system. The system can be written as a matrix differential equation Y' = A Y, with solution the matrix exponential Y(x) = e^(Ax), which is a fundamental matrix for the original differential equation. To explicitly calculate this expression we first transform A into Jordan normal form, A = C J C^(−1), and then evaluate the Jordan blocks J_i of J separately, as e^(J_i x).

Theories of ODEs

Singular solutions

The theory of singular solutions of ordinary and partial differential equations was a subject of research from the time of Leibniz, but only since the middle of the nineteenth century did it receive special attention. A valuable but little-known work on the subject is that of Houtain (1854). Darboux (starting in 1873) was a leader in the theory, and in the geometric interpretation of these solutions he opened a field which was worked by various writers, notably Casorati and Cayley. To the latter is due (1872) the theory of singular solutions of differential equations of the first order as accepted circa 1900.

Reduction to quadratures

The primitive attempt in dealing with differential equations had in view a reduction to quadratures.
As it had been the hope of eighteenth-century algebraists to find a method for solving the general equation of the nth degree, so it was the hope of analysts to find a general method for integrating any differential equation. Gauss (1799) showed, however, that the differential equation meets its limitations very soon unless complex numbers are introduced. Hence analysts began to substitute the study of functions, thus opening a new and fertile field. Cauchy was the first to appreciate the importance of this view. Thereafter the real question was to be, not whether a solution is possible by means of known functions or their integrals, but whether a given differential equation suffices for the definition of a function of the independent variable or variables, and if so, what are the characteristic properties of this function.

Fuchsian theory

Two memoirs by Fuchs (Crelle, 1866, 1868) inspired a novel approach, subsequently elaborated by Thomé and Frobenius. Collet was a prominent contributor beginning in 1869, although his method for integrating a non-linear system was communicated to Bertrand in 1868. Clebsch (1873) attacked the theory along lines parallel to those followed in his theory of Abelian integrals. As the latter can be classified according to the properties of the fundamental curve which remains unchanged under a rational transformation, so Clebsch proposed to classify the transcendent functions defined by the differential equations according to the invariant properties of the corresponding surfaces f = 0 under rational one-to-one transformations.

Lie's theory

From 1870 Sophus Lie's work put the theory of differential equations on a more satisfactory foundation. He showed that the integration theories of the older mathematicians can, by the introduction of what are now called Lie groups, be referred to a common source; and that ordinary differential equations which admit the same infinitesimal transformations present comparable difficulties of integration. He also emphasized the subject of transformations of contact.

A general approach to solving DEs uses the symmetry property of differential equations, the continuous infinitesimal transformations of solutions to solutions (Lie theory).
Continuous group theory, Lie algebras and differential geometry are used to understand the structure of linear and nonlinear (partial) differential equations: to generate integrable equations, to find their Lax pairs, recursion operators and Bäcklund transforms, and finally to find exact analytic solutions of the DE. Symmetry methods have been recognized as a way to study differential equations arising in mathematics, physics, engineering, and many other disciplines.

Sturm-Liouville theory

Sturm-Liouville theory is a theory of eigenvalues and eigenfunctions of linear operators defined in terms of second-order homogeneous linear equations, and is useful in the analysis of certain partial differential equations.

Software for ODE solving

• FuncDesigner (free license: BSD; uses automatic differentiation; can also be used online via a Sage server) [2]
• VisSim [3], a visual language for differential equation solving
• Mathematical Assistant on Web [4], online solving of first order (linear and separable) and second order linear differential equations (with constant coefficients), including intermediate steps in the solution
• DotNumerics: Ordinary Differential Equations for C# [5], initial-value problems for nonstiff and stiff ordinary differential equations (explicit Runge-Kutta, implicit Runge-Kutta, Gear's BDF and Adams-Moulton)
• Online experiments with JSXGraph [6]

References

[1] Polyanin, Andrei D.; Valentin F. Zaitsev (2003). Handbook of Exact Solutions for Ordinary Differential Equations, 2nd ed. Chapman & Hall/CRC. ISBN 1-58488-297-2.
[2] /welcome
[3]
[4] http://user.mendelu.cz/marik/maw/index.php?lang=en&form=ode
[5] /NumericalLibraries/DifferentialEquations/
[6] http://jsxgraph.uni-bayreuth.de/wiki/index.php/Differential_equations

External links

• Differential Equations (/Science/Math/Differential_Equations//) at the Open Directory Project (includes a list of software for solving differential equations).
• EqWorld: The World of Mathematical Equations (http://eqworld.ipmnet.ru/index.htm), containing a list of ordinary differential equations with their solutions.
• Online Notes / Differential Equations (/classes/de/de.aspx) by Paul Dawkins, Lamar University.
• Differential Equations (/diffeq/diffeq.html), S.O.S. Mathematics.
• A primer on analytical solution of differential equations (/mws/gen/08ode/mws_gen_ode_bck_primer.pdf) from the Holistic Numerical Methods Institute, University of South Florida.
• Ordinary Differential Equations and Dynamical Systems (http://www.mat.univie.ac.at/~gerald/ftp/book-ode/), lecture notes by Gerald Teschl.
• Notes on Diffy Qs: Differential Equations for Engineers (/diffyqs/), an introductory textbook on differential equations by Jiri Lebl of UIUC.
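Returning to the order reduction described in the "Reduction to a first order system" section above, here is a minimal numerical illustration (Python with SciPy; the harmonic-oscillator force law and the constants are our toy choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Newton's second law m*x'' = F(x), reduced to the first-order system
# y1' = y2, y2' = F(y1)/m, here with the toy force law F(x) = -k*x.
m, k = 1.0, 4.0

def rhs(t, y):
    return [y[1], -k * y[0] / m]

sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0], rtol=1e-8)
print(sol.y[0, -1])   # x(10); for this oscillator the exact value is cos(20)
```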
arXiv:hep-th/9804144v2, 22 Apr 1998
RU-98-16, April 1998

Non-Existence of Local Integrals of Motion in the Multi-Deformed Ising Model

Pedro D. Fonseca
Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08855-0849

Abstract

We confirm the non-integrability of the multi-deformed Ising model, an already expected result. After deforming with the energy operator φ_{1,3} we use the Majorana free fermionic representation for the massive theory to show that, besides the trivial one, no local integrals of motion can be built in the theory arising from perturbing with both energy and spin operators.

1. Introduction

After Zamolodchikov's work [1]-[3], great advances have been achieved in the understanding of field theories, and in particular of integrable field theories (IFT), arising from the perturbation of certain conformal field theories (CFT). The simplest example is certainly given by the perturbed Ising model

A = A_CFT + τ ∫ ε(z, z̄) d²z + h ∫ σ(z, z̄) d²z,   (1)

where A_CFT stands for the action of the two-dimensional c = 1/2 CFT and σ (spin) and ε (energy) are, respectively, the relevant spinless primary fields φ_{1,2} and φ_{1,3} with conformal dimensions (1/16, 1/16) and (1/2, 1/2). From dimensional analysis, we see that the coupling constants h ∼ (length)^{−15/8} and τ ∼ (length)^{−1} have conformal dimensions (15/16, 15/16) and (1/2, 1/2), respectively.

When considered separately, these perturbations were studied in [1]-[5], and are known to yield IFTs. Namely, in the case τ ≠ 0, h = 0 the corresponding perturbation has a realization as an Affine Toda Field Theory (ATFT) based on SU(2) (i.e. a sine-Gordon system) with an infinite set of local integrals of motion (IM) of spin s = 1, 3, 5, 7, 9, ... (the Coxeter exponents modulo the dual Coxeter number of SU(2)), while the case τ = 0, h ≠ 0 also has an infinite set of local IM but with spin s = 1, 7, 11, 13, 17, 19, 23, 29 (mod 30) (the Coxeter exponents modulo the dual Coxeter number of E8), related to the fact that now it has a realization as an ATFT based on E8.

Finally (the case that will interest us here), when both perturbations are turned on (see [7] for an extensive analysis), although there is the possibility of having conserved charges of spin s = 1, 7, 11, ... (mod 30), in [6] it is shown that no low spin IM exist, leading to the belief that (1) is no longer integrable, except for the above h = 0 or τ = 0 particular cases. In fact, a proof for it can be obtained by starting with the finite τ theory (with scattering matrix S = −1) and then, using perturbation theory in h to compute the corresponding S-matrix elements, showing that there exists particle production for h ≠ 0 (see, for instance, [6] and [7]).

In this letter we confirm this result by explicitly verifying the absence of local IM, with the argument being as follows. Given a minimal model, let T_{s+1} be some descendant state of the identity operator with spin s+1 satisfying the conservation law ∂̄T_{s+1} = 0 in the CFT, and let φ_{kl} represent some perturbing relevant spinless operator with operator product expansion (OPE)

T_{s+1}(z) φ_{kl}(w, w̄) ∼ ... + A^{(s−1)}_{kl}/(z − w) + ...   (2)

From the deformed Ward identities we then have

∂̄T_{s+1} = λ B^{(s)}_{kl} − ∂(λ A^{(s−1)}_{kl}) + ∂(...),   (3)

where λ is the perturbation coupling, so that there exists a function Θ_{s−1} such that

∂̄T_{s+1} = ∂Θ_{s−1}   (4)

only if B^{(s)}_{kl} can be written as a partial derivative of some local field, i.e. B^{(s)}_{kl} = ∂(...). In such case, we can build the conserved charge of spin s,

P_s = ∮ T_{s+1} dz + ∮ Θ_{s−1} dz̄,   (5)

plus its anti-holomorphic partner, which we will not write down in what follows. We will denote the set formed by the T_{s+1} in (5) for all possible values of s by Λ_{kl}. Whenever Λ_{kl} has an infinite number of elements the φ_{kl} perturbed theory is said to be integrable.
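As a quick arithmetic check of the spin sets quoted in the introduction: the Coxeter exponents of E8 are exactly the residues coprime to its Coxeter number 30, just as those of SU(2) reduce to the odd integers. A minimal sketch (Python; this aside is illustrative and not part of the original letter):

```python
from math import gcd

# phi_{1,2} deformation (E8, Coxeter number h = 30): the allowed spins are
# the Coxeter exponents of E8, i.e. exactly the residues coprime to 30.
print([s for s in range(1, 30) if gcd(s, 30) == 1])
# -> [1, 7, 11, 13, 17, 19, 23, 29]

# phi_{1,3} deformation (SU(2), h = 2): the exponents mod 2 give odd spins.
print([s for s in range(1, 11) if s % 2 == 1])   # -> [1, 3, 5, 7, 9]
```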
In the case of the unitary minimal models deformed by φ_{1,3}, Λ_{1,3} can be derived from the sine-Gordon model, with the first elements given by [4], [5]

T_2 = T,   T_4 = (TT),   T_6 = (T(TT)) − ((c+2)/12)(∂T ∂T),   (6)

where the normal-ordered product of two local fields is defined by the contour integral (AB)(w) = ∮ dz/(2πi) A(z)B(w)/(z − w).

In the Majorana free-fermion representation, the off-critical Ising model is described by the action

A = (1/2π) ∫ d²z [ ψ ∂̄ψ + ψ̄ ∂ψ̄ + m ψψ̄ ],   (9)

where m ∝ (T − T_C). This theory has c = 1/2, OPE ψ(z)ψ(w) ∼ −1/(z − w), and stress-energy tensor T(z) = −(1/2)(ψ ∂ψ)(z). With the mode expansion

ψ(z) = Σ_{n ∈ Z+1/2} a_n z^{−n−1/2},   (10)

this becomes

T(z) = (1/2) Σ_{n,m} (m + 1/2) :a_{n−m} a_m: z^{−n−2},   (11)

where : ... : denotes the usual mode normal ordering. Without loss of generality, T_{s+1} ∈ Λ_{1,3} (in (6)) can be written as

T_{2k+2}(z) = (−1)^{k+1} (1/2) (ψ ∂^{2k+1} ψ)(z).   (12)

Consider the spin state |1/16⟩ = σ(0,0)|0⟩. Using (10), (12) and the fact that a_n σ = 0 for n > 0, we get the OPE T_{2k+2}(z) σ(0,0) = ... + B^{(2k+1)}_{1,2}/z + ..., where

B^{(2k+1)}_{1,2} = − Σ_{n=−2k−1}^{−k−1} c^{(k)}_n (a_n a_{−2k−1−n} σ),   (13)

with coefficients

c^{(k)}_n = (n + 1/2)(n + 3/2) ⋯ (n + 2k + 1/2).   (14)

For B^{(2k+1)}_{1,2} to give rise to a local IM it must be a total derivative, i.e. of the form

B^{(2k+1)}_{1,2} = L_{−1} O^{(2k)}   (15)

for some local field O^{(2k)}, where L_{−1}, which from (11) can be written as L_{−1} = (1/2) Σ_m (m + 1/2) :a_{−1−m} a_m:, generates ∂. The most general candidate quadratic in the fermion modes is

O^{(2k)} = Σ_{n=−2k}^{−k} d^{(k)}_n (a_n a_{−2k−n} σ),   (16)

and throughout we use the anticommutation relations {a_n, a_m} = δ_{n+m,0}.   (17)

For k = 0, (15) is verified for d^{(0)}_0 = 4 in (16) (using (17), B^{(1)}_{1,2} is directly seen to be L_{−1}σ), implying that indeed an s = 1 IM exists. For k = 1, the single coefficient d^{(1)}_{−2} in O^{(2)} must satisfy the two independent equations that (15) imposes on the structures (a_{−3}a_0)σ and (a_{−2}a_{−1})σ, easily seen to be inconsistent. For k = 2, B^{(5)}_{1,2} = −(1/32)(63 a_{−5}a_0 − 7 a_{−4}a_{−1} + 3 a_{−3}a_{−2})σ, while L_{−1}O^{(4)} contains the combinations (9/2) d^{(2)}_{−4} and (1/2) d^{(2)}_{−3} multiplying the same structures, so that (15) supplies three equations for d^{(2)}_{−4} and d^{(2)}_{−3}, again easily seen to be inconsistent. This agrees with the absence of local IM of spin s = 3 and s = 5 in the Ising model perturbed by a magnetic field.

Finally, for k ≥ 3 some simple calculations yield

∂O^{(2k)} = L_{−1} O^{(2k)} = − Σ_{n=−2k−1}^{−k−1} e^{(k)}_n (a_n a_{−2k−1−n} σ) + (a second term of different structure),   (18)

with

e^{(k)}_n = (n + 1/2) d^{(k)}_{n+1} (1 − δ_{n,−k−1}) − ((n + 2k + 1)/8) d^{(k)}_{−2k} δ_{n,−2k}.   (19)

The second term in (18) is obviously incompatible with the desired structure in (13), implying that O^{(2k)} in (15) does not exist. Expressions other than the quadratic one chosen in (16) would obviously have more incompatible terms, thus ending our proof.

Acknowledgments

The author is most thankful to A. B. Zamolodchikov for having proposed this problem, reviewed the manuscript and for all the enlightening and valuable conversations. This work was supported by JNICT - PRAXIS XXI (Portugal) under grant BD 9102/96.

References

[1] A. B. Zamolodchikov, JETP Lett. 46 (1987) 160.
[2] A. B. Zamolodchikov, Advanced Studies in Pure Math. 19 (1989) 641.
[3] A. B. Zamolodchikov, Int. J. Mod. Phys. A4 (1989) 4235.
[4] R. Sasaki, I. Yamanaka, Advanced Studies in Pure Math. 16 (1988) 271.
[5] T. Eguchi, S. K. Yang, Phys. Lett. B224 (1989) 373.
[6] G. Mussardo, Phys. Rep. 218, 5&6 (1992) 215.
[7] G. Delfino, G. Mussardo, P. Simonetti, Nucl. Phys. B473 (1996) 469.