
Unifying Statistical Texture Classification Frameworks

Manik Varma and Andrew Zisserman

Dept. of Engineering Science

University of Oxford

Oxford, OX1 3PJ, UK

{manik,az}@robots.ox.ac.uk

Abstract

The objective of this paper is to examine statistical approaches to the classification of textured materials from a single image obtained under unknown viewpoint and illumination. The approaches investigated here are based on the joint probability distribution of filter responses.

We review previous work based on this formulation and make two observations. First, we show that there is a correspondence between the two common representations of filter outputs: textons and binned histograms. Second, we show that two classification methodologies, nearest neighbour matching and Bayesian classification, are equivalent for particular choices of the distance measure. We describe the pros and cons of these alternative representations and distance measures, and illustrate the discussion by classifying all the materials in the Columbia-Utrecht (CUReT) texture database.

These equivalences allow us to perform direct comparisons between the texton frequency matching framework, best exemplified by the classifiers of Leung and Malik [IJCV 2001], Cula and Dana [CVPR 2001], and Varma and Zisserman [ECCV 2002], and the Bayesian framework most closely represented by the work of Konishi and Yuille [CVPR 2000].

Key words: Texture, Classification, PDF representation, Textons

1 Introduction

In this paper, we examine two of the most successful frameworks in which the problem of texture classification has been attempted: the texton frequency comparison framework, as exemplified by the classifiers of [3,8,14], and the Bayesian framework, most closely represented by [7]. While the two frameworks appear unrelated, we draw out the similarities between them, and show that the two can be made equivalent under certain choices of representation and distance measure.

The classification problem being tackled is the following: given a single image of a textured material obtained under unknown viewing and illumination conditions, classify it into one of a set of pre-learnt classes. Classifying textures from a single image under such general conditions and without any a priori information is a very demanding task.

Fig. 1. The change in imaged appearance of the same texture (Plaster B, texture #30 in the Columbia-Utrecht database [5]) with variation in imaging conditions. Top row: constant viewing angle and varying illumination. Bottom row: constant illumination and varying viewing angle. There is a considerable difference in appearance across the images.

What makes the problem so hard is that, unlike other forms of classification where the objects being categorised have a definite structure which can be captured and represented, most textures have large stochastic variations which make them difficult to model. Furthermore, textured materials often undergo a dramatic change in their imaged appearance with variations in illumination and camera pose (see figure 1). Dealing with this successfully is one of the main tasks of any classification algorithm. Another factor which comes into play is that, quite often, two materials photographed under very different imaging conditions can appear to be quite similar, as illustrated by figure 2. It is the combination of all these factors which makes the texture classification problem so challenging.

Nevertheless, weak classification algorithms which exploit the statistical nature of textures by characterising them as distributions of filter responses have shown much promise of late. The two types of algorithm in this category that have been particularly successful are (a) the Bayesian classifier based on the joint probability distribution of filter responses represented by a binned histogram [7], and (b) the nearest neighbour χ² distribution comparison classifier based on a texton frequency representation [3,8,14]. In this paper, we draw an equivalence between these two schemes which then allows a direct comparison of the performance of the two classification methodologies.

The success of Bayesian classification applied to filter responses was convincingly demonstrated by Konishi and Yuille [7]. Working on the Sowerby and San Francisco outdoor datasets, their aim was to classify image pixels into one of six texture classes. The empirical class conditional joint PDF of six filter responses for each texture was learnt from training images. It was represented as a histogram by quantising the filter responses into bins. Novel image pixels were then classified by computing their filter responses and using Bayes' decision rule.

Fig. 2. Small inter-class variations between textures present in the Columbia-Utrecht database. In the top row, the first and the fourth image are of the same texture while all the other images, even though they look similar, belong to different classes. Similarly, in the bottom row, the images appear similar and yet there are three different texture classes present.

However, for image regions, Konishi and Yuille only reported the Chernoff information (which is a measure of the asymptotic classification error rate) but did not perform any actual classification. In this paper, we take the final step and use Bayes' decision rule to classify entire images by making the same assumption as Schmid [11], i.e. that a region is a collection of statistically independent pixels, whose probability is therefore the product of the individual pixel probabilities.

In contrast, frequency comparison classifiers such as [3,8,14] learn distributions of texton frequencies from training images and then classify novel images by comparing their texton distribution to the learnt models. Different comparison methods may be used, such as the Bhattacharya metric [12], Earth Mover's distance [10], KL divergence and other entropy based measures [16], but the χ² significance test [9], in conjunction with a nearest neighbour rule, has become the default choice.

Leung and Malik [8] were amongst the first to seriously tackle the problem of classifying 3D textures and, in doing so, made an important innovation by giving an operational definition of a texton. They defined a 2D texton to be a cluster centre in filter response space. This not only enabled textons to be generated automatically from an image, but also opened up the possibility of a universal set of textons for all images. To compensate for 3D effects, they proposed 3D textons which were cluster centres of filter responses over a stack of 20 training images with representative viewpoint and lighting. They developed an algorithm capable of classifying a stack of 20 registered, novel images using 3D textons and applied it very successfully to the Columbia-Utrecht (CUReT) [5] database. Later, Cula and Dana [3] and Varma and Zisserman [14] showed that 2D textons could be used to classify single images without any loss of performance.

Classification performance is evaluated here also on image sets taken from the CUReT texture database. All 61 materials present in the database are included, and 92 images of each material are used, with only the most extreme viewpoints

being excluded (see [14] for details). The variety of textures in this database is illustrated in figure 3. The 92 images present for each texture class are partitioned into two disjoint sets. Images in the first (training) set are used for model learning, and classification accuracy is assessed on the 46 images for each texture class in the second (test) set.

The materials in the CUReT database are examples of 3D textures and exhibit a marked variation in appearance with changes in viewing and illumination conditions [2-4,8,14,17]. The difficulty of single image classification is highlighted by figure 4, which illustrates how drastically the appearance of a texture can change with varying imaging conditions.

Modelling such textures by a single probability distribution of filter responses may fail in these situations. The solution adopted in this paper is to represent each texture class by multiple models which characterise the different appearances of the texture with variation in imaging conditions. These models are generated from the various training images for each texture class, and essentially this means that each texture class is represented by a set of probability distributions implicitly conditioned on viewpoint and illumination. In our experiments, each training image is used to generate a model and thus there are 46 models per texture class. However, the number of models can be substantially reduced to, on average, 7-8 per class by the use of K-Medoid and Greedy algorithms as described in [14].

Fig. 3. Textures from the Columbia-Utrecht database. All images are converted to monochrome in this work, so colour is not used in discriminating different textures.

Fig. 4. Images of ribbed paper taken under different viewing and lighting conditions. The material has significant surface normal variation and consequently changes its appearance drastically with imaging conditions. Characterising such a texture by only one model, i.e. a single probability distribution of filter responses, will lead to poor classification results. Instead, it is better to represent a texture by multiple models generated under different viewing and illumination conditions.


The layout of the rest of the paper is as follows: in section 2, we outline a low dimensional representation of rotationally invariant filter responses which was first introduced in [14]. We also describe the two common semi-parametric representations, texton and binned histogram, of the joint PDF of filter responses and show how it is possible to convert one to the other. Then, in section 3, we present a comparison between the texton and bin representations using a distribution comparison classifier. Finally, we implement a Bayesian classifier using a texton representation in section 4 and contrast its performance to the distribution comparison classifier.

2 Filter Responses and their Representation

In this section, we first describe the filter bank that will be used to generate features, and then discuss two popular representations of filter responses, textons and binned histograms, between which we will draw a correspondence.

2.1 Filter bank

Traditionally, the filter banks employed for texture analysis have included a large number of filters, in keeping with the philosophy that many diverse features at multiple orientations and scales need to be extracted accurately to successfully classify textures. However, constructing and storing PDFs of filter responses in a high dimensional filter response space is computationally infeasible, and it is therefore necessary to limit the dimensionality of the filter response vector. Both these requirements can be met if multiple oriented filters are used but their outputs are combined to form a low dimensional, rotationally invariant response vector. A filter bank which does this is the Maximum Response (MR8) filter bank, which comprises 38 filters but only 8 filter responses (see figure 5). The filters consist of a Gaussian and a Laplacian of Gaussian (LoG), both with σ = 10 pixels (these filters have rotational symmetry), an edge filter at 3 scales (σx, σy) = {(1,3), (2,6), (4,12)} pixels and a bar filter at the same 3 scales. The latter two filters are oriented and occur at 6 orientations at each scale. The responses of the isotropic filters (Gaussian and LoG) are used directly, but the responses of the oriented filters (bar and edge) are, at each scale, "collapsed" by using only the maximum filter response across all orientations, thereby giving 8 rotationally invariant filter responses in all.

Fig. 5. The rotationally invariant MR8 filter bank consists of (in order, from top to bottom) an oriented edge filter at 6 orientations and 3 scales, a bar filter at the same set of orientations and scales, an isotropic Gaussian filter and a Laplacian of Gaussian filter. The responses of the isotropic filters are recorded directly. However, taking the maximal response of the anisotropic filters across all orientations results in 2 filter responses being recorded per scale, giving 8 filter responses in total. Essentially this is the maximum response of each row of the filter bank above.

Rotation invariance is desirable in that it leads to the correct classification of rotated versions of the textures present in the training set. Another motivation for using the MR8 filter bank is that, despite being rotationally invariant, its oriented filters should accurately pick up anisotropic features, whose relative angular information (as determined by the angle of maximum response) can still be recorded. Furthermore, we expect that more significant textons are generated when clustering in a low dimensional, rotationally invariant space.
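To make the construction concrete, the following is a minimal sketch, in Python with NumPy/SciPy, of how an MR8-style response vector might be computed. The kernel sizes, the filter normalisation and the helper names (gauss_1d, oriented_kernel, mr8_responses) are illustrative assumptions, not the exact implementation used in the paper.

```python
import numpy as np
from scipy.ndimage import convolve, rotate

def gauss_1d(sigma, order, size):
    """1-D Gaussian (order 0) or its first/second derivative (order 1/2)."""
    x = np.arange(size) - size // 2
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    g /= g.sum()
    if order == 1:
        g = -x / sigma ** 2 * g
    elif order == 2:
        g = (x ** 2 / sigma ** 4 - 1.0 / sigma ** 2) * g
    return g

def oriented_kernel(sigma_x, sigma_y, order_y, theta_deg, size=49):
    """Edge (order_y=1) or bar (order_y=2) kernel: a Gaussian along x and a
    Gaussian derivative along y, rotated to the requested orientation."""
    k = np.outer(gauss_1d(sigma_y, order_y, size), gauss_1d(sigma_x, 0, size))
    k = rotate(k, theta_deg, reshape=False)
    return k - k.mean()                     # keep the oriented filters zero-sum

def mr8_responses(img, scales=((1, 3), (2, 6), (4, 12)), n_orient=6):
    """Return an (H, W, 8) stack of MR8 responses for a grey-level image:
    6 'collapsed' anisotropic responses plus 2 isotropic responses."""
    out = []
    for order_y in (1, 2):                  # 1 -> edge filters, 2 -> bar filters
        for sx, sy in scales:
            responses = []
            for i in range(n_orient):
                k = oriented_kernel(sx, sy, order_y, 180.0 * i / n_orient)
                responses.append(convolve(img, k, mode='reflect'))
            # record only the maximum response across the 6 orientations
            out.append(np.max(responses, axis=0))
    size, sigma = 61, 10.0                  # isotropic Gaussian and LoG, sigma = 10
    g = np.outer(gauss_1d(sigma, 0, size), gauss_1d(sigma, 0, size))
    log = (np.outer(gauss_1d(sigma, 2, size), gauss_1d(sigma, 0, size)) +
           np.outer(gauss_1d(sigma, 0, size), gauss_1d(sigma, 2, size)))
    out.append(convolve(img, g, mode='reflect'))
    out.append(convolve(img, log, mode='reflect'))
    return np.stack(out, axis=-1)
```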

Both the Bayesian and the texton frequency comparison classifiers discussed in this paper are divided into two stages: learning and classification. In the learning stage, features are extracted from the training images by convolution with the MR8 filter bank. The choice of representation of the resultant filter response features determines the learnt statistical model for a given texture under particular imaging conditions and is described next.

2.2 Texton representation of filter responses

Each training image is convolved with the MR8 filter bank to generate a set of filter responses. These filter responses are then aggregated over various training images from the same texture class (as described in the following paragraph) and clustered. The resultant cluster centres form a dictionary of exemplar filter responses and are called textons. Given a texton dictionary, the first step in learning a model from a particular training image is labelling each of the image pixels with the texton that lies closest to it in filter response space. The (normalised) frequency histogram of pixel texton labellings then defines the model for the training image.

In the implementation in this paper, the filter responses of 13 randomly selected images per texture class (taken from the training set) are aggregated and clustered via the K-Means algorithm. The textons (cluster centres) learnt from each class are then combined into a single dictionary of size S. Various values of K are tried (K = 10, 20, ..., 50) and results are reported for each case. Thus, in each experiment, K textons are learnt from every texture class, resulting in a dictionary comprising a total of S = 61 × K textons. Hence, a model of a training image is an S-vector where each component is the proportion of pixels which are labelled with a particular texton.
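The texton learning and model building steps can be sketched as below, reusing the mr8_responses helper from the earlier sketch and scikit-learn's KMeans; the clustering parameters and function names are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.spatial.distance import cdist

def learn_texton_dictionary(images_per_class, K=40):
    """Cluster filter responses separately for each class and concatenate the
    K cluster centres per class into one dictionary of S = n_classes * K textons."""
    dictionary = []
    for images in images_per_class:            # e.g. 13 training images per class
        resp = np.vstack([mr8_responses(im).reshape(-1, 8) for im in images])
        km = KMeans(n_clusters=K, n_init=10).fit(resp)
        dictionary.append(km.cluster_centers_)
    return np.vstack(dictionary)

def texton_model(img, dictionary):
    """Label every pixel with its nearest texton in filter response space and
    return the normalised texton frequency histogram (the model S-vector)."""
    resp = mr8_responses(img).reshape(-1, 8)
    labels = cdist(resp, dictionary).argmin(axis=1)
    hist = np.bincount(labels, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()
```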

2.3 Histogram representation by binning

In this representation, the model corresponding to a given image is the joint probability distribution of the image's filter responses, obtained by quantising the responses into bins and normalising so that the sum over all bins is unity. It should be noted that the number of bins and their placement can be important parameters as they determine how crudely, or how well, the underlying probability distribution is approximated and whether the data is over-fitted or not.

As an implementation detail, the histogram is stored as a sparse matrix and the space it occupies is given by: number of non-empty bins × number of bytes required to store a bin value and its corresponding index. This is bounded above by the number of data points and compares favourably to a naive implementation which stores the full matrix in O(total number of bins) bytes, but where most of the bins are empty. For example, using this implementation for MR8 with 20 bins per dimension, we were able to store the PDFs of all the training images in less than a hundred megabytes, whereas the naive implementation would have taken over five hundred gigabytes. Also, it is efficient to store the histogram as a sparse matrix as the χ² statistic can be evaluated in O(number of non-empty bins) flops.
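A sparse bin representation along these lines might be sketched as follows; the dictionary-of-tuples storage, the assumed response range and the bin count are illustrative choices (in practice the filter responses would first be brought to a common range).

```python
import numpy as np
from collections import defaultdict

def binned_histogram(responses, n_bins=5, lo=-1.0, hi=1.0):
    """Joint PDF of 8-D filter responses as a sparse histogram:
    only non-empty bins are stored, keyed by their 8-D bin index."""
    idx = ((responses - lo) / (hi - lo) * n_bins).astype(int)
    idx = np.clip(idx, 0, n_bins - 1)
    hist = defaultdict(float)
    for key in map(tuple, idx):
        hist[key] += 1.0
    n = float(len(responses))
    return {k: v / n for k, v in hist.items()}

def chi_square_sparse(p, q):
    """Chi-square statistic evaluated only over the non-empty bins of p and q."""
    keys = set(p) | set(q)
    return sum((p.get(k, 0.0) - q.get(k, 0.0)) ** 2 /
               (p.get(k, 0.0) + q.get(k, 0.0)) for k in keys)
```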

2.4 Moving between representations

The two representations of filter responses can be made identical by a suitable choice of bins or textons. For example, an equally spaced bin representation can be converted into an identical texton representation by placing a texton at the centre of every bin (see figure 6). In the algorithm implemented here, the textons are generated by clustering and do not coincide with the bin centres. Hence, the two representations are not identical in this case.

It is possible to go the other way round as well. Every texton representation can be converted into an identical bin representation. In this case, the bins will be irregularly shaped and will correspond to the Voronoi polytopes obtained by forming the Voronoi diagram of the texton sites. Thus, clustering to get textons can be thought of as an adaptive binning method and a histogram of texton frequencies can be equated to a bin count of filter responses. In essence, the comparisons made in section 3 can be thought of as a comparison between two different texton dictionaries. However, it should be noted that, in general, not every bin representation can be converted to an equivalent texton representation in which there is a one-to-one correspondence between bins and textons; this is the case, for example, if there are more textons than bins, with certain textons being grouped together to form a particular bin.

Fig. 6. Texton and bin equivalence in two dimensions: (a) Every texton representation can be converted into an equivalent bin representation where the bins are the Voronoi polygons. (b) Conversely, an equally spaced bin representation can be converted into an identical texton representation by placing a texton at the centre of each bin (though not every bin representation has an equivalent bijective texton representation). A similar equivalence applies in N dimensions.

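As a small illustration of this correspondence (under the stated assumption of equally spaced bins over a known range), placing a "texton" at the centre of every bin and labelling each response with its nearest texton reproduces ordinary binning. The two-dimensional toy example below is purely hypothetical.

```python
import numpy as np
from itertools import product
from scipy.spatial.distance import cdist

def bin_centres_as_textons(n_bins, lo, hi, dim=2):
    """One texton per equally spaced bin; nearest-texton labelling of a point
    is then identical to assigning it to its bin."""
    edges = np.linspace(lo, hi, n_bins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    return np.array(list(product(centres, repeat=dim)))   # n_bins ** dim textons

textons = bin_centres_as_textons(n_bins=5, lo=-1.0, hi=1.0)
points = np.random.uniform(-1.0, 1.0, size=(10, 2))
labels = cdist(points, textons).argmin(axis=1)             # same as each point's bin index
```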

3 Classification by Distribution Comparison

In this section the effect of the representation on classification performance is investigated. Given a set of models characterising the 61 material classes, the task is to classify a novel (test) image as one of these textures. This proceeds as follows: the filter response distribution is computed for the test image, and both types of representation (texton and bin) are then determined. In either case, the closest model image, in terms of the χ² statistic, is found and the novel image is declared to belong to the model's texture class.
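The nearest neighbour step for the dense (texton) histograms can be sketched as below; the small epsilon guarding empty bins is an implementation assumption.

```python
import numpy as np

def chi_square(p, q, eps=1e-10):
    """Chi-square statistic between two normalised histograms."""
    return np.sum((p - q) ** 2 / (p + q + eps))

def classify_nn_chi2(test_hist, model_hists, model_classes):
    """Find the closest model under chi-square and return its texture class."""
    dists = [chi_square(test_hist, m) for m in model_hists]
    return model_classes[int(np.argmin(dists))]
```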

3.1 Experimental setup and results

Classification is carried out on all 61 texture classes for both representations. We consider the case where every image in the training set, for each texture class, is used to generate a model. Thus, there are 46 models per texture, for each of the 61 texture classes. The classification performance is measured by the proportion of test images which are correctly classified as the right texture.

For the texton based representation, figure 7 plots the classification accuracy versus the number of textons in the dictionary when classifying all 2806 test images. The best results are obtained when K = 40 textons are learnt per texture (resulting in a dictionary of S = 61 × 40 = 2440 textons), at which point the highest classification accuracy is achieved.

[Fig. 7. Classification accuracy versus the number of textons in the dictionary.]

Fig. 8. The variation in classification performance (classification accuracy, %) with the number of bins per dimension used during quantization: the best classification results are obtained when the filter responses are quantized into 5 bins per dimension.

For the bin representation, the number and location of the bins are, in general, important parameters. However, it turns out that in this case excellent results are obtained using equally spaced bins. Figure 8 plots the classification accuracy for the test set versus the number of bins used in the quantization process. The classifier achieves a maximum accuracy of 96.54% when the filter responses are quantised into 5 bins per dimension. Increasing the number of bins decreases the performance, indicating that the distribution is being over-fitted and that noise is being learnt as well. The classification accuracy also decreases when the number of bins is reduced, as the binning then becomes too crude.

Both representations give very similar classification results. Of course, this is not surprising in the light of the fact that the two can be made identical. In this particular instance, however, the texton representation slightly outperforms the bin representation as the bins are always equally sized while the textons are learnt adaptively from the given data.

4 Bayesian Classification

Given that texton frequencies and histogram binning are equivalent ways of representing the PDF of filter responses, it is now possible to calculate the class conditional probability of obtaining a particular filter response using a texton representation. This setting of a texton representation in a Bayesian paradigm effectively lets us compare, in this section, the classification scheme of Konishi and Yuille [7] with the texton based distribution comparison classifiers of Cula and Dana [3], Leung and Malik [8] and Varma and Zisserman [14].

The Bayesian classifier of [7] is also divided into a learning stage and a classification stage. In the learning stage, class priors and empirical filter response probabilities are learnt from the training data. Once again, we emphasise that, to take into account the variation due to changing viewpoint and illumination, a number of models will be used for each texture class, rather than just learning a single model per texture class as is done by [7,11]. In the classification stage, Bayes' theorem is invoked to calculate the posterior probability of a given filter response from a novel image belonging to a particular class.

4.1 A Bayesian classifier using the texton representation

The class conditional joint PDF of filter responses is obtained directly from the histogram of texton frequencies for the various images in the training set (of section 2). It is straightforward to implement the Bayesian classifier given this information. For a particular model we want to estimate the posterior

$$P(M_{ij} \mid I) = P(M_{ij} \mid \{F(x)\}) \qquad (1)$$
where {F(x)} is the collection of filter responses generated from the novel image I to be classified, M_ij is a particular model corresponding to training image number i taken from texture class number j, and the equality sign arises from the assumption that all the available information in the image has been extracted by the filtering process. The image I is classified as the texture j for which P(M_ij | {F(x)}) is maximised over all models (i.e. all ij). Using Bayes' rule,

$$P(M_{ij} \mid \{F(x)\}) \propto P(\{F(x)\} \mid M_{ij})\, P(M_{ij}) \qquad (2)$$
where P({F(x)} | M_ij) is the likelihood of the model M_ij, and P(M_ij) the prior on model M_ij. Since all models are equally likely in our case, the MAP class selection reduces to a Maximum Likelihood estimate, i.e.

$$\hat{M} = \arg\max_{M_{ij}} P(\{F(x)\} \mid M_{ij}) \qquad (3)$$

If the filter responses are assumed spatially independent, then the probability of all the filter responses from the novel image belonging to the model M_ij is obtained by taking the product of the probabilities of the individual filter responses, i.e.

$$P(\{F(x)\} \mid M_{ij}) = \prod_{x} P(F(x) \mid M_{ij}) \qquad (4)$$

At this point, we take logs and focus on the log-likelihood to clarify the subsequent discussion,

$$\hat{M} = \arg\max_{M_{ij}} \prod_{x} P(F(x) \mid M_{ij}) = \arg\max_{M_{ij}} \sum_{x} \log P(F(x) \mid M_{ij}) \qquad (5)$$
$$= \arg\max_{M_{ij}} \sum_{k=1}^{S} N_k \log P(T_k \mid M_{ij}) \qquad (6)$$

where N_k is the number of times the k-th texton occurs in the novel image labelling and P(T_k | M_ij) is the probability of occurrence of the k-th texton in the model M_ij. This equality follows because the log likelihood essentially amounts to counting the number of times each filter response falls in a particular bin; but this is exactly what is recorded in the texton frequency histogram.
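Equation (6) can be evaluated directly from texton counts. The sketch below assumes the model histograms have already been smoothed, as described in section 4.4, so that no probability is exactly zero; the function names are illustrative.

```python
import numpy as np

def log_likelihood(novel_counts, model_pdf):
    """sum_k N_k log P(T_k | M_ij): novel_counts are the texton counts N_k of
    the novel image, model_pdf is the (smoothed) texton histogram of the model."""
    return np.sum(novel_counts * np.log(model_pdf))

def classify_bayes(novel_counts, model_pdfs, model_classes):
    """Maximum likelihood (uniform-prior MAP) selection over all models."""
    scores = [log_likelihood(novel_counts, m) for m in model_pdfs]
    return model_classes[int(np.argmax(scores))]
```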

4.2 Equivalence with Minimum Cross Entropy and KL Divergence

It will now be shown that Bayesian classification in this form can be viewed as nearest neighbour distribution comparison classification where the distance between two distributions is measured using Cross Entropy or the KL divergence. Cross Entropy is an information theoretic measure of the average number of bits required to encode symbols from a given alphabet using another alphabet. It is minimised if the same alphabet is used throughout. Based on this observation, we can use Cross Entropy to determine how similar a given distribution is to another distribution. The Cross Entropy between two discrete distributions, p and q, is given by H(p, q) = −Σ_k p_k log q_k, and the smaller this value the better the match between the two distributions. The KL divergence is a related measure of the similarity between two distributions and is defined as D(p ‖ q) = Σ_k p_k log(p_k / q_k).
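Both measures are simple to evaluate on normalised texton histograms; the handling of zero probabilities below (masking and a small epsilon) is an implementation assumption.

```python
import numpy as np

def cross_entropy(p, q, eps=1e-12):
    """H(p, q) = -sum_k p_k log q_k; smaller means a better match."""
    return -np.sum(p * np.log(q + eps))

def kl_divergence(p, q, eps=1e-12):
    """D(p || q) = sum_k p_k log(p_k / q_k), with 0 log 0 taken as 0."""
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / (q[mask] + eps)))
```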

Nearest neighbour matching using Cross Entropy or KL divergence can be shown to be equivalent to the Bayesian formulation [1,15,16] by noting that, from (6),

$$\hat{M} = \arg\max_{M_{ij}} \sum_{k=1}^{S} N_k \log P(T_k \mid M_{ij}) = \arg\max_{M_{ij}} \sum_{k=1}^{S} \frac{N_k}{\sum_{k'} N_{k'}} \log P(T_k \mid M_{ij}) \qquad (7)$$
$$= \arg\min_{M_{ij}} \; -\sum_{k=1}^{S} P(T_k \mid M_I) \log P(T_k \mid M_{ij}) \qquad (8)$$
$$= \arg\min_{M_{ij}} \; H(p, q)$$

where p_k = P(T_k | M_I) = N_k / Σ_k N_k is the probability of occurrence of the k-th texton in the novel image labelling M_I and q_k = P(T_k | M_ij), i.e. the normalised texton frequencies of the novel image and model image respectively.

Therefore, a Bayesian classifier which assumes uniform priors and the spatial independence of filter responses will give equivalent results to a nearest neighbour classifier based on the Cross Entropy between the texton distributions of the novel and model images.

The result can be straightforwardly extended to KL divergence by adding the constant Σ_{k=1}^S p_k log p_k to (8) and taking it inside the argmin operation, as it does not depend on M_ij. Thus,

$$\hat{M} = \arg\min_{M_{ij}} \left( \sum_{k=1}^{S} p_k \log p_k - \sum_{k=1}^{S} p_k \log q_k \right) \qquad (9)$$
$$= \arg\min_{M_{ij}} \sum_{k=1}^{S} p_k \log \frac{p_k}{q_k} \qquad (10)$$
$$= \arg\min_{M_{ij}} \; D(p \,\|\, q)$$

4.3 Relationship with χ²

The equivalence can be taken further by noting that KL divergence and its extensions are bounded above by the χ² statistic and that the bound is attained when the two probability distributions being compared are very similar [6,13,15].

In more detail, since the KL divergence is not a metric, it is often extended [13] to what is called the capacitory discriminant, given by

$$C(p, q) = D\!\left(p \,\Big\|\, \tfrac{1}{2}(p+q)\right) + D\!\left(q \,\Big\|\, \tfrac{1}{2}(p+q)\right) \qquad (11)$$

where C(p, q) is now a metric [6]. Furthermore, Topsoe [13] has shown that C is bounded quite tightly according to the relation

$$\tfrac{1}{2}\,\chi^2(p, q) \;\le\; C(p, q) \;\le\; \ln 2 \cdot \chi^2(p, q) \qquad (12)$$
where

$$\chi^2(p, q) = \sum_{k=1}^{S} \frac{(p_k - q_k)^2}{p_k + q_k} \qquad (13)$$

Also, by taking the Taylor series expansion of C, it is straightforward to show that

$$\lim_{p \to q} C(p, q) = \tfrac{1}{2}\,\chi^2(p, q) \qquad (14)$$

up to second order in p and q.
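As a quick numerical sanity check of the bounds in (12), the sketch below reuses the kl_divergence helper from the earlier sketch on random test distributions; the tolerance and the number of trials are arbitrary choices.

```python
import numpy as np

def capacitory_discriminant(p, q):
    """C(p, q) = D(p || (p+q)/2) + D(q || (p+q)/2)."""
    m = 0.5 * (p + q)
    return kl_divergence(p, m) + kl_divergence(q, m)

def chi_square_stat(p, q):
    return np.sum((p - q) ** 2 / (p + q))

rng = np.random.default_rng(0)
for _ in range(5):
    p = rng.random(61); p /= p.sum()
    q = rng.random(61); q /= q.sum()
    chi2, C = chi_square_stat(p, q), capacitory_discriminant(p, q)
    # check (1/2) chi^2 <= C <= ln 2 * chi^2, allowing a small numerical tolerance
    assert 0.5 * chi2 <= C + 1e-9 and C <= np.log(2) * chi2 + 1e-9
```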

Most of these results are by now standard. The significance for us is that, by combining these equivalence results with the texton-bin correspondence shown in section 2, it becomes immediately clear that the Bayesian method of [7] is equivalent to the texton distribution comparison schemes of [3,8,14].

4.4 Bayesian Classification Experiments

The equivalence results from the previous subsections allow us to cast a Bayesian classifier as a nearest neighbour classifier with KL divergence as the distance measure. Thus, the experimental setup remains exactly the same as in section 3, except that the distance measure now being used is KL divergence rather than the χ² statistic. Figure 9 plots the classification accuracy versus the size of the texton dictionary for the Bayesian classifier. The best results are for a dictionary of size S = 1830 textons (i.e. K = 30 textons learnt from each texture class), for which a classification accuracy of 97.46% is achieved.

Fig. 9. Classification accuracy versus the size of the texton dictionary using a Bayesian classifier: in each case, there are 2806 models and 2806 test images. The best classification result obtained is 97.46% using a dictionary of 1830 textons. The results of the texton based χ² classifier (i.e. figure 7) are also plotted for the sake of comparison.

A technical point about implementing probability products is that if the model histograms are determined directly from the texton frequencies of the training images then the classification accuracy of the Bayesian classifier is an astonishingly low 1.06%, i.e. almost all the test images are classified incorrectly. This is because most novel images contain a certain percentage of pixels (filter responses) which do not occur in the correct class models in the training set. This may be a result of an inadequate amount of training data, or due to outliers or noise. As a consequence, the posterior probability of these pixels is zero and hence, when all the pixel probabilities are multiplied together, the image posterior probability also turns out to be zero.

This is a standard pitfall in histogram based density estimation and three solutions are generally proposed: (a) smoothing the histogram, (b) assigning small nonzero values to each of the empty bins, and (c) discarding a certain percentage of the least occurring filter responses in the belief that they are primarily noise and outliers.

A combination of (b) and (c) is used here: instead of starting the bin occupancy count from 0, it is started from 1 to ensure that no bin is ever empty. Some of the least frequently occurring bins are also discarded. These modifications lead to the classification performance plotted in figure 9.
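A model histogram prepared along the lines of (b) and (c) might look as follows; the fraction of bins discarded and the exact discarding rule are assumptions, not the paper's settings.

```python
import numpy as np

def smoothed_model(counts, drop_fraction=0.05):
    """Build a model PDF from raw texton/bin counts: discard the rarest occupied
    bins as likely noise, then start every count from 1 so that no bin is empty
    and no probability is exactly zero."""
    counts = np.asarray(counts, dtype=float).copy()
    occupied = counts[counts > 0]
    if occupied.size:
        cutoff = np.quantile(occupied, drop_fraction)
        counts[counts < cutoff] = 0.0          # (c) drop the least occurring responses
    counts += 1.0                              # (b) no bin is ever empty
    return counts / counts.sum()
```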

4.5 Comparisons

On the basis of the experimental results, there is very little to choose between the Bayesian and distribution comparison classifiers using the texton representation. This is to be expected as, in essence, the test being performed is a comparison of χ² and KL divergence as distance measures. Nevertheless, while both classifiers attain rates of over 97%, there are different theoretical pros and cons associated with the two approaches.

There can be no doubt that theoretically, when the underlying distributions are known perfectly, the Bayesian classifier minimises the classification error. However, when we do not have enough data to accurately determine the true distribution, or only have noisy approximations which suffer from the inherent quantisation effects of either clustering or histogram binning, then the superiority of the Bayesian classifier is much less clear. This is evident from the ability of χ² to cope in practice with empty bins (even though it is theoretically incapable of doing so) and noisy measurements, while the Bayesian classifier completely collapses unless the probability distribution is modified. Furthermore, as can be seen from figure 9, the Bayesian classifier is often marginally surpassed by the nearest neighbour χ² classifier.

There is also the question of the Naive Bayes assumption that the observed data are independent. However, in our case, this cannot be considered a major drawback of the Bayesian classifier as compared to χ² because (a) χ² also makes the very same assumption in its derivation, (b) the experimental results indicate that extremely good classification results are obtained even when the assumption is violated (Schmid [11] notes that this holds true even for other texture datasets), and (c) if violating the assumption were leading to large errors then this could be tackled by randomly sampling filter responses from disjoint regions of the novel image in a bid to decrease their dependence.

Yet, despite their theoretical limitations, both classifiers appear to work extremely well in practice, as is evidenced by the classification results.

5 Conclusions

In conclusion, we have shown that the texton representation of the PDF of filter responses is equivalent to an adaptive bin representation and, conversely, that every regularly partitioned bin representation can be converted into an equivalent texton representation. This has enabled us to use texton densities for texture classification in the Bayesian framework, which itself, under certain circumstances, can be viewed as another measure of distance in a distribution comparison classification scheme. In doing so, we have brought together two seemingly unrelated schools of thought in texture classification: one based on the Bayesian paradigm and the other on textons and their first order statistics.

Acknowledgements

We would like to thank Alan Yuille for discussions on Bayesian methods and Phil Torr for discussions on density estimation. Financial support was provided by a University of Oxford Graduate Scholarship in Engineering, an ORS award and the EC project CogViSys.

References

[1] F. R. Bach and M. I. Jordan. Tree-dependent component analysis. In Uncertainty in Artificial Intelligence: Proceedings of the Eighteenth Conference (UAI-2002), 2002.

[2] M. J. Chantler, G. McGunnigle, and J. Wu. Surface rotation invariant texture classification using photometric stereo and surface magnitude spectra. In Proceedings of the 11th British Machine Vision Conference, Bristol, pages 486-495, 2000.

[3] O. G. Cula and K. J. Dana. Compact representation of bidirectional texture functions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1041-1047, December 2001.

[4] K. J. Dana and S. Nayar. Histogram model for 3d textures. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 618-624, 1998.

[5] K. J. Dana, B. van Ginneken, S. K. Nayar, and J. J. Koenderink. Reflectance and texture of real world surfaces. ACM Transactions on Graphics, 18(1):1-34, 1999.

[6] D. M. Endres and J. E. Schindelin. A new metric for probability distributions. IEEE Transactions on Information Theory, 49(7):1857-1860, July 2003.

[7] S. Konishi and A. L. Yuille. Statistical cues for domain specific image segmentation with performance analysis. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 125-132, 2000.

[8] T. Leung and J. Malik. Representing and recognizing the visual appearance of materials using three-dimensional textons. International Journal of Computer Vision, 43(1):29-44, June 2001.

[9] W. Press, B. Flannery, S. Teukolsky, and W. Vetterling. Numerical Recipes in C. Cambridge University Press, 1988.

[10] Y. Rubner, C. Tomasi, and L. J. Guibas. The earth mover's distance as a metric for image retrieval. International Journal of Computer Vision, 40(2):99-121, 2000.

[11] C. Schmid. Constructing models for content-based image retrieval. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 39-45, 2001.

[12] N. A. Thacker, F. Ahearne, and P. I. Rockett. The Bhattacharya metric as an absolute similarity measure for frequency coded data. Kybernetica, 34(4):363-368, 1997.

[13] F. Topsoe. Some inequalities for information divergence and related measures of discrimination. IEEE Transactions on Information Theory, 46(4):1602-1609, July 2000.

[14] M. Varma and A. Zisserman. Classifying images of materials: Achieving viewpoint and illumination independence. In Proceedings of the 7th European Conference on Computer Vision, Copenhagen, Denmark, volume 3, pages 255-271. Springer-Verlag, May 2002.

[15] N. Vasconcelos and A. Lippman. A unifying view of image similarity. In Proceedings of the International Conference on Pattern Recognition, pages 1038-1041, 2000.

[16] P. Viola. Alignment by Maximization of Mutual Information. AITR 1548, MIT AI Lab, June 1995.

[17] A. Zalesny and L. Van Gool. A compact model for viewpoint dependent texture synthesis. In Proceedings of the European Conference on Computer Vision, LNCS 2018/5, pages 124-143. Springer-Verlag, 2000.
